A Formulation for Minimax Probability
Machine Regression
Thomas Strohmann
Department of Computer Science
University of Colorado, Boulder
[email protected]
Gregory Z. Grudic
Department of Computer Science
University of Colorado, Boulder
[email protected]
Abstract
We formulate the regression problem as one of maximizing the minimum probability, symbolized by Ω, that future predicted outputs of the regression model will be within some ±ε bound of the true regression function. Our formulation is unique in that we obtain a direct estimate of this lower probability bound Ω. The proposed framework, minimax probability machine regression (MPMR), is based on the recently described minimax probability machine classification algorithm [Lanckriet et al.] and uses Mercer kernels to obtain nonlinear regression models. MPMR is tested on both toy and real world data, verifying the accuracy of the Ω bound and the efficacy of the regression models.
1 Introduction
The problem of constructing a regression model can be posed as maximizing the minimum
probability of future predictions being within some bound of the true regression function.
We refer to this regression framework as minimax probability machine regression (MPMR).
For MPMR to be useful in practice, it must make minimal assumptions about the distributions underlying the true regression function, since accurate estimation of these distributions is prohibitive on anything but the most trivial regression problems. As with the minimax probability machine classification (MPMC) framework proposed in [1], we avoid the use of detailed distribution knowledge by obtaining a worst case bound on the probability that the regression model is within some ε > 0 of the true regression function. Our regression formulation closely follows the classification formulation in [1] by making use of the following theorem, due to Marshall and Olkin [2] and extended by Popescu and Bertsimas [3]:
sup_{E[z]=z̄, Cov[z]=Σ_z} Pr{a^T z ≥ b} = 1/(1 + d²),   d² = inf_{a^T z ≥ b} (z − z̄)^T Σ_z^{−1} (z − z̄)    (1)
where a and b are constants, z is a random vector, and the supremum is taken over all distributions having mean z̄ and covariance matrix Σ_z. This theorem assumes linear boundaries; however, as shown in [1], Mercer kernels can be used to obtain nonlinear versions of this theorem, giving one the ability to estimate upper and lower bounds on the probability that points generated from any distribution having mean z̄ and covariance Σ_z will be on one side of a nonlinear boundary. In [1], this formulation is used to construct nonlinear classifiers (MPMC) that maximize the minimum probability of correct classification on future data.
In this paper we exploit the above theorem (1) to build nonlinear regression functions which maximize the minimum probability that future predictions will be within ε of the true regression function. We propose to implement MPMR by using MPMC to construct a classifier that separates two sets of points: the first set is obtained by shifting all of the regression data +ε along the dependent variable axis, and the second set is obtained by shifting all of the regression data −ε along the dependent variable axis. The separating surface (i.e. classification boundary) between these two classes corresponds to a regression surface, which we term the minimax probability machine regression model. The proposed MPMR formulation is unique because it directly computes a bound on the probability that the regression model is within ±ε of the true regression function (see Theorem 1 below).
The theoretical foundations of MPMR are formalized in Section 2. Experimental results on synthetic and real data are given in Section 3, verifying the accuracy of
the minimax probability regression bound and the efficacy of the regression models. Proofs of the two theorems presented in this paper are given in the appendix.
Matlab and C source code for generating MPMR models can be downloaded from
http://www.cs.colorado.edu/~grudic/software.
2 Regression Model
We assume that learning data are generated from some unknown regression function f : ℝ^d → ℝ that has the form:

y = f(x) + ρ    (2)

where x ∈ ℝ^d is generated according to some bounded distribution Λ, y ∈ ℝ, E[ρ] = 0, Var[ρ] = σ², and σ ∈ ℝ is finite. We are given N learning examples Γ = {(x₁, y₁), ..., (x_N, y_N)}, where ∀i ∈ {1, ..., N}, x_i = (x_{i1}, ..., x_{id}) ∈ ℝ^d is generated from the distribution Λ, and y_i ∈ ℝ. The goal of our formulation is two-fold: first, we wish to use Γ to construct an approximation f̂ of f such that, for any x generated from the distribution Λ, we can approximate y using

ŷ = f̂(x)    (3)

The second goal of our formulation is, for any ε ∈ ℝ, ε > 0, to estimate the bound on the minimum probability, symbolized by Ω, that f̂(x) is within ε of y (defined in (2)):

Ω = inf Pr{|ŷ − y| ≤ ε}    (4)

Our proposed formulation of the regression problem is unique because we obtain direct estimates of Ω. Therefore we can estimate the predictive power of a regression function by a bound on the minimum probability that we are within ε of the true regression function. We refer to a regression function that directly estimates (4) as a minimax probability machine regression (MPMR) model.

The proposed MPMR formulation is based on the kernel formulation for minimax probability machine classification (MPMC) presented in [1]. Therefore, the MPMR model has the form:

ŷ = f̂(x) = Σ_{i=1}^{N} β_i K(x_i, x) + b    (5)

where K(x_i, x) = Φ(x_i)^T Φ(x) is a kernel satisfying Mercer's conditions, x_i, ∀i ∈ {1, ..., N}, are obtained from the learning data Γ, and β_i, b ∈ ℝ are outputs of the MPMR learning algorithm.
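As a concrete illustration (not part of the original paper), evaluating the model form (5) is just a kernel expansion. The sketch below assumes the RBF kernel used later in the experiments; the coefficient values in the usage example are arbitrary placeholders, not learned values.

```python
import math

def rbf_kernel(a, b):
    # K(a, b) = exp(-|a - b|^2), the RBF kernel used in the sinc experiment
    return math.exp(-sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def mpmr_predict(x, train_x, beta, b, kernel=rbf_kernel):
    # Equation (5): y_hat = f_hat(x) = sum_i beta_i * K(x_i, x) + b
    return sum(bi * kernel(xi, x) for bi, xi in zip(beta, train_x)) + b
```

For example, with a single training point x₁ = (0), β₁ = 1, and b = 0.5, the prediction at x = (0) is 1 · K(x₁, x) + 0.5 = 1.5.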
2.1 Kernel Based MPM Classification
Before formalizing the MPMR algorithm for calculating β_i and b from the training data Γ, we first describe the MPMC formulation upon which it is based. In [1], the binary classification problem is posed as one of maximizing the probability of correctly classifying future data. Specifically, two sets of points are considered, here symbolized by {u₁, ..., u_{N_u}}, where ∀i ∈ {1, ..., N_u}, u_i ∈ ℝ^m, belonging to the first class, and {v₁, ..., v_{N_v}}, where ∀i ∈ {1, ..., N_v}, v_i ∈ ℝ^m, belonging to the second class. The points u_i are assumed to be generated from a distribution that has mean ū and covariance matrix Σ_u, and correspondingly, the points v_i are assumed to be generated from a distribution that has mean v̄ and covariance matrix Σ_v. For the nonlinear kernel formulation, these points are mapped into a higher dimensional space by Φ : ℝ^m → ℝ^h as follows: u ↦ Φ(u) with corresponding mean and covariance matrix (Φ̄(u), Σ_{Φ(u)}), and v ↦ Φ(v) with corresponding mean and covariance matrix (Φ̄(v), Σ_{Φ(v)}). The binary classifier derived in [1] has the form (c = −1 for the first class and c = +1 for the second):

c = sign( Σ_{i=1}^{N_u+N_v} γ_i K^c(z_i, z) + b_c )    (6)

where K^c(z_i, z) = Φ(z_i)^T Φ(z); z_i = u_i for i = 1, ..., N_u; z_i = v_{i−N_u} for i = N_u + 1, ..., N_u + N_v; and γ = (γ₁, ..., γ_{N_u+N_v}), b_c are obtained by solving the following optimization problem:
min_γ  √( (1/N_u) γ^T K̃_u^T K̃_u γ ) + √( (1/N_v) γ^T K̃_v^T K̃_v γ )   s.t.  γ^T (k̄_u − k̄_v) = 1    (7)

where K̃_u = K_u − 1_{N_u} k̄_u^T; where K̃_v = K_v − 1_{N_v} k̄_v^T; where k̄_u, k̄_v ∈ ℝ^{N_u+N_v} are defined as [k̄_u]_i = (1/N_u) Σ_{j=1}^{N_u} K^c(u_j, z_i) and [k̄_v]_i = (1/N_v) Σ_{j=1}^{N_v} K^c(v_j, z_i); where 1_k is a k-dimensional column vector of ones; where K_u contains the first N_u rows of the Gram matrix K (i.e. a square matrix consisting of the elements K_{ij} = K^c(z_i, z_j)); and finally K_v contains the last N_v rows of the Gram matrix K. Given that γ solves the minimization problem in (7), b_c can be calculated using:

b_c = γ^T k̄_u − κ √( (1/N_u) γ^T K̃_u^T K̃_u γ ) = γ^T k̄_v + κ √( (1/N_v) γ^T K̃_v^T K̃_v γ )    (8)

where

κ = ( √( (1/N_u) γ^T K̃_u^T K̃_u γ ) + √( (1/N_v) γ^T K̃_v^T K̃_v γ ) )^{−1}    (9)
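To make the role of κ concrete, here is a small sketch (illustrative only, not the paper's released code) that evaluates (9) given γ and the centered kernel matrices K̃_u and K̃_v, each represented as a list of rows:

```python
import math

def kappa(gamma, ktilde_u, ktilde_v, n_u, n_v):
    # Equation (9): kappa = ( sqrt(g^T Ku~^T Ku~ g / Nu)
    #                       + sqrt(g^T Kv~^T Kv~ g / Nv) )^-1
    def quad(rows):
        # g^T M^T M g = ||M g||^2 for a matrix M given as a list of rows
        return sum(sum(m * g for m, g in zip(row, gamma)) ** 2 for row in rows)
    return 1.0 / (math.sqrt(quad(ktilde_u) / n_u) + math.sqrt(quad(ktilde_v) / n_v))
```

With toy inputs γ = (1), K̃_u = K̃_v = [2] and N_u = N_v = 1, each square-root term is 2, giving κ = 1/4.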
One significant advantage of this framework for binary classification is that, given perfect knowledge of the statistics ū, Σ_u, v̄, Σ_v, the maximum probability of incorrect classification is bounded by 1 − α, where α can be directly calculated from κ as follows:

α = κ² / (1 + κ²)    (10)

This result is used below to formulate a lower bound on the probability that the approximated regression function is within ε of the true regression function.
2.2 Kernel Based MPM Regression
In order to use the above MPMC formulation for our proposed MPMR framework, we first take the original learning data Γ and create two classes of points u_i ∈ ℝ^{d+1} and v_i ∈ ℝ^{d+1}, for i = 1, ..., N, as follows:

u_i = (y_i + ε, x_{i1}, x_{i2}, ..., x_{id})
v_i = (y_i − ε, x_{i1}, x_{i2}, ..., x_{id})    (11)

Given these two sets of points, we obtain γ by minimizing equation (7). Then, from (6), the MPM classification boundary between points u_i and v_i is given by

Σ_{i=1}^{2N} γ_i K^c(z_i, z) + b_c = 0    (12)

We interpret this classification boundary as a regression surface because it acts to separate points which are ε above the y values in the learning set Γ, and ε below the y values in Γ. Furthermore, given any point x = (x₁, ..., x_d) generated from the distribution Λ, calculating the regression model output ŷ in (5) involves finding a ŷ that solves equation (12), where z = (ŷ, x₁, ..., x_d), and, recalling from above, z_i = u_i for i = 1, ..., N, z_i = v_{i−N} for i = N + 1, ..., 2N (note that N_u = N_v = N). If K^c(z_i, z) is nonlinear, solving (12) for ŷ is in general a nonlinear single variable optimization problem, which can be solved using a root finding algorithm (for example the Newton-Raphson method outlined in [4]). However, below we present a specific form of nonlinear K^c(z_i, z) that allows (12) to be solved analytically.
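The construction of the two shifted classes in (11), and the root-finding view of (12), can be sketched as follows. This is illustrative code, not the paper's implementation; a simple bisection search stands in for the Newton-Raphson method mentioned above.

```python
def make_shifted_classes(xs, ys, eps):
    # Equation (11): shift each training output +eps / -eps to form classes u and v.
    u = [[y + eps] + list(x) for x, y in zip(xs, ys)]
    v = [[y - eps] + list(x) for x, y in zip(xs, ys)]
    return u, v

def solve_crossing(g, lo, hi, tol=1e-10):
    # Solve g(y) = 0 by bisection, where g(y) is the left-hand side of (12)
    # viewed as a function of the first coordinate of z; assumes g changes
    # sign on [lo, hi].
    g_lo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        g_mid = g(mid)
        if abs(g_mid) < tol:
            return mid
        if (g_lo < 0) == (g_mid < 0):
            lo, g_lo = mid, g_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a linear crossing function the solver recovers the root directly; for a nonlinear kernel, g would evaluate the sum in (12) at z = (y, x₁, ..., x_d).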
It is interesting to note that the above formulation of a regression model can be derived
using any binary classification algorithm, and is not limited to the MPMC algorithm.
Specifically, if a binary classifier is built to separate any two sets of points (11), then finding a crossing point ŷ where the classifier separates these classes for some input x = (x₁, ..., x_d) is equivalent to finding the output of the regression model for input x. It would be interesting to explore the efficacy of various classification algorithms for this type of regression model formulation. However, as formalized in Theorem
1 below, using the MPM framework gives us one clear advantage over other techniques. We
now state the main result of this paper:
Theorem 1: For any x = (x₁, ..., x_d) generated according to the distribution Λ, assume that there exists only one ŷ that solves equation (12). Assume also perfect knowledge of the statistics ū, Σ_u, v̄, Σ_v. Then, the minimum probability that ŷ is within ε of y (as defined in (2)) is given by:

Ω = inf Pr{|ŷ − y| ≤ ε} = κ² / (1 + κ²)    (13)

where κ is defined in (9).
Proof: See Appendix.
Therefore, from the above theorem, the MPMC framework directly computes the lower bound on the probability that the regression model is within ε of the function that generated the learning data Γ (i.e. the true regression function). However, one key requirement of the theorem is perfect knowledge of the statistics ū, Σ_u, v̄, Σ_v. In the actual implementation of MPMR, these statistics are estimated from Γ, and it is an open question (which we address in Section 3) as to how accurately Ω can be estimated from real data.
In order to avoid the use of nonlinear optimization techniques to solve (12) for ŷ, we restrict the form of the kernel K^c(z_i, z) to the following:

K^c(z_i, z) = y′_i ỹ + K(x_i, x)    (14)

where K(x_i, x) = Φ(x_i)^T Φ(x) is a kernel satisfying Mercer's conditions; where z = (ỹ, x₁, ..., x_d); where z_i = u_i, y′_i = y_i + ε for i = 1, ..., N; and where z_i = v_{i−N}, y′_i = y_{i−N} − ε for i = N + 1, ..., 2N. Given this restriction on K^c(z_i, z), we now state our final theorem, which uses the following lemma:
Lemma 1: k̄_u − k̄_v = 2ε y′

Proof: See Appendix.
Theorem 2: Assume that (14) is true. Then all of the following are true:

Part 1: Equation (12) has an analytical solution as defined in (5), where

β_i = −2ε (γ_i + γ_{i+N}),    b = −2ε b_c    (15)

Part 2: K̃_u = K̃_v
Table 1: Results over 100 random trials for sinc data: mean squared errors and the standard deviation; MPTDε: fraction of test points that are within ε = 0.2 of y; predicted Ω: predicted probability that the model is within ε = 0.2 of y.

|                    | σ² = 0, mean (std) | σ² = 0.5, mean (std) | σ² = 1.0, mean (std) |
|--------------------|--------------------|----------------------|----------------------|
| mean squared error | 0.0 (0.0)          | 0.0524 (0.0386)      | 0.2592 (0.3118)      |
| MPTDε              | 1.0 (0.0)          | 0.6888 (0.1133)      | 0.3870 (0.1110)      |
| predicted Ω        | 1.0 (0.0)          | 0.1610 (0.0229)      | 0.0463 (0.0071)      |
Part 3: The problem of finding an optimal γ in (7) is reduced to solving the following linear least squares problem for t ∈ ℝ^{2N−1}:

min_t ‖ K̃_u (γ_o + F t) ‖²₂

where γ = γ_o + F t, γ_o = (k̄_u − k̄_v) / ‖k̄_u − k̄_v‖², and F ∈ ℝ^{2N×(2N−1)} is an orthogonal matrix whose columns span the subspace of vectors orthogonal to k̄_u − k̄_v.

Proof: See Appendix.

Therefore, Theorem 2 establishes that the MPMR formulation proposed in this paper has a closed form analytical solution, and its computational complexity is equivalent to solving a linear system of 2N − 1 equations in 2N − 1 unknowns.
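Theorem 2, Part 1 can be read as a direct mapping from the classifier solution (γ, b_c) to the regression coefficients in (5). The following sketch (illustrative, not the paper's released code) applies (15):

```python
def classifier_to_regressor(gamma, b_c, eps):
    # Equation (15): beta_i = -2*eps*(gamma_i + gamma_{i+N}), b = -2*eps*b_c,
    # where gamma has length 2N (first N entries for class u, last N for class v).
    n = len(gamma) // 2
    beta = [-2.0 * eps * (gamma[i] + gamma[i + n]) for i in range(n)]
    return beta, -2.0 * eps * b_c
```

For example, with γ = (1, 2, 3, 4), b_c = 0.5 and ε = 0.5, the mapping yields β = (−4, −6) and b = −0.5.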
3 Experimental Results
For complete implementation details of the MPMR algorithm used in the following experiments, see the Matlab and C source code available at http://www.cs.colorado.edu/~grudic/software.

Toy Sinc Data: Our toy example uses the noisy sinc function y_i = sin(πx_i)/(πx_i) + ρ_i, i = 1, ..., N, where ρ_i is drawn from a Gaussian distribution with mean 0 and variance σ² [5]. We use an RBF kernel K(a, b) = exp(−|a − b|²) and N = 100 training examples.
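A generator for this toy data set might look as follows; note that the input range [−3, 3] is an assumption taken from the plots in Figure 1, not stated in the text:

```python
import math
import random

def noisy_sinc_sample(n, sigma2, seed=0):
    # y_i = sin(pi x_i) / (pi x_i) + rho_i, with rho_i ~ N(0, sigma2)
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.uniform(-3.0, 3.0)
        sinc = math.sin(math.pi * x) / (math.pi * x) if x != 0.0 else 1.0
        xs.append(x)
        ys.append(sinc + rng.gauss(0.0, math.sqrt(sigma2)))
    return xs, ys
```

Setting sigma2 = 0 reproduces the noise-free case of Table 1's first column.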
Figure 1 (a), (b), and (c), and Table 1 show the results for different variances σ² and a constant value of ε = 0.2. Figure 1 (d) and (e) illustrate how different tube sizes 0.05 ≤ ε ≤ 2 affect the mean squared error (on 100 random test points), the predicted Ω, and the measured percentage of test data within ε (here called MPTDε) of the regression model. Each experiment consists of 100 random trials. The average mean squared error in (e) has a small deviation (0.0453) over all tested ε and was always within the range 0.19 to 0.35. This indicates that the accuracy of the regression model is essentially independent of the choice of ε. Also note that the mean predicted Ω is a lower bound on the mean MPTDε. The tightness of this lower bound varies for different amounts of noise (Table 1) and different choices of ε (Figure 1 d).

Boston Housing Data: We test MPMR on the widely used Boston housing regression data available from the UCI repository. Following the experiments done in [5], we use the RBF kernel K(a, b) = exp(−‖a − b‖²/(2σ²)), where 2σ² = 0.3·d and d = 13 for this data set. No attempt was made to pick optimal values for σ using cross validation. The Boston housing data contains 506 examples, which we randomly divided into N = 481 training examples and 25 testing examples for each test run. 100 such random tests were run for each of ε = 0.1, 1.0, 2.0, ..., 10.0. Results are reported in Table 2 for: 1) average mean squared errors and the standard deviation; 2) MPTDε: the fraction of test points that are within ε of y, and the standard deviation; 3) predicted Ω: the predicted probability that the model is within ε of y, and the standard deviation. We first note that the results compare
favorably to those reported for other state-of-the-art regression algorithms [5], even though no attempt was made to optimize for ε.

[Figure 1: Experimental results on toy sinc data. Panels (a), (b), (c): learning examples, the true regression function, and the MPMR function ± ε, for ε = 0.2 and σ² = 0, 0.5, 1.0 respectively. Panel (d): MPTDε and predicted Ω (± std) w.r.t. ε, σ² = 1.0. Panel (e): mean squared error on test data (100 runs) w.r.t. ε, σ² = 1.0.]

Table 2: Results over 100 random trials for the Boston Housing Data for ε = 0.1, 1.0, 2.0, ..., 10.0: mean squared errors and the standard deviation; MPTDε: fraction of test points that are within ε of y and the standard deviation; predicted Ω: predicted probability that the model is within ε of y and standard deviation.

| ε    | MSE  | STD | MPTDε | STD  | Ω     | STD    |
|------|------|-----|-------|------|-------|--------|
| 0.1  | 9.9  | 5.9 | 0.05  | 0.04 | 0.002 | 0.0005 |
| 1.0  | 10.5 | 9.5 | 0.33  | 0.09 | 0.19  | 0.03   |
| 2.0  | 10.9 | 8.6 | 0.58  | 0.09 | 0.51  | 0.06   |
| 3.0  | 9.5  | 5.9 | 0.76  | 0.08 | 0.69  | 0.05   |
| 4.0  | 10.3 | 8.1 | 0.84  | 0.07 | 0.80  | 0.04   |
| 5.0  | 9.9  | 8.0 | 0.89  | 0.06 | 0.87  | 0.03   |
| 6.0  | 10.5 | 8.5 | 0.93  | 0.05 | 0.90  | 0.01   |
| 7.0  | 10.5 | 8.1 | 0.95  | 0.04 | 0.92  | 0.01   |
| 8.0  | 9.2  | 5.3 | 0.97  | 0.03 | 0.94  | 0.009  |
| 9.0  | 10.1 | 6.9 | 0.97  | 0.03 | 0.95  | 0.009  |
| 10.0 | 10.6 | 7.6 | 0.98  | 0.02 | 0.96  | 0.008  |

Second, as with the toy data, the errors are relatively
independent of ε. Finally, we note that the mean predicted Ω is lower than the measured average MPTDε, thus validating that the MPMR algorithm does indeed predict an effective lower bound on the probability that the regression model is within ε of the true regression function.
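The measured quantity MPTDε used throughout the experiments is just the empirical fraction of test points falling inside the ε-tube; as an illustrative sketch:

```python
def mptd(y_true, y_pred, eps):
    # MPTD_eps: fraction of test points whose prediction is within eps of y;
    # the mean predicted Omega should lower-bound the mean of this quantity.
    inside = sum(1 for y, yh in zip(y_true, y_pred) if abs(yh - y) <= eps)
    return inside / len(y_true)
```

Comparing this measured fraction against the predicted Ω is how Tables 1 and 2 validate the bound.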
4 Discussion and Conclusion
We formalize the regression problem as one of maximizing the minimum probability, Ω, that the regression model is within ±ε of the true regression function. By estimating mean and covariance matrix statistics of the regression data (and making no other assumptions on the underlying true regression function distributions), the proposed minimax probability machine regression (MPMR) algorithm obtains a direct estimate of Ω. Two theorems are presented proving that, given perfect knowledge of the mean and covariance statistics of the true regression function, the proposed MPMR algorithm directly computes the exact lower probability bound Ω. We are unaware of any other nonlinear regression model formulation that has this property.

Experimental results are given showing: 1) the regression models produced are competitive with existing state of the art models; 2) the mean squared error on test data is relatively independent of the choice of ε; and 3) estimating mean and covariance statistics directly from the learning data gives accurate estimates of the lower probability bound Ω that the regression model is within ±ε of the true regression function, thus supporting our theoretical results.
Future research will focus on a theoretical analysis of the conditions under which the accuracy of the regression model is independent of ε. Also, we are analyzing the rate, as a function of sample size, at which estimates of the lower probability bound Ω converge to the true value. Finally, the proposed minimax probability machine regression framework is a new formulation of the regression problem, and therefore its properties can only be fully understood through extensive experimentation. We are currently applying MPMR to a wide variety of regression problems and have made Matlab / C source code available (http://www.cs.colorado.edu/~grudic/software) for others to do the same.
References
[1] G. R. G. Lanckriet, L. E. Ghaoui, C. Bhattacharyya, and M. I. Jordan. Minimax probability machine. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances
in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[2] A. W. Marshall and I. Olkin. Multivariate Chebyshev inequalities. Annals of Mathematical Statistics, 31(4):1001-1014, 1960.
[3] I. Popescu and D. Bertsimas. Optimal inequalities in probability theory: A convex optimization approach. Technical Report TM62, INSEAD, Dept. Math. O.R., Cambridge,
Mass, 2001.
[4] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes
in C. Cambridge University Press, New York NY, 1988.
[5] Bernhard Schölkopf, Peter L. Bartlett, Alex J. Smola, and Robert Williamson. Shrinking the tube: A new support vector regression algorithm. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, Cambridge, MA, 1999. The MIT Press.
Appendix: Proofs of Theorems 1 and 2
Proof of Theorem 1:
Consider any point x = (x₁, ..., x_d) generated according to the distribution Λ. This point will have a corresponding y (defined in (2)), and from (10), the probability that z₊ε = (y + ε, x₁, ..., x_d) will be classified correctly (as belonging to class u) by (6) is α. Furthermore, the classification boundary occurs uniquely at the point where z = (ŷ, x₁, ..., x_d), where, from the assumptions, ŷ is the unique solution to (12). Similarly, for the same point y, the probability that z₋ε = (y − ε, x₁, ..., x_d) will be classified correctly (as belonging to class v) by (6) is also α, and the classification boundary occurs uniquely at the point where z = (ŷ, x₁, ..., x_d). Therefore, both z₊ε = (y + ε, x₁, ..., x_d) and z₋ε = (y − ε, x₁, ..., x_d) are, with probability α, on the correct side of the regression surface, defined by z = (ŷ, x₁, ..., x_d). Therefore, z₊ε differs from z by at most +ε in the first dimension, and z₋ε differs from z by at most −ε in the first dimension. Thus, the minimum bound on the probability that |y − ŷ| ≤ ε is α (defined in (10)), which has the same form as Ω. This completes the proof.
Proof of Lemma 1:

[k̄_u]_i − [k̄_v]_i = (1/N)(Σ_{l=1}^{N} K^c(u_l, z_i)) − (1/N)(Σ_{l=1}^{N} K^c(v_l, z_i))
= (1/N) Σ_{l=1}^{N} [ ((y_l + ε) y′_i + K(x_l, x_i)) − ((y_l − ε) y′_i + K(x_l, x_i)) ]
= (1/N) N 2ε y′_i = 2ε y′_i
Proof of Theorem 2:

Part 1: Plugging (14) into (12), we get:

0 = Σ_{i=1}^{2N} γ_i [ y′_i ỹ + K(x_i, x) ] + b_c
0 = Σ_{i=1}^{N} γ_i [ (y_i + ε) ỹ + K(x_i, x) ] + Σ_{i=1}^{N} γ_{i+N} [ (y_i − ε) ỹ + K(x_i, x) ] + b_c
0 = Σ_{i=1}^{N} { (γ_i + γ_{i+N}) [ y_i ỹ + K(x_i, x) ] + (γ_i − γ_{i+N}) ε ỹ } + b_c

When we solve analytically for ỹ, giving (5), the coefficients β_i and the offset b have a denominator that looks like:

−Σ_{i=1}^{N} [ (γ_i + γ_{i+N}) y_i + (γ_i − γ_{i+N}) ε ] = −γ^T y′

Applying Lemma 1 and (7) we obtain: 1 = γ^T (k̄_u − k̄_v) = γ^T 2ε y′, so that the denominator of β_i and b equals −γ^T y′ = −1/(2ε).
Part 2: The values z_i are defined as: z₁ = u₁, ..., z_N = u_N, z_{N+1} = v₁ = u₁ − (2ε, 0, ..., 0)^T, ..., z_{2N} = v_N = u_N − (2ε, 0, ..., 0)^T. Since K̃_u = K_u − 1_N k̄_u^T we have the following term for a single matrix entry:

[K̃_u]_{i,j} = K^c(u_i, z_j) − (1/N) Σ_{l=1}^{N} K^c(u_l, z_j),   i = 1, ..., N,  j = 1, ..., 2N

Similarly the matrix entries for K̃_v look like:

[K̃_v]_{i,j} = K^c(v_i, z_j) − (1/N) Σ_{l=1}^{N} K^c(v_l, z_j),   i = 1, ..., N,  j = 1, ..., 2N

We show that these entries are the same for all i and j:

[K̃_u]_{i,j} = K^c(v_i + (2ε, 0, ..., 0)^T, z_j) − (1/N) Σ_{l=1}^{N} K^c(v_l + (2ε, 0, ..., 0)^T, z_j)
= K^c(v_i, z_j) + 2ε[z_j]₁ − (1/N) ( Σ_{l=1}^{N} K^c(v_l, z_j) + N 2ε[z_j]₁ )
= K^c(v_i, z_j) + 2ε[z_j]₁ − (1/N) Σ_{l=1}^{N} K^c(v_l, z_j) − 2ε[z_j]₁
= K^c(v_i, z_j) − (1/N) Σ_{l=1}^{N} K^c(v_l, z_j) = [K̃_v]_{i,j}

This completes the proof of Part 2.
Part 3: From Part 2 we know that K̃_u = K̃_v. Therefore, the minimization problem (7) collapses to min_γ ‖K̃_u γ‖²₂ (the constant factor involving N can be removed). Formulating this minimization with the use of the orthogonal matrix F and an initial vector γ_o, this becomes (see [1]): min_t ‖K̃_u (γ_o + F t)‖²₂ with respect to t ∈ ℝ^{2N−1}. We set h(t) = ‖K̃_u (γ_o + F t)‖²₂. Therefore, in order to find the minimum we must solve 2N − 1 linear equations: 0 = ∂h(t)/∂t_i, i = 1, ..., 2N − 1. This completes the proof of Part 3.
Self Supervised Boosting
Max Welling, Richard S. Zemel, and Geoffrey E. Hinton
Department of Computer Science
University of Toronto
10 King?s College Road
Toronto, M5S 3G5 Canada
Abstract
Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised
learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance
between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and
negative examples. Finally, a coefficient is learned which determines the
importance of this feature relative to ones already in the pool. Negative
examples only need to be generated once to learn each new feature. The
validity of the approach is demonstrated on binary digits and continuous
synthetic data.
1 Introduction
While researchers have developed and successfully applied a myriad of boosting algorithms
for classification and regression problems, boosting for density estimation has received relatively scant attention. Yet incremental, stage-wise fitting is an attractive model for density
estimation. One can imagine that the initial features, or weak learners, could model the
rough outlines of the data density, and more detailed carving of the density landscape could
occur on each successive round. Ideally, the algorithm would achieve automatic model selection, determining the requisite number of weak learners on its own. It has proven difficult
to formulate an objective for such a system, under which the weights on examples, and the
objective for training a weak learner at each round have a natural gradient-descent interpretation as in standard boosting algorithms [10] [7]. In this paper we propose an algorithm
that provides some progress towards this goal.
A key idea in our algorithm is that unsupervised learning can be converted into supervised
learning by using the model's imperfect current estimate of the data to generate negative
examples. A form of this idea was previously exploited in the contrastive divergence algorithm [4]. We take the idea a step further here by training a weak learner to discriminate
between the positive examples from the original data and the negative examples generated
by sampling from the current density estimate. This new weak learner minimizes a simple
additive logistic loss function [2].
Our algorithm obtains an important advantage over sampling-based, unsupervised methods
that learn features in parallel. Parallel-update methods require a new sample after each
iteration of parameter changes, in order to reflect the current model's estimate of the data
density. We improve on this by using one sample per boosting round, to fit one weak
learner. The justification for this approach comes from the proposal that, for stagewise
additive models, boosting can be considered as gradient-descent in function space, so the
new learner can simply optimize its inner product with the gradient of the objective in
function space [3].
Unlike other attempts at "unsupervised boosting" [9], where at each round a new component distribution is added to a mixture model, our approach adds features in the
log-domain and as such learns a product model.
Our algorithm incrementally constructs random fields from examples. As such, it bears
some relation to maximum entropy models, which are popular in natural language processing [8]. In these applications, the features are typically not learned; instead the algorithms greedily select at each round the most informative feature from a large set of
pre-enumerated features.
2 The Model
be a vector of random variables taking values in some finite
domain . The probability of is defined by assigning it an energy,
, which is
converted into a probability using the Boltzmann distribution,
"#$ % &'(
!
(1)
We furthermore assume that the energy is additive. More explicitly, it will be modelled as
a weighted sum of features,
)
)-,.)
)
% &' #
&0/21
(2)
)
)+*
)54
4
, )
where 3 *
are the) weights, 3 76 the features and each feature may depend on its own
set of parameters 1 .
The model described above is very similar to an "additive random field", otherwise known
as a "maximum entropy model". The key difference is that we allow each feature to be
flexible through its dependence on the parameters $\theta_i$.
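To make Eqns. 1 and 2 concrete, the following sketch (our own illustration, with invented toy features and weights that are not from the paper) evaluates such an additive-energy model over a small binary state space by brute-force enumeration:

```python
import itertools
import math

def energy(x, weights, features):
    # Eq. 2: E(x) = sum_i alpha_i * f_i(x)
    return sum(a * f(x) for a, f in zip(weights, features))

def boltzmann(weights, features, dim):
    # Eq. 1: p(x) = exp(-E(x)) / Z, with Z summed over all 2^dim states.
    states = list(itertools.product([-1, +1], repeat=dim))
    unnorm = [math.exp(-energy(s, weights, features)) for s in states]
    Z = sum(unnorm)
    return {s: u / Z for s, u in zip(states, unnorm)}

# Two hand-picked illustrative features over three binary (+1/-1) units.
features = [lambda x: -x[0] * x[1],   # favors agreement of units 0 and 1
            lambda x: -x[2]]          # favors unit 2 being +1
p = boltzmann([1.0, 0.5], features, dim=3)
assert abs(sum(p.values()) - 1.0) < 1e-12   # normalized
assert p[(1, 1, 1)] > p[(1, -1, -1)]        # lower energy -> higher probability
```

Brute-force enumeration is only feasible for tiny domains; the exponential size of the state space for realistic models is exactly why the sampling approximations discussed below are needed.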
Learning in random fields may proceed by performing gradient ascent on the log-likelihood:

$$\frac{\partial L}{\partial \lambda} = -\frac{1}{N}\sum_{n=1}^{N} \frac{\partial E(\mathbf{x}_n)}{\partial \lambda} + \sum_{\mathbf{x}} p(\mathbf{x})\, \frac{\partial E(\mathbf{x})}{\partial \lambda} \qquad (3)$$

where $\mathbf{x}_n$ is a data-vector and $\lambda$ is some arbitrary parameter that we want to learn. This
equation makes explicit the main philosophy behind learning in random fields: the energy
of states "occupied" by data is lowered (weighted by $1/N$) while the energy of all states is
raised (weighted by $p(\mathbf{x})$). Since there are usually an exponential number of states in the
system, the second term is often approximated by a sample from $p(\mathbf{x})$. To reduce sampling
noise a relatively large sample is necessary and moreover, it must be drawn each time we
compute gradients. These considerations make learning in random fields generally very
inefficient.
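The sample-based approximation of Eqn. 3 can be sketched in a few lines; the toy energy and names below are our own and not from the paper:

```python
def grad_loglik(dE_dlam, data, model_samples):
    # Eq. 3: dL/dlam = -<dE/dlam>_data + <dE/dlam>_model, with the
    # (intractable) model expectation replaced by an average over samples.
    data_term = sum(dE_dlam(x) for x in data) / len(data)
    model_term = sum(dE_dlam(x) for x in model_samples) / len(model_samples)
    return -data_term + model_term

# Toy energy E(x) = lam * x, so dE/dlam = x.  The model's samples sit at
# higher x than the data, so the gradient is positive: raising lam raises
# the energy of (and thus suppresses) the states the model over-populates.
g = grad_loglik(lambda x: x, data=[0.0, 0.0], model_samples=[1.0, 1.0])
assert abs(g - 1.0) < 1e-12
```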
Iterative scaling methods have been developed for models that do not include adaptive
feature parameters $\theta_i$ but instead train only the coefficients $\alpha_i$ [8]. These methods
make more efficient use of the samples than gradient ascent, but they only minimize a
loose bound on the cost function and their terminal convergence can be slow.
3 An Algorithm for Self Supervised Boosting
Boosting algorithms typically implement three phases: a feature (or weak learner) is trained,
the relative weight of this feature with respect to the other features already in the pool is
determined, and finally the data vectors are reweighted. In the following we will discuss a
similar strategy in an unsupervised setting.
3.1 Finding New Features
In [7], boosting is reinterpreted as functional gradient descent on a loss function. Using the
log-likelihood as a negative loss function this idea can be used to find features for additive
random field models. Consider a change in the energy by adding an infinitesimal multiple
of a feature. The optimal feature is then the one that provides the maximal increase in
log-likelihood, i.e. the feature that maximizes the second term of
$$L(E + \epsilon f) \approx L(E) + \epsilon\, \frac{\partial L}{\partial \epsilon} \qquad (4)$$

Using Eqn. 3 with $\lambda = \epsilon$ we rewrite the second term as,
$$\frac{\partial L}{\partial \epsilon} = -\frac{1}{N}\sum_{n=1}^{N} f(\mathbf{x}_n) + \sum_{\mathbf{x}} p(\mathbf{x})\, f(\mathbf{x}) \qquad (5)$$
where $p(\mathbf{x})$ is our current estimate of the data distribution. In order to maximize this
derivative, the feature should therefore be small at the data and large at all other states. It is
however important to realize that the norm of the feature must be bounded, since otherwise
the derivative can be made arbitrarily large by simply increasing the length of $f$.
Because the total number of possible states of a model is often exponentially large, the
second term of Eqn. 5 must be approximated using samples from $p(\mathbf{x})$,

$$\frac{\partial L}{\partial \epsilon} \approx -\frac{1}{N}\sum_{n=1}^{N} f(\mathbf{x}_n) + \frac{1}{M}\sum_{m=1}^{M} f(\tilde{\mathbf{x}}_m) \qquad (6)$$
These samples, or "negative examples" $\tilde{\mathbf{x}}_m$, inform us about the states that are likely under
the current model. Intuitively, because the model is imperfect, we would like to move its
density estimate away from these samples and towards the actual data. By labelling the
data with $+1$ and the negative examples with $-1$, we can map this to a supervised
problem where a new feature is a classifier. Since a good classifier is negative at the data
and positive at the negative examples (so we can use its sign to discriminate them), adding
its output to the total energy will lower the energy at states where there are data and raise it
at states where there are negative examples. The main difference with supervised boosting
is that the negative examples change at every round.
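The reduction to classification can be sketched as follows. This is our own illustration, using a one-dimensional linear weak learner fit by logistic loss rather than any particular learner from the paper:

```python
import math
import random

def fit_feature(data, negatives, steps=500, lr=0.1):
    """Fit a 1-D linear weak learner g(x) = w*x + b to be negative on data
    (label -1) and positive on negative examples (label +1) under logistic
    loss; adding g to the energy then lowers the energy at the data."""
    w, b = 0.0, 0.0
    pts = [(x, -1.0) for x in data] + [(x, +1.0) for x in negatives]
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in pts:
            s = 1.0 / (1.0 + math.exp(y * (w * x + b)))  # d/dg log(1+e^{-y g})
            gw += -y * s * x
            gb += -y * s
        w -= lr * gw / len(pts)
        b -= lr * gb / len(pts)
    return w, b

random.seed(0)
data = [random.gauss(-2.0, 0.5) for _ in range(50)]       # "true" data
negatives = [random.gauss(+2.0, 0.5) for _ in range(50)]  # model samples
w, b = fit_feature(data, negatives)
assert w * (-2.0) + b < 0 < w * (+2.0) + b  # negative at data, positive at negatives
```

At the next round the model's samples change, so a fresh classification problem is solved; this is the sense in which the negative examples differ from a fixed supervised training set.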
3.2 Weighting the Data
It has been observed [6] that boosting algorithms can outperform classification algorithms
that maximize log-likelihood. This has motivated us to use the logistic loss function from
the boosting literature for training new features.
$$\mathrm{Loss} = \sum_i \log\left(1 + e^{\,y_i E(\mathbf{x}_i)}\right) \qquad (7)$$

where $i$ runs over data ($y_i = +1$) and negative examples ($y_i = -1$). Perturbing the
energy in the loss function by adding an infinitesimal multiple of a new feature, $E \rightarrow E + \epsilon f$,
and computing the derivative w.r.t. $\epsilon$ we derive the following cost function
for adding a new feature,
$$C = -\frac{1}{N}\sum_{n=1}^{N} w_n\, f(\mathbf{x}_n) + \frac{1}{M}\sum_{m=1}^{M} w_m\, f(\tilde{\mathbf{x}}_m), \qquad w_i = \frac{1}{1 + e^{-y_i E(\mathbf{x}_i)}} \qquad (8)$$

The main difference with Eqn. 6 is the weights on data and negative examples, that give
poorly "classified" examples (data with very high energy and negative examples with very
low energy) a stronger vote in changes to the energy surface. The extra weights (which are
bounded between [0,1]) will incur a certain bias w.r.t. the maximum likelihood solution.
However, it is expected that the extra effort on "hard cases" will cause the algorithm to
converge faster to good density models.
It is important to realize that the loss function Eqn. 7 is a valid cost function only when the
negative examples are fixed. The reason is that after a change of the energy surface, the
negative examples are no longer a representative sample from the Boltzmann distribution in
Eqn. 1. However, as long as we re-sample the negative examples after every change in the
energy we may use Eqn. 8 as an objective to decide what feature to add to the energy, i.e.
we may consider it as the derivative, $\partial \tilde{L}/\partial \epsilon = C$, of some (possibly unknown) weighted log-likelihood.
By analogy, we can interpret $1/(1 + e^{E(\mathbf{x})})$ as the probability that a certain
state is occupied by a data-vector and consequently $-E(\mathbf{x})$ as the "margin". Note that
the introduction of the weights has given meaning to the "height" of the energy surface, in
contrast to the Boltzmann distribution for which only relative energy differences count. In
fact, as we will further explain in the next section, the height of the energy will be chosen
such that the total weight on data is equal to the total weight on the negative examples.
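A small sketch of the weighting scheme implied by Eqns. 7 and 8 (our own illustration, with invented toy energies): data points are weighted by the logistic of their energy and negative examples by the logistic of minus their energy, so poorly modelled examples on either side get a stronger vote:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def example_weights(E_data, E_neg):
    """Weights of Eq. 8: data points get w = sigma(E), large when their
    energy is (wrongly) high; negative examples get w = sigma(-E), large
    when their energy is (wrongly) low."""
    return [sigmoid(e) for e in E_data], [sigmoid(-e) for e in E_neg]

wd, wn = example_weights(E_data=[-3.0, 2.0], E_neg=[-2.0, 3.0])
assert wd[1] > wd[0]   # high-energy data point -> bigger weight
assert wn[0] > wn[1]   # low-energy negative example -> bigger weight
assert all(0.0 < w < 1.0 for w in wd + wn)
```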
3.3 Adding the New Feature to the Pool
According to the functional gradient interpretation, the new feature computed as described
above represents the infinitesimal change in energy that maximally increases the (weighted)
log-likelihood. Consistent with that interpretation we will determine $\alpha$ via a line search in
the direction of this "gradient". In fact, we will propose a slightly more general change in
energy given by,

$$E(\mathbf{x}) \rightarrow E(\mathbf{x}) + \alpha\, f(\mathbf{x}) + \beta \qquad (9)$$
As mentioned in the previous section, the constant $\beta$ will have no effect on the Boltzmann
distribution in Eqn. 1. However, it does influence the relative total weight on data versus
negative examples. Using the interpretation of Eqn. 8 as $\partial\tilde{L}/\partial\epsilon$ it is not hard to
see that the derivatives of $\tilde{L}$ w.r.t. $\alpha$ and $\beta$ are given by,

$$\frac{\partial \tilde{L}}{\partial \alpha} = -\frac{1}{N}\sum_{n=1}^{N} w_n\, f(\mathbf{x}_n) + \frac{1}{M}\sum_{m=1}^{M} w_m\, f(\tilde{\mathbf{x}}_m) \qquad (10)$$

$$\frac{\partial \tilde{L}}{\partial \beta} = -\frac{1}{N}\sum_{n=1}^{N} w_n + \frac{1}{M}\sum_{m=1}^{M} w_m \qquad (11)$$

where the weights $w$ are now evaluated at the perturbed energy $E + \alpha f + \beta$.
Therefore, at a stationary point of $\tilde{L}$ w.r.t. $\beta$ the total weight on data and negative examples precisely balances out.
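Since Eqn. 11 is monotone in $\beta$, the stationary point can be found by Newton updates. The following sketch (our own, with invented toy energies) finds the offset at which the average weight on data equals that on negative examples:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def balance_offset(E_data, E_neg, iters=50):
    """Solve dL~/dbeta = 0 (Eq. 11): find the constant beta added to the
    energy for which the mean logistic weight on data equals the mean
    logistic weight on negative examples."""
    beta = 0.0
    for _ in range(iters):
        wd = [sigmoid(e + beta) for e in E_data]     # data weights sigma(E+beta)
        wn = [sigmoid(-(e + beta)) for e in E_neg]   # negative-example weights
        g = -sum(wd) / len(wd) + sum(wn) / len(wn)   # Eq. 11
        h = (-sum(w * (1 - w) for w in wd) / len(wd)
             - sum(w * (1 - w) for w in wn) / len(wn))  # dg/dbeta (always < 0)
        beta -= g / h                                 # Newton step
        if abs(g) < 1e-12:
            break
    return beta

beta = balance_offset([0.5, 1.5], [-1.0, 0.0])
wd = [sigmoid(e + beta) for e in [0.5, 1.5]]
wn = [sigmoid(-(e + beta)) for e in [-1.0, 0.0]]
assert abs(sum(wd) / 2 - sum(wn) / 2) < 1e-8   # weights balance at the root
```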
When iteratively updating $\alpha$ we not only change the weights $w$ but also the Boltzmann
distribution, which makes the negative examples no longer representative of the current

[1] Since the feature $f$ does not depend on $\beta$, it is easy to compute the second derivative of $\tilde{L}$ w.r.t. $\beta$ and we can do Newton updates to compute the stationary point.
[Figure 1 appears here: % Classification Error (0 to 3) plotted against boosting round (0 to 600).]
Figure 1: (a, left) Training error (lower curves) and test error (higher curves) for the
weighted boosting algorithm (solid curves) and the un-weighted algorithm (dashed curves).
(b, right) Features found by the learning algorithm.
estimated data distribution. To correct for this we include importance weights $r_m$ on the
negative examples that are all $1$ at $\alpha = 0$. It is very easy to update these weights
from iteration to iteration using $r_m \leftarrow r_m\, e^{-\Delta\alpha\, f(\tilde{\mathbf{x}}_m)}$ and renormalizing. It is well
known that in high dimensions the effective sample size of the weighted sample can rapidly
become too small to be useful. We therefore monitor the effective sample size, given by
$M_{\mathrm{eff}} = \left(\sum_m r_m\right)^2 / \sum_m r_m^2$, where the sum runs over the negative examples only. If it drops below a threshold we have two choices. We can obtain a new set of negative
examples from the updated Boltzmann distribution, reset the importance weights to $1$ and resume fitting $\alpha$. Alternatively, we simply accept the current value of $\alpha$ and proceed to the next round of boosting.
Because we initialize $\alpha = 0$ in the fitting procedure, the latter approach underestimates
the importance of this particular feature, which is not a problem since a similar feature can
be added in the next round.
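The effective-sample-size monitor described above is simple to compute; a minimal sketch (our own, with invented toy weights):

```python
def effective_sample_size(r):
    """M_eff = (sum r)^2 / sum r^2 for importance weights r on the negative
    examples; equals M for uniform weights and degrades as they skew."""
    s = sum(r)
    return s * s / sum(w * w for w in r)

uniform = [1.0] * 100
skewed = [1.0] * 99 + [100.0]   # one dominant weight
assert abs(effective_sample_size(uniform) - 100.0) < 1e-9
assert effective_sample_size(skewed) < 5.0
```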
4 A Binary Example: The Generalized RBM
We propose a simple extension of the "restricted Boltzmann machine" (RBM) with $(+1,-1)$-units [1] as a model for binary data. Each feature is parametrized by weights $\mathbf{w}_i$ and a
bias $b_i$:

$$f_i(\mathbf{x}) = -\log\left(e^{\,\mathbf{w}_i^T \mathbf{x} + b_i} + e^{-(\mathbf{w}_i^T \mathbf{x} + b_i)}\right) \qquad (12)$$

where the RBM is obtained by setting all $\alpha_i = 1$. One can sample from the summed
energy model using straightforward Gibbs sampling, where every visible unit is sampled
given all the others. Alternatively, one can design a much faster mixing Markov chain by
introducing hidden variables and sampling all hidden units independently given the visible
units and vice versa. Unfortunately, by including the coefficients $\alpha_i$ this trick is no longer
valid. But an approximate Markov chain can be used,

$$P(h_i = \pm 1 \mid \mathbf{x}) \propto e^{\pm \alpha_i\,(\mathbf{w}_i^T \mathbf{x} + b_i)} \qquad (13)$$
This approximate Gibbs sampling thus involves sampling from an RBM with scaled
weights and biases,

$$\tilde{\mathbf{w}}_i = \alpha_i\,\mathbf{w}_i, \qquad \tilde{b}_i = \alpha_i\, b_i \qquad (14)$$
When using the above Markov chain, we will not wait until it has reached equilibrium but
initialize it at the data-vectors and use it for a fixed number of steps, as is done in contrastive
divergence learning [4].
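One step of the approximate chain of Eqns. 13-14 can be sketched as follows (our own illustration with invented toy weights; for $\pm 1$ units the conditionals reduce to logistic probabilities of twice the scaled field):

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def approx_gibbs_step(x, W, b, alpha, rng):
    # Eqns. 13-14: hidden +/-1 units sampled independently from the
    # alpha-scaled inputs, then each visible +/-1 unit from the field of
    # the sampled hidden units.
    h = []
    for wi, bi, ai in zip(W, b, alpha):
        field = sum(w * v for w, v in zip(wi, x)) + bi
        h.append(1 if rng.random() < sigmoid(2.0 * ai * field) else -1)
    x_new = []
    for d in range(len(x)):
        field = sum(ai * hi * wi[d] for wi, hi, ai in zip(W, h, alpha))
        x_new.append(1 if rng.random() < sigmoid(2.0 * field) else -1)
    return x_new

rng = random.Random(0)
W = [[2.0, 2.0, 2.0]]   # one feature tying all three visible units together
b = [0.0]
alpha = [0.5]
x = [1, 1, 1]
for _ in range(5):      # a few brief steps, initialized at a "data" vector
    x = approx_gibbs_step(x, W, b, alpha, rng)
assert all(v in (-1, 1) for v in x)
```

As in the text, such a chain would be initialized at the data-vectors and run for only a fixed, small number of steps rather than to equilibrium.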
When we fit a new feature we need to make sure its norm is controlled. The appropriate
value depends on the number of dimensions in the problem; in the experiment described
below we bounded the norm of the vector $(\mathbf{w}, b)$ to be no larger than a fixed constant. The updates
are thus given by $\mathbf{w} \rightarrow \mathbf{w} + \Delta\mathbf{w}$ and $b \rightarrow b + \Delta b$ with,

$$\Delta\mathbf{w} \propto \sum_{n} u_n\, \mathbf{x}_n \tanh(\mathbf{w}^T\mathbf{x}_n + b) \;-\; \sum_{m} u_m\, \tilde{\mathbf{x}}_m \tanh(\mathbf{w}^T\tilde{\mathbf{x}}_m + b) \qquad (15)$$

and the analogous expression for $\Delta b$ (with the $\tanh$ factors alone), followed by rescaling $(\mathbf{w}, b)$ back onto the norm constraint whenever it is violated. The weights $u$ are proportional to the logistic weights of Eqn. 8. The coefficients $\alpha_i$ are determined
using the procedure of Section 3.3.
To test whether we can learn good models of (fairly) high-dimensional, real-world data, we
used the real-valued digits from the "br" set on the CEDAR cdrom. We learned
completely separate models on binarized "2"s and "3"s. The first block of data cases of each
class was used for training while the remaining digits of each class were
used for testing. A minimum effective sample size was set for fitting the coefficients $\alpha_i$, and separate sets of negative examples were used to fit the feature parameters and the $\alpha_i$. After a new feature was added, the total energies of all "2"s and "3"s were computed under both models.
The energies of the training data (under both models) were used as two-dimensional features to compute a separation boundary using logistic regression, which was subsequently
applied to the test data to compute the total misclassification. In Figure 1a we show the total
error on both training data and test data as a function of the number of features in the model.
For comparison we also plot the training and test error for the un-weighted version of the
algorithm. The classification error of the weighted algorithm after the first rounds of boosting is low, and only very gradually increases over further rounds of boosting.
This compares favourably with logistic regression, k-nearest neighbors (with the optimal number of neighbors), and a parallel-trained RBM with hidden
units. The un-weighted learning algorithm converges much more slowly to a good solution, both on training and test data. In Figure 1b
we show a selection of the features learned for both digits.
5 A Continuous Example: The Dimples Model
For continuous data we propose a different form of feature, which we term a dimple because
of its shape in the energy domain. A dimple is a mixture of a narrow Gaussian and a broad
Gaussian, with a common mean:

$$f_i(\mathbf{x}) = -\log\left(\tfrac{1}{2}\,\mathcal{N}(\mathbf{x};\,\mu_i,\,\beta_i^{-1}) + \tfrac{1}{2}\,\mathcal{N}(\mathbf{x};\,\mu_i,\,\beta_0^{-1})\right) \qquad (16)$$

where the mixing proportion is constant and equal, and the variance $\beta_0^{-1}$ of the broad Gaussian is fixed and large. Each round
of the algorithm fits $\mu_i$ and $\beta_i$ for a new learner. A nice property of dimples is that they can
reduce the entropy of an existing distribution by placing the dimple in a region that already
has low energy, but they can also raise the entropy by putting the dimple in a high energy
region [5].
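The dimple energy of Eq. 16 is easy to evaluate directly; a one-dimensional sketch (our own, with an invented value for the fixed broad precision):

```python
import math

def dimple_energy(x, mu, beta, beta0=0.01):
    """f(x) = -log( 0.5*N(x; mu, 1/beta) + 0.5*N(x; mu, 1/beta0) )  (Eq. 16)
    with a narrow Gaussian (inverse variance beta, adapted) and a broad one
    (inverse variance beta0, fixed and small, i.e. large variance)."""
    def normal(x, mu, prec):
        return math.sqrt(prec / (2.0 * math.pi)) * math.exp(-0.5 * prec * (x - mu) ** 2)
    return -math.log(0.5 * normal(x, mu, beta) + 0.5 * normal(x, mu, beta0))

# The dimple is deepest at its mean, and the broad component keeps the
# energy from growing quadratically far away from it.
assert dimple_energy(0.0, mu=0.0, beta=100.0) < dimple_energy(1.0, mu=0.0, beta=100.0) < dimple_energy(5.0, mu=0.0, beta=100.0)
```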
Sampling is again simple if all $\alpha_i = 1$, since in that case we can use a Gibbs chain which
first picks a narrow or broad Gaussian for every feature given the visible variables and then
samples the visible variables from the resulting multivariate Gaussian. For general $\alpha_i$ the
situation is less tractable, but we can use a similar approximation as for the generalized RBM,

$$e^{-\alpha_i f_i(\mathbf{x})} \approx \left(\tfrac{1}{2}\,\mathcal{N}(\mathbf{x};\,\mu_i,\,\beta_i^{-1})\right)^{\alpha_i} + \left(\tfrac{1}{2}\,\mathcal{N}(\mathbf{x};\,\mu_i,\,\beta_0^{-1})\right)^{\alpha_i} \qquad (17)$$

This approximation will be accurate when one Gaussian is dominating the other, i.e., when
the responsibilities are close to zero and one. This is expected to be the case in high-dimensional applications. In the low-dimensional example discussed below we implemented a simple MCMC chain with isotropic, normal proposal density which was initiated
at the data-points and run for a fixed number of steps.
[Figure 2 appears here: four panels (a)-(d); horizontal axes span roughly -40 to 40 and vertical axes -25 to 25.]
Figure 2: (a). Plot of iso-energy contours after
rounds of boosting. The crosses represent the data and the dots the negative examples generated from the model. (b). Three
dimensional plot of the negative energy surface. (c). Contour plot for a mixture of
Gaussians learned using EM. (d). Negative energy surface for the mixture of Gaussians.
"
The type of dimple we used in the experiment below can adapt a common mean ($\mu$) and
the inverse-variance of the narrow Gaussian ($\beta$) in each dimension separately. The update
rules (Eqns. 18 and 19) are given by $\mu \rightarrow \mu + \Delta\mu$ and $\beta \rightarrow \beta + \Delta\beta$, where $\Delta\mu$ and $\Delta\beta$ are gradient steps on the weighted objective of Eqn. 8; the gradients involve the responsibilities of the narrow
and broad Gaussian respectively, and the data and negative examples carry the logistic weights of Section 3.2. Finally, the combination coefficients $\alpha_i$
are computed as described in Section 3.3.
To illustrate the proposed algorithm we fit the dimples model to the two-dimensional data
(crosses) shown in Figure 2a-c. The data were synthetically generated by defining angles
drawn uniformly over a fixed interval and a radius with added standard
normal noise, which were converted to Euclidean coordinates and mirrored and translated to produce the spirals. The first feature is an isotropic Gaussian with the mean and the variance
of the data, while later features were dimples trained in the way described above. Figure 2a
also shows the contours of equal energy after a number of rounds of boosting together with examples (dots) from the model. A 3-dimensional plot of the negative energy surface is shown in
Figure 2b. For comparison, similar plots for a mixture of Gaussians, trained in parallel
with EM, are depicted in Figures 2c and 2d.
The main qualitative difference between the fits in Figures 2a-b (product of dimples) and
2c-d (mixture of Gaussians), is that the first seems to produce smoother energy surfaces,
only creating structure where there is structure in the data. This can be understood by
recalling that the role of the negative examples is precisely to remove "dips" in the energy
surface where there is no data. The philosophy of avoiding structure in the model that is
not dictated by the data is consistent with the ideas behind maximum entropy modelling
[11] and is thought to improve generalization.
6 Discussion
This paper discusses a boosting approach to density estimation, which we formulate as a
sequential approach to training additive random field models. The philosophy is to view
unsupervised learning as a sequence of classification problems where the aim is to discriminate between data-vectors and negative examples generated from the current model. The
sampling step is usually the most time consuming operation, but it is also unavoidable since
it informs the algorithm of the states whose energy is too low. The proposed algorithm uses
just one sample of negative examples to fit a new feature, which is very economical as
compared to most non-sequential algorithms which must generate an entire new sample for
every gradient update.
There are many interesting issues and variations that we have not addressed in this paper.
What is the effect of using approximate, e.g. variational, distributions for the negative examples? Can we
improve the accuracy of the model by fitting the feature parameters and the coefficients
$\alpha_i$ together? Does re-sampling the negative examples more frequently during learning
improve the final model? What is the effect of using different functions to weight the data
and how do the weighting schemes interact with the dimensionality of the problem?
References
[1] Y. Freund and D. Haussler. Unsupervised learning of distributions of binary vectors using
2-layer networks. In Advances in Neural Information Processing Systems, volume 4, pages
912-919, 1992.
[2] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of
boosting. Technical report, Dept. of Statistics, Stanford University, 1998.
[3] J.H. Friedman. Greedy function approximation: A gradient boosting machine. Technical
report, Dept. of Statistics, Stanford University, 1999.
[4] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[5] G.E. Hinton and A. Brown. Spiking Boltzmann machines. In Advances in Neural Information
Processing Systems, volume 12, 2000.
[6] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In
Advances in Neural Information Processing Systems, volume 14, 2002.
[7] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In
Advances in Neural Information Processing Systems, volume 12, 2000.
[8] S. Della Pietra, V.J. Della Pietra, and J.D. Lafferty. Inducing features of random fields. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, 1997.
[9] S. Rosset and E. Segal. Boosting density estimation. In Advances in Neural Information Processing Systems, volume 15 (this volume), 2002.
[10] R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions.
In Computational Learning Theory, pages 80-91, 1998.
[11] S.C. Zhu, Z.N. Wu, and D. Mumford. Minimax entropy principle and its application to texture
modeling. Neural Computation, 9(8):1627-1660, 1997.
Geoffrey Hinton and Sam Roweis
Department of Computer Science, University of Toronto
10 King?s College Road, Toronto, M5S 3G5 Canada
{hinton,roweis}@cs.toronto.edu
Abstract
We describe a probabilistic approach to the task of placing objects, described by high-dimensional vectors or by pairwise dissimilarities, in a
low-dimensional space in a way that preserves neighbor identities. A
Gaussian is centered on each object in the high-dimensional space and
the densities under this Gaussian (or the given dissimilarities) are used
to define a probability distribution over all the potential neighbors of
the object. The aim of the embedding is to approximate this distribution as well as possible when the same operation is performed on the
low-dimensional ?images? of the objects. A natural cost function is a
sum of Kullback-Leibler divergences, one per object, which leads to a
simple gradient for adjusting the positions of the low-dimensional images. Unlike other dimensionality reduction methods, this probabilistic
framework makes it easy to represent each object by a mixture of widely
separated low-dimensional images. This allows ambiguous objects, like
the document count vector for the word "bank", to have versions close to
the images of both "river" and "finance" without forcing the images of
outdoor concepts to be located close to those of corporate concepts.
1 Introduction
Automatic dimensionality reduction is an important "toolkit" operation in machine learning, both as a preprocessing step for other algorithms (e.g. to reduce classifier input size)
and as a goal in itself for visualization, interpolation, compression, etc. There are many
ways to "embed" objects, described by high-dimensional vectors or by pairwise dissimilarities, into a lower-dimensional space. Multidimensional scaling methods[1] preserve
dissimilarities between items, as measured either by Euclidean distance, some nonlinear
squashing of distances, or shortest graph paths as with Isomap[2, 3]. Principal components analysis (PCA) finds a linear projection of the original data which captures as much
variance as possible. Other methods attempt to preserve local geometry (e.g. LLE[4]) or
associate high-dimensional points with a fixed grid of points in the low-dimensional space
(e.g. self-organizing maps[5] or their probabilistic extension GTM[6]). All of these methods, however, require each high-dimensional object to be associated with only a single
location in the low-dimensional space. This makes it difficult to unfold "many-to-one"
mappings in which a single ambiguous object really belongs in several disparate locations
in the low-dimensional space. In this paper we define a new notion of embedding based on
probable neighbors. Our algorithm, Stochastic Neighbor Embedding (SNE) tries to place
the objects in a low-dimensional space so as to optimally preserve neighborhood identity,
and can be naturally extended to allow multiple different low-d images of each object.
2 The basic SNE algorithm
For each object, $i$, and each potential neighbor, $j$, we start by computing the asymmetric
probability, $p_{ij}$, that $i$ would pick $j$ as its neighbor:

$$p_{ij} = \frac{e^{-d_{ij}^2}}{\sum_{k \neq i} e^{-d_{ik}^2}} \qquad (1)$$

The dissimilarities, $d_{ij}^2$, may be given as part of the problem definition (and need not be
symmetric), or they may be computed using the scaled squared Euclidean distance ("affinity") between two high-dimensional points, $\mathbf{x}_i, \mathbf{x}_j$:

$$d_{ij}^2 = \frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma_i^2} \qquad (2)$$

where $\sigma_i$ is either set by hand or (as in some of our experiments) found by a binary search
for the value of $\sigma_i$ that makes the entropy of the distribution over neighbors equal to $\log k$.
Here, $k$ is the effective number of local neighbors or "perplexity" and is chosen by hand.
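The binary search for $\sigma_i$ described above can be sketched as follows (our own implementation of Eqns. 1-2, with an invented toy row of squared distances):

```python
import math

def neighbor_probs(dists2_row, sigma):
    # Eq. 1 with d_ij^2 = ||x_i - x_j||^2 / (2 sigma^2); the self-term is
    # assumed to be excluded from the row upstream.
    e = [math.exp(-d / (2.0 * sigma * sigma)) for d in dists2_row]
    Z = sum(e)
    return [v / Z for v in e]

def sigma_for_perplexity(dists2_row, k, tol=1e-4):
    """Binary search for the sigma_i whose neighbor distribution has
    entropy log k (perplexity k)."""
    target = math.log(k)
    lo, hi = 1e-10, 1e10
    sigma = 1.0
    for _ in range(200):
        sigma = 0.5 * (lo + hi)
        p = neighbor_probs(dists2_row, sigma)
        H = -sum(pi * math.log(pi) for pi in p if pi > 0)
        if abs(H - target) < tol:
            break
        if H > target:      # too many effective neighbors: shrink sigma
            hi = sigma
        else:
            lo = sigma
    return sigma

row = [1.0, 4.0, 9.0, 16.0, 25.0]   # squared distances to 5 potential neighbors
s = sigma_for_perplexity(row, k=2.0)
H = -sum(pi * math.log(pi) for pi in neighbor_probs(row, s))
assert abs(H - math.log(2.0)) < 1e-2   # entropy matches log(perplexity)
```

Entropy increases monotonically with $\sigma_i$, which is what makes the simple bisection valid.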
In the low-dimensional space we also use Gaussian neighborhoods but with a fixed variance
(which we set without loss of generality to be $\frac{1}{2}$) so the induced probability $q_{ij}$ that point
$i$ picks point $j$ as its neighbor is a function of the low-dimensional images $\mathbf{y}_i$ of all the
objects and is given by the expression:

$$q_{ij} = \frac{e^{-\|\mathbf{y}_i - \mathbf{y}_j\|^2}}{\sum_{k \neq i} e^{-\|\mathbf{y}_i - \mathbf{y}_k\|^2}} \qquad (3)$$
achieved by minimizing a cost function which is a sum of Kullback-Leibler divergences
between the original (5 ) and induced ( 06 ) distributions over neighbors for each object:
7
98
8
*'+,
:
@ ? %&% A
;8
0
9<>=
(4)
The dimensionality of the 2 space is chosen by hand (much less than the number of objects).
Notice that making 0 large when is small wastes some of the probability mass in the 0
distribution so there is a cost for modeling a big distance in the high-dimensional space with
a small distance in the low-dimensional space, though it is less than the cost of modeling
a small distance with a big one. In this respect, SNE is an improvement over methods
like LLE [4] or SOM [5] in which widely separated data-points can be ?collapsed? as near
neighbors in the low-dimensional space. The intuition is that while SNE emphasizes local
distances, its cost function cleanly enforces both keeping the images of nearby objects
nearby and keeping the images of widely separated objects relatively far apart.
Differentiating $C$ is tedious because $\mathbf{y}_i$ affects $q_{kj}$ via the normalization term in Eq. 3, but
the result is simple:

$$\frac{\partial C}{\partial \mathbf{y}_i} = 2 \sum_j (\mathbf{y}_i - \mathbf{y}_j)\,(p_{ij} - q_{ij} + p_{ji} - q_{ji}) \qquad (5)$$

which has the nice interpretation of a sum of forces pulling $\mathbf{y}_i$ toward $\mathbf{y}_j$ or pushing it away
depending on whether $j$ is observed to be a neighbor more or less often than desired.
Given the gradient, there are many possible ways to minimize $C$ and we have only
in parallel is inefficient and can get stuck in poor local optima. Adding random jitter that
decreases with time finds much better local optima and is the method we used for the examples in this paper, even though it is still quite slow. We initialize the embedding by putting
all the low-dimensional images in random locations very close to the origin. Several other
minimization methods, including annealing the perplexity, are discussed in sections 5&6.
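The basic loop of Eqns. 3-5 can be sketched in a few lines. This is our own plain-Python illustration with an invented toy $P$, omitting the annealed-jitter schedule described above:

```python
import math
import random

def sne_q(Y):
    # Eq. 3: induced neighbor probabilities q_ij in the embedding.
    n = len(Y)
    Q = []
    for i in range(n):
        e = [0.0 if k == i else
             math.exp(-sum((a - b) ** 2 for a, b in zip(Y[i], Y[k])))
             for k in range(n)]
        Z = sum(e)
        Q.append([v / Z for v in e])
    return Q

def sne_step(Y, P, lr=0.02):
    """One steepest-descent update using the gradient of Eq. 5."""
    n, dim = len(Y), len(Y[0])
    Q = sne_q(Y)
    grads = [[2.0 * sum((Y[i][d] - Y[j][d]) *
                        (P[i][j] - Q[i][j] + P[j][i] - Q[j][i])
                        for j in range(n))
              for d in range(dim)]
             for i in range(n)]
    return [[Y[i][d] - lr * grads[i][d] for d in range(dim)] for i in range(n)]

def sne_cost(P, Q):
    # Eq. 4: sum of KL divergences between the p and q neighbor distributions.
    return sum(p * math.log(p / q)
               for Pi, Qi in zip(P, Q) for p, q in zip(Pi, Qi) if p > 0)

rng = random.Random(1)
# A toy P in which objects 0 and 1 are near neighbors and 2 is farther away.
P = [[0.0, 0.8, 0.2], [0.8, 0.0, 0.2], [0.5, 0.5, 0.0]]
Y = [[rng.uniform(-0.01, 0.01), rng.uniform(-0.01, 0.01)] for _ in range(3)]
c0 = sne_cost(P, sne_q(Y))
for _ in range(300):
    Y = sne_step(Y, P)
c1 = sne_cost(P, sne_q(Y))
assert c1 < c0   # gradient descent reduces the KL cost
```

Starting all images near the origin, as in the text, makes the initial $q$ distribution nearly uniform; the forces of Eq. 5 then expand the layout until the $q$'s track the $p$'s.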
3 Application of SNE to image and document collections
As a graphic illustration of the ability of SNE to model high-dimensional, near-neighbor
relationships using only two dimensions, we ran the algorithm on a collection of bitmaps of
handwritten digits and on a set of word-author counts taken from the scanned proceedings
of NIPS conference papers. Both of these datasets are likely to have intrinsic structure in
many fewer dimensions than their raw dimensionalities: 256 for the handwritten digits and
13679 for the author-word counts.
To begin, we used a set of digit bitmaps from the UPS database[7] with examples from each
of the five classes 0,1,2,3,4. The variance of the Gaussian around each point
in the 256-dimensional raw pixel image space was set to achieve a perplexity of 15 in the
distribution over high-dimensional neighbors. SNE was initialized by putting all the $\mathbf{y}_i$
in random locations very close to the origin and then was trained using gradient descent
with annealed noise. Although SNE was given no information about class labels, it quite
cleanly separates the digit groups as shown in figure 1. Furthermore, within each region of
the low-dimensional space, SNE has arranged the data so that properties like orientation,
skew and stroke-thickness tend to vary smoothly. For the embedding shown, the SNE
cost function in Eq. 4 attains a value (measured in nats) well below the cost obtained with a uniform
distribution across low-dimensional neighbors. We also applied
principal component analysis (PCA)[8] to the same data; the projection onto the first two
principal components does not separate classes nearly as cleanly as SNE because PCA is
much more interested in getting the large separations right which causes it to jumble up
some of the boundaries between similar classes. In this experiment, we used digit classes
that do not have very similar pairs like 3 and 5 or 7 and 9. When there are more classes and
only two available dimensions, SNE does not as cleanly separate very similar pairs.
We have also applied SNE to word-document and word-author matrices calculated from
the OCRed text of NIPS volume 0-12 papers[9]. Figure 2 shows a map locating NIPS authors into two dimensions. Each of the 676 authors who published more than one paper
in NIPS vols. 0-12 is shown by a dot at the position y_i found by SNE; larger red dots
and corresponding last names are authors who published six or more papers in that period.
Distances were computed as the norm of the difference between log aggregate author
word counts, summed across all NIPS papers. Co-authored papers gave fractional counts
evenly to all authors. All words occurring in six or more documents were included, except
for stopwords, giving a vocabulary size of 13649. (The bow toolkit [10] was used for
part of the pre-processing of the data.) The Gaussian variances were set to achieve a
local perplexity of ___ neighbors. SNE seems to have grouped authors by broad NIPS field: generative
models, support vector machines, neuroscience, reinforcement learning and VLSI all have
distinguishable localized regions.
4 A full mixture version of SNE
The clean probabilistic formulation of SNE makes it easy to modify the cost function so
that instead of a single image, each high-dimensional object can have several different
versions of its low-dimensional image. These alternative versions have mixing proportions
that sum to 1. Image-version \alpha of object i has location y_{i\alpha} and mixing proportion \pi_{i\alpha}. The
low-dimensional neighborhood distribution for i is a mixture of the distributions induced
by each of its image-versions across all image-versions of a potential neighbor j:

q_{ij} = \sum_\alpha \pi_{i\alpha} \frac{\sum_\beta \pi_{j\beta} \exp(-\|y_{i\alpha} - y_{j\beta}\|^2)}{\sum_{k \neq i} \sum_\gamma \pi_{k\gamma} \exp(-\|y_{i\alpha} - y_{k\gamma}\|^2)}    (6)
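As a numerical sanity check of the mixture distribution in Eq. 6, it can be computed by brute force; the following sketch is purely illustrative (the array layout, with Y[i, a] holding image-version a of object i and pi[i, a] its mixing proportion, is our own convention, not the authors' code):

```python
import numpy as np

def mixture_q(Y, pi):
    """Eq. 6: mixture neighbor distribution.
    Y: (n, m, d) component locations; pi: (n, m) mixing proportions per object."""
    n, m, d = Y.shape
    flat = Y.reshape(n * m, d)
    D2 = ((flat[:, None] - flat[None, :]) ** 2).sum(-1).reshape(n, m, n, m)
    E = np.exp(-D2)  # e^{-||y_{ia} - y_{kg}||^2} for every pair of versions
    Q = np.zeros((n, n))
    for i in range(n):
        for a in range(m):
            w = pi * E[i, a]             # (n, m): pi[k, g] * kernel from (i, a) to (k, g)
            Z = w.sum() - w[i].sum()     # normalize over all versions of all k != i
            Q[i] += pi[i, a] * w.sum(axis=1) / Z
        Q[i, i] = 0.0                    # an object is never its own neighbor
    return Q
```

Because the mixing proportions of each object sum to 1, every row of Q is again a proper distribution over the other objects, which is easy to verify numerically.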
In this multiple-image model, the derivatives with respect to the image locations y_{i\alpha} are
straightforward; the derivatives w.r.t. the mixing proportions \pi_{j\beta} are most easily expressed
Figure 1: The result of running the SNE algorithm on 256-dimensional grayscale
images of handwritten digits. Pictures of the original data vectors x_i (scans of handwritten
digits) are shown at the location corresponding to their low-dimensional images y_i as found
by SNE. The classes are quite well separated even though SNE had no information about
class labels. Furthermore, within each class, properties like orientation, skew and stroke-thickness
tend to vary smoothly across the space. Not all points are shown: to produce this
display, digits are chosen in random order and are only displayed if a small region of the
display centered on the 2-D location of the digit in the embedding does not overlap any of
the regions for digits that have already been displayed.
(SNE was initialized by putting all the y_i in random locations very close to the origin and then was
trained using batch gradient descent (see Eq. 5) with annealed noise. The learning rate was 0.2. For
the first 3500 iterations, each 2-D point was jittered by adding Gaussian noise with a standard deviation of ___ after each position update. The jitter was then reduced to ___ for a further ___
iterations.)
[Figure 2 content: author surnames plotted at their two-dimensional embedding locations y_i.]
Figure 2: Embedding of NIPS authors into two dimensions. Each of the 676 authors
who published more than one paper in NIPS vols. 0-12 is shown by a dot at the location y_i found by the SNE algorithm. Larger red dots and corresponding last names
are authors who published six or more papers in that period. The inset in upper left
shows a blowup of the crowded boxed central portion of the space. Dissimilarities between authors were computed based on squared Euclidean distance between vectors of
log aggregate author word counts. Co-authored papers gave fractional counts evenly
to all authors. All words occurring in six or more documents were included, except
for stopwords giving a vocabulary size of 13649. The NIPS text data is available at
http://www.cs.toronto.edu/~roweis/data.html.
in terms of q_{i\alpha j\beta}, the probability that version \alpha of i picks version \beta of j:

q_{i\alpha j\beta} = \frac{\pi_{i\alpha} \pi_{j\beta} \exp(-\|y_{i\alpha} - y_{j\beta}\|^2)}{\sum_{k \neq i} \sum_\gamma \pi_{k\gamma} \exp(-\|y_{i\alpha} - y_{k\gamma}\|^2)}    (7)
The effect on q_{ih} of changing the mixing proportion for version \beta of object j is given by

\frac{\partial q_{ih}}{\partial \pi_{j\beta}} = \sum_\alpha \frac{q_{i\alpha j\beta}}{\pi_{j\beta}} \left( \delta_{hj} - \frac{\sum_\gamma q_{i\alpha h\gamma}}{\pi_{i\alpha}} \right)    (8)

where \delta_{hj} = 1 if h = j and 0 otherwise. The effect of changing \pi_{j\beta} on the cost, C, is

\frac{\partial C}{\partial \pi_{j\beta}} = -\sum_i \sum_h \frac{p_{ih}}{q_{ih}} \frac{\partial q_{ih}}{\partial \pi_{j\beta}}    (9)
Rather than optimizing the mixing proportions directly, it is easier to perform unconstrained
optimization on "softmax weights" w_{i\alpha} defined by \pi_{i\alpha} = \exp(w_{i\alpha}) / \sum_\gamma \exp(w_{i\gamma}).
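This softmax reparametrization, and the chain rule that turns a gradient with respect to the mixing proportions into a gradient with respect to the unconstrained weights, can be sketched as follows (the vectorized array layout is our own illustrative assumption):

```python
import numpy as np

def proportions(w):
    """Per-object softmax: pi[i, a] = exp(w[i, a]) / sum_g exp(w[i, g])."""
    e = np.exp(w - w.max(axis=1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=1, keepdims=True)

def grad_w(pi, grad_pi):
    """Chain rule through the softmax:
    dC/dw[i, a] = pi[i, a] * (dC/dpi[i, a] - sum_g pi[i, g] * dC/dpi[i, g])."""
    inner = (pi * grad_pi).sum(axis=1, keepdims=True)
    return pi * (grad_pi - inner)
```

A useful property to check: because the softmax output is invariant to adding a constant to a row of w, each row of the weight gradient sums to zero.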
As a "proof-of-concept", we recently implemented a simplified mixture version in which
every object is represented in the low-dimensional space by exactly two components that
are constrained to have equal mixing proportions. The two components are pulled together
by a force which increases linearly up to a threshold separation. Beyond this threshold
the force remains constant.1 We ran two experiments with this simplified mixture version
of SNE. We took a dataset containing pictures of each of the digits 2,3,4 and added
hybrid digit-pictures that were each constructed by picking new examples of two of
the classes and taking each pixel at random from one of these two "parents". After
minimization, ___ of the hybrids and only ___ of the non-hybrids had significantly different
locations for their two mixture components. Moreover, the mixture components of each
hybrid always lay in the regions of the space devoted to the classes of its two parents and
never in the region devoted to the third class. For this example we used a perplexity of
___ in defining the local neighborhoods, a step size for each position update of ___ times the
gradient, and used a constant jitter of ___. Our very simple mixture version of SNE also
. Our very simple mixture version of SNE also
makes it possible to map a circle onto a line without losing any near neighbor relationships
or introducing any new ones. Points near one "cut point" on the circle can be mapped to a
mixture of two points, one near one end of the line and one near the other end. Obviously,
the location of the cut on the two-dimensional circle gets decided by which pairs of mixture
components split first during the stochastic optimization. For certain optimization parameters that control the ease with which two mixture components can be pulled apart, only
a single cut in the circle is made. For other parameter settings, however, the circle may
fragment into two or more smaller line-segments, each of which is topologically correct
but which may not be linked to each other.
The example with hybrid digits demonstrates that even the most primitive mixture version
of SNE can deal with ambiguous high-dimensional objects that need to be mapped to two
widely separated regions of the low-dimensional space. More work needs to be done before
SNE is efficient enough to cope with large matrices of document-word counts, but it is
the only dimensionality reduction method we know of that promises to treat homonyms
sensibly without going back to the original documents to disambiguate each occurrence of
the homonym.
1
We used a threshold of ___. At threshold the force was ___ nats per unit length. The low-d
space has a natural scale because the variance of the Gaussian used to determine q_{ij} is fixed at 0.5.
5 Practical optimization strategies
Our current method of reducing the SNE cost is to use steepest descent with added jitter
that is slowly reduced. This produces quite good embeddings, which demonstrates that the
SNE cost function is worth minimizing, but it takes several hours to find a good embedding
for just ___ datapoints, so we clearly need a better search algorithm.
The time per iteration could be reduced considerably by ignoring pairs of points for which
all four of p_{ij}, p_{ji}, q_{ij}, q_{ji} are small. Since the matrix of p_{ij} is fixed during the learning, it is
natural to sparsify it by replacing all entries below a certain threshold with zero and renormalizing. Then pairs i, j for which both p_{ij} and p_{ji} are zero can be ignored from gradient
calculations if both q_{ij} and q_{ji} are small. This can in turn be determined in logarithmic
time in the size of the training set by using sophisticated geometric data structures such
as K-D trees, ball-trees and AD-trees, since the q_{ij} depend only on \|y_i - y_j\|. Computational physics has attacked exactly this same complexity when performing multibody
gravitational or electrostatic simulations using, for example, the fast multipole method.
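The sparsification step described above amounts to thresholding the fixed matrix of p_{ij} and renormalizing each row; a minimal sketch (the threshold value is illustrative):

```python
import numpy as np

def sparsify(P, thresh=1e-3):
    """Zero all entries of P below thresh, then renormalize each row
    so that it remains a distribution over neighbors."""
    S = np.where(P >= thresh, P, 0.0)
    rows = S.sum(axis=1, keepdims=True)
    assert np.all(rows > 0), "threshold too aggressive: a row lost all its mass"
    return S / rows
```

After this step, any pair whose two p entries are both zero can be skipped in the gradient whenever the corresponding q entries are also small, which is where the tree-based data structures mentioned above come in.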
In the mixture version of SNE there appears to be an interesting way of avoiding local
optima that does not involve annealing the jitter. Consider two components in the mixture
for an object that are far apart in the low-dimensional space. By raising the mixing proportion of one and lowering the mixing proportion of the other, we can move probability mass
from one part of the space to another without it ever appearing at intermediate locations.
This type of ?probability wormhole? seems like a good way to avoid local optima that arise
because a cluster of low-dimensional points must move through a bad region of the space
in order to reach a better one.
Yet another search method, which we have used with some success on toy problems, is
to provide extra dimensions in the low-dimensional space but to penalize non-zero values
on these dimensions. During the search, SNE will use the extra dimensions to go around
lower-dimensional barriers but as the penalty on using these dimensions is increased, they
will cease to be used, effectively constraining the embedding to the original dimensionality.
6 Discussion and Conclusions
Preliminary experiments show that we can find good optima by first annealing the perplexities (using high jitter) and only reducing the jitter after the final perplexity
has been reached. This raises the question of what SNE is doing when the variance, \sigma^2, of the Gaussian centered on each high-dimensional point is very big so that the distribution across
neighbors is almost uniform. It is clear that in the high variance limit, the contribution of
p_{ij} \log p_{ij}/q_{ij} to the SNE cost function is just as important for distant neighbors as for
close ones. When \sigma^2 is very large, it can be shown that SNE is equivalent to minimizing the
mismatch between squared distances in the two spaces, provided all the squared distances
from an object are first normalized by subtracting off their "antigeometric" mean:
C \propto \sum_i \sum_j \left( \hat{a}_{ij} - \hat{b}_{ij} \right)^2    (10)

\hat{a}_{ij} = \frac{\|x_i - x_j\|^2}{2\sigma^2} + \log\left[ \frac{1}{n-1} \sum_{k \neq i} \exp\left( -\frac{\|x_i - x_k\|^2}{2\sigma^2} \right) \right]    (11)

\hat{b}_{ij} = \|y_i - y_j\|^2 + \log\left[ \frac{1}{n-1} \sum_{k \neq i} \exp\left( -\|y_i - y_k\|^2 \right) \right]    (12)

where n is the number of objects.
This mismatch is very similar to "stress" functions used in nonmetric versions of MDS,
and enables us to understand the large-variance limit of SNE as a particular variant of such
procedures. We are still investigating the relationship to metric MDS and to PCA.
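A quick numerical illustration of the large-variance limit discussed above: as the variance grows, each row of the high-dimensional neighbor distribution approaches the uniform distribution over the other n-1 objects (toy data; the variance values are illustrative):

```python
import numpy as np

def row_probs(X, sigma2):
    """Neighbor distribution p_{ij} with a shared Gaussian variance sigma2."""
    D2 = ((X[:, None] - X[None, :]) ** 2).sum(-1) / (2.0 * sigma2)
    np.fill_diagonal(D2, np.inf)  # an object never picks itself
    E = np.exp(-D2)
    return E / E.sum(axis=1, keepdims=True)
```

With a huge variance every off-diagonal entry is close to 1/(n-1), which is the regime in which the cost reduces to the centered squared-distance mismatch above; with a tiny variance each row concentrates on the single nearest neighbor.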
SNE can also be seen as an interesting special case of Linear Relational Embedding (LRE)
[11]. In LRE the data consists of triples (e.g. Colin has-mother Victoria) and the task
is to predict the third term from the other two. LRE learns an N-dimensional vector for
each object and an NxN-dimensional matrix for each relation. To predict the third term in
a triple, LRE multiplies the vector representing the first term by the matrix representing
the relationship and uses the resulting vector as the mean of a Gaussian. Its predictive
distribution for the third term is then determined by the relative densities of all known
objects under this Gaussian. SNE is just a degenerate version of LRE in which the only
relationship is "near" and the matrix representing this relationship is the identity.
In summary, we have presented a new criterion, Stochastic Neighbor Embedding, for mapping high-dimensional points into a low-dimensional space based on stochastic selection
of similar neighbors. Unlike self-organizing maps, in which the low-dimensional coordinates are fixed to a grid and the high-dimensional ends are free to move, in SNE the
high-dimensional coordinates are fixed to the data and the low-dimensional points move.
Our method can also be applied to arbitrary pairwise dissimilarities between objects if such
are available instead of (or in addition to) high-dimensional observations. The gradient of
the SNE cost function has an appealing "push-pull" property in which the forces acting on
y_i bring it closer to points it is under-selecting and push it further from points it is over-selecting
as its neighbor. We have shown results of applying this algorithm to image and document
collections for which it sensibly placed similar objects nearby in a low-dimensional space
while keeping dissimilar objects well separated.
Most importantly, because of its probabilistic formulation, SNE has the ability to be extended to mixtures in which ambiguous high-dimensional objects (such as the word ?bank?)
can have several widely-separated images in the low-dimensional space.
Acknowledgments We thank the anonymous referees and several visitors to our poster for helpful
suggestions. Yann LeCun provided digit and NIPS text data. This research was funded by NSERC.
References
[1] T. Cox and M. Cox. Multidimensional Scaling. Chapman & Hall, London, 1994.
[2] J. Tenenbaum. Mapping a manifold of perceptual observations. In Advances in Neural Information Processing Systems, volume 10, pages 682-688. MIT Press, 1998.
[3] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[4] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[5] T. Kohonen. Self-organization and Associative Memory. Springer-Verlag, Berlin, 1988.
[6] C. Bishop, M. Svensen, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10:215, 1998.
[7] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550-554, May 1994.
[8] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[9] Yann LeCun. NIPS online web site. http://nips.djvuzone.org, 2001.
[10] Andrew Kachites McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[11] A. Paccanaro and G. E. Hinton. Learning distributed representations of concepts from relational data using linear relational embedding. IEEE Transactions on Knowledge and Data Engineering, 13:232-245, 2000.
A Prototype for Automatic Recognition of
Spontaneous Facial Actions
M.S. Bartlett, G. Littlewort, B. Braathen, T.J. Sejnowski, and J.R. Movellan
Institute for Neural Computation and Department of Biology
University of California, San Diego
and Howard Hughes Medical Institute at the Salk Institute
Email: {marni, gwen, bjorn, terry, javier}@inc.ucsd.edu
Abstract
We present ongoing work on a project for automatic recognition of spontaneous facial actions. Spontaneous facial expressions differ substantially from posed expressions, similar to how continuous, spontaneous
speech differs from isolated words produced on command. Previous
methods for automatic facial expression recognition assumed images
were collected in controlled environments in which the subjects deliberately faced the camera. Since people often nod or turn their heads,
automatic recognition of spontaneous facial behavior requires methods
for handling out-of-image-plane head rotations. Here we explore an approach based on 3-D warping of images into canonical views. We evaluated the performance of the approach as a front-end for a spontaneous
expression recognition system using support vector machines and hidden
Markov models. This system employed general purpose learning mechanisms that can be applied to recognition of any facial movement. The
system was tested for recognition of a set of facial actions defined by
the Facial Action Coding System (FACS). We showed that 3D tracking
and warping followed by machine learning techniques directly applied to
the warped images, is a viable and promising technology for automatic
facial expression recognition. One exciting aspect of the approach presented here is that information about movement dynamics emerged out
of filters which were derived from the statistics of images.
1 Introduction
Much of the early work on computer vision applied to facial expressions focused on recognizing a few prototypical expressions of emotion produced on command (e.g. ?smile?).
These examples were collected under controlled imaging conditions with subjects deliberately facing the camera. Extending these systems to spontaneous facial behavior is a critical
step forward for applications of this technology. Spontaneous facial expressions differ substantially from posed expressions, similar to how continuous, spontaneous speech differs
from isolated words produced on command. Spontaneous facial expressions are mediated by a distinct neural pathway from posed expressions. The pyramidal motor system,
originating in the cortical motor strip, drives voluntary facial actions, whereas involuntary,
emotional facial expressions appear to originate in a subcortical motor circuit involving
the basal ganglia, limbic system, and the cingulate motor area (e.g. [15]). Psychophysical work has shown that spontaneous facial expressions differ from posed expressions in a
number of ways [6]. Subjects often contract different facial muscles when asked to pose
an emotion such as fear versus when they are actually experiencing fear. (See Figure 1b.)
In addition, the dynamics are different. Spontaneous expressions have a fast and smooth
onset, with apex coordination, in which muscle contractions in different parts of the face
peak at the same time. In posed expressions, the onset tends to be slow and jerky, and the
muscle contractions typically do not peak simultaneously.
Spontaneous facial expressions often contain much information beyond what is conveyed
by basic emotion categories, such as happy, sad, or surprised. Faces convey signs of cognitive state such as interest, boredom, and confusion, conversational signals, and blends of
two or more emotions. Instead of classifying expressions into a few basic emotion categories, the work presented here attempts to measure the full range of facial behavior by
recognizing facial animation units that comprise facial expressions. The system is based
on the Facial Action Coding System (FACS) [7].
FACS [7] is the leading method for measuring facial movement in behavioral science. It
is a human judgment system that is presently performed without aid from computer vision. In FACS, human coders decompose facial expressions into action units (AUs) that
roughly correspond to independent muscle movements in the face (see Figure 1). Ekman
and Friesen described 46 independent facial movements, or ?facial actions? (Figure 1).
These facial actions are analogous to phonemes for facial expression. Over 7000 distinct
combinations of such movements have been observed in spontaneous behavior.
[Figure 1 content] AU1: Inner Brow Raiser (Central Frontalis). AU2: Outer Brow Raiser (Lateral Frontalis). AU4: Brow Lower (Corrugator, Depressor Supercilii, Depressor Glabellae). Combinations shown: 1+2, 1+4, 1+2+4.
Figure 1: The Facial Action Coding System decomposes facial expressions into component
actions. The three individual brow region actions and selected combinations are illustrated.
When subjects pose fear they often perform 1+2 (top right), whereas spontaneous fear
reliably elicits 1+2+4 (bottom right) [6].
Advantages of FACS include (1) Objectivity. It does not apply interpretive labels to expressions but rather a description of physical changes in the face. This enables studies of
new relationships between facial movement and internal state, such as the facial signals
of stress or fatigue. (2) Comprehensiveness. FACS codes for all independent motions of
the face observed by behavioral psychologists over 20 years of study. (3) Robust link with
ground truth. There is over 20 years of behavioral data on the relationships between FACS
movement parameters and underlying emotional or cognitive states. Automated facial action coding would be effective for human-computer interaction tools and low bandwidth
facial animation coding, and would have a tremendous impact on behavioral science by
making objective measurement more accessible.
There has been an emergence of groups that analyze facial expressing into elementary
movements. For example, Essa and Pentland [8] and Yacoob and Davis [16] proposed
methods to analyze expressions into elementary movements using an animation style coding system inspired by FACS. Eric Petajan?s group has also worked for many years on
methods for automatic coding of facial expressions in the style of MPEG4 [5], which codes
movement of a set of facial feature points. While coding standards like MPEG4 are useful for animating facial avatars, they are of limited use for behavioral research since, for
example, MPEG4 does not encode some behaviorally relevant facial movements such as
the muscle that circles the eye (the orbicularis oculi, which differentiates spontaneous from
posed smiles [6]). It also does not encode the wrinkles and bulges that are critical for distinguishing some facial muscle activations that are difficult to differentiate using motion
alone yet can have different behavioral implications (e.g. see Figure 1b.) One other group
has focused on automatic FACS recognition as a tool for behavioral research, lead by Jeff
Cohn and Takeo Kanade. They present an alternative approach based on traditional computer vision techniques, including edge detection and optic flow. A comparative analysis
of our approaches is available in [1, 4, 10].
2 Factorizing rigid head motion from nonrigid facial deformations
The most difficult technical challenge that came with spontaneous behavior was the presence of out-of-plane rotations due to the fact that people often nod or turn their head as
they communicate with others. Our approach to expression recognition is based on statistical methods applied directly to filter bank image representations. While in principle such
methods may be able to learn the invariances underlying out-of-plane rotations, the amount
of data needed to learn such invariances is likely to be impractical. Instead, we addressed
this issue by means of deformable 3D face models. We fit 3D face models to the image
plane, texture those models using the original image frame, then rotate the model to frontal
views, warp it to a canonical face geometry, and then render the model back into the image
plane. (See Figures 2,3,4). This allowed us to factor out image variation due to rigid head
rotations from variations due to nonrigid face deformations. The rigid transformations were
encoded by the rotation and translation parameters of the 3D model. These parameters are
retained for analysis of the relation of rigid head dynamics to emotional and cognitive state.
Since our goal was to explore the use of 3D models to handle out-of-plane rotations for
expression recognition, we first tested the system using hand-labeling to give the position
of 8 facial landmarks. However the approach can be generalized in a straightforward and
principled manner to work with automatic 3D trackers, which we are presently developing [9].
Although human labeling can be highly precise, the labels employed here had substantial error
due to inattention when the face moved. Mean deviation between two labelers was 4 pixels 8.7.
Hence it may be realistic to suppose that a fully automatic head pose tracker may achieve at least this
level of accuracy.
a.
b.
Figure 2: Head pose estimation. a. First, camera parameters and face geometry are jointly
estimated using an iterative least-squares technique. b. Next, head pose is estimated in each
frame using stochastic particle filtering: each particle is a head model at a particular orientation and scale.
When landmark positions in the image plane are known, the problem of 3D pose estimation
is relatively easy to solve. We begin with a canonical wire-mesh face model and adapt it to
the face of a particular individual by using 30 image frames in which 8 facial features have
been labeled by hand. Using an iterative least squares triangulation technique, we jointly
estimate camera parameters and the 3D coordinates of these 8 features. A scattered data
interpolation technique is then used to modify the canonical 3D face model so that it fits
the 8 feature positions [14]. Once camera parameters and 3D face geometry are known,
we use a stochastic particle filtering approach [11] to estimate the most likely rotation and
translation parameters of the 3D face model in each video frame. (See [2]).
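The predict-weight-resample loop of the particle filter can be sketched in Python. This is an illustrative sketch, not the paper's implementation: the pose is reduced to a single rotation angle, and a stand-in log-likelihood function replaces the step of rendering the textured 3D model against the video frame.

```python
import math
import random

def particle_filter_step(particles, observe_loglik, noise=0.05, rng=random):
    """One predict-weight-resample step of a particle filter over head pose.

    particles: pose hypotheses (here each is a single rotation angle).
    observe_loglik: log-likelihood of the current frame given a pose; in the
    paper this would compare the rendered, textured 3D head model to the
    image, but here it is a stand-in function.
    """
    # Predict: diffuse each hypothesis with random-walk dynamics.
    proposed = [p + rng.gauss(0.0, noise) for p in particles]
    # Weight: score each hypothesis against the observation (in log space).
    logw = [observe_loglik(p) for p in proposed]
    m = max(logw)
    weights = [math.exp(lw - m) for lw in logw]
    # Resample: draw a new particle set proportionally to the weights.
    return rng.choices(proposed, weights=weights, k=len(proposed))

# Toy demo: the "true" head rotation is 0.3 rad; the likelihood peaks there.
rng = random.Random(1)
true_pose = 0.3
loglik = lambda p: -((p - true_pose) ** 2) / (2 * 0.1 ** 2)
particles = [rng.uniform(-1.0, 1.0) for _ in range(200)]
for _ in range(30):
    particles = particle_filter_step(particles, loglik, rng=rng)
estimate = sum(particles) / len(particles)
```

After a few dozen steps the particle cloud concentrates around the pose that best explains the observations, and the posterior mean serves as the pose estimate for the frame.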
3 Action unit recognition
Database of spontaneous facial expressions. We employed a dataset of spontaneous
facial expressions from freely behaving individuals. The dataset consisted of 300 gigabytes
of 640 x 480 color images, 8 bits per pixel, 60 fields per second, 2:1 interlaced. The video
sequences contained out of plane head rotation up to 75 degrees. There were 17 subjects:
3 Asian, 3 African American, and 11 Caucasians. Three subjects wore glasses. The facial
behaviors in one minute of video per subject were scored frame by frame by two teams of experts
on the FACS system, one led by Mark Frank at Rutgers and another led by Jeffrey Cohn
at the University of Pittsburgh.
While the database we used was rather large for current digital video storage standards,
in practice the number of spontaneous examples of each action unit in the database was
relatively small. Hence, we prototyped the system on the three actions which had the most
examples: Blinks (AU 45 in the FACS system) for which we used 168 examples provided
by 10 subjects, Brow raises (AU 1+2) for which we had 48 total examples provided by
12 subjects, and Brow lower (AU 4) for which we had 14 total examples provided by 12
subjects. Negative examples for each category consisted of randomly selected sequences
matched by subject and sequence length. These three facial actions have relevance to applications such as monitoring of alertness, anxiety, and confusion. The system presented here
employs general purpose learning mechanisms that can be applied to recognition of any
facial action once sufficient training data is available. There is no need to develop special
purpose feature measures to recognize additional facial actions.
Figure 3: Flow diagram of recognition system. First, head pose is estimated, and images
are warped to frontal views and canonical face geometry. The warped images are then
passed through a bank of Gabor filters. SVMs are then trained to classify facial actions
from the Gabor representation in individual video frames. The output trajectories of the
SVMs for full video sequences are then channeled to hidden Markov models.
Recognition system. An overview of the recognition system is illustrated in Figure 3.
Head pose was estimated in the video sequences using a particle filter with 100 particles.
Face images were then warped onto a face model with canonical face geometry, rotated to
frontal, and then projected back into the image plane. This alignment was used to define
and crop a subregion of the face image containing the eyes and brows. The vertical position
of the eyes was 0.67 of the window height. There were 105 pixels between the eyes and
120 pixels from eyes to mouth. Pixel brightnesses were linearly rescaled to [0,255]. Soft
histogram equalization was then performed on the image gray-levels by applying a logistic
filter with parameters chosen to match the mean and variance of the gray-levels in the
neutral frame [13].
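The logistic gray-level normalization can be sketched as follows. The paper does not give the exact constants, so the parameterization below (a logistic centered on the reference frame's mean, with its slope set so the logistic's spread matches the reference standard deviation) is an assumption:

```python
import math

def logistic_rescale(pixels, ref_mean, ref_std, out_max=255.0):
    """Soft histogram equalization: squash gray-levels through a logistic
    whose center and slope come from the mean and standard deviation of a
    reference (neutral) frame, so every frame passes through a comparable
    nonlinearity. The slope constant makes the logistic's standard
    deviation equal ref_std (illustrative parameterization)."""
    slope = (math.pi / math.sqrt(3.0)) / max(ref_std, 1e-9)
    return [out_max / (1.0 + math.exp(-slope * (p - ref_mean))) for p in pixels]

# Reference statistics from a (toy) neutral frame.
neutral = [90.0, 100.0, 110.0, 120.0]
ref_mean = sum(neutral) / len(neutral)
ref_std = (sum((p - ref_mean) ** 2 for p in neutral) / len(neutral)) ** 0.5
out = logistic_rescale([0.0, 100.0, 105.0, 255.0], ref_mean, ref_std)
```

The mapping is monotone, sends the reference mean to the middle of the output range, and compresses extreme gray-levels toward 0 and 255, which is the "soft" analogue of histogram equalization.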
The resulting images were then convolved with a bank of Gabor kernels at 5 spatial frequencies and 8 orientations. Output magnitudes were normalized to unit length and then
downsampled by a factor of 4. The Gabor representations were then channeled to a bank
of support vector machines (SVMs). Nonlinear SVMs were trained to recognize facial
actions in individual video frames. The training samples for the SVMs were the action
peaks as identified by the FACS experts, and negative examples were randomly selected
frames matched by subject. Generalization to novel subjects was tested using leave-one-out cross-validation. The SVM output was the margin (distance along the normal to the
class partition). Trajectories of SVM outputs for the full video sequence of test subjects
were then channeled to hidden Markov models (HMMs). The HMMs were trained to classify facial actions without using information about which frame contained the action peak.
Generalization to novel subjects was again tested using leave-one-out cross-validation.
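A Gabor bank of this shape (5 spatial frequencies by 8 orientations, with unit-normalized output magnitudes) can be sketched as below. Kernel size, the Gaussian width, and the frequency spacing are illustrative assumptions, and for brevity the response is evaluated only at the patch center rather than convolved over the whole image:

```python
import cmath
import math

def gabor_kernel(freq, theta, size=9, sigma=2.0):
    """Complex Gabor kernel: a plane wave at (freq, theta) under a Gaussian."""
    half = size // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            envelope = math.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
            phase = 2.0 * math.pi * freq * (x * math.cos(theta) + y * math.sin(theta))
            row.append(envelope * cmath.exp(1j * phase))
        kern.append(row)
    return kern

def gabor_magnitudes(patch, freqs, thetas):
    """Gabor output magnitudes at the patch center, normalized to unit length."""
    size = len(patch)
    mags = []
    for f in freqs:
        for t in thetas:
            kern = gabor_kernel(f, t, size=size)
            resp = sum(patch[y][x] * kern[y][x]
                       for y in range(size) for x in range(size))
            mags.append(abs(resp))
    norm = math.sqrt(sum(m * m for m in mags)) or 1.0
    return [m / norm for m in mags]

freqs = [1.0 / (2 ** i) for i in range(1, 6)]       # 5 spatial frequencies
thetas = [i * math.pi / 8 for i in range(8)]        # 8 orientations
patch = [[(x % 2) * 255.0 for x in range(9)] for _ in range(9)]  # stripes
feats = gabor_magnitudes(patch, freqs, thetas)
```

The resulting 40-dimensional, unit-length magnitude vector is the kind of per-location feature that, after downsampling, feeds the SVM bank.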
Figure 4: User interface for the FACS recognition system. The face on the bottom right
is an original frame from the dataset. Top right: estimate of head pose. Center image:
warped to frontal view and canonical face geometry. The curve shows the output of the blink
detector for the video sequence. This frame is in the relaxation phase of a blink.
4 Results
Classifying individual frames with SVMs. SVMs were first trained to discriminate
images containing the peak of blink sequences from randomly selected images containing
no blinks. A nonlinear SVM applied to the Gabor representations obtained 95.9% correct
for discriminating blinks from non-blinks for the peak frames. The nonlinear kernel was
a radial basis function of the Euclidean distance between examples, with a constant width
parameter.
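The kernel is described only as a nonlinear function of Euclidean distance with a constant parameter; the exact expression is not recoverable from this text, so the Gaussian radial basis kernel below is shown as one standard instance of that family, not as the paper's definitive choice:

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian radial basis kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    One standard kernel that depends only on Euclidean distance and a
    constant width parameter sigma (illustrative, see the text)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0])   # identical inputs
k_far = rbf_kernel([1.0, 2.0], [5.0, 6.0])    # distant inputs
```

Such a kernel equals 1 for identical inputs and decays toward 0 as the Euclidean distance grows, which is what makes the induced SVM decision boundary nonlinear in the input space.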
Recovering FACS dynamics. Figure 5a shows the time course of SVM outputs for complete sequences of blinks. Although the SVM was not trained to measure the amount of
eye opening, it is an emergent property. In all time courses shown, the SVM outputs are
test outputs (the SVM was not trained on the subject shown). Figure 5b shows the SVM
trajectory when tested on a sequence with multiple peaks. The SVM outputs provide information about FACS dynamics that was previously unavailable from human coding due to
time constraints: current coding methods provide only the beginning and end of the action, along with the location and magnitude of the action unit peak. This information about
dynamics may be useful for future behavioral studies.
Figure 5: a. Blink trajectories of SVM outputs for four different subjects. Star indicates
the location of the AU peak as coded by the human FACS expert. b. SVM output trajectory
for a blink with multiple peaks (flutter). c. Brow raise trajectories of SVM outputs for one
subject. Letters A-D indicate the intensity of the AU as coded by the human FACS expert,
and are placed at the peak frame.
HMMs were trained to classify action units from the trajectories of SVM outputs. The HMMs
addressed the case in which the frame containing the action unit peak is unknown. Two hidden Markov models, one for blinks and one for random sequences matched by subject and
length, were trained and tested using leave-one-out cross-validation. A mixture-of-Gaussians model was employed. Test sequences were assigned to the category for which the
probability of the sequence given the model was greatest. The number of states was varied
from 1 to 10, and the number of Gaussian mixtures was varied from 1 to 7. Best performance
of 98.2% correct was obtained using 6 states and 7 Gaussians.
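The two-model likelihood comparison can be sketched with a minimal Gaussian-output HMM: a single Gaussian per state rather than the paper's mixtures, and all parameters below are toy values chosen for illustration, not fitted ones.

```python
import math

def _logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_gauss(x, mean, var):
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def hmm_loglik(seq, pi, trans, means, variances):
    """Log-likelihood of a 1-D observation sequence under a Gaussian-output
    HMM, computed with the forward algorithm in log space."""
    n = len(pi)
    alpha = [math.log(pi[s]) + log_gauss(seq[0], means[s], variances[s])
             for s in range(n)]
    for x in seq[1:]:
        alpha = [_logsumexp([alpha[sp] + math.log(trans[sp][s])
                             for sp in range(n)])
                 + log_gauss(x, means[s], variances[s])
                 for s in range(n)]
    return _logsumexp(alpha)

def classify(seq, models):
    """Assign the sequence to the model under which it is most probable."""
    scores = {name: hmm_loglik(seq, *params) for name, params in models.items()}
    return max(scores, key=scores.get)

# Two toy 2-state models: "blink" expects the SVM output to rise and fall,
# "random" expects it to hover near zero. Parameters are illustrative.
models = {
    "blink": ([0.9, 0.1], [[0.7, 0.3], [0.3, 0.7]], [0.0, 2.0], [0.5, 0.5]),
    "random": ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [0.0, 0.2], [0.5, 0.5]),
}
label = classify([0.1, 0.8, 1.9, 2.1, 1.0, 0.2], models)
```

A rise-and-fall trajectory is assigned to the blink model because the state with mean 2.0 explains the peak frames far better than either near-zero state of the random model; a flat trajectory is assigned to the random model.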
Brow movement discrimination. The goal was to discriminate three action units localized around the eyebrows. Since this is a 3-category task and SVMs are originally designed
for binary classification tasks, we trained a different SVM on each possible binary decision
task: Brow Raise (AU 1+2) versus matched random sequences, Brow Lower (AU 4) versus
another set of matched random sequences, and Brow Raise versus Brow Lower. The output
of these three SVMs was then fed to an HMM for classification. The input to the HMM
consisted of three values, which were the outputs of each of the three 2-category SVMs. As
for the blinks, the HMMs were trained on the "test" outputs of the SVMs. The HMMs
achieved 78.2% accuracy using 10 states, 7 Gaussians, and including the first derivatives of
the observation sequence in the input. Separate HMMs were also trained to perform each
of the 2-category brow movement discriminations in image sequences. These results are
summarized in Table 1.
Figure 5c shows example output trajectories for the SVM trained to discriminate Brow
Raise from Random matched sequences. As with the blinks, we see that despite not being
trained to indicate AU intensity, an emergent property of the SVM output was the magnitude of the brow raise. Maximum SVM output for each sequence was positively correlated
with action unit intensity, as scored by the human FACS expert.
The contribution of Gabors was examined by comparing linear and nonlinear SVMs applied directly to the difference images versus applied to Gabor outputs. Consistent with our previous findings [12], Gabor filters made the space more linearly separable than the raw difference images. For blink detection, a linear SVM on the Gabors performed significantly
better (93.5%) than a linear SVM applied directly to difference images (78.3%). Using
a nonlinear SVM with difference images improved performance substantially to 95.9%,
whereas the nonlinear SVM on Gabors gave only a small increment in performance, also
Action                               % Correct (HMM)      N
Blink vs. Non-blink                        98.2          168
Brow Raise vs. Random                      90.6           48
Brow Lower vs. Random                      75.0           14
Brow Raise vs. Brow Lower                  93.5           31
Brow Raise vs. Lower vs. Random            78.2           62

Table 1: Summary of results. All performances are for generalization to novel subjects.
Random: Random sequences matched by subject and length. N: Total number of positive
(and also negative) examples.
to 95.9%. A similar pattern was obtained for the brow movements, except that nonlinear
SVMs applied directly to difference images did not perform as well as nonlinear SVMs
applied to Gabors. The details of this analysis, and also an analysis of the contribution of
SVMs to system performance, are available in [1].
5 Conclusions
We explored an approach for handling out-of-plane head rotations in automatic recognition
of spontaneous facial expressions from freely behaving individuals. The approach fits a 3D
model of the face and rotates it back to a canonical pose (e.g., frontal view). We found that
machine learning techniques applied directly to the warped images are a promising approach
for automatic coding of spontaneous facial expressions.
This approach employed general purpose learning mechanisms that can be applied to the
recognition of any facial action. The approach is parsimonious and does not require defining a different set of feature parameters or image operations for each facial action. While
the database we used was rather large for current digital video storage standards, in practice the number of spontaneous examples of each action unit in the database was relatively
small. We therefore prototyped the system on the three actions which had the most examples. Inspection of the performance of our system shows that 14 examples were sufficient to
successfully learn an action, on the order of 50 examples was sufficient to achieve performance
over 90%, and on the order of 150 examples was sufficient to achieve over 98% accuracy and
learn smooth trajectories. Based on these results, we estimate that a database of 250 minutes of coded, spontaneous behavior would be sufficient to train the system on the vast
majority of facial actions.
One exciting finding is the observation that important measurements emerged out of filters
derived from the statistics of the images. For example, the output of the SVM trained as
a blink detector could potentially be used to measure the dynamics of eyelid closure,
even though the system was not designed to explicitly detect the contours of the eyelid and
measure the closure. (See Figure 5.)
The results presented here employed hand-labeled feature points for the head pose tracking step. We are presently developing a fully automated head pose tracker that integrates
particle filtering with a system developed by Matthew Brand for automatic real-time 3D
tracking based on optic flow [3].
All of the pieces of the puzzle are ready for the development of automated systems that
recognize spontaneous facial actions at the level of detail required by FACS. Collection of
a much larger, realistic database to be shared by the research community is a critical next
step.
Acknowledgments
Support for this project was provided by ONR N00014-02-1-0616, NSF-ITR IIS-0220141 and IIS0086107, DCI contract No.2000-I-058500-000, and California Digital Media Innovation Program
DiMI 01-10130.
References
[1] M.S. Bartlett, B. Braathen, G. Littlewort-Ford, J. Hershey, I. Fasel, T. Marks, E. Smith, T.J.
Sejnowski, and J.R. Movellan. Automatic analysis of spontaneous facial behavior: A final
project report. Technical Report UCSD MPLab TR 2001.08, University of California, San
Diego, 2001.
[2] B. Braathen, M.S. Bartlett, G. Littlewort-Ford, and J.R. Movellan. 3-D head pose estimation
from video by nonlinear stochastic particle filtering. In Proceedings of the 8th Joint Symposium
on Neural Computation, 2001.
[3] M. Brand. Flexible flow for 3d nonrigid tracking and shape recovery. In CVPR, 2001.
[4] J.F. Cohn, T. Kanade, T. Moriyama, Z. Ambadar, J. Xiao, J. Gao, and H. Imamura. A comparative study of alternative FACS coding algorithms. Technical Report CMU-RI-TR-02-06,
Robotics Institute, Carnegie-Mellon Univerisity, 2001.
[5] P. Doenges, F. Lavagetto, J. Ostermann, I.S. Pandzic, and E. Petajan. Mpeg-4: Audio/video
and synthetic graphics/audio for real-time, interactive media delivery. Image Communications
Journal, 5(4), 1997.
[6] P. Ekman. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. W.W.
Norton, New York, 3rd edition, 2001.
[7] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of
Facial Movement. Consulting Psychologists Press, Palo Alto, CA, 1978.
[8] I. Essa and A. Pentland. Coding, analysis, interpretation, and recognition of facial expressions.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):757–763, 1997.
[9] I.R. Fasel, M.S. Bartlett, and J.R. Movellan. A comparison of gabor filter methods for automatic
detection of facial landmarks. In Proceedings of the 5th International Conference on Face and
Gesture Recognition, 2002. Accepted.
[10] M.G. Frank, P. Perona, and Y. Yacoob. Automatic extraction of facial action codes: Final report and panel recommendations for automatic facial action coding. Unpublished manuscript,
Rutgers University, 2001.
[11] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models.
Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.
[12] G. Littlewort-Ford, M.S. Bartlett, and J.R. Movellan. Are your eyes smiling? Detecting genuine smiles with support vector machines and Gabor wavelets. In Proceedings of the 8th Joint
Symposium on Neural Computation, 2001.
[13] J.R. Movellan. Visual speech recognition with stochastic networks. In G. Tesauro, D.S. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages
851–858. MIT Press, Cambridge, MA, 1995.
[14] Frédéric Pighin, Jamie Hecker, Dani Lischinski, Richard Szeliski, and David H. Salesin. Synthesizing realistic facial expressions from photographs. Computer Graphics, 32(Annual Conference Series):75–84, 1998.
[15] W. E. Rinn. The neuropsychology of facial expression: A review of the neurological and
psychological mechanisms for producing facial expressions. Psychological Bulletin, 95(1):52–
77, 1984.
[16] Y. Yacoob and L. Davis. Recognizing human facial expressions from long image sequences using optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):636–
642, 1996.
A Maximum Entropy Approach To
Collaborative Filtering in Dynamic, Sparse,
High-Dimensional Domains
David M. Pennock
Overture Services, Inc.
74 N. Pasadena Ave., 3rd floor
Pasadena, CA 91103,
[email protected]
Dmitry Y. Pavlov
NEC Laboratories America
4 Independence Way
Princeton, NJ 08540,
[email protected]
Abstract
We develop a maximum entropy (maxent) approach to generating recommendations in the context of a user's current navigation stream, suitable
for environments where data is sparse, high-dimensional, and dynamic:
conditions typical of many recommendation applications. We address
sparsity and dimensionality reduction by first clustering items based on
user access patterns, so as to minimize the a priori probability that recommendations will cross cluster boundaries, and then recommending only within clusters. We address the inherent dynamic nature
of the problem by explicitly modeling the data as a time series; we show
how this representational expressivity fits naturally into a maxent framework. We conduct experiments on data from ResearchIndex, a popular online repository of over 470,000 computer science documents. We
show that our maxent formulation outperforms several competing algorithms in offline tests simulating the recommendation of documents to
ResearchIndex users.
1 Introduction
Recommender systems attempt to automate the process of ?word of mouth? recommendations within a community. Typical application environments are dynamic in many respects:
users come and go, users preferences and goals change, items are added and removed, and
user navigation itself is a dynamic process. Recommendation domains are also often high
dimensional and sparse, with tens or hundreds of thousands of items, among which very
few are known to any particular user.
Consider, for instance, the problem of generating recommendations within ResearchIndex
(a.k.a., CiteSeer),1 an online digital library of computer science papers, receiving thousands
of user accesses per hour. The site automatically locates computer science papers found on
the Web, indexes their full text, allows browsing via the literature citation graph, and isolates the text around citations, among other services [8]. The archive contains over 470,000
1
http://www.researchindex.com
documents including the full text of each document, citation links between documents,
and a wealth of user access data. With so many documents, and only seven accesses per
user on average, the user-document data matrix is exceedingly sparse and thus challenging
to model. In this paper, we work with the ResearchIndex data, since it is an interesting
application domain, and is typical of many recommendation application areas [14].
There are two conceptually different ways of making recommendations. A content filtering
approach is to recommend solely based on the features of a document d (e.g., showing documents written by the same author(s), or documents textually similar to d). These methods
have been shown to be good predictors [3]. Another possibility is to perform collaborative
filtering [13] by assessing the similarities between the documents requested by the current
user and the users who interacted with ResearchIndex in the past. Once the users with
browsing histories similar to that of a given user are identified, an assumption is made that
the future browsing patterns will be similar as well, and the prediction is made accordingly.
Common measures of similarity between users include Pearson correlation coefficient [13],
mean squared error [16], and vector similarity [1]. More recent work includes application
of statistical machine learning techniques, such as Bayesian networks [1], dependency networks [6], singular value decomposition [14] and latent class models [7, 12]. Most of these
recommendation algorithms are context and order independent: that is, the rank of recommendations does not depend on the context of the user?s current navigation or on recency
effects (past viewed items receive as much weight as recently viewed items).
Currently, ResearchIndex mostly employs fairly simple content-based recommenders. Our
objective was to design a superior (or at least complementary) model-based recommendation algorithm that (1) is tuned for a particular user at hand, and (2) takes into account the
identity of the currently viewed document, so as not to lead the user too far astray from
his or her current search goal.
To overcome the sparsity and high dimensionality of the data, we cluster the documents
with an objective of maximizing the likelihood that recommendable items co-occur in the
same cluster. By marrying the clustering technique with the end goal of recommendation,
our approach appears to do a good job at maintaining high recall (sensitivity). Similar ideas
in the context of maxent were proposed recently by Goodman in [5].
We explicitly model time: each user is associated with a set of sessions, and each session
is modeled as a time sequence of document accesses. We present a maxent model that
effectively estimates the probability of the next visited document ID (DID) given the most
recently visited DID ("bigrams") and past indicative DIDs ("triggers"). To our knowledge, this is the first application of maxent for collaborative filtering, and one of the few
published formulations that makes accurate recommendations in the context of a dynamic
user session [3, 15]. We perform offline empirical tests of our recommender and compare it
to competing models. The comparison shows our method is quite accurate, outperforming
several other less-expressive models.
The rest of the paper is organized as follows. In Section 2, we describe the log data from
ResearchIndex and how we preprocessed it. Section 3 presents the greedy algorithm for
clustering the documents and discusses how the clustering helps to decompose the original
prediction task. In Section 4, we give a high-level description of our maxent model and the
features we used for its learning. Experimental results and comparisons with other models
are discussed in Section 5. In Section 6, we draw conclusions and describe directions for
future work.
2 Preprocessing the ResearchIndex data
Each document indexed in ResearchIndex is assigned a unique document ID (DID). Whenever a user accesses the site with a cookie-enabled browser, (s)he is identified as a new or returning user and all activity is recorded on the server side with a unique user ID (UID) and a
time stamp (TID). We obtained a log file that recorded approximately 3 months' worth of ResearchIndex data that can roughly be viewed as a series of (UID, DID, TID) request triples.
In the first processing step, we aggregated the requests by UID and broke them into
sessions. For a fixed UID, a session is defined as a sequence of document requests with
no two consecutive requests more than T seconds apart. In our experiments we chose
T = 300, so that if a user was inactive for more than 300 seconds, his next request was
considered to mark the start of a new session.
The next processing step included heuristics, such as identifying and discarding the sessions belonging to robots (they obviously contaminate the browsing patterns of human
users), collapsing all same consecutive DID accesses into a single instance of this DID (our
objective was to predict what interests the user beyond the currently requested document),
getting rid of all DIDs that occurred less than two times in the log (for two or fewer occurrences, it is hard to reliably train the system to predict them and evaluate performance),
and finally discarding sessions containing only one document.
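The session segmentation and the cleanup heuristics above (the 300-second gap rule, collapsing consecutive repeats of the same DID, and dropping single-document sessions) can be sketched as follows; function and variable names are ours, not from the paper:

```python
def split_sessions(requests, gap=300):
    """Break one user's time-ordered (tid, did) requests into sessions.

    A gap of more than `gap` seconds between consecutive requests starts a
    new session. Consecutive repeats of the same DID are collapsed, and
    sessions containing only one document are dropped.
    """
    sessions, current = [], []
    last_tid = None
    for tid, did in sorted(requests):
        if last_tid is not None and tid - last_tid > gap:
            if len(current) > 1:
                sessions.append(current)
            current = []
        if not current or current[-1] != did:   # collapse repeated DIDs
            current.append(did)
        last_tid = tid
    if len(current) > 1:
        sessions.append(current)
    return sessions

# Toy log for one UID: two real sessions plus a singleton that is dropped.
log = [(0, "a"), (10, "a"), (40, "b"), (400, "c"), (420, "d"), (2000, "e")]
sessions = split_sessions(log)
```

On this toy log the repeated request for "a" is collapsed, the 360-second and 1580-second gaps split the stream, and the trailing one-document session is discarded.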
3 Dimensionality Reduction Via Clustering
Even after the log is processed, the data still remains high-dimensional (62,240 documents),
and sparse, and hence still hard to model. To solve these problems we clustered the documents. Since our objective was to predict the instantaneous user interests, among many
possibilities of performing the clustering we chose to cluster based on user navigation patterns.
We scanned the processed log once and for each document
accumulated the number
was requested immediately after
; in other words, we
of times the document
computed the first-order Markov statistics or bigrams. Based on the user navigation patterns
encoded in bigrams, the greedy clustering is done as shown in the following pseudocode:
Input:  Bigrams B(i, j); Number of Clusters N
Output: Set of N Clusters S
Algorithm:
0.  k := 0
1.  set n := max_{i,j} B(i, j)                      // max number of transitions
2.  for all pairs (i, j) such that B(i, j) = n do   // all docs with n transitions
3.      if cluster(i) = NIL and cluster(j) = NIL and k < N
4.          S[k] := S[k] ∪ {i}
5.          S[k] := S[k] ∪ {j}
6.          cluster(i) := cluster(j) := k           // new cluster for i and j
7.          k := k + 1
8.      else if cluster(i) ≠ NIL and cluster(j) = NIL
9.          S[cluster(i)] := S[cluster(i)] ∪ {j}
10.         cluster(j) := cluster(i)                // j goes to cluster of i
11.     else if cluster(i) = NIL and cluster(j) ≠ NIL
12.         S[cluster(j)] := S[cluster(j)] ∪ {i}
13.         cluster(i) := cluster(j)                // i goes to cluster of j
14.     end if
15.     B(i, j) := NIL                              // mark the pair as processed
16. end for
Table 1: Top features for some of the clusters.
Cluster 1 – Cluster 8: one row of top feature terms per cluster (the terms themselves are not legible in the extracted text).
17. if unprocessed bigrams above the minimum frequency remain, goto 1
18. Return S
The algorithm starts with empty clusters and then cycles through all documents picking the
pairs of documents that have the current highest joint visitation frequency as prompted by
a bigram frequency (lines 1 and 2). If both documents in the selected pair are unassigned, a
new cluster is allocated for them (lines 3 through 7). If one of the documents in the selected
pair has been assigned to one of the previous clusters, the second document is assigned to
the same cluster (lines 8 through 14). The algorithm then repeats for the next lower frequency n,
for as long as n remains above a minimum frequency threshold.
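Under these conventions, the greedy procedure can be sketched in Python as follows. This is our reconstruction of the pseudocode, not the original implementation; `bigrams` maps ordered document pairs to their transition counts, and `min_count` stands in for the (unspecified) minimum frequency at which the scan stops.

```python
def greedy_cluster(bigrams, num_clusters, min_count=2):
    """Greedy clustering of documents by bigram (co-visitation) counts.

    bigrams: dict mapping (doc_i, doc_j) -> number of times doc_j was
    requested immediately after doc_i.
    Returns a list of sets of documents (at most num_clusters of them).
    """
    clusters = []     # list of sets of documents
    assignment = {}   # doc -> index of its cluster
    for n in sorted(set(bigrams.values()), reverse=True):
        if n < min_count:
            break
        for (i, j), count in bigrams.items():
            if count != n:
                continue
            ci, cj = assignment.get(i), assignment.get(j)
            if ci is None and cj is None and len(clusters) < num_clusters:
                clusters.append({i, j})              # new cluster for i and j
                assignment[i] = assignment[j] = len(clusters) - 1
            elif ci is not None and cj is None:
                clusters[ci].add(j)                  # j goes to cluster of i
                assignment[j] = ci
            elif ci is None and cj is not None:
                clusters[cj].add(i)                  # i goes to cluster of j
                assignment[i] = cj
    return clusters
```

Note that, as in the pseudocode, a pair whose two documents already sit in different clusters is simply skipped: clusters are never merged.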
After the clustering, we can assume that if the user requests a document from the i-th
cluster, he is considerably more likely to prefer a next document from the i-th cluster
rather than from the j-th cluster, j ≠ i; i.e.,
P(next document ∈ cluster i | current document ∈ cluster i) >
P(next document ∈ cluster j | current document ∈ cluster i). This
assumption is reasonable because by construction clusters represent densely connected (in
terms of traffic) components, and the traffic across the clusters is small compared to the
traffic within each cluster. In view of this observation, we broke individual user sessions
down into subsessions, where each subsession consisted of documents belonging to the
same cluster. The problem was thus reduced to a series of prediction problems for each
cluster.
We studied the clusters by trying to find out if the documents within a cluster are topically
related. We ran code previously developed at NEC Labs [4] that uses information gain
to find the top features that distinguish each cluster from the rest. Table 1 shows the top
features for some of the created clusters. The top features are quite consistent descriptors,
suggesting that in one session a ResearchIndex user is typically interested in searching
among topically-related documents.
4 Trigger MaxEnt
In this paper, we model P(d | h) as a maxent distribution, where d is
the identity of the document that will be next requested by the user, given the history h
of documents requested so far in the session and the data available for all other users. This choice of the maxent model is
natural since our intuition is that all of the previously requested documents in the user
session influence the identity of d. It is also clear that we cannot afford to build a
high-order model, because of the sparsity and high dimensionality of the data, so we need to restrict
ourselves to models that can be reliably estimated from the low-order statistics.
Bigrams provide one type of such statistics. In order to introduce a long-term dependence of
d on the documents that occurred earlier in the history of the session, we define a trigger as a
pair of documents in a given cluster such that the probability of the second document being
requested, given that the first occurred earlier in the session, is substantially
different from its unconditional probability. To measure the quality of triggers and in order to rank them
Table 2: Average number of hits and height of predictions across the clusters for
different ranges of heights and using various models. Each column corresponds to one of
five increasing height ranges. The boxed numbers are the best values across all models.

Model            (five increasing height-range bins)
Mult., 1 c.      hits:    48.78   67.94   80.94   90.93   98.54
                 height:  1.437   2.947   4.390   5.773   7.026
Mult., 25 c.     hits:    95.49  120.52  132.07  138.89  143.33
                 height:  1.421   2.503   3.312   3.975   4.528
Mark., 1 c.      hits:    91.39  115.68  123.44  126.26  127.57
                 height:  1.959   3.007   3.571   3.875   4.063
Mark., 25 c.     hits:    89.75  114.49  122.57  125.61  127.14
                 height:  1.959   3.047   3.646   3.972   4.191
Maxent, no sm.   hits:   111.95  130.35  138.18  142.56  145.55
                 height:  1.510   2.296   2.858   3.303   3.694
Maxent, w. sm.   hits:   112.68  130.86  138.53  142.85  145.78
                 height:  1.476   2.258   2.810   3.248   3.633
Corr.            hits:   111.02  132.87  140.96  144.99  147.34
                 height:  1.973   2.801   3.340   3.726   4.021
we computed mutual information between the event that the first document of the pair occurred
earlier in the session and the event that the second document is the one requested next.
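The mutual information between two binary events can be computed from the four cells of their 2x2 contingency table. This is a generic sketch (in nats), not the authors' code; `n_ab` counts joint occurrences of the two events, `n_a` and `n_b` their marginal counts, and `n` the total number of observations.

```python
from math import log

def mutual_information(n_ab, n_a, n_b, n):
    """Mutual information (in nats) between two binary events A and B,
    estimated from counts: n_ab = #(A and B), n_a = #A, n_b = #B,
    n = total number of observations."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            # marginal counts for this cell of the 2x2 table
            ca = n_a if a else n - n_a
            cb = n_b if b else n - n_b
            # joint count for this cell
            if a and b:
                cab = n_ab
            elif a:
                cab = n_a - n_ab
            elif b:
                cab = n_b - n_ab
            else:
                cab = n - n_a - n_b + n_ab
            if cab == 0:
                continue  # convention: 0 log 0 = 0
            p_ab, p_a, p_b = cab / n, ca / n, cb / n
            mi += p_ab * log(p_ab / (p_a * p_b))
    return mi
```

Independent events give a mutual information of zero; strongly coupled trigger pairs score high and are ranked first.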
The set of features, together with maxent as an objective function, can be shown to lead to
the following form of the conditional maxent model

    P(d | h) = (1 / Z(h)) · exp( Σ_i λ_i f_i(d, h) ) ,    (1)

where

    f_i(d, h),  i = 1, . . . , F,    (2)

are the features (binary indicators derived from the bigram and trigger statistics described
above) and Z(h) is a normalization constant ensuring that the distribution sums to 1.
The set of parameters λ_i needs to be found from the following set of equations that restrict
the distribution P(d | h) to have the same expected value for each feature as seen in
the training data: for each feature i,

    Σ_{(h,d)∈D} Σ_{d'} P(d' | h) f_i(d', h) = Σ_{(h,d)∈D} f_i(d, h) ,

where D denotes the training data, the LHS represents the expectation (up
to a normalization factor) of the feature f_i
with respect to the distribution P(d | h), and the RHS is the actual frequency (up
to the same normalization factor) of this feature in the training data. There exist efficient
algorithms for finding the parameters λ_i (e.g. improved iterative scaling [11]) that are
known to converge if the constraints imposed on P are consistent.
Under fairly general assumptions, the maxent model can also be shown to be a maximum
likelihood model [11]. Employing a Gaussian prior with a zero mean on parameters
yields a maximum a posteriori solution that has been shown to be more accurate than the related maximum likelihood solution and other smoothing techniques for maxent models [2].
We use Gaussian smoothing in our experiments with a maxent model.
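To make the estimation concrete, here is a minimal conditional maxent trainer. It is a didactic sketch, not the system described in this paper: it fits the weights by plain gradient ascent rather than improved iterative scaling, and it implements the zero-mean Gaussian prior as the usual L2 penalty with variance `sigma2`; the names `train_maxent` and `features` are ours.

```python
from math import exp

def train_maxent(data, features, steps=500, lr=0.1, sigma2=2.0):
    """Conditional maxent P(y|x) = exp(sum_i w_i f_i(x, y)) / Z(x), fit by
    gradient ascent on the log-likelihood with a zero-mean Gaussian prior
    (an L2 penalty with variance sigma2) on the weights.

    data: list of (x, y) pairs; features: list of functions f(x, y) -> float.
    The label set is taken to be the labels seen in the data.
    """
    labels = sorted({y for _, y in data})
    w = [0.0] * len(features)

    def probs(x):
        scores = [exp(sum(wi * f(x, y) for wi, f in zip(w, features)))
                  for y in labels]
        z = sum(scores)  # the normalization constant Z(x)
        return {y: s / z for y, s in zip(labels, scores)}

    for _ in range(steps):
        grad = [-wi / sigma2 for wi in w]  # gradient of the Gaussian prior
        for x, y in data:
            p = probs(x)
            for i, f in enumerate(features):
                # observed count minus expected count of feature i
                grad[i] += f(x, y) - sum(p[yy] * f(x, yy) for yy in labels)
        w = [wi + lr * g for wi, g in zip(w, grad)]
    return w, probs
```

At the optimum the penalized moment constraints hold: the empirical count of each feature equals its model expectation plus w_i / sigma2, which is exactly the MAP condition under the Gaussian prior.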
5 Experimental Results and Comparisons
We compared the trigger maxent model with the following models: mixture of Markov
models (1 and 25 components), mixture of multinomials (1 and 25 components) and the
Table 3: Average time per 1000 predictions and average memory used by various models
across 1000 clusters.
Model             Time, s    Memory, KBytes
Mult., 1          0.0049     0.5038
Mult., 25         0.0559     12.58
Markov, 1         0.0024     1.53
Markov, 25        0.0311     68.23
Maxent, no sm.    0.0746     90.12
Maxent, w. sm.    0.0696     90.12
Correlation       7.2013     17.26
correlation method [1]. The definitions of the models can be found in [9]. The maxent
model came in two flavors: unsmoothed and smoothed with a Gaussian prior, with 0 mean
and fixed variance 2. We did not optimize the adjustable parameters of the models (such as
the number of components for the mixture or the variance of the prior for maxent models)
or the number of clusters (1000).
We chronologically partitioned the log into roughly 8 million training requests (covering
82 days) and 2 million test requests (covering 17 days). We used the average height of
predictions on the test data as the main evaluation criterion. The height of a prediction is
defined as follows. Assuming that the probability estimates P(d | h) are available from
a model P for a fixed history h and all possible values of d, we first sort them in
descending order of P(d | h) and then find the distance, in terms of the number of documents
from the top of this sorted list, to the document actually requested (which we know from the test data).
The height tells us how deep into the list the user must go in order to see the document that
actually interests him. The height of a perfect prediction is 0, the maximum (worst) height
for a given cluster equals the number of documents in this cluster. Since heights greater
than 20 are of little practical interest, we binned the heights of predictions for each
cluster. For binning purposes we used a set of increasing height ranges (up to a height of 20). Within each
bin we also computed the average height of predictions. Thus, the best performing model
would place most of the predictions inside the bin(s) with low height values, and within
those bins the averages would be as low as possible.
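The height measure itself is straightforward to compute. This sketch assumes the model's estimates for a fixed history are given as a dict from document id to probability; ties are broken arbitrarily by the sort, and the function name is ours.

```python
def prediction_height(scores, actual):
    """Height of a prediction: how many documents rank strictly above the
    actually requested one when candidates are sorted by decreasing model
    probability. A perfect prediction has height 0.

    scores: dict mapping document id -> model probability P(d | h).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked.index(actual)
```

Averaging this quantity over the test requests, per bin, yields the numbers reported in Table 2.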
Table 2 reports the number of hits each model makes on average in each of the
bins, as well as the average height of predictions within the bin. The smoothed maxent
model has the best average height of predictions across the bins and scores roughly the
same number of hits in each of the bins as the correlation method. The mixture of Markov
models with 25 components evidently overfits on the training data and fails to outperform
a 1 component mixture. The mixture of multinomials is quite close in quality to, but still
not as good as, the maxent model with respect to both the number of hits and the height
predictions in each of the bins.
In Table 3, we present comparison of various models with respect to the average time
taken and memory required to make a prediction. The table clearly illustrates that the
maxent model (i.e., the model-based approach) is substantially more time efficient than the
correlation (i.e., the memory-based approach), even despite the fact that the model takes
on average more memory. In particular, our maxent approach is roughly two orders of
magnitude faster than the correlation.
6 Conclusions and Future Work
We have described a maxent approach to generating document recommendations in ResearchIndex. We addressed the problem of sparse, high-dimensional data by introducing a
clustering of the documents based on the user navigation patterns. A particular advantage
of our clustering is that by its definition the traffic across the clusters is small compared to
the traffic within the cluster. This advantage allowed us to decompose the original prediction problem into a set of problems corresponding to the clusters. We also demonstrated
that our clustering produces highly interpretable clusters: each cluster can be assigned a
topical name based on the top-extracted features.
We presented a number of models that can be used to solve a document prediction problem
within a cluster. We showed that the maxent model that combines zero- and first-order Markov
terms as well as the triggers with high information content provides the best average out-of-sample performance. Gaussian smoothing improved results even further.
There are several important directions to extend the work described in this paper. First,
we plan to perform "live" testing of the clustering approach and various models in ResearchIndex. Secondly, our recent work [10] suggests that for difficult prediction problems
improvement beyond the plain maxent models can be sought by employing the mixtures of
maxent models. We also plan to look at different clustering methods for documents (e.g.,
based on the content or the link structure) and try to combine prediction results for different clusterings. Our expectation is that such combining could yield better accuracy at the
expense of longer running times. Finally, one could think of a (quite involved) EM algorithm that performs the clustering of the documents in a manner that would make prediction
within resulting clusters easier.
Acknowledgements We would like to thank Steve Lawrence for making available the ResearchIndex log data, Eric Glover for running his naming code on our clusters, Kostas
Tsioutsiouliklis and Darya Chudova for many useful discussions, and the anonymous reviewers for helpful suggestions.
References
[1] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of UAI-1998, pages 43–52. San Francisco, CA: Morgan Kaufmann Publishers, 1998.
[2] S. Chen and R. Rosenfeld. A Gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, Carnegie Mellon University, 1999.
[3] D. Cosley, S. Lawrence, and D. Pennock. An open framework for practical testing of recommender systems using ResearchIndex. In International Conference on Very Large Databases (VLDB'02), 2002.
[4] E. Glover, D. Pennock, S. Lawrence, and R. Krovetz. Inferring hierarchical descriptions. Technical Report NECI TR 2002-035, NEC Research Institute, 2002.
[5] J. Goodman. Classes for fast maximum entropy training. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2001.
[6] D. Heckerman, D. Chickering, C. Meek, R. Rounthwaite, and C. Kadie. Dependency networks for density estimation, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1:49–75, 2000.
[7] T. Hofmann and J. Puzicha. Latent class models for collaborative filtering. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 688–693, 1999.
[8] S. Lawrence, C. L. Giles, and K. Bollacker. Digital libraries and Autonomous Citation Indexing. IEEE Computer, 32(6):67–71, 1999.
[9] D. Pavlov and D. Pennock. A maximum entropy approach to collaborative filtering in dynamic, sparse, high-dimensional domains. Technical Report NECI TR, NEC Research Institute, 2002.
[10] D. Pavlov, A. Popescul, D. Pennock, and L. Ungar. Mixtures of conditional maximum entropy models. Technical Report NECI TR, NEC Research Institute, 2002.
[11] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, April 1997.
[12] A. Popescul, L. Ungar, D. Pennock, and S. Lawrence. Probabilistic models for unified collaborative and content-based recommendation in sparse-data environments. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 437–444, 2001.
[13] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: An open architecture for collaborative filtering of netnews. In Proceedings of the ACM 1994 Conference on Computer Supported Cooperative Work, pages 175–186, Chapel Hill, North Carolina, 1994. ACM.
[14] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Analysis of recommender algorithms for e-commerce. In Proceedings of the 2nd ACM Conference on Electronic Commerce, pages 158–167, 2000.
[15] G. Shani, R. Brafman, and D. Heckerman. An MDP-based recommender system. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 453–460, 2002.
[16] U. Shardanand and P. Maes. Social information filtering: Algorithms for automating "word of mouth". In Proceedings of ACM CHI'95 Conference on Human Factors in Computing Systems, volume 1, pages 210–217, 1995.
Ofer Dekel and Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{oferd,singer}@cs.huji.ac.il
Abstract
We describe a new algorithmic framework for learning multiclass categorization problems. In this framework a multiclass predictor is composed
of a pair of embeddings that map both instances and labels into a common
space. In this space each instance is assigned the label it is nearest to. We
outline and analyze an algorithm, termed Bunching, for learning the pair
of embeddings from labeled data. A key construction in the analysis of
the algorithm is the notion of probabilistic output codes, a generalization of error correcting output codes (ECOC). Furthermore, the method
of multiclass categorization using ECOC is shown to be an instance of
Bunching. We demonstrate the advantage of Bunching over ECOC by
comparing their performance on numerous categorization problems.
1 Introduction
The focus of this paper is supervised learning from multiclass data. In multiclass problems
the goal is to learn a classifier that accurately assigns labels to instances where the set of
labels is of finite cardinality and contains more than two elements. Many machine learning
applications employ a multiclass categorization stage. Notable examples are document
classification, spoken dialog categorization, optical character recognition (OCR), and part-of-speech tagging. Dietterich and Bakiri [6] proposed a technique based on error correcting
output coding (ECOC) as a means of reducing a multiclass classification problem to several
binary classification problems and then solving each binary problem individually to obtain
a multiclass classifier. More recent work of Allwein et al. [1] provided analysis of the
empirical and generalization errors of ECOC-based classifiers. In the above papers, as well
as in most previous work on ECOC, learning the set of binary classifiers and selecting a
particular error correcting code are done independently. An exception is a method based
on continuous relaxation of the code [3] in which the code matrix is post-processed once
based on the learned binary classifiers.
The inherent decoupling of the learning process from the class representation problem employed by ECOC is both a blessing and a curse. On one hand it offers great flexibility and
modularity, on the other hand, the resulting binary learning problems might be unnatural
and therefore potentially difficult. We instead describe and analyze an approach that ties the
learning problem with the class representation problem. The approach we take perceives
the set of binary classifiers as an embedding of the instance space and the code matrix as
an embedding of the label set into a common space. In this common space each instance is
assigned the label from which it?s divergence is smallest. To construct these embeddings,
we introduce the notion of probabilistic output codes. We then describe an algorithm that
constructs the label and instance embeddings such that the resulting classifier achieves a
small empirical error. The result is a paradigm that includes ECOC as a special case.
The algorithm we describe, termed Bunching, alternates between two steps. One step improves the embedding of the instance space into the common space while keeping the
embedding of the label set fixed. This step is analogous to the learning stage of the ECOC
technique, where a set of binary classifiers are learned with respect to a predefined code.
The second step complements the first by updating the label embedding while keeping the
instance embedding fixed. The two alternating steps resemble the steps performed by the
EM algorithm [5] and by Alternating Minimization [4]. The techniques we use in the design and analysis of the Bunching algorithm also build on recent results in classification
learning using Bregman divergences [8, 2].
The paper is organized as follows. In the next section we give a formal description of
the multiclass learning problem and of our classification setting. In Sec. 3 we give an
alternative view of ECOC which naturally leads to the definition of probabilistic output
codes presented in Sec. 4. In Sec. 5 we cast our learning problem as a minimization problem
of a continuous objective function and in Sec. 6 we present the Bunching algorithm. We
describe experimental results that demonstrate the merits of our approach in Sec. 7 and
conclude in Sec. 8.
2 Problem Setting
Let X be a domain of instance encodings from ℝ^m and let Y be a set of r labels that can
be assigned to each instance from X. Given a training set of instance-label pairs S =
{(x_j, y_j)}_{j=1}^{n} such that each x_j is in X and each y_j is in Y, we are faced with the problem of
learning a classification function that predicts the labels of instances from X . This problem
is often referred to as multiclass learning. In other multiclass problem settings it is common
to encode the set Y as a prefix of the integers {1, . . . , r}, however in our setting it will prove
useful to assume that the labels are encoded as the set of r standard unit vectors in ℝ^r. That
is, the i'th label in Y is encoded by the vector whose i'th component is set to 1, and all of
its other components are set to 0.
The classification functions we study in this paper are
composed of a pair of embeddings from the spaces X and
Y into a common space Z, and a measure of divergence
between vectors in Z. That is, given an instance x ∈ X,
we embed it into Z along with all of the label vectors
in Y and predict the label that x is closest to in Z. The
measure of distance between vectors in Z builds upon the
definitions given below:
The logistic transformation σ : ℝ^s → (0, 1)^s is defined componentwise by

    σ_k(θ) = (1 + e^{−θ_k})^{−1} ,    k = 1, . . . , s .
Figure 1: An illustration of
the embedding model used.
The entropy of a multivariate Bernoulli random variable with parameter p ∈ [0, 1]^s is

    H[p] = − Σ_{k=1}^{s} [ p_k log(p_k) + (1 − p_k) log(1 − p_k) ] .
The Kullback-Leibler (KL) divergence between a pair of multivariate Bernoulli random
variables with respective parameters p, q ∈ [0, 1]^s is

    D[p ‖ q] = Σ_{k=1}^{s} [ p_k log( p_k / q_k ) + (1 − p_k) log( (1 − p_k) / (1 − q_k) ) ] .    (1)
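The two quantities above translate directly into code. This is an illustrative sketch using natural logarithms and the convention 0 log 0 = 0; the function names are of our choosing.

```python
from math import log

def bernoulli_entropy(p):
    """H[p] for a vector p of independent Bernoulli parameters."""
    def h(q):
        if q in (0.0, 1.0):
            return 0.0  # 0 log 0 = 0
        return -(q * log(q) + (1 - q) * log(1 - q))
    return sum(h(pk) for pk in p)

def bernoulli_kl(p, q):
    """D[p || q] between two vectors of Bernoulli parameters, as in Eq. (1).

    Assumes q_k is strictly inside (0, 1) whenever the matching term of p
    is nonzero, so every logarithm is finite.
    """
    def term(a, b):
        s = 0.0
        if a > 0.0:
            s += a * log(a / b)
        if a < 1.0:
            s += (1 - a) * log((1 - a) / (1 - b))
        return s
    return sum(term(pk, qk) for pk, qk in zip(p, q))
```

As expected, the divergence vanishes when p = q, and the entropy is maximal at p_k = 1/2.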
Returning to our method of classification, let s be some positive integer and let Z denote
the space [0, 1]^s. Given any two linear mappings T : ℝ^m → ℝ^s and C : ℝ^r → ℝ^s, where
T is given as a matrix in ℝ^{s×m} and C as a matrix in ℝ^{s×r}, instances from X are embedded
into Z by σ(Tx) and labels from Y are embedded into Z by σ(Cy). An illustration of the
two embeddings is given in Fig. 1.
We define the divergence between any two points z1 , z2 ? Z as the sum of the KLdivergence between them and the entropy of z1 , D[z1 k z2 ] + H[z1 ]. We now define
the loss ` of each instance-label pair as the divergence of their respective images,
`(x, y|C, T ) = D[?(Cy) k ?(T x)] + H[?(Cy)] .
(2)
This loss is clearly non-negative and can be zero iff x and y are embedded to the same
point in Z and the entropy of this point is zero. The loss ℓ is our means of classifying new instances:
given a new instance we predict its label to be ŷ where

    ŷ = argmin_{y∈Y} ℓ(x, y | C, T) .    (3)
For brevity, we restrict ourselves to the case where only a single label attains the minimum
loss, and our classifier is thus always well defined. We point out that our analysis is still
valid when this constraint is relaxed. We name the loss over the entire training set S the
empirical loss and use the notation

    L(S | C, T) = Σ_{(x,y)∈S} ℓ(x, y | C, T) .    (4)
Our goal is to learn a good multiclass prediction function by finding a pair (C, T ) that
attains a small empirical loss. As we show in the sequel, the rationale behind this choice
of empirical loss lies in the fact that it bounds the (discrete) empirical classification error
attained by the classification function.
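The loss of Eq. (2) and the prediction rule of Eq. (3) are easy to compute directly. The sketch below uses plain Python lists for vectors and matrices and natural logarithms; it is an illustration of the definitions, not the Bunching implementation, and all function names are ours.

```python
from math import exp, log

def sigmoid(v):
    """Componentwise logistic transformation of a vector."""
    return [1.0 / (1.0 + exp(-x)) for x in v]

def matvec(M, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def loss(x, y, C, T):
    """ell(x, y | C, T) = D[sigma(Cy) || sigma(Tx)] + H[sigma(Cy)], Eq. (2)."""
    p, q = sigmoid(matvec(C, y)), sigmoid(matvec(T, x))
    total = 0.0
    for pk, qk in zip(p, q):
        for a, b in ((pk, qk), (1 - pk, 1 - qk)):
            if a > 0.0:
                total += a * log(a / b)   # KL-divergence term
        for a in (pk, 1 - pk):
            if a > 0.0:
                total -= a * log(a)       # entropy of sigma(Cy)
    return total

def predict(x, C, T, labels):
    """Return the label attaining the minimum loss, Eq. (3)."""
    return min(labels, key=lambda y: loss(x, y, C, T))
```

Note that the two terms combine into the cross-entropy −Σ_k [p_k log q_k + (1 − p_k) log(1 − q_k)], which is why adding H[σ(Cy)] to the KL-divergence yields a convenient objective.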
3 An Alternative View of Error Correcting Output Codes
The technique of ECOC uses error correcting codes to reduce an r-class classification problem to multiple binary problems. Each binary problem is then learned independently via
an external binary learning algorithm and the learned binary classifiers are combined into
one r-class classifier. We begin by giving a brief overview of ECOC for the case where the
binary learning algorithm used is a logistic regressor.
A binary output code C is a matrix in {0, 1}^{s×r} where each of C's columns is an s-bit
code word that corresponds to a label in Y. Recall that the set of labels Y is assumed to
be the standard unit vectors in ℝ^r. Therefore, the code word corresponding to the label y
is simply the product of the matrix C and the vector y, namely Cy. The distance Δ of a code C is
defined as the minimal Hamming distance between any two code words, formally

    Δ(C) = min_{i≠j} Σ_{k=1}^{s} [ C_{k,i} (1 − C_{k,j}) + C_{k,j} (1 − C_{k,i}) ] .
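The code distance can be computed directly from this definition. A small sketch (names ours), where the code is given as a list of s rows, each a list of r bits:

```python
def code_distance(C):
    """Minimum pairwise Hamming distance Delta(C) of a binary output code.

    C is a list of s rows of r bits; column c is the code word of class c.
    """
    s, r = len(C), len(C[0])

    def dist(i, j):
        # Hamming distance between code words (columns) i and j
        return sum(C[k][i] * (1 - C[k][j]) + C[k][j] * (1 - C[k][i])
                   for k in range(s))

    return min(dist(i, j) for i in range(r) for j in range(i + 1, r))
```

For instance, the one-hot code over three classes has distance 2, which is why it gives a weak error-correcting guarantee compared with longer codes.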
For any k ∈ {1, . . . , s}, the k'th row of C, denoted henceforth by C_k, defines a partition
of the set of labels Y into two disjoint subsets: the first subset constitutes labels for which
C_k · y = 0 (i.e., the set of labels in Y which are mapped according to C_k to the binary
label 0) and the labels for which C_k · y = 1. Thus, each C_k induces a binary classification
problem from the original multiclass problem. Formally, we construct for each k a binary-labeled
sample S_k = {(x_j, C_k · y_j)}_{j=1}^{n} and for each S_k we learn a binary classification
function T_k : X → ℝ using a logistic regression algorithm. That is, for each original
instance x_j and induced binary label C_k · y_j we posit a logistic model that estimates the
conditional probability that C_k · y_j equals 1 given x_j,

    Pr[C_k · y_j = 1 | x_j ; T_k] = σ(T_k · x_j) .    (5)

Given a predefined code matrix C the learning task at hand is to find T_k* that maximizes the
log-likelihood of the labelling given in S_k,

    T_k* = argmax_{T_k ∈ ℝ^m} Σ_{j=1}^{n} log( Pr[C_k · y_j | x_j ; T_k] ) .    (6)
Defining 0 log 0 = 0, we can use the logistic estimate in Eq. (5) and the KL-divergence
from Eq. (1) to rewrite Eq. (6) as follows

    T_k* = argmin_{T_k ∈ ℝ^m} Σ_{j=1}^{n} D[C_k · y_j ‖ σ(T_k · x_j)] .
In words, a good set of binary predictors is found by minimizing the sample-averaged
KL-divergence between the binary vectors induced by C and the logistic estimates induced by
T_1, . . . , T_s. Let T* be the matrix in ℝ^{s×m} constructed by the concatenation of the row
vectors {T_k*}_{k=1}^{s}. For any instance x ∈ X, σ(T* x) is a vector of probability estimates
that the label of x is 1 for each of the s induced binary problems. We can summarize the
learning task defined by the code C as the task of finding a matrix T* such that

    T* = argmin_{T ∈ ℝ^{s×m}} Σ_{j=1}^{n} D[Cy_j ‖ σ(T x_j)] .
y? = argmin D[Cy k ?(T x)] .
(7)
y?Y
A classification error occurs if the predicted label ŷ is different from the correct label y. Building on Thm. 1 from Allwein et al. [1] it is straightforward to show that the empirical classification error (ŷ ≠ y) is bounded above by the empirical KL-divergence between the correct code word C y and the estimated probabilities σ(T x) divided by the code distance,

    |{ŷ_j ≠ y_j}_{j=1}^{n}| ≤ ( Σ_{j=1}^{n} D[C y_j ‖ σ(T x_j)] ) / ρ(C) .    (8)

This bound is a special case of the bound given below in Thm. 1 for general probabilistic output codes. We therefore defer the discussion on this bound to the following section.
4 Probabilistic Output Codes
We now describe a relaxation of binary output codes by defining the notion of probabilistic
output codes. We give a bound on the empirical error attained by a classifier that uses
probabilistic output codes which generalizes the bound in Eq. (8). The rationale for our
construction is that the discrete nature of ECOC can potentially induce difficult binary
classification problems. In contrast, probabilistic codes induce real-valued problems that
may be easier to learn.
Analogous to discrete codes, a probabilistic output code C is a matrix in ℝ^{s×r} used in conjunction with the logistic transformation to produce a set of r probability vectors that correspond to the r labels in Y. Namely, C maps each label y ∈ Y to the probabilistic code word σ(C y) ∈ [0, 1]^s. As before, we assume that Y is the set of r standard unit vectors in {0, 1}^r and therefore each probabilistic code word is the image of one of C's columns under the logistic transformation. The natural extension of code distance to probabilistic codes is achieved by replacing Hamming distance with expected Hamming distance. If for each y ∈ Y and k ∈ {1, . . . , s} we view the k'th component of the code word that corresponds to y as a Bernoulli random variable with parameter p = σ_k(C y) then the expected Hamming distance between the code words for classes i and j is,

    Σ_{k=1}^{s} σ_k(C y_i)(1 − σ_k(C y_j)) + σ_k(C y_j)(1 − σ_k(C y_i)) .

Analogous to discrete codes we define the distance ρ of a code C as the minimum expected Hamming distance between all pairs of code words in C, that is,

    ρ(C) = min_{i≠j} Σ_{k=1}^{s} σ_k(C y_i)(1 − σ_k(C y_j)) + σ_k(C y_j)(1 − σ_k(C y_i)) .
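The probabilistic distance is computed the same way as the discrete one, and entries of large magnitude behave like a deterministic code. A small sketch, with our own example matrix:

```python
import itertools
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def prob_code_distance(C):
    """Minimum expected Hamming distance between the probabilistic code
    words sigma(C[:, i]) of an s x r real matrix C (list of rows)."""
    s, r = len(C), len(C[0])
    best = None
    for i, j in itertools.combinations(range(r), 2):
        d = 0.0
        for k in range(s):
            pi, pj = sigmoid(C[k][i]), sigmoid(C[k][j])
            d += pi * (1 - pj) + pj * (1 - pi)
        best = d if best is None else min(best, d)
    return best

# With entries of large magnitude (standing in for +/- infinity) the
# probabilistic distance approaches the discrete Hamming distance.
B = 50.0
C = [[ B, -B, -B],
     [-B,  B, -B],
     [-B, -B,  B]]
print(abs(prob_code_distance(C) - 2.0) < 1e-6)  # -> True
```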
Put another way, we have relaxed the definition of code words from deterministic vectors to multivariate Bernoulli random variables. The matrix C now defines the distributions of these random variables. When C's entries are all ±∞ then the logistic transformation of C's entries defines a deterministic code and the two definitions of ρ coincide.

Given a probabilistic code matrix C ∈ ℝ^{s×r} and a transformation T ∈ ℝ^{s×m} we associate a loss ℓ(x, y|C, T) with each instance-label pair (x, y) using Eq. (2) and we measure the empirical loss over the entire training set S as defined in Eq. (4). We classify new instances by finding the label ŷ that attains the smallest loss as defined in Eq. (3). This construction is equivalent to the classification method discussed in Sec. 2 that employs embeddings except that instead of viewing C and T as abstract embeddings, C is interpreted as a probabilistic output code and the rows of T are viewed as binary classifiers.

Note that when all of the entries of C are ±∞ then the classification rule from Eq. (3) is reduced to the classification rule for ECOC from Eq. (7) since the entropy of σ(C y) is zero for all y. We now give a theorem that builds on our construction of probabilistic output codes and relates the classification rule from Eq. (3) with the empirical loss defined by Eq. (4). As noted before, the theorem generalizes the bound given in Eq. (8).
Theorem 1 Let Y be a set of r vectors in ℝ^r. Let C ∈ ℝ^{s×r} be a probabilistic output code with distance ρ(C) and let T ∈ ℝ^{s×m} be a transformation matrix. Given a sample S = {(x_j, y_j)}_{j=1}^{n} of instance-label pairs where x_j ∈ X and y_j ∈ Y, denote by L the loss on S with respect to C and T as given by Eq. (4) and denote by ŷ_j the predicted label of x_j according to the classification rule given in Eq. (3). Then,

    |{ŷ_j ≠ y_j}_{j=1}^{n}| ≤ L(S|C, T) / ρ(C) .

The proof of the theorem is omitted due to the lack of space.
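To see the mistake bound in action, here is a small self-contained numerical check; all numbers (the code, the transformation, the sample) are our own illustrative choices, not from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def kl_bits(bits, probs):
    # D[b || p] for a 0/1 code word b reduces to the logistic log-loss.
    eps = 1e-12
    return sum(-(b * math.log(p + eps) + (1 - b) * math.log(1 - p + eps))
               for b, p in zip(bits, probs))

# One-vs-rest code for 3 labels (columns are code words); its distance is 2.
C = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rho = 2

# An arbitrary transformation T (s x m) and a tiny labeled sample.
T = [[3.0, -1.0], [-1.0, 3.0], [-2.0, -2.0]]
S = [([1.0, 0.0], 0), ([0.0, 1.0], 1), ([0.5, 0.5], 2)]

total_loss, mistakes = 0.0, 0
for x, y in S:
    probs = [sigmoid(sum(t * xi for t, xi in zip(row, x))) for row in T]
    total_loss += kl_bits([row[y] for row in C], probs)
    y_hat = min(range(3), key=lambda c: kl_bits([row[c] for row in C], probs))
    mistakes += (y_hat != y)
assert mistakes <= total_loss / rho   # the bound of Eq. (8) / Thm. 1
```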
5 The Learning Problem

We now discuss how our formalism of probabilistic output codes via embeddings and the accompanying Thm. 1 lead to a learning paradigm in which both T and C are found concurrently. Thm. 1 implies that the empirical error over S can be reduced by minimizing the empirical loss over S while maintaining a large distance ρ(C). A naive modification of C so as to minimize the loss may result in a probabilistic code whose distance is undesirably small. Therefore, we assume that we are initially provided with a fixed reference matrix C_0 ∈ ℝ^{s×r} that is known to have a large code distance. We now require that the learned matrix C remain relatively close to C_0 (in a sense defined shortly) throughout the learning procedure. Rather than requiring that C attain a fixed distance to C_0 we add a penalty proportional to the distance between C and C_0 to the loss defined in Eq. (4). This penalty on C can be viewed as a form of regularization (see for instance [10]). Similar paradigms have been used extensively in the pioneering work of Warmuth and his colleagues on online learning (see for instance [7] and the references therein) and more recently for incorporating prior knowledge into boosting [11]. The regularization factor we employ is the KL-divergence between the images of C and C_0 under the logistic transformation,

    R(S|C, C_0) = Σ_{j=1}^{n} D[σ(C y_j) ‖ σ(C_0 y_j)] .

The influence of this penalty term is controlled by a parameter λ ∈ [0, ∞]. The resulting objective function that we attempt to minimize is

    O(S|C, T) = L(S|C, T) + λ R(S|C, C_0)    (9)

where λ and C_0 are fixed parameters. The goal of learning boils down to finding a pair (C^*, T^*) that minimizes the objective function defined in Eq. (9). We would like to note that this objective function is not convex due to the concave entropic term in the definition of ℓ. Therefore, the learning procedure described in the sequel converges to a local minimum or a saddle point of O.
6 The Learning Algorithm

The goal of the learning algorithm is to find C and T that minimize the objective function defined above. The algorithm alternates between two complementing steps for decreasing the objective function. The first step, called IMPROVE-T, improves T leaving C unchanged, and the second step, called IMPROVE-C, finds the optimal matrix C for any given matrix T. The algorithm is provided with initial matrices C_0 and T_0, where C_0 is assumed to have a large code distance ρ. The IMPROVE-T step makes the assumption that all of the instances in S satisfy the constraints Σ_{i=1}^{m} x_i ≤ 1 and for all i ∈ {1, 2, ..., m}, 0 ≤ x_i. Any finite training set can be easily shifted and scaled to conform with these constraints and therefore they do not impose any real limitation. In addition, the IMPROVE-C step is presented for the case where Y is the set of standard unit vectors in ℝ^r.

    BUNCH(S, λ ∈ ℝ_+, C_0 ∈ ℝ^{s×r}, T_0 ∈ ℝ^{s×m})
      For t = 1, 2, ...
        T_t = IMPROVE-T(S, C_{t−1}, T_{t−1})
        C_t = IMPROVE-C(S, λ, T_t, C_0)

    IMPROVE-T(S, C, T)
      For k = 1, 2, ..., s and i = 1, 2, ..., m
        W⁺_{k,i} = Σ_{(x,y)∈S} σ(C_k · y) σ(−T_k · x) x_i
        W⁻_{k,i} = Σ_{(x,y)∈S} σ(−C_k · y) σ(T_k · x) x_i
        Δ_{k,i} = (1/2) ln( W⁺_{k,i} / W⁻_{k,i} )
      Return T + Δ

    IMPROVE-C(S, λ, T, C_0)
      For each y ∈ Y
        S_y = {(x, ȳ) ∈ S : ȳ = y}
        C^{(y)} = C_0^{(y)} + (1/(λ|S_y|)) Σ_{x∈S_y} T x
      Return C = (C^{(1)}, . . . , C^{(r)})

Figure 2: The Bunching Algorithm.
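The two steps translate directly to code. The sketch below is our own rendering: the per-example loss is taken to be D[σ(C_k · y) ‖ σ(T_k · x)] + H[σ(C_k · y)], i.e. the soft cross-entropy, and all data are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def loss(S, C, T):
    # Soft cross-entropy D[sigma(C_k.y) || sigma(T_k.x)] + H[sigma(C_k.y)],
    # summed over rows k and examples (x, y).
    tot, eps = 0.0, 1e-12
    for x, y in S:
        for Ck, Tk in zip(C, T):
            p, q = sigmoid(dot(Ck, y)), sigmoid(dot(Tk, x))
            tot += -(p * math.log(q + eps) + (1 - p) * math.log(1 - q + eps))
    return tot

def improve_T(S, C, T):
    s, m = len(T), len(T[0])
    newT = [row[:] for row in T]
    for k in range(s):
        for i in range(m):
            Wp = sum(sigmoid(dot(C[k], y)) * sigmoid(-dot(T[k], x)) * x[i]
                     for x, y in S)
            Wm = sum(sigmoid(-dot(C[k], y)) * sigmoid(dot(T[k], x)) * x[i]
                     for x, y in S)
            newT[k][i] += 0.5 * math.log(Wp / Wm)
    return newT

def improve_C(S, lam, T, C0):
    # Column-wise closed form: C y = C0 y + (1 / (lam |S_y|)) sum_{S_y} T x.
    C = [row[:] for row in C0]
    for c in range(len(C0[0])):
        Sy = [x for x, y in S if y[c] == 1]
        for k in range(len(C0)):
            if Sy:
                C[k][c] += sum(dot(T[k], x) for x in Sy) / (lam * len(Sy))
    return C

# Toy data: features satisfy x_i >= 0 and sum_i x_i <= 1, as IMPROVE-T assumes.
S = [([0.8, 0.1], [1, 0]), ([0.7, 0.2], [1, 0]),
     ([0.1, 0.8], [0, 1]), ([0.2, 0.6], [0, 1])]
C0 = [[2.0, -2.0], [-2.0, 2.0]]        # a roughly one-vs-rest starting code
C, T = [row[:] for row in C0], [[0.0, 0.0], [0.0, 0.0]]
for _ in range(5):
    before = loss(S, C, T)
    T = improve_T(S, C, T)
    assert loss(S, C, T) <= before + 1e-9   # non-increase guaranteed by Thm. 2
    C = improve_C(S, 1.0, T, C0)
```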
Since the regularization factor R is independent of T we can restrict our description and analysis of the IMPROVE-T step to considering only the loss term L of the objective function O. The IMPROVE-T step receives the current matrices C and T as input and calculates a matrix Δ that is used for updating the current T additively. Denoting the iteration index by t, the update is of the form T_{t+1} = T_t + Δ. The next theorem states that updating T by the IMPROVE-T step decreases the loss or otherwise T remains unchanged and is globally optimal with respect to C. Again, the proof is omitted due to space constraints.
Theorem 2 Given matrices C ∈ ℝ^{s×r} and T ∈ ℝ^{s×m}, let W⁺_{k,i}, W⁻_{k,i} and Δ be as defined in the IMPROVE-T step of Fig. 2. Then, the decrease in the loss L is bounded below by,

    Σ_{k=1}^{s} Σ_{i=1}^{m} ( √(W⁺_{k,i}) − √(W⁻_{k,i}) )² ≤ L(S|C, T) − L(S|C, T + Δ) .
Based on the theorem above we can derive the following corollary.

Corollary 1 If Δ is generated by a call to IMPROVE-T and L(S|C, T + Δ) = L(S|C, T) then Δ is the zero matrix and T is globally optimal with respect to C.
In the IMPROVE-C step we fix the current matrix T and find a code matrix C that globally minimizes the objective function. According to the discussion above, the matrix C defines an embedding of the label vectors from Y into Z and the images of this embedding constitute the classification rule. For each y ∈ Y denote its image under C and the logistic transformation by p_y = σ(C y) and let S_y be the subset of S that is labeled y. Note that the objective function can be decomposed into r separate summands according to y,

    O(S|C, T) = Σ_{y∈Y} O(S_y|C, T) ,

where

    O(S_y|C, T) = Σ_{(x,y)∈S_y} D[p_y ‖ σ(T x)] + H[p_y] + λ D[p_y ‖ σ(C_0 y)] .

We can therefore find for each y ∈ Y the vector p_y that minimizes O(S_y) independently and then reconstruct the code matrix C that achieves these values. It is straightforward to show that O(S_y) is convex in p_y, and our task is reduced to finding its stationary point.
We examine the derivative of O(S_y) with respect to p_{y,k} and get,

    ∂O(S_y)/∂p_{y,k} = − Σ_{(x,y)∈S_y} log[ σ(T_k · x) / (1 − σ(T_k · x)) ] + λ|S_y| ( log[ p_{y,k} / (1 − p_{y,k}) ] − C_{0,k} · y ) .

We now plug p_y = σ(C y) into the equation above and evaluate it at zero to get that,

    C y = C_0 y + (1/(λ|S_y|)) Σ_{(x,y)∈S_y} T x .

Since Y was assumed to be the set of standard unit vectors, C y is a column of C and the above is simply a column-wise assignment of C.
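The closed-form solution can be sanity-checked numerically for a single code bit (s = 1). The following sketch uses our own toy margins and verifies that the closed form is a stationary minimum of O(S_y):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def D(p, q):
    # Binary KL-divergence between Bernoulli parameters p and q.
    eps = 1e-12
    return (p * math.log((p + eps) / (q + eps))
            + (1 - p) * math.log((1 - p + eps) / (1 - q + eps)))

def H(p):
    # Binary entropy.
    eps = 1e-12
    return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))

def O(c, zs, lam, c0):
    """O(S_y) for a single code bit: zs are the margins T_k . x over S_y."""
    p = sigmoid(c)
    return (sum(D(p, sigmoid(z)) + H(p) for z in zs)
            + lam * len(zs) * D(p, sigmoid(c0)))

zs = [0.4, -0.2, 1.1]          # T_k . x for the examples labeled y
lam, c0 = 1.0, 0.5
c_star = c0 + sum(zs) / (lam * len(zs))   # the closed-form solution
for d in (-0.2, -0.05, 0.05, 0.2):
    assert O(c_star, zs, lam, c0) < O(c_star + d, zs, lam, c0)
```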
We have shown that each call to IMPROVE-T followed by IMPROVE-C decreases the objective function until convergence to a pair (C^*, T^*) such that C^* is optimal given T^* and T^* is optimal given C^*. Therefore O(S|C^*, T^*) is either a minimum or a saddle point.
7 Experiments

To assess the merits of Bunching we compared it to a standard ECOC-based algorithm on numerous multiclass problems. For the ECOC-based algorithm we used a logistic regressor as the binary learning algorithm, trained using the parallel update described in [2]. The two approaches share the same form of classifiers (logistic regressors) and differ solely in the coding matrix they employ: while ECOC uses a fixed code matrix, Bunching adapts its code matrix during the learning process.

[Figure 3: The relative performance (%) of Bunching compared to ECOC, with random and one-vs-rest codes, on the datasets glass, isolet, letter, mnist, satimage, soybean and vowel.]

We selected the following multiclass datasets: glass, isolet, letter, satimage, soybean and vowel from the UCI repository (www.ics.uci.edu/~mlearn/MLRepository.html) and the mnist dataset available from LeCun's homepage (yann.lecun.com/exdb/mnist/index.html). The only dataset not supplied with a test set is glass for which we use 5-fold cross validation.
For each dataset, we compare the test error rate attained by the ECOC classifier and the Bunching classifier. We conducted the experiments for two families of code matrices. The first family corresponds to the one-vs-rest approach in which each class is trained against the rest of the classes and the corresponding code is a matrix whose logistic transformation is simply the identity matrix. The second family is the set of random code matrices with r log₂ r rows where r is the number of different labels. These matrices are used as C_0 for Bunching and as the fixed code for ECOC. Throughout all of the experiments with Bunching, we set the regularization parameter λ to 1.

A summary of the results is depicted in Fig. 3. The height of each bar is proportional to (e_E − e_B)/e_E where e_E is the test error attained by the ECOC classifier and e_B is the test error attained by the Bunching classifier. As shown in the figure, for almost all of the experiments conducted Bunching outperforms standard ECOC. The improvement is more significant when using random code matrices. This can be explained by the fact that random code matrices tend to induce unnatural and rather difficult binary partitions of the set of labels. Since Bunching modifies the code matrix C along its run, it can relax difficult binary problems. This suggests that Bunching can improve the classification accuracy in problems where, for instance, the one-vs-rest approach fails to give good results or when there is a need to add error correction properties to the code matrix.
8 A Brief Discussion

In this paper we described a framework for solving multiclass problems via pairs of embeddings. The proposed framework can be viewed as a generalization of ECOC with logistic regressors. It is possible to extend our framework in a few ways. First, the probabilistic embeddings can be replaced with non-negative embeddings by replacing the logistic transformation with the exponential function. In this case, the KL divergence is replaced with its unnormalized version [2, 9]. The resulting generalized Bunching algorithm is somewhat more involved and less intuitive to understand. Second, while our work focuses on linear embeddings, our algorithm and analysis can be adapted to more complex mappings by employing kernel operators. This can be achieved by replacing the k'th scalar product T_k · x with an abstract inner product K(T_k, x). Last, we would like to note that it is possible to devise an alternative objective function to the one given in Eq. (9) which is jointly convex in (T, σ(C)) and for which we can state a bound of a form similar to the bound in Thm. 1.
References
[1] E.L. Allwein, R.E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[2] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 47(2/3):253–285, 2002.
[3] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In Proc. of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[4] I. Csiszár and G. Tusnády. Information geometry and alternating minimization procedures. Statistics and Decisions, Supplement Issue, 1:205–237, 1984.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Ser. B, 39:1–38, 1977.
[6] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, January 1995.
[7] Jyrki Kivinen and Manfred K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64, January 1997.
[8] John D. Lafferty. Additive models, boosting and inference for generalized divergences. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999.
[9] S. Della Pietra, V. Della Pietra, and J. Lafferty. Duality and auxiliary functions for Bregman distances. Technical Report CS-01-10, CMU, 2002.
[10] T. Poggio and F. Girosi. Networks for approximation and learning. Proc. of IEEE, 78(9), 1990.
[11] R.E. Schapire, M. Rochery, M. Rahim, and N. Gupta. Incorporating prior knowledge into boosting. In Machine Learning: Proceedings of the Nineteenth International Conference, 2002.
Lincoln and Skrzypek
Synergy Of Clustering Multiple Back Propagation Networks
William P. Lincoln* and Josef Skrzypek†
UCLA Machine Perception Laboratory
Computer Science Department
Los Angeles, CA 90024
ABSTRACT
The properties of a cluster of multiple back-propagation (BP) networks are examined and compared to the performance of a single BP network. The underlying idea is that a synergistic effect within the cluster improves the performance and fault tolerance. Five networks were initially trained to perform the same input-output mapping. Following training, a cluster was created by computing an average of the outputs generated by the individual networks. The output of the cluster can be used as the desired output during training by feeding it back to the individual networks. In comparison to a single BP network, a cluster of multiple BP's shows improved generalization and significant fault tolerance. It appears that the cluster advantage follows from the simple maxim "you can fool some of the single BP's in a cluster all of the time but you cannot fool all of them all of the time" {Lincoln}
1 INTRODUCTION
Shortcomings of back-propagation (BP) in supervised learning have been well documented in the past {Soulie, 1987; Bernasconi, 1987}. Often, a network of a finite size
does not learn a particular mapping completely or it generalizes poorly. Increasing the
size and number of hidden layers most often does not lead to any improvements {Soulie,
* also with Hughes Aircraft Company
† to whom the correspondence should be addressed
1987}. The central question that this paper addresses is whether a "synergy" of clustering
multiple back-prop nets improves the properties of the clustered system over a comparably complex non-clustered system. We use the formulation of back-prop given in
{Rumelhart, 1986}. A cluster is shown in figure 1. We start with five, three-layered,
back propagation networks that "learn" to perform the same input-output mapping. Initially the nets are given different starting weights. Thus after learning, the individual nets
are expected to have different internal representations. An input to the cluster is routed to
each of the nets. Each net computes its output and the judge uses these outputs, y_k, to form the cluster output, ŷ. There are many ways of forming ŷ but for the sake of simplicity, in this paper we consider the following two rules:

    simple average:      ŷ = (1/N) Σ_{k=1}^{N} y_k    (1.1)

    convex combination:  ŷ = Σ_{k=1}^{N} w_k y_k    (1.2)
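The two judge rules translate directly to code; a minimal sketch with made-up net outputs:

```python
def simple_average(outputs):
    """Eq. (1.1): judge output is the mean of the N nets' output vectors."""
    N = len(outputs)
    return [sum(o[i] for o in outputs) / N for i in range(len(outputs[0]))]

def convex_combination(outputs, weights):
    """Eq. (1.2): reliability-weighted mixture of the nets' outputs."""
    return [sum(w * o[i] for w, o in zip(weights, outputs))
            for i in range(len(outputs[0]))]

nets = [[0.9, 0.1], [0.7, 0.3], [0.8, 0.2]]
print(simple_average(nets))                        # close to [0.8, 0.2]
print(convex_combination(nets, [0.5, 0.25, 0.25]))
```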
Cluster function 1.2 adds an extra level of fault tolerance by giving the judge the ability to bias the outputs based on the past reliability of the nets. The w_k are adjusted to take into account the recent reliability of the net. One weight adjustment rule is

    w_k = w_k · G · (ē / e_k)

where ē = (1/N) Σ_{k=1}^{N} e_k, G is the gain of adjustment and e_k = ||ŷ − y_k|| is the network deviation from the cluster output. Also, in the absence of an initial training period with a perfect teacher the cluster can collectively self-organize. The cluster in this case is performing an "averaging" of the mappings that the individual networks perform based on their initial distribution of weights. Simulations have been done to verify that self organization does in fact occur. In all the simulations, convergence occurred before 1000 passes.
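The adjustment rule is only partially legible in the source; the sketch below implements one plausible reading, w_k ← w_k · G · (ē/e_k), with an added renormalization step (our assumption, so the combination stays convex). Treat it as illustrative, not as the authors' exact rule:

```python
import math

def update_weights(w, y_hat, outputs, G=1.0):
    """One step of a reliability update w_k <- w_k * G * (e_bar / e_k),
    followed by renormalization (our addition) so the weights sum to 1.
    Nets whose output deviates most from the judge lose weight."""
    e = [math.dist(y_hat, y_k) for y_k in outputs]   # e_k = ||y_hat - y_k||
    e_bar = sum(e) / len(e)
    w = [wk * G * (e_bar / (ek + 1e-12)) for wk, ek in zip(w, e)]
    z = sum(w)
    return [wk / z for wk in w]

y_hat = [0.8, 0.2]
outputs = [[0.9, 0.1], [0.7, 0.3], [0.2, 0.8]]   # third net deviates most
w = update_weights([1 / 3, 1 / 3, 1 / 3], y_hat, outputs)
assert w[2] < w[0] and w[2] < w[1]
```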
Besides improved learning and generalization our clustered network displays other desirable characteristics such as fault tolerance and self-organization. Feeding back the
cluster's output to the N individual networks as the desired output in training endows the
cluster with fault tolerance in the absence of a teacher. Feeding back also makes the cluster continuously adaptable to changing conditions. This aspect of clustering is similar to the tracking capabilities of adaptive equalizers. After the initial training period it is usually assumed that no teacher is present, or that a teacher is present only at relatively infrequent intervals. However, if the failure rate is large enough, the performance of a single,
non-clustered net will degrade during the periods when no teacher is present.
2 CLUSTERING WITH FEEDBACK TO INCREASE FAULT
TOLERANCE IN THE ABSENCE OF A PERFECT TEACHER.
When a teacher is not present, ŷ can be used as the desired output and used to continuously train the individual nets. In general, the correct error that should be backpropagated, d_k = y − y_k, will differ from the actual error, d̂_k = ŷ − y_k. If d_k and d̂_k differ significantly, the error of the individual nets (and thus the cluster as a whole) can increase
over time. This phenomenon is called drift. Because of drift, retraining using ŷ as the
desired output may seem disadvantageous when no faults exist within the nets. The possibility of drift is decreased by training the nets to a sufficiently small error. In fact under
these circumstances, with sufficiently small error, it is possible to see the error decrease
even further.
It is when we assume that faults exist that retraining becomes more advantageous. If the
failure rate of a network node is sufficiently low, the injured net can be retrained using
the judge's output. By having many nets in the cluster the effect of the injured net's output on the cluster output can be minimized. Retraining using ŷ adds fault tolerance but
causes drift if the nets did not complete learning when the teacher was removed.
Figure 1: A cluster of N back-prop nets.
3 EXPERIMENTAL METHODS.
To test the ideas outlined in this paper an abstract learning problem was chosen. This
abstract problem was used because many neural network problems require similar
separation and classification of a group of topologically equivalent sets in the process of
learning {Lippman, 1987}. For instance, images categorized according to their characteristics. The input is a 3-dimensional point, P = (x,y,z). The problem is to categorize
the point P into one of eight sets. The 8 sets are the 8 spheres of radius 1 centered at x = ±1, y = ±1, z = ±1. The input layer consists of three continuous nodes. The size of
the output layer was 8, with each node trained to be an indicator function for its associated sphere. One hidden layer was used with full connectivity between layers. Five nets
with the above specifications were used to form a cluster. Generalization was tested using
points outside the spheres.
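The 8-sphere benchmark is easy to regenerate. A sketch follows; the rejection-sampling scheme and the seed handling are our own choices, since the paper does not specify how points were drawn:

```python
import random

def sample_point(center, radius=1.0, rng=random):
    """Rejection-sample a point uniformly inside a sphere about `center`."""
    while True:
        p = [rng.uniform(-radius, radius) for _ in range(3)]
        if sum(v * v for v in p) <= radius * radius:
            return [c + v for c, v in zip(center, p)]

def make_dataset(n_per_class, seed=0):
    """3-d inputs in the 8 unit spheres centered at (+/-1, +/-1, +/-1),
    paired with 8-dimensional indicator (one-hot) target vectors."""
    rng = random.Random(seed)
    centers = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    data = []
    for idx, c in enumerate(centers):
        target = [1.0 if i == idx else 0.0 for i in range(8)]
        for _ in range(n_per_class):
            data.append((sample_point(c, rng=rng), target))
    return data

data = make_dataset(5)
assert len(data) == 40
x, t = data[0]
assert sum(t) == 1.0 and len(x) == 3
```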
4 CLUSTER ADVANTAGE.
The performance of a single net is compared to the performance of a five-net cluster when the nets are not retrained using ŷ. The networks in the cluster have the same structure and
size as the single network. Average errors of the two systems are compared. A useful
measure of the cluster advantage is obtained by taking the ratio of an individual net's
error to the cluster error. This ratio will be smaller or larger than 1 depending on the relative magnitudes of the cluster and individual net's errors. Figures 2a and 2b show the
cluster advantage plotted versus individual net error for 256 and 1024 training passes
respectively. It is seen that when the individual nets either learn the task completely or
don't learn at all there is not a cluster advantage. However, when the task is learned even
marginally, there is a cluster advantage.
[Figure 2: Cluster advantage versus error. Data points from more than one learning task are shown. A) After 256 training passes. B) After 1024 training passes.]
The cluster's increased learning is based on the synergy between the individual networks
and not on larger size of a cluster compared to an individual network. An individual net's
error is dependent on the size of the hidden layer and the length of the training period.
However, in general the error is not a decreasing function of the size of the hidden layer
throughout its domain, i.e. increasing the size of the hidden layer does not always result
in a decrease in the error. This may be due to the more direct credit assignment with the
smaller number of nodes. Figures 3a and 3b show an individual net's error versus hidden layer size for different training passes. The point of this exercise is to counter the anticipated argument: "a cluster should have a lower error based on the fact that it has more nodes".
[Figure 3: Error of a single BP network is a nonlinear function of the number of hidden nodes. A) After 256 training passes. B) After 1024 training passes.]
5 FAULT TOLERANCE.

By using the judge's output as the desired output and retraining the individual networks, fault
tolerance is added. The fault tolerant capabilities of a cluster of 5 were studied. The size
of the hidden layer is 15. After the nets were trained, a failure rate of 1 link in the cluster
per 350 inputs was introduced. This failure rate in terms of a single unclustered net is 1
link per 1750 (=5.350) inputs. The link that is chosen to fail in the cluster was randomly
selected from the links of all the networks in the cluster. When a link failed its weight
was set to O. The links from the nets to the judge are considered immune from faults in
this comparison. A pass consisted of 1 presentation of a random point from each of the 8
spheres. Figure 4 shows the fault tolerant capabilities of a cluster. By knowing the
behavior of the single net in the presence of faults, the fault tolerant behavior of any conventional configuration (i.e. comparison and spares) of single nets can be determined, so
that this form of fault tolerance can be compared with conventional fault tolerant
schemes.
[Figure 4: Fault tolerance of a cluster using feedback from the judge as a desired training output. Error as a function of time (# of training passes) without link failures (solid circles) and with link failures (open circles). Link failure rate = 1 cluster link per 350 inputs, or 1 single-net link per 1750 (= 5 nets × 350) inputs.]
6 CONCLUSIONS.
Clustering multiple back-prop nets has been shown to increase the performance and fault tolerance over a single network. Clustering has exhibited very interesting self-organization. Preliminary investigations are restricted to a few simple examples. Nevertheless, there are some interesting results that appear to be rather general and which can thus be expected to remain valid for much larger and complex systems. The clustering ideas presented in this paper are not specific to back-prop but can apply to any nets trained with a supervised learning rule. The results of this paper can be viewed in an enlightening way. Given a set of weights, the cluster performs a mapping. There is empirical evidence of local minima in this "mapping space". The initial point in the mapping space is taken to be when the cluster output begins to be fed back. Each time a new cluster output is fed back, the point in the mapping space moves. The step size is related to the step size of the back-prop algorithm. Each task is conjectured to have a local minimum in the mapping space. If the point moves away from the desired local minimum, drift occurs. A fault moves the point away from the local minimum. Feedback moves the point closer to the local minimum. Self-organization can be viewed as finding the local minimum of the valley in which the point is initially placed, based on the initial distribution of weights.
[Figure 5: error (0.004-0.008) plotted against the number of training passes (0-40000).]
Figure 5: Cluster can continue to learn in the absence of a teacher if the feedback from the judge is used as a desired training output. No link failures.
6.1 INTERPRETATION OF RESULTS.
The results of the previous section can be interpreted from the viewpoint of the model described in this section. This model attempts to describe how the state of the nets changes due to possibly incorrect error terms being back-propagated, and how in turn the state of the net determines its performance. The state of a net could be defined by its weight string. Given its weight string, there is a duality between the mapping that the net is performing and its error. When a net is being trained towards a particular mapping, its current weight string determines the error of the net. The back-propagation algorithm is used to change the weight string so that the error decreases. The duality is that at any time a net is performing some mapping (it may not be the desired mapping) it is performing that mapping with no error. This duality has significance in connection with self-organization, which can be viewed as taking an "average" of the N mappings.
While the state of a net could be defined by its weight string, a state transition due to a backward error propagation is not obvious. A more useful definition of the state of a net is its error. (The error can be estimated by taking a representative sample of input vectors, propagating them through the net, and computing the average error of the outputs.) Having defined the state, a description of the state transition rules can now be given.

output of net (i) = f ( state of net (i), input )
state of net (i) = g ( state of net (i), output of net (1), ..., output of net (N) )
delta error (i) = error (i) at t+1 - error (i) at t
cluster mistake = | correct output - cluster output |

This model says that for positive constants A and B:

delta error = A * ( cluster mistake - B )

This equation has the property that the error increase or decrease is proportional to the size of the cluster mistake. The equilibrium is when the mistake equals B. An assumption is made that an individual net's mistake is a Gaussian random variable Zj with mean and variance equal to its error. For the purposes of this analysis, the judge uses a convex combination of the net outputs to form the cluster output. Using the assumptions of this model, it can be shown that performance improves under a strategy of increasing the relative weight in the convex combination of a net that has a relatively small error, and conversely decreasing the relative weight for poorly performing nets. (1,2) is an example weight adjustment rule. This rule has the effect of increasing the weight of a network that produced a network deviation that was smaller than average. The opposite effect is seen for a network that produced a network deviation that was larger than average.
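The judge's convex combination and the deviation-based reweighting described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual rule (eqs. 1,2): the function names, the exponential form, and the step size eta are all assumptions; only the qualitative behavior, that nets with below-average deviation gain relative weight, follows the text.

```python
import numpy as np

def judge_output(net_outputs, weights):
    """Cluster output: a convex combination of the individual net outputs."""
    return float(weights @ net_outputs)

def adjust_weights(net_outputs, weights, eta=0.1):
    """Hypothetical weight-adjustment rule in the spirit of the text: a net whose
    deviation from the cluster output is smaller than average gains relative
    weight, and a net with larger-than-average deviation loses it."""
    y = judge_output(net_outputs, weights)
    deviation = np.abs(net_outputs - y)
    w = weights * np.exp(-eta * (deviation - deviation.mean()))
    return w / w.sum()  # renormalize so the combination stays convex

outputs = np.array([1.0, 1.1, 0.9, 5.0])   # the fourth net is an outlier
weights = np.full(4, 0.25)
new_weights = adjust_weights(outputs, weights)
```

After one adjustment the outlier's weight falls below its initial 0.25 share while the weights still sum to one, matching the qualitative effect the text describes.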
6.1.1
References.
D.E. Rumelhart, J.L. McClelland, and the PDP Research Group. Parallel Distributed Processing (PDP): Exploration in the Microstructure of Cognition (Vol. 1). MIT Press, Cambridge, Massachusetts, 1986.
R.P. Lippman. An Introduction to Computing with Neural Nets. IEEE ASSP Magazine, Vol. 4, pp. 4-22, April 1987.
F.F. Soulie, P. Gallinari, Y. Le Cun, and S. Thiria. Evaluation of network architectures on test learning tasks. IEEE First International Conference on Neural Networks, San Diego, pp. 11653-11660, June 1987.
J. Bernasconi. Analysis and Comparison of Different Learning Algorithms for Pattern Association Problems. Neural Information Processing Systems, Denver, CO, pp. 72-81, 1987.
Abraham Lincoln. Personal communication.
PART VIII:
THEORETICAL ANALYSES
1,407 | 2,280 | Multiplicative Updates for Nonnegative Quadratic
Programming in Support Vector Machines
Fei Sha¹, Lawrence K. Saul¹, and Daniel D. Lee²
¹Department of Computer and Information Science
²Department of Electrical and System Engineering
University of Pennsylvania
200 South 33rd Street, Philadelphia, PA 19104
{feisha,lsaul}@cis.upenn.edu, [email protected]
Abstract
We derive multiplicative updates for solving the nonnegative quadratic
programming problem in support vector machines (SVMs). The updates
have a simple closed form, and we prove that they converge monotonically to the solution of the maximum margin hyperplane. The updates
optimize the traditionally proposed objective function for SVMs. They
do not involve any heuristics such as choosing a learning rate or deciding
which variables to update at each iteration. They can be used to adjust all
the quadratic programming variables in parallel with a guarantee of improvement at each iteration. We analyze the asymptotic convergence of
the updates and show that the coefficients of non-support vectors decay
geometrically to zero at a rate that depends on their margins. In practice,
the updates converge very rapidly to good classifiers.
1 Introduction
Support vector machines (SVMs) currently provide state-of-the-art solutions to many problems in machine learning and statistical pattern recognition[18]. Their superior performance is owed to the particular way they manage the tradeoff between bias (underfitting)
and variance (overfitting). In SVMs, kernel methods are used to map inputs into a higher,
potentially infinite, dimensional feature space; the decision boundary between classes is
then identified as the maximum margin hyperplane in the feature space. While SVMs provide the flexibility to implement highly nonlinear classifiers, the maximum margin criterion
helps to control the capacity for overfitting. In practice, SVMs generalize very well, even better than their theory suggests.
Computing the maximum margin hyperplane in SVMs gives rise to a problem in nonnegative quadratic programming. The resulting optimization is convex, but due to the nonnegativity constraints, it cannot be solved in closed form, and iterative solutions are required.
There is a large literature on iterative algorithms for nonnegative quadratic programming
in general and for SVMs as a special case[3, 17]. Gradient-based methods are the simplest
possible approach, but their convergence depends on careful selection of the learning rate,
as well as constant attention to the nonnegativity constraints which may not be naturally
enforced. Multiplicative updates based on exponentiated gradients (EG)[5, 10] have been
investigated as an alternative to traditional gradient-based methods. Multiplicative updates
are naturally suited to sparse nonnegative optimizations, but EG updates?like their additive counterparts?suffer the drawback of having to choose a learning rate.
Subset selection methods constitute another approach to the problem of nonnegative
quadratic programming in SVMs. Generally speaking, these methods split the variables
at each iteration into two sets: a fixed set in which the variables are held constant, and a
working set in which the variables are optimized by an internal subroutine. At the end of
each iteration, a heuristic is used to transfer variables between the two sets and improve
the objective function. An extreme version of this approach is the method of Sequential
Minimal Optimization (SMO)[15], which updates only two variables per iteration. In this
case, there exists an analytical solution for the updates, so that one avoids the expense of a
potentially iterative optimization within each iteration of the main loop.
In general, despite the many proposed approaches for training SVMs, solving the quadratic
programming problem remains a bottleneck in their implementation. (Some researchers
have even advocated changing the objective function in SVMs to simplify the required
optimization[8, 13].) In this paper, we propose a new iterative algorithm, called Multiplicative Margin Maximization (M3 ), for training SVMs. The M3 updates have a simple
closed form and converge monotonically to the solution of the maximum margin hyperplane. They do not involve heuristics such as the setting of a learning rate or the switching
between fixed and working subsets; all the variables are updated in parallel. They provide an extremely straightforward way to implement traditional SVMs. Experimental and
theoretical results confirm the promise of our approach.
2 Nonnegative quadratic programming
We begin by studying the general problem of nonnegative quadratic programming. Consider the minimization of the quadratic objective function

    F(v) = (1/2) vᵀAv + bᵀv,    (1)

subject to the constraints that v_i ≥ 0 for all i. We assume that the matrix A is symmetric and
semipositive definite, so that the objective function F (v) is bounded below, and its optimization is convex. Due to the nonnegativity constraints, however, there does not exist an
analytical solution for the global minimum (or minima), and an iterative solution is needed.
2.1 Multiplicative updates
Our iterative solution is expressed in terms of the positive and negative components of the matrix A in eq. (1). In particular, let A⁺ and A⁻ denote the nonnegative matrices:

    A⁺_ij = A_ij if A_ij > 0, and 0 otherwise;
    A⁻_ij = |A_ij| if A_ij < 0, and 0 otherwise.    (2)

It follows trivially that A = A⁺ − A⁻. In terms of these nonnegative matrices, our proposed updates (to be applied in parallel to all the elements of v) take the form:

    v_i ← v_i [ (−b_i + √(b_i² + 4(A⁺v)_i (A⁻v)_i)) / (2(A⁺v)_i) ].    (3)
The iterative updates in eq. (3) are remarkably simple to implement. Their somewhat mysterious form will be clarified as we proceed. Let us begin with two simple observations. First, eq. (3) prescribes a multiplicative update for the ith element of v in terms of the ith elements of the vectors b, A⁺v, and A⁻v. Second, since the elements of v, A⁺, and A⁻ are nonnegative, the overall factor multiplying v_i on the right hand side of eq. (3) is always nonnegative. Hence, these updates never violate the constraints of nonnegativity.
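As a concrete illustration, the parallel update of eq. (3) is a few lines of NumPy. This is a sketch, not the authors' code: the function names are ours, and we assume the denominator (A⁺v)_i stays strictly positive, which holds here because A has a positive diagonal and v is initialized positive.

```python
import numpy as np

def nqp_step(v, A, b):
    """One parallel multiplicative update of eq. (3) for
    min F(v) = 0.5 v^T A v + b^T v subject to v >= 0."""
    A_pos = np.maximum(A, 0.0)    # A+ of eq. (2)
    A_neg = np.maximum(-A, 0.0)   # A- of eq. (2)
    a, c = A_pos @ v, A_neg @ v   # (A+ v)_i and (A- v)_i
    return v * (-b + np.sqrt(b**2 + 4.0 * a * c)) / (2.0 * a)

def F(v, A, b):
    """Objective of eq. (1)."""
    return 0.5 * v @ A @ v + b @ v

# toy problem whose constrained minimum is interior: v* = (1/3, 5/3)
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, -3.0])
v = np.ones(2)
for _ in range(2000):
    v = nqp_step(v, A, b)
```

Each step is guaranteed not to increase F (Theorem 1), and for this toy problem v converges to the global constrained minimum (1/3, 5/3) while never leaving the nonnegative orthant.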
2.2 Fixed points
We can show further that these updates have fixed points wherever the objective function F(v) achieves its minimum value. Let v* denote a global minimum of F(v). At such a point, one of two conditions must hold for each element v*_i: either (i) v*_i > 0 and (∂F/∂v_i)|_{v*} = 0, or (ii) v*_i = 0 and (∂F/∂v_i)|_{v*} ≥ 0. The first condition applies to the positive elements of v*, whose corresponding terms in the gradient must vanish. These derivatives are given by:

    (∂F/∂v_i)|_{v*} = (A⁺v*)_i − (A⁻v*)_i + b_i.    (4)

The second condition applies to the zero elements of v*. Here, the corresponding terms of the gradient must be nonnegative, thus pinning v*_i to the boundary of the feasibility region.

The multiplicative updates in eq. (3) have fixed points wherever the conditions for global minima are satisfied. To see this, let

    γ_i = (−b_i + √(b_i² + 4(A⁺v*)_i (A⁻v*)_i)) / (2(A⁺v*)_i)    (5)

denote the factor multiplying the ith element of v in eq. (3), evaluated at v*. Fixed points of the multiplicative updates occur when one of two conditions holds for each element v_i: either (i) v*_i > 0 and γ_i = 1, or (ii) v*_i = 0. It is straightforward to show from eqs. (4-5) that (∂F/∂v_i)|_{v*} = 0 implies γ_i = 1. Thus the conditions for global minima establish the conditions for fixed points of the multiplicative updates.
2.3 Monotonic convergence
The updates not only have the correct fixed points; they also lead to monotonic improvement in the objective function, F (v). This is established by the following theorem:
Theorem 1 The function F (v) in eq. (1) decreases monotonically to the value of its global
minimum under the multiplicative updates in eq. (3).
The proof of this theorem (sketched in Appendix A) relies on the construction of an auxiliary function which provides an upper bound on F (v). Similar methods have been used to
prove the convergence of many algorithms in machine learning[1, 4, 6, 7, 12, 16].
3 Support vector machines
We now consider the problem of computing the maximum margin hyperplane in SVMs[3, 17, 18]. Let {(x_i, y_i)}_{i=1}^N denote labeled examples with binary class labels y_i = ±1, and let K(x_i, x_j) denote the kernel dot product between inputs. In this paper, we focus on the simple case where in the high dimensional feature space, the classes are linearly separable and the hyperplane is required to pass through the origin¹. In this case, the maximum margin hyperplane is obtained by minimizing the loss function:

    L(α) = −Σ_i α_i + (1/2) Σ_ij α_i α_j y_i y_j K(x_i, x_j),    (6)

subject to the nonnegativity constraints α_i ≥ 0. Let α* denote the location of the minimum of this loss function. The maximal margin hyperplane has normal vector w = Σ_i α*_i y_i x_i and satisfies the margin constraints y_i K(w, x_i) ≥ 1 for all examples in the training set.
1
The extensions to non-realizable data sets and to hyperplanes that do not pass through the origin
are straightforward. They will be treated in a longer paper.
                    Polynomial            Radial
Data                k = 4    k = 6    σ = 0.3   σ = 1.0   σ = 3.0
Sonar               9.6%     9.6%     7.6%      6.7%      10.6%
Breast cancer       5.1%     3.6%     4.4%      4.4%      4.4%
Table 1: Misclassification error rates on the sonar and breast cancer data sets after 512
iterations of the multiplicative updates.
3.1 Multiplicative updates
The loss function in eq. (6) is a special case of eq. (1) with A_ij = y_i y_j K(x_i, x_j) and b_i = −1. Thus, the multiplicative updates for computing the maximal margin hyperplane in hard margin SVMs are given by:

    α_i ← α_i [ (1 + √(1 + 4(A⁺α)_i (A⁻α)_i)) / (2(A⁺α)_i) ],    (7)

where A⁺ and A⁻ are defined as in eq. (2). We will refer to the learning algorithm for hard margin SVMs based on these updates as Multiplicative Margin Maximization (M3).
It is worth comparing the properties of these updates to those of other approaches. Like multiplicative updates based on exponentiated gradients (EG)[5, 10], the M3 updates are well suited to sparse nonnegative optimizations²; unlike EG updates, however, they do not involve a learning rate, and they come with a guarantee of monotonic improvement. Like the updates for Sequential Minimal Optimization (SMO)[15], the M3 updates have a simple closed form; unlike SMO updates, however, they can be used to adjust all the quadratic programming variables in parallel (or any subset thereof), not just two at a time. Finally, we emphasize that the M3 updates optimize the traditional objective function for SVMs; they do not compromise the goal of computing the maximal margin hyperplane.
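To make the algorithm concrete, here is a minimal NumPy sketch of M3 training for a hard-margin SVM through the origin, following eq. (7). The function names and the toy data are ours; the uniform initialization matches the experiments reported below, and we assume the data are separable through the origin, as the derivation requires.

```python
import numpy as np

def m3_train(K, y, n_iter=512):
    """Hard-margin SVM through the origin via the M3 updates of eq. (7).
    K is the kernel Gram matrix, y the labels in {-1, +1}."""
    A = np.outer(y, y) * K                 # A_ij = y_i y_j K(x_i, x_j)
    A_pos, A_neg = np.maximum(A, 0.0), np.maximum(-A, 0.0)
    alpha = np.ones(len(y))                # uniform initialization
    for _ in range(n_iter):
        a, c = A_pos @ alpha, A_neg @ alpha
        alpha *= (1.0 + np.sqrt(1.0 + 4.0 * a * c)) / (2.0 * a)
    return alpha

# toy data, linearly separable through the origin by w = (0.5, 0.5)
X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = X @ X.T                                # linear kernel
alpha = m3_train(K, y)
margins = y * (K @ (alpha * y))            # y_i K(w, x_i)
```

At convergence the margins satisfy y_i K(w, x_i) ≥ 1, with equality for the support vectors, and the coefficients of the non-support vectors have decayed to (numerically) zero, as in Figure 1.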
3.2 Experimental results
We tested the effectiveness of the multiplicative updates in eq. (7) on two real world problems: binary classification of aspect-angle dependent sonar signals[9] and breast cancer
data[14]. Both data sets, available from the UCI Machine Learning Repository[2], have
been widely used to benchmark many learning algorithms, including SVMs[5]. The sonar
and breast cancer data sets consist of 208 and 683 labeled examples, respectively. Training and test sets for the breast cancer experiments were created by 80%/20% splits of the
available data.
We experimented with both polynomial and radial basis function kernels. The polynomial
kernels had degrees k = 4 and k = 6, while the radial basis function kernels had variances of σ = 0.3, 1.0 and 3.0. The coefficients α_i were uniformly initialized to a value of one in all experiments.
Misclassification rates on the test data sets after 512 iterations of the multiplicative updates
are shown in Table 1. As expected, the results match previously published error rates on
these data sets[5], showing that the M3 updates do in practice converge to the maximum
margin hyperplane. Figure 1 shows the rapid convergence of the updates to good classifiers
in just one or two iterations.
2
In fact, the multiplicative updates by nature cannot directly set a variable to zero. However,
a variable can be clamped to zero whenever its value falls below some threshold (e.g., machine
precision) and when a zero value would satisfy the Karush-Kuhn-Tucker conditions.
[Figure 1 panels: coefficient values α_i plotted against training-example index (support vectors on the left, non-support vectors on the right) after 0, 1, 2, 4, 8, 16, 32, and 64 iterations, annotated with intermediate error rates:
    iteration   0    1    2    4    8    16   32   64
    ε_t (%)     2.9  2.4  1.1  0.5  0.0  0.0  0.0  0.0
    ε_g (%)     3.6  2.2  4.4  4.4  4.4  4.4  4.4  4.4 ]
Figure 1: Rapid convergence of the multiplicative updates in eq. (7). The plots show results after different numbers of iterations on the breast cancer data set with the radial basis function kernel (σ = 3). The horizontal axes index the coefficients α_i of the 546 training examples; the vertical axes show their values. For ease of visualization, the training examples were ordered so that support vectors appear to the left and non-support vectors, to the right. The coefficients α_i were uniformly initialized to a value of one. Note the rapid attenuation of non-support vector coefficients after one or two iterations. Intermediate error rates on the training set (ε_t) and test set (ε_g) are also shown.
3.3 Asymptotic convergence
The rapid decay of non-support vector coefficients in Fig. 1 motivated us to analyze their rates of asymptotic convergence. Suppose we perturb just one of the non-support vector coefficients in eq. (6), say α_i, away from the fixed point to some small nonzero value δα_i. If we hold all the variables but α_i fixed and apply its multiplicative update, then the new displacement δα'_i after the update is given asymptotically by (δα'_i) ≈ (δα_i) γ_i, where

    γ_i = (1 + √(1 + 4(A⁺α*)_i (A⁻α*)_i)) / (2(A⁺α*)_i),    (8)

and A_ij = y_i y_j K(x_i, x_j). (Eq. (8) is merely the specialization of eq. (5) to SVMs.) We can thus bound the asymptotic rate of convergence, in this idealized but instructive setting, by computing an upper bound on γ_i, which determines how fast the perturbed coefficient decays to zero. (Smaller γ_i implies faster decay.) In general, the asymptotic rate of convergence is determined by the overall positioning of the data points and classification hyperplane in the feature space. The following theorem, however, provides a simple bound in terms of easily understood geometric quantities.

Theorem 2 Let d_i = |K(x_i, w)| / √K(w, w) denote the perpendicular distance in the feature space from x_i to the maximum margin hyperplane, and let d = min_j d_j = 1/√K(w, w) denote the one-sided margin of the classifier. Also, let ℓ_i = √K(x_i, x_i) denote the distance of x_i to the origin in the feature space, and let ℓ = max_j ℓ_j denote the largest such distance. Then a bound on the asymptotic rate of convergence γ_i is given by:

    γ_i ≤ [ 1 + (1/2) (d_i − d) d / (ℓ_i ℓ) ]⁻¹.    (9)
[Figure 2: schematic of the classification hyperplane separating positive and negative examples, marking the distances d, d_i, and ℓ_i.]
Figure 2: Quantities used to bound the asymptotic rate of convergence in eq. (9); see text.
Solid circles denote support vectors; empty circles denote non-support vectors.
The proof of this theorem is sketched in Appendix B. Figure 2 gives a schematic representation of the quantities that appear in the bound. The bound has a simple geometric
intuition: the more distant a non-support vector from the classification hyperplane, the
faster its coefficient decays to zero. This is a highly desirable property for large numerical calculations, suggesting that the multiplicative updates could be used to quickly prune
away outliers and reduce the size of the quadratic programming problem. Note that while
the bound is insensitive to the scale of the inputs, its tightness does depend on their relative
locations in the feature space.
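Both the exact rate of eq. (8) and the bound of Theorem 2 are easy to evaluate at a solution. The sketch below checks them on a small linear-kernel problem whose exact solution is known in closed form; the function names and toy data are ours, not the paper's.

```python
import numpy as np

def rates_and_bound(K, y, alpha):
    """Exact asymptotic rates gamma_i (eq. 8) and the bound of Theorem 2 (eq. 9),
    evaluated at a solution alpha of eq. (6)."""
    A = np.outer(y, y) * K
    zp = np.maximum(A, 0.0) @ alpha        # (A+ alpha*)_i
    zn = np.maximum(-A, 0.0) @ alpha       # (A- alpha*)_i
    gamma = (1.0 + np.sqrt(1.0 + 4.0 * zp * zn)) / (2.0 * zp)
    Kww = alpha @ A @ alpha                # K(w, w); equals sum(alpha) by eq. (23)
    d_i = np.abs(K @ (alpha * y)) / np.sqrt(Kww)   # distances to the hyperplane
    d = 1.0 / np.sqrt(Kww)                 # one-sided margin
    l_i = np.sqrt(np.diag(K))              # distances to the origin
    bound = 1.0 / (1.0 + 0.5 * (d_i - d) * d / (l_i * l_i.max()))
    return gamma, bound

# toy problem with known solution: support vectors at indices 0 and 2
X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = X @ X.T
alpha_star = np.array([0.25, 0.0, 0.25, 0.0])   # one exact solution of eq. (6)
gamma, bound = rates_and_bound(K, y, alpha_star)
```

Here the support vector coefficients have rate exactly 1 (they do not decay), while the non-support coefficients have rate 2/3, safely inside the geometric bound of Theorem 2.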
4 Conclusion
SVMs represent one of the most widely used architectures in machine learning. In this
paper, we have derived simple, closed form multiplicative updates for solving the nonnegative quadratic programming problem in SVMs. The M3 updates are straightforward
to implement and have a rigorous guarantee of monotonic convergence. It is intriguing
that multiplicative updates derived from auxiliary functions appear in so many other areas
of machine learning, especially those involving sparse, nonnegative optimizations. Examples include the Baum-Welch algorithm[1] for discrete hidden Markov models, generalized iterative scaling[6] and adaBoost[4] for logistic regression, and nonnegative matrix
factorization[11, 12] for dimensionality reduction and feature extraction. In these areas,
simple multiplicative updates with guarantees of monotonic convergence have emerged
over time as preferred methods of optimization. Thus it seems worthwhile to explore their
full potential for SVMs.
References
[1] L. Baum. An inequality and associated maximization technique in statistical estimation of probabilistic functions of Markov processes. Inequalities, 3:1-8, 1972.
[2] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[3] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Knowledge Discovery and Data Mining, 2(2):121-167, 1998.
[4] M. Collins, R. Schapire, and Y. Singer. Logistic regression, adaBoost, and Bregman distances. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[5] N. Cristianini, C. Campbell, and J. Shawe-Taylor. Multiplicative updatings for support vector machines. In Proceedings of ESANN'99, pages 189-194, 1999.
[6] J. N. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43:1470-1480, 1972.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-37, 1977.
[8] C. Gentile. A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213-242, 2001.
[9] R. P. Gorman and T. J. Sejnowski. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks, 1(1):75-89, 1988.
[10] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-63, 1997.
[11] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788-791, 1999.
[12] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
[13] O. L. Mangasarian and D. R. Musicant. Lagrangian support vector machines. Journal of Machine Learning Research, 1:161-177, 2001.
[14] O. L. Mangasarian and W. H. Wolberg. Cancer diagnosis via linear programming. SIAM News, 23(5):1-18, 1990.
[15] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208, Cambridge, MA, 1999. MIT Press.
[16] L. K. Saul and D. D. Lee. Multiplicative updates for classification by mixture models. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[17] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[18] V. Vapnik. Statistical Learning Theory. Wiley, N.Y., 1998.
A Proof of Theorem 1
The proof of monotonic convergence in the objective function F(v), eq. (1), is based on the derivation of an auxiliary function. Similar techniques have been used for many models in statistical learning[1, 4, 6, 7, 12, 16]. An auxiliary function G(ṽ, v) has the two crucial properties that F(ṽ) ≤ G(ṽ, v) and F(v) = G(v, v) for all nonnegative ṽ, v. From such an auxiliary function, we can derive the update rule v′ = argmin_ṽ G(ṽ, v), which never increases (and generally decreases) the objective function F(v):

    F(v′) ≤ G(v′, v) ≤ G(v, v) = F(v).    (10)
By iterating this procedure, we obtain a series of estimates that improve the objective function. For nonnegative quadratic programming, we derive an auxiliary function G(ṽ, v) by decomposing F(v) in eq. (1) into three terms and then bounding each term separately:

    F(v) = (1/2) Σ_ij A⁺_ij v_i v_j − (1/2) Σ_ij A⁻_ij v_i v_j + Σ_i b_i v_i,    (11)

    G(ṽ, v) = (1/2) Σ_i (A⁺v)_i ṽ_i²/v_i − (1/2) Σ_ij A⁻_ij v_i v_j (1 + log(ṽ_i ṽ_j / (v_i v_j))) + Σ_i b_i ṽ_i.    (12)

It can be shown that F(ṽ) ≤ G(ṽ, v). The minimization of G(ṽ, v) is performed by setting its derivative to zero, leading to the multiplicative updates in eq. (3). The updates move each element v_i in the same direction as −∂F/∂v_i, with fixed points occurring only if v*_i = 0 or ∂F/∂v_i = 0. Since the overall optimization is convex, all minima of F(v) are global minima. The updates converge to the unique global minimum if it exists.
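The two auxiliary-function properties, and the fact that minimizing G over ṽ yields exactly the update of eq. (3), can be checked numerically. The sketch below assumes strictly positive v and ṽ (so the logarithm is defined); the function names and the small test instance are ours.

```python
import numpy as np

def F(v, A, b):
    """Objective of eq. (1)."""
    return 0.5 * v @ A @ v + b @ v

def G(vt, v, A, b):
    """Auxiliary function of eq. (12); vt and v must be strictly positive."""
    A_pos, A_neg = np.maximum(A, 0.0), np.maximum(-A, 0.0)
    term1 = 0.5 * np.sum((A_pos @ v) * vt**2 / v)
    log_ratio = 1.0 + np.log(np.outer(vt, vt) / np.outer(v, v))
    term2 = 0.5 * np.sum(A_neg * np.outer(v, v) * log_ratio)
    return term1 - term2 + b @ vt

# small symmetric positive semidefinite instance with negatives in every row,
# so that (A- v)_i > 0 and the minimizer of G stays strictly positive
A = np.array([[2.0, -1.0, 0.5], [-1.0, 3.0, -0.5], [0.5, -0.5, 1.5]])
b = np.array([1.0, -2.0, 0.5])
v = np.array([0.5, 1.0, 1.5])
vt = np.array([1.2, 0.3, 0.8])

# minimizing G over vt recovers the update of eq. (3):
a, c = np.maximum(A, 0.0) @ v, np.maximum(-A, 0.0) @ v
v_new = v * (-b + np.sqrt(b**2 + 4.0 * a * c)) / (2.0 * a)
grad_G = a * v_new / v - c * v / v_new + b   # dG/dvt, evaluated at vt = v_new
```

The gradient of G vanishes exactly at the closed-form update, which is how setting the derivative of eq. (12) to zero produces eq. (3).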
B Proof of Theorem 2
The proof of the bound on the asymptotic rate of convergence relies on the repeated use of equalities and inequalities that hold at the fixed point α*. For example, if α*_i = 0 is a non-support vector coefficient, then (∂L/∂α_i)|_{α*} ≥ 0 implies (A⁺α*)_i − (A⁻α*)_i ≥ 1. As shorthand, let z⁺_i = (A⁺α*)_i and z⁻_i = (A⁻α*)_i. Then we have the following result:

    1/γ_i = 2z⁺_i / (1 + √(1 + 4 z⁺_i z⁻_i))    (13)
          ≥ 2z⁺_i / (1 + √((z⁺_i − z⁻_i)² + 4 z⁺_i z⁻_i))    (14)
          = 1 + (z⁺_i − z⁻_i − 1) / (z⁺_i + z⁻_i + 1)    (15)
          ≥ 1 + (z⁺_i − z⁻_i − 1) / (2 z⁺_i).    (16)
To prove the theorem, we need to express this result in terms of kernel dot products. We can rewrite the variables in the numerator of eq. (16) as:

    z⁺_i − z⁻_i = Σ_j A_ij α*_j = Σ_j y_i y_j K(x_i, x_j) α*_j = y_i K(x_i, w) = |K(x_i, w)|,    (17)

where w = Σ_j α*_j y_j x_j is the normal vector to the maximum margin hyperplane. Likewise,
we can obtain a bound on the denominator of eq. (16) by:

    z⁺_i = Σ_j A⁺_ij α*_j    (18)
         ≤ (max_k A⁺_ik) Σ_j α*_j    (19)
         ≤ (max_k |K(x_i, x_k)|) Σ_j α*_j    (20)
         ≤ √K(x_i, x_i) (max_k √K(x_k, x_k)) Σ_j α*_j    (21)
         = √K(x_i, x_i) (max_k √K(x_k, x_k)) K(w, w).    (22)
Eq. (21) is an application of the Cauchy-Schwartz inequality for kernels, while eq. (22)
exploits the observation that:

    K(w, w) = Σ_jk A_jk α*_j α*_k = Σ_j α*_j Σ_k A_jk α*_k = Σ_j α*_j.    (23)
The last step in eq. (23) is obtained by recognizing that α*_j is nonzero only for the coefficients of support vectors, and that in this case the optimality condition (∂L/∂α_j)|_{α*} = 0 implies Σ_k A_jk α*_k = 1. Finally, substituting eqs. (17) and (22) into eq. (16) gives:

    1/γ_i ≥ 1 + (|K(x_i, w)| − 1) / (2 √K(x_i, x_i) max_k √K(x_k, x_k) K(w, w)).    (24)
This reduces in a straightforward way to the claim of the theorem.
Using Tarjan's Red Rule for Fast Dependency
Tree Construction
Dan Pelleg and Andrew Moore
School of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213 USA
[email protected], [email protected]
Abstract
We focus on the problem of efficient learning of dependency trees. It
is well-known that given the pairwise mutual information coefficients,
a minimum-weight spanning tree algorithm solves this problem exactly
and in polynomial time. However, for large data-sets it is the construction of the correlation matrix that dominates the running time. We have
developed a new spanning-tree algorithm which is capable of exploiting
partial knowledge about edge weights. The partial knowledge we maintain is a probabilistic confidence interval on the coefficients, which we
derive by examining just a small sample of the data. The algorithm is
able to flag the need to shrink an interval, which translates to inspection of more data for the particular attribute pair. Experimental results
show running time that is near-constant in the number of records, without significant loss in accuracy of the generated trees. Interestingly, our
spanning-tree algorithm is based solely on Tarjan's red-edge rule, which
is generally considered a guaranteed recipe for bad performance.
1 Introduction
Bayes' nets are widely used for data modeling. However, the problem of constructing
Bayes' nets from data remains a hard one, requiring search in a super-exponential space of
possible graph structures. Despite recent advances [1], learning network structure from big
data sets demands huge computational resources. We therefore turn to a simpler model,
which is easier to compute while still being expressive enough to be useful. Namely, we
look at dependency trees, which are belief networks that satisfy the additional constraint
that each node has at most one parent. In this simple case it has been shown [2] that
finding the tree that maximizes the data likelihood is equivalent to finding a minimum-weight spanning tree in the attribute graph, where edge weights are derived from the mutual
information of the corresponding attribute pairs.
Dependency trees are interesting in their own right, but also as initializers for Bayes' Net
search, as mixture components [3], or as components in classifiers [4]. It is our intent to
eventually apply the technology introduced in this paper to the full problem of Bayes Net
structure search.
Once the weight matrix is constructed, executing a minimum spanning tree (MST) algo-
rithm is fast. The time-consuming part is the population of the weight matrix, which takes
time O(Rn²) for R records and n attributes. This becomes expensive when considering
datasets with hundreds of thousands of records and hundreds of attributes.
To overcome this problem, we propose a new way of interleaving the spanning tree construction with the operations needed to compute the mutual information coefficients. We
develop a new spanning-tree algorithm, based solely on Tarjan's [5] red-edge rule. This
algorithm is capable of using partial knowledge about edge weights and of signaling the
need for more accurate information regarding a particular edge. The partial information we
maintain is in the form of probabilistic confidence intervals on the edge weights; an interval
is derived by looking at a sub-sample of the data for a particular attribute pair. Whenever
the algorithm signals that a currently-known interval is too wide, we inspect more data
records in order to shrink it. Once the interval is small enough, we may be able to prove
that the corresponding edge is not a part of the tree. Whenever such an edge can be eliminated without looking at the full data-set, the work associated with the remainder of the
data is saved. This is where performance is potentially gained.
We have implemented the algorithm for numeric and categorical data and tested it on real
and synthetic data-sets containing hundreds of attributes and millions of records. We show
experimental results of up to 5,000-fold speed improvements over the traditional algorithm.
The resulting trees are, in most cases, of near-identical quality to the ones grown by the
naive algorithm.
Use of probabilistic bounds to direct structure-search appears in [6] for classification and
in [7] for model selection. In a sequence of papers, Domingos et al. have demonstrated the
usefulness of this technique for decision trees [8], K-means clustering [9], and mixtures-of-Gaussians EM [10]. In the context of dependency trees, Meila [11] discusses the discrete
case that frequently comes up in text-mining applications, where the attributes are sparse in
the sense that only a small fraction of them is true for any record. In this case it is possible
to exploit the sparseness and accelerate the Chow-Liu algorithm.
Throughout the paper we use the following notation. The number of data records is R, the
number of attributes n. When x is an attribute, xi is the value it takes for the i-th record. We
denote by ρ_xy the correlation coefficient between attributes x and y, and omit the subscript
when it is clear from the context.
2 A slow minimum-spanning tree algorithm
We begin by describing our MST algorithm¹. Although in its given form it can be applied
to any graph, it is asymptotically slower than established algorithms (as predicted in [5] for
all algorithms in its class). We then proceed to describe its use in the case where some edge
weights are known not exactly, but rather only to lie within a given interval. In Section 4
we will show how this property of the algorithm interacts with the data-scanning step to
produce an efficient dependency-tree algorithm.
In the following discussion we assume we are given a complete graph with n nodes, and
the task is to find a tree connecting all of its nodes such that the total tree weight (defined
to be the sum of the weights of its edges) is minimized. This problem has been extremely
well studied and numerous efficient algorithms for it exist.
We start with a rule to eliminate edges from consideration for the output tree. Following [5],
we state the so-called "red-edge" rule:
Theorem 1: The heaviest edge in any cycle in the graph is not part of the minimum
spanning tree.

¹ To be precise, we will use it as a maximum spanning tree algorithm. The two are interchangeable,
requiring just a reversal of the edge weight comparison operator.

1. T ← an arbitrary spanning set of n − 1 edges.
   L ← empty set.
2. While |L̄| > n − 1 do:
      Pick an arbitrary edge e ∈ L̄ \ T.
      Let e′ be the heaviest edge on the path in T between the endpoints of e.
      If e is heavier than e′:
         L ← L ∪ {e}
      otherwise:
         T ← T ∪ {e} \ {e′}
         L ← L ∪ {e′}
3. Output T.

Figure 1: The MIST algorithm. At each step of the iteration, T contains the current "draft"
tree. L contains the set of edges that have been proven to not be in the MST and so L̄
contains the set of edges that still have some chance of being in the MST. T never contains
an edge in L.
Traditionally, MST algorithms use this rule in conjunction with a greedy "blue-edge" rule,
which chooses edges for inclusion in the tree. In contrast, we will repeatedly use the
red-edge rule until all but n − 1 edges have been eliminated. The proof that this results in a
minimum-spanning tree follows from [5].
Let E be the original set of edges. Denote by L the set of edges that have already been
eliminated, and let L̄ = E \ L. As a way to guide our search for edges to eliminate we
maintain the following invariant:

Invariant 2: At any point there is a spanning tree T, which is composed of edges in L̄.

In each step, we arbitrarily choose some edge e in L̄ \ T and try to eliminate it using the
red-edge rule. Let P be the path in T between e's endpoints. The cycle we will apply the
red-edge rule to will be composed of e and P . It is clear we only need to compare e with
the heaviest edge in P . If e is heavier, we can eliminate it by the red-edge rule. However, if
it is lighter, then we can eliminate the tree edge by the same rule. We do so and add e to the
tree to preserve Invariant 2. The algorithm, which we call Minimum Incremental Spanning
Tree (MIST), is listed in Figure 1.
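The steps of Figure 1 can be sketched directly in Python (helper names are ours; ties between equal weights are broken arbitrarily, and the paper's maximum-spanning-tree variant only requires reversing the weight comparisons):

```python
import itertools

def tree_path(T, u, v):
    """Edges on the unique u-v path in the tree T (T is a set of sorted 2-tuples)."""
    adj = {}
    for a, b in T:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent = {u: None}
    stack = [u]
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y in adj.get(x, []):
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path, x = [], v
    while parent[x] is not None:
        path.append(tuple(sorted((x, parent[x]))))
        x = parent[x]
    return path

def mist(n, weight):
    """Minimum Incremental Spanning Tree: apply only the red-edge rule until
    all but n - 1 edges have been eliminated (a sketch of Figure 1)."""
    edges = [tuple(sorted(e)) for e in itertools.combinations(range(n), 2)]
    T = {(i, i + 1) for i in range(n - 1)}  # arbitrary initial spanning tree (a path)
    L = set()                               # edges proven not to be in the MST
    while len(edges) - len(L) > n - 1:
        e = next(x for x in edges if x not in L and x not in T)
        heaviest = max(tree_path(T, e[0], e[1]), key=lambda x: weight[x])
        if weight[e] >= weight[heaviest]:   # e is the heaviest edge on the cycle: drop it
            L.add(e)
        else:                               # otherwise drop the heaviest tree edge, keep e
            T.remove(heaviest)
            T.add(e)
            L.add(heaviest)
    return T
```

Each iteration eliminates exactly one edge, so the loop terminates after |E| − (n − 1) iterations; with distinct weights the surviving n − 1 edges form the unique minimum spanning tree.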
The MIST algorithm can be applied directly to a graph where the edge weights are known
exactly. And like many other MST algorithms, it can also be used in the case where just
the relative order of the edge weights is given. Now imagine a different setup, where edge
weights are not given, and instead an oracle exists, who knows the exact values of the edge
weights. When asked about the relative order of two edges, it may either respond with
the correct answer, or it may give an inconclusive answer. Furthermore, a constant fee is
charged for each query. In this setup, MIST is still suited for finding a spanning tree while
minimizing the number of queries issued. In step 2, we go to the oracle to determine the
order. If the answer is conclusive, the algorithm proceeds as described. Otherwise, it just
ignores the "if" clause altogether and iterates (possibly with a different edge e).
For the moment, this setup may seem contrived, but in Section 4, we go back to the MIST
algorithm and put it in a context very similar to the one described here.
3 Probabilistic bounds on mutual information
We now concentrate once again on the specific problem of determining the mutual information between a pair of attributes. We show how to compute it given the complete data,
and how to derive probabilistic confidence intervals for it, given just a sample of the data.
As shown in [12], the mutual information for two jointly Gaussian numeric attributes X
and Y is:
    I(X; Y) = −(1/2) ln(1 − ρ²),

where the correlation coefficient ρ = ρ_XY satisfies

    ρ² = (Σ_{i=1}^R (x_i − x̄)(y_i − ȳ))² / (σ̂²_X σ̂²_Y),

with x̄, ȳ, σ̂²_X and σ̂²_Y being the sample means and variances for attributes X and Y.
Since the log function is monotonic, I(X; Y) must be monotonic in |ρ|. This is a sufficient
condition for the use of |ρ| as the edge weight in a MST algorithm. Consequently, the
sample correlation can be used in a straightforward manner when the complete data is
available. Now consider the case where just a sample of the data has been observed.
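As a sketch, the full-data edge weight can be computed from plain sample statistics (the function name is ours, and |ρ| < 1 is assumed):

```python
import math

def gaussian_mutual_information(xs, ys):
    """I(X;Y) = -0.5 * ln(1 - rho^2) for jointly Gaussian attributes, using the
    squared sample correlation (a sketch; assumes |rho| < 1)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    rho2 = (sxy * sxy) / (sxx * syy)  # squared sample correlation
    return -0.5 * math.log(1.0 - rho2)
```

Since the result is monotonic in |ρ|, either the mutual information itself or |ρ| can serve as the edge weight fed to the spanning-tree algorithm.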
Let x and y be two data attributes. We are trying to estimate Σ_{i=1}^R x_i · y_i given the partial
sum Σ_{i=1}^r x_i · y_i for some r < R. To derive a confidence interval, we use the Central Limit
Theorem². It states that given samples of the random variable Z (where for our purposes
Z_i = x_i · y_i), the sum Σ_i Z_i can be approximated by a Normal distribution with mean
and variance closely related to the distribution mean and variance. Furthermore, for large
samples, the sample mean and variance can be substituted for the unknown distribution
parameters. Note in particular that the central limit theorem does not require us to make
any assumption about the Gaussianity of Z. We thus can derive a two-sided confidence
interval for Σ_i Z_i = Σ_i x_i · y_i with probability 1 − δ for some user-specified δ, typically
1%. Given this interval, computing an interval for ρ is straightforward. Categorical data
can be treated similarly; for lack of space we refer the reader to [13] for the details.
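A minimal sketch of such an interval for the partial sum (our own helper; the paper's exact estimator, and its choice of critical value, may differ):

```python
import math
import statistics

def partial_sum_interval(z_partial, R, z_crit=2.576):
    """CLT-style two-sided interval for sum_{i=1}^R Z_i, estimated from the
    first r < R observed samples. z_crit ~ 2.576 corresponds to delta = 1%
    (a sketch; the paper's exact estimator may differ)."""
    r = len(z_partial)
    mean = statistics.fmean(z_partial)
    sd = statistics.stdev(z_partial)
    estimate = R * mean
    half = z_crit * R * sd / math.sqrt(r)
    return estimate - half, estimate + half
```

Doubling the sample size r shrinks the interval by roughly a factor of √2, which is why "promoting" an edge (Section 4) inspects twice as much data each time.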
4 The full algorithm
As we argued, the MIST algorithm is capable of using partial information about edge
weights. We have also shown how to derive confidence intervals on edge weights. We
now combine the two and give an efficient dependency-tree algorithm.
We largely follow the MIST algorithm as listed in Figure 1. We initialize the tree T in
the following heuristic way: first we take a small sub-sample of the data, and derive point
estimates for the edge weights from it. Then we feed the point estimates to a MST algorithm
and obtain a tree T .
When we come to compare edge weights, we generally need to deal with two intervals. If
they do not intersect, then the points in one of them are all smaller in value than any point
in the other, in which case we can determine which represents a heavier edge. We apply
this logic to all comparisons, where the goal is to determine the heaviest path edge e′ and
to compare it to the candidate e. If we are lucky enough that all of these comparisons are
conclusive, the amount of work we save is related to how much data was used in computing
the confidence intervals: the rest of the data for the attribute-pair that is represented by
the eliminated edge can be ignored.
² One can use the weaker Hoeffding bound instead, and our implementation supports it as well,
although it is generally much less powerful.

However, there is no guarantee that the intervals are separated and allow us to draw meaningful conclusions. If they do not, then we have a situation similar to the inconclusive
oracle answers in Section 2. The price we need to pay here is looking at more data to
shrink the confidence intervals. We do this by choosing one edge (either a tree-path edge
or the candidate edge) for "promotion", and doubling the sample size used to compute
the sufficient statistics for it. After doing so we try to eliminate again (since we can do
this at no additional cost). If we fail to eliminate we iterate, possibly choosing a different
candidate edge (and the corresponding tree path) this time. The choice of which edge to
promote is heuristic, and depends on the expected success of resolution once the interval
has shrunk. The details of these heuristics are omitted due to space constraints.
Another heuristic we employ goes as follows. Consider the comparison of the path-heaviest
edge to an estimate of a candidate edge. The candidate edge's confidence interval may be
very small, and yet still intersect the interval that is the heavy edge's weight (this would
happen if, for example, both attribute-pairs have the same distribution). We may be able
to reduce the amount of work by pretending the interval is narrower than it really is. We
therefore trim the interval by a constant, parameterized by the user as ε, before performing
the comparison. This use of δ and ε is analogous to their use in "Probably Approximately
Correct" analysis: on each decision, with high probability (1 − δ) we will make at worst a
small mistake (ε).
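The trimmed comparison can be sketched as follows (hypothetical helper; intervals are (lo, hi) pairs):

```python
def compare_trimmed(a, b, eps=0.05):
    """Compare two confidence intervals a = (lo, hi) and b = (lo, hi) after
    trimming eps off each end (the epsilon heuristic, sketched).
    Returns '<' or '>' when conclusive, or None when the trimmed intervals overlap."""
    a_lo, a_hi = a[0] + eps, a[1] - eps
    b_lo, b_hi = b[0] + eps, b[1] - eps
    if a_hi < b_lo:
        return '<'
    if b_hi < a_lo:
        return '>'
    return None
```

With ε = 0 this is the exact interval test; a positive ε turns some overlapping (inconclusive) comparisons into conclusive ones, trading a small chance of error for less data inspection.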
5 Experimental results
In the following description of experiments, we vary different parameters for the data and
the algorithm. Unless otherwise specified, these are the default values for the parameters.
We set δ to 1% and ε to 0.05 (on either side of the interval, totaling 0.1). The initial sample
size is fifty records. There are 100,000 records and 100 attributes. The data is numeric.
The data-generation process first generates a random tree, then draws points for each node
from a normal distribution with the node's parent's value as the mean. In addition, any data
value is set to random noise with probability 0.15.
To construct the correlation matrix from the full data, each of the R records needs to be
considered for each of the n(n − 1)/2 attribute pairs. We evaluate the performance of our
algorithm by adding the number of records that were actually scanned for all the
attribute-pairs, and dividing the total by R · n(n − 1)/2. We call this number the "data
usage" of our algorithm. The
closer it is to zero, the more efficient our sampling is, while a value of one means the same
amount of work as for the full-data algorithm.
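Assuming per-pair scan counts are tracked, the metric can be sketched as (hypothetical helper):

```python
def data_usage(records_scanned, R, n):
    """'Data usage': record-cells actually scanned, divided by the
    R * n(n-1)/2 cells a full correlation-matrix scan would touch (sketch)."""
    full = R * n * (n - 1) // 2
    return sum(records_scanned.values()) / full
```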
We first demonstrate the speed of our algorithm as compared with the full O(Rn²) scan.
Figure 2 shows that the amount of data the algorithm examines is a constant that does not
depend on the size of the data-set. This translates to relative run-times of 0.7% (for the
37,500-record set) to 0.02% (for the 1,200,000-record set) as compared with the full-data
algorithm. The latter number translates to a 5,000-fold speedup. Note that the reported
usage is an average over the number of attributes. However this does not mean that the
same amount of data was inspected for every attribute-pair; the algorithm determines
how much effort to invest in each edge separately. We return to this point below.
The running time is plotted against the number of data attributes in Figure 3. A linear
relation is clearly seen, meaning that (at least for this particular data-generation scheme)
the algorithm is successful in doing work that is proportional to the number of tree edges.
Clearly speed has to be traded off. For our algorithm the risk is making the wrong decision
about which edges to include in the resulting tree. For many applications this is an acceptable risk. However, there might be a simpler way to grow estimate-based dependency trees,
one that does not involve complex red-edge rules. In particular, we can just run the original
algorithm on a small sample of the data, and use the generated tree. It would certainly be
fast, and the only question is how well it performs.
Figure 2: Data usage (indicative of absolute running time), in attribute-pair units per
attribute, plotted against the number of records.

Figure 3: Running time as a function of the number of attributes.
Figure 4: Relative log-likelihood vs. the sample-based algorithm. The log-likelihood
difference is divided by the number of records.

Figure 5: Relative log-likelihood vs. the sample-based algorithm, drawn against the
fraction of data scanned.
To examine this effect we have generated data as above, then ran a 30-fold cross-validation
test for the trees our algorithm generated. We also ran a sample-based algorithm on each of
the folds. This variant behaves just like the full-data algorithm, but instead examines just
the fraction of it that adds up to the total amount of data used by our algorithm. Results for
multiple data-sets are in Figure 4. We see that our algorithm outperforms the sample-based
algorithm, even though they are both using the same total amount of data. The reason is
that using the same amount of data for all edges assumes all attribute-pairs have the same
variance. This is in contrast to our algorithm, which determines the amount of data for each
edge independently. Apparently for some edges this decision is very easy, requiring just a
small sample. These "savings" can be used to look at more data for high-variance edges.
The sample-based algorithm would not put more effort into those high-variance edges,
eventually making the wrong decision. In Figure 5 we show the log-likelihood difference
for a particular (randomly generated) set. Here, multiple runs with different δ and ε values
were performed, and the result is plotted against the fraction of data used. The baseline (0)
is the log-likelihood of the tree grown by the original algorithm using the full data. Again
we see that MIST is better over a wide range of data utilization ratio.
Keep in mind that the sample-based algorithm has been given an unfair advantage, compared with MIST: it knows how much data it needs to look at. This parameter is implicitly
passed to it from our algorithm, and represents an important piece of information about
the data. Without it, there would need to be a preliminary stage to determine the sample
size. The alternative is to use a fixed amount (specified either as a fraction or as an absolute
count), which is likely to be too much or too little.
To test our algorithm on real-life data, we used various data-sets from [14, 15], as well
as analyzed data derived from astronomical observations taken in the Sloan Digital Sky
Survey. On each data-set we ran a 30-fold cross-validation test as described above. For
Table 1: Results, relative to the sample-based algorithm, on real data. "Type" means
numerical or categorical data.

NAME             ATTR.   RECORDS   TYPE   DATA USAGE
CENSUS-HOUSE       129     22784    N        1.0%
COLOR HISTOGRAM     32     68040    N        0.5%
COOC TEXTURE        16     68040    N        4.6%
ABALONE              8      4177    N       21.0%
COLOR MOMENTS       10     68040    N        0.6%
CENSUS-INCOME      678     99762    C        0.05%
COIL2000           624      5822    C        0.9%
IPUMS              439     88443    C        0.06%
KDDCUP99           214    303039    C        0.02%
LETTER              16     20000    N        1.5%
COVTYPE            151    581012    C        0.009%
PHOTOZ              23   2381112    N        0.008%
each training fold, we ran our algorithm, followed by a sample-based algorithm that uses
as much data as our algorithm did. Then the log-likelihoods of both trees were computed
for the test fold. Table 1 shows whether the 99% confidence interval for the log-likelihood
difference indicates that either of the algorithms outperforms the other. In seven cases
the MIST-based algorithm was better, while the sample-based version won in four, and
there was one tie. Remember that the sample-based algorithm takes advantage of the "data
usage" quantity computed by our algorithm. Without it, it would be weaker or slower,
depending on how conservative the sample size was.
6 Conclusion and future work
We have presented an algorithm that applies a "probably approximately correct" approach
to dependency-tree construction for numeric and categorical data. Experiments in sets with
up to millions of records and hundreds of attributes show it is capable of processing massive
data-sets in time that is constant in the number of records, with just a minor loss in output
quality.
Future work includes embedding our algorithm in a framework for fast Bayes' Net structure
search.
An additional issue we would like to tackle is disk access. One advantage the full-data
algorithm has is that it is easily executed with a single sequential scan of the data file.
We will explore the ways in which this behavior can be attained or approximated by our
algorithm.
While we have derived formulas for both numeric and categorical data, we currently do not
allow both types of attributes to be present in a single network.
Acknowledgments
We would like to thank Mihai Budiu, Scott Davies, Danny Sleator and Larry Wasserman
for helpful discussions, and Andy Connolly for providing access to data.
References
[1] Nir Friedman, Iftach Nachman, and Dana Pe'er. Learning Bayesian network structure from massive datasets: The "sparse candidate" algorithm. In Proceedings of the
15th Conference on Uncertainty in Artificial Intelligence (UAI-99), pages 206–215,
Stockholm, Sweden, 1999.
[2] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with
dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.
[3] Marina Meila. Learning with Mixtures of Trees. PhD thesis, Massachusetts Institute
of Technology, 1999.
[4] N. Friedman, M. Goldszmidt, and T. J. Lee. Bayesian Network Classification with
Continuous Attributes: Getting the Best of Both Discretization and Parametric Fitting.
In Jude Shavlik, editor, International Conference on Machine Learning, 1998.
[5] Robert Endre Tarjan. Data structures and network algorithms, volume 44 of CBMS-NSF Reg. Conf. Ser. Appl. Math. SIAM, 1983.
[6] Oded Maron and Andrew W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Jack D. Cowan, Gerald
Tesauro, and Joshua Alspector, editors, Advances in Neural Information Processing
Systems, volume 6, pages 59–66, Denver, Colorado, 1994. Morgan Kaufmann.
[7] Andrew W. Moore and Mary S. Lee. Efficient algorithms for minimizing cross validation error. In Proceedings of the 11th International Conference on Machine Learning
(ICML-94), pages 190–198. Morgan Kaufmann, 1994.
[8] Pedro Domingos and Geoff Hulten. Mining high-speed data streams. In Raghu Ramakrishnan, Sal Stolfo, Roberto Bayardo, and Ismail Parsa, editors, Proceedings of
the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining (KDD-00), pages 71–80, N. Y., August 20–23 2000. ACM Press.
[9] Pedro Domingos and Geoff Hulten. A general method for scaling up machine learning
algorithms and its application to clustering. In Carla Brodley and Andrea Danyluk,
editors, Proceedings of the 17th International Conference on Machine Learning, San
Francisco, CA, 2001. Morgan Kaufmann.
[10] Pedro Domingos and Geoff Hulten. Learning from infinite data in finite time. In
Proceedings of the 14th Neural Information Processing Systems (NIPS-2001), Vancouver, British Columbia, Canada, 2001.
[11] Marina Meila. An accelerated Chow and Liu algorithm: fitting tree distributions to
high dimensional sparse data. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML-99), Bled, Slovenia, 1999.
[12] Fazlollah Reza. An Introduction to Information Theory, pages 282–283. Dover Publications, New York, 1994.
[13] Dan Pelleg and Andrew Moore. Using Tarjan's red rule for fast dependency tree
construction. Technical Report CMU-CS-02-116, Carnegie-Mellon University, 2002.
[14] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998.
http://www.ics.uci.edu/~mlearn/MLRepository.html.
[15] S. Hettich and S. D. Bay. The UCI KDD archive, 1999. http://kdd.ics.uci.edu.
yi:5 joshua:1 seen:1 minimum:6 additional:3 morgan:3 determine:4 signal:1 full:10 multiple:2 technical:1 cross:3 marina:2 variant:1 cmu:3 iteration:1 jude:1 cell:1 addition:1 separately:1 interval:24 grow:1 fifty:1 rest:1 archive:1 probably:2 file:1 cowan:1 seem:1 call:2 near:2 enough:3 easy:1 iterate:1 zi:3 reduce:1 regarding:1 translates:3 whether:1 heavier:3 passed:1 accelerating:1 effort:2 proceed:1 york:1 repeatedly:1 ignored:1 generally:3 useful:1 clear:2 listed:2 involve:1 amount:10 http:2 exist:1 per:2 blue:1 carnegie:2 discrete:2 four:1 drawn:1 bayardo:1 graph:6 asymptotically:1 pelleg:2 fraction:5 sum:3 run:3 parameterized:1 powerful:1 respond:1 vided:1 letter:1 uncertainty:1 throughout:1 reader:1 hettich:1 draw:2 decision:5 acceptable:1 scaling:1 fee:1 bound:3 pay:1 guaranteed:1 followed:1 fold:7 oracle:3 scanned:2 constraint:2 generates:1 speed:4 extremely:1 performing:1 speedup:1 endre:1 smaller:1 em:1 making:2 iftach:1 invariant:3 sleator:1 pr:2 census:2 sided:1 taken:1 ln:1 resource:1 remains:1 turn:1 eventually:2 discus:1 describing:1 needed:1 know:2 fail:1 mind:1 count:1 reversal:1 raghu:1 available:1 operation:1 gaussians:1 apply:3 save:1 alternative:1 slower:2 algorithm1:1 altogether:1 original:3 assumes:1 running:6 clustering:2 include:1 exploit:1 approximating:1 already:1 question:1 quantity:1 parametric:1 dependence:1 traditional:1 interacts:1 thank:1 seven:1 spanning:15 reason:1 ratio:1 minimizing:2 providing:1 setup:3 executed:1 robert:1 potentially:1 intent:1 implementation:1 unknown:1 pretending:1 inspect:1 observation:1 datasets:2 finite:1 situation:1 looking:3 precise:1 rn:1 arbitrary:2 tarjan:5 august:1 canada:1 introduced:1 pair:12 namely:1 specified:3 conclusive:2 established:1 nip:1 able:3 proceeds:1 below:1 scott:1 belief:1 treated:1 scheme:1 technology:2 brodley:1 numerous:1 categorical:5 naive:1 columbia:1 roberto:1 nir:1 text:1 discovery:1 vancouver:1 determining:1 relative:8 loss:2 interesting:1 generation:2 proportional:1 
proven:1 dana:1 validation:3 digital:1 ipums:1 sufficient:2 xp:1 editor:4 heavy:1 guide:1 allow:2 weaker:2 side:1 institute:1 wide:2 shavlik:1 absolute:2 sparse:3 overcome:1 default:1 numeric:5 ignores:1 san:1 income:1 transaction:1 trim:1 implicitly:1 logic:1 keep:1 uai:1 pittsburgh:1 francisco:1 consuming:1 xi:6 kddcup:1 search:7 continuous:1 bay:1 table:2 tress:1 ca:1 complex:1 constructing:1 substituted:1 did:1 big:1 noise:1 n2:2 oded:1 rithm:1 slow:1 sub:2 parsa:1 exponential:1 lie:1 candidate:6 unfair:1 house:1 pe:1 interleaving:1 theorem:3 formula:1 british:1 bad:1 specific:1 er:1 covtype:1 dominates:1 inconclusive:2 exists:1 adding:1 sequential:1 gained:1 phd:1 sparseness:1 demand:1 easier:1 suited:1 carla:1 likely:1 explore:1 doubling:1 monotonic:2 applies:1 pedro:3 ramakrishnan:1 chance:1 determines:2 acm:2 coil:1 goal:1 narrower:1 consequently:1 price:1 hard:1 infinite:1 flag:1 conservative:1 total:4 called:1 experimental:3 merz:1 meaningful:1 support:1 latter:1 scan:2 goldszmidt:1 accelerated:1 evaluate:1 reg:1 tested:1 |
A Hierarchical Bayesian Markovian Model for
Motifs in Biopolymer Sequences
Eric P. Xing, Michael I. Jordan, Richard M. Karp and Stuart Russell
Computer Science Division
University of California, Berkeley
Berkeley, CA 94720
{epxing,jordan,karp,russell}@cs.berkeley.edu
Abstract
We propose a dynamic Bayesian model for motifs in biopolymer sequences which captures rich biological prior knowledge and positional
dependencies in motif structure in a principled way. Our model posits
that the position-specific multinomial parameters for monomer distribution are distributed as a latent Dirichlet-mixture random variable, and the
position-specific Dirichlet component is determined by a hidden Markov
process. Model parameters can be fit on training motifs using a variational EM algorithm within an empirical Bayesian framework. Variational inference is also used for detecting hidden motifs. Our model improves over previous models that ignore biological priors and positional
dependence. It has much higher sensitivity to motifs during detection
and a notable ability to distinguish genuine motifs from false recurring
patterns.
1 Introduction
The identification of motif structures in biopolymer sequences such as proteins and DNA
is an important task in computational biology and is essential in advancing our knowledge
about biological systems. For example, the gene regulatory motifs in DNA provide key
clues about the regulatory network underlying the complex control and coordination of
gene expression in response to physiological or environmental changes in living cells [11].
There have been several lines of research on statistical modeling of motifs [7, 10], which
have led to algorithms for motif detection such as MEME [1] and BioProspector [9]. Unfortunately, although these algorithms work well for simple motif patterns, often they are
incapable of distinguishing what biologists would recognize as a true motif from a random
recurring pattern [4], and provide no mechanism for incorporating biological knowledge of
motif structure and sequence composition.
Most motif models assume independence of position-specific multinomial distributions of
monomers such as nucleotides (nt) and amino acids (aa). Such strategies contradict our intuition that the sites in motifs naturally possess spatial dependencies for functional reasons.
Furthermore, the vague Dirichlet prior used in some of these models acts as no more than
a smoother, taking little consideration of the rich prior knowledge in biologically identified motifs. In this paper we describe a new model for monomer distribution in motifs.
Our model is based on a finite set of informative Dirichlet distributions and a (first-order)
Markov model for transitions between Dirichlets. The distribution of the monomers is
a continuous mixture of position-specific multinomials which admit a Dirichlet prior according to the hidden Markov states, introducing both multi-modal prior information and
dependencies. We also propose a framework for decomposing the general motif model into
a local alignment model for motif pattern and a global model for motif instance distribution,
which allows complex models to be developed in a modular way.
To simplify our discussion, we use DNA motif modeling as a running example in this paper,
though it should be clear that the model is applicable to other sequence modeling problems.
2 Preliminaries
DNA motifs are short (about 6-30 bp) stochastic string patterns (Figure 1) in the regulatory
sequences of genes that facilitate control functions by interacting with specific transcriptional regulatory proteins. Each motif typically appears once or multiple times in the control regions of a small set of genes. Each gene usually harbors several motifs. We do not
know the patterns of most motifs, in which gene they appear and where they appear. The
goal of motif detection is to identify instances of possible motifs hidden in sequences and
learn a model for each motif for future prediction.
A regulatory DNA sequence can be fully specified by a character string y = (y_1, ..., y_T), y_t ∈ {A, T, C, G}, and an indicator string x that signals the locations of the motif occurrences. The reason to call a motif a stochastic string pattern rather than a word is due to the variability in the "spellings" of different instances of the same motif in the genome. Conventionally, biologists display a motif pattern (of length L) by a multi-alignment of all its instances. The stochasticity of motif patterns is reflected in the heterogeneity of nucleotide species appearing in each column (corresponding to a position or site in the motif) of the multi-alignment. We denote the multi-alignment of all instances of a motif specified by the indicator string x in sequence y by A(x, y). Since any A can be characterized by the nucleotide counts for each column, we define a counting matrix h(A) = (h_1, ..., h_L), where each column h_l is an integer vector with four elements, giving the number of occurrences of each nucleotide at position l of the motif. (Similarly we can define the counting vector h(y) for the whole sequence y.) With these settings, one can model the nt-distribution of a position of the motif by a position-specific multinomial distribution, Multi(θ_l). Formally, the problem of inferring x and θ = (θ_1, ..., θ_L) (often called a position-weight matrix, or PWM), given a sequence set Y, is motif detection in a nutshell.¹
Figure 1: Yeast motifs abf1 (21), gal4 (14), gcn4 (24), gcr1 (17), mat-a2 (12), mcb (16), mig1 (11) and crp (24) (solid line; number of instances in parentheses) with 30 bp flanking regions (dashed line). The x axis indexes position l and the y axis represents the information content of the multinomial distribution of nt at position l. Note the two typical patterns: the U-shape and the bell-shape.
Figure 2: (Left) A general motif model is a Bayesian multinet. Conditional on the value of the indicator x_t, the observation y_t admits different distributions (round-cornered boxes) parameterized by θ. (Right) The HMDM model for motif instances specified by a given x: hidden prototype indicators q_l evolve under a first-order Markov process and select the Dirichlet prior of each position-specific multinomial θ_l, which generates the aligned nucleotides y_{m,l}. Boxes are plates representing replicates.
¹ Multiple motif detection can be formulated in a similar way, but for simplicity, we omit this elaboration. See the full paper for details. Also for simplicity, we omit the superscript (sequence index) of the variables x and y wherever it is unnecessary.
3 Generative models for regulatory DNA sequences
3.1 General setting and related work
Without loss of generality, assume that the occurrences of motifs in a DNA sequence, as indicated by x, are governed by a global distribution p(x | θ_x, G); for each type of motif, the nucleotide sequence pattern shared by all its instances admits a local alignment model p(A(x, y) | θ_y, L). (Usually, the background non-motif sequences are modeled by a simple conditional model p(y_bg | x, θ_0), where the background nt-distribution parameters θ_0 are assumed to be learned a priori from the entire sequence and supplied as constants in the motif detection process.) The symbols θ_x, θ_y, G, L stand for the parameters and model classes in the respective submodels. Thus, the likelihood of a regulatory sequence y is:

p(y | θ_x, θ_y, θ_0, G, L) = Σ_x p(x | θ_x, G) p(y | x, θ_y, θ_0, L)
                           = Σ_x p(x | θ_x, G) p(y_bg | x, θ_0) p(A(x, y) | θ_y, L),   (1)

where y_bg denotes the background (non-motif) portion of y under indicator x. Note that θ_y here is not necessarily equivalent to the position-specific multinomial parameters θ in Eq. 2 below, but is a generic symbol for the parameters of a general model of aligned motif instances.
The model p(x | θ_x, G) captures properties such as the frequencies of different motifs and the dependencies between motif occurrences. Although specifying this model is an important aspect of motif detection and remains largely unexplored, we defer this issue to future work. In the current paper, our focus is on capturing the intrinsic properties within motifs that can help to improve sensitivity and specificity to genuine motif patterns. For this, the key lies in the local alignment model p(A(x, y) | θ_y, L), which determines the PWM of the motif. Depending on the value of the latent indicator x_t (a motif or not at position t), y_t admits different probabilistic models, such as a motif alignment model or a background model. Thus sequence y is characterized by a Bayesian multinet [6], a mixture model in which each component of the mixture is a specific nt-distribution model corresponding to sequences of a particular nature. Our goal in this paper is to develop an expressive local alignment model p(A(x, y) | θ_y, L) capable of capturing characteristic site-dependencies in motifs.
In the standard product-multinomial (PM) model for local alignment, the columns of a PWM are assumed to be independent [9]. Thus the likelihood of a multi-alignment is:

p(A(x, y) | θ, L_PM) = ∏_{l=1}^{L} ∏_{k ∈ {A,T,C,G}} θ_{l,k}^{h_{l,k}},   (2)

where h_{l,k} is the count of nucleotide k in column l of the counting matrix.
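Under this independence assumption the alignment log-likelihood decomposes over columns into count-weighted log-parameters. A minimal sketch (ours, not from the paper; `h` and `theta` follow the counting-matrix convention of Section 2):

```python
import math

def pm_loglik(h, theta):
    """Log of Eq. 2: sum over columns l and nucleotides k of
    h[l][k] * log(theta[l][k]); zero-count terms are skipped."""
    return sum(h[l][k] * math.log(theta[l][k])
               for l in range(len(h)) for k in range(len(h[l]))
               if h[l][k] > 0)

# Two columns observed in 4 instances; theta is the ML estimate h / 4.
h = [[2, 2, 0, 0], [0, 0, 4, 0]]
theta = [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
ll = pm_loglik(h, theta)   # 4 * log(0.5) + 4 * log(1.0)
```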
Although a popular model for many motif finders, PM nevertheless is sensitive to noise and
random or trivial recurrent patterns, and is unable to capture potential site-dependencies
inside the motifs. Pattern-driven auxiliary submodels (e.g., the fragmentation model [10])
or heuristics (e.g., split a "two-block" motif into two coupled sub-motifs [9, 1]) have been
developed to handle special patterns such as the U-shaped motifs, but they are inflexible
and difficult to generalize. Some of the literature has introduced vague Dirichlet priors for θ in the PM [2, 10], but they are primarily used for smoothing rather than for explicitly incorporating prior knowledge about motifs.
We depart from the PM model and introduce a dynamic hierarchical Bayesian model for
motif alignment A, which captures site dependencies inside the motif so that we can predict
biologically more plausible motifs, and incorporate prior knowledge of nucleotide frequencies of general motif sites. In order to keep the local alignment model our main focus as
well as simplifying the presentation, we adopt an idealized global motif distribution model
called ?one-per-sequence? [8], which, as the name suggests, assumes each sequence harbors one motif instance (at an unknown location). Generalization to more expressive global
models is straightforward and is described in the full paper.
3.2 Hidden Markov Dirichlet-Multinomial (HMDM) Model
In the HMDM model, we assume that there are M underlying latent nt-distribution prototypes, according to which position-specific multinomial distributions of nt are determined, and that each prototype is represented by a Dirichlet distribution. Furthermore, the choice of prototype at each position in the motif is governed by a first-order Markov process. More precisely, a multi-alignment A containing N motif instances is generated by the following process. First we sample a sequence of prototype indicators q = (q_1, ..., q_L) from a first-order Markov process with initial distribution π and transition matrix B. Then we repeat the following for each column l = 1, ..., L: (1) A component from a mixture of M Dirichlets {Dir(α_1), ..., Dir(α_M)}, where each α_i = (α_{i,A}, ..., α_{i,G}), is picked according to indicator q_l. Say we picked α_i. (2) A multinomial distribution θ_l is sampled according to p(θ_l | α_i), the probability defined by Dirichlet component α_i over all such distributions. (3) All the nucleotides in column l are generated i.i.d. according to Multi(θ_l). The complete likelihood of motif alignment A characterized by counting matrix h is:

p(A, q, θ | π, B, α) = π_{q_1} ∏_{l=2}^{L} B_{q_{l-1}, q_l} ∏_{l=1}^{L} p(θ_l | α_{q_l}) ∏_{l=1}^{L} ∏_{k} θ_{l,k}^{h_{l,k}}.   (3)
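The generative process above can be sampled directly. The sketch below (our illustration; π, B and the Dirichlet parameters α are made-up toy values, not those learned in Section 5) draws the prototype chain and then the column multinomials, with Dirichlet samples obtained as normalized Gamma draws:

```python
import random

def sample_hmdm_columns(pi, B, alphas, L, rng):
    """Sample prototype indicators q_1..q_L from the Markov chain and, for
    each column l, a multinomial theta_l ~ Dirichlet(alpha[q_l])."""
    q, thetas = [], []
    state = rng.choices(range(len(pi)), weights=pi)[0]
    for l in range(L):
        if l > 0:
            state = rng.choices(range(len(pi)), weights=B[state])[0]
        q.append(state)
        g = [rng.gammavariate(a, 1.0) for a in alphas[state]]
        total = sum(g)
        thetas.append([x / total for x in g])
    return q, thetas

rng = random.Random(0)
pi = [0.5, 0.5]                       # toy initial distribution
B = [[0.9, 0.1], [0.1, 0.9]]          # sticky transitions ("site clustering")
alphas = [[10.0, 0.1, 0.1, 0.1],      # near-pure (conserved) prototype
          [1.0, 1.0, 1.0, 1.0]]       # heterogeneous prototype
q, thetas = sample_hmdm_columns(pi, B, alphas, L=8, rng=rng)
```

The sticky transition matrix produces runs of conserved and unconserved columns, which is exactly the site-clustering behavior discussed below.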
The major role of HMDM is to impose dynamic priors for modeling data whose distributions exhibit temporal or spatial dependencies. As Figure 2(b) makes clear, this model is
not a simple HMM for discrete sequences. In such a model the transition would be between
the emission models (i.e., multinomials) themselves, and the output at each time would be
a single data instance in the sequence. In HMDM, the transitions are between different
priors of the emission models, and the direct output of the HMM is the parameter vector
of a generative model, which will be sampled multiple times at each position to generate
random instances.
This approach is especially useful when we have either empirical or learned prior knowledge about the dynamics of the data to be modeled. For example, for the case of motifs, biological evidence show that conserved positions (manifested by a low-entropy multinomial
nt-distribution) are likely to concatenate, and maybe so do the less conserved positions.
However, it is unlikely that conserved and less conserved positions are interpolated [4].
This is called site clustering, and is one of the main motivations for the HMDM model.
4 Inference and Learning
4.1 Variational Bayesian Learning
In order to do Bayesian estimation of the motif parameters θ, and to predict the locations of motif instances via x, we need to be able to compute the posterior distribution p(θ, q | A), which is infeasible in a complex motif model. Thus we turn to variational approximation [5]. We seek to approximate the joint posterior over parameters θ and hidden states q with a simpler distribution Q(θ, q) = Q_θ(θ) Q_q(q), where Q_θ and Q_q can be, for the time being, thought of as free distributions to be optimized. Using Jensen's inequality, we have the following lower bound on the log likelihood:

ln p(A) ≥ ∫ dθ Σ_q Q_θ(θ) Q_q(q) ln [ p(A, q, θ) / (Q_θ(θ) Q_q(q)) ]
        = ln p(A) − KL( Q_θ(θ) Q_q(q) ‖ p(θ, q | A) ).   (4)

Thus, maximizing the lower bound of the log likelihood (call it F(Q_θ, Q_q)) with respect to the free distributions Q_θ and Q_q is equivalent to minimizing the KL divergence between the true joint posterior and its variational approximation. Keeping either Q_θ or Q_q fixed and maximizing F with respect to the other, we obtain the following coupled updates:

Q_q(q) ∝ exp ⟨ln p(A, q | θ)⟩_{Q_θ},   (5)
Q_θ(θ) ∝ p(θ) exp ⟨ln p(A, q | θ)⟩_{Q_q}.   (6)

In our motif model, the prior and the conditional submodels form a conjugate-exponential pair (Dirichlet-Multinomial). It can be shown that in this case we can essentially recover the same form of the original conditional and prior distributions in their variational approximations, except that the parameterization is augmented with appropriate Bayesian and posterior updates, respectively:

Q_q(q) = p(q | A, ⟨φ(θ)⟩),   (7)
Q_θ(θ) = p(θ | ⟨u(A, q)⟩),   (8)

where φ(θ) = ln θ is the natural parameter, u(A, q) denotes the sufficient statistics, and the expectations are taken under Q_θ and Q_q respectively.
As Eqs. 7 and 8 make clear, the locality of inference and marginalization on the latent variables is preserved in the variational approximation, which means probabilistic calculations
can be performed in the prior and the conditional models separately and iteratively. For
motif modeling, this modular property means that the motif alignment model and motif
distribution model can be treated separately with a simple interface of the posterior mean
for the motif parameters and expected sufficient statistics for the motif instances.
4.2 Inference and learning
According to Eq. 8, we replace the counting matrix h in Eq. 3, which is the output of the HMDM model, by the expected counting matrix ⟨h⟩ obtained from inference in the global distribution model (we will handle this later, thanks to the locality preservation property of inference in variational approximations), and proceed with the inference as if we have "observations" ⟨h⟩. Integrating over θ, we have the marginal distribution:

p(⟨h⟩, q | π, B, α) = π_{q_1} ∏_{l=2}^{L} B_{q_{l-1}, q_l} ∏_{l=1}^{L} p(⟨h_l⟩ | α_{q_l}),   (9)

a standard HMM with emission probability:

p(⟨h_l⟩ | α_i) = [ Γ(|α_i|) / Γ(|α_i| + |⟨h_l⟩|) ] ∏_{k} [ Γ(α_{i,k} + ⟨h_{l,k}⟩) / Γ(α_{i,k}) ],   (10)

where |·| denotes the sum of the elements of a vector. We can compute the posterior probability of the hidden states p(q_l | ⟨h⟩) and the matrix of co-occurrence probabilities p(q_{l-1}, q_l | ⟨h⟩) using the standard forward-backward algorithm.
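The emission term of Eq. 10 is the Dirichlet-multinomial (Pólya) marginal of a single column, best evaluated in log space with `lgamma`. A minimal sketch (ours, not the authors' code); any column-multiplicity factor is omitted since it is constant across hidden states and cancels in the forward-backward posteriors:

```python
import math

def log_polya(counts, alpha):
    """log p(counts | alpha) for one column, marginalizing theta over a
    Dirichlet(alpha) prior, as in Eq. 10."""
    n, a0 = sum(counts), sum(alpha)
    return (math.lgamma(a0) - math.lgamma(a0 + n)
            + sum(math.lgamma(a + c) - math.lgamma(a)
                  for a, c in zip(alpha, counts)))

# Under Dirichlet(1,1,1,1) the probability of an all-'A' column of 3
# letters equals the Polya-urn product (1/4)(2/5)(3/6) = 0.05.
p_pure = math.exp(log_polya([3, 0, 0, 0], [1, 1, 1, 1]))
p_single = math.exp(log_polya([1, 0, 0, 0], [1, 1, 1, 1]))   # = 1/4
```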
We next compute the expectation of the natural parameters (which is ln θ for multinomial parameters). Given the "observations" ⟨h⟩, the posterior mean is computed as follows:

⟨ln θ_{l,k}⟩ = Σ_{i=1}^{M} p(q_l = i | ⟨h⟩) [ Ψ(α_{i,k} + ⟨h_{l,k}⟩) − Ψ(|α_i| + |⟨h_l⟩|) ],   (11)

where p(q_l = i | ⟨h⟩) is the posterior probability of the hidden state (an output of the forward-backward algorithm) and Ψ(x) = d ln Γ(x)/dx is the digamma function.
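Eq. 11 mixes Dirichlet posterior means of ln θ with the state posteriors from forward-backward. Stdlib Python lacks a digamma function, so the sketch below (ours; the state posteriors are toy stand-ins for forward-backward output) implements Ψ by the usual recurrence plus asymptotic series:

```python
import math

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x+1) - 1/x, then the
    # asymptotic series once x >= 6
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def expected_log_theta(counts, alphas, post):
    """Eq. 11 for one column: <ln theta_{l,k}> = sum_i p(q_l = i | <h>) *
    [psi(alpha_{i,k} + h_{l,k}) - psi(|alpha_i| + |h_l|)]."""
    n = sum(counts)
    out = [0.0] * len(counts)
    for alpha, p in zip(alphas, post):
        a0 = sum(alpha)
        for k, (a, c) in enumerate(zip(alpha, counts)):
            out[k] += p * (digamma(a + c) - digamma(a0 + n))
    return out

elt = expected_log_theta([3, 0, 0, 0],
                         [[10.0, 0.1, 0.1, 0.1], [1.0, 1.0, 1.0, 1.0]],
                         post=[0.8, 0.2])   # toy state posterior
```

Every entry of `elt` is negative, as expected for an expectation of log-probabilities.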
Following Eq. 7, given the posterior means of the multinomial parameters, computing the expected counting matrix ⟨h⟩ under the one-per-sequence global model for sequence y is straightforward based on Eq. 2, and we simply give the final results:

⟨h_{l,k}⟩ = Σ_{t} p(x_t = 1 | y) δ(y_{t+l−1}, k),   (12)

where

p(x_t = 1 | y) ∝ ∏_{l=1}^{L} exp⟨ln θ_{l, y_{t+l−1}}⟩ / θ_{0, y_{t+l−1}}   (13)

is the posterior probability that a motif instance starts at position t of sequence y, δ(·, ·) is the Kronecker delta, and θ_0 denotes the background nt-distribution.
Bayesian estimates of the multinomial parameters θ for the position-specific nt-distribution of the motif are obtained via fixed-point iteration under the following EM-like procedure:

- Variational E step: Compute the expected sufficient statistic, the count matrix ⟨h⟩, via inference in the global motif model given ⟨ln θ⟩.

- Variational M step: Compute the expected natural parameter ⟨ln θ⟩ via inference in the local motif alignment model given ⟨h⟩.
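The two steps form a simple fixed-point driver. In the sketch below (ours; the step functions are trivial stand-ins, and only the alternation structure is the point), `e_step` would implement Eqs. 12-13 and `m_step` Eq. 11:

```python
def variational_em(init_log_theta, e_step, m_step, iters=10):
    """Alternate the variational E step (expected counts <h> given
    <ln theta>) and M step (<ln theta> given <h>) to a fixed point."""
    log_theta = init_log_theta
    for _ in range(iters):
        h = e_step(log_theta)     # stands in for Eqs. 12-13
        log_theta = m_step(h)     # stands in for Eq. 11
    return log_theta

# Trivial contractive stand-ins: the iteration x -> x/2 + 1 converges to 2.
demo = variational_em(0.0,
                      e_step=lambda lt: lt / 2.0,
                      m_step=lambda h: h + 1.0)
```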
This basic inference and learning procedure provides a framework that scales readily to
more complex models. For example, the motif distribution model p(x | θ_x, G) can be made more sophisticated so as to model complex properties of multiple motifs such as motif-level dependencies (e.g., co-occurrence, overlaps and concentration within regulatory modules) without complicating the inference in the local alignment model. Similarly, the motif alignment model can also be more expressive (e.g., a mixture of HMDMs) without interfering with inference in the motif distribution model.
5 Experiments
We test the HMDM model on a motif collection from The Promoter Database of Saccharomyces cerevisiae (SCPD). Our dataset contains twenty motifs, each with 6 to 32 instances, all of which were identified via biological experiments.
We begin with an experiment showing how HMDM can capture intrinsic properties of
the motifs. The posterior distribution of the position-specific multinomial parameters θ, reflected in the parameters of the Dirichlet mixtures learned from data, can reveal the nt-distribution patterns of the motifs. Examining the transition probabilities between different Dirichlet components further tells us about the dependencies between adjacent positions (which indirectly reveals the "shape" information). We set the total number of Dirichlet
components to be 8 based on an intelligent guess (using biological intuition), and Figure 3(a) shows the Dirichlet parameters fitted from the dataset via empirical Bayes estimation. Among the 8 Dirichlet components, numbers 1-4 favor a pure distribution of single
nucleotides A, T, G, C, respectively, suggesting they correspond to "homogeneous" prototypes, whereas numbers 7 and 8 favor a near-uniform distribution of all 4 nt-types, hence "heterogeneous" prototypes. Components 5 and 6 are somewhat in between. Such patterns
agree well with the biological definition of motifs. Interestingly, from the learned transition
model of the HMM (Figure 3(b)), it can be seen that the transition probability from a homogeneous prototype to a heterogeneous prototype is significantly less than that between
two homogeneous or two heterogeneous prototypes, confirming an empirical speculation
in biology that motifs have the so-called site clustering property [4].
Figure 3: (a) Dirichlet hyperparameters. (b) Markov transition matrix. (c) Boxplots of hit and mishit
rate of HMDM(1) and PM(2) on two motifs used during HMDM training.
Are the motif properties captured in HMDM useful in motif detection? We first examine
an HMDM trained on the complete dataset for its ability to detect motifs used in training in
the presence of a "decoy": a permuted motif. By randomly permuting the positions in the motif, the shapes of the "U-shaped" motifs (e.g., abf1 and gal4) change dramatically.² We
insert each instance of a motif/decoy pair into a 300-500 bp random background sequence, at random positions t and t′.³ We allow a 3 bp offset as a tolerance window, and score a hit when the found position t̂ is within 3 bp of t (and a mis-hit when t̂ is within 3 bp of t′), where t̂ is the position where a motif instance is found. The (mis)hit rate is the proportion of (mis)hits to the total
number of motif instances to be found in an experiment. Figure 3(c) shows a boxplot of the
hit and mishit rate of HMDM on abf1 and gal4 over 50 randomly generated experiments.
Note the dramatic contrast of the sensitivity of the HMDM to true motifs compared to that
of the PM model (which is essentially the MEME model).
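The scoring rule above reduces to a few comparisons; a sketch (ours, with hypothetical names; `tol` is the 3 bp tolerance window):

```python
def score(found, true_pos, decoy_pos, tol=3):
    """Classify one predicted motif start against the planted true
    and decoy positions."""
    if abs(found - true_pos) <= tol:
        return "hit"
    if abs(found - decoy_pos) <= tol:
        return "mishit"
    return "miss"

results = [score(f, true_pos=100, decoy_pos=200) for f in (101, 198, 150)]
```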
Figure 4: Motif detection on an independent test dataset (the 8 motifs in Figure 1). (a) True motif only; (b) true motif + decoy. Four models used are indexed as: 1. HMDM(bell); 2. HMDM(U); 3. HMDM-mixture; 4. PM. Boxplots of hit-rate are over 80 randomly generated experiments (the center of the notch is the median).
How well does HMDM generalize? We split our data into a training set and a testing set,
and further divide the training set roughly based on bell-shaped and U-shaped patterns to
train two different HMDMs, respectively, and a mixture of HMDMs. In the first motif
finding task, we are given sequences each of which has only one true motif instance at a
random position. The results are given in Figure 4(a). We see that for 4 motifs, using an
HMDM or the HMDM-mixtures significantly improves performance over the PM model. In
three other cases they are comparable, but for motif mcb, all HMDM models lose. Note that
mcb is very "conserved," which is in fact "atypical" in the training set. It is also very short,
which diminishes the utility of an HMM. Another interesting observation from Figure 4(a)
is that even when both HMDMs perform poorly, the HMDM-mixtures can still perform
well (e.g., mat-a2), presumably because of the extra flexibility provided by the mixture
model.
The second task is more challenging and biologically more realistic, where we have both
the true motifs and the permuted "decoys." We show only the hit-rate over 80 experiments
in Figure 4(b). Again, in most cases HMDM or the HMDM mixture outperforms PM.
6 Conclusions
We have presented a generative probabilistic framework for modeling motifs in biopolymer
sequences. Naively, categorical random variables with spatial/temporal dependencies can
be modeled by a standard HMM with multinomial emission models. However, the limited
flexibility of each multinomial distribution and the concomitant need for a potentially large
number of states to model complex domains may require a large parameter count and lead
to overfitting. The infinite HMM [3] solve this issue by replacing the emission model with
a Dirichlet process which provides potentially infinite flexibility. However, this approach
is purely data-driven and provides no mechanism for explicitly capturing multi-modality
2
By permutation we mean each time the same permuted order is applied to all the instances of a
motif so that the multinomial distribution of each position is not changed but their order changed.
3
We resisted the temptation of using biological background sequences because we would not
know if and how many other motifs are in such sequences, which renders them ill-suited for purposes
of evaluation.
in the emission and the transition models or for incorporating informative priors. Furthermore, when the output of the HMM involves hidden variables (as for the case of motif
detection), inference and learning is further complicated.
HMDM assumes that positional dependencies are induced at a higher level among the finite
number of informative Dirichlet priors rather than between the multinomials themselves.
Within such a framework, we can explicitly capture the multi-modalities of the multinomial
distributions governing the categorical variable (such as motif sequences at different positions) and the dependencies between modalities, by learning the model parameters from
training data and using them for future predictions. In motif modeling, such a strategy was
used to capture different distribution patterns of nucleotides (homogeneous and heterogeneous) and transition properties between patterns (site clustering). Such a prior proves to
be beneficial in searching for unseen motifs in our experiment and helps to distinguish more
probable motifs from biologically meaningless random recurrent patterns.
Although in the motif detection setting the HMDM model involves a complex missing
data problem in which both the output and the internal states of the HMDM are hidden,
we show that a variational Bayesian learning procedure allows probabilistic inference in
the prior model of motif sequence patterns and in the global distribution model of motif
locations to be carried out virtually separately with a Bayesian interface connecting the
two processes. This divide and conquer strategy makes it much easier to develop more
sophisticated models for various aspects of motif analysis without being overburdened by
the somewhat daunting complexity of the full motif problem.
Modeling Midazolam's Effect on the
Hippocampus and Recognition Memory

Kenneth J. Malmberg
Department of Psychology
Indiana University
Bloomington, IN 47405

Rene Zeelenberg
Department of Psychology
Indiana University
Bloomington, IN 47405
[email protected]

Richard M. Shiffrin
Departments of Cognitive Science and Psychology
Indiana University
Bloomington, IN 47405
[email protected]

Abstract
The benzodiazepine Midazolam causes dense, but temporary, anterograde amnesia,
similar to that produced by hippocampal damage. Does the action of Midazolam on
the hippocampus cause less storage, or less accurate storage, of information in
episodic long-term memory? We used a simple variant of the REM model [18] to fit
data collected by Hirshman, Fisher, Henthorn, Arndt, and Passannante [9] on the
effects of Midazolam, study time, and normative word frequency on both yes-no and
remember-know recognition memory. That a simple strength model fit well was
contrary to the expectations of Hirshman et al. More important, within the
Bayesian-based REM modeling framework, the data were consistent with the view
that Midazolam causes less accurate storage, rather than less storage, of
information in episodic memory.
1 Introduction

Damage to the hippocampus (and nearby regions), often caused by lesions, leaves
normal cognitive function intact in the short term, including long-term memory
retrieval, but prevents learning of new information. We have found a way to begin
to distinguish two alternative accounts for this learning deficit: Does damage
cause less storage, or less accurate storage, of information in long-term
episodic memory? We addressed this question by using the REM model of recognition
memory [18] to fit data collected by Hirshman and colleagues [9], who tested
recognition memory in normal participants given either saline (control group) or
Midazolam, a benzodiazepine that temporarily causes anterograde amnesia with
effects that generally mimic those found after hippocampal damage.

2 Empirical findings
The participants in Hirshman et al. [9] studied lists of words that varied in
normative word frequency (i.e., low-frequency vs. high-frequency) and the amount
of time allocated for study (either not studied, or studied for 500, 1200, or
2500 ms per word). These variables are known to have a robust effect on
recognition memory in normal populations: Low-frequency (LF) words are better
recognized than high-frequency (HF) words, and an increase in study time improves
recognition performance. In addition, the probability of responding 'old' to
studied words (termed hit rate, or HR) is higher for LF words than for HF words,
and the probability of responding 'old' to unstudied words (termed false alarm
rate, or FAR) is lower for LF words than for HF words. This pattern of data is
commonly known as a 'mirror effect' [7].

In Hirshman et al. [9], participants received either saline or Midazolam and then
studied a list of words. After a delay of about an hour they were shown studied
words ('old') and unstudied words ('new') and asked to give old-new recognition
and remember/know judgments. The HR and FAR findings are depicted in Figure 1 as
the large circles (filled for LF test words and unfilled for HF test words). The
results from the saline condition, given in the left panel, replicate the
standard effects in the literature: In the figure, the points labeled with zero
study time give FARs (for new test items), and the other points give HRs (for old
test items). Thus we see that the saline group exhibits better performance for LF
words and a mirror effect: For LF words, FARs are lower and HRs are higher. The
Midazolam group of course gave lower performance (see right panel). More
critically, the pattern of results differs from that for the saline group: The
mirror effect was lost. LF words produced both lower FARs and lower HRs.
[Figure 1 appears here: two panels plotting p('old') (y-axis) against study time
in ms (x-axis), showing HRs and FARs in the saline (left) and Midazolam (right)
conditions for LF and HF words, data and model fits.]

Figure 1. Yes-no recognition data from Hirshman et al. and predictions of a REM
model. Zero ms study time refers to 'new' items, so the data give the false-alarm
rate (FAR). Data shown for non-zero study times give hit rates (HR). Only the REM
parameter c varies between the saline and Midazolam conditions. The fits are
based on 300 Monte Carlo simulations using g_LF = .325, g = .40, g_HF = .45,
w = 16, t_1 = 4, a = .8, u* = .025, c_sal = .77, c_mid = .25, Crit_O/N = .92.
LF = low-frequency words and HF = high-frequency words.
The participants also indicated whether their 'old' judgments were made on the
basis of 'remembering' the study event or on the basis of 'knowing' the word was
studied even though they could not explicitly remember the study event [5]. Data
are shown in Figure 2. Of greatest interest for present purposes, 'know' and
'remember' responses were differently affected by the word frequency and the drug
manipulations. In the Midazolam condition, the conditional probability of a
'know' judgment (given an 'old' response) was consistently higher than that of a
'remember' judgment (for both HF and LF words). Moreover, these probabilities
were hardly affected by study time. A different pattern was obtained in the
saline condition. For HF words, the conditional probability of a 'know' judgment
was higher than that of a 'remember' judgment, but the difference decreased with
study time. Finally, for LF words, the conditional probability of a 'know'
judgment was higher than that of a 'remember' judgment for nonstudied foils, but
for studied targets the conditional probability of a 'remember' judgment was
higher than that of a 'know' judgment. The recognition and remember/know results
were interpreted by Hirshman et al. [9] to require a dual-process account; in
particular, the authors argued against memory 'strength' accounts [4, 6, 11].
Although not the main message of this note, it will be of some interest to memory
theorists to note that our present results show this conclusion to be incorrect.
[Figure 2 appears here: four panels plotting the conditional probabilities
p('remember' | 'old') and p('know' | 'old') (y-axis) against study time in ms
(x-axis), for HF-Saline, LF-Saline, HF-Midazolam, and LF-Midazolam, data and
model fits.]

Figure 2. Remember/know data from Hirshman et al. and predictions of a REM model.
The parameter values are those listed in the caption for Figure 1, plus there are
two remember-know criteria: for the saline group, Crit_R/K = 1.52; for the
Midazolam group, Crit_R/K = 1.30.
3 A REM model for recognition and remember/know judgments

A common way to conceive of recognition memory is to posit that memory is probed
with the test item, and the recognition decision is based on a continuous random
variable that is often conceptualized as the resultant strength, intensity, or
familiarity [6]. If the familiarity exceeds a subjective criterion, then the
subject responds 'old'. Otherwise, a 'new' response is made [8].

A subclass of this type of model accounts for the word-frequency mirror effect by
assuming that there exist four underlying distributions of familiarity values,
such that the means of these distributions are arranged along a familiarity scale
in the following manner: u(LF-new) < u(HF-new) < u(HF-old) < u(LF-old). The left
side of Figure 3 displays this relation graphically. A model of this type can
predict the recognition findings of Hirshman et al. (in press) if the effect of
Midazolam is to rearrange the underlying distributions on the familiarity scale
such that u(LF-old) < u(HF-old). The right side of Figure 3 displays this
relation graphically. The REM model of the word-frequency effect described by
Shiffrin and Steyvers [13, 18, 19] is a member of this class of models, as we
describe next.
REM [18] assumes that memory traces consist of vectors V, of length w, of
nonnegative integer feature values. Zero represents no information about a
feature; otherwise the values for a given feature are assumed to follow the
geometric probability distribution given as Equation 1: P(V = j) = (1-g)^(j-1) g,
for j = 1 and higher. Thus higher integer values represent feature values less
likely to be encountered in the environment. REM adopts a 'feature-frequency'
assumption [13]: the lexical/semantic traces of lower frequency words are
generated with a lower value of g (i.e., g_LF < g_HF). These lexical/semantic
traces represent general knowledge (e.g., the orthographic, phonological,
semantic, and contextual characteristics of a word) and have very many non-zero
feature values, most of which are encoded correctly. Episodic traces represent
the occurrence of stimuli in a certain environmental context; they are built of
the same feature types as lexical/semantic traces, but tend to be incomplete
(have many zero values) and inaccurate (the values do not necessarily represent
correctly the values of the presented event).
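The geometric feature assumption of Equation 1 is easy to simulate. The sketch
below is our own minimal illustration, not the authors' code; the values
g_LF = .325 and g_HF = .45 are taken from the Figure 1 caption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lexical_trace(w, g, rng):
    """Sample a length-w lexical/semantic feature vector whose values follow
    the geometric distribution of Eq. 1: P(V = j) = (1 - g)**(j - 1) * g for
    j = 1, 2, ...  Lower g yields rarer (higher-valued) features on average."""
    return rng.geometric(g, size=w)

# Feature-frequency assumption: g_LF < g_HF, so low-frequency words carry
# less common, more diagnostic feature values than high-frequency words.
lf_trace = sample_lexical_trace(16, 0.325, rng)
hf_trace = sample_lexical_trace(16, 0.45, rng)
```

Since the mean of the geometric distribution is 1/g, low-frequency traces carry
larger feature values on average, which is what later makes their matches more
diagnostic.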
When a word is studied, an incomplete and error-prone representation of the
word's lexical/semantic trace is stored in a separate episodic image. The
probability that a feature will be stored in the episodic image after t attempts
at storage is given as Equation 2: 1 - (1 - u*)^t, where u* is the probability of
storing a feature in an arbitrary unit of time. The number of attempts, t_j, at
storing a content feature for an item studied for j units of time is computed
from Equation 3: t_j = t_{j-1}(1 + e^(-aj)), where a is a rate parameter and t_1
is the number of attempts at storing a feature in the first 1 s of study. Thus,
increased study time increases the storage of features, but the gain in the
amount of information stored diminishes as the item is studied longer. Features
that are not copied from the lexical/semantic trace are represented by a value of
0. If storage of a feature does occur, the feature value is correctly copied from
the word's lexical/semantic trace with probability c. With probability 1-c the
value is incorrectly copied and sampled randomly from the long-run base-rate
geometric distribution, a distribution defined by g such that g_HF > g > g_LF.

[Figure 3 appears here: two schematic familiarity scales running from less to
more familiarity. Saline: new LF, new HF, old HF, old LF. Midazolam: new LF,
new HF, old LF, old HF.]

Figure 3. Arrangement of means of the theoretical distributions of strength-based
models that may give rise to Hirshman et al.'s findings. HF and LF = high- or
low-frequency words, respectively.
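Equations 2 and 3 can be sketched in a few lines. This is our own illustration,
not the authors' code; the exact recursive form of Equation 3 is reconstructed
from a garbled scan, and the values t_1 = 4, a = .8, and u* = .025 are read from
the Figure 1 caption.

```python
import math

def storage_attempts(j, t1=4.0, a=0.8):
    """Eq. 3 (reconstructed form): t_j = t_{j-1} * (1 + exp(-a*j)), the
    number of storage attempts for an item studied for j units of time."""
    t = t1
    for step in range(2, j + 1):
        t *= 1.0 + math.exp(-a * step)
    return t

def p_store(j, u_star=0.025):
    """Eq. 2: probability that a feature is stored after t_j attempts."""
    return 1.0 - (1.0 - u_star) ** storage_attempts(j)
```

The successive increments of p_store shrink as j grows, reproducing the
diminishing returns of study time described above.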
At test, a probe made with context features only is assumed to activate the
episodic traces, I_j, of the n list items and no others [24]. Then the content
features of the probe cue are matched in parallel to the activated traces. For
each episodic trace, I_j, the system notes the values of features of I_j that
match the corresponding feature of the cue (n_ijm stands for the number of
matching values in the j-th image that have value i), and the number of
mismatching features (n_jq stands for the number of mismatching values in the
j-th image). Next, a likelihood ratio, lambda_j, is computed for each I_j:

    lambda_j = (1-c)^(n_jq) * PROD_{i=1..inf} [ (c + (1-c) g (1-g)^(i-1)) / (g (1-g)^(i-1)) ]^(n_ijm)    (4)

lambda_j is the likelihood ratio for the j-th image. It can be thought of as a
match strength between the retrieval cue and I_j. It gives the probability of the
data (the matches and mismatches) given that the retrieval cue and the image
represent the same word (in which case features are expected to match, except for
errors in storage) divided by the probability of the data given that the
retrieval cue and the image represent different words (in which case features
match only by chance).

The recognition decision is based on the odds, Phi, giving the probability that
the test item is old divided by the probability the test item is new [18]. This
is just the average of the likelihood ratios:

    Phi = (1/n) SUM_{j=1..n} lambda_j    (5)

If the odds exceed a criterion, then an 'old' response is made. The default
criterion is 1.0 (which maximizes probability correct), although subjects could
of course deviate from this setting.
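Equations 4 and 5 translate almost directly into code. The sketch below is our
own illustration, not the authors' implementation; g = .40 and the saline value
c = .77 from the Figure 1 caption are used as defaults.

```python
def likelihood_ratio(cue, trace, g=0.40, c=0.77):
    """Eq. 4: lambda_j for one episodic image.  Zero-valued (unstored) trace
    features carry no information; each mismatching non-zero feature
    contributes a factor (1 - c); a match on value i contributes
    (c + (1 - c) * g * (1 - g)**(i - 1)) / (g * (1 - g)**(i - 1))."""
    lam = 1.0
    for q, v in zip(cue, trace):
        if v == 0:
            continue          # nothing stored for this feature
        if v == q:            # matching feature with value i = v
            base = g * (1.0 - g) ** (v - 1)
            lam *= (c + (1.0 - c) * base) / base
        else:                 # mismatching feature
            lam *= 1.0 - c
    return lam

def odds(cue, traces, g=0.40, c=0.77):
    """Eq. 5: Phi, the mean likelihood ratio over the activated images;
    respond 'old' when Phi exceeds the criterion."""
    lams = [likelihood_ratio(cue, t, g, c) for t in traces]
    return sum(lams) / len(lams)
```

Note that a match on a rare (high-valued) feature has a small base rate and so
contributes a large factor, which is the mechanism behind the LF hit-rate
advantage discussed below.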
Thus an 'old' response is given when there is more evidence that the test word is
old. Matching features contribute evidence that an item is old (contribute
factors to the product in Eq. 4 greater than 1.0) and mismatching features
contribute evidence that an item is new (contribute factors less than 1.0). REM
predicts an effect of study time because storage of more non-zero features
increases the number of matching target-trace features; this factor outweighs the
general increase in variance produced by greater numbers of non-zero features in
an image's vectors. REM predicts a LF HR advantage because the matching of the
more uncommon features associated with LF words produces greater evidence that
the item is old than the matching of the more common features associated with HF
words. For foils, however, every feature match is due to chance; such matching
occurs more frequently for HF than LF words because HF features are more common
[12]. This factor outweighs the higher diagnosticity of matches for the LF words,
and HF words are predicted to have higher FARs than LF words.
Much evidence points to the critical role of the hippocampal region in storing
episodic memory traces [1, 14, 15, 16, 20]. Interestingly, Midazolam has been
shown to affect the storage, but not the retrieval, of memory traces [22]. As
described above, there are two parameters in REM that affect the storage of
features in memory: u* determines the number of features that get stored, and c
determines the accuracy with which features get stored. In order to lower
performance, it could be assumed that Midazolam reduces the values of either or
both of these parameters. However, Hirshman et al.'s data constrain which of
these possibilities is viable.

Let us assume that Midazolam only causes the hippocampal region to store fewer
features, relative to the saline condition (i.e., u* is reduced). In REM, this
causes fewer terms in the product given by Eq. 4, and a lower value for the
result, on the average. Hence, if Midazolam causes fewer features to be stored,
subjects should approach chance-level performance for both HF and LF words:
LF(FAR) approaches HF(FAR) and LF(HR) approaches HF(HR). However, Hirshman et al.
found that the difference in the LF and HF FARs was not affected by Midazolam. In
REM this difference would not be much affected, if at all, by changes in
criterion, or changes in g that one might assume Midazolam induces. Thus, within
the framework of REM, the main effect of Midazolam on the functioning of the
hippocampal region is not to reduce the number of features that get stored.
Alternatively, let us assume that Midazolam causes the hippocampal region to
store 'noisier' episodic traces, as opposed to traces with fewer non-zero
features, instantiated in REM by decreasing the value of the c parameter (which
governs correct copying of a feature value). Decreasing c only slightly affects
the false alarm rates, because these FARs are based on chance matches. However,
decreasing c causes the LF and HF old-item distributions (see Figure 3) to
approach the LF and HF new-item distributions; when the decrease is large enough,
this factor must cause the LF and HF old-item distributions to reverse position.
The reversal occurs because the HF retrieval cues used to probe memory have more
common features (on average) than the LF retrieval cues, a factor that comes to
dominate when the true 'signal' (matching features in the target trace) begins to
disintegrate into noise (due to lowering of c).
Figure 1 shows predictions of a REM model incorporating the assumption that only
c varies between the saline and Midazolam groups, and only at storage. For
retrieval, the same c value was used in both the saline and Midazolam conditions
to calculate the likelihoods in Equation 4 (an assumption consistent with
retrieval tuned to the participant's lifetime learning, and consistent with prior
findings showing that Midazolam affects the storage of traces and not their
retrieval [17]). The criterion for an old/new judgment was set to .92, rather
than the normatively optimal value of 1.0, in order to obtain a good quantitative
fit, but the criterion did not vary between the Midazolam and saline groups, and
therefore is not of consequence for the present article. Within the REM
framework, then, the main effect of Midazolam is to cause the hippocampal region
to store more noisy episodic traces. These conclusions are based on the
recognition data. We turn next to the remember/know judgments.
We chose to model remember/know judgments in what is probably the simplest way.
The approach is based on the models described by Donaldson [4] and Hirshman and
Master [10, 11]. As described above, an 'old' decision is given when the
familiarity (i.e., activation, or in REM terms the odds) associated with a test
word exceeds the yes-no criterion. When this happens, a higher remember/know
criterion is applied. Words whose familiarity exceeds the higher remember/know
criterion are given the 'remember' response, and a 'know' response is given when
the remember/know criterion is not exceeded. Figure 2 shows that this model
predicts the effects of Midazolam and saline both qualitatively and
quantitatively. This fit was obtained by using slightly different remember/know
criteria in the saline and Midazolam conditions (1.40 and 1.26, respectively),
but all the qualitative effects are predicted correctly even when the same
criterion is adopted for remember/know.(1)

(1) Slight differences are predicted depending on the interrelations of g, g_HF,
and g_LF.
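The decision rule just described reduces to two threshold comparisons on a single
familiarity value. A minimal sketch of this single-process rule follows (our own
illustration; the old/new criterion .92 and the saline remember/know criterion
1.40 are taken from the text).

```python
def respond(phi, crit_old=0.92, crit_rk=1.40):
    """Single-process remember/know rule: one odds value phi, two criteria.
    crit_old = .92 is the old/new criterion used in the fits; crit_rk = 1.40
    is the saline remember/know criterion reported in the text."""
    if phi <= crit_old:
        return "new"
    return "remember" if phi > crit_rk else "know"
```

Because both responses derive from one familiarity axis, anything that lowers the
old-item odds (such as a lower c) shifts responses from 'remember' toward 'know'
without requiring a second process.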
These predictions provide an existence proof that Hirshman et al. [9] were a bit
hasty in using their data to reject single-process models of the present type
[4, 11], and show that single- versus dual-process models would have to be
distinguished on the basis of other sorts of studies. There is already a large
literature devoted to this as-yet-unresolved issue [10], and space prevents
discussion here.
Thus far we have demonstrated the sufficiency of a model assuming that Midazolam
reduces storage accuracy rather than storage quantity, and have argued that the
reverse assumption cannot work. What degree of mixture of these assumptions might
be compatible with the data? An answer would require an exhaustive exploration of
the parameter space, but we found that the use of a 50% reduced value of u* for
the Midazolam group (u*_sal = .02; u*_mid = .01) predicted an LF-FAR advantage
that deviated from the data by being noticeably smaller in the Midazolam than
saline condition. Within the REM framework this result suggests the main effect
of Midazolam (possibly all the effect) is on c (accuracy of storage) rather than
on u* (quantity of storage).

Alternatively, it is possible to conceive of a much more complex REM model that
assumes that the effect of Midazolam is to reduce the amount of storage.
Accordingly, one might assume that relatively little information is stored in
memory in the Midazolam condition, and that the retrieval cue is matched
primarily against traces stored prior to the experiment. Such a model might
predict Hirshman et al.'s findings because, once again, targets will only be
randomly similar to the contents of memory. However, such a model is far more
complex than the model described above. Perhaps future research will provide data
that requires a more complex model, but for now the simple model presented here
is sufficient.
4 Neuroscientific Speculations

The hippocampus (proper) consists of approximately 10% GABAergic interneurons,
and these interneurons are thought to control the firing of the remaining 90% of
the hippocampal principal neurons [21]. Some of the principal neurons are granule
neurons and some are pyramidal neurons. The granule cells are associated with a
rhythmic pattern of neuronal activity known as theta waves [1]. Theta waves are
associated with exploratory activities in both animals [16] and humans [2],
activities in which information about novel situations is being acquired.
Midazolam is a benzodiazepine, and benzodiazepines inhibit the firing of
GABAergic interneurons in the hippocampus [3]. Hence, if Midazolam inhibits the
firing of those cells that regulate the orderly firing of the vast majority of
hippocampal cells, then it is reasonable to speculate that the result is a
'noisier' episodic memory trace.

The argument that Midazolam causes noisier storage rather than less storage
raises the question whether a similar process produces the similar effects caused
by hippocampal lesions or other sorts of damage (e.g., Korsakoff's syndrome).
This question could be explored in future research.
References

[1] Buzsáki, G. (1989). Two-stage model of memory trace formation: A role for
"noisy" brain states. Neuroscience, 31, 551-570.

[2] Caplan, J. B., Raghavachari, S., Madsen, J. R., & Kahana, M. J. (2001).
Distinct patterns of brain oscillations underlie two basic parameters of human
maze learning. Journal of Neurophysiology, 86, 368-380.

[3] Deadwyler, S. A., West, M., & Lynch, G. (1979). Activity of dentate granule
cells during learning: differentiation of perforant path input. Brain Research,
169, 29-43.

[4] Donaldson, W. (1996). The role of decision processes in remembering and
knowing. Memory & Cognition, 24, 523-533.

[5] Gardiner, J. M. (1988). Functional aspects of recollective experience.
Memory & Cognition, 16, 309-313.

[6] Gillund, G., & Shiffrin, R. M. (1984). A retrieval model for both recognition
and recall. Psychological Review, 91, 1-67.

[7] Glanzer, M., & Adams, J. K. (1985). The mirror effect in recognition memory.
Memory & Cognition, 13, 8-20.

[8] Green, D. M., & Swets, J. A. (1966). Signal detection theory and
psychophysics. New York: Wiley.

[9] Hirshman, E., Fisher, J., Henthorn, T., Arndt, J., & Passannante, A. (in
press). Midazolam amnesia and dual-process models of the word frequency mirror
effect. Journal of Memory and Language, 47, 499-516.

[10] Hirshman, E., & Henzler, A. (1998). The role of decision processes in
conscious memory. Psychological Science, 9, 61-64.

[11] Hirshman, E., & Master, S. (1997). Modeling the conscious correlates of
recognition memory: Reflections on the Remember-Know paradigm. Memory &
Cognition, 25, 345-352.

[12] Malmberg, K. J., & Murnane, K. (2002). List composition and the
word-frequency effect for recognition memory. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 28, 616-630.

[13] Malmberg, K. J., Steyvers, M., Stephens, J. D., & Shiffrin, R. M. (in
press). Feature frequency effects in recognition memory. Memory & Cognition.

[14] Marr, D. (1971). Simple memory: a theory for archicortex. Philosophical
Transactions of the Royal Society of London B, 262, 23-81.

[15] McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there
are complementary learning systems in the hippocampus and neocortex: Insights
from the successes and failures of connectionist models of learning and memory.
Psychological Review, 102, 419-457.

[16] O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford:
Clarendon Press.

[17] Polster, M. R., McCarthy, R. A., O'Sullivan, G., Gray, P., & Park, G.
(1993). Midazolam-induced amnesia: Implications for the implicit/explicit memory
distinction. Brain & Cognition, 22, 244-265.

[18] Shiffrin, R. M., & Steyvers, M. (1997). A model for recognition memory:
REM - retrieving effectively from memory. Psychonomic Bulletin & Review, 4,
145-166.

[19] Shiffrin, R. M., & Steyvers, M. (1998). The effectiveness of retrieval from
memory. In M. Oaksford & N. Chater (Eds.), Rational models of cognition
(pp. 73-95). London: Oxford University Press.

[20] Squire, L. R. (1987). Memory and the Brain. New York: Oxford.

[21] Vizi, E. S., & Kiss, J. P. (1998). Neurochemistry and pharmacology of the
major hippocampal transmitter systems: Synaptic and nonsynaptic interactions.
Hippocampus, 8, 566-607.
| 2283 |@word hippocampus:3 replicate:1 nd:5 anterograde:2 ences:1 simulation:1 r:3 crite:1 ld:1 tuned:1 interestingly:1 subjective:1 emory:6 contextual:1 com:1 yet:1 vere:3 v:1 cue:8 leaf:1 tenn:1 item:9 es:1 ith:2 short:1 contribute:4 tvl:2 oak:1 gillund:1 rc:1 along:1 ect:1 viable:1 amnesia:3 re2:1 consists:1 frequen:2 retrieving:1 qualitative:1 acquired:1 uist:1 ra:1 expected:1 os:1 frequently:1 ry:2 brain:4 rem:7 ming:1 decreasing:2 ote:1 td:1 little:1 begin:2 underlying:2 maximizes:1 panel:1 interpreted:1 psych:2 prope:1 finding:3 indiana:4 ought:1 remember:7 vhich:1 every:1 subclass:1 ti:2 hit:2 lcrs:1 assum:1 control:2 underlie:1 tiring:2 rcc:1 ly:2 unit:2 llew:1 died:1 io:1 consequence:1 interrelation:1 oxford:2 id:1 firing:2 path:1 approximately:1 might:1 plus:1 chose:1 studied:11 suggests:1 co:1 rion:1 orthographic:1 lf:22 differs:1 ance:1 sullivan:1 diagnosticity:1 veen:1 episodic:12 empirical:1 drug:1 thought:1 reject:1 matching:5 fev:1 ving:2 vvas:2 word:17 refers:1 get:3 prc:1 cannot:1 donaldson:2 context:2 storage:20 adcl:1 lexical:5 graphically:1 iri:1 l:2 fam:1 insight:1 q:1 dominate:1 steyvers:1 ity:1 population:1 exploratory:1 target:4 caption:1 vork:1 recognition:18 rec:1 predicts:2 featur:1 qll:1 bec:1 role:4 labeled:1 ding:1 calculate:1 cy:3 region:6 decrease:1 inhibit:1 sal:1 environment:1 asked:1 raise:1 lord:4 hased:1 basis:3 differently:1 represented:1 instantiated:1 distinct:1 describe:1 activate:1 london:1 formation:1 ive:1 otherwise:1 addi:1 gro:1 tvf:1 ford:1 noisy:1 advantage:2 arlo:1 rr:1 interaction:1 product:2 unresolved:1 till:2 shiffrin:5 ine:1 glp:1 p:1 produce:2 adam:1 ring:1 lvl:1 depending:1 ij:1 received:1 eq:2 predicted:4 c:1 posit:1 kno:7 detennines:1 correct:2 tlle:7 noticeably:1 require:2 argued:2 f1:1 hold:1 aga:1 exp:1 normal:1 aul:1 sho:4 dentate:1 cognition:3 predict:1 benzodiazepine:3 pine:1 vith:5 tor:7 arndt:2 al1:1 vary:1 fh:1 purpose:1 diminishes:1 hav:1 ere:1 lynch:1 rather:4 r_:1 cr:1 bet:1 chater:1 
1,411 | 2,284 | Bayesian Models of Inductive Generalization
Neville E. Sanjana & Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
{nsanjana, jbt}@mit.edu
Abstract
We argue that human inductive generalization is best explained in a
Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept
learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam's razor that trades off priors and likelihoods to prevent under- or
over-generalization in these flexible spaces. We analyze two published
data sets on inductive reasoning as well as the results of a new behavioral
study that we have carried out.
1 Introduction
The problem of inductive reasoning (in particular, how we can generalize after seeing
only one or a few specific examples of a novel concept) has troubled philosophers, psychologists, and computer scientists since the early days of their disciplines. Computational
approaches to inductive generalization range from simple heuristics based on similarity
matching to complex statistical models [5]. Here we consider where human inference
falls on this spectrum. Based on two classic data sets from the literature and one more
comprehensive data set that we have collected, we will argue for models based on a rational Bayesian learning framework [10]. We also confront an issue that has often been
side-stepped in previous models of concept learning: the origin of the learner's hypothesis
space. We present a simple, unsupervised clustering method for creating hypothesis spaces
that, when applied to human similarity judgments and embedded in our Bayesian framework, consistently outperforms the best alternative models of inductive reasoning based on
similarity-matching heuristics.
We focus on two related inductive generalization tasks introduced in [6], which involve
reasoning about the properties of animals. The first task is to judge the strength of a generalization from one or more specific kinds of mammals to a different kind of mammal:
given that animals of kinds A and B have property P, how likely is it that an animal of kind C also has property P? For example, A might be chimp, B might be squirrel, and C might be horse. P is always a blank predicate, such as "is susceptible to the disease blicketitis", about which nothing is known outside of the given examples. Working with blank
predicates ensures that people's inductions are driven by their deep knowledge about the
general features of animals rather than the details they might or might not know about any
one particular property. Stimuli are typically presented in the form of an argument from
premises (examples) to conclusion (the generalization test item), as in
Chimps are susceptible to the disease blicketitis.
Squirrels are susceptible to the disease blicketitis.
Horses are susceptible to the disease blicketitis.
and subjects are asked to judge the strength of the argument: the likelihood that the
conclusion (below the line) is true given that the premises (above the line) are true. The
second task is the same except for the form of the conclusion. Instead of asking how likely
the property is to hold for another kind of mammal, e.g., horses, we ask how likely it is to
hold for all mammals. We refer to these two kinds of induction tasks as the specific and
general tasks, respectively.
Osherson et al. [6] present data from two experiments using these tasks. One data set
contains human judgments for the relative strengths of 36 specific inferences, each with a
different pair of mammals given as examples (premises) but the same test species, horses.
The other set contains judgments of argument strength for 45 general inferences, each with
a different triplet of mammals given as examples and the same test category, all mammals.
Osherson et al. also published subjects' judgments of similarity for all 45 pairs of the
10 mammals used in their generalization experiments, which they (and we) use to build
models of generalization.
2 Previous approaches
There have been several attempts to model the data in [6]: the similarity-coverage model
[6], a feature-based model [8], and a Bayesian model [3]. The two factors that determine
the strength of an inductive generalization in Osherson et al.'s model [6] are (i) similarity
of the animals in the premise(s) to those in the conclusion, and (ii) coverage, defined as the
similarity of the animals in the premise(s) to the larger taxonomic category of mammals,
including all specific animal types in this domain. To see the importance of the coverage
factor, compare the following two inductive generalizations. The chance that horses can get
a disease given that we know chimps and squirrels can get that disease seems higher than
if we know only that chimps and gorillas can get the disease. Yet simple similarity favors
the latter generalization: horses are judged to be more similar to gorillas than to chimps,
and much more similar to either primate species than to squirrels. Coverage, however,
intuitively favors the first generalization: the set {chimp, squirrel} "covers" the set of all
mammals much better than does the set {chimp, gorilla}, and to the extent that a set of
examples supports generalization to all mammals, it should also support generalization to
horses, a particular type of mammal.
Similarity and coverage factors are mixed linearly to predict the strength of a generalization. Mathematically, the prediction is given by

α · SIM(X, Y) + (1 − α) · SIM(X, all mammals),

where X is the set of examples (premises), Y is the test set (conclusion), α is a free parameter, and SIM(X, Y) is a setwise similarity metric defined to be the sum of each Y element's maximal similarity to the X elements: SIM(X, Y) = Σ_{y ∈ Y} max_{x ∈ X} sim(x, y). For the specific arguments, the test set Y has just one element, horse, so SIM(X, Y) is just the maximum similarity of horses to the example animal types in X. For the general arguments, Y = all mammals, which is approximated by the set of all mammal types used in the experiment (see Figure 1). Osherson et al. [6] also consider a sum-similarity model, which replaces the maximum with a sum: SIM_sum(X, Y) = Σ_{y ∈ Y} Σ_{x ∈ X} sim(x, y). Summed similarity has more traditionally been used to model human concept learning, and also has a rational interpretation in terms of nonparametric density estimation, but Osherson et al. favor the
max-similarity model based on its match to their intuitions for these particular tasks. We
examine both models in our experiments.
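The two setwise metrics are simple to state in code. The sketch below illustrates the definitions above rather than reproducing the authors' implementation; the pairwise similarity table and the mixing weight passed by callers are hypothetical placeholders, not the human judgments from [6].

```python
# Sketch of the similarity-coverage model's setwise metrics. `sim` maps
# (premise_animal, conclusion_animal) pairs to a similarity score; the
# values used here are illustrative, not the judgments collected in [6].

def sim_max(X, Y, sim):
    # Max-similarity: each conclusion item y contributes its best match
    # among the premise items x.
    return sum(max(sim[(x, y)] for x in X) for y in Y)

def sim_sum(X, Y, sim):
    # Sum-similarity: total pairwise similarity between the two sets.
    return sum(sim[(x, y)] for x in X for y in Y)

def strength(X, Y, mammals, sim, alpha):
    # Linear mix of similarity (premises vs. conclusion) and coverage
    # (premises vs. the superordinate category, here all mammals).
    return alpha * sim_max(X, Y, sim) + (1 - alpha) * sim_max(X, mammals, sim)
```

With a single-item conclusion such as {horse}, sim_max reduces to the maximum similarity of the conclusion animal to any premise animal, exactly as described for the specific task.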
Sloman [8] developed a feature-based model that encodes the shared features between the
premise set and the conclusion set as weights in a neural network. Despite some psychological plausibility, this model consistently fit the two data sets significantly worse than
the max-similarity model. Heit [3] outlines a Bayesian framework that provides qualitative
explanations of various inductive reasoning phenomena from [6]. His model does not constrain the learner's hypothesis space, nor does it embody a generative model of the data,
so its predictions depend strictly on well-chosen prior probabilities. Without a general
method for setting these prior probabilities, it does not make quantitative predictions that
can be compared here.
3 A Bayesian model
Tenenbaum & colleagues have previously introduced a Bayesian framework for learning
concepts from examples, and applied it to learning number concepts [10], word meanings
[11], as well as other domains. Formally, for the specific inference task, we observe positive examples X = {x1, . . . , xn} of the concept C and want to compute the probability p(y ∈ C | X) that a particular test stimulus y belongs to the concept given the observed examples X. These generalization probabilities are computed by averaging the predictions of a set of hypotheses, weighted by their posterior probabilities:

p(y ∈ C | X) = Σ_{h ∈ H} p(y ∈ C | h) p(h | X).   (1)
Hypotheses h pick out subsets of stimuli (candidate extensions of the concept), and p(y ∈ C | h) is just 1 or 0 depending on whether the test stimulus y falls under the subset h. In the general inference task, we are interested in computing the probability that a whole test category Y falls under the concept C:

p(Y ⊆ C | X) = Σ_{h ∈ H} p(Y ⊆ C | h) p(h | X).   (2)
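Equations 1 and 2 amount to summing the posterior mass of every hypothesis whose extension contains the test item(s). A minimal sketch, assuming hypotheses are represented as sets of animal types and the posterior over them has already been computed (both inputs below are made-up):

```python
def p_generalize(test_items, hypotheses, posterior):
    # Eq. 1 / Eq. 2: total posterior mass of hypotheses that contain every
    # test item (for Eq. 1, test_items holds a single animal type; for
    # Eq. 2 it holds the whole test category).
    return sum(post for h, post in zip(hypotheses, posterior)
               if set(test_items) <= h)
```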
A crucial component in modeling both tasks is the structure of the learner's hypothesis space H.
3.1 Hypothesis space
Elements of the hypothesis space H represent natural subsets of the objects in the domain: subsets likely to be the extension of some novel property or concept. Our goal in building up H is to capture as many hypotheses as possible that people might employ in concept
learning, using a procedure that is ideally automatic and unsupervised. One natural way to
begin is to identify hypotheses with the clusters returned by a clustering algorithm [11][7].
Here, hierarchical clustering seems particularly appropriate, as people across cultures appear to organize their concepts of biological species in a hierarchical taxonomic structure [1]. We applied four standard agglomerative clustering algorithms [2] (single-link,
complete-link, average-link, and centroid) to subjects' similarity judgments for all pairs of
10 animals given in [6]. All four algorithms produced the same output (Figure 1), suggesting a robust cluster structure. We define the base set of clusters B to consist of all 19 clusters in this tree. The most straightforward way to define a hypothesis space for Bayesian concept learning is to take H1 = B; each hypothesis consists of one base cluster. We refer to H1 as the "taxonomic hypothesis space".
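The base clusters can be read off mechanically from any agglomerative clustering run by recording every cluster formed along the way, which yields the 2n − 1 nodes of the tree (19 for the 10 animals of Figure 1). The sketch below is a plain average-link routine written out for illustration (a standard clustering library would do the same job), and the dissimilarities in the example are invented, not the human judgments from [6].

```python
def taxonomic_clusters(dissim, names):
    """Average-link agglomerative clustering over a symmetric dissimilarity
    table keyed by frozenset pairs; returns every cluster ever formed,
    singletons included, i.e. the 2n - 1 nodes of the tree."""
    active = [frozenset([n]) for n in names]
    clusters = list(active)

    def d(a, b):
        # Average pairwise dissimilarity between two disjoint clusters.
        pairs = [(x, y) for x in a for y in b]
        return sum(dissim[frozenset((x, y))] for x, y in pairs) / len(pairs)

    while len(active) > 1:
        # Merge the closest pair of currently active clusters.
        a, b = min(((a, b) for i, a in enumerate(active)
                    for b in active[i + 1:]), key=lambda p: d(*p))
        merged = a | b
        active = [c for c in active if c not in (a, b)] + [merged]
        clusters.append(merged)
    return clusters
```

Each returned frozenset is one candidate hypothesis; the union-of-clusters layers described below are built on top of this base set.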
It is clear that H1 alone is not sufficient. The chance that horses can get a disease given that
we know cows and squirrels can get that disease seems much higher than if we know only
[Figure 1 appears here: a dendrogram whose ten leaves are Horse, Cow, Elephant, Rhino, Chimp, Gorilla, Mouse, Squirrel, Dolphin, and Seal.]
Figure 1: Hierarchical clustering of mammals based on similarity judgments in [6]. Each
node in the tree corresponds to one hypothesis in the taxonomic hypothesis space H1.
that chimps and squirrels can get the disease, yet the taxonomic hypotheses consistent with
the example sets {cow, squirrel} and {chimp, squirrel} are the same. Bayesian generalization with a purely taxonomic hypothesis space essentially depends only on the least similar
example (here, squirrel), ignoring more fine-grained similarity structure, such as that one
example in the set {cow, squirrel} is very similar to the target horse even if the other is
not. This sense of fine-grained similarity has a clear objective basis in biology, because a
single property can apply to more than one taxonomic cluster, either by chance or through
convergent evolution. If the disease in question could afflict two distinct clusters of animals, one exemplified by cows and the other by squirrels, then it is much more likely also
to afflict horses (since they share most taxonomic clusters with cows) than if the disease
afflicted two distinct clusters exemplified by chimps and squirrels. Thus we consider richer
hypothesis subspaces H2, consisting of all pairs of taxonomic clusters (i.e., all unions of two clusters from Figure 1, except those already included in H1), and H3, consisting of all triples of taxonomic clusters (except those included in lower layers). We stop with H3 because we have no behavioral data beyond three examples. Our total hypothesis space is then the union of these three layers, H = H1 ∪ H2 ∪ H3.
The notion that the hypothesis space of candidate concepts might correspond to the power
set of the base clusters, rather than just single clusters, is broadly applicable beyond the
domain of biological properties. If the base system of clusters is sufficiently fine-grained,
this framework can parameterize any logically possible concept. It is analogous to other
general-purpose representations for concepts, such as disjunctive normal form (DNF) in
PAC-Learning, or class-conditional mixture models in density-based classification [5].
3.2 The Bayesian Occam's razor: balancing priors and likelihoods
Given this hypothesis space, Bayesian generalization then requires assigning a prior p(h) and likelihood p(X | h) for each hypothesis h ∈ H. Let K be the number of base clusters, and let h be a hypothesis in the mth layer of the hypothesis space Hm, corresponding to a union of m base clusters. A simple but reasonable prior assigns to h the probability of a sequence of K i.i.d. Bernoulli variables with m successes and parameter φ:

p(h) = φ^m (1 − φ)^(K − m).   (3)

Intuitively, this choice of prior is like assuming a generative model for hypotheses in which each base cluster has some small independent probability φ of expressing the concept C; the correspondence is not exact because each hypothesis may be expressed as the union of base clusters in multiple ways, and we consider only the minimal union in defining m. For φ < 1/2, Equation 3 instantiates a preference for simpler hypotheses, that is, hypotheses consisting of fewer disjoint clusters (smaller m). More complex hypotheses receive exponentially lower probability under Equation 3, and the penalty for complexity increases as φ becomes smaller. This prior can be applied with any set of base clusters, not just
those which are taxonomically structured. We are currently exploring a more sophisticated
domain-specific prior for taxonomic clusters defined by a stochastic mutation process over
the branches of the tree.
Following [10], the likelihood p(X | h) is calculated by assuming that the examples X are a random sample (with replacement) of instances from the concept to be learned. Let n = |X|, the number of examples, and let the size |h| of each hypothesis be simply the number of animal types it contains. Then p(X | h) follows the size principle,

p(X | h) = (1 / |h|)^n   if h includes all examples in X,
p(X | h) = 0             if h does not include all examples in X,   (4)
assigning greater likelihood to smaller hypotheses, by a factor that increases exponentially
as the number of consistent examples observed increases.
Note the tension between priors and likelihoods here, which implements a form of the
Bayesian Occam's razor. The prior favors hypotheses consisting of few clusters, while
the likelihood favors hypotheses consisting of small clusters. These factors will typically
trade off against each other. For any set of examples, we can always cover them under a
single cluster if we make the cluster large enough, and we can always cover them with a
hypothesis of minimal size (i.e., including no other animals beyond the examples) if we
use only singleton clusters and let the number of clusters equal the number of examples.
The posterior probability p(h | X), proportional to the product of these terms, thus seeks an
optimal tradeoff between over- and under-generalization.
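The whole model (the Eq. 3 prior, the Eq. 4 size-principle likelihood, and the averaging of Eq. 1) fits in a few lines. The toy hypothesis space, cluster counts, and parameter value in the example below are illustrative stand-ins, not the 19-cluster space of Figure 1; the sketch does, however, reproduce the qualitative size-principle prediction that three identical examples support weaker generalization to a new kind than a single example does.

```python
def posterior(X, hypotheses, n_base, phi):
    # p(h | X) proportional to p(X | h) p(h): the Eq. 3 prior
    # phi^m (1 - phi)^(K - m) over m base clusters, times the Eq. 4
    # size-principle likelihood (1/|h|)^n for consistent hypotheses.
    scores = []
    for h, m in hypotheses:            # h: set of animal types, m: #clusters
        prior = phi ** m * (1 - phi) ** (n_base - m)
        lik = (1.0 / len(h)) ** len(X) if set(X) <= h else 0.0
        scores.append(prior * lik)
    total = sum(scores)
    return [s / total for s in scores] if total > 0 else scores

def p_in_concept(y, X, hypotheses, n_base, phi):
    # Eq. 1: average each hypothesis's 0/1 vote under the posterior.
    post = posterior(X, hypotheses, n_base, phi)
    return sum(p for (h, _), p in zip(hypotheses, post) if y in h)
```

Because the likelihood shrinks large hypotheses by (1/|h|)^n, repeating the same example sharpens the posterior around the smallest consistent cluster, lowering the probability assigned to animals outside it.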
4 Model results
We consider three data sets. Data sets 1 and 2 come from the specific and general tasks in
[6], described in Section 1. Both tasks drew their stimuli from the same set of 10 mammals
shown in Figure 1. Each data set (including the set of similarity judgments used to construct the models) came from a different group of subjects. Our models of the probability of
generalization for specific and general arguments are given by Equations 1 and 2, respectively, letting X be the example set that varies from trial to trial and y or Y (respectively) be the fixed test category, horses or all mammals. Osherson et al.'s subjects did not provide
an explicit judgment of generalization for each example set, but only a relative ranking
of the strengths of all arguments in the general or specific sets. Hence we also converted
all models' predictions to ranks for each data set, to enable the most natural comparisons
between model and data.
Figure 3 shows the (rank) predictions of three models, Bayesian, max-similarity and sum-similarity, versus human subjects' (rank) confirmation judgments on the general (row 1) and specific (row 2) induction tasks from [6]. Each model had one free parameter (φ in the Bayesian model, α in the similarity models), which was tuned to the single value that
maximized rank-order correlation between model and data jointly over both data sets.
The best correlations achieved by the Bayesian model in both the general and specific tasks
were greater than those achieved by either the max-similarity or sum-similarity models.
The sum-similarity model is far worse than the other two (it is actually negatively correlated with the data on the general task), while max-similarity consistently scores slightly
worse than the Bayesian model.
4.1 A new experiment: Varying example set composition
In order to provide a more comprehensive test of the models, we conducted a variant of the
specific experiment using the same 10 animal types and the same constant test category,
horses, but with example sets of different sizes and similarity structures. In both data sets
1 and 2, the number of examples was constant across all trials; we expected that varying
the number of examples would cause difficulty for the max-similarity model because it
is not explicitly sensitive to this factor. For this purpose, we included five three-premise
arguments, each with three examples of the same animal species (e.g., {chimp, chimp, chimp}), and five one-premise arguments with the same five animals (e.g., {chimp}). We also included three-premise arguments where all examples were drawn from a low-level cluster of species in Figure 1 (e.g., {chimp, gorilla, chimp}). Because of the increasing
preference for smaller hypotheses as more examples are observed, Bayes will in general
make very different predictions in these three cases, but max-similarity will not. This
manipulation also allowed us to distinguish the predictions of our Bayesian model from
alternative Bayesian formulations [5][3] that do not include the size principle, and thus do
not predict differences between generalization from one example and generalization from
three examples of the same kind.
We also changed the judgment task and cover story slightly, to match more closely the natural problem of inductive learning from randomly sampled examples. Subjects were told
that they were training to be veterinarians, by observing examples of particular animals that
had been diagnosed with novel diseases. They were required to judge the probability that
horses could get the same disease given the examples observed. This cover story made it
clear to subjects that when multiple examples of the same animal type were presented, these
instances referred to distinct individual animals. Figure 3 (row 3) shows the model's predicted generalization probabilities along with the data from our experiment: mean ratings of generalization from 24 subjects on 28 example sets, using either n = 1, 2, or 3 examples and the same test species (horses) across all arguments. Again we show predictions for the best values of the free parameters φ and α. All three models fit best at different parameter
values than in data sets 1 and 2, perhaps due to the task differences or the greater range of
stimuli here.
[Figure 2 appears here: a bar plot of argument strength (y-axis, roughly 0.15 to 0.6) for one-example versus three-example premises, with the premise categories cow, chimp, mouse, dolphin, and elephant on the x-axis.]

Figure 2: Human generalization to the conclusion category horse when given one or three examples of a single premise type.
Again, the max-similarity model comes close to the performance of the Bayesian model,
but it is inconsistent with several qualitative trends in the data. Most notably, we found a
difference between generalization from one example and generalization from three examples of the same kind, in the direction predicted by our Bayesian model. Generalization to
the test category of horses was greater from singleton examples (e.g., {chimp}) than from
three examples of the same kind (e.g., {chimp, chimp, chimp}), as shown in Figure 2. This
effect was relatively small but it was observed for all five animal types tested and it was
statistically significant in a 2 × 5 (number of examples × animal type) ANOVA.
The max-similarity model, however, predicts no effect here, as do Bayesian accounts that
do not include the size principle [5][3].
It is also of interest to ask whether these models are sufficiently robust as to make reasonable predictions across all three experiments using a single parameter setting, or to make
good predictions on held-out data when their free parameter is tuned on the remaining data.
On these criteria, our Bayesian model maintains its advantage over max-similarity. At its single best value of φ, Bayes achieves correlations of …, …, and … on the three data sets, respectively, compared to …, …, and … for max-similarity at its single best parameter value α. Using Monte Carlo cross-validation [9] (1000 runs for each data set, 80%-20% training-test splits), Bayes obtains average test-set correlations of …, …, and … on the three data sets, respectively, compared to …, …, and … for max-similarity using the same method to tune α.
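The Monte Carlo cross-validation procedure referred to above is generic: repeatedly split the items at random, tune the free parameter on the training split, and score rank correlation on the held-out split. The sketch below assumes a model that maps an (item, parameter) pair to a prediction; the model, parameter grid, and run count in the example are placeholders, not those of the actual experiments.

```python
import random

def rank_corr(a, b):
    # Spearman-style rank correlation: Pearson correlation on ranks
    # (ties broken by original position, which suffices for this sketch).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

def monte_carlo_cv(model, grid, data, human, runs=1000, train_frac=0.8, seed=0):
    # Repeated random training-test splits: pick the parameter that
    # maximizes rank correlation on the training split, then score it
    # on the held-out split; return the average held-out correlation.
    rng = random.Random(seed)
    idx = list(range(len(data)))
    scores = []
    for _ in range(runs):
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        train, test = idx[:cut], idx[cut:]
        best = max(grid, key=lambda p: rank_corr(
            [model(data[i], p) for i in train], [human[i] for i in train]))
        scores.append(rank_corr(
            [model(data[i], best) for i in test], [human[i] for i in test]))
    return sum(scores) / len(scores)
```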
5 Conclusion
Our Bayesian model offers a moderate but consistent quantitative advantage over the best
similarity-based models of generalization, and also predicts qualitative effects of varying
sample size that contradict alternative approaches. More importantly, our Bayesian approach has a principled rational foundation, and we have introduced a framework for unsupervised construction of hypothesis spaces that could be applied in many other domains.
In contrast, the similarity-based approach requires arbitrary assumptions about the form
of the similarity measure: it must include both "similarity" and "coverage" terms, and it
must be based on max-similarity rather than sum-similarity. These choices have no a priori
justification and run counter to how similarity models have been applied in other domains,
leading us to conclude that rational statistical principles offer the best hope for explaining
how people can generalize so well from so little data. Still, the consistently good performance of the max-similarity model raises an important question for future study: whether
a relatively small number of simple heuristics might provide the algorithmic machinery
implementing approximate rational inference in the brain.
We would also like to understand how people's subjective hypothesis spaces have their origin in the objective structure of their environment. Two plausible sources for the taxonomic
hypothesis space used here can both be ruled out. The actual biological taxonomy for these
10 animals, based on their evolutionary history, looks quite different from the subjective
taxonomy used here. Substituting the true taxonomic clusters from biology for the base
clusters of our model?s hypothesis space leads to dramatically worse predictions of people?s generalization behavior. Taxonomies constructed from linguistic co-occurrences, by
applying the same agglomerative clustering algorithms to similarity scores output from the
LSA algorithm [4], also lead to much worse predictions. Perhaps the most likely possibility has not yet been tested. It may well be that by clustering on simple perceptual features
(e.g., size, shape, hairiness, speed, etc.), weighted appropriately, we can reproduce the taxonomy constructed here from people?s similarity judgments. However, that only seems to
push the problem back, to the question of what defines the appropriate features and feature weights. We do not offer a solution here, but merely point to this question as perhaps
the most salient open problem in trying to understand the computational basis of human
inductive inference.
Acknowledgments
Tom Griffiths provided valuable help with statistical analysis. Supported by grants from
NTT Communication Science Laboratories and MERL and an HHMI fellowship to NES.
References
[1] S. Atran. Classifying nature across cultures. In An Invitation to Cognitive Science, volume 3.
MIT Press, 1995.
[2] R. Duda, P. Hart, and D. Stork. Pattern Classification. Wiley, New York, NY, 2001.
[3] E. Heit. A Bayesian analysis of some forms of induction. In Rational Models of Cognition.
Oxford University Press, 1998.
[4] T. Landauer and S. Dumais. A solution to Plato's problem: The Latent Semantic Analysis
theory of the acquisition, induction, and representation of knowledge. Psychological Review,
104:211-240, 1997.
[5] T. Mitchell. Machine Learning. McGraw-Hill, Boston, MA, 1997.
[6] D. Osherson, E. Smith, O. Wilkie, A. López, and E. Shafir. Category-based induction. Psychological Review, 97(2):185-200, 1990.
[7] N. Sanjana and J. Tenenbaum. Capturing property-based similarity in human concept learning.
In Sixth International Conference on Cognitive and Neural Systems, 2002.
[8] S. Sloman. Feature-based induction. Cognitive Psychology, 25:231–280, 1993.
[9] P. Smyth. Clustering using Monte Carlo cross-validation. In Second International Conference
on Knowledge Discovery and Data Mining, 1996.
[10] J. Tenenbaum. Rules and similarity in concept learning. In S. Solla, T. Leen, and K.-R. Müller,
editors, Advances in Neural Information Processing Systems 12, pages 59–65. MIT Press, 2000.
[11] J. Tenenbaum and F. Xu. Word learning as Bayesian inference. In Proceedings of the 22nd
Annual Conference of the Cognitive Science Society, 2000.
[Figure 3 scatter plots (graphical content not reproducible here): columns correspond to the Bayes, Sum-Similarity, and Max-Similarity models; rows to the experiments General: mammals (n = 3), Specific: horse (n = 1, 2, 3), and Specific: horse (n = 2); the per-panel correlations shown range from −0.33 to 0.97.]
Figure 3: Model predictions (y-axis) plotted against human confirmation scores (x-axis). Each
column shows the results for a particular model. Each row is a different inductive generalization
experiment, where n indicates the number of examples (premises) in the stimuli.
A Probabilistic Model for Learning Concatenative Morphology
Matthew G. Snover
Department of Computer Science
Washington University
St Louis, MO, USA, 63130-4809
[email protected]
Michael R. Brent
Department of Computer Science
Washington University
St Louis, MO, USA, 63130-4809
[email protected]
Abstract
This paper describes a system for the unsupervised learning of morphological suffixes and stems from word lists. The system is composed of a
generative probability model and hill-climbing and directed search algorithms. By extracting and examining morphologically rich subsets of an
input lexicon, the directed search identifies highly productive paradigms.
The hill-climbing algorithm then further maximizes the probability of the
hypothesis. Quantitative results are shown by measuring the accuracy of
the morphological relations identified. Experiments in English and Polish, as well as comparisons with another recent unsupervised morphology learning algorithm demonstrate the effectiveness of this technique.
1 Introduction
One of the fundamental problems in computational linguistics is adaptation of language
processing systems to new languages with minimal reliance on human expertise. A ubiquitous component of language processing systems is the morphological analyzer, which
determines the properties of morphologically complex words like watches and gladly by
inferring their derivation as watch+s and glad+ly. The derivation reveals much about the
word, such as the fact that glad+ly shares syntactic properties with quick+ly and semantic
properties with its stem glad. While morphological processes can take many forms, the
most common are suffixation and prefixation (collectively, concatenative morphology).
In this paper, we present a system for unsupervised inference of morphological derivations
of written words, with no prior knowledge of the language in question. Specifically, neither
the stems nor the suffixes of the language are given in advance. This system is designed
for concatenative morphology, and the experiments presented focus on suffixation. It is
applicable to any language for written words lists are available. In languages that have
been a focus of research in computational linguistics the practical applications are limited,
but in languages like Polish, automated analysis of unannotated text corpora has potential
applications for information retrieval and other language processing systems. In addition,
automated analysis might find application as a hypothesis-generating tool for linguists or as
a cognitive model of language acquisition. In this paper, however, we focus on the problem
of unsupervised morphological inference for its inherent interest.
During the last decade several minimally supervised and unsupervised algorithms have
been developed. Gaussier [1] describes an explicitly probabilistic system that is based primarily on spellings. It is an unsupervised algorithm, but requires the tweaking of parameters to tune it to the target language. Brent [2] and Brent et al. [3] describe Minimum
Description Length (MDL) systems. Goldsmith [4] describes a similar MDL approach.
Our motivation in developing a new system was to improve performance and to have a
model cast in an explicitly probabilistic framework. We are particularly interested in developing automated morphological analysis as a first stage of a larger grammatical inference
system, and hence we favor a conservative analysis that identifies primarily productive
morphological processes (those that can be applied to new words).
In this paper, we present a probabilistic model and search algorithm for automated analysis
of suffixation, along with experiments comparing our system to that of Goldsmith [4]. This
system, which extends the system of Snover and Brent [5], is designed to detect the final
stem and suffix break of each word given a list of words. It does not distinguish between
derivational and inflectional suffixation or between the notion of a stem and a root. Further,
it does not currently have a mechanism to deal with multiple interpretations of a word, or
to deal with morphological ambiguity. Within its design limitations, however, it is both
mathematically clean and effective.
2 Probability Model
This section introduces a prior probability distribution over the space of all hypotheses,
where a hypothesis is a set of words, each with morphological split separating the stem and
suffix. The distribution is based on a seven-step model for the generation of hypotheses,
which is heavily based upon the probability model presented in [5]. The hypothesis is
generated by choosing the number of stems and suffixes, the spellings of those stems and
suffixes and then the combination of the stems and suffixes.
The seven steps are presented below, along with their probability distributions and a running
example of how a hypothesis could be generated by this process. By taking the product over
the distributions from all of the steps of the generative process, one can calculate the prior
probability for any given hypothesis. What is described in this section is a mathematical
model and not an algorithm intended to be run.
1. Choose the number of stems, M, according to the distribution:

   Pr(M) = (6/π²) M^(-2)   (1)

   The 6/π² term normalizes the inverse-squared distribution on the positive integers. The number of suffixes, X, is chosen according to the same probability distribution. The symbols M for steMs and X for suffiXes are used throughout this paper.

   Example: M = 5. X = 3.
2. For each stem i, choose its length in letters, l_i, according to the inverse-squared
   distribution. Assuming that the lengths are chosen independently and multiplying
   together their probabilities we have:

   Pr(l_1, …, l_M) = ∏_{i=1..M} (6/π²) l_i^(-2)   (2)

   The distribution for the lengths of the suffixes, l'_j, is similar to (2), differing only
   in that suffixes of length 0 are allowed, by offsetting the length by one.

   Example: l = 4, 4, 4, 3, 3. l' = 2, 0, 1.
3. Let A be the alphabet, and let ρ be a probability distribution on A. For
   each i from 1 to M, generate stem i by choosing l_i letters at random, according
   to the probabilities ρ. Call the resulting stem set STEM. The suffix set
   SUFF is generated in the same manner. The probability of any character c being
   chosen is obtained from a maximum likelihood estimate: ρ(c) = n_c / N, where n_c is the
   count of c among all the hypothesized stems and suffixes and N is the total number of letters.
   The joint probability of the hypothesized stem and suffix sets is defined by the
   distribution:

   Pr(STEM, SUFF) = M! X! ∏ ρ(c)   (3)

   where the product runs over every letter token in the hypothesized stems and suffixes.
   The factorial terms reflect the fact that the stems and suffixes could be generated
   in any order.

   Example: STEM = {walk, look, door, far, cat}. SUFF = {ed, ε, s}.
4. We now choose the number of paradigms, P. A paradigm is a set of suffixes and
   the stems that attach to those suffixes and no others. Each stem is in exactly one
   paradigm, and each paradigm has at least one stem; thus P can range from 1 to
   M. We pick P according to the following uniform distribution:

   Pr(P | M) = 1/M   (4)

   Example: P = 3.
5. We choose the number of suffixes in the paradigms, X_i, according to a uniform
   distribution, so the probability of picking X_i suffixes for paradigm i is 1/X.
   The joint probability over all paradigms, X_1, …, X_P, is therefore:

   Pr(X_1, …, X_P | X, P) = ∏_{i=1..P} 1/X = X^(-P)   (5)

   Example: X_1, X_2, X_3 = 2, 1, 2.
6. For each paradigm i, choose the set of X_i suffixes, PARA′_i, that the paradigm will
   represent. The number of subsets of a given size is finite so we can again use the
   uniform distribution. This implies that the probability of each individual subset
   of size X_i is the inverse of the total number of such subsets. Assuming that the
   choices for each paradigm are independent:

   Pr(PARA′_1, …, PARA′_P | X, X_1, …, X_P) = ∏_{i=1..P} C(X, X_i)^(-1)   (6)

   Example: PARA′_1 = {ε, s, ed}. PARA′_2 = {ε}. PARA′_3 = {ε, s}.
7. For each stem, choose the paradigm that the stem will belong in, according to a
   distribution that favors paradigms with more stems. The probability of choosing a
   paradigm i for a stem is calculated using a maximum likelihood estimate:
   |PARA_i| / M, where PARA_i is the set of stems in paradigm i. Assuming that all these choices
   are made independently yields the following:

   Pr(assignments | P) = ∏_{i=1..P} (|PARA_i| / M)^|PARA_i|   (7)

   Example: PARA_1 = {walk, look}. PARA_2 = {far}. PARA_3 = {door, cat}.
Combining the results of stages 6 and 7, one can see that the running example would yield
the hypothesis consisting of the set of words with suffix breaks: {walk+ε, walk+s, walk+ed,
look+ε, look+s, look+ed, far+ε, door+ε, door+s, cat+ε, cat+s}. Removing the breaks in the
words results in the set of input words. To find the probability for this hypothesis, just take
the product of the probabilities from equations (1) to (7).
Using this generative model, we can assign a probability to any hypothesis. Typically one
wishes to know the probability of the hypothesis given the data; however, in our case such a
distribution is not required. Equation (8) shows how the probability of the hypothesis given
the data could be derived from Bayes' law:

   Pr(Hyp | Data) = Pr(Data | Hyp) Pr(Hyp) / Pr(Data)   (8)

Our search only considers hypotheses consistent with the data. The probability of the data
given the hypothesis, Pr(Data | Hyp), is always 1, since if you remove the breaks from any
hypothesis, the input data is produced. This would not be the case if our search considered
inconsistent hypotheses. The prior probability of the data, Pr(Data), is constant
over all hypotheses; thus the probability of the hypothesis given the data reduces to
Pr(Hyp). The prior probability of the hypothesis is given by the above generative process
and, among all consistent hypotheses, the one with the greatest prior probability also has
the greatest posterior probability.
3 Search
This section details a novel search algorithm which is used to find a high-probability
segmentation of all the words in the input lexicon. The input lexicon is a list of words
extracted from a corpus. The output of the search is a segmentation of each of the input
words into a stem and suffix.
The search algorithm has two phases, which we call the directed search and the hill-climbing
search. The directed search builds up a consistent hypothesis about the segmentation
of all words in the input out of consistent hypotheses about subsets of the words.
The hill-climbing search further tunes the result of the directed search by trying out nearby
hypotheses over all the input words.
3.1 Directed Search
The directed search is accomplished in two steps. First, sub-hypotheses, each of which
is a hypothesis about a subset of the lexicon, are examined and ranked. The best sub-hypotheses
are then incrementally combined until a single sub-hypothesis remains. The
remainder of the input lexicon is added to this sub-hypothesis, at which point it becomes
the final hypothesis.
We define the set of possible suffixes to be the set of terminal substrings, including the
empty string ε, of the words in the input lexicon. For each subset of the possible suffixes,
there is a maximal set of possible stems (initial substrings) such that the concatenation
of any stem in the set with any suffix in the subset is a word in the lexicon. We define
the corresponding sub-hypothesis to be the one in which each input word that can be
analyzed as consisting of a stem from this stem set and a suffix from the suffix subset is
analyzed that way. This sub-hypothesis consists of all pairings of the stems and the suffixes,
with the corresponding morphological breaks. One can think of each sub-hypothesis as
initially corresponding to a maximally filled paradigm. We only consider sub-hypotheses
which have at least two stems and two suffixes.
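The construction of a sub-hypothesis from a candidate suffix set can be sketched as follows. This is an illustrative reimplementation based on the description above, not the authors' code; the empty suffix ε is represented as the empty string:

```python
def sub_hypothesis(lexicon, suffix_set):
    """For a candidate suffix set, find the maximal stem set: every initial
    substring that forms an observed word with *every* suffix in the set,
    and return the implied (stem, suffix) segmentations."""
    words = set(lexicon)
    # Candidate stems are the initial substrings of observed words.
    candidates = {w[:i] for w in words for i in range(1, len(w) + 1)}
    stems = {t for t in candidates if all(t + s in words for s in suffix_set)}
    return {(t, s) for t in stems for s in suffix_set}

lexicon = ["walk", "walks", "walked", "look", "looks", "looked", "far"]
segs = sub_hypothesis(lexicon, {"", "s", "ed"})
# "walk" and "look" combine with every suffix in the set; "far" does not.
```

In the real system each such sub-hypothesis would then be scored against its null hypothesis using the probability model of Section 2.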
For each sub-hypothesis there is a corresponding null hypothesis, which has the
same set of words, but in which every word is hypothesized to consist of the whole word as the
stem and ε as the suffix. We give each sub-hypothesis a score equal to the ratio of its
probability to the probability of its null hypothesis. This reflects how much more probable
the sub-hypothesis is, for those words, than the null hypothesis.
One can view all sub-hypotheses as nodes in a directed graph. One node is connected
to another if and only if the second represents a superset of the suffixes that the first
represents, exactly one suffix greater in size. By beginning at
the node representing no suffixes, one can apply standard graph search techniques, such as a
beam search or a best-first search, to find the best-scoring nodes without visiting all nodes.
While one cannot guarantee that such approaches perform exactly the same as examining
all sub-hypotheses, initial experiments using a beam search with a beam width equal to the
number of retained sub-hypotheses (100) show that the best sub-hypotheses are found with a
significant decrease in the number of nodes visited. The experiments presented in this paper
do not use these pruning methods.
The highest-scoring sub-hypotheses are incrementally combined in order to create a
hypothesis over the complete set of input words. Changing the number of sub-hypotheses
retained does not dramatically alter the results of the algorithm, though higher values
give slightly better results. We retain 100 in the experiments reported here.
We iteratively remove the highest-scoring sub-hypothesis from this set of retained
sub-hypotheses. Its words are added to each of the remaining sub-hypotheses, and to their
null hypotheses, with the morphological breaks of the removed sub-hypothesis. If one of
these words was already present in a remaining sub-hypothesis, the morphological break
from the removed sub-hypothesis overrides the existing one. All of the sub-hypotheses
are now rescored, as the words in them have changed. If, after rescoring, none of the
sub-hypotheses have likelihood ratios greater than one, then we use the removed
sub-hypothesis as our final hypothesis. Otherwise we iterate until either only one
sub-hypothesis is left or all sub-hypotheses have scores no greater than one.
The final sub-hypothesis is now converted into a full hypothesis over all the words: every
word in the input lexicon that it does not cover is added to it with suffix ε.
3.2 Hill Climbing Search
The hill-climbing search further optimizes the probability of the hypothesis by moving
stems between nodes. For each possible suffix and each node, the search attempts to
add the suffix to the node. This means that all stems in the node that can take the
suffix are moved to a new node, which represents all the suffixes of the old node plus
the new suffix. A stem can only be moved into a node with a given suffix if the resulting
word is an observed word in the input lexicon. The move is only
done if it increases the probability of the hypothesis.
There is an analogous suffix removal step which attempts to remove suffixes from nodes.
The hill climbing search continues to add and remove suffixes to nodes until the probability
of the hypothesis cannot be increased. A more detailed description of this portion of the
search and its algorithmic invariants is given in [5].
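One suffix-addition move can be sketched as follows. This is an illustrative sketch in which nodes are keyed by their suffix tuples and `score` abstracts the full probability model of Section 2; the names are ours:

```python
def try_add_suffix(nodes, words, suffix, node_key, score):
    """Attempt to move every stem in nodes[node_key] that can take `suffix`
    to the node representing the old suffix set plus `suffix`.  The move is
    kept only if it raises the hypothesis score."""
    stems = nodes[node_key]
    movable = {t for t in stems if t + suffix in words}
    if not movable:
        return nodes
    new_key = tuple(sorted(set(node_key) | {suffix}))
    proposal = dict(nodes)
    proposal[node_key] = stems - movable
    proposal[new_key] = proposal.get(new_key, set()) | movable
    if not proposal[node_key]:
        del proposal[node_key]
    return proposal if score(proposal) > score(nodes) else nodes
```

The analogous removal move works in the opposite direction, and the search alternates the two until no move increases the hypothesis probability.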
4 Experiment and Evaluation
4.1 Experiment
We tested our unsupervised morphology learning system, which we refer to as Paramorph,
and Goldsmith's MDL system, otherwise known as Linguistica1, on various-sized word lists
1
A demo version available on the web, http://humanities.uchicago.edu/faculty/goldsmith/, was
used for these experiments. Word-list corpus mode and the method A suffix detection were used. All
from English and Polish corpora. For English we used set A of the Hansard corpus, which is
a parallel English and French corpus of proceedings of the Canadian Parliament. We were
unable to find a standard corpus for Polish and developed one from online sources. The
sources for the Polish corpus were older texts and thus our results correspond to a slightly
antiquated form of the language. The results were evaluated by measuring the accuracy of
the stem relations identified.
We extracted input lexicons from each corpus, excluding words containing non-alphabetic
characters. The 100 most common words in each corpus were also excluded, since these
words tend to be function words and are not very informative for morphology. The systems
were run on the 500, 1,000, 2,000, 4,000, and 8,000 most common remaining words. The
experiments in English were also conducted on the 16,000 most common words from the
Hansard corpus.
4.1.1 Stem Relation
Ideally, we would like to be able to specify the correct morphological break for each of
the words in the input; however, morphology is laced with ambiguity, and we believe this
to be an inappropriate method for this task. For example, it is unclear where the break in
the word "location" should be placed. It seems that the stem "locate" is combined with
the suffix "tion", but in terms of simple concatenation it is unclear if the break should be
placed before or after the "t".
In an attempt to solve this problem we have developed a new measure of performance,
which does not specify the exact morphological split of a word. We measure the accuracy
of the stems predicted by examining whether two words which are morphologically related
are predicted as having the same stem. The actual break point for the stems is not evaluated,
only whether the words are predicted as having the same stem. We are working on a similar
measure for suffix identification.
Two words are related if they share the same immediate stem. For example, the words
"building", "build", and "builds" are related since they all have "build" as a stem, just as
"building" and "buildings" are related as they both have "building" as a stem. The two
words "buildings" and "build" are not directly related since the former has "building"
as a stem, while "build" is its own stem. Irregular forms of words are also considered
to be related even though such relations would be very difficult to detect with a simple
concatenation model.
The stem relation precision measures how many of the relations predicted by the system
were correct, while the recall measures how many of the relations present in the data were
found. Stem relation fscore is an unbiased combination of precision and recall that favors
equal scores.
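The stem-relation scores can be sketched as set operations over unordered pairs of words that share a predicted (or true) immediate stem. This sketch assumes each word maps to a single stem, matching the system's single-analysis limitation; the names are illustrative:

```python
from itertools import combinations

def relation_pairs(stem_of):
    """All unordered pairs of distinct words mapped to the same immediate stem."""
    by_stem = {}
    for word, stem in stem_of.items():
        by_stem.setdefault(stem, []).append(word)
    return {frozenset(p)
            for ws in by_stem.values()
            for p in combinations(sorted(ws), 2)}

def stem_relation_scores(predicted, gold):
    pred, true = relation_pairs(predicted), relation_pairs(gold)
    hits = len(pred & true)
    precision = hits / len(pred) if pred else 1.0
    recall = hits / len(true) if true else 1.0
    f = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f
```

For instance, if the gold analysis gives "walk", "walks", and "walked" the stem "walk" but a system segments only "walks" that way, the system's single correct pair yields perfect precision but a recall of one third.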
4.2 Results
The results from the experiments are shown in Figures 1 and 2. All graphs are shown using
a log scale for the corpus size. Due to software difficulties we were unable to get Linguistica
to run on 500, 1000, and 2000 words in English. The software ran without difficulties on
the larger English datasets and on the Polish data. As an additional note, Linguistica was
dramatically faster than Paramorph, which is a development-oriented software package and
not as optimized for efficient runtime as Linguistica appears to be.
Figure 1 shows the number of different suffixes predicted by each of the algorithms in
both English and Polish. Our Paramorph system found a relatively constant number of
other parameters were left at their default values.
[Figure 1 panels (graphical content not reproducible here): number of suffixes predicted (y-axis) versus lexicon size on a log scale (500 to 16k words for English, 500 to 8k for Polish), comparing ParaMorph and Linguistica.]
Figure 1: Number of Suffixes Predicted
[Figure 2 panels (graphical content not reproducible here): stem relation F-score (y-axis, 0 to 1) versus lexicon size on a log scale (500 to 16k words for English, 500 to 8k for Polish), comparing ParaMorph and Linguistica.]
Figure 2: Stem Relation Fscores
suffixes across lexicon sizes and Linguistica found an increasingly large number of suffixes,
predicting over 700 different suffixes in the 16,000-word English lexicon.
Figure 2 shows the fscores using the stem relation metric for various sizes of English and
Polish input lexicons. Paramorph maintains a very high precision across lexicon sizes
in both languages, whereas the precision of Linguistica decreases considerably at larger
lexicon sizes. However Linguistica shows an increasing recall as the lexicon size increases,
with Paramorph having a decreasing recall as lexicon size increases, though the recall of
Linguistica in Polish is consistently lower than the Paramorph?s recall. The fscores for
Paramorph and Linguistica in English are very close, and Paramorph appears to clearly
outperform Linguistica in Polish.
Suffixes                    Stems
-a -e -ego -ej -ie -o -y    dziwn
ε -a -ami -y -ę             chmur, siekier
ε -cie -li -m -ć            gada, odda, sprzeda

Table 1: Sample Paradigms in Polish
Table 1 shows several of the larger paradigms found by Paramorph when run on 8000 words
of Polish. The first paradigm shown is for the single adjective stem meaning "strange", with
numerous inflections for gender, number, and case, as well as one derivational suffix, "-ie",
which changes it into an adverb ("strangely"). The second paradigm is for the nouns
"cloud" and "ax", with various case inflections, and the third paradigm contains
the verbs "talk", "return", and "sell". All suffixes in the third paradigm are inflectional,
indicating tense and agreement.
The differences between the performance of Linguistica and Paramorph can most easily
be seen in the number of suffixes predicted by each algorithm. The number of suffixes
predicted by Linguistica grows linearly with the number of words, in general causing his
algorithm to get much higher recall at the expense of precision. Paramorph maintains
a fairly constant number of suffixes, causing it to generally have higher precision at the
expense of recall. This is consistent with our goal of creating a conservative system for
morphological analysis, in which the number of false positives is minimized.
The Polish language presents special difficulties for both Linguistica and Paramorph, due
to the highly complex nature of its morphology. There are far fewer spelling change rules
and a much higher frequency of suffixes in Polish than in English. In addition phonology
plays a much stronger role in Polish morphology, causing alterations in stems, which are
difficult to detect using a concatenative framework.
5 Discussion
Many of the stem relations predicted by Paramorph result from postulating stem and suffix
breaks in words that are actually morphologically simple. This occurs when the endings
of these words resemble other, correct, suffixes. In an attempt to deal with this problem
we have investigated incorporating semantic information into the probability model since
morphologically related words also tend to be semantically related. A successful implementation of such information should eliminate errors such as "capable" breaking down as
cap+able, since "capable" is not semantically related to "cape" or "cap".
The goal of the Paramorph system was to produce a preliminary description, with very
low false positives, of the final suffixation, both inflectional and derivational, in a language
independent manner. Paramorph performed better for the most part with respect to Fscore
than Linguistica, but more importantly, the precision of Linguistica does not approach the
precision of our algorithm, particularly on the larger corpus sizes. In summary, we feel our
Paramorph system has attained the goal of producing an initial estimate of suffixation that
could serve as a front end to aid other models in discovering higher level structure.
References
[1] Éric Gaussier. 1999. Unsupervised learning of derivational morphology from inflectional lexicons. In ACL '99 Workshop Proceedings: Unsupervised Learning in Natural Language Processing.
ACL.
[2] Michael R. Brent. 1993. Minimal generative models: A middle ground between neurons and
triggers. In Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics,
Ft. Lauderdale, FL.
[3] Michael R. Brent, Sreerama K. Murthy, and Andrew Lundberg. 1995. Discovering morphemic
suffixes: A case study in minimum description length induction. In Proceedings of the 15th Annual
Conference of the Cognitive Science Society, pages 28-36, Hillsdale, NJ. Erlbaum.
[4] John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27:153-198.
[5] Matthew G. Snover and Michael R. Brent. 2001. A Bayesian Model for Morpheme and Paradigm
Identification. In Proceedings of the 39th Annual Meeting of the ACL, pages 482-490. ACL.
unsupervised:10 alter:1 minimized:1 others:1 inherent:1 primarily:2 oriented:1 composed:1 individual:1 intended:1 consisting:2 phase:1 attempt:4 detection:1 hyp:5 interest:1 highly:2 evaluation:1 mdl:3 introduces:1 analyzed:2 capable:2 filled:1 walk:5 minimal:2 increased:1 measuring:2 subset:7 uniform:3 examining:3 successful:1 conducted:1 erlbaum:1 front:1 reported:1 para:13 considerably:1 combined:3 st:2 fundamental:1 international:1 ie:2 probabilistic:4 picking:1 michael:4 together:1 squared:2 ambiguity:2 reflect:1 again:1 containing:1 choose:6 cognitive:2 brent:8 return:1 li:1 potential:1 converted:1 alteration:1 explicitly:2 unannotated:1 tion:1 break:12 root:1 view:1 performed:1 portion:1 bayes:1 maintains:2 parallel:1 accuracy:3 yield:2 correspond:1 climbing:6 identification:2 bayesian:1 produced:1 substring:2 none:1 multiplying:1 expertise:1 murthy:1 ed:4 acquisition:1 frequency:1 recall:8 knowledge:1 cap:2 ubiquitous:1 segmentation:3 actually:1 appears:2 higher:5 attained:1 supervised:1 specify:2 maximally:1 done:1 though:3 evaluated:2 just:2 stage:2 until:3 working:1 web:1 incrementally:2 french:1 mode:1 grows:1 believe:1 building:6 usa:2 hypothesized:3 tense:1 unbiased:1 former:1 hence:1 excluded:1 iteratively:1 semantic:2 deal:3 adjacent:1 during:1 trying:1 hill:6 override:1 goldsmith:5 demonstrate:1 complete:1 meaning:1 novel:1 common:4 belong:1 interpretation:1 significant:1 refer:1 analyzer:1 language:17 moving:1 add:2 posterior:1 own:1 recent:1 optimizes:1 adverb:1 meeting:1 accomplished:1 rescored:1 scoring:4 seen:1 minimum:2 greater:3 additional:1 paradigm:24 multiple:1 full:1 reduces:1 stem:61 faster:1 retrieval:1 metric:1 represent:1 cie:1 beam:3 irregular:1 addition:2 whereas:1 source:2 tend:2 inconsistent:1 effectiveness:1 call:2 extracting:1 integer:1 door:4 canadian:1 split:2 superset:1 automated:4 iterate:1 identified:2 whether:2 linguist:1 dramatically:2 generally:1 detailed:1 tune:2 factorial:1 generate:1 http:1 outperform:1 suffixation:6 
reliance:1 changing:1 neither:1 clean:1 graph:4 run:4 inverse:3 letter:2 you:1 package:1 extends:1 throughout:1 strange:1 fl:1 distinguish:1 annual:2 software:3 nearby:1 strangely:1 glad:3 relatively:1 department:2 developing:2 according:7 combination:2 describes:3 slightly:2 across:2 character:2 increasingly:1 invariant:1 equation:2 remains:1 count:1 mechanism:1 know:1 end:1 available:2 apply:1 running:2 linguistics:3 remaining:2 cape:1 pushing:1 phonology:1 build:6 society:1 move:1 question:1 added:3 already:1 occurs:1 spelling:3 visiting:1 unclear:2 unable:2 separating:1 concatenation:2 seven:2 considers:1 induction:1 assuming:3 length:7 ratio:1 gaussier:2 difficult:2 expense:2 design:1 implementation:1 perform:1 neuron:1 datasets:1 finite:1 immediate:1 excluding:1 locate:1 verb:1 cast:1 required:1 optimized:1 able:2 below:1 adjective:1 including:1 greatest:2 ranked:1 difficulty:3 attach:1 predicting:1 natural:2 representing:1 older:1 improve:1 numerous:1 identifies:2 text:2 prior:6 removal:1 law:1 generation:1 limitation:1 derivational:4 consistent:5 parliament:1 share:2 normalizes:1 changed:1 summary:1 placed:2 last:1 english:14 uchicago:1 taking:1 fifth:1 grammatical:1 calculated:1 default:1 ending:1 rich:1 made:1 far:4 pruning:1 reveals:1 corpus:13 demo:1 search:25 decade:1 table:2 nature:1 investigated:1 complex:2 linearly:1 motivation:1 allowed:1 postulating:1 aid:1 precision:8 sub:18 inferring:1 wish:1 breaking:1 third:2 removing:1 down:1 symbol:1 list:6 consist:1 incorporating:1 workshop:2 false:2 fscore:4 hillclimbing:1 watch:2 collectively:1 gender:1 determines:1 extracted:2 sized:1 goal:3 change:2 specifically:1 ami:1 semantically:2 conservative:2 total:1 indicating:1 tested:1 |
Recovering Intrinsic Images from a Single Image
Marshall F Tappen
William T Freeman
Edward H Adelson
MIT Artificial Intelligence Laboratory
Cambridge, MA 02139
[email protected], [email protected], [email protected]
Abstract
We present an algorithm that uses multiple cues to recover shading and
reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each
image derivative is classified as being caused by shading or a change in
the surface?s reflectance. Generalized Belief Propagation is then used to
propagate information from areas where the correct classification is clear
to areas where it is ambiguous. We also show results on real images.
1 Introduction
Every image is the product of the characteristics of a scene. Two of the most important
characteristics of the scene are its shading and reflectance. The shading of a scene is the
interaction of the surfaces in the scene and the illumination. The reflectance of the scene
describes how each point reflects light. The ability to find the reflectance of each point
in the scene and how it is shaded is important because interpreting an image requires the
ability to decide how these two factors affect the image. For example, the geometry of
an object in the scene cannot be recovered without being able to isolate the shading of
every point. Likewise, segmentation would be simpler given the reflectance of each point
in the scene. In this work, we present a system which finds the shading and reflectance
of each point in a scene by decomposing an input image into two images, one containing
the shading of each point in the scene and another image containing the reflectance of
each point. These two images are types of a representation known as intrinsic images [1]
because each image contains one intrinsic characteristic of the scene.
Most prior algorithms for finding shading and reflectance images can be broadly classified as generative or discriminative approaches. The generative approaches create possible
surfaces and reflectance patterns that explain the image, then use a model to choose the
most likely surface. Previous generative approaches include modeling worlds of painted
polyhedra [11] or constructing surfaces from patches taken out of a training set [3]. In
contrast, discriminative approaches attempt to differentiate between changes in the image
caused by shading and those caused by a reflectance change. Early algorithms, such as
Retinex [8], were based on simple assumptions, such as the assumption that the gradients
along reflectance changes have much larger magnitudes than those caused by shading. That
assumption does not hold for many real images, so recent algorithms have used more complex statistics to separate shading and reflectance. Bell and Freeman [2] trained a classifier
to use local image information to classify steerable pyramid coefficients as being due to
shading or reflectance. Using steerable pyramid coefficients allowed the algorithm to classify edges at multiple orientations and scales. However, the steerable pyramid decomposition has a low-frequency residual component that cannot be classified. Without classifying
the low-frequency residual, only band-pass filtered copies of the shading and reflectance
images can be recovered. In addition, low-frequency coefficients may not have a natural
classification.
In a different direction, Weiss [13] proposed using multiple images where the reflectance
is constant, but the illumination changes. This approach was able to create full frequency
images, but required multiple input images of a fixed scene.
In this work, we present a system which uses multiple cues to recover full-frequency shading and reflectance intrinsic images from a single image. Our approach is discriminative,
using both a classifier based on color information in the image and a classifier trained to recognize local image patterns to distinguish derivatives caused by reflectance changes from
derivatives caused by shading. We also address the problem of ambiguous local evidence
by using a Markov Random Field to propagate the classifications of those areas where the
evidence is clear into ambiguous areas of the image.
2 Separating Shading and Reflectance
Our algorithm decomposes an image into shading and reflectance images by classifying
each image derivative as being caused by shading or a reflectance change. We assume that
the input image, I(x, y), can be expressed as the product of the shading image, S(x, y), and
the reflectance image, R(x, y). Considering the images in the log domain, the derivatives
of the input image are the sum of the derivatives of the shading and reflectance images. It is
unlikely that significant shading boundaries and reflectance edges occur at the same point,
thus we make the simplifying assumption that every image derivative is either caused by
shading or reflectance. This reduces the problem of specifying the shading and reflectance
derivatives to that of binary classification of the image's x and y derivatives.
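As a concrete illustration of this setup, the sketch below (a minimal NumPy sketch; the function names and the simple forward-difference derivatives are our own choices, not the authors' code) forms the log-domain derivatives of an image and assigns each derivative wholly to the shading or reflectance channel according to a binary mask:

```python
import numpy as np

def log_derivatives(image, eps=1e-6):
    """Log-domain x and y derivatives: since I = S * R, log I = log S + log R,
    so each derivative of log I is the sum of a shading and a reflectance term."""
    log_i = np.log(image + eps)
    dx = np.diff(log_i, axis=1)  # horizontal derivatives, shape (h, w-1)
    dy = np.diff(log_i, axis=0)  # vertical derivatives, shape (h-1, w)
    return dx, dy

def split_derivatives(dx, dy, is_shading_x, is_shading_y):
    """Binary classification: each derivative belongs entirely to shading
    or entirely to reflectance (boolean masks select shading)."""
    sx = np.where(is_shading_x, dx, 0.0)
    sy = np.where(is_shading_y, dy, 0.0)
    rx, ry = dx - sx, dy - sy
    return (sx, sy), (rx, ry)
```

By construction the shading and reflectance derivative images sum back to the derivatives of the input, which is exactly the constraint the binary-classification assumption imposes.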
Labelling each x and y derivative produces estimates of the derivatives of the shading and
reflectance images. Each derivative represents a set of linear constraints on the image
and using both derivative images results in an over-constrained system. We recover each
intrinsic image from its derivatives by using the method introduced by Weiss in [13] to find
the pseudo-inverse of the over-constrained system of derivatives. If fx and fy are the filters
used to compute the x and y derivatives and Fx and Fy are the estimated derivatives of the
shading image, then the shading image, S(x, y), is:

S(x, y) = g ∗ [(fx(−x, −y) ∗ Fx) + (fy(−x, −y) ∗ Fy)]    (1)

where ∗ is convolution, f(−x, −y) is a reversed copy of f(x, y), and g is the solution of

g ∗ [(fx(−x, −y) ∗ fx(x, y)) + (fy(−x, −y) ∗ fy(x, y))] = δ    (2)
The reflectance image is found in the same fashion. One nice property of this technique is
that the computation can be done using the FFT, making it more computationally efficient.
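This pseudo-inverse step can be sketched as follows (our own NumPy implementation of the least-squares solve behind Equations 1–2, using simple forward-difference filters and circular boundary conditions; it is not the authors' code). Solving in the Fourier domain makes the whole reconstruction a handful of FFTs:

```python
import numpy as np

def reconstruct_from_derivatives(Fx, Fy, eps=1e-12):
    """Recover the image whose x/y derivatives best match Fx, Fy
    (least squares), solved entirely in the Fourier domain."""
    h, w = Fx.shape
    # derivative filters fx = [-1, 1] (horizontal) and fy = [-1, 1]^T (vertical)
    fx = np.zeros((h, w)); fx[0, 0] = -1.0; fx[0, 1] = 1.0
    fy = np.zeros((h, w)); fy[0, 0] = -1.0; fy[1, 0] = 1.0
    FX, FY = np.fft.fft2(fx), np.fft.fft2(fy)
    # conj(F) is the transform of the reversed real filter f(-x, -y)
    num = np.conj(FX) * np.fft.fft2(Fx) + np.conj(FY) * np.fft.fft2(Fy)
    den = np.abs(FX) ** 2 + np.abs(FY) ** 2
    den[0, 0] = 1.0  # the DC term is unconstrained by derivatives; pin it
    S = np.real(np.fft.ifft2(num / np.maximum(den, eps)))
    return S - S.mean()  # zero-mean, since the absolute level is arbitrary
```

When the derivative estimates are exactly consistent, the reconstruction is exact up to the unknown constant offset; when the classified derivatives are inconsistent, the same formula returns the least-squares compromise.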
3 Classifying Derivatives
With an architecture for recovering intrinsic images, the next step is to create the classifiers
to separate the underlying processes in the image. Our system uses two classifiers, one
which uses color information to separate shading and reflectance derivatives and a second
classifier that uses local image patterns to classify each derivative.
Figure 1 (panels: Original Image, Shape Image, Reflectance Image): Example computed using only color information to classify derivatives. To facilitate printing, the intrinsic images have been computed from a gray-scale version of the
image. The color information is used solely for classifying derivatives in the gray-scale
copy of the image.
3.1 Using Color Information
Our system takes advantage of the property that changes in color between pixels indicate
a reflectance change [10]. When surfaces are diffuse, any changes in a color image due to
shading should affect all three color channels proportionally. Assume two adjacent pixels
in the image have values c1 and c2 , where c1 and c2 are RGB triplets. If the change
between the two pixels is caused by shading, then only the intensity of the color changes
and c2 = αc1 for some scalar α. If c2 ≠ αc1, the chromaticity of the colors has changed
and the color change must have been caused by a reflectance change. A chromaticity
change in the image indicates that the reflectance must have changed at that point.
To find chromaticity changes, we treat each RGB triplet as a vector and normalize them to create ĉ1 and ĉ2. We then use the angle between ĉ1 and ĉ2 to find reflectance changes. When the change is caused by shading, (ĉ1 · ĉ2) equals 1. If (ĉ1 · ĉ2) is below a threshold, then the derivative associated with the two colors is classified as a reflectance derivative.
Using only the color information, this approach is similar to that used in [6]. The primary
difference is that our system classifies the vertical and horizontal derivatives independently.
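The chromaticity test above can be sketched as follows (a minimal NumPy sketch under our own conventions; the threshold value is illustrative, since the paper does not state the one it uses). Each RGB triplet is normalized to a unit vector, and a derivative is flagged as a reflectance change when the dot product of the neighboring unit vectors falls below the threshold:

```python
import numpy as np

def chromaticity_reflectance_mask(rgb, axis, thresh=0.999):
    """True where a derivative is a reflectance change: neighboring
    pixels whose chromaticity (normalized RGB direction) differs."""
    c = rgb.astype(float)
    norm = np.linalg.norm(c, axis=-1, keepdims=True)
    c_hat = c / np.maximum(norm, 1e-12)  # unit chromaticity vectors
    if axis == 'x':  # horizontal derivatives
        dot = np.sum(c_hat[:, :-1] * c_hat[:, 1:], axis=-1)
    else:            # vertical derivatives
        dot = np.sum(c_hat[:-1, :] * c_hat[1:, :], axis=-1)
    return dot < thresh
```

A pure intensity change (both channels scaled by the same α) leaves the unit vectors identical, so the dot product stays at 1 and the derivative is not flagged; a hue change drives the dot product below the threshold.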
Figure 1 shows an example of the results produced by the algorithm. The classifier marked
all of the reflectance areas correctly and the text is cleanly removed from the bottle. This
example also demonstrates the high quality reconstructions that can be obtained by classifying derivatives.
3.2 Using Gray-Scale Information
While color information is useful, it is not sufficient to properly decompose images. A
change in color intensity could be caused by either shading or a reflectance change. Using
only local color information, color intensity changes cannot be classified properly. Fortunately, shading patterns have a unique appearance which can be discriminated from most
common reflectance patterns. This allows us to use the local gray-scale image pattern surrounding a derivative to classify it.
The basic feature of the gray-scale classifier is the absolute value of the response of a linear
filter. We refer to a feature computed in this manner as a non-linear filter. The output of a
non-linear filter, F, given an input patch Ip is
F = |Ip ∗ w|    (3)
where ∗ is convolution and w is a linear filter. The filter w is the same size as the image
patch, I, and we only consider the response at the center of Ip . This makes the feature
a function from a patch of image data to a scalar response. This feature could also be
viewed as the absolute value of the dot product of Ip and w. We use the responses of linear filters as the basis for our feature, in part, because they have been used successfully for characterizing [9] and synthesizing [7] images of textured surfaces.

Figure 2: Example images from the training set. The first two are examples of reflectance changes and the last three are examples of shading.

Figure 3 (panels: (a) Original Image, (b) Shading Image, (c) Reflectance Image): Results obtained using the gray-scale classifier.
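In code, the feature of Equation 3 is just the absolute center response (a trivial sketch; it assumes the patch and filter have the same size, so the response is a single scalar):

```python
import numpy as np

def nonlinear_filter_response(patch, w):
    """Eq. (3): absolute value of the linear filter response at the patch
    center, equivalently |patch . w| treating both arrays as vectors."""
    assert patch.shape == w.shape
    return abs(float(np.sum(patch * w)))
```

Because of the absolute value, a dark-to-light edge and a light-to-dark edge produce the same response, so the feature signals the presence of oriented structure rather than its sign.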
The non-linear filters are used to classify derivatives with a classifier similar to that used
by Tieu and Viola in [12]. This classifier uses the AdaBoost [4] algorithm to combine a
set of weak classifiers into a single strong classifier. Each weak classifier is a threshold
test on the output of one non-linear filter. At each iteration of the AdaBoost algorithm, a
new weak classifier is chosen by choosing a non-linear filter and a threshold. The filter
and threshold are chosen greedily by finding the combination that performs best on the
re-weighted training set. The linear filter in each non-linear filter is chosen from a set of
oriented first and second derivative of Gaussian filters.
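The boosting loop can be sketched as below. This is a generic AdaBoost-with-threshold-stumps sketch in NumPy, not the authors' implementation; here each feature column would hold the responses of one non-linear filter over the training patches, and labels are ±1 for reflectance/shading:

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost over threshold stumps. X: (n_samples, n_features)
    filter responses; y: labels in {-1, +1}. Returns (alpha, feat, t, sign)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # example weights, re-weighted each round
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):  # greedy search over feature/threshold/sign
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        stumps.append((alpha, j, t, sign))
        pred = sign * np.where(X[:, j] >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)  # up-weight mistakes
        w /= w.sum()
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)
```

Each round's chosen (filter, threshold) pair corresponds to one weak classifier in the paper's scheme; the exhaustive threshold search is the "greedy" selection described above.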
The training set consists of a mix of images of rendered fractal surfaces and images of
shaded ellipses placed randomly in the image. Examples of reflectance changes were created using images of random lines and images of random ellipses painted onto the image.
Samples from the training set are shown in Figure 2. In the training set, the illumination is always
coming from the right side of the image. When evaluating test images, the classifier will
assume that the test image is also lit from the right.
Figure 3 shows the results of our system using only the gray-scale classifier. The results
can be evaluated by thinking of the shading image as how the scene should appear if it
were made entirely of gray plastic. The reflectance image should appear very flat, with the
the three-dimensional depth cues placed in the shading image. Our system performs well
on the image shown in Figure 3. The shading image has a very uniform appearance, with
almost all of the effects of the reflectance changes placed in the reflectance image.
The examples shown are computed without taking the log of the input image before processing it. The input images are uncalibrated and ordinary photographic tonescale is very
similar to a log transformation. Errors from not taking the log of the input image first would cause one intrinsic image to modulate the local brightness of the other. However, this does not occur in the results.

Figure 4: An example where propagation is needed. The smile from the pillow image in (a) has been enlarged in (b). Figures (c) and (d) contain an example of shading and a reflectance change, respectively. Locally, the center of the mouth in (b) is as similar to the shading example in (c) as it is to the example reflectance change in (d).

Figure 5 (panels: (a) Original Image, (b) Shading Image, (c) Reflectance Image): The pillow from Figure 4. This is found by combining the local evidence from the color and gray-scale classifiers, then using Generalized Belief Propagation to propagate local evidence.
4 Propagating Evidence
While the classifier works well, there are still areas in the image where the local information is ambiguous. An example of this is shown in Figure 4. When compared to the
example shading and reflectance change in Figure 4(c) and 4(d), the center of the mouth in
Figure 4(b) is equally well classified with either label. However, the corners of the mouth
can be classified as being caused by a reflectance change with little ambiguity. Since the
derivatives in the corner of the mouth and the center all lie on the same image contour, they
should have the same classification. A mechanism is needed to propagate information from
the corners of the mouth, where the classification is clear, into areas where the local evidence is ambiguous. This will allow areas where the classification is clear to disambiguate
those areas where it is not.
In order to propagate evidence, we treat each derivative as a node in a Markov Random
Field with two possible states, indicating whether the derivative is caused by shading or
caused by a reflectance change. Setting the compatibility functions between nodes correctly
will force nodes along the same contour to have the same classification.
4.1 Model for the Potential Functions
Each node in the MRF corresponds to the classification of a derivative. We constrain the
compatibility functions for two neighboring nodes, xi and xj , to be of the form
ψ(xi, xj) = (  β     1 − β )
            ( 1 − β    β   )    (4)

with 0 ≤ β ≤ 1.
The term β controls how much the two nodes should influence each other. Since derivatives
along an image contour should have the same classification, β should be close to 1 when
two neighboring derivatives are along a contour and should be 0.5 when no contour is
present.
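The potential of Equation 4 is small enough to write out directly (a minimal sketch; indexing the 2×2 table by the two labels {shading, reflectance} is our own convention):

```python
import numpy as np

def compatibility(beta):
    """Pairwise potential of eq. (4): equal labels get weight beta,
    differing labels get weight 1 - beta."""
    return np.array([[beta, 1.0 - beta],
                     [1.0 - beta, beta]])
```

At β = 0.5 the table is uniform and the two nodes do not influence each other; as β approaches 1 the potential increasingly forces neighboring derivatives along a contour to share a label.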
Since β depends on the image at each point, we express it as β(Ixy), where Ixy is the
image information at some point. To ensure that β(Ixy) lies between 0 and 1, it is modelled as
β(Ixy) = g(z(Ixy)), where g(·) is the logistic function and z(Ixy) has a large response
along image contours.
4.2 Learning the Potential Functions
The function z(Ixy) is based on two local image features, the magnitude of the image gradient and
the difference in orientation between the gradient and the orientation of the graph edge.
These features reflect our heuristic that derivatives along an image contour should have the
same classification.
The difference in orientation between a horizontal graph edge and an image contour, φ̂, is found from the orientation of the image gradient, φ. Assuming that −π/2 ≤ φ ≤ π/2, the angle between a horizontal edge and the image gradient, φ̂, is φ̂ = |φ|. For vertical edges, φ̂ = |φ| − π/2.
To find the values of z(·) we maximize the probability of a set of the training examples over the parameters of z(·). The examples are taken from the same set used to train the
gray-scale classifiers. The probability of training samples is
P = (1/Z) ∏(i,j) ψ(xi, xj)    (5)
where all (i, j) are the indices of neighboring nodes in the MRF and Z is a normalization
constant. Note that each ψ(·) is a function of z(Ixy).
The function relating the image features to ψ(·), z(·), is chosen to be a linear function and is found by maximizing Equation 5 over a set of training images similar to those used to train the local classifier. In order to simplify the training process, we approximate the true probability in Equation 5 by assuming that Z is constant. Doing so leads to the following value of z(·):

z(φ̂, |∇I|) = −1.2 · φ̂ + 1.62 · |∇I| + 2.3    (6)

where |∇I| is the magnitude of the image gradient and both φ̂ and |∇I| have been normalized to be between 0 and 1. These measures break down in areas with a weak gradient, so we set β(Ixy) to 0.5 for regions of the image with a gradient magnitude less than 0.05. Combined with the values learned for z(·), this effectively limits β to the range 0.5 ≤ β ≤ 1.
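Putting the pieces together, β can be computed per derivative as below (a direct sketch of β(Ixy) = g(z(·)) using the learned weights of Equation 6; the inputs are assumed to already be normalized to [0, 1]):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def edge_beta(phi_hat, grad_mag):
    """beta(I_xy) = g(z(phi_hat, |grad I|)) with the learned linear z.
    Weak-gradient regions (|grad I| < 0.05) fall back to beta = 0.5."""
    z = -1.2 * phi_hat + 1.62 * grad_mag + 2.3
    return np.where(grad_mag < 0.05, 0.5, logistic(z))
```

A strong gradient aligned with the graph edge (φ̂ near 0) pushes β toward 1, tying the two derivatives together; a weak gradient leaves them uncoupled at β = 0.5.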
Figure 6 (panels: (a) Original Image, (b) Shading Image, (c) Reflectance Image): Example generated by combining color and gray-scale information, along with using propagation.

Larger values of z(·) correspond to a belief that the derivatives connected by the edge should have the same value, while negative values signify that the derivatives should have a different value. The values in Equation 6 correspond with our expected results; two
derivatives are constrained to have the same value when they are along an edge in the
image that has a similar orientation to the edge in the MRF connecting the two nodes.
4.3 Inferring the Correct Labelling
Once the compatibility functions have been learned, the label of each derivative can be
inferred. The local evidence for each node in the MRF is obtained from the results of the
color classifier and from the gray-scale classifier by assuming that the two are statistically
independent. It is necessary to use the color information because propagation cannot help
in areas where the gray-scale classifier misses an edge altogether. In Figure 5, the cheek
patches on the pillow, which are pink in the color image, are missed by the gray-scale
classifier, but caught by the color classifier. For the results shown, we used the results of
the AdaBoost classifier to classify the gray-scale images and used the method suggested by
Friedman et al. to obtain the probability of the labels [5].
We used the Generalized Belief Propagation algorithm [14] to infer the best label of each
node in the MRF because ordinary Belief Propagation performed poorly in areas with both
weak local evidence and strong compatibility constraints. The results of using color, grayscale information, and propagation can be seen in Figure 5. The ripples on the pillow are
correctly identified as being caused by shading, while the face is correctly identified as
having been painted on. In a second example, shown in Figure 6, the algorithm correctly
identifies the change in reflectance between the sweatshirt and the jersey and correctly identifies the folds in the clothing as being caused by shading. There are some small shading
artifacts in the reflectance image, especially around the sleeves of the sweatshirt, presumably caused by particular shapes not present in the training set. All of the examples were
computed using ten non-linear filters as input for the AdaBoost gray-scale classifier.
5 Discussion
We have presented a system that is able to use multiple cues to produce shading and reflectance intrinsic images from a single image. This method is also able to produce satisfying results for real images. The most computationally intense steps for recovering the
shading and reflectance images are computing the local evidence, which takes about six
minutes on a 700MHz Pentium for a 256 ? 256 image, and running the Generalized Belief
Propagation algorithm. Belief propagation was used on both the x and y derivative images
and took around 6 minutes to run 200 iterations on each image. The pseudo-inverse process
took under 5 seconds.
The primary limitation of this method lies in the classifiers. For each type of surface, the
classifiers must incorporate knowledge about the structure of the surface and how it appears
when illuminated. The present classifiers operate at a single spatial scale, however the MRF
framework allows the integration of information from multiple scales.
Acknowledgments
Portions of this work were completed while W.T.F was a Senior Research Scientist and
M.F.T was a summer intern at Mitsubishi Electric Research Labs. This work was supported
by an NDSEG fellowship to M.F.T, by NIH Grant EY11005-04 to E.H.A., by a grant from
NTT to E.H.A., and by a contract with Unilever Research.
References
[1] H. G. Barrow and J. M. Tenenbaum. Recovering intrinsic scene characteristics from images. In Computer Vision Systems, pages 3-26. Academic Press, 1978.
[2] M. Bell and W. T. Freeman. Learning local evidence for shading and reflection. In Proceedings International Conference on Computer Vision, 2001.
[3] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 40(1):25-47, 2000.
[4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 38(2):337-374, 2000.
[6] B. V. Funt, M. S. Drew, and M. Brockington. Recovering shading from color images. In G. Sandini, editor, ECCV-92: Second European Conference on Computer Vision, pages 124-132. Springer-Verlag, May 1992.
[7] D. Heeger and J. Bergen. Pyramid-based texture analysis/synthesis. In Computer Graphics Proceedings, SIGGRAPH 95, pages 229-238, August 1995.
[8] E. H. Land and J. J. McCann. Lightness and retinex theory. Journal of the Optical Society of America, 61:1-11, 1971.
[9] T. Leung and J. Malik. Recognizing surfaces using three-dimensional textons. In IEEE International Conference on Computer Vision, 1999.
[10] J. M. Rubin and W. A. Richards. Color vision and image intensities: When are changes material? Biological Cybernetics, 45:215-226, 1982.
[11] P. Sinha and E. H. Adelson. Recovering reflectance in a world of painted polyhedra. In Fourth International Conference on Computer Vision, pages 156-163. IEEE, 1993.
[12] K. Tieu and P. Viola. Boosting image retrieval. In Proceedings IEEE Computer Vision and Pattern Recognition, volume 1, pages 228-235, 2000.
[13] Y. Weiss. Deriving intrinsic images from image sequences. In Proceedings International Conference on Computer Vision, Vancouver, Canada, 2001. IEEE.
[14] J. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13, pages 689-695, 2001.
Derivative observations in Gaussian Process
Models of Dynamic Systems
E. Solak
Dept. Elec. & Electr. Eng.,
Strathclyde University,
Glasgow G1 1QE,
Scotland, UK.
[email protected]
D. J. Leith
Hamilton Institute,
National Univ. of
Ireland, Maynooth,
Co. Kildare, Ireland
[email protected]
R. Murray-Smith
Dept. Computing Science,
University of Glasgow
Glasgow G12 8QQ,
Scotland, UK.
[email protected]
W. E. Leithead
Hamilton Institute,
National Univ. of
Ireland, Maynooth,
Co. Kildare, Ireland.
[email protected]
C. E. Rasmussen
Gatsby Computational Neuroscience Unit,
University College London, UK
[email protected]
Abstract
Gaussian processes provide an approach to nonparametric modelling
which allows a straightforward combination of function and derivative
observations in an empirical model. This is of particular importance in
identification of nonlinear dynamic systems from experimental data. 1) It
allows us to combine derivative information, and associated uncertainty
with normal function observations into the learning and inference process. This derivative information can be in the form of priors specified
by an expert or identified from perturbation data close to equilibrium. 2)
It allows a seamless fusion of multiple local linear models in a consistent manner, inferring consistent models and ensuring that integrability
constraints are met. 3) It improves dramatically the computational efficiency of Gaussian process models for dynamic system identification,
by summarising large quantities of near-equilibrium data by a handful of
linearisations, reducing the training set size ? traditionally a problem for
Gaussian process models.
1 Introduction
In many applications which involve modelling an unknown system y = f(x) from observed data, model accuracy could be improved by using not only observations of y, but also observations of derivatives, e.g. ∂y/∂x. These derivative observations might be directly available from sensors which, for example, measure velocity or acceleration rather than position, or they might be prior linearisation models from historical experiments. A
further practical reason is related to the fact that the computational expense of Gaussian
processes increases rapidly (O(N^3)) with training set size N. We may therefore wish to
use linearisations, which are cheap to estimate, to describe the system in those areas in
which they are sufficiently accurate, efficiently summarising a large subset of training data.
We focus on application of such models in modelling nonlinear dynamic systems from
experimental data.
2 Gaussian processes and derivative processes
2.1 Gaussian processes
Bayesian regression based on Gaussian processes is described by [1] and interest has grown
since publication of [2, 3, 4]. Assume a set of N input/output pairs {(x_i, y_i)}, i = 1, ..., N, with x_i in R^D, are given. In the GP framework, the output values y = (y_1, ..., y_N)^T are viewed as being drawn from a zero-mean multivariable Gaussian distribution whose covariance matrix is a function of the input vectors. Namely, the output distribution is

p(y | x_1, ..., x_N) = N(0, K),  where K_ij = C(x_i, x_j).

A general model, which reflects the higher correlation between spatially close (in some appropriate metric) points, a smoothness assumption on the target system, uses a covariance matrix with the following structure:

C(x_i, x_j) = v exp(-(1/2) ||x_i - x_j||_W^2) + v0 δ_ij,   (1)

where the norm is defined as

||x||_W^2 = Σ_{d=1}^D w_d (x^(d))^2.

The D + 2 variables {v, w_1, ..., w_D, v0} are the hyper-parameters of the GP model, which are constrained to be non-negative. In particular, v0 is included to capture the noise component of the covariance. The GP model can be used to calculate the distribution of an unknown output y* corresponding to a known input x* as

p(y* | x*, data) = N(μ(x*), σ^2(x*)),   (2)

where

μ(x*) = k(x*)^T K^{-1} y,   σ^2(x*) = C(x*, x*) - k(x*)^T K^{-1} k(x*),   (3)

and k(x*) = [C(x*, x_1), ..., C(x*, x_N)]^T. The mean μ(x*) of this distribution can be chosen as the maximum-likelihood prediction for the output corresponding to the input x*.
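As a concrete illustration of the prediction equations above, the following is a minimal numerical sketch (not the authors' code; the hyper-parameter values, the jitter level and the toy data are arbitrary assumptions) for a one-dimensional squared-exponential covariance:

```python
import numpy as np

def cov(xi, xj, v=1.0, w=1.0):
    # Squared-exponential covariance, equation (1) without the noise term.
    return v * np.exp(-0.5 * w * (xi - xj) ** 2)

def gp_predict(X, y, x_star, v=1.0, w=1.0, v0=1e-4):
    """Predictive mean and variance at x_star, equations (2)-(3)."""
    X = np.asarray(X, dtype=float)
    K = cov(X[:, None], X[None, :], v, w) + v0 * np.eye(len(X))  # K + noise
    k = cov(X, x_star, v, w)            # covariances between x_star and training inputs
    alpha = np.linalg.solve(K, y)       # K^{-1} y
    mean = k @ alpha
    var = cov(x_star, x_star, v, w) - k @ np.linalg.solve(K, k)
    return mean, var

X = np.array([-1.0, 0.0, 1.0])
y = X ** 2                              # toy targets
m, s2 = gp_predict(X, y, 0.0)
m_far, s2_far = gp_predict(X, y, 10.0)
# At a training input with tiny noise, the mean reproduces the observation and
# the variance is near zero; far from the data the variance grows back toward v.
```

This mirrors the standard GP regression recipe: the predictive mean is a weighted combination of training outputs, and the variance shrinks near observed inputs.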
2.2 Gaussian process derivatives
Differentiation is a linear operation, so the derivative of a Gaussian process remains a
Gaussian process. The use of derivative observations in Gaussian processes is described in
[5, 6], and in
engineering applications in [7, 8, 9]. Suppose we are given D new sets of pairs {(x_i, y_i^(d))}, i = 1, ..., N_d, d = 1, ..., D, each corresponding to the N_d points at which the d-th partial derivative of the underlying function is observed. In the noise-free setting this corresponds to the relation

y_i^(d) = ∂y/∂x^(d) evaluated at x = x_i.

We now wish to find the joint probability of the vector of y's and y^(d)'s, which involves calculation of the covariance between the function and the derivative observations as well as the covariance among the derivative observations. Covariance functions are typically differentiable, so the covariance between a derivative and a function observation, and the one between two derivative points, satisfy

cov(∂y_i/∂x_i^(d), y_j) = ∂/∂x_i^(d) cov(y_i, y_j),
cov(∂y_i/∂x_i^(d), ∂y_j/∂x_j^(e)) = ∂^2/(∂x_i^(d) ∂x_j^(e)) cov(y_i, y_j).

The following identities give those relations necessary to form the full covariance matrix, for the covariance function (1):

cov(y_i, y_j) = v exp(-(1/2) Σ_m w_m (x_i^(m) - x_j^(m))^2),   (4)

cov(∂y_i/∂x_i^(d), y_j) = -v w_d (x_i^(d) - x_j^(d)) exp(-(1/2) Σ_m w_m (x_i^(m) - x_j^(m))^2),   (5)

cov(∂y_i/∂x_i^(d), ∂y_j/∂x_j^(e)) = v w_d (δ_de - w_e (x_i^(d) - x_j^(d))(x_i^(e) - x_j^(e))) exp(-(1/2) Σ_m w_m (x_i^(m) - x_j^(m))^2).   (6)
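The identities (4)-(6) assemble into one joint covariance over stacked function and derivative values. The following one-dimensional sketch (so d = e and δ_de = 1; helper names and inputs are my own, not from the paper) builds that joint matrix:

```python
import numpy as np

def k_yy(xi, xj, v=1.0, w=1.0):
    # Equation (4): function-function covariance.
    return v * np.exp(-0.5 * w * (xi - xj) ** 2)

def k_dy(xi, xj, v=1.0, w=1.0):
    # Equation (5): covariance between dy/dx at xi and y at xj.
    return -v * w * (xi - xj) * np.exp(-0.5 * w * (xi - xj) ** 2)

def k_dd(xi, xj, v=1.0, w=1.0):
    # Equation (6) in one dimension: derivative-derivative covariance.
    return v * w * (1.0 - w * (xi - xj) ** 2) * np.exp(-0.5 * w * (xi - xj) ** 2)

def joint_cov(x_fun, x_der, v=1.0, w=1.0):
    """Covariance of the stacked vector [y(x_fun); dy/dx(x_der)]."""
    A = k_yy(x_fun[:, None], x_fun[None, :], v, w)
    B = k_dy(x_der[:, None], x_fun[None, :], v, w)  # derivative rows vs function cols
    C = k_dd(x_der[:, None], x_der[None, :], v, w)
    return np.vstack([np.hstack([A, B.T]),
                      np.hstack([B, C])])

K = joint_cov(np.array([0.0, 1.0]), np.array([0.5]))
# K is symmetric and, for distinct points, positive definite, so it is a
# valid Gaussian covariance over mixed function/derivative observations.
```

Because the derivative of a GP is itself a GP, this stacked matrix can be used exactly like an ordinary training covariance.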
Figure 1: The covariance functions cov(y, y), cov(∂y, y) and cov(∂y, ∂y) between function and derivative points in one dimension. cov(y, y) defines a covariance that decays monotonically as the distance between the corresponding input points increases. The covariance cov(∂y, y) between a derivative point and a function point is an odd function, and does not decrease as fast due to the presence of the multiplicative distance term. cov(∂y, ∂y) illustrates the implicit assumption in the choice of the basic covariance function: that gradients increase with w, and that the slopes of realisations will tend to have highest negative correlation at a distance of sqrt(3/w), giving an indication of the typical size of 'wiggles' in realisations of the corresponding Gaussian process.
2.3 Derivative observations from identified linearisations
Given perturbation data {(x_i, y_i)}, i = 1, ..., N, around an equilibrium point x~, we can identify a linearisation y = b + a^T (x - x~), the parameters a = [a_1, ..., a_D]^T of which can be viewed as observations of the derivatives ∂y/∂x^(d), and the bias term b from the linearisation can be used as a function 'observation', i.e. y~ = b. We use standard linear regression solutions to estimate the derivatives and their uncertainty. With design matrix X, whose i-th row is [1, (x_i - x~)^T], and target vector y = [y_1, ..., y_N]^T,

θ^ = [b^, a^_1, ..., a^_D]^T = (X^T X)^{-1} X^T y,   (7)

σ^2 = (y - X θ^)^T (y - X θ^) / N,   (8)

Σ_θ = σ^2 (X^T X)^{-1}.   (9)

θ^ can be viewed as D + 1 'observations' which have uncertainty specified by the (D+1) × (D+1) covariance matrix Σ_θ associated with the corresponding linearisation point.

With a suitable ordering of the observations (e.g. y~1, a^1, y~2, a^2, ...), the associated noise covariance matrix, which is added to the covariance matrix calculated using (4)-(6), will be block diagonal, where the blocks are the Σ_θ matrices. Use of numerical estimates from linearisations makes it easy to use the full covariance matrix, including off-diagonal elements. This would be much more involved if Σ_θ were to be estimated simultaneously with the other covariance function hyperparameters.
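A sketch of how such a linearisation and its uncertainty might be identified from perturbation data around an equilibrium (plain least squares on invented toy data; the authors' exact estimator and prior may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Perturbation data around an equilibrium point x_bar of a scalar system.
x_bar = 1.0
x = x_bar + 0.1 * rng.standard_normal(200)
y = 0.5 + 2.0 * (x - x_bar) + 0.01 * rng.standard_normal(200)  # true bias 0.5, slope 2

# Design matrix with a bias column; theta = [b, a].
X = np.column_stack([np.ones_like(x), x - x_bar])
theta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta
sigma2 = resid @ resid / (len(y) - 2)           # residual variance estimate
Sigma_theta = sigma2 * np.linalg.inv(X.T @ X)   # covariance of the estimates

b_hat, a_hat = theta
# b_hat acts as a function 'observation' at x_bar, a_hat as a derivative
# 'observation', and Sigma_theta is their 2x2 noise covariance block.
```

The 200 raw perturbation points are thus summarised by two numbers plus a small covariance block, which is the computational saving the paper exploits.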
In a one-dimensional case, given zero noise on observations then two function observations
close together give exactly the same information, and constrain the model in the same
way as a derivative observation with zero uncertainty. Data is, however, rarely noise-free,
and the fact that we can so easily include knowledge of derivative or function observation
uncertainty is a major benefit of the Gaussian process prior approach.
The identified derivative and function observation, and their covariance matrix can locally
summarise the large number of perturbation training points, leading to a significant reduction in data needed during Gaussian process inference. We can, however, choose to improve
robustness by retaining any data in the training set from the equilibrium region which have
a low likelihood given the GP model based only on the linearisations (e.g. responses three
standard deviations away from the mean).
In this paper we choose the hyper-parameters that maximise the likelihood of the occurrence of the function and derivative data in the training sets, using standard optimisation software. Given these data sets and the hyper-parameters, the Gaussian process can be used to
infer the conditional distribution of the output as well as its partial derivatives for a given
input. The ability to predict not only the mean function response, and derivatives but also
to be able to predict the input-dependent variance of the function response and derivatives
has great utility in the many engineering applications including optimisation and control
which depend on derivative information.
2.4 Derivative and prediction uncertainty
Figure 2(c) gives intuitive insight into the constraining effect of function observations,
and function+derivative observations on realisations drawn from a Gaussian process prior.
To further illustrate the effect of knowledge of derivative information on prediction uncertainty, we consider a simple example with a single function observation and a single derivative observation at the same input, with the hyper-parameters held fixed. Figure 2(a) plots the standard deviation of models resulting from variations of the function and derivative observations. The four cases considered are:

1. a single function observation;
2. a single function observation + a noise-free derivative observation;
3. 150 noisy function observations;
4. a single function observation + an uncertain derivative observation (identified from the 150 noisy function observations above).
Figure 2: Variance effects of derivative information. (a) The effect of adding a derivative observation on the prediction uncertainty (standard deviation of GP predictions): a single function observation + one noisy derivative observation is almost indistinguishable from 150 function observations. (b) Effect of including a noise-free derivative or function observation on the prediction of mean and variance of sin(πx), given appropriate hyperparameters. (c) Examples of realisations drawn from a Gaussian process: left, no data; middle, the constraining effect of function observations (crosses); right, the effect of function & derivative observations (lines).
Note that the addition of a derivative point does not have an effect on the mean prediction in
any of the cases, because the function derivative is zero. The striking effect of the derivative
is on the uncertainty. In the case of prediction using function data the uncertainty increases
as we move away from the function observation. Addition of a noise-free derivative observation does not affect uncertainty at x = 0, but it does mean that uncertainty increases more slowly as we move away from 0; if uncertainty on the derivative increases, then
there is less of an impact on variance. The model based on the single derivative observation identified from the 150 noisy function observations is almost indistinguishable from
the model with all 150 function observations.
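The variance effect described here can be reproduced numerically (a sketch under a unit squared-exponential covariance; the settings are my assumptions, not the paper's): condition once on a function observation alone, and once on the function observation plus a noise-free derivative observation at the same input, then compare predictive variances away from the data.

```python
import numpy as np

v, w = 1.0, 1.0

def k_yy(a, b): return v * np.exp(-0.5 * w * (a - b) ** 2)
def k_dy(a, b): return -v * w * (a - b) * np.exp(-0.5 * w * (a - b) ** 2)  # cov(dy(a), y(b))
def k_dd(a, b): return v * w * (1 - w * (a - b) ** 2) * np.exp(-0.5 * w * (a - b) ** 2)

x_star = 1.0
jitter = 1e-9

# Model 1: one function observation y(0) = 0.
K1 = np.array([[k_yy(0.0, 0.0) + jitter]])
k1 = np.array([k_yy(x_star, 0.0)])
var1 = k_yy(x_star, x_star) - k1 @ np.linalg.solve(K1, k1)

# Model 2: function observation y(0) = 0 plus derivative observation dy(0) = 0.
K2 = np.array([[k_yy(0.0, 0.0), k_dy(0.0, 0.0)],
               [k_dy(0.0, 0.0), k_dd(0.0, 0.0)]]) + jitter * np.eye(2)
k2 = np.array([k_yy(x_star, 0.0), k_dy(0.0, x_star)])  # cov(y*, y(0)), cov(y*, dy(0))
var2 = k_yy(x_star, x_star) - k2 @ np.linalg.solve(K2, k2)

# The derivative observation leaves the mean unchanged here (its value is 0)
# but reduces the predictive variance away from the data: var2 < var1.
```

The mean is unaffected, exactly as the text notes, while the extra derivative row tightens the predictive variance.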
To further illustrate the effect of adding derivative information, consider pairs of noise-free observations of sin(πx). The hyper-parameters of the model are obtained through a training involving large amounts of data, but we then perform inference using only a few points. For illustration, one function point is replaced with a derivative point at the same location, and the results are shown in Figure 2(b).
3 Nonlinear dynamics example
As an example of a situation where we wish to integrate derivative and function observations we look at a discrete-time nonlinear dynamic system

x_{k+1} = f(x_k, u_k) + e_k,   (10)
y_k = x_k,   (11)

where x_k is the system state at time k, y_k is the observed output, u_k is the control input, and e_k is a zero-mean Gaussian noise term. A standard starting point for identification is to find linear dynamic models at various points on the manifold of equilibria. In the first part of the experiment, we wish to acquire training data by stimulating the system input u to take the system through a wide range of conditions along the manifold of equilibria, shown in Figure 3(a). The linearisations are each identified from 200 function observations, obtained by starting a simulation at an equilibrium point and perturbing the control signal about its equilibrium value.
We infer the system response, and the derivative response at various points along the manifold of equilibria, and plot these in Figure 4. The quadratic derivative from the
cubic true function is clearly visible in Figure 4(c), and is smooth, despite the presence of
several derivative observations with significant errors, because of the appropriate estimates
of derivative uncertainty. The derivative with respect to the control input u is close to constant in Figure 4(c). Note that the function 'observations' derived from the linearisations have much lower uncertainty than
the individual function observations.
As a second part of the experiment, as shown in Figure 3(b), we now add some off-equilibrium function observations to the training set, by applying large control perturbations to the system, taking it through transient regions. We perform a new hyper-parameter optimisation using the combination of the transient, off-equilibrium observations
and the derivative observations already available. The model incorporates both groups
of data and has reduced variance in the off-equilibrium areas. A comparison of simulation
runs from the two models with the true data, shown in Figure 5(a), demonstrates the improvement
in performance brought by the combination of equilibrium derivatives and off-equilibrium
observations over equilibrium information alone. The combined model is almost identical
in response to the true system response.
4 Conclusions
Engineers are used to interpreting linearisations, and find them a natural way of expressing
prior knowledge, or constraints that a data-driven model should conform to. Derivative
observations in the form of system linearisations are frequently used in control engineering,
and many nonlinear identification campaigns will have linearisations of different operating
regions as prior information. Acquiring perturbation data close to equilibrium is relatively
easy, and the large amounts of data mean that equilibrium linearisations can be made very
accurate. While in many cases we will be able to have accurate derivative observations,
they will rarely be noise-free, and the fact that we can so easily include knowledge of
derivative or function observation uncertainty is a major benefit of the Gaussian process
prior approach. In this paper we used numerical estimates of the full covariance matrix
for each linearisation, which were different for every linearisation. The analytic inference
of derivative information from a model, and importantly, its uncertainty is potentially of
great importance to control engineers designing or validating robust control laws, e.g. [8].
Other applications of models which base decisions on model derivatives will have similar
potential benefits.
Local linearisation models around equilibrium conditions are, however, not sufficient for
specifying global dynamics. We need observations away from equilibrium in transient regions, which tend to be much sparser as they are more difficult to obtain experimentally,
and the system behaviour tends to be more complex away from equilibrium. Gaussian processes, with robust inference, and input-dependent uncertainty predictions, are especially
interesting in sparsely populated off-equilibrium regions. Summarising the large quantities
of near-equilibrium data by derivative 'observations' should significantly reduce the computational problems associated with Gaussian processes in modelling dynamic systems.
We have demonstrated with a simulation of an example nonlinear system that Gaussian
process priors can combine derivative and function observations in a principled manner
which is highly applicable in nonlinear dynamic systems modelling tasks. Any smoothing
procedure involving linearisations needs to satisfy an integrability constraint, which has
not been solved in a satisfactory fashion in other widely-used approaches (e.g. multiple
model [10], or Takagi-Sugeno fuzzy methods [11]), but which is inherently solved within
the Gaussian process formulation. The method scales well to higher input dimensions D, adding only an extra D derivative observations + one function observation for each linearisation. In fact the real benefits may become more obvious in higher dimensions, with
increased quantities of training data which can be efficiently summarised by linearisations,
and more severe problems in blending local linearisations together consistently.
References
[1] A. O?Hagan. On curve fitting and optimal design for regression (with discussion). Journal of
the Royal Statistical Society B, 40:1?42, 1978.
[2] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In Neural Information Processing Systems - 8, pages 514?520, Cambridge, MA, 1996. MIT press.
[3] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models, pages
599?621. Kluwer, 1998.
[4] D. J. C. MacKay. Introduction to Gaussian Processes. NIPS?97 Tutorial notes., 1999.
[5] A. O?Hagan. Some Bayesian numerical analysis. In J. M. Bernardo, J. O. Berger, A. P. Dawid,
and A. F. M. Smith, editors, Bayesian Statistics 4, pages 345?363. Oxford University Press,
1992.
[6] C. E. Rasmussen. Gaussian processes to speed up Hybrid Monte Carlo for expensive Bayesian
integrals. Draft: available at http://www.gatsby.ucl.ac.uk/ edward/pub/gphmc.ps.gz, 2003.
[7] R. Murray-Smith, T. A. Johansen, and R. Shorten. On transient dynamics, off-equilibrium
behaviour and identification in blended multiple model structures. In European Control Conference, Karlsruhe, 1999, pages BA?14, 1999.
[8] R. Murray-Smith and D. Sbarbaro. Nonlinear adaptive control using non-parametric Gaussian
process prior models. In 15th IFAC World Congress on Automatic Control, Barcelona, 2002.
[9] D. J. Leith, W. E. Leithead, E. Solak, and R. Murray-Smith. Divide & conquer identification: Using Gaussian process priors to combine derivative and non-derivative observations in a
consistent manner. In Conference on Decision and Control, 2002.
[10] R. Murray-Smith and T. A. Johansen. Multiple Model Approaches to Modelling and Control.
Taylor and Francis, London, 1997.
[11] T. Takagi and M. Sugeno. Fuzzy identification of systems and its applications for modeling and
control. IEEE Trans. on Systems, Man and Cybernetics, 15(1):116?132, 1985.
Acknowledgements
The authors gratefully acknowledge the support of the Multi-Agent Control Research Training Network by EC TMR grant HPRN-CT-1999-00107, support from EPSRC grant Modern statistical approaches to off-equilibrium modelling for nonlinear system control GR/M76379/01, support from
EPSRC grant GR/R15863/01, and Science Foundation Ireland grant 00/PI.1/C067. Thanks to J.Q.
Shi and A. Girard for useful comments.
Figure 3: The manifold of equilibria on the true function. Circles indicate points at which a derivative observation is made; crosses indicate function observations. (a) Derivative observations from linearisations identified from the perturbation data, 200 noisy points per linearisation point. (b) Derivative observations on equilibrium, and off-equilibrium function observations from a transient trajectory.
Figure 4: Inferred values of function and derivatives, with contours, as x and u are varied along the manifold of equilibria (c.f. Fig. 3). (a) Function observations. (b), (c) Derivative observations. Circles indicate the locations of the derivative observation points; lines indicate the uncertainty of the observations (±2 standard deviations).
Figure 5: Modelling results. (a) Simulation of dynamics, comparing the true system, the GP trained with both on- and off-equilibrium data, and the GP trained only on equilibrium data: the combined model is close to the true system, unlike the model based only on equilibrium data. (b) Inferred mean and uncertainty surfaces using linearisations and off-equilibrium data; the trajectory of the simulation shown in (a) is plotted for comparison.
Learning about Multiple Objects in Images:
Factorial Learning without Factorial Search
Christopher K. I. Williams and Michalis K. Titsias
School of Informatics, University of Edinburgh, Edinburgh EH1 2QL, UK
[email protected]
[email protected]
Abstract
We consider data which are images containing views of multiple objects.
Our task is to learn about each of the objects present in the images. This
task can be approached as a factorial learning problem, where each image
must be explained by instantiating a model for each of the objects present
with the correct instantiation parameters. A major problem with learning
a factorial model is that as the number of objects increases, there is a
combinatorial explosion of the number of configurations that need to be
considered. We develop a method to extract object models sequentially
from the data by making use of a robust statistical method, thus avoiding the combinatorial explosion, and present results showing successful
extraction of objects from real images.
1 Introduction
In this paper we consider data which are images containing views of multiple objects.
Our task is to learn about each of the objects present in the images. Previous approaches
(discussed in more detail below) have approached this as a factorial learning problem,
where each image must be explained by instantiating a model for each of the objects present
with the correct instantiation parameters. A serious concern with the factorial learning
problem is that as the number of objects increases, there is a combinatorial explosion of the
number of configurations that need to be considered. Suppose there are L possible objects, and that there are J possible values that the instantiation parameters of any one object can take on; we will need to consider J^L combinations to explain any image. In contrast,
in our approach we find one object at a time, thus avoiding the combinatorial explosion.
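The combinatorial growth is easy to make concrete (the numbers below are illustrative assumptions, not from the paper):

```python
# Joint search: each of L objects can take J instantiation values,
# giving J ** L configurations per image. Sequential extraction of one
# object at a time considers only on the order of L * J configurations.
J = 100  # e.g. possible translations in a small image (assumed value)

joint = [J ** L for L in range(1, 5)]       # 100, 10^4, 10^6, 10^8
sequential = [L * J for L in range(1, 5)]   # 100, 200, 300, 400
```

Even for modest J and L the joint search becomes infeasible, which motivates the one-object-at-a-time strategy.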
In unsupervised learning we aim to identify regularities in data such as images. One fairly
simple unsupervised learning model is clustering, which can be viewed as a mixture model
where there are a finite number of types of object, and data is produced by choosing one of
these objects and then generating the data conditional on this choice. As a model of objects
in images standard clustering approaches are limited as they do not take into account the
variability that can arise due to the transformations that can take place, described by instantiation parameters such as translation, rotation etc. of the object. Suppose that there are T different instantiation parameters; then a single object will sweep out a T-dimensional manifold in the image space. Learning about objects taking this regularity into account has
http://anc.ed.ac.uk
been called transformation-invariant clustering by Frey and Jojic (1999, 2002). However,
this work is still limited to finding a single object in each image.
A more general model for data is that where the observations are explained by multiple
causes; in our example this will be that in each image there are L objects. The approach
of Frey and Jojic (1999, 2002) can be extended to this case by explicitly considering the
simultaneous instantiation of all objects (Jojic and Frey, 2001). However, this gives rise
to a large search problem over the instantiation parameters of all objects simultaneously,
and approximations such as variational methods are needed to carry out the inference. In
our method, by contrast, we discover the objects one at a time using a robust statistical
method. Sequential object discovery is possible because multiple objects combine by occluding each other.
The general problem of factorial learning has longer history, see, for example, Barlow
(1989), Hinton and Zemel (1994), and Ghahramani (1995). However, Frey and Jojic made
the important step for image analysis problems of using explicit transformations of object
models, which allows the incorporation of prior knowledge about these transformations
and leads to good interpretability of the results.
A related line of research is that concerned with discovering part decompositions of objects.
Lee and Seung (1999) described a non-negative matrix factorization method addressing this
problem, although their work does not deal with parts undergoing transformations. There
is also work on learning parts by Shams and von der Malsburg (1999), which is compared
and contrasted with our work in section 4.
The structure of the remainder of this paper is as follows. In section 2 we describe the
model, first for images containing only a single object ( 2.1) and then for images containing multiple objects ( 2.2). In section 3 we present experimental results for up to five
objects appearing against stationary and non-stationary backgrounds. We conclude with a
discussion in section 4.
2 Theory
2.1 Learning one object
In this section we consider the problem of learning about one object which can appear at
various locations in an image. The object is in the foreground, with a background behind it.
This background can either be fixed for all training images, or vary from image to image.
The two key issues that we must deal with are (i) the notion of a pixel being modelled
as foreground or background, and (ii) the problem of transformations of the object. We
consider first the foreground/background issue.
Consider an image x containing P pixels, arranged as a length-P vector. Our aim is to learn appearance-based representations of the foreground f and the background b. As the object will be smaller than P pixels, we will need to specify which pixels belong to the background and which to the foreground; this is achieved by a vector of binary latent variables s, one for each pixel. Each binary variable in s is drawn independently from the corresponding entry in a vector of probabilities π. For pixel i, if π_i ≈ 0, then the pixel will be ascribed to the background with high probability, and if π_i ≈ 1, it will be ascribed to the foreground with high probability. We sometimes refer to π as a mask.

Each pixel x_i is modelled by a mixture distribution:

p(x_i) = π_i N(x_i; f_i, σ_f^2) + (1 - π_i) N(x_i; b_i, σ_b^2),   (1)

where σ_f^2 and σ_b^2 are respectively the foreground and background variances. Thus, ignoring transformations, we obtain E[x_i] = π_i f_i + (1 - π_i) b_i.
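The per-pixel foreground/background mixture described here can be written directly in code (a minimal sketch; the toy appearances, mask and variances below are invented values):

```python
import numpy as np

def gauss(x, mu, var):
    # Univariate Gaussian density.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def pixel_likelihood(x, f, b, pi, var_f=0.01, var_b=0.01):
    """Per-pixel mixture of foreground and background, equation (1)."""
    return pi * gauss(x, f, var_f) + (1 - pi) * gauss(x, b, var_b)

# Toy image of 4 pixels: foreground is bright (1.0), background dark (0.0).
x  = np.array([0.98, 0.02, 1.01, 0.0])
f  = np.array([1.0, 1.0, 1.0, 1.0])
b  = np.array([0.0, 0.0, 0.0, 0.0])
pi = np.array([0.9, 0.1, 0.9, 0.1])   # mask: pixels 0 and 2 are foreground

log_lik = np.sum(np.log(pixel_likelihood(x, f, b, pi)))
```

A bright pixel under a high mask probability gets a high likelihood from the foreground component, while the same pixel under a low mask probability is forced onto the background component.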
The second issue that we must deal with is that of transformations. Below we consider only
translations, although the ideas can be extended to deal with other transformations such as
scaling and rotation (see e.g. Jojic and Frey (2001)). Each possible transformation (e.g.
translations in units of one pixel) is represented by a corresponding transformation matrix,
so that matrix
corresponds to transformation and is the transformed foreground
model. In our implementation the translations use wrap-around, so that each
is in fact
a permutation matrix. The semantics of foreground and background mean that the mask
must also be transformed, so that we obtain

p(x | j) = ∏_i [ (T_j π)_i N(x_i; (T_j f)_i, σ_f²) + (1 − (T_j π)_i) N(x_i; b_i, σ_b²) ]    (2)
Notice that the foreground f and mask π are transformed by T_j, but the background b is
not. In order for equation 2 to make sense, each element of T_j π must be a valid probability
(lying in [0, 1]). This is certainly true for the case when T_j is a permutation matrix (and can
be true more generally).
To complete the model we place a prior probability p(j) on each transformation j; this is
taken to be uniform over all possibilities, so that p(j) = 1/J. Given a data set {x¹, …, x^N}
we can adapt the parameters θ = (f, b, π, σ_f², σ_b²) by maximizing the log likelihood
L(θ) = Σ_n log Σ_j p(j) p(x^n | j). This can be achieved through using the EM
algorithm to handle the missing data, which is the transformation j and the mask variables s.
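As a concrete sketch of the E-step for the transformation variable: with a uniform prior p(j) = 1/J, the posterior responsibility of transformation j is proportional to p(x | j), which is best computed from log likelihoods via log-sum-exp for numerical stability. This helper is illustrative, not code from the paper.

```python
import math

def responsibilities(log_liks):
    # posterior over transformations: R_j proportional to p(j) p(x | j);
    # the uniform prior p(j) = 1/J cancels in the normalization
    m = max(log_liks)                       # subtract the max for numerical stability
    w = [math.exp(l - m) for l in log_liks]
    s = sum(w)
    return [wi / s for wi in w]
```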
The model developed in this section is similar to Jojic and Frey (2001), except that our mask
has probabilistic semantics, which means that an exact M-step can be used as opposed
to the generalized M-step used by Jojic and Frey.
2.2 Coping with multiple objects
If there are L foreground objects, one natural approach is to consider models with L latent
variables, each taking on the values of the J possible transformations. We also need
to account for object occlusions. By assuming that the objects can arbitrarily occlude
one another (and this occlusion ordering can change in different images), there are L! possible arrangements. A model that accounts for multiple objects is described in Jojic and
Frey (2001), where the occlusion ordering of the objects is taken as being fixed, since they
assume that each object is ascribed to a global layer. A full search over the parameters
(assuming unknown occlusion ordering for each image) must consider J^L · L! possibilities,
which scales exponentially with L. An alternative is to consider approximations; Ghahramani (1995) suggests mean field and Gibbs sampling approximations, and Jojic and Frey
(2001) use approximate variational inference.
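To see the scaling concretely, the joint configuration count can be tabulated directly. The count J^L · L! (one transformation per object, times all occlusion orderings) is a reconstruction from the surrounding argument, and the numbers below are purely illustrative.

```python
import math

def full_search_size(num_transforms, num_objects):
    # joint configurations: one transformation per object, times all occlusion orderings
    return num_transforms ** num_objects * math.factorial(num_objects)

print(full_search_size(100, 1))  # 100
print(full_search_size(100, 3))  # 6000000
```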
Our goal is to find one object at a time in the images. We describe two methods for doing
this. The first uses random initializations, and on different runs can find different objects;
we denote this RANDOM STARTS. The second method (denoted GREEDY) removes
objects found in earlier iterations and looks for as-yet-undiscovered objects in what remains.
For both methods we need to adapt the model presented in section 2.1. The problem is that
occlusion can occur of both the foreground and the background. For a foreground pixel, a
different object to the one being modelled may be interposed between the camera and our
object, thus perturbing the pixel value. This can be modelled with a mixture distribution
! , where is the fraction of times
* ! %(' -/.10
as ! %,'
65
7
8 5 : 9
5
9
! is a uniform
a foreground pixel is not occluded and the robustifying component
distribution common for all image pixels. Such robust models have been used for image
matching tasks by a number of authors, notably Black and colleagues (Black and Jepson,
1996).
Similarly for the background, a different object from the one being modelled may be interposed between the background and the camera, so that we again have a mixture model
p_b(x_i) = β N(x_i; b_i, σ_b²) + (1 − β) U(x_i),

with similar semantics for the parameter β. (If the background has high variability then
this robustness may not be required, but it will be in the case that the background is fixed
while the objects move.)
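A minimal sketch of the robust pixel density: a Gaussian mixed with a uniform floor. The function name and the pixel range assumed for U are illustrative, not from the paper.

```python
import math

def robust_density(x, mu, var, alpha, lo=0.0, hi=1.0):
    # alpha * N(x; mu, var) + (1 - alpha) * U(x), with U uniform on [lo, hi]
    gauss = math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    uniform = 1.0 / (hi - lo) if lo <= x <= hi else 0.0
    return alpha * gauss + (1 - alpha) * uniform
```

The uniform component puts a floor under the density of occluded pixels, so a single outlying pixel cannot dominate the Gaussian fit.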
2.2.1 Finding the first object
With this robust model we can now apply the RANDOM STARTS algorithm by maximizing the likelihood of a set of images with respect to the model using the EM algorithm.
The expected complete data log likelihood is given by
Q(θ) = Σ_n Σ_j R^n_j E_{s | x^n, j} [ log p(x^n, s, j | θ) ]    (3)

where, in the updates below, ∘ denotes the element-wise product between two vectors and
1 denotes the P-dimensional vector containing ones.
The expected values of several latent variables are as follows: R^n_j = Q(j^n = j) is the
transformation responsibility; s̄^n_j is a P-dimensional vector associated with the binary
mask variables s, with its ith element storing the probability that pixel i of image n belongs
to the foreground under transformation j; the vector r̄^n_j contains the robust responsibilities for the foreground on image n using transformation j; and similarly the vector r̄^n_b
defines the robust responsibilities of the background. Note that the latter responsibilities
do not depend on the transformation since the background is not transformed.
All of the above expected values of the missing variables are estimated in the E-step using
the current parameter values. In the M-step we maximise the Q function with respect to
the model parameters f, b, π, σ_f² and σ_b². We do not have space to show all of the
updates, but for example

f = [ Σ_n Σ_j R^n_j T_jᵀ ( s̄^n_j ∘ r̄^n_j ∘ x^n ) ] ⊘ [ Σ_n Σ_j R^n_j T_jᵀ ( s̄^n_j ∘ r̄^n_j ) ]    (4)
where ⊘ stands for the element-wise division between two vectors. This update is quite
intuitive. Consider the case when R^n_{j*} = 1 for a single transformation j* and R^n_j = 0
otherwise. For pixels which are ascribed to the foreground (i.e. those with s̄^n_{j*} ≈ 1),
the values in x^n are transformed by T_{j*}ᵀ (which is T_{j*}^{−1}, as the transformations
are permutation matrices). This removes the effect of the transformation and thus allows
the foreground pixels found in each training image to be averaged to produce f.
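The intuition behind the update in eq. (4) — undo each image's transformation, then average — can be sketched directly on toy data; the helper names here are illustrative.

```python
def shift(v, j):
    # permutation T_j: circular shift by j pixels
    P = len(v)
    return [v[(i - j) % P] for i in range(P)]

def unshift(v, j):
    # T_j transpose; for a permutation matrix this equals T_j inverse
    return shift(v, -j)

f = [1.0, 2.0, 3.0, 0.0, 0.0]                # true foreground pattern
images = [shift(f, 2), shift(f, 4)]          # the pattern at two known translations
aligned = [unshift(x, j) for x, j in zip(images, [2, 4])]
average = [sum(col) / len(col) for col in zip(*aligned)]   # recovers f exactly
```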
On different runs we hope to discover the different objects. However, this is rather inefficient, as the basins of attraction for the different objects may be very different in size given
the initialization. Thus we describe the GREEDY algorithm next.
2.2.2 The GREEDY algorithm
We assume that we have run the RANDOM STARTS algorithm and have learned a foreground model f¹ and mask π¹. We wish to remove from consideration the pixels of the
learned object (in each training image) in order to find a new object by applying the same
algorithm. For each example image x^n we can use the responsibilities R^n_j to find the
most likely transformation j*.1 Now note that the transformed mask T_{j*}π¹ obtains values
close to 1 for all object pixels; however, some of these pixels might be occluded by other
not-yet-discovered objects, and we do not wish to remove them from consideration. Thus
we consider the vector ρ^n = (T_{j*}π¹) ∘ r̄^n_{j*}. According to the semantics of the robust
foreground responsibilities r̄^n_{j*}, ρ^n will roughly give values close to 1 only for the nonoccluded object pixels. To further explain all pixels having ρ^n_i ≈ 0 we introduce a new
foreground model f² and mask π²; then for each transformation j₂ of model 2, we obtain

p(x^n_i | j₂) = ρ^n_i N(x^n_i; (T_{j*}f¹)_i, σ_f²) + (1 − ρ^n_i)[ (T_{j₂}π²)_i p_f(x^n_i) + (1 − (T_{j₂}π²)_i) p_b(x^n_i) ]    (5)

Note that we have dropped the robustifying component U(x_i) from model 1, since the
parameters of this object have been learned. By summing out over the possible transformations we can maximize the likelihood with respect to f², π², σ_f², and σ_b².
The above expression says that each image pixel x_i is modelled by a three-component
mixture distribution; the pixel x_i can belong to the first object with probability ρ_i, or
not belong to the first object and belong to the second one with probability
(1 − ρ_i)(T_{j₂}π²)_i, while with the remaining probability it is background. Thus, the search for
a new object involves only the pixels that are not accounted for by model 1 (i.e. those for
which ρ_i ≈ 0).
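A sketch of the three-component pixel density of eq. (5), evaluated at a single pixel. The robustifying components are dropped here for brevity, and the names are illustrative.

```python
import math

def gauss(x, mu, var):
    # univariate Gaussian density N(x; mu, var)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def second_object_density(x, rho, f1, f2, pi2, b, var_f, var_b):
    # pixel explained by object 1 (weight rho), by the new object 2
    # (weight (1 - rho) * pi2), or else by the background
    return (rho * gauss(x, f1, var_f)
            + (1 - rho) * (pi2 * gauss(x, f2, var_f)
                           + (1 - pi2) * gauss(x, b, var_b)))
```

The three weights ρ, (1 − ρ)π², and (1 − ρ)(1 − π²) sum to one, so this is a valid mixture at every pixel.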
This process can be continued, so that after finding a second model, the remaining background is searched for a third model, and so on. The formula for L objects becomes

p(x_i) = Σ_{l=1}^{L−1} ρ^l_i N(x_i; (T_{j_l} f^l)_i, σ_f²) + [ Π_{l=1}^{L−1} (1 − ρ^l_i) ] [ (T_{j_L} π^L)_i p_f(x_i) + (1 − (T_{j_L} π^L)_i) p_b(x_i) ]    (6)

This is an (L+1)-component mixture at each pixel, where the (L+1)th component is the
background. If L = 1 then the term Π_{l=1}^{L−1} (1 − ρ^l_i) is defined to be equal to 1. Note
that all parameters of the first L − 1 components are kept fixed (learned in previous stages).
We always deal with
only one object at a time and thus with one transformation latent variable. This approach
can be viewed as approximating the full factorial model by sequentially learning each factor
(object). A crucial point is that the algorithm is not assumed to extract layers in images,
ordered from the nearest layer to the furthest one. In fact, in the next section we show a two-object example of a video sequence where we learn the occluded object first.
Space limitations do not permit us to show the Q function and updates for the parameters,
but these are very similar to the RANDOM STARTS, since we also learn only the parameters of one object plus the background while keeping fixed all the parameters of previously
discovered objects.
1
It would be possible to make a "softer" version of this, where the transformations are weighted
by their posterior probabilities, but in practice we have found that these probabilities are usually
near 1 for the best-fitting transformation and near 0 otherwise after learning f¹ and π¹.
Figure 1: Learning two objects against a stationary background. Panel (a) displays some
frames of the training images, and (b) shows the two objects and background found by the
GREEDY algorithm.
3 Experiments
We describe three experiments extracting objects from images including up to five movable
objects, using stationary as well as non-stationary backgrounds. In these experiments the
uniform distribution U(x_i) is based on the maximum and minimum pixel values of all
training image pixels. In all the experiments reported below, α and β were chosen to be
the same fixed value. Also we assume that the total number L of objects that appear
in the images is known; thus the GREEDY algorithm terminates when we discover the Lth object.
The learning algorithm also requires the initialization of the foreground and background
appearances f and b, the mask π and the variances σ_f² and σ_b². Each element of the mask is
initialised to 0.5, the background appearance to the mean of the training images, and the
variances σ_f² and σ_b² are initialized to equal large values (larger than the overall variance of
all image pixels).
For the foreground appearance we compute the pixelwise mean of the
training images and add independent Gaussian noise with the equal variances at each pixel,
where the variance is set to be large enough so that the range of pixel values found in the
training images can be explored.
In the GREEDY algorithm, each time we add a new object the parameters f, π, b, σ_f²,
and σ_b² are initialized as described above. This means that the background is reset to
the mean of the training images; this is done to avoid local maxima, since the background
found by considering only some of the objects in the images can be very different than the
true background.
Figure 1 illustrates the detection of two objects against a stationary background.2 Some
examples of the 44 training images (excluding the black border) are shown in Figure
1(a) and results are shown in Figure 1(b). For both objects we show both the learned mask
and the elementwise product of the learned foreground and mask. In most runs the person
with the lighter shirt (Jojic) is discovered first, even though he is occluded and the person
with the striped shirt (Frey) is not. Video sequences of the raw data and the extracted objects
can be viewed at http://www.dai.ed.ac.uk/homes/s0129556/lmo.html .
In Figure 2 five objects are learned against a stationary background, using a dataset of 7
images of size . Notice the large amount of occlusion in some of the training images
shown in Figure 2(a). Results are shown in Figure 2(b) for the GREEDY algorithm.
2
These data are used in Jojic and Frey (2001). We thank N. Jojic and B. Frey for making available
these data via http://www.psi.toronto.edu/layers.html.
Figure 2: Learning five objects against a stationary background. Panel (a) displays some of
the training images and (b) shows the objects learned by the GREEDY algorithm.
Figure 3: Two objects are learned from a set of images with non-stationary background.
Panel (a) displays some examples of the training images, and (b) shows the objects found
by the GREEDY algorithm.
In Figure 3 we consider learning two objects against a non-stationary background. Actually
three different backgrounds were used, as can be seen in the example images shown in
Figure 3(a). There were images in the training set. Using the RANDOM
STARTS algorithm the CD was found in 9 out of 10 runs. The results with the GREEDY
algorithm are shown in Figure 3(b). The background found is approximately the average
of the three backgrounds.
Overall we conclude that the RANDOM STARTS algorithm is not very effective at finding multiple objects in images; it needs many runs from different initial conditions, and
sometimes fails entirely to find all objects. In contrast the GREEDY algorithm is very
effective.
4 Discussion
Shams and von der Malsburg (1999) obtained candidate parts by matching images in a
pairwise fashion, trying to identify corresponding regions in the two images. These candidate image patches were then clustered to compensate for the effect of occlusions. We
make four observations: (i) instead of directly learning the models, they match each image
against all others (with complexity O(N²)), as compared to the linear scaling with N in
our method; (ii) in their method the background must be removed, otherwise it would give
rise to large match regions; (iii) they do not define a probabilistic model for the images
(with all its attendant benefits); (iv) their data (although based on realistic CAD-type models) is synthetic, and designed to focus learning on shape related features by eliminating
complicating factors such as background, surface markings etc.
In our work the model for each pixel is a mixture of Gaussians. There is some previous
work on pixelwise mixtures of Gaussians (see, e.g. Rowe and Blake 1995) which can, for
example, be used to achieve background subtraction and highlight moving objects against
a stationary background. Our work extends beyond this by gathering the foreground pixels
into objects, and also allows us to learn objects in the more difficult non-stationary background case. For the stationary background case, pixelwise mixture of Gaussians might be
useful ways to create candidate objects.
The GREEDY algorithm has shown itself to be an effective factorial learning algorithm
for image data. We are currently investigating issues such as dealing with richer classes
of transformations, detecting the number of objects automatically, and allowing objects not to appear in all images. Furthermore, although we have described this work in relation to image modelling,
it can be applied to other domains. For example, one can make a model for sequence
data by having Hidden Markov models (HMMs) for a ?foreground? pattern and the ?background?. Faced with sequences containing multiple foreground patterns, one could extract
these patterns sequentially using a similar algorithm to that described above. It is true that
for sequence data it would be possible to train a compound HMM consisting of several
HMM components simultaneously, but there may be severe local minima problems in the search
space so that the sequential approach might be preferable.
Acknowledgements: CW thanks Geoff Hinton for helpful discussions concerning the idea
of learning one object at a time.
References
Barlow, H. (1989). Unsupervised Learning. Neural Computation, 1:295–311.
Black, M. J. and Jepson, A. (1996). EigenTracking: Robust matching and tracking of articulated
objects using a view-based representation. In Buxton, B. and Cipolla, R., editors, Proceedings of
the Fourth European Conference on Computer Vision, ECCV'96, pages 329–342. Springer-Verlag.
Frey, B. J. and Jojic, N. (1999). Estimating mixture models of images and inferring spatial transformations using the EM algorithm. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition 1999. IEEE Computer Society Press. Ft. Collins, CO.
Frey, B. J. and Jojic, N. (2002). Transformation Invariant Clustering and Linear Component Analysis
Using the EM Algorithm. Revised manuscript under review for IEEE PAMI.
Ghahramani, Z. (1995). Factorial Learning and the EM Algorithm. In Tesauro, G., Touretzky, D. S.,
and Leen, T. K., editors, Advances in Neural Information Processing Systems 7, pages 617–624.
Morgan Kaufmann, San Mateo, CA.
Hinton, G. E. and Zemel, R. S. (1994). Autoencoders, minimum description length, and Helmholtz
free energy. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information
Processing Systems 6. Morgan Kaufmann.
Jojic, N. and Frey, B. J. (2001). Learning Flexible Sprites in Video Layers. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition 2001. IEEE Computer Society
Press. Kauai, Hawaii.
Lee, D. D. and Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791.
Rowe, S. and Blake, A. (1995). Statistical Background Modelling For Tracking With A Virtual
Camera. In Pycock, D., editor, Proceedings of the 6th British Machine Vision Conference,
volume 2, pages 423–432. BMVA Press.
Shams, L. and von der Malsburg, C. (1999). Are object shape primitives learnable? Neurocomputing,
26-27:855–863.
Feedback: Toward Autonomous Agents
with Perfect Pitch
Lawrence K. Saul1 , Daniel D. Lee2 , Charles L. Isbell3 , and Yann LeCun4
1
Department of Computer and Information Science
2
Department of Electrical and System Engineering
University of Pennsylvania, 200 South 33rd St, Philadelphia, PA 19104
3
Georgia Tech College of Computing, 801 Atlantic Drive, Atlanta, GA 30332
4
NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
[email protected], [email protected], [email protected], [email protected]
Abstract
We have implemented a real time front end for detecting voiced speech
and estimating its fundamental frequency. The front end performs the
signal processing for voice-driven agents that attend to the pitch contours
of human speech and provide continuous audiovisual feedback. The algorithm we use for pitch tracking has several distinguishing features: it
makes no use of FFTs or autocorrelation at the pitch period; it updates the
pitch incrementally on a sample-by-sample basis; it avoids peak picking
and does not require interpolation in time or frequency to obtain high resolution estimates; and it works reliably over a four octave range, in real
time, without the need for postprocessing to produce smooth contours.
The algorithm is based on two simple ideas in neural computation: the
introduction of a purposeful nonlinearity, and the error signal of a least
squares fit. The pitch tracker is used in two real time multimedia applications: a voice-to-MIDI player that synthesizes electronic music from vocalized melodies, and an audiovisual Karaoke machine with multimodal
feedback. Both applications run on a laptop and display the user?s pitch
scrolling across the screen as he or she sings into the computer.
1 Introduction
The pitch of the human voice is one of its most easily and rapidly controlled acoustic attributes. It plays a central role in both the production and perception of speech[17]. In clean
speech, and even in corrupted speech, pitch is generally perceived with great accuracy[2, 6]
at the fundamental frequency characterizing the vibration of the speaker?s vocal chords.
There is a large literature on machine algorithms for pitch tracking[7], as well as applications to speech synthesis, coding, and recognition. Most algorithms have one or more
of the following components. First, sliding windows of speech are analyzed at 5-10 ms
intervals, and the results concatenated over time to obtain an initial estimate of the pitch
contour. Second, within each window (30-60 ms), the pitch is deduced from peaks in the
windowed autocorrelation function[13] or power spectrum[9, 10, 15], then refined by further interpolation in time or frequency. Third, the estimated pitch contours are smoothed
by a postprocessing procedure[16], such as dynamic programming or median filtering, to
remove octave errors and isolated glitches.
In this paper, we describe an algorithm for pitch tracking that works quite differently
and?based on our experience?quite well as a real time front end for interactive voicedriven agents. Notably, our algorithm does not make use of FFTs or autocorrelation at the
pitch period; it updates the pitch incrementally on a sample-by-sample basis; it avoids peak
picking and does not require interpolation in time or frequency to obtain high resolution
estimates; and it works reliably over a four octave range?in real time?without any postprocessing. We have implemented the algorithm in two real-time multimedia applications:
a voice-to-MIDI player and an audiovisual Karaoke machine. More generally, we are using
the algorithm to explore novel types of human-computer interaction, as well as studying
extensions of the algorithm for handling corrupted speech and overlapping speakers.
2 Algorithm
A pitch tracker performs two essential functions: it labels speech as voiced or unvoiced, and
throughout segments of voiced speech, it computes a running estimate of the fundamental
frequency. Pitch tracking thus depends on the running detection and identification of periodic signals in speech. We develop our algorithm for pitch tracking by first examining the
simpler problem of detecting sinusoids. For this simpler problem, we describe a solution
that does not involve FFTs or autocorrelation at the period of the sinusoid. We then extend
this solution to the more general problem of detecting periodic signals in speech.
2.1 Detecting sinusoids
A simple approach to detecting sinusoids is based on viewing them as the solution of a
second order linear difference equation[12]. A discretely sampled sinusoid has the form:
s_n = A sin(ωn + φ)    (1)
Sinusoids obey a simple difference equation such that each sample s_n is proportional to the
average of its neighbors (s_{n−1} + s_{n+1})/2, with the constant of proportionality given by:

s_n = (cos ω)^{−1} · (s_{n−1} + s_{n+1})/2    (2)
Eq. (2) can be proved using trigonometric identities to expand the terms on the right hand
side. We can use this property to judge whether an unknown signal x n is approximately
sinusoidal. Consider the error function:
E(α) = Σ_n [ x_n − α (x_{n−1} + x_{n+1})/2 ]²    (3)
If the signal xn is well described by a sinusoid, then the right hand side of this error function
will achieve a small value when the coefficient α is tuned to match its frequency, as in
eq. (2). The minimum of the error function is found by solving a least squares problem:
α* = 2 Σ_n x_n (x_{n−1} + x_{n+1}) / Σ_n (x_{n−1} + x_{n+1})²    (4)
Thus, to test whether a signal x_n is sinusoidal, we can minimize its error function by
eq. (4), then check two conditions: first, that E(α*) ≪ E(0), and second, that |α*| ≥ 1.
The first condition establishes that the mean squared error is small relative to the mean
squared amplitude of the signal, while the second establishes that the signal is sinusoidal
(as opposed to exponential), with estimated frequency:

ω* = cos^{−1}(1/α*)    (5)
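The whole estimator fits in a few lines. Below is a minimal pure-Python sketch of eqs. (4) and (5); the function name is illustrative.

```python
import math

def estimate_frequency(x):
    # alpha* from eq. (4): least squares fit of x_n to (x_{n-1} + x_{n+1}) / 2
    num = sum(x[n] * (x[n - 1] + x[n + 1]) for n in range(1, len(x) - 1))
    den = sum((x[n - 1] + x[n + 1]) ** 2 for n in range(1, len(x) - 1))
    alpha = 2.0 * num / den
    return math.acos(1.0 / alpha)   # eq. (5); assumes |alpha| >= 1 (sinusoidal case)

s = [math.sin(0.3 * n + 0.7) for n in range(200)]
print(estimate_frequency(s))        # recovers the frequency, 0.3 rad/sample
```

Because a noiseless sinusoid satisfies eq. (2) exactly, the least squares fit recovers ω to machine precision, regardless of the phase or the number of samples.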
This procedure for detecting sinusoids (known as Prony?s method[12]) has several notable
features. First, it does not rely on computing FFTs or autocorrelation at the period of the
sinusoid, but only on computing
and one-sample-lagged autocorrelations
Pthe zero-lagged
P
that appear in eq. (4), namely n x2n and n xn xn?1 . Second, the frequency estimates
are obtained from the solution of a least squares problem, as opposed to the peaks of an
autocorrelation or FFT, where the resolution may be limited by the sampling rate or signal
length. Third, the method can be used in an incremental way to track the frequency of a
slowly modulated sinusoid. In particular, suppose we analyze sliding windows?shifted by
just one sample at a time?of a longer, nonstationary signal. Then we can efficiently update
the windowed autocorrelations that appear in eq. (4) by adding just those terms generated
by the rightmost sample of the current window and dropping just those terms generated
by the leftmost sample of the previous window. (The number of operations per update is
constant and does not depend on the window size.)
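The constant-time bookkeeping can be sketched as follows: maintain the windowed sums, and for each new sample add its terms and drop those generated by the sample leaving the window. The helper name is illustrative.

```python
def sliding_sums(x, W):
    # running windowed sums needed by the least squares fit:
    #   xx = sum of x_n^2 over the window, xp = sum of x_n * x_{n-1}
    xx = sum(x[n] * x[n] for n in range(W))
    xp = sum(x[n] * x[n - 1] for n in range(1, W))
    history = [(xx, xp)]
    for t in range(W, len(x)):
        # add the terms generated by the newest sample ...
        xx += x[t] * x[t]
        xp += x[t] * x[t - 1]
        # ... and drop the terms generated by the sample leaving the window
        xx -= x[t - W] * x[t - W]
        xp -= x[t - W + 1] * x[t - W]
        history.append((xx, xp))
    return history
```

Each step costs a fixed handful of multiply-adds, independent of the window size W.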
We can extract more information from the least squares fit besides the error in eq. (3)
and the estimate in eq. (5). In particular, we can characterize the uncertainty in the estimated frequency. The normalized error function N(α) = log[E(α)/E(0)] evaluates the
least squares fit on a dimensionless logarithmic scale that does not depend on the amplitude of the signal. Let ν = log(cos^{−1}(1/α)) denote the log-frequency implied by the coefficient α, and let Δν* denote the uncertainty in the estimated log-frequency ν* = log ω*.
(By working in the log domain, we measure uncertainty in the same units as the distance
between notes on the musical scale.) A heuristic measure of uncertainty is obtained by
evaluating the sharpness of the least squares fit, as characterized by the second derivative:

Δν* = [ ∂²N/∂ν² |_{ν=ν*} ]^{−1/2} = (cos²ω* / (ω* sin ω*)) · [ (1/E) ∂²E/∂α² |_{α=α*} ]^{−1/2}    (6)
Eq. (6) relates sharper fits to lower uncertainty, or higher precision. As we shall see, it
provides a valuable criterion for comparing the results of different least squares fits.
2.2 Detecting voiced speech
Our algorithm for detecting voice speech is a simple extension of the algorithm described
in the previous section. The algorithm operates on the time domain waveform in a number of stages, as summarized in Fig. 1. The analysis is based on the assumption that the
low frequency spectrum of voiced speech can be modeled as a sum of (noisy) sinusoids
occurring at integer multiples of the fundamental frequency, f0.
Stage 1. Lowpass filtering
The first stage of the algorithm is to lowpass filter the speech, removing energy at frequencies above 1 kHz. This is done to eliminate the aperiodic component of voiced
fricatives[17], such as /z/. The signal can be aggressively downsampled after lowpass filtering, though the sampling rate should remain at least twice the maximum allowed value
of f0 . The lower sampling rate determines the rate at which the estimates of f 0 are updated,
but it does not limit the resolution of the estimates themselves. (In our formal evaluations
of the algorithm, we downsampled from 20 kHz to 4 kHz after lowpass filtering; in the
real-time multimedia applications, we downsampled from 44.1 kHz to 3675 Hz.)
Stage 2. Pointwise nonlinearity
The second stage of the algorithm is to pass the signal through a pointwise nonlinearity,
such as squaring or half-wave rectification (which clips negative samples to zero). The
Figure 1: Estimating the fundamental frequency f0 of voiced speech without FFTs or autocorrelation at the pitch period. The speech is lowpass filtered (and optionally downsampled)
to remove fricative noise, then transformed by a pointwise nonlinearity that concentrates
additional energy at f0 . The resulting signal is analyzed by a bank of bandpass filters that
are narrow enough to resolve the harmonic at f0 , but too wide to resolve higher-order harmonics. A resolved harmonic at f0 (essentially, a sinusoid) is detected by a running least
squares fit, and its frequency recovered as the pitch. If more that one sinusoid is detected
at the outputs of the filterbank, the one with the sharpest fit is used to estimate the pitch; if
no sinusoid is detected, the speech is labeled as unvoiced. (The two octave filterbank in the
figure is an idealization. In practice, a larger bank of narrower filters is used.)
purpose of the nonlinearity is to concentrate additional energy at the fundamental, particularly if such energy was missing or only weakly present in the original signal. In voiced
speech, pointwise nonlinearities such as squaring or half-wave rectification tend to create
energy at f0 by virtue of extracting a crude representation of the signal?s envelope. This
is particularly easy to see for the operation of squaring, which, applied to the sum of two sinusoids, creates energy at their sum and difference frequencies, the latter of which characterizes the envelope. In practice, we use half-wave rectification as the nonlinearity in this
stage of the algorithm; though less easily characterized than squaring, it has the advantage
of preserving the dynamic range of the original signal.
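The squaring argument can be checked numerically: squaring a sum of two harmonics creates a component at their difference frequency, where none existed before. The frequencies below are arbitrary.

```python
import math

N = 1000
f1, f2 = 0.21, 0.24   # two harmonics (cycles/sample); difference = 0.03
x = [math.cos(2 * math.pi * f1 * n) + math.cos(2 * math.pi * f2 * n)
     for n in range(N)]

def power_at(x, f):
    """Power of the component at frequency f (cycles/sample) via projection
    onto a cosine/sine pair (frequencies chosen with integer cycles in N)."""
    c = sum(xn * math.cos(2 * math.pi * f * n) for n, xn in enumerate(x)) * 2 / len(x)
    s = sum(xn * math.sin(2 * math.pi * f * n) for n, xn in enumerate(x)) * 2 / len(x)
    return c * c + s * s

x_sq = [v * v for v in x]   # pointwise nonlinearity (squaring)
print(round(power_at(x, 0.03), 3), round(power_at(x_sq, 0.03), 3))  # -> 0.0 1.0
```

Before squaring there is no energy at the difference frequency 0.03; after squaring, a unit-power component appears there, exactly as the trigonometric expansion predicts.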
Stage 3. Filterbank
The third stage of the algorithm is to analyze the transformed speech by a bank of bandpass
filters. These filters are designed to satisfy two competing criteria. On one hand, they are
sufficiently narrow to resolve the harmonic at f0 ; on the other hand, they are sufficiently
wide to integrate higher-order harmonics. An idealized two octave filterbank that meets
these criteria is shown in Fig. 1. The result of this analysis (for voiced speech) is that the
output of the filterbank consists either of sinusoids at f0 (and not any other frequency), or
signals that do not resemble sinusoids at all. Consider, for example, a segment of voiced
speech with fundamental frequency f0 = 180 Hz. For such speech, only the second filter
from 50-200 Hz will resolve the harmonic at 180 Hz. On the other hand, the first filter from
25-100 Hz will pass low frequency noise; the third filter from 100-400 Hz will pass the first
and second harmonics at 180 Hz and 360 Hz, and the fourth filter from 200-800 Hz will
pass the second through fourth harmonics at 360, 540, and 720 Hz. Thus, the output of the
filterbank will consist of a sinusoid at f0 and three other signals that are random or periodic,
but definitely not sinusoidal. In practice, we do not use the idealized two octave filterbank
shown in Fig. 1, but a larger bank of narrower filters that helps to avoid contaminating the
harmonic at f0 by energy at 2f0 . The bandpass filters in our experiments were 8th order
Chebyshev (type I) filters with 0.5 dB of ripple in 1.6 octave passbands, and signals were
doubly filtered to obtain sharp frequency cutoffs.
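The resolution criterion in the 180 Hz example can be stated as a one-line predicate over the idealized bands of Fig. 1: a band resolves the harmonic at f0 if it passes f0 but excludes the second harmonic at 2*f0.

```python
BANDS = [(25, 100), (50, 200), (100, 400), (200, 800)]  # idealized two-octave bands (Hz)

def resolves(band, f0):
    """A band resolves the harmonic at f0 if it passes f0 but excludes 2*f0."""
    lo, hi = band
    return lo <= f0 <= hi and 2 * f0 > hi

print([b for b in BANDS if resolves(b, 180)])  # -> [(50, 200)]
```

For f0 = 180 Hz only the 50-200 Hz band qualifies: the 100-400 Hz band also contains 180 Hz, but it passes the second harmonic at 360 Hz as well and so cannot isolate a sinusoid at f0.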
Stage 4. Sinusoid detection
The fourth stage of the algorithm is to detect sinusoids at the outputs of the filterbank.
Sinusoids are detected by the adaptive least squares fits described in section 2.1. Running
estimates of sinusoid frequencies and their uncertainties are obtained from eqs. (5?6) and
updated on a sample by sample basis for the output of each filter. If the uncertainty in any
filter's estimate is less than a specified threshold, then the corresponding sample is labeled as voiced, and the fundamental frequency f0 determined by whichever filter's estimate has the least uncertainty. (For sliding windows of length 40-60 ms, the thresholds typically fall in the range 0.08-0.12, with higher thresholds required for shorter windows.) Empirically,
we have found the uncertainty in eq. (6) to be a better criterion than the error function
itself for evaluating and comparing the least squares fits from different filters. A possible
explanation for this is that the expression in eq. (6) was derived by a dimensional analysis,
whereas the error functions of different filters are not even computed on the same signals.
Overall, the four stages of the algorithm are well suited to a real time implementation. The
algorithm can also be used for batch processing of waveforms, in which case startup and
ending transients can be minimized by zero-phase forward and reverse filtering.
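The stage-4 decision rule can be sketched as follows; the data layout and the threshold value are placeholders for whatever the per-filter estimators produce.

```python
def voicing_decision(estimates, threshold=0.1):
    """estimates: list of (f0_estimate_hz, uncertainty) pairs, one per filter.
    Returns (is_voiced, f0), where f0 comes from the sharpest fit below the
    threshold, or (False, None) if no filter detects a sinusoid."""
    valid = [(u, f0) for f0, u in estimates if u < threshold]
    if not valid:
        return False, None
    u, f0 = min(valid)   # least uncertainty (sharpest fit) wins
    return True, f0

print(voicing_decision([(95.0, 0.25), (180.0, 0.04), (178.0, 0.09)]))
# -> (True, 180.0)
```

Here two filters fall under the threshold, and the 180 Hz estimate is chosen because its uncertainty (0.04) is the smallest.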
3 Evaluation
The algorithm was evaluated on a small database of speech collected at the University of
Edinburgh[1]. The Edinburgh database contains about 5 minutes of speech consisting of
50 sentences read by one male speaker and one female speaker. The database also contains
reference f0 contours derived from simultaneously recorded laryngograph signals. The
sentences in the database are biased to contain difficult cases for f0 estimation, such as
voiced fricatives, nasals, liquids, and glides. The results of our algorithm on the first three
utterances of each speaker are shown in Fig. 2.
A formal evaluation was made by accumulating errors over all utterances in the database,
using the reference f0 contours as ground truth[1]. Comparisons between estimated and
reference f0 values were made every 6.4 ms, as in previous benchmarks. Also, in these
evaluations, the estimates of f0 from eqs. (4-5) were confined to the range 50-250 Hz for the male speaker and the range 120-400 Hz for the female speaker; this was done
for consistency with previous benchmarks, which enforced these limits. Note that our
estimated f0 contours were not postprocessed by a smoothing procedure, such as median
filtering or dynamic programming.
Error rates were computed for the fraction of unvoiced (or silent) speech misclassified as
voiced and for the fraction of voiced speech misclassified as unvoiced. Additionally, for the
fraction of speech correctly identified as voiced, a gross error rate was computed measuring
the percentage of comparisons for which the reference and estimated f0 differed by more
than 20%. Finally, for the fraction of speech correctly identified as voiced and in which
the estimated f0 , was not in gross error, a root mean square (rms) deviation was computed
between the reference and estimated f0 .
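The four statistics described above can be sketched for paired per-frame reference/estimate values; the convention of encoding unvoiced or silent frames as 0 Hz, and the function name, are mine.

```python
def pitch_errors(ref, est):
    """Per-frame scoring. ref/est are f0 values in Hz, with 0 meaning unvoiced
    or silent. Returns (% unvoiced misclassified as voiced, % voiced
    misclassified as unvoiced, % gross errors, rms deviation in Hz)."""
    pairs = list(zip(ref, est))
    n_uv = sum(1 for r, _ in pairs if r == 0)
    n_v = len(pairs) - n_uv
    uv_as_v = sum(1 for r, e in pairs if r == 0 and e > 0)
    v_as_uv = sum(1 for r, e in pairs if r > 0 and e == 0)
    # frames correctly identified as voiced by both reference and estimate
    both = [(r, e) for r, e in pairs if r > 0 and e > 0]
    # gross error: estimate differs from reference by more than 20%
    fine = [(r, e) for r, e in both if abs(e - r) <= 0.2 * r]
    gross_pct = 100.0 * (len(both) - len(fine)) / len(both) if both else 0.0
    rms = (sum((e - r) ** 2 for r, e in fine) / len(fine)) ** 0.5 if fine else 0.0
    return 100.0 * uv_as_v / n_uv, 100.0 * v_as_uv / n_v, gross_pct, rms
```

As in the benchmark protocol, the rms deviation is computed only over frames that are correctly voiced and not in gross error.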
The original study on this database published results for a number of approaches to pitch
tracking. Earlier results, as well as those derived from the algorithm in this paper, are
shown in Table 1. The overall results show our algorithm, indicated as the adaptive least squares (ALS) approach to pitch tracking, to be extremely competitive in all respects.
The only anomaly in these results is the slightly larger rms deviation produced by ALS
estimation compared to other approaches. The discrepancy could be an artifact of the
filtering operations in Fig. 1, resulting in a slight desychronization of the reference and
estimated f0 contours. On the other hand, the discrepancy could indicate that for certain
voiced sounds, a more robust estimation procedure[12] would yield better results than the
simple least squares fits in section 2.1.
[Figure 2 panels: reference and estimated pitch contours (Hz) versus time (sec) for the utterances "Where can I park my car?", "I'd like to leave this in your safe.", and "How much are my telephone charges?", one set per speaker.]
Figure 2: Reference and estimated f0 contours for the first three utterances of the male
(left) and female (right) speaker in the Edinburgh database[1]. Mismatches between the
contours reveal voiced and unvoiced errors.
4 Agents
We have implemented our pitch tracking algorithm as a real time front end for two interactive voice-driven agents. The first is a voice-to-MIDI player that synthesizes electronic
music from vocalized melodies[4]. Over one hundred electronic instruments are available.
The second (see the storyboard in Fig. 3) is a multimedia Karaoke machine with audiovisual feedback, voice-driven key selection, and performance scoring. In both applications, the user's pitch is displayed in real time, scrolling across the screen as he or she sings into the computer. In the Karaoke demo, the correct pitch is also simultaneously displayed,
providing an additional element of embarrassment when the singer misses a note. Both
applications run on a laptop with an external microphone.
Interestingly, the real time audiovisual feedback provided by these agents creates a profoundly different user experience than current systems in automatic speech recognition[14]. Unlike dictation programs or dialog managers, our more primitive agents (which only attend to pitch contours) are not designed to replace human operators, but to entertain and amuse in a way that humans cannot. The effect is to enhance the medium of voice, as
opposed to highlighting the gap between human and machine performance.
algorithm   unvoiced     voiced       gross errors        rms
            in error     in error     high       low      deviation
            (%)          (%)          (%)        (%)      (Hz)

Male speech:
CPD         18.11        19.89        4.09       0.64     3.60
FBPT         3.73        13.90        1.27       0.64     2.89
HPS         14.11         7.07        5.34      28.15     3.21
IPTA         9.78        17.45        1.40       0.83     3.37
PP           7.69        15.82        0.22       1.74     3.01
SRPD         4.05        15.78        0.62       2.01     2.46
eSRPD        4.63        12.07        0.90       0.56     1.74
ALS          4.20        11.00        0.05       0.20     3.24

Female speech:
CPD         31.53        22.22        0.61       3.97     7.61
FBPT         3.61        12.16        0.60       3.55     7.03
HPS         19.10        21.06        0.46       1.61     5.31
IPTA         5.70        15.93        0.53       3.12     5.35
PP           6.15        13.01        0.26       3.20     6.45
SRPD         2.35        12.16        0.39       5.56     5.51
eSRPD        2.73         9.13        0.43       0.23     5.13
ALS          4.92         5.58        0.33       0.04     6.91
Table 1: Evaluations of different pitch tracking algorithms on male speech (top) and
female speech (bottom). The algorithms in the table are cepstrum pitch determination
(CPD)[9], feature-based pitch tracking (FBPT)[11], harmonic product spectrum (HPS)
pitch determination[10, 15], parallel processing (PP) of multiple estimators in the time
domain[5], integrated pitch tracking (IPTA)[16], super resolution pitch determination
(SRPD)[8], enhanced SRPD (eSRPD)[1], and adaptive least squares (ALS) estimation, as
described in this paper. The benchmarks other than ALS were previously reported[1]. The
best results in each column are indicated in boldface.
Figure 3: Screen shots from the multimedia Karaoke machine with voice-driven key selection, audiovisual feedback, and performance scoring. From left to right: splash screen; singing "happy birthday"; machine evaluation.
5 Future work
Voice is the most natural and expressive medium of human communication. Tapping the
full potential of this medium remains a grand challenge for researchers in artificial intelligence (AI) and human-computer interaction. In most situations, a speaker?s intentions are
derived not only from the literal transcription of his speech, but also from prosodic cues,
such as pitch, stress, and rhythm. The real time processing of such cues thus represents a
fundamental challenge for autonomous, voice-driven agents. Indeed, a machine that could
learn from speech as naturally as a newborn infant?responding to prosodic cues but recognizing in fact no words?would constitute a genuine triumph of AI.
We are pursuing the ideas in this paper with this vision in mind, looking beyond the immediate applications to voice-to-midi synthesis and audiovisual Karaoke. The algorithm in
this paper was purposefully limited to clean speech from non-overlapping speakers. While
the algorithm works well in this domain, we view it mainly as a vehicle for experimenting with non-traditional methods that avoid FFTs and autocorrelation and that (ultimately)
might be applied to more complicated signals. We have two main goals for future work:
first, to add more sophisticated types of human-computer interaction to our voice-driven
agents, and second, to incorporate the novel elements of our pitch tracker into a more comprehensive front end for auditory scene analysis[2, 3]. The agents need to be sufficiently
complex to engage humans in extended interactions, as well as sufficiently robust to handle corrupted speech and overlapping speakers. From such agents, we expect interesting
possibilities to emerge.
References
[1] P. C. Bagshaw, S. M. Hiller, and M. A. Jack. Enhanced pitch tracking and the processing of f0 contours for computer aided intonation teaching. In Proceedings of the 3rd European Conference on Speech Communication and Technology, volume 2, pages 1003-1006, 1993.
[2] A. S. Bregman. Auditory scene analysis: the perceptual organization of sound. M.I.T. Press, Cambridge, MA, 1994.
[3] M. Cooke and D. P. W. Ellis. The auditory organization of speech and other sources in listeners and computational models. Speech Communication, 35:141-177, 2001.
[4] P. de la Cuadra, A. Master, and C. Sapp. Efficient pitch detection techniques for interactive music. In Proceedings of the 2001 International Computer Music Conference, La Habana, Cuba, September 2001.
[5] B. Gold and L. R. Rabiner. Parallel processing techniques for estimating pitch periods of speech in the time domain. Journal of the Acoustical Society of America, 46(2,2):442-448, August 1969.
[6] W. M. Hartmann. Pitch, periodicity, and auditory organization. Journal of the Acoustical Society of America, 100(6):3491-3502, 1996.
[7] W. Hess. Pitch Determination of Speech Signals: Algorithms and Devices. Springer, 1983.
[8] Y. Medan, E. Yair, and D. Chazan. Super resolution pitch determination of speech signals. IEEE Transactions on Signal Processing, 39(1):40-48, 1991.
[9] A. M. Noll. Cepstrum pitch determination. Journal of the Acoustical Society of America, 41(2):293-309, 1967.
[10] A. M. Noll. Pitch determination of human speech by the harmonic product spectrum, the harmonic sum spectrum, and a maximum likelihood estimate. In Proceedings of the Symposium on Computer Processing in Communication, pages 779-798, April 1969.
[11] M. S. Phillips. A feature-based time domain pitch tracker. Journal of the Acoustical Society of America, 79:S9-S10, 1985.
[12] J. G. Proakis, C. M. Rader, F. Ling, M. Moonen, I. K. Proudler, and C. L. Nikias. Algorithms for Statistical Signal Processing. Prentice Hall, 2002.
[13] L. R. Rabiner. On the use of autocorrelation analysis for pitch determination. IEEE Transactions on Acoustics, Speech, and Signal Processing, 25:22-33, 1977.
[14] L. R. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, 1993.
[15] M. R. Schroeder. Period histogram and product spectrum: new methods for fundamental frequency measurement. Journal of the Acoustical Society of America, 43(4):829-834, 1968.
[16] B. G. Secrest and G. R. Doddington. An integrated pitch tracking algorithm for speech systems. In Proceedings of the 1983 IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 1352-1355, Boston, 1983.
[17] K. Stevens. Acoustic Phonetics. M.I.T. Press, Cambridge, MA, 1999.
A Cost Function for Internal Representations
Anders Krogh
The Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen
Denmark
G. I. Thorbergsson
Nordita
Blegdamsvej 17
2100 Copenhagen
Denmark
John A. Hertz
Nordita
Blegdamsvej 17
2100 Copenhagen
Denmark
ABSTRACT
We introduce a cost function for learning in feed-forward neural
networks which is an explicit function of the internal representation in addition to the weights. The learning problem can then
be formulated as two simple perceptrons and a search for internal
representations. Back-propagation is recovered as a limit. The
frequency of successful solutions is better for this algorithm than
for back-propagation when weights and hidden units are updated
on the same timescale i.e. once every learning step.
1
INTRODUCTION
In their review of back-propagation in layered networks, Rumelhart et al. (1986)
describe the learning process in terms of finding good "internal representations" of
the input patterns on the hidden units. However, the search for these representations is an indirect one, since the variables which are adjusted in its course are the
connection weights, not the activations of the hidden units themselves when specific
input patterns are fed into the input layer. Rather, the internal representations are
represented implicitly in the connection weight values.
More recently, Grossman et al. (1988 and 1989)1 suggested a way in which the
search for internal representations could be made much more explicit. They proposed to make the activations of the hidden units for each of the input patterns
1. See also the paper by Grossman in this volume.
explicit variables to be adjusted iteratively (together with the weights) in the learning process. However, although they found that the algorithm they gave for making
these adjustments could be effective in some test problems, it is rather ad hoc and
it is difficult to see whether the algorithm will converge to a good solution.
If an optimization task is posed in terms of a cost function which is systematically
reduced as the algorithm runs, one is in a much better position to answer questions
like these. This is the motivation for this work, where we construct a cost function
which is an explicit function of the internal representations as well as the connection
weights. Learning is then a descent on the cost function surface, and variations in
the algorithm, corresponding to variations in the parameters of the cost function,
can be studied systematically. Both the conventional back-propagation algorithm
and that of Grossman et al. can be recovered in special limits of ours. It is easy to
change the algorithm to include constraints on the learning.
A method somewhat similar to ours has been proposed by Rohwer (1989)2. He considers networks with feedback but in this paper we study feed-forward networks. Le
Cun has also been working along the same lines, but in a quite different formulation
(Le Cun, 1987).
The learning problem for a two-layer perceptron is reduced to learning in two simple
perceptrons and the search for internal representations. This search can be carried
out by gradient descent of the cost function or by an iterative method.
2
THE COST FUNCTION
We work within the standard architecture, with three layers of units and two of
connections. Input pattern number $\mu$ is denoted $\xi_k^\mu$, the corresponding target pattern $\zeta_i^\mu$, and its internal representation $\sigma_j^\mu$. We use a convention in which i always labels output units, j labels hidden units, and k labels input units. Thus $W_{ij}$ is always a hidden-to-output weight and $W_{jk}$ an input-to-hidden connection weight. Then the actual activations of the hidden units when pattern $\mu$ is the input are

$S_j^\mu = g(h_j^\mu) = g\!\left(\sum_k W_{jk}\,\xi_k^\mu\right)$    (1)

and those of the output units, when given the internal representations $\sigma_j^\mu$ as inputs, are

$S_i^\mu = g(h_i^\mu) = g\!\left(\sum_j W_{ij}\,\sigma_j^\mu\right)$    (2)
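Equations (1)-(2) in code, with g = tanh and plain lists for the weight matrices (a minimal sketch, not the authors' implementation):

```python
import math

def forward(W_jk, W_ij, xi):
    """Eqs. (1)-(2): hidden activations from the input, then output activations
    from the internal representations (here taken as the hidden activations)."""
    h_hidden = [sum(w * x for w, x in zip(row, xi)) for row in W_jk]
    sigma = [math.tanh(h) for h in h_hidden]            # eq. (1)
    h_out = [sum(w * s for w, s in zip(row, sigma)) for row in W_ij]
    return sigma, [math.tanh(h) for h in h_out]         # eq. (2)

sigma, out = forward([[0.5, -0.5], [1.0, 1.0]], [[1.0, -1.0]], [1.0, -1.0])
print([round(s, 3) for s in sigma], [round(o, 3) for o in out])
# -> [0.762, 0.0] [0.642]
```

In the algorithm below, the sigma fed into the second layer will be treated as free variables rather than simply the feed-forward values computed here.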
where g(h) is the activation function, which we take to be tanh h.
The cost function has two terms, one of which describes simple delta-rule learning
(Rumelhart et al., 1986) of the internal representations from the inputs by the first
layer of connections, and the other of which describes the same kind of learning of the target patterns from the internal representations in the second layer of connections.

2. See also the paper by Rohwer in this volume.
We use the "entropic" form for these terms:
$E = \sum_{i\mu} \tfrac{1}{2}\left[(1+\zeta_i^\mu)\ln\frac{1+\zeta_i^\mu}{1+S_i^\mu} + (1-\zeta_i^\mu)\ln\frac{1-\zeta_i^\mu}{1-S_i^\mu}\right] + T \sum_{j\mu} \tfrac{1}{2}\left[(1+\sigma_j^\mu)\ln\frac{1+\sigma_j^\mu}{1+S_j^\mu} + (1-\sigma_j^\mu)\ln\frac{1-\sigma_j^\mu}{1-S_j^\mu}\right]$    (3)
This form of the cost function has been shown to reduce the learning time (Solla
et al., 1988). We allow different relative weights for the two terms through the
parameter T. This cost function should now be minimized with respect to the two
sets of connection weights $W_{ij}$ and $W_{jk}$ and the internal representations $\sigma_j^\mu$.
The resulting gradient descent learning equations for the connection weights are
simply those of simple one-layer perceptrons:
$\frac{\partial W_{ij}}{\partial t} \propto -\frac{\partial E}{\partial W_{ij}} = \sum_\mu (\zeta_i^\mu - S_i^\mu)\,\sigma_j^\mu \equiv \sum_\mu \delta_i^\mu \sigma_j^\mu$    (4)

$\frac{\partial W_{jk}}{\partial t} \propto -\frac{\partial E}{\partial W_{jk}} = T \sum_\mu (\sigma_j^\mu - S_j^\mu)\,\xi_k^\mu = T \sum_\mu \delta_j^\mu \xi_k^\mu$    (5)
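One gradient step of eqs. (4)-(5), treating the internal representations sigma as given; the learning rate eta and the data layout are illustrative.

```python
import math

def update_weights(W_ij, W_jk, patterns, eta=0.1, T=1.0):
    """One pass of eqs. (4)-(5): delta-rule updates in each layer, with the
    internal representations treated as given. patterns is a list of
    (xi, sigma, zeta) triples: input, internal representation, target."""
    for xi, sigma, zeta in patterns:
        S_out = [math.tanh(sum(w * s for w, s in zip(row, sigma))) for row in W_ij]
        S_hid = [math.tanh(sum(w * x for w, x in zip(row, xi))) for row in W_jk]
        for i, row in enumerate(W_ij):                 # eq. (4)
            for j in range(len(row)):
                row[j] += eta * (zeta[i] - S_out[i]) * sigma[j]
        for j, row in enumerate(W_jk):                 # eq. (5)
            for k in range(len(row)):
                row[k] += eta * T * (sigma[j] - S_hid[j]) * xi[k]
    return W_ij, W_jk
```

Each layer sees a simple perceptron problem: the output layer learns the targets from the sigmas, and the hidden layer learns the sigmas from the inputs.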
The new element is the corresponding equation for the adjustment of the internal
representations:
$\frac{\partial \sigma_j^\mu}{\partial t} \propto -\frac{\partial E}{\partial \sigma_j^\mu} = \sum_i \delta_i^\mu W_{ij} + T h_j^\mu - T \tanh^{-1}\!\sigma_j^\mu$    (6)
The stationary values of the internal representations thus solve

$\sigma_j^\mu = \tanh\!\left(h_j^\mu + T^{-1} \sum_i \delta_i^\mu W_{ij}\right)$    (7)

which has a simple interpretation: the internal representation variables $\sigma_j^\mu$ are like conventional units except that in addition to the field fed forward into them from the input layer they also feel the back-propagated error field $b_j^\mu = \sum_i \delta_i^\mu W_{ij}$. The parameter T regulates the relative weights of these terms.
Instead of doing gradient descent we have iterated equation (7) to find the internal
representations.
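A minimal sketch of this iteration for a single pattern, with the weights held fixed; the iteration count is arbitrary and convergence is not guaranteed in general.

```python
import math

def relax_sigma(W_ij, h_hidden, zeta, T=1.0, n_iter=200):
    """Fixed-point iteration of eq. (7) for one pattern, weights held fixed:
    sigma_j = tanh(h_j + (1/T) * sum_i delta_i * W_ij), where
    delta_i = zeta_i - tanh(sum_j W_ij * sigma_j)."""
    sigma = [math.tanh(h) for h in h_hidden]        # start from eq. (1)
    for _ in range(n_iter):
        S_out = [math.tanh(sum(w * s for w, s in zip(row, sigma))) for row in W_ij]
        delta = [z - s for z, s in zip(zeta, S_out)]
        sigma = [math.tanh(h_hidden[j] +
                           sum(delta[i] * W_ij[i][j] for i in range(len(W_ij))) / T)
                 for j in range(len(sigma))]
    return sigma

sigma = relax_sigma([[1.0, -0.5]], [0.2, -0.1], [1.0])
print([round(s, 3) for s in sigma])
```

At a fixed point, each sigma balances the feed-forward field against the back-propagated error field, as eq. (7) requires.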
One of the advantages of formulating the learning problem in terms of a cost function
is that it is easy to implement constraints on the learning. Suppose we want to
prevent the network from forming the same internal representations for different
output patterns. We can then add the term
$E_\gamma = \frac{\gamma}{2} \sum_{ij} \sum_{\mu\nu} \zeta_i^\mu \zeta_i^\nu \sigma_j^\mu \sigma_j^\nu$    (8)
to the energy. We may also want to suppress internal representations where the
units have identical values. This may be seen as an attempt to produce efficient
representations. The term
(9)
is then added to the energy. The parameters $\gamma$ and $\gamma'$ can be tuned to get the best
performance. With these new terms equation (7) for the internal representations
becomes
The only change in the algorithm is that this equation is iterated rather than (7).
These terms lead to better performance in some problems. The benefit of including
such terms is very problem-dependent. We include in our results an example where
these terms are useful.
3
SIMPLE LIMITS
It is simple to recover ordinary back-propagation in this model. It is the limit where $T \gg 1$: Expanding (7) we obtain

$\sigma_j^\mu = S_j^\mu + T^{-1} \sum_i \delta_i^\mu W_{ij}\,(1 - \tanh^2 h_j^\mu)$    (11)
Keeping only the lowest-order surviving terms, the learning equations for the connection weights then reduce to

$\frac{\partial W_{ij}}{\partial t} \propto \sum_\mu \delta_i^\mu S_j^\mu$    (12)

and

$\frac{\partial W_{jk}}{\partial t} \propto \sum_\mu \sum_i \delta_i^\mu W_{ij}\,(1 - \tanh^2 h_j^\mu)\,\xi_k^\mu$    (13)
which are just the standard back-propagation equations (with an entropic cost
function).
Now consider the opposite limit, $T \ll 1$. Then the second term dominates in (7):

$\sigma_j^\mu \approx \tanh\!\left(T^{-1} \sum_i \delta_i^\mu W_{ij}\right)$    (14)
A similar algorithm to the one of Grossman et al. is then to train the input-to-hidden connection weights with these $\sigma_j^\mu$ as targets while training the hidden-to-output weights with the $\sigma_j^\mu$ obtained in the other limit (7) as inputs. That is, one alternates between high and low T according to which layer of weights one is adjusting.
4
RESULTS
There are many ways to do the optimization in practice. To be able to make a
comparison with back-propagation, we have made simulations that, at high T, are
essentially the same as back-propagation (in terms of weight adjustment).
In one set of simulations we have kept the internal representations, $\sigma_j^\mu$, optimal with the given set of connections. This means that after one step of weight changes we have relaxed the $\sigma$'s. One can think of the $\sigma$'s as fast-varying and the weights as slowly-varying. In the $T \gg 1$ limit we can use these simulations to get a comparison with back-propagation as described in the previous section.

In our second set of simulations we iterate the equation for the $\sigma$'s only once after one step of weight updating. All variables are then updated on the same timescale. This turns out to increase the success rate for learning considerably compared to the back-propagation limit. The $\sigma$'s are updated in random order such that each one is updated once on the average.
The learning rate, momentum, etc. have been chosen optimally for the back-propagation limit (large T) and kept fixed at these values for other values of T (though
no systematic optimization of parameters has been done).
We have tested the algorithm on the parity and encoding problems for T = 1 and T = 10 (the back-propagation limit). Each problem was run 100 times and the average error and success rate were measured and plotted as functions of learning steps (time). One learning step corresponds to one updating of the weights.
For the parity problem (and other similar tasks) the learning did not converge for T lower than about 3. When the weights are small we can expand the tanh on the output in equation (7),

    σ_j^μ ≈ tanh( h_j^μ + T⁻¹ Σ_i W_ij [ ζ_i^μ − Σ_{j'} W_{ij'} σ_{j'}^μ ] ),        (15)

so σ_j^μ sits in a spin-glass-like "local field" except for the connection to itself. When the algorithm is started with small random weights this self-coupling (Σ_i (W_ij)²) is dominant. Forcing the self-coupling to be small at low weights and gradually increasing it to full strength when the units saturate improves the performance a lot.
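To make the relaxation concrete, here is a minimal numeric sketch of iterating the update in equation (15) to a fixed point. The network sizes, weight scale, and the random pattern are invented for the demo; this is an illustration, not the paper's full training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden, n_out = 4, 4
W = rng.normal(scale=0.1, size=(n_out, n_hidden))   # hidden-to-output weights (small start)
h = rng.normal(scale=0.5, size=n_hidden)            # field fed forward from the inputs
zeta = rng.choice([-1.0, 1.0], size=n_out)          # targets for one pattern
T = 10.0

sigma = np.tanh(h)                                  # start from the feed-forward value
for _ in range(200):
    # Each sigma sits in a "local field": the feed-forward part plus the
    # output errors fed back through the hidden-to-output weights, damped by 1/T.
    sigma = np.tanh(h + (1.0 / T) * W.T @ (zeta - W @ sigma))
```

With small weights and large T the map is a strong contraction, so the iteration settles quickly.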
For larger networks the self-coupling does not seem to be a problem.
The specific test problems were:
Parity with 4 input units and 4 hidden units and all the 16 patterns in the training
set. We stop the runs after 300 sweeps of the training set. For T = 1 the self-coupling is suppressed.
Encoding with 8 input, 3 hidden and 8 output units and 8 patterns to learn (same
input as output). The 8 patterns have -1 at all units but one. We stop the
runs after 500 sweeps of the training set.
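The two training sets can be generated in a few lines; the ±1 coding and the convention that the parity target is the product of the inputs are our assumptions, not spelled out in the text:

```python
import numpy as np
from itertools import product

# 4-bit parity: all 16 patterns over {-1, +1}^4; with +/-1 coding the
# parity target is simply the product of the four inputs.
X_parity = np.array(list(product([-1.0, 1.0], repeat=4)))
y_parity = np.prod(X_parity, axis=1)

# 8-3-8 encoding: 8 patterns, each -1 at all units but one; input equals target.
X_enc = -np.ones((8, 8))
np.fill_diagonal(X_enc, 1.0)
```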
Krogh, Thorbergsson and Hertz
Both problems were run with fast-varying σ's and with all variables updated on
the same timescale. We determined the average learning time of the successful runs
and the percentage of the 100 trials that were successful. The success criterion was
that the sign of the output was correct. The learning times and success rates are
shown in table 1.
Table 1: Learning Times and Success Rates

                               Learning times         Success rate
                               T=1         T=10       T=1     T=10
  Fast-varying σ's   Parity    130±10      97±6       30%     48%
                     Encoding  167±10      88±4       95%     98%
  Slow-varying σ's   Parity    146±10      121±6      36%     99%
                     Encoding  145±8       64±2       57%     100%
In figure 1 we plot the average error as a function of learning steps and the success
rate for each set of runs.
It can seem a disadvantage of this method that it is necessary to store the values of the σ's between learning sweeps. We have therefore tried to start the iteration of equation (7) with the feed-forward value σ_j^μ = tanh(Σ_k w_jk ξ_k^μ) on the right hand side. This does not affect the performance much.
We have investigated the effect of including the terms (8) and (9) in the energy.
For the same parity problem as above we get an improved success rate in the high
T limit.
5 CONCLUSION
The most striking result is the improvement in the success rate when all variables,
weights and hidden units, are updated once every learning step. This is in contrast
to back-propagation, where the values of the hidden units are completely determined by the weights and inputs. In our formulation this corresponds to relaxing the hidden units fully in every learning cycle and having the parameter T ≫ 1.
There is then an advantage in considering the hidden units as additional variables
during the learning phase whose values are not completely determined by the field
fed forward to them from the inputs.
The results indicate that the performance of the algorithm is best in the high T
limit.
For the parity problem the performance of the algorithm presented here is similar
to that of the back-propagation algorithm measured in learning time. The real
advantage is the higher frequency of successful solutions. For the encoding problem
the algorithm is faster than back-propagation but the success rate is similar (≈ 100%). The algorithm should also be comparable to back-propagation in CPU time
Figure 1: (A) The left plot shows the error as a function of learning time for the 4-parity problem for those runs that converged within 300 learning steps. The curves are: T = 10 and slow sigmas (———), T = 10 and fast sigmas (–·–·–), T = 1 and slow sigmas (– – –), and T = 1 and fast sigmas (·······). The right plot is the percentage of converged runs as a function of learning time. (B) The same as above but for the encoding problem.
in the limit where all variables are updated on the same timescale (once every
learning sweep).
Because the computational complexity is shifted from the calculation of new weights
to the determination of internal representations, it might be easier to implement
this method in hardware than back-propagation is. It is possible to use the method
without saving the array of internal representations by using the field fed forward from the inputs to generate an internal representation that then becomes a starting point for iterating the equation for the σ's.
The method can easily be generalized to networks with feedback (as in [Rohwer,
1989]) and it would be interesting to see how it compares to other algorithms for
recurrent networks. There are many other directions in which one can continue
this work. One is to try another cost function. Another is to use binary units and
perceptron learning.
References

Le Cun, Y (1987). Modèles Connexionnistes de l'Apprentissage. Thesis, Paris.

Grossman, T, R Meir and E Domany (1988). Learning by Choice of Internal Representations. Complex Systems 2, 555.

Grossman, T (1989). The CHIR Algorithm: A Generalization for Multiple Output and Multilayered Networks. Preprint, submitted to Complex Systems.

Rohwer, R (1989). The "Moving Targets" Training Method. Preprint, Edinburgh.

Rumelhart, D E, G E Hinton and R J Williams (1986). Chapter 8 in Parallel Distributed Processing, vol 1 (D E Rumelhart and J L McClelland, eds), MIT Press.

Solla, S A, E Levin and M Fleisher (1988). Accelerated Learning in Layered Neural Networks. Complex Systems 2, 625.
PART IX:
HARDWARE IMPLEMENTATION
| 229 |@word trial:1 bf:1 simulation:4 tried:1 tuned:1 ours:2 recovered:2 activation:4 john:1 j1:3 plot:3 stationary:1 sits:1 along:1 introduce:1 themselves:1 actual:1 cpu:1 considering:1 increasing:1 becomes:2 lowest:1 modeles:1 kind:1 finding:1 every:4 ro:1 unit:22 local:1 limit:13 encoding:6 might:1 studied:1 relaxing:1 practice:1 implement:2 get:3 hlj:1 layered:2 conventional:2 williams:1 starting:1 rule:1 array:1 variation:2 updated:7 feel:1 target:4 suppose:1 element:1 rumelhart:4 updating:2 preprint:2 fleisher:1 cycle:5 chir:1 solla:1 goo:1 complexity:1 wjj:1 completely:2 easily:1 indirect:1 represented:1 chapter:1 train:1 fast:5 describe:1 effective:1 quite:1 whose:1 posed:1 solve:1 larger:1 timescale:4 think:1 itself:1 hoc:1 advantage:3 wjk:3 produce:1 coupling:4 recurrent:1 measured:2 ij:10 krogh:5 indicate:1 convention:1 direction:1 correct:1 generalization:1 adjusted:2 entropic:2 niels:1 label:3 tanh:5 mit:1 always:2 rather:3 hj:1 varying:5 improvement:1 contrast:1 tooutput:1 glass:1 dependent:1 anders:1 a0:1 hidden:15 expand:1 wij:5 i1:1 oblem:1 denoted:1 special:1 field:5 once:5 construct:1 having:1 saving:1 identical:1 minimized:1 phase:1 attempt:1 bohr:1 necessary:1 plotted:1 disadvantage:1 ordinary:1 cost:18 successful:4 levin:1 optimally:1 answer:1 considerably:1 systematic:1 together:1 ilj:1 thesis:1 slowly:1 ek:1 grossman:6 li:1 de:1 ad:1 vi:4 try:1 lot:1 doing:1 start:1 hf:3 recover:1 parallel:1 spin:1 iterated:2 converged:2 submitted:1 ed:1 rohwer:4 energy:3 frequency:2 propagated:1 stop:2 adjusting:1 improves:1 back:18 feed:2 higher:1 improved:1 formulation:2 done:1 though:1 just:1 working:1 hand:1 ei:1 o:1 propagation:17 effect:1 iteratively:1 during:1 self:4 criterion:1 generalized:1 recently:1 regulates:1 volume:2 he:1 interpretation:1 moving:1 surface:1 etc:1 add:1 dominant:1 forcing:1 store:1 binary:1 success:9 continue:1 seen:1 additional:1 somewhat:1 relaxed:1 converge:2 full:1 multiple:1 faster:1 determination:1 calculation:1 
essentially:1 iteration:1 addition:2 want:2 seem:2 surviving:1 easy:2 iterate:1 affect:1 gave:1 architecture:1 opposite:1 reduce:2 domany:1 whether:1 jj:1 useful:1 iterating:1 hardware:2 ken:1 mcclelland:1 reduced:2 generate:1 meir:1 percentage:2 shifted:1 sign:1 delta:1 nordita:2 vol:1 prevent:1 kept:2 run:9 succes:1 striking:1 comparable:1 layer:8 wjle:1 strength:1 constraint:2 u1:2 uf:3 according:1 alternate:1 hertz:5 describes:2 suppressed:1 wi:3 cun:3 making:1 s1:1 gradually:1 pr:1 equation:11 turn:1 fed:4 include:2 sweep:4 question:1 added:1 gradient:3 blegdamsvej:3 considers:1 denmark:3 difficult:1 sigma:4 suppress:1 implementation:1 descent:4 hinton:1 rn:1 copenhagen:3 paris:1 connection:13 able:1 suggested:1 pattern:11 including:2 started:1 carried:1 review:1 relative:2 fully:1 interesting:1 apprentissage:1 systematically:2 course:1 parity:8 keeping:1 side:1 allow:1 perceptron:2 institute:1 benefit:1 edinburgh:1 feedback:2 curve:1 distributed:1 forward:5 made:2 implicitly:1 search:5 iterative:1 table:2 learn:1 expanding:1 investigated:1 complex:3 did:1 multilayered:1 motivation:1 tl:2 slow:3 position:1 momentum:1 explicit:4 tohidden:1 sf:4 ix:1 saturate:1 specific:2 dominates:1 easier:1 soha:1 simply:1 forming:1 adjustment:3 thorbergsson:5 corresponds:2 formulated:1 jf:2 change:3 determined:3 except:2 perceptrons:3 internal:29 accelerated:1 tested:1 ex:3 |
1,418 | 2,290 | Intrinsic Dimension Estimation Using Packing
Numbers
Bal?azs K?egl
Department of Computer Science and Operations Research
University of Montreal
CP 6128 succ. Centre-Ville, Montr?eal, Canada H3C 3J7
[email protected]
Abstract
We propose a new algorithm to estimate the intrinsic dimension of data
sets. The method is based on geometric properties of the data and requires neither parametric assumptions on the data generating model nor
input parameters to set. The method is compared to a similar, widelyused algorithm from the same family of geometric techniques. Experiments show that our method is more robust in terms of the data generating
distribution and more reliable in the presence of noise.
1 Introduction
High-dimensional data sets have several unfortunate properties that make them hard to analyze. The phenomenon that the computational and statistical efficiency of statistical techniques degrades rapidly with the dimension is often referred to as the "curse of dimensionality". One particular characteristic of high-dimensional spaces is that as the volumes of
constant diameter neighborhoods become large, exponentially many points are needed for
reliable density estimation. Another important problem is that as the data dimension grows,
sophisticated data structures constructed to speed up nearest neighbor searches rapidly become inefficient.
Fortunately, most meaningful, real life data do not uniformly fill the spaces in which
they are represented. Rather, the data distributions are observed to concentrate to nonlinear manifolds of low intrinsic dimension. Several methods have been developed to find
low-dimensional representations of high-dimensional data, including Principal Component
Analysis (PCA), Self-Organizing Maps (SOM) [1], Multidimensional Scaling (MDS) [2],
and, more recently, Local Linear Embedding (LLE) [3] and the ISOMAP algorithm [4].
Although most of these algorithms require that the intrinsic dimension of the manifold be
explicitly set, there has been little effort devoted to design and analyze techniques that
estimate the intrinsic dimension of data in this context.
There are two principal areas where a good estimate of the intrinsic dimension can be
useful. First, as mentioned before, the estimate can be used to set input parameters of
dimension reduction algorithms. Certain methods (e.g., LLE and the ISOMAP algorithm)
also require a scale parameter that determines the size of the local neighborhoods used in
the algorithms. In this case, it is useful if the dimension estimate is provided as a function
of the scale (see Figure 1 for an intuitive example where the intrinsic dimension of the data
depends on the resolution). Nearest neighbor searching algorithms can also profit from
a good dimension estimate. The complexity of search data structures (e.g., kd-trees and R-trees) increases exponentially with the dimension, and these methods become inefficient if the dimension is more than about 20. Nevertheless, it was shown by Chávez et al. [5] that the complexity increases with the intrinsic dimension of the data rather than with the
dimension of the embedding space.
Figure 1: Intrinsic dimension D at different resolutions. (a) At very small scale the data looks zero-dimensional (D ≈ 0). (b) If the scale is comparable to the noise level, the intrinsic dimension seems larger than expected (D ≈ 2). (c) The "right" scale in terms of noise and curvature (D ≈ 1). (d) At very large scale the global dimension dominates (D ≈ 2).
In this paper we present a novel method for intrinsic dimension estimation. The estimate is
based on geometric properties of the data, and requires no parameters to set. Experimental
results on both artificial and real data show that the algorithm is able to capture the scale
dependence of the intrinsic dimension. The main advantage of the method over existing
techniques is its robustness in terms of the generating distribution. The paper is organized
as follows. In Section 2 we introduce the field of intrinsic dimension estimation, and give
a short overview of existing approaches. The proposed algorithm is described in Section 3.
Experimental results are given in Section 4.
2 Intrinsic dimension estimation
Informally, the intrinsic dimension of a random vector X is usually defined as the number of
"independent" parameters needed to represent X. Although in practice this informal notion
seems to have a well-defined meaning, formally it is ambiguous due to the existence of
space-filling curves. So, instead of this informal notion, we turn to the classical concept of
topological dimension, and define the intrinsic dimension of X as the topological dimension
of the support of the distribution of X . For the definition, we need to introduce some
notions. Given a topological space X , the covering of a subset S is a collection C of open
subsets in X whose union contains S . A refinement of a covering C of S is another covering
C 0 such that each set in C 0 is contained in some set in C . The following definition is based
on the observation that a d-dimensional set can be covered by open balls such that each
point belongs to maximum (d + 1) open balls.
Definition 1 A subset S of a topological space X has topological dimension D top (also
known as Lebesgue covering dimension) if every covering C of S has a refinement C 0 in
which every point of S belongs to at most (Dtop + 1) sets in C 0 , and Dtop is the smallest such
integer.
The main technical difficulty with the topological dimension is that it is computationally
difficult to estimate on a finite sample. Hence, practical methods use various other definitions of the intrinsic dimension. It is common to categorize intrinsic dimension estimating
methods into two classes, projection techniques and geometric approaches.
Projection techniques explicitly construct a mapping, and usually measure the dimension by using some variants of principal component analysis. Indeed, given a set Sn = {X1, ..., Xn}, Xi ∈ X, i = 1, ..., n of data points drawn independently from the distribution of X, probably the most obvious way to estimate the intrinsic dimension is by looking at the eigenstructure of the covariance matrix C of Sn. In this approach, D̂pca is defined as the number of eigenvalues of C that are larger than a given threshold. The first disadvantage of the technique is the requirement of a threshold parameter that determines which eigenvalues are to discard. In addition, if the manifold is highly nonlinear, D̂pca will characterize the global (intrinsic) dimension of the data rather than the local dimension of the manifold. D̂pca will always overestimate Dtop; the difference depends on the level of nonlinearity of the manifold. Finally, D̂pca can only be used if the covariance matrix of Sn can be calculated (e.g., when X = R^d). Although in Section 4 we will only consider Euclidean data sets, there are certain applications where only a distance metric d : X × X → R⁺ ∪ {0} and the matrix of pairwise distances D = [d_ij] = d(xi, xj) are given.
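A minimal sketch of the D̂pca estimate described above; the function name, the concrete threshold value, and the plane-in-R^5 test data are our choices, not from the paper:

```python
import numpy as np

def pca_dim(X, threshold):
    """hat-D_pca: the number of covariance eigenvalues above a threshold."""
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return int(np.sum(eig > threshold))

rng = np.random.default_rng(0)
# Made-up test case: a 2-dimensional plane embedded in R^5 plus a little noise.
Z = rng.normal(size=(1000, 2))
A = rng.normal(size=(2, 5))
X = Z @ A + 0.01 * rng.normal(size=(1000, 5))
est = pca_dim(X, 0.05)   # two eigenvalues dominate, the rest are noise-sized
```

Note how the result hinges on the threshold: set it below the noise eigenvalues and the estimate jumps to the embedding dimension, which is exactly the weakness discussed in the text.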
Bruske and Sommer [6] present an approach to circumvent the second problem. Instead
of doing PCA on the original data, they first cluster the data, then construct an optimally
topology preserving map (OPTM) on the cluster centers, and finally, carry out PCA locally
on the OPTM nodes. The advantages of the method are that it works well on non-linear
data, and that it can produce dimension estimates at different resolutions. At the same time,
the threshold parameter must still be set as in PCA, moreover, other parameters, such as
the number of OPTM nodes, must also be decided by the user. The technique is similar
in spirit to the way the dimension parameter of LLE is set in [3]. The algorithm runs in
O(n²d) time (where n is the number of points and d is the embedding dimension), which is slightly worse than the O(nd·D̂pca) complexity of the fast PCA algorithm of Roweis [7] when computing D̂pca.
Another general scheme in the family of projection techniques is to turn the dimensionality
reduction algorithm from an embedding technique into a probabilistic, generative model
[8], and optimize the dimension as any other parameter by using cross-validation in a maximum likelihood setting. The main disadvantage of this approach is that the dimension
estimate depends on the generative model and the particular algorithm, so if the model
does not fit the data or if the algorithm does not work well on the particular problem, the
estimate can be invalid.
The second basic approach to intrinsic dimension estimation is based on geometric properties of the data rather then projection techniques. Methods from this family usually require
neither any explicit assumption on the underlying data model, nor input parameters to set.
Most of the geometric methods use the correlation dimension from the family of fractal
dimensions due to the computational simplicity of its estimation. The formal definition is
based on the observation that in a D-dimensional set the number of pairs of points closer to
each other than r is proportional to r D .
Definition 2 Given a finite set Sn = {x1, ..., xn} of a metric space X, let

    Cn(r) = (2 / (n(n−1))) Σ_{i=1}^{n} Σ_{j=i+1}^{n} I{‖xi − xj‖ < r}

where I_A is the indicator function of the event A. For a countable set S = {x1, x2, ...} ⊆ X, the correlation integral is defined as C(r) = lim_{n→∞} Cn(r). If the limit exists, the correlation dimension of S is defined as

    Dcorr = lim_{r→0} log C(r) / log r.
For a finite sample, the zero limit cannot be achieved so the estimation procedure usually consists of plotting log C(r) versus log r and measuring the slope ∂ log C(r) / ∂ log r of the linear part of the curve [9, 10, 11]. To formalize this intuitive procedure, we present the following definition.
Definition 3 The scale-dependent correlation dimension of a finite set Sn = {x1, ..., xn} is

    D̂corr(r1, r2) = (log C(r2) − log C(r1)) / (log r2 − log r1).
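Definition 3 translates directly into code. The sketch below (function name and the brute-force pairwise-distance computation are ours) estimates D̂corr for Euclidean data:

```python
import numpy as np

def corr_dim(X, r1, r2):
    """Scale-dependent correlation dimension estimate of Definition 3.

    X is an (n, d) array of points; r1 < r2 are the two resolutions.
    """
    n = len(X)
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    pair_dist = dist[np.triu_indices(n, k=1)]   # the n(n-1)/2 unordered pairs
    C = lambda r: np.mean(pair_dist < r)        # C_n(r) of Definition 2
    return (np.log(C(r2)) - np.log(C(r1))) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(0)
X = rng.uniform(size=(1500, 2))                 # uniform unit square
est = corr_dim(X, 0.05, 0.2)                    # should come out close to 2
```

On a uniform square the slope is slightly below 2 because of boundary effects; on a strongly non-uniform sample it drops much further, which is the failure mode the paper targets.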
It is known that Dcorr ≤ Dtop and that Dcorr approximates Dtop well if the data distribution on the manifold is nearly uniform. However, using a non-uniform distribution on the same manifold, the correlation dimension can severely underestimate the topological dimension.
To overcome this problem, we turn to the capacity dimension, which is another member of
the fractal dimension family. For the formal definition, we need to introduce some more
concepts. Given a metric space X with distance metric d(·, ·), the r-covering number N(r) of a set S ⊆ X is the minimum number of open balls B(x0, r) = {x ∈ X | d(x0, x) < r} whose union is a covering of S. The following definition is based on the observation that the covering number N(r) of a D-dimensional set is proportional to r^(−D).
Definition 4 The capacity dimension of a subset S of a metric space X is
    Dcap = − lim_{r→0} log N(r) / log r.
The principal advantage of Dcap over Dcorr is that Dcap does not depend on the data distribution on the manifold. Moreover, if both Dcap and Dtop exist (which is certainly the case
in machine learning applications), it is known that the two dimensions agree. In spite of
that, Dcap is usually discarded in practical approaches due to the high computational cost
of its estimation. The main contribution of this paper is an efficient intrinsic dimension
estimating method that is based on the capacity dimension. Experiments on both synthetic
and real data confirm that our method is much more robust in terms of the data distribution
than methods based on the correlation dimension.
3 Algorithm
Finding the covering number even of a finite set of data points is computationally difficult.
To tackle this problem, first we redefine Dcap by using packing numbers rather than covering numbers. Given a metric space X with distance metric d(·, ·), a set V ⊆ X is said to be r-separated if d(x, y) ≥ r for all distinct x, y ∈ V. The r-packing number M(r) of a set S ⊆ X is defined as the maximum cardinality of an r-separated subset of S. The following proposition follows from the basic inequality between packing and covering numbers N(r) ≤ M(r) ≤ N(r/2).
Proposition 1  Dcap = − lim_{r→0} log M(r) / log r.
For a finite sample, the zero limit cannot be achieved so, similarly to the correlation dimension, we need to redefine the capacity dimension in a scale-dependent manner.
Definition 5 The scale-dependent capacity dimension of a finite set Sn = {x1, ..., xn} is

    D̂cap(r1, r2) = −(log M(r2) − log M(r1)) / (log r2 − log r1).
Finding M(r) for a data set Sn = {x1, ..., xn} is equivalent to finding the cardinality of a maximum independent vertex set MI(Gr) of the graph Gr(V, E) with vertex set V = Sn and edge set E = {(xi, xj) | d(xi, xj) < r}. This problem is known to be NP-hard. There are results that show that for a general graph, even the approximation of MI(G) within a factor of n^(1−ε), for any ε > 0, is NP-hard [12]. On the positive side, it was shown that for such geometric graphs as Gr, MI(G) can be approximated arbitrarily well by polynomial time algorithms [13]. However, approximating algorithms of this kind scale exponentially with the data dimension both in terms of the quality of the approximation and the running time,¹
so they are of little practical use for d > 2. Hence, instead of using one of these algorithms,
we apply the following greedy approximation technique. Given a data set S n , we start with
an empty set of centers C, and in an iteration over Sn we add to C data points that are at a distance of at least r from all the centers in C (lines 4 to 10 in Figure 2). The estimate M̂(r) is the cardinality of C after every point in Sn has been visited.
The procedure is designed to produce an r-packing but certainly underestimates the packing
number of the manifold, first, because we are using a finite sample, and second, because in
general M̂(r) < M(r). Nevertheless, we can still obtain a good estimate for D̂cap by using M̂(r) in the place of M(r) in Definition 5. To see why, observe that, for a good estimate for D̂cap, it is enough if we can estimate M(r) with a constant multiplicative bias independent of r. Although we have no formal proof that the bias of M̂(r) does not change with r, the simple greedy procedure described above seems to work well in practice.
Even though the bias of M̂(r) does not affect the estimation of D̂cap as long as it does not change with r, the variance of M̂(r) can distort the dimension estimate. The main source of the variance is the dependence of M̂(r) on the order in which the data points are visited. To eliminate this variance, we repeat the procedure several times on random permutations of the data, and compute the estimate D̂pack by using the average of the logarithms of the packing numbers. The number of repetitions depends on r1, r2, and a preset parameter that determines the accuracy of the final estimate (set to 99% in all experiments). The complete algorithm is given formally in Figure 2.
The running time of the algorithm is O(nM(r)d), where r = min(r1, r2). At smaller scales, where M(r) is comparable with n, it is O(n²d). On the other hand, since the variance of the estimate also tends to be smaller at smaller scales, the algorithm iterates less for the same accuracy.
4 Experiments

The two main objectives of the four experiments described here are to demonstrate the ability of the method to capture the scale-dependent behavior of the intrinsic dimension, and to underline its robustness in terms of the data generating distribution. In all experiments, the estimate D̂pack is compared to the correlation dimension estimate D̂corr. Both dimensions are measured on consecutive pairs of a sequence r1, ..., rm of resolutions, and the estimate D̂(ri, ri+1) is plotted halfway between the two parameters (i.e., at (ri + ri+1)/2).
In the first three experiments the manifold is either known or can be approximated easily. In these experiments we use a two-sided multivariate power distribution with density

    p(x) = I{x ∈ [−1,1]^d} (p/2)^d ∏_{i=1}^{d} (1 − |x^(i)|)^{p−1}        (1)

¹ Typically, the computation of an independent vertex set of G of size at least (1 − 1/k)^d MI(G) requires O(n^(k^d)) time.
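Sampling from density (1) is straightforward by per-coordinate inverse-CDF sampling. The sketch below (function name ours) is one way to generate such data: the magnitude of each coordinate has density p(1 − m)^(p−1) on [0, 1], so m = 1 − u^(1/p) for uniform u, and the sign is a fair coin.

```python
import numpy as np

def power_sample(n, d, p, rng):
    """Draw n points in [-1, 1]^d from the two-sided power density (1).

    p = 1 gives the uniform distribution on the cube; larger p
    concentrates the mass around the origin (non-uniform data).
    """
    u = rng.uniform(size=(n, d))
    sign = rng.choice([-1.0, 1.0], size=(n, d))
    return sign * (1.0 - u ** (1.0 / p))

rng = np.random.default_rng(0)
X = power_sample(5000, 2, 3.0, rng)   # a strongly non-uniform sample
```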
PACKING-DIMENSION(Sn, r1, r2, δ)
 1  for ℓ ← 1 to ∞ do
 2      permute Sn randomly
 3      for k ← 1 to 2 do
 4          C ← ∅
 5          for i ← 1 to n do
 6              for j ← 1 to |C| do
 7                  if d(Sn[i], C[j]) < rk then
 8                      j ← n + 1
 9              if j < n + 1 then
10                  C ← C ∪ {Sn[i]}
11          L̂k[ℓ] ← log |C|
12      D̂pack ← −(μ(L̂2) − μ(L̂1)) / (log r2 − log r1)
13      if ℓ > 10 and 1.65 · √((σ²(L̂1) + σ²(L̂2)) / ℓ) / (log r2 − log r1) < D̂pack · (1 − δ)/2 then
14          return D̂pack

Figure 2: The algorithm returns the packing dimension estimate D̂pack(r1, r2) of a data set Sn with δ accuracy nine times out of ten.
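Assuming Euclidean data, the greedy center-collection loop of Figure 2 and the resulting estimate can be sketched in NumPy as follows. The function names are ours, and a fixed number of repetitions stands in for the accuracy-based stopping rule on line 13:

```python
import numpy as np

def greedy_packing(X, r, rng):
    """Greedy r-packing (lines 4-10 of Figure 2): visit the points in random
    order and keep a point as a new center if it lies at distance >= r from
    every center found so far; returns the estimate hat-M(r) = |C|."""
    centers = np.empty((0, X.shape[1]))
    for i in rng.permutation(len(X)):
        if len(centers) == 0 or np.min(np.linalg.norm(centers - X[i], axis=1)) >= r:
            centers = np.vstack([centers, X[i]])
    return len(centers)

def packing_dim(X, r1, r2, n_repeat=5, rng=None):
    """hat-D_pack via Definition 5, averaging log hat-M(r) over random
    permutations of the data to reduce the order-dependent variance."""
    rng = np.random.default_rng(0) if rng is None else rng
    L1 = [np.log(greedy_packing(X, r1, rng)) for _ in range(n_repeat)]
    L2 = [np.log(greedy_packing(X, r2, rng)) for _ in range(n_repeat)]
    return -(np.mean(L2) - np.mean(L1)) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(0)
X = rng.uniform(size=(1500, 2))     # uniform unit square: intrinsic dimension 2
print(packing_dim(X, 0.05, 0.2, rng=rng))
```

Because the packing number, unlike the correlation integral, does not weight regions by how many sample points fall in them, re-running this on a non-uniform sample of the same square leaves the estimate much more stable than the D̂corr slope.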
with different exponents p to generate uniform (p = 1) and non-uniform data sets on the manifold.

The first synthetic data set is that of Figure 1. We generated 5000 points on a spiral-shaped manifold with a small uniform perpendicular noise. The curves in Figure 3(a) reflect the scale-dependency observed in Figure 1. As the distribution becomes uneven, D̂corr severely underestimates D̂top while D̂pack remains stable.
Figure 3: Intrinsic dimension of (a) a spiral-shaped manifold and (b) hypercubes of different dimensions. The curves reflect the scale-dependency observed in Figure 1. The more uneven the distribution, the more D̂corr underestimates D̂top while D̂pack remains relatively stable.
The second set of experiments was designed to test how well the methods estimate the dimension of 5000 data points generated in hypercubes of dimensions two to six (Figure 3(b)). In general, both D̂corr and D̂pack underestimate D̂top. The negative bias grows with the dimension, probably due to the fact that data sets of equal cardinality become sparser in a higher dimensional space. To compensate this bias on a general data set, Camastra and Vinciarelli [10] propose to correct the estimate by the bias observed on a uniformly generated data set of the same cardinality. Our experiment shows that, in the case of D̂corr, this calibrating procedure can fail if the distribution is highly non-uniform. On the other hand, the technique seems more reliable for D̂pack due to the relative stability of D̂pack.
We
also tested
the bmethodsb on two sets
of image
data.b Both sets
contained
64 ?b 64 images
b
b
b
b
b
b
(c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1 (c) D D' 1
with
256
gray
levels.
The
images
were
normalized
so
that
the
distance
between
a black
p =2
1
p =2
1
p =2
1
p =2
1
p =2
1
p =2
1
p =2
1
p =2
1
p =2
1
p =2
1
(d) D '
(d) D '
(d) D '
(d) D '
(d) D '
(d) D '
(d) D '
(d) D '
(d) D '
(d) D '
and
a
white
image
is
1.
The
first
set
is
a
sequence
of
481
snapshots
of
a
hand
turning
a
pb= 2
pb= 2
pb= 2
pb= 2
pb= 2
pb= 2
pb= 2
pb= 2
pb= 2
pb= 2
D
D
D
D
D
D
D
D
D
D
2 (Figure
cup
from
the
CMU
database
4(a)).
The
sequence
of
images
sweeps
a
curve
in
bp = 3
bp = 3
bp = 3
bp = 3
bp = 3
bp = 3
bp = 3
bp = 3
bp = 3
bp = 3
D
D
D
D
D
D
D
D
D
D
ap =4096-dimensional
space pso
its informal
intrinsic
dimension
is one. p =Figure
5(a)
shows
5
p=5
p=5
=5
p=5
p=5
p=5
p=5
5
p=5
p=1
p=1
p=1
p=1
p=1
p=1
p=1
p=1
p=1
p=1
that
at
a
small
scale,
both
methods
find
a
local
dimension
between
1
and
2.
At
a
slightly
p=8
p=8
p=8
p=8
p=8
p=8
p=8
p=8
p=8
p=8
p=2
p=2
p=2
p=2
p=2
p=2
p=2
p=2
p=2
p=2
theDb intrinsic
curvature
of the
b ,higher
b scale
b dimension
b , p =increases
b , p = 1indicating
b , p = 1 aD
b relatively
b high
b ,p=1
D
pp== 13
D
, pp== 13
, pp== 13
D
, pp== 13
D
1
D
D
, pp== 13
D
, pp== 13
D
p=3
p=3
p=3
p=3
To
the
estimates,
we
b ,image
bsequence
b curve.
b , test
b distribution
b , p = dependence
b ,p=1
bof , the
b ,p=1
b , pconstructed
D
p=1
D
,p=1
D
,p=1
D
p=1
D
,p=1
D
1
D
D
p=1
D
D
=1
p=5
p=5
p=5
p=5
p=5
p=5
p=5
p=5
p=5
p=5
by
connecting
the
and
resampled
481
b ,ap =polygonal
b , p = 3 curve
b ,p=
b ,p=3
b consecutive
b , p = 3points
b , p of
b sequence,
b ,p=
b ,p=3
D
3
D
D
3
D
D
, pp== 38
D
D
=3
D
, pp== 38
D
3
D
p=8
p=8
p=8
p=8
p=8
p=8
p=8
p=8
points
by
using
the
power
distribution
(1)
with
p
=
2,
3.
We
also
constructed
a
highlyb
b
b
b
b
b
b
b
b
b
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
Db , p = 3
D ,p=1
D ,p=1
D ,p=1
D ,p=1
D ,p=1
D ,p=1
D ,p=1
D ,p=1
D ,p=1
D ,p=1
uniform,
set by
from
b1 Db , lattice-like
b1 Db , p =D
b1data
b1 drawing
b1 approximately
b1 Db , p =D
b1equidistant
b1 consecutive
b1 Db , points
b1
b , p =D
b , p =D
b , p =D
b , p =D
b , p =D
b , p =D
D
p =D
D
D
D
D
D
p =D
b
the
polygonal
curve.
Our
results
in
Figure
5(a)
confirm
again
that
D
varies
extensively
corr
b , p = r3
b , p = r3
b , p = r3
b , p = r3
b , p = r3
b , p = r3
b , p = r3
b , p = r3
b , p = r3
b , p = r3
D
D
D
D
D
D
D
D
D
D
b
on
b with
b ,generating
b , pd=
b , pd=
b ,the
b , pd=
b , pd=
b remains
b remarkably
b , pd=
=3
6 the
=3
6
=distribution
=3
6
=3
6 manifold
=3
6 while
=D
=3
6
=3
6
=3
6stable.
Figure 4: The real datasets. (a) Sequence of snapshots of a hand turning a cup. (b) Faces
database from ISOMAP [4].
The final experiment was conducted on the "faces" database from the ISOMAP paper [4]
(Figure 4(b)). The data set contained 698 images of faces generated by using three free
parameters: vertical and horizontal orientation, and light direction. Figure 5(b) indicates
that both estimates are reasonably close to the informal intrinsic dimension.
Figure 5: The intrinsic dimension of image data sets. (a) Turning cup. (b) ISOMAP faces.
We found in all experiments that at a very small scale D̂_corr tends to be higher than
D̂_pack, while D̂_pack tends to be more stable as the scale grows. Hence, if the data
contains very little noise and it is generated uniformly on the manifold, D̂_corr seems to
be closer to the "real" intrinsic dimension. On the other hand, if the data contains noise
(in which case at a very small scale we are estimating the dimension of the noise rather
than the dimension of the manifold), or the distribution on the manifold is non-uniform,
D̂_pack seems more reliable than D̂_corr.

2 http://vasc.ri.cmu.edu/idb/html/motion/hand/index.html
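The scale dependence discussed above can be made concrete for the correlation dimension of Grassberger and Procaccia [9]: the correlation integral C(r), the fraction of point pairs within distance r, scales as C(r) ∝ r^D for small r, so the slope of log C(r) against log r gives a local dimension estimate at the chosen scale. A minimal pure-Python sketch (illustrative only; the estimators compared in this paper differ in their details):

```python
import math
import random

def correlation_integral(points, r):
    """Grassberger-Procaccia correlation integral C(r): the fraction of
    point pairs whose Euclidean distance is below r."""
    n = len(points)
    close = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                close += 1
    return close / (n * (n - 1) / 2)

def corr_dimension(points, r1, r2):
    """Local correlation-dimension estimate between scales r1 and r2:
    the slope of log C(r) versus log r."""
    c1 = correlation_integral(points, r1)
    c2 = correlation_integral(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))
```

On points sampled from a one-dimensional curve the estimate is close to 1 at moderate scales; at very small scales it instead reflects sampling noise, which is exactly the regime where the two estimators disagree most.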
5 Conclusion
We have presented a new algorithm to estimate the intrinsic dimension of data sets. The
method estimates the packing dimension of the data and requires neither parametric assumptions on the data generating model nor input parameters to set. The method is compared to a widely-used technique based on the correlation dimension. Experiments show
that our method is more robust in terms of the data generating distribution and more reliable
in the presence of noise.
References
[1] T. Kohonen, The Self-Organizing Map, Springer-Verlag, 2nd edition, 1997.
[2] T. F. Cox and M. A. Cox, Multidimensional Scaling, Chapman & Hall, 1994.
[3] S. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding,"
Science, vol. 290, pp. 2323–2326, 2000.
[4] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear
dimensionality reduction," Science, vol. 290, pp. 2319–2323, 2000.
[5] E. Chávez, G. Navarro, R. Baeza-Yates, and J. Marroquín, "Searching in metric spaces," ACM
Computing Surveys, to appear, 2001.
[6] J. Bruske and G. Sommer, "Intrinsic dimensionality estimation with optimally topology preserving maps," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no.
5, pp. 572–575, 1998.
[7] S. Roweis, "EM algorithms for PCA and SPCA," in Advances in Neural Information Processing
Systems, vol. 10, pp. 626–632, The MIT Press, 1998.
[8] C. M. Bishop, M. Svensén, and C. K. I. Williams, "GTM: The generative topographic mapping,"
Neural Computation, vol. 10, no. 1, pp. 215–235, 1998.
[9] P. Grassberger and I. Procaccia, "Measuring the strangeness of strange attractors," Physica,
vol. D9, pp. 189–208, 1983.
[10] F. Camastra and A. Vinciarelli, "Estimating intrinsic dimension of data with a fractal-based
approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, to appear.
[11] A. Belussi and C. Faloutsos, "Spatial join selectivity estimation using fractal concepts," ACM
Transactions on Information Systems, vol. 16, no. 2, pp. 161–201, 1998.
[12] J. Håstad, "Clique is hard to approximate within n^(1-ε)," in Proceedings of the 37th Annual
Symposium on Foundations of Computer Science FOCS'96, 1996, pp. 627–636.
[13] T. Erlebach, K. Jansen, and E. Seidel, "Polynomial-time approximation schemes for geometric
graphs," in Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms SODA'01,
2001, pp. 671–679.
Improving a Page Classifier with Anchor
Extraction and Link Analysis
William W. Cohen
Center for Automated Learning and Discovery,
Carnegie-Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213
[email protected]
Abstract
Most text categorization systems use simple models of documents and
document collections. In this paper we describe a technique that improves a simple web page classifier's performance on pages from a new,
unseen web site, by exploiting link structure within a site as well as
page structure within hub pages. On real-world test cases, this technique
significantly and substantially improves the accuracy of a bag-of-words
classifier, reducing error rate by about half, on average. The system uses
a variant of co-training to exploit unlabeled data from a new site. Pages
are labeled using the base classifier; the results are used by a restricted
wrapper-learner to propose potential "main-category anchor wrappers";
and finally, these wrappers are used as features by a third learner to find
a categorization of the site that implies a simple hub structure, but which
also largely agrees with the original bag-of-words classifier.
1 Introduction
Most text categorization systems use simple models of documents and document collections. For instance, it is common to model documents as "bags of words", and to model
a collection as a set of documents drawn from some fixed distribution. An interesting
question is how to exploit more detailed information about the structure of individual documents, or the structure of a collection of documents.
For web page categorization, a frequently-used approach is to use hyperlink information
to improve classification accuracy (e.g., [7, 9, 15]). Often hyperlink structure is used to
"smooth" the predictions of a learned classifier, so that documents that (say) are pointed to
by the same "hub" page will be more likely to have the same classification after smoothing.
This smoothing can be done either explicitly [15] or implicitly (for instance, by representing examples so that the distance between examples depends on hyperlink connectivity
[7, 9]).
The structure of individual pages, as represented by HTML markup structure or linguistic
structure, is less commonly used in web page classification; however, page structure is
often used in extracting information from web pages. Page structure seems to be particularly important in finding site-specific extraction rules ("wrappers"), since on a given site,
formatting information is frequently an excellent indication of content [6, 10, 12].
This paper is based on two practical observations about web page classification. The first
is that for many categories of economic interest (e.g., product pages, job-posting pages,
and press releases) many sites contain "hub" or index pages that point to essentially all
pages in that category on a site. These hubs rarely link exclusively to pages of a single
category; instead, the hubs will contain a number of additional links, such as links back to
a home page and links to related hubs. However, the page structure of a hub page often
gives strong indications of which links are to pages from the "main" category associated
with the hub, and which are ancillary links that exist for other (e.g., navigational) purposes.
As an example, refer to Figure 1. Links to pages in the main category associated with this
hub (previous NIPS conference homepages) are in the left-hand column of the table, and
hence can be easily identified by the page structure.
The second observation is that it is relatively easy to learn to extract links from hub pages
to main-category pages using existing wrapper-learning methods [8, 6]. Wrapper-learning
techniques interactively learn to extract data of some type from a single site using userprovided training examples. Our experience in a number of domains indicates that maincategory links on hub pages (like the NIPS-homepage links from Figure 1) can almost
always be learned from two or three positive examples.
Exploiting these observations, we describe in this paper a web page categorization system
that exploits link structure within a site, as well as page structure within hub pages, to
improve classification accuracy of a traditional bag-of-words classifier on pages from a
previously unseen site. The system uses a variant of co-training [3] to exploit unlabeled
data from a new, previously unseen site. Specifically, pages are labeled using a simple
bag-of-words classifier, and the results are used by a restricted wrapper-learner to propose
potential ?main-category link wrappers?. These wrappers are then used as features by a
decision tree learner to find a categorization of the pages on the site that implies a simple
hub structure, but which also largely agrees with the original bag-of-words classifier.
2 One-step co-training and hyperlink structure
Consider a binary bag-of-words classifier f that has been learned from some set of labeled
web pages Dℓ. We wish to improve the performance of f on pages from an unknown web
site S, by smoothing its predictions in a way that is plausible given the hyperlink structure of S,
and the page structure of potential hub pages in S. As background for the algorithm, let
us consider first co-training, a well-studied approach for improving classifier performance
using unlabeled data [3].
In co-training one assumes a concept learning problem where every instance x can be
written as a pair (x1 , x2 ) such that x1 is conditionally independent of x2 given the class
y. One also assumes that both x1 and x2 are sufficient for classification, in the sense that
the target function f (x) can be written either as a function of x1 or x2 , i.e., that there exist
functions f1 (x1 ) = f (x) and f2 (x2 ) = f (x). Finally one assumes that both f1 and f2 are
learnable, i.e., that f1 ∈ H1 and f2 ∈ H2, and noise-tolerant learning algorithms A1 and
A2 exist for H1 and H2 .
..
.
Webpages and Papers for Recent NIPS Conferences
A. David Redish ([email protected]) created and maintained these web pages from 1994
until 1996. L. Douglas Baker ([email protected]) maintained these web pages from
1997 until 1999. They were maintained in 2000 by L. Douglas Baker and Alexander Gray
([email protected]).
NIPS*2000
NIPS 13, the conference proceedings for 2000 ("Advances in Neural
Information Processing Systems 13", edited by Leen, Todd K., Dietterich,
Thomas G. and Tresp, Volker) will be available to all attendees in June 2001.
? Abstracts and papers from this forthcoming volume are available
on-line.
? BibTeX entries for all papers from this forthcoming volume are available
on-line.
NIPS*99
NIPS 12 is available from MIT Press.
Abstracts and papers from this volume are available on-line.
NIPS*98
NIPS 11 is available from MIT Press.
Abstracts and (some) papers from this volume are available on-line.
..
.
Figure 1: Part of a ?hub? page. Links to pages in the main category associated with this
hub are in the left-hand column of the table.
In this setting, a large amount of unlabeled data Du can be used to improve the accuracy
of a small set of labeled data Dℓ, as follows. First, use A1 to learn an approximation f10
to f1 using Dℓ. Then, use f10 to label the examples in Du, and use A2 to learn from this
training set. Given the assumptions above, f10's errors on Du will appear to A2 as random,
uncorrelated noise, and A2 can in principle learn an arbitrarily good approximation to f ,
given enough unlabeled data in Du . We call this process one-step co-training using A1 ,
A2 , and Du .
Now, consider a set DS of unlabeled pages from an unseen web site S. It seems not unreasonable to assume that the words x1 on a page x ∈ S and the hub pages x2 ∈ S
that hyperlink to x are independent, given the class of x. This suggests that one-step co-training could be used to improve a learned bag-of-words classifier f10, using the following
algorithm:
Algorithm 1 (One-step co-training):
1. Parameters. Let S be a web site, f10 be a bag-of-words page classifier, and DS be
the pages on the site S.
2. Instance generation and labeling. For each page xi ∈ DS, represent xi as a vector
of all pages in S that hyperlink to xi. Call this vector xi2. Let yi = f10(xi).
3. Learning. Use a learner A2 to learn f20 from the labeled examples D2 =
{(xi2 , y i )}i .
4. Labeling. Use f20(x) as the final label for each page x ∈ DS.
This ?one-step? use of co-training is consistent with the theoretical results underlying cotraining. In experimental studies, co-training is usually done iteratively, alternating between using f10 and f20 for tagging the unlabeled data. The one-step version seems more
appropriate in this setting, in which there are a limited number of unlabeled examples over
which each x2 is defined.
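The steps of Algorithm 1 can be sketched as follows. All interfaces here are illustrative assumptions (the paper does not specify an API), and hub_vote_learner is just one simple stand-in for the learner A2:

```python
def one_step_cotrain(pages, inlinks, f1, learn_A2):
    """One-step co-training (Algorithm 1), with illustrative interfaces.
    pages:    ids of the pages on site S.
    inlinks:  dict mapping a page to the set of pages on S that hyperlink
              to it (the x2 view of each example).
    f1:       bag-of-words classifier, f1(page) -> 0/1.
    learn_A2: learner taking [(link_set, label), ...] and returning a
              classifier over link sets."""
    labeled = [(inlinks.get(x, set()), f1(x)) for x in pages]   # step 2
    f2 = learn_A2(labeled)                                      # step 3
    return {x: f2(inlinks.get(x, set())) for x in pages}        # step 4

def hub_vote_learner(labeled):
    """A toy A2: score each hub by the fraction of its linked pages that
    f1 labeled positive; a page is positive if its hubs' mean score
    exceeds 1/2."""
    votes = {}
    for links, y in labeled:
        for h in links:
            pos, tot = votes.get(h, (0, 0))
            votes[h] = (pos + y, tot + 1)
    def f2(links):
        scores = [votes[h][0] / votes[h][1] for h in links if h in votes]
        return int(sum(scores) / len(scores) > 0.5) if scores else 0
    return f2
```

With this toy A2, a page that f1 mislabels negative but that shares a mostly-positive hub with other pages gets smoothed back to positive, which is the intended effect of the hyperlink view.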
3 Anchor Extraction and Page Classification
3.1 Learning to extract anchors from web pages
Algorithm 1 has some shortcomings. Co-training assumes a large pool of unlabeled data:
however, if the informative hubs for pages on S are mostly within S (a very plausible
assumption) then the amount of useful unlabeled data is limited by the size of S. With limited amounts of unlabeled data, it is very important that A2 has a strong (and appropriate)
statistical bias, and that A2 has some effective method for avoiding overfitting.
As suggested by Figure 1, the informativeness of hub features can be improved by using
knowledge of the structure of hub pages themselves. To make use of hub page structure,
we used a wrapper-learning system called WL2 , which has experimentally proven to be
effective at learning substructures of web pages [6]. The output of WL2 is an extraction
predicate: a binary relation p between pages x and substrings a within x. As an example,
WL2 might output p = {(x, a) : x is the page of Figure 1 and a is an anchor appearing
in the first column of the table}. (An anchor is a substring of a web page that defines a
hyperlink.)
This suggests a modification of Algorithm 1, in which one-step co-training is carried out on
the problem of extracting anchors rather than the problem of labeling web pages. Specifically, one might map f1's predictions from web pages to anchors, by giving a positive label
to anchor a iff a links to a page x such that f10 (x) = 1; then use WL2 algorithm A2 to learn
a predicate p02 ; and finally, map the predictions of p02 from anchors back to web pages.
One problem with this approach is that WL2 was designed for user-provided data sets,
which are small and noise-free. Another problem is that it unclear how to map class labels from anchors back to web pages, since a page might be pointed to by many different
anchors.
3.2 Bridging the gap between anchors and pages
Based on these observations we modified Algorithm 1 as follows. As suggested, we map
the predictions about page labels made by f10 to anchors. Using these anchor labels, we then
produce many small training sets that are passed to WL2 . The intuition here is that some of
these training sets will be noise-free, and hence similar to those that might be provided by
a user. Finally, we use the many wrappers produced by WL2 as features in a representation
of a page x, and again use a learner to combine the wrapper-features and produce a single
classification for a page.
Algorithm 2:
1. Parameters. Let S be a web site, f10 be a bag-of-words page classifier, and DS be
the pages on the site.
2. Link labeling. For each anchor a on a page x ∈ S, label a as tentatively-positive
if a points to a page x0 such that x0 ∈ S and f10(x0) = 1.
3. Wrapper proposal. Let P be the set of all pairs (x, a) where a is a tentativelypositive link and x is the page on which a is found. Generate a number of small
sets D1 , . . . , Dk containing such pairs, and for each subset Di , use WL2 to produce a number of possible extraction predicates pi,1 , . . . , pi,ki . (See appendix for
details).
4. Instance generation and labeling. We will say that the "wrapper predicate" pij
links to x iff pij includes some pair (x0, a) such that x0 ∈ DS and a is a hyperlink
to page x. For each page xi ∈ DS, represent xi as a vector of all wrappers pij
that link to x. Call this vector xi2 . Let y i = f10 (xi ).
5. Learning. Use a learner A2 to learn f20 from the labeled examples DS =
{(xi2 , y i )}i .
6. Labeling. Use f20(x) as the final label for each page x ∈ DS.
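The six steps above can be sketched end-to-end as below. Everything named here is an illustrative assumption: propose_wrappers stands in for the WL2-based Step 3, and a wrapper is modeled simply as the set of (page, anchor) pairs it would extract.

```python
def algorithm2(pages, f1, anchors, propose_wrappers, learn_A2):
    """Sketch of Algorithm 2 (interfaces are illustrative, not from WL2).
    anchors: (page, anchor, target_page) triples found on the site.
    propose_wrappers: maps the tentatively-positive (page, anchor) pairs
        to a list of wrapper predicates, each a set of (page, anchor)
        pairs it would extract."""
    # Step 2: label anchors pointing to pages that f1 calls positive.
    pos = [(x, a) for (x, a, tgt) in anchors if f1(tgt) == 1]
    # Step 3: propose candidate wrappers from the positive pairs.
    wrappers = propose_wrappers(pos)
    # Step 4: represent page p by the wrappers that link to it.
    target = {(x, a): tgt for (x, a, tgt) in anchors}
    feats = {p: frozenset(j for j, w in enumerate(wrappers)
                          if any(target.get(pair) == p for pair in w))
             for p in pages}
    # Steps 5-6: train A2 on (wrapper features, f1 label), then relabel.
    f2 = learn_A2([(feats[p], f1(p)) for p in pages])
    return {p: f2(feats[p]) for p in pages}
```

With a wrapper that generalizes a hub's "main column" from two confident positives, a page mislabeled by f1 but covered by the same wrapper is flipped to positive, which is exactly the smoothing effect the algorithm aims for.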
A general problem in building learning systems for new problems is exploiting existing
knowledge about these problems. In this case, in building a page classifier, one would
like to exploit knowledge about the related problem of link extraction. Unfortunately this
knowledge is not in any particularly convenient form (e.g., a set of well-founded parametric
assumptions about the data): instead, we only know that experimentally, a certain learning
algorithm works well on the problem. In general, it is often the case that this sort of
experimental evidence is available, even when a learning problem is not formally wellunderstood.
The advantage of Algorithm 2 is that one need make no parametric assumptions about
the anchor-extraction problem. The bagging-like approach of "feeding" WL2 many small
training sets, and the use of a second learning algorithm to aggregate the results of WL2,
are a means of exploiting prior experimental results, in lieu of more precise statistical assumptions.
4 Experimental results
To evaluate the technique, we used the task of categorizing web pages from company sites
as executive biography or other. We selected nine company web sites with non-trivial
hub structures. These were crawled using a heuristic spidering strategy intended to find
executive biography pages with high recall.1 The crawl found 879 pages, of which 128
were labeled positive. A simple bag-of-words classifier f10 was trained using a disjoint set
of sites (different from the nine above), obtaining an average accuracy of 91.6% (recall
82.0%, precision 61.8%) on the nine held-out sites. Using an implementation of Winnow
[2, 11] as A2 , Algorithm 2 obtained an average accuracy of 96.4% on the nine held-out
sites. Algorithm 2 improves over the baseline classifier f10 on six of the nine sites, and
obtains the same accuracy on two more. This difference is significant at the 98% level with
a 2-tailed paired sign test, and at the 95% level with a 2-tailed paired t test.
Similar results were also obtained using a sparse-feature implementation of a C4.5-like
decision tree learning algorithm [14] for learner A2 . (Note that both Winnow and C4.5 are
known to work well when data is noisy, irrelevant attributes are present, and the underlying
concept is "simple".)
1 The authors wish to thank Vijay Boyaparti for assembling this data set.
Site    Classifier f10     Algorithm 2 (C4.5)   Algorithm 2 (Winnow)
        Accuracy (SE)      Accuracy (SE)        Accuracy (SE)
1       1.000 (0.000)      0.960 (0.028)        0.960 (0.028)
2       0.932 (0.027)      0.955 (0.022)        0.955 (0.022)
3       0.813 (0.028)      0.934 (0.018)        0.939 (0.017)
4       0.904 (0.029)      0.962 (0.019)        0.962 (0.019)
5       0.939 (0.024)      0.960 (0.020)        0.960 (0.020)
6       1.000 (0.000)      1.000 (0.000)        1.000 (0.000)
7       0.918 (0.028)      0.990 (0.010)        0.990 (0.010)
8       0.788 (0.044)      0.882 (0.035)        0.929 (0.028)
9       0.948 (0.029)      0.948 (0.029)        0.983 (0.017)
avg     0.916              0.954                0.964
Table 1: Experimental results with Algorithm 2. Paired tests indicate that both versions of
Algorithm 2 significantly improve on the baseline classifier.
5 Related work
The introduction discusses the relationship between this work and a number of previous
techniques for using hyperlink structure in web page classification [7, 9, 15]. The WL2-based method for finding document structure has antecedents in other techniques for learning [10, 12] and automatically detecting [4, 5] structure in web pages.
In concurrent work, Blei et al [1] introduce a probabilistic model called "scoped learning"
which gives a generative model for the situation described here: collections of examples
in which some subsets (documents from the same site) share common "local" features,
and all documents share common "content" features. Blei et al do not address the specific
problem considered here, of using both page structure and hyperlink structure in web page
classification. However, they do apply their technique to two closely related problems:
they augment a page classification method with local features based on the page's URL,
and also augment content-based classification of "text nodes" (specific substrings of a web
page) with page-structure-based local features.
We note that Algorithm 2 could be adapted to operate in Blei et al's setting: specifically,
the x2 vectors produced in Steps 2-4 could be viewed as "local features". (In fact, Blei
et al generated page-structure-based features for their extraction task in exactly this way:
the only difference is that WL2 was parameterized differently.) The co-training framework
adopted here clearly makes different assumptions than those adopted by Blei et al. More experimentation is needed to determine which is preferable: current experimental evidence
[13] is ambiguous as to when probabilistic approaches should be prefered to co-training.
6 Conclusions
We have described a technique that improves a simple web page classifier by exploiting
link structure within a site, as well as page structure within hub pages. The system uses
a variant of co-training called "one-step co-training" to exploit unlabeled data from a new
site. First, pages are labeled using the base classifier. Next, results of this labeling are
propagated to links to labeled pages, and these labeled links are used by a wrapper-learner
called WL2 to propose potential "main-category link wrappers". Finally, these wrappers
are used as features by another learner A2 to find a categorization of the site that implies a
simple hub structure, but which also largely agrees with the original bag-of-words classifier.
Experiments suggest the choice of A2 is not critical.
On a real-world benchmark problem, this technique substantially improved the accuracy
of a simple bag-of-words classifier, reducing error rate by about half. This improvement is
statistically significant.
Acknowledgments
The author wishes to thank his former colleagues at Whizbang Labs for many helpful discussions and useful advice.
Appendix A: Details on ?Wrapper Proposal?
Extraction predicates are constructed by WL2 using a rule-learning algorithm and a configurable set of components called builders. Each builder B corresponds to a language LB of
extraction predicates. Builders support a certain set of operations relative to LB, in particular, the least general generalization (LGG) operation. Given a set of pairs D = {(xi, ai)}
such that each ai is a substring of xi, LGGB(D) is the least general p ∈ LB such that
(x, a) ∈ D ⇒ (x, a) ∈ p. Intuitively, LGGB(D) encodes common properties of the (positive) examples in D. Depending on B, these properties might be membership in a particular
syntactic HTML structure (e.g., a specific table column), common visual properties (e.g.,
being rendered in boldface), etc.
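As a concrete but much-simplified illustration, suppose a toy builder whose language LB contains predicates of the form "anchors whose enclosing HTML tag path ends with a given suffix"; the LGG of a set of examples is then their longest common tag-path suffix. This toy language is an assumption for illustration only; WL2's real builders are considerably richer:

```python
def lgg_tagpath(examples):
    """LGG for a toy builder. examples are (page, tag_path) pairs, where
    tag_path is the tuple of HTML tags enclosing the anchor, outermost
    first. Returns the longest common suffix of the tag paths: the least
    general predicate in this language covering every example."""
    paths = [path for _, path in examples]
    suffix = []
    for tags in zip(*(reversed(p) for p in paths)):
        if len(set(tags)) == 1:
            suffix.append(tags[0])
        else:
            break
    return tuple(reversed(suffix))

def covers(suffix, tag_path):
    """Predicate test: does the anchor's tag path end with the suffix?"""
    return tuple(tag_path[len(tag_path) - len(suffix):]) == suffix
```

For two anchors that both sit in a table cell (td inside tr inside table) but under different page prefixes, the LGG keeps only the shared table context, so the predicate generalizes to every anchor in that column.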
To generate subsets Di in Step 3 of Algorithm 2, we used every pair of links that pointed
to the two most confidently labeled examples; every pair of adjacent tentatively-positive
links; and every triple and every quadruple of tentatively-positive links that were separated
by at most 10 intervening tokens. These heuristics were based on the observation that in
most extraction tasks, the items to be extracted are close together. Careful implementation
allows the subsets Di to be generated in time linear in the size of the site. (We also note
that these heuristics were initially developed to support a different set of experiments [1],
and were not substantially modified for the experiments in this paper.)
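The subset-generation heuristics above can be sketched as follows (interfaces are illustrative; top2 abbreviates "the two most confidently labeled links", and token offsets stand in for positions in the page):

```python
from itertools import combinations

def propose_training_sets(pos_links, positions, top2, max_gap=10):
    """Sketch of the Step 3 subset generation (illustrative interfaces).
    pos_links: tentatively-positive links on one page, in document order.
    positions: dict mapping a link to its token offset in the page.
    top2:      the two most confidently labeled positive links.
    Yields small candidate training sets D_i for the wrapper learner."""
    if len(top2) == 2:
        yield set(top2)                       # pair of most-confident links
    for a, b in zip(pos_links, pos_links[1:]):
        yield {a, b}                          # every adjacent pair
    for k in (3, 4):                          # close-together triples/quads
        for group in combinations(pos_links, k):
            offs = sorted(positions[g] for g in group)
            if all(o2 - o1 <= max_gap for o1, o2 in zip(offs, offs[1:])):
                yield set(group)
```

Because each yielded set is small, some of them are likely to be noise-free, which is the property the bagging-like use of WL2 relies on.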
Normally, WL2 is parameterized by a list B of builders, which are called by a "master"
rule-learning algorithm. In our use of WL2 , we simply applied each builder Bj to a dataset
Di , to get the set of predicates {pij } = {LGGBj (Di )}, instead of running the full WL2
learning algorithm.
References
[1] David M. Blei, J. Andrew Bagnell, and Andrew K. McCallum. Learning with scope,
with application to information extraction and classification. In Proceedings of UAI2002, Edmonton, Alberta, 2002.
[2] Avrim Blum. Learning boolean functions in an infinite attribute space. Machine
Learning, 9(4):373?386, 1992.
[3] Avrin Blum and Tom Mitchell. Combining labeled and unlabeled data with cotraining. In Proceedings of the 1998 Conference on Computational Learning Theory,
Madison, WI, 1998.
[4] William W. Cohen. Automatically extracting features for concept learning from the
web. In Machine Learning: Proceedings of the Seventeeth International Conference,
Palo Alto, California, 2000. Morgan Kaufmann.
[5] William W. Cohen and Wei Fan. Learning page-independent heuristics for extracting
data from web pages. In Proceedings of The Eigth International World Wide Web
Conference (WWW-99), Toronto, 1999.
[6] William W. Cohen, Lee S. Jensen, and Matthew Hurst. A flexible learning system
for wrapping tables and lists in HTML documents. In Proceedings of The Eleventh
International World Wide Web Conference (WWW-2002), Honolulu, Hawaii, 2002.
[7] David Cohn and Thomas Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[8] Lee S. Jensen and William W. Cohen. A structured wrapper induction system for
extracting information from semi-structured documents. In Proceedings of the IJCAI2001 Workshop on Adaptive Text Extraction and Mining, Seattle, WA, 2001.
[9] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext
categorisation. In Proceedings of the International Conference on Machine Learning
(ICML-2001), 2001.
[10] N. Kushmerick. Wrapper induction: efficiency and expressiveness. Artificial Intelligence, 118:15–68, 2000.
[11] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4), 1988.
[12] Ion Muslea, Steven Minton, and Craig Knoblock. Wrapper induction for semistructured information sources. Journal of Autonomous Agents and Multi-Agent Systems,
16(12), 1999.
[13] Kamal Nigam and Rayid Ghani. Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and
Knowledge Management (CIKM-2000), 2000.
[14] J. Ross Quinlan. C4.5: programs for machine learning. Morgan Kaufmann, 1994.
[15] S. Slattery and T. Mitchell. Discovering test set regularities in relational domains.
In Proceedings of the 17th International Conference on Machine Learning (ICML2000), June 2000.
Koji Tsuda, Motoaki Kawanabe and Klaus-Robert Muller
?
AIST
CBRC, 2-41-6, Aomi, Koto-ku, Tokyo, 135-0064, Japan
Fraunhofer FIRST, Kekul?estr. 7, 12489 Berlin, Germany
Dept. of CS, University of Potsdam, A.-Bebel-Str. 89, 14482 Potsdam, Germany
[email protected],
nabe,klaus @first.fhg.de
Abstract
Recently the Fisher score (or the Fisher kernel) is increasingly used as a
feature extractor for classification problems. The Fisher score is a vector
of parameter derivatives of the log-likelihood of a probabilistic model. This
paper gives a theoretical analysis of how class information is preserved in the space of the Fisher score, showing that the Fisher
score consists of a few important dimensions with class information and
many nuisance dimensions. When we perform clustering with the Fisher
score, K-Means type methods are obviously inappropriate because they
make use of all dimensions. So we will develop a novel but simple clustering algorithm specialized for the Fisher score, which can exploit important dimensions. This algorithm is successfully tested in experiments
with artificial data and real data (amino acid sequences).
1 Introduction
Clustering is widely used in exploratory analysis for various kinds of data [6]. Among
them, discrete data such as biological sequences [2] are especially challenging, because
efficient clustering algorithms, e.g. K-Means [6], cannot be used directly. In such cases, one
naturally considers mapping data to a vector space and performing clustering there. We call the
mapping a ?feature extractor?. Recently, the Fisher score has been successfully applied as a
feature extractor in supervised classification [5, 15, 14, 13, 16]. The Fisher score is derived
as follows: Let us assume that a probabilistic model $p(x|\theta)$, $\theta \in \mathbb{R}^d$, is available. Given a
parameter estimate $\hat{\theta}$ from training samples, the Fisher score vector is obtained as
$$f_{\hat{\theta}}(x) = \big( \partial_{\theta_1} \log p(x|\hat{\theta}), \ldots, \partial_{\theta_d} \log p(x|\hat{\theta}) \big)^\top .$$
The Fisher kernel refers to the inner product in this space [5]. When combined with high
performance classifiers such as SVMs, the Fisher kernel often shows superb results [5, 14].
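As a concrete sketch (ours, not from the paper), the score vector for a simple univariate Gaussian model $p(x|\mu,\sigma)$ can be computed analytically and checked against finite differences of the log-likelihood:

```python
import math

def log_gauss(x, mu, sigma):
    """Log-density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def fisher_score(x, mu, sigma):
    """Analytic score vector (d/dmu, d/dsigma) of log p(x | mu, sigma)."""
    return ((x - mu) / sigma ** 2,
            -1.0 / sigma + (x - mu) ** 2 / sigma ** 3)

def fisher_score_fd(x, mu, sigma, h=1e-6):
    """Central finite differences of the log-likelihood, for checking."""
    return ((log_gauss(x, mu + h, sigma) - log_gauss(x, mu - h, sigma)) / (2 * h),
            (log_gauss(x, mu, sigma + h) - log_gauss(x, mu, sigma - h)) / (2 * h))

exact = fisher_score(1.3, mu=0.5, sigma=2.0)
approx = fisher_score_fd(1.3, mu=0.5, sigma=2.0)
print(exact, approx)  # the two pairs agree to several decimal places
```

For models such as mixtures or HMMs the analytic derivatives are more involved, but the finite-difference check works the same way.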
For successful clustering with the Fisher score, one has to investigate how original classes
are mapped into the feature space, and select a proper clustering algorithm. In this paper,
it will be claimed that the Fisher score has only a few dimensions which contain the class
information and a lot of unnecessary nuisance dimensions. So K-Means type clustering [6]
is obviously inappropriate because it takes all dimensions into account. We will propose
a clustering method specialized for the Fisher score, which exploits important dimensions
with class information. This method has an efficient EM-like alternating procedure to learn,
and has the favorable property that the resultant clusters are invariant to any invertible linear
transformation. Two experiments, one with artificial data and one with biological sequence data,
will be shown to illustrate the effectiveness of our approach.
2 Preservation of Cluster Structure
Before starting, let us fix some notation. Denote by $\mathcal{X}$ the domain of objects (discrete or continuous)
and by $\mathcal{Y} = \{1, \ldots, c\}$ the set of class labels. The feature extraction
is denoted as $f: \mathcal{X} \to \mathbb{R}^d$. Let $p(x, y)$ be the underlying joint distribution and assume
that the classes are well separated, i.e. $P(y|x)$ is close to 0 or 1.
First of all, let us assume that the marginal distribution $p(x)$ is known. Then the problem
is how to find a good feature extractor, which can preserve class information, based on the
prior knowledge of $p(x)$. In the Fisher score, it amounts to finding a good parametric model
$p(x|\theta)$. This problem is by no means trivial, since it is in general hard to infer
anything about the possible $P(y|x)$ from the marginal $p(x)$ without additional assumptions
[12].
A loss function for feature extraction   In order to investigate how the cluster structure
is preserved, we first have to define what the class information is. We regard the class
information as completely preserved if a set of predictors in the feature space can recover
the true posterior probability $P(y|x)$. This view makes sense, because it is impossible
to recover the posteriors when classes are totally mixed up. As a predictor of the posterior
probability in the feature space, we adopt the simplest one, i.e. a linear estimator:
$w^\top f(x) + b$.
The prediction accuracy of $w^\top f(x) + b$ for $P(y|x)$ is difficult to formulate, because the parameters $w$ and $b$ are learned from samples. To make the theoretical analysis possible, we
consider the best possible linear predictors. Thus the loss of a feature extractor $f$ for the $i$-th
class is defined as
$$R_i(f) = \min_{w_i, b_i} E_x\!\left[ \big( w_i^\top f(x) + b_i - P(y=i \mid x) \big)^2 \right], \qquad (2.1)$$
where $E_x$ denotes the expectation with respect to the true marginal distribution $p(x)$. The overall loss
is just the sum over all classes: $R(f) = \sum_{i=1}^{c} R_i(f)$.
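The inner minimization in the loss (2.1) is an ordinary least-squares problem, so it can be estimated empirically. The following sketch (illustrative code of ours, with stand-in "posterior" values) contrasts a feature that is an affine image of the posterior, which drives the loss to zero, with an uninformative one:

```python
import numpy as np

def best_linear_loss(F, post):
    """Empirical version of (2.1): min over (w, b) of mean((F w + b - post)^2).

    F    : (n, d) feature matrix, rows are f(x_j)
    post : (n,) posterior values P(y=i | x_j) for one class i
    """
    n = F.shape[0]
    X = np.hstack([F, np.ones((n, 1))])      # append a bias column for b
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)
    resid = X @ coef - post
    return float(np.mean(resid ** 2))

rng = np.random.default_rng(0)
post = rng.uniform(0, 1, size=200)           # stand-in posterior values
F_good = post[:, None] * 2.0 - 1.0           # feature = affine image of the posterior
F_bad = rng.normal(size=(200, 1))            # feature independent of the posterior

print(best_linear_loss(F_good, post))        # ~0: class information preserved
print(best_linear_loss(F_bad, post))         # clearly positive: information lost
```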
Even when the full class information is preserved, i.e. $R(f) = 0$, clustering in the feature space may not be easy, because of nuisance dimensions which do not contribute to
clustering at all. The posterior predictors make use of an at most $c$-dimensional subspace
out of the $d$-dimensional Fisher score, and the complementary subspace may not have any
information about classes. K-means type methods [6] assume a cluster to be hyperspherical, which means that every dimension should contribute to cluster discrimination. For
such methods, we have to try to minimize the dimensionality $d$ while keeping $R(f)$ small.
When nuisance dimensions cannot be excluded, we will need a different clustering method
that is robust to nuisance dimensions. This issue will be discussed in Sec. 3.
Optimal Feature Extraction   In the following, we will discuss how to determine $p(x|\theta)$.
First, a simple but unrealistic example is shown to achieve $R(f) = 0$ without producing
nuisance dimensions at all. Let us assume that $p(x|\theta)$ is determined as a mixture model of the
true class distributions:
$$q(x|\theta) = \sum_{i=1}^{c-1} \theta_i P(x|y=i) + \Big( 1 - \sum_{i=1}^{c-1} \theta_i \Big) P(x|y=c), \qquad \theta \in \Theta, \qquad (2.2)$$
where $\Theta = \{\theta \mid \sum_{i=1}^{c-1} \theta_i \le 1,\; \theta_i \ge 0,\; i = 1, \ldots, c-1\}$. Obviously this model realizes
the true marginal distribution $p(x)$ when $\theta_i = \theta_i^* := P(y=i)$ for $i = 1, \ldots, c-1$.
Lemma 1. When the Fisher score is derived at the true parameter $\theta^*$, i.e. $f(x) = \nabla_\theta \log q(x|\theta^*)$, it achieves $R(f) = 0$.

(proof) To prove the lemma, it is sufficient to show the existence of a $c \times (c-1)$ matrix $A$
and a $c$-dimensional vector $b$ such that
$$A f(x) + b = \big( P(y=1|x), \ldots, P(y=c|x) \big)^\top. \qquad (2.3)$$
The Fisher score for $q(x|\theta^*)$ is
$$f_i(x) = \frac{\partial \log q(x|\theta^*)}{\partial \theta_i} = \frac{P(y=i|x)}{\theta_i^*} - \frac{P(y=c|x)}{\theta_c^*}, \qquad i = 1, \ldots, c-1,$$
where $\theta_c^* = 1 - \sum_{j=1}^{c-1} \theta_j^*$. Hence $\theta_i^* f_i(x) = P(y=i|x) - (\theta_i^*/\theta_c^*)\, P(y=c|x)$, and summing
over $i$ and using $\sum_{i=1}^{c} P(y=i|x) = 1$ gives
$$P(y=c|x) = \theta_c^* \Big( 1 - \sum_{i=1}^{c-1} \theta_i^* f_i(x) \Big),$$
so every posterior $P(y=i|x)$ is an affine function of $f(x)$, and the required $A$ and $b$ exist.
Loose Models and Nuisance Dimensions   We assumed that $p(x)$ is known, but still we
do not know the true class distributions $P(x|y)$. Thus the model $q(x|\theta)$ in Lemma 1 is
never available. In the following, the result of Lemma 1 is relaxed to a more general class
of probability models by means of the chain rule of derivatives. However, in this case, we
have to pay the price: nuisance dimensions.
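Lemma 1 can be verified numerically for a two-class example. The sketch below is ours and illustrative: it uses two unit-variance Gaussians as class densities, and for $c = 2$ the affine relation between score and posterior works out to $P(y{=}1|x) = \theta_1 \theta_2 f(x) + \theta_1$, which the code checks pointwise:

```python
import math

def gauss(x, mu):
    """Unit-variance Gaussian density N(mu, 1)."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

theta1 = 0.3                      # true class prior P(y=1)
theta2 = 1.0 - theta1             # P(y=2)

def marginal(x):
    return theta1 * gauss(x, 0.0) + theta2 * gauss(x, 3.0)

def score(x):
    """Fisher score of the mixture (2.2) at theta*: d/dtheta1 log q(x|theta)."""
    return (gauss(x, 0.0) - gauss(x, 3.0)) / marginal(x)

def posterior1(x):
    return theta1 * gauss(x, 0.0) / marginal(x)

# affine recovery of the posterior from the score (Lemma 1, c = 2)
for x in (-1.0, 0.5, 1.5, 4.0):
    print(x, posterior1(x), theta1 * theta2 * score(x) + theta1)
```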
Denote by $\mathcal{P}$ the set of probability distributions on $\mathcal{X}$. According
to information geometry [1], $\mathcal{P}$ is regarded as a manifold in a Riemannian space. Let
$Q = \{ q(x|\theta) \}$ denote the manifold of the model $p(x|\theta)$, and let $M$ denote the manifold
of the mixture model (2.2). Now the question is how to determine a manifold $Q$ such that
$R(f) = 0$, which is answered by the following theorem.

Theorem 1. Assume that the true distribution is contained in $Q$:
$$\exists\, \theta^* :\; p(x) = q(x|\theta^*),$$
where $\theta^*$ is the true parameter. If the tangent space of $Q$ at $p(x)$ contains the tangent space
of $M$ at the same point (Fig. 1), then the Fisher score derived from $q(x|\theta^*)$ satisfies $R(f) = 0$.

(proof) To prove the theorem, it is sufficient to show the existence of a $c \times d$ matrix $A$
and a $c$-dimensional vector $b$ such that
$$A \nabla_\theta \log q(x|\theta^*) + b = \big( P(y=1|x), \ldots, P(y=c|x) \big)^\top. \qquad (2.4)$$
When the tangent space of $M$ is contained in that of $Q$ around $p(x)$, the mixture family can
locally be written as $q_M(x|\eta) = q(x|\theta(\eta))$, and we have the following by the chain rule:
$$\frac{\partial \log q_M(x|\eta^*)}{\partial \eta_i} = \sum_{j=1}^{d} \frac{\partial \theta_j}{\partial \eta_i}\, \frac{\partial \log q(x|\theta^*)}{\partial \theta_j}. \qquad (2.5)$$
Let $T$ be the $(c-1) \times d$ matrix with entries $T_{ij} = \partial \theta_j / \partial \eta_i$. With this notation, (2.5) says
that the Fisher score of the mixture model is a linear transformation $T$ of the Fisher score of
$q(x|\theta)$. The equation (2.4) then holds by combining $T$ with the matrix and vector constructed
in the proof of Lemma 1.
Figure 1: Information geometric picture of a probabilistic model whose Fisher score can fully extract the class information. When the tangent space of $M$ at $p(x)$ is contained in that of $Q$, the Fisher score can fully extract the class information, i.e. $R(f) = 0$. Details explained in the text.
Figure 2: Feature space constructed by the Fisher score from samples with two distinct clusters. The x- and y-axis correspond to a nuisance and an important dimension, respectively. When the Euclidean metric is used as in K-Means, it is difficult to recover the two "lines" as clusters.
In the determination of $p(x|\theta)$, we face the following dilemma: for capturing important dimensions (i.e. the tangent space of $M$), the number of parameters $d$ should be sufficiently
larger than $c$. But a large $d$ leads to a lot of nuisance dimensions, which are harmful for
clustering in the feature space. In typical supervised classification experiments with hidden
Markov models [5, 15, 14], the number of parameters is much larger than the number of
classes. However, in supervised scenarios, the existence of nuisance dimensions is not a serious problem, because advanced supervised classifiers such as the support vector machine
have a built-in feature selector [7]. However, in unsupervised scenarios without class labels, it is much more difficult to ignore nuisance dimensions. Fig. 2 shows what the feature
space looks like when the number of clusters is two and only one nuisance dimension is
involved. Projected on the important dimension, clusters will be concentrated around two distinct points. However, when the Euclidean distance is adopted as in K-Means, it is difficult
to recover the true clusters because the two "lines" are close to each other.
3 Clustering Algorithm for the Fisher score
In this section, we will develop a new clustering algorithm for the Fisher score. Let
$y_1, \ldots, y_n \in \{1, \ldots, k\}$ be a set of cluster labels assigned to the samples $x_1, \ldots, x_n$,
respectively. The purpose of clustering is to obtain $\{y_j\}_{j=1}^n$ only from the samples
$\{x_j\}_{j=1}^n$. As mentioned before,
in clustering with the Fisher score, it is necessary to capture important dimensions. So far,
this has been implemented as projection pursuit methods [3], which use general measures of
interestingness, e.g. non-Gaussianity. However, from the last section's analysis, we know
more than non-Gaussianity about the important dimensions of the Fisher score. Thus we will
construct a method specially tuned for the Fisher score.
Let us assume that the underlying classes are well separated, i.e. $P(y|x_j)$ is close to 0
or 1 for each sample $x_j$. When the class information is fully preserved, i.e. $R(f) = 0$,
there are $k$ bases in the space of the Fisher score, such that the samples in the $i$-th cluster are
projected close to 1 on the $i$-th basis and the others are projected close to 0. The objective
function of our clustering algorithm is designed to detect such bases:
$$\min_{\{y_j\}} \; \min_{\{w_i, b_i\}} \; \sum_{i=1}^{k} \sum_{j=1}^{n} \big( w_i^\top f(x_j) + b_i - [y_j = i] \big)^2, \qquad (3.1)$$
where $[\cdot]$ is the indicator function which is 1 if the condition holds and 0 otherwise. Notice
that the optimal result of (3.1) is invariant to any invertible linear transformation of the
Fisher score. In contrast, K-means type methods are quite sensitive to linear
transformation or data normalization [6]. When the linear transformation is poorly set,
K-means can end up with a false result which may not reflect the underlying structure.¹
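This invariance can be checked numerically. The sketch below (illustrative code of ours, not the authors') fixes one assignment $\{y_j\}$ and compares the inner minimum of (3.1) before and after an invertible linear map of the features; the column space of the design matrix is unchanged, so the optimal value is too:

```python
import numpy as np

def inner_loss(F, y, k):
    """min over {w_i, b_i} of sum_i sum_j (w_i^T f_j + b_i - [y_j == i])^2."""
    n = F.shape[0]
    X = np.hstack([F, np.ones((n, 1))])     # bias absorbed as an extra column
    total = 0.0
    for i in range(k):
        t = (y == i).astype(float)          # one-hot target for cluster i
        coef, *_ = np.linalg.lstsq(X, t, rcond=None)
        total += float(np.sum((X @ coef - t) ** 2))
    return total

rng = np.random.default_rng(1)
F = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
T = rng.normal(size=(3, 3)) + 3 * np.eye(3)  # invertible linear transformation

print(inner_loss(F, y, 2), inner_loss(F @ T, y, 2))  # equal up to numerical error
```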
The objective function (3.1) can be minimized by the following EM-like alternating procedure:

1. Initialization: Set the labels $\{y_j\}_{j=1}^n$ to initial values. Compute $f_j = f(x_j)$ and
precompute the inverse of the $d \times d$ matrix appearing in the least squares solution of
step 3 for later use.

2. Repeat 3. and 4. until the convergence of $\{y_j\}_{j=1}^n$.

3. Fix $\{y_j\}$ and minimize (3.1) with respect to $\{w_i\}$ and $\{b_i\}$. Each $(w_i, b_i)$ is
obtained as the solution of the least squares problem
$$\min_{w_i, b_i} \sum_{j=1}^{n} \big( w_i^\top f_j + b_i - [y_j = i] \big)^2,$$
which is solved analytically using the precomputed inverse.

4. Fix $\{w_i\}$, $\{b_i\}$ and minimize (3.1) with respect to $\{y_j\}$. Each $y_j$ is obtained
by solving
$$y_j = \arg\min_{y \in \{1, \ldots, k\}} \sum_{i=1}^{k} \big( w_i^\top f_j + b_i - [y = i] \big)^2.$$
The solution can be obtained by exhaustive search over the $k$ candidates.
Steps 1, 3 and 4 each take time linear in the number of samples $n$. Since
the computational cost of the algorithm is linear in $n$, it can be applied to problems with large
sample sizes. The algorithm requires $O(d^3)$ time for inverting the $d \times d$ matrix of step 1, which may
only be an obstacle for an application in an extremely high dimensional data setting.
4 Clustering Artificial Data
We will perform a clustering experiment with artificially generated data (Fig. 3). Since this
data has a complicated structure, a Gaussian mixture with 8 components is used as the
probabilistic model for the Fisher score: $q(x|\theta) = \sum_{m=1}^{8} \pi_m\, \mathcal{N}(x \mid \mu_m, \Sigma_m)$, where
$\mathcal{N}(x \mid \mu, \Sigma)$ denotes the Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. The parameters
are learned with the EM algorithm, and the marginal distribution is accurately estimated,
as shown in Fig. 3 (upper left). We applied the proposed algorithm and K-Means to the
Fisher score calculated by taking derivatives with respect to $\theta$. In order to have an initial
partition, we first divided the points into 8 subclusters by the posterior probability of each
Gaussian. In K-Means and our approach defined in Sec. 3, initial clusters are constructed
by randomly combining these subclusters. For each method, we chose the best result which
achieved the minimum loss among the local minima obtained from 100 clustering experiments. As a result, the proposed method obtained clearly separated clusters (Fig. 3, upper
right), but K-Means failed to recover the "correct" clusters, which we attribute to the effect
of nuisance dimensions (Fig. 3, lower left). When the Fisher score is whitened (i.e. linearly
transformed to have mean 0 and unit covariance matrix), the result of K-Means changed
to Fig. 3 (lower right), but the solution of our method stayed the same, as discussed in Sec. 3.
to Fig. 3 (lowerright) but the solution of our method stayed the same as discussed in Sec. 3.
Of course, this kind of problem can be solved by many state-of-the-art methods (e.g. [9, 8])
Y
1
When the covariance matrix of each cluster is allowed to be different in K-Means, it becomes
invariant to normalization. However this method in turn causes singularities, where a cluster shrinks
to the delta distribution, and difficult to learn in high dimensional spaces.
Figure 3: (Upperleft) Toy
dataset used for clustering. Contours show the
estimated density with the
mixture of 8 Gaussians.
(Upperright) Clustering result of the proposed algorithm. (Lowerleft) Result
of K-Means with the Fisher
score. (Lowerright) Result of K-Means with the
whitened Fisher score.
because it is only two dimensional. However, these methods typically do not scale to high
dimensional or discrete problems. Standard mixture modeling methods have difficulties in
modeling such complicated cluster shapes [9, 10]. One straightforward way is to model
each cluster as a Gaussian mixture: $p(x|y=i) = \sum_{m} \pi_{im}\, \mathcal{N}(x \mid \mu_{im}, \Sigma_{im})$. However,
special care needs to be taken for such a "mixture of mixtures" problem. When the parameters of all clusters are jointly optimized in a maximum likelihood process, the
solution is not unique. In order to have meaningful results, e.g. in our dataset, one has to
constrain the parameters such that the 8 Gaussians form 2 groups. In the Bayesian framework,
this can be done by specifying appropriate prior distributions on parameters, which can
become rather involved. Roberts et al. [10] tackled this problem by means of the minimum
entropy principle using MCMC, which is somewhat more complicated than our approach.
APJC D N A D
5 Clustering Amino Acid Sequences
In this section, we will apply our method to cluster bacterial gyrB amino acid sequences,
where the hidden markov model (HMM) is used to derive the Fisher score. gyrB - gyrase
subunit B - is a DNA topoisomerase (type II) which plays essential roles in fundamental
mechanisms of living organisms such as DNA replication, transcription, recombination and
repair, etc. One more important feature of gyrB is its capability of serving as an evolutionary and
taxonomic marker, an alternative to the popular 16S rRNA [17]. Our data set consists of 55 amino
acid sequences containing three clusters (9,32,14). The three clusters correspond to three
genera of high GC-content gram-positive bacteria, that is, Corynebacteria, Mycobacteria,
Rhodococcus, respectively. Each sequence is represented as a sequence of 20 characters,
each of which represents an amino acid. The length of each sequence varies from 408
to 442, which makes it difficult to convert a sequence into a vector of fixed dimensionality.
In order to evaluate the partitions, we use the Adjusted Rand Index (ARI) [4, 18]. Let
$U = \{u_1, \ldots, u_k\}$ be the obtained clusters and $V = \{v_1, \ldots, v_l\}$ be the ground truth clusters.
Let $n_{ij}$ be the number of samples which belong to both $u_i$ and $v_j$. Also let $a_i$ and $b_j$ be
the number of samples in $u_i$ and $v_j$, respectively. The ARI is defined as
$$\mathrm{ARI} = \frac{\sum_{ij} \binom{n_{ij}}{2} \;-\; \Big[ \sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2} \Big] \Big/ \binom{n}{2}}{\frac{1}{2}\Big[ \sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2} \Big] \;-\; \Big[ \sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2} \Big] \Big/ \binom{n}{2}},$$
where $n$ is the total number of samples.
The attractive point of the ARI is that it can measure the difference of two partitions even when
the number of clusters is different. When the two partitions are exactly the same, the ARI is 1,
and the expected value of the ARI over random partitions is 0 (see [4] for details).

Figure 4: Adjusted Rand indices of K-Means and the proposed method in a sequence classification experiment (y-axis: ARI; x-axis: number of HMM states, 2 to 5).
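The ARI formula above can be implemented in a few lines (an illustrative sketch of ours, computing the contingency counts $n_{ij}$, $a_i$, $b_j$ directly):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """ARI computed directly from the contingency counts n_ij, a_i, b_j."""
    n = len(labels_a)
    nij = Counter(zip(labels_a, labels_b))
    a = Counter(labels_a)
    b = Counter(labels_b)
    sum_nij = sum(comb(c, 2) for c in nij.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_nij - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # -0.5: worse than chance
```

Note that only the grouping matters, not the label names, which is why the first call returns 1.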
In order to derive the Fisher score, we trained complete-connection HMMs via the Baum-Welch algorithm, where
the number of states $s$ is changed from 2 to 5, and each state
emits one of 20 characters. This HMM has $s$ initial state probabilities, $s$ terminal
state probabilities, $s^2$ transition probabilities and $20s$ emission probabilities. Thus when $s = 3$,
for example, an HMM has 75 parameters in total, which is much larger than the
number of potential classes (i.e. 3). The derivative is taken with respect to all parameters as
described in detail in [15]. Notice that we did not perform any normalization of the Fisher
score vectors. In order to avoid local minima, we tried 1000 different initial values and
chose the one which achieved the minimum loss, both in K-Means and in our method. In K-Means, initial centers are sampled from the uniform distribution in the smallest hypercube
which contains all samples. In the proposed method, every $w_i$ is sampled from the normal
distribution with mean 0 and standard deviation 0.001. Every $b_i$ is initially set to zero.
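The parameter count can be checked with a one-liner (illustrative; it just reproduces the arithmetic in the paragraph above):

```python
def hmm_param_count(n_states, n_symbols=20):
    """Initial + terminal state probabilities, transitions, and emissions."""
    return n_states + n_states + n_states ** 2 + n_symbols * n_states

for s in range(2, 6):
    print(s, hmm_param_count(s))  # s = 3 gives 75, as stated in the text
```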
Fig. 4 shows the ARIs of the two methods against the number of HMM states. Our method
shows the highest ARI (0.754) when the number of HMM states is 3, which shows that
important dimensions are successfully discovered from the "sea" of nuisance dimensions.
In contrast, the ARI of K-Means decreases monotonically as the number of HMM states
increases, which shows that K-Means is not robust against nuisance dimensions. But when
there are too many nuisance dimensions (i.e. for the largest numbers of HMM states), our method was caught in
false clusters which happened to appear in nuisance dimensions. This result suggests that
prior dimensionality reduction may be effective (cf. [11]), but it is beyond the scope of this
paper.
6 Concluding Remarks
In this paper, we illustrated how the class information is encoded in the Fisher score: most
information is packed in a few dimensions and there are a lot of nuisance dimensions. Advanced supervised classifiers such as the support vector machine have a built-in feature
selector [7], so they can detect important dimensions automatically. However in unsupervised learning, it is not easy to detect important dimensions because of the lack of class
labels. We proposed a novel, very simple clustering algorithm that can ignore nuisance
dimensions and tested it in artificial and real data experiments. An interesting aspect of
our gyrB experiment is that the ideal scenario assumed in the theory section is not fulfilled
anymore as clusters might overlap. Nevertheless our algorithm is robust in this respect and
achieves highly promising results.
The Fisher score derives features using the prior knowledge of the marginal distribution.
In general, it is impossible to infer anything about the conditional distribution $P(y|x)$ from
the marginal $p(x)$ [12] without any further assumptions. However, when one knows the
directions in which the marginal distribution can move (i.e. the model of the marginal distribution),
it is possible to extract information about $P(y|x)$, even though it may be corrupted by many
nuisance dimensions. Our method is straightforwardly applicable to the objects to which
the Fisher kernel has been applied (e.g. speech signals [13] and documents [16]).
Acknowledgement The authors gratefully acknowledge that the bacterial gyrB amino
acid sequences are offered by courtesy of Identification and Classification of Bacteria (ICB)
database team [17]. KRM thanks the DFG for partial support (grant MU 987/1-1).
References
[1] S. Amari and H. Nagaoka. Methods of Information Geometry, volume 191 of Translations of
Mathematical Monographs. American Mathematical Society, 2001.
[2] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic
Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.
[3] P.J. Huber. Projection pursuit. Annals of Statistics, 13:435–475, 1985.
[4] L. Hubert and P. Arabie. Comparing partitions. J. Classif., pages 193–218, 1985.
[5] T.S. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In
M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing
Systems 11, pages 487–493. MIT Press, 1999.
[6] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[7] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based
learning algorithms. IEEE Trans. Neural Networks, 12(2):181–201, 2001.
[8] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In T.G.
Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing
Systems 14. MIT Press, 2002.
[9] M. Rattray. A model-based distance for clustering. In Proc. IJCNN'00, 2000.
[10] S.J. Roberts, C. Holmes, and D. Denison. Minimum entropy data partitioning using reversible
jump Markov chain Monte Carlo. IEEE Trans. Patt. Anal. Mach. Intell., 23(8):909–915, 2001.
[11] V. Roth, J. Laub, J.M. Buhmann, and K.-R. Müller. Going metric: Denoising pairwise data. In
NIPS 2002, 2003. To appear.
[12] M. Seeger. Learning with labeled and unlabeled data. Technical report, Institute
for Adaptive and Neural Computation, University of Edinburgh, 2001.
http://www.dai.ed.ac.uk/homes/seeger/papers/review.ps.gz
[13] N. Smith and M. Gales. Speech recognition using SVMs. In T.G. Dietterich, S. Becker, and
Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press,
2002.
[14] S. Sonnenburg, G. Rätsch, A. Jagota, and K.-R. Müller. New methods for splice site recognition.
In ICANN'02, pages 329–336, 2002.
[15] K. Tsuda, M. Kawanabe, G. Rätsch, S. Sonnenburg, and K.-R. Müller. A new discriminative
kernel from probabilistic models. Neural Computation, 14(10):2397–2414, 2002.
[16] A. Vinokourov and M. Girolami. A probabilistic framework for the hierarchic organization and
classification of document collections. Journal of Intelligent Information Systems, 18(2/3):153–172, 2002.
[17] K. Watanabe, J.S. Nelson, S. Harayama, and H. Kasai. ICB database: the gyrB database for
identification and classification of bacteria. Nucleic Acids Res., 29:344–345, 2001.
[18] K.Y. Yeung and W.L. Ruzzo. Principal component analysis for clustering gene expression data.
Bioinformatics, 17(9):763–774, 2001.
Learning Graphical Models
with Mercer Kernels
Francis R. Bach
Division of Computer Science
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science and Statistics
University of California
Berkeley, CA 94720
[email protected]
Abstract
We present a class of algorithms for learning the structure of graphical
models from data. The algorithms are based on a measure known as
the kernel generalized variance (KGV), which essentially allows us to
treat all variables on an equal footing as Gaussians in a feature space
obtained from Mercer kernels. Thus we are able to learn hybrid graphs
involving discrete and continuous variables of arbitrary type. We explore
the computational properties of our approach, showing how to use the
kernel trick to compute the relevant statistics in linear time. We illustrate
our framework with experiments involving discrete and continuous data.
1 Introduction
Graphical models are a compact and efficient way of representing a joint probability distribution of a set of variables. In recent years, there has been a growing interest in learning
the structure of graphical models directly from data, either in the directed case [1, 2, 3, 4]
or the undirected case [5]. Current algorithms deal reasonably well with models involving discrete variables or Gaussian variables having only limited interaction with discrete
neighbors. However, applications to general hybrid graphs and to domains with general
continuous variables are few, and are generally based on discretization.
In this paper, we present a general framework that can be applied to any type of variable.
We make use of a relationship between kernel-based measures of "generalized variance"
in a feature space, and quantities such as mutual information and pairwise independence in
the input space. In particular, suppose that each variable $x_i$ in our domain is mapped into a high-dimensional feature space via a map $\Phi_i$, and consider the set of random variables $\Phi_i(x_i)$ in feature space. Suppose that we compute the mean and covariance matrix of these variables and consider a set of Gaussian variables, $\phi_i$, that have the same mean and covariance. We showed in [6] that a canonical correlation analysis of the $\phi_i$ yields a measure, known as "kernel generalized variance," that characterizes pairwise independence among the original variables $x_i$, and is closely related to the mutual information among the original variables. This link led to a new set of algorithms for independent component
analysis. In the current paper we pursue this idea in a different direction, considering the
use of the kernel generalized variance as a surrogate for the mutual information in model
selection problems. Effectively, we map data into a feature space via a set of Mercer
kernels, with different kernels for different data types, and treat all data on an equal footing
as Gaussian in feature space.
We briefly review the structure-learning problem in Section 2, and in Section 4 and Section 5 we show how classical approaches to the problem, based on MDL/BIC and conditional independence tests, can be extended to our kernel-based approach. In Section 3 we
show that by making use of the "kernel trick" we are able to compute the sample covariance matrix in feature space in linear time in the number of samples. Section 6 presents
experimental results.
2 Learning graphical models
Structure learning algorithms generally use one of two equivalent interpretations of graphical models [7]: the compact factorization of the joint probability distribution function leads
to local search algorithms while conditional independence relationships suggest methods
based on conditional independence tests.
Local search. In this approach, structure learning is explicitly cast as a model selection
problem. For directed graphical models, in the MDL/BIC
setting of [2], the likelihood is
penalized by a model selection term that is equal to $\frac{1}{2}\log N$ times the number of parameters necessary to encode the local distributions. The likelihood term can be decomposed and expressed as follows: $\hat{\ell} = \sum_i \hat{\ell}_i$, with $\hat{\ell}_i = N\,\hat{I}(x_i, x_{\pi_i}) - N\,\hat{H}(x_i)$, where $\pi_i$ is the set of parents of node $i$ in the graph to be scored and $\hat{I}(x_i, x_{\pi_i})$ is the empirical mutual information between the variable $x_i$ and the vector $x_{\pi_i}$. These mutual information terms and the number of parameters for each local conditional distribution are
easily computable in discrete models, as well as in Gaussian models. Alternatively, in a full
Bayesian framework, under assumptions about parameter independence, parameter modularity, and prior distributions (Dirichlet for discrete networks, inverse Wishart for Gaussian
networks), the log-posterior probability of a graph given the data can be decomposed in a
similar way [1, 3].
Given that our approach is based on the assumption of Gaussianity in feature space, we
could base our development on either the MDL/BIC approach or the full Bayesian approach. In this paper, we extend the MDL/BIC approach, as detailed in Section 4.
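The likelihood decomposition above is an exact identity for maximum-likelihood fits, and is easy to check numerically. The sketch below (my own illustration; the toy network, variable names, and helper function are invented, not taken from the paper) fits a small discrete directed model and verifies that $-N\sum_i \hat{H}(x_i \mid x_{\pi_i}) = N\sum_i[\hat{I}(x_i, x_{\pi_i}) - \hat{H}(x_i)]$:

```python
import numpy as np
from collections import Counter

def empirical_entropy(rows):
    # joint entropy (in nats) of the empirical distribution of the given rows
    counts = Counter(map(tuple, np.asarray(rows).T))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
N = 500
x1 = rng.integers(0, 2, N)
x2 = rng.integers(0, 2, N)
x3 = (x1 ^ x2) ^ (rng.random(N) < 0.1).astype(int)  # noisy XOR of its parents
data = np.stack([x1, x2, x3])
parents = {0: [], 1: [], 2: [0, 1]}                 # structure: x1 -> x3 <- x2

# Maximum-likelihood log-likelihood of the model: -N * sum_i H(x_i | x_{pi_i})
ll = 0.0
for i, pa in parents.items():
    cond_H = empirical_entropy(data[[i] + pa])
    if pa:
        cond_H -= empirical_entropy(data[pa])
    ll -= N * cond_H

# Decomposed form: N * sum_i [ I(x_i, x_{pi_i}) - H(x_i) ]
ll2 = 0.0
for i, pa in parents.items():
    H_i = empirical_entropy(data[[i]])
    mi = 0.0
    if pa:
        mi = H_i + empirical_entropy(data[pa]) - empirical_entropy(data[[i] + pa])
    ll2 += N * (mi - H_i)

assert abs(ll - ll2) < 1e-9   # the two expressions agree exactly
```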
Conditional independence tests. In this approach, conditional independence tests are
performed to constrain the structure of possible graphs. For undirected models, going
from the graph to the set of conditional independences is relatively easy: there is an edge
between and #" if and only if and $" are independent given all other variables [7].
In Section 5, we show how our approach could be used to perform independence tests and
learn an undirected graphical model. We also show how this approach can be used to prune
the search space for the local search of a directed model.
3 Gaussians in feature space
In this section, we introduce our Gaussianity assumption and show how to approximate the
mutual information, as required for the structure learning algorithms.
3.1 Mercer Kernels
A Mercer kernel on a space $\mathcal{X}$ is a function $k(x, y)$ from $\mathcal{X}^2$ to $\mathbb{R}$ such that for any set of points $\{x^1, \ldots, x^N\}$ in $\mathcal{X}$, the $N \times N$ matrix $K$, defined by $K_{ab} = k(x^a, x^b)$, is positive semidefinite. The matrix $K$ is usually referred to as the Gram matrix of the points $\{x^a\}$. Given a Mercer kernel $k(x, y)$, it is possible to find a space $\mathcal{F}$ and a map $\Phi$ from $\mathcal{X}$ to $\mathcal{F}$, such that $k(x, y)$ is the dot product in $\mathcal{F}$ between $\Phi(x)$ and $\Phi(y)$ (see, e.g., [8]). The space $\mathcal{F}$ is usually referred to as the feature space and the map $\Phi$ as the feature map. We will use the notation $\langle f, g \rangle$ to denote the dot product of $f$ and $g$ in feature space $\mathcal{F}$, and a corresponding notation for the representative of an element of $\mathcal{F}$ in the dual space of $\mathcal{F}$.

For a discrete variable $x$ which takes values in $\{1, \ldots, d\}$, we use the trivial kernel $k(x, y) = \delta_{xy}$, which corresponds to a feature space of dimension $d$. The feature map sends the value $i$ to the $i$-th canonical basis vector of $\mathbb{R}^d$. Note that this mapping corresponds to the usual embedding of a multinomial variable of order $d$ in the vector space $\mathbb{R}^d$.

For continuous variables, we use the Gaussian kernel $k(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2)$. The feature space has infinite dimension, but as we will show, the data only occupy a small linear manifold and this linear subspace can be determined adaptively in linear time. Note that an alternative is to use the linear kernel $k(x, y) = x^\top y$, which corresponds to simply modeling the data as Gaussian in input space.
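A quick sanity check of the Mercer condition for the two kernels above (an illustrative sketch, not code from the paper; the helper names are invented): both the trivial kernel on a discrete variable and the Gaussian kernel on a continuous variable produce symmetric positive semidefinite Gram matrices.

```python
import numpy as np

def delta_gram(x):
    # trivial kernel for a discrete variable: k(x, y) = 1 if x == y else 0
    x = np.asarray(x)
    return (x[:, None] == x[None, :]).astype(float)

def gaussian_gram(x, sigma=1.0):
    # Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 sigma^2)) for scalar inputs
    x = np.asarray(x, dtype=float)
    sq = (x[:, None] - x[None, :]) ** 2
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
Kd = delta_gram(rng.integers(0, 3, 20))
Kg = gaussian_gram(rng.normal(size=20))

# Mercer condition: both Gram matrices are symmetric positive semidefinite
for K in (Kd, Kg):
    assert np.allclose(K, K.T)
    assert np.linalg.eigvalsh(K).min() > -1e-10
```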
3.2 Notation
Let $x_1, \ldots, x_m$ be $m$ random variables with values in spaces $\mathcal{X}_1, \ldots, \mathcal{X}_m$. Let us assign a Mercer kernel $k_i$ to each of the input spaces $\mathcal{X}_i$, with feature space $\mathcal{F}_i$ and feature map $\Phi_i$. The random vector of feature images $(\Phi_1(x_1), \ldots, \Phi_m(x_m))$ has a covariance matrix $C$ defined by blocks, with block $C_{ij}$ being the covariance matrix between $\Phi_i(x_i)$ and $\Phi_j(x_j)$. Let $(\phi_1, \ldots, \phi_m)$ denote a jointly Gaussian vector with the same mean and covariance as $(\Phi_1(x_1), \ldots, \Phi_m(x_m))$. The vector $(\phi_1, \ldots, \phi_m)$ will be used as the random vector on which the learning of graphical model structure is based.

Note that the sufficient statistics for this vector are the blocks $C_{ij}$, and are inherently pairwise. No dependency involving strictly more than two variables is modeled explicitly, which makes our scoring metric easy to compute. In Section 6, we present empirical evidence that good models can be learned using only pairwise information.
3.3 Computing sample covariances using kernel trick
We are given a random sample $\{x^1, \ldots, x^N\}$ of elements of $\mathcal{X}_1 \times \cdots \times \mathcal{X}_m$. By mapping into the feature spaces, we define $m$ samples of feature-space elements $\Phi_i(x_i^k)$, $k = 1, \ldots, N$. We assume that for each $i$ the data in feature space have been centered, i.e., $\sum_{k=1}^N \Phi_i(x_i^k) = 0$. The sample covariance matrix $\hat{C}$ is then defined by blocks $\hat{C}_{ij} = \frac{1}{N} \sum_{k=1}^N \Phi_i(x_i^k)\, \Phi_j(x_j^k)^\top$. Note that a Gaussian with covariance matrix $\hat{C}$ has zero variance along directions that are orthogonal to the images of the data. Consequently, in order to compute the mutual information, we only need to compute the covariance matrix of the projection of $(\phi_1, \ldots, \phi_m)$ onto the linear span of the data, that is, for all $a, b \in \{1, \ldots, N\}$:

$\mathrm{cov}\big( \langle \phi_i, \Phi_i(x_i^a) \rangle,\ \langle \phi_j, \Phi_j(x_j^b) \rangle \big) = \frac{1}{N} \left( K_i K_j \right)_{ab}$  (1)

where $K_i$ is the centered Gram matrix of the $i$-th component, defined from the Gram matrix $\tilde{K}_i$ of the original (non-centered) points as $K_i = (I - \frac{1}{N} J)\, \tilde{K}_i\, (I - \frac{1}{N} J)$, where $J$ is the $N \times N$ matrix composed of ones [8]. From Eq. (1), we see that the sample covariance matrix of $(\phi_1, \ldots, \phi_m)$ in the "data basis" has blocks $\frac{1}{N} K_i K_j$.
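For the linear kernel the feature map is the identity, so the kernel-trick expression $\frac{1}{N} K_i K_j$ can be compared entry-by-entry with the covariance computed directly in input space. The following sketch (my own check; variable names invented) does exactly that:

```python
import numpy as np

def center_gram(G):
    # centered Gram matrix: K = (I - J/N) G (I - J/N), J the all-ones matrix
    N = G.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N
    return H @ G @ H

rng = np.random.default_rng(2)
N = 50
x, y = rng.normal(size=N), rng.normal(size=N)

# Linear kernel, so the feature map is the identity and everything is explicit.
Kx = center_gram(np.outer(x, x))
Ky = center_gram(np.outer(y, y))

# Kernel-trick block of the sample covariance in the "data basis": (1/N) Kx Ky.
block = Kx @ Ky / N

# Direct computation: entry (a, b) is the sample covariance between
# <Phi(x), Phi(x_a)> and <Phi(y), Phi(y_b)>, i.e. x_a * y_b * cov(x, y).
xc, yc = x - x.mean(), y - y.mean()
direct = np.outer(xc, yc) * (xc @ yc / N)

assert np.allclose(block, direct)
```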
3.4 Regularization
When the feature space has infinite dimension (as in the case of a Gaussian kernel on $\mathbb{R}$), the covariance we are implicitly fitting with a kernel method has an infinite number of parameters. In order to avoid overfitting and control the capacity of our models, we regularize by smoothing the Gaussian $(\phi_1, \ldots, \phi_m)$ by another Gaussian with small variance (for an alternative interpretation and further details, see [6]). Let $\kappa$ be a small constant. We add to $\phi_i$ an isotropic Gaussian with covariance $\kappa I$ in an orthonormal basis. In the data basis, the covariance of this added Gaussian is exactly the block-diagonal matrix with blocks $\kappa K_i$. Consequently, our regularized Gaussian covariance has diagonal blocks $\frac{1}{N} K_i^2 + \kappa K_i$ and off-diagonal blocks $\frac{1}{N} K_i K_j$ for $i \neq j$, which leads to a correlation matrix $\mathcal{R}$ with blocks $\mathcal{R}_{ij} = r_i K_i K_j r_j$ for $i \neq j$ and $\mathcal{R}_{ii} = I$, where $r_i = (K_i^2 + N\kappa K_i)^{-1/2}$.

These cross-correlation matrices have exact dimension $N$, but since the eigenvalues of $K_i (K_i + N\kappa I)^{-1}$ are softly thresholded to zero or one by the regularization, the effective dimension is $d_i = \operatorname{tr} K_i (K_i + N\kappa I)^{-1}$. This dimensionality will be used as the dimension of our Gaussian variables for the MDL/BIC criterion, in Section 4.
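The soft-thresholding behaviour of the effective dimension $d_i = \operatorname{tr} K_i (K_i + N\kappa I)^{-1}$ can be illustrated numerically: for a Gram matrix of exact rank $r$ whose nonzero eigenvalues are much larger than $N\kappa$, the effective dimension is close to $r$. (A sketch under the formula as reconstructed above; the helper name is invented.)

```python
import numpy as np

def effective_dimension(K, kappa):
    # d = tr[ K (K + N*kappa*I)^(-1) ]: eigenvalues of K are softly
    # thresholded toward 1 (if >> N*kappa) or 0 (if << N*kappa)
    N = K.shape[0]
    lam = np.linalg.eigvalsh(K)
    return float(np.sum(lam / (lam + N * kappa)))

# Rank-r Gram matrix with large eigenvalues: effective dimension approaches r
rng = np.random.default_rng(3)
N, r = 100, 3
A = rng.normal(size=(N, r))
K = 1e4 * (A @ A.T)          # eigenvalues: 3 very large ones, the rest zero

d = effective_dimension(K, kappa=1e-2)
assert abs(d - r) < 0.1
```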
3.5 Efficient implementation
Direct manipulation of $N \times N$ matrices would lead to algorithms that scale as $O(N^3)$. Gram matrices, however, are known to be well approximated by matrices of low rank $M$. The approximation is exact when the feature space has finite dimension (e.g., with discrete kernels), and $M$ can be chosen less than $N$. In the case of continuous data with the Gaussian kernel, we have shown that $M$ can be chosen to be upper bounded by a constant independent of $N$ [6]. Finding a low-rank decomposition can thus be done through incomplete Cholesky decomposition in linear time in $N$ (for a detailed treatment of this issue, see [6]).

Using the incomplete Cholesky decomposition, for each matrix $K_i$ we obtain the factorization $K_i = G_i G_i^\top$, where $G_i$ is an $N \times M_i$ matrix with rank $M_i$, where $M_i < N$. We perform a singular value decomposition of $G_i$ to obtain an $N \times M_i$ matrix $U_i$ with orthogonal columns (i.e., such that $U_i^\top U_i = I$), and an $M_i \times M_i$ diagonal matrix $\Lambda_i$ such that $K_i = U_i \Lambda_i U_i^\top$. Let $D_i$ be the diagonal matrix obtained from $\Lambda_i$ by applying the function $\lambda \mapsto \sqrt{\lambda / (\lambda + N\kappa)}$ to its elements. Thus $(\phi_1, \ldots, \phi_m)$ has a correlation matrix with blocks $D_i U_i^\top U_j D_j$ in the new basis defined by the columns of the matrices $U_i$, and these blocks will be used to compute the various mutual information terms.
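The low-rank structure of Gaussian-kernel Gram matrices is easy to observe. The sketch below uses an eigendecomposition rather than incomplete Cholesky (which the paper uses precisely to avoid forming the full matrix), since the point here is only that a small rank suffices; all names are invented:

```python
import numpy as np

def low_rank_gram(K, tol=1e-8):
    # rank-M factorization K ~= G G^T, keeping eigenvalues above tol * trace(K)
    # (incomplete Cholesky achieves the same in O(N M^2) without forming K)
    lam, U = np.linalg.eigh(K)
    keep = lam > tol * lam.sum()
    return U[:, keep] * np.sqrt(lam[keep])

rng = np.random.default_rng(6)
x = rng.normal(size=200)
sq = (x[:, None] - x[None, :]) ** 2
K = np.exp(-sq / 2.0)               # Gaussian-kernel Gram matrix, sigma = 1

G = low_rank_gram(K)
assert G.shape[1] < 40              # effective rank much smaller than N = 200
assert np.allclose(G @ G.T, K, atol=1e-4)
```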
3.6 KGV-mutual information
We now show how to compute the mutual information between $\phi_1, \ldots, \phi_m$, and we make a link with the mutual information of the original variables $x_1, \ldots, x_m$.
Let $y_1, \ldots, y_m$ be $m$ jointly Gaussian random vectors with covariance matrix $\Sigma$, defined in terms of blocks $\Sigma_{ij} = \mathrm{cov}(y_i, y_j)$. The mutual information between the variables $y_1, \ldots, y_m$ is equal to (see, e.g., [9]):

$I(y_1, \ldots, y_m) = -\frac{1}{2} \log \frac{|\Sigma|}{|\Sigma_{11}| \cdots |\Sigma_{mm}|}$  (2)

where $|A|$ denotes the determinant of the matrix $A$. The ratio of determinants in this expression is usually referred to as the generalized variance, and is independent of the basis which is chosen to compute $\Sigma$.

Following Eq. (2), the mutual information between $\phi_1, \ldots, \phi_m$, which depends solely on the distribution of $(x_1, \ldots, x_m)$, is equal to

$I_K(x_1, \ldots, x_m) = -\frac{1}{2} \log \frac{|\mathcal{R}|}{|\mathcal{R}_{11}| \cdots |\mathcal{R}_{mm}|}$  (3)

We refer to this quantity as the KGV-mutual information (KGV stands for kernel generalized variance). It is always nonnegative and can also be defined for partitions of the variables into subsets, by simply partitioning the correlation matrix accordingly.
The KGV has an interesting relationship to the mutual information among the original
variables, $x_1, \ldots, x_m$. In particular, as shown in [6], in the case of two discrete variables,
the KGV is equal to the mutual information up to second order, when expanding around the
manifold of distributions that factorize in the trivial graphical model (i.e. with independent
components). Moreover, in the case of continuous variables, when the width of the
Gaussian kernel tends to zero, the KGV necessarily tends to a limit, and also provides a
second-order expansion of the mutual information around independence.
This suggests that the KGV-mutual information might also provide a useful,
computationally-tractable surrogate for the mutual information more generally, and in particular substitute for mutual information terms in objective functions for model selection,
where even a rough approximation might suffice to rank models. In the remainder of the
paper, we investigate this possibility empirically.
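Eq. (2) can be checked against the familiar closed form for two correlated scalar Gaussians, $I = -\frac{1}{2}\log(1 - \rho^2)$. A small sketch (my own illustration; the function name is invented):

```python
import numpy as np

def gaussian_mi(Sigma, dims):
    # Eq. (2): I = -1/2 * log( |Sigma| / (|Sigma_11| ... |Sigma_mm|) )
    idx = np.cumsum([0] + list(dims))
    logdet_blocks = 0.0
    for a, b in zip(idx[:-1], idx[1:]):
        logdet_blocks += np.linalg.slogdet(Sigma[a:b, a:b])[1]
    return -0.5 * (np.linalg.slogdet(Sigma)[1] - logdet_blocks)

# Two scalar Gaussians with correlation rho: I = -1/2 * log(1 - rho^2)
rho = 0.8
Sigma = np.array([[1.0, rho], [rho, 1.0]])
I = gaussian_mi(Sigma, dims=[1, 1])
assert np.isclose(I, -0.5 * np.log(1 - rho ** 2))
```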
4 Structure learning using local search
In this approach, an objective function $J(G)$ measures the goodness of fit of the directed graphical model $G$, and is minimized. The MDL/BIC objective function for our Gaussian variables is easily derived. Let $\pi_i$ be the set of parents of node $i$ in $G$. We have

$J(G) = \sum_i J_i$, with $J_i = -N\,\hat{I}_K(x_i, x_{\pi_i}) + \frac{1}{2} (\log N)\, d_i\, d_{\pi_i}$  (4)

where $d_{\pi_i} = \sum_{j \in \pi_i} d_j$ and the $d_i$ are the effective dimensions defined in Section 3.4. Given the scoring metric $J$, we are faced with an NP-hard optimization problem on the space of directed acyclic graphs [10]. Because the score
decomposes as a sum of local scores, local greedy search heuristics are usually exploited.
We adopt such heuristics in our simulations, using hillclimbing. It is also possible to use
Markov-chain Monte Carlo (MCMC) techniques to sample from the posterior distribution
defined by the score within our framework; this would in principle allow
us to output several high-scoring networks.
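As a concrete illustration of greedy local search with a decomposable score, here is a minimal hillclimber over DAGs that adds edges one at a time. It uses an ordinary linear-Gaussian BIC local score as a stand-in for the KGV score of Eq. (4) (the KGV machinery is not reproduced here); the toy chain data and all function names are invented:

```python
import numpy as np

def bic_local_score(data, i, parents):
    # Gaussian BIC local score: fit node i linearly on its parents,
    # score = log-likelihood - (log N)/2 * (number of free parameters)
    N = data.shape[1]
    X = np.column_stack([data[list(parents)].T, np.ones(N)]) if parents \
        else np.ones((N, 1))
    beta, *_ = np.linalg.lstsq(X, data[i], rcond=None)
    resid = data[i] - X @ beta
    var = max(resid @ resid / N, 1e-12)
    ll = -0.5 * N * (np.log(2 * np.pi * var) + 1)
    return ll - 0.5 * np.log(N) * (X.shape[1] + 1)

def creates_cycle(parents, child, new_parent):
    # would adding new_parent -> child close a directed cycle?
    stack, seen = [new_parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return False

def hillclimb(data):
    # greedy edge additions; decomposability means each candidate move
    # only requires re-scoring the single affected child node
    m = data.shape[0]
    parents = {i: set() for i in range(m)}
    scores = {i: bic_local_score(data, i, parents[i]) for i in range(m)}
    while True:
        best = None
        for c in range(m):
            for p in range(m):
                if p == c or p in parents[c] or creates_cycle(parents, c, p):
                    continue
                gain = bic_local_score(data, c, parents[c] | {p}) - scores[c]
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, p, c)
        if best is None:
            return parents
        gain, p, c = best
        parents[c].add(p)
        scores[c] += gain

rng = np.random.default_rng(5)
N = 2000
x0 = rng.normal(size=N)
x1 = x0 + 0.5 * rng.normal(size=N)      # true chain: x0 -> x1 -> x2
x2 = x1 + 0.5 * rng.normal(size=N)
G = hillclimb(np.stack([x0, x1, x2]))
edges = {frozenset((p, c)) for c in G for p in G[c]}
assert frozenset((0, 1)) in edges and frozenset((1, 2)) in edges
```

Greedy search can only recover the structure up to Markov equivalence, which is why the check above is on the adjacencies rather than the edge directions.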
5 Conditional independence tests using KGV
In this section, we indicate how conditional independence tests can be performed using the
KGV, and show how these tests can be used to estimate Markov blankets of nodes.
Likelihood ratio criterion. In the case of marginal independence, the likelihood ratio criterion is exactly equal to a power of the mutual information (see, e.g., [11] in the case of Gaussian variables). This generalizes easily to conditional independence, where the likelihood ratio criterion to test the conditional independence of $x$ and $y$ given $z$ is equal to $N\,[\hat{I}(x, (y, z)) - \hat{I}(x, z)]$, where $N$ is the number of samples and the mutual information terms are computed using empirical distributions.

Applied to our Gaussian variables $\phi_i$, we obtain a test statistic based on a linear combination of KGV-mutual information terms: $N\,[\hat{I}_K(x, (y, z)) - \hat{I}_K(x, z)]$. Theoretical threshold values exist for conditional independence tests with Gaussian variables [7], but instead, we prefer to use the value given by the MDL/BIC criterion, i.e., $\frac{1}{2} (\log N)\, d_x d_y$ (where $d_x$ and $d_y$ are the dimensions of the Gaussians), so that the same decision regarding conditional independence is made in the two approaches (scoring metric or independence tests) [12].
Markov blankets. For Gaussian variables, it is well-known that some conditional independencies can be read out from the inverse of the joint covariance matrix [7]. More precisely, if $y_1, \ldots, y_m$ are $m$ jointly Gaussian random vectors with dimensions $d_1, \ldots, d_m$, and with covariance matrix $\Sigma$ defined in terms of blocks $\Sigma_{ij} = \mathrm{cov}(y_i, y_j)$, then $y_i$ and $y_j$ are independent given all the other variables if and only if the block $(i, j)$ of $\Sigma^{-1}$ is equal to zero.
Thus in the sample case, we can read out the edges of the undirected model directly from the inverse of the sample correlation matrix $\mathcal{R}$, using a test statistic based on the corresponding block of $\mathcal{R}^{-1}$, with the MDL/BIC threshold value $\frac{1}{2} (\log N)\, d_i d_j$. Applied to the variables $\phi_i$ and for all pairs of nodes, we can find an undirected graphical model in polynomial time, and thus a set of Markov blankets [4].
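The precision-matrix criterion can be illustrated directly in input space with ordinary Gaussians (a toy sketch of the population fact, not the paper's kernelized procedure): for a Gaussian chain $x_1 - x_2 - x_3$, the normalized $(1, 3)$ entry of the inverse covariance is zero up to sampling noise, while the entries corresponding to true edges are not.

```python
import numpy as np

# Gaussian chain x1 - x2 - x3: x1 and x3 are independent given x2,
# so the (1, 3) block of the precision matrix Sigma^{-1} is zero.
rng = np.random.default_rng(4)
n = 100_000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
x3 = 0.5 * x2 + rng.normal(size=n)
X = np.stack([x1, x2, x3])

P = np.linalg.inv(np.cov(X))
P /= np.sqrt(np.outer(np.diag(P), np.diag(P)))  # scale for comparability

assert abs(P[0, 2]) < 0.02       # x1 -- x3: no edge in the undirected model
assert abs(P[0, 1]) > 0.1        # x1 -- x2: edge
assert abs(P[1, 2]) > 0.1        # x2 -- x3: edge
```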
We may also be interested in constructing a directed model from the Markov blankets;
however, this transformation is not always possible [7]. Consequently, most approaches
use heuristics to define a directed model from a set of conditional independencies [4, 13].
Alternatively, as a pruning step in learning a directed graphical model, the Markov blanket
can be safely used by only considering directed models whose moral graph is covered by
the undirected graph.
6 Experiments
We compare the performance of three hillclimbing algorithms for directed graphical models, one using the KGV metric (with fixed values of the regularization parameter $\kappa$ and the kernel width $\sigma$), one using the MDL/BIC metric of [2] and one using the BDe metric of [1] (with a fixed equivalent prior sample size).
When the domain includes continuous variables, we used two discretization strategies; the
first one is to use K-means with a given number of clusters, the second one uses the adaptive
discretization scheme for the MDL/BIC scoring metric of [14]. Also, to parameterize the
local conditional probabilities we used mixture models (mixture of Gaussians, mixture
of softmax regressions, mixture of linear regressions), which provide enough flexibility
at reasonable cost. These models were fitted using penalized maximum likelihood, and
invoking the EM algorithm whenever necessary. The number of mixture components was
less than four and determined using the minimum description length (MDL) principle.
When the true generating network is known, we measure the performance of algorithms by
the KL divergence to the true distribution; otherwise, we report log-likelihood on held-out
test data. We use as a baseline the log-likelihood for the maximum likelihood solution to a
model with independent components and multinomial or Gaussian densities as appropriate
(i.e., for discrete and continuous variables respectively).
Toy examples. We tested all three algorithms on a very simple generative model on $m + 1$ binary nodes, where nodes $1$ through $m$ all point to node $m + 1$. For each assignment of the $m$ parents, we set the conditional probability of node $m + 1$ by sampling uniformly at random. We also studied a linear Gaussian generative model with the identical topology, with regression weights chosen uniformly at random. We generated samples from each model and report average results (over 20 replications) in Figure 1 (left), for $m$ ranging from 2 to 10. We see that on the discrete networks, the performance of all three algorithms is similar, degrading slightly as $m$ increases. On the linear networks, on the other hand, the discretization methods degrade significantly as $m$ increases. The KGV approach is the only approach of the three capable of discovering these simple dependencies in both kinds of networks.
Discrete networks. We used three networks commonly used as benchmarks1: the ALARM network (37 variables), the INSURANCE network (27 variables) and the HAILFINDER network (56 variables). We tested various numbers of samples $N$. We performed 40 replications and report average results in Figure 1 (right). We see that the performance of our metric lies between the (approximate Bayesian) BIC metric and the (full Bayesian) BDe
1 Available at http://www.cs.huji.ac.il/labs/compbio/Repository/.
[Figure 1: the two plot panels (left) are not recoverable from the extracted text; the right-hand table of KL divergences is reconstructed below.]

Network      N      BIC    BDe    KGV
ALARM        0.5    0.85   0.47   0.66
             1      0.42   0.25   0.39
             4      0.17   0.07   0.15
             16     0.04   0.02   0.06
INSURANCE    0.5    1.84   0.92   1.53
             1      0.93   0.52   0.83
             4      0.27   0.15   0.40
             16     0.05   0.04   0.19
HAILFINDER   0.5    2.98   2.29   2.99
             1      1.70   1.32   1.77
             4      0.63   0.48   0.63
             16     0.25   0.17   0.32

Figure 1: (Top left) KL divergence vs. size $m$ of discrete network: KGV (plain), BDe (dashed), MDL/BIC (dotted). (Bottom left) KL divergence vs. size of linear Gaussian network: KGV (plain), BDe with discretized data (dashed), MDL/BIC with discretized data (dotted x), MDL/BIC with adaptive discretization (dotted +). (Right) KL divergence for discrete network benchmarks.
Network      N     D  C   d-5     d-10    KGV
ABALONE      4175  1  8   10.68   10.53   11.16*
VEHICLE      846   1  18  21.92   21.12   22.71*
PIMA         768   1  8   3.18    3.14    3.30*
AUSTRALIAN   690   9  6   5.26    5.11    5.40*
BREAST       683   1  10  15.00   15.03   15.04*
BALANCE      625   1  4   1.97    2.03*   1.88
HOUSING      506   1  13  14.71*  14.25   14.16
CARS1        392   1  7   6.93*   6.58    6.85
CLEVE        296   8  6   2.66    2.57    2.68*
HEART        270   9  5   1.34    1.36*   1.32

Table 1: Performance for hybrid networks. $N$ is the number of samples, and $D$ and $C$ are the number of discrete and continuous variables, respectively. The best performance in each row is marked with an asterisk.
metric. Thus the performance of the new metric appears to be competitive with standard
metrics for discrete data, providing some assurance that even in this case pairwise sufficient statistics in feature space seem to provide a reasonable characterization of Bayesian
network structure.
Hybrid networks. It is the case of hybrid discrete/continuous networks that is our principal
interest: in this case the KGV metric can be applied directly, without discretization of the
continuous variables. We investigated performance on several hybrid datasets from the
UCI machine learning repository, dividing them into two subsets, 4/5 for training and 1/5
for testing. We also log-transformed all continuous variables that represent rates or counts.
We report average results (over 10 replications) in Table 1 for the KGV metric and for the
BDe metric; continuous variables are discretized using K-means with 5 clusters (d-5) or
10 clusters (d-10). We see that although the BDe methods perform well in some problems,
their performance overall is not as consistent as that of the KGV metric.
7 Conclusion
We have presented a general method for learning the structure of graphical models, based
on treating variables as Gaussians in a high-dimensional feature space. The method seamlessly integrates discrete and continuous variables in a unified framework, and can provide
improvements in performance when compared to approaches based on discretization of
continuous variables.
The method also has appealing computational properties; in particular, the Gaussianity assumption enables us to make only a single pass over the data in order to compute the pairwise
sufficient statistics. The Gaussianity assumption also provides a direct way to approximate Markov blankets for undirected graphical models, based on the classical link between
conditional independence and zeros in the precision matrix.
While the use of the KGV as a scoring metric is inspired by the relationship between the
KGV and the mutual information, it must be emphasized that this relationship is a local one,
based on an expansion of the mutual information around independence. While our empirical results suggest that the KGV is also an effective surrogate for the mutual information
more generally, further theoretical work is needed to provide a deeper understanding of the
KGV in models that are far from independence.
Finally, our algorithms have free parameters, in particular the regularization parameter and
the width of the Gaussian kernel for continuous variables. Although the performance is
empirically robust to the setting of these parameters, learning those parameters from data
would not only provide better and more consistent performance, but it would also provide
a principled way to learn graphical models with local structure [15].
Acknowledgments
The simulations were performed using Kevin Murphy?s Bayes Net Toolbox for MATLAB.
We would like to acknowledge support from NSF grant IIS-9988642, ONR MURI N0001400-1-0637 and a grant from Intel Corporation.
References
[1] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197-243, 1995.
[2] W. Lam and F. Bacchus. Learning Bayesian belief networks: An approach based on the MDL
principle. Computational Intelligence, 10(4):269-293, 1994.
[3] D. Geiger and D. Heckerman. Learning Gaussian networks. In Proc. UAI, 1994.
[4] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2000.
[5] S. Della Pietra, V. J. Della Pietra, and J. D. Lafferty. Inducing features of random fields. IEEE
Trans. PAMI, 19(4):380-393, 1997.
[6] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine
Learning Research, 3:1-48, 2002.
[7] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996.
[8] B. Sch?olkopf and A. J. Smola. Learning with Kernels. MIT Press, 2001.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley & Sons, 1991.
[10] D. M. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: Artificial Intelligence and Statistics 5. Springer-Verlag, 1996.
[11] T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley & Sons, 1984.
[12] R. G. Cowell. Conditions under which conditional independence and scoring methods lead to
identical selection of Bayesian network models. In Proc. UAI, 2001.
[13] D. Margaritis and S. Thrun. Bayesian network induction via local neighborhoods. In Adv. NIPS
12, 2000.
[14] N. Friedman and M. Goldszmidt. Discretizing continuous attributes while learning Bayesian
networks. In Proc. ICML, 1996.
[15] N. Friedman and M. Goldszmidt. Learning Bayesian networks with local structure. In Learning
in Graphical Models. MIT Press, 1998.
Evidence Optimization Techniques
for Estimating Stimulus-Response Functions
Maneesh Sahani
Gatsby Unit, UCL
17 Queen Sq., London, WC1N 3AR, UK.
[email protected]
Jennifer F. Linden
Keck Center, UCSF
San Francisco, CA 94143-0732, USA.
[email protected]
Abstract
An essential step in understanding the function of sensory nervous systems is to characterize as accurately as possible the stimulus-response
function (SRF) of the neurons that relay and process sensory information. One increasingly common experimental approach is to present a
rapidly varying complex stimulus to the animal while recording the responses of one or more neurons, and then to directly estimate a functional transformation of the input that accounts for the neuronal firing.
The estimation techniques usually employed, such as Wiener filtering or
other correlation-based estimation of the Wiener or Volterra kernels, are
equivalent to maximum likelihood estimation in a Gaussian-output-noise
regression model. We explore the use of Bayesian evidence-optimization
techniques to condition these estimates. We show that by learning hyperparameters that control the smoothness and sparsity of the transfer function it is possible to improve dramatically the quality of SRF estimates,
as measured by their success in predicting responses to novel input.
1 Introduction
A common experimental approach to the measurement of the stimulus-response function
(SRF) of sensory neurons, particularly in the visual and auditory modalities, is "reverse correlation" and its related non-linear extensions [1]. The neural response $y(t)$ to a continuous, rapidly varying stimulus $s(t)$ is measured and used in an attempt to reconstruct the functional mapping $s \mapsto y$. In the simplest case, the functional is taken to
be a finite impulse response (FIR) linear filter; if the input is white the filter is identified
by the spike-triggered average of the stimulus, and otherwise by the Wiener filter. Such
linear filter estimates are often called STRFs for spatio-temporal (in the visual case) or
spectro-temporal (in the auditory case) receptive fields. The general the SRF may also be
parameterized on the basis of known or guessed non-linear properties of the neurons, or
may be expanded in terms of the Volterra or Wiener integral power series. In the case
of the Wiener expansion, the integral kernels are usually estimated by measuring various
cross-moments of and .
In practice, the stimulus is often a discrete-time process $s_t$. In visual experiments, the
discretization may correspond to the frame rate of the display. In the auditory experiments
that will be considered below, it is set by the rate of the component tone pulses in a random
chord stimulus. On time-scales finer than that set by this discretization rate, the stimulus is
strongly autocorrelated. This makes estimation of the SRF at a finer time-scale extremely
non-robust. We therefore lose very little generality by discretizing the response with the
same time-step, obtaining a response histogram $y_t$.
In this discrete-time framework, the estimation of FIR Wiener-Volterra kernels (of any
order) corresponds to linear regression. To estimate the first-order kernel up to a given maximum time lag $\tau$, we construct a set of input lag-vectors $\mathbf{s}_t = (s_t, s_{t-1}, \ldots, s_{t-\tau+1})$. If a single stimulus frame, $s_t$, is itself a $d$-dimensional vector (representing, say, pixels in an image or power in different frequency bands) then the lag vectors are formed by concatenating stimulus frames together into vectors of length $d\tau$. The Wiener filter is then obtained by least-squares linear regression from the lag vectors $\mathbf{s}_t$ to the corresponding observed activities $y_t$.
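As a concrete sketch of this construction (array names, shapes, and the frame-ordering convention are illustrative choices, not taken from the original study), the lag matrix and the least-squares Wiener estimate might be computed as:

```python
import numpy as np

def make_lag_matrix(stim, n_lags):
    """Stack delayed copies of the stimulus into lag-vectors.

    stim   : (T, d) array -- one d-dimensional frame (e.g. power per
             frequency band) for each of T time bins.
    n_lags : number of time lags retained in each lag-vector.

    Returns S of shape (d * n_lags, T - n_lags + 1); column t holds the
    concatenation of frames t, t-1, ..., t-n_lags+1 (most recent first).
    """
    T, d = stim.shape
    cols = []
    for t in range(n_lags - 1, T):
        window = stim[t - n_lags + 1 : t + 1][::-1]  # reverse: newest first
        cols.append(window.reshape(-1))
    return np.array(cols).T

def wiener_filter(stim, y, n_lags):
    """Least-squares (maximum-likelihood) STRF estimate."""
    S = make_lag_matrix(stim, n_lags)
    y = np.asarray(y, float)[n_lags - 1:]       # drop incomplete lag-vectors
    w, *_ = np.linalg.lstsq(S.T, y, rcond=None)
    return w.reshape(n_lags, stim.shape[1])     # rows = lags, cols = channels
```

With a white stimulus this reduces to the spike-triggered average up to scaling; with correlated input the least-squares solve performs the decorrelation implicitly.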
Higher-order kernels can also be found by linear regression, using augmented versions
of the stimulus lag vectors. For example, the second-order kernel is obtained by regression using input vectors formed by all quadratic combinations of the elements of (or,
equivalently, by support-vector-like kernel regression using a homogeneous second-order
polynomial kernel). The present paper will be confined to a treatment of the linear case.
It should be clear, however, that the basic techniques can be extended to higher orders at
the expense of additional computational load, provided only that a sensible definition of
smoothness in these higher-order kernels is available.
The least-squares solution to a regression problem is identical to the maximum likelihood
(ML) value of the weight vector $\mathbf{w}$ for the probabilistic regression model with Gaussian output noise of constant variance $\eta^2$:

$$P(y_t \mid \mathbf{s}_t, \mathbf{w}) = \frac{1}{\sqrt{2\pi\eta^2}}\, e^{-(y_t - \mathbf{w}\cdot\mathbf{s}_t)^2/2\eta^2} \qquad (1)$$
As is common with ML learning, weight vectors obtained in this way are often overfit
to the training data, and so give poor estimates of the true underlying stimulus-response
function. This is the case even for linear models. If the stimulus is uncorrelated, the ML-estimated weight along some input dimension is proportional to the observed correlation
between that dimension of the stimulus and the output response. Noise in the output can
introduce spurious input-output correlations and thus result in erroneous weight values.
Furthermore, if the true relationship between stimulus and response is non-linear, limited
sampling of the input space may also lead to observed correlations that would have been
absent given unlimited data.
The statistics and machine learning literatures provide a number of techniques for the containment of overfitting in probabilistic models. Many of these approaches are equivalent
to the maximum a posteriori (MAP) estimation of parameters under a suitable prior distribution. Here, we investigate an approach in which these prior distributions are optimized
with reference to the data; as such, they cease to be "prior" in a strict sense, and instead become part of a hierarchical probabilistic model. A distribution on the regression parameters is first specified up to the unknown values of some hyperparameters. These hyperparameters are then adjusted so as to maximize the marginal likelihood or "evidence", that is, the probability of the data given the hyperparameters, with the parameters themselves integrated out. Finally, the estimate of the parameters is given by the MAP weight vector under the optimized "prior". Such evidence optimization schemes have previously
been used in the context of linear, kernel and Gaussian-process regression. We show that,
with realistic data volumes, such techniques provide considerably better estimates of the
stimulus-response function than do the unregularized (ML) Wiener estimates.
2 Test data and methods
A diagnostic of overfitting, and therefore divergence from the true stimulus-response relationship, is that the resultant model generalizes poorly; that is, it does not predict actual
responses to novel stimuli well. We assessed the generalization ability of parameters chosen by maximum likelihood and by various evidence optimization schemes on a set of
responses collected from the auditory cortex of rodents. As will be seen, evidence optimization yielded estimates that generalized far better than those obtained by the more
elementary ML techniques, and so provided a more accurate picture of the underlying
stimulus-response function.
A total of 205 recordings were collected extracellularly from 68 recording sites in the
thalamo-recipient layers of the left primary auditory cortex of anaesthetized rodents (6
CBA/CaJ mice and 4 Long-Evans rats) while a dynamic random chord stimulus (described
below) was presented to the right ear. Recordings often reflected the activity of a number of
neurons; single neurons were identified by Bayesian spike-sorting techniques [2, 3] whenever possible. The stimulus consisted of 20 ms tone pulses (ramped up and down with a
5 ms cosine gate) presented at random center frequencies, maximal intensities, and times,
such that pulses at more than one frequency might be played simultaneously. This stimulus resembled that used in a previous study [4], except in the variation of pulse intensity.
The times, frequencies and sound intensities of all tone pulses were chosen independently
within the discretizations of those variables (20 ms bins in time, 1/12 octave bins covering
either 2–32 or 25–100 kHz in frequency, and 5 dB SPL bins covering 25–70 dB SPL in level). At any time point, the stimulus averaged two tone pulses per octave, with an expected loudness of approximately 73 dB SPL for the 2–32 kHz stimulus and 70 dB SPL for the 25–100 kHz stimulus. Each pulse was ramped up and down with a 5 ms cosine gate. The total duration of each stimulus was 60 s. At each recording site, the 2–32 kHz stimulus was repeated for 20 trials, and the 25–100 kHz stimulus for 10 trials.
Neural responses from all 10 or 20 trials were histogrammed in 20 ms bins aligned with
stimulus pulse durations. Thus, in the regression framework, the instantaneous input vector $s_t$ comprised the sound amplitudes at each possible frequency at time $t$, and the output $y_t$ was the number of spikes per trial collected into the $t$th bin. The repetition of the same
stimulus made it possible to partition the recorded response power into a stimulus-related
(signal) component and a noise component. (For the derivation, see Sahani and Linden, "How Linear are Auditory Cortical Responses?", this volume.) Only those 92 recordings in which
the signal power was significantly greater than zero were used in this study.
Tests of generalization were performed by cross-validation. The total duration of the stimulus was divided 10 times into a training data segment (9/10 of the total) and a test data
segment (1/10), such that all 10 test segments were disjoint. Performance was assessed by
the predictive power, that is the test data variance minus average squared prediction error.
The 10 estimates of the predictive power were averaged, and normalized by the estimated
signal power to give a number less than 1. Note that the predictive power could be negative
in cases where the mean was a better description of the test data than was the model prediction. In graphs of the predictive power as a function of noise level, the estimate of the
noise power is also shown after normalization by the estimated signal power.
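In outline (function and argument names here are illustrative; the signal-power estimate itself comes from the trial-repetition analysis referenced above), the normalized predictive power for one train/test split is:

```python
import numpy as np

def normalized_predictive_power(y_test, y_pred, signal_power):
    """Test-set variance minus mean squared prediction error, scaled by
    the estimated stimulus-related (signal) power.

    The result can be negative when the test mean describes the data
    better than the model's predictions do.
    """
    test_variance = np.var(y_test)
    mse = np.mean((y_test - y_pred) ** 2)
    return (test_variance - mse) / signal_power
```

Averaging this quantity over the 10 disjoint test segments gives the cross-validated score reported in the figures.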
3 Evidence optimization for linear regression
As is common in regression problems, it is convenient to collect all the stimulus vectors
and observed responses into matrices. Thus, we describe the input by a matrix $S$, the $t$th column of which is the input lag-vector $\mathbf{s}_t$. Similarly, we collect the outputs into a row vector $\mathbf{y}$, the $t$th element of which is $y_t$. The first $\tau - 1$ time-steps are dropped to avoid incomplete lag-vectors. Then, assuming independent noise in each time bin, we combine the individual probabilities to give:

$$P(\mathbf{y} \mid S, \mathbf{w}) = (2\pi\eta^2)^{-T/2}\, e^{-(\mathbf{y} - \mathbf{w}^{\top}S)(\mathbf{y} - \mathbf{w}^{\top}S)^{\top}/2\eta^2} \qquad (2)$$

We now choose the prior distribution on $\mathbf{w}$ to be normal with zero mean (having no prior reason to favour either positive or negative weights) and covariance matrix $C$. Then the joint density of $\mathbf{y}$ and $\mathbf{w}$ is

$$P(\mathbf{y}, \mathbf{w} \mid S) = \frac{1}{Z}\, e^{-\frac{1}{2}\left[(\mathbf{y} - \mathbf{w}^{\top}S)(\mathbf{y} - \mathbf{w}^{\top}S)^{\top}/\eta^2 \;+\; \mathbf{w}^{\top}C^{-1}\mathbf{w}\right]} \qquad (3)$$

where the normalizer $Z = (2\pi)^{(T + d\tau)/2}\,\eta^{T}\,|C|^{1/2}$. Fixing $\mathbf{y}$ to be the observed values, this implies a normal posterior on $\mathbf{w}$ with variance $\Sigma_{\mathbf{w}} = \left(SS^{\top}/\eta^2 + C^{-1}\right)^{-1}$ and mean $\bar{\mathbf{w}} = \Sigma_{\mathbf{w}}\, S\, \mathbf{y}^{\top}/\eta^2$.

By integrating this normal density in $\mathbf{w}$ we obtain an expression for the evidence:

$$P(\mathbf{y} \mid S, C, \eta) = \frac{|\Sigma_{\mathbf{w}}|^{1/2}}{(2\pi)^{T/2}\,\eta^{T}\,|C|^{1/2}}\; e^{-\frac{1}{2}\left(\mathbf{y}\mathbf{y}^{\top}/\eta^2 \;-\; \bar{\mathbf{w}}^{\top}\Sigma_{\mathbf{w}}^{-1}\bar{\mathbf{w}}\right)} \qquad (4)$$
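These quantities translate directly into code. The sketch below (variable names are mine) computes the posterior moments, and evaluates the log evidence through the equivalent marginal form, in which integrating out $\mathbf{w}$ leaves $\mathbf{y}$ zero-mean Gaussian with covariance $S^{\top}CS + \eta^2 I$:

```python
import numpy as np

def posterior_moments(S, y, C, noise_var):
    """Posterior over the weights for y_t = w . s_t + noise, with prior
    w ~ N(0, C).  S is (D, T) with lag-vectors as columns; y is (T,)."""
    Sigma_w = np.linalg.inv(S @ S.T / noise_var + np.linalg.inv(C))
    w_bar = Sigma_w @ S @ y / noise_var
    return w_bar, Sigma_w

def log_evidence(S, y, C, noise_var):
    """log P(y | S, C, noise_var).  Marginally, y is zero-mean Gaussian
    with covariance K = S'CS + noise_var * I."""
    T = y.size
    K = S.T @ C @ S + noise_var * np.eye(T)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (T * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(K, y))
```

The marginal form and the posterior form (4) are algebraically identical, which provides a useful internal consistency check on an implementation.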
We seek to optimize this evidence with respect to the hyperparameters in $C$, and the noise variance $\eta^2$. To do this we need the respective gradients. If the covariance matrix contains a parameter $\theta$, then the derivative of the log-evidence with respect to $\theta$ is given by

$$\frac{\partial}{\partial\theta}\log P(\mathbf{y} \mid S, C, \eta) = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(C^{-1}\!\left(\Sigma_{\mathbf{w}} + \bar{\mathbf{w}}\bar{\mathbf{w}}^{\top}\right)C^{-1} - C^{-1}\right)\frac{\partial C}{\partial\theta}\right] \qquad (5)$$

while the gradient in the noise variance is

$$\frac{\partial}{\partial\eta^{2}}\log P(\mathbf{y} \mid S, C, \eta) = \frac{1}{2\eta^{4}}\,(\mathbf{y} - \bar{\mathbf{w}}^{\top}S)(\mathbf{y} - \bar{\mathbf{w}}^{\top}S)^{\top} \;-\; \frac{1}{2\eta^{2}}\left(T - d\tau + \mathrm{Tr}\!\left[\Sigma_{\mathbf{w}}C^{-1}\right]\right) \qquad (6)$$

where $T$ is the number of training data points.
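A finite-difference comparison is a cheap way to validate gradients of this kind. The sketch below (restating a marginal log-evidence function so the snippet stands alone; names are mine) implements the noise-variance gradient in the form given by (6) and checks it numerically:

```python
import numpy as np

def log_evidence(S, y, C, noise_var):
    T = y.size
    K = S.T @ C @ S + noise_var * np.eye(T)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (T * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(K, y))

def noise_gradient(S, y, C, noise_var):
    """Analytic d(log evidence)/d(noise variance), as in equation (6).
    Here D plays the role of the weight-vector length (d * tau)."""
    D, T = S.shape
    Sigma_w = np.linalg.inv(S @ S.T / noise_var + np.linalg.inv(C))
    w_bar = Sigma_w @ S @ y / noise_var
    resid = y - S.T @ w_bar
    return (resid @ resid / noise_var**2
            - (T - D + np.trace(Sigma_w @ np.linalg.inv(C))) / noise_var) / 2
```

Agreement with a central difference of the log evidence confirms both the formula and its implementation.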
4 Automatic relevance determination (ARD)
The most common evidence optimization scheme for regression is known as automatic
relevance determination (ARD). Originally proposed by MacKay and Neal, it has been
used extensively in the literature, notably by MacKay[5] and, in a recent application to
kernel regression, by Tipping [6]. The prior covariance on the weights is taken to be of the form $C = \mathrm{diag}(\boldsymbol{\alpha})^{-1}$. That is, the weights are taken to be independent, $w_i \sim \mathcal{N}(0, \alpha_i^{-1})$, with potentially different prior precisions $\alpha_i$. Substitution into (5) yields

$$\frac{\partial}{\partial\alpha_i}\log P(\mathbf{y} \mid S, C, \eta) = \frac{1}{2}\left(\frac{1}{\alpha_i} - \left(\Sigma_{\mathbf{w}}\right)_{ii} - \bar{w}_i^{2}\right) \qquad (7)$$

Previous authors have noted that, in comparison to simple gradient methods, iteration of fixed point equations derived from this and from (6) converge more rapidly:

$$\alpha_i^{\mathrm{new}} = \frac{1 - \alpha_i \left(\Sigma_{\mathbf{w}}\right)_{ii}}{\bar{w}_i^{2}} \qquad (8)$$

and

$$\eta^{2}_{\mathrm{new}} = \frac{(\mathbf{y} - \bar{\mathbf{w}}^{\top}S)(\mathbf{y} - \bar{\mathbf{w}}^{\top}S)^{\top}}{T - \sum_i \left(1 - \alpha_i \left(\Sigma_{\mathbf{w}}\right)_{ii}\right)} \qquad (9)$$
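A minimal sketch of the resulting iteration follows; the initialization, pruning threshold, and iteration count are illustrative choices, not values taken from the study:

```python
import numpy as np

def ard(S, y, n_iter=100, alpha_max=1e6):
    """Evidence optimization with an independent (ARD) prior.

    S : (D, T) matrix of input lag-vectors; y : (T,) response vector.
    Iterates the fixed-point updates (8) and (9); weights whose precision
    alpha_i diverges past alpha_max are pruned (set exactly to zero).
    """
    D, T = S.shape
    alpha = np.ones(D)                     # prior precisions
    noise_var = max(np.var(y), 1e-12)      # crude initialization

    def posterior(keep):
        Sk = S[keep]
        Sigma = np.linalg.inv(Sk @ Sk.T / noise_var + np.diag(alpha[keep]))
        w_bar = Sigma @ Sk @ y / noise_var
        return Sk, Sigma, w_bar

    for _ in range(n_iter):
        keep = alpha < alpha_max
        Sk, Sigma, w_bar = posterior(keep)
        gamma = 1.0 - alpha[keep] * np.diag(Sigma)     # well-determinedness
        alpha[keep] = gamma / (w_bar ** 2 + 1e-12)     # update (8)
        resid = y - Sk.T @ w_bar
        noise_var = (resid @ resid) / max(T - gamma.sum(), 1e-12)  # update (9)
        alpha[alpha >= alpha_max] = np.inf             # prune for good

    keep = alpha < alpha_max
    _, _, w_bar = posterior(keep)
    w = np.zeros(D)
    w[keep] = w_bar
    return w
```

On a sparse problem most precisions diverge, leaving the isolated non-zero weights described in the text.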
[Figure 1: Comparison of various STRF estimates for the same recording. Four panels (ML, ARD, ASD, ASD/RD) show the estimated weights as time-frequency images, with time lag (−240 to 0 ms) on the horizontal axis and frequency (25–100 kHz) on the vertical axis.]
A pronounced general feature of the maxima discovered by this approach is that many of
the optimal precisions are infinite (that is, the variances are zero). Since the prior distribution is centered on zero, this forces the corresponding weight to vanish. In practice, as
the iterated value of a precision crosses some pre-determined threshold, the corresponding
input dimension is eliminated from the regression problem. The results of evidence optimization suggest that such inputs are irrelevant to predicting the output; hence the name
given to this technique. The resulting MAP estimates obtained under the optimized ARD
prior thus tend to be sparse, with only a small number of non-zero weights often appearing
as isolated spots in the STRF.
The estimated STRFs for one example recording using ML and ARD are shown in the
two left-most panels of figure 1 (the other panels show smoothed estimates which will be
described below), with the estimated weight vectors rearranged into time-frequency matrices. The sparsity of the ARD solution is evident in the reduction of apparent estimation
noise at higher frequencies and longer time lags. This reduction improves the ability of
the estimated model to predict novel data by more than a factor of 2 in this case. Assessed
by cross-validation, as described above, the ARD estimate accurately predicted 26% of the
signal power in test data, whereas the ML estimate (or Wiener kernel) predicted only 12%.
[Figure 2: Comparison of ARD and ML predictions. Left: normalized ARD predictive power against normalized ML predictive power. Center: normalized prediction difference (ARD − ML) against normalized noise power. Right: histogram of the prediction difference over the number of recordings.]
This improvement in predictive quality was evident in every one of the 92 recordings with
significant signal power, indicating that the optimized prior does improve estimation accuracy. The left-most panel of figure 2 compares the normalized cross-validation predictive
power of the two STRF estimates. The other two panels show the difference in predictive
powers as a function of noise (in the center) and as a histogram (right). The advantage of the
evidence-optimization approach is clearly most pronounced at higher noise levels.
5 Automatic smoothness determination (ASD)
In many regression problems, such as those for which ARD was developed, the different
input dimensions are often unrelated; indeed they may be measured in different units. In
such contexts, an independent prior on the weights, as in ARD, is reasonable. By contrast,
the weights of an STRF are dimensionally and semantically similar. Furthermore, we might
expect weights that are nearby in either time or frequency (or space) to be similar in value;
that is, the STRF is likely to be smooth on the scale at which we model it.
Here we introduce a new evidence optimization scheme, in which the prior covariance
matrix is used to favour smoothing of the STRF weights. The appropriate scale (along
either the time or the frequency/space axis) cannot be known a priori. Instead, we introduce hyperparameters $\delta_f$ and $\delta_t$ that set the scale of smoothness in the spectral (or spatial) and temporal dimensions respectively, and then, for each recording, optimize the evidence to
determine their appropriate values.
The new parameterized covariance matrix, $C(\delta_f, \delta_t, \rho)$, depends on two matrices, $\Delta^{f}$ and $\Delta^{t}$. The $(i,j)$th element of each of these gives the squared distance between the weights $w_i$ and $w_j$ in terms of center frequency (or space) and time respectively. We take

$$C(\delta_f, \delta_t, \rho) = \exp\!\left(-\rho \;-\; \frac{\Delta^{f}}{2\delta_f^{2}} \;-\; \frac{\Delta^{t}}{2\delta_t^{2}}\right) \qquad (10)$$

where the exponent is taken element by element. In this scheme, the hyperparameters $\delta_f$ and $\delta_t$ set the correlation distances for the weights along the spectral (spatial) and temporal dimensions, while the additional hyperparameter $\rho$ sets their overall scale.
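The covariance of (10) is straightforward to build for an STRF laid out on a frequency-by-lag grid; in this sketch the grid construction and variable names are mine:

```python
import numpy as np

def asd_covariance(freqs, lags, rho, delta_f, delta_t):
    """Equation (10): C_ij = exp(-rho - Df_ij/(2 df^2) - Dt_ij/(2 dt^2)),
    where Df and Dt hold squared frequency and time distances between
    weights i and j.  Weights are ordered as the flattened (freq, lag) grid."""
    f_grid, t_grid = np.meshgrid(freqs, lags, indexing="ij")
    f = f_grid.reshape(-1)
    t = t_grid.reshape(-1)
    Df = (f[:, None] - f[None, :]) ** 2
    Dt = (t[:, None] - t[None, :]) ** 2
    return np.exp(-rho - Df / (2 * delta_f**2) - Dt / (2 * delta_t**2))
```

The result is a symmetric positive semi-definite matrix whose diagonal is $e^{-\rho}$, with off-diagonal correlations falling off as a Gaussian in each dimension.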
Substitution of (10) into the general hyperparameter derivative expression (5) gives

$$\frac{\partial}{\partial\delta_f}\log P(\mathbf{y} \mid S, C, \eta) = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(C^{-1}\!\left(\Sigma_{\mathbf{w}} + \bar{\mathbf{w}}\bar{\mathbf{w}}^{\top}\right)C^{-1} - C^{-1}\right)\left(C \circ \frac{\Delta^{f}}{\delta_f^{3}}\right)\right] \qquad (11)$$

and

$$\frac{\partial}{\partial\delta_t}\log P(\mathbf{y} \mid S, C, \eta) = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(C^{-1}\!\left(\Sigma_{\mathbf{w}} + \bar{\mathbf{w}}\bar{\mathbf{w}}^{\top}\right)C^{-1} - C^{-1}\right)\left(C \circ \frac{\Delta^{t}}{\delta_t^{3}}\right)\right] \qquad (12)$$

(where $\circ$ denotes the Hadamard or Schur product; i.e., the matrices are multiplied element by element), along with a similar expression for $\partial \log P/\partial\rho$. In this case, optimization is performed by simple gradient methods.
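As with the noise gradient, a finite-difference check guards the implementation. The sketch below (self-contained, with my own variable names; the gradient follows the form given in (11) with $\partial C/\partial\delta_f = C \circ \Delta^{f}/\delta_f^{3}$) verifies the spectral-smoothness gradient on a small one-dimensional grid:

```python
import numpy as np

def log_evidence(S, y, C, noise_var):
    T = y.size
    K = S.T @ C @ S + noise_var * np.eye(T)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (T * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(K, y))

def asd_delta_f_gradient(S, y, Df, Dt, rho, delta_f, delta_t, noise_var):
    """Equation (11): d(log evidence)/d(delta_f), using the element-wise
    derivative dC/d(delta_f) = C o (Df / delta_f^3)."""
    C = np.exp(-rho - Df / (2 * delta_f**2) - Dt / (2 * delta_t**2))
    Ci = np.linalg.inv(C)
    Sigma_w = np.linalg.inv(S @ S.T / noise_var + Ci)
    w_bar = Sigma_w @ S @ y / noise_var
    inner = Ci @ (Sigma_w + np.outer(w_bar, w_bar)) @ Ci - Ci
    return 0.5 * np.trace(inner @ (C * Df / delta_f**3))
```

Small grids with moderate correlation lengths keep $C$ well conditioned; for realistic STRF sizes a more numerically careful formulation would be advisable.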
The third panel of figure 1 shows the ASD-optimized MAP estimate of the STRF for the
same example recording discussed previously. Optimization yielded smoothing width estimates of 0.96 (20 ms) bins in time and 2.57 (1/12 octave) bins in frequency; the effect of
this smoothing of the STRF estimate is evident. ASD further improved the ability of the
linear kernel to predict test data, accounting for 27.5% of the signal power in this example.
In the population of 92 recordings (figure 3, upper panels) MAP estimates based on the
ASD-optimized prior again outperformed ML (Wiener kernel) estimates substantially on
every single recording considered, particularly on those with poorer signal-to-noise ratios.
They also tended to predict more accurately than the ARD-based estimates (figure 3, lower
panels). The improvement over ARD was not quite so pronounced (although it was frequently greater than in the example of figure 1).
6 ARD in an ASD-defined basis
The two evidence optimization frameworks presented above appear inconsistent. ARD
yields a sparse, independent prior, and often leads to isolated non-zero weights in the estimated STRF. By contrast, ASD is explicitly designed to recover smooth STRF estimates.
[Figure 3: Comparison of ASD predictions to ML and ARD. Upper panels: normalized ASD predictive power against normalized ML predictive power, and the normalized predictive power difference (ML − ASD) against normalized noise power and as a histogram over the number of recordings. Lower panels: the same comparisons against ARD, with the difference plotted as (ARD − ASD) against normalized ARD predictive power, normalized noise power, and number of recordings.]
Nonetheless, both frameworks appear to improve the ability of estimated models to generalize to novel data. We are thus led to consider ways in which features of both methods
may be combined.
By decomposing the prior covariance of (3) as $C = R^{\top}R$, it is possible to rewrite the joint density

$$P(\mathbf{y}, \mathbf{w} \mid S) = \frac{1}{Z}\, e^{-\frac{1}{2}\left[(\mathbf{y} - (R^{-\top}\mathbf{w})^{\top} RS)(\mathbf{y} - (R^{-\top}\mathbf{w})^{\top} RS)^{\top}/\eta^2 \;+\; (R^{-\top}\mathbf{w})^{\top}(R^{-\top}\mathbf{w})\right]} \qquad (13)$$

Making the substitutions $\mathbf{u} = R^{-\top}\mathbf{w}$ and $\tilde{S} = RS$, this expression may be recognized as the joint density for a transformed regression problem with unit prior covariance (the normalizing constant, not shown, is appropriately transformed by the Jacobian associated with the change in variables). If now we introduce and optimize a diagonal prior covariance of the ARD form in this transformed problem, we are indirectly optimizing a covariance matrix of the form $C = R^{\top}\mathrm{diag}(\boldsymbol{\alpha})^{-1}R$ in the original basis. Intuitively, the sparseness driven by ARD is applied to basis vectors drawn from the rows of the transformation matrix $R$, rather than to individual weights. If this basis reflects the smoothness prior obtained from ASD then the resulting prior will combine the smoothness and sparseness of the two approaches.
We choose $R$ to be the (positive branch) matrix square root of the optimal prior matrix $C$ (see (10)) obtained from ASD. If the eigenvector decomposition of $C$ is $V \Lambda V^{\top}$, then $R = V \Lambda^{1/2} V^{\top}$, where the diagonal elements of $\Lambda^{1/2}$ are the positive square roots of the eigenvalues of $C$. The components of $R$, defined in this way, are Gaussian basis vectors slightly narrower than those in $C$ (this is easily seen by noting that the eigenvalue spectrum for the Toeplitz matrix $C$ is given by the Fourier transform, and that the square-root of the Gaussian function in the Fourier space is a Gaussian of larger width, corresponding to a smaller width in the original space). Thus, weight vectors obtained through ARD
in this basis will be formed by a superposition of Gaussian components, each of which individually matches the ASD prior on its covariance.

[Figure 4: Comparison of ARD in the ASD basis and simple ASD. Left: normalized ASD/RD predictive power against normalized ASD predictive power. Center: normalized prediction difference (ASD/RD − ASD) against normalized noise power. Right: histogram over the number of recordings.]
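The basis construction itself is a one-liner given an eigendecomposition; this sketch (using `eigh`, one reasonable way to take the positive square root of a symmetric matrix) is illustrative rather than the authors' code:

```python
import numpy as np

def matrix_sqrt_basis(C):
    """Positive-branch square root R of a symmetric PSD covariance C,
    so that R @ R = C (R is symmetric, hence also R.T @ R = C).
    Rows of R are the basis vectors to which ARD sparsity is applied."""
    evals, evecs = np.linalg.eigh(C)
    evals = np.clip(evals, 0.0, None)   # guard against tiny negative eigenvalues
    return (evecs * np.sqrt(evals)) @ evecs.T
```

Regression is then carried out on $\tilde{S} = RS$ with a unit-covariance prior, and the resulting MAP weights are mapped back through $R^{\top}$.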
The results of this procedure (labelled ASD/RD) on our example recording are shown in
the rightmost panel of figure 1. The combined prior shows a similar degree of smoothing
to the ASD-optimized prior alone; in addition, like the ARD prior, it suppresses the apparent background estimation noise at higher frequencies and longer time lags. Predictions
made with this estimate are yet more accurate, capturing 30% of the signal power. This
improvement over estimates derived from ASD alone is borne out in the whole population
(figure 4), although the gain is smaller than in the previous cases.
7 Conclusions
We have demonstrated a succession of evidence-optimization techniques which appear to
improve the accuracy of STRF estimates from noisy data. The mean improvement in prediction of the ASD/RD method over the Wiener kernel is 40% of the stimulus-related signal
power. Considering that the best linear predictor would on average capture no more than
40% of the signal power in these data even in the absence of noise (Sahani and Linden,
"How Linear are Auditory Cortical Responses?", this volume), this is a dramatic improvement. These results apply to the case of linear models; our current work is directed toward
extensions to non-linear SRFs within an augmented linear regression framework.
References
[1] Marmarelis, P. Z & Marmarelis, V. Z. (1978) Analysis of Physiological Systems. (Plenum Press,
New York).
[2] Lewicki, M. S. (1994) Neural Comp 6, 1005–1030.
[3] Sahani, M. (1999) Ph.D. thesis (California Institute of Technology, Pasadena, CA).
[4] deCharms, R. C., Blake, D. T., & Merzenich, M. M. (1998) Science 280, 1439–1443.
[5] MacKay, D. J. C. (1994) ASHRAE Transactions 100, 1053–1062.
[6] Tipping, M. E. (2001) J Machine Learning Res 1, 211–244.
Constraint Classification for Multiclass
Classification and Ranking
Sariel Har-Peled
Dan Roth
Dav Zimak
Department of Computer Science
University of Illinois
Urbana, IL 61801
{sariel,danr,davzimak}@uiuc.edu
Abstract
The constraint classification framework captures many flavors of multiclass classification including winner-take-all multiclass classification,
multilabel classification and ranking. We present a meta-algorithm for
learning in this framework that learns via a single linear classifier in high
dimension. We discuss distribution independent as well as margin-based
generalization bounds and present empirical and theoretical evidence
showing that constraint classification benefits over existing methods of
multiclass classification.
1 Introduction
Multiclass classification is a central problem in machine learning, as applications that require a discrimination among several classes are ubiquitous. In machine learning, these include handwritten character recognition [LS97, LBD 89], part-of-speech tagging [Bri94,
EZR01], speech recognition [Jel98] and text categorization [ADW94, DKR97].
While binary classification is well understood, relatively little is known about multiclass
classification. Indeed, the most common approach to multiclass classification, the oneversus-all (OvA) approach, makes direct use of standard binary classifiers to encode and
train the output labels. The OvA scheme assumes that for each class there exists a single
(simple) separator between that class and all the other classes. Another common approach,
all-versus-all (AvA) [HT98], is a more expressive alternative which assumes the existence
of a separator between any two classes.
OvA classifiers are usually implemented using a winner-take-all (WTA) strategy that associates a real-valued function with each class in order to determine class membership.
Specifically, an example belongs to the class which assigns it the highest value (i.e.,
the "winner") among all classes. While it is known that WTA is an expressive classifier [Maa00], it has limited expressivity when trained using the OvA assumption since
OvA assumes that each class can be easily separated from the rest. In addition, little is
known about the generalization properties or convergence of the algorithms used.
This work is motivated by several successful practical approaches, such as multiclass support vector machines (SVMs) and the sparse network of winnows (SNoW) architecture that
rely on the WTA strategy over linear functions. Our aim is to improve the understanding of
such classifier systems and to develop more theoretically justifiable algorithms that realize
the full potential of WTA.
An alternative interpretation of WTA is that every example provides an ordering of the
classes (sorted in descending order by the assigned values), where the ?winner? is the first
class in this ordering. It is thus natural to specify the ordering of the classes for an example
directly, instead of implicitly through WTA.
In Section 2, we introduce constraint classification, where each example is labeled with a
set of constraints relating multiple classes. Each such constraint specifies the relative order of two classes for this example. The goal is to learn a classifier consistent with these
constraints. Learning is made possible by a simple transformation mapping each example
into a set of examples (one for each constraint) and the application of any binary classifier
on the mapped examples. In Section 3, we present a new algorithm for constraint classification that takes on the properties of the binary classification algorithm used. Therefore,
using the Perceptron algorithm, it is able to learn a consistent classifier if one exists, using the winnow algorithm it can learn attribute efficiently, and using the SVM, it provides
a simple implementation of multiclass SVM. The algorithm can be implemented with a
subtle change to the standard (via OvA) approach to training a network of linear threshold gates. In Section 4, we discuss both VC-dimension and margin-based generalization
bounds presented a companion paper[HPRZ02]. Our generalization bounds apply to WTA
classifiers over linear functions, for which VC-style bounds were not known.
In addition to multiclass classification, constraint classification generalizes multilabel classification, ranking on labels, and of course, binary classification. As a result, our algorithm
provides new insight into these problems, as well as new, powerful tools for solving them.
For example, in Section , we show that the commonly used OvA assumption can cause
learning to fail, even when a consistent classifier exists. Section 5 provides empirical evidence that the constraint classification outperforms the OvA approach.
2 Constraint Classification
Learning problems often assume that examples, (x, y), are drawn independently from a
fixed probability distribution, D, over X × Y. X ⊆ R^d is referred to as the instance space
and Y is referred to as the output space (label set).
Definition 2.1 (Learning) Given m examples, S = ((x_1, y_1), ..., (x_m, y_m)), drawn i.i.d.
from D, a hypothesis class H and an error function err : H × X × Y → {0, 1}, a learning
algorithm A(S, H) attempts to output a function h ∈ H, where h : X → Y, that minimizes
the expected error on a randomly drawn example.
Definition 2.2 (Permutations) Denote the set of full orders over {1, ..., k} as S_k, consisting of all permutations of (1, ..., k). Similarly, P_k denotes the set of all partial orders
over {1, ..., k}. A partial order, c ∈ P_k, defines a binary relation ≺_c and can be represented by the set of pairs on which ≺_c holds, c = {(i, j) : i ≺_c j}. In addition, for any set
of pairs c = {(i_1, j_1), ..., (i_m, j_m)}, we refer to c both as a set of pairs and as the partial
order produced by the transitive closure of c with respect to ≺_c. Given two partial orders
c, c' ∈ P_k, c is consistent with c' (denoted c ⊆ c') if for every (i, j) ∈ {1, ..., k}^2, i ≺_{c'} j
holds whenever i ≺_c j. If c ∈ S_k is a full order, then it can be represented by a list of k
integers where i ≺_c j if i precedes j in the list. The size of a partial order, |c|, is the number
of pairs specified in c.
Definition 2.3 (Constraint Classification) Constraint classification is a learning problem
where each example (x, y) ∈ X × P_k is labeled according to a partial order y ∈ P_k. A
constraint classifier, h : X → S_k, is consistent with example (x, y) if y is consistent with
h(x) (y ⊆ h(x)). When |y| ≤ l, we call it l-constraint classification.
[The body of Table 1 was lost to extraction. Its rows cover the problems binary, multiclass,
l-multilabel, ranking, constraint, and l-constraint, comparing for each the internal
representation (k weight vectors w_1, ..., w_k ∈ R^d), the output space, the hypothesis,
and the size of the resulting mapping to constraint classification.]
Table 1: Definitions for various learning problems (notice that the hypothesis for constraint classification is always a full order) and the size of the resultant mapping to l-constraint classification.
argmax^l is a variant of argmax that returns the l maximal indices with respect to w_i · x. argsort is
a linear sorting function (see Definition 2.6).
Definition 2.4 (Error Indicator Function) For any (x, y) ∈ X × P_k and hypothesis h :
X → S_k, the indicator function err(x, y, h) indicates an error on example (x, y): err(x, y, h) =
1 if y ⊄ h(x), and 0 otherwise.
For example, if k = 4 and example (x, y) = (x, {(2, 3), (2, 4)}), h_1(x) = (2, 3, 1, 4), and
h_2(x) = (4, 2, 3, 1), then h_1 is correct since 2 precedes 3 and 2 precedes 4 in the full order
(2, 3, 1, 4), whereas h_2 is incorrect since 4 precedes 2 in (4, 2, 3, 1).
Definition 2.5 (Error) Given an example (x, y) drawn from D, the true error
of h ∈ H, where h : X → S_k, is defined to be err_D(h) = E_{(x,y)∼D}[err(x, y, h)]. Given
S = ((x_1, y_1), ..., (x_m, y_m)), the empirical error of h ∈ H with respect to S is defined to be
err_S(h) = (1/m) Σ_{i=1..m} err(x_i, y_i, h).
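The error indicator of Definition 2.4 amounts to a subset check: a predicted full order is correct exactly when every labeled constraint pair (i, j) has i preceding j in it. A minimal sketch in Python (the function name is ours, not the paper's):

```python
def err_indicator(constraints, predicted_order):
    """1 if any constraint (i, j) is violated by the full order, else 0.

    constraints: iterable of pairs (i, j) meaning "i must precede j".
    predicted_order: a permutation of class labels, most-preferred first.
    """
    position = {label: idx for idx, label in enumerate(predicted_order)}
    for i, j in constraints:
        if position[i] >= position[j]:  # i does not precede j
            return 1
    return 0

# A small example with k = 4 and constraints {(2, 3), (2, 4)}:
y = {(2, 3), (2, 4)}
print(err_indicator(y, (2, 3, 1, 4)))  # -> 0: 2 precedes both 3 and 4
print(err_indicator(y, (4, 2, 3, 1)))  # -> 1: 4 precedes 2
```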
In this paper, we consider constraint classification problems where hypotheses are functions
from X to S_k that output a permutation of (1, ..., k).
Definition 2.6 (Linear Sorting Function) Let W = (w_1, ..., w_k) be a set of k vectors,
where w_i ∈ R^d. Given x ∈ R^d, a linear sorting classifier is a function h : R^d → S_k
computed in the following way:
h(x) = argsort_{i ∈ {1,...,k}} w_i · x,
where argsort returns a permutation of (1, ..., k) in which i precedes j if w_i · x > w_j · x. In
the case that w_i · x = w_j · x, i precedes j if i < j.
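Concretely, a linear sorting function is an argsort of the k inner products w_i · x. A sketch using numpy (names are ours), with ties broken toward the smaller index as in the definition:

```python
import numpy as np

def linear_sort(W, x):
    """Return classes 0..k-1 ordered by decreasing w_i . x (cf. Definition 2.6).

    np.argsort with kind="stable" keeps the smaller index first among
    equal scores, matching the tie-breaking rule.
    """
    scores = W @ x
    return tuple(np.argsort(-scores, kind="stable"))

W = np.array([[1.0, 0.0],   # w_0
              [0.0, 1.0],   # w_1
              [0.5, 0.5]])  # w_2
print(linear_sort(W, np.array([2.0, 1.0])))  # -> (0, 2, 1)
```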
Constraint classification can model many well-studied learning problems including multiclass classification, ranking and multilabel classification. Table 1 shows a few interesting
classification problems expressible as constraint classification. It is easy to show:
Lemma 2.7 (Problem mappings) All of the learning problems in Table 1 can be expressed
as constraint classification problems.
Consider a 4-class multiclass example, (x, 3). It is transformed into the 3-constraint example, (x, {(3, 1), (3, 2), (3, 4)}). If we find a constraint classifier that correctly labels x
according to the given constraints, i.e. w_3 · x > w_1 · x, w_3 · x > w_2 · x, and w_3 · x > w_4 · x,
then 3 = argmax_i w_i · x. If instead we are given a ranking example (x, (3, 2, 1, 4)),
it can be transformed into (x, {(3, 2), (2, 1), (1, 4)}).
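These transformations are mechanical; a sketch of the two used above (helper names are ours, classes 1-based):

```python
def multiclass_to_constraints(label, k):
    """A k-class label i becomes k-1 constraints: i must precede every other class."""
    return {(label, j) for j in range(1, k + 1) if j != label}

def ranking_to_constraints(order):
    """A full ranking becomes adjacent-pair constraints; the transitive
    closure (Definition 2.2) supplies the remaining pairs."""
    return {(order[t], order[t + 1]) for t in range(len(order) - 1)}

print(sorted(multiclass_to_constraints(3, 4)))       # -> [(3, 1), (3, 2), (3, 4)]
print(sorted(ranking_to_constraints((3, 2, 1, 4))))  # -> [(1, 4), (2, 1), (3, 2)]
```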
3 Learning
In this section, k-class constraint classification is transformed into binary classification in
R^{kd}, a higher dimension. Each example (x, y) ∈ R^d × P_k becomes a set of examples in
R^{kd} × {−1, 1}, with each constraint (i, j) ∈ y contributing a single "positive" and a single
"negative" example. Then, a separating hyperplane for the expanded example set (in R^{kd}) can be
viewed as a linear sorting function over k linear functions, each in d-dimensional space.
3.1 Kesler's Construction
Kesler's construction for multiclass classification was first introduced by Nilsson in
1965 [Nil65, pp. 75–77] and can also be found more recently in [DH73]. This subsection extends
the Kesler construction for constraint classification.
Definition 3.1 (Chunk) A vector v = (v_1, ..., v_{kd}) ∈ R^{kd} is broken
into k chunks, (v^1, ..., v^k), where the i-th chunk, v^i = (v_{(i−1)d+1}, ..., v_{id}) ∈ R^d.
Definition 3.2 (Expansion) Let x ∈ R^d be a vector embedded in kd dimensions
by writing the coordinates of x in the i-th chunk of a vector in R^{kd}. Denote by 0^l the
zero vector of length l. Then the embedding, denoted x^i ∈ R^{kd}, can be written as
the concatenation of three vectors: x^i = (0^{(i−1)d}, x, 0^{(k−i)d}). Finally,
x^{ij} = x^i − x^j ∈ R^{kd}
is the embedding of x in the i-th chunk and −x in the j-th chunk of a vector in R^{kd}.
Definition 3.3 (Expanded Example Sets) Given an example (x, y), where x ∈ R^d and
y ∈ P_k, we define the expansion of (x, y) into a set of positive examples as follows,
P(x, y) = {(x^{ij}, 1) : (i, j) ∈ y} ⊆ R^{kd} × {1}.
A set of negative examples is defined as the reflection of each expanded example through
the origin, specifically
N(x, y) = {(−x^{ij}, −1) : (i, j) ∈ y} ⊆ R^{kd} × {−1},
and the set of both positive and negative examples is denoted by E(x, y) =
P(x, y) ∪ N(x, y). The expansion of a set of examples, S, is defined as the union of all of the
expanded examples in the set,
E(S) = ∪_{(x,y)∈S} E(x, y) ⊆ R^{kd} × {−1, 1}.
Note that the original Kesler construction produces only the positive set P(S). We also create N(S) to simplify
the analysis and to maintain consistency when learning non-linear functions (such as with SVM).
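The construction of Definitions 3.1–3.3 is easy to write down directly; a numpy sketch (names are ours, chunks 0-based):

```python
import numpy as np

def embed(x, i, k):
    """x^i: x written into the i-th of k chunks of a kd-dimensional zero vector."""
    d = len(x)
    v = np.zeros(k * d)
    v[i * d:(i + 1) * d] = x
    return v

def expand(x, constraints, k):
    """E(x, y): one positive and one reflected negative example per constraint."""
    out = []
    for i, j in constraints:
        xij = embed(x, i, k) - embed(x, j, k)  # x in chunk i, -x in chunk j
        out.append((xij, +1))
        out.append((-xij, -1))
    return out
```

A weight vector w ∈ R^{kd} then classifies x^{ij} as positive exactly when w^i · x > w^j · x, which is what makes a separating hyperplane for the expanded set a linear sorting function.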
3.2 Algorithm
Figure 1(a) shows a meta-learning algorithm for constraint classification that finds a linear
sorting function by using any algorithm for learning a binary classifier. Given a set of
examples S ⊆ R^d × P_k, the algorithm simply finds a separating hyperplane h(v) = w · v,
w ∈ R^{kd}, for E(S) ⊆ R^{kd} × {−1, 1}. Suppose h correctly classifies (x^{ij}, 1) ∈ E(S);
then w · x^{ij} = w^i · x − w^j · x > 0, and the constraint (i, j) on x (dictating that
w^i · x > w^j · x) is consistent with h. Therefore, if h correctly classifies all of E(S), then
h'(x) = argsort_{i ∈ {1,...,k}} w^i · x is a consistent linear sorting function.
This framework is significant to multiclass classification in many ways. First, the hypothesis learned above is more expressive than when the OvA assumption is used. Second,
it is easy to verify that other algorithm-specific properties are maintained by the above
transformation. For example, attribute efficiency is preserved when using the winnow algorithm. Finally, the multiclass support vector machine can be implemented by learning a
hyperplane that separates E(S) with maximal margin.
3.3 Comparison to "One-Versus-All"
A common approach to multiclass classification (Y = {1, ..., k}) is to make the one-versus-all (OvA) assumption, namely, that each class can be separated from the rest using
Algorithm CONSTRCLASSLEARN
INPUT: S = ((x_1, y_1), ..., (x_m, y_m)) ⊆ R^d × P_k
OUTPUT: A classifier h
begin
    Calculate E(S) ⊆ R^{kd} × {−1, 1}
    w ← A_b(E(S))                         // w ∈ R^{kd}
    Set h(x) = argsort_{i ∈ {1,...,k}} w^i · x
end
(a)

Algorithm ONLINECONCLASSLEARN
INPUT: S = ((x_1, y_1), ..., (x_m, y_m)) ⊆ R^d × P_k
OUTPUT: A classifier h
begin
    Initialize w = (w^1, ..., w^k) ∈ R^{kd}
    Repeat until converged:
        for t = 1, ..., m do
            for all (i, j) ∈ y_t do
                if w^i · x_t ≤ w^j · x_t then
                    promote(w^i), demote(w^j)
    Set h(x) = argsort_{i ∈ {1,...,k}} w^i · x
end
(b)
Figure 1: (a) Meta-learning algorithm for constraint classification with linear sorting functions (see Definition 2.6). A_b(·) is any binary learning algorithm returning a separating hyperplane. (b) Online meta-algorithm for constraint classification with linear sorting functions (see Definition 2.6). The particular online algorithm used determines how (w^1, ..., w^k)
is initialized and the promotion and demotion strategies.
a binary classification algorithm. Learning proceeds by learning k independent binary
classifiers, one corresponding to each class, where example (x, i) is considered positive
for classifier i and negative for all others.
It is easy to construct an example where the OvA assumption causes the learning to fail
even when there exists a consistent linear sorting function (see Figure 2). Notice, since
the existence of a consistent linear sorting function (w.r.t. S) implies the existence of a
separating hyperplane (w.r.t. E(S)), any learning algorithm guaranteed to separate two
separable point sets (e.g. the Perceptron algorithm) is guaranteed to find a consistent linear
sorting function. In Section 5, we use the perceptron algorithm to find a consistent classifier
for an extension of the example in Figure 2 to R^{100} when OvA fails.
3.4 Comparison to Networks of Linear Threshold Gates (Perceptron)
It is possible to implement the algorithm in Section 3.2 using a network of linear classifiers such as the multi-output Perceptron [AB99], SNoW [CCRR99, Rot98], and multiclass
SVM [CS00, WW99]. Such a network has x ∈ R^d as input and k outputs, each represented by a weight vector, w_i ∈ R^d, where the i-th output computes w_i · x (see Figure 1(b)).
Typically, a label is mapped, via a fixed transformation, into a k-dimensional output vector, and each output is trained separately, as in the OvA case. Alternately, if the online
perceptron algorithm is plugged into the meta-algorithm in Section 3.2, then updates are performed according to a dynamic transformation. Specifically, given (x, y), for every constraint (i, j) ∈ y, if w_i · x ≤ w_j · x, then w_i is "promoted" and w_j is "demoted". Using a network
in this way results in an ultraconservative online algorithm for multiclass classification [CS01].
This subtle change enables the commonly used network of linear threshold gates to learn
every hypothesis it is capable of representing.
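The dynamic update just described can be sketched in a few lines (numpy; the names and the unit learning rate are ours, and real implementations may vary the promotion/demotion steps):

```python
import numpy as np

def constraint_perceptron_update(W, x, constraints, eta=1.0):
    """One online step: for each violated constraint (i, j),
    promote w_i and demote w_j (cf. the ultraconservative updates of [CS01]).

    Scores are computed once per example, a simplification of the loop."""
    scores = W @ x
    for i, j in constraints:
        if scores[i] <= scores[j]:   # constraint (i, j) violated
            W[i] += eta * x          # promote the class that should win
            W[j] -= eta * x          # demote the class that should lose
    return W

W = np.zeros((3, 2))
W = constraint_perceptron_update(W, np.array([1.0, 0.0]), [(0, 1), (0, 2)])
# class 0 now outscores classes 1 and 2 on this x
```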
Figure 2: A 3-class classification example in R^2 showing that one-versus-all (OvA) does not converge to a consistent hypothesis. Three classes (squares, triangles, and circles) should be separated
from the rest. Solid points act as points in their respective classes. The OvA assumption will
attempt to separate the circles from squares and triangles with a single separating hyperplane, as well
as the other 2 combinations. Because the solid points are weighted, all OvA classifiers are required
to classify them correctly or suffer mistakes, thus restricting what the final hypotheses will be. As
a result, the OvA assumption will misclassify the point outlined with a double square, since the square
classifier predicts "not square" and the circle classifier predicts "circle". One can verify that there
exists a WTA classifier for this example.
Dataset      Features   Classes   Training Examples   Testing Examples
glass            9          6            214                  ?
vowel           10         11            528                462
soybean         35         19            307                376
audiology       69         24            200                 26
ISOLET         617         26           6238               1559
letter          16         26          16000               4000
Synthetic*     100          3          50000              50000
Table 2: Summary of problems from the UCI repository. The synthetic data is sampled from a
random linear sorting function (see Section 5).
4 Generalization Bounds
A PAC-style analysis of multiclass functions that uses an extended notion of VC-dimension
for the multiclass case [BCHL95] provides poor bounds on generalization for WTA, and the
current best bounds rely on a generalized notion of margin [ASS00]. In this section, we
prove tighter bounds using the new framework.
We seek generalization bounds for learning with H, the class of linear sorting functions
(Definition 2.6). Although both VC-dimension-based (based on the growth function) and
margin-based bounds for the class of hyperplanes in R^{kd} are known [Vap98, AB99], they
cannot directly be applied, since E(S) produces points that are random, but not independently drawn. It turns out that bounds can be derived indirectly by using known bounds for
constraint classification. Due to space considerations, see [HPRZ02], where natural extensions to the growth function and margin are used to develop generalization bounds.
5 Experiments
As in previous multiclass classification work [DB95, ASS00], we tested our algorithm on
a suite of problems from the Irvine Repository of machine learning [BM98] (see Table 2).
In addition, we created a simple experiment using synthetic data. The data was generated
according to a WTA function over k = 3 randomly generated linear functions in R^{100}, each
with weight vectors inside the unit ball. Then, 50,000 training and 50,000 testing examples were
[Figure 3, a bar chart of % error per dataset comparing Constraint Classification to One versus All, was lost to extraction.]
Figure 3: Comparison of the constraint classification meta-algorithm using the Perceptron algorithm to
a multi-output Perceptron using the OvA assumption. All of the results for the constraint classification algorithm are competitive with known results. The synthetic data would converge to 0 error using
constraint classification but would not converge using the OvA approach.
randomly sampled within a ball of radius 2 around the origin and labeled with the linear
function that produced the highest value.
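The generation procedure can be sketched as follows (numpy; the seed and the uniform-in-ball sampler are our choices):

```python
import numpy as np

def sample_ball(rng, n, d, radius):
    """Draw n points uniformly from a d-dimensional ball of the given radius."""
    u = rng.standard_normal((n, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)      # uniform directions
    r = radius * rng.random(n) ** (1.0 / d)            # radii for uniform volume
    return u * r[:, None]

def make_wta_data(rng, n, d=100, k=3, radius=2.0):
    """Label points by a random winner-take-all function over k linear functions."""
    W = sample_ball(rng, k, d, radius=1.0)   # weight vectors inside the unit ball
    X = sample_ball(rng, n, d, radius=radius)
    y = np.argmax(X @ W.T, axis=1)           # the "winner" labels the point
    return X, y, W

rng = np.random.default_rng(0)
X, y, W = make_wta_data(rng, n=1000)
```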
A comparison is made between the OvA approach (Section 3.3) and the constraint classification approach. Both were implemented on the same multi-output Perceptron
network with k(d + 1) weights (with one threshold per class). Constraint classification
used the modified update rule discussed in Section 3.4. Each update was performed as follows:
w_i ← w_i + x for promotion and w_j ← w_j − x for demotion. The networks were
initialized with all weights 0.
For each multiclass example (x, i) ∈ R^d × {1, ..., k}, a constraint classification example
(x, y_i) ∈ R^d × P_k was created, where y_i = {(i, j) : j ∈ {1, ..., k}, j ≠ i}. Notice that the error
(Definition 2.4) of (x, y_i) corresponds to the traditional error for multiclass classification.
Figure 3 shows that constraint classification outperforms the multi-output Perceptron when
using the OvA assumption.
6 Discussion
We think constraint classification provides two significant contributions to multiclass classification. Firstly, it provides a conceptual generalization that encompasses multiclass classification, multilabel classification, and label ranking problems in addition to problems with
more complex relationships between labels. Secondly, it reminds the community that the
Kesler construction can be used to extend any learning algorithm for binary classification
to the multiclass (or constraint) setting.
Section 5 showed that the constraint approach to learning is advantageous over the one-versus-all approach on both real-world and synthetic data sets. However, preliminary experiments using various natural language data sets, such as part-of-speech tagging, do not
yield any significant difference between the two approaches. We used a common transformation [EZR01] to convert raw data to approximately three million examples in a one-hundred-thousand-dimensional boolean feature space. There were about 50 different part-of-speech
tags. Because the constraint approach is more expressive than the one-versus-all
approach, and because both approaches use the same hypothesis space (k linear functions),
we expected the constraint approach to achieve higher accuracy. Is it possible that a difference would emerge if more data were used? We find it unlikely since both methods
use identical representations. Perhaps, it is instead a result of the fact that we are working in very high dimensional space. Again, we think this is not the case, since it seems
that "most" random winner-take-all problems (as with the synthetic data) would cause the
one-versus-all assumption to fail.
Rather, we conjecture that for some reason, natural language problems (along with the
transformation) are suited to the one-versus-all approach and do not require a more complex
hypothesis. Why, and how, this is so is a direction for future speculation and research.
7 Conclusions
The view of multiclass classification presented here simplifies the implementation, analysis, and understanding of many preexisting approaches. Multiclass support vector machines, ultraconservative online algorithms, and traditional one-versus-all approaches can
be cast in this framework. It would be interesting to see if it could be combined with the
error-correcting output coding method in [DB95] that provides another way to extend the
OvA approach. Furthermore, this view allows for a very natural extension of multiclass
classification to constraint classification ? capturing within it complex learning tasks such
as multilabel classification and ranking. Because constraint classification is a very intuitive
approach and its implementation can be carried out by any discriminant technique, and not
only by optimization techniques, we think it will have useful real-world applications.
References
[AB99]
M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press,
Cambridge, England, 1999.
[ADW94] C. Apte, F. Damerau, and S. M. Weiss. Automated learning of decision rules for text categorization. Information
Systems, 12(3):233–251, 1994.
[ASS00] E. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers.
In Proc. 17th International Conf. on Machine Learning, pages 9–16. Morgan Kaufmann, San Francisco, CA, 2000.
[BCHL95] S. Ben-David, N. Cesa-Bianchi, D. Haussler, and P. Long. Characterizations of learnability for classes of {0, ..., n}-valued functions. J. Comput. Sys. Sci., 50(1):74–86, 1995.
[BM98] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[Bri94] E. Brill. Some advances in transformation-based part of speech tagging. In AAAI, Vol. 1, pages 722–727, 1994.
[CCRR99] A. Carlson, C. Cumby, J. Rosen, and D. Roth. The SNoW learning architecture. Technical Report UIUCDCS-R-99-2101, UIUC Computer Science Department, May 1999.
[CS00] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In Computational
Learning Theory, pages 35–46, 2000.
[CS01] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. In COLT/EuroCOLT, pages
99–115, 2001.
[DB95] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of
Artificial Intelligence Research, 2:263–286, 1995.
[DH73] R. Duda and P. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[DKR97] I. Dagan, Y. Karov, and D. Roth. Mistake-driven learning in text categorization. In EMNLP-97, The Second Conference on Empirical Methods in Natural Language Processing, pages 55–63, 1997.
[EZR01] Y. Even-Zohar and D. Roth. A sequential model for multi-class classification. In EMNLP-2001, the SIGDAT Conference on Empirical Methods in Natural Language Processing, pages 10–19, 2001.
[HPRZ02] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: A new approach to multiclass classification. In Proc.
13th International Conf. of Algorithmic Learning Theory, pages 365–397, 2002.
[HT98] T. Hastie and R. Tibshirani. Classification by pairwise coupling. In NIPS-10, The 1997 Conference on Advances in
Neural Information Processing Systems, pages 507–513. MIT Press, 1998.
[Jel98] F. Jelinek. Statistical Methods for Speech Recognition. The MIT Press, Cambridge, Massachusetts, 1998.
[LBD+89] Y. Le Cun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to
handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[LS97] D. Lee and H. Seung. Unsupervised learning by convex and conic coding. In Michael C. Mozer, Michael I. Jordan,
and Thomas Petsche, editors, Advances in Neural Information Processing Systems, volume 9, page 515. The MIT
Press, 1997.
[Maa00] W. Maass. On the computational power of winner-take-all. Neural Computation, 12(11):2519–2536, 2000.
[Nil65] Nils J. Nilsson. Learning Machines: Foundations of Trainable Pattern-Classifying Systems. McGraw-Hill, New York,
NY, 1965.
[Rot98] D. Roth. Learning to resolve natural language ambiguities: A unified approach. In Proc. of AAAI, pages 806–813,
1998.
[Vap98]
V. Vapnik. Statistical Learning Theory. Wiley, 605 Third Avenue, New York, New York, 10158-0012, 1998.
[WW99] J. Weston and C. Watkins. Support vector machines for multiclass pattern recognition. In Proceedings of the Seventh
European Symposium On Artificial Neural Networks, April 1999.
Adaptive Caching by Refetching
Robert B. Gramacy , Manfred K. Warmuth, Scott A. Brandt, Ismail Ari
Department of Computer Science, UCSC
Santa Cruz, CA 95064
{rbgramacy, manfred, scott, ari}@cs.ucsc.edu
Abstract
We are constructing caching policies that have 13-20% lower miss rates
than the best of twelve baseline policies over a large variety of request
streams. This represents an improvement of 49?63% over Least Recently
Used, the most commonly implemented policy. We achieve this not by
designing a specific new policy but by using on-line Machine Learning
algorithms to dynamically shift between the standard policies based on
their observed miss rates. A thorough experimental evaluation of our
techniques is given, as well as a discussion of what makes caching an
interesting on-line learning problem.
1 Introduction
Caching is ubiquitous in operating systems. It is useful whenever we have a small, fast main
memory and a larger, slower secondary memory. In file system caching, the secondary
memory is a hard drive or a networked storage server while in web caching the secondary
memory is the Internet. The goal of caching is to keep within the smaller memory data
objects (files, web pages, etc.) from the larger memory which are likely to be accessed
again in the near future. Since the future request stream is not generally known, heuristics,
called caching policies, are used to decide which objects should be discarded as new objects
are retained. More precisely, if a requested object already resides in the cache then we
call it a hit, corresponding to a low-latency data access. Otherwise, we call it a miss,
corresponding to a high-latency data access as the data must be fetched from the slower
secondary memory into the faster cache memory. In the case of a miss, room must be made
in the cache memory for the new object. To accomplish this a caching policy discards from
the cache objects which it thinks will cause the fewest or least expensive future misses.
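To make the hit/miss/eviction bookkeeping concrete, here is a minimal sketch of the simplest recency-based policy, LRU (our illustration, not the paper's implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used object when capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def request(self, key, fetch):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)       # mark as most recently used
            return self.store[key]
        self.misses += 1                      # slow path: go to secondary memory
        value = fetch(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # discard the least recently used
        return value
```

Each of the baseline policies differs only in which object the discard step selects.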
In this work we consider twelve baseline policies, including seven common policies
(RAND, FIFO, LIFO, LRU, MRU, LFU, and MFU) and five more recently developed and very successful policies (SIZE and GDS [CI97], GD* [JB00], and GDSF and
LFUDA [ACD+99]). These algorithms employ a variety of directly observable criteria,
including recency of access, frequency of access, size of the objects, cost of fetching the
objects from secondary memory, and various combinations of these.
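As an example of how such criteria combine, GreedyDual-Size [CI97] keeps a priority of L + cost/size per cached object, evicts the minimum, and raises the aging value L to that minimum. A sketch under our reading of the policy (it assumes every object fits in the cache on its own):

```python
import heapq

class GDSCache:
    """GreedyDual-Size: priority = L + cost/size; evict the minimum-priority
    object and set the aging value L to the evicted priority."""
    def __init__(self, capacity):
        self.capacity = capacity          # total size budget
        self.used = 0
        self.L = 0.0
        self.entries = {}                 # key -> (priority, size)
        self.heap = []                    # (priority, key); may hold stale items

    def _evict(self):
        while True:
            priority, key = heapq.heappop(self.heap)
            if key in self.entries and self.entries[key][0] == priority:
                self.L = priority         # age the cache up to the evicted priority
                self.used -= self.entries[key][1]
                del self.entries[key]
                return

    def request(self, key, size, cost):
        if key in self.entries:           # hit: refresh priority with current L
            _, size = self.entries[key]
        else:                             # miss: make room, then insert
            self.used += size
            while self.used > self.capacity:
                self._evict()
        priority = self.L + cost / size
        self.entries[key] = (priority, size)
        heapq.heappush(self.heap, (priority, key))
```

GDSF and LFUDA modify this priority with frequency counts, which is how frequency and size criteria end up combined in one number.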
The primary difficulty in selecting the best policy lies in the fact that each of these policies
may work well in different situations or at different times due to variations in workload,
system architecture, request size, type of processing, CPU speed, relative speeds of the
different memories, load on the communication network, etc.
(Footnotes: Partial support from NSF grant CCR 9821087. Supported by Hewlett Packard Labs, Storage Technologies Department.)
Thus the difficult question
is: In a given situation, which policy should govern the cache? For example, the request
stream from disk accesses on a PC is quite different from the request stream produced by
web-proxy accesses via a browser, or that of a file server on a local network. The relative
performance of the twelve policies vary greatly depending on the application. Furthermore,
the characteristics of a single request stream can vary temporally for a fixed application.
For example, a file server can behave quite differently during the middle of the night while
making tape archives in order to backup data, whereas during the day its purpose is to
serve file requests to and from other machines and/or users. Because of their differing
decision criteria, different policies perform better given different workload characteristics.
The request streams become even more difficult to characterize when there is a hierarchy
or a network of caches handling a variety of file-type requests. In these cases, choosing a
fixed policy for each cache in advance is doomed to be sub-optimal.
[Figure 1 appears here: four panels of miss-rate curves. Panels (a) and (b) plot the twelve
policies (lru, fifo, mru, lifo, size, lfu, mfu, rand, gds, gdsf, lfuda, gd*); panels (c) and (d)
show only the lowest-miss-rate policies, which switch between SIZE, GDS, GDSF, and GD*.]
Figure 1: Miss rates (y axis) of a) the twelve fixed policies (calculated w.r.t. a window of 300 requests)
over 30,000 requests (x axis), b) the same policies on a random permutation of the data set, c) and d)
the policies with the lowest miss rates in the figures above.
The usual answer to the question of which policy to employ is either to select one that works
well on average, or to select one that provides the best performance on some past workload
that is believed to be representative. However, these strategies have two inherent costs.
First, the selection (and perhaps tuning) of the single policy to be used in any given situation
is done by hand and may be both difficult and error-prone, especially in complex system
architectures with unknown and/or time-varying workloads. And second, the performance
of the chosen policy with the best expected average case performance may in fact be worse
than that achievable by another policy at any particular moment. Figure 1 (a) shows the miss
rate of the twelve policies described above on a representative portion of one of our data
sets (described below in Section 3) and Figure 1 (b) shows the miss rate of the same policies
on a random permutation of the request stream. As can clearly be seen, the miss rates
on the permuted data set are quite different from those of the original data set, and it is this
difference that our algorithms aim to exploit. Figures 1 (c) and (d) show which policy is
best at each instant of time for the data segment and the permuted data segment. It is clear
from these (representative) figures that the best policy changes over time.
To avoid the perils associated with trying to hand-pick a single policy, one would like to be
able to automatically and dynamically select the best policy for any given situation. In other
words, one wants a cache replacement policy which is "adaptive". In our Storage Systems
Research Group, we have identified the need for such a solution in the context of complex
network architectures and time-varying workloads and suggested a preliminary framework
in which a solution could operate [AAG ar], but without giving specific algorithmic solutions to the adaptation problem. This paper presents specific algorithmic solutions that
address the need identified in that work.
It is difficult to give a precise definition of "adaptive" when the data stream is continually
changing. We use the term "adaptive" only informally and when we want to be precise
we use off-line comparators to judge the performance of our on-line algorithms, as is
commonly done in on-line learning [LW94, CBFH 97, KW97]. An on-line algorithm
is called adaptive if it performs well when measured up against off-line comparators.
In this paper we use two off-line comparators: BestFixed and BestShifting(K). BestFixed
is the a posteriori selected policy with the lowest miss rate on the entire request stream
for our twelve policies. BestShifting(K) considers all possible partitions of the request
stream into at most K segments along with the best policy for each segment. BestShifting(K)
chooses the partition with the lowest total miss rate over the entire dataset and can be
computed using dynamic programming, in time proportional to the total number of requests,
the bound K on the number of segments, and the number of base-line policies. Figure 2
shows graphically each of the comparators mentioned above. Notice that BestFixed is never
better than BestShifting(K), and that most of the advantage of shifting policies occurs
with relatively few shifts in roughly 300,000 requests.
Figure 2: Optimal offline comparators on the Work-Week dataset (missrates %, y axis, vs.
K = number of shifts, x axis): BestFixed = SIZE, BestShifting(K), and All Virtual Caches
(AllVC).
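The dynamic program behind BestShifting(K) can be sketched as follows; the miss-indicator input and the O(K·T·P) recurrence are an illustrative reconstruction from the description above, not the authors' code.

```python
def best_shifting(miss, K):
    """Fewest total misses using at most K segments, one policy per segment.

    miss[t][p] is 1 if policy p would miss request t, else 0 (T x P matrix).
    Runs in O(K * T * P) time.
    """
    T, P = len(miss), len(miss[0])
    INF = float("inf")
    # dp[k][p]: fewest misses so far using exactly k segments,
    # with policy p governing the current (k-th) segment.
    dp = [[INF] * P for _ in range(K + 1)]
    for p in range(P):
        dp[1][p] = miss[0][p]
    for t in range(1, T):
        new = [[INF] * P for _ in range(K + 1)]
        for k in range(1, K + 1):
            prev_best = min(dp[k - 1]) if k > 1 else INF
            for p in range(P):
                stay = dp[k][p]        # keep the same policy for request t
                switch = prev_best     # start a new segment governed by p
                new[k][p] = min(stay, switch) + miss[t][p]
        dp = new
    return min(min(row) for row in dp[1:])

# Policy 0 is perfect early, policy 1 late: one switch removes all misses.
misses = [[0, 1], [0, 1], [1, 0], [1, 0]]
```

With K = 1 the program reduces to BestFixed; allowing a second segment lets it exploit the change point.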
Rather than developing a new caching policy (well-plowed ground, to say the least), this
paper uses a master policy to dynamically determine the success rate of all the other policies and switch among them based on their relative performance on the current request
stream. We show that with no additional fetches, the master policy works about as well as
BestFixed. We define a refetch as a fetch of a previously seen object that was favored by the
current policy but discarded from the real cache by a previously active policy. With refetching, it can outperform BestFixed. In particular, when all required objects are refetched
instantly, this policy has a 13-20% lower miss rate than BestFixed, and almost the same
performance as BestShifting(K) for modest K. For reference, when compared with LRU,
this policy has a 49-63% lower miss rate. Disregarding misses on objects never seen before
(compulsory misses), the performance improvements are even greater.
Because refetches are themselves potentially costly, it is important to note that they can be
done in the background. Our preliminary experiments show this to be both feasible and
effective, capturing most of the advantage of instant refetching. A more detailed discussion
of our results is given in Section 3.
2 The Master Policy
We seek to develop an on-line master policy that determines which of a set of baseline policies should govern the real cache at any time. Appropriate switch points need
to be found and switches must be facilitated. Our key idea is "virtual caches". A virtual cache simulates the operation of
a single baseline policy. Each virtual
cache records a few bytes of metadata about each object in its cache:
ID, size, and calculated priority. Object data is only kept in the real
cache, making the cost of maintaining the virtual caches negligible?.
Figure 3: Virtual caches embedded in the cache memory.
Via the virtual caches, the master policy can observe the
miss rates of each policy on the actual request stream in order to determine their performance on the current workload.
To be fair, virtual caches reside in the memory space which could have been used to cache
real objects, as is illustrated in Figure 3. Thus, the space used by the real cache is reduced by
the space occupied by the virtual caches. We set the virtual size of each virtual cache equal
to the size of the full cache. The caches used for computing the comparators BestFixed and
BestShifting(K) are based on caches of the full size.
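A minimal sketch of the virtual-cache idea, assuming a priority-based eviction interface; the class, the priority functions, and the toy trace below are hypothetical stand-ins, not the system described in the paper.

```python
class VirtualCache:
    """Simulates one baseline policy, keeping only per-object metadata."""

    def __init__(self, name, capacity, priority_fn):
        self.name = name
        self.capacity = capacity
        self.priority_fn = priority_fn  # (obj_id, size) -> eviction priority
        self.meta = {}                  # obj_id -> size: a few bytes per object
        self.used = 0
        self.misses = 0

    def request(self, obj_id, size):
        if obj_id in self.meta:
            return  # virtual hit: nothing to do
        self.misses += 1
        if size > self.capacity:
            return
        # Evict the lowest-priority objects until the new one fits.
        while self.used + size > self.capacity:
            victim = min(self.meta, key=lambda o: self.priority_fn(o, self.meta[o]))
            self.used -= self.meta.pop(victim)
        self.meta[obj_id] = size
        self.used += size

# The master policy runs every virtual cache on the live request stream and
# hands the real cache to the one with the fewest observed misses.
caches = [VirtualCache("SIZE", 100, lambda o, s: -s),  # evict largest first
          VirtualCache("FIFO", 100, lambda o, s: 0)]   # toy stand-in priority
for obj, size in [("a", 60), ("b", 60), ("a", 60)]:
    for vc in caches:
        vc.request(obj, size)
governing = min(caches, key=lambda vc: vc.misses)
```

Because only IDs, sizes, and priorities are stored, many such simulations fit in a small fraction of the real cache's space.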
A simple heuristic the master policy can use to choose which caching policy should control
at any given time is to continuously monitor the number of misses incurred by each policy
in a past window of, for example, 300 requests (depicted in Figure 1 (a)). The master policy then gives control of the real cache to the policy with the least misses in this window
(shown in Figure 1 (c)). While this works well in practice, maintaining such a window for
many fixed policies is expensive, further reducing the space for the real cache. It is also
hard to tune the window size. A better master policy keeps just one weight for each
policy (non-negative and summing to one) which represents an estimate of its current relative performance. The master policy is always governed by the policy with the maximum
weight?.
Weights are updated by using the combined loss and share updates of Herbster and Warmuth [HW98] and Bousquet and Warmuth [BW02] from the expert framework [CBFH 97]
for on-line learning. Here the experts are the caching policies. This technique is preferred
to the window-based master policy because it uses much less memory, and because the
parameters of the weight updates are easier to tune than the window size. This also makes
the resulting master policy more robust (not shown).
2.1 The Weight Updates
Updating the weight vector
after each trial is a two-part process.
the
First,
weights of all policies that missed the new request are multiplied by a factor
and
then renormalized. We call this the loss update. Since the weights are renormalized, they
remain unchanged if all policies miss the new request. As noticed by Herbster and Warmuth [HW98], multiplicative updates drive the weights of poor experts to zero so quickly
that it becomes difficult for them to recover if their experts subsequently start doing well.
? As an additional optimization, we record the id and size of each object only once, regardless of
the number of virtual caches it appears in.
? This can be sub-optimal in the worst case since it is always possible to construct a data stream
where two policies switch back and forth after each request. However, real request streams appear
to be divided into segments that favor one of the twelve policies for a substantial number of requests
(see Figure 1).
Therefore, the second share update prevents the weights of experts that did well in the
past from becoming too small, allowing them to recover quickly, as shown in Figure 4.
Figure 1(a) shows the current absolute performance of the policies in a rolling window
(of 300 requests), whereas Figure 4 depicts relative performance and shows how the policies
compete over time. (Recall that the policy with the highest weight always controls the real
cache).
There are a number of share updates [HW98, BW02] with various recovery properties. We
chose the FIXED SHARE TO UNIFORM PAST (FSUP) update because of its simplicity and
efficiency. Note that the loss bounds proven in the expert framework for the combined
loss and share update do not apply in this context. This is because we use the mixture
weights only to select the best policy. However, our experimental results suggest that we
are exploiting the recovery properties of the combined update that are discussed
extensively by Bousquet and Warmuth [BW02].
Figure 4: Weight history for the individual baseline policies (FSUP weight vs. requests
over time).
Formally, for each trial t, the loss update is

    w?(t, i)  ?  w(t, i) ? ?^miss(t, i)    (renormalized to sum to one)

where ? is a parameter in [0, 1) and miss(t, i) is 1 if the t-th object is missed by policy i
and 0 otherwise. The initial distribution is uniform, i.e. w(1, i) = 1/n for each of the n
policies. The Fixed-Share to Uniform Past update mixes the current weight vector with the
past average weight vector r(t), the running mean of all previous weight vectors, which is
easy to maintain:

    w(t+1, i) = (1 - ?) ? w?(t, i) + ? ? r(t, i)

where ? is a parameter in [0, 1]. A small ? causes high weight to decay quickly if its
corresponding policy starts incurring more misses than other policies with high weights.
The higher ?, the more quickly past good policies will recover. In our experiments we
used fixed values of ? and ?.
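The two-part update can be sketched as follows; the symbol names (beta for the loss factor, gamma for the share fraction) follow the expert-tracking literature, and the numeric values below are illustrative, not the ones used in the paper.

```python
def fsup_update(w, past_avg, misses, beta, gamma):
    """One trial: loss update, then the Fixed-Share to Uniform Past mix.

    w        : current weight per policy (non-negative, sums to one)
    past_avg : running average of all past weight vectors
    misses   : 1 if the policy missed the current request, else 0
    beta     : loss factor in [0, 1); smaller -> missing policies decay faster
    gamma    : share fraction in [0, 1]; larger -> past winners recover faster
    """
    # Loss update: multiply by beta for each miss, then renormalize.
    wm = [wi * (beta if m else 1.0) for wi, m in zip(w, misses)]
    z = sum(wm)
    wm = [wi / z for wi in wm]
    # Share update: mix with the uniform average of past weight vectors.
    return [(1.0 - gamma) * wi + gamma * ri for wi, ri in zip(wm, past_avg)]

w = [0.25, 0.25, 0.25, 0.25]
past_avg = [0.25, 0.25, 0.25, 0.25]
w = fsup_update(w, past_avg, misses=[1, 1, 0, 0], beta=0.5, gamma=0.1)
# Policies that hit gain weight; the share term keeps losers from vanishing.
```

Note that if every policy misses, the loss update leaves the weights unchanged after renormalization.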
2.2 Demand vs. Instantaneous Rollover
When space is needed to cache a new request, the master policy discards objects not present
in the governing policy's virtual cache?. This causes the content of the real cache to "roll
over" to the content of the current governing virtual cache. We call this demand rollover
because objects in the governing virtual cache are refetched into the real cache on demand.
While this master policy works almost as well as BestFixed, we were not satisfied and
wanted to do as well as BestShifting(K) (for a reasonably large bound K on the number
of segments). We noticed that the content of the real cache lagged behind the content of
the governing virtual cache and had more misses, and conjectured that "quicker" rollover
strategies would improve overall performance.
Our search for a better master policy began by considering an extreme and unrealistic
rollover strategy that assures no lag time: After each switch instantaneously refetch all
? We update the virtual caches before the real cache, so there are always objects in the real cache
that are not in the governing virtual cache when the master policy goes to find space for a new request.
the objects in the new governing virtual cache that were not retained in the real cache.
We call this refetching policy instantaneous rollover. By appropriate tuning of the update
parameters, the number of instantaneous rollovers can be kept reasonably small and
the miss rates of our master policy are almost as good as BestShifting(K) for K much larger
than the actual number of shifts used on-line. Note that the comparator BestShifting(K)
is also not penalized for its instantaneous rollovers. While this makes sense for defining a
comparator, we now give more realistic rollover strategies that reduce the lag time.
2.3 Background Rollover
Because instantaneous rollover immediately refetches everything in the governing virtual
cache that is not already in the real cache, it may cause a large number of refetches even
when the number of policy switches is kept small. If all refetches are counted as misses,
then the miss rate of such a master policy is comparable to that of BestFixed. The same
holds for BestShifting. However, from a user perspective, refetching is advantageous because of the latency advantage gained by having required objects in memory before they
are needed. And from a system perspective, refetches can be "free" if they are done when
the system is idle. To take advantage of these "free" refetches, we introduce the concept
of background rollover. The exact criteria for when to refetch each missing object will
depend heavily on the system, workload, and expected cost and benefit of each object. To
characterize the performance of background rollover without addressing these architectural
details, the following background refetching strategies were examined: 1 refetch for every
cache miss; 1 for every hit; 1 for every request; 2 for every request; 1 for every hit and 5 for
every miss, etc. Each background technique gave fewer misses than BestFixed, approaching and nearly matching the performance obtained by the master policy using instantaneous
rollover. Of course, techniques which reduce the number of policy switches (by tuning
the update parameters) also reduce the number of refetches. Figure 5 compares the performance of each
master policy with that of BestFixed and shows that the three master policies almost always
outperform BestFixed.
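A minimal sketch of the credit scheme behind strategies like "1 refetch for every hit and 5 for every miss"; the function and its parameters are hypothetical stand-ins for the architecture-dependent schedules discussed above.

```python
def background_refetches(events, per_hit, per_miss, pending):
    """How many queued refetches a simple credit scheme can serve for free.

    events   : sequence of "hit"/"miss" outcomes on the real cache
    per_hit  : refetch credits granted per hit (e.g. 1)
    per_miss : refetch credits granted per miss (e.g. 5)
    pending  : objects waiting to be refetched after a policy switch
    """
    done = 0
    for outcome in events:
        budget = per_hit if outcome == "hit" else per_miss
        done += min(budget, pending - done)
        if done == pending:
            break
    return done

# With the "1 per hit, 5 per miss" schedule, a short burst of traffic
# already drains a modest refetch queue.
served = background_refetches(["miss", "hit", "hit", "miss"],
                              per_hit=1, per_miss=5, pending=8)
```

Tuning the per-event budgets trades refetch latency against the bandwidth consumed during idle periods.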
[Figure 5 appears here: miss-rate differences (y axis, -0.1 to 0.6) over requests
205,000-230,000 (x axis), with curves bestF - demd, bestF - back, and bestF - inst.]
Figure 5: BestFixed - P, where P is Instantaneous, Demand, or Background Rollover 2. The
baseline is BestFixed. Deviations from the baseline show how the performance of our
on-line shifting policies differs in miss rate. Above (below) the baseline corresponds to
fewer (more) misses than BestFixed.
3 Data and Results
Figure 6 shows how the master policy with instantaneous rollover (labeled "roll") "tracks"
the baseline policy with the lowest miss rate over the representative data segment used in
previous figures. Figure 7 shows the performance of our master policies with respect to
BestFixed, BestShifting(K), and LRU. It shows that demand rollover does slightly worse
than BestFixed, while background 1 (1 refetch every request) and background 2 (1 refetch
every hit and 5 every miss) do better than BestFixed and almost as well as instantaneous,
which itself does almost as well as BestShifting. All of the policies do significantly better
than LRU. Discounting the compulsory misses, our best policies have 1/3 fewer "real"
misses than BestFixed and 1/2 the "real" misses of LRU.
Figure 8 summarizes the performance of our algorithms over three large datasets. These
were gathered using Carnegie Mellon University's DFSTrace system [MS96] and had durations ranging from a single day to over a year. The traces we used represent a variety of
workloads including a personal workstation (Work-Week), a single user (User-Month), and
a remote storage system with a large number of clients, filtered by LRU on the clients' local
caches (Server-Month-LRU). For each data set, the table shows the number of requests, %
of requests skipped (size > cache size), number of compulsory misses of objects not previously seen, and the number of rollovers. For each policy (including BestShifting(K)), the
table shows miss rate, and % improvement over BestFixed (labeled "% BestF") and LRU. In
each case all 12 virtual caches consumed on average less than 2% of the real cache space.
We fixed the update parameters for all experiments. As already mentioned, BestShifting(K)
is never penalized for rollovers.
[Figure 6 appears here: miss rates under FSUP, plotting the master policy (labeled "roll")
together with the twelve baseline policies over requests 205,000-235,000.]
Figure 6: "Tracking" the best policy.

[Figure 7 appears here: missrates % (y axis) vs. K = number of shifts (x axis) on the
Work-Week dataset, showing LRU, BestFixed = SIZE, Demand, Background 1, Background 2,
Instantaneous, BestShift(K), All Virtual Caches, and the compulsory missrate; the master
policy used K = 76 shifts.]
Figure 7: Online shifting policies against offline comparators and LRU for Work-Week dataset.

Figure 8: Performance Summary.

                            Work-Week   User-Month   Server-Month-LRU
  #Requests                 138k        382k         48k
  Cache size                900KB       2MB          4MB
  %Skipped                  6.5%        12.8%        15.7%
  # Compuls                 0.020       0.015        0.152
  # Shifts                  88          485          93
  LRU          Miss Rate    0.088       0.076        0.450
  BestFixed    Policy       SIZE        GDS          GDSF
               Miss Rate    0.055       0.075        0.399
               % LRU        36.8%       54.7%        54.2%
  Demand       Miss Rate    0.061       0.076        0.450
               % BestF      -9.6%       -0.5%        -12.8%
               % LRU        30.9%       54.4%        48.5%
  Backgrnd 1   Miss Rate    0.053       0.068        0.401
               % BestF      5.1%        9.8%         -0.7%
               % LRU        40.1%       59.4%        55.5%
  Backgrnd 2   Miss Rate    0.047       0.067        0.349
               % BestF      15.4%       11.9%        12.4%
               % LRU        46.6%       60.1%        60.3%
  Instant      Miss Rate    0.044       0.065        0.322
               % BestF      19.7%       13.4%        19.3%
               % LRU        49.2%       60.8%        63%
  BestShifting Miss Rate    0.042       0.039        0.312
               % BestF      23.6%       48.0%        21.8%
               % LRU        52.2%       48.7%        30.1%
4 Conclusion
Operating systems have many hidden parameter tweaking problems which are ideal applications for on-line Machine Learning algorithms. These parameters are often set to values
which provide good average case performance on a test workload. For example, we have
identified candidate parameters in device management, file systems, and network protocols. Previously the on-line algorithms for predicting as well as the best shifting expert
were used to tune the time-out for spinning down the disk of a PC [HLSS00]. In this paper we use the weight updates of these algorithms for dynamically determining the best
caching policy. This application is more elaborate because we needed to actively gather
performance information about the caching policies via virtual caches. In future work we
plan to do a more thorough study of feasibility of background rollover by building actual
systems.
Acknowledgements: Thanks to David P. Helmbold for an efficient dynamic programming
approach to BestShifting(K), Ahmed Amer for data, and Ethan Miller for many helpful insights.
References
[AAG ar] Ismail Ari, Ahmed Amer, Robert Gramacy, Ethan Miller, Scott Brandt, and
Darrell D. E. Long. ACME: Adaptive caching using multiple experts. In Proceedings of the 2002 Workshop on Distributed Data and Structures (WDAS
2002). Carleton Scientific, (to appear).
[ACD 99] Martin Arlitt, Ludmilla Cherkasova, John Dilley, Rich Friedrich, and Tai Jin.
Evaluating content management techniques for Web proxy caches. In Proceedings of the Workshop on Internet Server Performance (WISP99), May
1999.
[BW02] O. Bousquet and M. K. Warmuth. Tracking a small set of experts by mixing
past posteriors. J. of Machine Learning Research, 3(Nov):363-396, 2002.
Special issue for COLT01.
[CBFH 97] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and
M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427-485, 1997.
[CI97] Pei Cao and Sandy Irani. Cost-aware WWW proxy caching algorithms. In
Proceedings of the 1997 Usenix Symposium on Internet Technologies and
Systems (USITS-97), 1997.
[HLSS00] David P. Helmbold, Darrell D. E. Long, Tracey L. Sconyers, and Bruce Sherrod. Adaptive disk spin-down for mobile computers. ACM/Baltzer Mobile
Networks and Applications (MONET), pages 285-297, 2000.
[HW98] M. Herbster and M. K. Warmuth. Tracking the best expert. Journal of Machine Learning, 32(2):151-178, August 1998. Special issue on concept drift.
[JB00] Shudong Jin and Azer Bestavros. GreedyDual* web caching algorithm: Exploiting the two sources of temporal locality in web request streams. Technical Report 2000-011, 4, 2000.
[KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1-64, January 1997.
[LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[MS96] Lily Mummert and Mahadev Satyanarayanan. Long term distributed file reference tracing: Implementation and experience. Software - Practice and Experience (SPE), 26(6):705-736, June 1996.
Kernel Dependency Estimation
Jason Weston, Olivier Chapelle, André Elisseeff,
Bernhard Schölkopf and Vladimir Vapnik*
Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
*NEC Research Institute, Princeton, NJ 08540 USA
Abstract
We consider the learning problem of finding a dependency between
a general class of objects and another, possibly different, general
class of objects. The objects can be for example: vectors, images,
strings, trees or graphs. Such a task is made possible by employing
similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. We
experimentally validate our approach on several tasks: mapping
strings to strings, pattern recognition, and reconstruction from partial images.
1 Introduction
In this article we consider the rather general learning problem of finding a dependency
between inputs x ? X and outputs y ? Y given a training set
(x1, y1), . . . , (xm, ym) ? X × Y, where X and Y are nonempty sets. This includes
conventional pattern recognition and regression estimation. It also encompasses
more complex dependency estimation tasks, e.g. mapping of a certain class of strings
to a certain class of graphs (as in text parsing) or the mapping of text descriptions
to images. In this setting, we define learning as estimating the function f(x, ?*)
from the set of functions {f(·, ?), ? ? ?} which provides the minimum value of the
risk function

    R(?) = ?_{X×Y} L(y, f(x, ?)) dP(x, y)          (1)

where P is the (unknown) joint distribution of x and y and L(y, ?) is a loss function,
a measure of distance between the estimate ? and the true output y at a point x.
Hence in this setting one is given a priori knowledge of the similarity measure used
in the space Y in the form of a loss function. In pattern recognition this is often the
zero-one loss, in regression often squared loss is chosen. However, for other types
of outputs, for example if one was required to learn a mapping to images, or to
a mixture of drugs (a drug cocktail) to prescribe to a patient then more complex
costs would apply. We would like to be able to encode these costs into the method
of estimation we choose.
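Since P is unknown in practice, learning algorithms typically work with an empirical estimate of (1) on the training sample; a minimal sketch (standard background, with invented predictor and data, not spelled out in the excerpt):

```python
def empirical_risk(f, loss, data):
    """Average loss of predictor f over a finite sample of (x, y) pairs."""
    return sum(loss(y, f(x)) for x, y in data) / len(data)

def squared_loss(y, y_hat):
    return (y - y_hat) ** 2

data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
risk = empirical_risk(lambda x: 2.0 * x, squared_loss, data)  # fits exactly
```

Swapping in a different `loss` is exactly where problem-specific costs, such as the drug-cocktail example above, would enter.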
The framework we attempt to address is rather general. Few algorithms have been
constructed which can work in such a domain - in fact the only algorithm that we
are aware of is k-nearest neighbors. Most algorithms have focussed on the pattern
recognition and regression problems and cannot deal with more general outputs.
Conversely, specialist algorithms have been made for structured outputs, for example the ones of text classification which calculate parse trees for natural language
sentences; however, these algorithms are specialized for their tasks. Recently, kernel
methods [12, 11] have been extended to deal with inputs that are structured objects such as strings or trees by linearly embedding the objects using the so-called
kernel trick [5, 7]. These objects are then used in pattern recognition or regression
domains. In this article we show how to construct a general algorithm for dealing
with dependencies between both general inputs and general outputs. The algorithm
ends up in a formulation which has a kernel function for the inputs and a kernel
function (which will correspond to choosing a particular loss function) for the outputs. This also enables us (in principle) to encode specific prior information about
the outputs (such as special cost functions and/or invariances) in an elegant way,
although this is not experimentally validated in this work.
The paper is organized as follows. In Section 2 it is shown how to use kernel
functions to measure similarity between outputs as well as inputs. This leads to
the derivation of the Kernel Dependency Estimation (KDE) algorithm in Section
3. Section 4 validates the method experimentally and Section 5 concludes.
2 Loss functions and kernels
An informal way of looking at the learning problem consists of the following. Generalization
occurs when, given a previously unseen x ∈ X, we find a suitable y ∈ Y such
that (x, y) should be "similar" to (x_1, y_1), ..., (x_m, y_m). For outputs one is usually
given a loss function for measuring similarity (this can be, but is not always, inherent
to the problem domain). For inputs, one way of measuring similarity is by using a
kernel function. A kernel k is a symmetric function which is an inner product in some
Hilbert space F, i.e., there exists a map Φ_k : X → F such that k(x, x') = (Φ_k(x) · Φ_k(x')).
We can think of the patterns as Φ_k(x), Φ_k(x'), and carry out geometric algorithms
in the inner product space ("feature space") F. Many successful algorithms are now
based on this approach, see e.g. [12, 11]. Typical kernel functions are polynomials
k(x, x') = (x · x' + 1)^p and RBFs k(x, x') = exp(−‖x − x'‖² / 2σ²), although many
other types (including ones which take into account prior information about the
learning problem) exist.
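As a concrete reference, the two standard input kernels just mentioned can be written in a few lines. This is a minimal sketch; the helper names (`poly_kernel`, `rbf_kernel`) are ours, not from the paper.

```python
import math

def poly_kernel(p):
    """Polynomial kernel k(x, x') = (x . x' + 1)^p."""
    return lambda x, xp: (sum(a * b for a, b in zip(x, xp)) + 1.0) ** p

def rbf_kernel(sigma):
    """RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    return lambda x, xp: math.exp(
        -sum((a - b) ** 2 for a, b in zip(x, xp)) / (2.0 * sigma ** 2))

k2 = poly_kernel(2)
kr = rbf_kernel(1.0)
assert k2((1.0, 0.0), (1.0, 0.0)) == 4.0      # (1*1 + 0*0 + 1)^2 = 4
assert abs(kr((0.0,), (0.0,)) - 1.0) < 1e-12  # identical points give 1
```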
Note that, like distances between examples in input space, it is also possible to
think of the loss function as a distance measure in output space; we will denote
this space L. We can measure inner products in this space using a kernel function.
We will denote this as ℓ(y, y') = (Φ_ℓ(y) · Φ_ℓ(y')), where Φ_ℓ : Y → L. This map
makes it possible to consider a large class of nonlinear loss functions.¹ As in the
traditional kernel trick for the inputs, the nonlinearity is only taken into account
when computing the kernel matrix. The rest of the training is "simple" (e.g., a
convex program, or methods of linear algebra such as matrix diagonalization). It
also makes it possible to consider structured objects as outputs such as the ones
described in [5]: strings, trees, graphs and so forth. One embeds the output objects
in the space L using a kernel.
Let us define some kernel functions for output spaces.
¹For instance, assuming the outputs live in R^n, using an RBF kernel one obtains a
loss function ‖Φ_ℓ(y) − Φ_ℓ(y')‖² = 2 − 2 exp(−‖y − y'‖² / 2σ²). This is a nonlinear loss
function which takes the value 0 if y and y' coincide, and 2 if they are maximally different.
The rate of increase in between (i.e., the "locality") is controlled by σ.
In M-class pattern recognition, given Y = {1, ..., M}, one often uses the distance
L(y, y') = 1 − [y = y'], where [y = y'] is 1 if y = y' and 0 otherwise. To construct a
corresponding inner product it is necessary to embed this distance into a Euclidean
space, which can be done using the following kernel:

    ℓ_pat(y, y') = ½ [y = y'],    (2)

as L(y, y')² = ‖Φ_ℓ(y) − Φ_ℓ(y')‖² = ℓ(y, y) + ℓ(y', y') − 2ℓ(y, y') = 1 − [y = y'].
It corresponds to embedding into an M-dimensional Euclidean space via the map
Φ_ℓ(y) = (0, 0, ..., 1/√2, ..., 0) where the yth coordinate is nonzero. It is also possible
to describe multi-label classification (where any one example belongs to an arbitrary
subset of the M classes) in a similar way.
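To make the embedding concrete, a small sketch (the helper names `l_pat` and `induced_sq_distance` are ours) checks that the kernel of equation (2) reproduces the zero-one distance via the standard identity ‖Φ(y) − Φ(y')‖² = k(y,y) + k(y',y') − 2k(y,y'):

```python
def l_pat(y, yp):
    """Output kernel for M-class pattern recognition, equation (2)."""
    return 0.5 if y == yp else 0.0

def induced_sq_distance(y, yp, kernel):
    """Squared distance induced by a kernel in its feature space."""
    return kernel(y, y) + kernel(yp, yp) - 2.0 * kernel(y, yp)

# L(y, y')^2 = 1 - [y = y']: 0 for equal labels, 1 otherwise.
assert induced_sq_distance(3, 3, l_pat) == 0.0
assert induced_sq_distance(3, 7, l_pat) == 1.0
```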
For regression estimation, one can use the usual inner product

    ℓ_reg(y, y') = (y · y').    (3)
For outputs such as strings and other structured objects we require the corresponding
string kernels and kernels for structured objects [5, 7]. We give one example
here, the string subsequence kernel employed in [7] for text categorization. This
kernel is an inner product in a feature space consisting of all ordered subsequences
of length r, denoted Σ^r. The subsequences, which do not have to be contiguous,
are weighted by an exponentially decaying factor λ of their full length in the text:

    k(s, t) = Σ_{u ∈ Σ^r} Σ_{i: u = s[i]} Σ_{j: u = t[j]} λ^{l(i)} λ^{l(j)},    (4)

where u = x[i] denotes that u is the subsequence of x with indices 1 ≤ i_1 < ... < i_{|u|}
and l(i) = i_{|u|} − i_1 + 1. A fast way to compute this kernel is described in [7].
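A direct, deliberately naive implementation of this kernel enumerates all length-r index tuples; it is far slower than the dynamic program of [7] but makes the span weighting explicit. The helper names are ours, and the feature map is built as a sparse dictionary over subsequences:

```python
from itertools import combinations

def subseq_features(s, r, lam):
    """Sparse feature vector of a string over length-r subsequences;
    each occurrence is weighted by lam ** span, span = l(i)."""
    feats = {}
    for idx in combinations(range(len(s)), r):
        u = ''.join(s[i] for i in idx)
        span = idx[-1] - idx[0] + 1      # l(i) in equation (4)
        feats[u] = feats.get(u, 0.0) + lam ** span
    return feats

def subseq_kernel(s, t, r, lam):
    """Equation (4) as an explicit inner product of feature vectors."""
    fs, ft = subseq_features(s, r, lam), subseq_features(t, r, lam)
    return sum(w * ft.get(u, 0.0) for u, w in fs.items())

# "cat" has subsequences ca (0.25), ct (0.125), at (0.25) at lam = 0.5:
assert abs(subseq_kernel("cat", "cat", 2, 0.5) - 0.140625) < 1e-12
```

Note that this enumeration is O(|s|^r) and is only practical for short strings and small r; the recursion in [7] avoids the explicit feature map.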
Sometimes, one would also like to apply the loss given by an (arbitrary) distance matrix
D of the loss between training examples, i.e. where D_ij = L(y_i, y_j). In general it
is not always obvious how to find an embedding of such data in a Euclidean space (in
order to apply kernels). However, one such method is to compute the inner product
with [11, Proposition 2.27]:

    ℓ(y_i, y_j) = −½ ( D_ij² − Σ_p c_p D_ip² − Σ_q c_q D_qj² + Σ_{p,q} c_p c_q D_pq² ),    (5)

where the coefficients c_i satisfy Σ_i c_i = 1 (e.g. using c_i = 1/m for all i, which amounts
to using the centre of mass as an origin). See also [3] for ways of dealing with
problems of embedding distances when equation (5) will not suffice.
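Equation (5) with c_i = 1/m is the classical double centering of multidimensional scaling; a short sketch (function name ours) recovers the centered Gram matrix of Euclidean points from their distance matrix alone:

```python
import numpy as np

def inner_from_distances(D, c=None):
    """Inner products from a distance matrix, equation (5).
    With c_i = 1/m this is classical MDS double centering."""
    m = D.shape[0]
    c = np.full(m, 1.0 / m) if c is None else c
    D2 = D ** 2
    row = D2 @ c          # sum_p c_p D_ip^2, one entry per i
    tot = c @ D2 @ c      # sum_{p,q} c_p c_q D_pq^2
    return -0.5 * (D2 - row[:, None] - row[None, :] + tot)

# Sanity check on Euclidean points: the centered Gram matrix is recovered.
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
K = inner_from_distances(D)
Yc = Y - Y.mean(axis=0)
assert np.allclose(K, Yc @ Yc.T)
```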
3 Algorithm
Now we will describe the algorithm for performing KDE. We wish to minimize the
risk function (1) using the feature space F induced by the kernel k and the loss
function measured in the space L induced by the kernel ℓ. To do this we must learn
the mapping from Φ_k(x) to Φ_ℓ(y). Our solution is the following: decompose Φ_ℓ(y)
into p orthogonal directions using kernel principal components analysis (KPCA)
(see, e.g. [11, Chapter 14]). One can then learn the mapping from Φ_k(x) to each
direction independently using a standard kernel regression method, e.g. SVM regression
[12] or kernel ridge regression [9]. Finally, to output an estimate y given a test
example x one must solve a pre-image problem, as the solution of the algorithm is
initially a solution in the space L. We will now describe each step in detail.
1) Decomposition of outputs. Let us construct the kernel matrix L on the
training data such that L_ij = ℓ(y_i, y_j), and perform kernel principal components
analysis on L. This can be achieved by centering the data in feature space using
V = (I − (1/m) 1_m 1_mᵀ) L (I − (1/m) 1_m 1_mᵀ), where I is the m-dimensional identity
matrix and 1_m is an m-dimensional vector of ones. One then solves the eigenvalue
problem λ^n α^n = V α^n, where α^n is the nth eigenvector of V, which we normalize
such that 1 = (α^n · V α^n) = λ^n (α^n · α^n). We can then compute the
projection of Φ_ℓ(y) onto the nth principal component v^n = Σ_{i=1}^m α_i^n Φ_ℓ(y_i) by
(v^n · Φ_ℓ(y)) = Σ_{i=1}^m α_i^n ℓ(y_i, y).
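The decomposition step can be sketched directly in code; the function name and return convention are ours, and the test uses a rank-2 linear output kernel so that p = 2 components reproduce the centered kernel matrix exactly:

```python
import numpy as np

def output_kpca(L, p):
    """Kernel PCA on the output kernel matrix L (step 1).  Returns
    (alphas, projections): alphas[:, n] holds the expansion coefficients
    of the n-th principal direction v^n, scaled so that
    lambda_n * (alpha^n . alpha^n) = 1; projections[i, n] is the
    projection (v^n . Phi_l(y_i)) of the i-th centered training output."""
    m = L.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m
    V = J @ L @ J                        # centering in feature space
    vals, vecs = np.linalg.eigh(V)
    top = np.argsort(vals)[::-1][:p]     # keep the p largest eigenvalues
    lam, A = vals[top], vecs[:, top]
    A = A / np.sqrt(lam)                 # enforce 1 = lambda_n (a^n . a^n)
    return A, V @ A

# With a rank-2 linear output kernel, two components capture everything:
Y = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0], [2.0, 1.0]])
L = Y @ Y.T
A, P = output_kpca(L, 2)
J = np.eye(4) - np.ones((4, 4)) / 4
assert np.allclose(P @ P.T, J @ L @ J)   # projections reproduce V
```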
2) Learning the map. We can now learn the map from Φ_k(x) to
((v¹ · Φ_ℓ(y)), ..., (v^p · Φ_ℓ(y))), where p is the number of principal components. One can
learn the map by estimating each output independently. In our experiments we
use kernel ridge regression [9]; note that this requires only a single matrix inversion
to learn all p directions. That is, we minimize with respect to w the function
(1/m) Σ_{i=1}^m (ỹ_i − (w · Φ_k(x_i)))² + γ ‖w‖² in its dual form. We thus learn each output
direction (v^n · Φ_ℓ(y)) using the kernel matrix K_ij = k(x_i, x_j) and the training labels
ỹ_i = (v^n · Φ_ℓ(y_i)), with estimator f_n(x):

    f_n(x) = Σ_{i=1}^m β_i k(x_i, x).    (6)
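In dual form, kernel ridge regression reduces to one linear solve; with the 1/m-scaled squared loss above the coefficients satisfy (K + γmI)β = ỹ (other formulations scale the ridge term differently). A minimal sketch, with function names of our choosing; by passing a matrix of targets, one factorization serves all p output directions at once:

```python
import numpy as np

def kernel_ridge_fit(K, targets, gamma):
    """Dual kernel ridge regression (step 2): solve (K + gamma*m*I) beta = y.
    `targets` may have one column per output direction."""
    m = K.shape[0]
    return np.linalg.solve(K + gamma * m * np.eye(m), targets)

def kernel_ridge_predict(K_test, beta):
    """f_n(x) = sum_i beta_i k(x_i, x), equation (6);
    K_test[a, i] = k(x_i, x_a)."""
    return K_test @ beta

# With gamma -> 0 and an invertible Gram matrix, training targets
# are interpolated exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
y = rng.standard_normal((6, 2))          # two output directions at once
beta = kernel_ridge_fit(K, y, 1e-12)
assert np.allclose(kernel_ridge_predict(K, beta), y, atol=1e-6)
```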
3) Solving the pre-image problem. During the testing phase, to obtain the
estimate y for a given x it is now necessary to find the pre-image of the given
output Φ_ℓ(y). This can be achieved by finding

    ŷ(x) = argmin_{y ∈ Y} ‖ ((v¹ · Φ_ℓ(y)), ..., (v^p · Φ_ℓ(y))) − (f_1(x), ..., f_p(x)) ‖.

For the kernel (3) it is possible to compute the solution explicitly. For other
problems searching over a set of candidate solutions may be enough, e.g. the set
of training set outputs y_1, ..., y_m; in our experiments we use this set. When more
accurate solutions are required, several algorithms exist for finding approximate
pre-images, e.g. via fixed-point iteration methods; see [10] or [11, Chapter 18] for an
overview.
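The candidate-set variant of this pre-image search is a one-liner over precomputed projections; the function name and the toy candidate projections below are ours:

```python
def preimage_from_candidates(pred, cand_proj, candidates):
    """Approximate pre-image (step 3): among candidate outputs (e.g. the
    training outputs), return the one whose projection vector
    ((v^1 . Phi(y)), ..., (v^p . Phi(y))) is closest to the prediction
    (f_1(x), ..., f_p(x))."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(range(len(candidates)),
               key=lambda i: sq_dist(cand_proj[i], pred))
    return candidates[best]

cand_proj = [(0.0, 1.0), (2.0, 0.0), (1.0, 1.0)]   # projections of 3 outputs
candidates = ['aabc', 'abad', 'dbbd']
assert preimage_from_candidates((1.9, 0.2), cand_proj, candidates) == 'abad'
```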
For the simple case of vectorial outputs with the linear kernel (3), if the output is only
one dimension the method of KDE boils down to the same solution as using ridge
regression, since the matrix L is rank 1 in this case. However, when there are d
outputs, the rank of L is d and the method trains ridge regression d times, but the
kernel PCA step first decorrelates the outputs. Thus, in the special case of multiple
output regression with a linear kernel, the method is also related to the work of
[2] (see [4, page 73] for an overview of other multiple output regression methods).
In the case of classification, the method is related to Kernel Fisher Discriminant
Analysis (KFD) [8].
4 Experiments
In the following we validate our method with several experiments. In the experiments
we chose the parameters of KDE from the following sets: the kernel width σ from
{10⁻³, 10⁻², 10⁻¹, 10⁰, 10¹, 10², 10³} and the ridge parameter
γ from {10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 10⁰, 10¹}. We chose them by five fold cross validation.
4.1 Mapping from strings to strings
Toy problem. Three classes of strings consist of letters from the same alphabet of
4 letters (a, b, c, d), and strings from all classes are generated with a random length
between 10 and 15. Strings from the first class are generated by a model where
transitions from any letter to any other letter are equally likely. The output is the
string abad, corrupted with the following noise. There is a probability of 0.3 of
a random insertion of a random letter, and a probability of 0.15 of two random
insertions. After the potential insertions there is a probability of 0.3 of a random
deletion, and a probability of 0.15 of two random deletions. In the second class,
transitions from one letter to itself (so the next letter is the same as the last) have
probability 0.7, and all other transitions have probability 0.1. The output is the
string dbbd, but corrupted with the same noise as for class one. In the third class
only the letters c and d are used; transitions from one letter to itself have probability
0.7. The output is the string aabc, but corrupted with the same noise as for class
one. For classes one and two any starting letter is equally likely; for the third class
only c and d are (equally probable) starting letters.
input string        output string
ccdddddddd       →  aabc
dccccdddcd       →  abc
adddccccccccc    →  bb
bbcdcdadbad      →  aebad
cdaaccadcbccdd   →  abad

Figure 1: Five examples from our artificial task (mapping strings to strings).
The task is to predict the output string given the input string. Note that this is
almost like a classification problem with three classes, apart from the noise on the
outputs. This construction was employed so we can also calculate classification error
as a sanity check. We use the string subsequence kernel (4) from [7] for both inputs
and outputs, normalized such that k̃(x, x') = k(x, x') / √(k(x, x) k(x', x')). We
chose the parameters r = 3 and λ = 0.01. In the space induced by the input kernel
k we then chose a further nonlinear map using an RBF kernel: exp(−(k(x, x) +
k(x', x') − 2k(x, x')) / 2σ²).
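Both constructions used here, the normalization and the RBF over a kernel-induced distance, are generic transformations of an arbitrary base kernel; a small sketch with our own helper names:

```python
import math

def normalize_kernel(k):
    """k~(x, x') = k(x, x') / sqrt(k(x, x) k(x', x'))."""
    return lambda x, xp: k(x, xp) / math.sqrt(k(x, x) * k(xp, xp))

def rbf_on_kernel(k, sigma):
    """RBF over the feature-space distance induced by k:
    exp(-(k(x,x) + k(x',x') - 2 k(x,x')) / (2 sigma^2))."""
    return lambda x, xp: math.exp(
        -(k(x, x) + k(xp, xp) - 2.0 * k(x, xp)) / (2.0 * sigma ** 2))

dot = lambda x, xp: sum(a * b for a, b in zip(x, xp))
kn = normalize_kernel(dot)
kr = rbf_on_kernel(kn, 1.0)
assert abs(kn((3.0, 0.0), (0.0, 4.0))) < 1e-12        # orthogonal -> 0
assert abs(kn((2.0, 0.0), (5.0, 0.0)) - 1.0) < 1e-12  # parallel -> 1
assert abs(kr((1.0, 0.0), (1.0, 0.0)) - 1.0) < 1e-12  # zero distance -> 1
```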
We generated 200 such strings and measured the success by calculating the mean
and standard error of the loss (computed via the output kernel) over 4 fold cross
validation. We chose σ (the width of the RBF kernel) and γ (the ridge parameter)
on each trial via a further level of 5 fold cross validation. We compare our method
to an adaptation of k-nearest neighbors for general outputs: if k = 1 it returns the
output of the nearest neighbor; otherwise it returns the linear combination (in the
space of outputs) of the k nearest neighbors (in input space). In the case of k > 1,
as well as for KDE, we find a pre-image by finding the closest training example
output to the given solution. We choose k again via a further level of 5 fold cross
validation. The mean results, and their standard errors, are given in Table 1.
                      KDE              k-NN
string loss           0.676 ± 0.030    0.985 ± 0.029
classification loss   0.125 ± 0.012    0.205 ± 0.026

Table 1: Performance of KDE and k-NN on the string to string mapping problem.
4.2 Multi-class classification problem
We next tried a multi-class classification problem, a simple special case of the general
dependency estimation problem. We performed 5-fold cross validation on 1000 digits
(the first 100 examples of each digit) of the USPS handwritten 16x16 pixel digit
database, training with a single fold (200 examples) and testing on the remainder.
We used an RBF kernel for the inputs and the zero-one multi-class classification
loss for the outputs using kernel (2). We again compared to k-NN and also to
1-vs-rest Support Vector Machines (SVMs) (see, e.g. [11, Section 7.6]). We found k
for k-NN and σ and γ for the other methods (we employed a ridge also for the SVM
method, resulting in a squared error penalization term) by another level of 5-fold
cross validation. The results are given in Table 2. SVMs and KDE give similar
results (this is not too surprising since KDE gives a rather similar solution to KFD,
whose similarity to SVMs in terms of performance has been shown before [8]). Both
SVM and KDE outperform k-NN.
                      KDE               1-vs-rest SVM     k-NN
classification loss   0.0798 ± 0.0067   0.0847 ± 0.0064   0.1250 ± 0.0075

Table 2: Performance of KDE, 1-vs-rest SVMs and k-NN on a classification problem
of handwritten digits.
4.3 Image reconstruction
We then considered a problem of image reconstruction: given the top half (the first
8 pixel lines) of a USPS postal digit, it is required to estimate what the bottom
half will be (we thus ignored the original labels of the data).² The loss function we
choose for the outputs is induced by an RBF kernel. The reason for this is that
a penalty that is only linear in y would encourage the algorithm to choose images
that are "in between" clearly readable digits. Hence, the difficulty in this task is
both choosing a good loss function (to reflect the end user's objectives) as well as
an accurate estimator. We chose the width σ' of the output RBF kernel which
maximized the kernel alignment [1] with a target kernel generated via k-means
clustering. We chose k = 30 clusters, and the target kernel is K_ij = 1 if x_i and x_j
are in the same cluster, and 0 otherwise. Kernel alignment is then calculated via
A(K_1, K_2) = ⟨K_1, K_2⟩_F / √(⟨K_1, K_1⟩_F ⟨K_2, K_2⟩_F), where
⟨K, K'⟩_F = Σ_{i,j=1}^m K_ij K'_ij is the Frobenius dot product; this gave σ' = 0.35.
For the inputs we use an RBF kernel of width σ.
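The alignment score is a normalized Frobenius inner product and is immediate to compute; a minimal sketch with our own function names, checked for its two defining properties (self-alignment 1, scale invariance):

```python
import math

def frobenius(K1, K2):
    """Frobenius inner product <K1, K2>_F = sum_ij K1_ij K2_ij."""
    return sum(a * b
               for row1, row2 in zip(K1, K2)
               for a, b in zip(row1, row2))

def alignment(K1, K2):
    """Kernel alignment A(K1, K2) = <K1,K2>_F / sqrt(<K1,K1>_F <K2,K2>_F)."""
    return frobenius(K1, K2) / math.sqrt(frobenius(K1, K1) * frobenius(K2, K2))

K = [[1.0, 0.2], [0.2, 1.0]]
K2 = [[2.0, 0.4], [0.4, 2.0]]
assert abs(alignment(K, K) - 1.0) < 1e-12    # perfect self-alignment
assert abs(alignment(K, K2) - 1.0) < 1e-12   # invariant to scaling
```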
We again performed 5-fold cross validation on the first 1000 digits of the USPS
handwritten 16x16 pixel digit database, training with a single fold (200 examples)
and testing on the remainder, comparing KDE to k-NN and a Hopfield net.³ The
Hopfield network we used was the one of [6] implemented in the Neural Network
Toolbox for Matlab. It is a generalization of standard Hopfield nets that has a
nonlinear transfer function and can thus deal with scalars between −1 and +1;
after building the network based on the (complete) digits of the training set we
present the top half of test digits and fill the bottom half with zeros, and then
find the network's equilibrium point. We then chose as output the pre-image from
the training data that is closest to this solution (thus the possible outputs are the
²A similar problem, of higher dimensionality, would be to learn the mapping from top
half to complete digit.
³Note that training a naive regressor on each pixel output independently would not
take into account that the combination of pixel outputs should resemble a digit.
Figure 2: Errors in the digit database image reconstruction problem. Images
have to be estimated using only the top half (first 8 rows of pixels) of the original
image (top row) by KDE (middle row) and k-NN (bottom row). We show
all the test examples on the first fold of cross validation where k-NN makes
an error in estimating the correct digit whilst KDE does not (73 mistakes) and
vice versa (23 mistakes). We chose them by viewing the complete results by
eye (and are thus somewhat subjective). The complete results can be found at
http://www.kyb.tuebingen.mpg.de/bs/people/weston/kde/kde.html.
same as the competing algorithms). We found σ and γ for KDE and k for k-NN by
another level of 5-fold cross validation. The results are given in Table 3.
           KDE               k-NN              Hopfield net
RBF loss   0.8384 ± 0.0077   0.8960 ± 0.0052   1.2190 ± 0.0072

Table 3: Performance of KDE, k-NN and a Hopfield network on an image reconstruction
problem of handwritten digits.
KDE outperforms k-NN and Hopfield nets on average; see Figure 2 for a comparison
with k-NN. Note that we cannot easily compare classification rates on this problem
using the pre-images selected, since KDE outputs are not correlated well with the
labels. For example it will use the bottom stalk of a digit "7" or a digit "9" equally
if they are identical, whereas k-NN will not: in the region of the input space which
is the top half of "9"s it will only output the bottom half of "9"s. This explains why
measuring the class of the pre-images compared to the true class as a classification
problem yields a lower loss for k-NN, 0.2345 ± 0.0058, compared to KDE, 0.2985 ±
0.0147, and Hopfield nets, 0.5910 ± 0.0137. Note that if we performed classification as
in Section 4.2 but using only the first 8 pixel rows then k-NN yields 0.2345 ± 0.0058,
but KDE yields 0.1878 ± 0.0098 and 1-vs-rest SVMs yield 0.1942 ± 0.0097, so k-NN
does not adapt well to the given learning task (loss function).
Finally, we note that nothing was stopping us from incorporating known invariances
into our loss function in KDE via the kernel. For example we could have used a
kernel which takes into account local patches of pixels rendering spatial information,
or jittered kernels which take into account chosen transformations (translations,
rotations, and so forth). It may also be useful to add virtual examples to the
output matrix L before the decomposition step. For an overview of incorporating
invariances see [11, Chapter 11] or [12].
5 Discussion
We have introduced a kernel method of learning general dependencies. We also gave
some first experiments indicating the usefulness of the approach. There are many
applications of KDE to explore: problems with complex outputs (natural language
parsing, image interpretation/manipulation, ...), applying special cost functions
(e.g. ROC scores), and settings where prior knowledge can be encoded in the outputs.
In terms of further research, we feel there are also still many possibilities to explore
in terms of algorithm development. We admit that in this work we have a very simplified
algorithm for the pre-image part (just choosing the closest image from the
training sample). To make the approach work on more complex problems (where
a test output is not so trivially close to a training output) improved pre-image
approaches should be applied. Although one can apply techniques such as [10] for
vector based pre-images, efficiently finding pre-images for structured objects such
as strings is an open problem. Finally, the algorithm should be extended to deal
with non-Euclidean loss functions directly, e.g. for classification with a general cost
matrix. One naive way is to use a distance matrix directly, ignoring the PCA step.
References
[1] N. Cristianini, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment.
Technical Report 2001-087, NeuroCOLT, 2001.
[2] I. Frank and J. Friedman. A statistical view of some chemometrics regression tools.
Technometrics, 35(2):109-147, 1993.
[3] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on
pairwise proximity data. NIPS, 11:438-444, 1999.
[4] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning.
Springer-Verlag, New York, 2001.
[5] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10,
Computer Science Department, University of California at Santa Cruz, 1999.
[6] J. Li, A. N. Michel, and W. Porod. Analysis and synthesis of a class of neural
networks: linear systems operating on a closed hypercube. IEEE Trans. on Circuits
and Systems, 36(11):1405-1422, 1989.
[7] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text
classification using string kernels. Journal of Machine Learning Research, 2:419-444, 2002.
[8] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant
analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors,
Neural Networks for Signal Processing IX, pages 41-48. IEEE, 1999.
[9] C. Saunders, V. Vovk, and A. Gammerman. Ridge regression learning algorithm in
dual variables. In J. Shavlik, editor, Machine Learning: Proceedings of the Fifteenth
International Conference (ICML '98), San Francisco, CA, 1998. Morgan Kaufmann.
[10] B. Schölkopf, S. Mika, C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. J.
Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions
on Neural Networks, 10(5):1000-1017, 1999.
[11] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[12] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
Boosting Density Estimation
Saharon Rosset
Department of Statistics
Stanford University
Stanford, CA, 94305
[email protected]
Eran Segal
Computer Science Department
Stanford University
Stanford, CA, 94305
[email protected]
Abstract
Several authors have suggested viewing boosting as a gradient descent search for
a good fit in function space. We apply gradient-based boosting methodology to
the unsupervised learning problem of density estimation. We show convergence
properties of the algorithm and prove that a strength of weak learnability property applies to this problem as well. We illustrate the potential of this approach
through experiments with boosting Bayesian networks to learn density models.
1 Introduction
Boosting is a method for incrementally building linearcombinations
of ?weak? models,
to generate a ?strong? predictive model. Given data
, a basis (or dictionary) of
and a loss function
weak
,
a boosting
algorithm sequentially
#"$models
"
learners
finds
to minimize
. Ad and constants
!
aBoost [6], the original
boosting
algorithm,
was
specifically
devised
for
the
task
of
classi%'&
*$+,-"
+,.,/1020
&
+2
#"$"
fication, where
)(
with
and
. AdaBoost
3 4(
sequentially fits weak learners on re-weighted versions of the data, where the weights are
determined according to the performance of the model so far, emphasizing the more
"challenging" examples. Its inventors attribute its success to the "boosting" effect which the
linear combination of weak learners achieves, when compared to their individual
performance. This effect manifests itself both in training data performance, where the boosted
model can be shown to converge, under mild conditions, to ideal training classification, and
in generalization error, where the success of boosting has been attributed to its "separating",
or margin maximizing, properties [18].
It has been shown [8, 13] that AdaBoost can be described as a gradient descent algorithm,
where the weights in each step of the algorithm correspond to the gradient of an exponential
loss function at the "current" fit. In a recent paper, [17] show that the margin maximizing
properties of AdaBoost can be derived in this framework as well. This view of boosting
as gradient descent has allowed several authors [7, 13, 21] to suggest "gradient boosting
machines" which apply to a wider class of supervised learning problems and loss functions
than the original AdaBoost. Their results have been very promising.
In this paper we apply gradient boosting methodology to the unsupervised learning
problem of density estimation, using the negative log-likelihood loss criterion

    L({x_1, ..., x_n}, F) = −Σ_{i=1}^n log F(x_i).

The density estimation problem has been studied extensively in many
contexts using various parametric and non-parametric approaches [2, 5]. A particular
framework which has recently gained much popularity is that of Bayesian networks [11],
whose main strength stems from their graphical representation, allowing for highly interpretable models. More recently, researchers have developed methods for learning Bayesian
networks from data including learning in the context of incomplete data. We use Bayesian
networks as our choice of weak learners, combining the models using the boosting methodology. We note that several researchers have considered learning weighted mixtures of
networks [14], or ensembles of Bayesian networks combined by model averaging [9, 20].
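The negative log-likelihood criterion above is simple to state in code; a one-line sketch (names ours), with densities represented as callables:

```python
import math

def neg_log_likelihood(F, data):
    """Negative log-likelihood of a density model F on the sample."""
    return -sum(math.log(F(x)) for x in data)

uniform = lambda x: 1.0          # uniform density on [0, 1]
peaked = lambda x: 2.0 * x       # density favouring large x
data = [0.8, 0.9, 0.7]
# A model putting more mass where the data lies scores a lower loss.
assert neg_log_likelihood(peaked, data) < neg_log_likelihood(uniform, data)
```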
We describe a generic density estimation boosting algorithm, following the approach
of
[13]. The main idea is to identify, at each boosting iteration, the basis function
which gives the largest ?local? improvement in the loss at the current fit. Intuitively,
assigns higher probability to instances that received low probability by the current model. A
line search is then used to find an appropriate coefficient for the newly selected function,
and it is added to the current model.
We provide a theoretical analysis of our density estimation boosting algorithm, showing an
explicit condition, which if satisfied, guarantees that adding a weak learner to the model improves the training set loss. We also prove a ?strength of weak learnability? theorem which
gives lower bounds on overall training loss improvement as a function of the individual
weak learners? performance on re-weighted versions of the training data.
We describe the instantiation of our generic boosting algorithm for the case of using
Bayesian networks as our basis of weak learners and provide experimental results on
two distinct data sets, showing that our algorithm achieves higher generalization on unseen
data as compared to a single Bayesian network and one particular ensemble of Bayesian
networks. We also show that our theoretical criterion for a weak learner to improve the
overall model applies well in practice.
2 A density estimation boosting algorithm
" &
"
At each step in a boosting algorithm, the model built so far is:
.
and add it to our& model
with
If we now choose a weak learner
a
small
coefficient
, then developing the training loss of the new model
in
a
Taylor
series
around the loss at gives
5
"*"7&
5
#"$"
$" "
#"
"
which in the case of negative log-likelihood loss can be written as

    −Σ_i log(F_{t−1}(x_i) + ε f(x_i)) = −Σ_i log F_{t−1}(x_i) − ε Σ_i f(x_i)/F_{t−1}(x_i) + O(ε²).
Since ε is small, we can ignore the second order term and choose the next boosting step
f to maximize Σ_i f(x_i)/F_{t−1}(x_i). We are thus finding the first order optimal weak learner,
which gives the "steepest descent" in the loss at the current model predictions. However,
we should note that once ε becomes non-infinitesimal, no "optimality" property can be
claimed for this selected f.
The main idea of gradient-based generic boosting algorithms, such as AnyBoost [13] and GradientBoost [7], is to utilize this first order approach to find, at each step, the weak learner which gives good improvement in the loss and then follow the "direction" of this weak learner to augment the current model. The step size $c_t$ is determined in various ways in the different algorithms, the most popular choice being line search, which we adopt here.

When we consider applying this methodology to density estimation, where the basis $\mathcal{F}$ is comprised of probability distributions and the overall model is a probability distribution as well, we cannot simply augment the model, since $F_{t-1} + c_t f_t$ will no longer be a probability distribution. Rather, we consider a step of the form

$F_t = (1 - c_t) F_{t-1} + c_t f_t,$

where $0 \le c_t \le 1$. It is easy to see that the first order theory of gradient boosting and the line search solution apply to this formulation as well.
If at some stage $t$, the current $F_t$ cannot be improved by adding any of the weak learners as above, the algorithm terminates, and we have reached a global minimum. This can only happen if the derivative of the loss at the current model with respect to the coefficient of each weak learner is non-negative:

$\frac{\partial}{\partial c} L\big((1-c)F_t + c f\big)\Big|_{c=0} = \sum_i \frac{F_t(x_i) - f(x_i)}{F_t(x_i)} \ge 0, \quad \forall f \in \mathcal{F}.$

Thus, the algorithm terminates if no $f \in \mathcal{F}$ gives $\sum_i f(x_i)/F_t(x_i) > n$ (see section 3 for proof and discussion).
The resulting generic gradient boosting algorithm for density estimation can be seen in Fig. 1. Implementation details for this algorithm include the choice of the family of weak learners $\mathcal{F}$, and the method for searching for $f_t$ at each boosting iteration. We address these details in Section 4.

1. Set $F_0$ to uniform on the domain of $x$
2. For t = 1 to T
   (a) Set $w_i = 1 / F_{t-1}(x_i)$
   (b) Find $f_t \in \mathcal{F}$ to maximize $\sum_i w_i f_t(x_i)$
   (c) If $\sum_i w_i f_t(x_i) \le n$, break.
   (d) Find $c_t = \arg\min_c \sum_i -\log\big((1-c) F_{t-1}(x_i) + c f_t(x_i)\big)$
   (e) Set $F_t = (1 - c_t) F_{t-1} + c_t f_t$
3. Output the final model $F_T$

Figure 1: Boosting density estimation algorithm
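To make the loop concrete, here is a minimal NumPy sketch of the generic algorithm (our illustration, not the paper's implementation). The weak learners are fully factored multinomials, i.e. edge-free Bayesian networks fit by weighted maximum likelihood as in the surrogate step discussed in Section 4, and the coefficient $c_t$ is found by a grid line search; all function names are ours:

```python
import numpy as np

def fit_weak(data, w, n_vals, alpha=1.0):
    """Weighted-MLE product of marginals: an edge-free 'weak' Bayesian network."""
    marginals = []
    for j in range(data.shape[1]):
        counts = np.bincount(data[:, j], weights=w, minlength=n_vals) + alpha
        marginals.append(counts / counts.sum())
    return marginals

def density(marginals, data):
    """Evaluate the product-of-marginals density at each data point."""
    p = np.ones(len(data))
    for j, m in enumerate(marginals):
        p *= m[data[:, j]]
    return p

def boost_density(data, n_vals, T=10):
    n, d = data.shape
    F = np.full(n, float(n_vals) ** -d)      # step 1: F_0 uniform on the domain
    nll = [-np.log(F).mean()]                # mean negative log-likelihood trace
    for t in range(T):
        w = 1.0 / F                                      # step 2(a)
        f = density(fit_weak(data, w, n_vals), data)     # step 2(b) (surrogate)
        if (w * f).sum() <= n:                           # step 2(c): stop
            break
        cs = np.linspace(0.001, 0.999, 999)              # step 2(d): line search
        losses = np.array([-np.log((1 - c) * F + c * f).mean() for c in cs])
        best = int(losses.argmin())
        if losses[best] >= nll[-1]:          # grid found no improving step
            break
        F = (1 - cs[best]) * F + cs[best] * f            # step 2(e)
        nll.append(losses[best])
    return nll

# toy data: two correlated ternary variables with skewed marginals
rng = np.random.default_rng(0)
x0 = rng.choice(3, size=400, p=[0.5, 0.3, 0.2])
x1 = np.where(rng.random(400) < 0.8, x0, rng.choice(3, size=400))
nll = boost_density(np.column_stack([x0, x1]), n_vals=3)
```

Note that the boosted model, a mixture of edge-free networks, can capture the dependence between the two variables even though no single weak learner in this family can.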
3 Training data performance

The concept of "strength of weak learnability" [6, 18] has been developed in the context of boosting classification models. Conceptually, this property can be described as follows: "if for any weighting of the training data, there is a weak learner which achieves weighted training error slightly better than random guessing on the re-weighted version of the data using these weights, then the combined boosted learner will have vanishing error on the training data".

In classification, this concept is realized elegantly. At each step in the AdaBoost algorithm, the weighted error of the previous model, using the new weights, is exactly 1/2. Thus, the new weak learner doing "better than random" on the re-weighted data means it can improve the previous weak learner's performance at the current fit, by achieving weighted classification error better than 1/2. In fact it is easy to show that the weak learnability condition of at least one weak learner attaining classification error less than 1/2 on the re-weighted data does not hold only if the current combined model is the optimal solution in the space of linear combinations of weak learners.
We now derive a similar formulation for our density estimation boosting algorithm. We start with a quantitative description of the performance of the previous weak learner $f_t$ at the combined model $F_t$, given in the following lemma:

Lemma 1 Using the algorithm of section 2 we get: $\sum_i \frac{f_t(x_i)}{F_t(x_i)} = n$, where $n$ is the number of training examples.

Proof: The line search (step 2(d) in the algorithm) implies:

$\frac{\partial}{\partial c} \sum_i -\log\big((1-c)F_{t-1}(x_i) + c f_t(x_i)\big)\Big|_{c=c_t} = 0 \;\Rightarrow\; \sum_i \frac{f_t(x_i) - F_{t-1}(x_i)}{F_t(x_i)} = 0.$

Since $n = \sum_i \frac{F_t(x_i)}{F_t(x_i)} = (1-c_t)\sum_i \frac{F_{t-1}(x_i)}{F_t(x_i)} + c_t \sum_i \frac{f_t(x_i)}{F_t(x_i)}$, and the two sums on the right are equal by the line search condition, we get $\sum_i f_t(x_i)/F_t(x_i) = n$.
Lemma 1 allows us to derive the following stopping criterion (or optimality condition) for the boosting algorithm, illustrating that in order to improve training set loss, the new weak learner only has to exceed the previous one's performance at the current fit.

Theorem 1 If there does not exist a weak learner $f \in \mathcal{F}$ such that $\sum_i \frac{f(x_i)}{F_t(x_i)} > n$, then $F_t$ is the global minimum in the domain of normalized linear combinations of $\mathcal{F}$.

Proof: This is a direct result of the optimality conditions for a convex function (in this case the negative log-likelihood) in a compact domain.

So unless we have reached the global optimum in the simplex within $\mathcal{F}$ (which will generally happen quickly only if $\mathcal{F}$ is very small, i.e. the "weak" learners are very weak), we will have some weak learners doing better than "random" and attaining $\sum_i \frac{f(x_i)}{F_t(x_i)} > n$.
If this is indeed the case, we can derive an explicit lower bound for training set loss improvement as a function of the new weak learner's performance at the current model:

Theorem 2 Assume:
1. The sequence of selected weak learners in the algorithm of section 2 attains $\sum_i \frac{f_t(x_i)}{F_{t-1}(x_i)} = n + \epsilon_t$ with $\epsilon_t > 0$;
2. the model probabilities are bounded away from zero, i.e. $F_t(x_i) \ge \delta > 0$ for all $i$ and $t$.

Then the training set loss decreases at step $t$ by an amount bounded from below by a positive quantity proportional to $\epsilon_t^2$.

Proof (sketch): Write $g(c) = \sum_i -\log\big((1-c)F_{t-1}(x_i) + c f_t(x_i)\big)$, so that $g(0) = L(F_{t-1})$, $g(c_t) = L(F_t)$ and, by assumption 1, $g'(0) = -\epsilon_t$. Assumption 2 bounds the curvature along the search interval, $\sup_c g''(c) \le n/\delta^2$. Since the line search minimizes the convex function $g$, bounding $g$ by its second order expansion and optimizing over $c$ yields $L(F_{t-1}) - L(F_t) \ge \epsilon_t^2 / (2 \sup_c g''(c))$, a positive quantity proportional to $\epsilon_t^2$.
The second assumption of theorem 2 may not seem obvious but it is actually quite mild. With a bit more notation we could get rid of the need to lower bound $F_t(x_i)$ completely. We can see intuitively that a boosting algorithm will not let any observation have exceptionally low probability over time, since that would cause this observation to have overwhelming weight in the next boosting iteration, and hence the next selected $f_t$ is certain to give it high probability. Thus, after some iterations we can assume that we would actually have a threshold $\delta$ independent of the iteration number, and hence the loss would decrease at least as the sum of squares of the "weak learnability" quantities $\epsilon_t$.
4 Boosting Bayesian Networks
We now focus our attention on a specific application of the boosting methodology for density estimation, using Bayesian networks as the weak learners. A Bayesian network is a
graphical model for describing a joint distribution over a set of random variables. Recently,
there has been much work on developing algorithms for learning Bayesian networks (both
network structure and parameters) from data for the task of density estimation and hence
they seem appropriate as our choice of weak learners. Another advantage of Bayesian networks in our context is the ability to tune the strength of the weak learners using parameters
such as number of edges and strength of prior.
Assume we have categorical data $x_1, \ldots, x_n$ in a domain $X$, where each of the observations contains assignments to $d$ variables. We rewrite step 2(b) of the boosting algorithm as:

Find $f_t \in \mathcal{F}$ to maximize $\sum_{x \in X} w_x f_t(x)$, where $w_x = \sum_{i:\, x_i = x} \frac{1}{F_{t-1}(x)}$.

In this formulation, all possible values of $x$ have weights, some of which may be 0.
As mentioned above, the two main implementation-specific details in the generic density estimation algorithm are the set of weak models $\mathcal{F}$ and the method for searching for the "optimal" weak model at each boosting iteration. When boosting Bayesian networks, a natural way of limiting the "strength" of weak learners in $\mathcal{F}$ is to limit the complexity of the network structure in $\mathcal{F}$. This can be done, for instance, by bounding the number of edges in each "weak density estimator" learned during the boosting iterations.

The problem of finding an "optimal" weak model at each boosting iteration (step 2(b) of the algorithm) is trickier. We first note that if we only impose an $L_1$ constraint on the norm of $f$ (specifically, the PDF constraint $\sum_x f(x) = 1$), then step 2(b) has a trivial solution, concentrating all the probability at the value of $x$ with the highest "weight": the point mass at $\arg\max_x w_x$. This phenomenon is not limited to the density estimation case and would appear in boosting for classification if the set of weak learners had fixed $L_1$ norm, rather than the fixed $L_\infty$ norm implicitly imposed by limiting $\mathcal{F}$ to contain classifiers. This consequence of limiting $\mathcal{F}$ to contain probability distributions is particularly problematic when boosting Bayesian networks, since the point mass solution can be represented with a fully disconnected network. Thus, limiting $\mathcal{F}$ to "simple" structures by itself does not amend this problem.
However, the boosting algorithm does not explicitly require $\mathcal{F}$ to include only probability distributions. Let us consider instead a somewhat different family of candidate models, with an implicit $L_2$ size constraint, rather than $L_1$ as in the case of probability distributions (note that using an $L_\infty$ constraint as in AdaBoost is not possible, since the trivial optimal solution would be $f \equiv 1$). For the unconstrained "distribution" case (corresponding to a fully connected Bayesian network), this leads to re-writing step 2(b) of the boosting algorithm as:

(1) Find $f_t$ to maximize $\sum_x w_x f_t(x)$, subject to $\sum_x f_t(x)^2 = 1$.

By considering the Lagrange multiplier version of this problem it is easy to see that the optimal solution is $f_t(x) \propto w_x$, and it is proportional to the optimal solution of the log-likelihood maximization problem:

(2) Find $f_t$ to maximize $\sum_x w_x \log f_t(x)$, subject to $\sum_x f_t(x) = 1$,

given by $f_t(x) = w_x / \sum_{x'} w_{x'}$. This fact points to an interesting correspondence between solutions to $L_2$-constrained linear optimization problems and $L_1$-constrained log optimization problems, and leads us to believe that good solutions to step (1) of the boosting algorithm can be approximated by solving step (2) instead.
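This correspondence is a toy check away (our example, not from the paper): for any positive weight vector $w$, the $L_2$-constrained linear problem (1) is solved by $f \propto w$ (by Cauchy-Schwarz), and the simplex-constrained log problem (2) by the weighted MLE $f(x) = w_x/\sum_{x'} w_{x'}$, so the two solutions are proportional:

```python
import numpy as np

w = np.array([4.0, 3.0, 2.0, 1.0])      # domain weights w_x

# (1) maximize sum_x w_x f(x)   subject to  sum_x f(x)^2 = 1
f_l2 = w / np.linalg.norm(w)            # Lagrange condition gives f proportional to w

# (2) maximize sum_x w_x log f(x)  subject to  sum_x f(x) = 1
f_log = w / w.sum()                     # weighted MLE: f(x) = w_x / sum_x w_x

assert np.allclose(f_l2 / f_l2.sum(), f_log)   # the two solutions are proportional

# sanity check: the closed forms dominate random feasible competitors
rng = np.random.default_rng(0)
for _ in range(100):
    g = rng.uniform(0.1, 1.0, size=4)
    assert w @ f_l2 >= w @ (g / np.linalg.norm(g)) - 1e-12       # feasible for (1)
    assert w @ np.log(f_log) >= w @ np.log(g / g.sum()) - 1e-12  # feasible for (2)
```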
The formulation given in (2) presents us with a problem that is natural for Bayesian network learning, that of maximizing the log-likelihood (or in this case the weighted log-likelihood $\sum_x w_x \log f_t(x)$) of the data given the structure. Our implementation of the boosting algorithm, therefore, does indeed limit $\mathcal{F}$ to include probability distributions only, in this case those that can be represented by "simple" Bayesian networks. It solves a constrained version of step (2) instead of the original version (1). Note that this use of "surrogate" optimization tasks is not alien to other boosting
[Figure 2 appears here: four panels plotting average test log-likelihood and weak learnability against boosting iterations.]

Figure 2: (a) Comparison of boosting, single Bayesian network and AutoClass performance on the genomic expression dataset. The average log-likelihood for each test set instance is plotted. (b) Same as (a) for the census dataset. Results for AutoClass were omitted as they were not competitive in this domain (see text). (c) The weak learnability condition is plotted along with training data performance as a reference for the genomic expression dataset. The plot is in log-scale and also includes $\log(n)$, where $n$ is the number of training instances. (d) Same as (c) for the census dataset.
applications as well. For example, AdaBoost calls for optimizing a re-weighted classification problem at each step; decision trees, the most popular boosting weak learners, search for "optimal" solutions using surrogate loss functions, such as the Gini index for CART [3] or information gain for C4.5 [16].
5 Experimental Results
We evaluated the performance of our algorithms on two distinct datasets: a genomic expression dataset and a US census dataset. In gene expression data, the level of mRNA
transcript of every gene in the cell is measured simultaneously, using DNA microarray technology, allowing researchers to detect functionally related genes based on the
correlation of their expression profiles across the various experiments. We combined
three yeast expression data sets [10, 12, 19] for a total of 550 expression experiments.
To test our methods on a set of correlated variables, we selected 56 genes associated
with the oxidative phosphorlylation pathway in the KEGG database [1]. We discretized
the expression measurements of each gene into three levels (down, same, up) as in
[15]. We obtained the 1990 US census data set from the UC Irvine data repository
(http://kdd.ics.uci.edu/databases/census1990/USCensus1990.html). The data set includes
68 discretized attributes such as age, income, occupation, work status, etc. We randomly
selected 5k entries from the 2.5M available entries in the entire data set.
Each of the data sets was randomly partitioned into 5 equally sized sets and our boosting
algorithm was learned from each of the 5 possible combinations of 4 partitions. The performance of each boosting model was evaluated by measuring the log-likelihood achieved on
Avg.Log-Likelihood
Avg. Log-likelihood
-24.7
Boosting
-26
the data instances in the left out partition. We compared the performance achieved to that
of a single Bayesian network learned using standard techniques (see [11] and references
therein). To test whether our boosting approach gains its performance primarily by using
an ensemble of Bayesian networks, we also compared the performance to that achieved
by an ensemble of Bayesian networks learned using AutoClass [4], varying the number of
classes from 2 to 100. We report results for the setting of AutoClass achieving the best
performance. The results are reported as the average log-likelihood measured for each instance in the test data and summarized in Fig. 2(a,b). We omit the results of AutoClass
for the census data as they were not comparable to those of boosting and a single Bayesian network. As can be seen, our boosting algorithm performs significantly better, rendering each instance in the test data several times more likely than it is using the other approaches in the genomic and census datasets.
To illustrate the theoretical concepts discussed in Section 3, we recorded the performance of our boosting algorithm on the training set for both data sets. As shown in Section 3, if $\sum_i f_t(x_i)/F_{t-1}(x_i) > n$, then adding $f_t$ to the model is guaranteed to improve our training set performance. Theorem 2 relates the magnitude of this difference to the amount of improvement in training set performance. Fig. 2(c,d) plots the weak learnability quantity $\sum_i f_t(x_i)/F_{t-1}(x_i)$, the training set log-likelihood, and the threshold $n$ for both data sets on a log scale. As can be seen, the theory matches nicely, as the improvement is large when the weak learnability condition is large, and improvement stops entirely once the weak learnability quantity asymptotes to $n$.
Finally, boosting theory tells us that the effect of boosting is more pronounced for "weaker" weak learners. To that end, we experimented (data not shown) with various strength parameters for the family of weak learners (number of allowed edges in each Bayesian network, strength of prior). As expected, the overall effect of boosting was much stronger for weaker learners.
6 Discussion and future work
In this paper we extended the boosting methodology to the domain of density estimation
and demonstrated its practical performance on real world datasets. We believe that this direction shows promise and hope that our work will lead to other boosting implementations
in density estimation as well as other function estimation domains.
Our theoretical results include an exposition of the training data performance of the generic
algorithm, proving analogous results to those in the case of boosting for classification. Of
particular interest is theorem 1, implying that the idealized algorithm converges, asymptotically, to the global minimum. This result is interesting, as it implies that the greedy
boosting algorithm converges to the exhaustive solution. However, this global minimum is
usually not a good solution in terms of test-set performance, as it will tend to overfit (especially if $\mathcal{F}$ is not very small). Boosting can be described as generating a regularized path to
this optimal solution [17], and thus we can assume that points along the path will usually
have better generalization performance than the non-regularized optimum.
In Section 4 we described the theoretical and practical difficulties in solving the optimization step of the boosting iterations (step 2(b)). We suggested replacing it with a more easily solvable log-optimization problem, a replacement that can be partly justified by theoretical arguments. However, it will be interesting to formulate other cases where the original problem has non-trivial solutions, for instance by not limiting $\mathcal{F}$ to probability distributions only and using non-density-estimation algorithms to generate the "weak" models $f_t$.
The popularity of Bayesian networks as density estimators stems from their intuitive interpretation as describing causal relations in data. However, when learning the network
structure from data, a major issue is assigning confidence to the learned features. A potential use of boosting could be in improving interpretability and reducing instability in
structure learning. If the weak models in $\mathcal{F}$ are limited to a small number of edges, we can collect and interpret the "total influence" of edges in the combined model. This seems like
a promising avenue for future research, which we intend to pursue.
Acknowledgements We thank Jerry Friedman, Daphne Koller and Christian Shelton for
useful discussions. E. Segal was supported by a Stanford Graduate Fellowship (SGF).
References
[1] Kegg: Kyoto encyclopedia of genes and genomes. In http://www.genome.ad.jp/kegg.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Oxford,
U.K., 1995.
[3] L. Breiman, J.H. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.
Wardsworth International Group, 1984.
[4] P. Cheeseman and J. Stutz. Bayesian Classification (AutoClass): Theory and Results. AAAI
Press, 1995.
[5] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, New
York, 1973.
[6] Y. Freund and R.E. Schapire. A decision theoretic generalization of on-line learning and an application to boosting. In the 2nd European Conference on Computational Learning Theory, 1995.
[7] J.H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, Vol. 29 No. 5, 2001.
[8] J.H. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of
boosting. Annals of Statistics, Vol. 28 pp. 337-407, 2000.
[9] N. Friedman and D. Koller. Being bayesian about network structure: A bayesian approach to
structure discovery in bayesian networks. Machine Learning Journal, 2002.
[10] A.P. Gasch, P.T. Spellman, C.M. Kao, O.Carmel-Harel, M.B. Eisen, G.Storz, D.Botstein, and
P.O. Brown. Genomic expression program in the response of yeast cells to environmental
changes. Mol. Bio. Cell, 11:4241?4257, 2000.
[11] D. Heckerman. A tutorial on learning with Bayesian networks. In M. I. Jordan, editor, Learning
in Graphical Models. MIT Press, Cambridge, MA, 1998.
[12] T. R. Hughes et al. Functional discovery via a compendium of expression profiles. Cell,
102(1):109?26, 2000.
[13] L. Mason, J. Baxter, P. Bartlett, and P. Frean. Boosting algorithms as gradient descent in function space. In Proc. NIPS, number 12, pages 512?518, 1999.
[14] M. Meila and T. Jaakkola. Tractable bayesian learning of tree belief networks. Technical Report
CMU-RI-TR-00-15, Robotics institute, Carnegie Mellon University, 2000.
[15] D. Pe?er, A. Regev, G. Elidan, and N. Friedman. Inferring subnetworks from perturbed expression profiles. In ISMB?01, 2001.
[16] J.R. Quinlan. C4.5 - Programs for Machine Learning. Morgan-Kaufmann, 1993.
[17] S. Rosset, J. Zhu, and T. Hastie. Boosting as a regularized path to a margin maximizer. Submitted to NIPS 2002.
[18] R.E. Schapire, Y. Freund, P. Bartlett, and W.S. Lee. Boosting the margin: a new explanation
for the effectiveness of voting methods. Annals of Statistics, Vol. 26 No. 5, 1998.
[19] P. T. Spellman et al. Comprehensive identification of cell cycle-regulated genes of the yeast
saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell, 9(12):3273?97, 1998.
[20] B. Thiesson, C. Meek, and D. Heckerman. Learning mixtures of dag models. Technical Report
MSR-TR-98-12, Microsoft Research, 1997.
[21] R.S. Zemel and T. Pitassi. A gradient-based boosting algorithm for regression problems. In
Proc. NIPS, 2001.
Bias-Optimal Incremental Problem Solving
Jürgen Schmidhuber
IDSIA, Galleria 2, 6928 Manno-Lugano, Switzerland
[email protected]
Abstract
Given is a problem sequence and a probability distribution (the bias) on
programs computing solution candidates. We present an optimally fast
way of incrementally solving each task in the sequence. Bias shifts are
computed by program prefixes that modify the distribution on their suffixes by reusing successful code for previous tasks (stored in non-modifiable memory). No tested program gets more runtime than its probability
times the total search time. In illustrative experiments, ours becomes the
first general system to learn a universal solver
for arbitrary $n$ disk Towers of Hanoi tasks (minimal solution size $2^n - 1$). It demonstrates the
advantages of incremental learning by profiting from previously solved,
simpler tasks involving samples of a simple context free language.
1 Brief Introduction to Optimal Universal Search
Consider an asymptotically optimal method for tasks with quickly verifiable solutions:
Method 1.1 (LSEARCH) View the $n$-th binary string $(0, 1, 00, 01, 10, 11, 000, \ldots)$ as a potential program for a universal Turing machine. Given some problem, for all $n$ do: every $2^n$ steps on average execute (if possible) one instruction of the $n$-th program candidate, until one of the programs has computed a solution.

Given some problem class, if some unknown optimal program $p$ requires $f(k)$ steps to solve a problem instance of size $k$, and happens to be the $n$-th program in the alphabetical list, then LSEARCH (for Levin Search) [6] will need at most $O(2^n f(k)) = O(f(k))$ steps; the constant factor $2^n$ may be huge but does not depend on $k$. Compare [11, 7, 3].
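The schedule of Method 1.1 can be simulated in a few lines. The sketch below (our toy interpretation, not a real universal Turing machine) treats programs as bit strings and gives, in phase $i$, every program $p$ with $l(p) \le i$ a budget of $2^{i-l(p)}$ steps, so each program's share of the total time is roughly proportional to $2^{-l(p)}$:

```python
from itertools import product

def bitstrings(max_len):
    """Enumerate all bit strings of length 1..max_len in alphabetical order."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def lsearch(solves, max_phase=20):
    """solves(p, steps): does candidate program p produce a solution in `steps`?"""
    total = 0
    for i in range(1, max_phase + 1):        # phase i
        for p in bitstrings(i):
            budget = 2 ** (i - len(p))       # p's time slice in this phase
            total += budget
            if solves(p, budget):
                return p, total
    return None, total

# toy "problem": the right program must spell the target and needs 2*l(p) steps
target = "10111"
def solves(p, steps):
    return p == target and steps >= 2 * len(p)

p, total = lsearch(solves)
print(p, total)   # the target is found once its per-phase budget reaches 16 steps
```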
Recently Hutter developed a more complex asymptotically optimal search algorithm for all well-defined problems [3]. HSEARCH (for Hutter Search) cleverly allocates part of the total search time for searching the space of proofs to find provably correct candidate programs with provable upper runtime bounds, and at any given time focuses resources on those programs with the currently best proven time bounds. Unexpectedly, HSEARCH manages to reduce the multiplicative constant slowdown factor to a value of $1 + \epsilon$, where $\epsilon$ is an arbitrary positive constant. Unfortunately, however, the search in proof space introduces an unknown additive problem class-specific constant slowdown, which again may be huge.
In the real world, constants do matter. In this paper we will use basic concepts of optimal
search to construct an optimal incremental problem solver that at any given time may
exploit experience collected in previous searches for solutions to earlier tasks, to minimize
the constants ignored by nonincremental H SEARCH and L SEARCH.
2 Optimal Ordered Problem Solver (OOPS)
Notation. Unless stated otherwise or obvious, to simplify notation, throughout the paper newly introduced variables are assumed to be integer-valued and to cover the range clear from the context. Given some finite or countably infinite alphabet $Q = \{Q_1, Q_2, \ldots\}$, let $Q^*$ denote the set of finite sequences or strings over $Q$, where $\lambda$ is the empty string. We use the alphabet name's lower case variant to introduce (possibly variable) strings such as $q, q^1, q^2, \ldots \in Q^*$; $l(q)$ denotes the number of symbols in string $q$, where $l(\lambda) = 0$; $q_n$ is the $n$-th symbol of $q$; $q_{m:n} = \lambda$ if $m > n$ and $q_m q_{m+1} \ldots q_n$ otherwise (where $q_0 := q_{0:0} := \lambda$). $q^1 q^2$ is the concatenation of $q^1$ and $q^2$ (e.g., if $q^1 = abc$ and $q^2 = dac$ then $q^1 q^2 = abcdac$).
Consider countable alphabets S and Q. Strings s, s¹, s², ... ∈ S* represent possible internal states of a computer; strings q, q¹, q², ... ∈ Q* represent code or programs for
manipulating states. We focus on S being the set of integers and Q := {1, 2, ..., n_Q}
representing a set of n_Q instructions of some programming language (that is, substrings
within states may also encode programs).

R is a set of currently unsolved tasks. Let the variable s(r) ∈ S* denote the current state
of task r ∈ R, with i-th component s_i(r) on a computation tape r (think of a separate
tape for each task). For convenience we combine current state s(r) and current code q in
a single address space, introducing negative and positive addresses ranging from -l(s(r))
to l(q), defining the content of address i as z(i)(r) := q_i if 0 < i ≤ l(q) and z(i)(r) :=
s_{-i}(r) if -l(s(r)) ≤ i ≤ 0. All dynamic task-specific data will be represented at nonpositive addresses. In particular, the current instruction pointer ip(r) := z(a_ip(r))(r) of task
r can be found at (possibly variable) address a_ip(r) ≤ 0. Furthermore, s(r) also encodes
a modifiable probability distribution p(r) = {p_1(r), p_2(r), ..., p_{n_Q}(r)} (Σ_i p_i(r) = 1)
on Q. This variable distribution will be used to select a new instruction in case ip(r) points to
the current topmost address right after the end of the current code q.

a_frozen is a variable address that cannot decrease. Once chosen, the code bias
q_{1:a_frozen} will remain unchangeable forever — it is a (possibly empty) sequence of programs q¹q² ..., some of them prewired by the user,
others frozen after previous successful
searches for solutions to previous tasks. Given R, the goal is to solve all tasks r ∈ R, by a
program that appropriately uses or extends the current code q_{1:a_frozen}.
We will do this in a bias-optimal fashion, that is, no solution candidate will get much more
search time than it deserves, given some initial probabilistic bias on program space Q*:

Definition 2.1 (Bias-Optimal Searchers) Given is a problem class R, a search space
C of solution candidates (where any problem r ∈ R should have a solution in C), a task-dependent bias in form of conditional probability distributions P(q | r) on the candidates
q ∈ C, and a predefined procedure that creates and tests any given q on any r ∈ R within
time t(q, r) (typically unknown in advance). A searcher is n-bias-optimal (n ≥ 1) if for
any maximal total search time T_max > 0 it is guaranteed to solve any problem r ∈ R if it
has a solution p ∈ C satisfying t(p, r) ≤ P(p | r) T_max / n.
Unlike reinforcement learners [4] and heuristics such as Genetic Programming [2], OOPS
(section 2.2) will be n-bias-optimal, where n is a small and acceptable number, such as 8.
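As a toy illustration of Definition 2.1, consider a searcher that grants each candidate q a time slice of P(q|r) T_max / n: it finds exactly those solutions p with t(p, r) ≤ P(p|r) T_max / n. The candidate names, probabilities, and runtimes below are invented for illustration only:

```python
# Toy illustration of n-bias-optimality (Definition 2.1).
# A searcher that, within total budget T_max, grants each candidate q
# up to P(q|r) * T_max / n test steps finds any solution p whose
# runtime satisfies t(p, r) <= P(p|r) * T_max / n.
# Candidates, probabilities, and runtimes are invented.

def bias_optimal_search(candidates, T_max, n=1):
    """candidates: list of (name, prob, runtime, solves)."""
    for name, prob, runtime, solves in candidates:
        budget = prob * T_max / n        # per-candidate time slice
        if solves and runtime <= budget:
            return name                  # solution found within its slice
    return None

candidates = [
    ("q1", 0.5, 120, False),   # likely, but not a solution
    ("q2", 0.25, 40, True),    # a solution with t(q2, r) = 40
    ("q3", 0.25, 500, True),   # a solution, but too slow for small T_max
]

assert bias_optimal_search(candidates, T_max=200, n=1) == "q2"  # slice 50 >= 40
assert bias_optimal_search(candidates, T_max=100, n=1) is None  # slice 25 < 40
```

Doubling T_max until a solution's slice exceeds its runtime recovers the familiar Levin-search style allocation.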
2.1 OOPS Prerequisites: Multitasking & Prefix Tracking Through Method "Try"
The Turing machine-based setups for HSEARCH and LSEARCH assume potentially infinite
storage. Hence they may largely ignore questions of storage management. In any practical
system, however, we have to efficiently reuse limited storage. This, and multitasking, is
what the present subsection is about. The recursive method Try below allocates time to
program prefixes, each being tested on multiple tasks simultaneously, such that the sum of
the runtimes of any given prefix, tested on all tasks, does not exceed the total search time
multiplied by the prefix probability (the product of the tape-dependent probabilities of its
previously selected components in p(r)). Try tracks effects of tested program prefixes, such
as storage modifications (including probability changes) and partially solved task sets, to
reset conditions for subsequent tests of alternative prefix continuations in an optimally efficient fashion (at most as expensive as the prefix tests themselves). Optimal backtracking
requires that any prolongation of some prefix by some token gets immediately executed.
To allow for efficient undoing of state changes, we use global Boolean variables mark_i(r)
(initially FALSE) for all possible state components s_i(r). We initialize time t_0 := 0, probability P_0 := 1, q-pointer q_p := a_frozen, and state s(r) (including ip(r) and p(r)) with
task-specific information for all task names r in a ring R_0 of tasks. Here the expression
"ring" indicates that the tasks are ordered in cyclic fashion; |R| denotes the number of
tasks in ring R. Given a global search time limit T, we Try to solve all tasks in R_0, by
using existing code in q = q_{1:q_p} and / or by discovering an appropriate prolongation of q:

Method 2.1 (Boolean Try(r_0, R_0, t_0, P_0)) (returns TRUE or FALSE):

1. Make an empty stack S; set local variables r := r_0; R := R_0; t := t_0; P := P_0; Done := FALSE.
WHILE there are unsolved tasks (R nonempty) and t ≤ P T and instruction pointer
valid (-l(s(r)) ≤ ip(r) ≤ q_p) and instruction valid (1 ≤ z(ip(r))(r) ≤ n_Q) and
no halt condition (e.g., error such as
division by 0) encountered (evaluate conditions in this order until first satisfied, if any) DO:
If possible, interpret / execute token z(ip(r))(r) according to the rules of the given programming language (this may modify s(r) including instruction pointer ip(r) and distribution p(r), but not q), continually increasing t by the consumed time. Whenever the execution changes some state component s_i(r) whose
mark_i(r) = FALSE, set mark_i(r) := TRUE and save the previous value of s_i(r) by pushing the triple (i, r, s_i(r)) onto S. Remove
r from R if solved. IF R nonempty, set r equal to the next task in ring R. ELSE set Done :=
TRUE (all tasks solved; new code frozen, if any).

2. Use S to efficiently reset only the modified mark_i to FALSE (but do not pop S yet).

3. IF ip(r) = q_p + 1 (this means an online request for prolongation of the current
prefix through a new token): WHILE Done = FALSE and there is some yet untested
token Z ∈ Q (untried since time t_0 as value for q_{q_p + 1}), set q_{q_p + 1} := Z and Done := Try(r, R, t, P p_Z(r)), where p_Z(r) is Z's probability according to current p(r).

4. Use S to efficiently restore only those s_i changed since t_0, thus also restoring instruction pointer ip(r_0) and original search distribution p(r_0). Return the value of Done.
It is important that instructions whose runtimes are not known in advance can be interrupted
by Try at any time. Essentially, Try conducts a depth-first search in program space, where
the branches of the search tree are program prefixes, and backtracking is triggered once the
sum of the runtimes of the current prefix on all current tasks exceeds the prefix probability
multiplied by the total time limit. A successful Try will solve all tasks, possibly increasing
the current code q. In any case Try will completely restore all states of all tasks. Tracking / undoing
effects of prefixes essentially does not cost more than their execution. So the n in Def. 2.1
of n-bias-optimality is not greatly affected by backtracking: ignoring hardware-specific
overhead, we lose at most a factor 2. An efficient iterative (non-recursive) version of Try
for a broad variety of initial programming languages was implemented in C.
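In simplified form (a single task, unit-cost tokens, and a toy "interpreter" standing in for actual program execution; the token set and probabilities are invented), Try's depth-first prefix search looks like this:

```python
# Sketch of Try's depth-first search over program prefixes (Section 2.1).
# A prefix's time slice is its probability (product of the probabilities
# of its chosen tokens) times the global limit T; exceeding the slice
# triggers backtracking. Re-running a prefix is charged len(prefix) steps.

TOKENS = {1: 0.5, 2: 0.3, 3: 0.2}       # token -> probability (invented)

def solved(prefix):
    return tuple(prefix[:2]) == (1, 2)  # toy stand-in for a real task test

def try_prefix(prefix, t, P, T):
    if t + len(prefix) > P * T:         # prefix exhausted its time slice
        return None
    t += len(prefix)                    # charge the (re-)execution cost
    if solved(prefix):
        return prefix
    for token, p in TOKENS.items():     # request prolongation by one token
        result = try_prefix(prefix + [token], t, P * p, T)
        if result is not None:
            return result
    return None                         # backtrack; caller restores state

assert try_prefix([], 0.0, 1.0, 100) == [1, 2]
```

Because each prolongation multiplies P by a token probability while t only grows, every branch exhausts its slice after finitely many steps, which is what bounds the search.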
2.2 OOPS For Finding Universal Solvers
Now suppose there is an ordered sequence of tasks r_1, r_2, .... Task r_{n+1} may or may not
depend on solutions for r_1, ..., r_n. For instance, task r_{n+1} may be to find a
faster way through a maze than the one found during the search for a solution to task r_n.
We are searching for a single program solving all tasks encountered so far (see [9] for variants of this setup). Inductively suppose we have solved the first n tasks through programs
stored below address a_frozen, and that the most recently found program, starting at address
a_last ≤ a_frozen, actually solves all of them, possibly using information conveyed by earlier
programs. To find a program solving the first n + 1 tasks, OOPS invokes Try as follows
(using set notation for ring R):
Method 2.2 (OOPS(n+1)) Initialize T := 2.

1. Set R := {r_{n+1}} and ip(r_{n+1}) := a_last. IF Try(r_{n+1}, R, 0, T/2) then exit.

2. Set R := {r_1, ..., r_{n+1}}; set local variable a := a_frozen + 1; for all r ∈ R
set ip(r) := a. IF Try(r_{n+1}, R, 0, T/2) set a_last := a and exit.

3. Set T := 2T, and go to 1.
That is, we spend roughly equal time on two simultaneous searches. The second (step 2)
considers all tasks and all prefixes. The first (step 1), however, focuses only on task r_{n+1}
and the most recent prefix and its possible continuations. In particular, start address a_last
does not increase as long as new tasks can be solved by prolonging the code starting at
a_last. Why is this justified? A bit of thought shows that it is impossible for the most recent code starting
at a_last to request any additional tokens that could harm its performance on previous tasks.
We already inductively know that all of its prolongations will solve all tasks up to n.
Therefore, given tasks r_1, r_2, ..., we first initialize the code bias; then for n = 1, 2, ... we invoke
OOPS to find programs starting at (possibly increasing) address a_last, each solving all
tasks so far, possibly eventually discovering a universal solver for all tasks in the sequence.
As address a_last increases for the n-th time, q^n is defined as the program starting at a_last's
old value and ending right before its new value. Clearly, q^m (m > n) may exploit q^n.
Optimality. OOPS not only is asymptotically optimal in Levin's sense [6] (see Method 1.1),
but also near bias-optimal (Def. 2.1). To see this, consider a program p solving problem r
within t_p steps, given current code bias q_{1:a_frozen} and a_last. Denote p's probability by
P(p). A bias-optimal solver would solve r within at most t_p / P(p) steps. We observe that
OOPS will solve r within at most 8 t_p / P(p) steps, ignoring overhead: a factor 2 might get
lost for allocating half the search time to prolongations of the most recent code, another
factor 2 for the incremental doubling of T (necessary because we do not know in advance
the best value of T), and another factor 2 for Try's resets of states and tasks. So the method
is 8-bias-optimal (ignoring hardware-specific overhead) with respect to the current task.
Our only bias shifts are due to freezing programs once they have solved a problem. That
is, unlike the learning rate-based bias shifts of ADAPTIVE LSEARCH [10], those of OOPS
do not reduce probabilities of programs that were meaningful and executable before the
addition of any new q^n. Only formerly meaningless, interrupted programs trying to access
code for earlier solutions when there weren't any suddenly may become prolongable and
successful, once some solutions to earlier tasks have been stored.
, where is among the most probable fast solvers of
Hopefully we have
that do not use previously found code. For instance, may be rather short and likely
.
because it uses information conveyed by earlier found programs stored below
E.g., may call an earlier stored
as a subprogram. Or maybe is a short and fast
program that copies into state , then modifies the copy just a little bit to obtain ,
then successfully applies to . If is not many times faster than , then OOPS will
in general suffer from a much smaller constant slowdown factor than L SEARCH, reflecting
the extent to which solutions to successive tasks do share useful mutual information.
? ?
:
0 @: ?
%PRQST)U
?
Unlike nonincremental LSEARCH and HSEARCH, which do not require online-generated
programs for their asymptotic optimality properties, OOPS does depend on such programs:
The currently tested prefix may temporarily rewrite the search procedure by invoking previously frozen code that redefines the probability distribution on its suffixes, based on experience ignored by LSEARCH & HSEARCH (metasearching & metalearning!).
As we are solving more and more tasks, thus collecting and freezing more and more q^n, it
will generally become harder and harder to identify and address and copy-edit particular
useful code segments within the earlier solutions. As a consequence we expect that much
of the knowledge embodied by certain q^n actually will be about how to access and edit and
use programs q^m (m < n) previously stored below a_frozen.
3 A Particular Initial Programming Language
The efficient search and backtracking mechanism described in section 2.1 is not aware of
the nature of the particular programming language given by Q, the set of initial instructions
for modifying states. The language could be list-oriented such as LISP, or based on matrix
operations for neural network-like parallel architectures, etc. For the experiments we wrote
an interpreter for an exemplary, stack-based, universal programming language inspired by
FORTH [8], whose disciples praise its beauty and the compactness of its programs.
Each task's tape holds its state: various stack-like data structures represented as sequences
of integers, including a data stack ds (with stack pointer dp) for function arguments, an
auxiliary data stack Ds, a function stack fns of entries describing (possibly recursive) functions defined by the system itself, and a callstack cs (with stack pointer cp and top entry cs(cp))
for calling functions, where a local variable of each callstack entry holds the current instruction pointer, and
a base pointer points into ds below the values considered as arguments of the most
recent function call: Any instruction of the form inst(x1, x2, ...) expects its arguments
on top of ds, and replaces them by its return values. Illegal use of any instruction will cause
the currently tested program prefix to halt. In particular, it is illegal to set variables (such
as stack pointers or instruction pointers) to values outside their prewired ranges, or to pop
empty stacks, or to divide by 0, or to call nonexistent functions, or to change probabilities
of nonexistent tokens, etc. Try (Section 2.1) will interrupt prefixes as soon as their time
share is used up.
Instructions. We defined 68 instructions, such as oldq(n) for calling the n-th previously
found program q^n, or getq(n) for making a copy of q^n on stack ds (e.g., to edit it with
additional instructions). Lack of space prohibits to explain all instructions (see [9]) — we
have to limit ourselves to the few appearing in solutions found in the experiments, using
readable names instead of their numbers:

Instruction c1() returns constant 1. Similarly for c2(), ..., c5(). dec(x) returns
x - 1; by2(x) returns 2x; grt(x, y) returns 1 if x > y,
otherwise 0; delD() decrements stack pointer Dp of Ds; fromD() returns the top of Ds;
toD() pushes the top entry of ds onto Ds; cpn(n) copies the n topmost ds entries onto the
top of ds, increasing dp by n; cpnb(n) copies the ds entries above the n-th ds entry
onto the top of ds; exec(n) interprets n as the number of an instruction and executes it;
bsf(n) considers the entries on stack ds above its n-th entry as code and uses
callstack cs to call this code (code is executed by step 1 of Try (Section 2.1), one instruction
at a time; the instruction ret() causes a return to the address of the next instruction right
after the calling instruction). Given n input arguments on ds, instruction defnp() pushes
onto ds the begin of a definition of a procedure with n inputs; this procedure returns if
its topmost input is 0, otherwise decrements it. callp() pushes onto ds code for a call of
the most recently defined function / procedure. Both defnp and callp also push code for
making a fresh copy of the inputs of the most recently defined code, expected on top of
ds. endnp() pushes code for returning from the current call, then calls the code generated
so far on stack ds above the inputs, applying the code to a copy of the inputs on top
of ds. boostq(i) sequentially goes through all tokens of the i-th self-discovered frozen
program, boosting each token's probability by adding a constant to its numerator and also to the
denominator shared by all instruction probabilities — denominator and all numerators are
stored on tape, defining distribution p(r).
Initialization. Given any task, we add task-specific instructions. We start with a maximum
entropy distribution on the n_Q instructions (all numerators set to 1), then insert substantial prior
bias by assigning the lowest (easily computable) instruction numbers to the task-specific
instructions, and by boosting (see above) the initial probabilities of appropriate "small
number pushers" (such as c1, c2, c3) that push onto ds the numbers of the task-specific
instructions, such that they become executable as part of code on ds. We also boost the
probabilities of the simple arithmetic instructions by2 and dec, such that the system can
easily create other integers from the probable ones (e.g., code sequence (c3 by2 by2 dec)
will return integer 11). Finally we also boost boostq.
4 Experiments: Towers of Hanoi and Context-Free Symmetry
Given are n disks of different sizes, stacked in decreasing size on the first of three pegs.
Moving some peg's top disk to the top of another (possibly empty) peg, one disk at a time,
but never a larger disk onto a smaller, transfer all disks to the third peg. Remarkably, the
fastest way of solving this famous problem requires 2^n - 1 moves.
Untrained humans find it hard to solve instances involving more than a few disks. Anderson [1] applied traditional
reinforcement learning methods and was able to solve instances up to n = 3, solvable
within at most 7 moves. Langley [5] used learning production systems and was able to solve
Hanoi instances up to n = 5, solvable within at most 31 moves. Traditional nonlearning
planning procedures systematically explore all possible move combinations. They also fail
to solve Hanoi problem instances with many disks, due to the exploding search space (Jana
Koehler, IBM Research, personal communication, 2002). OOPS, however, is searching in
program space instead of raw solution space. Therefore, in principle it should be able to
solve arbitrary instances by discovering the problem's elegant recursive solution: given n
and three pegs S, A, D (source peg, auxiliary peg, destination peg), define procedure
Method 4.1 (HANOI(S,A,D,n)) IF n = 0 exit. Call HANOI(S, D, A, n-1); move top disk
from S to D; call HANOI(A, S, D, n-1).
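Method 4.1 translates directly into code; the sketch below records each move and confirms the optimal 2^n - 1 move count (function and variable names are ours):

```python
# Recursive Hanoi solver following Method 4.1:
# HANOI(S, A, D, n): if n == 0 exit; HANOI(S, D, A, n-1);
# move top disk from S to D; HANOI(A, S, D, n-1).

def hanoi(src, aux, dst, n, moves):
    if n == 0:
        return
    hanoi(src, dst, aux, n - 1, moves)  # park n-1 disks on the auxiliary peg
    moves.append((src, dst))            # move the largest remaining disk
    hanoi(aux, src, dst, n - 1, moves)  # bring the n-1 disks onto it

moves = []
hanoi(1, 2, 3, 4, moves)                # pegs named 1, 2, 3 as in the text
assert len(moves) == 2**4 - 1           # optimal: 2^n - 1 moves
assert moves[-1][1] == 3                # final move lands on the third peg
```

Each level contributes one move plus two subproblems of size n - 1, which is exactly where the 2^n - 1 count comes from.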
The n-th task is to solve all Hanoi instances up to instance n. We represent the dynamic
environment for task n on the n-th task tape, allocating n + 1 addresses for each peg, to store
its current disk positions and a pointer to its top disk (0 if there isn't any). We represent
pegs S, A, D by numbers 1, 2, 3, respectively. That is, given an instance of size n, we push
onto ds the values (1, 2, 3, n). By doing so we insert substantial, nontrivial prior knowledge
about problem size and the fact that it is useful to represent each peg by a symbol.
We add three instructions to the 68 instructions of our FORTH-like programming language:
mvdsk() assumes that S, A, D are represented by the first three elements on ds above the
current base pointer, and moves a disk from peg S to peg D. Instruction xSA()
exchanges the representations of S and A, xAD() those of A and D (combinations may create arbitrary peg patterns). Illegal moves cause the current program prefix to halt. Overall
success is easily verifiable since our objective is achieved once the first two pegs are empty.
Within reasonable time (a week) on an off-the-shelf personal computer (1.5 GHz) the system was not able to solve instances involving more than 3 disks. This gives us a welcome
opportunity to demonstrate its incremental learning abilities: we first trained it on an additional, easier task, to teach it something about recursion, hoping that this would help to
solve the Hanoi problem as well. For this purpose we used a seemingly unrelated symmetry problem based on the context free language {1^n 2^n}: given input n on the data stack
ds, the goal is to place symbols on the auxiliary stack Ds such that its 2n topmost elements
are n 1's followed by n 2's. We add two more instructions to the initial programming language: instruction 1toD() pushes 1 onto Ds, instruction 2toD() pushes 2. Now we have a
total of five task-specific instructions (including those for Hanoi), with instruction numbers
1, 2, 3, 4, 5, for 1toD, 2toD, mvdsk, xSA, xAD, respectively.
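The symmetry task is simple to state in code; the toy builder below produces the required Ds contents for a given n and mirrors the recursive structure of the solver OOPS eventually discovers (names are ours, not the paper's interpreter):

```python
# The symmetry task for input n: place symbols on the auxiliary stack Ds
# so that its 2n topmost elements are n 1's followed by n 2's,
# i.e., a word of the context-free language { 1^n 2^n }.
# Recursive structure: if n == 0 do nothing, else push 1, recurse, push 2.

def build(n, Ds):
    if n == 0:
        return
    Ds.append(1)         # instruction 1toD
    build(n - 1, Ds)     # recursive call with decremented argument
    Ds.append(2)         # instruction 2toD

Ds = []
build(3, Ds)
assert Ds == [1, 1, 1, 2, 2, 2]
```

This is exactly the shape of the one-argument procedure described for (defnp c1 calltp c2 endnp) later in the text.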
So we first boost (Section 3) instructions c1, c2 for the first training phase where the n-th
task (n = 1, ..., 30) is to solve all symmetry problem instances up to instance n. Then we undo
the symmetry-specific boosts of c1, c2 and boost instead the Hanoi-specific "instruction
number pushers" c3, c4, c5 for the subsequent training phase where the n-th task (again
n = 1, ..., 30) is to solve all Hanoi instances up to instance n.
Results. Within roughly 0.3 days, OOPS found and froze code solving the symmetry problem. Within 2 more days it also found a universal Hanoi solver, exploiting the benefits of
incremental learning ignored by nonincremental HSEARCH and LSEARCH. It is instructive
to study the sequence of intermediate solutions. In what follows we will transform integer sequences discovered by OOPS back into readable programs (to fully understand them,
however, one needs to know all side effects, and which instruction has got which number).

For the symmetry problem, within less than a second, OOPS found silly but working code
for n = 1. Within less than 1 hour it had solved the 2nd, 3rd, 4th, and 5th instances,
always simply prolonging the previous code without changing the start address a_last. The
code found so far was unelegant: (defnp 2toD grt c2 c2 endnp boostq delD delD bsf 2toD
fromD delD delD delD fromD bsf by2 bsf by2 fromD delD delD fromD cpnb bsf). But it
does solve all of the first 5 instances. Finally, after 0.3 days, OOPS had created and tested a
new, elegant, recursive program (no prolongation of the previous one) with a new increased
start address a_last, solving all instances up to 6: (defnp c1 calltp c2 endnp). That is, it was
cheaper to solve all instances up to 6 by discovering and applying this new program to all
instances so far, than just prolonging old code on instance 6 only. In fact, the program turns
out to be a universal symmetry problem solver. On the stack, it constructs a 1-argument
procedure that returns nothing if its input argument is 0, otherwise calls the instruction
1toD whose code is 1, then calls itself with a decremented input argument, then calls 2toD
whose code is 2, then returns. Using this program, within an additional 20 milliseconds,
OOPS had also solved the remaining 24 symmetry tasks up to n = 30.
Then OOPS switched to the Hanoi problem. 1 ms later it had found trivial code for n = 1:
(mvdsk). After a day or so it had found fresh yet bizarre code (new start address a_last) for
n = 2: (c4 c3 cpn c4 by2 c3 by2 exec). Finally, after 3 days it had found fresh code (new
a_last) for n = 3: (c3 dec boostq defnp c4 calltp c3 c5 calltp endnp). This already is an
optimal universal Hanoi solver. Therefore, within 1 additional day OOPS was able to solve
the remaining 27 tasks for n up to 30, reusing the same program again and
again. Recall that the optimal solution for n = 30 takes 2^30 - 1 (roughly 10^9) mvdsk operations, and that
for each mvdsk several other instructions need to be executed as well!
The final Hanoi solution profits from the earlier recursive solution to the symmetry problem. How? The prefix (c3 dec boostq) (probability 0.003) temporarily rewrites the search
procedure (this illustrates the benefits of metasearching!) by exploiting previous code:
Instruction c3 pushes 3; dec decrements this; boostq takes the result 2 as an argument and
thus boosts the probabilities of all components of the 2nd frozen program, which happens
to be the previously found universal symmetry solver. This leads to an online bias shift
that greatly increases the probability that defnp, calltp, endnp, will appear in the suffix of
the online-generated program. These instructions in turn are helpful for building (on the
data stack ds) the double-recursive procedure generated by the suffix (defnp c4 calltp c3 c5
calltp endnp), which essentially constructs a 4-argument procedure that returns nothing if
its input argument is 0, otherwise decrements the top input argument, calls the instruction
xAD whose code is 4, then calls itself, then calls mvdsk whose code is 5, then calls xSA
whose code is 3, then calls itself again, then returns (compare the standard Hanoi solution).
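Independently of the exact peg bookkeeping, the double-recursive shape (one mvdsk plus two self-calls with a decremented argument) fixes the move count: M(n) = 2 M(n-1) + 1 with M(0) = 0, i.e. M(n) = 2^n - 1, matching the optimal Hanoi solution. A quick check of the recurrence:

```python
# Move-count recurrence of the double-recursive Hanoi procedure:
# each call with argument n > 0 performs one mvdsk and two calls with n-1,
# so M(n) = 2*M(n-1) + 1, M(0) = 0, which solves to M(n) = 2^n - 1.

def M(n):
    return 0 if n == 0 else 2 * M(n - 1) + 1

assert [M(n) for n in range(6)] == [0, 1, 3, 7, 15, 31]
assert M(30) == 2**30 - 1        # ~10^9 mvdsk operations for n = 30
```

This is why the n = 30 instance mentioned above requires roughly a billion mvdsk operations.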
The total probability of the final solution, given the previous codes, is roughly a thousand times
higher than the probability of the essential Hanoi code (defnp c4 calltp c3 c5 calltp endnp)
given nothing, which explains why it was not quickly found without the
help of an easier task. So in this particular setup the incremental training due to the simple
recursion for the symmetry problem indeed provided useful training for the more complex
Hanoi recursion, speeding up the search by a factor of roughly 1000.
The entire 4 day search tested 93,994,568,009 prefixes corresponding to 345,450,362,522
instructions costing 678,634,413,962 time steps (some instructions cost more than 1 step,
in particular those making copies of long strings, or those increasing the probabilities of more than one instruction). Search time of an optimal solver is a natural
measure of initial bias. Clearly, most tested prefixes are short: they either halt or get
interrupted soon. Still, some programs do run for a long time; the longest measured runtime exceeded 30 billion steps. The stacks of recursive invocations of Try for storage
management (Section 2.1) collectively never held more than 20,000 elements though.
Different initial bias will yield different results. E.g., we could set to zero the initial probabilities of most of the 73 initial instructions (most are unnecessary for our two problem
classes), and then solve all tasks more quickly (at the expense of obtaining a nonuniversal initial programming language). The point of this experimental section, however,
is not to find the most reasonable initial bias for particular problems, but to illustrate the
general functionality of the first general near-bias-optimal incremental learner. In ongoing research we are equipping OOPS with neural network primitives and are applying it to
robotics. Since OOPS will scale to larger problems in essentially unbeatable fashion, the
hardware speed-up expected for the next 30 years appears promising.
References
[1] C. W. Anderson. Learning and Problem Solving with Multilayer Connectionist Systems. PhD
thesis, University of Massachusetts, Dept. of Comp. and Inf. Sci., 1986.
[2] N. L. Cramer. A representation for the adaptive generation of simple sequential programs. In
J.J. Grefenstette, editor, Proceedings of an International Conference on Genetic Algorithms
and Their Applications, Carnegie-Mellon University, July 24-26, 1985, Hillsdale NJ, 1985.
Lawrence Erlbaum Associates.
[3] M. Hutter. The fastest and shortest algorithm for all well-defined problems. International
Journal of Foundations of Computer Science, 13(3):431?443, 2002.
[4] L.P. Kaelbling, M.L. Littman, and A.W. Moore. Reinforcement learning: a survey. Journal of
AI research, 4:237?285, 1996.
[5] P. Langley. Learning to search: from weak methods to domain-specific heuristics. Cognitive
Science, 9:217?260, 1985.
[6] L. A. Levin. Universal sequential search problems. Problems of Information Transmission,
9(3):265?266, 1973.
[7] M. Li and P. M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications
(2nd edition). Springer, 1997.
[8] C. H. Moore and G. C. Leach. FORTH - a language for interactive computing, 1970.
http://www.ultratechnology.com.
[9] J. Schmidhuber. Optimal ordered problem solver. Technical Report IDSIA-12-02,
arXiv:cs.AI/0207097 v1, IDSIA, Manno-Lugano, Switzerland, July 2002.
[10] J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm,
adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105?130,
1997.
[11] R.J. Solomonoff. An application of algorithmic probability to problems in artificial intelligence.
In L. N. Kanal and J. F. Lemmer, editors, Uncertainty in Artificial Intelligence, pages 473?491.
Elsevier Science Publishers, 1986.
1,428 | 23 | 715
A COMPUTER SIMULATION OF CEREBRAL NEOCORTEX:
COMPUTATIONAL CAPABILITIES OF NONLINEAR NEURAL NETWORKS
Alexander Singer* and John P. Donoghue**
*Department of Biophysics, Johns Hopkins University,
Baltimore, MD 21218 (to whom all correspondence should
be addressed)
**Center for Neural Science, Brown University,
Providence, RI 02912
© American Institute of Physics 1988
A synthetic neural network simulation of cerebral neocortex was
developed based on detailed anatomy and physiology. Processing elements
possess temporal nonlinearities and connection patterns similar to those of
cortical neurons. The network was able to replicate spatial and temporal
integration properties found experimentally in neocortex. A certain level of
randomness was found to be crucial for the robustness of at least some of
the network's computational capabilities. Emphasis was placed on how
synthetic simulations can be of use to the study of both artificial and
biological neural networks.
A variety of fields have benefited from the use of computer simulations. This is
true in spite of the fact that general theories and conceptual models are lacking in many
fields and contrasts with the use of simulations to explore existing theoretical structures that
are extremely complex (cf. MacGregor and Lewis, 1977).
When theoretical
superstructures are missing, simulations can be used to synthesize empirical findings into a
system which can then be studied analytically in and of itself. The vast compendium of
neuroanatomical and neurophysiological data that has been collected and the concomitant
absence of theories of brain function (Crick, 1979; Lewin, 1982) makes neuroscience an
ideal candidate for the application of synthetic simulations. Furthermore, in keeping with
the spirit of this meeting, neural network simulations which synthesize biological data can
make contributions to the study of artificial neural systems as general information
processing machines as well as to the study of the brain. A synthetic simulation of cerebral
neocortex is presented here and is intended to be an example of how traffic might flow on
the two-way street which this conference is trying to build between artificial neural network
modelers and neuroscientists.
The fact that cerebral neocortex is involved in some of the highest forms of
information processing and the fact that a wide variety of neurophysiological and
neuroanatomical data are amenable to simulation motivated the present development of a
synthetic simulation of neocortex. The simulation itself is comparatively simple;
nevertheless it is more realistic in terms of its structure and elemental processing units than
most artificial neural networks.
The neurons from which our simulation is constructed go beyond the simple
sigmoid or hard-saturation nonlinearities of most artificial neural systems. For example,
because inputs to actual neurons are mediated by ion currents whose driving force depends
on the membrane potential of the neuron, the amplitude of a cell's response to an input, i.e.
the amplitude of the post-synaptic potential (PSP), depends not only on the strength of the
synapse at which the input arrives, but also on the state of the neuron at the time of the
input's arrival. This aspect of classical neuron electrophysiology has been implemented in
our simulation (figure 1A), and leads to another important nonlinearity of neurons:
namely, current shunting. Primarily effective as shunting inhibition, excitatory current can
be shunted out an inhibitory synapse so that the sum of an inhibitory postsynaptic potential
and an excitatory postsynaptic potential of equal amplitude does not result in mutual
cancellation. Instead, interactions between the ion reversal potentials, conductance values,
relative timing of inputs, and spatial locations of synapses determine the amplitude of the
response in a nonlinear fashion (figure 1B) (see Koch, Poggio, and Torre, 1983 for a
quantitative analysis). These properties of actual neurons have been ignored by most
artificial neural network designers, though detailed knowledge of them has existed for
decades and in spite of the fact that they can be used to implement complex computations
(e.g. Torre and Poggio, 1978; Houchin, 1975).
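These two membrane-level nonlinearities (driving-force dependence and shunting) can be illustrated with a minimal conductance-based calculation. The reversal potentials and conductance values below are illustrative placeholders, not parameters taken from the simulation itself.

```python
# Minimal conductance-based membrane calculation illustrating driving forces
# and shunting inhibition. All constants are illustrative, not from the paper.

E_EXC = 0.0      # excitatory reversal potential (mV)
E_INH = -75.0    # inhibitory reversal potential (mV)
E_LEAK = -60.0   # resting (leak) potential (mV)
G_LEAK = 1.0     # leak conductance (arbitrary units)

def membrane_response(g_exc, g_inh):
    """Steady-state membrane voltage for the given open synaptic conductances.

    The synaptic current scales with the driving force (E_syn - v), so the same
    conductance produces a smaller PSP as v approaches E_syn, and an inhibitory
    conductance near rest mostly shunts excitatory current rather than
    hyperpolarizing the cell.
    """
    g_total = G_LEAK + g_exc + g_inh
    return (G_LEAK * E_LEAK + g_exc * E_EXC + g_inh * E_INH) / g_total

# PSP amplitudes relative to rest:
epsp = membrane_response(g_exc=0.5, g_inh=0.0) - E_LEAK   # EPSP alone: +20 mV
ipsp = membrane_response(g_exc=0.0, g_inh=0.5) - E_LEAK   # IPSP alone: -5 mV
both = membrane_response(g_exc=0.5, g_inh=0.5) - E_LEAK   # together: +11.25 mV
```

Because the inhibitory conductance shunts excitatory current rather than simply cancelling it, the combined response (11.25 mV with these placeholder values) is smaller than the linear sum of the separate EPSP (20 mV) and IPSP (-5 mV).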
The development of action potentials and spatial interactions within the model
neurons have been simplified in our simulation. Action potentials involve preprogrammed
fluctuations in the membrane potential of our neurons and result in an absolute and a
relative refractory period. Thus. during the time a cell is firing a spike synaptic inputs are
ignored. and immediately following an action potential the neuron is hyperpolarized. The
modeling of spatial interactions is also limited since neurons are modeled primarily as
spheres. Though the spheres can be deformed through control of a synaptic weight which
modulates the amplitudes of ion conductances. detailed dendritic interactions are not
simulated. Nonetheless. the fact that inhibition is generally closer to a cortical neuron's
soma while excitation is more distal in a cell's dendritic tree is simulated through the use of
stronger inhibitory synapses and relatively weaker excitatory synapses.
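This simplified spike handling can be sketched as a small per-millisecond state machine; the threshold and refractory durations below are invented for illustration and are not the simulation's values.

```python
# Sketch of the simplified action-potential mechanism: synaptic inputs are
# ignored while the cell is firing a spike (absolute refractory period), and
# the cell is hyperpolarized immediately afterwards (relative refractory
# period). Threshold and durations are illustrative placeholders.

THRESHOLD = -50.0   # firing threshold (mV)
SPIKE_MS = 1        # ms during which inputs are ignored
HYPER_MS = 3        # ms of post-spike hyperpolarization
REST_MV = -60.0
HYPER_MV = -70.0

def step(state, v, input_mv):
    """Advance one 1-ms step. state = (phase, timer); returns (state, v, spiked)."""
    phase, timer = state
    if phase == "spiking":                  # absolute refractory: ignore input
        if timer == 1:
            return ("hyper", HYPER_MS), HYPER_MV, False
        return ("spiking", timer - 1), v, False
    v = v + input_mv                        # integration, linearized for the sketch
    if v >= THRESHOLD:
        return ("spiking", SPIKE_MS), v, True
    if phase == "hyper" and timer > 1:      # count down the relative refractory period
        return ("hyper", timer - 1), v, False
    return ("resting", 0), v, False

state, v = ("resting", 0), REST_MV
spikes = []
for input_mv in [12.0, 0.0, 12.0, 0.0, 0.0]:
    state, v, spiked = step(state, v, input_mv)
    spikes.append(spiked)
# The first 12 mV input fires the cell; the identical input two steps later
# arrives on a hyperpolarized cell and fails to reach threshold.
```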
The relative strengths of synapses in a neural network define its connectivity.
Though initial connectivity is random in many artificial networks. brains can be thought to
contain a combination of randomness and fixed structure at distinct levels (Szentagothai,
1978). From a macroscopic perspective, all of cerebral neocortex might be structured in a
modular fashion analogous to the way the barrel field of mouse somatosensory cortex is
structured (Woolsey and Van der Loos, 1970). Though speculative, arguments for the
existence of some sort of anatomical modularity over the entire cortex are gaining ground
(Mountcastle, 1978; Szentagothai, 1979; Shepherd, in press). Thus, inspired by the
barrels of mice and by growing interest in functional units of 50 to 100 microns with on the
order of 1000 neurons, our simulation is built up of five modules (60 cells each) with more
dense local interconnections and fewer intermodular contacts. Furthermore, a wide variety
of neuronal classification schemes have led us to subdivide the gross structure of each
module so as to contain four classes of neurons: cortico-cortical pyramids, output
pyramids, spiny stellate or local excitatory cells, and GABAergic or inhibirtory cells.
At this level of analysis, the impressed structure allows for control over a variety of
pathways. In our simulation each class of neurons within a module is connected to every
other class and intermodular connections are provided along pathways from cortico-cortical
pyramids to inhibitory cells, output pyramids, and cortico-cortical pyramids in immediately
adjacent modules. A general sense of how strong a pathway is can be inferred from the
product of the number of synapses a neuron receives from a particular class and the
strength of each of those synapses. The broad architecture of the simulation is further
structured to emphasize a three step path: Inputs to the network impact most strongly on
the spiny stellate cells of the module receiving the input; these cells in turn project to
cortico-cortical pyramidal cells more strongly than they do to other cell types; and finally,
the pathway from the cortico-cortical pyramids to the output pyramidal cells of the same
module is also particularly strong. This general architecture (figure 2) has received
empirical support in many regions of cortex (Jones, 1986).
In distinction to this synaptic architecture, a fine-grain connectivity is defined in our
simulated network as well. At a more microscopic level, connectivity in the network is
random. Thus, within the confines of the architecture described above, the determination
of which neuron of a particular class is connected to which other cell in a target class is
done at random. Two distinct levels of connectivity have, therefore, been established
(figure 3). Together they provide a middle ground between the completely arbitrary
connectivity of many artificial neural networks and the problem specific connectivities of
other artificial systems. This distinction between gross synaptic architecture and fine-grain
connectivity also has intuitive appeal for theories of brain development and, as we shall
see, has non-trivial effects on the computational capabilities of the network as a whole.
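The two levels of connectivity can be made concrete in code: a fixed table of class-to-class path strengths (the gross synaptic architecture) and a random draw of which particular cells connect (the fine-grain connectivity). The class sizes and weights below are invented for illustration; the paper specifies sixty cells per module and four classes, but not this particular split or these strengths.

```python
import random

# Two levels of connectivity: a fixed gross synaptic architecture (which classes
# connect, with how many synapses and what strength) and a random fine-grain
# instantiation (which particular cell contacts which). Numbers are illustrative.

CLASSES = {"cc_pyr": 20, "out_pyr": 15, "spiny_stellate": 15, "inhib": 10}

# (source class, target class) -> (synapses per target cell, synaptic weight).
ARCHITECTURE = {
    ("spiny_stellate", "cc_pyr"): (6, 1.0),   # strong step on the input path
    ("cc_pyr", "out_pyr"): (6, 1.0),          # strong step to output pyramids
    ("cc_pyr", "inhib"): (3, 0.5),
    ("inhib", "out_pyr"): (2, -2.0),          # inhibition nearer the soma: stronger
}

def fine_grain(seed):
    """Draw one random fine-grain wiring consistent with the fixed architecture."""
    rng = random.Random(seed)
    edges = []
    for (src, dst), (n_syn, w) in ARCHITECTURE.items():
        for tgt in range(CLASSES[dst]):
            for _ in range(n_syn):
                edges.append((src, rng.randrange(CLASSES[src]), dst, tgt, w))
    return edges

def total_strength(edges):
    return sum(w for (_, _, _, _, w) in edges)

net_a, net_b = fine_grain(seed=1), fine_grain(seed=2)
# The gross structure (edge counts and summed strength per pathway) is identical
# across instantiations; only the particular presynaptic cells differ.
```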
With definitions for input integration within the local processors, that is within the
neurons, and with the establishment of connectivity patterns, the network is complete and
ready to perform as a computational unit. In order to judge the simulation's capabilities in
some rough way, a qualitative analysis of its response to an input will suffice. Figure 4
shows the response of the network to an input composed of a small burst of action
potentials arriving at a single module. The data is displayed as a raster in which time is
mapped along the abscissa and all the cells of the network are arranged by module and cell
class along the ordinate. Each marker on the graph represents a single action potential fIred
by the appropriate neuron at the indicated time. Qualitatively, what is of importance is the
fact that the network does not remain unresponsive, saturate with activity in all neurons, or
oscillate in any way. Of course, that the network behave this way was predetermined by
the combination of the properties of the neurons with a judicious selection of synaptic
weights and path strengths. The properties of the neurons were fixed from physiological
data, and once a synaptic architecture was found which produced the results in figure 4,
that too was fixed. A more detailed analysis of the temporal firing pattern and of the
distribution of activity over the different cell classes might reveal important network
properties and the relative importance of various pathways to the overall function. Such an
analysis of the sensitivity of the network to different path strengths and even to intracellular
parameters will, however, have to be postponed. Suffice it to say at this point that the
network, as structured, has some nonzero, finite, non-oscillatory response which,
qualitatively, might not offend a physiologist judging cortical activity.
Though the synaptic architecture was tailored manually and fixed so as to produce
"reasonable" results, the fine-grain connectivity, i.e. the determination of exactly which
cell in a class connects to which other cell, was random. An important property of artificial
(and presumably biological) neural networks can be uncovered by exploiting the distinction
between levels of connectivity described above. Before doing so, however, a detail of
neural network design must be made explicit. Any network, either artificial or biological,
must contend with the time it takes to communicate among the processing elements. In the
brain, the time it takes for an action potential to travel from one neuron to another depends
on the conduction velocity of the axon down which the spike is traveling and on the delay
that occurs at the synapse connecting the cells. Roughly, the total transmission time from
one cortical neuron to another lies between 1 and 5 milliseconds. In our simulation two
paradigms were used. In one case, the transmission times between all neurons were
standardized at 1 msec.* Alternatively, the transmission times were fixed at random,
though admittedly unphysiological, values between 0.1 and 2 msec.
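The two transmission-time paradigms amount to two ways of filling in a delay table, one entry per connection; a minimal sketch (function and parameter names are ours, not from the original code):

```python
import random

def delay_table(n_cells, paradigm, seed=0):
    """Assign a transmission time (msec) to every connection under one of the
    two paradigms: all delays fixed at 1 msec, or delays drawn at random from
    the (admittedly unphysiological) range 0.1-2 msec."""
    rng = random.Random(seed)
    delays = {}
    for pre in range(n_cells):
        for post in range(n_cells):
            if pre == post:
                continue
            if paradigm == "fixed":
                delays[(pre, post)] = 1.0
            elif paradigm == "random":
                delays[(pre, post)] = rng.uniform(0.1, 2.0)
            else:
                raise ValueError(paradigm)
    return delays

fixed = delay_table(60, "fixed")        # 60 cells, as in one module
randomized = delay_table(60, "random")
```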
Now, if the time it takes for an action potential to travel from one neuron to another
were fixed for all cells at 1 msec, different fine-grain connectivity patterns are found to
produce entirely distinct network responses to the same input, in spite of the fact that the
gross synaptic architecture remained constant. This was true no matter what particular
synaptic architecture was used. If, on the other hand, one changes the transmission times
so that they vary randomly between 0.1 and 2 msec, it becomes easy to find sets of
synaptic strengths that were robust with respect to changes in the fine-grain connectivity.
Thus, a wide search of path strengths failed to produce a network which was robust to
changes in fine-grain connectivity in the case of identical transmission times, while a set of
synaptic weights that produced robust responses was easy to find when the transmission
times were randomized. Figure 5 summarizes this result. In the figure overall network
activity is measured simply as the total number of action potentials generated by pyramidal
cells during an experiment and robustness can be judged as the relative stability of this
response. The abscissa plots distinct experiments using the same synaptic architecture with
different fine-grain connectivity patterns. Thus, though the synaptic architecture remains
constant, the different trials represent changes in which particular cell is connected to which
other cell. The results show quite dramatically that the network in which the transmission
times are randomly distributed is more robust with respect to changes in fine-grain
connectivity than the network in which the transmission times are all 1 msec.
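The experiment behind figure 5 can be outlined as a loop over fine-grain instantiations, comparing the spread of total pyramidal spike counts under the two delay paradigms. Since the full network is not reproduced here, `run_network` below is a stand-in whose toy response model merely mimics the reported qualitative effect.

```python
import random

def run_network(fine_grain_seed, delay_paradigm):
    """Stand-in for the full simulation: total pyramidal-cell spikes for one
    fine-grain wiring. The toy model encodes the reported finding: with fixed
    1-ms delays the output is highly sensitive to the particular wiring, while
    random delays largely wash that sensitivity out."""
    rng = random.Random(fine_grain_seed)
    wiring_factor = rng.random()              # stands in for one random wiring
    if delay_paradigm == "fixed":
        return int(100 + 300 * wiring_factor)  # strongly wiring-dependent
    return int(150 + 30 * wiring_factor)       # robust to rewiring

def spread(paradigm, n_trials=20):
    """Figure 5's experiment: same synaptic architecture, many fine-grain
    instantiations; compare total pyramidal spike counts across trials."""
    counts = [run_network(seed, paradigm) for seed in range(n_trials)]
    return max(counts) - min(counts)
```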
It is important to note that in either case, both when the network was robust and
when changes of fine-grain connectivity produced gross changes in network output, the
synaptic architectures produced outputs like that in figure 4 with some fine-grain
connectivities. If the response of the network to an input can be considered the result of
some computation, figure 5 reveals that the same computational capability is not robust
with respect to changes in fine-grain connectivity when transmission times between
neurons are all 1 msec, but is more robust when these times are randomized. Thus, a
single computational capability, viz. a response like that in figure 4 to a single input, was
found to exist in networks with different synaptic architectures and different transmission
time paradigms; this computational capability, however, varied in terms of its robustness
with respect to changes in fine-grain connectivity when present in either of the transmission
time paradigms.
* Because neurons receive varying amounts of input and because integration is performed
by summating excitatory and inhibitory postsynaptic potentials in a nonlinear way, the time
each neuron needs to summate its inputs and produce an action potential varies from neuron
to neuron and from time to time. This then allows for asynchronous firing in spite of the
identical transmission times.
A more complex computational capability emerged from the neural network
simulation we have developed and described. If we label two neighboring modules C2 and
C3, an input to C2 will suppress the response of C3 to a second input at C3 if the second
input is delayed. A convenient way of representing this spatio-temporal integration
property is given in figure 6. The ordinate plots the ratio of the normal response of one
module (say C3) to the response of the module to the same input when an input to a
neighboring module (say C2) precedes the input to the original module (C3). Thus, a value
of one on the ordinate means the earlier spatially distinct input had no effect on the response
of the module in which this property is being measured. A value less than one represents
suppression, while values greater than one represent enhancement. On the abscissa, the
interstimulus interval is plotted. From figure 6, it can be seen that significant suppression
of the pyramidal cell output, mostly of the output pyramidal cell output, occurs when the
inputs are separated by 10 to 30 msec. This response can be characterized as a sort of
dynamic lateral inhibition since an input is suppressing the ability of a neighboring region
to respond when the input pairs have a particular time course. This property could play a
variety of role in biological and artificial neural networks. One role for this spatio-temporal
integration property, for example, might be in detecting the velocity of a moving stimulus.
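The quantity plotted on the ordinate of figure 6 is simply a ratio of two spike counts per interstimulus interval. The helper function and the spike counts below are hypothetical, chosen to trace a curve shaped like the reported one:

```python
def suppression_ratio(paired_response, solo_response):
    """Ratio plotted on the ordinate of figure 6.

    paired_response: spikes in the measured module (e.g. C3) when its input was
        preceded by an input to a neighboring module (e.g. C2).
    solo_response: spikes in the same module to the same input alone.

    A value of 1.0 means the earlier input had no effect; values below one
    represent suppression, values above one enhancement.
    """
    return paired_response / solo_response

# Hypothetical spike counts shaping a curve like figure 6: suppression is
# strongest for interstimulus intervals of roughly 10-30 msec and decays away
# at long intervals.
curve = {
    0: suppression_ratio(40, 40),     # simultaneous inputs: no suppression
    20: suppression_ratio(18, 40),    # strong suppression (output pyramids)
    100: suppression_ratio(39, 40),   # long interval: effect nearly gone
}
```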
The emergent spatio-temporal property of the network just described was not
explicitly built into the network. Moreover, no set of synaptic weights was able to give rise
to this computational capability when transmission times were all set to 1 msec. Thus, in
addition to providing robustness, the random transmission times also enabled a more
complex property to emerge. The important factor in the appearances of both the
robustness and the dynamic lateral inhibition was randomization; though it was
implemented as randomly varying transmission times, random spontaneous activity would
have played the same role. From the viewpoint, then, of the engineer designing artificial
neural networks, the neural network presented here has instructional value in spite of the
fact that it was designed to synthesize biological data. Specifically, it motivates the
consideration of randomness as a design constraint.
From the perspective of the biologists attending this meeting, a simple fact will
reveal the importance of synthetic simulations. The dynamic lateral inhibition presented in
figure 6 is known to exist in rat somatosensory cortex (Simons, 1985). By deflecting the
whiskers on a rat's face, Simons was able to stimulate individual barrels of the posteromedial somatosensory barrel field in combinations which revealed similar spatio-temporal
interactions among the responses of the cortical neurons of the barrel field. The temporal
suppression he reported even has a time course similar to that of the simulation. What the
experiment did not reveal, however, was the class of cell in which suppression was seen;
the simulation located most of the suppression in the output pyramidal cells. Hence, for a
biologist, even a simple synthetic simulation like the one presented here can make definitive
predictions. What differentiates the predictions made by synthetic simulations from those
of more general artificial neural systems, of course, is that the strong biological foundations
of synthetic simulations provide an easily grasped and highly relevant framework for both
predictions and experimental verification.
One of the advertised purposes of this meeting was to "bring together
neurobiologists, cognitive psychologists, engineers, and physicists with common interest
in natural and artificial neural networks." Towards that end, synthetic computer
simulations, i.e. simulations which follow known neurophysiological and neuroanatomical
data as if they comprised a complex recipe, can provide an experimental medium which is
useful for both biologists and engineers. The simulation of cerebral neocortex developed
here has information regarding the role of randomness in the the robustness and presence
of various computational capabilities as well as information regarding the value of distinct
levels of connectivity to contribute to the design of artificial neural networks. At the same
time, the synthetic nature of the network provides the biologist with an environment in
which he can test notions of actual neural function as well as with a system which replicates
known properties of biological systems and makes explicit predictions. Providing two-way interactions, synthetic simulations like this one will allow future generations of
artificial neural networks to benefit from the empirical findings of biologists, while the
slowly evolving theories of brain function benefit from the more generalizable results and
methods of engineers.
References
Crick, F. H. C. (1979) Thinking about the brain, Scientific American, 241:219 - 232.
Houchin, J. (1975) Direction specificity in cortical responses to moving stimuli -- a simple
model. Proceedings of the Physiological Society, 247:7 - 9.
Jones, E. G. (1986) Connectivity of primate sensory-motor cortex, in Cerebral Cortex,
vol. 5, E. G. Jones and A. Peters (eds), Plenum Press, New York.
Koch, C., Poggio, T., and Torre, V. (1983) Nonlinear interactions in a dendritic tree:
Localization, timing, and role in information processing. Proceedings of the
National Academy of Science, USA, 80:2799 - 2802.
Lewin, R. (1982) Neuroscientists look for theories, Science, 216:507.
MacGregor, R.I. and Lewis, E.R. (1977) Neural Modeling, Plenum Press, New York.
Mountcastle, V. B. (1978) An organizing principle for cerebral function: The unit module
and the distributed system, in The Mindful Brain, G. M. Edelman and V. B.
Mountcastle (eds.), MIT Press, Cambridge, MA.
Shepherd, G.M. (in press) Basic circuit of cortical organization, in Perspectives in Memory
Research, M.S. Gazzaniga (ed.). MIT Press, Cambridge, MA.
Simons, D. J. (1985) Temporal and spatial integration in the rat SI vibrissa cortex, Journal
of Neurophysiology, 54:615 - 635.
Szentagothai, J. (1978) Specificity versus (quasi-) randomness in cortical connectivity, in
Architectonics of the Cerebral Cortex, M. A. B. Brazier and H. Petsche (eds.),
Raven Press, New York.
Szentagothai, J. (1979) Local neuron circuits in the neocortex, in The Neurosciences.
Fourth Study Program, F. O. Schmitt and F. G. Worden (eds.), MIT Press,
Cambridge, MA.
Torre, V. and Poggio, T. (1978) A synaptic mechanism possibly underlying directional
selectivity to motion, Proceedings of the Royal Society (London) B, 202:409 - 416.
Woolsey, T.A. and Van der Loos, H. (1970) Structural organization of layer IV in the
somatosensory region (SI) of mouse cerebral cortex, Brain Research, 17:205-242.
[Figure 1 panels: "Shunting Inhibition" (simultaneous EPSP & IPSP vs. IPSP alone) and "PSP Amplitude Dependence on Membrane Potential" (EPSPs and IPSPs recorded at resting potentials ranging from +40 mV to -120 mV).]
Figure 1A: Intracellular records of post-synaptic potentials resulting from single excitatory and
inhibitory inputs to cells at different resting potentials.
Figure IB: Illustration of the current shunting nonlinearity present in the model neurons. Though
the simultaneous arrival of postsynaptic potentials of equal and opposite amplitude would result
in no deflection in the membrane potential of a simple linear neuron model, a variety of factors
contribute to the nonlinear response of actual neurons and of the neurons modeled in the present
simulation.
Figure 2: A schematic representation of the simulated cortical network. Five modules are used, each containing sixty neurons. Neurons are
divided into four classes. Numerals within the caricatured neurons represent the number of cells in that particular class that are simulated.
Though all cell classes are connected to all other classes, the pathway from input to spiny stellate to cortico-cortical pyramids to output
pyramids is particularly strong.
[Figure 3 schematic: path strength = number of synapses X strength of each synapse, shown for output pyramidal cells, inhibitory cells, intracortical pyramidal cells, and spiny stellate cells.]
Figure 3: Two levels of connectivity are defined in the network. Gross synaptic architecture is
defined among classes of cells. Fine-grain connectivity specifies which cell connects to which
other cell and is determined at random .
[Figure 4 raster: "Sample Raster". Input: 333 Hz, 6 ms duration, applied to Module 3. Cells arranged by module (1-5) and class (cortico-cortical pyramids, inhibitory cells, spiny stellate cells, output pyramids); abscissa: time (ms), 0-30.]
Figure 4: Sample response of the entire network to a small burst of action potentials delivered to
module 3.
[Figure 5 plot: "Robustness With Respect to Connectivity Pattern (Synaptic Architecture Constant)". Ordinate: total pyramidal cell spike responses (0-400); separate point sets for delay times = 1 ms and delay times random. Abscissa: individual trials with different fine-grain connectivity patterns.]
Figure 5: Plot of an arbitrary activity measure (total spike activity in all pyramidal cells) versus
various instantiations of the same connectional architecture. Along the abscissa are represented the
different fine-grained patterns of connectivity within a fixed connectional architecture. In one
case the conduction times between all cells was 1 msec and in the other case the times were
selected at random from values between 0.1 msec and 2 msec. This experiment shows the greater
overall stability produced by random conduction times.
[Figure 6 plot: "Spatio-Temporal Integration Properties", randomized axonal conduction times. Curves for output pyramids (Outpyr), cortico-cortical pyramids (C-Cpyr), spiny stellates (Sst), and GABAergic cells (GABA). Ordinate: relative response (0-2); abscissa: interstimulus interval (0-120 ms).]
Figure 6: Spatio-temporal integration within the network. Plot of the time course of response suppression in the various cell classes. The
ordinate plots the ratio of average cell activity (in terms of spikes) to a direct input after the presentation of an input to a neighboring module,
and the average response to an input in the absence of prior input to an adjacent module. Values greater than one represent an enhancement of
activity in response to the spatially distinct preceding input, while values less than one represent a suppression of the normal response. The
abscissa plots the interstimulus interval. Note that the response suppression is most striking in only one class of cells.
force:1 representing:1 scheme:1 gabaergic:1 ready:1 mediated:1 prior:1 mountcastle:3 relative:5 lacking:1 whisker:1 generation:1 versus:2 foundation:1 verification:1 schmitt:1 principle:1 viewpoint:1 excitatory:6 course:5 placed:1 keeping:1 arriving:1 asynchronous:1 weaker:1 cortico:7 allow:1 institute:1 wide:3 face:1 emerge:1 absolute:1 van:2 distributed:2 benefit:2 cortical:15 sensory:1 qualitatively:2 made:2 simplified:1 emphasize:1 reveals:1 conceptual:1 ipsp:2 spatio:6 alternatively:1 search:1 decade:1 modularity:1 nature:1 robust:7 complex:5 did:1 dense:1 intracellular:2 whole:1 arrival:2 ule:1 neuronal:1 benefited:1 fashion:2 axon:1 explicit:2 msec:11 candidate:1 lie:1 ib:2 grained:1 saturate:1 down:1 remained:1 specific:1 appeal:1 physiological:2 raven:1 modulates:1 importance:3 caricatured:1 electrophysiology:1 led:1 simply:1 explore:1 appearance:1 neurophysiological:3 failed:1 lewis:2 ma:3 presentation:1 towards:1 absence:2 crick:2 experimentally:1 hard:1 judicious:1 change:9 specifically:1 determined:1 engineer:4 admittedly:1 total:3 connectional:2 experimental:2 la:1 support:1 alexander:1 confines:1 |
Seibert and Waxman
Learning Aspect Graph Representations
from View Sequences
Michael Seibert and Allen M. Waxman
Lincoln Laboratory, Massachusetts Institute of Technology
Lexington, MA 02173-9108
ABSTRACT
In our effort to develop a modular neural system for invariant learning and recognition of 3D objects, we introduce here a new module
architecture called an aspect network constructed around adaptive
axo-axo-dendritic synapses. This builds upon our existing system
(Seibert & Waxman, 1989) which processes 2D shapes and classifies
them into view categories (i.e., aspects) invariant to illumination,
position, orientation, scale, and projective deformations. From a
sequence of views, the aspect network learns the transitions between these aspects, crystallizing a graph-like structure from an
initially amorphous network. Object recognition emerges by accumulating evidence over multiple views which activate competing
object hypotheses.
1
INTRODUCTION
One can "learn" a three-dimensional object by exploring it and noticing how its
appearance changes. When moving from one view to another, intermediate views
are presented. The imagery is continuous, unless some feature of the object appears
or disappears at the object's "horizon" (called the occluding contour). Such visual
events can be used to partition continuously varying input imagery into a discrete
sequence of aspects. The sequence of aspects (and the transitions between them) can
be coded and organized into a representation of the 3D object under consideration.
This is the form of 3D object representation that is learned by our aspect network.
We call it an aspect network because it was inspired by the aspect graph concept
of Koenderink and van Doorn (1979). This paper introduces this new network,
which learns and recognizes sequences of aspects, and leaves most of the discussion
of the visual preprocessing to earlier papers (Seibert & Waxman, 1989; Waxman,
Seibert, Cunningham, & Wu, 1989). Presented in this way, we hope that our ideas
of sequence learning, representation, and recognition are also useful to investigators
concerned with speech, finite-state machines, planning, and control.
1.1
2D VISION BEFORE 3D VISION
The aspect network is one module of a more complete vision system (Figure 1)
introduced by us (Seibert & Waxman, 1989).

Figure 1: Neural system architecture for 3D object learning and recognition. The
aspect network is part of the upper-right module.

The early stages of the complete system learn and recognize 2D views of objects,
invariant to the scene illumination and an object's orientation, size, and position
in the visual field. Additionally,
projective deformations such as foreshortening and perspective effects are removed
from the learned 2D representations. These processing steps make use of Diffusion-Enhancement Bilayers (DEBs)¹ to generate attentional cues and featural groupings.
The point of our neural preprocessing is to generate a sequence of views (i.e., aspects) which depends on the object's orientation in 3-space, but which does not
depend on how the 2D images happen to fall on the retina. If no preprocessing
were done, then the 3D representation would have to account for every possible
2D appearance in addition to the 3D information which relates the views to each
other. Compressing the views into aspects avoids such combinatorial problems, but
may result in an ambiguous representation, in that some aspects may be common
to a number of objects. Such ambiguity is overcome by learning and recognizing a
¹This architecture was previously called the NADEL (Neural Analog Diffusion-Enhancement
Layer), but has been renamed to avoid causing any problems or confusion, since there is an active
researcher in the field with this name.
sequence of aspects (i.e., a trajectory through the aspect graph). The partitioning
and sequence recognition is analogous to building a symbol alphabet and learning
syntactic structures within the alphabet. Each symbol represents an aspect and is
encoded in our system as a separate category by an Adaptive Resonance Network
architecture (Carpenter & Grossberg, 1987). This unsupervised learning is competitive and may proceed on-line with recognition; no separate training is required.
1.2
ASPECT GRAPHS AND OBJECT REPRESENTATIONS
Figure 2 shows a simplified aspect graph for a prismatic object.² Each node of
the graph represents a characteristic view, while the allowable transitions among
views are represented by the arcs between the nodes.

Figure 2: Aspect Graph. A 3D object can be represented as a graph of the characteristic view-nodes, with adjacent views encoded by arcs between the nodes.

In this depiction, symmetries
have been considered to simplify the graph. Although Koenderink and van Doorn
suggested assigning aspects based on topological equivalences, we instead allow the
ART 2 portion of our 2D system to decide when an invariant 2D view is sufficiently
different from previously experienced views to allocate a new view category (aspect).
Transitions between adjacent aspects provide the key to the aspect network representation and recognition processes. Storing the transitions in a self-organizing
synaptic weight array becomes the learned view-based representation of a 3D object.
Transitions are exploited again during recognition to distinguish among objects with
similar views. Whereas most investigators are interested in the computational complexity of generating aspect graphs from CAD libraries (Bowyer, Eggert, Stewman,
²Neither the aspect graph concept nor our aspect network implementation is limited to simple
polyhedral objects, nor must the objects even be convex, i.e., they may be self-occluding.
& Stark, 1989), we are interested in designing it as a self-organizing representation,
learned from visual experience and useful for object recognition.
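To make the graph representation concrete, a toy encoding (ours, not the paper's implementation) stores an object as a set of undirected view-transition arcs and checks whether an observed view sequence is a trajectory through the graph:

```python
def make_aspect_graph(transitions):
    """Store an aspect graph as a set of undirected arcs between view labels."""
    arcs = set()
    for a, b in transitions:
        arcs.add((a, b))
        arcs.add((b, a))
    return arcs

def consistent(graph, view_sequence):
    """True if every observed aspect transition is an arc of the graph."""
    return all((a, b) in graph
               for a, b in zip(view_sequence, view_sequence[1:]) if a != b)

# A prism-like toy object: three aspects around its silhouette.
prism = make_aspect_graph([(0, 2), (2, 4), (4, 0)])
ok = consistent(prism, [4, 2, 0, 2])   # a trajectory through the graph
bad = consistent(prism, [4, 1, 0])     # aspect 1 is not adjacent to aspect 4
```

Ambiguity across objects then corresponds to the same arc appearing in several objects' graphs, which is exactly what the sequence-based recognition below resolves.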
2
ASPECT-NETWORK LEARNING
The view-category nodes of ART 2 excite the aspect nodes (which we also call the
x-nodes) of the aspect network (Figure 3). The aspect nodes fan out to the dendritic
trees of object neurons.

Figure 3: Aspect Network. The learned graph representations of 3D objects are realized as weights in the synaptic arrays. Evidence for experienced view-trajectories
is simultaneously accumulated for all competing objects.

An object neuron consists of an adaptive synaptic array
and an evidence accumulating y-node. Each object is learned by a single object
neuron. A view sequence leads to accumulating activity in the y-nodes, which
compete to determine the "recognized object" (i.e., maximally active z-node) in
the "object competition layer". Gating signals from these nodes then modulate
learning in the corresponding synaptic array, as in competitive learning paradigms.
The system is designed so that the learning phase is integral with recognition.
Learning (and forgetting) is always possible so that existing representations can
always be elaborated with new information as it becomes available.
Differential equations govern the dynamics and architecture of the aspect network.
These shunting equations model cell membrane and synapse dynamics as pioneered
by Grossberg (1973, 1989). Input activities to the network are given by equation
(1), the learned aspect transitions by equation (2), and the objects recognized from
the experienced view sequences by equation (3).
2.1
ASPECT NODE DYNAMICS
The aspect node activities are governed by equation (1):
    dx_i/dt = I_i − λ_x x_i,    (1)

where λ_x is a passive decay rate, and I_i = 1 during the presentation of aspect
i and zero otherwise as determined by the output of the ART 2 module in the
complete system (Figure 1). This equation assures that the activities of the aspect
nodes build and decay in nonzero time (see the time-traces for the input I-nodes and
aspect x-nodes in Figure 3). Whenever an aspect transition occurs, the activity of
the previous aspect decays (with rate λ_x) and the activity of the new aspect builds
(again with rate λ_x in this case, which is convenient but not necessary). During the
transient time when both activities are nonzero, only the synapses between these
nodes have both pre- and post-synaptic activities which are significant (i.e., above
the threshold) and Hebbian learning can be supported. The overlap of the pre- and
post-synaptic activities is transient, and the extent of the transient is controlled by
the selection of λ_x. This is the fundamental parameter for the dynamical behavior
of the entire network, since it defines the response time of the aspect nodes to their
inputs. As such, nearly every other parameter of the network depends on it.
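As an illustrative sketch (the Euler integration scheme and step size are our own choices, not the paper's Runge-Kutta setup), equation (1) can be stepped forward in time to show the transient overlap of activities across an aspect transition:

```python
import math

LAMBDA_X = -math.log(0.1) / 4.0  # decay rate: activity falls to ~10% over T = 4
DT = 0.01                        # Euler step size (our choice)

def step_aspect(x, inputs, dt=DT, lam=LAMBDA_X):
    """One Euler step of dx_i/dt = I_i - lambda_x * x_i for every aspect node."""
    return [xi + dt * (Ii - lam * xi) for xi, Ii in zip(x, inputs)]

def present_sequence(sequence, n_nodes, T=4.0):
    """Present each aspect for T time-units and record the activity traces."""
    x = [0.0] * n_nodes
    trace = []
    for aspect in sequence:
        inputs = [1.0 if i == aspect else 0.0 for i in range(n_nodes)]
        for _ in range(int(T / DT)):
            x = step_aspect(x, inputs)
            trace.append(list(x))
    return trace

trace = present_sequence([4, 2, 0], n_nodes=5)
# Shortly after the 4 -> 2 transition both node 4 (decaying) and node 2
# (building) are simultaneously active: the transient window in which
# Hebbian learning of the transition is supported.
after_transition = trace[int(4.0 / DT) + 50]
```

With this λ_x, the overlap window spans a substantial fraction of each presentation, which is what makes the transition learnable.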
2.2
VIEW TRANSITION ENCODING BY ADAPTIVE SYNAPSES
The aspect transitions that represent objects are realized by synaptic weights on
the dendritic trees of object neurons. Equation (2) defines how the (initially small
and random) weight relating aspect i, aspect j, and object k changes:
    dw_ij^k/dt = κ_w w_ij^k (1 − w_ij^k) { Φ_w[(x_i + ε)(x_j + ε)] − λ_w } Θ_y(y_k) Θ_z(z_k).    (2)
Here, κ_w governs the rate of evolution of the weights relative to the x-node dynamics,
and λ_w is the decay rate of the weights. Note that a small "background level" of
activity ε is added to each x-node activity. This will be discussed in connection
with (3) below. Φ_φ(γ) is a threshold-linear function; that is: Φ_φ(γ) = γ if γ > φ_th
and zero otherwise. Θ_θ(γ) is a binary-threshold function of the absolute value of γ;
that is: Θ_θ(γ) = 1.0 if |γ| > θ_th and zero otherwise.
Although this equation appears formidable, it can be understood as follows. Whenever simultaneous above-threshold activities arise presynaptically at node x_i and
postsynaptically at node x_j, the Hebbian product (x_i + ε)(x_j + ε) causes dw_ij^k/dt to be
positive (since above threshold, (x_i + ε)(x_j + ε) > λ_w) and the weight w_ij^k learns the
transition between the aspects x_i and x_j. By symmetry, w_ji^k would also learn, but
all other weights decay (ẇ ∝ −λ_w). The product of the shunting terms w_ij^k(1 − w_ij^k)
goes to zero (and thus inhibits further weight changes) only when w_ij^k approaches
either zero or unity. This shunting mechanism limits the range of weights, but also
assures that these fixed points are invariant to input-activity magnitudes, decay
rates, or the initial and final network sizes.
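A discrete-time sketch of the weight dynamics (2) can make this behavior tangible (our simplification: the two gating terms are folded into one boolean flag, and parameter values follow Section 4):

```python
def hebbian_product(xi, xj, eps=0.03):
    """Presynaptic/postsynaptic conjunction used in equations (2) and (3)."""
    return (xi + eps) * (xj + eps)

def update_weight(w, xi, xj, gate, dt=0.01, kappa_w=0.6, lambda_w=0.02,
                  phi_w=0.05, eps=0.03):
    """One Euler step of
       dw/dt = kappa_w * w * (1 - w) * (Phi_w[(xi+eps)(xj+eps)] - lambda_w) * gate.
    """
    if not gate:
        return w
    p = hebbian_product(xi, xj, eps)
    drive = p if p > phi_w else 0.0            # threshold-linear Phi_w
    return w + dt * kappa_w * w * (1.0 - w) * (drive - lambda_w)

# During a transition both activities are high, so the weight grows...
w = 0.1
for _ in range(2000):
    w = update_weight(w, 0.8, 0.5, gate=True)
grown = w

# ...while with only one active aspect the Hebbian product stays below
# threshold and the weight slowly decays toward zero.
w = 0.1
for _ in range(2000):
    w = update_weight(w, 0.8, 0.0, gate=True)
decayed = w
```

The shunting factor w(1 − w) keeps the weight inside [0, 1] regardless of how large the Hebbian drive is.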
The gating terms Θ_y(y_k) and Θ_z(z_k) modulate the learning of the synaptic arrays
w_ij^k. As a result of competition between multiple object hypotheses (see equation
(4) below), only one z_k-node is active at a time. This implies recognition (or initial
object neuron assignment) of "Object-k," and so only the synaptic array of Object-k
adapts. All other synaptic arrays w_ij^l (l ≠ k) remain unchanged. Moreover, learning
occurs only during aspect transitions. While y_k ≠ 0 both learning and forgetting
proceed; but while y_k ≈ 0 adaptation ceases though recognition continues (e.g.
during a long sustained view).
2.3
OBJECT RECOGNITION DYNAMICS
Object nodes y_k accumulate evidence over time. Their dynamics are governed by:

    dy_k/dt = κ_y Σ_{i,j} Φ_y[(x_i + ε)(x_j + ε)] w_ij^k − λ_y y_k.    (3)

Here, κ_y governs the rate of evolution of the object nodes relative to the x-node
dynamics, λ_y is the passive decay rate of the object nodes, Φ_y(·) is a threshold-linear
function, and ε is the same small positive constant as in (2). The same Hebbian-like
product (i.e., (x_i + ε)(x_j + ε)) used to learn transitions in (2) is used to detect aspect
transitions during recognition in (3) with the addition of the synaptic term w_ij^k,
which produces an axo-axo-dendritic synapse (see Section 3). Using this synapse,
an aspect transition must not only be detected, but it must also be a permitted one
for Object-k (i.e., w_ij^k > 0) if it is to contribute activity to the y_k-node.
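The recognition term in (3) can be sketched as follows (a simplified, discrete illustration; the function name, weight values, and threshold are ours): an aspect transition contributes evidence to object k only in proportion to that object's learned weight for the node pair.

```python
def transition_evidence(x, weights, eps=0.03, phi_y=0.001):
    """Evidence drive for one object: sum over node pairs of the Hebbian
    product gated by that object's synaptic weight (axo-axo-dendritic term)."""
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p = (x[i] + eps) * (x[j] + eps) * weights[i][j]
            total += p if p > phi_y else 0.0   # threshold-linear Phi_y
    return total

n = 5
# Object-1 has learned transitions 4<->2 and 2<->0; Object-2 knows 3<->1 and 1<->0.
w1 = [[0.0] * n for _ in range(n)]
w2 = [[0.0] * n for _ in range(n)]
for a, b in [(4, 2), (2, 0)]:
    w1[a][b] = w1[b][a] = 0.9
for a, b in [(3, 1), (1, 0)]:
    w2[a][b] = w2[b][a] = 0.9

# Activity pattern during a 4 -> 2 transition (old aspect decaying, new building):
x = [0.0, 0.0, 0.5, 0.0, 0.8]
e1 = transition_evidence(x, w1)
e2 = transition_evidence(x, w2)
```

Only the object whose array permits the observed transition receives significant evidence; for the other object every product falls below the threshold.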
2.4
SELECTING THE MAXIMALLY ACTIVATED OBJECT
A "winner-take-all" competition is used to select the maximally active object node.
The activity of each evidence accumulation y-node is periodically sampled by a
corresponding object competition z-node (see Figure 3). The sampled activities then
compete according to Grossberg's shunted short-term memory model (Grossberg,
1973), leaving only one z-node active at the expense of the activities of the other
z-nodes. In addition to signifying the 'recognized' object, outputs of the z-nodes
are used to inhibit weight adaptation of those weights which are not associated with
the winning object via the Θ_z(z_k) term in equation (2). The competition is given
by a first-order differential equation taken from (Grossberg, 1973):
    dz_k/dt = −λ_z z_k + (B − z_k) f(z_k) − z_k Σ_{l≠k} f(z_l).    (4)
The function f(z) is chosen to be faster than linear (e.g. quadratic). The initial
conditions are reset periodically to z_k(0) = y_k(t).
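The competition can be sketched with a shunted network using a quadratic, faster-than-linear signal function in the spirit of Grossberg (1973); the constants here are our own choices, not the paper's:

```python
def winner_take_all(y, steps=5000, dt=0.01, lam=0.05, B=1.0):
    """Shunted short-term-memory competition with quadratic signal f(z) = z^2;
    initial conditions z_k(0) = y_k."""
    z = list(y)
    for _ in range(steps):
        f = [zk * zk for zk in z]
        total = sum(f)
        z = [zk + dt * (-lam * zk + (B - zk) * f[k] - zk * (total - f[k]))
             for k, zk in enumerate(z)]
        z = [max(zk, 0.0) for zk in z]   # activities stay non-negative
    return z

z = winner_take_all([0.9, 0.3, 0.1])
winner = max(range(len(z)), key=lambda k: z[k])
```

Because f is faster than linear, the node with the largest sampled evidence suppresses the others and settles near its shunting equilibrium, while the losers are quenched toward zero.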
3
THE AXO-AXO-DENDRITIC SYNAPSE
Although the learning is very closely Hebbian, the network requires a synapse that
is more complex than that typically analyzed in the current modeling literature.
Instead of an axo-dendritic synapse, we utilize an axo-axo-dendritic synapse (Shepard, 1979). Figure 4 illustrates the synaptic anatomy and our functional model.
We interpret the structure by assuming that it is the conjunction of activities in
Figure 4: Axo-axo-dendritic Synapse Model. The Hebbian-like w_ij^k-weight adapts
when simultaneous axonal activities x_i and x_j arise. Similarly, a conjunction of
both activities is necessary to significantly stimulate the dendrite to node y_k.
both axons (as during an aspect transition) that best stimulates the dendrite. If,
however, significant activity is present on only one axon (a sustained static view), it
can stimulate the dendrite to a small extent in conjunction with the small base-level
activity ε present on all axons. This property supports object recognition in static
scenes, though object learning requires dynamic scenes.
4
SAMPLE RESULTS
Consider two objects composed of three aspects each, with one aspect in common:
the first has aspects 0, 2, and 4, while the second has aspects 0, 1, and 3. Figure 5
shows the evolution of the node activities and some of the weights during two aspect
sequences. With an initial distribution of small, random weights, we present the
repetitive aspect sequence 4 → 2 → 0 → ⋯, and learning is engaged by Object-1.
The attention of the system is then redirected with a saccadic eye motion (the
short-term memory node activities are reset to zero) and a new repetitive aspect
sequence is presented: 3 → 1 → 0 → ⋯. Since the weights for these aspect
transitions in the Object-1 synaptic array decayed as it learned its sequence, it
does not respond strongly to this new sequence and Object-2 wins the competition.
Thus, the second sequence is learned (and recognized!) by Object-2's synaptic
weight array. In these simulations (1)-(4) were implemented by a Runge-Kutta
coupled differential equation integrator. Each aspect was presented for T = 4 time-units. The equation parameters were set as follows: I = 1, λ_x = −ln(0.1)/T, λ_y =
0.3, λ_w = 0.02, κ_y = 0.3, κ_w = 0.6, ε = 0.03, and thresholds of θ_y = 10⁻⁵ for
Θ_y(y_k) in equation (2), θ_z = 10⁻⁵ for Θ_z(z_k) in equation (2), φ_y > ε² for Φ_y in
equation (3), φ_w > max[εI/λ_x + ε², (I/λ_x)² exp(−λ_x T)] for Φ_w in equation (2).
The φ_w constraint insures that only transitions are learned, and they are learned
only when t < T.
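The two-sequence experiment can be reproduced in caricature with a simplified discrete-time simulation (Euler steps, a single shared hard threshold, and a plain argmax over accumulated evidence replace the paper's Runge-Kutta integration of (1)-(4) and the z-node competition; all constants are our own choices):

```python
import math

N, K = 5, 2               # number of aspect nodes and object neurons
T, DT = 4.0, 0.01         # presentation time per aspect; Euler step
LAM_X = -math.log(0.1) / T
EPS, KAPPA_W, LAM_W = 0.03, 0.6, 0.02
PHI = 0.1                 # shared threshold, above the epsilon-baseline products

def run_sequence(seq, w, learner=None):
    """Present a sequence of aspects; accumulate evidence for every object
    and (optionally) adapt one object's synaptic array as in equation (2)."""
    x = [0.0] * N
    evidence = [0.0] * K
    for aspect in seq:
        for _ in range(int(T / DT)):
            x = [xi + DT * ((1.0 if i == aspect else 0.0) - LAM_X * xi)
                 for i, xi in enumerate(x)]
            for i in range(N):
                for j in range(N):
                    if i == j:
                        continue
                    p = (x[i] + EPS) * (x[j] + EPS)
                    gated = p if p > PHI else 0.0
                    for k in range(K):
                        evidence[k] += DT * gated * w[k][i][j]
                    if learner is not None:
                        wk = w[learner][i][j]
                        w[learner][i][j] = wk + DT * KAPPA_W * wk * (1.0 - wk) \
                            * (gated - LAM_W)
    return evidence

w = [[[0.01] * N for _ in range(N)] for _ in range(K)]
run_sequence([4, 2, 0] * 3, w, learner=0)        # Object-1 learns 4 -> 2 -> 0
ev = run_sequence([3, 1, 0] * 3, w)              # novel repetitive sequence
winner = max(range(K), key=lambda k: ev[k])      # Object-2 responds more strongly
```

Object-1's weights for the never-experienced transitions decay below their initial value during its own learning, so the untrained Object-2 array accumulates more evidence on the second sequence, mirroring the outcome described above.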
Figure 5: Node activity and synapse adaptation vs. time (traces: aspect sequence;
Object-1 and Object-2 evidence; Object-1 weights 0-1 and 0-2; Object-2 weights 0-1
and 0-2, for the view sequences 4 → 2 → 0 → ⋯ and 3 → 1 → 0 → ⋯). Two separate
representations are learned automatically as aspect sequences of the objects are
experienced.
Acknowledgments
This report is based on studies performed at Lincoln Laboratory, a center for research operated by the Massachusetts Institute of Technology. The work was sponsored by the Department of the Air Force under Contract F19628-85-C-0002.
References
Bowyer, K., Eggert, D., Stewman, J., & Stark, L. (1989). Developing the aspect
graph representation for use in image understanding. Proceedings of the 1989 Image
Understanding Workshop. Wash., DC: DARPA, 831-849.
Carpenter, G. A., & Grossberg, S. (1987). ART 2: Self-organization of stable
category recognition codes for analog input patterns. Applied Optics, 26(23), 4919-4930.
Grossberg, S. (1973). Contour enhancement, short term memory, and constancies
in reverberating neural networks. Studies in Applied Mathematics, 52(3), 217-257.
Koenderink, J. J., &. van Doorn, A. J. (1979). The internal representation of solid
shape with respect to vision. Biological Cybernetics, 32, 211-216.
Seibert, M., Waxman, A. M. (1989). Spreading Activation Layers, Visual Saccades,
and Invariant Representations for Neural Pattern Recognition Systems. Neural
Networks, 2(1), 9-27.
Shepard, G. M. (1979). The synaptic organization of the brain. New York:
Oxford University Press.
Waxman, A. M., Seibert, M., Cunningham, R., & Wu, J. (1989). Neural analog
diffusion-enhancement layer and spatio-temporal grouping in early vision. In: Advances in neural information processing systems, D. S. Touretzky (ed.), San
Mateo, CA: Morgan Kaufman. 289-296.
On the Complexity of Learning
the Kernel Matrix
Olivier Bousquet, Daniel J. L. Herrmann
MPI for Biological Cybernetics
Spemannstr. 38, 72076 Tübingen
Germany
{olivier.bousquet, daniel.herrmann}@tuebingen.mpg.de
Abstract
We investigate data-based procedures for selecting the kernel when learning with Support
bounds by estimating the Rademacher complexities of the corresponding
function classes. In particular we obtain a complexity bound for function
classes induced by kernels with given eigenvectors, i.e., we allow to vary
the spectrum and keep the eigenvectors fixed. This bound is only a logarithmic factor bigger
by a single kernel. However, optimizing the margin over such classes
leads to overfitting. We thus propose a suitable way of constraining the
class. We use an efficient algorithm to solve the resulting optimization
problem, present preliminary experimental results, and compare them to
an alignment-based approach.
1 Introduction
Ever since the introduction of the Support Vector Machine (SVM) algorithm, the question
of choosing the kernel has been considered as crucial. Indeed, the success of SVM can be
attributed to the joint use of a robust classification procedure (large margin hyperplane) and
of a convenient and versatile way of pre-processing the data (kernels). It turns out that with
such a decomposition of the learning process into preprocessing and linear classification,
the performance highly depends on the preprocessing and much less on the linear classification algorithm to be used (e.g. the kernel perceptron has been shown to have comparable
performance to SVM with the same kernel). It is thus of high importance to have a criterion
to choose the suitable kernel for a given problem.
Ideally, this choice should be dictated by the data itself and the kernel should be "learned"
from the data. The simplest way of doing so is to choose a parametric family of kernels
(such as polynomial or Gaussian) and to choose the values of the parameters by crossvalidation. However this approach is clearly limited to a small number of parameters and
requires the use of extra data.
Chapelle et al. [1] proposed a different approach. They used a bound on the generalization
error and computed the gradient of this bound with respect to the kernel parameters. This
allows one to perform a gradient descent optimization and thus to effectively handle a large
number of parameters.
More recently, the idea of using non-parametric classes of kernels has been proposed by
Cristianini et al. [2]. They work in a transduction setting where the test data is known in
advance. In that setting, the kernel reduces to a positive definite matrix of fixed size (Gram
matrix). They consider the set of kernel matrices with given eigenvectors and to choose the
eigenvalues using the "alignment" between the kernel and the data. This criterion has the
advantage of being easily computed and optimized. However it has no direct connection to
the generalization error.
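The alignment of Cristianini et al. [2] is the normalized Frobenius inner product between the Gram matrix and the label matrix yyᵀ; a minimal sketch (the toy data is ours):

```python
import math

def frobenius_inner(A, B):
    """Frobenius inner product <A, B>_F = sum_ij A_ij * B_ij."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / sqrt(<K, K>_F * <yy^T, yy^T>_F)."""
    n = len(y)
    yyT = [[y[i] * y[j] for j in range(n)] for i in range(n)]
    return frobenius_inner(K, yyT) / math.sqrt(
        frobenius_inner(K, K) * frobenius_inner(yyT, yyT))

y = [1, 1, -1, -1]
K_good = [[y[i] * y[j] for j in range(4)] for i in range(4)]  # ideal kernel yy^T
K_bad = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # identity
a_good = alignment(K_good, y)   # perfectly aligned: 1.0
a_bad = alignment(K_bad, y)     # uninformative: 0.5 here
```

Maximizing this quantity over a kernel family is cheap, but, as noted above, it does not by itself control the generalization error.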
Lanckriet et al. [5] derived a generalization bound in the transduction setting and proposed to use this bound to choose the kernel. Their parameterization is based on a linear
combination of given kernel matrices and their bound has the advantage of leading to a
convex criterion. They thus proposed to use semidefinite programming for performing the
optimization.
Actually, if one wants to have a feasible optimization, one needs the criterion to be nice
(e.g. differentiable) and the parameterization to be nice (e.g. the criterion is convex with
respect to the parameters). The criterion and parameterization proposed by Lanckriet et al.
satisfy these requirements. We shall use their approach and develop it further.
In this paper, we try to combine the advantages of previous approaches. In particular we
propose several classes of kernels and give bounds on their Rademacher complexity. Instead of using semidefinite programming we propose a simple, fast and efficient gradient-descent algorithm.
In section 2 we calculate the complexity of different classes of kernels. This yields a convex
optimization problem. In section 3 we propose to restrict the optimization of the spectrum
such that the order of the eigenvalues is preserved. This convex constraint is implemented
by using polynomials of the kernel matrix with non-negative coefficients only.
In section 4 we use gradient descent to implement the optimization algorithm. Experimental results on standard data sets (UCI Machine Learning Repository) show in section 5 that
indeed overfitting happens if we do not keep the order of the eigenvalues.
2 Bounding the Rademacher Complexity of Matrix Classes
Let us introduce some notation. Let 𝒳 be a measurable space (the instance space)
and 𝒴 = {−1, 1} the label space. We consider here the setting of transduction where
the data is generated as follows. A fixed sample (x_1, y_1), …, (x_2n, y_2n) of size 2n
is given and a permutation of {1, …, 2n} is chosen at random (uniformly). The
algorithm is given (x_1, y_1), …, (x_n, y_n) and x_{n+1}, …, x_{2n}, i.e. it has access to
all instances but to the labels of the first n instances only. The algorithm picks some
classifier f and the goal is to minimize the error of this classifier on the test
instances. Let us denote by err(f) the error of f on the testing instances,

    err(f) = (1/n) Σ_{i=n+1}^{2n} 1[y_i f(x_i) ≤ 0].

The empirical Rademacher complexity of a set F of functions from 𝒳 to ℝ is
defined as

    R̂(F) = E_σ [ sup_{f ∈ F} (2/n) Σ_{i=1}^{n} σ_i f(x_i) ],

where the expectation is taken with respect to the independent Rademacher random
variables σ_i (P[σ_i = 1] = P[σ_i = −1] = 1/2). For a vector λ, λ ≥ 0 means that all
the components of λ are non-negative. For a matrix A, A ⪰ 0 means that A is
positive definite.
@.A WF A
<
(
<(
IA
*D D
U 8 8 KH J L N)PQ U 8 8 8
\
123*4 @ 8:9 W>' *? ) R ST 8 9 V <F' *? X
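As a concrete illustration of this definition, the empirical Rademacher complexity of a small finite function class can be computed exactly by enumerating all $2^n$ sign patterns. This is a toy sketch; the two function-value vectors below are made up for illustration.

```python
import itertools

def empirical_rademacher(F_values):
    """Exact empirical Rademacher complexity
    R(F) = E_sigma sup_{f in F} (2/n) sum_i sigma_i f(x_i),
    computed by enumerating all 2^n sign patterns (feasible for small n).
    Each function f is represented by its vector of values
    (f(x_1), ..., f(x_n)) on the fixed sample."""
    n = len(F_values[0])
    total = 0.0
    for sigma in itertools.product((-1.0, 1.0), repeat=n):
        total += max(2.0 / n * sum(s * v for s, v in zip(sigma, f))
                     for f in F_values)
    return total / 2 ** n

# Two toy "classifiers" evaluated on a sample of size n = 4.
F = [[1.0, -1.0, 1.0, -1.0], [0.5, 0.5, -0.5, -0.5]]
r = empirical_rademacher(F)  # exact value: 7/16 = 0.4375
```

For larger $n$ one would replace the exact enumeration by a Monte-Carlo average over random sign vectors.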
2.1 General Bound
We denote by $\phi$ the function defined as $\phi(x) = 1$ for $x \le 0$, $\phi(x) = 1 - x/\gamma$ for $0 < x \le \gamma$ and $\phi(x) = 0$
otherwise. From the proof of Theorem 1 in [5] we obtain the lemma below.

Lemma 1. Let $F$ be a set of real-valued functions. For any $\delta > 0$, with probability at least $1 - \delta$, for all $f \in F$ we have

$$\mathrm{err}(f) \le \frac{1}{n} \sum_{i=1}^{n} \phi(y_i f(x_i)) + \hat{R}(\phi \circ F) + \sqrt{\frac{\ln(1/\delta)}{2n}}\,.$$

Using the comparison inequality for Rademacher processes in [6] we immediately obtain
the following corollary.

Corollary 1. Let $F$ be a set of real-valued functions. For any $\delta > 0$, with probability at least $1 - \delta$, for all $f \in F$ we have

$$\mathrm{err}(f) \le \frac{1}{n} \sum_{i=1}^{n} \phi(y_i f(x_i)) + \frac{1}{\gamma}\,\hat{R}(F) + \sqrt{\frac{\ln(1/\delta)}{2n}}\,.$$
Now we will apply this bound to several different classes of functions. We will thus compute
$\hat{R}(F)$ for each of those classes. For a positive definite kernel $k$ one
usually considers the RKHS formed by the closure of $\mathrm{span}\{k(x, \cdot) : x \in \mathcal{X}\}$ with respect
to the inner product defined by $\langle k(x, \cdot), k(y, \cdot) \rangle = k(x, y)$. Since we will vary the kernel,
it is convenient to distinguish between the vectors in the RKHS and their geometric relation. We first define the abstract real vector space $V = \mathrm{span}\{\delta_x : x \in \mathcal{X}\}$, where $\delta_x$ is
the evaluation functional at point $x$. Then we define for a given kernel $k$ the Hilbert space
$V_k$ as the closure of $V$ with respect to the scalar product given by $\langle \delta_x, \delta_y \rangle_k = k(x, y)$.
In this way we can vary $k$, i.e. the geometry, without changing the vector space structure
of any finite dimensional subspace of the form $\mathrm{span}\{\delta_{x_1}, \ldots, \delta_{x_n}\}$. We can identify the
RKHS above with $V_k$ via $k(x, \cdot) \mapsto \delta_x$ and $\langle \cdot, \cdot \rangle \mapsto \langle \cdot, \cdot \rangle_k$.

Lemma 2. Let $k$ be a kernel on $\mathcal{X}$, let $x_1, \ldots, x_n \in \mathcal{X}$ and let $K = (k(x_i, x_j))_{i,j}$ be the corresponding kernel matrix. Then
we have

$$\mathbb{E}_\sigma \sup_{\|w\|_k \le 1} \sum_{i=1}^{n} \sigma_i \langle w, \delta_{x_i} \rangle_k \;=\; \mathbb{E}_\sigma \left\| \sum_{i=1}^{n} \sigma_i \delta_{x_i} \right\|_k \;\le\; \sqrt{\mathrm{tr}\, K}\,.$$

Proof: We have

$$\sup_{\|w\|_k \le 1} \sum_{i=1}^{n} \sigma_i \langle w, \delta_{x_i} \rangle_k = \left\| \sum_{i=1}^{n} \sigma_i \delta_{x_i} \right\|_k = \sqrt{\sigma^\top K \sigma}\,.$$

The first equality holds due to the Cauchy-Schwarz inequality, which becomes here an
equality because of the supremum. Notice that $\sigma^\top K \sigma$ is always non-negative since $k$ is
positive definite. Taking expectations (with Jensen's inequality, $\mathbb{E}\sqrt{\sigma^\top K \sigma} \le \sqrt{\mathbb{E}[\sigma^\top K \sigma]} = \sqrt{\mathrm{tr}\,K}$) concludes the proof.
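The quantity in Lemma 2 can be checked numerically on a toy Gram matrix: enumerating all sign vectors gives the exact value of $\mathbb{E}_\sigma\sqrt{\sigma^\top K\sigma}$, which indeed stays below $\sqrt{\mathrm{tr}\,K}$. The vectors used to build $K$ below are arbitrary illustrative choices.

```python
import itertools
import math

def expected_sqrt_quadratic(K):
    """Exact E_sigma sqrt(sigma^T K sigma), enumerating all sign vectors."""
    n = len(K)
    total = 0.0
    for sigma in itertools.product((-1.0, 1.0), repeat=n):
        q = sum(sigma[i] * K[i][j] * sigma[j]
                for i in range(n) for j in range(n))
        total += math.sqrt(max(q, 0.0))  # q >= 0 up to rounding: K is PSD
    return total / 2 ** n

# Gram matrix K_ij = <v_i, v_j> of explicit vectors, hence positive semi-definite.
V = [[1.0, 0.0], [0.5, 0.5], [0.0, 2.0]]
K = [[sum(a * b for a, b in zip(vi, vj)) for vj in V] for vi in V]

lhs = expected_sqrt_quadratic(K)                       # E sqrt(sigma' K sigma)
rhs = math.sqrt(sum(K[i][i] for i in range(len(K))))   # sqrt(tr K) = sqrt(5.5)
```

The gap between `lhs` and `rhs` is exactly the slack introduced by Jensen's inequality in the proof.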
The expression in Lemma 2 is, up to a factor $2/(n\gamma)$, the Rademacher complexity of the class
of functions in $V_k$ with margin $\gamma$. It is important to notice that this is equal to the
Rademacher complexity of the subspace of $V_k$ which is spanned by the data. Indeed, let
us consider the space $\mathrm{span}\{\delta_{x_1}, \ldots, \delta_{x_n}\}$; then this is a Hilbert subspace of $V_k$, and the supremum of $\sum_i \sigma_i \langle w, \delta_{x_i} \rangle_k$ over its unit ball coincides with the supremum over the unit ball of $V_k$.
This proves that we are actually capturing the right complexity since we are computing
the complexity of the set of hyperplanes whose normal vector can be expressed as a linear
combination of the data points. Now, let us assume that we allow the kernel to change, that
is, we have a set $\mathcal{K}$ of possible kernels, or equivalently a set of possible kernel matrices.
Let $V_K$ be $V$ with the inner product induced by $K$, let $F_K$ denote the class of
hyperplanes with margin $\gamma$ in the space $V_K$, and let $F_{\mathcal{K}} = \bigcup_{K \in \mathcal{K}} F_K$. Using Lemma 2
we have

$$\hat{R}(F_{\mathcal{K}}) \;\le\; \frac{2}{n\gamma}\, \mathbb{E}_\sigma \left[ \sup_{K \in \mathcal{K}} \sqrt{\sigma^\top K \sigma} \right]. \qquad (1)$$
Let $\|K\|_F$ denote the Frobenius norm of $K$, i.e. $\|K\|_F = \sqrt{\mathrm{tr}(K^\top K)}$. Recall that for symmetric positive definite matrices, the Frobenius norm is equal to the $\ell_2$-norm of the spectrum, i.e. $\|K\|_F = \|\lambda(K)\|_2$. Also, recall that the trace of such a matrix is equal to the
$\ell_1$-norm of its spectrum, i.e. $\mathrm{tr}\,K = \|\lambda(K)\|_1$. Finally, recall that for a positive definite matrix
the operator norm $\|K\|_\infty$ is given by the largest eigenvalue, $\|K\|_\infty = \|\lambda(K)\|_\infty$.
It is easy to see that for a fixed kernel matrix $K$, we have $\hat{R}(F_K) \le \frac{2}{n\gamma}\sqrt{\mathrm{tr}\,K}$. Also, it
is useful to keep in mind that for certain kernels like the RBF kernel, the trace of the kernel
matrix grows approximately linearly in the number of examples, while even if the problem is linearly separable, the margin decreases in the best case to a fixed strictly positive
constant. This means that in that case we have $\hat{R}(F_K) = O\!\left(\frac{\sqrt{\mathrm{tr}\,K}}{n\gamma}\right) = O\!\left(\frac{1}{\gamma\sqrt{n}}\right)$.
2.2 Complexity of $\ell_p$-balls of kernel matrices

The first class that one may consider is the class of all positive definite matrices with $\ell_p$-norm of the spectrum
bounded by some constant.

Theorem 1. Let $c > 0$ and $p \in [1, \infty]$. Define $\mathcal{K}_p = \{K \succeq 0 : \|\lambda(K)\|_p \le c\}$. Then

$$\hat{R}(F_{\mathcal{K}_p}) \;\le\; \frac{2}{n\gamma}\sqrt{c\,n} \;=\; \frac{2}{\gamma}\sqrt{\frac{c}{n}}\,.$$

Proof:
Using (1) we thus have to compute $\mathbb{E}_\sigma \sup_{K \in \mathcal{K}_p} \sigma^\top K \sigma$. Since we
can always find some $K \in \mathcal{K}_p$ having $\sigma$ as an eigenvector with eigenvalue $c$ (e.g. the rank-one matrix $K = \frac{c}{n}\sigma\sigma^\top$, whose spectrum has $\ell_p$-norm $c$ for every $p$), we obtain
$\sup_{K \in \mathcal{K}_p} \sigma^\top K \sigma = c\,\|\sigma\|^2 = c\,n$,
which concludes the proof.

Remark: Observe that $\mathcal{K}_1 \subseteq \mathcal{K}_2 \subseteq \mathcal{K}_\infty$ for the same value of $c$. However they have
the same Rademacher complexity. From the proof we see that for the calculation of the
complexity only the contribution of $K$ in the direction of $\sigma$ matters. Therefore for every $\sigma$ the
worst case element is contained in all three classes.

Recall that in the case of the RBF kernel we have $\mathrm{tr}\,K = n$, so that taking $c$ of the order of the trace,
we would obtain in this case a Rademacher complexity which does not decrease with $n$. It
seems clear that proper learning is not possible in such a class, at least from the view point
of this way of measuring the complexity.
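The worst-case element used in the proof of Theorem 1 can be verified directly: for any sign vector $\sigma$, the rank-one matrix $K = (c/n)\sigma\sigma^\top$ has a single non-zero eigenvalue $c$ (so its spectrum has $\ell_p$-norm $c$ for every $p$) and attains $\sigma^\top K\sigma = c\,n$. A small numerical check with made-up values:

```python
# Worst-case element of the l_p ball {K >= 0 : ||spectrum||_p <= c}
# for a given sign vector sigma: the rank-one matrix K = (c/n) sigma sigma^T.
# Its only non-zero eigenvalue is c (eigenvector sigma), and it attains
# sigma^T K sigma = c * n.
c, n = 3.0, 4
sigma = [1.0, -1.0, -1.0, 1.0]

K = [[c / n * si * sj for sj in sigma] for si in sigma]
quad = sum(sigma[i] * K[i][j] * sigma[j] for i in range(n) for j in range(n))
trace = sum(K[i][i] for i in range(n))
# quad == c * n == 12.0, trace == c == 3.0 (the single non-zero eigenvalue)
```

Since the trace equals the $\ell_1$-norm of the spectrum here, the same matrix sits on the boundary of every $\ell_p$ ball of radius $c$, which is exactly the point of the remark above.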
2.3 Complexity of the convex hull of kernel matrices
Lanckriet et al. [5] considered positive definite linear combinations of $m$ kernel matrices,
i.e. the class

$$\mathcal{K} = \left\{ K = \sum_{i=1}^{m} d_i K_i \;:\; K \succeq 0,\; \mathrm{tr}\,K = c \right\}. \qquad (2)$$

We rather consider the (smaller) class

$$\mathcal{K}^+ = \left\{ K = \sum_{i=1}^{m} d_i K_i \;:\; d_i \ge 0,\; \mathrm{tr}\,K = c \right\}, \qquad (3)$$

which has simple linear constraints on the feasible parameter set and allows us to use a
straightforward gradient descent algorithm. Notice that $\mathcal{K}^+$ is the convex hull of the matrices
$c\,K_i / \mathrm{tr}\,K_i$, where $i = 1, \ldots, m$.
We obtain the following bound on the Rademacher complexity of this class.

Theorem 2. Let $K_1, \ldots, K_m$ be some fixed kernel matrices and $\mathcal{K}^+$ as defined in (3). Then

$$\hat{R}(F_{\mathcal{K}^+}) \;\le\; \frac{2}{n\gamma} \sqrt{c\, n \max_{1 \le i \le m} \frac{\|K_i\|_\infty}{\mathrm{tr}\,K_i}}\,.$$

Proof: Applying Jensen's inequality to equation (1), we calculate first

$$\mathbb{E}_\sigma \sup_{K \in \mathcal{K}^+} \sigma^\top K \sigma \;=\; \mathbb{E}_\sigma \max_{1 \le i \le m} \frac{c\,\sigma^\top K_i \sigma}{\mathrm{tr}\,K_i} \;\le\; c \max_{1 \le i \le m} \frac{n\,\|K_i\|_\infty}{\mathrm{tr}\,K_i}\,.$$

Indeed, consider the sum as a dot product with the coefficient vector and identify its domain, a simplex. Then one recognizes that the first equality holds since the supremum of a linear function over a simplex is attained at one of the vertices
$c\,K_i/\mathrm{tr}\,K_i$. The second part is due to the fact that $\sigma^\top K_i \sigma \le \|K_i\|_\infty \|\sigma\|^2 = n\,\|K_i\|_\infty$.

Remark: For a large class of kernel functions the trace of the induced kernel matrix scales
linearly in the sample size $n$. Therefore we have to scale $c$ linearly with $n$. On the other
hand the operator norm of the induced kernel matrix grows sublinearly in $n$. If the margin
is bounded we can therefore ensure learning. In other words, if the kernels inducing $\mathcal{K}^+$
are consistent, then the convex hull of the kernels is also consistent.

Remark: The bound on the complexity for this class is less than the one obtained by Lanckriet et al. [5] for their class. Furthermore, it contains only easily computable quantities.
Recognize that in the proof of the above theorem there appears a quantity similar to the
maximal alignment of a kernel to arbitrary labels. It is interesting to notice also that the
Rademacher complexity somehow measures the average alignment of a kernel to random
labels.
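The key step in the proof of Theorem 2 — the supremum of the linear function $d \mapsto \sigma^\top(\sum_i d_i K_i)\sigma$ over the simplex being attained at a vertex — can be sanity-checked numerically. The two Gram matrices below are arbitrary illustrative choices, and the trace normalization of the class is omitted for simplicity.

```python
import itertools
import random

def quad(K, s):
    """sigma^T K sigma for a sign vector s."""
    n = len(K)
    return sum(s[i] * K[i][j] * s[j] for i in range(n) for j in range(n))

def gram(V):
    """Gram matrix of a list of explicit vectors (hence PSD)."""
    return [[sum(a * b for a, b in zip(x, y)) for y in V] for x in V]

K1 = gram([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
K2 = gram([[2.0, 0.0], [1.0, 0.0], [0.0, 0.5]])

rng = random.Random(1)
violations = 0
for sigma in itertools.product((-1.0, 1.0), repeat=3):
    vertex_max = max(quad(K1, sigma), quad(K2, sigma))
    for _ in range(50):
        d = rng.random()  # random point of the simplex {(d, 1-d)}
        mix = [[d * K1[i][j] + (1 - d) * K2[i][j] for j in range(3)]
               for i in range(3)]
        if quad(mix, sigma) > vertex_max + 1e-9:
            violations += 1
# violations == 0: no interior mixture ever beats the best vertex
```

Since the quadratic form is linear in the mixing coefficient, the check can never fail; it simply makes the vertex argument of the proof concrete.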
2.4 Complexity of spectral classes of kernels
Although the class defined in (3) has smaller complexity than the one in (2), we may want
to restrict it further. One way of doing so is to consider a set of matrices which have the
same eigenvectors. Generally speaking, the kernel encodes some prior about the data and
we may want to retain part of this prior and allow the rest to be tuned from the data. A
kernel matrix can be decomposed into two parts: its set of eigenvectors and its spectrum
(set of eigenvalues). We will fix the eigenvectors and tune the spectrum from the data.

For a kernel matrix $K = U \Lambda U^\top$ and $c > 0$, we consider the spectral class of $K$, given by

$$S(K) = \left\{ U M U^\top \;:\; M \ge 0 \text{ is diagonal},\; \mathrm{tr}(U M U^\top) = c \right\}. \qquad (4)$$

Notice that this class can be considered
as the convex hull of the matrices $c\,u_i u_i^\top$, where the $u_i$
are the eigenvectors of $K$ (columns of $U$).

Remark: We assume that all eigenvalues are different, otherwise the above sets do not
agree. Note that Cristianini et al. proposed to optimize the alignment over this class.

We obtain the following bound on the complexity of such a class.

Theorem 3. Let $c > 0$, let $U$ be some fixed unitary matrix and $S(K)$ as defined in (4). Then for
all $\gamma > 0$,

$$\hat{R}(F_{S(K)}) \;\le\; \frac{2}{n\gamma} \sqrt{c\; \mathbb{E}_\sigma \max_{1 \le i \le n} v_i^2} \;=\; O\!\left( \frac{\sqrt{c \ln n}}{n\gamma} \right), \qquad \text{where } v_i = u_i^\top \sigma.$$

Proof: As before we start with Equation (1). If we denote $v_i = u_i^\top \sigma$, we obtain

$$\sup_{K' \in S(K)} \sigma^\top K' \sigma \;=\; \sup_{\mu \ge 0,\; \sum_i \mu_i = c} \sum_{i=1}^{n} \mu_i v_i^2 \;=\; c \max_{1 \le i \le n} v_i^2\,.$$

Note that each $v_i$ is a Rademacher average of a unit vector and hence sub-Gaussian, so that, using Lemma 2.2 in [3] and the fact that $\|u_i\| = 1$,
we obtain $\mathbb{E}_\sigma \max_i v_i^2 = O(\ln n)$ and the result follows.

Remark: As a corollary, we obtain that for any number of kernel matrices
which commute, the same bound holds on the complexity of their convex hull.
3 Optimizing the Kernel
In order to choose the right kernel, we will now consider the bound of Corollary 1. For
a fixed kernel, the complexity term in this bound is proportional to $\sqrt{\mathrm{tr}\,K}/(n\gamma)$. We will
consider a class of kernels and pick the one that minimizes this bound. This suggests to
keep the trace fixed and to maximize the margin.
Using Corollary 1 with the bounds derived in Section 2 we immediately obtain a generalization bound for such a procedure.
Theorem 3 suggests that optimizing the whole spectrum of the kernel matrix does not significantly increase the complexity. However experiments (see Section 5) show that overfitting occurs. We present here a possible explanation for this phenomenon.
Loosely speaking, the kernel encodes some prior information about how the labels of two
data points should be coupled. Most often this prior corresponds to the knowledge that two
similar data points should have a similar label.
Now, when optimizing over the spectrum of a kernel matrix, we replace the prior of the
kernel function by information given by the data points. It turns out that this leads to
overfitting in practical experiments. In section 2.4 we have shown that the complexity of
the spectral class is not significantly bigger than the complexity for a fixed kernel, thus the
complexity is not a sufficient explanation for this phenomenon.
It is likely that when optimizing the spectrum, some crucial part of the prior knowledge
is lost. To verify this assumption, we ran some experiments on the real line. We have to
separate two clouds of points in $\mathbb{R}$. When the clouds are well separated, a Gaussian kernel
easily deals with the task, while if we optimize the spectrum of this kernel with respect to
the margin criterion, the classification has arbitrary jumps in the middle of the clouds.
A possible way of retaining more of the spatial information contained in the kernel is to
keep the order of the eigenvalues fixed. It turns out that in the same experiments, when the
eigenvalues are optimized keeping their original order, no spurious jumps occur.
We thus propose to add the extra constraint of keeping the order of the eigenvalues fixed.
This constraint is fulfilled by restricting the functions in (4) to polynomials of degree $d$
with non-negative coefficients, i.e. we consider spectral optimization by
convex, non-decreasing functions. For a given kernel matrix $K$, we thus define

$$P(K) = \left\{ K' = \sum_{i=1}^{d} d_i K^i \;:\; d_i \ge 0,\; \mathrm{tr}\,K' = c \right\}. \qquad (5)$$

Indeed, recent results show that the Rademacher complexity is reduced in this way [7].
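The effect of the constraint in (5) on the spectrum can be illustrated directly: a polynomial with non-negative coefficients is non-decreasing on $[0, \infty)$, so it maps a sorted list of eigenvalues to a list that is still sorted, i.e. the order of the eigenvalues is preserved. A minimal sketch, with made-up eigenvalues and coefficients:

```python
# A kernel matrix K = U diag(mu) U^T is mapped by (5) to sum_i d_i K^i, which
# has the SAME eigenvectors and eigenvalues p(mu) with p(mu) = sum_i d_i mu^i.
# Since p is non-decreasing on [0, inf) when all d_i >= 0, the order of the
# eigenvalues is preserved.
def apply_poly(spectrum, coeffs):
    # coeffs[i - 1] is the non-negative coefficient of K^i
    return [sum(c * mu ** (i + 1) for i, c in enumerate(coeffs))
            for mu in spectrum]

spectrum = [5.0, 3.0, 1.0, 0.5]                  # sorted decreasingly
new_spectrum = apply_poly(spectrum, [0.2, 0.1, 0.05])
# still sorted decreasingly (approx. [9.75, 2.85, 0.35, 0.13125])
```

Unrestricted spectral optimization over (4), by contrast, is free to swap any two eigenvalues, which is precisely what the jump experiments above penalize.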
4 Implementation
Following Lanckriet et al. [5], one can formulate the problem of optimizing the margin error
bound as a semidefinite programming problem. Here we considered classes
of kernels that can be written as linear combinations of kernel matrices with non-negative
coefficients $d_i$ and fixed trace. In that case, one obtains the following problem (the subscript
tr indicates that we keep the block corresponding to the training data only):

$$\min_{d} \; \omega\!\left( \sum_{i=1}^{m} d_i K_{i,\mathrm{tr}} \right) \quad \text{subject to} \quad \sum_{i=1}^{m} d_i\, \mathrm{tr}\,K_i = c, \quad d \ge 0,$$

where $\omega(K_{\mathrm{tr}})$ denotes the optimal value of the dual SVM objective with training kernel matrix $K_{\mathrm{tr}}$, i.e. the inverse squared margin $\|w\|^2$.

It turns out that implementing this semidefinite program is computationally quite expensive.
We thus propose a different approach based on the work of [1]. Indeed, the goal is to
minimize a bound of the form $\sqrt{\mathrm{tr}\,K}\,\|w\|$, so that if we fix the trace, we simply have to minimize
the squared norm of the solution vector $w$. It has been proven in [1] that the gradient of
$\|w\|^2$ can be computed as

$$\frac{\partial \|w\|^2}{\partial d_i} = -\alpha^\top K_{i,\mathrm{tr}}\, \alpha, \qquad (6)$$

where $\alpha$ is the vector of dual coefficients of the trained SVM. The algorithm we suggest can thus be described as follows:

1. Train an SVM to find the optimal value of $\alpha$ with the current kernel matrix.
2. Make a gradient step on the coefficients $d$ according to (6).
3. Enforce the constraints on the coefficients (normalization and non-negativity).
4. Return to 1 unless a termination criterion is reached.
It turns out that this algorithm is very efficient and much simpler to implement than
semidefinite programming. Moreover, the semidefinite programming formulations involve
a large amount of (redundant) variables, so that a typical SDP solver will take 10 to 100
times longer to perform the same task since it will not use the specific symmetries of the
problem.
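Step 3 of the algorithm is not spelled out in detail; one simple way to enforce non-negativity together with the fixed-trace normalization $\sum_i d_i\,\mathrm{tr}\,K_i = c$ is to clip negative coefficients and rescale the rest. This is a sketch under that assumption, not necessarily the exact projection used by the authors.

```python
def enforce_constraints(d, traces, c):
    """Clip coefficients to be non-negative, then rescale so that
    sum_i d_i * tr(K_i) == c (the fixed-trace normalization)."""
    d = [max(di, 0.0) for di in d]
    total = sum(di * ti for di, ti in zip(d, traces))
    if total == 0.0:
        raise ValueError("all coefficients were clipped to zero")
    return [di * c / total for di in d]

d = enforce_constraints([0.5, -0.2, 1.0], traces=[2.0, 3.0, 4.0], c=6.0)
# after clipping: 0.5*2 + 0*3 + 1.0*4 = 5, so every coefficient is
# rescaled by 6/5, giving d == [0.6, 0.0, 1.2]
```

In the full loop this function would be called between each gradient step on $d$ and the next SVM training.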
5 Experiments
In order to compare our results we use the same setting as in [5]: we consider the Breast
cancer and Sonar databases from the UCI repository and perform 30 random splits with
60% of the data for training and 40% for testing. $K_p$ denotes the matrix induced by the
polynomial kernel $k_p(x, y) = (1 + x \cdot y)^d$, $K_G$ the matrix induced by the Gaussian kernel
$k_G(x, y) = \exp(-\|x - y\|^2 / (2\sigma^2))$, and $K_l$ the matrix induced by the linear kernel $k_l(x, y) = x \cdot y$.
First we compare two classes of kernels: linear combinations defined by (2) and convex
combinations defined by (3). Figure 1 shows that optimizing the margin on both classes yields
roughly the same performance, while optimizing the alignment with the ideal kernel is
worse. Furthermore, considering the class defined in (3) yields a large improvement in
computational efficiency.
Next, we compare the optimization of the margin over the classes (3), (4) and (5) with
fixed-degree polynomials. Figure 1 indicates that tuning the full spectrum leads to overfitting,
while keeping the order of the eigenvalues gives reasonable performance (this performance
is retained when the degree of the polynomial is increased).
[Table of Figure 1: test error (%) on the Breast cancer and Sonar data sets for each method; the numeric entries are garbled in the source and are omitted.]

Figure 1: Performance of optimized kernels for different kernel classes and optimization
procedures (methods proposed in the present paper are typeset in bold face). $K_p$, $K_G$ and $K_l$ indicate fixed kernels, see text. $K_{(2)}$: given by (2) and maximized margin, cf. [5]; $K_{\mathrm{align}}$: given by (3) and maximized alignment with the ideal kernel, cf. [2]; $K_{(3)}$: given by (3) and
maximized margin; $K_{(4)}$: given by (4), i.e. the whole spectral class of $K_G$, and maximized margin;
$K_{(5)}$: given by (5), i.e. keeping the order of the eigenvalues in the spectral class,
and maximized margin. The performance of $K_{(5)}$ is much better than that of $K_{(4)}$.
6 Conclusion
We have derived new bounds on the Rademacher complexity of classes of kernels. These
bounds give guarantees for the generalization error when optimizing the margin over a
function class induced by several kernel matrices. We propose a general methodology
for implementing the optimization procedure for such classes which is simpler and faster
than semidefinite programming while retaining the performance. Although the bound for
spectral classes is quite tight, we encountered overfitting in the experiments. We overcome
this problem by keeping the order of the eigenvalues fixed. The motivation of this additional
convex constraint is to maintain more information about the similarity measure.
The condition to fix the order of the eigenvalues is a new type of constraint. More work
is needed to understand this constraint and its relation to the prior knowledge contained in
the corresponding class of similarity measures. The complexity of such classes seems also
to be much smaller. Therefore we will investigate the generalization behavior on different
natural and artificial data sets in future work. Another direction for further investigation is
to refine the bounds we obtained, using for instance local Rademacher complexities.
References
[1] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters
for support vector machines. Machine Learning, 46(1):131?159, 2002.
[2] N. Cristianini, J. Kandola, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel
alignment. Journal of Machine Learning Research, 2002. To appear.
[3] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer-Verlag, New York, 2000.
[4] J. Kandola, J. Shawe-Taylor and N. Cristianini. Optimizing Kernel Alignment over
Combinations of Kernels. In Int Conf Machine Learning, 2002. In press.
[5] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M.I. Jordan. Learning the
kernel matrix with semidefinite programming. In Int Conf Machine Learning, 2002. In
press.
[6] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer-Verlag, 1991.
[7] O. Bousquet and D. J. L. Herrmann. Towards Structured Kernel Machines. Work in
Progress.
An Asynchronous Hidden Markov Model
for Audio-Visual Speech Recognition
Samy Bengio
Dalle Molle Institute for Perceptual Artificial Intelligence (IDIAP)
CP 592, rue du Simplon 4,
1920 Martigny, Switzerland
[email protected]://www.idiap.ch/-bengio
Abstract
This paper presents a novel Hidden Markov Model architecture to
model the joint probability of pairs of asynchronous sequences describing the same event. It is based on two other Markovian models,
namely Asynchronous Input/ Output Hidden Markov Models and
Pair Hidden Markov Models. An EM algorithm to train the model
is presented, as well as a Viterbi decoder that can be used to obtain the optimal state sequence as well as the alignment between
the two sequences. The model has been tested on an audio-visual
speech recognition task using the M2VTS database and yielded
robust performances under various noise conditions.
1 Introduction
Hidden Markov Models (HMMs) are statistical tools that have been used successfully in the last 30 years to model difficult tasks such as speech recognition [6] or
biological sequence analysis [4]. They are very well suited to handle discrete or continuous sequences of varying sizes. Moreover, an efficient training algorithm (EM)
is available, as well as an efficient decoding algorithm (Viterbi), which provides the
optimal sequence of states (and the corresponding sequence of high level events)
associated with a given sequence of low-level data.
On the other hand, multimodal information processing is currently a very challenging framework of applications including multimodal person authentication, multimodal speech recognition, multimodal event analyzers, etc. In that framework, the
same sequence of events is represented not only by a single sequence of data but
by a series of sequences of data, each of them coming eventually from a different
modality: video streams with various viewpoints, audio stream(s), etc.
One such task, which will be presented in this paper, is multimodal speech recognition using both a microphone and a camera recording a speaker simultaneously
while he (she) speaks. It is indeed well known that seeing the speaker's face in addition to hearing his (her) voice can often improve speech intelligibility, particularly
in noisy environments [7), mainly thanks to the complementarity of the visual and
acoustic signals. Previous solutions proposed for this task can be subdivided into
two categories [8]: early integration, where both signals are first modified to reach
the same frame rate and are then modeled jointly, or late integration, where the
signals are modeled separately and are combined later, during decoding. While in
the former solution, the alignment between the two sequences is decided a priori, in
the latter, there is no explicit learning of the joint probability of the two sequences.
An example of late integration is presented in [3], where the authors present a multistream approach where each stream is modeled by a different HMM, while decoding
is done on a combined HMM (with various combination approaches proposed) .
In this paper, we present a novel Asynchronous Hidden Markov Model (AHMM)
that can learn the joint probability of pairs of sequences of data representing the
same sequence of events, even when the events are not synchronized between the
sequences. In fact, the model enables to desynchronize the streams by temporarily
stretching one of them in order to obtain a better match between the corresponding frames . The model can thus be directly applied to the problem of audio-visual
speech recognition where sometimes lips start to move before any sound is heard
for instance. The paper is organized as follows: in the next section, the AHMM
model is presented, followed by the corresponding EM training and Viterbi decoding algorithms. Related models are then presented and implementation issues are
discussed. Finally, experiments on a audio-visual speech recognition task based on
the M2VTS database are presented, followed by a conclusion.
2 The Asynchronous Hidden Markov Model
For the sake of simplicity, let us present here the case where one is interested in
modeling the joint probability of 2 asynchronous sequences, denoted $x_1^T$ and $y_1^S$,
with $S \le T$ without loss of generality?.
We are thus interested in modeling $p(x_1^T, y_1^S)$. As it is intractable if we do it directly
by considering all possible combinations, we introduce a hidden variable q which
represents the state as in the classical HMM formulation, and which is synchronized
with the longest sequence. Let N be the number of states.
Moreover, in the model presented here, we always emit $x_t$ at time $t$ and sometimes
emit $y_s$ at time $t$. Let us first define $\epsilon(i, t) = P(\tau_t = s \mid \tau_{t-1} = s-1, q_t = i, x_1^t, y_1^s)$ as
the probability that the system emits the next observation of sequence $y$ at time $t$
while in state $i$. The additional hidden variable $\tau_t = s$ can be seen as the alignment
between $y$ and $q$ (and $x$, which is aligned with $q$). Hence, we model $p(x_1^T, y_1^S, q_1^T, \tau_1^T)$.
2.1 Likelihood Computation
Using classical HMM independence assumptions, a simple forward procedure can
be used to compute the joint likelihood of the two sequences, by introducing the
following $\alpha$ intermediate variable for each state and each possible alignment between
the sequences $x$ and $y$:

$$\alpha(i, s, t) = p(q_t = i, \tau_t = s, x_1^t, y_1^s) \qquad (1)$$

$$\alpha(i, s, t) = \epsilon(i, t)\, p(x_t, y_s \mid q_t = i) \sum_{j=1}^{N} P(q_t = i \mid q_{t-1} = j)\, \alpha(j, s-1, t-1)
\;+\; (1 - \epsilon(i, t))\, p(x_t \mid q_t = i) \sum_{j=1}^{N} P(q_t = i \mid q_{t-1} = j)\, \alpha(j, s, t-1)$$

?In fact, we assume that for all pairs of sequences $(x, y)$, the sequence $x$ is always at
least as long as the sequence $y$. If this is not the case, a straightforward extension of the
proposed model is then necessary.
which is very similar to the corresponding $\alpha$ variable used in normal HMMs?. It
can then be used to compute the joint likelihood of the two sequences as follows:

$$p(x_1^T, y_1^S) = \sum_{i=1}^{N} p(q_T = i, \tau_T = S, x_1^T, y_1^S) = \sum_{i=1}^{N} \alpha(i, S, T). \qquad (2)$$
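The forward recursion above can be sketched for a toy model with discrete emissions and a state-dependent constant $\epsilon(i)$ (in general $\epsilon(i,t)$ may also depend on $t$ and the observations). The dynamic program is checked against a brute-force sum over all state sequences and alignments; all model parameters below are made up.

```python
import itertools

def forward_likelihood(x, y, pi, A, eps, p_joint, p_x):
    """p(x_1^T, y_1^S) via the alpha recursion of the asynchronous HMM.
    alpha[i][s] at time t holds p(q_t=i, tau_t=s, x_1..t, y_1..s)."""
    T, S, N = len(x), len(y), len(pi)
    alpha = [[0.0] * (S + 1) for _ in range(N)]
    for i in range(N):                                  # t = 1
        alpha[i][0] = (1 - eps[i]) * p_x[i][x[0]] * pi[i]
        alpha[i][1] = eps[i] * p_joint[i][x[0]][y[0]] * pi[i]
    for t in range(1, T):
        new = [[0.0] * (S + 1) for _ in range(N)]
        for i in range(N):
            for s in range(S + 1):
                stay = sum(A[j][i] * alpha[j][s] for j in range(N))
                new[i][s] = (1 - eps[i]) * p_x[i][x[t]] * stay
                if s >= 1:
                    move = sum(A[j][i] * alpha[j][s - 1] for j in range(N))
                    new[i][s] += eps[i] * p_joint[i][x[t]][y[s - 1]] * move
        alpha = new
    return sum(alpha[i][S] for i in range(N))

def brute_force_likelihood(x, y, pi, A, eps, p_joint, p_x):
    """Same quantity by explicit summation over all state sequences q and all
    binary emission patterns e (e_t = 1 iff y advances at time t)."""
    T, S, N = len(x), len(y), len(pi)
    total = 0.0
    for q in itertools.product(range(N), repeat=T):
        for e in itertools.product((0, 1), repeat=T):
            if sum(e) != S:
                continue
            p = pi[q[0]]
            for t in range(1, T):
                p *= A[q[t - 1]][q[t]]
            s = 0
            for t in range(T):
                if e[t]:
                    s += 1
                    p *= eps[q[t]] * p_joint[q[t]][x[t]][y[s - 1]]
                else:
                    p *= (1 - eps[q[t]]) * p_x[q[t]][x[t]]
            total += p
    return total

# Made-up two-state model with binary observations.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]            # A[j][i] = P(q_t=i | q_{t-1}=j)
eps = [0.5, 0.2]                        # emit-on-y probability per state
p_x = [[0.8, 0.2], [0.3, 0.7]]          # p_x[i][x]
p_joint = [[[0.4, 0.2], [0.1, 0.3]],    # p_joint[i][x][y]
           [[0.25, 0.25], [0.25, 0.25]]]
x, y = [0, 1, 0], [1, 0]
lik = forward_likelihood(x, y, pi, A, eps, p_joint, p_x)
```

The agreement between the two computations is exactly the statement that the recursion marginalizes the joint $p(x_1^T, y_1^S, q_1^T, \tau_1^T)$ over all hidden paths.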
2.2 Viterbi Decoding
Using the same technique and replacing all the sums by max operators, a Viterbi
decoding algorithm can be derived in order to obtain the most probable path along
the sequence of states and alignments between $x$ and $y$:

$$V(i, s, t) = \max_{q_1^{t-1},\, \tau_1^{t-1}} p(q_t = i, \tau_t = s, x_1^t, y_1^s) \qquad (3)$$

$$V(i, s, t) = \max\Big( \epsilon(i, t)\, p(x_t, y_s \mid q_t = i) \max_j P(q_t = i \mid q_{t-1} = j)\, V(j, s-1, t-1),\;\;
(1 - \epsilon(i, t))\, p(x_t \mid q_t = i) \max_j P(q_t = i \mid q_{t-1} = j)\, V(j, s, t-1) \Big)$$

The best path is then obtained after having computed $V(i, S, T)$?, selecting the best final
state $i$ and backtracking along the best path that could reach it.
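The Viterbi recursion can be sketched on the same kind of toy model as the forward pass (discrete emissions, state-dependent constant $\epsilon(i)$, made-up parameters); its value is checked against a brute-force maximization over all state sequences and alignments.

```python
import itertools

def viterbi_value(x, y, pi, A, eps, p_joint, p_x):
    """Max over state sequences and alignments of the joint path probability."""
    T, S, N = len(x), len(y), len(pi)
    V = [[0.0] * (S + 1) for _ in range(N)]
    for i in range(N):                                  # t = 1
        V[i][0] = (1 - eps[i]) * p_x[i][x[0]] * pi[i]
        V[i][1] = eps[i] * p_joint[i][x[0]][y[0]] * pi[i]
    for t in range(1, T):
        new = [[0.0] * (S + 1) for _ in range(N)]
        for i in range(N):
            for s in range(S + 1):
                best = (1 - eps[i]) * p_x[i][x[t]] * \
                    max(A[j][i] * V[j][s] for j in range(N))
                if s >= 1:
                    cand = eps[i] * p_joint[i][x[t]][y[s - 1]] * \
                        max(A[j][i] * V[j][s - 1] for j in range(N))
                    best = max(best, cand)
                new[i][s] = best
        V = new
    return max(V[i][S] for i in range(N))

def brute_force_best(x, y, pi, A, eps, p_joint, p_x):
    T, S, N = len(x), len(y), len(pi)
    best = 0.0
    for q in itertools.product(range(N), repeat=T):
        for e in itertools.product((0, 1), repeat=T):
            if sum(e) != S:
                continue
            p = pi[q[0]]
            for t in range(1, T):
                p *= A[q[t - 1]][q[t]]
            s = 0
            for t in range(T):
                if e[t]:
                    s += 1
                    p *= eps[q[t]] * p_joint[q[t]][x[t]][y[s - 1]]
                else:
                    p *= (1 - eps[q[t]]) * p_x[q[t]][x[t]]
            best = max(best, p)
    return best

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
eps = [0.5, 0.2]
p_x = [[0.8, 0.2], [0.3, 0.7]]
p_joint = [[[0.4, 0.2], [0.1, 0.3]],
           [[0.25, 0.25], [0.25, 0.25]]]
x, y = [0, 1, 0], [1, 0]
v = viterbi_value(x, y, pi, A, eps, p_joint, p_x)
```

A full decoder would additionally store back-pointers at each $(i, s, t)$ cell in order to recover the best state sequence and alignment, not just their probability.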
2.3 An EM Training Algorithm
An EM training algorithm can also be derived in the same fashion as in classical
HMMs. We here sketch the resulting algorithm, without going into more details?.

Backward Step: Similarly to the forward step based on the $\alpha$ variable used to
compute the joint likelihood, a backward variable $\beta$ can also be derived as follows:

$$\beta(i, s, t) = p(x_{t+1}^T, y_{s+1}^S \mid q_t = i, \tau_t = s) \qquad (4)$$

$$\beta(i, s, t) = \sum_{j=1}^{N} \epsilon(j, t+1)\, p(x_{t+1}, y_{s+1} \mid q_{t+1} = j)\, P(q_{t+1} = j \mid q_t = i)\, \beta(j, s+1, t+1)
\;+\; \sum_{j=1}^{N} (1 - \epsilon(j, t+1))\, p(x_{t+1} \mid q_{t+1} = j)\, P(q_{t+1} = j \mid q_t = i)\, \beta(j, s, t+1)\,.$$
?The full derivations are not given in this paper but can be found in the appendix of [1].
?In the case where one is only interested in the best state sequence (no matter the
alignment), the solution is then to marginalize over all the alignments during decoding
(essentially keeping the sums on the alignments and the max on the state space). This
solution has not yet been tested.
?See the appendix of [1] for more details.
E-Step: Using both the forward and backward variables, one can compute the
posterior probabilities of the hidden variables of the system, namely the posterior
on the state when it emits on both sequences, the posterior on the state when it
emits on x only, and the posterior on transitions.
Let $\alpha^1(i, s, t)$ be the part of $\alpha(i, s, t)$ where state $i$ emits on $y$ at time $t$:

$$\alpha^1(i, s, t) = \epsilon(i, t)\, p(x_t, y_s \mid q_t = i) \sum_{j=1}^{N} P(q_t = i \mid q_{t-1} = j)\, \alpha(j, s-1, t-1) \qquad (5)$$

and similarly, let $\alpha^0(i, s, t)$ be the part of $\alpha(i, s, t)$ where state $i$ does not emit on
$y$ at time $t$:

$$\alpha^0(i, s, t) = (1 - \epsilon(i, t))\, p(x_t \mid q_t = i) \sum_{j=1}^{N} P(q_t = i \mid q_{t-1} = j)\, \alpha(j, s, t-1)\,. \qquad (6)$$
Then the posterior on state $i$ when it emits joint observations of sequences $x$ and
$y$ is

$$P(q_t = i, \tau_t = s \mid \tau_{t-1} = s-1, x_1^T, y_1^S) = \frac{\alpha^1(i, s, t)\, \beta(i, s, t)}{p(x_1^T, y_1^S)}\,, \qquad (7)$$

the posterior on state $i$ when it emits the next observation of sequence $x$ only is

$$P(q_t = i, \tau_t = s \mid \tau_{t-1} = s, x_1^T, y_1^S) = \frac{\alpha^0(i, s, t)\, \beta(i, s, t)}{p(x_1^T, y_1^S)}\,, \qquad (8)$$

and the posterior on the transition between states $i$ and $j$ is

$$P(q_t = i \mid q_{t-1} = j, x_1^T, y_1^S) = \frac{P(q_t = i \mid q_{t-1} = j)}{p(x_1^T, y_1^S)} \sum_{s=0}^{S} \Big[ \alpha(j, s-1, t-1)\, p(x_t, y_s \mid q_t = i)\, \epsilon(i, t)\, \beta(i, s, t)
\;+\; \alpha(j, s, t-1)\, p(x_t \mid q_t = i)\, (1 - \epsilon(i, t))\, \beta(i, s, t) \Big]. \qquad (9)$$
M-Step: The Maximization step is performed exactly as in normal HMMs: when
the distributions are modeled by exponential functions such as Gaussian Mixture Models, then an exact maximization can be performed using the posteriors.
Otherwise, a Generalized EM is performed by gradient ascent, back-propagating
the posteriors through the parameters of the distributions.
3 Related Models
The present AHMM model is related to the Pair HMM model [4], which was proposed to search for the best alignment between two DNA sequences. It was thus
designed and used mainly for discrete sequences. Moreover, the architecture of the
Pair HMM model is such that a given state is designed to always emit either one or
two vectors, while in the proposed AHMM model, each state can emit either
one or two vectors, depending on $\epsilon(i, t)$, which is learned. In fact, when $\epsilon(i, t)$ is
deterministic and solely depends on $i$, we can indeed recover the Pair HMM model
by slightly transforming the architecture.
It is also very similar to the asynchronous version of Input/Output HMMs [2], which
was proposed for speech recognition applications. The main difference here is that in
AHMMs both sequences are considered as output, while in Asynchronous IOHMMs
one of the sequences (the shorter one, the output) is conditioned on the other one
(the input). The resulting Viterbi decoding algorithm is thus different since in
Asynchronous IOHMMs one of the sequences, the input, is known during decoding,
which is not the case in AHMMs.
4  Implementation Issues

4.1  Time and Space Complexity
The proposed algorithms (either training or decoding) have a complexity of
O(N²ST), where N is the number of states (assuming the worst case of ergodic
connectivity), S is the length of sequence y, and T is the length of sequence
x. This can quickly become intractable if both x and y are longer than, say, 1000
frames. It can however be shortened when a priori knowledge about possible alignments between x and y is available. For instance, one can force the alignment between
x_t and y_s to be such that |t − 5s| < k, where k is a constant representing the maximum stretching allowed between x and y and should not depend on S nor T. In
that case, the complexity (both in time and space) becomes O(N²Tk), which is k
times the usual HMM training/decoding complexity.
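The effect of the banding constraint on the number of dynamic-programming cells can be checked directly; this is a quick illustration of ours, and the factor 5 below simply mirrors the example constraint |t − 5s| < k from the text.

```python
def dp_cells(T, S, k=None):
    """Count the (t, s) cells visited by the AHMM recursions.

    Without a band, every (t, s) pair is visited: S*T cells, the O(N^2*S*T)
    regime. With the band |t - 5s| < k, only a diagonal strip survives,
    bounded by T*k cells, i.e. the O(N^2*T*k) regime.
    """
    if k is None:
        return S * T
    return sum(1 for t in range(1, T + 1)
                 for s in range(1, S + 1)
                 if abs(t - 5 * s) < k)
```

For T = 1000 and S = 200, the full table has 200,000 cells, while a band of k = 25 leaves roughly a twentieth of that, independently of the number of states N.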
4.2  Distributions to Model
In order to implement this system, we thus need to model the following distributions:

• P(q_t=i | q_{t-1}=j): the transition distribution, as in normal HMMs;

• p(x_t | q_t=i): the emission distribution in the case where only x is emitted,
as in normal HMMs;

• p(x_t, y_s | q_t=i): the emission distribution in the case where both sequences
are emitted. This distribution could be implemented in various forms, depending on the assumptions made on the data:

  - x and y are independent given state i:

      p(x_t, y_s | q_t=i) = p(x_t | q_t=i) p(y_s | q_t=i)                       (10)

  - y is conditioned on x:

      p(x_t, y_s | q_t=i) = p(y_s | x_t, q_t=i) p(x_t | q_t=i)                  (11)

  - the joint probability is modeled directly, possibly forcing some common parameters of p(x_t | q_t=i) and p(x_t, y_s | q_t=i) to be shared.
    In the experiments described later in the paper, we have chosen the latter
    implementation, with no sharing except during initialization;

• ε(i, t) = P(τ_t=s | τ_{t-1}=s-1, q_t=i, x_1^T, y_1^S): the probability of emitting on
sequence y at time t in state i. Under various assumptions, this probability
could be made independent of i, of s, or of x_t and y_s. In the experiments described later in the paper, we
have chosen the latter implementation.
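The three factorizations of p(x_t, y_s | q_t=i) above can be sketched as log-density functions. Single diagonal Gaussians stand in for the paper's Gaussian mixtures, and the names and the linear-Gaussian link used for option 2 are our own assumptions.

```python
import numpy as np

def log_gauss(v, mean, var):
    """Log-density of a diagonal Gaussian; stands in for the paper's GMMs."""
    v, mean, var = map(np.asarray, (v, mean, var))
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

# Option 1 (eq. 10): x and y independent given the state.
def log_p_indep(x, y, st):
    return log_gauss(x, st["mx"], st["vx"]) + log_gauss(y, st["my"], st["vy"])

# Option 2 (eq. 11): y conditioned on x, here through a linear-Gaussian link.
def log_p_cond(x, y, st):
    mean_y = st["W"] @ np.asarray(x) + st["my"]
    return log_gauss(y, mean_y, st["vy"]) + log_gauss(x, st["mx"], st["vx"])

# Option 3: joint density over the concatenated vector [x; y]
# (the choice used in the paper's experiments).
def log_p_joint(x, y, st):
    xy = np.concatenate([np.asarray(x), np.asarray(y)])
    return log_gauss(xy, st["mxy"], st["vxy"])
```

When the joint parameters factor as the concatenation of the marginal ones, option 3 reduces to option 1, and option 2 reduces to option 1 when the link matrix W is zero.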
5  Experiments
Audio-visual speech recognition experiments were performed using the M2VTS
database [5], which contains 185 recordings of 37 subjects, each containing acoustic
and video signals of the subject pronouncing the French digits from zero to nine.
The video consisted of 286x360 pixel color images with a 25 Hz frame rate, while
the audio was recorded at 48 kHz using a 16 bit PCM coding. Although the M2VTS
database is one of the largest databases of its type, it is still relatively small compared to reference audio databases used in speech recognition. Hence, in order to
increase the significance level of the experimental results, a 5-fold cross-validation
method was used. Note that all the subjects always pronounced the same sequence
of words but this information was not used during recognition 5 .
The audio data was down-sampled to 8 kHz and every 10 ms a vector of 16 MFCC
coefficients and their first derivative, as well as the derivative of the log energy was
computed, for a total of 33 features. Each image of the video stream was coded
using 12 shape features and 12 intensity features, as described in [3]. The first
derivative of each of these features was also computed, for a total of 48 features .
The HMM topology was as follows: we used left-to-right HMMs for each instance
of the vocabulary, which consisted of the following 11 words: zero, un, deux trois,
quatre, cinq, six, sept, huit, neuf, silence. Each model had between 3 to 9 states
including non-emitting begin and end states.
In each emitting state, there were three distributions: p(x_t | q_t), the emission distribution of audio-only data, which consisted of a Gaussian mixture of 10 Gaussians
(of dimension 33); p(x_t, y_s | q_t), the joint emission distribution of audio and video
data, which also consisted of a Gaussian mixture of 10 Gaussians (of dimension
33 + 48 = 81); and ε(i, t), the probability that the system should emit on the video
sequence, which was implemented for these preliminary experiments as a simple
table.
Training was done using the EM algorithm described in the paper. However, in
order to keep the computational time tractable, a constraint was imposed in the
alignment between the audio and video streams: we did not consider alignments
where audio and video information were farther than 0.5 second from each other.
Comparisons were made between the AHMM (taking into account audio and video),
and a normal HMM taking into account either the audio or the video only. We also
compared the model with a normal HMM trained on both audio and video streams
manually synchronized (each frame of the video stream was repeated in multiple
copies in order to reach the same rate as the audio stream). Moreover, in order
to show the interest of robust multimodal speech recognition, we injected various
levels of noise in the audio stream during decoding (training was always done using
clean audio). The noise was taken from the Noisex database [9], and was injected
in order to reach signal-to-noise ratios of 10 dB, 5 dB and 0 dB.
Note that all the hyper-parameters of these systems, such as the number of Gaussians in the mixtures, the number of EM iterations, or the minimum value of the
variances of the Gaussians, were not tuned using the M2VTS dataset. They were
taken from a previously trained model on a different task, Numbers'95.
Figure 1 and Table 1 present the results. As it can be seen, the AHMM yielded
better results as soon as the noise level was significant (for clean data, the performance using the audio stream only was almost perfect, hence no enhancement was
expected). Moreover, it never deteriorated significantly (using a 95% confidence
interval) below the level of the video stream, no matter the level of noise in the
audio stream.
5. Nevertheless, it can be argued that transitions between words could have been learned
using the training data.
[Figure 1 plot: word error rate (%) versus noise level (0 dB to 10 dB) for four systems: audio HMM, audio+video HMM, audio+video AHMM, and video HMM.]
Figure 1: Word Error Rates (in percent, the lower the better), of various systems
under various noise conditions during decoding (from 15 to 0 dB additive noise).
The proposed model is the AHMM using both audio and video streams.
Observations    Model           WER (%) and 95% CI
                        15 dB         10 dB         5 dB          0 dB
audio           HMM     2.9 (± 2.4)   11.9 (± 4.7)  38.7 (± 7.1)  79.1 (± 5.9)
audio+video     HMM     21.5 (± 6.0)  28.1 (± 6.5)  35.3 (± 6.9)  45.4 (± 7.2)
audio+video     AHMM    4.8 (± 3.1)   11.4 (± 4.6)  22.3 (± 6.0)  41.1 (± 7.1)

Table 1: Word Error Rates (WER, in percent, the lower the better) and corresponding Confidence Intervals (CI, in parentheses) of various systems under various noise
conditions during decoding (from 15 to 0 dB additive noise). The proposed model
is the AHMM using both audio and video streams. An HMM using the clean video
data only obtains 39.6% WER (± 7.1).
An interesting side effect of the model is to provide an optimal alignment between
the audio and the video streams. Figure 2 shows the alignment obtained while
decoding sequence cd01 on data corrupted with 10 dB Noisex noise. It shows that the
rate between video and audio is far from being constant (it would have followed the
stepped line) and hence computing the joint probability using the AHMM appears
more informative than using a naive alignment and a normal HMM.
6  Conclusion
In this paper, we have presented a novel asynchronous HMM architecture to handle
multiple sequences of data representing the same sequence of events. The model was
inspired by two other well-known models, namely Pair HMMs and Asynchronous
IOHMMs. An EM training algorithm was derived as well as a Viterbi decoding
algorithm, and speech recognition experiments were performed on a multimodal
database, yielding significant improvements on noisy audio data. Various options for
implementing the model were proposed, but only the simplest ones were tested
in this paper; other solutions should thus be investigated soon. Moreover, other
applications of the model, such as multimodal authentication, should also be investigated.
Figure 2: Alignment obtained by the model between video and audio streams on
sequence cd01 corrupted with a 10 dB Noisex noise. The vertical lines show the
obtained segmentation between the words. The stepped line represents a constant
alignment.
Acknowledgments
This research has been partially carried out in the framework of the European
project LAVA, funded by the Swiss OFES project number 01.0412. The Swiss
NCCR project IM2 has also partly funded this research. The author would like to
thank Stephane Dupont for providing the extracted visual features and the experimental protocol used in the paper.
References

[1] S. Bengio. An asynchronous hidden markov model for audio-visual speech recognition.
Technical Report IDIAP-RR 02-26, IDIAP, 2002.

[2] S. Bengio and Y. Bengio. An EM algorithm for asynchronous input/output hidden
markov models. In Proceedings of the International Conference on Neural Information
Processing, ICONIP, Hong Kong, 1996.

[3] S. Dupont and J. Luettin. Audio-visual speech modelling for continuous speech recognition. IEEE Transactions on Multimedia, 2:141-151, 2000.

[4] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.

[5] S. Pigeon and L. Vandendorpe. The M2VTS multimodal face database (release 1.00). In
Proceedings of the First International Conference on Audio- and Video-based Biometric
Person Authentication ABVPA, 1997.

[6] Lawrence R. Rabiner. A tutorial on hidden markov models and selected applications
in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.

[7] W. H. Sumby and I. Pollack. Visual contributions to speech intelligibility in noise.
Journal of the Acoustical Society of America, 26:212-215, 1954.

[8] A. Q. Summerfield. Lipreading and audio-visual speech perception. Philosophical
Transactions of the Royal Society of London, Series B, 335:71-78, 1992.

[9] A. Varga, H. J. M. Steeneken, M. Tomlinson, and D. Jones. The Noisex-92 study on
the effect of additive noise on automatic speech recognition. Technical report, DRA
Speech Research Unit, 1992.
Prediction of Protein Topologies Using
Generalized IOHMMs and RNNs
Gianluca Pollastri and Pierre Baldi
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92697-3425
gpollast,[email protected]
Alessandro Vullo and Paolo Frasconi
Dipartimento di Sistemi e Informatica
Università di Firenze
Via di Santa Marta 3, 50139 Firenze, ITALY
vullo,[email protected]
Abstract
We develop and test new machine learning methods for the prediction of topological representations of protein structures in the form
of coarse- or fine-grained contact or distance maps that are translation and rotation invariant. The methods are based on generalized
input-output hidden Markov models (GIOHMMs) and generalized
recursive neural networks (GRNNs). The methods are used to predict topology directly in the fine-grained case and, in the coarsegrained case, indirectly by first learning how to score candidate
graphs and then using the scoring function to search the space of
possible configurations. Computer simulations show that the predictors achieve state-of-the-art performance.
1  Introduction: Protein Topology Prediction
Predicting the 3D structure of protein chains from the linear sequence of amino
acids is a fundamental open problem in computational molecular biology [1]. Any
approach to the problem must deal with the basic fact that protein structures are
translation and rotation invariant. To address this invariance, we have proposed a
machine learning approach to protein structure prediction [4] based on the prediction of topological representations of proteins, in the form of contact or distance
maps. The contact or distance map is a 2D representation of neighborhood relationships consisting of an adjacency matrix at some distance cutoff (typically in the
range of 6 to 12 Å), or a matrix of pairwise Euclidean distances. Fine-grained maps
are derived at the amino acid or even atomic level. Coarse maps are obtained by
looking at secondary structure elements, such as helices, and the distance between
their centers of gravity or, as in the simulations below, the minimal distances between their C? atoms. Reasonable methods for reconstructing 3D coordinates from
contact/distance maps have been developed in the NMR literature and elsewhere
[Figure 1 diagram: input units Ii, forward and backward hidden units Hi^F and Hi^B, and output units Oi.]
Figure 1: Bayesian network for bidirectional IOHMMs consisting of input units,
output units, and both forward and backward Markov chains of hidden states.
[14] using distance geometry and stochastic optimization techniques. Thus the main
focus here is on the more difficult task of contact map prediction.
Various algorithms for the prediction of contact maps have been developed, in particular using feedforward neural networks [6]. The best contact map predictor in the
literature and at the last CASP prediction experiment reports an average precision
[True Positives/(True Positives + False Positives)] of 21% for distant contacts, i.e.
with a linear distance of 8 amino acids or more [6], for fine-grained amino acid maps.
While this result is encouraging and well above chance level by a factor greater
than 6, it is still far from providing sufficient accuracy for reliable 3D structure
prediction. A key issue in this area is the amount of noise that can be tolerated in
a contact map prediction without compromising the 3D-reconstruction step. While
systematic tests in this area have not yet been published, preliminary results appear
to indicate that recovery of as little as half of the distant contacts may suffice for
proper reconstruction, at least for proteins up to 150 amino acids long (Rita Casadio and Piero Fariselli, private communication and oral presentation during CASP4
[10]).
It is important to realize that the input to a fine-grained contact map predictor
need not be confined to the sequence of amino acids only, but may also include
evolutionary information in the form of profiles derived by multiple alignment of
homologue proteins, or structural feature information, such as secondary structure
(alpha helices, beta strands, and coils), or solvent accessibility (surface/buried), derived by specialized predictors [12, 13]. In our approach, we use different GIOHMM
and GRNN strategies to predict both structural features and contact maps.
2  GIOHMM Architectures
Loosely speaking, GIOHMMs are Bayesian networks with input, hidden, and output
units that can be used to process complex data structures such as sequences, images,
trees, chemical compounds and so forth, built on work in, for instance, [5, 3, 7, 2, 11].
In general, the connectivity of the graphs associated with the hidden units matches
the structure of the data being processed. Often multiple copies of the same hidden
graph, but with different edge orientations, are used in the hidden layers to allow
direct propagation of information in all relevant directions.
[Figure 2 diagram: input plane at the bottom, four hidden planes (NE, NW, SW, SE) in the middle, and output plane at the top.]
Figure 2: 2D GIOHMM Bayesian network for processing two-dimensional objects
such as contact maps, with nodes regularly arranged in one input plane, one output
plane, and four hidden planes. In each hidden plane, nodes are arranged on a
square lattice, and all edges are oriented towards the corresponding cardinal corner.
Additional directed edges run vertically in column from the input plane to each
hidden plane, and from each hidden plane to the output plane.
To illustrate the general idea, a first example of GIOHMM is provided by the bidirectional IOHMMs (Figure 1) introduced in [2] to process sequences and predict
protein structural features, such as secondary structure. Unlike standard HMMs
or IOHMMS used, for instance in speech recognition, this architecture is based on
two hidden markov chains running in opposite directions to leverage the fact that
biological sequences are spatial objects rather than temporal sequences. Bidirectional IOHMMs have been used to derive a suite of structural feature predictors
[12, 13, 4] available through http://promoter.ics.uci.edu/BRNN-PRED/. These
predictors have accuracy rates in the 75-80% range on a per amino acid basis.
2.1  Direct Prediction of Topology
To predict contact maps, we use a 2D generalization of the previous 1D Bayesian
network. The basic version of this architecture (Figures 2) contains 6 layers of
units: input, output, and four hidden layers, one for each cardinal corner. Within
each column indexed by i and j, connections run from the input to the four hidden
units, and from the four hidden units to the output unit. In addition, the hidden
units in each hidden layer are arranged on a square or triangular lattice, with all
the edges oriented towards the corresponding cardinal corner. Thus the parameters
of this two-dimensional GIOHMM, in the square lattice case, are the conditional
probability distributions:

  P(O_{i,j} | I_{i,j}, H^{NE}_{i,j}, H^{NW}_{i,j}, H^{SW}_{i,j}, H^{SE}_{i,j})

  P(H^{NE}_{i,j} | I_{i,j}, H^{NE}_{i-1,j}, H^{NE}_{i,j-1})

  P(H^{NW}_{i,j} | I_{i,j}, H^{NW}_{i+1,j}, H^{NW}_{i,j-1})                      (1)

  P(H^{SW}_{i,j} | I_{i,j}, H^{SW}_{i+1,j}, H^{SW}_{i,j+1})

  P(H^{SE}_{i,j} | I_{i,j}, H^{SE}_{i-1,j}, H^{SE}_{i,j+1})
In a contact map prediction at the amino acid level, for instance, the (i, j) output
represents the probability of whether amino acids i and j are in contact or not.
This prediction depends directly on the (i, j) input and the four-hidden units in
the same column, associated with omni-directional contextual propagation in the
hidden planes. In the simulations reported below, we use a more elaborated input
consisting of a 20 × 20 probability matrix over amino acid pairs derived from a
multiple alignment of the given protein sequence and its homologues, as well as
the structural features of the corresponding amino acids, including their secondary
structure classification and their relative exposure to the solvent, derived from our
corresponding predictors.
It should be clear how GIOHMM ideas can be generalized to other data structures
and problems in many ways. In the case of 3D data, for instance, a standard
GIOHMM would have an input cube, an output cube, and up to 8 cubes of hidden
units, one for each corner with connections inside each hidden cube oriented towards
the corresponding corner. In the case of data with an underlying tree structure, the
hidden layers would correspond to copies of the same tree with different orientations
and so forth. Thus a fundamental advantage of GIOHMMs is that they can process
a wide range of data structures of variable sizes and dimensions.
2.2  Indirect Prediction of Topology
Although GIOHMMs allow flexible integration of contextual information over ranges
that often exceed what can be achieved, for instance, with fixed-input neural networks, the models described above still suffer from the fact that the connections
remain local and therefore long-ranged propagation of information during learning
remains difficult. Introduction of large numbers of long-ranged connections is computationally intractable but in principle not necessary since the number of contacts
in proteins is known to grow linearly with the length of the protein, and hence
connectivity is inherently sparse. The difficulty of course is that the location of the
long-ranged contacts is not known.
To address this problem, we have developed also a complementary GIOHMM approach described in Figure 3 where a candidate graph structure is proposed in the
hidden layers of the GIOHMM, with the two different orientations naturally associated with a protein sequence. Thus the hidden graphs change with each protein. In
principle the output ought to be a single unit (Figure 3b) which directly computes
a global score for the candidate structure presented in the hidden layer. In order
to cope with long-ranged dependencies, however, it is preferable to compute a set
of local scores (Figure 3c), one for each vertex, and combine the local scores into a
global score by averaging.
More specifically, consider a true topology represented by the undirected contact
graph G* = (V, E*), and a candidate undirected prediction graph G = (V, E). A
global measure of how well E approximates E* is provided by the information-retrieval F1 score defined by the normalized edge-overlap F1 = 2|E ∩ E*|/(|E| +
|E*|) = 2PR/(P + R), where P = |E ∩ E*|/|E| is the precision (or specificity) and
R = |E ∩ E*|/|E*| is the recall (or sensitivity) measure. Obviously, 0 ≤ F1 ≤ 1
and F1 = 1 if and only if E = E*. The scoring function F1 has the property of
being monotone in the sense that if |E| = |E'| then F1(E) < F1(E') if and only if
|E ∩ E*| < |E' ∩ E*|. Furthermore, if E' = E ∪ {e} where e is an edge in E* but
not in E, then F1(E') > F1(E). Monotonicity is important to guide the search in
the space of possible topologies. It is easy to check that a simple search algorithm
based on F1 takes on the order of O(|V|³) steps to find E*, basically by trying all
possible edges one after the other. The problem then is to learn F1, or rather a
good approximation to F1.
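The edge-overlap score and the monotone greedy search can be sketched as follows; here the true F1 serves as the scoring oracle purely for illustration, whereas in the actual system a learned approximation takes its place.

```python
from itertools import combinations

def f1_score_edges(E, E_true):
    """F1 = 2|E ∩ E*| / (|E| + |E*|); edges are frozensets {u, v}."""
    if not E and not E_true:
        return 1.0
    return 2 * len(E & E_true) / (len(E) + len(E_true))

def greedy_search(V, E_true, score=f1_score_edges):
    """Add, at each step, the single edge that most improves the score.

    Each step tries O(|V|^2) candidate edges and the number of steps grows
    with |E*| = O(|V|), hence the O(|V|^3) edge trials mentioned in the text.
    With a monotone score this recovers E* exactly.
    """
    E = set()
    while True:
        best, best_s = None, score(E, E_true)
        for e in map(frozenset, combinations(V, 2)):
            if e in E:
                continue
            s = score(E | {e}, E_true)
            if s > best_s:
                best, best_s = e, s
        if best is None:        # no single edge improves the score: stop
            return E
        E.add(best)
```

Because adding a true missing edge always raises F1 while adding a wrong edge never does, the search terminates exactly at E = E*.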
To approximate F1 , we first consider a similar local measure Fv by considering the
[Figure 3 diagram: (a) a target contact graph; (b) hidden graphs with inputs I(v), hidden variables H^F(v) and H^B(v), and a single global output O; (c) the same with one local output O(v) per vertex.]
Figure 3: Indirect prediction of contact maps. (a) target contact graph to be
predicted. (b) GIOHMM with two hidden layers: the two hidden layers correspond
to two copies of the same candidate graph oriented in opposite directions from one
end of the protein to the other end. The single output O is the global score of how
well the candidate graph approximates the true contact map. (c) Similar to (b) but
with a local score O(v) at each vertex. The local scores can be averaged to produce
a global score. In (b) and (c), I(v) represents the input for vertex v, and H^F(v) and
H^B(v) are the corresponding hidden variables.
set E_v of edges adjacent to vertex v, and F_v = 2|E_v ∩ E_v*| / (|E_v| + |E_v*|), with
the global average F̄ = Σ_v F_v / |V|. If n and n* are the average degrees of G and G*,
it can be shown that:

  F1 = (1/|V|) Σ_v 2|E_v ∩ E*| / (n + n*)   and
  F̄  = (1/|V|) Σ_v 2|E_v ∩ E*| / (n + ν_v + n* + ν*_v)                          (2)

where n + ν_v (resp. n* + ν*_v) is the degree of v in G (resp. in G*). In particular, if G
and G* are regular graphs, then F1(E) = F̄(E), so that F̄ is a good approximation
to F1. In the contact map regime, where the number of contacts grows linearly with
the length of the sequence, we should have in general |E| ≈ |E*| ≈ (1 + α)|V|, so
that each node on average has n = n* = 2(1 + α) edges. The value of α depends of
course on the neighborhood cutoff.
As in reinforcement learning, to learn the scoring function one is faced with the
problem of generating good training sets in a high dimensional space, where the
states are the topologies (graphs), and the policies are algorithms for adding a
single edge to a given graph. In the simulations we adopt several different strategies including static and dynamic generation. Within dynamic generation we use
three exploration strategies: random exploration (successor graph chosen at random), pure exploitation (successor graph maximizing the current scoring function),
and semi-uniform exploitation to find a balance between exploration and exploitation [with probability ε (resp. 1 − ε) we choose random exploration (resp. pure
exploitation)].
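A single step of the semi-uniform strategy can be sketched as an ε-greedy choice over candidate successors; `scorer` stands for the learned approximation of F1 and is a placeholder of ours.

```python
import random

def semi_uniform_step(E, candidate_edges, scorer, eps=0.4, rng=random):
    """Pick the next edge to add to graph E.

    With probability eps, explore: choose a fresh edge uniformly at random.
    Otherwise, exploit: take the edge whose addition scores highest under
    the (learned) scoring function. eps=0 gives pure exploitation and
    eps=1 gives pure random exploration.
    """
    frontier = [e for e in candidate_edges if e not in E]
    if not frontier:
        return None
    if rng.random() < eps:
        return rng.choice(frontier)                      # exploration
    return max(frontier, key=lambda e: scorer(E | {e}))  # exploitation
```

Calling this repeatedly, adding the returned edge each time, generates the successor graphs that serve as training states for the scorer.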
3  GRNN Architectures
Inference and learning in the protein GIOHMMs we have described is computationally intensive due to the large number of undirected loops they contain. This
problem can be addressed using a neural network reparameterization assuming that:
(a) all the nodes in the graphs are associated with a deterministic vector (note that
in the case of the output nodes this vector can represent a probability distribution
so that the overall model remains probabilistic); (b) each vector is a deterministic
function of its parents; (c) each function is parameterized using a neural network (or
some other class of approximators); and (d) weight-sharing or stationarity is used
between similar neural networks in the model. For example, in the 2D GIOHMM
contact map predictor, we can use a total of 5 neural networks to recursively compute the four hidden states and the output in each column in the form:
  O_{i,j} = N_O(I_{i,j}, H^{NE}_{i,j}, H^{NW}_{i,j}, H^{SW}_{i,j}, H^{SE}_{i,j})

  H^{NE}_{i,j} = N_{NE}(I_{i,j}, H^{NE}_{i-1,j}, H^{NE}_{i,j-1})

  H^{NW}_{i,j} = N_{NW}(I_{i,j}, H^{NW}_{i+1,j}, H^{NW}_{i,j-1})                 (3)

  H^{SW}_{i,j} = N_{SW}(I_{i,j}, H^{SW}_{i+1,j}, H^{SW}_{i,j+1})

  H^{SE}_{i,j} = N_{SE}(I_{i,j}, H^{SE}_{i-1,j}, H^{SE}_{i,j+1})
In the NE plane, for instance, the boundary conditions are set to H^{NE}_{i,j} = 0 for i = 0
or j = 0. The activity vector associated with the hidden unit H^{NE}_{i,j} depends on the
local input I_{i,j} and the activity vectors of the units H^{NE}_{i-1,j} and H^{NE}_{i,j-1}. Activity
in the NE plane can be propagated row by row, West to East, and from the first row
in NE plane can be propagated row by row, West to East, and from the first row
to the last (from South to North), or column by column South to North, and from
the first column to the last. These GRNN architectures can be trained by gradient
descent by unfolding the structures in space, leveraging the acyclic nature of the
underlying GIOHMMs.
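The column-by-column propagation of eq. (3) can be sketched as follows; `make_f` is a toy stand-in of ours for the shared network N_NE, and only one of the four symmetric hidden planes is shown.

```python
import numpy as np

def make_f(k, d, seed=0):
    """A toy stand-in for the shared network N_NE: fixed affine map + tanh."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((k, d + 2 * k)) * 0.1
    return lambda x, h_south, h_west: np.tanh(
        W @ np.concatenate([x, h_south, h_west]))

def propagate_ne(I, f_ne, k):
    """NE hidden plane of eq. (3): H[i, j] = f_ne(I[i, j], H[i-1, j], H[i, j-1]),
    with zero boundary vectors for i = 0 or j = 0. Rows are filled from South
    to North and, within each row, from West to East; the other three planes
    are symmetric."""
    n, m = I.shape[:2]
    H = np.zeros((n + 1, m + 1, k))  # index 0 holds the zero boundary
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            H[i, j] = f_ne(I[i - 1, j - 1], H[i - 1, j], H[i, j - 1])
    return H[1:, 1:]
```

Because the dependency graph is acyclic, each cell is computed exactly once, and unfolding this loop in space is what makes gradient-descent training of the shared weights possible.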
4  Data
Many data sets are available or can be constructed for training and testing purposes,
as described in the references. The data sets used in the present simulations are
extracted from the publicly available Protein Data Bank (PDB) and then redundancy reduced, or from the non-homologous subset of PDB Select (ftp://ftp.emblheidelberg.de/pub/databases/). In addition, we typically exclude structures with
poor resolution (less than 2.5-3 Å), sequences containing less than 30 amino acids,
and structures containing multiple sequences or sequences with chain breaks. For
coarse contact maps, we use the DSSP program [9] (CMBI version) to assign secondary structures and we remove also sequences for which DSSP crashes. The
results we report for fine-grained contact maps are derived using 424 proteins with
lengths in the 30-200 range for training and an additional non-homologous set of
48 proteins in the same length range for testing. For the coarse contact map, we
use a set of 587 proteins of length less than 300. Because the average length of a
secondary structure element is slightly above 7, the size of a coarse map is roughly
2% the size of the corresponding amino acid map.
5  Simulation Results and Conclusions
We have trained several 2D GIOHMM/GRNN models on the direct prediction of
fine-grained contact maps. Training of a single model typically takes on the order of
a week on a fast workstation. A sample of validation results is reported in Table 1 for
four different distance cutoffs. Overall percentages of correctly predicted contacts
Table 1: Direct prediction of amino acid contact maps. Column 1: four distance
cutoffs. Columns 2, 3, and 4: overall percentages of amino acids correctly classified
as contacts, non-contacts, and in total. Column 5: precision percentage for distant
contacts (|i − j| ≥ 8) with a threshold of 0.5. Single-model results except for the last
line, corresponding to an ensemble of 5 models.

Cutoff   Contact   Non-Contact   Total   Precision (P)
6 Å      .714      .998          .985    .594
8 Å      .638      .998          .970    .670
10 Å     .512      .993          .931    .557
12 Å     .433      .987          .878    .549
12 Å     .445      .990          .883    .717
and non-contacts at all linear distances, as well as precision results for distant
contacts (|i - j| >= 8) are reported for a single GIOHMM/GRNN model. The
model has k = 14 hidden units in the hidden and output layers of the four hidden
networks, as well as in the hidden layer of the output network. In the last row, we
also report as an example the results obtained at 12 Å by an ensemble of 5 networks
with k = 11, 12, 13, 14 and 15. Note that precision for distant contacts exceeds all
previously reported results and is well above 50%.
For the prediction of coarse-grained contact maps, we use the indirect GIOHMM/GRNN strategy and compare different exploration/exploitation strategies: random exploration, pure exploitation, and their convex combination (semi-uniform exploitation). In the semi-uniform case we set the probability of random
uniform exploration to 0.4. In addition, we also try a fourth hybrid strategy in
which the search proceeds greedily (i.e. the best successor is chosen at each step,
as in pure exploitation), but the network is trained by randomly sub-sampling the
successors of the current state. Eight numerical features encode the input label
of each node: one-hot encoding of secondary structure classes; normalized linear
distances from the N to C terminus; average, maximum and minimum hydrophobic
character of the segment (based on the Kyte-Doolittle scale with a moving window
of length 7). A sample of results obtained with 5-fold cross-validation is shown in
Table 2. Hidden state vectors have dimension k = 5 with no hidden layers. For each
strategy we measure performance by means of several indices: micro- and macro-averaged precision (mP, MP), recall (mR, MR), and F1 measure (mF1, MF1).
Micro-averages are derived based on each pair of secondary structure elements in
each protein, whereas macro-averages are obtained on a per-protein basis, by first
computing precision and recall for each protein, and then averaging over the set of
all proteins. In addition, we also measure the micro and macro averages for specificity, in the sense of the percentage of correct predictions for non-contacts (mP(nc), MP(nc)). Note the tradeoffs between precision and recall across the training methods, with the hybrid method achieving the best F1 results.
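The micro vs. macro distinction described above can be made concrete with a short sketch (the per-protein count tuples and function names are illustrative, not from the papers' code):

```python
def prf1(tp: int, fp: int, fn: int):
    """Precision, recall and F1 from raw counts (0 when undefined)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def micro_macro(per_protein_counts):
    """per_protein_counts: list of (tp, fp, fn) tuples, one per protein.
    Micro-averages pool the counts over all proteins before computing P/R/F1;
    macro-averages compute P/R/F1 per protein and then average the results."""
    tp = sum(c[0] for c in per_protein_counts)
    fp = sum(c[1] for c in per_protein_counts)
    fn = sum(c[2] for c in per_protein_counts)
    micro = prf1(tp, fp, fn)
    per = [prf1(*c) for c in per_protein_counts]
    macro = tuple(sum(x[i] for x in per) / len(per) for i in range(3))
    return micro, macro
```

The two averages diverge whenever proteins differ in size or difficulty, which is why Table 2 reports both.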
Table 2: Indirect prediction of coarse contact maps with dynamic sampling.
Strategy             mP     mP(nc)   mR     mF1    MP     MP(nc)   MR     MF1
Random exploration   .715   .769     .418   .518   .767   .709     .469   .574
Semi-uniform         .454   .787     .631   .526   .507   .767     .702   .588
Pure exploitation    .431   .806     .726   .539   .481   .793     .787   .596
Hybrid               .417   .834     .790   .546   .474   .821     .843   .607
We have presented two approaches, based on a very general IOHMM/RNN framework, that achieve state-of-the-art performance in the prediction of protein contact
maps at fine and coarse-grained levels of resolution. In principle both methods can
be applied to both resolution levels, although the indirect prediction is computationally too demanding for fine-grained prediction of large proteins. Several extensions
are currently under development, including the integration of these methods into
complete 3D structure predictors. While these systems require long training periods, once trained they can rapidly sift through large proteomic data sets.
Acknowledgments
The work of PB and GP is supported by a Laurel Wilkening Faculty Innovation
award and awards from NIH, BREP, Sun Microsystems, and the California Institute
for Telecommunications and Information Technology. The work of PF and AV is
partially supported by a MURST grant.
References
[1] D. Baker and A. Sali. Protein structure prediction and structural genomics. Science,
294:93-96, 2001.
[2] P. Baldi, S. Brunak, P. Frasconi, G. Soda, and G. Pollastri. Exploiting the past and
the future in protein secondary structure prediction. Bioinformatics, 15(11):937-946,
1999.
[3] P. Baldi and Y. Chauvin. Hybrid modeling, HMM/NN architectures, and protein
applications. Neural Computation, 8(7):1541-1565, 1996.
[4] P. Baldi and G. Pollastri. Machine learning structural and functional proteomics.
IEEE Intelligent Systems. Special Issue on Intelligent Systems in Biology, 17(2), 2002.
[5] Y. Bengio and P. Frasconi. Input-output HMMs for sequence processing. IEEE
Trans. on Neural Networks, 7:1231-1249, 1996.
[6] P. Fariselli, O. Olmea, A. Valencia, and R. Casadio. Prediction of contact maps with
neural networks and correlated mutations. Protein Engineering, 14:835-843, 2001.
[7] P. Frasconi, M. Gori, and A. Sperduti. A general framework for adaptive processing
of data structures. IEEE Trans. on Neural Networks, 9:768-786, 1998.
[8] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning,
29:245-273, 1997.
[9] W. Kabsch and C. Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers, 22:2577-2637,
1983.
[10] A. M. Lesk, L. Lo Conte, and T. J. P. Hubbard. Assessment of novel fold targets
in CASP4: predictions of three-dimensional structures, secondary structures, and
interresidue contacts. Proteins, 45, S5:98-118, 2001.
[11] G. Pollastri and P. Baldi. Prediction of contact maps by GIOHMMs and recurrent
neural networks using lateral propagation from all four cardinal corners. Proceedings
of the 2002 ISMB (Intelligent Systems for Molecular Biology) Conference. Bioinformatics,
18, S1:62-70, 2002.
[12] G. Pollastri, D. Przybylski, B. Rost, and P. Baldi. Improving the prediction of protein
secondary structure in three and eight classes using recurrent neural networks and
profiles. Proteins, 47:228-235, 2002.
[13] G. Pollastri, P. Baldi, P. Fariselli, and R. Casadio. Prediction of coordination number
and relative solvent accessibility in proteins. Proteins, 47:142-153, 2002.
[14] M. Vendruscolo, E. Kussell, and E. Domany. Recovery of protein structure from
contact maps. Folding and Design, 2:295-306, 1997.
Speeding up the Parti-Game Algorithm
Maxim Likhachev
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Sven Koenig
College of Computing
Georgia Institute of Technology
Atlanta, GA 30312-0280
[email protected]
Abstract
In this paper, we introduce an efficient replanning algorithm for nondeterministic domains, namely what we believe to be the first incremental
heuristic minimax search algorithm. We apply it to the dynamic discretization of continuous domains, resulting in an efficient implementation of the parti-game reinforcement-learning algorithm for control in
high-dimensional domains.
1 Introduction
We recently developed Lifelong Planning A* (LPA*), a search algorithm for deterministic
domains that combines incremental and heuristic search to reduce its search time [1]. Incremental search reuses information from previous searches to find solutions to series of
similar search tasks faster than is possible by solving each search task from scratch [2],
while heuristic search uses distance estimates to focus the search and solve search problems faster than uninformed search. In this paper, we extend LPA* to nondeterministic
domains. We believe that the resulting search algorithm, called Minimax LPA*, is the first
incremental heuristic minimax search algorithm. We apply it to the dynamic discretization
of continuous domains, resulting in an efficient implementation of the popular parti-game
algorithm [3]. Our first experiments suggest that this implementation of the parti-game
algorithm can be an order of magnitude faster in two-dimensional domains than one with
uninformed search from scratch and thus might allow the parti-game algorithm to scale up
to larger domains. There also exist other ways of decreasing the amount of search performed by the parti-game algorithm. We demonstrate some advantages of Minimax LPA*
over Prioritized Sweeping [4] in [5] but it is future work to compare it with the algorithms
developed in [6].
2 Parti-Game Algorithm
The objective of the parti-game algorithm is to move an agent from given start coordinates
to given goal coordinates in continuous and potentially high-dimensional domains with obstacles of arbitrary shapes. It is popular because it is simple, efficient, and applies to a broad
range of control problems. To solve these problems, one can first discretize the domains
and then use conventional search algorithms to determine plans that move the agent to the
goal coordinates. However, uniform discretizations can prevent one from finding a plan if
[Figure 1, panels (a)-(f): the successive discretizations of the example domain with the agent A, and the corresponding state spaces annotated with the action-outcome costs and the gd-, h-, g-, and rhs-values of the states.]
Figure 1: Example behavior of the parti-game algorithm
they are too coarse-grained (for example, because the resolution prevents one from noticing small gaps between obstacles), and result in large state spaces that cannot be searched efficiently if they are too fine-grained. The parti-game algorithm solves this dilemma by
starting with a coarse discretization and refines it during execution only when and where it
is needed (for example, around obstacles), resulting in a nonuniform discretization.
We use a simple two-dimensional robot navigation domain to illustrate the behavior of the
parti-game algorithm. Figure 1(a) shows the initial discretization of our example domain
into 12 large cells, together with the start coordinates of the agent (A) and the goal region (the cell containing the goal coordinates). We assume that the agent can always attempt to move towards the center of each adjacent cell (that is, each cell that its current cell shares a border line with). Thus, it can initially attempt to move towards the centers of three adjacent cells, as shown in the figure.
Figure 1(b) shows the state space that corresponds to the discretized domain under this assumption. Each state corresponds to a cell, and each action corresponds to a movement option. The parti-game algorithm initially ignores obstacles and makes the optimistic (and sometimes wrong) assumption that each action deterministically reaches the intended state, for example, that the agent indeed reaches an adjacent cell if it is somewhere in its current cell and moves towards the center of that adjacent cell. The cost of an action outcome approximates the Euclidean distance from the center of the old cell of the agent to the center of its new cell.1 (The cost of the action
1
We compute both the costs of action outcomes and the heuristics of states using an imaginary
uniform grid, shown in gray in Figures 1(a) and (e), whose cell size corresponds to the resolution
limit of the parti-game algorithm. The cost of an action outcome is then computed as the maximum
of the absolute values of the differences of the x and y coordinates between the imaginary grid cell
Figure 2: Example of a nondeterministic action
outcome is infinity if the old and new cells are identical, since the action then cannot be part of a plan that minimizes the worst-case plan-execution cost from the current state of the agent to the goal state.) The parti-game algorithm then determines whether the minimax goal distance of the current state of the agent is finite. If so, the parti-game algorithm repeatedly chooses the action that minimizes the worst-case plan-execution cost, until the agent reaches the goal region or observes additional action outcomes. Initially, the minimax goal distance of the agent's state is finite, and the agent minimizes the worst-case plan-execution cost by moving towards the center of one of two equally good adjacent cells. Assume that it decides to move towards the center of one of them. The agent always continues to move until it either gets blocked by an obstacle or enters a new cell. Here, it immediately gets blocked by an obstacle. When the agent observes additional action outcomes, it adds them to the state space. Thus, it now assumes that it can end up in either of two cells if it is somewhere in its old cell and moves towards the center of the chosen cell. The same scenario repeats when the agent first attempts to move towards the center of the second adjacent cell and then attempts to move towards the center of the third one, but gets blocked twice by the obstacle. Figure 1(c) shows the state space after the first two attempted moves, and Figure 1(d) shows the state space after the third attempted move. The minimax goal distance of the current state of the agent is now infinite.
We say that a state is unsolvable if an agent in it is not guaranteed to reach the goal state with finite plan-execution cost; the current state of the agent is now unsolvable. In this case, the parti-game algorithm refines the
discretization by splitting all solvable cells that border unsolvable cells and all unsolvable
cells that border solvable cells. Each cell is split into two cells perpendicular to its longest
axis. (The axis of the split is chosen randomly for square cells.) Figure 1(e) shows the new
discretization of the domain. The parti-game algorithm then removes those states (and their
actions) from the state space that correspond to the old cells and adds states (and actions)
for the new cells, again making the optimistic assumption that each action for the new states
deterministically reaches the intended state. This ensures that the minimax goal distance of the current state of the agent becomes finite. Figure 1(f) shows the resulting state space. The parti-game algorithm now repeats the process until either the agent reaches the goal region or the domain cannot be discretized any further because the resolution limit is reached.
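The splitting rule described above can be sketched as follows (the tuple-of-intervals cell representation is our own illustrative choice; the paper does not specify the data structures):

```python
import random

def split_cell(cell, rng=random.Random(0)):
    """Split an axis-aligned cell ((xmin, xmax), (ymin, ymax)) into two halves
    perpendicular to its longest axis; the axis is chosen randomly for squares."""
    (xmin, xmax), (ymin, ymax) = cell
    dx, dy = xmax - xmin, ymax - ymin
    if dx > dy:
        axis = 0
    elif dy > dx:
        axis = 1
    else:
        axis = rng.randrange(2)          # square cell: pick the split axis at random
    if axis == 0:                        # split perpendicular to the longer x-extent
        mid = (xmin + xmax) / 2.0
        return ((xmin, mid), (ymin, ymax)), ((mid, xmax), (ymin, ymax))
    mid = (ymin + ymax) / 2.0            # split perpendicular to the longer y-extent
    return ((xmin, xmax), (ymin, mid)), ((xmin, xmax), (mid, ymax))
```

Repeated splitting around obstacles yields the nonuniform discretizations of Figures 1(e) and (f).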
If all actions either did indeed deterministically reach their intended states or did not change
the state of the agent at all (as in the example from Figure 1), then the parti-game algorithm
could determine the minimax goal distances of the states with a deterministic search algorithm after it has removed all actions that have an action outcome that leaves the state
unchanged (since these actions cannot be part of a plan with minimal worst-case planexecution cost). However, actions can have additional outcomes, as Figure 2 illustrates.
For example, an agent cannot only end up in two different cells but also in a third one if it moves from somewhere in its current cell towards the center of an adjacent cell. The parti-game algorithm therefore needs to determine the minimax goal distances of the states with a minimax search algorithm. Furthermore, the parti-game algorithm repeatedly determines plans that minimize the worst-case plan-execution cost from the current state of the agent to the goal state.
(Footnote 1, continued:) ...that contains the center of the new and the old state of the agent. Similarly, the heuristic of a state
is computed as the maximum of the absolute differences of the x and y coordinates between the
imaginary grid cell that contains the center of the state of the agent and the imaginary grid cell that
contains the center of the state in question. Note that the grid is imaginary and never needs to be
constructed. Furthermore, it is only used to compute the costs and heuristics and does not restrict
either the placement of obstacles or the movement of the agent.
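The footnote's cost computation on the imaginary grid can be sketched as follows (the function names and the 2D point representation are assumptions; only the max-of-absolute-differences rule and the infinite self-loop cost come from the text):

```python
import math

def grid_cell(point, resolution):
    """Index of the imaginary uniform grid cell containing a 2D point; the grid
    never needs to be built explicitly, indices are computed on demand."""
    return (int(point[0] // resolution), int(point[1] // resolution))

def outcome_cost(old_center, new_center, resolution):
    """Cost of an action outcome: the maximum of the absolute differences of the
    x and y cell indices of the old and new cell centers; infinity if the cells
    are identical, since such an action cannot be part of a worst-case-optimal plan."""
    o, n = grid_cell(old_center, resolution), grid_cell(new_center, resolution)
    if o == n:
        return math.inf
    return max(abs(o[0] - n[0]), abs(o[1] - n[1]))
```

With a resolution of one unit, a move between cell centers six grid cells apart costs 6, matching the edge costs shown in Figure 1.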
The pseudocode uses the following functions to manage the priority queue U: U.Top() returns a state with the smallest priority of all states in U. U.TopKey() returns the smallest priority of all states in U. (If U is empty, then U.TopKey() returns [infinity; infinity].) U.Pop() deletes the state with the smallest priority in U and returns the state. U.Insert(s, k) inserts s into U with priority k. U.Update(s, k) changes the priority of s in U to k. (It does nothing if the current priority of s already equals k.) Finally, U.Remove(s) removes s from U.
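A standard way to realize such a queue is a binary heap with lazy deletion; the following sketch (our own, not the authors' implementation) supports the operations listed above, with Python tuples providing the lexicographic key comparison used by the algorithm:

```python
import heapq
import itertools

class PriorityQueue:
    """Priority queue keyed by lexicographically ordered tuples. Stale heap
    entries are marked dead on Remove/Update and dropped lazily; a running
    counter breaks ties between equal keys deterministically."""
    def __init__(self):
        self._heap, self._entry, self._count = [], {}, itertools.count()

    def insert(self, state, key):
        entry = [key, next(self._count), state, True]   # True = entry is live
        self._entry[state] = entry
        heapq.heappush(self._heap, entry)

    def remove(self, state):
        self._entry.pop(state)[3] = False               # mark stale, drop lazily

    def update(self, state, key):                       # assumes state is queued
        if self._entry[state][0] != key:
            self.remove(state)
            self.insert(state, key)

    def _clean(self):
        while self._heap and not self._heap[0][3]:
            heapq.heappop(self._heap)

    def top_key(self):
        self._clean()
        return self._heap[0][0] if self._heap else (float("inf"), float("inf"))

    def pop(self):
        self._clean()
        entry = heapq.heappop(self._heap)
        del self._entry[entry[2]]
        return entry[2]

    def __contains__(self, state):
        return state in self._entry
```

Because list entries compare element-wise, `(18, 6)` sorts before `(18, 12)`, exactly the lexicographic ordering the keys of Minimax LPA* require.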
procedure CalculateKey(s)
01  return [min(g(s), rhs(s)) + h(s); min(g(s), rhs(s))];

procedure Initialize()
02  U = {};
03  for all s in S: rhs(s) = g(s) = infinity;
04  rhs(s_goal) = 0;
05  U.Insert(s_goal, CalculateKey(s_goal));

procedure UpdateState(u)
06  if (u != s_goal) rhs(u) = min_{a in A(u)} max_{s' in Succ(u,a)} (c(u,a,s') + g(s'));
07  if (u in U) U.Remove(u);
08  if (g(u) != rhs(u)) U.Insert(u, CalculateKey(u));

procedure ComputePlan()
09  while (U.TopKey() < CalculateKey(s_curr) OR rhs(s_curr) != g(s_curr))
10      u = U.Pop();
11      if (g(u) > rhs(u))  /* u is locally overconsistent */
12          g(u) = rhs(u);
13          for all s in Pred(u): UpdateState(s);
14      else  /* u is locally underconsistent */
15          g(u) = infinity;
16          for all s in Pred(u) union {u}: UpdateState(s);

procedure Main()
17  s_curr = s_start;
18  Initialize();
19  ComputePlan();
20  while (s_curr != s_goal)
21      /* if rhs(s_curr) = infinity then the agent is not guaranteed to reach s_goal with finite plan-execution cost */
22      Execute a = argmin_{a in A(s_curr)} max_{s' in Succ(s_curr,a)} (c(s_curr,a,s') + g(s'));
23      Set s_curr to the current state of the agent after the action execution;
24      Scan for changed action costs;
25      if any action costs have changed
26          for all actions with changed action costs c(u,a,s')
27              Update the action cost c(u,a,s');
28              UpdateState(u);
29          for all s in U
30              U.Update(s, CalculateKey(s));
31          ComputePlan();
Figure 3: Minimax LPA*
It is therefore important to make the searches fast.
to . It is therefore important to make the searches fast.
In the next sections, we describe Minimax LPA* and how to implement the parti-game algorithm with it. Figures 1(b), (c), (d) and (f) show the state spaces for our example directly
after the parti-game algorithm has used Minimax LPA* to determine the minimax goal
distance of the current state of the agent. All expanded states (that is, all states whose minimax goal distances
have been computed) are shown in gray. Minimax LPA* speeds up the searches by reusing
information from previous searches, which is the reason why it expands only three states in
Figure 1(d). Minimax LPA* also speeds up the searches by using heuristics to focus them,
which is the reason why it expands only four states in Figure 1(f).
3 Minimax LPA*
Minimax LPA* repeatedly determines plans that minimize the worst-case plan-execution cost from the current state of the agent to the goal state as the agent moves towards the goal in nondeterministic domains
where the costs of actions increase or decrease over time. It generalizes two incremental
search algorithms, namely our LPA* [1] and DynamicSWSF-FP [7]. Figure 3 shows the
algorithm, which we describe in the following. Numbers in curly braces refer to the line
numbers in the figure.
3.1 Notation
S denotes the finite set of states. s_start in S is the start state, and s_goal in S is the goal state. A(s) is the set of actions that can be executed in state s in S. Succ(s, a), a subset of S, is the set of successor states that can result from the execution of a in A(s) in s in S. succ(s) = {s' in S | s' in Succ(s, a) for some a in A(s)} is the set of successor states of s in S. Pred(s) = {s' in S | s in succ(s')} is the set of predecessor states of s in S. The agent incurs cost c(s, a, s') > 0 if the execution of a in A(s) in s in S results in s' in Succ(s, a). gd(s) is the minimax goal distance of s in S, defined as the solution of the system of equations: gd(s) = 0 if s = s_goal, and gd(s) = min_{a in A(s)} max_{s' in Succ(s,a)} (c(s, a, s') + gd(s')) for all s in S with s != s_goal. s_curr in S is the current state of the agent, and the minimal worst-case plan-execution cost from s_curr to s_goal is gd(s_curr).
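On small state spaces, the system of equations defining gd can be solved by straightforward value-iteration sweeps; the following sketch (a toy encoding of actions and costs chosen by us — the paper instead computes these values incrementally with Minimax LPA*) illustrates the min-over-actions / max-over-outcomes structure:

```python
INF = float("inf")

def minimax_goal_distances(states, actions, cost, goal, iters=100):
    """Solve gd(goal) = 0, gd(s) = min over actions a of max over outcomes s'
    of (c(s,a,s') + gd(s')) by repeated sweeps (value iteration).
    actions[s] -> list of actions, each a list of possible outcome states;
    cost[(s, a_index, s')] -> positive cost of that outcome."""
    gd = {s: INF for s in states}
    gd[goal] = 0.0
    for _ in range(iters):
        changed = False
        for s in states:
            if s == goal:
                continue
            best = INF
            for ai, outcomes in enumerate(actions.get(s, [])):
                # worst case over the nondeterministic outcomes of this action
                worst = max((cost[(s, ai, t)] + gd[t] for t in outcomes), default=INF)
                best = min(best, worst)      # best action under the worst case
            if best != gd[s]:
                gd[s], changed = best, True
        if not changed:
            break
    return gd
```

A nondeterministic action is penalized by its worst outcome: an action that reaches the goal directly or bounces to a detour state is costed as if the detour always happens.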
3.2 Heuristics and Variables
Minimax LPA* searches backward from s_goal to s_curr and uses heuristics to focus its search. The heuristics h(s) need to be non-negative and satisfy h(s_curr) = 0 and h(s) <= h(s') + c(s', a, s) for all s' in S, a in A(s'), and s in Succ(s', a) with s != s_curr. In other words, the heuristics h(s) approximate the best-case plan-execution cost from s_curr to s.
Minimax LPA* maintains two variables for each state that it encounters during the search. The g-value g(s) of a state estimates its minimax goal distance. It is carried forward from one search to the next one and can be used after each search to determine a plan that minimizes the worst-case plan-execution cost from s_curr to s_goal. The rhs-value rhs(s) of a state also estimates its minimax goal distance. It is a one-step lookahead value based on the g-values of its successors and thus potentially better informed than its g-value. It always satisfies the following relationship (Invariant 1): rhs(s_goal) = 0 and rhs(s) = min_{a in A(s)} max_{s' in Succ(s,a)} (c(s, a, s') + g(s')) for all s in S with s != s_goal. A state is called locally consistent iff its g-value is equal to its rhs-value. Minimax LPA* also maintains a priority queue U that always contains exactly the locally inconsistent states (Invariant 2). Their priorities are always identical to their current keys (Invariant 3), where the key k(s) of s is the pair [min(g(s), rhs(s)) + h(s); min(g(s), rhs(s))], as
calculated by CalculateKey(). The keys are compared according to a lexicographic ordering.
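As a small illustration, the key computation and its lexicographic comparison look as follows (Python tuples compare lexicographically, matching the ordering used here):

```python
INF = float("inf")

def calculate_key(g: float, rhs: float, h: float) -> tuple:
    """Key of a state: [min(g, rhs) + h; min(g, rhs)]. The first component plays
    the role of an (estimated) f-value; the second breaks ties, preferring states
    with smaller min(g, rhs)."""
    m = min(g, rhs)
    return (m + h, m)
```

A state with g = infinity but a small rhs-value still receives a finite key, which is what lets newly inconsistent states be expanded in the right order.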
3.3 Algorithm
Minimax LPA* operates as follows. The main function Main() first calls Initialize() {18} to set the g-values and rhs-values of all states to infinity {03}. The only exception is the rhs-value of s_goal, which is set to zero {04}. Thus, s_goal is the only locally inconsistent state and is inserted into the otherwise empty priority queue {02, 05}. (Note that, in an actual implementation, Minimax LPA* needs to initialize a state only once it encounters it during the search and thus does not need to initialize all states up front. This is important because the number of states can be large and only a few of them might be reached during the search.) Then, Minimax LPA* calls ComputePlan() to compute a plan that minimizes the worst-case plan-execution cost from s_curr to s_goal {19}. If the agent has not reached s_goal yet {20}, it executes the first action of the plan {22} and updates s_curr {23}. It then scans for changed action costs {24}. To maintain Invariants 1, 2, and 3, it calls UpdateState() if some action costs have changed {28} to update the rhs-values and keys of the states potentially affected by the changed action costs, as well as their membership in the priority queue if they become locally consistent or inconsistent. It then recalculates the priorities of all states in the priority queue {29-30}. This is necessary because the heuristics change when the agent moves, since they are computed with respect to s_curr. This only
procedure Main()
17' s_curr = s_start;
18' while (s_curr != s_goal)
19'     Refine the discretization, if possible (initially: construct the first discretization);
20'     Construct the state space that corresponds to the current discretization;
21'     Initialize();
22'     ComputePlan();
23'     if (rhs(s_curr) = infinity) stop with no solution;
24'     while (s_curr != s_goal AND rhs(s_curr) != infinity)
25'         s_old = s_curr;
26'         Execute a = argmin_{a in A(s_curr)} max_{s' in Succ(s_curr,a)} (c(s_curr,a,s') + g(s'));
27'         Set s_curr to the new state of the agent after the action execution;
28'         if (s_curr not in Succ(s_old, a))
29'             Succ(s_old, a) = Succ(s_old, a) union {s_curr};
30'             succ(s_old) = succ(s_old) union {s_curr};
31'             Pred(s_curr) = Pred(s_curr) union {s_old};
32'             UpdateState(s_old);
33'             for all s in U
34'                 U.Update(s, CalculateKey(s));
35'             ComputePlan();
Figure 4: Parti-game algorithm using Minimax LPA*
changes the priorities of the states in the priority queue but not which
states are locally
consistent and thus in the priority queue. Finally, it recalculates a plan {31} and repeats the process.
ComputePlan() operates as follows. It repeatedly removes the locally inconsistent state with the smallest key from the priority queue {10} and expands it {11-16}. It distinguishes two cases. A state is called locally overconsistent iff its g-value is larger than its rhs-value. We can prove that the rhs-value of a locally overconsistent state that is about to be expanded is equal to its minimax goal distance. ComputePlan() therefore sets the g-value of the state to its rhs-value {12}. A state is called locally underconsistent iff its g-value is smaller than its rhs-value. In this case, ComputePlan() sets the g-value of the state to infinity {15}. In either case, ComputePlan() ensures that Invariants 1, 2 and 3 continue to hold {13, 16}. It terminates as soon as s_curr is locally consistent and its key is less than or equal to the keys of all locally inconsistent states.
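The expansion loop described above can be rendered compactly in code. The following is a minimal, unoptimized sketch of UpdateState() and ComputePlan() from Figure 3 (the dict-based priority queue, the toy state encoding, and the zero heuristic in the example are illustrative assumptions, not the authors' implementation):

```python
from math import inf

class MinimaxLPA:
    """Minimal rendering of UpdateState()/ComputePlan() from Figure 3.
    actions[s] maps an action name to the list of its possible outcome states;
    cost[(s, a, s')] is the positive cost of that outcome; h is the heuristic."""

    def __init__(self, states, actions, cost, curr, goal, h):
        self.actions, self.cost = actions, cost
        self.curr, self.goal, self.h = curr, goal, h
        self.pred = {s: set() for s in states}          # predecessor sets Pred(s)
        for s in states:
            for a, outcomes in actions.get(s, {}).items():
                for t in outcomes:
                    self.pred[t].add(s)
        self.g = {s: inf for s in states}               # {03}
        self.rhs = {s: inf for s in states}
        self.rhs[goal] = 0                              # {04}
        self.U = {goal: self.key(goal)}                 # {02, 05}: priority queue

    def key(self, s):                                   # {01}
        m = min(self.g[s], self.rhs[s])
        return (m + self.h(s), m)

    def update_state(self, u):                          # {06}-{08}
        if u != self.goal:
            self.rhs[u] = min(
                (max(self.cost[(u, a, t)] + self.g[t] for t in outs)
                 for a, outs in self.actions.get(u, {}).items()),
                default=inf)
        self.U.pop(u, None)                             # {07}
        if self.g[u] != self.rhs[u]:                    # {08}: queue holds exactly
            self.U[u] = self.key(u)                     # the inconsistent states

    def compute_plan(self):                             # {09}-{16}
        while self.U and (min(self.U.values()) < self.key(self.curr)
                          or self.rhs[self.curr] != self.g[self.curr]):
            u = min(self.U, key=self.U.get)             # {10}: smallest key first
            del self.U[u]
            if self.g[u] > self.rhs[u]:                 # {11}: locally overconsistent
                self.g[u] = self.rhs[u]                 # {12}
                for s in self.pred[u]:                  # {13}
                    self.update_state(s)
            else:                                       # {14}: locally underconsistent
                self.g[u] = inf                         # {15}
                for s in self.pred[u] | {u}:            # {16}
                    self.update_state(s)
```

On a small test problem the search settles g(s_curr) at its minimax goal distance while leaving untouched any state whose key never drops below s_curr's key, which is where the savings over search from scratch come from.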
Theorem 1 ComputePlan() of Minimax LPA* expands each state at most twice and thus terminates.
Assume that, after ComputePlan() terminates, one starts in s_curr and always executes an action a that minimizes max_{s' in Succ(s,a)} (c(s, a, s') + g(s')) in the current state s until s_goal is reached (ties can be broken arbitrarily). Then, the plan-execution cost is no larger than the minimax goal distance of s_curr.
We can also prove several additional theorems about the efficiency of Minimax LPA*,
including the fact that it only expands those states whose g-values are not already correct
[5]. To reduce its search time, we optimize Minimax LPA* in several ways, for example, to
avoid unnecessary re-computations of the rhs-values [5]. We use these optimizations in the
experiments. A more detailed description, the intuition behind Minimax LPA*, examples
of its operation, and additional theorems and their proofs can be found in [5].
4 Using Minimax LPA* to Implement the Parti-Game Algorithm
Figure 4 shows how Minimax LPA* can be used to implement the parti-game algorithm in a
more efficient way than with uninformed search from scratch, using some of the functions
from Figure 3. Initially,
the parti-game algorithm constructs a first (coarse) discretization of the terrain {19},
constructs the corresponding state space (which includes setting s_start to the state of the
agent, s_goal to the state that contains the goal coordinates, and the possible action
outcomes according to the optimistic assumption that each action deterministically reaches
the intended state) {20}, and uses ComputePlan() to find a first plan from scratch {21}-{22}.
If the minimax goal distance of s_start is infinity, then it stops unsuccessfully {23}.
Otherwise, it repeatedly executes the action that minimizes the worst-case plan-execution
cost {26}-{27}. If it observes an unknown action outcome {28}, then it records it {29}-{31},
ensures that Invariants 1, 2 and 3 continue to hold {32}-{34}, uses ComputePlan() to find a
new plan incrementally {35}, and then continues to execute actions until either s_start is
unsolvable or the agent reaches s_goal {24}. In the former case, it refines the
discretization {19}, uses ComputePlan() to find a new plan from scratch rather than
incrementally (because the discretization changes the state space substantially)
{20}-{22}, and then repeats the process.
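The outer loop just described can be sketched structurally as follows. The environment object and all of its methods are invented stand-ins for the routines of Figure 4; the numbers in the comments refer to Figure 4 lines:

```python
INF = float("inf")

def parti_game(env):
    env.build_discretization()                       # {19}-{20}: coarse cells, optimistic model
    g = env.compute_plan(incremental=False)          # {21}-{22}: first plan from scratch
    while True:
        if g[env.start_state()] == INF:              # {23}/{19}: refine, or give up at limit
            if not env.refine_discretization():
                return False
            g = env.compute_plan(incremental=False)  # from scratch after refinement
            continue
        outcome_known = env.execute_greedy_action()  # {26}-{27}: minimize worst case
        if env.at_goal():                            # {24}: success
            return True
        if not outcome_known:                        # {28}-{34}: record the new outcome
            env.record_outcome()
            g = env.compute_plan(incremental=True)   # {35}: replan incrementally
```

A trivial environment where the first plan already leads to the goal exercises the happy path of this skeleton.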
The heuristic of a state in our version of the parti-game algorithm approximates the Euclidean distance from the center of the current cell of the agent to the center of the cell
that corresponds to the state in question. The resulting heuristics have the property that we
described in Section 3.2. Figures 1(b), (c), (d) and (f) show the heuristics, g-values and
rhs-values of all states directly after the call to ComputePlan(). All expanded states are
shown in gray, and all locally inconsistent states (that is, states in the priority queue) are
shown in bold.
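The cell-center heuristic described above can be sketched as follows (a hedged illustration; representing each cell as a list of per-dimension (lo, hi) extents is an assumption of this sketch, not the paper's data structure):

```python
import math

def heuristic(cell, agent_cell):
    """Euclidean distance between the centers of two axis-aligned cells."""
    def center(c):
        return [(lo + hi) / 2.0 for lo, hi in c]
    return math.dist(center(cell), center(agent_cell))
```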
It happens quite frequently that s_start is unsolvable and the parti-game algorithm thus
has to refine the discretization. If s_start is unsolvable, Minimax LPA* expands a large
number of states because it has to disprove the existence of a plan rather than find one. We
speed up Minimax LPA* for the special case where s_start is unsolvable but every other
state is solvable, since this occurs about half of the time when s_start is unsolvable. If
states other than s_start become unsolvable, some of them need to be predecessors of
s_start. To prove that s_start is unsolvable but every other state is solvable, Minimax
LPA* can therefore show that all predecessors of s_start are solvable but s_start itself
is not. To show that all predecessors of s_start are solvable, Minimax LPA* checks that
they are locally consistent, their keys are no larger than U.TopKey(), and their rhs-values
are finite. To show that s_start is unsolvable, Minimax LPA* checks that the rhs-value of
s_start is infinite. We use this optimization in the experiments.
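The special-case certificate can be sketched as follows (a hedged illustration; g, rhs, key, and top_key are stand-ins for the algorithm's bookkeeping and for U.TopKey()):

```python
INF = float("inf")

def start_alone_unsolvable(start, pred, g, rhs, key, top_key):
    """Certify: s_start is unsolvable while all its predecessors are solvable."""
    if rhs[start] != INF:
        return False                        # s_start not proven unsolvable
    for p in pred[start]:
        # a predecessor is certified solvable if it is locally consistent,
        # its key is no larger than the top of the queue, and its rhs is finite
        solvable = g[p] == rhs[p] and rhs[p] < INF and key(p) <= top_key
        if not solvable:
            return False
    return True
```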
5 Experimental Results
An implementation of the parti-game algorithm can use search from scratch or incremental
search. It can also use uninformed search (using the zero heuristic) and informed search
(using the heuristic that we used in the context of the example from Figure 1). We compare
the four resulting combinations. All of them use binary heaps to implement the priority
queue and the same optimizations but the implementations with search from scratch do not
contain any code needed only for incremental search. Since all implementations move the
agent in the same way, we compare their number of state expansions, their total run times,
and their total search times (that is, the part of the run times spent in the search routines),
averaged over 25 two-dimensional terrains with 30 percent obstacle density, where the
resolution limit is one cell. In each case, the goal coordinates are in the center of the
terrain, and the start coordinates are in the vertical center and ten percent to the right
of the left edge. We also report the average of the ratios of the three measures for each
of the four implementations and the one with incremental heuristic search (which is
different from the ratio of the averages), together with their 95-percent confidence
intervals.
Implementation of Parti-Game   Expansions   Ratio          Run Time        (Search Time)     Ratio Run Time   (Ratio Search Time)
Algorithm with ...                          (Expansions)
Uninformed from Scratch        69,527,969   20.55 ± 4.12   39 min 51 sec   (37 min 43 sec)   11.83 ± 3.52     (15.29 ± 3.61)
Informed from Scratch          31,303,253    8.06 ± 2.59   22 min 58 sec   (20 min 49 sec)    6.08 ± 2.50     ( 7.20 ± 2.70)
Uninformed Incremental          2,628,879    1.23 ± 0.03    1 min 54 sec   ( 1 min 41 sec)    1.04 ± 0.02     ( 1.19 ± 0.05)
Informed Incremental            2,172,430    1.00 ± 0.00    1 min 45 sec   ( 1 min 28 sec)    1.00 ± 0.00     ( 1.00 ± 0.00)
The average number of searches, measured by calls to ComputePlan(), is 29,885 until the
agent reaches s_goal. The table shows that the search times of the parti-game algorithm
are substantial due to the large number of searches performed (even though each search is
fast), and that the searches take up most of its run time. Thus, speeding up the searches is
important. The table also shows that incremental and heuristic search individually speed
up the parti-game algorithm and together speed it up even more.
The implementations of the parti-game algorithm in [3] and [6] make slightly different assumptions from ours, for example, minimize state transitions rather than cost. Al-Ansari
reports that the original implementation of the parti-game algorithm with value iteration
performs about 80 percent and that his implementation with a simple uninformed incremental search method performs about 15 percent of the state expansions of the implementation with uninformed search from scratch [6]. Our results show that our implementation
with Minimax LPA* performs about 5 percent of the state expansions of the implementation with uninformed search from scratch. While these results are not directly comparable,
we also have first results from running the original implementation with value iteration
and our implementation with Minimax LPA* on a very similar environment: the original implementation expanded one to two orders of magnitude more states than ours, even
though its number of searches and its final number of states were smaller. However, these
results are very preliminary since the time per state expansion is different for the different implementations and it is future work to compare the various implementations of the
parti-game algorithm in a common testbed.
References
[1] S. Koenig and M. Likhachev. Incremental A*. In T. Dietterich, S. Becker, and Z. Ghahramani,
editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT
Press.
[2] D. Frigioni, A. Marchetti-Spaccamela, and U. Nanni. Fully dynamic algorithms for maintaining
shortest paths trees. Journal of Algorithms, 34(2):251-281, 2000.
[3] A. Moore and C. Atkeson. The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning, 21(3):199-233, 1995.
[4] A. Moore and C. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less
time. Machine Learning, 13(1):103-130, 1993.
[5] M. Likhachev and S. Koenig. Speeding up reinforcement learning with incremental heuristic
minimax search. Technical Report GIT-COGSCI-2002/5, College of Computing, Georgia Institute of Technology, Atlanta (Georgia), 2002.
[6] M. Al-Ansari. Efficient Reinforcement Learning in Continuous Environments. PhD thesis, College of Computer Science, Northeastern University, Boston (Massachusetts), 2001.
[7] G. Ramalingam and T. Reps. An incremental algorithm for a generalization of the shortest-path
problem. Journal of Algorithms, 21:267-305, 1996.
An Application to Robust Biped Walking
Jun Morimoto
Human Information Science Labs,
Department 3, ATR International
Keihanna Science City,
Kyoto, JAPAN, 619-0288
[email protected]
Christopher G. Atkeson *
The Robotics Institute and HCII,
Carnegie Mellon University
5000 Forbes Ave.,
Pittsburgh, USA, 15213
[email protected]
Abstract
We developed a robust control policy design method in high-dimensional
state space by using differential dynamic programming with a minimax
criterion. As an example, we applied our method to a simulated five link
biped robot. The results show lower joint torques from the optimal control policy compared to a hand-tuned PD servo controller. Results also
show that the simulated biped robot can successfully walk with unknown
disturbances that cause controllers generated by standard differential dynamic programming and the hand-tuned PD servo to fail. Learning to
compensate for modeling error and previously unknown disturbances in
conjunction with robust control design is also demonstrated.
1 Introduction
Reinforcement learning[8] is widely studied because of its promise to automatically generate controllers for difficult tasks from attempts to do the task. However, reinforcement
learning requires a great deal of training data and computational resources, and sometimes
fails to learn high dimensional tasks. To improve reinforcement learning, we propose using
differential dynamic programming (DDP) which is a second order local trajectory optimization method to generate locally optimal plans and local models of the value function[2, 4].
Dynamic programming requires task models to learn tasks. However, when we apply dynamic programming to a real environment, handling inevitable modeling errors is crucial.
In this study, we develop minimax differential dynamic programming, which provides robust nonlinear controller designs based on the idea of H-infinity control [9, 5] or risk sensitive
control [6, 1]. We apply the proposed method to a simulated five link biped robot (Fig. 1).
Our strategy is to use minimax DDP to find both a low torque biped walk and a policy or
control law to handle deviations from the optimized trajectory. We show that both standard
DDP and minimax DDP can find a local policy for lower torque biped walk than a handtuned PD servo controller. We show that minimax DDP can cope with larger modeling
error than standard DDP or the hand-tuned PD controller. Thus, the robust controller allows us to collect useful training data. In addition, we can use learning to correct modeling
errors and model previously unknown disturbances, and design a new, more optimal robust
controller using additional iterations of minimax DDP.
* also affiliated with Human Information Science Laboratories, Department 3, ATR International
2 Minimax DDP
2.1 Differential dynamic programming (DDP)
A value function is defined as the sum of the accumulated future penalty r(x_i, u_i, i) from the current
state and the terminal penalty Φ(x_N):

    V(x_i, i) = Φ(x_N) + Σ_{j=i}^{N-1} r(x_j, u_j, j),    (1)
where xi is the input state, ui is the control output at the i-th time step, and N is the number
of time steps. Differential dynamic programming maintains a second order local model of
a Q function (Q(i), Q_x(i), Q_u(i), Q_xx(i), Q_xu(i), Q_uu(i)), where Q(i) = r(x_i, u_i, i) +
V(x_{i+1}, i+1), and the subscripts indicate partial derivatives. Then, we can derive the
new control output u_i^new = u_i + δu_i from argmax_{δu_i} Q(x_i + δx_i, u_i + δu_i, i). Finally,
by using the new control output u_i^new, a second order local model of the value function
(V(i), V_x(i), V_xx(i)) can be derived [2, 4].
2.2 Finding a local policy
DDP finds a locally optimal trajectory x_i^opt and the corresponding control trajectory u_i^opt.
When we apply our control algorithm to a real environment, we usually need a feedback
controller to cope with unknown disturbances or modeling errors. Fortunately, DDP provides us a local policy along the optimized trajectory:

    u^opt(x_i, i) = u_i^opt + K_i (x_i − x_i^opt),    (2)
where Ki is a time dependent gain matrix given by taking the derivative of the optimal
policy with respect to the state [2, 4].
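A minimal sketch of applying the time-indexed local policy of (2), written for scalar state and control (an illustration, not the paper's implementation):

```python
def local_policy(x, i, u_opt, x_opt, K):
    """u(x, i) = u_opt_i + K_i * (x - x_opt_i): feedforward plus local feedback."""
    return u_opt[i] + K[i] * (x - x_opt[i])
```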
2.3 Minimax DDP
Minimax DDP can be derived as an extension of standard DDP [2, 4]. The difference is that
the proposed method has an additional disturbance variable w to explicitly represent the
existence of disturbances. This representation of the disturbance provides the robustness
for optimized trajectories and policies [5].
Then, we expand the Q function Q(x_i + δx_i, u_i + δu_i, w_i + δw_i, i) to second order in
terms of δu, δw and δx about the nominal solution:

    Q(x_i + δx_i, u_i + δu_i, w_i + δw_i, i)
      = Q(i) + Q_x(i) δx_i + Q_u(i) δu_i + Q_w(i) δw_i
        + (1/2) [δx_i; δu_i; δw_i]^T [ Q_xx(i)  Q_xu(i)  Q_xw(i) ;
                                       Q_ux(i)  Q_uu(i)  Q_uw(i) ;
                                       Q_wx(i)  Q_wu(i)  Q_ww(i) ] [δx_i; δu_i; δw_i],    (3)
The second order local model of the Q function can be propagated backward in time using:

    Q_x(i)  = V_x(i+1) F_x + r_x(i)                                  (4)
    Q_u(i)  = V_x(i+1) F_u + r_u(i)                                  (5)
    Q_w(i)  = V_x(i+1) F_w + r_w(i)                                  (6)
    Q_xx(i) = F_x^T V_xx(i+1) F_x + V_x(i+1) F_xx + r_xx(i)          (7)
    Q_xu(i) = F_x^T V_xx(i+1) F_u + V_x(i+1) F_xu + r_xu(i)          (8)
    Q_xw(i) = F_x^T V_xx(i+1) F_w + V_x(i+1) F_xw + r_xw(i)          (9)
    Q_uu(i) = F_u^T V_xx(i+1) F_u + V_x(i+1) F_uu + r_uu(i)          (10)
    Q_ww(i) = F_w^T V_xx(i+1) F_w + V_x(i+1) F_ww + r_ww(i)          (11)
    Q_uw(i) = F_u^T V_xx(i+1) F_w + V_x(i+1) F_uw + r_uw(i)          (12)

where x_{i+1} = F(x_i, u_i, w_i) is a model of the task dynamics.
Here, δu_i and δw_i must be chosen to minimize and maximize, respectively, the second order expansion
of the Q function Q(x_i + δx_i, u_i + δu_i, w_i + δw_i, i) in (3), i.e.,

    δu_i = −Q_uu^{-1}(i) [Q_ux(i) δx_i + Q_uw(i) δw_i + Q_u(i)]
    δw_i = −Q_ww^{-1}(i) [Q_wx(i) δx_i + Q_wu(i) δu_i + Q_w(i)].    (13)
By solving (13), we can derive both δu_i and δw_i. After updating the control output u_i and
the disturbance w_i with the derived δu_i and δw_i, the second order local model of the value
function is given as

    V(i)    = V(i+1) − Q_u(i) Q_uu^{-1}(i) Q_u(i) − Q_w(i) Q_ww^{-1}(i) Q_w(i)
    V_x(i)  = Q_x(i) − Q_u(i) Q_uu^{-1}(i) Q_ux(i) − Q_w(i) Q_ww^{-1}(i) Q_wx(i)
    V_xx(i) = Q_xx(i) − Q_xu(i) Q_uu^{-1}(i) Q_ux(i) − Q_xw(i) Q_ww^{-1}(i) Q_wx(i).    (14)
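The conditions in (13) couple δu_i and δw_i, so they must be solved jointly. A hedged scalar sketch (names invented for this illustration), where the coupled stationarity conditions reduce to a 2x2 linear system:

```python
def minimax_delta(Qu, Qw, Qux, Qwx, Quu, Qww, Quw, dx):
    """Solve the scalar stationarity conditions of the minimax Q model:
       Quu*du + Quw*dw = -(Qu + Qux*dx)
       Quw*du + Qww*dw = -(Qw + Qwx*dx)
    For a minimax problem, Quu > 0 (minimization in du) and Qww < 0
    (maximization in dw), so the determinant below is nonzero."""
    a = -(Qu + Qux * dx)
    b = -(Qw + Qwx * dx)
    det = Quu * Qww - Quw * Quw
    du = (a * Qww - b * Quw) / det   # Cramer's rule, first unknown
    dw = (b * Quu - a * Quw) / det   # Cramer's rule, second unknown
    return du, dw
```

With no coupling (Quw = 0) and no state deviation, this reduces to du = -Qu/Quu and dw = -Qw/Qww, i.e. independent descent in u and ascent in w.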
3 Experiment
3.1 Biped robot model
In this paper, we use a simulated five link biped robot (Fig. 1:Left) to explore our approach.
Kinematic and dynamic parameters of the simulated robot are chosen to match those of a
biped robot we are currently developing (Fig. 1:Right) and which we will use to further
explore our approach. Height and total weight of the robot are about 0.4 [m] and 2.0 [kg]
respectively. Table 1 shows the parameters of the robot model.
[Figure 1, left panel: schematic of the five link model, labeling links 1-5, joints 1-4, and the ankle.]
Figure 1: Left: Five link robot model, Right: Real robot
Table 1: Physical parameters of the robot model
                             link1   link2   link3   link4   link5
    mass [kg]                0.05    0.43    1.0     0.43    0.05
    length [m]               0.2     0.2     0.01    0.2     0.2
    inertia [kg·m^2 ×10^-4]  1.75    4.29    4.33    4.29    1.75
We can represent the forward dynamics of the biped robot as

    x_{i+1} = f(x_i) + b(x_i) u_i,    (15)

where x = {θ_1, ..., θ_5, θ̇_1, ..., θ̇_5} denotes the input state vector and u = {τ_1, ..., τ_4} denotes the control command (each torque τ_j is applied to joint j (Fig. 1: Left)). In the minimax optimization case, we explicitly represent the existence of the disturbance as

    x_{i+1} = f(x_i) + b(x_i) u_i + b_w(x_i) w_i,    (16)

where w = {w_0, w_1, w_2, w_3, w_4} denotes the disturbance (w_0 is applied to the ankle, and w_j
(j = 1 ... 4) is applied to joint j (Fig. 1: Left)).
3.2 Optimization criterion and method
We use the following objective function, which is designed to reward energy efficiency and
enforce periodicity of the trajectory:

    J = Φ(x_0, x_N) + Σ_{i=0}^{N-1} r(x_i, u_i, i)    (17)

which is applied for half the walking cycle, from one heel strike to the next heel strike.
This criterion sums the squared deviations from a nominal trajectory, the squared control
magnitudes, and the squared deviations from a desired velocity of the center of mass:

    r(x_i, u_i, i) = (x_i − x_i^d)^T Q (x_i − x_i^d) + u_i^T R u_i + (v(x_i) − v^d)^T S (v(x_i) − v^d),    (18)

where x_i is the state vector at the i-th time step, x_i^d is the nominal state vector at the i-th
time step (taken from a trajectory generated by a hand-designed walking controller), v(x_i)
denotes the velocity of the center of mass at the i-th time step, and v^d denotes the desired
velocity of the center of mass. The term (x_i − x_i^d)^T Q (x_i − x_i^d) encourages the robot to
follow the nominal trajectory, the term u_i^T R u_i discourages using large control outputs,
and the term (v(x_i) − v^d)^T S (v(x_i) − v^d) encourages the robot to achieve the desired
velocity.
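As a concrete reading of the running cost in (18), here is a hedged sketch with diagonal weight matrices and a scalar center-of-mass velocity — both simplifications made only for this illustration:

```python
def running_cost(x, x_nom, u, v, v_des, q, r, s):
    """Deviation from the nominal state + control effort + velocity deviation."""
    dev = sum(qk * (xk - xnk) ** 2 for qk, xk, xnk in zip(q, x, x_nom))
    effort = sum(rk * uk ** 2 for rk, uk in zip(r, u))
    return dev + effort + s * (v - v_des) ** 2
```

With the scalar weights q = 0.25, r = 3.0, s = 0.3 (mirroring the diagonal values quoted in Section 4), a unit state deviation and a torque of 2 cost 0.25 + 12 = 12.25 when the velocity matches its target.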
In addition, penalties on the initial (x_0) and final (x_N) states are applied:

    Φ(x_0, x_N) = F(x_0) + Φ_N(x_0, x_N).    (19)

The term F(x_0) penalizes an initial state where the foot is not on the ground:

    F(x_0) = F_h^T(x_0) P_0 F_h(x_0),    (20)

where F_h(x_0) denotes the height of the swing foot at the initial state x_0. The term Φ_N(x_0, x_N)
is used to generate periodic trajectories:

    Φ_N(x_0, x_N) = (x_N − H(x_0))^T P_N (x_N − H(x_0)),    (21)

where x_N denotes the terminal state, x_0 denotes the initial state, and the term (x_N −
H(x_0))^T P_N (x_N − H(x_0)) is a measure of terminal control accuracy. The function H()
represents the coordinate change caused by the exchange of the support leg and the swing leg,
and the velocity change caused by the swing foot touching the ground (Appendix A).
We implement the minimax DDP by adding a minimax term to the criterion. We use a
modified objective function:

    J_minimax = J − Σ_{i=0}^{N-1} w_i^T G w_i,    (22)

where w_i denotes the disturbance vector at the i-th time step, and the term w_i^T G w_i rewards
coping with large disturbances. This explicit representation of the disturbance w provides
the robustness of the controller [5].
4 Results
We compare the optimized controller with a hand-tuned PD servo controller, which also
is the source of the initial and nominal trajectories in the optimization process. We set
the parameters for the optimization process as Q = 0.25 I_10, R = 3.0 I_4, S = 0.3 I_1,
desired velocity v^d = 0.4 [m/s] in equation (18), P_0 = 1000000.0 I_1 in equation (20), and
P_N = diag{10000.0, 10000.0, 10000.0, 10000.0, 10000.0, 10.0, 10.0, 10.0, 5.0, 5.0} in
equation (21), where I_N denotes the N-dimensional identity matrix. For minimax DDP, we
set the parameter for the disturbance reward in equation (22) as G = diag{5.0, 20.0, 20.0,
20.0, 20.0} (a G with smaller elements generates more conservative but robust trajectories).
Each parameter is set to acquire the best results in terms of both the robustness and the
energy efficiency. When we apply the controllers acquired by standard DDP and minimax
DDP to the biped walk, we adopt a local policy which we introduced in section 2.2.
Results in table 2 show that the controller generated by standard DDP and minimax DDP
did almost halve the cost of the trajectory, as compared to that of the original hand-tuned
PD servo controller. However, because the minimax DDP is more conservative in taking
advantage of the plant dynamics, it has a slightly higher control cost than the standard DDP.
Note that we defined the control cost as (1/N) Σ_{i=0}^{N-1} ||u_i||^2, where u_i is the control
output (torque) vector at the i-th time step, and N denotes the total number of time steps in a
one-step trajectory.
Table 2: One step control cost (average over 100 steps)
                                      PD servo   standard DDP   minimax DDP
    control cost [(N·m)^2 × 10^-2]    7.50       3.54           3.86
To test robustness, we assume that there is unknown viscous friction at each joint:
    τ_j^dist = −μ_j θ̇_j    (j = 1, ..., 4),    (23)

where μ_j denotes the viscous friction coefficient at joint j.
We used two levels of disturbances in the simulation, with the higher level being 3 times
larger than the base level (Table 3).
Table 3: Parameters of the disturbance
             μ_2, μ_3 (hip joints)   μ_1, μ_4 (knee joints)
    base     0.01                    0.05
    large    0.03                    0.15
All methods could handle the base level disturbances. Both the standard and the minimax
DDP generated much less control cost than the hand-tuned PD servo controller (Table 4).
However, only the minimax DDP control design could cope with the higher level of disturbances. Figure 2 shows trajectories for the three different methods. Both the simulated
robot with the standard DDP and the hand-tuned PD servo controller fell down before
achieving 100 steps. The bottom of figure 2 shows part of a successful biped walking trajectory of the robot with the minimax DDP. Figure 3 shows ankle joint trajectories for the
three different methods. Only the minimax DDP successfully kept the ankle joint angle θ_1 around
90 degrees for more than 20 seconds. Table 5 shows the number of steps before the robot fell
down. We terminated a trial when the robot achieved 1000 steps.
Table 4: One step control cost with the base setting (averaged over 100 steps)
                                      PD servo   standard DDP   minimax DDP
    control cost [(N·m)^2 × 10^-2]    8.97       5.23           5.87
Hand-tuned PD servo
Standard DDP
Minimax DDP
Figure 2: Biped walk trajectories with the three different methods
5 Learning the unmodeled dynamics
In section 4, we verified that minimax DDP could generate robust biped trajectories and
policies. The minimax DDP coped with larger disturbances than the standard DDP and
the hand-tuned PD servo controller. However, if there are modeling errors, using a robust
controller which does not learn is not particularly energy efficient. Fortunately, with minimax DDP, we can collect sufficient data to improve our dynamics model. Here, we propose
using Receptive Field Weighted Regression (RFWR) [7] to learn the error dynamics of the
biped robot. In this section we present results on learning a simulated modeling error (the
disturbances discussed in section 4). We are currently applying this approach to an actual
robot.
We can represent the full dynamics as the sum of the known dynamics and the error dynamics ΔF(x_i, u_i, i):

    x_{i+1} = F(x_i, u_i) + ΔF(x_i, u_i, i).    (24)

We estimate the error dynamics ΔF by using RFWR:

    ΔF̂(x_i, u_i, i) = [ Σ_{k=1}^{N_b} φ_k^i ψ_k(x_i, u_i, i) ] / [ Σ_{k=1}^{N_b} φ_k^i ],    (25)
    ψ_k(x_i, u_i, i) = β_k^T x̃_k^i,    (26)
    φ_k^i = exp( −(1/2) (i − c_k)^T D_k (i − c_k) ),    (27)

where N_b denotes the number of basis functions, c_k denotes the center of the k-th basis
function, D_k denotes the distance metric of the k-th basis function, β_k denotes the
parameters of the k-th basis function used to approximate the error dynamics, and
x̃_k^i = (x_i, u_i, 1, i − c_k) denotes the augmented state vector for the k-th basis
function. We align 20 basis functions (N_b = 20) at even intervals along the biped trajectories.
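A hedged sketch of the RFWR-style prediction in (25)-(27), with a scalar output and a one-dimensional kernel over the time index — the function and variable names are invented for this illustration:

```python
import math

def rfwr_predict(z, i, centers, widths, betas):
    """Kernel-weighted blend of local linear models centered along the time axis.
    z: input features (x, u, 1); each local model also sees (i - c_k)."""
    num = den = 0.0
    for c_k, d_k, b_k in zip(centers, widths, betas):
        phi = math.exp(-0.5 * d_k * (i - c_k) ** 2)               # kernel weight
        psi = sum(bj * zj for bj, zj in zip(b_k, z + [i - c_k]))  # local linear model
        num += phi * psi
        den += phi
    return num / den                                              # normalized blend
```

Two equally weighted local models with outputs 1 and 3 blend to their average, 2, when their kernels coincide.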
The learning strategy uses the following sequence: 1) Design the initial controller using
minimax DDP applied to the nominal model. 2) Apply that controller. 3) Learn the actual
dynamics using RFWR. 4) Redesign the biped controller using minimax DDP with the
learned model.
[Figure 3 consists of three panels plotting the ankle joint angle (ankle [deg], range 60-100) against time (time [sec], range 0-20) for the PD servo (top), the standard DDP (middle), and the minimax DDP (bottom).]
Figure 3: Ankle joint trajectories with the three different methods
Table 5: Number of steps with the large disturbances

                       PD servo   standard DDP   minimax DDP
    Number of steps    49         24             > 1000
We compare the efficiency of the controller with the learned model to the controller without the learned model. Results in table 6 show that the controller after learning the error
dynamics used lower torque to produce stable biped walking trajectories.
Table 6: One step control cost with the large disturbances (averaged over 100 steps)

                                      without learned model   with learned model
    control cost [(N·m)^2 × 10^-2]    17.1                    11.3
6 Discussion
In this study, we developed an optimization method to generate biped walking trajectories
by using differential dynamic programming (DDP). We showed that 1) DDP and minimax
DDP can be applied to high dimensional problems, 2) minimax DDP can design more
robust controllers, and 3) learning can be used to reduce modeling error and unknown
disturbances in the context of minimax DDP control design.
Both standard DDP and minimax DDP generated low torque biped trajectories. We showed
that the minimax DDP control design was more robust than the controller designed by
standard DDP and the hand-tuned PD servo. Given a robust controller, we could collect
sufficient data to learn the error dynamics using RFWR[7] without the robot falling down
all the time. We also showed that after learning the error dynamics, the biped robot could
find a lower torque trajectory.
DDP provides a feedback controller, which is important in coping with unknown disturbances and modeling errors. However, as shown in equation (2), the feedback controller is
indexed by time, and the development of a time-independent feedback controller is a future
goal.
Appendix A: Ground contact model
The function H() in equation (21) includes the mapping (velocity change) caused by
ground contact. To derive the first derivative of the value function V x (xN ) and the second derivative Vxx (xN ), where xN denotes the terminal state, the function H() should be
analytical. Then, we used an analytical ground contact model[3]:
    θ̇^+ − θ̇^- = M^{-1}(θ) D(θ) f Δt,    (28)

where θ denotes the joint angles of the robot, θ̇^- denotes the angular velocities before ground
contact, θ̇^+ denotes the angular velocities after ground contact, M denotes the inertia matrix,
D denotes the Jacobian matrix which converts the ground contact force f to the torque at
each joint, and Δt denotes the time step of the simulation.
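Read one-dimensionally, (28) says the impulsive contact force changes the joint velocity by M^{-1} D f Δt; a minimal sketch under that scalar simplification:

```python
def post_contact_velocity(theta_dot_minus, M_inv, D, f, dt):
    """Velocity after ground contact: the impulse M^{-1} D f dt is added to
    the pre-contact velocity (scalar reading of the contact model)."""
    return theta_dot_minus + M_inv * D * f * dt
```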
References
[1] S. P. Coraluppi and S. I. Marcus. Risk-Sensitive and Minmax Control of Discrete-Time
Finite-State Markov Decision Processes. Automatica, 35:301-309, 1999.
[2] P. Dyer and S. R. McReynolds. The Computation and Theory of Optimal Control.
Academic Press, New York, NY, 1970.
[3] Y. Hurmuzlu and D. B. Marghitu. Rigid body collisions of planar kinematic chains
with multiple contact points. International Journal of Robotics Research, 13(1):82-92, 1994.
[4] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, New
York, NY, 1970.
[5] J. Morimoto and K. Doya. Robust Reinforcement Learning. In Todd K. Leen,
Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 1061-1067. MIT Press, Cambridge, MA, 2001.
[6] R. Neuneier and O. Mihatsch. Risk Sensitive Reinforcement Learning. In M. S. Kearns,
S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 1031-1037. MIT Press, Cambridge, MA, USA, 1998.
[7] S. Schaal and C. G. Atkeson. Constructive incremental learning from only local information. Neural Computation, 10(8):2047-2084, 1998.
[8] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT
Press, Cambridge, MA, 1998.
[9] K. Zhou, J. C. Doyle, and K. Glover. Robust Optimal Control. PRENTICE HALL,
New Jersey, 1996.
Real-time Particle Filters
Cody Kwok
Dieter Fox
Marina Meilă
Dept. of Computer Science & Engineering, Dept. of Statistics
University of Washington
Seattle, WA 98195
{ctkwok,fox}@cs.washington.edu, [email protected]
Abstract
Particle filters estimate the state of dynamical systems from sensor information. In many real time applications of particle filters, however, sensor
information arrives at a significantly higher rate than the update rate of the
filter. The prevalent approach to dealing with such situations is to update
the particle filter as often as possible and to discard sensor information that
cannot be processed in time. In this paper we present real-time particle filters, which make use of all sensor information even when the filter update
rate is below the update rate of the sensors. This is achieved by representing posteriors as mixtures of sample sets, where each mixture component
integrates one observation arriving during a filter update. The weights of
the mixture components are set so as to minimize the approximation error
introduced by the mixture representation. Thereby, our approach focuses
computational resources (samples) on valuable sensor information. Experiments using data collected with a mobile robot show that our approach
yields strong improvements over other approaches.
1 Introduction
Due to their sample-based representation, particle filters are well suited to estimate the state
of non-linear dynamic systems. Over the last years, particle filters have been applied with
great success to a variety of state estimation problems including visual tracking, speech
recognition, and mobile robotics [1]. The increased representational power of particle filters, however, comes at the cost of higher computational complexity.
The application of particle filters to online, real-time estimation raises new research questions. The key question in this context is: How can we deal with situations in which the rate
of incoming sensor data is higher than the update rate of the particle filter? To the best of
our knowledge, this problem has not been addressed in the literature so far. The prevalent
approach in real time applications is to update the filter as often as possible and to discard
sensor information that arrives during the update process. Obviously, this approach is prone
to losing valuable sensor information. At first sight, the sample-based representation of particle filters suggests an alternative approach similar to an any-time implementation: Whenever
a new observation arrives, sampling is interrupted and the next observation is processed.
Unfortunately, such an approach can result in too small sample sets, causing the filter to
diverge [1, 2].
In this paper we introduce real-time particle filters (RTPF) to deal with constraints imposed
by limited computational resources. Instead of discarding sensor readings, we distribute the
Figure 1: Different strategies for dealing with limited computational power. All approaches process
the same number of samples per estimation interval (window size three). (a) Skip observations, i.e.
integrate only every third observation. (b) Aggregate observations within a window and integrate them
in one step. (c) Reduce sample set size so that each observation can be considered.
samples among the different observations arriving during a filter update. Hence RTPF represents densities over the state space by mixtures of sample sets, thereby avoiding the problem
of filter divergence due to an insufficient number of independent samples. The weights of the
mixture components are computed so as to minimize the approximation error introduced by
the mixture representation. The resuling approach naturally focuses computational resources
(samples) on valuable sensor information.
The remainder of this paper is organized as follows: In the next section we outline the basics
of particle filters in the context of real-time constraints. Then, in Section 3, we introduce our
novel technique to real-time particle filters. Finally, we present experimental results followed
by a discussion of the properties of RTPF.
2 Particle filters
Particle filters are a sample-based variant of Bayes filters, which recursively estimate posterior densities, or beliefs Bel(x_t), over the state x_t of a dynamical system (see [1, 3] for details):

  Bel(x_t) = \alpha \, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}    (1)
Here z_t is a sensor measurement and u_{t-1} is control information measuring the dynamics of
the system. Particle filters represent beliefs by sets of weighted samples \{ \langle x^{(i)}, w^{(i)} \rangle \mid i = 1, \dots, N \}. Each
x^{(i)} is a state, and the w^{(i)} are non-negative numerical factors called importance weights,
which sum up to one. The basic form of the particle filter realizes the recursive Bayes filter
according to a sampling procedure, often referred to as sequential importance sampling with
resampling (SISR):
1. Resampling: Draw with replacement a random state x from the set \{ x^{(i)} \} according to the
(discrete) distribution defined through the importance weights w^{(i)}.
2. Sampling: Use x and the control information u_{t-1} to sample x' according to the
distribution p(x' \mid x, u_{t-1}), which describes the dynamics of the system.
3. Importance sampling: Weight the sample x' by the observation likelihood p(z_t \mid x').
Each iteration of these three steps generates a sample x' representing the posterior.
After N iterations, the importance weights of the samples are normalized so that they sum up
to one. Particle filters can be shown to converge to the true posterior even in non-Gaussian,
non-linear dynamic systems [4].
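As an illustration, the three SISR steps can be written as a short Python sketch. The helper names `dynamics_sample` (samples from p(x' | x, u)) and `obs_likelihood` (evaluates p(z | x')) are hypothetical placeholders for the motion and sensor models; this is a minimal sketch, not the authors' implementation.

```python
import math
import random

def sisr_update(particles, weights, control, observation,
                dynamics_sample, obs_likelihood):
    """One SISR update: resample, propagate, and reweight N particles.

    particles, weights -- current sample set (weights sum to one)
    dynamics_sample(x, u) -> x'  samples from p(x' | x, u)
    obs_likelihood(z, x') -> p(z | x')
    """
    n = len(particles)
    new_particles, new_weights = [], []
    for _ in range(n):
        # 1. Resampling: draw a state according to the importance weights.
        x = random.choices(particles, weights=weights, k=1)[0]
        # 2. Sampling: propagate the state through the system dynamics.
        x_new = dynamics_sample(x, control)
        # 3. Importance sampling: weight by the observation likelihood.
        new_particles.append(x_new)
        new_weights.append(obs_likelihood(observation, x_new))
    # Normalize the importance weights so they sum up to one.
    total = sum(new_weights)
    new_weights = [w / total for w in new_weights]
    return new_particles, new_weights
```

Calling `sisr_update` once per incoming observation gives the standard filter loop that the real-time discussion below starts from.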
A typical assumption underlying particle filters is that all samples can be updated whenever
new sensor information arrives. Under realtime conditions, however, it is possible that the
update cannot be completed before the next sensor measurement arrives. This can be the
case for computationally complex sensor models or whenever the underlying posterior requires large sample sets [2]. The majority of filtering approaches deals with this problem by
skipping sensor information that arrives during the update of the filter. While this approach
works reasonably well in many situations, it is prone to miss valuable sensor information.
Figure 2: Real-time particle filters. The samples are distributed among the observations within one
estimation interval (window size three in this example). The belief is a mixture of the individual sample
sets. Each arrow additionally represents the system dynamics.
Before we discuss ways of dealing with such situations, let us introduce some notation. We
assume that observations arrive at time intervals \Delta t, which we will call observation intervals.
Let N be the number of samples required by the particle filter. Assume that the resulting
update cycle of the particle filter takes k \Delta t and is called the estimation interval or estimation
window. Accordingly, k observations arrive during one estimation interval. We call this
number k the window size of the filter, i.e. the number of observations obtained during a filter
update. The i-th observation and state within window t are denoted z_{t_i} and x_{t_i}, respectively.
Fig. 1 illustrates different approaches to dealing with window sizes larger than one. The simplest and most common approach is shown in Fig. 1(a). Here, observations arriving during
the update of the sample set are discarded, which has the obvious disadvantage that valuable
sensor information might get lost. The approach in Fig. 1(b) overcomes this problem by aggregating multiple observations into one. While this technique avoids the loss of information,
it is not applicable to arbitrary dynamical systems. For example, it assumes that observations
can be aggregated optimally, and that the integration of an aggregated observation can be
performed as efficiently as the integration of individual observations, which is often not the
case. The third approach, shown in Fig. 1(c), simply stops generating new samples whenever
an observation is made (hence each sample set contains only N/k samples). While this approach takes advantage of the any-time capabilities of particle filters, it is susceptible to filter
divergence due to an insufficient number of samples [2, 1].
3 Real-time particle filters
In this paper we propose real-time particle filters (RTPFs), a novel approach to dealing with
limited computational resources. The key idea of RTPFs is to consider all sensor measurements by distributing the samples among the observations within an update window. Additionally, by weighting the different sample sets within a window, our approach focuses the
computational resources (samples) on the most valuable observations. Fig. 2 illustrates the
approach. As can be seen, instead of one sample set at time t, we maintain k smaller sample
sets at t_1, \dots, t_k. We treat such a "virtual sample set", or belief, as a mixture of the distributions represented in it. The mixture components represent the state of the system at different
points in time. If needed, however, the complete belief can be generated by considering the
dynamics between the individual mixture components.
Compared to the first approach discussed in the previous section, this method has the advantage of not skipping any observations. In contrast to the approach shown in Fig. 1(b),
RTPFs do not make any assumptions about the nature of the sensor data, i.e. whether it can
be aggregated or not. The difference to the third approach (Fig. 1(c)) is more subtle. In both
approaches, each of the sample sets can only contain N/k samples. The belief state that
is propagated by RTPF to the next estimation interval is a mixture distribution where each
mixture component is represented by one of the sample sets, all generated independently
from the previous window. Thus, the belief state propagation is simulated by N sample trajectories, which for computational convenience are represented at the points in time where the
observations are integrated. In the approach (c), however, the belief propagation is simulated
with only N/k independent samples.
We will now show how RTPF determines the weights of the mixture belief. The key idea is
to choose the weights that minimize the KL-divergence between the mixture belief and the
optimal belief. The optimal belief is the belief we would get if there was enough time to
compute the full posterior within the update window.
3.1 Mixture representation
Let us restrict our attention to one estimation interval consisting of k observations. The optimal belief Bel_{opt}(x_{t_k}) at the end of an estimation window results from iterative application
of the Bayes filter update on each observation [3]:

  Bel_{opt}(x_{t_k}) = \alpha \int \cdots \int \prod_{i=1}^{k} p(z_{t_i} \mid x_{t_i}) \, p(x_{t_i} \mid x_{t_{i-1}}, u_{t_{i-1}}) \, Bel(x_{t_0}) \, dx_{t_0} \cdots dx_{t_{k-1}}    (2)
Here Bel(x_{t_0}) denotes the belief generated in the previous estimation window. In essence,
(2) computes the belief by integrating over all trajectories through the estimation interval,
where the start position of the trajectories is drawn from the previous belief Bel(x_{t_0}). The
probability of each trajectory is determined using the control information u_{t_0}, \dots, u_{t_{k-1}},
and the likelihoods of the observations z_{t_1}, \dots, z_{t_k} along the trajectory. Now let Bel_i(x_{t_k})
denote the belief resulting from integrating only the i-th observation within the estimation
interval. RTPF computes a mixture of such beliefs, one for each observation. The mixture,
denoted Bel_{mix}(x_{t_k}), is the weighted sum of the mixture components Bel_i, where
\alpha_1, \dots, \alpha_k denotes the mixture weights:

  Bel_{mix}(x_{t_k}) = \sum_{i=1}^{k} \alpha_i \, Bel_i(x_{t_k}),  with
  Bel_i(x_{t_k}) = \alpha \int \cdots \int p(z_{t_i} \mid x_{t_i}) \prod_{j=1}^{k} p(x_{t_j} \mid x_{t_{j-1}}, u_{t_{j-1}}) \, Bel(x_{t_0}) \, dx_{t_0} \cdots dx_{t_{k-1}}    (3)
where \alpha_i \geq 0 and \sum_{i=1}^{k} \alpha_i = 1. Here, too, we integrate over all trajectories. In contrast to
(2), however, each Bel_i selectively integrates only one of the observations along the trajectory within the
estimation interval1.
3.2 Optimizing the mixture weights
We will now turn to the problem of finding the weights of the mixture. These weights reflect
the "importance" of the respective observations for describing the optimal belief. The idea is
to set them so as to minimize the approximation error introduced by the mixture distribution.
More formally, we determine the mixing weights \alpha_1, \dots, \alpha_k by minimizing the KL-divergence [5]
between Bel_{opt}(x_{t_k}) and Bel_{mix}(x_{t_k}):

  \hat{\alpha} = \arg\min_{\alpha \in \Delta} KL\big( Bel_{opt}(x_{t_k}) \,\big\|\, \sum_{i=1}^{k} \alpha_i Bel_i(x_{t_k}) \big)    (4)
             = \arg\min_{\alpha \in \Delta} - \int Bel_{opt}(x_{t_k}) \log \Big( \sum_{i=1}^{k} \alpha_i Bel_i(x_{t_k}) \Big) \, dx_{t_k}    (5)
In the above, \Delta = \{ \alpha : \alpha_i \geq 0, \sum_i \alpha_i = 1 \} denotes the domain of admissible weights. Optimizing the weights of mixture
approximations can be done using EM [6] or (constrained) gradient descent [7]. Here, we
perform a small number of gradient descent steps to find the mixture weights. Denote by
1 Note that typically the individual predictions can be "concatenated" so that only two predictions for each trajectory have to be performed, one before and one after the corresponding observation.
J(\alpha) the criterion to be minimized in (5). The gradient of J is given by

  \frac{\partial J}{\partial \alpha_i} = - \int Bel_{opt}(x_{t_k}) \, \frac{Bel_i(x_{t_k})}{\sum_{j=1}^{k} \alpha_j Bel_j(x_{t_k})} \, dx_{t_k}    (6)

The start point for the gradient descent is chosen to be the center of the weight domain \Delta,
that is \alpha_i = 1/k for i = 1, \dots, k.
3.3 Monte Carlo gradient estimation
The exact computation of the gradients in (6) requires the computation of the different beliefs, each in turn requiring several particle filter updates (see (2), (3)), and integration over
all states x_{t_k}. This is clearly not feasible in our case. We solve this problem by Monte Carlo
approximation. The approach is based on the observation that the beliefs in (6) share the same
trajectories through space and differ only in the observations they integrate. Therefore, we
first generate sample trajectories through the estimation window without considering the observations, and then use importance sampling to generate the beliefs needed for the gradient
estimation. Trajectory generation is done as follows: we draw a sample from a sample
set of the previous mixture belief, where the probability of choosing a set is given by the
mixture weights \alpha_i. This sample is then moved forward in time by consecutively drawing
samples from the distributions p(x_{t_j} \mid x_{t_{j-1}}, u_{t_{j-1}}) at each time step t_j.
The resulting trajectories are drawn from the following proposal distribution q:

  q(x_{t_0}, \dots, x_{t_k}) = \prod_{j=1}^{k} p(x_{t_j} \mid x_{t_{j-1}}, u_{t_{j-1}}) \, Bel(x_{t_0})    (7)
1 !
Using importance sampling, we obtain sample-based estimates of
and
by simply
weighting each trajectory with
or
, respectively (compare
(2) and (3)).
is generated with minimal computational
overhead by averaging the
#
weights computed for the individual
distributions. The use of the same trajectories for
all distributions has the advantage that it is highly efficient and that it reduces the variance
of the gradient estimate. This variance reduction is due to using the same random bits in
evaluating the diverse scenarios of incorporating one or another of the observations [8].
1 *
*
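This trajectory machinery might be sketched as follows: trajectories are drawn from an observation-free proposal (start states from the previous mixture belief, then pure dynamics), and each trajectory is then weighted once per single observation and once with the product of all observation likelihoods. All function and variable names are illustrative, and the stratified grouping discussed next is omitted.

```python
import math
import random

def sample_trajectories(prev_sets, prev_alphas, controls,
                        dynamics_sample, n_traj):
    """Draw trajectories from the proposal: start states from the
    previous mixture belief, then pure dynamics (no observations)."""
    trajectories = []
    for _ in range(n_traj):
        # Pick a previous sample set by mixture weight, then a particle.
        j = random.choices(range(len(prev_sets)),
                           weights=prev_alphas, k=1)[0]
        ps, ws = prev_sets[j]
        x = random.choices(ps, weights=ws, k=1)[0]
        traj = [x]
        for u in controls:
            x = dynamics_sample(x, u)
            traj.append(x)
        trajectories.append(traj)
    return trajectories

def trajectory_weights(trajectories, observations, obs_likelihood):
    """Per-trajectory importance weights:
    bel_parts[i][s] ~ p(z_i | x_i of trajectory s)   (one observation)
    bel_opt[s]      ~ product over i of the above    (all observations)
    Each weight vector is normalized over the trajectories."""
    k = len(observations)
    bel_parts = [[obs_likelihood(observations[i], traj[i + 1])
                  for traj in trajectories] for i in range(k)]
    bel_opt = [math.prod(bel_parts[i][s] for i in range(k))
               for s in range(len(trajectories))]

    def normalize(v):
        total = sum(v)
        return [x / total for x in v]

    return normalize(bel_opt), [normalize(p) for p in bel_parts]
```

The two returned weight arrays are exactly the sample-based estimates needed to evaluate the gradient expression (6).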
Further variance reduction is achieved by using stratified sampling on trajectories. The trajectories are grouped by determining connected regions in a grid over the state space (at time
). Neighboring cells are considered connected if both contain samples. To compute the
gradients by formula (6), we then perform summation and normalization over the grouped
trajectories. Empirical evaluations showed that this grouping greatly reduces the number of
trajectories needed to get smooth gradient estimates. An additional, very important benefit
of grouping is the reduction of the bias due to different dynamics applied to the different
sample sets in the estimation window. In our experiments the number of trajectories is less
than of the total number of samples, resulting in a computational overhead of about 1%
of the total estimation time.
To summarize, the RTPF algorithm works as follows. The number of independent samples
needed to represent the belief, the update rate of incoming sensor data, and the available
processing power determine the size of the estimation window and hence the number of
mixture components. RTPF computes the optimal weights of the mixture distribution at the
end of each estimation window. This is done by gradient descent using the Monte Carlo estimates of the gradients. The resulting weights are used to generate samples for the individual
sample sets of the next estimation window. To do so, we keep track of the control information
(dynamics) between the different sample sets of two consecutive windows.
[Figure 3: map of a 54m x 18m office environment]
Fig. 3: Map of the environment used for the experiment. The robot was moved around the symmetric
loop on the left. The task of the robot was to determine its position using data collected by two distance
measuring devices, one pointing to its left, the other pointing to its right.
4 Experiments
In this section we evaluate the effectiveness of RTPF against the alternatives, using data
collected from a mobile robot in a real-world environment. Figure 3 shows the setup of
the experiment: The robot was placed in the office floor and moved around the loop on the
left. The task of the robot was to determine its position within the map, using data collected
by two laser-beams, one pointing to its left, the other pointing to its right. The two laser
beams were extracted from a planar laser range-finder, allowing the robot only to determine
the distance to the walls on its left and right. Between each observation the robot moved
approximately 50cm (see [3] for details on robot localization and sensor models). Note that
the loop in the environment is symmetric except for a few ?landmarks? along the walls of
the corridor. Localization performance was measured by the average distance between the
samples and the reference robot positions, which were computed offline.
In the experiments, our real-time algorithm, RTPF, is compared to particle filters with skipping observations, called "Skip data" (Figure 1a), and particle filters with insufficient samples, called "Naive" (Figure 1c). Furthermore, to gauge the efficiency of our mixture weighting, we also obtained results for our real-time algorithm without weighting, i.e. we used
the mixture distributions and fixed the weights to 1/k. We denote this variant "Uniform". Finally, we also include as reference the "Baseline" approach, which is allowed to generate
N samples for each observation, thereby not considering real-time constraints.
The experiment is set up as follows. First, we fix the sample set size N which is sufficient
for the robot to localize itself. In our experiment N is set empirically to 20,000 (the particle
filters may fail at lower N, see also [2]). We then vary the computational resources, resulting
in different window sizes k. Larger window size means lower computational power, and the
number of samples that can be generated for each observation decreases to N/k.
Figure 4 shows the evolutions of average localization errors over time, using different window sizes. Each graph is obtained by averaging over 30 runs with different random seeds
and start positions. The error bars indicate 95% confidence intervals. As the figures show,
"Naive" gives the worst results, which is due to insufficient numbers of samples, resulting in
divergence of the filter. While "Uniform" performs slightly better than "Skip data", RTPF is
the most effective of all algorithms, localizing the robot in the least amount of time. Furthermore, RTPF shows the least degradation with limited computational power (larger window
sizes). The key advantage of RTPF over "Uniform" lies in the mixture weighting, which
allows our approach to focus computational resources on valuable sensor information, for
example when the robot passes an informative feature in one of the hallways. For short
window sizes (Fig. 4(a)), this advantage is not very strong since in this environment, most
features can be detected in several consecutive sensor measurements. Note that because the
"Baseline" approach was allowed to integrate all observations with all of the 20,000 samples,
it converges to a lower error level than all the other approaches.
[Figure 4: (a)-(c) plots of average localization error [cm] versus time [sec] for the Baseline, Skip data, RTPF, Naive, and Uniform approaches at window sizes 4, 8, and 12; (d) localization speedup versus window size.]
Fig. 4(a)-(c): Performance of the different algorithms for window sizes of 4, 8, and 12 respectively.
The x-axis represents time elapsed since the beginning of the localization experiment. The y-axis plots
the localization error measured in average distance from the reference position. Each figure includes
the performance achieved with unlimited computational power as the "Baseline" graph. Each point
is averaged over 30 runs, and error bars indicate 95% confidence intervals. Fig. 4(d) represents the
localization speedup of RTPF over "Skip data" for various window sizes. The advantage of RTPF
increases with the difficulty of the task, i.e. with increasing window size. Between window size 6 and
12, RTPF localizes at least twice as fast as "Skip data".
Without the mixture weighting of RTPF, we did not expect "Uniform" to outperform "Skip data"
significantly. To see this, consider one estimation window of length k. Suppose only one of
the k observations detects a landmark, or very informative feature in the hallway. In such
a situation, "Uniform" considers this landmark every time the robot passes it. However, it
only assigns N/k samples to this landmark detection. "Skip data" on the other hand, detects
the landmark only every k-th time, but assigns all N samples to it. Therefore, averaged over
many different runs, the mean performance of "Uniform" and "Skip data" is very similar.
However, the variance of the error is significantly lower for "Uniform" since it considers
the detection in every run. In contrast to both approaches, RTPF detects all landmarks and
generates more samples for the landmark detections, thereby gaining the best of both worlds,
and Figures 4(a)-(c) show this is indeed the case.
In Figure 4(d) we summarize the performance gain of RTPF over "Skip data" for different
window sizes in terms of localization time. We considered the robot to be localized if the
average localization error remains below 200 cm over a period of 10 seconds. If the run
never reaches this level, the localization time is set to the length of the entire run, which is
574 seconds. The x-axis represents the window size and the y-axis the localization speedup.
For each window size speedups were determined using t-tests on the localization times for
the 30 pairs of data runs. All results are significant at the 95% level. The graph shows that
with increasing window size (i.e. decreasing processing power), the localization speedup
increases. At small window sizes the speedup is 20-50%, but it goes up to 2.7 times for
larger windows, demonstrating the benefits of the RTPF approach over traditional particle
filters. Ultimately, for very large window sizes, the speedup decreases again, which is due to
the fact that none of the approaches is able to reduce the error below 200cm within the run
time of an experiment.
5 Conclusions
In this paper we tackled the problem of particle filtering under the constraint of limited computing resources. Our approach makes near-optimal use of sensor information by dividing
sample sets between all available observations and then representing the state as a mixture of
sample sets. Next we optimize the mixing weights in order to be as close to the true posterior
distribution as possible. Optimization is performed efficiently by gradient descent using a
Monte Carlo approximation of the gradients.
We showed that RTPF produces significant performance improvements in a robot localization
task. The results indicate that our approach outperforms all alternative methods for dealing
with limited computation. Furthermore, RTPF localized the robot more than 2.7 times faster
than the original particle filter approach, which skips sensor data. Based on these results, we
expect our method to be highly valuable in a wide range of real-time applications of particle
filters. RTPF yields maximal performance gain for data streams containing highly valuable
sensor data occuring at unpredictable time points.
The idea of approximating belief states by mixtures has also been used in the context of
dynamic Bayesian networks [9]. However, Boyen and Koller use mixtures to represent belief
states at a specific point in time, not over multiple time steps. Our work is motivated by
real-time constraints that are not present in [9].
So far RTPF uses fixed sample sizes and fixed window sizes. The next natural step is to adapt
these two "structural parameters" to further speed up the computation. For example, by the
method of [2] we can change the sample size on-the-fly, which in turn allows us to change the
window size. Ongoing experiments suggest that this combination yields further performance
improvements: When the state uncertainty is high, many samples are used and these samples
are spread out over multiple observations. On the other hand, when the uncertainty is low,
the number of samples is very small and RTPF becomes identical to the vanilla particle filter
with one update (sample set) per observation.
6 Acknowledgements
This research is sponsored in part by the National Science Foundation (CAREER grant number 0093406) and by DARPA (MICA program).
References
[1] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo in Practice. SpringerVerlag, New York, 2001.
[2] D. Fox. KLD-sampling: Adaptive particle filters and mobile robot localization. In Advances in
Neural Information Processing Systems (NIPS), 2001.
[3] D. Fox, S. Thrun, F. Dellaert, and W. Burgard. Particle filters for mobile robot localization. In
Doucet et al. [1].
[4] P. Del Moral and L. Miclo. Branching and interacting particle systems approximations of feynamkac formulae with applications to non linear filtering. In Seminaire de Probabilites XXXIV, number
1729 in Lecture Notes in Mathematics. Springer-Verlag, 2000.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. Wiley, New York, 1991.
[6] W. Poland and R. Shachter. Mixtures of Gaussians and minimum relative entropy techniques
for modeling continuous uncertainties. In Proc. of the Conference on Uncertainty in Artificial
Intelligence (UAI), 1993.
[7] T. Jaakkola and M. Jordan. Improving the mean field approximation via the use of mixture distributions. In Learning in Graphical Models. Kluwer, 1997.
[8] P. R. Cohen. Empirical methods for artificial intelligence. MIT Press, 1995.
[9] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. of the
Conference on Uncertainty in Artificial Intelligence (UAI), 1998.
1,436 | 2,306 | Using Manifold Structure for Partially
Labelled Classification
Mikhail Belkin
University of Chicago
Department of Mathematics
misha@math.uchicago.edu
Partha Niyogi
University of Chicago
Depts of Computer Science and Statistics
[email protected]
Abstract
We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than the total ambient space. Using the Laplace Beltrami operator one produces a basis for a Hilbert space of square integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace Beltrami operator by the graph Laplacian. Practical applications to image and text classification are considered.
1 Introduction
In many practical applications of data classification and data mining , one finds a
wealth of easily available unlabeled examples , while collecting labeled examples can
be costly and time-consuming . Standard examples include object recognition in images, speech recognition, classifying news articles by topic. In recent times , genetics
has also provided enormous amounts of readily accessible data. However, classification of this data involves experimentation and can be very resource intensive.
Consequently it is of interest to develop algorithms that are able to utilize both
labeled and unlabeled data for classification and other purposes. Although the area
of partially labeled classification is fairly new, a considerable amount of work has been done in that field since the early 90's, see [2, 4, 7]. In this paper we address the problem of classifying a partially labeled set by developing the ideas proposed in [1] for data representation. In particular, we exploit the intrinsic structure of the data to improve classification with unlabeled examples under the assumption
that the data resides on a low-dimensional manifold within a high-dimensional representation space. In some cases it seems to be a reasonable assumption that the
data lies on or close to a manifold. For example a handwritten digit 0 can be
fairly accurately represented as an ellipse , which is completely determined by the
coordinates of its foci and the sum of the distances from the foci to any point.
Thus the space of ellipses is a five-dimensional manifold. An actual handwritten 0
would require more parameters, but perhaps not more than 15 or 20. On the other
hand the dimensionality of the ambient representation space is the number of pixels
which is typically far higher. For other types of data the question of the manifold
structure seems significantly more involved. While there has been recent work on
using manifold structure for data representation ([6 , 8]), the only other application
to classification problems that we are aware of, was in [7] , where the authors use a
random walk on the data adjacency graph for partially labeled classification.
2 Why Manifold Structure is Useful for Partially Supervised Learning
To provide a motivation for using a manifold structure, consider a simple synthetic
example shown in Figure 1. The two classes consist of two parts of the curve shown in the first panel (row 1). We are given a few labeled points and 500 unlabeled points shown in panels 2 and 3 respectively. The goal is to establish the identity of the point labeled with a question mark. By observing the picture in panel 2 (row 1)
On the other hand, the problem seems much more feasible given the unlabeled
data shown in panel 3. Since there is an underlying manifold, it seems clear at
the outset that the (geodesic) distances along the curve are more meaningful than
Euclidean distances in the plane. Therefore rather than building classifiers defined
on the plane ($\mathbb{R}^2$) it seems preferable to have classifiers defined on the curve itself.
Even though the data has an underlying manifold, the problem is still not quite
trivial since the two different parts of the curve come confusingly close to each
other. There are many possible potential representations of the manifold and the
one provided by the curve itself is unsatisfactory. Ideally, we would like to have a
representation of the data which captures the fact that it is a closed curve. More
specifically, we would like an embedding of the curve where the coordinates vary
as slowly as possible when one traverses the curve. Such an ideal representation
is shown in the panel 4 (first panel of the second row). Note that both represent
the same underlying manifold structure but with different coordinate functions. It
turns out (panel 6) that by taking a two-dimensional representation of the data
with Laplacian Eigenmaps [1] , we get very close to the desired embedding. Panel 5
shows the locations of labeled points in the new representation space. We see that
"?" now falls squarely in the middle of "+" signs and can easily be identified as a
"+".
This artificial example illustrates that recovering the manifold and developing classifiers on the manifold itself might give us an advantage in classification problems.
To recover the manifold, all we need is unlabeled data. The labeled data is then
used to develop a classifier defined on this manifold. However we need a model for
the manifold to utilize this structure. The model used here is that of a weighted
graph whose vertices are data points. Two data points are connected with an edge if
Figure 1: Top row: Panel 1. Two classes on a plane curve. Panel 2. Labeled examples. "?" is a point to be classified. Panel 3. 500 random unlabeled examples. Bottom row: Panel 4. Ideal representation of the curve. Panel 5. Positions of labeled points and "?" after applying eigenfunctions of the Laplacian. Panel 6. Positions of all examples.
and only if the points are sufficiently close. To each edge we can associate a distance
between the corresponding points. The "geodesic distance" between two vertices is
the length of the shortest path between them on the adjacency graph. Once we set
up an approximation to the manifold, we need a method to exploit the structure
of the model to build a classifier. One possible simple approach would be to use
the "geodesic nearest neighbors" . However , while simple and well-motivated , this
method is potentially unstable. A related more sophisticated method based on a
random walk on the adjacency graph is proposed in [7]. We also note the approach
taken in [2] which uses mincuts of certain graphs for partially labeled classifications.
Our approach is based on the Laplace-Beltrami operator defined on Riemannian
manifolds (see [5]). The eigenfunctions of the Laplace Beltrami operator provide a
natural basis for functions on the manifold and the desired classification function
can be expressed in such a basis. The Laplace Beltrami operator can be estimated using unlabeled examples alone and the classification function is then approximated using the labeled data. In the next two sections we describe our algorithm and the
theoretical underpinnings in some detail.
3 Description of the Algorithm
Given $k$ points $x_1, \ldots, x_k \in \mathbb{R}^N$, we assume that the first $s < k$ points have labels $c_i$, where $c_i \in \{-1, 1\}$, and the rest are unlabeled. The goal is to label the unlabeled points. We also introduce a straightforward extension of the algorithm for the case of more than two classes.
Step 1 [Constructing the Adjacency Graph with n nearest neighbors]. Nodes $i$ and $j$ corresponding to the points $x_i$ and $x_j$ are connected by an edge if $i$ is among the $n$ nearest neighbors of $j$ or $j$ is among the $n$ nearest neighbors of $i$. The distance can be the standard Euclidean distance in $\mathbb{R}^N$ or some other appropriately defined distance. We take $W_{ij} = 1$ if points $x_i$ and $x_j$ are connected and $W_{ij} = 0$ otherwise. For a discussion about the appropriate choice of weights, and connections to the heat kernel see [1].
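As a concrete illustration, Step 1 can be sketched in a few lines of NumPy (a brute-force neighbor search; the function and variable names are ours, not the paper's):

```python
import numpy as np

def knn_adjacency(X, n_neighbors):
    """0/1 adjacency: W[i, j] = 1 iff i is among the n nearest
    neighbors of j or j is among the n nearest neighbors of i."""
    k = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)       # a point is not its own neighbor
    W = np.zeros((k, k))
    for i in range(k):
        W[i, np.argsort(d2[i])[:n_neighbors]] = 1
    return np.maximum(W, W.T)          # symmetrize the "or" condition
```

For the 60000-point experiments below a sparse representation would be needed; this dense version only illustrates the construction.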
Step 2. [Eigenfunctions] Compute $p$ eigenvectors $e_1, \ldots, e_p$ corresponding to the $p$ smallest eigenvalues for the eigenvector problem $Le = \lambda e$, where $L = D - W$ is the graph Laplacian for the adjacency graph. Here $W$ is the adjacency matrix defined above and $D$ is a diagonal matrix of the same size as $W$ satisfying $D_{ii} = \sum_j W_{ij}$. The Laplacian is a symmetric, positive semidefinite matrix which can be thought of as an operator on functions defined on vertices of the graph.
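Step 2 is a symmetric eigenproblem for L = D - W; a minimal dense sketch (for large sparse adjacency matrices one would use a sparse eigensolver instead):

```python
import numpy as np

def laplacian_eigenvectors(W, p):
    """Return the p smallest eigenvalues of L = D - W and
    the corresponding eigenvectors (one column each)."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvals[:p], eigvecs[:, :p]
```

On a connected graph the smallest eigenvalue is 0 with a constant eigenvector, matching the discussion in Section 4.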
Step 3. [Building the classifier] To approximate the class we minimize the error function $\mathrm{Err}(a) = \sum_{i=1}^{s} \big(c_i - \sum_{j=1}^{p} a_j e_j(i)\big)^2$, where $p$ is the number of eigenfunctions we wish to employ, the sum is taken over all labeled points and the minimization is considered over the space of coefficients $a = (a_1, \ldots, a_p)^T$. The solution is given by
$$a = (E_{\mathrm{lab}}^T E_{\mathrm{lab}})^{-1} E_{\mathrm{lab}}^T c$$
where $c = (c_1, \ldots, c_s)^T$ and $E_{\mathrm{lab}}$ is an $s \times p$ matrix whose $i,j$ entry is $e_j(i)$. For the case of several classes, we build a one-against-all classifier for each individual class.
Step 4. [Classifying unlabeled points] If $x_i$, $i > s$, is an unlabeled point we put
$$c_i = \begin{cases} \phantom{-}1, & \text{if } \sum_{j=1}^{p} a_j e_j(i) \geq 0, \\ -1, & \text{otherwise.} \end{cases}$$
This, of course, is just applying the linear classifier constructed in Step 3. If there are several classes, one-against-all classifiers compete using $\sum_{j=1}^{p} a_j e_j(i)$ as a confidence measure.
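Steps 3 and 4 amount to one least-squares fit on the labeled rows of the eigenvector matrix followed by a sign decision; a sketch (we use `lstsq` rather than forming the inverse explicitly, which is equivalent but numerically safer):

```python
import numpy as np

def fit_and_classify(eigvecs, labeled_idx, c):
    """eigvecs: (k, p) matrix, row i holds e_1(i), ..., e_p(i);
    labeled_idx: indices of the s labeled points; c: their +/-1 labels.
    Returns predicted +/-1 labels for all k points."""
    E_lab = eigvecs[labeled_idx]                   # s x p
    a, *_ = np.linalg.lstsq(E_lab, c, rcond=None)  # minimizes ||E_lab a - c||^2
    scores = eigvecs @ a                           # sum_j a_j e_j(i) for each i
    return np.where(scores >= 0, 1, -1)
```

Chaining this with the two sketches above gives the complete pipeline: adjacency graph, Laplacian eigenvectors, least-squares readout, sign.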
4 Theoretical Interpretation
Let $M \subset \mathbb{R}^k$ be an $n$-dimensional compact Riemannian manifold isometrically embedded in $\mathbb{R}^k$ for some $k$. Intuitively $M$ can be thought of as an $n$-dimensional "surface" in $\mathbb{R}^k$. The Riemannian structure on $M$ induces a volume form that allows us to integrate functions defined on $M$. The square integrable functions form a Hilbert space $\mathcal{L}^2(M)$. The Laplace-Beltrami operator $\Delta_M$ (or just $\Delta$) acts on twice differentiable functions on $M$. There are three important points that are relevant to our discussion here.
The Laplacian provides a basis on $\mathcal{L}^2(M)$:
It can be shown (e.g., [5]) that $\Delta$ is a self-adjoint positive semidefinite operator and that its eigenfunctions form a basis for the Hilbert space $\mathcal{L}^2(M)$. The spectrum of $\Delta$ is discrete (provided $M$ is compact), with the smallest eigenvalue $0$ corresponding to the constant eigenfunction. Therefore any $f \in \mathcal{L}^2(M)$ can be written as $f(x) = \sum_{i=0}^{\infty} a_i e_i(x)$, where $e_i$ are eigenfunctions, $\Delta e_i = \lambda_i e_i$.
The simplest nontrivial example is a circle $S^1$: $\Delta_{S^1} f(\phi) = -\frac{d^2 f}{d\phi^2}$. Therefore the eigenfunctions are given by $-\frac{d^2 e}{d\phi^2} = \lambda e(\phi)$, where $e(\phi)$ is a $2\pi$-periodic function. It is easy to see that all eigenfunctions of $\Delta$ are of the form $e(\phi) = \sin(n\phi)$ or $e(\phi) = \cos(n\phi)$ with eigenvalues $\{1^2, 2^2, \ldots\}$. Therefore, we see that any $2\pi$-periodic $\mathcal{L}^2$ function $f$ has a convergent Fourier series expansion given by $f(\phi) = \sum_{n=0}^{\infty} a_n \sin(n\phi) + b_n \cos(n\phi)$. In general, for any manifold $M$, the eigenfunctions of the Laplace-Beltrami operator provide a natural basis for $\mathcal{L}^2(M)$. However $\Delta$ provides more than just a basis, it also yields a measure of smoothness for functions on the manifold.
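The discrete counterpart of this circle example is the ring graph, whose Laplacian eigenvalues are known to be $2 - 2\cos(2\pi m/n)$ with sinusoidal eigenvectors; a small numerical check (our own illustration, not from the paper):

```python
import numpy as np

n = 8
W = np.zeros((n, n))
for i in range(n):                       # ring: vertex i joined to i+1 (mod n)
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1
L = np.diag(W.sum(axis=1)) - W

vals = np.sort(np.linalg.eigvalsh(L))
expected = np.sort([2 - 2 * np.cos(2 * np.pi * m / n) for m in range(n)])
```

Each nonzero eigenvalue appears twice, for the sine and cosine pair, mirroring the $\sin(n\phi)$, $\cos(n\phi)$ pairs on $S^1$.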
The Laplacian as a smoothness functional:
A simple measure of the degree of smoothness for a function $f$ on the unit circle $S^1$ is the "smoothness functional" $S(f) = \int_{S^1} |f'(\phi)|^2 \, d\phi$. If $S(f)$ is close to zero, we think of $f$ as being "smooth". Naturally, constant functions are the most "smooth". Integration by parts yields
$$S(f) = \int_{S^1} f'(\phi) \, df = \int_{S^1} f \, \Delta f \, d\phi = \langle \Delta f, f \rangle_{\mathcal{L}^2(S^1)}.$$
In general, if $f : M \to \mathbb{R}$, then
$$S(f) \stackrel{\mathrm{def}}{=} \int_M |\nabla f|^2 \, d\mu = \int_M f \, \Delta f \, d\mu = \langle \Delta f, f \rangle_{\mathcal{L}^2(M)}$$
where $\nabla f$ is the gradient vector field of $f$. If the manifold is $\mathbb{R}^n$ then $\nabla f = \sum_i \frac{\partial f}{\partial x_i} \frac{\partial}{\partial x_i}$. In general, for an $n$-manifold, the expression in a local coordinate chart involves the coefficients of the metric tensor. Therefore the smoothness of a unit norm eigenfunction $e_i$ of $\Delta$ is controlled by the corresponding eigenvalue $\lambda_i$ since $S(e_i) = \langle \Delta e_i, e_i \rangle_{\mathcal{L}^2(M)} = \lambda_i$. For an arbitrary $f = \sum_i \alpha_i e_i$, we can write $S(f)$ as $S(f) = \sum_i \lambda_i \alpha_i^2$.
A Reproducing Kernel Hilbert Space can be constructed from $S$. $\lambda_1 = 0$ is the smallest eigenvalue, for which the corresponding eigenfunction is the constant function $e_1$. It can also be shown that if $M$ is compact and connected there are no other eigenfunctions with eigenvalue $0$. Therefore approximating a function $f(x) \approx \sum_{i=1}^{p} a_i e_i(x)$ in terms of the first $p$ eigenfunctions of $\Delta$ is a way of controlling the smoothness of the approximation. The optimal approximation is obtained by minimizing the $\mathcal{L}^2$ norm of the error:
$$a = \operatorname*{argmin}_{a=(a_1,\ldots,a_p)} \int_M \Big( f(x) - \sum_{i=1}^{p} a_i e_i(x) \Big)^2 d\mu.$$
This approximation is given by a projection in $\mathcal{L}^2$ onto the span of the first $p$ eigenfunctions: $a_i = \int_M e_i(x) f(x) \, d\mu = \langle e_i, f \rangle_{\mathcal{L}^2(M)}$. In practice we only know the values of $f$ at a finite number of points $x_1, \ldots, x_n$ and therefore have to solve a discrete version of this problem:
$$\bar{a} = \operatorname*{argmin}_{a=(a_1,\ldots,a_p)} \sum_{i=1}^{n} \Big( f(x_i) - \sum_{j=1}^{p} a_j e_j(x_i) \Big)^2.$$
The solution to this standard least squares problem is given by $a^T = (E^T E)^{-1} E y^T$, where $E_{ij} = e_i(x_j)$ and $y = (f(x_1), \ldots, f(x_n))$.
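The closed-form normal-equations solution can be checked against a generic least-squares solver (random illustrative data, names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((10, 3))   # E[i, j] = e_j(x_i): 10 points, 3 basis functions
y = rng.standard_normal(10)        # observed values f(x_1), ..., f(x_10)

a_normal = np.linalg.solve(E.T @ E, E.T @ y)      # (E^T E)^{-1} E^T y
a_lstsq, *_ = np.linalg.lstsq(E, y, rcond=None)   # generic solver
```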
Connection with the Graph Laplacian:
As we are approximating a manifold with a graph, we need a suitable measure of smoothness for functions defined on the graph. It turns out that many of the concepts in the previous section have parallels in graph theory (e.g., see [3]). Let $G = (V, E)$ be a weighted graph on $n$ vertices. We assume that the vertices are numbered and use the notation $i \sim j$ for adjacent vertices $i$ and $j$. The graph Laplacian of $G$ is defined as $L = D - W$, where $W$ is the weight matrix and $D$ is a diagonal matrix, $D_{ii} = \sum_j W_{ji}$. $L$ can be thought of as an operator on functions defined on vertices of the graph. It is not hard to see that $L$ is a self-adjoint positive semidefinite operator. By the (finite dimensional) spectral theorem any function on $G$ can be decomposed as a sum of eigenfunctions of $L$. If we think of $G$ as a model for the manifold $M$, it is reasonable to assume that a function on $G$ is smooth if it does not change too much between nearby points. If $f = (f_1, \ldots, f_n)$ is a function on $G$, then we can formalize that intuition by defining the smoothness functional $S_G(f) = \sum_{i \sim j} W_{ij} (f_i - f_j)^2$. It is not hard to show that $S_G(f) = f L f^T = \langle f, Lf \rangle_G = \sum_{i=1}^{n} \lambda_i \langle f, e_i \rangle_G^2$, which is the discrete analogue of the integration by parts from the previous section. The inner product here is the usual Euclidean inner product on the vector space with coordinates indexed by the vertices of $G$, and $e_i$ are normalized eigenvectors of $L$: $L e_i = \lambda_i e_i$, $\|e_i\| = 1$. All eigenvalues are non-negative and the eigenfunctions corresponding to the smaller eigenvalues can be thought of as "more smooth". The smallest eigenvalue $\lambda_1 = 0$ corresponds to the constant eigenvector $e_1$.
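The identity between the edge-sum form of the smoothness functional and the quadratic form in $L$ is easy to verify numerically; with the sum taken over unordered pairs $i < j$ it matches $f L f^T$ exactly (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.random((n, n))
W = np.triu(A, 1) + np.triu(A, 1).T    # symmetric weights, zero diagonal
L = np.diag(W.sum(axis=1)) - W
f = rng.standard_normal(n)

# smoothness functional: sum over unordered pairs of w_ij (f_i - f_j)^2
S = sum(W[i, j] * (f[i] - f[j]) ** 2
        for i in range(n) for j in range(i + 1, n))
```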
5 Experimental Results

5.1 Handwritten Digit Recognition
We apply our techniques to the problem of optical character recognition. We use
the popular MNIST dataset which contains 28x28 grayscale images of handwritten
digits. 1 We use the 60000 image training set for our experiments. For all experiments we use 8 nearest neighbours to compute the adjacency matrix. The adjacency
matrices are very sparse which makes solving eigenvector problems for matrices as
big as 60000 by 60000 possible. For a particular trial, we fix the number of labeled
examples we wish to use. A random subset of the 60000 images is used with labels
to form the labeled set L. The rest of the images are used without labels to form the unlabeled data U. The classification results (for U) are averaged over 20 different
random draws for L. Shown in Fig. 2 is a summary plot of classification accuracy on the unlabeled set, comparing the nearest neighbors baseline with our algorithm, where the number of eigenvectors is taken to be 20% of the number of labeled points. The improvements over the baseline are significant, sometimes exceeding 70%, depending on the number of labeled and unlabeled examples. With only 100 labeled examples (and 59900 unlabeled examples), the Laplacian classifier does nearly as well as the nearest neighbor classifier with 5000 labeled examples. Similarly, with 500/59500 labeled/unlabeled examples, it does slightly better than the nearest neighbor baseline using 20000 labeled examples. By comparing the results for the total 60000 point data set, and the 10000 and 1000 point subsets, we see that adding unlabeled data consistently improves classification accuracy. When almost all of the data is labeled, the performance of our classifier is close to that of k-NN. It is not particularly surprising as our method uses the nearest neighbor information.

1 We use the first 100 principal components of the set of all images to represent each image as a 100 dimensional vector.
[Figure 2 plot: error rate vs. number of labeled points (20 to 50000); curves for the Laplacian classifier with 60,000, 10,000, and 1,000 points total, and the best k-NN baseline (k = 1, 3, 5).]
Figure 2: MNIST data set. Percentage error rates for different numbers of labeled and unlabeled points compared to the best k-NN baseline.
5.2 Text Classification
The second application is text classification using the popular 20 Newsgroups data set. This data set contains approximately 1000 postings from each of 20 different newsgroups. Given an article, the problem is to determine to which newsgroup it was posted. We tokenize the articles using the software package Rainbow written by Andrew McCallum. We use a "stop-list" of 500 most common words to be excluded and also exclude headers, which among other things contain the correct identification of the newsgroup. Each document is then represented by the counts of the most frequent 6000 words, normalized to sum to 1. Documents with 0 total count are removed, thus leaving us with 19935 vectors in a 6000-dimensional space. We follow the same procedure as with the MNIST digit data above. A random subset of a fixed size is taken with labels to form L. The rest of the dataset is considered to be U. We average the results over 20 random splits.2 As with the digits, we take the number of nearest neighbors for the algorithm to be 8. In Fig. 3 we summarize the results by taking 19935, 2000 and 600 total points respectively and calculating the error rate for different numbers of labeled points. The number of eigenvectors used is always 20% of the number of labeled points. We see that having more unlabeled points improves the classification error in most cases, although when there are very few labeled points, the differences are small.
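The document representation described above is just an L1-normalized bag-of-words vector over the 6000-word vocabulary; a minimal sketch (vocabulary handling simplified, names ours):

```python
import numpy as np

def doc_vector(token_counts, vocab):
    """Counts of the vocabulary words in one document, normalized to sum to 1.
    Returns None for documents with zero total count (these are removed)."""
    v = np.array([token_counts.get(w, 0) for w in vocab], dtype=float)
    total = v.sum()
    return v / total if total > 0 else None
```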
References
[1] M. Belkin, P. Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, Technical Report TR-2002-01, Department of Computer Science, The University of Chicago, 2002.
2 In the case of 2000 eigenvectors we take just 10 random splits since the computations are rather time-consuming.
[Figure 3 plot: error rate vs. number of labeled points (50 to 18000); curves for the Laplacian classifier with 19,935, 2,000, and 600 points total, and the best k-NN baseline (k = 1, 3, 5).]
Figure 3: 20 Newsgroups data set. Error rates for different numbers of labeled and unlabeled points compared to the best k-NN baseline.
[2] A. Blum, S. Chawla, Learning from Labeled and Unlabeled Data using Graph Mincuts, ICML, 2001.
[3] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number 92, 1997.
[4] K. Nigam, A.K. McCallum, S. Thrun, T. Mitchell, Text Classification from Labeled and Unlabeled Data, Machine Learning 39(2/3), 2000.
[5] S. Rosenberg, The Laplacian on a Riemannian Manifold, Cambridge University Press, 1997.
[6] Sam T. Roweis, Lawrence K. Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding, Science, vol 290, 22 December 2000.
[7] Martin Szummer, Tommi Jaakkola, Partially labeled classification with Markov random walks, Neural Information Processing Systems (NIPS) 2001, vol 14.
[8] Joshua B. Tenenbaum, Vin de Silva, John C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction, Science, Vol 290, 22
| 2306 |@word trial:1 version:1 middle:1 seems:5 norm:2 bn:1 tr:1 reduction:2 series:2 etric:1 contains:2 document:2 err:1 comparing:2 adj:1 surprising:1 written:2 readily:1 john:1 chicago:3 plot:1 alone:2 plane:3 mccallum:2 lr:1 erator:1 provides:2 math:1 node:1 location:1 traverse:1 five:1 along:1 constructed:2 discret:1 introduce:1 manner:1 decomposed:1 versity:1 actual:1 provided:3 underlying:3 notation:1 panel:14 argmin:1 eigenvector:3 lft:1 collecting:1 act:1 ti:1 xd:1 isometrically:1 preferable:1 classifier:12 unit:2 positive:3 local:1 path:1 ap:1 approximately:1 might:1 twice:1 co:2 dif:1 averaged:1 practical:2 practice:1 lf:1 digit:5 procedure:1 area:1 significantly:1 thought:4 projection:1 outset:1 confidence:1 word:2 numbered:1 get:1 cannot:1 unlabeled:14 close:6 operator:10 onto:1 put:1 applying:2 straightforward:1 wit:1 unlab:10 utilizing:1 embedding:3 coordinate:5 laplace:7 ert:2 controlling:1 us:2 associate:1 recognition:4 approximated:1 satisfying:1 particularly:1 labeled:27 bottom:1 capture:1 wj:1 news:1 connected:4 removed:1 principled:1 intuition:1 classifi:1 ideally:1 geodesic:3 solving:1 basis:9 completely:1 easily:2 represented:2 heat:1 ction:1 artificial:1 header:1 quite:1 whose:2 solve:1 otherwise:1 niyogi:3 statistic:1 think:2 itself:3 advantage:1 eigenvalue:9 differentiable:1 product:2 frequent:1 relevant:1 oint:1 roweis:1 adjoint:1 description:1 produce:1 object:1 depending:1 develop:3 n_:1 andrew:1 nearest:10 op:1 recovering:1 c:2 involves:2 come:1 tommi:1 beltrami:7 correct:1 dii:2 adjacency:8 require:1 fix:1 extension:1 sufficiently:1 considered:3 lawrence:1 algorithmic:1 vary:1 early:1 smallest:4 numb:4 purpose:1 label:2 weighted:2 minimization:1 always:1 rather:3 qoo:1 ej:2 confusingly:1 rosenberg:1 jaakkola:1 ax:1 focus:2 improvement:1 unsatisfactory:1 consistently:1 ave:1 baseline:2 dim:1 el:4 nn:5 typically:1 wij:4 ldp:1 pixel:1 classification:22 among:3 integration:2 fairly:2 field:2 once:2 aware:1 having:1 icml:1 nearly:1 
report:1 few:2 employ:1 belkin:1 neighbour:1 individual:1 eyt:1 ab:2 interest:1 mining:1 misha:1 semidefinite:3 ambient:2 edge:3 underpinnings:1 lution:1 indexed:1 euclidean:3 walk:3 desired:2 circle:2 theoretical:2 classify:2 retains:1 vertex:8 entry:1 subset:3 submanifold:3 eigenmaps:2 too:1 periodic:1 synthetic:1 accessible:1 squarely:1 central:1 ear:2 slowly:1 chung:1 li:1 potential:1 exclude:1 de:1 coefficient:2 vi:1 performed:1 lab:13 closed:1 observing:1 recover:2 parallel:1 vin:1 partha:1 minimize:1 square:3 ir:1 accuracy:3 chart:1 yield:2 handwritten:4 identification:1 accurately:1 cation:1 classified:1 against:2 involved:1 naturally:2 riemannian:3 stop:1 dataset:2 popular:2 mitchell:1 dimensionality:3 improves:2 hilbert:2 formalize:1 riemmannian:1 sophisticated:1 higher:1 supervised:1 follow:1 done:1 though:1 lkin:1 just:4 nif:4 langford:1 hand:2 ei:15 aj:1 perhaps:1 lei:1 building:2 concept:1 contain:1 normalized:2 excluded:1 symmetric:1 adjacent:1 sin:2 self:2 silva:1 image:8 common:1 functional:3 ji:1 volume:1 he:3 approximates:1 interpretation:1 significant:1 cambridge:1 ai:7 smoothness:7 mathematics:2 similarly:1 surface:1 base:3 recent:2 certain:1 joshua:1 integrable:2 determine:1 shortest:1 ii:4 smooth:4 technical:1 x28:1 e1:3 ellipsis:1 laplacian:17 a1:3 controlled:1 ae:1 represent:2 kernel:2 sometimes:1 ion:1 c1:1 wealth:1 leaving:1 appropriately:1 constru:1 rest:3 regional:1 eigenfunctions:14 thing:1 december:2 ideal:2 split:2 easy:1 newsgroups:3 xj:3 identified:1 inner:2 idea:2 intensive:1 motivated:1 expression:1 speech:1 useful:1 clear:1 eigenvectors:5 amount:2 locally:1 tenenbaum:1 induces:1 simplest:1 sl:1 percentage:1 sign:1 estimated:1 write:1 discrete:2 vol:3 enormous:1 blum:1 utilize:2 graph:20 pap:1 sum:4 compete:1 package:1 almost:1 reasonable:2 draw:1 convergent:1 fan:1 nontrivial:1 software:1 nearby:1 prin:1 fourier:1 span:1 optical:1 martin:1 gmi:1 department:2 developing:2 smaller:1 slightly:1 character:1 sam:1 s1:1 intuitively:1 
taken:3 resource:1 turn:2 count:2 know:1 available:1 experimentation:1 apply:1 appropriate:1 spectral:1 chawla:1 top:1 include:1 calculating:1 exploit:2 build:2 ellipse:1 establish:1 approximating:2 tensor:1 eled:20 question:3 costly:1 usual:1 diagonal:2 gradient:1 dp:3 distance:8 epresentation:1 thrun:1 topic:1 manifold:29 unstable:1 trivial:1 length:1 minimizing:1 potentially:1 negative:1 markov:1 finite:2 defining:1 reproducing:1 arbitrary:1 required:1 connection:1 nip:1 eigenfunction:3 address:1 able:1 confidently:1 summarize:1 apf:1 analogue:1 suitable:1 natural:2 improve:2 picture:1 text:4 sg:2 l2:1 geometric:1 embedded:1 integrate:1 degree:1 article:3 classifying:3 row:5 genetics:1 course:1 summary:1 l_:3 uchicago:2 fall:1 neighbor:9 taking:3 saul:1 mikhail:1 sparse:1 curve:10 rainbow:1 resides:1 author:1 far:1 approximate:1 compact:3 uni:1 global:1 consuming:2 xi:3 grayscale:1 spectrum:1 un:1 why:1 nigam:1 expansion:1 posted:1 hilb:2 sp:1 motivation:1 big:1 fig:2 position:2 wish:2 exceeding:1 lie:2 posting:1 theorem:1 er:5 list:1 dl:1 intrinsic:1 consist:1 mnist:3 adding:1 ci:4 illustrates:1 depts:1 vii:1 jej:1 eij:1 lap:1 expressed:1 partially:8 corresponds:1 goal:2 identity:1 consequently:1 labelled:1 considerable:1 feasible:1 hard:2 change:1 determined:1 specifically:1 total:10 mincuts:2 experimental:1 la:1 meaningful:1 newsgroup:2 mark:1 szummer:1 |
1,437 | 2,307 | A Model for Real-Time Computation in Generic
Neural Microcircuits
Wolfgang Maass, Thomas Natschläger
Institute for Theoretical Computer Science
Technische Universitaet Graz, Austria
{maass, tnatschl}@igi.tu-graz.ac.at
Henry Markram
Brain Mind Institute
EPFL, Lausanne, Switzerland
[email protected]
Abstract
A key challenge for neural modeling is to explain how a continuous
stream of multi-modal input from a rapidly changing environment can be
processed by stereotypical recurrent circuits of integrate-and-fire neurons
in real-time. We propose a new computational model that is based on
principles of high dimensional dynamical systems in combination with
statistical learning theory. It can be implemented on generic evolved or
found recurrent circuitry.
1 Introduction
Diverse real-time information processing tasks are carried out by neural microcircuits in
the cerebral cortex whose anatomical and physiological structure is quite similar in many
brain areas and species. However a model that could explain the potentially universal computational capabilities of such recurrent circuits of neurons has been missing. Common
models for the organization of computations, such as for example Turing machines or attractor neural networks, are not suitable since cortical microcircuits carry out computations
on continuous streams of inputs. Often there is no time to wait until a computation has
converged, the results are needed instantly ("anytime computing") or within a short time window ("real-time computing"). Furthermore biological data prove that cortical microcircuits can support several real-time computational tasks in parallel, a fact that is inconsistent with most modeling approaches. In addition the components of biological neural
microcircuits, neurons and synapses, are highly diverse [1] and exhibit complex dynamical
responses on several temporal scales. This makes them completely unsuitable as building
blocks of computational models that require simple uniform components, such as virtually
all models inspired by computer science or artificial neural nets. Finally computations in
common computational models are partitioned into discrete steps, each of which require
convergence to some stable internal state, whereas the dynamics of cortical microcircuits
appears to be continuously changing. In this article we present a new conceptual framework
for the organization of computations in cortical microcircuits that is not only compatible
with all these constraints, but actually requires these biologically realistic features of neural computation. Furthermore like Turing machines this conceptual approach is supported
by theoretical results that prove the universality of the computational model, but for the
biologically more relevant case of real-time computing on continuous input streams.
The work was partially supported by the Austrian Science Fond FWF, project #P15386.
[Figure 1 plots: A, schematic of an LSM with a single readout; B, state distance (y-axis, 0 to 2.5) over time (0 to 0.5 s) for input spike-train distances d(u,v) = 0, 0.1, 0.2, 0.4.]
Figure 1: A Structure of a Liquid State Machine (LSM), here shown with just a single readout. B Separation property of a generic neural microcircuit. Plotted on the $y$-axis is the average value of $\|x_u(t) - x_v(t)\|$, where $\|\cdot\|$ denotes the Euclidean norm, and $x_u(t)$, $x_v(t)$ denote the liquid states at time $t$ for Poisson spike trains $u$ and $v$ as inputs, averaged over many $u$ and $v$ with the same distance $d(u,v)$. $d(u,v)$ is defined as the distance ($L^2$-norm) between low-pass filtered versions of $u$ and $v$.
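A distance of this kind between two spike trains can be computed, for example, by convolving each train with an exponential low-pass kernel on a time grid and taking the L2 norm of the difference; the time constant and grid below are our own arbitrary choices, not values from the paper:

```python
import numpy as np

def lowpass(spike_times, t, tau=0.03):
    """Exponentially low-pass filtered version of a spike train on grid t."""
    s = np.zeros_like(t)
    for ts in spike_times:
        s += (t >= ts) * np.exp(-np.clip(t - ts, 0, None) / tau)
    return s

def spike_train_distance(u, v, T=0.5, dt=1e-3, tau=0.03):
    """L2 distance between low-pass filtered spike trains u and v."""
    t = np.arange(0.0, T, dt)
    diff = lowpass(u, t, tau) - lowpass(v, t, tau)
    return float(np.sqrt(np.sum(diff ** 2) * dt))
```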
2 A New Conceptual Framework for Real-Time Neural Computation
Our approach is based on the following observations. If one excites a sufficiently complex recurrent circuit (or other medium) with a continuous input stream $u(s)$, and looks at a later time $t$ at the current internal state $x(t)$ of the circuit, then $x(t)$ is likely to hold a substantial amount of information about recent inputs $u(s)$, $s \leq t$ (for the case of neural circuit models this was first demonstrated by [2]). We as human observers may not be able to understand the "code" by which this information about $u(s)$ is encoded in the current circuit state $x(t)$, but that is obviously not essential. Essential is whether a readout neuron that has to extract such information at time $t$ for a specific task can accomplish this. But this amounts to a classical pattern recognition problem, since the temporal dynamics of the input stream $u(s)$ has been transformed by the recurrent circuit into a high dimensional spatial pattern $x(t)$. A related approach for artificial neural nets was independently explored in [3].
In order to analyze the potential capabilities of this approach, we introduce the abstract
model of a Liquid State Machine (LSM), see Fig. 1A. As the name indicates, this model
has some weak resemblance to a finite state machine. But whereas the finite state set
and the transition function of a finite state machine have to be custom designed for each
particular computational task, a liquid state machine might be viewed as a universal finite
state machine whose "liquid" high dimensional analog state x(t) changes continuously over
time. Furthermore if this analog state x(t) is sufficiently high dimensional and its dynamics
is sufficiently complex, then it has embedded in it the states and transition functions of
many concrete finite state machines. Formally, an LSM M consists of a filter L (i.e. a
function that maps input streams u(·) onto streams x(·), where x(t) may depend not just on
u(t), but in a quite arbitrary nonlinear fashion also on previous inputs u(s); in mathematical
terminology this is written x(t) = (L u)(t)), and a (potentially memoryless) readout
function f that maps at any time t the filter output x(t) (i.e., the "liquid state") into some
target output y(t). Hence the LSM itself computes a filter that maps u(·) onto y(·).
In our application to neural microcircuits, the recurrently connected microcircuit could
be viewed in a first approximation as an implementation of a general purpose filter L (for
example some unbiased analog memory), from which different readout neurons extract and
recombine diverse components of the information contained in the input u(·). The liquid
state x(t) is that part of the internal circuit state at time t that is accessible to readout neurons. An example where u(·) consists of 4 spike trains is shown in Fig. 2. The generic
microcircuit model (270 neurons) was drawn from the distribution discussed in section 3.
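The LSM scheme just described, a fixed filter mapping u(·) onto x(·) plus a trained memoryless readout, can be sketched end to end. Everything below (network size, weight scales, the recall task) is an illustrative toy, not the microcircuit model of section 3:

```python
import math, random

random.seed(0)

N = 30                      # number of "liquid" units (toy size)
W = [[random.gauss(0, 0.25) if random.random() < 0.2 else 0.0
      for _ in range(N)] for _ in range(N)]      # sparse recurrent weights
W_in = [random.gauss(0, 1.0) for _ in range(N)]  # input weights
leak = 0.7                                        # state carry-over per step

def liquid_states(u):
    """Run the 'liquid': map an input stream u (list of floats) onto a
    stream of N-dimensional states x(t)."""
    x = [0.0] * N
    states = []
    for u_t in u:
        x = [math.tanh(leak * x[i] + W_in[i] * u_t +
                       sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(list(x))
    return states

def train_readout(states, targets, steps=300, lr=0.02):
    """Memoryless linear readout f(x) = w.x + b, fitted by gradient descent."""
    w, b = [0.0] * N, 0.0
    for _ in range(steps):
        for x, y in zip(states, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def readout(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# a task with fading memory: y(t) = u(t-1), i.e. recall the previous input
u = [random.choice([0.0, 1.0]) for _ in range(200)]
y = [0.0] + u[:-1]
states = liquid_states(u)
w, b = train_readout(states[20:], y[20:])   # skip the initial transient
errs = [abs(readout(w, b, x) - t) for x, t in zip(states[20:], y[20:])]
print(sum(errs) / len(errs))
```

Only the readout is trained; the recurrent filter stays fixed, which is the division of labor the LSM definition formalizes.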
[Figure 2 plot: 4 input spike trains, followed by panels plotting target and actual readout outputs against time [sec]: sum of rates of inputs 1&2 in the interval [t-30 ms, t]; sum of rates of inputs 3&4 in the interval [t-30 ms, t]; sum of rates of inputs 1-4 in the interval [t-60 ms, t-30 ms]; sum of rates of inputs 1-4 in the interval [t-150 ms, t]; spike coincidences of inputs 1&3 in the interval [t-20 ms, t]; two nonlinear combinations.]
Figure 2: Multi-tasking in real-time. Input spike trains were randomly generated in such
a way that at any time t the input contained no information about preceding input more
than 30 ms ago. Firing rates r(t) were randomly drawn from the uniform distribution over
[0 Hz, 80 Hz] every 30 ms, and input spike trains 1 and 2 were generated for the present
30 ms time segment as independent Poisson spike trains with this firing rate r(t). This
process was repeated (with independent drawings of r(t) and Poisson spike trains) for
each 30 ms time segment. Spike trains 3 and 4 were generated in the same way, but with
independent drawings of another firing rate r'(t) every 30 ms. The results shown in this
figure are for test data, that were never before shown to the circuit. Below the 4 input
spike trains the target (dashed curves) and actual outputs (solid curves) of 7 linear readout
neurons are shown in real-time (on the same time axis). Targets were to output every
30 ms the actual firing rate (rates are normalized to a maximum rate of 80 Hz) of spike
trains 1&2 during the preceding 30 ms (y1), the firing rate of spike trains 3&4 (y2), the
sum of y1 and y2 in an earlier time interval [t-60 ms, t-30 ms] (y3) and during the interval
[t-150 ms, t] (y4), spike coincidences between inputs 1&3 (y5 is defined as the number
of spikes of input 1 which are accompanied by a spike in the other spike train within 5 ms during the
interval [t-20 ms, t]), a simple nonlinear combination y6 and a randomly chosen complex
nonlinear combination y7 of earlier described values. Since all readouts were linear
units, these nonlinear combinations are computed implicitly within the generic microcircuit
model. Average correlation coefficients between targets and outputs for 200 test inputs of
length 1 s for y1 to y7 were 0.91, 0.92, 0.79, 0.75, 0.68, 0.87, and 0.65.
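The rate and coincidence targets described in this caption are plain sliding-window statistics of the input spike trains; a direct transcription (spike times in seconds; the example trains below are made up):

```python
def rate_in_window(spike_trains, t, width):
    """Average firing rate (Hz) of the given spike trains over [t - width, t]."""
    n = sum(1 for train in spike_trains for s in train if t - width <= s <= t)
    return n / width / len(spike_trains)

def coincidences(train_a, train_b, t, width=0.02, tol=0.005):
    """Spikes of train_a in [t - width, t] accompanied by a spike in train_b
    within tol seconds (5 ms, as in the caption)."""
    return sum(1 for s in train_a
               if t - width <= s <= t and any(abs(s - r) <= tol for r in train_b))

train1 = [0.010, 0.045, 0.300]   # hypothetical spike times (seconds)
train3 = [0.012, 0.200, 0.302]
print(rate_in_window([train1], t=0.05, width=0.05))  # 2 spikes / 50 ms = 40 Hz
print(coincidences(train1, train3, t=0.31))          # 0.300 is matched by 0.302
```

The nonlinear combinations of such quantities are what the linear readouts pull out of the circuit implicitly.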
In this case the 7 readout neurons f1 to f7 (modeled for simplicity just as linear units with
a membrane time constant of 30 ms, applied to the spike trains from the neurons in the
circuit) were trained to extract completely different types of information from the input
stream u(·), which require different integration times stretching from 30 to 150 ms. Since
the readout neurons had a biologically realistic short time constant of just 30 ms, additional
temporally integrated information had to be contained at any instance t in the current firing state x(t) of the recurrent circuit (its "liquid state"). In addition a large number of
nonlinear combinations of this temporally integrated information are also "automatically"
precomputed in the circuit, so that they can be pulled out by linear readouts. Whereas the
information extracted by some of the readouts can be described in terms of commonly discussed schemes for "neural codes", this example demonstrates that it is hopeless to capture
the dynamics or the information content of the primary engine of the neural computation,
the liquid state of the neural circuit, in terms of simple coding schemes.
3 The Generic Neural Microcircuit Model
We used a randomly connected circuit consisting of leaky integrate-and-fire (I&F) neurons, 20% of which were randomly chosen to be inhibitory, as generic neural microcircuit
model.1 Parameters were chosen to fit data from microcircuits in rat somatosensory cortex
(based on [1], [4] and unpublished data from the Markram Lab). 2 It turned out to be essential to keep the connectivity sparse, like in biological neural systems, in order to avoid
chaotic effects.
In the case of a synaptic connection from neuron a to neuron b we modeled the synaptic dynamics according to the model proposed in [4], with the synaptic parameters U (use), D (time constant
for depression), F (time constant for facilitation) randomly chosen from Gaussian distributions that were based on empirically found data for such connections.3 We have shown
in [5] that without such synaptic dynamics the computational power of these microcircuit
models decays significantly. For each simulation, the initial conditions of each I&F neuron,
i.e. the membrane voltage at time t = 0, were drawn randomly (uniform distribution) from
the interval [13.5 mV, 15.0 mV]. The "liquid state" x(t) of the recurrent circuit consisting
of n neurons was modeled by an n-dimensional vector computed by applying a low pass
filter with a time constant of 30 ms to the spike trains generated by the n neurons in the
recurrent microcircuit.
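The dynamic synapse model of [4], with parameters U, D, F, is commonly implemented with the following per-spike recurrence; this is the standard formulation of that model, not code from the authors:

```python
import math

def synaptic_amplitudes(spike_times, A=1.0, U=0.5, D=1.1, F=0.05):
    """Amplitude A * u_k * R_k of the k-th spike of a dynamic synapse.
    u tracks facilitation (utilization), R tracks depression (resources);
    U = use, D = depression time constant, F = facilitation time constant,
    all times in seconds."""
    amps, u, R = [], U, 1.0
    for k, t in enumerate(spike_times):
        if k > 0:
            dt = t - spike_times[k - 1]
            u_next = U + u * (1 - U) * math.exp(-dt / F)
            R = 1 + (R - u * R - 1) * math.exp(-dt / D)   # uses the old u
            u = u_next
        amps.append(A * u * R)
    return amps

train = [0.0, 0.1, 0.2, 0.3]                          # regular 10 Hz train
ee = synaptic_amplitudes(train, U=.5, D=1.1, F=.05)   # mean EE values: depressing
ei = synaptic_amplitudes(train, U=.05, D=.125, F=1.2) # mean EI values: facilitating
print(ee[0], ee[1] < ee[0], ei[1] > ei[0])  # 0.5 True True
```

Depending on U, D, F, the same recurrence produces depressing or facilitating synapses, which is the heterogeneity the circuit relies on.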
1
The software used to simulate the model is available via www.lsm.tugraz.at .
2
Neuron parameters: membrane time constant 30 ms, absolute refractory period 3 ms (excitatory neurons), 2 ms (inhibitory neurons), threshold 15 mV (for a resting membrane potential assumed
to be 0), reset voltage 13.5 mV, constant nonspecific background current I_b (in nA), input resistance 1 MOhm. Connectivity structure: The probability of a synaptic connection from neuron a
to neuron b (as well as that of a synaptic connection from neuron b to neuron a) was defined as
C · exp(−D(a,b)²/λ²), where λ is a parameter which controls both the average number of connections and the average distance between neurons that are synaptically connected (we set λ = 2, see [5]
for details). We assumed that the neurons were located on the integer points of a 3 dimensional grid
in space, where D(a,b) is the Euclidean distance between neurons a and b. Depending on whether
a and b were excitatory (E) or inhibitory (I), the value of C was 0.3 (EE), 0.2 (EI), 0.4 (IE), 0.1
(II).
3
Depending on whether a and b were excitatory (E) or inhibitory (I), the mean values of these
three parameters (with D, F expressed in seconds, s) were chosen to be .5, 1.1, .05 (EE), .05, .125,
1.2 (EI), .25, .7, .02 (IE), .32, .144, .06 (II). The SD of each parameter was chosen to be 50%
of its mean. The mean of the scaling parameter A (in nA) was chosen to be 30 (EE), 60 (EI), -19
(IE), -19 (II). In the case of input synapses the parameter A had a value of 18 nA if projecting onto
an excitatory neuron and 9 nA if projecting onto an inhibitory neuron. The SD of the parameter A
was chosen to be 100% of its mean and A was drawn from a gamma distribution. The postsynaptic
current was modeled as an exponential decay exp(−t/τ_s), with τ_s = 3 ms (6 ms) for excitatory
(inhibitory) synapses. The transmission delays between liquid neurons were chosen uniformly to be
1.5 ms (EE), and 0.8 ms for the other connections.
4 Towards a non-Turing Theory for Real-Time Neural Computation
Whereas the famous results of Turing have shown that one can construct Turing machines
that are universal for digital sequential offline computing, we propose here an alternative
computational theory that is more adequate for analyzing parallel real-time computing on
analog input streams. Furthermore we present a theoretical result which implies that within
this framework the computational units of the system can be quite arbitrary, provided that
sufficiently diverse units are available (see the separation property and approximation property discussed below). It also is not necessary to construct circuits to achieve substantial
computational power. Instead sufficiently large and complex ?found? circuits (such as the
generic circuit used as the main building block for Fig. 2) tend to have already large computational power, provided that the reservoir from which their units are chosen is sufficiently
rich and diverse.
Consider a class B of basis filters (that may for example consist of the components
that are available for building filters L of neural LSMs, such as dynamic synapses). We
say that this class has the point-wise separation property if for any two input functions
u(·), v(·) with u(s) != v(s) for some s <= t there exists some filter in B whose output at time t
differs for u and v.4 There exist completely different classes of filters that satisfy this point-wise
separation property: all delay lines, all linear filters, and, biologically more
relevant, models for dynamic synapses (see [6]).
The complementary requirement that is demanded from the class F of functions from
which the readout maps f are to be picked is the well-known universal approximation
property: for any continuous function h and any closed and bounded domain one can approximate h on this domain with any desired degree of precision by some f in F. An
example for such a class is F = {feedforward sigmoidal neural nets}. A rigorous mathematical theorem [5] states that for any class B of filters that satisfies the point-wise separation property and for any class F of functions that satisfies the universal approximation
property one can approximate any given real-time computation on time-varying inputs with
fading memory (and hence any biologically relevant real-time computation) by an LSM
whose filter is composed of finitely many filters in B, and whose readout map f is
chosen from the class F. This theoretical result supports the following pragmatic procedure:
In order to implement a given real-time computation with fading memory it suffices to take
a filter whose dynamics is "sufficiently complex", and train a "sufficiently flexible" readout to assign for each time t and state x(t) the target output y(t). Actually, we
found that if the neural microcircuit model is not too small, it usually suffices to use linear
readouts. Thus the microcircuit automatically assumes "on the side" the computational role
of a kernel for support vector machines.
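The "sufficiently complex filter plus linear readout" recipe can be illustrated on a miniature scale: a fixed random nonlinear expansion plays the kernel role of the circuit, and a linear readout trained by gradient descent solves a task (XOR) that no linear readout could solve on the raw input. All sizes and learning parameters below are arbitrary toy choices:

```python
import math, random

random.seed(2)

# XOR-style task: not solvable by a linear readout on the raw 2-D input
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

N = 40   # random nonlinear units standing in for the "liquid"
proj = [([random.gauss(0, 2), random.gauss(0, 2)], random.gauss(0, 2))
        for _ in range(N)]

def features(x):
    """Fixed random nonlinear expansion (never trained)."""
    return [math.tanh(w[0] * x[0] + w[1] * x[1] + b) for w, b in proj]

feats = [(features(x), y) for x, y in data]

# linear readout trained by stochastic gradient descent on squared error
w, b = [0.0] * N, 0.0
for _ in range(3000):
    for f, y in feats:
        err = sum(wi * fi for wi, fi in zip(w, f)) + b - y
        w = [wi - 0.01 * err * fi for wi, fi in zip(w, f)]
        b -= 0.01 * err

preds = [round(sum(wi * fi for wi, fi in zip(w, f)) + b) for f, _ in feats]
print(preds)
```

Only the readout weights w, b are adapted; the fixed expansion provides the nonlinear preprocessing, which is the sense in which the microcircuit acts as a kernel.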
For physical implementations of LSMs it makes more sense to study, instead of the theoretically relevant point-wise separation property, the following qualitative separation property
as a test for the computational capability of a filter L: how different are the liquid states
x_u(t) = (L u)(t) and x_v(t) = (L v)(t) for two different input histories u(·), v(·). This is
evaluated in Fig. 1B for the case where u(·), v(·) are Poisson spike trains and L is a generic
neural microcircuit model. It turns out that the difference between the liquid states scales
roughly proportionally to the difference between the two input histories. This appears to
be desirable from the practical point of view, since it implies that saliently different input
histories can be distinguished more easily and in a more noise robust fashion by the readout. We propose to use such evaluation of the separation capability of neural microcircuits
as a new standard test for their computational capabilities.
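The qualitative separation test can be sketched with a crude stand-in for the circuit, a bank of leaky integrators with heterogeneous time constants (an illustrative assumption, not the I&F microcircuit of section 3):

```python
import math, random

random.seed(3)
N = 20
w_in = [random.gauss(0, 1) for _ in range(N)]
taus = [random.uniform(0.01, 0.1) for _ in range(N)]  # heterogeneous units

def liquid_state(spikes, t=0.5):
    """State at time t of a bank of N leaky integrators driven by one spike
    train (a crude stand-in for the recurrent circuit)."""
    return [sum(w_in[i] * math.exp(-(t - s) / taus[i]) for s in spikes if s <= t)
            for i in range(N)]

def state_distance(u, v):
    """Euclidean distance between the liquid states for inputs u and v."""
    xu, xv = liquid_state(u), liquid_state(v)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xu, xv)))

base = sorted(random.uniform(0, 0.5) for _ in range(20))
def jitter(train, sd):
    return sorted(s + random.gauss(0, sd) for s in train)

print(state_distance(base, base))                 # identical inputs: 0.0
print(state_distance(base, jitter(base, 0.005)))  # slightly different inputs
print(state_distance(base, jitter(base, 0.05)))   # very different inputs
```

Plotting such state distances against the input distance d(u,v), as in Fig. 1B, is the proposed separation test.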
4
Note that it is not required that there exists a single basis filter in B which achieves this separation for
any two different input histories u(·), v(·).
5 A Generic Neural Microcircuit on the Computational Test Stand
The theoretical results sketched in the preceding section can be interpreted as saying that
there are no strong a priori limitations for the power of neural microcircuits for real-time
computing with fading memory, provided they are sufficiently large and their components
are sufficiently heterogeneous. In order to evaluate this somewhat surprising theoretical
prediction, we use a well-studied computational benchmark task for which data have been
made publicly available 5 : the speech recognition task considered in [7] and [8].
The dataset consists of 500 input files: the words ?zero?, ?one?, ..., ?nine? are spoken by 5
different (female) speakers, 10 times by each speaker. The task was to construct a network
of I&F neurons that could recognize each of the 10 spoken words. Each of the 500 input
files had been encoded in the form of 40 spike trains, with at most one spike per spike train 6
signaling onset, peak, or offset of activity in a particular frequency band. A network was
presented in [8] that could solve this task with an error 7 of 0.15 for recognizing the pattern
?one?. No better result had been achieved by any competing networks constructed during a
widely publicized internet competition [7]. The network constructed in [8] transformed the
40 input spike trains into linearly decaying input currents from 800 pools, each consisting
of a "large set of closely similar unsynchronized neurons" [8]. Each of the 800 currents
was delivered to a separate pair of neurons consisting of an excitatory "α-neuron" and an
inhibitory "β-neuron". To accomplish the particular recognition task some of the synapses
between α-neurons (β-neurons) are set to have equal weights, the others are set to zero. A
particular achievement of this network (resulting from the smoothly and linearly decaying
firing activity of the 800 pools of neurons) is that it is robust with regard to linear time-warping of the input spike pattern.
We tested our generic neural microcircuit model on the same task (in fact on exactly the
same 500 input files). A randomly chosen subset of 300 input files was used for training,
the other 200 for testing. The generic neural microcircuit model was drawn from the distribution described in section 3, hence from the same distribution as the circuit drawn for
the completely different task discussed in Fig. 2, with 135 randomly connected I&F neurons
located on the integer points of a 15 x 3 x 3 column. The synaptic weights of 10 linear
readout neurons f_w which received inputs from the 135 I&F neurons in the circuit were
optimized (like for SVMs with linear kernels) to fire whenever the input encoded the spoken word w. Hence the whole circuit consisted of 145 I&F neurons, less than 1/30 of
the size of the network constructed in [8] for the same task8. Nevertheless the average error
achieved after training by these randomly generated generic microcircuit models was 0.14
(measured in the same way, for the same word "one"), hence slightly better than that of the
30 times larger network custom designed for this task. The score given is the average for
50 randomly drawn generic microcircuit models.
The comparison of the two different approaches also provides a nice illustration of the
5
http://moment.princeton.edu/ mus/Organism/Competition/digits data.html
6
The network constructed in [8] required that each spike train contained at most one spike.
7
The error (or "recognition score") for a particular word was defined in [8] by
error = (f+ + f-)/(c+ + c-), where f+ (c+) is the number of false (correct) positives and f- and c- are the numbers of
false and correct negatives. We use the same definition of error to facilitate comparison of results. The
recognition scores of the network constructed in [8] and of competing networks of other researchers
can be found at http://moment.princeton.edu/mus/Organism/Docs/winners.html.
For the competition the networks were allowed to be constructed especially for their task, but only one single pattern for
each word could be used for setting the synaptic weights. Since our microcircuit models were not
prepared for this task, they had to be trained with substantially more examples.
8
If one assumes that each of the 800 "large" pools of neurons in that network would consist of
just 5 neurons, it contains together with the α- and β-neurons 5600 neurons.
"one", speaker 5
"one", speaker 3
"five", speaker 1
"eight", speaker 4
PSfrag replacements
readout microcircuit
input
40
20
0
135
90
45
0
PSfrag
replacements
replacements
replacements
0
0.2
0.4 PSfrag
0
0.2
0.4 PSfrag
0
0
0.2
time [s]
time [s]
time [s]
0.2
time [s]
Figure 3: Application of our generic neural microcircuit model to the speech recognition
task from [8]. Top row: input spike patterns. Second row: spiking response of the 135 I&F
neurons in the neural microcircuit model. Third row: output of an I&F neuron that was
trained to fire as soon as possible when the word "one" was spoken, and as little as possible
else.
difference between offline computing, real-time computing, and any-time computing.
Whereas the network of [8] implements an algorithm that needs a few hundred ms of processing time between the end of the input pattern and the answer to the classification task
(450 ms in the example of Fig. 2 in [8]), the readout neurons from the generic neural microcircuit were trained to provide their answer (through firing or non-firing) immediately
when the input pattern ended. In fact, as illustrated in Fig. 3, one can even train the readout neurons quite successfully to provide provisional answers long before the input pattern
has ended (thereby implementing an "anytime" algorithm). More precisely, each of the 10
linear readout neurons was trained to recognize the spoken word at any multiple of 20 ms
while the word was spoken. An error score of 1.4 was achieved for this anytime speech
recognition task.
We also compared the noise robustness of the generic microcircuit models with that of
[8], which had been constructed to be robust with regard to linear time warping of the input
pattern. Since no benchmark input data were available to calculate this noise robustness, we
constructed such data by creating as templates 10 patterns consisting each of 40 randomly
drawn Poisson spike trains at 4 Hz over 0.5 s. Noisy variations of these templates
were created by first multiplying their time scale with a randomly drawn factor from [1/3, 3]
(thereby allowing for a 9 fold time warp), and subsequently dislocating each spike by an
amount drawn independently from a Gaussian distribution with mean 0 and SD 32 ms.
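The template-perturbation procedure translates directly into code; the scaling range [1/3, 3] is inferred from the stated 9-fold time warp, and the jitter SD is the stated 32 ms:

```python
import random

def noisy_variation(template, rng, scale_lo=1/3, scale_hi=3.0, jitter_sd=0.032):
    """Noisy variation of a spike-time template (times in seconds): multiply
    the time scale by a random factor, then dislocate every spike
    independently by Gaussian noise with mean 0 and SD 32 ms."""
    factor = rng.uniform(scale_lo, scale_hi)
    return sorted(s * factor + rng.gauss(0.0, jitter_sd) for s in template)

rng = random.Random(4)
template = sorted(rng.uniform(0.0, 0.5) for _ in range(10))
variant = noisy_variation(template, rng)
print(len(variant) == len(template))  # True: spikes are moved, never added or removed
```

Each template can be perturbed repeatedly this way to build the 1000 training and 500 test examples used below.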
These spike patterns were given as inputs to the same generic neural microcircuit models
consisting of 135 I&F neurons as discussed before. 10 linear readout neurons were trained
(with 1000 randomly drawn training examples) to recognize which of the 10 templates had
been used to generate a particular input. On 500 novel test examples (drawn from same
distribution) they achieved an error of 0.09 (average performance of 30 randomly generated
microcircuit models). As a consequence of achieving this noise robustness generically,
rather than by a construction tailored to a specific type of noise, we found that the same
generic microcircuit models are also robust with regard to nonlinear time warp of the input.
For the case of nonlinear (sinusoidal) time warp 9 an average (50 microcircuits) error of 0.2
9
A spike at time t was transformed into a spike at time g(t) for a sinusoidal warping function g, with frequency (in Hz) randomly drawn from [0.5, 2] and the remaining parameters of g randomly chosen.
is achieved. This demonstrates that it is not necessary to build noise robustness explicitly
into the circuit. A randomly generated microcircuit model has at least the same noise
robustness as a circuit especially constructed to achieve that.
This test had implicitly demonstrated another point. Whereas the network of [8] was only
able to classify spike patterns consisting of at most one spike per spike train, a generic
neural microcircuit model can classify spike patterns without that restriction. It can for
example also classify the original version of the speech data encoded into onsets, peaks,
and offsets in various frequency bands, before all except the first events of each kind were
artificially removed to fit the requirements of the network from [8].
The performance of the same generic neural microcircuit model on completely different
computational tasks (recall of information from preceding input segments, movement prediction and estimation of the direction of movement of extended moving objects) turned out
to be also quite remarkable, see [5], [9] and [10]. Hence this microcircuit model appears to
have quite universal capabilities for real-time computing on time-varying inputs.
6 Discussion
We have presented a new conceptual framework for analyzing computations in generic
neural microcircuit models that satisfies the biological constraints listed in section 1. Thus
for the first time one can now take computer models of neural microcircuits, which can be as
realistic as one wants, and use them not just for demonstrating dynamic effects such as
synchronization or oscillations, but to really carry out demanding computations with these
models. Furthermore our new conceptual framework for analyzing computations in neural
circuits not only provides theoretical support for their seemingly universal capabilities for
real-time computing, but also throws new light on key concepts such as neural coding. Finally, since in contrast to virtually all computational models the generic neural microcircuit
models that we consider have no preferred direction of information processing, they offer
an ideal platform for investigating the interaction of bottom-up and top-down processing
of information in neural systems.
References
[1] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic
interneurons and synapses in the neocortex. Science, 287:273-278, 2000.
[2] D. V. Buonomano and M. M. Merzenich. Temporal information transformed into a spatial code
by a neural network with realistic properties. Science, 267:1028-1030, 1995.
[3] H. Jaeger. The "echo state" approach to analysing and training recurrent neural networks.
German National Research Center for Information Technology, Report 148, 2001.
[4] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical
pyramidal neurons. Proc. Natl. Acad. Sci., 95:5323-5328, 1998.
[5] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new
framework for neural computation based on perturbations. Neur. Comp., 14:2531-2560, 2002.
[6] W. Maass and E. D. Sontag. Neural systems as nonlinear filters. Neur. Comp., 12:1743-1772,
2000.
[7] J. J. Hopfield and C. D. Brody. What is a moment? "Cortical" sensory integration over a brief
interval. Proc. Natl. Acad. Sci. USA, 97(25):13919-13924, 2000.
[8] J. J. Hopfield and C. D. Brody. What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proc. Natl. Acad. Sci. USA, 98(3):1282-1287, 2001.
[9] W. Maass, R. A. Legenstein, and H. Markram. A new approach towards vision suggested by
biologically realistic neural microcircuit models. In H. H. Buelthoff, S. W. Lee, T. A. Poggio,
and C. Wallraven, editors, Proc. of the 2nd International Workshop on Biologically Motivated
Computer Vision 2002, volume 2525 of LNCS, pages 282-293. Springer, 2002.
[10] W. Maass, T. Natschläger, and H. Markram. Computational models for generic cortical microcircuits. In J. Feng, editor, Computational Neuroscience: A Comprehensive Approach. CRC Press, 2002. To appear.
Feature Selection in Mixture-Based Clustering
Martin H. Law, Anil K. Jain
Dept. of Computer Science and Eng.
Michigan State University,
East Lansing, MI 48824
U.S.A.
Mário A. T. Figueiredo
Instituto de Telecomunicações,
Instituto Superior Técnico
1049-001 Lisboa
Portugal
Abstract
There exist many approaches to clustering, but the important issue of
feature selection, i.e., selecting the data attributes that are relevant for
clustering, is rarely addressed. Feature selection for clustering is difficult
due to the absence of class labels. We propose two approaches to feature
selection in the context of Gaussian mixture-based clustering. In the first
one, instead of making hard selections, we estimate feature saliencies.
An expectation-maximization (EM) algorithm is derived for this task.
The second approach extends Koller and Sahami's mutual-information-based feature relevance criterion to the unsupervised case. Feature selection is then carried out by a backward search scheme. This scheme can
be classified as a 'wrapper', since it wraps mixture estimation in an outer
layer that performs feature selection. Experimental results on synthetic
and real data show that both methods have promising performance.
1 Introduction
In partitional clustering, each pattern is represented by a vector of features. However, not
all the features are useful in constructing the partitions: some features may be just noise,
thus not contributing to (or even degrading) the clustering process. The task of selecting
the 'best' feature subset, known as feature selection (FS), is therefore an important task.
In addition, FS may lead to more economical clustering algorithms (both in storage and
computation) and, in many cases, it may contribute to the interpretability of the models.
FS is particularly relevant for data sets with large numbers of features; e.g., on the order of
thousands as seen in some molecular biology [22] and text clustering applications [21].
In supervised learning, FS has been widely studied, with most methods falling into two
classes: filters, which work independently of the subsequent learning algorithm; wrappers,
which use the learning algorithm to evaluate feature subsets [12]. In contrast, FS has received little attention in clustering, mainly because, without class labels, it is unclear how
to assess feature relevance. The problem is even more difficult when the number of clusters
is unknown, since the number of clusters and the best feature subset are inter-related [6].
Some approaches to FS in clustering have been proposed. Of course, any method not
Email addresses: lawhiu@cse.msu.edu, mtf@lx.it.pt, jain@cse.msu.edu
This work was supported by the U.S. Office of Naval Research, grant no. 00014-01-1-0266, and by
the Portuguese Foundation for Science and Technology, project POSI/33143/SRI/2000.
relying on class labels (e.g., [16]) can be used. Dy and Brodley [6] suggested a heuristic to
compare feature subsets, using cluster separability. A Bayesian approach for multinomial
mixtures was proposed in [21]; another Bayesian approach using a shrinkage prior was
considered in [8]. Dash and Liu [4] assess the clustering tendency of each feature by
an entropy index. A genetic algorithm was used in [11] for FS in k-means clustering.
Talavera [19] addressed FS for symbolic data. Finally, Devaney and Ram [5] use a notion
of 'category utility' for FS in conceptual clustering, and Modha and Scott-Spangler [17]
assign weights to feature groups with a score similar to Fisher discrimination.
In this paper, we introduce two new FS approaches for mixture-based clustering [10, 15].
The first is based on a feature saliency measure which is obtained by an EM algorithm;
unlike most FS methods, this does not involve any explicit search. The second approach
extends the mutual-information based criterion of [13] to the unsupervised context; it is a
wrapper, since FS is wrapped around a basic mixture estimation algorithm.
2 Finite Mixtures and the EM algorithm
Given n i.i.d. samples Y = {y_1, ..., y_n}, the log-likelihood of a K-component mixture is

    log p(Y | θ) = Σ_{i=1}^n log Σ_{j=1}^K α_j p(y_i | θ_j),    (1)

where: α_j ≥ 0; Σ_j α_j = 1; each θ_j is the set of parameters of the j-th component; and θ ≡ {θ_1, ..., θ_K, α_1, ..., α_K} is the full parameter set. Each y_i is a d-dimensional feature vector [y_i1, ..., y_id]^T, and all components have the same form (e.g., Gaussian).

Neither maximum likelihood (θ_ML = arg max_θ log p(Y | θ)) nor maximum a posteriori (θ_MAP = arg max_θ {log p(Y | θ) + log p(θ)}) estimates can be found analytically. The usual choice is the EM algorithm, which finds local maxima of these criteria. Let Z = {z_1, ..., z_n} be a set of missing labels, with z_i = [z_i1, ..., z_iK]^T, where z_ij = 1 and z_il = 0 for l ≠ j, meaning that y_i is a sample of the j-th component. The complete log-likelihood is

    log p(Y, Z | θ) = Σ_{i=1}^n Σ_{j=1}^K z_ij log [α_j p(y_i | θ_j)].    (2)

EM produces a sequence of estimates {θ̂(t), t = 0, 1, 2, ...} using two alternating steps:

E-step: Computes W ≡ E[Z | Y, θ̂(t)] and plugs it into log p(Y, Z | θ), yielding the Q-function Q(θ; θ̂(t)) = E[log p(Y, Z | θ) | Y, θ̂(t)]. Since the elements of Z are binary, we have

    w_ij ≡ E[z_ij | Y, θ̂(t)] = Pr[z_ij = 1 | y_i, θ̂(t)] = α̂_j(t) p(y_i | θ̂_j(t)) / Σ_{l=1}^K α̂_l(t) p(y_i | θ̂_l(t)).    (3)

Notice that α_j is the a priori probability that z_ij = 1 (i.e., that y_i belongs to cluster j), while w_ij is the corresponding a posteriori probability, after observing y_i.

M-step: Updates the parameter estimates: θ̂(t+1) = arg max_θ {Q(θ; θ̂(t)) + log p(θ)} in the case of MAP estimation, or without log p(θ) in the ML case.
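The two steps above can be made concrete with a short NumPy sketch for a spherical-Gaussian mixture (an illustration only, not the authors' implementation; the function name and the spherical-covariance choice are ours):

```python
import numpy as np

def em_gaussian_mixture(Y, K, n_iter=100, seed=0):
    """Minimal EM for a K-component spherical Gaussian mixture (illustration only)."""
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    alpha = np.full(K, 1.0 / K)                  # mixing weights alpha_j
    mu = Y[rng.choice(n, K, replace=False)]      # initialize means at random data points
    var = np.full(K, Y.var())                    # one (spherical) variance per component
    for _ in range(n_iter):
        # E-step: responsibilities w_ij = Pr[z_ij = 1 | y_i], as in eq. (3)
        log_p = -0.5 * (((Y[:, None, :] - mu[None]) ** 2).sum(-1) / var
                        + d * np.log(2 * np.pi * var))
        log_w = np.log(alpha) + log_p
        log_w -= log_w.max(axis=1, keepdims=True)    # stabilize before exponentiating
        w = np.exp(log_w)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate alpha, mu, var from the responsibilities
        Nk = w.sum(axis=0)
        alpha = Nk / n
        mu = (w.T @ Y) / Nk[:, None]
        var = np.array([(w[:, j] * ((Y - mu[j]) ** 2).sum(-1)).sum() / (d * Nk[j])
                        for j in range(K)])
    return alpha, mu, var, w
```

Each pass computes the posteriors in the E-step and then closed-form weighted averages in the M-step, exactly the alternation described above.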
3 A Mixture Model with Feature Saliency
In our first approach to FS, we assume conditionally independent features, given the component label (which in the Gaussian case corresponds to diagonal covariance matrices),
    p(y | θ) = Σ_{j=1}^K α_j p(y | θ_j) = Σ_{j=1}^K α_j ∏_{l=1}^d p(y_l | θ_jl),    (4)

where p(y_l | θ_jl) is the pdf of the l-th feature in the j-th component; in general, this could have any form, although we only consider Gaussian densities. In the sequel, we will use the indices i, j, and l to run through data points, mixture components, and features, respectively. Assume now that some features are irrelevant, in the following sense: if feature l is irrelevant, then p(y_l | θ_jl) = q(y_l | λ_l), for j = 1, ..., K, where q(y_l | λ_l) is the common (i.e., independent of j) density of feature l. Let Φ = (φ_1, ..., φ_d) be a set of binary parameters, such that φ_l = 1 if feature l is relevant and φ_l = 0 otherwise; then,

    p(y | θ, Φ) = Σ_{j=1}^K α_j ∏_{l=1}^d [p(y_l | θ_jl)]^{φ_l} [q(y_l | λ_l)]^{1−φ_l}.    (5)

Our approach consists of:
(i) treating the φ_l's as missing variables rather than as parameters;
(ii) estimating ρ_l = Pr[φ_l = 1] from the data; ρ_l is the probability that the l-th feature is useful, which we call its saliency. The resulting mixture model (see proof in [14]) is

    p(y | θ) = Σ_{j=1}^K α_j ∏_{l=1}^d [ρ_l p(y_l | θ_jl) + (1 − ρ_l) q(y_l | λ_l)].    (6)

The form of q(· | λ_l) reflects our prior knowledge about the distribution of the non-salient features. In principle, it can be any 1-D pdf (e.g., Gaussian or student-t); here we only consider q(· | λ_l) to be a Gaussian. Equation (6) has a generative interpretation. As in a standard finite mixture, we first select the component label j by sampling from a multinomial distribution with parameters (α_1, ..., α_K). Then, for each feature l = 1, ..., d, we flip a biased coin whose probability of getting a head is ρ_l; if we get a head, we use the mixture component p(· | θ_jl) to generate the l-th feature; otherwise, the common component q(· | λ_l) is used.

Given a set of n observations Y = {y_1, ..., y_n}, with y_i = [y_i1, ..., y_id]^T, the parameters {α_j}, {θ_jl}, {ρ_l}, {λ_l} can be estimated by the maximum likelihood criterion,

    θ̂ = arg max_θ Σ_{i=1}^n log Σ_{j=1}^K α_j ∏_{l=1}^d [ρ_l p(y_il | θ_jl) + (1 − ρ_l) q(y_il | λ_l)].    (7)

In the absence of a closed-form solution, an EM algorithm can be derived by treating both the z_i's and the φ_l's as missing data (see [14] for details).
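The generative reading of the feature-saliency model translates directly into a sampler (a hypothetical sketch; all names, and the Gaussian choice for both the component and common densities, are ours):

```python
import numpy as np

def sample_saliency_mixture(n, alpha, mu, sigma, rho, mu_q, sigma_q, seed=0):
    """Sample n points from the feature-saliency mixture model.

    alpha: (K,) mixing weights; mu, sigma: (K, d) per-component feature parameters;
    rho: (d,) feature saliencies; mu_q, sigma_q: (d,) common-density parameters.
    """
    rng = np.random.default_rng(seed)
    K, d = mu.shape
    Y = np.empty((n, d))
    for i in range(n):
        j = rng.choice(K, p=alpha)            # pick the component label
        heads = rng.random(d) < rho           # one biased coin per feature
        comp = rng.normal(mu[j], sigma[j])    # feature drawn from the component density
        common = rng.normal(mu_q, sigma_q)    # feature drawn from the common density
        Y[i] = np.where(heads, comp, common)  # heads -> component, tails -> common
    return Y
```

With a saliency near 1 a feature carries the cluster structure of its component; with a saliency near 0 it is pure common-density noise, independent of the component label.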
3.1 Model Selection
Standard EM for mixtures exhibits some weaknesses which also affect the EM algorithm just mentioned: it requires knowledge of K, and a good initialization is essential for reaching a good local optimum. To overcome these difficulties, we adopt the approach in [9], which is based on the MML criterion [23, 24]. The MML criterion for the proposed model (see details in [14]) consists of minimizing, with respect to θ, the following cost function

    −log p(Y | θ) + ((K + d)/2) log n + (r/2) Σ_{l=1}^d Σ_{j=1}^K log(n α_j ρ_l) + (s/2) Σ_{l=1}^d log(n (1 − ρ_l)),    (8)

where r and s are the number of parameters in θ_jl and λ_l, respectively. If p(· | θ_jl) and q(· | λ_l) are univariate Gaussians (arbitrary mean and variance), r = s = 2. From a parameter estimation viewpoint, this is equivalent to a MAP estimate with conjugate (improper) Dirichlet-type priors on the α_j's and ρ_l's (see details in [14]); thus, the EM algorithm undergoes a minor modification in the M-step, which still has a closed form.

The terms in equation (8), in addition to the log-likelihood −log p(Y | θ), have simple interpretations. The term ((K + d)/2) log n is a standard MDL-type parameter code-length corresponding to the K values of α_j and the d values of ρ_l. For the l-th feature in the j-th component, the 'effective' number of data points for estimating θ_jl is n α_j ρ_l. Since there are r parameters in each θ_jl, the corresponding code-length is (r/2) log(n α_j ρ_l). Similarly, for the l-th feature in the common component, the number of effective data points for estimation is n (1 − ρ_l); thus, there is a term (s/2) log(n (1 − ρ_l)) in (8) for each feature.

One key property of the EM algorithm for minimizing equation (8) is its pruning behavior, forcing some of the α_j to go to zero and some of the ρ_l to go to zero or one. Worries that the message length in (8) may become invalid at these boundary values can be circumvented by the arguments in [9]. When ρ_l goes to zero, the l-th feature is no longer salient, and ρ_l and {θ_1l, ..., θ_Kl} are removed. When ρ_l goes to one, λ_l and q(· | λ_l) are dropped.

Finally, since the model selection algorithm determines the number of components, it can be initialized with a large value of K, thus alleviating the need for a good initialization [9].
Because of this, as in [9], a component-wise version of EM [2] is adopted (see [14]).
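With univariate-Gaussian components (r = s = 2), the cost (8) is straightforward to evaluate for a candidate model (a sketch; `neg_loglik` stands for −log p(Y|θ) computed elsewhere from eq. (6), and the sketch assumes every ρ_l is strictly between 0 and 1, since at the boundary values the corresponding terms are dropped as described above):

```python
import numpy as np

def mml_cost(neg_loglik, alpha, rho, n, r=2, s=2):
    """Message length of eq. (8): -log p(Y|theta) plus parameter code-lengths."""
    alpha = np.asarray(alpha, dtype=float)   # (K,) mixing weights
    rho = np.asarray(rho, dtype=float)       # (d,) feature saliencies, in (0, 1)
    K, d = alpha.size, rho.size
    cost = neg_loglik
    cost += 0.5 * (K + d) * np.log(n)                         # code for alphas and rhos
    cost += 0.5 * r * np.log(n * np.outer(alpha, rho)).sum()  # theta_jl code-lengths
    cost += 0.5 * s * np.log(n * (1.0 - rho)).sum()           # lambda_l code-lengths
    return cost
```

Each code-length term grows with the effective sample size available for the corresponding parameter block, which is what drives the pruning behavior.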
3.2 Experiments and Results
The first data set considered consists of 800 points from a mixture of 4 equiprobable Gaussians with four distinct mean vectors and identity covariance matrices. Eight 'noisy' features (sampled from a N(0, 1) density) were appended to this data, yielding a set of 800 10-D patterns. The proposed algorithm was run 10 times, each initialized with K = 30; the common component is initialized to cover all data, and the feature saliencies are initialized at 0.5. In all the 10 runs, the 4 components were always identified.
The saliencies of all the ten features, together with their standard deviations (error bars),
are shown in Fig. 1. We conclude that, in this case, the algorithm successfully locates
the clusters and correctly assigns the feature saliencies. See [14] for more details on this
experiment.
Figure 1: Feature saliency for 10-D 4-component
Gaussian mixture. Only the first two features are relevant. The error bars show one standard deviation.
Figure 2: Feature saliency for the Trunk
data. The smaller the feature number, the
more important is the feature.
In the next experiment, we consider Trunk's data [20], which has two 20-dimensional Gaussian classes with means m and −m, where m = (1, 1/√2, 1/√3, ..., 1/√20), and identity covariances. Data is obtained by sampling 5000 points from each of these two Gaussians. Note that these features have a descending order of relevance. As above, the initial K is set to 30. In all the 10 runs performed, two components were always detected. The
values of the feature saliencies are shown in Fig. 2. We see the general trend that as the
feature number increases, the saliency decreases, following the true characteristics of the
data.
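Trunk's construction is easy to reproduce (a sketch assuming the means ±(1, 1/√2, ..., 1/√20) and identity covariance stated above; the function name is ours):

```python
import numpy as np

def trunk_data(n_per_class=5000, d=20, seed=0):
    """Two Gaussian classes with means +m and -m, m_l = 1/sqrt(l), identity covariance."""
    rng = np.random.default_rng(seed)
    m = 1.0 / np.sqrt(np.arange(1, d + 1))   # feature relevance decays with the index l
    X0 = rng.normal(loc=+m, scale=1.0, size=(n_per_class, d))
    X1 = rng.normal(loc=-m, scale=1.0, size=(n_per_class, d))
    y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]
    return np.vstack([X0, X1]), y
```

The class-mean separation along feature l is 2/√l, so earlier features are more relevant, matching the descending saliency trend in Fig. 2.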
Feature saliency values were also computed for the ?wine? data set (available at the UCI
repository at www.ics.uci.edu/?mlearn/MLRepository.html), consisting of
178 13-dimensional points in three classes. After standardizing all features to zero mean
and unit variance, we applied the LNKnet supervised feature selection algorithm (available
at www.ll.mit.edu/IST/lnknet/). The nine features selected by LNKnet are 7, 13,
1, 5, 10, 2, 12, 6, 9. Our feature saliency algorithm (with no class labels) yielded the values
Table 1: Feature saliency of wine data

Feature:    1     2     3     4     5     6     7     8     9    10    11    12    13
Saliency: 0.94  0.77  0.10  0.59  0.14  0.99  1.00  0.66  0.94  0.85  0.88  1.00  0.83
in Table 1. Ranking the features in descending order of saliency, we get the ordering: 7, 12,
6, 1, 9, 11, 10, 13, 2, 8, 4, 5, 3. The top 5 features (7, 12, 6, 1, 9) are all in the subset selected
by LNKnet. If we skip the sixth feature (11), the following three features (10, 13, 2) were
also selected by LNKnet. Thus we can see that for this data set, our algorithm, though
totally unsupervised, performs comparably with a supervised feature selection algorithm.
4 A Feature Selection Wrapper
Our second approach is more traditional in the sense that it selects a feature subset, instead
of estimating feature saliency. The number of mixture components is assumed known a
priori, though no restriction on the covariance of the Gaussian components is imposed.
4.1 Irrelevant Features and Conditional Independence
Assume that the class labels, c, and the full feature vector, y, follow some joint probability function p(c, y). In supervised learning [13], a feature subset y_U is considered irrelevant if it is conditionally independent of the label c, given the remaining features y_R; that is, if p(c | y_R, y_U) = p(c | y_R), where y is split into two subsets: 'useful' features y_R and 'non-useful' features y_U (here, U is the index set of the non-useful features). It is easy to show that this implies

    p(c | y) = p(c | y_R).    (9)

To generalize this notion to unsupervised learning, we propose to let the expectations w_ij (a byproduct of the EM algorithm) play the role of the missing class labels. Recall that the w_ij (see (3)) are posterior class probabilities, Pr[class j | y_i]. Consider the posterior probabilities based on all the features, and only on the useful features, respectively

    w_ij ∝ α̂_j p(y_i | θ̂_j)   and   u_ij ∝ α̂_j p(y_i^(R) | θ̂_j^(R)),    (10)

where y_i^(R) is the subset of relevant features of sample y_i (of course, the w_ij and u_ij have to be normalized such that Σ_j w_ij = 1 and Σ_j u_ij = 1). If y_U is a completely irrelevant feature subset, then u_ij equals w_ij exactly, because of the conditional independence in (9), applied to (3). In practice, such features rarely exist, though they do exhibit different degrees of irrelevance. So we follow the suggestion in [13], and find the subset y_U that gives u_ij as close to w_ij as possible. As both w_ij and u_ij are probabilities, a natural criterion for assessing their closeness is the expected value of the Kullback-Leibler divergence (KLD, [3]). This criterion is computed as a sample mean

    D = (1/n) Σ_{i=1}^n Σ_{j=1}^K w_ij log (w_ij / u_ij)    (11)

in our case. A low value of D indicates that the features in y_U are 'almost' conditionally independent from the expected class labels, given the features in y_R.
In practice, we start by obtaining reasonable initial estimates of the w_ij by running EM using all the features, and set U = ∅. At each stage, we find the feature whose addition to U yields the smallest D, and add it to U. EM is then run again, using the features not in U, to update the posterior probabilities w_ij. The process is then repeated until only one feature remains, in what can be considered as a backward search algorithm that yields a sorting of the features by decreasing order of irrelevance.
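One pass of this backward elimination can be sketched as follows (a simplified illustration; `posteriors(subset)` stands for the (n, K) matrix of posteriors w_ij obtained by re-running EM on a feature subset, which we abstract here as a user-supplied callable, and all names are ours):

```python
import numpy as np

def kld(W, U):
    """Mean KL divergence of eq. (11) between two (n, K) posterior matrices."""
    eps = 1e-12  # guards log(0) for hard assignments
    return np.mean(np.sum(W * np.log((W + eps) / (U + eps)), axis=1))

def backward_sort(features, posteriors):
    """Greedy backward search: sort features by decreasing order of irrelevance."""
    remaining = list(features)
    removed = []
    while len(remaining) > 1:
        W = posteriors(remaining)  # reference posteriors using current features
        # remove the feature whose absence perturbs the posteriors least
        best = min(remaining,
                   key=lambda f: kld(W, posteriors([g for g in remaining if g != f])))
        remaining.remove(best)
        removed.append(best)
    return removed + remaining  # most irrelevant first, most relevant last
```

In the real algorithm each call to `posteriors` involves an EM run, so the search wraps mixture estimation in an outer loop, which is what makes this a wrapper method.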
4.2 The assignment entropy
Given a method to sort the features in the order of relevance, we now require a method
to measure how good each subset is. Unlike in supervised learning, we can not resort
to classification accuracy. We adopt the criterion that a clustering is good if the clusters
are 'crisp', i.e., if, for every i, w_ij ≈ 1 for some j. A natural way to formalize this is to consider the mean entropy of the w_ij; that is, the clustering is considered to be good if

    H = −(1/n) Σ_{i=1}^n Σ_{j=1}^K w_ij log w_ij

is small. In the sequel, we call H 'the entropy of the assignment'. An important characteristic of the entropy is that it cannot increase when more features are used (because, for any random variables A, B, and C, H(A | B, C) ≤ H(A | B), a fundamental inequality of information theory [3]; note that H is a conditional entropy estimate). Moreover, H exhibits a diminishing returns behavior (decreasing abruptly as the most relevant features are included, but changing little when less relevant features are used). Our empirical results show that H indeed has a strong relationship with the quality of the clusters. Of course, during the backward search, one can also consider picking the next feature whose removal least increases H, rather than the one yielding the smallest KLD; both options are explored in the experiments. Finally, we mention that other minimum-entropy-type criteria have been recently used for clustering [7], [18], but not for feature selection.
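The assignment entropy is a one-line computation on the (n, K) responsibility matrix (sketch; a small constant guards log 0 for hard assignments):

```python
import numpy as np

def assignment_entropy(w):
    """Mean entropy H of the soft assignments; small H means 'crisp' clusters."""
    eps = 1e-12
    return -np.mean(np.sum(w * np.log(w + eps), axis=1))
```

A perfectly crisp assignment gives H near 0, while a uniform assignment over K clusters gives the maximum value log K.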
4.3 Experiments
We have conducted experiments on data sets commonly used for supervised learning tasks.
Since we are doing unsupervised learning, the class labels are, of course, withheld and
only used for evaluation. The two heuristics for selecting the next feature to be removed
(based on minimum KLD and minimum entropy) are considered in different runs. To assess
clustering quality, we assign each data point to the Gaussian component that most likely
generated it and then compare this labelling with the ground-truth. Table 2 summarizes the
characteristics of the data sets for which results are reported here (all available from the
UCI repository); we have also performed tests on other data sets achieving similar results.
The experimental results shown in Fig. 3 reveal that the general trend of the error rate agrees well with H. The error rates either have a minimum close to the 'knee' of the H curve, or the curve becomes flat. The two heuristics for selecting the feature to be removed perform comparably. For the cover type data set, the KLD heuristic yields lower error rates than the one based on H, while the contrary happens for the image segmentation and WBC data sets.
5 Concluding Remarks and Future Work
The two approaches for unsupervised feature selection herein proposed have different advantages and drawbacks. The first approach avoids explicit feature search and does not
require a pre-specified number of clusters; however, it assumes that the features are conditionally independent, given the components. The second approach places no restriction
on the covariances, but it does assume knowledge of the number of components. We believe that both approaches can be useful in different scenarios, depending on which set of
assumptions fits the given data better.
Several issues require further work: weakly relevant features (in the sense of [12]) are not
removed by the first algorithm while the second approach relies on a good initial clustering.
Overcoming these problems will make the methods more generally applicable. We also
need to investigate the scalability of the proposed algorithms; ideas such as those in [1] can
be exploited.
Table 2: Some details of the data sets (WBC stands for Wisconsin breast cancer).
    Name                 No. points used   No. of features   No. of classes
    cover type                2000                10                4
    image segmentation        1000                18                7
    WBC                        569                30                2
    wine                       178                13                3

[Figure 3 panels omitted: entropy and error-rate curves versus the number of features for each data set; see the caption below.]
Figure 3: (a) and (b): cover type; (c) and (d): image segmentation; (e) and (f): WBC; (g) and (h): wine. Feature removal by minimum KLD (left column) and minimum H (right column). Solid lines: error rates; dotted lines: H. Error bars correspond to one standard deviation over 10 runs.
References
[1] P. Bradley, U. Fayyad, and C. Reina. Clustering very large databases using EM mixture models. In Proc. 15th Intern. Conf. on Pattern Recognition, pp. 76-80, 2000.
[2] G. Celeux, S. Chrétien, F. Forbes, and A. Mkhadri. A component-wise EM algorithm for mixtures. Journal of Computational and Graphical Statistics, 10:699-712, 2001.
[3] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[4] M. Dash and H. Liu. Feature selection for clustering. In Proc. of Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 110-121, 2000.
[5] M. Devaney and A. Ram. Efficient feature selection in conceptual clustering. In Proc. ICML'1997, pp. 92-97, 1997.
[6] J. Dy and C. Brodley. Feature subset selection and order identification for unsupervised learning. In Proc. ICML'2000, pp. 247-254, 2000.
[7] E. Gokcay and J. Principe. Information theoretic clustering. IEEE Trans. on PAMI, 24(2):158-171, 2002.
[8] P. Gustafson, P. Carbonetto, N. Thompson, and N. de Freitas. Bayesian feature weighting for unsupervised learning, with application to object recognition. In Proc. of the 9th Intern. Workshop on Artificial Intelligence and Statistics, 2003.
[9] M. Figueiredo and A. Jain. Unsupervised learning of finite mixture models. IEEE Trans. on PAMI, 24(3):381-396, 2002.
[10] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[11] Y. Kim, W. Street, and F. Menczer. Feature selection in unsupervised learning via evolutionary search. In Proc. ACM SIGKDD, pp. 365-369, 2000.
[12] R. Kohavi and G. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997.
[13] D. Koller and M. Sahami. Toward optimal feature selection. In Proc. ICML'1996, pp. 284-292, 1996.
[14] M. Law, M. Figueiredo, and A. Jain. Feature saliency in unsupervised learning. Tech. Rep., Dept. Computer Science and Eng., Michigan State Univ., 2002. Available at http://www.cse.msu.edu/~lawhiu/papers/TR02.ps.gz.
[15] G. McLachlan and K. Basford. Mixture Models: Inference and Application to Clustering. Marcel Dekker, New York, 1988.
[16] P. Mitra and C. A. Murthy. Unsupervised feature selection using feature similarity. IEEE Trans. on PAMI, 24(3):301-312, 2002.
[17] D. Modha and W. Scott-Spangler. Feature weighting in k-means clustering. Machine Learning, 2002. To appear.
[18] S. Roberts, C. Holmes, and D. Denison. Minimum-entropy data partitioning using RJ-MCMC. IEEE Trans. on PAMI, 23(8):909-914, 2001.
[19] L. Talavera. Dependency-based feature selection for clustering symbolic data. Intelligent Data Analysis, 4:19-28, 2000.
[20] G. Trunk. A problem of dimensionality: A simple example. IEEE Trans. on PAMI, 1(3):306-307, 1979.
[21] S. Vaithyanathan and B. Dom. Generalized model selection for unsupervised learning in high dimensions. In S. Solla, T. Leen, and K. Muller, eds., Proc. of NIPS 12. MIT Press, 2000.
[22] E. Xing, M. Jordan, and R. Karp. Feature selection for high-dimensional genomic microarray data. In Proc. ICML'2001, pp. 601-608, 2001.
[23] C. Wallace and P. Freeman. Estimation and inference via compact coding. Journal of the Royal Statistical Society (B), 49(3):241-252, 1987.
[24] C. S. Wallace and D. L. Dowe. MML clustering of multi-state, Poisson, von Mises circular and Gaussian distributions. Statistics and Computing, 10:73-83, 2000.
The Stability of Kernel Principal
Components Analysis and its Relation to
the Process Eigenspectrum
John Shawe-Taylor
Royal Holloway
University of London
john@cs.rhul.ac.uk
Christopher K. I. Williams
School of Informatics
University of Edinburgh
c.k.i.williams@ed.ac.uk
Abstract
In this paper we analyze the relationships between the eigenvalues
of the m × m Gram matrix K for a kernel k(·, ·) corresponding to a
sample x_1, ..., x_m drawn from a density p(x) and the eigenvalues
of the corresponding continuous eigenproblem. We bound the differences between the two spectra and provide a performance bound
on kernel PCA.
1
Introduction
Over recent years there has been a considerable amount of interest in kernel methods
for supervised learning (e.g. Support Vector Machines and Gaussian Process prediction) and for unsupervised learning (e.g. kernel PCA, Schölkopf et al. (1998)). In
this paper we study the stability of the subspace of feature space extracted by kernel
PCA with respect to the sample of size m, and relate this to the feature space that
would be extracted in the infinite sample-size limit. This analysis essentially "lifts"
into (a potentially infinite dimensional) feature space an analysis which can also
be carried out for peA, comparing the k-dimensional eigenspace extracted from
a sample covariance matrix and the k-dimensional eigenspace extracted from the
population covariance matrix, and comparing the residuals from the k-dimensional
compression for the m-sample and the population.
Earlier work by Shawe-Taylor et al. (2002) discussed the concentration of spectral
properties of Gram matrices and of the residuals of fixed projections. However,
these results gave deviation bounds on the sampling variability of the eigenvalues of
the Gram matrix, but did not address the relationship of sample and population
eigenvalues, or the estimation problem of the residual of PCA on new data.
The structure of the remainder of the paper is as follows. In section 2 we provide
background on the continuous kernel eigenproblem, and the relationship between
the eigenvalues of certain matrices and the expected residuals when projecting into
spaces of dimension k. Section 3 provides inequality relationships between the
process eigenvalues and the expectation of the Gram matrix eigenvalues. Section 4
presents some concentration results and uses these to develop an approximate chain
of inequalities. In section 5 we obtain a performance bound on kernel PCA, relating
the performance on the training sample to the expected performance wrt p(x).
2 Background

2.1 The kernel eigenproblem
For a given kernel function k(·,·) the m × m Gram matrix K has entries k(x_i, x_j),
i, j = 1, ..., m, where {x_i : i = 1, ..., m} is a given dataset. For Mercer kernels K
is symmetric positive semi-definite. We denote the eigenvalues of the Gram matrix
as λ̂_1 ≥ λ̂_2 ≥ ... ≥ λ̂_m ≥ 0 and write its eigendecomposition as K = ZΛ̂Z', where Λ̂
is a diagonal matrix of the eigenvalues and Z' denotes the transpose of the matrix Z.
The eigenvalues are also referred to as the spectrum of the Gram matrix.
We now describe the relationship between the eigenvalues of the Gram matrix and
those of the underlying process. For a given kernel function and density p(x) on a
space X, we can also write down the eigenfunction problem

    ∫_X k(x,y) p(x) φ_i(x) dx = λ_i φ_i(y).        (1)

Note that the eigenfunctions are orthonormal with respect to p(x), i.e.
∫_X φ_i(x) p(x) φ_j(x) dx = δ_ij. Let the eigenvalues be ordered so that λ_1 ≥ λ_2 ≥ ....
This continuous eigenproblem can be approximated in the following way. Let
{x_i : i = 1, ..., m} be a sample drawn according to p(x). Then, as pointed out in
Williams and Seeger (2000), we can approximate the integral with weight function
p(x) by an average over the sample points, and then plug in y = x_j for j = 1, ..., m
to obtain the matrix eigenproblem.
Thus we see that μ_i := λ̂_i/m is an obvious estimator for the i-th eigenvalue of the
continuous problem. The theory of the numerical solution of eigenvalue problems
(Baker 1977, Theorem 3.4) shows that for a fixed k, μ_k will converge to λ_k in the
limit as m → ∞.
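This estimator is easy to compute in practice. The following sketch is our own illustration (not the authors' code), using NumPy with the Gaussian kernel and sampling density of the one-dimensional example discussed next:

```python
import numpy as np

def gram_eigenvalue_estimates(x, b=3.0):
    """Return mu_i = lambda_hat_i / m, the Gram-matrix estimates of the
    process eigenvalues, for the kernel k(x, y) = exp(-b (x - y)^2)."""
    m = len(x)
    K = np.exp(-b * (x[:, None] - x[None, :]) ** 2)   # m x m Gram matrix
    lam_hat = np.sort(np.linalg.eigvalsh(K))[::-1]    # descending eigenvalues
    return lam_hat / m

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.5, size=500)     # p(x) = N(0, 1/4)
mu = gram_eigenvalue_estimates(x)
print(mu[:5])                          # leading estimates of the process eigenvalues
```

Since k(x, x) = 1 for this kernel, the estimates satisfy Σ_i μ_i = trace(K)/m = 1, a convenient sanity check.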
For the case that X is one-dimensional, p(x) is Gaussian and k(x,y) = exp(−b(x − y)²), there are analytic results for the eigenvalues and eigenfunctions of equation (1),
as given in section 4 of Zhu et al. (1998). A plot in Williams and Seeger (2000) for
m = 500 with b = 3 and p(x) ~ N(0, 1/4) shows good agreement between μ_i and λ_i
for small i, but that for larger i the matrix eigenvalues underestimate the process
eigenvalues. One of the by-products of this paper will be bounds on the degree of
underestimation for this estimation problem in a fully general setting.
Koltchinskii and Gine (2000) discuss a number of results, including rates of convergence of the μ-spectrum to the λ-spectrum. The measure they use compares the
whole spectrum rather than individual eigenvalues or subsets of eigenvalues. They
also do not deal with the estimation problem for PCA residuals.
2.2 Projections, residuals and eigenvalues
The approach adopted in the proofs of the next section is to relate the eigenvalues
to the sums of squares of residuals. Let x be a random variable in d dimensions,
and let X be a d × m matrix containing m sample vectors x_1, ..., x_m. Consider
the m × m matrix M = X'X with eigendecomposition M = ZΛ̂Z'. Then taking
X = Λ̂^{1/2}Z' we obtain a finite dimensional version of Mercer's theorem. To set the
scene, we now present a short description of the residuals viewpoint.
The starting point is the singular value decomposition of X = UΣZ', where U
and Z are orthonormal matrices and Σ is a diagonal matrix containing the singular
values (in descending order). We can now reconstruct the eigenvalue decomposition
of M = X'X = ZΣU'UΣZ' = ZΛ̂Z', where Λ̂ = Σ². But equally we can construct
a d × d matrix N = XX' = UΣZ'ZΣU' = UΛ̂U', with the same eigenvalues as M.
We have made a slight abuse of notation by using Λ̂ to represent two matrices of
potentially different dimensions, but the larger is simply an extension of the smaller
with 0's. Note that N = mĈ_X, where Ĉ_X is the sample correlation matrix.
Let V be a linear space spanned by k linearly independent vectors. Let P_V(x)
(P_V^⊥(x)) be the projection of x onto V (onto the space perpendicular to V), so that
||x||² = ||P_V(x)||² + ||P_V^⊥(x)||². Using the Courant-Fischer minimax theorem it can be
proved (Shawe-Taylor et al., 2002, equation 4) that

    Σ_{i=k+1}^m λ_i(M) = Σ_{j=1}^m ||x_j||² − Σ_{i=1}^k λ_i(M) = min_{dim(V)=k} Σ_{j=1}^m ||P_V^⊥(x_j)||².        (2)

Hence the subspace spanned by the first k eigenvectors is characterised as that for
which the sum of the squares of the residuals is minimal. We can also obtain similar
results for the population case, e.g. Σ_{i=1}^k λ_i = max_{dim(V)=k} E[||P_V(x)||²].
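The identity in equation (2) can be checked numerically. The sketch below is our own illustration (arbitrary data and sizes): it compares the residual of projecting the sample onto the span of the top-k eigenvectors with the sum of the remaining eigenvalues of M.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 10, 30, 3
X = rng.normal(size=(d, m))                 # columns are the sample vectors x_j

M = X.T @ X
lam, Z = np.linalg.eigh(M)                  # ascending order
lam, Z = lam[::-1], Z[:, ::-1]              # lambda_1(M) >= lambda_2(M) >= ...

U = X @ Z[:, :k] / np.sqrt(lam[:k])         # top-k left singular vectors of X
P = U @ U.T                                 # projection onto V = span(u_1, ..., u_k)
residual = np.sum((X - P @ X) ** 2)         # sum_j ||P_V^perp(x_j)||^2

print(residual, lam[k:].sum())              # both sides of equation (2)
```

The two printed numbers agree to floating-point precision, confirming that the top-k eigenspace minimises the residual in (2).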
2.3 Residuals in feature space
Frequently, we consider all of the above as occurring in a kernel defined feature
space, so that wherever we have written a vector x we should have put ψ(x),
where ψ is the corresponding feature map ψ : x ∈ X ↦ ψ(x) ∈ F to a feature
space F. Hence, the matrix M has entries M_ij = ⟨ψ(x_i), ψ(x_j)⟩. The kernel
function computes the composition of the inner product with the feature maps,
k(x, z) = ⟨ψ(x), ψ(z)⟩ = ψ(x)'ψ(z), which can in many cases be computed without
explicitly evaluating the mapping ψ. We would also like to evaluate the projections
into eigenspaces without explicitly computing the feature mapping ψ. This can be
done as follows. Let u_i be the i-th singular vector in the feature space, that is
the i-th eigenvector of the matrix N, with the corresponding singular value being
σ_i = √λ̂_i and the corresponding eigenvector of M being z_i. The projection of an
input x onto u_i is given by

    ψ(x)'u_i = (ψ(x)'U)_i = (ψ(x)'XZΣ⁻¹)_i = (k'ZΣ⁻¹)_i,

where we have used the fact that X = UΣZ' and k_j = ψ(x)'ψ(x_j) = k(x, x_j).
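This projection can be implemented with kernel evaluations alone. A minimal sketch (our own illustrative code; no feature-space centring is performed):

```python
import numpy as np

def kpca_project(X_train, x_new, k, kernel):
    """Project psi(x_new) onto the first k kernel-PCA directions using only
    kernel evaluations (sketch; no feature-space centring is performed)."""
    K = kernel(X_train, X_train)                  # Gram matrix M
    lam, Z = np.linalg.eigh(K)
    lam, Z = lam[::-1][:k], Z[:, ::-1][:, :k]     # top-k eigenpairs of M
    kvec = kernel(X_train, x_new[None, :])[:, 0]  # k_j = k(x_new, x_j)
    return kvec @ Z / np.sqrt(lam)                # (k' Z Sigma^{-1})_i

rbf = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2))
rng = np.random.default_rng(2)
X_train = rng.normal(size=(40, 2))
proj = kpca_project(X_train, X_train[0], k=5, kernel=rbf)
print(proj)                                       # the 5 projection coordinates
```

For a training point x_j the formula reduces to ψ(x_j)'u_i = √λ̂_i z_{ji}, which gives a direct way to check the implementation.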
Our final background observation concerns the kernel operator and its eigenspaces.
The operator in question is

    K(f)(x) = ∫_X k(x,z) f(z) p(z) dz.

Provided the operator is positive semi-definite, by Mercer's theorem we can decompose k(x,z) as a sum of eigenfunctions,

    k(x,z) = Σ_{i=1}^∞ λ_i φ_i(x) φ_i(z) = ⟨ψ(x), ψ(z)⟩,

where the functions (φ_i(x))_{i=1}^∞ form a complete orthonormal basis
with respect to the inner product ⟨f, g⟩_p = ∫_X f(x) g(x) p(x) dx and ψ(x) is the
feature space mapping

    ψ : x → (ψ_i(x))_{i=1}^∞ = (√λ_i φ_i(x))_{i=1}^∞ ∈ F.

Note that φ_i(x) has norm 1 and satisfies λ_i φ_i(x) = ∫_X k(x,z) φ_i(z) p(z) dz (equation
1), so that

    λ_i = ∫_{X²} k(y,z) φ_i(y) φ_i(z) p(z) p(y) dy dz.        (3)
If we let φ(x) = (φ_i(x))_{i=1}^∞ ∈ F, we can define the unit vector u_i ∈ F corresponding
to λ_i by u_i = ∫_X φ_i(x) φ(x) p(x) dx. For a general function f(x) we can similarly
define the vector f = ∫_X f(x) φ(x) p(x) dx. Now the expected square of the norm of
the projection P_f(ψ(x)) onto the vector f (assumed to be of norm 1) of an input
ψ(x) drawn according to p(x) is given by

    E[||P_f(ψ(x))||²] = ∫_X ||P_f(ψ(x))||² p(x) dx = ∫_X (f'ψ(x))² p(x) dx
    = ∫_X ∫_X f(y) φ(y)'ψ(x) p(y) dy ∫_X f(z) φ(z)'ψ(x) p(z) dz p(x) dx
    = ∫_{X³} f(y) f(z) Σ_j √λ_j φ_j(y) φ_j(x) Σ_ℓ √λ_ℓ φ_ℓ(z) φ_ℓ(x) p(y) p(z) p(x) dy dz dx
    = Σ_{j,ℓ} √(λ_j λ_ℓ) ∫_X f(y) φ_j(y) p(y) dy ∫_X f(z) φ_ℓ(z) p(z) dz ∫_X φ_j(x) φ_ℓ(x) p(x) dx
    = ∫_{X²} f(y) f(z) Σ_j λ_j φ_j(y) φ_j(z) p(y) p(z) dy dz
    = ∫_{X²} f(y) f(z) k(y,z) p(y) p(z) dy dz.

Since all vectors f in the subspace spanned by the image of the input space in F
can be expressed in this fashion, it follows using (3) that the finite case
characterisation of eigenvalues and eigenvectors is replaced by an expectation:

    λ_k = max_{dim(V)=k} min_{0≠v∈V} E[||P_v(ψ(x))||²],        (4)

where V is a linear subspace of the feature space F. Similarly,

    Σ_{i=1}^k λ_i = max_{dim(V)=k} E[||P_V(ψ(x))||²] = E[||ψ(x)||²] − min_{dim(V)=k} E[||P_V^⊥(ψ(x))||²],        (5)

where P_V(ψ(x)) (P_V^⊥(ψ(x))) is the projection of ψ(x) into the subspace V (the
projection of ψ(x) into the space orthogonal to V).
2.4 Plan of campaign
We are now in a position to motivate the main results of the paper. We consider the
general case of a kernel defined feature space with input space X and probability
density p(x). We fix a sample size m and a draw of m examples S = (x_1, x_2, ..., x_m)
according to p. Further we fix a feature dimension k. Let V̂_k be the space spanned by
the first k eigenvectors of the sample kernel matrix K with corresponding eigenvalues
λ̂_1, λ̂_2, ..., λ̂_k, while V_k is the space spanned by the first k process eigenvectors with
corresponding eigenvalues λ_1, λ_2, ..., λ_k. Similarly, let Ê[f(x)] denote expectation
with respect to the sample, Ê[f(x)] = (1/m) Σ_{i=1}^m f(x_i), while as before E[·] denotes
expectation with respect to p.
We are interested in the relationships between the following quantities:
(i) Ê[||P_{V̂_k}(ψ(x))||²] = (1/m) Σ_{i=1}^k λ̂_i = Σ_{i=1}^k μ_i, (ii) E[||P_{V_k}(ψ(x))||²] = Σ_{i=1}^k λ_i,
(iii) E[||P_{V̂_k}(ψ(x))||²] and (iv) Ê[||P_{V_k}(ψ(x))||²]. Bounding the difference between the first
and second will relate the process eigenvalues to the sample eigenvalues, while the
difference between the first and third will bound the expected performance of the
space identified by kernel PCA when used on new data.
Our first two observations follow simply from equation (5):

    Ê[||P_{V̂_k}(ψ(x))||²] = (1/m) Σ_{i=1}^k λ̂_i ≥ Ê[||P_{V_k}(ψ(x))||²],        (6)

and

    E[||P_{V_k}(ψ(x))||²] = Σ_{i=1}^k λ_i ≥ E[||P_{V̂_k}(ψ(x))||²].        (7)

Our strategy will be to show that the right hand side of inequality (6) and the left
hand side of inequality (7) are close in value, making the two inequalities approximately a chain of inequalities. We then bound the difference between the first and
last entries in the chain.
3 Averaging over Samples and Population Eigenvalues
The sample correlation matrix is Ĉ_X = (1/m)XX', with eigenvalues μ_1 ≥ μ_2 ≥ ... ≥ μ_d.
In the notation of section 2, μ_i = (1/m)λ̂_i. The corresponding population
correlation matrix has eigenvalues Λ_1 ≥ Λ_2 ≥ ... ≥ Λ_d and eigenvectors u_1, ..., u_d.
Again by the observations above these are the process eigenvalues. Let E_m[·] denote
averages over random samples of size m.
The following proposition describes how E_m[μ_1] is related to Λ_1 and E_m[μ_d] is related
to Λ_d. It requires no assumption of Gaussianity.

Proposition 1 (Anderson, 1963, pp 145-146) E_m[μ_1] ≥ Λ_1 and E_m[μ_d] ≤ Λ_d.

Proof: By the results of the previous section we have

    μ_1 = max_{0≠c} Ê[||P_c(x)||²] ≥ Ê[||P_{u_1}(x)||²].

We now apply the expectation operator E_m to both sides. On the RHS we get

    E_m Ê[||P_{u_1}(x)||²] = E[||P_{u_1}(x)||²] = Λ_1

by equation (5), which completes the proof. Correspondingly, μ_d is characterized by
μ_d = min_{0≠c} Ê[||P_c(x)||²] (minor components analysis). □
Interpreting this result, we see that E_m[μ_1] overestimates Λ_1, while E_m[μ_d] underestimates Λ_d.
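Proposition 1 is easy to illustrate by simulation. The sketch below is our own illustration (not an experiment from the paper): a Gaussian population with an assumed diagonal second-moment matrix, averaging the extreme sample eigenvalues over many draws.

```python
import numpy as np

rng = np.random.default_rng(3)
pop_eigs = np.array([3.0, 1.0, 0.25])   # population eigenvalues Lambda_1..Lambda_d
d, m, trials = 3, 10, 2000

mu1, mud = [], []
for _ in range(trials):
    # Columns are m samples from a zero-mean Gaussian with the above
    # second-moment eigenvalues.
    X = rng.normal(size=(d, m)) * np.sqrt(pop_eigs)[:, None]
    mu = np.sort(np.linalg.eigvalsh(X @ X.T / m))[::-1]   # sample eigenvalues
    mu1.append(mu[0])
    mud.append(mu[-1])

print(np.mean(mu1), np.mean(mud))       # compare with Lambda_1 = 3, Lambda_d = 0.25
```

With only m = 10 samples per trial, the average largest sample eigenvalue clearly exceeds 3 and the average smallest falls below 0.25, as the proposition predicts.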
Proposition 1 can be generalized to give the following result, where we have also
allowed for a kernel defined feature space of dimension N_F ≤ ∞.

Proposition 2 Using the above notation, for any k, 1 ≤ k ≤ m, E_m[Σ_{i=1}^k μ_i] ≥
Σ_{i=1}^k λ_i and E_m[Σ_{i=k+1}^m μ_i] ≤ Σ_{i=k+1}^{N_F} λ_i.

Proof: Let V_k be the space spanned by the first k process eigenvectors. Then from
the derivations above we have

    Σ_{i=1}^k μ_i = max_{dim(V)=k} Ê[||P_V(ψ(x))||²] ≥ Ê[||P_{V_k}(ψ(x))||²].

Again, applying the expectation operator E_m to both sides of this equation and
taking equation (5) into account, the first inequality follows. To prove the second we
turn max into min, P into P^⊥, and reverse the inequality. Again taking expectations
of both sides proves the second part. □

Applying the results obtained in this section, it follows that E_m[μ_1] will overestimate
λ_1, and the cumulative sum Σ_{i=1}^k E_m[μ_i] will overestimate Σ_{i=1}^k λ_i. At the other
end, clearly for N_F ≥ k > m, μ_k = 0 is an underestimate of λ_k.
4 Concentration of eigenvalues
We now make use of results from Shawe-Taylor et al. (2002) concerning the concentration of the eigenvalue spectrum of the Gram matrix. We have

Theorem 3 Let K(x, z) be a positive semi-definite kernel function on a space X,
and let p be a probability density function on X. Fix natural numbers m and 1 ≤
k < m and let S = (x_1, ..., x_m) ∈ X^m be a sample of m points drawn according to
p. Then for all t > 0,

    P{ |(1/m) λ̂^{≤k}(S) − E_m[(1/m) λ̂^{≤k}(S)]| ≥ t } ≤ 2 exp(−2t²m/R⁴),

where λ̂^{≤k}(S) is the sum of the largest k eigenvalues of the matrix K(S) with entries
K(S)_ij = K(x_i, x_j) and R² = max_{x∈X} K(x, x).
This follows by a similar derivation to Theorem 5 in Shawe-Taylor et al. (2002).
Our next result concerns the concentration of the residuals with respect to a fixed
subspace. For a subspace V and training set S, we introduce the notation

    F̂_V(S) = Ê[||P_V(ψ(x))||²].

Theorem 4 Let p be a probability density function on X. Fix natural numbers m
and a subspace V, and let S = (x_1, ..., x_m) ∈ X^m be a sample of m points drawn
according to a probability density function p. Then for all t > 0,

    P{ |F̂_V(S) − E_m[F̂_V(S)]| ≥ t } ≤ 2 exp(−2t²m/R⁴).

This is Theorem 6 in Shawe-Taylor et al. (2002).
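The sharpness of these concentration results shows up clearly in simulation. The sketch below is our own illustration (an arbitrary kernel, density and sample size): the scaled top-k eigenvalue sum of Theorem 3 varies only slightly across independent samples.

```python
import numpy as np

rng = np.random.default_rng(4)
m, k, trials, b = 200, 5, 50, 1.0

def top_k_sum(x):
    K = np.exp(-b * (x[:, None] - x[None, :]) ** 2)   # Gram matrix K(S)
    lam = np.sort(np.linalg.eigvalsh(K))[::-1]
    return lam[:k].sum() / m            # (1/m) * sum of the k largest eigenvalues

vals = np.array([top_k_sum(rng.normal(size=m)) for _ in range(trials)])
print(vals.mean(), vals.std())          # the spread is tiny relative to the mean
```

Since k(x, x) = 1 here, each value is at most trace(K)/m = 1, and the empirical standard deviation across resamples is a small fraction of the mean.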
The concentration results of this section are very tight. In the notation of the earlier
sections they show that with high probability

    (1/m) Σ_{i=1}^k λ̂_i ≈ E_m[(1/m) Σ_{i=1}^k λ̂_i]        (8)

and

    Σ_{i=1}^k λ_i ≈ Ê[||P_{V_k}(ψ(x))||²],        (9)

where we have used Theorem 3 to obtain the first approximate equality and Theorem 4 with V = V_k to obtain the second approximate equality.
This gives the sought relationship to create an approximate chain of inequalities

    Ê[||P_{V̂_k}(ψ(x))||²] ≳ Σ_{i=1}^k λ_i ≥ E[||P_{V̂_k}(ψ(x))||²].        (10)

This approximate chain of inequalities could also have been obtained using Proposition 2. It remains to bound the difference between the first and last entries in this
chain. This, together with the concentration results of this section, will deliver the
required bounds on the differences between empirical and process eigenvalues, as
well as providing a performance bound on kernel PCA.
5 Learning a projection matrix
The key observation that enables the analysis bounding the difference between
Ê[||P_{V̂_k}(ψ(x))||²] and E[||P_{V̂_k}(ψ(x))||²] is that we can view the projection norm
||P_{V̂_k}(ψ(x))||² as a linear function of pairs of features from the feature space F.

Proposition 5 The projection norm ||P_{V̂_k}(ψ(x))||² is a linear function f̂ in a feature space F̂ for which the kernel function is given by k̂(x,z) = k(x,z)². Furthermore the 2-norm of the function f̂ is √k.

Proof: Let X = UΣZ' be the singular value decomposition of the sample matrix X
in the feature space. The projection norm is then given by f̂(x) = ||P_{V̂_k}(ψ(x))||² =
ψ(x)'U_k U_k' ψ(x), where U_k is the matrix containing the first k columns of U. Hence
we can write

    ||P_{V̂_k}(ψ(x))||² = Σ_{i,j=1}^{N_F} α_ij ψ(x)_i ψ(x)_j = Σ_{i,j=1}^{N_F} α_ij ψ̂(x)_ij,

where ψ̂ is the projection mapping into the feature space F̂ consisting of all pairs
of F features and α_ij = (U_k U_k')_ij. The standard polynomial construction gives

    k̂(x,z) = k(x,z)² = ( Σ_{i=1}^{N_F} ψ(x)_i ψ(z)_i )² = Σ_{i,j=1}^{N_F} (ψ(x)_i ψ(x)_j)(ψ(z)_i ψ(z)_j) = ⟨ψ̂(x), ψ̂(z)⟩.

It remains to show that the norm of the linear function f̂ is √k. The norm satisfies
(note that ||·||_F denotes the Frobenius norm and u_i the columns of U)

    ||f̂||² = Σ_{i,j=1}^{N_F} α_ij² = ||U_k U_k'||²_F = ⟨ Σ_{i=1}^k u_i u_i', Σ_{j=1}^k u_j u_j' ⟩_F = Σ_{i,j=1}^k (u_i' u_j)² = k,

as required. □

We are now in a position to apply a learning theory bound where we consider a
regression problem for which the target output is the square of the norm of the
sample point, ||ψ(x)||². We restrict the linear function in the space F̂ to have norm
√k. The loss function is then the shortfall between the output of f̂ and the squared
norm.
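The norm computation in Proposition 5 can be verified directly: for any matrix U_k with k orthonormal columns, the Frobenius norm of U_k U_k' is √k. A small numerical check (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
d, k = 12, 4
U, _, _ = np.linalg.svd(rng.normal(size=(d, d)))  # orthonormal columns
Uk = U[:, :k]
alpha = Uk @ Uk.T                                 # alpha_ij = (U_k U_k')_ij
print(np.linalg.norm(alpha, "fro"))               # sqrt(k) = 2.0
```

This holds because U_k U_k' is a rank-k orthogonal projection, so its squared Frobenius norm equals its trace, which is k.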
Using Rademacher complexity theory we can obtain the following theorems.

Theorem 6 If we perform PCA in the feature space defined by a kernel k(x, z),
then with probability greater than 1 − δ, for all 1 ≤ k ≤ m, if we project new data
onto the space V̂_k, the expected squared residual is bounded by

    λ^{>k} ≤ E[||P^⊥_{V̂_k}(ψ(x))||²] ≤ min_{1≤ℓ≤k} [ (1/m) λ̂^{>ℓ}(S) + (1 + √ℓ)/√m · √( (2/m) Σ_{i=1}^m k(x_i, x_i)² ) ] + R² √( (18/m) ln(2m/δ) ),

where λ̂^{>ℓ}(S) denotes the sum of all but the largest ℓ eigenvalues of K(S), the support
of the distribution is in a ball of radius R in the feature space, and
λ_i and λ̂_i are the process and empirical eigenvalues respectively.
Theorem 7 If we perform PCA in the feature space defined by a kernel k(x, z),
then with probability greater than 1 − δ, for all 1 ≤ k ≤ m, if we project new data
onto the space V̂_k, the sum of the largest k process eigenvalues is bounded by

    λ^{≤k} ≥ E[||P_{V̂_k}(ψ(x))||²] ≥ max_{1≤ℓ≤k} [ (1/m) λ̂^{≤ℓ}(S) − (1 + √ℓ)/√m · √( (2/m) Σ_{i=1}^m k(x_i, x_i)² ) ] − R² √( (19/m) ln(2(m+1)/δ) ),

where the support of the distribution is in a ball of radius R in the feature space and
λ_i and λ̂_i are the process and empirical eigenvalues respectively.

The proofs of these results are given in Shawe-Taylor et al. (2003). Theorem 6
implies that if k ≪ m the expected residual E[||P^⊥_{V̂_k}(ψ(x))||²] closely matches the
average sample residual Ê[||P^⊥_{V̂_k}(ψ(x))||²] = (1/m) Σ_{i=k+1}^m λ̂_i, thus providing
a bound for kernel PCA on new data. Theorem 7 implies a good fit between the
partial sums of the largest k empirical and process eigenvalues when √(k/m) is small.
References
Anderson, T. W. (1963). Asymptotic Theory for Principal Component Analysis. Annals
of Mathematical Statistics, 34(1):122-148.
Baker, C. T. H. (1977). The numerical treatment of integral equations. Clarendon Press,
Oxford.
Koltchinskii, V. and Gine, E. (2000). Random matrix approximation of spectra of integral
operators. Bernoulli, 6(1):113-167.
Schölkopf, B., Smola, A., and Müller, K.-R. (1998). Nonlinear component analysis as a
kernel eigenvalue problem. Neural Computation, 10:1299-1319.
Shawe-Taylor, J., Cristianini, N., and Kandola, J. (2002). On the Concentration of Spectral
Properties. In Dietterich, T. G., Becker, S., and Ghahramani, Z., editors, Advances in
Neural Information Processing Systems 14. MIT Press.
Shawe-Taylor, J., Williams, C. K. I., Cristianini, N., and Kandola, J. (2003). On the
Eigenspectrum of the Gram Matrix and the Generalisation Error of Kernel PCA. Technical Report NC2-TR-2003-143, Dept of Computer Science, Royal Holloway, University
of London. Available from http://www.neurocolt.com/archive.html.
Williams, C. K. I. and Seeger, M. (2000). The Effect of the Input Density Distribution on
Kernel-based Classifiers. In Langley, P., editor, Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000). Morgan Kaufmann.
Zhu, H., Williams, C. K. I., Rohwer, R. J., and Morciniec, M. (1998). Gaussian regression
and optimal finite dimensional linear models. In Bishop, C. M., editor, Neural Networks
and Machine Learning. Springer-Verlag, Berlin.
Smith and Miller
Bayesian Inference of Regular Grammar
and Markov Source Models
Kurt R. Smith and Michael I. Miller
Biomedical Computer Laboratory
and
Electronic Signals and Systems Research Laboratory
Washington University, SL Louis. MO 63130
ABSTRACT
In this paper we develop a Bayes criterion which includes the Rissanen
complexity, for inferring regular grammar models. We develop two
methods for regular grammar Bayesian inference. The first method is
based on treating the regular grammar as a 1-dimensional Markov
source, and the second is based on the combinatoric characteristics of
the regular grammar itself. We apply the resulting Bayes criteria to a
particular example in order to show the efficiency of each method.
1 MOTIVATION
We are interested in segmenting electron-microscope autoradiography (EMA) images by
learning representational models for the textures found in the EMA image. In studying
this problem, we have recognized that both structural and statistical features may be
useful for characterizing textures. This has motivated us to study the source modeling
problem for both structural sources and statistical sources. The statistical sources that
we have examined are the class of one and two-dimensional Markov sources (see [Smith
1990] for a Bayesian treatment of Markov random field texture model inference), while
the structural sources that we are primarily interested in here are the class of regular
grammars, which are important due to the role that grammatical constraints may play in
the development of structural features for texture representation.
2 MARKOV SOURCE INFERENCE
Our primary interest here is the development of a complete Bayesian framework for the
process of inferring a regular grammar from a training sequence. However, we have
shown previously that there exists a 1-D Markov source which generates the regular
language defined via some regular grammar [Miller, 1988]. We can therefore develop a
generalized Bayesian inference procedure over the class of 1-D Markov sources which
enables us to learn the Markov source corresponding to the optimal regular grammar.
We begin our analysis by developing the general structure for Bayesian source modeling.
2.1 BAYESIAN APPROACH TO SOURCE MODELING
We state the Bayesian approach to model learning: Given a set of source models
{θ_0, θ_1, ..., θ_{M−1}} and the observation x, choose the source model which most accurately
represents the unknown source that generated x. This decision is made by calculating
Bayes risk over the possible models, which produces a general decision criterion for the
model learning problem:

    max_{θ_j ∈ {θ_0, ..., θ_{M−1}}}  [ log P(x|θ_j) + log P_j ].        (2.1)

Under the additional assumption that the a priori probabilities over the candidate models
are equivalent, the decision criterion becomes

    max_{θ_j ∈ {θ_0, ..., θ_{M−1}}}  log P(x|θ_j),        (2.2)

which is the quantity that we will use in measuring the accuracy of a model's
representation.
2.2 STOCHASTIC COMPLEXITY AND MODEL LEARNING
It is well known that, when given finite data, Bayesian procedures of this kind which do
not have any prior on the models suffer from the fundamental limitation that they will
predict models of greater and greater complexity. This has led others to introduce
priors into the Bayes hypothesis testing procedure based on the complexity of the model
being tested [Rissanen, 1986]. In particular, for the Markov case the complexity is
directly proportional to the number of transition probabilities of the particular model
being tested, with the prior exponentially decreasing with the associated complexity.
We now describe the inclusion of the complexity measure in greater detail.
Following Rissanen, the basic idea is to uncover the model which assigns maximum
probability to the observed data, while also being as simple as possible so as to require a
small Kolmogorov description length. The complexity associated with a model having
k real parameters and a likelihood with n independent samples is the now well-known
(k/2) log n, which allows us to express the generalization of the original Bayes procedure
(2.2) as the quantity

    max_{θ_j ∈ {θ_0, ..., θ_{M−1}}}  [ log P(x_n|θ̂_j) − (k_{θ_j}/2) log n ].        (2.3)

Note well that θ̂_j is the k_{θ_j}-dimensional parameter vector parameterizing model θ_j, which
must be estimated from the observed data x_n. An alternative view of (2.3) is discovered by
viewing the second term as the prior in the Bayes model (2.1), where the prior is defined
as

    P_{θ_j} = e^{−(k_{θ_j}/2) log n}.        (2.4)
2.3 1-D MARKOV SOURCE MODELING

Consider that x_n is a 1-D n-length string of symbols which is generated by an unknown
finite-state Markov source. In examining (2.3), we recognize that for 1-D Markov
sources, log P(x|θ_j) may be written as log Π_{i=1}^n P_{θ_j}(S(x_i)|S(x_{i−1})), where S(x_i) is a state
function which evaluates to a state in the Markov source state set S_{θ_j}. Using this
notation, the Bayes hypothesis test for 1-D Markov sources may be expressed as

    max_{θ_j ∈ {θ_0, ..., θ_{M−1}}}  Σ_{i=1}^n log P_{θ_j}(S(x_i)|S(x_{i−1})).        (2.5)

For the general Markov source inference problem, we know only that the string x_n was
generated by a 1-D Markov source, with the state set S_{θ_j} and the transition probabilities
P_{θ_j}(S_k|S_l), S_k, S_l ∈ S_{θ_j}, unknown. They must therefore be included in the inference
procedure. To include the complexity term for this case, we note that the number of parameters to
be estimated for model θ_j is simply the number of entries in the state-transition matrix
P_{θ_j}, i.e. k_{θ_j} = |S_{θ_j}|².
Therefore for 1-D Markov sources, the generalized Bayes hypothesis
test including complexity may be stated as

    max_{θ_j ∈ {θ_0, ..., θ_{M−1}}}  { (1/n) Σ_{i=1}^n log P̂_{θ_j}(S(x_i)|S(x_{i−1})) − (|S_{θ_j}|²/2n) log n },        (2.6)

where we have divided the entire quantity by n in order to express the criterion in terms
of bits per symbol. Note that a candidate Markov source model θ_j is initially specified
by its order and corresponding state set S_{θ_j}.
The procedure for inferring 1-D Markov source models can thus be stated as follows.
Given a sequence x_n from some unknown source, consider candidate Markov source
models by computing the state function S(x_i) (determined by the candidate model
order) over the entire string x_n. Enumerating the state transitions which occur in x_n
provides an estimate of the state-transition matrix P̂_{θ_j}, which is then used to compute
(2.6). Now, the inferred Markov source becomes the one maximizing (2.6).
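The procedure can be sketched in a few lines. The implementation below is our own illustration (not the authors' code): states are the preceding order-grams, transition probabilities are estimated by relative frequencies, logarithms are base 2, and |S_θ| is taken to be the full state count for the given order.

```python
import numpy as np
from collections import Counter

def mpm_criterion(x, order):
    """Criterion (2.6) in bits per symbol for a Markov model of the given
    order over the symbol sequence x (illustrative sketch: states are the
    preceding `order`-grams; probabilities are relative frequencies)."""
    n = len(x)
    trans, states = Counter(), Counter()
    for i in range(order, n):
        s = tuple(x[i - order:i])              # state S(x_{i-1})
        trans[(s, x[i])] += 1
        states[s] += 1
    loglik = sum(c * np.log2(c / states[s]) for (s, sym), c in trans.items())
    n_states = len(set(x)) ** order            # |S_theta| for the full model
    return loglik / n - (n_states ** 2) / (2 * n) * np.log2(n)

rng = np.random.default_rng(6)
x = [0]                                        # first-order source: repeat the
for _ in range(4999):                          # previous symbol with prob. 0.9
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])

scores = {order: mpm_criterion(x, order) for order in (1, 2, 3)}
print(scores)                                  # order 1 should score highest
```

For this first-order source the criterion should peak at order 1: the higher-order models gain almost nothing in likelihood but pay a quadratically larger complexity penalty.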
3 REGULAR GRAMMAR INFERENCE
Although the Bayes criterion developed for 1-D Markov sources (2.6) is a sufficient
model learning criterion for the class of regular grammars, we will now show that by
taking advantage of the a priori knowledge that the source is a regular grammar, the
inference procedure can be made much more efficient. This a priori knowledge brings a
special structure to the regular grammar inference problem, in that not all allowable
sets of Markov probabilities correspond to regular grammars. In fact, as shown in
[Miller, 1988], corresponding to each regular grammar is a unique set of candidate
probabilities, implying that the Bayesian solution which takes this into account will be
far more efficient. We demonstrate that now.

3.1 BAYESIAN CRITERION USING GRAMMAR COMBINATORICS
Our approach is to use the combinatoric properties of the regular grammar in order to
develop the optimal Bayes hypothesis test. We begin by defining the regular grammar.
Definition: A regular grammar G is a quadruple (V_N, V_T, S_s, R), where V_N and V_T are finite
sets of non-terminal symbols (or states) and terminal symbols respectively, S_s is the
sentence start state, and R is a finite set of production rules consisting of the
transformation of a non-terminal symbol to either a terminal followed by a non-terminal,
or a terminal alone, i.e. S_i → w_j S_k or S_i → w_j.
In the class of regular grammars that we consider, we define the depth of the language
as the maximum number of terminal symbols which make up a non-terminal symbol.
Corresponding to each regular grammar is an associated incidence matrix B, with the
(i,k)-th entry B_{i,k} equal to the number of times there is a production of the form
S_i → w_j S_k ∈ R for some terminal w_j and non-terminals S_i, S_k. Also associated with each
grammar G_i is the set of all n-length strings produced by the grammar, denoted as the
regular language X_n(G_i).
Now we make the quite reasonable assumption that no string in the language X_n(G_i) is
more or less probable a priori than any other string in that language. This indicates that
all n-length strings that can be generated by G_i are equiprobable, with a probability
dictated by the combinatorics of the language as

    P(x_n|G_i) = 1 / |X_n(G_i)|,        (3.1)

where |X_n(G_i)| denotes the number of n-length sequences in the language, which can be
computed by considering the combinatorics of the language as follows:
    |X_n(G_i)| ≈ λ_{G_i}^n,

with λ_{G_i} corresponding to the largest eigenvalue of the state-transition matrix B_{G_i}.
This results from the combinatoric growth rate being determined by the sum of the
entries in the n-th power state-transition matrix B_{G_i}^n, which grows as the largest
eigenvalue λ_{G_i} of B_{G_i} [Blahut, 1987]. We can now write (3.1) in these terms as

    P(x_n|G_i) = λ_{G_i}^{−n},        (3.2)

which expresses the probability of the sequence x_n in terms of the combinatorics of G_i.
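This growth rate is easy to check on a toy example (our own illustration, not from the paper): for the two-state automaton that forbids the substring "bb", the number of n-length strings is a Fibonacci number and grows as λ^n with λ the golden ratio.

```python
import numpy as np

# Incidence matrix of the toy two-state grammar that forbids "bb":
# from state 0 both terminals are allowed (a -> 0, b -> 1); from state 1
# only "a" is allowed (a -> 0).
B = np.array([[1.0, 1.0],
              [1.0, 0.0]])

def count_strings(n):
    """Exact number of n-length strings: paths of length n from state 0."""
    return (np.linalg.matrix_power(B, n) @ np.ones(2))[0]

lam = max(abs(np.linalg.eigvals(B)))           # golden ratio, ~1.618
for n in (5, 10, 20):
    print(n, count_strings(n), lam ** n)       # counts grow as lam**n
```

The ratio of the exact count to λ^n settles to a constant as n grows, which is exactly the behaviour that justifies equation (3.2).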
We now use this combinatoric interpretation of the probability to develop a Bayes
decision criterion over two candidate grammars. Assume that there exists a finite space
of sequences X, all of which may be generated by one of the two possible grammars
{G_0, G_1}. Now, by dividing this observation space X into two decision regions, X_0 (for
G_0) and X_1 (for G_1), we can write the Bayes risk R in terms of the observation
probabilities P(x_n|G_0), P(x_n|G_1):

    R = (1/2) Σ_{x_n ∈ X_1} P(x_n|G_0) + (1/2) Σ_{x_n ∈ X_0} P(x_n|G_1).        (3.3)

This implementation of Bayes risk assumes that sequences from each grammar occur
equiprobably a priori and that the cost of choosing the incorrect grammar is equal to 1.
Now, incorporating the combinatoric counting probabilities (3.2), we can rewrite (3.3)
as

    R = (1/2) Σ_{x_n ∈ X_1} λ_{G_0}^{−n} + (1/2) Σ_{x_n ∈ X_0} λ_{G_1}^{−n},

which can be rewritten

    R = 1/2 + (1/2) Σ_{x_n ∈ X_0} ( λ_{G_1}^{−n} − λ_{G_0}^{−n} ).
The risk is therefore minimized by choosing G_0 if λ_{G_1}^{-n} < λ_{G_0}^{-n} and G_1 if λ_{G_1}^{-n} > λ_{G_0}^{-n}.
This establishes the likelihood ratio for the grammar inference problem:

    λ_{G_1}^{-n} / λ_{G_0}^{-n}  ≷^{G_1}_{G_0}  1,        (3.4)

which can alternatively be expressed in terms of the log as

    max_{(G_0, G_1)}  -n log λ_{G_i}.

Recognizing this as the maximum likelihood decision, this decision criterion is easily
generalized to M hypotheses. Now, by ignoring any complexity component, the
generalized Bayes test for a regular grammar can be stated as
Bayesian Inference of Regular Grammar and Markov Source Models
    max_i  -n log λ̂_{G_i},        (3.5)

where λ̂_{G_i} is the largest eigenvalue of the estimated incidence matrix B̂_{G_i} corresponding
to grammar G_i, where B̂_{G_i} is estimated from x_n.
The complexity factor to be included in this Bayesian criterion differs from the
complexity term in (2.3) due to the fact that the parameters to be estimated are now the
entries in the B̂_{G_i} matrix, which are strictly binary. From a description length
interpretation, then, these parameters can be fully described using 1 bit per entry in B̂_{G_i}.
The complexity term is thus simply |S_{G_i}|², which now allows us to write the Bayes
inference criterion for regular grammars as

    I_{G_i} = -log λ̂_{G_i} - |S_{G_i}|²/n,        (3.6)

in terms of bits per symbol. We can now state the algorithm for inferring grammars.
Regular Grammar Inference Algorithm
1. Initialize the grammar depth to d = 1.
2. Compute |S_{G_i}| = |V_T|^d.
3. Using the state function S_d(x_t) corresponding to the current depth, compute
   the state transitions at all sites t in the observed sequence x_n in order to
   estimate the incidence matrix B̂_{G_i} for the grammar currently being
   considered.
4. Compute λ̂_{G_i} from B̂_{G_i} (recall that this is the largest eigenvalue of B̂_{G_i}).
5. Using λ̂_{G_i} and |S_{G_i}|, compute (3.6); denote this as I_{G_i} = -log λ̂_{G_i} - |S_{G_i}|²/n.
6. Increase the grammar depth d = d+1 and go to 2 (i.e. test another candidate
   grammar) until I_{G_i} discontinues to increase.
The regular grammar of minimum depth which maximizes I_{G_i} (i.e. maximizes (3.6)) is
then the optimal regular grammar source model for the given sequence x_n.
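The algorithm above can be sketched directly in code. This is an illustrative implementation under simplifying assumptions, not the authors' exact one: binary terminals, states identified with the previous d symbols, the largest eigenvalue found by power iteration, and the |S_G|²/n penalty of (3.6). The training data is sampled from a run-length-limited binary language (no four identical symbols in a row, so true depth 3):

```python
import math
import random

def estimate_incidence(x, d):
    # State = the previous d symbols; record which state follows which.
    n_states = 2 ** d
    ids = {}
    def sid(tup):
        return ids.setdefault(tup, len(ids))
    B = [[0] * n_states for _ in range(n_states)]
    for t in range(d, len(x)):
        i = sid(tuple(x[t - d:t]))
        k = sid(tuple(x[t - d + 1:t + 1]))
        B[i][k] = 1  # entries are strictly binary, as in the text
    return B

def largest_eigenvalue(B, iters=300):
    # Power iteration (max-norm) on a non-negative matrix.
    n = len(B)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w) or 1e-12
        v = [wi / lam for wi in w]
    return lam

def criterion(x, d):
    # I_G = -log2(lambda_hat) - |S_G|^2 / n  bits per symbol, as in (3.6)
    lam = largest_eigenvalue(estimate_incidence(x, d))
    return -math.log2(lam) - (2 ** d) ** 2 / len(x)

# Sample a sequence from a depth-3 run-length-limited language
# (no four identical symbols in a row).
random.seed(0)
x, run = [0], 1
for _ in range(3000):
    if run == 3 or random.random() < 0.5:
        x.append(1 - x[-1]); run = 1
    else:
        x.append(x[-1]); run += 1

scores = {d: criterion(x, d) for d in range(1, 5)}
best_depth = max(scores, key=scores.get)  # the criterion peaks at the true depth 3
```

Shallower depths pay a small state penalty but model the data poorly (their estimated eigenvalue is larger), while deeper grammars fit no better and pay a quadratically growing |S_G|² penalty.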
3.2 REGULAR GRAMMAR INFERENCE RESULTS
To compare the efficiency of the two Bayes criteria (2.6) and (3.6), we will consider a
regular grammar inference experiment. The regular grammar that we will attempt to
learn, which we refer to as the 4-0,1s regular grammar, is a run-length constrained binary
grammar which disallows 4 consecutive occurrences of a 0 or a 1. Referring to the
regular grammar definition, we note that this regular grammar can be described by its
incidence matrix

    B_{4-0,1} =
        [ 0 1 0 0 0 0 0 0 ]
        [ 0 0 1 1 0 0 0 0 ]
        [ 0 0 0 0 1 1 0 0 ]
        [ 0 0 0 0 0 0 1 1 ]
        [ 1 1 0 0 0 0 0 0 ]
        [ 0 0 1 1 0 0 0 0 ]
        [ 0 0 0 0 1 1 0 0 ]
        [ 0 0 0 0 0 0 1 0 ]

where the states corresponding to row and column indices are the eight 3-symbol
histories 000, 001, 010, 011, 100, 101, 110, 111; the transitions 000 -> 000 and
111 -> 111 are absent, since either would create a run of four identical symbols.
Note that this regular grammar has a depth equal to 3 and thus the corresponding
Markov source has an order equal to 3.
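As a numerical check on this matrix (written here with states ordered 000 through 111), its largest eigenvalue is the real root of x³ = x² + x + 1, roughly 1.8393, so the 4-0,1s language carries log₂ λ ≈ 0.88 bits per symbol:

```python
# Largest eigenvalue of the 4-0,1s incidence matrix by power iteration.
# Rows/columns ordered 000, 001, 010, 011, 100, 101, 110, 111.
B = [
    [0, 1, 0, 0, 0, 0, 0, 0],  # 000 -> 001 only (a fourth 0 is forbidden)
    [0, 0, 1, 1, 0, 0, 0, 0],  # 001 -> 010, 011
    [0, 0, 0, 0, 1, 1, 0, 0],  # 010 -> 100, 101
    [0, 0, 0, 0, 0, 0, 1, 1],  # 011 -> 110, 111
    [1, 1, 0, 0, 0, 0, 0, 0],  # 100 -> 000, 001
    [0, 0, 1, 1, 0, 0, 0, 0],  # 101 -> 010, 011
    [0, 0, 0, 0, 1, 1, 0, 0],  # 110 -> 100, 101
    [0, 0, 0, 0, 0, 0, 1, 0],  # 111 -> 110 only (a fourth 1 is forbidden)
]

v = [1.0] * 8
lam = 1.0
for _ in range(200):
    w = [sum(B[i][j] * v[j] for j in range(8)) for i in range(8)]
    lam = max(w)
    v = [wi / lam for wi in w]
# lam is the root of x^3 = x^2 + x + 1, about 1.8393
```
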
The inference experiment may be described as follows. Given a training set of length-16
strings from the 4-0,1s language, we apply the Bayes criteria (2.6) and (3.6) in an attempt
to infer the regular grammar in each case. We compute the criteria for five candidate
models of order/depth 1 through 5 (recall that this defines the size of the state set for
the Markov source and the regular grammar, respectively).
Treating the unknown regular grammar as a Markov source, we estimate the
corresponding state-transition matrix P̂ and then compute the Bayes criterion according
to (2.6) for each of the five candidate models. We compute the criterion as a function of
the number of training samples for each candidate model and plot the result in Figure 1a.
Similarly, we estimate the incidence matrix B̂ and compute the Bayes criterion according
to (3.6) for each of the five regular grammar candidate models, and plot the results as a
function of the number of training samples in Figure 1b.
We compare the two Bayesian criteria by examining Figures 1a and 1b. Note that
criterion (3.6) discovers the correct regular grammar (depth = 3) after only 50 training
samples (Figure 1b), while the equivalent Markov source (order = 3) is found only after
almost 500 training samples have been used in computing (2.6) (Figure 1a). This points
out that a much more efficient inference procedure exists for regular grammars by
taking advantage of the a priori grammar information (i.e. only the depth and the binary
incidence matrix B̂ must be estimated), whereas for 1-D Markov sources, both the order
and the real-valued state-transition matrix P̂ must be estimated.
4. CONCLUSION
In conclusion, we stress the importance of casting the source modeling problem within a
Bayesian framework which incorporates priors based on the model complexity and
known model attributes. Using this approach, we have developed an efficient Bayesian
[Figure 1 appears here: panels a) and b) plot the criterion (bits per symbol, roughly
-0.8 to -1.1) against the number of training samples (5 to 50000, log scale), with one
curve per candidate grammar depth d / Markov order (1 through 5) and an x-marked
limit line.]
Figure 1: Results of computing Bayes criterion measures (2.6) and (3.6)
vs. the number of training samples - a) Markov source criterion
(2.6); b) Regular grammar combinatoric criterion (3.6).
framework for inferring regular grammars. This type of Bayesian model is potentially
quite useful for the texture analysis and image segmentation problem where a consistent
framework is desired for considering both structural and statistical features in the
texture/image representation.
Acknowledgements
This research was supported by the NSF via a Presidential Young Investigator Award
ECE-8552518 and by the NIH via a DRR Grant RR-1380.
References
Blahut, R. E. (1987). Principles and Practice of Information Theory, Addison-Wesley
Publishing Co., Reading, MA.
Miller, M. I., Roysam, B., Smith, K. R., and Udding, J. T. (1988). "Mapping Rule-Based
Regular Grammars to Gibbs Distributions", AMS-IMS-SIAM Joint Conference on
Spatial Statistics and Imaging, American Mathematical Society.
Rissanen, J. (1986). "Stochastic Complexity and Modeling", Annals of Statistics, 14,
no. 3, pp. 1080-1100.
Smith, K. R., Miller, M. I. (1990). "A Bayesian Approach Incorporating Rissanen
Complexity for Learning Markov Random Field Texture Models", Proceedings of
Int. Conference on Acoustics, Speech, and Signal Processing, Albuquerque, NM.
Adaptation and Unsupervised Learning
Peter Dayan    Maneesh Sahani    Grégoire Deback
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR.
{dayan, maneesh}@gatsby.ucl.ac.uk, [email protected]
Abstract
Adaptation is a ubiquitous neural and psychological phenomenon, with
a wealth of instantiations and implications. Although a basic form of
plasticity, it has, bar some notable exceptions, attracted computational
theory of only one main variety. In this paper, we study adaptation from
the perspective of factor analysis, a paradigmatic technique of unsupervised learning. We use factor analysis to re-interpret a standard view of
adaptation, and apply our new model to some recent data on adaptation
in the domain of face discrimination.
1 Introduction
Adaptation is one of the first facts with which neophyte neuroscientists and psychologists
are presented. Essentially all sensory and central systems show adaptation at a wide variety
of temporal scales, and to a wide variety of aspects of their informational milieu. Adaptation is a product (or possibly by-product) of many neural mechanisms, from short-term
synaptic facilitation and depression,1 and spike-rate adaptation,28 through synaptic remodeling27 and way beyond. Adaptation has been described as the psychophysicist's electrode,
since it can be used as a sensitive method for revealing underlying processing mechanisms;
thus it is both phenomenon and tool of the utmost importance.
That adaptation is so pervasive makes it most unlikely that a single theoretical framework
will be able to provide a compelling treatment. Nevertheless, adaptation should be just
as much a tool for theorists interested in modeling neural statistical learning as for psychophysicists interested in neural processing. Put abstractly, adaptation involves short or
long term changes to aspects of the statistics of the environment experienced by a system.
Thus, accounts of neural plasticity driven by such statistics, even if originally conceived as
accounts of developmental (or perhaps representational) plasticity, 19 are automatically candidate models for the course and function of adaptation. Conversely, thoughts about adaptation lay at the heart of the earliest suggestions that redundancy reduction and information
maximization should play a central role in models of cortical unsupervised learning.4-6, 8, 23
Redundancy reduction theories of adaptation reached their apogee in the work of Linsker, 26
Atick, Li & colleagues2, 3, 25 and van Hateren.40 Their mathematical framework (see section 2) is that of maximizing information transmission subject to various sources of noise
and limitations on the strength of key signals. Noise plays the critical roles of rendering
some signals essentially undetectable, and providing a confusing background against which
other signals should be amplified. Adaptation, by affecting noise levels and informational
content (notably probabilistic priors), leads to altered stimulus processing. Early work concentrated on the effects of sensory noise on visual receptive fields; more recent studies 41
have used the same framework to study stimulus specific adaptation.
Redundancy reduction is one major conceptual plank in the modern theory of unsupervised
learning. However, there are various other important complementary ideas, notably generative models.19

Figure 1: A) Redundancy reduction model. x is the explicit input, combining signal s and noise n;
y is the explicit output, to be corrupted by noise m to give z. We seek the filter W that minimizes
redundancy subject to a power constraint. B) Factor analysis model. Now y, with a white, Gaussian,
prior, captures latent structure underlying the covariance Σ of x. The empirical mean is x̄; the
uniquenesses Ψ_a capture unmodeled variance and additional noise such as σ²_in. Generative G and
recognition W weights parameterize statistical inverses.

Here, we consider adaptation from the perspective of factor analysis,15
which is one of the most fundamental forms of generative model. After describing the factor analysis model and its relationship with redundancy reduction models of adaptation in
section 3, section 4 studies loci of adaptation in one version of this model. As examples,
we consider adaptation of early visual receptive fields to light levels, 38 orientation detection to a persistent bias (the tilt aftereffect),9, 16 and a recent report of adaptation of face
discrimination to morphed anti-faces.24
2 Information Maximization
Figure 1A shows a linear model of, for concreteness, retinal processing. Here, D-dimensional photoreceptor input x = s + n, which is the sum of a signal s and detector
noise n, is filtered by a retinal matrix W to produce a K-dimensional output y = Wx for
communication down the optic nerve, z = y + m, against a background of additional noise
m. We assume that the signal is Gaussian, with mean 0 and covariance Σ, and the noise
terms are white and Gaussian, with mean 0 and covariances σ²_in I and σ²_out I, respectively;
all are mutually independent. The input may be higher dimensional than the output, ie
D > K, as is true of the retina. Here, the signal is translation invariant, ie Σ is a circulant
matrix11 with Σ_{ab} = c(a-b). This means that the eigenvectors of Σ are (discrete) sines
and cosines, with eigenvalues coming from the Fourier series for c, whose terms we will
write as λ_1 ≥ λ_2 ≥ ... > 0 (they are non-negative since Σ is a covariance matrix; we
assume for simplicity that they are strictly positive).

Given no input noise (σ²_in = 0), the mutual information between x = s and z is

    I[s; z] = H[z] - H[z|s] = (log |WΣWᵀ + σ²_out I| - log |σ²_out I|) / 2        (1)
where H is the entropy function (which, for a Gaussian distribution, is proportional to the
log-determinant of its covariance matrix). We consider maximizing this with respect to W, a
calculation which only makes sense in the face of a constraint, such as on the average power
⟨|y|²⟩ = tr[WΣWᵀ]. It is a conventional result in principal components analysis12, 20 that
the solution to this constrained maximization problem involves whitening, ie making

    W = U D V    with    D = diag(λ_1^{-1/2}, λ_2^{-1/2}, ..., λ_K^{-1/2})        (2)

where U is an arbitrary K-dimensional rotation matrix with UᵀU = I, D is the K×K
diagonal matrix with the given form, and V is a K×D matrix whose rows are the first K
(transposed) eigenvectors of Σ. This choice makes WΣWᵀ = I, and effectively amplifies
weak input channels (ie those with small λ_a) so as fully to utilize all the output channels.
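The whitening property of equation 2 can be checked directly: with D built from the inverse square roots of the eigenvalues of Σ, the output covariance WΣWᵀ becomes the identity. A minimal sketch for an arbitrary 2×2 symmetric Σ (the analytic eigendecomposition is standard; the particular matrix is an arbitrary choice):

```python
import math

# Whitening for a 2x2 symmetric covariance: W = D V, with
# D = diag(1/sqrt(lambda_a)) and rows of V the eigenvectors of Sigma;
# then W Sigma W^T = I.
Sigma = [[3.0, 1.0],
         [1.0, 2.0]]

a, b, c = Sigma[0][0], Sigma[0][1], Sigma[1][1]
# Eigenvalues of [[a, b], [b, c]]
disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
lam1, lam2 = (a + c) / 2.0 + disc, (a + c) / 2.0 - disc

def unit(v):
    n = math.hypot(v[0], v[1])
    return [v[0] / n, v[1] / n]

# Eigenvectors (rows of V): (Sigma - lam I) v = 0 gives v = [b, lam - a]
v1 = unit([b, lam1 - a])
v2 = unit([b, lam2 - a])

W = [[v1[0] / math.sqrt(lam1), v1[1] / math.sqrt(lam1)],
     [v2[0] / math.sqrt(lam2), v2[1] / math.sqrt(lam2)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Wt = [[W[j][i] for j in range(2)] for i in range(2)]
WSWt = mul(mul(W, Sigma), Wt)  # should equal the 2x2 identity
```
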
[Figure 2 appears here: four panels. A) RR and B) FA plot filter power against frequency
on log-log axes; C) RR and D) FA plot the tilt aftereffect against test angle (60 to 120
degrees), with data shown as crosses.]
Figure 2: Simple adaptation. A;B) Filter power as a function of spatial frequency for the redundancy
reduction (A: RR) and factor analysis (B: FA) solutions for the case of translation invariance, for low
(solid) and high (dashed) input noise σ²_in. Even though the optimal
FA solution does not have exactly identical uniquenesses, the difference is too small to figure. In (B),
factors were found for the inputs. C) Data9 (crosses) and RR solution41 (solid) for the tilt aftereffect.
D) Data (crosses) and linear approximate FA solution (solid). For FA, angle estimation is based on
the linear output of the single factor; linearity breaks down for test angles far from the adapted value.
Adaptation was based on reducing the uniquenesses Ψ_a for units activated by the adapting stimulus
(fitting the width and strength of this adaptation to the data).
In the face of input noise, whitening is dangerous for those channels for which σ²_in ≳ λ_a,
since noise rather than signal would be amplified by the 1/√λ_a. One heuristic is to prefilter x using a D-dimensional matrix F such that Fx is the prediction of s that minimizes
the average error ⟨|Fx - s|²⟩, and then apply the W of equation 2.14 Another conventional
result12 is that F has a similar form to W, except that the rotation U is replaced by Vᵀ,
and the diagonal entries of the equivalent of D are λ_a/(λ_a + σ²_in). This makes the full
(approximate) filter

    W̃ = U D̃ V    with    D̃ = diag(λ_1^{1/2}/(λ_1 + σ²_in), ..., λ_K^{1/2}/(λ_K + σ²_in))        (3)

Figure 2A shows the most interesting aspect of this filter in the case that λ_a = 1/a², inspired by the statistics of natural scenes,36 for which a might be either a temporal or spatial
frequency. The solid curve shows the diagonal components of D̃ for small input noise.
This filter is a band-pass filter. Intermediate frequencies with input power well above the
noise level σ²_in are comparatively amplified against the output noise m. On the other hand,
the dashed line shows the same components for high input noise. This filter is a low-pass
filter, as only those few components with sufficient input power are significantly transmitted. The filter in equation 3 is based on a heuristic argument. An exact argument2, 3 leads to
a slightly more complicated form for the optimal filter, in which, depending on the power
constraint and the exact value of σ²_in, there is a sharp cut-off in which some frequencies are
not transmitted at all. However, the main pattern of dependence on σ²_in is the same as in
figure 2A; the differences lie well outside the realm of experimental test.
Figure 2A shows a powerful form of adaptation.3 High relative input noise arises in cases
of low illumination; low noise in cases of high illumination. The whole filtering characteristics of the retina should change, from low-pass (smoothing in time or space) to band-pass
(differentiation in space or time) filtering. There is evidence that this indeed happens, with
dendritic remodeling happening over times of the order of minutes. 42
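The change of filter shape in figure 2A can be reproduced numerically. For λ_a = 1/a², the diagonal gain of equation 3 is √λ_a/(λ_a + σ²_in) = a/(1 + σ²_in a²): band-pass (peaked at the intermediate frequency a = 1/σ_in) at low input noise, low-pass (falling from the lowest frequency onward) at high input noise. A small sketch, with arbitrary noise values chosen for illustration:

```python
# Diagonal filter gains of equation 3 for a 1/a^2 signal spectrum.
def gains(sigma2_in, n_freq=500):
    out = []
    for a in range(1, n_freq + 1):
        lam = 1.0 / a ** 2
        out.append(lam ** 0.5 / (lam + sigma2_in))  # = a / (1 + sigma2_in * a^2)
    return out

low_noise = gains(sigma2_in=1e-4)   # band-pass: peak at a = 1/sigma_in = 100
high_noise = gains(sigma2_in=4.0)   # low-pass: gain falls from a = 1 onward

peak_low = max(range(500), key=lambda i: low_noise[i])   # index 99, i.e. a = 100
peak_high = max(range(500), key=lambda i: high_noise[i]) # index 0, i.e. a = 1
```

The adaptation in the text corresponds to recomputing these gains as σ²_in changes with illumination.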
Wainwright41 (see also10) suggested an account along exactly these lines for more stimulus-specific forms of adaptation such as the tilt aftereffect shown in figure 2C. Here (conceptually), subjects are presented with a vertical grating (θ = 90°) for an adapting period of a
few seconds, and then are asked, by one of a number of means, to assess the orientation
of test gratings. The crosses in figure 2C show the error in their estimates; the adapting
orientation appears to repel nearby angles, so that true values of θ near 90° are reported
as being further away. Wainwright modeled this in the light of a neural population code
for representing orientation and a filter related to that of equation 3. He suggested that
during adaptation, the signal associated with θ = 90° is temporarily increased. Thus, as in
the solid line of figure 2A, the transmission through the adapted filter of this signal should
be temporarily reduced. If the recipient structures that use the equivalent of y to calculate
the orientation of a test grating are unaware of this adaptation, then, as in the solid line of
figure 2C, an estimation error like that shown by the subjects will result.
3 Factor Analysis and Adaptation
We sought to understand the adaptation of equation 3 and figure 2A in a factor analysis
model. Factor analysis15 is one of the simplest probabilistic generative schemes used to
model the unsupervised learning of cortical representations, and underlies many more sophisticated approaches. The case of uniform input noise σ²_in is particularly interesting,
because it is central to the relationship between factor analysis and principal components
analysis.20, 34, 39
Figure 1B shows the elements of a factor analysis model (see Dayan & Abbott12 for a
relevant tutorial introduction). The (so-called) visible variable x is generated from the
latent variable y according to the two-step

    P[y] = N[0, I]    P[x|y] = N[Gy + x̄, Ψ]    with Ψ = diag(Ψ_1, ..., Ψ_D)        (4)

where N[μ, Σ] is a multi-variate Gaussian distribution with mean μ and covariance matrix
Σ, G is a set of top-down generative weights, x̄ is the mean of x, and Ψ a diagonal matrix
of uniquenesses, which are the variances of the residuals of x that are not represented in
the covariances associated with y. Marginalizing out y, equation 4 specifies a Gaussian
distribution for x, P[x] = N[x̄, GGᵀ + Ψ], and, indeed, the maximum likelihood values for
the parameters given some input data x are to set x̄ to the empirical mean of the x that are
presented, and to set G and Ψ by maximizing the likelihood of the empirical covariance
matrix of the x under a Wishart distribution with mean GGᵀ + Ψ. Note that G is only
determined up to a K×K rotation matrix U, since (GU)(GU)ᵀ = GGᵀ.

The generative or synthetic model of equation 4 shows how y determines x. In most
instances of unsupervised learning, the focus is on the recognition or analysis model,30
which maps a presented input x into the values of the latent variable y which might have
generated it, and thereby form its possible internal representations. The recognition model
is the statistical inverse of the generative model and specifies the Gaussian distribution:

    P[y|x] = N[W(x - x̄), Σ_R]    with    Σ_R = (I + GᵀΨ⁻¹G)⁻¹,    W = Σ_R GᵀΨ⁻¹        (5)

The mean value of y can be derived from the differential equation31, 32

    ẏ = -y + GᵀΨ⁻¹(x - x̄ - Gy)        (6)

in which x - x̄ - Gy, which is the prediction error for x based on the current value of
y, is downweighted according to the inverse uniquenesses Ψ⁻¹, mapped through the bottom-up
weights, and left to compete against the contribution of the prior for y (which is
responsible for the -y term in equation 6). For this scheme to give the right answer, the
bottom-up weights should be the transpose of the top-down weights G. However,
we later consider forms of adaptation that weaken this dependency.
In general, factor analysis and principal components analysis lead to different results. Indeed, although the latter is performed by an eigendecomposition of the covariance matrix
of the inputs, the former requires execution of one of a variety of iterative procedures on
the same covariance matrix.21, 22, 35 However, if the uniquenesses are forced to be equal,
ie Ψ_a = ψ for all a, then these procedures are almost the same.34, 39 In this case, assuming
that x̄ = 0,

    G = Vᵀ D_G U    with    D_G = diag(√(ℓ_1 - ψ), √(ℓ_2 - ψ), ..., √(ℓ_K - ψ))        (7)

    ψ = (1/(D - K)) Σ_{a=K+1}^{D} ℓ_a        (8)

with the same conventions as in equation 2, except that ℓ_a are the (ordered) eigenvalues of
the covariance matrix of the visible variables x rather than explicitly of the signal. Here ψ
has the natural interpretation of being the average power of the unexplained components.
Applying this in equation 5:

    W = U D V    with    D = diag(√(ℓ_1 - ψ)/ℓ_1, √(ℓ_2 - ψ)/ℓ_2, ..., √(ℓ_K - ψ)/ℓ_K)        (9)

If x really comes from a signal and noise model as in figure 1, then ℓ_a = λ_a + σ²_in, and
ψ = ψ_0 + σ²_in, where ψ_0 is the residual uniqueness of equation 8 in the case that σ²_in = 0.
This makes the recognition weights of equation 9

    W = U D V    with    D = diag(√(λ_1 - ψ_0)/(λ_1 + σ²_in), ..., √(λ_K - ψ_0)/(λ_K + σ²_in))        (10)
The similarity between this and the approximate redundancy reduction expression of equation 3 is evident. Just like that filter, adaptation to high and low light levels (high and low
signal/noise ratios) leads to a transition from band-pass to low-pass filtering in W. The filter
of equation 3 was heuristic; this is exact. Also, there is no power constraint imposed; rather,
something similar derives from the generative model's prior over the latent variables y.
This analysis is particularly well suited to the standard treatment of the redundancy reduction
case of figure 2A, since adding independent noise of the same strength σ²_in to each of the
input variables can automatically be captured by adding σ²_in to the common uniqueness ψ.
However, even though the signal s is translation invariant in this case, it need not be that
the maximum likelihood factor analysis solution has the property that Ψ is proportional
to I. However, it is to a close approximation, and figure 2B shows that the strength of
the principal components of Σ in the maximum likelihood W (evaluated as in the figure
caption) shows the same structure of adaptation as in the probabilistic principal components
solution, as a function of σ²_in.
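Equations 8 through 10 give the probabilistic-PCA solution in closed form. The sketch below takes a hypothetical spectrum of signal eigenvalues λ_a (invented for illustration), forms the data eigenvalues ℓ_a = λ_a + σ²_in, sets ψ to the mean of the unexplained eigenvalues (equation 8), and checks that the recognition gains √(ℓ_a − ψ)/ℓ_a of equation 9 agree with the √(λ_a − ψ_0)/(λ_a + σ²_in) form of equation 10:

```python
# Closed-form probabilistic PCA with equal uniquenesses (equations 7-10).
sigma2_in = 0.5
signal = [8.0, 4.0, 2.0, 0.2, 0.2, 0.2]      # lambda_a (hypothetical spectrum)
ell = [lam + sigma2_in for lam in signal]    # data eigenvalues ell_a
K = 3                                        # number of factors

psi = sum(ell[K:]) / (len(ell) - K)          # equation 8: mean unexplained power
psi0 = psi - sigma2_in                       # residual uniqueness at zero input noise

# Equation 9: recognition gains sqrt(ell_a - psi) / ell_a
gains9 = [(ell[a] - psi) ** 0.5 / ell[a] for a in range(K)]
# Equation 10: the same gains written in signal/noise terms
gains10 = [(signal[a] - psi0) ** 0.5 / (signal[a] + sigma2_in) for a in range(K)]
```

Raising sigma2_in while holding the signal fixed shifts the gains from band-pass toward low-pass, exactly as in the redundancy reduction filter of equation 3.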
Figure 2D shows a version of the tilt illusion coming from a factor analysis model given
population coded input (with Gaussian orientation tuning curves) and a single factor. It is
impossible to perform the full non-linear computation of extracting an angle from the
population activity x in a single linear operation W(x - x̄). However, in a regime in which
a linear approximation holds, the one factor can represent the systematic covariation in the
activity of the population coming from the single dimension of angular variation in the
input. For instance, around θ = 90°, this regime comprises roughly the range of angles
plotted in figure 2D. A close match in this model to Wainwright's41 suggestion is
that the uniquenesses Ψ_a for the input units (around θ = 90°) that are reliably activated by
an adapting stimulus should be decreased, as if the single factor would predict a greater
proportion of the variability in the activation of those units. This makes W of equation 5
more sensitive to small variations in x away from θ = 90°, and so leads to a tilt aftereffect
as an estimation bias. Figure 2D shows the magnitude of this effect in the linear regime.
This is a rough match for the data in figure 2C. Our model also shows the same effect as
Wainwright's41 in orientation discrimination, boosting sensitivity near the adapted angle
and reducing it around half a tuning width away.33
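The mechanics of this adaptation can be seen in a one-factor sketch. For a single factor, the recognition weights of equation 5 reduce to W = (1 + gᵀΨ⁻¹g)⁻¹ gᵀΨ⁻¹, so reducing the uniquenesses Ψ_a of units driven by the adapting stimulus raises the corresponding entries of W. The tuning curve and the adaptation profile below are arbitrary illustrations, not fits:

```python
import math

# One-factor recognition weights W = (1 + g^T Psi^-1 g)^-1 g^T Psi^-1 (equation 5)
units = list(range(-10, 11))                       # preferred-angle grid (arbitrary)
g = [math.exp(-u * u / 18.0) for u in units]       # Gaussian generative weights

def recognition_weights(psi):
    s = sum(gi * gi / pi for gi, pi in zip(g, psi))  # g^T Psi^-1 g
    return [gi / pi / (1.0 + s) for gi, pi in zip(g, psi)]

baseline = recognition_weights([1.0] * len(units))

# Adaptation: reduce uniquenesses for units activated by the adapting
# stimulus (around u = 0), as described in the text.
adapted_psi = [1.0 - 0.6 * math.exp(-u * u / 8.0) for u in units]
adapted = recognition_weights(adapted_psi)
# Weights for adapted units grow; weights for distant units shrink.
```
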
4 Adaptation for Faces
Another, and even simpler, route to adaptation is changing x̄ towards the mean of the
recently presented (ie the adapting) stimuli. We use this to model a recently reported effect
of adaptation on face discrimination.24
Note that changing the mean x̄ according to the input has no effect on the factor.
[Figure 3 appears here: four panels. A) Data and B) FA plot Adam identification against
Adam strength (-0.2 to 0.4); C) Data and D) FA plot Adam responses against Henry
strength (-0.2 to 0.4).]
Figure 3: Face discrimination. Here, Adam and Henry are used for concreteness; all results are
averages over all faces and, for FA, random draws. A) Experimental24 mean propensity to
report Adam as a function of the strength of Adam in the input for no adaptation ('o'); adaptation
to anti-Adam ('x'); and adaptation to anti-Henry (squares). The curves are cumulative normal fits. B)
Mean propensity in the factor analysis model for the same outcomes. The model, like some subjects,
is more extreme than the mean of the subjects, particularly for test anti-faces. C;D) Experimental
and model proportion of reports of Adam when adaptation was to anti-Adam, but various strengths
of Henry are presented. The model captures the decrease in Adam given presentation of anti-Henry
through a normalization pool (solid), although it does not decrease to quite the same extent as the
data. Just reporting the face with the largest y_i (dashed) shows no decrease in reporting Adam given
presentation of anti-Henry. The model parameters are fixed throughout (except for the dashed line
in D, for which the discrimination parameter was chosen to match the peak of the solid curve).
Leopold and his colleagues24 studied adaptation in the complex stimulus domain of faces.
Their experiment involved four target faces (associated with names 'Adam', 'Henry', 'Jim',
'John') which were previously unfamiliar to subjects, together with morphed versions of
these faces lying on 'lines' going through the target faces and the average of all four faces.
These interpolations were made visually sensible using a dense correspondence map between the faces. The task for the subjects was always to identify which of the four faces was
presented; this is obviously impossible at the average face, but becomes progressively easier as the average face is morphed progressively further (by an amount called its strength)
towards one of the target faces. The circles in figure 3A show the mean performance of the
subjects in choosing the correct face as a function of its strength; performance is essentially
perfect well before full strength is reached.
A negative strength version of one of the target faces (eg anti-Adam) was then shown to
the subjects for a few seconds before one of the positive strength faces was shown as a test.
The other two lines in figure 3A show that the effect of adaptation is to boost the effective
strength of the given face (Adam), since (crosses) the subjects were much readier to report
Adam, even for the average face (which contains no identity information), and much less
ready to report the other faces even if they were actually the test stimulus (shown by the
squares). As for the tilt aftereffect, discrimination is biased away from the adapted stimulus.
Figure 3C shows that adapting to anti-Adam offers the greatest boost to the event that Adam
is reported to a test face (say Henry) that is not Adam, at the average face. Reporting Adam
falls off if either increasing strengths of Henry or anti-Henry are presented. That presenting
Henry should decrease the reporting of Adam is obvious, and is commented on in the paper.
However, that presenting anti-Henry should decrease the reporting of Adam is less obvious,
since, by removing Henry as a competitor, one might have expected Adam to have received
an additional boost.
Figure 3B;D shows our factor analysis model of these results. Here, we consider a case with
a set of visible units and four factors, one for each face, with generative weights g^Adam, g^Henry,
g^Jim, g^John governing the input activity associated with full strength versions of each face, generated
from independent Gaussian distributions. In this representation, morphing is easy, consisting
of presenting x = s g^Adam + n, where s is the strength and n is noise (variance sigma^2).
The outputs y depend on s, the angle between the weight vectors, and the noise. Next, we
need to specify how discrimination is based on the information provided by y. For reasons discussed below, we considered a normalization pool [17, 37] for the outputs, treating
the normalized output for face a as the probability that face a would be reported, the normalization
involving a discrimination parameter.
Adaptation to anti-Adam was represented by setting
x = -s' g^Adam, where s' is the strength of the adapting stimulus.
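One concrete way such a normalization pool could be realized (a hypothetical softmax sketch; the paper's exact discrimination function and parameter values are not reproduced here) is:

```python
import math

def report_probabilities(outputs, beta=2.0):
    """Turn the face outputs into report probabilities via a
    softmax-style normalization pool. `beta` plays the role of a
    discrimination parameter (sharpness of the choice); its value
    here is illustrative only."""
    exps = {face: math.exp(beta * y) for face, y in outputs.items()}
    total = sum(exps.values())
    return {face: e / total for face, e in exps.items()}

# Boosting Adam's output raises its report probability and, through the
# shared pool, suppresses every competitor.
p = report_probabilities({"Adam": 1.5, "Henry": 0.5, "Jim": 0.5, "John": 0.5})
```

Because all faces share one normalizing denominator, lowering one competitor's output redistributes probability among the remaining faces, which is the qualitative effect the pool is meant to capture.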
Figure 3B shows the model of the basic adaptation effect seen in figure 3A. Adapting
to anti-Adam clearly boosts the willingness of the model to report Adam, much as for the
subjects. The model is a little more extreme than the average over the subjects. The results
for two individual subjects presented in the paper [24] are just as extreme; other subjects may
have had softer decision biases. Figure 3D shows the model of figure 3C. The dashed
line shows that without the normalization pool, presenting anti-Henry does indeed boost
reporting of Adam, when anti-Adam was the adapting stimulus. However, under the above
normalization, decreasing the Henry output boosts the relative strengths of Jim and John (through the
minimization in the normalization pool), allowing them to compete, and so reduces the
propensity to report Adam (solid line).
5 Discussion
We have studied how plasticity associated with adaptation fits with regular unsupervised
learning models, in particular factor analysis. It was obvious that there should be a close
relationship; this was, however, obscured by aspects of the redundancy reduction models
such as the existence of multiple sources of added noise and non-informational constraints.
Uniquenesses in factor analysis are exactly the correct noise model for the simple information maximization scheme. We illustrated the model for the case of a simple, linear, model
of the tilt aftereffect, and of adaptation in face discrimination. The latter had the interesting
wrinkle that the experimental data support something like a normalization pool [17, 37].
Under this current conceptual scheme for adaptation, assumed changes in the input statistics are fully compensated for by the factor analysis model (and the linear and Gaussian
nature of the model implies that the mean input can be changed without any consequence for the generative or recognition models). The dynamical form of the factor analysis model in equation 6
suggests other possible targets for adaptation. Of particular interest is the possibility that the
top-down weights and/or the uniquenesses might change whilst bottom-up weights
remain constant. The rationale for this comes from suggestive neurophysiological evidence
that bottom-up pathways show delayed plasticity in certain circumstances [13]; and indeed it
is exactly what happens in unsupervised learning techniques such as the wake-sleep algorithm [18, 29]. Given satisfaction of an eigenvalue condition that the differential equation 6 be
stable, it will be interesting to explore the consequences of such changes.
Of course, factor analysis is insufficiently powerful to be an adequate model for cortical
unsupervised learning or indeed all aspects of adaptation (as already evident in the limited
range of applicability of the model of the tilt aftereffect). However, the ideas about the
extraction of higher order statistical structure in the inputs into latent variables, the roles of
noise, and the way in equation 6 that predictive coding or explaining away controls cortical
representations [32], survive into sophisticated complex unsupervised learning models [19], and
offer routes for extending the present results.
A paradoxical aspect of adaptation, which neither we nor others have addressed, is the way
that the systems that are adapting interact with those to which they send their output. For
instance, it would seem unfortunate if all cells in primary visual cortex have to know the
light level governing adaptation in order to be able correctly to interpret the information
coming bottom-up from the thalamus. In some cases, such as the approximate noise filter, there are alternative semantics for the adapted neural activity under which this is
unnecessary; understanding how this generalizes is a major task for future work.
Acknowledgements
Funding was from the Gatsby Charitable Foundation. We are most grateful to Odelia
Schwartz for discussion and comments.
References
[1] Abbott, LF, Varela, JA, Sen, K, & Nelson, SB (1997) Synaptic depression and cortical gain control. Science 275, 220-224.
[2] Atick, JJ (1992) Could information theory provide an ecological theory of sensory processing? Network: Computation in
Neural Systems 3, 213-251.
[3] Atick, JJ, & Redlich, AN (1990) Towards a theory of early visual processing. Neural Computation 2, 308-320.
[4] Attneave, F (1954) Some informational aspects of visual perception. Psychological Review 61, 183-193.
[5] Barlow, HB (1961) Possible principles underlying the transformation of sensory messages. In WA Rosenblith, ed., Sensory
Communication. Cambridge, MA: MIT Press.
[6] Barlow, HB (1969) Pattern recognition and the responses of sensory neurones. Annals of the New York Academy of
Sciences 156, 872-881.
[7] Barlow, HB (1989) Unsupervised learning, Neural Computation, 1, 295-311.
[8] Barlow, H (2001) Redundancy reduction revisited. Network 12, 241-253.
[9] Campbell, FW & Maffei, L (1971) The tilt after-effect: a fresh look. Vision Research 11, 833-40.
[10] Clifford, CWG, Wenderoth, P & Spehar, B. (2000) A functional angle on some after-effects in cortical vision, Proceedings
of the Royal Society of London, Series B 267, 1705-1710.
[11] Davis, PJ (1979) Circulant Matrices. New York, NY: Wiley.
[12] Dayan, P & Abbott, LF (2001). Theoretical Neuroscience. Cambridge, MA: MIT Press.
[13] Diamond, ME, Huang, W & Ebner, FF (1994) Laminar comparison of somatosensory cortical plasticity. Science 265,
1885-1888.
[14] Dong, DW, & Atick, JJ (1995) Temporal decorrelation: A theory of lagged and nonlagged responses in the lateral
geniculate nucleus. Network: Computation in Neural Systems 6, 159-178.
[15] Everitt, BS (1984) An Introduction to Latent Variable Models, London: Chapman and Hall.
[16] Gibson, JJ & Radner, M (1937) Adaptation, after-effect and contrast in the perception of tilted lines. Journal of
Experimental Psychology 20, 453-467.
[17] Heeger, DJ (1992) Normalization of responses in cat striate cortex. Visual Neuroscience 9, 181-198.
[18] Hinton, GE, Dayan, P, Frey, BJ, & Neal, RM (1995) The wake-sleep algorithm for unsupervised neural networks. Science
268, 1158-1160.
[19] Hinton, GE & Sejnowski, TJ (1999) Unsupervised Learning. Cambridge, MA: MIT Press.
[20] Jolliffe, IT (1986) Principal Component Analysis, New York: Springer.
[21] Jöreskog, KG (1967) Some contributions to maximum likelihood factor analysis. Psychometrika 32, 443-482.
[22] Jöreskog, KG (1969) A general approach to confirmatory maximum likelihood factor analysis. Psychometrika 34,
183-202.
[23] Kohonen, T & Oja, E (1976) Fast adaptive formation of orthogonalizing filters and associative memory in recurrent
networks of neuron-like elements. Biological Cybernetics 21, 85-95.
[24] Leopold, DA, O'Toole, AJ, Vetter, T & Blanz, V (2001). Prototype-referenced shape encoding revealed by high-level
aftereffects. Nature Neuroscience 4, 89-94.
[25] Li, Z & Atick, JJ (1994a) Efficient stereo coding in the multiscale representation. Network: Computation in Neural Systems
5, 157-174.
[26] Linsker, R (1988) Self-organization in a perceptual network, Computer, 21, 105-128.
[27] Maguire G, Hamasaki DI (1994) The retinal dopamine network alters the adaptational properties of retinal ganglion cells
in the cat. Journal of Neurophysiology 72, 730-741.
[28] McCormick, DA (1990) Membrane properties and neurotransmitter actions. In GM Shepherd, ed., The Synaptic
Organization of the Brain. New York: Oxford University Press.
[29] Neal, RM & Dayan, P (1997). Factor Analysis using delta-rule wake-sleep learning. Neural Computation, 9, 1781-1803.
[30] Neisser, U (1967) Cognitive Psychology. New York: Appleton-Century-Crofts.
[31] Olshausen, BA, & Field, DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for
natural images. Nature 381, 607-609.
[32] Rao, RPN, & Ballard, DH (1997) Dynamic model of visual recognition predicts neural response properties in the visual
cortex. Neural Computation 9, 721-763.
[33] Regan, D & Beverley, KI (1985) Postadaptation orientation discrimination. JOSA A, 2, 147-155.
[34] Roweis, S & Ghahramani, Z (1999) A unifying review of linear gaussian models. Neural Computation 11, 305-345.
[35] Rubin, DB & Thayer, DT (1982) EM algorithms for ML factor analysis, Psychometrika, 47, 69-76.
[36] Ruderman DL & Bialek W (1994) Statistics of natural images: Scaling in the woods. Physical Review Letters 73, 814-817.
[37] Schwartz, O & Simoncelli, EP (2001) Natural signal statistics and sensory gain control. Nature Neuroscience 4, 819-825.
[38] Shapley, R & Enroth-Cugell, C (1984) Visual adaptation and retinal gain control. Progress in Retinal Research 3, 263-346.
[39] Tipping, ME & Bishop, CM (1999) Mixtures of probabilistic principal component analyzers. Neural Computation 11,
443-482.
[40] van Hateren, JH (1992) A theory of maximizing sensory information. Biological Cybernetics 68, 23-29.
[41] Wainwright, MJ (1999) Visual adaptation as optimal information transmission. Vision Research 39, 3960-3974.
[42] Weiler R & Wagner HJ (1984) Light-dependent change of cone-horizontal cell interactions in carp retina. Brain Resesarch
298, 1-9.
Classifying Patterns of Visual Motion - a Neuromorphic Approach
Jakob Heinzle and Alan Stocker
Institute of Neuroinformatics
University and ETH Zürich
Winterthurerstr. 190, 8057 Zürich, Switzerland
{jakob,alan}@ini.phys.ethz.ch
Abstract
We report a system that classifies and can learn to classify patterns of
visual motion on-line. The complete system is described by the dynamics of its physical network architectures. The combination of the following properties makes the system novel: Firstly, the front-end of the
system consists of an aVLSI optical flow chip that collectively computes
2-D global visual motion in real-time [1]. Secondly, the complexity of
the classification task is significantly reduced by mapping the continuous motion trajectories to sequences of ?motion events?. And thirdly, all
the network structures are simple and with the exception of the optical
flow chip based on a Winner-Take-All (WTA) architecture. We demonstrate the application of the proposed generic system for a contactless
man-machine interface that allows to write letters by visual motion. Regarding the low complexity of the system, its robustness and the already
existing front-end, a complete aVLSI system-on-chip implementation is
realistic, allowing various applications in mobile electronic devices.
1 Introduction
The classification of continuous temporal patterns is possible using Hopfield networks with
asymmetric weights [2], but classification is restricted to periodic trajectories with a wellknown start and end point. Also purely feed-forward network architectures were proposed
[3]. However, such networks become unfeasibly large for practical applications.
We simplify the task by first mapping the continuous visual motion patterns to sequences of
motion events. A motion event is characterized by the occurrence of visual motion in one
out of a pre-defined set of directions. Known approaches for sequence classification can
be divided into two major categories: The first group typically applies standard Hopfield
networks with time-dependent weight matrices [4, 5]. These networks are relatively inefficient in storage capacity, using many units per stored pattern. The second approach relies
on time-delay elements and some form of coincidence detectors that respond dominantly
to the correctly time-shifted events of a known sequence [6, 7]. These approaches allow a
compact network architecture. Furthermore, they require neither the knowledge of the start
corresponding author; www.ini.unizh.ch/~alan
and end point of a sequence nor a reset of internal states. The sequence classification network of our proposed system is based on the work of Tank and Hopfield [6], but extended
to be time-continuous and to show increased robustness. Finally, we modify the network
architecture to allow the system to learn arbitrary sequences of a particular length.
2 System architecture
Figure 1: The complete classification system. The input to the system is a real-world moving visual stimulus and the output is the activity of units representing particular trajectory
classes.
The system contains three major stages of processing as shown in Figure 1: the optical
flow chip estimates global visual motion, the direction selective network (DSN) maps the
estimate to motion events and the sequence classification network (SCN) finally classifies
the sequences of these events. The architecture reflects the separation of the task into the
classification in motion space (DSN) and, consecutively, the classification in time (SCN).
Classification in both cases relies on identical WTA networks differing in their inputs only.
The outputs of the DSN and the SCN are "quasi-discrete" - both signals are continuous-time
but due to the non-linear amplification of the WTA represent discrete information.
2.1 The optical flow chip
The front-end of the classification system consists of the optical flow chip [1, 8], that estimates 2D visual motion. Due to adaptive circuitry, the estimate of visual motion is fairly
independent of illumination conditions. The estimation of visual motion requires the integration of visual information within the image space in order to solve for inherent visual
ambiguities. For the purpose of the here presented classification system, the integration of
visual information is set to take place over the complete image space. Thus, the resulting
estimate represents the global visual motion perceived. The output signals of the chip are
two analog voltages m_x and m_y that represent at any instant the two components of the
actual global motion vector. The output signals are linear to the perceived motion within
a limited voltage range. The resolvable speed range is 1-3500 pix/sec, and thus spans more than
three orders of magnitude. The continuous-time voltage trajectory m(t) = (m_x(t), m_y(t))
is the input to the direction selective network.
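Downstream stages work with the polar form of this 2-D output vector. A trivial conversion sketch (the voltage scaling is hypothetical; only the geometry matters here):

```python
import math

def to_polar(mx, my):
    """Convert the chip's (m_x, m_y) voltage pair into a
    (speed, direction) pair for the direction selective network."""
    return math.hypot(mx, my), math.atan2(my, mx)

speed, theta = to_polar(0.3, 0.0)  # pure motion along the m_x axis
```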
2.2 The direction selective network (DSN)
The second stage transforms the trajectory m(t) into a sequence of motion events, where
an event means that the motion vector points into a particular region of motion space.
Motion space is divided into a set of regions, each represented by a unit of the DSN (see
Figure 2a). Each direction selective unit (DSU) receives highest input when m is within
the corresponding region. In the following we choose four motion directions, referred to as
north (N), east (E), south (S) and west (W), and a central region for zero motion.
The WTA behavior of the DSN can be described by minimizing the cost function [9]

E = -(α/2) Σ_i g(u_i)² + (β/2) (Σ_i g(u_i))² + (1/R) Σ_i ∫₀^{u_i} v g′(v) dv - Σ_i I_i g(u_i),   (1)

where α and β are the excitatory and inhibitory weights between the DSU [8]. The
units have a sigmoidal activation function g(u). Following gradient descent, the
dynamics of the units are described by

C du_i/dt = -u_i/R + α g(u_i) - β Σ_j g(u_j) + I_i,   (2)

where C and R are the capacitance and resistance of the units. The preferred direction of
the i-th DSU is given by the angle φ_i = (i-1)π/2. The input to the i-th DSU is

I_i = ‖m‖ cos(θ - φ_i) if |θ - φ_i| ≤ π/2, and I_i = 0 otherwise,   (3)

where (‖m‖, θ) is the motion estimate in polar coordinates. The input to the zero
motion unit is I_0 = I_thresh - ‖m‖. In Figure 2b we compare the outputs of a DSU to
constant and varying input m(t). The dynamic response is close to the steady state as long as
the time-constant of the DSN is smaller than the typical time-scale of m(t).

Figure 2: The direction selective network. a) The WTA architecture of the DSN. Filled
connections are excitatory, empty ones are inhibitory. Dotted lines show the regions in
motion space where the different units win. b) The response of the N-DSU to constant
input is shown as a surface plot, while the responses of the same unit to dynamic motion
trajectories (circles and straight lines) are plotted as lines. Differences between constant
and dynamic inputs are marginal. c) The output of the zero motion unit to constant input.
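As an illustration of how equations (2) and (3) fit together, the following minimal simulation computes the cosine-tuned DSU inputs and Euler-integrates the WTA dynamics. All parameter values (α, β, C, R, step size, sigmoid gain) are assumed for illustration and are not taken from the paper; the only property checked is that the unit whose preferred direction is closest to the stimulus wins.

```python
import math

def g(u):
    """Sigmoidal activation function of the units."""
    return 1.0 / (1.0 + math.exp(-u))

def dsu_inputs(speed, theta, phis):
    """Equation (3): cosine tuning, zero outside +/- 90 degrees of the
    preferred direction phi (angle differences wrapped to (-pi, pi])."""
    inputs = []
    for phi in phis:
        d = math.atan2(math.sin(theta - phi), math.cos(theta - phi))
        inputs.append(speed * math.cos(d) if abs(d) <= math.pi / 2 else 0.0)
    return inputs

def run_dsn(I, steps=3000, dt=0.01, alpha=2.0, beta=1.5, C=1.0, R=1.0):
    """Euler integration of the WTA dynamics of equation (2)."""
    u = [0.0] * len(I)
    for _ in range(steps):
        total = sum(g(x) for x in u)  # shared inhibitory pool
        u = [x + dt / C * (-x / R + alpha * g(x) - beta * total + Ii)
             for x, Ii in zip(u, I)]
    return [g(x) for x in u]

# Motion slightly counterclockwise of the first preferred direction:
# that unit receives the largest input and wins the competition.
PHIS = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
out = run_dsn(dsu_inputs(1.0, 0.3, PHIS))
```

Because the coupling is identical for all units, the ordering of the outputs follows the ordering of the inputs, which is what makes the winner robust to the exact parameter choice.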
2.3 The sequence classification network (SCN)
The classification of the temporal structure of the DSN output is the task of the SCN.
The network uses time-delays to "concentrate information in time" [6] (see Figure 3b). In
equivalence with the regions in motion space, these time-delays form "regions" in time.
The number of units (SCU) of the SCN is equal to the number of trajectory classes the
system is able to classify. We use k time-delays, where k is the number of events
of the longest sequence to be classified. The time interval T_delay between two maxima
of the time-delay functions is the characteristic time-scale of the sequence classification.
Again, the SCN is a WTA network with a cost function equivalent to (1), except
that an additional term T is introduced to provide constant input. The SCU have an
activation function g(v) and follow the dynamics

C dv_i/dt = -v_i/R + α g(v_i) - β Σ_j g(v_j) - T + Σ_{j,l} w_{ijl} (D_l ∗ g(u_j))(t).   (4)

The last term is equivalent to the input term I_i in (2). The w_{ijl} are the weights of the
connections between the DSN and the SCN, and (D_l ∗ g(u_j))(t) is the delayed
output of the j-th DSU, with D_l a delay function peaking at l T_delay. The time-delay functions are the same as in [6] (see footnote 1). Note that T is the only
additional term compared to the dynamics in (2). It allows one to set a detection threshold for the sequence classification.
Figure 3a shows an outline of the SCN and its connectivity. For example, if the sequence N-W-E
has to be classified, the inputs from the E-DSU delayed by T_delay, from the W-DSU delayed by
2 T_delay, and from the N-DSU delayed by 3 T_delay are excitatory, while all the others are inhibitory. All
excitatory as well as all inhibitory weights are equal, with excitation being twice as strong
as inhibition. The additional time-delay is always inhibitory. It prevents the first motion
event from overruling the rest of the sequence and is crucial for the exact classification of
short sequences.
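The "concentrate information in time" mechanism can be sketched numerically: each event is passed through a kernel whose peak is delayed by a multiple of T_delay, so that for the matching sequence all delayed copies arrive simultaneously. The Gaussian kernel below is a hypothetical stand-in for the Tank and Hopfield delay functions of [6].

```python
import math

T_DELAY = 1.0

def delay_kernel(t, l, width=0.3):
    """Hypothetical delay function peaking at t = l * T_DELAY
    (a Gaussian stands in for the kernels of [6])."""
    return math.exp(-((t - l * T_DELAY) ** 2) / (2 * width ** 2))

def scu_drive(event_times, order, t):
    """Summed delayed excitation for one SCU at time t.

    event_times: dict mapping direction -> time of its motion event
    order: sequence the SCU is tuned to; the last event is delayed
           once, the first event is delayed most (e.g. ['N','W','E']).
    """
    k = len(order)
    return sum(delay_kernel(t - event_times[d], k - i)
               for i, d in enumerate(order))

# Events N at t=0, W at t=1, E at t=2: for the N-W-E unit all three
# delayed copies line up at t = 3 * T_DELAY; for a mismatched unit
# only one copy contributes at that moment.
events = {"N": 0.0, "W": 1.0, "E": 2.0}
drive_matched = scu_drive(events, ["N", "W", "E"], 3.0)
drive_other = scu_drive(events, ["E", "W", "N"], 3.0)
```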
Figure 3: The sequence classification network. a) Outline of its WTA structure (shown
within the dashed line) and its input stage (k=3). The time-delays between the DSU and
the SCU are numbered in units of T_delay. Filled dots are excitatory connections while empty
ones are inhibitory. The additional inhibitory delay is not shown. The marked unit recognizes the sequence N-W-E. b) A sequence is classified by delaying consecutive motion
events such that they provide a simultaneous excitatory input.
1 The time-delay functions D_l(t) are normalized kernels peaking at t = l T_delay; their exact functional form is taken from [6].
3 Performance of the system
We measure the performance of the system in two different ways. Firstly, we analyze the
robustness to time warping. Knowing the response properties of the optical flow chip [8] we
simulate its output to analyze systematically the two other stages of the system. Secondly,
we test the complete system including the optical flow chip under real conditions. Here,
only a qualitative assessment can be given.
3.1 Robustness to time warping
We simulate the visual motion trajectories as a sum of Gaussians in time, thus
m(t) = Σ_j a_j exp(-(t - t_j)²/(2σ²)). The important parameters are
the width σ of the Gaussians and the time difference Δt between the centers of two
neighboring Gaussians. Three schemes are tested: changes of σ only, changes of Δt only,
and a linear stretch in time, i.e. a change in both parameters. Time is always measured in
units of the characteristic time-delay T_delay.

For fixed Δt, σ can be decreased down to a fraction of T_delay for sequences of length
two, and further for longer sequences. Fixing σ, classification is still
guaranteed for varying Δt according to Figure 4a; e.g. for a sequence of length three and
a given input strength, Δt can maximally increase by the amount shown for three and four
events (gray and white bars in Figure 4). Linear time stretches change the total input to the
system. This causes the asymmetry seen in Figure 4b. Short sequences are relatively more
robust to any change in Δt than longer sequences².
Figure 4: Time warping. The histograms show the maximal acceptable time warping.
The results are shown for three different trajectory lengths (black: two motion events,
gray: three events, white: four events) and three different input strengths (maximal output
voltages of the optical flow chip). a) σ is held fixed while Δt is changed. b) Time
is stretched linearly and therefore the duration of the events is proportional to Δt. No
classification is possible for sequences of length four at very low input levels.
The system cannot distinguish between the sequences e.g. N-W-E-W and N-W-W-W. In
this case, the sum of the weighted integrals of the delay functions of both sequences leads
to an equivalent input to the SCN. However, if two adjacent events are not allowed to be
the same, this problem does not occur.
2 Imagine the time warp being substantial. For a sequence with five events or more, the time shift
becomes larger than T_delay for some of the events, which leads to inhibition instead of excitation.
3.2 Real world application - writing letters with patterns of hand movements
The complete system was applied to classify visual motion patterns elicited by hand movements in front of the optical flow chip. Using sequences of three events we are able to
classify 36 valid sequences and therefore encode the alphabet. Figure 5 shows a typical
visual motion pattern (assigned to the letter "H") and the corresponding signals at all stages
of processing.
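The count of 36 follows from the constraint, noted above, that adjacent events must differ: 4 choices for the first event and 3 for each of the two remaining ones, 4·3·3 = 36. A quick enumeration confirms this:

```python
from itertools import product

DIRECTIONS = "NESW"

def valid_sequences(length=3):
    """Sequences of motion events in which adjacent events differ
    (the case the SCN can classify unambiguously)."""
    return [seq for seq in product(DIRECTIONS, repeat=length)
            if all(a != b for a, b in zip(seq, seq[1:]))]

codes = valid_sequences()  # 36 sequences, enough to encode the alphabet
```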
[Figure 5, panels a-d: motion traces in Volts, DSU activity, and SCU activity plotted against time in units of Tdelay.]
Figure 5: Tracking a signal through all stages. a) The output of the optical flow chip to
a moving hand in a N-S vs. E-W motion plot. The marks on the trajectory show different
time stamps. b) The same trajectory including the time stamps in a motion vs. time plot
(N-S motion: solid line, E-W motion: dashed line). Time is given in units of the delay. c) The
output of the DSN showing classification in motion space (N: solid line, E: dashed, W:
dotted). d) The output of the SCN. Here, the unit that recognizes the trajectory class 'H' is
shown by the solid line. The detection threshold is set at 0.8 maximal activity.
The system runs on a 166 MHz Pentium PC using MatLab (The MathWorks Inc.). The signal
of the optical flow chip is read into the computer using an AD-card. All simulations are
done with simple forward integration of the differential equations.
4 Learning motion trajectories
We expanded the system to be able to learn visual motion patterns. We model each set
of four synapses connecting the four DSU with the same time-delay to a single SCU by a
competitive network of four synapse units (see Figure 6) with very slow time constants. We
impose on the output of the four units that their sum equals a constant. The cost function
[Figure 6, panels a-c: a network schematic with inputs N, E, S, W, an excitatory weight wexc and an inhibitory weight -winh, plus synapse-unit activity and weight traces over 0-20 seconds.]
time [sec]
Figure 6: Learning trajectory classes. a) Schematics of the competitive network of a set
of synapses. The dashed line shows one synapse: the synaptic weight, the input to the
synapse unit, and its output. Multiplication by the output signal of the SCU is indicated
by the 'x' in the small square, the linear mapping by the bold line from the synapse output
to the weight. b) Output of the SCU during the repetitive presentation of a particular
trajectory. Initial weights were random. c) Learning the synaptic weights associated with
one particular time-delay.
is given by (5), where the synapse units have a sigmoidal activation function and the
remaining quantities are defined as in (2) and (4). The synaptic dynamics are given by (6).
Since the activity of the synapse units is always between 0 and 1, a linear mapping to the
actual synaptic weights is performed. To allow activation of the SCU with unlearned
synapses, we choose an offset relative to the strongest possible inhibitory weight. This
assures that the weights are all slightly positive before learning and increase with
increasing learning progress. The input term in (6) is the product of the input weight, the
delayed input to the synapse, and the output of the SCU (see Figure 6a). A further term
is included to enable learning only if the sequence is completed. The weight of a particular
synapse is increased if both the input to the synapse and the activity of the target SCU are
high. The reduction of the other weights is due to the competitive network behavior. The
learning mechanism is tested using simulated and real world inputs. Under the restriction
that trajectories must differ by more than one event the system is able to learn sequences
of length three. Sequences that differ by only one event are learnt by the same SCU; thus
subsequent sequences overwrite previously learned ones. In Figure 6b,c the learning process
of one particular trajectory class of three events is shown. This trajectory is part of a set of
six trajectories that were learned during one simulation cycle, where each input trajectory
was consecutively presented five times.
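The competitive update described above can be sketched in code. This is a minimal pure-Python illustration in our own notation, not the paper's actual equations (5)-(6); the function names, the learning rate `lr`, and the offset `eps` are our assumptions:

```python
def update_synapse_group(activities, inputs, scu_output, lr=0.1):
    """Toy sketch of the competitive rule described above for one group of
    four synapse units: an activity grows when its delayed input and the
    target SCU output are both high, and renormalizing the group keeps the
    activities summing to one, so the other units are reduced by the
    competition."""
    updated = [a + lr * x * scu_output for a, x in zip(activities, inputs)]
    total = sum(updated)
    return [u / total for u in updated]

def activity_to_weight(a, w_exc=1.0, eps=0.05):
    """Linear map from a unit activity in [0, 1] to an actual synaptic
    weight; the eps offset keeps all weights slightly positive before
    learning (values here are illustrative, not from the paper)."""
    return eps + a * (w_exc - eps)
```

Renormalizing after the Hebbian step is one simple way to realize the sum constraint; the paper instead imposes it through the slow competitive dynamics of the synapse units.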
5 Conclusions and outlook
We have shown a strikingly simple3 network system that reliably classifies distinct visual
motion patterns. Clearly, the application of the optical flow chip substantially reduces the
remaining computational load and allows real-time processing.
A remarkable feature of our system is that - with the exception of the visual motion frontend, but including the learning rule - all networks have competitive dynamics and are based
on the classical Winner-Take-All architecture. WTA networks are shown to be compactly
implemented in aVLSI [10]. Thus, given also the small network size, it seems very likely
to allow a complete aVLSI system-on-chip integration, not considering the learning mechanism. Such a single chip system would represent a very efficient computational device,
requiring minimal space, weight and power. The ?quasi-discretization? in visual motion
space that emerges from the non-linear amplification in the direction selective network
could be refined to include not only more directions but also different speed-levels. That
way, richer sets of trajectories can be classified. Many applications in mobile electronic devices are imaginable that require (or desire) a touchless interface. Commercial applications
in people control and surveillance seem feasible and are already considered.
Acknowledgments
This work is supported by the Human Frontiers Science Project grant no. RG00133/2000-B
and ETHZ Forschungskredit no. 0-23819-01.
References
[1] A. Stocker and R. J. Douglas. Computation of smooth optical flow in a feedback connected analog network. Advances in Neural Information Processing Systems, 11:706–712, 1999.
[2] L. G. Sotelino, M. Saerens, and H. Bersini. Classification of temporal trajectories by continuous-time recurrent nets. Neural Networks, 7(5):767–776, 1994.
[3] D. T. Lin, J. E. Dayhoff, and P. A. Ligomenides. Trajectory recognition with a time-delay neural network. International Joint Conference on Neural Networks, Baltimore, III:197–202, 1992.
[4] H. Gutfreund and M. Mezard. Processing of temporal sequences in neural networks. Phys. Rev. Letters, 61(2):235–238, July 1988.
[5] D.-L. Lee. Pattern sequence recognition using a time-varying Hopfield network. IEEE Trans. on Neural Networks, 13(2):330–342, March 2002.
[6] D. W. Tank and J. J. Hopfield. Neural computation by concentrating information in time. Proc. Natl. Acad. Sci. USA, 84:1896–1900, April 1987.
[7] J. J. Hopfield and C. D. Brody. What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proc. Natl. Acad. Sci. USA, 98:1282–1287, January 2001.
[8] A. Stocker. Constraint optimization networks for visual motion perception - analysis and synthesis. PhD thesis, ETH Zürich, No. 14360, 2001.
[9] J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088–3092, May 1984.
[10] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, and S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947–951, June 2000.
3 E.g. the presented man-machine interface consists only of 31 units and 4x4 time-delays, not
counting the network elements in the optical flow chip.
Maximally Informative Dimensions: Analyzing
Neural Responses to Natural Signals
Tatyana Sharpee, Nicole C. Rust, and William Bialek
Sloan-Swartz Center for Theoretical Neurobiology, Department of Physiology,
University of California at San Francisco, San Francisco, California 94143-0444
Center for Neural Science, New York University, New York, NY 10003
Department of Physics, Princeton University, Princeton, New Jersey 08544
[email protected], [email protected], [email protected]
We propose a method that allows for a rigorous statistical analysis of
neural responses to natural stimuli, which are non-Gaussian and exhibit
strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of the high dimensional stimulus space, but within this subspace the responses can
be arbitrarily nonlinear. Therefore we maximize the mutual information
between the sequence of elicited neural responses and an ensemble of
stimuli that has been projected on trial directions in the stimulus space.
The procedure can be done iteratively by increasing the number of directions with respect to which information is maximized. Those directions
that allow the recovery of all of the information between spikes and the
full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed is much smaller than that of the
overall stimulus space, it may become experimentally feasible to map
out the neuron's input-output function even under fully natural stimulus
conditions. This contrasts with methods based on correlation functions
(reverse correlation, spike-triggered covariance, ...) which all require
simplified stimulus statistics if we are to use them rigorously.
1 Introduction
From olfaction to vision and audition, there is an increasing need, and a growing number
of experiments [1]-[8] that study responses of sensory neurons to natural stimuli. Natural
stimuli have specific statistical properties [9, 10], and therefore sample only a subspace of
all possible spatial and temporal frequencies explored during stimulation with white noise.
Observing the full dynamic range of neural responses may require using stimulus ensembles which approximate those occurring in nature, and it is an attractive hypothesis that
the neural representation of these natural signals may be optimized in some way. Finally,
some neuron responses are strongly nonlinear and adaptive, and may not be predicted from
a combination of responses to simple stimuli. It has also been shown that the variability
in neural response decreases substantially when dynamical, rather than static, stimuli are
used [11, 12]. For all these reasons, it would be attractive to have a rigorous method of
analyzing neural responses to complex, naturalistic inputs.
The stimuli analyzed by sensory neurons are intrinsically high-dimensional, with dimensionality D. For example, in the case of visual neurons, the input is specified as light
intensity on a grid of pixels. The dimensionality increases further if the
time dependence is to be explored as well. Full exploration of such a large parameter space
is beyond the constraints of experimental data collection. However, progress can be made
provided we make certain assumptions about how the response has been generated. In the
simplest model, the probability of response can be described by one receptive field (RF)
[13]. The receptive field can be thought of as a special direction in the stimulus space
such that the neuron's response depends only on the projection of a given stimulus s onto ê_1.
This special direction is the one found by the reverse correlation method [13, 14]. In a
more general case, the probability of the response depends on projections s_i = s · ê_i,
1 ≤ i ≤ K, of the stimulus on a set of K vectors {ê_1, ..., ê_K}:

    P(spike|s) = P(spike) g(s_1, s_2, ..., s_K),    (1)

where P(spike|s) is the probability of a spike given a stimulus s and P(spike) is the average
firing rate. In what follows we will call the subspace spanned by the set of vectors
{ê_i} the relevant subspace (RS). Even though the ideas developed below can be used to
analyze input-output functions with respect to different neural responses, we settle on a
single spike as the response of interest.
Eq. (1) in itself is not yet a simplification if the dimensionality K of the RS is equal to the
dimensionality D of the stimulus space. In this paper we will use the idea of dimensionality
reduction [15, 16] and assume that K ≪ D. The input-output function g in Eq. (1) can be
strongly nonlinear, but it is presumed to depend only on a small number of projections. This
assumption appears to be less stringent than that of approximate linearity which one makes
when characterizing a neuron's response in terms of Wiener kernels. The most difficult part
in reconstructing the input-output function is to find the RS. For K > 1, a description in
terms of any linear combination of the vectors {ê_i} is just as valid, since we did not make any
assumptions as to a particular form of the nonlinear function g. We might however prefer one
coordinate system over another if it, for example, leads to sparser probability distributions
or more statistically independent variables.
Once the relevant subspace is known, the probability P(spike|s) becomes a function of
only a few parameters, and it becomes feasible to map this function experimentally, inverting
the probability distributions according to Bayes' rule:

    g(s_1, ..., s_K) = P(s_1, ..., s_K | spike) / P(s_1, ..., s_K).    (2)
If stimuli are correlated Gaussian noise, then the neural response can be characterized by
the spike-triggered covariance method [15, 16]. It can be shown that the dimensionality of
the RS is equal to the number of non-zero eigenvalues of a matrix given by a difference
between covariance matrices of all presented stimuli and stimuli conditional on a spike.
Moreover, the RS is spanned by the eigenvectors associated with the non-zero eigenvalues multiplied by the inverse of the a priori covariance matrix. Compared to the reverse
correlation method, we are no longer limited to finding only one of the relevant directions ê_i.
However, because of the necessity to probe a two-point correlation function, the spike-triggered covariance method requires better sampling of distributions of inputs conditional
on a spike.
In this paper we investigate whether it is possible to lift the requirement for stimuli to be
Gaussian. When using natural stimuli, which are certainly non-Gaussian, the RS cannot be
found by the spike-triggered covariance method. Similarly, the reverse correlation method
does not give the correct RF, even in the simplest case where the input-output function (1)
depends only on one projection. However, vectors that span the RS are clearly special directions in the stimulus space. This notion can be quantified by Shannon information, and
an optimization problem can be formulated to find the RS. Therefore the current implementation of the dimensionality reduction idea is complementary to the clustering of stimuli done in the information bottleneck method [17]; see also Ref. [18]. Non-information-based measures of similarity between the probability distributions P(s) and P(s|spike) have
also been proposed [19]. We illustrate how the optimization scheme of maximizing information as a function of direction in the stimulus space works with natural stimuli for model
orientation sensitive cells with one and two relevant directions, much like simple and complex cells found in primary visual cortex. It is also possible to estimate average errors in
the reconstruction. The advantage of this optimization scheme is that it does not rely on
any specific statistical properties of the stimulus ensemble, and can be used with natural
stimuli.
2 Information as an objective function
When analyzing neural responses, we compare the a priori probability distribution of all
presented stimuli with the probability distribution of stimuli which lead to a spike. For
Gaussian signals, the probability distribution can be characterized by its second moment,
the covariance matrix. However, an ensemble of natural stimuli is not Gaussian, so that
neither second nor any other finite number of moments is sufficient to describe the probability distribution. In this situation, the Shannon information provides a convenient way
of comparing two probability distributions. The average information carried by the arrival
time of one spike is given by [20]

    I_spike = ∫ ds P(s|spike) log_2 [ P(s|spike) / P(s) ].    (3)
The information per spike, as written in (3), is difficult to estimate experimentally, since
it requires either sampling of the high-dimensional probability distribution P(s|spike) or
a model of how spikes were generated, i.e. the knowledge of the low-dimensional RS.
However it is possible to calculate I_spike in a model-independent way, if stimuli are presented
multiple times to estimate the probability distribution P(spike|s). Then

    I_spike = ⟨ [P(spike|s)/P(spike)] log_2 [P(spike|s)/P(spike)] ⟩_s,    (4)
where the average is taken over all presented stimuli. Note that for a finite dataset of N
repetitions, the obtained value of I_spike will on average be larger than its infinite-data
limit, with a difference proportional to N_s/N, where N_s is the number of different stimuli,
and N is the number of elicited spikes [21] across all of the repetitions. The true value
can also be found by extrapolating to N → ∞ [22]. The knowledge of the total
information per spike will characterize the quality of the reconstruction of the neuron's
input-output relation.
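As a concrete illustration of eq. (4), the information per spike can be estimated directly from spike counts over repeated presentations. This is a hypothetical sketch; the function name and data layout are ours, not from the paper:

```python
import math

def information_per_spike(spike_counts, n_reps):
    """Estimate I_spike in bits from eq. (4): the average over all presented
    stimuli of (p/pbar) * log2(p/pbar), where p = P(spike|s) is estimated
    from n_reps repeated presentations of each stimulus frame and
    pbar = P(spike) is the overall mean spike probability."""
    probs = [c / n_reps for c in spike_counts]   # P(spike|s) per frame
    pbar = sum(probs) / len(probs)               # P(spike)
    total = 0.0
    for p in probs:
        if p > 0:                                # zero-rate frames contribute 0
            r = p / pbar
            total += r * math.log2(r)
    return total / len(probs)
```

For a deterministic cell that spikes on exactly half of the frames this estimate gives 1 bit per spike, while a constant firing probability gives 0; the upward finite-sampling bias discussed above appears when the per-frame probabilities are themselves noisy.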
Having in mind a model in which spikes are generated according to a projection onto a low-dimensional subspace, we start by projecting all of the presented stimuli s on a particular
direction v in the stimulus space, and form the probability distributions P_v(x) and
P_v(x|spike), where x = s · v. The information

    I(v) = ∫ dx P_v(x|spike) log_2 [ P_v(x|spike) / P_v(x) ]    (5)
provides an invariant measure of how much the occurrence of a spike is determined by the
projection on the direction v. It is a function only of direction in the stimulus space and
does not change when the vector v is multiplied by a constant: for any constant c, the
distributions of the projection on cv are rescaled versions of those for v, which leaves the
ratio inside the logarithm unchanged. When evaluated along any vector, I(v) ≤ I_spike.
The total information can be recovered along one particular direction only if K = 1, and
the RS is one-dimensional.
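A histogram-based estimate of I(v) following eq. (5) might look as follows. This is an illustrative sketch; the binning choices and names are our own assumptions, not the paper's implementation:

```python
import math
import random

def information_along_direction(stimuli, spikes, v, n_bins=20):
    """Estimate I(v) in bits from eq. (5): histogram the projections
    x = s . v over all stimuli and over spike-triggering stimuli, then sum
    P_v(x|spike) * log2(P_v(x|spike)/P_v(x)) over the occupied bins."""
    xs = [sum(si * vi for si, vi in zip(s, v)) for s in stimuli]
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    p_all = [0.0] * n_bins
    p_spk = [0.0] * n_bins
    for x, spike in zip(xs, spikes):
        b = min(int((x - lo) / width), n_bins - 1)
        p_all[b] += 1.0
        p_spk[b] += spike
    n_all, n_spk = sum(p_all), sum(p_spk)
    info = 0.0
    for a, s in zip(p_all, p_spk):
        if s > 0 and a > 0:
            info += (s / n_spk) * math.log2((s / n_spk) / (a / n_all))
    return info
```

For a toy threshold cell driven by one stimulus component, the estimate is large along the relevant direction and near zero along an orthogonal one, up to the finite-sampling bias discussed above.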
By analogy with (5), one could also calculate the information I(v_1, ..., v_n) along a set of
several directions, based on the multi-point probability distributions:

    I(v_1, ..., v_n) = ∫ ∏_i dx_i P_{v_1,...,v_n}({x_i}|spike) log_2 [ P_{v_1,...,v_n}({x_i}|spike) / P_{v_1,...,v_n}({x_i}) ].
If we are successful in finding all K of the directions ê_i in the input-output relation (1),
then the information evaluated along the found set will be equal to the total information
I_spike. When we calculate information along a set of vectors that are slightly off
from the RS, the answer is, of course, smaller than I_spike and is quadratic in the
deviations δv_i. One can therefore find the RS by maximizing information with respect
to K vectors simultaneously. The information does not increase if more vectors outside
the RS are included into the calculation. On the other hand, the result of optimization
with respect to a number of vectors smaller than K may deviate from the RS if stimuli are
correlated; the deviation is proportional to a weighted average over the correlated stimulus
ensemble. For uncorrelated stimuli, any vector or set of vectors that maximizes the
information belongs to the RS. To find the RS, we first maximize I(v), and compare this
maximum with I_spike, which is estimated according to (4). If the difference exceeds that
expected from finite sampling corrections, we increment the number of directions with
respect to which information is simultaneously maximized.
The information I(v) as defined by (5) is a continuous function, whose gradient can be
computed:

    ∇_v I = ∫ dx P_v(x) [ ⟨s|x, spike⟩ − ⟨s|x⟩ ] (d/dx) [ P_v(x|spike) / P_v(x) ].    (6)
Since the information does not change with the length of the vector, v · ∇_v I = 0 (which can
also be seen from (6) directly), and unnecessary evaluations of information for multiples of v
are avoided by maximizing along the gradient. As an optimization algorithm, we have
used a combination of gradient ascent and simulated annealing: successive line
maximizations were done along the direction of the gradient. During line maximizations, a
point with a smaller value of information was accepted according to Boltzmann statistics,
with probability exp[(I(v_{i+1}) − I(v_i))/T]. The effective temperature T is reduced upon
completion of each line maximization.
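The optimization loop just described can be sketched as follows. This is a toy version with our own choices for the step size and cooling schedule, which the paper does not specify here:

```python
import math
import random

def anneal_maximize(f, grad, v0, step=0.1, t0=1.0, cooling=0.9, n_iter=200, seed=0):
    """Sketch of the loop described above: move along the gradient projected
    orthogonally to v (I(v) is invariant to the length of v), accept downhill
    moves with Boltzmann probability exp(dI/T), and cool T after each step."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(c * c for c in v0))
    v = [c / norm for c in v0]
    temp = t0
    for _ in range(n_iter):
        g = grad(v)
        dot = sum(gi * vi for gi, vi in zip(g, v))
        g = [gi - dot * vi for gi, vi in zip(g, v)]   # stay on the unit sphere
        cand = [vi + step * gi for vi, gi in zip(v, g)]
        norm = math.sqrt(sum(c * c for c in cand))
        cand = [c / norm for c in cand]
        d = f(cand) - f(v)
        if d > 0 or rng.random() < math.exp(d / temp):
            v = cand
        temp *= cooling                               # reduce effective temperature
    return v
```

Projecting the gradient orthogonally to v is one way to exploit the scale invariance noted above; with an estimator of I(v) and its gradient in place of the toy objective, the same loop applies.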
3 Discussion
We tested the scheme of looking for the most informative directions on model neurons that
respond to stimuli derived from natural scenes. As stimuli we used patches of photos digitized to
8-bit scale, in which no corrections were made for the camera's light intensity transformation function. Our goal is to demonstrate that even though spatial correlations present
in natural scenes are non-Gaussian, they can be successfully removed from the estimate of
vectors defining the RS.
3.1 Simple Cell
Our first example is taken to mimic properties of simple cells found in the primary visual
cortex. A model phase- and orientation-sensitive cell has a single relevant direction ê_1,
shown in Fig. 1(a). A given frame s leads to a spike if the projection s · ê_1 reaches a
threshold value θ in the presence of noise:

    P(spike|s) / P(spike) = g(s · ê_1) = ⟨ H(s · ê_1 − θ + ξ) ⟩_ξ,    (7)
Figure 1: Analysis of a model simple cell with RF ê_1 shown in (a). The spike-triggered
average is shown in (b). Panel (c) shows an attempt to remove correlations according
to the reverse correlation method; (d) the vector v found by maximizing information;
(e) the probability of a spike P(spike | s · v) (crosses) is compared to P(spike | s · ê_1)
used in generating spikes (solid line). The parameters θ and σ of the model are given
relative to s_max and s_min, the maximum and minimum values of s · ê_1 over the
ensemble of presented stimuli. (f) Convergence of the algorithm according to the
information I(v) and the projection v · ê_1 as a function of inverse effective temperature T⁻¹.
where the Gaussian random variable ξ of variance σ² models additive noise, and the function
H(x) = 1 for x > 0, and zero otherwise. Together with the RF ê_1, the threshold θ
and the noise variance σ² determine the input-output function.
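For illustration, the model of eq. (7) can be simulated by averaging the step function H over Monte Carlo noise samples. This is a sketch under our own naming assumptions:

```python
import random

def spike_probability(x, theta, sigma, n_noise=2000, seed=0):
    """Eq. (7): g(x) = <H(x - theta + xi)>_xi with Gaussian noise xi of
    standard deviation sigma. Averaged over noise samples, this is the
    probability that the noisy projection on the RF exceeds threshold."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, sigma) for _ in range(n_noise)]
    return [sum(1.0 for xi in noise if xv - theta + xi > 0) / n_noise
            for xv in x]
```

Equivalently, the average over ξ is the Gaussian tail probability of exceeding θ − x; sampling is used here only to mirror the form of the formula.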
The spike-triggered average (STA), shown in Fig. 1(b), is broadened because of spatial
correlations present in natural stimuli. If stimuli were drawn from a Gaussian probability
distribution, they could be decorrelated by multiplying the STA by the inverse of the a
priori covariance matrix, according to the reverse correlation method. The procedure is not
valid for non-Gaussian stimuli and nonlinear input-output functions (1). The result of such a
decorrelation is shown in Fig. 1(c). It is clearly missing the structure of the model filter.
However, it is possible to obtain a good estimate of it by maximizing information directly,
see panel (d). A typical progress of the simulated annealing algorithm with decreasing
temperature is shown in panel (f). There we plot both the information along the vector
and its projection on ê_1. The final value of the projection depends on the size of the data
set, see below. In the example shown in Fig. 1, spikes were generated with a small average
probability of a spike per frame. Having reconstructed the RF, one can proceed to
sample the nonlinear input-output function. This is done by constructing histograms for
P_v(x) and P_v(x|spike) of projections onto the vector v found by maximizing information,
and taking their ratio. In Fig. 1(e) we compare P(spike | s · v) (crosses)
with the probability P(spike | s · ê_1) used in the model (solid line).
used in the model (solid line).
,.- #
8
3"
,.-
?7 /;1?24!5 8
,.-=/;1?24!5!7 ! 8
3%6!
,
-=/;1?24!5>7
.
3.2 Estimated deviation from the optimal direction
8
When information is calculated with respect to a finite data set, the vector which maximizes will deviate from the true RF
. The deviation 7
arises because the
probability distributions are estimated from experimental histograms and differ from the
1
0.95
e1 ? vmax
0.9
0.85
0.8
0
1
2
3
N?1
10?5
spike
)
Figure 2: Projection of vector
that maximizes information onRF
is plotted as a
function of the number of spikes to show the linear scaling in $
(solid line is a fit).
F
,.- 7 /+1324!5 8
,.- 8
distributions found in the limit on infinite data size. For a simple cell,
the quality of recon
7
struction can be characterized by the projection
, where both and
?
are normalized, and 7 is by definition orthogonal to
. The deviation 7
,
where is the Hessian of information. Its structure is similar to that of a covariance matrix:
4
4 8
4
4
4.8
4.8
4 /
4
(8)
- :
5
5
5
,.- 7 /+132465 8
/
- 7 = 7 = 7 8
When averaged over possible outcomes of N trials, the gradient of information is zero for
the optimal direction. Here, in order to evaluate ⟨δv²⟩, we need to know the variance of the
gradient of I. By discretizing both the space of stimuli and of possible projections x, and
assuming that the probability of generating a spike is independent for different bins, one
can obtain that this variance scales as 1/N_spike. Therefore
an expected error in the reconstruction of the optimal filter is inversely proportional to the
number of spikes and is given by:

    ⟨δv²⟩ ≈ Tr′[ A⁻¹ ⟨∇I ∇Iᵀ⟩ A⁻¹ ] ∝ 1/N_spike,    (9)
where Tr′ means that the trace is taken in the subspace orthogonal to the model filter, since
by definition δv ⊥ ê_1. In Fig. 2 we plot the average projection of the normalized
reconstructed vector v on the RF ê_1, and show that it scales with the number of spikes.
3.3 Complex Cell
A sequence of spikes from a model cell with two relevant directions was simulated by
projecting each of the stimuli on two vectors that differ by π/2 in their spatial phase, taken
to mimic properties of complex cells, see Fig. 3. A particular frame leads to a spike
according to a logical OR, that is, if either s · ê_1, −(s · ê_1), s · ê_2, or −(s · ê_2) exceeds a
threshold value θ in the presence of noise. Similarly to (7),

    P(spike|s) / P(spike) = ⟨ H(|s · ê_1| − θ + ξ_1) OR H(|s · ê_2| − θ + ξ_2) ⟩_{ξ_1, ξ_2},    (10)
where ξ_1 and ξ_2 are independent Gaussian variables. The sampling of this input-output
function by our particular set of natural stimuli is shown in Fig. 3(c). Some, especially
large, combinations of values of s · ê_1 and s · ê_2 are not present in the ensemble.
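A toy simulation of this "OR" model can be written directly from eq. (10); here H(|x| − θ + ξ) > 0 is expressed equivalently as |x| + ξ > θ. Names and parameter values are illustrative, not from the paper:

```python
import random

def or_cell_spikes(stimuli, e1, e2, theta, sigma, seed=0):
    """Toy version of the eq. (10) model: a spike is fired when either
    rectified projection |s.e1| or |s.e2| exceeds the threshold theta after
    adding independent Gaussian noise, a logical OR of the two conditions."""
    rng = random.Random(seed)
    spikes = []
    for s in stimuli:
        x1 = abs(sum(a * b for a, b in zip(s, e1))) + rng.gauss(0.0, sigma)
        x2 = abs(sum(a * b for a, b in zip(s, e2))) + rng.gauss(0.0, sigma)
        spikes.append(1.0 if (x1 > theta or x2 > theta) else 0.0)
    return spikes
```

A spike train generated this way, together with a two-dimensional analogue of the projection histograms above, is what the two-direction maximization of the next paragraphs operates on.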
We start by maximizing information with respect to one direction. Contrary to the analysis
for a simple cell, one optimal direction recovers only about 60% of the total information per
spike. This is significantly different from the total information per spike for stimuli drawn
from natural scenes, where due to correlations even a random vector has a high probability
of explaining 60% of the total information per spike. We therefore go on to maximize
information with respect to two directions. An example of the reconstruction of the
input-output function of a complex cell is given in Fig. 3. The vectors v_1 and v_2 that
maximize I(v_1, v_2) are not orthogonal, and are also rotated with respect to ê_1 and ê_2.
However, the quality of reconstruction is independent of the particular choice of basis
within the RS. The appropriate measure of similarity between the two planes is the dot
product of their normals; in the example of Fig. 3 the two planes nearly coincide.
Maximizing information with respect to two directions requires a significantly slower cooling
rate, and consequently longer computational times. However, the expected error in the
reconstruction, ⟨δv_1²⟩ + ⟨δv_2²⟩, follows a 1/N_spike behavior, similarly to (9), and is
roughly twice that for a simple cell given the same number of spikes.
[Figure 3, panels (a)-(f): the model directions ê_1 and ê_2 and their reconstructions v_1 and v_2 on 30 × 30 grids, together with the model input-output function P(spike|s(1), s(2)) and its reconstruction P(spike|s · v_1, s · v_2).]
Figure 3: Analysis of a model complex cell with relevant directions $\hat{e}_1$ and $\hat{e}_2$ shown in (a) and (b). Spikes are generated according to an "OR" input-output function with the threshold $\theta$ and noise variance $\sigma^2$. Panel (c) shows how the input-output function is sampled by our ensemble of stimuli. Dark pixels for large values of $s^{(1)}$ and $s^{(2)}$ correspond to combinations that are not present in the ensemble. Below, we show vectors $\hat{v}_1$ and $\hat{v}_2$ found by maximizing information, together with the corresponding input-output function with respect to projections $s\cdot\hat{v}_1$ and $s\cdot\hat{v}_2$.
In conclusion, features of the stimulus that are most relevant for generating the response
of a neuron can be found by maximizing information between the sequence of responses
and the projection of stimuli on trial vectors within the stimulus space. Calculated in this
manner, information becomes a function of direction in a stimulus space. Those directions
that maximize the information and account for the total information per response of interest
span the relevant subspace. This analysis allows the reconstruction of the relevant subspace
without assuming a particular form of the input-output function. It can be strongly nonlinear within the relevant subspace, and is to be estimated from experimental histograms.
Most importantly, this method can be used with any stimulus ensemble, even those that are
strongly non-Gaussian as in the case of natural images.
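To make the recipe concrete, the sketch below (our own illustrative code; the function and variable names are not from the paper) estimates the information carried by the projection of the stimuli onto one trial direction, directly from experimental histograms:

```python
import numpy as np

def information_per_spike(stimuli, spikes, v, n_bins=20):
    """Estimate the information (bits per spike) carried by the projection
    of the stimuli onto direction v, from histograms:
    I(v) = sum_x P_v(x|spike) log2[ P_v(x|spike) / P_v(x) ]."""
    x = stimuli @ (v / np.linalg.norm(v))                    # projections s . v_hat
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    p_prior, _ = np.histogram(x, bins=bins)                  # P_v(x)
    p_spike, _ = np.histogram(x, bins=bins, weights=spikes)  # P_v(x|spike)
    p_prior = p_prior / p_prior.sum()
    p_spike = p_spike / p_spike.sum()
    ok = (p_spike > 0) & (p_prior > 0)
    return float(np.sum(p_spike[ok] * np.log2(p_spike[ok] / p_prior[ok])))
```

Maximizing this quantity over the direction $v$ (the paper uses simulated annealing) yields the most informative dimension; the two-direction case replaces the 1-D histograms with 2-D ones.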
Acknowledgments
We thank K. D. Miller for many helpful discussions. Work at UCSF was supported in part
by the Sloan and Swartz Foundations and by a training grant from the NIH. Our collaboration began at the Marine Biological Laboratory in a course supported by grants from
NIMH and the Howard Hughes Medical Institute.
References
[1] F. Rieke, D. A. Bodnar, and W. Bialek. Naturalistic stimuli increase the rate and efficiency of information transmission by primary auditory afferents. Proc. R. Soc. Lond. B, 262:259–265, 1995.
[2] W. E. Vinje and J. L. Gallant. Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287:1273–1276, 2000.
[3] F. E. Theunissen, K. Sen, and A. J. Doupe. Spectral-temporal receptive fields of nonlinear auditory neurons obtained using natural sounds. J. Neurosci., 20:2315–2331, 2000.
[4] G. D. Lewen, W. Bialek, and R. R. de Ruyter van Steveninck. Neural coding of naturalistic motion stimuli. Network: Comput. Neural Syst., 12:317–329, 2001.
[5] N. J. Vickers, T. A. Christensen, T. Baker, and J. G. Hildebrand. Odour-plume dynamics influence the brain's olfactory code. Nature, 410:466–470, 2001.
[6] K. Sen, F. E. Theunissen, and A. J. Doupe. Feature analysis of natural sounds in the songbird auditory forebrain. J. Neurophysiol., 86:1445–1458, 2001.
[7] D. L. Ringach, M. J. Hawken, and R. Shapley. Receptive field structure of neurons in monkey visual cortex revealed by stimulation with natural image sequences. Journal of Vision, 2:12–24, 2002.
[8] W. E. Vinje and J. L. Gallant. Natural stimulation of the nonclassical receptive field increases information transmission efficiency in V1. J. Neurosci., 22:2904–2915, 2002.
[9] D. L. Ruderman and W. Bialek. Statistics of natural images: scaling in the woods. Phys. Rev. Lett., 73:814–817, 1994.
[10] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4:2379–2394, 1987.
[11] P. Kara, P. Reinagel, and R. C. Reid. Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron, 27:635–646, 2000.
[12] R. R. de Ruyter van Steveninck, G. D. Lewen, S. P. Strong, R. Koberle, and W. Bialek. Reproducibility and variability in neural spike trains. Science, 275:1805–1808, 1997.
[13] F. Rieke, D. Warland, R. R. de Ruyter van Steveninck, and W. Bialek. Spikes: Exploring the neural code. MIT Press, Cambridge, 1997.
[14] E. de Boer and P. Kuyper. Triggered correlation. IEEE Trans. Biomed. Eng., 15:169–179, 1968.
[15] N. Brenner, W. Bialek, and R. R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26:695–702, 2000.
[16] R. R. de Ruyter van Steveninck and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transfer in short spike sequences. Proc. R. Soc. Lond. B, 234:379–414, 1988.
[17] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Allerton Conference on Communication, Control and Computing, edited by B. Hajek & R. S. Sreenivas. University of Illinois, 368–377, 1999.
[18] A. G. Dimitrov and J. P. Miller. Neural coding and decoding: communication channels and quantization. Network: Comput. Neural Syst., 12:441–472, 2001.
[19] L. Paninski. Convergence properties of some spike-triggered analysis techniques. In Advances in Neural Information Processing 15, edited by S. Becker, S. Thrun, and K. Obermayer, 2003.
[20] N. Brenner, S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. Synergy in a neural code. Neural Comp., 12:1531–1552, 2000.
[21] A. Treves and S. Panzeri. The upward bias in measures of information derived from limited data samples. Neural Comp., 7:399, 1995.
[22] S. P. Strong, R. Koberle, R. R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Phys. Rev. Lett., 80:197–200, 1998.
Gaussian Process Priors With Uncertain Inputs
Application to Multiple-Step Ahead Time Series
Forecasting
Agathe Girard
Department of Computing Science
University of Glasgow
Glasgow, G12 8QQ
[email protected]
Carl Edward Rasmussen
Gatsby Unit
University College London
London, WC1N 3AR
[email protected]
Joaquín Quiñonero-Candela
Informatics and Mathematical Modelling
Technical University of Denmark
Richard Petersens Plads, Building 321
DK-2800 Kongens, Lyngby, Denmark
[email protected]
Roderick Murray-Smith
Department of Computing Science
University of Glasgow, Glasgow, G12 8QQ
& Hamilton Institute
National University of Ireland, Maynooth
[email protected]
Abstract
We consider the problem of multi-step ahead prediction in time series
analysis using the non-parametric Gaussian process model. $k$-step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions. For a state-space model of the form $y_t = f(y_{t-1}, \dots, y_{t-L})$, the prediction of $y$ at time $t+k$ is based on the point estimates of the previous outputs. In this paper, we show how, using an analytical Gaussian approximation, we can formally incorporate the uncertainty about intermediate regressor values, thus updating the uncertainty on the current prediction.
1 Introduction
One of the main objectives in time series analysis is forecasting and in many real life problems, one has to predict ahead in time, up to a certain time horizon (sometimes called lead
time or prediction horizon). Furthermore, knowledge of the uncertainty of the prediction is
important. Currently, the multiple-step ahead prediction task is achieved by either explicitly training a direct model to predict $k$ steps ahead, or by doing repeated one-step ahead predictions up to the desired horizon, which we call the iterative method.
There are a number of reasons why the iterative method might be preferred to the 'direct' one. Firstly, the direct method makes predictions for a fixed horizon only, making it computationally demanding if one is interested in different horizons. Furthermore, the larger $k$, the more training data we need in order to achieve a good predictive performance, because of the larger number of 'missing' data between $t$ and $t+k$. On the other hand, the iterated method provides any $k$-step ahead forecast, up to the desired horizon, as well as the joint probability distribution of the predicted points.
In the Gaussian process modelling approach, one computes predictive distributions whose
means serve as output estimates. Gaussian processes (GPs) for regression have historically
been first introduced by O'Hagan [1] but started being a popular non-parametric modelling
approach after the publication of [7]. In [10], it is shown that GPs can achieve a predictive performance comparable to (if not better than) other modelling approaches like neural
networks or local learning methods. We will show that for a $k$-step ahead prediction which ignores the accumulating prediction variance, the model is not conservative enough, with unrealistically small uncertainty attached to the forecast. An alternative solution is presented for iterative $k$-step ahead prediction, with propagation of the prediction uncertainty.
2 Gaussian Process modelling
We briefly recall some fundamentals of Gaussian processes. For a comprehensive introduction, please refer to [5], [11], or the more recent review [12].
2.1 The GP prior model
Formally, the random function, or stochastic process, $f(x)$ is a Gaussian process, with mean $m(x)$ and covariance function $C(x_i, x_j)$, if its values at a finite number of points, $f(x_1), \dots, f(x_n)$, are seen as the components of a normally distributed random vector. If we further assume that the process is stationary: it has a constant mean and a covariance function only depending on the distance between the inputs $x$. For any $n$, we have

$$[f(x_1), \dots, f(x_n)] \sim \mathcal{N}(0, \Sigma), \qquad (1)$$

with $\Sigma_{ij} = C(x_i, x_j)$ giving the covariance between the points $f(x_i)$ and $f(x_j)$, which is a function of the inputs corresponding to the same cases $i$ and $j$. A common choice of covariance function is the Gaussian kernel¹

$$C(x_i, x_j) = v_1 \exp\left[-\frac{1}{2}\sum_{d=1}^{D} w_d\,(x_i^d - x_j^d)^2\right], \qquad (2)$$

where $D$ is the input dimension. The $w_d$ parameters (correlation length) allow a different distance measure for each input dimension $d$. For a given problem, these parameters will be adjusted to the data at hand and, for irrelevant inputs, the corresponding $w_d$ will tend to zero.

The role of the covariance function in the GP framework is similar to that of the kernels used in the Support Vector Machines community. This particular choice corresponds to a prior assumption that the underlying function $f$ is smooth and continuous. It accounts for a high correlation between the outputs of cases with nearby inputs.
1
This choice was motivated by the fact that, in [8], we were aiming at unified expressions for the
GPs and the Relevance Vector Machines models which employ such a kernel. More discussion about
possible covariance functions can be found in [5].
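As a hedged sketch (our own code, not from the paper), the covariance matrix for a kernel of this family can be computed as follows, with `w` holding one weight per input dimension:

```python
import numpy as np

def gaussian_cov(X1, X2, v1, w):
    """Gaussian kernel: C(x_i, x_j) = v1 * exp(-0.5 * sum_d w_d (x_i^d - x_j^d)^2).
    X1: (n1, D), X2: (n2, D), w: (D,) weights, one per input dimension."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * w).sum(-1)
    return v1 * np.exp(-0.5 * d2)
```

Setting a weight to zero removes the corresponding dimension from the distance measure, which is how an irrelevant input drops out of the model.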
2.2 Predicting with Gaussian Processes
1
Given this prior on the function $f$ and a set of data $\{(x_i, t_i)\}_{i=1}^{N}$, our aim, in this Bayesian setting, is to get the predictive distribution of the function value $f(x^*)$ corresponding to a new (given) input $x^*$.

If we assume an additive uncorrelated Gaussian white noise, with variance $v_0$, relates the targets (observations) to the function outputs, the distribution over the targets is Gaussian, with zero mean and covariance matrix $K$ such that $K_{ij} = \Sigma_{ij} + v_0\,\delta_{ij}$. We then adjust the vector of hyperparameters $\Theta = [w_1 \dots w_D\ v_1\ v_0]^T$ so as to maximise the log-likelihood $L(\Theta) = \log p(t\,|\,\Theta)$, where $t$ is the vector of observations.
conditional distribution,
is also Gaussian with mean and variance
(
(3)
2
(4)
*
where
is the
vector
* of covariances between
the new point and the training targets and
, with as given by
(2).
- .
4 .
- .
54
The predictive mean serves as a point estimate of the function output, with uncer
tainty . And it is also a point estimate for the target, , with variance 2
.
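Equations (3) and (4) translate directly into a few lines of linear algebra; the following sketch (ours, with an assumed stationary kernel so that $C(x^*, x^*) = v_1$) computes the predictive mean and variance at a new input:

```python
import numpy as np

def gp_predict(X, t, x_star, v1, w, v0):
    """Predictive mean mu(x*) = k^T K^-1 t and variance
    sigma^2(x*) = C(x*, x*) - k^T K^-1 k, with K = Sigma + v0*I."""
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * w).sum(-1)
        return v1 * np.exp(-0.5 * d2)
    K = cov(X, X) + v0 * np.eye(len(X))
    k = cov(X, x_star[None, :])[:, 0]
    mu = k @ np.linalg.solve(K, t)
    var = v1 - k @ np.linalg.solve(K, k)   # C(x*, x*) = v1 for this kernel
    return mu, var
```

Near a training point with small noise the mean interpolates the target and the variance shrinks; far from the data the prediction reverts to the prior (zero mean, variance $v_1$).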
3 Prediction at a random input

If we now assume that the input distribution is Gaussian, $x^* \sim \mathcal{N}(u, \Sigma_x)$, the predictive distribution is now obtained by integrating over $x^*$:

$$p(f(x^*)\,|\,u, \Sigma_x) = \int p(f(x^*)\,|\,x^*)\, p(x^*)\, dx^*, \qquad (5)$$

where $p(f(x^*)\,|\,x^*)$ is Normal, as specified by (3) and (4).
3.1 Gaussian approximation

Given that this integral is analytically intractable ($\sigma^2(x^*)$ is a complicated function of $x^*$), we opt for an analytical Gaussian approximation and only compute the mean and variance of $p(f(x^*)\,|\,u, \Sigma_x)$. Using the law of iterated expectations and conditional variance, the 'new' mean and variance are given by

$$m(u, \Sigma_x) = E_{x^*}[\mu(x^*)], \qquad (6)$$
$$v(u, \Sigma_x) = E_{x^*}[\sigma^2(x^*)] + \mathrm{Var}_{x^*}(\mu(x^*)), \qquad (7)$$

where $E_{x^*}$ indicates the expectation under $x^* \sim \mathcal{N}(u, \Sigma_x)$.

In our initial development, we made additional approximations ([2]). A first and second order Taylor expansion of $\mu$ and $\sigma^2$ respectively, around $u$, led to

$$m_{\mathrm{approx}}(u, \Sigma_x) = \mu(u), \qquad (8)$$
$$v_{\mathrm{approx}}(u, \Sigma_x) = \sigma^2(u) + \frac{1}{2}\,\mathrm{Tr}\!\left[\frac{\partial^2 \sigma^2(x^*)}{\partial x^*\,\partial x^{*T}}\bigg|_{x^*=u}\,\Sigma_x\right] + \frac{\partial \mu(x^*)}{\partial x^*}^{T}\bigg|_{x^*=u}\,\Sigma_x\,\frac{\partial \mu(x^*)}{\partial x^*}\bigg|_{x^*=u}. \qquad (9)$$

The detailed calculations can be found in [2].
In [8], we derived the exact expressions of the first and second moments. Rewriting the predictive mean $\mu(x^*)$ as a linear combination of the covariances between the new point $x^*$ and the training points (as suggested in [12]), $\mu(x^*) = \sum_{j=1}^{N} \beta_j\, C(x^*, x_j)$ with $\beta = K^{-1} t$, the calculation of $m(u, \Sigma_x)$ then involves the product of two Gaussian functions:

$$m(u, \Sigma_x) = \sum_{j=1}^{N} \beta_j \int C(x^*, x_j)\, p(x^*)\, dx^*. \qquad (10)$$

This leads to (refer to [9] for details)

$$m(u, \Sigma_x) = \sum_{j=1}^{N} \beta_j\, q_j, \quad q_j = v_1\, |I + W^{-1}\Sigma_x|^{-1/2} \exp\!\left[-\tfrac{1}{2}(u - x_j)^T (W + \Sigma_x)^{-1}(u - x_j)\right], \qquad (11)$$

with $W^{-1} = \mathrm{diag}(w_1, \dots, w_D)$ and $I$ the $D \times D$ identity matrix.

In the same manner, we obtain for the variance

$$v(u, \Sigma_x) = v_1 - \mathrm{Tr}\!\left[(K^{-1} - \beta\beta^T)\, L\right] - m(u, \Sigma_x)^2, \qquad (12)$$

where the elements of $L$, $L_{ij} = E_{x^*}[C(x^*, x_i)\, C(x^*, x_j)]$, are given by

$$L_{ij} = v_1^2\, |I + 2W^{-1}\Sigma_x|^{-1/2} \exp\!\left[-\tfrac{1}{4}(x_i - x_j)^T W^{-1}(x_i - x_j) - \tfrac{1}{2}\big(u - \bar{x}_{ij}\big)^T \big(\tfrac{W}{2} + \Sigma_x\big)^{-1}\big(u - \bar{x}_{ij}\big)\right], \qquad (13)$$

with $\bar{x}_{ij} = (x_i + x_j)/2$.
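These closed-form moments are easy to implement and to check against sampling. The sketch below (our own code; it assumes the Gaussian kernel with $W^{-1} = \mathrm{diag}(w)$ and strictly positive weights) computes the mean and variance of the prediction at a Gaussian-distributed input:

```python
import numpy as np

def moments_uncertain_input(X, t, u, Sx, v1, w, v0):
    """Exact mean m(u, Sx) and variance v(u, Sx) of the GP prediction at an
    uncertain input x* ~ N(u, Sx), for the Gaussian kernel
    C(x, x') = v1 * exp(-0.5 (x - x')^T Winv (x - x')), Winv = diag(w), w > 0."""
    n, D = X.shape
    Winv = np.diag(w)
    W = np.diag(1.0 / w)
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * w).sum(-1)
        return v1 * np.exp(-0.5 * d2)
    K = cov(X, X) + v0 * np.eye(n)
    beta = np.linalg.solve(K, t)
    # q_j = E[C(x*, x_j)]: Gaussian integral of a Gaussian kernel
    diff = X - u
    q = v1 * np.linalg.det(np.eye(D) + Winv @ Sx) ** -0.5 * np.exp(
        -0.5 * np.einsum('nd,de,ne->n', diff, np.linalg.inv(W + Sx), diff))
    m = beta @ q
    # L_ij = E[C(x*, x_i) C(x*, x_j)]: product of two Gaussians
    Dij = X[:, None, :] - X[None, :, :]
    Xbar = 0.5 * (X[:, None, :] + X[None, :, :]) - u
    L = (v1 ** 2 * np.linalg.det(np.eye(D) + 2.0 * Winv @ Sx) ** -0.5
         * np.exp(-0.25 * np.einsum('ijd,de,ije->ij', Dij, Winv, Dij))
         * np.exp(-0.5 * np.einsum('ijd,de,ije->ij', Xbar,
                                   np.linalg.inv(0.5 * W + Sx), Xbar)))
    var = v1 - np.trace(np.linalg.solve(K, L)) + beta @ L @ beta - m ** 2
    return m, var
```

With $\Sigma_x = 0$ it reduces to the usual noise-free-input prediction, which is a convenient sanity check.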
3.2 Monte-Carlo alternative

Equation (5) can be solved by performing a numerical approximation of the integral, using a simple Monte-Carlo approach:

$$p(f(x^*)\,|\,u, \Sigma_x) \simeq \frac{1}{T}\sum_{t=1}^{T} p(f(x^*)\,|\,x^{*t}), \qquad (14)$$

where $x^{*t}$ are (independent) samples from $\mathcal{N}(u, \Sigma_x)$.
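A minimal sketch of this numerical alternative (our code; `mu_sigma2` stands for any routine returning the conditional predictive moments) combines the samples through the law of total variance:

```python
import numpy as np

def mc_predictive_moments(mu_sigma2, u, Sx, T=100000, seed=0):
    """Monte-Carlo counterpart of the Gaussian approximation: draw
    x_t ~ N(u, Sx) and combine the conditional moments mu(x), sigma^2(x)
    by the law of total variance."""
    rng = np.random.default_rng(seed)
    xs = rng.multivariate_normal(u, Sx, size=T)
    mus, s2s = mu_sigma2(xs)                 # vectorized over samples
    return mus.mean(), s2s.mean() + mus.var()
```

For a toy predictor with known moments, the sampled estimates converge to the analytical values at the usual $1/\sqrt{T}$ rate.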
4 Iterative $k$-step ahead prediction of time series

For the multiple-step ahead prediction task of time series, the iterative method consists in making repeated one-step ahead predictions, up to the desired horizon. Consider the time series $\{t_i\}$ and the state-space model

$$x_t = [y_{t-1}, \dots, y_{t-L}]^T, \qquad y_t = f(x_t) + \epsilon_t,$$

where $x_t$ is the state at time $t$ (we assume that the lag $L$ is known) and the (white) noise $\epsilon_t$ has variance $v_0$.

Then, the 'naive' iterative $k$-step ahead prediction method works as follows: it predicts only one time step ahead, using the estimate of the output of the current prediction, as well as previous outputs (up to the lag $L$), as the input to the prediction of the next time step, until the prediction $k$ steps ahead is made. That way, only the output estimates are used and the uncertainty induced by each successive prediction is not accounted for.

Using the results derived in the previous section, we suggest to formally incorporate the uncertainty information about the intermediate regressor. That is, as we predict ahead in time, we now view the lagged outputs as random variables. In this framework, the input at time $t+k$ is a random vector with mean formed by the predicted means of the lagged outputs $y_{t+k-1}, \dots, y_{t+k-L}$, given by (11). The $L \times L$ input covariance matrix has the different predicted variances on its diagonal (with the estimated noise variance added to them), computed with (12), and the off-diagonal elements are given by, in the case of the exact solution, the cross-covariances $\mathrm{Cov}(y_{t+k-i}, y_{t+k-j})$ between the lagged outputs, which can be computed with the same kind of Gaussian integrals (see [9] for details).
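The two schemes can be sketched in a single loop; the code below (our own simplification to a first-order model, i.e. lag $L = 1$) feeds either the mean alone or the mean and variance forward:

```python
import numpy as np

def iterate_ahead(predict, y0, k, v0):
    """Iterative k-step ahead forecasting sketch for a first-order model
    y_t = f(y_{t-1}) + eps, Var(eps) = v0.  `predict(u, s2)` returns the
    mean and variance of f at an input distributed as N(u, s2); the naive
    scheme corresponds to a `predict` that ignores s2."""
    means, variances = [], []
    u, s2 = y0, 0.0
    for _ in range(k):
        m, var = predict(u, s2)
        u, s2 = m, var + v0          # feed mean and variance forward
        means.append(u)
        variances.append(s2)
    return np.array(means), np.array(variances)
```

With the naive variant the returned variances stay at the one-step level, while the propagated variances grow with the horizon.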
4.1 Illustrative examples

The first example is intended to provide a basis for comparing the approximate and exact solutions, within the Gaussian approximation of (5), to the numerical solution (Monte-Carlo sampling from the true distribution), when the uncertainty is propagated as we predict ahead in time. We use the second example, inspired from real-life problems, to show that iteratively predicting ahead in time without taking account of the uncertainties induced by each successive prediction leads to inaccurate results, with unrealistically small error bars.

We then assess the predictive performance of the different methods by computing the average absolute error ($L_1$), the average squared error ($L_2$) and the average minus log predictive density² ($L_3$), which measures the density of the actual true test output under the Gaussian predictive distribution; we use its negative log as a measure of loss.
4.1.1 Forecasting the Mackey-Glass time series

The Mackey-Glass chaotic time series constitutes a well-known benchmark and a challenge for the multiple-step ahead prediction task, due to its strong non-linearity [4]:

$$\frac{dz(t)}{dt} = -b\,z(t) + a\,\frac{z(t-\tau)}{1 + z(t-\tau)^{10}},$$

with $a = 0.2$, $b = 0.1$ and $\tau = 17$. The series is re-sampled and normalized. We choose $L = 16$ for the number of lagged outputs in the state vector, $x_t = [y_{t-1}, \dots, y_{t-16}]^T$, and the targets are corrupted by a white noise.
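For completeness, here is a sketch of how such a series can be generated (our own code, using simple Euler integration with an assumed step size; the constants default to the standard values $a = 0.2$, $b = 0.1$, $\tau = 17$):

```python
import numpy as np

def mackey_glass(n, a=0.2, b=0.1, tau=17.0, dt=0.1, z0=1.2):
    """Euler integration of dz/dt = -b z(t) + a z(t-tau)/(1 + z(t-tau)^10),
    returning n samples after discarding the constant initial history."""
    hist = int(round(tau / dt))
    z = np.full(n + hist, z0)
    for i in range(hist, n + hist - 1):
        zd = z[i - hist]                     # delayed value z(t - tau)
        z[i + 1] = z[i] + dt * (-b * z[i] + a * zd / (1.0 + zd ** 10))
    return z[hist:]
```

The generated series would then be re-sampled, normalized, and arranged into lagged state vectors $x_t = [y_{t-1}, \dots, y_{t-L}]^T$ before training.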
We train a GP model with a Gaussian kernel such as (2) on points taken at random from the series. Figure 1 shows the mean predictions with their uncertainties, given by the exact and approximate methods, and samples from the Monte-Carlo numerical approximation, from $k = 1$ to $k = 100$ steps ahead, for different starting points. Figure 2 shows the plot of the $100$-step ahead mean predictions (left) and their uncertainties (right), given by the exact and approximate methods, as well as the sample mean and sample variance obtained with the numerical solution.
These figures show the better performance of the exact method over the approximate one.
Also, they allow us to validate the Gaussian approximation, noticing that the error bars
encompass the samples from the true distribution. Table 1 provides a quantitative confirmation.
Table 1: Average (over the test points) absolute error ($L_1$), squared error ($L_2$) and minus log predictive density ($L_3$) of the $100$-step ahead predictions obtained using the exact method, the approximate one, and the sampling from the true distribution.

² To evaluate these losses in the case of Monte-Carlo sampling, we use the sample mean and sample variance.
[Figure 1 panels appear here: two examples, "From 1 to 100 steps ahead", showing the true data, the exact and approximate mean predictions with $\pm 2\sigma$ error bars, and the MC samples, for $k = 1$ to $k = 100$.]
Figure 1: Iterative method in action: simulation from $1$ to $100$ steps ahead for different starting points in the test series. Mean predictions with $\pm 2\sigma$ error bars given by the exact (dash) and approximate (dot) methods. Also plotted, samples obtained using the numerical approximation.
[Figure 2 panels appear here: "100-step ahead predicted means" (left) and "100-step ahead predicted variances" (right) for the true data and the exact, approximate and numerical solutions.]
Figure 2: $100$-step ahead mean predictions (left) and uncertainties (right) obtained using the exact method (dash), the approximate one (dot) and the sample mean and variance of the numerical solution (dash-dot).
4.1.2 Prediction of a pH process simulation

We now compare the iterative $k$-step ahead prediction results obtained when propagating the uncertainty (using the approximate method) and when using the output estimates only (the naive approach). For doing so, we use the pH neutralisation process benchmark presented in [3]. The training and test data consist of pH values (outputs of the process) and a control input signal ($u$). With a model of the form $y_t = f(x_t)$, where the state contains lagged outputs and control inputs, we train our GP on examples from the training set and consider a separate test set (all data have been normalized).

Figure 3 shows the $10$-step ahead predicted means and variances obtained when propagating the uncertainty and when using information on the past predicted means only. The computed losses differ most dramatically in the minus log predictive density, which is far larger for the naive method, whose error bars are unrealistically tight.
[Figure 3 panels appear here: predictions from 1 to 10 steps ahead, and "10-step ahead predicted means" and "10-step ahead predicted variances" for the true data, the approximate method ($\pm 2\sigma$) and the naive method ($\pm 2\sigma$).]
Figure 3: Predictions from $1$ to $10$ steps ahead (left). $10$-step ahead mean predictions with the corresponding variances, when propagating the uncertainty (dot) and when using the previous point estimates only (dash).
5 Conclusions
We have presented a novel approach which allows us to use knowledge of the variance on
inputs to Gaussian process models to achieve more realistic prediction variance in the case
of noisy inputs.
Iterating this approach allows us to use it as a method for efficient propagation of uncertainty in the multi-step ahead prediction task of non-linear time-series. In experiments on
simulated dynamic systems, comparing our Gaussian approximation to Monte Carlo simulations, we found that the propagation method is comparable to Monte Carlo simulations,
and that both approaches achieved more realistic error bars than a naive approach which
ignores the uncertainty on current state.
This method can help understanding the underlying dynamics of a system, as well as being
useful, for instance, in a model predictive control framework where knowledge of the accuracy of the model predictions over the whole prediction horizon is required (see [6] for a
model predictive control law based on Gaussian processes taking account of the prediction
uncertainty). Note that this method is also useful in its own right in the case of noisy model
inputs, assuming they have a Gaussian distribution.
Acknowledgements
Many thanks to Mike Titterington for his useful comments. The authors gratefully acknowledge the support of the Multi-Agent Control Research Training Network - EC TMR
grant HPRN-CT-1999-00107 and RM-S is grateful for EPSRC grant Modern statistical
approaches to off-equilibrium modelling for nonlinear system control GR/M76379/01.
References
[1] O'Hagan, A. (1978) Curve fitting and optimal design for prediction. Journal of the Royal Statistical Society B 40:1-42.
[2] Girard, A. & Rasmussen, C. E. & Murray-Smith, R. (2002) Gaussian Process Priors With Uncertain Inputs: Multiple-Step Ahead Prediction. Technical Report, TR-2002-119, Dept. of Computing
Science, University of Glasgow.
[3] Henson, M. A. & Seborg, D. E. (1994) Adaptive nonlinear control of a pH neutralisation process.
IEEE Trans Control System Technology 2:169-183.
[4] Mackey, M. C. & Glass, L. (1977) Oscillation and Chaos in Physiological Control Systems.
Science 197:287-289.
[5] MacKay, D. J. C. (1997) Gaussian Processes - A Replacement for Supervised Neural Networks?.
Lecture notes for a tutorial at NIPS 1997.
[6] Murray-Smith, R. & Sbarbaro-Hofer, D. (2002) Nonlinear adaptive control using non-parametric
Gaussian process prior models. 15th IFAC World Congress on Automatic Control, Barcelona
[7] Neal, R. M. (1995) Bayesian Learning for Neural Networks PhD thesis, Dept. of Computer
Science, University of Toronto.
[8] Quiñonero-Candela, J. & Girard, A. & Larsen, J. (2002) Propagation of Uncertainty in Bayesian Kernel Models – Application to Multiple-Step Ahead Forecasting. Submitted to ICASSP 2003.
[9] Quiñonero-Candela, J. & Girard, A. (2002) Prediction at an Uncertain Input for Gaussian Processes and Relevance Vector Machines – Application to Multiple-Step Ahead Time-Series Forecasting. Technical Report, IMM, Danish Technical University.
[10] Rasmussen, C. E. (1996) Evaluation of Gaussian Processes and other Methods for Non-Linear
Regression PhD thesis, Dept. of Computer Science, University of Toronto.
[11] Williams, C. K. I. & Rasmussen, C. E. (1996) Gaussian Processes for Regression Advances in
Neural Information Processing Systems 8 MIT Press.
[12] Williams, C. K. I. (2002) Gaussian Processes To appear in The handbook of Brain Theory and
Neural Networks, Second edition MIT Press.
| 2313 |@word briefly:1 seborg:1 simulation:4 covariance:11 minus:1 tr:1 moment:1 initial:1 series:13 past:1 current:3 comparing:2 numerical:7 realistic:2 additive:1 plot:1 mackey:3 stationary:1 smith:3 provides:2 toronto:2 successive:1 firstly:1 mathematical:1 direct:3 consists:1 fitting:1 manner:1 multi:3 brain:1 inspired:1 actual:1 underlying:2 linearity:1 titterington:1 unified:1 quantitative:1 rm:1 uk:3 control:10 unit:1 normally:1 grant:2 appear:1 hamilton:1 maximise:1 local:1 congress:1 aiming:1 might:1 chaotic:1 integrating:1 sider:1 suggest:1 get:1 accumulating:1 missing:1 williams:2 starting:2 glasgow:5 his:1 maynooth:1 qq:2 target:5 exact:13 carl:1 gps:3 element:1 updating:1 hagan:2 predicts:1 mike:1 role:1 epsrc:1 solved:1 roderick:1 dynamic:3 grateful:1 predictive:14 serve:1 basis:1 icassp:1 joint:2 train:2 london:2 monte:6 whose:1 lag:2 larger:2 gp:4 noisy:2 analytical:2 ucl:1 product:1 nonero:2 achieve:3 validate:1 help:1 depending:1 ac:3 propagating:3 strong:1 edward:2 predicted:9 involves:1 stochastic:1 opt:1 adjusted:1 around:1 normal:1 equilibrium:1 predict:4 currently:1 tainty:1 mit:2 gaussian:33 aim:1 publication:1 derived:2 modelling:6 likelihood:1 indicates:1 glass:3 inaccurate:1 interested:1 development:1 mackay:1 sampling:3 constitutes:1 report:2 richard:1 employ:1 modern:1 national:1 comprehensive:1 intended:1 replacement:1 evaluation:1 adjust:1 gla:2 wc1n:1 integral:2 taylor:1 desired:3 re:1 plotted:1 uncertain:3 instance:1 ar:1 gr:1 corrupted:1 thanks:1 density:2 fundamental:1 off:2 informatics:1 regressor:2 tmr:1 jqc:1 squared:2 thesis:2 choose:1 account:3 performed:1 view:1 candela:3 doing:3 complicated:1 ass:1 formed:1 accuracy:1 variance:21 bayesian:3 iterated:2 mc:1 carlo:6 submitted:1 danish:1 petersens:1 larsen:1 mi:1 con:1 propagated:1 sampled:1 popular:1 recall:1 knowledge:3 supervised:1 furthermore:2 correlation:2 agathe:2 joaquin:1 hand:2 until:1 uncer:1 nonlinear:3 propagation:4 building:1 normalized:2 true:9 analytically:1 
iteratively:1 neal:1 white:3 please:1 illustrative:1 chaos:1 novel:1 hofer:1 common:1 attached:1 conditioning:1 refer:2 approx:6 automatic:1 gratefully:1 dot:4 henson:1 own:1 recent:1 irrelevant:1 wellknown:1 certain:1 life:2 seen:1 additional:1 period:1 signal:1 relates:1 multiple:7 encompass:1 smooth:1 technical:4 ifac:1 calculation:2 prediction:41 regression:3 expectation:2 kernel:5 sometimes:1 achieved:2 unrealistically:2 hprn:1 comment:1 induced:2 tend:1 call:1 kongens:1 intermediate:2 enough:1 rod:1 motivated:1 expression:2 forecasting:6 nnn:3 action:1 useful:3 iterating:1 detailed:1 ph:4 succesive:1 tutorial:1 estimated:1 discrete:1 rewriting:1 noticing:1 uncertainty:18 oscillation:1 qui:2 comparable:2 ct:1 dash:4 ahead:40 nearby:1 performing:1 bdc:2 department:2 combination:1 making:2 plads:1 taken:1 lyngby:1 computationally:1 equation:1 previously:1 montecarlo:1 serf:1 alternative:2 bd5:1 giving:1 murray:3 society:1 objective:1 added:1 parametric:3 diagonal:2 ireland:1 distance:2 simulated:1 quinonero:1 evaluate:1 reason:1 denmark:2 assuming:1 length:1 negative:1 lagged:3 design:1 observation:2 benchmark:2 finite:1 acknowledge:1 dc:2 community:1 introduced:1 required:1 specified:1 barcelona:1 nu:2 nip:1 trans:1 suggested:1 bar:4 challenge:1 royal:1 demanding:1 predicting:2 historically:1 technology:1 dtu:1 started:1 naive:6 review:1 prior:6 acknowledgement:1 understanding:1 law:2 loss:3 lecture:1 agent:1 uncorrelated:1 accounted:1 rasmussen:4 allow:2 institute:1 taking:2 absolute:2 distributed:1 curve:1 dimension:2 calculated:1 world:1 computes:1 ignores:2 author:1 made:2 adaptive:2 ec:1 approximate:9 preferred:1 imm:2 handbook:1 continuous:1 iterative:8 why:1 table:2 confirmation:1 expansion:1 main:1 whole:1 noise:4 hyperparameters:1 edition:1 repeated:3 girard:4 gatsby:2 dk:2 physiological:1 intractable:1 consist:1 phd:2 horizon:8 forecast:2 led:1 simply:1 corresponds:1 conditional:2 identity:1 g12:2 conservative:1 called:1 formally:3 college:1 support:2 
relevance:2 incorporate:2 dept:3 |
Monaural Speech Separation
Guoning Hu
Biophysics Program
The Ohio State University
Columbus, OH 43210
[email protected]
DeLiang Wang
Department of Computer and Information
Science & Center of Cognitive Science
The Ohio State University, Columbus, OH 43210
[email protected]
Abstract
Monaural speech separation has been studied in previous systems that
incorporate auditory scene analysis principles. A major problem for
these systems is their inability to deal with speech in the high-frequency range. Psychoacoustic evidence suggests that different
perceptual mechanisms are involved in handling resolved and
unresolved harmonics. Motivated by this, we propose a model for
monaural separation that deals with low-frequency and high-frequency signals differently. For resolved harmonics, our model
generates segments based on temporal continuity and cross-channel
correlation, and groups them according to periodicity. For unresolved
harmonics, the model generates segments based on amplitude
modulation (AM) in addition to temporal continuity and groups them
according to AM repetition rates derived from sinusoidal modeling.
Underlying the separation process is a pitch contour obtained
according to psychoacoustic constraints. Our model is systematically
evaluated, and it yields substantially better performance than previous
systems, especially in the high-frequency range.
1 Introduction
In a natural environment, speech usually occurs simultaneously with acoustic
interference. An effective system for attenuating acoustic interference would greatly
facilitate many applications, including automatic speech recognition (ASR) and
speaker identification. Blind source separation using independent component analysis
[10] or sensor arrays for spatial filtering require multiple sensors. In many situations,
such as telecommunication and audio retrieval, a monaural (one microphone) solution
is required, in which intrinsic properties of speech or interference must be considered.
Various algorithms have been proposed for monaural speech enhancement [14]. These
methods assume certain properties of interference and have difficulty in dealing with
general acoustic interference. Monaural separation has also been studied using phase-based decomposition [3] and statistical learning [17], but with only limited evaluation.
While speech enhancement remains a challenge, the auditory system shows a
remarkable capacity for monaural speech separation. According to Bregman [1], the
auditory system separates the acoustic signal into streams, corresponding to different
sources, based on auditory scene analysis (ASA) principles. Research in ASA has
inspired considerable work to build computational auditory scene analysis (CASA)
systems for sound separation [19] [4] [7] [18]. Such systems generally approach speech
separation in two main stages: segmentation (analysis) and grouping (synthesis). In
segmentation, the acoustic input is decomposed into sensory segments, each of which
is likely to originate from a single source. In grouping, those segments that likely come
from the same source are grouped together, based mostly on periodicity. In a recent
CASA model by Wang and Brown [18], segments are formed on the basis of similarity
between adjacent filter responses (cross-channel correlation) and temporal continuity,
while grouping among segments is performed according to the global pitch extracted
within each time frame. In most situations, the model is able to remove intrusions and
recover low-frequency (below 1 kHz) energy of target speech. However, this model
cannot handle high-frequency (above 1 kHz) signals well, and it loses much of target
speech in the high-frequency range. In fact, the inability to deal with speech in the
high-frequency range is a common problem for CASA systems.
We study monaural speech separation with particular emphasis on the high-frequency
problem in CASA. For voiced speech, we note that the auditory system can resolve the
first few harmonics in the low-frequency range [16]. It has been suggested that
different perceptual mechanisms are used to handle resolved and unresolved harmonics
[2]. Consequently, our model employs different methods to segregate resolved and
unresolved harmonics of target speech. More specifically, our model generates
segments for resolved harmonics based on temporal continuity and cross-channel
correlation, and these segments are grouped according to common periodicity. For
unresolved harmonics, it is well known that the corresponding filter responses are
strongly amplitude-modulated and the response envelopes fluctuate at the fundamental
frequency (F0) of target speech [8]. Therefore, our model generates segments for
unresolved harmonics based on common AM in addition to temporal continuity. The
segments are grouped according to AM repetition rates. We calculate AM repetition
rates via sinusoidal modeling, which is guided by target pitch estimated according to
characteristics of natural speech.
Section 2 describes the overall system. In section 3, systematic results and a
comparison with the Wang-Brown system are given. Section 4 concludes the paper.
2 Model description
Our model is a multistage system, as shown in Fig. 1. Description for each stage is
given below.
2.1 Initial processing
First, an acoustic input is analyzed by a standard cochlear filtering model with a bank
of 128 gammatone filters [15] and subsequent hair cell transduction [12]. This
peripheral processing is done in time frames of 20 ms long with 10 ms overlap between
consecutive frames. As a result, the input signal is decomposed into a group of time-frequency (T-F) units. Each T-F unit contains the response from a certain channel at a
certain frame. The envelope of the response is obtained by a lowpass filter with
passband [0, 1 kHz] and a Kaiser window of 18.25 ms.

Figure 1. Schematic diagram of the proposed multistage system. The mixture passes through peripheral and mid-level processing, initial segregation, pitch tracking, unit labeling, final segregation, and resynthesis to produce the segregated speech.
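The framing step described above can be sketched in a few lines of numpy. This is a minimal illustration under our own assumptions: the gammatone filterbank and hair-cell transduction are taken as already applied, and the function name is ours, not the authors'.

```python
import numpy as np

def tf_units(responses, fs, frame_ms=20.0, hop_ms=10.0):
    """Cut filterbank outputs into time-frequency (T-F) units.

    responses : (n_channels, n_samples) array of filter/hair-cell outputs.
    Returns an array of shape (n_channels, n_frames, frame_len), using
    20 ms frames that start every 10 ms (i.e. 10 ms overlap).
    """
    frame_len = int(fs * frame_ms / 1000.0)   # 20 ms window in samples
    hop = int(fs * hop_ms / 1000.0)           # 10 ms step between frames
    n_ch, n = responses.shape
    n_frames = 1 + (n - frame_len) // hop
    units = np.empty((n_ch, n_frames, frame_len))
    for j in range(n_frames):
        units[:, j, :] = responses[:, j * hop : j * hop + frame_len]
    return units
```

Each `units[i, j]` is then the T-F unit for channel i at frame j, from which the autocorrelation and envelope features of the following sections are computed.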
Mid-level processing is performed by computing a correlogram (autocorrelation
function) of the individual responses and their envelopes. These autocorrelation
functions reveal response periodicities as well as AM repetition rates. The global pitch
is obtained from the summary correlogram. For clean speech, the autocorrelations
generally have peaks consistent with the pitch and their summation shows a dominant
peak corresponding to the pitch period. With acoustic interference, a global pitch may
not be an accurate description of the target pitch, but it is reasonably close.
Because a harmonic extends for a period of time and its frequency changes smoothly,
target speech likely activates contiguous T-F units. This is an instance of the temporal
continuity principle. In addition, since the passbands of adjacent channels overlap, a
resolved harmonic usually activates adjacent channels, which leads to high crosschannel correlations. Hence, in initial segregation, the model first forms segments by
merging T-F units based on temporal continuity and cross-channel correlation. Then
the segments are grouped into a foreground stream and a background stream by
comparing the periodicities of unit responses with global pitch. A similar process is
described in [18].
Fig. 2(a) and Fig. 2(b) illustrate the segments and the foreground stream. The input is a
mixture of a voiced utterance and a cocktail party noise (see Sect. 3). Since the
intrusion is not strongly structured, most segments correspond to target speech. In
addition, most segments are in the low-frequency range. The initial foreground stream
successfully groups most of the major segments.
2.2 Pitch tracking
In the presence of acoustic interference, the global pitch estimated in mid-level
processing is generally not an accurate description of target pitch. To obtain accurate
pitch information, target pitch is first estimated from the foreground stream. At each
frame, the autocorrelation functions of T-F units in the foreground stream are
summated. The pitch period is the lag corresponding to the maximum of the summation
in the plausible pitch range: [2 ms, 12.5 ms]. Then we employ the following two
constraints to check its reliability. First, an accurate pitch period at a frame should be
consistent with the periodicity of the T-F units at this frame in the foreground stream.
At frame j, let τ(j) represent the estimated pitch period, and A(i, j, τ) the autocorrelation function of u_ij, the unit in channel i. u_ij agrees with τ(j) if

A(i, j, τ(j)) / A(i, j, τ_m) > θ_d   (1)
Figure 2. Results of initial segregation for a speech and cocktail-party mixture. (a)
Segments formed. Each segment corresponds to a contiguous black region. (b)
Foreground stream.
Here, θ_d = 0.95, the same threshold used in [18], and τ_m is the lag corresponding to the maximum of A(i, j, τ) within [2 ms, 12.5 ms]. τ(j) is considered reliable if more than
half of the units in the foreground stream at frame j agree with it. Second, pitch periods
in natural speech vary smoothly in time [11]. We stipulate the difference between
reliable pitch periods at consecutive frames be smaller than 20% of the pitch period,
justified from pitch statistics. Unreliable pitch periods are replaced by new values
extrapolated from reliable pitch points using temporal continuity. As an example,
suppose at two consecutive frames j and j+1 that τ(j) is reliable while τ(j+1) is not. All the channels corresponding to the T-F units agreeing with τ(j) are selected. τ(j+1) is
then obtained from the summation of the autocorrelations for the units at frame j+1 in
those selected channels. Then the re-estimated pitch is further verified with the second
constraint. For more details, see [9].
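The two reliability constraints can be sketched as follows. This is an illustration in our own notation, not the authors' code; the agreement threshold θ_d = 0.95 and the 20% jump limit are taken from the text, while the lag grid and function names are our assumptions.

```python
import numpy as np

THETA_D = 0.95          # agreement threshold from Eq. (1)
MAX_JUMP = 0.20         # pitch may change by at most 20% between frames

def unit_agrees(A_j, tau, lags, theta_d=THETA_D):
    """Eq. (1) for one unit: A(i, j, tau(j)) / A(i, j, tau_m) > theta_d.

    A_j  : autocorrelation values of the unit, sampled at `lags`.
    tau  : estimated pitch period (same time units as `lags`).
    """
    a_tau = np.interp(tau, lags, A_j)   # A at the estimated pitch lag
    a_max = A_j.max()                   # A at tau_m, the best lag in range
    return a_tau / a_max > theta_d

def pitch_is_reliable(A_frame, tau, lags):
    """tau(j) is reliable if more than half of the foreground units agree."""
    votes = [unit_agrees(A_j, tau, lags) for A_j in A_frame]
    return np.mean(votes) > 0.5

def smooth_enough(tau_prev, tau_next, max_jump=MAX_JUMP):
    """Temporal continuity: consecutive pitch periods differ by < 20%."""
    return abs(tau_next - tau_prev) < max_jump * tau_prev
```

In the full system an unreliable τ(j) would then be re-estimated from the channels that agreed with the neighboring reliable pitch point, as described above.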
Fig. 3 illustrates the estimated pitch periods from the speech and cocktail-party
mixture, which match the pitch periods obtained from clean speech very well.
2.3 Unit labeling
With estimated pitch periods, (1) provides a criterion to label T-F units according to
whether target speech dominates the unit responses or not. This criterion compares an
estimated pitch period with the periodicity of the unit response. It is referred to as the
periodicity criterion. It works well for resolved harmonics, and is used to label the units
of the segments generated in initial segregation.
However, the periodicity criterion is not suitable for units responding to multiple
harmonics because unit responses are amplitude-modulated. As shown in Fig. 4, for a
filter response that is strongly amplitude-modulated (Fig. 4(a)), the target pitch
corresponds to a local maximum, indicated by the vertical line, in the autocorrelation
instead of the global maximum (Fig. 4(b)). Observe that for a filter responding to
multiple harmonics of a harmonic source, the response envelope fluctuates at the rate
of F0 [8]. Hence, we propose a new criterion for labeling the T-F units corresponding
to unresolved harmonics by comparing AM repetition rates with estimated pitch. This
criterion is referred to as the AM criterion.
To obtain an AM repetition rate, the entire response of a gammatone filter is half-wave
rectified and then band-pass filtered to remove the DC component and other possible
Figure 3. Estimated target pitch for the speech and cocktail-party mixture, marked by 'x'. The solid line indicates the pitch contour obtained from clean speech.

Figure 4. AM effects. (a) Response of a filter with center frequency 2.6 kHz. (b) Corresponding autocorrelation. The vertical line marks the position corresponding to the pitch period of target speech.
harmonics except for the F0 component. The rectified and filtered signal is then
normalized by its envelope to remove the intensity fluctuations of the original signal,
where the envelope is obtained via the Hilbert Transform. Because the pitch of natural
speech does not change noticeably within a single frame, we model the corresponding
normalized signal within a T-F unit by a single sinusoid to obtain the AM repetition
rate. Specifically,
f_ij, φ_ij = arg min_{f, φ} Σ_{k=1}^{M} [ r̂(i, jT + k) − sin(2πkf/f_S + φ) ]²,  for f ∈ [80 Hz, 500 Hz],   (2)

where a square error measure is used. r̂(i, t) is the normalized filter response, f_S is the sampling frequency, M spans a frame, and T = 10 ms is the progressing period from one frame to the next. In the above equation, f_ij gives the AM repetition rate for unit u_ij.
Note that in the discrete case, a single sinusoid with a sufficiently high frequency can
always match these samples perfectly. However, we are interested in finding a
frequency within the plausible pitch range. Hence, the solution does not reduce to a
degenerate case. With appropriately chosen initial values, this optimization problem
can be solved effectively using iterative gradient descent (see [9]).
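A minimal sketch of the fit in Eq. (2): the paper solves it by iterative gradient descent with chosen initial values, whereas for illustration this version does an exhaustive search over a frequency and phase grid (the grid resolutions are our assumption, not the authors').

```python
import numpy as np

def am_rate(r_norm, fs, f_grid=None, phi_grid=None):
    """Estimate the AM repetition rate of a normalized unit response.

    Fits a single unit-amplitude sinusoid sin(2*pi*k*f/fs + phi) to the
    samples r_norm by exhaustive search over (f, phi) within the
    plausible pitch range; the phase grid absorbs the k-offset.
    Returns (f_hat, squared_error).
    """
    if f_grid is None:
        f_grid = np.arange(80.0, 500.0, 2.0)   # plausible pitch range, Hz
    if phi_grid is None:
        phi_grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    k = np.arange(len(r_norm))
    best = (None, np.inf)
    for f in f_grid:
        theta = 2 * np.pi * k * f / fs
        for phi in phi_grid:
            err = np.sum((r_norm - np.sin(theta + phi)) ** 2)
            if err < best[1]:
                best = (f, err)
    return best
```

On a clean amplitude-modulated unit the recovered frequency should land on the modulation rate, which the AM criterion then compares against the estimated pitch.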
The AM criterion is used to label T-F units that do not belong to any segments
generated in initial segregation; such segments, as discussed earlier, tend to miss
unresolved harmonics. Specifically, unit uij is labeled as target speech if the final
square error is less than half of the total energy of the corresponding signal and the AM
repetition rate is close to the estimated target pitch:
| f_ij · τ(j) − 1 | < θ_f.   (3)
Psychoacoustic evidence suggests that to separate sounds with overlapping spectra
requires 6-12% difference in F0 [6]. Accordingly, we choose θ_f to be 0.12.
2.4 Final segregation and resynthesis
For adjacent channels responding to unresolved harmonics, although their responses
may be quite different, they exhibit similar AM patterns and their response envelopes
are highly correlated. Therefore, for T-F units labeled as target speech, segments are
generated based on cross-channel envelope correlation in addition to temporal
continuity.
The spectra of target speech and intrusion often overlap and, as a result, some segments
generated in initial segregation contain both units where target speech dominates and
those where intrusion dominates. Given unit labels generated in the last stage, we
further divide the segments in the foreground stream, SF, so that all the units in a
segment have the same label. Then the streams are adjusted as follows. First, since
segments for speech usually are at least 50 ms long, segments with the target label are
retained in SF only if they are no shorter than 50 ms. Second, segments with the
intrusion label are added to the background stream, SB, if they are no shorter than 50
ms. The remaining segments are removed from SF, becoming undecided.
Finally, other units are grouped into the two streams by temporal and spectral
continuity. First, SB expands iteratively to include undecided segments in its
neighborhood. Then, all the remaining undecided segments are added back to SF. For
individual units that do not belong to either stream, they are grouped into SF iteratively
if the units are labeled as target speech as well as in the neighborhood of SF. The
resulting SF is the final segregated stream of target speech.
Fig. 5(a) shows the new segments generated in this process for the speech and cocktail-party mixture. Fig. 5(b) illustrates the segregated stream from the same mixture. Fig.
5(c) shows all the units where target speech is stronger than intrusion. The foreground
stream generated by our algorithm contains most of the units where target speech is
stronger. In addition, only a small number of units where intrusion is stronger are
incorrectly grouped into it.
A speech waveform is resynthesized from the final foreground stream. Here, the
foreground stream works as a binary mask. It is used to retain the acoustic energy from
the mixture that corresponds to 1's and reject the mixture energy corresponding to 0's.
For more details, see [19].
3 Evaluation and comparison
Our model is evaluated with a corpus of 100 mixtures composed of 10 voiced
utterances mixed with 10 intrusions collected by Cooke [4]. The intrusions have a
considerable variety. Specifically, they are: N0 - 1 kHz pure tone, N1 - white noise, N2
- noise bursts, N3 - 'cocktail party' noise, N4 - rock music, N5 - siren, N6 - trill
telephone, N7 - female speech, N8 - male speech, and N9 - female speech.
Given our decomposition of an input signal into T-F units, we suggest the use of an
ideal binary mask as the ground truth for target speech. The ideal binary mask is
constructed as follows: a T-F unit is assigned one if the target energy in the
corresponding unit is greater than the intrusion energy and zero otherwise.
Theoretically speaking, an ideal binary mask gives a performance ceiling for all binary
masks. Figure 5(c) illustrates the ideal mask for the speech and cocktail-party mixture.
Ideal masks also suit well the situations where more than one target need to be
segregated or the target changes dynamically. The use of ideal masks is supported by
the auditory masking phenomenon: within a critical band, a weaker signal is masked by
a stronger one [13]. In addition, an ideal mask gives excellent resynthesis for a variety
of sounds and is similar to a prior mask used in a recent ASR study that yields
excellent recognition performance [5].
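Constructing the ideal binary mask from per-unit energies of the premixed target and intrusion is straightforward; this one-line numpy sketch assumes the two energy arrays have already been computed from the separate signals before mixing.

```python
import numpy as np

def ideal_binary_mask(target_energy, noise_energy):
    """Ground-truth mask: 1 where target energy exceeds intrusion energy.

    Both inputs are (n_channels, n_frames) arrays of per-T-F-unit
    energies, computed from the premixed target and intrusion signals.
    """
    return (target_energy > noise_energy).astype(int)
```

A unit with equal target and intrusion energy gets a 0, matching the "greater than" definition above.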
The speech waveform resynthesized from the final foreground stream is used for
evaluation, and it is denoted by S(t). The speech waveform resynthesized from the ideal
binary mask is denoted by I(t). Furthermore, let e1(t) denote the signal present in I(t)
but missing from S(t), and e2(t) the signal present in S(t) but missing from I(t). Then,
the relative energy loss, REL, and the relative noise residue, RNR, are calculated as
follows:
REL = Σ_t e1²(t) / Σ_t I²(t),   (4a)

RNR = Σ_t e2²(t) / Σ_t S²(t).   (4b)
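The REL and RNR measures can be approximated at the T-F unit level as follows. Note the simplifying assumption: the paper derives e1 and e2 from resynthesized waveforms, which this sketch replaces with sums of mixture energy over the mismatched units.

```python
import numpy as np

def rel_rnr(ideal_mask, est_mask, unit_energy):
    """Unit-level approximation of Eq. (4).

    ideal_mask, est_mask : binary (n_channels, n_frames) masks.
    unit_energy          : mixture energy in each T-F unit.
    """
    missed = (ideal_mask == 1) & (est_mask == 0)   # in I, absent from S
    extra = (ideal_mask == 0) & (est_mask == 1)    # in S, absent from I
    I_energy = np.sum(unit_energy[ideal_mask == 1])
    S_energy = np.sum(unit_energy[est_mask == 1])
    rel = np.sum(unit_energy[missed]) / I_energy   # relative energy loss
    rnr = np.sum(unit_energy[extra]) / S_energy    # relative noise residue
    return rel, rnr
```

A perfect segregation (est_mask equal to the ideal mask) gives REL = RNR = 0.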
Figure 5. Results of final segregation for the speech and cocktail-party mixture. (a)
New segments formed in the final segregation. (b) Final foreground stream. (c)
Units where target speech is stronger than the intrusion.
Table 1: REL and RNR

             Proposed model        Wang-Brown model
Intrusion    REL (%)   RNR (%)     REL (%)   RNR (%)
N0             2.12      0.02        6.99      0
N1             4.66      3.55       28.96      1.61
N2             1.38      1.30        5.77      0.71
N3             3.83      2.72       21.92      1.92
N4             4.00      2.27       10.22      1.41
N5             2.83      0.10        7.47      0
N6             1.61      0.30        5.99      0.48
N7             3.21      2.18        8.61      4.23
N8             1.82      1.48        7.27      0.48
N9             8.57     19.33       15.81     33.03
Average        3.40      3.32       11.91      4.39
Figure 6. SNR results for segregated
speech. White bars show the results
from the proposed model, gray bars
those from the Wang-Brown system,
and black bars those of the mixtures.
The results from our model are shown in Table 1. Each value represents the average of
one intrusion with 10 voiced utterances. A further average across all intrusions is also
shown in the table. On average, our system retains 96.60% of target speech energy, and
the relative residual noise is kept at 3.32%. As a comparison, Table 1 also shows the
results from the Wang-Brown model [18], whose performance is representative of
current CASA systems. As shown in the table, our model reduces REL significantly. In
addition, REL and RNR are balanced in our system.
Finally, to compare waveforms directly we measure a form of signal-to-noise ratio
(SNR) in decibels using the resynthesized signal from the ideal binary mask as ground
truth:
SNR = 10 log10 [ Σ_t I²(t) / Σ_t (I(t) − S(t))² ].   (5)
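Eq. (5) translates directly into code, with I and S being the two resynthesized waveforms:

```python
import numpy as np

def snr_db(I, S):
    """SNR in dB, using the ideal-mask resynthesis I as ground truth."""
    return 10.0 * np.log10(np.sum(I ** 2) / np.sum((I - S) ** 2))
```

For example, a segregated signal that is a uniformly scaled copy of the ground truth at 90% amplitude scores 20 dB.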
The SNR for each intrusion averaged across 10 target utterances is shown in Fig. 6,
together with the results from the Wang-Brown system and the SNR of the original
mixtures. Our model achieves an average SNR gain of around 12 dB and 5 dB
improvement over the Wang-Brown model.
4 Discussion
The main feature of our model lies in using different mechanisms to deal with resolved
and unresolved harmonics. As a result, our model is able to recover target speech and
reduce noise interference in the high-frequency range where harmonics of target speech
are unresolved.
The proposed system considers the pitch contour of the target source only. However, it
is possible to track the pitch contour of the intrusion if it has a harmonic structure. With
two pitch contours, one could label a T-F unit more accurately by comparing whether
its periodicity is more consistent with one or the other. Such a method is expected to
lead to better performance for the two-speaker situation, e.g. N7 through N9. As
indicated in Fig. 6, the performance gain of our system for such intrusions is relatively
limited. Our model is limited to separation of voiced speech. In our view, unvoiced
speech poses the biggest challenge for monaural speech separation. Other grouping
cues, such as onset, offset, and timbre, have been demonstrated to be effective for
human ASA [1], and may play a role in grouping unvoiced speech. In addition, one
should consider the acoustic and phonetic characteristics of individual unvoiced
consonants. We plan to investigate these issues in future study.
Acknowledgments
We thank G. J. Brown and M. Wu for helpful comments. Preliminary versions of this
work were presented in 2001 IEEE WASPAA and 2002 IEEE ICASSP. This research
was supported in part by an NSF grant (IIS-0081058) and an AFOSR grant (F49620-01-1-0027).
References
[1] A. S. Bregman, Auditory scene analysis, Cambridge MA: MIT Press, 1990.
[2] R. P. Carlyon and T. M. Shackleton, "Comparing the fundamental frequencies of resolved and unresolved harmonics: evidence for two pitch mechanisms?," J. Acoust. Soc. Am., Vol. 95, pp. 3541-3554, 1994.
[3] G. Cauwenberghs, "Monaural separation of independent acoustical components," In Proc. of IEEE Symp. Circuit & Systems, 1999.
[4] M. Cooke, Modeling auditory processing and organization, Cambridge U.K.: Cambridge University Press, 1993.
[5] M. Cooke, P. Green, L. Josifovski, and A. Vizinho, "Robust automatic speech recognition with missing and unreliable acoustic data," Speech Comm., Vol. 34, pp. 267-285, 2001.
[6] C. J. Darwin and R. P. Carlyon, "Auditory grouping," in Hearing, B. C. J. Moore, Ed., San Diego CA: Academic Press, 1995.
[7] D. P. W. Ellis, Prediction-driven computational auditory scene analysis, Ph.D. Dissertation, MIT Department of Electrical Engineering and Computer Science, 1996.
[8] H. Helmholtz, On the sensations of tone, Braunschweig: Vieweg & Son, 1863. (A. J. Ellis, English Trans., Dover, 1954.)
[9] G. Hu and D. L. Wang, "Monaural speech segregation based on pitch tracking and amplitude modulation," Technical Report TR6, Ohio State University Department of Computer and Information Science, 2002. (available at www.cis.ohio-state.edu/~hu)
[10] A. Hyvärinen, J. Karhunen, and E. Oja, Independent component analysis, New York: Wiley, 2001.
[11] W. J. M. Levelt, Speaking: From intention to articulation, Cambridge MA: MIT Press, 1989.
[12] R. Meddis, "Simulation of auditory-neural transduction: further studies," J. Acoust. Soc. Am., Vol. 83, pp. 1056-1063, 1988.
[13] B. C. J. Moore, An introduction to the psychology of hearing, 4th Ed., San Diego CA: Academic Press, 1997.
[14] D. O'Shaughnessy, Speech communications: human and machine, 2nd Ed., New York: IEEE Press, 2000.
[15] R. D. Patterson, I. Nimmo-Smith, J. Holdsworth, and P. Rice, "An efficient auditory filterbank based on the gammatone function," APU Report 2341, MRC, Applied Psychology Unit, Cambridge U.K., 1988.
[16] R. Plomp and A. M. Mimpen, "The ear as a frequency analyzer II," J. Acoust. Soc. Am., Vol. 43, pp. 764-767, 1968.
[17] S. Roweis, "One microphone source separation," In Advances in Neural Information Processing Systems 13 (NIPS'00), 2001.
[18] D. L. Wang and G. J. Brown, "Separation of speech from interfering sounds based on oscillatory correlation," IEEE Trans. Neural Networks, Vol. 10, pp. 684-697, 1999.
[19] M. Weintraub, A theory and computational model of auditory monaural sound separation, Ph.D. Dissertation, Stanford University Department of Electrical Engineering, 1985.
Bayesian Image Super-Resolution
Michael E. Tipping and Christopher M. Bishop
Microsoft Research
Cambridge, CB3 0FB, U.K.
{mtipping, cmbishop}@microsoft.com
http://research.microsoft.com/~{mtipping, cmbishop}
Abstract
The extraction of a single high-quality image from a set of lowresolution images is an important problem which arises in fields
such as remote sensing, surveillance, medical imaging and the extraction of still images from video. Typical approaches are based
on the use of cross-correlation to register the images followed by
the inversion of the transformation from the unknown high resolution image to the observed low resolution images, using regularization to resolve the ill-posed nature of the inversion process. In
this paper we develop a Bayesian treatment of the super-resolution
problem in which the likelihood function for the image registration parameters is based on a marginalization over the unknown
high-resolution image. This approach allows us to estimate the
unknown point spread function, and is rendered tractable through
the introduction of a Gaussian process prior over images. Results
indicate a significant improvement over techniques based on MAP
(maximum a-posteriori) point optimization of the high resolution
image and associated registration parameters.
1 Introduction
The task in super-resolution is to combine a set of low resolution images of the
same scene in order to obtain a single image of higher resolution. Provided the
individual low resolution images have sub-pixel displacements relative to each other,
it is possible to extract high frequency details of the scene well beyond the Nyquist
limit of the individual source images.
Ideally the low resolution images would differ only through small (sub-pixel) translations, and would be otherwise identical. In practice, the transformations may be
more substantial and involve rotations or more complex geometric distortions. In
addition the scene itself may change, for instance if the source images are successive frames in a video sequence. Here we focus attention on static scenes in which
the transformations relating the source images correspond to translations and rotations, such as can be obtained by taking several images in succession using a hand
held digital camera. Our approach is readily extended to more general projective
transformations if desired. Larger changes in camera position or orientation can be
handled using techniques of robust feature matching, constrained by the epipolar
geometry, but such sophistication is unnecessary in the present context.
Most previous approaches, for example [1, 2, 3], perform an initial registration of
the low resolution images with respect to each other, and then keep this registration
fixed. They then formulate probabilistic models of the image generation process,
and use maximum likelihood to determine the pixel intensities in the high resolution
image. A more convincing approach [4] is to determine simultaneously both the low
resolution image registration parameters and the pixel values of the high resolution
image, again through maximum likelihood.
An obvious difficulty of these techniques is that if the high resolution image has too
few pixels then not all of the available high frequency information is extracted from
the observed images, whereas if it has too many pixels the maximum likelihood
solution becomes ill conditioned. This is typically resolved by the introduction of
penalty terms to regularize the maximum likelihood solution, where the regularization coefficients may be set by cross-validation. The regularization terms are
often motivated in terms of a prior distribution over the high resolution image,
in which case the solution can be interpreted as a MAP (maximum a-posteriori)
optimization.
Baker and Kanade [5] have tried to improve the performance of super-resolution
algorithms by developing domain-specific image priors, applicable to faces or text
for example, which are learned from data. In this case the algorithm is effectively
hallucinating perceptually plausible high frequency features. Here we focus on general purpose algorithms applicable to any natural image, for which the prior encodes
only high level information such as the correlation of nearby pixels.
The key development in this paper, which distinguishes it from previous approaches,
is the use of Bayesian, rather than simply MAP, techniques by marginalizing over
the unknown high resolution image in order to determine the low resolution image
registration parameters. Our formulation also allows the choice of continuous values
for the up-sampling process, as well the shift and rotation parameters governing the
image registration.
The generative process by which the high resolution image is smoothed to obtain a
low resolution image is described by a point spread function (PSF). It has often been
assumed that the point spread function is known in advance, which is unrealistic.
Some authors [3] have estimated the PSF in advance using only the low resolution
image data, and then kept this estimate fixed while extracting the high resolution
image. A key advantage of our Bayesian marginalization is that it allows us to
determine the point spread function alongside both the registration parameters and
the high resolution image in a single, coherent inference framework.
As we show later, if we attempt to determine the PSF as well as the registration
parameters and the high resolution image by joint optimization, we obtain highly biased (over-fitted) results. By marginalizing over the unknown high resolution image
we are able to determine the PSF and the registration parameters accurately, and
thereby reconstruct the high resolution image with subjectively very good quality.
2
Bayesian Super-resolution
Suppose we are given K low-resolution intensity images (the extension to 3-colour
images is straightforward). We shall find it convenient notationally to represent
the images as vectors y(k) of length M , where k = 1, ... , K, obtained by raster
scanning the pixels of the images. Each image is shifted and rotated relative to a
reference image which we shall arbitrarily take to be y(1). The shifts are described
by 2-dimensional vectors s_k, and the rotations are described by angles θ_k.
The goal is to infer the underlying scene from which the low resolution images are
generated. We represent this scene by a single high-resolution image, which we
again denote by a raster-scan vector x whose length is N ≥ M.
Our approach is based on a generative model for the observed low resolution images,
comprising a prior over the high resolution image together with an observation
model describing the process by which a low resolution image is obtained from the
high resolution one.
It should be emphasized that the real scene which we are trying to infer has effectively an infinite resolution, and that its description as a pixellated image is a
computational artefact. In particular if we take the number N of pixels in this image
to be large the inference algorithm should remain well behaved. This is not the case
with maximum likelihood approaches in which the value of N must be limited to
avoid ill-conditioning. In our approach, if N is large the correlation of neighbouring
pixels is determined primarily by the prior, and the value of N is limited only by
the computational cost of working with large numbers of high resolution pixels.
We represent the prior over the high resolution image by a Gaussian process
p(x) = N(x | 0, Z_x)                                                 (1)
where the covariance matrix Z_x is chosen to be of the form
Z_x(i, j) = A exp{ -||v_i - v_j||² / r² }.                           (2)
Here Vi denotes the spatial position in the 2-dimensional image space of pixel i, the
coefficient A measures the 'strength' of the prior, and r defines the correlation length
scale. Since we take Zx to be a fixed matrix, it is straightforward to use a different
functional form for Zx if desired. It should be noted that in our image representation
the pixel intensity values lie in the range (-0.5,0.5), and so in principle a Gaussian
process prior is inappropriate 1 . In practice we have found that this causes little
difficulty, and in Section 4 we discuss how a more appropriate distribution could be
used.
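The prior covariance in (2) is simple to materialise. The sketch below (our illustration in NumPy, not the authors' code; the row-major pixel layout and the default values of A and r are assumptions borrowed from the experimental settings later in the paper) builds Z_x for a small pixel grid:

```python
import numpy as np

def gp_prior_covariance(height, width, A=0.04, r=1.0):
    """Z_x(i, j) = A * exp(-||v_i - v_j||^2 / r^2) over a (height x width) pixel grid."""
    ys, xs = np.mgrid[0:height, 0:width]
    v = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)  # pixel positions v_i
    sq = ((v[:, None, :] - v[None, :, :]) ** 2).sum(axis=-1)     # ||v_i - v_j||^2
    return A * np.exp(-sq / r ** 2)

Zx = gp_prior_covariance(8, 8)   # N = 64 high-resolution pixels
```

Since the squared-exponential form is positive semi-definite, the resulting matrix is a valid Gaussian-process covariance for any grid size.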
The low resolution images are assumed to be generated from the high resolution
image by first applying a shift and a rotation, then convolving with some point
spread function, and finally downsampling to the lower resolution. This is expressed
through the transformation equation
y^(k) = W^(k) x + ε^(k)                                              (3)
where ε^(k) is a vector of independent Gaussian random variables ε_i ~ N(0, β^{-1}),
with zero mean and precision (inverse variance) β, representing noise terms intended to model the camera noise as well as to capture any discrepancy between
our generative model and the observed data.
The transformation matrix W(k) in (3) is given by a point spread function which
captures the down-sampling process and which we again take to have a 'Gaussian'
form
W_ji^(k) = w_ji^(k) / Σ_i' w_ji'^(k)                                 (4)
¹Note that the established work we have referenced, where a Gaussian prior or quadratic
regularizer is utilised, also overlooks the bounded nature of the pixel space.
with
w_ji^(k) = exp{ -||v_i - u_j^(k)||² / γ² }                           (5)
where j = 1, ..., M and i = 1, ..., N. Here γ represents the 'width' of the point
spread function, and we shall treat γ as an unknown parameter to be determined
from the data. Note that our approach generalizes readily to any other form of
point spread function, possibly containing several unknown parameters, provided it
is differentiable with respect to those parameters.
In (5) the vector u_j^(k) is the centre of the PSF and is dependent on the shift and
rotation of the low resolution image. We choose a parameterization in which the
centre of rotation coincides with the centre v̄ of the image, so that
u_j^(k) = R^(k)(v_j - v̄) + v̄ + s_k                                   (6)
where R^(k) is the rotation matrix
R^(k) = (  cos θ_k   sin θ_k )
        ( -sin θ_k   cos θ_k )                                       (7)
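Equations (4)-(7) can be assembled into the M × N matrix W^(k) directly. The sketch below is our illustration, not the paper's code: the mapping of low-resolution pixel centres into high-resolution coordinates via a fixed zoom factor, and the row normalisation in the last line, are assumptions about details the text leaves implicit.

```python
import numpy as np

def psf_matrix(hr_shape, lr_shape, shift, theta, gamma, zoom=4.0):
    """Build W^(k): row j holds Gaussian PSF weights over high-res pixels v_i,
    centred on the shifted/rotated low-res pixel centre u_j^(k) (eqs. 4-7)."""
    Hh, Wh = hr_shape
    Hl, Wl = lr_shape
    yh, xh = np.mgrid[0:Hh, 0:Wh]
    v = np.column_stack([xh.ravel(), yh.ravel()]).astype(float)          # v_i
    yl, xl = np.mgrid[0:Hl, 0:Wl]
    centres = np.column_stack([xl.ravel(), yl.ravel()]).astype(float) * zoom
    vbar = v.mean(axis=0)                                                # image centre
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])                                      # eq. (7)
    u = (centres - vbar) @ R.T + vbar + np.asarray(shift, float)         # eq. (6)
    d2 = ((u[:, None, :] - v[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / gamma ** 2)                                         # eq. (5)
    return W / W.sum(axis=1, keepdims=True)                              # normalise rows (eq. 4)

W = psf_matrix((16, 16), (4, 4), shift=(0.5, -0.3), theta=0.02, gamma=2.0)
```

Each row of W then forms a discrete convolution kernel, so W @ x simulates blurring, shifting, rotating, and downsampling the high-resolution image x in one step.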
We can now write down the likelihood function in the form
p(y^(k) | x) = (β/2π)^{M/2} exp{ -(β/2) ||y^(k) - W^(k) x||² }       (8)
Assuming the images are generated independently from the model, we can then
write the posterior distribution over the high resolution image in the form
p(x | y) ∝ p(x) Π_{k=1}^K p(y^(k) | x)                               (9)
p(x | y) = N(x | μ, Σ)                                               (10)
with
Σ = [ Z_x^{-1} + β Σ_{k=1}^K W^(k)T W^(k) ]^{-1}                     (11)
μ = β Σ ( Σ_{k=1}^K W^(k)T y^(k) ).                                  (12)
Thus the posterior distribution over the high resolution image is again a Gaussian
process.
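The posterior moments (11)-(12) translate into a few lines of linear algebra. The function below is a minimal sketch (direct matrix inversion is used for clarity; for large N one would solve the linear systems instead), followed by a tiny sanity check whose answer can be read off by hand:

```python
import numpy as np

def posterior_moments(Zx, Ws, ys, beta):
    """Posterior mean and covariance of the high-res image, eqs. (11)-(12)."""
    A = np.linalg.inv(Zx) + beta * sum(W.T @ W for W in Ws)   # Sigma^{-1}
    Sigma = np.linalg.inv(A)
    mu = beta * Sigma @ sum(W.T @ y for W, y in zip(Ws, ys))
    return mu, Sigma

# sanity check: unit prior, one identity observation, beta = 1
# => Sigma = (I + I)^{-1} = I/2 and mu = y/2
y = np.array([1.0, -2.0, 0.5])
mu, Sigma = posterior_moments(np.eye(3), [np.eye(3)], [y], beta=1.0)
```

With the registration parameters and γ fixed, μ is exactly the super-resolved image the paper reports, so this routine is the final reconstruction step once the marginal likelihood has been optimized.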
If we knew the registration parameters {s_k, θ_k}, as well as the PSF width parameter
γ, then we could simply take the mean μ (which is also the maximum) of the
posterior distribution to be our super-resolved image. However, the registration
parameters are unknown. Previous approaches have either performed a preliminary
registration of the low resolution images against each other and then fixed the
registration while determining the high resolution image, or else have maximized
the posterior distribution (9) jointly with respect to the high resolution image x and
the registration parameters (which we refer to as the 'MAP' approach). Neither
approach takes account of the uncertainty in determining the high resolution image
and the consequential effects on the optimization of the registration parameters.
Here we adopt a Bayesian approach by marginalizing out the unknown high resolution image. This gives the marginal likelihood function for the low resolution
images in the form
p(y | {s_k, θ_k}, γ) = ∫ p(y | x) p(x) dx = N(y | 0, Z_y)            (13)
where
Z_y = β^{-1} I + W Z_x W^T                                           (14)
and y and W are the vector and matrix of stacked y(k) and W(k) respectively. Using
some standard matrix manipulations we can rewrite the marginal likelihood in the
form
log p(y | {s_k, θ_k}, γ) = -½ [ β Σ_{k=1}^K ||y^(k) - W^(k) μ||² + μ^T Z_x^{-1} μ
                                + log|Z_x| - log|Σ| - KM log β ].    (15)
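The objective can be evaluated directly from the posterior quantities. The function below is our sketch of that computation (the additive 2π constant, which does not affect optimization, is dropped); it can be checked against the equivalent Gaussian form N(y | 0, Z_y) of the marginal, since log p(y) = -½(log|Z_y| + yᵀZ_y⁻¹y) up to the same constant:

```python
import numpy as np

def log_marginal(Zx, Ws, ys, beta):
    """log p(y | {s_k, theta_k}, gamma), following the bracketed expression above."""
    Zx_inv = np.linalg.inv(Zx)
    Sigma = np.linalg.inv(Zx_inv + beta * sum(W.T @ W for W in Ws))
    mu = beta * Sigma @ sum(W.T @ y for W, y in zip(Ws, ys))
    K, M = len(Ws), ys[0].size
    resid = sum(np.sum((y - W @ mu) ** 2) for W, y in zip(Ws, ys))
    return -0.5 * (beta * resid + mu @ Zx_inv @ mu
                   + np.linalg.slogdet(Zx)[1]
                   - np.linalg.slogdet(Sigma)[1]
                   - K * M * np.log(beta))
```

The agreement between the two forms follows from the Woodbury identity applied to Z_y, which is also why the KM × KM determinant never needs to be formed when N < KM.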
We now wish to optimize this marginal likelihood with respect to the parameters
{s_k, θ_k}, γ, and to do this we have compared two approaches. The first is to use
the expectation-maximization (EM) algorithm. In the E-step we evaluate the posterior distribution over the high resolution image given by (10). In the M-step
we maximize the expectation over x of the log of the complete data likelihood
p(y, x | {s_k, θ_k}, γ) obtained from the product of the prior (1) and the likelihood (8).
This maximization is done using the scaled conjugate gradients algorithm (SCG)
[6]. The second approach is to maximize the marginal likelihood (15) directly using
SCG. Empirically we find that direct optimization is faster than EM, and so has
been used to obtain the results reported in this paper.
Since in (15) we must compute Σ, which is N × N, in practice we optimize the
shift, rotation and PSF width parameters based on an appropriately-sized subset
of the image only. The complete high resolution image is then found as the mode
of the full posterior distribution, obtained iteratively by maximizing the numerator
in (9), again using SCG optimization.
3
Results
In order to evaluate our approach we first apply it to a set of low resolution images
synthetically down-sampled (by a linear scaling of 4 to 1, or 16 pixels to 1) from a
known high-resolution image as follows. For each image we wish to generate we first
apply a shift drawn from a uniform distribution over the interval (-2,2) in units
of high resolution pixels (larger shifts could in principle be reduced to this level
by pre-registering the low resolution images against each other) and then apply a
rotation drawn uniformly over the interval (-4,4) in units of degrees. Finally we
determine the value at each pixel of the low resolution image by convolution of the
original image with the point spread function (centred on the low resolution pixel),
with width parameter γ = 2.0. From a high-resolution image of 384 × 256 we chose
to use a set of 16 images of resolution 96 x 64.
In order to limit the computational cost we use patches from the centre of the low
resolution image of size 9 x 9 in order to determine the values of the shift, rotation
and PSF width parameters. We set the resolution of the super-resolved image to
have 16 times as many pixels as the low resolution images which, allowing for shifts
and the support of the point spread function, gives N = 50 x 50. The Gaussian
process prior is chosen to have width parameter r = 1.0, variance parameter A =
0.04, and the noise process is given a standard deviation of 0.05. Note that these
values can be set sensibly a priori and need not be tuned to the data.
The scaled conjugate gradient optimization is initialised by setting the shift and
rotation parameters equal to zero, while the PSF width γ is initialized to 4.0 since
this is the upsampling factor we have chosen between low resolution and super-resolved images. We first optimize only the shifts, then we optimize both shifts
and rotations, and finally we optimize shifts, rotations and PSF width, in each case
running until a suitable convergence tolerance is reached.
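The staged schedule just described can be sketched as a small driver routine. Everything below is our illustration: a generic quasi-Newton optimiser stands in for SCG (an assumption, not the paper's choice), and `objective` is any callable returning the negative marginal log-likelihood for given shifts, rotations, and PSF width.

```python
import numpy as np
from scipy.optimize import minimize

def staged_fit(objective, K, gamma0=4.0):
    """Minimise objective(shifts, thetas, gamma) in three stages:
    shifts only, then shifts+rotations, then everything including gamma."""
    shifts, thetas, gamma = np.zeros(2 * K), np.zeros(K), gamma0

    def split(p, stage):
        if stage == 0:
            return p, thetas, gamma
        if stage == 1:
            return p[:2 * K], p[2 * K:], gamma
        return p[:2 * K], p[2 * K:3 * K], p[-1]

    for stage in range(3):
        p0 = [shifts, np.r_[shifts, thetas], np.r_[shifts, thetas, gamma]][stage]
        res = minimize(lambda p, s=stage: objective(*split(p, s)), p0)
        if stage == 0:
            shifts = res.x
        elif stage == 1:
            shifts, thetas = res.x[:2 * K], res.x[2 * K:]
        else:
            shifts, thetas, gamma = res.x[:2 * K], res.x[2 * K:3 * K], res.x[-1]
    return shifts.reshape(K, 2), thetas, gamma

# separable toy objective with known optimum: shifts = 0.5, thetas = 0, gamma = 2
f = lambda s, t, g: np.sum((np.ravel(s) - 0.5) ** 2) + np.sum(t ** 2) + (g - 2.0) ** 2
shifts, thetas, gamma = staged_fit(f, K=2)
```

Warming up the easier parameters first in this way keeps the later joint optimization from starting in a poorly conditioned region of the likelihood surface.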
In Figure l(a) we show the original image, together with an example low resolution
image in Figure l(b). Figure l(c) shows the super-resolved image obtained using our
Bayesian approach. We see that the super-resolved image is of dramatically better
quality than the low resolution images from which it is inferred. The converged
value for the PSF width parameter is γ = 1.94, close to the true value 2.0.
Figure 1: Example using synthetically generated data showing (top left) the
original image, (top right) an example low resolution image and (bottom left)
the inferred super-resolved image. Also shown, in (bottom right), is a comparison super-resolved image obtained by joint optimization with respect to
the super-resolved image and the parameters, demonstrating the significantly
poorer result.
Notice that there are some small edge effects in the super-resolved image arising from
the fact that these pixels only receive evidence from a subset of the low resolution
images due to the image shifts. Thus pixels near the edge of the high resolution
image are determined primarily by the prior.
For comparison we show, in Figure l(d), the corresponding super-resolved image
obtained by performing a MAP optimization with respect to the high resolution
image. This is of significantly poorer quality than that obtained from our Bayesian
approach. The converged value for the PSF width in this case is γ = 0.43, indicating
severe over-fitting.
In Figure 2 we show plots of the true and estimated values for the shift and rotation
parameters using our Bayesian approach and also using MAP optimization. Again
we see the severe over-fitting resulting from joint optimization, and the significantly
better results obtained from the Bayesian approach.
(a) Shift estimation
(b) Rotation estimation
[Figure 2 plots (legend: truth, Bayesian, MAP): (a) estimated vs. true horizontal and vertical shifts; (b) rotation angle error in degrees against low-resolution image index.]
Figure 2: (a) Plots of the true shifts for the synthetic data, together with the
estimated values obtained by optimization of the marginal likelihood in our
Bayesian framework and for comparison the corresponding estimates obtained
by joint optimization with respect to registration parameters and the high
resolution image. (b) Comparison of the errors in determining the rotation
parameters for both Bayesian and MAP approaches.
Finally, we apply our technique to a set of images obtained by taking 16 frames using
a hand held digital camera in 'multi-shot' mode (press and hold the shutter release)
which takes about 12 seconds. An example image, together with the super-resolved
image obtained using our Bayesian algorithm, is shown in Figure 3.
4
Discussion
In this paper we have proposed a new approach to the problem of image superresolution, based on a marginalization over the unknown high resolution image using
a Gaussian process prior. Our results demonstrate a worthwhile improvement over
previous approaches based on MAP estimation, including the ability to estimate
parameters of the point spread function.
One potential application of our technique is the extraction of high resolution images
from video sequences. In this case it will be necessary to take account of motion
blur, as well as the registration, for example by tracking moving objects through
the successive frames [7].
(a) Low-resolution image (1 of 16)
(b) 4x Super-resolved image (Bayesian)
Figure 3: Application to real data showing in (a) one of the 16 images of a doorway
with a nearby printed sign, captured in succession using a hand-held camera.
Image (b) shows the final image obtained from our Bayesian super-resolution
algorithm.
Finally, having seen the advantages of marginalizing with respect to the high resolution image, we can extend this approach to a fully Bayesian one based on Markov
chain Monte Carlo sampling over all unknown parameters in the model. Since our
model is differentiable with respect to these parameters, this can be done efficiently
using the hybrid Monte Carlo algorithm. This approach would allow the use of
a prior distribution over high resolution pixel intensities which was confined to a
bounded interval, instead of the Gaussian assumed in this paper. Whether the additional improvements in performance will justify the extra computational complexity
remains to be seen.
References
[1] N. Nguyen, P. Milanfar, and G. Golub. A computationally efficient superresolution
image reconstruction algorithm. IEEE Transactions on Image Processing, 10(4):573-583, 2001.
[2] V. N. Smelyanskiy, P. Cheeseman, D. Maluf, and R. Morris. Bayesian super-resolved
surface reconstruction from images. In Proceedings CVPR, volume 1, pages 375- 382,
2000.
[3] D. P. Capel and A. Zisserman. Super-resolution enhancement of text image sequences.
In International Conference on Pattern Recognition, pages 600- 605, Barcelona, 2000.
[4] R. C. Hardie, K. J. Barnard, and E. A. Armstrong. Joint MAP registration and
high-resolution image estimation using a sequence of undersampled images. IEEE
Transactions on Image Processing, 6(12):1621-1633, 1997.
[5] S. Baker and T. Kanade. Limits on super-resolution and how to break them. Technical
report, Carnegie Mellon University, 2002. submitted to IEEE Transactions on Pattern
Analysis and Machine Intelligence.
[6] I. T. Nabney. Netlab: Algorithms for Pattern Recognition. Springer, London, 2002.
http://www.ncrg.aston.ac.uk/netlab/
[7] B. Bascle, A. Blake, and A. Zisserman. Motion deblurring and super-resolution from
an image sequence. In Proceedings of the Fourth European Conference on Computer
Vision, pages 573- 581, Cambridge, England, 1996.
Learning Semantic Similarity
Jaz Kandola
John Shawe-Taylor
Royal Holloway, University of London
{jaz, john}@cs.rhul.ac.uk
Nello Cristianini
University of California, Berkeley
[email protected]
Abstract
The standard representation of text documents as bags of words
suffers from well known limitations, mostly due to its inability to
exploit semantic similarity between terms. Attempts to incorporate some notion of term similarity include latent semantic indexing [8], the use of semantic networks [9], and probabilistic methods
[5]. In this paper we propose two methods for inferring such similarity from a corpus. The first one defines word-similarity based
on document-similarity and viceversa, giving rise to a system of
equations whose equilibrium point we use to obtain a semantic
similarity measure. The second method models semantic relations
by means of a diffusion process on a graph defined by lexicon and
co-occurrence information. Both approaches produce valid kernel
functions parametrised by a real number. The paper shows how
the alignment measure can be used to successfully perform model
selection over this parameter. Combined with the use of support
vector machines we obtain positive results.
1
Introduction
Kernel-based algorithms exploit the information encoded in the inner-products between all pairs of data items (see for example [1]). This matches very naturally the
standard representation used in text retrieval, known as the 'vector space model',
where the similarity of two documents is given by the inner product between high
dimensional vectors indexed by all the terms present in the corpus. The combination of these two methods, pioneered by [6], and successively explored by several
others, produces powerful methods for text categorization. However, such an approach suffers from well known limitations, mostly due to its inability to exploit
semantic similarity between terms: documents sharing terms that are different but
semantically related will be considered as unrelated. A number of attempts have
been made to incorporate semantic knowledge into the vector space representation.
Semantic networks have been considered [9], whilst others use co-occurrence analysis where a semantic relation is assumed between terms whose occurrence patterns
in the documents of the corpus are correlated [3]. Such methods are also limited in
their flexibility, and the question of how to infer semantic relations between terms
or documents from a corpus remains an open issue. In this paper we propose two
methods to model such relations in an unsupervised way. The structure of the paper
is as follows. Section 2 provides an introduction to how semantic similarity can be
introduced into the vector space model. Section 3 derives a parametrised class of
semantic proximity matrices from a recursive definition of similarity of terms and
documents. A further parametrised class of kernels based on alternative similarity
measures inspired by considering diffusion on a weighted graph of documents is
given in Section 4. In Section 5 we show how the recently introduced alignment
measure [2] can be used to perform model selection over the classes of kernels we
have defined. Positive experimental results with the methods are reported in Section
5 before we draw conclusions in Section 6.
2
Representing Semantic Proximity
Kernel based methods are an attractive choice for inferring relations from textual
data since they enable us to work in a document-by-document setting rather than
in a term-by-term one [6]. In the vector space model, a document is represented
by a vector indexed by the terms of the corpus. Hence, the vector will typically
be sparse with non-zero entries for those terms occurring in the document. Two
documents that use semantically related but distinct words will therefore show no
similarity. The aim of a semantic proximity matrix [3] is to correct for this by
indicating the strength of the relationship between terms that even though distinct
are semantically related.
The semantic proximity matrix P is indexed by pairs of terms a and b, with the
entry Pab = Pba giving the strength of their semantic similarity. If the vectors
corresponding to two documents are d i , d j , their inner product is now evaluated
through the kernel
k(d_i, d_j) = d_i' P d_j,
where x' denotes the transpose of the vector or matrix x. The symmetry of P
ensures that the kernel is symmetric. We must also require that P is positive semidefinite in order to satisfy Mercer's conditions. In this case we can decompose
P = R' R for some matrix R, so that we can view the semantic similarity as a
projection into a semantic space
φ : d ↦ R d,
since
k(d_i, d_j) = d_i' P d_j = ⟨R d_i, R d_j⟩.
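A minimal numerical example makes the point concrete. Below (our illustration; the two-term vocabulary and the 0.8 proximity value are invented for the demonstration), two documents sharing no terms still receive a positive kernel value once a semantic proximity matrix P is interposed, and the symmetric square root of P exposes the underlying feature map:

```python
import numpy as np

# two documents with no shared terms over vocabulary {a, b}
D = np.array([[1.0, 0.0],    # d_1 uses only term a
              [0.0, 1.0]])   # d_2 uses only term b
P = np.array([[1.0, 0.8],    # proximity: a and b are semantically related
              [0.8, 1.0]])

K = D @ P @ D.T              # k(d_i, d_j) = d_i' P d_j

# factor P = R'R (symmetric square root) to expose the map phi(d) = R d
w, V = np.linalg.eigh(P)
R = (V * np.sqrt(np.clip(w, 0, None))) @ V.T
Phi = D @ R                  # R is symmetric, so row i of Phi is (R d_i)'
```

Positive semi-definiteness of P is what makes this factorisation possible, which is exactly the Mercer condition noted above.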
The purpose of this paper is to infer (or refine) the similarity measure between
examples by taking into account higher order correlations, thereby performing unsupervised learning of the proximity matrix from a given corpus. We will propose
two methods based on two different observations.
The first method exploits the fact that the standard representation of text documents as bags of words gives rise to an interesting duality: while documents can be
seen as bags of words, simultaneously terms can be viewed as bags of documents
- the documents that contain them. In such a model, two documents that have
highly correlated term-vectors are considered as having similar content. Similarly,
two terms that have a correlated document-vector will have a semantic relation.
This is of course only a first order approximation since the knock-on effect of the
two similarities on each other needs to be considered. We show that it is possible
to define term-similarity based on document-similarity, and vice versa, to obtain
a system of equations that can be solved in order to obtain a semantic proximity
matrix P.
The second method exploits the representation of a lexicon (the set of all words in
a given corpus) as a graph, where the nodes are indexed by words and where cooccurrence is used to establish links between nodes. Such a representation has been
studied recently giving rise to a number of topological properties [4]. We consider
the idea that higher order correlations between terms can affect their semantic
relations as a diffusion process on such a graph. Although there can be exponentially
many paths connecting two given nodes in the graph, the use of diffusion kernels [7]
enables us to obtain the level of semantic relation between any two nodes efficiently,
so inferring the semantic proximity matrix from data.
3
Equilibrium Equations for Semantic Similarity
In this section we consider the first of the two methods outlined in the previous
section. Here the aim is to create recursive equations for the relations between
documents and between terms.
Let X be the feature example (term/document in the case of text data) matrix
in a possibly kernel-defined feature space, so that X' X gives the kernel matrix K
and X X' gives the correlations between different features over the training set.
We denote this latter matrix with G. Consider the similarity matrices defined
recursively by
K̂ = λ X' Ĝ X + K    and    Ĝ = λ X K̂ X' + G.                        (1)
We can interpret this as augmenting the similarity given by K through indirect
similarities measured by G and vice versa. The factor λ < ||K||^{-1} ensures that the
longer range effects decay exponentially. Our first result characterizes the solution
of the above recurrences.
Proposition 1 Provided λ < ||K||^{-1} = ||G||^{-1}, the kernels K̂ and Ĝ that solve the
recurrences (1) are given by
K̂ = K(I - λK)^{-1}    and    Ĝ = G(I - λG)^{-1}.
Proof: First observe that

K(I − λK)⁻¹ = K(I − λK)⁻¹ − (1/λ)(I − λK)⁻¹ + (1/λ)(I − λK)⁻¹
            = −(1/λ)(I − λK)(I − λK)⁻¹ + (1/λ)(I − λK)⁻¹
            = (1/λ)(I − λK)⁻¹ − (1/λ)I

Now if we substitute the second recurrence into the first we obtain

K̂ = λ² X'X K̂ X'X + λ X'XX'X + K
   = λ² K(K(I − λK)⁻¹)K + λK² + K
   = λ² K((1/λ)(I − λK)⁻¹ − (1/λ)I)K + λK² + K
   = λK(I − λK)⁻¹K − λK² + λK² + K
   = λK(I − λK)⁻¹K + K(I − λK)⁻¹(I − λK)
   = K(I − λK)⁻¹

showing that the expression does indeed satisfy the recurrence. Clearly, by the
symmetry of the definition, the expression for Ĝ also satisfies its recurrence. □
In view of the form of the solution we introduce the following definition:

Definition 2 von Neumann kernel Given a kernel K, the derived kernel K̂(λ) =
K(I − λK)⁻¹ will be referred to as the von Neumann kernel.

Note that we can view K̂(λ) as a kernel based on the semantic proximity matrix
P = λĜ + I, since

X'PX = X'(λĜ + I)X = λX'ĜX + K = K̂(λ).

Hence, the solution Ĝ defines a refined similarity between terms/features. In the
next section, we will consider the second method of introducing semantic similarity,
derived from viewing the terms and documents as vertices of a weighted graph.
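The fixed point of Proposition 1 can be checked numerically. Below is a minimal sketch in numpy; the random matrix X, its dimensions, and the choice of λ are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))   # hypothetical feature-by-document matrix (5 terms, 8 documents)
K = X.T @ X                       # document kernel K = X'X
G = X @ X.T                       # term correlation matrix G = XX'

lam = 0.5 / np.linalg.norm(K, 2)  # lambda < ||K||^{-1} = ||G||^{-1}

# von Neumann kernels of Proposition 1
K_hat = K @ np.linalg.inv(np.eye(8) - lam * K)
G_hat = G @ np.linalg.inv(np.eye(5) - lam * G)

# They solve the coupled recurrences (1)
recurrence_K = lam * X.T @ G_hat @ X + K
recurrence_G = lam * X @ K_hat @ X.T + G
assert np.allclose(K_hat, recurrence_K)
assert np.allclose(G_hat, recurrence_G)
```

Note that ‖K‖₂ = ‖G‖₂ holds automatically since X'X and XX' share their nonzero eigenvalues, so a single λ serves both recurrences.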
4 Semantic Similarity as a Diffusion Process
Graph like structures within data occur frequently in many diverse settings. In
the case of language, the topological structure of a lexicon graph has recently been
analyzed [4]. Such a graph has nodes indexed by all the terms in the corpus, and
the edges are given by the co-occurrence between terms in documents of the corpus.
Although terms that are connected are likely to have related meaning, terms with
a higher degree of separation would not be considered as being related.
A diffusion process on the graph can also be considered as a model of semantic
relations existing between indirectly connected terms. Although the number of
possible paths between any two given nodes can grow exponentially, results from
spectral graph theory have been recently used by [7] to show that it is possible to
compute the similarity between any two given nodes efficiently without examining
all possible paths. It is also possible to show that the similarity measure obtained
in this way is a valid kernel function. The exponentiation operation used in the
definition naturally yields the Mercer conditions required for valid kernel functions.
An alternative insight into semantic similarity, to that presented in section 2, is
afforded if we multiply out the expression for K̂(λ):

K̂(λ) = K(I − λK)⁻¹ = Σ_{t=1}^∞ λ^{t−1} K^t.

The entries in the matrix K^t are given by

(K^t)_{ij} = Σ_{u ∈ {1,…,m}^t : u_1 = i, u_t = j}  Π_{ℓ=1}^{t−1} K_{u_ℓ u_{ℓ+1}},
that is, the sum of the products of the weights over all paths of length t that start
at vertex i and finish at vertex j in the weighted graph on the examples. If we
view the connection strengths as channel capacities, the entry (K^t)_{ij} can be seen to
measure the sum over all routes of the products of the capacities. If the entries
are all positive and for each vertex the sum of the connections
is 1, we can view the entry as the probability that a random walk beginning at
vertex i is at vertex j after t steps. It is for these reasons that the kernels defined
using these combinations of powers of the kernel matrix have been termed diffusion
kernels [7]. A similar equation holds for G^t. Hence, examples that both lie in a
cluster of similar examples become more strongly related, and similar features that
occur in a cluster of related features are drawn together in the semantic proximity
matrix P. We should stress that the emphasis of this work is not in its diffusion
connections, but its relation to semantic proximity. It is this link that motivates
the alternative decay factors considered below.
The kernel K̂ combines these indirect link kernels with an exponentially decaying
weight. This suggests an alternative weighting scheme that shows faster decay for
increasing path length,

K̄(λ) = K Σ_{t=0}^∞ (λK)^t / t! = K exp(λK)
The next proposition gives the semantic proximity matrix corresponding to K̄(λ).

Proposition 3 Let K̄(λ) = K exp(λK). Then K̄(λ) corresponds to the semantic
proximity matrix exp(λG).

Proof: Let X = UΣV' be the singular value decomposition of X, so that K =
VΛV' is the eigenvalue decomposition of K, where Λ = Σ'Σ. We can write K̄ as

K̄ = VΛ exp(λΛ)V' = X'UΣ⁻¹Λ exp(λΛ)Σ⁻¹U'X
   = X'U exp(λΛ)U'X = X' exp(λG)X, as required. □
The above leads to the definition of the second kernel that we consider.

Definition 4 Given a kernel K, the derived kernel K̄(λ) = K exp(λK) will be
referred to as the exponential kernel.
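Proposition 3 can likewise be verified numerically. The sketch below (illustrative random data; `sym_fun` is our own helper, not from the text) computes K exp(λK) through the eigensystem of K and compares it with the kernel induced by the proximity matrix exp(λG):

```python
import numpy as np

def sym_fun(M, f):
    """Apply a scalar function f to the eigenvalues of a symmetric matrix M."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(f(w)) @ V.T

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 10))  # hypothetical term-by-document matrix
K = X.T @ X                       # kernel matrix K = X'X
G = X @ X.T                       # term correlation matrix G = XX'
lam = 0.1

K_bar = sym_fun(K, lambda w: w * np.exp(lam * w))  # exponential kernel K exp(lam K)
P = sym_fun(G, lambda w: np.exp(lam * w))          # proximity matrix exp(lam G)

# Proposition 3: K exp(lam K) = X' exp(lam G) X
assert np.allclose(K_bar, X.T @ P @ X)
```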
5 Experimental Methods
In the previous sections we have introduced two new kernel adaptations, in both
cases parameterized by a positive real parameter λ. In order to apply these kernels
to real text data, we need to develop a method of choosing the parameter λ. Of
course one possibility would be just to use cross-validation, as considered by [7].
Rather than adopt this rather expensive methodology, we will use a quantitative
measure of agreement between the diffusion kernels and the learning task known as
alignment, which measures the degree of agreement between a kernel and target [2].
Definition 5 Alignment The (empirical) alignment of a kernel k₁ with a kernel
k₂ with respect to the sample S is the quantity

A(S, k₁, k₂) = ⟨K₁, K₂⟩_F / √(⟨K₁, K₁⟩_F ⟨K₂, K₂⟩_F),

where Kᵢ is the kernel matrix for the sample S using kernel kᵢ,

and where we use the following definition of inner product between Gram matrices,

⟨K₁, K₂⟩_F = Σ_{i,j=1}^m K₁(xᵢ, xⱼ) K₂(xᵢ, xⱼ),     (2)
corresponding to the Frobenius inner product. From a text categorization perspective this can also be viewed as the cosine of the angle between two bi-dimensional
vectors K₁ and K₂ representing the Gram matrices. If we consider K₂ = yy', where
y is the vector of outputs (+1/−1) for the sample, then

A(S, K, yy') = ⟨K, yy'⟩_F / √(⟨K, K⟩_F ⟨yy', yy'⟩_F) = y'Ky / (m ‖K‖_F).     (3)
The alignment has been shown to possess several convenient properties [2]. Most
notably it can be efficiently computed before any training of the kernel machine
takes place, and based only on training data information; and since it is sharply
concentrated around its expected value, its empirical value is stable with respect to
different splits of the data.
We have developed a method for choosing λ to optimize the alignment of the resulting matrix K̂(λ) or K̄(λ) to the target labels on the training set. This method
follows similar results presented in [2], but here the parameterization is non-linear
in λ, so that we cannot solve for the optimal value analytically. We rather seek the
optimal value using a line search over the range of possible values of λ, for the value
at which the derivative of the alignment with respect to λ is zero. The next two
propositions give equations that are satisfied at this point.
Proposition 6 If λ* = argmax_λ A(S, K̄(λ), yy') and vᵢ, λᵢ are
the eigenvector/eigenvalue pairs of the kernel matrix K, then

Σ_{i=1}^m λᵢ² exp(λ*λᵢ)⟨vᵢ, y⟩²  Σ_{i=1}^m λᵢ² exp(2λ*λᵢ)
  = Σ_{i=1}^m λᵢ exp(λ*λᵢ)⟨vᵢ, y⟩²  Σ_{i=1}^m λᵢ³ exp(2λ*λᵢ).

Proof: First observe that K̄(λ) = VMV' = Σ_{i=1}^m μᵢ vᵢvᵢ', where Mᵢᵢ = μᵢ(λ) =
λᵢ exp(λλᵢ). We can express the alignment of K̄(λ) as

A(S, K̄(λ), yy') = Σ_{i=1}^m μᵢ(λ)⟨vᵢ, y⟩² / (m √(Σ_{i=1}^m μᵢ(λ)²)).

The function is a differentiable function of λ and so at its maximal value the derivative will be zero. Taking the derivative of this expression and setting it equal to
zero gives the condition in the proposition statement. □
Proposition 7 If λ* = argmax_{λ∈(0,‖K‖⁻¹)} A(S, K̂(λ), yy'), and
vᵢ, λᵢ are the eigenvector/eigenvalue pairs of the kernel matrix K, then, writing
μᵢ(λ) = λᵢ/(1 − λλᵢ),

Σ_{i=1}^m μᵢ(λ*)²⟨vᵢ, y⟩²  Σ_{i=1}^m μᵢ(λ*)²
  = Σ_{i=1}^m μᵢ(λ*)⟨vᵢ, y⟩²  Σ_{i=1}^m μᵢ(λ*)³.

Proof: The proof is identical to that of Proposition 6, except that Mᵢᵢ = μᵢ(λ) =
λᵢ/(1 − λλᵢ), whose derivative with respect to λ is μᵢ(λ)². □
Definition 8 Line Search Optimization of the alignment can take place by using
a line search over the values of λ to find a maximum point of the alignment, by seeking
points at which the equations given in Propositions 6 and 7 hold.
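The line search amounts to a one-dimensional scan of the closed-form alignment from the proof of Proposition 6. A sketch on illustrative random data (a grid scan stands in for the derivative-based search; the grid and seed are our own choices):

```python
import numpy as np

def exp_kernel_alignment(K, y, lams):
    """Alignment of K exp(lam K) with yy' via the eigensystem of K:
    A = sum_i mu_i <v_i, y>^2 / (m * sqrt(sum_i mu_i^2)),
    with mu_i = lambda_i * exp(lam * lambda_i)."""
    eigvals, V = np.linalg.eigh(K)
    proj = (V.T @ y) ** 2            # <v_i, y>^2
    scores = []
    for lam in lams:
        mu = eigvals * np.exp(lam * eigvals)
        scores.append((mu @ proj) / (len(y) * np.sqrt(np.sum(mu ** 2))))
    return np.array(scores)

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 15))
K = X.T @ X
y = np.where(rng.random(15) < 0.5, -1.0, 1.0)

lams = np.linspace(0.0, 1.0, 200)
scores = exp_kernel_alignment(K, y, lams)
lam_star = lams[np.argmax(scores)]   # line-search maximiser
```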
5.1 Results
To demonstrate the performance of the proposed algorithm on text data, the Medline1033 dataset commonly used in text processing [3] was used. This dataset contains 1033 documents and 30 queries obtained from the National Library of Medicine.
In this work we focus on query 20. A Bag of Words kernel was used [6]. Stop words
and punctuation were removed from the documents and the Porter stemmer was
applied to the words. The terms in the documents were weighted according to a
variant of the tfidf scheme, given by log(1 + tf) · log(m/df), where tf represents the
term frequency, df the document frequency, and m the total
number of documents. A support vector classifier (SVC) was used to assess the performance of the derived kernels on the Medline dataset. A 10-fold cross validation
procedure was used to find the optimal value for the capacity control parameter
'C' . Having selected the optimal 'C' parameter, the SVC was re-trained ten times
using ten random training and test dataset splits. Error results for the different
algorithms are presented together with F1 values. The F1 measure is a popular
statistic used in the information retrieval community for comparing performance of
TRAIN   ALIGN           SVC ERROR       F1              λ
K80     0.851 (0.012)   0.017 (0.005)   0.795 (0.060)   0.197 (0.004)
B80     0.423 (0.007)   0.022 (0.007)   0.256 (0.351)   -
K50     0.863 (0.025)   0.018 (0.006)   0.783 (0.074)   0.185 (0.008)
B50     0.390 (0.009)   0.024 (0.004)   0.456 (0.265)   -
K20     0.867 (0.029)   0.019 (0.004)   0.731 (0.089)   0.147 (0.04)
B20     0.325 (0.009)   0.030 (0.005)   0.349 (0.209)   -

Table 1: Medline dataset - Mean and associated standard deviation alignment, F1
and SVC error values for an SVC trained using the Bag of Words kernel (B) and the
exponential kernel (K). The index represents the percentage of training points.
TRAIN   ALIGN           SVC ERROR       F1              λ
K80     0.758 (0.015)   0.017 (0.004)   0.765 (0.020)   0.032 (0.001)
B80     0.423 (0.007)   0.022 (0.007)   0.256 (0.351)   -
K50     0.766 (0.025)   0.018 (0.005)   0.701 (0.066)   0.039 (0.008)
B50     0.390 (0.009)   0.024 (0.004)   0.456 (0.265)   -
K20     0.728 (0.012)   0.028 (0.004)   0.376 (0.089)   0.029 (0.07)
B20     0.325 (0.009)   0.030 (0.005)   0.349 (0.209)   -

Table 2: Medline dataset - Mean and associated standard deviation alignment, F1
and SVC error values for an SVC trained using the Bag of Words kernel (B) and the
von Neumann kernel (K). The index represents the percentage of training points.
algorithms, typically on uneven data. F1 can be computed using

F1 = 2PR / (P + R),

where P represents precision, i.e. a measure of the proportion of selected items that the
system classified correctly, and R represents recall, i.e. the proportion of the target
items that the system selected.
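In code (the counts below are made-up for illustration):

```python
def f1_score(tp, fp, fn):
    """F1 = 2PR/(P+R), with precision P = tp/(tp+fp) and recall R = tp/(tp+fn)."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)

# e.g. 8 correctly selected items, 2 false positives, 2 missed targets:
score = f1_score(8, 2, 2)   # P = R = 0.8, so F1 = 0.8
```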
The line search procedure was applied to find the optimal value of λ for the diffusion
kernels. All of the results are averaged over 10 random splits with the standard
deviation given in brackets. Table 1 shows the results of using the Bag of Words
kernel matrix (B) and the exponential kernel matrix (K). Table 2 presents the results
of using the von Neumann kernel matrix (K) together with the Bag of Words kernel
matrix for different sizes of the training data. The index represents the percentage
of training points. The first column of both Table 1 and Table 2 shows the alignments of
the Gram matrices to the rank 1 labels matrix for different sizes of training data.
In both cases the results presented indicate that the alignment of the diffusion
kernels to the labels is greater than that of the Bag of Words kernel matrix by
more than the sum of the standard deviations across all sizes of training data. The
second column of the tables presents the support vector classifier (SVC) error
obtained using the diffusion Gram matrices and the Bag of Words Gram matrix.
The SVC error for the diffusion kernels shows a decrease with increasing alignment
value. F1 values are also shown and in all instances show an improvement for the
diffusion kernel matrices. An interesting observation can be made regarding the F1
value for the von Neumann kernel matrix trained using 20% training data (K20).
Despite an increase in alignment value and a reduction of SVC error, the F1 value
does not increase as much as that for the exponential kernel trained using the same
proportion of the data (K20). This observation implies that the diffusion kernel needs
more data to be effective. This will be investigated in future work.
6 Conclusions
We have proposed and compared two different methods to model the notion of semantic similarity between documents by implicitly defining a proximity matrix P
in a way that exploits high-order correlations between terms. The two methods
differ in the way the matrix is constructed. In one view, we propose a recursive definition of document similarity that depends on term similarity and vice versa. By
solving the resulting system of kernel equations, we effectively learn the parameters
of the model (P), and construct a kernel function for use in kernel based learning
methods. In the other approach, we model semantic relations as a diffusion process in a graph whose nodes are the documents and edges incorporate first-order
similarity. Diffusion efficiently takes into account all possible paths connecting two
nodes, and propagates the 'similarity' between two remote documents that share
'similar terms'. The kernel resulting from this model is known in the literature
as the 'diffusion kernel'. We have experimentally demonstrated the validity of the
approach on text data, using a novel approach to set the adjustable parameter λ in
the kernels by optimising their 'alignment' to the target on the training set. For
the dataset partitions substantial improvements in performance over the traditional
Bag of Words kernel matrix were obtained using the diffusion kernels and the line
search method. Despite this success, for large imbalanced datasets such as those encountered in text classification tasks the computational complexity of constructing
the diffusion kernels may become prohibitive. Faster kernel construction methods
are being investigated for this regime.
References

[1] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, UK, 2000.
[2] Nello Cristianini, John Shawe-Taylor, and Jaz Kandola. On kernel target alignment. In Proceedings of Neural Information Processing Systems, NIPS '01,
2002.
[3] Nello Cristianini, John Shawe-Taylor, and Huma Lodhi. Latent semantic kernels.
Journal of Intelligent Information Systems, 18(2):127-152, 2002.
[4] R. Ferrer and R.V. Sole. The small world of human language. Proceedings of the
Royal Society of London Series B - Biological Sciences, pages 2261-2265, 2001.
[5] Thomas Hofmann. Probabilistic latent semantic indexing. In Research and
Development in Information Retrieval, pages 50-57, 1999.
[6] T. Joachims. Text categorization with support vector machines. In Proceedings
of the European Conference on Machine Learning (ECML), 1998.
[7] R.I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete structures. In Proceedings of the International Conference on Machine Learning (ICML
2002), 2002.
[8] Todd A. Letsche and Michael W. Berry. Large-scale information retrieval with
latent semantic indexing. Information Sciences, 100(1-4):105-137, 1997.
[9] G. Siolas and F. d'Alché-Buc. Support vector machines based on a semantic
kernel for text categorization. In IEEE-IJCNN 2000, 2000.
shorter argument and much tighter than previous margin bounds.
There are two mathematical flavors of margin bound, dependent upon the weights
wᵢ of the vote and the features xᵢ that the vote is taken over:

1. Those ([12], [1]) with a bound on Σᵢ wᵢ² and Σᵢ xᵢ² ("l₂/l₂" bounds).
2. Those ([11], [6]) with a bound on Σᵢ |wᵢ| and maxᵢ |xᵢ| ("l₁/l∞" bounds).

The results here are of the "l₂/l₂" form. We improve on Shawe-Taylor et al. [12]
and Bartlett [1] by a log(m)2 sample complexity factor and much tighter constants
(1000 or unstated versus 9 or 18 as suggested by Section 2.2). In addition, the
bound here covers margin errors without weakening the error-free case.
Herbrich and Graepel [3] moved significantly towards the approach adopted in our
paper, but the methodology adopted meant that their result does not scale well to
high dimensional feature spaces as the bound here (and earlier results) do.
The layout of our paper is simple - we first show how to construct a stochastic
classifier with a good true error bound given a margin, and then construct a margin
bound.
2 Margin Implies PAC-Bayes Bound

2.1 Notation and theorem
Consider a feature space X which may be used to make predictions about the value
in an output space Y = {−1, +1}. We use the notation x = (x₁, ..., x_N) to denote
an N dimensional vector. Let the vote of a voting classifier be given by:

v_w(x) = w · x = Σᵢ wᵢ xᵢ.

The classifier is given by c(x) = sign(v_w(x)). The number of "margin violations"
or "margin errors" at γ is given by:

ê_γ(c) = Pr_{(x,y)~U(S)}( y v_w(x) < γ ),

where U(S) is the uniform distribution over the sample set S.

For convenience, we assume v_x(x) ≤ 1 and v_w(w) ≤ 1. Without this assumption,
our results scale as √(v_x(x)) √(v_w(w)) / γ rather than 1/γ.

Any margin bound applies to a vector w in N dimensional space. For every example,
we can decompose the example into a portion which is parallel to w and a portion
which is perpendicular to w:

x_⊥ = x − (v_w(x)/‖w‖²) w,   x_∥ = x − x_⊥.
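The decomposition can be written out directly (a small numpy sketch with arbitrary vectors):

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.standard_normal(10)
x = rng.standard_normal(10)

v = w @ x                         # the vote v_w(x)
x_perp = x - (v / (w @ w)) * w    # portion perpendicular to w
x_par = x - x_perp                # portion parallel to w

# the perpendicular part carries none of the vote, the parallel part all of it
ortho = w @ x_perp
par_vote = w @ x_par
```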
The argument is simple: we exhibit a "prior" over the weight space and a "posterior"
over the weight space with an analytical form for the KL-divergence. The stochastic
classifier defined by the posterior has a slightly larger empirical error and a small
true error bound.
For the next theorem, let F̄(x) = 1 − ∫_{−∞}^x (1/√(2π)) e^{−z²/2} dz be the tail
probability of a Gaussian with mean 0 and variance 1. Also let

e_Q(w, γ, ε) = Pr_{(x,y)~D, h~Q(w,γ,ε)}( h(x) ≠ y )

be the true error rate of a stochastic classifier with distribution Q(w, γ, ε), dependent
on a free parameter ε, the weights w of an averaging classifier, and a margin γ.
Theorem 2.1 There exists a function Q mapping a weight vector w, margin γ,
and value ε > 0 to a distribution Q(w, γ, ε) such that

Pr_{S~D^m}( ∀w, γ, ε:  KL( ê_γ(c) + ε ‖ e_Q(w,γ,ε) ) ≤ ( ln(1/F̄(F̄⁻¹(ε)/γ)) + ln((m+1)/δ) ) / m ) ≥ 1 − δ

where KL(q‖p) = q ln(q/p) + (1 − q) ln((1−q)/(1−p)) = the Kullback-Leibler divergence between
two coins of bias q < p.
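The KL-form of the bound is easy to work with numerically: KL(q‖p) is monotone increasing in p for p > q, so a bound of this form can be inverted by bisection to obtain an explicit upper bound on the true error. A sketch (the helper names are our own):

```python
import math

def kl_bernoulli(q, p):
    """KL(q||p) between two coins of bias q and p."""
    t1 = 0.0 if q == 0.0 else q * math.log(q / p)
    t2 = 0.0 if q == 1.0 else (1.0 - q) * math.log((1.0 - q) / (1.0 - p))
    return t1 + t2

def kl_inverse_upper(q, bound, tol=1e-10):
    """Largest p in [q, 1) with KL(q||p) <= bound, found by bisection."""
    lo, hi = q, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(q, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

# KL(0||p) = ln(1/(1-p)), so a right-hand side of ln 2 inverts to p = 1/2:
p_half = kl_inverse_upper(0.0, math.log(2.0))
```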
2.2 Discussion

Theorem 2.1 shows that when a margin exists it is always possible to find a "posterior" distribution (in the style of [5]) which introduces only a small amount of
additional training error rate. The true error bound for this stochastization of the
large-margin classifier is not dependent on the dimensionality except via the margin.

Since the Gaussian tail decreases exponentially, the value of F̄⁻¹(ε) is not very large
for any reasonable value of ε. In particular, at ε = F̄(3), we have ε ≤ 0.01. Thus, for
the purpose of understanding, we can replace F̄⁻¹(ε) with 3 and consider ε ≈ 0.

One useful approximation for F̄(x) with large x is:

F̄(x) ≈ (e^{−x²/2} / √(2π)) (1/x).
If there are no margin errors, ê_γ(c) = 0, then these approximations yield the approximate bound:

Pr_{S~D^m}( e_Q(w,γ,0) ≤ ( 9/(2γ²) + ln(3√(2π)/γ) + ln((m+1)/δ) ) / m ) ≥ 1 − δ.

In particular, for large m the true error is approximately bounded by 9/(2γ²m).

As an example, if γ = 0.25, the bound is less than 1 around m = 100 examples and
less than 0.5 around m = 200 examples.
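These approximations can be packaged into a quick calculator. The sketch below is an illustration of the approximate error-free bound only, using F̄⁻¹(ε) ≈ 3 and the identity KL(0‖p) = ln(1/(1−p)) discussed above; it is not a replacement for the exact theorem:

```python
import math

def approx_true_error_bound(gamma, m, delta):
    """Approximate Theorem 2.1 bound for the error-free case:
    B = (9/(2 gamma^2) + ln(3 sqrt(2 pi)/gamma) + ln((m+1)/delta)) / m,
    and KL(0||e_Q) = ln(1/(1 - e_Q)) <= B gives e_Q <= 1 - exp(-B)."""
    B = (9.0 / (2.0 * gamma ** 2)
         + math.log(3.0 * math.sqrt(2.0 * math.pi) / gamma)
         + math.log((m + 1) / delta)) / m
    return min(1.0, 1.0 - math.exp(-B))

# gamma = 0.25: the dominant term 9/(2 gamma^2 m) = 72/m pulls the bound
# below 1/2 once m reaches a couple of hundred examples.
b = approx_true_error_bound(0.25, 200, 0.05)
```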
Later we show (see Lemmas 4.1 and 4.2 or Theorem 4.3) that the generalisation
error of the original averaging classifier is only a factor 2 or 4 larger than that of the
stochastic classifiers considered here. Hence, the bounds of Theorems 2.1 and 3.1
also give bounds on the averaging classifier w.

This theorem is robust in the presence of noise and margin errors. Since the PAC-Bayes bound works for any "posterior" Q, we are free to choose Q dependent upon
the data in any way. In practice, it may be desirable to follow an approach similar
to [5] and allow the data to determine the "right" posterior Q. Using the data
rather than the margin γ allows the bound to take into account a fortuitous data
distribution and robust behavior in the presence of a "soft margin" (a margin with
errors). This is developed (along with a full proof) in the next section.
3 Main Full Result

We now present the main result. Here we state a bound which can take into account the distribution of the training set. Theorem 2.1 is a simple consequence
of this result. This theorem demonstrates the flexibility of the technique, since it
incorporates significantly more data-dependent information into the bound calculation. When applying the bound one would choose μ to make the inequality (1)
an equality. Hence, any choice of μ determines ε and hence the overall bound. We
then have the freedom to choose μ to optimise the bound.

As noted earlier, given a weight vector w, any particular feature vector x decomposes into a portion x_∥ which is parallel to w and a portion x_⊥ which is perpendicular to w. Hence, we can write x = x_∥e_∥ + x_⊥e_⊥, where e_∥ is a unit vector in
the direction of w and e_⊥ is a unit vector in the direction of the perpendicular
component; with slight abuse of notation, x_∥ and x_⊥ now denote the (signed)
lengths of these components. Note that we may have yx_∥ < 0, if x is misclassified by w.
Theorem 3.1 For all averaging classifiers c with normalized weights w and for all
ε > 0 stochastic error rates, if we choose μ > 0 such that

E_{x,y~S} F̄( y x_∥ μ / x_⊥ ) = ε     (1)

then there exists a posterior distribution Q(w, μ, ε) such that

Pr_{S~D^m}( ∀ε, w, μ:  KL( ε ‖ e_Q(w,μ,ε) ) ≤ ( ln(1/F̄(μ)) + ln((m+1)/δ) ) / m ) ≥ 1 − δ

where KL(q‖p) = q ln(q/p) + (1 − q) ln((1−q)/(1−p)) = the Kullback-Leibler divergence between
two coins of bias q < p.
Proof. The proof uses the PAC-Bayes bound, which states that for all prior distributions P,

Pr_{S~D^m}( ∀Q:  KL( ê_Q ‖ e_Q ) ≤ ( KL(Q‖P) + ln((m+1)/δ) ) / m ) ≥ 1 − δ.

We choose P = N(0, I), an isotropic Gaussian¹.
A choice of the "posterior" Q completes the proof. The Q we choose depends upon
the direction w, the margin γ, and the stochastic error ε. In particular, Q equals
P in every direction perpendicular to w, and a rectified Gaussian tail in the w
direction². The distribution of a rectified Gaussian tail is given by R(μ)(x) = 0 for
x < μ and R(μ)(x) = (1/(F̄(μ)√(2π))) e^{−x²/2} for x ≥ μ.

The chain rule for relative entropy (Theorem 2.5.3 of [2]) and the independence of
draws in each dimension implies that:

KL(Q‖P) = KL(Q_∥‖P_∥) + KL(Q_⊥‖P_⊥)
        = KL(R(μ)‖N(0,1)) + 0
        = ∫_μ^∞ ln(1/F̄(μ)) R(μ)(x) dx
        = ln(1/F̄(μ)).

¹Later, the fact that an isotropic Gaussian has the same representation in all rotations
of the coordinate system will be useful.
²Note that we use the invariance under rotation of N(0, I) here to line up one dimension
with w.
Thus, our choice of posterior implies the theorem if the empirical error rate satisfies
ê_Q(w,μ,ε) ≤ E_{x,y~S} F̄( y x_∥ μ / x_⊥ ) = ε, which we show next.

Given a point x, our choice of posterior implies that we can decompose the stochastic
weight vector as w̃ = w̃_∥ e_∥ + w̃_⊥ e_⊥ + w̄, where e_∥ is parallel to w, e_⊥ is parallel to
the perpendicular component of x, and w̄ is a residual vector perpendicular to both. By our definition of the stochastic
generation, w̃_∥ ~ R(μ) and w̃_⊥ ~ N(0, 1). To avoid an error, we must have:

y = sign(v_w̃(x)) = sign( w̃_∥ x_∥ + w̃_⊥ x_⊥ ).

Then, since w̃_∥ ≥ μ, no error occurs if:

y( μ x_∥ + w̃_⊥ x_⊥ ) > 0.

Since w̃_⊥ is drawn from N(0,1), the probability of this event is:

Pr( y( μ x_∥ + w̃_⊥ x_⊥ ) > 0 ) ≥ 1 − F̄( y x_∥ μ / x_⊥ ).

And so, the empirical error rate of the stochastic classifier is bounded by:

ê_Q ≤ E_{x,y~S} F̄( y x_∥ μ / x_⊥ ) = ε,

as required. □
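The rectified Gaussian tail R(μ) used above is simple to sample, and its KL-divergence from N(0,1) has the closed form just derived, since ln(R(μ)(x)/p(x)) = ln(1/F̄(μ)) is constant on the support. A sketch (the rejection sampler and seed are our own choices):

```python
import math
import random

def bar_F(x):
    """Gaussian tail probability F-bar(x) = P(Z >= x) for Z ~ N(0,1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sample_R(mu, rng):
    """Draw from R(mu): N(0,1) conditioned on being >= mu (rejection sampling;
    fine for moderate mu, inefficient for large mu)."""
    while True:
        z = rng.gauss(0.0, 1.0)
        if z >= mu:
            return z

mu = 1.0
rng = random.Random(0)
samples = [sample_R(mu, rng) for _ in range(1000)]

kl = math.log(1.0 / bar_F(mu))   # KL(R(mu) || N(0,1)), exactly
```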
3.1 Proof of Theorem 2.1

Proof. (sketch) The theorem follows from a relaxation of Theorem 3.1. In particular, we treat every example with a margin less than γ as an error and use the
bounds x_⊥ ≤ 1 and x_∥ ≥ γ. □
3.2 Further results

Several aspects of Theorem 3.1 appear arbitrary, but they are not. In particular,
the choice of "prior" is not that arbitrary, as the following lemma indicates.

Lemma 3.2 The set of distributions P satisfying ∃f₁ : P(x) = f₁(‖x‖²) (rotational invariance) and P(x) = Π_{i=1}^N pᵢ(xᵢ) (independence of each dimension) is N(0, λI) for
λ > 0.
Proof. Rotational invariance together with the dimension independence imply that
for all i, j, x: pᵢ(x) = pⱼ(x), which implies that:

P(x) = Π_{i=1}^N p(xᵢ)

for some function p(·). Applying rotational invariance, we have that:

P(x) = f₁(‖x‖²) = Π_{i=1}^N p(xᵢ).

This implies:

ln f₁( Σᵢ xᵢ² ) = Σᵢ ln p(xᵢ).

Taking the derivative of this equation with respect to xᵢ gives

f₁'(‖x‖²) 2xᵢ / f₁(‖x‖²) = p'(xᵢ) / p(xᵢ).

Since this holds for all values of x we must have

f₁'(t) = A f₁(t)

for some constant A, or f₁(t) = C exp(At), for some constant C. Hence, P(x) =
C exp(A‖x‖²), as required. □
The constant λ in the previous lemma is a free parameter. However, the results do
not depend upon its precise value, so we choose λ = 1 for simplicity. Some freedom
in the choice of the "posterior" Q does exist, and the results are dependent on this
choice. A rectified Gaussian appears simplest.
4 Margin Implies Margin Bound

There are two methods for constructing a margin bound for the original averaging
classifier. The first method is simplest, while the second is sometimes significantly
tighter.

4.1 Simple Margin Bound
First we note a trivial bound arising from a folk theorem and the relationship to
our result.

Lemma 4.1 (Simple Averaging bound) For any stochastic classifier with distribution Q and true error rate e_Q, the averaging classifier

c_Q(x) = sign( ∫ h(x) dQ(h) )

has true error rate e(c_Q) ≤ 2e_Q.

Proof. For any example (x,y), every time the averaging classifier errs, the probability of the stochastic classifier erring must be at least 1/2. □
This result is interesting and of practical use when the empirical error rate of the
original averaging classifier is low. Furthermore, we can prove that c_Q(x) is the
original averaging classifier.

Lemma 4.2 For Q = Q(w,γ,ε) derived according to Theorems 2.1 and 3.1 and
c_Q(x) as in Lemma 4.1:

c_Q(x) = sign(v_w(x)).

Proof. For every x this equation holds because of two simple facts:

1. For any w̃ that classifies an input x differently from the averaging classifier,
there is a unique equiprobable paired weight vector that agrees with the
averaging classifier.

2. If v_w(x) ≠ 0, then there exists a nonzero measure of classifier pairs which
always agree with the averaging classifier.

Condition (1) is met by reversing the sign of w̃_⊥ and noting that either the original random vector or the reversed random vector must agree with the averaging
classifier.

Condition (2) is met by the randomly drawn classifier w̃ = λw, for any λ > 0, and nearby classifiers. Since the example is not on the hyperplane, there exists some small
sphere of paired classifiers (in the sense of condition (1)). This sphere has a positive
measure. □
The simple averaging bound is elegant, but it breaks down when the empirical error
is large, because:

e(c) ≤ 2e_Q = 2(ê_Q + Δ_m) ≈ 2ê_γ(c) + 2Δ_m,

where ê_Q is the empirical error rate of a stochastic classifier and Δ_m goes to zero
as m → ∞. Next, we construct a bound of the form e(c_Q) ≤ ẽ_γ(c) + Δ'_m where
Δ'_m > Δ_m but ẽ_γ(c) ≤ 2ê_γ(c).
4.2
A (Sometimes) Tighter Bound
By altering our choice of μ and our notion of "error" we can construct a bound
which holds without randomization. In particular, we have the following theorem:

Theorem 4.3 For all averaging classifiers c with normalized weights w, for all ε > 0
"extra" error rates and γ > 0 margins:

    Pr_{S∼D^m} ( ∀ε, w, γ:  KL( ê_γ(c) + ε ‖ e(c) − ε ) ≤ ( (2/γ²) F̄⁻¹(ε/2)² + 2 ln((m+1)/δ) ) / m )  ≥  1 − δ

where KL(q‖p) = q ln(q/p) + (1 − q) ln((1 − q)/(1 − p)) is the Kullback-Leibler divergence between
two coins of bias q < p.
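A bound stated in this KL form is used in practice by numerically inverting the binary KL divergence: given the empirical margin error rate and the right-hand-side complexity term, bisection recovers the largest true error rate consistent with the bound. The helper below is a generic sketch (the function names and sample numbers are illustrative, not from the text):

```python
import math

def kl_bernoulli(q, p):
    """KL(q||p) between two coins of bias q and p (natural log)."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(q, p) + term(1 - q, 1 - p)

def kl_inverse(q, bound, tol=1e-10):
    """Largest p >= q with KL(q||p) <= bound, found by bisection."""
    lo, hi = q, 1.0 - tol
    if kl_bernoulli(q, hi) <= bound:
        return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(q, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

# e.g. empirical margin error 0.1 and a complexity term of 0.05:
print(kl_inverse(0.1, 0.05))
```

The returned value is the true-error upper bound implied by the theorem for the given right-hand side; it tightens monotonically as the complexity term shrinks.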
The proof of this statement is strongly related to the proof given in [11] but noticeably simpler. It is also very related to the proof of theorem 2.1.
Proof. (sketch) Instead of choosing wli so that the empirical error rate is increased
by E, we instead choose wli so that the number of margin violations at margin ~ is
increased by at most E. This can be done by drawing from a distribution such as
1
R
(E))
A
WII'"
(2F-
"(
Applying the PAC-Bayes bound to this we reach a bound on the number of margin
violations at ~ for the true distribution. In particular, we have:
s!:!'-
(KL (",(e) +<IleQ,;) oS In F(~ + In
"'t') '" 1_;
The application is tricky because the bound does not hold uniformly for all γ.³
Instead we can discretize γ at scale 1/m and apply a union bound to get δ → δ/(m+1).
For any fixed example (x, y), with probability 1 − δ, we know that with probability
at least 1 − e_{Q,γ/2}, the example has a margin of at least γ/2. Since the example has
³Thanks to David McAllester for pointing this out.
a margin of at least γ/2 and our randomization doesn't change the margin by more
than γ/2 with probability 1 − ε, the averaging classifier almost always predicts in the
same way as the stochastic classifier, implying the theorem. ∎
4.3
Discussion & Open Problems
The bound we have obtained here is considerably tighter than previous bounds for
averaging classifiers; in fact it is tight enough to consider applying to real learning
problems and using the results in decision making.
Can this argument be improved? The simple averaging bound (lemma 4.1) and
the margin bound (theorem 4.3) each have a regime in which they dominate. We
expect that there exists some natural theorem which does well in both regimes
simultaneously.
In order to verify that the margin bound is as tight as possible, it would also be
instructive to study lower bounds.
4.4
Acknowledgements
Many thanks to David McAllester for critical reading and comments.
References
[1] P. L. Bartlett, "The sample complexity of pattern classification with neural
networks: the size of the weights is more important than the size of the network,"
IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525-536, 1998.
[2] Thomas Cover and Joy Thomas, "Elements of Information Theory." Wiley, New
York, 1991.
[3] Ralf Herbrich and Thore Graepel, A PAC-Bayesian Margin Bound for Linear
Classifiers: Why SVMs work. In Advances in Neural fuformation Processing
Systems 13, pages 224-230. 2001.
[4] T. Jaakkola, M. Meila, and T. Jebara, "Maximum Entropy Discrimination,"
NIPS 1999.
[5] John Langford and Rich Caruana, "(Not) Bounding the True Error," NIPS 2001.
[6] John Langford, Matthias Seeger, and Nimrod Megiddo, "An Improved Predictive Accuracy Bound for Averaging Classifiers" ICML2001.
[7] John Langford and Matthias Seeger, "Bounds for Averaging Classifiers." CMU
tech report, CMU-CS-01-102, 2001.
[8] David McAllester, "PAC-Bayesian Model Averaging" COLT 1999.
[9] Yoav Freund and Robert E. Schapire, "A Decision Theoretic Generalization of
On-line Learning and an Application to Boosting" Eurocolt 1995.
[10] Matthias Seeger, "PAC-Bayesian Generalization Error Bounds for Gaussian
Processes", Tech Report, Division of fuformatics report EDI-INF-RR-0094.
http://www.dai.ed.ac.uk/homes/seeger/papers/gpmcall-tr.ps.gz
[11] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee, "Boosting
the Margin: A new explanation for the effectiveness of voting methods" The
Annals of Statistics, 26(5):1651-1686, 1998.
[12] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on
Information Theory, 44(5):1926-1940, 1998.
Adaptive Nonlinear System Identification
with Echo State Networks
Herbert Jaeger
International University Bremen
D-28759 Bremen, Germany
h.jaeger@iu-bremen.de
Abstract
Echo state networks (ESN) are a novel approach to recurrent neural network training. An ESN consists of a large, fixed, recurrent
"reservoir" network, from which the desired output is obtained by
training suitable output connection weights. Determination of optimal output weights becomes a linear, uniquely solvable task of
MSE minimization. This article reviews the basic ideas and describes an online adaptation scheme based on the RLS algorithm
known from adaptive linear systems. As an example, a 10-th order NARMA system is adaptively identified. The known benefits
of the RLS algorithms carry over from linear systems to nonlinear
ones; specifically, the convergence rate and misadjustment can be
determined at design time.
1
Introduction
It is fair to say that difficulties with existing algorithms have so far precluded supervised training techniques for recurrent neural networks (RNNs) from widespread
use. Echo state networks (ESNs) provide a novel and easier to manage approach
to supervised training of RNNs. A large (order of 100s of units) RNN is used as a
"reservoir" of dynamics which can be excited by suitably presented input and/or
fed-back output. The connection weights of this reservoir network are not changed
by training. In order to compute a desired output dynamics, only the weights of
connections from the reservoir to the output units are calculated. This boils down
to a linear regression. The theory of ESNs, references and many examples can be
found in [5] [6]. A tutorial is [7]. A similar idea has recently been independently
investigated in a more biologically oriented setting under the name of "liquid state
networks" [8] [9].
In this article I describe how ESNs can be conjoined with the "recursive least
squares" (RLS) algorithm, a method for fast online adaptation known from linear
systems. The resulting RLS-ESN is capable of tracking a 10-th order nonlinear
system with high quality in convergence speed and residual error. Furthermore,
the approach yields a priori estimates of tracking performance parameters and thus
allows one to design nonlinear trackers according to specifications.¹
¹All algorithms and calculations described in this article are con-
Article organization. Section 2 recalls the basic ideas and definitions of ESNs and
introduces an augmentation of the basic technique. Section 3 demonstrates ESN
offline learning on the 10th order system identification task. Section 4 describes the
principles of using the RLS algorithm with ESN networks and presents a simulation
study. Section 5 wraps up.
2
Basic ideas of echo state networks
For the sake of a simple notation, in this article I address only single-input, single-output systems (general treatment in [5]). We consider a discrete-time "reservoir"
RNN with N internal network units, a single extra input unit, and a single extra
output unit. The input at time n ≥ 1 is u(n), activations of internal units are x(n) =
(x_1(n), ..., x_N(n)), and activation of the output unit is y(n). Internal connection
weights are collected in an N × N matrix W = (w_ij), weights of connections going
from the input unit into the network in an N-element (column) weight vector w^in =
(w_i^in), and the N + 1 (input-and-network)-to-output connection weights in an (N+1)-element (row) vector w^out = (w_i^out). The output weights w^out will be learned; the
internal weights W and input weights w^in are fixed before learning, typically in a
sparse random connectivity pattern.
Figure 1: Basic setup of ESN. Solid arrows: fixed weights; dashed arrows: trainable
weights.
The activation of internal units and the output unit is updated according to

    x(n + 1) = f( W x(n) + w^in u(n + 1) + v(n + 1) )                  (1)
    y(n + 1) = f^out( w^out (u(n + 1), x(n + 1)) )                     (2)
where f stands for an element-wise application of the unit nonlinearity, for which
we here use tanh; v(n + 1) is an optional noise vector; (u(n + 1), x(n + 1)) is a
vector concatenated from u(n + 1) and x(n + 1); and f^out is the output unit's nonlinearity (tanh will be used here, too). Training data is a stationary I/O signal
(u_teach(n), y_teach(n)). When the network is updated according to (1), then under
certain conditions the network state becomes asymptotically independent of initial conditions. More precisely, if the network is started from two arbitrary states
x(0), x'(0) and is run with the same input sequence in both cases, the resulting state
sequences x(n), x'(n) converge to each other. If this condition holds, the reservoir
network state will asymptotically depend only on the input history, and the network
tained in a tutorial Mathematica notebook which
http://www.ais.fraunhofer.de/INDY /ESNresources.html.
can
be
fetched
from
is said to be an echo state network (ESN). A sufficient condition for the echo state
property is contractivity of W. In practice it was found that a weaker condition
suffices, namely, to ensure that the spectral radius |λ_max| of W is less than unity.
[5] gives a detailed account.
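The state-forgetting behaviour described here can be observed directly. The sketch below is a toy-scale demo (all sizes and parameters are choices made for the illustration; rescaling the largest absolute row sum of W, which upper-bounds the spectral radius, is used as a simple sufficient condition): the same input sequence is run from two different initial states, and the states merge.

```python
import math
import random

random.seed(0)

N = 30
# Sparse random reservoir; rescaling the largest absolute row sum to 0.5
# bounds the spectral radius by 0.5 and makes the update a contraction.
W = [[random.uniform(-1, 1) if random.random() < 0.2 else 0.0
      for _ in range(N)] for _ in range(N)]
s = max(sum(abs(a) for a in row) for row in W)
W = [[0.5 * a / s for a in row] for row in W]
w_in = [random.uniform(-0.1, 0.1) for _ in range(N)]

def step(x, u):
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + wi * u)
            for row, wi in zip(W, w_in)]

# Run the same input sequence from two different initial states.
xa = [random.uniform(-1, 1) for _ in range(N)]
xb = [random.uniform(-1, 1) for _ in range(N)]
for _ in range(100):
    u = random.uniform(0, 0.5)
    xa, xb = step(xa, u), step(xb, u)
diff = max(abs(a - b) for a, b in zip(xa, xb))
print(diff)  # effectively zero: the state has forgotten its initial condition
```

Since tanh is 1-Lipschitz, the state difference shrinks by at least a factor of 2 per step here, so after 100 steps the two trajectories are numerically identical.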
Consider the task of computing the output weights such that the teacher output
is approximated by the network. In the ESN approach, this task is spelled out
concretely as follows: compute w^out such that the training error

    ε_train = (1/T) Σ_{n=1}^{T} ( (f^out)^{-1} y_teach(n) − w^out (u_teach(n), x(n)) )²    (3)

is minimized in the mean square sense. Note that the effect of the output nonlinearity is undone by (f^out)^{-1} in this error definition. We dub (f^out)^{-1} y_teach(n)
the teacher pre-signal and w^out (u_teach(n), x(n)) the network's pre-output. The computation of w^out is a linear regression. Here is a sketch of an offline
algorithm for the entire learning procedure:
1. Fix an RNN with a single input and a single output unit, scaling the weight
matrix W such that |λ_max| < 1 obtains.
2. Run this RNN by driving it with the teaching input signal. Dismiss
data from initial transient and collect remaining input+network states
(u_teach(n), x_teach(n)) row-wise into a matrix M. Simultaneously, collect
the remaining training pre-signals (f^out)^{-1} y_teach(n) into a column vector r.

3. Compute the pseudo-inverse M^{-1}, and put w^out = (M^{-1} r)^T (where T
denotes transpose).
4. Write w^out into the output connections; the ESN is now trained.
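The four steps can be sketched end to end in plain Python. This is a toy-scale illustration, not the paper's setup: the reservoir size, the teacher task y(n) = u(n-1), an identity output nonlinearity, the input-weight range, and a ridge-regularized normal-equation solve in place of an explicit pseudo-inverse are all simplifications made here.

```python
import math
import random

random.seed(1)

N = 20                                   # reservoir size (toy scale)
T_wash, T_train, T_test = 50, 500, 200

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def solve(A, b):
    """Gaussian elimination with partial pivoting for A w = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

# Step 1: fixed sparse reservoir. Scaling the largest absolute row sum of W
# to 0.8 bounds the spectral radius by 0.8 (a sufficient echo-state condition).
W = [[random.uniform(-1, 1) if random.random() < 0.2 else 0.0
      for _ in range(N)] for _ in range(N)]
s = max(sum(abs(a) for a in row) for row in W)
W = [[0.8 * a / s for a in row] for row in W]
w_in = [random.uniform(-0.5, 0.5) for _ in range(N)]

# Step 2: drive the reservoir; toy teacher task y(n) = u(n-1), identity f_out.
u = [random.uniform(0, 0.5) for _ in range(T_wash + T_train + T_test + 1)]
x = [0.0] * N
rows, targets, test_rows, test_targets = [], [], [], []
for n in range(1, len(u)):
    x = [math.tanh(p + wi * u[n]) for p, wi in zip(matvec(W, x), w_in)]
    if T_wash <= n < T_wash + T_train:
        rows.append([u[n]] + x)          # one row of the collection matrix M
        targets.append(u[n - 1])         # teacher pre-signal
    elif n >= T_wash + T_train:
        test_rows.append([u[n]] + x)
        test_targets.append(u[n - 1])

# Step 3: w_out = M^+ r, computed here via ridge-regularized normal equations.
dim = N + 1
A = [[sum(r[i] * r[j] for r in rows) + (1e-4 if i == j else 0.0)
      for j in range(dim)] for i in range(dim)]
b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(dim)]
w_out = solve(A, b)

# Step 4: run the trained readout on fresh input and measure the NMSE.
preds = [sum(wi * ri for wi, ri in zip(w_out, r)) for r in test_rows]
mean_t = sum(test_targets) / len(test_targets)
nmse = (sum((p - t) ** 2 for p, t in zip(preds, test_targets)) /
        sum((t - mean_t) ** 2 for t in test_targets))
print("test NMSE:", nmse)
```

The readout regression is the only trained part; everything in W and w^in stays fixed, which is what makes step 3 a plain linear least-squares problem.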
The modeling power of an ESN grows with network size. A cheaper way to increase
the power is to use additional nonlinear transformations of the network state x(n) for
computing the network output in (2). We use here a squared version of the network
state. Let w^out_squares denote a length 2N + 2 output weight vector and x_squares(n)
the length 2N + 2 (column) vector (u(n), x_1(n), ..., x_N(n), u²(n), x_1²(n), ..., x_N²(n)).
Keep the network update (1) unchanged, but compute outputs with the following
variant of (2):

    y(n + 1) = f^out( w^out_squares x_squares(n + 1) )                 (4)
The "reservoir" and the input is now tapped by linear and quadratic connections.
The learning procedure remains linear and now goes like this:
1. (unchanged)
2. Drive the ESN with the training input. Dismiss initial transient and collect
remaining augmented states x_squares(n) row-wise into M. Simultaneously,
collect the training pre-signals (f^out)^{-1} y_teach(n) into a column vector r.

3. Compute the pseudo-inverse M^{-1}, and put w^out_squares = (M^{-1} r)^T.
4. The ESN is now ready for exploitation, using output formula (4).
3
Identifying a 10th order system: offline case
In this section the workings of the augmented algorithm will be demonstrated with
a nonlinear system identification task. The system was introduced in a survey-and-unification paper [1]. It is a 10th-order NARMA system:

    d(n + 1) = 0.3 d(n) + 0.05 d(n) [ Σ_{i=0}^{9} d(n − i) ] + 1.5 u(n − 9) u(n) + 0.1    (5)
Network setup. An N = 100 ESN was prepared by fixing a random, sparse connection weight matrix W (connectivity 5 %, non-zero weights sampled from uniform
distribution in [-1,1], the resulting raw matrix was re-scaled to a spectral radius
of 0.8, thus ensuring the echo state property). An input unit was attached with a
random weight vector w^in sampled from a uniform distribution over [-0.1, 0.1].
Training data and training. An I/O training sequence was prepared by driving the
system (5) with an i.i.d. input sequence sampled from the uniform distribution over
[0, 0.5], as in [1]. The network was run according to (1) with the training input for
1200 time steps with uniform noise v(n) of size 0.0001. Data from the first 200
steps were discarded. The remaining 1000 network states were entered into the
augmented training algorithm, and a 202-length augmented output weight vector
w^out_squares was calculated.
Testing. The learnt output vector was installed and the network was run from a
zero starting state with newly created testing input for 2200 steps, of which the
first 200 were discarded. From the remaining 2000 steps, the test error
NMSE_test = E[(y(n) − d(n))²] / σ²(d) was estimated. A value of NMSE_test ≈ 0.032 was found.
Comments. (1) The noise term v(n) functions as a regularizer, slightly compromising the training error but improving the test error. (2) Generally, the larger
an ESN, the more training data is required and the more precise the learning.
Set up exactly like in the described 100-unit example, an augmented 20-unit ESN
trained on 500 data points gave NMSE_test ≈ 0.31, a 50-unit ESN trained on 1000
points gave NMSE_test ≈ 0.084, and a 400-unit ESN trained on 4000 points gave
NMSE_test ≈ 0.0098.
Comparison. The best NMSE training [!] error obtained in [1] on a length 200
training sequence was NMSE_train ≈ 0.241.² However, the level of precision reported
in [1] and many other published papers about RNN training appears to be based on
suboptimal training schemes. After submission of this paper I went into a friendly
modeling competition with Danil Prokhorov, who expertly applied EKF-BPPT techniques [3] to the same tasks. His results improve on [1] results by an order of
magnitude and reach a slightly better precision than the results reported here.
4
Online adaptation of ESN network
Because the determination of optimal (augmented) output weights is a linear task,
standard recursive algorithms for MSE minimization known from adaptive linear
signal processing can be applied to online ESN estimation. I assume that the reader
is familiar with the basic idea of FIR tap-weight (Wiener) filters: i.e., that N input
signals x_1(n), ..., x_N(n) are transformed into an output signal y(n) by an inner
product with a tap-weight vector (w_1, ..., w_N): y(n) = w_1 x_1(n) + ... + w_N x_N(n).
In the ESN context, the input signals are the 2N + 2 components of the augmented
input+network state vector, the tap-weight vector is the augmented output weight
vector, and the output signal is the network pre-output (f^out)^{-1} y(n).
²The authors miscalculated their NMSE because they used a formula for zero-mean signals. I re-calculated the value NMSE_train ≈ 0.241 from their reported best (miscalculated)
NMSE of 0.015. The larger value agrees with the plots supplied in that paper.
4.1
A refresher on adaptive linear system identification
For a recursive online estimation of tap-weight vectors, "recursive least squares"
(RLS) algorithms are widely used in linear signal processing when fast convergence is of prime importance. A good introduction to RLS is given in [2], whose
notation I follow. An online algorithm in the augmented ESN setting should do
the following: given an open-ended, typically non-stationary training I/O sequence
(u_teach(n), y_teach(n)), at each time n ≥ 1 determine an augmented output weight
vector w^out_squares(n) which yields a good model of the current teacher system.
Formally, an RLS algorithm for ESN output weight update minimizes the exponentially discounted square "pre-error"

    Σ_{k=1}^{n} λ^{n−k} ( (f^out)^{-1} y_teach(k) − (f^out)^{-1} y_[n](k) )²       (6)

where λ < 1 is the forgetting factor and y_[n](k) is the model output that would
be obtained at time k when a network with the current output weights w^out_squares(n)
would be employed at all times k = 1, ..., n.
There are many variants of RLS algorithms minimizing (6), differing in their tradeoffs between computational cost, simplicity, and numerical stability. I use a "vanilla"
version, which is detailed out in Table 12.1 in [2] and in the web tutorial package
accompanying this paper.
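For readers who want the recursion spelled out, the following sketch applies exponentially-weighted RLS to a small linear tap-weight identification problem. The 3-tap system, the noise-free teacher and all constants are assumptions made for this demo; the update equations themselves are the standard RLS ones.

```python
import random

random.seed(2)

# Online identification of a hypothetical 3-tap FIR system with RLS.
w_true = [0.3, -0.5, 0.2]
taps = 3
lam = 0.99                               # forgetting factor

w = [0.0] * taps
# inverse correlation matrix, initialized to (1/delta) I with delta = 0.01
P = [[100.0 if i == j else 0.0 for j in range(taps)] for i in range(taps)]

u = [random.uniform(-1, 1) for _ in range(600)]
for n in range(2, len(u)):
    x = [u[n], u[n - 1], u[n - 2]]
    d = sum(a * b for a, b in zip(w_true, x))        # noise-free teacher output
    Px = [sum(P[i][j] * x[j] for j in range(taps)) for i in range(taps)]
    k = [v / (lam + sum(a * b for a, b in zip(x, Px))) for v in Px]  # gain
    e = d - sum(a * b for a, b in zip(w, x))         # a priori error
    w = [wi + ki * e for wi, ki in zip(w, k)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(taps)]
         for i in range(taps)]

print(w)  # converges to w_true in this noise-free case
```

In the ESN setting, x would be the 2N + 2 augmented state vector and w the augmented output weight vector; nothing else in the recursion changes.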
Two parameters characterise the tracking performance of an RLS algorithm: the
misadjustment M and the convergence time constant τ. The misadjustment gives
the ratio between the excess MSE (or excess NMSE) incurred by the fluctuations of
the adaptation process, and the optimal steady-state MSE that would be obtained
in the limit of offline-training on infinite stationary training data. For instance, a
misadjustment of M = 0.3 means that the tracking error of the adaptive algorithm
in a steady-state situation exceeds the theoretically achievable optimum (with same
tap weight vector length) by 30 %. The time constant τ associated with an RLS
algorithm determines the exponent of the MSE convergence, e^(−n/τ). For example,
τ = 200 would imply an excess MSE reduction by 1/e every 200 steps. Misadjustment and convergence exponent are related to the forgetting factor and the
tap-vector length N through

    M = N (1 − λ) / 2    and    τ ≈ 1 / (1 − λ)                        (7)

4.2
Case study: RLS-ESN for our 10th-order system
Eqns. (7) can be used to predict/design the tracking characteristics of an RLS-powered ESN. I will demonstrate this with the 10th-order system (5). I re-use
the same augmented 100-unit ESN, but now determine its 2N + 2 output weight
vector online with RLS. Setting λ = 0.995, and considering N = 202, Eqns. (7)
yield a misadjustment of M = 0.5 and a time constant τ ≈ 200. Since the asymptotically optimal NMSE is approximately the NMSE of the offline-trained network,
namely, NMSE ≈ 0.032, the misadjustment M = 0.5 lets us expect an NMSE of
0.032 × 150% ≈ 0.048 for the online adaptation after convergence. The time constant τ ≈ 200 makes us expect NMSE convergence to the expected asymptotic
NMSE by a factor of 1/e every 200 steps.
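These design numbers follow from the relations referred to as (7); the snippet below simply evaluates them (the precise form M = N(1 − λ)/2 is the standard RLS rule of thumb and is assumed here).

```python
# Design-time tracking estimates for the RLS-ESN (standard RLS rules assumed)
lam = 0.995                  # forgetting factor
n_taps = 202                 # augmented tap-vector length 2N + 2
M = n_taps * (1 - lam) / 2   # misadjustment -> about 0.5
tau = 1 / (1 - lam)          # convergence time constant -> about 200
print(M, tau)
```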
Training data. Experiments with the system (5) revealed that the system sometimes explodes when driven with i.i.d. input from [0, 0.5]. To bound outputs, I
wrapped the r.h.s. of (5) with a tanh. Furthermore, I replaced the original constants 0.3, 0.05, 1.5, 0.1 by free parameters α, β, γ, δ, to obtain

    d(n + 1) = tanh( α d(n) + β d(n) [ Σ_{i=0}^{9} d(n − i) ] + γ u(n − 9) u(n) + δ )    (8)
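The tanh-bounded system (8) can be transcribed directly. In the sketch below the variable names a, b, g, c stand in for the free parameters (defaulting to the original constants of (5)); the tanh wrapper guarantees |d(n)| < 1 regardless of the input.

```python
import math
import random

random.seed(3)

def bounded_narma10(u, a=0.3, b=0.05, g=1.5, c=0.1):
    """Generate the tanh-wrapped 10th-order system of eq. (8)."""
    d = [0.0] * len(u)
    for n in range(9, len(u) - 1):
        s = sum(d[n - i] for i in range(10))
        d[n + 1] = math.tanh(a * d[n] + b * d[n] * s + g * u[n - 9] * u[n] + c)
    return d

u = [random.uniform(0, 0.5) for _ in range(2000)]
d = bounded_narma10(u)
print(max(abs(v) for v in d))  # < 1: the tanh keeps outputs bounded
```

Re-drawing a, b, g, c every 2000 steps, as described next, turns this generator into the piecewise-stationary teacher used for the tracking experiment.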
This system was run for 10000 steps with an i.i.d. teacher input from [0,0.5]. Every
2000 steps, α, β, γ, δ were assigned new random values taken from a ± 50 % interval
around the respective original constants. Fig. 2A shows the resulting teacher output
sequence, which clearly shows transitions between different "episodes" every 2000
steps.
Running the RLS-ESN algorithm. The ESN was started from zero state and
with a zero augmented output weight vector. It was driven by the teacher input, and a noise of size 0.0001 was inserted into the state update, as in the
offline training. The RLS algorithm (with forgetting factor 0.995) was initialized according to the prescriptions given in [2] and then run together with the
network updates, to compute from the augmented input+network states x_squares(n) =
(u(n), x_1(n), ..., x_N(n), u²(n), x_1²(n), ..., x_N²(n)) a sequence of augmented output
weight vectors w^out_squares(n). These output weight vectors were used to calculate a
network output y(n) = tanh(w^out_squares(n) x_squares(n)).
Results. From the resulting length-10000 sequences of desired outputs d(n) and network productions y(n), NMSEs were numerically estimated by averaging within
subsequent length-100 blocks. Fig. 2B gives a logarithmic plot.
In the last three episodes, the exponential NMSE convergence after each episode
onset disruption is clearly recognizable. Also the convergence speed matches the
predicted time constant, as revealed by the τ = 200 slope line inserted in Fig. 2B.
The dotted horizontal line in Fig. 2B marks the NMSE of the offline-trained ESN
described in the previous section. Surprisingly, after convergence, the online-NMSE
is lower than the offline NMSE. This can be explained through the IIR (autoregressive) nature of the system (5) resp. (8) , which incurs long-term correlations in the
signal d(n), or in other words, a nonstationarity of the signal on the timescale of the
correlation lengths, even with fixed parameters α, β, γ, δ. This medium-term nonstationarity compromises the performance of the offline algorithm, but the online
adaptation can to a certain degree follow this nonstationarity.
Fig. 2C is a logarithmic plot of the development of the mean absolute output weight
size. It is apparent that after starting from zero, there is an initial exponential
growth of absolute values of the output weights, until a stabilization at a size of
about 1000, whereafter the NMSE develops a regular pattern (Fig. 2B).
Finally, Fig. 2D shows an overlay of d(n) (solid) with y(n) (dotted) of the last 100
steps in the experiment, visually demonstrating the precision after convergence.
A note on noise and stability. Standard offline training of ESNs yields output
weights whose absolute size depends on the noise inserted into the network during training: the larger the noise, the smaller the mean output weights (extensive
discussion in [5]). In online training, a similar inverse correlation between output
weight size (after settling on plateau) and noise size can be observed. When the
online learning experiment was done otherwise identically but without noise insertion, weights grew so large that the RLS algorithm entered a region of numerical
instability. Thus, the noise term is crucial here for numerical stability, a condition
familiar from EKF-based RNN training schemes [3], which are computationally
closely related to RLS.
[Figure 2 panels (plots omitted): A. teacher output signal; B. log10 of NMSE; C. log10 of average absolute output weights; D. teacher vs. network output over the last 100 steps.]
Figure 2: A. Teacher output. B. NMSE with predicted baseline and slope line. C.
Development of weights. D. Last 100 steps: desired (solid) and network-predicted
(dashed) signal. For details see text.
5
Discussion
Several of the well-known error-gradient-based RNN training algorithms can be used
for online weight adaptation. The update costs per time step in the most efficient of
those algorithms (overview in [1]) are O(N²), where N is network size. Typically,
standard approaches train small networks (order of N = 20), whereas ESN typically
relies on large networks for precision (order of N = 100). Thus, the RLS-based ESN
online learning algorithm is typically more expensive than standard techniques.
However, this drawback might be compensated by the following properties of RLSESN:
• Simplicity of design and implementation; robust behavior with little need
for learning parameter hand-tuning.
• Custom-design of RLS-ESNs with prescribed tracking parameters, transferring well-understood linear systems methods to nonlinear systems.
• Systems with long-lasting short-term memory can be learnt. Exploitable
ESN memory spans grow with network size (analysis in [6]). Consider the
30th-order system d(n + 1) = tanh( 0.2 d(n) + 0.04 d(n) [ Σ_{i=0}^{29} d(n − i) ] +
1.5 u(n − 29) u(n) + 0.001 ). It was learnt by a 400-unit augmented adaptive
ESN with a test NMSE of 0.0081. The 51st (!) order system y(n + 1) =
u(n − 10) u(n − 50) was learnt offline by a 400-unit augmented ESN with
an NMSE of 0.213.
All in all, on the kind of tasks considered above, adaptive (augmented) ESNs
reach a similar level of precision as today's most refined gradient-based techniques.
A given level of precision is attained in ESN vs. gradient-based techniques with a
similar number of trainable weights (D. Prokhorov, private communication). Because gradient-based techniques train every connection weight in the RNN, whereas
³See the Mathematica notebook for details.
ESNs train only the output weights, the numbers of units of similarly performing
standard RNNs vs. ESNs relate as N to N². Thus, RNNs are more compact than
equivalent ESNs. However, when working with ESNs, for each new trained output signal one can re-use the same "reservoir", adding only N new connections
and weights. This has for instance been exploited for robots in the AIS institute
by simultaneously training multiple feature detectors from a single "reservoir" [4].
In this circumstance, with a growing number of simultaneously required outputs,
the requisite net model sizes for ESNs vs. traditional RNNs become asymptotically
equal. The size disadvantage of ESNs is further balanced by much faster offline
training, greater simplicity, and the general possibility to exploit linear-systems
expertise for nonlinear adaptive modeling.
Acknowledgments The results described in this paper were obtained while I
worked at the Fraunhofer AIS Institute. I am greatly indebted to Thomas
Christaller for unfaltering support. Wolfgang Maass and Danil Prokhorov contributed motivating discussions and valuable references. An international patent application for the ESN technique was filed on October 13, 2000 (PCT/EP01/11490).
References
[1] A.F. Atiya and A.G. Parlos. New results on recurrent network training: Unifying
the algorithms and accelerating convergence. IEEE Trans. Neural Networks,
11(3):697-709, 2000.
[2] B. Farhang-Boroujeny. Adaptive Filters: Theory and Applications. Wiley, 1998.
[3] L.A. Feldkamp, D.V. Prokhorov, C.F. Eagen, and F. Yuan. Enhanced multistream Kalman filter training for recurrent neural networks. In J .A.K . Suykens
and J. Vandewalle, editors, Nonlinear Modeling: Advanced Black-Box Techniques, pages 29-54. Kluwer, 1998.
[4] J. Hertzberg, H. Jaeger, and F. Schönherr. Learning to ground fact symbols in
behavior-based robots. In F. van Harmelen, editor, Proc. 15th Europ. Conf. on
Art. Int. (ECAI 02), pages 708-712. IOS Press, Amsterdam, 2002.
[5] H. Jaeger.
The "echo state" approach to analysing and training recurrent neural networks.
GMD Report 148, GMD - German National Research Institute for
Computer Science,
2001.
http://www.gmd.de/People/Herbert.Jaeger/Publications.html.
[6] H. Jaeger. Short term memory in echo state networks. GMD-Report 152,
GMD - German National Research Institute for Computer Science, 2002.
http://www.gmd.de/People/Herbert.Jaeger/Publications.html.
[7] H. Jaeger. Tutorial on training recurrent neural networks, covering BPPT,
RTRL , EKF and the echo state network approach. GMD Report 159, Fraunhofer
Institute AIS, 2002.
[8] W. Maass, T. Natschlaeger, and H. Markram. Real-time computing without
stable states: A new framework for neural computation based on perturbations.
http://www.cis.tugraz.at/igi/maass/psfiles/LSM-vl06.pdf. 2002.
[9] W. Maass, Th. Natschläger, and H. Markram. A model for real-time computation in generic neural microcircuits. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing System 15 (Proc.
NIPS 2002). MIT Press, 2002.
1,450 | 2,319 | Exponential Family PCA for Belief Compression
in POMDPs
Nicholas Roy
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Geoffrey Gordon
Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Standard value function approaches to finding policies for Partially Observable
Markov Decision Processes (POMDPs) are intractable for large models. The intractability of these algorithms is due in great part to their generating an optimal
policy over the entire belief space. However, in real POMDP problems most belief
states are unlikely, and there is a structured, low-dimensional manifold of plausible
beliefs embedded in the high-dimensional belief space.
We introduce a new method for solving large-scale POMDPs by taking advantage of
belief space sparsity. We reduce the dimensionality of the belief space by exponential
family Principal Components Analysis [1], which allows us to turn the sparse, high-dimensional belief space into a compact, low-dimensional representation in terms of
learned features of the belief state. We then plan directly on the low-dimensional belief
features. By planning in a low-dimensional space, we can find policies for POMDPs
that are orders of magnitude larger than can be handled by conventional techniques.
We demonstrate the use of this algorithm on a synthetic problem and also on a mobile
robot navigation task.
1 Introduction
Large Partially Observable Markov Decision Processes (POMDPs) are generally very difficult to solve, especially with standard value iteration techniques [2, 3]. Maintaining a full
value function over the high-dimensional belief space entails finding the expected reward of
every possible belief under the optimal policy. However, in reality most POMDP policies
generate only a small percentage of possible beliefs. For example, a mobile robot navigating in an office building is extremely unlikely to ever encounter a belief about its pose that
resembles a checkerboard. If the execution of a POMDP is viewed as a trajectory inside the
belief space, trajectories for most large, real world POMDPs lie on low-dimensional manifolds embedded in the belief space. So, POMDP algorithms that compute a value function
over the full belief space do a lot of unnecessary work.
Additionally, real POMDPs frequently have the property that the belief probability distributions themselves are sparse. That is, the probability of being at most states in the world is
zero. Intuitively, mobile robots and other real world systems have local uncertainty (which
can often be multi-modal), but rarely encounter global uncertainty. Figure 1 depicts a mobile robot travelling down a corridor, and illustrates the sparsity of the belief space.
Figure 1: An example probability distribution of a mobile robot navigating in a hallway (map dimensions are 47m x 17m, with a grid cell resolution of 10cm). The white areas are free space, states
where the mobile robot could be. The black lines are walls, and the dark gray particles are the output
of the particle filter tracking the robot's position. The particles are located in states where the robot's
belief over its position is non-zero. Although the distribution is multi-modal, it is still relatively
compact: the majority of the states contain no particles and therefore have zero probability.
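Beliefs like the one in Figure 1 are maintained by the standard discrete Bayes filter. The update rule itself is not spelled out in this paper; the transition and observation tables below are made up for illustration.

```python
import numpy as np

def belief_update(b, T, O, a, z):
    """One step of the discrete Bayes filter.

    b: belief over states, shape (n,)
    T: transition probabilities, T[a, s, t] = p(t | s, a)
    O: observation probabilities, O[s, z] = p(z | s)
    Returns the posterior belief after taking action a and observing z.
    """
    predicted = T[a].T @ b           # p(t) = sum_s p(t | s, a) b(s)
    posterior = O[:, z] * predicted  # weight by observation likelihood
    return posterior / posterior.sum()

# Tiny 3-state example with one action and two observations (hypothetical tables).
T = np.array([[[0.8, 0.2, 0.0],
               [0.0, 0.8, 0.2],
               [0.2, 0.0, 0.8]]])
O = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
b = np.array([1.0, 0.0, 0.0])
b = belief_update(b, T, O, a=0, z=1)
print(b)  # -> [0.4444..., 0.5555..., 0.0]
```

The posterior stays a proper distribution over the 8250 grid cells, and for a localised robot most of its entries are exactly zero, which is the sparsity the paper exploits.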
We will take advantage of these characteristics of POMDP beliefs by using a variant of a
common dimensionality reduction technique, Principal Components Analysis (PCA). PCA
is well-suited to dimensionality reduction where the data lies near a linear manifold in
the higher-dimensional space. Unfortunately, POMDP belief manifolds are rarely linear;
in particular, sparse beliefs are usually very non-linear. However, we can employ a link
function to transform the data into a space where it does lie near a linear manifold; the
algorithm which does so (while also correctly handling the transformed residual errors) is
called Exponential Family PCA (E-PCA). E-PCA will allow us to find manifolds with only
a handful of dimensions, even for belief spaces with thousands of dimensions.
Our algorithm begins with a set of beliefs from a POMDP. It uses these beliefs to find a
decomposition of belief space into a small number of belief features. Finally, it plans over
a low-dimensional space by discretizing the features and using standard value iteration to
find a policy over the discrete beliefs.
2 POMDPs
A Partially Observable Markov Decision Process (POMDP) is a model given by a set
of states S = {s_1, ..., s_|S|}, actions A = {a_1, ..., a_|A|} and observations
Z = {z_1, ..., z_|Z|}. Associated with these are a set of transition probabilities
p(s' | s, a) and observation probabilities p(z | s).
The objective of the planning problem is to find a policy that maximises the expected sum
of future (possibly discounted) rewards of the agent executing the policy. There are a large
number of value function approaches [2, 4] that explicitly compute the expected reward
of every belief. Such approaches produce complete policies, and can guarantee optimality
under a wide range of conditions. However, finding a value function this way is usually
computationally intractable.
Policy search algorithms [3, 5, 6, 7] have met with success recently. We suggest that a large
part of the success of policy search is due to the fact that it focuses computation on relevant
belief states. A disadvantage of policy search, however, is that it can be data-inefficient:
many policy search techniques have trouble reusing sample trajectories generated from old
policies. Our approach focuses computation on relevant belief states, but also allows us to
use all relevant training data to estimate the effect of any policy.
Related research has developed heuristics which reduce the belief space representation. In
particular, entropy-based representations for heuristic control [8] and full value-function
planning [9] have been tried with some success. However, these approaches make strong
assumptions about the kind of uncertainties that a POMDP generates. By performing principled dimensionality reduction of the belief space, our technique should be applicable to
a wider range of problems.
3 Dimensionality Reduction
Principal Component Analysis is one of the most popular and successful forms of dimensionality
reduction [10]. PCA operates by finding a set of feature vectors U that minimise the loss function

    L(U, V) = || X - U V ||^2    (1)

where X is the original data and V is the matrix of low-dimensional coordinates of X.
This particular loss function assumes that the data lie near a linear manifold, and that displacements from this manifold are symmetric and have the same variance everywhere. (For
example, i.i.d. Gaussian errors satisfy these requirements.)
Unfortunately, as mentioned previously, probability distributions for POMDPs rarely form
a linear subspace. In addition, squared error loss is inappropriate for modelling probability
distributions: it does not enforce positive probability predictions.
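For concreteness, the squared-error loss of equation (1) is minimised by a truncated SVD (Eckart-Young). The sketch below uses made-up sparse "beliefs" and the common mean-centred variant of PCA; note that nothing constrains the reconstructed entries to be valid probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up "beliefs": sparse, non-negative rows that sum to 1.
X = rng.random((20, 8))
X[X < 0.6] = 0.0
X[np.arange(20), np.arange(20) % 8] += 1.0  # ensure every row has some mass
X = X / X.sum(axis=1, keepdims=True)

def pca_reconstruct(X, k):
    """Best rank-k reconstruction of the centred data under squared loss."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] + mean

for k in (1, 2, 4):
    Xk = pca_reconstruct(X, k)
    err = np.sum((X - Xk) ** 2)
    print(k, err, Xk.min())  # Xk.min() is typically negative for sparse X
```

The squared error shrinks monotonically as k grows, but the reconstructed "probabilities" are unconstrained real numbers, which is exactly the objection raised in the text.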
We use exponential family PCA to address this problem. Other nonlinear dimensionalityreduction techniques [11, 12, 13] could also work for this purpose, but would have different
domains of applicability. Although the optimisation procedure for E-PCA may be more
complicated than that for other models such as locally-linear models, it requires many
fewer samples of the belief space. For real world systems such as mobile robots, large
sample sets may be difficult to acquire.
3.1 Exponential family PCA
Exponential family Principal Component Analysis [1] (E-PCA) varies from conventional
PCA by adding a link function, in analogy to generalised linear models, and modifying
the loss function appropriately. As long as we choose the link and loss functions to match
each other, there will exist efficient algorithms for finding U and V given X. By picking
particular link functions (with their matching losses), we can reduce the model to an SVD.
We can use any convex function F to generate a matching pair of link and loss functions.
The loss function which corresponds to F is

    L(U, V | X) = F(UV) - X ∘ (UV) + G(X)    (2)

where G is defined so that the minimum over X of

    F(Z) - X ∘ Z + G(X)    (3)

is always 0. (G is called the convex dual of F, and expression (3) is called a generalised
Bregman divergence from X to Z.)
The loss functions themselves are only necessary for the analysis; our algorithm needs only
the link functions and their derivatives. So, we can pick the loss functions and differentiate
to get the matching link functions; or, we can pick the link functions directly and not worry
about the corresponding loss functions.
Each choice of link and loss functions results in a different model and therefore a potentially different decomposition of . This choice is where we should inject our domain
knowledge about what sort of noise there is in and what parameter matrices and
are a priori most likely. In our case the entries of are the number of particles from a
large sample which fell into a small bin, so a Poisson loss function is most appropriate.
The corresponding link function is

    f(z) = e^z    (4)

(taken component-wise) and its associated loss function is

    L(U, V | X) = Σ_ij e^{(UV)_ij} - X ∘ (UV)    (5)

where the "matrix dot product" A ∘ B is the sum of products of corresponding elements.
It is worth noting that using the Poisson loss for dimensionality reduction is related to Lee
and Seung?s non-negative matrix factorization [14].
In order to find U and V, we compute the derivatives of the loss function with respect to
U and V and set them to 0. The result is a set of fixed-point equations that the optimal
parameter settings must satisfy:

    U^T e^{UV} = U^T X    (6)
    e^{UV} V^T = X V^T    (7)

There are many algorithms which we could use to solve our optimality equations (6)
and (7). For example, we could use gradient descent. In other words, we could add a
multiple of U^T (X - e^{UV}) to V, add a multiple of (X - e^{UV}) V^T to U, and repeat until
convergence. Instead we will use a more efficient algorithm due to Gordon [15]; this algorithm is based on Newton?s method and is related to iteratively-reweighted least squares.
We refer the reader to this paper for further details.
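The paper's own solver is the Newton-style algorithm of [15]; the plain gradient-descent alternative just described is easy to sketch. Everything concrete here (the random count matrix, step size, and iteration count) is an illustration choice, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 4, size=(6, 10)).astype(float)  # made-up count data
k = 2                                               # number of belief features

U = 0.01 * rng.standard_normal((6, k))
V = 0.01 * rng.standard_normal((k, 10))

def poisson_loss(U, V, X):
    """Equation (5): sum of exp(UV) minus the matrix dot product X ∘ (UV)."""
    Z = U @ V
    return float(np.sum(np.exp(Z)) - np.sum(X * Z))

eta = 0.001
before = poisson_loss(U, V, X)
for _ in range(5000):
    E = np.exp(U @ V)
    V += eta * U.T @ (X - E)   # descend the loss in V
    U += eta * (X - E) @ V.T   # descend the loss in U
after = poisson_loss(U, V, X)
print(before, after)           # the loss drops as U, V are fitted
```

At a stationary point the updates vanish, which is exactly the fixed-point conditions (6) and (7); the Newton-style solver of [15] reaches that point far faster than this sketch.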
4 Augmented MDP
Given the belief features acquired through E-PCA, it remains to learn a policy. We do
so by using the low-dimensional belief features to convert the POMDP into a tractable
MDP. Our conversion algorithm is a variant of the Augmented MDP, or Coastal Navigation
algorithm [9], using belief features instead of entropy. Table 1 outlines the steps of this
algorithm.
1. Collect sample beliefs b_i.
2. Use E-PCA to generate low-dimensional belief features.
3. Convert the low-dimensional space into a discrete state space.
4. Learn the belief transition probabilities p(b_j | b_i, a) and reward function R(b_i).
5. Perform value iteration on the new model, using the discrete states, transition
   probabilities p(b_j | b_i, a) and R(b_i).
Table 1: Algorithm for planning in low-dimensional belief space.
We can collect the beliefs in step 1 using some prior policy such as a random walk or a
most-likely-state heuristic. We have already described E-PCA (step 2), and value iteration
(step 5) is well-known. That leaves steps 3 and 4.
The state space can be discretized in a number of ways, such as laying a grid over the belief
features or using distance to the closest training beliefs to divide feature space into Voronoi
regions. Thrun [16] has proposed nearest-neighbor discretization in high-dimensional belief space; we propose instead to use low-dimensional feature space, where neighbors
should be more closely related.
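One concrete reading of the Voronoi option in step 3 is a nearest-neighbour lookup in the low-dimensional feature space; a minimal sketch with made-up feature vectors:

```python
import numpy as np

# Made-up low-dimensional belief features of the training beliefs.
train_features = np.array([[0.0, 0.0],
                           [1.0, 0.2],
                           [0.4, 1.1]])

def discretize(feature, train_features):
    """Index of the Voronoi cell (nearest training belief) for a feature vector."""
    d = np.linalg.norm(train_features - feature, axis=1)
    return int(np.argmin(d))

print(discretize(np.array([0.9, 0.3]), train_features))  # -> 1
```

Each training belief is its own cell centre, so the discretization is defined everywhere in feature space and new beliefs are assigned in O(number of training beliefs).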
We can compute the model reward function easily from the reconstructed beliefs:

    R(b_i) = Σ_s b̂_i(s) R(s)    (8)

where b̂_i is the reconstruction of the sampled belief b_i.
To learn the transition function, we can sample states from the reconstructed beliefs, sample
observations from those states, and incorporate those observations to produce new belief
states.
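Equation (8) is just the expected per-state reward under the reconstructed belief; a one-line sketch with made-up numbers:

```python
import numpy as np

b_recon = np.array([0.5, 0.3, 0.2])    # reconstructed belief over 3 states
R_state = np.array([0.0, 10.0, -5.0])  # per-state reward

R_belief = float(b_recon @ R_state)    # equation (8): sum_s b(s) R(s)
print(R_belief)                        # -> 2.0
```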
One additional question is how to choose the number of bases. One possibility is to examine
the singular values of the matrix after performing E-PCA, and use only the features that
have singular values above some cutoff. A second possibility is to use a model selection
technique such as keeping a validation set of belief samples and picking the basis size
with the best reconstruction quality. Finally, we could search over basis sizes according to
performance of the resulting policy.
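Step 5 of Table 1 runs standard value iteration over the discrete belief states; a generic tabular sketch on a toy two-state MDP (not the robot model):

```python
import numpy as np

# Toy 2-state, 2-action MDP standing in for the discretized belief MDP.
# T[a, s, t] = p(t | s, a); R[s, a] = immediate reward.
T = np.array([[[1.0, 0.0],
               [1.0, 0.0]],      # action 0: always move to state 0
              [[0.0, 1.0],
               [0.0, 1.0]]])     # action 1: always move to state 1
R = np.array([[0.0, 0.0],
              [0.0, 1.0]])       # only (s=1, a=1) is rewarded
gamma = 0.9

V = np.zeros(2)
for _ in range(200):
    Q = R + gamma * np.einsum('ast,t->sa', T, V)  # Bellman backup
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)
print(V, policy)   # V -> [9. 10.], policy -> [1 1]
```

Here the optimal policy heads for the rewarded state from everywhere, and V converges geometrically at rate gamma, so a few hundred backups suffice.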
5 Experimental Results
We tested our approach on two models: a synthetic 40-state world with idealised actions and observations, and a large mobile robot navigation task. For each problem, we compared E-PCA to conventional PCA for belief representation quality, and compared E-PCA to some
heuristics for policy performance. We are unable to compare our approach to conventional
value function approaches, because both problems are too large to be solved by existing
techniques.
5.1 Synthetic model
The abstract model has a two-dimensional state space: one dimension of position along
a circular corridor, and one binary orientation. States 1 to 20 inclusive correspond to
one orientation, and states 21 to 40 correspond to the other. The reward is at a known
position along the corridor; therefore, the agent needs to discover its orientation, move to
the appropriate position, and declare it has arrived at the goal. When the goal is declared
the system resets (regardless of whether the agent is actually at the goal). The agent has 4
actions: left, right, sense_orientation, and declare_goal. The observation
and transition probabilities are given by von Mises distributions, an exponential family
distribution defined over [0, 2π). The von Mises distribution is the "wrapped" analog
of a Gaussian; it accounts for the fact that the two ends of the corridor are connected, and
because the sum of two von Mises variates is another von Mises variate, we can guarantee
that the true belief distribution is always a von Mises distribution over the corridor for each
orientation.
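The von Mises density is f(θ | μ, κ) = e^{κ cos(θ − μ)} / (2π I_0(κ)); a quick numeric sanity check (μ and κ below are arbitrary) that it is a proper, wrapped density:

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    """von Mises density on [0, 2*pi): the 'wrapped' analogue of a Gaussian."""
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))

theta = np.linspace(0.0, 2.0 * np.pi, 10001)
pdf = von_mises_pdf(theta, mu=1.0, kappa=4.0)

dx = theta[1] - theta[0]
print(np.sum(pdf[:-1]) * dx)   # periodic Riemann sum: integrates to ~1.0
print(von_mises_pdf(0.0, 1.0, 4.0),
      von_mises_pdf(2.0 * np.pi, 1.0, 4.0))  # equal: the two ends wrap around
```

np.i0 is NumPy's modified Bessel function of the first kind, order 0; the equality of the density at 0 and 2π is the wrap-around property the text relies on.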
[Figure 2 plot: "Sample Beliefs", probability of state vs. state (0-40) for several sampled beliefs.]
Figure 2: Some sample beliefs from the two-dimensional problem, generated from roll-outs of the
model. Notice that some beliefs are bimodal, whereas others are unimodal in one half or the other of
the state space.
Figure 2 shows some sample beliefs from this model. Notice that some of the beliefs are
bimodal, but some beliefs have probability mass over half of the state space only?these
unimodal beliefs follow the sense_orientation action.
Figure 3(a) shows the reconstruction performance of both the E-PCA approach and conventional PCA, plotting average KL-divergence between the sample belief and its reconstruction against the number of bases used for the reconstruction. PCA minimises squared
error, while E-PCA with the Poisson loss minimises unnormalised KL-divergence, so it is
no surprise that E-PCA performs better. We believe that KL-divergence is a more appropriate measure since we are fitting probabilities. Both PCA and E-PCA reach near-zero
error at 3 bases (E-PCA hits zero error, since an n-basis E-PCA can fit an n-parameter
exponential family exactly). This fact suggests that both decompositions should generate
good policies using only 3 dimensions.
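The KL-divergence measure used to score reconstructions can be sketched as follows (the small clipping floor is a numerical convenience, not something specified in the text):

```python
import numpy as np

def kl_divergence(b, b_recon, eps=1e-12):
    """KL(b || b_recon) between a sampled belief and its reconstruction."""
    b = np.clip(b, eps, None)
    b_recon = np.clip(b_recon, eps, None)
    b, b_recon = b / b.sum(), b_recon / b_recon.sum()
    return float(np.sum(b * np.log(b / b_recon)))

b = np.array([0.7, 0.2, 0.1, 0.0])
print(kl_divergence(b, b))                                   # -> 0.0
print(kl_divergence(b, np.array([0.25, 0.25, 0.25, 0.25])))  # > 0
```

The divergence is zero only for a perfect reconstruction and grows as the reconstruction spreads mass away from the sampled belief.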
[Figure 3 plots: (a) "KL Divergence between Sampled Beliefs and Reconstructions", average KL divergence vs. number of bases (1-5), for PCA and E-PCA; (b) "Average reward vs. Number of Bases", average reward vs. number of bases (1-3), for E-PCA, PCA, the MDP heuristic, and the entropy heuristic.]
(a) Reconstruction error
(b) Policy performance
Figure 3: (a) A comparison of the average KL divergence between the sample beliefs and their
reconstructions, against the number of bases used, for 500 samples beliefs. (b) A comparison of
policy performance using different numbers of bases, for 10000 trials. Policy performance was given
by total reward accumulated over trials.
Figure 3(b) shows a comparison of the policies from different algorithms. The PCA techniques do approximately twice as well as the naive Maximum Likelihood heuristic. This
is because the ML-heuristic must guess its orientation, and is correct about half the time.
In comparison, the Entropy heuristic does very poorly because it is unable to distinguish
between a unimodal belief that has uncertainty about its orientation but not its position, and
a bimodal belief that knows its position but not its orientation.
5.2 Mobile Robot Navigation
Next we tried our algorithm on a mobile robot navigating in a corridor, as shown in figure 1.
As in the previous example, the robot can detect its position, but cannot determine its
orientation until it reaches the lab door approximately halfway down the corridor. The
robot must navigate to within 10cm of the goal and declare the goal to receive the reward.
The map is shown in figures 1 and 4, and is 47m x 17m, with a grid cell resolution of 0.1m.
The total number of unoccupied cells is 8250, generating a POMDP with a belief space of
8250 dimensions. Without loss of generality, we restrict the robot's actions to forward and backward motion, and similarly simplify the observation model. The reward structure
of the problem strongly penalised declaring the goal when the robot was far removed from
the goal state.
The initial set of beliefs was collected by a mobile robot navigating in the world, and then
post-processed using a noisy sensor model. In this particular environment, the laser data
used for localisation normally gives very good localisation results; however, this will not
be true for many real world environments [17].
Figure 4 shows a sample robot trajectory using the policy learned using 5 basis functions.
Notice that the robot drives past the goal to the lab door in order to verify its orientation
before returning to the goal. If the robot had started at the other end of the corridor, its
orientation would have become apparent on its way to the goal.
Figure 5(a) shows the reconstruction performance of both the E-PCA approach and con-
[Figure 4 image: map with the start state, start distribution, robot trajectory, and goal state marked.]
Figure 4: An example robot trajectory, using the policy learned using 5 basis functions. On the left
are the start conditions and the goal. On the right is the robot trajectory. Notice that the robot drives
past the goal to the lab door to localise itself, before returning to the goal.
ventional PCA, plotting average KL-divergence between the sample belief and its reconstruction against the number of bases used for the reconstruction.
[Figure 5 plots: (a) "KL Divergence between Sampled Beliefs and Reconstructions", average KL divergence vs. number of bases (0-9), for PCA and E-PCA; (b) "Policy performance on Mobile Robot Navigation", average reward for the ML heuristic, PCA, and E-PCA (bar labels -268500.0, -1000.0, and 33233.0).]
(a) Reconstruction performance
(b) Policy performance
Figure 5: (a) A comparison of the average KL divergence between the sample beliefs and their
reconstructions against the number of bases used, for 400 samples beliefs for a navigating mobile
robot.(b) A comparison of policy performance using E-PCA, conventional PCA and the Maximum
Likelihood heuristic, for 1,000 trials.
Figure 5(b) shows the average policy performance for the different techniques, using 5
bases. (The number of bases was chosen based on reconstruction quality of E-PCA:
see [15] for further details.) Again, the E-PCA outperformed the other techniques because it was able to model its belief accurately. The Maximum-Likelihood heuristic could
not distinguish orientations, and therefore regularly declared the goal in the wrong place.
The conventional PCA algorithm failed because it could not represent its belief accurately
with only a few bases.
6 Conclusions
We have demonstrated an algorithm for planning for Partially Observable Markov Decision
Processes by taking advantage of particular kinds of belief space structure that are prevalent
in real world domains. In particular, we have shown this approach to work well on an
abstract small problem, and also on a 8250 state mobile robot navigation task which is well
beyond the capability of existing value function techniques.
The heuristic that we chose for dimensionality reduction was simply one of reconstruction
error, as in equation 5: a reduction that minimises reconstruction error should allow near-optimal policies to be learned. However, it may be possible to learn good policies with even
fewer dimensions by taking advantage of transition probability structure, or cost function
structure. For example, for certain classes of problems, a loss function such as

    [equation (9)]

would lead to a dimensionality reduction that maximises predictability. Similarly, a loss such as

    [equation (10)]

where the weighting term is some heuristic cost function (such as from a previous iteration of
dimensionality reduction), would lead to a reduction that maximises the ability to differentiate
states with different values.
Acknowledgments
Thanks to Sebastian Thrun for many suggestions and insight. Thanks also to Drew Bagnell, Aaron
Courville and Joelle Pineau for helpful discussion. Thanks to Mike Montemerlo for localisation code.
References
[1] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal components analysis
to the exponential family. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances
in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[2] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in
partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998.
[3] Andrew Ng and Michael Jordan. PEGASUS: A policy search method for large MDPs and
POMDPs. In Proceedings of Uncertainty in Artificial Intelligence (UAI), 2000.
[4] Milos Hauskrecht. Value-function approximations for partially observable Markov decision
processes. Journal of Artificial Intelligence Research, 13:33–94, 2000.
[5] Andrew Ng, Ron Parr, and Daphne Koller. Policy search via density estimation. In Advances
in Neural Information Processing Systems 12, 1999.
[6] Jonathan Baxter and Peter Bartlett. Reinforcement learning in POMDP's via direct gradient
ascent. In Proc. the 17th International Conference on Machine Learning, 2000.
[7] J. Andrew Bagnell and Jeff Schneider. Autonomous helicopter control using reinforcement
learning policy search methods. In Proceedings of the International Conference on Robotics
and Automation, 2001.
[8] Anthony R. Cassandra, Leslie Pack Kaelbling, and James A. Kurien. Acting under uncertainty:
Discrete Bayesian models for mobile-robot navigation. In Proceedings of the IEEE/RSJ Interational Conference on Intelligent Robotic Systems (IROS), 1996.
[9] Nicholas Roy and Sebastian Thrun. Coastal navigation with mobile robots. In Advances in
Neural Information Processing Systems 12, pages 1043–1049, 1999.
[10] I. T. Joliffe. Principal Component Analysis. Springer-Verlag, 1986.
[11] Sam Roweis and Lawrence Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000.
[12] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear
dimensionality reduction. Science, 290(5500):2319–2323, December 2000.
[13] S. T. Roweis, L. K. Saul, and G. E. Hinton. Global coordination of local linear models. In T. G.
Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing
Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[14] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by non-negative matrix
factorization. Nature, 401:788–791, 1999.
[15] Geoffrey Gordon. Generalized linear models. In Suzanna Becker, Sebastian Thrun, and Klaus
Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[16] Sebastian Thrun. Monte Carlo POMDPs. In Advances in Neural Information Processing Systems 12, 1999.
[17] S. Thrun, M. Beetz, M. Bennewitz, W. Burgard, A.B. Cremers, F. Dellaert, D. Fox, D. Hähnel,
C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. Probabilistic algorithms and the interactive
museum tour-guide robot Minerva. International Journal of Robotics Research, 19(11):972–999, 2000.
corresponds:1 ma:2 viewed:1 goal:15 jeff:1 operates:1 acting:2 principal:6 called:3 total:2 svd:1 experimental:1 rarely:3 aaron:1 highdimensional:1 collins:1 jonathan:1 incorporate:1 tested:1 handling:1 |
1,451 | 232 | 702
Obradovic and Parberry
Analog Neural Networks of Limited Precision I:
Computing with Multilinear Threshold Functions
(Preliminary Version)
Zoran Obradovic and Ian Parberry
Department of Computer Science,
Penn State University,
University Park, Pa. 16802.
ABSTRACT
Experimental evidence has shown analog neural networks to be extremely fault-tolerant; in particular, their performance does not appear to be significantly impaired when precision is limited. Analog
neurons with limited precision essentially compute k-ary weighted
multilinear threshold functions, which divide R^n into k regions with
k-1 hyperplanes. The behaviour of k-ary neural networks is investigated. There is no canonical set of threshold values for k > 3,
although they exist for binary and ternary neural networks. The
weights can be made integers of only O((z+k) log(z+k)) bits, where
z is the number of processors, without increasing hardware or running time. The weights can be made ±1 while increasing running
time by a constant multiple and hardware by a small polynomial in z
and k. Binary neurons can be used if the running time is allowed to
increase by a larger constant multiple and the hardware is allowed to
increase by a slightly larger polynomial in z and k. Any symmetric
k-ary function can be computed in constant depth and size
O(n^{k-1}/(k-2)!), and any k-ary function can be computed in constant
depth and size O(nk^n). The alternating neural networks of Olafsson
and Abu-Mostafa, and the quantized neural networks of Fleisher are
closely related to this model.
1 INTRODUCTION
Neural networks are typically circuits constructed from processing units which compute simple functions of the form f(w_1,...,w_n): R^n → S, where S ⊆ R, w_i ∈ R for 1 ≤ i ≤ n,
and

f(w_1,...,w_n)(x_1,...,x_n) = g(Σ_{i=1}^n w_i x_i)
for some output function g: R → S. There are two choices for the set S which are
currently popular in the literature. The first is the discrete model, with S = B (where B
denotes the Boolean set {0,1}). In this case, g is typically a linear threshold function
g(x) = 1 iff x ≥ h, and f is called a weighted linear threshold function. The second is
the analog model, with S = [0,1] (where [0,1] denotes {r ∈ R | 0 ≤ r ≤ 1}). In this case, g
is typically a monotone increasing function, such as the sigmoid function
g(x) = (1 + c^{-x})^{-1} for some constant c ∈ R. The analog neural network model is popular
because it is easy to construct processors with the required characteristics using a few
transistors. The digital model is popular because its behaviour is easy to analyze.
Experimental evidence indicates that analog neural networks can produce accurate
computations when the precision of their components is limited. Consider what actually happens to the analog model when the precision is limited. Suppose the neurons
can take on k distinct excitation values (for example, by restricting the number of digits in their binary or decimal expansions). Then S is isomorphic to Z_k = {0,...,k-1}.
We will show that g is essentially the multilinear threshold function
g(h_1,h_2,...,h_{k-1}): R → Z_k defined by

g(h_1,...,h_{k-1})(x) = i iff h_i ≤ x < h_{i+1}.

Here and throughout this paper, we will assume that h_1 ≤ h_2 ≤ ... ≤ h_{k-1}, and for convenience define h_0 = -∞ and h_k = ∞. We will call f a k-ary weighted multilinear threshold
function when g is a multilinear threshold function.
We will study neural networks constructed from k-ary multilinear threshold functions.
We will call these k-ary neural networks, in order to distinguish them from the standard 2-ary or binary neural network. We are particularly concerned with the resources
of time, size (number of processors), and weight (sum of all the weights) of k-ary
neural networks when used in accordance with the classical computational paradigm.
The reader is referred to (Parberry, 1990) for similar results on binary neural networks.
A companion paper (Obradovic & Parberry, 1989b) deals with learning on k-ary neural networks. A more detailed version of this paper appears in (Obradovic & Parberry,
1989a).
2 A K-ARY NEURAL NETWORK MODEL
A k-ary neural network is a weighted graph M = (V,E,w,h), where V is a set of processors and E ⊆ V×V is a set of connections between processors. Function
w: V×V → R assigns weights to interconnections and h: V → R^{k-1} assigns a set of k-1
thresholds to each of the processors. We assume that if (u,v) ∉ E, then w(u,v) = 0. The
size of M is defined to be the number of processors, and the weight of M is the sum of all the weights.
The processors of a k-ary neural network are relatively limited in computing power.
A k-ary function is a function f: Z_k^n → Z_k. Let F_k^n denote the set of all n-input k-ary
functions. Define Θ_k^n: R^{n+k-1} → F_k^n by Θ_k^n(w_1,...,w_n,h_1,...,h_{k-1}): Z_k^n → Z_k, where

Θ_k^n(w_1,...,w_n,h_1,...,h_{k-1})(x_1,...,x_n) = i iff h_i ≤ Σ_{j=1}^n w_j x_j < h_{i+1}.
The set of k-ary weighted multilinear threshold functions is the union, over all n ∈ N,
of the range of Θ_k^n. Each processor of a k-ary neural network can compute a k-ary
weighted multilinear threshold function of its inputs.
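For concreteness, Θ_k^n can be written out directly. The following sketch is my own illustration (the function name and the bisect trick are assumptions, not the paper's notation); it uses the fact that the interval index equals the number of thresholds not exceeding the weighted sum:

```python
import bisect

def theta(w, h, x):
    """k-ary weighted multilinear threshold function Theta_k^n(w, h).

    Returns i iff h_i <= sum_j w_j * x_j < h_{i+1}, using the paper's
    convention h_0 = -infinity and h_k = +infinity.
    w: weights (w_1,...,w_n); h: sorted thresholds (h_1,...,h_{k-1});
    x: inputs in Z_k^n.
    """
    s = sum(wj * xj for wj, xj in zip(w, x))
    # the interval index equals the number of thresholds h_i with h_i <= s
    return bisect.bisect_right(h, s)

# a 4-ary gate with weights (1, 1) and thresholds (1, 3, 5)
print(theta((1, 1), (1, 3, 5), (0, 0)))  # weighted sum 0 -> state 0
print(theta((1, 1), (1, 3, 5), (3, 2)))  # weighted sum 5 -> state 3
```

Note that `bisect_right` makes the left endpoint inclusive, matching the condition h_i ≤ s.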
Each processor can be in one of k states, 0 through k-1. Initially, the input processors of M are placed into states which encode the input. If processor v was updated
during interval t, its state at time t-1 was i and its output was j, then at time t its state
will be j. A k-ary neural network computes by having the processors change state until a stable configuration is reached. The output of M are the states of the output processors after a stable state has been reached. A neural network M_2 is said to be f(t)-equivalent to M_1 iff for all inputs x, for every computation of M_1 on input x which
terminates in time t there is a computation of M_2 on input x which terminates in time
f(t) with the same output. A neural network M_2 is said to be equivalent to M_1 iff it
is t-equivalent to it.
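This computation model is easy to simulate. The sketch below is my own illustration of a synchronous variant (the paper allows more general update schedules); the dictionary-based wiring and all names are assumptions:

```python
import bisect

def theta(w, h, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return bisect.bisect_right(h, s)

def run_to_stable(states, weights, thresholds, inputs, max_steps=100):
    """Synchronously update every non-input processor until no state changes.

    states: dict processor -> current state in Z_k
    weights: dict processor v -> dict u -> w(u, v)   (incoming weights)
    thresholds: dict processor v -> sorted threshold list
    inputs: set of clamped input processors
    """
    for _ in range(max_steps):
        new = dict(states)
        for v in states:
            if v in inputs:
                continue
            w_in = weights[v]
            new[v] = theta(list(w_in.values()), thresholds[v],
                           [states[u] for u in w_in])
        if new == states:          # stable configuration reached
            return states
        states = new
    raise RuntimeError("no stable configuration within max_steps")

# tiny 2-ary example: the output fires iff both inputs are 1 (an AND gate)
states = {"x1": 1, "x2": 1, "out": 0}
net = run_to_stable(states,
                    weights={"out": {"x1": 1, "x2": 1}},
                    thresholds={"out": [2]},
                    inputs={"x1", "x2"})
print(net["out"])  # -> 1
```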
3 ANALOG NEURAL NETWORKS
Let f be a function with range [0,1]. Any limited-precision device which purports to
compute f must actually compute some function with range the k rational values
R_k = {i/(k-1) | i ∈ Z, 0 ≤ i < k} (for some k ∈ N). This is sufficient for all practical purposes
provided k is large enough. Since R_k is isomorphic to Z_k, we will formally define
the limited precision variant of f to be the function f_k: X → Z_k defined by
f_k(x) = round(f(x)·(k-1)), where round: R → N is the natural rounding function defined
by round(x) = n iff n - 0.5 ≤ x < n + 0.5.
Theorem 3.1 : Let f(w_1,...,w_n): R^n → [0,1], where w_i ∈ R for 1 ≤ i ≤ n, be defined by

f(w_1,...,w_n)(x_1,...,x_n) = g(Σ_{i=1}^n w_i x_i)

where g: R → [0,1] is monotone increasing and invertible. Then f(w_1,...,w_n)_k: R^n → Z_k
is a k-ary weighted multilinear threshold function.
Proof: It is easy to verify that f(w_1,...,w_n)_k = Θ_k^n(w_1,...,w_n,h_1,...,h_{k-1}), where
h_i = g^{-1}((2i-1)/(2(k-1))). □
Thus we see that analog neural networks with limited precision are essentially k-ary
neural networks.
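Theorem 3.1 can also be checked numerically. The following sketch is my own illustration (the logistic choice of g, the weights, and k = 5 are arbitrary assumptions): it compares the rounded analog output with the threshold function built from h_i = g^{-1}((2i-1)/(2(k-1))):

```python
import bisect
import math

k = 5
g = lambda x: 1.0 / (1.0 + math.exp(-x))            # logistic output function
g_inv = lambda y: math.log(y / (1.0 - y))           # its inverse

def f_k(w, x):
    """round(f(x)*(k-1)): the limited-precision variant of the analog unit."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return int(math.floor(g(s) * (k - 1) + 0.5))    # round-half-up, as in the text

# thresholds from the proof of Theorem 3.1: h_i = g^{-1}((2i-1)/(2(k-1)))
h = [g_inv((2 * i - 1) / (2.0 * (k - 1))) for i in range(1, k)]

def theta(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return bisect.bisect_right(h, s)                # i iff h_i <= s < h_{i+1}

w = (0.7, -0.3)
ok = all(f_k(w, (a, b)) == theta(w, (a, b)) for a in range(k) for b in range(k))
print(ok)  # -> True
```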
4 CANONICAL THRESHOLDS
Binary neural networks have the advantage that all thresholds can be taken equal to
zero (see, for example, Theorem 4.3.1 of Parberry, 1990). A similar result holds for
ternary neural networks.
Theorem 4.1 : For every n-input ternary weighted multilinear threshold function there
is an equivalent (n+1)-input ternary weighted multilinear threshold function with
threshold values equal to zero and one.
Proof: Suppose w = (w_1,...,w_n) ∈ R^n, h_1,h_2 ∈ R. Without loss of generality assume
h_1 < h_2. Define w' = (w'_1,...,w'_{n+1}) ∈ R^{n+1} by w'_j = w_j/(h_2-h_1) for 1 ≤ j ≤ n, and
w'_{n+1} = -h_1/(h_2-h_1). It can be demonstrated by a simple case analysis that for all
x = (x_1,...,x_n) ∈ Z_3^n,

Θ_3^n(w,h_1,h_2)(x) = Θ_3^{n+1}(w',0,1)(x_1,...,x_n,1). □
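The case analysis can be confirmed mechanically. This check is my own illustration (the particular weights and thresholds are arbitrary assumptions): it runs a ternary gate against its zero/one-threshold normal form from the proof:

```python
import bisect

def theta(w, h, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return bisect.bisect_right(list(h), s)

# an arbitrary ternary (k = 3) gate: weights w, thresholds h1 < h2
w, h1, h2 = (2.0, -1.0), -0.5, 1.5

# the construction from the proof: rescale so the thresholds become 0 and 1
wp = tuple(wi / (h2 - h1) for wi in w) + (-h1 / (h2 - h1),)

ok = all(theta(w, (h1, h2), (x1, x2)) == theta(wp, (0.0, 1.0), (x1, x2, 1))
         for x1 in range(3) for x2 in range(3))
print(ok)  # -> True
```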
The choice of threshold values in Theorem 4.1 was arbitrary. Unfortunately there is
no canonical set of thresholds for k >3.
Theorem 4.2 : For every k > 3, n ≥ 2, m ≥ 0, h_1,...,h_{k-1} ∈ R, there exists an n-input k-ary
weighted multilinear threshold function f: Z_k^n → Z_k
such that for all (n+m)-input k-ary weighted multilinear threshold functions
Θ_k^{n+m}(w_1,...,w_{n+m},h_1,...,h_{k-1}): Z_k^{n+m} → Z_k
and all y ∈ R^m, f(x) ≠ Θ_k^{n+m}(w_1,...,w_{n+m},h_1,...,h_{k-1})(x,y) for some x ∈ Z_k^n.
Proof (Sketch): Suppose that t_1,...,t_{k-1} ∈ R is a canonical set of thresholds, and w.l.o.g.
assume n = 2. Let h = (h_1,...,h_{k-1}), where h_1 = h_2 = 2, h_3 = 4, h_i = 5 for 4 ≤ i < k, and
f = Θ_k^2(1,1,h).
By hypothesis there exist w_1,...,w_{m+2} and y = (y_1,...,y_m) ∈ R^m such that for all x ∈ Z_k^2,
f(x) = Θ_k^{m+2}(w_1,...,w_{m+2},t_1,...,t_{k-1})(x,y).
Let S = Σ_{i=1}^m w_{i+2} y_i. Since f(1,0) = 0, f(0,1) = 0, f(2,1) = 2, f(1,2) = 2, it follows that
2(w_1 + w_2 + S) < t_1 + t_3.   (1)

Since f(2,0) = 2, f(1,1) = 2, and f(0,2) = 2, it follows that

w_1 + w_2 + S ≥ t_2.   (2)

Inequalities (1) and (2) imply that

2t_2 < t_1 + t_3.   (3)

By similar arguments from g = Θ_k^2(1,1,(1,3,3,4,...,4)) we can conclude that

2t_2 > t_1 + t_3.   (4)

But (4) contradicts (3). □
5 NETWORKS OF BOUNDED WEIGHT
Although our model allows each weight to take on an infinite number of possible
values, there are only a finite number of threshold functions (since there are only a
finite number of k-ary functions) with a fixed number of inputs. Thus the number of
n-input threshold functions is bounded above by some function in n and k. In fact,
something stronger can be shown. All weights can be made integral, and
O((n+k) log(n+k)) bits are sufficient to describe each one.
Theorem 5.1 : For every k-ary neural network M_1 of size z there exists an equivalent
k-ary neural network M_2 of size z and weight ((k-1)/2)^z (z+1)^{(z+k)/2+O(1)} with integer
weights.
Proof (Sketch): It is sufficient to prove that for every weighted threshold function
f = Θ_k^n(w_1,...,w_n,h_1,...,h_{k-1}): Z_k^n → Z_k for some n ∈ N, there is an equivalent weighted threshold function g = Θ_k^n(w'_1,...,w'_n,h'_1,...,h'_{k-1}) such that |w'_i| ≤ ((k-1)/2)^n (n+1)^{(n+k)/2+O(1)} for
1 ≤ i ≤ n. By extending the techniques used by Muroga, Toda and Takasu (1961) in the
binary case, we see that the weights are bounded above by the maximum determinant
of a matrix of dimension n+k-1 over Z_k. □
Thus if k is bounded above by a polynomial in n, we are guaranteed to be able to
describe the weights using a polynomial number of bits.
6 THRESHOLD CIRCUITS
A k-ary neural network with weights drawn from {±1} is said to have unit weights. A
unit-weight directed acyclic k-ary neural network is called a k-ary threshold circuit.
A k-ary threshold circuit can be divided into layers, with each layer receiving inputs
only from the layers above it. The depth of a k-ary threshold circuit is defined to be
the number of layers. The weight is equal to the number of edges, which is bounded
above by the square of the size. Despite the apparent handicap of limited weights, k-ary threshold circuits are surprisingly powerful.
Much interest has focussed on the computation of symmetric functions by neural networks, motivated by the fact that the visual system appears to be able to recognize objects regardless of their position on the retina. A function f: Z_k^n → Z_k is called symmetric if its output remains the same no matter how the input is permuted.
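Symmetry in this sense is ordinary permutation invariance and, for small n and k, can be checked by brute force. This checker is my own illustration, not part of the paper:

```python
from itertools import permutations, product

def is_symmetric(f, n, k):
    """Check that f: Z_k^n -> Z_k is invariant under every input permutation."""
    return all(f(x) == f(p)
               for x in product(range(k), repeat=n)
               for p in permutations(x))

# 'sum mod k' depends only on the multiset of inputs, hence is symmetric
f = lambda x: sum(x) % 3
print(is_symmetric(f, n=3, k=3))   # -> True

g = lambda x: x[0]                 # projection onto the first input: not symmetric
print(is_symmetric(g, n=3, k=3))   # -> False
```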
Theorem 6.1 : Any symmetric k-ary function on n inputs can be computed by a k-ary
threshold circuit of depth 6 and size (n+1)^{k-1}/(k-2)! + O(kn).
Proof: Omitted. □
It has been noted many times that neural networks can compute any Boolean function
in constant depth. The same is true of k-ary neural networks, although both results
appear to require exponential size for many interesting functions.
Theorem 6.2 : Any k-ary function of n inputs can be computed by a k-ary threshold
circuit with size (2n+1)k^n + k + 1 and depth 4.
Proof: Similar to that for k = 2 (see Chandra et al., 1984; Parberry, 1990). □
The interesting problem remaining is to determine which functions require exponential
size to achieve constant depth, and which can be computed in polynomial size and
constant depth. We will now consider the problem of adding integers represented in
k-ary notation.
Theorem 6.3 : The sum of two k-ary integers of size n can be computed by a k-ary
threshold circuit with size O(n^2) and depth 5.
Proof: First compute the carry of x and y in quadratic size and depth 3 using the standard elementary school algorithm. Then the i-th position of the result can be computed
from the i-th position of the operands and a carry propagated in that position in constant size and depth 2. □
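The schoolbook algorithm the proof refers to, written here as ordinary sequential code rather than a constant-depth circuit (my illustration; the digit ordering is an assumption):

```python
def add_kary(x, y, k):
    """Schoolbook addition of two k-ary numbers (least significant digit first)."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s = a + b + carry
        result.append(s % k)       # digit i of the sum
        carry = s // k             # carry propagated into position i+1
    result.append(carry)
    return result

# 4-ary example: 123_4 + 321_4 = 1110_4  (i.e. 27 + 57 = 84)
print(add_kary([3, 2, 1], [1, 2, 3], 4))   # -> [0, 1, 1, 1]
```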
Theorem 6.4 : The sum of n k-ary integers of size n can be computed by a k-ary
threshold circuit with size O(n^3 + kn) and constant depth.
Proof: Similar to the proof for k = 2 using Theorem 6.3 (see Chandra et al., 1984; Parberry, 1990). □
Theorem 6.5 : For every k-ary neural network M_1 of size z there exists an O(t)-equivalent unit-weight k-ary neural network M_2 of size O((z+k)^4 log^3(z+k)).
Proof: By Theorem 5.1 we can bound all weights to have size O((z+k) log(z+k)) in
binary notation. By Theorem 6.4 we can replace every processor with non-unit
weights by a threshold circuit of size O((z+k)^3 log^3(z+k)) and constant depth. □
Theorem 6.5 implies that we can assume unit weights by increasing the size by a polynomial and the running time by only a constant multiple provided the number of
logic levels is bounded above by a polynomial in the size of the network. The
number of thresholds can also be reduced to one if the size is increased by a larger
polynomial:
Theorem 6.6 : For every k-ary neural network M_1 of size z there exists an O(t)-equivalent unit-weight binary neural network M_2 of size O(z^4 k^4 (log z + log k)^3)
which outputs the binary encoding of the required result.
Proof: Similar to the proof of Theorem 6.5. □
This result is primarily of theoretical interest. Binary neural networks appear simpler,
and hence more desirable than analog neural networks. However, analog neural networks are actually more desirable since they are easier to build. With this in mind,
Theorem 6.6 simply serves as a limit to the functions that an analog neural network
can be expected to compute efficiently. We are more concerned with constructing a
model of the computational abilities of neural networks, rather than a model of their
implementation details.
7 NONMONOTONE MULTILINEAR NEURAL NETWORKS
Olafsson and Abu-Mostafa (1988) study the information capacity of functions
f(w_1,...,w_n): R^n → B for w_i ∈ R, 1 ≤ i ≤ n, where

f(w_1,...,w_n)(x_1,...,x_n) = g(Σ_{i=1}^n w_i x_i)

and g is the alternating threshold function g(h_1,h_2,...,h_{k-1}): R → B for some monotone
increasing h_i ∈ R, 1 ≤ i < k, defined by g(x) = 0 iff h_{2i} ≤ x < h_{2i+1} for some 0 ≤ i ≤ k/2. We
will call f an alternating weighted multilinear threshold function, and a neural network constructed from functions of this form alternating multilinear neural networks.
Alternating multilinear neural networks are closely related to k-ary neural networks:
Theorem 7.1 : For every k-ary neural network of size z and weight w there is an
equivalent alternating multilinear neural network of size z log k and weight
(k-1)w log(k-1) which produces the output of the former in binary notation.
Proof (Sketch): Each k-ary gate is replaced by log k gates which together essentially
perform a "binary search" to determine each bit of the k-ary gate. Weights which increase exponentially are used to provide the correct output value. □
Theorem 7.2 : For every alternating multilinear neural network of size z and weight
w there is a 3t-equivalent k-ary neural network of size 4z and weight w+4z.
Proof (Sketch): Without loss of generality, assume k is odd. Each alternating gate is
replaced by a k-ary gate with identical weights and thresholds. The output of this gate
goes with weight one to a k-ary gate with thresholds 1,3,5,...,k-1 and with weight
minus one to a k-ary gate with thresholds -(k-1),...,-3,-1. The output of these gates
goes to a binary gate with threshold k. □
Both k-ary and alternating multilinear neural networks are a special case of nonmonotone multilinear neural networks, where g: R → R is defined by g(x) = c_i iff
h_i ≤ x < h_{i+1}, for some monotone increasing h_i ∈ R, 1 ≤ i < k, and c_0,...,c_{k-1} ∈ Z_k. Nonmonotone neural networks correspond to analog neural networks whose output function is not necessarily monotone nondecreasing. Many of the results of this paper, including Theorems 5.1, 6.5, and 6.6, also apply to nonmonotone neural networks. The
size, weight and running time of many of the upper-bounds can also be improved by a
small amount by using nonmonotone neural networks instead of k-ary ones. The details are left to the interested reader.
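A nonmonotone gate differs from the k-ary gate only in that the interval index is mapped through an arbitrary label sequence c_0,...,c_{k-1}. A minimal sketch (my own illustration, not the paper's notation):

```python
import bisect

def nonmonotone(w, h, c, x):
    """g(x) = c_i iff h_i <= sum_j w_j * x_j < h_{i+1}, with free labels c_i.

    With c = (0, 1, ..., k-1) this is the ordinary k-ary gate; with
    c = (0, 1, 0, 1, ...) it behaves like an alternating threshold gate.
    """
    s = sum(wi * xi for wi, xi in zip(w, x))
    return c[bisect.bisect_right(h, s)]

# an alternating (band-pass-like) gate: fires only in the middle interval
print(nonmonotone((1, 1), (1, 3), (0, 1, 0), (1, 1)))  # sum 2 -> 1
print(nonmonotone((1, 1), (1, 3), (0, 1, 0), (2, 2)))  # sum 4 -> 0
```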
8 MULTILINEAR HOPFIELD NETWORKS
A multilinear version of the Hopfield network called the quantized neural network has
been studied by Fleisher (1987). Using the terminology of (Parberry, 1990), a quantized neural network is a simple symmetric k-ary neural network (that is, its interconnection pattern is an undirected graph without self-loops) with the additional property
that all processors have an identical set of thresholds. Although the latter assumption
is reasonable for binary neural networks (see, for example, Theorem 4.3.1 of Parberry,
1990), and ternary neural networks (Theorem 4.1), it is not necessarily so for k-ary
neural networks with k>3 (Theorem 4.2). However, it is easy to extend Fleisher's
main result to give the following:
Theorem 8.1 : Any productive sequential computation of a simple symmetric k-ary
neural network will converge.
9 CONCLUSION
It has been shown that analog neural networks with limited precision are essentially
k-ary neural networks. If k is limited to a polynomial, then polynomial size, constant
depth k-ary neural networks are equivalent to polynomial size, constant depth binary
neural networks. Nonetheless, the savings in time (at most a constant multiple) and
hardware (at most a polynomial) arising from using k-ary neural networks rather than
binary ones can be quite significant. We do not suggest that one should actually construct binary or k-ary neural networks. Analog neural networks can be constructed by
exploiting the analog behaviour of transistors, rather than using extra hardware to inhibit it. Rather, we suggest that k-ary neural networks are a tool for reasoning about the
behaviour of analog neural networks.
Acknowledgements
The financial support of the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under grant numbers AFOSR 87-0400 and AFOSR 89-0168
and NSF grant CCR-8801659 to Ian Parberry is gratefully acknowledged.
References
Chandra A. K., Stockmeyer L. J. and Vishkin U., (1984) "Constant depth reducibility,"
SIAM J. Comput., vol. 13, no. 2, pp. 423-439.
Fleisher M., (1987) "The Hopfield model with multi-level neurons," Proc. IEEE
Conference on Neural Information Processing Systems, pp. 278-289, Denver, CO.
Muroga S., Toda I. and Takasu S., (1961) "Theory of majority decision elements," J.
Franklin Inst., vol. 271, pp. 376-418.
Obradovic Z. and Parberry I., (1989a) "Analog neural networks of limited precision I:
Computing with multilinear threshold functions (preliminary version)," Technical Report CS-89-14, Dept. of Computer Science, Penn. State Univ.
Obradovic Z. and Parberry I., (1989b) "Analog neural networks of limited precision II:
Learning with multilinear threshold functions (preliminary version)," Technical Report
CS-89-15, Dept. of Computer Science, Penn. State Univ.
Olafsson S. and Abu-Mostafa Y. S., (1988) "The capacity of multilevel threshold functions," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp.
277-281.
Parberry I., (To Appear in 1990) "A Primer on the Complexity Theory of Neural Networks," in A Sourcebook of Formal Methods in Artificial Intelligence, ed. R. Banerji,
North-Holland.
Combining Features for BCI
Guido Dornhege1*, Benjamin Blankertz1, Gabriel Curio2, Klaus-Robert Müller1,3
1 Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
2 Neurophysics Group, Dept. of Neurology, Klinikum Benjamin Franklin,
Freie Universität Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
3 University of Potsdam, August-Bebel-Str. 89, 14482 Potsdam, Germany
Abstract
Recently, interest is growing to develop an effective communication interface connecting the human brain to a computer, the "Brain-Computer
Interface" (BCI). One motivation of BCI research is to provide a new
communication channel substituting normal motor output in patients
with severe neuromuscular disabilities. In the last decade, various neurophysiological cortical processes, such as slow potential shifts, movement
related potentials (MRPs) or event-related desynchronization (ERD) of
spontaneous EEG rhythms, were shown to be suitable for BCI, and, consequently, different independent approaches of extracting BCI-relevant
EEG-features for single-trial analysis are under investigation. Here, we
present and systematically compare several concepts for combining such
EEG-features to improve the single-trial classification. Feature combinations are evaluated on movement imagination experiments with 3 subjects where EEG-features are based on either MRPs or ERD, or both.
Those combination methods that incorporate the assumption that the single EEG-features are physiologically mutually independent outperform
the plain method of "adding" evidence where the single-feature vectors
are simply concatenated. These results strengthen the hypothesis that
MRP and ERD reflect at least partially independent aspects of cortical
processes and open a new perspective to boost BCI effectiveness.
1 Introduction
A brain-computer interface (BCI) is a system which translates a subject's intentions into a
control signal for a device, e.g., a computer application, a wheelchair or a neuroprosthesis, cf. [1]. When measuring non-invasively, brain activity is acquired by scalp-recorded
electroencephalogram (EEG) from a subject that tries to convey its intentions by behaving
according to well-defined paradigms, e.g., motor imagery, specific mental tasks, or feedback control. ?Features? (or feature vectors) are extracted from the digitized EEG-signals
by signal processing methods. These features are translated into a control signal, either
(1) by simple equations or threshold criteria (with only a few free parameters that are estimated on training data), or (2) by machine learning algorithms that learn a more complex
* To whom correspondence should be addressed.
decision function on the training data, e.g., linear discriminant analysis (LDA), support
vector machines (SVMs), or artificial neural networks (ANN).
Concerning the pivotal step of feature extraction, neurophysiological a priori knowledge
can aid to decide which EEG-feature is to be expected to hold the most discriminative
information for the chosen paradigm. For some behavioral paradigms even several EEG-features might be usable, stimulating a discussion of how to combine different features. Investigations in this direction were announced, e.g., in [2, 3] but no publications on that
topic followed.
Here, we present several methods for combining features to enhance single-trial EEG classification for BCI. A special focus was placed on the question how to incorporate a priori knowledge about feature independence. Recently this approach proved to be most effective in an open internet-based classification competition:
it turned out to be the winning entry for dataset 2 of the NIPS 2001 BCI competition.
Neurophysiological background for single-feature EEG-paradigms.
Three approaches are characteristic for the majority of single-feature BCI paradigms.
(1) Based on slow cortical potentials the Tübinger Thought Translation Device (TTD) [4]
translates low-pass filtered brain activity from central scalp position into a vertical cursor
movement on a computer screen. This enables subjects to learn self-regulation of electrocortical positivity or negativity. After some training, patients can generate binary decisions
at a 4-second pace with accuracies of up to 85 % and thereby handle a word processor or
an internet browser. (2) The Albany BCI system [2] allows the user to control cursor movement by oscillatory brain activity into one of two or four possible target areas on a computer
screen. In the first training sessions most subjects use some kind of motor imagery which is
replaced by adapted strategies during further feedback sessions. Well-trained users achieve
hit rates of over 90 % in the two-target setup. Each selection typically takes 4 to 5 seconds.
And (3), the Graz BCI system [5] is based on event-related modulations of the pericentral µ- and/or β-rhythms of sensorimotor cortices, with a focus on motor preparation and
imagination. Feature vectors calculated from spontaneous EEG signals by adaptive autoregressive modelling are used to train a classifier. In a ternary classification task accuracies
of over 96 % were obtained in an offline study with a trial duration of 8 seconds.
Neurophysiological background for combining single EEG-features.
Most gain from a combination of different features is expected when the single features
provide complementary information for the classification task. In the case of movement related potentials (MRPs) or event-related desynchronization (ERD) of EEG rhythms, recent
evidence [6] supports the hypothesis that MRPs and ERD of the pericentral alpha rhythm
reflect different aspects of sensorimotor cortical processes and could provide complementary information on brain activity accompanying finger movements, as they show different
spatiotemporal activation patterns, e.g., in primary (sensori-)motor cortex (M-1), supplementary motor area (SMA) and posterior parietal cortex (PP).
This hypothesis is backed by invasive recordings [7] supporting the idea that ERD and
MRPs represent different aspects of motor cortex activation with varying generation mechanisms: EEG was recorded during brisk, self-paced finger and foot movements subdurally
in 3 patients and scalp-recorded in normal subjects. MRPs started over wide areas of the
sensorimotor cortices (Bereitschaftspotential) and focalizes at the contralateral M-1 hand
cortex with a steep negative slope prior to finger movement onset, reaching a negative peak
approximately 100 ms after EMG onset (motor potential). In contrast, a bilateral M-1 ERD
just prior to movement onset appeared to reflect a more widespread cortical 'alerting' function. Most importantly, the ERD response magnitude did not have a significant correlation
with the amplitude of the negative MRPs slope.
Note that these studies analyze movement preparation and execution only. We presume a
similar independence of MRP and ERD phenomena for imagined movements. This hypothesis is confirmed by our results, see section 3.
Apart from exploiting complementary information on cortical processes, combining MRP
and ERD based features might give the benefit of being more robust against artifacts from
non central nervous system (CNS) activity such as eye movement (EOG) or muscular artifacts (EMG). While EOG activity mainly affects slow potentials, i.e. MRPs, EMG activity
is of more concern to oscillatory features, cf. [1]. Accordingly, a classification method that
is based on both features has a better chance of handling trials that are contaminated by one
kind of those artifacts. On the other hand, it might increase the risk of using non-CNS activity for classification which would not be conform with the BCI idea, [1]. For our setting
the latter issue is investigated in section 2.3.
2 Data acquisition and analysis methods
Experiments.
In this paper we analyze EEG data from experiments with three subjects called aa, af and
ak. The subject sat in a normal chair, with arms lying relaxed on the table. During the
experiment the symbol ?L? or ?R? was shown every 4.5 ?0.25 sec for a duration of 3 s on
the computer screen. The subject was instructed to imagine performing left resp. right hand
finger movements as long as the symbol was visible. 200-300 trials were recorded for each
class and each subject.
Brain activity was recorded with 28 (subject aa) resp. 52 (subjects af and ak) Ag/AgCl
electrodes at 1000 Hz and downsampled to 100 Hz for the present offline study. In addition,
an electromyogram (EMG) of the musculus flexor digitorum bilaterally and horizontal and
vertical electrooculograms (EOG) were recorded to monitor non-CNS activity.
No artifact rejection or correction was employed.
Objective of single-trial analysis.
In these experiments the aim of classification is to discriminate 'left' from 'right' trials
based on EEG-data during the whole period of imagination. Here, no effort was made to
come to a decision as early as possible, which would also be a reasonable objective.
2.1 Feature Extraction
The present behavioural paradigms allowed us to study the two prominent brain signals accompanying motor imagery: (1) the lateralized MRP, showing up as a slow negative EEG shift focussed over the corresponding motor and sensorimotor cortex contralateral to the involved hand, and (2) the ERD, appearing as a lateralized attenuation of the μ- and/or central β-rhythm. Fig. 1 shows these effects calculated from subject aa.
In the following we describe methods to derive feature vectors capturing MRP or ERD effects. Note that all filtering techniques used are causal so that all methods are applicable
in online systems. Some free parameters were chosen from appropriately fixed parameter
sets by cross-validation for all experiments and each classification setting separately, as described in section 2.2. This selection was done to obtain the most appropriate setting for
each single-feature analysis. These values were used for both, classifying trials based on
single-features and the combined classification.
Movement related potential (MRP).
To quantify the lateralized MRP we proceeded similarly to our approach in [8] (Berlin Brain-Computer Interface, BBCI). Small modifications were made to take account of the different
experimental setup. Signals were baseline corrected on the interval 0-300 ms and downsampled by calculating five jumping means in several consecutive intervals beginning at
300 ms and ending between 1500-3500 ms. Optionally, an elliptic IIR low-pass filter at 2.5 Hz was applied to the signals beforehand.
Figure 1: ERP and ERD (7-30 Hz) curves at the Laplacian-filtered channels C3 lap and C4 lap for subject aa in the time interval -500 ms to 3000 ms relative to stimulus. Thin and thick lines are averages over right resp. left hand trials. The contralateral
negativation resp. desynchronization is clearly observable.
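As an illustration, the baseline correction and jumping-means downsampling described above can be sketched as follows. This is a minimal numpy sketch, not the exact BBCI implementation: the function name, the 100 Hz sampling rate default, and the interval boundaries are our assumptions for illustration.

```python
import numpy as np

def mrp_features(trial, fs=100, baseline_ms=(0, 300), start_ms=300,
                 end_ms=3500, n_means=5):
    """Baseline-correct one trial and downsample it to jumping means.

    trial: array of shape (n_samples, n_channels), time-locked to the cue.
    Returns a feature vector of length n_means * n_channels.
    """
    b0, b1 = (int(t * fs / 1000) for t in baseline_ms)
    trial = trial - trial[b0:b1].mean(axis=0)            # baseline correction
    s, e = int(start_ms * fs / 1000), int(end_ms * fs / 1000)
    bounds = np.linspace(s, e, n_means + 1).astype(int)  # consecutive intervals
    means = [trial[bounds[i]:bounds[i + 1]].mean(axis=0)
             for i in range(n_means)]
    return np.concatenate(means)

# toy usage: 3.5 s at 100 Hz, 2 channels
rng = np.random.default_rng(0)
features = mrp_features(rng.standard_normal((350, 2)))
assert features.shape == (10,)
```

The resulting vector (five mean amplitudes per channel) is what enters the linear classifier for the MRP feature.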
To derive feature vectors for the ERD effects we use two different methods which may
reflect different aspects of brain rhythm modulations. The first (AR) reflects the spectral
distribution of the most prominent brain rhythms, whereas the second (CSP) reflects spatial patterns of the most prominent power modulations in specified frequency bands.
Autoregressive models (AR).
In an autoregressive model of order p each time point of a time series is represented as
a fixed linear combination (AR coefficients) of the last p data points. The model order p
was taken as free parameter to be selected between 5 and 12. The feature vector of one
trial is the concatenation of the AR coefficients plus the variance of each channel. The AR
coefficients reflect oscillatory properties of the EEG signal, but not the overall amplitude.
Accounting for this by adding the variance to the feature vector improves classification.
To prevent the AR models from being distorted by EEG-baseline drifts, the signals were
high-pass filtered at 4 Hz. And to sharpen the spectral information to focal brain sources
(spatial) Laplacian filters were applied. The interval for estimating the AR parameters
started at 500 ms and the end points were chosen between 2000 ms and 3500 ms.
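For illustration, the per-channel AR coefficients can be estimated via the Yule-Walker equations; this is a simplified sketch with names of our choosing, and the original study may well have used a different estimator (e.g. Burg's method or an adaptive variant).

```python
import numpy as np

def ar_coeffs(x, p):
    """AR(p) coefficients of a 1-D signal via the Yule-Walker equations,
    for the model x[t] = sum_k a[k] * x[t-k] + noise."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # biased autocovariance estimates r[0..p]
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def ar_features(trial, p=6):
    """Concatenate AR coefficients and the variance of each channel."""
    feats = []
    for ch in trial.T:                     # trial: (n_samples, n_channels)
        feats.append(ar_coeffs(ch, p))
        feats.append([ch.var()])
    return np.concatenate(feats)

rng = np.random.default_rng(1)
f = ar_features(rng.standard_normal((300, 4)), p=6)
assert f.shape == (4 * 7,)     # p coefficients + one variance per channel
```

Appending the channel variance restores the overall amplitude information that the AR coefficients alone discard, as noted above.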
Common spatial patterns (CSP).
This method was suggested for binary classification of EEG trials in [9]. In feature space, projections onto the orientations with the most differing power ratios are used. These can be calculated by determining generalized eigenvalues or by simultaneous diagonalisation of the
covariance matrices of both classes. Only a few orientations with the highest ratio between
their eigenvalues (in both directions) are selected. The number of CSP used per class was a
free parameter to be chosen between 2 and 4. Before applying CSP, the signals were filtered
between 8 and 13 Hz to focus on effects in the μ-band. Using a broader band of 7-30 Hz
did not give better results. The interval of interest was chosen as described above for the
AR model. Feature vectors consist of the variances of the CSP projected trial, cf. [9]. Note
that for cross-validation CSP must be calculated for each training set separately.
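The generalized-eigenvalue route to CSP can be sketched as follows; this is a minimal sketch under the assumption of band-pass filtered, approximately mean-free trials, and the function names are ours.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_per_class=2):
    """Common spatial patterns via a generalized eigenvalue problem.

    trials_*: lists of band-pass filtered trials, each (n_samples, n_channels).
    Returns a (n_channels, 2*n_per_class) matrix of spatial filters whose
    projections have extremal power ratios between the two classes.
    """
    cov = lambda trials: sum(t.T @ t / len(t) for t in trials) / len(trials)
    Sa, Sb = cov(trials_a), cov(trials_b)
    # solve Sa w = lambda (Sa + Sb) w; eigenvalues come back in ascending order
    _, evecs = eigh(Sa, Sa + Sb)
    idx = np.r_[np.arange(n_per_class), np.arange(-n_per_class, 0)]
    return evecs[:, idx]

def csp_features(trial, W):
    """Variances of the CSP-projected trial (one value per filter)."""
    return np.var(trial @ W, axis=0)

# toy usage: class A has high power on channel 0, class B on channel 1
rng = np.random.default_rng(3)
A = [rng.standard_normal((200, 4)) * [3.0, 1.0, 1.0, 1.0] for _ in range(20)]
B = [rng.standard_normal((200, 4)) * [1.0, 3.0, 1.0, 1.0] for _ in range(20)]
W = csp_filters(A, B, n_per_class=1)
assert W.shape == (4, 2) and csp_features(A[0], W).shape == (2,)
```

Taking filters from both ends of the eigenvalue spectrum corresponds to selecting "the highest ratio between their eigenvalues (in both directions)" as described above.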
2.2 Classification and model selection
Our approach for classification was guided by two general ideas. First, following the concept 'simple methods first' we employed only linear classifiers. In our BCI studies linear
classification methods were never found to perform worse than non-linear classifiers, cf.
also [10, 11]. And second, regularization, which is a well-established principle in machine
learning, is highly relevant in experimental conditions typical for a BCI scenario, i.e., a
small number of training samples for 'weak features'. In weak features discriminative information is spread across many dimensions. Classifying such features based on a small
training set may lead to the well-known overfitting problem. To avoid this, typically one
of the following strategies is employed: (1) performing strong preprocessing to extract low
dimensional feature vectors which are tractable for most classifiers. Or (2) performing no
or weak preprocessing and carefully regularizing the classifier such that high-dimensional
features can be handled even with only a small training set. Solution (1) has the disadvantage that strong assumptions about the data distributions have to be made. So especially in
EEG analysis where many sources of variability make strong assumptions dubious, solution (2) is to be preferred. A good introduction to regularized classification is [12] including
regularized LDA which we used here.
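A regularized LDA in the spirit of [12] can be sketched as follows. This is our own minimal sketch: the shrinkage-towards-a-scaled-identity scheme and the parameter name gamma are assumptions for illustration, not necessarily the exact parameterization used in the study.

```python
import numpy as np

class RegularizedLDA:
    """LDA with the pooled covariance shrunk towards a scaled identity;
    gamma = 0 gives plain LDA, gamma = 1 discards all estimated correlations."""

    def fit(self, X, y, gamma=0.1):
        X0, X1 = X[y == 0], X[y == 1]
        self.m0, self.m1 = X0.mean(axis=0), X1.mean(axis=0)
        S = (np.cov(X0.T) * (len(X0) - 1) +
             np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
        d = S.shape[0]
        S = (1 - gamma) * S + gamma * np.trace(S) / d * np.eye(d)
        self.w = np.linalg.solve(S, self.m1 - self.m0)
        self.b = -0.5 * self.w @ (self.m0 + self.m1)
        return self

    def decision(self, X):
        return X @ self.w + self.b        # > 0 means class 1

# toy usage on separable data
rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((50, 10)) - 1,
               rng.standard_normal((50, 10)) + 1])
y = np.r_[np.zeros(50), np.ones(50)]
clf = RegularizedLDA().fit(X, y, gamma=0.5)
assert np.mean((clf.decision(X) > 0) == (y == 1)) > 0.9
```

The shrinkage makes the covariance estimate well-conditioned even when the feature dimension approaches the number of training trials, which is the regime described above.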
To assess classification performance, the generalization error was estimated by 10×10-fold
cross-validation. The reported standard deviation is calculated from the mean errors of the
10-fold cross-validations. The regularization coefficients were chosen by cross-validation
together with the free parameters of the feature extraction methods, see section 2.1, in
the following way. Strictly this cross-validation has to be performed on the training set.
So in this off-line analysis where in each cross-validation procedure 100 different training
sets are drawn randomly from the set of all trials one would have to do a cross-validation
(for model selection, MS) within a cross-validation (for estimating the generalization error,
GE). Obviously this would be very time consuming. On the other hand doing the model
selection by cross-validation on all trials could lead to overfitting and underestimating the generalization error. As an intermediate way MS-cross-validation was performed
on three subsets of all trials that were randomly drawn where the size of the subsets was
the same as the size of the training sets in the GE-cross-validation, i.e., here 90 % of the
whole set. This procedure was tested in several settings without any significant bias on the
estimation of the GE, cf. [13].
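The 10×10-fold estimate of the generalization error can be sketched generically as follows (our own minimal implementation; fit and predict are placeholders for any of the classifiers discussed here):

```python
import numpy as np

def ten_times_ten_fold(X, y, fit, predict, n_rep=10, n_folds=10, seed=0):
    """Mean and s.d. (over repetitions) of the n_folds-fold CV error,
    repeated n_rep times with random splits."""
    rng = np.random.default_rng(seed)
    rep_errors = []
    for _ in range(n_rep):
        folds = np.array_split(rng.permutation(len(y)), n_folds)
        errs = []
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate(folds[:k] + folds[k + 1:])
            model = fit(X[train], y[train])
            errs.append(np.mean(predict(model, X[test]) != y[test]))
        rep_errors.append(np.mean(errs))
    return np.mean(rep_errors), np.std(rep_errors)

# demo with a nearest-class-mean classifier on well-separated data
rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((40, 2)) - 3,
               rng.standard_normal((40, 2)) + 3])
y = np.r_[np.zeros(40), np.ones(40)]
fit = lambda X, y: (X[y == 0].mean(axis=0), X[y == 1].mean(axis=0))
predict = lambda m, X: (np.linalg.norm(X - m[1], axis=1) <
                        np.linalg.norm(X - m[0], axis=1)).astype(float)
err, sd = ten_times_ten_fold(X, y, fit, predict)
assert err < 0.05
```

The reported standard deviation is taken over the ten repetition means, matching the convention stated above.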
2.3 Analysis of single-features
The table in Fig. 2 shows the generalization error for single-features. Data of each subject
can be well classified. Some differences in the quality of the features for classification are
observable, but there is not one type of feature that is generally the best.
The 10×10-fold cross-validation was also used to determine how often each trial is classified correctly when belonging to the test set. Trials which were classified correctly 9 to 10 times (i.e., 90 to 100 %) are labeled 'good', while those classified wrongly 9 to 10 times are labeled 'bad'. Only a small number of trials fell into neither of those two categories ('ambivalent'), as could be expected from the small standard deviation. It is now interesting to see whether there are trials which lie in the well-classified range for one feature type but in the badly classified range for the other. Fig. 2 shows, for subject af as an example, how the badly classified trials of MRP and CSP are classified based on the respective other feature.
These results strengthen the hypothesis that it is promising to combine features.
We made the following check for the impact of non-CNS activity on classification results.
MRP based classification was applied to the EOG signals and ERD based classification was
applied to the EMG signals. All those tests resulted in accuracies at chance level (about 50 %).
Since the main concern in this paper is comparing classification with single vs. combined
features this issue was not followed in further detail.
2.4 Combination methods
Feature combination or sensor fusion strategies are rather common in speech recognition
(e.g. [14]) or vision (e.g. [15]) or robotics (e.g. [16]) where either signals on different timescales or from distinct modalities need to be combined. Typical approaches suggested are a
winner-takes-all strategy, which cannot increase performance above the best single feature
analysis, and concatenation of the single feature vectors, discussed as CONCAT below.
Furthermore combinations that use a joint probabilistic modeling [15] appear promising.
We propose two further methods that incorporate independence assumptions (PROB and, to a smaller extent, META) and allow individual decision boundary fitting to single features (META).

Figure 2: Left: Misclassification rates for single features classified with regularized LDA:

         aa            af            ak
  MRP    12.4 ± 0.6    18.4 ± 1.0    17.2 ± 0.8
  AR     13.1 ± 0.8    21.2 ± 1.0    25.1 ± 0.6
  CSP     9.5 ± 0.5    14.4 ± 0.8    17.5 ± 0.9

Free parameters of each feature extraction method were selected by cross-validation on subsets of all trials, see section 2.2. Right: Pie charts show how 'MRP-bad' and 'CSP-bad' trials for subject af are classified based on the respective other feature: white is the portion of the trials which is 'good' for the other feature (82 % and 81 %), black marks 'bad' (8 % in both pies), and gray 'ambivalent' trials (10 % and 11 %). See text for the definition of 'good', 'bad' and 'ambivalent' in this context.
(CONCAT) In this simple approach of gathering evidence, the feature vectors are simply concatenated. To account for the increased dimensionality, careful regularization is necessary.
Additionally, we tried classification with a linear programming machine (LPM), which is
appealing for its sparse feature selection property, but it did not improve results compared
to regularized LDA.
(PROB) It is well-known that LDA is the Bayes-optimal classifier, i.e., the one minimizing
the expected risk of misclassification, for two classes of known gaussian distribution with
equal covariance matrices. Here we derive the optimal classifier for combined feature
vectors X = (X_1, ..., X_N) under the additional assumption that the individual features
X_1, ..., X_N are mutually independent. Denoting by Ŷ(x) the decision function on feature
space X,

    Ŷ(x) = 'R'  ⇔  P(Y = 'R' | X = x) > P(Y = 'L' | X = x)
                ⇔  f_{Y='R'}(x) P(Y = 'R') > f_{Y='L'}(x) P(Y = 'L'),

where Y is a random variable on the labels {'L', 'R'} and f denotes densities. Using
the independence assumption one can factorize the densities. Neglecting the class priors
and exploiting the gaussian assumption (X_n | Y = y) ~ N(μ_{n,y}, Σ_n) we get the decision
function

    Ŷ(x) = 'R'  ⇔  ∑_{n=1}^{N} [ w_nᵀ x_n - (1/2) (μ_{n,'R'} + μ_{n,'L'})ᵀ w_n ] > 0,
    with w_n := Σ_n⁻¹ (μ_{n,'R'} - μ_{n,'L'}).

In terms of LDA this corresponds to forcing those elements of the estimated covariance matrix
that belong to different features to zero. Thereby fewer parameters have to be estimated and
distortions by accidental correlations of independent variables are avoided. If the classes
do not have equal covariance matrices, a non-linear version of PROB can be formulated
in analogy to quadratic discriminant analysis (QDA). To avoid overfitting we use regularisation for PROB. There are two possible ways: regularisation of the covariance matrices
with one global parameter (PROBsame) or with three separately selected parameters corresponding to the single-type features (PROBdiff).
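Estimating this decision function from data amounts to fitting one LDA per feature block and summing the discriminant outputs. The sketch below is our own minimal illustration; the function names and the shrinkage regularization detail are assumptions, not the exact implementation.

```python
import numpy as np

def fit_prob(blocks_l, blocks_r, gamma=0.1):
    """Fit one LDA per feature block under the mutual-independence assumption
    (equivalently: LDA with a block-diagonal covariance estimate).

    blocks_l / blocks_r: lists of N training arrays, each (n_trials, d_n).
    Returns a list of per-block (w_n, b_n) pairs."""
    params = []
    for XL, XR in zip(blocks_l, blocks_r):
        mL, mR = XL.mean(axis=0), XR.mean(axis=0)
        S = (np.cov(XL.T) * (len(XL) - 1) +
             np.cov(XR.T) * (len(XR) - 1)) / (len(XL) + len(XR) - 2)
        S = np.atleast_2d(S)
        d = S.shape[0]
        S = (1 - gamma) * S + gamma * np.trace(S) / d * np.eye(d)
        w = np.linalg.solve(S, mR - mL)
        params.append((w, -0.5 * w @ (mR + mL)))
    return params

def decide_prob(params, trial_blocks):
    """trial_blocks: list of the N feature vectors x_n of one trial."""
    score = sum(w @ x + b for (w, b), x in zip(params, trial_blocks))
    return 'R' if score > 0 else 'L'
```

Because each block's covariance is estimated separately, the off-block entries of the full covariance matrix are implicitly zero, which is exactly the block-diagonal structure described above.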
(META) In this approach a meta classifier is applied to the continuous output of individual
classifiers that are trained on single features beforehand. This allows a tailor-made choice
of classifiers for each feature, e.g., if the decision boundary is linear for one feature and nonlinear for another. Here we just use LDA for all features, but regularization coefficients are
selected for each single feature individually. Since the meta classifier acts on low (2 or 3)
dimensional features further regularization is not needed, so we used unregularized LDA.
META extracts discriminative information from single features independently, but the meta
classification may exploit inter-relations based on the outputs of the individual decision
functions. That means independence is assumed on the low level while possible high-level
relations are taken into account.

Table 1: Generalization errors ± s.d. of the means in 10×10-fold cross-validation for combined features compared to the most successful single-type feature. Best result for each subject is in boldface.

          Best Single   CONCAT        PROBsame      PROBdiff      META
  aa       9.5 ± 0.5     9.5 ± 0.4     6.3 ± 0.5     6.5 ± 0.5     6.7 ± 0.4
  af      14.4 ± 0.8    14.4 ± 1.2     7.4 ± 0.8     7.4 ± 0.7    10.2 ± 0.5
  ak      17.2 ± 0.8    14.8 ± 0.9    13.9 ± 1.0    13.2 ± 0.7    14.0 ± 0.8
  mean    13.7 ± 3.2    12.9 ± 2.4     9.2 ± 3.4     9.0 ± 3.0    10.3 ± 3.0
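META can be sketched as a two-level (stacking) scheme. The sketch below uses plain LDA at both levels and, for brevity, trains the meta level on in-sample base outputs, whereas in practice held-out outputs would be preferable; all names are our own.

```python
import numpy as np

def lda(X, y):
    """Plain LDA; returns (w, b) with w @ x + b > 0 meaning class 1."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0.T) * (len(X0) - 1) +
         np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
    w = np.linalg.solve(np.atleast_2d(S), m1 - m0)
    return w, -0.5 * w @ (m0 + m1)

def meta_classify(feature_blocks, y, trial_blocks):
    """Train one LDA per feature block, then a meta LDA on the stacked
    continuous outputs; classify the single trial given by trial_blocks."""
    base = [lda(X, y) for X in feature_blocks]
    Z = np.column_stack([X @ w + b for X, (w, b) in zip(feature_blocks, base)])
    W, B = lda(Z, y)
    z = np.array([x @ w + b for x, (w, b) in zip(trial_blocks, base)])
    return int(z @ W + B > 0)
```

Since the meta classifier only sees a 2- or 3-dimensional input (one continuous output per feature type), no further regularization is needed at that level, matching the argument above.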
3 Results
Table 1 shows the results for the combined classification methods and, for comparison, the best result on single-type features ('Best Single') from the table of Fig. 2. All three features were combined together. Combining only two of them (especially MRP with AR or CSP) also leads to good values, which are slightly worse, however.
The CONCAT method performs only for subject ak better than the single feature methods.
The following two problems may be responsible for that. First, there are only a few training samples and a higher dimensional space than for the single features, so the curse of dimensionality strikes harder. And second, regularisation for the single features results in different
regularisation parameters. In CONCAT a single regularisation parameter has to be found.
In our case the regularisation parameters for subject aa for MRP are about 0.001 whereas
for CSP about 0.8.
Of the other approaches, the PROB methods are the most successful, but META is very good,
too, and better than the single feature results. Differences between the two PROB methods
were not observed.
Concerning the results it is noteworthy that all subjects were BCI-untrained. Only subject
aa had experience as subject in EEG experiments. The result obtained with single-features
is in the range of the best results for untrained BCI performance with imagined movement
paradigm, cf. [17]. The result of less than 8 % error obtained with our proposed combining approach for subjects aa and af is better than that of the 3 subjects in [17], even after up to 10 feedback sessions. Subject ak, with an error rate of less than 14 %, is in the range of good results.
Additionally, it should be noted that subject aa reported that he sometimes failed to react to the stimulus due to fatigue. He estimated the portion of missed stimuli to be 5 %.
Hence the classification error of 6.3 % is very close to what is possible to achieve.
4 Concluding discussion
Combining the feature vectors corresponding to event-related desynchronization and
movement-related potentials under an independence assumption derived from a priori physiological knowledge (PROB, and to a smaller extent META) leads to an improved classification accuracy when compared to single-feature classification. In contrast, the combination of features without any assumption of independence (CONCAT) did not improve
accuracy in every case and always performed worse than PROB and META. These results
further support the hypothesis that MRP and ERD reflect independent aspects of brain activity.
In all three experiments a reduction of the error rate by about 25 % to 50 %
could be achieved by combining methods. Additionally, the combined approach has the
practical advantage that no prior decision has to be made about what feature to use.
Combining features of different brain processes in feedback scenarios where the subject
is trying to adapt to the feedback algorithm could in principle hold the risk of making the
learning task too complex for the subject. This, however, needs to be investigated in future
online studies.
Finally, we would like to remark that the proposed feature combination principles can be
used in other application areas where independent features can be obtained.
Acknowledgments.
We thank Sebastian Mika, Roman Krepki, Thorsten Zander, Gunnar Raetsch, Motoaki
Kawanabe and Stefan Harmeling for helpful discussions. The studies were supported by a
grant of the Bundesministerium für Bildung und Forschung (BMBF), FKZ 01IBB02A and
FKZ 01IBB02B.
References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767-791, 2002.
[2] J. R. Wolpaw, D. J. McFarland, and T. M. Vaughan, "Brain-Computer Interface Research at the Wadsworth Center", IEEE Trans. Rehab. Eng., 8(2): 222-226, 2000.
[3] J. A. Pineda, B. Z. Allison, and A. Vankov, "The Effects of Self-Movement, Observation, and Imagination on μ-Rhythms and Readiness Potential (RP's): Toward a Brain-computer Interface (BCI)", IEEE Trans. Rehab. Eng., 8(2): 219-222, 2000.
[4] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297-298, 1999.
[5] B. O. Peters, G. Pfurtscheller, and H. Flyvbjerg, "Automatic Differentiation of Multichannel EEG Signals", IEEE Trans. Biomed. Eng., 48(1): 111-116, 2001.
[6] C. Babiloni, F. Carducci, F. Cincotti, P. M. Rossini, C. Neuper, G. Pfurtscheller, and F. Babiloni, "Human Movement-Related Potentials vs Desynchronization of EEG Alpha Rhythm: A High-Resolution EEG Study", NeuroImage, 10: 658-665, 1999.
[7] C. Toro, G. Deuschl, R. Thather, S. Sato, C. Kufta, and M. Hallett, "Event-related desynchronization and movement-related cortical potentials on the ECoG and EEG", Electroencephalogr. Clin. Neurophysiol., 93: 380-389, 1994.
[8] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 2002, to appear.
[9] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Trans. Rehab. Eng., 8(4): 441-446, 2000.
[10] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, "Linear spatial integration for single trial detection in encephalography", NeuroImage, 2002, to appear.
[11] K.-R. Müller, C. W. Anderson, and G. E. Birch, "Linear and Non-Linear Methods for Brain-Computer Interfaces", IEEE Trans. Neural Sys. Rehab. Eng., 2003, submitted.
[12] J. H. Friedman, "Regularized Discriminant Analysis", J. Amer. Statist. Assoc., 84(405): 165-175, 1989.
[13] G. Rätsch, T. Onoda, and K.-R. Müller, "Soft Margins for AdaBoost", Machine Learning, 42(3): 287-320, 2001.
[14] N. Morgan and H. Bourlard, "Continuous Speech Recognition: An Introduction to the Hybrid HMM/Connectionist Approach", Signal Processing Magazine, 25-42, 1995.
[15] M. Brand, N. Oliver, and A. Pentland, "Coupled hidden markov models for complex action recognition", 1996.
[16] S. Thrun, A. Bücken, W. Burgard, D. Fox, T. Fröhlinghaus, D. Henning, T. Hofmann, M. Krell, and T. Schmidt, "Map Learning and High-Speed Navigation in RHINO", in: D. Kortenkamp, R. Bonasso, and R. Murphy, eds., AI-based Mobile Robots, MIT Press, 1998.
[17] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, "EEG-based discrimination between imagination of right and left hand movement", Electroencephalogr. Clin. Neurophysiol., 103: 642-651, 1997.
Feature Selection and Classification on
Matrix Data: From Large Margins To
Small Covering Numbers
Sepp Hochreiter and Klaus Obermayer
Department of Electrical Engineering and Computer Science
Technische Universit?at Berlin
10587 Berlin, Germany
{hochreit,oby}@cs.tu-berlin.de
Abstract
We investigate the problem of learning a classification task for
datasets which are described by matrices. Rows and columns of
these matrices correspond to objects, where row and column objects may belong to different sets, and the entries in the matrix
express the relationships between them. We interpret the matrix elements as being produced by an unknown kernel which operates on
object pairs and we show that - under mild assumptions - these kernels correspond to dot products in some (unknown) feature space.
Minimizing a bound for the generalization error of a linear classifier which has been obtained using covering numbers we derive an
objective function for model selection according to the principle of
structural risk minimization. The new objective function has the
advantage that it allows the analysis of matrices which are not positive definite, and not even symmetric or square. We then consider
the case that row objects are interpreted as features. We suggest an
additional constraint, which imposes sparseness on the row objects
and show, that the method can then be used for feature selection.
Finally, we apply this method to data obtained from DNA microarrays, where "column" objects correspond to samples, "row" objects
correspond to genes and matrix elements correspond to expression
levels. Benchmarks are conducted using standard one-gene classification and support vector machines and K-nearest neighbors after
standard feature selection. Our new method extracts a sparse set
of genes and provides superior classification results.
1 Introduction
Many properties of sets of objects can be described by matrices, whose rows and
columns correspond to objects and whose elements describe the relationship between
them. One typical case are so-called pairwise data, where rows as well as columns
of the matrix represent the objects of the dataset (Fig. 1a) and where the entries of
the matrix denote similarity values which express the relationships between objects.
[Figure 1 appears here. Panel (a), "Pairwise Data": a matrix with row and column objects A-L, filled with real-valued similarity entries. Panel (b), "Feature Vectors": a matrix with column objects A-G and row objects interpreted as features, filled with real-valued entries.]
Figure 1: Two typical examples of matrix data (see text). (a) Pairwise data. Row (A-L) and column (A-L) objects coincide. (b) Feature vectors. Column objects (A-G) differ from row objects (? - ?). The latter are interpreted as features.
Another typical case occurs, if objects are described by a set of features (Fig. 1b).
In this case, the column objects are the objects to be characterized, the row objects
correspond to their features and the matrix elements denote the strength with which
a feature is expressed in a particular object.
In the following we consider the task of learning a classification problem on matrix
data. We consider the case that class labels are assigned to the column objects of
the training set. Given the matrix and the class labels we then want to construct
a classifier with good generalization properties. From all the possible choices we
select classifiers from the support vector machine (SVM) family [1, 2] and we use
the principle of structural risk minimization [15] for model selection - because of its
recent success [11] and its theoretical properties [15].
Previous work on large margin classifiers for datasets, where objects are described
by feature vectors and where SVMs operate on the column vectors of the matrix, is
abundant. However, there is one serious problem which arises when the number of
features becomes large and comparable to the number of objects: Without feature
selection, SVMs are prone to overfitting, despite the complexity regularization which
is implicit in the learning method [3]. Rather than being sparse in the number of
support vectors, the classifier should be sparse in the number of features used for
classification. This relates to the result [15] that the number of features provides an
upper bound on the number of "essential" support vectors.
Previous work on large margin classifiers for datasets, where objects are described by
their mutual similarities, was centered around the idea that the matrix of similarities
can be interpreted as a Gram matrix (see e.g. Hochreiter & Obermayer [7]). Work
along this line, however, was so far restricted to the case (i) that the Gram matrix is
positive definite (although methods have been suggested to modify indefinite Gram
matrices in order to restore positive definiteness [10]) and (ii) that row and column
objects are from the same set (pairwise data) [7].
In this contribution we extend the Gram matrix approach to matrix data, where
row and column objects belong to different sets. Since we can no longer expect that
the matrices are positive definite (or even square), a new objective function must be
derived. This is done in the next section, where an algorithm for the construction
of linear classifiers is derived using the principle of structural risk minimization.
Section 3 is concerned with the question under what conditions matrix elements
can indeed be interpreted as vector products in some feature space. The method
is specialized to pairwise data in Section 4. A sparseness constraint for feature
selection is introduced in Section 5. Section 6, finally, contains an evaluation of the
new method for DNA microarray data as well as benchmark results with standard
classifiers which are based on standard feature selection procedures.
2 Large Margin Classifiers for Matrix Data
In the following we consider two sets X and Z of objects, which are described by
feature vectors x and z. Based on the feature vectors x we construct a linear
classifier defined through the classification function
f(x) = ⟨w, x⟩ + b ,    (1)
where ⟨·, ·⟩ denotes a dot product. The zero isoline of f is a hyperplane which is parameterized by its unit normal vector ŵ and by its perpendicular distance b/‖w‖₂ from the origin. The hyperplane's margin γ with respect to X is given by

γ = min_{x∈X} |⟨ŵ, x⟩ + b/‖w‖₂| .    (2)

Setting γ = ‖w‖₂⁻¹ allows us to treat normal vectors w which are not normalized, if the margin is normalized to 1. According to [15] this is called the "canonical form" of the separation hyperplane. The hyperplane with largest margin is then obtained by minimizing ‖w‖₂² for a margin which equals 1.
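Eq. (2) is straightforward to evaluate for a concrete hyperplane and a finite training set. The sketch below is illustrative only; the points and weight vector are made up:

```python
import numpy as np

def margin(w, b, X):
    """Margin of the hyperplane {x : <w, x> + b = 0} with respect to the
    columns of X, i.e. min_i |<w_hat, x^i> + b/||w||_2| as in eq. (2)."""
    w = np.asarray(w, dtype=float)
    norm = np.linalg.norm(w)
    return np.min(np.abs((X.T @ w + b) / norm))

# columns of X are the training points x^i (made-up data)
X = np.array([[1.0, 2.0, -1.0, -2.0],
              [1.0, 0.5, -1.0, -0.5]])
w = np.array([1.0, 1.0])
b = 0.0
print(margin(w, b, X))   # sqrt(2), attained at the points (1, 1) and (-1, -1)
```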
It has been shown [14, 13, 12] that the generalization error of a linear classifier, eq. (1), can be bounded from above with probability 1 − δ by the bound B,

B(L, a/γ, δ) = (2/L) ( log₂ EN(γ/(2a), F, 2L) + log₂ (4La/(γδ)) ) ,    (3)

provided that the training classification error is zero and f(x) is bounded by −a ≤ f(x) ≤ a for all x drawn iid from the (unknown) distribution of objects. L denotes the number of training objects x, γ denotes the margin and EN(γ, F, L) the expected γ-covering number of a class F of functions that map data objects from T to [0, 1] (see Theorem 7.7 in [14] and Proposition 19 in [12]). In order to obtain
a classifier with good generalization properties we suggest to minimize a/γ under proper constraints. a is not known in general, however, because the probability distribution of objects (in particular its support) is not known. In order to avoid this problem we approximate a by the range m = 0.5 (max_i ⟨ŵ, xⁱ⟩ − min_i ⟨ŵ, xⁱ⟩) of values in the training set and minimize the quantity B(L, m/γ, δ) instead of eq. (3).
Let X := (x¹, x², …, x^L) be the matrix of feature vectors of L objects from the set X and Z := (z¹, z², …, z^P) be the matrix of feature vectors of P objects from the set Z. The objects of set X are labeled, and we summarize all labels using a label matrix Y: [Y]_{ij} := yⁱ δ_{ij} ∈ R^{L×L}, where δ is the Kronecker delta. Let us consider the case that the feature vectors X and Z are unknown, but that we are given the matrix K := XᵀZ of the corresponding scalar products. The training set is then given by the data matrix K and the corresponding label matrix Y. The principle of structural risk minimization is implemented by minimizing an upper bound on
(m/γ)² given by ‖Xᵀw‖₂², as can be seen from m/γ ≤ ‖w‖₂ max_i |⟨ŵ, xⁱ⟩| ≤ √(Σᵢ ⟨w, xⁱ⟩²) = ‖Xᵀw‖₂. The constraints f(xⁱ) = yⁱ imposed by the training set are taken into account using the expressions 1 − ξᵢ⁺ ≤ yⁱ (⟨w, xⁱ⟩ + b) ≤ 1 + ξᵢ⁻, where ξᵢ⁺, ξᵢ⁻ ≥ 0 are slack variables which should also be minimized. We thus obtain the optimization problem

min_{w,b,ξ⁺,ξ⁻}  (1/2) ‖Xᵀw‖₂² + M⁺ 1ᵀξ⁺ + M⁻ 1ᵀξ⁻    (4)
s.t.  Y⁻¹ (Xᵀw + b1) − 1 + ξ⁺ ≥ 0
      Y⁻¹ (Xᵀw + b1) − 1 − ξ⁻ ≤ 0
      ξ⁺, ξ⁻ ≥ 0 .

M⁺ penalizes wrong classification and M⁻ absolute values exceeding 1. For classification M⁻ may be set to zero. Note that the quadratic expression in the objective function is convex, which follows from ‖Xᵀw‖₂² = wᵀX Xᵀw and the fact that X Xᵀ is positive semidefinite.
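The convexity claim is easy to check numerically: ‖Xᵀw‖₂² = wᵀX Xᵀw holds for any X and w, and the Hessian X Xᵀ of the quadratic term has no negative eigenvalues. A quick sketch with arbitrary random data (shapes chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))        # 5-dim features, 8 objects (arbitrary)
w = rng.standard_normal(5)

lhs = np.linalg.norm(X.T @ w) ** 2     # ||X^T w||_2^2
rhs = w @ (X @ X.T) @ w                # w^T X X^T w
assert np.isclose(lhs, rhs)

# the Hessian X X^T of the quadratic term is positive semidefinite
assert np.linalg.eigvalsh(X @ X.T).min() >= -1e-10
```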
Let ᾱ⁺, ᾱ⁻ be the dual variables for the constraints imposed by the training set, ᾱ := ᾱ⁺ − ᾱ⁻, and α a vector with ᾱ = Y XᵀZ α. Two cases must be treated: α is not unique or does not exist. First, if α is not unique we choose α according to Section 5. Second, if α does not exist we set α = (ZᵀX Y⁻ᵀ Y⁻¹ XᵀZ)⁻¹ ZᵀX Y⁻ᵀ ᾱ, where Y⁻ᵀ Y⁻¹ is the identity. The optimality conditions require that the following derivatives of the Lagrangian L are zero: ∂L/∂b = 1ᵀY⁻¹ᾱ, ∂L/∂w = X Xᵀw − X Y⁻¹ᾱ, ∂L/∂ξ± = M± 1 − ᾱ± − μ±, where μ⁺, μ⁻ ≥ 0 are the Lagrange multipliers for the slack variables. We obtain X Xᵀ(w − Z α) = 0, which is ensured by w = Z α, 0 = 1ᵀXᵀZ α, ᾱᵢ⁺ ≤ M⁺, and ᾱᵢ⁻ ≤ M⁻. The Karush–Kuhn–Tucker conditions give b = (1ᵀY⁻ᵀ1)/(1ᵀ1) if ᾱᵢ⁺ < M⁺ and ᾱᵢ⁻ < M⁻.
In the following we set M⁺ = M⁻ =: M and C := M ‖Y⁻ᵀXᵀZ‖⁻¹_row, so that ‖α‖_∞ ≤ C implies ‖ᾱ‖_∞ ≤ ‖Y⁻ᵀXᵀZ‖_row ‖α‖_∞ ≤ M, where ‖·‖_row is the row-sum norm. We then obtain the following dual problem of eq. (4):

min_α  (1/2) αᵀ KᵀK α − 1ᵀ Y K α    (5)
subject to  1ᵀ K α = 0 ,  |αᵢ| ≤ C.
If M⁺ ≠ M⁻ we must add another constraint. For M⁻ = 0, for example, we have to add Y K (α⁺ − α⁻) ≥ 0. If a classifier has been selected according to eq. (5), a new example u is classified according to the sign of

f(u) = ⟨w, u⟩ + b = Σ_{i=1}^{P} αᵢ ⟨zⁱ, u⟩ + b .    (6)
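To make the training and classification steps concrete, here is a minimal numerical sketch of the dual problem (5) solved with a generic off-the-shelf solver. This is not the authors' implementation: it uses SLSQP, and the bias b is recovered by simply averaging y − Kα over the training set rather than from the Karush-Kuhn-Tucker conditions (a simplifying assumption):

```python
import numpy as np
from scipy.optimize import minimize

def train_dual(K, y, C):
    """Solve eq. (5): min_a 0.5 a^T K^T K a - 1^T Y K a
    s.t. 1^T K a = 0 and |a_i| <= C, with Y = diag(y)."""
    L, P = K.shape
    G = K.T @ K
    obj = lambda a: 0.5 * a @ G @ a - y @ (K @ a)
    jac = lambda a: G @ a - K.T @ y
    cons = [{"type": "eq",
             "fun": lambda a: np.sum(K @ a),
             "jac": lambda a: K.sum(axis=0)}]
    res = minimize(obj, np.zeros(P), jac=jac, method="SLSQP",
                   bounds=[(-C, C)] * P, constraints=cons)
    alpha = res.x
    b = np.mean(y - K @ alpha)     # simplified bias estimate (an assumption)
    return alpha, b

def predict(K_new, alpha, b):
    """Eq. (6): K_new[i, j] = <z^j, u^i> for new examples u^i."""
    return np.sign(K_new @ alpha + b)

# sanity run on trivially separable pairwise data (K = Gram matrix)
K = np.eye(4)
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = train_dual(K, y, C=10.0)
print(predict(K, alpha, b))
```

For pairwise data with Z = X the matrix K is simply the Gram matrix of the training objects, as in the sanity run above.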
The optimal classifier is selected by optimizing eq. (5), and as long as a = m holds
true for all possible objects x (which are assumed to be drawn iid), the generalization
error is bounded by eq. (3). If outliers are rejected, condition a = m can always be
enforced. For large training sets the number of rejections is small: The probability
P{|⟨ŵ, x⟩| > m} that an outlier occurs can be bounded with confidence 1 − δ using the additive Chernoff bounds (e.g. [15]):

P{|⟨ŵ, x⟩| > m} ≤ √( −log δ / (2L) ) .    (7)

But note that not all outliers are misclassified, and the trivial bound on the generalization error is still of the order L⁻¹.
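As a worked example of eq. (7): for a training set of L = 60 objects (the sample size used in Section 6) and confidence 1 − δ = 0.95 (values chosen for illustration), the outlier probability is bounded by roughly 0.16:

```python
import math

def outlier_bound(L, delta):
    """Right-hand side of eq. (7): sqrt(-log(delta) / (2 L))."""
    return math.sqrt(-math.log(delta) / (2 * L))

print(outlier_bound(60, 0.05))   # ~0.158
```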
3 Kernel Functions, Measurements and Scalar Products
In the last section we have assumed that the matrix K is derived from scalar
products between the feature vectors x and z which describe the objects from the
sets X and Z. For all practical purposes, however, the only information available
is summarized in the matrices K and Y . The feature vectors are not known and
it is even unclear whether they exist. In order to apply the results of Section 2 to
practical problems the following question remains to be answered: What are the
conditions under which the measurement operator k(., z) can indeed be interpreted
as a scalar product between feature vectors and under which the matrix K can be
interpreted as a matrix of kernel evaluations?
In order to answer these questions, we make use of the following theorems. Let L²(H) denote the set of functions h from H with ∫ h²(x) dx < ∞ and ℓ² the set of infinite vectors (a₁, a₂, …) where Σᵢ aᵢ² converges.
Theorem 1 (Singular Value Expansion) Let H₁ and H₂ be Hilbert spaces. Let ω be from L²(H₁) and let k be a kernel from L²(H₂, H₁) which defines a Hilbert-Schmidt operator T_k : H₁ → H₂,

(T_k ω)(x) = f(x) = ∫ k(x, z) ω(z) dz .    (8)

Then there exists an expansion k(x, z) = Σₙ sₙ eₙ(z) gₙ(x) which converges in the L²-sense. The sₙ ≥ 0 are the singular values of T_k, and eₙ ∈ H₁, gₙ ∈ H₂ are the corresponding orthonormal functions.
Corollary 1 (Linear Classification in ℓ²) Let the assumptions of Theorem 1 hold and let ∫_{H₁} (k(x, z))² dz ≤ K² for all x. Let ⟨·,·⟩_{H₁} be a dot product in H₁. We define w := (⟨ω, e₁⟩_{H₁}, ⟨ω, e₂⟩_{H₁}, …), and φ(x) := (s₁ g₁(x), s₂ g₂(x), …). Then the following holds true:

• w, φ(x) ∈ ℓ², where ‖w‖²_{ℓ²} = ‖ω‖²_{H₁}, and
• ‖f‖²_{H₂} = ⟨T*_k T_k ω, ω⟩_{H₁}, where T*_k is the adjoint operator of T_k,

and the following sum converges absolutely and uniformly:

f(x) = ⟨w, φ(x)⟩_{ℓ²} = Σₙ sₙ ⟨ω, eₙ⟩_{H₁} gₙ(x) .    (9)

Eq. (9) is a linear classifier in ℓ². φ maps vectors from H₂ into the feature space. We define a second mapping from H₁ to the feature space by ψ(z) := (e₁(z), e₂(z), …). For ω = Σ_{i=1}^{P} αᵢ δ(zⁱ), where δ(zⁱ) is the Dirac delta, we recover the discrete classifier (6) and w = Σ_{i=1}^{P} αᵢ ψ(zⁱ). We observe that ‖f‖²_{H₂} = αᵀ KᵀK α = ‖Xᵀw‖₂². A problem may arise if zⁱ belongs to a set of measure zero which does not obey the singular value decomposition of k. If this occurs ψ(zⁱ) may be set to the zero function.
Theorem 1 tells us that any measurement kernel k applied to objects x and z can be expressed for almost all x and z as k(x, z) = ⟨φ(x), ψ(z)⟩, where ⟨·⟩ defines a dot product in some feature space for almost all x, z. Hence, we can define a matrix X := (φ(x¹), φ(x²), …, φ(x^L)) of feature vectors for the L column objects and a matrix Z := (ψ(z¹), ψ(z²), …, ψ(z^P)) of feature vectors for the P row objects and apply the results of Section 2.
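For a finite matrix K this construction can be mimicked directly with a singular value decomposition: K = U S Vᵀ yields finite-dimensional feature vectors for column and row objects whose dot products reproduce K exactly, even when K is neither square nor symmetric. A small sketch with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 4))   # 6 column objects, 4 row objects; not square

U, s, Vt = np.linalg.svd(K, full_matrices=False)
Phi = U * np.sqrt(s)              # row i: feature vector of column object x^i
Psi = Vt.T * np.sqrt(s)           # row j: feature vector of row object z^j

# K_ij = <phi(x^i), psi(z^j)> is reproduced exactly
assert np.allclose(Phi @ Psi.T, K)
```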
4 Pairwise Data
An interesting special case occurs if row and column objects coincide. This kind of
data is known as pairwise data [5, 4, 8] where the objects to be classified serve as
features and vice versa. Like in Section 3 we can expand the measurement kernel
via singular value decomposition but that would introduce two different mappings
(φ and ψ) into the feature space. We will use one map for row and column objects and perform an eigenvalue decomposition. The consequence is that eigenvalues may be negative (see the following theorem).
Theorem 2 (Eigenvalue Expansion) Let definitions and assumptions be as in Theorem 1. Let H₁ = H₂ = H and let k be symmetric. Then there exists an expansion k(x, z) = Σₙ λₙ eₙ(z) eₙ(x) which converges in the L²-sense. The λₙ are the eigenvalues of T_k with the corresponding orthonormal eigenfunctions eₙ.
Corollary 2 (Minkowski Space Classification) Let the assumptions of Theorem 2 and ∫_H (k(x, z))² dz ≤ K² for all x hold true. We define w := (√|λ₁| ⟨ω, e₁⟩_H, √|λ₂| ⟨ω, e₂⟩_H, …), φ(x) := (√|λ₁| e₁(x), √|λ₂| e₂(x), …), and ℓ²_S to denote ℓ² with a given signature S = (sign(λ₁), sign(λ₂), …). Then the following holds true:

• ‖w‖²_{ℓ²_S} = Σₙ sign(λₙ) (√|λₙ| ⟨ω, eₙ⟩_H)² = Σₙ λₙ ⟨ω, eₙ⟩²_H = ⟨T_k ω, ω⟩_H, and
• ‖φ(x)‖²_{ℓ²_S} = Σₙ λₙ eₙ(x)² = k(x, x) in the L² sense,

and the following sum converges absolutely and uniformly:

f(x) = ⟨w, φ(x)⟩_{ℓ²_S} = Σₙ λₙ ⟨ω, eₙ⟩_H eₙ(x) .    (10)

Eq. (10) is a linear classifier in the Minkowski space ℓ²_S. For the discrete case ω = Σ_{i=1}^{P} αᵢ δ(zⁱ), the normal vector is w = Σ_{i=1}^{P} αᵢ φ(zⁱ). In comparison to Corollary 1, we have ‖w‖²_{ℓ²_S} = αᵀ K α, and must assume that ‖φ(x)‖²_{ℓ²_S} does converge. Unfortunately, this can be assured in general only for almost all x. If k is both continuous and positive definite and if H is compact, then the sum converges uniformly and absolutely for all x (Mercer).
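For a finite symmetric but indefinite K, the analogue of Theorem 2 is the ordinary eigendecomposition, and the signature S makes the indefiniteness explicit: αᵀKα, the squared "norm" of w in ℓ²_S, can be negative. A small numeric sketch (the matrix is made up):

```python
import numpy as np

K = np.array([[0.0, 2.0],
              [2.0, 1.0]])            # symmetric but indefinite (made up)

lam, E = np.linalg.eigh(K)
S = np.sign(lam)                      # signature of l2_S
Phi = E * np.sqrt(np.abs(lam))        # row i: phi(z^i) = (sqrt|lam_n| e_n(z^i))_n

# k(z^i, z^j) = sum_n sign(lam_n) phi_n(z^i) phi_n(z^j)
assert np.allclose((Phi * S) @ Phi.T, K)

alpha = np.array([1.0, -1.0])
print(alpha @ K @ alpha)              # -3.0: ||w||^2 in l2_S can be negative
```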
5 Sparseness and Feature Selection
As mentioned in the text after optimization problem (4), α may not be unique and an additional regularization term is needed. We choose the regularization term such that it enforces sparseness and that it also can be used for feature selection. We choose ε ‖α‖₁, where ε is the regularization parameter. We separate α into a positive part α⁺ and a negative part α⁻ with α = α⁺ − α⁻ and αᵢ⁺, αᵢ⁻ ≥ 0 [11]. The dual optimization problem is then given by

min_α  (1/2) (α⁺ − α⁻)ᵀ KᵀK (α⁺ − α⁻) − 1ᵀ Y K (α⁺ − α⁻) + ε 1ᵀ (α⁺ + α⁻)    (11)
s.t.  1ᵀ K (α⁺ − α⁻) = 0 ,  C1 ≥ α⁺, α⁻ ≥ 0 .

If α is sparse, i.e. if many αᵢ = αᵢ⁺ − αᵢ⁻ are zero, the classification function f(u) = ⟨w, u⟩ + b = Σ_{i=1}^{P} (αᵢ⁺ − αᵢ⁻) ⟨zⁱ, u⟩ + b contains only few terms. This saves on the number of measurements ⟨zⁱ, u⟩ for new objects and yields improved classification performance due to the reduced number of features zⁱ [15].
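The soft-thresholding effect of the ε‖α‖₁ term can be sketched with the same α = α⁺ − α⁻ split as in (11). The sketch below uses a generic solver and an arbitrary tolerance for calling a coefficient zero; it is an illustration, not the authors' solver. With K = I the problem decouples and the effect of ε is easy to predict:

```python
import numpy as np
from scipy.optimize import minimize

def sparse_dual(K, y, C, eps, tol=1e-4):
    """Eq. (11): min 0.5 a^T K^T K a - y^T K a + eps * ||a||_1
    via the split a = a_plus - a_minus with 0 <= a_plus, a_minus <= C."""
    L, P = K.shape
    G = K.T @ K

    def obj(v):
        a = v[:P] - v[P:]
        return 0.5 * a @ G @ a - y @ (K @ a) + eps * np.sum(v)

    cons = [{"type": "eq", "fun": lambda v: np.sum(K @ (v[:P] - v[P:]))}]
    res = minimize(obj, np.zeros(2 * P), method="SLSQP",
                   bounds=[(0.0, C)] * (2 * P), constraints=cons)
    alpha = res.x[:P] - res.x[P:]
    return alpha, int(np.sum(np.abs(alpha) > tol))

K = np.eye(4)
y = np.array([1.0, 1.0, -1.0, -1.0])
a1, n1 = sparse_dual(K, y, C=10.0, eps=0.5)   # soft threshold: alpha = 0.5 * y
a2, n2 = sparse_dual(K, y, C=10.0, eps=2.0)   # eps exceeds |y_i|: alpha = 0
print(n1, n2)
```

Larger ε drives more coefficients, and hence more features (row objects), to exactly zero.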
6 Application to DNA Microarray Data
We apply our new method to the DNA microarray data published in [9]. Column objects are samples from different brain tumors of the medulloblastoma kind. The samples were obtained from 60 patients, which were treated in a similar way, and the samples were labeled according to whether a patient responded well to chemo- or radiation therapy. Row objects correspond to genes. Transcriptions of 7,129
genes were tagged with fluorescent dyes and used as a probe in a binding assay.
For every sample-gene pair, the fluorescence of the bound transcripts - a snapshot
of the level of gene expression - was measured. This gave rise to a 60 × 7,129 real
valued sample-gene matrix where each entry represents the level of gene expression
in the corresponding sample. For more details see [9].
The task is now to construct a classifier which predicts therapy outcome on the
basis of samples taken from new patients. The major problem of this classification
task is the limited number of samples - given the large number of genes. Therefore,
feature selection is a prerequisite for good generalization [6, 16]. We construct the
classifier using a two step procedure. In a first step, we apply our new method
on a 59 × 7,129 matrix, where one column object was withheld to avoid biased
feature selection. We choose ? to be fairly large in order to obtain a sparse set of
features. In a second step, we use the selected features only and apply our method
once more on the reduced sample-gene matrix, but now with a small value of ?. The
C-parameter is used for regularization instead.
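The two-step protocol can be phrased as an ordinary leave-one-out loop in which feature selection is redone inside every fold (this is what withholding one column object before selection achieves). The function names `select_features` and `train_classifier` below are hypothetical stand-ins for the two P-SVM runs (large ε for selection; small ε plus C-regularization for the final classifier):

```python
import numpy as np

def leave_one_out(K, y, select_features, train_classifier):
    """K: samples x genes matrix, y: labels in {-1, +1}. Feature selection is
    redone inside every fold to avoid biased selection."""
    errors = 0
    n = len(y)
    for i in range(n):
        train = np.arange(n) != i                      # withhold sample i
        genes = select_features(K[train], y[train])    # step 1 (sparse, large eps)
        model = train_classifier(K[train][:, genes], y[train])  # step 2
        if model(K[i, genes]) != y[i]:
            errors += 1
    return errors
```

With 60 samples this is the protocol behind Table 1: 60 folds, each with its own gene set.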
Feature Selection / Classification   # F   # E
TrkC                                   1    20
statistic / SVM                        8    15
statistic / Comb1                      -    14
statistic / KNN                        -    13
statistic / Comb2                      -    12

Feature Selection / Classification    C     # F        # E
P-SVM / C-SVM                         1.0   40/45/50   5/4/5
P-SVM / C-SVM                         0.01  40/45/50   5/5/5
P-SVM / P-SVM                         0.1   40/45/50   4/4/5
Table 1: Benchmark results for DNA microarray data (for explanations see text). The table shows the classification error given by the number of wrong classifications ("E") for different numbers of selected features ("F") and for different values of the parameter C. The feature selection method is signal-to-noise statistic and t-statistic, denoted by "statistic", or our method P-SVM. Data are provided for "TrkC"-gene classification, standard SVMs, weighted TrkC/SVM (Comb1), K nearest neighbor (KNN), combined SVM/TrkC/KNN (Comb2), and our procedure (P-SVM) used for classification. Except for our method (P-SVM), results were taken from [9].
Table 1 shows the result of a leave-one-out cross-validation procedure, where the classification error is given for different numbers of selected features. Our method (P-SVM) is compared with "TrkC"-gene classification (one-gene classification), standard SVMs, weighted TrkC/SVM classification, K nearest neighbor (KNN), and a combined SVM/TrkC/KNN classifier. For the latter methods, feature selection was based on the correlation of features with classes using signal-to-noise statistics and t-statistics [3]. For our method we use C = 1.0 and 0.1 ≤ ε ≤ 1.5 for feature selection in step one, which gave rise to 10 to 1000 selected features. The
feature selection procedure (also a classifier) had its lowest misclassification rate
between 20 and 40 features. For the construction of the classifier we used in step
two ε = 0.01. Our feature selection method clearly outperforms standard methods: the number of misclassifications is down by a factor of 3 (for 45 selected genes).
Acknowledgments
We thank the anonymous reviewers for their hints to improve the paper. This work
was funded by the DFG (SFB 618).
References
[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proc. of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, Pittsburgh, PA, 1992.
[2] C. Cortes and V. N. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[3] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286(5439):531-537, 1999.
[4] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on pairwise proximity data. In NIPS 11, pages 438-444, 1999.
[5] T. Graepel, R. Herbrich, B. Schölkopf, A. J. Smola, P. L. Bartlett, K.-R. Müller, K. Obermayer, and R. C. Williamson. Classification on proximity data with LP-machines. In ICANN 99, pages 304-309, 1999.
[6] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Mach. Learn., 46:389-422, 2002.
[7] S. Hochreiter and K. Obermayer. Classification of pairwise proximity data with support vectors. In The Learning Workshop. Y. LeCun and Y. Bengio, 2002.
[8] T. Hofmann and J. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Trans. on Pat. Analysis and Mach. Intell., 19(1):1-14, 1997.
[9] S. L. Pomeroy, P. Tamayo, M. Gaasenbeek, L. M. Sturla, M. Angelo, M. E. McLaughlin, J. Y. H. Kim, L. C. Goumnerova, P. M. Black, C. Lau, J. C. Allen, D. Zagzag, J. M. Olson, T. Curran, C. Wetmore, J. A. Biegel, T. Poggio, S. Mukherjee, R. Rifkin, A. Califano, G. Stolovitzky, D. N. Louis, J. P. Mesirov, E. S. Lander, and T. R. Golub. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature, 415(6870):436-442, 2002.
[10] V. Roth, J. Buhmann, and J. Laub. Pairwise clustering is equivalent to classical k-means. In The Learning Workshop. Y. LeCun and Y. Bengio, 2002.
[11] B. Schölkopf and A. J. Smola. Learning with Kernels - Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, 2002.
[12] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. A framework for structural risk minimisation. In Comp. Learn. Th., pages 68-76, 1996.
[13] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44:1926-1940, 1998.
[14] J. Shawe-Taylor and N. Cristianini. On the generalisation of soft margin algorithms. Technical Report NC2-TR-2000-082, NeuroCOLT2, Department of Computer Science, Royal Holloway, University of London, 2000.
[15] V. Vapnik. The Nature of Statistical Learning Theory. Springer, NY, 1995.
[16] J. Weston, S. Mukherjee, O. Chapelle, M. Pontil, T. Poggio, and V. Vapnik. Feature selection for SVMs. In NIPS 12, pages 668-674, 2000.
Optimality of Reinforcement Learning
Algorithms with Linear Function
Approximation
Ralf Schoknecht
ILKD
University of Karlsruhe, Germany
[email protected]
Abstract
There are several reinforcement learning algorithms that yield approximate solutions for the problem of policy evaluation when the
value function is represented with a linear function approximator.
In this paper we show that each of the solutions is optimal with
respect to a specific objective function. Moreover, we characterise
the different solutions as images of the optimal exact value function under different projection operations. The results presented
here will be useful for comparing the algorithms in terms of the
error they achieve relative to the error of the optimal approximate
solution.
1 Introduction
In large domains the determination of an optimal value function via a tabular representation is no longer feasible with respect to time and memory considerations.
Therefore, reinforcement learning (RL) algorithms are combined with linear function approximation schemes. However, the different RL algorithms, that all achieve
the same optimal solution in the tabular case, converge to different solutions when
combined with function approximation. Up to now it is not clear which of the
solutions, i.e. which of the algorithms, should be preferred. One reason is that a
characterisation of the different solutions in terms of the objective functions they
optimise is partly missing. In this paper we state objective functions for the TD(0)
algorithm [9], the LSTD algorithm [4, 3] and the residual gradient algorithm [1] applied to the problem of policy evaluation, i.e. the determination of the value function
for a fixed policy. Moreover, we characterise the different solutions as images of the
optimal exact value function under different projection operations. We think that
an analysis of the different optimisation criteria and the projection operations will
be useful for determining the errors that the different algorithms achieve relative to
the error of the theoretically optimal approximate solution. This will yield a criterion for selecting an optimal RL algorithm. For the TD(0) algorithm such error
bounds with respect to a specific norm are already known [2, 10] but for the other
algorithms there are no comparable results.
2 Exact Policy Evaluation
For a Markov decision process (MDP) with finite state space S (|S| = N), action space A, state transition probabilities p : (S, S, A) → [0, 1] and stochastic reward function r : (S, A) → R, policy evaluation is concerned with solving the Bellman equation

V^μ = γ P^μ V^μ + R^μ    (1)

for a fixed policy μ : S → A. V^μ_i denotes the value of state s_i, P^μ_{i,j} = p(s_i, s_j, μ(s_i)), R^μ_i = E{r(s_i, μ(s_i))} and γ is the discount factor. As the policy μ is fixed we will omit it in the following to make notation easier.
The fixed point V* of equation (1) can be determined iteratively with an operator T : R^N → R^N by

T V^n = V^{n+1} = γ P V^n + R .    (2)

This iteration converges to a unique fixed point [2], that is given by

V* = (I − γP)^{-1} R ,    (3)

where (I − γP) is invertible for every stochastic matrix P.
3 Approximate Policy Evaluation
If the state space S gets too large, the exact solution of equation (1) becomes very costly with respect to both memory and computation time. Therefore, often linear feature-based function approximation is applied. The value function V is represented as a linear combination of basis functions H := {φ_1, ..., φ_F}, which can be written as V = Φw, where w ∈ ℝ^F is the parameter vector describing the linear combination and Φ = (φ_1 | ... | φ_F) ∈ ℝ^{N×F} is the matrix with the basis functions as columns. The rows of Φ are the feature vectors φ(s_i) ∈ ℝ^F for the states s_i.
3.1 The Optimal Approximate Solution
If the transition probability matrix P were known, then the optimal exact solution V* = (I - γP)^{-1} R could be computed directly. The optimal approximation to this solution is obtained by minimising ‖Φw - V*‖ with respect to w. Therefore, a notion of norm must exist. Generally a symmetric positive definite matrix D can be used to define a norm according to ‖x‖_D = √⟨x, x⟩_D with the scalar product ⟨x, y⟩_D = x^T D y. The optimal solution that can be achieved with the linear function approximator Φw then is the orthogonal projection of V* onto [Φ], i.e. the span of the columns of Φ. Let Φ have full column rank. Then the orthogonal projection on [Φ] according to the norm ‖·‖_D is defined as Π_D = Φ(Φ^T D Φ)^{-1} Φ^T D. We denote the optimal approximate solution by V^{SL}_D = Π_D V*. The corresponding parameter vector w^{SL}_D with V^{SL}_D = Φ w^{SL}_D is then given by

w^{SL}_D = (Φ^T D Φ)^{-1} Φ^T D V* = (Φ^T D Φ)^{-1} Φ^T D (I - γP)^{-1} R.    (4)

Here, SL stands for supervised learning because w^{SL}_D minimises the weighted quadratic error

min_{w ∈ ℝ^F} ½ ‖Φw - V*‖²_D = ½ (Φ w^{SL}_D - V*)^T D (Φ w^{SL}_D - V*) = ½ ‖V^{SL}_D - V*‖²_D    (5)

for a given D and V*, which is the objective of a supervised learning method.
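Equation (4) is simply a D-weighted least-squares fit, which can be verified numerically. A minimal sketch with illustrative numbers of our own choosing; the check uses the defining property of the projection, namely that the weighted residual is orthogonal to the columns of Φ:

```python
import numpy as np

# Illustrative 3-state chain and a 2-dimensional linear architecture.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
V_star = np.linalg.solve(np.eye(3) - gamma * P, R)

Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])          # N = 3 states, F = 2 features, full column rank
p = np.array([0.2, 0.5, 0.3])         # an arbitrary weighting distribution
D = np.diag(p)

# Eq. (4): w_SL = (Phi^T D Phi)^{-1} Phi^T D V*.
w_SL = np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D @ V_star)
V_SL = Phi @ w_SL                     # = Pi_D V*, the D-orthogonal projection of V*

# The residual of the projection is D-orthogonal to the feature space [Phi].
print(np.allclose(Phi.T @ D @ (V_SL - V_star), 0))  # True
```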
Note that V* equals the expected discounted accumulated reward along a sampled trajectory under the fixed policy μ, i.e. V*(s_0) = E[Σ_{t=0}^∞ γ^t r(s_t, μ(s_t))] for every s_0 ∈ S. These are exactly the samples obtained by the TD(1) algorithm [9]. Thus, the TD(1) solution is equivalent to the optimal approximate solution.
3.2 The Iterative TD Algorithm
In the approximate case the Bellman equation (1) becomes

Φw = γ P Φw + R.    (6)
A popular algorithm for updating the parameter vector w after a single transition x_i → z_i with reward r_i is the stochastic sampling-based TD(0) algorithm [9]

w^{n+1} = w^n + α φ(x_i)[r_i + γ φ(z_i)^T w^n - φ(x_i)^T w^n] = (I_F + α A_i) w^n + α b_i,    (7)

where α is the learning rate, A_i = φ(x_i)(γ φ(z_i) - φ(x_i))^T, b_i = φ(x_i) r_i and I_F is the identity matrix in ℝ^{F×F}. Let p be a probability distribution on the state space S. Furthermore, let x_i be sampled according to p, z_i be sampled according to P(x_i, ·) and r_i be sampled according to r(x_i). We will use E_p[·] to denote the expectation with respect to the distribution p. Let A^{TD}_{D_p} = E_p[A_i] and b^{TD}_{D_p} = E_p[b_i]. If the learning rate decays according to

Σ_t α_t = ∞,   Σ_t α_t² < ∞,    (8)
then, in the average sense, the stochastic TD(0) algorithm (7) behaves like the deterministic iteration

w^{n+1} = w^n + α (A^{TD}_{D_p} w^n + b^{TD}_{D_p})    (9)

with

A^{TD}_{D_p} = -Φ^T D_p (I - γP) Φ,   b^{TD}_{D_p} = Φ^T D_p R,    (10)

where D_p = diag(p) is the diagonal matrix with the elements of p and R is the vector of expected rewards [2] (Lemma 6.5, Lemma 6.7). In particular, the stochastic TD(0) algorithm converges if and only if the deterministic algorithm (9) converges. Furthermore, if both algorithms converge they converge to the same fixed point.
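The deterministic iteration (9) with the matrices of eq. (10) can be simulated directly. The sketch below uses an illustrative chain of our own choosing and samples with its steady-state distribution π, in which case all eigenvalues of A^{TD}_{D_π} turn out to have negative real part and the iteration converges to the fixed point of (9):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])

# Steady-state distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()
D = np.diag(pi)

# Eq. (10): A_TD = -Phi^T D (I - gamma P) Phi,  b_TD = Phi^T D R.
A = -Phi.T @ D @ (np.eye(3) - gamma * P) @ Phi
b = Phi.T @ D @ R

print(np.all(np.linalg.eigvals(A).real < 0))  # True for p = pi

# Deterministic TD(0) iteration (9) with a small constant step size.
w = np.zeros(2)
for _ in range(20000):
    w = w + 0.1 * (A @ w + b)

print(np.allclose(w, np.linalg.solve(-A, b)))  # True: converges to the fixed point
```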
An iteration of the form (9) converges if all eigenvalues of the matrix I + α A^{TD}_{D_p} lie within the unit circle [5]. For a matrix A^{TD}_{D_p} that has only eigenvalues with negative real part and a learning rate α_t that decays according to (8), there is a t* such that the eigenvalues of I_F + α_t A^{TD}_{D_p} lie inside the unit circle for all t > t*. Hence, for a decaying learning rate the deterministic TD(0) algorithm converges if all eigenvalues of A^{TD}_{D_p} have a negative real part. Since this requirement is not always fulfilled, the TD algorithm possibly diverges as shown in [1]. This divergence is due to the positive eigenvalues of A^{TD}_{D_p} [8].
However, under special assumptions convergence of the TD(0) algorithm can be shown [2]. Let the feature matrix Φ ∈ ℝ^{N×F} have full rank, where F ≤ N, i.e. there are not more parameters than states. This results in no loss of generality because the linearly dependent columns of Φ can be eliminated without changing the power of the approximation architecture. The most important assumption concerns the sampling of the states, which is reflected in the matrix D. Let the Markov chain be aperiodic and recurrent. Besides the aperiodicity requirement, this assumption results in no loss of generality because transient states can be eliminated. Then a steady-state distribution π of the Markov chain exists. When sampling the states according to this steady-state distribution, i.e. D = D_π = diag(π), it can be shown that A^{TD}_{D_π} is negative definite [2] (Lemma 6.6). This immediately yields that all eigenvalues have negative real part, which in turn yields convergence of the TD(0) algorithm with decaying learning rate.
In the next section we will characterise the limit value V^{TD}_{D_π} as the projection of V* in a more general setting. However, for the sampling distribution π there is another interesting interpretation of V^{TD}_{D_π} as the fixed point of Π_{D_π} T, where Π_{D_π} is the orthogonal projection with respect to D_π onto [Φ], as defined in section 3.1, and T is the update operator defined in (2) [2, 10]. In the following we use this fact to deduce a new formula for V^{TD}_{D_π} that has a form similar to V* in (3). Before we proceed, we need the following lemma.

Lemma 1 The matrix I - γ Π_{D_π} P is regular.

Proof: The matrix I - γ Π_{D_π} P is regular if and only if it does not have eigenvalue zero. An equivalent condition is that one is not an eigenvalue of γ Π_{D_π} P. Therefore, it is sufficient to show that the spectral radius satisfies ρ(γ Π_{D_π} P) < 1. For any matrix norm ‖·‖ it holds that ρ(A) ≤ ‖A‖ [5]. Therefore, we know that ρ(γ Π_{D_π} P) ≤ ‖γ Π_{D_π} P‖_{D_π}, where the vector norm ‖·‖_{D_π} induces the matrix norm ‖·‖_{D_π} by the standard definition ‖A‖_{D_π} = sup_{‖x‖_{D_π}=1} {‖Ax‖_{D_π}}. With this definition and with the fact that ‖Px‖_{D_π} ≤ ‖x‖_{D_π} for all x [2] (Lemma 6.4), we obtain ‖P‖_{D_π} = sup_{‖x‖_{D_π}=1} {‖Px‖_{D_π}} ≤ sup_{‖x‖_{D_π}=1} {‖x‖_{D_π}} = 1. Moreover, we have ‖Π_{D_π}‖_{D_π} = sup_{‖x‖_{D_π}=1} {‖Π_{D_π} x‖_{D_π}} ≤ sup_{‖x‖_{D_π}=1} {‖x‖_{D_π}} = 1, where we used the well known fact that an orthogonal projection Π_{D_π} is a non-expansion with respect to the vector norm ‖·‖_{D_π}. Putting it all together we obtain ρ(γ Π_{D_π} P) ≤ ‖γ Π_{D_π} P‖_{D_π} ≤ γ ‖Π_{D_π}‖_{D_π} · ‖P‖_{D_π} ≤ γ < 1.  □
We can now solve the fixed point equation V^{TD}_{D_π} = Π_{D_π} T V^{TD}_{D_π} and obtain

V^{TD}_{D_π} = (I - γ P̄)^{-1} R̄    (11)

with P̄ = Π_{D_π} P and R̄ = Π_{D_π} R. This resembles equation (3) for the exact solution of the policy evaluation problem. The TD(0) solution with sampling distribution π can thus be interpreted as the exact solution of the "projected" policy evaluation problem with P̄ and R̄. Note that, compared to the TD(1) solution of the approximate policy evaluation problem V^{SL}_{D_π} = Π_{D_π} (I - γP)^{-1} R with weighting matrix D_π, equation (11) only differs in the position of the projection operator. This leads to an interesting comparison of TD(0) and TD(1). While TD(0) yields the exact solution of the projected problem, TD(1) yields the projected solution of the exact problem.
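Equation (11) can be checked numerically: the TD(0) fixed point equals the exact solution of the projected problem. A sketch with illustrative numbers of our own choosing:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])

# Steady-state distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()
D = np.diag(pi)

# D_pi-orthogonal projection onto [Phi] (section 3.1).
Proj = Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D)

# TD(0) fixed point via eq. (10): A w + b = 0.
A = -Phi.T @ D @ (np.eye(3) - gamma * P) @ Phi
b = Phi.T @ D @ R
V_TD = Phi @ np.linalg.solve(-A, b)

# Eq. (11): exact solution of the projected problem with Pbar, Rbar.
V_proj = np.linalg.solve(np.eye(3) - gamma * Proj @ P, Proj @ R)

print(np.allclose(V_TD, V_proj))  # True
```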
3.3 The Least-Squares TD Algorithm
Besides the iterative solution of (6), often a direct solution by matrix inversion is computed, using equation (9) in the fixed point form A^{TD}_{D_p} w^{TD}_{D_p} + b^{TD}_{D_p} = 0. This approach is known as least-squares TD (LSTD) [4, 3]. It is only required that A^{TD}_{D_p} be invertible, i.e. that its eigenvalues be unequal to zero. In contrast to the iterative TD algorithm the eigenvalues need not have negative real parts. Therefore, LSTD offers the possibility of using sampling distributions p other than the steady-state distribution π [6, 7]. Thus, parts of the state space that would be rarely visited under the steady-state distribution can now be visited more frequently, which makes the approximation of the value function more reliable. This is necessary if the result of policy evaluation is to be used in a policy improvement step because otherwise the action choice in rarely visited states may be bad [6].
For the following let the feature matrix have full column rank. As described above this results in no loss of generality. LSTD allows one to sample the states with an arbitrary sampling distribution p. If there are states s that are not visited under p, i.e. p(s) = 0, then these states can be eliminated from the Markov chain. Hence, without loss of generality we assume that the matrix D_p = diag(p) is invertible. These conditions ensure the invertibility of A^{TD}_{D_p}, and according to [4, 3] the LSTD solution is given by

w^{TD}_{D_p} = (-A^{TD}_{D_p})^{-1} b^{TD}_{D_p}.    (12)

Note that the matrix A^{TD}_{D_p} and the vector b^{TD}_{D_p} can be computed from samples such that the model P does not need to be known. Note also that in general w^{TD}_{D_p} ≠ w^{SL}_{D_p}, as discussed in [3]. This means that the TD(0) solution w^{TD}_{D_p} and the TD(1) solution w^{SL}_{D_p} may differ when function approximation is used.
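A sketch of sampling-based LSTD, with A^{TD}_{D_p} and b^{TD}_{D_p} estimated from sampled transitions so that the learner never touches the model P. The chain, features and visitation distribution p are illustrative choices of our own; p is deliberately not the steady-state distribution, and the reward is made deterministic for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])
p = np.array([0.6, 0.3, 0.1])          # arbitrary visitation distribution

# Sample n transitions x -> z with (deterministic) reward R[x].
n = 2_000_000
x = rng.choice(3, size=n, p=p)
u = rng.random(n)
z = (u[:, None] > P.cumsum(axis=1)[x]).sum(axis=1)
r = R[x]

Fx, Fz = Phi[x], Phi[z]
A_hat = Fx.T @ (gamma * Fz - Fx) / n   # Monte Carlo estimate of E_p[A_i]
b_hat = Fx.T @ r / n                   # Monte Carlo estimate of E_p[b_i]
w_hat = np.linalg.solve(-A_hat, b_hat) # eq. (12) from samples only

# Model-based reference solution via eq. (10) and (12).
A = -Phi.T @ np.diag(p) @ (np.eye(3) - gamma * P) @ Phi
w_lstd = np.linalg.solve(-A, Phi.T @ np.diag(p) @ R)

print(np.linalg.norm(w_hat - w_lstd) / np.linalg.norm(w_lstd) < 0.1)  # close for large n
```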
Depending on the sampling distribution p, the LSTD approach may be the only way of computing the fixed point of (9) because the corresponding iterative TD(0) algorithm may diverge due to positive eigenvalues. However, if the TD(0) algorithm converges, the limit coincides with the LSTD solution w^{TD}_{D_p}.
For the value function V^{TD}_{D_p} achieved by the LSTD algorithm the following holds:

V^{TD}_{D_p} = Φ w^{TD}_{D_p} = Φ (-A^{TD}_{D_p})^{-1} b^{TD}_{D_p}
             = Φ [(-A^{TD}_{D_p})^T (-A^{TD}_{D_p})]^{-1} (-A^{TD}_{D_p})^T b^{TD}_{D_p} = Π_{D^{TD}_{D_p}} V*,    (13)

where the last equality uses (3) and (10).
We define D^{TD}_{D_p} = (I - γP)^T D_p Φ Φ^T D_p (I - γP). As Φ Φ^T is singular in general, the matrix D^{TD}_{D_p} is symmetric and positive semi-definite. Hence, it defines a semi-norm ‖·‖_{D^{TD}_{D_p}}. Thus, the LSTD solution is obtained by projecting V* onto [Φ] with respect to ‖·‖_{D^{TD}_{D_p}}. After having deduced this new relation between the optimal solution V* and V^{TD}_{D_p}, we can characterise w^{TD}_{D_p} as minimising the corresponding quadratic objective function:

min_{w ∈ ℝ^F} ½ ‖Φw - V*‖²_{D^{TD}_{D_p}} = ½ (Φ w^{TD}_{D_p} - V*)^T D^{TD}_{D_p} (Φ w^{TD}_{D_p} - V*) = ½ ‖V^{TD}_{D_p} - V*‖²_{D^{TD}_{D_p}}.    (14)

It can be shown that the value of the objective function for the LSTD solution is zero, i.e. ‖V^{TD}_{D_p} - V*‖²_{D^{TD}_{D_p}} = 0. With equation (14) we have shown that the LSTD solution minimises a certain error metric. The form of this error metric is similar to (5). The only difference lies in the norm that is used. This unifies the characterisation of the solutions that are achieved by different algorithms.
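The claim that the LSTD solution attains objective value zero in (14) can be checked directly (illustrative numbers of our own choosing):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])
p = np.array([0.6, 0.3, 0.1])
D = np.diag(p)
M = np.eye(3) - gamma * P

V_star = np.linalg.solve(M, R)
w_td = np.linalg.solve(Phi.T @ D @ M @ Phi, Phi.T @ D @ R)   # LSTD, eq. (12)
V_td = Phi @ w_td

# Semi-norm matrix D_TD = (I - gamma P)^T D Phi Phi^T D (I - gamma P).
D_TD = M.T @ D @ Phi @ Phi.T @ D @ M

err = 0.5 * (V_td - V_star) @ D_TD @ (V_td - V_star)         # objective (14)
print(abs(err) < 1e-12)  # True: zero error in this semi-norm
```

This works because the semi-norm only measures the component Φ^T D (I - γP)(V - V*), which the LSTD fixed-point equation forces to vanish.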
3.4 The Residual Gradient Algorithm
There is a third approach to solving equation (6). The residual gradient algorithm [1] directly minimises the weighted Bellman error

½ ‖(I - γP)Φw - R‖²_{D_p}    (15)

by gradient descent. The resulting update rule of the deterministic algorithm has a form similar to (9),

w^{n+1} = w^n + α (A^{RG}_{D_p} w^n + b^{RG}_{D_p}),    (16)
with

A^{RG}_{D_p} = -Φ^T (I - γP)^T D_p (I - γP) Φ,   b^{RG}_{D_p} = Φ^T (I - γP)^T D_p R,    (17)

where D_p is again the diagonal matrix with the visitation probabilities p_i on its diagonal. As all entries on the diagonal are nonnegative, D_p can be decomposed into √D_p^T √D_p. Hence, we can write A^{RG}_{D_p} = -(√D_p (I - γP) Φ)^T √D_p (I - γP) Φ. Therefore, A^{RG}_{D_p} is negative semidefinite. If Φ has full column rank and D_p is regular, i.e. the visitation probability for every state is positive, then A^{RG}_{D_p} is negative definite. Therefore, all eigenvalues of A^{RG}_{D_p} are negative, which yields convergence of the residual gradient algorithm (16) for a decaying learning rate, independently of the weighting D_p, the function approximator Φ and the transition probabilities P.
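The convergence claim can be illustrated with a direct implementation of the deterministic iteration (16)–(17); here p is an arbitrary distribution (not the steady-state one), and all numbers are illustrative choices of our own:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])
p = np.array([0.6, 0.3, 0.1])      # arbitrary visitation distribution
D = np.diag(p)
M = np.eye(3) - gamma * P

# Eq. (17): A_RG = -(sqrt(D) M Phi)^T (sqrt(D) M Phi) is negative definite here.
A = -Phi.T @ M.T @ D @ M @ Phi
b = Phi.T @ M.T @ D @ R
print(np.all(np.linalg.eigvalsh(A) < 0))  # True (A is symmetric)

# Gradient descent (16) on the Bellman error with a small constant step size.
w = np.zeros(2)
for _ in range(100_000):
    w = w + 0.1 * (A @ w + b)

print(np.allclose(w, np.linalg.solve(-A, b)))  # True: the limit solves A w + b = 0
```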
The equivalence of the limit value of the deterministic and the stochastic version of the residual gradient algorithm can be proven with an argument similar to that in [2] for the equivalence of the deterministic and the stochastic version of the TD(0) algorithm in equations (7) and (9) respectively. Note also that the matrix A^{RG}_{D_p} and the vector b^{RG}_{D_p} can be computed from samples so that the model P does not need to be known for the deterministic residual gradient algorithm.
If A^{RG}_{D_p} is invertible, a unique limit of the iteration (16) exists. It can be directly computed via the fixed point form, which yields the new identity

w^{RG}_{D_p} = (-A^{RG}_{D_p})^{-1} b^{RG}_{D_p} = (Φ^T (I - γP)^T D_p (I - γP) Φ)^{-1} Φ^T (I - γP)^T D_p R.    (18)
This solution of the residual gradient algorithm is related to the optimal solution (4) of the approximate Bellman equation (6) as described in the following lemma.

Lemma 2 The solution w^{RG}_{D_p} of the residual gradient algorithm with weighting matrix D_p is equivalent to the optimal supervised learning solution w^{SL}_{D^{RG}_p} of the approximate Bellman equation (6) with weighting matrix D^{RG}_p = (I - γP)^T D_p (I - γP).

Proof:
w^{RG}_{D_p} = (Φ^T (I - γP)^T D_p (I - γP) Φ)^{-1} Φ^T (I - γP)^T D_p R
             = (Φ^T D^{RG}_p Φ)^{-1} Φ^T (I - γP)^T D_p (I - γP) (I - γP)^{-1} R
             = (Φ^T D^{RG}_p Φ)^{-1} Φ^T D^{RG}_p V* = w^{SL}_{D^{RG}_p},
where we used the fact that V* = (I - γP)^{-1} R.  □
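Lemma 2 is easy to confirm numerically (illustrative numbers of our own choosing):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])
p = np.array([0.6, 0.3, 0.1])
D = np.diag(p)
M = np.eye(3) - gamma * P

V_star = np.linalg.solve(M, R)
D_RG = M.T @ D @ M                  # modified weighting of Lemma 2

# Residual gradient solution, eq. (18).
w_RG = np.linalg.solve(Phi.T @ D_RG @ Phi, Phi.T @ M.T @ D @ R)
# Supervised learning solution, eq. (4), with weighting D_RG.
w_SL = np.linalg.solve(Phi.T @ D_RG @ Phi, Phi.T @ D_RG @ V_star)

print(np.allclose(w_RG, w_SL))  # True, as stated by Lemma 2
```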
Therefore, w^{RG}_{D_p} can be interpreted as the orthogonal projection of the optimal solution V* onto [Φ] with respect to the scalar product defined by D^{RG}_p. This yields a new equivalent formula for the Bellman error (15):

½ ‖(I - γP)Φw - R‖²_{D_p} = ½ ((I - γP)Φw - R)^T D_p ((I - γP)Φw - R)
= ½ (Φw - V*)^T (I - γP)^T D_p (I - γP) (Φw - V*) = ½ ‖Φw - V*‖²_{D^{RG}_p}.    (19)

The Bellman error is the objective function that is minimised by the residual gradient algorithm. As we have just shown, this objective function can be expressed in a form similar to (5), where the only difference lies in the norm that is used. Thus, we have shown that the solution of the residual gradient algorithm can also be characterised in the general framework of quadratic error metrics ‖Φw - V*‖_D. As a direct consequence we can represent the solution as an orthogonal projection V^{RG}_{D_p} = Φ w^{RG}_{D_p} = Π_{D^{RG}_p} V*.
According to section 3.2, an iteration of the form (16) generally converges for matrices A with eigenvalues that have negative real parts. However, the fact that A^{RG}_{D_p} is symmetric assures convergence even for singular A^{RG}_{D_p} [8] (Proposition 1). Thus,
Table 1: Overview over the solutions of different RL algorithms. The supervised learning (SL) approach, the TD(0) algorithm, the LSTD algorithm and the residual gradient (RG) algorithm are analysed in terms of the conditions of solvability. Moreover, we summarise the optimisation criteria that the different algorithms minimise and characterise the different solutions in terms of the projection of the optimal solution V* onto [Φ]. If the visitation distribution is arbitrary, we write ∀p.

                                    SL           TD                    LSTD                  RG
  solvability:
    condition for λ_i               -            Re(λ_i) < 0           λ_i ≠ 0               Re(λ_i) ≤ 0
    condition for p                 ∀p           p = π                 p(s) ≠ 0              ∀p
  optimisation criterion            eq. (5)      eq. (14)              eq. (14)              eq. (19)
  characterisation as projection    Π_{D_p} V*   Π_{D^{TD}_{D_π}} V*   Π_{D^{TD}_{D_p}} V*   Π_{D^{RG}_{D_p}} V*
the residual gradient algorithm (16) converges for any matrix A^{RG}_{D_p} that is of the form (17), and in case A^{RG}_{D_p} is regular the limit is given by (18). Note that a matrix Φ which does not have full column rank leads to ambiguous solutions w^{RG}_{D_p} that depend on the initial value w_0. However, the corresponding V^{RG}_{D_p} = Φ w^{RG}_{D_p} are the same. For singular D_p the matrix D^{RG}_p = (I - γP)^T D_p (I - γP) is also singular. Thus, the limit V^{RG}_{D_p} may not be unique but may depend itself on the initial value w_0. The reason is that there may be a whole subspace of [Φ] with dimension larger than zero that minimises ‖V^{RG}_{D_p} - V*‖_{D^{RG}_p}, because ‖·‖_{D^{RG}_p} is now only a semi-norm. But for all minimising V^{RG}_{D_p} the Bellman error is the same, i.e. with respect to the Bellman error all the solutions V^{RG}_{D_p} are equivalent [8] (Proposition 1).
3.5 Synopsis of the Different Solutions
In Table 1 we give a brief overview of the solutions that the different RL algorithms yield. An SL solution can be computed for arbitrary weighting matrices D_p induced by a sampling distribution p. For the three RL algorithms (TD, LSTD, RG) solvability conditions can be formulated either in terms of the eigenvalues of the iteration matrix A or in terms of the sampling distribution p. The iterative TD(0) algorithm has the most restrictive conditions for solvability, both for the eigenvalues of the iteration matrix A, whose real parts must be smaller than zero, and for the sampling distribution p, which must equal the steady-state distribution π. The LSTD method only requires invertibility of A^{TD}_{D_p}. This is satisfied if Φ has full column rank and if the visitation distribution p samples every state s infinitely often, i.e. p(s) ≠ 0 for all s ∈ S. In contrast to that, the residual gradient algorithm converges independently of p and the concrete A^{RG}_{D_p} because all these matrices have eigenvalues with nonpositive real parts.
All solutions can be characterised as minimising a quadratic optimisation criterion ‖Φw - V*‖_D with corresponding matrix D. The SL solution optimises the weighted quadratic error (5), RG optimises the weighted Bellman error (19), and both TD and LSTD optimise the quadratic function (14) with weighting matrices D^{TD}_{D_π} and D^{TD}_{D_p} respectively. With the assumption of regular D_p, i.e. p(s) ≠ 0 for all s ∈ S, the solutions V can be characterised as images of the optimal solution V* under different orthogonal projections (optimal, RG) and projections that minimise a semi-norm (TD, LSTD). For singular D_p see the remarks on ambiguous solutions in section 3.4.
Let us finally discuss the case of a quasi-tabular representation of the value function that is obtained for regular Φ, and let all states be visited infinitely often, i.e. D_p is regular. Due to the invertibility of Φ we have [Φ] = ℝ^N. Thus, the optimal solution V* is exactly representable because V* ∈ [Φ]. Moreover, every projection operator Π : ℝ^N → [Φ] reduces to the identity. Therefore, all the projection operators for the different algorithms are equivalent to the identity. Hence, with a quasi-tabular representation all the algorithms converge to the optimal solution V*.
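The quasi-tabular observation can also be confirmed in code: with Φ = I (and regular D_p) the SL, LSTD and residual gradient solutions all coincide with V* (illustrative numbers of our own choosing):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
R = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.eye(3)                     # regular feature matrix: [Phi] = R^N
p = np.array([0.6, 0.3, 0.1])
D = np.diag(p)
M = np.eye(3) - gamma * P

V_star = np.linalg.solve(M, R)
w_SL = np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D @ V_star)             # eq. (4)
w_TD = np.linalg.solve(Phi.T @ D @ M @ Phi, Phi.T @ D @ R)              # eq. (12)
w_RG = np.linalg.solve(Phi.T @ M.T @ D @ M @ Phi, Phi.T @ M.T @ D @ R)  # eq. (18)

print(all(np.allclose(Phi @ w, V_star) for w in (w_SL, w_TD, w_RG)))  # True
```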
4 Conclusions
We have presented an analysis of the solutions that are achieved by different reinforcement learning algorithms combined with linear function approximation. The solutions of all the examined algorithms, TD(0), LSTD and the residual gradient algorithm, can be characterised as minimising different corresponding quadratic objective functions. As a consequence, each of the value functions that one of the above algorithms converges to can be interpreted as the image of the optimal exact value function under a corresponding orthogonal projection. In this general framework we have given the first characterisation of the approximate TD(0) solution in terms of the minimisation of a quadratic objective function. This approach allows us to view the TD(0) solution as the exact solution of a projected learning problem. Moreover, we have shown that the residual gradient solution and the optimal approximate solution only differ in the weighting of the error between the exact and the approximate solution. In future research we intend to use the results presented here for determining the errors of the different solutions relative to the optimal approximate solution with respect to a given norm. This will yield a criterion for selecting reinforcement learning algorithms that achieve optimal solution quality.
References
[1] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proc. of the Twelfth International Conference on Machine Learning, 1995.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.
[3] J. A. Boyan. Least-squares temporal difference learning. In Proc. of the Sixteenth International Conference on Machine Learning, pages 49–56, 1999.
[4] S. J. Bradtke and A. G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33–57, 1996.
[5] A. Greenbaum. Iterative Methods for Solving Linear Systems. SIAM, 1997.
[6] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proc. of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 326–334, 2000.
[7] M. G. Lagoudakis and R. Parr. Model-free least-squares policy iteration. In Advances in Neural Information Processing Systems, volume 14, 2002.
[8] R. Schoknecht and A. Merke. Convergent combinations of reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 15, 2003.
[9] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[10] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997.
Unsupervised Color Constancy
Kinh Tieu
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Erik G. Miller
Computer Science Division
UC Berkeley
Berkeley, CA 94720
[email protected]
Abstract
In [1] we introduced a linear statistical model of joint color changes in
images due to variation in lighting and certain non-geometric camera parameters. We did this by measuring the mappings of colors in one image
of a scene to colors in another image of the same scene under different
lighting conditions. Here we increase the flexibility of this color flow
model by allowing flow coefficients to vary according to a low order
polynomial over the image. This allows us to better fit smoothly varying lighting conditions as well as curved surfaces without endowing our
model with too much capacity. We show results on image matching and
shadow removal and detection.
1 Introduction
The number of possible images of an object or scene, even when taken from a single viewpoint with a fixed camera, is very large. Light sources, shadows, camera aperture, exposure time, transducer non-linearities, and camera processing (such as auto-gain-control and
color balancing) can all affect the final image of a scene. These effects have a significant
impact on the images obtained with cameras and hence on image processing algorithms,
often hampering or eliminating our ability to produce reliable recognition algorithms.
Addressing the variability of images due to these photic parameters has been an important
problem in machine vision. We distinguish photic parameters from geometric parameters,
such as camera orientation or blurring, that affect which parts of the scene a particular pixel
represents. We also note that photic parameters are more general than ?lighting parameters? and include anything which affects the final RGB values in an image given that the
geometric parameters and the objects in the scene have been fixed.
We present a statistical linear model of color change space that is learned by observing
how the colors in static images change jointly under common, naturally occurring lighting
changes. Such a model can be used for a number of tasks, including synthesis of images
of new objects under different lighting conditions, image matching, and shadow detection.
Results for each of these tasks will be reported.
Several aspects of our model merit discussion. First, it is obtained from video data in a
completely unsupervised fashion. The model uses no prior knowledge of lighting conditions, surface reflectances, or other parameters during data collection and modeling. It also
has no built-in knowledge of the physics of image acquisition or "typical" image color
changes, such as brightness changes. Second, it is a single global model and does not need
to be re-estimated for new objects or scenes. While it may not apply to all scenes equally
well, it is a model of frequently occurring joint color changes, which is meant to apply to
all scenes. Third, while our model is linear in color change space, each joint color change
that we model (a 3-D vector field) is completely arbitrary, and is not itself restricted to
being linear. This gives us great modeling power, while capacity is controlled through the
number of basis fields allowed.
After discussing previous work in Section 2, we introduce the color flow model and how
it is obtained from observations in Section 3. In Section 4, we show how the model and a
single observed image can be used to generate a large family of related images. We also
give an efficient procedure for finding the best fit of the model to the difference between two
images. In Section 5 we give preliminary results for image matching (object recognition)
and shadow detection.
2 Previous work
The color constancy literature contains a large body of work on estimating surface reflectances and various photic parameters from images. A common approach is to use linear
models of reflectance and illuminant spectra [2]. Gray world algorithms [3] assume the
average reflectance of all the surfaces in a scene is gray. White world algorithms [4] assume the brightest pixel corresponds to a scene point with maximal reflectance. Brainard
and Freeman attacked this problem probabilistically [5] by defining prior distributions on
particular illuminants and surfaces. They used a new, maximum local mass estimator to
choose a single best estimate of the illuminant and surface.
Another technique is to estimate the relative illuminant or mapping of colors under an unknown illuminant to a canonical one. Color gamut mapping [6] uses the convex hull of all
achievable RGB values to represent an illuminant. The intersection of the mappings for
each pixel in an image is used to choose a ?best? mapping. [7] trained a back-propagation
multi-layer neural network to estimate the parameters of a linear color mapping. The approach in [8] works in the log color spectra space where the effect of a relative illuminant
is a set of constant shifts in the scalar coefficients of linear models for the image colors and
illuminant. The shifts are computed as differences between the modes of the distribution
of coefficients of randomly selected pixels of some set of representative colors.
[9] bypasses the need to predict specific scene properties by proving that the set of images
of a gray Lambertian convex object under all lighting conditions form a convex cone. 1 We
wanted a model which, based upon a single image (instead of three required by [9]), could
make useful predictions about other images of the same scene. This work is in the same
spirit, although we use a statistical method rather than a geometric one.
3 Color flows
In the following, let C = {(r, g, b)^T ∈ ℝ³ : 0 ≤ r ≤ 255, 0 ≤ g ≤ 255, 0 ≤ b ≤ 255} be the set of all possible observable image color 3-vectors. Let the vector-valued color of an image pixel p be denoted by c(p) ∈ C.
Suppose we are given two P-pixel RGB color images I1 and I2 of the same scene, taken under two different photic parameter settings (the images are registered). Each pair of
¹ This result depends upon the important assumption that the camera, including the transducers, the aperture, and the lens, introduces no non-linearities into the system. The authors' results on color images also do not address the issue of metamers, and assume that light is composed of only the wavelengths red, green, and blue.
Figure 1: Matching non-linear color changes. b is the result of squaring the value of a (in HSV) and re-normalizing it to 255. c–f are attempts to match b with a using four different algorithms. Our algorithm (f) was the only one to capture the non-linearity.
corresponding image pixels p_k1 and p_k2, 1 ≤ k ≤ P, in the two images represents a single-color mapping c(p_k1) ↦ c(p_k2) that is conveniently represented by the vector difference:

d(p_k1, p_k2) = c(p_k2) − c(p_k1).    (1)

By computing P vector differences (one for each pair of pixels) and placing each at the
point c(p_k1) in color space C, we have a partially observed color flow:

Φ0(c(p_k1)) = d(p_k1, p_k2),    1 ≤ k ≤ P,    (2)

defined at points in C for which there are colors in image I1.
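Concretely, the vector differences of Eq. (1) and the partially observed flow of Eq. (2) amount to a per-pixel subtraction of registered images. The following sketch illustrates this step; the function and variable names are ours, not from the paper, and the two images are assumed to be H × W × 3 arrays.

```python
import numpy as np

def partial_color_flow(img1, img2):
    """Partially observed color flow Phi0 from two registered images.

    Returns the anchor colors c(p_k1) (one row per pixel of I1) and the
    vector differences d(p_k1, p_k2) = c(p_k2) - c(p_k1) placed at them.
    """
    c1 = img1.reshape(-1, 3).astype(float)  # colors in I1
    c2 = img2.reshape(-1, 3).astype(float)  # corresponding colors in I2
    return c1, c2 - c1                      # anchors in C, flow vectors

# Toy example: a uniform color that becomes darker in red and bluer.
i1 = np.full((2, 2, 3), [200.0, 50.0, 50.0])
i2 = np.full((2, 2, 3), [180.0, 50.0, 70.0])
anchors, flow = partial_color_flow(i1, i2)  # flow[k] is d(p_k1, p_k2)
```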
To obtain a full color flow (i.e. a vector field Φ defined at all points in C) from a partially
observed color flow Φ0, we must address two issues. First, there will be many points in
C at which no vector difference is defined. Second, there may be multiple pixels of a
particular color in image I1 that are mapped to different colors in image I2. We use a radial
basis function estimator which defines the flow at a color point (r, g, b)^T as the weighted
proximity-based average of nearby observed "flow vectors". We found empirically that
σ² = 16 (with colors on a 0-255 scale) worked well. Note that color flows are defined so
that a color point with only a single nearby neighbor will inherit a flow vector that is nearly
parallel to its neighbor. The idea is that if a particular color, under a photic parameter
change φ1 ↦ φ2, is observed to get a little bit darker and a little bit bluer, for example, then
its neighbors in color space are also defined to exhibit this behavior.
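The radial basis function estimator above can be sketched as a Gaussian-weighted average. The paper specifies σ² = 16 but not the exact kernel form or normalization, so those details below are our assumption.

```python
import numpy as np

def full_color_flow(anchors, flows, query, sigma2=16.0):
    """Flow at an arbitrary color `query` (3-vector): Gaussian-weighted
    average of observed flow vectors (anchors: N x 3, flows: N x 3)."""
    d2 = np.sum((anchors - query) ** 2, axis=1)  # squared color distances
    w = np.exp(-d2 / (2.0 * sigma2))             # proximity weights
    w /= w.sum()                                 # normalize the weights
    return w @ flows                             # weighted average flow

anchors = np.array([[100.0, 0.0, 0.0], [110.0, 0.0, 0.0]])
flows = np.array([[-10.0, 0.0, 5.0], [-12.0, 0.0, 7.0]])
estimate = full_color_flow(anchors, flows, np.array([105.0, 0.0, 0.0]))
```

Because the weights are normalized, a color point with only a single nearby neighbor inherits (nearly) that neighbor's flow vector, matching the behavior described above.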
3.1 Structure in the space of color flows
Consider a flat Lambertian surface that may have different reflectances as a function of
the wavelength. While in principle it is possible for a change in lighting to map any color
from such a surface to any other color independently of all other colors², we know from
experience that many such joint maps are unlikely. This suggests that while the marginal
distribution of mappings for a particular color is broadly distributed, the space of possible
joint color maps (i.e., color flows) is much more compact³.
In learning a statistical model of color flows, many common color flows can be anticipated
such as ones that make colors a little darker, lighter, or more red. These types of flows can
be well modeled with a simple global 3x3 matrix A that maps a color c1 in image I1 to a
color c2 in image I2 via

c2 = A c1.    (3)
However, there are many effects which linear maps cannot model. Perhaps the most significant is the combination of a large brightness change coupled with a non-linear gain-control
adjustment or brightness re-normalization by the camera. Such photic changes will tend
² By carefully choosing properties such as the surface reflectance of a point as a function of wavelength
and lighting, any mapping Φ̂ can, in principle, be observed even on a flat Lambertian surface.
However, the metamerism which would cause such effects is uncommon in practice [10, 11].
³ We will address below the significant issue of non-flat surfaces and shadows, which can cause
highly "incoherent" maps.
Figure 2: Evidence of non-linear color changes. The first two images are of the top and
side of a box covered with multi-colored paper. The quotient image is shown next. The
rightmost image is an ideal quotient image, corresponding to a linear lighting model.
Figure 3: Effects of the first three eigenflows. See text.
to leave the bright and dim parts of the image alone, while spreading the central colors of
color space toward the margins.
For a linear imaging process, the ratio of the brightnesses of two images, or quotient image
[12], should vary smoothly except at surface normal boundaries. However as shown in
Figure 2, the quotient image is a function not only of surface normal, but also of albedo:
direct evidence of a non-linear imaging process. Another pair of images exhibiting a non-linear color flow is shown in Figures 1a and b. Notice that the brighter areas of the original
image get brighter and the darker portions get darker.
3.2 Color eigenflows
We wanted to capture the structure in color flow space by observing real-world data in an
unsupervised fashion. A one square meter color palette was printed on standard non-glossy
plotter paper using every color that could be produced by a Hewlett Packard DesignJet
650C. The poster was mounted on a wall in our office so that it was in the direct line of
overhead lights and computer monitors but not the single office window. An inexpensive
video camera (the PC-75WR, Supercircuits, Inc.) with auto-gain-control was aimed at the
poster so that the poster occupied about 95% of the field of view.
Images of the poster were captured using the video camera under a wide variety of lighting
conditions, including various intervals during sunrise, sunset, at midday, and with various
combinations of office lights and outdoor lighting (controlled by adjusting blinds). People
used the office during the acquisition process as well, thus affecting the ambient lighting
conditions. It is important to note that a variety of non-linear normalization mechanisms
built into the camera were operating during this process.
We chose image pairs I^j = (I1^j, I2^j), 1 ≤ j ≤ 800, by randomly and independently selecting individual images from the set of raw images. Each image pair was then used to
estimate a full color flow Φ(I^j). We used 4096 distinct RGB colors (equally spaced in
RGB space), so Φ(I^j) was represented by a vector of 3 × 4096 = 12288 components.
We modeled the space of color flows using principal components analysis (PCA) because:
1) the flows are well represented (in an L2 sense) by a small number of principal components, and 2) finding the optimal description of a difference image in terms of color flows
was computationally efficient using this representation (see Section 4). We call the principal components of the color flow data "color eigenflows", or just eigenflows⁴, for short.
We emphasize that these principal components of color flows have nothing to do with the
distribution of colors in images, but only model the distribution of changes in color. This
is a key and potentially confusing point. Our work is very different from approaches that
compute principal components in the intensity or color space itself [14, 15]. Perhaps the
most important difference is that our model is a global model for all images, while the
⁴ PCA has been applied to motion vector fields [13], and these have also been termed "eigenflows".
[Panel b plots rms error per pixel component against image index (1-4) for the color flow,
linear, diagonal, and gray world methods.]
Figure 4: Image matching. Top row: original images. Bottom row: best approximation to
the original images using eigenflows and the source image a. Reconstruction errors per pixel
component for four methods are shown in b.
above methods are models only for a particular set of images, such as faces.
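The PCA step described above can be sketched directly with an SVD of the mean-centered flow matrix. For speed, the stand-in below uses random data and smaller dimensions than the real 800 × 12288 matrix of estimated flows.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((80, 1024))      # stand-in flows (real: 800 x 12288)

mean_flow = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - mean_flow, full_matrices=False)
eigenflows = Vt[:30]                     # top E = 30 principal flows
coeffs = (F - mean_flow) @ eigenflows.T  # each flow's eigenflow coefficients
```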
4 Using color flows to synthesize novel images
How do we generate a new image from a source image and a color flow Φ? For each pixel
p in the new image, its color c′(p) can be computed as

c′(p) = c(p) + α Φ(ĉ(p)),    (4)

where c(p) is the color in the source image and α is a scalar multiplier that represents the
"quantity of flow". ĉ(p) is interpreted to be the color vector closest to c(p) (in color space)
at which Φ has been computed. RGB values are clipped to 0-255.
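Equation (4), including the nearest-color lookup ĉ(p) and the clipping to [0, 255], can be sketched as follows; this is a brute-force nearest-neighbor version with names of our choosing, not the paper's implementation.

```python
import numpy as np

def apply_flow(img, anchors, flow_vectors, alpha):
    """Synthesize c'(p) = c(p) + alpha * Phi(c_hat(p)), where c_hat(p)
    is the anchor color nearest to c(p); RGB is clipped to [0, 255]."""
    out = img.astype(float).reshape(-1, 3).copy()
    for i, c in enumerate(out):
        nearest = np.argmin(np.sum((anchors - c) ** 2, axis=1))
        out[i] = c + alpha * flow_vectors[nearest]
    return np.clip(out, 0.0, 255.0).reshape(img.shape)

anchors = np.array([[0.0, 0.0, 0.0], [255.0, 255.0, 255.0]])
vecs = np.array([[10.0, 0.0, 0.0], [-10.0, 0.0, 0.0]])
img = np.array([[[5.0, 5.0, 5.0], [250.0, 250.0, 250.0]]])
new = apply_flow(img, anchors, vecs, alpha=1.0)
```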
Figure 3 shows the effect of the first three eigenflows on an image of a face. The original
image is in the middle of each row while the other images show the application of each
eigenflow with α values between ±4 standard deviations. The first eigenflow (top row)
represents a generic brightness change that could probably be represented well with a linear
model. Notice, however, the third row. Moving right from the middle image, the contrast
grows. The shadowed side of the face grows darker while the lighted part of the face grows
lighter. This effect cannot be achieved with a simple matrix multiplication as given in
Equation 3. It is precisely these types of non-linear flows we wish to model.
We stress that the eigenflows were only computed once (on the color palette data), and that
they were applied to the face image without any knowledge of the parameters under which
the face image was taken.
4.1 Flowing one image to another
Suppose we have two images and we pose the question of whether they are images of the
same object or scene. We suggest that if we can "flow" one image to another then the
images are likely to be of the same scene.
Let us treat an image I as a function that takes a color flow and returns a difference image
D by placing at each (x, y) pixel in D the color change vector Φ(c(p_{x,y})). The difference
image basis for I and set of eigenflows Φ_i, 1 ≤ i ≤ E, is D_i = I(Φ_i). The set of images
S that can be formed using a source image and a set of eigenflows is S = {S : S =
I + Σ_{i=1}^{E} α_i D_i}, where the α_i's are scalars, and here I is just an image, and not a function.
In our experiments, we used E = 30 of the top eigenvectors.
We can only flow image I1 to another image I2 if it is possible to represent the difference
image as a linear combination of the D_i's, i.e. if I2 ∈ S. We find the optimal (in the
least-squares sense) α_i's by solving the system

D = Σ_{i=1}^{E} α_i D_i,    (5)
Figure 5: Modeling lighting changes with color flows. a. Image with strong shadow. b.
Same image under more uniform lighting conditions. c. Flow from a to b using eigenflows.
d. Flow from a to b using linear. Evaluating the capacity of the color flow model. e. Mirror
image of b. f. Failure to flow b to e implies that the model is not overparameterized.
using the pseudo-inverse, where D = I2 − I1. The error residual represents a match score
for I1 and I2. We point out again that this analysis ignores clipping effects. While clipping
can only reduce the error between a synthetic image and a target image, it may change
which solution is optimal in some cases.
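The least-squares solve of Eq. (5) and the resulting match score can be sketched with a standard solver (equivalent to the pseudo-inverse used above); the tiny basis below is illustrative only.

```python
import numpy as np

def flow_match_score(I1, I2, basis):
    """Solve D = sum_i alpha_i D_i in the least-squares sense, where
    D = I2 - I1 and `basis` holds the difference-image basis vectors
    D_i = I(Phi_i), one flattened image per row.  The residual norm is
    the match score (lower = better match); clipping is ignored."""
    D = (I2 - I1).ravel().astype(float)
    A = basis.reshape(basis.shape[0], -1).T   # columns are the D_i
    alpha, *_ = np.linalg.lstsq(A, D, rcond=None)
    return alpha, np.linalg.norm(A @ alpha - D)

basis = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
alpha, score = flow_match_score(np.zeros(4),
                                np.array([2.0, 3.0, 0.0, 0.0]), basis)
```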
5 Experiments
5.1 Image matching
One use of the color change model is for image matching. An ideal system would flow
matching images with zero error, and have large errors for non-matching images.
We first examined our ability to flow a source image to a matching target image under
different photic parameters. We compared our system to 3 other commonly used methods:
linear, diagonal, and gray world. The linear method finds the matrix A in Equation 3 that
minimizes the L2 error between the synthetic and target images; diagonal does the same
with a diagonal A; gray world linearly matches the mean R, G, B values of the synthetic
and target images. While our goal was to reduce the numerical difference between two
images using flows, it is instructive to examine one example that was particularly visually
compelling, shown in Figure 1.
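The three baselines can be written as small least-squares fits; the exact objectives below are our reading of the descriptions above (each operates on N × 3 arrays of corresponding pixel colors).

```python
import numpy as np

def fit_linear(c1, c2):
    """Global 3x3 matrix A minimizing the L2 error of c2 ~ A c1."""
    A_T, *_ = np.linalg.lstsq(c1, c2, rcond=None)  # solves c1 @ A_T ~ c2
    return A_T.T

def fit_diagonal(c1, c2):
    """Per-channel least-squares scale factors, i.e. a diagonal A."""
    return np.sum(c1 * c2, axis=0) / np.sum(c1 * c1, axis=0)

def gray_world(c1, c2):
    """Scales that linearly match the mean R, G, B values."""
    return c2.mean(axis=0) / c1.mean(axis=0)

rng = np.random.default_rng(0)
c1 = rng.random((50, 3)) + 0.1
M = np.array([[1.1, 0.0, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.2]])
A = fit_linear(c1, c1 @ M.T)          # recovers M
d = fit_diagonal(c1, 2.0 * c1)        # recovers [2, 2, 2]
```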
In a second experiment (Figure 4), we matched images of a face taken under various camera
parameters but with constant lighting. The color flow method outperforms the other methods
in all but one task, on which it was second.
5.2 Local flows
In another test, the source and target images were taken under very different lighting conditions. Furthermore, shadowing effects and lighting direction changed between the two
images. None of the methods could handle these effects when applied globally. Thus we
repeatedly applied each method on small patches of the image. Our method again performed the best, with an RMS error of 13.8 per pixel component, compared with errors of
17.3, 20.1, and 20.6 for the other methods. Figure 5 shows obvious visual artifacts with the
linear method, while our method seems to have produced a much better synthetic image,
especially in the shadow region at the edge of the poster.
a
b
c
d
Figure 6: Backgrounding with color flows. a. A background image. b. A new object and
shadow have appeared. c. For each of the two regions (from background subtraction), a
"flow" was done between the original image and the new image based on the pixels in each
region. d. The color flow of the original image using the eigenflow coefficients recovered
from the shadow region. The color flow using the coefficients from the non-shadow region
is unable to give a reasonable reconstruction of the new image.
Synthesis on patches of images greatly increases the capacity of the model. We performed
one experiment to measure the over-fitting of our method versus the others by trying to
flow an original image to its reflection (Figure 5). The RMS error per pixel component was
33.2 for our method versus 41.5, 47.3, and 48.7 for the other methods. Note that while our
method had lower error (which is undesirable), there was still a significant spread between
matching images and non-matching images. We believe we can improve differentiation
between matching and non-matching image pairs by assigning a cost to the change in α_i
across each image patch. For non-matching images, we would expect the α_i's to vary
rapidly to accommodate the changing image. For matching images, sharp changes would
only be necessary at shadow boundaries or changes in the surface orientation relative to
directional light sources.
5.3 Shadows
Shadows confuse tracking algorithms [16], backgrounding schemes and object recognition
algorithms. For example, shadows can have a dramatic effect on the magnitude of difference images, despite the fact that no "new objects" have entered a scene. Shadows can
also move across an image and appear as moving objects. Many of these problems could
be eliminated if we could recognize that a particular region of an image is equivalent to a
previously seen version of the scene, but under a different lighting.
Figure 6a shows how color flows may be used to distinguish between a new object and a
shadow by flowing both regions. A constant color flow across an entire region may not
model the image change well. However, we can extend our basic model to allow linearly
or quadratically (or other low-order polynomially) varying fields of eigenflow coefficients.
That is, we can find the best least-squares fit of the difference image allowing our α estimates to vary linearly or quadratically over the image. We implemented this technique by
computing flows α_{x,y} between corresponding image patches (indexed by x and y), and then
minimizing the following form:

arg min_M Σ_{x,y} (α_{x,y} − M c_{x,y})^T Σ_{x,y}^{−1} (α_{x,y} − M c_{x,y}).    (6)

Here, each c_{x,y} is a vector polynomial of the form [x y 1]^T for the linear case and
[x² xy y² x y 1]^T for the quadratic case. M is an E×3 matrix in the linear case and
an E×6 matrix in the quadratic case. The Σ_{x,y}'s are the error covariances in the estimate
of the α_{x,y}'s for each patch.
Allowing the α's to vary over the image greatly increases the capacity of a matcher, but
by limiting this variation to linear or quadratic variation, the capacity is still not able to
qualitatively match "non-matching" images. Note that this smooth variation in eigenflow
coefficients can model either a nearby light source or a smoothly curving surface, since
either of these conditions will result in a smoothly varying lighting change.
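The fit in Eq. (6) can be sketched as follows, specialized for brevity to the isotropic case Σ_{x,y} = σ²_{x,y} I, so that each patch contributes a scalar weight (the general matrix-weighted case needs a vectorized solve); names are ours.

```python
import numpy as np

def design(x, y, quadratic=False):
    """c_{x,y}: [x y 1]^T (linear) or [x^2 xy y^2 x y 1]^T (quadratic)."""
    return np.array([x * x, x * y, y * y, x, y, 1.0] if quadratic
                    else [x, y, 1.0])

def fit_coefficient_field(alphas, cs, weights):
    """Weighted least squares for M in alpha_{x,y} ~ M c_{x,y}.
    alphas: P x E patch flow coefficients, cs: P x K design vectors,
    weights: P inverse variances.  Solves the normal equations and
    returns M with shape E x K."""
    W = weights[:, None]
    lhs = (W * cs).T @ cs       # sum_xy w c c^T
    rhs = (W * cs).T @ alphas   # sum_xy w c alpha^T
    return np.linalg.solve(lhs, rhs).T

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
cs = np.array([design(x, y) for x, y in pts])
M_true = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # E = 2, linear case
alphas = cs @ M_true.T
M_fit = fit_coefficient_field(alphas, cs, np.ones(len(pts)))
```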
              constant   linear   quadratic
shadow          36.5      12.5      12.0
non-shadow     110.6      64.8      59.8

Table 1: Error residuals for shadow and non-shadow regions after color flows.
We consider three versions of the experiment: 1) a single vector of flow coefficients, 2)
linearly varying α's, 3) quadratically varying α's. In each case, the residual error for the
shadow region is much lower than for the non-shadow region (Table 1).
5.4 Conclusions
Except for the synthesis experiments, most of the experiments in this paper are preliminary
and only a proof of concept. Much larger experiments need to be performed to establish the utility of the color change model for particular applications. However, since the
color change model represents a compact description of lighting changes, including nonlinearities, we are optimistic about these applications.
References
[1] E. Miller and K. Tieu. Color eigenflows: Statistical modeling of joint color changes. In IEEE
ICCV, volume 1, pages 607-614, 2001.
[2] D. H. Marimont and B. A. Wandell. Linear models of surface and illuminant spectra. J. Opt.
Soc. Amer., 11, 1992.
[3] G. Buchsbaum. A spatial processor model for object color perception. J. Franklin Inst., 310,
1980.
[4] J. J. McCann, J. A. Hall, and E. H. Land. Color mondrian experiments: The study of average
spectral distributions. J. Opt. Soc. Amer., A(67), 1977.
[5] D. H. Brainard and W. T. Freeman. Bayesian color constancy. J. Opt. Soc. Amer., 14(7):1393-1411, 1997.
[6] D. A. Forsyth. A novel algorithm for color constancy. IJCV, 5(1), 1990.
[7] V. C. Cardei, B. V. Funt, and K. Barnard. Modeling color constancy with neural networks. In
Proc. Int. Conf. Vis., Recog., and Action: Neural Models of Mind and Machine, 1997.
[8] R. Lenz and P. Meer. Illumination independent color image representation using log-eigenspectra. Technical Report LiTH-ISY-R-1947, Linköping University, April 1997.
[9] P. N. Belhumeur and D. Kriegman. What is the set of images of an object under all possible
illumination conditions? IJCV, 28(3):1-16, 1998.
[10] W. S. Stiles, G. Wyszecki, and N. Ohta. Counting metameric object-color stimuli using frequency-limited spectral reflectance functions. J. Opt. Soc. Amer., 67(6), 1977.
[11] L. T. Maloney. Evaluation of linear models of surface spectral reflectance with small numbers
of parameters. J. Opt. Soc. Amer., A1, 1986.
[12] A. Shashua and R. Riklin-Raviv. The quotient image: Class-based re-rendering and recognition
with varying illuminations. IEEE PAMI, 3(2):129-130, 2001.
[13] J. J. Lien. Automatic Recognition of Facial Expressions Using Hidden Markov Models and
Estimation of Expression Intensity. PhD thesis, Carnegie Mellon University, 1998.
[14] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cog. Neuro., 3(1):71-86, 1991.
[15] M. Soriano, E. Marszalec, and M. Pietikainen. Color correction of face images under different
illuminants by rgb eigenfaces. In Proc. 2nd Int. Conf. on Audio- and Video-Based Biometric
Person Authentication, pages 148-153, 1999.
[16] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of
background maintenance. In IEEE CVPR, pages 255-261, 1999.
Inferring a Semantic Representation of Text
via Cross-Language Correlation Analysis
Alexei Vinokourov
John Shawe-Taylor
Dept. Computer Science
Royal Holloway, University of London
Egham, Surrey, UK, TW20 0EX
[email protected]
[email protected]
Nello Cristianini
Dept. Statistics
UC Davis, Berkeley, US
[email protected]
Abstract
The problem of learning a semantic representation of a text document
from data is addressed, in the situation where a corpus of unlabeled
paired documents is available, each pair being formed by a short English document and its French translation. This representation can then
be used for any retrieval, categorization or clustering task, both in a standard and in a cross-lingual setting. By using kernel functions, in this case
simple bag-of-words inner products, each part of the corpus is mapped
to a high-dimensional space. The correlations between the two spaces
are then learnt by using kernel Canonical Correlation Analysis. A set
of directions is found in the first and in the second space that are maximally correlated. Since we assume the two representations are completely independent apart from the semantic content, any correlation between them should reflect some semantic similarity. Certain patterns of
English words that relate to a specific meaning should correlate with certain patterns of French words corresponding to the same meaning, across
the corpus. Using the semantic representation obtained in this way we
first demonstrate that the correlations detected between the two versions
of the corpus are significantly higher than random, and hence that a representation based on such features does capture statistical patterns that
should reflect semantic information. Then we use such representation
both in cross-language and in single-language retrieval tasks, observing
performance that is consistently and significantly superior to LSI on the
same data.
1 Introduction
Most text retrieval or categorization methods depend on exact matches between words.
Such methods will, however, fail to recognize relevant documents that do not share words
with a user's queries. One reason for this is that the standard representation models (e.g.
boolean, standard vector, probabilistic) treat words as if they are independent, although it
is clear that they are not. A central problem in this field is to automatically model term-
term semantic interrelationships, in a way to improve retrieval, and possibly to do so in an
unsupervised way or with a minimal amount of supervision. For example, latent semantic indexing (LSI) has been used to extract information about co-occurrence of terms in
the same documents, an indicator of semantic relations, and this is achieved by singular
value decomposition (SVD) of the term-document matrix. The LSI method has also been
adapted to deal with the important problem of cross-language retrieval, where a query in
a language is used to retrieve documents in a different language. Using a paired corpus (a
set of pairs of documents, each pair being formed by two versions of the same text in two
different languages), after merging each pair into a single ?document?, we can interpret frequent co-occurrence of two terms in the same document as an indication of cross-linguistic
correlation [5]. In this framework, a common vector-space, including words from both
languages, is created and then the training set is analysed in this space using SVD. This
method, termed CL-LSI, will be briefly discussed in Section 4. More generally, many
other statistical and linear algebra methods have been used to obtain an improved semantic
representation of text data over LSI [6]. In this study we address the problem of learning
a semantic representation of text from a paired bilingual corpus, a problem that is important both for mono-lingual and cross-lingual applications. This problem can be regarded
either as an unsupervised problem with paired documents, or as a supervised monolingual
problem with very complex labels (i.e. the label of an English document could be its French
counterpart). In either way, the data can be readily obtained without an explicit labeling
effort, and furthermore there is not the loss of information due to compressing the meaning of a document into a discrete label. We employ kernel Canonical Correlation Analysis
(KCCA) [1] to learn a representation of text that captures aspects of its meaning. Given
a paired bilingual corpus, this method defines two embedding spaces for the documents
of the corpus, one for each language, and an obvious one-to-one correspondence between
points in the two spaces. KCCA then finds projections in the two embedding spaces for
which the resulting projected values are highly correlated. In other words, it looks for particular combinations of words that appear to have the same co-occurrence patterns in the
two languages. Our hypothesis is that finding such correlations across a paired crosslingual
corpus will locate the underlying semantics, since we assume that the two languages are
"conditionally independent", or that the only thing they have in common is their meaning.
The directions would carry information about the concepts that stood behind the process of
generation of the text and, although expressed differently in different languages, are, nevertheless, semantically equivalent. To illustrate such a representation we have printed the most
probable (most typical) words in each language for some of the first few kernel canonical
correlation components found for the bilingual 36th Canadian Parliament corpus (Hansards)
(left column is the English space and right column is the French space):
"PENSIONS PLAN"
English: pension, plan, cpp, canadians, benefits, retirement, fund, tax, investment, income, finance, young, years, rate, superannuation, disability, taxes, mounted, future, premiums, seniors, country, rates, jobs, pay
French: régime, pensions, rpc, prestations, canadiens, retraite, cotisations, fonds, discours, impôt, revenu, jeunes, ans, pension, argent, régimes, investissement, milliards, prestation, plan, finances, pays, avenir, invalidité

"AGRICULTURE"
English: resolution, wheat, board, farmers, newfoundland, grain, party, amendment, producers, canadian, speaker, referendum, minister, directors, quebec, speech, school, system, marketing, provinces, constitution, throne, money, section
French: référendum, majorité, blé, commission, agriculteurs, producteurs, canadienne, grain, parti, conseil, commercialisation, neuve, ministre, administration, modification, québec, terre, réformistes, partis, grains, op, nationale, élus, bloc, nations, chambre, administration

"CANADIAN LANDS"
English: park, land, aboriginal, yukon, marine, government, valley, water, boards, territories, board, north, parks, resource, agreements, northwest, resources, development, treaty, nations, work, territory, atlantic, programs
French: parc, autochtones, terres, pêches, vallée, ressources, yukon, nord, gouvernement, offices, marin, eaux, territoires, parcs, nations, territoriales, revendications, ministre, pêcheurs, ouest, entente, rights, office, atlantique, ententes, territoire

"FISHING INDUSTRY"
English: fisheries, atlantic, operatives, fishermen, newfoundland, fishery, problem, operative, fishing, industry, fish, years, problems, wheat, coast, oceans, west, salmon, tags, minister, communities, program, commission, motion, stocks
French: pêches, atlantique, pêcheurs, pêche, problèmes, coopératives, ans, industrie, poisson, neuve, terre, ouest, stocks, coopératives, ministre, santé, saumon, affaiblies, faculté, secteur, programme, région, scientifiques, travailler, conduite
This representation is then used for retrieval tasks, providing better performance than
existing techniques. Such directions are then used to calculate the coordinates of the
documents in a "language independent" way. Of course, particular statistical care is needed
for excluding "spurious" correlations. We show that the correlations we find are not the
effect of chance, and that the resulting representation significantly improves performance
of retrieval systems. We find that the correlation existing between certain sets of words
in English and French documents cannot be explained as a random correlation. Hence
we need to explain it by means of relations between the generative processes of the two
versions of the documents, that we assume to be conditionally independent given the topic
or content. Under such assumptions, hence, such correlations detect similarities in content
between the two documents, and can be exploited to derive a semantic representation of
the text. This representation is then used for retrieval tasks, providing better performance
than existing techniques. We first apply the method to crosslingual information retrieval,
comparing performance with a related approach based on latent semantic indexing (LSI)
described below [5]. Secondly, we treat the second language as a complex label for the
first language document and view the projection obtained by CL-KCCA as a semantic
map for use in a multilingual classification task with very encouraging results. From the
computational point of view, we detect such correlations by solving an eigenproblem, that
is avoiding problems like local minima, and we do so by using kernels.
The KCCA machinery will be given in Section 3 and in Section 4 we will show how to
apply KCCA to cross-lingual retrieval while Section 4 describes the monolingual applications. Finally, results will be presented in Section 5.
2 Previous work
The use of LSI for cross-language retrieval was proposed by [5]. LSI uses a method from linear algebra, the singular value decomposition (SVD), to discover the important associative relationships. An initial sample of documents is translated, by human or, perhaps, by machine, to create a set of dual-language training documents. After preprocessing, a common vector space including words from both languages is created, and the training set is then analysed in this space using the SVD:

$$A = U \Sigma V^\top \qquad (1)$$

where the $i$-th column of $A$ corresponds to document $i$, with its first set of coordinates giving the first-language features and the second set the second-language features. To translate a new document (query) $q$ to a language-independent representation, one projects (folds in) its expanded vector representation $\phi(q)$ (filled up with zero components for the other language) into the space spanned by the $k$ first left singular vectors: $\hat q = U_k^\top \phi(q)$. The similarity between two documents is measured as the inner product between their projections. The documents that are the most similar to the query are considered to be relevant.
3 Kernel Canonical Correlation Analysis

In this study our aim is to find an appropriate language-independent representation. Suppose, as for cross-lingual LSI (CL-LSI), we are given aligned texts in, for simplicity, two languages, i.e., every text in one language is a translation of a text in the other language, or vice versa. Our hypothesis is that, having the corpus $\{x_i\}$ mapped to a high-dimensional feature space as $\phi_x(x_i)$ and the corpus $\{y_i\}$ mapped to $\phi_y(y_i)$ (with $K_x$ and $K_y$ being respectively the kernels of the two mappings, i.e. the matrices of inner products between the images of all the data points [2]), we can learn (semantic) directions $f_x$ and $f_y$ in those spaces so that the projections $\langle f_x, \phi_x(x)\rangle$ and $\langle f_y, \phi_y(y)\rangle$ of input data images from the different languages are maximally correlated. We have thus intuitively defined the notion of a kernel canonical correlation $\rho$, which is defined as

$$\rho = \max_{f_x, f_y} \frac{\sum_i \langle f_x, \phi_x(x_i)\rangle\,\langle f_y, \phi_y(y_i)\rangle}{\sqrt{\sum_i \langle f_x, \phi_x(x_i)\rangle^2 \; \sum_i \langle f_y, \phi_y(y_i)\rangle^2}} \qquad (2)$$
We search for $f_x$ and $f_y$ in the spaces spanned by the $\phi$-images of the data points (reproducing kernel Hilbert space, RKHS [2]): $f_x = \sum_i \alpha_i \phi_x(x_i)$, $f_y = \sum_i \beta_i \phi_y(y_i)$. This rewrites the numerator of (2) as

$$\sum_i \langle f_x, \phi_x(x_i)\rangle\,\langle f_y, \phi_y(y_i)\rangle = \alpha^\top K_x K_y \beta \qquad (3)$$

where $\alpha$ is the vector with components $\alpha_i$ and $\beta$ the vector with components $\beta_i$. The problem (2) can then be reformulated as

$$\rho = \max_{\alpha, \beta} \frac{\alpha^\top K_x K_y \beta}{\|K_x \alpha\|\;\|K_y \beta\|} \qquad (4)$$
A
Once we have moved to a kernel defined feature space the extra flexibility introduced means
that there is a danger of overfitting. By this we mean that we can find spurious correlations
by using large weight vectors to project the data so that the two projections are completely
aligned. For example, if the data are linearly independent in both feature spaces we can
find linear transformations that map the input data to an orthogonal basis in each feature
space. It is now possible to find 1 perfect correlations between the two representations.
Using kernel functions will frequently result in linear independence of the training set, for
example, when using Gaussian kernels. It is clear therefore that we will need to introduce a control on the flexibility of the projection mappings $f_x$ and $f_y$. To do that, in the spirit of Partial Least Squares (PLS), we add a multiple of the squared 2-norm:

$$\kappa\,\|f_x\|^2 = \kappa\,\alpha^\top K_x \alpha \qquad (5)$$

in the denominator. Convexly combining the PLS regularization term (5) and the kCCA term $\|K_x\alpha\|^2$:

$$(1-\kappa)\,\|K_x\alpha\|^2 + \kappa\,\|f_x\|^2 = (1-\kappa)\,\alpha^\top K_x^2 \alpha + \kappa\,\alpha^\top K_x \alpha = \alpha^\top \big((1-\kappa)K_x + \kappa I\big) K_x\, \alpha \qquad (6)$$

we substitute its square root into the denominator of (4) instead of $\|K_x\alpha\|$, and do the same for $\beta$:

$$\rho = \max_{\alpha,\beta} \frac{\alpha^\top K_x K_y \beta}{\sqrt{\big((1-\kappa)\|K_x\alpha\|^2 + \kappa\|f_x\|^2\big)\,\big((1-\kappa)\|K_y\beta\|^2 + \kappa\|f_y\|^2\big)}} \qquad (7)$$

Differentiating the expression under $\max$ with respect to $\alpha$, taking into account that $\partial\|K\alpha\|^2/\partial\alpha = 2K^2\alpha$ and $\partial(\alpha^\top K \alpha)/\partial\alpha = 2K\alpha$, and equating the derivative to zero, we obtain

$$K_x K_y \beta = \rho\,\big((1-\kappa)K_x + \kappa I\big) K_x\, \alpha \qquad (8)$$

We note that $\alpha$ can be normalised so that $(1-\kappa)\|K_x\alpha\|^2 + \kappa\|f_x\|^2 = 1$. Similar operations for $\beta$ yield analogous equations that, together with (8), can be written in a matrix form:

$$Q\gamma = \rho\,R\gamma \qquad (9)$$

where $\rho$ is the average per-point correlation between the projections $\langle f_x, \phi_x(x)\rangle$ and $\langle f_y, \phi_y(y)\rangle$, and

$$Q = \begin{pmatrix} 0 & K_x K_y \\ K_y K_x & 0 \end{pmatrix}, \qquad R = \begin{pmatrix} \big((1-\kappa)K_x + \kappa I\big)K_x & 0 \\ 0 & \big((1-\kappa)K_y + \kappa I\big)K_y \end{pmatrix} \qquad (10)$$
Table 1: Statistics for the "House debates" of the 36th Canadian Parliament proceedings corpus.

              SENTENCE PAIRS   ENGLISH WORDS   FRENCH WORDS
  TRAINING    948 K            14,614 K        15,657 K
  TESTING 1   62 K             995 K           1,067 K
where $\gamma = (\alpha^\top, \beta^\top)^\top$. Equation (9) is known as a generalised eigenvalue problem. The standard approach to the solution of (9), in the case of a symmetric $R$, is to perform an incomplete Cholesky decomposition of the matrix $R = LL^\top$ and define $\tilde\gamma = L^\top \gamma$, which allows us, after simple transformations, to rewrite it as a standard eigenvalue problem $L^{-1} Q L^{-\top} \tilde\gamma = \rho\,\tilde\gamma$. We will discuss how to choose $\kappa$ in Section 5.

It is easy to see that if $\alpha$ or $\beta$ changes sign in (9), $\rho$ also changes sign. Thus, the spectrum of the problem (9) has paired positive and negative values between $-1$ and $1$.
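Problem (9) can be handed directly to an off-the-shelf generalised symmetric eigensolver. The sketch below is a minimal illustration on invented data: it builds Q and R from linear kernels and uses `scipy.linalg.eigh` instead of the incomplete-Cholesky route described in the text; the small ridge added to R is only a numerical convenience, not part of the formulation.

```python
import numpy as np
from scipy.linalg import eigh

def kcca_spectrum(Kx, Ky, kappa):
    """Solve the regularized KCCA generalised eigenproblem Q g = rho R g."""
    n = Kx.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    Q = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    R = np.block([[((1 - kappa) * Kx + kappa * I) @ Kx, Z],
                  [Z, ((1 - kappa) * Ky + kappa * I) @ Ky]])
    # Symmetric-definite generalised problem; ridge keeps R positive definite.
    return eigh(Q, R + 1e-8 * np.eye(2 * n), eigvals_only=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
Y = X @ rng.normal(size=(3, 3))   # a perfectly correlated second view
rhos = kcca_spectrum(X @ X.T, Y @ Y.T, kappa=0.1)
print(rhos.max())  # large positive rho: the two views are strongly correlated
```

As noted above, the spectrum is paired: each positive rho comes with a matching negative one, which the solver's output exhibits.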
4 Applications of KCCA

Cross-linguistic retrieval with KCCA. The kernel CCA procedure identifies a set of projections from both languages into a common semantic space. This provides a natural framework for performing cross-language information retrieval. We first select a number of semantic dimensions $k$, $1 \le k \le n$, with the largest correlation values $\rho_i$. To process an incoming query $q$, we expand $q$ into the vector representation $\phi(q)$ for its language and project it onto the canonical correlation components: $\hat q = A_k^\top \phi(q)$, using the appropriate vectors for that language, where $A_k$ is an $n \times k$ matrix whose columns are the first $k$ solution vectors of (9) for the given language, sorted by eigenvalue in descending order. Here we assumed that $\phi(q)$ is simply $Xq$, where $X$ is the training corpus in the given language: $X_E$ or $X_F$.
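The retrieval step then reduces to ranking projected documents against a projected query. The sketch below uses invented, already-projected toy vectors and cosine similarity; in the experiments the projections come from the KCCA solution vectors and the similarity is an inner product.

```python
import numpy as np

# Invented k=2 dimensional semantic projections of four "French" documents.
doc_proj = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [0.7, 0.7],
                     [-1.0, 0.2]])
doc_proj /= np.linalg.norm(doc_proj, axis=1, keepdims=True)

# An "English" query whose projection matches document 2 (its mate).
q_proj = doc_proj[2]

scores = doc_proj @ q_proj   # cosine similarities, since vectors are unit-norm
ranked = np.argsort(-scores)
print(ranked[0])             # the mate document is retrieved first
```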
Using the semantic space in text categorisation. The semantic vectors in the given language can be exported and used in some other application, for example Support Vector Machine classification. We first find the common features of the training data used to extract the semantics and of the data used to train the SVM classifier, cut the features that are not common, and compute the new kernel, which is the inner product of the projected data:

$$\hat K(d_1, d_2) = d_1^\top A A^\top d_2 \qquad (11)$$

The term-term relationship matrix $AA^\top$ can be computed only once and stored for further use in the SVM learning process and classification.
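Kernel (11) is cheap to apply once the term-term matrix is cached. The sketch below uses invented matrices and checks the identity that (11) equals the inner product of the projected documents.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))      # 5 terms, 2 semantic directions (invented)
S = A @ A.T                      # term-term relationship matrix, cached once

def semantic_kernel(d1, d2, S):
    """Kernel (11): inner product of the semantically projected documents."""
    return d1 @ S @ d2

d1 = rng.normal(size=5)
d2 = rng.normal(size=5)
lhs = semantic_kernel(d1, d2, S)
rhs = (A.T @ d1) @ (A.T @ d2)    # explicit projection of both documents
print(abs(lhs - rhs))            # negligible: the two computations agree
```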
5 Experiments
Experimental setup. Following [5] we conducted a series of experiments with the Hansard
collection [3] to measure the ability of CL-LSI and CL-KCCA for any document from a
test collection in one language to find its mate in another language. The whole collection consists of 1.3 million pairs of aligned text chunks (sentences or smaller fragments)
from the 36th Canadian Parliament proceedings. In our experiments we used only the
"house debates" part, for which statistics are given in Table 1. As a testing collection we
used only ?testing 1?. The raw text was split into sentences with Adwait Ratnaparkhi?s
MXTERMINATOR and the sentences were aligned with I. Dan Melamed?s GSA tool (for
details on the collection and also for the source see [3]).
Table 2: Average accuracy of top-rank (first retrieved) English-French retrieval, % (left), and average precision of English-French retrieval over a set of fixed recall levels, % (right).

              100   200   300   400   full        100   200   300   400   full
  cl-lsi       84    91    93    95    97          73    78    80    82    82
  cl-kcca      98    99    99    99    99          91    91    91    91    87
The text chunks were split into "paragraphs" based on "***" delimiters and these "paragraphs" were treated as separate documents. After removing stop-words in both the French and English parts, and rare words (i.e. those appearing fewer than three times), we obtained term-by-document "English" and "French" matrices (we also removed a few documents that appeared to be problematic when split into paragraphs). As these matrices were still too large to perform SVD and KCCA on them, we split the whole collection into 14 chunks of about 910 documents each and conducted experiments separately with them, measuring the performance of the methods each time on a 917-document test collection. The results were then averaged. We have also trained the CL-KCCA method on randomly reassociated French-English document pairs and observed accuracy on test data far lower than the results on the non-random original data. It is worth
noting that CL-KCCA behaves differently from CL-LSI over the full scale of the spectrum.
When CL-LSI only increases its performance with more eigenvectors taken from the lower
part of spectrum (which is, somewhat unexpectedly, quite different from its behaviour in
the monolinguistic setting), CL-KCCA?s performance, on the contrary, tends to deteriorate
with the dimensionality of the semantic subspace approaching the dimensionality of the
input data space.
The partial singular value decomposition of the matrices was done using Matlab's "svds" function, and the full SVD was performed using the "kernel trick" discussed in the previous section and the "svd" function, which took about 2 minutes to compute on a Linux Pentium III 1 GHz system for a selection of 1000 documents. The Matlab implementation of KCCA using the same "svd" function, which solves the generalised eigenvalue problem through incomplete Cholesky decomposition, took about 8 minutes to compute on the same data.
Mate retrieval. The results are presented in Table 2. Only the one mate document in French was considered as relevant to each of the test English documents, which were treated as queries, and the relative number of correctly retrieved documents was computed (Table 2), along with the average precision over a set of fixed recalls. Very similar results (omitted here) were obtained when French documents were treated as queries and English as test documents. As one can see from Table 2, CL-KCCA seems to capture most of the semantics in the first few components, achieving 98% accuracy with as little as 100 components, whereas CL-LSI needs all components for a similar figure.
Selecting the regularization parameter. The regularization parameter $\kappa$ (6) not only makes the problem (9) well-posed numerically, but also provides control over the capacity of the function space in which the solution is sought. The larger the value of $\kappa$, the less sensitive the method is to the input data and, therefore, the more stable (less prone to finding spurious relations) the solution becomes. We should thus observe an increase in the "reliability" of the solution. We measure the ability of the method to catch a useful signal by comparing the solutions on the original input and on "random" data. The "random" data is constructed by random reassociation of the data pairs: for example, $(X_E, \pi(X_F))$ denotes an English-French parallel corpus obtained from the original English-French aligned collection by reshuffling the French (equivalently, English) documents. Suppose $\Lambda(X, Y)$ denotes the (positive part of the) spectrum of the KCCA solution on the paired dataset $(X, Y)$. If the method is overfitting, it will be able to find perfect correlations and hence $\|\mathbf{1} - \Lambda(X, Y)\| \approx 0$, where $\mathbf{1}$ is the all-one vector.

[Figure 1 (plots not reproduced): Quantities $\|\mathbf{1} - \Lambda(X_E, X_F)\|$ (left), $\|\mathbf{1} - \Lambda(X_E, \pi(X_F))\|$ (middle) and $\|\Lambda(X_E, X_F) - \Lambda(X_E, \pi(X_F))\|$ (right) as functions of the regularization parameter $\kappa$. (Graphs were obtained for the regularization schema discussed in [1].)]

We therefore use this as a measure to assess the degree of overfitting. The three graphs in Figure 1 show these quantities as functions of the regularization parameter $\kappa$. For small values of $\kappa$ the spectrum in all the tests is close to the all-one spectrum. This indicates overfitting, since the method is able to find correlations even in randomly associated pairs. As $\kappa$ increases, the spectrum of the randomly associated data moves far from all-one, while that of the paired documents remains correlated. This observation can be exploited for choosing the optimal value of $\kappa$: from the middle and right graphs in Figure 1 a suitable value can be read off in the region where the curves separate, and the experiments reported in this study used a value chosen in this way.
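This paired-versus-reshuffled diagnostic can be mimicked numerically. The sketch below is a toy illustration with invented data: it computes the top regularized kernel canonical correlation for an aligned pair of views and for a randomly reshuffled pairing, and checks that the aligned pair correlates more strongly at a moderate $\kappa$.

```python
import numpy as np
from scipy.linalg import eigh

def top_rho(Kx, Ky, kappa):
    """Largest eigenvalue of the regularized KCCA problem (9)."""
    n = Kx.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    Q = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    R = np.block([[((1 - kappa) * Kx + kappa * I) @ Kx, Z],
                  [Z, ((1 - kappa) * Ky + kappa * I) @ Ky]])
    return eigh(Q, R + 1e-6 * np.eye(2 * n), eigvals_only=True).max()

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Kx = X @ X.T                  # linear kernel; the second view is a copy,
Ky = Kx.copy()                # so the pairing is perfectly aligned
perm = rng.permutation(20)
Ky_shuf = Ky[perm][:, perm]   # kernel of the randomly reshuffled pairing

rho_paired = top_rho(Kx, Ky, kappa=0.5)
rho_shuffled = top_rho(Kx, Ky_shuf, kappa=0.5)
print(rho_paired, rho_shuffled)  # the paired value should exceed the shuffled one
```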
Pseudo query test. To perform a more realistic test we generated short queries, of the kind most likely to occur in search engines, consisting of the 5 most probable words
from each test document. The relevant documents were the test documents themselves in
monolinguistic retrieval (English query - English document) and their mates in the crosslinguistic (English query - French document) test. Table 3 shows the relative number of
correctly retrieved as top-ranked English documents for English queries (left) and the relative number of correctly retrieved documents in the top ten ranked (right). Table 4 provides
analogous results but for cross-linguistic retrieval.
Table 3: English-English top-ranked retrieval accuracy, % (left), and English-English top-ten retrieval accuracy, % (right).

              100   200   300   400   full        100   200   300   400   full
  cl-lsi       53    60    64    66    70          82    86    88    89    91
  cl-kcca      60    63    70    71    73          90    93    94    95    95
Table 4: English-French top-ranked retrieval accuracy, % (left), and English-French top-ten retrieval accuracy, % (right).

              100   200   300   400   full        100   200   300   400   full
  cl-lsi       30    38    42    45    49          67    75    79    81    84
  cl-kcca      68    75    78    79    81          94    96    97    98    98
Text categorisation using semantics learned on a completely different corpus. The semantics (300 vectors) extracted from the Canadian Parliament corpus (Hansard) was used in Support Vector Machine (SVM) text classification [2] of the Reuters-21578 corpus (Table 5). In this experimental setting, the intersection of the vector spaces of the Hansards (5159 English words from the first 1000-French-English-document training chunk) and of the Reuters ModApte split (9962 words from the 9602 training and 3299 test documents) had 1473 words. The extracted 300 KCCA vectors from the English and French parts (row "CL-KCCA" of Table 5) and 300 eigenvectors from the same data (row "CL-LSI") were used in the SVM [4] with the kernel (11) to classify the Reuters-21578 data. The experiments were averaged over 10 runs, each time with a randomly chosen 5% fraction of the training data, as the difference between bag-of-words and semantic methods is more contrasting on smaller samples. Both CL-KCCA and CL-LSI perform remarkably well when one considers that they are based on just 1473 words. In all cases CL-KCCA outperforms the bag-of-words kernel.
Table 5: $F_1$ value, %, averaged over 10 subsequent runs of the SVM classifier with the original Reuters-21578 data ("bag-of-words") and preprocessed using the semantics (300 vectors) extracted from the Canadian Parliament corpus by various methods.

  CLASS    BAG-OF-WORDS   CL-KCCA   CL-LSI
  EARN         81            90       77
  ACQ          57            75       52
  GRAIN        33            43       64
  CRUDE        13            38       40
6 Conclusions
We have presented a novel procedure for extracting semantic information in an unsupervised way from a bilingual corpus, and we have used it in text retrieval applications. Our main findings are that the correlations existing between certain sets of words in English and French documents cannot be explained as random correlations. Hence we need to explain them by means of relations between the generative processes of the two versions of the documents. The correlations detect similarities in content between the two documents, and can be exploited to derive a semantic representation of the text. The representation is then used for retrieval tasks, providing better performance than existing techniques.
References
[1] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[2] Nello Cristianini and John Shawe-Taylor. An introduction to Support Vector Machines
and other kernel-based learning methods. Cambridge University Press, 2000.
[3] Ulrich Germann. Aligned Hansards of the 36th Parliament of Canada. http://www.isi.edu/natural-language/download/hansard/, 2001. Release 2001-1a.
[4] Thorsten Joachims. SVM-light Support Vector Machine. http://svmlight.joachims.org, 2002.
[5] M. L. Littman, S. T. Dumais, and T. K. Landauer. Automatic cross-language information retrieval using latent semantic indexing. In G. Grefenstette, editor, Cross language
information retrieval. Kluwer, 1998.
[6] Alexei Vinokourov and Mark Girolami. A probabilistic framework for the hierarchic
organisation and classification of document collections. Journal of Intelligent Information Systems, 18(2/3):153?172, 2002. Special Issue on Automated Text Categorization.
Natural Language Parsing
Dan Klein
Department of Computer Science
Stanford University
Stanford, CA 94305-9040
Christopher D. Manning
Department of Computer Science
Stanford University
Stanford, CA 94305-9040
[email protected]
[email protected]
Abstract
We present a novel generative model for natural language tree structures
in which semantic (lexical dependency) and syntactic (PCFG) structures
are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving
the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing
models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
1 Introduction
Syntactic structure has standardly been described in terms of categories (phrasal labels and
word classes), with little mention of particular words. This is possible, since, with the
exception of certain common function words, the acceptable syntactic configurations of a
language are largely independent of the particular words that fill out a sentence. Conversely,
for resolving the important attachment ambiguities of modifiers and arguments, lexical
preferences are known to be very effective. Additionally, methods based only on key lexical
dependencies have been shown to be very effective in choosing between valid syntactic
forms [1]. Modern statistical parsers [2, 3] standardly use complex joint models over
both category labels and lexical items, where ?everything is conditioned on everything? to
the extent possible within the limits of data sparseness and finite computer memory. For
example, the probability that a verb phrase will take a noun phrase object depends on the
head word of the verb phrase. A VP headed by acquired will likely take an object, while
a VP headed by agreed will likely not. There are certainly statistical interactions between
syntactic and semantic structure, and, if deeper underlying variables of communication
are not modeled, everything tends to be dependent on everything else in language [4].
However, the above considerations suggest that there might be considerable value in a
factored model, which provides separate models of syntactic configurations and lexical
dependencies, and then combines them to determine optimal parses. For example, under
this view, we may know that acquired takes right dependents headed by nouns such as
company or division, while agreed takes no noun-headed right dependents at all. If so,
there is no need to explicitly model the phrasal selection on top of the lexical selection.
Although we will show that such a model can indeed produce a high performance parser,
we will focus particularly on how a factored model permits efficient, exact inference, rather
than the approximate heuristic inference normally used in large statistical parsers.
[Figure 1 diagrams: (a) a PCFG phrase-structure tree, (b) a dependency tree with head words such as fell-VBD, payrolls-NNS, in-IN, and September-NN, and (c) a combined lexicalized tree, all for the sentence "Factory payrolls fell in September".]
Figure 1: Three kinds of parse structures.
2 A Factored Model
Generative models for parsing typically model one of the kinds of structures shown in figure 1. Figure 1a is a plain phrase-structure tree T , which primarily models syntactic units,
figure 1b is a dependency tree D, which primarily models word-to-word selectional affinities [5], and figure 1c is a lexicalized phrase-structure tree L, which carries both category
and (part-of-speech tagged) head word information at each node.
A lexicalized tree can be viewed as the pair L = (T, D) of a phrase structure tree T and
a dependency tree D. In this view, generative models over lexicalized trees, of the sort
standard in lexicalized PCFG parsing [2, 3], can be regarded as assigning mass P(T, D)
to such pairs. To the extent that dependency and phrase structure need not be modeled
jointly, we can factor our model as P(T, D) = P(T )P(D): this approach is the basis
of our proposed models, and its use is, to our knowledge, new. This factorization, of
course, assigns mass to pairs which are incompatible, either because they do not generate
the same terminal string or do not embody compatible bracketings. Therefore, the total
mass assigned to valid structures will be less than one. We could imagine fixing this by
renormalizing. For example, this situation fits into the product-of-experts framework [6],
with one semantic expert and one syntactic expert that must agree on a single structure.
However, since we are presently only interested in finding most-likely parses, no global
renormalization constants need to be calculated.
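The factored score of a candidate analysis is just the sum of the two log-probabilities. The stub below illustrates this; the real components are a PCFG and a lexical dependency model, and the rule and attachment probabilities here are invented for illustration.

```python
import math

def log_p_tree(tree):
    # Stub PCFG score: product of made-up rule probabilities.
    rule_probs = {("S", ("NP", "VP")): 0.3, ("NP", ("NN", "NNS")): 0.1,
                  ("VP", ("VBD", "PP")): 0.2}
    return sum(math.log(rule_probs[r]) for r in tree)

def log_p_deps(deps):
    # Stub dependency score: product of made-up attachment probabilities.
    dep_probs = {("payrolls", "fell"): 0.2, ("Factory", "payrolls"): 0.3,
                 ("in", "fell"): 0.1}
    return sum(math.log(dep_probs[d]) for d in deps)

# A candidate lexicalized tree L = (T, D), represented by its rules and deps.
tree = [("S", ("NP", "VP")), ("NP", ("NN", "NNS")), ("VP", ("VBD", "PP"))]
deps = [("payrolls", "fell"), ("Factory", "payrolls"), ("in", "fell")]

# Factored scoring: log P(T, D) = log P(T) + log P(D).
score = log_p_tree(tree) + log_p_deps(deps)
print(round(score, 3))
```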
Given the factorization P(T, D) = P(T )P(D), rather than engineering a single complex
combined model, we can instead build two simpler sub-models. We show that the combination of even quite simple ?off the shelf? implementations of the two sub-models can
provide decent parsing performance. Further, the modularity afforded by the factorization
makes it much easier to extend and optimize the individual components. We illustrate this
by building improved versions of both sub-models, but we believe that there is room for
further optimization.
Concretely, we used the following sub-models. For P(T ), we used successively more
accurate PCFGs. The simplest, PCFG-BASIC, used the raw treebank grammar, with nonterminals and rewrites taken directly from the training trees [7]. In this model, nodes rewrite
atomically, in a top-down manner, in only the ways observed in the training data. For improved models of P(T ), tree nodes' labels were annotated with various contextual markers.
In PCFG-PA, each node was marked with its parent's label as in [8]. It is now well known
that such annotation improves the accuracy of PCFG parsing by weakening the PCFG independence assumptions. For example, the NP in figure 1a would actually have been labeled
NP^S. Since the counts were not fragmented by head word or head tag, we were able
to directly use the MLE parameters, without smoothing.1 The best PCFG model, PCFG-LING, involved selective parent splitting, order-2 rule markovization (similar to [2, 3]), and
linguistically-derived feature splits.2
1 This is not to say that smoothing would not improve performance, but to underscore how the
factored model encounters less sparsity problems than a joint model.
2 Infinitive VPs, possessive NPs, and gapped Ss were marked, the preposition tag was split into
[Figure 2 omitted: an edge, and the edge combination schema. An edge X(h) is drawn over its span i..j with head position h; in the schema, adjacent edges X(h) over i..j and Y(h') over j..k combine into a larger edge Z(h) over i..k.]
Figure 2: Edges and the edge combination schema for an O(n^5) lexicalized tabular parser.
Models of P(D) were lexical dependency models, which deal with tagged words: pairs ⟨w, t⟩. First the head ⟨wh, th⟩ of a constituent is generated, then successive right dependents ⟨wd, td⟩ until a STOP token is generated, then successive left dependents until STOP is generated again. For example, in figure 1, first we choose fell-VBD as the head of the sentence. Then, we generate in-IN to the right, which then generates September-NN to the right, which generates STOP on both sides. We then return to in-IN, generate STOP to the right, and so on.
The dependency models required smoothing, as the word-word dependency data is very sparse. In our basic model, DEP-BASIC, we generate a dependent conditioned on the head and direction, using a mixture of two generation paths: a head can select a specific argument word, or a head can select only an argument tag. For head selection of words, there is a prior distribution over dependents taken by the head's tag, for example, left dependents taken by past tense verbs: P(wd, td | th, dir) = count(wd, td, th, dir)/count(th, dir). Observations of bilexical pairs are taken against this prior, with some prior strength α:

P(wd, td | wh, th, dir) = [count(wd, td, wh, th, dir) + α P(wd, td | th, dir)] / [count(wh, th, dir) + α]

This model can capture bilexical selection, such as the affinity between payrolls and fell. Alternately, the dependent can have only its tag selected, and then the word is generated independently: P(wd, td | wh, th, dir) = P(wd | td) P(td | wh, th, dir). The estimates for P(td | wh, th, dir) are similar to the above. These two mixture components are then linearly interpolated, giving just two prior strengths and a mixing weight to be estimated on held-out data.
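The word-path estimate above is a backoff from bilexical counts to the tag-conditioned prior, weighted by the prior strength. A sketch with invented counts and an arbitrary strength value (not the paper's estimated parameters):

```python
def smoothed_word_prob(bilex_count, head_count, tag_prior, alpha):
    """P(wd, td | wh, th, dir): bilexical counts backed off to a
    tag-conditioned prior with strength alpha."""
    return (bilex_count + alpha * tag_prior) / (head_count + alpha)

# Hypothetical counts: 'fell' (VBD, left) took 'payrolls' 3 times out of
# 10 left-dependent events; the tag-level prior gives 'payrolls' 0.05.
p = smoothed_word_prob(bilex_count=3, head_count=10, tag_prior=0.05, alpha=2.0)
print(round(p, 4))  # (3 + 2*0.05) / (10 + 2) = 0.2583
```

With alpha = 0 this reduces to the raw relative frequency; large alpha pulls the estimate toward the tag-level prior, which is what protects the sparse bilexical counts.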
In the enhanced dependency model, DEP-VAL, we condition not only on direction, but also on distance and valence. The decision of whether to generate STOP is conditioned on one of five values of distance between the head and the generation point: zero, one, 2-5, 6-10, and 11+. If we decide to generate a non-STOP dependent, the actual choice of dependent is sensitive only to whether the distance is zero or not. That is, we model only zero/non-zero valence. Note that this is (intentionally) very similar to the generative model of [2] in broad structure, but substantially less complex.
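The five-way distance conditioning can be written as a small bucketing function (the bin labels are illustrative):

```python
def distance_bucket(d):
    """Map head-to-generation-point distance onto the five DEP-VAL bins:
    zero, one, 2-5, 6-10, 11+."""
    if d == 0:
        return "zero"
    if d == 1:
        return "one"
    if d <= 5:
        return "2-5"
    if d <= 10:
        return "6-10"
    return "11+"

print([distance_bucket(d) for d in (0, 1, 3, 7, 25)])
# ['zero', 'one', '2-5', '6-10', '11+']
```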
At this point, one might wonder what has been gained. By factoring the semantic and
syntactic models, we have certainly simplified both (and fragmented the data less), but
there are always simpler models, and researchers have adopted complex ones because of
their parsing accuracy. In the remainder of the paper, we demonstrate the three primary
benefits of our model: a fast, exact parsing algorithm; parsing accuracy comparable to
non-factored models; and useful modularity which permits easy extensibility.
several subtypes, conjunctions were split into contrastive and other occurrences, and the word not was given a unique tag. In all models, unknown words were modeled using only the MLE of P(tag|unknown) with ML estimates for the reserved mass per tag. Selective splitting was done using an information-gain-like criterion.
3 An A* Parser
In this section, we outline an efficient algorithm for finding the Viterbi, or most probable, parse for a given terminal sequence in our factored lexicalized model. The naive approach to lexicalized PCFG parsing is to act as if the lexicalized PCFG is simply a large nonlexical PCFG, with many more symbols than its nonlexicalized PCFG backbone. For example, while the original PCFG might have a symbol NP, the lexicalized one has a symbol NP-x for every possible head x in the vocabulary. Further, rules like S → NP VP become a family of rules S-x → NP-y VP-x.3 Within a dynamic program, the core parse item in this case is the edge, shown in figure 2, which is specified by its start, end, root symbol, and head position.4 Adjacent edges combine to form larger edges, as in the top of figure 2. There are O(n^3) edges, and two edges are potentially compatible whenever the left one ends where the right one starts. Therefore, there are O(n^5) such combinations to check, giving an O(n^5) dynamic program.5
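The O(n^5) cost comes from the five free indices in the combination schema: the left edge's two endpoints and head, plus the right edge's end and head. A minimal sketch of the combination loop, with hypothetical score tables and no grammar-rule lookup (root symbols are dropped, and the combined edge keeps the left head, as one of the two schema instantiations would):

```python
import math

def combine_edges(n, edge_score, combine_score):
    """Enumerate the O(n^5) combinations of adjacent edges.

    edge_score[(i, j, h)] is the best log-score of an edge spanning
    i..j with head position h.
    combine_score(h, h2) is the log-score of attaching head h2 to h.
    Returns best log-scores for the combined edges.
    """
    best = {}
    for i in range(n):                      # start of the left edge
        for j in range(i + 1, n):           # shared boundary
            for k in range(j + 1, n + 1):   # end of the right edge
                for h in range(i, j):       # head of the left edge
                    for h2 in range(j, k):  # head of the right edge
                        left = edge_score.get((i, j, h))
                        right = edge_score.get((j, k, h2))
                        if left is None or right is None:
                            continue
                        s = left + right + combine_score(h, h2)
                        if s > best.get((i, k, h), -math.inf):
                            best[(i, k, h)] = s
    return best

# Two adjacent one-word edges over a two-word sentence combine once:
print(combine_edges(2, {(0, 1, 0): 0.0, (1, 2, 1): 0.0}, lambda h, h2: -1.0))
# {(0, 2, 0): -1.0}
```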
The core of our parsing algorithm is a tabular agenda-based parser, using the O(n^5) schema
above. The novelty is in the choice of agenda priority, where we exploit the rapid parsing
algorithms available for the sub-models to speed up the otherwise impractical combined
parse. Our choice of priority also guarantees optimality, in the sense that when the goal
edge is removed, its most probable parse is known exactly. Other lexicalized parsers accelerate parsing in ways that destroy this optimality guarantee. The top-level procedure is
given in figure 3. First, we parse exhaustively with the two sub-models, not to find complete parses, but to find best outside scores for each edge e. An outside score is the score of
the best parse structure which starts at the goal and includes e, the words before it, and the
words after it, as depicted in figure 3. Outside scores are a Viterbi analog of the standard
outside probabilities given by the inside-outside algorithm [11]. For the syntactic model,
P(T ), well-known cubic PCFG parsing algorithms are easily adapted to find outside scores.
For the semantic model, P(D), there are several presentations of cubic dependency parsing
algorithms, including [9] and [12]. These can also be adapted to produce outside scores in
cubic time, though since their basic data structures are not edges, there is some subtlety.
For space reasons, we omit the details of these phases.
An agenda-based parser tracks all edges that have been constructed at a given time. When
an edge is first constructed, it is put on an agenda, which is a priority queue indexed by
some score for that node. The agenda is a holding area for edges which have been built
in at least one way, but which have not yet been used in the construction of other edges.
The core cycle of the parser is to remove the highest-priority edge from the agenda, and
act on it according to the edge combination schema, combining it with any previously
removed, compatible edges. This much is common to many parsers; agenda-based parsers
primarily differ in their choice of edge priority. If the best known inside score for an edge
is used as a priority, then the parser will be optimal. In particular, when the goal edge is removed, its score will correspond to the most likely parse. The proof is a generalization of the proof of Dijkstra's algorithm (uniform-cost search), and is omitted for space reasons
3 The score of such a rule in the factored model would be the PCFG score for S → NP VP, combined with the score for x taking y as a dependent and the left and right STOP scores for y.
4 The head position variable often, as in our case, also specifies the head's tag.
5 Eisner and Satta [9] propose a clever O(n^4) modification which separates this process into two steps by introducing an intermediate object. However, even the O(n^4) formulation is impractical for exhaustive parsing with broad-coverage, lexicalized treebank grammars. There are several reasons for this: the constant factor due to the grammar is huge (these grammars often contain tens of thousands of rules once binarized), and larger sentences are more likely to contain structures which unlock increasingly large regions of the grammar ([10] describes how this can cause the sentence length to leak into terms which are analyzed as constant, leading to empirical growth far faster than the predicted bounds). We did implement a version of this parser using the O(n^4) formulation of [9], but, because of the effectiveness of the A* estimate, it was only marginally faster; see section 4.
1. Extract the PCFG sub-model and set up the PCFG parser.
2. Use the PCFG parser to find outside scores αPCFG(e) for each edge.
3. Extract the dependency sub-model and set up the dependency parser.
4. Use the dependency parser to find outside scores αDEP(e) for each edge.
5. Combine PCFG and dependency sub-models into the lexicalized model.
6. Form the combined outside estimate a(e) = αPCFG(e) + αDEP(e).
7. Use the lexicalized A* parser, with a(e) as an A* estimate of α(e).
[Diagram omitted: an edge e within a sentence, with its inside span and the surrounding outside context over the words.]
Figure 3: The top-level algorithm and an illustration of inside and outside scores.
(a) The PCFG Model
PCFG Model    Precision  Recall  F1    Exact Match
PCFG-BASIC    75.3       70.2    72.7  11.0
PCFG-PA       78.4       76.9    77.7  18.5
PCFG-LING     83.7       82.1    82.9  25.7

(b) The Dependency Model
Dependency Model  Dependency Acc
DEP-BASIC         76.3
DEP-VAL           85.0

Figure 4: Performance of the sub-models alone.
(but given in [13]). However, removing edges by inside score is not practical (see section 4
for an empirical demonstration), because all small edges end up having better scores than
any large edges. Luckily, the optimality of the algorithm remains if, rather than removing
items from the agenda by their best inside scores, we add to those scores any optimistic
(admissible) estimate of the cost to complete a parse using that item. The proof of this is a
generalization of the proof of the optimality of A* search.
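This agenda loop with an optimistic completion estimate is, in essence, A* search over edges; uniform-cost search is the special case where the estimate is zero. A generic sketch (the `expand` and `estimate` callables are hypothetical stand-ins, not the paper's implementation):

```python
import heapq

def astar_agenda(start_edges, goal, expand, estimate):
    """Generic A*-style agenda loop over parse edges.

    start_edges: dict edge -> initial inside log-score.
    expand(edge, finished): yields (new_edge, new_inside_score) items
        built by combining `edge` with already-finished edges.
    estimate(edge): admissible (optimistic) outside estimate a(e).
    Scores are log-probabilities, so "better" means larger; we negate
    them to reuse Python's min-heap.
    """
    best = dict(start_edges)
    agenda = [(-(s + estimate(e)), e) for e, s in start_edges.items()]
    heapq.heapify(agenda)
    finished = set()
    while agenda:
        _, e = heapq.heappop(agenda)
        if e in finished:
            continue                 # stale agenda entry
        if e == goal:
            return best[e]           # exact Viterbi score of the goal
        finished.add(e)
        for e2, s2 in expand(e, finished):
            if s2 > best.get(e2, float("-inf")):
                best[e2] = s2
                heapq.heappush(agenda, (-(s2 + estimate(e2)), e2))
    return None
```

With an admissible estimate, the first time the goal edge is removed its score is exact, which is the optimality property the text appeals to.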
To our knowledge, no way of generating effective, admissible A* estimates for lexicalized parsing has previously been proposed.6 However, because of the factored structure of our model, we can use the results of the sub-models' parses to give us quite sharp A* estimates. Say we want to know the outside score of an edge e. That score will be the score α(Te, De) (a log-probability) of a certain structure (Te, De) outside of e, where Te and De are a compatible pair. From the initial phases, we know the exact scores of the overall best Te0 and the best De0 which can occur outside of e, though of course it may well be that Te0 and De0 are not compatible. However, αPCFG(Te) ≤ αPCFG(Te0) and αDEP(De) ≤ αDEP(De0), and so α(Te, De) = αPCFG(Te) + αDEP(De) ≤ αPCFG(Te0) + αDEP(De0). Therefore, we can use the sum of the sub-models' outside scores, a(e) = αPCFG(Te0) + αDEP(De0), as an upper bound on the outside score for the combined model. Since it is reasonable to assume that the two models will be broadly compatible and will generally prefer similar structures, this should create a sharp A* estimate, and greatly reduce the work needed to find the goal parse. We give empirical evidence of this in section 4.
4 Empirical Performance
In this section, we demonstrate that (i) the factored model's parsing performance is comparable to non-factored models which use similar features, (ii) there is an advantage to exact inference, and (iii) the A* savings are substantial. First, we give parsing figures on the standard Penn treebank parsing task. We trained the two sub-models, separately, on sections 02-21 of the WSJ section of the treebank. The numbers reported here are the result of then testing on section 23 (length ≤ 40). The treebank only supplies node labels (like NP) and
6 The basic idea of changing edge priorities to more effectively guide parser work is standardly used, and other authors have made very effective use of inadmissible estimates. [2] uses extensive probabilistic pruning; this amounts to giving pruned edges infinitely low priority. Absolute pruning can, and does, prevent the most likely parse from being returned at all. [14] removes edges in order of estimates of their correctness. This, too, may result in the first parse found not being the most likely parse, but it has another more subtle drawback: if we hold back an edge e for too long, we may use e to build another edge f in a new, better way. If f has already been used to construct larger edges, we must then propagate its new score upwards (which can trigger still further propagation).
PCFG Model   Dependency Model  Precision  Recall  F1    Exact Match  Dependency Acc
PCFG-BASIC   DEP-BASIC         80.1       78.2    79.1  16.7         87.2
PCFG-BASIC   DEP-VAL           82.5       81.5    82.0  17.7         89.2
PCFG-PA      DEP-BASIC         82.1       82.2    82.1  23.7         88.0
PCFG-PA      DEP-VAL           84.0       85.0    84.5  24.8         89.7
PCFG-LING    DEP-BASIC         85.4       84.8    85.1  30.4         90.3
PCFG-LING    DEP-VAL           86.6       86.8    86.7  32.1         91.0

PCFG Model   Dependency Model  Thresholded?  F1    Exact Match  Dependency Acc
PCFG-LING    DEP-VAL           No            86.7  32.1         91.0
PCFG-LING    DEP-VAL           Yes           86.5  31.9         90.8

Figure 5: The combined model, with various sub-models, and with/without thresholding.
does not contain head information. Heads were calculated for each node according to the
deterministic rules given in [2]. These rules are broadly correct, but not perfect.
We effectively have three parsers: the PCFG (sub-)parser, which produces nonlexical
phrase structures like figure 1a, the dependency (sub-)parser, which produces dependency
structures like figure 1b, and the combination parser, which produces lexicalized phrase
structures like figure 1c. The outputs of the combination parser can also be projected down
to either nonlexical phrase structures or dependency structures. We score the output of our
parsers in two ways. First, the phrase structure of the PCFG and combination parsers can
be compared to the treebank parses. The parsing measures standardly used for this task are
labeled precision and recall.7 We also report F1, the harmonic mean of these two quantities. Second, for the dependency and combination parsers, we can score the dependency structures. A dependency structure D is viewed as a set of head-dependent pairs ⟨h, d⟩, with an extra dependency ⟨root, x⟩ where root is a special symbol and x is the head of the sentence. Although the dependency model generates part-of-speech tags as well, these are ignored for dependency accuracy. Punctuation is not scored. Since all dependency structures over n non-punctuation terminals contain n dependencies (n − 1 plus the root dependency), we report only accuracy, which is identical to both precision and recall. It should be stressed that the "correct" dependency structures, though generally correct, are generated from the PCFG structures by linguistically motivated, but automatic and only heuristic rules.
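Both metrics follow directly from these set-based definitions; a sketch with invented gold and guessed structures:

```python
def prf1(gold, guess):
    """Labeled precision/recall over constituent sets; F1 is their
    harmonic mean."""
    hits = len(gold & guess)
    p, r = hits / len(guess), hits / len(gold)
    return p, r, (2 * p * r / (p + r) if p + r else 0.0)

def dep_accuracy(gold, guess):
    """Dependency accuracy: since both structures over n terminals
    contain exactly n dependencies, precision = recall = accuracy."""
    assert len(gold) == len(guess)
    return len(gold & guess) / len(gold)

gold = {("root", "fell"), ("fell", "payrolls"), ("fell", "in")}
guess = {("root", "fell"), ("fell", "payrolls"), ("payrolls", "in")}
print(dep_accuracy(gold, guess))  # 2 of 3 dependencies match
```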
Figure 4 shows the relevant scores for the various PCFG and dependency parsers alone.8 The valence model increases the dependency model's accuracy from 76.3% to 85.0%, and each successive enhancement improves the F1 of the PCFG models, from 72.7% to 77.7% to 82.9%. The combination parser's performance is given in figure 5. As each individual model is improved, the combination F1 is also improved, from 79.1% with the pair of basic models to 86.7% with the pair of top models. The dependency accuracy also goes up: from 87.2% to 91.0%. Note, however, that even the pair of basic models has a combined dependency accuracy higher than the enhanced dependency model alone, and the top three have combined F1 better than the best PCFG model alone. For the top pair, figure 6c illustrates the relative F1 of the combination parser to the PCFG component alone, showing the unsurprising trend that the addition of the dependency model helps more for longer sentences, which, on average, contain more attachment ambiguity. The top F1 of 86.7% is greater than that of the lexicalized parsers presented in [15, 16], but less than that of the newer, more complex, parsers presented in [3, 2], which reach as high as 90.1% F1.
7 A tree T is viewed as a set of constituents c(T). Constituents in the correct and the proposed tree must have the same start, end, and label to be considered identical. For this measure, the lexical heads of nodes are irrelevant. The actual measures used are detailed in [15], and involve minor normalizations like the removal of punctuation in the comparison.
8 The dependency model is sensitive to any preterminal annotation (tag splitting) done by the PCFG model. The actual value of DEP-VAL shown corresponds to PCFG-LING.
[Figure 6 omitted: three plots over sentence length. (a) Edges processed for the uniform-cost vs. A* parsers; (b) time in seconds spent on the PCFG, dependency, and combined phases; (c) absolute F1 of the combination and PCFG parsers, and their relative F1 (Combination/PCFG).]
Figure 6: (a) A* effectiveness measured by edges expanded, (b) time spent on each phase, and (c) relative F1, all shown as sentence length increases.
However, it is worth pointing out that these higher-accuracy parsers incorporate many finely
wrought enhancements which could presumably be extracted and applied to benefit our
individual models.9
The primary goal of this paper is not to present a maximally tuned parser, but to demonstrate
a method for fast, exact inference usable in parsing. Given the impracticality of exact inference for standard parsers, a common strategy is to take a PCFG backbone, extract a set of top parses, either the top k or all parses within a score threshold of the top parse, and rerank them [3, 17]. This pruning is done for efficiency; the question is whether it is hurting accuracy. That is, would exact inference be preferable? Figure 5 shows the result of parsing with our combined model, using the best model pair, but with the A* estimates altered to block parses whose PCFG projection had a score further than a threshold of 2 in log-probability from the best PCFG-only parse. Both bracket F1 and exact-match rate are lower for the thresholded parses, which we take as an argument for exact inference.10
We conclude with data on the effectiveness of the A* method. Figure 6a shows the average
number of edges extracted from the agenda as sentence length increases. Numbers both
with and without using the A* estimate are shown. Clearly, the uniform-cost version of
the parser is dramatically less efficient; by sentence length 15 it extracts over 800K edges,
while even at length 40 the A* heuristics are so effective that only around 2K edges are
extracted. At length 10, the average number is less than 80, and the fraction of edges not
suppressed is better than 1/10K (and improves as sentence length increases). To explain
this effectiveness, we suggest that the combined parsing phase is really only figuring out how to reconcile the two models' preferences.11 The A* estimates were so effective that even with our object-heavy Java implementation of the combined parser, total parse time was dominated by the initial, array-based PCFG phase (see figure 6b).12
9 For example, the dependency distance function of [2] registers punctuation and verb counts, and
both smooth the PCFG production probabilities, which could improve the PCFG grammar.
10 While pruning typically buys speed at the expense of some accuracy (see also, e.g., [2]), pruning
can also sometimes improve F1: Charniak et al. [14] find that pruning based on estimates for P(e|s) raises accuracy slightly, for a non-lexicalized PCFG. As they note, their pruning metric seems to mimic Goodman's maximum-constituents parsing [18], which maximizes the expected number of
correct nodes rather than the likelihood of the entire parse. In any case, we see it as valuable to have
an exact parser with which these types of questions can be investigated at all for lexicalized parsing.
11 Note that the uniform-cost parser does enough work to exploit the shared structure of the dynamic
program, and therefore edge counts appear to grow polynomially. However, the A* parser does so
little work that there is minimal structure-sharing. Its edge counts therefore appear to grow exponentially over these sentence lengths, just like a non-dynamic-programming parser's would. With much
longer sentences, or a less efficient estimate, the polynomial behavior would reappear.
12 The average time to parse a sentence with the best model on a 750MHz Pentium III with 2GB
RAM was: for 20 words, PCFG 13 sec, dependencies 0.6 sec, combination 0.3 sec; 40 words, PCFG
72 sec, dependencies 18 sec, combination 1.6 sec.
5 Conclusion
The framework of factored models over lexicalized trees has several advantages. It is conceptually simple, and modularizes the model design and estimation problems. The concrete
model presented performs comparably to other, more complex, non-exact models proposed,
and can be easily extended in the ways that other parser models have been. Most importantly, it admits a novel A* parsing approach which allows fast, exact inference of the most
probable parse.
Acknowledgements. We would like to thank Lillian Lee, Fernando Pereira, and Joshua
Goodman for advice and discussion about this work. This paper is based on work supported
by the National Science Foundation (NSF) under Grant No. IIS-0085896, by the Advanced
Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program, by an NSF Graduate Fellowship to the first author, and by an
IBM Faculty Partnership Award to the second author.
References
[1] D. Hindle and M. Rooth. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120, 1993.
[2] M. Collins. Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, University of Pennsylvania, 1999.
[3] E. Charniak. A maximum-entropy-inspired parser. NAACL 1, pp. 132-139, 2000.
[4] R. Bod. What is the minimal set of fragments that achieves maximal parse accuracy? ACL 39, pp. 66-73, 2001.
[5] I. A. Mel'čuk. Dependency Syntax: theory and practice. State University of New York Press, Albany, NY, 1988.
[6] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, GCNU, University College London, 2000.
[7] E. Charniak. Tree-bank grammars. Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI '96), pp. 1031-1036, 1996.
[8] M. Johnson. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613-632, 1998.
[9] J. Eisner and G. Satta. Efficient parsing for bilexical context-free grammars and head-automaton grammars. ACL 37, pp. 457-464, 1999.
[10] D. Klein and C. D. Manning. Parsing with treebank grammars: Empirical bounds, theoretical models, and the structure of the Penn treebank. ACL 39/EACL 10, pp. 330-337, 2001.
[11] J. K. Baker. Trainable grammars for speech recognition. D. H. Klatt and J. J. Wolf, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, pp. 547-550, 1979.
[12] J. Lafferty, D. Sleator, and D. Temperley. Grammatical trigrams: A probabilistic model of link grammar. Proc. AAAI Fall Symposium on Probabilistic Approaches to Natural Language, 1992.
[13] D. Klein and C. D. Manning. Parsing and hypergraphs. Proceedings of the 7th International Workshop on Parsing Technologies (IWPT-2001), 2001.
[14] E. Charniak, S. Goldwater, and M. Johnson. Edge-based best-first chart parsing. Proceedings of the Sixth Workshop on Very Large Corpora, pp. 127-133, 1998.
[15] D. M. Magerman. Statistical decision-tree models for parsing. ACL 33, pp. 276-283, 1995.
[16] M. J. Collins. A new statistical parser based on bigram lexical dependencies. ACL 34, pp. 184-191, 1996.
[17] M. Collins. Discriminative reranking for natural language parsing. ICML 17, pp. 175-182, 2000.
[18] J. Goodman. Parsing algorithms and metrics. ACL 34, pp. 177-183, 1996.
Developing Topography and Ocular Dominance
Using two aVLSI Vision Sensors and a
Neurotrophic Model of Plasticity
Terry Elliott
Dept. Electronics & Computer Science
University of Southampton
Highfield
Southampton, SO17 1BJ
United Kingdom
[email protected]
Jörg Kramer
Institute of Neuroinformatics
University of Zürich and ETH Zürich
Winterthurerstrasse 190
8057 Z?urich
Switzerland
[email protected]
Abstract
A neurotrophic model for the co-development of topography and ocular
dominance columns in the primary visual cortex has recently been proposed. In the present work, we test this model by driving it with the
output of a pair of neuronal vision sensors stimulated by disparate moving patterns. We show that the temporal correlations in the spike trains
generated by the two sensors elicit the development of refined topography and ocular dominance columns, even in the presence of significant
amounts of spontaneous activity and fixed-pattern noise in the sensors.
1 Introduction
A large body of evidence suggests that the development of the retinogeniculocortical pathway, which leads in higher vertebrates to the emergence of eye-specific laminae in the
lateral geniculate nucleus (LGN), the formation of ocular dominance columns (ODCs) in
the striate cortex and the establishment of retinotopic representations in both structures, is a
competitive, activity-dependent process (see Ref. [1] for a review). Experimental findings
indicate that at least in the case of ODC formation, this competition may be mediated by
retrograde neurotrophic factors (NTFs) [2]. A computational model for synaptic plasticity
based on this hypothesis has recently been proposed [1]. This model has successfully been
applied to the development and refinement of retinotopic representations in the LGN and
striate cortex, and to the formation of ODCs in the striate cortex due to competition between the eye-specific laminae of the LGN. In this model, the activity within the afferent
cell sheets was simulated either as interocularly uncorrelated spontaneous retinal waves or,
as a coarse model of visually evoked activity, as interocularly correlated Gaussian noise.
Gaussian noise, however, is not a realistic model of evoked retinal activity, nor do the interocular correlations introduced adequately capture the correlations that arise due to the
spatial disparity between the two retinas.
For this study, we tested the ability of the plasticity model to generate topographic refinement and ODCs in response to afferent activity provided by a pair of biologically-inspired
artificial vision sensors. These sensors capture some of the properties of biological retinas.
They convert optical images into analog electrical signals and perform brightness adaptation and logarithmic contrast-encoding. Their output is encoded in asynchronous, binary
spike trains, as provided by the retinal ganglion cells of biological retinas. Mismatch of
processing elements and temporal noise are a natural by-product of biological retinas and
such vision sensors alike. One goal of this work was to determine the robustness of the
model towards such nonidealities. While the refinement of topography from the temporal
correlations provided by one vision sensor in response to moving stimuli has already been
explored [3], the present work focuses on the co-development of topography and ODCs
in response to the correlations between the signals from two vision sensors stimulated by
disparate moving bars. In particular, the dependence of ODC formation on disparity and
noise is considered.
2 Vision Sensor
The vision sensor used in the experiments is a two-dimensional array of 16 × 16 pixels
fabricated with standard CMOS technology, where each pixel performs a two-way rectified
temporal high-pass filtering operation on the incoming visual signal in the focal plane [4, 5].
The sensor adapts to background illuminance and responds to local positive and negative
illuminance transients at separately coded terminals. The transients are converted into a
stream of asynchronous binary pulses, which are multiplexed onto a common, arbitrated
address bus, where the address encodes the location of the sending pixel and the sign of
the transient. In the absence of any activity on the communication bus for a few hundred
milliseconds the bus address decays to zero. A block diagram of a reduced-resolution array of pixels with peripheral arbitration and communication circuitry is shown in Fig. 1. Handshaking with external data acquisition circuitry is provided via the request (REQ) and acknowledge (ACK) terminals.
Figure 1: Block diagram of the sensor architecture (reduced resolution).
If the array is used for imaging purposes under constant or slowly-varying ambient lighting conditions, it only responds to boundaries or edges of moving objects or shadows of
sufficient contrast and not to static scenes. Depending on the settings of different bias controls the imager can be used in different modes. Separate gain controls for ON and OFF
transients permit the imager to respond to only one type of transient or to both types with
adjustable weighting. Together with these gain controls, a threshold bias sets the contrast
response threshold and the rate of spontaneous activity. For sufficiently large thresholds,
spontaneous activity is completely suppressed. Another bias control sets a refractory period that limits the maximum spike rate of each pixel. For short refractory periods, each
contrast transient at a given pixel triggers a burst of spikes; for long refractory periods, a
typical transient only triggers a single spike in the pixel, resulting in a very efficient, one-bit
edge coding.
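The per-pixel behaviour described in this section can be sketched in a few lines: a temporal high-pass (difference) filter on the encoded intensity, a contrast threshold, and a refractory period that limits the spike rate. The function name, threshold and refractory values below are illustrative assumptions, not taken from the chip.

```python
# Sketch of one pixel's event generation: a temporal high-pass filter on the
# encoded intensity, a contrast threshold, and a refractory period that limits
# how often the pixel may spike. All names and values are illustrative.

def pixel_events(log_intensity, threshold=0.2, refractory=3):
    """Return (time, sign) events for a sequence of log-intensity samples."""
    events = []
    last_spike = -refractory  # allow a spike at the very first sample
    for t in range(1, len(log_intensity)):
        diff = log_intensity[t] - log_intensity[t - 1]  # high-pass: temporal difference
        if abs(diff) >= threshold and t - last_spike >= refractory:
            events.append((t, 1 if diff > 0 else -1))   # ON / OFF transient
            last_spike = t
    return events

# A step edge produces a single ON event; the refractory period suppresses bursts.
signal = [0.0] * 5 + [1.0] * 5
print(pixel_events(signal))  # -> [(5, 1)]
```

With a long refractory period, a sustained contrast ramp still yields only sparse spikes, which corresponds to the efficient one-bit edge coding described above.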
3 Sensor-Computer Interface
The two vision sensors were coupled to a computer via two parallel ports. The handshaking
terminals of each chip were shorted, so that the sensors could operate at their own speed
without being artificially slowed down by the computer. This avoided the risk of overloading the multiplexer and thereby distorting the data. Furthermore, this scheme was simpler
to implement than a handshaking scheme. The lack of synchronization entailed several
problems: missing out on events, reading events more than once, and reading spurious zero
addresses in the absence of recent activity in the sensors. The first two problems could
satisfactorily be solved by choosing a long refractory period, so that each moving-edge
stimulus only evoked a single spike per pixel. For a typical stimulus this resulted in interspike intervals on the multiplexed bus of a few milliseconds, which made it unlikely that
events would be missed. Furthermore, the refractory period prevented any given pixel from
spiking more than once in a row in response to a moving edge, so that multiple reads of
the same address were always due to the same event being read several times and therefore
could be discarded. The ambiguity of the (0,0) address readings, namely whether such a
reading meant that the (0,0) pixel was active or that the address on the bus had decayed to
zero due to lack of activity, could not be resolved. It was therefore decided to ignore the
(0,0) address and to exclude the (0,0) cell from each map. Using this strategy it was found
that the data read by the computer reflected the optical stimuli with a small error rate.
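The read-out clean-up described above, collapsing repeated reads of the same address and ignoring the ambiguous (0,0) address, can be sketched as a small filter over the polled address stream; the function and the address tuples are illustrative.

```python
# Sketch of the polling-side clean-up described above: consecutive repeated
# reads of one address are treated as re-reads of the same event (the long
# refractory period guarantees a pixel cannot genuinely fire twice in a row),
# and the ambiguous (0,0) address is ignored entirely. Names are illustrative.

def clean_event_stream(raw_addresses):
    """Collapse consecutive duplicate reads and drop (0,0) readings."""
    events = []
    previous = None
    for addr in raw_addresses:
        if addr == (0, 0):        # real event or decayed bus? Cannot tell: ignore
            previous = addr
            continue
        if addr != previous:      # a consecutive repeat is the same event re-read
            events.append(addr)
        previous = addr
    return events

raw = [(3, 4), (3, 4), (0, 0), (0, 0), (5, 1), (5, 1), (3, 4)]
print(clean_event_stream(raw))  # -> [(3, 4), (5, 1), (3, 4)]
```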
4 Visual Stimulation
Two separate windows within the display of the LCD monitor of the computer used for data
acquisition were each imaged onto one of the vision chips via a lens to provide the optical
stimulation. The stimuli in each window consisted in eight separate sequences of images
that were played without interruption, each new sequence being selected randomly after the
completion of the previous one. Each sequence simulated a white bar sweeping across a
black background. The sequences were distinguished only by the orientation and direction
of motion of the bar, while the speed, as measured perpendicularly to the bar?s orientation,
was constant and identical for each sequence. The bar could have four different orientations, aligned to the rows or columns of the vision sensor or to one of the two diagonals,
and move in either direction. The bars had a finite width of 20 pixels on the LCD display,
corresponding to about 8 pixel periods on the image sensors, and they were sufficiently long to fill the field of view of the chips entirely. The displays in the two windows stimulating the two chips were identical save for a fixed relative displacement between the bars
along the direction of motion during the entire run, simulating the disparity seen by two
eyes looking at the same object. The used displacements were 0, 10, and 15 pixels on the
LCD display, corresponding to no disparity and disparities of 1/2 the bar width (4 sensor
pixels) and 3/4 of the bar width (6 sensor pixels), respectively. The speed of the bar was
largely unimportant, because the output spikes of the chip were sampled into bins of fixed
sizes, rather than bins representing fixed time windows. The chosen white bar on a black
background stimulated the vision sensor with a leading ON edge and a trailing OFF edge.
However, because the spurious activity of the chip, mainly in the form of crosstalk, was
increased if both ON and OFF responses were activated and because we required only the
response to one edge type for this work, the ON responses from the chip were suppressed.
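The stimulus generation can be sketched as simple frame synthesis: a white bar of fixed width sweeping over a black background, with the second window shifted by a fixed disparity along the direction of motion. The frame size, bar width and function names below are illustrative.

```python
# Sketch of the sweeping-bar stimulus: a white (1) bar of fixed width moving
# over a black (0) background, with the second window shifted by a fixed
# disparity along the direction of motion. Sizes and names are illustrative.

def bar_frame(position, size=16, bar_width=8):
    """One frame: `size` rows of 0/1 pixels with a vertical bar at `position`."""
    row = [1 if position <= x < position + bar_width else 0 for x in range(size)]
    return [list(row) for _ in range(size)]

def stereo_sequence(disparity, size=16, bar_width=8):
    """Paired frames for the two windows; identical up to a horizontal shift."""
    frames = []
    for pos in range(-bar_width, size + 1):
        frames.append((bar_frame(pos, size, bar_width),
                       bar_frame(pos - disparity, size, bar_width)))
    return frames

# The right window lags the left by `disparity` pixels along the motion axis.
left, right = stereo_sequence(disparity=4)[10]
print(left[0])
print(right[0])
```

At zero disparity the two windows are identical frame by frame, which is the condition under which ODC formation is suppressed in the experiments below.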
5 Neurotrophic Model of Plasticity
Let the letters i and j label afferent cells within an afferent sheet, letters m and n label the afferent sheets, and letters x and y label target cells. The two afferent sheets represent the two chips' arrays of pixels and are therefore 16 × 16 square arrays of cells. For convenience, the target array is also a 16 × 16 square array of cells. Let a^m_i denote an afferent cell's activity. For each time step of simulated development, we capture a fixed number of spikes from each chip. A pixel that has not spiked gives a^m_i = 0, while one that has gives a^m_i = 1. If s^m_{ix} represents the number of synapses projected from cell i in afferent sheet m to target cell x, then s^m_{ix} evolves according to the equation
\Delta s^m_{ix} = \varepsilon\, s^m_{ix}\,\bigl(a + f(a^m_i)\bigr)\left[\sum_y \Delta(x,y)\,\frac{T_0 + T_1\,\bar{a}_y}{\sum_n \sum_j \bigl(a + f(a^n_j)\bigr)\, s^n_{jy}} - 1\right], \qquad \bar{a}_y = \frac{\sum_n \sum_j a^n_j\, s^n_{jy}}{\sum_n \sum_j s^n_{jy}}    (1)
Here, T_0 and T_1 represent, respectively, an activity-independent and a maximum activity-dependent release of NTF from target cells; the parameter a represents a resting NTF uptake capacity by afferent cells; Δ(x, y) is a function characterising NTF diffusion between target cells, which we take for convenience to be a Gaussian of width σ. The function f(a), which depends on the average afferent activity ā, is a simple model for the number of NTF receptors supported by an afferent cell. The parameter ε sets the overall rate of development. Consistent with previous work [3], the parameters T_0, T_1, a, σ and ε are set to the values used there.
Although this model appears complex, it can be shown to be equivalent to a non-linear
Hebbian rule with competition implemented via multiplicative synaptic normalisation [6].
For a full discussion, derivation and justification of the model, see Ref. [7].
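A minimal sketch of the equivalent formulation mentioned above, a Hebbian increment followed by multiplicative normalisation of each target cell's afferent synapses, is given below. The simple product rule and the normalisation to unit total are illustrative simplifications, not the model's exact non-linearity.

```python
# Minimal sketch of the equivalent formulation: a Hebbian weight increment
# followed by multiplicative normalisation, so the total synaptic strength
# onto each target cell is conserved. The plain product rule and unit-sum
# normalisation are illustrative simplifications of the actual model.

def hebbian_step(weights, afferent_activity, target_activity, rate=0.1):
    """weights[j][x]: synapses from afferent cell j to target cell x."""
    n_aff, n_tgt = len(weights), len(weights[0])
    # Hebbian increment: co-active pre- and post-synaptic cells strengthen.
    for j in range(n_aff):
        for x in range(n_tgt):
            weights[j][x] += rate * afferent_activity[j] * target_activity[x]
    # Multiplicative normalisation: rescale each target cell's afferent
    # synapses so that their sum is fixed (here, normalised to 1).
    for x in range(n_tgt):
        total = sum(weights[j][x] for j in range(n_aff))
        for j in range(n_aff):
            weights[j][x] /= total
    return weights

w = [[0.5, 0.5], [0.5, 0.5]]
w = hebbian_step(w, afferent_activity=[1.0, 0.0], target_activity=[1.0, 1.0])
# The active afferent gains synapses at the silent afferent's expense.
print([round(w[j][0], 3) for j in range(2)])  # -> [0.545, 0.455]
```

The normalisation step is what implements competition: an afferent can only gain synapses onto a target cell if other afferents lose them.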
Both afferent sheets initially project roughly equally to all cells in the target sheet.
The initial pattern of connectivity between the sheets is established following Goodhill's method [8]. For a given afferent cell, let d be the distance between some target cell and the target cell to which the afferent cell would project were topography perfect; let d_max be the maximum such distance. Then the number of synapses projected by the afferent cell to this target cell is initially set to be proportional to

(1 - d/d_{\max})(1 - c) + c\,\nu,    (2)

where ν ∈ [0, 1] is a randomly selected number for each such pair of afferent and target cells. The parameter c ∈ [0, 1] determines the quality of the projections, with c = 0 giving initially greatest topographical bias, so that an afferent cell projects maximally to its topographically preferred target cell, and c = 1 giving initially completely random projections. Here we set c to a fixed intermediate value; the impact of decreasing c on the final structure of the topographic map has been thoroughly explored elsewhere [3].
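The initialisation can be sketched as follows, assuming a mixture of a linear topographic term and a uniform random term weighted by c; the exact mixture used here is an assumption for illustration.

```python
import random

# Sketch of the initial connectivity: each afferent's projection mixes a
# topographic term, maximal at its topographically preferred target cell,
# with a uniform random term, weighted by c in [0, 1]. The exact mixture
# is an illustrative assumption, not necessarily the paper's formula.

def initial_synapses(d, d_max, c, rng=random.random):
    """Initial synapse number (up to a proportionality constant)."""
    nu = rng()  # fresh random draw for each afferent/target pair
    return (1.0 - d / d_max) * (1.0 - c) + c * nu

# c = 0: purely topographic, strength falls off linearly with distance.
print(initial_synapses(0.0, 10.0, c=0.0))   # -> 1.0
print(initial_synapses(10.0, 10.0, c=0.0))  # -> 0.0
```

At c = 1 the topographic term vanishes and every afferent/target pair receives an independent uniform random strength.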
The topographic representation of an afferent sheet on the target sheet is depicted using
standard methods [1, 8]: the centres of mass of afferent projections to all target cells are
calculated, and these are then connected by lines that preserve the neighbourhood relations
among the target cells.
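The centre-of-mass computation behind this depiction can be sketched directly; the function name and the toy projection are illustrative.

```python
# Sketch of the map depiction described above: for one target cell, the
# centre of mass of its afferent projection within the afferent sheet.

def centre_of_mass(projection):
    """projection[i][j]: synapse number from afferent (i, j) to one target cell."""
    total = 0.0
    ci = cj = 0.0
    for i, row in enumerate(projection):
        for j, s in enumerate(row):
            total += s
            ci += i * s
            cj += j * s
    return (ci / total, cj / total)

# Equal weight on afferents (0, 0) and (2, 2) gives a centre at (1, 1).
proj = [[1.0, 0, 0], [0, 0, 0], [0, 0, 1.0]]
print(centre_of_mass(proj))  # -> (1.0, 1.0)
```

Connecting the centres of mass of neighbouring target cells then reveals how smooth, or how disrupted, the topographic map is.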
6 Results
For each iteration step of the algorithm a fixed number of spikes was captured. The bin
size determines the correlation space constants of the afferent cell sheets and therefore
influences the final quality of the topographic mapping [3]. Unless otherwise noted the bin
size was 32 per sensor, which corresponds to about two successive pixel rows stimulated
by a moving contrast boundary. The presented simulations were performed for 15,000 to
20,000 iteration steps, sufficient for map development to be largely complete.
Figure 2: Distribution of ODCs in the target cell sheet for different disparities between the
bar stimuli driving the two afferent sheets. The gray level of each target cell indicates the
relative strengths of projections from the two afferent sheets, where "black" represents one and "white" the other afferent sheet. (a) No disparity; (b) disparity: 50% of bar width (4
sensor pixels); (c) disparity: 75% of bar width (6 sensor pixels).
Several runs were performed for the three different disparities of the stimuli presented to
the two sensors. Since the results for a given disparity were all qualitatively similar, we
only show the results of one representative run for each value. The distribution of the
formed ODCs in the target sheet is shown in Fig. 2, where the shading of each neuron
indicates the relative numbers of projections from the two afferent sheets. In the absence
of any disparity the formation of ODCs was suppressed. The residual ocular dominance
modulations may be attributed to a small misalignment of the two chips with respect to
the display. With the introduction of a disparity a very clear structure of ODCs emerges.
The distribution of ODCs strongly depends on the disparity and does not vary significantly
between runs for a given disparity. With increasing disparity the boundaries between ODCs
become more distinct [9, 10]. The obtained maps are qualitatively similar to those obtained
with simulated afferent inputs [1].
Figure 3: Power spectra of the spatial frequency distribution of ODCs in the target cell
sheet for different disparities and data sets. A solid line denotes data with a disparity of 75% of the bar width (6 sensor pixels); a dashed line denotes a disparity of 50% of the bar width (4 sensor pixels); a dotted line denotes no disparity.
The power spectra obtained from two-dimensional Fourier transforms of the ODC distributions, represented in Fig. 3, show that the spatial frequency content of the ODCs is a
function of disparity, consistent with experimental findings in the cat [8, 11, 12, 13], and
that its variability between different runs of the same disparity is significantly smaller than
between different disparities. The principal spatial frequency along each dimension of the
target sheet is mainly determined by the NTF diffusion parameter [1] and the disparity. For
the NTF diffusion parameter used here, it ranges between two and four cycles; increasing (decreasing) the diffusion parameter decreases (increases) the spatial frequency. The
heights of the peaks show the degree of segregation, which increases with disparity, as
already mentioned.
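The spectral analysis described here can be sketched with a two-dimensional FFT; the synthetic striped map and the normalisation choice below are illustrative.

```python
import numpy as np

# Sketch of the analysis above: the two-dimensional Fourier power spectrum
# of an ocular dominance map, whose dominant peak gives the principal ODC
# spatial frequency. The synthetic stripe pattern is illustrative.

def odc_power_spectrum(od_map):
    """od_map: 2-D array of ocular dominance values, e.g. in [-1, 1]."""
    centred = od_map - od_map.mean()           # remove the DC component
    power = np.abs(np.fft.fft2(centred)) ** 2
    return power / power.sum()                 # normalised power per frequency

# Stripes with a period of 4 pixels on a 16-pixel sheet peak at frequency 4.
stripes = np.tile([1.0, 1.0, -1.0, -1.0], 4)[None, :] * np.ones((16, 1))
power = odc_power_spectrum(stripes)
peak = np.unravel_index(np.argmax(power), power.shape)
print(peak)  # -> (0, 4): constant along rows, period-4 stripes along columns
```

Sharper ODC boundaries concentrate more of the normalised power at the peak, which is how the degree of segregation shows up in Fig. 3.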
Figure 4: Topographic mapping between afferent sheets and target sheet for different disparities between the stimuli driving the two afferent sheets. The data are from the same
runs as the ODC data of Fig. 2. (a) No disparity; (b) disparity: 50% of bar width (4 sensor
pixels); (c) disparity: 75% of bar width (6 sensor pixels).
The resulting topographic maps for the same runs are shown in Fig. 4. In the absence of
disparity the topographic map is almost perfect, with nearly one-to-one mapping between
the afferent sheets and the target sheet, apart from remaining edge effects. However, disruptions appear at ODC boundaries in the runs with disparate stimuli, these disruptions
becoming more distinct with increasing disparity due to the increasing sharpness of ODC
boundaries.
The data presented above were obtained under suppression of spontaneous firing, so that
each pixel generated exactly one spike in response to each moving bright-to-dark contrast
boundary with an error rate of about 5%. By turning up the spontaneous firing rate we can
test the robustness of the system to increased noise levels. We set the spontaneous firing
rate to approximately 50%, so that roughly half of all spikes are not associated with an
edge event. We also increased the bin size from 32 to 48 spikes per chip to compensate
for the reduced intraocular correlations as a result of increased noise [3]. Fig. 5 shows a
typical pattern of ODCs and the corresponding topographic map in the presence of 50%
spontaneous activity. Although there are some distortions in the topographic map, in general it compares very favourably to maps developed in the absence of spontaneous activity.
At an approximately 60% level of noise major disruptions in topographic map formation
and attenuated ODC development are exhibited. Increasing the level of noise still further
causes a complete breakdown of topographic and ODC map formation (data not shown).
Figure 5: The pattern of ODCs and the topographic map that develop in the presence of
approximately 50% noise. (a) The OD map; (b) the topographic map. The disparity is 50%
of the bar width (4 sensor pixels).
7 Discussion
The refinement of topography and the development of ODCs can be robustly simulated
with the considered hybrid system, consisting of an integrated analog visual sensing system
that captures some of the key features of retinal processing and a mathematical model
of activity-dependent synaptic competition. Despite the different structure of the input
stimuli and the different noise characteristics of the real sensors from those used in the
pure simulations [1], the results are comparable.
Several parameters of the vision sensors, such as refractory period and spontaneous firing
rate, can be continuously varied with input bias voltages. This facilitates the evaluation of
the performance of the model under different input conditions. The sensors were operated
at long refractory periods, so that each pixel responded with a single spike to a contrast
boundary moving across it. In this non-bursting mode the coding of the stimulus is very
sparse, which makes the topographic refinement process more efficient [3].
The noise induced by the vision sensors manifests itself in occasionally missing responses
of some pixels to a moving edge, in temporal jitter and a tunable level of spontaneous activity. With an optimal suppression of spontaneous firing, the error rate (number of missed
and spurious events divided by total number of events) can be reduced to approximately
5%. Increased spontaneous activity levels show a strongly anisotropic distribution across
the sensing arrays because of the inherent fixed-pattern noise present in the integrated sensors due to random mismatches in the fabricated circuits. This type of inhomogeneity has
not been modeled in previous work. Spontaneous activity and mismatches between cells
with the same functional role are prominent features of biological neural systems and biological information processing systems therefore have to deal with these nonidealities. The
plasticity algorithm proves to be sufficiently robust with respect to these types of noise.
The developed ODC and topographic maps depend quite strongly on the disparity between
the two sensors. At zero disparity, the formation of ODCs is practically suppressed and
topography becomes very smooth. As the disparity increases, the period of the resulting
ODCs increases, consistent with experimental results in the cat [8, 11, 12, 13], and, as
expected, the degree of segregation also increases [9, 10]. In the presence of high levels
of spontaneous activity in the afferent pathways, with as much as half of all spikes not
being stimulus?related, the maps continue to exhibit well developed ODCs and topography.
Although there are indications of distortions in the topographic maps in the presence of
approximately 50% spontaneous activity, the maps remain globally well structured. As
spontaneous activity is increased further, map development becomes increasingly disrupted
until it breaks down completely.
8 Conclusions
We examined the refinement of topographic mappings and the formation of ocular dominance columns by coupling a pair of integrated vision sensors to a neurotrophic model
of synaptic plasticity. We have shown that the afferent input from real sensors looking at
moving bar stimuli yields similar results as simulated partially randomized input and that
these results are insensitive to the presence of significant noise levels.
Acknowledgments
Tragically, Jörg Kramer died in July, 2002. TE dedicates this work to his memory.
TE thanks the Royal Society for the support of a University Research Fellowship. JK was supported
in part by the Swiss National Foundation Research SPP grant. We thank David Lawrence of the
Institute of Neuroinformatics for his invaluable help with interfacing the chip to the PC.
References
[1] T. Elliott and N. R. Shadbolt, "A neurotrophic model of the development of the retinogeniculocortical pathway induced by spontaneous retinal waves," Journal of Neuroscience, vol. 19, pp. 7951-7970, 1999.
[2] A. K. McAllister, L. C. Katz, and D. C. Lo, "Neurotrophins and synaptic plasticity," Annual Review of Neuroscience, vol. 22, pp. 295-318, 1999.
[3] T. Elliott and J. Kramer, "Coupling an aVLSI neuromorphic vision chip to a neurotrophic model of synaptic plasticity: the development of topography," Neural Computation, vol. 14, pp. 2353-2370, 2002.
[4] J. Kramer, "An integrated optical transient sensor," IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, 2002, submitted.
[5] J. Kramer, "An on/off transient imager with event-driven, asynchronous read-out," in Proc. 2002 IEEE Int. Symp. on Circuits and Systems, Phoenix, AZ, May 2002, vol. II, pp. 165-168, IEEE Press.
[6] T. Elliott and N. R. Shadbolt, "Multiplicative synaptic normalization and a nonlinear Hebb rule underlie a neurotrophic model of competitive synaptic plasticity," Neural Computation, vol. 14, pp. 1311-1322, 2002.
[7] T. Elliott and N. R. Shadbolt, "Competition for neurotrophic factors: Mathematical analysis," Neural Computation, vol. 10, pp. 1939-1981, 1998.
[8] G. J. Goodhill, "Topography and ocular dominance: a model exploring positive correlations," Biological Cybernetics, vol. 69, pp. 109-118, 1993.
[9] D. H. Hubel and T. N. Wiesel, "Binocular interaction in striate cortex of kittens reared with artificial squint," Journal of Neurophysiology, vol. 28, pp. 1041-1059, 1965.
[10] C. J. Shatz, S. Lindström, and T. N. Wiesel, "The distribution of afferents representing the right and left eyes in the cat's visual cortex," Brain Research, vol. 131, pp. 103-116, 1977.
[11] S. Löwel, "Ocular dominance column development: Strabismus changes the spacing of adjacent columns in cat visual cortex," Journal of Neuroscience, vol. 14, pp. 7451-7468, 1994.
[12] G. J. Goodhill and S. Löwel, "Theory meets experiment: correlated neural activity helps determine ocular dominance column periodicity," Trends in Neurosciences, vol. 18, pp. 437-439, 1995.
[13] S. B. Tieman and N. Tumosa, "Alternating monocular exposure increases the spacing of ocularity domains in area 17 of cats," Visual Neuroscience, vol. 14, pp. 929-938, 1997.
1,459 | 2,327 | String Kernels, Fisher Kernels and Finite
State Automata
John Shawe-Taylor
Alexei Vinokourov
Department of Computer Science
Royal Holloway, University of London
Email: { craig, j st, alexei }?lcs. rhul. ac. uk
Craig Saunders
Abstract
In this paper we show how the generation of documents can be
thought of as a k-stage Markov process, which leads to a Fisher kernel from which the n-gram and string kernels can be re-constructed.
The Fisher kernel view gives a more flexible insight into the string
kernel and suggests how it can be parametrised in a way that reflects the statistics of the training corpus. Furthermore, the probabilistic modelling approach suggests extending the Markov process to consider sub-sequences of varying length, rather than the
standard fixed-length approach used in the string kernel. We give
a procedure for determining which sub-sequences are informative
features and hence generate a Finite State Machine model, which
can again be used to obtain a Fisher kernel. By adjusting the
parametrisation we can also influence the weighting received by the
features . In this way we are able to obtain a logarithmic weighting
in a Fisher kernel. Finally, experiments are reported comparing
the different kernels using the standard Bag of Words kernel as a
baseline.
1
Introduction
Recently the string kernel [6] has been shown to achieve good performance on textcategorisation tasks . The string kernel projects documents into a feature space
indexed by all k-tuples of symbols for some fixed k. The strength of the feature indexed by the k-tuple $u = (u_1, \ldots, u_k)$ for a document d is the sum over all occurrences of u as a subsequence (not necessarily contiguous) in d, where each
occurrence is weighted by an exponentially decaying function of its length in d. This
naturally extends the idea of an n-gram feature space where the only occurrences
considered are contiguous ones.
The dimension of the feature space and the non-sparsity of even modestly sized documents make a direct computation of the feature vector for the string kernel
infeasible. There is, however, a dynamic programming recursion that enables the
semi-efficient evaluation of the kernel [6]. String kernels are apparently making no
use of the semantic prior knowledge that the structure of words can give and yet
they have been used with considerable success.
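To make the feature space concrete, the definition above can be sketched directly in code. This is an illustrative brute-force enumeration of weighted subsequence occurrences (it mirrors the definition only; it is not the efficient dynamic-programming recursion of [6], and is usable only on tiny strings):

```python
from collections import defaultdict
from itertools import combinations

def string_kernel_features(d, k, lam):
    """Feature map of the string kernel: for each length-k subsequence u of d
    (not necessarily contiguous), sum lam**l(i) over all index vectors i with
    d[i] == u, where l(i) = i_k - i_1 + 1 is the span of the occurrence."""
    phi = defaultdict(float)
    for idx in combinations(range(len(d)), k):
        u = "".join(d[i] for i in idx)
        phi[u] += lam ** (idx[-1] - idx[0] + 1)
    return phi

def string_kernel(d1, d2, k, lam):
    """Inner product of the two (sparse) feature vectors."""
    phi1 = string_kernel_features(d1, k, lam)
    phi2 = string_kernel_features(d2, k, lam)
    return sum(v * phi2[u] for u, v in phi1.items() if u in phi2)
```

For example, with k = 2 the documents "cat" and "cart" share the features ca, at and ct, giving k("cat", "cart") = lam^4 + lam^5 + lam^7.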
The aim of this paper is to place the n-gram and string kernels in the context of
probabilistic modelling of sequences, showing that they can be viewed as Fisher kernels of a Markov generation process. This immediately suggests ways of introducing
weightings derived from refining the model based on the training corpus.
Furthermore, this view also suggests extending consideration to subsequences of
varying lengths in the same model. This leads to a Finite State Automaton again
inferred from the data. The refined probabilistic model that this affords gives rise
to two Fisher kernels depending on the parametrisation that is chosen, if we take
the Fisher information matrix to be the identity.
We give experimental evidence suggesting that the new kernels are capturing useful
properties of the data while overcoming the computational difficulties of the original
string kernel.
2 The Fisher View of the n-gram and String kernels
In this section we show how the string kernel can be thought of as a type of Fisher
kernel [2] where the fixed-length subsequences used as the features in the string
kernel correspond to the parameters for building the model. In order to give some
insight into the kernel we first give a Fisher formulation of the n-gram kernel (i.e. the
string kernel which considers only contiguous sequences), and then extend this to
the full string kernel.
Let us assume that we have some document d of length s which is a sequence of symbols belonging to some alphabet A, i.e. $d_i \in A$, $i = 1, \ldots, s$. We can consider document d as being generated by a k-stage Markov process. According to this view, for sequences $u \in A^{k-1}$ we can define the probability of observing a symbol x after a sequence u as $p_{u \to x}$. Sequences of k symbols therefore index the parameters of our model. The probability of a document d being generated by the model is therefore
$$P(d) = \prod_{j=k}^{|d|} p_{d[j-k+1:j-1] \to d_j},$$
where we use the notation $d[i:j]$ to denote the sequence $d_i d_{i+1} \cdots d_j$. Now taking the derivative of the log-probability:
$$\frac{\partial \ln P(d)}{\partial p_{u \to x}}
= \frac{\partial \ln \prod_{j=k}^{|d|} p_{d[j-k+1:j-1] \to d_j}}{\partial p_{u \to x}}
= \sum_{j=k}^{|d|} \frac{\partial \ln p_{d[j-k+1:j-1] \to d_j}}{\partial p_{u \to x}}
= \frac{\mathrm{tf}(ux, d)}{p_{u \to x}}, \qquad (1)$$
where $\mathrm{tf}(ux, d)$ is the term frequency of $ux$ in d, that is the number of times the string $ux$ occurs in d.^1
^1 Since the $p_{u \to x}$ are not independent it is not possible to take the partial derivative of one parameter without affecting the others. However we can approximate our approach: we introduce an extra character c. For each (n-1)-gram u we assign a sufficiently small probability to $p_{u \to c}$ and change the other $p_{u \to x}$ to $\tilde{p}_{u \to x} = p_{u \to x}(1 - p_{u \to c})$. We now replace each occurrence of $p_{u \to c}$ in P(d) by $1 - \sum_{a \in A \setminus \{c\}} \tilde{p}_{u \to a}$. Thus, since uc never occurs in d and $\tilde{p}_{u \to x} \approx p_{u \to x}$, the $u \to x$ Fisher score entry for a document d becomes
$$\frac{\mathrm{tf}(ux, d)}{\tilde{p}_{u \to x}} - \frac{\mathrm{tf}(uc, d)}{p_{u \to c}} \approx \frac{\mathrm{tf}(ux, d)}{p_{u \to x}}.$$
The Fisher kernel is subsequently defined to be
$$k(d, d') = U_d^\top I^{-1} U_{d'},$$
where $U_d$ is the Fisher score vector with ux-component $\frac{\partial \ln P(d)}{\partial p_{u \to x}}$, and $I = E_d[U_d U_d^\top]$.
It has become traditional to set the matrix I to be the identity when defining a Fisher kernel, though this undermines the very satisfying property of the pure definition that it is independent of the parametrisation. We will follow this same route mainly to reduce the complexity of the computation. We will, however, subsequently consider alternative parametrisations.
Different choices of the parameters $p_{u \to x}$ give rise to different models and hence different kernels. It is perhaps surprising that the n-gram kernel is recovered (up to a constant factor) if we set $p_{u \to x} = |A|^{-1}$ for all $u \in A^{n-1}$ and $x \in A$, that is the least informative parameter setting. This follows since the feature vector of a document d has entries
$$\phi_{ux}(d) = \frac{\mathrm{tf}(ux, d)}{p_{u \to x}} = |A| \, \mathrm{tf}(ux, d).$$
We therefore recover the n-gram kernel as the Fisher kernel of a model which uses a uniform distribution for generating documents.
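A minimal sketch of this observation in code (the function names are ours, not from the paper): under the uniform model, the Fisher score of each n-gram is just |A| times its term frequency, so the Fisher kernel is the plain n-gram kernel scaled by the constant |A|^2:

```python
from collections import Counter

def ngram_tf(d, n):
    """Term frequencies of all contiguous n-grams of document d."""
    return Counter(d[i:i + n] for i in range(len(d) - n + 1))

def fisher_ngram_kernel(d1, d2, n, alphabet_size):
    """Fisher kernel of the n-stage Markov model with uniform transitions
    p_{u->x} = 1/|A|: each feature is tf(ux, d) / (1/|A|) = |A| * tf(ux, d),
    i.e. the n-gram kernel scaled by |A|**2."""
    tf1, tf2 = ngram_tf(d1, n), ngram_tf(d2, n)
    return sum((alphabet_size * c) * (alphabet_size * tf2[u]) for u, c in tf1.items())
```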
Before considering how the $p_{u \to x}$ might be chosen non-uniformly we turn our attention briefly to the string kernel.
We have shown that we can view the n-gram kernel as a Fisher kernel. A little
more work is needed in order to place the full string kernel (which considers noncontiguous subsequences) in the same framework.
First we define an index set $S_{k-1,q}$ over all (possibly non-contiguous) subsequences of length k which finish in position q,
$$S_{k-1,q} = \{\mathbf{i} : 1 \le i_1 < i_2 < \cdots < i_{k-1} < i_k = q\}.$$
We now define a probability distribution $P_{S_{k-1,q}}$ over $S_{k-1,q}$ by weighting sequence $\mathbf{i}$ by $\lambda^{l(\mathbf{i})}$, where $l(\mathbf{i}) = i_k - i_1 + 1$ is the length of $\mathbf{i}$, and normalising with a fixed constant C. This may leave some probability unaccounted for, which can be assigned to generating a spurious symbol. We denote by $d[\mathbf{i}]$ the sequence of characters $d_{i_1} d_{i_2} \cdots d_{i_k}$. We now define a text generation model that generates the symbol for position q by first selecting a sequence $\mathbf{i}$ from $S_{k-1,q}$ according to the fixed distribution $P_{S_{k-1,q}}$ and then generates the next symbol based on $p_{d[\mathbf{i}'] \to d_{i_k}}$ for all possible values of $d_q$, where $\mathbf{i}' = (i_1, i_2, \ldots, i_{k-1})$ is the vector $\mathbf{i}$ without its last component. We will refer to this model as the Generalised k-stage Markov model with decay factor $\lambda$. Hence, if we assume that distributions are uniform,
$$\frac{\partial \ln P(d)}{\partial p_{u \to x}}
= \frac{\partial \ln \prod_{j=k}^{|d|} \sum_{\mathbf{i} \in S_{k-1,j}} P_{S_{k-1,j}}(\mathbf{i}) \, p_{d[\mathbf{i}'] \to d_{i_k}}}{\partial p_{u \to x}}
= \sum_{j=k}^{|d|} \frac{\partial \ln \sum_{\mathbf{i} \in S_{k-1,j}} P_{S_{k-1,j}}(\mathbf{i}) \, p_{d[\mathbf{i}'] \to d_{i_k}}}{\partial p_{u \to x}}$$
$$= |A| \sum_{j=k}^{|d|} \sum_{\mathbf{i} \in S_{k-1,j}} P_{S_{k-1,j}}(\mathbf{i}) \, \chi_{ux}(d[\mathbf{i}])
= |A| \, C^{-1} \sum_{j=k}^{|d|} \sum_{\mathbf{i} \in S_{k-1,j}} \lambda^{l(\mathbf{i})} \chi_{ux}(d[\mathbf{i}]),$$
where $\chi_{ux}$ is the indicator function for string $ux$. It follows that the corresponding Fisher features will be the weighted sum over all subsequences with decay factor $\lambda$. In other words we recover the string kernel.
Proposition 1 The Fisher kernel of the generalised k-stage Markov model with decay factor $\lambda$ and constant $p_{u \to x}$ is the string kernel of length k and decay factor $\lambda$.
3 The Finite State Machine Model
Viewing the n-gram and string kernels as Fisher kernels of Markov models means we can view the different sequences of k-1 symbols as defining states, with the next symbol controlling the transition to the next state. We therefore arrive at a finite state automaton with states indexed by $A^{k-1}$ and transitions labelled by the elements of A. Hence, if $u \in A^{k-1}$, the symbol $x \in A$ causes the transition to state $v[2:k]$, where $v = ux$.
One drawback of the string kernel is that the value of k has to be chosen a-priori
and is then fixed. A more flexible approach would be to consider different length
subsequences as features, depending on their frequency. Subsequences that occur
very frequently should be given a low weighting, as they do not contain much information in the same way that stop words are often removed from the bag of words
representation. Rather than downweight such sequences an alternative strategy is
to extend their length. Hence, the 3-gram com could be very frequent and hence
not a useful discriminator. By extending it either backwards or forwards we would
arrive at subsequences that are less frequent and so potentially carry useful information. Clearly, extending a sequence will always reduce its frequency since the
extension could have been made in many distinct ways all of which contribute to
the frequency of the root n-gram.
As this derivation follows more naturally from the analysis of the n-gram kernel
described in Section 2 we will only consider contiguous subsequences also known
as substrings. We begin by introducing the general Finite State Machine (FSM)
model and the corresponding Fisher kernel.
Definition 2 A Finite State Machine model over an alphabet A is a triple $F = (\Sigma, \delta, p)$ where

1. the non-empty set $\Sigma$ of states is a finite subset of $A^*$ closed under taking substrings,

2. the transition function $\delta : \Sigma \times A \to \Sigma$ is defined by
$$\delta(u, x) = v[j : l(v)], \text{ where } v = ux \text{ and } j = \min\{j : v[j : l(v)] \in \Sigma\},$$
if the minimum is defined, otherwise the empty sequence $\epsilon$,

3. for each state u the function p gives a function $p_u$, which is either a distribution over next symbols $p_u(x)$ or the all-one function $p_u(x) = 1$, for $u \in \Sigma$ and $x \in A$.
Given an FSM model $F = (\Sigma, \delta, p)$, to process a document d we start at the state corresponding to the empty sequence $\epsilon$ (guaranteed to be in $\Sigma$ as it is non-empty and closed under taking substrings) and follow the transitions dictated by the symbols of the document. The probability of a document in the model is the product of the values on all of the transitions used:
$$P_F(d) = \prod_{j=1}^{|d|} p_{d[i_j : j-1]}(d_j),$$
where $i_j = \min\{i : d[i : j-1] \in \Sigma\}$. Note that requiring the set $\Sigma$ to be closed under taking substrings ensures that the minimum in the definition of $i_j$ is always defined and that $d[i_j : j]$ does indeed define the state at stage j (this follows from a simple inductive argument on the sequence of states).
If we follow a similar derivation to that given in equation (1) we arrive at the corresponding feature for document d and transition on x from u of
$$\phi_{u,x}(d) = \frac{\mathrm{tf}((u,x), d)}{p_u(x)},$$
where we use $\mathrm{tf}((u,x), d)$ to denote the frequency of the transition on symbol x from a state u with non-unity $p_u$ in document d.
Hence, given an FSM model we can construct the corresponding Fisher kernel feature vector by simply processing the document through the FSM and recording the
counts for each transition. The corresponding feature vector will be sparse relative
to the dimension of the feature space (the total number of transitions in the FSM)
since only those transitions actually used will have non-zero entries. Hence, as for
the bag of words we can create feature vectors by listing the indices of transitions
used followed by their frequency. The number of non-zero features will be at most
equal to the number of symbols in the document.
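The transition function of Definition 2 and the resulting Fisher score vector can be sketched as follows (a toy illustration with hypothetical state sets and probabilities; following the text, only transitions out of states with a non-unity p_u are counted):

```python
def fsm_step(state, x, states):
    """delta(u, x): the longest suffix of v = u + x that lies in the state set.
    Because the state set is closed under substrings and contains the empty
    string, this search always terminates in a valid state."""
    v = state + x
    for j in range(len(v)):        # j = 0 tries the whole of v first
        if v[j:] in states:
            return v[j:]
    return ""                      # the empty sequence epsilon

def fsm_fisher_features(d, states, p):
    """Fisher score vector: phi_{(u,x)}(d) = tf((u,x), d) / p_u(x), where
    p maps each non-unity state u to its next-symbol distribution."""
    phi = {}
    state = ""
    for x in d:
        if state in p:             # states with p_u == 1 contribute nothing
            key = (state, x)
            phi[key] = phi.get(key, 0.0) + 1.0 / p[state][x]
        state = fsm_step(state, x, states)
    return phi
```

Only transitions actually used while processing d appear in the result, matching the sparsity remark above.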
Consider taking $\Sigma = \bigcup_{i=0}^{k-1} A^i$ with all the distributions $p_u$ uniform for $u \in A^{k-1}$ and $p_u \equiv 1$ for other u. In this case we recover the k-gram model and corresponding kernel.
A problem that we have observed when experimenting with the n-gram model is
that if we estimate the frequencies of transitions from the corpus certain transitions
can become very frequent while others from the same state occur only rarely. In
such cases the rare states will receive a very high weighting in the Fisher score
vector. One would like to use the strategy adopted for the idf weighting for the
bag of words kernel, which is often taken to be
$$\mathrm{idf}(i) = \ln\left(\frac{m}{m_i}\right),$$
where m is the number of documents and $m_i$ the number containing term i. The ln ensures that the contrast in weighting is controlled. We can obtain this effect in
the Fisher kernel if we reparametrise the transition probabilities as follows:
$$p_u(x) = \exp(-\exp(-t_u(x))),$$
where $t_u(x)$ is the new parameter. With this parametrisation the derivative of the ln probabilities becomes
$$\frac{\partial \ln p_u(x)}{\partial t_u(x)} = \exp(-t_u(x)) = -\ln p_u(x),$$
as required.
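A quick numerical check of this reparametrisation (illustrative only): the squashing function maps any real t_u(x) into (0, 1), and a finite-difference derivative of ln p_u(x) with respect to t_u(x) recovers -ln p_u(x):

```python
import math

def p_of_t(t):
    """p_u(x) = exp(-exp(-t_u(x))), always strictly between 0 and 1."""
    return math.exp(-math.exp(-t))

def dlogp_dt(t, eps=1e-6):
    """Central-difference estimate of d ln p / d t; analytically this equals
    exp(-t) = -ln p, the idf-style weight described above."""
    return (math.log(p_of_t(t + eps)) - math.log(p_of_t(t - eps))) / (2 * eps)
```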
Although this improves performance, the problem of frequent substrings being uninformative remains. We now consider the idea outlined above of moving to longer
subsequences in order to ensure that transitions are informative.
4 Choosing Features
There is a critical frequency at which the most information is conveyed by a feature.
If it is ubiquitous as we observed above it gives little or no information for analysing
documents. If on the other hand it is very infrequent it again will not be useful
since we are only rarely able to use it. The usefulness is maximal at the threshold
between these two extremes. Hence, we would like to create states that occur not
too frequently and not too infrequently.
A natural way to infer the set of such states is from the training corpus. We select
all substrings that have occurred at least t times in the document corpus, where t
is a small but statistically visible number. In our experiments we took t = 10.
Hence, given a corpus S we create the FSM model $F_t(S)$ with
$$\Sigma_t(S) = \{u \in A^* : u \text{ occurs at least } t \text{ times in the corpus } S\}.$$
Taking this definition of $\Sigma_t(S)$ we construct the corresponding finite state machine model as described in Definition 2. We will refer to the model $F_t$ as the frequent set FSM at threshold t.
We now construct the transition probabilities by processing the corpus through $F_t(S)$, keeping a tally of the number of times each transition is actually used. Typically we initialise the counts to some constant value c and convert the resulting counts into probabilities for the model. Hence, if $f_{u,x}$ is the number of times we leave state u processing symbol x, the corresponding probabilities will be
leave state u processing symbol x, the corresponding probabilities will be
Pu ( X )
+c
= lAic +fu,x
2:: x /EA fu ,x
(2)
l
Note that we will usually exclude from the count the transitions at the beginning
of a document d that start from states d[l : j] for some j ?: O.
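The construction of the frequent state set and the smoothed probabilities of equation (2) can be sketched as follows (an illustrative sketch only; unlike the convention just noted, it counts raw substring occurrences without excluding beginning-of-document transitions):

```python
from collections import Counter

def frequent_states(corpus, t):
    """Sigma_t(S): all strings occurring at least t times as substrings of the
    corpus. The result is closed under taking substrings, since any substring
    of u occurs at least as often as u itself."""
    counts = Counter()
    for d in corpus:
        for i in range(len(d)):
            for j in range(i + 1, len(d) + 1):
                counts[d[i:j]] += 1
    states = {u for u, c in counts.items() if c >= t}
    states.add("")                 # the empty sequence is always a state
    return states

def smoothed_probs(freq, alphabet, c=1.0):
    """Equation (2): p_u(x) = (f_{u,x} + c) / (|A| c + sum_{x'} f_{u,x'})."""
    total = len(alphabet) * c + sum(freq.values())
    return {x: (freq.get(x, 0) + c) / total for x in alphabet}
```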
The following proposition demonstrates that the model has the desired frequency properties for the transitions. We use the notation $u \xrightarrow{x} v$ to indicate the transition from state u to state v on processing symbol x.

Proposition 3 Given a corpus S, the FSM model $F_t(S)$ satisfies the following property. Ignoring transitions from states indexed by d[1 : i] for some document d of the corpus, the frequency counts $f_{u,x}$ for transitions $u \xrightarrow{x} v$ in the corpus S satisfy
$$\sum_{x \in A} f_{u,x} < t|A|$$
for all $u \in \Sigma_t(S)$.
Proof. Suppose that for some state $u \in \Sigma_t(S)$
$$\sum_{x \in A} f_{u,x} \ge t|A|. \qquad (3)$$
This implies that the string u has occurred at least t|A| times at the head of a transition not at the beginning of a document. Hence, by the pigeonhole principle there is a $y \in A$ such that y has occurred t times immediately before one of the transitions in the sum of (3). Note that this also implies that yu occurs at least t times in the corpus and therefore will be in $\Sigma_t(S)$. Consider one of the transitions that occurs after yu on some symbol x. This transition will not be of the form $u \xrightarrow{x} v$ but rather $yu \xrightarrow{x} v$, contradicting its inclusion in the sum (3). Hence, the proposition holds. □
Note that the proposition implies that no individual transition can be more frequent
than the full sum. The proposition also has useful consequences for the maximum
weighting for any Fisher score entries as the next corollary demonstrates.
Corollary 4 Given a corpus S, if we construct the FSM model $F_t(S)$ and compute the probabilities by counting transitions, ignoring those from states indexed by d[1 : i] for some document d of the corpus, the probabilities on the transitions will satisfy
$$p_u(x) > \frac{f_{u,x} + c}{|A|(t + c)} \ge \frac{c}{|A|(t + c)}.$$

Proof. We substitute the bound given in the proposition into the formula (2). □
The proposition and corollary demonstrate that the choice of $F_t(S)$ as an FSM
model has the desirable property that all of the states are meaningfully frequent
while none of the transitions is too frequent and furthermore the Fisher weighting
cannot grow too large for any individual transition.
In the next section we will present experimental results testing the kernels we have
introduced using the standard and logarithmic weightings. The baseline for the
experiments will always be the bag of words kernel using the TFIDF weighting
scheme. It is perhaps worth noting that though the IDF weighting appears similar
to those described above it makes critical use of the distribution of terms across
documents, something that is incompatible with the Fisher approach that we have
adopted . It is therefore very exciting to see the results that we are able to obtain
using these syntactic features and sub-document level weightings.
5 Experimental Results
Our experiments were conducted on the top 10 categories of the standard Reuters-21578 data set using the "ModApte" split. We compared the standard n-gram kernel with a Uniform, non-uniform and ln weighting scheme, and the variable-length FSM model described in Section 4, both with uniform weighting and a ln weighting scheme. As mentioned in Section 4, the parameter t was set to 10. In
order to keep the comparison fair, the n-gram kernel features were also pruned from
the feature vector if they occurred fewer than 10 times. For our experiments we used
5-gram features, which have previously been reported to give the best results [5].
The standard bag of words model using the normal tfidf weighting scheme is used
as a baseline. Once feature vectors had been created they were normalised and
the SVMlight software package [3] was used with the default parameter settings
to obtain outputs for the test examples. In order to compare algorithms, we used
the average performance measure commonly used in Information Retrieval (see e.g.
[4]). This is the average of precision values obtained when thresholding at each
positively classified document. If all positive documents in the corpus are ranked
higher than any negative documents, then the average precision is 100%. Average
precision incorporates both precision and recall measures and is highly sensitive to
document ranking, so therefore can be used to obtain a fair comparison between
methods. The results are shown in Table 1.
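The average precision measure described above can be sketched as follows (a minimal illustration, not the evaluation code used for the experiments):

```python
def average_precision(ranked_labels):
    """Average precision of a ranking (highest-scoring document first): the
    mean of the precision values obtained by thresholding at each positive
    document. Returns a percentage; 100.0 exactly when every positive
    document outranks every negative one."""
    hits = 0
    precisions = []
    for rank, is_positive in enumerate(ranked_labels, start=1):
        if is_positive:
            hits += 1
            precisions.append(hits / rank)
    return 100.0 * sum(precisions) / len(precisions)
```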
As can be seen from the table, the variable-length subsequence method performs
as well as or better than all other methods and achieves a perfect ranking for
documents in one of the categories.
Method      BoW      n-grams  n-grams  n-grams  FSA      FSA
Weighting   TFIDF    Uniform  1/p      ln(1/p)  Uniform  ln(1/p)
earn        99.86    99.91    96.4     99.9     99.9     99.9
acq         99.62    99.61    99.7     99.5     99.7     99.7
money-fx    80.54    82.43    84.9     83.4     86.5     85.8
grain       99.69    99.67    99.9     99.4     97.8     97.5
crude       98.52    98.23    99.9     97.2     100.0    100.0
trade       95.29    95.53    94.6     95.6     94.6     91.3
interest    91.61    98.83    96.6     95.4     94.0     88.8
ship        96.84    99.42    91.7     98.9     92.7     98.4
wheat       98.52    97.2     98.7     99.3     95.3     98.4
corn        98.95    98.2     99.3     99.0     97.5     98.1

Table 1: Average precision results comparing TFIDF, n-gram and FSM features on the top 10 categories of the Reuters data set.
6 Discussion
In this paper we have shown how the string kernel can be thought of as a k-stage
Markov process, and as a result interpreted as a Fisher kernel. Using this new
insight we have shown how the features of a Fisher kernel can be constructed using
a Finite State Model parameterisation which reflects the statistics of the frequency
of occurrence of features within the corpus. This model has then been extended further to incorporate sub-sequences of varying length, which is a great deal more
flexible than the fixed-length approach. A procedure for determining informative
sub-sequences (states in the FSM model) has also been given. Experimental results
have shown that this model outperforms the standard tfidf bag of words model on
a well known data set. Although the experiments in this paper are not extensive,
they show that the approach of using a Finite-State-Model to generate a Fisher
kernel gives new insights and more flexibility over the string kernel, and performs
well. Future work would include determining the optimum value for the threshold
t (maximum frequency of a sub-string occurring within the FSM before a state is
expanded) as this currently has to be set a-priori.
References
[1] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL99-10, University of California, Santa Cruz, July 1999.
[2] T. Jaakkola, M. Diekhans, and D. Haussler. Using the Fisher kernel method to detect remote protein homologies. 7th Intell. Sys. Mol. Biol., pages 149-158, 1999.
[3] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, 1999.
[4] Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning (ICML '02), 2002.
[5] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Journal of Machine Learning Research, 2:419-444, 2002.
[6] H. Lodhi, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 563-569. MIT Press, 2001.
[7] C. Watkins. Dynamic alignment kernels. Technical Report CSD-TR-98-11, Royal
Holloway, University of London, January 1999.
A Note on the Representational Incompatibility
of Function Approximation and Factored
Dynamics
Eric Allender
Computer Science Department
Rutgers University
[email protected]
Sanjeev Arora
Computer Science Department
Princeton University
[email protected]
Michael Kearns
Department of Computer and Information Science
University of Pennsylvania
[email protected]
Cristopher Moore
Department of Computer Science
University of New Mexico
[email protected]
Alexander Russell
Department of Computer Science and Engineering
University of Connecticut
[email protected]
Abstract
We establish a new hardness result that shows that the difficulty of planning in factored Markov decision processes is representational rather
than just computational. More precisely, we give a fixed family of factored MDPs with linear rewards whose optimal policies and value functions simply cannot be represented succinctly in any standard parametric
form. Previous hardness results indicated that computing good policies
from the MDP parameters was difficult, but left open the possibility of
succinct function approximation for any fixed factored MDP. Our result
applies even to policies which yield a polynomially poor approximation
to the optimal value, and highlights interesting connections with the complexity class of Arthur-Merlin games.
1 Introduction
While a number of different representational approaches to large Markov decision processes (MDPs) have been proposed and studied over recent years, relatively little is known
about the relationships between them. For example, in function approximation, a parametric form is proposed for the value functions of policies. Presumably, for any assumed
parametric form (for instance, linear value functions), rather strong constraints on the underlying stochastic dynamics and rewards may be required to meet the assumption. However, a precise characterization of such constraints seems elusive.
Similarly, there has been recent interest in making parametric assumptions on the dynamics and rewards directly, as in the recent work on factored MDPs. Here it is known that
the problem of computing an optimal policy from the MDP parameters is intractable (see
[7] and the references therein), but exactly what the representational constraints on such
policies are has remained largely unexplored.
In this note, we give a new intractability result for planning in factored MDPs that exposes
a noteworthy conceptual point missing from previous hardness results. Prior intractability
results for planning in factored MDPs established that the problem of computing optimal
policies from MDP parameters is hard, but left open the possibility that for any fixed factored MDP, there might exist a compact, parametric representation of its optimal policy.
This would be roughly analogous to standard NP-complete problems such as graph coloring ? any 3-colorable graph has a ?compact? description of its 3-coloring, but it is hard to
compute it from the graph.
Here we dismiss even this possibility. Under a standard and widely believed complexitytheoretic assumption (that is even weaker than the assumption that NP does not have polynomial size Boolean circuits), we prove that a specific family of factored MDPs does not
even possess ?succinct? policies. By this we mean something extremely general ? namely,
that for each MDP in the family, it cannot have an optimal policy represented by an arbitrary
Boolean circuit whose size is bounded by a polynomial in the size of the MDP description.
Since such circuits can represent essentially any standard parametric functional form, we
are showing that there exists no ?reasonable? representation of good policies in factored
MDPs, even if we ignore the problem of how to compute them from the MDP description. This result holds even if we ask only for policies whose expected return approximates
the optimal within a polynomial factor. (With a slightly stronger complexity-theoretic assumption, it follows that obtaining an approximation even within an exponential factor is
impossible.)
Thus, while previous results established that there was at least a computational barrier to
going from factored MDP parameters to good policies, here we show that the barrier is
actually representational, a considerably worse situation. The result highlights the fact that
even when making strong and reasonable assumptions about one representational aspect of
MDPs (such as value functions or dynamics), there is no reason in general for this to lead
to any nontrivial restrictions on the others.
The construction in our result is ultimately rather simple, and relies on powerful results
developed in complexity theory over the last decade. In particular, we exploit striking
results on the complexity class associated with computational protocols known as ArthurMerlin games.
We note that recent and independent work by Liberatore [5] establishes results similar to
ours. The primary differences between our work and Liberatore?s is that our results prove
intractability of approximation and rely on different proof techniques.
2 DBN-Markov Decision Processes
A Markov decision process is a tuple $M = (S, A, \{P_a(\cdot \mid s)\}, R)$, where $S$ is a set of states, $A$ is a set of actions, $\{P_a(\cdot \mid s)\}$ is a family of probability distributions on $S$, one for each state $s \in S$ and action $a \in A$, and $R$ is a reward function on states. We will denote by $P_a(s' \mid s)$ the probability that action $a$ in state $s$ results in state $s'$. When started in a state $s_0$, and provided with a sequence of actions $a_0, a_1, a_2, \ldots$, the MDP traverses a sequence of states $s_0, s_1, s_2, \ldots$, where each $s_{t+1}$ is a random sample from the distribution $P_{a_t}(\cdot \mid s_t)$. Such a state sequence is called a path. The $\gamma$-discounted return associated with such a path is $\sum_{t \ge 0} \gamma^t R(s_t)$.
A policy
is a mapping from states to actions. When the action sequence is
generated according to this policy, we denote by
the state sequence
produced as above. A policy is optimal if for all policies
and all
, we have
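The path-sampling and discounted-return definitions above can be sketched in a few lines of code (a toy illustration; the two-state MDP, its transition table, and the policy below are hypothetical, not from the paper):

```python
import random

def discounted_return(rewards, gamma):
    """Sum of gamma^t * R(s_t) over a path's reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def sample_path(P, R, policy, s0, horizon, rng):
    """Follow a policy from s0, sampling s_{t+1} ~ P(. | s_t, a_t)."""
    s, rewards = s0, []
    for _ in range(horizon):
        rewards.append(R[s])
        a = policy[s]
        states, probs = zip(*P[(s, a)].items())
        s = rng.choices(states, weights=probs)[0]
    return rewards

# Toy 2-state MDP: action 'stay' tends to keep the state, 'flip' to switch it.
P = {
    (0, 'stay'): {0: 0.9, 1: 0.1}, (0, 'flip'): {0: 0.1, 1: 0.9},
    (1, 'stay'): {1: 0.9, 0: 0.1}, (1, 'flip'): {1: 0.1, 0: 0.9},
}
R = {0: 0.0, 1: 1.0}              # reward only in state 1
policy = {0: 'flip', 1: 'stay'}   # head for state 1, then stay

rng = random.Random(0)
path_rewards = sample_path(P, R, policy, s0=0, horizon=50, rng=rng)
print(discounted_return(path_rewards, gamma=0.9))
```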
We consider MDPs where the transition law is represented as a dynamic Bayes net, or
DBN-MDPs. Namely, if the state space has size 2^n (states with n binary components),
then P is represented by a two-layer Bayes net. There are n + 1 variables in the first
layer, representing the n state variables at any given time t, along with the action
chosen at time t. There are n variables in the second layer, representing the state
variables at time t + 1. All directed edges in the Bayes net go from variables in the
first layer to variables in the second layer; for our result, it suffices to consider
Bayes nets in which the indegree of every second-layer node is bounded by some constant.
Each second-layer node has a conditional probability table (CPT) describing its
conditional distribution for every possible setting of its parents in the Bayes net.
Thus the stochastic dynamics of the DBN-MDP are entirely described by the Bayes net in
the standard way; the next-state distribution for any state is given by simply fixing
the first-layer nodes to the settings given by the state. Any given action choice then
yields the next-state distribution according to standard Bayes net semantics. We shall
assume throughout that the rewards are a linear function of state.
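A minimal sketch of such a two-layer transition model shows how constant indegree keeps every conditional table small; the parent sets and CPT values below are hypothetical, chosen only for illustration:

```python
import random

def dbn_step(state, action, parents, cpt, rng):
    """Sample each next-state bit from its CPT given its few parents.

    state   : tuple of bits (first-layer variables)
    parents : parents[i] = indices of the current-state bits that bit i reads
    cpt     : cpt[i][(action, parent_bits)] = P(bit_i' = 1 | parents, action)
    """
    nxt = []
    for i, par in enumerate(parents):
        key = (action, tuple(state[j] for j in par))
        nxt.append(1 if rng.random() < cpt[i][key] else 0)
    return tuple(nxt)

# 3-bit toy example: each next-state bit depends on at most 2 current bits.
parents = [(0, 1), (1,), (1, 2)]
cpt = [
    {(a, pb): 0.8 if sum(pb) >= 1 else 0.1
     for a in (0, 1) for pb in [(0, 0), (0, 1), (1, 0), (1, 1)]},
    {(a, pb): 0.5 for a in (0, 1) for pb in [(0,), (1,)]},
    {(a, pb): float(a)  # bit 2 simply copies the action
     for a in (0, 1) for pb in [(0, 0), (0, 1), (1, 0), (1, 1)]},
]
rng = random.Random(0)
print(dbn_step((1, 0, 0), action=1, parents=parents, cpt=cpt, rng=rng))
```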
3 Arthur-Merlin Games
The complexity class AM is a probabilistic extension of the familiar class NP, and is
typically described in terms of Arthur-Merlin games (see [2]). An Arthur-Merlin game for
a language L is played by two players (Turing machines): V (the Verifier, often referred
to as Arthur in the literature), who is equipped with a random coin and only modest
(polynomial-time bounded) computing power; and P (the Prover, often referred to as
Merlin), who is computationally unbounded. Both are supplied with the same input x of
length n bits. For instance, x might be some standard encoding of an undirected graph G,
and P might be interested in proving to V that G is 3-colorable. Thus, P seeks to prove
that x ∈ L; V is skeptical but willing to listen. At each step of the conversation, V
flips a fair coin, perhaps several times, and reports the resulting bits to P; this is
interpreted as a "question" or "challenge" to P. In the graph coloring example, it might
be reasonable to interpret the random bits generated by V as identifying a random edge
in G, with the challenge to P being to identify the colors of the nodes on each end of
this edge (which had better be different, and consistent with any previous responses of
P, if V is to be convinced). Thus P responds with some number of bits, and the protocol
proceeds to the next round. After poly(n) steps, V decides, based upon the conversation,
whether to accept that x ∈ L or reject.
We say that the language L is in the class AM[poly] if there is a (polynomial-time)
algorithm V such that:

When x ∈ L, there is always a strategy for P to generate the responses to the random
challenges that causes V to accept.

When x ∉ L, regardless of how P responds to the random challenges, with probability at
least 2/3, V rejects. Here the probability is taken over the random challenges.

In other words, we ask that there be a polynomial-time algorithm V such that if x ∈ L,
there is always some response to the random challenge sequence that will convince V of
this fact; but if x ∉ L, then every way of responding to the random challenge sequence
has an overwhelming probability of being "caught" by V.
What is the power of the class AM[poly]? From the definition, it should be clear that
every language in NP has an (easy) AM[poly] protocol in which P, the prover, ignores
the random challenges, and simply presents V with the standard NP witness to x ∈ L
(e.g., a specific 3-coloring of the graph G). More surprisingly, every language in the
class PSPACE (the class of all languages that can be recognized in deterministic
polynomial space, conjectured to be much larger than NP) also has an AM[poly] protocol,
a beautiful and important result due to [6, 9]. (For definitions of classes such as P,
NP, and PSPACE, see [8, 4].)
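The graph-coloring exchange described above can be sketched directly (illustrative only; with the coloring sent in the clear, challenging a random edge per round amounts to spot-checking the NP witness):

```python
import random

def merlin(coloring):
    """Computationally unbounded prover: simply reveals its 3-coloring."""
    return coloring

def arthur(edges, coloring, rounds, rng):
    """Polynomial-time verifier: challenge one random edge per round."""
    for _ in range(rounds):
        u, v = rng.choice(edges)
        if coloring[u] == coloring[v]:
            return False  # caught: both endpoints share a color
    return True

# Triangle graph: 3-colorable, so an honest prover always convinces Arthur.
edges = [(0, 1), (1, 2), (0, 2)]
good = merlin({0: 'r', 1: 'g', 2: 'b'})
bad = {0: 'r', 1: 'r', 2: 'b'}  # a cheating "coloring" with one bad edge

rng = random.Random(0)
print(arthur(edges, good, rounds=20, rng=rng))
print(arthur(edges, bad, rounds=20, rng=rng))
```

A bad coloring survives one round with probability 2/3 here, so twenty rounds catch it with overwhelming probability.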
If a language L has an Arthur-Merlin game where Arthur asks only a constant number of
questions, we say that L ∈ AM. NP corresponds to Arthur-Merlin games where Arthur says
nothing, and thus clearly NP ⊆ AM. Restricting the number of questions seems to put
severe limitations on the power of Arthur-Merlin games. Though AM[poly] = PSPACE, it is
generally believed that

NP ⊆ AM ⊂ PSPACE
4 DBN-MDPs Requiring Large Policies
In this section, we outline our construction proving that factored MDPs may not have any
succinct representation for (even approximately) optimal policies, and conclude this note
with a formal statement of the result.
Let us begin by drawing a high-level analogy with the MDP setting. Let L be a language
in PSPACE, and let V and P be the Turing machines for the AM[poly] protocol for L.
Since V is simply a Turing machine, it has some internal configuration (sufficient to
completely describe the tape contents, read/write head position, abstract computational
state, and so on) at any given moment in the protocol with P. Since we assume P is
all-powerful (computationally unbounded), we can assume that P has complete knowledge of
this internal state of V at all times. The protocol at round t can thus be viewed: V is
in some state/configuration s_t; a random bit sequence r_t (the challenge) is generated;
based on s_t and r_t, P computes some response or action a_t; and based on a_t and r_t,
V enters its next configuration s_{t+1}. From this description, several observations can
be made:
V's internal configuration constitutes state in the Markovian sense: combined with the
action a_t, it entirely determines the next-state distribution. The dynamics are
probabilistic due to the influence of the random bit sequence r_t.

We can thus view P as implementing a policy in the MDP determined by (the internal
configuration of) V: P's actions, together with the stochastic r_t, determine the
evolution of the s_t. Informally, we might imagine defining the total return to P to be
1 if P causes V to accept, and 0 if V rejects.

The MDP so defined in this manner is not arbitrarily complex: in particular, the
transition dynamics are defined by the polynomial-time Turing machine V.
At a high level, then, if every MDP so defined by a language in AM[poly] had an
"efficient" policy, then something remarkable would occur: the arbitrary power allowed
to P in the definition of the class would have been unnecessary. We shall see that this
would have extraordinary and rather implausible complexity-theoretic implications. For
the moment, let us simply sketch the refinements to this line of thought that will allow
us to make the connection to factored MDPs: we will show that the MDPs defined above can
actually be represented by DBN-MDPs with only constant indegree and a linear reward
function. As suggested, this will allow us to assert rather strong negative results
about even the existence of efficient policies, even when we ask for rather weak
approximation to the optimal return.
We now turn to the problem of planning in a DBN-MDP. Typically, one might like to have
a "general-purpose" planning procedure: a procedure that takes as input a description of
a DBN-MDP M, and returns a description of the optimal policy π*. This is what is
typically meant by the term planning, and we note that it demands a certain kind of
uniformity: a single planning algorithm that can efficiently compute a succinct
representation of the optimal policy for any DBN-MDP. Note that the existence of such a
planning algorithm would certainly imply that every DBN-MDP has a succinct
representation of its optimal policy, but the converse does not hold. It could be that
the difficulty of planning in DBN-MDPs arises from the demand of uniformity; that is,
that every DBN-MDP possesses a succinct optimal policy, but the problem of computing it
from the MDP parameters is intractable. This would be analogous to problems in NP: for
example, every 3-colorable graph obviously has a succinct description of a 3-coloring,
but it is difficult to compute it from the graph.
As mentioned in the introduction, it has been known for some time that planning in this
uniform sense is computationally intractable. Here we establish the stronger and
conceptually important result that it is not the uniformity giving rise to the
difficulty, but rather that there simply exist DBN-MDPs in which the optimal policy does
not possess a succinct representation in any natural parameterization. We will present a
specific family of DBN-MDPs {M_n} (where M_n has 2^n states with n components), and show
that, under a standard complexity-theoretic assumption, the corresponding family of
optimal policies cannot be represented by arbitrary Boolean circuits of size polynomial
in n. We note that such circuits constitute a universal representation of efficiently
computable functions, and all of the standard parametric forms in wide use in AI and
statistics can be computed by such circuits.
We now provide the details of the construction. Let L be any language in PSPACE, and
let V be a polynomial-time Turing machine running in time p(n) on inputs of length n,
implementing the algorithm of "Arthur" in the AM[poly] protocol for L. Let m be the
maximum number of bits needed to write down a complete configuration of V that may
arise during computation on an input of length n (so m = O(p(n)), since no computation
taking p(n) time can consume more than p(n) space). Each state of our DBN-MDP M_n will
have m components, each corresponding to one bit of the encoding of a configuration. No
states will have rewards, except for the accepting states, which have reward 1. (Without
loss of generality, we may assume that V never enters an accepting state other than at
time p(n).) Note that we can encode configurations so that there is one bit position
(say, the first bit of the state vector) that records if the current state of V is
accepting or not. Thus the reward function is obviously linear (it is simply 1 times
the first component).
There are two actions: {0, 1}. Each action advances the simulation of the AM game by
one time step. There are three types of steps:

1. Steps where P is choosing a bit to send to V; action 0 corresponds to choosing to
send a "0" to V.

2. Steps where V is flipping a coin; each action yields probability 1/2 of having the
coin come up "heads".

3. Steps where V is doing deterministic computation; each action moves the computation
ahead one step.
It is straightforward to encode this as a DBN-MDP. Note that each bit of the next-move
relation of a Turing machine depends on only O(1) bits of the preceding configuration
(i.e., on the bits encoding the contents of the neighboring cells, the bits encoding
the presence or absence of the input head in one of those cells, and the bits encoding
the finite state information of the Turing machine). Thus the DBN-MDP M_n describing V
on inputs of length n has constant indegree; each bit is connected to the O(1) bits on
which it depends.
Note that a path in this MDP corresponding to an accepting computation of V on an input
x of length n has total reward 1; a rejecting path has reward 0. A routine calculation
shows that the expected reward of the optimal policy is equal to the fraction of coin
flip sequences that cause V to accept when communicating with an optimal P. That is,

Optimal expected reward = Prob[V accepts x]
With the construction above, we can now describe our result:
Theorem 1. If PSPACE is not contained in P/poly, then there is a family of DBN-MDPs
{M_n}, n ≥ 1, such that for any two polynomials p(n) and q(n), there exist infinitely
many n such that no circuit of size p(n) can compute a policy having expected reward
greater than 1/q(n) times the optimum.
Before giving the formal proof, we remark that the assumption that PSPACE is not
contained in P/poly is standard and widely believed; informally, it asserts that not
everything that can be computed in polynomial space can be computed by a non-uniform
family of small circuits.
Proof. Let L be any language in PSPACE that is not in P/poly, and let {M_n} be as
described above. Suppose, contrary to the statement of the Theorem, that for all large
enough n there is indeed a circuit C of size p(n) computing a policy for M_n whose
return is within a 1/q(n) factor of optimal. We now consider the probabilistic circuit
C' that operates as follows. C' takes a string x as input, and estimates the expected
return of the policy given by C (which is the same as the probability that the prover
associated with C is able to convince V that x ∈ L). Specifically, C' builds the state
corresponding to the start state of protocol V on input x, and then repeats the
following procedure t = poly(n) times:

Given state s, if s is a state encoding a configuration in which it is P's turn, use C
to compute the message sent by P and set s to the new state of the AM protocol.

Otherwise, if s is a state encoding a configuration in which it is V's turn, flip a
coin at random and set s to the new state of the AM protocol.

Repeat until an accept or reject state is encountered.

If any of these repetitions result in an accept, C' accepts; otherwise C' rejects. Note
now that if x ∈ L, then the probability that C' rejects is no more than
(1 - 1/q(n))^t, since in this case we are guaranteed that each iteration will accept
with probability at least 1/q(n). On the other hand, if x ∉ L, then C' accepts with
probability no more than t times the soundness error of V, since each iteration accepts
with probability at most that error (which may be made exponentially small by standard
amplification of the AM protocol). As C' has polynomial size and a probabilistic
circuit can be simulated by a deterministic one of essentially the same size, it
follows that L is in P/poly, a contradiction.
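The repetition step of the proof is simple arithmetic; a sketch (with hypothetical values of q and t, not tied to any particular language) shows how quickly the probability that every trial rejects vanishes when each trial accepts with probability at least 1/q:

```python
def reject_probability(accept_per_trial, trials):
    """P[every trial rejects] when trials are independent and each
    accepts with probability at least accept_per_trial."""
    return (1.0 - accept_per_trial) ** trials

# If the policy circuit attains a 1/q fraction of the optimum (say q = 10),
# a few hundred repetitions make a missed acceptance vanishingly unlikely.
q, t = 10, 400
print(reject_probability(1.0 / q, t))
```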
It is worth mentioning that, by the worst-case-to-average-case reduction of [1], if
PSPACE is not in P/poly then we can select such a language L so that the circuit C will
perform badly on a non-negligible fraction of the states of M_n. That is, not only is it
hard to find an optimal policy, it will be the case that every policy that can be
expressed as a polynomial-size circuit will perform very badly on very many inputs.

Finally, we remark that by coupling the above construction with the approximate lower
bound protocol of [3], one can prove (under a stronger assumption) that there are no
succinct policies for the DBN-MDPs {M_n} which even approximate the optimum return to
within an exponential factor.
Theorem 2. If PSPACE is not contained in AM, then there is a family of DBN-MDPs {M_n},
n ≥ 1, such that for any polynomial p(n) there exist infinitely many n such that no
circuit of size p(n) can compute a policy having expected reward greater than 2^(-n)
times the optimum.
References
[1] L. Babai, L. Fortnow, N. Nisan, and A. Wigderson. BPP has subexponential time
simulations unless EXPTIME has publishable proofs. Complexity Theory, 3:307-318,
1993.
[2] L. Babai and S. Moran. Arthur-merlin games: a randomized proof system, and a
hierarchy of complexity classes. Journal of Computer and System Sciences, 36(2):254-276,
1988.
[3] S. Goldwasser and M. Sipser. Private coins versus public coins in interactive proof
systems. Advances in Computing Research, 5:73-90, 1989.
[4] D. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of
Theoretical Computer Science, volume A. The MIT Press, 1990.
[5] P. Liberatore. The size of MDP factored policies. In Proceedings of AAAI 2002. AAAI
Press, 2002.
[6] C. Lund, L. Fortnow, H. Karloff, and N. Nisan. Algebraic methods for interactive proof
systems. Journal of the ACM, 39(4):859-868, 1992.
[7] M. Mundhenk, J. Goldsmith, C. Lusena, and E. Allender. Complexity of finite-horizon
Markov decision process problems. Journal of the ACM, 47(4):681-720, 2000.
[8] C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
[9] A. Shamir. IP = PSPACE. Journal of the ACM, 39(4):869-877, 1992.
A Neural Edge-Detection Model for
Enhanced Auditory Sensitivity in
Modulated Noise
Alon Fishbach and Bradford J. May
Department of Biomedical Engineering and Otolaryngology-HNS
Johns Hopkins University
Baltimore, MD 21205
[email protected]
Abstract
Psychophysical data suggest that temporal modulations of stimulus
amplitude envelopes play a prominent role in the perceptual
segregation of concurrent sounds. In particular, the detection of an
unmodulated signal can be significantly improved by adding
amplitude modulation to the spectral envelope of a competing
masking noise. This perceptual phenomenon is known as
"Comodulation Masking Release" (CMR). Despite the obvious
influence of temporal structure on the perception of complex
auditory scenes, the physiological mechanisms that contribute to
CMR and auditory streaming are not well known. A recent
physiological study by Nelken and colleagues has demonstrated an
enhanced cortical representation of auditory signals in modulated
noise. Our study evaluates these CMR-like response patterns from
the perspective of a hypothetical auditory edge-detection neuron. It
is shown that this simple neural model for the detection of
amplitude transients can reproduce not only the physiological data
of Nelken et al., but also, in light of previous results, a variety of
physiological and psychoacoustical phenomena that are related to
the perceptual segregation of concurrent sounds.
1 Introduction
The temporal structure of a complex sound exerts strong influences on auditory
physiology (e.g. [10, 16]) and perception (e.g. [9, 19, 20]). In particular, studies of
auditory scene analysis have demonstrated the importance of the temporal structure
of amplitude envelopes in the perceptual segregation of concurrent sounds [2, 7].
Common amplitude transitions across frequency serve as salient cues for grouping
sound energy into unified perceptual objects. Conversely, asynchronous amplitude
transitions enhance the separation of competing acoustic events [3, 4].
These general principles are manifested in perceptual phenomena as diverse as
comodulation masking release (CMR) [13], modulation detection interference [22]
and synchronous onset grouping [8].
Despite the obvious importance of timing information in psychoacoustic studies of
auditory masking, the way in which the CNS represents the temporal structure of an
amplitude envelope is not well understood. Certainly many physiological studies
have demonstrated neural sensitivities to envelope transitions, but this sensitivity is
only beginning to be related to the variety of perceptual experiences that are evoked
by signals in noise.
Nelken et al. [15] have suggested a correspondence between neural responses to
time-varying amplitude envelopes and psychoacoustic masking phenomena. In their
study of neurons in primary auditory cortex (A1), adding temporal modulation to
background noise lowered the detection thresholds of unmodulated tones. This
enhanced signal detection is similar to the perceptual phenomenon that is known as
comodulation masking release [13].
Fishbach et al. [11] have recently proposed a neural model for the detection of
"auditory edges" (i.e., amplitude transients) that can account for numerous
physiological [14, 17, 18] and psychoacoustical [3, 21] phenomena. The
encompassing utility of this edge-detection model suggests a common mechanism
that may link the auditory processing and perception of auditory signals in a
complex auditory scene. Here, it is shown that the auditory edge detection model
can accurately reproduce the cortical CMR-like responses previously described by
Nelken and colleagues.
2 The Model
The model is described in detail elsewhere [11]. In short, the basic operation of the
model is the calculation of the first-order time derivative of the log-compressed
envelope of the stimulus. A computational model [23] is used to convert the
acoustic waveform to a physiologically plausible auditory nerve representation (Fig
1a). The simulated neural response has a medium spontaneous rate and a
characteristic frequency that is set to the frequency of the target tone. To allow
computation of the time derivative of the stimulus envelope, we hypothesize the
existence of a temporal delay dimension, along which the stimulus is progressively
delayed. The intermediate delay layer (Fig 1b) is constructed from an array of
neurons with ascending membrane time constants (τ); each neuron is modeled by a
conventional integrate-and-fire model (I&F, [12]). Higher membrane time constant
induces greater delay in the neuron's response [1].
The output of the delay layer converges to a single output neuron (Fig. 1c) via a set
of connection with various efficacies that reflect a receptive field of a gaussian
derivative. This combination of excitatory and inhibitory connections carries out the
time-derivative computation. Implementation details and parameters are given in
[11]. The model has 2 adjustable and 6 fixed parameters; the former were used to fit
the responses of the model to single-unit responses to a variety of stimuli [11]. The
results reported here are not sensitive to these parameters.
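The model's core computation can be sketched without the full delay-layer and I&F machinery, by smoothing the log-compressed envelope with a Gaussian-derivative kernel (a simplification of [11]; the kernel width and the step stimulus below are assumed illustrative values, not the paper's fitted parameters):

```python
import math

def edge_detector(envelope, dt, sigma, p0=2e-5):
    """Rectified, Gaussian-derivative-smoothed slope of the log envelope.

    envelope : pressure samples (Pa); dt : sample period (s);
    sigma    : kernel width in samples (stand-in for the delay layer).
    """
    a = 20.0 / math.log(10.0)
    logs = [a * math.log(1.0 + e / p0) for e in envelope]
    half = int(3 * sigma)
    # Gaussian-derivative kernel: convolving with it approximates d/dt.
    kern = [-(k / sigma ** 2) * math.exp(-k * k / (2 * sigma ** 2))
            for k in range(-half, half + 1)]
    out = []
    for i in range(len(logs)):
        s = 0.0
        for k in range(-half, half + 1):
            j = min(max(i - k, 0), len(logs) - 1)  # clamp at the borders
            s += kern[k + half] * logs[j]
        out.append(max(0.0, s / dt))  # rectify
    return out

# A step from silence to ~70 dB SPL (0.063 Pa) yields one localized peak.
env = [0.0] * 50 + [0.063] * 50
resp = edge_detector(env, dt=1e-3, sigma=3.0)
print(resp.index(max(resp)))
```

The response is zero on the flat portions and peaks only at the amplitude transition, which is the edge-detection behavior the text describes.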
[Figure 1 schematic labels: (a) AN model (bandpass, log, RMS); (b) delay layer of I&F neurons with time constants τ = 3, 4, 6 ms; (c) edge-detector neuron computing d/dt.]
Figure 1: Schematic diagram of the model and a block diagram of the basic
operation of each model component (shaded area). The stimulus is converted to a
neural representation (a) that approximates the average firing rate of a medium
spontaneous-rate AN fiber [23]. The operation of this stage can be roughly
described as the log-compressed rms output of a bandpass filter. The neural
representation is fed to a series of neurons with ascending membrane time constant
(b). The kernel functions that are used to simulate these neurons are plotted for a
few neurons along with the time constants used. The output of the delay-layer
neurons converge to a single I&F neuron (c) using a set of connections with weights
that reflect a shape of a gaussian derivative. Solid arrows represent excitatory
connections and white arrows represent inhibitory connections. The absolute
efficacy is represented by the width of the arrows.
3 Results
Nelken et al. [15] report that amplitude modulation can substantially modify the
noise-driven discharge rates of A1 neurons in Halothane-anesthetized cats. Many
cortical neurons show only a transient onset response to unmodulated noise but fire
in synchrony ("lock") to the envelope of modulated noise. A significant reduction in
envelope-locked discharge rates is observed if an unmodulated tone is added to
modulated noise. As summarized in Fig. 2, this suppression of envelope locking can
reveal the presence of an auditory signal at sound pressure levels that are not
detectable in unmodulated noise. It has been suggested that this pattern of neural
responding may represent a physiological equivalent of CMR.
Reproduction of CMR-like cortical activity can be illustrated by a simplified case in
which the analytical amplitude envelope of the stimulus is used as the input to the
edge-detector model. In keeping with the actual physiological approach of Nelken et
al., the noise envelope is shaped by a trapezoid modulator for these simulations.
Each cycle of modulation, E_N(t), is given by:

E_N(t) = (P/D) t              for 0 ≤ t < D
       = P                    for D ≤ t < 3D
       = P - (P/D)(t - 3D)    for 3D ≤ t < 4D
       = 0                    for 4D ≤ t < 8D

where P is the peak pressure level and D is set to 12.5 ms.
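The piecewise definition translates directly into code (D = 12.5 ms follows the text; the numeric value used for P is an assumed ~70 dB SPL peak pressure, for illustration):

```python
def trapezoid_envelope(t, P, D=0.0125):
    """One cycle of the trapezoid modulator E_N(t); the cycle length is 8*D."""
    t = t % (8 * D)
    if t < D:
        return P * t / D                 # linear rise
    if t < 3 * D:
        return P                         # plateau at the peak level
    if t < 4 * D:
        return P - P * (t - 3 * D) / D   # linear fall
    return 0.0                           # silent remainder of the cycle

P = 0.063  # assumed ~70 dB SPL peak pressure, in Pa
print(trapezoid_envelope(0.5 * 0.0125, P))  # halfway up the rise
print(trapezoid_envelope(2.0 * 0.0125, P))  # on the plateau
```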
[Figure 2 panels: (a) unmodulated noise, (b) modulated noise; ordinate: tone level (26-76 dB SPL), responses in spikes/sec over 0-300 ms.]
Figure 2: Responses of an A1 unit to a combination of noise and tone at many tone
levels, replotted from Nelken et al. [15]. (a) Unmodulated noise and (b) modulated
noise. The noise envelope is illustrated by the thick line above each figure. Each
row shows the response of the neuron to the noise plus the tone at the level specified
on the ordinate. The dashed line in (b) indicates the detection threshold level for the
tone. The detection threshold (as defined and calculated by Nelken et al.) in the
unmodulated noise was not reached.
Since the basic operation of the model is the calculation of the rectified
time-derivative of the log-compressed envelope of the stimulus, the expected
noise-driven rate of the model can be approximated by:

M_N(t) = max(0, d/dt [A ln(1 + E(t)/P_0)])

where A = 20/ln(10) and P_0 = 2e-5 Pa. The expected firing rate in response to the
noise plus an unmodulated signal (tone) can be similarly approximated by:

M_{N+S}(t) = max(0, d/dt [A ln(1 + (E(t) + P_S)/P_0)])
where P_S is the peak pressure level of the tone. Clearly, both M_N(t) and M_{N+S}(t)
are identically zero outside the interval [0, D]. Within this interval it holds that:

M_N(t) = (AP/D) / (P_0 + (P/D) t)            for 0 ≤ t < D

and

M_{N+S}(t) = (AP/D) / (P_0 + P_S + (P/D) t)  for 0 ≤ t < D

and the ratio of the firing rates is:

M_N(t) / M_{N+S}(t) = 1 + P_S / (P_0 + (P/D) t)   for 0 ≤ t < D
Clearly, M_{N+S}(t) < M_N(t) on the interval [0, D] of each modulation cycle. That is,
the addition of a tone reduces the responses of the model to the rising part of the
modulated envelope. Higher tone levels (P_S) cause greater reduction in the model's
firing rate.
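The closed-form ratio can be checked numerically against finite differences of the rate formulas (the particular P, D, and P_S values below are illustrative, roughly a 70 dB SPL noise peak and a 30 dB SPL tone):

```python
import math

def rate_ratio(t, P, D, Ps, P0=2e-5):
    """Closed form: M_N(t) / M_{N+S}(t) = 1 + Ps / (P0 + (P/D) t)."""
    return 1.0 + Ps / (P0 + (P / D) * t)

def numeric_ratio(t, P, D, Ps, P0=2e-5, h=1e-7):
    """The same ratio from finite differences of A * ln(1 + E/P0)."""
    A = 20.0 / math.log(10.0)
    def m(tt, extra):
        def f(x):
            # Rising part of the envelope, optionally with the tone added.
            return A * math.log(1.0 + ((P / D) * x + extra) / P0)
        return max(0.0, (f(tt + h) - f(tt)) / h)
    return m(t, 0.0) / m(t, Ps)

P, D, Ps = 0.063, 0.0125, 0.00063  # ~70 dB SPL noise peak, ~30 dB SPL tone
t = 0.25 * D
print(rate_ratio(t, P, D, Ps), numeric_ratio(t, P, D, Ps))
```

The ratio exceeds 1 everywhere on the rise and is largest early in it, which is where the tone suppresses the model's response most strongly.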
[Figure 3 panels (a)-(d); ordinates: level (dB SPL) and level derivative (dB SPL/ms); abscissa: time (ms).]
Figure 3: An illustration of the basic operation of the model on various amplitude
envelopes. The simplified operation of the model includes log compression of the
amplitude envelope (a and c) and rectified time-derivative of the log-compressed
envelope (b and d). (a) A 30 dB SPL tone is added to a modulated envelope (peak
level of 70 dB SPL) 300 ms after the beginning of the stimulus (as indicated by the
horizontal line). The addition of the tone causes a great reduction in the time
derivative of the log-compressed envelope (b). When the envelope of the noise is
unmodulated (c), the time-derivative of the log-compressed envelope (d) shows a
tiny spike when the tone is added (marked by the arrow).
Fig. 3 demonstrates the effect of a low-level tone on the time-derivative of the log-compressed envelope of a noise. When the envelope is modulated (Fig. 3a) the
addition of the tone greatly reduces the derivative of the rising part of the
modulation (Fig. 3b). In the absence of modulations (Fig. 3c), the tone presentation
produces a negligible effect on the level derivative (Fig. 3d).
Model simulations of neural responses to the stimuli used by Nelken et al. are
plotted in Fig. 4. As illustrated schematically in Fig 3 (d), the presence of the tone
does not cause any significant change in the responses of the model to the
unmodulated noise (Fig. 4a). In the modulated noise, however, tones of relatively
low levels reduce the responses of the model to the rising part of the envelope
modulations.
[Figure 4 panels: (a) unmodulated noise, (b) modulated noise; ordinate: tone level (26-76 dB SPL), responses in spikes/sec over 0-300 ms.]
Figure 4: Simulated responses of the model to a combination of a tone and
unmodulated noise (a) and modulated noise (b). All conventions are as in Fig. 2.
4 Discussion
This report uses an auditory edge-detection model to simulate the actual
physiological consequences of amplitude modulation on neural sensitivity in
cortical area A1. The basic computational operation of the model is the calculation
of the smoothed time-derivative of the log-compressed stimulus envelope. The
ability of the model to reproduce cortical response patterns in detail across a variety
of stimulus conditions suggests similar time-sensitive mechanisms may contribute to
the physiological correlates of CMR.
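The model's core operation described above can be sketched numerically. The fragment below is an illustrative reconstruction, not the authors' implementation; the function name, the smoothing window, and the amplitude floor are assumptions chosen for this sketch:

```python
import numpy as np

def edge_detector_response(envelope, dt_ms=1.0, floor=1e-6, smooth_ms=5.0):
    """Sketch of the model's core operation: log-compress the amplitude
    envelope, take its time derivative, half-wave rectify, and smooth."""
    log_env = np.log(np.maximum(envelope, floor))         # log compression
    deriv = np.diff(log_env, prepend=log_env[0]) / dt_ms  # time derivative
    rectified = np.maximum(deriv, 0.0)                    # keep rising edges only
    width = max(1, int(smooth_ms / dt_ms))                # moving-average smoothing
    kernel = np.ones(width) / width
    return np.convolve(rectified, kernel, mode="same")

# A flat envelope produces no response; an abrupt amplitude transition does.
t = np.arange(200)
flat = np.ones(200)
step = np.where(t < 100, 0.01, 1.0)  # sharp onset at t = 100 ms
resp_flat = edge_detector_response(flat)
resp_step = edge_detector_response(step)
```

The flat envelope yields zero output everywhere, while the step envelope yields a large transient at the onset, mirroring the model's selectivity for rising envelope edges.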
These findings augment our previous observations that the simple edge-detection
model can successfully predict a wide range of physiological and perceptual
phenomena [11]. Former applications of the model to perceptual phenomena have
been mainly related to auditory scene analysis, or more specifically the ability of the
auditory system to distinguish multiple sound sources. In these cases, a sharp
amplitude transition at stimulus onset (?auditory edge?) was critical for sound
segregation. Here, it is shown that the detection of acoustic signals also may be
enhanced through the suppression of ongoing responses to the concurrent
modulations of competing background sounds. Interestingly, these temporal
fluctuations appear to be a common property of natural soundscapes [15].
The model provides testable predictions regarding how signal detection may be
influenced by the temporal shape of amplitude modulation. Carlyon et al. [6]
measured CMR in human listeners using three types of noise modulation: square-wave, sine-wave and multiplied noise. From the perspective of the edge-detection
model, these psychoacoustic results are intriguing because the different modulator
types represent manipulations of the time derivative of masker envelopes. Square-wave modulation had the most sharply edged time derivative and produced the
greatest masking release.
Fig. 5 plots the responses of the model to a pure-tone signal in square-wave and
sine-wave modulated noise. As in the psychoacoustical data of Carlyon et al., the
simulated detection threshold was lower in the context of square-wave modulation.
Our modeling results suggest that the sharply edged square wave evoked higher
levels of noise-driven activity and therefore created a sensitive background for the
suppressing effects of the unmodulated tone.
Figure 5: Simulated responses of the model to a combination of a tone at various
levels and a sine-wave modulated noise (a) or a square-wave modulated noise (b).
Each row shows the response of the model to the noise plus the tone at the level
specified on the abscissa. The shape of the noise modulator is illustrated above each
figure. The 100 ms tone starts 250 ms after the noise onset. Note that the tone
detection threshold (marked by the dashed line) is 10 dB lower for the square-wave
modulator than for the sine-wave modulator, in accordance with the
psychoacoustical data of Carlyon et al. [6].
Although the physiological basis of our model was derived from studies of neural
responses in the cat auditory system, the key psychoacoustical observations of
Carlyon et al. have been replicated in recent behavioral studies of cats (Budelis et
al. [5]).
These data support the generalization of human perceptual processing to other
species and enhance the possible correspondence between the neuronal CMR-like
effect and the psychoacoustical masking phenomena.
Clearly, the auditory system relies on information other than the time derivative of
the stimulus envelope for the detection of auditory signals in background noise.
Further physiological and psychoacoustic assessments of CMR-like masking effects
are needed not only to refine the predictive abilities of the edge-detection model but
also to reveal the additional sources of acoustic information that influence signal
detection in constantly changing natural environments.
Acknowledgments
This work was supported in part by a NIDCD grant R01 DC004841.
References
[1] Agmon-Snir H., Segev I. (1993). "Signal delay and input synchronization in passive dendritic structure", J. Neurophysiol. 70, 2066-2085.
[2] Bregman A.S. (1990). "Auditory scene analysis: The perceptual organization of sound", MIT Press, Cambridge, MA.
[3] Bregman A.S., Ahad P.A., Kim J., Melnerich L. (1994) "Resetting the pitch-analysis system. 1. Effects of rise times of tones in noise backgrounds or of harmonics in a complex tone", Percept. Psychophys. 56 (2), 155-162.
[4] Bregman A.S., Ahad P.A., Kim J. (1994) "Resetting the pitch-analysis system. 2. Role of sudden onsets and offsets in the perception of individual components in a cluster of overlapping tones", J. Acoust. Soc. Am. 96 (5), 2694-2703.
[5] Budelis J., Fishbach A., May B.J. (2002) "Behavioral assessments of comodulation masking release in cats", Abst. Assoc. for Res. in Otolaryngol. 25.
[6] Carlyon R.P., Buus S., Florentine M. (1989) "Comodulation masking release for three types of modulator as a function of modulation rate", Hear. Res. 42, 37-46.
[7] Darwin C.J. (1997) "Auditory grouping", Trends in Cog. Sci. 1(9), 327-333.
[8] Darwin C.J., Ciocca V. (1992) "Grouping in pitch perception: Effects of onset asynchrony and ear of presentation of a mistuned component", J. Acoust. Soc. Am. 91, 3381-3390.
[9] Drullman R., Festen H.M., Plomp R. (1994) "Effect of temporal envelope smearing on speech reception", J. Acoust. Soc. Am. 95 (2), 1053-1064.
[10] Eggermont J.J. (1994). "Temporal modulation transfer functions for AM and FM stimuli in cat auditory cortex. Effects of carrier type, modulating waveform and intensity", Hear. Res. 74, 51-66.
[11] Fishbach A., Nelken I., Yeshurun Y. (2001) "Auditory edge detection: a neural model for physiological and psychoacoustical responses to amplitude transients", J. Neurophysiol. 85, 2303-2323.
[12] Gerstner W. (1999) "Spiking neurons", in Pulsed Neural Networks, edited by W. Maass, C. M. Bishop, (MIT Press, Cambridge, MA).
[13] Hall J.W., Haggard M.P., Fernandes M.A. (1984) "Detection in noise by spectro-temporal pattern analysis", J. Acoust. Soc. Am. 76, 50-56.
[14] Heil P. (1997) "Auditory onset responses revisited. II. Response strength", J. Neurophysiol. 77, 2642-2660.
[15] Nelken I., Rotman Y., Bar-Yosef O. (1999) "Responses of auditory cortex neurons to structural features of natural sounds", Nature 397, 154-157.
[16] Phillips D.P. (1988). "Effect of Tone-Pulse Rise Time on Rate-Level Functions of Cat Auditory Cortex Neurons: Excitatory and Inhibitory Processes Shaping Responses to Tone Onset", J. Neurophysiol. 59, 1524-1539.
[17] Phillips D.P., Burkard R. (1999). "Response magnitude and timing of auditory response initiation in the inferior colliculus of the awake chinchilla", J. Acoust. Soc. Am. 105, 2731-2737.
[18] Phillips D.P., Semple M.N., Kitzes L.M. (1995). "Factors shaping the tone level sensitivity of single neurons in posterior field of cat auditory cortex", J. Neurophysiol. 73, 674-686.
[19] Rosen S. (1992) "Temporal information in speech: acoustic, auditory and linguistic aspects", Phil. Trans. R. Soc. Lond. B 336, 367-373.
[20] Shannon R.V., Zeng F.G., Kamath V., Wygonski J., Ekelid M. (1995) "Speech recognition with primarily temporal cues", Science 270, 303-304.
[21] Turner C.W., Relkin E.M., Doucet J. (1994). "Psychophysical and physiological forward masking studies: probe duration and rise-time effects", J. Acoust. Soc. Am. 96 (2), 795-800.
[22] Yost W.A., Sheft S. (1994) "Modulation detection interference - across-frequency processing and auditory grouping", Hear. Res. 79, 48-58.
[23] Zhang X., Heinz M.G., Bruce I.C., Carney L.H. (2001). "A phenomenological model for the responses of auditory-nerve fibers: I. Nonlinear tuning with compression and suppression", J. Acoust. Soc. Am. 109 (2), 648-670.
The 'Moving Targets' Training Algorithm
Richard Rohwer
Centre for Speech Technology Research
Edinburgh University
80, South Bridge
Edinburgh EH1 1HN SCOTLAND
ABSTRACT
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem
in discrete-time networks with arbitrary feedback. The algorithm
resembles back-propagation in that an error function is minimized
using a gradient-based method, but the optimization is carried out
in the hidden part of state space either instead of, or in addition to
weight space. Computational results are presented for some simple
dynamical training problems, one of which requires response to a
signal 100 time steps in the past.
1 INTRODUCTION
This paper presents a minimization-based algorithm for training the dynamical behavior of a discrete-time neural network model. The central idea is to treat hidden
nodes as target nodes with variable training data. These "moving targets" are
varied during the minimization process. Werbos (Werbos, 1983) used the term
"moving targets" to describe the qualitative idea that a network should set itself
intermediate objectives, and vary these objectives as information is accumulated on
their attainability and their usefulness for achieving overall objectives. The (coincidentally) like-named algorithm presented here can be regarded as a quantitative
realization of this qualitative idea.
The literature contains several temporal training algorithms based on minimization
of an error measure with respect to the weights. This type of method includes
the straightforward extension of the back-propagation method to back-propagation
through time (Rumelhart, 1986), the methods of Rohwer and Forrest (Rohwer,
1987), Pearlmutter (Pearlmutter, 1989), and the forward propagation of derivatives
(Robinson, 1988, Williams 1989a, Williams 1989b, Kuhn, 1990). A careful comparison of moving targets with back-propagation in time and teacher forcing appears in
(Rohwer, 1989b). Although applicable only to fixed-point training, the algorithms
of Almeida (Almeida, 1989) and Pineda (Pineda, 1988) have much in common with
these dynamical training algorithms. The formal relationship between these and
the method of Rohwer and Forrest is spelled out in (Rohwer 1989a).
2 NOTATION AND STATEMENT OF THE TRAINING PROBLEM
Consider a neural network model with arbitrary feedback as a dynamical system in
which the dynamical variables $x_{it}$ change with time according to a dynamical law
given by the mapping
$$x_{it} = \sum_j w_{ij} f(x_{j,t-1}), \qquad x_{0t} = \text{bias constant} \qquad (1)$$
unless specified otherwise. The weights $w_{ij}$ are arbitrary parameters representing
the connection strength from node $j$ to node $i$. $f$ is an arbitrary differentiable
function. Let us call any given variable $x_{it}$ the "activation" on node $i$ at time $t$. It
represents the total input into node $i$ at time $t$. Let the "output" of each node be
denoted by $y_{it} = f(x_{it})$. Let node 0 be a "bias node", assigned a positive constant
activation so that the weights $w_{i0}$ can be interpreted as activation thresholds.
In normal back-propagation, a network architecture is defined which divides the
network into input, hidden, and target nodes. The moving targets algorithm makes
itself applicable to arbitrary training problems by defining analogous concepts in a
manner dependent upon the training data, but independent of the network architecture. Let us call a node-time pair an "event"'. To define a training problem, the
set of all events must be divided into three disjoint sets, the input events I, target
events T, and hidden events H. A node may participate in different types of event
at different times. For every input event $(it) \in I$, we require training data $X_{it}$ with
which to overrule the dynamical law (1) using
$$x_{it} = X_{it}, \qquad (it) \in I. \qquad (2)$$
(The bias events $(0t)$ can be regarded as a special case of input events.) For each
target event $(it) \in T$, we require training data $X_{it}$ to specify a desired activation
value for event $(it)$. No notational ambiguity arises from referring to input and
target data with the same symbol $X$ because $I$ and $T$ are required to be disjoint
sets. The training data says nothing about the hidden events in $H$. There is no
restriction on how the initial events $(i0)$ are classified.
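As a concrete illustration, the dynamics (1) with the input overrule (2) can be simulated directly. This sketch is not from the paper; the function name and the choice of $f = \tanh$ are illustrative assumptions:

```python
import numpy as np

def run_network(W, X_input, input_mask, T, f=np.tanh, bias=1.0):
    """Iterate the dynamical law (1), clamping input events as in (2).

    W[i, j]       -- connection strength from node j to node i
    X_input[t, i] -- training data for input events
    input_mask    -- boolean array marking which (node, time) pairs are inputs
    Node 0 is the bias node, held at a constant activation.
    """
    n = W.shape[0]
    x = np.zeros((T, n))
    x[:, 0] = bias
    for t in range(1, T):
        x[t] = W @ f(x[t - 1])                            # dynamical law (1)
        x[t, 0] = bias                                    # bias node stays constant
        x[t, input_mask[t]] = X_input[t, input_mask[t]]   # overrule (2)
    return x

# Example: zero weights, node 1 clamped to 0.5 at every step.
W = np.zeros((3, 3))
mask = np.zeros((5, 3), dtype=bool)
mask[:, 1] = True
x = run_network(W, np.full((5, 3), 0.5), mask, 5)
```

With zero weights the bias node stays at 1, the clamped node follows its input data, and the free node stays at zero.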
3 THE "MOVING TARGETS" METHOD
Like back-propagation, the moving targets training method uses (arbitrary) gradient-based minimization techniques to minimize an "error" function such as the "output deficit"
$$E_{od} = \tfrac{1}{2} \sum_{(it) \in T} \{y_{it} - Y_{it}\}^2, \qquad (3)$$
where $y_{it} = f(x_{it})$ and $Y_{it} = f(X_{it})$. A modification of the output deficit error gave
the best results in numerical experiments. However, the most elegant formalism
follows from an "activation deficit" error function:
$$E_{ad} = \tfrac{1}{2} \sum_{(it) \in T} \{x_{it} - X_{it}\}^2, \qquad (4)$$
so this is what we shall use to present the formalism.
The basic idea is to treat the hidden node activations as variable target activations.
Therefore let us denote these variables as $X_{it}$, just as the (fixed) targets and inputs
are denoted. Let us write the computed activation values $x_{it}$ of the hidden and
target events in terms of the inputs and (fixed and moving) targets of the previous
time step. Then let us extend the sum in (4) to include the hidden events, so the
error becomes
$$E = \tfrac{1}{2} \sum_{(it) \in T \cup H} \Big\{ \sum_j w_{ij} f(X_{j,t-1}) - X_{it} \Big\}^2 \qquad (5)$$
This is a function of the weights $w_{ij}$, and because there are no $x$'s present, the full
dependence on $w_{ij}$ is explicitly displayed. We do not actually have desired values
for the $X_{it}$ with $(it) \in H$. But any values for which weights can be found which
make (5) vanish would be suitable, because this would imply not only that the
desired targets are attained, but also that the dynamical law is followed on both
the hidden and target nodes. Therefore let us regard E as a function of both the
weights and the "moving targets" X it , (it) E H. This is the essence of the method.
The derivatives with respect to all of the independent variables can be computed
and plugged into a standard minimization algorithm.
The reason for preferring the activation deficit form of the error (4) to the output
deficit form (3) is that the activation deficit form makes (5) purely quadratic in the
weights. Therefore the equations for the minimum,
$$\frac{\partial E}{\partial w_{ij}} = 0, \qquad (6)$$
form a linear system, the solution of which provides the optimal weights for any
given set of moving targets. Therefore these equations might as well be used to
define the weights as functions of the moving targets, thereby making the error (5)
a function of the moving targets alone.
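Because (5) is quadratic in the weights, the optimal weights for any fixed setting of the moving targets follow from an ordinary linear least-squares solve, which is what makes the error a function of the moving targets alone. The sketch below is an illustrative reconstruction; for simplicity it ignores input events (so one shared least-squares problem serves all nodes), and the function names and the use of `lstsq` as the pseudo-inverse are choices of this sketch, not the paper's:

```python
import numpy as np

def optimal_weights(X, f=np.tanh):
    """Solve the linear system (6) for the weights, given all activations
    X[t, i] (fixed and moving targets).  lstsq acts as a pseudo-inverse,
    handling singular correlation matrices."""
    Y = f(X[:-1])                        # outputs y_{j,t-1} feeding step t
    W, *_ = np.linalg.lstsq(Y, X[1:], rcond=None)
    return W.T                           # W[i, j]: from node j to node i

def moving_target_error(X, W, f=np.tanh):
    """The quadratic error (5) for given moving targets and weights."""
    resid = X[1:] - f(X[:-1]) @ W.T
    return 0.5 * np.sum(resid ** 2)

# Sanity check: targets generated by some network are fit with zero error.
rng = np.random.default_rng(0)
W_true = 0.5 * rng.normal(size=(4, 4))
X = np.zeros((20, 4))
X[0] = rng.normal(size=4)
for t in range(1, 20):
    X[t] = W_true @ np.tanh(X[t - 1])
W_opt = optimal_weights(X)
```

Since the trajectory here was generated exactly by some weight matrix, the least-squares solution drives the error (5) to numerical zero.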
The derivation of the derivatives with respect to the moving targets is spelled out
in (Rohwer, 1989b). The result is:
$$\frac{\partial E}{\partial X_{it}} = -\Theta^{T \cup H}_{it}\, e_{it} + f'_{it} \sum_k \Theta^{T \cup H}_{k,t+1}\, e_{k,t+1}\, \tilde{w}_{ki}, \qquad (7)$$
where
$$e_{it} = \sum_j w_{ij} f(X_{j,t-1}) - X_{it}, \qquad (8)$$
$$\Theta^{T \cup H}_{it} = \begin{cases} 1 & (it) \in T \cup H \\ 0 & \text{otherwise}, \end{cases} \qquad (9)$$
$$f'_{it} = \left. \frac{df(x)}{dx} \right|_{x = X_{it}}, \qquad (10)$$
and
$$\tilde{w}_{ij} = \sum_k \Big( \sum_t \Theta^{T \cup H}_{it}\, X_{it}\, y_{k,t-1} \Big) M(i)^{-1}_{kj}, \qquad (11)$$
where $M(a)^{-1}$ is the inverse of $M(a)$, the correlation matrix of the node outputs
defined by
$$M^{(a)}_{ij} = \sum_t \Theta^{T \cup H}_{at}\, y_{i,t-1}\, y_{j,t-1}. \qquad (12)$$
In the event that any of the matrices M are singular, a pseudo-inversion method
such as singular value decomposition (Press, 1988) can be used to define a unique
solution among the infinite number available.
Note also that (11) calls for a separate matrix inversion for each node. However if
the set of input nodes remains fixed for all time, then all these matrices are equal.
3.1 FEEDFORWARD VERSION
The basic ideas used in the moving targets algorithm can be applied to feedforward networks to provide an alternative method to back-propagation. The hidden
node activations for each training example become the moving target variables.
Further details appear in (Rohwer, 1989b). The moving targets method for feedforward nets is analogous to the method of Grossman, Meir, and Domany (Grossman,
1990a, 1990b) for networks with discrete node values. Birmiwal, Sarwal, and Sinha
(Birmiwal, 1989) have developed an algorithm for feedforward networks which incorporates the use of hidden node values as fundamental variables and a linear
system of equations for obtaining the weight matrix. Their algorithm differs from
the feedforward version of moving targets mainly in the (inessential) use of a specific
minimization algorithm which discards most of the gradient information except for
the signs of the various derivatives. Heileman, Georgiopoulos, and Brown (Heileman, 1989) also have an algorithm which bears some resemblance to the feedforward
version of moving targets. Another similar algorithm has been developed by Krogh,
Hertz, and Thorbergsson (Krogh, 1989, 1990).
4 COMPUTATIONAL RESULTS
A set of numerical experiments performed with the activation deficit form of the
algorithm (4) is reported in (Rohwer, 1989b). Some success was attained, but
greater progress was made after changing to a quartic output deficit error function
with temporal weighting of errors:
$$E_{quartic} = \tfrac{1}{4} \sum_{(it) \in T} (1.0 + at)\{y_{it} - Y_{it}\}^4. \qquad (13)$$
Here a is a small positive constant. The quartic function is dominated by the terms
with the greatest error. This combats a tendency to fail on a few infrequently seen
state transitions in order to gain unneeded accuracy on a large number of similar,
low-error state transitions. The temporal weighting encourages the algorithm to
focus first on late-time errors, and then work back in time. In some cases this
helped with local minimum difficulties. A difficulty with convergence to chaotic
attractors reported in (Rohwer, 1989b) appears to have mysteriously disappeared
with the adoption of this error measure.
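The two properties of this error measure (quartic emphasis on the largest errors, temporal weighting of late errors) can be sketched as follows; the prefactor and variable names are choices of this sketch:

```python
import numpy as np

def quartic_error(y, Y, a=0.01):
    """Temporally weighted quartic output deficit in the spirit of (13).
    y[t, i]: computed outputs at target events; Y[t, i]: desired outputs.
    The fourth power lets the largest errors dominate, and the (1 + a*t)
    factor weights late-time errors more heavily."""
    t = np.arange(y.shape[0])[:, None]
    return 0.25 * np.sum((1.0 + a * t) * (y - Y) ** 4)

# The same unit error costs more late in the sequence than early on.
zeros = np.zeros((10, 1))
late = zeros.copy();  late[9, 0] = 1.0
early = zeros.copy(); early[0, 0] = 1.0
```

A perfect fit costs nothing, and an error at the last time step costs more than the same error at the first.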
4.1 MINIMIZATION ALGORITHM
Further progress was made by altering the minimization algorithm. Originally the
conjugate gradient algorithm (Press, 1988) was used, with a linesearch algorithm
from Fletcher (Fletcher, 1980). The new algorithm might be called "curvature
avoidance" . The change in the gradient with each linesearch is used to update
a moving average estimate of the absolute value of the diagonal components of
the Hessian. The linesearch direction is taken to be the component-by-component
quotient of the gradient with these curvature averages. Were it not for the absolute
values, this would be an unusual way of estimating the conjugate gradient. The
absolute values are used to discourage exploration of directions which show any
hint of being highly curved. The philosophy is that by exploring low-curvature
directions first, narrow canyons are entered only when necessary.
4.2 SIMULATIONS
Several simulations have been done using fully connected networks. Figure 1 plots
the node outputs of a network trained to switch between different limit cycles under
input control. There are two input nodes, one target node, and 2 hidden nodes,
as indicated in the left margin. Time proceeds from left to right. The oscillation
period of the target node increases with the binary number represented by the two
input nodes. The network was trained on one period of each of the four frequencies.
Figure 1: Controlled switching between limit cycles
Figure 2 shows the operation of a network trained to detect whether an even or odd
number of pulses have been presented to the input; a temporal version of parity
detection. The network was trained on the data preceding the third input pulse.
Figure 2: Parity detection
Figure 3 shows the behavior of a network trained to respond to the second of
two input pulses separated by 100 time steps. This demonstrates a unique (in
the author's knowledge) capability of this method, an ability to utilize very distant
temporal correlations when there is no other way to solve the problem. This network
was trained and tested on the same data, the point being merely to show that
training is possible in this type of problem. More complex problems of this type
frequently get stuck in local minima.
Figure 3: Responding to temporally distant input
5 CONCLUDING REMARKS
The simulations show that this method works, and show in particular that distant
temporal correlations can be discovered. Some practical difficulties have emerged,
however, which are currently limiting the application of this technique to 'toy'
problems. The most serious are local minima and long training times. Problems
involving large amounts of training data may present the minimization problem
with an impractically large number of variables. Variations of the algorithm are
being studied in hopes of overcoming these difficulties.
Acknowledgements
This work was supported by ESPRIT Basic Research Action 3207 ACTS.
References
L. Almeida, (1989), "Backpropagation in Non-Feedforward Networks", in Neural
Computing Architectures, I. Aleksander, ed., North Oxford Academic.
K. Birmiwal, P. Sarwal, and S. Sinha, (1989), "A new Gradient-Free Learning
Algorithm", Tech. report, Dept. of EE, Southern Illinois U., Carbondale.
R. Fletcher, (1980), Practical Methods of Optimization, v1, Wiley.
T. Grossman, (1990a), "The CHIR Algorithm: A Generalization for Multiple Output and Multilayered Networks" , to appear in Complex Systems.
T. Grossman, (1990b), this volume.
G. L. Heileman, M. Georgiopoulos, and A. K. Brown, (1989), "The Minimal Disturbance Back Propagation Algorithm", Tech. report, Dept. of EE, U. of Central
Florida, Orlando.
A. Krogh, J. A. Hertz, and G.1. Thorbergsson, (1989), "A Cost Function for Internal
Representations", NORDITA preprint 89/37 S.
A. Krogh, J. A. Hertz, and G. I. Thorbergsson, (1990), this volume.
G. Kuhn, (1990) "Connected Recognition with a Recurrent Network", to appear in
Proc. NEUROSPEECH, 18 May 1989, as special issue of Speech Communication,
9, no. 2.
B. Pearlmutter, (1989), "Learning State Space Trajectories in Recurrent Neural
Networks", Proc. IEEE IJCNN 89, Washington D. C., II-365.
F. Pineda, (1988), "Dynamics and Architecture for Neural Computation", J. Complexity 4, 216.
W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, (1988), Numerical Recipes in C, The Art of Scientific Computing, Cambridge.
A. J. Robinson and F. Fallside, (1988), "Static and Dynamic Error Propagation
Networks with Applications to Speech Coding", Neural Information Processing Systems, D. Z. Anderson, Ed., AlP, New York.
R. Rohwer and B. Forrest, (1987), "Training Time Dependence in Neural Networks"
Proc. IEEE ICNN, San Diego, II-701.
R. Rohwer and S. Renals, (1989a), "Training Recurrent Networks", in Neural Networks from Models to Applications, L. Personnaz and G. Dreyfus, eds., I.D.S.E.T.,
Paris, 207.
R. Rohwer, (1989b), "The 'Moving Targets' Training Algorithm", to appear in Proc.
DANIP, G MD Bonn, J. Kinderman and A. Linden, Eds.
D. Rumelhart, G. Hinton and R. Williams, (1986), "Learning Internal Representations by Error Propagation" in Parallel Distributed Processing, v. 1, MIT.
P. Werbos, (1983) Energy Models and Studies, B. Lev, Ed., North Holland.
R. Williams and D. Zipser, (1989a), "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks" , Neural Computation 1, 270.
R. Williams and D. Zipser, (1989b), "Experimental Analysis of the Real-time Recurrent Learning Algorithm", Connection Science 1, 87.
Learning in Spiking Neural Assemblies
David Barber
Institute for Adaptive and Neural Computation
Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, U.K.
[email protected]
Abstract
We consider a statistical framework for learning in a class of networks of spiking neurons. Our aim is to show how optimal local
learning rules can be readily derived once the neural dynamics and
desired functionality of the neural assembly have been specified,
in contrast to other models which assume (sub-optimal) learning
rules. Within this framework we derive local rules for learning temporal sequences in a model of spiking neurons and demonstrate its
superior performance to correlation (Hebbian) based approaches.
We further show how to include mechanisms such as synaptic depression and outline how the framework is readily extensible to
learning in networks of highly complex spiking neurons. A stochastic quantal vesicle release mechanism is considered and implications
on the complexity of learning discussed.
1 Introduction
Models of individual neurons range from simple rate based approaches to spiking models and further detailed descriptions of protein dynamics within the
cell[9, 10, 13, 6, 12]. As the experimental search for the neural correlates of memory increasingly considers multi-cell observations, theoretical models of distributed memory become more relevant[12]. Despite the increasing complexity of neural descriptions, many theoretical models of learning are based on correlation-based Hebbian assumptions, that is, changes in synaptic efficacy are related to correlations of pre- and post-synaptic firing[9, 10, 14]. Whilst such learning rules have some theoretical
justification in toy neural models, they are not necessarily optimal in more complex cases in which the dynamics of the cell contains historical information, such
as modelled by synaptic facilitation and depression, for example[1]. It is our belief
that appropriate synaptic learning rules should appear as a natural consequence
of the neurodynamical system and some desired functionality ? such as storage of
temporal sequences.
It seems clear that, as the brain operates dynamically through time, relevant cognitive processes are plausibly represented in vivo as temporal sequences of spikes in
restricted neural assemblies. This paradigm has heralded a new research front in dynamic systems of spiking neurons[10]. However, to date, many learning algorithms
assume Hebbian learning, and assess its performance in a given model[8, 6, 14].
[Figure 1: (a) a chain of deterministic hidden states h(1), h(2), ..., h(t) coupled to visible states v(1), v(2), ..., v(t); (b) two neurons i and j with highly complex deterministic internal dynamics communicating via stochastic firing along the axon.]
Figure 1: (a) A first order Dynamic Bayesian Network with deterministic hidden states (represented by diamonds). (b) The basic simplification for neural firing.
Recent work[13] has taken into account some of the complexities in the synaptic dynamics, including facilitation and depression, and derived appropriate learning rules.
However, these are rate based models, and do not capture the detailed stochastic
firing effects of individual neurons. Other recent work [4] has used experimental observations to modify Hebbian learning rules to make heuristic rules consistent with
empirical observations[11]. However, as more and more detail of cellular processes
are experimentally discovered, it would be satisfying to see learning mechanisms as
naturally derivable consequences of the underlying cellular constraints. This paper
is a modest step in this direction, in which we outline a framework for learning
in spiking systems which can handle highly complex cellular processes. The major
simplifying assumption is that internal cellular processes are deterministic, whilst
communication between cells can be stochastic. The central aim of this paper is
to show that optimal learning algorithms are derivable consequences of statistical
learning criteria. Quantitative agreement with empirical data would require further
realistic constraints on the model parameters but is not a principled hindrance to
our framework.
2 A Framework for Learning
A neural assembly of V neurons is represented by a vector v(t) whose components
v_i(t), i = 1, ..., V represent the state of neuron i at time t. Throughout we assume
that v_i(t) ∈ {0, 1}, for which v_i(t) = 1 means that neuron i spikes at time t, and
vi (t) = 0 denotes no spike. The shape of an action potential is assumed therefore
not to carry any information. This constraint of a binary state firing representation
could be readily relaxed without great inconvenience to multiple or even continuous
states.
Our stated goal is to derive optimal learning rules for an assumed desired functionality and a given neural dynamics. To make this more concrete, we assume that the task is sequence learning (although generalisations to other forms of learning, including input-output type dynamics, are readily achievable[2]). We
make the important assumption that the neural assembly has a sequence of states
V = {v(1), v(2), . . . , v(t = T )} that it wishes to store (although how such internal
representations are known is in itself a fundamental issue that needs to be ultimately addressed). In addition to the neural firing states, V, we assume that there
are hidden/latent variables which influence the dynamics of the assembly, but which
cannot be directly observed. These might include protein levels within a cell, for
example. These variables may also represent environmental conditions external to
the cell and common to groups of cells. We represent a sequence of hidden variables
by H = {h(1), h(2), . . . , h(T )}.
The general form of our model is depicted in fig(1)[a] and comprises two components
1. Neural Conditional Independence :
p(v(t + 1)|v(t), h(t)) =
V
Y
p(vi (t + 1)|v(t), h(t), ? v )
(1)
i=1
This distribution specifies that all the information determining the probability that neuron i fires at time t + 1 is contained in the immediate past
firing of the neural assembly at time v(t) and the hidden states h(t). The
distribution is parameterised by ? v , which can be learned from a training
sequence (see below). Here time simply discretises the dynamics. In principle, a unit of time in our model may represent a fraction of millisecond.
2. Deterministic Hidden Variable Updating :
h(t+1) = f(v(t+1), v(t), h(t), θ_h)    (2)

This equation specifies that the next hidden state of the assembly h(t+1) depends on a vector function f of the states v(t+1), v(t), h(t). The function f is parameterised by θ_h, which is to be learned.
This model is a special case of Dynamic Bayesian networks, in which the hidden
variables are deterministic functions of their parental states and is treated in more
generality in [2]. The model assumptions are depicted in fig(1)[b] in which potentially complex deterministic interactions within a neuron can be considered, with
lossy transmission of this information between neurons in the form of stochastic firing. Whilst the restriction to deterministic hidden dynamics appears severe, it has
the critical advantage that learning in such models can be achieved by deterministic
forward propagation through time. This is not the case in more general Dynamic
Bayesian networks, where an integral part of the learning procedure involves, in principle, both forward and backward temporal passes (non-causal learning), and also
imposes severe restrictions on the complexity of the hidden unit dynamics due to
computational difficulties[7, 2]. A central ingredient of our approach is that it deals
with individual spike events, and not just spiking-rates as used in other studies[13].
The key mechanism for learning in statistical models is maximising the log-likelihood L(θ_v, θ_h | V) of a sequence V,

L(θ_v, θ_h | V) = log p(v(1)|θ_v) + Σ_{t=1}^{T−1} log p(v(t+1)|v(t), h(t), θ_v)    (3)
where the hidden unit values are calculated recursively using (2). Training multiple sequences V^μ, μ = 1, ..., P is straightforward using the log-likelihood Σ_μ L(θ_v, θ_h | V^μ). To maximise the log-likelihood, it is useful to evaluate the derivatives with respect to the model parameters. These can be calculated as follows:
dL/dθ_v = ∂ log p(v(1)|θ_v)/∂θ_v + Σ_{t=1}^{T−1} ∂/∂θ_v log p(v(t+1)|v(t), h(t), θ_v)    (4)

dL/dθ_h = Σ_{t=1}^{T−1} [∂/∂h(t) log p(v(t+1)|v(t), h(t), θ_v)] dh(t)/dθ_h    (5)

dh(t)/dθ_h = ∂f(t)/∂θ_h + [∂f(t)/∂h(t−1)] dh(t−1)/dθ_h    (6)

where f(t) ≡ f(v(t), v(t−1), h(t−1), θ_h). Hence:
1. Learning can be carried out by forward propagation through time. In a biological system it is natural to use gradient ascent training θ ← θ + η dL/dθ, where the learning rate η is chosen small enough to ensure convergence to a local optimum of the likelihood. This batch training procedure is readily
convertible to an online form if needed.
2. Highly complex functions f and tables p(v(t + 1)|v(t), h(t)) may be used.
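As a concrete illustration of point 1, the forward accumulation of equations (5) and (6) can be sketched for a toy scalar model. The leaky-trace hidden dynamics, the function names and all parameter values below are our own illustrative choices, not the paper's:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_likelihood(V, theta, h0=0.0):
    """L = sum_t log p(v(t+1)|h(t)) for a toy model with one scalar hidden
    trace h(t+1) = theta*h(t) + v(t+1) (an instance of eq. (2)) and firing
    probability p(v=1|h) = sigmoid(h)."""
    h, L = h0, 0.0
    for t in range(len(V) - 1):
        p = sigmoid(h)
        L += math.log(p) if V[t + 1] == 1 else math.log(1.0 - p)
        h = theta * h + V[t + 1]
    return L

def grad_forward(V, theta, h0=0.0):
    """dL/dtheta accumulated purely forward in time, as in eqs. (5)-(6)."""
    h, dh, g = h0, 0.0, 0.0          # dh = dh(t)/dtheta, zero at t = 1
    for t in range(len(V) - 1):
        # eq. (5): d log p(v|h)/dh = v - sigmoid(h) for this model
        g += (V[t + 1] - sigmoid(h)) * dh
        # eq. (6): dh(t+1)/dtheta = df/dtheta + (df/dh) * dh(t)/dtheta
        dh = h + theta * dh
        h = theta * h + V[t + 1]
    return g
```

Checking `grad_forward` against a finite difference of `log_likelihood` confirms that no backward pass through time is needed.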
In the remaining sections, we apply this framework to some simple models and show
how optimal learning rules can be derived for old and new theoretical models.
2.1 Stochastically Spiking Neurons
We assume that neuron i fires depending on the membrane potential a_i(t) through p(v_i(t+1) = 1|v(t), h(t)) = p(v_i(t+1) = 1|a_i(t)). (More complex dependencies on environmental variables are also clearly possible). To be specific, we take throughout p(v_i(t+1) = 1|a_i(t)) = σ(a_i(t)), where σ(x) = 1/(1 + e^{−x}). The probability of the quiescent state is 1 minus this probability, and we can conveniently write

p(v_i(t+1)|a_i(t)) = σ((2v_i(t+1) − 1) a_i(t))    (7)

which follows from 1 − σ(x) = σ(−x). The choice of the sigmoid function σ(x) is not fundamental and is simply analytically convenient. The log-likelihood of a sequence of visible states V is

L = Σ_{t=1}^{T−1} Σ_{i=1}^{V} log σ((2v_i(t+1) − 1) a_i(t))    (8)
and the (online) gradient of the log-likelihood is then

dL(t+1)/dw_ij = (v_i(t+1) − σ(a_i(t))) da_i(t)/dw_ij    (9)

where we used the fact that v_i ∈ {0, 1}. The batch gradient is simply given by summing the above online gradient over time. Here the w_ij are parameters of the membrane potential (see below). We take (9) as common to the remainder, in which we model the membrane potential a_i(t) with increasing complexity.
2.2 A simple model of the membrane potential
Perhaps the simplest membrane potential model is the Hopfield potential

a_i(t) ≡ Σ_{j=1}^{V} w_ij v_j(t) − b_i    (10)

where w_ij characterizes the synaptic efficacy from neuron j (pre-synaptic) to neuron i (post-synaptic), and b_i is a threshold. The model is depicted in fig(2)[a]. Applying
[Figure 2: chains of states x_i(t), a_i(t) and v(t); panels (a) Hopfield Graph and (b) Hopfield with Dynamic Synapses.]
Figure 2: (a) The graph for a simple Hopfield membrane potential, shown only for a single membrane potential. The potential is a deterministic function of the network state, and (the collection of) membrane potentials influences the next state of the network. (b) Dynamic synapses correspond to hidden variables which influence the membrane potential and update themselves, depending on the firing of the network. Only one membrane potential and one synaptic factor is shown.
our framework to this model to learn a temporal sequence V by adjustment of the parameters w_ij (the b_i are fixed for simplicity), we obtain the (batch) learning rule

w_ij^new = w_ij + η dL/dw_ij,   dL/dw_ij = Σ_{t=1}^{T−1} (v_i(t+1) − σ(a_i(t))) v_j(t),    (11)
where the learning rate η is chosen empirically to be sufficiently small to ensure convergence. Note that in the above rule v_i(t+1) refers to the desired known training pattern, and σ(a_i(t)) can be interpreted as the average instantaneous firing rate of neuron i at time t+1 when its inputs are clamped to the known desired values of the network at time t. This is a form of Delta Rule (or Rescorla-Wagner) learning[12]. The above learning rule can be seen as a modification of the standard Hebb learning rule w_ij = Σ_{t=1}^{T−1} v_i(t+1) v_j(t). However, the rule (11) can store a sequence of V linearly independent patterns, much greater than the 0.26V capacity of the Hebb rule[5]. Biologically, the rule (11) could be implemented by measuring the difference between the desired training state v_i(t+1) of neuron i, and the instantaneous firing rate of neuron i when all other neurons j ≠ i are clamped in the training states v_j(t). Simulations with this model and comparison with other training approaches are given in [3].
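To make rule (11) concrete, the following sketch trains the Hopfield-potential model on a short hand-picked binary sequence by batch gradient ascent. The pattern values, learning rate and iteration count are our own illustrative choices, with the thresholds b_i fixed at zero:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def potentials(W, v):
    """Hopfield potentials a_i = sum_j W[i][j] * v_j (eq. (10), b_i = 0)."""
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def log_likelihood(W, seq):
    """Eq. (8) for a training sequence seq of binary state vectors."""
    L = 0.0
    for t in range(len(seq) - 1):
        for i, a in enumerate(potentials(W, seq[t])):
            p = sigmoid(a)
            L += math.log(p) if seq[t + 1][i] == 1 else math.log(1.0 - p)
    return L

def train(seq, n, eta=0.1, epochs=500):
    """Batch gradient ascent with the delta rule of eq. (11)."""
    W = [[0.0] * n for _ in range(n)]
    for _ in range(epochs):
        grad = [[0.0] * n for _ in range(n)]
        for t in range(len(seq) - 1):
            a = potentials(W, seq[t])
            for i in range(n):
                err = seq[t + 1][i] - sigmoid(a[i])   # v_i(t+1) - sigma(a_i(t))
                for j in range(n):
                    grad[i][j] += err * seq[t][j]
        for i in range(n):
            for j in range(n):
                W[i][j] += eta * grad[i][j]
    return W

# A short linearly separable 4-neuron training sequence
seq = [[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 1], [0, 0, 1, 1]]
W = train(seq, n=4)
```

Recall can then be tested by clamping the first state and iterating v_i ← [a_i > 0]; since the log-likelihood is concave in W, a sufficiently small learning rate guarantees a monotone increase of the likelihood during training.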
3 Dynamic Synapses
In more realistic synaptic models, neurotransmitter generation depends on a finite rate of cell subcomponent production, and the quantity of vesicles released is affected by the history of firing[1]. The depression mechanism affects the impact of spiking on the membrane potential response by moderating terms in the membrane potential a_i(t) of the form Σ_j w_ij v_j(t) to Σ_j w_ij x_j(t) v_j(t), for depression factors x_j(t) ∈ [0, 1]. A simple dynamics for these depression factors is[15, 14]

x_j(t+1) = x_j(t) + Δt ((1 − x_j(t))/τ − U x_j(t) v_j(t))    (12)
[Figure 3: four panels, Original, Reconstruction, x values, and Hebb Reconstruction, each plotting neuron number (1-50) against time t (20 steps).]
Figure 3: Learning with depression: U = 0.5, τ = 5, Δt = 1, η = 0.25.
where Δt, τ, and U represent time scales, recovery times and spiking effect parameters respectively. Note that these depression factor dynamics are exactly of the
form of hidden variables that are not observed, consistent with our framework in
section (2), see fig(2)[b]. Whilst some previous models have considered learning
rules for dynamic synapses using spiking-rate models [13, 15] we consider learning
in a stochastic spiking model. Also, in contrast to a previous study which assumes
that the synaptic dynamics modulates baseline Hebbian weights[14], we show below
that it is straightforward to include dynamic synapses in a principled way using our
learning framework. Since the depression dynamics in this model do not explicitly
depend on wij , the gradients are simple to calculate. Note that synaptic facilitation
is also straightforward to include in principle[15].
For the Hopfield potential, the learning dynamics is simply given by equations (9,12), with da_i(t)/dw_ij = x_j(t) v_j(t). In fig(3) we demonstrate learning a random temporal sequence of 20 time steps for an assembly of 50 neurons. After learning the w_ij with our rule, we initialised the trained network in the first state of the training sequence. The remaining states of the sequence were then correctly recalled by iteration of the learned model. The corresponding generated factors x_i(t) are also plotted. For comparison, we plot the results of using the dynamics having set the w_ij
using a temporal Hebb rule. The poor performance of the correlation based Hebb
rule demonstrates the necessity, in general, to couple a dynamical system with an
appropriate learning mechanism which, in this case at least, is readily available.
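A direct simulation of the depression dynamics (12) for one synapse behaves as expected: each pre-synaptic spike depletes x, which then relaxes back toward 1. The parameter values match those quoted for Figure 3; the function and variable names are our own:

```python
def depression_trace(spikes, dt=1.0, tau=5.0, U=0.5, x0=1.0):
    """Iterate eq. (12): x <- x + dt*((1 - x)/tau - U*x*v) for a spike train v."""
    x, trace = x0, [x0]
    for v in spikes:
        x = x + dt * ((1.0 - x) / tau - U * x * v)
        trace.append(x)
    return trace

# Three spikes followed by silence: depletion, then recovery toward 1
trace = depression_trace([1, 1, 1, 0, 0, 0, 0, 0])
```

With these parameters the factor stays inside [0, 1], as the text requires.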
4 Leaky Integrate and Fire models
Leaky integrate and fire models move a step towards biological realism, in which the membrane potential increments if it receives an excitatory stimulus (w_ij > 0), and decrements if it receives an inhibitory stimulus (w_ij < 0). A model that incorporates such effects is

a_i(t) = (λ a_i(t−1) + Σ_j w_ij v_j(t) + θ^rest (1 − λ)) (1 − v_i(t−1)) + v_i(t−1) θ^fired    (13)

Since v_i ∈ {0, 1}, if neuron i fires at time t−1 the potential is reset to θ^fired at time t. Similarly, with no synaptic input, the potential equilibrates to θ^rest with time constant −1/log λ. Here λ ∈ [0, 1] represents the membrane leakage characteristic of this class of models.
[Figure 4: chains of membrane potentials a(t), release variables r(t) and firing states v(t).]
Figure 4: Stochastic vesicle release (synaptic dynamic factors not indicated).
Despite the apparent increase in complexity of the membrane potential over the
simple Hopfield case, deriving appropriate learning dynamics for this new system
is straightforward since, as before, the hidden variables (here the membrane potentials) update in a deterministic fashion. The membrane derivatives are
da_i(t)/dw_ij = (1 − v_i(t−1)) (λ da_i(t−1)/dw_ij + v_j(t))    (14)

By initialising the derivative da_i(t=1)/dw_ij = 0, equations (9,13,14) define a first order recursion for the gradient, which can be used to adapt w_ij in the usual manner w_ij ← w_ij + η dL/dw_ij. We could also apply synaptic dynamics to this case by replacing the term v_j(t) in (14) by x_j(t) v_j(t).
A direct consequence of the above learning rule (explored in detail elsewhere) is a
spike time dependent learning window in qualitative agreement with experimental
results[11], a pleasing corollary of our approach, and is consistent with our belief
that such observed plasticity has at its core a simple learning rule.
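Because the training sequence clamps the firing states, the membrane potential (13) is a deterministic (indeed linear) function of the weights, so the recursion (14) can be checked directly against a perturbation of w. The single-synapse reduction below, with illustrative parameter values of our own choosing, is a sketch of this check:

```python
def membrane_with_grad(v_post, v_pre, w, lam=0.9, th_rest=0.0, th_fired=-1.0):
    """Return the membrane trajectory a_i(t) of eq. (13) for one synapse,
    together with da_i(t)/dw from the forward recursion of eq. (14)."""
    a, da = th_rest, 0.0
    traj = []
    for t in range(1, len(v_pre)):
        reset = v_post[t - 1]
        # eq. (14): the derivative is also reset to zero when the neuron fired
        da = (1 - reset) * (lam * da + v_pre[t])
        # eq. (13), reduced to a single pre-synaptic input
        a = (1 - reset) * (lam * a + w * v_pre[t] + th_rest * (1 - lam)) \
            + reset * th_fired
        traj.append((a, da))
    return traj
```

Since a_i(t) is linear in w here, the finite difference (a(w+ε) − a(w))/ε agrees with the recursion up to floating-point rounding.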
5 A Stochastic Vesicle Release Model
Neurotransmitter release can be highly stochastic and it would be desirable to include this mechanism in our models. A simple model of quantal release of transmitter from pre-synaptic neuron j to post-synaptic neuron i is to release a vesicle with probability

p(r_ij(t) = 1 | x_ij(t), v_j(t)) = x_ij(t) v_j(t) R_ij    (15)

where, in analogy with (12),

x_ij(t+1) = x_ij(t) + Δt ((1 − x_ij(t))/τ − U x_ij(t) r_ij(t))    (16)

and R_ij ∈ [0, 1] is a plastic release parameter. The membrane potential is then governed in integrate and fire models by

a_i(t) = (λ a_i(t−1) + Σ_j w_ij r_ij(t) + θ^rest (1 − λ)) (1 − v_i(t−1)) + v_i(t−1) θ^fired    (17)
This model is schematically depicted in fig(4). Since the unobserved stochastic
release variables rij (t) are hidden, this model does not have fully deterministic
hidden dynamics. In general, learning in such models is more complex and would
require both forward and backward temporal propagations including, undoubtedly, graphical model approximation techniques[7].
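Sampling from the release model (15)-(16) is straightforward even though learning is not. The following sketch, with our own naming and a standard-library RNG, draws vesicle release events for one synapse and illustrates the structural constraints of the model:

```python
import random

def simulate_release(v_pre, R=0.8, dt=1.0, tau=5.0, U=0.5, seed=0):
    """Sample r(t) ~ Bernoulli(x(t)*v(t)*R) per eq. (15), updating the
    depression factor x with eq. (16), driven by the sampled releases."""
    rng = random.Random(seed)
    x, xs, rs = 1.0, [], []
    for v in v_pre:
        p = x * v * R                      # release probability, eq. (15)
        r = 1 if rng.random() < p else 0
        xs.append(x)
        rs.append(r)
        x = x + dt * ((1.0 - x) / tau - U * x * r)   # eq. (16)
    return xs, rs
```

Note that release can only occur on a pre-synaptic spike, and the depression factor remains in [0, 1] for these parameter values.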
6 Discussion
Leaving aside the issue of stochastic vesicle release, a further step in the evolution of membrane complexity is to use Hodgkin-Huxley type dynamics[9]. Whilst
this might appear complex, in principle, this is straightforward since the membrane
dynamics can be represented by deterministic hidden dynamics. Explicitly summing out the hidden variables would then give a representation of Hodgkin-Huxley
dynamics analogous to that of the Spike Response Model (see Gerstner in [10]).
Deriving optimal learning in assemblies of stochastic spiking neurons can be
achieved using maximum likelihood. This is straightforward in cases for which
the latent dynamics is deterministic. It is worth emphasising, therefore, that almost arbitrarily complex spatio-temporal patterns may potentially be learned, and generated under cued retrieval, for very complex neural dynamics. Whilst
this framework cannot deal with arbitrarily complex stochastic interactions, it can
deal with learning in a class of interesting neural models, and concepts from graphical models can be useful in this area. A more general stochastic framework would
need to examine approximate causal learning rules which, despite not being fully
optimal, may perform well. Finally, our assumption that the brain operates optimally (albeit within severe constraints) enables us to drop other assumptions about
unobserved processes, and leads to models with potentially more predictive power.
References
[1] L.F. Abbott, J.A. Varela, K. Sen, and S.B. Nelson, Synaptic depression and cortical gain control, Science 275 (1997), 220–223.
[2] D. Barber, Dynamic Bayesian Networks with Deterministic Latent Tables, Neural Information Processing Systems (2003).
[3] D. Barber and F. Agakov, Correlated sequence learning in a network of spiking neurons using maximum likelihood, Tech. Report EDI-INF-RR-0149, School of Informatics, 5 Forrest Hill, Edinburgh, UK, 2002.
[4] C. Chrisodoulou, G. Bugmann, and T.G. Clarkson, A Spiking Neuron Model: Applications and Learning, Neural Networks 15 (2002), 891–908.
[5] A. Düring, A.C.C. Coolen, and D. Sherrington, Phase diagram and storage capacity of sequence processing neural networks, Journal of Physics A 31 (1998), 8607–8621.
[6] W. Gerstner, R. Ritz, and J.L. van Hemmen, Why Spikes? Hebbian Learning and retrieval of time-resolved excitation patterns, Biological Cybernetics 69 (1993), 503–515.
[7] M. I. Jordan, Learning in Graphical Models, MIT Press, 1998.
[8] R. Kempter, W. Gerstner, and J.L. van Hemmen, Hebbian learning and spiking neurons, Physical Review E 59 (1999), 4498–4514.
[9] C. Koch, Biophysics of Computation, Oxford University Press, 1998.
[10] W. Maass and C. Bishop, Pulsed Neural Networks, MIT Press, 2001.
[11] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann, Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs, Science 275 (1997), 213–215.
[12] S.J. Martin, P.D. Grimwood, and R.G.M. Morris, Synaptic Plasticity and Memory: An Evaluation of the Hypothesis, Annual Reviews Neuroscience 23 (2000), 649–711.
[13] T. Natschläger, W. Maass, and A. Zador, Efficient Temporal Processing with Biologically Realistic Dynamic Synapses, Tech Report (2002).
[14] L. Pantic, J.T. Joaquin, H.J. Kappen, and S.C.A.M. Gielen, Associative Memory with Dynamic Synapses, Neural Computation 14 (2002), 2903–2923.
[15] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Networks with Dynamic Synapses, Neural Computation 10 (1998), 821–835.
Neuromorphic Bistable VLSI Synapses with
Spike-Timing-Dependent Plasticity
Giacomo Indiveri
Institute of Neuroinformatics
University/ETH Zurich
CH-8057 Zurich, Switzerland
[email protected]
Abstract
We present analog neuromorphic circuits for implementing bistable synapses with spike-timing-dependent plasticity (STDP) properties. In these
types of synapses, the short-term dynamics of the synaptic efficacies are
governed by the relative timing of the pre- and post-synaptic spikes,
while on long time scales the efficacies tend asymptotically to either a
potentiated state or to a depressed one. We fabricated a prototype VLSI
chip containing a network of integrate and fire neurons interconnected
via bistable STDP synapses. Test results from this chip demonstrate the
synapse's STDP learning properties, and its long-term bistable characteristics.
1 Introduction
Most artificial neural network algorithms based on Hebbian learning use correlations of
mean rate signals to increase the synaptic efficacies between connected neurons. To prevent uncontrolled growth of synaptic efficacies, these algorithms usually also incorporate weight normalization constraints that are often not biophysically realistic. Recently an
alternative class of competitive Hebbian learning algorithms has been proposed based on a
spike-timing-dependent plasticity (STDP) mechanism [1]. It has been argued that the STDP
mechanism can automatically, and in a biologically plausible way, balance the strengths of
synaptic efficacies, thus preserving the benefits of both weight normalization and correlation-based learning rules [16]. In STDP the precise timing of spikes generated by the neurons plays an important role. If a pre-synaptic spike arrives at the synaptic terminal before a post-synaptic spike is emitted, within a critical time window, the synaptic efficacy
is increased. Conversely if the post-synaptic spike is emitted soon before the pre-synaptic
one arrives, the synaptic efficacy is decreased.
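In software, this timing dependence is commonly modelled with an exponential learning window. The exponential form and all parameter values below are a standard idealization from the STDP literature, not measurements of this chip:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Weight update for one pre/post spike pair, delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    if delta_t < 0:
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0
```

The sign of the update follows the timing rule described above, and its magnitude falls off for spike pairs outside the critical window.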
While mean rate Hebbian learning algorithms are difficult to implement using analog circuits, spike-based learning rules map directly onto VLSI [4, 6, 7]. In this paper we present
compact analog circuits that, combined with neuromorphic integrate and fire (I&F) neurons
and synaptic circuits with realistic dynamics [8, 12, 11] implement STDP learning for short
time scales and asymptotically tend to one of two possible states on long time scales. The
circuits required to implement STDP, are described in Section 2. The circuits that implement bistability are described in Section 3. The network of I&F neurons used to measure
the properties of the bistable STDP synapse is described in Section 4.
Long term storage of synaptic efficacies
The circuits that drive the synaptic efficacy to one of two possible states on long time scales,
were implemented in order to cope with the problem of long term storage of analog values
in CMOS technology. Conventional VLSI capacitors, the devices typically used as memory
elements, are not ideal, in that they slowly loose the charge they are supposed to store, due
to leakage currents. Several solutions have been proposed for long term storage of synaptic
efficacies in analog VLSI neural networks. One of the first suggestions was to use the same
method used for dynamic RAM: to periodically refresh the stored value. This involves
though discretization of the analog value to N discrete levels, a method for comparing the
measured voltage to the N levels, and a clocked circuit to periodically refresh the value
on the capacitor. An alternative solution is to use analog-to-digital (ADC) converters, an
off chip RAM and digital-to-analog converters (DAC), but this approach requires, next
to a discretization of the value to N states, bulky ADC and DAC circuits. A more recent
suggestion is the one of using floating gate devices [5]. These devices can store very precise
analog values for an indefinite amount of time using standard CMOS technology [13], but
for spike-based learning rules they would require a control circuit (and thus large area) per
synapse. To implement dense arrays of neurons with large numbers of dendritic inputs the
synaptic circuits should be as compact as possible.
Bistable synapses
An alternative approach that uses a very small amount of area per synapse is to use bistable
synapses. These types of synapses contain minimum feature-size circuits that locally compare the value of the synaptic efficacy stored on the capacitor with a fixed threshold voltage
and slowly drive that value either toward a high analog voltage or toward a low one, depending on the output of the comparator (see Section 3).
The assumption that on long time scales the synaptic efficacy can only assume two values
is not too severe, for networks of neurons with large numbers of synapses. It has been
argued that biological synapses may indeed be discrete on long time scales. These
assumptions are compatible with experimental data [3] and are supported by experimental
evidence [15]. Also from a theoretical perspective it has been shown that the performance
of associative networks is not necessarily degraded if the dynamic range of the synaptic
efficacy is reduced even to the extreme (two stable states), provided that the transitions
between stable states are stochastic [2].
Related work
Bistable VLSI synapses in networks of I&F neurons have already been proposed in [6], but
in those circuits, the synaptic efficacy is always clamped to either a high value or a low one,
even for short-term dynamics, as opposed to our case, in which the synaptic efficacy can
assume any analog value between the two. In [7] the authors propose a spike-based learning circuit, based on a modified version of Riccati's equation [10], in which the synaptic
efficacy is a continuous analog voltage; but their synapses require many more transistors
than the solution we propose, and do not incorporate long-term bistability. More recently
Bofill and Murray proposed circuits for implementing STDP within a framework of pulse-based neural network circuits [4]. However, in addition to lacking the long-term bistability properties,
their synaptic circuits require digital control signals that cannot be easily generated within
the framework of neuromorphic networks of I&F neurons [8, 12].
Figure 1: Synaptic efficacy STDP circuit.
2 The STDP circuits
The circuit required to implement STDP in a network of I&F neurons is shown in Fig. 1.
This circuit increases or decreases the analog voltage Vw0 , depending on the relative timing
of the pulses pre and /post. The voltage Vw0 is then used to set the strength of synaptic
circuits with realistic dynamics, of the type described in [11]. The pre- and post-synaptic
pulses pre and /post are generated by compact, low power I&F neurons, of the type described in [9].
The circuit of Fig. 1 is fully symmetric: upon the arrival of a pre-synaptic pulse pre a
waveform Vpot (t) (for potentiating Vw0 ) is generated. Similarly, upon the arrival of a
post-synaptic pulse /post, a complementary waveform Vdep (t) (for depotentiating Vw0 )
is generated. Both waveforms have a sharp onset and decay linearly with time, at a rate set
respectively by Vtp and Vtd . The pre- and post-synaptic pulses are also used to switch on
two gates (M8 and M5) that allow the currents Idep and Ipot to flow, as long as the pulses
are high, either increasing or decreasing the weight. The bias voltages Vp on transistor M6
and Vd on M7 set an upper bound for the maximum amount of current that can be injected
into or removed from the capacitor Cw. If transistors M4-M9 operate in the subthreshold
regime [13], we can compute the analytical expressions of Ipot(t) and Idep(t):
$$I_{pot}(t) = \frac{I_0}{e^{-\frac{\kappa}{U_T} V_{pot}(t-t_{pre})} + e^{-\frac{\kappa}{U_T} V_p}} \qquad (1)$$

$$I_{dep}(t) = \frac{I_0}{e^{-\frac{\kappa}{U_T} V_{dep}(t-t_{post})} + e^{-\frac{\kappa}{U_T} V_d}} \qquad (2)$$
where tpre and tpost are the times at which the pre-synaptic and post-synaptic spikes are
emitted, UT is the thermal voltage, and κ is the subthreshold slope factor [13]. The change
in synaptic efficacy is then:
$$\Delta V_{w0} = \frac{I_{pot}(t_{post})}{C_p}\,\Delta t_{spk} \qquad \text{if } t_{pre} < t_{post} \qquad (3)$$

$$\Delta V_{w0} = -\frac{I_{dep}(t_{pre})}{C_d}\,\Delta t_{spk} \qquad \text{if } t_{post} < t_{pre}$$
where Δtspk is the pre- and post-synaptic spike width, Cp is the parasitic capacitance of
node Vpot and Cd that of node Vdep (not shown in Fig. 1).
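A minimal numerical sketch can make the sign and timing dependence of equations (1)-(3) concrete. All parameter values below (I0, kappa, UT, the waveform amplitude and decay slope, the capacitances and pulse width) are illustrative assumptions for the sketch, not the chip's actual values; the depression branch simply reuses the potentiation current by the circuit's symmetry.

```python
import math

# Illustrative parameters for the sketch (not the chip's actual values):
I0, KAPPA, UT = 1e-12, 0.7, 0.025      # off-current (A), slope factor, thermal voltage (V)
CP = CD = 1e-12                        # parasitic capacitances (F)
DT_SPK = 1e-6                          # spike width (s)
VP, VD = 4.0, 0.6                      # bias voltages bounding the currents (V)

def waveform(t, t_spike, v0=0.5, slope=50.0):
    """Sharp-onset, linearly decaying waveform (Vpot or Vdep) triggered at t_spike."""
    return max(0.0, v0 - slope * (t - t_spike)) if t >= t_spike else 0.0

def subthreshold_current(v_wave, v_bias):
    """Equations (1)-(2): current gated by a decaying waveform, bounded via v_bias."""
    return I0 / (math.exp(-(KAPPA / UT) * v_wave) +
                 math.exp(-(KAPPA / UT) * v_bias))

def delta_vw0(t_pre, t_post):
    """Equation (3): change in synaptic efficacy for a single pre/post spike pair."""
    if t_pre < t_post:   # causal order -> potentiation
        i = subthreshold_current(waveform(t_post, t_pre), VP)
        return i * DT_SPK / CP
    else:                # anti-causal order -> depression (symmetric half of the circuit)
        i = subthreshold_current(waveform(t_pre, t_post), VD)
        return -i * DT_SPK / CD
```

With these numbers the update is positive when the post-synaptic spike follows the pre-synaptic one, negative in the reversed order, and its magnitude decays as the spikes move apart, mirroring the shape of the curves in Fig. 2.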
In Fig. 2(a) we plot experimental data showing how ΔVw0 changes as a function of Δt =
tpre - tpost for different values of Vtd and Vtp. Similarly, in Fig. 2(b) we show plots
Figure 2: Changes in synaptic efficacy, as a function of the difference between pre- and
post-synaptic spike emission times Δt = tpre - tpost. (a) Curves obtained for four different
values of Vpot (in the left quadrant) and four different values of Vdep (in the right quadrant).
(b) Typical STDP plot, obtained by setting Vp to 4.0V and Vd to 0.6V.
Figure 3: Changes in Vw0 , in response to a sequence of pre-synaptic spikes (top trace). The
middle trace shows how the signal Vdep , triggered by the post-synaptic neuron, decreases
linearly with time. The bottom trace shows the series of digital pulses pre, generated with
every pre-synaptic spike.
of ΔVw0 versus Δt for three different values of Vp and three different values of Vd. As
there are four independent control biases, it is possible to set the maximum amplitude and
temporal window of influence independently for positive and negative changes in Vw0.
The data of Fig. 2 was obtained using a paired-pulse protocol similar to the one used in
physiological experiments [14]: one single pair of pre- and post-synaptic spikes was used
to measure each ΔVw0 data point, by systematically changing the delay tpre - tpost and
by separating successive stimulation sessions by a few hundred milliseconds (to allow the
signals to return to their resting steady-state). Unlike the biological experiments, in our
VLSI setup it is possible to evaluate the effect of multiple pulses on the synaptic efficacy,
for very long successive stimulation sessions, monitoring all the internal state variables
and signals involved in the process. In Fig. 3 we show the effect of multiple pre-synaptic
spikes following a post-synaptic one, plotting a trace of the voltage Vw0, together with the
Figure 4: Bistability circuit. Depending on the sign of Vw0 - Vthr, the comparator drives Vw0 to either
Vhigh or Vlow. The rate at which the circuit drives Vw0 toward the asymptote is controlled
by Vleak and imposed by transistors M2 and M4.
"internal" signal Vdep, generated by the post-synaptic spike, and the pulses pre, generated
by the pre-synaptic neuron. Note how the change in Vw0 is positive when the post-synaptic spike follows a pre-synaptic one, at t = 0.5ms, and negative when a series
of pre-synaptic spikes follows the post-synaptic one. The effect of subsequent pre pulses
following the first post-/pre-synaptic pair is additive, and decreases with time as in Fig. 2.
As expected, the anti-causal relationship between pre- and post-synaptic neurons has the
net effect of decreasing the synaptic efficacy.
3 The bistability circuit
The bistability circuit, shown in Fig. 4, drives the voltage Vw0 toward one of two possible
states: Vhigh (if Vw0 > Vthr ), or Vlow (if Vw0 < Vthr ). The signal Vthr is a threshold
voltage that can be set externally. The circuit comprises a comparator, and a mixed-mode
analog-digital leakage circuit. The comparator is a five transistor transconductance amplifier [13] that can be designed using minimum feature-size transistors. The leakage circuit
contains two gates that act as digital switches (M5, M6) and four transistors that set the
two stable state asymptotes Vhigh and Vlow and that, together with the bias voltage Vleak ,
determine the rate at which Vw0 approaches the asymptotes. The bistability circuit drives
Vw0 in two different ways, depending on how large the distance is between the value of Vw0
itself and the asymptote. If |Vw0 - Vas| > 4UT the bistability circuit drives Vw0 toward Vas
linearly, where Vas represents either Vlow or Vhigh, depending on the sign of (Vw0 - Vthr):
$$V_{w0}(t) = V_{w0}(0) + \frac{I_{leak}}{C_w}\,t \qquad \text{if } V_{w0} > V_{thr} \qquad (4)$$

$$V_{w0}(t) = V_{w0}(0) - \frac{I_{leak}}{C_w}\,t \qquad \text{if } V_{w0} < V_{thr}$$

where Cw is the capacitor of Fig. 1 and

$$I_{leak} = I_0\, e^{\kappa \frac{V_{leak} - V_{low}}{U_T}}$$
As Vw0 gets close to the asymptote and |Vw0 - Vas| < 4UT, transistors M2 or M4 of Fig. 4
go out of saturation and Vw0 begins to approach the asymptote exponentially:

$$V_{w0}(t) = V_{high} - V_{w0}(0)\, e^{-\frac{I_{leak}}{C_w U_T} t} \qquad \text{if } V_{w0} > V_{thr} \qquad (5)$$

$$V_{w0}(t) = V_{low} + V_{w0}(0)\, e^{-\frac{I_{leak}}{C_w U_T} t} \qquad \text{if } V_{w0} < V_{thr}$$
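A discrete-time sketch of this two-regime drive illustrates how Vhigh and Vlow act as attractors and Vthr as an unstable equilibrium. The asymptote and threshold values (2.75V, 0V, 1.52V) are taken from the text; the leak rate, step size, and the exact exponential-regime rate are illustrative assumptions chosen only to make the drive continuous at the regime boundary.

```python
# Illustrative parameters (asymptotes and threshold from the text; rates assumed):
UT = 0.025                       # thermal voltage (V)
V_HIGH, V_LOW, V_THR = 2.75, 0.0, 1.52
LEAK_RATE = 50.0                 # I_leak / Cw, in V/s

def bistable_step(v, dt):
    """One Euler step of the bistability drive of Fig. 4.

    Far from the asymptote (|v - v_as| > 4*UT) the drive is linear, as in
    eq. (4); close to it the approach becomes exponential, as in eq. (5),
    modeled here by a rate proportional to the remaining distance."""
    v_as = V_HIGH if v > V_THR else V_LOW
    dist = abs(v - v_as)
    if dist > 4 * UT:
        dv = LEAK_RATE * dt                      # constant-current (linear) drive
    else:
        dv = (LEAK_RATE / (4 * UT)) * dist * dt  # rate shrinks with distance
    return v + dv if v_as > v else v - dv

def settle(v0, t_total=1.0, dt=1e-4):
    """Integrate the drive for t_total seconds starting from v0."""
    v = v0
    for _ in range(int(t_total / dt)):
        v = bistable_step(v, dt)
    return v
```

Starting anywhere above threshold drives the state to V_HIGH, and anywhere below it to V_LOW, reproducing the attractor behavior measured in Fig. 5.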
On long time scales the dynamics of Vw0 are governed by the bistability circuit, while on
short time-scales they are governed by the STDP circuits and the precise timing of pre- and
Figure 5: Synaptic efficacy bistability. Transition of Vw0 from below threshold to above
threshold (Vthr = 1.52V), with leakage rate set by Vleak = 0.25V and pre- and post-synaptic neurons stimulated in a way that increases Vw0.
Figure 6: Network of leaky I&F neurons with bistable STDP excitatory synapses and inhibitory synapses. The large circles symbolize I&F neurons, the small empty ones bistable
STDP excitatory synapses, and the small bars non-plastic inhibitory synapses. The arrows
in the circles indicate the possibility to inject current from an external source, to stimulate
the neurons.
post-synaptic spikes. If the STDP short-term dynamics drive Vw0 above threshold we say
that long-term potentiation (LTP) has been induced. And if the short-term dynamics drive
Vw0 below threshold, we say that long-term depression (LTD) has been induced.
In Fig. 5 we show how the synaptic efficacy Vw0 changes upon induction of LTP, while
stimulating the pre- and post-synaptic neurons with uniformly distributed spike trains. The
asymptote Vlow was set to zero, and Vhigh to 2.75V. The pre- and post-synaptic neurons
were injected with constant DC currents so as to increase Vw0, on average. As shown,
the two asymptotes Vlow and Vhigh act as two attractors, or stable equilibrium points,
whereas the threshold voltage Vthr acts as an unstable equilibrium point. If the synaptic efficacy is below threshold the short-term dynamics have to fight against the long-term
bistability effect, to increase Vw0 . But as soon as Vw0 crosses the threshold, the bistability
circuit switches, the effects of the short-term dynamics are reinforced by the asymptotic
drive, and Vw0 is quickly driven toward Vhigh .
4 A network of integrate and fire neurons
The prototype chip that we used to test the bistable STDP circuits presented in this paper,
contains a symmetric network of leaky I&F neurons [9] (see Fig. 6). The experimental data
Figure 7: Membrane potentials of pre- and post-synaptic neurons (bottom and middle traces
respectively) and synaptic efficacy values (top traces). (a) Changes in Vw0 for low synaptic efficacy values (Vhigh = 2.1V) and no bistability leakage currents (Vleak = 0). (b)
Changes in Vw0 for high synaptic efficacy values (Vhigh = 3.6V) and with bistability asymptotic drive (Vleak = 0.25V).
of Figs. 2, 3, and 5 was obtained by injecting currents in the neurons labeled I1 and O1
and by measuring the signals from the excitatory synapse on O1. In Fig. 7 we show the
membrane potential of I1, O1, and the synaptic efficacy Vw0 of the corresponding synapse,
in two different conditions. Figure 7(a) shows the changes in Vw0 when both neurons are
stimulated but no asymptotic drive is used. As shown Vw0 strongly depends on the spike
patterns of the pre- and post-synaptic neurons. Figure 7(b) shows a scenario in which
only neuron I1 is stimulated, but in which the weight Vw0 is close to its high asymptote
(Vhigh = 3.6V) and in which there is a long-term asymptotic drive (Vleak = 0.25V). Even
though the synaptic weight always stays in its potentiated state, the firing rate of O1 is not
as regular as that of its afferent neuron. This is mainly due to the small variations of
Vw0 induced by the STDP circuit.
5 Discussion and future work
The STDP circuits presented here introduce a source of variability in the spike timing of the
I&F neurons that could be exploited for creating VLSI networks of neurons with stochastic
dynamics and for implementing spike-based stochastic learning mechanisms [2]. These
mechanisms rely on the variability of the input signals (e.g. of Poisson distributed spike
trains) and on their precise spike-timing in order to induce LTP or LTD only to a small
specific sub-set of the synapses stimulated. In future experiments we will characterize the
properties of the bistable STDP synapse in response to Poisson distributed spike trains, and
measure transition probabilities as functions of input statistics and circuit parameters.
We presented compact neuromorphic circuits for implementing bistable STDP synapses in
VLSI networks of I&F neurons, and showed data from a prototype chip. We demonstrated
how these types of synapses can either store their LTP or LTD state for long-term, or switch
state depending on the precise timing of the pre- and post-synaptic spikes. In the near
future, we plan to use the simple network of I&F neurons of Fig. 6, present on the prototype
chip, to analyze the effect of bistable STDP plasticity at a network level. On the long term,
we plan to design a larger chip with these circuits to implement a re-configurable network
of I&F neurons of O(100) neurons and O(1000) synapses, and use it as a real-time tool for
investigating the computational properties of competitive networks and selective attention
models.
Acknowledgments
I am grateful to Rodney Douglas and Kevan Martin for their support, and to Shih-Chii Liu
and Stefano Fusi for constructive comments on the manuscript. Some of the ideas that led
to the design and implementation of the circuits presented were inspired by the Telluride
Workshop on Neuromorphic Engineering (http://www.ini.unizh.ch/telluride).
References
[1] L. F. Abbott and S. Song. Asymmetric Hebbian learning, spike timing and neural response
variability. In Advances in Neural Information Processing Systems, volume 11, pages 69-75,
1998.
[2] D. J. Amit and S. Fusi. Dynamic learning in neural networks with material synapses. Neural
Computation, 6:957, 1994.
[3] T. V. P. Bliss and G. L. Collingridge. A synaptic model of memory: Long term potentiation in
the hippocampus. Nature, 361:31-39, 1993.
[4] A. Bofill and A.F. Murray. Circuits for VLSI implementation of temporally asymmetric Hebbian learning. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural
Information processing systems, volume 14. MIT Press, Cambridge, MA, 2001.
[5] C. Diorio, P. Hasler, B.A. Minch, and C. Mead. A single-transistor silicon synapse. IEEE Trans.
Electron Devices, 43(11):1972-1980, 1996.
[6] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D.J. Amit. Spike-driven synaptic plasticity: theory, simulation, VLSI implementation. Neural Computation, 12:2227-2258, 2000.
[7] P. Häfliger, M. Mahowald, and L. Watts. A spike based learning neuron in analog VLSI. In
M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing
Systems, volume 9, pages 692-698. MIT Press, 1997.
[8] G. Indiveri. Modeling selective attention using a neuromorphic analog VLSI device. Neural
Computation, 12(12):2857-2880, December 2000.
[9] G. Indiveri. A low-power adaptive integrate-and-fire neuron circuit. In ISCAS 2003, the 2003
IEEE International Symposium on Circuits and Systems. IEEE, 2003.
[10] T. Kohonen. Self-Organization and Associative Memory. Springer Series in Information Sciences. Springer Verlag, 2nd edition, 1988.
[11] S.-C. Liu, M. Boegerhausen, and S. Pascal. Circuit model of short-term synaptic dynamics. In
Advances in Neural Information Processing Systems, volume 15, Cambridge, MA, December
2002. MIT Press.
[12] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, T. Burg, and R. Douglas. Orientation-selective
aVLSI spiking neurons. Neural Networks, 14(6/7):629-643, 2001. Special Issue on Spiking
Neurons in Neuroscience and Technology.
[13] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, and R. Douglas. Analog VLSI:Circuits and
Principles. MIT Press, 2002.
[14] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by
coincidence of postsynaptic APs and EPSPs. Science, 275:213-215, 1997.
[15] C. C. H. Petersen, R. C. Malenka, R. A. Nicoll, and J. J. Hopfield. All-or-none potentiation at
CA3-CA1 synapses. Proc. Natl. Acad. Sci., 95:4732, 1998.
[16] S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent plasticity. Nature Neuroscience, 3(9):919-926, 2000.
Theory-Based Causal Inference
Joshua B. Tenenbaum & Thomas L. Griffiths
Department of Brain and Cognitive Sciences
MIT, Cambridge, MA 02139
{jbt, gruffydd}@mit.edu
Abstract
People routinely make sophisticated causal inferences unconsciously, effortlessly, and from very little data, often from just one or a few observations. We argue that these inferences can be explained as Bayesian
computations over a hypothesis space of causal graphical models, shaped
by strong top-down prior knowledge in the form of intuitive theories. We
present two case studies of our approach, including quantitative models of human causal judgments and brief comparisons with traditional
bottom-up models of inference.
1 Introduction
People are remarkably good at inferring the causal structure of a system from observations
of its behavior. Like any inductive task, causal inference is an ill-posed problem: the data
we see typically underdetermine the true causal structure. This problem is worse than
the usual statistician's dilemma that "correlation does not imply causation". Many cases of
everyday causal inference follow from just one or a few observations, where there isn't even
enough data to reliably infer correlations! This fact notwithstanding, most conventional
accounts of causal inference attempt to generate hypotheses in a bottom-up fashion based
on empirical correlations. These include associationist models [12], as well as more recent
rational models that embody an explicit concept of causation [1,3], and most algorithms
for learning causal Bayes nets [10,14,7].
Here we argue for an alternative top-down approach, within the causal Bayes net framework. In contrast to standard bottom-up approaches to structure learning [10,14,7], which
aim to optimize or integrate over all possible causal models (structures and parameters),
we propose that people consider only a relatively constrained set of hypotheses determined
by their prior knowledge of how the world works. The allowed causal hypotheses not only
form a small set of all possible causal graphs, but also instantiate specific causal mechanisms with constrained conditional probability tables, rather than much more general conditional dependence and independence relations.
The prior knowledge that generates this hypothesis space of possible causal models can be
thought of as an intuitive theory, analogous to the scientific theories of classical mechanics or electrodynamics that generate constrained spaces of possible causal models in their
domains. Following the suggestions of recent work in cognitive development (reviewed
in [4]), we take the existence of strong intuitive theories to be the foundation for human
causal inference. However, our view contrasts with some recent suggestions [4,11] that
an intuitive theory may be represented as a causal Bayes net model. Rather, we consider
a theory to be the underlying principles that generate the range of causal network models
potentially applicable in a given domain: the abstractions that allow a learner to construct
and reason with appropriate causal network hypotheses about novel systems in the presence
of minimal perceptual input.
Given the hypothesis space generated by an intuitive theory, causal inference then follows
the standard Bayesian paradigm: weighing each hypothesis according to its posterior probability and averaging their predictions about the system according to those weights. The
combination of Bayesian causal inference with strong top-down knowledge is quite powerful, allowing us to explain people?s very rapid inferences about model complexity in both
static and temporally extended domains. Here we present two case studies of our approach,
including quantitative models of human causal judgments and brief comparisons with more
bottom-up accounts.
2 Inferring hidden causal powers
We begin with a paradigm introduced by Gopnik and Sobel for studying causal inference in children [5]. Subjects are shown a number of blocks, along with a machine:
the "blicket detector". The blicket detector "activates" (lights up and makes noise) whenever a "blicket" is placed on it. Some of the blocks are "blickets", others are not, but their
outward appearance is no guide. Subjects observe a series of trials, on each of which one
or more blocks are placed on the detector and the detector activates or not. They are then
asked which blocks have the hidden causal power to activate the machine.
Gopnik and Sobel have demonstrated various conditions under which children successfully
infer the causal status of blocks from just one or a few observations. Of particular interest
is their "backwards blocking" condition [13]: on trial 1 (the "1-2" trial), children observe
two blocks (1 and 2) placed on the detector and the detector activates. Most children
now say that both 1 and 2 are blickets. On trial 2 (the "1 alone" trial), block 1 is placed on
the detector alone and the detector activates. Now all children say that 1 is a blicket, and
most say that 2 is not a blicket. Intuitively, this is a kind of "explaining away": seeing
that block 1 is sufficient to activate the detector alone explains away the previously observed
association of block 2 with detector activation.
Gopnik et al. [6] suggest that children's causal reasoning here may be thought of in terms
of learning the structure of a causal Bayes net. Figure 1a shows a Bayes net, h10, that is
consistent with children's judgments after trial 2. Variables X1 and X2 represent whether
blocks 1 and 2 are on the detector; E represents whether the detector activates; the
existence of an edge X1 -> E but no edge X2 -> E represents the hypothesis that 1 but
not 2 is a blicket, i.e., that block 1 but not block 2 has the power to turn on the detector. We encode the
two observations as vectors d = {x1, x2, e}, where x1 = 1 if block 1 is on the detector
(else x1 = 0), likewise for x2 and block 2, and e = 1 if the detector is active (else e = 0).
Given only the data d1 = {1,1,1}, d2 = {1,0,1}, standard Bayes net learning algorithms
have no way to converge on subjects' choice h10. The data are not sufficient to compute
the conditional independence relations required by constraint-based methods [9,13], 1 nor
to strongly influence the Bayesian structural score using arbitrary conditional probability
tables [7]. Standard psychological models of causal strength judgment [12,3], equivalent
to maximum-likelihood parameter estimates for the family of Bayes nets in Figure 1a [15],
either predict no explaining away here or make no prediction due to insufficient data.
1
Gopnik et al. [6] argue that constraint-based learning could be applied here, if we supplement the
observed data with large numbers of fictional observations. However, this account does not explain
why subjects make the inferences that they do from the very limited data actually observed, nor why
they are justified in doing so. Nor does it generalize to the three experiments we present here.
Alternatively, reasoning on this task could be explained in terms of a simple logical deduction. We require as a premise the activation law: a blicket detector activates if and
only if one or more blickets are placed on it. Based on the activation law and the data d1, d2,
we can deduce that 1 is a blicket but 2 remains undetermined. If we further assume a
form of Occam's razor, positing the minimal number of hidden causal powers, then we can
infer that 2 is not a blicket, as most children do. Other cases studied by Gopnik et al. can
be explained similarly. However, this deductive model cannot explain many plausible but
nondemonstrative causal inferences that people make, or people?s degrees of confidence in
their judgments, or their ability to infer probabilistic causal relationships from noisy data
[3,12,15]. It also leaves mysterious the origin and form of Occam?s razor. In sum, neither
deductive logic nor standard Bayes net learning provides a satisfying account of people?s
rapid causal inferences. We now show how a Bayesian structural inference based on strong
top-down knowledge can explain the blicket detector judgments, as well as several probabilistic variants that clearly exceed the capacity of deductive accounts.
Most generally, the top-down knowledge takes the form of a causal theory with at least two
components: an ontology of object, attribute and event types, and a set of causal principles
relating these elements. Here we treat theories only informally; we are currently developing
a formal treatment using the tools of probabilistic relational logic (e.g., [9]). In the basic
blicket detector domain, we have two kinds of objects, blocks and machines; two relevant
attributes, being a blicket and being a blicket detector; and two kinds of events, a block
being placed on a machine and a machine activating. The causal principle relating these
events and attributes is just the activation law introduced above. Instead of serving as a
premise for deductive inference, the causal law now generates a hypothesis space of causal
Bayes nets for statistical inference. This space is quite restricted: with two objects and one
detector, there are only 4 consistent hypotheses h00, h10, h01, h11 (Figure 1a). The
conditional probabilities for each hypothesis are also determined by the theory. Based
on the activation law, P(e = 1 | x1, x2; hij) = 1 if x1 = 1 and i = 1, or x2 = 1 and j = 1;
otherwise it equals 0.
Causal inference then follows by Bayesian updating of probabilities over H in light of
the observed data D. We assume independent observations so that the total likelihood
factors into separate terms for individual trials. For all hypotheses in H, the individual-trial
likelihoods also factor into P(e | x1, x2; h) P(x1 | h) P(x2 | h), and we can ignore the
last two terms P(x1 | h) P(x2 | h) assuming that block positions are independent of the
causal structure. The remaining term P(e | x1, x2; h) is 1 for any hypothesis consistent
with the data and 0 otherwise, because of the deterministic activation law. The posterior
P(h | D) for any data set D is then simply the restriction and renormalization of the prior
P(h) to the set of hypotheses consistent with D. 2
Backwards blocking proceeds as follows. After the '1-2' trial, at least one block must
be a blicket: the consistent hypotheses are h01, h10, and h11. After the '1 alone' trial,
only h10 and h11 remain consistent. The prior over causal structures can be
written as P(hij) = π^(i+j) (1-π)^(2-i-j), assuming that each block has some independent
probability π of being a blicket. The nonzero posterior probabilities after the '1-2' trial
are then given as follows (all others are zero): P(h01|d) = P(h10|d) = (1-π)/(2-π), and
P(h11|d) = π/(2-π). Finally, the probability that block i is a blicket may be computed
by averaging the predictions of all consistent hypotheses weighted by their posterior
probabilities: P(block 1 is a blicket|d) = P(h10|d) + P(h11|d) = 1/(2-π), and likewise
for block 2.
Footnote 2: More generally, we could allow for some noise in the detector, by letting the likelihood
P(E|X1, X2; h) be probabilistic rather than deterministic. For simplicity we consider only the noiseless case here; a low level of noise would give similar results.
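The restriction-and-renormalization inference just described can be sketched in a few lines of code. The helper names and the prior value `pi = 1/6` are illustrative assumptions for the "rare" condition, not part of the original model specification.

```python
from itertools import product

def blicket_posteriors(pi, trials):
    """Posterior over blicket assignments for two blocks.

    A hypothesis is a pair (b1, b2) of 0/1 flags; a trial is
    (indices_of_blocks_placed, detector_activated).  The deterministic
    activation law makes each trial's likelihood 1 for consistent
    hypotheses and 0 otherwise, so the posterior is the restriction and
    renormalization of the prior.
    """
    hypotheses = list(product([0, 1], repeat=2))
    prior = {h: (pi ** sum(h)) * ((1 - pi) ** (2 - sum(h))) for h in hypotheses}

    def consistent(h, placed, activated):
        return activated == int(any(h[i] for i in placed))

    post = dict(prior)
    for placed, activated in trials:
        post = {h: p * consistent(h, placed, activated) for h, p in post.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def p_blicket(post, i):
    """Probability that block i is a blicket: sum over hypotheses with b_i = 1."""
    return sum(p for h, p in post.items() if h[i])

pi = 1 / 6  # "rare" condition (illustrative value)
after_12 = blicket_posteriors(pi, [([0, 1], 1)])             # the '1-2' trial
after_alone = blicket_posteriors(pi, [([0, 1], 1), ([0], 1)])  # plus '1 alone'
```

Running this reproduces the closed forms above: both blocks sit at 1/(2-π) after the '1-2' trial, and after the '1 alone' trial block 1 goes to 1 while block 2 returns to the baseline π.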
In comparing with human judgments in the backwards blocking paradigm, the relevant
probabilities are P(block i is a blicket) at three points: the baseline judgments before
either block is placed on the detector (equal to π); judgments after the '1-2' trial (equal
to 1/(2-π) for both blocks); and judgments after the '1 alone' trial (1 for block 1, π for
block 2). These probabilities depend only on the prior probability of blickets, π. Setting
π < 1/2 qualitatively matches children's backwards blocking behavior: after the '1-2'
trial, both blocks are more likely than not to be blickets (1/(2-π) > 1/2); then, after
the '1 alone' trial, block 1 is definitely a blicket while block 2 is probably not (π < 1/2).
Thus there is no need to posit a special 'Occam's razor' just to explain why block 2
becomes less likely to be a blicket after the '1 alone' trial; this adjustment follows
naturally as a rational statistical inference. However, we do have to assume that blickets
are somewhat rare (π < 1/2). Following the '1 alone' trial the probability of block 2
being a blicket returns to baseline (π), because the unambiguous second trial explains
away all the evidence for block 2 from the first trial. Thus for π > 1/2, block 2 would
remain likely to be a blicket even after the '1 alone' trial.
In order to test whether human causal reasoning actually embodies this Bayesian form of
Occam's razor, or instead a more qualitative rule such as the classical version, 'Entities
should not be multiplied beyond necessity', we conducted three new blicket-detector experiments on both adults and 4-year-old children (in collaboration with Sobel & Gopnik).
The first two experiments were just like the original backwards blocking studies, except
that we manipulated subjects' estimates of π by introducing a pretraining phase. Subjects
first saw 12 objects placed on the detector, of which either 2, in the 'rare' condition,
or 10, in the 'common' condition, activated the detector. We hypothesized that this manipulation would lead subjects to set their subjective prior for blickets to either π = 1/6
or π = 5/6, and thus, if guided by the Bayesian Occam's razor, to show strong or weak
blocking respectively.
We gave adult subjects a different cover story, involving 'super pencils' and a 'superlead
detector', but here we translate the results into blicket detector terms. Following the 'rare'
or 'common' training, two new objects were picked at random from the same
pile and subjects were asked three times to judge the probability that each one could activate
the detector: first, before seeing it on the detector, as a baseline; second, after a '1-2' trial;
third, after a '1 alone' trial. Probabilities were judged on a 1-7 scale and then rescaled to
the range 0-1.
The mean adult probability judgments and the model predictions are shown in Figures 2a
(rare) and 2b (common). Wherever two objects have the same pattern of observed contingencies (e.g., block 1 and block 2 at baseline and after the '1-2' trial), subjects' mean judgments
were found not to be significantly different and were averaged together for this analysis. In
fitting the model, we adjusted π to match subjects' baseline judgments; the best-fitting values were very close to the true base rates. More interestingly, subjects' judgments tracked
the Bayesian model over both trials and conditions. Following the '1-2' trial, mean ratings
of both objects increased above baseline, but more so in the rare condition where the activation of the detector was more surprising. Following the '1 alone' trial, all subjects in both
conditions were 100% sure that block 1 had the power to activate the detector, and the mean
rating of block 2 returned to baseline: low in the rare condition, but high in the common condition. Four-year-old children made 'yes'/'no' judgments that were qualitatively similar,
across both rare and common conditions [13].
Human causal inference thus appears to follow rational statistical principles, obeying the
Bayesian version of Occam's razor rather than the classical logical version. However, an
alternative explanation of our data is that subjects are simply employing a combination of
logical reasoning and simple heuristics. Following the '1 alone' trial, people could logically deduce that they have no information about the status of block 2 and then fall back on
the base rate of blickets as a default, without the need for any genuinely Bayesian computations. To rule out this possibility, our third study tested causal explaining away in the
absence of unambiguous data that could be used to support deductive reasoning. Subjects
again saw the 'rare' pretraining, but now the critical trials involved three objects, blocks 1,
2, and 3. After judging the baseline probability that each object could activate the detector,
subjects saw two trials: a '1-2' trial, followed by a '1-3' trial, in which objects 1 and 3
activated the detector together. The Bayesian hypothesis space is analogous to Figure
1a, but now includes eight (2^3) hypotheses representing all possible assignments of
causal powers to the three objects. As before, the prior over causal structures can
be written as a product of independent terms, one per block, each with probability π of
being a blicket; the likelihood P(d|h) reduces to 1 for any hypothesis consistent with d
(under the activation law) and 0 otherwise; and the probability that block i is a blicket
may be computed by summing the posterior probabilities of all consistent hypotheses,
e.g., P(block 1 is a blicket|d) = P(h100|d) + P(h101|d) + P(h110|d) + P(h111|d).
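The eight-hypothesis computation for this three-object design can be sketched directly; the value `pi = 1/6` matches the 'rare' pretraining assumption above, and the variable names are illustrative.

```python
from itertools import product

pi = 1 / 6  # "rare" prior probability of being a blicket (illustrative)

# Hypotheses assign a 0/1 blicket flag to each of the three blocks.
hypotheses = list(product([0, 1], repeat=3))
prior = {h: (pi ** sum(h)) * ((1 - pi) ** (3 - sum(h))) for h in hypotheses}

# Data: a "1-2" trial then a "1-3" trial, the detector activating both times.
trials = [([0, 1], 1), ([0, 2], 1)]
post = dict(prior)
for placed, activated in trials:
    # Deterministic activation law: likelihood 1 iff activation matches.
    post = {h: p * (activated == int(any(h[i] for i in placed)))
            for h, p in post.items()}
z = sum(post.values())
post = {h: p / z for h, p in post.items()}

# Marginal probability that each block is a blicket.
p = [sum(q for h, q in post.items() if h[i]) for i in range(3)]
```

With π = 1/6 this gives P(block 1) = 36/41, less than certainty, while blocks 2 and 3 end up equal, above baseline but below 0.5: exactly the pattern of incomplete explaining away described in the text.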
Figure 2c shows that the Bayesian model's predictions and subjects' mean judgments match
well except for a slight overshoot in the model. Following the '1-3' trial, people judge that
block 1 probably activates the detector, but now with less than 100% confidence. Correspondingly, the probability that block 2 activates the detector decreases, and the probability that block 3
activates the detector increases, to a level above baseline but below 0.5. All of these predicted effects are statistically significant (one-tailed paired t-tests).
These results provide strong support for our claim that rapid human inferences about causal
structure can be explained as theory-guided Bayesian computations. Particularly striking
is the contrast between the effects of the '1 alone' trial and the '1-3' trial. In the former
case, subjects observe unambiguously that block 1 is a cause and their judgment about block 2
falls completely to baseline; in the latter, they observe only a suspicious coincidence and
so explaining away is not complete. A logical deductive mechanism might generate the
all-or-none explaining-away observed in the former case, while a bottom-up associative
learning mechanism might generate the incomplete effect seen in the latter case, but only
our top-down Bayesian approach naturally explains the full spectrum of one-shot causal
inferences, from uncertainty to certainty.
3 Causal inference in perception
Our second case study argues for the importance of causal theories in a very different
domain: perceiving the mechanics of collisions and vibrations. Michotte's [8] studies of
causal perception showed that a moving ball coming to rest next to a stationary ball would
be perceived as the cause of the latter's subsequent motion only if there was essentially no
gap in space or time between the end of the first ball's motion and the beginning of the second ball's. The standard explanation is that people have automatic perceptual mechanisms
for detecting certain kinds of physical causal relations, such as transfer of force, and these
mechanisms are driven by simple bottom-up cues such as spatial and temporal proximity.
Figure 3a shows data from an experiment described in [2] which might appear to support
this view. Subjects viewed a computer screen depicting a long horizontal beam. At one end
of the beam was a trap door, closed at the beginning of each trial. On each trial, a heavy
block was dropped onto the beam at some position X, and after some time T, the trap door
opened and a ball flew out. Subjects were told that the block dropping on the beam might
have jarred loose a latch that opens the door, and they were asked to judge (on a scale)
how likely it was that the block dropping was the cause of the door opening. The distance
X and time T separating these two events were varied across trials. Figure 3a shows that
as either X or T increases, the judged probability of a causal link decreases.
Anderson [1] proposed that this judgment could be formalized as a Bayesian inference with
two alternative hypotheses: h1, that a causal link exists, and h0, that no causal link exists.
He suggested that the likelihood P(T, X|h1) should be a product of decreasing exponentials
in space and time, P(T, X|h1) proportional to exp(-αX) exp(-βT), while P(T, X|h0) would presumably be constant. This model has three free parameters (the decay constants α and
β, and the prior probability P(h1)) plus multiplicative and additive scaling parameters to
bring the model outputs onto the same range as the data. Figure 3c shows that this model
can be adjusted to fit the broad outlines of the data, but it misses the crossover interaction: in the data, but not the model, the typical advantage of small distances X over large
distances disappears and even reverses as T increases.
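A small sketch shows why a separable model of this form cannot produce the crossover: with any fixed parameter values (the α, β, prior, and constant used here are illustrative, not fitted), the posterior's ordering over distances is the same at every delay.

```python
import math

def posterior_h1(X, T, alpha=0.3, beta=0.5, prior_h1=0.5, c=0.05):
    """Anderson-style separable model.

    P(T, X | h1) is proportional to exp(-alpha*X - beta*T); P(T, X | h0) = c
    is constant.  All parameter values are illustrative assumptions.
    """
    l1 = math.exp(-alpha * X - beta * T)
    l0 = c
    return prior_h1 * l1 / (prior_h1 * l1 + (1 - prior_h1) * l0)

# Separability: exp(-alpha*X) factors out, so for every delay T the
# small-distance curve stays above the large-distance curve.
for T in [0.1, 0.3, 0.9, 2.7, 8.1]:
    assert posterior_h1(1, T) > posterior_h1(15, T)
```

Because the likelihood ratio between two distances is exp(-α(X1 - X2)), independent of T, the curves in Figure 3c can approach each other but never cross.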
This crossover may reflect the presence of a much more sophisticated theory of force transfer than is captured by the spatiotemporal decay model. Figure 1b shows a causal graphical
structure representing a simplified physical model of this situation. The graph is a dynamic
Bayes net (DBN), enabling inferences about the system's behavior over time. There are
four basic event types, each indexed by time t. The door state E(t) can be either open
(E(t) = 1) or closed (E(t) = 0), and once open it stays open. There is an intrinsic source
of noise Z(t) in the door mechanism, which we take to be i.i.d., zero-mean gaussian. At
each time step t, the door opens if and only if the noise amplitude |Z(t)| exceeds some
threshold (which we take to be 1 without loss of generality). The block hits the beam at
position X(0) (and time t = 0), setting up a vibration in the door mechanism with energy
V(0). We assume this energy decreases according to an inverse power law with the
distance X(0) between the block and the door, V(0) proportional to X(0)^(-γ). (We can always set
the constant of proportionality to 1, absorbing it into the parameter ω below.) For simplicity, we assume that energy propagates
instantaneously from the block to the door (plausible given the speed of sound relative
to the distances and times used here), and that there is no vibrational damping over time
(V(t) = V(0) for all t). Anderson [2] also sketches an account along these lines, although he
provides no formal model.
At time T, the door pops open; we denote this event as E(T) = 1. The likelihood of this
event depends strictly on the variance of the noise: the bigger the variance, the sooner the
door should pop open. At issue is whether there exists a causal link between the vibration
V, caused by the block dropping, and the noise Z, which causes the door to open. More
precisely, we propose that causal inference is based on the probabilities P(E(T) = 1|X(0), h)
under the two hypotheses h1 (causal link) and h0 (no causal link). The noise variance has
some low intrinsic level σ0^2, which under h1, but not h0, is increased by some fraction ω
of the vibrational energy V. That is, var(Z(t)) = σ0^2 + ωV(t) under h1, and var(Z(t)) = σ0^2
under h0.
We can then solve for the likelihoods P(E(T) = 1|X(0), hi) analytically or through simulation.
We take the limit as the intrinsic noise level σ0^2 goes to 0, leaving three free parameters, γ, ω,
and P(h1), plus multiplicative and additive scaling parameters, just as in the spatiotemporal
decay model. Figure 3b plots the (scaled) posterior probabilities P(h1|T, X) for the best
fitting parameter values. In contrast to the spatiotemporal decay model, the DBN model
captures the crossover interaction between space and time.
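A minimal numerical sketch of this inference follows. All parameter values (γ, ω, the prior, and a small fixed intrinsic noise σ0 used instead of the σ0 → 0 limit taken in the text) are illustrative assumptions; the likelihood of the door first opening at step T is the geometric form implied by the i.i.d. per-step noise.

```python
import math

def q_open(sigma):
    """Per-step probability that |Z| > 1 for Z ~ N(0, sigma^2)."""
    return 1.0 - math.erf(1.0 / (sigma * math.sqrt(2.0)))

def posterior_h1(X, T, gamma=1.0, omega=2.0, sigma0=0.4, prior_h1=0.5):
    """P(h1 | door first opens at step T, block dropped at distance X).

    Under h1 the vibrational energy omega * X**(-gamma) is added to the
    noise variance; under h0 the variance stays at sigma0**2.  Parameter
    values are illustrative, not fitted to the data in Figure 3.
    """
    q1 = q_open(math.sqrt(sigma0 ** 2 + omega * X ** (-gamma)))
    q0 = q_open(sigma0)
    l1 = (1 - q1) ** (T - 1) * q1   # door stays shut T-1 steps, then opens
    l0 = (1 - q0) ** (T - 1) * q0
    return prior_h1 * l1 / (prior_h1 * l1 + (1 - prior_h1) * l0)

# The crossover: a nearby block wins at short delays, a distant one at
# long delays, because the effective decay rate in T depends on X.
assert posterior_h1(1, 1) > posterior_h1(15, 1)
assert posterior_h1(1, 10) < posterior_h1(15, 10)
```

Intuitively, a nearby block (large energy, large q1) makes a late opening very surprising under h1, so its curve drops steeply in T; a distant block drops slowly, and the curves cross, which the separable model above cannot do.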
This difference between the two models is fundamental, not just an accident of the parameter values chosen. The spatiotemporal decay model can never produce a crossover effect
due to its functional form, which is separable in X and T. A crossover of some form is generic in
the DBN model, because its predictions essentially follow an exponential decay function
on T with a decay rate that is a nonlinear function of X. Other mathematical models with
a nonseparable form could surely be devised to fit this data as well. The strength of our
model lies in its combination of rational statistical inference and realistic physical motivation. These results suggest that whatever schema of force transfer is in people's brains, it
must embody a more complex interaction between spatial and temporal factors than is assumed in traditional bottom-up models of causal inference, and its functional form may be
a rational consequence of a rich but implicit physical theory that underlies people's instantaneous percepts of causality. It is an interesting open question whether human observers
can use this knowledge only by carrying out an online simulation in parallel with their
observations, or can access it in a 'compiled' form to interpret bottom-up spatiotemporal
cues without the need to conduct any explicit internal simulations.
4 Conclusion
In two case studies, we have explored how people make rapid inferences about the causal
texture of their environment. We have argued that these inferences can be explained best as
Bayesian computations, working over hypothesis spaces strongly constrained by top-down
causal theories. This framework allowed us to construct quantitative models of causal
judgment: the most accurate models to date in both domains, and in the blicket detector domain, the only quantitatively predictive model to date. Our models make a number
of substantive and mechanistic assumptions about aspects of the environment that are not
directly accessible to human observers. From a scientific standpoint this might seem undesirable; we would like to work towards models that require the fewest number of a priori
assumptions. Yet we feel there is no escaping the need for powerful top-down constraints
on causal inference, in the form of intuitive theories. In ongoing work, we are beginning
to study the origins of these theories themselves. We expect that Bayesian learning mechanisms similar to those considered here will also be useful in understanding how we acquire
the ingredients of theories: abstract causal principles and ontological types.
References
[1] J. R. Anderson. The Adaptive Character of Thought. Erlbaum, 1990.
[2] J. R. Anderson. Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471-484,
1991.
[3] P. W. Cheng. From covariation to causation: A causal power theory. Psychological Review, 104,
367-405, 1997.
[4] A. Gopnik & C. Glymour. Causal maps and Bayes nets: a cognitive and computational account
of theory-formation. In Carruthers et al. (eds.), The Cognitive Basis of Science. Cambridge, 2002.
[5] A. Gopnik & D. M. Sobel. Detecting blickets: How young children use information about causal
properties in categorization and induction. Child Development, 71, 1205-1222, 2000.
[6] A. Gopnik, C. Glymour, D. M. Sobel, L. E. Schulz, T. Kushnir, D. Danks. A theory of causal
learning in children: Causal maps and Bayes nets. Psychological Review, in press.
[7] D. Heckerman. A Bayesian approach to learning causal networks. In Proc. Eleventh Conf. on
Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[8] A. E. Michotte. The Perception of Causality. Basic Books, 1963.
[9] H. Pasula & S. Russell. Approximate inference for first-order probabilistic languages. In Proc.
International Joint Conference on Artificial Intelligence, Seattle, 2001.
[10] J. Pearl. Causality. New York: Oxford University Press, 2000.
[11] B. Rehder. A causal-model theory of conceptual representation and categorization. Submitted
for publication, 2001.
[12] D. R. Shanks. Is human learning rational? Quarterly Journal of Experimental Psychology, 48a,
257-279, 1995.
[13] D. Sobel, J. B. Tenenbaum & A. Gopnik. The development of causal learning based on indirect
evidence: More than associations. Submitted for publication, 2002.
[14] P. Spirtes, C. Glymour, & R. Scheines. Causation, prediction, and search (2nd edition, revised).
Cambridge, MA: MIT Press, 2001.
[15] J. B. Tenenbaum & T. L. Griffiths. Structure learning in human causal induction. In T. Leen, T.
Dietterich, and V. Tresp (eds.), Advances in Neural Information Processing Systems 13. Cambridge,
MA: MIT Press, 2001.
Figure 1: Hypothesis spaces of causal Bayes nets for (a) the blicket detector and (b) the
mechanical vibration domains.
Figure 2: Human judgments and model predictions (based on Figure 1a) for one-shot backwards blocking with blickets, when blickets are (a) rare or (b) common, or (c) rare and only
observed in ambiguous combinations. Bar height represents the mean judged probability
that an object has the causal power to activate the detector.
Figure 3: Probability of a causal connection between two events: a block dropping onto a
beam and a trap door opening. Each curve corresponds to a different spatial gap X between
these events; each x-axis value corresponds to a different temporal gap T. (a) Human judgments. (b)
Predictions of the dynamic Bayes net model (Figure 1b). (c) Predictions of the spatiotemporal decay model.
Sequences via Boosting
Yasemin Altun, Thomas Hofmann and Mark Johnson*
Department of Computer Science
*Department of Cognitive and Linguistics Sciences
Brown University, Providence, RI 02912
{altun,th}@cs.brown.edu, [email protected]
Abstract
This paper investigates a boosting approach to discriminative
learning of label sequences based on a sequence rank loss function.
The proposed method combines many of the advantages of boosting schemes with the efficiency of dynamic programming methods
and is attractive both, conceptually and computationally. In addition, we also discuss alternative approaches based on the Hamming
loss for label sequences. The sequence boosting algorithm offers an
interesting alternative to methods based on HMMs and the more
recently proposed Conditional Random Fields. Applications areas
for the presented technique range from natural language processing
and information extraction to computational biology. We include
experiments on named entity recognition and part-of-speech tagging which demonstrate the validity and competitiveness of our
approach.
1
Introduction
The problem of annotating or segmenting observation sequences arises in many
applications across a variety of scientific disciplines, most prominently in natural
language processing, speech recognition, and computational biology. Well-known
applications include part-of-speech (POS) tagging, named entity classification, information extraction, text segmentation and phoneme classification in text and
speech processing [7] as well as problems like protein homology detection, secondary
structure prediction or gene classification in computational biology [3].
Up to now, the predominant formalism for modeling and predicting label sequences
has been based on Hidden Markov Models (HMMs) and variations thereof. Yet,
despite its success, generative probabilistic models - of which HMMs are a special
case - have two major shortcomings, which this paper is not the first one to point
out. First, generative probabilistic models are typically trained using maximum
likelihood estimation (MLE) for a joint sampling model of observation and label
sequences. As has been emphasized frequently, MLE based on the joint probability
model is inherently non-discriminative and thus may lead to suboptimal prediction
accuracy. Secondly, efficient inference and learning in this setting often requires
to make questionable conditional independence assumptions. More precisely, in the
case of HMMs, it is assumed that the Markov blanket of the hidden label variable at
time step t consists of the previous and next labels as well as the t-th observation.
This implies that all dependencies on past and future observations are mediated
through neighboring labels.
In this paper, we investigate the use of discriminative learning methods for learning
label sequences. This line of research continues previous approaches for learning
conditional models , namely Conditional Random Fields (CRFs) [6], and discriminative re-ranking [1, 2] . CRFs have two main advantages compared to HMMs:
They are trained discriminatively by maximizing a conditional (or pseudo-) likelihood criterion and they are more flexible in modeling additional dependencies such
as direct dependencies of the t-th label on past or future observations. However, we
strongly believe there are two further lines of research that are worth pursuing and
may offer additional benefits or improvements.
First of all, and this is the main emphasis of this paper, an exponential loss function
such as the one used in boosting algorithms [9,4] may be preferable to the logarithmic loss function used in CRFs. In particular we will present a boosting algorithm
that has the additional advantage of performing implicit feature selection, typically
resulting in very sparse models. This is important for model regularization as well
as for reasons of efficiency in high dimensional feature spaces. Secondly, we will
also discuss the use of loss functions that explicitly minimize the zero/one loss on
labels, i.e. the Hamming loss, as an alternative to loss functions based on ranking
or predicting entire label sequences.
2
Additive Models and Exponential Families
Formally, learning label sequences is a generalization of the standard supervised classification problem. The goal is to learn a discriminant function for sequences, i.e. a
mapping from observation sequences X = (X1,X2, ... ,Xt, ... ) to label sequences
y = (Y1, Y2, ... , Yt, ... ). The availability of a training set of labeled sequences
X = {(X^i, Y^i) : i = 1, ..., n} to learn this mapping from data is assumed.
In this paper, we focus on discriminant functions that can be written as additive
models. The models under consideration take the following general form:
$F_\theta(X, Y) = \sum_t F_\theta(X, Y; t)$, with $F_\theta(X, Y; t) = \sum_k \theta_k f_k(X, Y; t)$    (1)
Here fk denotes a (discrete) feature in the language of maximum entropy modeling, or a weak learner in the language of boosting. In the context of label sequences fk will typically be either of the form f~1)(Xt+s,Yt) (with S E {-l , O, l})
or f~2) (Yt-1, Yt). The first type of features will model dependencies between the
observation sequence X and the t-th label in the sequence, while the second type
will model inter-label dependencies between neighboring label variables. For ease
of presentation, we will assume that all features are binary, i.e. each learner corresponds to an indicator function. A typical way of defining a set of weak learners is
as follows:
fk(1) ( Xt+s , Yt )
J(Yt, y(k))Xdxt+s)
(2)
(3)
J(Yt ,y(k))J(Yt-1 ,y(k)) .
fk(2) ( Yt-1, Yt )
where J denotes the Kronecker-J and Xk is a binary feature function that extracts
a feature from an observation pattern; y(k) and y(k) refer to the label values for
which the weak learner becomes "active".
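As a concrete illustration of the two feature types in Eqs. (2) and (3), the sketch below implements them as indicator functions. The words, tags, and the example sentence are invented for illustration and are not from the corpus used later in the paper.

```python
# Minimal sketch of the two binary feature types; names and data are illustrative.
def f1(x_seq, y_seq, t, s, word, label):
    """Observation feature: fires if the label at t is `label` and the word at t+s is `word`."""
    if not (0 <= t + s < len(x_seq)):
        return 0
    return int(y_seq[t] == label and x_seq[t + s] == word)

def f2(y_seq, t, label_prev, label_cur):
    """Transition feature: fires if the labels at t-1 and t match the given pair."""
    if t == 0:
        return 0
    return int(y_seq[t - 1] == label_prev and y_seq[t] == label_cur)

x = ["Bill", "Clinton", "spoke"]
y = ["B-PER", "I-PER", "O"]
print(f1(x, y, 1, 0, "Clinton", "I-PER"))  # 1
print(f2(y, 1, "B-PER", "I-PER"))          # 1
```

The sufficient statistics S_k of the next section are simply sums of these indicators over t.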
There is a natural way to associate a conditional probability distribution over label
sequences Y with an additive model Fo by defining an exponential family for every
fixed observation sequence X
P_θ(Y|X) = exp[F_θ(X, Y)] / Z_θ(X),   Z_θ(X) = Σ_Y exp[F_θ(X, Y)].   (4)
This distribution is in exponential normal form and the parameters θ are also called
natural or canonical parameters. By performing the sum over the sequence index
t, we can see that the corresponding sufficient statistics are given by S_k(X, Y) =
Σ_t f_k(X, Y; t). These sufficient statistics simply count the number of times the
feature fk has been "active" along the labeled sequence (X, Y).
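The exponential-family construction of Eq. (4) can be checked by brute force on a toy problem. The two sufficient statistics below (occurrences of label 1 and of 1→1 transitions) are illustrative choices, not the features used in the experiments.

```python
import itertools
import math

# Brute-force illustration of Eq. (4) on a toy problem (2 labels, length 3).
def suff_stats(y):
    s1 = sum(1 for yt in y if yt == 1)                    # S_1: times label 1 occurs
    s2 = sum(a == b == 1 for a, b in zip(y, y[1:]))       # S_2: number of 1->1 transitions
    return (s1, s2)

def F(theta, y):
    return sum(th * s for th, s in zip(theta, suff_stats(y)))

theta = (0.5, -1.0)
seqs = list(itertools.product([0, 1], repeat=3))
Z = sum(math.exp(F(theta, y)) for y in seqs)
P = {y: math.exp(F(theta, y)) / Z for y in seqs}
assert abs(sum(P.values()) - 1.0) < 1e-12   # a proper conditional distribution
```

For real sequence lengths the sum over Y is of course never enumerated; it is computed by dynamic programming, as discussed below.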
3
Logarithmic Loss and Conditional Random Fields
In CRFs, the log-loss of the model with parameters θ w.r.t. a set of sequences X
is defined as the negative sum of the log-probabilities of each training label
sequence given its observation sequence,

H_log(θ; X) = −Σ_i log P_θ(Y^i | X^i).   (5)

Although [6] has proposed a modification of improved iterative scaling for parameter
estimation in CRFs, gradient-based methods such as conjugate gradient descent
have often been found to be more efficient for minimizing the convex loss function in
Eq. (5) (cf. [8]). The gradient can be readily computed as

∇_θ H_log = Σ_i ( E[S(X, Y) | X = X^i] − S(X^i, Y^i) ),   (6)
where expectations are taken w.r.t. Po(YIX). The stationary equations then simply
state that uniformly averaged over the training data, the observed sufficient statistics should match their conditional expectations. Computationally, the evaluation
of S(Xi, yi) is straightforward counting, while summing over all sequences Y to
compute E [S(X, Y)IX = Xi] can be performed using dynamic programming, since
the dependency structure between labels is a simple chain.
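Eq. (6) can be verified numerically on the same kind of toy chain: the analytic gradient E[S|X] − S(X, Y^i), computed here by enumeration rather than dynamic programming, should match a finite-difference derivative of the log-loss. All features and parameter values are illustrative.

```python
import itertools
import math

# Numerical check of Eq. (6) on a toy chain with two features.
def suff_stats(y):
    return (sum(y), sum(a == b == 1 for a, b in zip(y, y[1:])))

def logloss(theta, y_true, seqs):
    F = lambda y: sum(t * s for t, s in zip(theta, suff_stats(y)))
    Z = sum(math.exp(F(y)) for y in seqs)
    return -(F(y_true) - math.log(Z))

seqs = list(itertools.product([0, 1], repeat=4))
y_true = (1, 1, 0, 1)
theta = [0.3, -0.2]

# analytic gradient: E[S_k | X] - S_k(X, Y^i)
F = lambda y: sum(t * s for t, s in zip(theta, suff_stats(y)))
Z = sum(math.exp(F(y)) for y in seqs)
ES = [sum(math.exp(F(y)) / Z * suff_stats(y)[k] for y in seqs) for k in range(2)]
grad = [ES[k] - suff_stats(y_true)[k] for k in range(2)]

# central finite-difference check
eps = 1e-6
for k in range(2):
    tp = list(theta); tp[k] += eps
    tm = list(theta); tm[k] -= eps
    num = (logloss(tp, y_true, seqs) - logloss(tm, y_true, seqs)) / (2 * eps)
    assert abs(num - grad[k]) < 1e-5
```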
4
Ranking Loss Functions for Label Sequences
As an alternative to logarithmic loss functions, we propose to minimize an upper
bound on the ranking loss [9] adapted to label sequences. The ranking loss of a
discriminant function F_θ w.r.t. a set of training sequences is defined as
H_rnk(θ; X) = Σ_i Σ_{Y ≠ Y^i} step(F_θ(X^i, Y) − F_θ(X^i, Y^i)),   where step(x) = 1 for x ≥ 0 and 0 otherwise,   (7)
which is simply the sum of the number of label sequences that are ranked higher than
or equal to the true label sequence over all training sequences. It is straightforward
to see (based on a term by term comparison) that an upper bound on the rank loss
is given by the following exponential loss function
H_exp(θ; X) = Σ_i Σ_{Y ≠ Y^i} exp[F_θ(X^i, Y) − F_θ(X^i, Y^i)] = Σ_i [P_θ(Y^i | X^i)^(-1) − 1].   (8)
Interestingly this simply leads to a loss function that uses the inverse conditional
probability of the true label sequence, if we define this probability via the exponential form in Eq. (4). Notice that compared to [1], we include all sequences and
not just the top N list generated by some external mechanism. As we will show
shortly, an explicit summation is possible because of the availability of dynamic
programming formulation to compute sums over all sequences efficiently.
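The bounding relation between Eqs. (7) and (8) is easy to confirm by enumeration, as is the identity H_exp = Σ_i [P_θ(Y^i|X^i)^(-1) − 1]; again the features and parameter values are toy choices.

```python
import itertools
import math

# Check: exponential loss of Eq. (8) upper-bounds the rank loss of Eq. (7),
# and equals P(Y^i|X^i)^{-1} - 1 under the model of Eq. (4).
def suff_stats(y):
    return (sum(y), sum(a == b == 1 for a, b in zip(y, y[1:])))

theta = (0.4, -0.7)
F = lambda y: sum(t * s for t, s in zip(theta, suff_stats(y)))
seqs = list(itertools.product([0, 1], repeat=3))
y_true = (1, 0, 1)

rank_loss = sum(F(y) >= F(y_true) for y in seqs if y != y_true)
exp_loss = sum(math.exp(F(y) - F(y_true)) for y in seqs if y != y_true)
Z = sum(math.exp(F(y)) for y in seqs)
P_true = math.exp(F(y_true)) / Z
assert rank_loss <= exp_loss
assert abs(exp_loss - (1.0 / P_true - 1.0)) < 1e-12
```

The bound holds term by term: step(x) ≤ e^x for every x.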
In order to derive gradient equations for the exponential loss we can simply make
use of the elementary facts
∇_θ(−log P(θ)) = −∇_θ P(θ) / P(θ),   and   ∇_θ P(θ)^(-1) = −∇_θ P(θ) / P(θ)^2 = P(θ)^(-1) ∇_θ(−log P(θ)).   (9)

Then it is easy to see that

∇_θ H_exp = Σ_i P_θ(Y^i | X^i)^(-1) ( E[S(X, Y) | X = X^i] − S(X^i, Y^i) ).   (10)
The only difference between Eq. (6) and Eq. (10) is the non-uniform weighting of
different sequences by their inverse probability, hence putting more emphasis on
training label sequences that receive a small overall (conditional) probability.
5
Boosting Algorithm for Label Sequences
As an alternative to a simple gradient method, we now turn to the derivation of
a boosting algorithm, following the boosting formulation presented in [9]. Let us
introduce a relative weight (or distribution) D(i , Y) for each label sequence Y
w.r.t. a training instance (Xi, yi), i.e. L i Ly D(i , Y) = 1,
D(i, Y) = exp[F_θ(X^i, Y) − F_θ(X^i, Y^i)] / Σ_j Σ_{Y' ≠ Y^j} exp[F_θ(X^j, Y') − F_θ(X^j, Y^j)]   (11)

        = D(i) · P_θ(Y | X^i) / (1 − P_θ(Y^i | X^i))   for Y ≠ Y^i,   where   D(i) = [P_θ(Y^i | X^i)^(-1) − 1] / Σ_j [P_θ(Y^j | X^j)^(-1) − 1].   (12)

In addition, we define D(i, Y^i) = 0. Eq. (12) shows how we can split D(i, Y) into
a relative weight for each training instance, given by D(i), and a relative weight of
each sequence, given by the re-normalized conditional probability P_θ(Y | X^i). Notice
that D(i) → 0 as we approach the perfect prediction case of P_θ(Y^i | X^i) → 1.
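The factorization in Eq. (12) can be checked directly by brute force. In this toy example the observation sequences play no role (the distribution is the same for every i), which does not affect the algebra being verified.

```python
import itertools
import math

# Verify D(i,Y) = D(i) * P(Y|X^i) / (1 - P(Y^i|X^i)) for a toy two-instance problem.
def suff_stats(y):
    return (sum(y), sum(a == b == 1 for a, b in zip(y, y[1:])))

theta = (0.2, 0.5)
F = lambda y: sum(t * s for t, s in zip(theta, suff_stats(y)))
seqs = list(itertools.product([0, 1], repeat=3))
truths = [(1, 1, 0), (0, 0, 1)]        # Y^1, Y^2 (toy; X^i omitted)

Z = sum(math.exp(F(y)) for y in seqs)
P = {y: math.exp(F(y)) / Z for y in seqs}

# direct definition, Eq. (11)
denom = sum(math.exp(F(y) - F(yt)) for yt in truths for y in seqs if y != yt)
D = {(i, y): math.exp(F(y) - F(yt)) / denom
     for i, yt in enumerate(truths) for y in seqs if y != yt}

# factorized form, Eq. (12)
Di = [(1 / P[yt] - 1) / sum(1 / P[yt2] - 1 for yt2 in truths) for yt in truths]
for (i, y), d in D.items():
    yt = truths[i]
    assert abs(d - Di[i] * P[y] / (1 - P[yt])) < 1e-12
assert abs(sum(D.values()) - 1.0) < 1e-12
```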
We define a boosting algorithm which in each round aims at minimizing the partition function or weight normalization constant Z_k w.r.t. a weak learner f_k and a
corresponding optimal parameter increment Δθ_k:

Z_k(Δθ_k) = Σ_i D(i) Σ_{Y ≠ Y^i} [P_θ(Y | X^i) / (1 − P_θ(Y^i | X^i))] exp[Δθ_k (S_k(X^i, Y) − S_k(X^i, Y^i))]   (13)

          = Σ_b ( Σ_i D(i) P_θ(b | X^i; k) ) exp[b Δθ_k],   (14)

where P_θ(b | X^i; k) = Σ_{Y ∈ Y(b; X^i)} P_θ(Y | X^i) / (1 − P_θ(Y^i | X^i)) and Y(b; X^i) = {Y : Y ≠ Y^i ∧ (S_k(X^i, Y) − S_k(X^i, Y^i)) = b}.
This minimization problem is only tractable if the number of features is small,
since a dynamic programming run with accumulators [6] for every feature seems
to be required in order to compute the probabilities P_θ(b | X^i; k), i.e. the probability for the k-th feature to be active
exactly b times, conditioned on the observation sequence X^i.
In cases where this is intractable (and we assume this will be the case in most
applications), one can instead minimize an upper bound on every Z_k. The general
idea is to exploit the convexity of the exponential function and to bound

exp(x) ≤ [(x_max − x) / (x_max − x_min)] exp(x_min) + [(x − x_min) / (x_max − x_min)] exp(x_max),   (15)

which is valid for every x ∈ [x_min, x_max].
We introduce the following shorthand notation: u_ik(Y) = S_k(X^i, Y) − S_k(X^i, Y^i),
u_ik^max = max_{Y ≠ Y^i} u_ik(Y), u_ik^min = min_{Y ≠ Y^i} u_ik(Y), u_k^max = max_i u_ik^max,
u_k^min = min_i u_ik^min, and π_i(Y) = P_θ(Y | X^i) / (1 − P_θ(Y^i | X^i)), which allows us to rewrite

Z_k(Δθ_k) = Σ_i D(i) Σ_{Y ≠ Y^i} π_i(Y) exp[Δθ_k u_ik(Y)]   (16)

          ≤ Σ_i D(i) Σ_{Y ≠ Y^i} π_i(Y) [ (u_ik^max − u_ik(Y)) / (u_ik^max − u_ik^min) · e^{Δθ_k u_ik^min} + (u_ik(Y) − u_ik^min) / (u_ik^max − u_ik^min) · e^{Δθ_k u_ik^max} ]

          = Σ_i D(i) ( r_ik e^{Δθ_k u_ik^min} + (1 − r_ik) e^{Δθ_k u_ik^max} ),   (17)

where

r_ik = Σ_{Y ≠ Y^i} π_i(Y) (u_ik^max − u_ik(Y)) / (u_ik^max − u_ik^min).   (18)

By taking the second derivative w.r.t. Δθ_k it is easy to verify that this is a convex
function in Δθ_k which can be minimized with a simple line search.
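Because the bound in Eq. (17) is convex in Δθ_k, a simple one-dimensional search suffices, as the text notes. The sketch below minimizes it by ternary search; the D(i), r_ik, and interval endpoints are made-up numbers.

```python
import math

# Ternary line search for the convex upper bound of Eq. (17) as a function of Δθ_k.
def bound(delta, terms):
    return sum(D * (r * math.exp(delta * umin) + (1 - r) * math.exp(delta * umax))
               for D, r, umin, umax in terms)

def ternary_min(f, lo, hi, iters=200):
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

terms = [(0.6, 0.7, -2, 1), (0.4, 0.3, -1, 3)]   # (D(i), r_ik, u_ik^min, u_ik^max), illustrative
d_star = ternary_min(lambda d: bound(d, terms), -5, 5)
assert bound(d_star, terms) <= bound(0.0, terms) + 1e-12
```

Note that bound(0) = Σ_i D(i) = 1, so any useful step drives the bound below 1.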
If one is willing to accept a looser bound, one can instead work with the interval [u_k^min, u_k^max], which is the union of the intervals [u_ik^min, u_ik^max] over all training
sequences i, and obtain the upper bound

Z_k(Δθ_k) ≤ r_k e^{Δθ_k u_k^min} + (1 − r_k) e^{Δθ_k u_k^max},   (19)

where

r_k = Σ_i D(i) Σ_{Y ≠ Y^i} π_i(Y) (u_k^max − u_ik(Y)) / (u_k^max − u_k^min),   (20)
which can be solved analytically,

Δθ_k = [1 / (u_k^max − u_k^min)] log( −r_k u_k^min / ((1 − r_k) u_k^max) ),   (21)

but will in general lead to more conservative step sizes.
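The closed-form step of Eq. (21) sets the derivative of the one-interval bound of Eq. (19) to zero; this can be checked directly for illustrative values of r_k, u_k^min < 0, and u_k^max > 0.

```python
import math

# Closed-form minimizer of g(Δ) = r e^{Δ u_min} + (1-r) e^{Δ u_max}, Eq. (21).
r, umin, umax = 0.5, -2.0, 3.0          # illustrative values, umin < 0 < umax
delta = math.log(-r * umin / ((1 - r) * umax)) / (umax - umin)

g = lambda d: r * math.exp(d * umin) + (1 - r) * math.exp(d * umax)
gp = lambda d: r * umin * math.exp(d * umin) + (1 - r) * umax * math.exp(d * umax)
assert abs(gp(delta)) < 1e-12           # stationary point of the bound
assert g(delta) <= g(delta + 0.1) and g(delta) <= g(delta - 0.1)
```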
The final boosting procedure picks at every round the feature for which the upper
bound on Z_k is minimal and then performs an update θ_k ← θ_k + Δθ_k. Of course,
one might also use more elaborate techniques to find the optimal Δθ_k once f_k
has been selected, since the upper bound approximation may underestimate the
optimal step sizes. It is important to see that the quantities involved (r_ik and r_k,
respectively) are simple expectations of sufficient statistics that can be computed for
all features simultaneously with a single dynamic programming run per sequence.
6
Hamming Loss for Label Sequences
In many applications one is primarily interested in the label-by-label loss or Hamming loss [9]. Here we investigate how to train models by minimizing an upper
bound on the Hamming loss. The following logarithmic loss aims at maximizing
the log-probability for each individual label and is given by

F_log(θ; X) = −Σ_i Σ_t log P_θ(y_t^i | X^i) = −Σ_i Σ_t log Σ_{Y : Y_t = y_t^i} P_θ(Y | X^i).   (22)
Again focusing on gradient descent methods, the gradient is given by

∇_θ F_log = −Σ_i Σ_t ( E[S(X, Y) | X = X^i, Y_t = y_t^i] − E[S(X, Y) | X = X^i] ).   (23)

As can be seen, the expected sufficient statistics are now compared not to their
empirical values, but to their expected values, conditioned on a given label value
(and not the entire sequence Y^i). In order to evaluate these expectations, one
can perform dynamic programming using the algorithm described in [5], which
has (independently of our work) focused on the use of Hamming loss functions in
the context of CRFs. This algorithm has the complexity of the forward-backward
algorithm scaled by a constant.
Similar to the log-loss case, one can define an exponential loss function that corresponds to a margin-like quantity at every single label. We propose minimizing the
following loss function
F_exp(θ; X) = Σ_i Σ_t [ P_θ(y_t^i | X^i)^(-1) − 1 ]   (24)

            = Σ_i Σ_t ( Σ_Y exp[F_θ(X^i, Y)] / Σ_{Y : Y_t = y_t^i} exp[F_θ(X^i, Y)] − 1 ).   (25)
As a motivation, we point out that for the case of sequences of length 1, this
will reduce to the standard multi-class exponential loss. Effectively in this model,
the prediction of a label Y_t will mimic the probabilistic marginalization, i.e.
ŷ_t = argmax_y F_θ(X^i, y; t), where F_θ(X^i, y; t) = log Σ_{Y : Y_t = y} exp[F_θ(X^i, Y)].
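The per-label prediction rule discussed above (argmax over the marginal log Σ_{Y:Y_t=y} exp F_θ) can be computed by enumeration on a toy chain; features and weights are illustrative.

```python
import itertools
import math

# Per-label marginals by enumeration: ŷ_t = argmax_y log Σ_{Y: Y_t=y} exp F(Y).
def suff_stats(y):
    return (sum(y), sum(a == b == 1 for a, b in zip(y, y[1:])))

theta = (0.8, -0.3)
F = lambda y: sum(t * s for t, s in zip(theta, suff_stats(y)))
seqs = list(itertools.product([0, 1], repeat=3))
Z = sum(math.exp(F(y)) for y in seqs)

T = 3
pred = []
for t in range(T):
    scores = {lab: math.log(sum(math.exp(F(y)) for y in seqs if y[t] == lab))
              for lab in (0, 1)}
    pred.append(max(scores, key=scores.get))

# sanity check: the two marginals at each position sum to 1
for t in range(T):
    m1 = sum(math.exp(F(y)) / Z for y in seqs if y[t] == 1)
    m0 = sum(math.exp(F(y)) / Z for y in seqs if y[t] == 0)
    assert abs(m1 + m0 - 1.0) < 1e-12
print(pred)
```

For real sequences these marginals are obtained by the forward-backward algorithm rather than enumeration.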
Similar to the log-loss case, the gradient is given by

∇_θ F_exp = −Σ_i Σ_t [ E[S(X, Y) | X = X^i, Y_t = y_t^i] − E[S(X, Y) | X = X^i] ] / P_θ(y_t^i | X^i).   (26)
Again, we see the same differences between the log-loss and the exponential loss, but
this time for individual labels. Labels for which the marginal probability P_θ(y_t^i | X^i)
is small are accentuated in the exponential loss. The computational complexity for
computing ∇_θ F_exp and ∇_θ F_log is practically the same. We have not been able to
derive a boosting formulation for this loss function, mainly because it cannot be
written as a sum of exponential terms. We have thus resorted to conjugate gradient
descent methods for minimizing F_exp in our experiments.
7
Experimental Results
7.1
Named Entity Recognition
Named Entity Recognition (NER), a subtask of Information Extraction, is the task
of finding the phrases that contain person, location and organization names, times
and quantities. Each word is tagged with the type of the name as well as its position
in the name phrase (i.e. whether it is the first item of the phrase or not) in order
to represent the boundary information.
We used a Spanish corpus which was provided for the Special Session of CoNLL2002
on NER. The data is a collection of news wire articles and is tagged for person
names, organizations, locations and miscellaneous names.
We used simple binary features to ask questions about the word being tagged, as
well as the previous tag (i.e. HMM features). An example feature would be: Is the
current word= 'Clinton' and the tag='Person-Beginning '? We also used features to
ask detailed questions (i.e. spelling features) about the current word (e.g.: Is the
current word capitalized and the tag='Location-Intermediate'?) and the neighboring words. These questions cannot be asked (in a principled way) in a generative
HMM model. We ran experiments comparing the different loss functions optimized
with the conjugate gradient method and the boosting algorithm. We designed
three sets of features: HMM features (=S1), S1 and detailed features of the current word (=S2), and S2 and detailed features of the neighboring words (=S3).
The results summarized in Table 1 demonstrate the competitiveness of the
proposed loss functions with respect to H_log. We observe that with different
sets of features, the ordering of the performance of the loss functions changes.
Boosting performs worse than the conjugate gradient when only HMM features
are used, since there is not much information in the features other than the
identity of the word to be labeled. Consequently, the boosting algorithm needs
to include almost all weak learners in the ensemble and cannot exploit feature
sparseness. When there are more detailed features, the boosting algorithm is
competitive with the conjugate gradient method, but has the advantage of
generating sparser models. The conjugate gradient method uses all of the
available features, whereas boosting uses only about 10% of the features.

Feature Set | Objective | log  | exp  | boost
S1          | H         | 6.60 | 6.95 | 8.05
S1          | F         | 6.73 | 7.33 |
S2          | H         | 6.72 | 7.03 | 6.93
S2          | F         | 6.67 | 7.49 |
S3          | H         | 6.15 | 5.84 | 6.77
S3          | F         | 5.90 | 5.10 |

Table 1: Test error of the Spanish corpus for named entity recognition.
7.2
Part of Speech Tagging
We used the Penn TreeBank corpus for the part-of-speech tagging experiments.
The features were similar to the feature sets S1 and S2 described above in
the context of NER. Table 2 summarizes the experimental results obtained on this
task. It can be seen that the test errors obtained by different loss functions
lie within a relatively small range. Qualitatively the behavior of the different
optimization methods is comparable to the NER experiments.

Feature Set | Objective | log  | exp  | boost
S1          | H         | 4.69 | 4.88 | 10.58
S1          | F         | 4.37 | 4.71 |
S2          | H         | 5.04 | 4.96 | 5.09
S2          | F         | 4.74 | 4.90 |

Table 2: Test error of the Penn TreeBank corpus for POS tagging.

7.3
General Comments
Even with the tighter bound in the boosting formulation, the same features are
selected many times, because of the conservative estimate of the step size for parameter updates. We expect to speed up the convergence of the boosting algorithm
by using a more sophisticated line search mechanism to compute the optimal step
length, a conjecture that will be addressed in future work.
Although we did not use real-valued features in our experiments, we observed that
including real-valued features in a conjugate gradient formulation is a challenge,
whereas it is very natural to have such features in a boosting algorithm.
We noticed in our experiments that defining a distribution over the training instances using the inverse conditional probability creates problems in the boosting
formulation for data sets that are highly unbalanced in terms of the length of the
training sequences. To overcome this problem, we divided the sentences into pieces
such that the variation in the length of the sentences is small. The conjugate gradient optimization, on the other hand, did not appear to suffer from this problem.
8
Conclusion and Future Work
This paper makes two contributions to the problem of learning label sequences.
First, we have presented an efficient algorithm for discriminative learning of label
sequences that combines boosting with dynamic programming. The algorithm compares favorably with the best previous approach, Conditional Random Fields, and
offers additional benefits such as model sparseness. Secondly, we have discussed the
use of methods that optimize a label-by-label loss and have shown that these methods bear promise for further improving classification accuracy. Our future work will
investigate the performance (in both accuracy and computational expenses) of the
different loss functions in different conditions (e.g. noise level, size of the feature
set).
Acknowledgments
This work was sponsored by an NSF-ITR grant, award number IIS-0085940.
References
[1] M. Collins. Discriminative reranking for natural language parsing. In Proceedings 17th
International Conference on Machine Learning, pages 175-182. Morgan Kaufmann,
San Francisco, CA, 2000.
[2] M. Collins. Ranking algorithms for named-entity extraction: Boosting and the voted
perceptron. In Proceedings 40th Annual Meeting of the Association for Computational
Linguistics (ACL), pages 489-496, 2002.
[3] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical
view of boosting. Annals of Statistics, 28:337-374, 2000.
[5] S. Kakade, Y. W. Teh, and S. Roweis. An alternative objective function for Markovian
fields. In Proceedings 19th International Conference on Machine Learning, 2002.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic
models for segmenting and labeling sequence data. In Proc. 18th International Conf. on
Machine Learning, pages 282-289. Morgan Kaufmann, San Francisco, CA, 2001.
[7] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing.
MIT Press, 1999.
[8] T. Minka. Algorithms for maximum-likelihood logistic regression. Technical report,
CMU, Department of Statistics, TR 758, 2001.
[9] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.
Subthreshold Responses in Auditory Cortex
Christian K. Machens, Michael Wehr, Anthony M. Zador
Cold Spring Harbor Laboratory
One Bungtown Rd
Cold Spring Harbor, NY 11724
{machens, wehr, zador}@cshl.edu
Abstract
How do cortical neurons represent the acoustic environment? This question is often addressed by probing with simple stimuli such as clicks or
tone pips. Such stimuli have the advantage of yielding easily interpreted
answers, but have the disadvantage that they may fail to uncover complex
or higher-order neuronal response properties.
Here we adopt an alternative approach, probing neuronal responses with
complex acoustic stimuli, including animal vocalizations and music. We
have used in vivo whole cell methods in the rat auditory cortex to record
subthreshold membrane potential fluctuations elicited by these stimuli.
Whole cell recording reveals the total synaptic input to a neuron from
all the other neurons in the circuit, instead of just its output (a sparse binary spike train) as in conventional single unit physiological recordings.
Whole cell recording thus provides a much richer source of information
about the neuron?s response.
Many neurons responded robustly and reliably to the complex stimuli
in our ensemble. Here we analyze the linear component (the spectro-temporal receptive field, STRF) of the transformation from the sound
(as represented by its time-varying spectrogram) to the neuron's membrane potential. We find that the STRF has a rich dynamical structure,
including excitatory regions positioned in general accord with the prediction of the simple tuning curve. We also find that in many cases, much of
the neuron's response, although deterministically related to the stimulus,
cannot be predicted by the linear component, indicating the presence of
as-yet-uncharacterized nonlinear response properties.
1 Introduction
In their natural environment, animals encounter highly complex, dynamically changing
stimuli. The auditory cortex evolved to process such complex sounds. To investigate a
system in its normal mode of operation, it therefore seems reasonable to use natural stimuli.
The linear response of an auditory neuron can be described in terms of its spectro-temporal
receptive field (STRF). The cortical STRF has been estimated using a variety of stimulus
ensembles^1, including tone pips [1] and dynamic ripples [2]. However, while natural stimuli have long been used to probe cortical responses [3, 4], and have been widely used
stimuli have long been used to probe cortical responses [3, 4], and have been widely used
in other preparations to compute STRFs [5], they have only rarely been used to compute
STRFs from cortical neurons [6].
Here we present estimates of the STRF using in vivo whole cell recording. Because whole
cell recording measures the total synaptic input to a neuron, rather than just its output?
a sparse binary spike train?as in conventional single unit physiological recordings, this
technique provides a much richer source of information about the neuron?s response.
Whole cell recording also has a different sampling bias from conventional extracellular
recording: instead of recording from active neurons with large action potentials (i.e. those
that are most easily isolated on the electrode), whole cell recording selects for neurons
solely on the basis of the experimenter?s ability to form a gigaohm seal.
Using these novel methods, we investigated the computations performed by single neurons
in the auditory cortex A1 of rats.
2 Spike responses and subthreshold activity
We first used cell-attached methods to obtain well-isolated single unit recordings. We found
that many cells in auditory cortex responded only very rarely to the natural stimulus ensemble, making it difficult to characterize the neuron's input-output relationship effectively. An
example of this problem is shown in Fig. 1(b) where a natural stimulus (here, the call of a
nightingale) leads to an average of about five spikes during the eight-second-long presentation. Such sparse responses are not surprising, since it is well known that many cortical
neurons are selective for stimulus transients [7, 8].
One way to circumvent this difficulty is to present stimuli that elicit high firing rates. For
example, using dynamic ripple stimuli, an STRF can be constructed with about
spikes collected over minutes (average firing rate of approximately spikes/second, or
about -fold higher than the rate elicited by the natural stimulus in Fig. 1(b)) [9]. However, such stimuli have, by design, a simple correlational structure, and therefore preclude
the investigation of nonlinear response properties driven by higher-order stimulus characteristics.
We have therefore adopted an alternative approach based on in vivo whole cell recording,
exploiting the fact that although these neurons spike only rarely, they feature strong subthreshold activity. A set of subthreshold voltage traces, obtained by a whole-cell recording
where spikes were blocked (only in the neuron being recorded from) with the intracellular sodium channel blocker QX-314 (see Methods), is shown in Fig. 1(c). The responses
feature robust stimulus-locked fluctuations of membrane potential, as well as some spontaneous activity. Both the spontaneous and stimulus-locked voltage fluctuations are due to
the synchronous arrival of many excitatory postsynaptic potentials (EPSPs). (Note that if
spikes had not been blocked pharmacologically, some of the larger EPSPs would have triggered spikes). Not only do these whole cell recordings avoid the problem of sparse spiking
responses, they also provide insight into the computations performed by the input to the
neuron's spike generating mechanism.
^1 Because cortical neurons respond poorly to white noise, this stimulus has not been used to estimate cortical STRFs.
[Figure 1: panels (a)-(c); axes: trial no. vs. time (sec)]
Figure 1: (a) Spectrogram of the song of a nightingale. (b) Spike raster plots recorded in
cell-attached mode during ten repetitions of the nightingale song from a single neuron in
auditory cortex A1. (c) Voltage traces recorded in whole-cell-mode during ten repetitions
from another neuron in A1.
3 Reliability of responses
A key step in the characterization of the neuron?s responses is the separation of the
stimulus-locked activity from the stimulus-independent activity (?background noise?). A
sample average trace is compared with a single trial in Fig. 2(a).
To quantify the amount of stimulus-locked activity, we computed the coherence function
between a single response trace and the average over the remaining traces. The coherence
measures the frequency-resolved correlation of two time series. This function is shown in
Fig. 2b for responses to several natural stimuli from the same cell. The coherence function demonstrates that the stimulus-dependent activity is confined to lower frequencies
(
Hz). Note that the coherence function provides merely an average over the complete trace; in reality, the coherence can locally be much higher (when all traces feature
the same stimulus-locked excursion in membrane potential) or much lower (for instance in
the absence of stimulus-locked activity). On average, however, the coherence is approximately the same for all the natural stimuli presented, indicating that all stimuli feature
approximately the same level of background activity.
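A magnitude-squared coherence estimate of the kind used here can be sketched by averaging cross-spectra over segments (Welch-style). The signal model below, a shared sinusoid plus independent noise in two "trials", is illustrative; for real data one would typically use a library routine such as scipy.signal.coherence.

```python
import numpy as np

# Magnitude-squared coherence between two noisy copies of a shared signal.
rng = np.random.default_rng(0)
fs, n = 1000, 50 * 256
t = np.arange(n) / fs
shared = np.sin(2 * np.pi * 10 * t)                  # common 10 Hz component
x = shared + 0.5 * rng.standard_normal(n)
y = shared + 0.5 * rng.standard_normal(n)

def coherence(x, y, nseg=256):
    Sxx = Syy = Sxy = 0.0
    for i in range(0, len(x) - nseg + 1, nseg):      # non-overlapping Hann windows
        X = np.fft.rfft(x[i:i + nseg] * np.hanning(nseg))
        Y = np.fft.rfft(y[i:i + nseg] * np.hanning(nseg))
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

C = coherence(x, y)
freqs = np.fft.rfftfreq(256, 1 / fs)
k10 = np.argmin(np.abs(freqs - 10))
assert C[k10] > 0.8           # high coherence at the shared 10 Hz component
assert C[k10] > np.median(C)  # far above the background level
```

With K averaged segments, the expected background coherence for unrelated signals is roughly 1/K, which is why averaging is essential.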
[Figure 2: panels (a) voltage (mV) vs. time (sec), mean response and single trial; (b) coherence vs. frequency (Hz)]
Figure 2: (a) Mean response compared to single trial for a natural stimulus (jaguar mating
call). (b) Coherence functions between mean response and single trial for different stimuli.
All natural stimuli yield approximately the same relation between signal and noise.
4 Spectro-temporal receptive field
Having established the mean over trials as a reliable estimate of the stimulus-dependent
activity, we next sought to understand the computations performed by the neurons.
To mimic the cochlear transform, it has proven useful to describe the stimulus in the time-frequency domain [2]. Discretizing both time and frequency, we describe the stimulus
power in the t-th time bin and the f-th frequency bin by s(t, f). To compute the
time-frequency representation, we used the spectrogram method which requires a certain
choice for the time-frequency tradeoff [10]; several choices were used independently of
each other, essentially yielding the same results. In all cases, stimulus power is measured
in logarithmic units.
The simplest and most widely used model is a linear transform between the stimulus (as
represented by the spectrogram) and the response, given by the formula

r_est(t) = c + Σ_f Σ_τ h(τ, f) s(t − τ, f),   (1)

where c is a constant offset and the parameters h(τ, f) represent the spectro-temporal
receptive field (STRF) of the neuron. Note, though, that the response is usually taken
to be the average firing rate [2, 11]; here the response is given by the subthreshold voltage trace. The parameters can be fitted by minimizing the mean-square error between
the measured response r(t) and the estimated response r_est(t). This problem is solved by
multi-dimensional linear regression.
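The least-squares fit of the linear model can be sketched on synthetic data: with a white "spectrogram" the naive regression recovers a known STRF well (it is with correlated natural stimuli that it overfits, as discussed next). The array shapes and the synthetic STRF are illustrative.

```python
import numpy as np

# Recovering a known STRF by least squares on synthetic white-noise data.
rng = np.random.default_rng(1)
T, F, L = 4000, 8, 10          # time bins, frequency bins, STRF history length
S = rng.standard_normal((T, F))                       # whitened "spectrogram"
h_true = rng.standard_normal((L, F)) * np.exp(-np.arange(L))[:, None]

# design matrix of lagged stimulus frames, one row per time bin
X = np.stack([S[t - L:t].ravel() for t in range(L, T)])
r = X @ h_true.ravel() + 0.1 * rng.standard_normal(T - L)

h_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
corr = np.corrcoef(h_hat, h_true.ravel())[0, 1]
assert corr > 0.99            # white stimulus: the naive estimate is already accurate
```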
However, a direct, ?naive? estimate as obtained by the solution to the regression equations,
will usually fail since the stimulus does not properly sample all dimensions in stimulus
space. In general, this leads to strong overfitting of the poorly sampled dimensions and
poor predictive power of the model. The overfitting can be seen in the noisy structure of
the STRF shown in Fig. 3(a).
A simple alternative is to penalize the improperly sampled directions which can be done
using ridge regression [12]. Ridge regression minimizes the mean-square-error between
measured and estimated response while placing a constraint on the sum of the regression
coefficients. Choosing the constraint such that the predictive power of the model is maximized, we obtained the STRF shown in Fig. 3(b). Note that ridge regression operates
on all coefficients uniformly (i.e. the constraint is global), so that observed smoothness in
the estimated STRF represents structure in the data; no local smoothness constraint was
applied.
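Both the naive least-squares fit and its ridge-regularized variant can be sketched as follows. This is an illustrative implementation on synthetic data, not the authors' code; the helper names (`make_design`, `fit_strf`) and the single scalar ridge penalty are assumptions.

```python
import numpy as np

def make_design(S, n_lags):
    """Row t of the design matrix holds S(:, t), S(:, t-1), ..., S(:, t-n_lags+1),
    zero-padded at the start of the recording."""
    n_f, n_t = S.shape
    X = np.zeros((n_t, n_f * n_lags))
    for lag in range(n_lags):
        shifted = np.zeros((n_f, n_t))
        shifted[:, lag:] = S[:, :n_t - lag]
        X[:, lag * n_f:(lag + 1) * n_f] = shifted.T
    return X

def fit_strf(S, V, n_lags, ridge=0.0):
    """ridge=0 gives the 'naive' least-squares STRF; ridge>0 penalizes the
    sum of squared filter coefficients (the offset is left unpenalized).
    Returns the offset c and the filter h with shape (n_freq, n_lags)."""
    X = np.column_stack([np.ones(S.shape[1]), make_design(S, n_lags)])
    pen = ridge * np.eye(X.shape[1])
    pen[0, 0] = 0.0                       # do not penalize the constant offset
    w = np.linalg.solve(X.T @ X + pen, X.T @ V)
    return w[0], w[1:].reshape(n_lags, -1).T
```

In practice the ridge strength would be chosen to maximize the predictive power of the model, as described in the text.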
[Figure 3 plots: panels (a) "naive estimate" and (b) "ridge estimate"; axes: frequency (Hz, 100-12800) vs. time (sec, -0.3 to 0).]
Figure 3: (a) Naive estimate of the STRF via linear regression. Darker pixels denote timefrequency bins with higher power. (b) Estimate of the STRF via ridge regression.
The STRF displays the neuron's frequency-sensitivity, centered around 800-1600 Hz. This range of frequencies matches the neuron's tuning curve, which is measured with short sine
tones. The STRF suggests that the neuron essentially integrates frequencies within this
range and a time constant of about 100 ms. These types of computations have been previously reported for neurons in auditory cortex [1, 2].
4.1 Spectral analysis of error
How well does the simple linear model predict the subthreshold responses? To assess the
predictive power of the model, the STRF was estimated from data obtained for ten different
natural stimuli and then tested on an eleventh stimulus. A sample prediction is shown in
Fig. 4(a). While the predicted trace roughly captures the occurrence of the EPSPs, it fails
to predict their overall shape. This observation can be quantified by spectrally resolving
the prediction success. For that purpose, we again used the coherence function, which
measures the correlation between the actual response and the predicted response at each
frequency. This function is shown in Fig. 4(b). Clearly, the model fails to predict any
response fluctuations faster than
Hz. As a comparison, recall that the response is
reliable up to about Hz (Fig. 2).
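A minimal sketch of the coherence function used here, computed from Hann-windowed segments with 50% overlap (Welch-style averaged periodograms). The window length is an assumed parameter, and libraries such as SciPy provide equivalent routines; this version is self-contained for clarity.

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence |Pxy|^2 / (Pxx * Pyy), estimated from
    Hann-windowed segments with 50% overlap (Welch's method)."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    Pxx = Pyy = 0.0
    Pxy = 0.0 + 0.0j
    for i in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[i:i + nperseg])
        Y = np.fft.rfft(win * y[i:i + nperseg])
        Pxx = Pxx + np.abs(X) ** 2       # accumulate auto-spectra
        Pyy = Pyy + np.abs(Y) ** 2
        Pxy = Pxy + X * np.conj(Y)       # accumulate cross-spectrum
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, np.abs(Pxy) ** 2 / (Pxx * Pyy)
```

The coherence is near 1 at frequencies where the two traces share a common component, and near 0 where they are unrelated.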
[Figure 4 plots: (a) voltage (mV) vs. time (sec), showing mean response and prediction; (b) coherence vs. frequency (Hz).]
Figure 4: (a) Mean response and prediction for a natural stimulus (jaguar mating call). The
STRF captures the gross features of the response, but not the fine details. (b) Coherence
function between measured and predicted response.
[Figure 5 plot: squared correlation coefficient for each stimulus (THT, BHW, SLB, JMC, HBW, KF, TF, JHP, SJF, CWM, BGC).]
Figure 5: Squared correlation coefficients between the mean of the measured responses
and the predicted response. Linear prediction with the STRF is more effective for some
stimuli than others.
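The per-stimulus metric plotted here is the squared correlation coefficient between the trial-averaged response and the model prediction; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def squared_correlation(mean_response, prediction):
    """r^2 between the trial-averaged response and the model prediction."""
    r = np.corrcoef(mean_response, prediction)[0, 1]
    return r * r
```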
4.2 Errors across stimuli
Some of the natural stimuli elicited highly reliable responses that were not at all predicted
by the STRF, see Fig. 5. In fact, the example shown in Fig. 4 is one of the best predictions
achieved by the model. The failure to predict the responses of some stimuli cannot be attributed to the absence of stimulus-locked activity; as the coherence functions in Fig. 2(a)
have shown, all stimuli feature approximately the same proportion of stimulus-locked activity to noise. Rather, such responses indicate a high degree of nonlinearity that dominates
the response to some stimuli. This observation is in accord with previous work on neurons
in the auditory forebrain of zebra finches [11], where neurons show a high degree of feature
selectivity.
The nonlinearities seen in subthreshold responses of A1 neurons can partly be attributed
to adaptation, to interactions between frequencies [13, 14], and also to off-responses.² In
general, the linear model performs best if the stimuli are slowly modulated in both time
and frequency.
5 Discussion
We have used whole cell patch clamp methods in vivo to record subthreshold membrane
potential fluctuations elicited by natural sounds. Subthreshold responses were reliable and
(in contrast to the suprathreshold spiking responses) sufficiently rich and robust to permit
rapid and efficient estimation of the linear predictor of the neuron's response (the STRF).
The present manuscript represents the first analysis of subthreshold responses elicited by
natural stimuli in the cortex, or to our knowledge in any system.
STRFs estimated from natural sounds were in general agreement, with respect to gross
characteristics such as frequency tuning, with those obtained directly from pure tone pips.
The STRFs from complex sounds, however, provided a much more complete view of the
neuron's dynamics, so that it was possible to compare the predicted and experimentally
measured responses.
In many cases the prediction was poor (cf. Fig. 6), indicating strong nonlinearities in the
neuron?s responses. These nonlinearities include adaptation, two-tone interactions, and
²Off-responses are excitatory responses that occur at the termination of stimuli in some neurons.
Because they have the same sign as the on-response, they represent a form of rectifying nonlinearity. Further complications arise because on- and off-responses interact, depending on their spectrotemporal relations [14].
[Figure 6 plot: histogram of number of cells vs. average over squared correlation coefficients (0.1-0.6).]
Figure 6: Summary figure. Altogether cells were recorded in whole cell mode.
Shown are the squared correlation coefficients, averaged over all stimuli for a given cell.
For many cells, the linear model worked rather poorly as indicated by low cross correlations.
off-responses. Explaining these nonlinearities represents an exciting challenge for future
research.
6 Methods
Sprague-Dawley rats (p18-21) were anesthetized with ketamine (30 mg/kg) and medetomidine (0.24 mg/kg). Whole cell recordings and single unit recordings were made with glass microelectrodes ( MΩ) from primary auditory cortex (A1) using standard methods appropriately modified for the in vivo preparation. During whole cell recordings, sodium action potentials were blocked using the sodium channel blocker QX-314.
All natural sounds were taken from an audio CD, sampled at 44,100 Hz. Animal vocalizations were from "The Diversity of Animal Sounds," available from the Cornell Laboratory
of Ornithology. Additional stimuli included pure tones and white noise bursts with 25
ms duration and 5 ms ramp (sampled at 97.656 kHz), and Purple Haze by Jimi Hendrix.
Sounds were delivered by a TDT RP2 at 97.656 kHz to a calibrated TDT electrostatic
speaker and presented free field in a double-walled sound booth.
References
[1] R. C. deCharms and M. M. Merzenich. Primary cortical representation of sounds by the coordination of action-potential timing. Nature, 381(6583):610-3, 1996.
[2] D. J. Klein, D. A. Depireux, J. Z. Simon, and S. A. Shamma. Robust spectrotemporal reverse correlation for the auditory system: optimizing stimulus design. J Comput Neurosci, 9(1):85-111, 2000.
[3] O. Creutzfeldt, F. C. Hellweg, and C. Schreiner. Thalamocortical transformation of responses to complex auditory stimuli. Exp Brain Res, 39(1):87-104, 1980.
[4] I. Nelken, Y. Rotman, and O. Bar Yosef. Responses of auditory-cortex neurons to structural features of natural sounds. Nature, 397:154-157, 1999.
[5] F. E. Theunissen, S. V. David, N. C. Singh, A. Hsu, W. E. Vinje, and J. L. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network, 12(3):289-316, 2001.
[6] J. F. Linden, R. C. Liu, M. Kvale, C. E. Schreiner, and M. M. Merzenich. Reverse-correlation analysis of receptive fields in mouse and rat auditory cortex. Society for Neuroscience Abstracts, 27(2):1635, 2001.
[7] P. Heil. Auditory cortical onset responses revisited. II. Response strength. J Neurophysiol, 77(5):2642-60, 1997.
[8] S. L. Sally and J. B. Kelly. Organization of auditory cortex in the albino rat: sound frequency. J Neurophysiol, 59(5):1627-38, 1988.
[9] D. A. Depireux, J. Z. Simon, D. J. Klein, and S. A. Shamma. Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex. J Neurophysiol, 85(3):1220-34, 2001.
[10] L. Cohen. Time-Frequency Analysis. Prentice Hall, 1995.
[11] F. E. Theunissen, K. Sen, and A. J. Doupe. Spectral-temporal receptive fields of nonlinear auditory neurons obtained by using natural sounds. J Neurosci, 20(6):2315-2331, 2000.
[12] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[13] M. Brosch and C. E. Schreiner. Time course of forward masking tuning curves in cat primary auditory cortex. J Neurophysiol, 77(2):923-43, 1997.
[14] L. Tai and A. Zador. In vivo whole cell recording of synaptic responses underlying two-tone interactions in rat auditory cortex. Society for Neuroscience Abstracts, 27(2):1634, 2001.
How Linear are Auditory Cortical Responses?
Maneesh Sahani
Gatsby Unit, UCL
17 Queen Sq., London, WC1N 3AR, UK.
[email protected]
Jennifer F. Linden
Keck Center, UCSF
San Francisco, CA 94143?0732.
[email protected]
Abstract
By comparison to some other sensory cortices, the functional properties of cells in the primary auditory cortex are not yet well understood.
Recent attempts to obtain a generalized description of auditory cortical
responses have often relied upon characterization of the spectrotemporal receptive field (STRF), which amounts to a model of the stimulusresponse function (SRF) that is linear in the spectrogram of the stimulus.
How well can such a model account for neural responses at the very first
stages of auditory cortical processing? To answer this question, we develop a novel methodology for evaluating the fraction of stimulus-related
response power in a population that can be captured by a given type of
SRF model. We use this technique to show that, in the thalamo-recipient
layers of primary auditory cortex, STRF models account for no more
than 40% of the stimulus-related power in neural responses.
1 Introduction
A number of recent studies have suggested that spectrotemporal receptive field (STRF)
models [1, 2], which are linear in the stimulus spectrogram, can describe the spiking responses of auditory cortical neurons quite well [3, 4]. At the same time, other authors have
pointed out significant non-linearities in auditory cortical responses [5, 6], or have emphasized both linear and non-linear response components [7, 8]. Some of the differences in
these results may well arise from differences in the stimulus ensembles used to evoke neuronal responses. However, even for a single type of stimulus, it is extremely difficult to put
a number to the proportion of the response that is linear or non-linear, and so to judge the
relative contributions of the two components to the stimulus-evoked activity.
The difficulty arises because repeated presentations of identical stimulus sequences evoke
highly variable responses from neurons at intermediate stages of perceptual systems, even
in anaesthetized animals. While this variability may reflect meaningful changes in the
internal state of the animal or may be completely random, from the point of view of modelling the relationship between stimulus and neural response it must be treated as noise.
As previous authors have noted [9, 10], this noise complicates the evaluation of the performance of a particular class of stimulus-response function (SRF) model (for example, the
class of STRF models) in two ways. First, it makes it difficult to assess the quality of the
predictions given by any single model. Perfect prediction of a noisy response is impossible, even in principle, and since the the true underlying relationship between stimulus and
neural response is unknown, it is unclear what degree of partial prediction could possibly
be expected. Second, the noise introduces error into the estimation of the model parameters; consequently, even where direct unbiased evaluations of the predictions made by the
estimated models are possible, these evaluations understate the performance of the model
in the class that most closely matches the true SRF.
The difficulties can be illustrated in the context of the classical statistical measure of the
fraction of variance explained by a model, the coefficient of determination or $R^2$ statistic.
This is the ratio of the reduction in variance achieved by the regression model (the total
variance of the outputs minus the variance of the residuals) to the total variance of the
outputs. The total variance of the outputs includes contributions from the noise, and so
an $R^2$ of 1 is an unrealistic target, and the actual maximum achievable value is unclear.
Moreover, the reduction of variance on the training data, which appears in the numerator
of the $R^2$, includes some "explanation" of noise due to overfitting. The extent to which
this happens is difficult to estimate; if the reduction in variance is evaluated on test data,
estimation errors in the model will lead to an underestimate of the performance of the best
model in the class. Hypothesis tests based on $R^2$ compensate for these shortcomings in
answering questions of model sufficiency. However, these tests do not provide a way to
assess the extent of partial validity of a model class; indeed, it is well known that even
the failure of a hypothesis test to reject a specific model class is not sufficient evidence to
regard the model as fully adequate. One proposed method for obtaining a more quantitative measure of model performance is to compare the correlation (or, equivalently, squared
distance) between the model prediction and a new response measurement to that between
two successive responses to the same stimulus [9, 11]; as acknowledged in those proposals, however, this yardstick underestimates the response reliability even after considerable
averaging, and so the comparison will tend to overestimate the validity of the SRF model.
Measures like $R^2$ that are based on the fractional variance (or, for time series, the power) explained by a model do have some advantages; for example, contributions from independent
sources are additive. Here, we develop analytic techniques that overcome the systematic
noise-related biases in the usual variance measures¹, and thus obtain, for a population of
neurons, a quantitative estimate of the fraction of stimulus-related response captured by a
given class of models. This statistical framework may be applicable to analysis of response
functions for many types of neural data, ranging from intracellular recordings to imaging
measurements. We apply it to extracellular recordings from rodent auditory cortex, quantifying the degree to which STRF models can account for neuronal responses to dynamic
random chord stimuli. We find that on average less than half of the reliable stimulus-related
power in these responses can be captured by spectrogram-linear STRF models.
2 Signal power
The analysis assumes that the data consist of spike trains or other neural measurements
continuously recorded during presentation of a long, complex, rapidly varying stimulus.
This stimulus is treated as a discrete-time process. In the auditory experiment considered
here, the discretization was set by the duration of regularly clocked sound pulses of fixed
length; in a visual experiment, the discretization might be the frame rate of a movie. The
neural response can then be measured with the same level of precision, counting action
potentials (or integrating measurements)
to estimate a response rate for each time bin, to
obtain a response vector $r = (r(1), r(2), \ldots, r(T))$. We propose to measure model performance in terms of the fraction of response power predicted successfully, where "power" is used in the sense of average squared deviation from the mean: $P(r) = \langle (r - \langle r\rangle)^2\rangle$, with $\langle\cdot\rangle$ denoting averages over time.¹ As argued above, only some part of the total response power is predictable, even in principle; fortunately, this signal power can be estimated by combining repeated responses to the same stimulus sequence. We present a method-of-moments [12] derivation of the relevant estimator below.

¹An alternative would be to measure information or conditional entropy rates. However, the question of how much relevant information is preserved by a model is different from the question of how accurate a model's prediction is. For example, an information theoretic measure would not distinguish between a linear model and the same linear model cascaded with an invertible non-linearity.
Suppose we have $N$ responses $r^{(n)} = s + \eta^{(n)}$, where $s$ is the common, stimulus-dependent component (signal) in the response and $\eta^{(n)}$ is the (zero-mean) noise component of the response in the $n$th trial. The expected power in each response is given by $P(r^{(n)}) \doteq P(s) + P(\eta^{(n)})$ (where the symbol $\doteq$ means "equal in expectation"). This simple relationship depends only on the noise component having been defined to have zero mean, and holds even if the variance or other property of the noise depends on the signal strength. We now construct two trial-averaged quantities, similar to the sum-of-squares terms used in the analysis of variance (ANOVA) [12]: the power of the average response, $P(\langle r\rangle_N)$, and the average power per response, $\langle P(r)\rangle_N$. Using $\langle\cdot\rangle_N$ to indicate trial averages:

$\langle r\rangle_N = \frac{1}{N}\sum_{n=1}^{N} r^{(n)}$  and  $\langle P(r)\rangle_N = \frac{1}{N}\sum_{n=1}^{N} P(r^{(n)}).$

Assuming the noise in each trial is independent (although the noise in different time bins within a trial need not be), we have:

$P(\langle r\rangle_N) \doteq P(s) + \tfrac{1}{N}\langle P(\eta)\rangle_N$  and  $\langle P(r)\rangle_N \doteq P(s) + \langle P(\eta)\rangle_N.$

Thus solving for $P(s)$ suggests the following estimator for the signal power:

$\hat{P}(s) = \frac{1}{N-1}\left(N\,P(\langle r\rangle_N) - \langle P(r)\rangle_N\right)$   (1)

(A similar estimator for the noise power is obtained by subtracting this expression from $\langle P(r)\rangle_N$.) This estimator is unbiased, provided only that the noise distribution has defined
first and second moments and is independent between trials, as can be verified by explicitly calculating its expected value. Unlike the sum-of-squares terms encountered in an
ANOVA, it is not a $\chi^2$ variate even when the noise is normally distributed (indeed, it is not necessarily positive). However, since each of the power terms in (1) is the mean of at least $T$ numbers, the central limit theorem suggests that $\hat{P}(s)$ will be approximately normally distributed for recordings that are considerably longer than the time-scale of noise correlation (in the experiment considered here, $T = 3000$). Its variance is given by:
$\mathrm{Var}(\hat{P}(s)) = \cdots$   (2)

[The explicit right-hand side of (2), an expression in the quantities defined next, did not survive extraction.]
where $\Sigma$ is the ($T \times T$) covariance matrix of the noise, $\sigma$ is a vector formed by averaging each column of $\Sigma$, $\bar{\sigma}$ is the average of all the elements of $\Sigma$, and $\bar{s}$ is the time-average of the mean $s$. Thus, $\mathrm{Var}(\hat{P}(s))$ depends only on the first and second moments of the response distribution; substitution of data-derived estimates of these moments into (2) yields a standard
yields a standard
error bar for the estimator. In this way we have obtained an estimate (with corresponding uncertainty) of the maximum possible signal power that any model could accurately
predict, without having assumed any particular distribution or time-independence of the
noise.
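Estimator (1) requires only the power of the trial-averaged response and the average per-trial power; a sketch, with a synthetic check of its approximate unbiasedness (the function names and the sinusoidal test signal are illustrative, not from the paper):

```python
import numpy as np

def power(r):
    """Average squared deviation from the (time-)mean of a response trace."""
    r = np.asarray(r, dtype=float)
    return np.mean((r - r.mean()) ** 2)

def signal_power(R):
    """Estimator (1). R is an (N trials x T bins) array of responses:
    P_hat(s) = (N * P(<r>_N) - <P(r)>_N) / (N - 1)."""
    R = np.asarray(R, dtype=float)
    N = R.shape[0]
    p_of_mean = power(R.mean(axis=0))                # power of average response
    mean_of_p = np.mean([power(row) for row in R])   # average power per response
    return (N * p_of_mean - mean_of_p) / (N - 1)
```

Averaging the estimate over many simulated data sets with a known signal recovers the true signal power, even though each individual response is dominated by noise.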
3 Extrapolating Model Performance
To compare the performance of an estimated SRF model to this maximal value, we must
determine the amount of response power successfully predicted by the model. This is not
necessarily the power of the predicted response, since the prediction may be inaccurate.
Instead, the residual power in the difference between a measured response $r$ and the predicted response $\hat{r}$ to the same stimulus, $P(r - \hat{r})$, is taken as an estimate of the error power.
(The measured response used for this evaluation, and the stimulus which elicited it, may or
may not also have been used to identify the parameters of the SRF model being evaluated;
see explanation of training and test predictive powers below.) The difference between the
power in the observed response and the error power gives the predictive
power of the
model; it is this value that can be compared to the estimated signal power $\hat{P}(s)$.
To be able to describe more than one neuron, an SRF model class must contain parameters
that can be adapted to each case. Ideally, the power of the model class to describe a population of neurons would be judged using parameters that produced models closest to the
true SRFs (the ideal models), but we do not have a priori knowledge of those parameters.
Instead, the parameters must be tuned in each case using the measured neural responses.
One way to choose SRF model parameters is to minimize the mean squared error (MSE)
between the neural response in the training data and the model prediction for the same
stimulus; for example, the Wiener kernel minimizes the MSE for a model based on a finite
impulse response filter of fixed length. This MSE is identical to the error power that would
be obtained when the training data themselves are used as the reference measured response
. Thus, by minimizing the MSE, we maximize the predictive power evaluated against the
training data. The resulting maximum value, hereafter the training predictive power, will
overestimate the predictive ability of the ideal model, since the minimum-MSE parameters
will be overfit to the training data. (Overfitting is inevitable, because model estimates based
on finite data will always capture some stimulus-independent response variability.) More
precisely, the expected value of the training predictive power is an upper bound on the true
predictive power of the model class; we therefore refer to the training predictive power
itself as an upper estimate of the SRF model performance. We can also obtain a lower estimate, defined similarly, by empirically measuring the generalization performance of the
model by cross-validation. This provides an unbiased estimate of the average generalization performance of the fitted models; however, since these models are inevitably overfit
to their training data, the expected value of this cross-validation predictive power bounds
the true predictive power of the ideal model from below, and thereby provides the desired
lower estimate.
For any one recording, the predictive power of the ideal SRF model of a particular class can
only be bracketed between these upper and lower estimates (that is, between the training
and cross-validation predictive powers). As the noise in the recording grows, the model
parameters will overfit more and more to the noise, and hence both estimates will grow
looser. Indeed, in high-noise conditions, the model may primarily describe the stimulusindependent (noise) part of the training
data, and so the training predictive power might
exceed the estimated signal power ($\hat{P}(s)$), while the cross-validation predictive power may
fall below zero (that is, the model?s predictions may become more inaccurate than simply
predicting a constant response). As such, the estimates may not usefully constrain the
predictive power on a particular recording. However, assuming that the predictive power
of a single model class is similar for a population of similar neurons, the noise dependence
can be exploited to tighten the estimates when applied to the population as a whole, by
extrapolating within the population to the zero noise point. This extrapolation allows us
to answer the sort of question posed at the outset: how well, in an absolute sense, can a
particular SRF model class account for the responses of a population of neurons?
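The training (upper) and cross-validation (lower) predictive-power estimates can be sketched as follows for a generic ridge-regression SRF model. The model, penalty, fold count, and helper names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def predictive_power(r, r_hat):
    """Observed response power minus error power: P(r) - P(r - r_hat)."""
    def P(v):
        return np.mean((v - v.mean()) ** 2)
    return P(r) - P(r - r_hat)

def train_and_cv_predictive_power(X, r, ridge=1.0, n_folds=10):
    """Upper estimate (training predictive power) and lower estimate
    (cross-validation predictive power) for a ridge-regression model."""
    n = len(r)

    def fit(Xtr, rtr):
        A = np.column_stack([np.ones(len(Xtr)), Xtr])
        pen = ridge * np.eye(A.shape[1])
        pen[0, 0] = 0.0                      # do not penalize the offset
        return np.linalg.solve(A.T @ A + pen, A.T @ rtr)

    def predict(w, Xte):
        return w[0] + Xte @ w[1:]

    upper = predictive_power(r, predict(fit(X, r), X))   # on training data

    r_hat = np.empty_like(r, dtype=float)
    for te in np.array_split(np.arange(n), n_folds):     # held-out predictions
        tr = np.setdiff1d(np.arange(n), te)
        r_hat[te] = predict(fit(X[tr], r[tr]), X[te])
    lower = predictive_power(r, r_hat)
    return upper, lower
```

On synthetic linear data the training estimate exceeds the cross-validation estimate, bracketing the true predictive power of the model class between them.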
4 Experimental Methods
Extracellular neural responses were collected from the primary auditory cortex of rodents
during presentation of dynamic random chord stimuli. Animals (6 CBA/CaJ mice and 4
Long-Evans rats) were anaesthetized with either ketamine/medetomidine or sodium pentobarbital, and a skull fragment over auditory cortex was removed; all surgical and experimental procedures conformed to protocols approved by the UCSF Committee on Animal
Research. An ear plug was placed in the left ear, and the sound field created by the freefield speakers was calibrated near the opening of the right pinna. Neural responses (205
recordings collected from 68 recording sites) were recorded in the thalamo-recipient layers
[Figure 1 plots: scatter of signal power (spikes²/bin) vs. noise power (spikes²/bin), and histogram of number of recordings.]
Figure 1: Signal power in neural responses.
of the left auditory cortex while the stimulus (see below) was presented to the right ear.
Recordings often reflected the activity of a number of neurons; single neurons were identified by Bayesian spike-sorting techniques [13, 14] whenever possible. All analyses pool
data from mice and rats, barbiturate and ketamine/medetomidine anesthesia, high and low
frequency stimulation, and single-unit and multi-unit recordings; each group individually
matched the aggregate behaviour described here.
The dynamic random chord stimulus used in the auditory experiments was similar to that
used in a previous study [15], except that the intensity of component tone pulses was variable. Tone pulses were 20 ms in length, ramped up and down with 5 ms cosine gates. The
times, frequencies and sound intensities of the pulses were chosen randomly and independently from 20 ms bins in time, 1/12 octave bins covering either 2-32 or 25-100 kHz in frequency, and 5 dB SPL bins covering 25-70 dB SPL in level. At any time point, the stimulus averaged two tone pulses per octave, with an expected loudness of approximately 73 dB SPL for the 2-32 kHz stimulus and 70 dB SPL for the 25-100 kHz stimulus. The total duration of each stimulus was 60 s. At each recording site, the 2-32 kHz stimulus was repeated 20 times, and the 25-100 kHz stimulus was repeated 10 times.
Neural responses were binned at 20 ms, and STRFs fit by linear regression of the average
spike rate in each bin onto vectors formed from the amplitudes of tone pulses falling within
the preceding 300 ms of the stimulus (15 pulse-widths, starting with pulses coincident with
the target spike-rate bin). The regression parameters thus included a single filter weight
for each frequency-time bin in this window, and an additional offset (or bias) weight. A
Bayesian technique known as automatic relevance determination (ARD) [16] was used to
improve the STRF estimates. In this case, an additional parameter reflecting the average
noise in the response was also estimated. Models incorporating static output non-linearities
were fit by kernel regression between the output of the linear model (fit by ARD) and the
training data. The kernel employed was Gaussian with a half-width of 0.05 spike/bin; performance at this width was at least as good as that obtained by selecting widths individually
for each recording by leave-one-out cross-validation. Cross-validation for lower estimates
on model predictive power used 10 disjoint splits into 9/10 training data and 1/10 test data.
Extrapolation of the predictive powers in the population, shown in Figs. 2 and 3, was performed using polynomial fits. The degree of the polynomial, determined by leave-one-out
cross-validation, was quadratic for the lower estimates in Fig. 3 and linear in all other cases.
5 Results
We used the techniques described above to ask how accurate a description of auditory
cortex responses could be provided by the STRF. Recordings were binned to match the
discretization rate of the stimulus and the signal power estimated using equation (1). Fig. 1
shows the distribution of signal powers obtained, as a scatter plot against the estimated
noise power and as a histogram. The error bars indicate standard error intervals based on
the estimated variances obtained from equation (2). A total of 92 recordings in the data
set (42 from mouse, 50 from rat), shown by filled circles and histogram bars in Fig. 1,
had signal power greater than one standard error above zero. The subsequent analysis was
confined to these stimulus-responsive recordings.
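Equations (1) and (2) themselves fall outside this excerpt; a minimal sketch of the standard repeated-trials decomposition they refer to is given below. The function and variable names are ours, not the authors' code; the correction term reflects that the power of the trial-averaged response over-estimates the signal power by the residual noise variance, which shrinks as 1/n.

```python
import numpy as np

def signal_noise_power(responses):
    """Unbiased signal/noise power split from repeated presentations.

    responses : (n_trials, n_bins) binned responses to the same stimulus.
    """
    n = responses.shape[0]
    p_mean = responses.mean(axis=0).var()          # power of the mean response
    p_avg = np.mean([r.var() for r in responses])  # mean single-trial power
    p_signal = (n * p_mean - p_avg) / (n - 1)      # unbiased signal power
    p_noise = p_avg - p_signal                     # remaining noise power
    return p_signal, p_noise
```

Normalizing a model's predictive power by `p_signal` then gives the "normalized predictable power" plotted in Figs. 2 and 3.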
For each such recording we estimated an STRF model by minimum-MSE linear regression, which is equivalent to obtaining the Wiener kernel for the time-series. The training
predictive power of this model provided the upper estimate for the predictive power of
the model class. The minimum-MSE solution generalizes poorly, and so generates overly
pessimistic lower estimates in cross-validation. However, the linear regression literature
provides alternative parameter estimation techniques with improved generalization ability.
In particular, we used a Bayesian hyperparameter optimization technique known as Automatic Relevance Determination [16] (ARD) to find an optimized prior on the regression
parameters, and then chose parameters which optimized the posterior distribution under
this prior and the training data (this and other similar techniques are discussed in Sahani
and Linden, "Evidence Optimization Techniques for Estimating Stimulus-Response Functions", this volume). The cross-validation predictive power of these estimates served as the
lower estimates of the model class performance.
Fig. 2 shows the upper and lower estimates for the predictive power of the class of
linear STRF models in our population of rodent auditory cortex recordings, as a function
of the estimated noise level in each recording. The divergence of the estimates at higher
noise levels, described above, is evident. At low noise levels the estimates do not converge
perfectly, the extrapolated values being approximately 0.40 for the upper estimate and
approximately 0.18 for the lower (intervals are standard errors). This gap is indicative of an SRF model
class that is insufficiently powerful to capture the true stimulus-response relationship; even
if noise were absent, the trained model from the class would only be able to approximate
the true SRF in the region of the finite amount of data used for training, and so would
perform better on those training data than on test data drawn from outside that region.
Fig. 3 shows the same estimates for simulations derived from linear fits to the cortical
data. Simulated data were produced by generating Poisson spike trains with mean rates
as predicted by the ARD-estimated models for real cortical recordings, and rectifying so
that negative predictions were treated as zero. Simulated spike trains were then binned and
analyzed in the same manner as real spike trains. Since the simulated data are spectrogram-linear by construction apart from the rectification, we expect the estimates to converge to a
value very close to 1 with little separation. This result is evident in Fig. 3. Thus, the analysis
correctly reports that virtually all of the response power in these simulations is linearly
predictable from the stimulus spectrogram, attesting to the reliability of the extrapolated
estimates for the real data in Fig. 2.
Figure 2: Evaluation of STRF predictive power in auditory cortex. [Scatter plot and histogram of normalized linearly predictable power against normalized noise power.]
Figure 3: Evaluation of linearity in simulated data. [Same axes as Fig. 2.]
Some portion of the scatter of the points about the population average lines in Fig. 2 reflects
genuine variability in the population, and so the extrapolated scatter at zero noise is also
of interest. Intervals containing at least 50% of the population distribution for the cortical
data (assuming normal scatter) can be given for both the upper and the lower estimate.
These will be overestimates of the spread in the underlying
population distribution because of additional scatter from estimation noise. The variability
of STRF predictive power in the population appears unimodal, and the hypothesis that
the distributions of the deviations from the regression lines are zero-mean normal in both
cases cannot be rejected (Kolmogorov-Smirnov test). Thus the treatment of these
recordings as coming from a single homogeneous population is reasonable. In Fig. 3, there
is a small amount of downward bias and population scatter due to the varying amounts of
rectification in the simulations; however, most of the observed scatter is due to estimation
error resulting from the incorporation of Poisson noise.
The linear model is not constrained to predict non-negative firing rates. To test whether
including a static output non-linearity could improve predictions, we also fit models in
which the prediction from the ARD-derived STRF estimates was transformed time-point by
time-point by a non-parametric non-linearity (see Experimental Methods) to obtain a new
firing rate prediction. The resulting cross-validation predictive powers were compared to
those of the spectrogram-linear model (data not shown). The addition of a static output nonlinearity contributed very little to the predictive power of the STRF model class. Although
the difference in model performance was significant (Wilcoxon signed rank test), the mean
normalized predictive power increase with the addition of a static output non-linearity was
very small (0.031).
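The static output non-linearity fit by kernel regression (Gaussian kernel, half-width 0.05 spike/bin, as in the Methods) can be sketched as follows; the function and its argument names are our illustration, not the authors' implementation.

```python
import numpy as np

def static_nonlinearity(test_pred, train_pred, train_rate, h=0.05):
    """Nadaraya-Watson kernel regression of observed rate on the linear
    model's prediction; h is the Gaussian kernel half-width (spike/bin)."""
    w = np.exp(-0.5 * ((test_pred[:, None] - train_pred[None, :]) / h) ** 2)
    return (w @ train_rate) / w.sum(axis=1)
```

Applying this transform point-by-point to the linear prediction yields the elaborated model class compared above.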
6 Conclusions
We have demonstrated a novel way to evaluate the fraction of response power in a population of neurons that can be captured by a particular class of SRF models. The confounding
effects of noise on evaluation of model performance and estimation of model parameters
are overcome by two key analytic steps. First, multiple measurements of neural responses
to the same stimulus are used to obtain an unbiased estimate of the fraction of the response
variance that is predictable in principle, against which the predictive power of a model may
be judged. Second, Bayesian regression techniques are employed to lessen the effects of
noise on linear model estimation, and the remaining noise-related bias is eliminated by
exploiting the noise-dependence of parameter-estimation-induced errors in the predictive
power to extrapolate model performance for a population of similar recordings to the zero
noise point. This technique might find broad applicability to regression problems in neuroscience and elsewhere, provided certain essential features of the data considered here are
shared: repeated measurements must be made at the same input values in order to estimate the signal power; both inputs and repetitions must be numerous enough for the signal
power estimate, which appears in the denominator of the normalized powers, to be well-conditioned; and finally we must have a group of different regression problems, with different normalized noise powers, that might be expected to instantiate the same underlying
model class. Data with these features are commonly encountered in sensory neuroscience,
where the sensory stimulus can be reliably repeated. The outputs modelled may be spike
trains (as in the present study) or intracellular recordings; local-field, evoked-potential, or
optical recordings; or even fMRI measurements.
Applying this technique to analysis of the primary auditory cortex we find that spectrogram-linear response components can account for only 18% to 40% (on average) of the power
in extracellular responses to dynamic random chord stimuli. Further, elaborated models
that append a static output non-linearity to the linear filter are barely more effective at predicting responses to novel stimuli than is the linear model class alone. Previous studies
of auditory cortex have reached widely varying conclusions regarding the degree of linearity of neural responses. Such discrepancies may indicate that response properties are
critically dependent on the statistics of the stimulus ensemble [6, 5, 10], or that cortical
response linearity differs between species. Alternatively, as previous measures of linearity
have been biased by noise, the divergent estimates might also have arisen from variation
in the level of noise power across studies. Our approach represents the first evaluation of
auditory cortex response predictability that is free of this potential noise confound. The
high degree of response non-linearity we observe may well be a characteristic of all auditory cortical responses, given the many known non-linearities in the peripheral and central
auditory systems [17]. Alternatively, it might be unique to auditory cortex responses to
noisy sounds like dynamic random chord stimuli, or else may be general to all stimulus ensembles and all sensory cortices. Current and future work will need to be directed toward
measurement of auditory cortical response linearity using different stimulus ensembles and
in different species, and toward development of non-linear classes of models that predict
auditory cortex responses more accurately than spectrogram-linear models.
References
[1] Aertsen, A. M. H. J., Johannesma, P. I. M., & Hermes, D. J. (1980) Biol Cybern 38, 235-248.
[2] Eggermont, J. J., Johannesma, P. M., & Aertsen, A. M. (1983) Q Rev Biophys 16, 341-414.
[3] Kowalski, N., Depireux, D. A., & Shamma, S. A. (1996) J Neurophysiol 76, 3524-3534.
[4] Shamma, S. A., & Versnel, H. (1995) Aud Neurosci 1, 255-270.
[5] Nelken, I., Rotman, Y., & Yosef, O. B. (1999) Nature 397, 154-157.
[6] Rotman, Y., Bar-Yosef, O., & Nelken, I. (2001) Hear Res 152, 110-127.
[7] Nelken, I., Prut, Y., Vaadia, E., & Abeles, M. (1994) Hear Res 72, 206-222.
[8] Calhoun, B. M., & Schreiner, C. E. (1998) Eur J Neurosci 10, 926-940.
[9] Eggermont, J. J., Aertsen, A. M., & Johannesma, P. I. (1983) Hear Res 10, 167-190.
[10] Theunissen, F. E., Sen, K., & Doupe, A. J. (2000) J. Neurosci. 20, 2315-2331.
[11] Nelken, I., Prut, Y., Vaadia, E., & Abeles, M. (1994) Hear Res 72, 223-236.
[12] Lindgren, B. W. (1993) Statistical Theory. Chapman & Hall, 4th edition. ISBN 0412041812.
[13] Lewicki, M. S. (1994) Neural Comp 6, 1005-1030.
[14] Sahani, M. (1999) Ph.D. thesis, California Institute of Technology, Pasadena, California.
[15] deCharms, R. C., Blake, D. T., & Merzenich, M. M. (1998) Science 280, 1439-1443.
[16] MacKay, D. J. C. (1994) ASHRAE Transactions 100, 1053-1062.
[17] Popper, A., & Fay, R., eds. (1992) The Mammalian Auditory Pathway: Neurophysiology. Springer, New York.
Discriminative Densities from Maximum Contrast Estimation
Peter Meinicke
Neuroinformatics Group
University of Bielefeld
Bielefeld, Germany
[email protected]
Thorsten Twellmann
Neuroinformatics Group
University of Bielefeld
Bielefeld, Germany
[email protected]
Helge Ritter
Neuroinformatics Group
University of Bielefeld
Bielefeld, Germany
[email protected]
Abstract
We propose a framework for classifier design based on discriminative
densities for representation of the differences of the class-conditional distributions in a way that is optimal for classification. The densities are
selected from a parametrized set by constrained maximization of some
objective function which measures the average (bounded) difference, i.e.
the contrast between discriminative densities. We show that maximization of the contrast is equivalent to minimization of an approximation
of the Bayes risk. Therefore using suitable classes of probability density functions, the resulting maximum contrast classifiers (MCCs) can
approximate the Bayes rule for the general multiclass case. In particular
for a certain parametrization of the density functions we obtain MCCs
which have the same functional form as the well-known Support Vector Machines (SVMs). We show that MCC-training in general requires
some nonlinear optimization but under certain conditions the problem
is concave and can be tackled by a single linear program. We indicate
the close relation between SVM- and MCC-training and in particular we
show that Linear Programming Machines can be viewed as an approximate realization of MCCs. In the experiments on benchmark data sets,
the MCC shows a competitive classification performance.
1 Introduction
In the Bayesian framework of classification the ultimate goal of a classifier $y(x)$ is to minimize the expected risk of misclassification $R(y)$, which is measured by a loss function $L(y(x), k)$ denoting the loss for assigning a given feature vector $x$ to class $y(x)$ while it actually belongs to class $k$, with $K$ being the number of classes. With $p_k(x)$ being the class-conditional probability density functions (PDFs) and $P_k$ denoting the corresponding a priori probabilities of class-membership we have the risk

$$R(y) = \sum_{k=1}^{K} P_k \int L(y(x), k)\, p_k(x)\, dx \qquad (1)$$

With the standard "zero-one" loss function $L(y, k) = 1 - \delta_{yk}$, where $\delta_{yk}$ denotes the Kronecker delta, it is easy to show (see e.g. [3]) that the expected risk is minimized if one chooses the classifier

$$y^*(x) = \arg\max_k\; P_k\, p_k(x) \qquad (2)$$
The resulting lower bound $R(y^*)$ is known as the Bayes risk, which limits the average performance of the classifier. Because the class-conditional densities are usually unknown,
one way to realize the above classifier is to use estimates of these densities instead. This
leads to the so-called plug-in classifiers, which are Bayes-consistent if the density estimators are consistent (e.g. [9]). Due to the notoriously slow convergence of density estimates
the plug-in scheme usually isn't the best recipe for classifier design, and as an alternative
many discriminant functions including Neural Networks (see [1, 9] for an overview) and
Support Vector Machines (SVMs) [2, 12] have been proposed which are trained directly to
minimize the empirical classification error.
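A minimal sketch of such a plug-in classifier, replacing the true densities in (2) with fixed-bandwidth kernel density estimates, is shown below; the function names and the bandwidth are our choices, not part of the paper.

```python
import numpy as np

def kde_log_density(x, centers, h):
    # Log of a Gaussian KDE, up to a normalization constant that is the
    # same for all classes (shared h) and hence irrelevant for the argmax.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.log(np.exp(-d2 / (2.0 * h * h)).mean(axis=1) + 1e-300)

def plug_in_classify(x, class_samples, priors, h=0.5):
    """Plug-in Bayes rule: argmax_k P_k * p_hat_k(x)."""
    scores = np.stack([np.log(p) + kde_log_density(x, s, h)
                       for s, p in zip(class_samples, priors)])
    return scores.argmax(axis=0)
```

Consistency of this rule hinges on consistency of the density estimates, which is exactly the slow-convergence issue the text points out.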
We recently proposed a method for the design of density-based classifiers without resorting to the usual density estimation schemes of the plug-in approach [6]. Instead we utilized discriminative densities with parameters optimized to solve the classification problem. The approach requires maximization of the average bounded difference between class (discriminative) densities $g_k(x; \theta_k)$, which we refer to as the contrast of the underlying "true" distributions. The $\Theta_{\max}$-bounded contrast is the expectation

$$C(\Theta_{\max}) = \sum_{k=1}^{K} P_k \int \min\Big\{\Theta_{\max},\; P_k\, g_k(x; \theta_k) - \max_{l \neq k} P_l\, g_l(x; \theta_l)\Big\}\, p_k(x)\, dx \qquad (3)$$

The idea is to find discriminative densities $g_k$ which represent the underlying distributions with "true" densities $p_k$ in a way that is optimal for classification. When maximizing the contrast with respect to the parameters $\theta_k$ of the discriminative densities, the upper bound $\Theta_{\max}$ plays a central role because it prevents the learning algorithm from increasing the differences between discriminative densities where the differences between the true densities are already large.
In this paper we show that with some slight modification the contrast can be viewed as
an approximation of the negative Bayes risk (up to some constant shift and scaling) which
is valid for the binary as well as for the general multiclass case. Therefore for certain
parametrizations of the discriminative densities MCCs allow to find an optimal trade-off
between the classical plug-in Bayes-consistency and the consistency which arises from direct minimization of the approximate Bayes risk. Furthermore, for a particular parametrization of the PDFs, we obtain certain kinds of Linear Programming Machines (LPMs) [4] as
(in general) approximate solutions of maximum contrast estimation. In that way MCCs
provide a Bayes-consistent approach to realize multiclass LPMs / SVMs and they suggest
an interpretation of the magnitude of the LPM / SVM classification function in terms of
density differences which provide a probabilistic measure of confidence. For the case of
LPMs we propose an extended optimization procedure for maximization of the contrast
via iteration of linear optimizations. Inspired by the MCC-framework, for the resulting
Sequential Linear Programming Machines (SLPM) we propose a new regularizer which
allows to find an optimal trade-off between the above mentioned two approaches to Bayes
consistency. In the experiments we analyse the performance of the SLPM on simulated and
real world data.
2 Maximum Contrast Estimation
For the design of MCCs the first step, which is the same as for the plug-in concept, requires
to replace the unknown class-conditional densities of the Bayes classifier (2) by suitably
parametrized PDFs. Then, instead of choosing the parameters for an approximation of
the original (true) densities (e.g. by maximum likelihood estimation) as with the plug-in
scheme, the density parameters are chosen to maximize the so-called contrast, which is the expected value of the $\Theta_{\max}$-bounded density differences as defined in (3).
For the case of an unbounded contrast, i.e. $\Theta_{\max} = \infty$, the general maximum contrast solution can be found analytically, and for notational simplicity we derive it for the binary case with equal a priori probabilities, where the contrast can be written as

$$C = \tfrac{1}{2}\int \big(g_1(x) - g_2(x)\big)\, p_1(x)\, dx + \tfrac{1}{2}\int \big(g_2(x) - g_1(x)\big)\, p_2(x)\, dx = \tfrac{1}{2}\int \big(g_1(x) - g_2(x)\big)\big(p_1(x) - p_2(x)\big)\, dx$$

Thus the unbounded contrast is maximized for $g_1(x) = \delta(x - x_1^*)$ and $g_2(x) = \delta(x - x_2^*)$, with the peaks of the Dirac delta functions located at $x_1^* = \arg\max_x [p_1(x) - p_2(x)]$ and $x_2^* = \arg\max_x [p_2(x) - p_1(x)]$, respectively. Obviously, these are not the best discriminative densities we may think of, and therefore we require an appropriate bound $\Theta_{\max} < \infty$.
For finite $\Theta_{\max}$, maximization of the contrast enforces a redistribution of the estimated probability mass and gives rise to a constrained linear optimization problem in the space of discriminative densities, which may be solved by variational methods in some cases.
The relation between contrast and Bayes risk becomes more convenient when we slightly modify the above definition (3) by a unit upper bound and by adding a lower bound on the $s$-scaled density differences:

$$C(s) = \sum_{k=1}^{K} P_k \int \max\Big\{{-1},\; \min\Big\{1,\; s\big(P_k\, g_k(x; \theta_k) - \max_{l \neq k} P_l\, g_l(x; \theta_l)\big)\Big\}\Big\}\, p_k(x)\, dx \qquad (4)$$

with scale factor $s = 1/\Theta_{\max}$. Therefore, for an infinite scale factor the (expected) contrast approaches the negative Bayes risk up to constant shift and scaling:

$$\lim_{s \to \infty} C(s) = 1 - 2\,R(y_g) \qquad (5)$$

where $y_g(x) = \arg\max_k P_k\, g_k(x)$ is the classifier induced by the discriminative densities. Thus the scale factor defines a subset of the input space which includes the decision boundary and which becomes increasingly focused in its vicinity as $s \to \infty$. The extent of the region is defined by the bounds $\pm 1$ on the scaled difference between discriminative densities. In terms of the contrast function it can be defined as

$$X_s = \Big\{x : \big|\, s\big(P_{y_g(x)}\, g_{y_g(x)}(x) - \max_{k \neq y_g(x)} P_k\, g_k(x)\big)\big| < 1 \Big\} \qquad (6)$$
Since for MCC-training we maximize the empirical contrast, i.e. the corresponding sample average of the bounded differences in (4), the scale factor then defines a subset of the training data which has impact on learning of the decision boundary. Thus for increasing scale factor the relative size of that subset shrinks. However, for increasing size of the training set the scale factor can be gradually increased, and then, for suitable classes of PDFs, MCCs can approach the Bayes rule. In other words, $s$ acts as a regularization parameter such that, for particular choices of the PDF class, convergence to the Bayes classifier can be achieved if the quality of the approximation of the loss function is gradually increased for increasing sample sizes. In the following section we shall consider such a class of PDFs which is flexible enough and which turns out to include a certain kind of SVM.
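The limit (5) can be checked numerically for a one-dimensional example with two Gaussian classes, taking the discriminative densities equal to the true ones. The grid, the class parameters, and the value of $s$ below are arbitrary choices of ours for illustration.

```python
import numpy as np
from math import erf, sqrt

# Two 1-D classes with true densities N(-1, 1) and N(+1, 1), equal priors,
# and discriminative densities taken equal to the true densities.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
p1 = np.exp(-0.5 * (x + 1) ** 2) / np.sqrt(2 * np.pi)
p2 = np.exp(-0.5 * (x - 1) ** 2) / np.sqrt(2 * np.pi)

def contrast(s):
    """C(s): expectation of the clipped, s-scaled density differences."""
    d1 = np.clip(s * 0.5 * (p1 - p2), -1.0, 1.0)
    d2 = np.clip(s * 0.5 * (p2 - p1), -1.0, 1.0)
    return 0.5 * (d1 * p1).sum() * dx + 0.5 * (d2 * p2).sum() * dx

bayes_risk = 0.5 * (1 + erf(-1 / sqrt(2)))  # misclassification prob. = Phi(-1)
```

For large $s$ the contrast approaches $1 - 2R$, i.e. the negative Bayes risk up to shift and scaling.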
3 MCC-Realizations
In the following we shall first consider a particularly useful parametrization of the discriminative densities which gives rise to classifiers which in the binary case have the same
functional form as SVMs up to a "missing" bias term in the MCC-case. For training of
these MCCs we derive a suitable objective function which can be maximized by sequential
linear programming where we show the close relation to training of Linear Programming
Machines.
3.1 Density Parametrization
We first have to choose a set of candidate functions from which we select the required PDF. Because this set should provide some flexibility with respect to contrast maximization, the usual kernel density estimator (KDE) [11]

$$g_k(x) = \frac{1}{N_k} \sum_{i \in I_k} K_h(x, x_i) \qquad (7)$$

with index set $I_k$ containing the $N_k$ indices of examples from class $k$, and with normalized kernel functions according to $\int K_h(x, x')\, dx = 1$, isn't a quite good choice, since the only free parameter is the kernel bandwidth $h$, which doesn't allow for any local adaptation. On the other hand, if we allow for local variation of the bandwidth we get a complicated contrast which is difficult to maximize due to nonlinear dependencies on the parameters. The same is true if we treat the kernel centers as free parameters. However, if we modify the kernel density estimator to have flexible mixing weights according to

$$g_k(x; w) = \sum_{i \in I_k} w_i\, K_h(x, x_i), \qquad w_i \geq 0, \quad \sum_{i \in I_k} w_i = 1 \qquad (8)$$

we get an objective function which is linear in the mixing parameters under certain conditions. Thus we have class-specific densities with mixing weights $w_i$ which control the contribution of a single training example to the PDF.
With that choice we achieve plug-in Bayes-consistency for the case of equal mixing weights $w_i = 1/N_k$, since then we have the usual kernel density estimator (KDE), which, besides some mild assumptions about the distributions, requires a vanishing kernel bandwidth for increasing sample size, so that we can write the empirical contrast, i.e. the sample average over training examples, as

$$\hat C(w) = \frac{1}{N} \sum_{n=1}^{N} \max\Big\{{-1},\; \min\Big\{1,\; P_{c_n}\, g_{c_n}(x_n; w) - \max_{k \neq c_n} P_k\, g_k(x_n; w)\Big\}\Big\} \qquad (9)$$

where $c_n$ denotes the class label of training example $x_n$.
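A sketch of the flexible-weight density model of the form (8), with a Gaussian kernel, is given below; the function name and the specific kernel normalization are our choices for illustration.

```python
import numpy as np

def weighted_kde(x, centers, weights, h):
    """g(x) = sum_i w_i K_h(x, x_i), with weights on the probability simplex."""
    d = centers.shape[1]
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2 * h * h)) / (2 * np.pi * h * h) ** (d / 2)
    return K @ weights
```

With equal weights $w_i = 1/N_k$ this reduces to the plain KDE (7); contrast maximization reallocates the weights instead.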
3.2 Objective Function
For notational simplicity, in the following we shall incorporate the scale factor $s$ and the mixing weights into a common parameter vector $w$ with $w_i \geq 0$ and $s = \|w\|_1$. Further we define the scaled density difference over training examples as

$$d_n(w) = P_{c_n}\, g_{c_n}(x_n; w) - \sum_{k \neq c_n} z_{nk}\, P_k\, g_k(x_n; w), \qquad z_{nk} \in \{0, 1\}, \quad \sum_{k \neq c_n} z_{nk} = 1 \qquad (10)$$
where the assignment variables $z_{nk}$ realize the maximum function in (4). With fixed assignment variables $z_{nk}$ the empirical contrast is concave, and maximization with respect to $w$ gives rise to a linear optimization problem. On the other hand, for fixed $w$ maximization with respect to the $z_{nk}$ is achieved by setting $z_{nk} = 1$ for the wrong class with maximal density and $z_{nk} = 0$ otherwise. This suggests a sequential linear optimization strategy for overall maximization of the contrast, which shall be introduced in detail in the following section.
Since we have already incorporated $s$ as a scaling factor into the parameter vector $w$, $s$ is now identified with the norm $\|w\|_1$. Therefore the scale factor can be adjusted implicitly by a regularization term which penalizes some suitable norm of the $w_i$. Thus a suitable objective function can be defined by

$$J(w) = \hat C(w) - \lambda\, \Omega(w) \qquad (11)$$
with $\lambda$ determining the weight of the penalty $\Omega$, i.e. the degree of regularization. We now consider several instances of the case where the penalty corresponds to some $p$-norm of $w$. With the $1$-norm, for large $\lambda$ the probability mass of the discriminative densities is concentrated on those two kernel functions which yield the highest average density difference. Although that property forces the sparsest solution for large enough $\lambda$, clearly that solution isn't Bayes-consistent in general because, as pointed out in Sec. 2, in that limit all probability mass of the discriminative densities is concentrated at the two points with maximum average density difference.
Conversely, taking the squared $2$-norm, which resembles the standard SVM regularizer [10], yields the KDE with equal mixing weights for $\lambda \to \infty$. Indeed, it is easy to see that all $p$-norm penalties with $p > 1$ share this convenient property, which guarantees "plug-in" Bayes consistency in the case where the solution is totally determined by the regularizer. In that case kernel density estimators are achieved as the "default" solution. Therefore we chose a combination of the $1$-norm with the maximum-norm,

$$\Omega(w) = \|w\|_1 + \gamma\, \|w\|_\infty \qquad (12)$$
which is easily incorporated into a linear program, as to be shown in the following. For that kind of penalty, in the limiting case $\lambda \to \infty$ we achieve an equal distribution of the weights, which corresponds to the kernel density estimator (KDE) solution. In that way we have a nice trade-off between two kinds of Bayes consistency: for increasing $\lambda$ the class-specific densities converge to the KDE with equal mixing weights, whereas for decreasing $\lambda$ the probability mass of the discriminative densities is more and more concentrated near the Bayes-optimal decision boundary. By a suitable choice of the kernel width and the scale of the weights, e.g. via cross-validation, the solution with fastest convergence to the Bayes rule may be selected.
With a 1-norm penalty on the weights and on the vector of soft margin slack variables we get the Linear Programming Machine, which requires minimizing the 1-norm of the weights plus a weighted sum of the slack variables, subject to the margin constraints on the training data (13). Dividing the objective by the regularization constant, subtracting a constant, and turning minimization into maximization of the negative objective shows that LPM training corresponds to a special case of MCC training with fixed parameters and a 1-norm regularizer.
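For concreteness, a Linear Programming Machine can be set up directly with a generic LP solver. The sketch below follows the standard LP-machine form (an assumption, not the paper's code): minimize the 1-norm of the kernel expansion coefficients plus C times the slacks, subject to margin constraints; the data, kernel width, and C are illustrative.

```python
# Sketch of a Linear Programming Machine in the standard LP-machine form
# (assumed here, not taken from the paper): minimize ||alpha||_1 + C * sum(xi)
# subject to  y_i (sum_j alpha_j K(x_i, x_j) + b) >= 1 - xi_i.
import numpy as np
from scipy.optimize import linprog

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_lpm(X, y, C=10.0, gamma=0.5):
    n = len(y)
    K = rbf(X, X, gamma)
    # LP variables: alpha+ (n), alpha- (n), b+, b-, xi (n), all >= 0
    c = np.concatenate([np.ones(2 * n), [0.0, 0.0], C * np.ones(n)])
    yK = y[:, None] * K
    A_ub = np.hstack([-yK, yK, -y[:, None], y[:, None], -np.eye(n)])
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(0, None)] * (3 * n + 2), method="highs")
    z = res.x
    return z[:n] - z[n:2 * n], z[2 * n] - z[2 * n + 1]

X = np.array([[0.0], [1.0], [3.0], [4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = train_lpm(X, y)
print(np.sign(rbf(X, X) @ alpha + b))   # recovers y on this separable toy set
```

Splitting alpha and b into positive and negative parts is the usual trick for expressing a 1-norm objective and a free intercept as a linear program over nonnegative variables.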
3.3 Sequential Linear Programming
Estimation of the mixing weights is now achieved by maximizing the sample contrast with respect to the α_i and the assignment variables. This can be achieved by the following iterative optimization scheme:

1. Initialization: initialize the assignment variables.

2. Maximization w.r.t. the α_i for fixed assignments: solve the linear program maximizing the sample contrast subject to the norm constraints on α.

3. Maximization w.r.t. the assignment variables for fixed α: reassign each variable so as to maximize its contribution to the contrast.

4. If convergence in contrast then stop, else proceed with step 2.
Here the slack variables measure the part of the density difference which can be charged to the objective function. The constraint in the linear program was chosen in order to prevent the trivial solution α = 0, which may otherwise appear for larger values of λ. Since we used unnormalized Gaussian kernel functions, i.e. we excluded all multiplicative density constants, that constraint doesn't exclude any useful solutions for the weights.
4 Experiments
In the following section we consider the task of solving binary classification problems
within the MCC-framework, using the above SLPM with Gaussian kernel function. The
first experiment
illustrates the behaviour of the MCC for different values of the regularization parameter λ by means of a simple two-dimensional toy dataset. The second experiment
compares the classification performance of the MCC with that of the SVM and the Kernel-Density Classifier (KDC), which is a special case of the MCC with equal weighting of each kernel function. To this end, we selected four frequently used benchmark datasets from the UCI Machine Learning Repository.
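The KDC baseline used in the comparison is straightforward to state in code. The sketch below (illustrative data and bandwidth, not the authors' implementation) classifies by the sign of the difference of the two equally weighted class-conditional kernel density estimates, using unnormalized Gaussian kernels as in the experiments.

```python
# Sketch of the Kernel-Density-Classifier (KDC) baseline: an MCC with equal
# mixing weights, classifying by the sign of the difference of the two
# class-conditional kernel density estimates.  Unnormalized Gaussian kernels
# are used as in the experiments; data and bandwidth are illustrative.
import numpy as np

def kdc_predict(X_pos, X_neg, x, bandwidth=1.0):
    def density(S):
        d2 = ((S - x) ** 2).sum(axis=1)                  # squared distances
        return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean()
    return 1 if density(X_pos) - density(X_neg) >= 0 else -1

X_pos = np.array([[1.0, 1.0], [1.2, 0.8]])
X_neg = np.array([[-1.0, -1.0], [-0.8, -1.2]])
print(kdc_predict(X_pos, X_neg, np.array([0.9, 1.1])))   # -> 1
```

Because every kernel gets the same weight, this classifier needs no training beyond choosing the bandwidth, which is why it appears in the tables with 100% non-zero coefficients.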
The two-dimensional toy dataset consists of 300 data points, sampled from two overlapping isotropic normal distributions with a fixed mutual distance and standard deviation.
Figure 1 shows the solution of the MCC for two different values of λ (only data points with non-zero weights α_i are marked by symbols). In both figures, data points with large mixing weights are located near the decision border. In particular, for small λ there are regions of high contrast alongside the decision function (illustrated by isolines). For increasing λ the number of data points with non-zero α_i increases. At the same time, one can note a decrease of the difference between the weights. Regions of low contrast are highlighted gray. For small values of λ, these regions are nearer to the decision border than for large values. This illustrates that for increasing λ the quality of the approximation of the loss function decreases. In both figures, several data points are misclassified with low contrast. The MCC identified those data points as outliers and deactivated them during the training (encircled symbols).
The second experiment demonstrates the performance of the MCC in comparison with
those of a Support Vector Machine, as one of the state-of-the-art binary classifiers, and
with the KDC. For this experiment we selected the Pima Indian Diabetes, Breast-Cancer,
Heart and Thyroid dataset from the UCI Machine Learning repository. The Support Vector
Machine was trained using the Sequential Minimal Optimization algorithm by J. Platt [7], adjusted according to the modification proposed by S. S. Keerthi [5].
(Figure 1 panels: 300 data points each; λ = 0.2 (left) and λ = 4.2 (right).)
Figure 1: Two MCC solutions for the two-dimensional toy dataset for different values of λ (left: λ = 0.2, right: λ = 4.2). The symbols depict the positions of data points with non-zero α_i. The size of each symbol is scaled according to the value of the corresponding α_i. Encircled symbols have been deactivated during the training (symbols for deactivated data points are not scaled according to α_i, since in most cases α_i is zero). The absolute value of the contrast is illustrated by the isolines, while the sign of the contrast depicts the binary classification of the classifier. The region which corresponds to the approximation as defined in (6) is colored white and the complement colored gray. The percentage of data points that define the solution is given for the left and the right figure.
.
The experimental setup was comparable with that in [8]: after normalization to zero mean and unit standard deviation, each dataset was divided 100 times into different pairs of disjoint train- and test-sets with a fixed ratio (provided by G. Rätsch at http://ida.first.gmd.de/~raetsch/data/benchmarks.htm). Since we used for all classifiers the
Gaussian kernel function, all three algorithms are parametrized by the bandwidth. Additionally, for the SVM and MCC the regularization value had to be chosen. The optimal
parametrization was chosen by estimating the generalization performance for different values of bandwidth and regularization by means of the average test error on the first five
dataset partitions. More precisely, a first coarse scan was performed, followed by a fine
scan in the interval near the optimal values of the first one. Each scan considered 1600 different combinations of bandwidth and regularization. For parameter pairs with identical test error, the pair constructing the sparsest solution was kept. Finally, the reported values in
Tab.1 and Tab.2 are averaged over all 100 dataset partitions.
Table 1 shows the optimal parametrization of the MCC in combination with the classification rate and sparseness of the solution (measured as the percentage of non-zero α_i).
Additionally, the corresponding values after the first MCC iteration are given in brackets.
The last two columns show the absolute number of iterations and the final number of deactivated examples. For all four datasets the MCC is able to find a sparse solution. In
particular for the Heart, Breast-Cancer and Diabetes dataset the solution of the MCC is
significantly sparser than those of the SVM (see Tab.2). Nevertheless, Tab.2 indicates that
the classification rates of the MCC are competitive with those of the SVM.
5 Conclusion
The MCC-approach provides an understanding of SVMs / LPMs in terms of generative
modelling using discriminative densities. While usual unsupervised density estimation
schemes try to minimize some distance criterion (e.g. Kullback-Leibler divergence) be-
Table 1: Optimal parametrization (bandwidth, regularization), classification rate, percentage of non-zero α_i, number of MCC iterations, and number of deactivated examples. The results are averaged over all 100 dataset partitions. For the classification rate and the percentage of non-zero coefficients, the corresponding value after the first MCC iteration is given in brackets.
Dataset         Bandwidth   λ        Classif. rate   % non-zero α_i   Iter.   Deactivated
Breast-Cancer   1.38        12.17    74.3 (74.4)     13.6 (13.8)      2.23    2.6
Heart           2.69        2.066    84.3 (84.1)     20.4 (21.2)      3.10    6.4
Thyroid         0.49        2.624    95.5 (95.5)     46.1 (46.1)      1.00    0.0
Diabetes        4.52                 76.6 (76.5)     5.3 (5.5)        5.86    40.7
Table 2: Summary of the performance of the KDC, SVM and MCC for the four benchmark datasets. Given are the classification rates, with the percentage of non-zero α_i in brackets. Note that our results for the SVM are slightly better than those reported in [8]. One reason could be the coarse parameter selection for the SVM, as already mentioned by the author.
Dataset         KDC           SVM           MCC
Breast-Cancer   73.1 (100)    74.5 (58.5)   74.3 (13.6)
Heart           84.1 (100)    84.4 (60.9)   84.3 (20.4)
Thyroid         95.6 (100)    95.7 (15.8)   95.5 (46.1)
Diabetes        74.2 (100)    76.7 (53.6)   76.6 (5.3)
tween the models and the true densities, MC-estimation aims at learning densities which represent the differences of the underlying distributions in an optimal way for classification. Future work will address the investigation of the general multiclass performance and the capability to cope with mislabeled data.
References
[1] C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995.
[2] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273?297, 1995.
[3] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[4] T. Graepel, R. Herbrich, B. Schölkopf, A. Smola, P. Bartlett, K.-R. Müller, K. Obermayer, and B. Williamson. Classification on proximity data with LP-machines, 1999.
[5] S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. Improvements to Platt's SMO algorithm for SVM classifier design. Technical report, Dept. of CSA, IISc, Bangalore, India, 1999.
[6] P. Meinicke, T. Twellmann, and H. Ritter. Maximum contrast classifiers. In Proc. of the Int.
Conf. on Artificial Neural Networks, Berlin, 2002. Springer. in press.
[7] J. Platt. Fast training of support vector machines using sequential minimal optimization. In
B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support
Vector Learning, pages 185?208, Cambridge, MA, 1999. MIT Press.
[8] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Department of Computer Science, Royal Holloway, University of London, Egham, UK, August 1998. Submitted to Machine Learning.
[9] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.
[10] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
[11] D. W. Scott. Multivariate Density Estimation. Wiley, 1992.
[12] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
Dynamical Constraints on Computing
with Spike Timing in the Cortex
Arunava Banerjee and Alexandre Pouget
Department of Brain and Cognitive Sciences
University of Rochester, Rochester, New York 14627
{arunavab, alex} @bcs.rochester.edu
Abstract
If the cortex uses spike timing to compute, the timing of the spikes
must be robust to perturbations. Based on a recent framework that
provides a simple criterion to determine whether a spike sequence
produced by a generic network is sensitive to initial conditions, and
numerical simulations of a variety of network architectures, we
argue within the limits set by our model of the neuron, that it is
unlikely that precise sequences of spike timings are used for
computation under conditions typically found in the cortex.
1 Introduction
Several models of neural computation use the precise timing of spikes to encode
information. For example, Abeles et al. have proposed synchronous volleys of
spikes (synfire chains) as a candidate for representing information in the cortex [1].
More recently, Maass has demonstrated how spike timing in general, not merely
synfire chains, can be utilized to perform nonlinear computations [6].
For any of these schemes to function, the timing of the spikes must be robust to
small perturbations; i.e., small perturbations of spike timing should not result in
successively larger fluctuations in the timing of subsequent spikes. To use the
terminology of dynamical systems theory, the network must not exhibit sensitivity
to initial conditions. Indeed, reliable computation would simply be impossible if the
timing of spikes is sensitive to the slightest source of noise, such as synaptic release
variability, or thermal fluctuations in the opening and closing of ionic channels.
Diesmann et al. have recently examined this issue for the particular case of synfire
chains in feed-forward networks [4]. They have demonstrated that the propagation
of a synfire chain over several layers of integrate-and-fire neurons can be robust to 2
Hz of random background activity and to a small amount of noise in the spike
timings. The question we investigate here is whether this result generalizes to the
propagation of any arbitrary spatiotemporal configuration of spikes through a
recurrent network of neurons. This question is central to any theory of computation
in cortical networks using spike timing since it is well known that the connectivity
between neurons in the cortex is highly recurrent. Although there have been earlier
attempts at resolving like issues, the applicability of the results are limited by the
model of the neuron [8] or the pattern of propagated spikes [5] considered.
Before we can address this question in a principled manner, however, we must
confront a couple of confounding issues. First stands the problem of stationarity. As
is well known, Lyapunov characteristic exponents of trajectories are limit quantities
that are guaranteed to exist (almost surely) in classical dynamical systems that are
stationary. In systems such as the cortex that receive a constant barrage of transient
inputs, it is questionable whether such a concept bears much relevance. Fortunately,
our simulations indicate that convergence or divergence of trajectories in cortical
networks can occur very rapidly (within 200-300 msec). Assuming that external
inputs do not change drastically over such short time scales, one can reasonably
apply the results from analysis under stationary conditions to such systems.
Second, the issues of how a network should be constructed so as to generate a
particular spatiotemporal pattern of spikes as well as whether a given spatiotemporal
pattern of spikes can be generated in principle, remain unresolved in the general
setting. It might be argued that without such knowledge, any classification of spike
patterns into sensitive and insensitive classes is inherently incomplete. However, as
shall be demonstrated later, sensitivity to initial conditions can be inferred under
relatively weak conditions. In addition, we shall present simulation results from a
variety of network architectures to support our general conclusions.
The remainder of the paper is organized as follows. In section 2, we briefly review
relevant aspects of the dynamical system corresponding to a recurrent neuronal
network as formulated in [2] and formally define "sensitivity to initial conditions".
In Section 3, we present simulation results from a variety of network architectures.
In Section 4, we interpret these results formally which in turn lead us to an
additional set of experiments. In Section 5, we draw conclusions regarding the issue
of computation using spike timing in cortical networks based on these results.
2 Spike dynamics
A detailed exposition of an abstract dynamical system that models recurrent systems
of biological neurons was presented in [2]. Here, we recount those aspects of the
system that are relevant to the present discussion. Based on the intrinsic nature of
the processes involved in the generation of postsynaptic potentials (PSP's) and of
those involved in the generation of action potentials (spikes), it was shown that the
state of a system of neurons can be specified by enumerating the temporal positions
of all spikes generated in the system over a bounded past. For example, in Figure 1,
the present state of the system is described by the positions of the spikes (solid
lines) in the shaded region at t= 0 and the state of the system at a future time T is
specified by the positions of the spikes (solid lines) in the shaded region at t= T.
Each internal neuron i in the system is assigned a membrane potential function P_i(·) that takes as its input the present state and generates the instantaneous potential at the soma of neuron i. It is the particular instantiation of the set of functions P_i(·) that
determines the nature of the neurons as well as their connectivity in the network.
Consider now the network in Figure 1 initialized at the particular state described by
the shaded region at t= O. Whenever the integration of the PSP's from all presynaptic
spikes to a neuron combined with the hyperpolarizing effects of its own spikes (the precise nature of the union specified by P_i(·)) brings its membrane potential above
threshold, the neuron emits a new spike. If the spikes in the shaded region at t= 0
were perturbed in time ( dotted lines), this would result in a perturbation on the new
spike. The size of the new perturbation would depend upon the positions of the
spikes in the shaded region, the nature of PJ) , and the sizes of the old
perturbations. This scenario would in turn repeat to produce further perturbations on
future spikes. In essence, any initial set of perturbations would propagate from spike
to spike to produce a set of perturbations at any arbitrary future time t= T.
(Figure 1 schematic: spike rasters between the bounded-past windows at t = 0 and t = T; see caption below.)
Figure 1: Schematic diagram of the spike dynamics of a system of neurons.
Input neurons are colored gray and internal neurons black. Spikes are shown
in solid lines and their corresponding perturbations in dotted lines. Note that
spikes generated by the input neurons are not perturbed. Gray boxes
demarcate a bounded past history starting at time t. The temporal position of
all spikes in the boxes specify the state of the system at times t= 0 and t= T.
It is of considerable importance to note at this juncture that while the specification
of the network architecture and the synaptic weights determine the precise temporal
sequence of spikes generated by the network, the relative size of successive
perturbations are determined by the temporal positions of the spikes in successive
state descriptions at the instant of the generation of each new spike. If it can be
demonstrated that there are particular classes of state descriptions that lead to large
relative perturbations, one can deduce the qualitative aspects of the dynamics of a
network armed with only a general description of its architecture. A formal analysis
in Section 4 will bring to light such a classification.
Let column vectors Δ₀ and Δ_T denote, respectively, the perturbations on the spikes of internal neurons at times t = 0 and t = T. We pad each vector with as many zeroes as there are input spikes in the respective state descriptions. Let A_T denote the matrix such that Δ_T = A_T Δ₀. Let B and C be the matrices, as described in [3], that discard the rigid translational components from the final and initial perturbations. Then, the dynamics of the system is sensitive to initial conditions if lim_{T→∞} ‖B · A_T · C‖ = ∞. If instead lim_{T→∞} ‖B · A_T · C‖ = 0, the dynamics is insensitive to initial conditions.
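This criterion can be illustrated with a toy linear perturbation map. In the sketch below (a hypothetical stand-in, not the paper's computation: the fixed 2x2 Jacobians replace the per-spike propagation maps of the model), the norm of the accumulated matrix grows without bound when the map has an expanding direction and decays otherwise.

```python
# Toy illustration of the sensitivity criterion: read sensitivity off the
# growth of the accumulated propagation matrix.  The 2x2 Jacobians below are
# hypothetical stand-ins for the per-spike maps composing A_T.
import numpy as np

def is_sensitive(J, T=200):
    A = np.linalg.matrix_power(J, T)      # A_T for a fixed per-step map J
    return bool(np.linalg.norm(A, 2) > 1.0)

J_expand = np.array([[1.1, 0.0], [0.3, 0.5]])   # one eigenvalue > 1
J_shrink = np.array([[0.9, 0.0], [0.3, 0.5]])   # all eigenvalues < 1
print(is_sensitive(J_expand), is_sensitive(J_shrink))   # True False
```

In the actual system the per-spike maps vary along the trajectory, so the limit need not exist; the point of the weaker criterion in the text is that divergence or convergence can still be inferred.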
A few comments are in order here. First, our interest lies not in the precise values of
the Lyapunov characteristic exponents of trajectories (where they exist), but in
whether the largest exponent is greater than or less than zero. Furthermore, the class
of trajectories that satisfy either of the above criteria is larger (although not
necessarily in measure) than the class of trajectories that have definite exponents.
Second, input spikes are free parameters that have to be constrained in some manner
if the above criteria are to be well-defined. By the same token, we do not consider
the effects that perturbations of input spikes have on the dynamics of the system.
3 Simulations and results
A typical column in the cortex contains on the order of 10^5 neurons, approximately
80% of which are excitatory and the rest inhibitory. Each neuron receives around
10 4 synapses, approximately half of which are from neurons in the same column and
the rest from excitatory neurons in other columns and the thalamus. These estimates
indicate that even at background rates as low as 0.1 Hz, a column generates on
average 10 spikes every millisecond. Since perturbations are propagated from spikes
to generated spikes, divergence and/or convergence of spike trajectories could occur
extremely rapidly. We test this hypothesis in this section through model simulations.
All experiments reported here were conducted on a system containing 1000 internal
neurons (set to model a cortical column) and 800 excitatory input neurons (set to
model the input into the column). Of the 1000 internal neurons, 80% were chosen to
be excitatory and the rest inhibitory. Each internal neuron received 100 synapses
from other (internal as well as input) neurons in the system. The input neurons were
set to generate random uncorrelated Poisson spike trains at a fixed rate of 5 Hz.
The membrane potential function P_i(·) for each internal neuron was modeled as the sum of excitatory and inhibitory PSPs triggered by the arrival of spikes at synapses, and afterhyperpolarization potentials triggered by the spikes generated by the neuron. PSPs were modeled using a product-of-exponentials kernel whose parameters ν, ε, and τ were set to mimic four kinds of synapses: NMDA, AMPA, GABA_A, and GABA_B. The amplitude ω was set for excitatory and inhibitory synapses so as to generate a mean spike rate of 5 Hz by excitatory and 15 Hz by inhibitory internal neurons.
constant over the entire system leaving the network connectivity and axonal delays
as the only free parameters. After the generation of a spike, an absolute refractory
period of 1 msec was introduced during which the neuron was prohibited from
generating a spike. There was no voltage reset. However, each spike triggered an
afterhyperpolarization potential with a decay constant of 30 msec that led to a
relative refractory period. Simulations were performed in 0.1 msec time steps and
the time bound on the state description, as related in Section 2, was set at 200 msec.
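The simulation setup described above can be sketched as follows. The kernel shapes, synaptic weight, and threshold below are illustrative assumptions (the paper's exact PSP parameters are not reproduced); only the structural features match the description: summed PSPs over afferent spikes, an afterhyperpolarization with a 30 msec decay constant, a 1 msec absolute refractory period, no voltage reset, and 0.1 msec time steps.

```python
# Structural sketch of the simulated neuron model (kernel shapes, weights and
# threshold are illustrative assumptions; only the structure follows the
# text): membrane potential = summed PSPs over afferent spikes plus
# afterhyperpolarization over the neuron's own spikes; spike on threshold
# crossing, 1 msec absolute refractory period, no voltage reset.
import numpy as np

DT, TAU_PSP, TAU_AHP = 0.1, 5.0, 30.0   # msec

def psp(t):   # alpha-function kernel (assumed shape), peak 1 at t = TAU_PSP
    return np.where(t > 0, (t / TAU_PSP) * np.exp(1.0 - t / TAU_PSP), 0.0)

def ahp(t):   # afterhyperpolarization, 30 msec decay constant
    return np.where(t > 0, -2.0 * np.exp(-t / TAU_AHP), 0.0)

def simulate(input_spikes, weights, threshold=1.0, t_end=200.0):
    out = []
    for t in np.arange(0.0, t_end, DT):
        if out and t - out[-1] < 1.0:          # absolute refractory period
            continue
        v = sum(w * psp(t - np.asarray(s)).sum()
                for w, s in zip(weights, input_spikes))
        if out:
            v += ahp(t - np.asarray(out)).sum()
        if v >= threshold:
            out.append(float(t))
    return out

spikes = simulate([[5.0, 6.0, 7.0]], [0.6])    # one afferent volley
print(spikes)   # a single output spike shortly after the volley
```

Perturbing the afferent spike times in such a model shifts the threshold-crossing time of the output spike, which is exactly the perturbation-propagation mechanism analyzed in Section 4.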
The issue of correlated inputs was addressed by simulating networks of disparate
architectures. On the one extreme was an ordered two layer ring network with input
neurons forming the lower layer and internal neurons (with the inhibitory neurons
placed evenly among the excitatory neurons) forming the upper layer. Each internal
neuron received inputs from a sector of internal and input neurons that was centered
on that neuron. As a result, any two neighboring internal neurons shared 96 of their
100 inputs (albeit with different axonal delays of 0.5-1.1 msec). This had the effect
of output spike trains from neighboring internal neurons being highly correlated,
with sectors of internal neurons producing synchronized bursts of spikes. On the
other extreme was a network where each internal neuron received inputs from 100
randomly chosen neurons from the entire population of internal and input neurons.
Several other networks where neighboring internal neurons shared an intermediate
percentage of their inputs were also simulated. Here, we present results from the
two extreme architectures. The results from all the other networks were similar.
Figure 2(a) displays sample output spike trains from 100 neighboring internal
neurons over a period of 450 msec for both architectures. In the first set of
experiments, pairs of identical systems driven by identical inputs and initialized at
identical states except for one randomly chosen spike that was perturbed by 1 msec ,
were simulated. In all cases, the spike trajectories diverged very rapidly. Figure 2(b)
presents spike trains generated by the same 100 neighboring internal neurons from
the two simulations from 200 to 400 msec after initialization, for both architectures.
To further explore the sensitivity of the spike trajectories, we partitioned each
trajectory into segments of 500 spike generations each. For each such segment, we
then extracted the spectral norm ‖B · A_T · C‖ after every 100 spike generations.
Figure 2( c) presents the outcome of this analysis for both architectures. Although
successive segments of 500 spike generations were found to be quite variable in
their absolute sensitivity, each such segment was nevertheless found to be sensitive.
We also simulated several other architectures (results not shown), such as systems
with fixed axonal delays and ones with bursty behavior, with similar outcomes.
Figure 2: (a) Spike trains of 100 neighboring neurons for 450 msec from the
ring and the random networks respectively. (b) Spike trains from the same
100 neighboring neurons (above and below) 200 msec after initialization.
Note that the trains have already diverged at 200 msec. (c) Spectral norm of
sensitivity matrices of 14 successive segments of 500 spike generations
each, computed in steps of 100 spike generations for both architectures.
4 Analysis and further simulations
The reasons behind the divergence of the spike trajectories presented in Section 3
can be found by considering how perturbations are propagated from the set of spikes
in the current state description to a newly generated spike. As shown in [3] , the
perturbation in the new spike can be represented as a weighted sum of the
perturbations of those spikes in the state description that contribute to the generation
of the new spike. The weight assigned to a spike x_i is proportional to the slope of the PSP or that of the hyperpolarization triggered by that spike (∂P/∂x_i in the general case), at the instant of the generation of the new spike.
slope is, the greater is the effect that a perturbation of that spike can have on the
total potential at the soma, and hence, the larger is the perturbation on the new
spike. The proportionality constant is set so that the weights sum to 1. This
constraint is reflected in the fact that if all spikes were to be perturbed by a fixed
quantity, this would amount to a rigid displacement in time causing the new spike to
be perturbed by the same quantity. We denote the slopes by Pi, and the weights by
n
where j ranges over all contributing spikes.
ai. Then, a =
~ j"", l J
i
p.I" p.,
I
We now assume that at the generation of each new spike, the p_i's are drawn independently from a stationary distribution (for both internal and input contributing spikes), and that the ratio of the number of internal to the total (internal plus input) spikes in any state description remains close to a fixed quantity μ at all times. Note
that this amounts to an assumed probability distribution on the likelihood of
particular spike trajectories rather than one on possible network architectures and
synaptic weights. The iterative construction of the matrix AT, based on these
conditions, was described in detail in [3]. It was also shown that the statistic
(Σ_{i=1}^{m} α_i²) plays a central role in the determination of the sensitivity of the resultant
spike trajectories. In a minor modification to the analysis in [3], we assume that AT
represents the full perturbation (internal plus input) at each step of the process.
While this merely entails the introduction of additional rows with zero entries to A_T to account for input spikes in each state, this alters the effect that B* has on ||B*e|| in a way that allows for a simpler as well as bidirectional bound on the norm. Since the analysis is identical to that in [3] and does not introduce any new techniques, we
only report the result. If (Σ_{i=1}^{m} α_i²) > (2 − μ)(1 − μ)⁻¹ (resp. (Σ_{i=1}^{m} α_i²) < μ⁻¹ − 1), then the
spike trajectories are almost surely sensitive (resp. insensitive) to initial conditions.
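As an illustration only, this classification rule can be wrapped in a small helper. The exact bounds are partially garbled in this copy of the paper; the sketch below uses (2 − μ)(1 − μ)⁻¹ and μ⁻¹ − 1, which reproduce the threshold of 3 quoted below for μ = 1/2:

```python
def trajectory_regime(sum_alpha_sq, mu):
    """Classify a spike trajectory by the criterion above (as reconstructed
    here): almost surely sensitive when sum_i alpha_i^2 > (2 - mu)/(1 - mu),
    almost surely insensitive when sum_i alpha_i^2 < 1/mu - 1, and
    indeterminate otherwise.  mu is the ratio of internal to total spikes
    in a state description."""
    upper = (2.0 - mu) / (1.0 - mu)
    lower = 1.0 / mu - 1.0
    if sum_alpha_sq > upper:
        return "sensitive"
    if sum_alpha_sq < lower:
        return "insensitive"
    return "indeterminate"
```

With μ = 1/2 the bounds are 3 and 1, so the sample means reported below for the ring (69.6) and random (11.3) networks both fall in the sensitive regime.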
m denotes the number of internal spikes in the state description.
If we make the liberal assumption that input spikes account for as much as half the total number of spikes in state descriptions, noting that m is a very large quantity (greater than 10³ in all our simulations), the above constraint requires (Σ_i α_i²) > 3 for spike trajectories to be almost surely sensitive to initial conditions. From our earlier simulations, we extracted the value of Σ_i α_i² whenever a spike was generated, and computed the sample mean ⟨Σ_i α_i²⟩ over all spike generations. The mean was larger
than 3 in all cases (it was 69.6 for the ring and 11.3 for the random network).
The above criterion enables us to peer into the nature of the spike dynamics of real
cortical columns, for although simulating an entire column remains intractable, a
single neuron can be simulated under various input scenarios, and the resultant
statistic applied to infer the nature of the spike dynamics of a cortical column most
of whose neurons operate under those conditions.
An examination of the mathematical nature of Σ_i α_i² reveals that its value rises as
the size of the subset of p;'s that are negative grows larger. The criterion for
sensitivity is therefore more likely to be met when a substantial portion of the
excitatory PSP's are on their falling phase (and inhibitory PSP ' s on their rising
phase) at the instant of the generation of each new spike. This corresponds to a case
where the inputs into the neurons of a system are not strongly synchronized.
Conversely, if spikes are generated soon after the arrival of a synchronized burst of
spikes (all of whose excitatory PSP ' s are presumably on their rising phase), the
criterion for sensitivity is less likely to be met. We simulated several combinations
of the two input scenarios to identify cases where the corresponding spike
trajectories in the system were not likely to be sensitive to initial conditions.
We constructed a model pyramidal neuron with 10,000 synapses, 85% of which
were chosen to be excitatory and the rest inhibitory. The threshold of the neuron
was set at 15 mV above resting potential. PSP ' s were modeled using the function
described earlier with values for the parameters set to fit the data reported in [7].
For excitatory PSP's the peak amplitudes ranged between 0.045 and 1.2 mV with
the median around 0.15 mY , 10-90 rise times ranged from 0.75 to 3.35 msec and
widths at half amplitude ranged from 8.15 to 18.5 msec. For inhibitory PSP's, the
peak amplitudes were on average twice as large and the 10-90 rise times and widths
at half amplitude were slightly larger. Whenever the neuron generated a new spike,
the values of the p_i's were recorded and Σ_i α_i² was computed. The mean ⟨Σ_i α_i²⟩ was then computed over the set of all spike generations. In order to generate
conservative estimates, samples with value above 10⁴ were discarded (they
comprised about 0.1% of the data). The datasets ranged in size from 3000 to 15,000.
Three experiments simulating various levels of uncorrelated input/output activity
were conducted. In particular, excitatory Poisson inputs at 2, 20 and 40 Hz were
balanced by inhibitory Poisson inputs at 6.3, 63 and 124 Hz to generate output rates
of approximately 2, 20 and 40 Hz, respectively. We confirmed that the output in all
three cases was Poisson-like (CV = 0.77, 0.74, and 0.89, respectively). The mean ⟨Σ_i α_i²⟩ for the three experiments was 4.37, 5.66, and 9.52, respectively.
Next, two sets of experiments simulating the arrival of regularly spaced synfire
chains were conducted. In the first set the random background activity was set at 2
Hz and in the second, at 20 Hz. The synfire chains comprised of spike volleys that
arrived every 50 msec. Four experiments were conducted within each set: volleys
were composed of either 100 or 200 spikes (producing jolts of around 10 and 20 mV
respectively) that were either fully synchronized or were dispersed over a Gaussian
distribution with σ = 1 msec. The mean ⟨Σ_i α_i²⟩ for the experiments was as follows. At 2 Hz background activity, it was 0.49 (200 spikes/volley, synchronized), 0.60 (200 spikes/volley, dispersed), 2.46 (100 spikes/volley, synchronized), and 2.16 (100 spikes/volley, dispersed). At 20 Hz background activity, it was 4.39 (200 spikes/volley, synchronized), 8.32 (200 spikes/volley, dispersed), 6.77 (100 spikes/volley, synchronized), and 6.78 (100 spikes/volley, dispersed).
Finally, two sets of experiments simulating the arrival of randomly spaced synfire
chains were conducted. In the first set the random background activity was set at 2
Hz and in the second, at 20 Hz. The synfire chains comprised of a sequence of spike
volleys that arrived randomly at a rate of 20 Hz. Two experiments were conducted
within each set: volleys were composed of either 100 or 200 synchronized spikes.
The mean ⟨Σ_i α_i²⟩ for the experiments was as follows. At 2 Hz background activity, it was 4.30 (200 spikes/volley) and 4.64 (100 spikes/volley). At 20 Hz background activity, it was 5.24 (200 spikes/volley) and 6.28 (100 spikes/volley).
5
Conclusion
As was demonstrated in Section 3, sensitivity to initial conditions transcends
unstructured connectivity in systems of spiking neurons. Indeed, our simulations
indicate that sensitivity is more the rule than the exception in systems modeling
cortical networks operating at low to moderate levels of activity. Since perturbations
are propagated from spike to spike, trajectories that are sensitive can diverge very
rapidly in systems that generate a large number of spikes within a short period of
time. Sensitivity therefore is an issue, even for schemes based on precise sequences
of spike timing with computation occurring over short (hundreds of msec) intervals.
Within the limits set by our model of the neuron, we have found that spike
trajectories are likely to be sensitive to initial conditions in all scenarios except
where large (100-200) synchronized bursts of spikes occur in the presence of sparse
background activity (2 Hz) with sufficient but not too large an interval between
successive bursts (50 msec). This severely restricts the possible use of precise spike
sequences for reliable computation in cortical networks for at least two reasons.
First, unsynchronized activity can rise well above 2 Hz in the cortex, and second, the highly constrained nature of this dynamics would show up in in vivo recordings.
Although cortical neurons can have vastly more complex responses than that
modeled in this paper, our conclusions are based largely on the simplicity and the
generality of the constraints identified (the analysis assumes a general membrane
potential function P(·)). Although a more refined model of the cortical neuron could
lead to different values of the statistic computed, we believe that the results are
unlikely to cross the noted bounds and therefore change our overall conclusions.
We are however not arguing that computation with spike timing is impossible in
general. There are neural structures, such as the nucleus laminaris in the barn owl
and the electrosensory array in the electric fish, which have been shown to perform
exquisitely precise computations using spike timing. Interestingly, these structures
have very specialized neurons and network architectures.
To conclude, computation using precise spike sequences does not appear to be likely
in the cortex in the presence of Poisson-like activity at levels typically found there.
References
[1] Abeles, M., Bergman, H., Margalit, E. & Vaadia, E. (1993) Spatiotemporal firing patterns
in the frontal cortex of behaving monkeys. Journal of Neurophysiology 70, pp. 1629-1638.
[2] Banerjee, A. (2001) On the phase-space dynamics of systems of spiking neurons: I.
model and experiments. Neural Computation 13, pp. 161-193.
[3] Banerjee, A. (2001) On the phase-space dynamics of systems of spiking neurons: II.
formal analysis. Neural Computation 13, pp. 195-225.
[4] Diesmann, M., Gewaltig, M. O. & Aertsen, A. (1999) Stable propagation of synchronous
spiking in cortical neural networks. Nature 402, pp. 529-533.
[5] Gerstner, W., van Hemmen, J. L. & Cowan, J. D. (1996) What matters in neuronal
locking. Neural Computation 8, pp. 1689-1712.
[6] Maass, W. (1995) On the computational complexity of networks of spiking neurons.
Advances in Neural Information Processing Systems 7, pp. 183-190.
[7] Mason, A., Nicoll, A. & Stratford, K. (1991) Synaptic transmission between individual
pyramidal neurons of the rat visual cortex in vitro. Journal of Neuroscience 11(1), pp. 72-84.
[8] van Vreeswijk, C. & Sompolinsky, H. (1998) Chaotic balanced state in a model of
cortical circuits. Neural Computation 10, pp. 1321-1372.
Concurrent Object Recognition and
Segmentation by Graph Partitioning
Stella X. Yu†‡, Ralph Gross† and Jianbo Shi†
Robotics Institute†
Carnegie Mellon University
Center for the Neural Basis of Cognition‡
5000 Forbes Ave, Pittsburgh, PA 15213-3890
{stella.yu, rgross, jshi}@cs.cmu.edu
Abstract
Segmentation and recognition have long been treated as two separate processes. We propose a mechanism based on spectral graph partitioning
that readily combine the two processes into one. A part-based recognition system detects object patches, supplies their partial segmentations as
well as knowledge about the spatial configurations of the object. The goal
of patch grouping is to find a set of patches that conform best to the object
configuration, while the goal of pixel grouping is to find a set of pixels
that have the best low-level feature similarity. Through pixel-patch interactions and between-patch competition encoded in the solution space,
these two processes are realized in one joint optimization problem. The
globally optimal partition is obtained by solving a constrained eigenvalue
problem. We demonstrate that the resulting object segmentation eliminates false positives for the part detection, while overcoming occlusion
and weak contours for the low-level edge detection.
1 Introduction
A good image segmentation must single out meaningful structures such as objects from
a cluttered scene. Most current segmentation techniques take a bottom-up approach [5] ,
where image properties such as feature similarity (brightness, texture, motion etc), boundary smoothness and continuity are used to detect perceptually coherent units. Segmentation
can also be performed in a top-down manner from object models, where object templates
are projected onto an image and matching errors are used to determine the existence of the
object [1] . Unfortunately, either approach alone has its drawbacks.
Without utilizing any knowledge about the scene, image segmentation gets lost in poor data
conditions: weak edges, shadows, occlusions and noise. Missed object boundaries can then
hardly be recovered in subsequent object recognition. Gestaltists have long recognized this
issue, circumventing it by adding a grouping factor called familiarity [6]. Without being
subject to perceptual constraints imposed by low level grouping, an object detection process
can produce many false positives in a cluttered scene [3]. One approach is to build a better
part detector, but this has its own limitations, such as increase in the complexity of classifiers and the number of training examples required. Another approach, which we adopt in
this paper, is based on the observation that the falsely detected parts are not perceptually
salient (Fig. 1), thus they can be effectively pruned away by perceptual organization.
Right arm: 7
Right leg: 3
Head: 4
Left arm: 4
Left leg: 9
Figure 1: Human body part detection. A total of 27 parts are detected, each labeled by one of the
five part detectors for arms, legs and head. False positives cannot be validated on two grounds. First,
they do not form salient structures based on low-level cues, e.g. the patch on the floor that is labeled
left leg has the same features as its surroundings. Secondly, false positives are often incompatible
with nearby parts, e.g. the patch on the treadmill that is labeled head has no other patches in the
image to make up a whole human body. These two conditions, low-level image feature saliency and
high-level part labeling consistency, are essential for the segmentation of objects from background.
Both cues are encoded in our pixel and patch grouping respectively.
We propose a segmentation mechanism that is coupled with the object recognition process (Fig. 2). There are three tightly coupled processes. I)Top-level: part-based object
recognition process. It learns classifiers from training images to detect parts along with the
segmentation patterns and their relative spatial configurations. A few approaches based on
pattern classification have been developed for part detection [9,3] . Recent work on object
segmentation [1] uses image patches and their figure-ground labeling as building blocks
for segmentation. 2)Bottom-level: pixel-based segmentation process. This process finds
perceptually coherent groups using pairwise local feature similarity. The local features we
use here are contour cues. 3)Interactions: coupling object recognition with segmentation
by linking patches with their corresponding pixels. With such a representation, we concurrently carry out object recognition and image segmentation processes. The final output is
an object segmentation where the object group consists of pixels with coherent low-level
features and patches with compatible part configurations.
We formulate our object segmentation task in a graph partitioning framework. We represent low-level grouping cues with a graph where each pixel is a node and edges between the
nodes encode the affinity of pixels based on their feature similarity [4]. We represent high-level grouping cues with a graph where each detected patch is a node and edges between
the nodes encode the labeling consistency based on prior knowledge of object part configurations. There are also edges connecting patch nodes with their supporting pixel nodes.
We seek the optimal graph cut in this joint graph, which separates the desired patch and
pixel nodes from the rest nodes. We build upon the computational framework of spectral
graph partitioning [7], and achieve patch competition using the subspace constraint method
proposed in [10]. We show that our formulation leads to a constrained eigenvalue problem,
whose global-optimal solutions can be obtained efficiently.
2
Segmentation model
We illustrate our method through a synthetic example shown in Fig. 3. Suppose we are
interested in detecting a human-like configuration. Furthermore, we assume that some
object recognition system has labeled a set of patches as object parts. Every patch has a
local segmentation according to its part label. The recognition system has also learned the
Figure 2: Model of object segmentation. Given an image, we detect edges using a set of oriented
filter banks. The edge responses provide low-level grouping cues, and a graph can be constructed
with one node for each pixel. Shown on the middle right are affinity patterns of five center pixels
within a square neighbourhood, overlaid on the edge map. Dark means larger affinity. We detect a
set of candidate body parts using learned classifiers. Body part labeling provides high-level grouping
cues, and a consistency graph can be constructed with one node for each patch. Shown on the middle
left are the connections between patches. Thicker lines mean better compatibility. Edges are noisy,
while patches contain ambiguity in local segmentation and part labeling. Patches and pixels interact
by expected local segmentation based on object knowledge, as shown in the middle image. A global
partitioning on the coupled graph outputs an object segmentation that has both pixel-level saliency
and patch-level consistency.
statistical distribution of the spatial configurations of object parts. Given such information,
we need to address two issues. One is the cue evaluation problem, i.e. how to evaluate
low-level pixel cues, high-level patch cues and their segmentation correspondence. The
other is the integration problem, i.e. how to fuse partial and imprecise object knowledge
with somewhat unreliable low-level cues to segment out the object of interest.
[Figure 3 panels: patches; pixel-patch relations; image; edges; object segmentation]
Figure 3: Given the image on the left, we want to detect the object on the right. 11 patches of various
sizes are detected (middle top). They are labeled as head(l), left-upper-arm(2, 9), left-lower-arm(3,
10), left-leg (11), left-upper-leg(4), left-lower-leg(5), right-arm(6), right-leg(7, 8). Each patch has a
partial local segmentation as shown in the center image. Object pixels are marked black, background
white and others gray. The image intensity itself has its natural organization, e.g. pixels across a
strong edge (middle bottom) are likely to be in different regions. Our goal is to find the best patchpixel combinations that conform to the object knowledge and data coherence.
2.1
Representations
We denote the graph in Fig. 2 by G = (V, E, W). Let N be the number of pixels and
M the number of patches. Let A be the pixel-pixel affinity matrix, B be the patch-patch
affinity matrix, and C be the patch-pixel affinity matrix. All these weights are assumed
nonnegative. Let β_B and β_C be scalars reflecting the relative importance of B and C with
respect to A. Then the node set and the weight matrix for the pairwise edge set E are:
V = {1, …, N, N+1, …, N+M}   (nodes 1, …, N are pixels; nodes N+1, …, N+M are patches),

W(A, B, C; β_B, β_C) = [ A_{N×N}          β_C · Cᵀ_{N×M} ]
                       [ β_C · C_{M×N}    β_B · B_{M×M}  ]        (1)
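A minimal sketch of assembling the joint pixel-patch weight matrix of Eq. (1): pixel-pixel block A, patch-patch block β_B·B, and the off-diagonal coupling β_C·C. The function name and the dense-array representation are choices made here, not part of the paper (in practice A would be sparse):

```python
import numpy as np

def joint_weight_matrix(A, B, C, beta_B, beta_C):
    """Assemble the (N+M) x (N+M) weight matrix W of Eq. (1) from the
    pixel-pixel affinities A (N x N), patch-patch affinities B (M x M),
    and pixel-patch associations C (M x N)."""
    N, M = A.shape[0], B.shape[0]
    assert C.shape == (M, N), "C holds one row per patch, one column per pixel"
    W = np.zeros((N + M, N + M))
    W[:N, :N] = A                 # pixel-pixel block
    W[:N, N:] = beta_C * C.T      # pixel-patch coupling
    W[N:, :N] = beta_C * C
    W[N:, N:] = beta_B * B        # patch-patch block
    return W
```

Since A and B are symmetric, W is symmetric by construction, as a graph weight matrix must be.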
Object segmentation corresponds to a node bipartitioning problem, where V = V₁ ∪ V₂ and V₁ ∩ V₂ = ∅. We assume V₁ contains a set of pixel and patch nodes that correspond to the object, and V₂ is the rest of the background pixels and patches that correspond to false positives and alternative labelings. Let X₁ be an (N + M) × 1 vector, with X₁(k) = 1 if node k ∈ V₁ and 0 otherwise. It is convenient to introduce the indicator for V₂, where X₂ = 1 − X₁ and 1 is the vector of ones.
We only need to process the image region enclosing all the detected patches. The rest
pixels are associated with a virtual background patch, which we denote as patch N + M,
in addition to M - 1 detected object patches. Restriction of segmentation to this region of
interest (ROI) helps bind irrelevant background elements into one group [10].
2.2
Computing pixel-pixel similarity A
The pixel affinity matrix A measures low-level image feature similarity. In this paper, we
choose intensity as our feature and calculate A based on edge detection results. We first convolve the image with quadrature pairs of oriented filters to extract the magnitude of edge responses OE [4]. Let ī denote the location of pixel i. Pixel affinity A is inversely correlated with the maximum magnitude of edges crossing the line connecting two pixels. A(i, j) is low if i, j are on the two sides of a strong edge (Fig. 4):

A(i, j) = exp( − (1/(2σ²)) · [ max_{t∈(0,1)} OE(ī + t·(j̄ − ī)) / max_k OE(k̄) ]² )        (2)
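A hedged sketch of Eq. (2), assuming the edge-magnitude map OE has already been computed. The maximum over t ∈ (0, 1) is approximated by sampling points on the segment between the two pixels, and the σ default and function name are choices made here:

```python
import numpy as np

def pixel_affinity(edge_mag, i, j, sigma=0.1, n_samples=10):
    """A(i, j) = exp(-(max edge magnitude crossed between pixels i and j,
    normalized by the global max)^2 / (2 sigma^2)).  i, j are (row, col)
    tuples; edge_mag is the 2-D map of oriented-filter edge responses.
    Sampling along the segment stands in for the max over t in (0, 1)."""
    i = np.asarray(i, float)
    j = np.asarray(j, float)
    ts = np.linspace(0.0, 1.0, n_samples + 2)[1:-1]   # open interval (0, 1)
    crossed = max(edge_mag[tuple(np.round(i + t * (j - i)).astype(int))]
                  for t in ts)
    return float(np.exp(-(crossed / edge_mag.max()) ** 2 / (2 * sigma ** 2)))
```

Two pixels on the same side of an edge get affinity near 1; two pixels separated by a strong edge get affinity near 0, exactly the behavior illustrated in Fig. 4.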
[Figure 4 panels: image; oriented filter pairs; edge magnitudes. A(1, 3) ≈ 1; A(1, 2) ≈ 0]
Figure 4: Pixel-pixel similarity matrix A is computed based on intensity edge magnitudes.
2.3
Computing patch-patch compatibility B and competition
For object patches, we evaluate their position compatibility according to learned statistical
distributions. For object part labels a and b, we can model their spatial distribution by a
Gaussian, with mean μ_ab and variance Σ_ab estimated from training data. Let p̂ be the object part label of patch p and p̄ the center location of patch p. For patches p and q, B(p, q) is low if p, q form rare configurations for their part labels p̂ and q̂ (Fig. 5a):

B(p, q) = exp( − ½ (p̄ − q̄ − μ_{p̂q̂})ᵀ Σ_{p̂q̂}⁻¹ (p̄ − q̄ − μ_{p̂q̂}) )        (3)
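Eq. (3) is a Gaussian likelihood of the displacement between patch centers under the learned part-pair statistics. A minimal sketch (the function name is invented here, and the mean/covariance arguments stand in for the learned values, which the paper sets by hand for its examples):

```python
import numpy as np

def patch_compatibility(pos_p, pos_q, mu_ab, cov_ab):
    """B(p, q) = exp(-0.5 d^T Sigma^{-1} d) where d is the deviation of the
    displacement pos_p - pos_q from the learned mean mu_ab for the part-label
    pair (a, b), and cov_ab is the learned covariance of that displacement."""
    d = (np.asarray(pos_p, float) - np.asarray(pos_q, float)
         - np.asarray(mu_ab, float))
    m = d @ np.linalg.inv(np.asarray(cov_ab, float)) @ d   # Mahalanobis distance
    return float(np.exp(-0.5 * m))
```

A patch pair at the typical displacement for its labels gets B = 1; an implausible configuration (a head far from its expected offset to an arm, say) gets a value near 0.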
We manually set these values for our image examples. As to the virtual background patch
node, it only has affinity of 1 to itself.
Patch compatibility measures alone do not prevent the desired pixel and patch group from
including falsely detected patches and their pixels, nor does it favor the true object pixels to
be away from unlabeled background pixels. We need further constraints to restrict a feasible
grouping. This is done by constraining the partition indicator X. In Fig. 5b, there are four
pairs of patches with the same object part labels. To encode mutual exclusion between
patches, we enforce one winner among patch nodes in competition. For example, only one
of the patches 7 and 8 can be validated to the object group: X₁(N + 7) + X₁(N + 8) = 1.
We also set an exclusion constraint between a reliable patch and the virtual background
patch so that the desired object group stands out alone without these unlabeled background
pixels, e.g. X₁(N + 1) + X₁(N + M) = 1. Formally, let S = {S₁, S₂, …} be the collection of competing node sets to be separated and let |·| denote the cardinality of a set. We have:

Σ_{k∈S_m} X₁(k) = 1,   m = 1 : |S|.        (4)
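Each competing set S_m contributes one linear constraint on the indicator X₁. These can be collected into a constraint matrix whose rows are group indicators; after relaxation, constraints of this kind become the Lᵀx = 0 form used by the subspace method of [10]. A sketch with invented names:

```python
import numpy as np

def exclusion_constraints(n_nodes, groups):
    """Build one linear constraint per competing group S_m of Eq. (4): the
    rows of U are group indicators, so U @ x1 = 1 enforces exactly one
    winner per group.  groups is a list of node-index lists, e.g. the two
    right-leg patches or a reliable patch paired with the virtual
    background patch."""
    U = np.zeros((len(groups), n_nodes))
    for m, members in enumerate(groups):
        U[m, members] = 1.0
    return U
```

For example, with 5 nodes and competing groups {3, 4} and {0, 2}, the assignment selecting nodes 0 and 3 satisfies both constraints, while an assignment selecting both members of a group does not.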
7 and 8 cannot both be
part of the object
a) compatibility
patches
b) competition
Figure 5: a) Patch-patch compatibility matrix B is evaluated based on statistical configuration plausibility. Thicker lines for larger affinity. b) Patches of the same object part label compete to enter the
object group. Only one winner from each linked pair of patches can be validated as part of the object.
2.4
Computing pixel-patch association C
Every object part label also projects an expected pixel segmentation within the patch window (Fig. 6). The pixel-patch association matrix C has one column for each patch:
C(i, p) = { 1   if i is an object pixel of patch p,
            0   otherwise.        (5)
For the virtual background patch, its member pixels are those outside the ROI.
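Building C from the partial local segmentations amounts to filling in indicator rows, one per patch. A minimal sketch (the function name and list-of-indices input format are choices made here):

```python
import numpy as np

def association_matrix(n_pixels, patch_object_pixels):
    """Build C of Eq. (5): C[p, i] = 1 iff pixel i is an object pixel of
    patch p under its expected local segmentation (Fig. 6).
    patch_object_pixels is a list of pixel-index lists, one per patch,
    with the last entry playing the role of the virtual background patch."""
    C = np.zeros((len(patch_object_pixels), n_pixels))
    for p, pix in enumerate(patch_object_pixels):
        C[p, pix] = 1.0
    return C
```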
[Figure 6 panels: head detector → patch 1; arm detector → patch 2; leg detector → patch 11; expected local segmentation; patches; association]
Figure 6: Pixel-patch association C for object patches. Object pixels are marked black, background
white and others gray. A patch is associated with its object pixels in the given partial segmentation.
Finally, we desire β_B to balance the total weights between pixel and patch grouping so that M ≪ N does not render patch grouping insignificant, and we want β_C to be large enough so that the results of patch grouping can bring along their associated pixels:

β_B = 0.01 · (1ᵀA1)/(1ᵀB1),   β_C = max C.        (6)
2.5
Segmentation as an optimization problem
We apply the normalized cuts criterion [7] to the joint pixel-patch graph in Eq. (1):

max ε(X₁) = Σ_{t=1}^{2} (XₜᵀWXₜ)/(XₜᵀDXₜ),   s.t. Σ_{k∈S_m} X₁(k) = 1,  m = 1 : |S|.        (7)
D is the diagonal degree matrix of W, D(i, i) = Σ_j W(i, j). Let x = X₁ − (1ᵀDX₁ / 1ᵀD1)·1. By relaxing the constraints into the form of Lᵀx = 0 [10], Eq. (7) becomes a constrained eigenvalue problem [10], the maximizer given by the nontrivial leading eigenvector:

x* = argmax_x (xᵀWx)/(xᵀDx),   s.t. Lᵀx = 0,        (8)
Q D⁻¹ W x* = λ x*,        (9)
Q = I − D⁻¹L(LᵀD⁻¹L)⁻¹Lᵀ.        (10)
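A dense sketch of Eqs. (9)-(10), not the paper's implementation ([10] gives the full derivation, and a real system would use sparse matrices and an iterative eigensolver). Note that LᵀQ = 0, so any eigenvector of QD⁻¹W with nonzero eigenvalue automatically satisfies the constraint Lᵀx = 0:

```python
import numpy as np

def constrained_ncut_eigenvector(W, L):
    """Solve Q D^{-1} W x = lambda x with the projector
    Q = I - D^{-1} L (L^T D^{-1} L)^{-1} L^T, and return the leading
    eigenvector, which maximizes x^T W x / x^T D x subject to L^T x = 0."""
    n = W.shape[0]
    Dinv = np.diag(1.0 / W.sum(axis=1))            # D^{-1} from degrees
    Q = np.eye(n) - Dinv @ L @ np.linalg.inv(L.T @ Dinv @ L) @ L.T
    vals, vecs = np.linalg.eig(Q @ Dinv @ W)       # nonsymmetric eig
    order = np.argsort(-vals.real)                 # largest eigenvalue first
    x = vecs[:, order[0]].real
    return x / np.linalg.norm(x)
```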
Once we get the optimal eigenvector, we compare 10 thresholds uniformly distributed
within its range and choose the discrete segmentation that yields the best criterion ε. Below is an overview of our algorithm:
is an overview of our algorithm.
1: Compute edge response OE and calculate pixel affinity A, Eq. (2).
2: Detect parts and calculate patch affinity B , Eq. (3).
3: Formulate constraints Sand L among competing patches, Eq. (4).
4: Set pixel-patch affinity C, Eq. (5).
5: Calculate weights (3B and (3c , Eq. (6).
6: Form W and calculate its degree matrix D, Eq. (1).
7: Solve Q D⁻¹ W x* = λ x*, Eq. (9,10).
8: Threshold x* to get a discrete segmentation.
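Step 8, thresholding the relaxed eigenvector against the criterion ε, can be sketched as follows (a hedged illustration with an invented function name; the paper sweeps 10 thresholds uniformly over the eigenvector's range):

```python
import numpy as np

def discretize(x, W, D, n_thresh=10):
    """Sweep n_thresh thresholds over the range of the relaxed eigenvector x
    and keep the binary cut with the best normalized-cuts score
    sum_t (X_t^T W X_t) / (X_t^T D X_t)."""
    best_score, best_x1 = -np.inf, None
    for th in np.linspace(x.min(), x.max(), n_thresh + 2)[1:-1]:
        x1 = (x > th).astype(float)
        if x1.sum() in (0, len(x1)):   # skip degenerate all-or-nothing cuts
            continue
        score = sum((xt @ W @ xt) / (xt @ D @ xt) for xt in (x1, 1.0 - x1))
        if score > best_score:
            best_score, best_x1 = score, x1
    return best_x1, best_score
```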
3
Results and conclusions
In Fig. 7, we show results on the 120 x 120 synthetic image. Image segmentation alone gets
lost in a cluttered scene. With concurrent segmentation and recognition, regions forming
the object of interest pop out, with unwanted edges (caused by occlusion) and weak edges
(illusory contours) corrected in the final segmentation. It is also faster to compute the
pixel-patch grouping since the size of the solution space is greatly reduced.
[Figure 7 panels: segmentation alone, 44 seconds; concurrent segmentation and recognition, 17 seconds]
Figure 7: Eigenvectors (row 1) and their segmentations (row 2) for Fig. 3. On the right, we show the
optimal eigenvector on both pixels and patches, the horizontal dotted line indicating the threshold.
Computation times are obtained in MATLAB 6.0 on a PC with a 1 GHz CPU and 1 GB memory.
We apply our method to human body detection in a single image. We manually label five
body parts (both arms, both legs and the head) of a person walking on a treadmill in all
32 images of a complete gait cycle. Using the magnitude thresholded edge orientations
in the hand-labeled boxes as features, we train linear Fisher classifiers [2] for each body
part. In order to account for the appearance changes of the limbs through the gait cycle, we
use two separate models for each arm and each leg, bringing the total number of models
to 9. Each individual classifier is trained to discriminate between the body part and a
random image patch. We iteratively re-train the classifiers using false positives until the
optimal performance is reached over the training set. In addition, we train linear color-based classifiers for each body part to perform figure-ground discrimination at the pixel
level. Alternatively a general model of human appearance based on filter responses as in [8]
could be used. In Fig. 8, we show the results on the test image in Fig. 2. Though the pixel-patch affinity matrix C, derived from the color classifier, is neither precise nor complete,
and the edges are weak at many object boundaries, the two processes complement each
other in our pixel-patch grouping system and output a reasonably good object segmentation.
segmentation alone: 68 seconds
segmentation-recognition: 58 seconds
Figure 8: Eigenvectors and their segmentations for the 261 x 183 human body image in Fig. 2.
Acknowledgments. We thank Shyjan Mahamud and anonymous referees for valuable
comments. This research is supported by ONR N00014-00-1-0915 and NSF IRI-9817496.
References
[1] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In European Conference
on Computer Vision, 2002.
[2] K. Fukunaga. Introduction to statistical pattern recognition. Academic Press, 1990.
[3] S. Mahamud, M. Hebert, and J. Lafferty. Combining simple discriminators for object discrimination. In European Conference on Computer Vision, 2002.
[4] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. International Journal of Computer Vision, 2001.
[5] D. Marr. Vision. CA: Freeman, 1982.
[6] S. E. Palmer. Vision science: from photons to phenomenology. MIT Press, 1999.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation. In IEEE Conference on Computer
Vision and Pattern Recognition, pages 731-737, June 1997.
[8] H. Sidenbladh and M. Black. Learning image statistics for Bayesian tracking. In International
Conference on Computer Vision, 2001.
[9] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In
IEEE Conference on Computer Vision and Pattern Recognition, 2001.
[10] S. X. Yu and J. Shi. Grouping with bias. In Neural Information Processing Systems, 2001.
Wim Wiegerinck and Tom Heskes
SNN, University of Nijmegen
Geert Grooteplein 21, 6525 EZ, Nijmegen, the Netherlands
{wimw,tom}@snn.kun.nl
Abstract
We consider loopy belief propagation for approximate inference in probabilistic graphical models. A limitation of the standard algorithm is that
clique marginals are computed as if there were no loops in the graph.
To overcome this limitation, we introduce fractional belief propagation.
Fractional belief propagation is formulated in terms of a family of approximate free energies, which includes the Bethe free energy and the
naive mean-field free as special cases. Using the linear response correction of the clique marginals, the scale parameters can be tuned. Simulation results illustrate the potential merits of the approach.
1 Introduction
Probabilistic graphical models are powerful tools for learning and reasoning in domains
with uncertainty. Unfortunately, inference in large, complex graphical models is computationally intractable. Therefore, approximate inference methods are needed. Basically, one
can distinguish between to types of methods, stochastic sampling methods and deterministic methods. One of methods in the latter class is Pearl?s loopy belief propagation [1]. This
method is increasingly gaining interest since its successful applications to turbo-codes. Until recently, a disadvantage of the method was its heuristic character, and the absence of a
converge guarantee. Often, the algorithm gives good solutions, but sometimes the algorithm fails to converge. However, Yedidia et al. [2] showed that the fixed points of loopy
belief propagation are actually stationary points of the Bethe free energy from statistical
physics. This does not only give the algorithm a firm theoretical basis, but it also solves
the convergence problem by the existence of an objective function which can be minimized
directly [3]. Belief propagation is generalized in several directions. Minka?s expectation
propagation [4] is a generalization that makes the method applicable to Bayesian learning.
Yedidia et al. [2] introduced the Kikuchi free energy in the graphical models community,
which can be considered as a higher order truncation of a systematic expansion of the exact free energy using larger clusters. They also developed an associated generalized belief
propagation algorithm. In this paper, we propose another direction which yields possibilities to improve upon loopy belief propagation, without resorting to larger clusters.
This paper is organized as follows. In section 2 we define the inference problem. In section 3 we shortly review approximate inference by loopy belief propagation and discuss an
inherent limitation of this method. This motivates us to generalize upon loopy belief propagation. We do so by formulating a new class of approximate free energies in section 4. In
section 5 we consider the fixed point equations and formulate the fractional belief propagation algorithm. In section 6 we will use linear response estimates to tune the parameters
in the method. Simulation results are presented in section 7. In section 8 we end with the
conclusion.
2 Inference in graphical models
Our starting point is a probabilistic model P(x) on a set of discrete variables x = (x_1, ..., x_n) in a finite domain. The joint distribution is assumed to be proportional to a product of clique potentials ψ_α(x_α),

P(x) ∝ ∏_α ψ_α(x_α),  (1)

where each α refers to a subset of the nodes in the model. A typical example that we will consider later in the paper is the Boltzmann machine with binary units (s_i = ±1),

P(s) ∝ exp( Σ_{(ij)} w_ij s_i s_j + Σ_i θ_i s_i ),  (2)

where the sum is over connected pairs (i, j). The right-hand side can be viewed as a product of potentials ψ_ij(s_i, s_j) = exp(w_ij s_i s_j + θ_i s_i / n_i + θ_j s_j / n_j), where n_i = |K_i| is the number of edges that contain node i. The typical task that we try to perform is to compute the marginal single-node distributions P(x_i). Basically, the computation requires the summation over all remaining variables x \ x_i. In small networks, this summation can be performed explicitly. In large networks, the complexity of computation depends on the underlying graphical structure of the model, and is exponential in the maximal clique size of the triangulated moralized graph [5]. This may lead to intractable models, even if the clusters are small. When the model is intractable, one has to resort to approximate methods.
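For intuition, the explicit summation described here can be carried out directly on a tiny Boltzmann machine; the couplings and thresholds below are arbitrary example values, not taken from the paper.

```python
import itertools
import numpy as np

w = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.2}    # toy couplings w_ij
theta = np.array([0.1, -0.2, 0.3])               # toy thresholds theta_i
n = 3

def unnorm(s):
    """Unnormalized probability of Eq. (2) for a state s in {-1,+1}^n."""
    e = sum(wij * s[i] * s[j] for (i, j), wij in w.items()) + float(theta @ s)
    return np.exp(e)

states = [np.array(s, float) for s in itertools.product([-1, 1], repeat=n)]
Z = sum(unnorm(s) for s in states)                # partition function
P = np.zeros((n, 2))                              # rows: nodes; columns: s_i = -1, +1
for s in states:
    for i in range(n):
        P[i, int(s[i] > 0)] += unnorm(s) / Z      # sum out the remaining variables
print(P.sum(axis=1))                              # each marginal sums to one
```

The cost of this loop is exponential in n, which is exactly why the approximate methods below are needed for large networks.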
3 Loopy belief propagation in Boltzmann machines
A nowadays popular approximate method is loopy belief propagation. In this section, we
will give a short review of this method. Next we will discuss one of its inherent limitations,
which motivates us to propose a possible way to overcome this limitation. For simplicity,
we restrict this section to Boltzmann machines.
The goal is to compute the pair marginals P(s_i, s_j) of connected nodes. Loopy belief propagation computes approximating pair marginals Q(s_i, s_j) by applying the belief propagation algorithm for trees to loopy graphs, i.e., it computes messages according to

m_ij(s_j) ∝ Σ_{s_i} exp(w_ij s_i s_j) μ_{i\j}(s_i),  (3)

in which μ_{i\j}(s_i) collects the incoming messages to node i except from node j,

μ_{i\j}(s_i) ∝ exp(θ_i s_i) ∏_{k ∈ K_i \ j} m_ki(s_i).  (4)

If the procedure converges (which is not guaranteed in loopy graphs), the resulting approximating pair marginals are

Q(s_i, s_j) ∝ exp(w_ij s_i s_j) μ_{i\j}(s_i) μ_{j\i}(s_j).  (5)

In general, the exact pair marginals will be of the form

P(s_i, s_j) ∝ exp(w_ij^eff s_i s_j) ν_i(s_i) ν_j(s_j),  (6)

which has an effective interaction w_ij^eff. In the case of a tree, w_ij^eff = w_ij. With loops in the graph, however, the loops will contribute to w_ij^eff, and the result will in general be different from w_ij. If we compare (6) with (5), we see that loopy belief propagation assumes w_ij^eff = w_ij, ignoring contributions from loops.

Now suppose we would know w_ij^eff in advance; then a better approximation could be expected if we could model approximate pair marginals of the form

Q(s_i, s_j) ∝ exp((w_ij / c_ij) s_i s_j) μ_{i\j}(s_i) μ_{j\i}(s_j),  (7)

where c_ij = w_ij / w_ij^eff. The μ_{i\j} are to be determined by some propagation algorithm.
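A minimal sketch of the message passing in Eqs. (3)-(5) for a Boltzmann machine (toy weights, invented for illustration). On a tree, such as the 3-node chain below, the converged pair marginals are exact, consistent with w_eff = w in Eq. (6).

```python
import itertools
import numpy as np

w = {(0, 1): 0.8, (1, 2): -0.5}                 # a 3-node chain (a tree)
theta = np.array([0.2, -0.1, 0.4])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
S = np.array([-1.0, 1.0])

def wij(i, j):
    return w.get((i, j), w.get((j, i)))

m = {(i, j): np.ones(2) / 2 for i in nbrs for j in nbrs[i]}   # messages m_ij(s_j)
for _ in range(50):
    for (i, j) in list(m):
        mu = np.exp(theta[i] * S)                              # Eq. (4)
        for k in nbrs[i]:
            if k != j:
                mu = mu * m[(k, i)]
        new = np.array([(np.exp(wij(i, j) * S * sj) * mu).sum() for sj in S])  # Eq. (3)
        m[(i, j)] = new / new.sum()

def pair_bp(i, j):                                             # Eq. (5)
    mu_i = np.exp(theta[i] * S) * np.prod([m[(k, i)] for k in nbrs[i] if k != j] or [np.ones(2)], axis=0)
    mu_j = np.exp(theta[j] * S) * np.prod([m[(k, j)] for k in nbrs[j] if k != i] or [np.ones(2)], axis=0)
    Q = np.exp(wij(i, j) * np.outer(S, S)) * np.outer(mu_i, mu_j)
    return Q / Q.sum()

def pair_exact(i, j):                                          # brute force
    P = np.zeros((2, 2))
    for s in itertools.product([-1.0, 1.0], repeat=3):
        e = sum(v * s[a] * s[b] for (a, b), v in w.items()) + float(theta @ np.array(s))
        P[int(s[i] > 0), int(s[j] > 0)] += np.exp(e)
    return P / P.sum()

print(np.max(np.abs(pair_bp(0, 1) - pair_exact(0, 1))))        # ~0 on a tree
```

On a loopy graph the same code would still run, but the returned pair marginals would implicitly use w_eff = w, which is exactly the limitation discussed in the text.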
In the next sections, we generalize upon the above idea and introduce fractional belief propagation as a family of loopy belief propagation-like algorithms parameterized by scale parameters c = {c_α}. The resulting approximating clique marginals will be of the form

Q_α(x_α) ∝ ψ_α(x_α)^{1/c_α} ∏_{i ∈ K_α} μ_i(x_i),  (8)

where K_α is the set of nodes in clique α. The issue of how to set the parameters c is the subject of section 6.
4 A family of approximate free energies
The new class of approximating methods will be formulated via a new class of approximating free energies. The exact free energy of a model with clique potentials ψ_α is

F(P) = − Σ_α Σ_{x_α} P(x_α) log ψ_α(x_α) + Σ_x P(x) log P(x).  (9)

It is well known that the joint distribution P can be recovered by minimization of the free energy,

P = argmin_Q F(Q),  (10)

under the normalization constraint Σ_x Q(x) = 1. The idea is now to construct an approximate free energy F_approx and compute its minimizer Q*. Then Q* is interpreted as an approximation of P.
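The variational principle in Eqs. (9)-(10) can be checked numerically: at the exact distribution P the free energy equals −log Z, and any other normalized distribution scores higher. The potentials below are random toy values, not from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
psi = {e: rng.random((2, 2)) + 0.1 for e in edges}       # positive clique potentials

states = list(itertools.product([0, 1], repeat=n))
p_un = np.array([np.prod([psi[e][s[e[0]], s[e[1]]] for e in edges]) for s in states])
Z = p_un.sum()
P = p_un / Z

def free_energy(Q):
    # F(Q) = -sum_alpha sum_{x_alpha} Q(x_alpha) log psi_alpha + sum_x Q(x) log Q(x)
    E = -sum(Q[k] * sum(np.log(psi[e][s[e[0]], s[e[1]]]) for e in edges)
             for k, s in enumerate(states))
    return E + float((Q * np.log(Q)).sum())

Qpert = 0.99 * P + 0.01 * np.ones(len(P)) / len(P)       # another normalized distribution
print(free_energy(P) + np.log(Z), free_energy(Qpert) - free_energy(P))
```

The first printed number is numerically zero (the minimum value of F is −log Z), and the second is positive, illustrating that P is the minimizer.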
A popular approximate free energy is based on the Bethe assumption, which basically states that Q is approximately tree-like,

Q(x) ≈ ∏_α Q_α(x_α) ∏_i Q_i(x_i)^{1 − n_i},  (11)

in which n_i = |K_i| is the number of cliques that contain node i. This assumption is exact if the factor graph [6] of the model is a tree. Substitution of the tree assumption into the free energy leads to the well-known Bethe free energy

F_Bethe({Q_α, Q_i}) = − Σ_α Σ_{x_α} Q_α(x_α) log ψ_α(x_α) + Σ_α Σ_{x_α} Q_α(x_α) log Q_α(x_α) + Σ_i (1 − n_i) Σ_{x_i} Q_i(x_i) log Q_i(x_i),  (12)

which is to be minimized under the normalization constraints Σ_{x_α} Q_α(x_α) = 1 and Σ_{x_i} Q_i(x_i) = 1 and the marginalization constraints Σ_{x_α \ x_i} Q_α(x_α) = Q_i(x_i) for i ∈ K_α.
It can be shown that minima of the Bethe free energy are fixed points of the loopy belief
propagation algorithm [2].
In our proposal, we generalize upon the Bethe assumption, and make the parameterized assumption

Q(x) ≈ ∏_α Q_α(x_α)^{c_α} ∏_i Q_i(x_i)^{1 − n_i^c},  (13)

in which n_i^c = Σ_{α ∈ K_i} c_α. The intuition behind this assumption is that we replace each factor Q_α in (11) by a factor Q_α^{c_α}; the term with single-node marginals is constructed to deal with overcounted terms. Substitution of (13) into the free energy leads to the approximate free energy

F_c({Q_α, Q_i}) = − Σ_α Σ_{x_α} Q_α(x_α) log ψ_α(x_α) + Σ_α c_α Σ_{x_α} Q_α(x_α) log Q_α(x_α) + Σ_i (1 − n_i^c) Σ_{x_i} Q_i(x_i) log Q_i(x_i),  (14)

which is also parameterized by c. This class of free energies trivially contains the Bethe free energy (c_α = 1 for all α). In addition, it includes the conventionally written variational mean-field free energy as a limiting case for c_α → ∞ (implying an effective interaction of strength zero). If this limit is taken in (14), terms linear in c_α will dominate and act as a penalty term for non-factorial entropies. Consequently, the distributions will be constrained to be completely factorized, Q_α(x_α) = ∏_{i ∈ K_α} Q_i(x_i). Under these constraints, the remaining terms reduce to the conventional representation of the mean-field free energy. Thirdly, it contains the recently derived free energy to upper bound the log partition function [7]. This one is recovered if, for pair-wise cliques, the 1/c_ij's are set to the edge appearance probabilities in the so-called spanning tree polytope of the graph. These requirements imply that c_ij ≥ 1.
5 Fractional belief propagation
In this section we will use the fixed point equations of F_c to generalize Pearl's algorithm to fractional belief propagation as a heuristic to minimize F_c. Here, we do not worry too much about guaranteed convergence. If convergence is a problem, one can always resort to direct minimization of F_c using, e.g., Yuille's CCCP algorithm [3]. If standard belief propagation converges, its solution is guaranteed to be a local minimum of the Bethe free energy [8]. We expect a similar situation for F_c.
Fixed point equations of F_c are derived in the same way as in [2]. We obtain

Q_α(x_α) ∝ ψ_α(x_α)^{1/c_α} ∏_{i ∈ K_α} ∏_{β ∈ K_i \ α} m_βi(x_i),  (15)

Q_i(x_i) ∝ ∏_{α ∈ K_i} m_αi(x_i),  (16)

m_αi(x_i) ∝ Σ_{x_α \ x_i} Q_α(x_α) / [ Q_i(x_i) / m_αi(x_i) ],  (17)

and we notice that Q_α has indeed the functional dependency on ψ_α^{1/c_α} desired in (8). Inspired by Pearl's loopy belief propagation algorithm, we use the above equations to formulate fractional belief propagation BP(c) (see Algorithm 1). BP(1), i.e., BP(c) with all c_α = 1, is equivalent to standard loopy belief propagation.
Algorithm 1 Fractional Belief Propagation BP(c)
1: initialize(m, Q)
2: repeat
3: for all α do
4: update Q_α according to (15).
5: update m_αi, i ∈ K_α, according to (17) using the new Q_α and the old Q_i.
6: update Q_i, i ∈ K_α, by marginalization of Q_α.
7: end for
8: until convergence criterion is met (or maximum number of iterations is exceeded)
9: return Q (or failure)
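A compact sketch of Algorithm 1 for a pairwise model with binary variables (toy potentials, invented for illustration). The message update at line 5 multiplies m_αi by the ratio of the new clique marginal to the old single-node marginal, as in Eq. (17); with all c_α = 1 this reduces to standard belief propagation, which is exact on the small tree used here.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 4
cliques = [(0, 1), (1, 2), (1, 3)]                 # a small tree
psi = {a: rng.random((2, 2)) + 0.1 for a in cliques}
c = {a: 1.0 for a in cliques}                      # c_alpha = 1: standard BP

K = {i: [a for a in cliques if i in a] for i in range(n)}
m = {(a, i): np.ones(2) for a in cliques for i in a}
Qi = {i: np.ones(2) / 2 for i in range(n)}

def clique_marginal(a):                            # Eq. (15)
    i, j = a
    mu_i = np.prod([m[(b, i)] for b in K[i] if b != a] or [np.ones(2)], axis=0)
    mu_j = np.prod([m[(b, j)] for b in K[j] if b != a] or [np.ones(2)], axis=0)
    Q = psi[a] ** (1.0 / c[a]) * np.outer(mu_i, mu_j)
    return Q / Q.sum()

for _ in range(30):                                # Algorithm 1, lines 2-8
    for a in cliques:
        Qa = clique_marginal(a)                    # line 4
        for pos, i in enumerate(a):
            marg = Qa.sum(axis=1 - pos)
            m[(a, i)] = m[(a, i)] * marg / Qi[i]   # line 5 (two-way update)
            m[(a, i)] /= m[(a, i)].sum()
            Qi[i] = marg                           # line 6: marginalization of Q_alpha

# brute-force single-node marginal for comparison
states = list(itertools.product([0, 1], repeat=n))
p = np.array([np.prod([psi[a][s[a[0]], s[a[1]]] for a in cliques]) for s in states])
p /= p.sum()
exact0 = np.array([sum(pk for pk, s in zip(p, states) if s[0] == v) for v in (0, 1)])
print(np.abs(Qi[0] - exact0).max())                # ~0: exact on a tree
```

With c_α ≠ 1, the only change is the exponent 1/c_α on the clique potential, which is precisely the effective-interaction rescaling motivating the method.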
As a theoretical footnote we mention a different (generally more greedy) c-algorithm, BP'(c), which has the same fixed points as BP(c). This algorithm is similar to Algorithm 1, except that (1) the update of Q_α (in line 4) is to be taken with c_α = 1, as in standard belief propagation, and (2) the update of the marginals Q_i (in line 6) is to be performed by minimizing the divergence D^γ with γ = 1/c_α, where

D^γ(P || Q) = 1/(γ(1 − γ)) Σ_x [ γ P(x) + (1 − γ) Q(x) − P(x)^γ Q(x)^{1−γ} ],  (18)

with the limiting cases

D^1(P || Q) = Σ_x P(x) log [P(x)/Q(x)],  D^0(P || Q) = Σ_x Q(x) log [Q(x)/P(x)],  (19)

rather than by marginalization (which corresponds to minimizing D^1, which is equal to the usual KL divergence). The D^γ's are known as the α-divergences [9], defined for 0 < γ < 1 and extended by continuity to γ = 0 and γ = 1. The minimization of the Q_i's using D^0 leads to the well-known mean field equations.
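The α-divergence of Eq. (18) and its limits in Eq. (19) can be verified numerically; the two distributions below are toy examples.

```python
import numpy as np

def d_alpha(p, q, g):
    """Alpha-divergence of Eq. (18) with parameter g in (0, 1)."""
    return float((g * p + (1 - g) * q - p**g * q**(1 - g)).sum() / (g * (1 - g)))

def kl(p, q):
    return float((p * np.log(p / q)).sum())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])
print(d_alpha(p, q, 1 - 1e-6), kl(p, q))   # limit (19): D -> KL(P||Q) as g -> 1
print(d_alpha(p, q, 1e-6), kl(q, p))       # and D -> KL(Q||P) as g -> 0
```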
6 Tuning using linear response theory
Now the question is, how do we set the parameters c? The idea is as follows: if we could have access to the true marginals P(x_α), we could optimize c by minimizing, for example,

KL(c) = Σ_α Σ_{x_α} P(x_α) log [ P(x_α) / Q_α(x_α; c) ],  (20)

in which we labeled Q_α by c to emphasize its dependency on the scale parameters. Unfortunately, we do not have access to the true pair marginals, but if we would have estimates Q̂_α that improve upon Q_α, we can compute new parameters c such that Q_α is closer to Q̂_α. However, with the new parameters the estimates Q̂_α will be changed as well, and this procedure should be iterated.
In this paper, we use linear response theory [10] to improve upon Q_ij. For simplicity, we restrict ourselves to Boltzmann machines with binary units. Applying linear response theory to BP(c) yields the following linear response estimates for the pair marginals,

Q_ij^LR(s_i, s_j) = Q_i(s_i) Q_j(s_j) + (s_i s_j / 4) ∂⟨s_i⟩/∂θ_j.  (21)
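Equation (21) rests on the linear response identity that, for the exact distribution, the susceptibility ∂⟨s_i⟩/∂θ_j equals the covariance ⟨s_i s_j⟩ − ⟨s_i⟩⟨s_j⟩. A brute-force check of this identity on a small Boltzmann machine (toy parameters):

```python
import itertools
import numpy as np

w = {(0, 1): 0.6, (1, 2): -0.4, (0, 2): 0.3}     # toy couplings
theta = np.array([0.1, -0.3, 0.2])                # toy thresholds
n = 3

def moments(th):
    states = [np.array(s, float) for s in itertools.product([-1, 1], repeat=n)]
    p = np.array([np.exp(sum(v * s[a] * s[b] for (a, b), v in w.items()) + th @ s)
                  for s in states])
    p /= p.sum()
    m = sum(pi * s for pi, s in zip(p, states))                 # means <s_i>
    ss = sum(pi * np.outer(s, s) for pi, s in zip(p, states))   # <s_i s_j>
    return m, ss

m0, ss0 = moments(theta)
eps = 1e-6
th1 = theta.copy()
th1[1] += eps
m1, _ = moments(th1)
chi01 = (m1[0] - m0[0]) / eps            # susceptibility d<s_0>/d(theta_1)
cov01 = ss0[0, 1] - m0[0] * m0[1]        # covariance <s_0 s_1> - <s_0><s_1>
print(chi01, cov01)                      # equal up to finite-difference error
```

In the algorithm below the same finite-difference idea is used, but the susceptibility is estimated within BP(c) rather than from the exact distribution.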
Algorithm 2 Tuning c by linear response
1: initialize(c)
2: repeat
3: set step-size η_t
4: compute the linear response estimates Q_ij^LR as in (21)
5: compute Δc_ij as in (22).
6: set c_ij ← c_ij + η_t Δc_ij
7: until convergence criterion is met
8: return c
In [10], it is argued that if Q is correct up to O(ε), the error in the linear response estimate is O(ε²). Linear response theory has been applied previously to improve upon pair marginals (or correlations) in the naive mean field approximation [11] and in loopy belief propagation [12].

To iteratively compute new scaling parameters from the linear response corrections we use a gradient-descent-like algorithm,

Δc_ij = − ∂/∂c_ij KL(Q_ij^LR || Q_ij),  (22)

with a time-dependent step-size parameter η_t.

By iteratively computing the linear response marginals and adapting the scale parameters in the gradient descent direction, we can optimize c (see Algorithm 2). Each linear response estimate can be computed numerically by applying BP(c) to a Boltzmann machine with parameters w_ij and θ_i + δθ_i. Partial derivatives with respect to c, required for the gradient in (22), can be computed numerically by rerunning fractional belief propagation with parameters c + δc_ij. In this procedure the computation cost to update c requires O(n + |E|) times the cost of BP(c), where n is the number of nodes and |E| is the number of edges.
7 Numerical results
We applied the method to Boltzmann machines in which the nodes are connected according to a square grid with periodic boundary conditions. The weights in the model were drawn from a binary distribution, w_ij = ±w with equal probability, and the thresholds θ_i were drawn at random as well. We generated several networks, and compared results of standard loopy belief propagation to results obtained by fractional belief propagation where the scale parameters were obtained by Algorithm 2.
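A sketch of the experimental topology: an L × L grid with periodic boundaries and random ±w couplings. The grid size and coupling strength below are example values; the paper's exact settings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
L, wmag = 4, 0.5                       # example grid size and coupling strength

def node(r, c):
    return (r % L) * L + (c % L)       # periodic boundary conditions

edges = []
for r in range(L):
    for col in range(L):
        edges.append((node(r, col), node(r, col + 1)))   # right neighbor
        edges.append((node(r, col), node(r + 1, col)))   # down neighbor
w = {e: wmag * rng.choice([-1.0, 1.0]) for e in edges}   # w_ij = +/- w

deg = np.zeros(L * L, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
print(len(edges), deg.min(), deg.max())                  # 2 L^2 edges, degree 4
```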
In the experiment the step size η_t was decreased over time. The iterations were stopped if the maximum change in c fell below a small tolerance, or if the number of iterations exceeded a preset maximum. Throughout the procedure, fractional belief propagation was run with a convergence criterion of maximal difference between messages in successive iterations (one iteration is one cycle over all weights). In our experiment, all (fractional) belief propagation runs converged. The number of updates of c ranged between 20 and 80. After optimization we found (inverse) scale parameters that varied from edge to edge and from network to network.
Results are plotted in figure 1. In the left panel, it can be seen that the procedure can lead to significant improvements. In these experiments, the solutions obtained by optimized BP(c) are consistently 10 to 100 times better in averaged KL divergence than the ones obtained by BP(1).
Figure 1: Left: Scatter plots of averaged KL divergence between exact and approximated pair marginals obtained by the optimized fractional belief propagation (BP(c)) versus the ones obtained by standard belief propagation (BP(1)). Each point in the plot is the result of one instantiation of the network. Right: approximated single-node means for standard BP(1) and optimized BP(c) against the exact single-node means. This plot is for the network where BP(1) had the worst performance (i.e., corresponding to the point in the left panel with the highest KL for BP(1)).
The averaged KL divergence is defined as

⟨KL(P_ij || Q_ij)⟩ = (1/|E|) Σ_{(ij)} Σ_{s_i, s_j} P(s_i, s_j) log [ P(s_i, s_j) / Q(s_i, s_j) ].  (23)
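Equation (23) as a small helper function; the 2×2 pair-marginal tables below are toy examples.

```python
import numpy as np

def avg_kl(P_pairs, Q_pairs):
    """Average KL(P_ij || Q_ij) over a list of edge marginals, Eq. (23)."""
    kls = [float((P * np.log(P / Q)).sum()) for P, Q in zip(P_pairs, Q_pairs)]
    return float(np.mean(kls))

P1 = np.array([[0.4, 0.1], [0.1, 0.4]])
Q1 = np.array([[0.25, 0.25], [0.25, 0.25]])
print(avg_kl([P1, P1], [Q1, P1]))   # the second (perfect) edge contributes zero
```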
In the right panel, approximations of single-node means are plotted for the case where BP(1) had the worst performance. Here we see that the procedure can lead to quite precise estimates of the means, even if the quality of the solutions obtained by BP(1) is very poor. Here, it should be noticed that the linear response correction does not alter the estimated means [12]. In other words, the improvement in quality of the means is a result of optimized BP(c), and not of the linear response correction.
8 Conclusions
In this paper, we introduced fractional belief propagation as a family of approximating inference methods that generalize upon loopy belief propagation without resorting to larger
clusters. The approximations are parameterized by scale parameters c, which are motivated to better model the effective interactions due to the effect of loops in the graph. The
approximations are formulated in terms of approximating free energies. This family of approximating free energies includes as special cases the Bethe free energy, the mean field
free energy, and also the free energy approximation that provides an upper bound on the
log partition function, developed in [7].
In order to apply fractional belief propagation, the scale parameters have to be tuned. In
this paper, we demonstrated in toy problems for Boltzmann machines that it is possible to
tune the scale parameters using linear response theory. Results show that considerable improvements can be obtained, even if standard loopy belief propagation is of poor quality. In
principle, the method is applicable to larger and more general graphical models. However,
how to make the tuning of scale parameters practically feasible in such models is still to be
explored.
Acknowledgements
We thank Bert Kappen for helpful comments and the Dutch Technology Foundation STW
for support.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., 1988.
[2] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS 13.
[3] A. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent
alternatives to belief propagation. Neural Computation, July 2002.
[4] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT Media
Lab, 2001.
[5] S.L. Lauritzen and D.J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Statistical Society B, 50:154-227, 1988.
[6] F. Kschischang, B. Frey, and H. Loeliger. Factor graphs and the sum-product algorithm. IEEE
Transactions on Information Theory, 47(2):498-519, 2001.
[7] W. Wainwright, T. Jaakkola, and S. Willsky. A new class of upper bounds on the log partition
function. In UAI-2002, pages 536-543.
[8] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy.
In NIPS 15.
[9] S. Amari, S. Ikeda, and H. Shimokawa. Information geometry of α-projection in mean field approximation. In M. Opper and D. Saad, editors, Advanced Mean Field Methods, pages 241-258, Cambridge, MA, 2001. MIT Press.
[10] G. Parisi. Statistical Field Theory. Addison-Wesley, Redwood City, CA, 1988.
[11] H.J. Kappen and F.B. Rodríguez. Efficient learning in Boltzmann machines using linear response theory. Neural Computation, 10:1137-1156, 1998.
[12] M. Welling and Y.W. Teh. Propagation rules for linear response estimates of joint pairwise
probabilities. 2002. Submitted.
Lang and Hinton
Dimensionality Reduction and Prior Knowledge in
E-set Recognition
Geoffrey E. Hinton
Computer Science Dept.
University of Toronto
Toronto, Ontario M5S lA4
Canada
Kevin J. Lang1
Computer Science Dept.
Carnegie Mellon University
Pittsburgh, PA 15213
USA
ABSTRACT
It is well known that when an automatic learning algorithm is applied
to a fixed corpus of data, the size of the corpus places an upper bound
on the number of degrees of freedom that the model can contain if
it is to generalize well. Because the amount of hardware in a neural
network typically increases with the dimensionality of its inputs, it
can be challenging to build a high-performance network for classifying
large input patterns. In this paper, several techniques for addressing this
problem are discussed in the context of an isolated word recognition
task.
1 Introduction
The domain for our research was a speech recognition task that requires distinctions to be
learned between recordings of four highly confusable words: the names of the letters "B",
"D", "E", and "V". The task was created at IBM's T. J. Watson Research Center, and is
difficult because many speakers were included and also because the recordings were made
under noisy office conditions using a remote microphone. One hundred male speakers
said each of the 4 words twice, once for training and again for testing. The words were
spoken in isolation, and the recordings averaged 1.1 seconds in length. The signal-to-noise ratio of the data set has been estimated to be about 15 decibels, as compared to
1 Now at NEC Research Institute, 4 Independence Way, Princeton, NJ 08540.
50 decibels for typical lip-mike recordings (Brown, 1987). The key feature of the data
set from our point of view is that each utterance contains a tiny information-laden event
- the release of the consonant - which can easily be overpowered by meaningless
variation in the strong "E" vowel and by background noise.
Our first step in processing these recordings was to convert them into spectrograms using
a standard DFT program. The spectrograms encoded the energy in 128 frequency bands
(ranging up to 8 kHz) at 3 msec intervals, and so they contained an average of about
45,000 energy values. Thus, a naive back-propagation network which devoted a separate
weight to each of these input components would contain far too many weights to be
properly constrained by the task's 400 training patterns.
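As a concrete sketch of this preprocessing step, the short-time-Fourier-transform computation below produces the 128-band, 3 msec-interval energy representation described above. The 16 kHz sample rate, Hanning window, and FFT size are assumptions not stated in the text.

```python
import numpy as np

def spectrogram(signal, sample_rate=16000, n_bands=128, hop_ms=3):
    """Energy in n_bands frequency bands (up to the 8 kHz Nyquist rate)
    at hop_ms intervals, via a short-time Fourier transform."""
    hop = int(sample_rate * hop_ms / 1000)       # 48 samples per 3 msec step
    win = 2 * n_bands                            # 256-point FFT gives 128 usable bins
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hanning(win)
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(power[:n_bands])           # keep the 128 bands up to 8 kHz
    return np.array(frames)                      # shape: (n_frames, n_bands)

# An average-length utterance: 1.1 seconds, as reported in the text.
utterance = np.random.randn(int(1.1 * 16000))
S = spectrogram(utterance)
print(S.shape, S.size)   # (362, 128): about 46,000 energy values
```

With these parameters a 1.1 second recording yields roughly 46,000 energy values, close to the average of about 45,000 cited above.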
As described in the next section, we drastically reduced the dimensionality of our training
patterns by decreasing their resolution in both frequency and time and also by using a
segmentation algorithm to extract the most relevant portion of each pattern. However, our
network still contained too many weights, and many of them were devoted to detecting
spurious features. This situation motivated the experiments with our network's objective
function and architecture that will be described in sections 3 and 4.
2 Reducing the Dimensionality of the Input Patterns
Because it would have been futile to feed our gigantic raw spectrograms into a backpropagation network, we first decreased the time resolution of our input format by a factor
of 4 and the frequency resolution of the format by a factor of 8. While our compression
along the time axis preserved the linearity of the scale, we combined different numbers
of raw frequencies into the various frequency bands to create a mel scale, which is linear
up to 2 kHz and logarithmic above that, and thus provides more resolution in the more
informative lower frequency bands.
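The two reductions can be sketched as average pooling over time plus band grouping over frequency. The exact band-edge placement the authors used is not given, so the edges below (linear below 2 kHz, geometric above, mirroring the mel-style scale described) are an assumption.

```python
import numpy as np

def reduce_resolution(spec, time_factor=4, n_out_bands=16):
    """Cut time resolution by 4 (averaging) and map 128 linear frequency
    bins onto 16 mel-style bands: linear below 2 kHz, logarithmic above."""
    # Time axis: average every time_factor consecutive frames.
    n = (spec.shape[0] // time_factor) * time_factor
    coarse = spec[:n].reshape(-1, time_factor, spec.shape[1]).mean(axis=1)

    # Frequency axis: 2 kHz sits at bin 32 of 128 (8 kHz Nyquist).
    lin = np.linspace(0, 32, 9)                  # 8 bands below 2 kHz
    log = np.geomspace(32, 128, 9)[1:]           # 8 bands from 2 kHz to 8 kHz
    edges = np.concatenate([lin, log]).astype(int)
    bands = [coarse[:, edges[i]:edges[i + 1]].mean(axis=1)
             for i in range(n_out_bands)]
    return np.stack(bands, axis=1)

compact = reduce_resolution(np.random.rand(362, 128))
print(compact.shape)   # (90, 16)
```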
Next, a segmentation heuristic was used to locate the consonant in each training pattern
so that the rest of the pattern could be discarded. On average, all but 1/7 of each
recording was thrown away, but we would have liked to have discarded more. The
useful information in a word from the E-set is concentrated in a roughly 50 msec region
around the consonant release in the word, but current segmentation algorithms aren't
good enough to accurately position a 50 msec window on that region. To prevent the
loss of potentially useful information, we extracted a 150 msec window from around each
consonant release. This safeguard meant that our networks contained about 3 times as
many weights as would be required with an ideal segmentation.
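The windowing step itself is simple once a release frame is given; in the sketch below, release_frame stands in for the output of the segmentation heuristic, which the text does not specify.

```python
def extract_window(spec, release_frame, window_ms=150, hop_ms=3):
    """Keep a 150 msec window centered on the detected consonant release
    (50 frames at 3 msec per frame); the rest of the utterance is discarded."""
    half = window_ms // (2 * hop_ms)                       # 25 frames per side
    start = max(0, min(release_frame - half, len(spec) - 2 * half))
    return spec[start:start + 2 * half]

frames = list(range(362))          # stand-in for a 362-frame spectrogram
segment = extract_window(frames, release_frame=180)
print(len(segment), segment[0], segment[-1])   # 50 155 204
```

Only about 1/7 of each recording survives this step, matching the proportion reported above.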
We were also concerned that segmentation errors during recognition could lower our
final system's performance, so we adopted a simple segmentation-free testing method in
which the trained network is scanned over the full-length version of each testing utterance.
Figures 3(a) and 3(b) show the activation traces generated by two different networks when
scanned over four sample utterances. To the right of each of the capital letters which
identifies a particular sample word is a set of 4 wiggly lines that should be viewed as
the output of a 4-channel chart recorder which is connected to the network's four output
units. Our recognition rule for unsegmented utterances states that the output unit which
[Figure 1 panels: output-unit weight patterns over a frequency axis marked at 1 kHz, 2 kHz, and 8 kHz]
Figure 1: Output Unit Weights from Four Different 2-layer BDEV Networks: (a) baseline, (b) smoothed, (c) decayed, (d) TDNN
generates the largest activation spike (and hence the highest peak in the chart recorder's
traces) on a given utterance determines the network's classification of that utterance. 2
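The recognition rule for unsegmented utterances can be sketched as follows; the 16-frame window and the toy stand-in network are illustrative assumptions, not the trained network from the experiments.

```python
import numpy as np

def classify_by_scanning(net, utterance, window=16):
    """The rule in the text: scan the trained network across the full
    utterance; the output unit with the largest activation spike anywhere
    determines the classification."""
    peaks = np.full(4, -np.inf)                  # one peak per word: B, D, E, V
    for t in range(len(utterance) - window + 1):
        acts = net(utterance[t:t + window])      # 4 output activations
        peaks = np.maximum(peaks, acts)
    return int(np.argmax(peaks))

# Hypothetical stand-in for a trained 2-layer network (4 output units).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 16 * 16))            # assumed 16-frame x 16-band window
net = lambda x: W @ x.reshape(-1)
print(classify_by_scanning(net, rng.standard_normal((90, 16))))
```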
To establish a performance baseline for the experiments that will be described in the next
two sections, we trained the simple 2-layer network of figure 2(a) until it had learned to
correctly identify 94 percent of our training segments.3
This network contains 4 output units (one for each word) but no hidden units. 4 The
weights that this network used to recognize the words B and D are shown in figure l(a).
While these weight patterns are quite noisy, people who know how to read spectrograms
can see sensible feature detectors amidst the clutter. For example, both of the units appear
to be stimulated by an energy burst near the 9th time frame. However, the units expect
to see this energy at different frequencies because the tongue position is different in the
consonants that the two units represent.
Unfortunately, our baseline network's weights also contain many details that don't make
2 One can't reasonably expect a network that has been trained on pre-segmented patterns to function well
when tested in this way, but our best network (a 3-layer TDNN) actually does perform better in this mode
than when trained and tested on segments selected by a Viterbi alignment with an IBM hidden Markov model.
Moreover, because the Viterbi alignment procedure is told the identity of the words in advance, it is probably
more accurate than any method that could be used in a real recognition system.
3 This rather arbitrary halting rule for the learning procedure was uniformly employed during the experiments
of sections 2, 3 and 4.
4 Experiments performed with multi-layer networks support the same general conclusions as the results
reported here.
any sense to speech recognition experts. These spurious features are artifacts of our
small, noisy training set, and are partially to blame for the very poor performance of
the network; it achieved only 37 percent recognition accuracy when scanned across the
unsegmented testing utterances.
3 Limiting the Complexity of a Network using a Cost Function
Our baseline network performed poorly because it had lots of free parameters with which
it could model spurious features of the training set. However, we had already taken our
brute force techniques for input dimensionality reduction (pre-segmenting the utterances
and reducing the resolution of input format) about as far as possible while still retaining
most of the useful information in the patterns. Therefore it was necessary to resort to
a more subtle form of dimensionality reduction in which the back-propagation learning
algorithm is allowed to create complicated weight patterns only to the extent that they
actually reduce the network's error.
This constraint is implemented by including a cost term for the network's complexity in
its objective function. The particular cost function that should be used is induced by a
particular definition of what constitutes a complicated weight pattern, and this definition
should be chosen with care. For example, the rash of tiny details in figure l(a) originally
led us to penalize weights that were different from their neighbors, thus encouraging the
network to develop smooth, low-resolution weight patterns whenever possible.
C = (1/2) Σ_i (1/|N_i|) Σ_{j ∈ N_i} (w_i - w_j)^2    (1)
To compute the total tax on non-smoothness, each weight w_i was compared to all of its
neighbors (which are indexed by the set N_i). When a weight differed from a neighbor,
a penalty was assessed that was proportional to the square of their difference. The term
1/|N_i| normalized for the fact that units at the edge of a receptive field have fewer
neighbors than units in the middle.
When a cost function is used, a tradeoff factor λ is typically used to control the relative
importance of the error and cost components of the overall objective function O = E + λC.
The gradient of the overall objective function is then ∇O = ∇E + λ∇C. To compute
∇C, we needed the derivative of our cost function with respect to each weight w_i. This
derivative is just the difference between the weight and the average of its neighbors:
∂C/∂w_i = w_i - (1/|N_i|) Σ_{j ∈ N_i} w_j, so minimizing the combined objective function was equivalent
to minimizing the network's error while simultaneously smoothing the weight patterns
by decaying each weight towards the average of its neighbors.
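A direct NumPy sketch of this cost and its gradient is below. The explicit neighbor list per weight is an assumption for illustration; the concrete neighborhood structure of the receptive fields is not spelled out in the text.

```python
import numpy as np

def smoothness_cost_and_grad(w, neighbors):
    """Equation (1): C = 1/2 * sum_i (1/|N_i|) * sum_{j in N_i} (w_i - w_j)^2,
    with the gradient given in the text: dC/dw_i = w_i minus the average
    of w_i's neighbors."""
    cost = 0.0
    grad = np.empty_like(w)
    for i, N_i in enumerate(neighbors):
        diffs = w[i] - w[N_i]
        cost += 0.5 * np.mean(diffs ** 2)
        grad[i] = w[i] - np.mean(w[N_i])
    return cost, grad

# A 1-D receptive field: interior weights have two neighbors, edge weights one.
w = np.array([0.0, 1.0, 0.0, 1.0])
neighbors = [[1], [0, 2], [1, 3], [2]]
cost, grad = smoothness_cost_and_grad(w, neighbors)
print(cost)   # 2.0
print(grad)   # [-1.  1. -1.  1.]
```

Following the gradient pulls each weight toward its neighbors' average, which is exactly the smoothing-by-decay behavior described.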
Figure 1(b) shows the B and D weight patterns of a 2-layer network that was trained
under the influence of this cost function. As we had hoped, sharp transitions between
neighboring weights occurred primarily in the maximally informative consonant release
of each word, while the spurious details that had plagued our baseline network were
smoothed out of existence. However, this network was even worse at the task of generalizing to unsegmented test cases than the baseline network, getting only 35 percent of
them correct
While equation 1 might be a good cost function for some other task, it doesn't capture
our prior knowledge that the discrimination cues in E-set recognition are highly localized
in time. This cost function tells the network to treat unimportant neighboring input
components similarly, but we really want to tell the network to ignore these components
altogether. Therefore, a better cost function for this task is the one associated with
standard weight decay:
C = (1/2) Σ_i w_i^2    (2)
Equation 2 causes weights to remain close to zero unless they are particularly valuable
for reducing the network's error on the training set. Unfortunately, the weights that our
network learns under the influence of this function merely look like smaller versions of
the baseline weights of figure 1(a) and perform just as poorly. No matter what value is
used for λ, there is very little size differentiation between the weights that we know to
be valuable for this task and the weights that we know to be spurious. Weight decay
fails because our training set is so small that spurious weights do not appear to be as
irrelevant as they really are for performing the task in general. Fortunately, there is a
modified form of weight decay (Scalettar and Zee, 1988) that expresses the idea that the
disparity between relevant and irrelevant weights is greater than can be deduced from the
training set:
C = (1/2) Σ_i w_i^2 / (2.5 + w_i^2)    (3)
The weights of figure 1(c) were learned under the influence of equation 3. 5 In these patterns, the feature detectors that make sense to speech recognition experts stand out clearly
above a highly suppressed field of less important weights. This network generalizes to
48 percent of the unsegmented test cases, while our earlier networks had managed only
37 percent accuracy.
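The size differentiation described here comes out of the two penalties' gradients, sketched below; the constant 2.5 follows the denominator of equation 3.

```python
def standard_decay_grad(w):
    # d/dw of (1/2) w^2 (equation 2): grows linearly with the weight.
    return w

def modified_decay_grad(w):
    # d/dw of (1/2) w^2 / (2.5 + w^2) (equation 3): 2.5*w / (2.5 + w^2)^2.
    # Near zero this pulls like ordinary weight decay, but the pull
    # vanishes for large weights, so genuinely valuable weights keep growing.
    return 2.5 * w / (2.5 + w ** 2) ** 2

for w in (0.1, 1.0, 5.0):
    print(w, standard_decay_grad(w), modified_decay_grad(w))
# At w = 5, the modified pull (~0.017) is roughly 300x weaker than standard
# decay's pull of 5.0, giving the size differentiation the text asks for.
```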
4 A Time-Delay Neural Network
The preceding experiments with cost functions show that controlling attention (rather
than resolution) is the key to good performance on the BDEV task. The only way to
accurately classify the utterances in this task is to focus on the tiny discrimination cues
in the spectrograms while ignoring the remaining material in the patterns.
Because we know that the BDEV discrimination cues are highly localized in time, it
would make sense to build a network whose architecture reflected that knowledge. One
such network (see figure 2(b)) contains many copies of each output unit. These copies
apply identical weight patterns to the input in all possible positions. The activation values
5 We trained with λ = 100 here as opposed to the setting of λ = 10 that worked best with standard weight decay.
[Figure 2 panels: (a) a conventional net with a 12-frame input and 4 output units; (b) a time-delay net with a 16-frame input and 8 copies of each output unit]
Figure 2: Conventional and Time-Delay 2-layer Networks
from all of the copies of a given output unit are summed to generate the overall output
value for that unit.6
Now, assuming that the learning algorithm can construct weight patterns which recognize
the characteristic features of each word while rejecting the rest of the material in the
words, then when an instance of a particular word is shown to the network, the only unit
that will be activated is the output unit copy for that word which happens to be aligned
with the recognition cues in the pattern. Then, the summation step at the output stage of
the network serves as an OR gate which transmits that activation to the outside world.
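A minimal forward pass for this architecture can be sketched as below. The 9-frame receptive field (giving 8 copies of each output unit on a 16-frame input, as in figure 2(b)) and the sigmoid activations are assumptions chosen for illustration.

```python
import numpy as np

def tdnn_output(x, W, width=9):
    """Time-delay layer: one weight pattern per word is replicated at every
    time position, and the copies' activations are summed into that word's
    output (the OR-like behavior described above)."""
    out = np.zeros(W.shape[0])                   # one output per word
    for t in range(x.shape[0] - width + 1):      # 8 copies on a 16-frame input
        window = x[t:t + width].reshape(-1)
        out += 1.0 / (1.0 + np.exp(-(W @ window)))   # sigmoid unit copies
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 12))                # 16 time frames x 12 bands
W = 0.1 * rng.standard_normal((4, 9 * 12))       # 4 words, 9-frame receptive field
print(tdnn_output(x, W))                         # four summed activations
```

Because every copy shares one weight pattern, a word's output depends only on whether its cue appears somewhere in the input, not on where, which is the translation invariance noted in the footnote.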
This network architecture, which has been named the "Time-Delay Neural Network"
or "TDNN", has several useful properties for E-set recognition, all of which are consequences of the fact that the network essentially performs its own segmentation by
recognizing the most relevant portion of each input and rejecting the rest. One benefit is that sharp weight patterns can be learned even when the training patterns have
been sloppily segmented. For example, in the TDNN weight patterns of figure l(d), the
release-burst detectors are localized in a single time frame, while in the earlier weight
patterns from conventional networks they were smeared over several time frames.
Also, the network learns to actively discriminate between the relevant and irrelevant
portions of its training segments, rather than trying to ignore the latter by using small
weights. This turns out to be a big advantage when the network is later scanned across
unsegmented utterances, as evidenced by the vastly different appearances of the output
6 We actually designed this network before performing our experiments with cost functions, and were originally attracted by its translation invariance rather than by the advantages mentioned here (Lang, 1987).
[Figure 3 panels: chart-recorder style traces of the b, d, e, v output units over 500 msec for four sample utterances (B, D, E, V)]
Figure 3: Output Unit Activation Traces of a Conventional Network and a Time-Delay Network, on Four Sample Utterances
activity traces in figures 3(a) and 3(b).7
Finally, because the TDNN can locate and attend to the most relevant portion of its
input, we are able to make its receptive fields very narrow, thus reducing the number of
free parameters in the network and making it highly trainable with the small number of
training cases that are available in this task. In fact, the scanning mode generalization rate
of our 2-layer TDNN is 65 percent, which is nearly twice the accuracy of our baseline
2-layer network.
5 Comparison with other systems
The 2-layer networks described up to this point were trained and tested under identical
conditions so that their performances could be meaningfully compared. No attempt was
made to achieve really high performance in these experiments. On the other hand when
7 While the main text of this paper compares the performance of a sequence of 2-layer networks, the plots of
figure 3 show the output traces of 3-layer versions of the networks. The correct plots could not be conveniently
generated because our eMU Common Lisp program for creating them has died of bit rot.
we trained a 3-layer TDNN using the slightly fancier methodology described in (Lang,
Hinton, and Waibel, 1990),8 we obtained a system that generalized to about 91 percent of
the unsegmented test cases. By comparison, the standard, large-vocabulary IBM hidden
Markov model accounts for 80 percent of the test cases, and the accuracy of human
listeners has been measured at 94 percent. In fact, the TDNN is probably the best
automatic recognition system built for this task to date; it even performs slightly better
than the continuous acoustic parameter, maximum mutual information hidden Markov
model proposed in (Brown, 1987).
6 Conclusion
The performance of a neural network can be improved by building a priori knowledge
into the network's architecture and objective function. In this paper, we have exhibited
two successful examples of this technique in the context of a speech recognition task
where the crucial information for making an output decision is highly localized and
where the number of training cases is limited. Tony Zee's modified version of weight
decay and our time-delay architecture both yielded networks that focused their attention
on the short-duration discrimination cues in the utterances. Conversely, our attempts to
use weight smoothing and standard weight decay during training got us nowhere because
these cost functions didn't accurately express our knowledge about the task.
Acknowledgements
This work was supported by Office of Naval Research contract NOOOI4-86-K-0167, and
by a grant from the Ontario Information Techology Research Center. Geoffrey Hinton is
a fellow of the Canadian Institute for Advanced Research.
References
P. Brown. (1987) The Acoustic-Modeling Problem in Automatic Speech Recognition.
Doctoral Dissertation, Carnegie Mellon University.
K. Lang. (1987) Connectionist Speech Recognition. PhD Thesis Proposal, Carnegie
Mellon University.
K. Lang, G. Hinton, and A. Waibel. (1990) A Time-Delay Neural Network Architecture
for Isolated Word Recognition. Neural Networks 3(1).
R. Scalettar and A. Zee. (1988) In D. Waltz and J. Feldman (eds.), Connectionist Models
and their Implications, p. 309. Publisher: Ablex.
8 Wider but less precisely aligned training segments were employed, as well as randomly selected "counterexample" segments that further improved the network's already good "E" and background noise rejection.
Also, a preliminary cross-validation run was performed to locate a nearly optimal stopping point for the
learning procedure. When trained using this improved methodology, a conventional 3-layer network achieved
a generalization score in the mid 50's.
An Impossibility Theorem for Clustering
Jon Kleinberg
Department of Computer Science
Cornell University
Ithaca NY 14853
Abstract
Although the study of clustering is centered around an intuitively
compelling goal, it has been very difficult to develop a unified
framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research
community. Here we suggest a formal perspective on the difficulty
in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no
clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs
at work in well-studied clustering techniques such as single-linkage,
sum-of-pairs, k-means, and k-median.
1
Introduction
Clustering is a notion that arises naturally in many fields; whenever one has a heterogeneous set of objects, it is natural to seek methods for grouping them together
based on an underlying measure of similarity. A standard approach is to represent
the collection of objects as a set of abstract points, and define distances among the
points to represent similarities: the closer the points, the more similar they are.
Thus, clustering is centered around an intuitively compelling but vaguely defined
goal: given an underlying set of points, partition them into a collection of clusters so
that points in the same cluster are close together, while points in different clusters
are far apart.
The study of clustering is unified only at this very general level of description, however; at the level of concrete methods and algorithms, one quickly encounters a bewildering array of different clustering techniques, including agglomerative, spectral,
information-theoretic, and centroid-based, as well as those arising from combinatorial optimization and from probabilistic generative models. These techniques are
based on diverse underlying principles, and they often lead to qualitatively different
results. A number of standard textbooks [1, 4, 6, 9] provide overviews of a range of
the approaches that are generally employed.
Given the scope of the issue, there has been relatively little work aimed at reasoning
about clustering independently of any particular algorithm, objective function, or
generative data model. But it is not clear that this needs to be the case. To take
a motivating example from a technically different but methodologically similar setting,
research in mathematical economics has frequently formalized broad intuitive
notions (how to fairly divide resources, or how to achieve consensus from individual
preferences) in what is often termed an axiomatic framework ? one enumerates a
collection of simple properties that a solution ought to satisfy, and then studies how
these properties constrain the solutions one is able to obtain [10]. In some striking
cases, as in Arrow's celebrated theorem on social choice functions [2], the result is
impossibility: there is no solution that simultaneously satisfies a small collection
of simple properties.
In this paper, we develop an axiomatic framework for clustering. First, as is standard, we define a clustering function to be any function f that takes a set S of n
points with pairwise distances between them, and returns a partition of S. (The
points in S are not assumed to belong to any ambient space; the pairwise distances
are the only data one has about them.) We then consider the effect of requiring the clustering function to obey certain natural properties. Our first result is
a basic impossibility theorem: for a set of three simple properties (essentially
scale-invariance, a richness requirement that all partitions be achievable, and a
consistency condition on the shrinking and stretching of individual distances) we
show that there is no clustering function satisfying all three. None of these properties is redundant, in the sense that it is easy to construct clustering functions
satisfying any two of the three. We also show, by way of contrast, that certain
natural relaxations of this set of properties are satisfied by versions of well-known
clustering functions, including those derived from single-linkage and sum-of-pairs.
In particular, we fully characterize the set of possible outputs of a clustering function
that satisfies the scale-invariance and consistency properties.
How should one interpret an impossibility result in this setting? The fact that it
arises directly from three simple constraints suggests a technical underpinning for
the difficulty in unifying the initial, informal concept of "clustering." It indicates a
set of basic trade-offs that are inherent in the clustering problem, and offers a way
to distinguish between clustering methods based not simply on operational grounds,
but on the ways in which they resolve the choices implicit in these trade-offs. Exploring relaxations of the properties helps to sharpen this type of analysis further,
providing a perspective, for example, on the distinction between clustering functions that fix the number of clusters a priori and those that do not; and between
clustering functions that build in a fundamental length scale and those that do not.
Other Axiomatic Approaches. As discussed above, the vast majority of approaches to clustering are derived from the application of specific algorithms, the
optima of specific objective functions, or the consequences of particular probabilistic generative models for the data. Here we briefly review work seeking to examine
properties that do not overtly impose a particular objective function or model.
Jardine and Sibson [7] and Puzicha, Hofmann, and Buhmann [12] have considered
axiomatic approaches to clustering, although they operate in formalisms quite different from ours, and they do not seek impossibility results. Jardine and Sibson are
concerned with hierarchical clustering, where one constructs a tree of nested clusters. They show that a hierarchical version of single-linkage is the unique function
consistent with a collection of properties; however, this is primarily a consequence
of the fact that one of their properties is an implicit optimization criterion that
is uniquely optimized by single-linkage. Puzicha et al. consider properties of cost
functions on partitions; these implicitly define clustering functions through the process of choosing a minimum-cost partition. They investigate a particular class of
clustering functions that arises if one requires the cost function to decompose into
a certain additive form. Recently, Kalai, Papadimitriou, Vempala, and Vetta have
also investigated an axiomatic framework for clustering [8]; like the approach of Jardine and Sibson [7], and in contrast to our work here, they formulate a collection
of properties that are sufficient to uniquely specify a particular clustering function.
Axiomatic approaches have also been applied in areas related to clustering, particularly in collaborative filtering, which harnesses similarities among users to make
recommendations, and in discrete location theory, which focuses on the placement
of ?central? facilities among distributed collections of individuals. For collaborative
filtering, Pennock et al. [11] show how results from social choice theory, including
versions of Arrow's Impossibility Theorem [2], can be applied to characterize recommendation systems satisfying collections of simple properties. In discrete location
theory, Hansen and Roberts [5] prove an impossibility result for choosing a central
facility to serve a set of demands on a graph; essentially, given a certain collection of
required properties, they show that any function that specifies the resulting facility
must be highly sensitive to small changes in the input.
2 The Impossibility Theorem
A clustering function operates on a set S of n ≥ 2 points and the pairwise distances
among them. Since we wish to deal with point sets that do not necessarily belong
to an ambient space, we identify the points with the set S = {1, 2, . . . , n}. We then
define a distance function to be any function d : S × S → R such that for distinct
i, j ∈ S, we have d(i, j) ≥ 0, d(i, j) = 0 if and only if i = j, and d(i, j) = d(j, i). One
can optionally restrict attention to distance functions that are metrics by imposing
the triangle inequality: d(i, k) ≤ d(i, j) + d(j, k) for all i, j, k ∈ S. We will not
require the triangle inequality in the discussion here, but the results to follow,
both negative and positive, still hold if one does require it.
A clustering function is a function f that takes a distance function d on S and
returns a partition Γ of S. The sets in Γ will be called its clusters. We note that, as
written, a clustering function is defined only on point sets of a particular size (n);
however, all the specific clustering functions we consider here will be defined for all
values of n larger than some small base value.
Here is a first property one could require of a clustering function. If d is a distance
function, we write α·d to denote the distance function in which the distance between
i and j is α·d(i, j).
Scale-Invariance. For any distance function d and any α > 0,
we have f(d) = f(α·d).
This is simply the requirement that the clustering function not be sensitive to
changes in the units of distance measurement: it should not have a built-in "length
scale." A second property is that the output of the clustering function should be
"rich": every partition of S is a possible output. To state this more compactly,
let Range(f) denote the set of all partitions Γ such that f(d) = Γ for some distance
function d.
Richness. Range(f ) is equal to the set of all partitions of S.
In other words, suppose we are given the names of the points only (i.e. the indices
in S) but not the distances between them. Richness requires that for any desired
partition Γ, it should be possible to construct a distance function d on S for which
f(d) = Γ.
Finally, we discuss a Consistency property that is more subtle than the first two.
We think of a clustering function as being "consistent" if it exhibits the following
behavior: when we shrink distances between points inside a cluster and expand
distances between points in different clusters, we get the same result. To make this
precise, we introduce the following definition. Let Γ be a partition of S, and d and
d′ two distance functions on S. We say that d′ is a Γ-transformation of d if (a) for
all i, j ∈ S belonging to the same cluster of Γ, we have d′(i, j) ≤ d(i, j); and (b) for
all i, j ∈ S belonging to different clusters of Γ, we have d′(i, j) ≥ d(i, j).
Consistency. Let d and d′ be two distance functions. If f(d) = Γ,
and d′ is a Γ-transformation of d, then f(d′) = Γ.
In other words, suppose that the clustering Γ arises from the distance function d. If
we now produce d′ by reducing distances within the clusters and enlarging distances
between the clusters, then the same clustering Γ should arise from d′.
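On a single instance, Scale-Invariance and Consistency are concrete enough to check mechanically (Richness quantifies over all distance functions, so it has no single-instance test). The sketch below is illustrative and not from the paper; it represents a distance function as a dict over pairs (i, j) and a partition as a frozenset of frozensets, with `f` standing for any candidate clustering function:

```python
def same_cluster(partition, i, j):
    """True if i and j lie in the same cluster of the partition."""
    return any(i in C and j in C for C in partition)

def is_gamma_transformation(partition, d, d2):
    """d2 is a partition-transformation of d: within-cluster distances
    shrink (or stay), between-cluster distances grow (or stay)."""
    for (i, j), dij in d.items():
        if same_cluster(partition, i, j):
            if d2[(i, j)] > dij:
                return False
        elif d2[(i, j)] < dij:
            return False
    return True

def check_scale_invariance(f, d, alpha):
    """Scale-Invariance on one instance: f(d) == f(alpha * d)."""
    scaled = {pair: alpha * dij for pair, dij in d.items()}
    return f(d) == f(scaled)

def check_consistency(f, d, d2):
    """Consistency on one instance: if d2 is an f(d)-transformation of d,
    then f(d2) must equal f(d)."""
    gamma = f(d)
    return (not is_gamma_transformation(gamma, d, d2)) or f(d2) == gamma
```

These checks are necessary conditions only: passing them on particular instances does not certify the property, but a single failure refutes it.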
We can now state the impossibility theorem very simply.
Theorem 2.1 For each n ≥ 2, there is no clustering function f that satisfies Scale-Invariance, Richness, and Consistency.
We will prove Theorem 2.1 in the next section, as a consequence of a more general
statement. Before doing this, we reflect on the relation of these properties to one
another by showing that there exist natural clustering functions satisfying any two
of the three properties.
To do this, we describe the single-linkage procedure (see e.g. [6]), which in fact defines a family of clustering functions. Intuitively, single-linkage operates by initializing each point as its own cluster, and then repeatedly merging the pair of clusters
whose distance to one another (as measured from their closest points of approach)
is minimum. More concretely, single-linkage constructs a weighted complete graph
Gd whose node set is S and for which the weight on edge (i, j) is d(i, j). It then
orders the edges of Gd by non-decreasing weight (breaking ties lexicographically),
and adds edges one at a time until a specified stopping condition is reached. Let
Hd denote the subgraph consisting of all edges that are added before the stopping
condition is reached; the connected components of Hd are the clusters.
Thus, by choosing a stopping condition for the single-linkage procedure, one obtains
a clustering function, which maps the input distance function to the set of connected
components that results at the end of the procedure. We now show that for any
two of the three properties in Theorem 2.1, one can choose a single-linkage stopping
condition so that the resulting clustering function satisfies these two properties.
Here are the three types of stopping conditions we will consider.
• k-cluster stopping condition. Stop adding edges when the subgraph first
consists of k connected components. (We will only consider this condition
to be well-defined when the number of points is at least k.)
• distance-r stopping condition. Only add edges of weight at most r.
• scale-α stopping condition. Let ρ* denote the maximum pairwise distance,
i.e. ρ* = max_{i,j} d(i, j). Only add edges of weight at most α·ρ*.
It is clear that these various stopping conditions qualitatively trade off certain of
the properties in Theorem 2.1. Thus, for example, the k-cluster stopping condition
does not attempt to produce all possible partitions, while the distance-r stopping
condition builds in a fundamental length scale, and hence is not scale-invariant.
However, by the appropriate choice of one of these stopping conditions, one can
achieve any two of the three properties in Theorem 2.1.
Theorem 2.2 (a) For any k ≥ 1, and any n ≥ k, single-linkage with the k-cluster
stopping condition satisfies Scale-Invariance and Consistency.
(b) For any positive α < 1, and any n ≥ 3, single-linkage with the scale-α stopping
condition satisfies Scale-Invariance and Richness.
(c) For any r > 0, and any n ≥ 2, single-linkage with the distance-r stopping
condition satisfies Richness and Consistency.
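The single-linkage procedure and the three stopping conditions of Theorem 2.2 can be sketched as follows. This is an illustrative implementation (our own names and closures); ties between equal-weight edges are broken by sort order rather than lexicographically, and points are assumed to be hashable labels:

```python
def single_linkage(points, d, stop):
    """Single-linkage via union-find: process edges in non-decreasing weight
    order; stop(ncomp, w) decides, given the current number of components and
    the next edge weight, whether to halt. Returns the clusters as the
    connected components of the subgraph built so far."""
    parent = {p: p for p in points}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    ncomp = len(points)
    for (i, j), w in sorted(d.items(), key=lambda e: e[1]):
        if stop(ncomp, w):
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            ncomp -= 1

    clusters = {}
    for p in points:
        clusters.setdefault(find(p), set()).add(p)
    return list(clusters.values())

# The three stopping conditions:
def k_cluster(k):
    return lambda ncomp, w: ncomp <= k          # halt once k components remain

def distance_r(r):
    return lambda ncomp, w: w > r               # only add edges of weight <= r

def scale_alpha(alpha, d):
    rho = max(d.values())                       # maximum pairwise distance
    return lambda ncomp, w: w > alpha * rho     # only add edges <= alpha * rho
```

For example, `single_linkage(points, d, k_cluster(2))` returns a two-cluster partition whenever the point set has at least two points.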
3 Antichains of Partitions
We now state and prove a strengthening of the impossibility result. We say that
a partition Γ′ is a refinement of a partition Γ if for every set C′ ∈ Γ′, there is a
set C ∈ Γ such that C′ ⊆ C. We define a partial order on the set of all partitions
by writing Γ′ ⪯ Γ if Γ′ is a refinement of Γ. Following the terminology of partially
ordered sets, we say that a collection of partitions is an antichain if it does not
contain two distinct partitions such that one is a refinement of the other.
For a set of n ≥ 2 points, the collection of all partitions does not form an antichain;
thus, Theorem 2.1 follows from
Theorem 3.1 If a clustering function f satisfies Scale-Invariance and Consistency,
then Range(f) is an antichain.
Proof. For a partition Γ, we say that a distance function d (a, b)-conforms to Γ if,
for all pairs of points i, j that belong to the same cluster of Γ, we have d(i, j) ≤ a,
while for all pairs of points i, j that belong to different clusters, we have d(i, j) ≥ b.
With respect to a given clustering function f, we say that a pair of positive real
numbers (a, b) is Γ-forcing if, for all distance functions d that (a, b)-conform to Γ,
we have f(d) = Γ.
Let f be a clustering function that satisfies Consistency. We claim that for any
partition Γ ∈ Range(f), there exist positive real numbers a < b such that the pair
(a, b) is Γ-forcing. To see this, we first note that since Γ ∈ Range(f), there exists a
distance function d such that f(d) = Γ. Now, let a′ be the minimum distance among
pairs of points in the same cluster of Γ, and let b′ be the maximum distance among
pairs of points that do not belong to the same cluster of Γ. Choose numbers a < b
so that a ≤ a′ and b ≥ b′. Clearly any distance function d′ that (a, b)-conforms to
Γ must be a Γ-transformation of d, and so by the Consistency property, f(d′) = Γ.
It follows that the pair (a, b) is Γ-forcing.
Now suppose further that the clustering function f satisfies Scale-Invariance, and
that there exist distinct partitions Γ0, Γ1 ∈ Range(f) such that Γ0 is a refinement
of Γ1. We show how this leads to a contradiction.
Let (a0, b0) be a Γ0-forcing pair, and let (a1, b1) be a Γ1-forcing pair, where a0 < b0
and a1 < b1; the existence of such pairs follows from our claim above. Let a2 be any
number less than or equal to a1, and choose ε so that 0 < ε < a0a2/b0. It is now
straightforward to construct a distance function d with the following properties:
for pairs of points i, j that belong to the same cluster of Γ0, we have d(i, j) ≤ ε; for
pairs i, j that belong to the same cluster of Γ1 but not to the same cluster of Γ0,
we have a2 ≤ d(i, j) ≤ a1; and for pairs i, j that do not belong to the same cluster
of Γ1, we have d(i, j) ≥ b1.
The distance function d (a1, b1)-conforms to Γ1, and so we have f(d) = Γ1. Now set
λ = b0/a2, and define d′ = λ·d. By Scale-Invariance, we must have f(d′) = f(d) = Γ1.
But for points i, j in the same cluster of Γ0 we have d′(i, j) ≤ ε·b0/a2 < a0, while for
points i, j that do not belong to the same cluster of Γ0 we have d′(i, j) ≥ a2·b0/a2 ≥ b0.
Thus d′ (a0, b0)-conforms to Γ0, and so we must have f(d′) = Γ0. As Γ0 ≠ Γ1, this is
a contradiction.
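The construction in the second half of the proof can be checked on concrete numbers. The following sketch uses hypothetical values throughout (Γ0, Γ1, and the forcing pairs are simply assumed, since f itself is abstract) and verifies that d (a1, b1)-conforms to Γ1 while the scaled d′ = λ·d (a0, b0)-conforms to Γ0:

```python
def conforms(partition, d, a, b):
    """d (a, b)-conforms to the partition: within-cluster distances are <= a
    and between-cluster distances are >= b."""
    def same(i, j):
        return any(i in C and j in C for C in partition)
    return all(dij <= a if same(i, j) else dij >= b
               for (i, j), dij in d.items())

# Gamma0 refines Gamma1; (a0, b0) and (a1, b1) play the role of the assumed
# forcing pairs (hypothetical values).
gamma1 = [{0, 1}, {2, 3}]
gamma0 = [{0}, {1}, {2, 3}]
a0, b0, a1, b1 = 1.0, 10.0, 2.0, 20.0
a2 = a1                          # any a2 <= a1
eps = 0.1                        # 0 < eps < a0 * a2 / b0 = 0.2

d = {(2, 3): eps,                # same cluster of Gamma0
     (0, 1): a2,                 # same cluster of Gamma1 but not of Gamma0
     (0, 2): b1, (0, 3): b1, (1, 2): b1, (1, 3): b1}

lam = b0 / a2                    # the scaling factor lambda = b0 / a2
d_scaled = {pair: lam * dij for pair, dij in d.items()}

assert conforms(gamma1, d, a1, b1)          # forces f(d) = Gamma1
assert conforms(gamma0, d_scaled, a0, b0)   # forces f(lam * d) = Gamma0
```

Since a Γ1-forcing pair would give f(d) = Γ1 while Scale-Invariance and the Γ0-forcing pair would give f(λ·d) = Γ0 ≠ Γ1, the two asserts together exhibit the contradiction numerically.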
The proof above uses our assumption that the clustering function f is defined on
the set of all distance functions on n points. However, essentially the same proof
yields a corresponding impossibility result for clustering functions f that are defined
only on metrics, or only on distance functions arising from n points in a Euclidean
space of some dimension. To adapt the proof, one need only be careful to choose
the constant a2 and distance function d to satisfy the required properties.
We now prove a complementary positive result; together with Theorem 3.1, this
completely characterizes the possible values of Range(f ) for clustering functions f
that satisfy Scale-Invariance and Consistency.
Theorem 3.2 For every antichain of partitions A, there is a clustering function f
satisfying Scale-Invariance and Consistency for which Range(f ) = A.
Proof. Given an arbitrary antichain A, it is not clear how to produce a stopping
condition for the single-linkage procedure that gives rise to a clustering function f
with Range(f ) = A. (Note that the k-cluster stopping condition yields a clustering
function whose range is the antichain consisting of all partitions into k sets.) Thus,
to prove this result, we use a variant of the sum-of-pairs clustering function (see
e.g. [3]), adapted to general antichains. We focus on the case in which |A| > 1,
since the case of |A| = 1 is trivial.
For a partition Γ ∈ A, we write (i, j) ∈ Γ if both i and j belong to the same cluster in
Γ. The A-sum-of-pairs function f seeks the partition Γ ∈ A that minimizes the sum
of all distances between pairs of points in the same cluster; in other words, it seeks
the Γ ∈ A minimizing the objective function Φd(Γ) = Σ_{(i,j)∈Γ} d(i, j). (Ties are
broken lexicographically.) It is crucial that the minimization is only over partitions
in A; clearly, if we wished to minimize this objective function over all partitions,
we would choose the partition in which each point forms its own cluster.
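A brute-force sketch of the A-sum-of-pairs function (illustrative; distances are a dict over pairs (i, j) with i < j, and ties are broken by list position as a stand-in for the lexicographic rule):

```python
from itertools import combinations

def sum_of_pairs_cost(partition, d):
    """Phi_d: total distance over pairs lying in the same cluster."""
    return sum(d[(i, j)]
               for C in partition
               for i, j in combinations(sorted(C), 2))

def a_sum_of_pairs(antichain, d):
    """Return the partition in the given collection minimizing Phi_d."""
    return min(antichain, key=lambda p: sum_of_pairs_cost(p, d))
```

Note the restriction to the supplied collection is essential, exactly as the text says: minimized over all partitions, the cost is driven to zero by the all-singletons partition.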
It is clear that f satisfies Scale-Invariance, since Φ_{α·d}(Γ) = α·Φd(Γ) for any partition
Γ. By definition we have Range(f) ⊆ A, and we argue that Range(f) ⊇ A as
follows. For any partition Γ ∈ A, construct a distance function d with the following
properties: d(i, j) < n⁻³ for every pair of points i, j belonging to the same cluster
of Γ, and d(i, j) ≥ 1 for every pair of points i, j belonging to different clusters of
Γ. We have Φd(Γ) < 1; and moreover Φd(Γ′) < 1 only for partitions Γ′ that are
refinements of Γ. Since A is an antichain, it follows that Γ must minimize Φd over
all partitions in A, and hence f(d) = Γ.
It remains only to verify Consistency. Suppose that for the distance function d,
we have f(d) = Γ; and let d′ be a Γ-transformation of d. For any partition Γ′, let
Δ(Γ′) = Φd(Γ′) − Φd′(Γ′). It is enough to show that for any partition Γ′ ∈ A, we
have Δ(Γ) ≥ Δ(Γ′). But this follows simply because Δ(Γ) = Σ_{(i,j)∈Γ} [d(i, j) − d′(i, j)],
while

    Δ(Γ′) = Σ_{(i,j)∈Γ′} [d(i, j) − d′(i, j)] ≤ Σ_{(i,j)∈Γ′ and (i,j)∈Γ} [d(i, j) − d′(i, j)] ≤ Δ(Γ),

where both inequalities follow because d′ is a Γ-transformation of d: first, only
terms corresponding to pairs in the same cluster of Γ are non-negative; and second,
every term corresponding to a pair in the same cluster of Γ is non-negative.
4 Centroid-Based Clustering and Consistency
In a widely-used approach to clustering, one selects k of the input points as centroids,
and then defines clusters by assigning each point in S to its nearest centroid. The
goal, intuitively, is to choose the centroids so that each point in S is close to at least
one of them. This overall approach arises both from combinatorial optimization
perspectives, where it has roots in facility location problems [9], and in
maximum-likelihood methods, where the centroids may represent centers of probability
density functions [4, 6]. We show here that for a fairly general class of centroid-based
clustering functions, including k-means and k-median, none of the functions in the
class satisfies the Consistency property. This suggests an interesting tension between
Consistency and the centroid-based approach to clustering, and forms a contrast
with the results for single-linkage and sum-of-pairs in previous sections.
Specifically, for any natural number k ≥ 2, and any continuous, non-decreasing,
and unbounded function g : R⁺ → R⁺, we define the (k, g)-centroid clustering
function as follows. First, we choose the set of k "centroid" points T ⊆ S for
which the objective function Φd^g(T) = Σ_{i∈S} g(d(i, T)) is minimized. (Here
d(i, T) = min_{j∈T} d(i, j).) Then we define a partition of S into k clusters by
assigning each point to the element of T closest to it. The k-median function [9] is
obtained by setting g to be the identity function, while the objective function
underlying k-means clustering [4, 6] is obtained by setting g(d) = d².
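A brute-force sketch of the (k, g)-centroid function (illustrative and exponential in k; distances are a dict over pairs (i, j) with i < j, with d(i, i) = 0 implicit):

```python
from itertools import combinations

def kg_centroid(points, d, k, g):
    """Brute-force (k, g)-centroid clustering: pick the k centroids T
    minimizing sum_i g(d(i, T)), then assign each point to its nearest
    centroid."""
    def dist(i, j):
        return 0.0 if i == j else d[(min(i, j), max(i, j))]

    def cost(T):
        return sum(g(min(dist(i, t) for t in T)) for i in points)

    T = min(combinations(points, k), key=cost)
    clusters = {t: set() for t in T}
    for i in points:
        clusters[min(T, key=lambda t: dist(i, t))].add(i)
    return list(clusters.values())
```

Calling it with `g = lambda x: x` gives the k-median objective and with `g = lambda x: x * x` the k-means-style objective described above.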
Theorem 4.1 For every k ≥ 2 and every function g chosen as above, and for n
sufficiently large relative to k, the (k, g)-centroid clustering function does not satisfy
the Consistency property.
Proof Sketch. We describe the proof for k = 2 clusters; the case of k > 2 is similar.
We consider a set of points S that is divided into two subsets: a set X consisting
of m points, and a set Y consisting of εm points, for a small number ε > 0. The
distance between points in X is r, the distance between points in Y is δ < r, and
the distance from a point in X to a point in Y is r + δ′, for a small number δ′ > 0.
By choosing ε, r, δ, and δ′ appropriately, the optimal choice of k = 2 centroids
will consist of one point from X and one from Y, and the resulting partition Γ will
have clusters X and Y. Now, suppose we divide X into sets X0 and X1 of equal
size, and reduce the distances between points in the same Xi to be r′ < r (keeping
all other distances the same). This can be done, for r′ small enough, so that the
optimal choice of two centroids will now consist of one point from each Xi, yielding
a different partition of S. As our second distance function is a Γ-transformation of
the first, this violates Consistency.
5 Relaxing the Properties
In addition to looking for clustering functions that satisfy subsets of the basic properties, we can also study the effect of relaxing the properties themselves. Theorem 3.2 is a step in this direction, showing that the sum-of-pairs function satisfies
Scale-Invariance and Consistency, together with a relaxation of the Richness property. As another example, it is interesting to note that single-linkage with the
distance-r stopping condition satisfies a natural relaxation of Scale-Invariance: if
α > 1, then f(α·d) is a refinement of f(d).
We now consider some relaxations of Consistency. Let f be a clustering function,
and d a distance function such that f(d) = Γ. If we reduce distances within clusters
and expand distances between clusters, Consistency requires that f output the same
partition Γ. But one could imagine requiring something less: perhaps changing
distances this way should be allowed to create additional sub-structure, leading to
a new partition in which each cluster is a subset of one of the original clusters. Thus,
we can define Refinement-Consistency, a relaxation of Consistency, to require that
if d′ is an f(d)-transformation of d, then f(d′) should be a refinement of f(d).
We can show that the natural analogue of Theorem 2.1 still holds: there is no clustering function that satisfies Scale-Invariance, Richness, and Refinement-Consistency.
However, there is a crucial sense in which this result "just barely" holds, rendering
it arguably less interesting to us here. Specifically, let Γ̂n denote the partition of
S = {1, 2, . . . , n} in which each individual element forms its own cluster. Then there
exist clustering functions f that satisfy Scale-Invariance and Refinement-Consistency,
and for which Range(f) consists of all partitions except
Γ̂n. (One example is single-linkage with the distance-(αδ) stopping condition, where
δ = min_{i,j} d(i, j) is the minimum inter-point distance, and α ≥ 1.) Such functions f, in addition to Scale-Invariance and Refinement-Consistency, thus satisfy a
kind of Near-Richness property: one can obtain every partition as output except
for a single, trivial partition. It is in this sense that our impossibility result for
Refinement-Consistency, unlike Theorem 2.1, is quite "brittle."
To relax Consistency even further, we could say simply that if d′ is an f(d)-transformation of d, then one of f(d) or f(d′) should be a refinement of the other.
In other words, f(d′) may be either a refinement or a "coarsening" of f(d). It is
possible to construct clustering functions f that satisfy this even weaker variant of
Consistency, together with Scale-Invariance and Richness.
Acknowledgements. I thank Shai Ben-David, John Hopcroft, and Lillian Lee for
valuable discussions on this topic. This research was supported in part by a David
and Lucile Packard Foundation Fellowship, an ONR Young Investigator Award, an
NSF Faculty Early Career Development Award, and NSF ITR Grant IIS-0081334.
References
[1] M. Anderberg, Cluster Analysis for Applications, Academic Press, 1973.
[2] K. Arrow, Social Choice and Individual Values, Wiley, New York, 1951.
[3] M. Bern, D. Eppstein, "Approximation algorithms for geometric problems," in Approximation Algorithms for NP-Hard Problems (D. Hochbaum, Ed.), PWS Publishing, 1996.
[4] R. Duda, P. Hart, D. Stork, Pattern Classification (2nd edition), Wiley, 2001.
[5] P. Hansen, F. Roberts, "An impossibility result in axiomatic location theory," Mathematics of Operations Research 21 (1996).
[6] A. Jain, R. Dubes, Algorithms for Clustering Data, Prentice-Hall, 1981.
[7] N. Jardine, R. Sibson, Mathematical Taxonomy, Wiley, 1971.
[8] A. Kalai, C. Papadimitriou, S. Vempala, A. Vetta, personal communication, June 2002.
[9] P. Mirchandani, R. Francis, Discrete Location Theory, Wiley, 1990.
[10] M. Osborne, A. Rubinstein, A Course in Game Theory, MIT Press, 1994.
[11] D. Pennock, E. Horvitz, C.L. Giles, "Social choice theory and recommender systems: Analysis of the axiomatic foundations of collaborative filtering," Proc. 17th AAAI, 2000.
[12] J. Puzicha, T. Hofmann, J. Buhmann, "A Theory of Proximity Based Clustering: Structure Detection by Optimization," Pattern Recognition, 33 (2000).
Location Estimation with a Differential Update
Network
Ali Rahimi and Trevor Darrell
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{ali,trevor}@mit.edu
Abstract
Given a set of hidden variables with an a-priori Markov structure, we
derive an online algorithm which approximately updates the posterior as
pairwise measurements between the hidden variables become available.
The update is performed using Assumed Density Filtering: to incorporate
each pairwise measurement, we compute the optimal Markov structure
which represents the true posterior and use it as a prior for incorporating
the next measurement. We demonstrate the resulting algorithm by calculating globally consistent trajectories of a robot as it navigates along a
2D trajectory. To update a trajectory of length t, the update takes O(t).
When all conditional distributions are linear-Gaussian, the algorithm can
be thought of as a Kalman Filter which simplifies the state covariance
matrix after incorporating each measurement.
1 Introduction
Consider a hidden Markov chain. Given a sequence of pairwise measurements between the
elements of the chain (for example, their differences, corrupted by noise) we are asked to
refine our estimate of their values online, as these pairwise measurements become available.
We propose the Differential Update Network as a mechanism for solving this problem. We
use this mechanism to recover the trajectory of a robot given noisy measurements of its
movement between points in its trajecotry. These pairwise displacements are thought of as
noise corrupted measurements between the true but unknown poses to be recovered. The
recovered trajectories are consistent in the sense that when the camera returns to an already
visited position, its estimated pose is consistent with the pose recovered on the earlier visit.
Pose change measurements between two points on the trajectory are obtained by bringing
images of the environment acquired at each pose into registration with each other. The
required transformation to affect the registration is the pose change measurement. There
is a rich literature on computing pose changes from a pair of scans from an optical sensor:
2D [5, 6] and 3D transformations [7, 8, 9] from monocular cameras, or 3D transformations
from range imagery [10, 11, 12] are a few examples. These have been used by [1, 2] in 3D
model acquisition and by [3, 4] in robot navigation. The trajectory of the robot is defined as
the unknown pose from which each frame was acquired, and is maintained in a state vector
which is updated as pose changes are measured.
Figure 1: Independence structure of a differential update network.
An alternative method estimates the pose of the robot with respect to fixed features in the
world. These methods represent the world as a set of features, such as corners, lines, and
other geometric shapes in 3D [13, 14, 15] and match features between a scan at the current
pose and the acquired world representation. However, measurements are still pairwise,
since they depend on a feature and the poses of the camera. Because both the feature list
and the poses are maintained in the state vector, the differential Update Framework can be
applied to both scan-based methods and feature-based methods.
Our algorithm incorporates each pose change measurement by updating the pose associated
with every frame encountered. To insure that each update can happen in time linear to the
length of the trajectory, the correlation structure of the state vector is approximated with
a simpler Markov chain after each measurement. This scheme can be thought of as an
instance of Assumed Density Filtering (ADF) [16, 17].
The Differential Update Network presented here assumes a linear Gaussian system, but
our derivation is general and can accommodate any distribution. For example, we are
currently experimenting with discrete distributions. In addition, we focus on frame-based
trajectory estimation due to the ready availability of pose change estimators, and to avoid
the complexity of maintaining an explicit feature map.
The following section describes the model in a Bayesian framework. Sections 3 and 4
sketch existing batch and online methods for obtaining globally consistent trajectories. Section 5 derives the update rules for our algorithm, which is then applied to a 2D trajectory
estimation in section 6.
2 Dynamics and Measurement Models
Figure 1 depicts the network. We assume the hidden variables x_t have a Markov structure
with known transition densities:

    p(X) = ∏_{t=1}^{T} p(x_t | x_{t−1}).

Pairwise measurements appear on the chain one by one. Conditioned on the hidden variables, these measurements are assumed to be independent:

    p(Y|X) = ∏_{(s,t)∈M} p(y_s^t | x_s, x_t),

where M is the set of pairs of hidden variables which have been measured.
To apply this network to robot localization, let X = {x_t}_{t=1..T} be the trajectory of the
robot up to time T, with each x_t denoting its pose at time t. These poses can be represented
using any parametrization of pose, for example as 3D rotations and translations, 2D
translations (which is what we use in section 6), or even non-rigid deformations such as
affine. The conditional distribution between adjacent x's is assumed to follow:

    p(x_{t+1} | x_t) = N(x_{t+1} | x_t, Λ_{x|x}).    (1)

As the robot moves, the pose change estimator computes the motion y_s^t of the robot from
two scans of the environment. Given the true poses, we assume that these measurements
are independent of each other even when they share a common scan. We model each y_s^t as
being drawn from a Gaussian centered around x_t − x_s:

    p(y_s^t | x_s, x_t) = N(y_s^t | x_t − x_s, Λ_{y|xx}).    (2)

The online global estimation problem requires us to update p(X|Y) as each y_s^t in Y
becomes available. The following section reviews a batch solution for computing p(X|Y)
using this model. Section 4 discusses a recursive approach with a similar running time as
the batch version. Section 5 presents our approach, which performs these updates much
faster by simplifying the output of the recursive solution after incorporating each measurement.
3 Batch Linear Gaussian Solution
Equation (1) dictates a Gaussian prior p(X) with mean m_X and covariance Σ_X. Because
the pose dynamics are Markovian, the inverse covariance Σ_X^{-1} is tri-diagonal. According
to equation (2), the observations are drawn from y_st = A_{s,t} X + ε_{s,t} = x_t − x_s + ε_{s,t},
with ε_{s,t} white and Gaussian with covariance Σ_{s,t}. Stacking up the A_{s,t} and Σ_{s,t} into A
and Σ_{Y|X} respectively, we know that the posterior mean and covariance of X|Y are [21]:

m_{X|Y} = m_X + Σ_X A^⊤ (A Σ_X A^⊤ + Σ_{Y|X})^{-1} (Y − A m_X)    (3)

Σ_{X|Y} = Σ_X − Σ_X A^⊤ (A Σ_X A^⊤ + Σ_{Y|X})^{-1} A Σ_X,    (4)
or alternatively,

Σ_{X|Y}^{-1} = Σ_X^{-1} + A^⊤ Σ_{Y|X}^{-1} A    (5)

m_{X|Y} = Σ_{X|Y} (Σ_X^{-1} m_X + A^⊤ Σ_{Y|X}^{-1} Y).    (6)
If there are M measurements and T hidden variables, this computation will take O(T^2 M)
if performed naively. Note that if M > T, as is the case in the robot mapping problem, the
alternate equations (5) and (6) can be used to obtain a running time of O(T^3).
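As a concrete check of the two batch forms, here is a small numerical sketch (our own illustration, not code from the paper): it builds a toy 1-D pose chain with a Brownian-motion prior, stacks a few pairwise difference measurements y_st = x_t − x_s + noise, and verifies that the covariance form (3)-(4) and the information form (5)-(6) give the same Gaussian posterior. All names and parameter values (T, pairs, sig_x, sig_y) are illustrative assumptions.

```python
import numpy as np

# Toy 1-D pose chain: x_t - x_{t-1} = w_t, i.e. L x = w with L lower-bidiagonal,
# so Sigma_X = L^-1 (sig_x^2 I) L^-T and Sigma_X^-1 = L^T L / sig_x^2 is tri-diagonal.
rng = np.random.default_rng(0)
T = 6                                      # number of poses
pairs = [(0, 2), (1, 4), (0, 5), (3, 5)]   # measured (s, t) pairs
sig_x, sig_y = 1.0, 0.1                    # dynamics / measurement noise std

L = np.eye(T) - np.eye(T, k=-1)
Linv = np.linalg.inv(L)
Sigma_X = Linv @ (sig_x**2 * np.eye(T)) @ Linv.T
m_X = np.zeros(T)

# Stack the measurement rows A_{s,t}: +1 at position t, -1 at position s.
A = np.zeros((len(pairs), T))
for i, (s, t) in enumerate(pairs):
    A[i, t], A[i, s] = 1.0, -1.0
Sigma_YX = sig_y**2 * np.eye(len(pairs))
x_true = np.cumsum(rng.normal(0.0, sig_x, T))
Y = A @ x_true + rng.normal(0.0, sig_y, len(pairs))

# Covariance form, equations (3)-(4).
G = Sigma_X @ A.T @ np.linalg.inv(A @ Sigma_X @ A.T + Sigma_YX)
m_cov = m_X + G @ (Y - A @ m_X)
S_cov = Sigma_X - G @ A @ Sigma_X

# Information form, equations (5)-(6).
J = np.linalg.inv(Sigma_X) + A.T @ np.linalg.inv(Sigma_YX) @ A
S_inf = np.linalg.inv(J)
m_inf = S_inf @ (np.linalg.inv(Sigma_X) @ m_X + A.T @ np.linalg.inv(Sigma_YX) @ Y)

assert np.allclose(m_cov, m_inf) and np.allclose(S_cov, S_inf)
```

The two forms agree by the matrix inversion lemma; the information form is the one whose sparsity the later sections exploit.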
4 Online Linear Gaussian Solution
Lu and Milios [3] proposed a recursive update for updating the trajectory X|Y_old after
obtaining a new measurement y_st. Because each measurement is independent of past
measurements given the X's, the update is (by Bayes' rule):

p(X | Y_old, y_st) ∝ p(y_st | X) p(X | Y_old).    (7)

Using equations (3) and (4) to perform this update for one y_st takes O(T^2). After integrating
M measurements, this yields the same final cost as the batch update.
One way to lower this cost is to reduce the number of hidden variables x_t by fixing some
of them, thus reducing T [23]. It is also possible to take advantage of the sparseness of the
covariance structure of X|Y_old by using the updates (6) and (5):

Σ_{X|new}^{-1} m_{X|new} = Σ_{X|old}^{-1} m_{X|old} + A_{s,t}^⊤ Σ_{y_st|old}^{-1} y_st    (8)

Σ_{X|new}^{-1} = Σ_{X|old}^{-1} + A_{s,t}^⊤ Σ_{y_st|old}^{-1} A_{s,t}    (9)
Figure 2: The measurement (left) correlates the hidden variables (middle), whose correlation is then
simplified (right), and is ready to accept a new measurement.
Because Σ_{X|new}^{-1} has a sparse structure (see equation (9)), m_{X|new} can be found using a
sparse linear system solver [23]. Unfortunately, as measurements are incorporated, Σ_{X|new}^{-1}
becomes denser due to the accumulation of the rank-1 terms in equation (9), rendering this
approach less effective.
In the linear Gaussian case, the Differential Update Network addresses this problem by
projecting Σ_{X|new} on the closest covariance matrix which has a tri-diagonal inverse. Hence,
in solving (8), Σ_{X|new}^{-1} is always tri-diagonal, so m_{X|new} is easy to compute.
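The recursive information-form updates can be sketched numerically as well (again our own toy with assumed parameters, not the paper's code): incorporating measurements one at a time via (8)-(9) accumulates rank-1 terms in the information matrix and reproduces the batch posterior.

```python
import numpy as np

# Toy chain with a Brownian-motion prior; pairwise measurements incorporated
# one at a time using the information-form updates (8)-(9).
rng = np.random.default_rng(1)
T, sig_x, sig_y = 5, 1.0, 0.2
pairs = [(0, 2), (1, 3), (0, 4)]
x_true = np.cumsum(rng.normal(0.0, sig_x, T))

A = np.zeros((len(pairs), T))
for i, (s, t) in enumerate(pairs):
    A[i, t], A[i, s] = 1.0, -1.0
Y = A @ x_true + rng.normal(0.0, sig_y, len(pairs))

Lmat = np.eye(T) - np.eye(T, k=-1)
J0 = Lmat.T @ Lmat / sig_x**2          # tri-diagonal prior information matrix

# One-at-a-time incorporation.
J, h = J0.copy(), np.zeros(T)          # h is the information vector (prior mean 0)
for i in range(len(pairs)):
    a = A[i]
    h = h + a * Y[i] / sig_y**2        # equation (8): information-vector update
    J = J + np.outer(a, a) / sig_y**2  # equation (9): a rank-1 information update
m_recursive = np.linalg.solve(J, h)

# Batch information form (equations (5)-(6)) for comparison.
J_batch = J0 + A.T @ A / sig_y**2
m_batch = np.linalg.solve(J_batch, A.T @ Y / sig_y**2)
assert np.allclose(J, J_batch)
assert np.allclose(m_recursive, m_batch)
```

Each incorporation touches only a rank-1 term, which is what makes the sparse information form attractive before the fill-in discussed above sets in.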
5 Approximate Online Solution
To implement this idea in the general case, we resort to Assumed Density Filtering (ADF)
[16]: we approximate p(X|Y_old) with a simpler distribution q(X|Y_old). To incorporate a
new measurement y_st, we apply the update

p(X|Y_new) ∝ p(y_st | x_s, x_t) q(X|Y_old).    (10)

This new p(X|Y_new) has a more complicated independence structure than q(X|Y_old), so
incorporating subsequent measurements would require more work and the resulting
posterior would be even hairier. So we approximate it again with a q(X|Y_new) that has a simpler
independence structure. Subsequent measurements can again be incorporated easily using
this new q. Specifically, we force q to always obey Markovian independence. Figure 5
summarizes this process.
The following section discusses how to find a Markovian q so as to minimize the KL divergence between p and q. Section 5.2 shows how to incorporate a pairwise measurement on
the resulting Markov chain using equation (10).
5.1 Simplifying the independence structure
We would like to approximate an arbitrary distribution which factors according to
p(X) = ∏_t p_t(x_t | Pa[x_t]), using one which factors into q(X) = ∏_t q_t(x_t | Qa[x_t]). Here, Pa[x_t]
are the parents of node x_t in the graph prescribed by p(X), and Qa[x_t] = x_{t-1} are the
parents of node x_t as prescribed by q(X).
The objective is to minimize:

q* = arg min_q KL( ∏_t p_t ‖ ∏_t q_t ) = arg min_q ∫_X p(X) ln [ p(X) / ∏_t q_t(x_t | Qa[x_t]) ].    (11)

After some manipulation, it can be shown that:

q_t* = p(x_t | Qa[x_t]).    (12)

This says that the best conditional q_t is built up from the corresponding p_t by marginalizing
out the conditions that were removed in the graph. This is not an easy operation to perform
in general, but the following section shows how to do it in our case.
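Equation (12) can be checked numerically on a tiny discrete example (our own construction, with binary variables and illustrative names): building the chain q from the marginal conditionals of p achieves a KL divergence no worse than any other Markov-chain factorization.

```python
import numpy as np
from itertools import product

# Arbitrary joint p(x0, x1, x2) over binary variables, and Markov-chain
# approximations q(x0, x1, x2) = q0[x0] q1[x0, x1] q2[x1, x2].
rng = np.random.default_rng(2)
p = rng.random((2, 2, 2))
p /= p.sum()

def kl_to_chain(q0, q1, q2):
    """KL(p || q) for the chain factorization q0[x0] q1[x0,x1] q2[x1,x2]."""
    kl = 0.0
    for a, b, c in product(range(2), repeat=3):
        qv = q0[a] * q1[a, b] * q2[b, c]
        kl += p[a, b, c] * np.log(p[a, b, c] / qv)
    return kl

# Optimal chain from the conditionals of p, per equation (12).
p0 = p.sum(axis=(1, 2))                          # p(x0)
p01 = p.sum(axis=2)                              # p(x0, x1)
p12 = p.sum(axis=0)                              # p(x1, x2)
q0 = p0
q1 = p01 / p01.sum(axis=1, keepdims=True)        # p(x1 | x0)
q2 = p12 / p12.sum(axis=1, keepdims=True)        # p(x2 | x1)
best = kl_to_chain(q0, q1, q2)

# Any other Markov chain should do no better.
for _ in range(200):
    r0 = rng.random(2); r0 /= r0.sum()
    r1 = rng.random((2, 2)); r1 /= r1.sum(axis=1, keepdims=True)
    r2 = rng.random((2, 2)); r2 /= r2.sum(axis=1, keepdims=True)
    assert best <= kl_to_chain(r0, r1, r2) + 1e-12
```

The optimum decouples term by term: each expected log-conditional in (11) is maximized independently by the corresponding conditional of p, which is exactly the content of (12).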
5.2 Computing posterior transitions on a graph with a single loop
This result suggests a simplification to the update of equation (10). Because the ultimate
goal is to compute q(X|Y_new), not p(X|Y_new), we only need to compute the posterior
transitions p(x_t | x_{t-1}, Y_new). Thus, we circumvent having to first find p then project it
onto q. We propose computing these transitions in three steps, one for the transitions to the
left of x_s, another for the loop, and the third for transitions to the right of x_t.
5.2.1 Finding p(x_τ | x_{τ-1}, y) for τ = s..t
For every s < τ < t, notice that

p(y, x_{τ-1}, x_t) p(x_τ | x_{τ-1}, x_t) = p(y, x_{τ-1}, x_τ, x_t),    (13)

because according to figure 5, p(x_τ | x_{τ-1}, x_t) = p(x_τ | x_{τ-1}, x_t, y). If we could find this
joint distribution for all τ, we could find p(x_τ | x_{τ-1}, y) by marginalizing out x_t and
normalizing. We could also find p(x_τ | y) by marginalizing out both x_t and x_{τ-1}, then normalizing.
Finally, we could compute p(y, x_τ, x_t) for the next τ in the iteration.
So there are two missing pieces: The first is p(y, x_s, x_t) for starting the recursion.
Computing this term is easy, because p(y | x_s, x_t) is the given measurement model, and p(x_s, x_t)
can be obtained easily from the prior by successively applying the total probability theorem.
The second missing piece is p(x_τ | x_{τ-1}, x_t). Note that this quantity does not depend on the
measurements and could be computed offline if we wanted to. The recursion for calculating
it is:

p(x_τ | x_{τ-1}, x_t) ∝ p(x_t | x_τ) p(x_τ | x_{τ-1})    (14)

p(x_t | x_τ) = ∫ dx_{τ+1} p(x_t | x_{τ+1}) p(x_{τ+1} | x_τ)    (15)

The second equation describes a recursion which starts from t and goes down to s. It
computes the influence of node τ on node t. Equation (14) is coupled to this equation
and uses its output. It involves applying Bayes rule to compute a function of 3 variables.
Because of the backward nature of (15), p(x_τ | x_{τ-1}, x_t) has to be computed using a pass
which runs in the opposite direction of the process of (13).
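The recursion (14)-(15) can be sketched on a discrete chain (our own toy, with K states and an assumed transition matrix P) and checked against brute-force enumeration of the joint:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
K, T = 3, 5                                  # 3 states, chain x_0..x_4
P = rng.random((K, K))
P /= P.sum(axis=1, keepdims=True)            # P[i, j] = p(x_{t+1}=j | x_t=i)
p0 = rng.random(K); p0 /= p0.sum()

# Recursion (15), run backward from t = T-1:
# cond_t[tau][i, j] = p(x_{T-1} = j | x_tau = i).
cond_t = {T - 1: np.eye(K)}
for tau in range(T - 2, -1, -1):
    cond_t[tau] = P @ cond_t[tau + 1]

# Equation (14) at tau = 2, a function of 3 variables (x_1, x_4, x_2):
# p(x_2 | x_1, x_4) is proportional to p(x_4 | x_2) p(x_2 | x_1).
tau = 2
post = np.zeros((K, K, K))                   # indexed [x_1, x_4, x_2]
for i, k in product(range(K), repeat=2):
    w = cond_t[tau][:, k] * P[i, :]
    post[i, k] = w / w.sum()

# Brute-force check from the fully enumerated joint p(x_0..x_4).
joint = np.zeros((K,) * T)
for xs in product(range(K), repeat=T):
    pr = p0[xs[0]]
    for a, b in zip(xs, xs[1:]):
        pr *= P[a, b]
    joint[xs] = pr
marg = joint.sum(axis=(0, 3))                # p(x_1, x_2, x_4)
brute = marg / marg.sum(axis=1, keepdims=True)   # p(x_2 | x_1, x_4)
for i, k in product(range(K), repeat=2):
    assert np.allclose(post[i, k], brute[i, :, k])
```

The check works because, by the Markov property, x_t is independent of x_{τ-1} given x_τ, so the two factors in (14) are all that is needed.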
5.2.2 Finding p(x_τ | x_{τ-1}, y) for τ = 1..s
Starting from τ = s − 1, compute

p(y | x_τ) = ∫ dx_{τ+1} p(y | x_{τ+1}) p(x_{τ+1} | x_τ)

p(x_τ | y) ∝ p(y | x_τ) p(x_τ)

p(x_τ | x_{τ-1}, y) ∝ p(y | x_τ) p(x_τ | x_{τ-1})

The recursion first computes the influence of x_τ on the observation, then computes the
marginal and the transition probability.
5.2.3 Finding p(x_τ | x_{τ-1}, y) for τ = t..T
Starting from τ = t, compute

p(x_τ | y) = ∫ dx_{τ-1} p(x_τ | x_{τ-1}, y) p(x_{τ-1} | y)

p(x_τ | x_{τ-1}, y) = p(x_τ | x_{τ-1})

The second identity follows from the independence structure on the right side of observed
nodes.
6 Results
We manually navigated a camera rig along two trajectories. The camera faced upward and
recorded the ceiling. The robot took about 3 minutes to trace each path, producing about
6000 frames of data for each experiment. The trajectory was pre-marked on the floor so
we could revisit specific locations (see the rightmost diagrams of figures 6(a,b)). This was
done to make the evaluation of the results simpler. The trajectory estimation worked at
frame-rate, although it was processed offline to simplify data acquisition.
In these experiments, the pose parameters were (x, y) locations on the floor. All experiments assume the same Brownian motion dynamics. For each new frame, pose changes
were computed with respect to at most three base frames. The selection of base frames was
based on a measure of appearance between the current frame and all past frames. The pose
change estimator was a Lucas-Kanade optical flow tracker [24]. To compute pose displacements, we computed a robust average of the flow vectors using an iterative outlier rejection
scheme. We used the number of inlier flow vectors as a crude estimate of the precision of
p(yst |xs , xt ).
Figures 6(a,b) compare the algorithm presented in this paper against two others. The middle
plots compare our algorithm (blue) against the batch algorithm which uses equations (5)
and (6) (black). Although our recovered trajectories don't coincide exactly with the batch
solutions, like the batch solutions, ours are smooth and consistent.
In contrast, more naive methods of reconstructing trajectories do not exhibit these two
desiderata. Estimating the motion of each frame with respect to only the previous base
frame yields an unsmooth trajectory (green). Furthermore, loops can't be closed correctly
(for example, the robot is not found to return to the origin).
The simplest method of taking into account multiple base frames also fails to meet our requirements. The red trajectory shows what happens when we assume individual poses are
independent. This corresponds to using a diagonal matrix to represent the correlation between the poses (instead of the tri-diagonal inverse covariance matrix our algorithm uses).
Notice that the resulting trajectory is not smooth, and loops are not well closed.
By taking into account a minimum amount of correlation between frame poses, loops have
been closed correctly and the trajectory is correctly found to be smooth.
7 Conclusion
We have presented a method for approximately computing the posterior distribution of a
set of variables for which only pairwise measurements are available. We call the resulting
structure a Differential Update Network and showed how to use Assumed Density Filtering
to update the posterior as pairwise measurements become available. The two key insights
were 1) how to approximate the posterior at each step to minimize KL divergence, and 2)
how to compute transition densities on a graph with a single loop in closed form.
We showed how to estimate globally consistent trajectories for a camera using this framework.
In this linear-Gaussian context, our algorithm can be thought of as a Kalman Filter
which projects the state information matrix down to a tri-diagonal representation while
minimizing the KL divergence between the true posterior and the obtained estimate. Although the
example used pose change measurements between scans of the environment, our framework can
be applied to feature-based mapping and localization as well.
References
[1] A. Stoddart and A. Hilton. Registration of multiple point sets. In IJCV, pages B40–44, 1996.
Figure 3 (panels a, b): Left, naive accumulation (green) and projecting trajectory to diagonal covariance (red).
Loops are not closed well, and trajectory is not smooth. The zoomed areas show that in both naive
approaches, there are large jumps in the trajectory, and the pose estimate is incorrect at revisited
locations. Right, Differential Update Network (blue) and exact solution (black). Like the batch
solution, our solution generates smooth and consistent trajectories.
[2] Y. Chen and G. Medioni. Object modelling by registration of multiple range images. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2724–2728, 1991.
[3] F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 4:333–349, 1997.
[4] J. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), 2000.
[5] Harpreet S. Sawhney, Steve Hsu, and Rakesh Kumar. Robust video mosaicing through topology inference and local to global alignment. In Proc. ECCV 2, pages 103–119, 1998.
[6] H.-Y. Shum and R. Szeliski. Construction of panoramic mosaics with global and local alignment. In IJCV, pages 101–130, February 2000.
[7] A. Shashua. Trilinearity in visual recognition by alignment. In ECCV, pages 479–484, 1994.
[8] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization approach. International Journal of Computer Vision, 9(2):137–154, 1992.
[9] Olivier Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, Massachusetts, 1993.
[10] M. Harville, A. Rahimi, T. Darrell, G.G. Gordon, and J. Woodfill. 3d pose tracking with linear depth and brightness constraints. In ICCV99, pages 206–213, 1999.
[11] Feng Lu and E. Milios. Robot pose estimation in unknown environments by matching 2d range scans. Robotics and Autonomous Systems, 22(2):159–178, 1997.
[12] P. J. Besl and N. D. McKay. A method for registration of 3-d shapes. IEEE Trans. Patt. Anal. Machine Intell., 14(2):239–256, February 1992.
[13] N. Ayache and O. Faugeras. Maintaining representations of the environment of a mobile robot. IEEE Trans. Robot. Automat., 5(6):804–819, 1989.
[14] Y. Liu, R. Emery, D. Chakrabarti, W. Burgard, and S. Thrun. Using EM to learn 3D models of indoor environments with mobile robots. In IEEE International Conference on Machine Learning (ICML), 2001.
[15] R. Smith, M. Self, and P. Cheeseman. Estimating uncertain spatial relationships in robotics. In Uncertainty in Artificial Intelligence, 1988.
[16] T.P. Minka. Expectation propagation for approximate bayesian inference. In UAI, 2001.
[17] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Uncertainty in Artificial Intelligence, 1998.
[18] T.P. Minka. Independence diagrams. Technical report, Media Lab, http://www.stat.cmu.edu/~minka/papers/diagrams.html, 1998.
[19] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1997.
[20] A. Rahimi, L-P. Morency, and T. Darrell. Reducing drift in parametric motion tracking. In ICCV, volume 1, pages 315–322, June 2001.
[21] T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. Prentice Hall, 2000.
[22] E. Sudderth. Embedded trees: Estimation of gaussian processes on graphs with cycles. Master's thesis, MIT, 2002.
[23] Philip F. McLauchlan. A batch/recursive algorithm for 3d scene reconstruction. Conf. Computer Vision and Pattern Recognition, 2:738–743, 2000.
[24] B. D. Lucas and Takeo Kanade. An iterative image registration technique with an application to stereo vision. In International Joint Conference on Artificial Intelligence, pages 674–679, 1981.
[25] Andrew W. Fitzgibbon and Andrew Zisserman. Automatic camera recovery for closed or open image sequences. In ECCV, pages 311–326, 1998.
Michael R. DeWeese and Anthony M. Zador
Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724
[email protected], [email protected]
Abstract
Cortical neurons have been reported to use both rate and temporal
codes. Here we describe a novel mode in which each neuron
generates exactly 0 or 1 action potentials, but not more, in response
to a stimulus. We used cell-attached recording, which ensured
single-unit isolation, to record responses in rat auditory cortex to
brief tone pips. Surprisingly, the majority of neurons exhibited
binary behavior with few multi-spike responses; several dramatic
examples consisted of exactly one spike on 100% of trials, with no
trial-to-trial variability in spike count. Many neurons were tuned to
stimulus frequency. Since individual trials yielded at most one
spike for most neurons, the information about stimulus frequency
was encoded in the population, and would not have been accessible
to later stages of processing that only had access to the activity of a
single unit. These binary units allow a more efficient population
code than is possible with conventional rate coding units, and are
consistent with a model of cortical processing in which
synchronous packets of spikes propagate stably from one neuronal
population to the next.
1
Binary coding in auditory cortex
We recorded responses of neurons in the auditory cortex of anesthetized rats to
pure-tone pips of different frequencies [1, 2]. Each pip was presented repeatedly,
allowing us to assess the variability of the neural response to multiple presentations
of each stimulus. We first recorded multi-unit activity with conventional tungsten
electrodes (Fig. 1a). The number of spikes in response to each pip fluctuated
markedly from one trial to the next (Fig. 1e), as though governed by a random
mechanism such as that generating the ticks of a Geiger counter. Highly variable
responses such as these, which are at least as variable as a Poisson process, are the
norm in the cortex [3-7], and have contributed to the widely held view that cortical
spike trains are so noisy that only the average firing rate can be used to encode
stimuli.
Because we were recording the activity of an unknown number of neurons, we could
not be sure whether the strong trial-to-trial fluctuations reflected the underlying
variability of the single units. We therefore used an alternative technique, cell-
[Figure 1 appears here. Panels a-e show: the single-unit recording method; raw and high-pass filtered cell-attached voltage traces with identified spikes and threshold; single-unit rasters for 28 kHz and 38 kHz tones over time (msec); and response variance/mean (spikes/trial) versus mean response (spikes/trial) for multi-unit (N = 29 tones) and single-unit (N = 11 tones) recordings, with Poisson and binary reference lines.]
Figure 1: Multi-unit spiking activity was highly variable, but single units obeyed binomial
statistics. a Multi-unit spike rasters from a conventional tungsten electrode recording showed
high trial-to-trial variability in response to ten repetitions of the same 50 msec pure tone
stimulus (bottom). Darker hash marks indicate spike times within the response period, which
were used in the variability analysis. b Spikes recorded in cell-attached mode were easily
identified from the raw voltage trace (top) by applying a high-pass filter (bottom) and
thresholding (dark gray line). Spike times (black squares) were assigned to the peaks of
suprathreshold segments. c Spike rasters from a cell-attached recording of single-unit
responses to 25 repetitions of the same tone consisted of exactly one well-timed spike per
trial (latency standard deviation = 1.0 msec), unlike the multi-unit responses (Fig. 1a). Under
the Poisson assumption, this would have been highly unlikely (P ~ 10^-11). d The same neuron
as in Fig. 1c responds with lower probability to repeated presentations of a different tone, but
there are still no multi-spike responses. e We quantified response variability for each tone by
dividing the variance in spike count by the mean spike count across all trials for that tone.
Response variability for multi-unit tungsten recording (open triangles) was high for each of
the 29 tones (out of 32) that elicited at least one spike on one trial. All but one point lie
above one (horizontal gray line), which is the value produced by a Poisson process with any
constant or time varying event rate. Single unit responses recorded in cell-attached mode
were far less variable (filled circles). Ninety one percent (10/11) of the tones that elicited at
least one spike from this neuron produced no multi-spike responses in 25 trials; the
corresponding points fall on the diagonal line between (0,1) and (1,0), which provides a strict
lower bound on the variability for any response set with a mean between 0 and 1. No point
lies above one.
attached recording with a patch pipette [8, 9], in order to ensure single unit isolation
(Fig. 1b). This recording mode minimizes both of the main sources of error in spike
detection: failure to detect a spike in the unit under observation (false negatives),
and contamination by spikes from nearby neurons (false positives). It also differs
from conventional extracellular recording methods in its selection bias: With cell-
attached recording neurons are selected solely on the basis of the experimenter's
ability to form a seal, rather than on the basis of neuronal activity and
responsiveness to stimuli as in conventional methods.
Surprisingly, single unit responses were far more orderly than suggested by the
multi-unit recordings; responses typically consisted of either 0 or 1 spikes per trial,
and not more (Fig. 1c-e). In the most dramatic examples, each presentation of the
same tone pip elicited exactly one spike (Fig. 1c). In most cases, however, some
presentations failed to elicit a spike (Fig. 1d). Although low-variability responses
have recently been observed in the cortex [10, 11] and elsewhere [12, 13], the
binary behavior described here has not previously been reported for cortical
neurons.
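The variance-to-mean bound in Figure 1e can be checked numerically. For a binary (0-or-1) response with mean p, the spike-count variance/mean ratio (the Fano factor) is 1 − p, which traces the diagonal from (0,1) to (1,0), while a constant-rate Poisson count has ratio 1. A sketch under these assumptions (our own simulation, not the authors' analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

for p in [0.1, 0.3, 0.5, 0.9]:
    binary = rng.binomial(1, p, n_trials)      # 0 or 1 spike per trial
    fano = binary.var() / binary.mean()
    # Bernoulli(p): variance/mean = p(1-p)/p = 1-p, i.e. the diagonal
    # from (0,1) to (1,0) in Fig. 1e.
    assert abs(fano - (1.0 - p)) < 0.02

for lam in [0.3, 1.0, 3.0]:
    counts = rng.poisson(lam, n_trials)        # constant-rate Poisson counts
    assert abs(counts.var() / counts.mean() - 1.0) < 0.02

# Trial-to-trial rate fluctuations only push the ratio above 1,
# consistent with the multi-unit points lying above the Poisson line.
lam_per_trial = rng.gamma(2.0, 1.0, n_trials)
mixed = rng.poisson(lam_per_trial)
assert mixed.var() / mixed.mean() > 1.0
```

This is why the binary points in Fig. 1e fall on or below the diagonal, while responses at least as variable as Poisson fall at or above one.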
[Figure 2 appears here. Panel a plots response variance/mean (spikes/trial) versus mean response (spikes/trial) for N = 3055 response sets, with Poisson and binary reference lines; panel b shows rasters for a 28 kHz tone at 100 msec and 25 msec durations.]
The majority of the neurons (59%) in our study for which statistical significance
could be assessed (at the p<0.001 significance level; see Fig. 2, caption) showed
noisy binary behavior: "binary" because neurons produced either 0 or 1 spikes, and
"noisy" because some stimuli elicited both single spikes and failures. In a
substantial fraction of neurons, however, the responses showed more variability. We
found no correlation between neuronal variability and cortical layer (inferred from
the depth of the recording electrode), cortical area (inside vs. outside of area A1) or
depth of anesthesia. Moreover, the binary mode of spiking was not due to the
brevity (25 msec) of the stimuli; responses that were binary for short tones were
comparably binary when longer (100 msec) tones were used (Fig. 2b).
Figure 2: Half of the neuronal population exhibited binary firing behavior. a Of the 3055
sets of responses to 25 msec tones, 2588 (gray points) could not be assessed for significance
at the p<0.001 level, 225 (open circles) were not significantly binary, and 242 were
significantly binary (black points; see Identification methods for group statistics below). All
points were jittered slightly so that overlying points could be seen in the figure. 2165
response sets contained no multi-spike responses; the corresponding points fell on the line
from [0,1] to [1,0]. b The binary nature of single unit responses was insensitive to tone
duration, even for frequencies that elicited the largest responses. Twenty additional spike
rasters from the same neuron (and tone frequency) as in Fig. 1c contain no multi-spike
responses whether in response to 100 msec tones (above) or 25 msec tones (below). Across
the population, binary responses were as prevalent for 100 msec tones as for 25 msec tones
(see Identification methods for group statistics).
In many neurons, binary responses showed high temporal precision, with latencies
sometimes exhibiting standard deviations as low as 1 msec (Fig. 3; see also Fig. 1c),
comparable to previous observations in the auditory cortex [14], and only slightly
more precise than in monkey visual area MT [5]. High temporal precision was
positively correlated with high response probability (Fig. 3).
[Figure 3 appears here. Panels a and b plot jitter (msec) versus mean response (spikes/trial), for N = 32 tones and N = (44 cells) x (32 tones) respectively.]
Figure 3: Trial-to-trial variability in latency of response to repeated presentations of the
same tone decreased with increasing response probability. a Scatter plot of standard
deviation of latency vs. mean response for 25 presentations each of 32 tones for a different
neuron as in Figs. 1 and 2 (gray line is best linear fit). Rasters from 25 repeated presentations
of a low response tone (upper left inset, which corresponds to left-most data point) display
much more variable latencies than rasters from a high response tone (lower right inset;
corresponds to right-most data point). b The negative correlation between latency variability
and response size was present on average across the population of 44 neurons described in
Identification methods for group statistics (linear fit, gray).
The low trial-to-trial variability ruled out the possibility that the firing statistics
could be accounted for by a simple rate-modulated Poisson process (Fig. 4a1,a2). In
other systems, low variability has sometimes been modeled as a Poisson process
followed by a post-spike refractory period [10, 12]. In our system, however, the
range in latencies of evoked binary responses was often much greater than the
refractory period, which could not have been longer than the 2 msec inter-spike
intervals observed during epochs of spontaneous spiking, indicating that binary
spiking did not result from any intrinsic property of the spike generating mechanism
(Fig. 4a3). Moreover, a single stimulus-evoked spike could suppress subsequent
spikes for as long as hundreds of milliseconds (e.g. Figs. 1d,4d), supporting the idea
that binary spiking arises through a circuit-level, rather than a single-neuron,
mechanism. Indeed, the fact that this suppression is observed even in the cortex of
awake animals [15] suggests that binary spiking is not a special property of the
anesthetized state.
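The argument that binary firing is not an intrinsic refractory effect can be illustrated in simulation, in the spirit of Fig. 4a3 (our own sketch with assumed parameters): a Poisson generator producing one spike per trial on average, with latencies spread over a 40 msec window and a hard 2 msec refractory period, still emits double spikes on many trials.

```python
import numpy as np

# Assumed parameters, not the authors' simulation: mean 1 spike per trial,
# spike times uniform over a 40 msec response window, 2 msec hard refractory.
rng = np.random.default_rng(4)
n_trials, window_ms, refractory_ms, mean_count = 10_000, 40.0, 2.0, 1.0

multi = 0
for _ in range(n_trials):
    n = rng.poisson(mean_count)
    times = np.sort(rng.uniform(0.0, window_ms, n))
    # Enforce the refractory period by deleting spikes closer than 2 msec
    # to the previously accepted spike.
    kept, last = 0, -np.inf
    for t in times:
        if t - last >= refractory_ms:
            kept += 1
            last = t
    if kept >= 2:
        multi += 1

# Unlike the recorded binary neurons, a refractory Poisson process still
# produces multi-spike responses on a substantial fraction of trials.
assert multi / n_trials > 0.1
```

Because the latency spread is much longer than the refractory period, the refractory constraint removes only a small fraction of the would-be multi-spike trials.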
It seems surprising that binary spiking in the cortex has not previously been
remarked upon. In the auditory cortex the explanation may be in part technical:
Because firing rates in the auditory cortex tend to be low, multi-unit recording is
often used to maximize the total amount of data collected. Moreover, our use of
cell-attached recording minimizes the usual bias toward responsive or active
neurons.
Such explanations are not, however, likely to account for the failure to observe
binary spiking in the visual cortex, where spike count statistics have been
scrutinized more closely [3-7]. One possibility is that this reflects a fundamental
difference between the auditory and visual systems. An alternative interpretation?
[Figure 4 panels a1-a3, b, c, d: raster and PSTH plots (time in msec), ratio of pool sizes versus mean spike count per neuron, and response probability versus tone frequency (kHz); details in the caption below.]
Figure 4: a The lack of multi-spike responses elicited by the neuron shown in Fig. 3a were
not due to an absolute refractory period since the range of latencies for many tones, like that
shown here, was much greater than any reasonable estimate for the neuron's refractory
period. (a1) Experimentally recorded responses. (a2) Using the smoothed post stimulus time
histogram (PSTH; bottom) from the set of responses in Fig. 4a, we generated rasters under
the assumption of Poisson firing. In this representative example, four double-spike responses
(arrows at left) were produced in 25 trials. (a3) We then generated rasters assuming that the
neuron fired according to a Poisson process subject to a hard refractory period of 2 msec.
Even with a refractory period, this representative example includes one triple- and three
double-spike responses. The minimum interspike-interval during spontaneous firing events
was less than two msec for five of our neurons, so 2 msec is a conservative upper bound for
the refractory period. b. Spontaneous activity is reduced following high-probability
responses. The PSTH (top; 0.25 msec bins) of the combined responses from the 25% (8/32)
of tones that elicited the largest responses from the same neuron as in Figs. 3a and 4a
illustrates a preclusion of spontaneous and evoked activity for over 200 msec following
stimulation. The PSTHs from progressively less responsive groups of tones show
progressively less preclusion following stimulation. c Fewer noisy binary neurons need to be
pooled to achieve the same "signal-to-noise ratio" (SNR; see ref. [24]) as a collection of
Poisson neurons. The ratio of the number of Poisson to binary neurons required to achieve
the same SNR is plotted against the mean number of spikes elicited per neuron following
stimulation; here we have defined the SNR to be the ratio of the mean spike count to the
standard deviation of the spike count. d Spike probability tuning curve for the same neuron
as in Figs. 1c-e and 2b fit to a Gaussian in tone frequency.
and one that we favor, is that the difference rests not in the sensory modality, but
instead in the difference between the stimuli used. In this view, the binary responses
may not be limited to the auditory cortex; neurons in visual and other sensory
cortices might exhibit similar responses to the appropriate stimuli. For example, the
tone pips we used might be the auditory analog of a brief flash of light, rather than
the oriented moving edges or gratings usually used to probe the primary visual
cortex. Conversely, auditory stimuli analogous to edges or gratings [16, 17] may be
more likely to elicit conventional, rate-modulated Poisson responses in the auditory
cortex. Indeed, there may be a continuum between binary and Poisson modes. Thus,
even in conventional rate-modulated responses, the first spike is often privileged in
that it carries most of the information in the spike train [5, 14, 18]. The first spike
may be particularly important as a means of rapidly signaling stimulus transients.
Binary responses suggest a mode that complements conventional rate coding. In the
simplest rate-coding model, a stimulus parameter (such as the frequency of a tone)
governs only the rate at which a neuron generates spikes, but not the detailed
positions of the spikes; the actual spike train itself is an instantiation of a random
process (such as a Poisson process). By contrast, in the binomial model, the
stimulus parameter (frequency) is encoded as the probability of firing (Fig. 4d).
Binary coding has implications for cortical computation. In the rate coding model,
stimulus encoding is "ergodic": a stimulus parameter can be read out either by
observing the activity of one neuron for a long time, or a population for a short time.
By contrast, in the binary model the stimulus value can be decoded only by
observing a neuronal population, so that there is no benefit to integrating over long
time periods (cf. ref. [19]). One advantage of binary encoding is that it allows the
population to signal quickly; the most compact message a neuron can send is one
spike [20]. Binary coding is also more efficient in the context of population coding,
as quantified by the signal-to-noise ratio (Fig. 4c).
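The pool-size comparison of Fig. 4c follows from the variance difference between the two spiking models. The following is a minimal sketch of that calculation (our own illustration, not the authors' analysis code), assuming pools of independent neurons in which both neuron types fire a mean of p spikes per trial:

```python
def pool_ratio(p):
    """Ratio of Poisson to binary pool sizes needed for equal SNR.

    SNR is defined as in the Fig. 4 caption: mean spike count divided by
    its standard deviation. Pooling N i.i.d. neurons scales SNR by
    sqrt(N), so equal SNR requires N_poisson / N_binary equal to
    var_poisson / var_binary (the means are equal and cancel).
    """
    var_poisson = p              # Poisson: variance equals the mean
    var_binary = p * (1.0 - p)   # Bernoulli: variance is p(1 - p)
    return var_poisson / var_binary   # simplifies to 1 / (1 - p)

# The binary code's advantage grows as response probability approaches 1:
for p in (0.5, 0.8, 0.95):
    print(p, pool_ratio(p))
```

The ratio 1/(1 - p) reproduces the qualitative shape of Fig. 4c: high-probability binary responses need far fewer pooled neurons than Poisson responses of the same mean rate.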
The precise organization of both spike number and time we have observed suggests
that cortical activity consists, at least under some conditions, of packets of spikes
synchronized across populations of neurons. Theoretical work [21-23] has shown
how such packets can propagate stably from one population to the next, but only if
neurons within each population fire at most one spike per packet; otherwise, the
number of spikes per packet (and hence the width of each packet) grows at each
propagation step. Interestingly, one prediction of stable propagation models is that
spike probability should be related to timing precision, a prediction borne out by our
observations (Fig. 3). The role of these packets in computation remains an open
question.
2 Identification methods for group statistics
We recorded responses to 32 different 25 msec tones from each of 175 neurons from
the auditory cortices of 16 Sprague-Dawley rats; each tone was repeated between 5
and 75 times (mean = 19). Thus our ensemble consisted of 32x175=5600 response
sets, with between 5 and 75 samples in each set. Of these, 3055 response sets
contained at least one spike on at least one trial. For each response set, we tested the
hypothesis that the observed variability was significantly lower than expected from
the null hypothesis of a Poisson process. The ability to assess significance depended
on two parameters: the sample size (5-75) and the firing probability. Intuitively, the
dependence on firing probability arises because at low firing rates most responses
produce only trials with 0 or 1 spikes under both the Poisson and binary models;
only at high firing rates do the two models make different predictions, since in that
case the Poisson model includes many trials with 2 or even 3 spikes while the binary
model generates only solitary spikes (see Fig. 4a1,a2). Using a stringent
significance criterion of p<0.001, 467 response sets had a sufficient number of
repeats to assess significance, given the observed firing probability. Of these, half
(242/467=52%) were significantly less variable than expected by chance, five
hundred-fold higher than the 467/1000=0.467 response sets expected, based on the
0.001 significance criterion, to yield a binary response set. Seventy-two neurons had
at least one response set for which significance could be assessed, and of these, 49
neurons (49/72=68%) had at least one significantly sub-Poisson response set. Of this
population of 49 neurons, five achieved low variability through repeatable bursty
behavior (e.g., every spike count was either 0 or 3, but not 1 or 2) and were
excluded from further analysis. The remaining 44 neurons formed the basis for the
group statistics analyses shown in Figs. 2a and 3b. Nine of these neurons were
subjected to an additional protocol consisting of at least 10 presentations each of
100 msec tones and 25 msec tones of all 32 frequencies. Of the 100 msec
stimulation response sets, 44 were found to be significantly sub-Poisson at the
p<0.05 level, in good agreement with the 43 found to be significant among the
responses to 25 msec tones.
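The variability test described above can be sketched as a Monte Carlo comparison against a mean-matched Poisson null. This is an illustrative reconstruction, not the authors' actual analysis code, and the example spike counts are made up:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson variate (Knuth's method; fine for small rates)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sub_poisson_p_value(counts, n_sim=2000, seed=0):
    """Fraction of simulated Poisson response sets (same mean, same number
    of trials) whose spike-count variance is as low as the observed one."""
    rng = random.Random(seed)
    lam = sum(counts) / len(counts)
    observed = variance(counts)
    hits = sum(
        variance([poisson_sample(lam, rng) for _ in range(len(counts))]) <= observed
        for _ in range(n_sim)
    )
    return hits / n_sim

# Nine single-spike trials and one failure: far less variable than Poisson.
binary_like = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
print(sub_poisson_p_value(binary_like))
```

As the text notes, power depends on trial number and firing probability: at low rates both models generate mostly 0- or 1-spike trials, and the simulated variances rarely fall below the observed one.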
3 Bibliography
1. Kilgard, M.P. and M.M. Merzenich, Cortical map reorganization enabled by nucleus basalis activity. Science, 1998. 279(5357): p. 1714-8.
2. Sally, S.L. and J.B. Kelly, Organization of auditory cortex in the albino rat: sound frequency. J Neurophysiol, 1988. 59(5): p. 1627-38.
3. Softky, W.R. and C. Koch, The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci, 1993. 13(1): p. 334-50.
4. Stevens, C.F. and A.M. Zador, Input synchrony and the irregular firing of cortical neurons. Nat Neurosci, 1998. 1(3): p. 210-7.
5. Buracas, G.T., A.M. Zador, M.R. DeWeese, and T.D. Albright, Efficient discrimination of temporal patterns by motion-sensitive neurons in primate visual cortex. Neuron, 1998. 20(5): p. 959-69.
6. Shadlen, M.N. and W.T. Newsome, The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci, 1998. 18(10): p. 3870-96.
7. Tolhurst, D.J., J.A. Movshon, and A.F. Dean, The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res, 1983. 23(8): p. 775-85.
8. Otmakhov, N., A.M. Shirke, and R. Malinow, Measuring the impact of probabilistic transmission on neuronal output. Neuron, 1993. 10(6): p. 1101-11.
9. Friedrich, R.W. and G. Laurent, Dynamic optimization of odor representations by slow temporal patterning of mitral cell activity. Science, 2001. 291(5505): p. 889-94.
10. Kara, P., P. Reinagel, and R.C. Reid, Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron, 2000. 27(3): p. 635-46.
11. Gur, M., A. Beylin, and D.M. Snodderly, Response variability of neurons in primary visual cortex (V1) of alert monkeys. J Neurosci, 1997. 17(8): p. 2914-20.
12. Berry, M.J., D.K. Warland, and M. Meister, The structure and precision of retinal spike trains. Proc Natl Acad Sci U S A, 1997. 94(10): p. 5411-6.
13. de Ruyter van Steveninck, R.R., G.D. Lewen, S.P. Strong, R. Koberle, and W. Bialek, Reproducibility and variability in neural spike trains. Science, 1997. 275(5307): p. 1805-8.
14. Heil, P., Auditory cortical onset responses revisited. I. First-spike timing. J Neurophysiol, 1997. 77(5): p. 2616-41.
15. Lu, T., L. Liang, and X. Wang, Temporal and rate representations of time-varying signals in the auditory cortex of awake primates. Nat Neurosci, 2001. 4(11): p. 1131-8.
16. Kowalski, N., D.A. Depireux, and S.A. Shamma, Analysis of dynamic spectra in ferret primary auditory cortex. I. Characteristics of single-unit responses to moving ripple spectra. J Neurophysiol, 1996. 76(5): p. 3503-23.
17. deCharms, R.C., D.T. Blake, and M.M. Merzenich, Optimizing sound features for cortical neurons. Science, 1998. 280(5368): p. 1439-43.
18. Panzeri, S., R.S. Petersen, S.R. Schultz, M. Lebedev, and M.E. Diamond, The role of spike timing in the coding of stimulus location in rat somatosensory cortex. Neuron, 2001. 29(3): p. 769-77.
19. Britten, K.H., M.N. Shadlen, W.T. Newsome, and J.A. Movshon, The analysis of visual motion: a comparison of neuronal and psychophysical performance. J Neurosci, 1992. 12(12): p. 4745-65.
20. Delorme, A. and S.J. Thorpe, Face identification using one spike per neuron: resistance to image degradations. Neural Netw, 2001. 14(6-7): p. 795-803.
21. Diesmann, M., M.O. Gewaltig, and A. Aertsen, Stable propagation of synchronous spiking in cortical neural networks. Nature, 1999. 402(6761): p. 529-33.
22. Marsalek, P., C. Koch, and J. Maunsell, On the relationship between synaptic input and spike output jitter in individual neurons. Proc Natl Acad Sci U S A, 1997. 94(2): p. 735-40.
23. Kistler, W.M. and W. Gerstner, Stable propagation of activity pulses in populations of spiking neurons. Neural Comp., 2002. 14: p. 987-997.
24. Zohary, E., M.N. Shadlen, and W.T. Newsome, Correlated neuronal discharge rate and its implications for psychophysical performance. Nature, 1994. 370(6485): p. 140-3.
25. Abbott, L.F. and P. Dayan, The effect of correlated variability on the accuracy of a population code. Neural Comput, 1999. 11(1): p. 91-101.
Dynamic Bayesian Networks with
Deterministic Latent Tables
David Barber
Institute for Adaptive and Neural Computation
Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, U.K.
[email protected]
Abstract
The application of latent/hidden variable Dynamic Bayesian Networks is constrained by the complexity of marginalising over latent
variables. For this reason either small latent dimensions or Gaussian latent conditional tables linearly dependent on past states are
typically considered in order that inference is tractable. We suggest
an alternative approach in which the latent variables are modelled
using deterministic conditional probability tables. This specialisation has the advantage of tractable inference even for highly complex non-linear/non-Gaussian visible conditional probability tables.
This approach enables the consideration of highly complex latent
dynamics whilst retaining the benefits of a tractable probabilistic
model.
1 Introduction
Dynamic Bayesian Networks are a powerful framework for temporal data models
with widespread application in time series analysis[10, 2, 5]. A time series of length $T$ is a sequence of observation vectors $V = \{v(1), v(2), \ldots, v(T)\}$, where $v_i(t)$ represents the state of visible variable $i$ at time $t$. For example, in a speech application $V$ may represent a vector of cepstral coefficients through time, the aim being to classify the sequence as belonging to a particular phoneme[2, 9]. The power in the Dynamic
Bayesian Network is the assumption that the observations may be generated by
some latent (hidden) process that cannot be directly experimentally observed. The
basic structure of these models is shown in fig(1)[a] where network states are only
dependent on a short time history of previous states (the Markov assumption).
Representing the hidden variable sequence by H = {h(1), h(2), . . . , h(T )}, the joint
distribution of a first order Dynamic Bayesian Network is
$$p(V, H) = p(v(1))\,p(h(1)|v(1)) \prod_{t=1}^{T-1} p(v(t+1)|v(t), h(t))\,p(h(t+1)|v(t), v(t+1), h(t))$$
This is a Hidden Markov Model (HMM), with additional connections from visible
to hidden units[9]. The usage of such models is varied, but here we shall concentrate on unsupervised sequence learning. That is, given a set of training sequences
[Figure 1 diagrams: (a) hidden nodes h(1), h(2), ..., h(t) over visible nodes v(1), v(2), ..., v(t); (b) undirected chain of cliques (h(1), h(2)), (h(2), h(3)), ..., (h(t-1), h(t)).]
Figure 1: (a) A first order Dynamic Bayesian Network containing a sequence of
hidden (latent) variables h(1), h(2), . . . , h(T ) and a sequence of visible (observable) variables v(1), v(2), . . . , v(T ). In general, all conditional probability tables
are stochastic, that is, more than one state can be realised. (b) Conditioning on
the visible units forms an undirected chain in the hidden space. Hidden unit inference is achieved by propagating information along both directions of the chain to
ensure normalisation.
$V^1, \ldots, V^P$, we aim to capture the essential features of the underlying dynamical process that generated the data. Denoting the parameters of the model by $\theta$, learning can be achieved using the EM algorithm, which maximises a lower bound on the likelihood of a set of observed sequences by the procedure[5]:

$$\theta^{new} = \arg\max_{\theta} \sum_{\mu=1}^{P} \sum_{H^\mu} p(H^\mu|V^\mu, \theta^{old}) \log p(H^\mu, V^\mu, \theta). \tag{1}$$
This procedure contains expectations with respect to the distribution p(H|V); that
is, to do learning, we need to infer the hidden unit distribution conditional on the
visible variables. p(H|V) is represented by the undirected clique graph, fig(1)[b], in
which each node represents a function (dependent on the clamped visible units) of
the hidden variables it contains, with p(H|V) being the product of these clique potentials. In order to do inference on such a graph, in general, it is necessary to carry
out a message passing type procedure in which messages are first passed one way
along the undirected graph, and then back, such as in the forward-backward algorithm in HMMs [5]. Only when messages have been passed along both directions of
all links can the normalised conditional hidden unit distribution be numerically determined. The complexity of calculating messages is dominated by marginalisation
of the clique functions over a hidden vector h(t). In the case of discrete hidden units
with $S$ states, this complexity is of the order $S^2$, and the total complexity of inference is then $O(TS^2)$. For continuous hidden units, the analogous marginalisation
requires integration of a clique function over a hidden vector. If the clique function
is very low dimensional, this may be feasible. However, in high dimensions, this
is typically intractable unless the clique functions are of a very specific form, such
as Gaussians. This motivates the Kalman filter model[5] in which all conditional
probability tables are Gaussian with means determined by a linear combination of
previous states. There have been several attempts to generalise the Kalman filter
to include non-linear/non-Gaussian conditional probability tables, but most rely on
using approximate integration methods based on either sampling[3], perturbation
or variational type methods[5].
In this paper we take a different approach. We consider specially constrained networks which, when conditioned on the visible variables, render the hidden unit
[Figure 2 diagrams: (a) Deterministic Hiddens, hidden nodes h(1)..h(t) over visible nodes v(1)..v(t); (b) Input-Output HMM with inputs vin(1)..vin(t) and outputs vout(1)..vout(t); (c) Hidden Inference; (d) Visible Representation over v(1)..v(4).]
Figure 2: (a) A first order Dynamic Bayesian Network with deterministic hidden
CPTs (represented by diamonds), that is, the hidden node is certainly in a single
state, determined by its parents. (b) An input-output HMM with deterministic
hidden variables. (c) Conditioning on the visible variables forms a directed chain
in the hidden space which is deterministic. Hidden unit inference can be achieved
by forward propagation alone. (d) Integrating out hidden variables gives a cascade
style directed visible graph, shown here for only four time steps.
distribution trivial. The aim is then to be able to consider non-Gaussian and nonlinear conditional probability tables (CPTs), and hence richer dynamics in the hidden space.
2 Deterministic Latent Variables
The deterministic latent CPT case, fig(2)[a] defines conditional probabilities
$$p(h(t+1)|v(t+1), v(t), h(t)) = \delta\big(h(t+1) - f(v(t+1), v(t), h(t), \theta_h)\big) \tag{2}$$

where $\delta(x)$ represents the Dirac delta function for continuous hidden variables, and the Kronecker delta for discrete hidden variables. The vector function $f$ parameterises the CPT, itself having parameters $\theta_h$. Whilst the restriction to deterministic
CPTs appears severe, the model retains some attractive features: the marginal
p(V) is non-Markovian, coupling all the variables in the sequence, see fig(2)[d]. The
marginal p(H) is stochastic, whilst hidden unit inference is deterministic, as illustrated in fig(2)[c]. Although not considered explicitly here, input-output HMMs[7],
see fig(2)[b], are easily dealt with by a trivial modification of this framework.
For learning, we can dispense with the EM algorithm and calculate the log likelihood
of a single training sequence V directly,
$$L(\theta_v, \theta_h|V) = \log p(v(1)|\theta_v) + \sum_{t=1}^{T-1} \log p(v(t+1)|v(t), h(t), \theta_v) \tag{3}$$

where the hidden unit values are calculated recursively using

$$h(t+1) = f(v(t+1), v(t), h(t), \theta_h) \tag{4}$$
The adjustable parameters of the hidden and visible CPTs are represented by $\theta_h$ and $\theta_v$ respectively. The case of training multiple independently generated sequences $V^\mu$, $\mu = 1, \ldots, P$ is straightforward and has likelihood $\sum_\mu L(\theta_v, \theta_h|V^\mu)$. To maximise the log-likelihood, it is useful to evaluate the derivatives with respect to the model parameters. These can be calculated as follows:
$$\frac{dL}{d\theta_v} = \frac{\partial \log p(v(1)|\theta_v)}{\partial \theta_v} + \sum_{t=1}^{T-1} \frac{\partial}{\partial \theta_v} \log p(v(t+1)|v(t), h(t), \theta_v) \tag{5}$$

$$\frac{dL}{d\theta_h} = \sum_{t=1}^{T-1} \frac{\partial}{\partial h(t)} \log p(v(t+1)|v(t), h(t), \theta_v)\, \frac{dh(t)}{d\theta_h} \tag{6}$$

$$\frac{dh(t)}{d\theta_h} = \frac{\partial f(t)}{\partial \theta_h} + \frac{\partial f(t)}{\partial h(t-1)}\, \frac{dh(t-1)}{d\theta_h} \tag{7}$$

where $f(t) \equiv f(v(t), v(t-1), h(t-1), \theta_h)$. Hence the derivatives can be calculated by deterministic forward propagation of errors, and highly complex functions $f$ and CPTs $p(v(t+1)|v(t), h(t))$ may be used. Whilst the training of such networks
resembles back-propagation in neural networks [1, 6], the models have a stochastic
interpretation and retain the benefits inherited from probability theory, including
the possibility of a Bayesian treatment.
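The forward pass implied by equations (3) and (4) can be sketched generically. The code below is our own illustration; the Gaussian emission and the particular hidden update used in the example are assumed for concreteness, not taken from the paper:

```python
import math

def log_likelihood(V, f, log_p_v, h1):
    """Evaluate eq. (3) via the deterministic recursion of eq. (4).

    V       : observed sequence v(1)..v(T) (scalars here, for simplicity)
    f       : hidden update, h(t+1) = f(v(t+1), v(t), h(t))
    log_p_v : log p(v(t+1) | v(t), h(t))
    h1      : hidden state after observing v(1)

    The log p(v(1)) term of eq. (3) is omitted for brevity.
    """
    h, L = h1, 0.0
    for t in range(len(V) - 1):
        L += log_p_v(V[t + 1], V[t], h)   # visible CPT contribution
        h = f(V[t + 1], V[t], h)          # deterministic: one reachable state
    return L

# Illustrative (assumed) model: unit-variance Gaussian emission centred on h,
# and a simple autoregressive hidden update.
log_gauss = lambda v_next, v, h: -0.5 * math.log(2 * math.pi) - 0.5 * (v_next - h) ** 2
update = lambda v_next, v, h: 0.5 * h + v_next

print(log_likelihood([0.0, 1.0, 2.0], update, log_gauss, 0.0))
```

Because the hidden chain is deterministic, a single forward sweep suffices; no backward message passing is needed, in contrast to the stochastic-latent case of fig(1)[b].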
3 A Discrete Visible Illustration
To make the above framework more explicit, we consider the case of continuous hidden units and discrete, binary visible units, $v_i(t) \in \{0, 1\}$. In particular, we restrict attention to the model:

$$p(v(t+1)|v(t), h(t)) = \prod_{i=1}^{V} \sigma\Big((2v_i(t+1) - 1) \sum_j w_{ij} \phi_j(t)\Big), \qquad h_i(t+1) = \sum_j u_{ij} \psi_j(t)$$

where $\sigma(x) = 1/(1 + e^{-x})$ and $\phi_j(t)$ and $\psi_j(t)$ represent fixed functions of the network state $(h(t), v(t))$. Normalisation is ensured since $1 - \sigma(x) = \sigma(-x)$. This model generalises a recurrent stochastic heteroassociative Hopfield network[4] to include deterministic hidden units dependent on past network states.
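The normalisation property can be checked numerically: because $1 - \sigma(x) = \sigma(-x)$, the per-unit factors sum to one over $v_i(t+1) \in \{0, 1\}$, so the product over units is a distribution. An illustrative sketch (the weights and state functions below are made up):

```python
import math
from itertools import product

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_visible(v_next, phi, W):
    """p(v(t+1)|v(t), h(t)) = prod_i sigma((2 v_i(t+1) - 1) sum_j W[i][j] phi[j])."""
    p = 1.0
    for i, row in enumerate(W):
        activation = sum(w * f for w, f in zip(row, phi))
        p *= sigmoid((2 * v_next[i] - 1) * activation)
    return p

# Arbitrary (made-up) state functions phi and weights W for V = 2 visible units:
phi = [0.3, -1.2, 0.7]
W = [[0.5, -0.4, 1.1],
     [2.0, 0.1, -0.3]]

# The probabilities of all 2^V visible configurations sum to one.
total = sum(p_visible(v, phi, W) for v in product([0, 1], repeat=2))
print(total)  # approximately 1.0
```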
The derivatives of the log likelihood are given by:

$$\frac{dL}{dw_{ij}} = \sum_t (1 - \gamma_i(t))\,(2v_i(t+1) - 1)\,\phi_j(t), \qquad \frac{dL}{du_{ij}} = \sum_{t,k,l} (1 - \gamma_k(t))\,(2v_k(t+1) - 1)\,w_{kl}\,\phi'_l(t)\,\frac{dh_l(t)}{du_{ij}}$$

where $\gamma_i(t) \equiv \sigma\big((2v_i(t+1) - 1)\sum_j w_{ij}\phi_j(t)\big)$, $\phi'_l(t) \equiv d\phi_l(t)/dh_l(t)$, and the hidden unit derivatives are found from the recursions

$$\frac{dh_l(t+1)}{du_{ij}} = \sum_k u_{lk}\,\frac{d\psi_k(t)}{du_{ij}} + \delta_{il}\,\psi_j(t), \qquad \frac{d\psi_k(t)}{du_{ij}} = \sum_m \frac{\partial \psi_k(t)}{\partial h_m(t)}\,\frac{dh_m(t)}{du_{ij}}$$
We considered a network with the simple linear type influences

$$\phi(t) = \psi(t) = \begin{pmatrix} h(t) \\ v(t) \end{pmatrix}$$

and restricted connectivity

$$W = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}, \qquad U = \begin{pmatrix} C & 0 \\ 0 & D \end{pmatrix},$$

where the
[Figure 3 diagrams: (a) a temporal slice of the network with nodes h(t), v(t), h(t+1), v(t+1); (b) original sequence; (c) recalled sequence.]
Figure 3: (a) A temporal slice of the network. (b) The training sequence consists
of a random set of vectors (V = 3) over T = 10 time steps. (c) The reconstruction
using H = 7 hidden units. The initial state v(t = 1) for the recalled sequence was
set to the correct initial training value albeit with one of the values flipped. Note
how the dynamics learned is an attractor for the original sequence.
parameters to learn are the matrices A, B, C, D. A slice of the network is illustrated
in fig(3)[a]. We can easily iterate the hidden states in this case to give
$$h(t+1) = Ah(t) + Bv(t) = A^t h(1) + \sum_{t'=0}^{t-1} A^{t'} B\, v(t-t')$$
which demonstrates how the hidden state depends on the full past history of the
observations. We trained the network using 3 visible units and 7 hidden units to
maximise the likelihood of the binary sequence in fig(3)[b]. Note that this sequence
contains repeated patterns and therefore could not be recalled perfectly with a
model which does not contain hidden units. We tested if the learned model had
captured the dynamics of the training sequence by initialising the network in the
first visible state in the training sequence, but with one of the values flipped. The
network then generated the following hidden and visible states recursively, as plotted
in fig(3)[c]. The learned network is an attractor with the training sequence as
a stable point, demonstrating that such models are capable of learning attractor
recurrent networks more powerful than those without hidden units. Learning is
very fast in such networks, and we have successfully applied these models to cases
of several hundred hidden and visible unit dimensions.
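The unrolled form of the recursion $h(t+1) = Ah(t) + Bv(t)$ given above can be checked numerically. The sketch below is our own illustration, with $A$ and $B$ reduced to scalars $a$ and $b$:

```python
def hidden_states(a, b, h1, vs):
    """Iterate h(t+1) = a*h(t) + b*v(t); vs[k] holds v(k+1), hs[k] holds h(k+1)."""
    hs = [h1]
    for v in vs:
        hs.append(a * hs[-1] + b * v)
    return hs

def closed_form(a, b, h1, vs, t):
    """h(t+1) = a^t h(1) + sum_{t'=0}^{t-1} a^{t'} b v(t - t')."""
    return a ** t * h1 + sum(a ** tp * b * vs[t - 1 - tp] for tp in range(t))

vs = [1.0, -0.5, 2.0, 0.25]          # v(1)..v(4)
hs = hidden_states(0.7, 1.3, 0.2, vs)
for t in range(1, len(vs) + 1):
    assert abs(hs[t] - closed_form(0.7, 1.3, 0.2, vs, t)) < 1e-12
print(hs)
```

Every term of the unrolled sum involves an earlier observation, which is the sense in which the hidden state depends on the full visible history.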
3.1 Recall Capacity
What effect do the hidden units have on the ability of Hopfield networks to recall
sequences? By recall, we mean that a training sequence is correctly generated by
the network given that only the initial state of the training sequence is presented to
the trained network. For the analysis here, we will consider the retrieval dynamics
to be completely deterministic, thus if we concatenate both hidden h(t) and visible
variables v(t) into the vector x(t) and consider the deterministic hidden function
$f(y) \equiv \mathrm{thresh}(y)$, which is 1 if $y > 0$ and zero otherwise, then

$$x_i(t+1) = \mathrm{thresh}\Big(\sum_j M_{ij} x_j(t)\Big). \tag{8}$$
Here $M_{ij}$ are the elements of the weight matrix representing the transitions from time $t$ to time $t+1$. A desired sequence $\tilde{x}(1), \ldots, \tilde{x}(T)$ can be recalled correctly if we can find a matrix $M$ and real numbers $\epsilon_i(t)$ such that

$$M\,[\tilde{x}(1), \ldots, \tilde{x}(T-1)] = [\epsilon(2), \ldots, \epsilon(T)]$$

where the $\epsilon_i(t)$ are arbitrary real numbers for which $\mathrm{thresh}(\epsilon_i(t)) = \tilde{x}_i(t)$. This system of linear equations can be solved if the matrix $[\tilde{x}(1), \ldots, \tilde{x}(T-1)]$ has rank $T-1$. The use of hidden units therefore increases the length of temporal sequences
that we can store by forming, during learning, appropriate hidden representations $h(t)$ such that the vectors

$$\begin{pmatrix} h(2) \\ v(2) \end{pmatrix}, \ldots, \begin{pmatrix} h(T) \\ v(T) \end{pmatrix}$$

form a linearly independent set. Such vectors are clearly possible to generate if the matrix $U$ is full rank. Thus recall can be achieved if $(V + H) \geq T - 1$.
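The rank argument can be verified numerically. In the sketch below (illustrative code with made-up patterns), a visible sequence containing a repeated pattern is rank deficient, while stacking each pattern with a distinct hidden representation, as in the vectors [h(t); v(t)] above, restores the rank needed for recall:

```python
def matrix_rank(mat, tol=1e-9):
    """Rank via Gaussian elimination (rows of floats); enough for small examples."""
    m = [row[:] for row in mat]
    rank, rows, cols = 0, len(m), len(m[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if abs(m[r][c]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rows):
            if r != rank and abs(m[r][c]) > tol:
                factor = m[r][c] / m[rank][c]
                m[r] = [x - factor * y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

# T = 4, so recall needs the T-1 = 3 stored vectors to be linearly independent.
v = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]      # v(3) repeats v(1): dependent
h = [[0.0], [1.0], [0.5]]                     # distinct hidden representations
stacked = [hi + vi for hi, vi in zip(h, v)]   # vectors [h(t); v(t)]

print(matrix_rank(v))        # 2: visible-only vectors cannot satisfy rank T-1
print(matrix_rank(stacked))  # 3: full rank, so a suitable M exists
```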
The reader might consider forming, from a set of linearly dependent patterns $v(1), \ldots, v(T)$, a linearly independent set by injecting the patterns into a higher dimensional space, $v(t) \to \tilde{v}(t)$, using a non-linear mapping. This would appear to dispense with the need to use hidden units. However, if the same pattern in the training set is repeated at different times in the sequence (as in fig(3)[b]), no matter how complex this non-linear mapping, the resulting vectors $\tilde{v}(1), \ldots, \tilde{v}(T)$ will be linearly dependent. This demonstrates that hidden units not only solve the
linear dependence problem for non-repeated patterns, they also solve it for repeated
patterns. They are therefore capable of sequence disambiguation since the hidden
unit representations formed are dependent on the full history of the visible units.
4 A Continuous Visible Illustration
To illustrate the use of the framework for continuous visible variables, we consider the simple Gaussian visible CPT model

$$p(v(t+1)|v(t), h(t)) = \frac{1}{(2\pi\sigma^2)^{V/2}} \exp\left(-\frac{1}{2\sigma^2}\big[v(t+1) - g(Ah(t) - Bv(t))\big]^2\right), \qquad h(t+1) = f(Ch(t) + Dv(t)) \tag{9}$$
where the functions f and g are in general non-linear functions of their arguments.
In the case that f (x) ? x, and g(x) ? x this model is a special case of the
Kalman filter[5]. Training of these models by learning A, B, C, D (? 2 was set to
0.02 throughout) is straightforward using the forward error propagation techniques
outlined earlier in section (2).
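The generative process of eq. (9) can be sketched in a few lines (NumPy; the parameter scales, the tanh hidden non-linearity, and the linear g are illustrative assumptions, not settings from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

V, H, T = 2, 3, 40
sigma2 = 0.02                      # output variance, fixed as in the text

# Random parameters for illustration (in the paper they are learned).
A = 0.3 * rng.standard_normal((V, H))
B = 0.3 * rng.standard_normal((V, V))
C = 0.3 * rng.standard_normal((H, H))
D = 0.3 * rng.standard_normal((H, V))

f = np.tanh              # hidden non-linearity (an illustrative choice)
def g(x):                # linear output: the Kalman-filter-like case
    return x

def simulate(T, rng):
    v = np.zeros(V)
    h = np.zeros(H)
    vs = [v]
    for t in range(T - 1):
        # Visible CPT of eq. (9): Gaussian around g(Ah(t) - Bv(t)).
        mean = g(A @ h - B @ v)
        v_next = mean + np.sqrt(sigma2) * rng.standard_normal(V)
        # Deterministic hidden transition h(t+1) = f(Ch(t) + Dv(t)).
        h = f(C @ h + D @ v)
        v = v_next
        vs.append(v)
    return np.stack(vs, axis=1)

seq = simulate(T, rng)
```

Because the hidden transition is deterministic, sampling (and, symmetrically, inference) requires only a single forward pass through the sequence.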
4.1 Classifying Japanese vowels
This UCI machine learning test problem consists of a set of multi-dimensional times
series. Nine speakers uttered two Japanese vowels /ae/ successively to form discrete
time series with 12 LPC cepstral coefficients. Each utterance forms a time series
V whose length is in the range T = 7 to T = 29 and each vector v(t) of the time
series contains 12 cepstral coefficients. The training data consists of 30 training
utterances for each of the 9 speakers. The test data contains 370 time series, each
uttered by one of the nine speakers. The task is to assign each of the test utterances
to the correct speaker.
We used the special settings f(x) ≡ x and g(x) ≡ x to see if such a simple network
would be able to perform well. We split the training data into a 2/3 train and
a 1/3 validation part, then training a set of 10 models for each of the 9 speakers,
with hidden unit dimensions taking the values H = 1, 2, . . . , 10 and using 20 training
iterations of conjugate gradient learning[1]. For simplicity, we used the same number
of hidden units for each of the nine speaker models. To classify a test utterance,
we chose the speaker model which had the highest likelihood of generating the test
utterance, using an error of 0 if the utterance was assigned to the correct speaker
and an error of 1 otherwise. The errors on the validation set for these 10 models
Figure 4: (Left) Five sequences from the model v(t) = sin(2(t − 1) + ε_1(t)) + 0.1 ε_2(t).
(Right) Five sequences from the model v(t) = sin(5(t − 1) + ε_3(t)) + 0.1 ε_4(t), where
the ε_i(t) are zero mean unit variance Gaussian noise samples. These were combined
to form a training set of 10 unlabelled sequences. We performed unsupervised
learning by fitting a two-component mixture model. The posterior probabilities
p(i = 1|V^μ) of the 5 sequences on the left belonging to class 1 are (from above)
0.99, 0.99, 0.83, 0.99, 0.96, and for the 5 sequences on the right belonging to class
2 are (from above) 0.95, 0.99, 0.97, 0.97, 0.95, in accord with the data generating
process.
were 6, 6, 3, 5, 5, 5, 4, 5, 6, 3. Based on these validation results, we retrained a model
with H = 3 hidden units on all available training data. On the final independent
test set, the model achieved an accuracy of 97.3%. This compares favourably with
the 96.2% reported for training using a continuous-output HMM with 5 (discrete)
hidden states [8]. Although our model is not powerful enough to reconstruct
the training data, it does learn sufficient information to make
reliable classifications. This problem serves to illustrate that such simple models can
perform well. An interesting alternative training method not explored here would
be to use discriminative learning [7]. Also not explored here is the possibility of
using Bayesian methods to set the number of hidden dimensions.
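The likelihood-based classification used above can be sketched as follows (a hedged sketch assuming NumPy and the linear f, g case of the experiment; the model parameters and the helper name classify_speaker are illustrative). Since the hidden transition is deterministic, the sequence log-likelihood is simply a sum of Gaussian terms:

```python
import numpy as np

def sequence_loglik(Vseq, A, B, C, D, sigma2=0.02):
    # Log-likelihood of a (V x T) sequence under the model of eq. (9)
    # with linear f and g. The hidden state h(t) is a deterministic
    # function of the visible history, so no marginalisation is needed.
    Vdim, T = Vseq.shape
    h = np.zeros(A.shape[1])
    ll = 0.0
    for t in range(T - 1):
        v, v_next = Vseq[:, t], Vseq[:, t + 1]
        resid = v_next - (A @ h - B @ v)        # mean from eq. (9)
        ll += -0.5 * (resid @ resid) / sigma2 \
              - 0.5 * Vdim * np.log(2 * np.pi * sigma2)
        h = C @ h + D @ v                       # deterministic update
    return ll

def classify_speaker(Vseq, models):
    # Assign the utterance to the model with the highest likelihood,
    # as in the vowel experiment.
    scores = [sequence_loglik(Vseq, *m) for m in models]
    return int(np.argmax(scores))
```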
5 Mixture Models
Since our models are probabilistic, we can apply standard statistical generalisations
to them, including using them as part of an M component mixture model

    p(V|Θ) = Σ_{i=1..M} p(V|Θ_i, i) p(i)        (10)

where p(i) denotes the prior mixing coefficients for model i, and each time series
component model is represented by p(V|Θ_i, i). Training mixture models by
maximum likelihood on a set of sequences V^1, ..., V^P is straightforward using the
standard EM recursions [1]:

    p^new(i) = [ Σ_{μ=1..P} p(V^μ|i, Θ_i^old) p^old(i) ] / [ Σ_{i=1..M} Σ_{μ=1..P} p(V^μ|i, Θ_i^old) p^old(i) ]        (11)

    Θ_i^new = arg max_{Θ_i} Σ_{μ=1..P} p(V^μ|i, Θ_i^old) log p(V^μ|i, Θ_i)        (12)
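The E-step responsibilities and the mixing-coefficient update can be sketched numerically (NumPy; this uses the standard per-sequence-normalised form of the mixing update, and the toy log-likelihoods are invented for illustration):

```python
import numpy as np

def em_responsibilities(loglik, prior):
    # E-step: p(i | V^mu) for each sequence mu, from
    # loglik[mu, i] = log p(V^mu | i, Theta_i^old) and prior[i] = p^old(i).
    logpost = loglik + np.log(prior)               # unnormalised, in logs
    logpost -= logpost.max(axis=1, keepdims=True)  # numerical stability
    post = np.exp(logpost)
    return post / post.sum(axis=1, keepdims=True)

def update_priors(resp):
    # Mixing-coefficient update: average responsibility per component.
    return resp.mean(axis=0)

# Toy example: sequences 0-4 fit model 0 well, sequences 5-9 fit model 1.
loglik = np.full((10, 2), -50.0)
loglik[:5, 0] = -1.0
loglik[5:, 1] = -1.0
resp = em_responsibilities(loglik, np.array([0.5, 0.5]))
new_prior = update_priors(resp)
```

On this toy data the responsibilities separate the two groups sharply and the updated priors come out roughly equal, mirroring the 0.49/0.51 split reported in the experiment below.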
To illustrate this on a simple example, we trained a mixture model with component
models of the form described in section (4). The data is a set of 10 one-dimensional
(V = 1) time series, each of length T = 40. Two distinct models were used
to generate the 10 training sequences, see fig(4). We fitted a two-component mixture
model using mixture components of the form (9) (with linear functions f and g),
each model having H = 3 hidden units. After training, the model priors were found
to be roughly equal (0.49, 0.51), and it was satisfying to find that the separation of the
unlabelled training sequences is entirely consistent with the data generation process, see fig(4). An interesting observation is that, whilst the true data generating
process is governed by effectively stochastic hidden transitions, the deterministic
hidden model still performs admirably.
6 Discussion
We have considered a class of models for temporal sequence processing which are
a specially constrained version of Dynamic Bayesian Networks. The constraint was
chosen to ensure that inference would be trivial even in high dimensional continuous hidden/latent spaces. Highly complex dynamics may therefore be postulated
for the hidden space transitions, and also for the hidden to the visible transitions.
However, unlike traditional neural networks the models remain probabilistic (generative models), and hence the full machinery of Bayesian inference is applicable to
this class of models. Indeed, whilst not explored here, model selection issues, such
as assessing the relevant hidden unit dimension, are greatly facilitated in this class
of models. The potential use of this class of such models is therefore widespread.
An area we are currently investigating is using these models for fast inference and
learning in Independent Component Analysis and related areas. In the case that
the hidden unit dynamics is known to be highly stochastic, this class of models is
arguably less appropriate. However, stochastic hidden dynamics is often used in
cases where one believes that the true hidden dynamics is too complex to model
effectively (or, rather, deal with computationally) and one uses noise to "cover" for
the lack of complexity in the assumed hidden dynamics. The models outlined here
provide an alternative in the case that a potentially complex hidden dynamics form
can be assumed, and may also still provide a reasonable solution even in cases where
the underlying hidden dynamics is stochastic. This class of models is therefore a
potential route to computationally tractable, yet powerful time series models.
References
[1] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[2] H.A. Bourlard and N. Morgan, Connectionist Speech Recognition. A Hybrid Approach., Kluwer, 1994.
[3] A. Doucet, N. de Freitas, and N. J. Gordon, Sequential Monte Carlo Methods in
Practice, Springer, 2001.
[4] J. Hertz, A. Krogh, and R. Palmer, Introduction to the theory of neural computation.,
Addison-Wesley, 1991.
[5] M. I. Jordan, Learning in Graphical Models, MIT Press, 1998.
[6] J.F. Kolen and S.C. Kramer, Dynamic Recurrent Networks, IEEE Press, 2001.
[7] A. Krogh and S.K. Riis, Hidden Neural Networks, Neural Computation 11 (1999),
541-563.
[8] M. Kudo, J. Toyama, and M. Shimbo, Multidimensional Curve Classification Using
Passing-Through Regions, Pattern Recognition Letters 20 (1999), no. 11-13, 1103-1111.
[9] L.R. Rabiner and B.H. Juang, An introduction to hidden Markov models, IEEE Transactions on Acoustics, Speech, and Signal Processing 3 (1986), no. 1, 4-16.
[10] M. West and J. Harrison, Bayesian forecasting and dynamic models, Springer, 1999.
Adaptive Quantization and Density
Estimation in Silicon
David Hsu
Seth Bridges
Miguel Figueroa
Chris Diorio
Department of Computer Science and Engineering
University of Washington
114 Sieg Hall, Box 352350
Seattle, WA 98195-2350 USA
{hsud, seth, miguel, diorio}@cs.washington.edu
Abstract
We present the bump mixture model, a statistical model for analog
data where the probabilistic semantics, inference, and learning
rules derive from low-level transistor behavior. The bump mixture
model relies on translinear circuits to perform probabilistic inference, and floating-gate devices to perform adaptation. This system
is low power, asynchronous, and fully parallel, and supports various on-chip learning algorithms. In addition, the mixture model can
perform several tasks such as probability estimation, vector quantization, classification, and clustering. We tested a fabricated system
on clustering, quantization, and classification of handwritten digits
and show performance comparable to the E-M algorithm on mixtures of Gaussians.
1 Introduction
Many system-on-a-chip applications, such as data compression and signal processing, use online adaptation to improve or tune performance. These applications can
benefit from the low-power compact design that analog VLSI learning systems can
offer. Analog VLSI learning systems can benefit immensely from flexible learning
algorithms that take advantage of silicon device physics for compact layout, and that
are capable of a variety of learning tasks. One learning paradigm that encompasses a
wide variety of learning tasks is density estimation, learning the probability
distribution over the input data. A silicon density estimator can provide a basic
template for VLSI systems for feature extraction, classification, adaptive vector
quantization, and more.
In this paper, we describe the bump mixture model, a statistical model that describes
the probability distribution function of analog variables using low-level transistor
equations. We intend the bump mixture model to be the silicon version of mixture of
Gaussians [1], one of the most widely used statistical methods for modeling the
probability distribution of a collection of data. Mixtures of Gaussians appear in
many contexts from radial basis functions [1] to hidden Markov models [2]. In the
bump mixture model, probability computations derive from translinear circuits [3]
and learning derives from floating-gate device equations [4]. The bump mixture
model can perform different functions such as quantization, probability estimation,
and classification. In addition this VLSI mixture model can implement multiple
learning algorithms using different peripheral circuitry. Because the equations for
system operation and learning derive from natural transistor behavior, we can build
large bump mixture model with millions of parameters on a single chip. We have
fabricated a bump mixture model, and tested it on clustering, classification, and vector quantization of handwritten digits. The results show that the fabricated system
performs comparably to mixtures of Gaussians trained with the E-M algorithm [1].
Our work builds upon several trends of research in the VLSI community. The results
in this paper are complement recent work on probability propagation in analog VLSI
[5-7]. These previous systems, intended for decoding applications in communication
systems, model special forms of probability distributions over discrete variables,
and do not incorporate learning. In contrast, the bump mixture model performs inference and learning on probability distributions over continuous variables. The
bump mixture model significantly extends previous results on floating-gate circuits
[4]. Our system is a fully realized floating-gate learning algorithm that can be used
for vector quantization, probability estimation, clustering, and classification. Finally, the mixture model's architecture is similar to many previous VLSI vector
quantizers [8, 9]. We can view the bump mixture model as a VLSI vector quantizer
with well-defined probabilistic semantics. Computations such as probability estimation and maximum-likelihood classification have a natural statistical interpretation
under the mixture model. In addition, because we rely on floating-gate devices, the
mixture model does not require a refresh mechanism unlike previous learning VLSI
quantizers.
2 The adaptive bump circuit
The adaptive bump circuit [4], depicted in Fig.1(a-b), forms the basis of the bump
mixture model. This circuit is slightly different from previous versions reported in
the literature. Nevertheless, the high level functionality remains the same; the adaptive bump circuit computes the similarity between a stored variable and an input,
and adapts to increase the similarity between the stored variable and input.
Fig.1(a) shows the computation portion of the circuit. The bump circuit takes as
input, a differential voltage signal (+Vin, −Vin) around a DC bias, and computes the
similarity between Vin and a stored value, μ. We represent the stored memory μ as a
voltage:

    μ = (Vw− − Vw+) / 2        (1)
where Vw+ and Vw− are the gate-offset voltages stored on capacitors C1 and C2. Because C1 and C2 isolate the gates of transistors M1 and M2 respectively, these transistors are floating-gate devices. Consequently, the stored voltages Vw+ and Vw− are
nonvolatile. We can express the floating-gate voltages Vfg1 and Vfg2 as
Vfg1 = Vin + Vw+ and Vfg2 = Vw− − Vin, and the output of the bump circuit as [10]:

    Iout = Ib / cosh²( (4κ/(S Ut)) (Vfg1 − Vfg2) )
         = Ib / cosh²( (8κ/(S Ut)) (Vin − μ) )        (2)

where Ib is the bias current, κ is the gate-coupling coefficient, Ut is the thermal voltage, and S depends on the transistor sizes. Fig.1(c) shows Iout for three different
stored values of μ. As the data show, different μ's shift the location of the peak response of the circuit.
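Eq. (2) is easy to evaluate in software. The sketch below (NumPy) uses typical subthreshold values for κ, S, and Ut as illustrative assumptions, not measurements from the fabricated circuit:

```python
import numpy as np

def bump_current(Vin, mu, Ib=1.0, kappa=0.7, S=1.0, Ut=0.0258):
    # Bump transfer function of eq. (2): a cosh^-2 of (Vin - mu).
    # kappa, S, Ut are typical subthreshold numbers, chosen for
    # illustration only.
    return Ib / np.cosh((8 * kappa / (S * Ut)) * (Vin - mu)) ** 2

# Sweep the input around a programmed memory mu, as in Fig. 1.
Vin = np.linspace(-0.4, 0.4, 801)
I = bump_current(Vin, mu=0.1)
```

The output peaks at Ib when Vin = μ and falls off symmetrically, which is exactly the similarity behaviour described above: reprogramming μ slides the peak along the input axis.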
Figure 1. (a-b) The adaptive bump
circuit. (a) The original bump circuit augmented by capacitors C1
and C2, and cascode transistors
(driven by Vcasc). (b) The adaptation subcircuit. M3 and M4 control
injection on the floating-gates and
M5 and M6 control tunneling. (c)
Measured output current of a bump
circuit for three programmed
memories.
Fig.1(b) shows the circuit that implements learning in the adaptive bump circuit. We
implement learning through Fowler-Nordheim tunneling [11] on tunneling junctions
M5-M6 and hot-electron injection [12] on the floating-gate transistors M3-M4. Transistors M3 and M5 control injection and tunneling on M1's floating-gate. Transistors
M4 and M6 control injection and tunneling on M2's floating-gate. We activate tunneling and injection with a high Vtun and a low Vinj respectively. In the adaptive bump
circuit, both processes increase the similarity between Vin and μ. In addition, the
magnitude of the update does not depend on the sign of (Vin − μ) because the differential input provides common-mode rejection to the input differential pair.
The similarity function, as seen in Fig.1(c), has a Gaussian-like shape. Consequently, we can equate the output current of the bump circuit with the probability of
the input under a distribution parameterized by mean μ:

    P(Vin | μ) = Iout        (3)

In addition, increasing the similarity between Vin and μ is equivalent to increasing
P(Vin | μ). Consequently, the adaptive bump circuit adapts to maximize the likelihood
of the present input under the circuit's probability distribution.
3 The bump mixture model
We now describe the computations and learning rule implemented by the bump mixture model. A mixture model is a general class of statistical models that approximates the probability of an analog input as the weighted sum of the probabilities of the
input under several simple distributions. The bump mixture model comprises a set
of Gaussian-like probability density functions, each parameterized by a mean vector, μi. Denoting the jth dimension of the mean of the ith density as μij, we express
the probability of an input vector x as:

    P(x) = (1/N) Σ_i P(x | i) = (1/N) Σ_i ( Π_j P(xj | μij) )        (4)

where N is the number of densities in the model and i denotes the ith density. P(x|i)
is the product of one-dimensional densities P(xj|μij) that depend on the jth dimension
of the ith mean, μij. We derive each one-dimensional probability distribution from
the output current of a single bump circuit. The bump mixture model makes two
assumptions: (1) the component densities are equally likely, and (2) within each
component density, the input dimensions are independent and have equal variance.
Despite these restrictions, this mixture model can, in principle, approximate any
probability density function [1].
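A software sketch of eq. (4) (NumPy; the bump width and the component means are illustrative stand-ins for the circuit quantities):

```python
import numpy as np

def bump_1d(x, mu, width=10.0):
    # One-dimensional bump density, shaped like the circuit output of
    # eqs. (2)-(3); 'width' stands in for 8*kappa/(S*Ut).
    return 1.0 / np.cosh(width * (x - mu)) ** 2

def mixture_prob(x, mus):
    # Eq. (4): P(x) = (1/N) sum_i prod_j P(x_j | mu_ij).
    Pxi = np.prod(bump_1d(x[None, :], mus), axis=1)   # P(x | i) per row
    return Pxi.mean(), Pxi

mus = np.array([[0.0, 0.0], [1.0, 1.0]])   # two 2-D component means
Px, Pxi = mixture_prob(np.array([0.0, 0.0]), mus)
```

An input sitting on a component mean scores the maximum under that component and contributes most of the total P(x), matching the row-product, equal-weight structure of the matrix described in the next section.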
The bump mixture model adapts all μi to maximize the likelihood of the training
data. Learning in the bump mixture model is based on the E-M algorithm, the standard algorithm for training Gaussian mixture models. The E-M algorithm comprises
two steps. The E-step computes the conditional probability of each density given the
input, P(i|x). The M-step updates the parameters of each distribution to increase the
likelihood of the data, using P(i|x) to scale the magnitude of each parameter update.
In the online setting, the learning rule is:

    Δμij = η P(i | x) ∂log P(xj | μij)/∂μij
         = η [ P(x | i) / Σ_k P(x | k) ] ∂log P(xj | μij)/∂μij        (5)

where η is a learning rate and k denotes component densities. Because the adaptive
bump circuit already adapts to increase the likelihood of the present input, we approximate E-M by modulating injection and tunneling in the adaptive bump circuit
by the conditional probability:

    Δμij = η P(i | x) f(xj − μij)        (6)

where f() is the parameter update implemented by the bump circuit. We can modulate the learning update in (6) with other competitive factors instead of the conditional probability to implement a variety of learning rules such as online K-means.
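One online step of eq. (6) can be sketched as follows (NumPy; modelling the circuit's adaptation f() as the plain difference x − μ is an assumption made for illustration, and the toy likelihoods are invented):

```python
import numpy as np

def conditional(Pxi):
    # P(i | x) from the component likelihoods, as produced by the
    # normalising differential pair of Fig. 3(b).
    return Pxi / Pxi.sum()

def update_means(mus, x, Pxi, eta=0.1):
    # Online update of eq. (6): each mean moves toward the input,
    # scaled by its responsibility P(i | x). Here f(x - mu) is modelled
    # simply as (x - mu); the real circuit realises a related monotone
    # update via injection and tunneling.
    resp = conditional(Pxi)
    return mus + eta * resp[:, None] * (x[None, :] - mus)

mus = np.array([[0.0, 0.0], [1.0, 1.0]])
x = np.array([0.1, 0.0])
Pxi = np.array([0.9, 0.1])       # toy component likelihoods P(x | i)
new_mus = update_means(mus, x, Pxi)
```

Replacing the soft responsibilities with a hard winner-take-all indicator turns the same update into online K-means, the variant actually used in the chip experiments below.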
4 Silicon implementation
We now describe a VLSI system that implements the silicon mixture model. The
high-level organization of the system, detailed in Fig.2, is similar to VLSI vector
quantization systems. The heart of the mixture model is a matrix of adaptive bump
circuits where the ith row of bump circuits corresponds to the ith component density.
In addition, the periphery of the matrix comprises a set of inhibitory circuits for performing probability estimation, inference, quantization, and generating feedback for
learning.
We send each dimension of an input x down a single column. Unity-gain inverting
amplifiers (not pictured) at the boundary of the matrix convert each single-ended
voltage input into a differential signal. Each bump circuit computes a current that
represents (P(xj|μij))^σ, where σ is the common variance of the one-dimensional densities. The mixture model computes P(x|i) along the ith row and inhibitory circuits
perform inference, estimation, or quantization. We utilize translinear devices [3] to
perform all of these computations. Translinear devices, such as the subthreshold
MOSFET and bipolar transistor, exhibit an exponential relationship between the
gate-voltage and source current. This property allows us to establish a power-law
relationship between currents and probabilities (i.e. a linear relationship between
gate voltages and log-probabilities).
Figure 2. Bump mixture
model architecture. The
system comprises a matrix of adaptive bump
circuits where each row
computes the probability
P(x|μi). Inhibitory circuits transform the output of each row into
system outputs. Spike
generators also transform inhibitory circuit
outputs into rate-coded
feedback for learning.
We compute the multiplication of the probabilities in each row of Fig.2 as addition
in the log domain using the circuit in Fig.3(a). This circuit first converts each bump
circuit's current into a voltage using a diode (e.g. M1). M2's capacitive divider computes Vavg as the average of the scalar log probabilities, log P(xj|μij):

    Vavg = (σ/N) Σ_j log P(xj | μij)        (7)

where σ is the variance, N is the number of input dimensions, and voltages are in
units of κ/Ut (Ut is the thermal voltage and κ is the transistor-gate coupling coefficient). Transistors M2-M5 mirror Vavg to the gate of M5. We define the drain voltage
of M5 as log P(x|i) (up to an additive constant) and compute:

    log( P(x | i) ) = ((C1 + C2)/C1) Vavg = ((C1 + C2)σ / (C1 N)) Σ_j log P(xj | μij) + k        (8)

where k is a constant dependent on Vg (the control gate voltage on M5), and C1 and
C2 are capacitances. From eq. 8 we can derive the variance as:

    σ = N C1 / (C1 + C2)        (9)
The system computes different output functions and feedback signals for learning by
operating on the log probabilities of eq.8. Fig.3(b) demonstrates a circuit that computes P(i|x) for each distribution. The circuit is a k-input differential pair where the
bias transistor M0 normalizes currents representing the probabilities P(x|i) at the ith
leg. Fig.3(c) demonstrates a circuit that computes P(x). The ith transistor exponentiates logP(x|i), and a single wire sums the currents. We can also apply other inhibitory circuits to the log probabilities such as winner-take-all circuits (WTA) [13] and
resistive networks [14]. In our fabricated chip, we implemented probability estimation, conditional probability computation, and WTA. The WTA outputs the index of
the most likely component distribution for the present input, and can be used to implement vector quantization and to produce feedback for an online K-means learning rule.
At each synapse, the system combines a feedback signal, such as the conditional
probability P(i|x), computed at the matrix periphery, with the adaptive bump circuit
to implement learning. We trigger adaptation at each bump circuit by a rate-coded
spike signal generated from the inhibitory circuit's current outputs. We generate this
spike train with a current-to-spike converter based on Lazzaro's low-powered spiking neuron [15]. This rate-coded signal toggles Vtun and Vinj at each bump circuit.
Consequently, adaptation is proportional to the frequency of the spike train, which
is in turn a linear function of the inhibitory feedback signal. The alternative to the
rate code would be to transform the inhibitory circuit's output directly into analog
Figure 3. (a) Circuit for computing logP(x|i). (b) Circuit for computing P(i|x). The
current through the ith leg represents P(i|x). (c) Circuit for computing P(x).
Vtun and Vinj signals. Because injection and tunneling are highly nonlinear functions
of Vinj and Vtun respectively, implementing updates that are linear in the inhibitory
feedback signal is quite difficult using this approach.
5 Experimental Results and Conclusions
We fabricated an 8 x 8 mixture model (8 probability distribution functions with 8
dimensions each) in a TSMC 0.35 µm CMOS process available through MOSIS, and
tested the chip on synthetic data and a handwritten digits dataset. In our tests, we
found that due to a design error, one of the input dimensions coupled to the other
inputs. Consequently, we held that input fixed throughout the tests, effectively reducing the input to 7 dimensions. In addition, we found that the learning rule in eq.6
produced poor performance because the variance of the bump distributions was too
large. Consequently, in our learning experiments, we used the hard winner-take-all
circuit to control adaptation, resulting in a K-means learning rule. We trained the
chip to perform different tasks on handwritten digits from the MNIST dataset [16].
To prepare the data, we first perform PCA to reduce the 784-pixel images to seven-dimensional vectors, and then send the data on-chip.
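The PCA preprocessing can be sketched as follows (NumPy; random data stands in for the MNIST images, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 784-pixel images: 200 random "digits".
X = rng.standard_normal((200, 784))

# PCA via SVD of the centred data; keep the top 7 components, as in
# the chip experiments.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:7]                       # 7 principal directions, (7, 784)
Z = Xc @ W.T                     # seven-dimensional codes sent on-chip
recon = Z @ W + X.mean(axis=0)   # reconstruction as used for Fig. 4(a)
```

Multiplying the learned means by the same seven eigenvectors is what produces the reconstructed digit images shown in Fig. 4(a).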
We first tested the circuit on clustering handwritten digits. We trained the chip on
1000 examples of each of the digits 1-8. Fig.4(a) shows reconstructions of the eight
means before and after training. We compute each reconstruction by multiplying the
means by the seven principal eigenvectors of the dataset. The data shows that the
means diverge to associate with different digits. The chip learns to associate most
digits with a single probability distribution. The lone exception is digit 5, which
doesn't clearly associate with one distribution. We speculate that the reason is that
3's, 5's, and 8's are very similar in our training data's seven-dimensional representation. Gaussian mixture models trained with the E-M algorithm also demonstrate
similar results, recovering only seven out of the eight digits.
We next evaluated the same learned means on vector quantization of a set of test
digits (4400 examples of each digit). We compare the chip's learned means with
means learned by the batch E-M algorithm on mixtures of Gaussians (with σ=0.01),
a mismatch E-M algorithm that models chip nonidealities, and a non-adaptive baseline quantizer. The purpose of the mismatch E-M algorithm was to assess the effect
of nonuniform injection and tunneling strengths in floating-gate transistors. Because
tunneling and injection magnitudes can vary by a large amount on different floating-gate transistors, the adaptive bump circuits can learn a mean that is somewhat off-center. We measured the offset of each bump circuit when adapting to a constant
input and constructed the mismatch E-M algorithm by altering the learned means
during the M-step by the measured offset. We constructed the baseline quantizer by
selecting, at random, an example of each digit for the quantizer codebook. For each
quantizer, we computed the reconstruction error on the digit's seven-dimensional
Figure 4. (a) Reconstruction of chip means before and after training with
handwritten digits. (b) Comparison of average quantization error on unseen
handwritten digits, for the chip's learned means and mixture models
trained by standard algorithms. (c) Plot of probability of unseen examples of 7's
and 9's under two bump mixture models trained solely on each digit.
representation when we represent each test digit by the closest mean. The results in
Fig.4(b) show that for most of the digits the chip's learned means perform as well as
the E-M algorithm, and better than the baseline quantizer in all cases. The one digit
where the chip's performance is far from the E-M algorithm's is the digit "1". Upon
examination of the E-M algorithm's results, we found that it associated two means
with the digit "1", where the chip allocated two means for the digit "3". Over all the
digits, the E-M algorithm exhibited a quantization error of 9.98, mismatch E-M
gives a quantization error of 10.9, the chip's error was 11.6, and the baseline quantizer's error was 15.97. The data show that mismatch is a significant factor in the
difference between the bump mixture model's performance and the E-M algorithm's
performance in quantization tasks.
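The evaluation just described (replace each test vector by its nearest codebook mean and average the squared error) is straightforward to reproduce in software. The sketch below is a minimal illustration with hypothetical toy vectors, not the chip's actual means or the handwritten-digit data.

```python
def quantization_error(codebook, data):
    """Average squared error when each vector is replaced by its nearest codeword."""
    total = 0.0
    for x in data:
        # Squared Euclidean distance from x to every codeword; keep the smallest.
        dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in codebook]
        total += min(dists)
    return total / len(data)

# Toy example: two 2-D codewords and four held-out vectors (hypothetical values).
codebook = [(0.0, 0.0), (1.0, 1.0)]
data = [(0.1, 0.0), (0.0, 0.2), (0.9, 1.0), (1.1, 1.0)]
err = quantization_error(codebook, data)
```

A learned codebook can be dropped in place of the toy one to compute per-digit errors of the kind compared in Fig.4(b).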
Finally, we use the mixture model to classify handwritten digits. If we train a separate mixture model for each class of data, we can classify an input by comparing the
probabilities of the input under each model. In our experiment, we train two separate mixture models: one on examples of the digit 7, and the other on examples of
the digit 9. We then apply both mixtures to a set of unseen examples of digits 7 and
9, and record the probability score of each unseen example under each mixture
model. We plot the resulting data in Fig.4(c). Each axis represents the probability
under a different class. The data show that the model probabilities provide a good
metric for classification. Assigning each test example to the class model that outputs
the highest probability results in an accuracy of 87% on 2000 unseen digits. Additional software experiments show that mixtures of Gaussians (σ=0.01) trained by
the batch E-M algorithm provide an accuracy of 92.39% on this task.
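The classification rule used here (score each input under both class-conditional mixtures and assign it to the class whose model gives higher probability) can be sketched as follows. The mixture parameters are hypothetical one-dimensional stand-ins, not the trained models for 7s and 9s.

```python
import math

def gmm_logpdf(x, weights, means, sigma):
    """Log-density of a 1-D mixture of Gaussians with a shared sigma."""
    logs = [math.log(w) - 0.5 * ((x - m) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi))
            for w, m in zip(weights, means)]
    m_max = max(logs)  # log-sum-exp for numerical stability
    return m_max + math.log(sum(math.exp(l - m_max) for l in logs))

def classify(x, model_a, model_b):
    """Assign x to the mixture under which it is more probable."""
    return 'A' if gmm_logpdf(x, *model_a) >= gmm_logpdf(x, *model_b) else 'B'

# Hypothetical one-dimensional stand-ins for the two class models.
model_7 = ([0.5, 0.5], [0.0, 1.0], 0.3)
model_9 = ([0.5, 0.5], [3.0, 4.0], 0.3)
label = classify(0.2, model_7, model_9)   # near the first model's means
```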
Our test results show that the bump mixture model's performance on several learning tasks is comparable to standard mixtures of Gaussians trained by E-M. These
experiments give further evidence that floating-gate circuits can be used to build
effective learning systems even though their learning rules derive from silicon physics instead of statistical methods. The bump mixture model also represents a basic
building block that we can use to build more complex silicon probability models
over analog variables. This work can be extended in several ways. We can build
distributions that have parameterized covariances in addition to means. In addition,
we can build more complex, adaptive probability distributions in silicon by combining the bump mixture model with silicon probability models over discrete variables
[5-7] and spike-based floating-gate learning circuits [4].
Acknowledgments
This work was supported by NSF under grants BES 9720353 and ECS 9733425, and
Packard Foundation and Sloan Fellowships.
References
[1]
C. M. Bishop, Neural Networks for Pattern Recognition. Oxford, UK: Clarendon
Press, 1995.
[2]
L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in
speech recognition," Proceedings of the IEEE, vol. 77, pp. 257-286, 1989.
[3]
B. A. Minch, "Analysis, Synthesis, and Implementation of Networks of Multiple-Input Translinear Elements," California Institute of Technology, 1997.
[4]
C.Diorio, D.Hsu, and M.Figueroa, "Adaptive CMOS: from biological inspiration to
systems-on-a-chip," Proceedings of the IEEE, vol. 90, pp. 345-357, 2002.
[5]
T. Gabara, J. Hagenauer, M. Moerz, and R. Yan, "An analog 0.25 μm BiCMOS tail-biting MAP decoder," IEEE International Solid State Circuits Conference (ISSCC),
2000.
[6]
J. Dai, S. Little, C. Winstead, and J. K. Woo, "Analog MAP decoder for (8,4) Hamming code in subthreshold CMOS," Advanced Research in VLSI (ARVLSI), 2001.
[7]
M. Helfenstein, H.-A. Loeliger, F. Lustenberger, and F. Tarkoy, "Probability propagation and decoding in analog VLSI," IEEE Transactions on Information Theory, vol.
47, pp. 837-843, 2001.
[8]
W. C. Fang, B. J. Sheu, O. Chen, and J. Choi, "A VLSI neural processor for image
data compression using self-organization neural networks," IEEE Transactions on
Neural Networks, vol. 3, pp. 506-518, 1992.
[9]
J. Lubkin and G. Cauwenberghs, "A learning parallel analog-to-digital vector quantizer," Journal of Circuits, Systems, and Computers, vol. 8, pp. 604-614, 1998.
[10]
T. Delbruck, "Bump circuits for computing similarity and dissimilarity of analog voltages," California Institute of Technology, CNS Memo 26, 1993.
[11]
M. Lenzlinger, and E. H. Snow, "Fowler-Nordheim tunneling into thermally grown
SiO2," Journal of Applied Physics, vol. 40, pp. 278-283, 1969.
[12]
E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices. San
Diego, CA: Academic Press, 1995.
[13]
J. Lazzaro, S. Ryckebusch, M. Mahowald, and C. A. Mead, "Winner-take-all networks
of O(n) complexity," in Advances in Neural Information Processing, vol. 1, D. Touretzky, Ed.: MIT Press, 1989, pp. 703-711.
[14]
K. Boahen and A. Andreou, "A contrast sensitive silicon retina with reciprocal synapses," in Advances in Neural Information Processing Systems 4, S. H. J. Moody, and
R. Lippmann, Ed.: MIT Press, 1992, pp. 764-772.
[15]
J. Lazzaro, "Low-power silicon spiking neurons and axons," IEEE International Symposium on Circuits and Systems, 1992.
[16]
Y. Lecun, "The MNIST database of handwritten digits,
http://yann_lecun.com/exdb/mnist."
Error Bounds for Transductive Learning via
Compression and Clustering
Philip Derbeko
Ran El-Yaniv
Ron Meir
Technion - Israel Institute of Technology
{philip,rani}@cs.technion.ac.il [email protected]
Abstract
This paper is concerned with transductive learning. Although transduction appears to be an easier task than induction, there have not been many
provably useful algorithms and bounds for transduction. We present explicit error bounds for transduction and derive a general technique for
devising bounds within this setting. The technique is applied to derive
error bounds for compression schemes such as (transductive) SVMs and
for transduction algorithms based on clustering.
1 Introduction and Related Work
In contrast to inductive learning, in the transductive setting the learner is given both the
training and test sets prior to learning. The goal of the learner is to infer (or ?transduce?)
the labels of the test points. The transduction setting was introduced by Vapnik [1, 2] who
proposed basic bounds and an algorithm for this setting. Clearly, inferring the labels of
points in the test set can be done using an inductive scheme. However, as pointed out
in [2], it makes little sense to solve an easier problem by "reducing" it to a much more
difficult one. In particular, the prior knowledge carried by the (unlabeled) test points can
be incorporated into an algorithm, potentially leading to superior performance. Indeed,
a number of papers have demonstrated empirically that transduction can offer substantial
advantage over induction whenever the training set is small or moderate (see e.g. [3, 4,
5, 6]). However, unlike the current state of affairs in induction, the question of what are
provably effective learning principles for transduction is quite far from being resolved.
In this paper we provide new error bounds and a general technique for transductive learning. Our technique is based on bounds that can be viewed as an extension of McAllester's
PAC-Bayesian framework [7, 8] to transductive learning. The main advantage of using this
framework in transduction is that here priors can be selected after observing the unlabeled
data (but before observing the labeled sample). This flexibility allows for the choice of
?compact priors? (with small support) and therefore, for tight bounds. Another simple observation is that the PAC-Bayesian framework can be operated with polynomially (in m, the
training sample size) many different priors simultaneously. Altogether, this added flexibility, of using data-dependent multiple priors allows for easy derivation of tight error bounds
for ?compression schemes? such as (transductive) SVMs and for clustering algorithms.
We briefly review some previous results. The idea of transduction, and a specific algorithm
for SVM transductive learning, was introduced and studied by Vapnik (e.g. [2]), where an
error bound is also proposed. However, this bound is implicit and rather unwieldy and,
to the best of our knowledge, has not been applied in practical situations. A PAC-Bayes
bound [7] for transduction with Perceptron Decision Trees is given in [9]. The bound is
data-dependent, depending on the number of decision nodes, the margins at each node and
the sample size. However, the authors state that the transduction bound is not much tighter
than the induction bound. Empirical tests show that this transduction algorithm performs
slightly better than induction in terms of the test error, however, the advantage is usually
statistically insignificant. Refining the algorithm of [2] a transductive algorithm based on a
SVMs is proposed in [3]. The paper also provides empirical tests indicating that transduction is advantageous in the text categorization domain. An error bound for transduction,
based on the effective VC Dimension, is given in [10]. More recently Lanckriet et al. [11]
derived a transductive bound for kernel methods based on spectral properties of the kernel
matrix. Blum and Langford [12] recently also established an implicit bound for transduction, in the spirit of the results in [2].
2 The Transduction Setup
We consider the following setting proposed by Vapnik ([2] Chp. 8), which for simplicity is
described in the context of binary classification (the general case will be discussed in the
full paper). Let H be a set of binary hypotheses consisting of functions from input space
X to {?1} and let Xm+u = {x1 , . . . , xm+u } be a set of points from X each of which is
chosen i.i.d. according to some unknown distribution ?(x). We call Xm+u the full sample.
Let Xm = {x1 , . . . , xm } and Ym = {y1 , . . . , ym }, where Xm is drawn uniformly from
Xm+u and yi ? {?1}. The set Sm = {(x1 , y1 ), . . . , (xm , ym )} is referred to as a training
sample. In this paper we assume that yi = ?(xi ) for some unknown function ?. The
remaining subset Xu = Xm+u \ Xm is referred to as the unlabeled sample. Based on Sm
and Xu our goal is to choose h ∈ H which predicts the labels of points in Xu as accurately
as possible. For each h ∈ H and a set Z = x_1 , . . . , x_{|Z|} of samples define

    R_h(Z) = \frac{1}{|Z|} \sum_{i=1}^{|Z|} \ell(h(x_i), y_i),    (1)
where in our case ℓ(·, ·) is the zero-one loss function. Our goal in transduction is to learn
an h such that Rh (Xu ) is as small as possible. This problem setup is summarized by the
following transduction "protocol" introduced in [2] and referred to as Setting 1:
(i) A full sample Xm+u = {x1 , . . . , xm+u } consisting of arbitrary m + u points is
given.1
(ii) We then choose uniformly at random the training sample Xm ? Xm+u and receive its labeling Ym ; the resulting training set is Sm = (Xm , Ym ) and the remaining set Xu is the unlabeled sample, Xu = Xm+u \ Xm ;
(iii) Using both Sm and Xu we select a classifier h ? H whose quality is measured by
Rh (Xu ).
Vapnik [2] also considers another formulation of transduction, referred to as Setting 2:
(i) We are given a training set Sm = (Xm , Ym ) selected i.i.d according to ?(x, y).
(ii) An independent test set Su = (Xu , Yu ) of u samples is then selected in the same
manner.
1
The original Setting 1, as proposed by Vapnik, discusses a full sample whose points are chosen
independently at random according to some source distribution ?(x).
(iii) We are required to choose our best h ? H based on Sm and Xu so as to minimize
Z
m+u
1 X
` (h(xi ), yi ) d?(x1 , y1 ) ? ? ? d?(xm+u , ym+u ).
Rm,u (h) =
(2)
u i=m+1
Even though Setting 2 may appear more applicable in practical situations than Setting 1, the
derivation of theoretical results can be easier within Setting 1. Nevertheless, as far as the
expected losses are concerned, Vapnik [2] shows that an error bound in Setting 1 implies
an equivalent bound in Setting 2. In view of this result we restrict ourselves in the sequel
to Setting 1.
We make use of the following quantities, which are all instances of (1). The quantity
Rh (Xm+u ) is called the full sample risk of the hypothesis h, Rh (Xu ) is referred to as
the transduction risk (of h), and Rh (Xm ) is the training error (of h). Thus, Rh (Xm ) is
the standard training error denoted by \hat{R}_h(S_m). While our objective in transduction is to
achieve small error over the unlabeled set (i.e. to minimize Rh (Xu )), it turns out that it is
much easier to derive error bounds for the full sample risk. The following simple lemma
translates an error bound on Rh (Xm+u ), the full sample risk, to an error bound on the
transduction risk Rh (Xu ).
Lemma 2.1 For any h ∈ H and any C,

    R_h(X_{m+u}) \le \hat{R}_h(S_m) + C   \iff   R_h(X_u) \le \hat{R}_h(S_m) + \frac{m+u}{u}\, C.    (3)
Proof: For any h,

    R_h(X_{m+u}) = \frac{m R_h(X_m) + u R_h(X_u)}{m+u}.    (4)

Substituting \hat{R}_h(S_m) for R_h(X_m) in (4) and then substituting the result for the left-hand
side of (3) we get

    R_h(X_{m+u}) = \frac{m \hat{R}_h(S_m) + u R_h(X_u)}{m+u} \le \hat{R}_h(S_m) + C.

The equivalence (3) is now obtained by isolating R_h(X_u) on the left-hand side. □
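Numerically, the translation in Lemma 2.1 just rescales the slack term by (m+u)/u. A minimal sketch with illustrative numbers:

```python
def transduction_slack(C, m, u):
    """Lemma 2.1: a slack of C on the full-sample risk becomes ((m+u)/u)*C on X_u."""
    return (m + u) / u * C

# 100 labeled points, 400 unlabeled points, full-sample slack 0.04 (illustrative).
slack = transduction_slack(0.04, m=100, u=400)
```

Note that the penalty factor (m+u)/u approaches 1 as the unlabeled sample grows, so the translation is nearly free when u is much larger than m.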
3 General Error Bounds for Transduction
Consider a hypothesis class H and assume for simplicity that H is countable; in fact, in
the case of transduction it suffices to consider a finite hypothesis class. To see this note
that all m + u points are known in advance. Thus, in the case of binary classification
(for example) it suffices to consider at most 2^{m+u} possible dichotomies. Recall that in the
setting considered we select a sub-sample of m points from the set Xm+u of cardinality
m+u. This corresponds to a selection of m points without replacement from a set of m+u
points, leading to the m points being dependent. A naive utilization of large deviation
bounds would therefore not be directly applicable in this setting. However, Hoeffding
(see Theorem 4 in [13]) pointed out a simple procedure to transform the problem into one
involving independent data. While this procedure leads to non-trivial bounds, it does not
fully take advantage of the transductive setting and will not be used here. Consider for
simplicity the case of binary classification. In this case we make use of the following
concentration inequality, based on [14].
Theorem 3.1 Let C = {c_1 , . . . , c_N }, c_i ∈ {0, 1}, be a finite set of binary numbers, and
set \bar{c} = (1/N) \sum_{i=1}^{N} c_i. Let Z_1 , . . . , Z_m be random variables obtaining their values
by sampling C uniformly at random without replacement. Set \bar{Z} = (1/m) \sum_{i=1}^{m} Z_i and
β = m/N. Then, if² ε ≤ min{1 − \bar{c}, \bar{c}(1 − β)/β},

    \Pr\{\bar{Z} − E\bar{Z} > ε\} \le \exp\left( −m\, D(\bar{c} + ε \,\|\, \bar{c}) − (N − m)\, D\!\left( \bar{c} − \frac{εβ}{1 − β} \,\Big\|\, \bar{c} \right) + 7 \log(N + 1) \right),

where D(p\|q) = p \log\frac{p}{q} + (1 − p) \log\frac{1 − p}{1 − q}, with p, q ∈ [0, 1], is the binary
Kullback-Leibler divergence.
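To get a feel for the inequality, the sketch below evaluates one plausible reading of the right-hand side of Theorem 3.1; the parameter values are illustrative, and for small samples the 7 log(N+1) term can make the bound vacuous.

```python
import math

def binary_kl(p, q):
    """Binary KL divergence D(p||q) in nats, with the 0*log(0) = 0 convention."""
    d = 0.0
    if p > 0:
        d += p * math.log(p / q)
    if p < 1:
        d += (1 - p) * math.log((1 - p) / (1 - q))
    return d

def tail_bound(c_bar, eps, m, N):
    """One reading of the right-hand side of Theorem 3.1 (may exceed 1 when loose)."""
    beta = m / N
    assert eps <= min(1 - c_bar, c_bar * (1 - beta) / beta), "condition of the theorem"
    exponent = (-m * binary_kl(c_bar + eps, c_bar)
                - (N - m) * binary_kl(c_bar - eps * beta / (1 - beta), c_bar)
                + 7 * math.log(N + 1))
    return math.exp(exponent)

# Illustrative: sample 100000 of 500000 binary values with mean 0.5, deviation 0.05.
bound = tail_bound(c_bar=0.5, eps=0.05, m=100_000, N=500_000)
```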
Using this result we obtain the following error bound for transductive classification.
Theorem 3.2 Let Xm+u = Xm ∪ Xu be the full sample and let p = p(Xm+u) be a (prior)
distribution over the class of binary hypotheses H that may depend on the full sample. Let
δ ∈ (0, 1) be given. Then, with probability at least 1 − δ over choices of Sm (from the full
sample) the following bound holds for any h ∈ H,

    R_h(X_u) \le \hat{R}_h(S_m) + \sqrt{ \frac{2 \hat{R}_h(S_m)\,(m+u) \left( \log\frac{1}{p(h)} + \ln\frac{m}{\delta} + 7 \log(m+u+1) \right)}{u\,(m-1)} } + \frac{2 \left( \log\frac{1}{p(h)} + \ln\frac{m}{\delta} + 7 \log(m+u+1) \right)}{m-1}.    (5)
Proof: (sketch) In our transduction setting the set Xm (and therefore Sm) is obtained by
sampling the full sample Xm+u uniformly at random without replacement. We first claim
that

    E_{S_m} \hat{R}_h(S_m) = R_h(X_{m+u}),    (6)

where E_{S_m}(·) denotes the expectation with respect to a random choice of Sm from Xm+u without replacement. This is shown as follows.

    E_{S_m} \hat{R}_h(S_m) = \binom{m+u}{m}^{-1} \sum_{X_m \subseteq X_{m+u}} \hat{R}_h(S_m) = \binom{m+u}{m}^{-1} \sum_{X_m \subseteq X_{m+u}} \frac{1}{m} \sum_{x \in S_m} \ell(h(x), φ(x)).

By symmetry, all points x ∈ X_{m+u} are counted on the right-hand side an equal number of
times; this number is precisely \binom{m+u}{m} \cdot \frac{m}{m+u} = \binom{m+u-1}{m-1}. The equality (6) is obtained
by considering the definition of R_h(X_{m+u}) and noting that \binom{m+u-1}{m-1} / \binom{m+u}{m} = \frac{m}{m+u}.
The remainder of the proof combines Theorem 3.1 and the techniques presented in [15].
The details will be provided in the full paper. □
Notice that when \hat{R}_h(S_m) → 0 the square root in (5) vanishes and faster rates are obtained.
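The bound (5) is a closed-form expression that is easy to evaluate once the training error, the sample sizes, the prior mass p(h), and δ are fixed. The helper below is a sketch of one plausible reading of the displayed bound, with all logarithms taken in nats and illustrative inputs:

```python
import math

def bound_rhs(emp_risk, m, u, log_inv_prior, delta):
    """One reading of the right-hand side of (5); log_inv_prior = log(1/p(h)) in nats."""
    cap = log_inv_prior + math.log(m / delta) + 7 * math.log(m + u + 1)
    sqrt_term = math.sqrt(2 * emp_risk * (m + u) * cap / (u * (m - 1)))
    return emp_risk + sqrt_term + 2 * cap / (m - 1)

# 200 labeled and 800 unlabeled points, 5% training error, a prior putting
# mass 2**-20 on the selected hypothesis, delta = 0.05 (all illustrative).
rhs = bound_rhs(emp_risk=0.05, m=200, u=800,
                log_inv_prior=20 * math.log(2), delta=0.05)
```

As the theorem suggests, increasing m and u while holding the other quantities fixed tightens the computed bound.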
An important feature of Theorem 3.2 is that it allows one to use the sample Xm+u in order
to choose the prior distribution p(h). This advantage has already been alluded to in [2], but
does not seem to have been widely used in practice. Additionally, observe that (5) holds
with probability at least 1 − δ with respect to the random selection of sub-samples of size
m from the fixed set Xm+u . This should be contrasted with the standard inductive setting
results where the probabilities are with respect to a random choice of m training points
chosen i.i.d. from μ(x, y).
The next bound we present is analogous to McAllester's Theorem 1 in [8]. This theorem
concerns Gibbs composite classifiers, which are distributions over the base classifiers in
H. For any distribution q over H denote by Gq the Gibbs classifier, which classifies an
2 The second condition, ε ≤ \bar{c}(1 − β)/β, simply guarantees that the number of "ones" in the
sub-sample does not exceed their number in the original sample.
instance (in Xu) by randomly choosing, according to q, one hypothesis h ∈ H. For
Gibbs classifiers we now extend definition (1) as follows. Let Z = x_1 , . . . , x_{|Z|} be any set of
samples and let Gq be a Gibbs classifier over H. The risk of Gq over Z is

    R_{G_q}(Z) = E_{h \sim q}\left\{ \frac{1}{|Z|} \sum_{i=1}^{|Z|} \ell(h(x_i), φ(x_i)) \right\}.

As before, when Z = Xm (the training set) we use the standard notation \hat{R}_{G_q}(S_m) = R_{G_q}(X_m). Due to space limitations, the proof of
the following theorem will appear in the full paper.
Theorem 3.3 Let Xm+u be the full sample. Let p be a distribution over H that may depend
on Xm+u and let q be a (posterior) distribution over H that may depend on both Sm and
Xu. Let δ ∈ (0, 1) be given. With probability at least 1 − δ over the choices of Sm, for any
distribution q,

    R_{G_q}(X_u) \le \hat{R}_{G_q}(S_m) + \sqrt{ \frac{2 \hat{R}_{G_q}(S_m)\,(m+u) \left( D(q\|p) + \ln\frac{m}{\delta} + 7 \log(m+u+1) \right)}{u\,(m-1)} } + \frac{2 \left( D(q\|p) + \ln\frac{m}{\delta} + 7 \log(m+u+1) \right)}{m-1}.
In the context of inductive learning, a major obstacle in generating meaningful and effective bounds using the PAC-Bayesian framework [8] is the construction of "compact priors". Here we discuss two extensions to the PAC-Bayesian scheme, which together allow
for easy choices of compact priors that can yield tight error bounds. The first extension
we offer is the use of multiple priors. Instead of a single prior p in the original PAC-Bayesian framework we observe that one can use all PAC-Bayesian bounds with a number
of priors p1 , . . . , pk and then replace the complexity term ln(1/p(h)) (in Theorem 3.2)
by min_i ln(1/p_i(h)), at a cost of an additional ln k term (see below). Similarly, in Theorem 3.3 we can replace the KL-divergence term in the bound with min_i D(q\|p_i). The
penalty for using k priors is logarithmic in k (specifically the ln(1/δ) term in the original
bound becomes ln(k/δ)). As long as k is sub-exponential in m we still obtain effective
generalization bounds. The second "extension" is simply the feature of our transduction
bounds (Theorems 3.2 and 3.3), which allows for the priors to be dependent on the full
sample Xm+u . The combination of these two simple ideas yields a powerful technique for
deriving error bounds in realistic transductive settings. After stating the extended result we
later use it for deriving tight bounds for known learning algorithms and for deriving new
algorithms. Suppose that instead of a single prior p over H we want to utilize k priors,
p1 , . . . , pk and in retrospect choose the best among the k corresponding PAC-Bayesian
bounds. The following theorem shows that one can use polynomially many priors with
a minor penalty. The proof, which is omitted due to space limitations, utilizes the union
bound in a straightforward manner.
Theorem 3.4 Let the conditions of Theorem 3.2 hold, except that we now have k prior
distributions p1 , . . . , pk defined over H, each of which may depend on Xm+u . Let δ ∈
(0, 1) be given. Then, with probability at least 1 − δ over random choices of sub-samples of
size m from the full sample, for all h ∈ H, (5) holds with p(h) replaced by min_{1≤i≤k} p_i(h)
and log(1/δ) replaced by log(k/δ).
Remark: A similar result holds for the Gibbs algorithm of Theorem 3.3. Also, as noted by
one of the reviewers, when the supports of the k priors intersect (i.e. there is at least one
pair of priors p_i and p_j with overlapping support), then one can do better by utilizing the
"super prior" p = \frac{1}{k} \sum_i p_i within the original Theorem 3.2. However, note that when the
supports are disjoint, these two views (of multiple priors and a super prior) are equivalent.
In the applications below we utilize non-intersecting priors.
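The multiple-priors recipe can be sketched as follows: evaluate the bound once per prior, with the ln(m/δ) term inflated to ln(km/δ) to pay the log k penalty, and keep the tightest value. The prior masses and sample sizes below are hypothetical, and the bound expression follows one plausible reading of (5).

```python
import math

def best_multi_prior_bound(emp_risk, m, u, log_inv_priors, delta):
    """Evaluate the bound once per prior with ln(m/delta) inflated to ln(k*m/delta),
    then keep the tightest value (a sketch of the multiple-priors recipe)."""
    k = len(log_inv_priors)
    best = float('inf')
    for lip in log_inv_priors:  # lip = log(1/p_i(h)) in nats
        cap = lip + math.log(k * m / delta) + 7 * math.log(m + u + 1)
        val = (emp_risk
               + math.sqrt(2 * emp_risk * (m + u) * cap / (u * (m - 1)))
               + 2 * cap / (m - 1))
        best = min(best, val)
    return best

# Three hypothetical priors assigning the hypothesis mass 2**-30, 2**-12, 2**-18.
b = best_multi_prior_bound(0.05, 500, 2000,
                           [30 * math.log(2), 12 * math.log(2), 18 * math.log(2)],
                           delta=0.05)
```

The ln k inflation is the only price paid for choosing, in retrospect, the prior that happened to put the most mass on the selected hypothesis.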
4 Bounds for Compression Algorithms
Here we propose a technique for bounding the error of "compression" algorithms based on
appropriate construction of prior probabilities. Let A be a learning algorithm. Intuitively,
A is a "compression scheme" if it can generate the same hypothesis using a subset of the
data. More formally, a learning algorithm A (viewed as a function from samples to some
hypothesis class) is a compression scheme with respect to a sample Z if there is a sub-sample Z′, Z′ ⊆ Z, such that A(Z′) = A(Z). Observe that the SVM approach is a
compression scheme, with Z′ being determined by the set of support vectors.
Let A be a deterministic compression scheme and consider the full sample Xm+u . For
each integer τ = 1, . . . , m, consider all subsets of Xm+u of size τ, and for each subset
construct all possible dichotomies of that subset (note that we are not proposing this approach as an algorithm, but rather as a means to derive bounds; in practice one need not
construct all these dichotomies). A deterministic algorithm A uniquely determines at most
one hypothesis h ∈ H for each dichotomy.³ For each τ, let the set of hypotheses generated
by this procedure be denoted by H_τ. For the rest of this discussion we assume the worst
case where |H_τ| = 2^τ \binom{m+u}{τ} (i.e. if H_τ does not contain one hypothesis for each dichotomy
the bounds improve). The prior p_τ is then defined to be a uniform distribution over H_τ.
In this way we have m priors, p1 , . . . , pm which are constructed using only Xm+u (and
are independent of Sm ). Any hypothesis selected by the learning algorithm A based on
the labeled sample Sm and on the test set Xu belongs to ∪_{τ=1}^{m} H_τ. The motivation for this
construction is as follows. Each τ can be viewed as our "guess" for the maximal number of
compression points that will be utilized by a resulting classifier. For each such τ the prior
p_τ is constructed over all possible classifiers that use τ compression points. By systematically considering all possible dichotomies of τ points we can characterize a relatively small
subset of H without observing labels of the training points. Thus, each prior p_τ represents
one such guess. Using Theorem 3.4 we are later allowed to choose in retrospect the bound
corresponding to the best "guess". The following corollary identifies an upper bound on
the divergence in terms of the observed size of the compression set of the final classifier.
Corollary 4.1 Let the conditions of Theorem 3.4 hold. Let A be a deterministic learning
algorithm leading to a hypothesis h ∈ H based on a compression set of size s. Then
with probability at least 1 − δ for all h ∈ H, (5) holds with log(1/p(h)) replaced by
s log(2e(m + u)/s) and ln(m/δ) replaced by ln(m²/δ).
Proof: Recall that H_s ⊆ H is the support set of p_s and that p_s(h) = 1/|H_s| for all
h ∈ H_s, implying that ln(1/p_s(h)) = ln |H_s|. Using the inequality \binom{m+u}{s} \le (e(m+u)/s)^s
we have that |H_s| = 2^s \binom{m+u}{s} \le (2e(m+u)/s)^s. Substituting this result in Theorem 3.4
while restricting the minimum over i to be over i ≥ s, leads to the desired result. □
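Evaluating the compression bound once a classifier is trained only requires plugging the observed compression-set size s into (5) with the substitutions of Corollary 4.1. The sketch below follows one plausible reading of that recipe (logarithms in nats; the sample sizes and error rate are illustrative):

```python
import math

def compression_bound(emp_risk, s, m, u, delta):
    """Corollary 4.1 sketch: substitute s*log(2e(m+u)/s) for log(1/p(h)) and
    ln(m**2/delta) for ln(m/delta) in the bound (5)."""
    cap = (s * math.log(2 * math.e * (m + u) / s)
           + math.log(m ** 2 / delta)
           + 7 * math.log(m + u + 1))
    return (emp_risk
            + math.sqrt(2 * emp_risk * (m + u) * cap / (u * (m - 1)))
            + 2 * cap / (m - 1))

# An SVM-style scheme with 25 support vectors, 300 labeled and 700 unlabeled
# points, 2% training error (illustrative numbers).
cb = compression_bound(emp_risk=0.02, s=25, m=300, u=700, delta=0.05)
```

As the text notes, the bound is tight only when the compression set is small: shrinking s directly shrinks the dominant complexity term.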
The bound of Corollary 4.1 can be easily computed once the classifier is trained. If the size
of the compression set happens to be small, we obtain a tight bound. SVM classification is
one of the best studied compression schemes. The compression set for a sample Sm is given
by the subset of support vectors. Thus the bound in Corollary 4.1 immediately applies with
s being the number of observed support vectors (after training). We note that this bound
is similar to a recently derived compression bound for inductive learning (Theorem 5.18 in
[16]). Also, observe that the algorithm itself (inductive SVM) did not use in this case the
unlabeled sample (although the bound does use this sample). Nevertheless, using exactly
the same technique we obtain error bounds for the transductive SVM algorithms in [2, 3].4
3 It might be that for some dichotomies the algorithm will fail. For example, an SVM in feature
space without soft margin will fail to classify non linearly-separable dichotomies of Xm+u .
4 Note however that our bounds are optimized with a "minimum number of support vectors" approach rather than "maximum margin".
5 Bounds for Clustering Algorithms
Some learning problems do not allow for high compression rates using compression
schemes such as SVMs (i.e. the number of support vectors can sometimes be very large).
A considerably stronger type of compression can often be achieved by clustering algorithms. While there is a lack of formal links between entirely unsupervised clustering and
classification, within a transduction setting we can provide a principled approach to using
clustering algorithms for classification. Let A be any (deterministic) clustering algorithm
which, given the full sample Xm+u , can cluster this sample into any desired number of
clusters. We use A to cluster Xm+u into 2, 3, . . . , c clusters where c ≤ m. Thus, the algorithm generates a collection of partitions of Xm+u into τ = 2, 3, . . . , c clusters, where
each partition is denoted by C_τ. For each value of τ, let H_τ consist of those hypotheses
which assign an identical label to all points in the same cluster of partition C_τ, and define
the prior p_τ(h) = 1/2^τ for each h ∈ H_τ and zero otherwise (note that there are 2^τ possible dichotomies). The learning algorithm selects a hypothesis as follows. Upon observing
the labeled sample Sm = (Xm , Ym ), for each of the clusterings C2 , . . . , Cc constructed
above, it assigns a label to each cluster based on the majority vote from the labels Ym of
points falling within the cluster (in case of ties, or if no points from Xm belong to the
cluster, choose a label arbitrarily). Doing this leads to c − 1 classifiers hτ, τ = 2, . . . , c.
For each hτ there is a valid error bound as given by Theorem 3.4 and all these bounds are
valid simultaneously. Thus we choose the best classifier (equivalently, number of clusters)
for which the best bound holds. We thus have the following corollary of Theorem 3.4 and
Lemma 2.1.
Corollary 5.1 Let A be any clustering algorithm and let hτ, τ = 2, . . . , c be classifications
of the test set Xu as determined by clustering of the full sample Xm+u (into τ clusters) and
the training set Sm, as described above. Let δ ∈ (0, 1) be given. Then with probability at
least 1 − δ, for all τ, (5) holds with log(1/p(h)) replaced by τ and ln(m/δ) replaced by
ln(mc/δ).
Error bounds obtained using Corollary 5.1 can be rather tight when the clustering algorithm
is successful (i.e. when it captures the class structure in the data using a small number of
clusters).
Corollary 5.1 can be extended in a number of ways. One simple extension is the use of
an ensemble of clustering algorithms. Specifically, we can concurrently apply k clustering
algorithms (using each algorithm to cluster the data into τ = 2, . . . , c clusters). We thus
obtain kc hypotheses (partitions of Xm+u). By a simple application of the union bound
we can replace ln(cm/δ) by ln(kcm/δ) in Corollary 5.1 and guarantee that kc bounds hold
simultaneously for all kc hypotheses (with probability at least 1 − δ). We thus choose the
hypothesis which minimizes the resulting bound. This extension is particularly attractive
since typically without prior knowledge we do not know which clustering algorithm will
be effective for the dataset at hand.
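The price of this union bound is modest: going from one clustering algorithm to k of them only adds ln(k) to the logarithmic penalty term. A quick numeric check with hypothetical values of m, c, k, and δ:

```python
import math

m, c, delta = 100, 10, 0.05             # labeled points, max clusters, confidence
k = 5                                    # number of clustering algorithms

single = math.log(m * c / delta)         # penalty for one algorithm: ln(mc/delta)
ensemble = math.log(k * m * c / delta)   # penalty for k algorithms: ln(kmc/delta)

print(round(ensemble - single, 4))       # 1.6094 = ln(5): the extra cost of the ensemble
```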
6 Concluding Remarks
We presented new bounds for transductive learning algorithms. We also developed a new
technique for deriving tight error bounds for compression schemes and for clustering algorithms in the transductive setting. We expect that these bounds and new techniques will
be useful for deriving new error bounds for other known algorithms and for deriving new
types of transductive learning algorithms. It would be interesting to see if tighter transduction bounds can be obtained by reducing the "slacks" in the inequalities we use in our
analysis. Another promising direction is the construction of better (multiple) priors. For example, in our compression bound (Corollary 4.1), for each number of compression points
we assigned the same prior to each possible point subset and each possible dichotomy.
However, in practice a vast majority of all these subsets and dichotomies are unlikely to
occur.
Acknowledgments The work of R.E and R.M. was partially supported by the Technion
V.P.R. fund for the promotion of sponsored research. Support from the Ollendorff center
of the department of Electrical Engineering at the Technion is also acknowledged. We also
thank anonymous referees for their useful comments.
References
[1] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer Verlag,
New York, 1982.
[2] V. N. Vapnik. Statistical Learning Theory. Wiley Interscience, New York, 1998.
[3] T. Joachims. Transductive inference for text classification using support vector machines. In European Conference on Machine Learning, 1999.
[4] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In Proceedings of the Eighteenth International Conference on Machine Learning
(ICML 2001), pages 19-26, 2001.
[5] R. El-Yaniv and O. Souroujon. Iterative double clustering for unsupervised and semi-supervised learning. In Advances in Neural Information Processing Systems (NIPS
2001), pages 1025-1032, 2001.
[6] T. Joachims. Transductive learning via spectral graph partitioning. In Proceedings of
the Twentieth International Conference on Machine Learning (ICML 2003), 2003.
[7] D. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355-363,
1999.
[8] D. McAllester. PAC-Bayesian stochastic model selection. Machine Learning,
51(1):5-21, 2003.
[9] D. Wu, K. Bennett, N. Cristianini, and J. Shawe-Taylor. Large margin trees for induction and transduction. In International Conference on Machine Learning, 1999.
[10] L. Bottou, C. Cortes, and V. Vapnik. On the effective VC dimension. Technical report,
AT&T, 1994.
[11] G. R. G. Lanckriet, N. Cristianini, L. El Ghaoui, P. Bartlett, and M. I. Jordan. Learning
the kernel matrix with semi-definite programming. Technical report, University of
California, Berkeley, Computer Science Division, 2002.
[12] A. Blum and J. Langford. PAC-MDL bounds. In COLT, pages 344-357, 2003.
[13] W. Hoeffding. Probability inequalities for sums of bounded random variables. J.
Amer. Statist. Assoc., 58:13-30, 1963.
[14] A. Dembo and O. Zeitouni. Large Deviations Techniques and Applications. Springer,
New York, second edition, 1998.
[15] D. McAllester. Simplified PAC-Bayesian margin bounds. In COLT, pages 203-215,
2003.
[16] R. Herbrich. Learning Kernel Classifiers: Theory and Algorithms. MIT Press,
Boston, 2002.
Population of Neurons
Jeff Bondy
Dept. of Electrical Engineering
McMaster University
Hamilton, ON
[email protected]
Ian C. Bruce
Dept. of Electrical Engineering
McMaster University
Hamilton, ON
[email protected]
Suzanna Becker
Dept. of Psychology
McMaster University
[email protected]
Simon Haykin
Dept. of Electrical Engineering
McMaster University
[email protected]
Abstract
A major issue in evaluating speech enhancement and hearing
compensation algorithms is to come up with a suitable metric that
predicts intelligibility as judged by a human listener. Previous
methods such as the widely used Speech Transmission Index (STI)
fail to account for masking effects that arise from the highly
nonlinear cochlear transfer function. We therefore propose a
Neural Articulation Index (NAI) that estimates speech
intelligibility from the instantaneous neural spike rate over time,
produced when a signal is processed by an auditory neural model.
By using a well developed model of the auditory periphery and
detection theory we show that human perceptual discrimination
closely matches the modeled distortion in the instantaneous spike
rates of the auditory nerve. In highly rippled frequency transfer
conditions the NAI's prediction error is 8% versus the STI's
prediction error of 10.8%.
1 Introduction
A wide range of intelligibility measures in current use rest on the assumption that
intelligibility of a speech signal is based upon the sum of contributions of
intelligibility within individual frequency bands, as first proposed by French and
Steinberg [1]. This basic method applies a function of the Signal-to-Noise Ratio
(SNR) in a set of bands, then averages across these bands to come up with a
prediction of intelligibility. French and Steinberg?s original Articulation Index (AI)
is based on 20 equally contributing bands, and produces an intelligibility score
between zero and one:
AI = (1/20) Σ_{i=1}^{20} TIi ,    (1)

where TIi (Transmission Index i) is the normalized intelligibility in the i-th band. The
TI per band is a function of the signal-to-noise ratio, or:
TIi = (SNRi + 12) / 30 ,    (2)

for SNRs between −12 dB and 18 dB. An SNR of greater than 18 dB means that the
band has perfect intelligibility and TI equals 1, while an SNR under −12 dB means
that a band is not contributing at all, and the TI of that band equals 0. The overall
intelligibility is then a function of the AI, but this function changes depending on
the semantic context of the signal.
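Equations (1) and (2) amount to mapping each band's SNR linearly from [−12, 18] dB onto [0, 1], clipping, and averaging. A direct transcription (written for any number of bands):

```python
def transmission_index(snr_db):
    """TI per band: (SNR + 12)/30, clipped into [0, 1] (eq. 2)."""
    return min(1.0, max(0.0, (snr_db + 12.0) / 30.0))

def articulation_index(band_snrs_db):
    """AI = mean of the per-band TIs (eq. 1)."""
    return sum(transmission_index(s) for s in band_snrs_db) / len(band_snrs_db)

print(transmission_index(18))            # 1.0  (perfect intelligibility)
print(transmission_index(-12))           # 0.0  (band contributes nothing)
print(articulation_index([18, 3, -12]))  # (1.0 + 0.5 + 0.0)/3 = 0.5
```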
Kryter validated many of the underlying AI principles [2]. Kryter also presented the
mechanics for calculating the AI for different number of bands - 5,6,15 or the
original 20 - as well as important correction factors [3]. Some of the most important
correction factors account for the effects of modulated noise, peak clipping, and
reverberation. Even with the application of various correction factors, the AI does
not predict intelligibility in the presence of some time-domain distortions.
Consequently, the Modulation Transfer Function (MTF) has been utilized to
measure the loss of intelligibility due to echoes and reverberation [4]. Steeneken
and Houtgast later extended this approach to include nonlinear distortions, giving a
new name to the predictor: the Speech Transmission Index (STI) [5]. These metrics
proved more valid for a larger range of environments and interferences.
The STI test signal is a long-term average speech spectrum, gaussian random signal,
amplitude modulated by a 0.63 Hz to 12.5 Hz tone. Acoustic components within
different frequency bands are switched on and off over the testing sequence to come
up with an intelligibility score between zero and one. Interband intermodulation
sources can be discerned, as long as the product does not fall into the testing band.
Therefore, the STI allows for standard AI-frequency band weighted SNR effects,
MTF-time domain effects, and some limited measurements of nonlinearities. The
STI shows a high correlation with empirical tests, and has been codified as an ANSI
standard [6]. For general acoustics it is very good. However, the STI does not
accurately model intraband masker non-linearities, phase distortions or the
underlying auditory mechanisms (outside of independent frequency bands).
We therefore sought to extend the AI/STI concepts to predict intelligibility, on the
assumption that the closest physical variable we have to the perceptual variable of
intelligibility is the auditory nerve response. Using a spiking model of the auditory
periphery [7] we form the Neuronal Articulation Index (NAI) by describing
distortions in the spike trains of different frequency bands. The spiking over time of
an auditory nerve fiber for an undistorted speech signal (control case) is compared
to the neural spiking over time for the same signal after undergoing some distortion
(test case). The difference in the estimated instantaneous discharge rate for the two
cases is used to calculate a neural equivalent to the TI, the Neural Distortion (ND),
for each frequency band. Then the NAI is calculated with a weighted average of
NDs at different Best Frequencies (BFs). In general detection theory terms, the
control neuronal response sets some locus in a high dimensional space, then the
distorted neuronal response will project near that locus if it is perceptually
equivalent, or very far away if it is not. Thus, the distance between the control
neuronal response and the distorted neuronal response is a function of intelligibility.
Due to the limitations of the STI mentioned above it is predicted that a measure of
the neural coding error will be a better predictor than SNR for human intelligibility
word-scores. Our method also has the potential to shed light on the underlying
neurobiological mechanisms.
2 Method

2.1 Model
The auditory periphery model used throughout (and hereafter referred to as the
Auditory Model) is from [7]. The system is shown in Figure 1.
Figure 1 Block diagram of the computational model of the auditory periphery
from the middle ear to the Auditory Nerve. Reprinted from Fig. 1 of [7] with
permission from the Acoustical Society of America © (2003).
The auditory periphery model comprises several sections, each providing a
phenomenological description of a different part of the cat auditory periphery
function.
The first section models middle ear filtering. The second section, labeled the
?control path,? captures the Outer Hair Cells (OHC) modulatory function, and
includes a wideband, nonlinear, time varying, band-pass filter followed by an OHC
nonlinearity (NL) and low-pass (LP) filter. This section controls the time-varying,
nonlinear behavior of the narrowband signal-path basilar membrane (BM) filter. The
control-path filter has a wider bandwidth than the signal-path filter to account for
wideband nonlinear phenomena such as two-tone rate suppression.
The third section of the model, labeled the ?signal path?, describes the filter
properties and traveling wave delay of the BM (time-varying, narrowband filter);
the nonlinear transduction and low-pass filtering of the Inner Hair Cell (IHC NL and
LP); spontaneous and driven activity and adaptation in synaptic transmission
(synapse model); and spike generation and refractoriness in the auditory nerve
(AN). In this model, CIHC and COHC are scaling constants that control IHC and OHC
status, respectively.
The parameters of the synapse section of the model are set to produce adaptation
and discharge-rate versus level behavior appropriate for a high-spontaneous-
rate/low-threshold auditory nerve fiber. In order to avoid having to generate many
spike trains to obtain a reliable estimate of the instantaneous discharge rate over
time, we instead use the synaptic release rate as an approximation of the discharge
rate, ignoring the effects of neural refractoriness.
2.2 Neural articulation index
These results emulate most of the simulations described in Chapter 2 of Steeneken?s
thesis [8], as it describes the full development of an STI metric from inception to
end. For those interested, the following simulations try to map most of the second
chapter, but instead of basing the distortion metric on a SNR calculation, we use the
neural distortion.
There are two sets of experiments. The first, in section 3.1, deals with applying a
frequency weighting structure to combine the band distortion values, while section
3.2 introduces redundancy factors also. The bands, chosen to match [8], are octave
bands centered at [125, 250, 500, 1000, 2000, 4000, 8000] Hz. Only seven bands are
used here. The Neural AI (NAI) for this is:
NAI = α1·NTI1 + α2·NTI2 + ... + α7·NTI7 ,    (3)

where αi is the i-th band's contribution and NTIi is the Neural Transmission Index in
the i-th band. Here all the αs sum to one, so each α factor can be thought of as the
percentage contribution of a band to intelligibility. Since NTI is between [0,1], it
can also be thought of as the percentage of acoustic features that are intelligible in a
particular band. The ND per band is the projection of the distorted (Test)
instantaneous spike rate against the clean (Control) instantaneous spike rate.
ND = 1 ?
Test ? Control T
,
Control ? Control T
(4)
where Control and Test are vectors of the instantaneous spike rate over time,
sampled at 22050 Hz. This type of error metric can only deal with steady state
channel distortions, such as the ones used in [8]. ND was then linearly fit to
resemble the TI equation 1-2, after normalizing each of the seven bands to have zero
means and unit standard deviations across each of the seven bands. The NTI in the
th
i band was calculated as
NDi ? ? i
(5)
NTIi = m
+b.
?i
NTIi is then thresholded to be no less then 0 and no greater then 1, following the TI
thresholding. In equation (5) the factors, m = 2.5, b = -1, were the best linear fit to
produce NTIi?s in bands with SNR greater then 15 dB of 1, bands with 7.5 dB SNR
produce NTIi?s of 0.75, and bands with 0 dB SNR produced NTI i?s of 0.5. This
closely followed the procedure outlined in section 2.3.3 of [8]. As the TI is a best
linear fit of SNR to intelligibility, the NTI is a best linear fit of neural distortion to
intelligibility.
The input stimuli were taken from a Dutch corpus [9], and consisted of 10
Consonant-Vowel-Consonant (CVC) words, each spoken by four males and four
females and sampled at 44100 Hz. The Steeneken study had many more, but the
exact corpus could not be found. 80 total words is enough to produce meaningful
frequency weighting factors. There were 26 frequency channel distortion conditions
used for male speakers, 17 for female and three SNRs (+15 dB, +7.5 dB and 0 dB).
The channel conditions were split into four groups, given in Tables 1 through 4, for
males; since females have negligible signal in the 125 Hz band, they used a subset,
marked with an asterisk in Tables 1 through 4.
Table 1: Rippled Envelope

                OCTAVE-BAND CENTRE FREQUENCY
ID #    125   250   500   1K    2K    4K    8K
1*       1     1     1     1     0     0     0
2*       0     0     0     0     1     1     1
3*       1     1     0     0     0     1     1
4*       0     0     1     1     1     0     0
5*       1     1     0     0     1     1     0
6*       0     0     1     1     0     0     1
7*       1     0     1     0     1     0     1
8*       0     1     0     1     0     1     0

Table 2: Adjacent Triplets

                OCTAVE-BAND CENTRE FREQUENCY
ID #    125   250   500   1K    2K    4K    8K
9        1     1     1     0     0     0     0
10       0     1     1     1     0     0     0
11*      0     0     0     1     1     1     0

Table 3: Isolated Triplets

                OCTAVE-BAND CENTRE FREQUENCY
ID #    125   250   500   1K    2K    4K    8K
12       1     0     1     0     1     0     0
13       1     0     1     0     0     1     0
14       1     0     0     1     0     1     0
15*      0     1     0     1     0     0     1
16*      0     1     0     0     1     0     1
17       0     0     1     0     1     0     1

Table 4: Contiguous Bands

                OCTAVE-BAND CENTRE FREQUENCY
ID #    125   250   500   1K    2K    4K    8K
18*      0     1     1     1     1     0     0
19*      0     0     1     1     1     1     0
20*      0     0     0     1     1     1     1
21       1     1     1     1     1     0     0
22*      0     1     1     1     1     1     0
23*      0     0     1     1     1     1     1
24       1     1     1     1     1     1     0
25       0     1     1     1     1     1     1
26*      1     1     1     1     1     1     1
In the above tables a one represents a passband and a zero a stop band. A 1353 tap
FIR filter was designed for each envelope condition. The female envelopes are a
subset of these because they have no appreciable speech energy in the 125 Hz
octave band. Using the 40 male utterances and 40 female utterances under distortion
and calculating the NAI following equation (3) produces only a value between [0,1].
To produce a word-score intelligibility prediction between zero and 100 percent the
NAI value was fit to a third order polynomial that produced the lowest standard
deviation of error from empirical data. While Fletcher and Galt [10] state that the
relation between AI and intelligibility is exponential, [8] fits with a third order
polynomial, and we have chosen to compare to [8]. The empirical word-score
intelligibility was from [8].
3 Results

3.1 Determining frequency weighting structure
For the first tests, the optimal frequency weights (the values of αi from equation 3)
were designed through minimizing the difference between the predicted
intelligibility and the empirical intelligibility. At each iteration one of the values
was dithered up or down, and then the sum of the αi was normalized to one. This is
very similar to [5] whose final standard deviation of prediction error for males was
12.8%, and 8.8% for females. The NAI?s final standard deviation of prediction error
for males was 8.9%, and 7.1% for females.
Figure 2 Relation between NAI and empirical word-score intelligibility for male
(left) and female (right) speech with bandpass limiting and noise. The vertical
spread from the best fitting polynomial for males has a s.d. = 8.9% versus the
STI [5] s.d. = 12.8%, for females the fit has a s.d. = 7.1% versus the STI [5] s.d.
= 8.8%
The frequency weighting factors are similar for the NAI and the STI. The STI
weighting factors from [8], which produced the optimal prediction of empirical data
(male s.d. = 6.8%, female s.d. = 6.0%) and the NAI are plotted in Figure 3.
Figure 3 Frequency weighting factors for the optimal predictor of male and
female intelligibility calculated with the NAI and published by Steeneken [8].
As one can see, the low frequency information is tremendously suppressed in the
NAI, while the high frequencies are emphasized. This may be an effect of the
stimuli corpus. The corpus has a high percentage of stops and fricatives in the initial
and final consonant positions. Since these have a comparatively large amount of
high frequency signal they may explain this discrepancy at the cost of the low
frequency weights. [8] does state that these frequency weights are dependant upon
the conditions used for evaluation.
3.2 Determining frequency weighting with redundancy factors
In experiment two, rather than using equation (3), which assumes each frequency band
contributes independently, we introduce redundancy factors. There is correlation
between the different frequency bands of speech [11], which tends to make the STI
over-predict intelligibility. The redundancy factors attempt to remove correlated
signals between bands. Equation (3) then becomes:

NAIr = α1·NTI1 − β1·NTI1·NTI2 + α2·NTI2 − β2·NTI2·NTI3 + ... + α7·NTI7 ,    (6)

where the r subscript denotes a redundant NAI and β is the correlation factor. Only
adjacent bands are used here to reduce complexity. We replicated Section 3.1 except
using equation 6. The same testing and adaptation strategy from Section 3.1 was
used to find the optimal αs and βs.
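Equation (6) can be sketched directly, with the βi (hypothetical values below) discounting the products of adjacent-band NTIs:

```python
def nai_redundant(ntis, alphas, betas):
    """NAIr: weighted band sum minus adjacent-band redundancy terms (eq. 6).

    ntis:   NTI_1 .. NTI_7
    alphas: frequency weights (sum to one)
    betas:  redundancy factors for the six adjacent band pairs
    """
    direct = sum(a * n for a, n in zip(alphas, ntis))
    overlap = sum(b * n1 * n2 for b, n1, n2 in zip(betas, ntis, ntis[1:]))
    return direct - overlap

ntis = [1.0] * 7
alphas = [1.0 / 7] * 7
print(round(nai_redundant(ntis, alphas, betas=[0.0] * 6), 6))  # 1.0: reduces to eq. (3)
print(round(nai_redundant(ntis, alphas, betas=[0.1] * 6), 6))  # 0.4: redundancy discounted
```

With all βi = 0 the redundant form collapses back to equation (3), which is a useful sanity check on any implementation.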
Figure 4 Relation between NAIr and empirical word-score intelligibility for
male speech (right) and female speech (left) with bandpass limiting and noise
with Redundancy Factors. The vertical spread from the best fitting polynomial
for males has a s.d. = 6.9% versus the STIr [8] s.d. = 4.7%, for females the best
fitting polynomial has a s.d. = 5.4% versus the STIr [8] s.d. = 4.0%.
The frequency weighting and redundancy factors given as optimal in Steeneken [8],
versus those calculated through optimizing the NAIr, are given in Figure 5.
Figure 5 Frequency and redundancy factors for the optimal predictor of male
and female intelligibility calculated with the NAIr and published in [8].
The frequency weights for the NAIr and STIr are more similar than in Section 3.1.
The redundancy factors are very different though. The NAI redundancy factors
show no real frequency dependence unlike the convex STI redundancy factors. This
may be due to differences in optimization that were not clear in [8].
Table 5: Standard Deviation of Prediction Error

            MALE     FEMALE   MALE     FEMALE
            EQ. 3    EQ. 3    EQ. 6    EQ. 6
NAI         8.9 %    7.1 %    6.9 %    5.4 %
STI [5]     12.8 %   8.8 %    -        -
STI [8]     6.8 %    6.0 %    4.7 %    4.0 %
The mean difference in error between the STIr, as given in [8], and the NAIr is 1.7%.
This difference may be from the limited CVC word choice. It is well within the
range of normal speaker variation, about 2%, so we believe that the NAI and NAIr
are comparable to the STI and STIr in predicting speech intelligibility.
4 Conclusions
These results are very encouraging. The NAI provides a modest improvement over
STI in predicting intelligibility. We do not propose this as a replacement for the STI
for general acoustics since the NAI is much more computationally complex than the
STI. The NAI?s end applications are in predicting hearing impairment intelligibility
and using statistical decision theory to describe the auditory systems feature
extractors - tasks which the STI cannot do, but are available to the NAI.
While the AI and STI can take into account threshold shifts in a hearing impaired
individual, neither can account for sensorineural, suprathreshold degradations [12].
The accuracy of this model, based on cat anatomy and physiology, in predicting
human speech intelligibility provides strong validation of attempts to design hearing
aid amplification schemes based on physiological data and models [13]. By
quantifying the hearing impairment in an intelligibility metric by way of a damaged
auditory model one can provide a more accurate assessment of the distortion, probe
how the distortion is changing the neuronal response and provide feedback for
preprocessing via a hearing aid before the impairment. The NAI may also give
insight into how the ear codes stimuli for the very robust, human auditory system.
References
[1] French, N.R. & Steinberg, J.C. (1947) Factors governing the intelligibility of speech
sounds. J. Acoust. Soc. Am. 19:90-119.
[2] Kryter, K.D. (1962) Validation of the articulation index. J. Acoust. Soc. Am. 34:1698-1702.
[3] Kryter, K.D. (1962b) Methods for the calculation and use of the articulation index. J.
Acoust. Soc. Am. 34:1689-1697.
[4] Houtgast, T. & Steeneken, H.J.M. (1973) The modulation transfer function in room
acoustics as a predictor of speech intelligibility. Acustica 28:66-73.
[5] Steeneken, H.J.M. & Houtgast, T. (1980) A physical method for measuring speech-transmission quality. J. Acoust. Soc. Am. 67(1):318-326.
[6] ANSI (1997) ANSI S3.5-1997 Methods for calculation of the speech intelligibility index.
American National Standards Institute, New York.
[7] Bruce, I.C., Sachs, M.B., Young, E.D. (2003) An auditory-periphery model of the effects
of acoustic trauma on auditory nerve responses. J. Acoust. Soc. Am., 113(1):369-388.
[8] Steeneken, H.J.M. (1992) On measuring and predicting speech intelligibility. Ph.D.
Dissertation, University of Amsterdam.
[9] van Son, R.J.J.H., Binnenpoorte, D., van den Heuvel, H. & Pols, L.C.W. (2001) The IFA
corpus: a phonemically segmented Dutch "open source" speech database. Eurospeech 2001
Poster. http://145.18.230.99/corpus/index.html
[10] Fletcher, H., & Galt, R.H. (1950) The perception of speech and its relation to telephony.
J. Acoust. Soc. Am. 22:89-151.
[11] Houtgast, T., & Verhave, J. (1991) A physical approach to speech quality assessment:
correlation patterns in the speech spectrogram. Proc. Eurospeech 1991, Genova:285-288.
[12] van Schijndel, N.H., Houtgast, T. & Festen, J.M. (2001) Effects of degradation of
intensity, time, or frequency content on speech intelligibility for normal-hearing and
hearing-impaired listeners. J. Acoust. Soc. Am. 110(1):529-542.
[13] Sachs, M.B., Bruce, I.C., Miller, R.L., & Young, E. D. (2002) Biological basis of
hearing-aid design. Ann. Biomed. Eng. 30:157-168.
Markov Models for Automated ECG Interval Analysis
Nicholas P. Hughes, Lionel Tarassenko and Stephen J. Roberts
Department of Engineering Science
University of Oxford
Oxford, OX1 3PJ, UK
{nph,lionel,sjrob}@robots.ox.ac.uk
Abstract
We examine the use of hidden Markov and hidden semi-Markov models for automatically segmenting an electrocardiogram waveform into
its constituent waveform features. An undecimated wavelet transform
is used to generate an overcomplete representation of the signal that is
more appropriate for subsequent modelling. We show that the state durations implicit in a standard hidden Markov model are ill-suited to those
of real ECG features, and we investigate the use of hidden semi-Markov
models for improved state duration modelling.
1 Introduction
The development of new drugs by the pharmaceutical industry is a costly and lengthy process, with the time from concept to final product typically lasting ten years. Perhaps the
most critical stage of this process is the phase one study, where the drug is administered
to humans for the first time. During this stage each subject is carefully monitored for any
unexpected adverse effects which may be brought about by the drug. Of particular interest
is the electrocardiogram (ECG1) of the patient, which provides detailed information about the state of the patient's heart.
By examining the ECG signal in detail it is possible to derive a number of informative
measurements from the characteristic ECG waveform. These can then be used to assess the
medical well-being of the patient, and more importantly, detect any potential side effects
of the drug on the cardiac rhythm. The most important of these measurements is the ?QT
interval?. In particular, drug-induced prolongation of the QT interval (so called Long QT
Syndrome) can result in a very fast, abnormal heart rhythm known as torsade de pointes,
which is often followed by sudden cardiac death.2
In practice, QT interval measurements are carried out manually by specially trained ECG
analysts. This is an expensive and time consuming process, which is susceptible to mistakes by the analysts and provides no associated degree of confidence (or accuracy) in the
measurements. This problem was recently highlighted in the case of the antihistamine
1 The ECG is also referred to as the EKG.
2 This is known as Sudden Arrhythmia Death Syndrome, or SADS.
[Figure 1 plot omitted; the labelled features are the P wave, QRS complex, T wave and U wave, the Baseline 1 and Baseline 2 regions, and the boundary points Pon, Poff, Q, J, Toff and Uoff.]
Figure 1: A human ECG waveform.
terfenadine, which had the side-effect of significantly prolonging the QT interval in a number of patients. Unfortunately this side-effect was not detected in the clinical trials and
only came to light after a large number of people had unexpectedly died whilst taking the
drug [8].
In this paper we consider the problem of automated ECG interval analysis from a machine
learning perspective. In particular, we examine the use of hidden Markov models for automatically segmenting an ECG signal into its constituent waveform features. A redundant
wavelet transform is used to provide an informative representation which is both robust to
noise and tuned to the morphological characteristics of the waveform features. Finally we
investigate the use of hidden semi-Markov models for explicit state duration modelling.
2 The Electrocardiogram
2.1 The ECG Waveform
Each individual heartbeat is comprised of a number of distinct cardiological stages, which
in turn give rise to a set of distinct features in the ECG waveform. These features represent
either depolarization (electrical discharging) or repolarization (electrical recharging) of the
muscle cells in particular regions of the heart. Figure 1 shows a human ECG waveform and
the associated features. The standard features of the ECG waveform are the P wave, the
QRS complex and the T wave. Additionally a small U wave (following the T wave) is
occasionally present.
The cardiac cycle begins with the P wave (the start and end points of which are referred
to as Pon and Poff ), which corresponds to the period of atrial depolarization in the heart.
This is followed by the QRS complex, which is generally the most recognisable feature of
an ECG waveform, and corresponds to the period of ventricular depolarization. The start
and end points of the QRS complex are referred to as the Q and J points. The T wave
follows the QRS complex and corresponds to the period of ventricular repolarization. The
end point of the T wave is referred to as Toff and represents the end of the cardiac cycle
(presuming the absence of a U wave).
2.2 ECG Interval Analysis
The timing between the onset and offset of particular features of the ECG (referred to as an
interval) is of great importance since it provides a measure of the state of the heart and can
indicate the presence of certain cardiological conditions. The two most important intervals
in the ECG waveform are the QT interval and the PR interval. The QT interval is defined
as the time from the start of the QRS complex to the end of the T wave, i.e. Toff − Q, and
corresponds to the total duration of electrical activity (both depolarization and repolarization) in the ventricles. Similarly, the PR interval is defined as the time from the start of the
P wave to the start of the QRS complex, i.e. Q − Pon, and corresponds to the time from
the onset of atrial depolarization to the onset of ventricular depolarization.
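To make the definitions concrete, both intervals follow directly from the labelled boundary times. The sketch below uses purely illustrative values (the function and variable names are not from the paper) and assumes all boundary points are expressed in the same time units:

```python
def qt_and_pr_intervals(p_on, q, t_off):
    """QT and PR intervals from labelled boundary points, all expressed in
    the same time units (e.g. milliseconds)."""
    qt = t_off - q   # ventricular depolarization + repolarization
    pr = q - p_on    # atrial onset to ventricular onset
    return qt, pr

# Illustrative boundary times (ms) within one beat:
qt, pr = qt_and_pr_intervals(p_on=50.0, q=210.0, t_off=610.0)
print(qt, pr)  # 400.0 160.0
```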
The measurement of the QT interval is complicated by the fact that a precise mathematical
definition of the end of the T wave does not exist. Thus T wave end measurements are
inherently subjective and the resulting QT interval measurements often suffer from a high
degree of inter- and intra-analyst variability. An automated ECG interval analysis system,
which could provide robust and consistent measurements (together with an associated degree of confidence in each measurement), would therefore be of great benefit to the medical
community.
2.3 Previous Work on Automated ECG Interval Analysis
The vast majority of algorithms for automated QT analysis are based on threshold methods
which attempt to predict the end of the T wave as the point where the T wave crosses a
predetermined threshold [3]. An exception to this is the work of Koski [4] who trained a
hidden Markov model on raw ECG data using the Baum-Welch algorithm. However the
performance of this model was not assessed against a labelled data set of ECG waveforms.
More recently, Graja and Boucher have investigated the use of hidden Markov tree models
for segmenting ECG signals encoded with the discrete wavelet transform [2].
3 Data Collection
In order to develop an automated system for ECG interval analysis, we collected a data
set of over 100 ECG waveforms (sampled at 500 Hz), together with the corresponding
waveform feature boundaries3 as determined by a group of expert ECG analysts. Due to
time constraints it was not possible for each expert analyst to label every ECG waveform
in the data set. Therefore we chose to distribute the waveforms at random amongst the
different experts (such that each waveform was measured by one expert only).
For each ECG waveform, the following points were labelled: Pon , Poff , Q, J and Toff (if a
U wave was present the Uoff point was also labelled). In addition, the point corresponding
to the start of the next P wave (i.e. the P wave of the following heart beat), NPon , was also
labelled. During the data collection exercise, we found that it was not possible to obtain
reliable estimates for the Ton and Uon points, and therefore these were taken to be the J
and Toff points respectively.
4 A Hidden Markov Model for ECG Interval Analysis
It is natural to view the ECG signal as the result of a generative process, in which each
waveform feature is generated by the corresponding cardiological state of the heart. In
addition, the ECG state sequence obeys the Markov property, since each state is solely
3 We developed a novel software application which enabled an ECG analyst to label the boundaries of each of the features of an ECG waveform, using a pair of 'onscreen calipers'.
                 P wave  Baseline 1  QRS complex  T wave  Baseline 2  U wave
P wave             5.5      47.2         0.5        4.4      26.5      15.9
Baseline 1         1.7      80.0         1.6        1.3       9.5       5.9
QRS complex        1.0      11.3        79.0        4.6       2.7       1.4
T wave             0.9       1.8         1.2       83.6       7.3       5.2
Baseline 2         2.3      32.2         1.3        3.5      31.8      28.9
U wave             0.6      25.3         0.6        3.9      26.8      42.8

Table 1: Percentage confusion matrix for an HMM trained on the raw ECG data (rows: true state; columns: decoded state).
dependent on the previous state. Thus, hidden Markov models (HMMs) would seem ideally
suited to the task of segmenting an ECG signal into its constituent waveform features.
Using the labelled data set of ECG waveforms we trained a hidden Markov model in a supervised manner. The model was comprised of the following states: P wave, QRS complex,
T wave, U wave, and Baseline. The parameters of the transition matrix aij were computed
using the maximum likelihood estimates, given by:
$\hat{a}_{ij} = n_{ij} / \sum_{k} n_{ik}$    (1)
where nij is the total number of transitions from state i to state j over all of the label sequences. We estimated the observation (or emission) probability densities bi for each state
i by fitting a Gaussian mixture model (GMM) to the set of signal samples corresponding
to that particular state4 . Model selection for the GMM was performed using the minimum
description length framework [1].
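A minimal sketch of the transition-matrix estimate of equation (1) follows; the toy state indexing and label sequence are illustrative, not taken from the actual data set:

```python
import numpy as np

def transition_mle(label_sequences, n_states):
    """Maximum likelihood estimate of the transition matrix (equation 1):
    a_ij = n_ij / sum_k n_ik, where n_ij counts transitions i -> j over
    all label sequences (states are integer indices)."""
    counts = np.zeros((n_states, n_states))
    for seq in label_sequences:
        for i, j in zip(seq[:-1], seq[1:]):
            counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions are left as all zeros.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Toy sample-by-sample label sequence over six states
# (0 = P wave, 1 = Baseline 1, 2 = QRS, 3 = T wave, 4 = Baseline 2, 5 = U wave):
seqs = [[0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4]]
A = transition_mle(seqs, n_states=6)
print(A[0, 0], A[0, 1])  # 2/3 self-transition, 1/3 to Baseline 1
```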
In our initial experiments, we found that the use of a single state to represent all the regions
of baseline in the ECG waveform resulted in poor performance when the model was used
to infer the underlying state sequence of new unseen waveforms. In particular, a single
baseline state allowed for the possibility of the model returning to the P wave state, following a P wave - Baseline sequence. Therefore we decided to partition the Baseline state into
two separate states; one corresponding to the region of baseline between the Poff and Q
points (which we termed 'Baseline 1'), and a second corresponding to the region between the Toff and NPon points5 (termed 'Baseline 2').
In order to fully evaluate the performance of our model, we performed 5-fold cross-validation on the data set of 100 labelled ECGs. Prior to training and testing, the raw
ECG data was pre-processed to have zero mean and unit energy. This was done in order
to normalise the dynamic range of the signals and stabilise the baseline sections. Once
the model had been trained, the Viterbi algorithm [9] was used to infer the optimal state
sequence for each of the signals in the test set.
Table 1 shows the resulting confusion matrix (computed from the state assignments on
a sample-point basis). Although reasonable classification accuracies are obtained for the
QRS complex and T wave states, the P wave state is almost entirely misclassified as Baseline 1, Baseline 2 or U wave. In order to improve the performance of the model, we require
an encoding of the ECG that captures the key temporal and spectral characteristics of the
waveform features in a more informative representation than that of the raw time series
data alone. Thus we now examine the use of wavelet methods for this purpose.
4 We also investigated autoregressive observation densities, although these were found to perform poorly in comparison to GMMs.
5 If a U wave was present the Uoff point was used instead of Toff.
                 P wave  Baseline 1  QRS complex  T wave  Baseline 2  U wave
P wave            74.2      14.4         0.1        0.3      11.0       0
Baseline 1        15.8      81.5         1.7        0.1       0.9       0
QRS complex        0         2.1        94.4        3.5       0         0
T wave             0         0           1.0       96.1       2.2       0.7
Baseline 2         1.4       0           0          1.6      95.6       1.4
U wave             0.1       0.1         0.1        1.7      85.6      12.4

Table 2: Percentage confusion matrix for an HMM trained on the wavelet encoded ECG (rows: true state; columns: decoded state).
4.1 Wavelet Encoding of ECG
Wavelets are a class of functions that possess compact support and form a basis for all
finite energy signals. They are able to capture the non-stationary spectral characteristics
of a signal by decomposing it over a set of atoms which are localised in both time and
frequency. These atoms are generated by scaling and translating a single mother wavelet.
The most popular wavelet transform algorithm is the discrete wavelet transform (DWT),
which uses the set of dyadic scales (i.e. those based on powers of two) and translates of
the mother wavelet to form an orthonormal basis for signal analysis. The DWT is therefore
most suited to applications such as data compression where a compact description of a
signal is required. An alternative transform is derived by allowing the translation parameter
to vary continuously, whilst restricting the scale parameter to a dyadic scale (thus, the
set of time-frequency atoms now forms a frame). This leads to the undecimated wavelet
transform6 (UWT), which for a signal $s \in L^2(\mathbb{R})$ is given by:

$w_\sigma(\tau) = \frac{1}{\sqrt{\sigma}} \int_{-\infty}^{+\infty} s(t)\, \psi^*\!\left(\frac{t-\tau}{\sigma}\right) dt, \qquad \sigma = 2^k,\ k \in \mathbb{Z},\ \tau \in \mathbb{R}$    (2)
where $w_\sigma(\tau)$ are the UWT coefficients at scale $\sigma$ and shift $\tau$, and $\psi^*$ is the complex conjugate of the mother wavelet. In practice the UWT can be computed in O(N log N) using fast filter bank algorithms [6].
The UWT is particularly well-suited to ECG interval analysis as it provides a time-frequency description of the ECG signal on a sample-by-sample basis. In addition, the
UWT coefficients are translation-invariant (unlike the DWT coefficients), which is important for pattern recognition applications.
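The translation-invariance property can be illustrated with a minimal single-level à trous decomposition. The sketch below uses a Haar wavelet and circular boundary handling for simplicity; this is an illustrative stand-in, not the Coiflet-based filter bank used in the actual system:

```python
import numpy as np

def uwt_haar_level1(s):
    """One level of an undecimated (a trous) wavelet decomposition with a
    Haar wavelet and circular boundaries. Unlike the decimated DWT there is
    no subsampling, so one coefficient is produced per input sample."""
    h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass (scaling) filter
    g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass (wavelet) filter
    s_next = np.roll(s, -1)                   # circular neighbour s[n + 1]
    return h[0] * s + h[1] * s_next, g[0] * s + g[1] * s_next

rng = np.random.default_rng(0)
s = rng.standard_normal(16)
_, detail = uwt_haar_level1(s)
_, detail_shifted = uwt_haar_level1(np.roll(s, 3))
# Shifting the input merely shifts the coefficients: translation invariance.
print(bool(np.allclose(detail_shifted, np.roll(detail, 3))))  # True
```

A decimated DWT does not have this property: after subsampling, a one-sample shift of the input changes which samples fall on the even grid, so the coefficients are not simply a shifted copy.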
In order to find the most effective wavelet basis for our application, we examined the performance of HMMs trained on ECG data encoded with wavelets from the Daubechies,
Symlet, Coiflet and Biorthogonal wavelet families. In the frequency domain, a wavelet at
a given scale is associated with a bandpass filter7 of a particular centre frequency. Thus
the optimal wavelet basis will correspond to the set of bandpass filters that are tuned to the
unique spectral characteristics of the ECG.
In our experiments we found that the Coiflet wavelet with two vanishing moments resulted
in the highest overall classification accuracy. Table 2 shows the results for this wavelet.
It is evident that the UWT encoding results in a significant improvement in classification
accuracy (for all but the U wave state), when compared with the results obtained on the raw
ECG data.
6 The undecimated wavelet transform is also known as the stationary wavelet transform and the translation-invariant wavelet transform.
7 These filters satisfy a constant relative bandwidth property, known as 'constant-Q'.
[Figure 2 plots omitted: three histogram panels (P wave, QRS complex, T wave), each comparing the 'True' and 'Model' distributions over state duration in ms.]
Figure 2: Histograms of the true state durations and those decoded by the HMM.
4.2 HMM State Durations
A significant limitation of the standard hidden Markov model is the manner in which it
models state durations. For a given state i with self-transition coefficient aii , the probability
density of the state duration d is a geometric distribution, given by:
$p_i(d) = (a_{ii})^{d-1} (1 - a_{ii})$    (3)
For the waveform features of the ECG signal, this geometric distribution is inappropriate.
Figure 2 shows histograms of the true state durations and the durations of the states decoded
by the HMM, for each of the P wave, QRS complex and T wave states. In each case it
is clear that a significant number of decoded states have a duration that is much shorter
than the minimum state duration observed with real ECG signals. Thus for a given ECG
waveform the decoded state sequence may contain many more state transitions than are
actually present in the signal. The resulting HMM state segmentation is then likely to be
poor and the resulting QT and PR interval measurements unreliable.
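The problem can be seen directly by evaluating equation (3): whatever the value of the self-transition coefficient, the geometric distribution places its mode at d = 1, so implausibly short durations are never penalised. A small illustrative sketch:

```python
import numpy as np

def hmm_duration_pmf(a_ii, d):
    """Implicit HMM state-duration distribution (equation 3):
    p(d) = a_ii**(d - 1) * (1 - a_ii), a geometric distribution."""
    d = np.asarray(d)
    return a_ii ** (d - 1) * (1.0 - a_ii)

# Even with a high self-transition coefficient, d = 1 is the most likely
# duration, unlike the peaked durations of real ECG features.
durations = np.arange(1, 6)
pmf = hmm_duration_pmf(0.95, durations)
print(bool(pmf[0] == pmf.max()))  # True
```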
One solution to this problem is to post-process the decoded state sequences using a median
filter designed to smooth out sequences whose duration is known to be physiologically
implausible. A more principled and more effective approach, however, is to model the
probability density of the individual state durations explicitly, using a hidden semi-Markov
model.
5 A Hidden Semi-Markov Model for ECG Interval Analysis
A hidden semi-Markov model (HSMM) differs from a standard HMM in that each of the
self-transition coefficients aii are set to zero, and an explicit probability density is specified
for the duration of each state [5]. In this way, the individual state duration densities govern
the amount of time the model spends in a given state, and the transition matrix governs
the probability of the next state once this time has elapsed. Thus the underlying stochastic
process is now a 'semi-Markov' process.
To model the durations $p_i(d)$ of the various waveform features of the ECG, we used a
Gamma density since this is a positive distribution which is able to capture the inherent
skewness of the ECG state durations. For each state i, maximum likelihood estimates of
the shape and scale parameters were computed directly from the set of labelled ECG signals
(as part of the cross-validation procedure).
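As an illustration of fitting the two Gamma parameters, the sketch below uses method-of-moments estimates; this is a simple stand-in for the maximum likelihood estimates described in the text, and the toy durations are invented:

```python
import numpy as np

def fit_gamma_moments(durations):
    """Method-of-moments estimates of the shape and scale of a Gamma
    density fitted to observed state durations."""
    d = np.asarray(durations, dtype=float)
    mean, var = d.mean(), d.var()
    shape = mean ** 2 / var   # matches the sample mean and variance
    scale = var / mean
    return shape, scale

# Toy T-wave durations in milliseconds:
shape, scale = fit_gamma_moments([180, 200, 220, 210, 190, 205])
print(round(shape * scale))  # fitted mean duration, about 201 ms
```

The product shape × scale is the fitted mean, so the estimate reproduces the average duration by construction; a skewed tail towards longer durations is captured by the shape parameter.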
In order to infer the most probable state sequence $Q = \{q_1 q_2 \cdots q_T\}$ for a given observation sequence $O = \{O_1 O_2 \cdots O_T\}$, the standard Viterbi algorithm must be modified to
[Figure 3 plots omitted: three histogram panels (P wave, QRS complex, T wave), each comparing the 'True' and 'Model' distributions over state duration in ms.]
Figure 3: Histograms of the true state durations and those decoded by the HSMM.
handle the explicit state duration densities of the HSMM. We start by defining the likelihood of the most probable state sequence that accounts for the first t observations and ends
in state i:
$\delta_t(i) = \max_{q_1 q_2 \cdots q_{t-1}} p(q_1 q_2 \cdots q_t = i,\ O_1 O_2 \cdots O_t \mid \lambda)$    (4)
where $\lambda$ is the set of parameters governing the HSMM. The recurrence relation for computing $\delta_t(i)$ is then given by:
$\delta_t(i) = \max_{d_i} \max_{j} \left\{ \delta_{t-d_i}(j)\, a_{ji}\, p_i(d_i) \prod_{t'=t-d_i+1}^{t} b_i(O_{t'}) \right\}$    (5)
where the outer maximisation is performed over all possible values of the state duration $d_i$
for state i, and the inner maximisation is over all states j. At each time t and for each state
i, the two arguments that maximise equation (5) are recorded, and a simple backtracking
procedure can then be used to find the most probable state sequence.
The time complexity of the Viterbi decoding procedure for an HSMM is given by $O(K^2 T D_{max})$, where K is the total number of states, and $D_{max}$ is the maximum range of state durations over all K states, i.e. $D_{max} = \max_i(\max(d_i) - \min(d_i))$. As noted in [7], scaling the computation of $\delta_t(i)$ to avoid underflow is non-trivial. However, by simply computing $\log \delta_t(i)$ it is possible to avoid any numerical problems.
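A sketch of the modified decoding procedure in log space follows. It is a direct, unoptimised transcription of equations (4) and (5) with cumulative emission sums and stored (duration, previous state) backpointers; the data structures and toy model are illustrative choices, not the authors' implementation:

```python
import numpy as np

def hsmm_viterbi(log_b, log_A, log_pi, log_dur, d_max):
    """Log-space Viterbi decoding for a hidden semi-Markov model
    (equations 4 and 5). log_b: (T, K) per-sample emission log-densities;
    log_A: (K, K) transition log-probabilities with -inf on the diagonal;
    log_pi: (K,) initial state log-probabilities; log_dur: (K, d_max)
    duration log-probabilities, log_dur[i, d-1] = log p_i(d).
    Returns the most probable state index for every sample."""
    T, K = log_b.shape
    # cum[t] holds summed emission log-densities of samples 0..t-1, so a
    # segment's emission term is a difference of two rows.
    cum = np.vstack([np.zeros(K), np.cumsum(log_b, axis=0)])
    delta = np.full((T, K), -np.inf)
    back = np.zeros((T, K, 2), dtype=int)  # (duration, previous state)
    for t in range(T):
        for d in range(1, min(d_max, t + 1) + 1):
            emit = cum[t + 1] - cum[t + 1 - d]
            if d == t + 1:  # the segment starts the sequence
                score = log_pi + log_dur[:, d - 1] + emit
                j = np.full(K, -1)
            else:  # maximise over the previous state (equation 5)
                cand = delta[t - d][:, None] + log_A
                j = np.argmax(cand, axis=0)
                score = cand[j, np.arange(K)] + log_dur[:, d - 1] + emit
            better = score > delta[t]
            delta[t, better] = score[better]
            back[t, better, 0] = d
            back[t, better, 1] = j[better]
    states = np.empty(T, dtype=int)
    t, i = T - 1, int(np.argmax(delta[T - 1]))
    while t >= 0:  # backtrack through the stored (duration, state) pairs
        d, j = back[t, i]
        states[t - d + 1 : t + 1] = i
        t, i = t - d, j
    return states

# Toy example: two states that must alternate, durations of 2 or 3 samples.
log_b = np.log(np.array([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3))
log_A = np.array([[-np.inf, 0.0], [0.0, -np.inf]])
log_pi = np.log(np.array([0.5, 0.5]))
log_dur = np.full((2, 3), -np.inf)
log_dur[:, 1:] = np.log(0.5)  # p(d=2) = p(d=3) = 0.5 for both states
states = hsmm_viterbi(log_b, log_A, log_pi, log_dur, d_max=3)
print(states.tolist())  # [0, 0, 0, 1, 1, 1]
```

Because durations shorter than two samples have zero probability here, the decoder cannot produce the one-sample segments that plague the standard HMM.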
Figure 3 shows histograms of the resulting state durations for an HSMM trained on a
wavelet encoding of the ECG (using 5-fold cross-validation). Clearly, the durations of
the decoded state sequences are very well matched to the true durations of each of the
ECG features. This improvement in duration modelling is reflected in the accuracy and
robustness of the segmentations produced by the HSMM.
Model                              Pon    Q    J   Toff
HMM on raw ECG                     157   31   27   139
HMM on wavelet encoded ECG          12   11   20    46
HSMM on wavelet encoded ECG         13    3    7    12

Table 3: Mean absolute segmentation errors (in milliseconds) for each of the models.
Table 3 shows the mean absolute errors8 for the Pon, Q, J and Toff points, for each of the
models discussed. On the important task of accurately determining the Q and Toff points
for QT interval measurements, the HSMM significantly outperforms the HMM.
8 The error was taken to be the time difference from the first decoded segment boundary to the true segment boundary (of the same type).
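For reference, the error metric of Table 3 can be sketched as follows, assuming boundary positions are given in samples at the 500 Hz sampling rate of the data set (the function name and values are illustrative):

```python
import numpy as np

def mean_absolute_error_ms(decoded, truth, fs=500.0):
    """Mean absolute boundary error in milliseconds, given decoded and true
    boundary positions in samples and the sampling rate fs in Hz."""
    decoded = np.asarray(decoded, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(decoded - truth)) * 1000.0 / fs)

# Illustrative Toff boundaries (samples at 500 Hz) for three test beats:
print(mean_absolute_error_ms([402, 398, 405], [400, 400, 400]))  # 6.0
```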
6 Discussion
In this work we have focused on the two core issues in developing an automated system for
ECG interval analysis: the choice of representation for the ECG signal and the choice of
model for the segmentation. We have demonstrated that wavelet methods, and in particular
the undecimated wavelet transform, can be used to generate an encoding of the ECG which
is tuned to the unique spectral characteristics of the ECG waveform features. With this representation the performance of the models on new unseen ECG waveforms is significantly
better than similar models trained on the raw time series data. We have also shown that the
robustness of the segmentation process can be improved through the use of explicit state
duration modelling with hidden semi-Markov models. With these models the detection accuracy of the Q and Toff points compares favourably with current methods for automated
QT analysis [3, 2].
A key advantage of probabilistic models over traditional threshold-based methods for ECG
segmentation is that they can be used to generate a confidence measure for each segmented
ECG signal. This is achieved by considering the log likelihood of the observed signal
given the model, i.e. $\log p(O \mid \lambda)$, which can be computed efficiently for both HMMs and
HSMMs. Given this confidence measure, it should be possible to determine a suitable
threshold for rejecting ECG signals which are either too noisy or too corrupted to provide
reliable estimates of the QT and PR intervals. The robustness with which we can detect
such unreliable QT interval measurements based on this log likelihood score is one of the
main focuses of our current research.
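For the standard HMM case, this log likelihood is computed with the forward algorithm. A log-space sketch with purely illustrative two-state parameters shows how a poorly fitting signal receives a lower score, suggesting a rejection threshold on this value:

```python
import numpy as np

def hmm_log_likelihood(log_b, log_A, log_pi):
    """log p(O | lambda) for a standard HMM via the forward algorithm in
    log space. log_b: (T, K) emission log-densities per sample; log_A:
    (K, K) transition log-probabilities; log_pi: (K,) initial log-probs."""
    log_alpha = log_pi + log_b[0]
    for t in range(1, len(log_b)):
        log_alpha = np.logaddexp.reduce(log_alpha[:, None] + log_A,
                                        axis=0) + log_b[t]
    return float(np.logaddexp.reduce(log_alpha))

# Illustrative two-state model: a well-fitting observation sequence scores
# higher than one the model explains poorly.
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_pi = np.log(np.array([0.5, 0.5]))
well_fitting = np.log(np.array([[0.9, 0.1]] * 5))
poorly_fitting = np.log(np.array([[0.5, 0.5]] * 5))
print(hmm_log_likelihood(well_fitting, log_A, log_pi) >
      hmm_log_likelihood(poorly_fitting, log_A, log_pi))  # True
```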
Acknowledgements
We thank Cardio Analytics Ltd for help with data collection and labelling, and Oxford
BioSignals Ltd for funding this research. NH thanks Iead Rezek for many useful discussions, and the anonymous reviewers for their helpful comments.
References
[1] M. A. T. Figueiredo and A. K. Jain. Unsupervised learning of finite mixture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):381-396, 2002.
[2] S. Graja and J. M. Boucher. Multiscale hidden Markov model applied to ECG segmentation. In WISP 2003: IEEE International Symposium on Intelligent Signal Processing, pages 105-109, Budapest, Hungary, 2003.
[3] R. Jané, A. Blasi, J. García, and P. Laguna. Evaluation of an automatic threshold based detector of waveform limits in Holter ECG with QT database. In Computers in Cardiology, pages 295-298. IEEE Press, 1997.
[4] A. Koski. Modelling ECG signals with hidden Markov models. Artificial Intelligence in Medicine, 8:453-471, 1996.
[5] S. E. Levinson. Continuously variable duration hidden Markov models for automatic speech recognition. Computer Speech and Language, 1(1):29-45, 1986.
[6] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 2nd edition, 1999.
[7] K. P. Murphy. Hidden semi-Markov models. Technical report, MIT AI Lab, 2002.
[8] C. M. Pratt and S. Ruberg. The dose-response relationship between Terfenadine (Seldane) and the QTc interval on the scalar electrocardiogram in normals and patients with cardiovascular disease and the QTc interval variability. American Heart Journal, 131(3):472-480, 1996.
[9] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.
Perception of the structure of the physical world
using unknown multimodal sensors and effectors
D. Philipona
Sony CSL, 6 rue Amyot
75005 Paris, France
[email protected]
J.K. O'Regan
Laboratoire de Psychologie Expérimentale, CNRS
Université René Descartes, 71, avenue Edouard Vaillant
92774 Boulogne-Billancourt Cedex, France
http://nivea.psycho.univ-paris5.fr
J.-P. Nadal
Laboratoire de Physique Statistique, ENS
rue Lhomond
75231 Paris Cedex 05
O. J.-M. D. Coenen
Sony CSL, 6 rue Amyot
75005 Paris, France
Abstract
Is there a way for an algorithm linked to an unknown body to infer by
itself information about this body and the world it is in? Taking the case
of space for example, is there a way for this algorithm to realize that its
body is in a three dimensional world? Is it possible for this algorithm to
discover how to move in a straight line? And more basically: do these
questions make any sense at all given that the algorithm only has access
to the very high-dimensional data consisting of its sensory inputs and
motor outputs?
We demonstrate in this article how these questions can be given a positive
answer. We show that it is possible to make an algorithm that, by analyzing the law that links its motor outputs to its sensory inputs, discovers
information about the structure of the world regardless of the devices
constituting the body it is linked to. We present results from simulations
demonstrating a way to issue motor orders resulting in "fundamental"
movements of the body as regards the structure of the physical world.
1 Introduction
What is it possible to discover from behind the interface of an unknown body, embedded
in an unknown world? In previous work [4] we presented an algorithm that can deduce
the dimensionality of the outside space in which it is embedded, by making random movements and studying the intrinsic properties of the relation linking outgoing motor orders to
resulting changes of sensory inputs (the so-called sensorimotor law [3]).
In the present article we provide a more advanced mathematical overview together with a
more robust algorithm, and we also present a multimodal simulation.
The mathematical section provides a rigorous treatment, relying on concepts from differential geometry, of what are essentially two very simple ideas. The first idea is that transformations of the organism-environment system which leave the sensory inputs unchanged
will do this independently of the code or the structure of sensors, and are in fact the only
aspects of the sensorimotor law that are independent of the code (property 1). In a single
given sensorimotor configuration the effects of such transformations induce what is called
a tangent space over which linear algebra can be used to extract a small number of independent basic elements, which we call "measuring rods". The second idea is that there is
a way of applying these measuring rods globally (property 2) so as to discover an overall
substructure in the set of transformations that the organism-environment system can suffer, and that leave sensory inputs unchanged. Taken together these ideas make it possible,
if the sensory devices are sufficiently informative, to extract an algebraic group structure
corresponding to the intrinsic properties of the space in which the organism is embedded.
The simulation section is for the moment limited to an implementation of the first idea. It
presents briefly the main steps of an implementation giving access to the measuring rods,
and presents the results of its application to a virtual rat with mixed visual, auditory and
tactile sensors (see Figure 2). The group discovered reveals the properties of the Euclidian
space implicit in the equations describing the physics of the simulated world.
Figure 1: The virtual organism used for the simulations. Random motor commands produce random changes in the rat?s body configuration, involving uncoordinated movements
of the head, changes in the gaze direction, and changes in the aperture of the eyelids and
diaphragms.
2 Mathematical formulation
Let us note S the sensory inputs, and M the motor outputs. They are the only things
the algorithm can access. Let us note P the configurations of the body controlled by the
algorithm and E the configurations of the environment.
We will assume that the body position is controlled by the multidimensional motor outputs through some law ψa, and that the sensory devices together deliver a multidimensional input that is a function ψb of the configuration of the body and the configuration of the environment:

    P = ψa(M)   and   S = ψb(P, E)

We shall write ψ(M, E) := ψb(ψa(M), E), note 𝓢, 𝓜, 𝓟, 𝓔 the sets of all S, M, P, E, and assume that 𝓜 and 𝓔 are manifolds.
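To make this setup concrete, the following minimal Python/NumPy sketch (with arbitrary smooth stand-in maps, not the simulation of Section 3) composes a motor law and a sensor law into the single sensorimotor law the algorithm can probe, and applies an invertible re-encoding of the sensory values of the kind discussed below:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_a(M):
    # motor command -> body configuration (arbitrary smooth stand-in)
    return np.tanh(M[:2] + 0.5 * M[2:])

def psi_b(P, E):
    # (body, environment) -> sensory input (arbitrary smooth stand-in)
    return np.concatenate([np.sin(P + E[:2]), np.cos(P - E[2:])])

def psi(M, E):
    # the composed sensorimotor law: the only map the algorithm can probe
    return psi_b(psi_a(M), E)

def h(S):
    # an invertible (strictly increasing, componentwise) change of encoding
    return np.exp(S) + 3.0 * S

M, E = rng.normal(size=4), rng.normal(size=4)
assert np.allclose(psi(M, E), psi_b(psi_a(M), E))
# h changes the raw sensory values ...
assert not np.allclose(h(psi(M, E)), psi(M, E))
# ... but, being invertible, it preserves exactly which (M, E) situations are
# sensorily indistinguishable, the only structure that can be code-independent.
```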
2.1 Isotropy group of the sensorimotor law
Through time, the algorithm will be able to experiment with a set of sensorimotor laws linking its inputs to its outputs:

    ψ(·, 𝓔) := {M ↦ ψ(M, E), E ∈ 𝓔}

These are a set of functions from 𝓜 to 𝓢, parametrized by the environmental state E. Our goal is to extract from this set something that does not depend on the way the sensory information is provided; in other words, something that would be the same for all h ∘ ψ(·, E), where h is an invertible function corresponding to a change of encoding, including changes of the sensory devices (as long as they provide access to the same information).

If we note Sym(X) := {f : X → X, f a one-to-one mapping}, and consider

    Γ(ψ) = {f ∈ Sym(𝓜 × 𝓔) such that ψ ∘ f = ψ}

then

Property 1  Γ(ψ1) = Γ(ψ2)  ⟺  ∃f ∈ Sym(𝓢) such that ψ1 = f ∘ ψ2
Thus Γ(ψ) is invariant by change of encoding, and retains from ψ all that is independent of the encoding. This result is easily understood using an example from physics: think of a light sensor with unknown characteristics in a world consisting of a single point light source. The values of the measures are very dependent on the sensor, but the fact that they are equal on concentric spheres is an intrinsic property of the physics of the situation (Γ(ψ), in this case, would be the group of rotations) and is independent of the code and of the sensor's characteristics.

But how can we understand the transformations f which, first, involve a manifold 𝓔 the algorithm does not know, and, second, are invisible since ψ ∘ f = ψ? We will show that, under one reasonable assumption, there is an algorithm that can discover the Lie algebra of the Lie subgroups of Γ(ψ) that have independent actions over 𝓜 and 𝓔, i.e. Lie groups G such that g(M, E) = (g1(M), g2(E)) for any g ∈ G, with

    ψ(g1(M), g2(E)) = ψ(M, E)   ∀g ∈ G        (1)

2.2 Fundamental vector fields over the sensory inputs
We will assume that the sensory inputs provide enough information to observe univocally the changes of the environment when the exteroceptive sensors do not move. In mathematical form, we will assume that:

Condition 1  There exists 𝓤 × 𝓥 ⊆ 𝓜 × 𝓔 such that ψ(M, ·) is an injective immersion from 𝓥 to 𝓢 for any M ∈ 𝓤

Under this condition, ψ(M, 𝓥) is a manifold for any M ∈ 𝓤 and ψ(M, ·) is a diffeomorphism from 𝓥 to ψ(M, 𝓥). We shall write ψ⁻¹(M, ·) its inverse. Choosing M0 ∈ 𝓤, it is thus possible to define an action Φ^{M0} of G over the manifold ψ(M0, 𝓥):

    Φ^{M0}(g, S) := ψ(M0, g2(ψ⁻¹(M0, S)))   ∀S ∈ ψ(M0, 𝓥)

As a consequence (see for instance [2]), for any left invariant vector field X on G there is an associated fundamental vector field X^S on ψ(M0, 𝓥)¹:

    X^S(S) := (d/dt) Φ^{M0}(e^{−tX}, S)|_{t=0}   ∀S ∈ ψ(M0, 𝓥)

¹ To avoid heavy notations we have written X^S instead of X^{ψ(M0, 𝓥)}.
The key point for us is that this whole vector field can be discovered experimentally by the algorithm from one vector alone: let us suppose the algorithm knows the one vector (d/dt) φ1(e^{−tX}, M0)|_{t=0} ∈ T𝓜|_{M0} (the tangent space of 𝓜 at M0, where φ1(g, M) := g1(M) denotes the action of G on 𝓜), that we will call a measuring rod. Then it can construct a motor command MX(t) such that

    MX(0) = M0   and   ṀX(0) = (d/dt) φ1(e^{−tX}, M0)|_{t=0}

and observe the fundamental field, thanks to the property:

Property 2  X^S(S) = (d/dt) ψ(MX(t), ψ⁻¹(M0, S))|_{t=0}   ∀S ∈ ψ(M0, 𝓥)
Indeed the movements of the environment reveal a sub-manifold ψ(M0, 𝓥) of the manifold 𝓢 of all sensory inputs, and this means they allow to transport the sensory image of the given measuring rod over this sub-manifold: X^S(S) is the time derivative of the sensory inputs at t = 0 in the movement implied by the motor command MX in that configuration of the environment yielding S at t = 0.

The fundamental vector fields are the key to our problem because [2]:

    [X^S, Y^S] = [X, Y]^S

where the left term uses the bracket of the vector fields on ψ(M0, 𝓥) and the right term uses the bracket in the Lie algebra of G. Thus clearly we can get insight into the properties of the latter by the study of these fields. If the action Φ^{M0} is effective (and it is possible to show that for any G there is a subgroup such that it is), we have the additional properties:

1. X ↦ X^S is an injective Lie algebra morphism: we can understand the whole Lie algebra of G through the Lie bracket over the fundamental vector fields
2. G is diffeomorphic to the group of finite compositions of fundamental flows: any element g of G can be written as g = e^{X1} e^{X2} ... e^{Xk}, and

    Φ^{M0}(g, S) = Φ^{M0}(e^{X1}, Φ^{M0}(e^{X2}, ... Φ^{M0}(e^{Xk}, S)))
2.3 Discovery of the measuring rods
Thus the question is: how can the algorithm come to know the measuring rods? If ψ is not singular (that is: is a subimmersion on 𝓤 × 𝓥, see [1]), then it can be demonstrated that:

Property 3  (∂ψ/∂M)(M0, E0) [Ṁ − ṀX] = 0  ⟹  (d/dt) ψ(M(t), ·)|_{t=0} = X^S(ψ(M0, ·))

This means that the particular choice of one vector of T𝓜|_{M0} among those that have the same sensory image as a given measuring rod is of no importance for the construction of the associated vector field. Consequently, the search for the measuring rods becomes the search for their sensory images, which form a linear subspace of the intersection of the tangent spaces of ψ(M0, 𝓥) and ψ(𝓤, E0) (as a direct consequence of property 2):

    ∀X,  (∂ψ/∂M)(M0, E0) (d/dt) φ1(e^{−tX}, M0)|_{t=0} ∈ Tψ(M0, 𝓥)|_{S0} ∩ Tψ(𝓤, E0)|_{S0}

But what about the rest of the intersection? Reciprocally, it can be shown that:

Property 4  Any measuring rod that has a sensory image in the intersection of the tangent spaces of ψ(M0, 𝓥) and ψ(𝓤, E) for any E ∈ 𝓥 reveals a monodimensional subgroup of transformations over 𝓥 that is invariant under any change of encoding.
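The practical content of property 3, namely that two motor directions whose difference lies in the nullspace of the tangent sensorimotor law yield the same sensory derivative, can be checked numerically. The Jacobian below is a hypothetical NumPy stand-in for the tangent law at (M0, E0), not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical tangent sensorimotor law at (M0, E0): a rank-deficient
# Jacobian (16 motor directions, only 14 effective ones reach the sensors).
J = rng.normal(size=(90, 14)) @ rng.normal(size=(14, 16))

dM = rng.normal(size=16)
_, sv, Vt = np.linalg.svd(J)
null_vec = Vt[-1]                          # a motor direction the sensors ignore
assert np.allclose(J @ null_vec, 0, atol=1e-9)

# Adding such a direction to dM leaves the sensory derivative unchanged:
# the choice among motor vectors with the same sensory image is immaterial,
# so only the sensory image of a measuring rod matters.
assert np.allclose(J @ (dM + null_vec), J @ dM, atol=1e-9)
```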
3 Simulation

3.1 Description of the virtual rat
We have applied these ideas to a virtual body satisfying the different necessary conditions
for the theory to be applied. Though our approach would also apply to the situation where
the sensorimotor law involves time-varying functions, for simplicity here we shall take
the restricted case where S and M are linked by a non-delayed relationship. We thus
implemented a rat's head with instantaneous reactions so that M ∈ ℝ^m and S ∈ ℝ^s. In
the simulation, m and s have been arbitrarily assigned the value 300.
The head had visual, auditory and tactile input devices (see Figure 2). The visual device
consisted of two eyes, each one being constituted by 40 photosensitive cells randomly
distributed on a planar retina, one lens, one diaphragm (or pupil) and two eyelids. The
images of the 9 light sources constituting the environment were projected through the lens
on the retina to locally stimulate photosensitive cells, with a total influx related to the
aperture of the diaphragm and the eyelids. The auditory device was constituted by one
amplitude sensor in each of the two ears, with a sensitivity profile favoring auditory sources
with azimuth and elevation 0° with respect to the orientation of the head. The tactile device
was constituted by 4 whiskers on each side of the rat's jaw, that stuck to an object when
touching it, and delivered a signal related to the shift from rest position. The global sensory
inputs of dimension 90 (2 × 40 photosensors plus 2 auditory sensors plus 8 tactile sensors)
were delivered to the algorithm through a linear mixing of all the signals delivered by
these sensors, using a random matrix W_S ∈ M(s, 90) representing some sensory neural
encoding in dimension s = 300.
Figure 2: The sensory system. (a) the sensory part of both eyes is constituted of randomly
distributed photosensitive cells (small dark dots). (b) the auditory sensors have a gain
profile favoring sounds coming from the front of the ears. (c) tactile devices stick to the
sources they come into contact with.
The motor device was as follows. Sixteen control parameters were constructed from linear combinations of the motor outputs of dimension m = 300 using a random matrix
W_M ∈ M(16, m) representing some motor neural code. The configuration of the rat's
head was then computed from these sixteen variables in this way: six parameters controlled the position and orientation of the head, and, for each eye, three controlled the eye
orientation plus two the aperture of the diaphragm and the eyelids. The whiskers were not
controllable, but were fixed to the head.
In the simulation we used linear encodings W_S and W_M in order to show that the algorithm
worked even when the dimension of the sensory and motor vectors was high. Note first
however that any, even non-linear, continuous high-dimensional function could have been
used instead of the linear mixing matrices. More important, note that even when linear
mixing is used, the sensorimotor law is highly nonlinear: the sensors deliver signals that
are not linear with respect to the configuration of the rat's head, and this configuration is
itself not linear with respect to the motor outputs.
3.2 The algorithm
The first important result of the mathematical section was that the sensory images of the
measuring rods are in the intersection between the tangent space of the sensory inputs
observed when issuing different motor outputs while the environment is immobile, and the
tangent space of the sensory inputs observed when the command being issued is constant.
In the present simulation we will only be making use of this point, but keep in mind that
the second important result was the relation between the fundamental vector fields and
these measuring rods. This implies that the tangent vectors we are going to find by an
experiment for a given sensory input S0 = ψ(M0, E0) can be transported in a particular
way over the whole sub-manifold ψ(M0, 𝓥), thereby generating the sensory consequences
of any transformation of E associated with the Lie subgroup of Γ(ψ) whose measuring rods
have been found.
Figure 3: Amplitudes of the ratio of successive singular values of: (a) the estimated tangent
sensorimotor law (when E is fixed at E0 ) during the bootstrapping process; (b) the matrix
corresponding to an estimated generating family for the tangent space to the manifold of
sensory inputs observed when M is fixed at M0 ; (c) the matrix constituted by concatenating
the vectors found in the two previous cases. The nullspaces of the two first matrices reflect
redundant variables; the nullspace of the last one is related to the intersection of the two
first tangent spaces (see equation 2). The graphs show there are 14 control parameters
with respect to the body, and 27 variables to parametrize the environment (see text). The
nullspace of the last matrix leads to the computation of an intersection of dimension 6
reflecting the Lie group of Euclidian transformations SE(3) (see text).
In [4], the simulation aimed to demonstrate that the dimensions of the different vector
spaces involved were accessible. We now present a simulation that goes beyond this by estimating these vector spaces themselves, in particular Tψ(M0, 𝓥)|_{S0} ∩ Tψ(𝓤, E0)|_{S0}, in
the case of multimodal sensory inputs and with a robust algorithm. The method previously
used to estimate the first tangent space, and more specifically its dimension, indeed required
an unrealistic level of accuracy. One of the reasons was the poor behavior of the Singular
Value Decomposition when dealing with badly conditioned matrices. We have developed a
much more stable method, that furthermore uses time derivatives as a more plausible way
to estimate the differential than multivariate linear approximation. Indeed, the nonlinear
functional relationship between the motor output and the sensory inputs implies an exact
linear relationship between their respective time derivatives at a given motor output M0:

    S(t) = ψ(M(t), E0)  ⟹  Ṡ(0) = (∂ψ/∂M)(M0, E0) Ṁ(0)

and this linear relationship can be estimated as the linear mapping associating Ṁ(0), for any curve in the motor command space such that M(0) = M0, to the resulting Ṡ(0).
The idea is then to use bootstrapping to estimate the time derivative of the "good" sensory
input combinations along the "good" movements so that this linear relation is diagonal and
the decomposition unnecessary: the purpose of the SVD used at each step is to provide
an indication of what vectors seem to be of interest. At the end of the process, when
the linear relationship is judged to be sufficiently diagonal, the singular values are taken
as the diagonal elements, and are thus estimated with the precision of the time derivative
estimator. Figure 3a presents the evolution of the estimated dimension of the tangent space
during this bootstrapping process.
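This dimension-estimation step can be illustrated in isolation. The sketch below (Python/NumPy; the maps and sizes are hypothetical stand-ins rather than the rat simulation, and the time derivatives are computed in closed form) drives a redundant 16-parameter motor system whose effective dimension is 14, and reads the tangent-space dimension off the largest ratio of successive singular values, as in Figure 3a:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the rat: 16 motor variables drive only 14
# effective body parameters, which a smooth map sends to 90 sensors.
A = rng.normal(size=(14, 16)) / 4      # motor -> body (built-in redundancy)
W = rng.normal(size=(90, 14)) / 4      # body -> sensors
M0 = rng.normal(size=16) / 4

def sensory_derivative(dM):
    # exact time derivative dS/dt of S(t) = tanh(W @ tanh(A @ M(t)))
    # for a motor curve with M(0) = M0 and dM/dt(0) = dM
    u = A @ M0
    v = W @ np.tanh(u)
    return (1 - np.tanh(v) ** 2) * (W @ ((1 - np.tanh(u) ** 2) * (A @ dM)))

# Stack sensory derivatives for random motor directions; the largest jump in
# the ratio of successive singular values marks the frontier between
# significantly non-zero values and numerically zero ones.
D = np.stack([sensory_derivative(rng.normal(size=16)) for _ in range(40)], axis=1)
sv = np.linalg.svd(D, compute_uv=False)
sv = np.maximum(sv, sv[0] * 1e-12)     # floor tiny values to avoid 0/0 ratios
dim = int(np.argmax(sv[:-1] / sv[1:])) + 1
assert dim == 14                       # the 14 effective control parameters
```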
Using this method in the first stage of the experiment when the environment is immobile
makes it possible for the algorithm, at the same time as it finds a basis for the tangent
space, to calibrate the signals coming from the head: it extracts sensory input combinations
that are meaningful as regards its own mobility. Then during a second stage, using these
combinations, it estimates the tangent space to sensory inputs resulting from movement of
the environment while it keeps its motor output fixed at M0 . Finally, using the tangent
spaces estimated in these two stages, it computes their intersection: if TS_M is a matrix containing the basis of the first tangent space, and TS_E a basis of the second tangent space, then the nullspace of [TS_M, TS_E] allows to generate the intersection of the two spaces:

    [TS_M, TS_E] λ = 0  ⟹  TS_M λ_M = −TS_E λ_E,   where λ = (λ_M^T, λ_E^T)^T        (2)
To conclude, using the pseudo-inverse of the tangent sensorimotor law, the algorithm computes measuring rods that have a sensory image in that intersection; and this computation
is simple since the adaptation process made the tangent law diagonal.
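The nullspace construction of equation 2 can likewise be checked on synthetic data (hypothetical stand-in bases in a 20-dimensional sensory space with a planted 3-dimensional intersection; not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two tangent-space bases sharing a planted 3-dimensional intersection.
common = rng.normal(size=(20, 3))
TS_M = np.hstack([common, rng.normal(size=(20, 5))])   # 8-dimensional subspace
TS_E = np.hstack([common, rng.normal(size=(20, 6))])   # 9-dimensional subspace

# Equation (2): a vector lam in the nullspace of [TS_M, TS_E] satisfies
# TS_M @ lam_M = -TS_E @ lam_E, so such vectors generate the intersection.
concat = np.hstack([TS_M, TS_E])                       # 20 x 17
_, sv, Vt = np.linalg.svd(concat)
null_dim = concat.shape[1] - int(np.sum(sv > 1e-10))   # 17 - rank(concat)
assert null_dim == 3                                   # = 17 - (8 + 9 - 3)

lam = Vt[-null_dim:].T                                 # 17 x 3 nullspace basis
inter = TS_M @ lam[:8]                                 # columns lie in both spaces
# Check: each intersection vector is (numerically) contained in TS_E as well.
coeffs = np.linalg.lstsq(TS_E, inter, rcond=None)[0]
assert np.allclose(TS_E @ coeffs, inter, atol=1e-8)
```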
3.3 Results²
Figure 3a demonstrates the evolution of the estimation of the ratio between successive singular values. The maximum of this ratio can be taken as the frontier between significantly
non-zero values and zero ones, and thus reveals the dimension of the tangent space to the
sensory inputs observed in an immobile environment. There are indeed 14 effective parameters of control of the body with respect to the sensory inputs: from the 16 parameters
described in section 3.1, for each eye the two parameters controlling the aperture of the diaphragm and the eyelids combine in a single effective one characterizing the total incoming
light influx.
After this adaptation process the tangent space to sensory inputs observed for a fixed motor
output M0 can be estimated without bootstrapping as shown, as regards its dimension (27 =
9 × 3 for the 9 light sources moving in a three dimensional space), in Figure 3b. The
intersection is computed from the nullspace of the matrix constituted by concatenation
of generating vectors of the two previous spaces, using equation 2. This nullspace is of
² The Matlab code of the simulation can be downloaded at http://nivea.psycho.univ-paris5.fr/~philipona for further examination.
Figure 4: The effects of motor commands corresponding to a generating family of 6 independent measuring rods computed by the algorithm. They reveal the control of the head
in a rigid fashion. Without the Lie bracket to understand commutativity, these movements
involve arbitrary compositions of translations and rotations.
dimension 41 − 35 = 6, as shown in Figure 3c. Note that the graph shows the ratio
of successive singular values, and thus has one less value than the number of vectors.
Figure 4 demonstrates the movements of the rat's head associated with the measuring rods
found using the pseudoinverse of the sensorimotor law. Contrast these with the non-rigid
movements of the rat's head associated with random motor commands of Figure 1.
4 Conclusion
We have shown that sensorimotor laws possess intrinsic properties related to the structure
of the physical world in which an organism?s body is embedded. These properties have an
overall group structure, for which smoothly parametrizable subgroups that act separately on
the body and on the environment can be discovered. We have briefly presented a simulation
demonstrating the way to access the measuring rods of these subgroups.
We are currently conducting our first successful experiments on the estimation of the Lie
bracket. This will allow the groups whose measuring rods have been found to be decomposed. It will then be possible for the algorithm to distinguish for instance between translations and rotations, and between rotations around different centers.
The question now is to determine what can be done with these first results: is this intrinsic
understanding of space enough to discover the subgroups of Γ(ψ) that do not act both on
the body and the environment: for example those acting on the body alone should provide
a decomposition of the body with respect to its articulations.
The ultimate goal is to show that there is a way of extracting objects in the environment
from the sensorimotor law, even though nothing is known about the sensors and effectors.
References
[1] N. Bourbaki. Variétés différentielles et analytiques. Fascicule de résultats. Hermann, 1971-1997.
[2] T. Masson. Géométrie différentielle, groupes et algèbres de Lie, fibrés et connexions. LPT, 2001.
[3] J. K. O'Regan and A. Noë. A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 2001.
[4] D. Philipona, K. O'Regan, and J.-P. Nadal. Is there something out there? Inferring space from sensorimotor dependencies. Neural Computation, 15(9), 2003.
Finding the M Most Probable
Configurations Using Loopy Belief
Propagation
Chen Yanover and Yair Weiss
School of Computer Science and Engineering
The Hebrew University of Jerusalem
91904 Jerusalem, Israel
{cheny,yweiss}@cs.huji.ac.il
Abstract
Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein
folding to image analysis one would like to find not just the best
configuration but rather the top M . While this problem has been
solved using the junction tree formalism, in many real world problems the clique size in the junction tree is prohibitively large. In
this work we address the problem of finding the M best configurations when exact inference is impossible.
We start by developing a new exact inference algorithm for calculating the best configurations that uses only max-marginals. For approximate inference, we replace the max-marginals with the beliefs
calculated using max-product BP and generalized BP. We show empirically that the algorithm can accurately and rapidly approximate
the M best configurations in graphs with hundreds of variables.
1 Introduction
Considerable progress has been made in the field of approximate inference using
techniques such as variational methods [7], Monte-Carlo methods [5], mini-bucket
elimination [4] and belief propagation (BP) [6]. These techniques allow approximate
solutions to various inference tasks in graphical models where building a junction
tree is infeasible due to the exponentially large clique size. The inference tasks that
have been considered include calculating marginal probabilities, finding the most
likely configuration, and evaluating or bounding the log likelihood.
In this paper we consider an inference task that has not been tackled with the same
tools of approximate inference: calculating the M most probable configurations
(MPCs). This is a natural task in many applications. As a motivating example,
consider the protein folding task known as the side-chain prediction problem. In
our previous work [17], we showed how to find the minimal-energy side-chain configuration using approximate inference in a graphical model. The graph has 300
nodes and the clique size in a junction tree calculated using standard software [10]
can be up to an order of 10^42, so that exact inference is obviously impossible. We
showed that loopy max-product belief propagation (BP) achieved excellent results
in finding the first MPC for this graph. In the few cases where BP did not converge, Generalized Belief Propagation (GBP) always converged, with an increase in
computation. But we are also interested in finding the second best configuration,
the third best or, more generally, the top M configurations. Can this also be done
with BP ?
The problem of finding the M MPCs has been successfully solved within the junction
tree (JT) framework. However, to the best of our knowledge, there has been no
equivalent solution when building a junction tree is infeasible. A simple solution
would be outputting the top M configurations that are generated by a Monte-Carlo
simulation or by a local search algorithm from multiple initializations. As we show
in our simulations, both of these solutions are unsatisfactory. Alternatively, one
can attempt to use more sophisticated heuristically guided search methods (such
as A*) or use exact MPCs algorithms on an approximated, reduced-size junction
tree [4, 1]. However, given the success of BP and GBP in finding the first MPC in
similar problems [6, 9] it is natural to look for a method based on BP. In this paper
we develop such an algorithm. We start by showing why the standard algorithm [11]
for calculating the top M MPCs cannot be used in graphs with cycles. We then
introduce a novel algorithm called Best Max-Marginal First (BMMF) and show
that when the max-marginals are exact it provably finds the M MPCs. We show
simulation results of BMMF in graphs where exact inference is impossible, with
excellent performance on challenging graphical models with hundreds of variables.
2 Exact MPCs algorithms
We assume our hidden variables are denoted by a vector X, N = |X| and the
observed variables by Y, where Y = y. Let mk = (mk(1), mk(2), ⋯, mk(N)) denote the kth MPC. We first seek a configuration m1 that maximizes Pr(X = x|y). Pearl, Dawid and others [12, 3, 11] have shown that this configuration can be calculated using a quantity known as max-marginals (MMs):

    max_marginal(i, j) = max_{x: x(i)=j} Pr(X = x|y)        (1)
Max-marginal lemma: If there exists a unique MAP assignment m1 (i.e.
Pr(X = m1 |y) > Pr(X = x|y), ?x 6= m1 ) then x1 defined by x1 (i) =
arg maxj max marginal(i, j) will recover the MAP assignment, m1 = x1 .
Proof: Suppose that there exists i for which m1(i) = k, x1(i) = l, and k ≠ l.
It follows that max_{x:x(i)=k} Pr(X = x|y) > max_{x:x(i)=l} Pr(X = x|y), which is a
contradiction to the definition of x1.
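Equation (1) and the lemma are easy to check numerically. The sketch below computes max-marginals by exhaustive enumeration over a tiny made-up model; the potentials and sizes are hypothetical, and brute force stands in for the BP-based computation the paper uses on large graphs:

```python
from itertools import product

def max_marginals(log_potential, n_vars, n_states):
    """Compute max_marginal(i, j) = max over x with x(i)=j of log p(x).

    Brute force over all configurations, feasible only for toy models.
    """
    mm = [[float("-inf")] * n_states for _ in range(n_vars)]
    for x in product(range(n_states), repeat=n_vars):
        p = log_potential(x)
        for i, j in enumerate(x):
            if p > mm[i][j]:
                mm[i][j] = p
    return mm

def map_from_max_marginals(mm):
    # Max-marginal lemma: with a unique MAP, maximizing each row recovers it.
    return tuple(max(range(len(row)), key=row.__getitem__) for row in mm)
```

On a 3-variable toy chain the recovered assignment matches exhaustive search, as the lemma predicts.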
When the graph is a tree, the MMs can be calculated exactly using max-product
belief propagation [16, 15, 12] using two passes: one up the tree and the other down
the tree. Similarly, for an arbitrary graph they can be calculated exactly using two
passes of max-propagation in the junction tree [2, 11, 3].
A more efficient algorithm for calculating m1 requires only one pass of max-propagation. After calculating the max-marginal exactly at the root node, the
MAP assignment m1 can be calculated by tracing back the pointers that were used
during the max-propagation [11]. Figure 1a illustrates this traceback operation in
the Viterbi algorithm in HMMs [13] (the pairwise potentials favor configurations
where neighboring nodes have different values). After calculating messages from left
[Figure 1 graphics: (a) a three-node chain over x(1), x(2), x(3) with the traceback path x(3) = 1, x(2) = 0, x(1) = 1; (b) the same three nodes arranged in a loop.]
Figure 1: a. The traceback operation in the Viterbi algorithm. The MAP configuration can be calculated by a forward message passing scheme followed by a
backward "traceback". b. The same traceback operation applied to a loopy graph
may give inconsistent results.
to right using max-product, we have the max-marginal at node 3 and can calculate
x1 (3) = 1. We then use the value of x1 (3) and the message from node 1 to 2 to find
x1 (2) = 0. Similarly, we then trace back to find the value of x1 (1).
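The forward pass plus traceback described above can be sketched for a chain, where it is exact. The node and edge potentials below are hypothetical; as in Figure 1a, the edge potential favors neighbors taking different values:

```python
import numpy as np

def viterbi(node_logpot, edge_logpot):
    """Max-product forward pass on a chain, followed by traceback.

    node_logpot: (N, K) per-node log-potentials.
    edge_logpot: (K, K) shared pairwise log-potential.
    Returns the MAP assignment as a list of N states.
    """
    n, k = node_logpot.shape
    msg = np.empty((n, k))              # best log-score of a prefix ending in state j
    back = np.zeros((n, k), dtype=int)  # backpointers used by the traceback
    msg[0] = node_logpot[0]
    for i in range(1, n):
        scores = msg[i - 1][:, None] + edge_logpot + node_logpot[i][None, :]
        back[i] = scores.argmax(axis=0)
        msg[i] = scores.max(axis=0)
    # traceback: choose the best final state, then follow the pointers backwards
    states = [int(msg[-1].argmax())]
    for i in range(n - 1, 0, -1):
        states.append(int(back[i, states[-1]]))
    return states[::-1]
```

With fields slightly favoring x(1) = 0 and x(3) = 1 and a disagreement bonus on edges, the MAP alternates, just like the chain in Figure 1a.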
These traceback operations, however, are problematic in loopy graphs. Figure 1b
shows a simple example from [15] with the same potentials as in figure 1a. After
setting x1(3) = 1 we trace back and find x1(2) = 0, x1(1) = 1 and finally x1(3) = 0,
which is obviously inconsistent with our initial choice.
One advantage of using traceback is that it can recover m1 even if there are "ties"
in the MMs, i.e. when there exists a max-marginal that has a non-unique maximizing value. When there are ties, the max-marginal lemma no longer holds and
independently maximizing the MMs will not find m1 (cf. [12]).
Finding m1 using only MMs requires multiple computations of the MMs, each time
with the additional constraint x(i) = j, where i is a tied node and j one of its
maximizing values, until no ties exist. It is easy to show that this algorithm
will recover m1 . The proof is a special case of the proof we present for claim 2 in
the next section. However, we need to recalculate the MMs many times until no
more ties exist. This is the price we pay for not being able to use traceback. The
situation is similar if we seek the M MPCs.
2.1 The Simplified Max-Flow Propagation Algorithm
Nilsson's Simplified Max-Flow Propagation (SMFP) [11] starts by calculating the
MMs and using the max-marginal lemma to find m1. Since m2 must differ from m1
in at least one variable, the algorithm defines N conditioning sets, Ci = (x(1) =
m1(1), x(2) = m1(2), ..., x(i-1) = m1(i-1), x(i) ≠ m1(i)). It then uses the max-marginal lemma to find the most probable configuration given each conditioning
set, xi = arg max_x Pr(X = x|y, Ci), and finally m2 = arg max_{x ∈ {xi}} Pr(X = x|y).
Since the conditioning sets form a partition, it is easy to show that the algorithm
finds m2 after N calculations of the MMs. Similarly, to find mk the algorithm uses
the fact that mk must differ from m1, m2, ..., mk-1 in at least one variable and
forms a new set of up to N conditioning sets. Using the max-marginal lemma one
can find the MPC given each of these new conditioning sets. This gives up to N
new candidates, in addition to (k-1)(N-1) previously calculated candidates. The
Figure 2: An illustration of our novel BMMF algorithm on a simple example.
most probable candidate out of these k(N-1) + 1 is guaranteed to be mk.
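The conditioning sets that drive SMFP can be generated directly from m1. The helper below is a minimal sketch of that construction (the representation and names are our own, not Nilsson's code); every assignment other than m1 satisfies exactly one set, which is what makes the partition argument work:

```python
def conditioning_sets(m1):
    """Nilsson's partition of {x : x != m1} into N conditioning sets.

    Set C_i fixes x(1..i-1) to m1's values and forces x(i) != m1(i).
    Each set is returned as (fixed_assignments, (i, forbidden_value)).
    """
    sets = []
    for i in range(len(m1)):
        fixed = {k: m1[k] for k in range(i)}
        sets.append((fixed, (i, m1[i])))
    return sets

def satisfies(x, cset):
    # x satisfies C_i iff it matches the fixed prefix and differs at position i
    fixed, (i, forbidden) = cset
    return all(x[k] == v for k, v in fixed.items()) and x[i] != forbidden
```

For a binary m1 = (1, 1, 0, 0), checking all 16 assignments confirms the partition property: m1 satisfies no set, and every other assignment satisfies exactly one.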
As pointed out by Nilsson, this simple algorithm may require far too many calculations of the MMs (O(MN)). He suggested an algorithm that uses traceback
operations to reduce the computation significantly. Since traceback operations are
problematic in loopy graphs, we now present a novel algorithm that does not use
traceback but may require far less calculation of the MMs compared to SMFP.
2.2 A novel algorithm: Best Max-Marginal First
For simplicity of exposition, we will describe the BMMF algorithm under what we
call the strict order assumption, that no two configurations have exactly the same
probability.
We illustrate our algorithm using a simple example (figure 2). There are 4 binary variables in the graphical model and we can find the top 3 MPCs exactly:
1100, 1110, 0001.
Our algorithm outputs a set of candidates xt , one at each iteration. In the first
iteration, t = 1, we start by calculating the MMs, and using the max-marginal
lemma we find m1. We now search the max-marginal table for the next best max-marginal value. In this case it is obtained with x(3) = 1. In the second iteration,
t = 2, we now lock x(3) = 1. In other words, we calculate the MMs with the added
constraint that x(3) = 1. We use the max-marginal lemma to find the most likely
configuration with x(3) = 1 locked and obtain x2 = 1110. Note that we have found
the second most likely configuration. We then add the complementary constraint
x(3) ≠ 1 to the originating constraints set and calculate the MMs. In the third
iteration, t = 3, we search both previous max-marginal tables and find the best
remaining max-marginal. It is obtained at x(1) = 0, t = 1. We now add the
constraint x(1) = 0 to the constraints set from t = 1, calculate the MMs and use
the max-marginal lemma to find x3 = 0001. Finally, we add the complementary
constraint x(1) ≠ 0 to the originating constraints set and calculate the MMs. Thus
after 3 iterations we have found the first 3 MPCs using only 5 calculations of the
MMs.
The Best Max-Marginal First (BMMF) algorithm for calculating the M
most probable configurations:
• Initialization

SCORE_1(i, j) = max_{x : x(i) = j} Pr(X = x|y)    (2)
x_1(i) = arg max_j SCORE_1(i, j)    (3)
CONSTRAINTS_1 = ∅    (4)
USED_2 = ∅    (5)

• For t = 2:T

SEARCH_t = {(i, j, s) : s < t, x_s(i) ≠ j, (i, j, s) ∉ USED_t}    (6)
(i_t, j_t, s_t) = arg max_{(i,j,s) ∈ SEARCH_t} SCORE_s(i, j)    (7)
CONSTRAINTS_t = CONSTRAINTS_{s_t} ∪ {(x(i_t) = j_t)}    (8)
SCORE_t(i, j) = max_{x : x(i) = j, CONSTRAINTS_t} Pr(X = x|y)    (9)
x_t(i) = arg max_j SCORE_t(i, j)    (10)
USED_{t+1} = USED_t ∪ {(i_t, j_t, s_t)}    (11)
CONSTRAINTS_{s_t} = CONSTRAINTS_{s_t} ∪ {(x(i_t) ≠ j_t)}    (12)
SCORE_{s_t}(i, j) = max_{x : x(i) = j, CONSTRAINTS_{s_t}} Pr(X = x|y)    (13)
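A compact way to check the boxed algorithm is to run it with exact max-marginals obtained by enumeration. The sketch below follows eqs. (6)-(13) but substitutes brute force for BP/GBP, so it is feasible only on toy models; the model in the test is invented and assumed to satisfy the strict order assumption:

```python
from itertools import product

def bmmf(logp, n_vars, n_states, n_top):
    """Best Max-Marginal First with exact, brute-force max-marginals."""
    def max_marginals(constraints):
        # constraints: list of (var, value, must_equal) triples
        mm = [[float("-inf")] * n_states for _ in range(n_vars)]
        best = [[None] * n_states for _ in range(n_vars)]
        for x in product(range(n_states), repeat=n_vars):
            if not all((x[v] == j) == eq for v, j, eq in constraints):
                continue
            p = logp(x)
            for i, j in enumerate(x):
                if p > mm[i][j]:
                    mm[i][j], best[i][j] = p, x
        return mm, best

    def argmax_config(best):
        # max-marginal lemma: maximizing each row recovers the constrained MPC;
        # reading off the best stored configuration is equivalent
        return max((x for row in best for x in row if x is not None), key=logp)

    cons = {1: []}                          # CONSTRAINTS_1 = empty
    score, xs, used = {}, {}, set()
    score[1], best1 = max_marginals(cons[1])
    xs[1] = argmax_config(best1)
    for t in range(2, n_top + 1):
        cand = [(score[s][i][j], i, j, s)
                for s in score for i in range(n_vars) for j in range(n_states)
                if xs[s][i] != j and (i, j, s) not in used
                and score[s][i][j] > float("-inf")]
        _, i_t, j_t, s_t = max(cand)                      # eqs (6)-(7)
        cons[t] = cons[s_t] + [(i_t, j_t, True)]          # eq (8)
        score[t], best_t = max_marginals(cons[t])         # eq (9)
        xs[t] = argmax_config(best_t)                     # eq (10)
        used.add((i_t, j_t, s_t))                         # eq (11)
        cons[s_t] = cons[s_t] + [(i_t, j_t, False)]       # eq (12)
        score[s_t], _ = max_marginals(cons[s_t])          # eq (13)
    return [xs[t] for t in sorted(xs)]
```

On a 4-variable toy model the returned list matches the exhaustive top-M ordering, as Claims 1-3 predict when the MMs are exact.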
Claim 1: x1 calculated by the BMMF algorithm is equal to the MPC m1 .
Proof: This is just a restatement of the max-marginal lemma.
Claim 2: x2 calculated by the BMMF algorithm is equal to the second MPC m2 .
Proof: We first show that m2(i2) = j2. We know that m2 differs in at least one
location from m1. We also know that out of all the assignments that differ from m1
it must have the highest probability. Suppose that m2(i2) ≠ j2. By the definition
of SCORE_1, this means that there exists an x ≠ m2 that is not m1 whose posterior
probability is higher than that of m2. This is a contradiction. Now, out of all
assignments for which x(i2) = j2, m2 has highest posterior probability (recall that
by definition, m1(i2) ≠ j2). The max-marginal lemma guarantees that x2 = m2.
Partition Lemma: Let SAT_k denote the set of assignments satisfying
CONSTRAINTS_k. Then, after iteration k, the collection {SAT_1, SAT_2, ..., SAT_k}
is a partition of the assignment space.
Proof: By induction over k. For k = 1, CONSTRAINTS_1 = ∅ and the claim
trivially holds. For k = 2, SAT_1 = {x | x(i2) ≠ j2} and SAT_2 = {x | x(i2) = j2}
are mutually disjoint and SAT_1 ∪ SAT_2 covers the assignment space, therefore
{SAT_1, SAT_2} is a partition of the assignment space. Assume that after iteration k-1, {SAT_1, SAT_2, ..., SAT_{k-1}} is a partition of the assignment space. Note
that in iteration k, we add CONSTRAINTS_k = CONSTRAINTS_{s_k} ∪ {(x(i_k) = j_k)}
and modify CONSTRAINTS_{s_k} = CONSTRAINTS_{s_k} ∪ {(x(i_k) ≠ j_k)}, while keeping all other constraint sets unchanged. SAT_k and the modified SAT_{s_k} are pairwise disjoint and SAT_k ∪ SAT_{s_k} covers the originating SAT_{s_k}. Since after iteration k-1 {SAT_1, SAT_2, ..., SAT_{k-1}} is a partition of the assignment space, so is
{SAT_1, SAT_2, ..., SAT_k}.
Claim 3: x_k, the configuration calculated by the algorithm in iteration k, is m_k,
the k-th MPC.
Proof: First, note that SCORE_{s_k}(i_k, j_k) ≤ SCORE_{s_{k-1}}(i_{k-1}, j_{k-1}), otherwise (i_k, j_k, s_k)
would have been chosen in iteration k-1. Following the partition lemma, each
assignment arises at most once. By the strict order assumption, this means that
SCORE_{s_k}(i_k, j_k) < SCORE_{s_{k-1}}(i_{k-1}, j_{k-1}).
Let m_k ∈ SAT_{s*}. We know that m_k differs from all previous x_s in at least one
location. In particular, m_k must differ from x_{s*} in at least one location. Denote
that location by i* and m_k(i*) = j*. We want to show that SCORE_{s*}(i*, j*) =
Pr(X = m_k|y). First, note that (i*, j*, s*) ∉ USED_k. If we had previously used it,
then (x(i*) ≠ j*) ∈ CONSTRAINTS_{s*}, which contradicts the definition of s*. Now
suppose there exists m_l, l ≤ k-1, such that m_l ∈ SAT_{s*} and m_l(i*) = j*. Since
(i*, j*, s*) ∉ USED_k this would mean that SCORE_{s_k}(i_k, j_k) ≥ SCORE_{s_{k-1}}(i_{k-1}, j_{k-1}),
which is a contradiction. Therefore m_k is the most probable assignment that satisfies
CONSTRAINTS_{s*} and has the value j* at location i*. Hence SCORE_{s*}(i*, j*) =
Pr(X = m_k|y).
A consequence of claim 3 is that BMMF will find the top M MPCs using 2M
calculations of max marginals. In contrast, SMFP requires O(M N ) calculations.
In real world loopy problems, especially when N ≫ M, this can lead to drastically
different run times. First, real world problems may have thousands of nodes so
a speedup of a factor of N will be very significant. Second, calculating the MMs
requires iterative algorithms (e.g. BP or GBP) so that the speedup of a factor of
N may be the difference between running a month versus running half a day.
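The difference in run time follows from a back-of-the-envelope count of max-marginal computations (treating SMFP's cost as exactly M·N, an assumption at the upper end of its O(MN) bound):

```python
def mm_computations(n_vars, n_top):
    """MM computations needed: BMMF uses 2M (Claim 3); SMFP up to ~M*N.

    The M*N figure is a simplifying assumption, not an exact count.
    """
    return 2 * n_top, n_top * n_vars

# e.g. a graph with 1000 hidden variables, asking for the top 25 MPCs
bmmf_calls, smfp_calls = mm_computations(n_vars=1000, n_top=25)
```

With N = 1000 and M = 25 this is 50 versus 25,000 runs of an iterative BP/GBP solver, which is exactly the month-versus-half-a-day gap described above.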
3 Approximate MPCs algorithms using loopy BP
We now compare 4 approximate MPCs algorithms:
1. loopy BMMF. This is exactly the algorithm in section 2.2 with the MMs
based on the beliefs computed by loopy max-product BP or max-GBP:

SCORE_k(i, j) = Pr(X = x_k|y) · BEL(i, j|CONSTRAINTS_k) / max_j BEL(i, j|CONSTRAINTS_k)    (14)
2. loopy SMFP. This is just Nilsson's SMFP algorithm with the MMs calculated using loopy max-product BP.
3. Gibbs sampling. We collect all configurations sampled during a Gibbs
sampling simulation and output the top M of these.
4. Greedy. We collect all configurations encountered during a greedy optimization of the posterior probability (this is just Gibbs sampling at zero
temperature) and output the top M of these.
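For option 1, eq. (14) turns max-product beliefs into approximate max-marginals with a single rescaling. A minimal sketch (array shapes and names are our own, not from the paper's Matlab code):

```python
import numpy as np

def loopy_scores(bel, p_xk):
    """Eq. (14): approximate max-marginals from max-product beliefs.

    bel: (N, K) max-product beliefs under the current constraints.
    p_xk: Pr(X = x_k | y), the probability of the configuration found
    at iteration k. Each belief row is rescaled so its maximum equals p_xk.
    """
    return p_xk * bel / bel.max(axis=1, keepdims=True)
```

By construction every row of the result attains its maximum at p_xk, matching the fact that the true max-marginal of the MPC's own value equals the MPC's probability.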
All four algorithms were implemented in Matlab and the number of iterations for
greedy and Gibbs were chosen so that the run times would be the same as that of
loopy BMMF. Gibbs sampling started from m1 , the most probable assignment, and
the greedy local search algorithm initialized to an assignment "similar" to m1 (1%
of the variables were chosen randomly and their values flipped).
For the protein folding problem [17], we used a database consisting of 325 proteins,
each of which gives rise to a graphical model with hundreds of variables and many loops. We
[Figure 3 graphics: energy versus configuration number for Gibbs, greedy, and loopy BMMF.]
Figure 3: The configurations found by loopy BMMF compared to those obtained
using Gibbs sampling and greedy local search for a large toy-QMR model (left)
and a 32 × 32 spin glass model (right).
compared the top 100 correct configurations obtained by the A* heuristic search
algorithm [8] to those found by the loopy BMMF algorithm, using BP. In all cases where
A* was feasible, loopy BMMF always found the correct configurations. Also, the
BMMF algorithm converged more often (96.3% compared to 76.3%) and ran much
faster.
We then assessed the performance of the BMMF algorithm for a couple of relatively
small problems, where exact inference was possible. For both a small toy-QMR
model (with 20 diseases and 50 symptoms) and an 8 × 8 spin glass model the BMMF
algorithm obtained the correct MPCs.
Finally, we compared the performance of the algorithms for a couple of hard problems: a large toy-QMR model (with 100 diseases and 200 symptoms) and a 32 × 32 spin
glass model with large pairwise interactions. For the toy-QMR model, the MPCs
calculated by the BMMF algorithm were better than those calculated by Gibbs
sampling (Figure 3, left). For the large spin glass, we found that ordinary BP
didn't converge and used max-product generalized BP instead. This is exactly the
algorithm described in [18] with marginalizations replaced with maximizations. We
found that GBP converged far more frequently and indeed the MPCs found using
GBP are much better than those obtained with Gibbs or greedy (Figure 3, right;
Gibbs results are worse than those of the greedy search and therefore not shown).
Note that finding the second MPC using the simple SMFP algorithm requires a week,
while loopy BMMF calculated the 25 MPCs in only a few hours.
4 Discussion
Existing algorithms successfully find the M MPCs for graphs where building a JT is
possible. However, in many real-world applications exact inference is impossible and
approximate techniques are needed. In this paper we have addressed the problem
of finding the M MPCs using the techniques of approximate inference. We have
presented a new algorithm, called Best Max-Marginal First that will provably solve
the problem if MMs can be calculated exactly. We have shown that the algorithm
continues to perform well when the MMs are approximated using max-product loopy
BP or GBP.
Interestingly, the BMMF algorithm uses the numerical values of the approximate
MMs to determine what to do in each iteration. The success of loopy BMMF
suggests that in some cases max-product loopy BP gives a good numerical
approximation to the true MMs. Most existing analysis of loopy max-product [16,
15] has focused on the configurations found by the algorithm. It would be interesting
to extend the analysis to bound the approximate MMs which in turn would lead to
a provable approximate MPCs algorithm.
While we have used loopy BP to approximate the MMs, any approximate inference
can be used inside BMMF to derive a novel, approximate MPCs algorithm. In
particular, the algorithm suggested by Wainwright et al. [14] can be shown to give
the MAP assignment when it converges. It would be interesting to incorporate their
algorithm into BMMF.
References
[1] A. Cano, S. Moral, and A. Salmerón. Penniless propagation in join trees. Journal of
Intelligent Systems, 15:1010–1027, 2000.
[2] R. Cowell. Advanced inference in Bayesian networks. In M.I. Jordan, editor, Learning
in Graphical Models. MIT Press, 1998.
[3] P. Dawid. Applications of a general propagation algorithm for probabilistic expert
systems. Statistics and Computing, 2:25–36, 1992.
[4] R. Dechter and I. Rish. A scheme for approximating probabilistic inference. In
Uncertainty in Artificial Intelligence (UAI 97), 1997.
[5] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-blackwellised particle
filtering for dynamic bayesian networks. In Proceedings UAI 2000. Morgan Kaufmann,
2000.
[6] B.J. Frey, R. Koetter, and N. Petrovic. Very loopy belief propagation for unwrapping
phase images. In Adv. Neural Information Processing Systems 14. MIT Press, 2001.
[7] T.S. Jaakkola and M.I. Jordan. Variational probabilistic inference and the QMR-DT
database. JAIR, 10:291?322, 1999.
[8] Andrew R. Leach and Andrew P. Lemon. Exploring the conformational space of protein side chains using dead-end elimination and the A* algorithm. Proteins: Structure,
Function, and Genetics, 33(2):227–239, 1998.
[9] A. Levin, A. Zomet, and Y. Weiss. Learning to perceive transparency from the
statistics of natural scenes. In Proceedings NIPS 2002. MIT Press, 2002.
[10] Kevin Murphy. The bayes net toolbox for matlab. Computing Science and Statistics,
33, 2001.
[11] D. Nilsson. An efficient algorithm for finding the M most probable configurations in
probabilistic expert systems. Statistics and Computing, 8:159–173, 1998.
[12] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible
Inference. Morgan Kaufmann, 1988.
[13] L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech
recognition. Proc. IEEE, 77(2):257–286, 1989.
[14] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Exact map estimates by (hyper)tree
agreement. In Proceedings NIPS 2002. MIT Press, 2002.
[15] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree consistency and bounds
on the performance of the max-product algorithm and its generalizations. Technical
Report P-2554, MIT LIDS, 2002.
[16] Y. Weiss and W.T. Freeman. On the optimality of solutions of the max-product
belief propagation algorithm in arbitrary graphs. IEEE Transactions on Information
Theory, 47(2):723–735, 2001.
[17] C. Yanover and Y. Weiss. Approximate inference and protein folding. In Proceedings
NIPS 2002. MIT Press, 2002.
[18] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. In G. Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence
in the New Millennium. Morgan Kaufmann, 2003.
Computational Efficiency:
A Common Organizing Principle for
Parallel Computer Maps and Brain Maps?
Mark E. Nelson James M. Bower
Computation and Neural Systems Program
Division of Biology, 216-76
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
It is well-known that neural responses in particular brain regions
are spatially organized, but no general principles have been developed that relate the structure of a brain map to the nature of
the associated computation. On parallel computers, maps of a sort
quite similar to brain maps arise when a computation is distributed
across multiple processors. In this paper we will discuss the relationship between maps and computations on these computers and
suggest how similar considerations might also apply to maps in the
brain.
1 INTRODUCTION
A great deal of effort in experimental and theoretical neuroscience is devoted to
recording and interpreting spatial patterns of neural activity. A variety of map
patterns have been observed in different brain regions and , presumably, these patterns reflect something about the nature of the neural computations being carried
out in these regions. To date, however, there have been no general principles for
interpreting the structure of a brain map in terms of properties of the associated
computation. In the field of parallel computing, analogous maps arise when a computation is distributed across multiple processors and, in this case, the relationship
between maps and computations is better understood. In this paper, we will attempt to relate some of the mapping principles from the field of parallel computing
to the organization of brain maps.
2 MAPS ON PARALLEL COMPUTERS
The basic idea of parallel computing is to distribute the computational workload
for a single task across a large number of processors (Dongarra, 1987; Fox and
Messina, 1987). In principle, a parallel computer has the potential to deliver computing power equivalent to the total computing power of the processors from which
it is constructed; a 100 processor machine can potentially deliver 100 times the
computing power of a single processor. In practice, however, the performance that
can be achieved is always less efficient than this ideal. A perfectly efficient implementation with N processors would give a factor N speed up in computation time;
the ratio of the actual speedup σ to the ideal speedup N can serve as a measure of
the efficiency ε of a parallel implementation:

ε = σ / N    (1)
For a given computation, one of the factors that most influences the overall performance is the way in which the computation is mapped onto the available processors.
The efficiency of any particular mapping can be analyzed in terms of two principal
factors: load-balance and communication overhead. Load-balance is a measure of
how uniformly the computational work load is distributed among the available processors. Communication overhead, on the other hand, is related to the cost in time
of communicating information between processors.
On parallel computers, the load imbalance λ is defined in terms of the average
calculation time per processor T_avg and the maximum calculation time required by
the busiest processor T_max:

λ = (T_max − T_avg) / T_avg    (2)
The communication overhead η is defined in terms of the maximum calculation time
T_max and the maximum communication time T_comm:

η = T_comm / (T_max + T_comm)    (3)
Assuming that the calculation and communication phases of a computation do not
overlap in time, as is the case for many parallel computers, the relationship between
efficiency ε, load-imbalance λ, and communication overhead η is given by (Fox et
al., 1988):
ε = (1 − η) / (1 + λ)    (4)
When both load-imbalance λ and communication overhead η are small, the inefficiency is approximately the sum of the contributions from load-imbalance and
communication overhead:

ε ≈ 1 − (η + λ)    (5)
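Equations (4) and (5) are straightforward to evaluate. Using the load-imbalance and overhead values annotated in Figure 1 (λ = 0.26, η = 0.04 for the minimum-cut mapping and λ = 0.01, η = 0.07 for the near-optimal one) reproduces the stated efficiencies:

```python
def efficiency(lmbda, eta):
    """Exact parallel efficiency, eq. (4): eps = (1 - eta) / (1 + lambda)."""
    return (1.0 - eta) / (1.0 + lmbda)

def efficiency_approx(lmbda, eta):
    """First-order approximation, eq. (5), valid when lambda and eta are small."""
    return 1.0 - (eta + lmbda)
```

For the near-optimal mapping the approximation is within a tenth of a percent of the exact value, as eq. (5) suggests it should be for small λ and η.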
When attempting to achieve maximum performance from a parallel computer, a
programmer tries to find a mapping that minimizes the combined contributions of
load-imbalance and communication overhead. In some cases this is accomplished by
applying simple heuristics (Fox et al., 1988), while in others it requires the explicit
use of optimization techniques like simulated annealing (Kirkpatrick et al., 1983)
or even artificial neural network approaches (Fox and Furmanski, 1988). In any
case, the optimal tradeoff between load imbalance and communication overhead
depends on certain properties of the computation itself. Thus different types of
computations give rise to different kinds of optimal maps on parallel computers.
2.1 AN EXAMPLE
In order to illustrate how different mappings can give rise to different computational
efficiencies, we will consider the simulation of a single neuron using a multicompartment modeling approach (Segev et al., 1989). In such a simulation, the model neuron is divided into a large number of compartments, each of which is assumed to be
isopotential. Each compartment is represented by an equivalent electric circuit that
embodies information about the local membrane properties. In order to update the
voltage of an individual compartment, it is necessary to know the local properties
as well as the membrane voltages of the neighboring compartments. Such a model
gives rise to a system of differential equations of the following form:
c_m dV_i/dt = Σ_k g_k (E_k − V_i) + g_{i−1,i} (V_{i−1} − V_i) + g_{i+1,i} (V_{i+1} − V_i)    (6)

where c_m is the membrane capacitance, V_i is the membrane voltage of compartment
i, g_k and E_k are the local conductances and their reversal potentials, and g_{i±1,i} are
coupling conductances to neighboring compartments.
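A single explicit-Euler update of eq. (6) for a one-dimensional chain of compartments can be sketched as follows; only a leak conductance is modeled, and all parameter values are hypothetical:

```python
import numpy as np

def euler_step(v, cm, g_leak, e_leak, g_axial, dt):
    """One explicit-Euler update of eq. (6) for a 1-D chain of compartments.

    v: membrane voltages; g_axial[i] couples compartments i and i+1.
    Real models sum over many channel types; here a single leak term
    stands in for the sum over k.
    """
    dvdt = g_leak * (e_leak - v)                  # local channel term
    for i in range(len(v) - 1):                   # axial coupling terms
        dvdt[i] += g_axial[i] * (v[i + 1] - v[i])
        dvdt[i + 1] += g_axial[i] * (v[i] - v[i + 1])
    return v + dt * dvdt / cm
```

Note that updating compartment i needs only its local properties and its neighbors' voltages, which is exactly why cutting the neighbor relation across processors creates communication overhead.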
When carrying out such a simulation on a parallel computer, where there are more
compartments than processors, each processor is assigned responsibility for updating
a subset of the compartments (Nelson et al., 1989). If the compartments represent
equivalent computational loads, then the load-imbalance will be proportional to
the difference between the maximum and the average number of compartments per
processor. If the computer processors are fully interconnected by communication
channels, then the communication overhead will be proportional to the number
of interprocessor messages providing the voltages of neighboring compartments. If
[Figure 1 graphics: three mappings of the dendritic tree onto four processors, annotated with values such as λ = 0.26, η = 0.04, ε = 0.76 and λ = 0.01, η = 0.07, ε = 0.92.]
Figure 1: Tradeoffs between load-imbalance λ and communication overhead η,
giving rise to different efficiencies ε for different mappings of a multicompartment neuron model. (A) a minimum-cut mapping that minimizes communication
overhead but suffers from a significant load-imbalance, (B) a scattered mapping
that minimizes load-imbalance but has a large communication overhead, and (C)
a near-optimal mapping that simultaneously minimizes both load-imbalance and
communication overhead.
neighboring compartments are mapped to the same processor, then this information
is available without any interprocessor communication and thus no communication
overhead is incurred.
Fig. 1 shows three different ways of mapping a 155 compartment neuron model
onto a group of 4 processors. In each case the load-imbalance and communication
overhead are calculated using the assumptions listed above and the computational
efficiency is computed using eq. 4. The map in Fig. 1A minimizes the communication
overhead of the mapping by making a minimum number of cuts in the dendritic
tree, but is rather inefficient because a significant load-imbalance remains even
after optimizing the location of each cut. The map in Fig. 1B, on the other hand,
minimizes the load-imbalance, by using a scattered mapping technique (Fox et al.,
1988), but is inefficient because of a large communication overhead. The map in
Fig. 1C strikes a balance between load-imbalance and communication overhead that
results in a high computational efficiency. Thus this particular mapping makes the
best use of the available computing resources for this particular computational task.
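The same bookkeeping can be used to score a candidate mapping automatically. The sketch below uses a deliberately crude cost model (one unit of calculation per compartment and a fixed cost per cut edge, both assumptions for illustration rather than measurements from the paper):

```python
def evaluate_mapping(edges, assign, n_proc, t_calc=1.0, t_msg=0.1):
    """Efficiency of a compartment-to-processor mapping under a toy cost model.

    edges: neighbor pairs; assign[i]: processor of compartment i.
    Load imbalance lambda = (T_max - T_avg)/T_avg with T ~ compartment count;
    overhead eta = T_comm/(T_max + T_comm) with T_comm ~ number of cut edges.
    """
    counts = [0] * n_proc
    for p in assign:
        counts[p] += 1
    t_max = max(counts) * t_calc
    t_avg = sum(counts) * t_calc / n_proc
    cuts = sum(assign[a] != assign[b] for a, b in edges)
    t_comm = cuts * t_msg
    lmbda = (t_max - t_avg) / t_avg
    eta = t_comm / (t_max + t_comm)
    return (1.0 - eta) / (1.0 + lmbda)   # eq. (4)
```

For an 8-compartment chain on 2 processors, a contiguous split (one cut edge) scores higher than a scattered, alternating assignment (seven cut edges), mirroring the contrast between Fig. 1C and Fig. 1B.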
A
B
c
Figure 2: Three classes of map topologies found in the brain (of the rat). (A)
continuous map of tactile inputs in somatosensory cortex (B) patchy map of tactile
inputs to cerebellar cortex and (C) scattered mapping of olfactory inputs to olfactory
cortex as represented by the unstructured pattern of 2DG uptake in a single section
of this cortex.
3 MAPS IN THE BRAIN
Since some parallel computer maps are clearly more efficient than others for particular problems, it seems natural to ask whether a similar relationship might hold for
brain maps and neural computations. Namely, for a given computational task, does
one particular brain map topology make more efficient use of the available neural
computing resources than another? If so, does this impose a significant constraint
on the evolution and development of brain map topologies?
It turns out that there are striking similarities between the kinds of maps that
arise on parallel computers and the types of maps that have been observed in
the brain. In both cases, the map patterns can be broadly grouped into three
categories: continuous maps, patchy maps, and scattered (non-topographic) maps.
Fig. 2 shows examples of brain maps that fall into these categories. Fig. 2A shows
an example of a smooth and continuous map representing the pattern of afferent
tactile projections to the primary somatosensory cortex of a rat (Welker, 1971).
The patchy map in Fig. 2B represents the spatial pattern of tactile projections to
the granule cell layer of the rat cerebellar hemispheres (Shambes et aI., 1978; Bower
and Woolston, 1983). Finally, Fig. 2C represents an extreme case in which a brain
region shows no apparent topographic organization. This figure shows the pattern
of metabolic activity in one section of the olfactory (piriform) cortex, as assayed by
2-deoxyglucose (2DG) uptake, in response to the presentation of a particular odor
(Sharp et al., 1977). As suggested by the uniform label in the cortex, no discernible
odor-specific patterns are found in this region of cortex.
On parallel computers, maps in these different categories arise as optimal solutions
to different classes of computations. Continuous maps are optimal for computations
that are local in the problem space, patchy maps are optimal for computations that
involve a mixture of local and non-local interactions, and scattered maps are optimal or near-optimal for computations characterized by a high degree of interaction
throughout the problem space, especially if the patterns of interaction are dynamic
or cannot be easily predicted. Interestingly, it turns out that the intrinsic neural circuitry associated with different kinds of brain maps also reflects these same
patterns of interaction. Brain regions with continuous maps, like somatosensory
cortex, tend to have predominantly local circuitry; regions with patchy maps, like
cerebellar cortex, tend to have a mixture of local and non-local circuitry; and regions
with scattered maps, like olfactory cortex, tend to be characterized by wide-spread
connectivity.
The apparent correspondence between brain maps and computer maps raises the
general question of whether or not there are correlates of load-imbalance and communication overhead in the nervous system. In general, these factors are much more
difficult to identify and quantify in the brain than on parallel computers. Parallel
computer systems are, after all, human-engineered while the nervous system has
evolved under a set of selection criteria and constraints that we know very little
about. Furthermore, fundamental differences in the organization of digital computers and brains make it difficult to translate ideas from parallel computing directly
into neural equivalents (c.f. Nelson et al., 1989). For example, it is far from clear
what should be taken as the neural equivalent of a single processor. Depending on
the level of analysis, it might be a localized region of a dendrite, an entire neuron, or
an assembly of many neurons. Thus, one must consider multiple levels of processing
in the nervous system when trying to draw analogies with parallel computers.
First we will consider the issue of load-balancing in the brain. The map in Fig. 2A,
while smooth and continuous, is obviously quite distorted. In particular, the regions
representing the lips and whiskers are disproportionately large in comparison to
the rest of the body. It turns out that similar map distortions arise on parallel
computers as a result of load-balancing. If different regions of the problem space
require more computation than other regions, load-balance is achieved by distorting
the map until each processor ends up with an equal share of the workload (Fox et
al., 1988). In brain maps, such distortions are most often explained by variations
in the density of peripheral receptors. However, it has recently been shown in
the monkey, that prolonged increased use of a particular finger is accompanied by
an expansion of the corresponding region of the map in the somatosensory cortex
(Merzenich, 1987). Presumably this is not a consequence of a change in peripheral
receptor density, but instead reflects a use-dependent remapping of some tactile
computation onto available cortical circuitry.
Although such map reorganization phenomena are suggestive of load-balancing effects, we cannot push the analogy too far because we do not know what actually
corresponds to "computational load" in the brain. One possibility is that it is associated with the metabolic load that arises in response to neural activity (Yarowsky
and Ingvar, 1981). Since metabolic activity necessitates the delivery of an adequate
supply of oxygen and glucose via a network of small capillaries, the efficient use of
the capillary system might favor mappings that tend to avoid metabolic "hot spots"
which would overload the delivery capabilities of the system.
When discussing communication overhead in the brain, we also run into the problem of not knowing exactly what corresponds to "communication cost". On parallel
computers, communication overhead is usually associated with the time-cost of exchanging information between processors. In the nervous system, the importance of
such time-costs is probably quite dependent on the behavioral context of the computation. There is evidence, for example, that some brain regions actually make use
of transmission delays to process information (Carr and Konishi, 1988). However,
there is another aspect of communication overhead that may be more generally
applicable having to do with the space-costs of physically connecting processors together. In the design of modern parallel computers and in the design of individual
computer processor chips, space-costs associated with interconnections pose a very
serious constraint for the design engineer. In the nervous system, the extremely
large numbers of potential connections combined with rather strict limitations on
cranial capacity are likely to make space-costs a very important factor.
4 CONCLUSIONS
The view that computational efficiency is an important constraint on the organization of brain maps provides a potentially useful new perspective for interpreting
the structure of those maps. Although the available evidence is largely circumstantial, it seems likely that the topology of a brain map affects the efficiency with
which neural resources are utilized. Furthermore, it seems reasonable to assume
that network efficiency would impose a constraint on the evolution and development of map topologies that would tend to favor maps that are near-optimal for
the computational tasks being performed. The very substantial task before us, in
the case of the nervous system, is to carry out further experiments to better understand the detailed relationships between brain maps, neural architectures and
associated computations (Bower, 1990).
Acknowledgements
We would like to acknowledge Wojtek Furmanski and Geoffrey Fox of the Caltech
Concurrent Computation Program (CCCP) for their parallel computing support.
We would also like to thank Geoffrey for his comments on an earlier version of this
manuscript. This effort was supported by the NSF (ECS-8700064), the Lockheed
Corporation, and the Department of Energy (DE-FG03-85ER25009).
References
Bower, J.M. (1990) Reverse engineering the nervous system: An anatomical, physiological, and computer based approach. In: An Introduction to Neural and Electronic
Networks. (S. Zornetzer, J. Davis, and C. Lau, eds), pp. 3-24, Academic Press.
Bower, J.M. and D.C. Woolston (1983) Congruence of Spatial Organization of Tactile Projections to Granule Cell and Purkinje Cell Layers of Cerebellar Hemispheres
of the Albino Rat: Vertical Organization of Cerebellar Cortex. J. Neurophysiol. 49,
745-756.
Carr, C.E. and M. Konishi (1988) Axonal delay lines for time measurement in the
owl's brain stem. Proc Natl Acad Sci USA 85, 8311-8315.
Dongarra, J.J. (1987) Experimental Parallel Computing Architectures, (Dongarra,
J.J., ed.) North-Holland.
Fox, G. C., M. Johnson, G. Lyzenga, S. Otto, J. Salmon, D. Walker (1988) Solving
Problems on Concurrent Processors, Prentice Hall.
Fox, G.C. and W. Furmanski (1988) Load Balancing loosely synchronous problems
with a neural network. In: Proceedings of the Third Conference on Hypercube
Concurrent Computers and Applications, (Fox, G.C., ed.), pp.241-278, ACM.
Fox, G.C. and P. Messina (1987) Advanced Computer Architectures. Scientific
American, October, 66-74.
Kirkpatrick, S., C.D. Gelatt and M.P. Vecchi (1983) Optimization by Simulated
Annealing. Science, 220, 671-680.
Merzenich, M.M. (1987) Dynamic neocortical processes and the origins of higher
brain functions. In: The Neural and Molecular Bases of Learning, (Changeux, J .-P.
and Konishi, M., eds.), pp. 337-358, John Wiley & Sons.
Nelson, M.E., W. Furmanski and J.M. Bower (1989) Modeling Neurons and Networks on Parallel Computers. In: Methods in Neuronal Modeling: From Synapses
to Networks, (Koch, C. and I. Segev, eds.), pp. 397-438, MIT Press.
Segev, I., J.W. Fleshman and R.E. Burke (1989) Compartmental Models of Complex
Neurons. In: Methods in Neuronal Modeling: From Synapses to Networks, (Koch,
C. and I. Segev, eds.), pp. 63-96, MIT Press.
Shambes, G.M., J.M. Gibson and W. Welker (1978) Fractured Somatotopy in Granule Cell Tactile Areas of Rat Cerebellar Hemispheres Revealed by Micromapping.
Brain Behav. Evol. 15, 94-140.
Sharp, F.R., J.S. Kauer and G.M. Shepherd (1977) Laminar Analysis of 2-Deoxyglucose Uptake in Olfactory Bulb and Olfactory Cortex of Rabbit and Rat. J.
Neurophysiol. 40, 800-813.
Welker, C. (1971) Microelectrode delineation of fine grain somatotopic organization
of SMI cerebral neocortex in albino rat. Brain Res. 26, 259-275.
Yarowsky, P.J. and D.H. Ingvar (1981) Neuronal activity and energy metabolism.
Federation Proc. 40, 2353-2263.
Nonstationary Covariance Functions for
Gaussian Process Regression
Christopher J. Paciorek and Mark J. Schervish
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected],[email protected]
Abstract
We introduce a class of nonstationary covariance functions for Gaussian
process (GP) regression. Nonstationary covariance functions allow the
model to adapt to functions whose smoothness varies with the inputs.
The class includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in
advance. In experiments, the nonstationary GP regression model performs well when the input space is two or three dimensions, outperforming a neural network model and Bayesian free-knot spline models, and
competitive with a Bayesian neural network, but is outperformed in one
dimension by a state-of-the-art Bayesian free-knot spline model. The
model readily generalizes to non-Gaussian data. Use of computational
methods for speeding GP fitting may allow for implementation of the
method on larger datasets.
1 Introduction
Gaussian processes (GPs) have been used successfully for regression and classification
tasks. Standard GP models use a stationary covariance, in which the covariance between
any two points is a function of Euclidean distance. However, stationary GPs fail to adapt
to variable smoothness in the function of interest [1, 2]. This is of particular importance in
geophysical and other spatial datasets, in which domain knowledge suggests that the function may vary more quickly in some parts of the input space than in others. For example, in
mountainous areas, environmental variables are likely to be much less smooth than in flat
regions. Spatial statistics researchers have made some progress in defining nonstationary
covariance structures for kriging, a form of GP regression. We extend the nonstationary
covariance structure of [3], of which [1] gives a special case, to a class of nonstationary
covariance functions. The class includes a Matérn form, which in contrast to most covariance functions has the added flexibility of a parameter that controls the differentiability
of sample functions drawn from the GP distribution. We use the nonstationary covariance
structure for one, two, and three dimensional input spaces in a standard GP regression
model, as done previously only for one-dimensional input spaces [1].
The problem of variable smoothness has been attacked in spatial statistics by mapping
the original input space to a new space in which stationarity is assumed, but research has
focused on multiple noisy replicates of the regression function with no development nor
assessment of the method in the standard regression setting [4, 5]. The issue has been addressed in regression spline models by choosing the knot locations during the fitting [6] and
in smoothing splines by choosing an adaptive penalizer on the integrated squared derivative
[7]. The general approach in spline and other models involves learning the underlying basis
functions, either explicitly or implicitly, rather than fixing the functions in advance. One
alternative to a nonstationary GP model is mixtures of stationary GPs [8, 9]. Such methods adapt to variable smoothness by using different stationary GPs in different parts of the
input space. The main difficulty is that the class membership is a function of the inputs;
this involves additional unknown functions in the hierarchy of the model. One possibility
is to use stationary GPs for these additional unknown functions [8], while [9] reduce computational complexity by using a local estimate of the class membership, but do not know
if the resulting model is well-defined probabilistically. While the mixture approach is intriguing, neither of [8, 9] compare their model to other methods. In our model, there are
unknown functions in the hierarchy of the model that determine the nonstationary covariance structure. We choose to fully model the functions as Gaussian processes themselves,
but recognize the computational cost and suggest that simpler representations are worth
investigating.
2 Covariance functions and sample function differentiability
The covariance function is crucial in GP regression because it controls how much the data
are smoothed in estimating the unknown function. GP distributions are distributions over
functions; the covariance function determines the properties of sample functions drawn
from the distribution. The stochastic process literature gives conditions for determining
sample function properties of GPs based on the covariance function of the process, summarized in [10] for several common covariance functions. Stationary, isotropic covariance
functions are functions only of Euclidean distance, $\tau$. Of particular note, the squared exponential (also called the Gaussian) covariance function, $C(\tau) = \sigma^2 \exp(-(\tau/\rho)^2)$, where $\sigma^2$ is the variance and $\rho$ is a correlation scale parameter, has sample functions with infinitely many derivatives. In contrast, spline regression models have sample functions that
are typically only twice differentiable. In addition to being of theoretical concern from an
asymptotic perspective [11], other covariance forms might better fit real data for which it is
unlikely that the unknown function is so highly differentiable. In spatial statistics, the exponential covariance, $C(\tau) = \sigma^2 \exp(-\tau/\rho)$, is commonly used, but this form gives sample functions that, while continuous, are not differentiable. Recent work in spatial statistics has focused on the Matérn form,
$$C(\tau) = \sigma^2 \frac{1}{\Gamma(\nu) 2^{\nu-1}} \left( \frac{2\sqrt{\nu}\,\tau}{\rho} \right)^{\nu} K_{\nu}\!\left( \frac{2\sqrt{\nu}\,\tau}{\rho} \right),$$
where $K_{\nu}(\cdot)$ is the modified Bessel function of the second kind, whose order is the differentiability parameter, $\nu > 0$. This form has the desirable property that sample functions are $\lceil \nu - 1 \rceil$
times differentiable. As $\nu \to \infty$, the Matérn approaches the squared exponential form, while for $\nu = 0.5$, the Matérn takes the exponential form. Standard covariance functions require one to place all of one's prior probability on a particular degree of differentiability; use of the Matérn allows one to more accurately, yet easily, express prior lack of knowledge
about sample function differentiability. One application for which this may be of particular
interest is geophysical data.
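The stationary Matérn form above can be evaluated directly with standard special functions. The sketch below uses the parameterization given in the text (variance $\sigma^2$, scale $\rho$, order $\nu$); the function name and signature are ours, not from the paper.

```python
import numpy as np
from scipy.special import kv, gamma  # K_nu and the Gamma function

def matern_cov(tau, sigma2=1.0, rho=1.0, nu=1.5):
    """Stationary Matern covariance C(tau) in the parameterization above."""
    if tau == 0.0:
        return sigma2  # covariance at zero distance is the variance
    z = 2.0 * np.sqrt(nu) * tau / rho
    return sigma2 * z**nu * kv(nu, z) / (gamma(nu) * 2.0**(nu - 1.0))
```

For $\nu = 0.5$ this parameterization reduces to an exponential covariance with a rescaled range: using $K_{1/2}(z) = \sqrt{\pi/(2z)}\, e^{-z}$ one gets $C(\tau) = \sigma^2 \exp(-\sqrt{2}\,\tau/\rho)$, i.e. the exponential form.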
[12] suggest using the squared exponential covariance but with anisotropic distance, $\tau(x_i, x_j) = \sqrt{(x_i - x_j)^T \Sigma^{-1} (x_i - x_j)}$, where $\Sigma$ is an arbitrary positive definite matrix, rather than the standard diagonal matrix. This allows the GP model to more easily
model interactions between the inputs. The nonstationary covariance function we introduce next builds on this more general form.
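The anisotropic distance of [12] is a Mahalanobis-type distance; a minimal sketch (function name ours):

```python
import numpy as np

def aniso_dist(xi, xj, Sigma):
    """Anisotropic distance with a positive definite matrix Sigma."""
    d = np.asarray(xi, float) - np.asarray(xj, float)
    return float(np.sqrt(d @ np.linalg.solve(Sigma, d)))
```

With `Sigma` equal to the identity this reduces to ordinary Euclidean distance; off-diagonal terms in `Sigma` are what let the covariance capture interactions between inputs.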
3 Nonstationary covariance functions
One nonstationary covariance function, introduced by [3], is $C(x_i, x_j) = \int_{\Re^2} k_{x_i}(u)\, k_{x_j}(u)\, du$, where $x_i$, $x_j$, and $u$ are locations in $\Re^2$, and $k_x(\cdot)$ is a kernel function centered at $x$. One can show directly that $C(x_i, x_j)$ is positive definite in $\Re^p$, $p = 1, 2, \ldots$ [10]. For Gaussian kernels, the covariance takes the simple form,
$$C^{NS}(x_i, x_j) = \sigma^2 |\Sigma_i|^{\frac{1}{4}} |\Sigma_j|^{\frac{1}{4}} \left| (\Sigma_i + \Sigma_j)/2 \right|^{-\frac{1}{2}} \exp(-Q_{ij}), \qquad (1)$$
with quadratic form
$$Q_{ij} = (x_i - x_j)^T \left( (\Sigma_i + \Sigma_j)/2 \right)^{-1} (x_i - x_j), \qquad (2)$$
where $\Sigma_i$, which we call the kernel matrix, is the covariance matrix of the Gaussian kernel at $x_i$. The form (1) is a squared exponential correlation function, but in place of a fixed matrix, $\Sigma$, in the quadratic form, we average the kernel matrices for the two locations. The
evolution of the kernel matrices in space produces nonstationary covariance, with kernels
that drop off quickly producing locally short correlation scales. Independently, [1] derived a
special case in which the kernel matrices are diagonal. Unfortunately, so long as the kernel
matrices vary smoothly in the input space, sample functions from GPs with the covariance
(1) are infinitely differentiable [10], just as for the stationary squared exponential.
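Equations (1)-(2) are straightforward to implement once kernel matrices are chosen; the sketch below (names ours) averages the two kernel matrices inside the quadratic form, as the text describes.

```python
import numpy as np

def ns_sqexp_cov(xi, xj, Si, Sj, sigma2=1.0):
    """Nonstationary squared exponential covariance, eqs. (1)-(2)."""
    d = np.asarray(xi, float) - np.asarray(xj, float)
    avg = (Si + Sj) / 2.0
    Q = d @ np.linalg.solve(avg, d)                      # eq. (2)
    pref = (np.linalg.det(Si)**0.25 * np.linalg.det(Sj)**0.25
            / np.sqrt(np.linalg.det(avg)))
    return sigma2 * pref * np.exp(-Q)                    # eq. (1)
```

When $\Sigma_i = \Sigma_j = \Sigma$ the determinant prefactor equals 1 and the expression collapses to the stationary anisotropic squared exponential, $\sigma^2 \exp(-(x_i - x_j)^T \Sigma^{-1} (x_i - x_j))$, consistent with the remark that (1) is a squared exponential correlation with averaged kernel matrices.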
To generalize (1) and introduce functions for which sample path differentiability varies, we
extend (1) as proven in [10]:
Theorem 1 Let $Q_{ij}$ be defined as in (2). If a stationary correlation function, $R^S(\tau)$, is positive definite on $\Re^p$ for every $p = 1, 2, \ldots$, then
$$R^{NS}(x_i, x_j) = |\Sigma_i|^{\frac{1}{4}} |\Sigma_j|^{\frac{1}{4}} \left| (\Sigma_i + \Sigma_j)/2 \right|^{-\frac{1}{2}} R^S\!\left( \sqrt{Q_{ij}} \right) \qquad (3)$$
is a nonstationary correlation function, positive definite on $\Re^p$, $p = 1, 2, \ldots$.
One example of nonstationary covariance functions constructed in this way is a nonstationary version of the Matérn covariance,
$$C^{NS}(x_i, x_j) = \frac{\sigma^2 |\Sigma_i|^{\frac{1}{4}} |\Sigma_j|^{\frac{1}{4}} \left| (\Sigma_i + \Sigma_j)/2 \right|^{-\frac{1}{2}}}{\Gamma(\nu)\, 2^{\nu-1}} \left( 2\sqrt{\nu\, Q_{ij}} \right)^{\nu} K_{\nu}\!\left( 2\sqrt{\nu\, Q_{ij}} \right). \qquad (4)$$
Provided the kernel matrices vary smoothly in space, the sample function differentiability of the nonstationary form follows that of the stationary form, so for the nonstationary Matérn, the sample function differentiability increases with $\nu$ [10].
4 Bayesian regression model and implementation
Assume independent observations, $Y_1, \ldots, Y_n$, indexed by a vector of input or feature values, $x_i \in \Re^P$, with $Y_i \sim N(f(x_i), \eta^2)$, where $\eta^2$ is the noise variance. Specify a Gaussian process prior, $f(\cdot) \sim GP\left(\mu_f, C_f^{NS}(\cdot, \cdot)\right)$, where $C_f^{NS}(\cdot, \cdot)$ is the nonstationary Matérn covariance function (4) constructed from a set of Gaussian kernels as described below. For the differentiability parameter, we use the prior $\nu_f \sim U(0.5, 30)$, which varies between non-differentiability (0.5) and high differentiability. We use proper, but diffuse, priors for $\mu_f$, $\sigma_f^2$, and $\eta^2$. The main challenge is to parameterize the kernel matrices, since their evolution determines how quickly the covariance structure changes in the input space and the degree to which the model adapts to variable smoothness in the unknown function. In many problems, it seems natural that the covariance structure would evolve smoothly; if so, the differentiability of the regression function will be determined by $\nu_f$.
We put a prior distribution on the kernel matrices as follows. Any location in the input space, $x_i$, has a Gaussian kernel with mean $x_i$ and covariance (kernel) matrix, $\Sigma_i$. When the input space is one-dimensional, each kernel "matrix" is just a scalar, the variance of the kernel, and we use a stationary Matérn GP prior on the log variance so that the variances evolve smoothly in the input space. Next consider multi-dimensional input spaces; since there are (implicitly) kernel matrices at each location in the input space, we have a multivariate process, the matrix-valued function, $\Sigma(\cdot)$. Parameterizing positive definite matrices as a function of the input space is a difficult problem; see [7]. We use the spectral decomposition of an individual covariance matrix, $\Sigma_i$,
$$\Sigma_i = \Gamma(\gamma_1(x_i), \ldots, \gamma_Q(x_i))\, D(\lambda_1(x_i), \ldots, \lambda_P(x_i))\, \Gamma(\gamma_1(x_i), \ldots, \gamma_Q(x_i))^T, \qquad (5)$$
where $D$ is a diagonal matrix of eigenvalues and $\Gamma$ is an eigenvector matrix constructed as described below. $\lambda_p(\cdot)$, $p = 1, \ldots, P$, and $\gamma_q(\cdot)$, $q = 1, \ldots, Q$, which are functions on the input space, construct $\Sigma(\cdot)$. We will refer to these as the eigenvalue and eigenvector processes, and to them collectively as the eigenprocesses. Let $\phi(\cdot) \in \{\log(\lambda_1(\cdot)), \ldots, \log(\lambda_P(\cdot)), \gamma_1(\cdot), \ldots, \gamma_Q(\cdot)\}$ denote any one of these eigenprocesses. To have the kernel matrices vary smoothly, we ensure that their eigenvalues and eigenvectors vary smoothly by taking each $\phi(\cdot)$ to have a GP prior with a single stationary, anisotropic Matérn correlation function, common to all the processes and described later. Using a shared correlation function gives us smoothly-varying kernels, while limiting the number of parameters. We force the eigenprocesses to be very smooth by fixing $\nu = 30$. We do not let $\nu$ vary, because it should have minimal impact on the regression estimate and is not well-informed by the data.
Parameterizing the eigenvectors of the kernel matrices using Givens angles, with each angle a function on $\Re^P$, the input space, is difficult, because the angle functions have range $[0, 2\pi) \cong S^1$, which is not compatible with the range of a GP. To avoid this, we overparameterize the eigenvectors, using $Q = P(P-1)/2 + P - 1$ Gaussian processes, $\gamma_q(\cdot)$, that determine the directions of a set of orthogonal vectors. Here, we demonstrate the construction of the eigenvectors for $x_i \in \Re^2$ and $x_i \in \Re^3$; a similar approach, albeit with more parameters, applies to higher-dimensional spaces, but is probably infeasible in dimensions larger than five or so. In $\Re^3$, we construct an eigenvector matrix for an individual location as $\Gamma = \Gamma_3 \Gamma_2$, where
$$\Gamma_3 = \begin{pmatrix} \frac{a}{l_{abc}} & -\frac{b}{l_{ab}} & -\frac{ac}{l_{ab}l_{abc}} \\ \frac{b}{l_{abc}} & \frac{a}{l_{ab}} & -\frac{bc}{l_{ab}l_{abc}} \\ \frac{c}{l_{abc}} & 0 & \frac{l_{ab}}{l_{abc}} \end{pmatrix}, \qquad \Gamma_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{u}{l_{uv}} & -\frac{v}{l_{uv}} \\ 0 & \frac{v}{l_{uv}} & \frac{u}{l_{uv}} \end{pmatrix}.$$
The elements of $\Gamma_3$ are functions of three random variables, $\{A, B, C\}$, where $l_{abc} = \sqrt{a^2 + b^2 + c^2}$ and $l_{ab} = \sqrt{a^2 + b^2}$. $(\Gamma_3)_{32} = 0$ is a constraint that saves a degree of freedom for the two-dimensional subspace orthogonal to the first column of $\Gamma_3$. The elements of $\Gamma_2$ are based on two random variables, $U$ and $V$. To have the matrices, $\Sigma(\cdot)$, vary smoothly in space, $a$, $b$, $c$, $u$, and $v$ are the values of the processes $\gamma_1(\cdot), \ldots, \gamma_5(\cdot)$ at the input of interest.
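The construction just described can be checked numerically: $\Gamma = \Gamma_3 \Gamma_2$ is orthogonal by construction, so $\Sigma = \Gamma D \Gamma^T$ in (5) is symmetric positive definite with the prescribed eigenvalues. A sketch of the three-dimensional case (function names ours, and the matrix entries reconstructed from the garbled display above):

```python
import numpy as np

def eigenvector_matrix(a, b, c, u, v):
    """Gamma = Gamma3 @ Gamma2, built from the five process values."""
    l_ab = np.hypot(a, b)
    l_abc = np.sqrt(a*a + b*b + c*c)
    l_uv = np.hypot(u, v)
    G3 = np.array([[a/l_abc, -b/l_ab, -a*c/(l_ab*l_abc)],
                   [b/l_abc,  a/l_ab, -b*c/(l_ab*l_abc)],
                   [c/l_abc,  0.0,     l_ab/l_abc]])
    G2 = np.array([[1.0, 0.0,      0.0],
                   [0.0, u/l_uv, -v/l_uv],
                   [0.0, v/l_uv,  u/l_uv]])
    return G3 @ G2

def kernel_matrix(lams, a, b, c, u, v):
    """Sigma = Gamma D Gamma^T, as in eq. (5), with eigenvalues lams."""
    G = eigenvector_matrix(a, b, c, u, v)
    return G @ np.diag(lams) @ G.T
```

Because the five inputs are values of smooth GPs, the resulting kernel matrices vary smoothly over the input space, which is what the construction is designed to guarantee.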
One can integrate f , the function evaluated at the inputs, out of the GP model. In the
stationary GP model, the marginal posterior contains a small number of hyperparameters
to either optimize or sample via MCMC. In the nonstationary case, the presence of the
additional GPs for the kernel matrices (5) precludes straightforward optimization, leaving
MCMC. For each of the eigenprocesses, we reparameterize the vector, $\phi$, of values of the process at the input locations, $\phi = \mu_\phi + \sigma_\phi L(\delta(\kappa))\, \phi^*$, where $\phi^* \sim N(0, I)$ a priori and $L$ is a matrix defined below. We sample $\mu_\phi$, $\sigma_\phi$, and $\phi^*$ via Metropolis-Hastings separately for each eigenprocess. The parameter vector $\kappa$, involving $P$ correlation scale parameters and $P(P-1)/2$ Givens angles, is used to construct an anisotropic distance matrix, $\delta(\kappa)$, shared by the $\phi$ vectors, creating a stationary, anisotropic correlation structure common to all the eigenprocesses. $\kappa$ is also sampled via Metropolis-Hastings. $L(\delta(\kappa))$ is a generalized Cholesky decomposition of the correlation matrix shared by the $\phi$ vectors that deals
Figure 1: On the left are the three test functions in one dimension, with one simulated set
of observations (of the 50 used in the evaluation), while the right shows the test function
with two inputs.
with numerically singular correlation matrices by setting the $i$th column of the matrix to all zeroes when $\phi_i$ is numerically a linear combination of $\phi_1, \ldots, \phi_{i-1}$ [13]. One never calculates $L(\delta(\kappa))^{-1}$ or $|L(\delta(\kappa))|$, which are not defined, and does not need to introduce jitter, and therefore discontinuity, in $\phi(\cdot)$, into the covariance structure.
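The generalized Cholesky decomposition of [13] is not reproduced in this excerpt; the sketch below shows the stated behavior — zero out the column for any location whose correlations are numerically linearly dependent on earlier ones, so that $L L^T$ still reproduces the (singular) correlation matrix without adding jitter. This is our illustrative version, not the cited algorithm.

```python
import numpy as np

def generalized_cholesky(R, tol=1e-10):
    """Lower-triangular L with L @ L.T == R, zeroing dependent columns."""
    n = R.shape[0]
    L = np.zeros_like(R, dtype=float)
    for i in range(n):
        d = R[i, i] - L[i, :i] @ L[i, :i]
        if d > tol:
            L[i, i] = np.sqrt(d)
            L[i+1:, i] = (R[i+1:, i] - L[i+1:, :i] @ L[i, :i]) / L[i, i]
        # else: column i stays zero -- row i is numerically dependent
    return L
```

For a full-rank matrix this coincides with the ordinary Cholesky factor; for a rank-deficient correlation matrix (e.g. two identical input locations) the dependent column is zeroed and the factorization still reconstructs the matrix exactly.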
5 Experiments
For one-dimensional functions, we compare the nonstationary GP method to a stationary GP model1 , two neural network implementations2 , and Bayesian adaptive regression
splines (BARS), a Bayesian free-knot spline model that has been very successful in comparisons in the statistical literature [6]. We use three test functions [6]: a smoothly-varying
function, a spatially inhomogeneous function, and a function with a sharp jump (Figure
1a). For each, we generate 50 sets of noisy data and compare the models using the means,
averaged over the 50 sets, of the standardized MSE, $\sum_i (\hat{f}_i - f_i)^2 / \sum_i (f_i - \bar{f})^2$, where $\hat{f}_i$ is the posterior mean at $x_i$, and $\bar{f}$ is the mean of the true values. In the non-Bayesian neural
network model, $\hat{f}_i$ is the fitted value and, as a simplification, we use a network with the optimal number of hidden units (3, 3, and 8 for the three functions), thereby giving an overly optimistic assessment of the performance. To avoid local minima, we used the network fit that minimized the MSE (relative to the data, with $y_i$ in place of $f_i$ in the expression for MSE) over five fits with different random seeds.
For higher-dimensional inputs, we compare the nonstationary GP to the stationary GP, the
neural network models, and two free-knot spline methods, Bayesian multivariate linear
splines (BMLS) [14] and Bayesian multivariate automatic regression splines (BMARS)
[15], a Bayesian version of MARS [16]. We choose to compare to neural networks and
1
We implement the stationary GP model by replacing $C_f^{NS}(\cdot, \cdot)$ with the Matérn stationary correlation, still using a differentiability parameter, $\nu_f$, that is allowed to vary.
2
For a non-Bayesian model, we use the implementation in the statistical software R, which fits
a multilayer perceptron with one hidden layer. For a Bayesian version, results from R. Neal?s FBM
software were kindly provided by A. Vehtari.
Table 1: Mean (over 50 data samples) and 95% confidence interval for standardized MSE
for the five methods on the three test functions with one-dimensional input.

Method              Function 1            Function 2          Function 3
Stat. GP            .0083 (.0073,.0093)   .026 (.024,.029)    .071 (.067,.074)
Nonstat. GP         .0083 (.0073,.0093)   .015 (.013,.016)    .026 (.021,.030)
BARS                .0081 (.0071,.0092)   .012 (.011,.013)    .0050 (.0043,.0056)
Bayes. neural net.  .0082 (.0072,.0093)   .011 (.010,.014)    .015 (.014,.016)
neural network      .0108 (.0095,.012)    .013 (.012,.015)    .0095 (.0086,.010)
splines, because they are popular and these particular implementations have the ability
to adapt to variable smoothness. BMLS uses piecewise, continuous linear splines, while
BMARS uses tensor products of univariate splines; both are fit via reversible jump MCMC.
We use three datasets, the first a function with two inputs [14] (Figure 1b), for which we use
225 training inputs and test on 225 inputs, for each of 50 simulated datasets. The second
is a real dataset of air temperature as a function of latitude and longitude [17] that allows
assessment on a spatial dataset with distinct variable smoothness. We use a 109 observation
subset of the original data, focusing on the Western hemisphere, 222.5°–322.5° E and
62.5° S–82.5° N, and fit the models on 54 splits with 107 training examples and two test
examples and one split with 108 training examples and one test example, thereby including
each data point as a test point once. The third is a real dataset of 111 daily measurements
of ozone [18] included in the S-plus statistical software. The goal is to predict the cube root
of ozone based on three features: radiation, temperature, and wind speed. We do 55 splits
with 109 training examples and two test examples and one split of 110 training examples
and one test example. For the non-Bayesian neural network, 10, 50, and 3 hidden units
were optimal for the three datasets, respectively.
Table 1 shows that the nonstationary GP does as well or better than the stationary GP,
but that BARS does as well or better than the other methods on all three datasets with
one input. Part of the difficulty for the nonstationary GP with the third function, which
has the sharp jump, is that our parameterization forces smoothly-varying kernel matrices,
which prevents our particular implementation from picking up sharp jumps. A potential
improvement would be to parameterize kernel matrices that do not vary so smoothly. Table
2 shows that for the known function on two dimensions, the GP models outperform both
the spline models and the non-Bayesian neural network, but not the Bayesian network. The
stationary and nonstationary GPs are very similar, indicative of the relative homogeneity
of the function. For the two real datasets, the nonstationary GP model outperforms the
other methods, except the Bayesian network on the temperature dataset. Predictive density
calculations that assess the fits of the functions drawn during the MCMC are similar to the
point estimate MSE calculations in terms of model comparison, although we do not have
predictive density values for the non-Bayesian neural network implementation.
6
Non-Gaussian data
We can model non-Gaussian data, using the usual extension from a linear model to a generalized linear model, for observations $Y_i \sim D(g(f(x_i)))$, where $D(\cdot)$ ($g(\cdot)$) is an appropriate distribution (link) function, such as the Poisson (log) for count data or the binomial
(logit) for binary data. Take $f(\cdot)$ to have a nonstationary GP prior; it cannot be integrated
out of the model because of the lack of conjugacy, which causes slow MCMC mixing. [10]
improves mixing, which remains slow, using a sampling scheme in which the hyperparameters (including the kernel structure for the nonstationarity) are sampled jointly with the
function values, f , in a way that makes use of information in the likelihood.
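As a concrete instance of the link-function setup above, a sketch of the binomial-logit log-likelihood for a single observation (function name and toy values are ours; the two-trial case matches the rainfall data analyzed below):

```python
import math

def binomial_logit_loglik(f, y, n_trials=2):
    """Log-likelihood of y successes in n_trials with p = logistic(f),
    i.e. the binomial distribution with a logit link."""
    p = 1.0 / (1.0 + math.exp(-f))
    return (math.log(math.comb(n_trials, y))
            + y * math.log(p) + (n_trials - y) * math.log(1.0 - p))

# f = 0 gives p = 0.5; one rainy day out of two trials:
print(round(binomial_logit_loglik(0.0, 1), 4))   # -0.6931, i.e. log(2 * 0.25)
```

In the MCMC scheme described above, this is the likelihood term evaluated at the sampled function values f.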
Table 2: For test function with two inputs, mean (over 50 data samples) and 95% confidence
interval for standardized MSE at 225 test locations, and for the temperature and ozone
datasets, cross-validated standardized MSE, for the six methods.

Method                    Function with 2 inputs   Temp. data   Ozone data
Stat. GP                  .024 (.021,.026)         .46          .33
Nonstat. GP               .023 (.020,.026)         .36          .29
Bayesian neural network   .020 (.019,.022)         .35          .32
neural network            .040* (.033,.047)        .60          .34
BMARS                     .076 (.065,.087)         .53          .33
BMLS                      .033 (.029,.038)         .78          .33

* [14] report a value of .07 for a neural network implementation
We fit the model to the Tokyo rainfall dataset [19]. The data are the presence of rainfall
greater than 1 mm for every calendar day in 1983 and 1984. Assuming independence
between years [19], conditional on $f(\cdot) = \mathrm{logit}(p(\cdot))$, the likelihood for a given calendar
day, xi , is binomial with two trials and unknown probability of rainfall, p(xi ). Figure 2a
shows that the estimated function reasonably follows the data and is quite variable because
the data in some areas are clustered. The model detects inhomogeneity in the function,
with more smoothness in the first few months and less smoothness later (Figure 2b).
Figure 2: (a) Posterior mean estimate, from the nonstationary GP model, of $p(\cdot)$, the probability of rainfall as a function of calendar day, with 95% pointwise credible intervals. Dots are empirical probabilities of rainfall based on the two binomial trials. (b) Posterior geometric mean kernel size (square root of geometric mean kernel eigenvalue). [Both panels share the calendar-day x-axis (0–300); panel (a) y-axis: Prob. of rainfall (0.0–0.8); panel (b) y-axis: Kernel size (10–25).]
7
Discussion
We introduce a class of nonstationary covariance functions that can be used in GP regression (and classification) models and allow the model to adapt to variable smoothness in
the unknown function. The nonstationary GPs improve on stationary GP models on several test datasets. In test functions on one-dimensional spaces, a state-of-the-art free-knot
spline model outperforms the nonstationary GP, but in higher dimensions, the nonstationary GP outperforms two free-knot spline approaches and a non-Bayesian neural network,
while being competitive with a Bayesian neural network. The nonstationary GP may be
of particular interest for data indexed by spatial coordinates, where the low dimensionality
keeps the parameter complexity manageable.
Unfortunately, the nonstationary GP requires many more parameters than a stationary GP,
particularly as the dimension grows, losing the attractive simplicity of the stationary GP
model. Use of GP priors in the hierarchy of the model to parameterize the nonstationary
covariance results in slow computation, limiting the feasibility of the model to approximately n < 1000, because the Cholesky decomposition is $O(n^3)$. Our approach provides
a general framework; work is ongoing on simpler, more computationally efficient parameterizations of the kernel matrices. Also, approaches that use low-rank approximations to
the covariance matrix [20, 21] may speed fitting.
References
[1] M.N. Gibbs. Bayesian Gaussian Processes for Classification and Regression. PhD thesis, Univ.
of Cambridge, Cambridge, U.K., 1997.
[2] D.J.C. MacKay. Introduction to Gaussian processes. Technical report, Univ. of Cambridge,
1997.
[3] D. Higdon, J. Swall, and J. Kern. Non-stationary spatial modeling. In J.M. Bernardo, J.O. Berger, A.P. Dawid, and A.F.M. Smith, editors, Bayesian Statistics 6, pages 761–768, Oxford, U.K., 1999. Oxford University Press.
[4] A.M. Schmidt and A. O'Hagan. Bayesian inference for nonstationary spatial covariance structure via spatial deformations. Technical Report 498/00, University of Sheffield, 2000.
[5] D. Damian, P.D. Sampson, and P. Guttorp. Bayesian estimation of semi-parametric nonstationary spatial covariance structure. Environmetrics, 12:161–178, 2001.
[6] I. DiMatteo, C.R. Genovese, and R.E. Kass. Bayesian curve-fitting with free-knot splines. Biometrika, 88:1055–1071, 2002.
[7] D. MacKay and R. Takeuchi. Interpolation models with multiple hyperparameters, 1995.
[8] Volker Tresp. Mixtures of Gaussian processes. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 654–660. MIT Press, 2001.
[9] C.E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, Massachusetts, 2002. MIT Press.
[10] C.J. Paciorek. Nonstationary Gaussian Processes for Regression and Spatial Modelling. PhD thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania, 2003.
[11] M.L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer, N.Y., 1999.
[12] F. Vivarelli and C.K.I. Williams. Discovering hidden features with Gaussian processes regression. In M.J. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems 11, 1999.
[13] J.R. Lockwood, M.J. Schervish, P.L. Gurian, and M.J. Small. Characterization of arsenic occurrence in source waters of U.S. community water systems. J. Am. Stat. Assoc., 96:1184–1193, 2001.
[14] C.C. Holmes and B.K. Mallick. Bayesian regression with multivariate linear splines. Journal of the Royal Statistical Society, Series B, 63:3–17, 2001.
[15] D.G.T. Denison, B.K. Mallick, and A.F.M. Smith. Bayesian MARS. Statistics and Computing, 8:337–346, 1998.
[16] J.H. Friedman. Multivariate adaptive regression splines. Annals of Statistics, 19:1–141, 1991.
[17] S.A. Wood, W.X. Jiang, and M. Tanner. Bayesian mixture of splines for spatially adaptive nonparametric regression. Biometrika, 89:513–528, 2002.
[18] S.M. Bruntz, W.S. Cleveland, B. Kleiner, and J.L. Warner. The dependence of ambient ozone on solar radiation, temperature, and mixing height. In American Meteorological Society, editor, Symposium on Atmospheric Diffusion and Air Pollution, pages 125–128, 1974.
[19] C. Biller. Adaptive Bayesian regression splines in semiparametric generalized linear models. Journal of Computational and Graphical Statistics, 9:122–140, 2000.
[20] A.J. Smola and P. Bartlett. Sparse greedy Gaussian process approximation. In T. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, Cambridge, Massachusetts, 2001. MIT Press.
[21] M. Seeger and C. Williams. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on AI and Statistics 9, 2003.
A Kullback-Leibler Divergence Based Kernel for
SVM Classification in Multimedia Applications
Pedro J. Moreno Purdy P. Ho
Hewlett-Packard
Cambridge Research Laboratory
Cambridge, MA 02142, USA
{pedro.moreno,purdy.ho}@hp.com
Nuno Vasconcelos
UCSD ECE Department
9500 Gilman Drive, MC 0407
La Jolla, CA 92093-0407
[email protected]
Abstract
Over the last years significant efforts have been made to develop kernels
that can be applied to sequence data such as DNA, text, speech, video
and images. The Fisher Kernel and similar variants have been suggested
as good ways to combine an underlying generative model in the feature
space and discriminant classifiers such as SVM's. In this paper we suggest an alternative procedure to the Fisher kernel for systematically finding kernel functions that naturally handle variable length sequence data
in multimedia domains. In particular for domains such as speech and
images we explore the use of kernel functions that take full advantage
of well known probabilistic models such as Gaussian Mixtures and single full covariance Gaussian models. We derive a kernel distance based
on the Kullback-Leibler (KL) divergence between generative models. In
effect our approach combines the best of both generative and discriminative methods and replaces the standard SVM kernels. We perform
experiments on speaker identification/verification and image classification tasks and show that these new kernels have the best performance
in speaker verification and mostly outperform the Fisher kernel based
SVM's and the generative classifiers in speaker identification and image
classification.
1
Introduction
During the last years Support Vector Machines (SVM's) [1] have become extremely successful discriminative approaches to pattern classification and regression problems. Excellent results have been reported in applying SVM's in multiple domains. However, the
application of SVM's to data sets where each element has variable length remains problematic. Furthermore, for those data sets where the elements are represented by large sequences
of vectors, such as speech, video or image recordings, the direct application of SVM's to
the original vector space is typically unsuccessful.
While most research in the SVM community has focused on the underlying learning algorithms, the study of kernels has also gained importance recently. Standard kernels such
as linear, Gaussian, or polynomial do not take full advantage of the nuances of specific
data sets. This has motivated plenty of research into the use of alternative kernels in the
areas of multimedia. For example, [2] applies normalization factors to polynomial kernels
for speaker identification tasks. Similarly, [3] explores the use of heavy tailed Gaussian
kernels in image classification tasks. These approaches in general only try to tune standard
kernels (linear, polynomial, Gaussian) to the nuances of multimedia data sets.
On the other hand statistical models such as Gaussian Mixture Models (GMM) or Hidden
Markov Models make strong assumptions about the data. They are simple to learn and
estimate, and are well understood by the multimedia community. It is therefore attractive
to explore methods that combine these models and discriminative classifiers. The Fisher
kernel proposed by Jaakkola [4] effectively combines both generative and discriminative
classifiers for variable length sequences. Besides its original application in genomic problems it has also been applied to multimedia domains, among others [5] applies it to audio
classification with good results; [6] also tries a variation on the Fisher kernel on phonetic
classification tasks.
We propose a different approach to combine both discriminative and generative methods to
classification. Instead of using these standard kernels, we leverage successful generative
models used in the multimedia field. We use diagonal covariance GMM's and full covariance Gaussian models to better represent each individual audio and image object. We then
use a metric derived from the symmetric Kullback-Leibler (KL) divergence to effectively
compute inner products between multimedia objects.
2
Kernels for SVM's
Much of the flexibility and classification power of SVM's resides in the choice of kernel.
Some examples are linear, polynomial degree p, and Gaussian. These kernel functions
have two main disadvantages for multimedia signals. First they only model inner products
between individual feature vectors as opposed to an ensemble of vectors which is the typical
case for multimedia signals. Secondly these kernels are quite generic and do not take
advantage of the statistics of the individual signals we are targeting.
The Fisher kernel approach [4] is a first attempt at solving these two issues. It assumes the
existence of a generative model that explains well all possible data. For example, in the
case of speech signals the generative model $p(x|\theta)$ is often a Gaussian mixture, where the
model parameters $\theta$ are priors, means, and diagonal covariance matrices. GMM's are also
quite popular in the image classification and retrieval domains; [7] shows good results on
image classification and retrieval using Gaussian mixtures.
For any given sequence of vectors defining a multimedia object $X = \{x_1, x_2, \ldots, x_m\}$, and
assuming that each vector in the sequence is independent and identically distributed, we
can easily define the likelihood of the ensemble being generated by $p(x|\theta)$ as $P(X|\theta) = \prod_{i=1}^{m} p(x_i|\theta)$. The Fisher score maps each individual sequence $\{X_1, \ldots, X_n\}$, composed
of a different number of feature vectors, into a single vector in the gradient log-likelihood
space.
This new feature vector, the Fisher score, is defined as
$$U_X = \nabla_\theta \log P(X|\theta) \qquad (1)$$
Each component of $U_X$ is a derivative of the log-likelihood of the vector sequence X
with respect to a particular parameter of the generative model. In our case the parameters
$\theta$ of the generative model are chosen from either the prior probabilities, the mean vector
or the diagonal covariance matrix of each individual Gaussian in the mixture model. For
example, if we use the mean vectors as our model parameters, i.e., for $\theta = \mu_k$ out of K
possible mixtures, then the Fisher score is
$$\nabla_{\mu_k} \log P(X|\mu_k) = \sum_{i=1}^{m} P(k|x_i)\,\Sigma_k^{-1}(x_i - \mu_k) \qquad (2)$$
where $P(k|x_i)$ represents the a posteriori probability of mixture k given the observed
feature vector $x_i$. Effectively we transform each multimedia object (audio or image) X of
variable length into a single vector $U_X$ of fixed dimension.
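Eq. (2) can be sketched directly for a diagonal-covariance GMM; everything below (function name, toy model) is illustrative, not the authors' code:

```python
import numpy as np

def fisher_score_means(X, priors, means, variances):
    """Fisher score w.r.t. the mixture means (Eq. 2) for a diagonal-covariance
    GMM. Maps a variable-length sequence X to one fixed-size vector."""
    X = np.atleast_2d(X)                       # (m, d) sequence of feature vectors
    K, d = means.shape
    # responsibilities P(k|x_i), shape (m, K), via stabilized log-densities
    log_comp = np.empty((X.shape[0], K))
    for k in range(K):
        diff = X - means[k]
        log_comp[:, k] = (np.log(priors[k])
                          - 0.5 * np.sum(np.log(2 * np.pi * variances[k]))
                          - 0.5 * np.sum(diff ** 2 / variances[k], axis=1))
    log_comp -= log_comp.max(axis=1, keepdims=True)
    resp = np.exp(log_comp)
    resp /= resp.sum(axis=1, keepdims=True)
    # sum_i P(k|x_i) * Sigma_k^{-1} (x_i - mu_k), stacked over all k
    score = [(resp[:, [k]] * (X - means[k]) / variances[k]).sum(axis=0)
             for k in range(K)]
    return np.concatenate(score)               # length K*d, independent of m

# Toy 2-mixture model in 1-D.
priors = np.array([0.5, 0.5])
means = np.array([[0.0], [5.0]])
variances = np.array([[1.0], [1.0]])
u = fisher_score_means(np.array([[0.0], [5.0], [4.0]]), priors, means, variances)
print(u.shape)   # (2,) -- one entry per mixture mean
```

However long the input sequence, the output has one entry per model parameter, which is what makes it usable with a standard SVM kernel.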
3
Kullback-Leibler Divergence Based Kernels
We start with a statistical model $p(x|\theta_i)$ of the data, i.e., we estimate the parameters $\theta_i$
of a generic probability density function (PDF) for each multimedia object (utterance or
image) $X_i = \{x_1, x_2, \ldots, x_m\}$. We pick PDF's that have been shown over the years to
be quite effective at modeling multimedia patterns. In particular we use diagonal Gaussian
mixture models and single full covariance Gaussian models. In the first case the parameters
$\theta_i$ are priors, mean vectors, and diagonal covariance matrices, while in the second case the
parameters $\theta_i$ are the mean vector and full covariance matrix.
Once the PDF $p(x|\theta_i)$ has been estimated for each training and testing multimedia object
we replace the kernel computation in the original sequence space by a kernel computation
in the PDF space:
$$K(X_i, X_j) \Longrightarrow K(p(x|\theta_i), p(x|\theta_j)) \qquad (3)$$
To compute the PDF parameters $\theta_i$ for a given object $X_i$ we use a maximum likelihood
approach. In the case of diagonal mixture models there is no analytical solution for $\theta_i$
and we use the Expectation Maximization algorithm. In the case of the single full covariance
Gaussian model there is a simple analytical solution for the mean vector and covariance
matrix. Effectively we are proposing to map the input space $X_i$ to a new feature space $\theta_i$.
Notice that if the number of vectors in the $X_i$ multimedia sequence is small and there is not
enough data to accurately estimate $\theta_i$, we can use regularization methods, or even replace
the maximum likelihood solution for $\theta_i$ by a maximum a posteriori solution. Other solutions, like starting from a generic PDF and adapting its parameters $\theta_i$ to the current object,
are also possible.
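For the single full covariance case, the maximum likelihood fit of each object's PDF is just the sample mean and (biased) sample covariance; a minimal sketch, with a small ridge term standing in for the regularization mentioned above (names and constants are ours):

```python
import numpy as np

def fit_full_gaussian(X):
    """ML estimate of a single full-covariance Gaussian for one multimedia
    object X, given as an (m, d) array of feature vectors."""
    X = np.atleast_2d(X)
    mu = X.mean(axis=0)
    # bias=True gives the ML (1/m) covariance; the tiny ridge keeps the
    # matrix invertible for short sequences (our choice, not the paper's).
    sigma = np.cov(X, rowvar=False, bias=True) + 1e-6 * np.eye(X.shape[1])
    return mu, sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
mu, sigma = fit_full_gaussian(X)
print(mu.shape, sigma.shape)   # (3,) (3, 3)
```

Each object is thus summarized by a (mean, covariance) pair, which is all Eq. (6) below needs.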
The next step is to define the kernel distance in this new feature space. Because of the statistical nature of the feature space a natural choice for a distance metric is one that compares
PDF's. From the standard statistical literature there are several possible choices; however,
in this paper we only report our results on the symmetric Kullback-Leibler (KL) divergence
$$D(p(x|\theta_i), p(x|\theta_j)) = \int_{-\infty}^{\infty} p(x|\theta_i) \log\frac{p(x|\theta_i)}{p(x|\theta_j)}\,dx + \int_{-\infty}^{\infty} p(x|\theta_j) \log\frac{p(x|\theta_j)}{p(x|\theta_i)}\,dx \qquad (4)$$
Because a matrix of kernel distances directly based on symmetric KL divergence does not
satisfy the Mercer conditions, i.e., it is not a positive definite matrix, we need a further step
to generate a valid kernel. Among many possibilities, we simply exponentiate the symmetric KL divergence, and scale and shift it (the A and B factors below) for numerical stability reasons:
$$K(X_i, X_j) \Longrightarrow K(p(x|\theta_i), p(x|\theta_j)) = e^{-A\,D(p(x|\theta_i), p(x|\theta_j)) + B} \qquad (5)$$
In the case of Gaussian mixture models the computation of the KL divergence is not direct.
In fact there is no analytical solution to Eq. (4) and we have to resort to Monte Carlo
methods or numerical approximations. In the case of single full covariance models the KL
divergence has an analytical solution
$$D(p(x|\theta_i), p(x|\theta_j)) = \operatorname{tr}(\Sigma_i \Sigma_j^{-1}) + \operatorname{tr}(\Sigma_j \Sigma_i^{-1}) - 2S + \operatorname{tr}\big((\Sigma_i^{-1} + \Sigma_j^{-1})(\mu_i - \mu_j)(\mu_i - \mu_j)^T\big) \qquad (6)$$
where S is the dimensionality of the original feature data x. This distance is similar to the
Arithmetic harmonic sphericity (AHS) distance quite popular in the speaker identification
and verification research community [8].
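Eq. (6) together with the exponentiation in Eq. (5) gives a complete kernel between two objects once each is summarized by a mean and covariance; a minimal sketch (function names, the A and B defaults, and the toy inputs are ours):

```python
import numpy as np

def symmetric_kl_gauss(mu_i, sig_i, mu_j, sig_j):
    """Distance of Eq. (6) between two full-covariance Gaussians
    (equal to twice their symmetric KL divergence)."""
    S = len(mu_i)
    inv_i, inv_j = np.linalg.inv(sig_i), np.linalg.inv(sig_j)
    dm = (mu_i - mu_j).reshape(-1, 1)
    return (np.trace(sig_i @ inv_j) + np.trace(sig_j @ inv_i) - 2 * S
            + np.trace((inv_i + inv_j) @ dm @ dm.T))

def kl_kernel(mu_i, sig_i, mu_j, sig_j, A=1.0, B=0.0):
    """Exponentiated distance of Eq. (5); A scales and B shifts."""
    return np.exp(-A * symmetric_kl_gauss(mu_i, sig_i, mu_j, sig_j) + B)

I2 = np.eye(2)
print(symmetric_kl_gauss(np.zeros(2), I2, np.zeros(2), I2))  # 0.0
print(kl_kernel(np.zeros(2), I2, np.ones(2), I2))            # exp(-4)
```

Identical models give distance 0 and hence the maximal kernel value e^B; increasingly dissimilar models decay toward zero.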
Notice that there are significant differences between our KL divergence based kernel and
the Fisher kernel method. In our approach there is no underlying generative model to represent all the data. We do not use a single PDF (even if it encodes a latent variable indicative
of class membership) as a way to map the multimedia object from the original feature vector
space to a gradient log-likelihood vector space. Instead each individual object (consisting
of a sequence of feature vectors) is modeled by its unique PDF. This represents a more localized version of the Fisher kernel underlying generative model. Effectively the modeling
power is spent where it matters most, on each of the individual objects in the training and
testing sets. Interestingly, the object PDF does not have to be extremely complex. As we
will show in our experimental section a single full covariance Gaussian model produces
extremely good results. Also, in our approach there is not a true intermediate space unlike
the gradient log-likelihood space used in the Fisher kernel. Our multimedia objects are
transformed directly into PDF's.
4
Audio and Image Databases
We chose the 50 most frequent speakers from the HUB4-96 [9] News Broadcasting corpus
and 50 speakers from the Narrowband version of the KING corpus [10] to train and test
our new kernels on speaker identification and verification tasks. The HUB training set
contains about 25 utterances (each 3-7 seconds long) from each speaker, resulting in 1198
utterances (or about 2 hours of speech). The HUB test set contains the rest of the utterances
from these 50 speakers resulting in 15325 utterances (or about 21 hours of speech). The
KING corpus is commonly used for speaker identification and verification in the speech
community [11]. Its training set contains 4 utterances (each about 30 seconds long) from
each speaker and the test set contains the remaining 6 from these 50 speakers. A total of 200
training utterances (about 1.67 hours of speech) and 300 test utterances (about 2.5 hours
of speech) were used. Following standard practice in speech processing each utterance
was transformed into a sequence of 13 dimensional Mel-Frequency Cepstral vectors. The
vectors were augmented with their first and second order time derivatives resulting in a
39 dimensional feature vector. We also mean-normalized the KING utterances in order to
compensate for the distortion introduced by different telephone channels. We did not do
so for the HUB experiments since mean normalizing the audio would remove important
speaker characteristics.
We chose the Corel database [12] to train and test all algorithms on image classification.
COREL contains a variety of objects, such as landscape, vehicles, plants, and animals. To
make the task more challenging we picked 8 classes of highly confusable objects: Apes,
ArabianHorses, Butterflies, Dogs, Owls, PolarBears, Reptiles, and RhinosHippos. There
were 100 images per class ? 66 for training and 34 for testing; thus, a total of 528 training
images and 272 testing images were used. All images are 353x225 pixel 24-bit RGB-color
JPEGs. To extract feature vectors we followed standard practice in image processing. For
each of the 3 color channels the image was scanned by an 8x8 window shifted every 4
pixels. The 192 pixels under each window were converted into a 192-dimensional Discrete
Cosine Transform (DCT) feature vector. After this only the 64 low frequency elements
were used since they captured most of the image characteristics.
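The sliding-window DCT feature extraction can be sketched as follows; this simplified version operates on one channel at a time and keeps the first 64 coefficients of each 8x8 patch, whereas the paper concatenates the three color channels into a 192-dimensional vector before the DCT (all names here are ours):

```python
import numpy as np

def dct_features(channel, win=8, step=4, keep=64):
    """Slide a win x win window over one image channel with the given step,
    flatten each patch, apply an orthonormal DCT-II, and keep the first
    `keep` (lowest-frequency) coefficients."""
    n = win * win
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)                      # orthonormal DCT-II matrix
    feats = []
    H, W = channel.shape
    for r in range(0, H - win + 1, step):
        for c in range(0, W - win + 1, step):
            patch = channel[r:r + win, c:c + win].ravel()
            feats.append((D @ patch)[:keep])
    return np.array(feats)

img = np.arange(16 * 16, dtype=float).reshape(16, 16)
F = dct_features(img)
print(F.shape)   # (9, 64) -- 3x3 window positions, 64 coefficients each
```

Each image thus becomes a variable-length sequence of DCT vectors, exactly the kind of object the PDF-space kernels above consume.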
5
Experiments and Results
Our experiments trained and tested five different types of classifiers: Baseline GMM, Baseline AHS1 , SVM using Fisher kernel, and SVM using our new KL divergence based kernels.
When training and testing our new GMM/KL Divergence based kernels, a sequence of
feature vectors, {x1 , x2 , . . . , xm } from each utterance or image X was modeled by a single
GMM of diagonal covariances. Then the KL divergences between each of these GMM's
were computed according to Eq. (4) and transformed according to Eq. (5). This resulted
in kernel matrices for training and testing that could be fed directly into an SVM classifier.
Since all our SVM experiments are multiclass experiments we used the 1-vs-all training
approach. The class with the largest positive score was designated as the winner class. For
the experiments in which the object PDF was a single full covariance Gaussian we followed
a similar procedure. The KL divergences between each pair of PDF?s were computed
according to Eq. (6) and transformed according Eq. (5). The dimensions of the resulting
training and testing kernel matrices are shown in Table 1.
Table 1: Dimensions of the training and testing kernel matrices of both new probablisitic
kernels on HUB, KING, and COREL databases.
HUB
Training
1198x1198
HUB
Testing
15325x1198
KING
Training
200x200
KING
Testing
300x200
COREL
Training
528x528
COREL
Testing
272x528
In the Fisher kernel experiments we computed the Fisher score vector $U_X$ for each training
and testing utterance and image, with the $\theta$ parameter based on the prior probabilities of each
mixture Gaussian. The underlying generative model was the same one used for the GMM
classification experiments.
The task of speaker verification is different from speaker identification. We make a binary
decision of whether or not an unknown utterance is spoken by the person of the claimed
identity. Because we have trained SVMs using the one-vs-all approach, their output can
be directly used in speaker verification. To verify whether the utterance belongs to class
A we just use the A-vs-all SVM output. On the other hand, the scores of the GMM and
AHS classifiers cannot be used directly for verification experiments. We need to somehow
combine the scores from the non-claimed identities, i.e., if we want to verify whether an
utterance belongs to speaker A we need to compute a model for non-A speakers. This
non-class model can be computed by first pooling the 49 non-class GMMs together to form a
super GMM with 256x49 mixtures (each speaker GMM has 256 mixtures). Then the score
produced by this super GMM is subtracted from the score produced by the claimed speaker
GMM. In the case of AHS classifiers we estimate the non-class score as the arithmetic mean
of the other 49 speaker scores. To compute the miss and false positive rates we compare the
1 Arithmetic harmonic sphericity classifiers pull together all vectors belonging to a class and fit a
single full covariance Gaussian model to the data. Similarly, a single full covariance model is fitted to
each testing utterance. The similarity between the testing utterances and the class models is measured
according to Eq. (6). The class with the minimum distance is chosen as the winning class.
decision scores to a threshold θ. By varying θ we can compute Detection Error Tradeoff
(DET) curves such as the ones shown in Fig. 1.
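The threshold sweep that produces the DET operating points can be sketched as follows. Scores are assumed to be "higher = more target-like", and the grid of thresholds is illustrative.

```python
import numpy as np

def det_points(target_scores, impostor_scores, n_thresh=100):
    """Sweep a decision threshold over verification scores and return the
    (P_miss, P_fa) pairs used to trace a DET curve: a miss is a target
    trial scoring below the threshold, a false alarm is an impostor
    trial scoring at or above it."""
    all_scores = np.concatenate([target_scores, impostor_scores])
    thresholds = np.linspace(all_scores.min(), all_scores.max(), n_thresh)
    p_miss = np.array([(target_scores < t).mean() for t in thresholds])
    p_fa = np.array([(impostor_scores >= t).mean() for t in thresholds])
    return p_miss, p_fa
```

The equal error rate (EER) reported in Table 2 is read off where the two rates cross.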
We compare the performance of all the 5 classifiers in speaker verification and speaker
identification tasks. Table 2 shows equal-error rates (EER?s) for speaker verification and
accuracies of speaker identification for both speech corpora.
Table 2: Comparison of all the classifiers used on the HUB and KING corpora. Both classification accuracy (Acc) and equal error rates (EER) are reported in percentage points.
Type of Classifier   HUB Acc   HUB EER   KING Acc   KING EER
GMM                  87.4      8.1       68.0       16.1
AHS                  81.7      9.1       48.3       26.8
SVM Fisher           62.4      14.0      48.0       12.3
SVM GMM/KL           83.8      7.8       72.7       7.9
SVM COV/KL           84.7      7.4       79.7       6.6
We also compared the performance of 4 classifiers in the image classification task. Since
the AHS classifier is not an effective image classifier we excluded it here. Table 3 shows the
classification accuracies.
Table 3: Comparison of the 4 classifiers used on the COREL animal subset. Classification
accuracies are reported in percentage points.
Type of Classifier   Accuracy
GMM                  82.0
SVM Fisher           73.5
SVM GMM/KL           85.3
SVM COV/KL           80.9
Our results using the KL divergence based kernels in both multimedia data types are quite
promising. In the case of the HUB experiments all classifiers perform similarly in both
speaker verification and identification tasks with the exception of the SVM Fisher which
performs significantly worse. However, for the KING database, we can see that our KL
based SVM kernels outperform all other classifiers in both identification and verification
tasks. Interestingly the Fisher kernel performs quite poorly too. Looking at the DET plots
for both corpora we can see that on the HUB experiments the new SVM kernels perform
quite well and on the KING corpora they perform much better than any other verification
system.
In image classification experiments with the COREL database both KL based SVM kernels
outperform the Fisher SVM; the GMM/KL kernel even outperforms the baseline GMM
classifier.
6 Conclusion and Future Work
In this paper we have proposed a new method of combining generative models and discriminative classifiers (SVMs). Our approach is extremely simple. For every multimedia
object represented by a sequence of vectors, a PDF is learned using maximum likelihood
approaches. We have experimented with PDFs that are commonly used in the multimedia
[Figure 1: Speaker verification detection error tradeoff (DET) curves for the HUB and the
KING corpora, tested on all 50 speakers. Each panel plots P(miss) against P(false positive),
both axes from 0 to 0.4, for the GMM (NG=256), AHS, SVM Fisher, SVM GMM/KL, and
SVM COV/KL systems.]
community. However, the method is generic enough and could be used with any PDF. In the
case of GMMs we use the EM algorithm to learn the model parameters θ. In the case of a
single full covariance Gaussian we directly estimate its parameters. We then introduce the
idea of computing kernel distances via a direct comparison of PDFs. In effect we replace
the standard kernel distance on the original data, K(Xi, Xj), by a new kernel derived from
the symmetric Kullback-Leibler (KL) divergence: K(Xi, Xj) → K(p(x|θi), p(x|θj)).
After that a kernel matrix is computed and a traditional SVM can be used.
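Plugging such a precomputed divergence-based kernel matrix into a standard SVM can be sketched as below. The exponential transform and its width `a` are illustrative stand-ins for the paper's kernel transform, and scikit-learn's `precomputed` kernel interface is one convenient way to do it, not the authors' stated toolchain.

```python
import numpy as np
from sklearn.svm import SVC

def train_kl_svm(D_train, y_train, D_test, a=1.0):
    """Exponentiate pairwise divergences into kernel matrices and feed
    them to a standard SVM.  D_train holds train-vs-train divergences
    (n_tr x n_tr), D_test holds test-vs-train divergences (n_te x n_tr)."""
    K_train = np.exp(-a * D_train)
    K_test = np.exp(-a * D_test)
    clf = SVC(kernel='precomputed')
    clf.fit(K_train, y_train)
    return clf.predict(K_test)
```

For multiclass problems, the one-vs-all strategy described in the experiments would wrap one such binary machine per class and pick the class with the largest positive score.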
In our experiments we have validated this new approach in speaker identification, verification, and image classification tasks by comparing its performance to Fisher kernel SVMs
and other well-known classification algorithms: the GMM and AHS methods. Our results
show that our new method of combining generative models and SVMs always outperforms
the SVM Fisher kernel and the AHS methods, and it often outperforms other classification
methods such as GMMs and AHS. The equal error rates are consistently better
with the new kernel SVM methods too. In the case of image classification our GMM/KL
divergence-based kernel has the best performance among the four classifiers, while our single full covariance Gaussian distance based kernel outperforms most other classifiers and
only does slightly worse than the baseline GMM. All these encouraging results show that
SVMs can be improved by paying careful attention to the nature of the data being modeled. In both audio and image tasks we just take advantage of previous years of research in
generative methods.
The good results obtained using a full covariance single Gaussian KL kernel also make
our algorithm a very attractive alternative as opposed to the more complex methods of
tuning system parameters and combining generative classifiers and discriminative methods
such as the Fisher SVM. This full covariance single Gaussian KL kernel's performance
is consistently good across all databases. It is especially simple and fast to compute and
requires no tuning of system parameters.
We feel that this approach of combining generative classifiers via KL divergences of derived
PDFs is quite generic and can possibly be applied to other domains. We plan to explore its
use in other multimedia related tasks.
References
[1] Vapnik, V., Statistical Learning Theory, John Wiley and Sons, New York, 1998.
[2] Wan, V. and Campbell, W., "Support vector machines for speaker verification and identification," IEEE Proceedings, 2000.
[3] Chapelle, O., Haffner, P. and Vapnik, V., "Support vector machines for histogram-based image classification," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 1055-1064, September 1999.
[4] Jaakkola, T., Diekhans, M. and Haussler, D., "Using the Fisher kernel method to detect remote protein homologies," in Proceedings of the International Conference on Intelligent Systems for Molecular Biology, Aug. 1999.
[5] Moreno, P. J. and Rifkin, R., "Using the Fisher kernel method for web audio classification," ICASSP, 2000.
[6] Smith, N., Gales, M., and Niranjan, M., "Data dependent kernels in SVM classification of speech patterns," Tech. Rep. CUED/F-INFENG/TR.387, Cambridge University Engineering Department, 2001.
[7] Vasconcelos, N. and Lippman, A., "A unifying view of image similarity," IEEE International Conference on Pattern Recognition, 2000.
[8] Bimbot, F., Magrin-Chagnolleau, I. and Mathan, L., "Second-order statistical measures for text-independent speaker identification," Speech Communication, vol. 17, pp. 177-192, 1995.
[9] Stern, R. M., "Specification of the 1996 HUB4 Broadcast News Evaluation," in DARPA Speech Recognition Workshop, 1997.
[10] "The KING Speech Database," http://www.ldc.upenn.edu/Catalog/docs/LDC95S22/kingdb.txt.
[11] Chen, K., "Towards better making a decision in speaker verification," Pattern Recognition, no. 36, pp. 329-346, 2003.
[12] "Corel stock photos," http://elib.cs.berleley.edu/photos/blobworld/cdlist.html.
Discriminative Fields for Modeling Spatial
Dependencies in Natural Images
Sanjiv Kumar and Martial Hebert
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
{skumar,hebert}@ri.cmu.edu
Abstract
In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the
observed data. The proposed model exploits local discriminative models
and allows one to relax the assumption of conditional independence of the
observed data given the labels, commonly used in the Markov Random
Field (MRF) framework. The parameters of the DRF model are learned
using the penalized maximum pseudo-likelihood method. Furthermore, the
form of the DRF model allows the MAP inference for binary classification problems using the graph min-cut algorithms. The performance of
the model was verified on the synthetic as well as the real-world images.
The DRF model outperforms the MRF model in the experiments.
1 Introduction
For the analysis of natural images, it is important to use contextual information in the form
of spatial dependencies in images. In a probabilistic framework, this leads one to random
field modeling of the images. In this paper we address the main challenge involving such
modeling, i.e. how to model arbitrarily complex dependencies in the observed image data
as well as the labels in a principled manner.
In the literature, Markov Random Field (MRF) is a commonly used model to incorporate
contextual information [1]. MRFs are generally used in a probabilistic generative framework that models the joint probability of the observed data and the corresponding labels.
In other words, let y be the observed data from an input image, where y = {y_i}_{i∈S}, y_i
is the data from the ith site, and S is the set of sites. Let the corresponding labels at the image sites be given by x = {x_i}_{i∈S}. In the MRF framework, the posterior over the labels
given the data is expressed using Bayes' rule as p(x|y) ∝ p(x, y) = p(x)p(y|x),
where the prior over labels, p(x), is modeled as an MRF. For computational tractability, the
observation or likelihood model, p(y|x), is usually assumed to have a factorized form, i.e.
p(y|x) = ∏_{i∈S} p(y_i|x_i) [1][2]. However, as noted by several researchers [3][4], this assumption is too restrictive for the analysis of natural images. For example, consider a class
that contains man-made structures (e.g. buildings). The data belonging to such a class is
highly dependent on its neighbors since the lines or edges at spatially adjoining sites follow
some underlying organization rules rather than being random (See Fig. 2). This is also true
for a large number of texture classes that are made of structured patterns.
Some efforts have been made in the past to model the dependencies in the data [3][4], but
they make factored approximations of the actual likelihood for tractability. In addition,
simplistic forms of the factors preclude capturing stronger relationships in the observations
in the form of arbitrarily complex features that might be desired to discriminate between
different classes. Now considering a different point of view, for classification purposes, we
are interested in estimating the posterior over labels given the observations, i.e., p(x|y).
In a generative framework, one expends efforts to model the joint distribution p(x, y),
which involves implicit modeling of the observations. In a discriminative framework, one
models the distribution p(x|y) directly. As noted in [2], a potential advantage of using the
discriminative approach is that the true underlying generative model may be quite complex
even though the class posterior is simple. This means that the generative approach may
spend a lot of resources on modeling the generative models which are not particularly
relevant to the task of inferring the class labels. Moreover, learning the class density models
may become even harder when the training data is limited [5].
In this work we present a Discriminative Random Field (DRF) model based on the concept of Conditional Random Field (CRF) proposed by Lafferty et al. [6] in the context of
segmentation and labeling of 1-D text sequences. The CRFs directly model the posterior
distribution p(x|y) as a Gibbs field. This approach allows one to capture arbitrary dependencies between the observations without resorting to any model approximations. Our
model further enhances the CRFs by proposing the use of local discriminative models to
capture the class associations at individual sites as well as the interactions in the neighboring sites on 2-D grid lattices. The proposed model uses local discriminative models to
achieve the site classification while permitting interactions in both the observed data and
the label field in a principled manner. The research presented in this paper alleviates several
problems with the previous version of the DRFs described in [7].
2 Discriminative Random Field
We first restate in our notations the definition of the Conditional Random Fields as given
by Lafferty et al. [6]. In this work we will be concerned with binary classification, i.e.
x_i ∈ {−1, 1}. Let the observed data at site i be y_i ∈ ℝ^c.
CRF Definition: Let G = (S, E) be a graph such that x is indexed by the vertices of G.
Then (x, y) is said to be a conditional random field if, when conditioned on y, the random variables x_i obey the Markov property with respect to the graph: p(x_i | y, x_{S\{i}}) =
p(x_i | y, x_{N_i}), where S\{i} is the set of all nodes in G except the node i, N_i is the set of
neighbors of the node i in G, and x_Ω represents the set of labels at the nodes in set Ω.
Thus a CRF is a random field globally conditioned on the observations y. The condition of
positivity requiring p(x|y) > 0 ∀x has been assumed implicitly. Now, using the Hammersley-Clifford theorem [1] and assuming only up to pairwise clique potentials to be nonzero,
the joint distribution over the labels x given the observations y can be written as,
p(x|y) = (1/Z) exp( Σ_{i∈S} A_i(x_i, y) + Σ_{i∈S} Σ_{j∈N_i} I_{ij}(x_i, x_j, y) )   (1)
where Z is a normalizing constant known as the partition function, and −A_i and −I_ij are the
unary and pairwise potentials respectively. With a slight abuse of notation, in the rest of
the paper we will call A_i the association potential and I_ij the interaction potential. Note that
both the terms explicitly depend on all the observations y. In the DRFs, the association
potential is seen as a local decision term which decides the association of a given site to a
certain class ignoring its neighbors. The interaction potential is seen as a data dependent
smoothing function. For simplicity, in the rest of the paper we assume the random field
given in (1) to be homogeneous and isotropic, i.e. the functional forms of Ai and Iij are
independent of the locations i and j. Henceforth we will leave the subscripts and simply
use the notations A and I. Note that the assumption of isotropy can be easily relaxed at the
cost of a few additional parameters.
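Up to the partition function Z, the exponent of Eq. (1) is just a sum of association and interaction potentials, which can be evaluated directly. A minimal sketch follows; the callables `A` and `I` and the neighborhood map are supplied by the caller and are illustrative, not part of the paper's notation.

```python
def drf_log_potential(x, A, I, neighbors):
    """Unnormalized log-posterior of a DRF label field, i.e. the exponent
    of Eq. (1) without the -log Z term.  Following the paper's double sum,
    each ordered neighbor pair (i, j) with j in N_i contributes once.
    A(i, x_i) and I(i, j, x_i, x_j) return the two kinds of potentials."""
    total = 0.0
    for i, x_i in enumerate(x):
        total += A(i, x_i)
        for j in neighbors[i]:
            total += I(i, j, x_i, x[j])
    return total
```

Comparing this quantity for two candidate labelings tells which one has higher posterior probability, since Z cancels.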
2.1 Association potential
In the DRF framework, A(xi , y) is modeled using a local discriminative model that outputs
the association of the site i with class xi . Generalized Linear Models (GLM) are used
extensively in statistics to model the class posteriors given the observations [8]. For each
site i, let f_i(y) be a function that maps the observations y onto a feature vector such that
f_i : y → ℝ^l. Using a logistic function as the link, the local class posterior can be modeled
as,
P(x_i = 1 | y) = 1 / (1 + e^{−(w_0 + w_1^T f_i(y))}) = σ(w_0 + w_1^T f_i(y))   (2)
where w = {w0 , w1 } are the model parameters. To extend the logistic model to induce a
nonlinear decision boundary in the feature space, a transformed feature vector at each site
i is defined as h_i(y) = [1, φ_1(f_i(y)), . . . , φ_R(f_i(y))]^T, where the φ_k(·) are arbitrary nonlinear functions. The first element of the transformed vector is kept as 1 to accommodate
the bias parameter w_0. Further, since x_i ∈ {−1, 1}, the probability in (2) can be compactly
expressed as P(x_i|y) = σ(x_i w^T h_i(y)). Finally, the association potential is defined as,

A(x_i, y) = log(σ(x_i w^T h_i(y)))   (3)
This transformation makes sure that the DRF yields a standard logistic classifier if the interaction potential in (1) is set to zero. Note that the transformed feature vector at each site
i, i.e. h_i(y), is a function of the whole set of observations y. On the contrary, the assumption
of conditional independence of the data in the MRF framework allows one to use the data
only from a particular site, i.e. y_i, to get the log-likelihood, which acts as the association
potential.
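The association potential of Eqs. (2)-(3) is a log-sigmoid and can be computed in a numerically stable way. A minimal sketch; the weight vector and transformed feature vector passed in are illustrative.

```python
import numpy as np

def association_potential(x_i, w, h_i):
    """A(x_i, y) = log sigma(x_i * w^T h_i(y)) for x_i in {-1, +1}, with
    h_i(y) the transformed feature vector whose first element is 1 (the
    bias).  log sigma(z) is written as -logaddexp(0, -z) so that large
    negative arguments do not overflow."""
    z = x_i * np.dot(w, h_i)
    return -np.logaddexp(0.0, -z)
```

Because σ(z) + σ(−z) = 1, exponentiating the potentials for x_i = +1 and x_i = −1 always yields a properly normalized local posterior.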
As a related work, in the context of tree-structured belief networks, Feng et al. [2] used
the scaled likelihoods to approximate the actual likelihoods at each site required by the
generative formulation. These scaled likelihoods were obtained by scaling the local class
posteriors learned using a neural network. On the contrary, in the DRF model, the local
class posterior is an integral part of the full conditional model in (1). Also, unlike [2], the
parameters of the association and interaction potential are learned simultaneously in the
DRF framework.
2.2 Interaction potential
To model the interaction potential I, we first analyze the interaction potential commonly
used in the MRF framework. Note that the MRF framework does not permit the use of data
in the interaction potential. For a homogeneous and isotropic Ising model, the interaction
potential is given as I = βx_i x_j, which penalizes every dissimilar pair of labels by the cost
β [1]. This form of interaction prefers piecewise constant smoothing without explicitly
considering discontinuities in the data. In the DRF formulation, the interaction potential is
a function of all the observations y. We would like to have similar labels at a pair of sites
for which the observed data supports such a hypothesis. In other words, we are interested
in learning a pairwise discriminative model as the interaction potential.
For a pair of sites (i, j), let μ_ij(ψ_i(y), ψ_j(y)) be a new feature vector such that μ_ij : ℝ^γ ×
ℝ^γ → ℝ^q, where ψ_k : y → ℝ^γ. Denoting this feature vector as μ_ij(y) for simplification,
the interaction potential is modeled as,

I(x_i, x_j, y) = x_i x_j v^T μ_ij(y)   (4)
where v are the model parameters. Note that the first component of μ_ij(y) is fixed to be
1 to accommodate the bias parameter. This form of interaction potential is much simpler
than the one proposed in [7], and makes the parameter learning a convex problem. There
are two interesting properties of the interaction potential given in (4). First, if the association potential at each site and the interaction potentials of all the pairwise cliques except
the pair (i, j) are set to zero in (1), the DRF acts as a logistic classifier which yields the
probability of the site pair to have the same labels given the observed data. Second, the proposed interaction potential is a generalization of the Ising model. The original Ising form
is recovered if all the components of vector v other than the bias parameter are set to zero
in (4). Thus, the form in (4) acts as a data-dependent discontinuity adaptive model that will
moderate smoothing when the data from the two sites is "different". The data-dependent
smoothing can especially be useful to absorb the errors in modeling the association potential. Anisotropy can be easily included in the DRF model by parametrizing the interaction
potentials of different directional pairwise cliques with different sets of parameters v.
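Eq. (4) itself is a one-line computation, and setting every component of v except the bias to zero recovers the Ising interaction. A minimal sketch with illustrative arguments:

```python
import numpy as np

def interaction_potential(x_i, x_j, v, mu_ij):
    """I(x_i, x_j, y) = x_i * x_j * v^T mu_ij(y), Eq. (4).  With v equal
    to [beta, 0, ..., 0] and mu_ij's first component fixed at 1, this
    reduces to the Ising interaction beta * x_i * x_j."""
    return x_i * x_j * np.dot(v, mu_ij)
```

A dissimilar label pair (x_i x_j = −1) is thus penalized by v^T μ_ij(y), which shrinks where the pairwise data vector signals a discontinuity.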
3 Parameter learning and inference
Let θ be the set of DRF parameters, where θ = {w, v}. The form of the DRF model
resembles the posterior of the MRF framework assuming conditionally independent data.
However, in the MRF framework, the parameters of the class generative models, p(y_i|x_i),
and the parameters of the prior random field on labels, p(x) are generally assumed to be
independent and learned separately [1]. In contrast, we make no such assumption and learn
all the parameters of the DRF simultaneously.
The maximum likelihood approach to learning the DRF parameters involves evaluation of the
partition function Z, which is in general an NP-hard problem. One could use either sampling techniques or resort to some approximations, e.g. pseudo-likelihood, to estimate the
parameters. In this work we used the pseudo-likelihood formulation due to its simplicity
and consistency of the estimates in the large lattice limit [1]. In the pseudo-likelihood
approach, a factored approximation is used such that P(x|y, θ) ≈ ∏_{i∈S} P(x_i | x_{N_i}, y, θ).
However, for the Ising model in MRFs, pseudo-likelihood tends to overestimate the interaction parameter β, causing the MAP estimates of the field to be very poor solutions [9].
Our experiments in the previous work [7] and Section 4 of this paper verify these observations for the interaction parameters in DRFs too. To alleviate this problem, we take a
Bayesian approach to get the maximum a posteriori estimates of the parameters. Similar
to the concept of weight decay in the neural learning literature, we assume a Gaussian prior
over the interaction parameters v such that p(v|τ) = N(v; 0, τ²I), where I is the identity
matrix. Using a prior over parameters w that leads to weight decay or shrinkage might
also be beneficial but we leave that for future exploration. The prior over parameters w is
assumed to be uniform. Thus, given M independent training images,
θ̂ = arg max_θ { Σ_{m=1}^{M} Σ_{i∈S} [ log σ(x_i w^T h_i(y)) + Σ_{j∈N_i} x_i x_j v^T μ_ij(y) − log z_i ] − (1/(2τ²)) v^T v }   (5)

where

z_i = Σ_{x_i∈{−1,1}} exp{ log σ(x_i w^T h_i(y)) + Σ_{j∈N_i} x_i x_j v^T μ_ij(y) }
If τ is given, the penalized log pseudo-likelihood in (5) is convex with respect to the model
parameters and can be easily maximized using gradient descent.
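The per-image term of the penalized log pseudo-likelihood in (5) can be evaluated by normalizing each site's conditional over x_i ∈ {−1, +1} and adding the Gaussian penalty on v. A minimal sketch; the data structures standing in for h_i(y), μ_ij(y), and N_i are illustrative.

```python
import numpy as np

def log_sigmoid(z):
    """log sigma(z), computed stably as -logaddexp(0, -z)."""
    return -np.logaddexp(0.0, -z)

def penalized_pseudo_likelihood(x, w, v, h, mu, neighbors, tau):
    """One image's contribution to the objective of Eq. (5).  h[i] plays
    the role of h_i(y), mu[(i, j)] of mu_ij(y), neighbors[i] of N_i."""
    def site_energy(i, s):
        e = log_sigmoid(s * np.dot(w, h[i]))
        for j in neighbors[i]:
            e += s * x[j] * np.dot(v, mu[(i, j)])
        return e
    ll = 0.0
    for i in range(len(x)):
        e_pos, e_neg = site_energy(i, +1), site_energy(i, -1)
        log_z_i = np.logaddexp(e_pos, e_neg)      # normalizes over x_i
        ll += (e_pos if x[i] == +1 else e_neg) - log_z_i
    return ll - np.dot(v, v) / (2.0 * tau ** 2)   # Gaussian prior on v
```

Summing this over the M training images gives the quantity maximized by gradient ascent in the text.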
As related work regarding the estimation of τ, MacKay [10] has suggested the use of
type II marginal likelihood. But in the DRF formulation, integrating out the parameters v
is a hard problem. Another choice is to integrate out τ by choosing a non-informative
hyperprior on τ as in [11][12]. However, our experiments showed that these methods
do not yield good estimates of the parameters because of the use of pseudo-likelihood in
our framework. In the present work we choose τ by cross-validation. Alternative ways
of parameter estimation include the use of contrastive divergence [13] and saddle point
approximations resembling perceptron learning rules [14]. We are currently exploring these
possibilities.
The problem of inference is to find the optimal label configuration x given an image y,
where optimality is defined with respect to a cost function. In the current work we use the
MAP estimate as the solution to the inference problem. While using the Ising MRF model
for binary classification problems, the exact MAP solution can be computed using min-cut/max-flow algorithms provided β ≥ 0 [9][15]. For the DRF model, the MAP estimates
can be obtained using the same algorithms. However, since these algorithms do not allow
negative interaction between the sites, the data-dependent smoothing for each clique is set
to v^T μ_ij(y) = max{0, v^T μ_ij(y)}, yielding an approximate MAP estimate. This is
equivalent to switching the smoothing off at the image discontinuities.
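As a lightweight stand-in for the min-cut computation, Iterated Conditional Modes (ICM) maximizes the same objective site by site; unlike graph cuts it only reaches a local optimum, so the sketch below illustrates the inference problem rather than reproducing the paper's method.

```python
def icm_map(x0, local_score, sweeps=10):
    """Iterated Conditional Modes for a binary label field.
    local_score(i, s, x) must return the sum of the association potential
    for candidate label s at site i plus all interaction potentials that
    involve site i, given the current field x.  Sites are greedily set to
    their best label until a full sweep makes no change."""
    x = list(x0)
    for _ in range(sweeps):
        changed = False
        for i in range(len(x)):
            best = max((-1, 1), key=lambda s: local_score(i, s, x))
            if best != x[i]:
                x[i] = best
                changed = True
        if not changed:          # converged to a local maximum
            break
    return x
```

Since each update can only increase the conditional score, the procedure always terminates at a local maximum of the posterior.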
4 Experiments and discussion
For comparison, an MRF framework was also learned assuming a conditionally independent likelihood and a homogeneous and isotropic Ising interaction model. So, the MRF
posterior is p(x|y) = Z_m^{−1} exp( Σ_{i∈S} log p(s_i(y_i)|x_i) + Σ_{i∈S} Σ_{j∈N_i} β x_i x_j ), where β
is the interaction parameter and s_i(y_i) is a single-site feature vector at the ith site such that
s_i : y_i → ℝ^d. Note that s_i(y_i) does not take into account the influence of the data in the
neighborhood of the ith site. A first order neighborhood (4 nearest neighbors) was used for
label interaction in all the experiments.
4.1 Synthetic images
The aim of these experiments was to obtain correct labels from corrupted binary images.
Four base images, 64 × 64 pixels each, were used in the experiments (top row in Fig. 1).
We compare the DRF and the MRF results for two different noise models. For each noise
model, 50 images were generated from each base image. Each pixel was considered as an
image site and the feature vector s_i(y_i) was simply chosen to be a scalar representing the
intensity at the ith site. In experiments with the synthetic data, no neighborhood data interaction was used for the DRFs (i.e. f_i(y) = s_i(y_i)) to observe the gains only due to the use of
discriminative models in the association and interaction potentials. A linear discriminant
was implemented in the association potential such that h_i(y) = [1, f_i(y)]^T. The pairwise
data vector μ_ij(y) was obtained by taking the absolute difference of s_i(y_i) and s_j(y_j).
For the MRF model, each class-conditional density, p(s_i(y_i)|x_i), was modeled as a Gaussian. The noisy data from the leftmost base image in Fig. 1 was used for training while 150
noisy images from the rest of the three base images were used for testing.
Three experiments were conducted for each noise model. (i) The interaction parameters for
the DRF (v) as well as for the MRF (β) were set to zero. This reduces the DRF model to a
logistic classifier and the MRF to a maximum likelihood (ML) classifier. (ii) The parameters of
the DRF, i.e. [w, v], and the MRF, i.e. β, were learned using the pseudo-likelihood approach
without any penalty, i.e. τ = ∞. (iii) Finally, the DRF parameters were learned using
penalized pseudo-likelihood and the best β for the MRF was chosen by cross-validation.
The MAP estimates of the labels were obtained using graph-cuts for both the models.
Under the first noise model, each image pixel was corrupted with independent Gaussian
noise of standard deviation 0.3. For the DRF parameter learning, τ was chosen to be
0.01. The pixelwise classification error for this noise model is given in the top row of
Table 1. Since the form of noise is the same as the likelihood model in the MRF, MRF is
Table 1: Pixelwise classification errors (%) on 150 synthetic test images. For the Gaussian
noise, MRF and DRF give similar error, while for "bimodal" noise, DRF performs better.
Note that only label interaction (i.e. no data interaction) was used for these tests (see text).
Noise      ML      Logistic   MRF (PL)   DRF (PL)   MRF    DRF
Gaussian   15.62   15.78      13.18      29.49      2.35   2.30
Bimodal    24.00   29.86      22.70      29.49      7.00   6.21
Figure 1: Results on synthetic images. From top, first row: original images, second row:
images corrupted with "bimodal" noise, third row: MRF results, fourth row: DRF results.
expected to give good results. The DRF model does marginally better than MRF even for
this case. Note that the DRF and MRF results are worse when the parameters were learned
without penalizing the pseudo-likelihood (shown in Table 1 with suffix (PL)). The MAP
inference yields oversmoothed images for these parameters. The DRF model is affected
more because all the parameters in DRFs are learned simultaneously, unlike in MRFs.
In the second noise model, each pixel was corrupted with independent mixture-of-Gaussians noise. For each class, a mixture of two Gaussians with equal mixing weights was
used, yielding a 'bimodal' class noise. The mixture model parameters (mean, std) for the
two classes were chosen to be [(0.08, 0.03), (0.46, 0.03)] and [(0.55, 0.02), (0.42, 0.10)],
inspired by [5]. The classification results are shown in the bottom row of Table 1. An
interesting point to note is that the DRF yields lower error than the MRF even when the logistic
classifier has higher error than the ML classifier on the test data. For a typical noisy version
of the four base images, the performance of the different techniques is compared in Fig. 1.
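The 'bimodal' class noise above can be generated with a short sketch; the mixture (mean, std) parameters are those quoted in the text, while `sample_bimodal` itself is an illustrative helper, not the authors' code:

```python
import random

def sample_bimodal(components):
    """Draw one sample from an equal-weight two-component Gaussian mixture;
    `components` is a list of (mean, std) pairs."""
    mu, sigma = random.choice(components)   # equal mixing weights
    return random.gauss(mu, sigma)

# Mixture (mean, std) parameters for the two classes, as quoted in the text.
noise_class0 = [(0.08, 0.03), (0.46, 0.03)]
noise_class1 = [(0.55, 0.02), (0.42, 0.10)]
observation = sample_bimodal(noise_class0)   # one corrupted class-0 pixel value
```

Because each class density has two well-separated modes, a single-Gaussian likelihood (as in the MRF's noise model) is badly misspecified here, which is the point of this experiment.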
Table 2: Detection rates (DR) and false positives (FP) for the test set containing 129
images (49,536 sites). FP for the logistic classifier were kept the same as for the DRF for
the DR comparison. Superscript '−' indicates that no neighborhood data interaction was used.

                    MRF     Logistic−   DRF−    Logistic   DRF
    DR (%)          58.35   47.50       61.79   60.80      72.54
    FP (per image)  2.44    2.28        2.28    1.76       1.76
4.2 Real-world images
The proposed DRF model was applied to the task of detecting man-made structures in
natural scenes. The aim was to label each image site as structured or nonstructured. The
training and the test set contained 108 and 129 images respectively, each of size 256×384
pixels, from the Corel image database. Each nonoverlapping 16×16 pixel block is called
an image site. For each image site i, a 5-dim single-site feature vector s_i(y_i) and a 14-dim
multiscale feature vector f_i(y) is computed using orientation- and magnitude-based features
as described in [16]. Note that f_i(y) incorporates data interaction from neighboring sites.
For the association potentials, a transformed feature vector h_i(y) was computed at each site
i using quadratic transforms of the vector f_i(y). The pairwise data vector μ_ij(y) was obtained
by concatenating the two vectors f_i(y) and f_j(y). For the DRF parameter learning, τ was
chosen to be 0.001. For the MRF, each class-conditional density was modeled as a mixture
of five Gaussians. Use of a single Gaussian for each class yielded very poor results.
For two typical images from the test set, the detection results for the MRF and the DRF
models are given in Fig. 2. The blocks detected as structured are shown enclosed
within an artificial boundary. The DRF results show higher detections with lower false
positives. For a quantitative evaluation, we compared the detection rates and the number
of false positives per image for the different techniques. For the comparison of detection rates,
in all the experiments, the decision threshold of the logistic classifier was fixed such that it
yields the same false positive rate as the DRF. The first set of experiments was conducted
using the single-site features for all three methods. Thus, no neighborhood data interaction was used for either the logistic classifier or the DRF, i.e. f_i(y) = s_i(y_i). The
comparative results for the three methods are given in Table 2 under 'MRF', 'Logistic−'
and 'DRF−'. The detection rates of the MRF and the DRF are higher than that of the logistic classifier due to the label interaction. However, the higher detection rate and lower false positives
for the DRF in comparison to the MRF indicate the gains due to the use of discriminative
models in the association and interaction potentials in the DRF. In the next experiment,
to take advantage of the power of the DRF framework, data interaction was allowed for
both the logistic classifier and the DRF ('Logistic' and 'DRF' in Table 2). The DRF
detection rate increases substantially and the false positives decrease further indicating the
importance of allowing the data interaction in addition to the label interaction.
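Fixing a classifier's decision threshold so that it matches a target false positive rate, as was done for the logistic baseline in these comparisons, can be sketched as follows (the scores below are hypothetical; this is an illustration, not the authors' evaluation code):

```python
def threshold_for_fp_rate(neg_scores, target_fp):
    """Return the smallest threshold whose false positive rate on the
    negative-class scores is <= target_fp (predict positive iff score >= t)."""
    for t in sorted(neg_scores):
        fp = sum(s >= t for s in neg_scores) / len(neg_scores)
        if fp <= target_fp:
            return t
    return max(neg_scores) + 1.0   # no threshold suffices: reject everything

# Hypothetical classifier scores on negative (nonstructured) sites.
neg = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
t = threshold_for_fp_rate(neg, 0.2)   # t == 0.9: two of ten negatives may pass
```

Sweeping the threshold over the sorted negative scores and stopping at the first feasible value is enough here, since the empirical FP rate is monotone in the threshold.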
5 Conclusion and future work
We have presented discriminative random fields which provide a principled approach for
combining local discriminative classifiers that allow the use of arbitrary overlapping features, with adaptive data-dependent smoothing over the label field. We are currently exploring alternative ways of parameter learning using contrastive divergence and saddle point
approximations. A further aspect of the DRF model is the use of general kernel
mappings to increase the classification accuracy. However, one will need some method to
induce sparseness to avoid overfitting [12]. In addition, we intend to extend the model to
accommodate multiclass classification problems.
Acknowledgments
Our thanks to John Lafferty and Jonas August for immensely helpful discussions.
Figure 2: Example structure detection results. Left column: MRF results. Right column:
DRF results. DRF has higher detection rate with lower false positives.
References
[1] S. Z. Li. Markov Random Field Modeling in Image Analysis. Springer-Verlag, Tokyo, 2001.
[2] X. Feng, C. K. I. Williams, and S. N. Felderhof. Combining belief networks and neural networks
for scene segmentation. IEEE Trans. Pattern Anal. Machine Intell., 24(4):467-483, 2002.
[3] H. Cheng and C. A. Bouman. Multiscale bayesian segmentation using a trainable context model.
IEEE Trans. on Image Processing, 10(4):511-525, 2001.
[4] R. Wilson and C. T. Li. A class of discrete multiresolution random fields and its application to
image segmentation. IEEE Trans. on Pattern Anal. and Machine Intell., 25(1):42-56, 2003.
[5] Y. D. Rubinstein and T. Hastie. Discriminative vs informative learning. In Proc. Third Int. Conf.
on Knowledge Discovery and Data Mining, pages 49-53, 1997.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In Proc. Int. Conf. on Machine Learning, 2001.
[7] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. IEEE Int. Conf. on Computer Vision, 2:1150-1157, 2003.
[8] P. McCullagh and J. A. Nelder. Generalised Linear Models. Chapman and Hall, London, 1987.
[9] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for
binary images. Journal of Royal Statis. Soc., 51(2):271-279, 1989.
[10] D. Mackay. Bayesian non-linear modelling for the 1993 energy prediction competition. In
Maximum Entropy and Bayesian Methods, pages 221-234, 1996.
[11] P. Williams. Bayesian regularization and pruning using a laplacian prior. Neural Computation,
7:117-143, 1995.
[12] M. A. T. Figueiredo. Adaptive sparseness using Jeffreys prior. Advances in Neural Information
Processing Systems (NIPS), 2001.
[13] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[14] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2002.
[15] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts. In Proc.
European Conf. on Computer Vision, 3:65-81, 2002.
[16] S. Kumar and M. Hebert. Man-made structure detection in natural images using a causal multiscale random field. In Proc. IEEE Int. Conf. on Comp. Vision and Pattern Recog., June 2003.
Direct Feature Selection
Jianxin Wu James M. Rehg Matthew D. Mullin
College of Computing and GVU Center, Georgia Institute of Technology
{wujx, rehg, mdmullin}@cc.gatech.edu
Abstract
Face detection is a canonical example of a rare event detection problem, in which target patterns occur with much lower frequency than nontargets. Out of millions of face-sized windows in an input image, for example, only a few will typically contain a face. Viola and Jones recently
proposed a cascade architecture for face detection which successfully addresses the rare event nature of the task. A central part of their method
is a feature selection algorithm based on AdaBoost. We present a novel
cascade learning algorithm based on forward feature selection which is
two orders of magnitude faster than the Viola-Jones approach and yields
classifiers of equivalent quality. This faster method could be used for
more demanding classification tasks, such as on-line learning.
1 Introduction
Fast and robust face detection is an important computer vision problem with applications
to surveillance, multimedia processing, and HCI. Face detection is often formulated as a
search and classification problem: a search strategy generates potential image regions and a
classifier determines whether or not they contain a face. A standard approach is brute-force
search, in which the image is scanned in raster order and every n ? n window of pixels
over multiple image scales is classified [1, 2, 3].
When a brute-force search strategy is used, face detection is a rare event detection problem,
in the sense that among the millions of image regions, only very few contain faces. The
resulting classifier design problem is very challenging: The detection rate must be very high
in order to avoid missing any rare events. At the same time, the false positive rate must be
very low (e.g. 10⁻⁶) in order to dodge the flood of non-events. From the computational
standpoint, huge speed-ups are possible if the sparsity of faces in the input set can be
exploited. In their seminal work [4], Viola and Jones proposed a face detection method
based on a cascade of classifiers, illustrated in figure 1. Each classifier node is designed to
reject a portion of the nonface regions and pass all of the faces. Most image regions are
rejected quickly, resulting in very fast face detection performance.
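The early-rejection behavior described above can be sketched as a simple loop; `nodes` here is a toy stand-in for the learned per-node ensemble classifiers, not the actual detector:

```python
def cascade_predict(x, nodes):
    """Pass x through the cascade; reject at the first node that says non-face."""
    for classify in nodes:
        if not classify(x):
            return False      # rejected early -- most windows stop here
    return True               # survived every node: report a face

# Toy stand-in nodes: each thresholds a scalar "faceness" score.
nodes = [lambda x, t=t: x >= t for t in (0.1, 0.3, 0.5)]
print(cascade_predict(0.4, nodes))   # prints False: rejected by the third node
```

Since most scanned windows fail an early node, the average cost per window is close to the cost of the first one or two nodes.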
There are three elements in the Viola-Jones framework: the cascade architecture, a rich
over-complete set of rectangle features, and an algorithm based on AdaBoost for constructing ensembles of rectangle features in each classifier node. Much of the recent work on face
detection following Viola-Jones has explored alternative boosting algorithms such as FloatBoost [5], GentleBoost [6], and Asymmetric AdaBoost [7] (see [8] for a related method).
Figure 1: Illustration of the cascade architecture with n nodes. Each node H_i (with detection rate d_i and false positive rate f_i) passes candidate face regions on to H_{i+1} and rejects the rest as non-faces; only regions accepted by H_n are reported as faces.
This paper is motivated by the observation that the AdaBoost feature selection method is
an indirect way to meet the learning goals of the cascade. It is also an expensive algorithm.
For example, weeks of computation are required to produce the final cascade in [4].
In this paper we present a new cascade learning algorithm which uses direct forward feature
selection to construct the ensemble classifiers in each node of the cascade. We demonstrate
empirically that our algorithm is two orders of magnitude faster than the Viola-Jones algorithm, and produces cascades which are indistinguishable in face detection performance.
This faster method could be used for more demanding classification tasks, such as on-line
learning or searching the space of classifier structures. Our results also suggest that a large
portion of the effectiveness of the Viola-Jones detector should be attributed to the cascade
design and the choice of the feature set.
2 Cascade Architecture for Rare Event Detection
The learning goal for the cascade in figure 1 is the construction of a set of classifiers
{H_i}_{i=1}^n. Each H_i is required to have a very high detection rate, but only a moderate
false positive rate (e.g. 50%). An input image region is passed from H_i to H_{i+1} if it is
classified as a face; otherwise it is rejected. If the {H_i} can be constructed to produce
independent errors, then the overall detection rate d and false positive rate f for the cascade
are given by d = ∏_{i=1}^n d_i and f = ∏_{i=1}^n f_i respectively. In a hypothetical example,
a 20 node cascade with d_i = 0.999 and f_i = 0.5 would have d = 0.98 and f = 9.6e-7.
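Under the independence assumption, the overall cascade rates are just products of the per-node rates; the hypothetical 20-node example can be checked directly:

```python
import math

def cascade_rates(node_rates):
    """Overall (detection, false positive) rates of a cascade whose nodes make
    independent errors; node_rates is a list of (d_i, f_i) pairs."""
    d = math.prod(di for di, _ in node_rates)
    f = math.prod(fi for _, fi in node_rates)
    return d, f

d, f = cascade_rates([(0.999, 0.5)] * 20)
print(round(d, 2), f)   # prints: 0.98 9.5367431640625e-07
```

The asymmetry is the point: 20 nodes barely dent the detection rate (0.999²⁰ ≈ 0.98) while driving the false positive rate down by 0.5²⁰ ≈ 10⁻⁶.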
As in [4], the overall cascade learning method in this paper is a stage-wise, greedy feature
selection process. Nodes are constructed sequentially, starting with H1 . Within a node Hi ,
features are added sequentially to form an ensemble. Following Viola-Jones, the training
dataset is manipulated between nodes to encourage independent errors. Each node Hi is
trained on all of the positive examples and a subset of the negative examples. In moving
from node Hi to Hi+1 during training, negative examples that were classified successfully
by the cascade are discarded and replaced with new ones, using the standard bootstrapping
approach from [1]. The difference between our method and Viola-Jones is the feature
selection algorithm for the individual nodes.
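The bootstrapping step between nodes can be sketched as follows; `sample_new_negative` is a hypothetical generator of candidate non-face patches, not part of the original implementation:

```python
def bootstrap_negatives(negatives, cascade_predict, sample_new_negative, target):
    """Keep only negatives the current cascade still (wrongly) accepts as faces,
    then draw fresh candidates until the pool is back to `target` size."""
    survivors = [x for x in negatives if cascade_predict(x)]   # false positives
    while len(survivors) < target:
        x = sample_new_negative()
        if cascade_predict(x):            # only still-misclassified examples kept
            survivors.append(x)
    return survivors
```

Here `cascade_predict` is the cascade built so far; each new node is therefore trained only on negatives that every earlier node failed to reject, which pushes successive nodes toward independent errors.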
The cascade architecture in figure 1 should be suitable for other rare event problems, such
as network intrusion detection in which an attack constitutes a few packets out of tens of
millions. Recent work in that community has also explored a cascade approach [9].
For each node in the cascade architecture, given a training set {xi , yi }, the learning objective is to select a set of weak classifiers {ht } from a total set of F features and combine
them into an ensemble H with a high detection rate d and a moderate false positive rate f .
Figure 2: Diagram for training one node in the cascade architecture; (a) is for the Viola-Jones method and (b) is for the proposed method. F and D are the false positive rate and detection rate goals respectively. In (a), all weak classifiers are retrained each round, the feature with minimum weighted error is added, and the ensemble threshold is adjusted to meet the detection rate goal, looping until f < F. In (b), all weak classifiers are trained once; then, depending on whether d > D, the feature that maximizes the ensemble's detection rate or minimizes its false positive rate is added, looping until f < F and d > D.
A weak classifier is formed from a rectangle feature by applying the feature to the input
pattern and thresholding the result.1 Training a weak classifier corresponds to setting its
threshold.
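A weak classifier of this kind is just a thresholded (and possibly sign-flipped) feature response; a minimal sketch, with a stand-in feature in place of a real rectangle feature:

```python
def make_weak_classifier(feature, threshold, polarity=1):
    """Weak classifier h(x) = 1 iff polarity * feature(x) >= polarity * threshold."""
    def h(x):
        return 1 if polarity * feature(x) >= polarity * threshold else 0
    return h

# Stand-in feature response (a real implementation would evaluate a rectangle
# feature on the integral image of a 24x24 window).
feature = lambda window: sum(window) / len(window)
h = make_weak_classifier(feature, threshold=0.5)
print(h([0.9, 0.7, 0.8]))   # prints 1: mean response 0.8 >= 0.5
```

The polarity argument lets the same feature fire on either low or high responses, so "training" the weak classifier amounts to picking a threshold (and sign) on its scalar output.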
In [4], an algorithm based on AdaBoost trains weak classifiers, adds them to the ensemble,
and computes the ensemble weights. AdaBoost [10] is an iterative method for obtaining
an ensemble of weak classifiers by evolving a distribution of weights, Dt , over the training
data. In the Viola-Jones approach, each iteration t of boosting adds the classifier h_t with
the lowest weighted error to the ensemble. After T rounds of boosting, the decision of the
ensemble is defined as H(x) = 1 if Σ_{t=1}^T α_t h_t(x) ≥ θ and H(x) = 0 otherwise, where
the α_t are the standard AdaBoost ensemble weights and θ is the threshold of the ensemble. This threshold is
adjusted to meet the detection rate goal. More features are then added if necessary to meet
the false positive rate goal. The flowchart for the algorithm is given in figure 2(a).
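The thresholded weighted vote, and the threshold adjustment that replaces AdaBoost's default θ, can be sketched as follows (an illustration of the decision rule only, not the AdaBoost training loop; the weak classifiers and weights in the usage line are toy values):

```python
import math

def ensemble_decide(x, classifiers, alphas, theta):
    """H(x) = 1 iff sum_t alpha_t * h_t(x) >= theta (thresholded weighted vote)."""
    score = sum(a * h(x) for h, a in zip(classifiers, alphas))
    return 1 if score >= theta else 0

def threshold_for_detection_rate(faces, classifiers, alphas, target_d):
    """Lower theta just enough that at least a target_d fraction of the
    face examples is detected."""
    scores = sorted(sum(a * h(x) for h, a in zip(classifiers, alphas))
                    for x in faces)
    k = math.ceil(target_d * len(scores))   # number of faces that must pass
    return scores[len(scores) - k]

# Toy ensemble: two binary weak classifiers with weights 0.4 and 0.6.
hs = [lambda x: 1 if x > 0.3 else 0, lambda x: 1 if x > 0.6 else 0]
theta = threshold_for_detection_rate([0.2, 0.5, 0.7, 0.9], hs, [0.4, 0.6], 0.75)
```

Sorting the ensemble scores of the positive examples and taking the k-th largest as θ is the direct way to hit a detection-rate target exactly on the training set.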
The process of sequentially adding features which individually minimize the weighted error
is at best an indirect way to meet the learning goals for the ensemble. For example, the false
positive goal is relatively easy to meet, compared to the detection rate goal which is near
100%. As a consequence, the threshold θ produced by AdaBoost must be discarded in
favor of a threshold computed directly from the ensemble performance. Unfortunately,
the weight distribution maintained by AdaBoost requires that the complete set of weak
classifiers be retrained in each iteration. This is a computationally demanding task which
is in the inner loop of the feature selection algorithm.
Beyond these concerns is a more basic question about the cascade learning problem: What
is the role of boosting in forming an effective ensemble? Our hypothesis is that the overall
success of the method depends upon having a sufficiently rich feature set, which defines the
space of possible weak classifiers. From this perspective, a failure mode of the algorithm
would be the inability to find sufficient features to meet the learning goal. The question
then is to what extent boosting helps to avoid this problem. In the following section we
describe a simple, direct feature selection algorithm that sheds some light on these issues.
3 Direct Feature Selection Method
We propose a new cascade learning algorithm based on forward feature selection [11].
Pseudo-code of the algorithm for building an ensemble classifier for a single node is given
¹A feature and its corresponding classifier will be used interchangeably.
1. Given a training set. Given d, the minimum detection rate, and f, the maximum
   false positive rate.
2. For every feature j, train a weak classifier h_j whose false positive rate is f.
3. Initialize the ensemble H to an empty set, i.e. H ← ∅. t ← 0, d_0 = 0.0, f_0 = 1.0.
4. while d_t < d or f_t > f
   (a) if d_t < d, then find the feature k such that by adding it to H, the new
       ensemble will have the largest detection rate d_{t+1}.
   (b) else, find the feature k such that by adding it to H, the new ensemble will
       have the smallest false positive rate f_{t+1}.
   (c) t ← t + 1, H ← H ∪ {h_k}.
5. The decision of the ensemble classifier is formed by a majority vote of the weak
   classifiers in H, i.e. H(x) = 1 if Σ_{h_j ∈ H} h_j(x) ≥ θ and 0 otherwise, where
   θ = T/2. Decrease θ if necessary.
Table 1: The direct feature selection method for building an ensemble classifier.
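The selection loop of Table 1 can be sketched compactly using the precomputed table of weak-classifier outputs; this is an illustration of the idea on toy data, not the authors' implementation (it omits the final "decrease θ" step and adds a stopping guard once every feature has been used):

```python
def select_features(outputs, labels, d_goal, f_goal):
    """Greedy forward selection over a precomputed look-up table.
    outputs[j][i] in {0, 1} is weak classifier j's output on example i;
    labels[i] in {0, 1}.  Majority vote with threshold theta = t/2."""
    n = len(labels)
    pos = [i for i in range(n) if labels[i] == 1]
    neg = [i for i in range(n) if labels[i] == 0]
    votes = [0] * n                        # running vote count per example
    chosen = []

    def rates(v, t):
        theta = t / 2.0                    # majority-vote threshold
        d = sum(v[i] >= theta for i in pos) / len(pos)
        f = sum(v[i] >= theta for i in neg) / len(neg)
        return d, f

    d, f = 0.0, 1.0
    while (d < d_goal or f > f_goal) and len(chosen) < len(outputs):
        best = None
        for j in range(len(outputs)):
            if j in chosen:
                continue
            v = [votes[i] + outputs[j][i] for i in range(n)]
            dj, fj = rates(v, len(chosen) + 1)
            # Step 4(a): raise detection rate first; 4(b): then lower FP rate.
            key = (dj, -fj) if d < d_goal else (-fj, dj)
            if best is None or key > best[0]:
                best = (key, j, v, dj, fj)
        _, j, votes, d, f = best
        chosen.append(j)
    return chosen
```

Because the vote counts are carried forward in `votes`, evaluating each candidate feature costs one pass over the table rather than retraining anything, which is where the two-orders-of-magnitude speedup over per-round retraining comes from.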
in table 1. The corresponding flowchart is illustrated in figure 2(b). The first step in our
algorithm is to train each of the weak classifiers to meet the false positive rate goal for the
ensemble.
The output of each weak classifier on each training data item is collected in a large lookup table. The core algorithm is an exhaustive search over possible classifiers. In each
iteration, we consider adding each possible classifier to the ensemble and select the one
which makes the largest improvement to the ensemble performance. The selection criteria
directly maximizes the learning objective for the node. The look-up table, in conjunction
with majority vote rule, makes this feature search extremely fast.
The resulting algorithm is roughly 100 times faster than Viola-Jones. The key difference
is that we train the weak classifiers only once per node, while in the Viola-Jones method
they are trained once for each feature in the cascade. Let T be the training time for weak
classifiers2 and F be the number of features in the final cascade. The learning time for
Viola-Jones is roughly F T , which in [4] was on the order of weeks. Let N be the number
of nodes in the cascade. Empirically the learning time for our method is 2N T , which is on
the order of hours in our experiments. For the cascade of 32 nodes with 4297 features in
[4], the difference in learning time will be dramatic.
The difficulty of the classifier design problem increases with the depth of the cascade, as
the non-face patterns selected by bootstrapping become more challenging. A large number of features may be required to achieve the learning objectives when majority vote is
used. In this case, a weighted ensemble could be advantageous. Once feature selection has
been performed, a variant of the Viola-Jones algorithm can be used to obtain a weighted
ensemble. Pseudo-code for this weight setting method is given in table 2.
4 Experimental Results
We conducted three controlled experiments to compare our feature selection method to
the Viola-Jones algorithm. The procedures and data sets were the same for all of the ex-
    ²In our experiments, T is about 10 minutes.
Given a training set, maintain a distribution D over it.
Select N features using the algorithm in table 1. These features form a set F .
Initialize the ensemble classifier to an empty set, i.e. H ? ?.
for i = 1 : N
(a) Select the feature k from F that has smallest error ? on the training set,
weighted over the distribution D.
(b) Update the distribution D according to the AdaBoost algorithm as in [4].
?
to H. And
(c) Add the feature k and it?s associated weight ?k = ? log 1??
remove the feature k from F .
5. Decision of the ensemble classifier is formed by a weighted average of weak classifiers in H. Decrease the threshold ? until the ensemble reaches the detection rate
goal.
1.
2.
3.
4.
Table 2: Weight setting algorithm after feature selection.
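The weight-setting pass of Table 2 can be sketched as a standard AdaBoost reweighting over the pre-selected features (binary outputs and labels; clamping ε away from zero is an added safeguard for the sketch, not part of the original):

```python
import math

def set_weights(selected, labels):
    """AdaBoost-style weight setting over pre-selected features.
    selected[j][i] in {0, 1} is weak classifier j's output on example i."""
    n = len(labels)
    D = [1.0 / n] * n                       # distribution over training data
    pool = list(range(len(selected)))
    ensemble = []                           # list of (feature index, alpha_k)
    while pool:
        # (a) feature with the smallest weighted error on distribution D
        def werr(j):
            return sum(D[i] for i in range(n) if selected[j][i] != labels[i])
        k = min(pool, key=werr)
        eps = max(werr(k), 1e-12)           # clamp: avoid log of zero error
        alpha = math.log((1 - eps) / eps)   # i.e. alpha_k = -log(eps / (1 - eps))
        # (b) reweight: misclassified examples gain weight, then normalize
        D = [D[i] * (math.exp(alpha) if selected[k][i] != labels[i] else 1.0)
             for i in range(n)]
        Z = sum(D)
        D = [w / Z for w in D]
        # (c) add (k, alpha_k) to the ensemble, remove k from the pool
        ensemble.append((k, alpha))
        pool.remove(k)
    return ensemble
```

Since the expensive search over all features has already been done, this pass only ranks and weights the handful of selected features, so its cost is negligible next to selection.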
periments. Our training set contained 5000 example face images and 5000 initial non-face
examples, all of size 24x24. We used approximately 2284 million non-face patches to bootstrap the non-face examples between nodes. We used 32466 features sampled uniformly
from the entire set of rectangle features. For testing purposes we used the MIT+CMU
frontal face test set [2] in all experiments. Although many researchers use automatic procedures to evaluate their algorithm, we decided to manually count the missed faces and
false positives.³ When scanning a test image at different scales, the image is re-scaled
repeatedly by a factor of 1.25. Post-processing is similar to [4].
In the first experiment we constructed three face detection cascades. One cascade used
the direct feature selection method from table 1. The second cascade used the weight setting algorithm in table 2. The training algorithms stopped when they exhausted the set of
non-face training examples. The third cascade used our implementation of the Viola-Jones
algorithm. The three cascades had 38, 37, and 28 nodes respectively. The third cascade was
stopped after 28 nodes because the AdaBoost based training algorithm could not meet the
learning goal. With 200 features, when the detection rate is 99.9%, the AdaBoost ensemble?s false positive rate is larger than 97%. Adding several hundred additional features did
not change the outcome. ROC curves for cascades using our method and the Viola-Jones
method are depicted in figure 3(a). We constructed the ROC curves by removing nodes
from the cascade to generate points with increasing detection and false positive rates. These
curves demonstrate that the test performance of our method is indistinguishable from that
of the Viola-Jones method.
The second experiment explored the ability of the rectangle feature set to meet the detection
rate goal for the ensemble on a difficult node. Figure 3(b) shows the false positive and
detection rates for the ensemble (i.e., one node in the cascade architecture) as a function
of the number of features that were added to the ensemble. The training set used was the
bootstrapped training set for the 19th node in the cascade which was trained by the ViolaJones method. Even for this difficult learning task, the algorithm can improve the detection
rate from about 0.7 to 0.9 using only 13 features, without any significant increase in false
positive rate. This suggests that the rectangle feature set is sufficiently rich. Our hypothesis
is that the strength of this feature set in the context of the cascade architecture is the key to
³We found that the criterion for automatically finding detection errors in [6] was too loose. This criterion yielded higher detection rates and lower false positive rates than manual counting.
Figure 3: Experimental results. (a) ROC curves (correct detection rate vs. number of false positives) for the proposed method and the Viola-Jones method; (b) trend of the detection and false positive rates as more features are combined in one node.
the success of the Viola-Jones approach.
We conducted a third experiment in which we focused on learning one node in the cascade
architecture. Figure 4 shows ROC curves of the Viola-Jones, direct feature selection, and
weight setting methods for one node of the cascade. The training set used in figure 4 was
the same training set as in the second experiment. Unlike the ROC curves in figure 3(a),
these curves show the performance of the node in isolation using a validation set. These
curves reinforce the similarity in the performance of our method compared to Viola-Jones.
In the region of interest (e.g. detection rate > 99%), our algorithms yield better ROC curve
performance than the Viola-Jones method. Although figure 4 and figure 3(b) only showed
curves for one specific training set, the same pattern in these figures were found with other
bootstrapped training sets in our experiments.
5 Related Work
A survey of face detection methods can be found in [12]. We restrict our attention here
to frontal face detection algorithms related to the cascade idea. The neural network-based
detector of Rowley et al. [2] incorporated a manually designed two-node cascade. Other
cascade structures have been constructed for SVM classifiers. In [13], a set of reduced set
vectors is calculated from the support vectors. Each reduced set vector can be interpreted as
a face or anti-face template. Since these reduced set vectors are applied sequentially to the
input pattern, they can be viewed as nodes in a cascade. An alternative cascade framework
for SVM classifiers is proposed by Heisele et al. in [14]. Based on different assumptions,
Keren et al. proposed another object detection method which consists of a series of anti-face templates [15]. Carmichael and Hebert propose a hierarchical strategy for detecting
chairs at different orientations and scales [16].
Following [4], several authors have developed alternative boosting algorithms for feature
selection. Li et al. incorporated floating search into the AdaBoost algorithm (FloatBoost)
and proposed some new features for detecting multi-view faces [5]. Lienhart et al. [6] experimentally evaluated different boosting algorithms and different weak classifiers. Their
results showed that Gentle AdaBoost and CART decision trees had the best performance.
In an extension of their original work [7], Viola and Jones proposed an asymmetric AdaBoost algorithm in which false negatives are penalized more than false positives. This is
an interesting attempt to incorporate the rare event observation more explicitly into their
[Figure: single-node ROC curves (false positive rate vs. correct detection rate) comparing Viola-Jones, Feature Selection, and Weight Setting.]
Figure 4: Single node ROC curves on a validation set.
learning algorithm (see [8] for a related method). All of these methods explore variations
in AdaBoost-based feature selection, and their training times are similar to the original
Viola-Jones algorithm. While all of the above methods adopt a brute-force search strategy
for generating input regions, there has been some interesting work on generating candidate
face hypotheses from more general interest operators. Two examples are [17, 18].
6 Conclusions
Face detection is a canonical example of a rare event detection task, in which target patterns
occur with much lower frequency than non-targets. It results in a challenging classifier
design problem: The detection rate must be very high in order to avoid missing any rare
events and the false positive rate must be very low to dodge the flood of non-events. A
cascade classifier architecture is well-suited to rare event detection.
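As a concrete sketch of this early-rejection behaviour (the node scoring functions and thresholds below are invented stand-ins, not the boosted node classifiers of our experiments):

```python
def cascade_classify(x, nodes):
    """Apply cascade nodes in order; reject (return 0) at the first node
    whose score falls below its threshold, and accept (return 1) only if
    every node passes.  Each node is a (score_fn, threshold) pair."""
    for score_fn, threshold in nodes:
        if score_fn(x) < threshold:
            return 0  # non-target: rejected early, later nodes never run
    return 1  # target: survived every node

# Hypothetical two-node cascade: a cheap first node rejects most non-targets.
nodes = [
    (lambda x: x[0], 0.5),         # cheap first node
    (lambda x: x[0] + x[1], 1.2),  # more expensive second node
]

print(cascade_classify((0.9, 0.4), nodes))  # passes both nodes -> 1
print(cascade_classify((0.2, 0.9), nodes))  # rejected by the first node -> 0
```

A non-target rejected by the first node never pays for the later, more expensive nodes, which is what makes the cascade cheap on the flood of non-events.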
The Viola-Jones face detection framework consists of a cascade architecture, a rich overcomplete feature set, and a learning algorithm based on AdaBoost. We have demonstrated
that a simpler direct algorithm based on forward feature selection can produce cascades
of similar quality with two orders of magnitude less computation. Our algorithm directly
optimizes the learning criteria for the ensemble, while the AdaBoost-based method is more
indirect. This is because the learning goal is a highly-skewed tradeoff between detection
rate and false positive rate which does not fit naturally into the weighted error framework
of AdaBoost. Our experiments suggest that the feature set and cascade structure in the
Viola-Jones framework are the key elements in the success of the method.
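The greedy forward-selection loop itself is simple; the sketch below uses an invented set-cover criterion as a stand-in for the skewed detection-rate/false-positive criterion of the ensemble, and all feature names and values are hypothetical:

```python
def forward_select(candidates, score, k):
    """Greedy forward feature selection: repeatedly add the candidate that
    most improves the ensemble criterion score(subset) (higher is better)."""
    chosen = []
    remaining = list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda f: score(chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy criterion: a subset's score is how many examples the union of its
# features handles correctly (a crude stand-in for detection accuracy).
coverage = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {5}, "f4": {1, 2}}
score = lambda subset: len(set().union(*(coverage[f] for f in subset)))

print(forward_select(coverage, score, 2))  # ['f1', 'f2']
```

Ties are broken by candidate order (Python's `max` keeps the first maximal element), so with this toy criterion `f1` is chosen first and `f2` second.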
Three issues that we plan to explore in future work are: the necessary properties for feature sets, global feature selection methods, and the incorporation of search into the cascade framework. The rectangle feature set seems particularly well-suited for face detection. What general properties must a feature set possess to be successful in the cascade
framework? In other rare event detection tasks where a large set of diverse features is not
naturally available, methods to create such a feature set may be useful (e.g. the random
subspace method proposed by Ho [19]).
In our current algorithm, both nodes and features are added sequentially and greedily to
the cascade. More global techniques for forming ensembles could yield better results.
Finally, the current detection method relies on a brute-force search strategy for generating
candidate regions. We plan to explore the cascade architecture in conjunction with more
general interest operators, such as those defined in [18, 20].
The authors are grateful to Mike Jones and Paul Viola for providing their training data,
along with many valuable discussions. This work was supported by NSF grant IIS-0133779
and the Mitsubishi Electric Research Laboratory.
References
[1] K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):39–51, 1998.
[2] H. A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):23–38, 1998.
[3] Henry Schneiderman and Takeo Kanade. A statistical model for 3D object detection applied to faces and cars. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2000.
[4] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proc. CVPR, pages 511–518, 2001.
[5] S.Z. Li, Z.Q. Zhang, Harry Shum, and H.J. Zhang. FloatBoost learning for classification. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS 15. MIT Press, December 2002.
[6] R. Lienhart, A. Kuranov, and V. Pisarevsky. Empirical analysis of detection cascades of boosted classifiers for rapid object detection. Technical report, MRL, Intel Labs, 2002.
[7] P. Viola and M. Jones. Fast and robust classification using asymmetric AdaBoost and a detector cascade. In NIPS 14, 2002.
[8] G. J. Karakoulas and J. Shawe-Taylor. Optimizing classifiers for imbalanced training sets. In NIPS 11, pages 253–259, 1999.
[9] W. Fan, W. Lee, S. J. Stolfo, and M. Miller. A multiple model cost-sensitive approach for intrusion detection. In Proc. 11th ECML, 2000.
[10] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, 1998.
[11] A. R. Webb. Statistical Pattern Recognition. Oxford University Press, New York, 1999.
[12] M.-H. Yang, D. J. Kriegman, and N. Ahuja. Detecting faces in images: a survey. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(1):34–58, 2002.
[13] S. Romdhani, P. Torr, B. Schoelkopf, and A. Blake. Computationally efficient face detection. In Proc. Intl. Conf. Computer Vision, pages 695–700, 2001.
[14] B. Heisele, T. Serre, S. Mukherjee, and T. Poggio. Feature reduction and hierarchy of classifiers for fast object detection in video images. In Proc. CVPR, volume 2, pages 18–24, 2001.
[15] D. Keren, M. Osadchy, and C. Gotsman. Antifaces: A novel, fast method for image detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(7):747–761, 2001.
[16] O. Carmichael and M. Hebert. Object recognition by a cascade of edge probes. In British Machine Vision Conference, volume 1, pages 103–112, September 2002.
[17] T. Leung, M. Burl, and P. Perona. Finding faces in cluttered scenes using random labeled graph matching. In Proc. Intl. Conf. Computer Vision, pages 637–644, 1995.
[18] S. Lazebnik, C. Schmid, and J. Ponce. Sparse texture representation using affine-invariant neighborhoods. In Proc. CVPR, 2003.
[19] T. K. Ho. The random subspace method for constructing decision forests. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(8):832–844, 1998.
[20] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(4):509–522, 2002.
Denoising and untangling graphs using degree priors
Quaid D Morris, Brendan J Frey, and Christopher J Paige
University of Toronto
Electrical and Computer Engineering
10 King?s College Road, Toronto, Ontario, M5S 3G4
Canada
{quaid, frey}@psi.utoronto.ca, [email protected]
Abstract
This paper addresses the problem of untangling hidden graphs from
a set of noisy detections of undirected edges. We present a model
of the generation of the observed graph that includes degree-based
structure priors on the hidden graphs. Exact inference in the model
is intractable; we present an efficient approximate inference algorithm to compute edge appearance posteriors. We evaluate our
model and algorithm on a biological graph inference problem.
1 Introduction and motivation
The inference of hidden graphs from noisy edge appearance data is an important
problem with obvious practical application. For example, biologists are currently
building networks of all the physical protein-protein interactions (PPI) that occur
in particular organisms. The importance of this enterprise is commensurate with its
scale: a completed network would be as valuable as a completed genome sequence,
and because each organism contains thousands of different types of proteins, there
are millions of possible types of interactions. However, scalable experimental methods for detecting interactions are noisy, generating many false detections. Motivated
by this application, we formulate the general problem of inferring hidden graphs as
probabilistic inference in a graphical model, and we introduce an efficient algorithm
that approximates the posterior probability that an edge is present.
In our model, a set of hidden, constituent graphs are combined to generate the observed graph. Each hidden graph is independently sampled from a prior on graph
structure. The combination mechanism acts independently on each edge but can
be either stochastic or deterministic. Figure 1 shows an example of our generative
model. Typically one of the hidden graphs represents the graph of interest (the true
graph), the others represent different types of observation noise. Independent edge
noise may also be added by the combination mechanism. We use probabilistic inference to compute a likely decomposition of the observed graph into its constituent
parts. This process is deemed ?untangling?. We use the term ?denoising? to refer
to the special case where the edge noise is independent. In denoising there is a
single hidden graph, the true graph, and all edge noise in the observed graph is due
Figure 1: Illustrative generative model example. Figure shows an example where an observed
graph, X, is a noisy composition of two constituent graphs, E 1 and E 2 . All graphs share the
same vertex set, so each can be represented by a symmetric matrix of random binary variables
(i.e., an adjacency matrix). This generative model is designed to solve a toy counter-espionage
problem. The vertices represent suspects and each edge in X represents an observed call
between two suspects. The graph X reflects zero or more spy rings (represented by E 1 ),
telemarketing calls (represented by E 2 ), social calls (independent edge noise), and lost call
records (more independent edge noise). The task is to locate any spy rings hidden in X. We
model the distribution of spy ring graphs using a prior, P (E 1 ), that has support only on graphs
where all vertices have degree of either 2 (i.e., are in the ring) or 0 (i.e., are not). Graphs of
telemarketing call patterns are represented using a prior, P (E 2 ), under which all nodes have
degrees of > 3 (i.e., are telemarketers), 1 (i.e., are telemarketees), or 0 (i.e., are neither). The
displayed hidden graphs are one likely untangling of X.
to the combination mechanism.
Prior distributions over graphs can be specified in various ways, but our choice is
motivated by problems we want to solve, and by a view to deriving an efficient inference algorithm. One compact representation of a distribution over graphs consists
of specifying a distribution over vertex degrees, and assuming that graphs that have
the same vertex degrees are equiprobable. Such a prior can model quite rich distributions over graphs. These degree-based structure priors are natural representations
of graph structure; many classes of real-world networks have a characteristic functional form associated with their degree distributions [1], and sometimes this form
can be predicted using knowledge about the domain (see, e.g., [2]) or detected empirically (see, e.g., [3, 4]). As such, our model incorporates degree-based structure
priors.
Though exact inference in our model is intractable in general, we present an efficient
algorithm for approximate inference for arbitrary degree distributions. We evaluate
our model and algorithm using the real-world example of untangling yeast proteinprotein interaction networks.
2 A model of noisy and tangled graphs
For degree-based structure priors, inference consists of searching over vertex degrees
and edge instantiations, while comparing each edge with its noisy observation and
enforcing the constraint that the number of edges connected to every vertex must
equal the degree of the vertex. Our formulation of the problem in this way is inspired by the success of the sum-product algorithm (loopy belief propagation) for
solving similar formulations of problems in error-correcting decoding [6, 7], phase
unwrapping [8], and random satisfiability [9]. For example, in error-correcting decoding, inference consists of searching over configurations of codeword bits, while
comparing each bit with its noisy observation and enforcing parity-check constraints
on subsets of bits [10].
For a graph on a set of N vertices, eij is a variable that indicates the presence
of an edge connecting vertices i and j: eij = 1 if there is an edge, and eij = 0
otherwise. We assume the vertex set is fixed, so each graph is specified by an
adjacency matrix, E = {eij}, i, j = 1, . . . , N. The degree of vertex i is denoted by di and the degree set by D = {di}, i = 1, . . . , N. The observations are given by a noisy adjacency matrix, X = {xij}, i, j = 1, . . . , N. Generally, edges can be directed, but in this paper we focus on undirected graphs, so eij = eji and xij = xji.
Assuming the observation noise is independent for different edges, the joint distribution is
P(X, E, D) = P(X|E)P(E, D) = ( ∏_{j≤i} P(xij|eij) ) P(E, D).
P (xij |eij ) models the edge observation noise. We use an undirected model for the
joint distribution over edges and degrees, P (E, D), where the prior distribution over
di is determined by a non-negative potential fi (di ). Assuming graphs that have the
same vertex degrees are equiprobable, we have
P(E, D) ∝ ∏_i fi(di) I(di, ∑_{j=1}^{N} eij),

where I(a, b) = 1 if a = b, and I(a, b) = 0 if a ≠ b. The term I(di, ∑_j eij) ensures that the number of edges connected to vertex i is equal to di. It is straightforward to show that the marginal distribution over di is

P(di) ∝ fi(di) ∑_{D\di} ( nD ∏_{j≠i} fj(dj) ),

where nD is the number of graphs with degrees D and the sum is over all degree variables except di. The potentials, fi, can be
estimated from a given degree prior using Markov chain Monte Carlo; or, as an
approximation, they can be set to an empirical degree distribution obtained from
noise-free graphs.
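For intuition, the unnormalized prior can be evaluated directly on a small graph; the degree potential below is a made-up example loosely echoing the degree-2 "spy ring" prior of Figure 1, not one estimated from data:

```python
def prior_score(E, f):
    """Unnormalized P(E, D): product over vertices of f(d_i), where the
    indicator forces d_i to equal the row sum of the adjacency matrix E."""
    n = len(E)
    score = 1.0
    for i in range(n):
        d_i = sum(E[i][j] for j in range(n))  # degree implied by the edges
        score *= f(d_i)                       # I(d_i, sum_j e_ij) is then 1
    return score

# Toy potential favouring degree 2 (every vertex in a ring) over degree 1 or 3.
f = {0: 1.0, 1: 0.1, 2: 1.0, 3: 0.1}.get

ring = [[0, 1, 1],
        [1, 0, 1],
        [1, 1, 0]]   # every vertex has degree 2
chain = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]  # degrees 1, 2, 1

print(prior_score(ring, f))   # 1.0
print(prior_score(chain, f))  # 0.1 * 1.0 * 0.1, i.e. about 0.01
```

Under this potential the ring is 100 times more probable than the chain, before any edge observations are taken into account.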
Fig 2a shows the factor graph [11] for the above model. Each filled square corresponds to a term in the factorization of the joint distribution and the square is
connected to all variables on which the term depends. Factor graphs are graphical
models that unify the properties of Bayesian networks and Markov random fields
[12]. Many inference algorithms, including the sum-product algorithm (a.k.a. loopy
belief propagation), are more easily derived using factor graphs than Bayesian networks or Markov random fields. We describe the sum-product algorithm for our
model in section 3.
Figure 2: (a) A factor graph that describes a distribution over graphs with vertex degrees di, binary edge indicator variables eij, and noisy edge observations xij. The indicator function I(di, ∑_j eij) enforces the constraint that the sum of the binary edge indicator variables for vertex i must equal the degree of vertex i. (b) A factor graph that explains noisy observed edges as a combination of two constituent graphs, with edge indicator variables e1ij and e2ij. (c) The constraint I(di, ∑_j eij) can be implemented using a chain with state variables, which leads to an exponentially faster message-passing algorithm.
2.1 Combining multiple graphs
The above model is suitable when we want to infer a graph that matches a degree
prior, assuming the edge observation noise is independent. A more challenging
goal, with practical application, is to infer multiple hidden graphs that combine to
explain the observed edge data. In section 4, we show how priors over multiple
hidden graphs can be be used to infer protein-protein interactions.
When there are H hidden graphs, each constituent graph is specified by a set of
edges on the same set of N common vertices. For the degree variables and edge
variables, we use a superscript to indicate which hidden graph the variable is used
to describe. Assuming the graphs are independent, the joint distribution over the
observed edge data X, and the edge variables and degree variables for the hidden
graphs, E 1 , D1 , . . . , E H , DH , is
P(X, E1, D1, . . . , EH, DH) = ( ∏_{j≤i} P(xij | e1ij, . . . , eHij) ) ∏_{h=1}^{H} P(Eh, Dh),   (1)
where for each hidden graph, P (E h , Dh ) is modeled as described above. Here, the
likelihood P (xij |e1ij , . . . , eH
ij ) describes how the edges in the hidden graphs combine
to model the observed edge. Figure 2b shows the factor graph for this model.
3 Probabilistic inference of constituent graphs
Exact probabilistic inference in the above models is intractable; here we introduce
an approximate inference algorithm that consists of applying the sum-product algorithm, while ignoring cycles in the factor graph. Although the sum-product algorithm has been used to obtain excellent results on several problems [6, 7, 13, 14, 8, 9],
we have found that the algorithm works best when the model consists of uncertain
observations of variables that are subject to a large number of hard constraints.
Thus the formulation of the model described above.
Conceptually, our inference algorithm is a straightforward application of the sum-product algorithm, cf. [15], where messages are passed along edges in the factor
graph iteratively, and then combined at variables to obtain estimates of posterior
probabilities. However, direct implementation of the message-passing updates will
lead to an intractable algorithm. In particular, direct implementation of the update for the message sent from function I(di, ∑_j eij) to edge variable eik takes a number
of scalar operations that is exponential in the number of vertices. Fortunately there
exists a more efficient way to compute these messages.
3.1 Efficiently summing over edge configurations
The function I(di, ∑_j eij) ensures that the number of edges connected to vertex i is equal to di. Passing messages through this function requires summing over all edge configurations that correspond to each possible degree, di, and summing over di. Specifically, the message, μ_{Ii→eik}(eik), sent from function I(di, ∑_j eij) to edge variable eik is given by

μ_{Ii→eik}(eik) = ∑_{di} ∑_{ {eij | j=1,...,N, j≠k} } I(di, ∑_j eij) ∏_{j≠k} μ_{eij→Ii}(eij),

where μ_{eij→Ii}(eij) is the message sent from eij to function I(di, ∑_j eij).
The sum over {eij | j = 1, . . . , N, j ≠ k} contains 2^{N−1} terms, so direct computation is intractable. However, for a maximum degree of dmax, all messages departing from the function I(di, ∑_j eij) can be computed using order dmax·N binary scalar operations, by introducing integer state variables sij. We define sij = ∑_{n≤j} ein and note that, by recursion, sij = si,j−1 + eij, where si0 = 0 and 0 ≤ sij ≤ dmax. This recursive expression enables us to write the high-complexity constraint as the sum of a product of low-complexity constraints,

I(di, ∑_j eij) = ∑_{ {sij | j=1,...,N} } I(si1, ei1) ( ∏_{j=2}^{N} I(sij, si,j−1 + eij) ) I(di, siN).

This summation can be performed using the forward-backward algorithm. In the factor graph, the summation can be implemented by replacing the function I(di, ∑_j eij) with a chain of lower-complexity functions, connected as shown in Fig. 2c. The function vertex (filled square) on the far left corresponds to I(si1, ei1) and the function vertex in the upper right corresponds to I(di, siN). So, messages can be passed through each constraint function I(di, ∑_j eij) in an efficient manner, by performing a single forward-backward pass in the corresponding chain.
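A minimal sketch of this chain trick (the incoming messages and the degree potential in the demo are made-up numbers, not fitted quantities):

```python
def degree_messages(mu_in, g):
    """All N outgoing messages of one degree constraint I(d_i, sum_j e_ij).

    mu_in[j] = (m_j(0), m_j(1)) are the incoming messages on each edge
    variable, and g[d] (d = 0..dmax) plays the role of the degree potential
    f_i(d_i).  A forward and a backward pass over the chain states s_ij of
    Fig. 2c give all messages in O(dmax * N) scalar operations.
    """
    N, dmax = len(mu_in), len(g) - 1
    # Forward: F[j][s] = weighted sum over e_1..e_j whose partial sum is s.
    F = [[0.0] * (dmax + 1) for _ in range(N + 1)]
    F[0][0] = 1.0
    for j in range(N):
        m0, m1 = mu_in[j]
        for s in range(dmax + 1):
            F[j + 1][s] = F[j][s] * m0
            if s > 0:
                F[j + 1][s] += F[j][s - 1] * m1
    # Backward: B[j][s] = weighted sum over e_{j+1}..e_N given partial sum s,
    # including the degree potential g applied to the final state.
    B = [[0.0] * (dmax + 1) for _ in range(N + 1)]
    B[N] = list(g)
    for j in range(N - 1, -1, -1):
        m0, m1 = mu_in[j]
        for s in range(dmax + 1):
            B[j][s] = B[j + 1][s] * m0
            if s + 1 <= dmax:
                B[j][s] += B[j + 1][s + 1] * m1
    # Outgoing message on edge k excludes that edge's own incoming factor.
    return [(sum(F[k][s] * B[k + 1][s] for s in range(dmax + 1)),
             sum(F[k][s] * B[k + 1][s + 1] for s in range(dmax)))
            for k in range(N)]

# Degree-exactly-2 potential over three candidate edges, flat incoming messages.
msgs = degree_messages([(1.0, 1.0)] * 3, [0.0, 0.0, 1.0, 0.0])
print(msgs)  # [(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)]
```

With the degree forced to exactly 2 over three candidate edges, each outgoing message says the edge is twice as likely on as off, matching the count of consistent configurations (two of the three degree-2 edge sets include any given edge; one excludes it).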
4 Results
We evaluate our model using yeast protein-protein interaction (PPI) data compiled
by [16]. These data include eight sets of putative, but noisy, interactions derived
from various sources, and one gold-standard set of interactions detected by reliable
experiments.
Using the ≈6300 yeast proteins as vertices, we represent the eight sets of putative interactions using adjacency matrices {Y^m}, m = 1, . . . , 8, where y^m_ij = 1 if and only if putative interaction dataset m contains an interaction between proteins i and j. We similarly use Y^gold to represent the gold-standard interactions.

We construct an observed graph, X, by setting xij = max_m y^m_ij for all i and j; thus the observed edge set is the union of all the putative edge sets. We test our model
[Figure 3 plots: (a) false positives (%) vs. true positives (%) for untangling, baseline, and random; (b) degree (# of nodes) vs. log Pr for the empirical, potential, and posterior distributions.]
Figure 3: Protein-protein interaction network untangling results. (a) ROC curves measuring performance of predicting e1ij when xij = 1. (b) Degree distributions. Compares the empirical degree distribution of the test set subgraph of E1 to the degree potential f1 estimated on the training set subgraph of E1 and to the distribution of di = ∑_j pij, where pij = P̂(e1ij = 1|X) is estimated by untangling.
on the task of discerning which of the edges in X are also in Y^gold. We formalize this problem as that of decomposing X into two constituent graphs E1 and E2, the true and the noise graphs respectively, such that e1ij = xij · y^gold_ij and e2ij = xij − e1ij.
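This construction is elementwise max, product, and subtraction; a sketch on invented 3-protein matrices:

```python
def union_graph(Ys):
    """x_ij = max_m y^m_ij: the observed edge set is the union of the
    putative edge sets."""
    n = len(Ys[0])
    return [[max(Y[i][j] for Y in Ys) for j in range(n)] for i in range(n)]

def decompose(X, Ygold):
    """Split X into true edges e1_ij = x_ij * ygold_ij and noise edges
    e2_ij = x_ij - e1_ij."""
    n = len(X)
    E1 = [[X[i][j] * Ygold[i][j] for j in range(n)] for i in range(n)]
    E2 = [[X[i][j] - E1[i][j] for j in range(n)] for i in range(n)]
    return E1, E2

# Two invented putative datasets and a gold standard over 3 proteins.
Y1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
Y2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
Ygold = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]

X = union_graph([Y1, Y2])
E1, E2 = decompose(X, Ygold)
print(X)   # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(E1)  # [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
print(E2)  # [[0, 0, 1], [0, 0, 1], [1, 1, 0]]
```

Every observed edge lands in exactly one of E1 or E2, so the decomposition is lossless: E1 + E2 = X elementwise.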
We use a training set to fit our model parameters and then measure task performance on a test set. The training set contains a randomly selected half of the
? 6300 yeast proteins, and the subgraphs of E 1 , E 2 , and X restricted to those
vertices. The test contains the other half of the proteins and the corresponding
subgraphs. Note that interactions connecting test set proteins to training set proteins (and vice versa) are ignored.
We fit three sets of parameters: a set of Naive Bayes parameters that define a set of
edge-specific likelihood functions, Pij (xij |e1ij , e2ij ), one degree potential, f 1 , which
is the same for every vertex in E1 and defines the prior P (E 1 ), and a second, f 2 ,
that similarly defines the prior P (E 2 ).
The likelihood functions, Pij , are used to both assign likelihoods and enforce problem constraints. Given our problem definition, if xij = 0 then e1ij = e2ij = 0,
otherwise xij = 1 and e1ij = 1 − e2ij. We enforce the former constraint by setting Pij(xij = 0|e1ij, e2ij) = (1 − e1ij)(1 − e2ij), and the latter by setting Pij(xij = 1|e1ij, e2ij) = 0 whenever e1ij = e2ij. This construction of Pij simplifies the calculation of the μ_{Pij→ehij} messages and improves the computational efficiency of inference because when xij = 0, we need never update messages to and from variables e1ij and e2ij. We complete the specification of Pij(xij = 1|e1ij, e2ij) as follows:
Pij(xij = 1|e1ij, e2ij) =
    ∏_m α_m^{y^m_ij} (1 − α_m)^{1 − y^m_ij},   if e1ij = 1 and e2ij = 0,
    ∏_m β_m^{y^m_ij} (1 − β_m)^{1 − y^m_ij},   if e1ij = 0 and e2ij = 1,

where {α_m} and {β_m} are naive Bayes parameters, α_m = ∑_{i,j} y^m_ij e1ij / ∑_{i,j} e1ij and β_m = ∑_{i,j} y^m_ij e2ij / ∑_{i,j} e2ij, respectively.
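The parameters are simple edge-count ratios; sketched below on invented matrices (the function and variable names are ours, not from the paper):

```python
def estimate_rate(Ym, Eh):
    """alpha_m (or beta_m): the fraction of hidden-graph edges in E^h that
    dataset m also reports, sum_ij y^m_ij e^h_ij / sum_ij e^h_ij."""
    n = len(Eh)
    hits = sum(Ym[i][j] * Eh[i][j] for i in range(n) for j in range(n))
    total = sum(Eh[i][j] for i in range(n) for j in range(n))
    return hits / total

E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]  # invented true edges
E2 = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]  # invented noise edges
Y1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # invented putative dataset 1

alpha_1 = estimate_rate(Y1, E1)  # dataset 1 reports every true edge -> 1.0
beta_1 = estimate_rate(Y1, E2)   # and half of the noise edges -> 0.5
print(alpha_1, beta_1)
```

Because the matrices are symmetric, each undirected edge is counted twice in both numerator and denominator, leaving the ratio unchanged.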
The degree potentials f 1 (d) and f 2 (d) are kernel density estimates fit to the degree
distribution of the training set subgraphs of E 1 and E 2 , respectively. We use
Gaussian kernels and set the width parameter (standard deviation) σ using leave-one-out cross-validation to maximize the total log density of the held-out datapoints.
Each datapoint is the degree of a single vertex. Both degree potentials closely
followed the training set empirical degree distributions.
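The bandwidth-selection step can be sketched as follows: score each held-out degree under a Gaussian kernel density estimate fit to the remaining degrees, and keep the σ with the largest total log density. The degree list and candidate grid below are invented, not our experimental values.

```python
import math

def gauss_kde(x, data, sigma):
    """Gaussian kernel density estimate at point x."""
    c = 1.0 / (len(data) * sigma * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / sigma) ** 2) for d in data)

def loo_log_density(degrees, sigma):
    """Total log density of each degree under the KDE fit to the others."""
    total = 0.0
    for i, x in enumerate(degrees):
        rest = degrees[:i] + degrees[i + 1:]  # leave this datapoint out
        total += math.log(gauss_kde(x, rest, sigma))
    return total

degrees = [1, 2, 2, 3, 3, 3, 4, 5, 7]  # invented vertex degrees
sigmas = [0.25, 0.5, 1.0, 2.0, 4.0]    # invented candidate widths
best = max(sigmas, key=lambda s: loo_log_density(degrees, s))
print(best)
```

Very small σ is punished by isolated points (the held-out degree 7 has almost no density mass near it), and very large σ over-smooths the peak, so an intermediate width wins.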
Untangling was done on the test set subgraph of X. We initially set the μ_{Pij→e1ij} messages equal to the likelihood function Pij and we randomly initialized the μ_{I1j→e1ij} messages with samples from a normal distribution with mean 0 and variance 0.01. We then performed 40 iterations of the following message update order: μ_{e1ij→I1j}, μ_{I1j→e1ij}, μ_{e1ij→Pij}, μ_{Pij→e2ij}, μ_{e2ij→I2j}, μ_{I2j→e2ij}, μ_{e2ij→Pij}, μ_{Pij→e1ij}.
We evaluated our untangling algorithm using an ROC curve by comparing the actual
test set subgraph of E1 to posterior marginal probabilities, P̂(e1ij = 1|X), estimated
by our sum-product algorithm. Note that because the true interaction network is
sparse (less than 0.2% of the 1.8 × 10^7 possible interactions are likely present [16])
and, in this case, true positive predictions are of greater biological interest than
true negative predictions, we focus on low false positive rate portions of the ROC
curve.
Figure 3a compares the performance of a classifier for e1ij based on thresholding
P̂(e1ij = 1|X) to a baseline method based on thresholding the likelihood functions,
Pij (xij = 1|e1ij = 1, e2ij = 0). Note because e1ij = 0 whenever xij = 0, we exclude
the xij = 0 cases from our performance evaluation. The ROC curve shows that
for the same low false positive rate, untangling produces 50%–100% more true
positives than the baseline method.
Figure 3b shows that the degree potential, the true degree distribution, and the
predicted degree distribution are all comparable. The slight overprediction of the
true degree distribution may result because the degree potential f 1 that defines
P (E 1 ) is not equal to the expected degree distribution of graphs sampled from the
distribution P (E 1 ).
5 Summary and Related Work
Related work includes other algorithms for structure-based graph denoising [17, 18].
These algorithms use structural properties of the observed graph to score edges and
rely on the true graph having a surprisingly large number of three (or four) edge
cycles compared to the noise graph. In contrast, we place graph generation in a
probabilistic framework; our algorithm computes structural fit in the hidden graph,
where this computation is not affected by the noise graph(s); and we allow for
multiple sources of observation noise, each with its own structural properties.
After submitting this paper to the NIPS conference, we discovered [19], in which a
degree-based graph structure prior is used to denoise (but not untangle) observed
graphs. This paper addresses denoising in directed graphs as well as undirected
graphs; however, the prior that they use is not amenable to deriving an efficient
sum-product algorithm. Instead, they use Markov Chain Monte Carlo to do approximate
inference in a hidden graph containing 40 vertices. It is not clear how well this
approach scales to the ∼3000-vertex graphs that we are using.
In summary, the contributions of the work described in this paper include: a general
formulation of the problem of graph untangling as inference in a factor graph; an
efficient approximate inference algorithm for a rich class of degree-based structure
priors; and a set of reliability scores (i.e., edge posteriors) for interactions from a
current version of the yeast protein-protein interaction network.
References
[1] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science,
286(5439), October 1999.
[2] A. Rzhetsky and S. M. Gomez. Birth of scale-free molecular networks and the number
of distinct DNA and protein domains per genome. Bioinformatics, pages 988–996, 2001.
[3] M. Faloutsos, P. Faloutsos, and C. Faloutsos. On power-law relationships of the Internet
topology. Computer Communications Review, 29, 1999.
[4] H. Jeong, B. Tombor, R. Albert, Z. N. Oltvai, and A.-L. Barabási. The large-scale
organization of metabolic networks. Nature, 407, October 2000.
[5] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San
Mateo, CA, 1988.
[6] D. J. C. MacKay and R. M. Neal. Near Shannon limit performance of low density
parity check codes. Electronics Letters, 32(18):1645–1646, August 1996. Reprinted in
Electronics Letters, vol. 33, March 1997, 457–458.
[7] B. J. Frey and F. R. Kschischang. Probability propagation and iterative decoding. In
Proceedings of the 1996 Allerton Conference on Communication, Control and Computing, 1996.
[8] B. J. Frey, R. Koetter, and N. Petrovic. Very loopy belief propagation for unwrapping
phase images. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[9] M. Mézard, G. Parisi, and R. Zecchina. Analytic and algorithmic solution of random
satisfiability problems. Science, 297:812–815, 2002.
[10] B. J. Frey and D. J. C. MacKay. Trellis-constrained codes. In Proceedings of the 35th
Allerton Conference on Communication, Control and Computing (1997), 1998.
[11] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product
algorithm. IEEE Transactions on Information Theory, Special Issue on Codes on
Graphs and Iterative Algorithms, 47(2):498–519, February 2001.
[12] B. J. Frey. Factor graphs: A unification of directed and undirected graphical models.
University of Toronto Technical Report PSI-2003-02, 2003.
[13] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for
approximate inference: An empirical study. In Uncertainty in Artificial Intelligence,
Stockholm, Sweden, 1999.
[14] W. Freeman and E. Pasztor. Learning low-level vision. In Proceedings of the International Conference on Computer Vision, pages 1182–1189, 1999.
[15] M. I. Jordan. An Introduction to Learning in Graphical Models. 2004. In preparation.
[16] C. von Mering et al. Comparative assessment of large-scale data sets of protein–protein
interactions. Nature, 2002.
[17] R. Saito, H. Suzuki, and Y. Hayashizaki. Construction of reliable protein–protein interaction networks with a new interaction generality measure. Bioinformatics, pages
756–763, 2003.
[18] D. S. Goldberg and F. P. Roth. Assessing experimentally derived interactions in a small
world. Proceedings of the National Academy of Sciences, 2003.
[19] S. M. Gomez and A. Rzhetsky. Towards the prediction of complete protein–protein
interaction networks. In Pacific Symposium on Biocomputing, pages 413–424, 2002.
Discriminating deformable shape classes
S. Ruiz-Correa, L. G. Shapiro, M. Meilă, and G. Berson
Department of Electrical Engineering
Department of Statistics
Division of Medical Genetics, School of Medicine
University of Washington, Seattle, WA 98105
Abstract
We present and empirically test a novel approach for categorizing 3-D free-form object shapes represented by range data. In contrast to traditional surface-signature-based
systems that use alignment to match specific objects, we adapted the newly introduced
symbolic-signature representation to classify deformable shapes [10]. Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn
object class parts and their corresponding geometrical relationships from a set of numeric
and symbolic descriptors. We used our classification engine in a series of large scale discrimination experiments on two well-defined classes that share many common distinctive
features. The experimental results suggest that our method outperforms traditional numeric
signature-based methodologies.¹
1 Introduction
Categorizing objects from their shape is an unsolved problem in computer vision that entails the ability of a computer system to represent and generalize shape information on the
basis of a finite amount of prior data. For automatic categorization to be of practical value,
a number of important issues must be addressed. As pointed out in [10], how to construct
a quantitative description of shape that accounts for the complexities in the categorization
process is currently unknown. From a practical prospective, human perception, knowledge,
and judgment are used to elaborate qualitative definitions of a class and to make distinctions
among different classes. Nevertheless, categorization in humans is a standing problem in
Neurosciences and Psychology, and no one is certain what information is utilized and what
kind of processing takes place when constructing object categories [8]. Consequently, the
task of classifying object shapes is often cast in the framework of supervised learning.
Most 3-D object recognition research in computer vision has heavily used the alignment-verification methodology [11] for recognizing and locating specific objects in the context
of industrial machine vision. The number of successful approaches is rather diverse and
spans many different axes . However, only a handful of studies have addressed the problem of categorizing shapes classes containing a significant amount of shape variation and
missing information frequently found in real range scenes. Recently, Osada et al. [9] developed a shape representation to match similar objects. The so-called shape distribution
encodes the shape information of a complete 3-D object as a probability distribution sampled from a shape function. Discrimination between classes is attempted by comparing
a deterministic similarity measure based on a Lp norm. Funkhouser et al. [1] extended
the work on shape distribution by developing a representation of shape for object retrieval.
¹ This research is based upon work supported by NSF Grant No. IIS-0097329 and NIH Grant No.
P20LM007714. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of NSF or NIH.
The representation is based on a spherical harmonics expansion of the points of a polygonal surface mesh rasterized into a voxel grid. Query objects are matched to the database
using a nearest neighbor classifier. In [7], Martin et al. developed a physical model for
studying neuropathological shape deformations using Principal Component Analysis and a
Gaussian quadratic classifier. Golland [2] introduced the discriminative direction for kernel
classifiers for quantifying morphological differences between classes of anatomical structures. The method utilizes the distance-transform representation to characterize shape, but
it is not directly applicable to range data due to the dependence of the representation on the
global structure of the objects. In [10], we developed a shape novelty detector for recognizing classes of 3-D object shapes in cluttered scenes. The detector learns the components
of a shapes class and their corresponding geometric configuration from a set of surface signatures embedded in a Hilbert space. The numeric signatures encode characteristic surface
features of the components, while the symbolic signatures describe their corresponding
spatial arrangement.
The encouraging results obtained with our novelty detector motivated us to take a step
further and extend our algorithm to accommodate classification by developing a 3-D shape
classifier to be described in the next section. The basic idea is to generalize existing surface
representations that have proved effective in recognizing specific 3-D objects to the problem of object classes by using a ?symbolic? representation that is resistant to deformation
as opposed to a numeric representation that is tied to a specific shape. We were also motivated by applications in medical diagnosis and human interface design where 3-D shape
information plays a significant role. Detecting congenital abnormalities from craniofacial
features [3], identifying cancerous cells using microscopic tomography, and discriminating
3-D facial gestures are some of the driving applications.
The paper is organized as follows. Section 2 describes our proposed method. Section 3 is
devoted to the experimental results. Section 4 discusses relevant aspects of our work and
concludes the paper.
2 Our Approach
We develop our shape classifier in this section. For the sake of clarity we concentrate
on the simplest architecture capable of performing binary classification. Nevertheless, the
approach admits a straightforward extension to a multi-class setting. The basic architecture
consists of a cascade of two classification modules. Both modules have the same structure
(a bank of novelty detectors and a multi-class classifier) but operate on different input
spaces. The first module processes numeric surface signatures and the second, symbolic
ones. These shape descriptors characterize our classes at two different levels of abstraction.
2.1 Surface signatures
The surface signatures developed by Johnson and Hebert [5] are used to encode surface
shape of free form objects. In contrast to the shape distributions and harmonic descriptors,
their spatial scale can be enlarged to take into account local and non-local effects, which
makes them robust against the clutter and occlusion generally present in range data. Experimental evidence has shown that the spin image and some of its variants are the preferred
choice for encoding surface shape whenever the normal vectors of the surfaces of the objects can be accurately estimated [11]. The symbolic signatures developed in [10] are used
at the next level to describe the spatial configuration of labeled surface regions.
Numeric surface signatures. A spin-image [5] is a two-dimensional histogram computed
at an oriented point P of the surface mesh of an object (see Figure 1). The histogram accumulates the coordinates α and β of a set of contributing points Q on the mesh. Contributing
points are those that are within a specified distance of P and for which the surface normal
forms an angle of less than the specified size with the surface normal N of P . This angle is
called the support angle. As shown in Figure 1, the coordinate α is the distance from P to
Figure 1: The spin image for point P is constructed by accumulating in a 2-D histogram the coordinates α and β of a set of contributing points (such as Q) on the mesh representing the object.
the projection of Q onto the tangent plane TP at point P; β is the distance from Q to this
plane. We use spin images as the numeric signatures in this work.
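As a concrete illustration of the (α, β) accumulation just described, the sketch below builds a small spin image from explicit point and normal lists. The bin layout, support angle, and distance cutoff are hypothetical parameters, not values from the paper, and mesh connectivity is ignored for brevity.

```python
import math

def spin_image(P, N, points, normals, bin_size=1.0, n_bins=4,
               support_angle=math.radians(60.0), max_dist=10.0):
    """Accumulate the (alpha, beta) coordinates of contributing points Q
    into a 2-D histogram, in the spirit of Johnson & Hebert's spin images."""
    # beta can be negative, so give it twice as many (shifted) bins as alpha
    img = [[0] * n_bins for _ in range(2 * n_bins)]
    for Q, NQ in zip(points, normals):
        d = [q - p for q, p in zip(Q, P)]
        dist = math.sqrt(sum(c * c for c in d))
        if dist == 0.0 or dist > max_dist:
            continue
        # support-angle test between the surface normals at P and Q
        cos_ang = sum(a * b for a, b in zip(N, NQ))
        if math.acos(max(-1.0, min(1.0, cos_ang))) > support_angle:
            continue
        beta = sum(c * n for c, n in zip(d, N))                 # signed height above T_P
        alpha = math.sqrt(max(dist * dist - beta * beta, 0.0))  # radial distance in T_P
        i = int(beta / bin_size) + n_bins                       # shift so beta = 0 is centered
        j = int(alpha / bin_size)
        if 0 <= i < 2 * n_bins and 0 <= j < n_bins:
            img[i][j] += 1
    return img
```

A real implementation would bilinearly interpolate each contribution across neighboring bins to reduce discretization noise; the hard binning above keeps the sketch short.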
Symbolic surface signatures. Symbolic surface signatures (Fig. 2) are somewhat related
to numeric surface signatures in that they also start with a point P on the surface mesh and
consider a set of contributing points Q, which are still defined in terms of the distance from
P and support angle. The main difference is that they are derived from a labeled surface
mesh (shown in Figure 2a); each vertex of the mesh has an associated symbolic label referencing a surface region or component in which it lies. The components are constructed
using a region growing algorithm to be described in Section 2.2. For symbolic surface
signature construction, the vector P Q in Figure 2b is projected to the tangent plane at P
where a set of orthogonal axes α and γ have been defined. The direction of the α–γ axes is
arbitrarily defined since no curvature information was used to specify preferred directions.
This ambiguity is resolved by the methods described in Section 2.2. The discretized versions
of the α and γ coordinates of PQ are used to index a 2-D array, and the indexed position
of the array is set to the component label of Q. Note that it is possible that multiple points
Q that have different labels project into the same bin. In this case, the label that appeared
most frequently is assigned to the bin. The resultant array is the symbolic surface signature
at point P. Note that the signature captures the relationships among the labeled regions on
the mesh. The signature is shown as a labeled color image in Figure 2c.
Figure 2: The symbolic surface signature for point P on a labeled surface mesh model of a human
head. The signature is represented as a labeled color image for illustration purposes.
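The construction just described can be sketched as follows. Here `alpha_axis` and `gamma_axis` stand in for the two (arbitrarily oriented) tangent-plane axes, the bin parameters are illustrative, and the distance and support-angle tests are omitted for brevity.

```python
from collections import Counter

def symbolic_signature(P, alpha_axis, gamma_axis, points, labels,
                       bin_size=1.0, n_bins=4):
    """Project each contributing point Q onto the tangent-plane axes,
    discretize, and keep the most frequent component label per bin."""
    votes = [[Counter() for _ in range(2 * n_bins)] for _ in range(2 * n_bins)]
    for Q, lab in zip(points, labels):
        d = [q - p for q, p in zip(Q, P)]
        a = sum(c * e for c, e in zip(d, alpha_axis))   # coordinate along alpha
        g = sum(c * e for c, e in zip(d, gamma_axis))   # coordinate along gamma
        i = int(a / bin_size) + n_bins                  # shift so P maps near the center
        j = int(g / bin_size) + n_bins
        if 0 <= i < 2 * n_bins and 0 <= j < 2 * n_bins:
            votes[i][j][lab] += 1
    # resolve each bin by majority vote; 0 marks an empty bin
    return [[v.most_common(1)[0][0] if v else 0 for v in row] for row in votes]
```

The majority vote per bin implements the tie-breaking rule from the text: when points with different labels fall in the same bin, the most frequent label wins.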
2.2 Classifying shape classes
We consider the classification task for which we are given a set of l surface meshes
C = {C1, . . . , Cl} representing two classes of object shapes. Each surface mesh is labeled by y ∈ {±1}. The problem is to use the given meshes and the labels to construct an
algorithm that predicts the label y of a new surface mesh C. We let C+1 (C−1) denote the
shape class labeled with y = +1 (y = −1, respectively). We start by assuming that the
can be achieved by using a morphable surface model technique such as the one described
in [10].
Finding shape class components
Before shape class learning can take place, the salient feature components associated with
C+1 and C−1 must be specified. Each component of a class is identified by a particular
region located on the surface of the class members. For each class C+1 and C−1 the components are constructed one at a time using a region growing algorithm. This algorithm
iteratively constructs a classification function (novelty detector), which captures regions
in the space of numeric signatures S that approximately correspond to the support of an
assumed probability distribution function FS associated with the class component under
consideration. In this context, a shape class component is defined as the set of all mesh
points of the surface meshes in a shape class whose numeric signatures lie inside of the
support region estimated by the classification function. The region growing algorithm proceeds as follows.
Figure 3: The component R was grown around the critical point p using the algorithm described in
the text. Six typical models of the training set are shown. The numeric signatures for the critical point
p of five of the models are also shown. Their image width is 70 pixels, and their region of influence
covers about three quarters of the surface mesh models.
Step I (Region Growing). The input of this phase is a set of surface meshes that are samples of an
object class Cy .
1. Select a set of critical points on a training object for class Cy . Let my be the number of critical
points per object. The number my and the locations of the critical points are chosen by hand at this
time. Note that the critical points chosen for class C+ can differ from the critical points chosen for
class C−.
2. Use known correspondences to find the corresponding critical points on all training instances in C
belonging to Cy .
3. For each critical point p of a class Cy , compute the numeric signatures at the corresponding points
of every training instance of Cy ; this set of signatures is the training set Tp,y for critical point p of
class Cy .
4. For each critical point p of class Cy , train a component detector (implemented as a ν-SVM
novelty detector [12]) to learn a component about p, using the training set Tp,y . The component
detector will actually grow a region about p using the shape information of the numeric signatures
in the training sample. The regions are grown for each critical point individually using the following
growing phase. Let p be one of the m critical points. The performance of the component detector
for point p can be quantified by calculating a bound on the expected probability of error E on the
target set as E = #SVp /|Cy |, where #SVp is the number of support vectors in the component
detector for p, and |Cy | the number of elements with label y in C. Using the classifier for point p,
perform an iterative component growing operation to expand the component about p. Initially, the
component consists only of point p. An iteration of the procedure consists of the following steps. 1)
Select a point that is an immediate neighbor of one of the points in the component and is not yet in
the component. 2) Retrain the classifier with the current component plus the new point. 3) Compute
the error E′ for this classifier. 4) If the new error E′ is lower than the previous error E, add the new
point to the component and set E = E′. 5) This continues until no more neighbors can be added
to the component. This region growing approach is related to the one used by Heisele et al. [4]
for categorizing objects in 2-D images. Figure 3 shows an example of a component grown by this
technique about critical point p on a training set of 200 human faces from the University of South
Florida database.
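The growing phase above can be sketched as a greedy loop. In the sketch, `train_detector` (returning the number of support vectors for a candidate component) and `neighbors` (mesh adjacency) are hypothetical stand-ins for the ν-SVM novelty detector and the mesh structure; only the accept-if-the-bound-improves logic follows the text.

```python
def grow_component(p, neighbors, train_detector, n_train):
    """Greedy region growing: add a neighboring point only if retraining
    the novelty detector lowers the bound E = #SV / |C_y| (Step I.4)."""
    component = {p}
    best_e = train_detector(component) / n_train
    improved = True
    while improved:
        improved = False
        # frontier: immediate neighbors of the component not yet included
        frontier = {q for c in component for q in neighbors(c)} - component
        for q in frontier:
            e_new = train_detector(component | {q}) / n_train
            if e_new < best_e:
                component.add(q)
                best_e = e_new
                improved = True
    return component, best_e
```

Retraining a detector for every candidate point is expensive; in practice one would batch candidates or cache detector state, but the loop above mirrors the stated stopping rule directly.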
At the end of step I, there are my component detectors, each of which can identify the
component of a particular critical point of the object shape class Cy . That is, when applied
to a surface mesh, each component detector determines which vertices it thinks belong
to its learned component (positive surface points), and which vertices do not.
Step II. The input of this step is the training set of numeric signatures and their corresponding labels
for each of the m = m+1 + m?1 components. The labels are determined by the step-I component
detectors previously applied to C+1 and C?1 . The output is a component classifier (multi-class ?SVM) that, when given a positive surface point of a surface mesh previously processed with the bank
of component detectors, will determine the particular component of the m components to which this
point belongs.
Learning spatial relationships
The ensemble of component detectors and the component classifier described above define
our classification module mentioned at the beginning of the section. A central feature
of this module is that it can be used for learning the spatial configuration of the labeled
components just by providing as input the set C of training surface meshes with each vertex
labeled with the label of its component or zero if it does not belong to a component. The
algorithm proceeds in the same fashion as described above except that the classifiers operate
on the symbolic surface signatures of the labeled mesh. The signatures are embedded in
a Hilbert space by means of a Mercer kernel that is constructed as follows. Let A and
B be two square matrices of dimension N storing arbitrary labels. Let A ⊙ B denote a
binary square matrix whose elements are defined as [A ⊙ B]ij = match([A]ij, [B]ij),
where match(a, b) = 1 if a = b, and 0 otherwise. The symmetric mapping
⟨A, B⟩ = (1/N²) Σij [A ⊙ B]ij, whose range is the interval [0, 1], can be interpreted as the cosine
of the angle θAB between two unit vectors on the unit sphere lying within a single quadrant.
The angle θAB is the geodesic distance between them. Our kernel function is defined as
k(A, B) = exp(−θAB² / σ²).
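A minimal sketch of this kernel on label matrices follows, with σ left as a free parameter:

```python
import math

def symbolic_kernel(A, B, sigma=1.0):
    """k(A, B) = exp(-theta^2 / sigma^2), where theta is the geodesic
    angle whose cosine is the fraction of entrywise label matches."""
    n = len(A)
    matches = sum(1 for i in range(n) for j in range(n) if A[i][j] == B[i][j])
    cos_theta = matches / (n * n)
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for safety
    return math.exp(-theta ** 2 / sigma ** 2)
```

Identical signatures give θ = 0 and kernel value 1; signatures with no matching entries give θ = π/2, the largest geodesic distance attainable within a single quadrant.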
Since symbolic surface signatures are defined up to a rotation, we use the virtual SV method
for training all the classifiers involved. The method consists of training a component detector on the signatures to calculate the support vectors. Once the support vectors are obtained,
new virtual support vectors are extracted from the labeled surface mesh in order to include
the desired invariance; that is, a number r of rotated versions of each support vector is generated by rotating the α–γ coordinate system used to construct each symbolic signature
(see Fig. 2). Finally, the novelty detector used by the algorithm is trained with the enlarged
data set consisting of the original training data and the set of virtual support vectors.
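One simple way to realize these rotated virtual support vectors is to rotate the signature array itself. The sketch below uses quarter-turn rotations as a stand-in for arbitrary rotations of the coordinate frame; finer angular steps would require resampling the signature.

```python
def virtual_rotations(sig, r=4):
    """Generate r successive 90-degree rotations of a square symbolic
    signature array; each rotated copy becomes a virtual support vector."""
    out = []
    cur = [list(row) for row in sig]
    for _ in range(r):
        cur = [list(row) for row in zip(*cur[::-1])]  # rotate 90 deg clockwise
        out.append(cur)
    return out
```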
The worst-case complexity of the classification module is O(nc²s), where n is the number of vertices of the input mesh, s is the size of the input signatures (either numeric or
symbolic), and c is the number of novelty detectors. In the classification experiments
described below, typical values for n, s, and c are 10,000, 2,500, and 8, respectively.
A classification example
An architecture capable of discriminating two shape classes consists of a cascade of two
classification modules. The first module identifies the components of each shape class,
while the second verifies the geometric consistency (spatial relationships) of the components. Figure 4 illustrates the classification procedure on two sample surface meshes from
a test set of 200 human heads. The first mesh (Figure 4 a) belongs to the class of healthy individuals, while the second (Figure 4 e) belongs to the class of individuals with a congenital
syndrome that produces a pathological craniofacial deformation. The input classification
module was trained with a set of 400 surface meshes and 4 critical points per class to recognize the eight components shown in Figure 4 b and f. The first four components are
associated with healthy heads and the rest with the malformed ones. Each of the test surface meshes was individually processed as follows. Given an input surface mesh to the
first classification module, the classifier ensemble (component detectors and components
classifier) is applied to the numeric surface signatures of its points (Figure 4 a and e). A
connected components algorithm is then applied to the result and components of size below
a threshold (10 mesh points) are discarded. After this process the resulting labeled mesh is
fed to the second classification module that was trained with 400 labeled meshes and two
critical points to recognize two new components. The first component was grown around
the point P in Figure 4 a. The second component was grown around point Q in Figure 4 e.
The symbolic signatures inside the region around P encode the geometric configuration of
three of the four components learned by the first module (healthy heads), while the symbolic signatures around Q encode the geometric configuration of three of the remaining
four components (malformed heads); see Figure 4 b and f. Consequently, the points of the
output mesh of the second module will be set to +1 if they belong to learned symbolic
signatures associated with the healthy heads (Figure 4 c), and −1 otherwise (Figure 4 g).
Finally, the filtering algorithms described above are applied to the output mesh. Figure 4 c
(g) shows the region found by our algorithm that corresponds to the shape class model of
normal (respectively abnormal) head.
Figure 4: Binary classification example. a) and e) Mesh models of normal and abnormal heads,
respectively. b) and f) Output of the first classification module. Components 1-4 are associated with
healthy individuals while components 5-8, with unhealthy ones. Labeled points outside the bounded
regions correspond to false positives. c) and g) Output of the second classification module. d) and
h) Normalized classifier margin of the components associated with the second classification module.
Red points represent high confidence values while blue points represent low values.
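The two-module cascade and the size filter can be sketched at a high level as below. Here `verify` stands in for the second classification module, component size is approximated by label counts rather than true mesh connectivity, and the majority-vote decision rule is our own illustrative simplification of the final verdict.

```python
from collections import Counter

def classify_mesh(vertex_labels, verify, min_size=10):
    """Cascade sketch: vertex_labels is the per-vertex output of the first
    classification module (0 = no component); labels with fewer than
    min_size supporting vertices are discarded before the second module
    (`verify`, returning per-vertex +1/-1 votes) casts a majority vote."""
    counts = Counter(l for l in vertex_labels if l != 0)
    kept = [l if counts[l] >= min_size else 0 for l in vertex_labels]
    votes = verify(kept)
    pos = sum(1 for v in votes if v == +1)
    return +1 if 2 * pos > len(votes) else -1
```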
3 Experiments
We used our classifier in a series of discrimination tasks with deformable 3-D human heads
and faces. All data sets were split into training and testing samples. For classification with
human heads the data consisted of 600 surface mesh models (400 training samples and
200 testing samples). The models had a resolution of 1 mm (≈30,000 points). For the
faces, the data sets consisted of 300 surface meshes (200 training samples and 100 testing
samples). The corresponding mesh resolution was set to about 0.8 mm (≈70,000 points).
All the surface models considered here were obtained from range data scanners and all the
deformable models were constructed using the methods described in [10].
We tested the stability in the formation of shape class components using the faces data
set. This set contains a significant amount of shape variability. It includes models of
real subjects of different gender, race, age (young and mature adults) and facial gesture
(smiling vs. neutral). Typical samples are shown in Figure 3. The first module of our classifier must generate stable components to allow the second module to discriminate their
corresponding geometric configurations. We trained the first classification module with a
set of 200 faces using critical points arbitrarily located on the cheek, chin, forehead and
philtrum of the surface models. The trained module was then applied to the testing faces to
identify the corresponding components. The component associated with the forehead was
correctly identified in 86% of the testing samples. This rate is reasonably high considering
the amount of shape variability in the data set (Fig. 3). The percentage of identified components associated with the cheek, chin and philtrum were 86%, 89% and 82%, respectively.
We performed classification of normal versus abnormal human heads, a task that often
occurs in medical settings. The abnormalities considered are related to two genetic syndromes that can produce severe craniofacial deformities.² Our goal was to evaluate the
performance of our classifier in discriminating examples from two well-defined classes where a
very fine distinction exists. In our setup, the classes share many common features. This
makes the classification difficult even for a trained physician. In Task I, the classifier attempted to discriminate between test samples that were 100% normal or 100% affected
by each of the two model syndromes (Tasks I A and B). Task II was similar, except that
the classifier was presented with examples with varying degrees of abnormality. The surface meshes of each of these examples were convex combinations of normal and abnormal
heads. The degree of collusion between the resulting classes made the discrimination process more difficult. Our rationale was to drive a realistic task to its limit in order to evaluate
the discrimination capabilities of the classifier. High discrimination power could be useful to quantitatively evaluate cases that are otherwise difficult to diagnose, even by human
standards. The results of the experiments are summarized in Table 1. Our shape classifier
was able to discriminate with high accuracy between normal and abnormal models. It was
also able to discriminate classes that share a significant amount of common shape features
(see II-B* in Table 1).
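Since the Task II examples are convex combinations of normal and abnormal surface meshes, they can be generated by interpolating vertex coordinates, assuming the two meshes share a one-to-one vertex correspondence (an assumption; the paper does not spell out the construction):

```python
import numpy as np

def blend_meshes(verts_normal, verts_abnormal, t):
    """Convex combination of two meshes with one-to-one vertex
    correspondence: t=0 gives the normal head, t=1 the abnormal one."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie in [0, 1] for a convex combination")
    return (1.0 - t) * verts_normal + t * verts_abnormal
```

A row such as II-B (65% normal - 35% abnormal) then corresponds to t = 0.35.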
We compared the performance of our approach with a signature-based method [11] that
uses alignment for matching objects and is robust to scene clutter and occlusion. As we
expected, a pilot study showed that the signature-based method performs poorly in tasks
I A and B with an average classification rate close to 43%. The methods cited in the
introduction were not considered for direct comparison, because they use global shape
representations that were designed for classifying complete 3-D models. Our approach
using symbolic signatures can operate on single-view data sets containing partial model
information, as shown by the experimental results performed on several shape classes [10].
Test samples                           Accuracy (%)
I-A  (100% normal - 0% abnormal)            98
I-B  (100% normal - 0% abnormal)           100
II-B (65% normal - 35% abnormal)            98
II-B (50% normal - 50% abnormal)            97
II-B* (25% normal - 75% abnormal)           92
II-B (15% normal - 85% abnormal)            48

Table 1: Classification accuracy rate (%) for discrimination between the above test samples and 100% abnormal test samples.
4 Discussion and Conclusion
We presented a supervised approach to classification of 3-D shapes represented by range
data that learns class components and their geometrical relationships from surface descriptors. We performed preliminary classification experiments on models of human heads (normal vs. abnormal) and studied the stability in the formation of class components using a
collection of real face models containing a large amount of shape variability. We obtained
promising results. The classification rates were high and the algorithm was able to grow
consistent class components despite the variance.
We want to stress which parts of our approach are essential as described and which are
modifiable. The numeric and symbolic shape descriptors considered here are important.
They are locally defined but they convey a certain amount of global information. For example, the spin image defined on the forehead (point P) in Figure 3 encodes information
about the shape of most of the face (including the chin). As the image width increases, the
spin image becomes more descriptive. Spin images and some variants [11] are reliable for
encoding surface shape in the present context. Other descriptors such as curvature-based or
harmonic signatures are not descriptive enough or lack robustness to scene clutter and occlusion. In the classification experiments described above, we did not perform any kind of
feature selection for choosing the critical points. Nevertheless, the shape descriptors captured enough global information to allow a classifier to discriminate between the distinctive features of normal and abnormal heads.

² Test samples were obtained from models with craniofacial features based upon either the Greig cephalopolysyndactyly (A) or the trisomy 9 mosaic (B) syndromes [6].
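For reference, a spin image [5] accumulates, around an oriented basis point (p, n), a 2-D histogram of the cylindrical coordinates of nearby surface points: α, the radial distance from the axis through p along n, and β, the signed height along that axis. A minimal sketch (bin count and support size are illustrative choices, not the values used here):

```python
import numpy as np

def spin_image(points, p, n, n_bins=8, support=1.0):
    """Accumulate a spin image at oriented basis point (p, n).
    alpha: radial distance from the axis through p along normal n;
    beta:  signed height along n.  Points outside the support are ignored."""
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
    img, _, _ = np.histogram2d(
        alpha, beta,
        bins=n_bins,
        range=[[0.0, support], [-support, support]])
    return img
```

Increasing the support plays the role of the image width discussed above: a wider spin image sees, and therefore encodes, more of the surrounding surface.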
The structure of the classification module (bank of novelty detectors and multi-class classifier) is important. The experimental results showed us that the output of the novelty
detectors is not always reliable and the multi-class classifier becomes critical for constructing stable and consistent class components. In the context of our medical application, the
performance of our novelty detectors can be improved by incorporating prior information
into the classification scheme. Maximum entropy classifiers or an extension of the Bayes
point machines to the one class setting are being investigated as possible alternatives. The
region-growing algorithm for finding class components is not critical. The essential point
consists of generating groups of neighboring surface points whose shape descriptors are
similar but distinctive enough from the signatures of other components.
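One plausible reading of this operation is a breadth-first grow from a seed vertex over the mesh adjacency graph, admitting neighbors whose descriptors are similar to the seed's; the similarity predicate and stopping rule below are stand-ins for details the text leaves open:

```python
from collections import deque

def grow_component(seed, neighbors, descriptor, similar, max_size=None):
    """Grow a class component from a seed vertex by BFS over the mesh
    neighbor graph, admitting vertices whose descriptors satisfy the
    given similarity predicate against the seed's descriptor."""
    comp = {seed}
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        for u in neighbors[v]:
            if u not in comp and similar(descriptor[seed], descriptor[u]):
                comp.add(u)
                queue.append(u)
                if max_size is not None and len(comp) >= max_size:
                    return comp
    return comp
```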
There are several issues to investigate. 1) Our method is able to model shape classes containing significant shape variance and can absorb about 20% of scale changes. A multiresolution approach could be used for applications that require full scale invariance. 2)
We used large range data sets for training our classifier. However, larger sets are required
in order to capture the shape variability of the abnormal craniofacial features due to race,
age and gender. We are currently collecting data from various medical sources to create
a database for implementing and testing a semi-automated diagnosis system. The data includes 3-D models constructed from range data and CT scans. The usability of the system
will be evaluated by a panel of expert geneticists.
References
[1] T. Funkhouser, P. Min, M. Kazhdan, J. Chen, A. Halderman, D. Dobkin, and D. Jacobs, "A Search Engine for 3D Models," ACM Transactions on Graphics, 22(1), pp. 83-105, January 2003.
[2] P. Golland, "Discriminative Direction for Kernel Classifiers," In: Advances in Neural Information Processing Systems, 13, Vancouver, Canada, 745-752, 2001.
[3] P. Hammond, T. J. Hunton, M. A. Patton, and J. E. Allanson, "Delineation and Visualization of Congenital Abnormality using 3-D Facial Images," In: Intelligent Data Analysis in Medicine and Pharmacology, MEDINFO, 2001, London.
[4] B. Heisele, T. Serre, M. Pontil, T. Vetter, and T. Poggio, "Categorization by Learning and Combining Object Parts," In: Advances in Neural Information Processing Systems, 14, Vancouver, Canada, Vol. 2, 1239-1245, 2002.
[5] A. E. Johnson and M. Hebert, "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes," IEEE Trans. Pattern Analysis and Machine Intelligence, 21(5), pp. 433-449, 1999.
[6] K. L. Jones, Smith's Recognizable Patterns of Human Malformation, 5th Ed., W.B. Saunders Company, 1999.
[7] J. Martin, A. Pentland, S. Sclaroff, and R. Kikinis, "Characterization of Neuropathological Shape Deformations," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 2, 1998.
[8] D. L. Medin and C. M. Aguilar, "Categorization," In: R. A. Wilson and F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge, MA, 1999.
[9] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, "Matching 3-D Models with Shape Distributions," Shape Modeling International, 2001, pp. 154-166.
[10] S. Ruiz-Correa, L. G. Shapiro, and M. Meilă, "A New Paradigm for Recognizing 3-D Object Shapes from Range Data," Proceedings of the IEEE Computer Society International Conference on Computer Vision 2003, Vol. 2, pp. 1126-1133.
[11] S. Ruiz-Correa, L. G. Shapiro, and M. Meilă, "A New Signature-based Method for Efficient 3-D Object Recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2001, Vol. 1, pp. 769-776.
[12] B. Schölkopf and A. J. Smola, Learning with Kernels, The MIT Press, Cambridge, MA, 2002.
Dopamine modulation in a basal ganglio-cortical
network implements saliency-based gating of
working memory
Aaron J. Gruber¹,², Peter Dayan³, Boris S. Gutkin³, and Sara A. Solla²,⁴
Biomedical Engineering¹, Physiology², and Physics and Astronomy⁴,
Northwestern University, Chicago, IL, USA.
Gatsby Computational Neuroscience Unit³,
University College London, London, UK.
{a-gruber1,solla}@northwestern.edu, {dayan,boris}@gatsby.ucl.ac.uk
Abstract
Dopamine exerts two classes of effect on the sustained neural activity
in prefrontal cortex that underlies working memory. Direct release in
the cortex increases the contrast of prefrontal neurons, enhancing the robustness of storage. Release of dopamine in the striatum is associated
with salient stimuli and makes medium spiny neurons bistable; this modulation of the output of spiny neurons affects prefrontal cortex so as to
indirectly gate access to working memory and additionally damp sensitivity to noise. Existing models have treated dopamine in one or other
structure, or have addressed basal ganglia gating of working memory exclusive of dopamine effects. In this paper we combine these mechanisms
and explore their joint effect. We model a memory-guided saccade task
to illustrate how dopamine?s actions lead to working memory that is selective for salient input and has increased robustness to distraction.
1
Introduction
Ample evidence indicates that the maintenance of information in working memory (WM)
is mediated by persistent neural activity in the prefrontal cortex (PFC) [9, 10]. Critical for
such memories is to control how salient external information is gated into storage, and to
limit the effects of noise in the neural substrate of the memory itself. Experimental [15, 18]
and theoretical [2, 13, 4, 17] studies implicate dopaminergic neuromodulation of PFC in
information gating and noise control. In addition, there is credible speculation [7] that input
to the PFC from the basal ganglia (BG) should also exert gating effects. Since the striatum
is also a major target of dopamine innervation, the nature of the interaction between these
various control structures and mechanisms in manipulating WM is important.
A wealth of mathematical and computational models bear on these questions. A recent
cellular-level model, which includes many known effects of dopamine (DA) on ionic conductances, indicates that modulation of pyramidal neurons causes the pattern of network
activity at a fixed point attractor to become more robust both to noise and to input-driven
switching of attractor states [6]. This result is consistent with reported effects of DA in
more abstract, spiking-based models [2] of WM, and provides a cellular substrate for network models that account for gating effects of DA in cognitive WM tasks [1]. Other network models [7] of cognitive tasks have concentrated on the input from the BG, arguing
that it has a disinhibitory effect (as in models of motor output) that controls bistability
in cortical neurons and thereby gates external input to WM. This approach emphasizes
the role of dopamine in providing a training signal to the BG, in contrast to the modulatory effects of DA discussed here, which are important for on-line neural processing.
Finally, dopaminergic neuromodulation in the striatum has itself been recently captured in
a biophysically-grounded model [11], which describes how medium spiny neurons (MSNs)
become bistable in elevated dopamine. As the output of a major subset of MSNs ultimately
reaches PFC after further processing through other nuclei, this bistability can have potentially strong effects on WM.
In this paper, we combine these various influences on working memory activity in the PFC.
We model a memory-guided saccade task [8] in which subjects must fixate on a centrally
located fixation spot while a visual target is flashed at a peripheral location. After a delay
period of up to a few seconds, subjects must saccade to the remembered target location.
Numerous experimental studies of the task show that memory is maintained through striatal
and sustained prefrontal neuronal activity; this persistent activity is consistent with attractor
dynamics. Robustness to noise is of particular importance in the WM storage of continuous
scalar quantities such as the angular location of a saccade target, since internal noise in the
attractor network can easily lead to drift in the activity encoding the memory. In successive
sections of this paper, we consider the effect of DA on resistance to attractor switching in
the isolated cortical network; the effect of MSN activity on gating and noise; and the effect
of dopamine induced bistability in MSNs on WM activity associated with salient stimuli.
We demonstrate that DA exerts complementary direct and indirect effects, which result in
superior performance in memory-guided tasks.
2 Model description

The components of the network model used to simulate the WM activity during a memory-guided saccade task are shown in Fig 1. The input module consists of a ring of 120 units that project both to the PFC and the BG modules. Input units are assigned firing rates r_j^T to represent the sensory cortical response to visual targets. Bumps of activity centered at different locations along the ring encode for the position of different targets around the circle, as characterized by an angle in the [0, 2π) interval.

Figure 1: The network model consists of three modules: cortical input, basal ganglia (BG), and prefrontal cortex (PFC). Insets show the response functions of spiny (BG) and pyramidal (PFC) neurons for both low (dotted curves) and high (solid curves) dopamine.

The BG module consists of 24 medium spiny neurons (MSNs). Connections from the input units consist of Gaussian receptive fields that assign to each MSN a preferred direction; these preferred directions are monotonically and uniformly distributed. The dynamics of individual MSNs follow from a biophysically-grounded single compartment model [11]
-C \dot{V}^S = \alpha (I_{IRK} + I_{LCa}) + I_{ORK} + I_L + I_T \qquad (1)

which incorporates three crucial ionic currents: an inward rectifying K⁺ current (I_IRK), an outward rectifying K⁺ current (I_ORK), and an L-type Ca²⁺ current (I_LCa). The characterization of these currents is based on available biophysical data on MSNs. The factor α represents an increase in the magnitude of the I_IRK and I_LCa currents due to the activation of D1 dopamine receptors. This DA-induced current enhancement renders the response function of MSNs bistable for α ≳ 1.2 (see Fig 1 for α = 1.4). The synaptic input I_T is an ohmic term with conductance given by the weighted summed activity of the corresponding input unit; input to the j-th MSN is thus given by I_{T_j} = Σ_i W_ji^ST r_i^T V_j^S, where W_ji^ST is the strength of the connection from the i-th input neuron to the j-th spiny neuron. The firing rate of MSNs is a logistic function of their membrane potential: r_j^S = L(V_j^S). The
MSNs provide excitatory inputs to the PFC; in the model, this monosynaptic projection
represents the direct pathway through the globus pallidus/substantia nigra and thalamus.
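A forward-Euler integration of Eq (1) illustrates the up/down bistability qualitatively; the sigmoidal current kinetics, conductances, and reversal potentials below are placeholder forms, not the fitted functions of the cited MSN model:

```python
import numpy as np

def simulate_msn(v0, alpha=1.4, g_syn=0.0, t_max=500.0, dt=0.1):
    """Forward-Euler integration of a single-compartment MSN, Eq (1):
    -C dV/dt = alpha*(I_IRK + I_LCa) + I_ORK + I_L + I_T.
    Current kinetics are illustrative sigmoids, not the model's fits."""
    C = 1.0
    g_irk, g_lca, g_ork, g_l = 1.2, 1.0, 2.0, 0.1
    e_k, e_ca, e_l, e_syn = -90.0, 120.0, -70.0, 0.0
    v = v0
    for _ in range(int(t_max / dt)):
        i_irk = g_irk / (1.0 + np.exp((v + 60.0) / 10.0)) * (v - e_k)   # closes on depolarization
        i_lca = g_lca / (1.0 + np.exp(-(v + 35.0) / 7.0)) * (v - e_ca)  # opens on depolarization
        i_ork = g_ork / (1.0 + np.exp(-(v + 40.0) / 10.0)) * (v - e_k)
        i_l = g_l * (v - e_l)
        i_t = g_syn * (v - e_syn)                                       # ohmic synaptic term
        v += -dt / C * (alpha * (i_irk + i_lca) + i_ork + i_l + i_t)
    return v
```

Starting below the unstable point, the membrane potential relaxes to a hyperpolarized down state; starting above it, the regenerative Ca²⁺ current holds the cell in a depolarized up state.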
The PFC module implements a line attractor capable of sustaining a bump of activity that
encodes for the value of an angular variable in [0, 2?). ?Bump? networks like this have
been used [3, 5] to model head direction and visual stimulus location characterized by a
single angular variable. The module consists of 120 excitatory units; each unit is assigned
a preferred direction, uniformly covering the [0, 2?) interval. Lateral connections between
excitatory units are a Gaussian function of the angular difference between the corresponding preferred directions. A single inhibitory unit provides uniform global inhibition; the
activity of the inhibitory unit is controlled by the total activity of the excitatory population.
This type of connectivity guarantees that a localized bump of activity, once established,
will persist beyond the disappearance of the external input that originated it (see Fig 2).
One of the purposes of this paper is to investigate whether this persistent activity bump is
robust to noise in the line attractor network.
The excitatory units follow the stochastic differential equation
\tau_E \dot{V}_j^E = -V_j^E + \sum_i W_{ji}^{ES} r_i^S + \sum_{i \neq j} W_{ji}^{EE} r_i^E - r^I + r_j^T + \sigma_e \eta \qquad (2)

The first sum in Eq 2 represents inputs from the BG; the connections W_ji^ES consist of
Gaussian receptive fields centered to align with the preferred direction of the corresponding
excitatory unit. The second sum represents inputs from other excitatory PFC units; note that
self-connections are excluded. The following two terms represent input from the inhibitory
PFC unit (r^I) and information about the visual target provided by the input module (r_j^T). Crucially, the last term provides a stochastic input that models fluctuations in the activities that contribute to the total input to the excitatory units. The random variable η is drawn from a Gaussian distribution with zero mean and unit variance. The noise amplitude σ_e scales like (dt)^{-1/2}, where dt is the integration time step. The firing rate of the PFC excitatory units is a logistic function r_j^E = L(V_j^E); as shown in Fig 1, the steepness of this response function is controlled by DA. The dynamics of the inhibitory unit follows from τ_I V̇^I = Σ_i r_i^E, where the sum represents the total activity of the excitatory population. The firing rate r^I of the inhibitory unit is a linear threshold function of V^I. Dopaminergic modulation
function of the excitatory cortical units. Gain control of this form has been adopted in
a previous, more abstract, network theory of WM [17], and is generally consistent with
biophysically-grounded models [6, 2].
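A minimal rate-model sketch of the PFC module (Eq 2 without BG input, deterministic by default; parameters are illustrative rather than the paper's) shows a bump forming under a transient Gaussian input and persisting after the input is removed:

```python
import numpy as np

def simulate_ring(theta_s, n=60, t_on=300, t_off=300, sigma_e=0.0, seed=0):
    """Ring of excitatory rate units with Gaussian lateral excitation and
    uniform inhibition; logistic rate function.  Returns the final rate
    profile and the population-vector decoded angle."""
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * np.arange(n) / n
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    d = np.minimum(d, n - d)                         # circular index distance
    w = np.exp(-d**2 / (2 * 4.0**2)) - 0.3           # local excitation minus global inhibition
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 1.5) / 0.5))  # logistic rate function
    ds = np.minimum(np.abs(theta - theta_s), 2 * np.pi - np.abs(theta - theta_s))
    stim = 3.0 * np.exp(-ds**2 / (2 * 0.3**2))       # Gaussian input bump at theta_s
    v = np.zeros(n)
    tau, dt = 10.0, 1.0
    for t in range(t_on + t_off):
        inp = stim if t < t_on else 0.0              # input removed after t_on steps
        noise = sigma_e * rng.standard_normal(n) / np.sqrt(dt)
        v += dt / tau * (-v + w @ f(v) + inp + noise)
    r = f(v)
    decoded = np.angle(np.sum(r * np.exp(1j * theta))) % (2 * np.pi)
    return r, decoded
```

With the noise amplitude set to zero the bump is anchored only by the recurrent connectivity, so the decoded angle after the delay matches the stimulus angle.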
To investigate the properties of the network model represented in Fig 1, the system of equations summarized above is integrated numerically using a 5th-order Runge-Kutta method with variable time step that ensures an error tolerance below 5 μV/ms.
3 Results
3.1 Dopamine effects on the cortex: increased memory robustness
Figure 2: (A) Activity profile of the bump state in low DA (open dots) and high DA (full dots). (B) Robustness characteristics of bump activity in low DA (dashed curve) and high DA (solid curve). For reference, the thin dotted line indicates the identity Δ_bθ = Δ_dθ. The activity profile shown as a function of time in the inset (grey scale, white as most active) illustrates the displacement of the bump from its initial location at θ_0 to a final location at θ_b due to a distractor input at θ_d. This case corresponds to the asterisk on the curves in B.
We first investigate the properties of the cortical network isolated from the input and basal
ganglia components. The connectivity among cortical units is set so there are two stable
states of activity for the PFC network: either all excitatory units have very low activity
level, or a subset of them participates in a localized bump of elevated activity (Fig 2A,
open dots). The bump can be translated to any position along the ring of cortical units, thus
providing a way to encode a continuous variable, such as the angular position of a stimulus
within a circle. The encoded angle corresponds to the location of the bump peak, and it
can be read out by computing the population vector. The effect of DA on the PFC module,
modeled here as an increase in the gain of the response function of the excitatory units,
results in a narrower bump with a higher peak (Fig 2A, full dots).
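The population-vector readout mentioned above amounts to taking the argument of the rate-weighted sum of unit phasors; a minimal sketch:

```python
import numpy as np

def population_vector_angle(rates, angles):
    """Decode the encoded angle as the argument of the rate-weighted
    sum of unit phasors (population vector readout)."""
    z = np.sum(rates * np.exp(1j * angles))
    return np.angle(z) % (2 * np.pi)
```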
We measure the robustness of the location of the bump state against perturbative distractor inputs by applying a brief distractor at an angular distance Δ_dθ from the current location of the bump and assessing the resulting angular displacement Δ_bθ in the location of the bump 40 ms after the offset of the distractor. The procedure is illustrated in the inset of Fig 2B, which shows that a distractor current injection centered at a location θ_d causes a drift in bump location from its initial position θ_0 to a final position θ_b, closer to the angular location of the distractor. If θ_d is close to θ_0, the distractor is capable of moving the bump completely to the injection location, and Δ_bθ is almost equal to Δ_dθ. As shown in Fig 2B, the plot of Δ_bθ versus Δ_dθ remains close to the identity line for small Δ_dθ. However, as Δ_dθ increases the distractor becomes less and less effective, until the displacement Δ_bθ of the bump decreases abruptly and becomes negligible.
The generic features of bump stability shown in Fig 2B apply to both low DA (dashed
curve) and high DA (solid curve) conditions. The difference between these two curves reveals that the dopamine induced increase in the gain of PFC units decreases the sensitivity
of the bump to distractors, resulting in a consistently smaller bump displacement. The actual location of these two curves can be altered by varying the intensity and/or the duration
of the distractor input, but their features and relative order remain invariant. This numerical experiment demonstrates that DA increases the robustness of the encoded memory,
consistent with other PFC models of DA effects on WM [2, 6].
3.2 Basal ganglia effects on the cortex: increased memory robustness and input gating
Next, we investigate the effects of BG input (both tonic and phasic) on the stability of PFC bump activity in the absence of DA modulation. Tonic input from a single MSN, whose preferred direction coincides with the angular location of the bump, anchors the bump at that location and increases memory robustness against both noise induced diffusion (Figs 3A and 3B) and distractors (Fig 3C).

Figure 3: Diffusion of the bump location due to noise in low DA (grey traces in A; dashed curve in B) is greatly reduced by input from a single BG unit with the same preferred angular location (dark traces in A; solid curve in B). The robustness to distractor-driven drift is also increased by BG input (C).

Such localized tonic input to the PFC effectively
breaks the symmetry of the line attractor, yielding a single fixed point for the cortical active
state: a bump centered at the location of maximal BG input. This transition from a continuous line attractor to a fixed point attractor reduces the maximal deviation of the bump by
a distractor.
Active MSNs provide control over the encoded memory not only by enhancing robustness,
as shown above for the case of tonic input to the PFC, but also by providing phasic input
that can assist a relevant visual stimulus in switching the location of the PFC activity bump.
We show in Fig 4 (top plots) the location of the activity bump θ_b as a function of time in response to two stimuli at different locations θ_s. The nature of the PFC response to
the second stimulus depends dramatically on whether it elicits activity in the MSNs. The
initial stimulus activates a tight group of MSNs which encode for its angular position. It
also causes activation of a group of PFC neurons whose population vector encodes for the
same angular position. When the input disappears, the MSNs become inactive and the
cortical layer relaxes to a characteristic bump state centered at the angular position of the
stimulus. A second stimulus (distractor) that fails to activate BG units (Fig 4A) has only a
minimal effect on the bump location. However, if the stimulus does activate the BG units
(Fig 4B), then it causes a switch in bump location. In this case, the PFC memory is updated
to encode for the location of the most recent stimulus. Thus a direct stimulus input to the
PFC that by itself is not sufficient to switch attractor states can trigger a switch, provided it
activates the BG, whose activity yields additional input to the PFC. Transient activation of
MSNs thus effectively gates access to working memory.
3.3 Dopamine effects on the basal ganglia: saliency-based gating
Ample evidence indicates that DA, the release of which is associated with the presentation
of conditioned stimuli [16], modulates the activity of MSNs. Our previous computational
model of MSNs [11] studied the apparently paradoxical effects of DA modulation, manifested in both suppression and enhancement of MSN activity in a complex reward-based
saccade task [12]. We showed that DA can induce bistability in the response functions of
MSNs, with important consequences. In high DA, the effective threshold for reaching the
active ?up? state is increased; the activity of units that do not exceed threshold is suppressed
into a quiescent ?down? state, while units that reach the up state exhibit a higher firing rate
which is extended in duration due to effects of hysteresis.
We now demonstrate that the dual enhancing/suppressing nature of DA modulation of MSN activity significantly affects the network's response to stimuli. We show in Fig 5 (top plot) the location of the activity bump θ_b as a function of time in response to four stimuli at two different locations: θ_A, θ_B, θ′_A, θ_B. Crucially, in this sequence, only θ′_A is a conditioned stimulus that triggers DA release.
[Figure 4: panels A and B]
Figure 4: Top plot shows the location θb of the encoded memory as determined from the
population vector of the excitatory cortical units (thin black curve) and the location θs of
stimuli as encoded by a Gaussian bump of activity in the input units (grey bars) as a function
of time. The middle and bottom panels show the activity of the BG and the PFC modules,
respectively. Dopamine level remains low.
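The population-vector readout used to define θb can be sketched as the angle of the rate-weighted circular mean; the unit count and bump width below are illustrative, not the model's actual parameters.

```python
import numpy as np

def population_vector_angle(rates, preferred_angles):
    """Decode the bump location as the angle of the rate-weighted
    population vector: theta_b = arg( sum_j r_j * exp(i * theta_j) )."""
    z = np.sum(rates * np.exp(1j * preferred_angles))
    return np.angle(z) % (2 * np.pi)

# Units with preferred angles tiling [0, 2*pi); a Gaussian bump of
# activity centred at pi should decode back to (approximately) pi.
thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
rates = np.exp(-0.5 * ((thetas - np.pi) / 0.3) ** 2)
print(population_vector_angle(rates, thetas))
```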
The first two stimuli activate appropriate MSNs, and are therefore gated into WM. The
presentation of θ′A activates the same set of MSNs as θA, but the DA-modulated MSNs
now become bistable: high activity is enhanced while intermediate activity is suppressed.
Only the central MSN remains active with an enhanced amplitude; the two lateral MSNs
that were transiently activated by θA in low DA are now suppressed. The activity of the
central MSN suffices to gate the location of the new stimulus into WM; the location of
the PFC activity bump switches accordingly. Interestingly, this switch from B to A occurs
more slowly than the preceding switch from A to B. This effect is also attributable to DA:
its release affects the response function of excitatory PFC units, making them less likely
to react to a subsequent stimulus and thus enhancing the stability of the bump at the θB
angular position. Once the bump has switched to the angular location θ′A to encode for
the conditioned stimulus, the subsequent presentation of θB does not activate MSNs since
they are hysteretically locked in the inactive down state. The pattern of activity in the
BG continues to encode for θA for as long as the DA level remains elevated, and the PFC
activity bump continues to encode for θ′A.
In sum, DA induced bistability of MSNs, associated with an expectation of reward, imparts
salience selectivity to the gating function of the BG. By locking the activation of MSNs
associated with salient input, the BG input prevents a switch in PFC bump activity and
preserves the conditioned stimulus in WM. The robustness of the WM activity is enhanced
by a combined effect of DA through both increasing the gain of PFC neurons and sustaining
MSN input during the delay period (see Fig 5, bottom plot).
4 Discussion
We have built a working memory model which links dopaminergic neuromodulation in
the prefrontal cortex, bistability-inducing dopaminergic neuromodulation of striatal spiny
Figure 5: Top plot shows the location θb of the encoded memory as determined from the
population vector of the excitatory cortical units (thin black curve) and the location θs
of stimuli as encoded by a Gaussian bump of activity in the input units (grey bars) as a
function of time. The second and third panels show the activity of the BG and the PFC
modules, respectively. Dopamine level increases in response to the conditioned stimulus.
The bottom plot displays increased robustness of WM for conditioned (solid curve) as
compared to unconditioned (dashed curve) stimuli.
neurons, and the effects of basal ganglia output on cortical persistence. The resulting interactions provide a sophisticated control mechanism over the read-in to working memory
and the elimination of noise. We demonstrated the quality of the system in a model of a
standard memory-guided saccade task.
There are two central issues for models of working memory: robustness to external noise,
such as explicit lures presented during the memory delay period, and robustness to internal
noise, coming from unwarranted corruption of the neural substrate of persistent activity.
Our model, along with various others, addresses these issues at a cortical level via two basic
mechanisms: DA modulation, which changes the excitability of neurons in a particular way
(units that are inactive are less excitable by input, while units that are active can become
more active), and targeted input from the BG. However, models differ as to the nature and
provenance of the BG input, and also its effects on the PFC. Ours is the first to consider the
combined, complementary, effects of DA in the PFC and the BG.
The requirements for a gating signal are that it be activated at the same time as the stimuli
that are to be stored, and that it is a (possibly exclusive) means by which a WM state is
established. Following the experimental evidence that perturbing DA leads to disruption
of WM [18], a set of theories suggested that a phasic DA signal (as associated, for instance, with reward predicting conditioned stimuli [16]) acts as the gate in the cortex [4].
In various models [17, 2, 6], and also in ours, phasic DA is able to act as a gate through
its contrast-enhancing effect on cortical activity. However, as discussed at length in Frank
et al [7] (whose model does not incorporate the effect at all), this is unlikely to be the sole
gating mechanism, since various stimuli that would not lead to the release of phasic DA
still require storage in WM. In our model, even in low DA, the BG gates information by
controlling the switching of the attractor state in response to inputs. Frank et al [7] point out
the various advantages of this type of gating, largely associated with the opportunities for
precise temporal and spatial gating specificity, based on information about the task context.
Our BG gating mechanism simply involves additional targeted excitatory input to the cortex from the (currently over-simplified) output of striatal spiny neurons, coupled with a
detailed account [11] of DA induced bistability in MSNs. This allows us to couple gating
to motivationally salient stimuli that induce the release of DA. Since DA controls plasticity
in cortico-striatal synapses [14], there is an available mechanism for learning the appropriate gating of salient stimuli, as well as motivationally neutral contextual stimuli that do not
trigger DA release but are important to store.
Robustness against noise that is internal to the WM is of particular importance for line or
surface attractor memories, since they have one or more global directions of null stability
and therefore exhibit propensity to diffuse. Rather than rely on bistability in cortical neurons [3], our model relies on input from the striatum to reduce drift. This mechanism is
available in both high and low DA conditions. This additional input turns the line attractor
into a point attractor at the given location, and thereby adds stability while it persists. The
DA induced bistability of MSNs, for which there is now experimental evidence, enhances
this stabilization effect.
We have focused on the mechanisms by which DA and the BG can influence WM. An
important direction for future work is to relate this material to our growing understanding
of the provenance of the DA signal in terms of reward prediction errors and motivationally
salient cues.
References
[1] Braver TS, Cohen JD (1999) Prog. Brain Res. 121:327-349.
[2] Brunel N, Wang XJ (2001) J. Comp. Neurosci. 11:63-85.
[3] Camperi M, Wang XJ (1998) J. Comp. Neurosci. 5:383-405.
[4] Cohen JD, Braver TS, Brown JW (2002) Curr. Opin. Neurobiol. 12:223-229.
[5] Compte A, Brunel N, Goldman-Rakic P, Wang XJ (2000) Cereb. Cortex 10:910-923.
[6] Durstewitz D, Seamans J, Sejnowski T (2000) J. Neurophys. 83:1733-1750.
[7] Frank M, Loughry B, O'Reilly RC (2001) Cog., Affective, & Behav. Neurosci. 1(2):137-160.
[8] Funahashi S, Bruce CJ, Goldman-Rakic PS (1989) J. Neurophys. 255:556-559.
[9] Fuster J (1995) Memory in the Cerebral Cortex MIT Press.
[10] Goldman-Rakic PS (1995) Neuron 14:477-85.
[11] Gruber AJ, Solla SA, Houk JC (2003). NIPS 15.
[12] Kawagoe R, Takikawa Y, Hikosaka O (1998) Nat. Neurosci. 1:411-416.
[13] O'Reilly RC, Noelle DC, Braver TS, Cohen JD (2002) Cerebral Cortex 12:246-257.
[14] Reynolds JN, Wickens JR (2000) Neurosci. 99:199-203.
[15] Sawaguchi T, Goldman-Rakic PS (1991) Science 251:947-950.
[16] Schultz W, Apicella P, Ljungberg T (1993) J. Neurosci. 13:900-913.
[17] Servan-Schreiber D, Printz H, Cohen J (1990) Science 249:892-895.
[18] Williams GV, Goldman-Rakic PS (1995) Nature 376:572-575.
Prediction on Spike Data
Using Kernel Algorithms
Jan Eichhorn, Andreas Tolias, Alexander Zien, Malte Kuss,
Carl Edward Rasmussen, Jason Weston, Nikos Logothetis and Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
[email protected]
Abstract
We report and compare the performance of different learning algorithms
based on data from cortical recordings. The task is to predict the orientation of visual stimuli from the activity of a population of simultaneously
recorded neurons. We compare several ways of improving the coding of
the input (i.e., the spike data) as well as of the output (i.e., the orientation), and report the results obtained using different kernel algorithms.
1 Introduction
Recently, there has been a great deal of interest in using the activity from a population
of neurons to predict or reconstruct the sensory input [1, 2], motor output [3, 4] or the
trajectory of movement of an animal in space [5]. This analysis is of importance since it
may lead to a better understanding of the coding schemes utilised by networks of neurons
in the brain. In addition, efficient algorithms to interpret the activity of brain circuits in
real time are essential for the development of successful brain computer interfaces such as
motor prosthetic devices.
The goal of reconstruction is to predict variables which can be of rather different nature
and are determined by the specific experimental setup in which the data is collected. They
might be for example arm movement trajectories or variables representing sensory stimuli,
such as orientation, contrast or direction of motion. From a data analysis perspective, these
problems are challenging for a number of reasons, to be discussed in the remainder of this
article.
We will exemplify our reasoning using data from an experiment described in Sect. 3. The
task is to reconstruct the angle of a visual stimulus, which can take eight discrete values,
from the activity of simultaneously recorded neurons.
Input coding. In order to effectively apply machine learning algorithms, it is essential to
adequately encode prior knowledge about the problem. A clever encoding of the input data
might reflect, for example, known invariances of the problem, or assumptions about the
similarity structure of the data motivated by scientific insights. An algorithmic approach
which currently enjoys great popularity in the machine learning community, called kernel
machines, makes these assumptions explicit by the choice of a kernel function. The kernel can be thought of as a mathematical formalisation of a similarity measure that ideally
captures much of this prior knowledge about the data domain. Note that unlike many traditional machine learning methods, kernel machines can readily handle data that is not in the
form of vectors of numbers, but also complex data types, such as strings, graphs, or spike
trains. Recently, a kernel for spike trains was proposed whose design is based on a number
of biologically motivated assumptions about the structure of spike data [6].
Output coding. Just like the inputs, the stimuli perceived or the actions carried out by
an animal are in general not given to us in vectorial form. Moreover, biologically meaningful similarity measures and loss functions may be very different from those used traditionally in pattern recognition. Hence, once again, there is a need for methods that are
sufficiently general such that they can cope with these issues. In the problem at hand,
the outputs are orientations of a stimulus and thus it would be desirable to use a method
which takes their circular structure into account. In this paper, we will utilise the recently
proposed kernel dependency estimation technique [7] that can cope with general sets of
outputs and and a large class of loss functions in a principled manner. Besides, we also
apply Gaussian process regression to the given task.
Inference and generalisation. The dimensionality of the spike data can be very high, in
particular if the data stem from multicellular recording and if the temporal resolution is
high. In addition, the problems are not necessarily stationary, the distributions can change
over time, and depend heavily on the individual animal. These aspects make it hard for
a learning machine to generalise from the training data to previously unseen test data. It
is thus important to use methods which are state of the art and assay them using carefully
designed numerical experiments. In our work, we have attempted to evaluate several such
methods, including certain developments for the present task that shall be described below.
2 Learning algorithms, kernels and output coding
In supervised machine learning, we basically attempt to discover dependencies between
variables based on a finite set of observations (called the training set) {(xi , yi )|i =
1, . . . , n}. The xi ∈ X are referred to as inputs and are taken from a domain X; likewise,
the yi ∈ Y are called outputs and the objective is to approximate the mapping X → Y
between the domains from the samples. If Y is a discrete set of class labels, e.g. {−1, 1},
the problem is referred to as classification; if Y = R^N, it is called regression.
Kernel machines, a term which refers to a group of learning algorithms, are based on the
notion of a feature space mapping Φ. The input points get mapped to a possibly high-dimensional dot product space (called the feature space) using Φ, and in that space the
learning problem is tackled using simple linear geometric methods (see [8] for details). All
geometric methods that are based on distances and angles can be performed in terms of the
dot product. The ?kernel trick? is to calculate the inner product of feature space mapped
points using a kernel function
k(xi, xj) = ⟨Φ(xi), Φ(xj)⟩.   (1)
while avoiding explicit mappings Φ. In order for k to be interpretable as a dot product in
some feature space it has to be a positive definite function.
2.1 Support Vector Classification and Gaussian Process Regression
A simple geometric classification method which is based on dot products and which is the
basis of support vector machines is linear classification via separating hyperplanes. One
can show that the so-called optimal separating hyperplane (the one that leads to the largest
margin of separation between the classes) can be written in feature space as hw, ?(x)i+b =
0, where the hyperplane normal vector can be expanded in terms of the training points as
Pm
w =
i=1 ?i ?(xi ). The points for which ?i 6= 0 are called support vectors. Taken
together, this leads to the decision function
f (x) = sign
m
X
i=1
m
X
?i h?(x), ?(xi )i + b = sign
?i k(x, xi ) + b .
(2)
i=1
The coefficients ?i , b ? R are found by solving a quadratic optimisation problem, for which
standard methods exist. The central idea of support vector machines is thus that we can
perform linear classification in a high-dimensional feature space using a kernel which can
be seen as a (nonlinear) similarity measure for the input data. A popular nonlinear kernel
function is the Gaussian kernel k(xi, xj) = exp(−‖xi − xj‖²/(2σ²)). This kernel has been
successfully used to predict stimulus parameters using spikes from simultaneously recorded
data [2].
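The decision function (2) with a Gaussian kernel can be sketched directly in NumPy. The coefficients and support vectors below are hand-set for illustration; in practice the αi and b come from solving the quadratic programme.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, alphas, b, sigma=1.0):
    """Decision function f(x) = sign( sum_i alpha_i k(x, x_i) + b )."""
    s = sum(a * gaussian_kernel(x, sv, sigma)
            for a, sv in zip(alphas, support_vectors))
    return np.sign(s + b)

# Hand-set coefficients for two support vectors of opposite class
# (illustrative only; a real SVM solver would produce alphas and b).
svs = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
alphas = [1.0, -1.0]
b = 0.0
print(svm_decision(np.array([0.1, 0.1]), svs, alphas, b))
print(svm_decision(np.array([1.9, 2.1]), svs, alphas, b))
```

Points near a support vector pick up a large kernel value for it, so the sign of the weighted sum assigns them to that support vector's class.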
In Gaussian process regression [9], the model specifies a random distribution over functions. This distribution is conditioned on the observations (the training set) and predictions
may be obtained in closed form as Gaussian distributions for any desired test inputs. The
characteristics (such as smoothness, amplitude, etc.) of the functions are given by the covariance function or covariance kernel; it controls how the outputs covary as a function of
the inputs. In the experiments below (assuming x ? RD ) we use a Gaussian kernel of the
form
D
1X
Cov(yi , yj ) = k(xi , xj ) = v 2 exp ?
kxdi ? xdj k2 /wd2
(3)
2
d=1
with parameters v and w = (w1 , . . . , wD ). This covariance function expresses that outputs
whose inputs are nearby have large covariance, and outputs that belong to inputs far apart
have smaller covariance. In fact, it is possible to show that the distribution of functions
generated by this covariance function are all smooth. The w parameters determine exactly
how important different input coordinates are (and can be seen as a generalisation of the
above kernel). The parameters are fit by optimising the likelihood.
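A minimal sketch of the GP posterior mean under covariance (3) follows; the small noise term added to the diagonal is a standard observation-noise/conditioning addition that the text does not spell out, and the data are toy values.

```python
import numpy as np

def ard_kernel(X1, X2, v=1.0, w=None):
    """Covariance (3): v^2 * exp(-0.5 * sum_d (x_i^d - x_j^d)^2 / w_d^2)."""
    w = np.ones(X1.shape[1]) if w is None else np.asarray(w)
    d2 = np.sum(((X1[:, None, :] - X2[None, :, :]) / w) ** 2, axis=-1)
    return v ** 2 * np.exp(-0.5 * d2)

def gp_posterior_mean(X, y, X_star, v=1.0, w=None, noise=1e-6):
    """Posterior mean at test inputs X_star: K_* (K + noise I)^{-1} y."""
    K = ard_kernel(X, X, v, w) + noise * np.eye(len(X))
    K_star = ard_kernel(X_star, X, v, w)
    return K_star @ np.linalg.solve(K, y)

# Toy 1-D data; with near-zero noise the posterior mean at a training
# input essentially reproduces the observed target there.
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
mu = gp_posterior_mean(X, y, np.array([[1.0]]))
print(mu)
```

The lengthscales w_d play the role described in the text: a large w_d makes dimension d nearly irrelevant to the covariance.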
2.2 Similarity measures for spike data
To take advantage of the strength of kernel machines in the analysis of cortical recordings
we will explore the usefulness of different kernel functions. We describe the spikernel
introduced in [6] and present a novel use of alignment-type scores typically used in bioinformatics.
Although we are far from understanding the neuronal code, there exist some reasonable
assumptions about the structure of spike data one has to take into account when comparing
spike patterns and designing kernels.
• Most fundamental is the assumption that frequency and temporal coding play central roles. Information related to a certain variable of the stimulus may be coded
in highly specific temporal patterns contained in the spike trains of a cortical population.
• These firing patterns may be misaligned in time. To compare spike trains it might
be necessary to realign them by introducing a certain time shift. We want the
similarity score to be the higher the smaller this time shift is.
Spikernel. In [6] Shpigelman et al. proposed a kernel for spike trains that was designed
with respect to the assumptions above and some extra assumptions related to the special
task to be solved. To understand their ideas it is most instructive to have a look at the
feature map ? rather than at the kernel itself.
Let s be a sequence of firing rates of length |s|. The feature map maps this sequence into a
high dimensional space where the coordinates u represent a possible spike train prototype
of fixed length n ≤ |s|. The value of the feature map of s, Φu(s), represents the similarity
of s to the prototype u. The u component of the feature vector Φ(s) is defined as:

Φu(s) = C^{n/2} Σ_{i ∈ I_{n,|s|}} μ^{d(s_i, u)} λ^{|s| − i_1}   (4)

Here i is an index vector that indexes a length n ordered subsequence of s and the sum
runs over all possible subsequences. μ, λ ∈ [0, 1] are parameters of the kernel. The μ-part
of the sum reflects the weighting according to the similarity of s to the coordinate u
(expressed in the distance measure d(s_i, u) = Σ_{k=1..n} d(s_{i,k}, u_k)), whereas the λ-part
emphasises the concentration towards a "time of interest" at the end of the sequence s (i_1
is the first index of the subsequence). Following the authors we chose the distance measure
d(s_{i,k}, u_k), determining how two firing rate vectors are compared, to be the squared
l2 norm: d(s_{i,k}, u_k) = ‖s_{i,k} − u_k‖². Note that each entry s_k of the sequence(-matrix) s is
meant to be a vector containing the firing rates of all simultaneously recorded neurons in
the same time interval (bin).
The kernel kn (s, t) induced by this feature map can be computed in time O(|s||t|n) using
dynamic programming. The kernel used in our experiments is a sum of kernels for different
pattern lengths n weighted with another parameter p, i.e., k(s, t) = Σ_{i=1..N} p^i k_i(s, t).
Alignment score. In addition to methods developed specifically for neural spike train data,
we also train on pairwise similarities derived from global alignments. Aligning sequences
is a standard method in bioinformatics; there, the sequences usually describe DNA, RNA or
protein molecules. Here, the sequences are time-binned representations of the spike trains,
as described above.
In a global alignment of two sequences s = s1 . . . s|s| and t = t1 . . . t|t| , each sequence
may be elongated by inserting copies of a special symbol (the dash, '−') at any position,
yielding two stuffed sequences s0 and t0 . The first requirement is that the stuffed sequences
must have the same length. This allows to write them on top of each other, so that each
symbol of s is either mapped to a symbol of t (match/mismatch), or mapped to a dash (gap),
and vice versa. The second requirement for a valid alignment is that no dash is mapped to
a dash, which restricts the length of any alignment to a maximum of |s| + |t|.
Once costs are assigned to the matches and gaps, the cost of an alignment is defined as the
sum of costs in the alignment. The distance of s and t can now be defined as the cost of an
optimal global alignment of s and t, where optimal means minimising the cost. Although
there are exponentially many possible global alignments, the optimal cost (and an optimal
alignment) can be computed in time O(|s||t|) using dynamic programming [10].
Let c(a, b) denote the cost of a match/mismatch (a = s_i, b = t_j) or of a gap (either a = '−'
or b = '−'). We parameterise the costs with λ and ν as follows:

c(a, b) = c(b, a) := |a − b|
c(a, −) = c(−, a) := λ|a − ν|
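The dynamic programme for the optimal (minimal-cost) global alignment can be sketched as below; the parameter names `gap_scale` and `gap_ref` are illustrative stand-ins for the two cost parameters above.

```python
def alignment_cost(s, t, gap_scale=1.0, gap_ref=0.0):
    """Optimal global alignment cost between two rate sequences s and t.
    Match/mismatch costs |a - b|; a gap costs gap_scale * |a - gap_ref|
    (parameter names are illustrative stand-ins for the paper's symbols).
    Runs in O(|s||t|) time via dynamic programming."""
    def gap(a):
        return gap_scale * abs(a - gap_ref)

    n, m = len(s), len(t)
    # D[i][j] = cost of optimally aligning s[:i] with t[:j]
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + gap(s[i - 1])
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + gap(t[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j - 1] + abs(s[i - 1] - t[j - 1]),  # match
                          D[i - 1][j] + gap(s[i - 1]),                 # gap in t
                          D[i][j - 1] + gap(t[j - 1]))                 # gap in s
    return D[n][m]

# Aligning [1, 2, 3] with [1, 3]: cheapest is to gap the middle '2'.
print(alignment_cost([1.0, 2.0, 3.0], [1.0, 3.0]))
```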
The matrix of pairwise distances as defined above will, in general, not be a proper kernel
(i.e., it will not be positive definite). Therefore, we use it to build a new representation of
the data (see below). A related but different distance measure has previously been proposed
by Victor and Purpura [11].
We use the alignment score to compute explicit feature vectors of the data points via an
empirical kernel map [8, p. 42]. Consider as prototypes the overall data set¹ {x_i}_{i=1,...,m}
of m trials x_i = [n_{1,i} n_{2,i} ... n_{20,i}] as defined in Sect. 3. Since our alignment score
k_align(n, n′) applies to single spike trains only², we compute the empirical kernel map
for each neuron separately and then concatenate these vectors. Hence, the feature map is
defined as:

Φ_{x_1,...,x_m}(x′) = Φ_{x_1,...,x_m}([n′_1 n′_2 ... n′_20])
= [{k_align(n_{1,i=1..m}, n′_1)} {k_align(n_{2,i=1..m}, n′_2)} ... {k_align(n_{20,i=1..m}, n′_20)}]
Thus, each trial is represented by a vector of its alignment score with respect to all other
trials where alignments are computed separately for all 20 neurons.
We can now train kernel machines using any standard kernel on top of this representation,
but we already achieve very good performance using the simple linear kernel (see results
section). Although we give results obtained with this technique of constructing a feature
map only for the alignment score, it can be easily applied with the spikernel and other
kernels.
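The empirical kernel map construction itself is generic and can be sketched with any pairwise similarity; the similarity used below is a stand-in (negative squared distance), whereas the paper's version concatenates per-neuron alignment scores.

```python
import numpy as np

def empirical_kernel_map(prototypes, x, sim):
    """Represent x by its similarities to all prototypes:
    Phi(x) = [sim(x, p_1), ..., sim(x, p_m)]."""
    return np.array([sim(x, p) for p in prototypes])

# Stand-in similarity for illustration; in the paper's setting this would
# be the alignment score, computed per neuron and concatenated.
sim = lambda a, b: -float(np.sum((np.asarray(a) - np.asarray(b)) ** 2))
protos = [[0.0, 0.0], [1.0, 1.0]]
print(empirical_kernel_map(protos, [1.0, 0.0], sim))
```

Any standard kernel (even the linear one, as in the results section) can then be applied on top of these explicit feature vectors, which is what makes the construction useful for similarity scores that are not positive definite.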
2.3 Coding structure in output space
Our objective is to use various machine learning algorithms to predict the orientation of a
stimulus used in the experiment described below. Since we use discrete orientations we can
model this as a multi-class classification problem or transform it into a regression task.
Combining Support Vector Machines. Above, we explained how to do binary classification using SVMs by estimating a normal vector w and offset b of a hyperplane
⟨w, Φ(x)⟩ + b = 0 in the feature space. A given point x will then be assigned to class
1 if ⟨w, Φ(x)⟩ + b > 0 (and to class -1 otherwise). If we have M > 2 classes, we can
train M classifiers, each one separating one specific class from the union of all other ones
(hence the name "one-versus-rest"). When classifying a new point x, we simply assign it
to the class whose classifier leads to the largest value of ⟨w, Φ(x)⟩ + b.
A more sophisticated and more expensive method is to train one classifier for each possible
combination of two classes and then use a voting scheme to classify a point. It is referred
to as "one-versus-one".
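The one-versus-one voting scheme can be sketched as follows; the pairwise decision functions below are hand-made stubs for a toy three-class problem, not trained SVMs.

```python
import numpy as np

def one_vs_one_predict(x, pairwise_classifiers, n_classes):
    """Each entry of pairwise_classifiers maps a class pair (i, j) to a
    decision function returning +1 (a vote for i) or -1 (a vote for j).
    The predicted class is the one collecting the most votes."""
    votes = np.zeros(n_classes)
    for (i, j), f in pairwise_classifiers.items():
        if f(x) > 0:
            votes[i] += 1
        else:
            votes[j] += 1
    return int(np.argmax(votes))

# Toy three-class problem on the real line with hand-made decision stumps
# (illustrative stand-ins for trained pairwise SVMs).
clfs = {
    (0, 1): lambda x: 1.0 if x < 0.5 else -1.0,
    (0, 2): lambda x: 1.0 if x < 1.5 else -1.0,
    (1, 2): lambda x: 1.0 if x < 1.5 else -1.0,
}
print(one_vs_one_predict(1.0, clfs, 3))
```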
Kernel Dependency Estimation. Note that the above approach treats all classes the same.
In our situation, however, certain classes are "closer" to each other since the corresponding
stimulus angles are closer than others. To take this into account, we use the kernel dependency estimation (KDE) algorithm [7] with an output similarity measure corresponding to
a loss function of the angles taking the form L(θ, θ̂) = cos(2θ − 2θ̂).³ The modification
respects the symmetry that 0° and 180°, say, are equivalent.
Lack of space does not permit us to explain the KDE algorithm in detail. In a nutshell, it
estimates a linear mapping between two feature spaces. One feature space corresponds to
the kernel used on the inputs (in our case, the spike trains), and the other one to a second
kernel which encodes the similarity measure to be used on the outputs (the orientation of
the lines).
Gaussian Process Regression. When we use Gaussian processes to predict the stimulus
angle θ we consider the task as a regression problem on sin 2θ and cos 2θ separately. To
¹ Note that this means that we are considering a transductive setting [12], where we have access
to all input data (but not the test outputs) during training.
² It is straightforward to extend this idea to synchronous alignments of the whole population vector,
but we achieved worse results.
3
Note that L(?, ?) needs to be an admissible kernel, i.e. positive definite, and therefore we cannot
use the linear loss function (5).
do prediction we take the means of the predicted distributions of sin 2? and cos 2? as point
estimates respectively, which are then projected onto the unit circle. Finally we assign the
averaged predicted angle to the nearest orientation which could have been shown.
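The projection-and-snap step can be sketched as follows (a hypothetical helper; we assume an idealized 22.5°-spaced grid, whereas the experiment's orientations are reported rounded to whole degrees):

```python
import numpy as np

ORIENTATIONS = np.arange(0.0, 180.0, 22.5)   # the 8 stimulus orientations (degrees)

def decode_angle(pred_sin2, pred_cos2):
    # Point estimates of sin(2*theta) and cos(2*theta) are projected onto the
    # unit circle by atan2, recovering theta modulo 180 degrees.
    theta = 0.5 * np.degrees(np.arctan2(pred_sin2, pred_cos2)) % 180.0
    # Snap to the nearest orientation that could have been shown, respecting
    # the wrap-around at 180 degrees.
    d = np.abs(ORIENTATIONS[None, :] - np.atleast_1d(theta)[:, None])
    d = np.minimum(d, 180.0 - d)
    return ORIENTATIONS[np.argmin(d, axis=1)]
```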
3 Experiments
We will now apply the ideas from the reasoning above and see how well these different
concepts perform in practice on a dataset of cortical recordings.
Data collection. The dataset we used was collected in an experiment performed in our
neurophysiology department. All experiments were conducted in full compliance with
the guidelines of the European Community (EUVD/86/609/EEC) for the care and use of
laboratory animals and were approved by the local authorities (Regierungspräsidium). The
spike data were recorded using tetrodes inserted in area V1 of a behaving macaque (Macaca
mulatta). The spike waveforms were sampled at 32 kHz. The animal's task was to fixate a
small square spot on the monitor while gratings of eight different orientations (0°, 22°, 45°,
67°, 90°, 112°, 135°, 158°) and two contrasts (2% and 30%) were presented on a monitor.
The stimuli were positioned on the monitor so as to cover the classical receptive fields of
the neurons. A single stimulus of fixed orientation and contrast was presented for a period
of 500 ms, i.e., during the epoch of a single behavioural trial. All 8 stimuli appeared 30
times each and in random order, resulting in 240 observed trials.
Spiking activity from neural recordings usually come as a time series of action potentials
from one or more neurons recorded from the brain. It is commonly believed that in most
circumstances most of the information in the spiking activity is mainly present in the times
of occurrence of spikes and not in the exact shape of the individual spikes. Therefore we
can abstract the spike series as a series of zeros and ones.
From a single trial we have recordings of 500ms from 20 neurons. We compute the firing
rates from the high resolution data for each neuron in 1, 5 or 10 bins of length 500, 100 or
50ms respectively, resulting in three different data representations for different temporal
resolutions. By concatenation of the vectors nr (r = 1, . . . , 20) containing the bins of
each neuron we obtain one data point x = [n1 n2 ... n20 ] per trial.
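The construction of one data point per trial can be sketched as follows (a hypothetical helper; spike times are assumed to be given as a binary array):

```python
import numpy as np

def firing_rate_vector(spikes, n_bins):
    """spikes: (n_neurons, n_samples) binary array covering one 500 ms trial.
    Returns the concatenated per-neuron bin counts x = [n1 n2 ... n20]."""
    n_neurons, n_samples = spikes.shape
    edges = np.linspace(0, n_samples, n_bins + 1).astype(int)
    rates = [spikes[:, a:b].sum(axis=1) for a, b in zip(edges[:-1], edges[1:])]
    # stack as (n_neurons, n_bins), then concatenate neuron by neuron
    return np.stack(rates, axis=1).reshape(-1)
```

With 20 neurons and 10 bins this yields a 200-dimensional vector per trial, matching the highest temporal resolution used above.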
Comparing the algorithms. Below we validate our reasoning on input and output coding
with several experiments. We will compare the kernel algorithms KDE, SVM and Gaussian Processes (GP) and a simple k-nearest neighbour approach (k-NN) that we applied
with different kernels and different data representations. As reference values, we give the
performance of a standard Bayesian reconstruction method (assuming independent neurons
with Poisson characteristics), a Template Matching method and the standard Population
Vector method as they are described e.g. in [5] and [3].
In all our experiments we compute the test error over a five fold cross-validation using
always the same data split, balanced with respect to the classes.4 We use four out of the
five folds of the data to choose the parameters of the kernel and the method. This choice
itself is done via another level of five fold cross-validation (this time unbalanced). Finally
we train the best model on these four folds and compute an independent test error on the
remaining fold.
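The two-level scheme can be sketched as follows (a minimal numpy sketch using ridge regression as a stand-in model; the class balancing of the outer split described above is omitted for brevity):

```python
import numpy as np

def ridge_fit(X, y, lam):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_err(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w - y) ** 2)

def nested_cv(X, y, lams, k_outer=5, k_inner=5, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    outer = np.array_split(idx, k_outer)
    errs = []
    for i in range(k_outer):
        test = outer[i]
        train = np.concatenate([outer[j] for j in range(k_outer) if j != i])
        # inner cross-validation on the remaining folds picks the parameter
        inner = np.array_split(train, k_inner)
        inner_err = []
        for lam in lams:
            e = []
            for m in range(k_inner):
                val = inner[m]
                tr = np.concatenate([inner[j] for j in range(k_inner) if j != m])
                e.append(ridge_err(ridge_fit(X[tr], y[tr], lam), X[val], y[val]))
            inner_err.append(np.mean(e))
        best = lams[int(np.argmin(inner_err))]
        # train the best model on all training folds, test on the held-out fold
        errs.append(ridge_err(ridge_fit(X[train], y[train], best), X[test], y[test]))
    return np.mean(errs)
```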
Since simple zero-one-loss is not very informative about the error in multi-class problems,
we report the linear loss of the predicted angles, while taking into account the circular
structure of the problem. Hence the loss function takes the form
L(α, β) = min{|α − β|, −|α − β| + 180°}.    (5)

4. I.e., in every fold we have the same number of points per class.
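A direct transcription of this wrap-around loss, for angles given in degrees:

```python
def angular_loss(alpha, beta):
    # Linear loss on orientations that wraps around at 180 degrees:
    # L(alpha, beta) = min(|alpha - beta|, 180 - |alpha - beta|).
    d = abs(alpha - beta) % 180.0
    return min(d, 180.0 - d)
```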
The parameters of the KDE algorithm (ridge parameter) and the SVM (C) are taken from
a logarithmic grid (ridge = 10^-5, 10^-4, ..., 10^1; C = 10^-1, 1, ..., 10^5). After we knew
its order of magnitude, we chose the σ-parameter of the Gaussian kernel from a linear grid
(σ = 1, 2, ..., 10). The spikernel has four parameters: λ, μ, N and p. The stimulus in our
experiment was perceived over the whole period of recording. Therefore we do not want
any increasing weight of the similarity score towards the beginning or the end of the spike
sequence and we fix λ = 1. Further we chose N = 10 to be the length of our sequence,
and thereby consider patterns of all possible lengths. The parameters μ and p are chosen
from the following (partly linear) grids: μ = 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, ..., 0.8, 0.9, 0.99
and p = 0.05, 0.1, 0.3, 0.5, ..., 2.5, 2.7
Table 1 Mean test error and standard error on the low contrast dataset

                            Gaussian Kernel   Spikernel         Alignment score
KDE              10 bins    16.8° ± 1.6°      11.5° ± 1.3°
                 1 bin      12.8° ± 1.7°      (13.6° ± 1.8°)†
SVM (1-vs-rest)  10 bins    16.8° ± 2.0°      13.1° ± 1.4°
                 1 bin      13.3° ± 1.6°      13.8° ± 1.3°
SVM (1-vs-1)     10 bins    16.4° ± 1.6°      11.2° ± 1.3°
                 1 bin      12.2° ± 1.7°      12.3° ± 1.5°
k-NN             10 bins    18.7° ± 1.5°      12.1° ± 1.4°      12.8° ± 0.9°
                 1 bin      14.0° ± 1.7°      13.0° ± 2.0°
GP               2 bins⋆    16.2° ± 1.1°      n/a‡              n/a‡
                 1 bin      15.6° ± 1.7°

Bayesian rec.: 14.4° ± 2.1°, Template Matching: 17.7° ± 0.6°, Pop. Vect.: 28.8° ± 1.0°
Table 2 Mean test error and standard error on the high contrast dataset

                            Gaussian Kernel   Spikernel         Alignment score
KDE              10 bins    1.9° ± 0.5°       1.7° ± 0.4°
                 1 bin      1.4° ± 0.5°       (1.6° ± 0.4°)†
SVM (1-vs-rest)  10 bins    1.5° ± 0.5°       1.4° ± 0.6°
                 1 bin      1.4° ± 0.4°       2.1° ± 0.4°
SVM (1-vs-1)     10 bins    1.2° ± 0.4°       1.0° ± 0.5°
                 1 bin      1.1° ± 0.4°       1.4° ± 0.5°
k-NN             10 bins    4.7° ± 1.2°       0.8° ± 0.3°       1.0° ± 0.3°
                 1 bin      1.7° ± 0.6°       1.0° ± 0.4°
GP               2 bins⋆    1.4° ± 0.4°       n/a‡              n/a‡
                 1 bin      2.0° ± 0.5°

Bayesian rec.: 3.8° ± 0.6°, Template Matching: 7.2° ± 1.0°, Pop. Vect.: 11.6° ± 0.7°

† We report this number only for comparison, since the spikernel relies on temporal patterns and it makes no sense to use only one bin.
⋆ A 10 bin resolution would require to determine 200 parameters w_d of the covariance function (3) from only 192 samples.
‡ We did not compute these results. Both kernels are not analytical functions of their parameters and we would lose much of the convenience of Gaussian Processes. Using cross-validation instead resembles very much Kernel Ridge Regression on sin 2θ and cos 2θ, which is almost exactly what KDE is doing when applied with the loss function (5).
The results for the low contrast dataset are given in Table 1, and Table 2 presents results for
high contrast (five best results in boldface). The relatively large standard error (σ/√n) is
due to the fact that we used only five folds to compute the test error.
4 Discussion
In our experiments, we have shown that using modern machine learning techniques, it
is possible to use tetrode recordings in area V1 to reconstruct the orientation of a stimulus presented to a macaque monkey rather accurately: depending on the contrast of the
stimulus, we obtained error rates in the range of 1°–20°. We can observe that standard
techniques for decoding, namely Population vector, Template Matching and a particular
Bayesian reconstruction method, can be outperformed by state-of-the-art kernel methods
when applied with an appropriate kernel and suitable data representation. We found that
the accuracy of kernel methods can in most cases be improved by utilising task specific
similarity measures for spike trains, such as the spikernel or the introduced alignment distances from bioinformatics. Due to the (by machine learning standards) relatively small
size of the analysed datasets, it is hard to draw conclusions regarding which of the applied
kernel methods performs best.
Rather than focusing too much on the differences in performance, we want to emphasise
the capability of kernel machines to assay different decoding hypotheses by choosing appropriate kernel functions. Analysing their respective performance may provide insight
about how spike trains carry information and thus about the nature of neural coding.
Acknowledgements. For useful help, we thank Gökhan Bakır, Olivier Bousquet and
Gunnar Rätsch. J.E. was supported by a grant from the Studienstiftung des deutschen
Volkes.
References
[1] P. Földiák. The "ideal homunculus": statistical inference from neural population responses. In
F. Eeckman and J. Bower, editors, Computation and Neural Systems 1992, Norwell, MA, 1993.
Kluwer.
[2] A. S. Tolias, A. G. Siapas, S. M. Smirnakis and N. K. Logothetis. Coding visual information at
the level of populations of neurons. Soc. Neurosci. Abst. 28, 2002.
[3] A. P. Georgopoulos, A. B. Schwartz and R. E. Kettner. Neuronal population coding of movement
direction. Science, 233(4771):1416?1419, 1986.
[4] T. D. Sanger. Probability density estimation for the interpretation of neural population codes. J
Neurophysiol., 76(4):2790?2793, 1996.
[5] K. Zhang, I. Ginzburg, B. L. McNaughton and T. J. Sejnowski. Interpreting neuronal population
activity by reconstruction: unified framework with application to hippocampal place cells. J
Neurophysiol., 79(2):1017?1044, 1998.
[6] L. Shpigelman, Y. Singer, R. Paz and E. Vaadia. Spikernels: embedding spike neurons in inner-product spaces. In S. Becker, S. Thrun and K. Obermayer, editors, Advances in Neural Information
Processing Systems 15, 2003.
[7] J. Weston, O. Chapelle, A. Elisseeff, B. Schölkopf and V. Vapnik. Kernel dependency estimation.
In S. Becker, S. Thrun and K. Obermayer, editors, Advances in Neural Information Processing
Systems 15, 2003.
[8] B. Schölkopf and A. J. Smola. Learning with Kernels. The MIT Press, Cambridge, Massachusetts, 2002.
[9] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In D. S. Touretzky,
M. C. Mozer and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems
8, 1996.
[10] S. B. Needleman and C. D. Wunsch. A General Method Applicable to the Search for Similarities
in the Amino Acid Sequence of Two Proteins. Journal of Molecular Biology, 48:443?453, 1970.
[11] J. D. Victor and K. P. Purpura. Nature and precision of temporal coding in visual cortex: a
metric-space analysis. J Neurophysiol, 76(2):1310?1326, 1996.
[12] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
Probabilistic Inference of Speech Signals from
Phaseless Spectrograms
Kannan Achan, Sam T. Roweis, Brendan J. Frey
Machine Learning Group
University of Toronto
Abstract
Many techniques for complex speech processing such as denoising and
deconvolution, time/frequency warping, multiple speaker separation, and
multiple microphone analysis operate on sequences of short-time power
spectra (spectrograms), a representation which is often well-suited to
these tasks. However, a significant problem with algorithms that manipulate spectrograms is that the output spectrogram does not include a phase
component, which is needed to create a time-domain signal that has good
perceptual quality. Here we describe a generative model of time-domain
speech signals and their spectrograms, and show how an efficient optimizer can be used to find the maximum a posteriori speech signal, given
the spectrogram. In contrast to techniques that alternate between estimating the phase and a spectrally-consistent signal, our technique directly infers the speech signal, thus jointly optimizing the phase and a
spectrally-consistent signal. We compare our technique with a standard
method using signal-to-noise ratios, but we also provide audio files on
the web for the purpose of demonstrating the improvement in perceptual
quality that our technique offers.
1 Introduction
Working with a time-frequency representation of speech can have many advantages over
processing the raw amplitude samples of the signal directly. Much of the structure in
speech and other audio signals manifests itself through simultaneous common onset, offset or co-modulation of energy in multiple frequency bands, as harmonics or as coloured
noise bursts. Furthermore, there are many important high-level operations which are much
easier to perform in a short-time multiband spectral representation than on the time domain
signal. For example, time-scale modification algorithms attempt to lengthen or shorten
a signal without affecting its frequency content. The main idea is to upsample or downsample the spectrogram of the signal along the time axis while leaving the frequency axis
unwarped. Source separation or denoising algorithms often work by identifying certain
time-frequency regions as having high signal-to-noise or as belonging to the source of interest and "masking-out" others. This masking operation is very natural in the time-frequency
domain. Of course, there are many clever and efficient speech processing algorithms for
pitch tracking[6], denoising[7], and even timescale modification[4] that do operate directly
on the signal samples, but the spectral domain certainly has its advantages.
Figure 1: In the generative model, the spectrogram is obtained by taking overlapping windows of length n from the time-domain speech signal, and computing the energy spectrum.
In order to reap the benefits of working with a spectrogram of the audio, it is often important
to "invert" the spectral representation back into a time domain signal which is consistent
with a new time-frequency representation we obtain after processing. For example, we may
mask out certain cells in the spectrogram after determining that they represent energy from
noise signals, or we may drop columns of the spectrogram to modify the timescale. How
do we recover the denoised or sped up speech signal? In this paper we study this inversion
and present an efficient algorithm for recovering signals from their overlapping short-time
spectral magnitudes using maximum a posteriori inference in a simple probability model.
This is essentially a problem of phase recovery, although with the important constraint
that overlapping analysis windows must agree with each other about their estimates of the
underlying waveform. The standard approach, exemplified by the classic paper of Griffin
and Lim [1], is to alternate between estimating the time domain signal given a current
estimate of the phase and the observed spectrogram, and estimating the phase given the
hypothesized signal and the observed spectrogram. Unfortunately, at any iteration, this
technique maintains inconsistent estimates of the signal and the phase.
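For reference, the Griffin–Lim alternation can be sketched as below. This is our own minimal numpy sketch, not the authors' code; the Hamming window, half-window hop, and the small guard constants in the divisions are assumptions made for the example:

```python
import numpy as np

def stft(x, n=256):
    hop = n // 2
    win = np.hamming(n)
    frames = [win * x[i:i + n] for i in range(0, len(x) - n + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def istft(S, length, n=256):
    # Least-squares inversion: windowed overlap-add, normalized by sum of win^2.
    hop = n // 2
    win = np.hamming(n)
    x, norm = np.zeros(length), np.zeros(length)
    for k, spec in enumerate(S):
        i = k * hop
        x[i:i + n] += win * np.fft.irfft(spec, n)
        norm[i:i + n] += win ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(M, length, iters=50, n=256):
    # M: observed magnitude spectrogram; start from a random phase estimate.
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(M.shape))
    for _ in range(iters):
        x = istft(M * phase, length, n)            # signal given current phase
        S = stft(x, n)                             # re-analyze the signal
        phase = S / np.maximum(np.abs(S), 1e-8)    # keep phase, impose magnitudes
    return istft(M * phase, length, n)
```

Each pass re-imposes the observed magnitudes while keeping the phase of the current signal estimate, which is exactly the alternation criticized above: the signal and the phase estimates need never be consistent with each other at any given iteration.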
Our algorithm maximizes the a posteriori probability of the estimated speech signal by
adjusting the estimated signal samples directly, thus avoiding inconsistent phase estimates.
At each step of iterative optimization, the method is guaranteed to reduce the discrepancy
between the observed spectrogram and the spectrogram of the estimated waveform. Further, by jointly optimizing all samples simultaneously, the method can make global changes
in the waveform, so as to better match all short-time spectral magnitudes.
2 A Generative Model of Speech Signals and Spectrograms
An advantage of viewing phase recovery as a problem of probabilistic inference of the
speech signal is that a prior distribution over time-domain speech signals can be used to
improve performance. For example, if the identity of the speaker that produced the spectrogram is known, a speaker-specific speech model can be used to obtain a higher-quality
reconstruction of the time-domain signal. However, it is important to point out that when
prior knowledge of the speaker is not available, our technique works well using a uniform
prior.
For a time-domain signal with N samples, let s be a column vector containing samples
s1, . . . , sN. We define the spectrogram of a signal as the magnitude of its windowed short-time Fourier transform. Let M = {m1, m2, m3, . . .} denote the spectrogram of s; m_k is
the magnitude spectrum of the kth window and m_k^f is the magnitude of the f th frequency
component. Further, let n be the width of the window used to obtain the short-time transform. We assume the windows are spaced at intervals of n/2, although this assumption is
easy to relax. In this setup, shown in Fig. 1, a particular time-domain sample s_t contributes
to exactly two windows in the spectrogram.
The joint distribution over the speech signal s and the spectrogram M is
P(s, M) = P(s) P(M|s).    (1)
We use an Rth-order autoregressive model for the prior distribution over time-domain
speech signals:

P(s) \propto \prod_{t=1}^{N} \exp\left( -\frac{1}{2\gamma^2} \Big( \sum_{r=1}^{R} a_r s_{t-r} - s_t \Big)^2 \right).    (2)
t=1
In this model, each sample is predicted to be a linear combination of the r previous samples.
The autoregressive model can be estimated beforehand, using training data for a specific
speaker or a general class of speakers. Although this model is overly simple for general
speech signals, it is useful for avoiding discontinuities introduced at window boundaries by
mis-matched phase components in neighboring frames. To avoid artifacts at frame boundaries, the variance of the prior can be set to low values at frame boundaries, enabling the
prior to "pave over" the artifacts.
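Estimating the autoregressive coefficients beforehand, and evaluating the resulting log-prior, can be sketched as follows (hypothetical helpers using plain least squares; gamma denotes the prior's noise scale):

```python
import numpy as np

def fit_ar(s, R):
    # Least-squares estimate of a_1..a_R in s_t ~ sum_r a_r * s_{t-r}.
    X = np.column_stack([s[R - r:len(s) - r] for r in range(1, R + 1)])
    a, *_ = np.linalg.lstsq(X, s[R:], rcond=None)
    return a

def ar_log_prior(s, a, gamma):
    # log P(s) up to an additive constant, under the autoregressive prior.
    R = len(a)
    pred = sum(a[r] * s[R - (r + 1):len(s) - (r + 1)] for r in range(R))
    return -np.sum((pred - s[R:]) ** 2) / (2 * gamma ** 2)
```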
Assuming that the observed spectrogram is equal to the spectrogram of the hidden speech
signal, plus independent Gaussian noise, the likelihood can be written

P(M|s) \propto \prod_k \exp\left( -\frac{1}{2\sigma^2} \| \hat{m}_k(s) - m_k \|^2 \right)    (3)

where σ is the noise in the observed spectra, and m̂_k(s) is the magnitude spectrum given
by the appropriate window of the estimated speech signal, s. Note that the magnitude
spectra are independent given the time domain signal.
The likelihood in (3) favors configurations of s that match the observed spectrogram, while
the prior in (2) places more weight on configurations that match the autoregressive model.
2.1 Making the speech signal explicit in the model
We can simplify the functional form m̂_k(s) by introducing the n×n Fourier transform matrix, F. Let s_k be an n-vector containing the samples from the kth window. Using the fact
that the magnitude of a complex number c is cc*, where * denotes complex conjugation,
we have

\hat{m}_k(s) = (F s_k) \circ (F s_k)^* = (F s_k) \circ (F^* s_k),

where ∘ indicates element-wise product.
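This identity is easy to check numerically (a small self-contained demo; the window length and test vector are arbitrary):

```python
import numpy as np

n = 8
F = np.fft.fft(np.eye(n))                           # n x n Fourier transform matrix
s_k = np.random.default_rng(0).standard_normal(n)   # one real-valued window
m_hat = (F @ s_k) * np.conj(F @ s_k)                # (F s_k) o (F* s_k)
```

For a real window, F* s_k is the complex conjugate of F s_k, so the element-wise product is the real, non-negative squared magnitude spectrum.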
The joint distribution in (1) can now be written

P(s, M) \propto \prod_k \exp\left( -\frac{1}{2\sigma^2} \| (F s_k) \circ (F^* s_k) - m_k \|^2 \right) \prod_t \exp\left( -\frac{1}{2\gamma^2} \Big( \sum_{r=1}^{R} a_r s_{t-r} - s_t \Big)^2 \right).    (4)
The factorization of the distribution in (4) can be used to construct the factor graph shown
in Fig. 2. For clarity, we have used a 3rd order autoregressive model and a window length of
4. In this graphical model, function nodes are represented by black disks and each function
node corresponds to a term in the joint distribution. There is one function node connecting
each observed short-time energy spectrum to the set of n time-domain samples from which
it was possibly derived, and one function node connecting each time-domain sample to its
R predecessors in the autoregressive model.
Taking the logarithm of the joint distribution in (4) and expanding the norm, we obtain

\log P(s, M) \propto -\frac{1}{2\sigma^2} \sum_k \sum_i \Big( \sum_{j=1}^{n} \sum_{l=1}^{n} F_{ij} F_{il}^* s_{kn/2+j} s_{kn/2+l} - m_{ki} \Big)^2 - \frac{1}{2\gamma^2} \sum_t \Big( \sum_{r=1}^{R} a_r s_{t-r} - s_t \Big)^2.    (5)
Figure 2: Factor graph for the model in (4) using a 3rd order autoregressive model, window
length of 4 and an overlap of 2 samples. Function nodes f_i enforce the constraint that
the spectrogram of s match the observed spectrogram, and function nodes g_i enforce the
constraint due to the AR model.
In this expression, k indexes frames, i indexes frequency, s_{kn/2+j} is the jth sample in the
kth frame, m_{ki} is the observed spectral energy at frequency i in frame k, and a_r is the rth
autoregressive coefficient. The log-probability is quartic in the unknown speech samples,
s1, . . . , sN.
For simplicity of presentation above, we implicitly assumed a rectangular window for computing the spectrogram. The extension to other types of windowing functions is straightforward. In the experiments described below, we have used a Hamming window, and adjusted the equations appropriately.
3 Inference Algorithms
The goal of probabilistic inference is to compute the posterior distribution over speech
waveforms and output a typical sample or a mode of the posterior as an estimate of the
reconstructed speech signal. To find a mode of the posterior, we have explored the use of
iterative conditional modes (ICM) [8], Markov chain Monte Carlo methods [9], variational
techniques [10], and direct application of numerical optimization methods for finding the
maximum a posteriori speech signal. In this paper, we report results on two of the faster
techniques, ICM and direct optimization.
ICM operates by iteratively selecting a variable and assigning the MAP estimate to the
variable while keeping all other variables fixed. This technique is guaranteed to increase
the joint probability of the speech waveform and the observed spectrum, at each step. At
every stage we set st to its most probable value, given the other speech samples and the
observed spectrogram:
\hat{s}_t = \arg\max_{s_t} P(s_t | M, s \setminus s_t) = \arg\max_{s_t} P(s, M).
This value can be found by extracting the terms in (5) that depend on st and optimizing the
resulting quartic equation with complex coefficients. To select an initial configuration of s,
we applied an inverse Fourier transform to the observed magnitude spectra M, assuming
a random phase. As will become evident in the experimental section of this paper, by
updating only a single sample at a time, ICM is prone to finding poor local minima.
We also implemented an inference algorithm that directly searches for a maximum of
log P (s, M) w.r.t. s, using conjugate gradients. The same derivatives used to find the ICM
updates were used in a conjugate gradient optimizer, which is capable of finding search directions in the vector space s, and jointly adjusting all speech samples simultaneously. We
Figure 3: Reconstruction results for an utterance from the WSJ database. (left) Original
signal and the corresponding spectrogram. (middle) Reconstruction using algorithm in
[1]. The spectrogram of the reconstruction fails to capture the finer details in the original
signal. (right) Reconstruction using our algorithm. The spectrogram captures most of the
fine details in the original signal.
initialized the conjugate gradient optimizer using the same procedure as described above
for ICM.
4 Experiments
We tested our algorithm using several randomly chosen utterances from the Wall street
journal corpus and the NIST TIMIT corpus. For all experiments we used a (Hamming)
window of length 256 and with an overlap of 128 samples. Where possible, we trained
a 12th order AR model of the speaker using an utterance different from the one used to
create the spectrogram. For convergence to a good local minimum, it is important to down-weight the contribution of the AR model for the first several iterations of conjugate gradient
optimization. In fact we ran the algorithm without the AR model until convergence and then
started the AR model with a weighting factor of 10. This way, the AR model operates on
the signal with very little error in the estimated spectrogram.
Along the frame boundaries, the variance of the prior (AR model) was set to a small value
to smooth down spikes that are not very probable a priori. Further, we also tried using
a cubic spline smoother along the boundaries as a post processing step for better sound
quality.
4.1 Evaluation
The quality of sound in the estimated signal is an important factor in determining the
effectiveness of the algorithm. To demonstrate improvement in the perceptual quality of sound we have placed audio files on the web; for demonstrations please check,
http://www.psi.toronto.edu/~kannan/spectrogram. Our algorithm consistently outperformed the algorithm proposed in [1] both in terms of sound quality and in matching the
observed spectrogram. Fig. 3 shows reconstruction results for an utterance from WSJ data.
As expected, ICM typically converged to a poor local minimum in a few iterations. In Fig. 4,
a plot of the log probability as a function of number of iterations is shown for ICM and our
approach.
Algorithm                            dB gain (dB)
Griffin and Lim [1]                  4.508
Our approach (without AR model)      7.900
Our approach (12th order AR model)   8.172

[Plot: log P versus iteration for ICM and conjugate gradients (CG)]
Figure 4: SNR for different algorithms. Values reported are averages over 12 different
utterances. The graph on the right compares the log probability under ICM to our algorithm
Analysis of signal to noise ratio of the true and estimated signal can be used to measure the
quality of the estimated signal, with high dB gain indicating good reconstruction.
As the input to our model does not include a phase component, we cannot measure SNR
by comparing the recovered signal to any true time domain signal. Instead, we define the
following approximation
SNR = Σ_u 10 log [ (1/E_u) Σ_w Σ_f |s_{u,w}(f)|² / Σ_w Σ_f ( (1/Ê_u) |ŝ_{u,w}(f)| − (1/E_u) |s_{u,w}(f)| )² ]    (6)

where E_u = Σ_t s_t² is the total energy in utterance u. Summations over u, w and f are over
all utterances, windows and frequencies respectively.
The table in Fig. 4 reports dB gain averaged over several utterances for [1] and for our algorithm with and without an AR model. The gains for our algorithm are significantly better
than for the algorithm of Griffin and Lim. Moving the summation over w in (6) outside the
log produces similar quality estimates.
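The per-utterance gain in (6) can be sketched directly; passing the total time-domain energies E_u and Ê_u in as precomputed scalars, and using log base 10 for the dB scale, are assumptions of this sketch.

```python
import numpy as np

def snr_db(true_mag, est_mag, E_true, E_est):
    # true_mag, est_mag: |s_{u,w}(f)| arrays indexed by (window w, frequency f);
    # E_true, E_est: total time-domain energies E_u = sum_t s_t^2 of each signal.
    num = np.sum(true_mag ** 2) / E_true
    den = np.sum((est_mag / E_est - true_mag / E_true) ** 2)
    return 10.0 * np.log10(num / den)

t = np.linspace(1.0, 2.0, 12).reshape(3, 4)   # toy "true" magnitudes
print(snr_db(t, t + 0.01, 5.0, 5.0) > snr_db(t, t + 0.5, 5.0, 5.0))  # True
```

A closer spectral match gives a smaller denominator and hence a higher dB gain, which is the sense in which the table above ranks the algorithms.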
4.2 Time Scale Modification
As an example to show the potential utility of spectrogram inversion, we investigated an
extremely simple approach to time scale modification of speech signals. Starting from
the original signal we form the spectrogram (or else we may start with the spectrogram
directly), and upsample or downsample it along the time axis. (For example, to speed up
the speech by a factor of two we can discard every second column of the spectrogram.) In
spite of the fact that this approach does not use any phase information from the original
signal, it produces results with good perceptual sound quality. (Audio demonstrations are
available on the web site given earlier.)
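The column-subsampling scheme above can be sketched as follows; `time_scale` is an illustrative name, and rounding fractional column indices is a choice made for this sketch.

```python
import numpy as np

def time_scale(spectrogram, factor):
    # Resample the spectrogram along the time axis: factor=2 discards
    # every second column (2x speed-up), factor<1 repeats columns
    # (slow-down). The modified magnitudes would then be handed to the
    # spectrogram inversion procedure, with no phase from the original.
    n_windows = spectrogram.shape[0]
    idx = np.round(np.arange(0.0, n_windows, factor)).astype(int)
    return spectrogram[idx[idx < n_windows]]

S = np.arange(20).reshape(10, 2)   # 10 windows, 2 frequency bins
print(time_scale(S, 2).shape)      # (5, 2)
```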
5 Variational Inference
The framework described so far focuses on obtaining fixed point estimates for the time domain signal by maximizing the joint log probability of the model in (5). A more important
and potentially useful task is to find the posterior probability distribution P (s|M). As exact inference of P (s|M) is intractable, we approximate it using a fully factored distribution
Q(s) where,
Q(s) = Π_i q_i(s_i)    (7)
Here we assume q_i(s_i) ∼ N(μ_i, σ_i). The goal of variational approximation is to infer the
parameters {μ_i, σ_i}, ∀i, by minimizing the KL divergence between the approximating Q
distribution and the true posterior P(s|M). This is equivalent to minimizing,
D = Σ_s Q(s) log [ Q(s) / P(s, M) ]
  = Σ_s ( Π_i q_i(s_i) ) log [ ( Π_i q_i(s_i) ) / P(s, M) ]
  = −Σ_i H(q_i) − E_Q[ log P(s, M) ]    (8)
The entropy term H(qi ) is easy to compute; log P (s, M) is a quartic in the random variable
si and the second term involves computing the expectation of it with respect to the Q
distribution. Simplifying and rearranging terms we get,
D = −Σ_i H(q_i) − Σ_i ( Σ_{j=1}^{n} Σ_{l=1}^{n} F_{ij} F*_{il} μ_{nk−n/2+j} μ_{nk−n/2+l} − m_{ki} )² + Σ_i σ_i² G_i(μ, σ)    (9)
G_i(μ, σ) accounts for uncertainty in s. Estimates with high uncertainty (σ) will tend to
have very little influence on other estimates during the optimization. Another interesting
aspect of this formulation is that by setting σ = 0, the first and third terms in (9) vanish
and D takes a form similar to (5). In other words, in the absence of uncertainty we are in
essence finding fixed point estimates for s.
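The entropy term H(q_i) in (8) has a closed form for the Gaussian factors assumed above. A minimal sketch, assuming σ_i denotes a standard deviation (the paper's parameterization could equally be a variance):

```python
import math

def gaussian_entropy(sigma):
    # H(q_i) for q_i(s_i) = N(mu_i, sigma_i^2): 0.5 * log(2*pi*e*sigma^2).
    # As sigma -> 0 this diverges to -inf, consistent with the remark that
    # zero uncertainty collapses the bound toward a fixed-point objective.
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

print(round(gaussian_entropy(1.0), 4))  # 1.4189
```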
6 Conclusion
In this paper, we have introduced a simple probabilistic model of noisy spectrograms in
which the samples of the unknown time domain signal are represented directly as hidden
variables. By using a continuous gradient optimizer on these quantities, we are able to accurately estimate the full speech signal from only the short time spectral magnitudes taken
in overlapping windows. Our algorithm's reconstructions are substantially better, both in
terms of informal perceptual quality and measured signal to noise ratio, than the standard
approach of Griffin and Lim [1]. Furthermore, in our setting, it is easy to incorporate an
a-priori model of gross speech structure in the form of an AR-model, whose influence on
the reconstruction is user-tunable. Spectrogram inversion has many potential applications;
as an example we have demonstrated an extremely simple but nonetheless effective time
scale modification algorithm which subsamples the spectrogram of the original utterance
and then inverts.
In addition to improved experimental results, our approach highlights two important lessons
from the point of view of statistical signal processing algorithms. The first is that directly
representing quantities of interest and making inferences about them using the machinery of
probabilistic inference is a powerful approach that can avoid the pitfalls of less principled
iterative algorithms that maintain inconsistent estimates of redundant quantities, such as
phase and time-domain signals. The second is that coordinate descent optimization (ICM)
does not always yield the best results in problems with highly dependent hidden variables.
It is often tacitly assumed in the graphical models community, that the more structured
an approximation one can make when updating blocks of parameters simultaneously, the
better. In other words, practitioners often try to solve for as many variables as possible
conditioned on quantities that have just been updated. Our experience in this model has
shown that direct continuous optimization using gradient techniques allows all quantities
to adjust simultaneously and ultimately finds far superior solutions.
Because of its probabilistic nature, our model can easily be extended to include other pieces
of prior information, or to deal with missing or noisy spectrogram frames. This opens the
door to unified phase recovery and denoising algorithms, and to the possibility of performing sophisticated speech separation or denoising inside the pipeline of a standard speech
recognition system, in which typically only short time spectral magnitudes are available.
Acknowledgments
We thank Carl Rasmussen for his conjugate gradient optimizer. KA, STR and BJF are
supported in part by the Natural Sciences and Engineering Research Council of Canada.
BJF and STR are supported in part by the Ontario Premier's Research Excellence Award.
STR is supported in part by the Learning Project of IRIS Canada.
References
[1] Griffin, D. W. and Lim, J. S. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech and Signal Processing, 1984, 32(2).
[2] Kschischang, F. R., Frey, B. J. and Loeliger, H.-A. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 2001, 47.
[3] Fletcher, R. Practical Methods of Optimization. John Wiley & Sons, 1987.
[4] Roucos, S. and A. M. Wilgus. High Quality Time-Scale Modification for Speech. In
Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, IEEE, 1985, 493-496.
[5] Rabiner, L. and Juang, B. Fundamentals of Speech Recognition. Prentice Hall, 1993
[6] L. K. Saul, D. D. Lee, C. L. Isbell, and Y. LeCun Real time voice processing with
audiovisual feedback: toward autonomous agents with perfect pitch. in S. Becker, S.
Thrun, and K. Obermayer (eds.), Advances in Neural Information Processing Systems
15. MIT Press: Cambridge, MA, 2003
[7] Eric A. Wan and Alex T. Nelson Removal of noise from speech using the dual EKF
algorithm in Proceedings of the International Conference on Acoustics, Speech, and
Signal Processing (ICASSP), IEEE, May, 1998
[8] Besag, J. On the statistical analysis of dirty pictures. Journal of the Royal Statistical
Society B, vol. 48, pp. 259-302, 1986.
[9] Neal, R. M. Probabilistic inference using Markov chain Monte Carlo methods. University of Toronto Technical Report, 1993.
[10] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul. An introduction
to variational methods for graphical models. In Learning in Graphical Models, edited by
M. I. Jordan, Kluwer Academic Publishers, Norwell, MA, 1998.
Locality Preserving Projections
Xiaofei He
Department of Computer Science
The University of Chicago
Chicago, IL 60637
[email protected]
Partha Niyogi
Department of Computer Science
The University of Chicago
Chicago, IL 60637
[email protected]
Abstract
Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a
variational problem that optimally preserves the neighborhood structure
of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA), a classical linear technique that projects the
of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) ? a classical linear technique that projects the
data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient
space, the Locality Preserving Projections are obtained by finding the
optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the
data representation properties of nonlinear techniques such as Laplacian
Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more
crucially is defined everywhere in ambient space rather than just on the
training data points. This is borne out by illustrative examples on some
high dimensional data sets.
1. Introduction
Suppose we have a collection of data points of n-dimensional real vectors drawn from an
unknown probability distribution. In increasingly many cases of interest in machine learning and data mining, one is confronted with the situation where n is very large. However,
there might be reason to suspect that the ?intrinsic dimensionality? of the data is much
lower. This leads one to consider methods of dimensionality reduction that allow one to
represent the data in a lower dimensional space.
In this paper, we propose a new linear dimensionality reduction algorithm, called Locality
Preserving Projections (LPP). It builds a graph incorporating neighborhood information
of the data set. Using the notion of the Laplacian of the graph, we then compute a transformation matrix which maps the data points to a subspace. This linear transformation
optimally preserves local neighborhood information in a certain sense. The representation
map generated by the algorithm may be viewed as a linear discrete approximation to a continuous map that naturally arises from the geometry of the manifold [2]. The new algorithm
is interesting from a number of perspectives.
1. The maps are designed to minimize a different objective criterion from the classical linear techniques.
2. The locality preserving quality of LPP is likely to be of particular use in information retrieval applications. If one wishes to retrieve audio, video, text documents
under a vector space model, then one will ultimately need to do a nearest neighbor
search in the low dimensional space. Since LPP is designed for preserving local
structure, it is likely that a nearest neighbor search in the low dimensional space
will yield similar results to that in the high dimensional space. This makes for an
indexing scheme that would allow quick retrieval.
3. LPP is linear. This makes it fast and suitable for practical application. While a
number of non linear techniques have properties (1) and (2) above, we know of no
other linear projective technique that has such a property.
4. LPP is defined everywhere. Recall that nonlinear dimensionality reduction techniques like ISOMAP[6], LLE[5], Laplacian eigenmaps[2] are defined only on the
training data points and it is unclear how to evaluate the map for new test points.
In contrast, the Locality Preserving Projection may be simply applied to any new
data point to locate it in the reduced representation space.
5. LPP may be conducted in the original space or in the reproducing kernel Hilbert
space(RKHS) into which data points are mapped. This gives rise to kernel LPP.
As a result of all these features, we expect the LPP based techniques to be a natural alternative to PCA based techniques in exploratory data analysis, information retrieval, and
pattern classification applications.
2. Locality Preserving Projections
2.1. The linear dimensionality reduction problem
The generic problem of linear dimensionality reduction is the following. Given a set
x_1, x_2, ..., x_m in R^n, find a transformation matrix A that maps these m points to a set
of points y_1, y_2, ..., y_m in R^l (l ≪ n), such that y_i "represents" x_i, where y_i = A^T x_i.
Our method is of particular applicability in the special case where x_1, x_2, ..., x_m ∈ M
and M is a nonlinear manifold embedded in R^n.
2.2. The algorithm
Locality Preserving Projection (LPP) is a linear approximation of the nonlinear Laplacian
Eigenmap [2]. The algorithmic procedure is formally stated below:
1. Constructing the adjacency graph: Let G denote a graph with m nodes. We put
an edge between nodes i and j if x_i and x_j are "close". There are two variations:
(a) ε-neighborhoods. [parameter ε ∈ R] Nodes i and j are connected by an edge
if ‖x_i − x_j‖² < ε where the norm is the usual Euclidean norm in R^n.
(b) k nearest neighbors. [parameter k ∈ N] Nodes i and j are connected by an
edge if i is among k nearest neighbors of j or j is among k nearest neighbors
of i.
Note: The method of constructing an adjacency graph outlined above is correct
if the data actually lie on a low dimensional manifold. In general, however, one
might take a more utilitarian perspective and construct an adjacency graph based
on any principle (for example, perceptual similarity for natural signals, hyperlink
structures for web documents, etc.). Once such an adjacency graph is obtained,
LPP will try to optimally preserve it in choosing projections.
2. Choosing the weights: Here, as well, we have two variations for weighting the
edges. W is a sparse symmetric m × m matrix with W_ij having the weight of the
edge joining vertices i and j, and 0 if there is no such edge.
(a) Heat kernel. [parameter t ∈ R]. If nodes i and j are connected, put

W_ij = e^{−‖x_i − x_j‖² / t}
The justification for this choice of weights can be traced back to [2].
(b) Simple-minded. [No parameter]. Wij = 1 if and only if vertices i and j are
connected by an edge.
3. Eigenmaps: Compute the eigenvectors and eigenvalues for the generalized eigenvector problem:
XLX^T a = λ XDX^T a    (1)

where D is a diagonal matrix whose entries are column (or row, since W is symmetric) sums of W, D_ii = Σ_j W_ji. L = D − W is the Laplacian matrix. The ith
column of matrix X is x_i.

Let the column vectors a_0, ..., a_{l−1} be the solutions of equation (1), ordered according to their eigenvalues, λ_0 < ... < λ_{l−1}. Thus, the embedding is as follows:

x_i → y_i = A^T x_i,    A = (a_0, a_1, ..., a_{l−1})

where y_i is an l-dimensional vector, and A is an n × l matrix.
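The three steps above can be sketched end to end. The following is a minimal dense implementation (kNN adjacency graph, heat-kernel weights, generalized eigenproblem solved by Cholesky whitening); the small ridge added to XDX^T for numerical invertibility and the max-based symmetrization of W are choices of this sketch.

```python
import numpy as np

def lpp(X, n_components=2, k=5, t=1.0):
    # X: data matrix with one point per column (n features x m points).
    m = X.shape[1]
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)  # pairwise ||x_i - x_j||^2
    # Step 1-2: kNN adjacency graph with heat-kernel weights W_ij = exp(-||x_i-x_j||^2 / t).
    W = np.zeros((m, m))
    for i in range(m):
        nn = np.argsort(d2[i])[1:k + 1]      # k nearest neighbors (skip self)
        W[i, nn] = np.exp(-d2[i, nn] / t)
    W = np.maximum(W, W.T)                   # "i among j's neighbors or vice versa"
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Step 3: generalized eigenproblem X L X^T a = lam X D X^T a.
    A_mat = X @ L @ X.T
    B_mat = X @ D @ X.T + 1e-9 * np.eye(X.shape[0])   # ridge for invertibility
    R = np.linalg.cholesky(B_mat)                     # B = R R^T
    Rinv = np.linalg.inv(R)
    vals, vecs = np.linalg.eigh(Rinv @ A_mat @ Rinv.T)
    A = Rinv.T @ vecs[:, :n_components]               # smallest eigenvalues first
    return A.T @ X                                    # embedded points, one per column
```

Applied to a 5-dimensional data set, `lpp(X, n_components=2)` returns a 2 x m array of embedded coordinates.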
3. Justification
3.1. Optimal Linear Embedding
The following section is based on standard spectral graph theory. See [4] for a comprehensive reference and [2] for applications to data representation.
Recall that given a data set we construct a weighted graph G = (V, E) with edges connecting nearby points to each other. Consider the problem of mapping the weighted graph G to
a line so that connected points stay as close together as possible. Let y = (y_1, y_2, ..., y_m)^T
be such a map. A reasonable criterion for choosing a "good" map is to minimize the following objective function [2]

Σ_ij (y_i − y_j)² W_ij
under appropriate constraints. The objective function with our choice of Wij incurs a heavy
penalty if neighboring points xi and xj are mapped far apart. Therefore, minimizing it is
an attempt to ensure that if x_i and x_j are "close" then y_i and y_j are close as well.
Suppose a is a transformation vector, that is, y^T = a^T X, where the ith column vector of
X is x_i. By simple algebra, the objective function can be reduced to

(1/2) Σ_ij (y_i − y_j)² W_ij = (1/2) Σ_ij (a^T x_i − a^T x_j)² W_ij
  = Σ_i a^T x_i D_ii x_i^T a − Σ_ij a^T x_i W_ij x_j^T a
  = a^T X(D − W)X^T a = a^T XLX^T a

where X = [x_1, x_2, ..., x_m], and D is a diagonal matrix; its entries are column (or row,
since W is symmetric) sums of W, D_ii = Σ_j W_ij. L = D − W is the Laplacian matrix
[4]. Matrix D provides a natural measure on the data points. The bigger the value D_ii
(corresponding to y_i) is, the more "important" is y_i. Therefore, we impose a constraint as
follows:

y^T D y = 1  ⇒  a^T XDX^T a = 1
Finally, the minimization problem reduces to finding:
arg min_{a : a^T XDX^T a = 1} a^T XLX^T a

The transformation vector a that minimizes the objective function is given by the minimum
eigenvalue solution to the generalized eigenvalue problem:

XLX^T a = λ XDX^T a
It is easy to show that the matrices XLX^T and XDX^T are symmetric and positive semidefinite. The vectors a_i (i = 0, 1, ..., l − 1) that minimize the objective function are given
by the minimum eigenvalue solutions to the generalized eigenvalue problem.
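The key identity of this derivation, (1/2) Σ_ij W_ij (y_i − y_j)² = a^T XLX^T a, can be checked numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
X = rng.standard_normal((n, m))          # data points as columns
W = rng.random((m, m))
W = (W + W.T) / 2                        # symmetric weights
np.fill_diagonal(W, 0)
D = np.diag(W.sum(axis=1))
L = D - W                                # graph Laplacian
a = rng.standard_normal(n)
y = a @ X                                # y^T = a^T X
lhs = a @ (X @ L @ X.T) @ a
rhs = 0.5 * sum(W[i, j] * (y[i] - y[j]) ** 2
                for i in range(m) for j in range(m))
print(abs(lhs - rhs) < 1e-10)  # True
```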
3.2. Geometrical Justification
The Laplacian matrix L (= D − W) for a finite graph [4] is analogous to the Laplace
Beltrami operator L on compact Riemannian manifolds. While the Laplace Beltrami operator for a manifold is generated by the Riemannian metric, for a graph it comes from the
adjacency relation.
Let M be a smooth, compact, d-dimensional Riemannian manifold. If the manifold is
embedded in Rn the Riemannian structure on the manifold is induced by the standard
Riemannian structure on Rn . We are looking here for a map from the manifold to the real
line such that points close together on the manifold get mapped close together on the line.
Let f be such a map. Assume that f : M → R is twice differentiable.
Belkin and Niyogi [2] showed that the optimal map preserving locality can be found by
solving the following optimization problem on the manifold:
arg min_{‖f‖_{L²(M)} = 1} ∫_M ‖∇f‖²

which is equivalent to

arg min_{‖f‖_{L²(M)} = 1} ∫_M L(f) f

where the integral is taken with respect to the standard measure on a Riemannian manifold.¹ L is the Laplace Beltrami operator on the manifold, i.e. Lf = −div ∇(f). Thus,
the optimal f has to be an eigenfunction of L. The integral ∫_M L(f) f can be discretely
approximated by ⟨f(X), Lf(X)⟩ = f^T(X) L f(X) on a graph, where
f(X) = [f(x_1), f(x_2), ..., f(x_m)]^T,    f^T(X) = [f(x_1), f(x_2), ..., f(x_m)]

If we restrict the map to be linear, i.e. f(x) = a^T x, then we have

f(X) = X^T a  ⇒  ⟨f(X), Lf(X)⟩ = f^T(X) L f(X) = a^T XLX^T a
The constraint can be computed as follows,

‖f‖²_{L²(M)} = ∫_M |f(x)|² dx = ∫_M (a^T x)² dx = ∫_M (a^T x x^T a) dx = a^T ( ∫_M x x^T dx ) a

where dx is the standard measure on a Riemannian manifold. By spectral graph theory [4],
the measure dx directly corresponds to the measure for the graph, which is the degree of
the vertex, i.e. D_ii. Thus, ‖f‖²_{L²(M)} can be discretely approximated as follows,

‖f‖²_{L²(M)} = a^T ( ∫_M x x^T dx ) a ≈ a^T ( Σ_i x_i x_i^T D_ii ) a = a^T XDX^T a
Finally, we conclude that the optimal linear projective map, i.e. f(x) = a^T x, can be
obtained by solving the following objective function,

arg min_{a : a^T XDX^T a = 1} a^T XLX^T a

¹ If M has a boundary, appropriate boundary conditions for f need to be assumed.
If M has a boundary, appropriate boundary conditions for f need to be assumed.
These projective maps are the optimal linear approximations to the eigenfunctions of the
Laplace Beltrami operator on the manifold. Therefore, they are capable of discovering the
nonlinear manifold structure.
3.3. Kernel LPP
Suppose that the Euclidean space R^n is mapped to a Hilbert space H through a nonlinear
mapping function φ : R^n → H. Let φ(X) denote the data matrix in the Hilbert space,
φ(X) = [φ(x_1), φ(x_2), ..., φ(x_m)]. Now, the eigenvector problem in the Hilbert space
can be written as follows:

[φ(X) L φ^T(X)] ν = λ [φ(X) D φ^T(X)] ν    (2)

To generalize LPP to the nonlinear case, we formulate it in a way that uses dot products
exclusively. Therefore, we consider an expression of the dot product on the Hilbert space H
given by the following kernel function:

K(x_i, x_j) = (φ(x_i) · φ(x_j)) = φ^T(x_i) φ(x_j)

Because the eigenvectors of (2) are linear combinations of φ(x_1), φ(x_2), ..., φ(x_m), there
exist coefficients α_i, i = 1, 2, ..., m such that

ν = Σ_{i=1}^{m} α_i φ(x_i) = φ(X) α

where α = [α_1, α_2, ..., α_m]^T ∈ R^m.

By simple algebra, we can finally obtain the following eigenvector problem:

K L K α = λ K D K α    (3)

Let the column vectors α^1, α^2, ..., α^m be the solutions of equation (3). For a test point x,
we compute projections onto the eigenvectors ν^k according to

(ν^k · φ(x)) = Σ_{i=1}^{m} α_i^k (φ(x) · φ(x_i)) = Σ_{i=1}^{m} α_i^k K(x, x_i)

where α_i^k is the ith element of the vector α^k. For the original training points, the maps can
be obtained by y = Kα, where the ith element of y is the one-dimensional representation
of x_i. Furthermore, equation (3) can be reduced to

L y = λ D y    (4)
which is identical to the eigenvalue problem of Laplacian Eigenmaps [2]. This shows that
Kernel LPP yields the same results as Laplacian Eigenmaps on the training points.
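The reduction from (3) to (4) on the training points can be sketched numerically: solve L y = λ D y and recover α from y = Kα. The ridge terms and the choice of a Gaussian kernel below are assumptions of this sketch.

```python
import numpy as np

def kernel_lpp_train(K, W):
    # On the training points, Kernel LPP reduces to the Laplacian
    # eigenmap problem L y = lam D y (eq. (4)); alpha is recovered
    # from y = K alpha. K: Gram matrix, W: adjacency weights.
    D = np.diag(W.sum(axis=1))
    L = D - W
    R = np.linalg.cholesky(D)            # D is PD if no node is isolated
    Rinv = np.linalg.inv(R)
    vals, Z = np.linalg.eigh(Rinv @ L @ Rinv.T)
    Y = Rinv.T @ Z                       # generalized eigenvectors as columns
    alpha = np.linalg.solve(K + 1e-9 * np.eye(len(K)), Y)
    return vals, Y, alpha

# Toy data: fully connected heat-kernel graph and Gaussian Gram matrix.
rng = np.random.default_rng(2)
P = rng.standard_normal((8, 3))
d2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=2)
W = np.exp(-d2); np.fill_diagonal(W, 0)
K = np.exp(-d2 / 2.0)
vals, Y, alpha = kernel_lpp_train(K, W)
```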
4. Experimental Results
In this section, we will discuss several applications of the LPP algorithm. We begin with
two simple synthetic examples to give some intuition about how LPP works.
4.1. Simple Synthetic Examples
Two simple synthetic examples are given in Figure 1. Both of the two data sets correspond essentially to a one-dimensional manifold. Projection of the data points onto the
first basis would then correspond to a one-dimensional linear manifold representation. The
second basis, shown as a short line segment in the figure, would be discarded in this low-dimensional example.
Figure 1: The first and third plots show the results of PCA. The second and forth plots
show the results of LPP. The line segments describe the two bases. The first basis is shown
as a longer line segment, and the second basis is shown as a shorter line segment. In this
example, LPP is insensitive to the outlier and has more discriminating power than PCA.
Figure 2: The handwritten digits ("0"-"9") are mapped into a 2-dimensional space. The left
figure is a representation of the set of all images of digits using the Laplacian eigenmaps.
The middle figure shows the results of LPP. The right figure shows the results of PCA. Each
color corresponds to a digit.
LPP is derived by preserving local information, hence it is less sensitive to outliers than
PCA. This can be clearly seen from Figure 1. LPP finds the principal direction along the
data points at the left bottom corner, while PCA finds the principal direction on which the
data points at the left bottom corner collapse into a single point. Moreover, LPP can have
more discriminating power than PCA. As can be seen from Figure 1, the two circles are
totally overlapped with each other in the principal direction obtained by PCA, while they
are well separated in the principal direction obtained by LPP.
4.2. 2-D Data Visualization
An experiment was conducted with the Multiple Features Database [3]. This dataset consists of features of handwritten numbers ("0"-"9") extracted from a collection of Dutch
utility maps. 200 patterns per class (for a total of 2,000 patterns) have been digitized in
binary images. Digits are represented in terms of Fourier coefficients, profile correlations,
Karhunen-Loève coefficients, pixel average, Zernike moments and morphological features.
Each image is represented by a 649-dimensional vector. These data points are mapped to
a 2-dimensional space using different dimensionality reduction algorithms, PCA, LPP, and
Laplacian Eigenmaps. The experimental results are shown in Figure 2. As can be seen,
LPP performs much better than PCA. LPPs are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a
result, LPP shares many of the data representation properties of non linear techniques such
as Laplacian Eigenmap. However, LPP is computationally much more tractable.
4.3. Manifold of Face Images
In this subsection, we applied the LPP to images of faces. The face image data set used
here is the same as that used in [5]. This dataset contains 1965 face images taken from
sequential frames of a small video. The size of each image is 20 × 28, with 256 gray levels
Figure 3: A two-dimensional representation of the set of all images of faces using the
Locality Preserving Projection. Representative faces are shown next to the data points
in different parts of the space. As can be seen, the facial expression and the viewing
point of faces change smoothly.
Table 1: Face Recognition Results on Yale Database
                 LPP    LDA    PCA
dims             14     14     33
error rate (%)   16.0   20.0   25.3
per pixel. Thus, each face image is represented by a point in the 560-dimensional ambient space. Figure 3 shows the mapping results. The images of faces are mapped into the
2-dimensional plane described by the first two coordinates of the Locality Preserving Projections. It should be emphasized that the mapping from image space to low-dimensional
space obtained by our method is linear, rather than nonlinear as in most previous work. The
linear algorithm does detect the nonlinear manifold structure of images of faces to some
extent. Some representative faces are shown next to the data points in different parts of the
space. As can be seen, the images of faces are clearly divided into two parts. The left part
are the faces with closed mouth, and the right part are the faces with open mouth. This
is because that, by trying to preserve neighborhood structure in the embedding, the LPP
algorithm implicitly emphasizes the natural clusters in the data. Specifically, it makes the
neighboring points in the ambient space nearer in the reduced representation space, and
faraway points in the ambient space farther in the reduced representation space. The bottom images correspond to points along the right path (linked by solid line), illustrating one
particular mode of variability in pose.
4.4. Face Recognition
PCA and LDA are the two most widely used subspace learning techniques for face recognition [1][7]. These methods project the training sample faces to a low dimensional representation space where the recognition is carried out. The main supposition behind this
procedure is that the face space (given by the feature vectors) has a lower dimension than
the image space (given by the number of pixels in the image), and that the recognition
of the faces can be performed in this reduced space. In this subsection, we consider the
application of LPP to face recognition.
The database used for this experiment is the Yale face database [8]. It was constructed at the Yale Center for Computational Vision and Control. It contains 165 grayscale images of 15 individuals. The images demonstrate variations in lighting condition (left-light, center-light, right-light), facial expression (normal, happy, sad, sleepy, surprised, and wink), and with/without glasses. Preprocessing to locate the faces was applied. Original images
were normalized (in scale and orientation) such that the two eyes were aligned at the same
position. Then, the facial areas were cropped into the final images for matching. The size
of each cropped image is 32 × 32 pixels, with 256 gray levels per pixel. Thus, each image
can be represented by a 1024-dimensional vector.
For each individual, six images were taken with labels to form the training set. The rest of
the database was considered to be the testing set. The training samples were used to learn
a projection. The testing samples were then projected into the reduced space. Recognition
was performed using a nearest neighbor classifier. In general, the performance of PCA,
LDA and LPP varies with the number of dimensions. We show the best results obtained by
them. The error rates are summarized in Table 1. As can be seen, LPP outperforms both
PCA and LDA.
5. Conclusions
In this paper, we propose a new linear dimensionality reduction algorithm called Locality
Preserving Projections. It is based on the same variational principle that gives rise to the
Laplacian Eigenmap [2]. As a result it has similar locality preserving properties.
Our approach also has several possible advantages over recent nonparametric techniques
for global nonlinear dimensionality reduction such as [2][5][6]. It yields a map which
is simple, linear, and defined everywhere (and therefore on novel test data points). The
algorithm can be easily kernelized yielding a natural non-linear extension.
Performance improvement of this method over Principal Component Analysis is demonstrated through several experiments. Though our method is a linear algorithm, it is capable
of discovering the non-linear structure of the data manifold.
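A minimal sketch of the Locality Preserving Projections computation in its standard formulation: build a k-nearest-neighbor adjacency graph with heat-kernel weights, form the graph Laplacian L = D − W, and solve the generalized eigenproblem XᵀL X a = λ XᵀD X a for the eigenvectors with the smallest eigenvalues. The neighborhood size, kernel width, and ridge term below are illustrative choices, not values from the paper.

```python
import numpy as np

def lpp(X, k=5, t=None, n_dims=2):
    """Locality Preserving Projection (standard formulation).

    X: (n_samples, n_features).  Returns a (n_features, n_dims) matrix
    whose columns solve X^T L X a = lam X^T D X a, taking the
    eigenvectors with the smallest eigenvalues.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if t is None:
        t = np.median(d2)                     # heat-kernel width (heuristic)
    # k-nearest-neighbor adjacency graph with heat-kernel weights.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]     # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)                    # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                                 # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # ridge keeps B invertible
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)             # smallest eigenvalues first
    return vecs[:, order[:n_dims]].real

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
A = lpp(X)
print(A.shape)  # (10, 2)
```

Because the result is an explicit linear map, novel test points are embedded by a single matrix product, which is the "defined everywhere" property noted above.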
References
[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[2] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Advances in Neural Information Processing Systems 14, Vancouver, British Columbia, Canada, 2002.
[3] C. L. Blake and C. J. Merz, "UCI repository of machine learning databases," http://www.ics.uci.edu/~mlearn/MLRepository.html, Irvine, CA: University of California, Department of Information and Computer Science, 1998.
[4] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number 92, 1997.
[5] Sam Roweis and Lawrence K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, 22 December 2000.
[6] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, 22 December 2000.
[7] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[8] Yale Univ. Face Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html.
614
Gish and Blanz
Comparing the Performance of Connectionist
and Statistical Classifiers on an Image
Segmentation Problem
Sheri L. Gish
w. E. Blanz
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120
ABSTRACT
In the development of an image segmentation system for real time
image processing applications, we apply the classical decision analysis paradigm by viewing image segmentation as a pixel classification task. We use supervised training to derive a classifier for our
system from a set of examples of a particular pixel classification
problem. In this study, we test the suitability of a connectionist method against two statistical methods, Gaussian maximum
likelihood classifier and first, second, and third degree polynomial
classifiers, for the solution of a "real world" image segmentation
problem taken from combustion research. Classifiers are derived
using all three methods, and the performance of all of the classifiers on the training data set as well as on 3 separate entire test
images is measured.
1
Introduction
We are applying the trainable machine paradigm in our development of an image
segmentation system to be used in real time image processing applications. We
view image segmentation as a classical decision analysis task; each pixel in a scene
is described by a set of measurements, and we use that set of measurements with
a classifier of our choice to determine the region or object within a scene to which
that pixel belongs. Performing image segmentation as a decision analysis task provides several advantages. We can exploit the inherent trainability found in decision
analysis systems [1] and use supervised training to derive a classifier from a set of
examples of a particular pixel classification problem. Classifiers derived using the
trainable machine paradigm will exhibit the property of generalization, and thus can
be applied to data representing a set of problems similar to the example problem. In
our pixel classification scheme, the classifier can be derived solely from the quantitative characteristics of the problem data. Our approach eliminates the dependency on qualitative characteristics of the problem data, which is often characteristic of explicitly derived classification algorithms [2,3].
Classical decision analysis methods employ statistical techniques. We have compared a connectionist system to a set of alternative statistical methods on classification problems in which the classifier is derived using supervised training, and have found that the connectionist alternative is comparable, and in some cases preferable, to the statistical alternatives in terms of performance on problems of varying complexity [4]. That comparison study also analyzed the alternative methods in terms of cost of implementation of the solution architecture in digital LSI. In terms
of our cost analysis, the connectionist architectures were much simpler to implement
than the statistical architectures for the more complex classification problems; this
property of the connectionist methods makes them very attractive implementation
choices for systems requiring hardware implementations for difficult applications.
In this study, we evaluate the performance of a connectionist method and several statistical methods as the classifier component of our real time image segmentation
system. The classification problem we use is a "real world" pixel classification task
using images of the size (200 pixels by 200 pixels) and variable data quality typical
of the problems a production system would be used to solve. We thus test the
suitability of the connectionist method for incorporation in a system with the performance requirements of our system, as well as the feasibility of our exploiting the
advantages the simple connectionist architectures provide for systems implemented
in hardware.
2
Methods
2.1
The Image Segmentation System
The image segmentation system we use is described in [5], and summarized in
Figure 1. The system is designed to perform low level image segmentation in real
time; for production, the feature extraction and classifier system components are
implemented in hardware. The classifier parameters are derived during the Training
Phase. A user at a workstation outlines the regions or objects of interest in a
training image. The system performs low level feature extraction on the training
image, and the results of the feature extraction plus the input from the user are
combined automatically by the system to form a training data set. The system then
applies a supervised training method making use of the training data set in order
to derive the coefficients for the classifier which can perform the pixel classification
task. The feature extraction process is capable of computing 14 classes of features
for each pixel; up to 10 features with the highest discriminatory power are used to
describe all of the pixels in the image. This selection of features is based only on an analysis of the results of the feature extraction process and is independent of the
supervised learning paradigm being used to derive the classifier [6]. The identical
feature extraction process is applied in both the Training and Running Phases for
a particular image segmentation problem.
Figure 1: Diagram of the real time image segmentation system.
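The feature selection step described above keeps only the features with the highest discriminatory power, independent of the learning paradigm. The actual criterion is the non-parametric method of [6]; as a simple stand-in, the sketch below ranks features by a Fisher-style score (between-class spread of the feature means over within-class variance) and keeps the top ones. The data and the scoring rule are illustrative assumptions, not the paper's method.

```python
# Rank candidate pixel features by class separability and keep the best 10,
# mirroring the "highest discriminatory power" selection described above.
import numpy as np

def select_features(X, y, n_keep=10):
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    score = between / (within + 1e-12)
    return np.argsort(score)[::-1][:n_keep]   # indices of the kept features

rng = np.random.default_rng(2)
y = rng.integers(0, 3, size=200)              # 3 pixel classes
X = rng.normal(size=(200, 14))                # 14 candidate feature classes
X[:, 0] += 3.0 * y                            # make feature 0 informative
kept = select_features(X, y, n_keep=10)
print(kept[0])  # 0
```

The classifier that follows sees only the selected columns, so the same selection must be applied in both the Training and Running Phases.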
2.2
The Image Segmentation Problem
The image segmentation problem used in this study is from combustion research and
is described in [3]. The images are from a series of images of a combustion chamber
taken by a high speed camera during the inflammation process of a gas/air mixture. The segmentation task is to determine the area of inflamed gas in the image;
therefore, the pixels in the image are classified into 3 different classes: cylinder,
uninflamed gas, and flamed gas (See Figure 2). Exact determination of the area
of flamed gas is not possible using pixel classification alone, but the greater the
success of the pixel classification step, the greater the likelihood that a real time
image segmentation system could be used successfully on this problem.
2.3
The Classifiers
The set of classifiers used in this study is composed of a connectionist classifier
based on the Parallel Distributed Processing (PDP) model described in [7] and two
statistical methods: a Gaussian maximum likelihood classifier (a Bayes classifier),
and a polynomial classifier based on first, second, and third degree polynomials.
This set of classifiers was used in a general study comparing the performance of
Figure 2: The image segmentation problem is to classify each image pixel into 1 of 3 regions.
the alternatives on a set of classification problems; all of the classifiers as well as
adaptation procedures are described in detail in that study [4]. Implementation and adaptation of all classifiers in this study was performed as software simulation.
The connectionist classifier was implemented in CMU Common Lisp running on an
IBM RT workstation.
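One of the statistical baselines is a Gaussian maximum likelihood (Bayes) classifier: fit a mean and covariance per class and assign each pattern to the class with the highest log-likelihood. The sketch below is a generic version of that idea on synthetic data, not the paper's exact implementation.

```python
# Gaussian maximum likelihood classification: per-class Gaussian fit,
# prediction by the largest class-conditional log-likelihood plus log-prior.
import numpy as np

def fit(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])   # regularized
        params[c] = (mu, np.linalg.inv(cov),
                     np.log(np.linalg.det(cov)),
                     np.log(len(Xc) / len(X)))           # class prior
    return params

def predict(params, X):
    scores = []
    for mu, icov, logdet, logprior in params.values():
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, icov, d)      # Mahalanobis^2
        scores.append(logprior - 0.5 * (logdet + maha))
    return np.array(list(params))[np.argmax(scores, axis=0)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(4, 1, (50, 4))])
y = np.repeat([0, 1], 50)
print(predict(fit(X, y), X[:3]))  # [0 0 0]
```

When the problem data are not well modeled by per-class Gaussians, as the results below suggest for this task, this classifier degrades even though it fits the training sample.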
The connectionist classifier architecture is a multi-layer feedforward network with one hidden layer. The network is fully connected, but there are only connections between adjacent layers. The number of units in the input and output layers are determined by the number of features in the feature vector describing each pixel and a binary encoding scheme for the class to which the pixel belongs, respectively. The number of units in the hidden layer is an architectural "free parameter." The network used in this study has 10 units in the input layer, 12 units in the hidden layer, and 3 units in the output layer.
Network activation is achieved by using the continuous, nonlinear logistic function
defined in [8]. The connectionist adaptation procedure is the application of the backpropagation learning rule also defined in [8]. For this problem, the learning rate η = 0.01 and the momentum α = 0.9; both terms were held constant throughout adaptation. The presentation of all of the patterns in the training data set is termed a trial; network weights and unit biases were updated after the presentation of each pattern during a trial.
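The setup just described can be sketched as follows: a fully connected 10-12-3 feedforward network with logistic units, trained by backpropagation with learning rate 0.01 and momentum 0.9, updating weights after every pattern. The data here is synthetic and much smaller than the roughly 4,000-vector training set described below; this is an illustration of the training procedure, not a reproduction of it.

```python
# 10-12-3 logistic network trained by per-pattern backpropagation with
# momentum, matching the hyperparameters stated in the text.
import numpy as np

rng = np.random.default_rng(4)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 0.1, (12, 10)), np.zeros(12)   # input -> hidden
W2, b2 = rng.normal(0, 0.1, (3, 12)), np.zeros(3)     # hidden -> output
params = [W1, b1, W2, b2]
vel = [np.zeros_like(p) for p in params]              # momentum buffers
eta, alpha = 0.01, 0.9

X = rng.normal(size=(200, 10))
T = np.eye(3)[rng.integers(0, 3, size=200)]           # binary class encoding

for trial in range(20):                               # one trial = one pass
    for x, t in zip(X, T):                            # update per pattern
        h = logistic(W1 @ x + b1)
        o = logistic(W2 @ h + b2)
        # Backpropagate squared-error gradients through the logistic units.
        do = (o - t) * o * (1 - o)
        dh = (W2.T @ do) * h * (1 - h)
        grads = [np.outer(dh, x), dh, np.outer(do, h), do]
        for p, v, g in zip(params, vel, grads):
            v *= alpha                                # momentum term
            v -= eta * g
            p += v                                    # in-place update

pred = logistic(W2 @ logistic(W1 @ X.T + b1[:, None]) + b2[:, None])
print(pred.shape)  # (3, 200)
```

Per-pattern (stochastic) updates, as opposed to batch updates per trial, were the choice reported for this problem.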
The training data set for this problem was generated automatically by the image
segmentation system. This training data set consists of approximately 4,000 ten
element (feature) vectors (each vector describes one pixel); each vector is labeled as
belonging to one of the 3 regions of interest in the imnge. The training data set was
constructed from one entire training image, and is composed of vectors statistically
representative of the pixels in each of the 3 regions of interest in that image.
All of the classifiers tested in this study were adapted from the same training data
set. The connectionist classifier was defined to be converged for this problem before
it was tested. Network convergence is determined from the results of two separate
tests. In the first test, the difference between the network output and the target
output averaged over the entire training data set has to reach a minimum. In the
second test, the performance of the network in classifying the training data set is
measured, and the number of misclassifications made by the network has to reach
a minimum. Actual network performance in classifying a pattern is measured after
post-processing of the output vector. The real outputs of each unit in the output
layer are assigned the values of 0 or 1 by application of a 0.5 decision threshold.
In our binary encoding scheme, the output vector should have only one element
with the value 1; that element corresponds to one of the 3 classes. H the network
produces an output vector with either more than one element with the value 1 or all
elements with the value 0, the pattern generating that output is considered rejected.
For the test problem in this study, all of the classifiers were set to reject patterns
in the test data samples. All of the statistical classifiers had a rejection threshold
set to 0.03.
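The post-processing rule on the network outputs reduces to a few lines: threshold each real-valued output at 0.5, accept the pattern only if exactly one element fires, and reject it otherwise. A sketch of that decision rule:

```python
# Decision rule applied to the 3-unit output vector: 0.5 threshold, accept
# only when exactly one unit fires, otherwise reject the pattern.
import numpy as np

def decide(output, threshold=0.5):
    """Return the predicted class index, or None to reject the pattern."""
    fired = output >= threshold
    if fired.sum() != 1:                  # zero or multiple units at 1
        return None
    return int(np.argmax(fired))

print(decide(np.array([0.1, 0.8, 0.2])))  # 1
print(decide(np.array([0.6, 0.7, 0.1])))  # None (two units fired)
print(decide(np.array([0.2, 0.3, 0.1])))  # None (no unit fired)
```

The statistical classifiers use a different mechanism (the 0.03 rejection threshold mentioned above), but both report error and reject rates on the same footing.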
3
Results
The performance of each of the classifiers (connectionist, Gaussian maximum likelihood, and linear, quadratic, and cubic polynomial) was measured on the training
data set and test data representing 3 entire images taken from the series of combustion chamber images. One of those images, labeled Image 1, is the image from
which the training data set was constructed. The performance of all of the classifiers
is summarized in Table 1.
Although all of the classifiers were able to classify the training data set with comparably few misclassifications, the Gaussian maximum likelihood classifier and the
quadratic polynomial classifier were unable to perform on any of the 3 entire test
images. The connectionist classifier was the only alternative tested in this study to
deliver acceptable performance on all 3 test images; the connectionist classifier had
lower error rates on the test images than it delivered on the training data sample.
Both the linear polynomial and cubic polynomial classifiers performed acceptably
on the test Image 2, but then both exhibited high error rates on the other two
test images. For this image segmentation problem, only the connectionist method
generalized from the training data set to a solution with acceptable performance.
In Figure 3, the results from pixel classification performed by the connectionist and
polynomial classifiers on all 3 test images are portrayed as segmented images. The
actual test images are included at the left of the figure.
4
Conclusions
Our results demonstrate the feasibility of the application of a connectionist decision
analysis method to the solution of a "real world" image segmentation problem. The
Table 1: A summary of the performance of the classifiers. For each data set (the training data sample; Image 1, the image from which the training data set was taken; Image 2; and Image 3), the table reports the percentage of misclassified patterns (error) and the percentage of rejected patterns (reject) for the Gaussian classifier, for the polynomial classifiers of degree 1, 2, and 3, and for the connectionist classifier.
inclusion of a connectionist classifier in our supervised segmentation system will allow us to meet our performance requirements under real world problem constraints.
Although the application of connectionism to the solution of real time machine
vision problems represents a new processing method, our solution strategy has remained consistent with the decision analysis paradigm. Our connectionist classifiers
are derived solely from the quantitative characteristics of the problem data; our connectionist architecture thus remains simple and need not be re-designed according
to qualitative characteristics of each specific problem to which it will be applied.
Our connectionist architecture is independent of the image size; we have applied the
identical architecture successfully to images which range in size from 200 pixels by
200 pixels to 512 pixels by 512 pixels [9]. In most research to date in which neural
networks are applied to machine vision, entire images explicitly are mapped to networks by making each pixel in an image correspond to a different unit in a network
layer (see [10,11] for examples). This "pixel map" representation makes scaling up
to larger image sizes from the idealized "toy" research images a significant problem.
Most statistical pattern classification methods require that problem data satisfy
tIle assumptions of statistical models; unfortunately, real world problem data are
complex and of variable quality and thus rarely can be used to guide the choice of an
appropriate method for the solution of a particular problem a priori. For the image
segmentation problem reported in this study, our classifier performance results show
that the problem data actually did not satisfy the assumptions behind the statistical
models underlying the Gaussian maximum likelihood classifier or the polynomial
Figure 3: The grey levels assigned to each region are: Black - cylinder, Light Grey - uninflamed gas, Grey - flamed gas. Original images are at the left of the figure.
classifiers. It appears that the Gaussian model least fits our problem data, the polynomial classifiers provide a slightly better fit, and the connectionist method provides the fit required for the solution of the problem. It is also notable that all the alternative methods in this study could be adapted to perform acceptably on the training data set, but extensive testing on several different entire images was required in order to demonstrate the true performance of the alternative methods on the actual problem, rather than just on the training data set.
These results show that a connectionist method is a viable choice for a system such as ours, which requires a simple architecture readily implemented in hardware, the flexibility to handle complex problems described by large amounts of data, and the robustness to not require problem data to meet many model assumptions a priori.
References
[1] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[2] W. E. Blanz, J. L. C. Sanz, and D. Petkovic. Control-free low-level image segmentation: Theory, architecture, and experimentation. In J. L. C. Sanz, editor, Advances of Machine Vision, Applications and Architectures, Springer-Verlag, 1988.
[3] B. Straub and W. E. Blanz. Combined decision theoretic and syntactic approach to image segmentation. Machine Vision and Applications, 2(1):17-30, 1989.
[4] Sheri L. Gish and W. E. Blanz. Comparing a Connectionist Trainable Classifier with Classical Statistical Decision Analysis Methods. Research Report RJ 6891 (65717), IBM, June 1989.
[5] W. E. Blanz, B. Shung, C. Cox, W. Greiner, B. Dom, and D. Petkovic. Design and implementation of a low level image segmentation architecture - LISA. Research Report RJ 7194 (67673), IBM, December 1989.
[6] W. E. Blanz. Non-parametric feature selection for multiple class processes. In Proc. 9th Int. Conf. Pattern Recognition, Rome, Italy, Nov. 14-17, 1988.
[7] David E. Rumelhart, James L. McClelland, et al. Parallel Distributed Processing. MIT Press, Cambridge, Massachusetts, 1986.
[8] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. In David E. Rumelhart, James L. McClelland, et al., editors, Parallel Distributed Processing, chapter 8, MIT Press, Cambridge, Massachusetts, 1986.
[9] W. E. Blanz and Sheri L. Gish. A Connectionist Classifier Architecture Applied To Image Segmentation. Research Report RJ 7193 (67672), IBM, December 1989.
[10] K. Fukushima, S. Miyake, and T. Ito. Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5):826-834, 1983.
[11] Y. Hirai. A model of human associative processor. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5):851-857, 1983.
Online Passive-Aggressive Algorithms
Koby Crammer Ofer Dekel Shai Shalev-Shwartz Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{kobics,oferd,shais,singer}@cs.huji.ac.il
Abstract
We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the
three problems. We prove worst case loss bounds for various algorithms
for both the realizable case and the non-realizable case. A conversion
of our main online algorithm to the setting of batch learning is also discussed. The end result is new algorithms and accompanying loss bounds
for the hinge-loss.
1
Introduction
In this paper we describe and analyze several learning tasks through the same algorithmic
prism. Specifically, we discuss online classification, online regression, and online uniclass
prediction. In all three settings we receive instances in a sequential manner. For concreteness we assume that these instances are vectors in Rⁿ and denote the instance received on round t by x_t. In the classification problem our goal is to find a mapping from the instance space into the set of labels, {−1, +1}. In the regression problem the mapping is into R. Our goal in the uniclass problem is to find a center-point in Rⁿ with a small Euclidean distance to all of the instances.
We first describe the classification and regression problems. For classification and regression we restrict ourselves to mappings based on a weight vector w ∈ Rⁿ, namely the mapping f : Rⁿ → R takes the form f(x) = w·x. After receiving x_t we extend a prediction ŷ_t using f. For regression the prediction is simply ŷ_t = f(x_t) while for classification ŷ_t = sign(f(x_t)). After extending the prediction ŷ_t, we receive the true outcome
y_t. We then suffer an instantaneous loss based on the discrepancy between y_t and f(x_t).
The goal of the online learning algorithm is to minimize the cumulative loss. The losses
we discuss in this paper depend on a pre-defined insensitivity parameter ε and are denoted
ℓ_ε(w; (x, y)). For regression the ε-insensitive loss is,

    ℓ_ε(w; (x, y)) = { 0                if |y − w·x| ≤ ε
                     { |y − w·x| − ε    otherwise            (1)

while for classification the ε-insensitive loss is defined to be,

    ℓ_ε(w; (x, y)) = { 0              if y(w·x) ≥ ε
                     { ε − y(w·x)     otherwise              (2)
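The two ε-insensitive losses above translate directly into code. The following is an illustrative sketch (not from the paper); `eps` plays the role of ε, and for classification a negative `eps` recovers the margin-style hinge loss used later in the paper.

```python
def dot(w, x):
    # inner product w . x for plain Python lists
    return sum(wi * xi for wi, xi in zip(w, x))

def regression_loss(w, x, y, eps):
    # eps-insensitive regression loss, Eq. (1): zero inside the eps-tube
    return max(abs(y - dot(w, x)) - eps, 0.0)

def classification_loss(w, x, y, eps):
    # eps-insensitive classification loss, Eq. (2): zero once y*(w.x) >= eps
    return max(eps - y * dot(w, x), 0.0)

w = [1.0, -1.0]
x = [0.5, 0.25]          # w.x = 0.25
print(regression_loss(w, x, y=0.3, eps=0.1))        # inside the tube -> 0.0
print(classification_loss(w, x, y=-1.0, eps=0.0))   # misclassified -> 0.25
```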
As in other online algorithms the weight vector w is updated after receiving the feedback
yt . Therefore, we denote by wt the vector used for prediction on round t. We leave the
details on the form this update takes to later sections.
    Problem         | Example (z_t)               | Discrepancy δ       | Update direction v_t
    Classification  | (x_t, y_t) ∈ Rⁿ × {−1, +1}  | −y_t(w_t·x_t)       | y_t x_t
    Regression      | (x_t, y_t) ∈ Rⁿ × R         | |y_t − w_t·x_t|     | sign(y_t − w_t·x_t) x_t
    Uniclass        | (x_t, y_t) ∈ Rⁿ × {1}       | ‖x_t − w_t‖         | (x_t − w_t)/‖x_t − w_t‖

Table 1: Summary of the settings and parameters employed by the additive PA algorithm
for classification, regression, and uniclass.
The setting for uniclass is slightly different as we only observe a sequence of instances.
The goal of the uniclass algorithm is to find a center-point w such that all instances x_t fall
within a radius of ε from w. Since we employ the framework of online learning the vector
w is constructed incrementally. The vector w_t therefore plays the role of the instantaneous
center and is adapted after observing each instance x_t. If an example x_t falls within a
Euclidean distance ε from w_t then we suffer no loss. Otherwise, the loss is the distance
between x_t and a ball of radius ε centered at w_t. Formally, the uniclass loss is,

    ℓ_ε(w_t; x_t) = { 0                 if ‖x_t − w_t‖ ≤ ε
                    { ‖x_t − w_t‖ − ε   otherwise            (3)
In the next sections we give additive and multiplicative online algorithms for the above
learning problems and prove respective online loss bounds. A common thread of our approach is a unified view of all three tasks which leads to a single algorithmic framework
with a common analysis.
Related work: Our work builds on numerous techniques from online learning. The updates we derive are based on an optimization problem directly related to the one employed
by Support Vector Machines [15]. Li and Long [14] were among the first to suggest the idea
of converting a batch optimization problem into an online task. Our work borrows ideas
from the work of Warmuth and colleagues [11]. In particular, Gentile and Warmuth [6]
generalized and adapted techniques from [11] to the hinge loss which is closely related to
the losses defined in Eqs. (1)-(3). Kivinen et al. [10] discussed a general framework for
gradient-based online learning where some of their bounds bear similarities to the bounds
presented in this paper. Our work also generalizes and greatly improves online loss bounds
for classification given in [3]. Herbster [8] suggested an algorithm for classification and
regression that is equivalent to one of the algorithms given in this paper; however, the loss bound derived by Herbster is somewhat weaker. Finally, we would like to note that similar
algorithms have been devised in the convex optimization community (cf. [1, 2]). The main
difference between these algorithms and the online algorithms presented in this paper lies
in the analysis: while we derive worst case, finite horizon loss bounds, the optimization
community is mostly concerned with asymptotic convergence properties.
2 A Unified Loss
The three problems described in the previous section share common algebraic properties
which we explore in this section. The end result is a common algorithmic framework that is
applicable to all three problems and an accompanying analysis (Sec. 3). Let z_t = (x_t, y_t)
denote the instance-target pair received on round t, where in the case of uniclass we set
y_t = 1 as a placeholder. For a given example z_t, let δ(w; z_t) denote the discrepancy of
w on z_t: for classification we set the discrepancy to be −y_t(w·x_t) (the negative of the
margin), for regression it is |y_t − w·x_t|, and for uniclass ‖x_t − w‖. Fixing z_t, we also
view δ(w; z_t) as a convex function of w. Let [a]₊ be the function that equals a whenever
a > 0 and otherwise equals zero. Using the discrepancies defined above, the three different
losses given in Eqs. (1)-(3) can all be written as ℓ_ε(w; z) = [δ(w; z) − ε]₊, where for
classification we set ε ≤ 0 since the discrepancy is defined as the negative of the margin.
While this construction might seem a bit odd for classification, it is very useful in unifying
the three problems. To conclude, the loss in all three problems can be derived by applying
the same hinge loss to different (problem dependent) discrepancies.
3 An Additive Algorithm for the Realizable Case
Equipped with the simple unified notion of loss we describe in this section a single online
algorithm that is applicable to all three problems. The algorithm and the analysis we present
in this section assume that there exist a weight vector w* and an insensitivity parameter ε*
for which the data is perfectly realizable. Namely, we assume that ℓ_{ε*}(w*; z_t) = 0 for all
t, which implies that,

    y_t(w*·x_t) ≥ |ε*| (Class.)    |y_t − w*·x_t| ≤ ε* (Reg.)    ‖x_t − w*‖ ≤ ε* (Unic.)    (4)
A modification of the algorithm for the unrealizable case is given in Sec. 5.
The general method we use for deriving our on-line update rule is to define the new weight
vector w_{t+1} as the solution to the following projection problem

    w_{t+1} = argmin_w (1/2) ‖w − w_t‖²   s.t.   ℓ_ε(w; z_t) = 0 ,        (5)

namely, w_{t+1} is set to be the projection of w_t onto the set of all weight vectors that attain
a loss of zero. We denote this set by C. For the case of classification, C is a half-space,
C = {w : −y_t(w·x_t) ≤ ε}. For regression C is an ε-hyper-slab, C = {w : |w·x_t −
y_t| ≤ ε}, and for uniclass it is a ball of radius ε centered at x_t, C = {w : ‖w − x_t‖ ≤
ε}. In Fig. 2 we illustrate the projection for the three cases. This optimization problem
attempts to keep w_{t+1} as close to w_t as possible, while forcing w_{t+1} to achieve a zero
loss on the most recent example. The resulting algorithm is passive whenever the loss is
zero, that is, w_{t+1} = w_t whenever ℓ_ε(w_t; z_t) = 0. In contrast, on rounds for which
ℓ_ε(w_t; z_t) > 0 we aggressively force w_{t+1} to satisfy the constraint ℓ_ε(w_{t+1}; z_t) = 0.
Therefore we name the algorithm passive-aggressive, or PA for short. In the following we
show that for the three problems described above the solution to the optimization problem
in Eq. (5) yields the following update rule,

    w_{t+1} = w_t + τ_t v_t ,        (6)

where v_t is minus the gradient of the discrepancy and τ_t = ℓ_ε(w_t; z_t)/‖v_t‖². (Note
that although the discrepancy might not be differentiable everywhere, its gradient exists
whenever the loss is greater than zero).

    Parameter: Insensitivity ε
    Initialize: Set w_1 = 0 (R&C) ; w_1 = x_0 (U)
    For t = 1, 2, . . .
      - Get a new instance: z_t ∈ Rⁿ
      - Suffer loss: ℓ_ε(w_t; z_t)
      - If ℓ_ε(w_t; z_t) > 0 :
          1. Set v_t (see Table 1)
          2. Set τ_t = ℓ_ε(w_t; z_t)/‖v_t‖²
          3. Update: w_{t+1} = w_t + τ_t v_t

    Figure 1: The additive PA algorithm.

To see that the update from Eq. (6) is the solution to the problem defined by Eq. (5), first note that the
equality constraint ℓ_ε(w; z_t) = 0 is equivalent to the inequality constraint δ(w; z_t) ≤ ε.
The Lagrangian of the optimization problem is

    L(w, τ) = (1/2) ‖w − w_t‖² + τ (δ(w; z_t) − ε) ,        (7)
Figure 2: An illustration of the update: w_{t+1} is found by projecting the current vector
w_t onto the set of vectors attaining a zero loss on z_t. This set is a stripe in the case of
regression, a half-space for classification, and a ball for uniclass.
where τ ≥ 0 is a Lagrange multiplier. To find a saddle point of L we first differentiate L
with respect to w and use the fact that v_t is minus the gradient of the discrepancy to get,

    ∇_w(L) = w − w_t + τ ∇_w δ = 0    ⟹    w = w_t + τ v_t .

To find the value of τ we use the KKT conditions. Hence, whenever τ is positive (as in
the case of non-zero loss), the inequality constraint, δ(w; z_t) ≤ ε, becomes an equality.
Simple algebraic manipulations yield that the value τ for which δ(w; z_t) = ε for all three
problems is equal to τ_t = ℓ_ε(w; z_t)/‖v_t‖². A summary of the discrepancy functions and
their respective updates is given in Table 1. The pseudo-code of the additive algorithm for
all three settings is given in Fig. 1.
We now discuss the initialization of w1 . For classification and regression a reasonable
choice for w1 is the zero vector. However, in the case of uniclass initializing w1 to be
the zero vector might incur large losses if, for instance, all the instances are located far
away from the origin. A more sensible choice for uniclass is to initialize w1 to be one of
the examples. For simplicity of the description we assume that we are provided with an
example x0 prior to the run of the algorithm and initialize w1 = x0 .
To conclude this section we note that for all three cases the weight vector wt is a linear
combination of the instances. This representation enables us to employ kernels [15].
4 Analysis
The following theorem provides a unified loss bound for all three settings. After proving
the theorem we discuss a few of its implications.
Theorem 1 Let z_1, z_2, . . . , z_t, . . . be a sequence of examples for one of the problems described in Table 1. Assume that there exist w* and ε* such that ℓ_{ε*}(w*; z_t) = 0 for all
t. Then if the additive PA algorithm is run with ε ≥ ε*, the following bound holds for any
T ≥ 1,

    Σ_{t=1}^{T} (ℓ_ε(w_t; z_t))² + 2(ε − ε*) Σ_{t=1}^{T} ℓ_ε(w_t; z_t) ≤ B ‖w* − w_1‖² ,        (8)

where for classification and regression B is a bound on the squared norm of the instances
(∀t : B ≥ ‖x_t‖₂²) and B = 1 for uniclass.
Proof: Define Δ_t = ‖w_t − w*‖² − ‖w_{t+1} − w*‖². We prove the theorem by bounding
Σ_{t=1}^{T} Δ_t from above and below. First note that Σ_{t=1}^{T} Δ_t is a telescopic sum and therefore

    Σ_{t=1}^{T} Δ_t = ‖w_1 − w*‖² − ‖w_{T+1} − w*‖² ≤ ‖w_1 − w*‖² .        (9)

This provides an upper bound on Σ_t Δ_t. In the following we prove the lower bound

    Δ_t ≥ (ℓ_ε(w_t; z_t)/B) (ℓ_ε(w_t; z_t) + 2(ε − ε*)) .        (10)

First note that we do not modify w_t if ℓ_ε(w_t; z_t) = 0. Therefore, this inequality trivially
holds when ℓ_ε(w_t; z_t) = 0 and thus we can restrict ourselves to rounds on which the
discrepancy is larger than ε, which implies that ℓ_ε(w_t; z_t) = δ(w_t; z_t) − ε. Let t be such a
round; then by rewriting w_{t+1} as w_t + τ_t v_t we get,

    Δ_t = ‖w_t − w*‖² − ‖w_{t+1} − w*‖² = ‖w_t − w*‖² − ‖w_t + τ_t v_t − w*‖²
        = −τ_t² ‖v_t‖² + 2 τ_t v_t · (w* − w_t) .        (11)

Using the fact that −v_t is the gradient of the convex function δ(w; z_t) at w_t we have,

    δ(w*; z_t) − δ(w_t; z_t) ≥ (−v_t) · (w* − w_t) .        (12)

Adding and subtracting ε from the left-hand side of Eq. (12) and rearranging we get,

    v_t · (w* − w_t) ≥ (δ(w_t; z_t) − ε) + (ε − δ(w*; z_t)) .        (13)

Recall that δ(w_t; z_t) − ε = ℓ_ε(w_t; z_t) and that ε* ≥ δ(w*; z_t). Therefore,

    (δ(w_t; z_t) − ε) + (ε − δ(w*; z_t)) ≥ ℓ_ε(w_t; z_t) + (ε − ε*) .        (14)

Combining Eq. (11) with Eqs. (13)-(14) we get

    Δ_t ≥ −τ_t² ‖v_t‖² + 2 τ_t (ℓ_ε(w_t; z_t) + (ε − ε*))
        = τ_t (−τ_t ‖v_t‖² + 2 ℓ_ε(w_t; z_t) + 2(ε − ε*)) .        (15)

Plugging τ_t = ℓ_ε(w_t; z_t)/‖v_t‖² into Eq. (15) we get

    Δ_t ≥ (ℓ_ε(w_t; z_t)/‖v_t‖²) (ℓ_ε(w_t; z_t) + 2(ε − ε*)) .

For uniclass ‖v_t‖² is always equal to 1 by construction, and for classification and regression
we have ‖v_t‖² = ‖x_t‖² ≤ B, which gives,

    Δ_t ≥ (ℓ_ε(w_t; z_t)/B) (ℓ_ε(w_t; z_t) + 2(ε − ε*)) .

Comparing the above lower bound with the upper bound in Eq. (9) we get

    Σ_{t=1}^{T} (ℓ_ε(w_t; z_t))² + Σ_{t=1}^{T} 2(ε − ε*) ℓ_ε(w_t; z_t) ≤ B ‖w* − w_1‖² .

This concludes the proof.
Let us now discuss the implications of Thm. 1. We first focus on the classification case. Due
to the realizability assumption, there exist w* and ε* such that for all t, ℓ_{ε*}(w*; z_t) = 0,
which implies that y_t(w*·x_t) ≥ −ε*. Dividing w* by its norm we can rewrite the latter as
y_t(ŵ*·x_t) ≥ γ̂ where ŵ* = w*/‖w*‖ and γ̂ = |ε*|/‖w*‖. The parameter γ̂ is often
referred to as the margin of a unit-norm separating hyperplane. Now, setting ε = −1 we
get that ℓ_ε(w; z) = [1 − y(w·x)]₊, the hinge loss for classification. We now use Thm. 1
to obtain two loss bounds for the hinge loss in a classification setting. First, note that by
also setting w* = ŵ*/γ̂, and thus ε* = −1, we get that the second term on the left hand
side of Eq. (8) vanishes as ε* = ε = −1 and thus,

    Σ_{t=1}^{T} ([1 − y_t(w_t·x_t)]₊)² ≤ B ‖w*‖² = B/γ̂² .        (17)
We thus have obtained a bound on the squared hinge loss. The same bound was also
derived by Herbster [8]. We can immediately use this bound to derive a mistake bound for
the PA algorithm. Note that the algorithm makes a prediction mistake iff y_t(w_t·x_t) ≤ 0.
In this case, [1 − y_t(w_t·x_t)]₊ ≥ 1 and therefore the number of prediction mistakes is
bounded by B/γ̂². This bound is common to online algorithms for classification such
as ROMMA [14].
We can also manipulate the result of Thm. 1 to obtain a direct bound on the hinge loss.
Using again ε = −1 and omitting the first term in the left hand side of Eq. (8) we get,

    2(−1 − ε*) Σ_{t=1}^{T} [1 − y_t(w_t·x_t)]₊ ≤ B ‖w*‖² .

By setting w* = 2ŵ*/γ̂, which implies that ε* = −2, we can further simplify the above
to get a bound on the cumulative hinge loss,

    Σ_{t=1}^{T} [1 − y_t(w_t·x_t)]₊ ≤ 2B/γ̂² .
To conclude this section, we would like to point out that the PA online algorithm can also
be used as a building block for a batch algorithm. Concretely, let S = {z_1, . . . , z_m} be a
fixed training set and let δ ∈ R be a small positive number. We start with an initial weight
vector w_1 and then invoke the PA algorithm as follows. We choose an example z ∈ S such
that ℓ_ε(w_1; z)² > δ and present z to the PA algorithm. We repeat this process and obtain
w_2, w_3, . . . until the T-th iteration on which for all z ∈ S, ℓ_ε(w_T; z)² ≤ δ. The output of
the batch algorithm is w_T. Due to the bound of Thm. 1, T is at most ⌈B ‖w* − w_1‖²/δ⌉ and
by construction the loss of w_T on any z ∈ S is at most √δ. Moreover, in the following
lemma we show that the norm of w_T cannot be too large. Since w_T achieves a small
empirical loss and its norm is small, it can be shown using classical techniques (cf. [15])
that the loss of w_T on unseen data is small as well.
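The batch conversion described above can be sketched in a few lines. This is an illustrative sketch only; `pa_step` stands for one PA update and `loss` for ℓ_ε, both assumed to be supplied by the caller, and `delta` is the stopping threshold on the squared loss.

```python
def pa_batch(S, pa_step, loss, w1, delta):
    """Repeatedly feed PA an example whose squared loss exceeds delta,
    until every example in S has squared loss at most delta."""
    w = w1
    while True:
        violators = [z for z in S if loss(w, z) ** 2 > delta]
        if not violators:
            return w        # loss on every z in S is at most sqrt(delta)
        w = pa_step(w, violators[0])
```

By Thm. 1 the loop above performs at most ⌈B‖w* − w_1‖²/δ⌉ updates when the data is realizable.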
Lemma 2 Under the same conditions of Thm. 1, the following bound holds for any T ≥ 1,

    ‖w_T − w_1‖ ≤ 2 ‖w* − w_1‖ .

Proof: First note that the inequality trivially holds for T = 1 and thus we focus on the
case T > 1. We use the definition of Δ_t from the proof of Thm. 1. Eq. (10) implies that
Δ_t is non-negative for all t. Therefore, we get from Eq. (9) that

    0 ≤ Σ_{t=1}^{T−1} Δ_t = ‖w_1 − w*‖² − ‖w_T − w*‖² .        (18)

Rearranging the terms in Eq. (18) we get that ‖w_T − w*‖ ≤ ‖w* − w_1‖. Finally, we use
the triangle inequality to get the bound,

    ‖w_T − w_1‖ = ‖(w_T − w*) + (w* − w_1)‖ ≤ ‖w_T − w*‖ + ‖w* − w_1‖ ≤ 2 ‖w* − w_1‖ .

This concludes the proof.
5 A Modification for the Unrealizable Case
We now briefly describe an algorithm for the unrealizable case. This algorithm applies only
to regression and classification problems. The case of uniclass is more involved and will
be discussed in detail elsewhere. The algorithm employs two parameters. The first is the
insensitivity parameter ε which defines the loss function as in the realizable case. However,
in this case we do not assume that there exists w* that achieves zero loss over the sequence.
We instead measure the loss of the online algorithm relative to the loss of any vector w*.
The second parameter, ρ > 0, is a relaxation parameter. Before describing the effect of this
parameter we define the update step for the unrealizable case. As in the realizable case, the
algorithm is conservative. That is, if the loss on example z_t is zero then w_{t+1} = w_t. In
case the loss is positive the update rule is w_{t+1} = w_t + τ_t v_t where v_t is the same as in the
realizable case. However, the scaling factor τ_t is modified and is set to,

    τ_t = ℓ_ε(w_t; z_t) / (‖v_t‖² + ρ) .

The following theorem provides a loss bound for the online algorithm relative to the loss
of any fixed weight vector w*.

Theorem 3 Let z_1 = (x_1, y_1), z_2 = (x_2, y_2), . . . , z_t = (x_t, y_t), . . . be a sequence of
classification or regression examples. Let w* be any vector in Rⁿ. Then if the PA algorithm
for the unrealizable case is run with ε and with ρ > 0, the following bound holds for any
T ≥ 1 and a constant B satisfying B ≥ ‖x_t‖²,

    Σ_{t=1}^{T} (ℓ_ε(w_t; z_t))² ≤ (ρ + B) ‖w* − w_1‖² + (1 + B/ρ) Σ_{t=1}^{T} (ℓ_ε(w*; z_t))² .        (19)

The proof of the theorem is based on a reduction to the realizable case (cf. [4, 13, 14]) and
is omitted due to the lack of space.
6 Extensions
There are numerous potential extensions to our approach. For instance, if all the components of the instances are non-negative we can derive a multiplicative version of the PA
algorithm. The multiplicative PA algorithm maintains a weight vector w_t ∈ Pⁿ where
Pⁿ = {x : x ∈ Rⁿ₊, Σ_{j=1}^{n} x_j = 1}. The multiplicative update of w_t is,

    w_{t+1,j} = (1/Z_t) w_{t,j} e^{τ_t v_{t,j}} ,

where v_t is the same as the one used in the additive algorithm (Table 1), τ_t now becomes
4 ℓ_ε(w_t; z_t)/‖v_t‖²_∞ for regression and classification and ℓ_ε(w_t; z_t)/(8 ‖v_t‖²_∞) for uniclass,
and Z_t = Σ_{j=1}^{n} w_{t,j} e^{τ_t v_{t,j}} is a normalization factor. For the multiplicative PA we can
prove the following loss bound.
Theorem 4 Let z_1, z_2, . . . , z_t = (x_t, y_t), . . . be a sequence of examples such that x_{t,j} ≥ 0
for all t. Let D_RE(w ‖ w′) = Σ_j w_j log(w_j / w′_j) denote the relative entropy between w and
w′. Assume that there exist w* and ε* such that ℓ_{ε*}(w*; z_t) = 0 for all t. Then when the
multiplicative version of the PA algorithm is run with ε > ε*, the following bound holds for
any T ≥ 1,

    Σ_{t=1}^{T} (ℓ_ε(w_t; z_t))² + 2(ε − ε*) Σ_{t=1}^{T} ℓ_ε(w_t; z_t) ≤ (1/2) B D_RE(w* ‖ w_1) ,

where for classification and regression B is a bound on the square of the infinity norm of
the instances (∀t : B ≥ ‖x_t‖²_∞) and B = 16 for uniclass.
The proof of the theorem is rather technical and uses the proof technique of Thm. 1 in
conjunction with inequalities on the logarithm of Zt (see for instance [7, 11, 9]).
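A minimal sketch of the multiplicative update is given below, assuming `v` is the update direction v_t from Table 1 and `tau` is the scaling τ_t described above. This is our own illustration of the exponentiated-gradient style step, not code from the paper.

```python
import math

def multiplicative_pa_update(w, v, tau):
    """Multiplicative PA step: w stays on the simplex P^n.

    Each coordinate is reweighted by exp(tau * v_j) and the result is
    renormalized by Z_t so the weights remain non-negative and sum to 1.
    """
    unnorm = [wj * math.exp(tau * vj) for wj, vj in zip(w, v)]
    Z = sum(unnorm)                      # normalization factor Z_t
    return [u / Z for u in unnorm]

# coordinates whose v_j is positive gain weight relative to the others
w_new = multiplicative_pa_update([0.5, 0.5], v=[1.0, -1.0], tau=0.1)
```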
An interesting question is whether the unified view of classification, regression, and
uniclass can be exported and used with other algorithms for classification such as
ROMMA [14] and ALMA [5]. Another, rather general direction for possible extension
surfaces when replacing the Euclidean distance between wt+1 and wt with other distances
and divergences such as the Bregman divergence. The resulting optimization problem may
be solved via Bregman projections. In this case it might be possible to derive general loss
bounds, see for example [12]. We are currently exploring generalizations of our framework
to other decision tasks such as distance-learning [16] and online convex programming [17].
References
[1] H. H. Bauschke and J. M. Borwein. On projection algorithms for solving convex
feasibility problems. SIAM Review, 1996.
[2] Y. Censor and S. A. Zenios. Parallel Optimization. Oxford University Press, 1997.
[3] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991, 2003.
[4] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[5] C. Gentile. A new approximate maximal margin classification algorithm. Journal of
Machine Learning Research, 2:213-242, 2001.
[6] C. Gentile and M. Warmuth. Linear hinge loss and average margin. In NIPS'98.
[7] D. P. Helmbold, R. E. Schapire, Y. Singer, and M. K. Warmuth. A comparison of new
and old algorithms for a mixture estimation problem. In COLT'95.
[8] M. Herbster. Learning additive models online with fast evaluating kernels. In
COLT'01.
[9] J. Kivinen, D. P. Helmbold, and M. Warmuth. Relative loss bounds for single neurons.
IEEE Transactions on Neural Networks, 10(6):1291-1304, 1999.
[10] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. In
NIPS'02.
[11] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for
linear predictors. Information and Computation, 132(1):1-64, January 1997.
[12] J. Kivinen and M. K. Warmuth. Relative loss bounds for multidimensional regression
problems. Journal of Machine Learning, 45(3):301-329, July 2001.
[13] N. Klasner and H. U. Simon. From noise-free to noise-tolerant and from on-line to
batch learning. In COLT'95.
[14] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. Machine
Learning, 46(1-3):361-387, 2002.
[15] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[16] E. Xing, A. Y. Ng, M. Jordan, and S. Russel. Distance metric learning, with application to clustering with side-information. In NIPS'03.
[17] M. Zinkevich. Online convex programming and generalized infinitesimal gradient
ascent. In ICML'03.
Geometric Clustering using the Information
Bottleneck method
Susanne Still
Department of Physics
Princeton Unversity, Princeton, NJ 08544
[email protected]
William Bialek
Department of Physics
Princeton Unversity, Princeton, NJ 08544
[email protected]
Léon Bottou
NEC Laboratories America
4 Independence Way, Princeton, NJ 08540
[email protected]
Abstract
We argue that K?means and deterministic annealing algorithms for geometric clustering can be derived from the more general Information Bottleneck approach. If we cluster the identities of data points to preserve
information about their location, the set of optimal solutions is massively
degenerate. But if we treat the equations that define the optimal solution
as an iterative algorithm, then a set of ?smooth? initial conditions selects
solutions with the desired geometrical properties. In addition to conceptual unification, we argue that this approach can be more efficient and
robust than classic algorithms.
1 Introduction
Clustering is one of the most widespread methods of data analysis and embodies strong
intuitions about the world: Many different acoustic waveforms stand for the same word,
many different images correspond to the same object, etc.. At a colloquial level, clustering
groups data points so that points within a cluster are more similar to one another than
to points in different clusters. To achieve this, one has to assign data points to clusters
and determine how many clusters to use. (Dis)similarity among data points might, in the
simplest example, be measured with the Euclidean norm, and then we could ask for a
clustering of the points¹ {x_i}, i = 1, 2, ..., N, such that the mean square distance among
points within the clusters is minimized,

    (1/N_c) Σ_{c=1}^{N_c} (1/n_c) Σ_{ij∈c} |x_i − x_j|² ,        (1)

where there are N_c clusters and n_c points are assigned to cluster c. Widely used iterative reallocation algorithms such as K-means [5, 8] provide an approximate solution to the
problem of minimizing this quantity.

¹ Notation: All bold faced variables in this paper denote vectors.

Several alternative cost functions have been proposed
(see e.g. [5]), and some use analogies with physical systems [3, 7]. However, this approach
does not give a principled answer to how many clusters should be used. One often introduces and optimizes another criterion to find the optimal number of clusters, leading to a
variety of ?stopping rules? for the clustering process [5]. Alternatively, cross-validation
methods can be used [11] or, if the underlying distribution is assumed to have a certain
shape (mixture models), then the number of clusters can be found, e.g. by using the BIC
[4].
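The within-cluster distortion of Eq. (1) is straightforward to evaluate directly. The following is a small illustrative check of our own (one-dimensional points, summing over ordered pairs within each cluster), not code from the paper:

```python
def within_cluster_distortion(points, assignment, Nc):
    """Mean over clusters of the mean pairwise squared distance, Eq. (1)."""
    total = 0.0
    for c in range(Nc):
        members = [p for p, a in zip(points, assignment) if a == c]
        n = len(members)
        if n == 0:
            continue
        # sum over all ordered pairs (i, j) within cluster c
        pair_sum = sum((xi - xj) ** 2 for xi in members for xj in members)
        total += pair_sum / n
    return total / Nc
```

K-means style reallocation would move points between clusters whenever doing so lowers this quantity.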
A different view of clustering is provided by information theory. Clustering is viewed as
lossy data compression; the identity of individual points (≈ log₂ N bits) is replaced by the
identity of the cluster to which they are assigned (≈ log₂ N_c bits ≪ log₂ N bits). Each
cluster is associated with a representative point x_c, and what we lose in the compression
are the deviations of the individual x_{i∈c} from the representative x_c. One way to formalize
this trading between data compression and error is rate-distortion theory [10], which again
requires us to specify a function d(xi , xc ) that measures the magnitude of our error in
replacing xi by xc . The trade-off between the coding cost and the distortion defines a
one parameter family of optimization problems, and this parameter can be identified with
temperature through an analogy with statistical mechanics [9]. As we lower the temperature
there are phase transitions to solutions with more and more distinct clusters, and if we fix
the number of clusters and vary the temperature we find a smooth variation from ?soft?
(probabilistic) to ?hard? (deterministic) clustering. For distortion functions d(x, x 0 ) ?
(x ? x0 )2 , a deterministic annealing approach to solving the variational problem converges
to the K?means algorithm in the limit of zero temperature [9].
A more general information theoretic approach to clustering, the Information Bottleneck
method [13], explicitly implements the idea that our analysis of the data typically is motivated by our interest in some derived quantity (e.g., words from sounds) and that we
should preserve this relevant information rather than trying to guess at what metric in the
space of our data will achieve the proper feature selection. We imagine that each point x i
occurs together with a corresponding variable vi , and that v is really the object of interest.2 Rather than trying to select the important features of similarity among different points
xi , we cluster in x space to compress our description of these points while preserving as
much information as possible about v, and again this defines a one parameter family of
optimization problems. In this formulation there is no need to define a similarity (or distortion) measure; this measure arises from the optimization principle itself. Furthermore,
this framework allows us to find the optimal number of clusters for a finite data set using
perturbation theory [12]. The Information Bottleneck principle thus allows a full solution
of the clustering problem.
The Information Bottleneck approach is attractive precisely because the generality of information theory frees us from a need to specify in advance what it means for data points to be
similar: Two points can be clustered together if this merger does not lose too much information about the relevant variable v. More precisely, because mutual information is invariant
to any invertible transformation of the variables, approaches which are built entirely from
such information theoretic quantities are independent of any arbitrary assumptions about
what it means for two points to be close in the data space. This is especially attractive
if we want the same information theoretic principles to apply both to the analysis of, for
example, raw acoustic waveforms and to the sequences of words for which these sounds
might stand [2]. On the other hand, it is not clear how to incorporate a geometric intuition
into the Information Bottleneck approach.
A natural and purely information theoretic formulation of geometric clustering might ask that we cluster the points, compressing the data index i ∈ [1, N] into a smaller set of cluster indices c ∈ [1, N_c] so that we preserve as much information as possible about the locations of the points, i.e. location x becomes the relevant variable. Because mutual information is a geometric invariant, however, such a problem has an infinitely degenerate set of solutions. We emphasize that this degeneracy is a matter of principle, and not a failing of any approximate algorithm for solving the optimization problem. What we propose here is to lift this degeneracy by choosing the initial conditions for an iterative algorithm which solves the Information Bottleneck equations. In effect our choice of initial conditions expresses a notion of smoothness or geometry in the space of the {x_i}, and once this is done the dynamics of the iterative algorithm lead to a finite set of fixed points. For a broad range of temperatures in the Information Bottleneck problem the solutions we find in this way are precisely those which would be found by a K-means algorithm, while at a critical temperature we recover the deterministic annealing approach to rate-distortion theory. In addition to the conceptual attraction of connecting these very different approaches to clustering in a single information theoretic framework, we argue that our approach may have some advantages of robustness.

² v does not have to live in the same space as the data x_i.
2 Derivation of K-means from the Information Bottleneck method
We use the Information Bottleneck method to solve the geometric clustering problem and compress the data indices i into cluster indices c in a lossy way, keeping as much information about the location x in the compression as possible. The variational principle is then

max_{p(c|i)} [ I(x, c) - λ I(c, i) ]    (2)

where λ is a Lagrange parameter which regulates the trade-off between compression and preservation of relevant information. Following [13], we assume that p(x|i, c) = p(x|i), i.e. the distribution of locations for a datum, if the index of the datum is known, does not depend explicitly on how we cluster. Then p(x|c) is given by the Markov condition

p(x|c) = (1/p(c)) Σ_i p(x|i) p(c|i) p(i).    (3)
For simplicity, let us discretize the space that the data live in, let us assume that it is a finite domain, and that we can estimate the probability distribution p(x) by a normalized histogram. Then the data we observe determine

p(x|i) = δ_{x,x_i},    (4)

where δ_{x,x_i} is the Kronecker delta, which is 1 if x = x_i and zero otherwise. The probability of indices is, of course, p(i) = 1/N.
The optimal assignment rule follows from the variational principle (2) and is given by

p(c|i) = (p(c)/Z(i, λ)) exp[ (1/λ) Σ_x p(x|i) log₂ p(x|c) ],    (5)

where Z(i, λ) ensures normalization.
This equation has to be solved self-consistently together with eq. (3) and p(c) = Σ_i p(c|i)/N. These are the Information Bottleneck equations and they can be solved iteratively [13]. Denoting by p_n the probability distribution after the n-th iteration, the iterative algorithm is given by

p_n(c|i) = (p_{n-1}(c)/Z_n(i, λ)) exp[ (1/λ) Σ_x p(x|i) log₂ p_{n-1}(x|c) ],    (6)

p_n(x|c) = (1/(N p_{n-1}(c))) Σ_i p(x|i) p_n(c|i),    (7)

p_n(c) = (1/N) Σ_i p_n(c|i).    (8)
Let d(x, x′) be a distance measure on the data space. We choose N_c cluster centers x_c^(0) at random and initialize

p_0(x|c) = (1/Z_0(c, λ)) exp[ -(1/s) d(x, x_c^(0)) ],    (9)

where Z_0(c, λ) is a normalization constant and s > 0 is some arbitrary length scale; the reason for introducing s will become apparent in the following treatment. After each iteration, we determine the cluster centers x_c^(n), n ≥ 1, according to (compare [9])
0 = Σ_x p_n(x|c) ∂d(x, x_c^(n))/∂x_c^(n),    (10)

which for the squared distance reduces to

x_c^(n) = Σ_x x p_n(x|c).    (11)
We furthermore initialize p_0(c) = 1/N_c, where N_c is the number of clusters. Now define the index c*_i such that it denotes the cluster with cluster center closest to the datum x_i (in the n-th iteration):

c*_i := arg min_c d(x_i, x_c^(n)).    (12)
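A minimal numerical sketch of the iterative updates (6)-(8), with the initialization (9) and the centroid step (11), may make the scheme concrete; the function name, the numerical guards, and the toy usage below are illustrative assumptions, not from the paper:

```python
import numpy as np

def ib_cluster(X, n_clusters, lam=0.5, s=0.5, n_iter=20,
               init_centers=None, seed=0):
    """One possible implementation of the iterative updates (6)-(8),
    initialized with (9) and using the centroid step (11), for the
    squared distance d(x, x') = |x - x'|**2 / 2 and p(x|i) = delta."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    if init_centers is None:                       # x_c^(0): random data points
        init_centers = X[rng.choice(len(X), n_clusters, replace=False)]
    centers = np.asarray(init_centers, dtype=float)
    d = 0.5 * ((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1)
    p_x_c = np.exp(-d / s)                         # eq. (9) at the data points
    p_x_c /= p_x_c.sum(axis=1, keepdims=True)
    p_c = np.full(n_clusters, 1.0 / n_clusters)    # p_0(c) = 1/N_c
    for _ in range(n_iter):
        # eq. (6): p_n(c|i) proportional to p_{n-1}(c) p_{n-1}(x_i|c)^(1/lam)
        log_p = np.log(p_c + 1e-300)[:, None] + np.log(p_x_c + 1e-300) / lam
        p_c_i = np.exp(log_p - log_p.max(axis=0))
        p_c_i /= p_c_i.sum(axis=0, keepdims=True)  # shape (n_clusters, N)
        p_c = p_c_i.mean(axis=1)                   # eq. (8)
        # eq. (7), renormalized over the data points (ensuring normalization)
        p_x_c = p_c_i / np.maximum(p_c_i.sum(axis=1, keepdims=True), 1e-300)
        centers = p_x_c @ X                        # eq. (11)
    return p_c_i, centers
```

With λ < 1 the assignments harden within a few iterations and the centers converge to cluster means, consistent with the Proposition that follows.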
Proposition: If 0 < λ < 1, and if the cluster indexed by c*_i is non-empty, then for n → ∞,

p(c|i) = δ_{c,c*_i}.    (13)
Proof: From (7) and (4) we know that p_n(x|c) ∝ Σ_i δ_{x,x_i} p_n(c|i)/p_{n-1}(c), and from (6) we have

p_n(c|i)/p_{n-1}(c) ∝ exp[ (1/λ) Σ_x p(x|i) log₂ p_{n-1}(x|c) ],    (14)

and hence p_n(x|c) ∝ (p_{n-1}(x|c))^{1/λ}. Substituting (9), we have p_1(x|c) ∝ exp[ -(1/(sλ)) d(x, x_c^(0)) ]. The cluster centers x_c^(n) are updated in each iteration and therefore we have after n iterations:

p_n(x|c) ∝ exp[ -(1/(sλ^n)) d(x, x_c^(n-1)) ],    (15)

where the proportionality constant has to ensure normalization of the probability measure. Use (14) and (15) to find that

p_n(c|i) ∝ p_{n-1}(c) exp[ -(1/(sλ^n)) d(x_i, x_c^(n-1)) ],    (16)
and again the proportionality constant has to ensure normalization. We can now write the probability that a data point is assigned to the cluster nearest to it:

p_n(c*_i|i) = [ 1 + (1/p_{n-1}(c*_i)) Σ_{c≠c*_i} p_{n-1}(c) exp( -(1/(sλ^n)) [ d(x_i, x_c^(n-1)) - d(x_i, x_{c*_i}^(n-1)) ] ) ]^{-1}.    (17)

By definition d(x_i, x_c^(n-1)) - d(x_i, x_{c*_i}^(n-1)) > 0 for all c ≠ c*_i, and thus for n → ∞, exp[ -(1/(sλ^n)) ( d(x_i, x_c^(n-1)) - d(x_i, x_{c*_i}^(n-1)) ) ] → 0, and for clusters that do not have zero occupancy, i.e. for which p_{n-1}(c*_i) > 0, we have p(c*_i|i) → 1. Finally, because of normalization, p(c ≠ c*_i|i) must be zero.
From eq. (13) it follows with equations (4), (7) and (11) that for n → ∞

x_c = (1/n_c) Σ_i x_i δ_{c,c*_i},    (18)

where n_c = Σ_i δ_{c,c*_i}. This means that for the square distance measure, this algorithm produces the familiar K-means solution: we get a hard clustering assignment (13) where each datum i is assigned to the cluster c*_i with the nearest center. Cluster centers are updated according to eq. (18) as the average of all the points that have been assigned to that cluster. For some problems, the squared distance might be inappropriate, and the update rule for computing the cluster centers depends on the particular distance function (see eq. 10).
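The resulting hard-assignment limit is the familiar two-step K-means loop: assign via (12)-(13), then update centers via (18). A minimal sketch, assuming the squared distance and with all names and data purely illustrative:

```python
import numpy as np

def kmeans_limit(X, init_centers, n_iter=50):
    """The n -> infinity limit of the algorithm: assign each datum to the
    nearest center, eqs. (12)-(13), then recompute each center as the
    mean of its assigned points, eq. (18); empty clusters are skipped."""
    X = np.asarray(X, dtype=float)
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(n_iter):
        # d(x_i, x_c) = |x_i - x_c|**2 / 2, shape (N, n_clusters)
        d = 0.5 * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)                  # c*_i, eq. (12)
        for c in range(len(centers)):              # eq. (18)
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```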
Example. We consider the squared Euclidean distance, d(x, x′) = |x - x′|²/2. With this distance measure, eq. (15) tells us that the (Gaussian) distribution p(x|c) contracts around the cluster center x_c as the number of iterations increases. The x_c's are, of course, recomputed in every iteration, following eq. (11). We create a synthetic data set by drawing 2500 data points i.i.d. from four two-dimensional Gaussian distributions with different means and the same variance. Figure 1 shows the result of numerical iteration of the equations (14) and (16), ensuring proper normalization, as well as (8) and (11), with λ = 0.5 and s = 0.5. The algorithm converges to a stable solution after n = 14 iterations.
This algorithm is less sensitive to initial conditions than the regular K-means algorithm. We measure the goodness of the classification by evaluating how much relevant information I(x, c) the solution captures. In the case we are looking at, the relevant information reduces to the entropy H[p(c)] of the distribution p(c) at the solution.³ We used 1000 different random initial conditions for the cluster centers and for each, we iterated eqs. (8), (11), (14) and (16) on the data in Fig. 1. We found two different values for H[p(c)] at the solution, indicating that there are at least two local maxima in I(x, c). Figure 2 shows the fraction of the initial conditions that converged to the global maximum. This number depends on the parameters s and λ. For d(x, x′) = |x - x′|²/(2s), the initial distribution p_0(x|c) is Gaussian with variance s. Larger variance s makes the algorithm less sensitive to the initial location of the cluster centers. Figure 2 shows that, for large values of s, we obtain a solution that corresponds to the global maximum of I(x, c) for 100% of the initial conditions. Here, we fixed λ at reasonably small values to ensure fast convergence (λ ∈ {0.05, 0.1, 0.2}). For these λ values, the number of iterations till convergence lies between 10 and 20 (for 0.5 < s < 500). As we increase λ there is a (noisy) trend to more iterations. In comparison, we did the same test using regular K-means [8] and obtained a globally optimal solution from only 75.8% of the initial cluster locations.

³ I(x, c) = H[p(c)] + Σ_x p(x) Σ_c p(c|x) log₂ p(c|x). Deterministic assignments: p(c|i) = δ_{c,c*_i}. Data points which are located at one particular position: p(x|i) = δ_{x,x_i}. We thus have p(c|x) = (1/(N p(x))) Σ_i p(c|i) p(x|i) = (1/(N p(x))) Σ_i δ_{x,x_i} δ_{c,c*_i} = δ_{c,c*_x}, where c*_x = arg min_c d(x, x_c). Then Σ_c p(c|x) log₂ p(c|x) = 0 and hence I(x, c) = H[p(c)].

Figure 1: 2500 data points drawn i.i.d. from four Gaussian distributions with different means and the same variance. Those data which got assigned to the same cluster are plotted with the same symbol. The dotted traces indicate movements of the cluster centers (black stars) from their initial positions in the lower left corner of the graph to their final positions close to the means of the Gaussian distributions (black circles) after 14 iterations.

To see how this algorithm performs on data in a higher dimensional space, we draw 2500 points from 4 twenty-dimensional Gaussians with variance 0.3 along each dimension. The typical Euclidean distances between the means are around 7. We tested the robustness to initial center locations in the same way as we did for the two dimensional data. Despite the high signal to noise ratio, the regular K-means algorithm [8], run on this data, finds a globally optimal solution for only 37.8% of the initial center locations, presumably because the data is relatively scarce and therefore the objective function is relatively rough. We found that our algorithm converged to the global optimum for between 78.0% and 81.0% of the initial center locations for large enough values of s (1000 < s < 10000) and λ = 0.1.
3 Discussion
Connection to deterministic annealing. For λ = 1, we obtain the solution

p_n(c|i) ∝ exp[ -(1/s) d(x_i, x_c^(n-1)) ],    (19)

where the proportionality constant ensures normalization. This equation, together with eq. (11), recovers the equations derived from rate distortion theory in [9] (for square distance), only here the length scale s appears in the position of the annealing temperature T in [9]. We call this parameter the annealing temperature, because [9] suggests the following deterministic annealing scheme: start with large T; fix the x_c's and compute the optimal assignment rule according to eq. (19), then fix the assignment rule and compute the x_c's according to eq. (11), and repeat these two steps until convergence. Then lower the temperature and repeat the procedure. There is no general rule that tells us how slow the annealing has to be. In contrast, the algorithm we have derived here for λ < 1 suggests starting with a very large initial temperature, given by sλ, by making s very large, and lowering the temperature rapidly by making λ reasonably small. In contrast to the deterministic annealing scheme, we do not iterate the equations for the optimal assignment rule and cluster centers till convergence before we lower the temperature; instead the temperature is lowered by a factor of λ after each iteration. This produces an algorithm that converges rapidly while finding a globally optimal solution with high probability.

Figure 2: Robustness of the algorithm to initial center positions as a function of the initial variance, s. 1000 different random initial positions were used to obtain clustering solutions on the data shown in Fig. 1. Displayed is, as a function of the initial variance s, the percent of initial center positions that converge to a global maximum of the objective function. In comparison, regular K-means [8] converges to the global optimum for only 75.8% of the initial center positions. The parameter λ is kept fixed at reasonably small values (indicated in the plot) to ensure fast convergence (between 10 and 20 iterations).
For λ = 1, we furthermore find from eq. (15) that p_n(x|c) ∝ exp[ -(1/s) d(x, x_c^(n-1)) ], and for d(x, x′) = |x - x′|²/2, the clusters are simply Gaussians.

For λ > 1, we obtain a useless solution for n → ∞, that assigns all the data to one cluster.
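The three regimes can be summarized through the effective temperature T_n = sλ^n that appears in eq. (15); a trivial sketch (function name and values are illustrative):

```python
def temperature_schedule(s, lam, n_iter):
    """Effective temperature T_n = s * lam**n from eq. (15): geometric
    cooling for lam < 1; constant T = s (the fixed-temperature setting of
    classic deterministic annealing, which must then be lowered by hand)
    for lam = 1; heating, which merges all data into one cluster, for
    lam > 1."""
    return [s * lam ** n for n in range(n_iter)]
```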
Optimal number of clusters. One of the advancements that the approach we have laid out here should bring is that it should now be possible to extend our earlier results on finding the optimal number of clusters [12] to the problem of geometric clustering. We have to leave the details for a future paper, but essentially we would argue that as we observe a finite number of data points, we make an error in estimating the distribution that underlies the generation of these data points. This mis-estimate leads to a systematic error in evaluating the relevant information. We have computed this error using perturbation theory [12]. For deterministic assignments (as we have in the hard K-means solution), we know that a correction of the error introduces a penalty in the objective function for using more clusters, and this allows us to find the optimal number of clusters. Since our result says that the penalty depends on the number of bins that we use to estimate the distribution underlying the data [12], we either have to know the resolution with which to look at our data, or estimate this resolution from the size of the data set, as in e.g. [1, 6]. A combination of these insights should tell us how to determine, for geometrical clustering, the number of clusters that is optimal for a finite data set.
4 Conclusion
We have shown that it is possible to cast geometrical clustering into the general, information theoretic framework provided by the Information Bottleneck method. More precisely, we cluster the data keeping information about location, and we have shown that the degeneracy of optimal solutions, which arises from the fact that the mutual information is invariant to any invertible transformation of the variables, can be lifted by the correct choice of the initial conditions for the iterative algorithm which solves the Information Bottleneck equations. We have shown that for a large range of values of the Lagrange multiplier λ (which regulates the trade-off between compression and preservation of relevant information), we obtain an algorithm that converges to a hard clustering K-means solution. We have found some indication that this algorithm might be more robust to initial center locations than regular K-means. Our results also suggest an annealing scheme, which might prove to be faster than the deterministic annealing approach to geometrical clustering, known from rate-distortion theory [9]. We recover the latter for λ = 1. Our results shed new light on the connection between the relatively novel Information Bottleneck method and earlier approaches to clustering, particularly the well-established K-means algorithm.
Acknowledgments
We thank G. Atwal and N. Slonim for interesting discussions. S. Still acknowledges support
from the German Research Foundation (DFG), grant no. Sti197.
References
[1] W. Bialek and C. G. Callan and S. P. Strong, Phys. Rev. Lett. 77 (1996) 4693-4697,
http://arxiv.org/abs/cond-mat/9607180
[2] W. Bialek in Physics of bio-molecules and cells; École d'été de physique théorique Les Houches Session LXXV, Eds.: H. Flyvbjerg, F. Jülicher, P. Ormos and F. David (2001) Springer-Verlag, pp. 485-577, http://arxiv.org/abs/physics/0205030
[3] M. Blatt, S. Wiseman and E. Domany, Phys. Rev. Lett. 76 (1996) 3251-3254,
http://arxiv.org/abs/cond-mat/9702072
[4] C. Fraley and A. Raftery, J. Am. Stat. Assoc. 97 (2002) 611-631.
[5] A. D. Gordon, Classification, (1999) Chapman and Hall/CRC Press, London.
[6] P. Hall and E. J. Hannan, Biometrika 75, 4 (1988) 705-714.
[7] D. Horn and A. Gottlieb, Phys. Rev. Lett. 88 (2002) 018702, extended version:
http://arxiv.org/abs/physics/0107063
[8] J. MacQueen in Proc. 5th Berkeley Symp. Math. Statistics and Probability, Eds.: L. M. Le Cam and J. Neyman (1967) University of California Press, pp. 281-297 (Vol. I)
[9] K. Rose, E. Gurewitz and G. C. Fox, Phys. Rev. Lett. 65 (1990) 945; and: K. Rose, Proceedings
of the IEEE 86, 11 (1998) pp. 2210-2239.
[10] C. E. Shannon, Bell System Tech. J. 27, (1948). pp. 379-423, 623-656. See also: C. Shannon
and W. Weaver, The Mathematical Theory of Communication (1963) University of Illinois Press
[11] P. Smyth, Statistics and Computing 10, 1 (2000) 63-72.
[12] S. Still and W. Bialek (2003, submitted), available at http://arxiv.org/abs/physics/0303011
[13] N. Tishby, F. Pereira and W. Bialek in Proc. 37th Annual Allerton Conf. Eds.: B. Hajek and R.
S. Sreenivas (1999) University of Illinois, http://arxiv.org/abs/physics/0004057
Sequential Bayesian Kernel Regression
Jaco Vermaak, Simon J. Godsill, Arnaud Doucet
Cambridge University Engineering Department
Cambridge, CB2 1PZ, U.K.
{jv211, sjg, ad2}@eng.cam.ac.uk
Abstract
We propose a method for sequential Bayesian kernel regression. As is
the case for the popular Relevance Vector Machine (RVM) [10, 11], the
method automatically identifies the number and locations of the kernels.
Our algorithm overcomes some of the computational difficulties related
to batch methods for kernel regression. It is non-iterative, and requires
only a single pass over the data. It is thus applicable to truly sequential data sets and batch data sets alike. The algorithm is based on a
generalisation of Importance Sampling, which allows the design of intuitively simple and efficient proposal distributions for the model parameters. Comparative results on two standard data sets show our algorithm
to compare favourably with existing batch estimation strategies.
1 Introduction
Bayesian kernel methods, including the popular Relevance Vector Machine (RVM) [10,
11], have proved to be effective tools for regression and classification. For the RVM the
sparsity constraints are elegantly formulated within a Bayesian framework, and the result of
the estimation is a mixture of kernel functions that rely on only a small fraction of the data
points. In this sense it bears resemblance to the popular Support Vector Machine (SVM)
[13]. Contrary to the SVM, where the support vectors lie on the decision boundaries, the
relevance vectors are prototypical of the data. Furthermore, the RVM does not require any
constraints on the types of kernel functions, and provides a probabilistic output, rather than
a hard decision.
Standard batch methods for kernel regression suffer from a computational drawback in that
they are iterative in nature, with a computational complexity that is normally cubic in the
number of data points at each iteration. A large proportion of the research effort in this area
is devoted to the development of estimation algorithms with reduced computational complexity. For the RVM, for example, a strategy is proposed in [12] that exploits the structure
of the marginal likelihood function to significantly reduce the number of computations.
In this paper we propose a full Bayesian formulation for kernel regression on sequential
data. Our algorithm is non-iterative, and requires only a single pass over the data. It is
equally applicable to batch data sets by presenting the data points one at a time, with the
order of presentation being unimportant. The algorithm is especially effective for large data
sets. As opposed to batch strategies that attempt to find the optimal solution conditional
on all the data, the sequential strategy includes the data one at a time, so that the posterior exhibits a tempering effect as the amount of data increases. Thus, the difficult global estimation problem is effectively decomposed into a series of easier estimation problems.
The algorithm itself is based on a generalisation of Importance Sampling, and recursively
updates a sample based approximation of the posterior distribution as more data points
become available. The proposal distribution is defined on an augmented parameter space,
and is formulated in terms of model moves, reminiscent of the Reversible Jump Markov
Chain Monte Carlo (RJ-MCMC) algorithm [5]. For kernel regression these moves may
include update moves to refine the kernel locations, birth moves to add new kernels to
better explain the increasing data, and death moves to eliminate erroneous or redundant
kernels.
The remainder of the paper is organised as follows. In Section 2 we outline the details of
the model for sequential Bayesian kernel regression. In Section 3 we present the sequential
estimation algorithm. Although we focus on regression, the method extends straightforwardly to classification. It can, in fact, be applied to any model for which the posterior can
be evaluated up to a normalising constant. We illustrate the performance of the algorithm
on two standard regression data sets in Section 4, before concluding with some remarks in
Section 5.
2 Model Description
The data is assumed to arrive sequentially as input-output pairs (x_t, y_t), t = 1, 2, ..., x_t ∈ R^d, y_t ∈ R. For kernel regression the output is assumed to follow the model

y_t = β_0 + Σ_{i=1}^k β_i K(x_t, μ_i) + v_t,   v_t ~ N(0, σ_y²),

where k is the number of kernel functions, which we will consider to be unknown, β_k = (β_0 ... β_k) are the regression coefficients, U_k = (μ_1 ... μ_k) are the kernel centres, and σ_y² is the variance of the Gaussian observation noise. Assuming independence, the likelihood for all the data points observed up to time t, denoted by Y_t = (y_1 ... y_t), can be written as

p(Y_t | k, β_k, U_k, σ_y²) = N(Y_t | K_k β_k, σ_y² I_t),    (1)

where K_k denotes the t × (k+1) kernel matrix with [K_k]_{s,1} = 1 and [K_k]_{s,l} = K(x_s, μ_{l-1}) for l > 1, and I_n denotes the n-dimensional identity matrix. For the unknown model parameters θ_k = (β_k, U_k, σ_y², σ_β²) we assume a hierarchical prior that takes
the form

p(k, θ_k) = p(k) p(β_k, σ_β²) p(U_k) p(σ_y²),    (2)

with

p(k) ∝ Λ^k exp(-Λ)/k!,   k ∈ {1 ... k_max}
p(β_k, σ_β²) = N(β_k | 0, σ_β² I_{k+1}) IG(σ_β² | a_β, b_β)
p(U_k) = Π_{l=1}^k [ Σ_{s=1}^t δ_{x_s}(μ_l)/t ]
p(σ_y²) = IG(σ_y² | a_y, b_y),

where δ_x(·) denotes the Dirac delta function with mass at x, and IG(·|a, b) denotes the Inverted Gamma distribution with parameters a and b. The prior on the number of kernels is set to be a truncated Poisson distribution, with the mean Λ and the maximum number of kernels k_max assumed to be fixed and known. The regression coefficients are drawn from an isotropic Gaussian prior with variance σ_β² in each direction. This variance is, in turn, drawn from an Inverted Gamma prior. This is in contrast with the Automatic Relevance Determination (ARD) prior [8], where each coefficient has its own associated variance. The prior for the kernel centres is assumed to be uniform over the grid formed by the input data points available at the current time step. Note that the support for this prior increases with time. Finally, the noise variance is assumed to follow an Inverted Gamma prior. The parameters of the Inverted Gamma priors are assumed to be fixed and known.
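To make the notation concrete, the kernel matrix K_k and the Gaussian likelihood (1) can be assembled as follows; the Gaussian kernel, the function names, and all values are assumptions of this illustration only:

```python
import numpy as np

def kernel_matrix(X, U, width=1.0):
    """t x (k+1) matrix K_k: a first column of ones (for beta_0), then
    one column K(x_s, mu_l) per kernel centre mu_l. A Gaussian kernel
    is assumed here purely for illustration."""
    sq = ((np.asarray(X)[:, None, :] - np.asarray(U)[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.ones((len(X), 1)), np.exp(-sq / (2.0 * width ** 2))])

def log_likelihood(Y, K, beta, sigma2_y):
    """log N(Y_t | K_k beta_k, sigma_y^2 I_t), i.e. the log of eq. (1)."""
    e = Y - K @ beta
    t = len(Y)
    return -0.5 * (t * np.log(2.0 * np.pi * sigma2_y) + e @ e / sigma2_y)
```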
Given the likelihood and prior in (1) and (2), respectively, it is straightforward to obtain an expression for the full posterior distribution p(k, θ_k | Y_t). Due to conjugacy this expression can be marginalised over the regression coefficients, so that the marginal posterior for the kernel centres can be written as

p(k, U_k | σ_y², σ_β², Y_t) ∝ |B_k|^{1/2} exp(-Y_t^T P_k Y_t / 2σ_y²) p(k) p(U_k) / [ (2π σ_y²)^{t/2} (σ_β²)^{(k+1)/2} ],    (3)

with B_k = (K_k^T K_k / σ_y² + I_{k+1} / σ_β²)^{-1} and P_k = I_t - K_k B_k K_k^T / σ_y². It will be our objective to approximate this distribution recursively in time as more data becomes available, using Monte Carlo techniques. Once we have samples for the kernel centres, we will require new samples for the unknown parameters (σ_y², σ_β²) at the next time step. We can obtain these by first sampling for the regression coefficients from the posterior
p(β_k | k, U_k, σ_y², σ_β², Y_t) = N(β_k | β̂_k, B_k),    (4)

with β̂_k = B_k K_k^T Y_t / σ_y², and conditional on these values, sampling for the unknown parameters from the posteriors

p(σ_y² | k, β_k, U_k, Y_t) = IG(σ_y² | a_y + t/2, b_y + e_t^T e_t / 2)
p(σ_β² | k, β_k) = IG(σ_β² | a_β + (k+1)/2, b_β + β_k^T β_k / 2),    (5)

with e_t = Y_t - K_k β_k the model approximation error.
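Conditional on k and the kernel centres, (4) and (5) are standard conjugate draws. A sketch of one such sweep, in which the function name and hyperparameter values are illustrative assumptions, and the posterior mean B_k K_k^T Y_t / σ_y² is the usual conjugate-Gaussian result:

```python
import numpy as np

def sample_conditionals(Y, K, sigma2_y, sigma2_b, a_y, b_y, a_b, b_b, rng):
    """One sweep of the conditional draws (4)-(5): beta_k given the rest,
    then sigma_y^2 and sigma_beta^2 from their Inverted Gamma posteriors
    (an IG(a, b) draw is the reciprocal of a Gamma(a, scale=1/b) draw)."""
    t, kp1 = K.shape
    B = np.linalg.inv(K.T @ K / sigma2_y + np.eye(kp1) / sigma2_b)
    beta_hat = B @ K.T @ Y / sigma2_y              # conjugate posterior mean
    beta = rng.multivariate_normal(beta_hat, B)    # eq. (4)
    e = Y - K @ beta                               # model approximation error
    sigma2_y = 1.0 / rng.gamma(a_y + t / 2.0, 1.0 / (b_y + e @ e / 2.0))
    sigma2_b = 1.0 / rng.gamma(a_b + kp1 / 2.0, 1.0 / (b_b + beta @ beta / 2.0))
    return beta, sigma2_y, sigma2_b                # eq. (5)
```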
Since the number of kernel functions to use is unknown the marginal posterior in (3) is
defined over a discrete space of variable dimension. In the next section we will present a
generalised importance sampling strategy to obtain Monte Carlo approximations for distributions of this nature recursively as more data becomes available.
3 Sequential Estimation
Recall that it is our objective to recursively update a Monte Carlo representation of the posterior distribution for the kernel regression parameters as more data becomes available. The
method we propose here is based on a generalisation of the popular importance sampling
technique. Its application extends to any model for which the posterior can be evaluated up
to a normalising constant. We will thus first present the general strategy, before outlining
the details for sequential kernel regression.
3.1 Generalised Importance Sampling
Our aim is to recursively update a sample based approximation of the posterior p(k, θ_k | Y_t) of a model parameterised by θ_k as more data becomes available. The efficiency of importance sampling hinges on the ability to design a good proposal distribution, i.e. one that approximates the target distribution sufficiently well. Designing an efficient proposal distribution to generate samples directly in the target parameter space is difficult. This is mostly due to the fact that the dimension of the parameter space is generally high and variable. To circumvent these problems we augment the target parameter space with an auxiliary parameter space, which we will associate with the parameters at the previous time step. We now define the target distribution over the resulting joint space as

π_t(k, θ_k; k′, θ′_{k′}) = p(k, θ_k | Y_t) q′_t(k′, θ′_{k′} | k, θ_k).    (6)
This joint clearly admits the desired target distribution as a marginal. Apart from some weak assumptions, which we will discuss shortly, the distribution q′_t is entirely arbitrary, and may depend on the data and the time step. In fact, in the application to the RVM we consider here we will set it to q′_t(k′, θ′_{k′} | k, θ_k) = δ_{(k, θ_k)}(k′, θ′_{k′}), so that it effectively disappears from the expression above. A similar strategy of augmenting the space to simplify the importance sampling procedure has been exploited before in [7] to develop efficient Sequential Monte Carlo (SMC) samplers for a wide range of models. To generate samples in this joint space we define the proposal for importance sampling to be of the form

Q_t(k, θ_k; k′, θ′_{k′}) = p(k′, θ′_{k′} | Y_{t−1}) q_t(k, θ_k | k′, θ′_{k′}),   (7)
where q_t may again depend on the data and the time step. This proposal embodies the sequential character of our algorithm. Similar to SMC methods [3] it generates samples for the parameters at the current time step by incrementally refining the posterior at the previous time step through the distribution q_t. Designing efficient incremental proposals is much easier than constructing proposals that generate samples directly in the target parameter space, since the posterior is unlikely to undergo dramatic changes over consecutive time steps. To compensate for the discrepancy between the proposal in (7) and the joint posterior in (6) the importance weight takes the form

W_t(k, θ_k; k′, θ′_{k′}) = p(k, θ_k | Y_t) q′_t(k′, θ′_{k′} | k, θ_k) / [ p(k′, θ′_{k′} | Y_{t−1}) q_t(k, θ_k | k′, θ′_{k′}) ].   (8)

Due to the construction of the joint in (6), marginal samples in the target parameter space associated with this weighting will indeed be distributed according to the target posterior p(k, θ_k | Y_t). As might be expected, the importance weight in (8) is similar in form to the acceptance ratio for the RJ-MCMC algorithm [5]. One notable difference is that the reversibility condition is not required, so that for a given q_t, q′_t may be arbitrary, as long as the ratio in (8) is well-defined.
In practice it is often necessary to design a number of candidate moves to obtain an efficient algorithm. Examples include update moves to refine the model parameters in the light of the new data, birth moves to add new parameters to better explain the new data, death moves to remove redundant or erroneous parameters, and many more. We will denote the set of candidate moves at time t by {α_{t,i}, q_{t,i}, q′_{t,i}} for i = 1, ..., M, where α_{t,i} is the probability of choosing move i, with Σ_{i=1}^M α_{t,i} = 1. For each move i the importance weight is computed by substituting the corresponding q_{t,i} and q′_{t,i} into (8). Note that the probability of choosing a particular move may depend on the old state and the time step, so that moves may be included or excluded as is appropriate.
3.2 Sequential Kernel Regression
We will now present the details for sequential kernel regression. Our main concern will be the recursive estimation of the marginal posterior for the kernel centres in (3). This distribution is conditional on the parameters (σ_y², σ_θ²), for which samples can be obtained at each time step from the corresponding posteriors in (4) and (5).
To sample for the new kernel centres we will consider three kinds of moves: a zero move q_{t,1}, a birth move q_{t,2}, and a death move q_{t,3}. The zero move leaves the kernel centres unchanged. The birth move adds a new kernel at a uniformly randomly chosen location over the grid of unoccupied input data points. The death move removes a uniformly randomly chosen kernel. For k = 0 only the birth move is possible, whereas the birth move is impossible for k = k_max or k = t. Similar to [5] we set the move probabilities to

α_{t,2} = c min{1, p(k+1)/p(k)}
α_{t,3} = c min{1, p(k−1)/p(k)}
α_{t,1} = 1 − α_{t,2} − α_{t,3}

in all other cases. In the above, c ∈ (0, 1) is a parameter that tunes the relative frequency of the dimension changing moves to the zero move. For these choices the importance weight in (8) becomes
W_{t,i}(k, U_k; k′, U′_{k′}) ∝ [ |B_k|^{1/2} exp(−(Y_t^T P_k Y_t − Y_{t−1}^T P′_{k′} Y_{t−1}) / 2σ_y²) ] / [ |B′_{k′}|^{1/2} (2πσ_y²)^{1/2} (σ_θ²)^{(k−k′)/2} ] · [ Λ^{k−k′} (t−1)(k′−1)! ] / [ t (k−1)! q_{t,i}(k, U_k | k′, U′_{k′}) ],
where the primed variables are those corresponding to the posterior at time t−1. For the zero move the parameters are left unchanged, so that the expression for q_{t,1} in the importance weight becomes unity. This is often a good move to choose, and captures the notion that the posterior rarely changes dramatically over consecutive time steps. For the birth move one new kernel is added, so that k = k′ + 1. The centre for this kernel is uniformly randomly chosen from the grid of unoccupied input data points. This means that the expression for q_{t,2} in the importance weight reduces to 1/(t − k′), since there are t − k′ such data points. Similarly, the death move removes a uniformly randomly chosen kernel, so that k = k′ − 1. In this case the expression for q_{t,3} in the importance weight reduces to 1/k′. It is straightforward to design numerous other moves, e.g. an update move that perturbs existing kernel centres. However, we found that the simple moves presented yield satisfactory results while keeping the computational complexity acceptable.
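The move-selection mechanics are easy to prototype. The sketch below chooses among the zero, birth, and death moves with the stated α_{t,i} rules; the Poisson prior p(k) with rate Λ and the boundary handling at k = 0 and k = min(k_max, t) are assumptions for illustration, since p(k) is not restated in this excerpt.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def move_probabilities(k, t, k_max, c=0.25, lam=1.0):
    """(zero, birth, death) probabilities alpha_{t,1..3} for current model size k."""
    p = lambda j: math.exp(-lam) * lam**j / math.factorial(j)  # assumed Poisson prior p(k)
    if k == 0:                       # only a birth move is possible
        return (0.0, 1.0, 0.0)
    if k >= min(k_max, t):           # birth impossible at the upper boundary
        a_death = c * min(1.0, p(k - 1) / p(k))
        return (1.0 - a_death, 0.0, a_death)
    a_birth = c * min(1.0, p(k + 1) / p(k))
    a_death = c * min(1.0, p(k - 1) / p(k))
    return (1.0 - a_birth - a_death, a_birth, a_death)

alphas = move_probabilities(k=3, t=50, k_max=50)
move = rng.choice(3, p=alphas)       # 0 = zero, 1 = birth, 2 = death
```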
We conclude this section with a summary of the algorithm.
Algorithm 1: Sequential Kernel Regression

Inputs:
- Kernel function K(·,·), model parameters (Λ, k_max, a_y, b_y, a_θ, b_θ), fraction of dimension change moves c, number of samples to approximate the posterior N.

Initialisation: t = 0
- For i = 1 ... N, set k^(i) = 0, θ_k^(i) = ∅, U_k^(i) = ∅, and sample σ_y²^(i) ~ p(σ_y²), σ_θ²^(i) ~ p(σ_θ²).

Generalised Importance Sampling Step: t > 0
- For i = 1 ... N:
  - Sample a move j^(i) so that P(j^(i) = l) = α_{t,l}.
  - If j^(i) = 1 (zero move), set Ũ^(i) = U^(i) and k̃^(i) = k^(i).
  - Else if j^(i) = 2 (birth move), form Ũ^(i) by uniformly randomly adding a kernel at one of the unoccupied data points, and set k̃^(i) = k^(i) + 1.
  - Else if j^(i) = 3 (death move), form Ũ^(i) by uniformly randomly deleting one of the existing kernels, and set k̃^(i) = k^(i) − 1.
- For i = 1 ... N, compute the importance weights W_t^(i) ∝ W_t(k̃^(i), Ũ^(i); k^(i), U^(i)), and normalise.
- For i = 1 ... N, sample the nuisance parameters θ̃^(i) ~ p(θ_k | k̃^(i), Ũ^(i), σ_y²^(i), σ_θ²^(i), Y_t), σ̃_θ²^(i) ~ p(σ_θ² | k̃^(i), θ̃^(i)), σ̃_y²^(i) ~ p(σ_y² | k̃^(i), θ̃^(i), Ũ^(i), Y_t).

Resampling Step: t > 0
- Multiply / discard samples {k̃^(i), θ̃^(i)} with respect to high / low importance weights {W_t^(i)} to obtain N samples {k^(i), θ^(i)}.
Each of the samples is initialised to be empty, i.e. no kernels are included. Initial values for the variance parameters are sampled from their corresponding prior distributions. Using the samples before resampling, a Minimum Mean Square Error (MMSE) estimate of the clean data can be obtained as

Ẑ_t = Σ_{i=1}^N W_t^(i) K̃^(i) θ̃^(i).
The resampling step is required to avoid degeneracy of the sample based representation. It can be performed by standard procedures such as multinomial resampling [4], stratified resampling [6], or minimum entropy resampling [2]. All these schemes are unbiased, so that the number of times N_i the sample (k̃^(i), θ̃^(i)) appears after resampling satisfies E(N_i) = N W_t^(i). Thus, resampling essentially multiplies samples with high importance weights, and discards those with low importance weights.
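A minimal multinomial resampler, which satisfies the unbiasedness property E(N_i) = N · W_i, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(2)

def multinomial_resample(samples, weights):
    """Draw N indices with probability proportional to the weights, so E[N_i] = N * W_i."""
    N = len(samples)
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(N, size=N, p=w / w.sum())
    return [samples[i] for i in idx]

particles = ["a", "b", "c", "d"]
weights = [0.7, 0.1, 0.1, 0.1]
resampled = multinomial_resample(particles, weights)
```

Stratified or minimum-entropy resampling would replace only the index-drawing step; the multiply/discard behaviour is the same.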
The algorithm requires only a single pass through the data. The computational complexity
at each time step is O(N ). For each sample the computations are dominated by the computation of the matrix Bk , which requires a (k + 1)-dimensional matrix inverse. However,
this inverse can be incrementally updated from the inverse at the previous time step using
the techniques described in [12], leading to substantial computational savings.
4 Experiments and Results
In this section we illustrate the performance of the proposed sequential estimation algorithm on two standard regression data sets.
4.1 Sinc Data
This experiment is described in [1]. The training data is taken to be the sinc function, i.e. sinc(x) = sin(x)/x, corrupted by additive Gaussian noise of standard deviation σ_y = 0.1, for 50 evenly spaced points in the interval x ∈ [−10, 10]. In all the runs we presented these points to the sequential estimation algorithm in random order. For the test data we used 1000 points over the same interval. We used a Gaussian kernel of width 1.6, and set the fixed parameters of the model to (Λ, k_max, a_y, b_y, a_θ, b_θ) = (1, 50, 0, 0, 0, 0). For these settings the prior on the variances reduces to the uninformative Jeffreys' prior. The fraction of dimension change moves was set to c = 0.25. It should be noted that the algorithm proved to be relatively insensitive to reasonable variations in the values of the fixed parameters.
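The training set for this experiment is straightforward to reproduce. The sketch below generates the noisy sinc data and builds a design matrix from a few Gaussian kernels; the kernel-width convention (width as the Gaussian length scale) and the choice of active centres are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# 50 evenly spaced training points on [-10, 10], targets = sinc(x) + N(0, 0.1^2) noise
x = np.linspace(-10.0, 10.0, 50)
y = np.sinc(x / np.pi) + 0.1 * rng.standard_normal(50)  # np.sinc(z) = sin(pi z)/(pi z)

def gaussian_kernel(a, b, width=1.6):
    """Gaussian kernel matrix between point sets a and b (assumed width convention)."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * width ** 2))

centres = x[[5, 20, 35]]                 # e.g. three currently active kernel centres
K = np.column_stack([np.ones(50), gaussian_kernel(x, centres)])  # bias + kernels
```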
The left side of Figure 1 shows the test error as a function of the number of samples N .
These results were obtained by averaging over 25 random generations of the training data
for each value of N . As expected, the error decreases with an increase in the number of
samples. No significant decrease is obtained beyond N = 250, and we adopt this value for
subsequent comparisons. A typical MMSE estimate of the clean data is shown on the right
side of Figure 1.
In Table 1 we compare the results of the proposed sequential estimation algorithm with a
number of batch strategies for the SVM and RVM. The results for the batch algorithms are
duplicated from [1, 9]. The error for the sequential algorithm is slightly higher. This is due
to the stochastic nature of the algorithm, and the fact that it uses only very simple moves that take no account of the characteristics of the data during the move proposition. This increase should be offset against the algorithm's simplicity and efficiency. The error could be further decreased by designing more complex moves.
Figure 1: Results for the sinc experiment. Test error as a function of the number of
samples (left), and example fit (right), showing the uncorrupted data (blue circles), noisy
data (red crosses) and MMSE estimate (green squares). For this example the test error was
0.0309 and an average of 6.18 kernels were used.
Method           Test Error   # Kernels   Noise Estimate
Figueiredo       0.0455       7.0         -
SVM              0.0519       28.0        -
RVM              0.0494       6.9         0.0943
VRVM             0.0494       7.4         0.0950
MCMC             0.0468       6.5         -
Sequential RVM   0.0591       4.5         0.1136
Table 1: Comparative performance results for the sinc data. The batch results are
reproduced from [1, 9].
4.2 Boston Housing Data
We also applied our algorithm to the popular Boston housing data set. We considered random train / test partitions of the data of size 300 / 206. We again used a Gaussian kernel, and set the width parameter to 5. For the model and algorithm parameters we used values similar to those for the sinc experiment, except for setting Λ = 5 to allow a larger number of kernels. The results are summarised in Table 2. These were obtained by averaging over 10 random partitions of the data, and setting the number of samples to N = 250. The test error is comparable to those for the batch strategies, but far fewer kernels are required.
Method           Test Error   # Kernels
SVM              8.04         142.8
RVM              7.46         39.0
Sequential RVM   7.18         25.29
Table 2: Comparative performance results for the Boston housing data. The batch
results are reproduced from [10].
5 Conclusions
In this paper we proposed a sequential estimation strategy for Bayesian kernel regression.
Our algorithm is based on a generalisation of importance sampling, and incrementally updates a Monte Carlo representation of the target posterior distribution as more data points
become available. It achieves this through simple and intuitive model moves, reminiscent
of the RJ-MCMC algorithm. It is further non-iterative, and requires only a single pass over
the data, thus overcoming some of the computational difficulties associated with batch estimation strategies for kernel regression. Our algorithm is more general than the kernel
regression problem considered here. Its application extends to any model for which the
posterior can be evaluated up to a normalising constant. Initial experiments on two standard regression data sets showed our algorithm to compare favourably with existing batch
estimation strategies for kernel regression.
Acknowledgements
The authors would like to thank Mike Tipping for helpful comments during the experimental procedure. The work of Vermaak and Godsill was partially funded by QinetiQ under the project "Extended and Joint Object Tracking and Identification", CU006-14890.
References
[1] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In C. Boutilier and M. Goldszmidt, editors, Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 46-53. Morgan Kaufmann, 2000.
[2] D. Crisan. Particle filters: a theoretical perspective. In A. Doucet, J. F. G. de Freitas, and N. J. Gordon, editors, Sequential Monte Carlo Methods in Practice, pages 17-38. Springer-Verlag, 2001.
[3] A. Doucet, J. F. G. de Freitas, and N. J. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001.
[4] N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings-F, 140(2):107-113, 1993.
[5] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711-732, 1995.
[6] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1-25, 1996.
[7] P. Del Moral and A. Doucet. Sequential Monte Carlo samplers. Technical Report CUED/F-INFENG/TR.443, Signal Processing Group, Cambridge University Engineering Department, 2002.
[8] R. M. Neal. Assessing relevance determination methods using DELVE. In C. M. Bishop, editor, Neural Networks and Machine Learning, pages 97-129. Springer-Verlag, 1998.
[9] S. S. Tham, A. Doucet, and R. Kotagiri. Sparse Bayesian learning for regression and classification using Markov chain Monte Carlo. In Proceedings of the International Conference on Machine Learning, pages 634-643, 2002.
[10] M. E. Tipping. The relevance vector machine. In S. A. Solla, T. K. Leen, and K. R. Müller, editors, Advances in Neural Information Processing Systems, volume 12, pages 652-658. MIT Press, 2000.
[11] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244, 2001.
[12] M. E. Tipping and A. C. Faul. Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey, editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.
[13] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
| 2362 |@word proportion:1 eng:1 vermaak:2 dramatic:1 tr:1 recursively:5 initial:2 series:1 initialisation:1 mmse:3 existing:4 freitas:2 current:2 reminiscent:2 written:2 john:1 subsequent:1 partition:2 additive:1 remove:3 update:7 resampling:8 intelligence:2 leaf:1 fewer:1 xk:1 isotropic:1 smith:1 normalising:3 provides:1 location:3 become:2 ik:2 expected:2 indeed:1 decomposed:1 automatically:1 increasing:1 becomes:6 project:1 mass:1 kind:1 poste:1 biometrika:1 uk:14 normally:1 before:4 generalised:3 engineering:2 frey:1 might:1 delve:1 smc:2 stratified:1 range:1 practice:3 recursive:1 maximisation:1 cb2:1 procedure:3 area:1 significantly:1 ett:1 kmax:5 impossible:1 yt:25 straightforward:2 simplicity:1 tham:1 notion:1 variation:1 updated:1 target:9 construction:1 bk0:1 us:1 designing:3 associate:1 observed:1 mike:1 capture:1 solla:1 decrease:2 yk:1 substantial:1 mu:1 complexity:4 cam:1 depend:3 efficiency:2 joint:6 k0:12 train:1 fast:1 effective:2 monte:12 artificial:2 choosing:2 birth:8 larger:1 ability:1 statistic:2 itself:1 noisy:1 reproduced:2 housing:3 propose:3 remainder:1 description:1 intuitive:1 dirac:1 empty:1 assessing:1 comparative:3 incremental:1 tk:1 object:1 illustrate:2 develop:1 ac:1 augmenting:1 cued:1 ard:1 qt:17 auxiliary:1 faul:1 direction:1 drawback:1 filter:2 stochastic:1 sjg:1 require:2 proposition:1 kitagawa:1 ad2:1 sufficiently:1 considered:2 exp:3 substituting:1 achieves:1 consecutive:2 adopt:1 estimation:15 applicable:2 rvm:11 tool:1 mit:1 clearly:1 gaussian:7 aim:1 rather:1 primed:1 avoid:1 crisan:1 focus:1 refining:1 likelihood:4 contrast:1 sense:1 finfeng:1 helpful:1 eliminate:1 unlikely:1 bt:1 classification:3 denoted:1 augment:1 multiplies:1 development:1 marginal:7 once:1 saving:1 reversibility:1 sampling:12 discrepancy:1 report:1 simplify:1 gordon:3 randomly:6 gamma:4 attempt:1 acceptance:1 multiply:1 truly:1 mixture:1 light:1 devoted:1 chain:3 kt:1 necessary:1 old:1 desired:1 circle:1 theoretical:1 deviation:1 uniform:1 iee:1 
Bob Ricks
Department of Computer Science
Brigham Young University
Provo, UT 84602
[email protected]
Dan Ventura
Department of Computer Science
Brigham Young University
Provo, UT 84602
[email protected]
Abstract
Most proposals for quantum neural networks have skipped over the problem of how to train the networks. The mechanics of quantum computing
are different enough from classical computing that the issue of training
should be treated in detail. We propose a simple quantum neural network
and a training method for it. It can be shown that this algorithm works
in quantum systems. Results on several real-world data sets show that
this algorithm can train the proposed quantum neural networks, and that
it has some advantages over classical learning algorithms.
1
Introduction
Many quantum neural networks have been proposed [1], but very few of these proposals
have attempted to provide an in-depth method of training them. Most either do not mention
how the network will be trained or simply state that they use a standard gradient descent
algorithm. This assumes that training a quantum neural network will be straightforward and
analogous to classical methods. While some quantum neural networks seem quite similar
to classical networks [2], others have proposed quantum networks that are vastly different
[3, 4, 5]. Several different network structures have been proposed, including lattices [6]
and dots [4]. Several of these networks also employ methods which are speculative or
difficult to do in quantum systems [7, 8]. These significant differences between classical
networks and quantum neural networks, as well as the problems associated with quantum
computation itself, require us to look more deeply at the issue of training quantum neural
networks. Furthermore, no one has done empirical testing on their training methods to
show that their methods work with real-world problems.
It is an open question what advantages a quantum neural network (QNN) would have over
a classical network. It has been shown that QNNs should have roughly the same computational power as classical networks [7]. Other results have shown that QNNs may work best
with some classical components as well as quantum components [2].
Quantum searches can be proven to be faster than comparable classical searches. We leverage this idea to propose a new training method for a simple QNN. This paper details such a
network and how training could be done on it. Results from testing the algorithm on several
real-world problems show that it works.
2
Quantum Computation
Several necessary ideas that form the basis for the study of quantum computation are briefly
reviewed here. For a good treatment of the subject, see [9].
2.1
Linear Superposition
Linear superposition is closely related to the familiar mathematical principle of linear combination of vectors. Quantum systems are described by a wave function Ψ that exists in a Hilbert space. The Hilbert space has a set of states, |φ_i⟩, that form a basis, and the system is described by a quantum state |ψ⟩ = Σ_i c_i |φ_i⟩. |ψ⟩ is said to be coherent or to be in a linear superposition of the basis states |φ_i⟩, and in general the coefficients c_i are complex. A postulate of quantum mechanics is that if a coherent system interacts in any way with its environment (by being measured, for example), the superposition is destroyed. This loss of coherence is governed by the wave function Ψ. The coefficients c_i are called probability amplitudes, and |c_i|² gives the probability of |ψ⟩ being measured in the state |φ_i⟩. Note that the wave function Ψ describes a real physical system that must collapse to exactly one basis state. Therefore, the probabilities governed by the amplitudes c_i must sum to unity. A two-state quantum system is used as the basic unit of quantum computation. Such a system is referred to as a quantum bit or qubit and, naming the two states |0⟩ and |1⟩, it is easy to see why this is so.
2.2
Operators
Operators on a Hilbert space describe how one wave function is changed into another and they may be represented as matrices acting on vectors (the notation |·⟩ indicates a column vector and ⟨·| a [complex conjugate] row vector). Using operators, an eigenvalue equation can be written A|φ_i⟩ = a_i|φ_i⟩, where a_i is the eigenvalue. The solutions |φ_i⟩ to such an equation are called eigenstates and can be used to construct the basis of a Hilbert space as discussed in Section 2.1. In the quantum formalism, all properties are represented as operators whose eigenstates are the basis for the Hilbert space associated with that property and whose eigenvalues are the quantum allowed values for that property. It is important to note that operators in quantum mechanics must be linear operators and further that they must be unitary.
2.3
Interference
Interference is a familiar wave phenomenon. Wave peaks that are in phase interfere constructively while those that are out of phase interfere destructively. This is a phenomenon
common to all kinds of wave mechanics from water waves to optics. The well known
double slit experiment demonstrates empirically that at the quantum level interference also
applies to the probability waves of quantum mechanics. The wave function interferes with
itself through the action of an operator ? the different parts of the wave function interfere
constructively or destructively according to their relative phases just like any other kind of
wave.
2.4
Entanglement
Entanglement is the potential for quantum systems to exhibit correlations that cannot be accounted for classically. From a computational standpoint, entanglement seems intuitive enough: it is simply the fact that correlations can exist between different qubits, for example if one qubit is in the |1⟩ state, another will be in the |1⟩ state. However, from a physical standpoint, entanglement is little understood. The questions of what exactly it is and how it works are still not resolved. What makes it so powerful (and so little understood) is the fact that since quantum states exist as superpositions, these correlations exist in superposition as well. When coherence is lost, the proper correlation is somehow communicated between the qubits, and it is this "communication" that is the crux of entanglement. Mathematically, entanglement may be described using the density matrix formalism. The density
between the qubits, and it is this ?communication? that is the crux of entanglement. Mathematically, entanglement may be described using the density matrix formalism. The density
matrix ?? of a quantum state |?i is defined as ?? = |?i h?| For example, the quantum
? ?
1
1
1
1 ? 1 ?
?
?
?
state |?i = 2 |00i + 2 |01i appears in vector form as |?i = 2 ? ? and it may
0
0
?
?
1 1 0 0
? 1 1 0 0 ?
also be represented as the density matrix ?? = |?i h?| = 21 ?
while the
0 0 0 0 ?
0 0 0 0
?
?
1 0 0 1
? 0 0 0 0 ?
state |?i = ?12 |00i + ?12 |11i is represented as ?? = |?i h?| = 12 ?
0 0 0 0 ?
1 0 0 1
where the matrices and vectors
are
indexed
by
the
state
labels
00,...,
11.
Notice that ??
1
0
1
1
?
where ? is the normal tensor
can be factorized as ?? = 12
0 0
1 1
product. On the other hand, ?? can not be factorized. States that can not be factorized are
said to be entangled, while those that can be factorized are not. There are different degrees
of entanglement and much work has been done on better understanding and quantifying it
[10, 11]. Finally, it should be mentioned that while interference is a quantum property that
has a classical cousin, entanglement is a completely quantum phenomenon for which there
is no classical analog. It has proven to be a powerful computational resource in some cases
and a major hindrance in others.
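The factorizability distinction can be checked numerically: tracing out the second qubit of the product state leaves a pure reduced state (purity 1), while the entangled state leaves the maximally mixed state (purity 1/2). A small NumPy check:

```python
import numpy as np

# Basis order |00>, |01>, |10>, |11>
psi = np.array([1, 1, 0, 0]) / np.sqrt(2)   # (|00> + |01>)/sqrt(2), a product state
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2), entangled

def reduced_first_qubit(state):
    """Trace out the second qubit of a two-qubit pure state."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.einsum("ajbj->ab", rho)       # partial trace over the second index pair

purity = lambda r: np.trace(r @ r).real     # 1 for pure states, 1/2 for maximally mixed
```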
To summarize, quantum computation can be defined as representing the problem to be
solved in the language of quantum states and then producing operators that drive the system
(via interference and entanglement) to a final state such that when the system is observed
there is a high probability of finding a solution.
2.5
An Example ? Quantum Search
One of the best known quantum algorithms searches an unordered database quadratically
faster than any classical method [12, 13]. The algorithm begins with a superposition of
all N data items and depends upon an oracle that can recognize the target of the search.
Classically, searching such a database requires
O(N ) oracle calls; however, on a quan?
tum computer, the task requires only O( N ) oracle calls. Each oracle call consists of a
quantum operator that inverts the phase of the search target. An ?inversion
about average?
?
operator then shifts amplitude towards the target state. After ?/4 ? N repetitions of this
process, the system is measured and with high probability, the desired datum is the result.
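This search is simple to simulate with a classical statevector. The sketch below applies the phase-flip oracle and the inversion-about-average operator directly to the amplitude vector:

```python
import numpy as np

def grover_search(n_items, target):
    """Statevector simulation of the unordered search: phase oracle + inversion about average."""
    amps = np.full(n_items, 1.0 / np.sqrt(n_items))      # uniform superposition
    for _ in range(int(np.floor(np.pi / 4 * np.sqrt(n_items)))):
        amps[target] *= -1.0                             # oracle: flip the target's phase
        amps = 2.0 * amps.mean() - amps                  # inversion about average
    return np.argmax(amps ** 2)                          # measurement (most likely outcome)

found = grover_search(64, target=11)
```

After pi/4 · sqrt(64) = 6 iterations the target holds almost all of the probability mass, so the most likely measurement outcome is the search target.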
3
A Simple Quantum Neural Network
We would like a QNN with features that make it easy for us to model, yet powerful enough
to leverage quantum physics. We would like our QNN to:
- use known quantum algorithms and gates
- have weights which we can measure for each node
- work in classical simulations of reasonable size
- be able to transfer knowledge to classical systems
We propose a QNN that operates much like a classical ANN composed of several layers of perceptrons: an input layer, one or more hidden layers and an output layer. Each layer is fully connected to the previous layer. Each hidden layer computes a weighted sum of the outputs of the previous layer. If this sum is above a threshold, the node goes high, otherwise it stays low.
that it also checks its accuracy against the target output of the network. The network as a
whole computes a function by checking which output bit is high. There are no checks to
make sure exactly one output is high. This allows the network to learn data sets which have
one output high or binary-encoded outputs.
Figure 1: Simple QNN to compute XOR function
The QNN in Figure 1 is an example of such a network, with sufficient complexity to compute the XOR function. Each input node i is represented by a register, |?ii . The two hidden
nodes compute a weighted sum of the inputs, |?ii1 and |?ii2 , and compare the sum to a
threshold weight, |?ii0 . If the weighted sum is greater than the threshold the node goes
high. The |?ik represent internal calculations that take place at each node. The output layer
works similarly, taking a weighted sum of the hidden nodes and checking against a threshold. The QNN then checks each computed output and compares it to the target output, |?ij
sending |?ij high when they are equivalent. The performance of the network is denoted
by |?i, which is the number of computed outputs equivalent to their corresponding target
output.
At the quantum gate level, the network will require O(blm + m2 ) gates for each node of
the network. Here b is the number of bits used for floating point arithmetic in |?i, l is the
number of bits for each weight and m is the number of inputs to the node [14]-[15].
The overall network works as follows on a training set. In our example, the network has
two input parameters, so all n training examples will have two input registers. These are
represented as |?⟩11 to |?⟩n2. The target answers are kept in registers |?⟩11 to |?⟩n2. Each
hidden or output node has a weight vector, represented by |?⟩i, each vector containing
weights for each of its inputs. After classifying a training example, the registers |?⟩1 and
|?⟩2 reflect the network's ability to classify that training example. As a simple measure
of performance, we increment |?⟩ by the sum of all |?⟩i. When all training examples have
been classified, |?⟩ will be the sum of the output nodes that have the correct answer throughout the training set and will range between zero and the number of training examples times
the number of output nodes.
Figure 2: QNN Training
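The performance measure just described can be written down classically. This is a sketch under the stated semantics (a count of matching output nodes, summed over examples); the function name is an assumption:

```python
def performance(computed, targets):
    """The performance register's value: the number of computed output
    nodes equal to their targets, summed over all training examples.
    Ranges from 0 to n * m (n examples, m output nodes)."""
    return sum(int(o == t)
               for outs, targ in zip(computed, targets)
               for o, t in zip(outs, targ))
```

A network that matches every target on every example attains the maximum value n * m, which is exactly the condition the quantum search below looks for.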
4 Using Quantum Search to Learn Network Weights
One possibility for training this kind of network is to search through the possible weight
vectors for one which is consistent with the training data. Quantum searches have already
been used in quantum learning [16] and many of the problems associated with them
have already been explored [17]. We would like to find a solution which classifies all
training examples correctly; in other words, we would like |?⟩ = n × m, where n is the
number of training examples and m is the number of output nodes. Since we generally do
not know how many weight vectors will do this, we use a generalization of the original
search algorithm [18], intended for problems where the number of solutions t is unknown.
The basic idea is that we will put |?⟩ into a superposition of all possible weight vectors and
search for one which classifies all training examples correctly.
We start out with |?⟩ as a superposition of all possible weight vectors. All other registers
(|?⟩, |?⟩, |?⟩), besides the inputs and target outputs, are initialized to the state |0⟩. We
then classify each training example, updating the performance register, |?⟩. By using a superposition we classify the training examples with respect to every possible weight vector
simultaneously. Each weight vector is now entangled with |?⟩ in such a way that |?⟩ corresponds with how well every weight vector classifies all the training data. In this case, the
oracle for the quantum search is |?⟩ = n × m, which corresponds to searching for a weight
vector which correctly classifies the entire set.
Unfortunately, searching the weight vectors while entangled with |?⟩ would cause unwanted weight vectors to grow that would be entangled with the performance metric we
are looking for. The solution is to disentangle |?⟩ from the other registers after inverting
the phase of those weights which match the search criteria, based on |?⟩. To do this, the
entire network will need to be uncomputed, which will unentangle all the registers and set
them back to their initial values. This means that the network will need to be recomputed
each time we make an oracle call and after each measurement.
There are at least two things about this algorithm that are undesirable. First, not all training
data will admit a solution network that correctly classifies all training instances. This
means that nothing will be marked by the search oracle, so every weight vector will have
an equal chance of being measured. It is also possible that even when a solution does
exist, it is not desirable because it overfits the training data. Second, the amount of time
needed to find a vector which correctly classifies the training set is O(√(2^b / t)), which has
exponential complexity with respect to the number of bits in the weight vector.
One way to deal with the first problem is to search until we find a solution which covers an
acceptable percentage, p, of the training data. In other words, the search oracle is modified
to be |?⟩ ≥ n × m × p. The second problem is addressed in the next section.
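The relaxed oracle and the quadratic speed-up can be sketched as follows. This is a classical illustration, not a quantum implementation: the oracle is just the acceptance predicate, and the query count is the Boyer et al. bound of roughly (π/4)·√(2^b / t) oracle calls versus ~2^b / t classically:

```python
import math

def oracle(rho, n, m, p):
    """Relaxed search criterion: accept a weight vector whose performance
    rho covers at least a fraction p of the n * m output targets."""
    return rho >= n * m * p

def grover_queries(b, t):
    """Expected oracle calls to find one of t marked weight vectors among
    2**b candidates, ~ (pi/4) * sqrt(2**b / t) (Boyer et al. [18])."""
    return math.ceil((math.pi / 4) * math.sqrt(2 ** b / t))
```

For a 20-bit weight vector with a single solution this is on the order of 800 oracle calls, rather than roughly a million for exhaustive classical search; the cost is still exponential in b, which motivates the piecewise scheme of the next section.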
5 Piecewise Weight Learning
Our quantum search algorithm gives us a good polynomial speed-up to the exponential task
of finding a solution to the QNN. This algorithm does not scale well, in fact it is exponential
in the total number of weights in the network and the bits per weight. Therefore, we propose
a randomized training algorithm which searches each node's weight vector independently.
The network starts off, once again, with training examples in |?⟩, the corresponding answers in |?⟩, and zeros in all the other registers. A node is randomly selected and its
weight vector, |?⟩i, is put into superposition. All other weight vectors start with random
classical initial weights. We then search for a weight vector for this node that causes the
entire network to classify a certain percentage, p, of the training examples correctly. This is
repeated, iteratively decreasing p, until a new weight vector is found. That weight is fixed
classically and the process is repeated randomly for the other nodes.
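The loop above can be sketched with a classical stand-in, where the per-node quantum search over superposed weight vectors is replaced by random sampling of candidates for the selected node. The function name, parameters, and the linear p-decay schedule below are illustrative assumptions, not the paper's specification:

```python
import random

def piecewise_train(weights, accuracy_fn, levels, p_init=1.0, p_step=0.05,
                    max_epochs=200, tries=64, seed=0):
    """Classical sketch of piecewise weight learning. `weights` is a list of
    per-node weight vectors, `levels` the allowed discrete weight values
    (e.g. the values representable with 2-bit weights), and
    `accuracy_fn(weights)` the fraction of output targets the whole network
    gets right. Each epoch: pick a node, sample candidate vectors for it
    (standing in for the quantum search), and fix the first candidate whose
    whole-network accuracy reaches p; p is lowered until one is found."""
    rng = random.Random(seed)
    for _ in range(max_epochs):
        if accuracy_fn(weights) >= 1.0:
            break
        i = rng.randrange(len(weights))
        p = p_init
        while p > 0:
            accepted = False
            for _ in range(tries):
                old = weights[i]
                weights[i] = [rng.choice(levels) for _ in old]
                if accuracy_fn(weights) >= p:
                    accepted = True   # fix this weight vector classically
                    break
                weights[i] = old
            if accepted:
                break
            p -= p_step               # relax the acceptance threshold and retry
    return weights
```

Because each epoch searches only one node's weight space, the cost per search grows with that node's connectivity and weight precision rather than with the composite weight vector of the whole network.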
Searching each node's weight vector separately is, in effect, a random search through the
weight space where we select weight vectors which give a good level of performance for
each node. Each node takes on weight vectors that tend to increase performance with some
amount of randomness that helps keep it out of local minima. This search can be terminated
when an acceptable level of performance has been reached.
There are a few improvements to the basic design which help speed convergence. First,
to ensure that hidden nodes find weight vectors that compute something useful, a small
performance penalty is added to weight vectors which cause a hidden node to output the
same value for all training examples. This helps select weight vectors which contain useful
information for the output nodes. Since each output node's performance is independent
of the performance of all other output nodes, the algorithm only considers the accuracy of the
output node being trained when training an output node.
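The constant-node penalty can be illustrated as follows; the function name and the size of the penalty are assumptions for illustration:

```python
def hidden_penalty(hidden_outputs, penalty=1):
    """Small performance penalty for each hidden node that emits the same
    value on every training example (a constant node carries no information
    for the output layer). hidden_outputs[e][j] is node j's output on
    training example e."""
    per_node = zip(*hidden_outputs)   # outputs of each node across examples
    return sum(penalty for vals in per_node if len(set(vals)) == 1)
```

Subtracting this penalty from a candidate weight vector's score steers the search toward hidden nodes whose outputs actually vary across the training set.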
6 Results
We first consider the canonical XOR problem. Each of the hidden and the output nodes
are thresholded nodes with three weights, one for each input and one for the threshold. For
each weight 2 bits are used. Quantum search did well on this problem, finding a solution
in an average of 2.32 searches.
The randomized search algorithm also did well on the XOR problem. After an average of
58 weight updates, the algorithm was able to correctly classify the training data. Since this
is a randomized algorithm both in the number of iterations of the search algorithm before
measuring and in the order which nodes update their weight vectors, the standard deviation
for this method was much higher, but still reasonable. In the randomized search algorithm,
an epoch refers to finding and fixing the weight of a single node.
We also tried the randomized search algorithm on a few real-world machine learning problems: the lenses, Hayes-Roth, and iris datasets [19]. The lenses data set tries to predict
whether people will need soft contact lenses, hard contact lenses, or no contacts. The iris
dataset details features of three different classes of irises. The Hayes-Roth dataset classifies
people into different classes depending on several attributes.
Data Set      # Weight Qubits   Epochs     Weight Updates   Output Accuracy   Training Accuracy   Backprop
Iris                32          23,000           225             98.23%            97.79%            96%
Lenses              42          22,500           145             98.35%           100.0%             92%
Hayes-Roth          68          5 × 10^6       9,200             88.76%            82.98%            83%

Table 1: Training Results
The lenses data set can be solved with a network that has three hidden nodes. After between
a few hundred and a few thousand iterations it usually finds a solution. This may be because
it has a hard time with 2-bit weights, or because it is searching for perfect accuracy. The
number of times a weight was fixed and updated was only 225 for this data set. The iris data
set was normalized so that each input had a value between zero and one. The randomized
search algorithm found the correct target for 97.79% of the output nodes.
Our results for the Hayes-Roth problem were also quite good. We used four hidden nodes
with two-bit weights. We had to normalize the inputs to range from zero to one once again
so that the larger inputs would not dominate the weight vectors.
The algorithm found the correct target for 88.86% of the output nodes in about
5,000,000 epochs. Note that this does not mean that it classified 88.86% of the training
examples correctly, since we are checking each output node for accuracy on each training example. The algorithm actually classified 82.98% of the training set correctly, which
compares well with backpropagation's 83% [20].
7 Conclusions and Future Work
This paper proposes a simple quantum neural network and a method of training it which
works well in quantum systems. By using a quantum search we are able to use a well-known algorithm for quantum systems which has already been used for quantum learning.
The algorithm is able to search for solutions that cover an arbitrary percentage of the training set. This could be very useful for problems which require a very accurate solution. The
drawback is that it is an exponential algorithm, even with the significant quadratic speedup.
A randomized version avoids some of the exponential increases in complexity with problem
size. This algorithm is exponential in the number of qubits of each node?s weight vector
instead of in the composite weight vector of the entire network. This means the complexity
of the algorithm increases with the number of connections to a node and the precision of
each individual weight, dramatically decreasing complexity for problems with large numbers of nodes. This could be a great improvement for larger problems. Preliminary results
for both algorithms have been very positive.
There may be quantum methods which could be used to improve current gradient descent
and other learning algorithms. It may also be possible to combine some of these with a
quantum search. An example would be to use gradient descent to try and refine a composite weight vector found by quantum search. Conversely, a quantum search could start with
the weight vector of a gradient descent search. This would allow the search to start with an
accurate weight vector and search locally for weight vectors which improve overall performance. Finally the two methods could be used simultaneously to try and take advantage of
the benefits of each technique.
Other types of QNNs may be able to use a quantum search as well since the algorithm
only requires a weight space which can be searched in superposition. In addition, more
traditional gradient descent techniques might benefit from a quantum speed-up themselves.
References
[1] Alexandr Ezhov and Dan Ventura. Quantum neural networks. In N. Kasabov, editor, Future
Directions for Intelligent Systems and Information Science. Physica-Verlag, 2000.
[2] Ajit Narayanan and Tammy Menneer. Quantum artificial neural network architectures and
components. In Information Sciences, volume 124, nos. 1-4, pages 231–255, 2000.
[3] M. V. Altaisky. Quantum neural network. Technical report, 2001. http://xxx.lanl.gov/quant-ph/0107012.
[4] E. C. Behrman, J. Niemel, J. E. Steck, and S. R. Skinner. A quantum dot neural network. In
Proceedings of the 4th Workshop on Physics of Computation, pages 22–24. Boston, 1996.
[5] Fariel Shafee. Neural networks with c-not gated nodes. Technical report, 2002.
http://xxx.lanl.gov/quant-ph/0202016.
[6] Yukari Fujita and Tetsuo Matsui. Quantum gauged neural network: U(1) gauge theory. Technical report, 2002. http://xxx.lanl.gov/cond-mat/0207023.
[7] S. Gupta and R. K. P. Zia. Quantum neural networks. In Journal of Computer and System
Sciences, volume 63, no. 3, pages 355–383, 2001.
[8] E. C. Behrman, V. Chandrasheka, Z. Wank, C. K. Belur, J. E. Steck, and S. R. Skinner. A quantum neural network computes entanglement. Technical report, 2002. http://xxx.lanl.gov/quant-ph/0202131.
[9] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information.
Cambridge University Press, 2000.
[10] V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight. Quantifying entanglement. In Physical
Review Letters, volume 78(12), pages 2275–2279, 1997.
[11] R. Jozsa. Entanglement and quantum computation. In S. Huggett, L. Mason, K. P. Tod, T. Tsou,
and N. M. J. Woodhouse, editors, The Geometric Universe, pages 369–379. Oxford University
Press, 1998.
[12] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of
the 28th ACM STOC, pages 212–219, 1996.
[13] Lov K. Grover. Quantum mechanics helps in searching for a needle in a haystack. In Physical
Review Letters, volume 78, pages 325–328, 1997.
[14] Peter Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a
quantum computer. In SIAM Journal of Computing, volume 26, no. 5, pages 1484–1509, 1997.
[15] Vlatko Vedral, Adriano Barenco, and Artur Ekert. Quantum networks for elementary arithmetic
operations. In Physical Review A, volume 54, no. 1, pages 147–153, 1996.
[16] Dan Ventura and Tony Martinez. Quantum associative memory. In Information Sciences, volume 124, nos. 1-4, pages 273–296, 2000.
[17] Alexandr Ezhov, A. Nifanova, and Dan Ventura. Distributed queries for quantum associative
memory. In Information Sciences, volume 128, nos. 3-4, pages 271–293, 2000.
[18] Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum searching. In Proceedings of the Fourth Workshop on Physics and Computation, pages 36–43, 1996.
[19] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
http://www.ics.uci.edu/~mlearn/MLRepository.html.
[20] Frederick Zarndt. A comprehensive case study: An examination of machine learning and connectionist algorithms. Master's thesis, Brigham Young University, 1995.