A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
Sajid M. Siddiqi
Robotics Institute
Carnegie-Mellon University
Pittsburgh, PA 15213
[email protected]
Byron Boots
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213
[email protected]
Geoffrey J. Gordon
Machine Learning Department
Carnegie-Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Stability is a desirable characteristic for linear dynamical systems, but it is often
ignored by algorithms that learn these systems from data. We propose a novel
method for learning stable linear dynamical systems: we formulate an approximation of the problem as a convex program, start with a solution to a relaxed version
of the program, and incrementally add constraints to improve stability. Rather
than continuing to generate constraints until we reach a feasible solution, we test
stability at each step; because the convex program is only an approximation of the
desired problem, this early stopping rule can yield a higher-quality solution. We
apply our algorithm to the task of learning dynamic textures from image sequences
as well as to modeling biosurveillance drug-sales data. The constraint generation
approach leads to noticeable improvement in the quality of simulated sequences.
We compare our method to those of Lacy and Bernstein [1, 2], with positive results
in terms of accuracy, quality of simulated sequences, and efficiency.
1 Introduction
Many problems in machine learning involve sequences of real-valued multivariate observations.
To model the statistical properties of such data, it is often sensible to assume each observation to be
correlated to the value of an underlying latent variable, or state, that is evolving over the course of the
sequence. In the case where the state is real-valued and the noise terms are assumed to be Gaussian,
the resulting model is called a linear dynamical system (LDS), also known as a Kalman Filter [3].
LDSs are an important tool for modeling time series in engineering, controls and economics as well
as the physical and social sciences.
Let $\{\lambda_i(M)\}_{i=1}^n$ denote the eigenvalues of an $n \times n$ matrix $M$ in decreasing order of magnitude, $\{\nu_i(M)\}_{i=1}^n$ the corresponding unit-length eigenvectors, and define its spectral radius $\rho(M) \equiv |\lambda_1(M)|$. An LDS with dynamics matrix $A$ is stable if all of $A$'s eigenvalues have magnitude at most 1, i.e., $\rho(A) \le 1$. Standard algorithms for learning LDS parameters do not enforce
this stability criterion, learning locally optimal values for LDS parameters by gradient descent [4],
Expectation Maximization (EM) [5] or least squares on a state sequence estimate obtained by subspace identification methods, as described in Section 3.1. However, when learning from finite data
samples, the least squares solution may be unstable even if the system is stable [6]. The drawback
of ignoring stability is most apparent when simulating long sequences from the system in order to
generate representative data or infer stretches of missing values.
We propose a convex optimization algorithm for learning the dynamics matrix while guaranteeing stability. An estimate of the underlying state sequence is first obtained using subspace identification. We then formulate the least-squares problem for the dynamics matrix as a quadratic program (QP) [7], initially without constraints. When this QP is solved, the estimate $\hat{A}$ obtained may be unstable. However, any unstable solution allows us to derive a linear constraint which we then add to our original QP and re-solve. The above two steps are iterated until we reach a stable solution, which is then refined by a simple interpolation to obtain the best possible stable estimate.
Our method can be viewed as constraint generation for an underlying convex program with a feasible set of all matrices with singular values at most 1, similar to work in control systems [1]. However,
we terminate before reaching feasibility in the convex program, by checking for matrix stability after
each new constraint. This makes our algorithm less conservative than previous methods for enforcing stability since it chooses the best of a larger set of stable dynamics matrices. The difference in
the resulting stable systems is noticeable when simulating data. The constraint generation approach
also achieves much greater efficiency than previous methods in our experiments.
One application of LDSs in computer vision is learning dynamic textures from video data [8]. An
advantage of learning dynamic textures is the ability to play back a realistic-looking generated sequence of any desired duration. In practice, however, videos synthesized from dynamic texture
models can quickly degenerate because of instability in the underlying LDS. In contrast, sequences
generated from dynamic textures learned by our method remain "sane" even after arbitrarily long
durations. We also apply our algorithm to learning baseline dynamic models of over-the-counter
(OTC) drug sales for biosurveillance, and sunspot numbers from the UCR archive [9]. Comparison
to the best alternative methods [1, 2] on these problems yields positive results.
2 Related Work
Linear system identification is a well-studied subject [4]. Within this area, subspace identification
methods [10] have been very successful. These techniques first estimate the model dimensionality
and the underlying state sequence, and then derive parameter estimates using least squares. Within
subspace methods, techniques have been developed to enforce stability by augmenting the extended
observability matrix with zeros [6] or adding a regularization term to the least squares objective [11].
All previous methods were outperformed by Lacy and Bernstein [1], henceforth referred to as LB-1.
They formulate the problem as a semidefinite program (SDP) whose objective minimizes the state
sequence reconstruction error, and whose constraint bounds the largest singular value by 1. This
convex constraint is obtained by rewriting the nonlinear matrix inequality $I_n - AA^T \succeq 0$ as a linear matrix inequality [12], where $I_n$ is the $n \times n$ identity matrix. Here, $\succ 0$ ($\succeq 0$) denotes positive (semi-)definiteness. The existence of this constraint also proves the convexity of the $\sigma_1 \le 1$ region.
A follow-up to this work by the same authors [2], which we will call LB-2, attempts to overcome the conservativeness of LB-1 by approximating the Lyapunov inequalities $P - APA^T \succ 0$, $P \succ 0$ with the inequalities $P - APA^T - \delta I_n \succeq 0$, $P - \delta I_n \succeq 0$, $\delta > 0$. These inequalities hold iff the spectral radius is less than 1. However, the approximation is achieved only at the cost of inducing a nonlinear distortion of the objective function by a problem-dependent reweighting matrix involving $P$, which is a variable to be optimized. In our experiments, this causes LB-2 to perform worse than LB-1 (for any $\delta$) in terms of the state sequence reconstruction error, even while obtaining solutions outside
the feasible region of LB-1. Consequently, we focus on LB-1 in our conceptual and qualitative
comparisons as it is the strongest baseline available. However, LB-2 is more scalable than LB-1, so
quantitative results are presented for both.
To summarize the distinction between constraint generation, LB-1 and LB-2: it is hard to have both
the right objective function (reconstruction error) and the right feasible region (the set of stable
matrices). LB-1 optimizes the right objective but over the wrong feasible region (the set of matrices
with $\sigma_1 \le 1$). LB-2 has a feasible region close to the right one, but at the cost of distorting its
objective function to an extent that it fares worse than LB-1 in nearly all cases. In contrast, our
method optimizes the right objective over a less conservative feasible region than that of any previous
algorithm with the right objective, and this combination is shown to work the best in practice.
3 Linear Dynamical Systems
The evolution of a linear dynamical system can be described by the following two equations:

$$x_{t+1} = A x_t + w_t \qquad\qquad y_t = C x_t + v_t \qquad (1)$$

Time is indexed by the discrete variable $t$. Here $x_t$ denotes the hidden states in $\mathbb{R}^n$, $y_t$ the observations in $\mathbb{R}^m$, and $w_t$ and $v_t$ are zero-mean normally distributed state and observation noise variables.
Figure 1: A. Sunspot data, sampled monthly for 200 years. Each curve is a month, the x-axis is over years. B. First two principal components of a 1-observation Hankel matrix. C. First two principal components of a 12-observation Hankel matrix, which better reflect temporal patterns in the data.
Assume some initial state $x_0$. The parameters of the system are the dynamics matrix $A \in \mathbb{R}^{n\times n}$, the observation model $C \in \mathbb{R}^{m\times n}$, and the noise covariance matrices $Q$ and $R$. Note that we are learning uncontrolled linear dynamical systems, though, as in previous work, control inputs can easily be incorporated into the objective function and convex program.
Linear dynamical systems can also be viewed as probabilistic graphical models. The standard LDS
filtering and smoothing inference algorithms [3, 13] are instantiations of the junction tree algorithm
for Bayesian Networks (see, for example, [14]).
We follow the subspace identification literature in estimating all parameters other than the dynamics
matrix. A clear and concise exposition of the required techniques is presented in Soatto et al. [8],
which we summarize below. We use subspace identification methods in our experiments for uniformity with previous work we are building on (in the control systems literature) and with work we are
comparing to ([8] on the dynamic textures data).
3.1 Learning Model Parameters by Subspace Methods
Subspace methods calculate LDS parameters by first decomposing a matrix of observations to yield
an estimate of the underlying state sequence. The most straightforward such technique is used here,
which relies on the singular value decomposition (SVD) [15]. See [10] for variations.
Let $Y_{1:\tau} = [y_1\ y_2\ \ldots\ y_\tau] \in \mathbb{R}^{m\times\tau}$ and $X_{1:\tau} = [x_1\ x_2\ \ldots\ x_\tau] \in \mathbb{R}^{n\times\tau}$. $D$ denotes the matrix of observations which is the input to SVD. One typical choice for $D$ is $D = Y_{1:\tau}$; we will discuss others below. SVD yields $D \approx U\Sigma V^T$ where $U \in \mathbb{R}^{m\times n}$ and $V \in \mathbb{R}^{\tau\times n}$ have orthonormal columns $\{u_i\}$ and $\{v_i\}$, and $\Sigma = \mathrm{diag}\{\sigma_1, \ldots, \sigma_n\}$ contains the singular values. The model dimension $n$ is determined by keeping all singular values of $D$ above a threshold. We obtain estimates of $C$ and $X$:

$$\hat{C} = U \qquad\qquad \hat{X} = \Sigma V^T \qquad (2)$$
See [8] for an explanation of why these estimates satisfy certain canonical model assumptions. $\hat{X}$ is referred to as the extended observability matrix in the control systems literature; the $t$th column of $\hat{X}$ represents an estimate of the state of our LDS at time $t$. The least squares estimate of $A$ is:

$$\hat{A} = \arg\min_A J^2(A) = \arg\min_A \left\| A X_{0:\tau-1} - X_{1:\tau} \right\|_F^2 = X_{1:\tau} X_{0:\tau-1}^{\dagger} \qquad (3)$$

where $\|\cdot\|_F$ denotes the Frobenius norm and $\dagger$ denotes the Moore-Penrose inverse. Eq. (3) asks $\hat{A}$ to minimize the error in predicting the state at time $t+1$ from the state at time $t$. Given the above estimates $\hat{A}$ and $\hat{C}$, the covariance matrices $\hat{Q}$ and $\hat{R}$ can be estimated directly from residuals.
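As a concrete sketch, Eqs. (2)-(3) amount to a few lines of numpy; the function name and the explicit rank argument n are our own choices (the paper instead keeps all singular values above a threshold):

```python
import numpy as np

def subspace_id(D, n):
    """SVD-based subspace identification (Section 3.1).

    D : (m, tau) matrix of observations, one column per time step.
    n : latent dimension, passed in here for simplicity.
    Returns C_hat, X_hat of Eq. (2) and the least-squares A_hat of Eq. (3).
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    C_hat = U[:, :n]                                  # Eq. (2): C_hat = U
    X_hat = np.diag(s[:n]) @ Vt[:n, :]                # Eq. (2): X_hat = Sigma V^T
    # Eq. (3): A_hat = X_{1:tau} pinv(X_{0:tau-1})
    A_hat = X_hat[:, 1:] @ np.linalg.pinv(X_hat[:, :-1])
    return C_hat, X_hat, A_hat
```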
3.2 Designing the Observation Matrix
In the decomposition above, we chose each column of $D$ to be the observation vector for a single time step. Suppose that instead we set $D$ to be a matrix of the form

$$D = \begin{bmatrix} y_1 & y_2 & y_3 & \cdots & y_\tau \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ y_d & y_{d+1} & y_{d+2} & \cdots & y_{d+\tau-1} \end{bmatrix} \in \mathbb{R}^{md\times\tau}$$

A matrix of this form, with each block of rows equal to the previous block but shifted by a constant number of columns, is called a block Hankel matrix [4]. We say "d-observation Hankel matrix of size $\tau$" to mean the data matrix $D \in \mathbb{R}^{md\times\tau}$ with $d$ length-$m$ observation vectors per column.
Stacking observations causes each state to incorporate more information about the future, since $\hat{x}_t$ now represents coefficients reconstructing $y_t$ as well as other observations in the future. However, the observation model estimate must now be $\hat{C} = U(1{:}m, :)$, i.e., the submatrix consisting of the first $m$ rows of $U$, because $U(1{:}m, :)\,\hat{x}_t = \hat{y}_t$ for any $t$, where $\hat{y}_t$ denotes a reconstructed observation. Having multiple observations per column in $D$ is particularly helpful when the underlying dynamical system is known to have periodicity. For example, see Figure 1(A). See [12] for details.

Figure 2: (A): Conceptual depiction of the space of $n \times n$ matrices. The region of stability ($S_\lambda$) is non-convex while the smaller region of matrices with $\sigma_1 \le 1$ ($S_\sigma$) is convex. The elliptical contours indicate level sets of the quadratic objective function of the QP. $\hat{A}$ is the unconstrained least-squares solution to this objective. $A_{\text{LB-1}}$ is the solution found by LB-1 [1]. One iteration of constraint generation yields the constraint indicated by the line labeled "generated constraint", and (in this case) leads to a stable solution $A^*$. The final step of our algorithm improves on this solution by interpolating $A^*$ with the previous solution (in this case, $\hat{A}$) to obtain $A^*_{\text{final}}$. (B): The actual stable and unstable regions for the space of $2 \times 2$ matrices $E_{\alpha,\beta} = [\,0.3\;\;\alpha\,;\ \beta\;\;0.3\,]$, with $\alpha, \beta \in [-10, 10]$. Constraint generation is able to learn a nearly optimal model from a noisy state sequence of length 7 simulated from $E_{0,10}$, with better state reconstruction error than either LB-1 or LB-2.
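A minimal numpy sketch of the d-observation Hankel construction described above (the helper name is ours); with this D, the SVD-based estimate of the observation model becomes the first m rows of U:

```python
import numpy as np

def block_hankel(Y, d):
    """Build the d-observation block Hankel matrix from Y (m x T).

    Column t stacks y_t, ..., y_{t+d-1} vertically, giving an
    (m*d) x (T - d + 1) matrix; with this D, C_hat = U[:m, :].
    """
    m, T = Y.shape
    tau = T - d + 1
    return np.vstack([Y[:, i:i + tau] for i in range(d)])
```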
4 The Algorithm
The estimation procedure in Section 3.1 does not enforce stability in $\hat{A}$. To account for stability, we first formulate the dynamics matrix learning problem as a quadratic program with a feasible set that includes the set of stable dynamics matrices. Then we demonstrate how instability in its solutions can be used to generate constraints that restrict this feasible set appropriately. As a final step, the solution is refined to be as close as possible to the least-squares estimate while remaining stable. The overall algorithm is illustrated in Figure 2(A). We now explain the algorithm in more detail.
4.1 Formulating the Objective
The least squares problem in Eq. (3) can be written as follows (see [12] for the derivation):

$$\hat{A} = \arg\min_A \left\| A X_{0:\tau-1} - X_{1:\tau} \right\|_F^2 = \arg\min_a \left( a^T P a - 2\, q^T a + r \right) \qquad (4)$$

where $a \in \mathbb{R}^{n^2}$, $q \in \mathbb{R}^{n^2}$, $P \in \mathbb{R}^{n^2\times n^2}$ and $r \in \mathbb{R}$ are defined as:

$$a = \mathrm{vec}(A) = [A_{11}\ A_{21}\ A_{31}\ \cdots\ A_{nn}]^T \qquad P = I_n \otimes X_{0:\tau-1} X_{0:\tau-1}^T$$
$$q = \mathrm{vec}(X_{0:\tau-1} X_{1:\tau}^T) \qquad r = \mathrm{tr}\, X_{1:\tau}^T X_{1:\tau} \qquad (5)$$

$I_n$ is the $n \times n$ identity matrix and $\otimes$ denotes the Kronecker product. Note that $P$ is a symmetric nonnegative-definite matrix. The objective function in (4) is a quadratic function of $a$.
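A small numpy sketch of assembling these quantities (the helper name is ours). Note that the sketch vectorizes A row-wise, i.e. a = A.flatten(order='C'), which is the ordering that makes the quadratic form with P = I_n (x) X0 X0^T reproduce the Frobenius objective exactly; vec conventions differ between writeups, so treat the ordering here as an implementation choice:

```python
import numpy as np

def qp_data(X):
    """Assemble P, q, r of Eq. (5) from the estimated states X (n x (tau+1)).

    With a = A.flatten(order='C'), the quadratic form
    a @ P @ a - 2 * q @ a + r equals ||A X0 - X1||_F^2.
    """
    X0, X1 = X[:, :-1], X[:, 1:]
    n = X0.shape[0]
    P = np.kron(np.eye(n), X0 @ X0.T)          # I_n (x) X0 X0^T
    q = (X0 @ X1.T).flatten(order='F')         # vec(X0 X1^T)
    r = float(np.trace(X1.T @ X1))
    return P, q, r
```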
4.2 Generating Constraints
The quadratic objective function above is equivalent to the least squares problem of Eq. (3). Its
feasible set is the space of all n ? n matrices, regardless of their stability. When its solution yields
? is greater than 1. Ideally we would like to
an unstable matrix, the spectral radius of A? (i.e. |?1 (A)|)
?
use A to calculate a convex constraint on the spectral radius. However, consider the class of 2 ? 2
matrices [16]: E?,? = [ 0.3 ? ; ? 0.3 ]. The matrices E10,0 and E0,10 are stable with ?1 = 0.3, but
their convex combination ?E10,0 + (1 ? ?)E0,10 is unstable for (e.g.) ? = 0.5 (Figure 2(B)). This
shows that the set of stable matrices is non-convex for n = 2, and in fact this is true for all n > 1.
We turn instead to the largest singular value, which is a closely related quantity since
? ? |?i (A)|
? ? ?max (A)
?
?min (A)
?i = 1, . . . , n
[15]
Therefore every unstable matrix has a singular value greater than one, but the converse is not necessarily true. Moreover, the set of matrices with ?1 ? 1 is convex. Figure 2(A) conceptually depicts
the non-convex region of stability S? and the convex region S? with ?1 ? 1 in the space of all
n ? n matrices for some fixed n. The difference between S? and S? can be significant. Figure 2(B)
depicts these regions for E?,? with ?, ? ? [?10, 10]. The stable matrices E10,0 and E0,10 reside
at the edges of the figure. While results for this class of matrices vary, the constraint generation
algorithm described below is able to learn a nearly optimal model from a noisy state sequence of
? = 7 simulated from E0,10 , with better state reconstruction error than LB-1 and LB-2.
Let $\hat{A} = \hat{U}\hat{\Sigma}\hat{V}^T$ by SVD, where $\hat{U} = [\hat{u}_i]_{i=1}^n$, $\hat{V} = [\hat{v}_i]_{i=1}^n$ and $\hat{\Sigma} = \mathrm{diag}\{\hat\sigma_1, \ldots, \hat\sigma_n\}$. Then:

$$\hat{A} = \hat{U}\hat{\Sigma}\hat{V}^T \;\Rightarrow\; \hat{\Sigma} = \hat{U}^T \hat{A} \hat{V} \;\Rightarrow\; \hat\sigma_1(\hat{A}) = \hat{u}_1^T \hat{A} \hat{v}_1 = \mathrm{tr}\left(\hat{u}_1^T \hat{A} \hat{v}_1\right) \qquad (6)$$

Therefore, instability of $\hat{A}$ implies that:

$$\hat\sigma_1 > 1 \;\Rightarrow\; \mathrm{tr}\left(\hat{u}_1^T \hat{A} \hat{v}_1\right) > 1 \;\Rightarrow\; \mathrm{tr}\left(\hat{v}_1 \hat{u}_1^T \hat{A}\right) > 1 \;\Rightarrow\; g^T a > 1 \qquad (7)$$

Here $g = \mathrm{vec}(\hat{u}_1 \hat{v}_1^T)$. Since Eq. (7) arose from an unstable solution of Eq. (4), $g$ is a hyperplane separating $\hat{a}$ from the space of matrices with $\sigma_1 \le 1$. We use the negation of Eq. (7) as a constraint:

$$g^T a \le 1 \qquad (8)$$
4.3 Computing the Solution
The overall quadratic program can be stated as:

$$\text{minimize}_a \quad a^T P a - 2\, q^T a + r \qquad \text{subject to} \quad G a \le h \qquad (9)$$

with $a$, $P$, $q$ and $r$ as defined in Eqs. (5). $\{G, h\}$ define the set of constraints, and are initially empty. The QP is invoked repeatedly until the stable region, i.e. $S_\lambda$, is reached. At each iteration, we calculate a linear constraint of the form in Eq. (8), add the corresponding $g^T$ as a row in $G$, and augment $h$ with 1. Note that we will almost always stop before reaching the feasible region $S_\sigma$.
Once a stable matrix is obtained, it is possible to refine this solution. We know that the last constraint caused our solution to cross the boundary of $S_\lambda$, so we interpolate between the last solution and the previous iteration's solution using binary search to look for a boundary of the stable region, in order to obtain a better objective value while remaining stable. An interpolation could be attempted between the least squares solution and any stable solution. However, the stable region can be highly complex, and there may be several folds and boundaries of the stable region in the interpolated area. In our experiments (not shown), interpolating from the LB-1 solution yielded worse results.
5 Experiments
For learning the dynamics matrix, we implemented¹ least squares, constraint generation (using quadprog), LB-1 [1] and LB-2 [2] (using CVX with SeDuMi) in Matlab on a 3.2 GHz Pentium with 2 GB RAM. Note that these algorithms give a different result from the basic least-squares system identification algorithm only in situations where the least-squares model is unstable. However, least-squares LDSs trained in scarce-data scenarios are unstable for almost any domain, and some domains lead to unstable models up to the limit of available data (e.g. the steam dynamic
textures in Section 5.1). The goals of our experiments are to: (1) examine the state evolution and
simulated observations of models learned using our method, and compare them to previous work;
and (2) compare the algorithms in terms of reconstruction error and efficiency. The error metric used for the quantitative experiments when evaluating a matrix $\hat{A}^*$ is

$$e_x(\hat{A}^*) = 100 \times \left( J^2(\hat{A}^*) - J^2(\hat{A}) \right) / J^2(\hat{A}) \qquad (10)$$

i.e. the percent increase in squared reconstruction error compared to least squares, with $J(\cdot)$ as defined in Eq. (4). We apply these algorithms to learning dynamic textures from the vision domain (Section 5.1), as well as OTC drug sales counts and sunspot numbers (Section 5.2).
¹Source code is available at http://www.select.cs.cmu.edu/projects/stableLDS
Figure 3: Dynamic textures. A. Samples from the original steam sequence and the fountain sequence. B. State evolution of synthesized sequences over 1000 frames (steam top, fountain bottom). The least squares solutions display instability as time progresses. The solutions obtained using LB-1 remain stable for the full 1000 frame image sequence. The constraint generation solutions, however, yield state sequences that are stable over the full 1000 frame image sequence without significant damping. C. Samples drawn from a least squares synthesized sequence (top), and samples drawn from a constraint generation synthesized sequence (bottom). Images for LB-1 are not shown. The constraint generation synthesized steam sequence is qualitatively better looking than the steam sequence generated by LB-1, although there is little qualitative difference between the two synthesized fountain sequences.
steam (n = 10)
              CG        LB-1      LB-1'     LB-2
|lambda_1|    1.000     0.993     0.993     1.000
sigma_1       1.036     1.000     1.000     1.034
e_x (%)       45.2      103.3     103.3     546.9
time          0.45      95.87     3.77      0.50

steam (n = 20)
              CG        LB-1      LB-1'     LB-2
|lambda_1|    0.999     --        0.990     0.999
sigma_1       1.037     --        1.000     1.062
e_x (%)       58.4      --        154.7     294.8
time          2.37      --        1259.6    33.55

steam (n = 40)
              CG        LB-1      LB-1'     LB-2
|lambda_1|    1.000     --        0.989     1.000
sigma_1       1.120     --        1.000     1.128
e_x (%)       20.24     --        282.7     768.5
time          5.85      --        79516.98  289.79

fountain (n = 10)
              CG        LB-1      LB-1'     LB-2
|lambda_1|    0.999     0.987     0.987     0.997
sigma_1       1.051     1.000     1.000     1.054
e_x (%)       0.1       4.1       4.1       3.0
time          0.15      15.43     1.09      0.49

fountain (n = 20)
              CG        LB-1      LB-1'     LB-2
|lambda_1|    0.999     --        0.988     0.996
sigma_1       1.054     --        1.000     1.056
e_x (%)       1.2       --        5.0       22.3
time          1.63      --        159.85    5.13

fountain (n = 40)
              CG        LB-1      LB-1'     LB-2
|lambda_1|    1.000     --        0.991     1.000
sigma_1       1.034     --        1.000     1.172
e_x (%)       3.3       --        4.8       21.5
time          61.9      --        43457.77  239.53

Table 1: Quantitative results on the dynamic textures data for different numbers of states $n$. CG is our algorithm, LB-1 and LB-2 are competing algorithms, and LB-1' is a simulation of LB-1 using our algorithm by generating constraints until we reach $S_\sigma$, since LB-1 failed for $n > 10$ due to memory limits. $|\lambda_1|$ is the spectral radius and $\sigma_1$ the largest singular value of the learned dynamics matrix; $e_x$ is the percent difference in squared reconstruction error as defined in Eq. (10). Constraint generation, in all cases, has lower error and faster runtime.
5.1 Stable Dynamic Textures
Dynamic textures in vision can intuitively be described as models for sequences of images that
exhibit some form of low-dimensional structure and recurrent (though not necessarily repeating)
characteristics, e.g. fixed-background videos of rising smoke or flowing water. Treating each frame
of a video as an observation vector of pixel values yt , we learned dynamic texture models of two
video sequences: the steam sequence, composed of 120 ? 170 pixel images, and the fountain
sequence, composed of 150 ? 90 pixel images, both of which originated from the MIT temporal
texture database (Figure 3(A)). We use parameters ? = 80, n = 15, and d = 10. Note that the state
sequence we learn has no a priori interpretation.
An LDS model of a dynamic texture may synthesize an "infinitely" long sequence of images by
driving the model with zero mean Gaussian noise. Each of our two models uses an 80 frame training
sequence to generate 1000 sequential images in this way. To better visualize the difference between
image sequences generated by least-squares, LB-1, and constraint generation, the evolution of each
method?s state is plotted over the course of the synthesized sequences (Figure 3(B)). Sequences
generated by the least squares models appear to be unstable, and this was in fact the case; both
the steam and the fountain sequences resulted in unstable dynamics matrices. Conversely, the
constrained subspace identification algorithms all produced well-behaved sequences of states and
stable dynamics matrices (Table 1), although constraint generation demonstrates the fastest runtime,
best scalability, and lowest error of any stability-enforcing approach.
A qualitative comparison of images generated by constraint generation and least squares (Figure 3(C)) indicates the effect of instability in synthesized sequences generated from dynamic texture
models. While the unstable least-squares model demonstrates a dramatic increase in image contrast
over time, the constraint generation model continues to generate qualitatively reasonable images.
Qualitative comparisons between constraint generation and LB-1 indicate that constraint generation
learns models that generate more natural-looking video sequences² than LB-1.
Table 1 demonstrates that constraint generation always has the lowest error as well as the fastest
runtime. The running time of constraint generation depends on the number of constraints needed to
reach a stable solution. Note that LB-1 is more efficient and scalable when simulated using constraint generation (by adding constraints until $S_\sigma$ is reached) than it is in its original SDP formulation.
5.2 Stable Baseline Models for Biosurveillance
We examine daily counts of OTC drug sales in pharmacies, obtained from the National Data Retail
Monitor (NDRM) collection [17]. The counts are divided into 23 different categories and are tracked
separately for each zipcode in the country. We focus on zipcodes from a particular American city.
The data exhibits 7-day periodicity due to differential buying patterns during the week. We isolate a
60-day subsequence where the data dynamics remain relatively stationary, and attempt to learn LDS
parameters to be able to simulate sequences of baseline values for use in detecting anomalies.
We perform two experiments on different aggregations of the OTC data, with parameter values $n = 7$, $d = 7$ and $\tau = 14$. Figure 4(A) plots 22 different drug categories aggregated over all zipcodes, and Figure 4(B) plots a single drug category (cough/cold) in 29 different zipcodes separately. In both
cases, constraint generation is able to use very little training data to learn a stable model that captures
the periodicity in the data, while the least squares model is unstable and its predictions diverge over
time. LB-1 learns a model that is stable but overconstrained, and the simulated observations quickly
drift from the correct magnitudes. We also tested the algorithms on the sunspots data (Figure 4(C)) with parameters $n = 7$, $d = 18$ and $\tau = 50$, with similar results. Quantitative results on both these domains exhibit similar trends as those in Table 1.
6 Discussion
We have introduced a novel method for learning stable linear dynamical systems. Our constraint
generation algorithm is more powerful than previous methods in the sense of optimizing over a
larger set of stable matrices with a suitable objective function. The constraint generation approach
also has the benefit of being faster than previous methods in nearly all of our experiments. One
possible extension is to modify the EM algorithm for LDSs to incorporate constraint generation
into the M-step in order to learn stable systems that locally maximize the observed data likelihood.
Stability could also be of advantage in planning applications.
²See videos at http://www.select.cs.cmu.edu/projects/stableLDS
Figure 4: (A): 60 days of data for 22 drug categories aggregated over all zipcodes in the city. (B): 60 days of data for a single drug category (cough/cold) for all 29 zipcodes in the city. (C): Sunspot numbers for 200 years separately for each of the 12 months. Each panel shows the training data (top), simulated output from constraint generation, output from the unstable least squares model, and output from the over-damped LB-1 model (bottom).
Acknowledgements
This paper is based on work supported by DARPA under the Computer Science Study Panel program
(authors GJG and BEB), the NSF under Grant Nos. EEC-0540865 (author BEB) and IIS-0325581
(author SMS), and the CDC under award 8-R01-HK000020-02, "Efficient, scalable multisource surveillance algorithms for Biosense" (author SMS).
References
[1] Seth L. Lacy and Dennis S. Bernstein. Subspace identification with guaranteed stability using constrained
optimization. In Proc. American Control Conference, 2002.
[2] Seth L. Lacy and Dennis S. Bernstein. Subspace identification with guaranteed stability using constrained
optimization. IEEE Transactions on Automatic Control, 48(7):1259-1263, July 2003.
[3] R.E. Kalman. A new approach to linear filtering and prediction problems. Trans. ASME-JBE, 1960.
[4] L. Ljung. System Identification: Theory for the user. Prentice Hall, 2nd edition, 1999.
[5] Zoubin Ghahramani and Geoffrey E. Hinton. Parameter estimation for Linear Dynamical Systems. Technical Report CRG-TR-96-2, U. of Toronto, Department of Comp. Sci., 1996.
[6] N. L. C. Chui and J. M. Maciejowski. Realization of stable models with subspace methods. Automatica,
32(100):1587-1595, 1996.
[7] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] S. Soatto, G. Doretto, and Y. Wu. Dynamic Textures. Intl. Conf. on Computer Vision, 2001.
[9] E. Keogh and T. Folias. The UCR Time Series Data Mining Archive, 2002.
[10] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems: Theory, Implementation,
Applications. Kluwer, 1996.
[11] T. Van Gestel, J. A. K. Suykens, P. Van Dooren, and B. De Moor. Identification of stable models in
subspace identification by using regularization. IEEE Transactions on Automatic Control, 2001.
[12] Sajid M. Siddiqi, Byron Boots, and Geoffrey J. Gordon. A Constraint Generation Approach to Learning
Stable Linear Dynamical Systems. Technical Report CMU-ML-08-101, CMU, 2008.
[13] H. Rauch. Solutions to the linear smoothing problem. In IEEE Transactions on Automatic Control, 1963.
[14] Kevin Murphy. Dynamic Bayesian Networks. PhD thesis, UC Berkeley, 2002.
[15] Roger Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[16] Andrew Y. Ng and H. Jin Kim. Stable adaptive control with online learning. In Proc. NIPS, 2004.
[17] M. Wagner. A national retail data monitor for public health surveillance. Morbidity and Mortality Weekly
Report, 53:40-42, 2004.
CoFiRank: Maximum Margin Matrix Factorization for Collaborative Ranking
Markus Weimer†
Alexandros Karatzoglou‡
Quoc Viet Le§
Alex Smola¶
Abstract
In this paper, we consider collaborative filtering as a ranking problem. We present
a method which uses Maximum Margin Matrix Factorization and optimizes ranking instead of rating. We employ structured output prediction to optimize directly
for ranking scores. Experimental results show that our method gives very good
ranking scores and scales well on collaborative filtering tasks.
1 Introduction
Collaborative filtering has gained much attention in the machine learning community due to the
need for it in webshops such as those of Amazon, Apple and Netflix. Webshops typically offer
personalized recommendations to their customers. The quality of these suggestions is crucial to the
overall success of a webshop. However, suggesting the right items is a highly nontrivial task: (1)
There are many items to choose from. (2) Customers only consider very few (typically in the order
of ten) recommendations. Collaborative filtering addresses this problem by learning the suggestion
function for a user from ratings provided by this and other users on items offered in the webshop.
Those ratings are typically collected on a five star ordinal scale within the webshops.
Learning the suggestion function can be considered either a rating (classification) or a ranking problem. In the context of rating, one predicts the actual rating for an item that a customer has not rated
yet. On the other hand, for ranking, one predicts a preference ordering over the yet unrated items.
Given the limited size of the suggestion shown to the customer, both (rating and ranking) are used
to compile a top-N list of recommendations. This list is the direct outcome of a ranking algorithm,
and can be computed from the results of a rating algorithm by sorting the items according to their
predicted rating. We argue that rating algorithms solve the wrong problem, and one that is actually
harder: The absolute value of the rating for an item is highly biased for different users, while the
ranking is far less prone to this problem.
One approach is to solve the rating problem using regression. For example for the Netflix prize
which uses root mean squared error as an evaluation criterion,¹ the most straightforward approach
is to use regression. However, the same arguments discussed above apply to regression. Thus, we
present an algorithm that solves the ranking problem directly, without first computing the rating.
For collaborative rating, Maximum Margin Matrix Factorization (MMMF) [11, 12, 10] has proven to
be an effective means of estimating the rating function. MMMF takes advantage of the collaborative
effects: rating patterns from other users are used to estimate ratings for the current user. One key
†Telecooperation Group, TU Darmstadt, Germany, [email protected]
‡Department of Statistics, TU Wien, [email protected]
§Computer Science Department, Stanford University, Stanford, CA 94305, [email protected]
¶SML, NICTA, Northbourne Av. 218, Canberra 2601, ACT, Australia, [email protected]
¹We conjecture that this is the case in order to keep the rules simple, since ranking scores are somewhat nontrivial to define, and there are many different ways to evaluate a ranking, as we will see in the following.
advantage of this approach is that it works without feature extraction. Feature extraction is domain
specific, e.g. the procedures developed for movies cannot be applied to books. Thus, it is hard
to come up with a consistent feature set in applications with many different types of items, as for
example at Amazon. Our algorithm is based on this idea of MMMF, but optimizes ranking measures
instead of rating measures.
Given that only the top ranked items will actually be presented to the user, it is much more important
to rank the first items right than the last ones. In other words, it is more important to predict what a
user likes than what she dislikes. In more technical terms, the value of the error for estimation is not
uniform over the ratings. All of above reasonings lead to the following goals:
- The algorithm needs to be able to optimize ranking scores directly.
- The algorithm needs to be adaptable to different scores.
- The algorithm should not require any features besides the actual ratings.
- The algorithm needs to scale well and parallelize such as to deal with millions of ratings arising from thousands of items and users with an acceptable memory footprint.
We achieve these goals by combining (a) recent results in optimization, in particular the application of bundle methods to convex optimization problems [14], (b) techniques for representing functions on matrices, in particular maximum margin matrix factorizations [10, 11, 12] and (c) the application of structured estimation for ranking problems. We describe our algorithm CoFiRank in terms of optimizing the ranking measure Normalized Discounted Cumulative Gain (NDCG).
2 Problem Definition
Assume that we have $m$ items and $u$ users. The ratings are stored in the sparse matrix $Y$ where $Y_{i,j} \in \{0, \ldots, r\}$ is the rating of item $j$ by user $i$ and $r$ is some maximal score. $Y_{i,j}$ is 0 if user $i$ did not rate item $j$. In rating, one estimates the missing values in $Y$ directly while we treat this as a ranking task. Additionally, in NDCG [16], the correct order of higher ranked items is more important than that of lower ranked items:

Definition 1 (NDCG) Denote by $y \in \{1, \ldots, r\}^n$ a vector of ratings and let $\pi$ be a permutation of that vector. $\pi_i$ denotes the position of item $i$ after the permutation. Moreover, let $k \in \mathbb{N}$ be a truncation threshold and $\pi_s$ sort $y$ in decreasing order. In this case the Discounted Cumulative Gain (DCG@k) score [5] and its normalized variant (NDCG@k) are given by

$$\mathrm{DCG@}k(y, \pi) = \sum_{i=1}^{k} \frac{2^{y_{\pi_i}} - 1}{\log(i+2)} \quad\text{and}\quad \mathrm{NDCG@}k(y, \pi) = \frac{\mathrm{DCG@}k(y, \pi)}{\mathrm{DCG@}k(y, \pi_s)}$$

DCG@k is maximized for $\pi = \pi_s$. The truncation threshold $k$ reflects how many recommendations users are willing to consider. NDCG is a normalized version of DCG so that the score is bounded by $[0, 1]$.
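For reference, a direct numpy transcription of Definition 1 (function names are ours); here pi is an argsort-style array giving the item at each 1-indexed position, rather than the item-to-position map of the definition:

```python
import numpy as np

def dcg_at_k(y, pi, k):
    """DCG@k: y are ratings, pi[i-1] is the item placed at position i."""
    return sum((2.0 ** y[pi[i - 1]] - 1) / np.log(i + 2)
               for i in range(1, min(k, len(y)) + 1))

def ndcg_at_k(y, pi, k):
    pi_s = np.argsort(-y)          # ideal ordering: sort ratings decreasingly
    return dcg_at_k(y, pi, k) / dcg_at_k(y, pi_s, k)
```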
Unlike classification and regression measures, DCG is defined on permutations, not absolute values of the ratings. Departing from traditional pairwise ranking measures [4], DCG is position-dependent: higher positions have more influence on the score than lower positions. Optimizing DCG has gained much interest in the machine learning and information retrieval (e.g. [2]) communities. However, we present the first effort to optimize this measure for collaborative filtering.

To perform estimation, we need a recipe for obtaining the permutations $\pi$. Since we want our system to be scalable, we need a method which scales not much worse than linearly in the number of the items to be ranked. The avenue we pursue is to estimate a matrix $F \in \mathbb{R}^{m\times u}$ and to use the values $F_{ij}$ for the purpose of ranking the items $j$ for user $i$. Given a matrix $Y$ of known ratings we are now able to define the performance of $F$:

$$R(F, Y) := \sum_{i=1}^{u} \mathrm{NDCG@}k(\pi^i, Y^i), \qquad (1)$$

where $\pi^i = \mathrm{argsort}(-F^i)$, i.e., it sorts $F^i$ in decreasing order.² While we would like to maximize $R(F, Y_{\text{test}})$ we only have access to $R(F, Y_{\text{train}})$. Hence, we need to restrict the complexity of $F$ to ensure good performance on the test set when maximizing the score on the training set.
3 Structured Estimation for Ranking
However, $R(F, Y)$ is non-convex. In fact, it is piecewise constant and therefore clearly not amenable to any type of smooth optimization. To address this issue we take recourse to structured estimation [13, 15]. Note that the scores decompose into a sum over individual users' scores, hence we only need to show how minimizing $-\mathrm{NDCG}(\pi, y)$ can be replaced by minimizing a convex upper bound on the latter. Summing over the users then provides us with a convex bound for all of the terms.³ Our conversion works in three steps:

1. Converting $\mathrm{NDCG}(\pi, y)$ into a loss by computing the regret with respect to the optimal permutation $\mathrm{argsort}(-y)$.
2. Denote by $\pi$ a permutation (of the $n$ items a user might want to see) and let $f \in \mathbb{R}^n$ be an estimated rating. We design a mapping $\psi(\pi, f) \in \mathbb{R}$ which is linear in $f$ in such a way that maximizing $\psi(\pi, f)$ with respect to $\pi$ yields $\mathrm{argsort}(f)$.
3. We use the convex upper-bounding technique described by [15] to combine regret and linear map into a convex upper bound which we can minimize efficiently.

Step 1 (Regret Conversion) Instead of maximizing $\mathrm{NDCG}(\pi, y)$ we may also minimize

$$\Delta(\pi, y) := 1 - \mathrm{NDCG}(\pi, y). \qquad (2)$$

$\Delta(\pi, y)$ is nonnegative and vanishes for $\pi = \pi_s$.
Step 2 (Linear Mapping) Key in our reasoning is the use of the Polya-Littlewood-Hardy inequality: for any two vectors $a, b \in \mathbb{R}^n$ their inner product is maximized by sorting $a$ and $b$ in the same order, that is $\langle a, b \rangle \le \langle \mathrm{sort}(a), \mathrm{sort}(b) \rangle$. This allows us to encode the permutation $\pi = \mathrm{argsort}(f)$ in the following fashion: denote by $c \in \mathbb{R}^n$ a decreasing nonnegative sequence, then the function

$$\psi(\pi, f) := \langle c, f_\pi \rangle \qquad (3)$$

is linear in $f$ and maximized with respect to $\pi$ for $\mathrm{argsort}(f)$. Since $c_i$ is decreasing by construction, the Polya-Littlewood-Hardy inequality applies. We found that choosing $c_i = (i+1)^{-0.25}$ produced good results in our experiments. However, we did not formally optimize this parameter.
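A quick numeric check of this encoding: with the decreasing c above, no permutation beats the decreasing sort of f (matching Lemma 2's choice of the maximizer below). The snippet is purely illustrative:

```python
import numpy as np
from itertools import permutations

n = 5
c = (np.arange(1, n + 1) + 1.0) ** -0.25   # c_i = (i+1)^(-0.25), decreasing
f = np.random.randn(n)
pi_star = np.argsort(-f)                   # sorts f in decreasing order
best = c @ f[pi_star]                      # psi(pi*, f) = <c, f_pi*>
# Polya-Littlewood-Hardy: no other permutation scores higher
assert all(c @ f[list(p)] <= best + 1e-9 for p in permutations(range(n)))
```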
Step 3 (Convex Upper Bound) We adapt a result of [15] which describes how to find convex upper bounds on nonconvex optimization problems.

Lemma 2 Assume that $\psi$ is defined as in (3). Moreover let $\pi^* := \mathrm{argsort}(-f)$ be the ranking induced by $f$. Then the following loss function $l(f, y)$ is convex in $f$ and it satisfies $l(f, y) \ge \Delta(y, \pi^*)$:

$$l(f, y) := \max_\pi \left[ \Delta(\pi, y) + \langle c, f_\pi - f \rangle \right] \qquad (4)$$

Proof We show convexity first. The argument of the maximization over the permutations $\pi$ is a linear and thus convex function in $f$. Taking the maximum over a set of convex functions is convex itself, which proves the first claim. To see that it is an upper bound, we use the fact that

$$l(f, y) \ge \Delta(\pi^*, y) + \langle c, f_{\pi^*} - f \rangle \ge \Delta(\pi^*, y). \qquad (5)$$

The second inequality follows from the fact that $\pi^*$ maximizes $\langle c, f_{\pi^*} \rangle$.

²$M^i$ denotes row $i$ of matrix $M$. Matrices are written in upper case, while vectors are written in lower case.
³This also opens the possibility for parallelization in the implementation of the algorithm.
4 Maximum Margin Matrix Factorization
Loss The reasoning in the previous section showed us how to replace the ranking score with a convex upper bound on a regret loss. This allows us to replace the problem of maximizing $R(F, Y)$ by that of minimizing a convex function in $F$, namely

$$L(F, Y) := \sum_{i=1}^{u} l(F^i, Y^i) \qquad (6)$$

Matrix Regularization Having addressed the problem of non-convexity of the performance score we need to find an efficient way of performing capacity control of $F$, since we only have $L(F, Y_{\text{train}})$ at our disposition, whereas we would like to do well on $L(F, Y_{\text{test}})$. The idea to overcome this problem is by means of a regularizer on $F$, namely the one proposed for Maximum Margin Factorization by Srebro and coworkers [10, 11, 12]. The key idea in their reasoning is to introduce a regularizer on $F$ via

$$\Omega[F] := \min_{M,U} \frac{1}{2}\left[\mathrm{tr}\, M M^T + \mathrm{tr}\, U U^T\right] \quad\text{subject to}\quad U M = F. \qquad (7)$$

More specifically, [12] show that the above is a proper norm on $F$. While we could use a semidefinite program as suggested in [11], the latter is intractable for anything but the smallest problems.⁴ Instead, we replace $F$ by $UM$ and solve the following problem:

$$\min_{M,U}\; L(UM, Y_{\text{train}}) + \frac{\lambda}{2}\left[\mathrm{tr}\, M M^T + \mathrm{tr}\, U U^T\right] \qquad (8)$$

Note that the above matrix factorization approach effectively allows us to learn an item matrix $M$ and a user matrix $U$ which will store the specific properties of users and items respectively. This approach learns the features of the items and the users. The dimension $d$ of $M \in \mathbb{R}^{d\times m}$ and $U \in \mathbb{R}^{d\times u}$ is chosen mainly based on computational concerns, since a full representation would require $d = \min(m, u)$. On large problems the storage requirements for the user matrix can be enormous and it is convenient to choose $d = 10$ or $d = 100$.
Algorithm While (8) may not be jointly convex in $M$ and $U$ any more, it still is convex in $M$ and $U$ individually, whenever the other term is kept fixed. We use this insight to perform alternating subspace descent as proposed by [10]. Note that the algorithm does not guarantee global convergence, which is a small price to pay for computational tractability.

repeat
  For fixed $M$ minimize (8) with respect to $U$.
  For fixed $U$ minimize (8) with respect to $M$.
until no more progress is made or a maximum iteration count has been reached.
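A runnable sketch of this alternation (function name ours); to keep each inner solve in closed form, the structured loss L is replaced here by a squared loss over observed entries (the CoFiRank-Regression flavor), whereas the NDCG loss would require the bundle-method solves of Section 5:

```python
import numpy as np

def alternating_descent(Y, observed, d=10, lam=10.0, n_iter=10):
    """Alternating subspace descent for Eq. (8) with a squared loss.

    Y        : (m, u) rating matrix; predictions are F = M.T @ U
    observed : (m, u) boolean mask of known ratings
    Each inner step is a per-column ridge solve; with the NDCG loss of
    Section 3 these closed-form solves become bundle-method solves.
    """
    m, u = Y.shape
    rng = np.random.default_rng(0)
    M = 0.01 * rng.standard_normal((d, m))
    U = 0.01 * rng.standard_normal((d, u))
    for _ in range(n_iter):
        for j in range(u):          # fixed M: solve for user column j
            o = observed[:, j]
            Mo = M[:, o]
            U[:, j] = np.linalg.solve(Mo @ Mo.T + lam * np.eye(d), Mo @ Y[o, j])
        for i in range(m):          # fixed U: solve for item column i
            o = observed[i, :]
            Uo = U[:, o]
            M[:, i] = np.linalg.solve(Uo @ Uo.T + lam * np.eye(d), Uo @ Y[i, o])
    return M, U
```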
Note that on problems of the size of Netflix the matrix $Y$ has $10^8$ entries, which means that the number of iterations is typically time limited. We now discuss a general optimization method for solving regularized convex optimization problems. For more details see [14].
5 Optimization
Bundle Methods We discuss the optimization over the user matrix $U$ first, that is, consider the problem of minimizing

$$R(U) := L(UM, Y_{\text{train}}) + \frac{\lambda}{2} \mathrm{tr}\, U U^T \qquad (9)$$

The regularizer $\mathrm{tr}\, U U^T$ is rather simple to compute and minimize. On the other hand, $L$ is expensive to compute, since it involves maximizing $l$ for all users.

Bundle methods, as proposed in [14], aim to overcome this problem by performing successive Taylor approximations of $L$ and by using them as lower bounds. In other words, they exploit the fact that

$$L(UM, Y_{\text{train}}) \ge L(UM', Y_{\text{train}}) + \mathrm{tr}\left[(M - M')^T \partial_M L(UM', Y)\right] \quad \forall M, M'.$$
⁴In this case we optimize over $\begin{bmatrix} A & F \\ F^T & B \end{bmatrix} \succeq 0$ where $\Omega[F]$ is replaced by $\frac{1}{2}[\mathrm{tr}\, A + \mathrm{tr}\, B]$.
Algorithm 1 Bundle Method
  Initialize $t = 0$, $G_0 = 0$, $b_0 = 0$ and $H = \infty$
  repeat
    Find minimizer $U_t$ and value $L$ of the optimization problem
      $\min_U\; \max_{0 \le j \le t}\left[\mathrm{tr}\, G_j^T U + b_j\right] + \frac{\lambda}{2}\mathrm{tr}\, U^T U$
    Compute $G_{t+1} = \partial_U L(U_t M, Y_{\text{train}})$
    Compute $b_{t+1} = L(U_t M, Y_{\text{train}}) - \mathrm{tr}\, G_{t+1}^T U_t$
    if $H' := \mathrm{tr}\, G_{t+1}^T U_t + b_{t+1} + \frac{\lambda}{2}\mathrm{tr}\, U_t^T U_t \le H$ then
      Update $H \leftarrow H'$
    end if
  until $H - L \le \epsilon$

Here $G_j$ denotes the subgradient of $L$ collected at iterate $U_{j-1}$, so that $H$ tracks the best value of the true objective $R$ seen so far, while $L$ is the minimum of the piecewise linear lower bound.
Since this holds for arbitrary $M'$, we may pick a set of $M_i$ and use the maximum over the Taylor approximations at locations $M_i$ to lower-bound $L$. Subsequently, we minimize this piecewise linear lower bound in combination with $\frac{\lambda}{2}\mathrm{tr}\, U U^T$ to obtain a new location where to compute our next Taylor approximation, and iterate until convergence is achieved. Algorithm 1 provides further details. As we proceed with the optimization, we obtain increasingly tight lower bounds on $L(UM, Y_{\text{train}})$. One may show [14] that the algorithm converges to precision $\epsilon$ with respect to the minimizer of $R(U)$ in $O(1/\epsilon)$ steps. Moreover, the initial distance from the optimal solution enters the bound only logarithmically.

After solving the optimization problem in $U$ we switch to optimizing over the item matrix $M$. The algorithm is virtually identical to that in $U$, except that we now need to use the regularizer in $M$ instead of that in $U$. We find experimentally that a small number of iterations (less than 10) is more than sufficient for convergence.
Computing the Loss So far we simply used the loss $l(f, y)$ of (4) to define a convex loss without any concern to its computability. To implement Algorithm 1, however, we need to be able to solve the maximization of $l$ with respect to the set of permutations $\pi$ efficiently. One may show that computing the $\bar\pi$ which maximizes $l(f, y)$ is possible by solving the linear assignment problem $\min \sum_i \sum_j C_{i,j} X_{i,j}$ with the cost matrix:

$$C_{i,j} = \kappa_i\, \frac{2^{y_j} - 1}{\mathrm{DCG}(y, k, \pi_s)\,\log(i+1)} - c_i f_j \qquad\text{with}\qquad \kappa_i = \begin{cases} 1 & \text{if } i < k, \\ 0 & \text{otherwise} \end{cases}$$
Efficient algorithms [7] based on the Hungarian Marriage algorithm (also referred to as the Kuhn-Munkres algorithm) exist for this problem [8]: it turns out that this integer programming problem can be solved by invoking a linear program. This in turn allows us to compute $l(f, y)$ efficiently.
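A sketch of this computation using scipy's Hungarian-algorithm solver (function name ours); the cost matrix follows the expression above, with rows indexing positions and columns indexing items:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def loss_argmax(f, y, c, k):
    """Permutation maximizing Delta(pi, y) + <c, f_pi> via linear assignment.
    Returns pi_bar with pi_bar[i] = item assigned to position i+1."""
    n = len(f)
    pi_s = np.argsort(-y)
    dcg_s = sum((2.0 ** y[pi_s[i - 1]] - 1) / np.log(i + 2)
                for i in range(1, min(k, n) + 1))
    pos = np.arange(1, n + 1)[:, None]           # positions i = 1..n
    kappa = (pos < k).astype(float)              # kappa_i = [i < k]
    C = kappa * (2.0 ** y[None, :] - 1) / (dcg_s * np.log(pos + 1)) \
        - c[:, None] * f[None, :]
    _, pi_bar = linear_sum_assignment(C)         # Kuhn-Munkres matching
    return pi_bar
```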
Computing the Gradients The second ingredient needed for applying the bundle method is to compute the gradients of $L(F, Y)$ with respect to $F$, since this allows us to compute gradients with respect to $M$ and $U$ by applying the chain rule:

$$\partial_M L(UM, Y) = U^T \partial_F L(F, Y) \qquad\text{and}\qquad \partial_U L(UM, Y) = \partial_F L(F, Y)^T M$$

$L$ decomposes into losses on individual users as described in (6). For each user $i$ only row $i$ of $F$ matters. It follows that $\partial_F L(F, Y)$ is composed of the gradients of $l(F^i, Y^i)$. Note that for $l$ defined as in (4) we know that

$$\partial_{F^i}\, l(F^i, Y^i) = \left[c - c_{\bar\pi^{-1}}\right].$$

Here we denote by $\bar\pi$ the maximizer of the loss, and $c_{\bar\pi^{-1}}$ denotes the application of the inverse permutation $\bar\pi^{-1}$ to the vector $c$.
6 Experiments
We evaluated CoFiRank with the NDCG loss just defined (denoted by CoFiRank-NDCG) as well as with loss functions which optimize ordinal regression (CoFiRank-Ordinal) and regression (CoFiRank-Regression). CoFiRank-Ordinal applies the algorithm described above to preference ranking by optimizing the preference ranking loss. Similarly, CoFiRank-Regression optimizes for regression using the root mean squared loss. We looked at two real world evaluation settings: "weak" and "strong" [9] generalization on three publicly available data sets: EachMovie, MovieLens and Netflix. Statistics for those can be found in Table 1.

Dataset       Users     Movies    Ratings
EachMovie     61265     1623      2811717
MovieLens     983       1682      100000
Netflix       480189    17770     100480507

Table 1: Data set statistics
Weak generalization is evaluated by predicting the rank of unrated items for users known at training time. To do so, we randomly select $N = 10, 20, 50$ ratings for each user for training and evaluate on the remaining ratings. Users with less than 20, 30, 60 rated movies were removed to ensure that we could evaluate on at least 10 movies per user. We compare CoFiRank-NDCG, CoFiRank-Ordinal, CoFiRank-Regression and MMMF [10]. Experimental results are shown in Table 2.

For all CoFiRank experiments, we choose $\lambda = 10$. We did not optimize for this parameter. The results for MMMF were obtained using MATLAB code available from the homepage of the authors of [10]. For those, we used $\lambda = \frac{1}{1.9}$ for EachMovie, and $\lambda = \frac{1}{1.6}$ for MovieLens, as it is reported to yield the best results for MMMF. In all experiments, we choose the dimensionality of $U$ and $M$ to be 100. All CoFiRank experiments and those of MMMF on MovieLens were repeated ten times. Unfortunately, we underestimated the runtime and memory requirements of MMMF on EachMovie. Thus, we cannot report results on this data set using MMMF.

Additionally, we performed some experiments on the Netflix data set. However, we cannot compare to any of the other methods on that data set as, to the best of our knowledge, CoFiRank is the first collaborative ranking algorithm to be applied to this data set, supposedly because of its large size.

Strong generalization is evaluated on users that were not present at training time. We follow the procedure described in [17]: movies with less than 50 ratings are discarded. The 100 users with the most rated movies are selected as the test set and the methods are trained on the remaining users. In evaluation, 10, 20 or 50 ratings from those of the 100 test users are selected. For those ratings, the user training procedure is applied to optimize $U$. $M$ is kept fixed in this process to the values obtained during training. The remaining ratings are tested using the same procedure as for the weak
EachMovie
Method                  N=10                N=20                N=50
CoFiRank-NDCG           0.6562 ± 0.0012     0.6644 ± 0.0024     0.6406 ± 0.0040
CoFiRank-Ordinal        0.6727 ± 0.0309     0.7240 ± 0.0018     0.7214 ± 0.0076
CoFiRank-Regression     0.6114 ± 0.0217     0.6400 ± 0.0354     0.5693 ± 0.0428

MovieLens
Method                  N=10                N=20                N=50
CoFiRank-NDCG           0.6400 ± 0.0061     0.6307 ± 0.0062     0.6076 ± 0.0077
CoFiRank-Ordinal        0.6233 ± 0.0039     0.6686 ± 0.0058     0.7169 ± 0.0059
CoFiRank-Regression     0.6420 ± 0.0252     0.6509 ± 0.0190     0.6584 ± 0.0187
MMMF                    0.6061 ± 0.0037     0.6937 ± 0.0039     0.6989 ± 0.0051

Netflix
Method                  N=10                N=20
CoFiRank-NDCG           0.6081              0.6204
CoFiRank-Regression     0.6082              0.6287

Table 2: Results for the weak generalization setting experiments. We report the NDCG@10 accuracy for various numbers of training ratings used per user. For most results we report the mean over ten runs and the standard deviation. We also report the p-values for the best vs. second best score.
6
             Method            N=10              N=20              N=50
EachMovie    CoFiRank-NDCG     0.6367 ± 0.001    0.6619 ± 0.0022   0.6771 ± 0.0019
             GPR               0.4558 ± 0.015    0.4849 ± 0.0066   0.5375 ± 0.0089
             CGPR              0.5734 ± 0.014    0.5989 ± 0.0118   0.6341 ± 0.0114
             GPOR              0.3692 ± 0.002    0.3678 ± 0.0030   0.3663 ± 0.0024
             CGPOR             0.3789 ± 0.011    0.3781 ± 0.0056   0.3774 ± 0.0041
             MMMF              0.4746 ± 0.034    0.4786 ± 0.0139   0.5478 ± 0.0211
MovieLens    CoFiRank-NDCG     0.6237 ± 0.0241   0.6711 ± 0.0065   0.6455 ± 0.0103
             GPR               0.4937 ± 0.0108   0.5020 ± 0.0089   0.5088 ± 0.0141
             CGPR              0.5101 ± 0.0081   0.5249 ± 0.0073   0.5438 ± 0.0063
             GPOR              0.4988 ± 0.0035   0.5004 ± 0.0046   0.5011 ± 0.0051
             CGPOR             0.5053 ± 0.0047   0.5089 ± 0.0044   0.5049 ± 0.0035
             MMMF              0.5521 ± 0.0183   0.6133 ± 0.0180   0.6651 ± 0.0190

Table 3: The NDCG@10 accuracy over ten runs and the standard deviation for the strong generalization evaluation.
generalization. We repeat the whole process 10 times and again use λ = 10 and a dimensionality of 100. We compare CoFiRank-NDCG to Gaussian Process Ordinal Regression (GPOR) [3], Gaussian Process Regression (GPR), and the collaborative extensions (CGPR, CGPOR) [17]. Table 3 shows our results compared to the ones from [17].
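To illustrate the fixed-M protocol, here is a minimal stand-in (ours): the actual system optimizes U with its structured ranking loss, but a ridge fit on the new user's observed ratings shows the shape of the computation.

```python
import numpy as np

def fit_new_user(M, rated_items, ratings, lam=10.0):
    """Fit a factor u for an unseen user while the item matrix M stays fixed.

    M           : n_items x d item-factor matrix learned at training time
    rated_items : indices of the items this test user has rated
    ratings     : the corresponding rating values

    Stand-in objective: ridge regression ||M[rated] u - ratings||^2 + lam*||u||^2
    (the actual system optimizes its ranking loss here instead).
    """
    Ms = M[rated_items]
    d = M.shape[1]
    u = np.linalg.solve(Ms.T @ Ms + lam * np.eye(d),
                        Ms.T @ np.asarray(ratings, dtype=float))
    return M @ u          # predicted scores for every item, used only for ranking

M = np.random.default_rng(1).normal(size=(100, 8))
scores = fit_new_user(M, rated_items=[3, 17, 42], ratings=[5.0, 2.0, 4.0])
```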
CoFiRank performs strongly compared to most of the other tested methods. Particularly in the strong generalization setting, CoFiRank outperforms the existing methods in almost all settings. Note that all methods except CoFiRank and MMMF use additional extracted features which are either provided with the dataset or extracted from the IMDB; MMMF and CoFiRank rely only on the rating matrix. In the weak generalization experiments on the MovieLens data, CoFiRank performs better for N = 20 but is marginally outperformed by MMMF in the N = 10 and N = 50 cases. We believe that with proper parameter tuning, CoFiRank will perform better in these cases.
7 Discussion and Summary
CoFiRank is a novel approach to collaborative filtering which solves the ranking problem faced by webshops directly. It can do so faster and at a higher accuracy than approaches which learn a rating to produce a ranking. CoFiRank is adaptable to different loss functions such as NDCG, regression and ordinal regression in a plug-and-play manner. Additionally, CoFiRank is well suited for privacy-concerned applications, as the optimization itself does not need ratings from the users, but only gradients.
Our results, which we obtained without parameter tuning, are on par with or outperform several of the most successful approaches to collaborative filtering, like MMMF, even when those are used with tuned parameters. CoFiRank performs best on data sets of realistic sizes such as EachMovie and significantly outperforms other approaches in the strong generalization setting.
In our experiments, CoFiRank proved to be very fast. For example, training on EachMovie with N = 10 can be done in less than ten minutes and uses less than 80 MB of memory on a laptop. For N = 20, CoFiRank obtained an NDCG@10 of 0.72 after the first iteration, which also took less than ten minutes. This is the highest NDCG@10 score on that data set we are aware of (apart from the result of CoFiRank after convergence). A comparison to MMMF in that regard is difficult, as it is implemented in MATLAB and CoFiRank in C++. However, CoFiRank is more than ten times faster than MMMF while using far less memory. In the future, we will exploit the fact that the algorithm is easily parallelizable to obtain even better performance on current multi-core hardware as well as computer clusters. Even the current implementation allows us to report the first results on the Netflix data set for direct ranking optimization.
Acknowledgments: Markus Weimer is funded by the German Research Foundation as part of the Research Training Group 1223: "Feedback-Based Quality Management in eLearning".
Software: CoFiRank is available from http://www.cofirank.org
References
[2] C. J. Burges, Q. V. Le, and R. Ragno. Learning to rank with nonsmooth cost functions. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, 2007.
[3] W. Chu and Z. Ghahramani. Gaussian processes for ordinal regression. J. Mach. Learn. Res., 6:1019–1041, 2005.
[4] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 115–132, Cambridge, MA, 2000. MIT Press.
[5] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents. In ACM Special Interest Group in Information Retrieval (SIGIR), pages 41–48. New York: ACM, 2002.
[7] R. Jonker and A. Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38:325–340, 1987.
[8] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83–97, 1955.
[9] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis, University of Toronto, 2004.
[10] J. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proc. Intl. Conf. Machine Learning, 2005.
[11] N. Srebro, J. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, Cambridge, MA, 2005. MIT Press.
[12] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In P. Auer and R. Meir, editors, Proc. Annual Conf. Computational Learning Theory, number 3559 in Lecture Notes in Artificial Intelligence, pages 545–560. Springer-Verlag, June 2005.
[13] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, pages 25–32, Cambridge, MA, 2004. MIT Press.
[14] C. H. Teo, Q. Le, A. J. Smola, and S. V. N. Vishwanathan. A scalable modular convex solver for regularized risk minimization. In Conference on Knowledge Discovery and Data Mining, 2007.
[15] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453–1484, 2005.
[16] E. Voorhees. Overview of the TREC 2001 question answering track. In Text REtrieval Conference (TREC) Proceedings. Department of Commerce, National Institute of Standards and Technology, 2001. NIST Special Publication 500-250: The Tenth Text REtrieval Conference (TREC 2001).
[17] S. Yu, K. Yu, V. Tresp, and H. P. Kriegel. Collaborative ordinal regression. In W. W. Cohen and A. Moore, editors, Proc. Intl. Conf. Machine Learning, pages 1089–1096. ACM, 2006.
2,602 | 336 | CAM Storage of Analog Patterns and
Continuous Sequences with 3N² Weights
Bill Baird
Dept Mathematics and
Dept Molecular and Cell Biology,
129 LSA, U.C. Berkeley,
Berkeley, Ca. 94720
Frank Eeckman
Lawrence Livermore
National Laboratory,
P.O. Box 808 (L-426),
Livermore, Ca. 94550
Abstract
A simple architecture and algorithm for analytically guaranteed associative memory storage of analog patterns, continuous sequences, and chaotic
attractors in the same network is described. A matrix inversion determines
network weights, given prototype patterns to be stored. There are N units
of capacity in an N node network with 3N² weights. It costs one unit per
static attractor, two per Fourier component of each sequence, and four per
chaotic attractor. There are no spurious attractors, and there is a Liapunov function in a special coordinate system which governs the approach
of transient states to stored trajectories. Unsupervised or supervised incremental learning algorithms for pattern classification, such as competitive
learning or bootstrap Widrow-Hoff can easily be implemented. The architecture can be "folded" into a recurrent network with higher order weights
that can be used as a model of cortex that stores oscillatory and chaotic
attractors by a Hebb rule. Hierarchical sensory-motor control networks
may be constructed of interconnected "cortical patches" of these network
modules. Network performance is being investigated by application to the
problem of real time handwritten digit recognition.
1 Introduction
We introduce here a "projection network" which is a new network for implementation of the "normal form projection algorithm" discussed in [Bai89, Bai90b]. The
autoassociative case of this network is formally equivalent to the previous higher
order network realization used as a biological model [Bai90a]. It has 3N² weights instead of N² + N⁴, and is more useful for engineering applications. All the mathematical results proved for the projection algorithm in that case carry over to this
new architecture, but more general versions can be trained and applied in novel ways. The discussion here will be informal, since space prohibits technical detail and proofs may be found in the references above.

[Figure 1 diagram: an input pattern x̄′ passes through the P⁻¹ matrix into network coordinates, drives a dynamic winner-take-all network governed by the A matrix (normal form: v̇_i = α v_i − v_i Σ_j a_ij v_j²), and the resulting state is projected through the P matrix to the output x̄ = Pv̄.]
Figure 1: Projection Network - 3N² weights. The A matrix determines a k-winner-take-all net - programs attractors, basins of attraction, and rates of convergence. The columns of P contain the output patterns associated to these attractors. The rows of P⁻¹ determine category centroids.
A key feature of a net constructed by this algorithm is that the underlying dynamics
is explicitly isomorphic to any of a class of standard, well understood nonlinear
dynamical systems - a "normal form" [GH83]. This system is chosen in advance,
independent of both the patterns to be stored and the learning algorithm to be
used. This control over the dynamics permits the design of important aspects of
the network dynamics independent of the particular patterns to be stored. Stability,
basin geometry, and rates of convergence to attractors can be programmed in the
standard dynamical system.
Here we use the normal form for the Hopf bifurcation [GH83] as a simple recurrent
competitive k-winner-take-all network with a cubic nonlinearity. This network lies
in what might be considered diagonalized or "overlap" or "memory coordinates" (one
memory per k nodes). For temporal patterns, these nodes come in complex conjugate pairs which supply Fourier components for trajectories to be learned. Chaotic
dynamics may be created by specific programming of the interaction of two pairs
of these nodes.
Learning of desired spatial or spatio-temporal patterns is done by projecting sets of
these nodes into network coordinates (the standard basis) using the desired vectors
as corresponding columns of a transformation matrix P. In previous work, the
differential equations of the recurrent network itself are linearly transformed or
"projected" , leading to new recurrent network equations with higher order weights
corresponding to the cubic terms of the recurrent network.
2 The Projection Network
In the projection net for autoassociation, this algebraic projection operation into
and out of memory coordinates is done explicitly by a set of weights in two feedforward linear networks characterized by weight matrices P⁻¹ and P. These map
inputs into and out of the nodes of the recurrent dynamical network in memory
coordinates sandwiched between them. This kind of network, with explicit input
and output projection maps that are inverses, may be considered an "unfolded"
version of the purely recurrent networks described in the references above.
This network is shown in Figure 1. Input pattern vectors x̄′ are applied as pulses which project onto each vector of weights (row of the P⁻¹ matrix) on the input
to each unit i of the dynamic network to establish an activation level Vi which
determines the initial condition for the relaxation dynamics of this network. The
recurrent weight matrix A of the dynamic network can be chosen so that the unit
or predefined subspace of units which receives the largest projection of the input will converge to some state of activity, static or dynamic, while all other units are suppressed to zero activity.
The evolution of the activity in these memory coordinates appears in the original
network coordinates at the output terminals as a spatio-temporal pattern which may be fully distributed across all nodes. Here the state vector of the dynamic
network has been transformed by the P matrix back into the coordinates in which
the input was first applied. At the attractor v* in memory coordinates, only a linear combination of the columns of the P weight matrix multiplied by the winning
nonzero modes of the dynamic net constitute the network representation of the output of the system. Thus the attractor retrieved in memory coordinates reconstructs
its learned distributed representation i* through the corresponding columns of the
output matrix P, e.g. P⁻¹x̄′ = v̄, v̄ → v̄*, Pv̄* = x̄*.
For the special case of content addressable memory or autoassociation, which we
have been describing here, the actual patterns to be learned form the columns
of the output weight matrix P, and the input matrix is its inverse P⁻¹. These are the networks that can be "folded" into higher order recurrent networks. For orthonormal patterns, the inverse is the transpose of this output matrix of memories, P⁻¹ = Pᵀ, and no computation of P⁻¹ is required to store or change memories - just plug the desired patterns into appropriate rows and columns of P and Pᵀ.
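The complete recall loop for this orthonormal autoassociative case fits in a few lines; the sketch below (ours) uses plain Euler integration, and the gain and coupling values are illustrative rather than taken from the paper.

```python
import numpy as np

def recall(x_in, P, alpha=1.0, dt=0.01, steps=3000):
    """Autoassociative recall in a projection network with orthonormal patterns.

    P : N x N matrix whose columns are the stored patterns, so P^{-1} = P^T.
    Memory-coordinate dynamics:  dv_i/dt = alpha*v_i - v_i * sum_j A_ij v_j^2,
    with off-diagonal competition stronger than self-damping (winner-take-all).
    """
    n = P.shape[0]
    A = 2.0 * np.ones((n, n)) - np.eye(n)   # diagonal 1, off-diagonal 2
    v = P.T @ x_in                          # project input into memory coordinates
    for _ in range(steps):
        v = v + dt * (alpha * v - v * (A @ v**2))
    return P @ v                            # project the winning mode back out

P = np.eye(3)                               # store the standard basis vectors
print(recall(np.array([0.8, 0.3, 0.1]), P)) # converges near the first pattern
```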
In the autoassociative network, the input space, output space and normal form
state space are each of dimension N. The input and output linear maps require
N² weights each, while the normal form coefficients determine another N² weights. Thus the net needs only 3N² weights, instead of the N² + N⁴ weights required by the folded recurrent network. The 2N² input and output weights could be stored
off-chip in a conventional memory, and the fixed weights of the dynamic normal
form network could be implemented in VLSI for fast analog relaxation.
3 Learning Extensions
More generally, for a heteroassociative net (i.e., a net designed to perform a map
from input space to possibly different output space) the linear input and output
maps need not be inverses, and may be noninvertible. They may be found by any
linear map learning technique such as Widrow-Hoff or by finding pseudoinverses.
Learning of all desired memories may be instantaneous, when they are known in
advance, or may evolve by many possible incremental methods, supervised or unsupervised. The standard competitive learning algorithm where the input weight
vector attached to the winning memory node is moved toward the input pattern
can be employed. We can also decrease the tendency to choose the most frequently
selected node, by adjusting parameters in the normal form equations, to realize the more effective frequency selective competitive learning algorithm [AKCM90]. Supervised algorithms like bootstrap Widrow-Hoff may be implemented as well, where
a desired output category is known. The weight vector of the winning normal form
node is updated by the competitive rule, if it is the right category for that input,
but moved away from the input vector, if it is not the desired category, and the
weight vector of the desired node is moved toward the input.
Thus the input map can be optimized for clustering and classification by these
algorithms, as the weight vectors (row vectors of the input matrix) approach the
centroids of the clusters in the input environment. The output weight matrix may
then be constructed with any desired output pattern vectors in appropriate columns
to place the attractors corresponding to these categories anywhere in the state space
in network coordinates that is required to achieve a desired heteroassociation.
If either the input or the output matrix is learned, and the other chosen to be its
inverse, then these competitive nets can be folded into oscillating biological versions,
to see what the competitive learning algorithms correspond to there. Now either the
rows of the input matrix may be optimized for recognition, or the columns of the
output matrix may be chosen to place attractors, but not both. We hope to be able
to derive a kind of Hebb rule in the biological network, using the unfolded form of
the network, which we can prove will accomplish competitive learning. Thus the
work on engineering applications feeds back on the understanding of the biological
systems.
4 Programming the Normal Form Network
The key to the power of the projection algorithm to program these systems lies in
the freedom to choose a well-understood normal form for the dynamics, independent of the patterns to be learned. The Hopf normal form used here (in Cartesian coordinates),
v̇_i = Σ_{j=1}^N J_ij v_j − v_i Σ_{j=1}^N A_ij v_j²,
is especially easy to work with for programming periodic attractors, but handles fixed points as well. J is a matrix with real eigenvalues for determining static attractors, or complex conjugate
eigenvalue pairs in blocks along the diagonal for periodic attractors. The real parts
are positive, and cause initial states to move away from the origin, until the competitive (negative) cubic terms dominate at some distance, and cause the flow to
be inward from all points beyond. The off-diagonal cubic terms cause competition
between directions of flow within a spherical middle region and thus create multiple
attractors and basins. The larger the eigenvalues in J and off-diagonal weights in
A, the faster the convergence to attractors in this region.
It is easy to choose blocks of coupling along the diagonal of the A matrix to produce
different kinds of attractors, static, periodic, or chaotic, in different coordinate
subspaces of the network. The sizes of the subspaces can be programmed by the
sizes of the blocks. The basin of attraction of an attractor determined within
a subspace is guaranteed to contain the subspace [Bai90b]. Thus basins can be
programmed, and "spurious" attractors can be ruled out when all subspaces have
been included in a programmed block.
This can be accomplished simply by choosing the A matrix entries outside the
blocks on the diagonal (which determine coupling of variables within a subspace) to
be greater (more negative) than those within the blocks. The principle is that this
makes the subspaces defined by the blocks compete exhaustively, since intersubspace
competition is greater than subspace self-damping. Within the middle region, the
flow is forced to converge laterally to enter the subspaces programmed by the blocks.
A simple example is a matrix of the form

A =
[ d  g  g  g  g  g  g  g ]
[ g  d  g  g  g  g  g  g ]
[ g  g  d  c  g  g  g  g ]
[ g  g  c  d  g  g  g  g ]
[ g  g  g  g  d  d  c  c ]
[ g  g  g  g  d  d  c  c ]
[ g  g  g  g  c  c  d  d ]
[ g  g  g  g  c  c  d  d ]

where 0 < c < d < g. There is a static attractor on each axis (in each one
dimensional subspace) corresponding to the first two entries on the diagonal, by
the argument above. In the first two-dimensional subspace block there is a single fixed point in the interior of the subspace on the main diagonal, because the off-diagonal entries within the block are symmetric and less negative than those on the
diagonal. The components do not compete, but rather combine. Nevertheless, the
flow from outside is into the subspace, because the entries outside the subspace are
more negative than those within it.
The last subspace contains entries appropriate to guarantee the stability of a periodic attractor with two frequencies (Fourier components) chosen in the J matrix.
The doubling of the entries is because these components come in complex conjugate
pairs (in the J matrix blocks) which get identical A matrix coupling. Again, these
pairs are combined by the lesser off-diagonal coupling within the block to form a
single limit cycle attractor. A large subspace can store a complicated continuous periodic spatio-temporal sequence with many component frequencies.
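Programmatically, a matrix with this block structure could be assembled as in the sketch below (ours; the block sizes and numeric values of c, d, g are illustrative). The Kronecker product implements the pairwise "doubling" described above for oscillatory blocks.

```python
import numpy as np

def make_A(block_sizes, oscillatory, c=1.0, d=2.0, g=3.0):
    """Assemble the block-structured competition matrix described above.

    block_sizes : sizes of the diagonal blocks, e.g. [1, 1, 2, 4]
    oscillatory : parallel list of flags; an oscillatory block of size 2k gets
                  the 'doubled' pattern so each complex-conjugate pair of nodes
                  receives identical coupling
    Requires 0 < c < d < g so inter-subspace competition dominates.
    """
    n = sum(block_sizes)
    A = np.full((n, n), g)                       # coupling between subspaces
    pos = 0
    for size, osc in zip(block_sizes, oscillatory):
        if osc:
            k = size // 2
            base = np.full((k, k), c)
            np.fill_diagonal(base, d)
            block = np.kron(base, np.ones((2, 2)))   # double every entry
        else:
            block = np.full((size, size), c)
            np.fill_diagonal(block, d)
        A[pos:pos + size, pos:pos + size] = block
        pos += size
    return A

print(make_A([1, 1, 2, 4], [False, False, False, True]))
```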
The discrete Fourier transform of a set of samples of such a sequence in space and time can be input directly to the P matrix as a set of complex columns corresponding to the frequencies in J and the subspace programmed in A. N/2 total DFT samples of N-dimensional time-varying spatial vectors may be placed in the P matrix, and parsed by the A matrix into M < N/2 separate sequences as desired, with separate basins of attraction guaranteed [Bai90b]. For a symmetric A matrix, there is a
Liapunov function, in the amplitude equations of a polar coordinate version of the
normal form, which governs the approach of initial states to stored trajectories.
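The sketch below (ours) illustrates one way the DFT step could be carried out; the choice to keep the strongest non-DC components, and the sample signal, are assumptions for illustration.

```python
import numpy as np

def sequence_to_P_columns(samples, n_components):
    """Extract complex columns for P from a spatio-temporal sequence.

    samples : T x N array (T time samples of an N-dimensional spatial pattern)
    Keeps the n_components strongest non-DC temporal DFT components; each
    complex column pairs with one conjugate pair of normal-form nodes, at a
    cost of two units of capacity per component.
    """
    spectrum = np.fft.rfft(samples, axis=0)            # DFT along the time axis
    power = np.abs(spectrum).sum(axis=1)
    keep = np.argsort(-power[1:])[:n_components] + 1   # skip the DC bin
    freqs = np.fft.rfftfreq(samples.shape[0])[keep]    # would drive the J matrix
    return spectrum[keep].T, freqs                     # N x n_components columns

t = np.linspace(0.0, 1.0, 64, endpoint=False)
seq = np.stack([np.sin(2*np.pi*3*t), 0.5*np.cos(2*np.pi*5*t)], axis=1)
cols, freqs = sequence_to_P_columns(seq, 2)
print(cols.shape, freqs)        # (2, 2) and the two dominant frequencies
```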
5 Chaotic Attractors
Chaotic attractors may be created in this normal form, with sigmoid nonlinearities added to the right hand side, v_i → tanh(v_i). The sigmoids yield a spectrum of higher
order terms that break the phase shift symmetry of the system. Two oscillatory
pairs of nodes like those programmed in the block above can then be programmed
to interact chaotically. In our simulations, for example, if we set the upper block of
d entries to -1, and the lower to 1, and replace the upper c entries with 4.0, and the
lower with -0.4, we get a chaotic attractor of dimension less than four, but greater
than three.
This is "weak" or "phase coherent" chaos that is still nearly periodic. It is created
by the broken symmetry, when a homoclinic tangle occurs to break up an invariant 3-torus in the flow [GH83]. This is the Ruelle-Takens route to chaos and has been observed in Taylor-Couette flow when both cylinders are rotated. We believe that
sets of Lorenz equations in three-dimensional subspace blocks could be used in a
projection network as well. Experiments of Freeman, however, have suggested that
chaotic attractors of the above dimension occur in the olfactory system [Fre87].
These might most naturally occur by the interaction of oscillatory modes.
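The fragment below (ours) is a purely qualitative simulation sketch of this setup. The exact placement of the tanh and the assignment of the quoted values -1, 1, 4.0 and -0.4 to the "upper" and "lower" blocks are ambiguous from the text, so this encodes just one plausible reading; the growth rates and frequencies in J are invented for illustration.

```python
import numpy as np

def simulate_coupled_pairs(steps=40000, dt=0.005):
    """Two oscillatory pairs coupled through an asymmetric A matrix.

    J holds two complex-conjugate frequency pairs (illustrative values);
    A uses the quoted entries -1, 1, 4.0, -0.4, arranged here as upper/lower
    d- and c-blocks of the doubled 4x4 pattern. The tanh is applied to the
    state on the right-hand side, one reading of v_i -> tanh(v_i).
    """
    w1, w2 = 1.0, 1.7
    J = np.zeros((4, 4))
    J[:2, :2] = [[0.5, -w1], [w1, 0.5]]       # pair 1: growth 0.5, freq w1
    J[2:, 2:] = [[0.5, -w2], [w2, 0.5]]       # pair 2: growth 0.5, freq w2
    A = np.array([[-1.0, -1.0,  4.0,  4.0],
                  [-1.0, -1.0,  4.0,  4.0],
                  [-0.4, -0.4,  1.0,  1.0],
                  [-0.4, -0.4,  1.0,  1.0]])
    v = np.array([0.1, 0.0, 0.05, 0.0])
    traj = np.empty((steps, 4))
    for i in range(steps):
        u = np.tanh(v)
        v = v + dt * (J @ u - u * (A @ u**2))
        traj[i] = v
    return traj

traj = simulate_coupled_pairs()
print(traj[-5:])
```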
In the projection network or its folded biological version, these chaotic attractors have a basin of attraction in the N-dimensional state space that constitutes a category, just like any other attractor in this system. They are, however, "fuzzy" attractors, and there may be computational advantages to the basins of attraction (categories) produced by chaotic attractors, or to the effects their outputs have as fuzzy inputs to other network modules. The particular N-dimensional spatio-temporal patterns learned for the four components of these chaotically paired modes
may be considered a coordinate specific "encoding" of the strange attractor, which
may constitute a recognizable input to another network, if it falls within some
learned basin of attraction. While the details of the trajectory of a strange attractor in any real physical continuous dynamical system are lost in the noise, there is still a particular statistical structure to the attractor which is a recognizable "signature".
6 Applications
Handwritten characters have a natural translation invariant analog representation
in terms of a sequence of angles that parametrize the pencil trajectory, and their
classification can be taken as a static or temporal pattern recognition problem. We
have constructed a trainable on-line system to which anyone may submit input
by mouse or digitizing pad, and observe the performance of the system for themselves, in immediate comparison to their own internal recognition response. The
performance of networks with static, periodic, and chaotic attractors may be tested
simultaneously, and we are presently assessing the results.
These networks can be combined into a hierarchical architecture of interconnected
modules. The larger network itself can then be viewed as a projection network,
transformed into biological versions, and its behavior analysed with the same tools
that were used to design the modules. The modules can model "patches" of cortex
interconnected to form sensory-motor control networks. These can be configured to
yield autonomous adaptive "organisms" which learn useful sequences of behaviors
by reinforcement from their environment.
The A matrix for a network like that above may itself become a sub-block in the A
matrix of a larger network. The overall network is then a projection network with
zero elements in off-diagonal A matrix entries outside blocks that define multiple
attractors for the submodules. The modules neither compete nor combine states,
in the absence of A matrix coupling between them, but take states independently
based on their inputs to each other through the weights in the matrix J (which
here describes full coupling). The modules learn connection weights Jij between
themselves which will cause the system to evolve under a clocked "machine cycle"
by a sequence of transitions of attractors (static, oscillatory, or chaotic) within
the modules, much as a digital computer evolves by transitions of its binary flip-flop states. This entire network may be folded to use more fault-tolerant and
biologically plausible distributed representations, without disrupting the identity of
the subnetworks.
Supervised learning by recurrent back propagation or reinforcement can be used to
train the connections between modules. When the inputs from one module to the
next are given as impulses that establish initial conditions, the dynamical behavior
of a module is described exactly by the projection theorem [Bai89]. Possible applications include problems such as system identification and control, robotic path
planning, grammatical inference, and variable-binding by phase-locking in oscillatory
semantic networks.
Acknowledgements: Supported by AFOSR-87-0317, and a grant from LLNL. It is a pleasure to acknowledge the support of Walter Freeman and the invaluable assistance of Morris Hirsch.
References
[AKCM90] C. Ahalt, A. Krishnamurthy, P. Chen, and D. Melton. Competitive learning algorithms for vector quantization. Neural Networks, 3:277–290, 1990.
[Bai89] B. Baird. A bifurcation theory approach to vector field programming for periodic attractors. In Proc. Int. Joint Conf. on Neural Networks, Wash. D.C., pages 1:381–388, June 1989.
[Bai90a] B. Baird. Bifurcation and learning in network models of oscillating cortex. In S. Forest, editor, Emergent Computation, pages 365–384. North Holland, 1990. Also in Physica D, 42.
[Bai90b] B. Baird. A learning rule for CAM storage of continuous periodic sequences. In Proc. Int. Joint Conf. on Neural Networks, San Diego, pages 3:493–498, June 1990.
[Fre87] W.J. Freeman. Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biological Cybernetics, 56:139, 1987.
[GH83] J. Guckenheimer and P. Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, New York, 1983.
2,603 | 3,360 | Congruence between model and human attention
reveals unique signatures of critical visual events
Robert J. Peters*
Department of Computer Science
University of Southern California
Los Angeles, CA 90089
[email protected]
Laurent Itti
Departments of Neuroscience and Computer Science
University of Southern California
Los Angeles, CA 90089
[email protected]
Abstract
Current computational models of bottom-up and top-down components of attention are predictive of eye movements across a range of stimuli and of simple,
fixed visual tasks (such as visual search for a target among distractors). However, to date there exists no computational framework which can reliably mimic
human gaze behavior in more complex environments and tasks, such as driving
a vehicle through traffic. Here, we develop a hybrid computational/behavioral
framework, combining simple models for bottom-up salience and top-down relevance, and looking for changes in the predictive power of these components at
different critical event times during 4.7 hours (500,000 video frames) of observers
playing car racing and flight combat video games. This approach is motivated by
our observation that the predictive strengths of the salience and relevance models exhibit reliable temporal signatures during critical event windows in the task
sequence: for example, when the game player directly engages an enemy plane
in a flight combat game, the predictive strength of the salience model increases
significantly, while that of the relevance model decreases significantly. Our new
framework combines these temporal signatures to implement several event detectors. Critically, we find that an event detector based on fused behavioral and stimulus information (in the form of the model's predictive strength) is much stronger
than detectors based on behavioral information alone (eye position) or image information alone (model prediction maps). This approach to event detection, based
on eye tracking combined with computational models applied to the visual input,
may have useful applications as a less-invasive alternative to other event detection
approaches based on neural signatures derived from EEG or fMRI recordings.
1 Introduction
The human visual system provides an arena in which objects compete for our visual attention, and
a given object may win the competition with support from a number of influences. For example, a moving object in our visual periphery may capture our attention because of its salience, or the
degree to which it is unusual or surprising given the overall visual scene [1]. On the other hand,
a piece of fruit in a tree may capture our attention because of its relevance to our current foraging
task, in which we expect rewarding items to be found in certain locations relative to tree trunks, and
to have particular visual features such as a reddish color [2, 3]. Computational models of each of
these influences have been developed and have individually been extensively characterized in terms
of their ability to predict an overt measure of attention, namely gaze position [4, 5, 6, 3, 7, 8, 9].
Yet how do the real biological factors modeled by such systems interact in real-world settings [10]?
Often salience and relevance are competing factors, and sometimes one factor is so strong that it
* webpage: http://ilab.usc.edu/rjpeters/
[Figure 1 diagram: a human game player and the game visual stimulus produce an eye position stream and video frames; the frames feed the BU and TD models, whose maps are compared with eye position to give BU NSS and TD NSS signals; together with event annotation these yield event-locked eye position, BU/TD NSS, and BU/TD map templates.]
Figure 1: Our computational framework for generating detector templates which can be used to detect key events in video sequences. A human game player interacts with a video game, generating a sequence of video frames from the game, and a sequence of eye position samples from the game player. The video frames feed into computational models for predicting bottom-up (BU) salience and top-down (TD) relevance influences on attention. These predictions are then compared with the observed eye position using a "normalized scanpath saliency" (NSS) metric. Finally, the video game sequence is annotated with key event times, and these are used to generate event-locked templates from each of the game-related signals. These templates are used to try to detect the events in the original game sequences, and the results are quantified with metrics from signal detection theory.
overrides our best efforts to ignore it, as in the case of oculomotor capture [11]. How does the visual
system decide which factor dominates, and how does this vary as a function of the current task?
We propose that one element of learning sophisticated visual or visuomotor tasks may be learning
which attentional influences are important for each phase of the task. A key question is how to build
models that can capture the effects of rapidly changing task demands on behavior.
Here we address that question in the context of playing challenging video games, by comparing eye
movements recorded during game play with the predictions of a combined salience/relevance computational model. Figure 1 illustrates the overall framework. The important factor in our approach
is that we identify key game events (such as destroying an enemy plane, or crashing the car during a driving race) which can be used as proxy indicators of likely transitions in the observer's task set.
Then we align subsequent analysis on these event times, such that we can detect repeatable changes
in model predictive strength within temporal windows around the key events. Indeed, we find significant changes in the predictive strength of both salience and relevance models within these windows,
including more than 8-fold increases in predictive strength as well as complete shifts from predictive
to anti-predictive behavior. Finally we show that the predictive strength signatures formed in these
windows can be used to detect the occurrence of the events themselves.
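To make the detection stage of Figure 1 concrete, the sketch below (ours) matches a signal against an event-locked template and scores the result with an ROC AUC; the paper does not specify its exact matching rule at this point, so the sliding normalized correlation here is an assumption.

```python
import numpy as np

def template_match(signal, template):
    """Sliding normalized correlation of a 1-D signal against a template."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    w = t.size
    out = np.zeros(signal.size)
    for i in range(w, signal.size + 1):
        s = signal[i - w:i]
        s = (s - s.mean()) / (s.std() + 1e-9)
        out[i - 1] = float(s @ t) / w
    return out

def roc_auc(event_scores, nonevent_scores):
    """AUC via the rank-sum identity: P(random event outscores random non-event)."""
    p = np.asarray(event_scores)[:, None]
    n = np.asarray(nonevent_scores)[None, :]
    return float(np.mean(p > n) + 0.5 * np.mean(p == n))

# toy usage: compare match strengths at annotated event frames vs. elsewhere
sig = np.random.default_rng(2).normal(size=1000)
tmpl = np.hanning(60)                      # stand-in event-locked template
match = template_match(sig, tmpl)
print(roc_auc(match[500:510], match[510:900]))
```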
2 Psychophysics and eye tracking
Five subjects (four male, one female) participated under a protocol approved by the Institutional
Review Board of the University of Southern California. Subjects played two challenging games on
a Nintendo GameCube: Need For Speed Underground (a car racing game) and Top Gun (a flight
combat game). All of the subjects had at least some prior experience with playing video games
in general, but none of the subjects had prior experience with the particular games involved in
our experiment. For each game, subjects first practiced the game for several one-hour sessions on
different days until reaching a success criterion (definition follows), and then returned for a one-hour
eye tracking session with that game. Within each game, subjects learned to play three game levels,
and during eye tracking, each subject played each game level twice. Thus, in total, our recorded data
set consists of video frames and eye tracking data from 60 clips (5 subjects × 2 games per subject × 3 levels per game × 2 clips per level) covering 4.7 hours.
Need For Speed: Underground (NFSU). In this game, players control a car in a race against
three other computer-controlled racers in a three-lap race, with a different race course for each
game level. The game display consists of a first-person view, as if the player were looking out the
windshield from the driver's seat of the vehicle, with several "heads-up display" elements showing
current elapsed time, race position, and vehicle speed, as well as a race course map (see Figure 2
for sample game frames). The game controller joystick is used simply to steer the vehicle, and a
pair of controller buttons are used to apply acceleration or brakes. Our "success" criterion for NFSU
was finishing the race in third place or better out of the four racers. The main challenge for players
was learning to be able to control the vehicle at a high rate of simulated speed (100+ miles per
hour) while avoiding crashes with slow-moving non-race traffic and also avoiding the attempts of
competing racers to knock the player's vehicle off course. During eye tracking, the average length of an NFSU level was 4.11 minutes, with a range of 3.14–4.89 minutes across the 30 NFSU recordings.
Top Gun (TG). In this game, players control a simulated fighter plane with a success criterion of
destroying 12 specific enemy targets in 10 minutes or less. The game controller provides a simple set
of flight controls: the joystick controls pitch (forward–backward axis) and combined yaw/roll (left–
right axis), a pair of buttons controls thrust level up and down, and another button triggers missile
firings toward enemy targets. Two onscreen displays aid the players in finding enemy targets: one is
a radar map with enemy locations indicated by red triangles, and another is a direction finder running
along the bottom screen showing the player's current compass heading along with the headings to each enemy target. Players' challenges during training involved first becoming familiar with the
flight controls, and then learning a workable strategy for using the radar and direction finder to
efficiently navigate the combat arena. During eye tracking, the average length of a TG level was
5.29 minutes, with a range of 2.96–8.78 minutes across the 30 TG recordings.
Eye tracking. Stimuli were presented on a 22″ computer monitor at a resolution of 640×480 pixels
and refresh rate of 75 Hz. Subjects were seated at a viewing distance of 80 cm and used a chin-rest to
stabilize their head position during eye tracking. Video game frames were captured at 29.97 Hz from the GameCube using a Linux computer under SCHED_FIFO scheduling, which then displayed the captured frames onscreen for the player's viewing while simultaneously streaming the frames to disk for subsequent processing. Finally, subjects' eye position was recorded at 240 Hz with a
hardware-based eye-tracking system (ISCAN, Inc.). In total, we obtained roughly 500,000 video
game frames and 4,000,000 eye position samples during 4.7 hours of recording.
3 Computational attention prediction models
We developed a computational model which uses existing building blocks for bottom-up and topdown components of attention to generate new eye position prediction maps for each of the recorded
video game frames. Then, for each frame, we quantified the degree of correspondence between
those maps and the actual eye position recorded from the game player. Although the individual
models form the underlying computational foundation of our current study, our focus is not on
testing their individual validity for predicting eye movements (which has already been established by
prior studies), but rather on using them as components of a new model for investigating relationships
between task structure and the relative strength of competing influences on visual attention; therefore
we provide only a coarse summary of the workings of the models here and refer the reader to original
sources for full details.
Salience. Bottom-up salience maps were generated using a model based on detecting outliers in
space and spatial frequency according to the low-level features of intensity, color, orientation, flicker and
motion [4]. This model has been previously reported to be significantly predictive of eye positions
across a range of stimuli and tasks [5, 6, 7, 8].
Relevance. Top-down task-relevance maps were generated using a model [9] which is trained to
associate low-level "gist" signatures with relevant eye positions (see also [3]). We trained the task-relevance model with a leave-one-out approach: for each of the 60 game clips, the task-relevance
model used for testing against that clip was trained on the video frames and eye position samples
from the remaining 59 clips.
Model/human agreement. For each video game frame, we used the normalized scanpath saliency
(NSS) metric [6] to quantify the agreement between the corresponding human eye position and the
model maps derived from that frame.

[Figure 2 layout, three columns ("missile fired", 658 events, derived from the number of missiles fired; "target destroyed", 328 events, derived from the number of targets destroyed; "start speed-up", 522 events, derived from the speedometer signal), with rows: (a) game frames surrounding an event at times t−Δ, t, t+Δ, shown for a single example event; (b) game events extracted from a continuous signal across the full game session; (c) eye position (in screen coordinates) recorded from the observer playing the game; (d) prediction strength (NSS) of the BU and TD models in predicting the observer's eye position (b–d shown for a single example session); (e) event-locked prediction strength of the BU and TD models, averaged across all events of a given type, with shaded areas showing 98% confidence intervals; (f) BU and (g) TD prediction maps of gaze position at times t−Δ, t, t+Δ relative to an event time, shown for a single example event.]
Figure 2: Event-locked analysis of agreement between model-predicted attention maps and observed human eye position. See Section 4 for details.

[Figure 3 layout, one row per event type with panels (a,d,g) detector histograms of template match strength for events vs. non-events, (b,e,h) ROC curves, and (c,f,i) precision/recall curves. Legend values for the three detectors (BU&TD NSS / eye position / BU&TD maps): "missile fired" AUC = 0.891 / 0.807 / 0.685, d′ = 1.891 / 1.391 / 0.576, max F1 = 0.373 / 0.278 / 0.153; "target destroyed" AUC = 0.837 / 0.772 / 0.585, d′ = 1.476 / 1.093 / 0.287, max F1 = 0.377 / 0.250 / 0.142; "start speed-up" AUC = 0.622 / 0.600 / 0.502, d′ = 0.445 / 0.387 / −0.152, max F1 = 0.173 / 0.159 / 0.126.]
Figure 3: Signal detection results of using event-locked signatures to detect visual events in video game frame sequences. See Section 5 for details.
Computing the NSS simply involves normalizing the model
prediction map to have a mean of zero and a variance of one, and then finding the value in that
normalized map at the location of the human eye position. An NSS value of 0 would represent a
model at chance in predicting human eye position, while an NSS value of 1 would represent a model
for which human eye positions fell at locations with salience (or relevance) one standard deviation
above average. Previous studies have typically used the NSS as a summary statistic to describe the
predictive strength of a model across an entire sequence of fixations [6, 12]; here, we use it instead
as a continuous measure of the instantaneous predictive strength of the models.
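For concreteness, here is a minimal NumPy sketch of this computation (the function name and array shapes are assumptions, not the authors' code):

```python
import numpy as np

def nss_trace(pred_maps, eye_xy):
    """NSS per frame: normalize each model prediction map to zero mean and
    unit variance, then read off the value at the human eye position.
    pred_maps: (T, H, W) array; eye_xy: (T, 2) array of (x, y) pixel positions."""
    scores = np.empty(len(pred_maps))
    for t, (m, (x, y)) in enumerate(zip(pred_maps, eye_xy)):
        z = (m - m.mean()) / (m.std() + 1e-12)   # guard against flat maps
        scores[t] = z[int(y), int(x)]
    return scores
```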
4 Event-locked analyses
We annotated the video game clips with several pieces of additional information that we could use to
identify interesting events (see Figure 2) which would serve as the basis for event-locked analyses.
These events were selected on the basis of representing transitions between different task phases.
We hypothesized that such events should correlate with changes in the relative strengths of different
influences on visual attention, and that we should be able to detect such changes using the previously
described models as diagnostic tools. Therefore, after annotating the video clips with the times of
each event of interest, we subsequently aligned further analyses on a temporal window of -5s/+5s
around each event (shaded background regions in Figure 2, rows b–e). From those windows we
extract the time courses of NSS scores from the salience and relevance models and then compute the
average time course across all of the windows, giving an event-locked template showing the NSS
signature of that event type (Figure 2e).
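A minimal sketch of this windowing-and-averaging step, under assumed sampling-rate and array conventions (not the authors' code):

```python
import numpy as np

def event_locked_template(nss, event_times, fs, half_window=5.0):
    """Average a continuous NSS trace in -5s/+5s windows around event times.
    nss: 1-D trace sampled at fs Hz; event_times: event instants in seconds."""
    half = int(half_window * fs)
    windows = [nss[c - half:c + half + 1]
               for c in (int(t * fs) for t in event_times)
               if c - half >= 0 and c + half < len(nss)]   # complete windows only
    windows = np.asarray(windows)
    return windows.mean(axis=0), windows   # template; per-event windows for confidence intervals
```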
TG "missile fired" events. In the TG game we looked for times when the player fired missiles
(Figure 2, column 1). We selected these events because they represent transitions into a unique task
phase, namely the phase of direct engagement with an enemy plane. During most of the TG game
playing time, the player's primary task involves actively using the radar and direction finder to locate
enemy targets; however, during the time when a missile is in flight the player's only task is to await
visual confirmation of the missile destroying its target. Figure 2, column 1, row a illustrates one of
the "missile fired" events with captured video frames at -1500ms, 0ms, and +1500ms relative to the
event time. Row b uses one of the 30 TG clips to show how the event times represent transitions in a
continuous signal (number of missiles fired); a -5s/+5s window around each event is highlighted by
the shaded background regions. These windows then propagate through our model-based analysis,
where we compare the eye position traces (row c) with the maps predicted by the BU salience (row f)
and TD relevance (row g) to generate a continuous sequence of NSS values for each model (row d).
Finally, all of the 658 event windows are pooled and we compute the average NSS value along
with a 98% confidence interval at each time point in the window, giving event-locked template NSS
signatures for the "missile fired" event type (row e). Those signatures show a strong divergence in
the predictiveness of the BU and TD models: outside the event window, both models are significantly
but weakly predictive of observers' eye positions, with NSS values around 0.3, while inside the event
window the BU NSS score increases to an NSS value around 1.0, while the TD NSS score drops
below zero for several seconds. We believe this reflects the task phase transition. In general, the
TD model has learned that the radar screen and direction finder (toward the bottom left of the game
screens) are usually the relevant locations, as illustrated by the elevated activity at those locations
in the sample TD maps in row g. Most of the time, that is indeed a good prediction of eye position,
reflected by the fact that the TD NSS scores are typically higher than the BU NSS scores outside the
event window. However, within the event window, players shift their attention away from the target
search task to instead follow the salient objects on the display (enemy target, the missile in flight),
which is reflected in the transient upswing in BU NSS scores.
TG ?target destroyed? events. In the TG game we also considered times when enemy targets were
destroyed (Figure 2, column 2). Like the "missile fired" events, these represent transitions between task phases, but whereas the "missile fired" events represented transitions from the enemy target search phase into a direct engagement phase, the "target destroyed" events represent the reverse transition;
once the player sees that the enemy target has been destroyed, he or she can quickly begin searching
the radar and direction finder for the next enemy target to engage. This is reflected in the sample
frames shown in Figure 2, column 2, row a, where leading up to the event (at -600ms and 0ms) the
player is watching the enemy target, but by +600ms after the event the player has switched back to
looking at the direction finder to find a new target. The analysis proceeds as before, using -5s/+5s
windows around each of the 328 events to generate average event-locked NSS signatures for the two
models (row e). These signatures represent the end of the direct engagement phase whose beginning
was represented by the ?missile fired? events; here, the BU NSS score reaches an even higher peak
of around 1.75 within 50ms after the target being destroyed, and then quickly drops to almost zero
by 600ms after the event. Conversely, the TD NSS score is below zero leading up to the event, but
then quickly rebounds after the event and transiently goes above its baseline level. Again, we believe
these characteristic NSS traces reflect the observer's task transitions.
NFSU "start speed-up" events. In the NFSU game, we considered times at which the player just
begins recovering from a crash (Figure 2, column 3); players' task is generally to drive as fast as
possible while avoiding obstacles, but when players inevitably crash they must transiently shift to
a task of trying to recover from the crash. The general driving task typically involves inspecting
the horizon line and focus of expansion for oncoming obstacles, while the crash-recovery task
typically involves examining the foreground scene to determine how to get back on course. To
automatically identify crash recovery phases, we extracted the speedometer value from each video
game frame to form a continuous speedometer history (Figure 2, column 3, row b); we identified
"start speed-up" events as upward-turning zero crossings in the acceleration, represented again by
shaded background bars in the figure. Again we computed average event-locked NSS signatures for
the BU and TD models from -5s/+5s windows around each of the 522 events, giving the traces in
row e. These traces reveal a significant drop in TD NSS scores during the event window, but no
significant change in BU NSS scores. The drop in TD NSS scores likely reflects the players' shift of
attention away from the usual relevant locations (horizon line, focus of expansion) and toward other
regions relevant to the crash-recovery task. However, the lack of change in BU NSS scores indicates
that the locations attended during crash recovery were neither more nor less salient than locations
attended in general; together, these results suggest that during crash recovery players' attention is
more strongly driven by some influence that is not captured well by either of the current BU and TD
models.
5 Event detectors
Having seen that critical game events are linked with highly significant signatures in the time course
of BU and TD model predictiveness, we next asked whether these signatures could be used in turn
to predict the events themselves. To address this question, we built event-locked "detector" templates
from three sources (see Figure 1): (1) the raw BU and TD prediction maps (which carry explicit
information only from the visual input image); (2) the raw eye position traces (which carry explicit
information only from the player's behavior); and (3) the BU and TD NSS scores, which represent
a fusion of information from the image (BU and TD maps) and from the observer (eye position).
For each of these detector types and for each event type, we compute event-locked signatures just as
described in the previous section. For the BU and TD NSS scores, this is exactly what is represented
in Figure 2, row e, and for the other two detector types the analysis is analogous. For the BU and
TD prediction maps, we compute the event-locked average BU and TD prediction map at each time
point within the event window, and for the eye position traces we compute the event-locked average
x and y eye position coordinate at each time point. Thus we have signatures for how each of these
detector signals is expected to look during the critical event intervals.
Next, we go back to the original detector traces (that is, the raw eye position traces as in Figure 2
row c, or the raw BU and TD maps as in rows f and g, or the raw BU and TD NSS scores as in
row d). At each point in those original traces, we compute the correlation coefficient between a
temporal window in the trace and the corresponding event-locked detector signature. To combine
each pair of correlation coefficients (from BU and TD maps, or from BU and TD NSS, or from x
and y eye position) into a single match strength, we scale the individual correlation coefficients to a
range of [0...1] and then multiply, to produce a soft logical "and" operation, where both components
must have high values in order to produce a high output:
BU,TD maps match strength = r(⟨BU⟩_event, BU) · r(⟨TD⟩_event, TD)    (1)
eye position match strength = r(⟨x⟩_event, x) · r(⟨y⟩_event, y)    (2)
BU,TD NSS match strength = r(⟨NSS_BU⟩_event, NSS_BU) · r(⟨NSS_TD⟩_event, NSS_TD),    (3)
where ⟨·⟩_event represents the event-locked template for that signal, and r(·, ·) represents the correlation
coefficient between the two sequences of values, rescaled from the natural [−1...1] range to a [0...1]
range. This yields continuous traces of match strength between the event detector templates and the
current signal values, for each video game frame in the data set.
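A minimal sketch of this computation (array-based signals and function names are assumptions; not the authors' code):

```python
import numpy as np

def match_strength(sig_a, sig_b, tmpl_a, tmpl_b):
    """Eqs. (1)-(3): correlate a sliding window of each trace with its
    event-locked template, rescale each r from [-1, 1] to [0, 1], multiply."""
    L, out = len(tmpl_a), np.zeros(len(sig_a))
    for t in range(len(sig_a) - L + 1):
        ra = np.corrcoef(sig_a[t:t + L], tmpl_a)[0, 1]
        rb = np.corrcoef(sig_b[t:t + L], tmpl_b)[0, 1]
        out[t] = ((ra + 1) / 2) * ((rb + 1) / 2)   # soft logical AND
    return out
```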
Finally, we adopt a signal detection approach. For each event type, we label every video frame as
"during event" if it falls within a -500ms/+500ms window around the event instant, and label it as
"during non-event" otherwise. Then we ask how well the match strengths can predict the label,
for each of the three detector types (BU and TD maps alone, eye position alone, or BU and TD
NSS). Figure 3 shows the results using several signal detection metrics. Each row represents one
of the three event types ("missile fired," "target destroyed," and "start speed-up"). The first column
(panels a, d, and g) shows the histograms of the match strength values during events and during non-events, for each of the three detector types; this gives a qualitative sense for how well each detector
can distinguish events from non-events. The strongest separation between events and non-events is
clearly obtained by the BU&TD NSS and eye position detectors for the "missile fired" and "target destroyed" events. Panels b, e, and h show ROC curves for each detector type and event type, along
with values for area-under-the-curve (AUC) and d-prime (d′); panels c, f, and i show precision/recall
curves with values for the maximum F1 measure along the curve (F1 = 2·p·r/(p + r), where
p and r represent precision and recall). Each metric reflects the same qualitative trends. The highest
scores overall occur for "target destroyed" events, followed by "missile fired" and "start speed-up" events. Within each event type, the highest scores are obtained by the BU&TD NSS detector
(representing fused image/behavioral information), followed by the eye position detector (behavioral
information only) and then the BU&TD maps detector (image information only).
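A sketch of how such metrics could be computed from a match-strength trace and the frame labels (the paper does not give its exact procedure; the d′-from-AUC relation below assumes equal-variance Gaussians):

```python
import numpy as np
from scipy.stats import norm

def detection_metrics(strength, labels):
    """AUC, d' and max F1 from a match-strength trace and binary frame labels
    (1 inside the -500ms/+500ms event windows, 0 elsewhere)."""
    order = np.argsort(-strength)
    y = np.asarray(labels)[order].astype(float)
    tp, fp = np.cumsum(y), np.cumsum(1 - y)
    tpr, fpr = tp / tp[-1], fp / fp[-1]
    auc = np.trapz(tpr, fpr)                  # area under the ROC curve
    dprime = np.sqrt(2.0) * norm.ppf(auc)     # one standard d' estimate
    prec = tp / np.maximum(tp + fp, 1e-12)
    f1 = np.max(2 * prec * tpr / np.maximum(prec + tpr, 1e-12))
    return auc, dprime, f1
```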
6 Discussion and Conclusion
Our contributions here are twofold: First, we reported several instances in which the degree of correspondence between computational models of attention and human eye position varies systematically
as a function of the current task phase. This finding suggests a direct means for integrating low-level
computational models of visual attention with higher-level models of general cognition and task
performance: the current task state could be linked through a weight matrix to determine the degree
to which competing low-level signals may influence overall system behavior.
Second, we reported that variations in the predictive strength of the salience and relevance models
are systematic enough that the signals can be used to form template-based detectors of the key game
events. Here, the detection is based on signals that represent a fusion of image-derived information
(salience/relevance maps) with observer-derived behavior (eye position), and we found that such
a combined signal is more powerful than a signal based on image-derived or observer-derived information alone. For event-detection or object-detection applications, this approach may have the
advantage of being more generally applicable than a pure computer vision approach (which might
require development of algorithms specifically tailored to the object or event of interest), by virtue
of its reliance on human/model information fusion. Conversely, the approach of deriving human
behavioral information only from eye movements has the advantage of being less invasive and cumbersome than other neurally-based event-detection approaches using EEG or fMRI [13]. Further,
although an eye tracker's x/y traces amount to less raw information than EEG's dozens of leads
or fMRI's 10,000s of voxels, the eye-tracking signals also contain a denser and less redundant representation of cognitive information, as they are a manifestation of whole-brain output. Together,
these advantages could make our proposed method a useful approach in a number of applications.
References
[1] L. Itti and P. Baldi. A principled approach to detecting surprising events in video. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 631–637, San Diego, CA, Jun 2005.
[2] V. Navalpakkam and L. Itti. Modeling the influence of task on attention. Vision Research, 45(2):205–231, January 2005.
[3] A. Torralba, A. Oliva, M.S. Castelhano, and J.M. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review, 113(4):766–786, October 2006.
[4] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, November 1998.
[5] D. Parkhurst, K. Law, and E. Niebur. Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1):107–123, 2002.
[6] R.J. Peters, A. Iyer, L. Itti, and C. Koch. Components of bottom-up gaze allocation in natural images. Vision Research, 45(18):2397–2416, 2005.
[7] R. Carmi and L. Itti. Visual causes versus correlates of attentional selection in dynamic scenes. Vision Research, 46(26):4333–4345, Dec 2006.
[8] R.J. Peters and L. Itti. Computational mechanisms for gaze direction in interactive visual environments. In Proc. ACM Eye Tracking Research and Applications, pages 27–32, Mar 2006.
[9] R.J. Peters and L. Itti. Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007), Minneapolis, MN, Jun 2007.
[10] M. Hayhoe and D. Ballard. Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4):188–194, April 2005.
[11] J. Theeuwes, A.F. Kramer, S. Hahn, D.E. Irwin, and G.J. Zelinsky. Influence of attentional capture on oculomotor control. Journal of Experimental Psychology: Human Perception and Performance, 25(6):1595–1608, December 1999.
[12] W. Einhauser, W. Kruse, K.P. Hoffmann, and P. Konig. Differences of monkey and human overt attention under natural conditions. Vision Research, 46(8-9):1194–1209, April 2006.
[13] A.D. Gerson, L.C. Parra, and P. Sajda. Cortically coupled computer vision for rapid image search. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2):174–179, June 2006.
2,604 | 3,361 | CPR for CSPs: A Probabilistic Relaxation of
Constraint Propagation
Luis E. Ortiz
ECE Dept, Univ. of Puerto Rico, Mayagüez, PR 00681-9042
[email protected]
Abstract
This paper proposes constraint propagation relaxation (CPR), a probabilistic approach to classical constraint propagation that provides another view on the whole
parametric family of survey propagation algorithms SP(ρ). More importantly, the
approach elucidates the implicit, but fundamental, assumptions underlying SP(ρ),
thus shedding some light on its effectiveness and leading to applications beyond
k-SAT.
1 Introduction
Survey propagation (SP) is an algorithm for solving k-SAT recently developed in the physics community [1, 2] that exhibits excellent empirical performance on "hard" instances. To understand the
has concentrated on establishing connections to belief propagation (BP) [4], a well-known approximation method for computing posterior probabilities in probabilistic graphical models. Instead, this
paper argues that it is perhaps more natural to establish connections to constraint propagation (CP),
another message-passing algorithm tailored to constraint satisfaction problems (CSPs) that is wellknown in the AI community. The ideas behind CP were first proposed by Waltz [5] 1 Yet, CP has
received considerably less attention than BP lately.
This paper reconnects BP to CP in the context of CSPs by proposing a probabilistic relaxation
of CP that generalizes it. Through the approach, it is easy to see the exact, implicit underlying
assumptions behind the entire family of survey propagation algorithms SP(?). (Here, the approach
is presented in the context of k-SAT; it will be described in full generality in a separate document.)
In short, the main point of this paper is that survey propagation algorithms are instances of a natural
generalization of constraint propagation and have simple interpretations in that context.
2 Constraint Networks and Propagation
This section presents a brief introduction to the graphical representation of CSPs and CP, and concentrates on the aspects that are relevant to this paper.²
A constraint network (CN) is the graphical model for CSPs used in the AI community. Of interest
here is the CN based on the hidden transformation. (See Bacchus et al. [9] for more information
on the different transformations and their properties.) It has a bipartite graph where every variable
and constraint is each represented by a node or vertex in the graph and there is an edge between a
variable i and a constraint a if and only if a is a function of i (see figure 1). From now on, a CN with
a tree graph is referred to as a tree CN, and a CN with an arbitrary graph as an arbitrary CN.
¹ See also Pearl [4], section 4.1.1, and the first paragraph of section 4.1.2.
² Please refer to Russell and Norvig [6] for a general introduction, Kumar [7] for a tutorial and Dechter [8] for a more comprehensive treatment of these topics and additional references.
Constraint propagation is typically used as part of a depth-first search algorithm for solving CSPs. The search algorithm works by extending partial assignments, usually one variable at a time, during the search. The algorithm is called backtracking search because one can backtrack and change the value of a previously assigned variable when the search reaches an illegal assignment.

Figure 1: The graph of the constraint network corresponding to the 3-SAT formula f(x) = (x_1 ∨ x_2 ∨ x_3) ∧ (x_2 ∨ x̄_3 ∨ x_4), which has four variables and two clauses; the first and second clause are denoted in the figure by a and b, respectively. Following the convention of the SP community, clause and variable nodes are drawn as boxes and circles, respectively; also, if a variable appears as a negative literal in a clause (e.g., variable 3 in clause b), the edge between them is drawn as a dashed line.

CP is often applied either as a preprocessing step or after an assignment to a variable is made. The objective is to reduce the domains of the variables by making them locally consistent with the current partial assignment. The propagation process starts with the belief that for every value assignment v_i in the domain of each variable i there exists a solution with v_i assigned to i. The process then attempts to correct this a priori belief by locally propagating constraint information. It is well-known that CP, unlike BP, always converges, regardless of the structure of the CN graph. This is because no possible solution is ignored at the start and none ever removed during the process. In the end, CP produces potentially reduced variable domains that are in fact locally consistent. In turn, the
resulting search space is at worst no larger than the original but potentially smaller while still containing all possible solutions. The computational efficiency and effectiveness of CP in practice have made it a popular algorithm in the CSP community.
3 Terminology and Notation
Let V(a) be the set of variables that appear in constraint a and C(i) the set of constraints in which variable i appears. Let also V_i(a) ≡ V(a) \ {i} and C_a(i) ≡ C(i) \ {a}. In k-SAT, the constraints are the clauses, each variable is binary, with domain {0, 1}, and a solution corresponds to a satisfying assignment. If i ∈ V(a), denote by s_{a,i} the value assignment to variable i that guarantees the satisfiability of clause a; and denote the other possible assignment to i by u_{a,i}. Finally, let C^s_a(i) and C^u_a(i) be the set of clauses in C_a(i) where variable i appears in the same and different literal form as it does in clause a, respectively.

The k-SAT formula under consideration is denoted by f. It is convenient to introduce notation for formulae associated to the CN that results from removing variables or constraints from f. Let f_a be the function that results from removing clause a from f (see figure 2), and similarly, abusing notation, let f_i be the function that results from removing variable i from f. Let f_{a→i} be the function that corresponds to the connected component of the CN graph for f_a that contains variable i ∈ V(a), and let f_{i→a} be the function that corresponds to the connected component of the CN graph for f_i that contains a ∈ C(i). (Naturally, if node a is not a separator of the CN graph for f, f_a has a single connected component, which leads to f_{a→i} = f_a; similarly for f_i.)

Figure 2: The graph inside the continuous curve is the CN graph for the formula f_b that results from removing clause b from f. The graph inside the dashed curve is the CN graph for f_{b→2}, which corresponds to the formula for the connected component of the CN graph for f_b that contains variable 2.
It is convenient to use a simple, if perhaps unusual, representation of sets in order to track the
domains of the variables during the propagation process. Each subset A of a set S of size m is
represented as a bit array of m elements where component k in the array is set to 1 if k is in A and
to 0 otherwise. For instance, if S = {0, 1}, then the array [00] represents ∅, and similarly, [01], [10] and [11] represent {0}, {1} and {0, 1}, respectively.
It is also useful to introduce the concept of (globally) consistent domains of variables and SAT functions. Let S_f = {x | x satisfies f} be the set of assignments that satisfy f. Given a complete assignment x, denote by x_{−i} the assignments to all the variables except i; thus, x = (x_1, . . . , x_n) = (x_i, x_{−i}). Let the set W_i be the consistent domain of variable i in f if W_i = {x_i | x = (x_i, x_{−i}) ∈ S_f for some x_{−i}}; that is, W_i contains the set of all possible values that variable i can take in an assignment that satisfies f. Let the set W be the consistent domain of f if W = ×_{i=1}^n W_i and, for all i, W_i is the consistent domain of variable i in f.

Finally, some additional terminology classifies variables of a SAT function given a satisfying assignment. Given a function f and a satisfying assignment x, let variable i be fixed if changing only its assignment x_i in x does not produce another satisfying assignment for f; and be free otherwise.
4 Propagation Algorithms for Satisfiability
Constraint Propagation. In CP for k-SAT, the message M_{a→i} that clause a sends to variable i is an array of binary values indexed by the elements of the domain of i; similarly, for the message M_{i→a} that variable i sends to clause a. Intuitively, for all x_i ∈ {0, 1}, M_{i→a}(x_i) = 1 if and only if assigning value x_i to variable i is "ok" with all clauses other than a. Formally, M_{i→a}(x_i) = 1 if and only if f_{a→i} has a satisfying assignment with x_i assigned to variable i (or in other words, x_i is in the consistent domain of i in f_{a→i}). Similarly, M_{a→i}(x_i) = 1 if and only if clause a is "ok" with assigning value x_i to variable i; or formally, M_{a→i}(x_i) = 1 if and only if f_{i→a} has a satisfying assignment with x_i assigned to variable i, or assigning x_i to variable i by itself satisfies a. It is convenient to denote M_{i→a}(x_i) and M_{a→i}(x_i) by M^{x_i}_{i→a} and M^{x_i}_{a→i}, respectively. In addition, M^{s_{a,i}}_{i→a}, M^{u_{a,i}}_{i→a}, M^{s_{a,i}}_{a→i} and M^{u_{a,i}}_{a→i} are simply denoted by M^s_{i→a}, M^u_{i→a}, M^s_{a→i} and M^u_{a→i}, respectively.
In summary, we can write CP for k-SAT as follows.
• Messages that clause a sends to variable i:
  M^{x_i}_{a→i} = 1 if and only if x_i = s_{a,i} or there exists j ∈ V_i(a) s.t. M^s_{j→a} = 1.    (1)
• Messages that variable i sends to clause a:
  M^{x_i}_{i→a} = 1 if and only if for all b ∈ C_a(i), M^{x_i}_{b→i} = 1.    (2)
It is convenient to express CP mathematically as follows.
• Messages that clause a sends to variable i:
  M^{x_i}_{a→i} = 1,                                  if x_i = s_{a,i},
  M^{x_i}_{a→i} = 1 − ∏_{j∈V_i(a)} (1 − M^s_{j→a}),   if x_i = u_{a,i}.
• Messages that variable i sends to clause a: M^{x_i}_{i→a} = ∏_{b∈C_a(i)} M^{x_i}_{b→i}.
In order to guarantee convergence, the message values in CP are initialized as M^s_{i→a} = 1, M^u_{i→a} = 1, M^u_{a→i} = 1, and naturally, M^s_{a→i} = 1. This initialization encodes the a priori belief that every
assignment is a solution. CP attempts to "correct" or update this belief through the local propagation
of constraint information. In fact, the expressions in CP force the messages to be locally consistent.
By being initially conservative about the consistent domains, no satisfying assignment is discarded
during the propagation process.
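To make the update rules concrete, here is a minimal runnable sketch of these CP updates for k-SAT (not from the paper; the clause representation and names are my own):

```python
def cp_ksat(clauses, iters=50):
    """clauses: list of tuples of signed literals, e.g. (1, -3, 4)."""
    var_clauses = {}                      # which clauses mention each variable
    for a, cl in enumerate(clauses):
        for l in cl:
            var_clauses.setdefault(abs(l), []).append(a)
    # all messages start at 1: a priori, every assignment is a solution
    M_s = {(i, a): 1 for i, cs in var_clauses.items() for a in cs}
    M_u = {(a, i): 1 for i, cs in var_clauses.items() for a in cs}
    lit = lambda i, cl: next(l for l in cl if abs(l) == i)
    for _ in range(iters):
        for a, cl in enumerate(clauses):  # rule (1): M^u_{a->i}
            for l in cl:
                i = abs(l)
                M_u[(a, i)] = int(any(M_s[(abs(m), a)] for m in cl if abs(m) != i))
        for i, cs in var_clauses.items():  # rule (2): M^s_{i->a}
            for a in cs:
                # only C^u_a(i) matters: for same-form clauses b, M^s_{b->i} = 1 always
                opp = [b for b in cs if b != a
                       and lit(i, clauses[b]) != lit(i, clauses[a])]
                M_s[(i, a)] = int(all(M_u[(b, i)] for b in opp))
    return M_s, M_u
```

For the formula of figure 1, the call would be cp_ksat([(1, 2, 3), (2, -3, 4)]); with no variables yet assigned, all messages stay at 1, which illustrates the remark below that CP only becomes effective for k-SAT once partial assignments impose boundary conditions.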
Once CP converges, for each variable i, its locally-consistent domain becomes {x_i | ∏_{a∈C(i)} M^{x_i}_{a→i} = 1} = {x_i | ∏_{a∈C(i): x_i = u_{a,i}} M^u_{a→i} = 1} ∈ 2^{{0,1}}. For general CSPs,
CP is usually very effective because it can significantly reduce the original domain of the variables,
leading to a smaller search space of possible assignments. It should be noted that in the particular
case of k-SAT with arbitrary CNs, CP is usually only effective after some variables have already
been assigned during the search, because those (partial) assignments can lead to "boundary
conditions." Without such boundary conditions, however, CP never reduces the domain of the
variables in k-SAT, as can be easily seen from the expressions above.
On the other hand, when CP is applied to tree CNs, it exhibits additional special properties. For
example, convergence is actually guaranteed regardless of how the messages are initialized, because
of the boundary conditions imposed by the leaves of the tree. Also, the final messages are in fact
globally consistent (i.e., all the messages are consistent with their definition). Therefore, the locally-consistent domains are in fact the consistent domains. Whether the formula is satisfiable, or not,
can be determined immediately after applying CP. If the formula is not satisfiable, the consistent
domains will be empty sets. If the formula is in fact satisfiable, applying depth-first search always
finds a satisfying assignment without the need to backtrack.
We can express CP in a way that looks closer to SP and BP. Using the reparametrization η_{a→i} = 1 − M^u_{a→i}, we get the following expression of CP.
• Message that clause a sends to variable i: η_{a→i} = ∏_{j∈V_i(a)} (1 − M^s_{j→a}).
• Message that variable i sends to clause a: M^s_{i→a} = ∏_{b∈C^u_a(i)} (1 − η_{b→i}).
Survey Propagation. Survey propagation has become a very popular propagation algorithm for k-SAT. It was developed in the physics community by Mézard et al. [2]. The excitement around SP comes from its excellent empirical performance on hard satisfiability problems; that is, k-SAT formulae with a ratio α of the number of clauses to the number of variables near the so-called satisfiability threshold α_c.
The following is a description of an SP-inspired family of message-passing procedures, parametrized by ρ ∈ [0, 1]. It is often denoted by SP(ρ), and contains BP (ρ = 0) and (pure) SP (ρ = 1).
• Message that clause a sends to variable i:
  η_{a→i} = ∏_{j∈V_i(a)} [ Π^u_{j→a} / (Π^u_{j→a} + Π^s_{j→a} + Π^*_{j→a}) ]
• Messages that variable i sends to clause a:
  Π^u_{i→a} = [1 − ρ ∏_{b∈C^u_a(i)} (1 − η_{b→i})] ∏_{b∈C^s_a(i)} (1 − η_{b→i})
  Π^s_{i→a} = ∏_{b∈C^u_a(i)} (1 − η_{b→i}) [1 − ∏_{b∈C^s_a(i)} (1 − η_{b→i})]
  Π^*_{i→a} = ∏_{b∈C^u_a(i)} (1 − η_{b→i}) ∏_{b∈C^s_a(i)} (1 − η_{b→i}) = ∏_{b∈C_a(i)} (1 − η_{b→i})
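A sketch of one synchronous sweep of these updates (my own illustration, not the authors' code; it assumes the signed-literal clause representation used in the CP sketch above):

```python
def sp_rho_sweep(clauses, var_clauses, eta, rho):
    """eta maps (a, i) to the current message in [0, 1];
    var_clauses[i] lists the clauses containing variable i."""
    lit = lambda i, cl: next(l for l in cl if abs(l) == i)
    new_eta = {}
    for a, cl in enumerate(clauses):
        for l in cl:
            i, term = abs(l), 1.0
            for m in cl:
                j = abs(m)
                if j == i:
                    continue
                ps = pu = 1.0          # products over C^s_a(j) and C^u_a(j)
                for b in var_clauses[j]:
                    if b == a:
                        continue
                    if lit(j, clauses[b]) == m:   # same literal form as in a
                        ps *= 1.0 - eta[(b, j)]
                    else:                          # opposite literal form
                        pu *= 1.0 - eta[(b, j)]
                pi_u = (1.0 - rho * pu) * ps
                pi_s = pu * (1.0 - ps)
                pi_star = pu * ps
                den = pi_u + pi_s + pi_star
                term *= pi_u / den if den > 0.0 else 0.0
            new_eta[(a, i)] = term
    return new_eta
```

Setting rho=0 or rho=1 recovers the BP and pure-SP members of the family, respectively.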
SP was originally derived via arguments and concepts from physics. A simple derivation based on a
probabilistic interpretation of CP is given in the next section of the paper. The derivation presented
here elucidates the assumptions that SP algorithms make about the satisfiability properties and structure of k-SAT formulae. However, it is easy to establish strong equivalence relations between the
different propagation algorithms even at the basic level, before introducing the probabilistic interpretation (details omitted).
5 A Probabilistic Relaxation of Constraint Propagation for Satisfiability
The main idea behind constraint propagation relaxation (CPR) is to introduce a probabilistic model for the k-SAT formula and view the messages as random variables in that model. If the formula f has n variables, the sample space Ω = (2^{{0,1}})^n is the set of n-tuples whose components are subsets of the set of possible values that each variable i can take (i.e., subsets of {0, 1}). The "true probability law" P_f of a SAT formula f that corresponds to CP is defined in terms of the consistent domain of f: for all W ∈ Ω,
  P_f(W) = 1, if W is the consistent domain of f,
  P_f(W) = 0, otherwise.
Clearly, if we could compute the consistent domains of the remaining variables after each variable assignment during the search, there would be no need to backtrack. But, while it is easy to compute consistent domains for tree CNs, it is actually hard in general for arbitrary CNs. Thus, it is generally hard to compute P_f. (CNs with graphs of bounded tree-width are a notable exception.)
However, the probabilistic interpretation will allow us to introduce "bias" on Ω, which leads to a heuristic for dynamically ordering both the variables and their values during search. As shown in this section, it turns out that for arbitrary CNs, survey propagation algorithms attempt to compute different "approximations" or "relaxations" of P_f by making different assumptions about its "probabilistic structure."
Let us now view each message M^s_{a→i}, M^u_{a→i}, M^s_{i→a}, and M^u_{i→a} for each variable i and clause a as a (Bernoulli) random variable in some probabilistic model with sample space Ω and a, now arbitrary, probability law P.³ Formally, for each clause a, variable i and possible assignment value x_i ∈ {0, 1}, we define
  M^{x_i}_{a→i} ∼ Bernoulli(p^{x_i}_{a→i}) and M^{x_i}_{i→a} ∼ Bernoulli(p^{x_i}_{i→a}),
where p^{x_i}_{a→i} = P(M^{x_i}_{a→i} = 1) and p^{x_i}_{i→a} = P(M^{x_i}_{i→a} = 1). This is a distribution over all possible subsets (i.e., the power set) of the domain of each variable, not just over the variable's domain itself. Also, clearly we do not need to worry about p^s_{a→i} because it is always 1, by the definition of M^s_{a→i}.
The following is a description of how we can use those probabilities during search. In the SP community, the resulting heuristic search is called "decimation" [1, 2]. If we believe that P "closely approximates" P_f, and know the probability p^{x_i}_i ≡ P(M^{x_i}_{a→i} = 1 for all a ∈ C(i)) that x_i is in the consistent domain for variable i of f, for every variable i, clause a and possible assignment x_i, we can use them to dynamically order both the variables and the values they can take during search. Specifically, we first compute p^1_i = P(M^u_{a→i} = 1 for all a ∈ C^−(i)) and p^0_i = P(M^u_{a→i} = 1 for all a ∈ C^+(i)) for each variable i, where C^+(i) and C^−(i) are the sets of clauses where variable i appears as a positive and a negative literal, respectively. Using those probability values, we then compute what the SP community calls the "bias" of i: |p^1_i − p^0_i|. The variable to assign next is the one with the largest bias.⁴ We would set that variable to the value of largest probability; for instance, if variable i has the largest bias, then we set i next, to 1 if p^1_i > p^0_i, and to 0 if p^1_i < p^0_i. The objective is then to compute or estimate those probabilities.
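A sketch of this variable/value-ordering step, given (approximately) converged η messages (hypothetical helper names; the closed-form products for p^1_i and p^0_i are the ones derived under assumptions 6 and 7 at the end of this section):

```python
import math

def pick_next_variable(eta, C_plus, C_minus):
    """C_plus[i]/C_minus[i]: clauses where variable i appears as a
    positive/negative literal. Returns the most biased variable and its value."""
    best = None
    for i in set(C_plus) | set(C_minus):
        p1 = math.prod(1.0 - eta[(a, i)] for a in C_minus.get(i, ()))
        p0 = math.prod(1.0 - eta[(a, i)] for a in C_plus.get(i, ()))
        bias = abs(p1 - p0)
        if best is None or bias > best[0]:
            best = (bias, i, 1 if p1 > p0 else 0)
    _, i, value = best
    return i, value   # assign `value` to variable i, simplify f, and repeat
```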
The following are (independence) assumptions about the random variables (i.e., messages) used in this section. The assumptions hold for tree CNs and, as formally shown below, are inherent to the survey propagation process.
Assumption 1. For each clause a and variable i, the random variables M^s_{j→a} for all j ∈ V_i(a) are independent.
Assumption 2. For each clause a and variable i, the random variables M^u_{b→i} for all clauses b ∈ C^u_a(i) are independent.
Assumption 3. For each clause a and variable i, the random variables M^u_{b→i} for all clauses b ∈ C^s_a(i) are independent.
Without any further assumptions, we can derive the following, by applying assumption 1 and the expression for M^u_{a→i} that results from 1:
  p^u_{a→i} = P(M^u_{a→i} = 1) = 1 − ∏_{j∈V_i(a)} P(M^s_{j→a} = 0) = 1 − ∏_{j∈V_i(a)} (1 − p^s_{j→a}).
Similarly, by assumption 2 and the expression for M^s_{i→a} that results from 2, we derive
  p^s_{i→a} = P(M^s_{i→a} = 1) = ∏_{b∈C^u_a(i)} P(M^u_{b→i} = 1) = ∏_{b∈C^u_a(i)} p^u_{b→i}.
Using the reparametrization η_{a→i} = P(M^u_{a→i} = 0) = 1 − p^u_{a→i}, we obtain the following message-passing procedure.
³ Given clause a and variable i of SAT formula f, let D^j_{a→i} be the (globally) consistent domain of f_{a→i} for variable j. The random variables corresponding to the messages from variable i to clause a are defined as M^{x_i}_{i→a}(W) = 1 iff W_j ⊆ D^j_{a→i} for every variable j of f_{a→i}, and x_i ∈ D^i_{a→i}. The other random variables are then defined as M^s_{a→i}(W) = 1 and M^u_{a→i}(W) = 1 − ∏_{j∈V_i(a)} (1 − M^s_{j→a}(W)) for all W.
⁴ For both variable and value ordering, we can break ties uniformly at random. Also, the often-used description of SP(ρ) sets a fraction of the variables that remained unset during search. While clearly this speeds up the process of getting a full assignment, the effect that heuristic might have on the completeness of the search procedure is unclear, even in practice.
• Message that clause a sends to variable i: η_{a→i} = ∏_{j∈V_i(a)} (1 − p^s_{j→a}).
• Message that variable i sends to clause a: p^s_{i→a} = ∏_{b∈C^u_a(i)} (1 − η_{b→i}).
We can then use assumption 3 to estimate p^u_{i→a} as ∏_{b∈C^s_a(i)} (1 − η_{b→i}).
Note that this message-passing procedure is exactly "classical" CP if we initialize η_{a→i} = 0 and p^s_{i→a} = 1 for all variables i and clause a. However, the version here allows the messages to be in [0, 1]. At the same time, for tree CNs, this algorithm is the same as classical CP (i.e., produces the same result), regardless of how the messages η_{a→i} and p^s_{i→a} are initialized. In fact, in the tree case, the final messages uniquely identify P = P_f.
Making Assumptions about Satisfiability. Let us make the following assumption about the "probabilistic satisfiability structure" of the k-SAT formula.
Assumption 4. For some ρ ∈ [0, 1], for each clause a and variable i,
  P(M^s_{i→a} = 0, M^u_{i→a} = 0) = (1 − ρ) P(M^s_{i→a} = 1, M^u_{i→a} = 1).
For ρ = 1, the last assumption essentially says that f_{a→i} has a satisfying assignment; i.e., P(M^s_{i→a} = 0, M^u_{i→a} = 0) = 0. For ρ = 0, it essentially says that the likelihood that f_{a→i} does not have a satisfying assignment is the same as the likelihood that f_{a→i} has a satisfying assignment where variable i is free. Formally, in this case, we have P(M^s_{i→a} = 0, M^u_{i→a} = 0) = P(M^s_{i→a} = 1, M^u_{i→a} = 1), which, interestingly, is equivalent to the condition P(M^s_{i→a} = 1) + P(M^u_{i→a} = 1) = 1.
Let us introduce a final assumption about the random variables associated to the messages from variables to clauses.
Assumption 5. For each clause a and variable i, the random variables M^s_{i→a} and M^u_{i→a} are independent.
Note that assumptions 2, 3 and 5 hold (simultaneously) if and only if for each clause a and variable i, the random variables M^u_{b→i} for all clauses b ∈ C_a(i) are independent.
The following theorem is the main result of this paper.
Theorem 1. (Sufficient Assumptions) Let assumptions 1, 2 and 3 hold. The message-passing procedure that results from CPR as presented above is
1. belief propagation (i.e., SP(0)), if assumption 4, with ρ = 0, holds, and
2. a member of the family of survey propagation algorithms SP(ρ), with 0 < ρ ≤ 1, if assumption 4, with the given ρ, and assumption 5 hold.
These assumptions are also necessary in a strong sense (details omitted). Assumptions 1, 2, 3, and even 5 might be obvious to some readers, but assumption 4 might not be, and it is essential.
Proof. As in the last subsection, assumption 1 leads to p^u_{a→i} = 1 − ∏_{j∈V_i(a)} (1 − p^s_{j→a}), while assumptions 2 and 3 lead to p^s_{i→a} = ∏_{b∈C^u_a(i)} p^u_{b→i} and p^u_{i→a} = ∏_{b∈C^s_a(i)} p^u_{b→i}.
Note also that assumption 4 is equivalent to p^s_{i→a} + p^u_{i→a} − ρ P(M^s_{i→a} = 1, M^u_{i→a} = 1) = 1. This allows us to express
  P(M^s_{i→a} = 1) = p^s_{i→a} = p^s_{i→a} / [p^s_{i→a} + p^u_{i→a} − ρ P(M^s_{i→a} = 1, M^u_{i→a} = 1)],
which implies
  P(M^s_{i→a} = 0) = [p^u_{i→a} − ρ P(M^s_{i→a} = 1, M^u_{i→a} = 1)] / [p^u_{i→a} − ρ P(M^s_{i→a} = 1, M^u_{i→a} = 1) + p^s_{i→a}].
If ρ = 0, then the last expression simplifies to
  P(M^s_{i→a} = 0) = p^u_{i→a} / (p^u_{i→a} + p^s_{i→a}).
Using the reparametrization η_{a→i} ≡ P(M^u_{a→i} = 0) = 1 − p^u_{a→i}, Π^u_{i→a} ≡ P(M^u_{i→a} = 1) = p^u_{i→a}, and Π^s_{i→a} + Π^*_{i→a} ≡ P(M^s_{i→a} = 1) = p^s_{i→a}, leads to BP (i.e., SP(0)).
Otherwise, if 0 < ρ ≤ 1, then using the reparametrization η_{a→i} ≡ P(M^u_{a→i} = 0),
  Π^u_{i→a} ≡ P(M^u_{i→a} = 1) − ρ P(M^s_{i→a} = 1, M^u_{i→a} = 1)
            = P(M^s_{i→a} = 0, M^u_{i→a} = 1) + (1 − ρ) P(M^s_{i→a} = 1, M^u_{i→a} = 1),
  Π^s_{i→a} ≡ P(M^s_{i→a} = 1, M^u_{i→a} = 0), and
  Π^*_{i→a} ≡ P(M^s_{i→a} = 1, M^u_{i→a} = 1),
and applying assumption 5 leads to SP(ρ).
The following are some remarks that can be easily derived using CPR.
On the Relationship Between SP and BP. SP essentially assumes that every sub-formula f_{a→i} has a satisfying assignment, while BP assumes that for every clause a and variable i ∈ V(a), f_{a→i} is equally likely to have no satisfying assignment as to have a satisfying assignment in which variable i is free, as is easy to see from assumption 4. The parameter ρ just modulates the relative scaling of those two likelihoods. While the same statement about pure SP is not novel, the statement about BP, and more generally, the class SP(ρ) for 0 ≤ ρ < 1, seems to be.
On the Solutions of SAT formula f. Note that P_f may not satisfy all or any of the assumptions. Yet, satisfying an assumption imposes constraints on what P_f actually is and thus on the solution space of f. For example, if P_f satisfies assumption 4 for any ρ < 1, which includes BP when ρ = 0, and for all clauses a and variables i, then P_f(M^s_{i→a} = 0, M^u_{i→a} = 0) = P_f(M^s_{i→a} = 1, M^u_{i→a} = 1) = 0, and therefore either P_f(M^s_{i→a} = 1, M^u_{i→a} = 0) = 1 or P_f(M^s_{i→a} = 0, M^u_{i→a} = 1) = 1 holds, but not both of course. That implies f must have a unique solution!
On SP. This result provides additional support to previous informal conjectures as to why SP is so effective near the satisfiability threshold: SP concentrates all its efforts on finding satisfying assignments when they are scarce and "scattered" across the space of possible assignments. Thus, SP assumes that the set of satisfying assignments has in fact special structure.
To see that, note that assumptions 4, with ρ = 1, and 5 imply that P(M^s_{i→a} = 1, M^u_{i→a} = 0) = 0 or P(M^s_{i→a} = 0, M^u_{i→a} = 1) = 0 must hold. This says that in every assignment that satisfies f_{a→i}, variable i is either free or always has the same value assignment. This observation is relevant because it has been argued that as we approach the satisfiability threshold, the set of satisfying assignments decomposes into many "local" or disconnected subsets. It follows easily from the discussion here that SP assumes such a structure, therefore potentially making it most effective under those conditions (see Maneva et al. [3] for more information).
Similarly, it has also been empirically observed that SP is more effective for ρ close to, but strictly less than 1. The CPR approach suggests that such behavior might be because, with respect to any P that satisfies assumption 4, unlike pure SP, for such values of ρ < 1, SP(ρ) guards against the possibility that f_{a→i} is not satisfiable, while still being somewhat optimistic by giving more weight to the event that variable i is free in f_{a→i}. Naturally, BP, which is the case of ρ = 0, might be too pessimistic in this sense.
On BP. For BP (ρ = 0), making the additional assumption that the formula f_{a→i} is satisfiable (i.e., P(M^s_{i→a} = 0, M^u_{i→a} = 0) = 0) implies that there are no assignments with free variables (i.e., P(M^s_{i→a} = 1, M^u_{i→a} = 1) = 0). Therefore, the only possible consistent domain is the singleton {s_{a,i}} or {u_{a,i}} (i.e., P(M^s_{i→a} = 1, M^u_{i→a} = 0) + P(M^s_{i→a} = 0, M^u_{i→a} = 1) = 1). Thus, either 0 or 1 can possibly be a consistent value assignment, but not both. This suggests that BP is concentrating its efforts on finding satisfying assignments without free variables.
On Variable and Value Ordering. To complete the picture of the derivation of SP(ρ) via CPR, we need to compute p^0_i and p^1_i for all variables i to use for variable and value ordering during search. We can use the following, slightly stronger versions of assumptions 2 and 3 for that.
Assumption 6. For each variable i, the random variables M^u_{a→i} for all clauses a ∈ C^−(i) are independent.
Assumption 7. For each variable i, the random variables M^u_{a→i} for all clauses a ∈ C^+(i) are independent.
Using assumptions 6 and 7, we can easily derive that p^1_i = ∏_{a∈C^−(i)} (1 − η_{a→i}) and p^0_i = ∏_{a∈C^+(i)} (1 − η_{a→i}), respectively.
On Generalizations. The approach provides a general, simple and principled way to introduce
possibly uncertain domain knowledge into the problem by making assumptions about the structure
of the set of satisfying assignments and incorporating them through P. That can lead to more
effective propagation algorithms for specific contexts.
Related Work. Dechter and Mateescu [10] also connect BP to CP but in the context of the inference problem of assessing zero posterior probabilities. Hsu and McIlraith [11] give an intuitive
explanation of the behavior of SP and BP from the perspective of traditional local search methods.
They provide a probabilistic interpretation, but the distribution used there is over the biases.
Braunstein and Zecchina [12] showed that pure SP is equivalent to BP on a particular MRF over an extended domain on the variables of the SAT formula, which adds a so-called "joker" state. Maneva et al. [3] generalized that result by showing that SP(ρ) is only one of many families of algorithms that are equivalent to performing BP on a particular MRF. In both cases, one can easily interpret those MRFs as ultimately imposing a distribution over Ω, as defined here, where the joker state corresponds to the domain {0, 1}. Here, the only particular distribution explicitly defined is P_f, the "optimal" distribution. This paper does not make any explicit statements about any specific distribution P for which applying CPR leads to SP(ρ).
6 Conclusion
This paper strongly connects survey and constraint propagation. In fact, the paper shows how survey
propagation algorithms are instances of CPR, the probabilistic generalization of classical constraint
propagation proposed here. The general approach presented not only provides a new view on survey
propagation algorithms, which can lead to a better understanding of them, but can also be used to
easily develop potentially better algorithms tailored to specific classes of CSPs.
References
[1] A. Braunstein, M. Mézard, and R. Zecchina. Survey propagation: An algorithm for satisfiability. Random Structures and Algorithms, 27:201, 2005.
[2] M. Mézard, G. Parisi, and R. Zecchina. Analytic and Algorithmic Solution of Random Satisfiability Problems. Science, 297(5582):812–815, 2002.
[3] E. Maneva, E. Mossel, and M. J. Wainwright. A new look at survey propagation and its generalizations. Journal of the ACM, 54(4):2–41, July 2007.
[4] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[5] D. L. Waltz. Generating semantic descriptions from drawings of scenes with shadows. Technical Report 271, MIT AI Lab, Nov. 1972. PhD Thesis.
[6] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach, chapter 5, pages 137–160. Prentice Hall, second edition, 1995.
[7] V. Kumar. Algorithms for constraint-satisfaction problems: A survey. AI Magazine, 13(1):32–44, 1992.
[8] R. Dechter. Constraint Processing. Morgan Kaufmann, 2003.
[9] F. Bacchus, X. Chen, P. van Beek, and T. Walsh. Binary vs. non-binary constraints. AI, 140(1-2):1–37, Sept. 2002.
[10] R. Dechter and R. Mateescu. A simple insight into iterative belief propagation's success. In UAI, 2003.
[11] E. I. Hsu and S. A. McIlraith. Characterizing propagation methods for boolean satisfiability. In SAT, 2006.
[12] A. Braunstein and R. Zecchina. Survey propagation as local equilibrium equations. JSTAT, 2004.
Regulator Discovery from Gene Expression Time
Series of Malaria Parasites: a Hierarchical Approach
José Miguel Hernández-Lobato
Escuela Politécnica Superior
Universidad Autónoma de Madrid, Madrid, Spain
[email protected]
Tjeerd Dijkstra
Leiden Malaria Research Group
LUMC, Leiden, The Netherlands
[email protected]
Tom Heskes
Institute for Computing and Information Sciences
Radboud University Nijmegen, Nijmegen, The Netherlands
[email protected]
Abstract
We introduce a hierarchical Bayesian model for the discovery of putative regulators from gene expression data only. The hierarchy incorporates the knowledge
that there are just a few regulators that by themselves only regulate a handful
of genes. This is implemented through a so-called spike-and-slab prior, a mixture of Gaussians with different widths, with mixing weights from a hierarchical
Bernoulli model. For efficient inference we implemented expectation propagation. Running the model on a malaria parasite data set, we found four genes with
significant homology to transcription factors in an amoeba, one RNA regulator
and three genes of unknown function (out of the top ten genes considered).
1 Introduction
Bioinformatics provides a rich source for the application of techniques from machine learning. In particular, the elucidation of regulatory networks underlying gene expression has led to a cornucopia
of approaches: see [1] for review. Here we focus on one aspect of network elucidation, the identification of the regulators of the causative agent of severe malaria, Plasmodium falciparum. Several
properties of the parasite necessitate a tailored algorithm for regulator identification:
• In most species gene regulation takes place at the first stage of gene expression when a DNA template is transcribed into mRNA. This transcriptional control is mediated by specific transcription factors. Few specific transcription factors have been identified in Plasmodium based on sequence homology with other species [2, 3]. This could be due to Plasmodium possessing a unique set of transcription factors or due to other mechanisms of gene regulation, e.g. at the level of mRNA stability or post-transcriptional regulation.
• Compared with yeast, gene expression in Plasmodium is hardly changed by perturbations, e.g. by adding chemicals or changing temperature [4]. The biological interpretation of this finding is that the parasite is so narrowly adapted to its environment inside a red blood cell that it follows a stereotyped gene expression program. From a machine learning point of view, this finding means that network elucidation techniques relying on perturbations of gene expression cannot be used.
• Similar to yeast [5], data for three different strains of the parasite with time series of gene expression are publicly available [6]. These assay all of Plasmodium's 5,600 genes for about 50 time points. In contrast to yeast, there are no ChIP-chip data available and fewer than ten transcription factor binding motifs are known.
Together, these properties point to a vector autoregressive model making use of the gene expression
time series. The model should not rely on sequence homology information but it should be flexible
enough to integrate sequence information in the future. This points to a Bayesian model as favored
approach.
2 The model
We start with a semi-realistic model of transcription based on Michaelis-Menten kinetics [1] and
subsequently simplify to obtain a linear model. Denoting the concentration of a certain mRNA
transcript at time t by z(t) we write:
$$\frac{dz(t)}{dt} = \frac{V_1\, a_1(t)^{M_1}}{K_1 + a_1(t)^{M_1}} \cdots \frac{V_N\, a_N(t)^{M_N}}{K_N + a_N(t)^{M_N}}\; p(t) \;-\; \frac{1}{\tau_z}\, z(t), \qquad (1)$$
with a_j(t) the concentration of the j-th activator (positive regulator), p(t) the concentration of RNA polymerase and V_j, K_j, M_j and τ_z reaction constants. N denotes the number of potential activators. The activator is thought to bind to DNA motifs upstream of the transcription start site and binds RNA polymerase which reads the DNA template to produce an mRNA transcript. M_j can be thought of as the multiplicity of the motif, τ_z captures the characteristic life time of the transcript. While reasonably realistic, this equation harbors too many unknowns for reliable inference: 3N + 1 with N ≈ 1000. We proceed with several simplifications:
• a_j(t) ≪ K_j: activator concentration is low;
• p(t) = p_0 is constant;
• dz(t)/dt ≈ (z(t + Δ) − z(t))/Δ with Δ the sampling period;
• Δ ≈ τ_z: sampling period roughly equal to transcript life time.
Counting time in units of Δ and taking logarithms on both sides, Equation (1) then simplifies to
$$\log z(t+1) = C + M_1 \log a_1(t) + \cdots + M_N \log a_N(t),$$
with C = log(τ_z V_1 · · · V_N p_0 /(K_1 · · · K_N)). This is a linear model for gene expression level given the expression levels of a set of activators. With a similar derivation one can include repressors [1].
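To make the simplification concrete, the sketch below simulates the resulting log-linear model in Python for a single target gene. The activator series, the multiplicities M_i and the constant C are invented for illustration; only the functional form comes from the derivation above.

```python
import numpy as np

# A minimal sketch of the simplified log-linear transcription model,
# log z(t+1) = C + sum_i M_i * log a_i(t), with made-up numbers.
T = 50                              # number of time points (as in the experiments below)
M = np.array([1.0, 2.0])            # assumed motif multiplicities M_i for two activators
C = -0.5                            # assumed lumped constant C

# Positive activator concentrations over time (arbitrary smooth series).
t = np.arange(T + 1)
a = np.stack([1.0 + 0.5 * np.sin(0.2 * t),
              1.0 + 0.5 * np.cos(0.15 * t)])

# Target expression on the log scale: one-step-ahead, linear in the logs.
log_z = C + M @ np.log(a[:, :-1])   # log z(t+1) for t = 0..T-1
print(log_z[:5])
```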
2.1 A Bayesian model for sparse linear regression
Let y be a vector with the log expression of the target gene and X = (x_1, . . . , x_N) a matrix whose columns contain the log expression of the candidate regulators. Assuming that the measurements are corrupted with additive Gaussian noise, we get y ∼ N(Xβ, σ²I) where β = (β_1, . . . , β_N)^T is a vector of regression coefficients and σ² is the variance of the noise. Such a linear model is commonly used [7, 8, 9]. Both y and x_1, . . . , x_N are mean-centered vectors with T measurements. We specify an inverse gamma (IG) prior for σ² so that P(σ²) = IG(σ²; ν/2, νλ/2), where λ is a prior estimate of σ² and ν is the sample size associated with that estimate. We assume that a priori all components β_i are independent and take a so-called "spike and slab prior" [10] for each of them. That is, we introduce binary latent variables γ_i, with γ_i = 1 if x_i takes part in the regression of y and γ_i = 0 otherwise. Given γ, the prior on β then reads
$$P(\beta|\gamma) = \prod_{i=1}^{N} P(\beta_i|\gamma_i) = \prod_{i=1}^{N} \mathcal{N}(\beta_i; 0, v_1)^{\gamma_i}\, \mathcal{N}(\beta_i; 0, v_0)^{1-\gamma_i},$$
where N(x; μ, σ²) denotes a Gaussian density with mean μ and variance σ² evaluated at x. In order to enforce sparsity, the variance v_1 of the slab should be larger than the variance v_0 of the spike. Instead of picking the hyperparameters v_1 and v_0 directly, it is convenient to pick a threshold of practical significance δ so that P(γ_i = 1) gets more weight when |β_i| > δ and P(γ_i = 0) gets more weight when |β_i| < δ [10]. In this way, given δ and one of v_1 or v_0, we pick the other one such that
$$\delta^2 = \frac{\log(v_1/v_0)}{v_0^{-1} - v_1^{-1}}. \qquad (2)$$
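Equation (2) fixes v_0 only implicitly once v_1 and δ are chosen. A minimal numerical sketch (hypothetical values, plain bisection) is shown below; the bracketing function changes sign exactly once on (0, v_1) whenever δ² < v_1, so bisection finds the unique interior root.

```python
import math

def spike_variance(v1, delta):
    """Solve eq. (2) for v0 given the slab variance v1 and threshold delta.

    Root of f(v0) = log(v1/v0) - delta^2 * (1/v0 - 1/v1) on (0, v1),
    found by bisection (requires delta^2 < v1).
    """
    f = lambda v0: math.log(v1 / v0) - delta**2 * (1.0 / v0 - 1.0 / v1)
    lo, hi = 1e-12, v1 * (1.0 - 1e-9)   # f(lo) < 0 < f(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical values in the spirit of Section 4: v1 = 1, delta = 0.2.
print(spike_variance(1.0, 0.2))     # v0 << v1, as required for a spike
```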
Finally, we assign independent Bernoulli priors to the components of the latent vector γ:
$$P(\gamma) = \prod_{i=1}^{N} \text{Bern}(\gamma_i; w) = \prod_{i=1}^{N} w^{\gamma_i}(1 - w)^{1 - \gamma_i},$$
so that each of the x_1, . . . , x_N can independently take part in the regression with probability w. We can identify the candidate genes whose expression is more likely to be correlated with the target gene by means of the posterior distribution of γ:
$$P(\gamma|y, X) = \int_{\beta, \sigma^2} P(\gamma, \beta, \sigma^2|y, X)\, d\beta\, d\sigma^2 \propto \int_{\beta, \sigma^2} P(\gamma, \beta, \sigma^2, y|X)\, d\beta\, d\sigma^2,$$
where
$$P(\gamma, \beta, \sigma^2, y|X) = \mathcal{N}(y; X\beta, \sigma^2 I)\, P(\beta|\gamma)\, P(\gamma)\, P(\sigma^2) = \left[\prod_{t=1}^{T} \mathcal{N}\!\Big(y_t;\, \sum_{i=1}^{N} x_{i,t}\beta_i,\, \sigma^2\Big)\right] \left[\prod_{i=1}^{N} \mathcal{N}(\beta_i; 0, v_1)^{\gamma_i}\, \mathcal{N}(\beta_i; 0, v_0)^{1-\gamma_i}\right] \left[\prod_{i=1}^{N} \text{Bern}(\gamma_i; w)\right] \text{IG}(\sigma^2;\, \nu/2,\, \nu\lambda/2). \qquad (3)$$
Unfortunately, this posterior distribution cannot be computed exactly if the number N of candidate
genes is larger than 25. An approximation based on Markov Chain Monte Carlo (MCMC) methods
has been proposed in [11].
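Although the normalized posterior is intractable, the unnormalized log joint in (3) is cheap to evaluate; the sketch below spells it out in Python/scipy. Variable names and the smoke-test data are ours, not from the paper.

```python
import numpy as np
from scipy import stats

def log_joint(gamma, beta, sigma2, y, X, v1, v0, w, nu, lam):
    """Unnormalized log of eq. (3): Gaussian likelihood, spike-and-slab
    prior on beta, Bernoulli prior on gamma, inverse-gamma prior on sigma2."""
    lp = stats.norm.logpdf(y, X @ beta, np.sqrt(sigma2)).sum()       # likelihood
    var = np.where(gamma == 1, v1, v0)                               # slab or spike
    lp += stats.norm.logpdf(beta, 0.0, np.sqrt(var)).sum()           # P(beta|gamma)
    lp += (gamma * np.log(w) + (1 - gamma) * np.log(1 - w)).sum()    # P(gamma)
    lp += stats.invgamma.logpdf(sigma2, nu / 2, scale=nu * lam / 2)  # P(sigma2)
    return lp

# Tiny smoke test with made-up data.
rng = np.random.default_rng(1)
T, N = 20, 5
X = rng.normal(size=(T, N))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.1 * rng.normal(size=T)
gamma = np.array([1, 1, 0, 0, 0])
print(log_joint(gamma, beta, 0.01, y, X, 1.0, 1e-4, 1.0 / N, 3.0, y.var()))
```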
2.2 A hierarchical model for gene regulation
In the section above we made use of the prior information that a target gene is typically regulated
by a small number of regulators. We have not yet made use of the prior information that a regulator
typically regulates more than one gene. We incorporate this information by a hierarchical extension
of our previous model. We introduce a vector τ of binary latent variables where τ_i = 1 if gene i is a regulator and τ_i = 0 otherwise. The following joint distribution captures this idea:
$$P(\tau, \gamma, \beta, \sigma^2|X) = \left[\prod_{j=1}^{N} \prod_{t=1}^{T-1} \mathcal{N}\!\Big(x_{j,t+1};\, \sum_{i=1, i \neq j}^{N} x_{i,t}\,\beta_{j,i},\, \sigma_j^2\Big)\right] \left[\prod_{j=1}^{N} \prod_{i=1, i \neq j}^{N} \mathcal{N}(\beta_{j,i}; 0, v_1)^{\gamma_{j,i}}\, \mathcal{N}(\beta_{j,i}; 0, v_0)^{1 - \gamma_{j,i}}\right] \left[\prod_{j=1}^{N} \text{IG}(\sigma_j^2;\, \nu_j/2,\, \nu_j\lambda_j/2)\right] \left[\prod_{j=1}^{N} \prod_{i=1, i \neq j}^{N} \text{Bern}(\gamma_{j,i}; w_1)^{\tau_i}\, \text{Bern}(\gamma_{j,i}; w_0)^{1 - \tau_i}\right] \left[\prod_{i=1}^{N} \text{Bern}(\tau_i; w)\right]. \qquad (4)$$
In this hierarchical model, γ is a matrix of binary latent variables where γ_{j,i} = 1 if gene i takes part in the regression of gene j and γ_{j,i} = 0 otherwise. The relationship between regulators and regulatees suggests that P(γ_{j,i} = 1|τ_i = 1) should be bigger than P(γ_{j,i} = 1|τ_i = 0) and thus w_1 > w_0. Matrix β contains regression coefficients where β_{j,i} is the regression coefficient between the expression of gene i and the delayed expression of gene j. Hyperparameter w represents the prior probability of any gene being a regulator and the elements σ_j² of the vector σ² contain the variance of the noise in each of the N regressions. Hyperparameters ν_j and λ_j have the same meaning as in the model for sparse linear regression. The corresponding plate model is illustrated in Figure 1.
We can identify the genes more likely to be regulators by means of the posterior distribution P(τ|X). Compared with the sparse linear regression model we expanded the number of latent variables from O(N) to O(N²). In order to keep inference feasible we turn to an approximate inference technique.
Figure 1: The hierarchical model for gene regulation (plate diagram with nodes τ_i, γ_{j,i}, β_{j,i}, σ_j, x_{i,t} and x_{j,t+1}, hyperparameters w, w_0, w_1, v_0, v_1, ν_j and λ_j, and plates over the N genes and T time points).
3 Expectation propagation
The Expectation Propagation (EP) algorithm [12] allows one to perform approximate Bayesian inference. In all Bayesian problems, the joint distribution of the model parameters θ and a data set D = {(x_i, y_i) : i = 1, . . . , n} with i.i.d. elements can be expressed as a product of terms
$$P(\theta, D) = \prod_{i=1}^{n} P(y_i|x_i, \theta)\, P(\theta) = \prod_{i=1}^{n+1} t_i(\theta), \qquad (5)$$
where t_{n+1}(θ) = P(θ) is the prior distribution for θ and t_i(θ) = P(y_i|x_i, θ) for i = 1, . . . , n.
Expectation propagation proceeds to approximate (5) with a product of simpler terms
$$\prod_{i=1}^{n+1} t_i(\theta) \approx \prod_{i=1}^{n+1} \tilde t_i(\theta) = Q(\theta), \qquad (6)$$
where all the term approximations t̃_i are restricted to belong to the same family F of exponential distributions, but they do not have to integrate to 1. Note that Q will also be in F because F is closed under multiplication. Each term approximation t̃_i is chosen so that
$$Q(\theta) = \tilde t_i(\theta) \prod_{j \neq i} \tilde t_j(\theta) = \tilde t_i(\theta)\, Q^{\backslash i}(\theta)$$
is as close as possible to
$$t_i(\theta) \prod_{j \neq i} \tilde t_j(\theta) = t_i(\theta)\, Q^{\backslash i}(\theta),$$
in terms of the direct Kullback-Leibler (K-L) divergence. The pseudocode of the EP algorithm is:
1. Initialize the term approximations t̃_i and Q to be uniform.
2. Repeat until all t̃_i converge:
   (a) Choose a t̃_i to refine and remove it from Q to get Q^{\i} (e.g. by dividing Q by t̃_i).
   (b) Update the term t̃_i so that it minimizes the K-L divergence between t_i Q^{\i} and t̃_i Q^{\i}.
   (c) Re-compute Q so that Q = t̃_i Q^{\i}.
The optimization problem in step (b) is solved by matching sufficient statistics between a distribution Q' within the family F and t_i Q^{\i}; the new t̃_i is then equal to Q'/Q^{\i}. Because Q belongs to the exponential family it is generally trivial to calculate its normalization constant. Once Q is normalized it can approximate P(θ|D). Finally, EP is not guaranteed to converge, although convergence can be improved by means of damped updates or double-loop algorithms [13].
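To make the loop concrete, here is a self-contained one-dimensional sketch of the EP mechanics. To keep it provably correct we use Gaussian likelihood terms, for which the moment matching in step (b) is exact and a single sweep recovers the exact posterior; for the models of Section 2 the same loop applies, but the tilted moments must be computed approximately, as described in Section 3.1.

```python
import numpy as np

# EP skeleton in 1-D with Gaussian sites, stored in natural parameters
# (precision r_i, precision*mean h_i).  Prior: N(0, 1).  Likelihood terms:
# t_i(theta) = N(y_i; theta, s2), i.e. Gaussian in theta, so the moment
# matching below is exact and the loop converges immediately.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 0.5, size=10)      # data
s2 = 0.25                              # known noise variance

n = len(y)
r = np.zeros(n)                        # site precisions
h = np.zeros(n)                        # site precision-means
r0, h0 = 1.0, 0.0                      # prior site (kept fixed)

for sweep in range(5):
    for i in range(n):
        # (a) remove site i from Q to get the cavity Q^{\i}
        r_cav = r0 + r.sum() - r[i]
        h_cav = h0 + h.sum() - h[i]
        # (b) moments of the tilted distribution t_i * Q^{\i};
        #     exact here because t_i is Gaussian in theta
        r_new = r_cav + 1.0 / s2
        h_new = h_cav + y[i] / s2
        # (c) the refined site is tilted / cavity; Q is recomputed implicitly
        r[i] = r_new - r_cav
        h[i] = h_new - h_cav

post_prec = r0 + r.sum()
print("posterior mean", (h0 + h.sum()) / post_prec, "var", 1.0 / post_prec)
# Matches the exact conjugate posterior: mean = (sum(y)/s2) / (1 + n/s2).
```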
3.1 EP for sparse linear regression
The application of EP to the models of Section 2 introduces some nontrivial technicalities. Furthermore, we describe several techniques to speed up the EP algorithm. We approximate P(γ, β, σ², y|X) for sparse linear regression by means of a factorized exponential distribution:
$$P(\gamma, \beta, \sigma^2, y|X) \approx \left[\prod_{i=1}^{N} \text{Bern}(\gamma_i; q_i)\, \mathcal{N}(\beta_i; \mu_i, s_i)\right] \text{IG}(\sigma^2; a, b) = Q(\gamma, \beta, \sigma^2), \qquad (7)$$
where {q_i, μ_i, s_i : i = 1, . . . , N}, a and b are free parameters. Note that in the approximation Q(γ, β, σ²) all the components of the vectors γ and β and the variable σ² are considered to be independent; this allows the approximation of P(γ|y, X) by ∏_{i=1}^N Bern(γ_i; q_i). We tune the parameters of Q(γ, β, σ²) by means of EP over the unnormalized density P(γ, β, σ², y|X). This density appears in (3) as a product of T + N terms (not counting the priors) which correspond to the t_i terms in (5). This way, we have T + N term approximations with the same form as (7) and which correspond to the term approximations t̃_i in (6). The complexity is O(TN) per iteration, because updating any of the first T term approximations requires N operations. However, some of the EP update operations require computing integrals which do not have a closed-form expression. To avoid that, we employ the following simplifications when we update the first T term approximations:
1. When updating the parameters {μ_i, s_i : i = 1, . . . , N} of the Gaussians in the term approximations, we approximate a Student's t-distribution by means of a Gaussian distribution with the same mean and variance. This approximation becomes more accurate as the degrees of freedom of the t-distribution increase.
2. When updating the parameters {a, b} of the IG in the term approximations, instead of propagating the sufficient statistics of an IG distribution we propagate the expectations of 1/σ² and 1/σ⁴. To achieve this, we have to perform two approximations like the one stated above. Note that in this case we are not minimizing the direct K-L divergence. However, at convergence, we expect the resulting IG in (7) to be sufficiently accurate.
In order to improve convergence, we re-update all of the last N term approximations each time one of the first T term approximations is updated. Computational complexity does not get worse than O(TN) and the resulting algorithm turns out to be faster. By comparison, the MCMC method
in [11] takes O(N²) steps to generate a single sample from P(γ|y, X). On problems of much
smaller size than we will consider in our experiments, one typically requires on the order of 10000
samples to obtain reasonably accurate estimates [10].
3.2 EP for gene regulation
We approximate P(τ, γ, β, σ²|X) by the factorized exponential distribution
$$Q(\tau, \gamma, \beta, \sigma^2) = \left[\prod_{j=1}^{N} \prod_{i=1, i \neq j}^{N} \text{Bern}(\gamma_{j,i}; \tilde w_{j,i})\right] \left[\prod_{i=1}^{N} \text{Bern}(\tau_i; t_i)\right] \left[\prod_{j=1}^{N} \prod_{i=1, i \neq j}^{N} \mathcal{N}(\beta_{j,i}; \mu_{j,i}, s_{j,i})\right] \left[\prod_{j=1}^{N} \text{IG}(\sigma_j^2; a_j, b_j)\right],$$
where {a_j, b_j, t_i, w̃_{j,i}, μ_{j,i}, s_{j,i} : i = 1, . . . , N; j = 1, . . . , N; i ≠ j} are free parameters. The posterior probability P(τ|X) that indicates which genes are more likely to be regulators can then be approximated by ∏_{i=1}^N Bern(τ_i; t_i). Again, we fix the parameters in Q(τ, γ, β, σ²) by means of EP over the joint density P(τ, γ, β, σ²|X). It is trivial to adapt the EP algorithm used in the sparse linear regression model to this new case: the terms to be approximated are the same as before except for the new N(N − 1) terms for the prior on γ. As in the previous section and in order to improve convergence, we re-update all the N(N − 1) term approximations corresponding to the prior on γ each time N of the N(T − 1) term approximations corresponding to regressions are updated. In order to reduce memory requirements, we associate all the N(N − 1) terms for the prior on β into a single term, which we can do because they are independent, so that we only store in memory one term approximation instead of N(N − 1). We also group the N(N − 1) terms for the prior on γ into N independent terms and the N(T − 1) terms for the regressions into T − 1 independent terms. Assuming a constant number of iterations (in our experiments, we need at most 20 iterations for EP to converge), the computational complexity and the memory requirements of the resulting algorithm are O(TN²). This indicates that it is feasible to analyze data sets which contain the expression pattern of thousands of genes. An MCMC algorithm would require O(N³) to generate just a single sample.
4 Experiments with artificial data
We carried out experiments with artificially generated data in order to validate the EP algorithms.
In the experiments for sparse linear regression we fixed the hyperparameters in (3) so that ν = 3, λ is the sample variance of the target vector y, v_1 = 1, δ = N^{−1}, v_0 is chosen according to (2) and w = N^{−1}. In the experiment for gene regulation we fixed the hyperparameters in (4) so that w = (N − 1)^{−1}, ν_i = 3 and λ_i is the sample variance of the vector x_i, w_1 = 10^{−1}(N − 1)^{−1}, w_0 = 10^{−2}(N − 1)^{−1}, v_1 = 1, δ = 0.2 and v_0 is chosen according to (2). Although the
posterior probabilities are sensitive to some of the choices, the orderings of these probabilities, e.g.,
to determine the most likely regulators, are robust to even large changes.
4.1 Sparse linear regression
In the first experiment we set T = 50 and generated x_1, . . . , x_6000 ∼ N(0, 3²I) candidate vectors and a target vector y = x_1 − x_2 + 0.5x_3 − 0.5x_4 + ε, where ε ∼ N(0, I). The EP algorithm assigned posterior inclusion probabilities (the q_i of equation (7)) close to 1 to candidates 1 and 2; candidates 3 and 4 obtained values 5.2 · 10⁻³ and 0.5 respectively, and the probabilities of all remaining candidates were smaller than 3 · 10⁻⁴. We repeated the experiment several times (each time using new data) and obtained similar results on each run.
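This synthetic data set is straightforward to reproduce; a sketch:

```python
import numpy as np

# Data for the first artificial experiment: 6000 candidates, 4 of which
# actually enter the regression (coefficients 1, -1, 0.5, -0.5).
rng = np.random.default_rng(0)
T, N = 50, 6000
X = rng.normal(0.0, 3.0, size=(T, N))             # x_1, ..., x_N as columns
y = X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - 0.5 * X[:, 3] + rng.normal(size=T)
```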
In the second experiment we set T = 50 and generated a target vector y ∼ N(0, 3²I) and x_1, . . . , x_500 candidate vectors so that x_i = y + ε_i for i = 2, . . . , 500, where ε_i ∼ N(0, I). The candidate vector x_1 is generated as x_1 = y + 0.5ε_1 where ε_1 ∼ N(0, I). This way, the noise in x_1 is half as large as the noise in the other candidate vectors. Note that all the candidate vectors are highly correlated with each other and with the target vector. This is what happens in gene expression data sets where many genes show similar expression patterns. We ran the EP algorithm 100 times (each time using new data) and it always assigned to all of q_1, . . . , q_500 more or less the same value of 6 · 10⁻⁴. However, q_1 obtained the highest value on 54 of the runs and was among the three highest values on 87 of the runs.
Finally, we repeated these experiments setting N = 100, using the MCMC method of [11] and the
EP algorithm for sparse linear regression. Both techniques produced results that are statistically
indistinguishable (the approximations obtained through EP fall within the variation of the MCMC
method), with EP requiring only a fraction of the time needed by MCMC.
4.2 Gene regulation
In this experiment we set T = 50 and generated a vector z with T + 1 values from a sinusoid. We
then generated 49 more vectors x_2, . . . , x_50 where x_{i,t} = z_t + ε_{i,t} for i = 2, . . . , 50 and t = 1, . . . , T, where ε_{i,t} ∼ N(0, σ²) and σ is one fourth of the sample standard deviation of z. We also generated a vector x_1 so that x_{1,t} = z_{t+1} + ε_t where t = 1, . . . , T and ε_t ∼ N(0, σ²). In this way, x_1 acts as
a regulator for x2 , ..., x50 . A single realization of the vectors x1 , . . . , x50 is displayed on the left of
Figure 2. We ran the EP algorithm for gene regulation over 100 different realizations of x1 , . . . , x50 .
The algorithm assigned t1 the highest value on 33 of the runs and x1 was ranked among the top five
on 74 of the runs. This indicates that the EP algorithm can successfully detect small differences in
correlations and should be able to find new regulators in real microarray data.
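A sketch of this construction (the sinusoid's frequency is an arbitrary choice):

```python
import numpy as np

# One realization of the synthetic regulator data of Section 4.2.
rng = np.random.default_rng(0)
T = 50
z = np.sin(0.25 * np.arange(T + 1))               # T+1 values from a sinusoid
sigma = 0.25 * z.std()                            # 1/4 of z's std. deviation

X = np.empty((50, T))                             # rows are genes x_1..x_50
X[0] = z[1:] + rng.normal(0.0, sigma, T)          # x_1 tracks z one step ahead
X[1:] = z[:-1] + rng.normal(0.0, sigma, (49, T))  # x_2..x_50 track z itself
```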
5 Experiments with real microarray data
We applied our algorithm to four data sets. The first is a yeast cell-cycle data set from [5] which is
commonly used as a benchmark for regulator discovery. Data sets two through four are from three
different Plasmodium strains [6]. Missing values were imputed by nearest neighbors [14] and the
hyperparameters were fixed at the same values as in Section 4. The yeast cdc15 data set contains
23 measurements of 6178 genes. We singled out 751 genes which met a minimum criterion for cell
cycle regulation [5]. The top ten genes with the highest posterior probability of being a regulator, along with their annotation from the Saccharomyces Genome Database, are listed in the table below: the top two genes are specific transcription
factors and IOC2 is associated with transcription regulation. As 4% of the yeast genome is associated
with transcription the probability of this occurring by chance is 0.0062. However, although the result
is statistically significant, we were disappointed to find none of the known cell-cycle regulators (like
ACE2, FKH* or SWI*) among the top ten.
Figure 2: Left: plot of the vectors x_2, . . . , x_50 in grey and the vector x_1 in black. The vector x_1 contains the expression of a regulator which would determine the expressions in x_2, . . . , x_50. Right: expressions of gene PF11_0321 (black) and the 100 genes which are most likely to be regulated by it (light and dark grey). Two clusters of positively and negatively regulated genes can be appreciated.
rank  standard name  common name  annotation
1     YLR098c        CHA4         DNA binding transcriptional activator
2     YOR315w        SFG1         putative transcription factor for growth of superficial pseudohyphae
3     YJL073w        JEM1         DNAJ-like chaperone
4     YOR023c        AHC1         subunit of the ADA histone acetyl transferase complex
5     YOR105w        -            dubious open reading frame
6     YLR095w        IOC2         transcription elongation
7     YOR321w        PMT3         protein O-mannosyl transferase
8     YLR231c        BNA5         kynureninase
9     YOR248w        -            dubious open reading frame
10    YOR247w        SRL1         mannoprotein
The three data sets for the malaria parasite [6] contain 53 measurements (3D7), 50 measurements
(Dd2) and 48 measurements (HB3). We focus on 3D7 as this is the sequenced reference strain. We
singled out 751 genes that showed the highest variation as quantified by the interquartile range of the expression measurements. The top ten genes with the highest posterior probability of being a regulator, along with their annotation from PlasmoDB, are listed in the table below. Recalling the motivation for our approach, the paucity of known
transcription factors, we cannot expect to find many annotated regulators in PlasmoDB version 5.4.
Thus, we list the BLASTP hits provided by PlasmoDB instead of the absent annotation. These
hits were the highest scoring ones outside of the genus Plasmodium. We find four genes with a
large identity to transcription factors in Dictyostelium (a recently sequenced social amoeba) and one annotated helicase which typically functions in post-transcriptional regulation. Interestingly, three
genes have no known function and could be regulators.
rank  standard name  annotation or selected BLASTP hits
1     PFC0950c       25% identity to GATA binding TF in Dictyostelium
2     PF11_0321      25% identity to putative WRKY TF in Dictyostelium
3     PFI1210w       no BLASTP matches outside Plasmodium genus
4     MAL6P1.233     no BLASTP matches outside Plasmodium genus
5     PFD0175c       32% identity to GATA binding TF in Dictyostelium
6     MAL7P1.34      35% identity to GATA binding TF in Dictyostelium
7     MAL6P1.182     N-acetylglucosaminyl-phosphatidylinositol de-N-acetylase
8     PF13_0140      dihydrofolate synthase/folylpolyglutamate synthase
9     PF13_0138      no BLASTP matches outside Plasmodium genus
10    MAL13P1.14     DEAD box helicase
Results for the HB3 strain were similar in that five putative regulators were found. Somewhat disappointingly, we found only one putative regulator (a helicase) among the top ten genes for Dd2.
6 Conclusion and discussion
Our approach enters a field full of methods enforcing sparsity ([15, 8, 7, 16, 9]). Our main contributions are: a hierarchical model to discover regulators, a tractable algorithm for fast approximate
inference in models with many interacting variables, and the application to malaria.
Arguably most related is the hierarchical model in [15]. The covariates in this model are a dozen
external variables, coding experimental conditions, instead of the hundreds of expression levels of
other genes as in our model. Furthermore, the prior in [15] enforces sparsity on the "columns" of β to implement the idea that some genes are not influenced by any of the experimental conditions. Our prior, on the other hand, enforces sparsity on the "rows" in order to find regulators.
Future work could include more involved priors, e.g., enforcing sparsity on both "rows" and "columns", or incorporating information from DNA sequence data. The approximate inference techniques described in this paper make it feasible to evaluate such extensions in a fraction of the time
required by MCMC methods.
References
[1] T.S. Gardner and J.J. Faith. Reverse-engineering transcription control networks. Physics of Life Reviews, 2:65–88, 2005.
[2] R. Coulson, N. Hall, and C. Ouzounis. Comparative genomics of transcriptional control in the human malaria parasite Plasmodium falciparum. Genome Res., 14:1548–1554, 2004.
[3] S. Balaji, M.M. Babu, L.M. Iyer, and L. Aravind. Discovery of the principal specific transcription factors of Apicomplexa and their implication for the evolution of the AP2-integrase DNA binding domains. Nucleic Acids Research, 33(13):3994–4006, 2005.
[4] T. Sakata and E.A. Winzeler. Genomics, systems biology and drug development for infectious diseases. Molecular BioSystems, 3:841–848, 2007.
[5] P.T. Spellman, G. Sherlock, V.R. Iyer, K. Anders, M.B. Eisen, P.O. Brown, and D. Botstein. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Molecular Biology of the Cell, 9(12):3273–3297, 1998.
[6] M. Llinas, Z. Bozdech, E.D. Wong, A.T. Adai, and J.L. DeRisi. Comparative whole genome transcriptome analysis of three Plasmodium falciparum strains. Nucleic Acids Research, 34(4):1166–1173, 2006.
[7] M. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, UCL, 2003.
[8] C. Sabatti and G.M. James. Bayesian sparse hidden components analysis for transcription regulation networks. Bioinformatics, 22(6):739–746, 2006.
[9] S.T. Jensen, G. Chen, and C.J. Stoeckert. Bayesian variable selection and data integration for biological regulatory networks. The Annals of Applied Statistics, 1:612–633, 2007.
[10] E.I. George and R.E. McCulloch. Approaches for Bayesian variable selection. Statistica Sinica, 7:339–374, 1997.
[11] E.I. George and R.E. McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881–889, 1993.
[12] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
[13] T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In UAI-2002, pages 216–223, 2002.
[14] O. Troyanskaya, M. Cantor, P. Brown, T. Hastie, R. Tibshirani, and D. Botstein. Missing value estimation methods for DNA microarrays. Bioinformatics, 17(6):520–525, 2001.
[15] J. Lucas, C. Carvalho, Q. Wang, A. Bild, J. Nevins, and M. West. Sparse statistical modelling in gene expression genomics. In K.A. Do, P. Müller, and M. Vannucci, editors, Bayesian Inference for Gene Expression and Proteomics. Springer, 2006.
[16] M.Y. Park, T. Hastie, and R. Tibshirani. Averaged gene expressions for regression. Biostatistics, 8:212–227, 2007.
Marc?Aurelio Ranzato1
Y-Lan Boureau2,1
Yann LeCun1
1
Courant Institute of Mathematical Sciences, New York University
2
INRIA Rocquencourt
{ranzato,ylan,[email protected]}
Abstract
Unsupervised learning algorithms aim to discover the structure hidden in the data,
and to learn representations that are more suitable as input to a supervised machine
than the raw input. Many unsupervised methods are based on reconstructing the
input from the representation, while constraining the representation to have certain desirable properties (e.g. low dimension, sparsity, etc). Others are based on
approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine
trained probabilistically, namely a Restricted Boltzmann Machine. We propose a
simple criterion to compare and select different unsupervised machines based on
the trade-off between the reconstruction error and the information content of the
representation. We demonstrate this method by extracting features from a dataset
of handwritten numerals, and from a dataset of natural image patches. We show
that by stacking multiple levels of such machines and by training sequentially,
high-order dependencies between the input observed variables can be captured.
1 Introduction
One of the main purposes of unsupervised learning is to produce good representations for data, that
can be used for detection, recognition, prediction, or visualization. Good representations eliminate
irrelevant variabilities of the input data, while preserving the information that is useful for the ultimate task. One cause for the recent resurgence of interest in unsupervised learning is the ability
to produce deep feature hierarchies by stacking unsupervised modules on top of each other, as proposed by Hinton et al. [1], Bengio et al. [2] and our group [3, 4]. The unsupervised module at one
level in the hierarchy is fed with the representation vectors produced by the level below. Higher-level representations capture high-level dependencies between input variables, thereby improving
the ability of the system to capture underlying regularities in the data. The output of the last layer in
the hierarchy can be fed to a conventional supervised classifier.
A natural way to design stackable unsupervised learning systems is the encoder-decoder
paradigm [5]. An encoder transforms the input into the representation (also known as the code
or the feature vector), and a decoder reconstructs the input (perhaps stochastically) from the representation. PCA, Auto-encoder neural nets, Restricted Boltzmann Machines (RBMs), our previous
sparse energy-based model [3], and the model proposed in [6] for noisy overcomplete channels are
just examples of this kind of architecture. The encoder/decoder architecture is attractive for two reasons: 1. after training, computing the code is a very fast process that merely consists in running the
input through the encoder; 2. reconstructing the input with the decoder provides a way to check that
the code has captured the relevant information in the data. Some learning algorithms [7] do not have
a decoder and must resort to computationally expensive Markov Chain Monte Carlo (MCMC) sampling methods in order to provide reconstructions. Other learning algorithms [8, 9] lack an encoder,
which makes it necessary to run an expensive optimization algorithm to find the code associated
with each new input sample. In this paper we will focus only on encoder-decoder architectures.
In general terms, we can view an unsupervised model as defining a distribution over input vectors
Y through an energy function E(Y, Z, W ):
$$P(Y|W) = \int_z P(Y, z|W) = \frac{\int_z e^{-\beta E(Y,z,W)}}{\int_{y,z} e^{-\beta E(y,z,W)}} \qquad (1)$$
where Z is the code vector, W the trainable parameters of encoder and decoder, and β is an arbitrary
positive constant. The energy function includes the reconstruction error, and perhaps other terms
as well. For convenience, we will omit W from the notation in the following. Training the machine
to model the input distribution is performed by finding the encoder and decoder parameters that
minimize a loss function equal to the negative log likelihood of the training data under the model.
For a single training sample Y , the loss function is
$$L(W, Y) = -\frac{1}{\beta}\log \int_z e^{-\beta E(Y,z)} + \frac{1}{\beta}\log \int_{y,z} e^{-\beta E(y,z)} \qquad (2)$$
The first term is the free energy F_β(Y). Assuming that the distribution over Z is rather peaked, it
can be simpler to approximate this distribution over Z by its mode, which turns the marginalization
over Z into a minimization:
$$L^*(W, Y) = E(Y, Z^*(Y)) + \frac{1}{\beta}\log \int_y e^{-\beta E(y, Z^*(y))} \qquad (3)$$
where Z*(Y) is the maximum likelihood value Z*(Y) = argmin_z E(Y, z), also known as the
optimal code. We can then define an energy for each input point, that measures how well it is
reconstructed by the model:
$$F_\infty(Y) = E(Y, Z^*(Y)) = \lim_{\beta \to \infty} -\frac{1}{\beta} \log \int_z e^{-\beta E(Y,z)} \qquad (4)$$
The second term in equation 2 and 3 is called the log partition function, and can be viewed as a
penalty term for low energies. It ensures that the system produces low energy only for input vectors
that have high probability in the (true) data distribution, and produces higher energies for all other
input vectors [5]. The overall loss is the average of the above over the training set.
Regardless of whether only Z* or the whole distribution over Z is considered, the main difficulty
with this framework is that it can be very hard to compute the gradient of the log partition function
in equation 2 or 3 with respect to the parameters W . Efficient methods shortcut the computation by
drastically and cleverly reducing the integration domain. For instance, Restricted Boltzmann Machines (RBM) [10] approximate the gradient of the log partition function in equation 2 by sampling
values of Y whose energy will be pulled up using an MCMC technique. By running the MCMC for
a short time, those samples are chosen in the vicinity of the training samples, thereby ensuring that
the energy surface forms a ravine around the manifold of the training samples. This is the basis of
the Contrastive Divergence method [10].
The role of the log partition function is merely to ensure that the energy surface is lower around
training samples than anywhere else. The method proposed here eliminates the log partition function
from the loss, and replaces it by a term that limits the volume of the input space over which the energy
surface can take a low value. This is performed by adding a penalty term on the code rather than on
the input. While this class of methods does not directly maximize the likelihood of the data, it can be
seen as a crude approximation of it. To understand the method, we first note that if for each vector
Y, there exists a corresponding optimal code Z*(Y) that makes the reconstruction error (or energy) F_∞(Y) zero (or near zero), the model can perfectly reconstruct any input vector. This makes the
energy surface flat and indiscriminate. On the other hand, if Z can only take a small number of
different values (low entropy code), then the energy F_∞(Y) can only be low in a limited number of places (the Y's that are reconstructed from this small number of Z values), and the energy cannot
be flat.
More generally, a convenient method through which flat energy surfaces can be avoided is to limit
the maximum information content of the code. Hence, minimizing the energy F_∞(Y) together with
the information content of the code is a good substitute for minimizing the log partition function.
A popular way to minimize the information content in the code is to make the code sparse or lowdimensional [5]. This technique is used in a number of unsupervised learning methods, including
PCA, auto-encoders neural network, and sparse coding methods [6, 3, 8, 9]. In sparse methods,
the code is forced to have only a few non-zero units while most code units are zero most of the
time. Sparse-overcomplete representations have a number of theoretical and practical advantages,
as demonstrated in a number of recent studies [6, 8, 3]. In particular, they have good robustness to
noise, and provide a good tiling of the joint space of location and frequency. In addition, they are
advantageous for classifiers because classification is more likely to be easier in higher dimensional
spaces. This may explain why biology seems to like sparse representations [11]. In our context, the
main advantage of sparsity constraints is to allow us to replace a marginalization by a minimization,
and to free ourselves from the need to minimize the log partition function explicitly.
In this paper we propose a new unsupervised learning algorithm called Sparse Encoding Symmetric
Machine (SESM), which is based on the encoder-decoder paradigm, and which is able to produce
sparse overcomplete representations efficiently without any need for filter normalization [8, 12] or
code saturation [3]. As described in more details in sec. 2 and 3, we consider a loss function which
is a weighted sum of the reconstruction error and a sparsity penalty, as in many other unsupervised
learning algorithms [13, 14, 8]. Encoder and decoder are constrained to be symmetric, and share
a set of linear filters. Although we only consider linear filters in this paper, the method allows
the use of any differentiable function for encoder and decoder. We propose an iterative on-line
learning algorithm which is closely related to those proposed by Olshausen and Field [8] and by us
previously [3]. The first step computes the optimal code by minimizing the energy for the given
input. The second step updates the parameters of the machine so as to minimize the energy.
In sec. 4, we compare SESM with RBM and PCA. Following [15], we evaluate these methods by
measuring the reconstruction error for a given entropy of the code. In another set of experiments,
we train a classifier on the features extracted by the various methods, and measure the classification
error on the MNIST dataset of handwritten numerals. Interestingly, the machine achieving the best
recognition performance is the one with the best trade-off between RMSE and entropy. In sec. 5, we
compare the filters learned by SESM and RBM for handwritten numerals and natural image patches.
In sec.5.1.1, we describe a simple way to produce a deep belief net by stacking multiple levels of
SESM modules. The representational power of this hierarchical non-linear feature extraction is
demonstrated through the unsupervised discovery of the numeral class labels in the high-level code.
2 Architecture
In this section we describe a Sparse Encoding Symmetric Machine (SESM) having a set of linear filters in both encoder and decoder. However, everything can be easily extended to any other choice of
parameterized functions as long as these are differentiable and maintain symmetry between encoder
and decoder. Let us denote with Y the input defined in R^N, and with Z the code defined in R^M, where M is in general greater than N (for overcomplete representations). Let the filters in encoder and decoder be the columns of matrix W ∈ R^{N×M}, and let the biases in the encoder and decoder be denoted by b_enc ∈ R^M and b_dec ∈ R^N, respectively. Then, encoder and decoder compute:
$$f_{enc}(Y) = W^T Y + b_{enc}, \qquad f_{dec}(Z) = W\, l(Z) + b_{dec} \qquad (5)$$
where the function l is a point-wise logistic non-linearity of the form:
$$l(x) = 1/(1 + \exp(-g x)), \qquad (6)$$
with g fixed gain. The system is characterized by an energy measuring the compatibility between
pairs of input Y and latent code Z, E(Y, Z) [16]. The lower the energy, the more compatible (or
likely) is the pair. We define the energy as:
$$E(Y, Z) = \alpha_e \|Z - f_{enc}(Y)\|_2^2 + \|Y - f_{dec}(Z)\|_2^2 \qquad (7)$$
During training we minimize the following loss:
$$L(W, Y) = E(Y, Z) + \alpha_s h(Z) + \alpha_r \|W\|_1 = \alpha_e \|Z - f_{enc}(Y)\|_2^2 + \|Y - f_{dec}(Z)\|_2^2 + \alpha_s h(Z) + \alpha_r \|W\|_1 \qquad (8)$$
The first term tries to make the output of the encoder as similar as possible to the code Z. The second
term is the mean-squared error between the input Y and the reconstruction provided by the decoder.
The third term ensures the sparsity of the code by penalizing non-zero values of code units; this term acts independently on each code unit and is defined as h(Z) = Σ_{i=1}^M log(1 + l²(z_i)) (corresponding to a factorized Student-t prior distribution on the non-linearly transformed code units [8] through the logistic of equation 6). The last term is an L1 regularization on the filters to suppress noise and
favor more localized filters. The loss formulated in equation 8 combines terms that characterize
also other methods. For instance, the first two terms appear in our previous model [3], but in that
work, the weights of encoder and decoder were not tied and the parameters in the logistic were updated using running averages. The second and third terms are present in the "decoder-only" model proposed in [8]. The third term was used in the "encoder-only" model of [7]. Besides the already-mentioned advantages of using an encoder-decoder architecture, we point out another good feature
of this algorithm due to its symmetry. A common idiosyncrasy for sparse-overcomplete methods
using both a reconstruction and a sparsity penalty in the objective function (second and third term in
equation 8), is the need to normalize the basis functions in the decoder during learning [8, 12] with
somewhat ad-hoc technique, otherwise some of the basis functions collapse to zero, and some blow
up to infinity. Because of the sparsity penalty and the linear reconstruction, code units become tiny
and are compensated by the filters in the decoder that grow without bound. Even though the overall
loss decreases, training is unsuccessful. Unfortunately, simply normalizing the filters makes less
clear which objective function is minimized. Some authors have proposed quite expensive methods to solve this issue: by making better approximations of the posterior distribution [15], or by
using sampling techniques [17]. In this work, we propose to enforce symmetry between encoder
and decoder (through weight sharing) so as to have automatic scaling of filters. Their norm cannot
possibly be large because code units, produced by the encoder weights, would have large values as
well, producing bad reconstructions and increasing the energy (the second term in equation 7 and
8).
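As a concrete reading of equations (5)-(8), the following numpy sketch evaluates the encoder, decoder, energy and full loss. Array sizes and coefficient values are placeholders, not the settings used in the experiments (the defaults α_e = 1 and gain 7 are taken from Section 5).

```python
import numpy as np

def logistic(x, g=7.0):                        # l(x) of eq. (6), gain g
    return 1.0 / (1.0 + np.exp(-g * x))

def f_enc(Y, W, b_enc):                        # eq. (5), encoder
    return W.T @ Y + b_enc

def f_dec(Z, W, b_dec):                        # eq. (5), decoder
    return W @ logistic(Z) + b_dec

def energy(Y, Z, W, b_enc, b_dec, a_e=1.0):    # eq. (7)
    return (a_e * np.sum((Z - f_enc(Y, W, b_enc)) ** 2)
            + np.sum((Y - f_dec(Z, W, b_dec)) ** 2))

def loss(Y, Z, W, b_enc, b_dec, a_e=1.0, a_s=0.2, a_r=1e-3):  # eq. (8)
    sparsity = np.sum(np.log(1.0 + logistic(Z) ** 2))         # h(Z)
    return energy(Y, Z, W, b_enc, b_dec, a_e) + a_s * sparsity + a_r * np.abs(W).sum()

# Toy usage with made-up sizes.
rng = np.random.default_rng(0)
N, M = 16, 32
Y, Z = rng.normal(size=N), rng.normal(size=M)
W = 0.1 * rng.normal(size=(N, M))
print(loss(Y, Z, W, np.zeros(M), np.zeros(N)))
```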
3 Learning Algorithm
Learning consists of determining the parameters in W , benc , and bdec that minimize the loss in
equation 8. As indicated in the introduction, the energy augmented with the sparsity constraint is
minimized with respect to the code to find the optimal code. No marginalization over code distribution is performed. This is akin to using the loss function in equation 3. However, the log partition
function term is dropped. Instead, we rely on the code sparsity constraints to ensure that the energy
surface is not flat.
Since the second term in equation 8 couples both Z and W and bdec , it is not straightforward to
minimize this energy with respect to both. On the other hand, once Z is given, the minimization
with respect to W is a convex quadratic problem. Vice versa, if the parameters W are fixed, the
optimal code Z ? that minimizes L can be computed easily through gradient descent. This suggests
the following iterative on-line coordinate descent learning algorithm:
1. for a given sample Y and parameter setting, minimize the loss in equation 8 with respect to Z by
gradient descent to obtain the optimal code Z*
2. clamping both the input Y and the optimal code Z* found at the previous step, do one step of
gradient descent to update the parameters.
Unlike other methods [8, 12], no column normalization of W is required. Also, all the parameters
are updated by gradient descent unlike in our previous work [3] where some parameters are updated
using a moving average.
After training, the system converges to a state where the decoder produces good reconstructions
from a sparse code, and the optimal code is predicted by a simple feed-forward propagation through
the encoder.
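A self-contained sketch of this two-step iteration, with the gradients of eq. (8) written out explicitly; step sizes, iteration counts and sizes are invented, and the L1 term is handled with a plain subgradient rather than anything more careful.

```python
import numpy as np

g, a_e, a_s, a_r = 7.0, 1.0, 0.2, 1e-3          # gain and coefficients (placeholders)

def l(z):  return 1.0 / (1.0 + np.exp(-g * z))
def dl(z): u = l(z); return g * u * (1.0 - u)   # derivative of the logistic

def train_step(Y, W, b_enc, b_dec, n_code_steps=30, eta_z=0.02, eta_w=0.002):
    """One SESM update: (1) find Z* by gradient descent on eq. (8) w.r.t. Z,
    (2) one gradient step on W, b_enc, b_dec with Y and Z* clamped."""
    Z = W.T @ Y + b_enc                          # warm start at the encoder output
    for _ in range(n_code_steps):                # step 1: infer the optimal code
        e = Z - (W.T @ Y + b_enc)
        r = Y - (W @ l(Z) + b_dec)
        gZ = (2 * a_e * e - 2 * (W.T @ r) * dl(Z)
              + a_s * 2 * l(Z) * dl(Z) / (1 + l(Z) ** 2))
        Z -= eta_z * gZ
    e = Z - (W.T @ Y + b_enc)                    # step 2: update the parameters
    r = Y - (W @ l(Z) + b_dec)
    W     -= eta_w * (-2 * a_e * np.outer(Y, e) - 2 * np.outer(r, l(Z))
                      + a_r * np.sign(W))
    b_enc -= eta_w * (-2 * a_e * e)
    b_dec -= eta_w * (-2 * r)
    return Z

# Toy usage with made-up sizes (N = 16 inputs, M = 32 code units).
rng = np.random.default_rng(0)
N, M = 16, 32
W = 0.1 * rng.normal(size=(N, M))
b_enc, b_dec = np.zeros(M), np.zeros(N)
Z_star = train_step(rng.normal(size=N), W, b_enc, b_dec)
```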
4 Comparative Coding Analysis
In the following sections, we mainly compare SESM with RBM in order to better understand their
differences in terms of maximum likelihood approximation, and in terms of coding efficiency and
robustness.
RBM
As explained in the introduction, RBMs minimize an approximation of the negative log
likelihood of the data under the model. An RBM is a binary stochastic symmetric machine defined
4
by an energy function of the form: E(Y, Z) = ?Z T W T Y ? bTenc Z ? bTdec Y . Although this is not
obvious at first glance, this energy can be seen as a special case of the encoder-decoder architecture
that pertains to binary data vectors and code vectors [5]. Training an RBM minimizes an approximation of the negative log likelihood loss function 2, averaged over the training set, through a gradient
descent procedure. Instead of estimating the gradient of the log partition function, RBM training
uses contrastive divergence [10], which takes random samples drawn over a limited region Ω around
the training samples. The loss becomes:
$$L(W, Y) = -\frac{1}{\beta}\log \sum_z e^{-\beta E(Y,z)} + \frac{1}{\beta}\log \sum_{y \in \Omega} \sum_z e^{-\beta E(y,z)} \qquad (9)$$
Because of the RBM architecture, given a Y , the components of Z are independent, hence the sum
over configurations of Z can be done independently for each component of Z. Sampling y in the
neighborhood Ω is performed with one, or a few, alternated MCMC steps over Y and Z. This means
that only the energy of points around training samples is pulled up. Hence, the likelihood function
takes the right shape around the training samples, but not necessarily everywhere. However, the
code vector in an RBM is binary and noisy, and one may wonder whether this does not have the
effect of surreptitiously limiting the information content of the code, thereby further minimizing the
log partition function as a bonus.
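For reference, a minimal sketch of the contrastive divergence update with a single Gibbs step (CD-1) for a binary-binary RBM with the energy above; this is the generic textbook form, not the exact configuration used in the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(Y, W, b_enc, b_dec, eta=0.05):
    """One CD-1 update for a binary-binary RBM with energy
    E(Y, Z) = -Z^T W^T Y - b_enc^T Z - b_dec^T Y (Y visible, Z hidden)."""
    # positive phase: hidden activations given the data
    p_h = sigmoid(W.T @ Y + b_enc)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # one Gibbs step: reconstruct the visibles, then the hiddens again
    p_v = sigmoid(W @ h + b_dec)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    p_h2 = sigmoid(W.T @ v + b_enc)
    # contrastive divergence parameter updates
    W     += eta * (np.outer(Y, p_h) - np.outer(v, p_h2))
    b_enc += eta * (p_h - p_h2)
    b_dec += eta * (Y - v)
```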
SESM
RBM and SESM have almost the same architecture because they both have a symmetric
encoder and decoder, and a logistic non-linearity on the top of the encoder. However, RBM is trained
using (approximate) maximum likelihood, while SESM is trained by simply minimizing the average
energy F_∞(Y) of equation 4 with an additional code sparsity term. SESM relies on the sparsity
term to prevent flat energy surfaces, while RBM relies on an explicit contrastive term in the loss, an
approximation of the log partition function. Also, the coding strategy is very different because code
units are "noisy" and binary in RBM, while they are quasi-binary and sparse in SESM. Features
extracted by SESM look like object parts (see next section), while features produced by RBM lack
an intuitive interpretation because they aim at modeling the input distribution and they are used in a
distributed representation.
4.1 Experimental Comparison
In the first experiment we have trained SESM, RBM, and PCA on the first 20000 digits in the
MNIST training dataset [18] in order to produce codes with 200 components. Similarly to [15] we
have collected test image codes after the logistic non-linearity (except for PCA which is linear), and we have measured the root mean square error (RMSE) and the entropy. SESM was run for different values of the sparsity coefficient α_s in equation 8 (while all other parameters are left unchanged, see next section for details). The RMSE is defined as
$$\mathrm{RMSE} = \sqrt{\tfrac{1}{\sigma^2 P N} \sum \| Y - f_{dec}(\bar Z) \|_2^2},$$
where Z̄ is the uniformly quantized code produced by the encoder, P is the number of test samples, and σ² is the estimated variance of units in the input Y. Assuming that the (quantized) code units are encoded independently and with the same distribution, the lower bound on the number of bits required to encode each of them is given by:
$$H_{c.u.} = -\sum_{i=1}^{Q} \frac{c_i}{PM} \log_2 \frac{c_i}{PM},$$
where c_i is the number of counts in the i-th bin, and Q is the number of quantization levels. The number of bits per pixel is then equal to (M/N) H_{c.u.}. Unlike
in [15, 12], the reconstruction is done taking the quantized code in order to measure the robustness
of the code to the quantization noise. As shown in fig. 1-C, RBM is very robust to noise in the
code because it is trained by sampling. The opposite is true for PCA which achieves the lowest
RMSE when using high precision codes, but the highest RMSE when using a coarse quantization.
SESM seems to give the best trade-off between RMSE and entropy. Fig. 1-D/F compare the features
learned by SESM and RBM. Despite the similarities in the architecture, filters look quite different
in general, revealing two different coding strategies: distributed for RBM, and sparse for SESM.
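Both evaluation quantities are easy to compute from a batch of test codes; the sketch below uses random data and a stand-in decoder in place of a trained model.

```python
import numpy as np

def quantize(Z, n_bins=256):
    """Uniformly quantize code values to bin centers over their range."""
    lo, hi = Z.min(), Z.max()
    idx = np.clip(((Z - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    centers = lo + (idx + 0.5) * (hi - lo) / n_bins
    return centers, idx

def rmse_and_entropy(Y, Z, decode, n_bins=256):
    """Y: (P, N) inputs; Z: (P, M) codes; decode: maps one code to an input."""
    P, N = Y.shape
    M = Z.shape[1]
    Zq, idx = quantize(Z, n_bins)
    recon = np.stack([decode(z) for z in Zq])
    rmse = np.sqrt(((Y - recon) ** 2).sum() / (Y.var() * P * N))
    counts = np.bincount(idx.ravel(), minlength=n_bins)
    p = counts[counts > 0] / (P * M)
    h_cu = -(p * np.log2(p)).sum()                 # bits per code unit
    return rmse, h_cu * M / N                      # (RMSE, bits per pixel)

# Stand-in usage with random data and an identity-like "decoder".
rng = np.random.default_rng(0)
P, N, M = 100, 64, 64
Y = rng.normal(size=(P, N))
Z = Y.copy()                                       # pretend code == input
print(rmse_and_entropy(Y, Z, decode=lambda z: z))
```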
In the second experiment, we have compared these methods by means of a supervised task in order to
assess which method produces the most discriminative representation. Since we have available also
the labels in the MNIST, we have used the codes (produced by these machines trained unsupervised)
as input to the same linear classifier. This is run for 100 epochs to minimize the squared error
between outputs and targets, and has a mild ridge regularizer. Fig. 1-A/B show the result of these
experiments in addition to what can be achieved by a linear classifier trained on the raw pixel data.
Note that: 1) training on features instead of raw data improves the recognition (except for PCA
[Figure 1, panels (A)-(H): plots of training and test error rate (%) versus entropy (bits/pixel) and versus RMSE for 10, 100, and 1000 samples per class (raw pixels, PCA, RBM, SESM), RMSE versus entropy for quantization into 5 and 256 bins, and the filter/reconstruction images; see the caption below.]
Figure 1: (A)-(B) Error rate on MNIST training (with 10, 100 and 1000 samples per class) and
test set produced by a linear classifier trained on the codes produced by SESM, RBM, and PCA.
The entropy and RMSE refer to a quantization into 256 bins. The comparison has also been extended
to the same classifier trained on raw pixel data (showing the advantage of extracting features).
The error bars refer to 1 std. dev. of the error rate for 10 random choices of training datasets
(same splits for all methods). The parameter $\alpha_s$ in eq. 8 takes the values 1, 0.5, 0.2, 0.1, 0.05. (C)
Comparison between SESM, RBM, and PCA when quantizing the code into 5 and 256 bins. (D)
Random selection from the 200 linear filters that were learned by SESM ($\alpha_s$ = 0.2). (E) Some pairs
of original and reconstructed digits from the code produced by the encoder in SESM (feed-forward
propagation through encoder and decoder). (F) Random selection of filters learned by RBM. (G)
Back-projection in image space of the filters learned in the second stage of the hierarchical feature
extractor. The second stage was trained on the non-linearly transformed codes produced by the first
stage machine. The back-projection has been performed by using a 1-of-10 code in the second stage
machine, and propagating this through the second stage decoder and first stage decoder. The filters
at the second stage discover the class-prototypes (manually ordered for visual convenience) even
though no class label was ever used during training. (H) Feature extraction from 8x8 natural image
patches: some filters that were learned.
when the number of training samples is small), 2) RBM performance is competitive overall when
few training samples are available, 3) the best performance is achieved by SESM for a sparsity level
which trades off RMSE for entropy (overall for large training sets), 4) the method with the best
RMSE is not the one with the lowest error rate, 5) compared to an SESM with the same error rate,
RBM is more costly in terms of entropy.
5 Experiments
This section describes some experiments we have done with SESM. The coefficient $\alpha_e$ in equation 8
has always been set equal to 1, and the gain in the logistic has been set equal to 7 in order to achieve
a quasi-binary coding. The parameter $\alpha_s$ has to be set by cross-validation to a value which depends
on the level of sparsity required by the specific application.
5.1 Handwritten Digits
Fig. 1-B/E shows the result of training an SESM with $\alpha_s$ equal to 0.2. Training was performed on
20000 digits scaled between 0 and 1, by setting $\alpha_r$ to 0.0004 (in equation 8) with a learning rate
equal to 0.025 (decreased exponentially). Filters detect the strokes that can be combined to form a
digit. Even if the code unit activation has a very sparse distribution, reconstructions are very good
(no minimization in code space was performed).
5.1.1 Hierarchical Features
A hierarchical feature extractor can be trained layer-by-layer similarly to what has been proposed
in [19, 1] for training deep belief nets (DBNs). We have trained a second (higher) stage machine
on the non-linearly transformed codes produced by the first (lower) stage machine described in the
previous example. We used just 20000 codes to produce a higher level representation with just 10
components. Since we aimed to find a 1-of-10 code we increased the sparsity level (in the second
stage machine) by setting $\alpha_s$ to 1. Despite the completely unsupervised training procedure, the
feature detectors in the second stage machine look like digit prototypes as can be seen in fig. 1-G.
The hierarchical unsupervised feature extractor is able to capture higher order correlations among
the input pixel intensities, and to discover the highly non-linear mapping from raw pixel data to the
class labels. Changing the random initialization can sometimes lead to the discovery of two different
shapes of "9" without a unit encoding the "4", for instance. Nevertheless, results are qualitatively
very similar to this one. For comparison, when training a DBN, prototypes are not recovered because
the learned code is distributed among units.
5.2 Natural Image Patches
An SESM with about the same setup was trained on a dataset of 30000 8x8 natural image patches
randomly extracted from the Berkeley segmentation dataset [20]. The input images were simply
scaled down to the range [0, 1.7], without even subtracting the mean. We have considered a 2
times overcomplete code with 128 units. The parameters $\alpha_s$, $\alpha_r$ and the learning rate were set to
0.4, 0.025, and 0.001 respectively. Some filters are localized Gabor-like edge detectors in different
positions and orientations, other are more global, and some encode the mean value (see fig. 1-H).
6 Conclusions
There are two strategies to train unsupervised machines: 1) having a contrastive term in the loss
function minimized during training, 2) constraining the internal representation in such a way that
training samples can be better reconstructed than other points in input space. We have shown that
RBM, which falls in the first class of methods, is particularly robust to channel noise: it achieves very
low RMSE and a good recognition rate. We have also proposed a novel symmetric sparse encoding
method following the second strategy which: is particularly efficient to train, has fast inference,
works without requiring any whitening or even mean removal from the input, can provide the best
recognition performance and trade-off between entropy/RMSE, and can be easily extended to a
hierarchy discovering hidden structure in the data. We have proposed an evaluation protocol to
compare different machines which is based on RMSE, entropy and, eventually, error rate when also
labels are available. Interestingly, the machine achieving the best performance in classification is the
one with the best trade-off between reconstruction error and entropy. A future avenue of work is to
understand the reasons for this "coincidence", and deeper connections between these two strategies.
Acknowledgments
We wish to thank Jonathan Goodman, Geoffrey Hinton, and Yoshua Bengio for helpful discussions. This work
was supported in part by NSF grant IIS-0535166 "toward category-level object recognition", NSF ITR-0325463
"new directions in predictive learning", and ONR grant N00014-07-1-0535 "integration and representation of
high dimensional data".
References
[1] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[2] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2006.
[3] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In NIPS 2006. MIT Press, 2006.
[4] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large-Scale Kernel Machines. MIT Press, 2007.
[5] M. Ranzato, Y. Boureau, S. Chopra, and Y. LeCun. A unified energy-based framework for unsupervised learning. In Proc. Conference on AI and Statistics (AI-Stats), 2007.
[6] E. Doi, D.C. Balcan, and M.S. Lewicki. A theoretical analysis of robust coding over noisy overcomplete channels. In NIPS. MIT Press, 2006.
[7] Y.W. Teh, M. Welling, S. Osindero, and G.E. Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235-1260, 2003.
[8] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37:3311-3325, 1997.
[9] D.D. Lee and H.S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788-791, 1999.
[10] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[11] P. Lennie. The cost of cortical computation. Current Biology, 13:493-497, 2003.
[12] J.F. Murray and K. Kreutz-Delgado. Learning sparse overcomplete codes for images. The Journal of VLSI Signal Processing, 45:97-110, 2008.
[13] G.E. Hinton and R.S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In NIPS, 1994.
[14] G.E. Hinton, P. Dayan, and M. Revow. Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks, 8:65-74, 1997.
[15] M.S. Lewicki and T.J. Sejnowski. Learning overcomplete representations. Neural Computation, 12:337-365, 2000.
[16] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F.J. Huang. A tutorial on energy-based learning. In G. Bakir et al., editors, Predicting Structured Data. MIT Press, 2006.
[17] P. Sallee and B.A. Olshausen. Learning sparse multiscale image representations. In NIPS. MIT Press, 2002.
[18] http://yann.lecun.com/exdb/mnist/.
[19] G.E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.
[20] http://www.cs.berkeley.edu/projects/vision/grouping/segbench/.
Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons
Emre Neftci¹, Elisabetta Chicca¹, Giacomo Indiveri¹, Jean-Jacques Slotine², Rodney Douglas¹
1 Institute of Neuroinformatics, UNI|ETH, Zurich
2 Nonlinear Systems Laboratory, MIT, Cambridge, Massachusetts, 02139
[email protected]
Abstract
A non-linear dynamic system is called contracting if initial conditions are forgotten exponentially fast, so that all trajectories converge to a single trajectory.
We use contraction theory to derive an upper bound for the strength of recurrent
connections that guarantees contraction for complex neural networks. Specifically, we apply this theory to a special class of recurrent networks, often called
Cooperative Competitive Networks (CCNs), which are an abstract representation
of the cooperative-competitive connectivity observed in cortex. This specific type
of network is believed to play a major role in shaping cortical responses and selecting the relevant signal among distractors and noise. In this paper, we analyze
contraction of combined CCNs of linear threshold units and verify the results of
our analysis in a hybrid analog/digital VLSI CCN comprising spiking neurons and
dynamic synapses.
1
Introduction
Cortical neural networks are characterized by a large degree of recurrent excitatory connectivity,
and local inhibitory connections. This type of connectivity among neurons is remarkably similar,
across all areas in the cortex [1]. It has been argued that a good candidate model for a canonical
micro-circuit, potentially used as a general purpose cortical computational unit in the cortices, is
the soft Winner-Take-All (WTA) circuit [1], or the more general class of Cooperative Competitive
Networks [2] (CCN). A CCN is a set of interacting neurons, in which cooperation is achieved by local recurrent excitatory connections and competition is achieved via a group of inhibitory neurons,
driven by the excitatory neurons and inhibiting them (see Figure 1). As a result, CCNs perform
both common linear operations as well as complex non-linear operations. The linear operations
include analog gain (linear amplification of the feed-forward input, mediated by the recurrent excitation and/or common mode input), and locus invariance [3]. The non-linear operations include
non-linear selection or soft winner-take-all (WTA) behavior [2, 4, 5], signal restoration [4, 6], and
multi-stability [2, 5]. CCN networks can be modeled using linear threshold units, as well as recurrent networks of spiking neurons. The latter can be efficiently implemented in silicon using
Integrate-and-Fire (I&F) neurons and dynamic synapses [7]. In this work we use a prototype VLSI
CCN device, comprising 128 low power I&F neurons [8] and 4096 dynamic synapses [9] that operate in real-time, in a massively parallel fashion. The main goal of this paper is to address the
open question of how to determine network parameters, such as the strength of recurrent excitatory
couplings or global inhibitory couplings, to create well-behaving complex networks composed of
combinations of neural computational modules (such as CCNs) as depicted in Figure 1. The theoretical foundations used to address these problems are based on contraction theory [10]. By applying
this theory to CCN models of linear threshold units and to combinations of them we find upper
bounds to contraction conditions. We then test the theoretical results on the VLSI CCN of spiking
neurons, and on a combination of two mutually coupled CCNs. We show how the experimental data
presented are consistent with the theoretical predictions.
Figure 1: CCNs and combinations of CCNs. (a) A CCN consisting of a population of nearest neighbor connected excitatory neurons (blue) receiving external input and an inhibitory neuron which
receives input from all the excitatory neurons and inhibits them back (red). (b) Photo of the VLSI
CCN Chip comprising I&F neurons. (c) Three coupled CCNs, showing examples of connectivity
patterns between them.
2 CCN of linear threshold units
Neural network models of linear threshold units (LTUs) ignore many of the non-linear processes
that occur at the synaptic level and contain, by definition, no information about spike timing. However, networks of LTUs can functionally behave as networks of I&F neurons in a wide variety of
cases [11]. Similarly, boundary conditions found for LTU networks can often be applied also to their
I&F neuron network counterparts. For this reason, we start by analyzing a network of LTUs whose
structure is analogous to the one of the VLSI CCN of I&F neurons, and derive sufficient boundary
conditions for contraction.
If we consider a CCN of recurrently connected LTUs according to a weight matrix W, as shown on
Figure 1, we can express the network dynamics as:
$$\tau_i \frac{d}{dt} x_i = -x_i + g\big((W x)_i + b_i\big), \quad \forall i = 1, \ldots, N \qquad (1)$$
where $N$ is the total number of neurons in the system, the function $g(x) = \max(x, 0)$ is a half-wave
rectification non-linearity to ensure that $x \equiv (x_1, \ldots, x_N)^T$ remains positive, $b_i$ are the external inputs
applied to the neurons and $\tau_i$ are the time constants of the neurons. We assume that neurons of each
type (i.e. excitatory or inhibitory) have identical dynamics: we denote the time constant of excitatory
neurons with $\tau_{ex}$ and the one of inhibitory neurons with $\tau_{ih}$. Throughout the paper, we will use the
following notation for the weights: $w_s$ for self excitation, $w_{e1}$, $w_{e2}$ for 1st and 2nd nearest neighbor
excitation respectively, and $w_{ie}$, $w_{ei}$ for the couplings between the inhibitory and excitatory neurons (in the two directions). The W matrix
has the following shape:
$$W = \begin{pmatrix} w_s & w_1 & w_2 & 0 & \cdots & 0 & w_2 & w_1 & -w_{ei} \\ \vdots & \ddots & & & & & & \ddots & \vdots \\ w_1 & w_2 & 0 & \cdots & 0 & w_2 & w_1 & w_s & -w_{ei} \\ w_{ie} & w_{ie} & w_{ie} & \cdots & \cdots & w_{ie} & w_{ie} & w_{ie} & 0 \end{pmatrix} \qquad (2)$$
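As a concrete illustration of Eqs. (1) and (2), the following sketch (ours, with placeholder weight values and time constants, not the authors' code) builds a small W of this shape and integrates the rate dynamics with forward Euler.

```python
import numpy as np

def build_W(n_exc=10, w_s=0.1, w_1=0.2, w_2=0.1, w_ei=0.5, w_ie=0.2):
    # Excitatory ring with self (w_s), 1st (w_1) and 2nd (w_2) neighbor
    # couplings; one global inhibitory neuron in the last row/column (Eq. (2)).
    n = n_exc + 1
    W = np.zeros((n, n))
    for i in range(n_exc):
        W[i, i] = w_s
        for d, w in ((1, w_1), (2, w_2)):
            W[i, (i - d) % n_exc] = w
            W[i, (i + d) % n_exc] = w
        W[i, n_exc] = -w_ei          # inhibition onto excitatory neurons
        W[n_exc, i] = w_ie           # excitatory drive onto the inhibitory neuron
    return W

def simulate(W, b, tau, T=1.0, dt=1e-3):
    # Forward-Euler integration of tau_i dx_i/dt = -x_i + g((Wx)_i + b_i)
    x = np.zeros(W.shape[0])
    for _ in range(int(T / dt)):
        x += dt / tau * (-x + np.maximum(W @ x + b, 0.0))
    return x

W = build_W()
tau = np.full(11, 0.02)   # 20 ms time constants (placeholder values)
b = np.concatenate([np.exp(-0.5 * ((np.arange(10) - 3) / 1.5) ** 2), [0.0]])
print(simulate(W, b, tau))
```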
A CCN can be used to implement a WTA computation. Depending on the strength of the connections, a CCN can implement a Hard (HWTA) or Soft (SWTA) WTA. A HWTA implements a max
operation or selection mechanism: only the neuron receiving the strongest input can be active and
all other neurons are suppressed by global inhibition. A SWTA implements more complex operations
such as non-linear selection, signal restoration, and multi-stability: one or several groups of neurons can be active at the same time, neurons belonging to the same group cooperate through local
excitation, and different groups compete through global inhibition. The activity of the "winning" group
of neurons can be amplified while other groups are suppressed. Depending on the strength of inhibitory and excitatory couplings, different regimes are observed. Specifically, in Sec. 4 we compare
a weakly coupled configuration, which guarantees contraction, with a strongly coupled configuration in which the output of the network depends on the input and the history, showing hysteretic
(non-contracting) behaviors in which the selected "winning" group has advantages over other groups
of neurons because of the recurrent excitation.
3 Contraction theory applied to CCNs of linear threshold units
3.1 Contraction of a single network
A formal analysis of contraction theory applied to non-linear systems has been described in [10, 12].
Here we present an overview of the theory applied to the system of Eq. (1).
In a contracting system, all the trajectories converge to a single trajectory exponentially fast, independent of the initial conditions. In particular, if the system has a steady state solution then, by
definition, the state will contract and converge to that solution exponentially fast. Formally, the system is contracting if $\frac{d}{dt}\|\delta x\|$ is uniformly negative (i.e. negative in the entire state space), where
$\delta x$ corresponds to the distance between two neighboring trajectories at a given time. In fact, by path
integration, we have $\frac{d}{dt}\int_{P_1}^{P_2}\|\delta x\| < 0$ where $P_1$ and $P_2$ are two points of state space (not necessarily
neighboring). This leads to the following theorem:
neighboring). This leads to the following theorem:
Consider a system whose dynamics is given by the differential equations dtd x = f(x,t). The system is
said to be contracting if all its trajectories converge exponentially to a single trajectory. A sufficient
condition is that the symmetric part of the Jacobian J = ??x f is uniformly negative definite. This
condition can be written more explicitly as
1
(J + J? ) ? ?? I
?? > 0 , ?x, ?t ? 0 Js ?
2
where I is the identity matrix and Js is the symmetric part of J It is equivalent to Js having all its
eigenvalues uniformly negative [13].
We can define more generally a local coordinate transformation ? z=?? x, where ?(x,t) is a square
matrix, such that M(x,t) = ?T ? is a uniformly positive definite, symmetric and continuously differentiable metric. Note that the coordinate system z(x,t) does not need to exist, and will not in the
general case, but ? z and ? z? ? z can always be defined [14]. Then, in this metric one can compute
the generalized Jacobian F = ( dtd ? + ?J)??1 . If the symmetric part of the generalized Jacobian,
Fs , is negative definite then the system is contracting. In a suitable metric it has been shown that
this condition becomes sufficient and necessary [10]. In particular, if ? is constant, Fs is negative
definite if and only if (MF)s is negative definite. In fact, as Fs = (??1 )T (MJ)s ??1 , then the condition vT Fs v < 0 ?v ? RN (negative definite matrix) is equivalent to (vT (??1 )T )(MJ)s (??1 v) <
0 ???1 v ? RN . Consequently, we can always choose a constant M to simplify our equations.
Let us now see under which conditions the system defined by Eq. (1) is contracting. Except for the
rectification non?linearity, the full system is a linear time?invariant (LTI) system, and it has a fixed
point [15]. A common alternative to the half-wave rectification function is the sigmoid, in which
case the Jacobian becomes differentiable. If we define fi (x,t) as
d
1
1
fi (x,t) ? xi = ? xi + g((Wx)i + bi )
(3)
dt
?i
?i
then the Jacobian matrix is given by Ji j = ??x j fi (x,t) = ? ?1i ?i j + ?1i g? (yi ) wi j , where yi = (Wx)i + b
and ?i is the time constant of neuron i, with ?i = ?ex for the excitatory neurons and ?i = ?ih for the
inhibitory ones. We assume that the wei and wie weights are not zero so we can use the constant
metric:
?
?
?ex 0
0
? .
?
..
M = ? .. . . .
(4)
?
.
0
...
wei
wie ?ih
which is positive definite. With this metric, MJ can be written MJ = ?I + D K, where Di j =
g? (yi )?i j , and K is similar to W but with wei in place of wie . Since g is sigmoidal (and thus it and its
derivative are both bounded), we can then use the method proposed in [16] to determine a sufficient
condition for contraction. This leads to a condition of the form ?max < 0, where
?max = 2we1 + 2we2 + ws ? 1
(5)
A graphical representation of the boundaries defined by this contraction condition is provided in
Figure 2. The term |?max | is called the contraction rate with respect to metric M. It is of particular
interest because it is a lower bound for the rate at which the system converges to its solution in that
metric.
3
Figure 2: Qualitative phase diagram for a single CCN of LTUs. We show here the possible regimes
of the given in Eq. (1) as a function of excitation and inhibition. In the region D the rates would
grow without bounds if there were no refractory period for the neurons. We see that a system which
is unstable without inhibition cannot be in region A (i.e. within the boundaries of Eq. (5)). Note,
however, that we do not quantitatively know the boundaries between B and C and between C and D
3.2
Contraction of feed?back combined CCNs
One of the powerful features of contraction theory is the following: if a complex system is composed
of coupled (feed?forward and feed?back) subsystems that are individually contracting, then it is
possible to find a sufficient condition for contraction without computing the system?s full Jacobian.
In addition it is possible to compute a lower bound for the full system?s contraction rate. Let Fs
be the symmetric part of the Jacobian of two bi?directionally coupled subsystems, with symmetric
feed?back couplings. Then Fs can be written with four blocks of matrices:
F1s G
Fs =
(6)
G? F2s
where F1s and F2s refer to the Jacobian of the individual, decoupled subsystems, while G and G? are
the feed?back coupling components. If we assume both subsystems are contracting, then a sufficient
condition for contraction of the overall system is given by [17]:
|?max (F1s )| |?max (F2s )| > ? 2 (G) ?t > 0, uni f ormly
(7)
where |?max (?)| is the contraction rate with respect to the used metric and ? (G) is the largest eigenvalue of G? G. By the eigenvalue interlacing theorem [13] we have that the contraction rate of the
combined system is given by ?max (Fs ) ? mini ? (Fis ) i = 1, 2.
For the specific example of a combined system comprising two identical subsystems coupled by a
uniform coupling matrix G =w f b ? I we have ? 2 (G) = w2f b . The combined system is contracting if:
|w f b | < ?max
(8)
The results obtained with this analysis can be generalized to more than two combined subsystems,
and with different types of coupling matrices [17]. Note that in a feed?forward or a negative?
feedback case (i.e. at least one of the ?G?blocks? in the non?symmetric form is negative semidefinite), the system is automatically contracting provided that both subsystems are contracting. Given
this, the condition for contraction of the combined system described by Eq. (8) becomes: w f b < ?max .
Note that the contraction rate is an observable quantity, therefore one can build a contracting system
consisting of an arbitrary number of CCNs as follows: 1. Determine the contraction rate of two
CCNs by using Eq. (5) or by measuring it. 2. Use Eq. (7) to set the weight of the relation. Compute
the upper bound to the contraction rate of the combined system as explained above. 3. Repeat the
procedure for a new CCN and the combined one.
4
Contraction in a VLSI CCN of spiking neurons
The VLSI device used in this work implements a CCN of spiking neurons using an array of low?
power I&F neurons with dynamic synapses [8, 18]. The chip has been fabricated using a standard
AMS 0.35?m CMOS process, and covers an area of about 10 mm2 . It contains 124 excitatory
neurons with self, 1st , 2nd , 3rd nearest?neighbor recurrent excitatory connections and 4 inhibitory
neurons (all?to?all bi?directionally connected to the excitatory neurons). Each neuron receives
input currents from a row of 32 afferent plastic synapses that use the Address Event Representation
(AER) to receive spikes. The spiking activity of the neurons is also encoded using the AER. In this
representation input and output spikes are real?time asynchronous digital events that carry analog
information in their temporal structure. We can interface the chip to a workstation, for prototyping
4
120
100
80
60
50
80
60
40
40
20
20
0
5
Time [s]
0
0
20 40
Frequency [Hz]
(a) Single trial input stimulus
0
40
100
100
Neuron
Neuron
100
50
Rate [Hz]
120
30
20
10
0
0
20 40
Frequency [Hz]
5
Time [s]
0
0
50
Neuron
100
128
(b) Single trial CCN response (c) Multiple trials mean response
Figure 3: Contraction of a single VLSI CCN. (a) A raster plot of the input stimulus(left) and the
mean firing rates(right): the membrane potential of the I&F neurons are set to a random initial
state by stimulating them with uncorrelated Poisson spike trains of constant mean frequency (up
to the dashed line). Then the network is stimulated with 2 Gaussian bumps of different amplitude
centered at Neuron 30 and Neuron 80, while, all the neurons received a constant level of uncorrelated
input during the whole trial. (b) The response of the CCN to the stimulus presented in (a). (c)
Mean responses of 100 trials, calculated after the red dashed line with error bars. The shaded area
represents the mean input stimulus presented throughout the experiment. The system selects the
largest input and suppresses the noise and the smaller bump, irrespective of initial conditions and
noise. Neurons 124 to 128 are inhibitory neurons and do not receive external input.
experiments using a dedicated PCI-AER board [19]. This board allows us to stimulate the synapses
on the chip (e.g. with synthetic trains of spikes), monitor the activity of the I&F neurons, and map
events from one neuron to a synapse belonging to a neuron on the same chip and/or on a different
chip. An analysis of the dynamics of our VLSI I&F neurons can be found in [20] and although the
leakage term in our implemented neurons is constant, it has been shown that such neurons exhibit
responses qualitatively similar to standard linear I&F neurons [20].
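For intuition on this neuron model, here is a minimal simulation of a linear I&F neuron with a constant leak term (our sketch; all values are placeholders, and the actual circuit dynamics are analyzed in [20]).

```python
import numpy as np

def constant_leak_if(i_in, leak=0.5, v_thr=1.0, dt=1e-3):
    # Linear I&F with a constant (not voltage-dependent) leak; reset on threshold
    v, spikes = 0.0, []
    for t, i in enumerate(i_in):
        v = max(v + dt * (i - leak), 0.0)
        if v >= v_thr:
            spikes.append(t * dt)
            v = 0.0
    return spikes

print(len(constant_leak_if(np.full(5000, 2.0))))  # spike count for constant input
```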
A steady state solution is easily computable for a network of linear threshold units [5, 21]: it is
a fixed point in state space, i.e. a set of activities for the neurons. In a VLSI network of I&F
neurons the steady state will be modified by mismatch and the activities will fluctuate due to external
and microscopic perturbations (but remain in its vicinity if the system is contracting). To prove
contraction experimentally in these types of networks, one would have to apply an input and test with
all possible initial conditions. This is clearly not possible, but we can verify under which conditions
the system is compatible with contraction by repeating the same experiment with different initial
conditions (see Sec. 4.1) and under which conditions the system is not compatible with contraction
by observing whether the system settles to different solutions when stimulated with different initial conditions
(see Sec. 4.3).
4.1 Convergence to a steady state with a static stimulus
The VLSI CCN is stimulated by uncorrelated Poisson spike trains whose mean rates form two
Gaussian-shaped bumps along the array of neurons, one with a smaller amplitude than the other,
superimposed on background noise (see Figure 3a). In a SWTA configuration, our CCNs should
select and amplify the largest bump while suppressing the smaller one and the noise. We set the
neurons into random initial conditions by stimulating them with uncorrelated Poisson spike trains
with a spatially uniform and constant mean rate, before applying the real input stimulus (before
the dashed line in Figure 3a). Figure 3b shows the response of the CCN to this spike train, and
Figure 3c is the response averaged over 100 trials. This experiment shows that regardless of the
initial conditions, the final response of the CCN in an SWTA configuration is always the same (see
the small error bars on Figure 3c), as we would expect from a contracting system.
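The stimulus used here is easy to reproduce in simulation; the following sketch (ours, with placeholder amplitudes and widths) generates uncorrelated Poisson spike trains whose mean rates form two Gaussian bumps over uniform background noise.

```python
import numpy as np

def poisson_bump_raster(n_neurons=124, t_max=5.0, dt=1e-3,
                        centers=(30, 80), amps=(80.0, 40.0),
                        width=5.0, base=5.0):
    # Mean-rate profile: two Gaussian bumps plus uniform background (rates in Hz)
    idx = np.arange(n_neurons)
    rates = base + sum(a * np.exp(-0.5 * ((idx - c) / width) ** 2)
                       for c, a in zip(centers, amps))
    # Independent (uncorrelated) Poisson spike trains, one row per time step
    return np.random.rand(int(t_max / dt), n_neurons) < rates * dt

spikes = poisson_bump_raster()
print(spikes.mean(0) / 1e-3)   # empirical rates, approximately the profile above
```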
4.2 Convergence with non-static stimulus and contraction rate
As the condition for contraction does not depend on the external input, it will also hold for time-varying inputs. For example, an interesting input stimulus is a bump of activity moving along the
array of neurons at a constant speed. In this case, the firing rates produced by the chip carry
[Figure 4, panels (a)-(c): raster plots (neuron index versus time in s): (a) single trial, weak CCN; (b) single trial, strong CCN; (c) normalized activity of neuron #25 over time for the sequencer input, the weak CCN, and the strong CCN; see the caption below.]
Figure 4: Contraction rate in VLSI CCNs using non-static stimulation. The input changed from an
initial stage, where all the neurons were randomly stimulated with constant mean frequencies (up to
3 s), to a second stage in which the moving stimulus (freshly generated from trial to trial) is applied.
This stimulus consists of a bump of activity that is shifted from one neuron to the next. Panels (a)
and (b) show trials for two different configurations (weak and strong) and the colors indicate the
firing rates calculated with a 300 ms sliding time window. The panel (c) compares the mean rates
of neuron #25 in the weakly coupled CCN (green), the strong CCN (blue) and the input (red), all
normalized to their peak of activity and calculated over 50 trials. We see how the blue line is delayed
compared to the red and green lines: the stronger recurrent couplings reduce the contraction rate.
information about the system's contraction rate. We measured the response of the chip to such a stimulus,
for both strong and weak recurrent couplings (see Figure 4). The strong coupling case produces
slower responses to the input than the weak coupling case, as expected from a system having a
lower contraction rate (see Figure 4c). The system's condition for contraction does not depend on
the individual neuron's time constants, although the contraction rate in the original metric does.
This also applies to the non-static input case, where the system will converge to the expected solution, independently of the neurons' time constants. Local mismatch effects in the VLSI chip lead to
an effective weight matrix whose elements $w_s$, $w_1$, $w_2$, $w_{ie}$ are not identical throughout the array.
This combined with the high gain of the strong coupling, and the variance produced by the input
Poisson spike trains during the initial phase, explains the emergence of "pseudo-random" winners
around neurons 30, 60 and 80 in Figure 4b.
4.3 A non-contracting example
We expect a CCN to be non-contracting when the coupling is strong: in this condition the CCN
exhibits a hysteretic behavior [22], so the position of the winner strongly depends on the network's
initial conditions. Figure 5 illustrates this behavior with a CCN with very strong recurrent weights.
4.4 Contraction of combined systems
By using a multi-chip AER communication infrastructure [19] we can connect multiple chips
together with arbitrary connectivity matrices (e.g. G in Sec. 3.2), and repeat experiments analogous
to the ones of Sec. 4.1. Figure 6 shows the response of two CCNs, combined via a connectivity
matrix as shown in Figure 6b, to three input bumps of activity in a contracting configuration.
5 Conclusion
We applied contraction theory to combined Cooperative Competitive Networks (CCN) of Linear
Threshold Units (LTU) and determined sufficient conditions for contraction. We then tested the theoretical predictions on neuromorphic VLSI implementations of CCNs, by measuring their response
to different types of stimuli with different random initial conditions. We used these results to determine parameter settings of single and combined networks of spiking neurons which make the system
behave as a contracting one. Similarly, we verified experimentally that CCNs with strong recurrent
couplings are not contracting as predicted by the theory.
[Figure 5, panels (a)-(f): raster plots (neuron index versus time in s) and mean rates (Hz): (a) initial condition I, (d) initial condition II, (b)/(e) weak CCN, contracting, (c)/(f) strong CCN, non-contracting; see the caption below.]
Figure 5: VLSI CCN in a non-contracting configuration. We compare the CCN with very strong
lateral recurrent excitation and low inhibition to a weakly coupled CCN. The figures present the
raster plot and mean rates of the CCNs response (calculated after the dashed line) to the same stimuli
starting from two different initial conditions. Panels (b) and (e) show the response of a contracting
CCN, whereas panels (c) and (f) show that the system response depends on the initial conditions of
(a) and (d). Therefore the "Strong CCN" is non-contracting.
[Figure 6, panels (a)-(e): (a) CCN1 response (neuron index versus time in s, rates in Hz); (b) connectivity matrix coupling the two CCNs (Neuron CCN1 versus Neuron CCN2); (c) input stimulus; (d) CCN2 trial; (e) mean response of the CCNs; see the caption below.]
Figure 6: Contraction in combined CCNs. (a) and (d) Single trial responses of CCN1 and CCN2
to the input stimulus shown in (c); (b) Connectivity matrix that couples the two CCNs (inverted
identity matrix); (e) Mean response of CCNs, averaged over 20 trials (data points) superimposed to
average input frequencies (shaded area). The response of the coupled CCNs converged to the same
mean solution, consistent with the hypothesis that the combined system is contracting.
Acknowledgments
This work was supported by the DAISY (FP6-2005-015803) EU grant, and by the Swiss National
Science Foundation under Grant PMPD2-110298/1. We thank P. Del Giudice and V. Dante (ISS),
for original design of the PCI-AER board and A. Whatley for help with the software of the PCI-AER
board.
References
[1] R.J. Douglas and K.A.C. Martin. Neural circuits of the neocortex. Annual Review of Neuroscience, 27:419-451, 2004.
[2] S. Amari and M.A. Arbib. Competition and cooperation in neural nets. In J. Metzler, editor, Systems Neuroscience, pages 119-165. Academic Press, 1977.
[3] D. Hansel and H. Sompolinsky. Methods in Neuronal Modeling, chapter Modeling Feature Selectivity in Local Cortical Circuits, pages 499-567. MIT Press, Cambridge, Massachusetts, 1998.
[4] P. Dayan and L.F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.
[5] R. Hahnloser, R. Sarpeshkar, M.A. Mahowald, R.J. Douglas, and S. Seung. Digital selection and analog amplification co-exist in an electronic circuit inspired by neocortex. Nature, 405(6789):947-951, 2000.
[6] R.J. Douglas, M.A. Mahowald, and K.A.C. Martin. Hybrid analog-digital architectures for neuromorphic systems. In Proc. IEEE World Congress on Computational Intelligence, volume 3, pages 1848-1853. IEEE, 1994.
[7] G. Indiveri. Synaptic plasticity and spike-based computation in VLSI networks of integrate-and-fire neurons. Neural Information Processing - Letters and Reviews, 2007. (In press).
[8] G. Indiveri, E. Chicca, and R. Douglas. A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity. IEEE Transactions on Neural Networks, 17(1):211-221, Jan 2006.
[9] C. Bartolozzi and G. Indiveri. Synaptic dynamics in analog VLSI. Neural Computation, 19:2581-2603, Oct 2007.
[10] Winfried Lohmiller and Jean-Jacques E. Slotine. On contraction analysis for non-linear systems. Automatica, 34(6):683-696, 1998.
[11] B. Ermentrout. Reduction of conductance-based models with slow synapses to neural nets. Neural Computation, 6:679-695, 1994.
[12] Jean-Jacques E. Slotine. Modular stability tools for distributed computation and control. International J. of Adaptive Control and Signal Processing, 17(6):397-416, 2003.
[13] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[14] Winfried Lohmiller and Jean-Jacques E. Slotine. Nonlinear process control using contraction theory. A.I.Ch.E. Journal, March 2000.
[15] S.H. Strogatz. Nonlinear Dynamics and Chaos. Perseus Books, 1994.
[16] O. Faugeras and J.-J. Slotine. Synchronization in neural fields. 2007.
[17] Wei Wang and Jean-Jacques E. Slotine. On partial contraction analysis for coupled nonlinear oscillators. Biological Cybernetics, 92(1):38-53, 2005.
[18] C. Bartolozzi, S. Mitra, and G. Indiveri. An ultra low power current-mode filter for neuromorphic systems and biomedical signal processing. In IEEE Proceedings on Biomedical Circuits and Systems (BioCAS06), pages 130-133, 2006.
[19] E. Chicca, G. Indiveri, and R.J. Douglas. Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons. In B. Schölkopf, J.C. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, Cambridge, MA, Dec 2007. Neural Information Processing Systems Foundation, MIT Press. (In press).
[20] S. Fusi and M. Mattia. Collective behavior of networks with linear (VLSI) integrate and fire neurons. Neural Computation, 11:633-652, 1999.
[21] Richard H.R. Hahnloser, H. Sebastian Seung, and Jean-Jacques Slotine. Permitted and forbidden sets in symmetric threshold-linear networks. Neural Computation, 15:621-638, 2003.
[22] E. Chicca. A Neuromorphic VLSI System for Modeling Spike-Based Cooperative Competitive Neural Networks. PhD thesis, ETH Zürich, Zürich, Switzerland, April 2006.
HM-BiTAM: Bilingual Topic Exploration, Word Alignment, and Translation
Bing Zhao
IBM T. J. Watson Research
[email protected]
Eric P. Xing
Carnegie Mellon University
[email protected]
Abstract
We present a novel paradigm for statistical machine translation (SMT), based on
a joint modeling of word alignment and the topical aspects underlying bilingual
document-pairs, via a hidden Markov Bilingual Topic AdMixture (HM-BiTAM).
In this paradigm, parallel sentence-pairs from a parallel document-pair are coupled via a certain semantic-flow, to ensure coherence of topical context in the
alignment of mapping words between languages, likelihood-based training of
topic-dependent translational lexicons, as well as in the inference of topic representations in each language. The learned HM-BiTAM can not only display
topic patterns like methods such as LDA [1], but now for bilingual corpora; it
also offers a principled way of inferring optimal translation using document context. Our method integrates the conventional HMM model, a key component
for most of the state-of-the-art SMT systems, with the recently proposed BiTAM
model [10]; we report an extensive empirical analysis (in many ways complementary to the description-oriented [10]) of our method in three aspects: bilingual
topic representation, word alignment, and translation.
1 Introduction
Most contemporary SMT systems view parallel data as independent sentence-pairs whether or
not they are from the same document-pair. Consequently, translation models are learned only at
sentence-pair level, and document contexts, essential factors for translating documents, are generally overlooked. Indeed, translating documents differs considerably from translating a group of
unrelated sentences. A sentence, when taken out of the context from the document, is generally more
ambiguous and less informative for translation. One should avoid destroying a coherent document
by simply translating it into a group of sentences which are indifferent to each other and detached
from the context.
Developments in statistics, genetics, and machine learning have shown that latent semantic aspects
of complex data can often be captured by a model known as the statistical admixture (or mixed
membership model [4]). Statistically, an object is said to be derived from an admixture if it consists
of a bag of elements, each sampled independently or coupled in a certain way, from a mixture
model. In the context of SMT, each parallel document-pair is treated as one such object. Depending
on the chosen modeling granularity, all sentence-pairs or word-pairs in a document-pair correspond
to the basic elements constituting the object, and the mixture from which the elements are sampled
can correspond to a collection of translation lexicons and monolingual word frequencies based on
different topics (e.g., economics, politics, sports, etc.). Variants of admixture models have appeared
in population genetics [6] and text modeling [1, 4].
Recently, a Bilingual Topic-AdMixture (BiTAM) model was proposed to capture the topical aspects
of SMT [10]; word-pairs from a parallel document-pair follow the same weighted mixtures of translation lexicons, inferred for the given document-context. The BiTAMs generalize over IBM Model1; they are efficient to learn and scalable for large training data. However, they do not capture locality
constraints of word alignment, i.e., that words "close-in-source" are usually aligned to words "close-in-target", under document-specific topical assignment. To incorporate such constraints, we integrate
the strengths of both HMM and BiTAM, and propose a Hidden Markov Bilingual Topic-AdMixture
model, or HM-BiTAM, for word alignment to leverage both locality constraints and topical context
underlying parallel document-pairs.
In the HM-BiTAM framework, one can estimate topic-specific word-to-word translation lexicons
(lexical mappings), as well as the monolingual topic-specific word-frequencies for both languages,
based on parallel document-pairs. The resulting model offers a principled way of inferring optimal
translation from a given source language in a context-dependent fashion. We report an extensive
empirical analysis of HM-BiTAM, in comparison with related methods. We show our model?s effectiveness on the word-alignment task; we also demonstrate two application aspects which were
untouched in [10]: the utility of HM-BiTAM for bilingual topic exploration, and its application for
improving translation qualities.
2 Revisit HMM for SMT
An SMT system can be formulated as a noisy-channel model [2]:
$$\hat{e} = \arg\max_{e} P(e|f) = \arg\max_{e} P(f|e)P(e) \qquad (1)$$
where a translation corresponds to searching for the target sentence $\hat{e}$ which best explains the source
sentence $f$. The key component is $P(f|e)$, the translation model; $P(e)$ is the monolingual language
model. In this paper, we generalize $P(f|e)$ with topic-admixture models.
An HMM implements the "proximity-bias" assumption, namely that words "close-in-source" are aligned
to words "close-in-target", which is effective for improving word-alignment accuracies, especially
for linguistically close language-pairs [8]. Following [8], to model word-to-word translation, we
introduce the mapping j → a_j, which assigns a French word f_j in position j to an English word
e_i in position i = a_j, denoted as e_{a_j}. Each (ordered) French word f_j is an observation, and it is
generated by an HMM state defined as [e_{a_j}, a_j], where the alignment indicator a_j for position j is
considered to have a dependency on the previous alignment a_{j-1}. Thus a first-order HMM for an
alignment between e ≡ e_{1:I} and f ≡ f_{1:J} is defined as:
p(f_{1:J} | e_{1:I}) = \sum_{a_{1:J}} \prod_{j=1}^{J} p(f_j | e_{a_j}) p(a_j | a_{j-1}),    (2)
where p(a_j | a_{j-1}) is the state transition probability; J and I are the sentence lengths of the French and
English sentences, respectively. The transition model enforces the proximity-bias. An additional
pseudo word "NULL" is used at the beginning of English sentences for the HMM to start with. The
HMM implemented in GIZA++ [5] is used as our baseline, which includes refinements such as
special treatment of a jump to the NULL word. A graphical model representation for such an HMM
is illustrated in Figure 1 (a).
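For concreteness, the sketch below evaluates Eq. (2) by the standard forward recursion, marginalising the alignment chain in O(J I^2) time. The emission and transition tables are toy stand-ins for trained GIZA++-style parameters, not values from this paper.

import numpy as np

def hmm_alignment_likelihood(emit, trans, init):
    """Forward recursion for Eq. (2): p(f_{1:J} | e_{1:I}).

    emit[j, i]  -- p(f_j | e_i), the lexical translation probabilities
    trans[k, i] -- p(a_j = i | a_{j-1} = k), the proximity-biased jumps
    init[i]     -- distribution of the first alignment a_1
    """
    J, I = emit.shape
    alpha = init * emit[0]                  # alpha_1(i) = p(a_1 = i) p(f_1 | e_i)
    for j in range(1, J):
        alpha = (alpha @ trans) * emit[j]   # marginalise the previous alignment
    return alpha.sum()                      # sum over the final alignment state

# Toy usage with made-up probabilities (3 French words, 2 English words).
emit = np.array([[0.4, 0.1], [0.2, 0.5], [0.1, 0.3]])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
init = np.array([0.5, 0.5])
print(hmm_alignment_likelihood(emit, trans, init))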
[Figure 1: The graphical model representations of (a) HMM for word alignment, and (b) HM-BiTAM, for parallel corpora. Circles represent random variables, hexagons denote parameters, and observed variables are shaded.]
3 Hidden Markov Bilingual Topic-AdMixture
We assume that in training corpora of bilingual documents, the document-pair boundaries are
known; indeed they serve as the key information for defining document-specific topic weights
underlying aligned sentence-pairs or word-pairs. To simplify the outline, topics here are sampled at the sentence-pair level; topics sampled at the word-pair level can be easily derived following the
same algorithms, in the spirit of [10]. Given a document-pair (F, E) containing N parallel
sentence-pairs (e_n, f_n), HM-BiTAM implements the following generative scheme.
3.1 Generative Scheme of HM-BiTAM
Given a conjugate prior Dirichlet(α), the topic-weight vector (hereafter, TWV) θ_m for each
document-pair (F_m, E_m) is sampled independently. Let the unsubscripted θ denote the TWV
of a typical document-pair (F, E); let the collection of topic-specific translation lexicons be B ≡ {B_k},
where B_{i,j,k} = P(f = f_j | e = e_i, z = k) is the conditional probability of translating e into f under a
given topic indexed by z; and let β ≡ {β_k} be the topic-specific monolingual model, which can be the usual
LDA-style monolingual unigrams. The sentence-pairs {f_n, e_n} are drawn independently from a
mixture of topics. Specifically (as illustrated also in Fig. 1 (b)):
1. θ ∼ Dirichlet(α);
2. For each sentence-pair (f_n, e_n):
   (a) z_n ∼ Multinomial(θ): sample the topic;
   (b) e_{n,1:I_n} | z_n ∼ P(e_n | z_n; β): sample all English words from a monolingual topic model (e.g., a unigram model);
   (c) For each position j_n = 1, ..., J_n in f_n:
       i. a_{j_n} ∼ P(a_{j_n} | a_{j_n-1}; T): sample an alignment link a_{j_n} from a first-order Markov process;
       ii. f_{j_n} ∼ P(f_{j_n} | e_n, a_{j_n}, z_n; B): sample a foreign word f_{j_n} according to a topic-specific translation lexicon.
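This scheme can be transcribed directly into a sampler. The sketch below uses toy vocabulary sizes, fixed sentence lengths, and randomly initialised parameters purely for illustration, and omits the NULL-word refinement; it is not the system described in the experiments.

import numpy as np

rng = np.random.default_rng(1)
K, V_E, V_F, N = 3, 50, 40, 5                  # topics, vocab sizes, sentence-pairs (toy)
I_len, J_len = 6, 7                            # English / French sentence lengths (toy)
alpha = np.ones(K)                             # Dirichlet prior
beta = rng.dirichlet(np.ones(V_E), K)          # beta[k]: topic-k English unigram
B = rng.dirichlet(np.ones(V_F), (K, V_E))      # B[k, e]: p(f | e, z = k)
T_mat = rng.dirichlet(np.ones(I_len), I_len)   # p(a_j | a_{j-1})

theta = rng.dirichlet(alpha)                   # 1. theta ~ Dirichlet(alpha)
doc = []
for n in range(N):                             # 2. for each sentence-pair
    z = rng.choice(K, p=theta)                 # (a) z_n ~ Multinomial(theta)
    e = rng.choice(V_E, size=I_len, p=beta[z]) # (b) English words from the topic unigram
    a = np.empty(J_len, dtype=int)
    f = np.empty(J_len, dtype=int)
    a[0] = rng.integers(I_len)
    for j in range(J_len):                     # (c) alignments and foreign words
        if j > 0:
            a[j] = rng.choice(I_len, p=T_mat[a[j - 1]])  # i. first-order Markov jump
        f[j] = rng.choice(V_F, p=B[z, e[a[j]]])          # ii. topic-specific lexicon
    doc.append((e, f, a, z))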
Under an HM-BiTAM model, each sentence-pair consists of a mixture of latent bilingual topics;
each topic is associated with a distribution over bilingual word-pairs. Each word f is generated by
two hidden factors: a latent topic z, drawn from a document-specific distribution over K topics, and
the English word e, identified by the hidden alignment variable a.
3.2 Extracting Bilingual Topics from HM-BiTAM
Because of the parallel nature of the data, the topics of English and the foreign language will share
similar semantic meanings. This assumption is captured in our model. As shown in Figure 1(b), both
the English and foreign topics are sampled from the same distribution θ, which is a document-specific topic-weight vector.
Although there is an inherent asymmetry in the bilingual topic representation in HM-BiTAM (the
monolingual topic representations β are only defined for English, and the foreign topic representations are implicit via the topical translation models), it is not difficult to retrieve the monolingual
topic representations of the foreign language via a marginalization over the hidden word alignments. For
example, the frequency (i.e., unigram probability) of foreign word f_w under topic k can be computed by

P(f_w | k) = \sum_e P(f_w | e, B_k) P(e | β_k).    (3)
As a result, HM-BiTAM can actually be used as a bilingual topic explorer, in the LDA style and
beyond. Given paired documents, it can extract the representations of each topic in both languages
in a consistent fashion (which is not guaranteed if topics are extracted separately from each language
using, e.g., LDA), as well as the lexical mappings under each topic, based on a maximum-likelihood
or Bayesian principle. In Section 5.2, we demonstrate outcomes of this application.
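As a concrete illustration of Eq. (3), the sketch below synthesizes the foreign-language unigram of one topic from the English unigram and the topic lexicon; the toy vocabulary sizes and random parameters are placeholders, not learned values.

import numpy as np

def foreign_topic_unigram(B_k, beta_k):
    """Eq. (3): P(f_w | k) = sum_e P(f_w | e, B_k) P(e | beta_k).

    B_k[e, f]  -- topic-k translation lexicon, each row normalised over f
    beta_k[e]  -- topic-k English unigram probabilities
    Returns a length-|V_F| vector of foreign-word frequencies under topic k.
    """
    return beta_k @ B_k

rng = np.random.default_rng(2)
B_k = rng.dirichlet(np.ones(40), 50)       # |V_E| = 50, |V_F| = 40 (toy sizes)
beta_k = rng.dirichlet(np.ones(50))
p_f = foreign_topic_unigram(B_k, beta_k)
assert np.isclose(p_f.sum(), 1.0)          # a proper distribution over foreign words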
We expect that, under the HM-BiTAM model, because bilingual statistics from the word alignments a
are shared effectively across different topics, a word will have far fewer translation candidates, due
to the constraints imposed by the hidden topics; the topic-specific translation lexicons are therefore much smaller
and sharper, which gives rise to a more parsimonious and unambiguous translation model.
4 Learning and Inference
We sketch a generalized mean-field approximation scheme for inferring latent variables in HM-BiTAM, and a variational EM algorithm for estimating model parameters.
4.1 Variational Inference
Under HM-BiTAM, the complete likelihood of a document-pair (F, E) can be expressed as follows:

p(F, E, θ, z, a | α, β, T, B) = p(θ|α) P(z|θ) P(a|T) P(F | a, z, E, B) P(E | z, β),    (4)

where P(a|T) = \prod_{n=1}^{N} \prod_{j=1}^{J_n} P(a_{j_n} | a_{j_n-1}; T) represents the probability of a sequence of alignment jumps; P(F | a, z, E, B) = \prod_{n=1}^{N} \prod_{j=1}^{J_n} P(f_{j_n} | a_{j_n}, e_n, z_n, B) is the document-level translation
probability; and P(E | z, β) is the topic-conditional likelihood of the English document based on a
topic-dependent unigram as used in LDA. Apparently, exact inference under this model is infeasible,
as noted for earlier models related to, but simpler than, this one [10].
To approximate the posterior p(a, θ, z | F, E), we employ a generalized mean-field approach and
adopt the following factored approximation to the true posterior: q(θ, z, a) = q(θ|γ) q(z|φ) q(a|λ),
where q(θ|γ), q(z|φ), and q(a|λ) are re-parameterized Dirichlet, multinomial, and HMM distributions, respectively, determined by variational parameters that correspond to the expected sufficient statistics of the dependent variables of each factor [9].
As is well known in the variational inference literature, solutions for the above variational parameters can be obtained by minimizing the Kullback-Leibler divergence between q(θ, z, a) and
p(θ, z, a | F, E), or equivalently, by optimizing the lower bound of the expected (over q(·)) log-likelihood defined by Eq. (4), via a fixed-point iteration. Due to space limits, we forego a detailed
derivation, and directly give the fixed-point equations below:

\hat{γ}_k = α_k + \sum_{n=1}^{N} φ_{n,k},    (5)

\hat{φ}_{n,k} ∝ exp{Ψ(γ_k) - Ψ(\sum_{k=1}^{K} γ_k)} · exp{\sum_{i=1}^{I_n} \sum_{j=1}^{J_n} λ_{n,j,i} log β_{k,e_{i_n}}}
    · exp{\sum_{j,i=1}^{J_n,I_n} \sum_{f∈V_F} \sum_{e∈V_E} 1(f_{j_n}, f) 1(e_{i_n}, e) λ_{n,j,i} log B_{f,e,k}},    (6)

\hat{λ}_{n,j,i} ∝ exp{\sum_{i'=1}^{I_n} λ_{n,j-1,i'} log T_{i,i'}} · exp{\sum_{i'=1}^{I_n} λ_{n,j+1,i'} log T_{i',i}}
    · exp{\sum_{f∈V_F} \sum_{e∈V_E} 1(f_{j_n}, f) 1(e_{i_n}, e) \sum_{k=1}^{K} φ_{n,k} log B_{f,e,k}} · exp{\sum_{k=1}^{K} φ_{n,k} log β_{k,e_{i_n}}},    (7)

where 1(·,·) denotes an indicator function, and Ψ(·) represents the digamma function.
The vector \hat{φ}_n ≡ (\hat{φ}_{n,1}, ..., \hat{φ}_{n,K}) given by Eq. (6) represents the approximate posterior of the
topic weights for each sentence-pair (f_n, e_n). The topical information for updating \hat{φ}_n is collected
from three aspects: aligned word-pairs weighted by the corresponding topic-specific translation-lexicon probabilities, the topical distributions of the monolingual English language model, and the smoothing
factors from the topic prior.
Equation (7) gives the approximate posterior probability for the alignment between the j-th word in
f_n and the i-th word in e_n, in the form of an exponential model. Intuitively, the first two terms
represent the messages corresponding to the forward and backward passes in the HMM; the third
term represents the emission probabilities, and it can be viewed as a geometric interpolation of the
strengths of the individual topic-specific lexicons; and the last term provides further smoothing from
the monolingual topic-specific aspects.
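A minimal sketch of this fixed-point iteration is given below, specialised to a single sentence-pair (so N = 1 in Eq. (5)) with toy, randomly initialised parameters; the variable names gamma, phi and lam mirror the variational parameters above, and everything else is an illustrative assumption.

import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(3)
K, I, J = 3, 6, 7                          # topics, English / French lengths (toy)
V_E, V_F = 50, 40                          # toy vocabulary sizes
e = rng.integers(0, V_E, size=I)           # English word ids
f = rng.integers(0, V_F, size=J)           # French word ids
alpha = np.ones(K)
log_beta = np.log(rng.dirichlet(np.ones(V_E), K))       # log beta[k, e]
log_B = np.log(rng.dirichlet(np.ones(V_F), (K, V_E)))   # log B[k, e, f]
log_T = np.log(rng.dirichlet(np.ones(I), I))            # log p(a_j = i | a_{j-1} = i')
LB = log_B[:, e][:, :, f]                  # LB[k, i, j] = log B[k, e_i, f_j]

phi = np.full(K, 1.0 / K)                  # q(z = k)
lam = np.full((J, I), 1.0 / I)             # lam[j, i] = q(a_j = i)
for _ in range(20):                        # fixed-point iteration of Eqs. (5)-(7)
    gamma = alpha + phi                    # Eq. (5), specialised to N = 1
    log_phi = digamma(gamma) - digamma(gamma.sum())     # Eq. (6): Dirichlet term
    log_phi += log_beta[:, e] @ lam.sum(0)              # monolingual evidence
    log_phi += np.einsum('ji,kij->k', lam, LB)          # translation-lexicon evidence
    phi = np.exp(log_phi - log_phi.max())
    phi /= phi.sum()

    emit = np.einsum('k,kij->ij', phi, LB)              # Eq. (7): lexicon term
    mono = phi @ log_beta[:, e]                         # Eq. (7): monolingual term
    new_lam = np.empty_like(lam)
    for j in range(J):
        s = emit[:, j] + mono
        if j > 0:
            s += lam[j - 1] @ log_T        # forward message
        if j < J - 1:
            s += log_T @ lam[j + 1]        # backward message
        new_lam[j] = np.exp(s - s.max())
        new_lam[j] /= new_lam[j].sum()
    lam = new_lam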
Inference of the optimum word alignment. One of the translation model's goals is to infer the optimum word alignment: a^* = arg max_a P(a | F, E). The variational inference scheme described
above leads to an approximate alignment posterior q(a|λ), which is in fact a re-parameterized HMM.
Thus, extracting the optimum alignment amounts to applying the Viterbi algorithm to q(a|λ).
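For reference, the standard Viterbi recursion over HMM-shaped log-potentials is sketched below; the potential matrices are assumed inputs (e.g., derived from q(a|λ)) rather than quantities computed here.

import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """MAP alignment path under a (re-parameterised) HMM.

    log_init[i]      -- log-potential of the first alignment position
    log_trans[k, i]  -- log-potential of jumping from position k to i
    log_emit[j, i]   -- log-potential of aligning French word j to English word i
    """
    J, I = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((J, I), dtype=int)
    for j in range(1, J):
        scores = delta[:, None] + log_trans     # (predecessor, current) grid
        back[j] = scores.argmax(0)
        delta = scores.max(0) + log_emit[j]
    path = np.empty(J, dtype=int)
    path[-1] = delta.argmax()
    for j in range(J - 2, -1, -1):              # backtrack the best predecessors
        path[j] = back[j + 1, path[j + 1]]
    return path                                 # path[j] = a_j for French word j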
4.2 Variational EM for parameter estimation
To estimate the HM-BiTAM parameters, which include the Dirichlet hyperparameter α, the
transition matrix T, the topic-specific monolingual English unigrams {β_k}, and the topic-specific
translation lexicons {B_k}, we employ a variational EM algorithm which iterates between computing the variational distribution of the hidden variables (the E-step), as described in the previous
subsection, and optimizing the parameters with respect to the variational likelihood (the M-step).
The update equations for the M-step are:
\hat{T}_{i,i'} ∝ \sum_{n=1}^{N} \sum_{j=1}^{J_n} λ_{n,j,i} λ_{n,j-1,i'},    (8)

\hat{B}_{f,e,k} ∝ \sum_{n=1}^{N} \sum_{j=1}^{J_n} \sum_{i=1}^{I_n} 1(f_{j_n}, f) 1(e_{i_n}, e) λ_{n,j,i} φ_{n,k},    (9)

\hat{β}_{k,e} ∝ \sum_{n=1}^{N} \sum_{i=1}^{I_n} \sum_{j=1}^{J_n} 1(e_i, e) λ_{n,j,i} φ_{n,k}.    (10)
For updating the Dirichlet hyperparameter α, which is a corpus-level parameter, we resort to gradient
ascent as in [7]. The overall computational complexity of the model is linear in the number of topics.
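The M-step updates of Eqs. (8)-(10) are expected-count accumulations followed by normalisation. One possible implementation over per-sentence posteriors is sketched below; the layout of `stats` and the small smoothing constant are our assumptions, not part of the original system.

import numpy as np

def m_step(stats, V_E, V_F, K, I_max):
    """Accumulate the expected-count statistics of Eqs. (8)-(10) and normalise.

    `stats` holds per-sentence E-step outputs (e, f, lam, phi): word-id arrays,
    lam[j, i] = q(a_j = i), and phi[k] = q(z_n = k).
    """
    T_hat = np.zeros((I_max, I_max))       # transition counts, Eq. (8)
    B_hat = np.zeros((V_F, V_E, K))        # lexicon counts, Eq. (9)
    beta_hat = np.zeros((K, V_E))          # unigram counts, Eq. (10)
    for e, f, lam, phi in stats:
        J, I = lam.shape
        for j in range(1, J):
            T_hat[:I, :I] += np.outer(lam[j - 1], lam[j])
        for j in range(J):
            for i in range(I):
                B_hat[f[j], e[i]] += lam[j, i] * phi
                beta_hat[:, e[i]] += lam[j, i] * phi
    # Normalise into conditional distributions (epsilon guards empty rows).
    T_hat /= T_hat.sum(1, keepdims=True) + 1e-12
    B_hat /= B_hat.sum(0, keepdims=True) + 1e-12
    beta_hat /= beta_hat.sum(1, keepdims=True) + 1e-12
    return T_hat, B_hat, beta_hat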
5 Experiments
In this section, we investigate three main aspects of the HM-BiTAM model, including word alignment, bilingual topic exploration, and machine translation.
Train               #Doc.    #Sent.    #Tokens (English)   #Tokens (Chinese)
TreeBank            316      4,172     133,598             105,331
Sinorama04          6,367    282,176   10,321,061          10,027,095
Sinorama02          2,373    103,252   3,810,664           3,146,014
Chnews.2005         1,001    10,317    326,347             270,274
FBIS.BEIJING        6,111    99,396    4,199,030           3,527,786
XinHua.NewsStory    17,260   98,444    3,807,884           3,915,267
ALL                 33,428   597,757   22,598,584          20,991,767

Table 1: Training data statistics.
The training data is a collection of parallel document-pairs, with document boundaries explicitly
given. As shown in Table 1, our training corpora are general newswire, covering topics mainly about
economics, politics, education and sports. For word-alignment evaluation, our test set consists of
95 document-pairs, with 627 manually aligned sentence-pairs and 14,769 alignment-links in total,
from the TIDES'01 dryrun data. Word segmentations and tokenizations were fixed manually for optimal
word-alignment decisions. This test set contains relatively long sentence-pairs, with an average
sentence length of 40.67 words. The long sentences introduce more ambiguities for alignment tasks.
For testing translation quality, the TIDES'02 MT evaluation data is used as development data, and
ten documents from the TIDES'04 MT evaluation are used as the unseen test data. BLEU scores are
reported to evaluate translation quality with HM-BiTAM models.
5.1 Empirical Validation
Word Alignment Accuracy. We trained HM-BiTAMs with ten topics using parallel corpora of
sizes ranging from 6M to 22.6M words; we used the F-measure, the harmonic mean of precision
and recall, to evaluate word-alignment accuracy. Following the same logic as for all BiTAMs in [10],
we choose the HM-BiTAM in which topics are sampled at the word-pair level rather than the sentence-pair level. The
baseline IBM models were trained using a 1^8 h^5 4^3 scheme (see footnote 2 below). Refined alignments are obtained from
both directions of the baseline models in the same way as described in [5].
Figure 2 shows the alignment accuracies of HM-BiTAM, in comparison with those of the baseline HMM, the baseline BiTAM, and IBM Model-4. Overall, HM-BiTAM gives significantly better
F-measures than HMM, with absolute margins of 7.56%, 5.72% and 6.91% on training sizes of
6 M, 11 M and 22.6 M words, respectively.

Footnote 2: Eight iterations for IBM Model-1, five iterations for HMM, and three iterations for IBM Model-4 (with deficient EM: the normalization factor is computed using a sampled alignment neighborhood in the E-step).

[Figure 2: Alignment accuracy (F-measure) of different models (HMM, BiTAM, IBM-4, HM-BiTAM) trained on corpora of different sizes.]

[Figure 3: Comparison of likelihoods of data under different models, shown as negative log-likelihood per document. Top: HM-BiTAM (y-axis) vs. IBM Model-4 with deficient EM (x-axis); bottom: HM-BiTAM (y-axis) vs. HMM with forward-backward EM (x-axis).]

In HM-BiTAM, two factors contribute to narrowing
down the word-alignment decisions: the position and the lexical mapping. The position part is
the same as in the baseline HMM, implementing the "proximity-bias". The emission lexical
probability, however, is different: each state is a mixture of topic-specific translation lexicons, whose
weights are inferred using document contexts. The topic-specific translation lexicons are sharper
and smaller than the global one used in the HMM. Thus the improvements of HM-BiTAM over HMM
essentially result from the extended topic-admixture lexicons. Not surprisingly, HM-BiTAM also
outperforms the baseline BiTAM significantly, because BiTAM captures only the topical aspects
and ignores the proximity bias.
Notably, HM-BiTAM also outperforms IBM Model-4, by margins of 3.43%, 3.64% and 2.73%, respectively. Overall, with 22.6 M words, HM-BiTAM outperforms HMM, BiTAM, and IBM-4 significantly (p = 0.0031, 0.0079, and 0.0121, respectively). IBM Model-4 already integrates the fertility and
distortion submodels on top of the HMM, which further narrows the word-alignment choices. However,
IBM Model-4 has no scheme for adjusting its lexicon probabilities to the document's topical context, as in HM-BiTAM. In a way, HM-BiTAM wins over IBM-4 by leveraging topic models that
capture the document context.
Likelihood on Training and Unseen Documents. Figure 3 compares the likelihoods
of document-pairs in the training set under HM-BiTAM with those under IBM Model-4 or HMM.
Each point in the figure represents one document-pair; the y-coordinate is the negative
log-likelihood under HM-BiTAM, and the x-coordinate gives the counterpart under IBM Model-4
or HMM. Overall, the likelihoods under HM-BiTAM are significantly better than those under HMM
and IBM Model-4, revealing the better modeling power of HM-BiTAM.

We also applied HM-BiTAM to ten document-pairs selected from MT04 which were not included in
the training. These document-pairs contain long sentences and diverse topics. As shown in Table 2,
the likelihood of HM-BiTAM on these unseen data dominates that of HMM,
BiTAM, and the IBM Models in every case, confirming that HM-BiTAM indeed offers a better fit and
generalizability for bilingual document-pairs.
Publishers           Genre      IBM-1     HMM       IBM-4     BiTAM     HM-BiTAM
AgenceFrance(AFP)    news       -3752.94  -3388.72  -3448.28  -3602.28  -3188.90
AgenceFrance(AFP)    news       -3341.69  -2899.93  -3005.80  -3139.95  -2595.72
AgenceFrance(AFP)    news       -2527.32  -2124.75  -2161.31  -2323.11  -2063.69
ForeignMinistryPRC   speech     -2313.28  -1913.29  -1963.24  -2144.12  -1669.22
HongKongNews         speech     -2198.13  -1822.25  -1890.81  -2035     -1423.84
People's Daily       editorial  -2485.08  -2094.90  -2184.23  -2377.1   -1867.13
United Nation        speech     -2134.34  -1755.11  -1821.29  -1949.39  -1431.16
XinHua News          news       -2425.09  -2030.57  -2114.39  -2192.9   -1991.31
XinHua News          news       -2684.85  -2326.39  -2352.62  -2527.78  -2317.47
ZaoBao News          editorial  -2376.12  -2047.55  -2116.42  -2235.79  -1943.25
Avg. Perplexity                 123.83    60.54     68.41     107.57    43.71

Table 2: Likelihoods of unseen documents under HM-BiTAM, in comparison with competing models.
5.2 Application 1: Bilingual Topic Extraction
Monolingual topics: HM-BiTAM facilitates inference of the latent LDA-style representations of
topics [1] in both English and the foreign language (here, Chinese) from a given bilingual corpus.
The English topics (represented by the topic-specific word frequencies) can be directly read off
from the HM-BiTAM parameters β. As discussed in Section 3.2, even though the topic-specific distributions
of words in the Chinese corpora are not directly encoded in HM-BiTAM, one can marginalize over
alignments of the parallel data to synthesize them based on the monolingual English topics and the
topic-specific lexical mapping from English to Chinese.
Figure 4 shows five topics, in both English and Chinese, learned via HM-BiTAM. The top-ranked
frequent words in each topic exhibit coherent semantic meanings, and there are also consistencies
between the word semantics under the same topic indexes across languages. Under HM-BiTAM,
the two respective monolingual word-distributions for the same topic are statistically coupled, due
to the sharing of the same topic for each sentence-pair in the two languages. If one merely
applied LDA to the corpora in each language separately, such coupling could not be exploited. This
coupling enforces consistency between the topics across languages. However, as with general clustering
algorithms, topics in HM-BiTAM do not necessarily present obvious semantic labels.
[Figure 4: Monolingual topics of both languages learned from parallel data; five topics are shown ("sports", "stocks", "housing", "takeover", "energy"), each with its top-ranked words in English and in Chinese (with English glosses). It appears that the English topics (on the left panel) are highly parallel to the Chinese ones (annotated with English gloss, on the right panel).]
Topic-Specific Lexicon Mapping: Table 3 shows two examples of topic-specific lexicon mappings
learned by HM-BiTAM. Given a topic assignment, a word usually has far fewer translation candidates, and the topic-specific translation lexicons are generally much smaller and sharper. Different
topic-specific lexicons emphasize different aspects of translating the same source words, which cannot
be captured by the IBM models or the HMM. This effect can be observed in Table 3.
[Table 3: Topic-specific translation lexicons learned by HM-BiTAM, showing the top candidate (TopCand) lexicon mappings of "meet" and "power" under the ten topics and under IBM Model-1, HMM, and IBM Model-4 (the symbol "-" means no significant lexicon mapping under that topic), together with the semantic meanings of the mapped Chinese words and the mapping probability p(f|e, k). Under different topics, "meet" maps to Chinese words meaning "sports meeting", "to satisfy", "to adapt", "to adjust", or "to see someone"; "power" maps to words meaning "electric power", "electricity factory", "to be relevant", "strength", "electric watt", "to generate", or "power plant".]
5.3 Application 2: Machine Translation
The parallelism of topic assignments between languages modeled by HM-BiTAM, as shown in Section 3.2
and exemplified in Fig. 4, enables a natural way of improving translation by exploiting semantic
consistency and contextual coherency more explicitly and aggressively. Under HM-BiTAM, given
a source document D_F, the predictive probability distribution over candidate translations of every
source word, P(e | f, D_F), must be computed by mixing multiple topic-specific translation lexicons
according to the topic weights p(z | D_F) determined from the monolingual context of D_F. That is:

P(e | f, D_F) ∝ P(f | e, D_F) P(e | D_F) = \sum_{k=1}^{K} P(f | e, z = k) P(e | z = k) P(z = k | D_F).    (11)
We used P(e | f, D_F) to score the bilingual phrase-pairs in a state-of-the-art GALE translation system
trained with 250 M words. We kept all other parameters the same as those used in the baseline. Decoding of the ten unseen MT04 documents in Table 2 was then carried out.
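The mixing in Eq. (11) is a single contraction over topics; the sketch below computes it with toy random parameters standing in for learned ones, and the final normalisation over e is an illustrative choice.

import numpy as np

def topic_mixed_lexicon(B, beta, p_z):
    """Eq. (11): scores proportional to P(e | f, D_F) by mixing topic lexicons.

    B[k, e, f] = P(f | e, z = k),  beta[k, e] = P(e | z = k),
    p_z[k] = P(z = k | D_F), the document's inferred topic weights.
    """
    return np.einsum('kef,ke,k->ef', B, beta, p_z)

rng = np.random.default_rng(4)
K, V_E, V_F = 3, 50, 40                      # toy sizes
B = rng.dirichlet(np.ones(V_F), (K, V_E))
beta = rng.dirichlet(np.ones(V_E), K)
p_z = rng.dirichlet(np.ones(K))
score = topic_mixed_lexicon(B, beta, p_z)
score /= score.sum(0, keepdims=True)         # normalise over e for each source word f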
Systems       1-gram  2-gram  3-gram  4-gram  BLEUr4n4
Hiero Sys.    73.92   40.57   23.21   13.84   30.70
Gale Sys.     75.63   42.71   25.00   14.30   32.78
HM-BiTAM      76.77   42.99   25.42   14.56   33.19
Ground Truth  76.10   43.85   26.70   15.73   34.17

Table 4: Decoding the ten MT04 documents. Experiments using the topic assignments inferred from the ground truth and the ones inferred via HM-BiTAM; n-gram precisions together with final BLEUr4n4 scores are evaluated.
Table 4 shows the performance of our in-house Hiero system (following [3]), the state-of-the-art
GALE baseline (with a better BLEU score), and our HM-BiTAM model, on the NIST MT04 test
set. If we use the ground-truth translation to infer the topic weights, the improvement is from
32.78 to 34.17 BLEU points. With topical inference from HM-BiTAM using the monolingual source
document, improved N-gram precisions in the translation were observed from 1-gram to 4-gram.
The largest precision improvement is for unigrams: from 75.63% to 76.77%. Intuitively, unigrams have
potentially more translation ambiguities than the higher-order n-grams, because the latter
already encode contextual information. The overall BLEU score improvement of HM-BiTAM over
the other systems, including the state of the art, is from 32.78 to 33.19, a slight improvement with
p = 0.043.
6 Discussion and Conclusion
We presented a novel framework, HM-BiTAM, for exploring bilingual topics and generalizing over
the traditional HMM for improved word-alignment accuracy and translation quality. A variational inference and learning procedure was developed for efficient training and application in translation.
We demonstrated significant improvement of word-alignment accuracy over a number of existing
systems, and the interesting capability of HM-BiTAM to simultaneously extract coherent monolingual topics from both languages. We also report encouraging improvement of translation quality
over current benchmarks; although the margin is modest, it is noteworthy that the current version of
HM-BiTAM remains a purely autonomously trained system. Future work includes extensions
with more structure for word alignment, such as noun-phrase chunking.
References
[1] David Blei, Andrew Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:1107-1135, 2003.
[2] Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-331, 1993.
[3] David Chiang. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 263-270, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics.
[4] Elena Erosheva, Steve Fienberg, and John Lafferty. Mixed membership models of scientific publications. In Proceedings of the National Academy of Sciences, volume 101, Suppl. 1, April 6, 2004.
[5] Franz J. Och and Hermann Ney. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417-449, 2004.
[6] J. Pritchard, M. Stephens, and P. Donnelly. Inference of population structure using multilocus genotype data. Genetics, 155:945-959, 2000.
[7] K. Sjölander, K. Karplus, M. Brown, R. Hughey, A. Krogh, I. S. Mian, and D. Haussler. Dirichlet mixtures: A method for improving detection of weak but significant protein sequence homology. Computer Applications in the Biosciences, 12, 1996.
[8] Stephan Vogel, Hermann Ney, and Christoph Tillmann. HMM-based word alignment in statistical machine translation. In Proc. 16th Int. Conf. on Computational Linguistics (Coling'96), pages 836-841, Copenhagen, Denmark, 1996.
[9] Eric P. Xing, M. I. Jordan, and S. Russell. A generalized mean field algorithm for variational inference in exponential families. In Meek and Kjærulff, editors, Uncertainty in Artificial Intelligence (UAI 2003), pages 583-591. Morgan Kaufmann Publishers, 2003.
[10] Bing Zhao and Eric P. Xing. BiTAM: Bilingual topic admixture models for word alignment. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL'06), 2006.
2,609 | 3,366 | Modeling Natural Sounds
with Modulation Cascade Processes
Richard E. Turner and Maneesh Sahani
Gatsby Computational Neuroscience Unit
17 Alexandra House, Queen Square, London, WC1N 3AR, London
Abstract
Natural sounds are structured on many time-scales. A typical segment of speech,
for example, contains features that span four orders of magnitude: sentences
(~1 s); phonemes (~10^{-1} s); glottal pulses (~10^{-2} s); and formants (≲10^{-3} s).
The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of
these algorithms with properties of auditory processing. There is however a discord: current machine-audition algorithms largely concentrate on the shorter
time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct
an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract
short temporal information) and for long durations (to extract long temporal information). The contribution of this work is to develop a new statistical model for
natural sounds that captures structure across a wide range of time-scales, and to
provide efficient learning and inference algorithms. We demonstrate the success
of this approach on a missing-data task.
1 Introduction
Computational models for sensory processing are still in their infancy, but one promising approach
has been to compare aspects of sensory processing with aspects of machine-learning algorithms
crafted to solve the same putative task. A particularly fruitful approach in this vein uses the generative modeling framework to derive these learning algorithms. For example, Independent Component
Analysis (ICA) and Sparse Coding (SC), Slow Feature Analysis (SFA), and Gaussian Scale Mixture Models (GSMMs) are examples of algorithms corresponding to generative models that show
similarities with visual processing [3]. In contrast, there has been much less success in the auditory
domain, and this is due in part to the paucity of flexible models with an explicit temporal dimension
(although see [2]). The purpose of this paper is to address this imbalance.
This paper has three parts. In the first we review models for the short-time structure of sound and
argue that a probabilistic time-frequency model has several distinct benefits over traditional time-frequency representations for auditory modeling. In the second we review a model for the long-time
structure in sounds, called probabilistic amplitude demodulation. In the third section these two
models are combined with the notion of auditory features to produce a full generative model for
sounds, called the Modulation Cascade Process (MCP). We then show how to carry out learning and
inference in such a complex hierarchical model, and provide results on speech for complete- and
missing-data tasks.
2 Probabilistic Time-Frequency Representations
Most representations of sound focus on the short temporal structures. Short segments (< 10^{-1} s) are
frequently periodic and can often be efficiently represented in a Fourier basis as the weighted sum of
a few sinusoids. Of course, the spectral content of natural sounds changes slowly over time. This is
handled by time-frequency representations, such as the Short-Time Fourier Transform (STFT) and
the spectrogram, which indicate the spectral content of a local, windowed section of the sound. More
specifically, the STFT (x_{d,t}) and spectrogram (s_{d,t}) of a discretised sound (y_{t'}) are given by

x_{d,t} = \sum_{t'=1}^{T'} r_{t-t'} y_{t'} \exp(-i ω_d t'),    s_{d,t} = \log |x_{d,t}|.    (1)
The (possibly frequency-dependent) duration of the window (r_{t-t'}) must be chosen carefully, as it
controls whether features are represented in the spectrum or in the time-variation of the spectra. For
example, the window for speech is typically chosen to last for several pitch periods, so that both
pitch and formant information is represented spectrally.
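A direct transcription of Eq. (1) is sketched below; the Hann window and the toy usage are illustrative choices on our part, not those of any particular system.

import numpy as np

def stft(y, omegas, half_width):
    """Eq. (1): x_{d,t} = sum_{t'} r_{t-t'} y_{t'} exp(-i w_d t');  s_{d,t} = log|x_{d,t}|.

    y          -- the sampled waveform y_{1:T'}
    omegas     -- analysis frequencies w_d, in radians per sample
    half_width -- half-length of a Hann window r (the choice discussed above)
    """
    T = len(y)
    window = np.hanning(2 * half_width + 1)
    basis = np.exp(-1j * np.outer(omegas, np.arange(T)))   # e^{-i w_d t'}
    x = np.zeros((len(omegas), T), dtype=complex)
    for t in range(T):
        lo, hi = max(0, t - half_width), min(T, t + half_width + 1)
        r = window[t - np.arange(lo, hi) + half_width]     # r_{t - t'}
        x[:, t] = basis[:, lo:hi] @ (r * y[lo:hi])
    return x, np.log(np.abs(x) + 1e-12)                    # STFT and spectrogram

# Illustrative call on a pure tone.
y = np.sin(0.3 * np.arange(1000))
x, s = stft(y, omegas=np.linspace(0.05, 1.5, 30), half_width=64)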
The first stage of the auditory pathway derives a time-frequency-like representation mechanically at
the basilar membrane. Subsequent stages extract progressively more complex auditory features, with
structure extending over more time. Thus, computational models of auditory processing often begin
with a time-frequency (or auditory filter-bank) decomposition, deriving new representations from
the time-frequency coefficients [4]. Machine-learning algorithms also typically operate on the time-frequency
coefficients, and not directly on the waveform. The potential advantage lies in the ease
with which auditory features may be extracted from the STFT representation. There are, however,
associated problems. For example, time-frequency representations tend to be over-complete (e.g.,
the number of STFT coefficients tends to be larger than the number of samples of the original sound,
T × D > T'). This means that realisable sounds live on a manifold in the time-frequency space (for
the STFT this manifold is a hyper-plane). Algorithms that solve tasks like filling in missing data
or denoising must ensure that the new coefficients lie on the manifold. Typically this is achieved in
an ad hoc manner, by projecting time-frequency coefficients back onto the manifold according to an
arbitrary metric [5]. For generative models of time-frequency coefficients, it is difficult to force the
model to generate only on the realisable manifold. An alternative is to base a probabilistic model of
the waveform on the same heuristics that led to the original time-frequency representation. Not only
does this side-step the generation problem, but it also allows parameters of the representation, like
the "window", to be chosen automatically.
The heuristic behind the STFT, that sound comprises sinusoids in slowly-varying linear superposition, led Qi et al. [6] to propose a probabilistic algorithm called Bayesian Spectrum Estimation
(BSE), in which the sinusoid coefficients (x_{d,t}) are latent variables. The forward model is

p(x_{d,t} | x_{d,t-1}) = Norm(λ_d x_{d,t-1}, σ_d²),    (2)

p(y_t | x_t) = Norm(\sum_d x_{d,t} \sin(ω_d t + φ_d), σ_y²).    (3)
The prior distribution over the coefficients is Gaussian and auto-regressive, evolving at a rate controlled by the dynamical parameters λ_d and σ_d². Thus, as λ_d → 1 and σ_d² → 0 the processes become
very slow, and as λ_d → 0 and σ_d² → ∞ they become very fast. More precisely, the length-scale of
the coefficients is given by τ_d = -1/\log(λ_d). The observations are generated by a weighted sum of
sinusoids, plus Gaussian noise. This model is essentially a linear Gaussian state-space system with
time-varying weights defined by the sinusoids. Thus, inference is simple, proceeding via the Kalman
smoother recursions with time-varying weights. In effect, these recursions dynamically adjust the
window used to derive the coefficients, based on the past history of the stimulus. BSE is a model for
the short-time structure of sounds and it will essentially form the bottom level of the MCP. In the
next section we turn our attention to a model of the longer-time structure.
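To make the recursions concrete, here is a minimal sketch of the filtering half of BSE, with the time-varying weights w_t = sin(ω_d t + φ_d) entering the standard Kalman updates; the full algorithm adds the backward (smoothing) pass, and all parameter values below are illustrative assumptions.

import numpy as np

def bse_filter(y, lam, sig2, omegas, phis, sig2_y):
    """Kalman filtering for BSE (Eqs. 2-3) with time-varying observation weights.

    lam, sig2 -- per-frequency AR(1) parameters lambda_d, sigma_d^2
    omegas, phis -- sinusoid frequencies and phases
    """
    D, T = len(omegas), len(y)
    mu, V = np.zeros(D), np.eye(D)           # filtered mean and covariance of x_t
    A, Q = np.diag(lam), np.diag(sig2)
    means = np.zeros((D, T))
    for t in range(T):
        mu = A @ mu                          # predict
        V = A @ V @ A.T + Q
        w = np.sin(omegas * t + phis)        # time-varying weights of Eq. (3)
        s = w @ V @ w + sig2_y               # innovation variance
        k = (V @ w) / s                      # Kalman gain
        mu = mu + k * (y[t] - w @ mu)        # update
        V = V - np.outer(k, w @ V)
        means[:, t] = mu
    return means

# Illustrative call on a noisy sinusoid.
T = 200
y = np.sin(0.1 * np.arange(T)) + 0.05 * np.random.default_rng(5).standard_normal(T)
m = bse_filter(y, lam=np.full(2, 0.99), sig2=np.full(2, 0.01),
               omegas=np.array([0.1, 0.25]), phis=np.zeros(2), sig2_y=0.05 ** 2)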
3 Probabilistic Demodulation Cascade
A salient property of the long-time statistics of sounds is the persistence of strong amplitude modulation [7]. Speech, for example, contains power in isolated regions corresponding to phonemes.
The phonemes themselves are localised into words, and then into sentences. Motivated by these
observations, Anonymous Authors [8] have proposed a model for the long-time structures in sounds
using a demodulation cascade. The basic idea of the demodulation cascade is to represent a sound as
a product of processes drawn from a hierarchy, or cascade, of progressively longer time-scale modulators. For speech this might involve three processes: representing sentences on top, phonemes in
the middle, and pitch and formants at the bottom (e.g. fig. 1A and B). To construct such a representation, one might start with a traditional amplitude demodulation algorithm, which decomposes
a signal into a quickly-varying carrier and more slowly-varying envelope. The cascade could then
be built by applying the same algorithm to the (possibly transformed) envelope, and then to the envelope that results from this, and so on. This procedure is only stable, however, if both the carrier
and the envelope found by the demodulation algorithm are well-behaved. Unfortunately, traditional
methods (like the Hilbert Transform, or low-pass filtering a non-linear transformation of the stimulus) return a suitable carrier or envelope, but not both. A new approach to amplitude demodulation
is thus called for.
In a nutshell, the new approach is to view amplitude demodulation as a task of probabilistic inference. This is natural: demodulation is fundamentally ill-posed (there are infinitely many
decompositions of a signal into a positive envelope and real-valued carrier), and so prior information must always be leveraged to realise such a decomposition. The generative model approach
makes this information explicit. Furthermore, it is not necessary to use the recursive procedure (just
described) to derive a modulation cascade: the whole hierarchy can be estimated at once using a
single generative model. The generative model for Probabilistic Amplitude Demodulation (PAD) is
p(z_0^{(m)}) = Norm(0, 1),    p(z_t^{(m)} | z_{t-1}^{(m)}) = Norm(λ_m z_{t-1}^{(m)}, σ_m²)    ∀ t > 0,    (4)

x_t^{(m)} = f_{a^{(m)}}(z_t^{(m)})    ∀ m > 1,    x_t^{(1)} ∼ Norm(0, 1),    y_t = \prod_{m=1}^{M} x_t^{(m)}.    (5)
A set of modulators (X_{2:M}) is drawn in a two-stage process: first, a set of slowly varying processes
(Z_{2:M}) is drawn from a one-step linear Gaussian prior (identical to Eq. 2). The effective length-scales of these processes, inherited by the modulators, are ordered such that τ_m > τ_{m-1}. Second,
the modulators are formed by passing these variables through a point-wise non-linearity to enforce
positivity. A typical choice might be

f_{a^{(m)}}(z_t^{(m)}) = \log(\exp(z_t^{(m)} + a^{(m)}) + 1),    (6)
which is logarithmic for large negative values of z_t^{(m)}, and linear for large positive values. This
transforms the Gaussian distribution over z_t^{(m)} into a sparse, non-negative distribution, which is a
good match to the marginal distributions of natural envelopes. The parameter a^{(m)} controls exactly
where the transition from log to linear occurs, and consequently alters the degree of sparsity. These
positive signals modulate a Gaussian white-noise carrier, to yield observations y_{1:T} by a simple
point-wise product. A typical draw from this generative model can be seen in Fig. 1C. This model
is a fairly crude one for natural sounds. For example, as described in the previous section, we
expect that the carrier process will be structured, and yet it is modelled as Gaussian white noise. The
surprising observation is that this very simple model is excellent at demodulation.
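Sampling from PAD is a few lines; the sketch below draws a carrier and a cascade of soft-positive modulators per Eqs. (4)-(6). The AR(1) parameter values are illustrative choices (tuned so each z process has roughly unit stationary variance), not the ones used later in the paper.

import numpy as np

def sample_pad(T, lams, sig2s, offsets, rng):
    """Draw from the PAD generative model, Eqs. (4)-(5), with M = len(lams) + 1.

    lams, sig2s -- AR(1) parameters of the slow processes z^(2), ..., z^(M)
    offsets     -- the a^(m) offsets of the soft-positive nonlinearity, Eq. (6)
    """
    y = rng.standard_normal(T)                    # x^(1): white-noise carrier
    for lam, sig2, a_m in zip(lams, sig2s, offsets):
        z = np.zeros(T)
        z[0] = rng.standard_normal()
        for t in range(1, T):
            z[t] = lam * z[t - 1] + np.sqrt(sig2) * rng.standard_normal()
        x = np.log(np.exp(z + a_m) + 1.0)         # Eq. (6): sparse, non-negative modulator
        y = y * x                                 # point-wise product, Eq. (5)
    return y

rng = np.random.default_rng(6)
y = sample_pad(T=8000, lams=[0.995, 0.9995], sig2s=[0.01, 0.001],
               offsets=[-1.0, -1.0], rng=rng)     # phoneme- and sentence-scale modulators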
Inference in this model typically proceeds by a zero-temperature EM-like procedure. First the
carrier (x_t^{(1)}) is integrated out, and then the modulators are found by maximum a posteriori (MAP) estimation.
Slower, more Bayesian algorithms that integrate over the modulators using MCMC indicate that this
approximation is not too severe, and the results are compelling.
4 Modulation Cascade Processes
We have reviewed two contrasting models: the first captures the local harmonic structure of sounds,
but has no long-time structure; the second captures long-time amplitude modulations, but models
the short-time structure as white noise. The goal of this section is to synthesise both to form a new
model. We are guided by the observation that the auditory system might implement a similar synthesis. In the well-known psychophysical phenomenon of comodulation masking release (see [9] for
a review), a tone masked by noise with a bandwidth greater than an auditory filter becomes audible
if the noise masker is amplitude modulated. This suggests that long-time envelope information is
processed and analysed across (short-time) frequency channels in the auditory system.

[Figure 1: An example of a modulation-cascade representation of speech (A and B) and a typical sample from the generative model used to derive that representation (C). A) The spoken-speech waveform (black) is represented as the product of a carrier (blue), a phoneme modulator (red) and a sentence modulator (magenta). B) A close-up of the first sentence (2 s), additionally showing the derived envelope (x_t^{(2)} x_t^{(3)}) superposed onto the speech (red, bottom panel). C) A draw from the generative model (M = 3) with a carrier (blue), a phoneme modulator (red) and a sentence modulator (magenta).]
A simple way to combine the two models would be to express each filter coefficient of the time-frequency model as a product of processes (e.g., x_{d,t} = x_{d,t}^{(1)} x_{d,t}^{(2)}). However, power across even
widely separated channels of natural sounds can be strongly correlated [7]. Furthermore, comodulation masking release suggests that amplitude modulation is processed across frequency channels
and not independently in each channel. Presumably this reflects the collective modulation of wide-band (or harmonic) sounds, with features that span many frequencies. Thus, a synthesis of BSE and
PAD should incorporate the notion of auditory features.
The forward model. The Modulation Cascade Process (MCP) is given by

p(z_{k_m,t}^{(m)} | z_{k_m,t-1}^{(m)}, θ) = Norm(λ^{(m)} z_{k_m,t-1}^{(m)}, σ_{(m)}²),    m = 1:3, t > 0,    (7)

p(z_{k_m,0}^{(m)}) = Norm(0, 1),    x_{k_m,t}^{(m)} = f(z_{k_m,t}^{(m)}, a^{(m)}),    m = 1:3, t ≥ 0,    (8)

p(y_t | x_t, θ) = Norm(μ_{y_t}, σ_y²),    μ_{y_t} = \sum_{d,k_1,k_2} g_{d,k_1,k_2} x_{k_1,t}^{(1)} x_{k_2,t}^{(2)} x_t^{(3)} \sin(ω_d t + φ_d).    (9)
Once again, latent variables are arranged in a hierarchy according to their time-scales (which depend on m). At the top of the hierarchy is a long-time process which models slow structures, like
the sentences of speech. The next level models more quickly varying structure (like phonemes).
Finally, the bottom level of the hierarchy captures short-time variability (intra-phoneme variability,
for instance). Unlike in PAD, the middle and lower levels now contain multiple processes. So, for
example, if K_1 = 4 and K_2 = 2, there would be four quickly varying modulators in the lower level,
two modulators in the middle level, and one slowly varying modulator at the top (see Fig. 2A).
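The following sketch samples from the MCP forward model of Eqs. (7)-(9); the sizes, AR(1) parameters, and random generative tensor are illustrative assumptions only (learned parameters produce draws like Fig. 2B).

import numpy as np

def ar1_modulators(K, T, lam, sig2, a, rng):
    """Sample K positive modulators x = f(z, a) from the AR(1) prior (Eqs. 7-8)."""
    z = np.zeros((K, T))
    z[:, 0] = rng.standard_normal(K)
    for t in range(1, T):
        z[:, t] = lam * z[:, t - 1] + np.sqrt(sig2) * rng.standard_normal(K)
    return np.log(np.exp(z + a) + 1.0)

def sample_mcp(g, omegas, phis, T, params, sig2_y, rng):
    """Draw from the MCP forward model, Eq. (9).

    g[d, k1, k2] -- the generative tensor linking features to modulator pairs
    params       -- (lam, sig2, a) triples for the three modulator levels
    """
    D, K1, K2 = g.shape
    x1 = ar1_modulators(K1, T, *params[0], rng)   # fast, intra-phoneme level
    x2 = ar1_modulators(K2, T, *params[1], rng)   # intermediate, phoneme level
    x3 = ar1_modulators(1, T, *params[2], rng)    # slow, sentence level
    sinus = np.sin(np.outer(omegas, np.arange(T)) + phis[:, None])   # (D, T)
    amp = np.einsum('dab,at,bt->dt', g, x1, x2) * x3                 # per-frequency amplitude
    mu = (amp * sinus).sum(0)
    return mu + np.sqrt(sig2_y) * rng.standard_normal(T)

rng = np.random.default_rng(7)
D, K1, K2, T = 20, 4, 2, 2000
g = np.abs(rng.standard_normal((D, K1, K2))) * 0.1
y = sample_mcp(g, omegas=np.linspace(0.05, 1.0, D), phis=np.zeros(D), T=T,
               params=[(0.9, 0.19, -1.0), (0.99, 0.02, -1.0), (0.999, 0.002, -1.0)],
               sig2_y=1e-4, rng=rng)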
The idea is that the modulators in the first level independently control the presence or absence of
individual spectral features (given by \sum_d g_{d,k_1,k_2} \sin(ω_d t + φ_d)). For example, in speech a typical
phoneme might be periodic, but this periodicity might change systematically as the speaker alters
their pitch. This change in pitch might be modeled using two spectral features: one for the start of the
phoneme and one for the end, with a region of coactivation in the middle. Indeed, it is because speech
and other natural sounds are not precisely stationary, even over short time-scales, that we require the
lowest layer of the hierarchy.

[Figure 2: A. Schematic representation of the MCP forward model in the simple case when K_1 = 4, K_2 = 2 and D = 6. The hierarchy of latent variables moves from the slowest modulator at the top (magenta) to the fastest (blue), with an intermediate modulator between (red). The outer product of the modulators multiplies the generative weights (black and white; only 4 of the 8 shown). In turn, these modulate sinusoids (top right) which are summed to produce the observations (bottom right). B. A draw from the forward model using parameters learned from a spoken sentence (see the results section for more details of the model). The grey bars on the top four panels indicate the region depicted in the bottom four panels.]

The role of the modulators in the second level is to simultaneously
turn on groups of similar features. For example, one modulator might control the presence of all the
harmonic features, and the other the broad-band features. Finally, the top-level modulator gates all
the auditory features at once. Fig. 2B shows a draw from the forward model for a more complicated
example. Promisingly, the samples share many features of natural sounds.
Relationship to other models. This model has an interesting relationship to previous statistical
models, and in particular to the GSMMs. It is well known that when ICA is applied to data from
natural scenes, the inferred filter coefficients tend not to be independent (see [3, 10]), with coefficients
corresponding to similar filters sharing power. GSMMs model these dependencies using a hierarchical
framework, in which the distribution over the coefficients depends on a set of latent variables that
introduce correlations between their powers. The MCP is similar, in that the higher-level latent
variables alter the power of similar auditory features. Indeed, we suggest that the correlations in the
power of ICA coefficients are a sign that amplitude modulation is prevalent in natural scenes. The MCP can be seen
as a generalisation of the GSMMs to include time-varying latent variables, a deeper hierarchy and a
probabilistic time-frequency representation.
Inference and learning algorithms. Any type of learning in the MCP is computationally demanding. Motivated by speed, and encouraged by the results from PAD, the aim will therefore be to find
a joint MAP estimate of the latent variables and the weights, that is,

\hat{X}, \hat{G} = \arg\max_{X,G} \log p(X, Y, G | θ).    (10)
Note that we have introduced a prior over the generative tensor. This prevents an undesirable feature
of combined MAP and ML inference in such models: namely that the weights grow without bound,
enabling the modal values of latent variables to shrink towards zero, increasing their density under
the prior. The resulting cost function is,
\log p(X, Y, G | θ) = \sum_{t=1}^{T} \log p(y_t | x_t^{(1)}, x_t^{(2)}, x_t^{(3)})
    + \sum_{m=1}^{3} \sum_{k_m} [ \sum_{t=1}^{T} \log p(z_{k_m,t}^{(m)} | z_{k_m,t-1}^{(m)}) + \sum_{t=0}^{T} \log |dz_{k_m,t}^{(m)} / dx_{k_m,t}^{(m)}| + \log p(z_{k_m,0}^{(m)}) ]
    + \sum_{k_1,k_2,d} \log p(g_{d,k_1,k_2}).    (11)

We would like to optimise this objective function with respect to the latent variables (x_{k_m,t}^{(m)}) and the
generative tensor (g_{d,k_1,k_2}). There are, however, two main obstacles. The first is that there are a
large number of latent variables to estimate (T × (K_1 + K_2)), making inference slow. The second
is that the generative tensor contains a large number of elements, D × K_1 × K_2, making learning
slow too. The solution is to find a good initialisation procedure, and then to fine-tune using a slow
EM-like algorithm that iterates between updating the latents and the weights. First we outline the
initialisation procedure.
The key to learning complicated hierarchical models is to initialise well, and so the procedure developed for the MCP will be explained in some detail. The main idea is to learn the model one layer at
a time. This is achieved by clamping the upper layers of the hierarchy that are not being learned to
unity. In the first stage of the initialisation, for example, the top and middle levels of the hierarchy
are clamped, and the mean of the emission distribution becomes

μ_{y_t} = \sum_{d,k_1} \tilde{g}_{d,k_1} x_{k_1,t}^{(1)} \sin(ω_d t + φ_d),    (12)

where \tilde{g}_{d,k_1} = \sum_{k_2} g_{d,k_1,k_2}. Learning and inference then proceed by gradient-based optimisation of
the cost function (\log p(X, Y, G | θ)) with respect to the un-clamped latents (x_{k_1,t}^{(1)}) and the contracted
generative weights (\tilde{g}_{d,k_1}). This is much faster than the full optimisation, as there are both fewer
latents and fewer parameters to estimate. When this process is complete, the second layer of latent
variables is un-clamped, and learning of these variables commences. This requires the full generative
tensor, which must be initialised from the contracted generative weights learned at the previous
stage. One choice is to set the individual weights to their averages, g_{d,k_1,k_2} = \tilde{g}_{d,k_1}/K_2, and this
works well, but empirically slows learning. An alternative is to use small chunks of sounds to
learn the lower-level weights. These chunks are chosen to be relatively stationary segments that
have a time-scale similar to the second-level modulators. This allows us to make the simplifying
assumption that just one second-level modulator was active during each chunk. The generative tensor
can therefore be initialised using g_{d,k_1,k_2} = \tilde{g}_{d,k_1} δ_{k_2,j}. Typically this method causes the second
stage of learning to converge faster, and to a similar solution.
In contrast to the initialisation, the fine-tuning algorithm is simple. In the E-step the latent variables
are updated simultaneously using gradient-based optimisation of Eq. 11. In the M-step, the generative tensor is updated using co-ordinate ascent. That is to say, we sequentially update each
g_{k_1,k_2} using gradient-based optimisation of Eq. 11, iterating over k_1 and k_2. In principle, joint
optimisation of the generative tensor and latent variables is possible, but the memory requirements
are prohibitive. This is also why co-ordinate ascent is used to learn the generative tensor (rather than
using the usual linear-regression solution, which involves a prohibitive matrix inverse).
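To make the M-step concrete, here is a sketch of co-ordinate ascent over the (k_1, k_2) slices of the generative tensor with the latents held fixed, under a Gaussian weight prior. Note one deviation from the text: for illustration each slice's subproblem is solved exactly as a small ridge regression rather than by a gradient step; the interface and parameter names are our assumptions.

import numpy as np

def m_step_g(y, x1, x2, x3, omegas, phis, sig2_y, sig2_g, n_sweeps=3):
    """Co-ordinate ascent over the generative tensor, with latents x1, x2, x3 fixed.

    For each (k1, k2) slice, the D weights enter the Gaussian likelihood linearly,
    so the optimal slice solves a D-dimensional ridge regression.
    """
    D = len(omegas)
    K1, T = x1.shape
    K2 = x2.shape[0]
    sinus = np.sin(np.outer(omegas, np.arange(T)) + phis[:, None])   # (D, T)
    g = np.zeros((D, K1, K2))
    pred = np.zeros(T)
    for _ in range(n_sweeps):
        for k1 in range(K1):
            for k2 in range(K2):
                basis = sinus * (x1[k1] * x2[k2] * x3)   # regressors for this slice
                pred -= g[:, k1, k2] @ basis             # remove this slice's contribution
                resid = y - pred
                A = basis @ basis.T / sig2_y + np.eye(D) / sig2_g
                g[:, k1, k2] = np.linalg.solve(A, basis @ resid / sig2_y)
                pred += g[:, k1, k2] @ basis             # add back the updated slice
    return g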
5 Results
The MCP was trained on a spoken sentence, lasting 2 s and sampled at 8000 Hz, using the algorithm
outlined in the previous section. The time-scales of the modulators were chosen to be {20 ms,
200 ms, 2 s}. The time-frequency representation had D/2 = 100 sines and D/2 = 100 cosines
spaced logarithmically from 100 to 4000 Hz. The model was given K_1 = 18 latent variables in the
first level of the hierarchy, and K_2 = 6 in the second. Learning took 16 hours to run on a 2.2 GHz
Opteron with 2 GB of memory.
[Figure 3: Application of the MCP to speech. Left panels (top to bottom: x_t^{(3)}, x_t^{(2)}, x_t^{(1)}, y; x-axis: time /s): the inferred latent-variable hierarchy. At top is the sentence modulator (magenta); next are the phoneme modulators, followed by the intra-phoneme modulators, coloured according to which of the phoneme modulators they interact most strongly with; the speech waveform is shown in the bottom panel. Right panels (x-axis: frequency /Hz): the learned spectral features (sqrt(g_sin² + g_cos²)), coloured according to phoneme modulator. For example, the top panel shows the spectra from g_{k_1=1:18, k_2=1}. Spectra corresponding to one phoneme modulator look similar, and often the features differ only in their phase.]
The results can be seen in Fig. 3. The MCP recovers a sentence modulator, phoneme modulators,
and intra-phoneme modulators. Typically a pair of features is used to model a phoneme, and often
they have similar spectra, as expected. The spectra fall into distinct classes: those which are harmonic (modelling voiced features) and those which are broad-band (modelling unvoiced features).
One way of assessing which features of speech the model captures is to sample from the forward
model using the learned parameters. This can be seen in Fig. 2B. The conclusion is that the model
is capturing structure across a wide range of time-scales: formant and pitch structure, phoneme
structure, and sentence structure.
There are, however, two noticeable differences between the real and generated data. The first is that
the generated data contain fewer transients and noise segments than natural speech, and more vowel-like components. The reason for this is that, at the sampling rates used, many of the noisy segments
are indistinguishable from white noise and are explained using observation noise. These problems
are alleviated by moving to higher sampling rates, but the algorithm is then markedly slower. The
second difference concerns the inferred and generated latent variables, in that the former are much
sparser than the latter. The reason is that the learned generative tensor contains many g_{k_1,k_2} which are
nearly zero. In generation, this means that significant contributions to the output are only made when
particular pairs of phoneme and intra-phoneme modulators are active. So although many modulators
are active at one time, only one or two make sizeable contributions. Conversely, in inference, we
can only get information about the value of a modulator when it is part of a contributing pair. If this
is not the case, the inference goes to the maximum of the prior, which is zero. In effect, there are
large error-bars on the non-contributing components' estimates.
Finally, to indicate the improvement of the MCP over PAD and BSE, we compare the algorithms'
abilities to fill in missing sections of a spoken sentence. The average root-mean-squared (RMS)
error per sample is used as a metric to compare the algorithms. In order to use the MCP to fill in the
missing data, it is first necessary to learn a set of auditory features. The MCP was therefore trained
on a different spoken sentence from the same speaker, before inference was carried out on the test
data. To make the comparison fair, BSE is given an identical set of sinusoidal basis functions as
MCP, and the associated smoothness priors were learned on the same training data.
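As a concrete reading of the metric, the per-sample RMS error between an original waveform and a model's fill-in can be computed as in the following sketch (a hypothetical helper, not code from the paper):

```python
import numpy as np

def rms_per_sample(x, x_hat):
    """Average root-mean-squared error per sample between the original
    waveform x and the reconstruction x_hat over a missing region."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.sqrt(np.mean((x - x_hat) ** 2)))
```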
Typical results can be seen in Fig. 4. On average the RMS errors for MCP, BSE and PAD were:
{0.10, 0.30, 0.41}. As PAD models the carrier as white noise it predicts zeros in the missing regions
[Figure 4 image: three columns of waveform segments (rows: original, MCP, BSE) plotted against time /s.]
Figure 4: A selection of typical missing data results for three phonemes (columns). The top row
shows the original speech segment with the missing regions shown in red. The middle row shows
the predictions made by the MCP and the bottom row those made by BSE.
and therefore it merely serves as a baseline in these experiments. Both MCP and BSE smoothly
interpolate their latent variables over the missing region. However, whereas BSE smoothly interpolates each sinusoidal component independently, MCP interpolates the set of learned auditory features in a complex manner determined by the interaction of the modulators. It is for this reason that
it improves over BSE by such a large margin.
6 Conclusion
We have introduced a neuroscience-inspired generative model for natural sounds that is capable of
capturing structure spanning a wide range of temporal scales. The model is a marriage between a
probabilistic time-frequency representation (that captures the short-time structure) and a probabilistic demodulation cascade (that captures the long-time structure). When the model is trained on a
spoken sentence, the first level of the hierarchy learns auditory features (weighted sets of sinusoids)
that capture structures like different voiced sections of speech. The upper levels comprise a temporally ordered set of modulators that are used to represent sentence structure, phoneme structure and
intra-phoneme variability. The superiority of the new model over its parents was demonstrated in
a missing data experiment where it out-performed the Bayesian time-frequency analysis by a large
margin.
Gertjan J. Burghouts?
Arnold W. M. Smeulders
Intelligent Systems Lab Amsterdam
Informatics Institute
University of Amsterdam
Jan-Mark Geusebroek ?
Abstract
Assessing similarity between features is a key step in object recognition and scene
categorization tasks. We argue that knowledge on the distribution of distances
generated by similarity functions is crucial in deciding whether features are similar or not. Intuitively one would expect that similarities between features could
arise from any distribution. In this paper, we will derive the contrary, and report the theoretical result that Lp-norms (a class of commonly applied distance
metrics) from one feature vector to other vectors are Weibull-distributed if the
feature values are correlated and non-identically distributed. Besides these assumptions being realistic for images, we experimentally show them to hold for
various popular feature extraction algorithms, for a diverse range of images. This
fundamental insight opens new directions in the assessment of feature similarity,
with projected improvements in object and scene recognition algorithms.
1 Introduction
Measurement of similarity is a critical asset of state of the art in computer vision. In all three major
streams of current research - the recognition of known objects [13], assigning an object to a class
[8, 24], or assigning a scene to a type [6, 25] - the problem is transposed into the equality of features
derived from similarity functions. Hence, besides the issue of feature distinctiveness, comparing
two images heavily relies on such similarity functions. We argue that knowledge on the distribution
of distances generated by such similarity functions is even more important, as it is that knowledge
which is crucial in deciding whether features are similar or not.
For example, Nowak and Jurie [21] establish whether one can draw conclusions on two never seen
objects based on the similarity distances from known objects. Where they build and traverse a
randomized tree to establish region correspondence, one could alternatively use the distribution of
similarity distances to establish whether features come from the mode or the tail of the distribution.
Although this indeed only hints at an algorithm, it is likely that knowledge of the distance distribution
will considerably improve or speed-up such tasks.
As a second example, consider the clustering of features based on their distances. Better clustering
algorithms significantly boost performance for object and scene categorization [12]. Knowledge
on the distribution of distances aids in the construction of good clustering algorithms. Using this
knowledge allows for the exact distribution shape to be used to determine cluster size and centroid,
where now the Gaussian is often groundlessly assumed. We will show that in general distance
distributions will strongly deviate from the Gaussian probability distribution.
A third example is from object and scene recognition. Usually this is done by measuring invariant
feature sets [9, 13, 24] at a predefined raster of regions in the image or at selected key points in the
image [11, 13] as extensively evaluated [17]. Typically, an image contains a hundred regions or a
∗ Dr. Burghouts is now with TNO Defense, The Netherlands, [email protected].
† Corresponding author. Email: [email protected].
thousand key points. Then, the most expensive computational step is to compare these feature sets
to the feature sets of the reference objects, object classes or scene types. Usually this is done by
going over all entries in the image to all entries in the reference set and select the best matching
pair. Knowledge of the distribution of similarity distances and having established its parameters
enables a remarkable speed-up in the search for matching reference points and hence for matching
images. When verifying that a given reference key-point or region is statistically unlikely to occur
in this image, one can move on to search in the next image. Furthermore, this knowledge can well
be applied in the construction of fast search trees, see e.g. [16].
Hence, apart from obtaining theoretical insights in the general distribution of similarities, the results
derived in this paper are directly applicable in object and scene recognition.
Intuitively one would expect that the set of all similarity values to a key-point or region in an image
would assume any distribution. One would expect that there is no preferred probability density
distribution at stake in measuring the similarities to points or regions in one image. In this paper, we
will derive the contrary. We will prove that under broad conditions the similarity values to a given
reference point or region adhere to a class of distributions known as the Weibull distribution family.
The density function has only three parameters: mean, standard deviation and skewness. We will
verify experimentally that the conditions under which this result from mathematical statistics holds
are present in common sets of images. It appears the theory predicts the resulting density functions
accurately.
Our work on density distributions of similarity values compares to the work by Pekalska and Duin
[23] assuming a Gaussian distribution for similarities. It is based on an original combination of two
facts from statistical physics. An old fact regards the statistics of extreme values [10], as generated
when considering the minima and maxima of many measurements. The major result of the field
of extreme value statistics is that the probability density in this case can only be one out of three
different types, independent of the underlying data or process. The second fact is a new result, which
links these extreme value statistics to sums of correlated variables [2, 3]. We exploit these two facts
in order to derive the distribution family of similarity measures.
This paper is structured as follows. In Section 2, we overview literature on similarity distances and
distance distributions. In Section 3, we discuss the theory of distributions of similarity distances
from one to other feature vectors. In Section 4, we validate the resulting distribution experimentally
for image feature vectors. Finally, conclusions are given in Section 5.
2 Related work
2.1 Similarity distance measures
To measure the similarity between two feature vectors, many distance measures have been proposed
[15]. A common metric class of measures is the Lp-norm [1]. The distance from one reference
feature vector s to one other feature vector t can be formalized as:
d(s, t) = ( Σ_{i=1}^{n} |s_i − t_i|^p )^{1/p},   (1)
where n and i are the dimensionality and indices of the vectors. Let the random variable D_p represent distances d(s, t) where t is drawn from the random variable T representing feature vectors.
Independent of the reference feature vector s, the probability density function of Lp-distances will
be denoted by f(D_p = d).
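As an illustration of Eq. 1, the set of Lp-distances from one reference vector to many others can be sampled as in the following sketch (synthetic data; not code from the paper):

```python
import numpy as np

def lp_distances(s, T, p=2):
    """L_p distances (Eq. 1) from a reference vector s to each row of T."""
    return (np.abs(s - T) ** p).sum(axis=1) ** (1.0 / p)

rng = np.random.default_rng(0)
s = rng.normal(size=128)           # reference feature vector
T = rng.normal(size=(100, 128))    # 100 other feature vectors
d = lp_distances(s, T, p=2)        # empirical sample of D_p
```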
2.2 Distance distributions
Ferencz et al. [7] have considered the Gamma distribution to model the L2-distances from image
regions to one reference region: f(D_2 = d) = (1 / (β^α Γ(α))) d^{α−1} e^{−d/β}, where α is the shape parameter,
and β the scale parameter; Γ(·) denotes the Gamma function. In [7], the distance function was
fitted efficiently from few examples of image regions. Although the distribution fits were shown to
represent the region distances to some extent, the method lacks a theoretical motivation.
Based on the central limit theorem, Pekalska and Duin [23] assumed that Lp-distances between
feature vectors are normally distributed: f(D_p = d) = (1 / (√(2π) σ)) e^{−d²/(2σ²)}. As the authors argue,
the use of the central limit theorem is theoretically justified if the feature values are independent,
identically distributed, and have limited variance. Although feature values generally have limited
variance, unfortunately, they cannot be assumed to be independent and/or identically distributed, as
we will show below. Hence, an alternative derivation of the distance distribution function has to be
followed.
2.3 Contribution of this paper
Our contribution is the theoretical derivation of a parameterized distribution for Lp-norm distances
between feature vectors. In the experiments, we establish whether distances to image features indeed adhere
to this distribution. We consider SIFT-based features [17], computed from various interest
region types [18].
3 Statistics of distances between features
In this section, we derive the distribution function family of Lp -distances from a reference feature
vector to other feature vectors. We consider the notation as used in the previous section, with t
a feature vector drawn from the random variable T . For each vector t, we consider the value at
index i, ti , resulting in a random variable Ti . The value of the reference vector at index i, si , can
be interpreted as a sample of the random variable Ti . The computation of distances from one to
other vectors involves manipulations of the random variable Ti resulting in a new random variable:
X_i = |s_i − T_i|^p. Furthermore, the computation of the distances D requires the summation of random
variables, and a reparameterization: D = ( Σ_{i=1}^{I} X_i )^{1/p}. In order to derive the distribution of D,
we start with the statistics of the summation of random variables, before turning to the properties of
X_i.
3.1 Statistics of sums
As a starting point to derive the Lp-distance distribution function, we consider a lemma from statistics about the sum of random variables.

Lemma 1 For non-identical and correlated random variables X_i, the sum Σ_{i=1}^{N} X_i, with finite N,
is distributed according to the generalized extreme value distribution, i.e. the Gumbel, Frechet or
Weibull distribution.
For a proof, see [2, 3]. Note that the lemma is an extension of the central limit theorem to non-identically distributed random variables. And, indeed, the proof follows the path of the central
limit theorem. Hence, the resulting distribution of sums is different from a normal distribution, and
rather one of the Gumbel, Frechet or Weibull distributions instead. This lemma is important for
our purposes, as later the feature values will turn out to be non-identical and correlated indeed. To
confine the distribution function further, we also need the following lemma.
Lemma 2 If in the above lemma the random variables X_i are upper-bounded, i.e. X_i < max, the
sum of the variables is Weibull distributed (and not Gumbel nor Frechet):

f(Y = y) = (β/η) (y/η)^{β−1} e^{−(y/η)^β},   (2)

with β the shape parameter and η the scale parameter.
For a proof, see [10]. Figure 1 illustrates the Weibull distribution for various shape parameters
?. This lemma is equally important to our purpose, as later the feature values will turn out to be
upper-bounded indeed.
The combination of Lemmas 1 and 2 yields the distribution of sums of non-identical, correlated and
upper-bounded random variables, summarized in the following theorem.
[Figure 1 plot: Weibull probability densities over distance for shape parameters β = 2, 4, 6, 8.]
Figure 1: Examples of the Weibull distribution for various shape parameters β.
Theorem 1 For non-identical, correlated and upper-bounded random variables X_i, the random
variable Y = Σ_{i=1}^{N} X_i, with finite N, adheres to the Weibull distribution.
The proof follows trivially from combining the different findings of statistics as laid down in Lemmas 1 and 2. Theorem 1 is the starting point to derive the distribution of Lp-norms from one
reference vector to other feature vectors.
3.2 Lp-distances from one to other feature vectors
Theorem 1 states that Y is Weibull-distributed, given that {X_i = |s_i − T_i|^p}_{i∈[1,...,I]} are non-identical, correlated and upper-bounded random variables. We transform Y such that it represents
Lp-distances, achieved by the transformation (·)^{1/p}:

Y^{1/p} = ( Σ_{i=1}^{N} |s_i − T_i|^p )^{1/p}.   (3)
The consequence of the substitution Z = Y^{1/p} for the distribution of Y is a change of variables
z = y^{1/p} in Equation 2 [22]: g(Z = z) = f(z^p) / ((1/p) z^{(1−p)}). This transformation yields a different
distribution still of the Weibull type:

g(Z = z) = (pβ / η^{1/p}) (z / η^{1/p})^{pβ−1} e^{−(z/η^{1/p})^{pβ}},   (4)

where β' = pβ is the new shape parameter and η' = η^{1/p} is the new scale parameter, respectively.
Thus, also Y^{1/p} and hence Lp-distances are Weibull-distributed under the assumed case.
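This closure property is easy to check numerically; a sketch assuming SciPy's weibull_min parameterization (shape β, scale η):

```python
import numpy as np
from scipy import stats

beta, eta, p = 4.0, 2.0, 2.0
y = stats.weibull_min(beta, scale=eta).rvs(size=100_000, random_state=0)
z = y ** (1.0 / p)

# Eq. 4 predicts Z = Y^(1/p) is Weibull with shape p*beta and scale eta**(1/p).
shape_hat, _, scale_hat = stats.weibull_min.fit(z, floc=0)
print(shape_hat, scale_hat)  # close to (8.0, 2**0.5 ~= 1.414)
```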
We argue that the random variables X_i = |s_i − T_i|^p and X_j (i ≠ j) are indeed non-identical,
correlated and upper-bounded random variables when considering a set of values extracted from
feature vectors at indices i and j:

- X_i and X_j are upper-bounded. Features are usually an abstraction of a particular type of
finite measurements, resulting in a finite feature. Hence, for general feature vectors, the
values at index i, T_i, are finite. And, with finite p, it follows trivially that X_i is finite.
- X_i and X_j are correlated. The experimental verification of this assumption is postponed to
Section 4.1.
- X_i and X_j are non-identically distributed. The experimental verification of this assumption
is postponed to Section 4.1.
We have obtained the following result.
Corollary 1 For finite-length feature vectors with non-identical, correlated and upper-bounded values, Lp distances, for limited p, from one reference feature vector to other feature vectors adhere to
the Weibull distribution.
3.3 Extending the class of features
We extend the class of features for which the distances are Weibull-distributed. From now on, we
allow the possibility that the vectors are preprocessed by a PCA transformation. We denote the PCA
transform g(·) applied to a single feature vector as s' = g(s). For the random variable T_i, we obtain
T'_i. We are still dealing with upper-bounded variables X'_i = |s'_i − T'_i|^p as PCA is a finite transform.
The experimental verification of the assumption that PCA-transformed feature values T'_i and T'_j,
i ≠ j, are non-identically distributed is postponed to Section 4.1. Our point here is that we have
assumed originally correlating feature values, but after the decorrelating PCA transform we are no
longer dealing with correlated feature values T'_i and T'_j. In Section 4.1, we will verify experimentally
whether X'_i and X'_j correlate. The following observation is hypothesized. PCA translates the data
to the origin, before applying an affine transformation that yields data distributed along orthogonal
axes. The tuples (X'_i, X'_j) will be in the first quadrant due to the absolute value transformation.
Obviously, variances σ(X'_i) and σ(X'_j) are limited and means μ(X'_i) > 0 and μ(X'_j) > 0. For
data constrained to the first quadrant and distributed along orthogonal axes, a negative covariance is
expected to be observed. Under the assumed case, we have obtained the following result.
Corollary 2 For finite-length feature vectors with non-identical, correlated and upper-bounded values, and for PCA-transformations thereof, Lp distances, for limited p, from one to other features
adhere to the Weibull distribution.
3.4 Heterogeneous feature vector data
We extend the corollary to hold also for composite datasets of feature vectors. Consider the composite dataset modelled by random variables {T_t}, where each random variable T_t represents non-identical and correlated feature values. Hence, from Corollary 2 it follows that feature vectors from
each of the T_t can be fitted by a Weibull function f^{β,η}(d). However, the distances to each of the
T_t may have a different range and modus, as we will verify by experimentation in Section 4.1. For
heterogeneous distance data {T_t}, we obtain a mixture of Weibull functions [14].

Corollary 3 (Distance distribution) For feature vectors that are drawn from a mixture of datasets,
of which each results in non-identical and correlated feature values, finite-length feature vectors
with non-identical, correlated and upper-bounded values, and for PCA-transformations thereof, Lp
distances, for limited p, from one reference feature vector to other feature vectors adhere to the
Weibull mixture distribution: f(D = d) = Σ_{i=1}^{c} π_i f_i^{β_i,η_i}(d), where the f_i are Weibull functions
and the π_i are their respective weights such that Σ_{i=1}^{c} π_i = 1.
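A Weibull mixture density of this form can be evaluated directly, as in the following sketch assuming SciPy (fitting the weights and per-component parameters, e.g. by EM, is not shown):

```python
import numpy as np
from scipy import stats

def weibull_mixture_pdf(d, weights, shapes, scales):
    """f(D = d) = sum_i pi_i * Weibull(d; beta_i, eta_i), with sum_i pi_i = 1."""
    d = np.asarray(d, float)
    return sum(w * stats.weibull_min(b, scale=e).pdf(d)
               for w, b, e in zip(weights, shapes, scales))
```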
4 Experiments
In our experiments, we validate assumptions and Weibull goodness-of-fit for the region-based SIFT,
GLOH, SPIN, and PCA-SIFT features on COREL data [5]. We include these features for two
reasons: a) they perform well for realistic computer vision tasks, and b) they provide
different mechanisms to describe an image region [17]. The region features are computed from
regions detected by the Harris- and Hessian-affine detectors, maximally stable regions (MSER), and
intensity extrema-based regions (IBR) [18]. Also, we consider PCA-transformed versions for each
of the detector/feature combinations. For reason of its extensive use, the experimentation is based
on the L2-distance. We consider distances from one randomly drawn reference vector to 100 other
randomly drawn feature vectors, which we repeat 1,000 times for generalization. In all experiments,
the features are taken from multiple images, except for the illustration in Section 4.1.2 to show
typical distributions of distances between features taken from single images.
4.1 Validation of the corollary assumptions for image features
4.1.1 Intrinsic feature assumptions
Corollary 2 rests on a few explicit assumptions. Here we will verify whether the assumptions occur
in practice.
Differences between feature values are correlated. We consider a set of feature vectors T^j and
the differences at index i to a reference vector s: X_i = |s_i − T^j_i|^p. We determine the significance
of Pearson's correlation [4] between the difference values X_i and X_j, i ≠ j. We establish the
percentage of significantly correlating differences at a confidence level of 0.05. We report for each
feature the average percentage of difference values that correlate significantly with difference values
at another feature vector index.
As expected, the feature value differences correlate. For SIFT, 99% of the difference values are
significantly correlated. For SPIN and GLOH, we obtain 98% and 96%, respectively. Also PCA-SIFT contains significantly correlating difference values: 95%. Although the feature's name hints
at uncorrelated values, it does not achieve a decorrelation of the values in practice. For each of the
features, a low standard deviation < 5% is found. This expresses the low variation of correlations
across the random samplings and across the various region types.
We repeat the experiment for PCA-transformed feature values. Although the resulting values are
uncorrelated by construction, their differences are significantly correlated. For SIFT, SPIN, GLOH,
and PCA-SIFT, the percentages of significantly correlating difference values are: 94%, 86%, 95%,
and 75%, respectively.
Differences between feature values are non-identically distributed. We repeat the same procedure as above, but instead of measuring the significance of correlation, we establish the percentage
of significantly differently distributed difference values X_i by the Wilcoxon rank sum test [4] at a
confidence level of 0.05. For SIFT, SPIN, GLOH, and PCA-SIFT, the percentages of significantly
differently distributed difference values are: 99%, 98%, 92%, and 87%. For the PCA-transformed
versions of SIFT, SPIN, GLOH, and PCA-SIFT, we find: 62%, 40%, 64%, and 51%, respectively.
Note that in all cases, correlation is sufficient to fulfill the assumptions of Corollary 2. We have
illustrated that feature value differences are significantly correlated and significantly non-identically
distributed. We conclude that the assumptions of Corollary 2 about properties of feature vectors are
realistic in practice, and that Weibull functions are expected to fit distance distributions well.
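The two checks above can be sketched as follows, assuming SciPy (the paper's exact pair-sampling details may differ): the fractions of difference-value index pairs that are significantly correlated (Pearson) and significantly differently distributed (Wilcoxon rank-sum).

```python
import numpy as np
from scipy import stats

def assumption_fractions(X, alpha=0.05, n_pairs=1000, seed=0):
    """X: (n_vectors, n_dims) array of differences X_i = |s_i - T_i|**p.
    Returns the fractions of random index pairs (i, j), i != j, that are
    significantly correlated and significantly differently distributed."""
    rng = np.random.default_rng(seed)
    corr = diff = 0
    for _ in range(n_pairs):
        i, j = rng.choice(X.shape[1], size=2, replace=False)
        corr += stats.pearsonr(X[:, i], X[:, j])[1] < alpha
        diff += stats.ranksums(X[:, i], X[:, j])[1] < alpha
    return corr / n_pairs, diff / n_pairs
```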
4.1.2 Inter-feature assumptions
In Corollary 3, we have assumed that distances from one to other feature vectors are described
well by a mixture of Weibulls, if the features are taken from different clusters in the data. Here,
we illustrate that clusters of feature vectors, and clusters of distances, occur in practice. Figure
2a shows Harris-affine regions from a natural scene which are described by the SIFT feature. The
distances are described well by a single Weibull distribution. The same holds for distances from
one to other regions computed from a man-made object, see Figure 2b. In Figure 2c, we illustrate
the distances of one to other regions computed from a composite image containing two types of
regions. This results in two modalities of feature vectors and hence of similarity distances. The distance
distribution is therefore bimodal, illustrating the general case of multimodality to be expected in
realistic, heterogeneous image data. We conclude that the assumptions of Corollary 3 are realistic
in practice, and that the Weibull function, or a mixture, fits distance distributions well.
4.2 Validation of Weibull-shaped distance distributions
In this experiment, we validate the fitting of Weibull distributions of distances from one reference
feature vector to other vectors. We consider the same data as before. Over 1,000 repetitions we
consider the goodness-of-fit of L2-distances by the Weibull distribution. The parameters of the
Weibull distribution function are obtained by maximum likelihood estimation. The established fit is
assessed by the Anderson-Darling test at a significance level of α = 0.05 [20]. The Anderson-Darling
test has also proven to be suited to measure the goodness-of-fit of mixture distributions [19].
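A sketch of the fit-and-test step, assuming SciPy (the critical values used in the paper are not reproduced; the Anderson-Darling statistic below would still have to be compared against tabulated or simulated thresholds):

```python
import numpy as np
from scipy import stats

def weibull_fit_ad(d):
    """MLE Weibull fit to distances d, plus the Anderson-Darling A^2
    statistic against the fitted CDF (smaller means a better fit)."""
    shape, _, scale = stats.weibull_min.fit(d, floc=0)  # MLE, location fixed at 0
    x = np.sort(np.asarray(d, float))
    n = len(x)
    F = np.clip(stats.weibull_min(shape, scale=scale).cdf(x), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    a2 = -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))
    return shape, scale, a2
```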
Table 1 indicates that for most of the feature types computed from various regions, more than 90%
of the distance distributions are fitted by a single Weibull function. As expected, distances between each
of the SPIN, SIFT, PCA-SIFT and GLOH features are fitted well by Weibull distributions. The
exception here is the low number of fits for the SIFT and SPIN features computed from Hessian-affine regions. The distributions of distances between these two region/feature combinations tend to
have multiple modes. Likewise, there is a low percentage of fits of the L2-distance distributions of the
[Figure 2 plots: panels (a), (b) and (c), each showing a probability density over distances (roughly 250 to 700).]
Figure 2: Distance distributions from one randomly selected image region to other regions, each
described by the SIFT feature. The distance distribution is described by a single Weibull function
for a natural scene (a) and a man-made object (b). For a composite image, the distance distribution
is bimodal (c). Samples from each of the distributions are shown in the upper images.
Table 1: Accepted Weibull fits for COREL data [5].

                     Harris-affine   Hessian-affine   MSER            IBR
                     c=1    c≥2      c=1    c≥2       c=1    c≥2      c=1    c≥2
SIFT                 95%    100%     60%    99%       98%    100%     92%    100%
SIFT (g=PCA)         95%    99%      60%    98%       98%    100%     92%    99%
PCA-SIFT             89%    100%     96%    100%      94%    100%     95%    100%
PCA-SIFT (g=PCA)     89%    100%     96%    100%      94%    100%     95%    100%
SPIN                 71%    99%      12%    99%       77%    99%      45%    98%
SPIN (g=PCA)         71%    100%     12%    97%       77%    99%      45%    98%
GLOH                 87%    100%     91%    100%      82%    99%      86%    100%
GLOH (g=PCA)         87%    100%     91%    99%       82%    99%      86%    100%

Percentages of L2-distance distributions fitted by a Weibull function (c = 1) and a mixture of two Weibull
functions (c ≥ 2) are given.
SPIN feature computed from IBR regions. Again, multiple modes in the distributions are observed.
For these distributions, a mixture of two Weibull functions provides a good fit (≥ 97%).
5 Conclusion
In this paper, we have derived that similarity distances between one and other image features in
databases are Weibull distributed. Indeed, for various types of features, i.e. the SPIN, SIFT, GLOH
and PCA-SIFT features, and for a large variety of images from the COREL image collection, we
have demonstrated that the similarity distances from one to other features, computed from Lp-norms,
are Weibull-distributed. These results are established by the experiments presented in Table 1. Also,
between PCA-transformed feature vectors, the distances are Weibull-distributed. The Mahalanobis
distance is very similar to the L2 -norm computed in the PCA-transformed feature space. Hence,
we expect Mahalanobis distances to be Weibull distributed as well. Furthermore, when the dataset
is a composition, a mixture of few (typically two) Weibull functions suffices, as established by the
experiments presented in Table 1. The resulting Weibull distributions are distinctively different from
the distributions suggested in literature, as they are positively or negatively skewed while the Gamma
[7] and normal [23] distributions are positively and non-skewed, respectively.
We have demonstrated that the Weibull distribution is the preferred choice for estimating properties
of similarity distances. The assumptions under which the theory is valid are realistic for images. We
experimentally have shown them to hold for various popular feature extraction algorithms, and for a
diverse range of images. This fundamental insight opens new directions in the assessment of feature
similarity, with projected improvements and speed-ups in object/scene recognition algorithms.
Acknowledgments
This work is partly sponsored by the EU funded NEST project PERCEPT, by the Dutch BSIK
project Multimedian, and by the EU Network of Excellence MUSCLE.
References
[1] B. G. Batchelor. Pattern Recognition: Ideas in Practice. Plenum Press, New York, 1995.
[2] E. Bertin. Global fluctuations and Gumbel statistics. Physical Review Letters, 95(170601):1-4, 2005.
[3] E. Bertin and M. Clusel. Generalised extreme value statistics and sum of correlated variables. Journal of Physics A, 39:7607, 2006.
[4] W. J. Conover. Practical Nonparametric Statistics. Wiley, New York, 1971.
[5] Corel Gallery. www.corel.com.
[6] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, 2005.
[7] A. Ferencz, E. G. Learned-Miller, and J. Malik. Building a classification cascade for visual identification from one example. In Proceedings of the International Conference on Computer Vision, pages 286-293. IEEE Computer Society, 2003.
[8] R. Fergus, P. Perona, and A. Zisserman. A sparse object category model for efficient learning and exhaustive recognition. In Proceedings of the Computer Vision and Pattern Recognition. IEEE, 2005.
[9] J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, and H. Geerts. Color invariance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12):1338-1350, 2001.
[10] E. J. Gumbel. Statistics of Extremes. Columbia University Press, New York, 1958.
[11] C. Harris and M. Stephens. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, pages 189-192, Manchester, 1988.
[12] F. Jurie and B. Triggs. Creating efficient codebooks for visual recognition. In ICCV, pages 604-610, 2005.
[13] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[14] J. M. Marin, M. T. Rodriquez-Bernal, and M. P. Wiper. Using Weibull mixture distributions to model heterogeneous survival data. Communications in Statistics, 34(3):673-684, 2005.
[15] R. S. Michalski, R. E. Stepp, and E. Diday. A recent advance in data analysis: Clustering objects into classes characterized by conjunctive concepts. In L. N. Kanal and A. Rosenfeld, editors, Progress in Pattern Recognition, pages 33-56. North-Holland Publishing Co., Amsterdam, 1981.
[16] K. Mikolajczyk, B. Leibe, and B. Schiele. Multiple object class detection with a generative model. In CVPR, 2006.
[17] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615-1630, 2005.
[18] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. International Journal of Computer Vision, 65(1/2):43-72, 2005.
[19] K. Mosler. Mixture models in econometric duration analysis. Applied Stochastic Models in Business and Industry, 19(2):91-104, 2003.
[20] NIST/SEMATECH. e-Handbook of Statistical Methods. NIST, http://www.itl.nist.gov/div898/handbook/, 2006.
[21] E. Nowak and F. Jurie. Learning visual similarity measures for comparing never seen objects. In CVPR, 2007.
[22] A. Papoulis and S. U. Pillai. Probability, Random Variables and Stochastic Processes. McGraw-Hill, New York, 4th edition, 2002.
[23] E. Pekalska and R. P. W. Duin. Classifiers for dissimilarity-based pattern recognition. In Proceedings of the International Conference on Pattern Recognition, volume 2, page 2012, 2000.
[24] C. Schmid and R. Mohr. Local grayvalue invariants for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(5):530-535, 1997.
[25] J. C. van Gemert, J. M. Geusebroek, C. J. Veenman, C. G. M. Snoek, and A. W. M. Smeulders. Robust scene categorization by learning image statistics in context. In CVPR Workshop on Semantic Learning Applications in Multimedia (SLAM), 2006.
Tsuyoshi Kato†, Hisashi Kashima‡, Masashi Sugiyama§, Kiyoshi Asai
Graduate School of Frontier Sciences, The University of Tokyo
† Institute for Bioinformatics Research and Development (BIRD),
Japan Science and Technology Agency (JST)
‡ Tokyo Research Laboratory, IBM Research
§ Department of Computer Science, Tokyo Institute of Technology
AIST Computational Biology Research Center
[email protected], kashi [email protected],
[email protected], [email protected]
Abstract
When we have several related tasks, solving them simultaneously is shown to be
more effective than solving them individually. This approach is called multi-task
learning (MTL) and has been studied extensively. Existing approaches to MTL
often treat all the tasks as uniformly related to each other and the relatedness of
the tasks is controlled globally. For this reason, the existing methods can lead
to undesired solutions when some tasks are not highly related to each other, and
some pairs of related tasks can have significantly different solutions. In this paper, we propose a novel MTL algorithm that can overcome these problems. Our
method makes use of a task network, which describes the relation structure among
tasks. This allows us to deal with intricate relation structures in a systematic way.
Furthermore, we control the relatedness of the tasks locally, so all pairs of related
tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines (SVMs) and show that the optimization problem can be cast
as a second order cone program, which is convex and can be solved efficiently.
The usefulness of our approach is demonstrated through simulations with protein
super-family classification and ordinal regression problems.
1 Introduction
In many practical situations, a classification task can often be divided into related sub-tasks. Since
the related sub-tasks tend to share common factors, solving them together is expected to be more
advantageous than solving them independently. This approach is called multi-task learning (MTL,
a.k.a. inductive transfer or learning to learn) and has theoretically and experimentally proven to be
useful [4, 5, 8].
Typically, the "relatedness" among tasks is implemented as imposing the solutions of related tasks to
be similar (e.g. [5]). However, the MTL methods developed so far have several limitations. First, it
is often assumed that all sub-tasks are related to each other [5]. However, this may not always be true
in practice: some are related but others may not be. The second problem is that the related tasks
are often imposed to be close in the sense that the sum of the distances between solutions over all
pairs of related tasks is upper-bounded [8] (which is often referred to as the global constraint [10]).
This implies that all the solutions of related tasks are not necessarily close, but some can be quite
different.
In this paper, we propose a new MTL method which overcomes the above limitations. We settle the
first issue by making use of a task network that describes the relation structure among tasks. This
enables us to deal with intricate relation structures in a systematic way. We solve the second problem
by directly upper-bounding each distance between the solutions of related task pairs (which we call
local constraints).
We apply these ideas in the framework of support vector machines (SVMs) and show that linear SVMs
can be trained via a second order cone program (SOCP) [3] in the primal. An SOCP is a convex
problem and the global solution can be computed efficiently. We further show that the kernelized
version of the proposed method can be formulated as a matrix-fractional program (MFP) [3] in the
dual, which can be again cast as an SOCP; thus the optimization problem of the kernelized variant is
still convex and the global solution can be computed efficiently. Through experiments with artificial
and real-world protein super-family classification data sets, we show that the proposed MTL method
compares favorably with existing MTL methods.
We further test the performance of the proposed approach in ordinal regression scenarios [9], where
the goal is to predict ordinal class labels such as users' preferences ("like"/"neutral"/"dislike") or
students' grades (from "A" to "F"). The ordinal regression problems can be formulated as a set of
one-versus-one classification problems, e.g., "like" vs. "neutral" and "neutral" vs. "dislike". In ordinal
regression, the relatedness among tasks is highly structured. That is, the solutions (decision boundaries) of adjacent problems are expected to be similar, but others may not be related, e.g., "A" vs. "B"
and "B" vs. "C" would be related, but "A" vs. "B" and "E" vs. "F" may not be. Our experiments
demonstrate that the proposed method is also useful in the ordinal regression scenarios and tends to
outperform existing approaches [9, 8].
2 Problem Setting
In this section, we formulate the MTL problem.
Let us consider M binary classification tasks, which all share the common input-output space X × {±1}. For the time being, we assume X ⊂ R^d for simplicity; later in Section 4, we extend it to
reproducing kernel Hilbert spaces. Let {x_t, y_t}_{t=1}^{ℓ} be the training set, where x_t ∈ X and y_t ∈ {±1}
for t = 1, ..., ℓ. Each data sample (x_t, y_t) has its target task; we denote the set of sample indices
of the i-th task by I_i. We assume that each sample belongs only to a single task, i.e., the index sets
are exclusive: Σ_{i=1}^{M} |I_i| = ℓ and I_i ∩ I_j = ∅, ∀i ≠ j.

The goal is to learn the score function of each classification task: f_i(x; w_i, b_i) = w_i⊤ x + b_i, for
i = 1, ..., M, where w_i ∈ R^d and b_i ∈ R are the model parameters of the i-th task. We assume that
a task network is available. The task network describes the relationships among tasks, where each
node represents a task and two nodes are connected by an edge if they are related to each other¹. We
denote the edge set by E ≡ {(i_k, j_k)}_{k=1}^{K}.
3 Local MTL with Task Network: Linear Version
In this section, we propose a new MTL method.
3.1 Basic Idea
When the relation among tasks is not available, we may just solve M penalized fitting problems
individually:
(1/2) ‖w_i‖² + C_ξ Σ_{t∈I_i} Hinge(f_i(x_t; w_i, b_i), y_t),   for i = 1, ..., M,   (1)

where C_ξ ∈ R_+ is a regularization constant and Hinge(·, ·) is the hinge loss function:
Hinge(f, y) ≡ max(1 − fy, 0). This individual approach tends to perform poorly if the number
of training samples in each task is limited; the performance is expected to be improved if more
training samples are available. Here, we can exploit the information of the task network. A naive
¹ More generally, the tasks can be related in an inhomogeneous way, i.e., the strength of the relationship
among tasks can be dependent on tasks. This general setting can be similarly formulated by a weighted network,
where edges are weighted according to the strength of the connections. All the discussions in this paper can be
easily extended to weighted networks, but for simplicity we focus on unweighted networks.
idea would be to use the training samples of neighboring tasks in the task network for solving the
target fitting problem. However, this does not fully make use of the network structure since there are
many other indirectly connected tasks via some paths on the network.
To cope with this problem, we take another approach here, which is based on the expectation that
the solutions of related tasks are close to each other. More specifically, we impose the following
constraint on the optimization problem (1):
(1/2) ‖w_{i_k} − w_{j_k}‖² ≤ ρ,   for ∀k = 1, ..., K.   (2)

Namely, we upper-bound each difference between the solutions of related task pairs by a positive
scalar ρ ∈ R_+. We refer to this constraint as the local constraint, following [10]. Note that we do
not impose a constraint on the bias parameter b_i since the bias could be significantly different even
among related tasks. The constraint (2) allows us to implicitly increase the number of training
samples over the task network in a systematic way through the solutions of related tasks.
Following the convention [8], we blend Eqs. (1) and (2) as

(1/2M) Σ_{i=1}^{M} ‖w_i‖² + C_ξ Σ_{i=1}^{M} Σ_{t∈I_i} Hinge(f_i(x_t; w_i, b_i), y_t) + C_ρ ρ,   (3)

where C_ρ is a positive trade-off parameter. Then our optimization problem is summarized as follows:
Problem 1.

min   (1/2M) Σ_{i=1}^{M} ‖w_i‖² + C_ξ ‖ξ‖₁ + C_ρ ρ
wrt.  w ≡ [w_1⊤, ..., w_M⊤]⊤ ∈ R^{Md},  b ∈ R^M,  ξ ∈ R^ℓ_+,  and  ρ ∈ R_+,
subj. to  (1/2) ‖w_{i_k} − w_{j_k}‖² ≤ ρ, ∀k,   and   y_t (w_i⊤ x_t + b_i) ≥ 1 − ξ_t, ∀t ∈ I_i, ∀i,
where ξ ≡ [ξ_1, ..., ξ_ℓ]⊤.   (4)
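Problem 1 can be prototyped in a few lines with a generic convex solver. The following is a minimal sketch assuming CVXPY (an illustration only, not the specialized SOCP reduction of Theorem 1 below; all names and data shapes are assumptions): X holds the ℓ inputs row-wise, task_of[t] gives the task index of sample t, and edges lists the pairs (i_k, j_k) of the task network.

```python
import cvxpy as cp
import numpy as np

def train_local_mtl(X, y, task_of, edges, M, C_xi=1.0, C_rho=1.0):
    l, d = X.shape
    W = cp.Variable((M, d))            # one weight vector per task
    b = cp.Variable(M)
    xi = cp.Variable(l, nonneg=True)   # slacks
    rho = cp.Variable(nonneg=True)     # bound in the local constraints

    # Margin constraints, one per training sample.
    cons = [y[t] * (X[t] @ W[task_of[t]] + b[task_of[t]]) >= 1 - xi[t]
            for t in range(l)]
    # Local constraints (Eq. 2), one per task-network edge.
    cons += [0.5 * cp.sum_squares(W[i] - W[j]) <= rho for (i, j) in edges]

    obj = cp.sum_squares(W) / (2 * M) + C_xi * cp.sum(xi) + C_rho * rho
    cp.Problem(cp.Minimize(obj), cons).solve()
    return W.value, b.value, rho.value
```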
3.2 Primal MTL Learning by SOCP
The second order cone program (SOCP) is a class of convex programs of minimizing a linear function over an intersection of second-order cones [3]:²

Problem 2.

min  f⊤z   wrt.  z ∈ R^n   subj. to  ‖A_i z + b_i‖ ≤ c_i⊤ z + d_i,  for i = 1, ..., N,   (5)

where f ∈ R^n, A_i ∈ R^{(n_i−1)×n}, b_i ∈ R^{n_i−1}, c_i ∈ R^n, and d_i ∈ R.
Linear programs, quadratic programs, and quadratically-constrained quadratic programs are actually
special cases of SOCPs. SOCPs are a sub-class of semidefinite programs (SDPs) [3], but SOCPs can
be solved more efficiently than SDPs. Successful optimization algorithms for both SDP and SOCP
are interior-point algorithms. The SDP solvers (e.g. [2]) consume O(n² Σ_i n_i²) time complexity
for solving Problem 2, but the SOCP-specialized solvers that directly solve Problem 2 take only
O(n² Σ_i n_i) computation [7]. Thus, SOCPs can be solved more efficiently than SDPs.
We can show that Problem 1 can be cast as an SOCP using hyperbolic constraints [3].
Theorem 1. Problem 1 can be reduced to an SOCP, and it can be solved with O((Md + ℓ)²(Kd + ℓ))
computation.
4 Local MTL with Task Network: Kernelization
The previous section showed that a linear version of the proposed MTL method can be cast as an
SOCP. In this section, we show how the kernel trick could be employed for obtaining a non-linear
variant.
² More generally, an SOCP can include linear equality constraints, but they can be eliminated, for example,
by some projection method.
4.1 Dual Formulation
Let K^fea be a positive semidefinite matrix with the (s, t)-th element being the inner product of
feature vectors x_s and x_t: K^fea_{s,t} ≡ ⟨x_s, x_t⟩. This is the kernel matrix of feature vectors. We also
introduce a kernel among tasks. Using a new K-dimensional non-negative parameter vector η ∈ R^K_+,
we define the kernel matrix of tasks by

    K^net(η) ≡ ((1/M) I_M + U_η)^{−1},

where U_η ≡ Σ_{k=1}^K η_k U_k, U_k ≡ E^{(i_k,i_k)} + E^{(j_k,j_k)} − E^{(i_k,j_k)} − E^{(j_k,i_k)}, and E^{(i,j)} ∈ R^{M×M} is the
sparse matrix whose (i, j)-th element is one and all the other elements are zero. Note that this is the graph
Laplacian kernel [11], where the k-th edge is weighted according to η_k. Let Z ∈ {0, 1}^{M×ℓ} be the
indicator of tasks and samples such that Z_{i,t} = 1 if t ∈ I_i and Z_{i,t} = 0 otherwise. Then the
information about the tasks is expressed by the ℓ × ℓ kernel matrix Z'K^net(η)Z. We integrate
the two kernel matrices K^fea and Z'K^net(η)Z by

    K^int(η) ≡ K^fea ∘ (Z'K^net(η)Z),   (6)

where ∘ denotes the Hadamard product (a.k.a. element-wise product). This parameterized matrix K^int(η) is guaranteed to be positive semidefinite [6].
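A direct way to see these definitions is to build the matrices numerically. The sketch below (sizes and values are arbitrary placeholders) constructs K^net(η) from the edge set, forms K^int(η) by the Hadamard product of Eq. (6), and checks positive semidefiniteness:

import numpy as np

def task_kernel(M, edges, eta):
    """Graph-Laplacian task kernel K^net(eta) = ((1/M) I_M + U_eta)^{-1}."""
    U = np.zeros((M, M))
    for (i, j), e in zip(edges, eta):
        U[i, i] += e
        U[j, j] += e
        U[i, j] -= e
        U[j, i] -= e
    return np.linalg.inv(np.eye(M) / M + U)

# Arbitrary toy sizes: M tasks, ell samples, task indicator Z (M x ell).
M, ell = 3, 6
edges = [(0, 1), (1, 2)]
eta = [0.5, 0.2]
Z = np.zeros((M, ell))
Z[0, :2] = Z[1, 2:4] = Z[2, 4:] = 1.0        # samples 0-1 in task 0, etc.

rng = np.random.default_rng(0)
Xf = rng.standard_normal((ell, 4))
K_fea = Xf @ Xf.T                            # linear feature kernel
K_net = task_kernel(M, edges, eta)
K_int = K_fea * (Z.T @ K_net @ Z)            # Hadamard product, Eq. (6)
print(np.linalg.eigvalsh(K_int).min())       # >= 0 up to numerics (PSD)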
Based on the above notations, the dual formulation of Problem 1 can be expressed using the parameterized integrated kernel matrix K^int(η) as follows:
Problem 3.

    min  (1/2) α' diag(y) K^int(η) diag(y) α − α'1,
    wrt. α ∈ R^ℓ_+ and η ∈ R^K_+,   (7)
    subj. to  α ≤ C_γ 1,   Z diag(y) α = 0_M,   η'1 ≤ C_ρ.
We note that the solutions α and η tend to be sparse due to the ℓ1 norm.
Changing the definition of K^fea from the linear kernel to an arbitrary kernel, we can extend the
proposed linear MTL method to non-linear domains. Furthermore, we can also deal with non-vectorial structured data by employing a suitable kernel such as the string kernel or the Fisher
kernel.
In the test stage, a new sample x in the j-th task is classified by

    f_j(x) = Σ_{t=1}^ℓ Σ_{i=1}^M α_t y_t k^fea(x_t, x) k^net(i, j) Z_{i,t} + b_j,   (8)

where k^fea(·, ·) and k^net(·, ·) are the kernel functions over features and tasks, respectively.
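The test-stage rule of Eq. (8) amounts to a kernel expansion gated by the task indicator. A literal (unoptimized) sketch, with placeholder values standing in for a fitted α and b:

import numpy as np

def predict(x_new, j, X_train, y, alpha, b, Z, K_net, kfea):
    """Eq. (8): f_j(x) = sum_t sum_i alpha_t y_t kfea(x_t, x) knet(i, j) Z[i, t] + b_j."""
    ell, M = len(y), Z.shape[0]
    score = b[j]
    for t in range(ell):
        k_t = kfea(X_train[t], x_new)
        for i in range(M):
            if Z[i, t]:                       # sample t belongs to task i
                score += alpha[t] * y[t] * k_t * K_net[i, j]
    return score

# Placeholder instantiation (alpha and b are NOT fitted values, just a demo):
rng = np.random.default_rng(0)
X_train = rng.standard_normal((6, 4))
y = rng.choice([-1.0, 1.0], 6)
alpha = np.abs(rng.standard_normal(6))
b = np.zeros(3)
Z = np.zeros((3, 6))
Z[0, :2] = Z[1, 2:4] = Z[2, 4:] = 1.0
K_net = np.eye(3)                             # stands in for K^net(eta)
kfea = lambda u, v: float(u @ v)              # linear feature kernel
print(predict(X_train[0], 0, X_train, y, alpha, b, Z, K_net, kfea))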
4.2 Dual MTL Learning by SOCP
Here, we show that the above dual problem can also be reduced to an SOCP. To this end, we first
introduce a matrix-fractional program (MFP) [7]:
Problem 4.

    min  (Fz + g)' P(z)^{−1} (Fz + g)
    wrt. z ∈ R^p_+
    subj. to  P(z) ≡ P_0 + Σ_{i=1}^p z_i P_i ∈ S^n_{++},

where P_i ∈ S^n_+, F ∈ R^{n×p}, and g ∈ R^n. Here S^n_+ and S^n_{++} denote the positive semidefinite cone
and the strictly positive definite cone of n × n matrices, respectively.
Let us re-define d as the rank of the feature kernel matrix K^fea. We introduce a matrix V^fea ∈ R^{ℓ×d}
which decomposes the feature kernel matrix as K^fea = V^fea (V^fea)'. Define the ℓ-dimensional vectors
f_h ∈ R^ℓ of the h-th feature by V^fea ≡ [f_1, . . . , f_d] ∈ R^{ℓ×d}, and the matrices F_h ≡ Z diag(f_h ∘ y),
for h = 1, . . . , d. Using those variables, the objective function in Problem 3 can be rewritten as

    J_D = (1/2) Σ_{h=1}^d α' F_h' ((1/M) I_M + U_η)^{−1} F_h α − α'1 .   (9)
This implies that Problem 3 can be transformed into the combination of a linear program and d
MFPs.
Let us further introduce the vector v_k ∈ R^M for each edge: v_k = e_{i_k} − e_{j_k}, where e_{i_k} is the unit vector
with the i_k-th element being one. Let V^lap be the matrix defined by V^lap = [v_1, . . . , v_K] ∈ R^{M×K}.
Then we can re-express the graph Laplacian matrix of tasks as U_η = V^lap diag(η) (V^lap)'.
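This factorization is easy to verify numerically; the following sketch (arbitrary toy values) checks that the edge-incidence construction reproduces U_η:

import numpy as np

M = 4
edges = [(0, 1), (1, 2), (2, 3)]
eta = np.array([0.7, 0.1, 0.4])

# Incidence-style matrix V^lap: column k is v_k = e_{i_k} - e_{j_k}.
V = np.zeros((M, len(edges)))
for k, (i, j) in enumerate(edges):
    V[i, k], V[j, k] = 1.0, -1.0

U_sum = sum(e * np.outer(V[:, k], V[:, k]) for k, e in enumerate(eta))
assert np.allclose(U_sum, V @ np.diag(eta) @ V.T)   # U_eta = V diag(eta) V'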
Given the fact that an MFP can be reduced to an SOCP [7], we can reduce Problem 3 to the following
SOCP:
Problem 5.

    min  −α'1 + (1/2) Σ_{h=1}^d (s_{0,h} + s_{1,h} + · · · + s_{K,h}),   (10)
    wrt. s_{0,h} ∈ R, s_{k,h} ∈ R, u_{0,h} ∈ R^M, u_h ≡ [u_{1,h}, . . . , u_{K,h}]' ∈ R^K, ∀k, ∀h,   (11)
         α ∈ R^ℓ_+, η ∈ R^K_+,   (12)
    subj. to  α ≤ C_γ 1_ℓ,   Z diag(y) α = 0_M,   η'1_K ≤ C_ρ,   (13)
              ||[2 M^{1/2} u_{0,h}; s_{0,h} − 1]|| ≤ s_{0,h} + 1,   u_{0,h} + V^lap u_h = F_h α,   ∀h,   (14)
              ||[2 u_{k,h}; s_{k,h} − η_k]|| ≤ s_{k,h} + η_k,   ∀k, ∀h.   (15)
Consequently, we obtain the following result:
Theorem 2. The dual problem of CoNs learning (Problem 3) can be reduced to the SOCP in Problem 5, and it can be solved with O((Kd + ℓ)²((M + K)d + ℓ)) computation.
5 Discussion
In this section, we discuss the properties of the proposed MTL method and the relation to existing
methods.
MTL with Common Bias A possible variant of the proposed MTL method would be to share a
common bias parameter among all tasks (i.e., b_1 = b_2 = · · · = b_M). The idea is expected to be useful
particularly when the number of samples in each task is very small. We can also apply the common
bias idea in the kernelized version just by replacing the constraint Z diag(y)α = 0_M in Problem 3
by y'α = 0.
Global vs. Local Constraints Micchelli and Pontil [8] have proposed a related MTL
method which upper-bounds the sum of the differences of the K related task pairs, i.e.,
(1/2) Σ_{k=1}^K ||w_{i_k} − w_{j_k}||² ≤ ρ. We call it the global constraint. This global constraint can also have
a similar effect to our local constraint (2), i.e., the related task pairs tend to have close solutions.
However, the global constraint can allow some of the distances to be large, since only the sum is
upper-bounded. This actually causes a significant performance degradation in practice, which will
be experimentally demonstrated in Section 6. We note that the idea of local constraints is also used
in the kernel learning problem [10].
Relation to Standard SVMs By construction, the proposed MTL method includes the standard
SVM learning algorithm as a special case. Indeed, when the number of tasks is one, Problem 3 is
reduced to the standard SVM optimization problem. Thus, the proposed method may be regarded
as a natural extension of SVMs.
Ordinal Regression As we mentioned in Section 1, MTL approaches are useful in ordinal regression problems. Ordinal regression is the task of learning multiple quantiles, which can be formulated
as a set of one-versus-one classification problems. A naive approach to ordinal regression is to
individually train M SVMs with score functions f_i(x) = ⟨w_i, x⟩ + b_i, i = 1, . . . , M.
[Figure 1 plots omitted: each panel shows a 2-dimensional input space with both axes spanning [−1, 1]. Panels: (a) true classification boundaries; (b) IL-SVMs; (c) MTL-SVM(global/full); (d) MTL-SVM(local/network).]
Figure 1: Toy multiple classification tasks. Each subfigure contains the 10-th, 30-th, 50-th, 70-th, and
90-th tasks in the top row and the 110-th, 130-th, 150-th, 170-th, and 190-th tasks in the bottom row.
Shashua and Levin [9] proposed an ordinal regression method called support vector ordinal regression (SVOR), where the weight vectors are shared by all SVMs (i.e., w_1 = w_2 = · · · = w_M)
and only the bias parameter is learned individually.
The proposed MTL method can be naturally employed in ordinal regression by constraining the
weight vectors as (1/2)||w_i − w_{i+1}||² ≤ ρ, i = 1, . . . , M − 1, i.e., the task network only has an edge between consecutive tasks. This method actually includes the above two ordinal regression approaches
as special cases: C_ρ = 0 (i.e., ignoring the task network) yields the independent training of SVMs,
and C_ρ = ∞ (i.e., the weight vectors of all SVMs agree) reduces to SVOR. Thus, in the context
of ordinal regression, the proposed method smoothly bridges the two extremes and allows us to control
the strength of our belief in the task constraints.
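A chain network for M quantile classifiers is trivial to construct; the sketch below also records the two limiting cases just discussed (the 0-based indexing convention is an assumption of the sketch):

# Chain task network for ordinal regression with M quantile classifiers.
def ordinal_chain_edges(M):
    # Edge k couples tasks k and k+1, so the local constraint
    # (1/2)||w_k - w_{k+1}||^2 <= rho acts between neighboring quantiles only.
    return [(k, k + 1) for k in range(M - 1)]

print(ordinal_chain_edges(5))   # [(0, 1), (1, 2), (2, 3), (3, 4)]
# C_rho = 0       -> constraints inactive -> M independent SVMs
# C_rho -> infty  -> rho driven to 0      -> shared weights, i.e., SVOR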
6 Experiments
In this section, we show the usefulness of the proposed method through experiments.
6.1 Toy Multiple Classification Tasks
First, we illustrate how the proposed method behaves using a 2-dimensional toy data set, which
includes 200 tasks (see Figure 1(a)). Each task possesses a circular-shaped classification boundary
with different centers and a fixed radius 0.5. The location of the center in the i-th task is (−1 +
0.02(i − 1), 0) for 1 ≤ i ≤ 100 and (0, −1 + 0.02(i − 101)) for 101 ≤ i ≤ 200. For each task,
only two positive and two negative samples are generated following the uniform distribution. We
construct a task network where consecutive tasks are connected in a circular manner, i.e., (1, 2),
(2, 3), . . ., (99, 100), and (100, 1) for the first 100 tasks and (101, 102), (102, 103), . . ., (199, 200),
and (200, 101) for the last 100 tasks; we further add (50, 150), which connects the clusters of the first
100 and the last 100 nodes.
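For reference, this toy task network can be written down explicitly. The sketch uses 1-based task indices as in the text, and assumes the second ring is closed with the edge (200, 101) as described above:

# Toy task network: two 100-node rings plus the bridge edge (50, 150).
def toy_task_network():
    ring1 = [(i, i + 1) for i in range(1, 100)] + [(100, 1)]
    ring2 = [(i, i + 1) for i in range(101, 200)] + [(200, 101)]
    return ring1 + ring2 + [(50, 150)]

edges = toy_task_network()
print(len(edges))   # 201 = 100 + 100 + 1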
We compare the following methods: a naive method where 200 SVMs are trained individually (individually learned SVM, "IL-SVM"), the MTL-SVM algorithm where the global constraint and the
fully connected task network are used [5] ("MTL-SVM(global/full)"), and the proposed method which
uses local constraints and the properly defined task network ("MTL-SVM(local/network)").
The results are exhibited in Figure 1, showing that IL-SVM cannot capture the circular shape due
to the small sample size in each task. MTL-SVM(global/full) can successfully capture closed-loop
boundaries by making use of the information from other tasks. However, the result is still not
very reliable, since non-consecutive unrelated tasks heavily damage the solutions. On the other hand,
MTL-SVM(local/network) nicely captures the circular boundaries and the results are highly reliable.
Thus, given an appropriate task network, the proposed MTL-SVM(local/network) can effectively
exploit information of the related tasks.
Table 1: The accuracy of each method in the protein super-family classification task.
Dataset | IL-SVM        | One-SVM       | MTL-SVM (global/full) | MTL-SVM (global/network) | MTL-SVM (local/network)
--------|---------------|---------------|-----------------------|--------------------------|------------------------
d-f     | 0.908 (0.023) | 0.941 (0.015) | 0.945 (0.013)         | 0.933 (0.017)            | 0.952 (0.015)
d-s     | 0.638 (0.067) | 0.722 (0.030) | 0.698 (0.036)         | 0.695 (0.032)            | 0.747 (0.020)
d-o     | 0.725 (0.032) | 0.747 (0.017) | 0.748 (0.021)         | 0.749 (0.023)            | 0.764 (0.028)
f-s     | 0.891 (0.036) | 0.886 (0.021) | 0.918 (0.020)         | 0.911 (0.022)            | 0.918 (0.025)
f-o     | 0.792 (0.046) | 0.819 (0.029) | 0.834 (0.021)         | 0.828 (0.015)            | 0.838 (0.018)
s-o     | 0.663 (0.034) | 0.695 (0.034) | 0.692 (0.050)         | 0.663 (0.068)            | 0.703 (0.036)
6.2 Protein Super-Family Classification
Next, we test the performance of the proposed method on real-world protein super-family classification problems.
The input data are amino acid sequences from the SCOP database [1] (not to be confused with SOCP). We counted
2-mers for extraction of feature vectors. There are 20 kinds of amino acids; hence, the number
of features is 20² = 400. We use RBF kernels, where the kernel width σ²_rbf is set to the average
of the squared distances to the fifth nearest neighbors. Each data set consists of two folds. Each
fold is divided into several super-families. We here consider the classification problem into the
super-families. A positive class is chosen from one fold, and a negative class is chosen from the
other fold. We perform multi-task learning over all the possible combinations. For example, three
super-families are in DNA/RNA binding, and two in SH3. The number of combinations is 3 × 2 = 6,
so the data set d-s has six binary classification tasks. We used four folds: DNA/RNA binding,
Flavodoxin, OB-fold and SH3. From these folds, we generate six data sets: d-f, d-s, d-o, f-s, f-o,
and s-o, where the fold names are abbreviated to d, f, o, and s, respectively.
The task networks are constructed as follows: if the positive super-family or the negative super-family is common to two tasks, the two tasks are regarded as a related task pair and connected by
an edge. We compare the proposed MTL-SVM(local/network) with IL-SVM, "One-SVM", MTL-SVM(global/full), and MTL-SVM(global/network). One-SVM regards the multiple tasks as one big
task and learns the big task at once with a standard SVM. We set C_γ = 1 for all the approaches. The
value of the parameter C_ρ for the three MTL-SVM approaches is determined by cross-validation over
the training set. We randomly pick ten training sequences from each super-family, and use them for
training. We compute the classification accuracies on the remaining test sequences. We repeat this
procedure 10 times and take the average of the accuracies.
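The network-construction rule can be expressed compactly. In the following sketch the super-family labels are hypothetical placeholders; two tasks are connected whenever they share a positive or a negative super-family:

from itertools import combinations

def protein_task_edges(tasks):
    """Connect two tasks iff they share the positive or the negative super-family."""
    return [(a, b) for a, b in combinations(range(len(tasks)), 2)
            if tasks[a][0] == tasks[b][0] or tasks[a][1] == tasks[b][1]]

# e.g., data set d-s: 3 DNA/RNA-binding super-families x 2 SH3 super-families
tasks = [(p, n) for p in ("d1", "d2", "d3") for n in ("s1", "s2")]
print(len(tasks), "tasks,", len(protein_task_edges(tasks)), "edges")  # 6 tasks, 9 edges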
The results are described in Table 1, showing that the proposed MTL-SVM(local/network) compares favorably with the other methods. In this simulation, the task network is constructed rather
heuristically. Even so, the proposed MTL-SVM(local/network) is shown to significantly outperform
MTL-SVM(global/full), which does not use the network structure. This implies that the proposed
method still works well even when the task network contains small errors. It is interesting to note
that MTL-SVM(global/network) does not actually work well in this simulation, implying that the
task relatedness is not properly controlled by the global constraint. Thus the use of local constraints
would be effective in MTL scenarios.
6.3 Ordinal Regression
As discussed in Section 5, MTL methods are useful in ordinal regression. Here we create five ordinal
regression data sets, described in Table 2; all the data sets are originally regression problems, and the output values are divided into five quantiles. Therefore, the overall task can be divided into four isolated
classification tasks, each of which estimates a quantile. We compare MTL-SVM(local/network) with
IL-SVM, SVOR [9] (see Section 5), MTL-SVM(global/full) and MTL-SVM(global/network). The
value of the parameter C_ρ for the three MTL-SVM approaches is determined by cross-validation over
the training set. We set C_γ = 1 for all the approaches. We use RBF kernels, where the parameter σ²_rbf is set to the average of the squared distances to the fifth nearest neighbors. We randomly
picked 200 samples for training. The remaining samples are used for evaluating the classification
accuracies.
Table 2: The accuracy of each method in ordinal regression tasks.
Data set  | IL-SVM        | SVOR          | MTL-SVM (global/full) | MTL-SVM (global/network) | MTL-SVM (local/network)
----------|---------------|---------------|-----------------------|--------------------------|------------------------
pumadyn   | 0.643 (0.007) | 0.661 (0.006) | 0.629 (0.025)         | 0.645 (0.018)            | 0.661 (0.007)
stock     | 0.894 (0.012) | 0.878 (0.011) | 0.872 (0.010)         | 0.888 (0.010)            | 0.902 (0.007)
bank-8fh  | 0.781 (0.003) | 0.777 (0.006) | 0.772 (0.006)         | 0.773 (0.006)            | 0.779 (0.002)
bank-8fm  | 0.854 (0.004) | 0.845 (0.010) | 0.832 (0.013)         | 0.847 (0.009)            | 0.854 (0.009)
calihouse | 0.648 (0.003) | 0.642 (0.008) | 0.640 (0.005)         | 0.646 (0.007)            | 0.650 (0.004)
The averaged performance over five runs is described in Table 2, showing that the proposed MTL-SVM(local/network) is also promising in ordinal regression scenarios.
7 Conclusions
In this paper, we proposed a new multi-task learning method, which overcomes the limitation of
existing approaches by making use of a task network and local constraints. We demonstrated through
simulations that the proposed method is useful in multi-task learning scenarios; moreover, it also
works excellently in ordinal regression scenarios.
Standard SVMs have a variety of extensions and have been combined with various techniques,
e.g., one-class SVMs, SV regression, and the ν-trick. We expect that such extensions and techniques
can also be applied similarly to the proposed method. Other possible future work includes the
elucidation of the entire regularization path and the application to learning from multiple networks;
developing algorithms for learning probabilistic models with a task network is also a promising
direction to be explored.
Acknowledgments
This work was partially supported by a Grant-in-Aid for Young Scientists (B), number 18700287,
from the Ministry of Education, Culture, Sports, Science and Technology, Japan.
References
[1] A. Andreeva, D. Howorth, S. E. Brenner, T. J. P. Hubbard, C. Chothia, and A. G. Murzin. SCOP database
in 2004: refinements integrate structure and sequence family data. Nucl. Acids Res., 32:D226–D229, 2004.
[2] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software,
11(1):613–623, 1999.
[3] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[5] T. Evgeniou and M. Pontil. Regularized multitask learning. In Proc. of 17th SIGKDD Conf. on Knowledge Discovery and Data Mining, 2004.
[6] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, UC Santa
Cruz, July 1999.
[7] M. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming.
Linear Algebra and its Applications, 284:193–228, 1998.
[8] C. A. Micchelli and M. Pontil. Kernels for multi-task learning. In Lawrence K. Saul, Yair Weiss, and Léon
Bottou, editors, Advances in Neural Information Processing Systems 17, pages 921–928, Cambridge, MA,
2005. MIT Press.
[9] A. Shashua and A. Levin. Ranking with large margin principle: two approaches. In Advances in Neural
Information Processing Systems 15, pages 937–944, Cambridge, MA, 2003. MIT Press.
[10] K. Tsuda and W. S. Noble. Learning kernels from biological networks by maximizing entropy. Bioinformatics, 20(Suppl. 1):i326–i333, 2004.
[11] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty. Nonparametric transforms of graph kernels for
semi-supervised learning. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural
Information Processing Systems 17, Cambridge, MA, 2004. MIT Press.
2,612 | 3,369 | A Bayesian Model of Conditioned Perception
Alan A. Stocker∗ and Eero P. Simoncelli
Howard Hughes Medical Institute,
Center for Neural Science,
and Courant Institute of Mathematical Sciences
New York University
New York, NY-10003, U.S.A.
We argue that in many circumstances, human observers evaluate sensory evidence
simultaneously under multiple hypotheses regarding the physical process that has
generated the sensory information. In such situations, inference can be optimal if
an observer combines the evaluation results under each hypothesis according to
the probability that the associated hypothesis is correct. However, a number of experimental results reveal suboptimal behavior and may be explained by assuming
that once an observer has committed to a particular hypothesis, subsequent evaluation is based on that hypothesis alone. That is, observers sacrifice optimality in
order to ensure self-consistency. We formulate this behavior using a conditional
Bayesian observer model, and demonstrate that it can account for psychophysical
data from a recently reported perceptual experiment in which strong biases in perceptual estimates arise as a consequence of a preceding decision. Not only does
the model provide quantitative predictions of subjective responses in variants of
the original experiment, but it also appears to be consistent with human responses
to cognitive dissonance.
1 Motivation
Is the glass half full or half empty? In different situations, the very same perceptual evidence (e.g. the
perceived level of liquid in a glass) can be interpreted very differently. Our perception is conditioned
on the context within which we judge the evidence. Perhaps we witnessed the process of the glass
being filled, and thus would more naturally think of it as half full. Maybe it is the only glass on
the table that has liquid remaining, and thus its precious content would be regarded as half full. Or
maybe we simply like the content so much that we cannot have enough, in which case we may view
it as being half empty.
Contextual influences in low-level human perception are the norm rather than the exception, and
have been widely reported. Perceptual illusions, for example, often exhibit particularly strong contextual effects, either in terms of perceptual space (e.g. spatial context affects perceived brightness;
see [1] for impressive examples) or time (prolonged exposure to an adaptor stimulus will affect
subsequent perception, see e.g. the motion after-effect [2]). Data of recent psychophysical experiments suggest that an observer?s previous perceptual decisions provide additional form of context
that can substantially influence subsequent perception [3, 4]. In particular, the outcome of a categorical decision task can strongly bias a subsequent estimation task that is based on the same stimulus
presentation. Contextual influences are typically strongest when the sensory evidence is most ambiguous in terms of its interpretation, as in the example of the half-full (or half-empty) glass.
Bayesian estimators have proven successful in modeling human behavior in a wide variety of lowlevel perceptual tasks (for example: cue-integration (see e.g. [5]), color perception (e.g. [6]), visual
motion estimation (e.g. [7, 8])). But they generally do not incorporate contextual dependencies
∗ Corresponding author.
beyond a prior distribution (reflecting past experience) over the variable of interest. Contextual
dependencies may be incorporated in a Bayesian framework by assuming that human observers,
when performing a perceptual task, test different hypotheses about the underlying structure of the
sensory evidence, and arrive at an estimate by weighting the estimates under each hypothesis according to the strength of their belief in that hypothesis. This approach is known as optimal model
evaluation [9], or Bayesian model averaging [10] and has been previously suggested to account for
cognitive reasoning [11]. It further has been suggested that the brain could use different neuromodulators to keep track of the probabilities of individual hypotheses [12]. Contextual effects are
reflected in the observer?s selection and evaluation of these hypotheses, and thus vary with experimental conditions. For the particular case of cue-integration, Bayesian model averaging has been
proposed and tested against data [13, 14], suggesting that some of the observed non-linearities in
cue integration are the result of the human perceptual system taking into account multiple potential
contextual dependencies.
In contrast to these studies, however, we propose that model averaging behavior is abandoned once
the observer has committed to a particular hypothesis. Specifically, subsequent perception is conditioned only on the chosen hypothesis, thus sacrificing optimality in order to achieve self-consistency.
We examine this hypothesis in the context of a recent experiment in which subjects were asked to
estimate the direction of motion of random dot patterns after being forced to make a categorical
decision about whether the direction of motion fell on one side or the other of a reference mark [4].
Depending on the different levels of motion coherence, responses on the estimation task were heavily biased by the categorical decision. We demonstrate that a self-consistent conditional Bayesian
model can account for mean behavior, as well as behavior on individual trials [8]. The model has essentially no free parameters, and in addition is able to make precise predictions under a wide variety
of alternative experimental arrangements. We provide two such example predictions.
2 Observer Model
We define perception as a statistical estimation problem in which an observer tries to infer the value
of some environmental variable s based on sensory evidence m (see Fig. 1). Typically, there are
sources of uncertainty associated with m, including both sensor noise and uncertainty about the
relationship between the sensory evidence and the variable s. We refer to the latter as structural
uncertainty which represents the degree of ambiguity in the observer?s interpretation of the physical
world. In cases where the structural possibilities are discrete, we denote them as a set of hypotheses
H = {h1 , ..., hN }. Perceptual inference requires two steps. First, the observer computes their belief
world
s
property
measurement
noise!
m
p(H|m)
p(s|m)
h1
estimate
...
observer
s(m)
hn
hypotheses
^
prior
knowledge
Figure 1: Perception as conditioned inference problem. Based on noisy sensory measurements
m the observer generates different hypotheses for the generative structure that relates m to the
stimulus variable s. Perception is a two-fold inference problem: Given the measurement and prior
knowledge, the observer generates and evaluates different structural hypotheses h i . Conditioned on
this evaluation, they then infer an estimate s?(m) from the measurement m.
in each hypothesis for given sensory evidence m. Using Bayes? identity, the belief is expressed as
the posterior
p(H|m) =
p(m|H)p(H)
.
p(m)
(1)
Second, for each hypothesis, a conditional posterior is formulated as p(s|m, H = h i ), and the full
(non-conditional) posterior is computed by integrating the evidence over all hypotheses, weighted
by the belief in each hypothesis h i :
p(s|m) =
N
p(s|m, H = hi )p(H = hi |m) .
(2)
i=1
Finally, the observer selects an estimate ŝ that minimizes the expected value (under the posterior)
of an appropriate loss function.¹
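A discretized sketch of the two inference steps may help. It evaluates Eqs. (1) and (2) on a grid, with the per-hypothesis evidence obtained by marginalizing over the stimulus variable; the Gaussian likelihood and indicator priors are chosen purely for illustration (the paper's concrete choices appear in Section 3):

import numpy as np

def averaged_posterior(theta, lik, priors_theta, prior_H):
    """Full posterior p(s|m) of Eq. (2) on a discrete grid `theta`.

    lik[i]          -- p(m|theta, H=h_i) on the grid, for the observed m
    priors_theta[i] -- p(theta|H=h_i) on the grid (normalized)
    prior_H[i]      -- p(H=h_i)
    """
    dx = theta[1] - theta[0]
    # Belief in each hypothesis, Eq. (1): p(h_i|m) proportional to p(m|h_i) p(h_i).
    evidence = np.array([(l * p).sum() * dx for l, p in zip(lik, priors_theta)])
    post_H = evidence * prior_H
    post_H /= post_H.sum()
    # Mixture of conditional posteriors, Eq. (2).
    post = np.zeros_like(theta)
    for l, p, w in zip(lik, priors_theta, post_H):
        cond = l * p
        post += w * cond / (cond.sum() * dx)
    return post, post_H

theta = np.linspace(-30.0, 30.0, 601)
dx = theta[1] - theta[0]
lik = [np.exp(-0.5 * ((5.0 - theta) / 8.0) ** 2)] * 2      # Gaussian, m = 5
priors = [(theta <= 0) / ((theta <= 0).sum() * dx),        # h_1: left of reference
          (theta >= 0) / ((theta >= 0).sum() * dx)]        # h_2: right of reference
p_s, p_H = averaged_posterior(theta, lik, priors, np.array([0.5, 0.5]))
print(np.round(p_H, 3))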
2.1 Decision leads to conditional estimation
In situations where the observer has already made a decision (either explicit or implicit) to select one
hypothesis as being correct, we postulate that subsequent inference will be based on that hypothesis
alone, rather than averaging over the full set of hypotheses. For example, suppose the observer
selects the maximum a posteriori hypothesis h_MAP, the hypothesis that is most probable given the
sensory evidence and the prior distribution. We assume that this decision then causes the observer
to reset the posterior probabilities over the hypotheses to

    p(H|m) = 1,   if H = h_MAP
            = 0,   otherwise.   (3)
That is, the decision making process forces the observer to consider the selected hypothesis as
correct, with all other hypotheses rendered impossible. Changing the beliefs over the hypotheses
will obviously affect the estimate ŝ in our model. Applying the new posterior probabilities of Eq. (3)
simplifies the inference problem Eq. (2) to

    p(s|m) = p(s|m, H = h_MAP) .   (4)
We argue that this simplification by decision is essential for complex perceptual tasks (see Discussion). By making a decision, the observer frees resources, eliminating the need to continuously
represent probabilities about other hypotheses, and also simplifies the inference problem. The price
to pay is that the subsequent estimate is typically biased and sub-optimal.
3 Example: Conditioned Perception of Visual Motion
We tested our observer model by simulating a recently reported psychophysical experiment [4].
Subjects in this experiment were asked on each trial to decide whether the overall motion direction
of a random dot pattern was to the right or to the left of a reference mark (as seen from the fixation
point). Low levels of motion coherence made the decision task difficult for motion directions close
to the reference mark. In a subset of randomly selected trials subjects were also asked to estimate the
precise angle of motion direction (see Fig. 2). The decision task always preceded the estimation
task, but at the time of the decision, subjects were unaware of whether they would have to perform the
estimation task or not.
3.1 Formulating the observer model
We denote θ as the direction of coherent motion of the random dot pattern, and m the noisy sensory
measurement. Suppose that on a given trial the measurement m indicates a direction of motion to
the right of the reference mark. The observer can consider two hypotheses H = {h_1, h_2} about the
actual physical motion of the random dot pattern: either the true motion is actually to the right and
thus in agreement with the measurement, or it is to the left but noise has disturbed the measurement
¹ For the purpose of this paper, we assume a standard squared-error loss function, in which case the observer
should choose the mean of the posterior distribution.
[Figure 2 diagram omitted: panel (a), the decision task relative to the reference mark; panel (b), the estimation task, reported over trials relative to the reference.]
Figure 2: Decision-estimation experiment. (a) Jazayeri and Movshon presented moving random
dot patterns to subjects and asked them to decide if the overall motion direction was either to the
right or the left of a reference mark [4]. Random dot patterns could exhibit three different levels of
motion coherence (3, 6, and 12%), and the single coherent motion direction was randomly selected
from a uniform distribution over a symmetric range of angles [−θ_max, θ_max] around the reference mark. (b)
In a randomly selected 30% of trials, subjects were also asked, after making the directional decision,
to estimate the exact angle of motion direction by adjusting an arrow to point in the direction of
perceived motion. In a second version of the experiment, motion was either toward the direction of
the reference mark or in the opposite direction.
such that it indicates motion to the right. The observer's belief in each of the two hypotheses, based
on their measurement, is given by the posterior distribution according to Eq. (1), with the likelihood

    p(m|H) = ∫_{−θ_max}^{θ_max} p(m|θ, H) p(θ|H) dθ .   (5)
3.2 Model observer vs. human observer
The subsequent conditioned estimate of motion direction then follows from Eq. (4), which can be
rewritten as

    p(θ|m) = p(m|θ, H = h_MAP) p(θ|H = h_MAP) / p(m|H = h_MAP) .   (6)
The model is completely characterized by three quantities: the likelihood functions p(m|θ, H),
the prior distributions p(θ|H) of the direction of motion given each hypothesis, and the prior on the
hypotheses p(H) itself (shown in Fig. 3). In the given experimental setup, both prior distributions
were uniform, but the width parameter θ_max of the motion direction was not explicitly available to
the subjects and had to be individually learned from training trials. In general, subjects seem to
over-estimate this parameter (up to a factor of two), and adjusting its value in the model accounts
for most of the variability between subjects. The likelihood function p(m|θ, H) is given by the
uncertainty about the motion direction due to the low motion coherence levels in the stimuli and the
sensory noise characteristics of the observer. We assumed it to be Gaussian with a width that varies
inversely with the coherence level. Values were estimated from the data plots in [4].
Figure 4 compares the predictions of the observer model with human data. Trial data of the model
were generated by first sampling a hypothesis h′ according to p(H), then drawing a stimulus direction from p(θ|H = h′), then picking a sensory measurement sample m according to the conditional
probability p(m|θ, H = h′), and finally performing inference according to Eqs. (1) and (6). The
model captures the characteristics of human behavior in both the decision and the subsequent estimation task. Note the strong influence of the decision task on the subsequent estimation of the
motion direction, effectively pushing the estimates away from the decision boundary.
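The trial-generation loop just described can be sketched directly. The values of the range parameter θ_max and the sensory noise σ below are assumed placeholders (the paper estimates the noise widths from the data plots in [4]):

import numpy as np

rng = np.random.default_rng(0)
theta_max, sigma = 20.0, 8.0                 # assumed range / noise width (deg)
grid = np.linspace(-theta_max, theta_max, 801)
dx = grid[1] - grid[0]
priors = [(grid <= 0).astype(float),         # h_1: leftward motion
          (grid >= 0).astype(float)]         # h_2: rightward motion

def one_trial():
    h = rng.integers(2)                                     # h' ~ p(H)
    theta = rng.uniform(0.0, theta_max) * (1 if h else -1)  # theta ~ p(theta|h')
    m = theta + sigma * rng.normal()                        # m ~ p(m|theta, h')
    lik = np.exp(-0.5 * ((m - grid) / sigma) ** 2)          # p(m|theta) on the grid
    evidence = [(lik * p).sum() * dx for p in priors]       # Eq. (1), uniform p(H)
    cond = lik * priors[int(np.argmax(evidence))]           # Eq. (6): condition on h_MAP
    cond /= cond.sum() * dx
    return theta, (grid * cond).sum() * dx                  # true vs. estimated direction

for true_th, est_th in (one_trial() for _ in range(5)):
    print(f"true {true_th:6.1f}  ->  estimate {est_th:6.1f}")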
We also compared the model with a second version of the experiment, in which the decision task
was to discriminate between motion toward and away from the reference [4]. Coherent motion of
the random dot pattern was uniformly sampled from a range around the reference and from a range
[Figure 3 plots omitted: p(m|θ, H) for coherence levels 3%, 6%, and 12%; p(θ|H) for the two hypotheses over [−θ_max, θ_max]; p(H) with both values equal to 0.5.]
Figure 3: Ingredients of the conditional observer model. The sensory signal is assumed to be
corrupted by additive Gaussian noise, with a width that varies inversely with the level of motion
coherence. Actual widths were approximated from those reported in [4]. The prior distribution
over the hypotheses p(H) is uniform. The two prior distributions over motion direction given each
hypothesis, p(θ|H = h_{1,2}), are again determined by the experimental setup, and are uniform over
the ranges [0, ±θ_max].
around the direction opposite to the reference, as illustrated by the prior distributions shown in Fig. 5.
Again, note that these distributions are given by the experiment and thus, assuming the same noise
characteristics as in the first experiment, the model has no free parameters.
3.3 Predictions
The model framework also allows us to make quantitative predictions of human perceptual behavior
under conditions not yet tested. Figure 6 shows the model observer's behavior under two modifications of the original experiment. The first is identical to the experiment shown in Fig. 4 but with
unequal prior probability on the two hypotheses. The model predicts that a human subject would
respond to this change by more frequently choosing the more likely hypothesis. However, this hypothesis would also be more likely to be correct, and thus the estimates under this hypothesis would
exhibit less bias than in the original experiment.
The second modification is to add a second reference and ask the subject to decide between three
different classes of motion direction (e.g. left, central, right). Again, the model predicts that in such
a case, a human subject's estimate in the central direction should be biased away from both decision
boundaries, thus leading to an almost constant direction estimate. Estimates following a decision in
favor of the two outer classes show the same repulsive bias as seen in the original experiment.
4 Discussion
We have presented a normative model for human perception that captures the conditioning effects
of decisions on an observer's subsequent evaluation of sensory evidence. The model is based on
the premise that observers aim for optimal inference (taking into account all sensory evidence and
prior information), but that they exhibit decision-induced biases because they also aim to be self-consistent, eliminating alternatives that have been decided against. We've demonstrated that this
model can account for the experimental results of [4].
Although this strategy is suboptimal (in that it does not minimize expected loss), it provides two
fundamental advantages. First, self-consistency would seem an important requirement for a stable
interpretation of the environment, and adhering to it might outweigh the disadvantages of perceptual
misjudgments. Second, framing perception in terms of optimal statistical estimation implies that the
more information an observer evaluates, the more accurately they should be able to solve a perceptual task. But this assumes that the observer can construct and retain full probability distributions
and perform optimal inference calculations on these. Presumably, accumulating more probabilistic
evidence of more complex conditional dependencies has a cost, both in terms of storage, and in terms
of the computational load of performing subsequent inference. Thus, discarding information after
making a decision can help to keep this storage and the computational complexity at a manageable
level, freeing computational resources to perform other tasks.
[Figure 4 panels omitted: data vs. model. Upper-left panels plot the fraction of motion judged right of the reference against true direction [deg] for coherence levels 3%, 6%, and 12%; the remaining panels plot estimated direction [deg] (roughly −40 to 40) against true direction [deg] (−20 to 20).]
Figure 4: Comparison of model predictions with data for a single subject. Upper left: the two panels show the percentage of observed motion to the right as a function of the true pattern direction,
for the three coherence levels tested. The model accurately predicts the subject's behavior, which
exhibits a decrease in the number of false decisions with decreasing noise levels and increasing distance to the reference. Lower left: mean estimates of the direction of motion after performing the
decision task. Clearly, the decision has a substantial impact on the subsequent estimate, producing
a strong bias away from the reference. The model response exhibits biases similar to those of the
human subjects, with lower coherence levels producing stronger repulsive effects. Right: grayscale
images show distributions of estimates across trials for both the human subject and the model observer, for all three coherence levels. All trials are included (correct and incorrect). White dashed
lines represent veridical estimates. The model observer performed 40 trials at each motion direction (in
1.5 degree increments). Human data are replotted from [4].
An interesting avenue for exploration is the implementation of such an algorithm in a neural substrate.
Recent studies propose a means by which populations of neurons can represent and multiply probability distributions [15]. It would be worthwhile to consider how the model presented here could be
implemented with such a neural mechanism. In particular, one might expect that the sudden change
in posterior probabilities over the hypotheses associated with the decision task would be reflected in
sudden changes in response pattern in such populations [16].
Questions remain. For the experiment we have modeled, the hypotheses were specified by the two
alternatives of the decision task, and the subjects were forced to choose one of them. What happens in more general situations? First, do humans always decompose perceptual inference tasks
into a set of inference problems, each conditioned on a different hypothesis? Data from other,
cue-combination experiments suggest that subjects indeed seem to perform such probabilistic decomposition [13, 14]. If so, then how do observers generate these hypotheses? In the absence of
explicit instructions, humans may automatically perform implicit comparisons relative to reference
features that are unconsciously selected from the environment. Second, if humans do consider different hypotheses, do they always select a single one on which subsequent percepts are conditioned,
even if not explicitly asked to do so? For example, simply displaying the reference mark in the
experiment of [4] (without asking the observer to report any decision) might be sufficient to trigger
an implicit decision that would result in behaviors similar to those shown in the explicit case.
Finally, although we have only tested it on data of a particular psychophysical experiment, we believe that our model may have implications beyond low-level sensory perception. For instance, a
data
model
40
20
p(H)
0
-20
0.5
p(?|H)
??
?
?
estimated direction [deg]
0.5
3%
-40
40
20
0
-20
-40
6%
40
20
0
-20
-40
-20
-10
0
10
12 %
20
-20
-10
0
10
20
true direction [deg]
Figure 5: Comparison of model predictions with data for the second experiment. Left: prior distributions for the second experiment in [4]. Right: grayscale images show the trial distributions of the
human subject and the model observer for all three coherence levels. White dashed lines represent
veridical estimates. Note that the human subject does not show any significant bias in their estimates.
The trial variance appears to increase with decreasing levels of coherence. Both characteristics are
well predicted by the model. Human data replotted from [4] (supplementary material).
well-studied human attribute is known as cognitive dissonance [17], which causes people to adjust their opinions and beliefs to be consistent with their previous statements or behaviors.² Thus,
self-consistency may be a principle that governs computations throughout the brain.
Acknowledgments
We thank J. Tenenbaum for referring us to the cognitive dissonance literature, and J. Pillow, N. Daw,
D. Heeger, A. Movshon, and M. Jazayeri for interesting discussions.
References
[1] E.H. Adelson. Perceptual organization and the judgment of brightness. Science, 262:2042–2044, December 1993.
[2] S.P. Thompson. Optical illusions of motion. Brain, 3:289–298, 1880.
[3] S. Baldassi, N. Megna, and D.C. Burr. Visual clutter causes high-magnitude errors. PLoS Biology,
4(3):387ff, March 2006.
[4] M. Jazayeri and J.A. Movshon. A new perceptual illusion reveals mechanisms of sensory decoding.
Nature, 446:912ff, April 2007.
[5] M.O. Ernst and M.S. Banks. Humans integrate visual and haptic information in a statistically optimal
fashion. Nature, 415:429ff, January 2002.
[6] D. Brainard and W. Freeman. Bayesian color constancy. Journal of the Optical Society of America A,
14(7):1393–1411, July 1997.
² An example that is directly analogous to the perceptual experiment in [4] is documented in [18]: subjects
initially rated kitchen appliances for attractiveness, and then were allowed to select one as a gift from amongst
two that they had rated equally. They were subsequently asked to rate the appliances again. The data show a
repulsive bias of the post-decision ratings compared with the pre-decision ratings, such that the rating of the
selected appliance increased, and the rating of the rejected appliance decreased.
[Figure 6 panels omitted: modification A uses an asymmetric prior p(H) = (0.8, 0.2) with p(θ|H) uniform on [0, ±θ_max]; modification B uses two reference marks and three equally likely regions (prior 1/3 each). Panels plot trial distributions and mean estimated direction [deg] against true direction [deg].]
Figure 6: Model predictions for two modifications of the original experiment. A: we change the
prior probability p(H) to be asymmetric (0.8 vs. 0.2). However, we keep the prior distribution
of motion directions given a particular side, p(θ|H), constant within the range [0, ±θ_max]. The model
makes two predictions (trials shown for an intermediate coherence level): first, although tested with
an equal number of trials for each motion direction, there is a strong bias induced by the asymmetric
prior. And second, the direction estimates on the left are more veridical than on the right. B: we
present two reference marks instead of one, asking the subjects to make a choice between three
equally likely regions of motion direction. Again, we assume uniform prior distributions of motion
directions within each area. The model predicts bilateral repulsion of the estimates in the central
area, leading to a strong bias that is almost independent of coherence level.
[7] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percepts. Nature Neuroscience,
5(6):598–604, June 2002.
[8] A.A. Stocker and E.P. Simoncelli. Noise characteristics and prior expectations in human visual speed
perception. Nature Neuroscience, pages 578–585, April 2006.
[9] D. Draper. Assessment and propagation of model uncertainty. Journal of the Royal Statistical Society B,
57:45–97, 1995.
[10] J.A. Hoeting, D. Madigan, A.E. Raftery, and C.T. Volinsky. Bayesian model averaging: A tutorial. Statistical Science, 14(4):382–417, 1999.
[11] T.L. Griffiths, C. Kemp, and J. Tenenbaum. Handbook of Computational Cognitive Modeling, chapter
Bayesian models of cognition. Cambridge University Press, to appear.
[12] A.J. Yu and P. Dayan. Uncertainty, neuromodulation, and attention. Neuron, 46:681ff, May 2005.
[13] D. Knill. Robust cue integration: A Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7(7):1–24, May 2007.
[14] K. Körding and J. Tenenbaum. Causal inference in sensorimotor integration. In B. Schölkopf, J. Platt,
and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, 2007.
[15] W.J. Ma, J.M. Beck, P.E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes.
Nature Neuroscience, 9:1432ff, November 2006.
[16] M.E. Mazurek, J.D. Roitman, J. Ditterich, and M.N. Shadlen. A role for neural integrators in perceptual
decision-making. Cerebral Cortex, 13:1257–1269, 2003.
[17] L. Festinger. A Theory of Cognitive Dissonance. Stanford University Press, Stanford, CA, 1957.
[18] J.W. Brehm. Post-decision changes in the desirability of alternatives. Journal of Abnormal and Social
Psychology, 52(3):384ff., 1956.
2,613 | 337 | A Reinforcement Learning Variant for Control
Scheduling
Aloke Guha
Honeywell Sensor and System Development Center
3660 Technology Drive
Minneapolis, MN 55417
Abstract
We present an algorithm based on reinforcement and state recurrence
learning techniques to solve control scheduling problems. In particular, we
have devised a simple learning scheme called "handicapped learning", in
which the weights of the associative search element are reinforced, either
positively or negatively, such that the system is forced to move towards the
desired setpoint in the shortest possible trajectory. To improve the learning
rate, a variable reinforcement scheme is employed: negative reinforcement
values are varied depending on whether the failure occurs in handicapped or
normal mode of operation. Furthermore, to realize a simulated annealing
scheme for accelerated learning, if the system visits the same failed state
successively, the negative reinforcement value is increased. In examples
studied, these learning schemes have demonstrated high learning rates, and
therefore may prove useful for in-situ learning.
1 INTRODUCTION
Reinforcement learning techniques have been applied successfully for simple control
problems, such as the pole-cart problem [Barto 83, Michie 68, Rosen 88] where the
goal was to maintain the pole in a quasistable region, but not at specific setpoints.
However, a large class of continuous control problems requires maintaining the
system at a desired operating point, or setpoint, at a given time. We refer to this
problem as the basic setpoint control problem [Guha 90], and have shown that
reinforcement learning can be used, not surprisingly, quite well for such control tasks.
A more general version of the same problem requires steering the system from some
initial or starting state to a desired state or setpoint at specific times without
knowledge of the dynamics of the system. We therefore wish to examine how
control scheduling tasks, where the system must be steered through a sequence of
setpoints at specific times, can be learned. Solving such a control problem without
explicit modeling of the system or plant can prove to be beneficial in many adaptive
control tasks.
To address the control scheduling problem, we have derived a learning algorithm
called handicapped learning. Handicapped learning uses a nonlinear encoding of the
state of the system, a new associative reinforcement learning algorithm, and a novel
reinforcement scheme to explore the control space to meet the scheduling
constraints. The goal of handicapped learning is to learn the control law necessary to
steer the system from one setpoint to another. We provide a description of the state
encoding and associative learning in Section 2, the reinforcement scheme in Section
3, the experimental results in Section 4, and the conclusions in Section 5.
2 REINFORCEMENT LEARNING STRATEGY:
HANDICAPPED LEARNING
Our earlier work on regulatory control using reinforcement learning [Guha 90] used a
simple linear coded state representation of the system. However, when considering
multiple setpoints in a schedule, a linear coding of high-resolution results in a
combinatorial explosion of states. To avoid this curse of dimensionality, we have
adopted a simple nonlinear encoding of the state space. We describe this first.
2.1 STATE ENCODING
To define the states in which reinforcement must be provided to the controller, we
set tolerance limits around the desired setpoint, say Xd. If the tolerance of operation
defined by the level of control sophistication required in the problem is T, then the
controller is defined to fail if |X(t) − Xd| > T, as described in our earlier work in [Guha
90].
The controller must learn to maintain the system within this tolerance window. If the
range, R, of possible values of the setpoint or control variable X(t) is significantly
greater than the tolerance window, then the number of states required to define the
setpoint will be large. We therefore use a nonlinear coding of the control variable.
Thus, if the level of discrimination within the tolerance window is 2T/n, then the
number of states required to represent the control variable is (n + 2), where the two
added states represent the states (X(t) − Xd) > T and (X(t) − Xd) < −T. With this
representation scheme, any continuous range of setpoints can be represented with
very high resolution but without the explosion in state space.
The above state encoding will be used in our associative reinforcement learning
algorithm, handicapped learning, which we describe next.
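As a minimal sketch, the coding scheme above can be written as follows (the function name and the exact bin layout inside the window are our own illustrative assumptions; the text only fixes n interior states of width 2T/n plus the two out-of-window states):

def encode_state(x, x_d, T, n):
    """Map a continuous reading x to one of n + 2 discrete states.

    States 0..n-1 cover the tolerance window [x_d - T, x_d + T] at
    resolution 2T/n; state n means (x - x_d) > T and state n + 1 means
    (x - x_d) < -T, so the state count stays at n + 2 regardless of the
    full range R of the control variable.
    """
    err = x - x_d
    if err > T:
        return n          # above the tolerance window
    if err < -T:
        return n + 1      # below the tolerance window
    bin_width = 2.0 * T / n
    return min(int((err + T) / bin_width), n - 1)

# Example: a 1-degree tolerance around a 68-degree setpoint, 4 interior bins
print(encode_state(68.3, 68.0, 1.0, 4))   # -> 2 (inside the window)
print(encode_state(72.0, 68.0, 1.0, 4))   # -> 4 (too far above)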
2.2 HANDICAPPED LEARNING ALGORITHM
Our reinforcement learning strategy is derived from the Associative Search
Element/Adaptive Heuristic Critic (ASE/AHC) algorithm [Barto 83, Anderson 86].
We have considered a binary control output, y(t):

y(t) = f(Σi wi(t) xi(t) + noise(t))    (1)

where f is the thresholding step function, and xi(t), 0 ≤ i ≤ N, is the current decoded
state; that is, xi(t) = 1 when the system is in the ith state and 0 otherwise. As in
ASE, the added term noise(t) facilitates stochastic learning. Note that the learning
algorithm can be easily extended to continuous-valued outputs; the nature of the
continuity is determined by the thresholding function.
We incorporate two learning heuristics: state recurrence [Rosen 88] and a newly
introduced heuristic called "handicapped learning". The controller is in the
handicapped learning mode if a flag, H, is set high. H is defined as follows:

H = 0, if |X(t) − Xd| < T
  = 1, otherwise    (2)
The handicap mode provides a mechanism to modify the reinforcement scheme. In
this mode the controller is allowed to explore the search space of action sequences,
to steer to a new setpoint, without "punishment" (negative reinforcement). The mode
is invoked when the system is at a valid setpoint X1(t1) at time t1, but must be
steered to the new setpoint X2 outside the tolerance window, that is, |X1 − X2| > T,
at time t2. Since both setpoints are valid operating points, these setpoints as well as
all points within the possible optimal trajectories from X1 to X2 cannot be deemed to
be failure states. Further, by following a special reinforcement scheme during the
handicapped mode, one can enable learning and facilitate the controller to find the
optimal trajectory to steer the system from one setpoint to another.
The weight updating rule used during setpoint schedule learning is given by equation
(3):
wi(t+1) = wi(t) + α1 r1(t) ei(t) + α2 r2(t) e2i(t) + α3 r3(t) e3i(t)    (3)

where the term α1 r1(t) ei(t) is the basic associative learning component, r1(t) the
heuristic reinforcement, and ei(t) the eligibility trace of the state xi(t) [Barto 83].
The third term in equation (3) is the state recurrence component for reinforcing short
cycles [Rosen 88]. Here α2 is a constant gain, r2(t) is a positive constant reward,
and e2i, the state recurrence eligibility, is defined as follows:

e2i(t) = λ2 xi(t) y(ti,last) / (λ2 + t − ti,last),  if (t − ti,last) > 1 and H = 0
       = 0,  otherwise    (4)
where λ2 is a positive constant, and ti,last is the last time the system visited the ith
state. The eligibility function in equation (4) reinforces shorter cycles more than
longer ones, and improves control when the system is within a tolerance window.
The fourth term in equation (3) is the handicapped learning component. Here α3 is a
constant gain, r3(t) is a positive constant reward, and e3i, the handicapped learning
eligibility, is defined as follows:

e3i(t) = −λ3 xi(t) y(ti,last) / (λ3 + t − ti,last),  if H = 1
       = 0,  otherwise    (5)

where λ3 is a positive constant. While state recurrence promotes short cycles around
a desired operating point, handicapped learning forces the controller to move away
from the current operating point X(t). The system enters the handicapped mode
whenever it is outside the tolerance window around the desired setpoint. If the initial
operating point Xi (= X(0)) is outside the tolerance window of the desired setpoint
Xd, |Xi − Xd| > T, the basic AHC network will always register a failure. This failure
situation is avoided by invoking the handicapped learning described above. By
setting absolute upper and lower limits on operating point values, the controller based
on handicapped learning can learn the correct sequence of actions necessary to steer
the system to the desired operating point Xd.
The weight update equations for the critic in the AHC are unchanged from the
original AHC and we do not list them here.
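For illustration, one step of the update in equations (3)-(5) can be sketched as below; all gains, rewards and lambda constants are placeholder values of our own choosing, since the paper does not report the values it used:

import numpy as np

def handicapped_update(w, x, e1, r1, H, y_last, t, t_last,
                       a1=0.5, a2=0.2, a3=0.2, r2=1.0, r3=1.0,
                       lam2=5.0, lam3=5.0):
    """One application of equation (3).

    w, x, e1 : weight vector, one-hot decoded state, basic eligibilities
    y_last   : per-state output y at each state's last visit
    t_last   : per-state time of the last visit; t is the current time
    """
    # Equation (4): state-recurrence eligibility rewards short cycles back
    # to a recently visited state, and is active only in normal mode (H = 0).
    revisit = (t - t_last) > 1
    e2 = np.where((H == 0) & revisit,
                  lam2 * x * y_last / (lam2 + t - t_last), 0.0)
    # Equation (5): the negative sign pushes the system away from the
    # current operating point while the handicap flag is raised (H = 1).
    e3 = np.where(H == 1,
                  -lam3 * x * y_last / (lam3 + t - t_last), 0.0)
    return w + a1 * r1 * e1 + a2 * r2 * e2 + a3 * r3 * e3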
3 REINFORCEMENT SCHEMES
Unlike in previous experiments by other researchers, we have constructed the
reinforcement values used during learning to be multivalued, and not binary. The
reinforcement to the critic is also multivalued: both positive and negative reinforcements are
used. There are two forms of failure that can occur during setpoint control. First, the
controller can reach the absolute upper or lower limits. Second, there may be a
timeout failure in the handicapped mode. By design, when the controller is in
handicapped mode, it is allowed to remain there for only TL, determined by the
average control step ΔY and the error between the current operating point and the
desired setpoint:
TL = k ΔY (X0 − Xd)    (6)

where X0 is the initial setpoint and k is some constant. The negative reinforcement
provided to the controller is higher if the absolute limits of the operating point are
reached.
We have implemented a more interesting reinforcement scheme that is somewhat
similar to simulated annealing. If the system fails in the same state on two
successive trials, the negative reinforcement is increased. The primary
reinforcement function can be defined as follows:
ri(k + 1) = ri(k) − r0,  if i = j
          = r1,          if i ≠ j    (7)

where ri(k) is the negative reinforcement provided if the system failed in state i
during trial k, and r0 and r1 are constants.
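A small sketch of this escalating penalty scheme follows; the constants r0 and r1 are illustrative, as the paper does not report the values used:

def update_penalty(penalties, state, prev_failed_state, r0=0.2, r1=-1.0):
    """Equation (7): failing in the same state on two successive trials
    makes the negative reinforcement grow by r0, simulated-annealing
    style; failing in a new state resets it to the base penalty r1."""
    if state == prev_failed_state:
        penalties[state] = penalties.get(state, r1) - r0
    else:
        penalties[state] = r1
    return penalties[state]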
4 EXPERIMENTS AND RESULTS
Two different setpoint control experiments have been conducted. The first was the
basic setpoint control of a continuous stirred tank reactor in which the temperature
must be held at a desired setpoint. That experiment successfully demonstrated the
use of reinforcement learning for setpoint control of a highly nonlinear and unstable
process [Guha 90]. The second recent experiment has been on evaluating the
handicapped learning strategy for an environmental controller where the controller
must learn to control the heating system to maintain the ambient temperature
specified by a time-temperature schedule. Thus, as the external temperature varies,
the network must adapt the heating (ON) and (OFF) control sequence so as to
maintain the environment at the desired temperature as quickly as possible. The
state information describing the system is composed of the time interval of the schedule,
the current heating state (ON/OFF), and the error or the difference between desired
and current ambient or interior temperature. The heating and cooling rates are
variable: the heating rate decreases while the cooling rate increases exponentially as
the exterior temperature falls below the ambient or controlled temperature.
[Figure 1 residue: a plot of the percentage of the schedule learnt (0-100) against trial number (0-40), with curves for learning with and without handicapped learning.]
Figure 1: Rate of Learning with and without Handicapped Learning
[Figure 2 residue: a trace of the controlled ambient temperature (roughly 54-70 degrees) against time in minutes, for trial 43.]
Figure 2: Time-Temperature Plot of Controlled Environment at Forty-third Trial
The experiments on the environmental controller consisted of embedding a daily
setpoint schedule that contains six setpoints at six specific times. Trials were
conducted to train the controller. Each trial starts at the beginning of the schedule
(time = 0). The setpoints typically varied in the range of 55 to 75 degrees. The
desired tolerance window was 1 degree. The upper and lower limits of the controlled
temperature were set arbitrarily at 50 and 80 degrees, respectively. Control actions
were taken every 5 minutes. Learning was monitored by examining how much of the
schedule was learnt correctly as the number of trials increased.
[Figure 3 residue: test-run traces against time in minutes (roughly 200-1400): the setpoint schedule, the controlled ambient temperature, and the exterior temperature.]
Figure 3: Time-Temperature Plot of Controlled Environment for a Test Run
Figure 1 shows how the learning progresses with the number of trials. Current results
show that the learning of the complete schedule (of the six time-temperature pairs)
requiring 288 control steps, can be accomplished in only 43 trials. (Given binary
output, the controller could have in the worst case executed 10^86 (≈ 2^288) trials to learn
the complete schedule.)
More details on the learning ability using the reinforcement learning strategy are available
from the time-temperature plots of the trial and test runs in Figures 2 and 3. As the
learning progresses to the forty-third trial, the controller learns to continuously heat up or
cool down to the desired temperature (Figure 2). To further test the learning
generalizations on the schedule, the trained network was tested on a different environment
where the exterior temperature profile (and therefore the heating and cooling rates) was
different from the one used for training. Figure 3 shows the schedule that is maintained.
Because the controller encounters different cooling rates in the test run, some learning
still occurs, as evident from Figure 3. However, all six setpoints were reached in the
proper sequence. In essence, this test shows that the controller has generalized on the
heating and cooling control law, independent of the setpoints and the heating and cooling
rates.
5 CONCLUSIONS
We have developed a new learning strategy based on reinforcement learning that can be
used to learn setpoint schedules for continuous processes. The experimental results have
demonstrated good learning performance. However, a number of interesting extensions to
this work are possible. For instance, the handicapped-mode exploration of control can be
better controlled for faster learning if more information on the desired or possible
trajectory is known. Another area of investigation would be state encoding.
In our approach, the nonlinear encoding of the system state was assumed uniform across
different regions of the control space. In applications where the system exhibits high
nonlinearity, different nonlinear codings could be used adaptively to improve the state
representation. Finally, other formulations of reinforcement learning algorithms, besides
ASE/AHC, should also be explored. One such possibility is Watkins' Q-learning
[Watkins 89].
References
[Guha 90] A. Guha and A. Mathur, Setpoint Control Based on Reinforcement Learning,
Proceedings of IJCNN 90, Washington D.C., January 1990.
[Barto 83] A.G. Barto, R.S. Sutton, and C.W. Anderson, Neuronlike Adaptive Elements
That Can Solve Difficult Learning Control Problems, IEEE Transactions on Systems,
Man, and Cybernetics, Vol. SMC-13, No. 5, September/October 1983.
[Michie 68] D. Michie and R. Chambers, Machine Intelligence, E. Dale and D. Michie
(eds.), Oliver and Boyd, Edinburgh, 1968, p. 137.
[Rosen 88] B. E. Rosen, J. M. Goodwin, and J. J. Vidal, Learning by State Recurrence
Detection, IEEE Conference on Neural Information Processing Systems - Natural and
Synthetic, AIP Press, 1988.
[Watkins 89] C.J.C.H. Watkins, Learning from Delayed Rewards, Ph. D. Dissertation,
King's College, May 1989.
2,614 | 3,370 | Automatic Generation of Social Tags for Music
Recommendation
Douglas Eck*
Sun Labs, Sun Microsystems
Burlington, Mass, USA
[email protected]
Paul Lamere
Sun Labs, Sun Microsystems
Burlington, Mass, USA
[email protected]
Thierry Bertin-Mahieux
Sun Labs, Sun Microsystems
Burlington, Mass, USA
[email protected]
Stephen Green
Sun Labs, Sun Microsystems
Burlington, Mass, USA
[email protected]
Abstract
Social tags are user-generated keywords associated with some resource on the
Web. In the case of music, social tags have become an important component of
"Web2.0" recommender systems, allowing users to generate playlists based on
use-dependent terms such as chill or jogging that have been applied to particular
songs. In this paper, we propose a method for predicting these social tags directly
from MP3 files. Using a set of boosted classifiers, we map audio features onto
social tags collected from the Web. The resulting automatic tags (or autotags)
furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This
avoids the "cold-start problem" common in such systems. Autotags can also be
used to smooth the tag space from which similarities and recommendations are
made by providing a set of comparable baseline tags for all tracks in a recommender system.
1
Introduction
Social tags are a key part of "Web 2.0" technologies and have become an important source of information for recommendation. In the domain of music, Web sites such as Last.fm use social tags
as a basis for recommending music to listeners. In this paper we propose a method for predicting
social tags using audio feature extraction and supervised learning. These automatically-generated
tags (or "autotags") can furnish information about music for which good, descriptive social tags
are lacking. Using traditional information retrieval techniques a music recommender can use these
autotags (combined with any available listener-applied tags) to predict artist or song similarity. The
tags can also serve to smooth the tag space from which similarities and recommendations are made
by providing a set of comparable baseline tags for all artists or songs in a recommender.
This is not the first attempt to predict something about textual data using music audio as input.
Whitman & Rifkin [10], for example, provide an audio-driven model for predicting words found
near artists in web queries. One main contribution of the work in this paper lies in the scale of our
experiments. As is described in Section 4 we work with a social tag database of millions of tags
applied to ~100,000 artists and an audio database of ~90,000 songs spanning many of the more
popular of these artists. This compares favorably with previous attempts which by and large treat
only very small datasets (e.g. [10] used 255 songs drawn from 51 artists.)
* Eck and Bertin-Mahieux currently at Dept. of Computer Science, Univ. of Montreal, Montreal, Canada
This paper is organized as follows: in Section 2 we describe social tags in more depth, including
a description of how social tags can be used to avoid problems found in traditional collaborative
filtering systems, as well as a description of the tag set we built for these experiments. In Section 3
we present an algorithm for autotagging songs based on labeled data collected from the Internet.
In Section 4 we present experimental results and also discuss the ability to use model results for
visualization. Finally, in Section 5 we describe our conclusions and future work.
2
Using social tags for recommendation
As the amount of online music grows, automatic music recommendation becomes an increasingly
important tool for music listeners to find music that they will like. Automatic music recommenders
commonly use collaborative filtering (CF) techniques to recommend music based on the listening
behaviors of other music listeners. These CF recommenders (CFRs) harness the ?wisdom of the
crowds? to recommend music. Even though CFRs generate good recommendations there are still
some problems with this approach. A significant issue for CFRs recommenders is the cold-start
problem. A recommender needs a significant amount of data before it can generate good recommendations. For new music, music by an unknown artist with few listeners, a CFR cannot generate
good recommendations. Another issue is the lack of transparency in recommendations [7]. A CFR
cannot tell a listener why an artist was recommended beyond the description: ?people who listen to
X also listen to Y?. Also, a CFR is relatively insensitive to multimodal uses of the same album or
song. For example songs from an album (a single purchase in a standard CFR system) may be used
in the context of dining, jogging and working. In each context, the reason the song was selected
changes.
An alternative style of recommendation that addresses many of the shortcomings of a CFR is to
recommend music based upon the similarity of ?social tags? that have been applied to the music.
Social tags are free text labels that music listeners apply to songs, albums or artists. Typically, users
are motivated to tag as a way to organize their own personal music collection. The real strength
of a tagging system is seen when the tags of many users are aggregated. When the tags created by
thousands of different listeners are combined, a rich and complex view of the song or artist emerges.
Table 1 shows the top 21 tags and frequencies of tags applied to the band "The Shins". Users have
applied tags associated with genre (Indie, Pop, etc.), mood (mellow, chill), opinion
(favorite, love), style (singer-songwriter) and context (Garden State). From these tags and their
frequencies we learn much more about "The Shins" than we would from a traditional single genre
assignment of "Indie Rock".
In this paper, we investigate the automatic generation of tags with properties similar to those generated by social taggers. Specifically, we introduce a machine learning algorithm that takes as input
acoustic features and predicts social tags mined from the web (in our case, Last.fm). The model
can then be used to tag new or otherwise untagged music, thus providing a partial solution to the
cold-start problem.
For this research, we extracted tags and tag frequencies for nearly 100,000 artists from the social
music website Last.fm using the Audioscrobbler web service [1]. The majority of tags describe
audio content. Genre, mood and instrumentation account for 77% of the tags. See "extra material"
for a breakdown of tag types.
Overcoming the cold-start problem is the primary motivation for this area of research. For new music
or sparsely tagged music, we predict social tags directly from the audio and apply these automatically generated tags (called autotags) in lieu of traditionally applied social tags. By automatically
tagging new music in this fashion, we can reduce or eliminate much of the cold-start problem.
3
An autotagging algorithm
We now describe a machine learning model which uses the meta-learning algorithm AdaBoost [5]
to predict tags from acoustic features. This model is an extension of a previous model [3] which won
the Genre Prediction Contest and was the 2nd place performer in the Artist Identification Contest at
MIREX 2005 (ISMIR conference, London, 2005). The model has two principal advantages. First
it selects features based on a feature's ability to minimize empirical error. We can therefore use the
Tag               Freq    Tag                 Freq    Tag                  Freq
Indie             2375    The Shins           190     Punk                 49
Indie rock        1138    Favorites           138     Chill                45
Indie pop         841     Emo                 113     Singer-songwriter    41
Alternative       653     Mellow              85      Garden State         39
Rock              512     Folk                85      Favorite             37
Seen Live         298     Alternative rock    83      Electronic           36
Pop               231     Acoustic            54      Love                 35

Table 1: Top 21 tags applied to The Shins
[Figure 1 residue: block diagram. Learning the "80s" tag: audio features from training songs with tags such as cool, 80s and rock feed an "80s" booster whose target classes are none/some/a lot. Song tagging: the resulting set of boosters maps a new song's audio features to predicted tags.]
Figure 1: Overview of our model
model to eliminate useless feature sets by looking at the order in which those features are selected.
We used this property of the model to discard many candidate features such as chromagrams (which
map spectral energy onto the 12 notes of the Western musical scale) because the weak learners
associated with those features were selected very late by AdaBoost. Second, though AdaBoost may
need relatively more weak learners to achieve the same performance on a large dataset than a small
one, the computation time for a single weak learner scales linearly with the number of training
examples. Thus AdaBoost has the potential to scale well to very large datasets. Both of these
properties are general to AdaBoost and are not explored further in this short paper. See [5, 9] for
more.
3.1
Acoustic feature extraction
The features we use include 20 Mel-Frequency Cepstral Coefficients, 176 autocorrelation coefficients computed for lags spanning from 250msec to 2000msec at 10ms intervals, and 85 spectrogram coefficients sampled by constant-Q (or log-scaled) frequency (see [6] for descriptions of these
standard acoustic features.)
The audio features described above are calculated over short windows of audio (100ms with 25ms
overlap). This yields too many features per song for our purposes. To address this, we create "aggregate" features by computing individual means and standard deviations (i.e., independent Gaussians)
of these features over 5s windows of feature data. When fixing hyperparameters for these experiments, we also tried a combination of 5s and 10s features, but saw no real improvement in results.
For reasons of computational efficiency we used random sampling to retain a maximum of 12 aggregate features per song, corresponding to 1 minute of audio data.
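A sketch of this aggregation step, assuming the 100ms frame rate above (so 50 frames span roughly 5 s); the function is our own illustrative reading of the text, not the authors' code:

import numpy as np

def aggregate_features(frames, win=50, hop=50, max_aggregates=12):
    """Collapse (num_frames, d) short-window features into 5 s aggregates.

    Each aggregate is the per-dimension mean and standard deviation
    (independent Gaussians); at most max_aggregates are kept per song
    by random sampling.
    """
    aggs = []
    for start in range(0, len(frames) - win + 1, hop):
        chunk = frames[start:start + win]
        aggs.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
    aggs = np.asarray(aggs)
    if len(aggs) > max_aggregates:
        keep = np.random.choice(len(aggs), max_aggregates, replace=False)
        aggs = aggs[keep]
    return aggs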
3.2
Labels as a classification problem
Intuitively, automatic labeling would be a regression task where a learner would try to predict tag
frequencies for artists or songs. However, because tags are sparse (many artists are not tagged at all;
others like Radiohead are heavily tagged) this proves to be too difficult using our current Last.fm
dataset. Instead, we chose to treat the task as a classification one. Specifically, for each tag we try to
predict if a particular artist has "none", "some" or "a lot" of a particular tag relative to other tags.
We normalize the tag frequencies for each artist so that artists having many tags can be compared to
artists having few tags. Then for each tag, an individual artist is placed into a single class "none",
"some" or "a lot" depending on the proportion of times the tag was assigned to that artist relative
to other tags assigned to that artist. Thus if an artist received only 50 rock tags and nothing else, it
would be treated as having "a lot" of rock. Conversely, if an artist received 5000 rock tags but 10,000
jazz tags it would be treated as having "some" rock and "a lot" of jazz. The specific boundaries
between "none", "some" and "a lot" were decided by summing the normalized tag counts for all
artists, generating a 100-bin histogram for each tag and moving the category boundaries such that
an equal number of artists fall into each of the categories. In Figure 2 the histogram for "rock" is
shown (with only 30 bins to make the plot easier to read). Note that most artists fall into the lowest
bin (no or very few instances of the "rock" tag) and that otherwise most of the mass is in high bins.
This was the trend for most tags and one of our motivations for using only 3 bins. As described in
the paper we do not directly use the predictions of the "some" bin. Rather it serves as a class for
holding those artists for which we cannot confidently say "none" or "a lot". See Figure 2 for an
example.
Figure 2: A 30-bin histogram of the proportion of "rock" tags to other tags for all songs in the dataset.
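The equal-mass binning described above amounts to splitting at quantiles; the following is an illustrative sketch of that procedure, not the authors' code:

import numpy as np

def tag_classes(norm_freqs, n_bins=3):
    """Assign each artist to 'none' (0), 'some' (1) or 'a lot' (2) for one
    tag, with boundaries placed so each class holds an equal number of
    artists.  norm_freqs holds per-artist tag counts normalized by each
    artist's total tag count."""
    qs = np.quantile(norm_freqs, [1.0 / n_bins, 2.0 / n_bins])
    return np.digitize(norm_freqs, qs)

freqs = np.array([0.0, 0.0, 0.01, 0.05, 0.3, 0.8])
print(tag_classes(freqs))   # -> [0 0 1 1 2 2]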
3.3
Tag prediction with AdaBoost
AdaBoost [5] is a meta-learning method that constructs a strong classifier from a set of simpler
classifiers, called weak learners in an iterative way. Originally intended for binary classification,
there exist several ways to extend it to multiclass classification. We use AdaBoost.MH [9] which
treats multiclass classification as a set of one-versus-all binary classification problems. In each
iteration t, the algorithm selects the best classifier, called h(t), from a pool of weak learners, based
on its performance on the training set, and assigns it a coefficient α(t). The input to the weak
learner is a d-dimensional observation vector x ∈ R^d containing audio features for one segment of
aggregated data (5 seconds in our experiments). The output of h(t) is a binary vector y ∈ {−1, 1}^k
over the k classes. hl(t) = 1 means a vote for class l by a weak learner, while hl(t) = −1 is a vote
against. After T iterations, the algorithm output is a vector-valued discriminant function:

g(x) = Σ_{t=1}^{T} α(t) h(t)(x)    (1)
As weak learners we used single stumps, e.g. a binary threshold on one of the features. In previous
work we also tried decision trees without any significant improvement. Usually we obtain a single
label by taking the class with the most votes, i.e. f(x) = arg max_l g_l(x), but in our model, we use
the output value for each class rather than the argmax.
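For concreteness, evaluating the discriminant g(x) of equation (1) from learned stumps can be sketched as follows; the stump representation (feature index, threshold, vote vector, weight) is our own assumption:

import numpy as np

def boosted_scores(x, stumps):
    """Return g(x): the alpha-weighted sum of per-stump vote vectors.

    Each stump is (feature_index, threshold, votes, alpha), where votes
    is a length-k vector in {-1, +1}^k; the raw g(x) is used directly,
    without taking an argmax, as described above.
    """
    g = np.zeros(len(stumps[0][2]))
    for j, theta, votes, alpha in stumps:
        h = votes if x[j] > theta else -votes
        g += alpha * h
    return g

stumps = [(0, 0.5, np.array([1, -1, 1]), 0.7),
          (1, -0.2, np.array([-1, 1, 1]), 0.3)]
print(boosted_scores(np.array([0.9, 0.1]), stumps))   # -> [0.4 -0.4 1.0]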
3.4
Generating autotags
For each aggregate segment, a booster yields a prediction over the classes "none", "some", and "a
lot". A booster's raw output for a single segment might be (none:−3.56) (some:0.14) (a lot:2.6).
These segment predictions can then be combined to yield artist-level predictions. This can be
achieved in two ways: a winning class can be chosen for each segment (in this example the class "a
lot" would win with 2.6) and the mean over winners can be tallied for all segments belonging to an
artist. Alternately we can skip choosing a winner and simply take the mean of the raw outputs for an
artist's segments. Because we wanted to estimate tag frequencies using booster magnitude we used
the latter strategy.
The next step is to transform these class predictions from our individual social tag boosters into a bag of words to
be associated with an artist. The most naive way to obtain a single value for rock is to look solely
at the prediction for the "a lot" class. However, this discards valuable information such as when a
booster votes strongly "none". A better way to obtain a measure for rock-ness is to take the center
of mass of the three values. However, because the values are not scaled well with respect to one
another, we ended up with poorly scaled results. Another intuitive idea is simply to subtract the
value of the "none" bin from the value of the "a lot" bin, the reasoning being that "none" is truly
the opposite of "a lot". In our example, this would yield a rock strength of 6.16. In experiments
for setting hyperparameters, this was shown to work better than other methods. Thus to generate
our final measure of rock-ness, we ignore the middle bin ("some"). However this should not be
taken to mean that the middle "some" bin is useless: the booster needed to learn to predict "some"
during training, thus forcing it to be more selective in predicting "none" and "a lot". As a large-margin classifier, AdaBoost tries to separate the classes as much as possible, so the magnitude of the
values for each bin are not easily comparable. To remedy this, we normalize by taking the minimum
and maximum prediction for each booster, which seems to work for finding similar artists. This
normalization would not be necessary if we had good tagging data for all artists and could perform
regression on the frequency of tag occurrence across artists.
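Putting these steps together, a minimal sketch of turning raw per-segment booster outputs into a single per-artist tag strength (the first example row reuses the numbers from the text):

import numpy as np

def autotag_strength(seg_outputs):
    """seg_outputs: (num_segments, 3) raw scores over (none, some, a lot).

    Average the raw outputs over an artist's segments and take
    'a lot' minus 'none'; the middle 'some' bin is ignored.  Min-max
    normalization across artists would follow, per the text.
    """
    mean = seg_outputs.mean(axis=0)
    return mean[2] - mean[0]

segs = np.array([[-3.56, 0.14, 2.60],
                 [-2.10, 0.40, 1.80]])
print(autotag_strength(segs))   # -> 5.03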
4
Experiments
To test our model we selected the 60 most popular tags from the Last.fm crawl data described in
Section 2. These tags included genres such as "Rock", "Electronica", and "Post Punk", and mood-related terms such as "Chillout". The full list of tags and frequencies is available in the "extra
materials". We collected MP3s for a subset of the artists obtained in our Audioscrobbler crawl.
From those MP3s we extracted several popular acoustic features. In total our training and testing
data included 89924 songs for 1277 artists and yielded more than 1 million 5s aggregate features.
4.1
Booster Errors
As described above, a classifier was trained to map audio features onto aggregate feature segments
for each of the 60 tags. A third of the data was withheld for testing. Because each of the 60
boosters needed roughly 1 day to process, we did not perform cross-validation. However each
booster was trained on a large amount of data relative to the number of decision stumps learned,
making overfitting a remote possibility. Classification errors are shown in Table 2. These errors are
broken down by tag in the annex for this paper. Using 3 bins and balanced classes, the random error
is about 67%.
           Mean     Median    Min     Max
Segment    40.93    43.1      21.3    49.6
Song       37.61    39.69     17.8    46.6

Table 2: Summary of test error (%) on predicting bins for songs and segments.
4.2
Evaluation measures
We use three measures to evaluate the performance of the model. The first TopN compares two
ranked lists, a target "ground truth" list A and our predicted list B. This measure is introduced in
[2], and is intended to place emphasis on how well our list predicts the top few items of the target
list. Let kj be the position in list B of the jth element from list A, and let αr = 0.5^{1/3} and αc = 0.5^{2/3},
as in [2]. The result is a value between 0 (dissimilar) and 1 (identical top N):

si = ( Σ_{j=1}^{N} αr^{kj} αc^{j} ) / ( Σ_{l=1}^{N} (αr αc)^{l} )    (2)
For the results produced below, we look at the top N = 10 elements in the lists.
Our second measure is Kendall's Tau, a classic measure in collaborative filtering which measures
the number of discordant pairs in 2 lists. Let RA(i) be the rank of the element i in list A; if i is not
explicitly present, RA(i) = length(A) + 1. Let C be the number of concordant pairs of elements
(i, j), e.g. RA(i) > RA(j) and RB(i) > RB(j). In a similar way, D is the number of discordant
pairs. We use the approximation of τ given in [8]. We also define TA and TB as the number of ties in list A and
B. In our case, it is the number of pairs of artists that are in A but not in B, because they end up
having the same position RB = length(B) + 1, and reciprocally. Kendall's tau value is defined as:

τ = (C − D) / sqrt((C + D + TA)(C + D + TB))    (3)
Unless otherwise noted, we analyzed the top 50 predicted values for the target and predicted lists.
Finally, we compute what we call the TopBucket, which is simply the percentage of common elements in the top N of 2 ranked lists. Here as in Kendall we compare the top 50 predicted values
unless otherwise noted.
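For reference, minimal implementations of the TopN measure of equation (2) and the TopBucket measure are sketched below (Kendall's tau is available in standard statistics packages); the handling of missing items follows the definitions above, while names are our own:

def top_n_similarity(target, predicted, N=10):
    """Equation (2): kj is the 1-based rank in `predicted` of the j-th
    target item; missing items rank just past the end of the list."""
    ar, ac = 0.5 ** (1.0 / 3), 0.5 ** (2.0 / 3)
    pos = {item: i + 1 for i, item in enumerate(predicted)}
    num = sum(ar ** pos.get(item, len(predicted) + 1) * ac ** (j + 1)
              for j, item in enumerate(target[:N]))
    den = sum((ar * ac) ** l for l in range(1, N + 1))
    return num / den

def top_bucket(target, predicted, N=50):
    """Percentage of common elements in the top N of the two lists."""
    return 100.0 * len(set(target[:N]) & set(predicted[:N])) / N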
4.3
Constructing ground truth
As has long been acknowledged [4], one of the biggest challenges in addressing this task is to find a
reasonable "ground truth" against which to compare our results. We seek a similarity matrix among
artists which is not overly biased by current popularity, and which is not built directly from the
social tags we are using for learning targets. Furthermore we want to derive our measure using
data that is freely available on the web, thus ruling out commercial services such as AllMusic
(www.allmusic.com). Our solution is to construct our ground truth similarity matrix using correlations from the listening habits of Last.fm users. If a significant number of users listen to artists A
and B (regardless of the tags they may assign to that artist) we consider those two artists similar.
One challenge, of course, is that some users listen to more music than others and that some artists
are more popular than others. Text search engines must deal with a similar problem: they want
to ensure that frequently used words (e.g., system) do not outweigh infrequently used words (e.g.,
prestidigitation) and that long documents do not always outweigh short documents. Search engines
assign a weight to each word in a document. The weight is meant to represent how important that
word is for that document. Although many such weighting schemes have been described (see [11]
for a comprehensive review), the most popular is the term frequency-inverse document frequency
(or TF-IDF) weighting scheme. TF-IDF assigns high weights to words that occur frequently in a
given document and infrequently in the rest of the collection. The fundamental idea is that words
that are assigned high weights for a given document are good discriminators for that document from
the rest of the collection. Typically, the weights associated with a document are treated as a vector
that has its length normalized to one.
In the case of Last.fm, we can consider an artist to be a "document", where the "words" of the
document are the users that have listened to that artist. The TF-IDF weight for a given user for a
given artist takes into account the global popularity of a given artist and ensures that users who have
listened to more artists do not automatically dominate users who have listened to fewer artists. The
resulting similarity measure seems to us to do a reasonable enough job of capturing artist similarity.
Furthermore it does not seem to be overly biased towards popular bands. See "extra material" for
some examples.
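A compact sketch of this ground-truth construction; the exact TF-IDF variant is an assumption on our part, since many weighting formulations exist:

import numpy as np

def tfidf_similarity(counts):
    """Artist-artist cosine similarities from TF-IDF weighted listeners.

    counts: (num_artists, num_users) listening counts.  Each artist is a
    'document' whose 'words' are its listeners; rows are length-normalized
    so heavy listeners and popular artists do not dominate.
    """
    n_artists = counts.shape[0]
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = np.maximum((counts > 0).sum(axis=0), 1)   # artists per listener
    w = tf * np.log(n_artists / df)                # TF-IDF weights
    w /= np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1e-12)
    return w @ w.T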
4.4
Similarity Results
One intuitive way to compare autotags and social tags is to look at how well the autotags reproduce
the rank order of the social tags. We used the measures in Section 4.2 to measure this on 100 artists
not used for training (Table 3). The results were well above random. For example, the top 5 autotags
were in agreement with the top 5 social tags 61% of the time.
            TopN 10    Kendall (N=5)    TopBucket (N=5)
autotags    0.636      -0.099           61.0%
random      0.111      -0.645           8.1%

Table 3: Results for all three measures on tag order for 100 out-of-sample artists.
A more realistic way to compare autotags and social tags is via their artist similarity predictions.
We construct similarity matrices from our autotag results and from the Last.fm social tags used for
training and testing. The similarity measure we used was cosine similarity, scos(A1, A2) = A1 · A2/(||A1|| ||A2||), where A1 and A2 are the tag magnitude vectors for two artists. In keeping with our interest in
developing a commercial system, we used all available data for generating the similarity matrices,
including data used for training. (The chance of overfitting aside, it would be unwise to remove The
Beatles from your recommender simply because you trained on some of their songs). The similarity
matrix is then used to generate a ranked list of similar artists for each artist in the matrix. These lists
are used to compute the measures described in Section 4.2. Results are found at the top in Table 4.
One potential flaw in this experiment is that the ground truth comes from the same data source as
the training data. Though the ground truth is based on user listening counts and our learning data
comes from aggregate tagging counts, there is still a clear chance of contamination. To investigate
this, we selected the autotags and social tags for 95 of the artists from the USPOP database [2]. We
constructed a ground truth matrix based on the 2002 MusicSeer web survey eliciting similarity rankings between artists from approximately 1000 listeners [2]. These results show much closer correspondence
between our autotag results and the social tags from Last.fm than the previous test. See bottom,
Table 4.
Groundtruth    Model          TopN 10    Kendall 50    TopBucket 20
Last.FM        social tags    0.26       -0.23         34.6%
Last.FM        autotags       0.118      -0.406        22.5%
Last.FM        random         0.005      -0.635        3.9%
MusicSeer      social tags    0.237      -0.182        29.7%
MusicSeer      autotags       0.184      -0.161        28.2%
MusicSeer      random         0.051      -0.224        21.5%

Table 4: Performance against Last.Fm (top) and MusicSeer (bottom) ground truth.
It is clear from these previous two experiments that our autotag results do not outperform the social
tags on which they were trained. Thus we asked whether combining the predictions of the autotags
with the social tags would yield better performance than either of them alone. To test this we blended
the autotag similarity matrix Sa with the social tag matrix Ss using λSa + (1 − λ)Ss. The results
shown in Figure 3 show a consistent performance increase when blending the two similarity sources.
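A minimal sketch of the blending experiment (the function and variable names are our own):

import numpy as np

def blended_neighbors(S_auto, S_social, lam, artist, k=10):
    """Rank the k most similar artists under lam*S_auto + (1-lam)*S_social.

    S_auto and S_social are (num_artists, num_artists) similarity
    matrices over the same artist index."""
    S = lam * S_auto + (1.0 - lam) * S_social
    order = np.argsort(-S[artist])
    return [i for i in order if i != artist][:k]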
It seems clear from these results that the autotags are of value. Though they do not outperform the
social tags on which they were trained, they do yield improved performance when combined with
social tags. At the same time they are driven entirely by audio and so can be applied to new, untagged
music. With only 60 tags the model makes some reasonable predictions. When more boosters are
trained, it is safe to assume that the model will perform better.
5
Conclusion and future work
The work presented here is preliminary, but we believe that a supervised learning approach to autotagging has substantial merit. Our next step is to compare the performance of our boosted model
to other approaches such as SVMs and neural networks. The dataset used for these experiments
is already larger than those used for published results for genre and artist classification. However,
a dataset another order of magnitude larger is necessary to approximate even a small commercial
database of music. A further next step is comparing the performance of our audio features with other
sets of audio features.
Figure 3: Similarity performance results when autotag similarities are blended with social tag similarities. The horizontal line is the performance of the social tags against ground truth.
We plan to extend our system to predict many more tags than the current set of 60 tags. We expect
the accuracy of our system to improve as we extend our tag set, especially as we add tags such as
Classical and Folk that are associated with whole genres of music. We will also continue exploring
ways in which the autotag results can drive music visualization. See "extra examples" for some
preliminary work.
Our current method of evaluating our system is biased to favor popular artists. In the future, we
plan to extend our evaluation to include comparisons with music similarity derived from human
analysis of music. This type of evaluation should be free of popularity bias. Most importantly, the
machine-generated autotags need to be tested in a social recommender. It is only in such a context
that we can explore whether autotags, when blended with real social tags, will in fact yield improved
recommendations.
References
[1] Audioscrobbler. Web Services described at http://www.audioscrobbler.net/data/webservices/.
[2] A. Berenzweig, B. Logan, D. Ellis, and B. Whitman. A large-scale evaluation of acoustic and subjective
music similarity measures. In Proceedings of the 4th International Conference on Music Information
Retrieval (ISMIR 2003), 2003.
[3] J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. Kégl. Aggregate features and AdaBoost for music
classification. Machine Learning, 65(2-3):473-484, 2006.
[4] D. Ellis, B. Whitman, A. Berenzweig, and S. Lawrence. The quest for ground truth in musical artist
similarity. In Proceedings of the 3rd International Conference on Music Information Retrieval (ISMIR
2002), 2002.
[5] Y. Freund and R.E. Schapire. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148-156, 1996.
[6] B. Gold and N. Morgan. Speech and Audio Signal Processing: Processing and Perception of Speech and
Music. Wiley, Berkeley, California, 2000.
[7] Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. Explaining collaborative filtering recommendations. In Computer Supported Cooperative Work, pages 241-250, 2000.
[8] Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl. Evaluating collaborative
filtering recommender systems. ACM Trans. Inf. Syst., 22(1):5-53, 2004.
[9] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine
Learning, 37(3):297-336, 1999.
[10] Brian Whitman and Ryan M. Rifkin. Musical query-by-description as a multiclass learning problem. In
IEEE Workshop on Multimedia Signal Processing, pages 153-156. IEEE Signal Processing Society, 2002.
[11] Justin Zobel and Alistair Moffat. Exploring the similarity space. SIGIR Forum, 32(1):18-34, 1998.
2,615 | 3,371 | The Price of Bandit Information
for Online Optimization
Thomas P. Hayes
Toyota Technological Institute
Chicago, IL 60637
[email protected]
Varsha Dani
Department of Computer Science
University of Chicago
Chicago, IL 60637
[email protected]
Sham M. Kakade
Toyota Technological Institute
Chicago, IL 60637
[email protected]
Abstract
In the online linear optimization problem, a learner must choose, in each round,
a decision from a set D ⊆ R^n in order to minimize an (unknown and changing) linear cost function. We present sharp rates of convergence (with respect to
additive regret) for both the full information setting (where the cost function is
revealed at the end of each round) and the bandit setting (where only the scalar
cost incurred is revealed). In particular, this paper is concerned with the price
of bandit information, by which we mean the ratio of the best achievable regret
in the bandit setting to that in the full-information setting. For the full information case, the upper bound on the regret is O*(√(nT)), where n is the ambient
dimension and T is the time horizon. For the bandit case, we present an algorithm
which achieves O*(n^{3/2} √T) regret; all previous (nontrivial) bounds here were
O(poly(n) T^{2/3}) or worse. It is striking that the convergence rate for the bandit
setting is only a factor of n worse than in the full information case, in stark
contrast to the K-arm bandit setting, where the gap in the dependence on K is
exponential (√(TK) vs. √(T log K)). We also present lower bounds showing that
this gap is at least √n, which we conjecture to be the correct order. The bandit
algorithm we present can be implemented efficiently in special cases of particular
interest, such as path planning and Markov Decision Problems.
1 Introduction
In the online linear optimization problem (as in Kalai and Vempala [2005]), at each timestep the
learner chooses a decision xt from a decision space D ⊆ Rⁿ and incurs a cost Lt · xt, where the loss
vector Lt is in Rⁿ. This paper considers the case where the sequence of loss vectors L1, ..., LT is
arbitrary; that is, no statistical assumptions are made about the data generation process. The goal
of the learner is to minimize her regret, the difference between the incurred loss on the sequence
and the loss of the best single decision in hindsight. After playing xt at time t, the two most natural
sources of feedback that the learner receives are either complete information of the loss vector Lt
(referred to as the full information case) or only the scalar feedback of the incurred loss Lt · xt
(referred to as the partial feedback or "bandit" case).
The online linear optimization problem has been receiving increasing attention as a paradigm for
structured decision making in dynamic environments, with potential applications to network routing,
                   K-Arm                        Linear Optimization
                   Full         Partial         Full         Partial
                                                             I.I.D.      Expectation      High Probability
  Lower Bound      √(T ln K)    √(TK)           √(nT)        n√T         n√T              n√T
  Upper Bound      √(T ln K)    √(TK)           √(nT)        n√T         n^{3/2}√T        n^{3/2}√T
  Efficient Algo   N/A          N/A             Sometimes    Yes         Sometimes        Sometimes
Table 1: Summary of Regret Bounds: Only the leading dependency in terms of n and T is shown (so some
log factors are dropped). The results in bold are provided in this paper. The results for the K-arm case are
from Freund and Schapire [1997], Auer et al. [1998]. The i.i.d. column is the stochastic setting (where the
loss vectors are drawn from some fixed underlying distribution) and the results are from Dani et al. [2008]. The
expectation column refers to the expected regret for an arbitrary sequence of loss vectors (considered in this
paper). The high probability column follows from a forthcoming paper, Bartlett et al. [2007]; these results also
hold in the adaptive adversary setting, where the loss vectors could change in response to the learner's previous
decisions. The Efficient Algo row refers to whether or not there is an efficient implementation; "yes" means
there is a polytime algorithm (for the stated upper bound) which only uses access to a certain optimization
oracle (as in Kalai and Vempala [2005]) and "sometimes" means only in special cases (such as Path Planning)
can the algorithm be implemented efficiently. See text for further details.
path planning, job scheduling, etc. This paper focuses on the fundamental regrets achievable for the
online linear optimization problem in both the full and partial information feedback settings, as
functions of both the dimensionality n and the time horizon T . In particular, this paper is concerned
with what might be termed the price of bandit information: how much worse the regret is in the
partial information case as compared to the full information case.
In the K-arm case (where D is the set of K choices), much work has gone into obtaining sharp regret
bounds. These results are summarized in the left two columns in Table 1. For the full information
case, the exponential weights algorithm, Hedge, of Freund and Schapire [1997] provides the regret
listed. For the partial information case, there is a long history of sharp regret bounds in various
settings (particularly in statistical settings where i.i.d assumptions are made), dating back to Robbins
[1952]. In the (non-statistical) adversarial case, the algorithm of Auer et al. [1998] provides the
regret listed in Table 1 for the partial information setting. This case has a convergence rate that is
exponentially worse than the full information case (as a function of K).
There are a number of issues that we must address in obtaining sharp convergence for the online
linear optimization problem. The first issue to address is in understanding what are the natural
quantities to state upper and lower bounds in terms of. It is natural to consider the case where the
loss is uniformly bounded (say in [0, 1]). Clearly, the dimensionality n and the time horizon T
are fundamental quantities. For the full information case, all previous bounds (see, e.g., Kalai and
Vempala [2005]) also have dependencies on the diameter of the decision and cost spaces. It turns
out that these are extraneous quantities ? with the bounded loss assumption, one need not explicitly
consider diameters of the decision and cost spaces. Hence, even in the full information case, to
obtain a sharp upper bound we need a new argument to get an upper bound that is stated only in
terms of n and T (and we do this via a relatively straightforward appeal to Hedge).
The second (and more technically demanding) issue is to obtain a sharp bound for the partial information case. Here, for the K-arm bandit case, the regret is O*(√(KT)). Trivially, we can appeal
to this result in the linear optimization case to obtain a √(|D|T) regret by setting K to be the size
of D. However, the regret could have a very poor n dependence, as |D| could be exponential in n
(or worse). In contrast, note that in the full information case, we could appeal to the K-arm case
to obtain O(√(T log|D|)) regret, which in many cases is acceptable (such as when D is exponential
in n). The primary motivation for different algorithms in the full information case (e.g. Kalai and
Vempala [2005]) was for computational reasons. In contrast, in the partial information case, we
seek a new algorithm in order to just obtain a sharper convergence rate (of course, we are still
also interested in efficient implementations). The goal here is to provide a regret that is O*(poly(n) √T).
In fact, the partial information case (for linear optimization) has been receiving increasing interest in the literature [Awerbuch and Kleinberg, 2004, McMahan and Blum, 2004, Dani and Hayes,
2006]. Here, all regrets provided are O(poly(n) T^{2/3}) or worse. We should note that some of the
results here [Awerbuch and Kleinberg, 2004, Dani and Hayes, 2006] are stated in terms of only n
and T (without referring to the diameters of various spaces). There is only one (non-trivial) special
case [Gyorgy et al., 2007] in the literature where an O*(poly(n) √T) regret has been established,
and this case assumes significantly more feedback than in the partial information case; their result is for Path Planning (where D is the set of paths on a graph and n is the number of edges)
and the feedback model assumes that the learner receives the weight along each edge that is traversed
(significantly more information than just the scalar loss). The current paper provides the first
O*(poly(n) √T) regret for the general online linear optimization problem with scalar feedback;
in particular, our algorithm has an expected regret that is O*(n^{3/2} √T).
The final issue to address here is lower bounds, which are not extant in the literature. This paper
provides lower bounds for both the full and partial information case. We believe these lower bounds
are tight, up to log factors.
We have attempted to summarize the extant results in the literature (along with the results in this
paper) in Table 1. We believe that we have a near complete picture of the achievable rates. One
striking result is that the price of bandit information is relatively small; the upper bound is only
a factor of n worse than in the full information case. In fact, the lower bounds suggest the partial
feedback case is only worse by a factor of √n. Contrast this to the K-arm case, where the full
information case does exponentially better as a function of K.
As we believe that the lower bounds are sharp, we conjecture that the price of bandit information
is only √n. Part of our reasoning is due to our previous result [Dani et al., 2008] in the i.i.d. case
(where the linear loss functions are sampled from a fixed, time-invariant distribution); there, we
provided an upper bound on the regret of only O*(n √T). That bound was achieved by a deterministic algorithm which was a generalization of the celebrated algorithm of Lai and Robbins [1985]
for the K-arm case (in the i.i.d. setting).
Finally, we should note that this paper primarily focuses on the achievable regrets, not on efficient
implementations. In much of the previous work in the literature (for both the full and partial information case), the algorithms can be implemented efficiently provided access to a certain optimization oracle. We are not certain whether our algorithms can be implemented efficiently, in general,
with only this oracle access. However, as our algorithms use the Hedge algorithm of Freund and
Schapire [1997], for certain important applications, efficient implementations do exist, based on dynamic programming. Examples include problems such as Path Planning (for instance, in routing
network traffic), and also Markov Decision Problems, one of the fundamental models for long-term
planning in AI. This idea has been developed by Takimoto and Warmuth [2003] and also applied
by Gyorgy et al. [2007] (mentioned earlier) for Path Planning ? the extension to Markov Decision
Problems is relatively straightforward (based on dynamic programming).
The paper is organized as follows. In Section 2, we give a formal description of the problem. Then
in Section 3 we present upper bounds for both the full information and bandit settings. Finally, in
Section 4 we present lower bounds for both settings. All results in this paper are summarized in
Table 1 (along with previous work).
2 Preliminaries
Let D ⊆ Rⁿ denote the decision space. The learner plays the following T-round game against an
oblivious adversary. First, the adversary chooses a sequence L1, ..., LT of loss vectors in Rⁿ. We
assume that the loss vectors are admissible, meaning they satisfy the boundedness property that for
each t and for all x ∈ D, 0 ≤ Lt · x = Ltᵀ x ≤ 1. On each round t, the learner must choose a decision
xt in D, which results in a loss of ℓt = Ltᵀ xt. Throughout the paper we represent x ∈ D and Lt
as column vectors and use vᵀ to denote the transpose of a column vector v. In the full information
case, Lt is revealed to the learner after time t. In the partial information case, only the incurred loss
ℓt (and not the vector Lt) is revealed.
If x1, ..., xT are the decisions the learner makes in the game, then the total loss is Σ_{t=1}^T Ltᵀ xt. The
cumulative regret is defined by

    R = Σ_{t=1}^T Ltᵀ xt − min_{x ∈ D} Σ_{t=1}^T Ltᵀ x.
In other words, the learner's loss is compared to the loss of the best single decision in hindsight.
The goal of the learner is to make a sequence of decisions that guarantees low regret. For the partial
information case, our upper bounds on the regret are only statements that hold in expectation (with
respect to the learner's randomness). The lower bounds provided hold with high probability.
This paper also assumes the learner has access to a barycentric spanner (as defined by Awerbuch and
Kleinberg [2004]) of the decision region; such a spanner is useful for exploration. This is a subset
of n linearly independent vectors of the decision space, such that every vector in the decision space
can be expressed as a linear combination of elements of the spanner with coefficients in [−1, 1].
Awerbuch and Kleinberg [2004] showed that any full rank compact set in Rⁿ has a barycentric
spanner. Furthermore, an almost barycentric spanner (where the coefficients are in [−2, 2]) can be
found efficiently (with certain oracle access). In view of these remarks, we assume without loss of
generality that D contains the standard basis vectors ē1, ..., ēn and that D ⊆ [−1, 1]ⁿ. We refer to
the set {ē1, ..., ēn} as the spanner. Note that with this assumption, ||x||₂ ≤ √n for all x ∈ D.
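To make the protocol concrete, the following minimal simulation of the T-round game and the regret computation may help (a sketch in Python; the function and variable names are our own illustration, not from the paper, and the decision set is assumed to be a small explicit list of vectors):

import numpy as np

def play_game(decision_set, loss_vectors, choose):
    # decision_set: (|D|, n) array, one decision vector per row
    # loss_vectors: (T, n) array, row t is the (hidden) loss vector L_t
    # choose: callback mapping the list of past (index, scalar loss) pairs
    #         to the index of the next decision; this plays the learner
    T = loss_vectors.shape[0]
    history, total_loss = [], 0.0
    for t in range(T):
        i = choose(history)
        loss = float(loss_vectors[t] @ decision_set[i])  # scalar loss L_t . x_t
        total_loss += loss
        history.append((i, loss))  # bandit feedback: only the scalar is stored
    best_fixed = (loss_vectors @ decision_set.T).sum(axis=0).min()
    return total_loss - best_fixed  # cumulative regret R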
3 Upper Bounds
The decision set D may be potentially large or even uncountably infinite. However, for the purposes of designing algorithms with sharp regret bounds, the following lemma shows that we need
only concern ourselves with finite decision sets; the lemma shows that any decision set may be
approximated to sufficiently high accuracy by a suitably small set (which is a 1/√T-net for D).

Lemma 3.1. Let D ⊆ [−1, 1]ⁿ be an arbitrary decision set. Then there is a set D̃ ⊆ D of size
at most (4nT)^{n/2} such that for every sequence of admissible loss vectors, the optimal loss for D̃ is
within an additive √(nT) of the optimal loss for D.

Proof sketch. For each x ∈ D suppose we truncate each coordinate of x to only the first ½ log(nT)
bits. Now from all x ∈ D which result in the same truncated representation, we select a single
representative to be included in D̃. This results in a set D̃ of size at most (4nT)^{n/2} which is a
1/√T-net for D. That is, every x ∈ D is at distance at most 1/√T from its nearest neighbor in D̃.
Since an admissible loss vector has norm at most √n, summing over the T rounds of the game, we
see that the optimal loss for D̃ is within an additive √(nT) of the optimal loss for D.
For implementation purposes, it may be impractical to store the decision set (or the covering net of
the decision set) explicitly as a list of points. However, our algorithms only require the ability to
sample from a specific distribution over the decision set. Furthermore, in many cases of interest the
full decision set is finite and exponential in n, so we can directly work with D (rather than a cover
of D). As discussed in the Introduction, in many important cases of interest this can actually be
accomplished using time and space which are only logarithmic in |D|; this is because Hedge
can be implemented efficiently for these special cases.
3.1 With Full Information

In the full information setting, the algorithm Hedge of Freund and Schapire [1997] guarantees a
regret of at most O(√(T log|D|)). Since we may modify D so that log|D| is O(n log n log T),
this gives us regret O*(√(nT)). Note that we are only concerned with the regret here; Hedge may
in general be quite inefficient to implement. However, in many special cases of interest, efficient
implementations are in fact possible, as discussed in the Introduction.
We also note that under the relatively minor assumption of the existence of an oracle for offline
optimization, the algorithm of Kalai and Vempala [2005] is an efficient algorithm for this setting.
However, it appears that their regret is O(n √T) rather than O(√(nT)); their regret bounds are
stated in terms of diameters of the decision and cost spaces, but we can bound these in terms of n,
which leads to the O(n √T) regret for their algorithm.
3.2 With Bandit Information
We now present the Geometric Hedge algorithm (shown in Algorithm 3.1) that achieves low expected regret for the setting where only the observed loss, ℓt = Lt · xt, is received as feedback. This
algorithm is motivated by the algorithms in Auer et al. [1998] (designed for the K-arm case), which
use Hedge (with estimated losses) along with a γ probability of exploration.
Algorithm GeometricHedge(D, γ, η)
    ∀x ∈ D, p1(x) ← 1/|D|
    for t ← 1 to T:
        ∀x ∈ D, p̂t(x) = (1 − γ) pt(x) + (γ/n) 1{x ∈ spanner}
        Sample xt according to distribution p̂t
        Incur and observe loss ℓt := Lt · xt
        Ct := E_{p̂t}[x xᵀ]
        L̂t := ℓt Ct⁻¹ xt
        ∀x ∈ D, pt+1(x) ∝ pt(x) e^{−η L̂t · x}
In the Geometric Hedge algorithm, there is a γ probability of exploring with the spanner on each
round (motivated by Awerbuch and Kleinberg [2004]). The estimated losses we feed into Hedge
are determined by the estimator L̂t of Lt. Note that the algorithm is well defined as Ct is always
non-singular. The following lemma shows why this estimator is sensible.

Lemma 3.2. On each round t, L̂t is an unbiased estimator for the true loss vector Lt.

Proof. L̂t = ℓt Ct⁻¹ xt = (Lt · xt) Ct⁻¹ xt = Ct⁻¹ xt (xtᵀ Lt). Therefore

    E[L̂t] = E[Ct⁻¹ xt (xtᵀ Lt)] = Ct⁻¹ E[xt xtᵀ] Lt = Ct⁻¹ Ct Lt = Lt,

where all the expectations are over the random choice of xt drawn from p̂t.
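As a quick numerical sanity check of Lemma 3.2 (not from the paper; the names and constants below are our own), one can draw many decisions from a fixed distribution p̂ and confirm that the empirical mean of ℓ C⁻¹ x recovers the true loss vector:

import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 20
D = rng.uniform(-1, 1, size=(m, n))        # a finite decision set in [-1, 1]^n
p_hat = rng.dirichlet(np.ones(m))          # an arbitrary sampling distribution
L_true = rng.uniform(0, 1.0 / n, size=n)   # a loss vector with small entries

C = np.einsum('k,ki,kj->ij', p_hat, D, D)  # C = E_{p_hat}[x x^T]
C_inv = np.linalg.inv(C)

draws = rng.choice(m, size=200000, p=p_hat)
X = D[draws]                               # sampled decisions
ell = X @ L_true                           # observed scalar losses
L_hat = ell[:, None] * (X @ C_inv)         # per-sample estimate ell * C^{-1} x

print(np.abs(L_hat.mean(axis=0) - L_true).max())  # ~0 up to Monte Carlo error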
In the K-arm case, where n = K and D = {ē1, ..., ēK}, Algorithm 3.1 specializes to the Exp3
algorithm of Auer et al. [1998].
Note that if |D| is exponential in the dimension n then, in general, maintaining and sampling from
the distributions pt and p̂t is very expensive in terms of running time. However, in many special
cases of interest, this can actually be implemented efficiently.
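For a small explicit decision set, a direct implementation of Algorithm 3.1 might look as follows (a sketch under the assumption that the first n rows of D are the standard basis vectors forming the spanner; all names are ours):

import numpy as np

def geometric_hedge(D, losses, gamma, eta, rng):
    # D: (|D|, n) array whose first n rows are the standard basis (the spanner)
    # losses: (T, n) array of admissible loss vectors; only scalars are observed
    m, n = D.shape
    T = losses.shape[0]
    log_w = np.zeros(m)  # log of the unnormalized Hedge weights
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        p_hat = (1.0 - gamma) * p
        p_hat[:n] += gamma / n  # mix in uniform exploration over the spanner
        i = rng.choice(m, p=p_hat)
        x = D[i]
        ell = float(losses[t] @ x)  # the only feedback the learner sees
        total += ell
        C = np.einsum('k,ki,kj->ij', p_hat, D, D)  # C_t = E_{p_hat}[x x^T]
        L_hat = ell * np.linalg.solve(C, x)        # unbiased estimate of L_t
        log_w -= eta * (D @ L_hat)  # p_{t+1}(x) proportional to p_t(x) exp(-eta L_hat . x)
    return total

Per Theorem 3.3 below, one would set gamma = n**1.5 / np.sqrt(T) and eta = 1 / np.sqrt(n * T).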
We now state the main technical result of the paper.

Theorem 3.3. Let γ = n^{3/2}/√T and η = 1/√(nT) in Algorithm 3.1. For any sequence L1, ..., LT of
admissible loss vectors, let R denote the regret of Algorithm 3.1 on this sequence. Then

    E R ≤ ln|D| √(nT) + 2 n^{3/2} √T.

As before, since we may replace D with a set of size O((nT)^{n/2}) for an additional regret of only
√(nT), the regret is O*(n^{3/2} √T). Moreover, if |D| ≤ cⁿ for some constant c, as is the case for the
online shortest path problem, then E R = O(n^{3/2} √T).
3.3 Analysis of Algorithm 3.1

In this section, we prove Theorem 3.3. We start by providing the following bound on the sizes of
the estimated loss vectors used by Algorithm 3.1.

Lemma 3.4. For each x ∈ D and 1 ≤ t ≤ T, the estimated loss vector L̂t satisfies

    |L̂t · x| ≤ n²/γ.

Proof. First, let us examine Ct. Let λ1, ..., λn be the eigenvalues of Ct, and v1, ..., vn be the
corresponding (orthonormal) eigenvectors. Since Ct := E_{p̂t}[x xᵀ] and λi = viᵀ Ct vi, we have

    λi = viᵀ E_{p̂t}[x xᵀ] vi = Σ_{x ∈ D} p̂t(x) (x · vi)²    (1)

and so

    λi = Σ_{x ∈ D} p̂t(x) (x · vi)² ≥ Σ_{x ∈ spanner} p̂t(x) (x · vi)² ≥ (γ/n) Σ_{j=1}^n (ēj · vi)² = (γ/n) ||vi||² = γ/n.

It follows that the eigenvalues λ1⁻¹, ..., λn⁻¹ of Ct⁻¹ are each at most n/γ.
Hence, for each x,

    |L̂t · x| = |ℓt Ct⁻¹ xt · x| ≤ (n/γ) |ℓt| ||xt||₂ ||x||₂ ≤ n²/γ,

where we have used the upper bound on the eigenvalues and the upper bound of √n on the norm of any x ∈ D.

The following proposition is Theorem 3.1 in Auer et al. [1998], restated in our notation (for losses
instead of gains). We state it here without proof. Denote φM(η) := (e^{Mη} − 1 − Mη)/M².

Proposition 3.5. (from Auer et al. [1998]) For every x* ∈ D, the sequence of estimated loss vectors
L̂1, ..., L̂T and the probability distributions p1, ..., pT satisfy

    Σ_{t=1}^T Σ_{x ∈ D} pt(x) L̂t · x ≤ Σ_{t=1}^T L̂t · x* + (ln|D|)/η + (φM(η)/η) Σ_{t=1}^T Σ_{x ∈ D} pt(x) (L̂t · x)²,

where M = n²/γ is an upper bound on |L̂ · x|.

Before we are ready to complete the proof, two technical lemmas are useful.

Lemma 3.6. For each x ∈ D and 1 ≤ t ≤ T,

    E_{xt ∼ p̂t}[(L̂t · x)²] ≤ xᵀ Ct⁻¹ x.

Proof. Using that E[(L̂t · x)²] = xᵀ E[L̂t L̂tᵀ] x, we have

    xᵀ E[L̂t L̂tᵀ] x = xᵀ E[ℓt² Ct⁻¹ xt xtᵀ Ct⁻¹] x ≤ xᵀ Ct⁻¹ E[xt xtᵀ] Ct⁻¹ x = xᵀ Ct⁻¹ x.

Lemma 3.7. For each 1 ≤ t ≤ T,

    Σ_{x ∈ D} p̂t(x) xᵀ Ct⁻¹ x = n.

Proof. The singular value decomposition of Ct⁻¹ is V B Vᵀ where B is diagonal (with the inverse
eigenvalues as the diagonal entries) and V is orthogonal (with the columns being the eigenvectors).
This implies that xᵀ Ct⁻¹ x = Σi λi⁻¹ (x · vi)². Using Equation 1, it follows that

    Σ_{x ∈ D} p̂t(x) xᵀ Ct⁻¹ x = Σ_{x ∈ D} p̂t(x) Σ_{i=1}^n λi⁻¹ (x · vi)² = Σ_{i=1}^n λi⁻¹ Σ_{x ∈ D} p̂t(x) (x · vi)² = Σ_{i=1}^n 1 = n.
We are now ready to complete the proof of Theorem 3.3.

Proof. We now have, for any x* ∈ D,

    Σ_{t,x} p̂t(x) L̂t · x = Σ_{t=1}^T Σ_{x ∈ D} [(1 − γ) pt(x) + (γ/n) 1{∃j : x = ēj}] L̂t · x
      ≤ (1 − γ) [Σ_{t=1}^T L̂t · x* + (ln|D|)/η + (φM(η)/η) Σ_{t,x} pt(x) (L̂t · x)²] + (γ/n) Σ_{t=1}^T Σ_{j=1}^n L̂t · ēj
      ≤ Σ_{t=1}^T L̂t · x* + (ln|D|)/η + (φM(η)/η) Σ_{t,x} p̂t(x) (L̂t · x)² + (γ/n) Σ_{t=1}^T Σ_{j=1}^n L̂t · ēj,

where the last step uses (1 − γ) pt(x) ≤ p̂t(x). Taking expectations and using the unbiased property,

    E[Σ_{t,x} p̂t(x) L̂t · x] = Σ_{t=1}^T Lt · x* + (ln|D|)/η + (φM(η)/η) E[Σ_{t,x} p̂t(x) (L̂t · x)²] + (γ/n) Σ_{t=1}^T Σ_{j=1}^n Lt · ēj
      ≤ Σ_{t=1}^T Lt · x* + (ln|D|)/η + (φM(η)/η) E[Σ_{t,x} p̂t(x) E_{xt ∼ p̂t}(L̂t · x)²] + γT
      ≤ Σ_{t=1}^T Lt · x* + (ln|D|)/η + (φM(η)/η) nT + γT,

where we have used Lemmas 3.6 and 3.7 in the last step.
Setting γ = n^{3/2}/√T and η = 1/√(nT) gives Mη = n²η/γ ≤ 1, which implies that

    φM(η) = (e^{Mη} − 1 − Mη)/M² ≤ M²η²/M² = η²,

where the inequality comes from the fact that for a ≤ 1, e^a ≤ 1 + a + a². With the above, we have

    E[Σ_{t,x} p̂t(x) L̂t · x] ≤ Σ_{t=1}^T Lt · x* + ln|D| √(nT) + 2 n^{3/2} √T.

The proof is completed by noting that

    E[Σ_{t,x} p̂t(x) L̂t · x] = E[Σ_{t,x} p̂t(x) E[L̂t | Ht] · x] = E[Σ_{t,x} p̂t(x) Lt · x] = E[Σ_t Lt · xt]

is the expected total loss of the algorithm.
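Lemma 3.7 above is also easy to confirm numerically: for any distribution p̂ over any decision set, the weighted sum Σ_x p̂(x) xᵀ Ct⁻¹ x comes out to exactly n. A short check (our own, with random data):

import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 30
D = rng.uniform(-1, 1, size=(m, n))        # random decision set in [-1, 1]^n
p_hat = rng.dirichlet(np.ones(m))          # arbitrary distribution over D
C = np.einsum('k,ki,kj->ij', p_hat, D, D)  # C = E_{p_hat}[x x^T]
val = sum(p * x @ np.linalg.solve(C, x) for p, x in zip(p_hat, D))
print(val)  # prints 5.0 (= n) up to floating-point error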
4 Lower Bounds

4.1 With Full Information
We now present a family of distributions which establishes an Ω(√(nT)) lower bound for i.i.d. loss
vectors in the full information setting. In the remainder of the paper, we assume for convenience
that the incurred losses are in the interval [−1, 1] rather than [0, 1]. (This changes the bounds by at
most a factor of 2.)

Example 4.1. For a given S ⊆ {1, ..., n} and 0 < ε < 1, we define a random loss vector L as
follows. Choose i ∈ {1, ..., n} uniformly at random. Let σ ∈ ±1 be 1 with probability (1 + ε)/2
and −1 otherwise. Set

    L = σ ēi    if i ∈ S
    L = −σ ēi   if i ∉ S

Let D_{S,ε} denote the distribution of L.
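For concreteness, sampling from D_{S,ε} takes only a few lines (our own sketch):

import numpy as np

def sample_loss(S, eps, n, rng):
    # Draw one loss vector from D_{S, eps}: a signed standard basis vector.
    i = rng.integers(n)  # coordinate chosen uniformly at random
    sigma = 1 if rng.random() < (1 + eps) / 2 else -1
    L = np.zeros(n)
    L[i] = sigma if i in S else -sigma
    return L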
Theorem 4.2. Suppose the decision set D is the unit hypercube {−1, 1}ⁿ. For any full-information
linear optimization algorithm A, and for any positive integer T, there exists S ⊆ {1, ..., n} such
that for loss vectors L1, ..., LT sampled i.i.d. according to D_{S, √(n/T)}, the expected regret is
Ω(√(nT)).

Proof sketch. Clearly, for each S and ε, the optimal decision vector for loss vectors sampled i.i.d.
according to D_{S,ε} is the vector (x1, ..., xn) where xi = −1 if i ∈ S and 1 otherwise.
Suppose S is chosen uniformly at random. In this case, it is clear that the optimal algorithm chooses
decision (x1, ..., xn) where for each i, the sign of xi is the same as the minority of past occurrences
of loss vectors ±ēi (in case of a tie, the value of xi doesn't matter).
Note that at every time step when the empirical minority incorrectly predicts the bias for coordinate i, the optimal algorithm incurs expected regret Ω(ε/n). By a standard application of Stirling's
estimates, one can show that until coordinate i has been chosen Ω(1/ε²) times, the probability
that the empirical majority disagrees with the long-run average is Ω(1). In expectation, this requires Ω(n/ε²) time steps. Summing over the n arms, the overall expected regret is thus at least
Ω(n (ε/n) min{T, n/ε²}) = Ω(min{εT, n/ε}). Setting ε = √(n/T) yields the desired bound.
4.2 With Bandit Information

Next we prove that the same decision set {0, 1}ⁿ and family of distributions D_{S,ε} can be used to
establish an Ω(n √T) lower bound in the bandit setting.

Theorem 4.3. Suppose the decision set D is the unit hypercube {0, 1}ⁿ. For any bandit linear
optimization algorithm A, and for any positive integer T, there exists S ⊆ {1, ..., n} such that for
loss functions L1, ..., LT sampled i.i.d. according to D_{S, n/√T}, the expected regret is Ω(n √T).
Proof sketch. Again, for each S and ε, the optimal decision vector for loss vectors sampled i.i.d.
according to D_{S,ε} is just the indicator vector for the set S.
Suppose S is chosen uniformly at random. Unlike the proof of Theorem 4.2, we do not attempt to
characterize the optimal algorithm for this setting.
Note that, for every 1 ≤ i ≤ n, every time step when the algorithm incorrectly sets xi ≠ 1{i ∈ S}
contributes Ω(ε/n) to the expected regret. Let us fix i ∈ {1, ..., n} and prove a lower bound on
its expected contribution to the total regret. To simplify matters, let us consider the best algorithm
conditioned on the value of S \ {i}. It is not hard to see that the problem of guessing the membership
of i in S based on t past measurements can be recast as a problem of deciding between two possible
means which differ by ε/n, given a sequence of t i.i.d. Bernoulli random variables with one of the
unknown means, where each of the means is a priori equally likely. But for this problem, the error
probability is Ω(1) unless t = Ω((n/ε)²). Thus we have shown that the expected contribution of
coordinate i to the total regret is Ω(min{T, (n/ε)²} ε/n). Summing over the n arms gives an overall
expected regret of Ω(min{εT, n²/ε}). Setting ε = n/√T completes the proof.
References
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: the adversarial multiarmed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science
(1995). IEEE Computer Society Press, Los Alamitos, CA, extended version, 24pp., dated June 8, 1998.
Available from R. Schapire's website.
B. Awerbuch and R. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric
approaches. In Proceedings of the 36th ACM Symposium on Theory of Computing (STOC), 2004.
P. Bartlett, V. Dani, T. P. Hayes, S. M. Kakade, A. Rakhlin, and A. Tewari. High probability regret bounds for
online optimization (working title). Manuscript, 2007.
V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In submission,
2008.
Varsha Dani and Thomas P. Hayes. Robbing the bandit: Less regret in online geometric optimization against
an adaptive adversary. In Proceedings of the 17th ACM-SIAM Symposium on Discrete Algorithms (SODA),
2006.
Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
A. Gyorgy, T. Linder, G. Lugosi, and G. Ottucsak. The on-line shortest path problem under partial monitoring.
Journal of Machine Learning Research, 8:2369-2403, 2007.
Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. J. Comput. Syst. Sci., 71
(3):291-307, 2005. ISSN 0022-0000. doi: http://dx.doi.org/10.1016/j.jcss.2004.10.016.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics,
6:4-25, 1985.
H. B. McMahan and A. Blum. Online geometric optimization in the bandit setting against an adaptive adversary.
In Proceedings of the 17th Annual Conference on Learning Theory (COLT), 2004.
H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical
Society, 55:527-535, 1952.
Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. J. Mach. Learn. Res., 4:
773-818, 2003. ISSN 1533-7928.
A New View of Automatic Relevance Determination
David Wipf and Srikantan Nagarajan*
Biomagnetic Imaging Lab, UC San Francisco
{david.wipf, sri}@mrsc.ucsf.edu
Abstract
Automatic relevance determination (ARD) and the closely-related sparse
Bayesian learning (SBL) framework are effective tools for pruning large numbers
of irrelevant features leading to a sparse explanatory subset. However, popular update rules used for ARD are either difficult to extend to more general problems of
interest or are characterized by non-ideal convergence properties. Moreover, it remains unclear exactly how ARD relates to more traditional MAP estimation-based
methods for learning sparse representations (e.g., the Lasso). This paper furnishes
an alternative means of expressing the ARD cost function using auxiliary functions that naturally addresses both of these issues. First, the proposed reformulation of ARD can naturally be optimized by solving a series of re-weighted ` 1
problems. The result is an efficient, extensible algorithm that can be implemented
using standard convex programming toolboxes and is guaranteed to converge to
a local minimum (or saddle point). Secondly, the analysis reveals that ARD is
exactly equivalent to performing standard MAP estimation in weight space using
a particular feature- and noise-dependent, non-factorial weight prior. We then
demonstrate that this implicit prior maintains several desirable advantages over
conventional priors with respect to feature selection. Overall these results suggest
alternative cost functions and update procedures for selecting features and promoting sparse solutions in a variety of general situations. In particular, the methodology readily extends to handle problems such as non-negative sparse coding and
covariance component estimation.
1 Introduction
Here we will be concerned with the generative model

    y = Φx + ε,    (1)

where Φ ∈ R^{n×m} is a dictionary of features, x ∈ R^m is a vector of unknown weights, y is an
observation vector, and ε is uncorrelated noise distributed as N(ε; 0, λI). When large numbers
of features are present relative to the signal dimension, the estimation problem is fundamentally
ill-posed. Automatic relevance determination (ARD) addresses this problem by regularizing the
solution space using a parameterized, data-dependent prior distribution that effectively prunes away
redundant or superfluous features [10]. Here we will describe a special case of ARD called sparse
Bayesian learning (SBL) that has been very successful in a variety of applications [15]. Later in
Section 4 we will address extensions to more general models.
The basic ARD prior incorporated by SBL is p(x; γ) = N(x; 0, diag[γ]), where γ ∈ R^m_+ is a vector
of m non-negative hyperparameters governing the prior variance of each unknown coefficient.
These hyperparameters are estimated from the data by first marginalizing over the coefficients x
and then performing what is commonly referred to as evidence maximization or type-II maximum
likelihood [7, 10, 15]. Mathematically, this is equivalent to minimizing

    L(γ) ≜ −log ∫ p(y|x) p(x; γ) dx = −log p(y; γ) ≡ log|Σy| + yᵀ Σy⁻¹ y,    (2)

* This research was supported by NIH grants R01DC04855 and R01DC006435.
where a flat hyperprior on γ is assumed, Σy ≜ λI + ΦΓΦᵀ, and Γ ≜ diag[γ]. Once some γ* =
arg min_γ L(γ) is computed, an estimate of the unknown coefficients can be obtained by setting
x_ARD to the posterior mean computed using γ*:

    x_ARD = E[x|y; γ*] = Γ* Φᵀ Σ_{y*}⁻¹ y.    (3)

Note that if any γ*,i = 0, as often occurs during the learning process, then x_ARD,i = 0 and the
corresponding feature is effectively pruned from the model. The resulting weight vector x_ARD is
therefore sparse, with nonzero elements corresponding with the "relevant" features.
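In code, the map from a hyperparameter vector γ to the pruned posterior-mean estimate in (3) is only a few lines (a sketch with our own naming; Phi is the n × m dictionary, gamma has length m):

import numpy as np

def posterior_mean(Phi, y, gamma, lam):
    # x_ARD = Gamma Phi^T (lam I + Phi Gamma Phi^T)^{-1} y; any gamma_i = 0
    # forces x_ARD_i = 0 exactly, so the corresponding feature is pruned.
    n = Phi.shape[0]
    Sigma_y = lam * np.eye(n) + (Phi * gamma) @ Phi.T  # lam I + Phi Gamma Phi^T
    return gamma * (Phi.T @ np.linalg.solve(Sigma_y, y))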
There are (at least) two outstanding issues related to this model which we consider to be significant.
First, while several methods exist for optimizing (2), limitations remain in each case. For example,
an EM version operates by treating the unknown x as hidden data, leading to the E-step

    x̄ ≜ E[x|y; γ] = ΓΦᵀΣy⁻¹y,   Σ̄ ≜ Cov[x|y; γ] = Γ − ΓΦᵀΣy⁻¹ΦΓ,    (4)

and the M-step

    γi → x̄i² + Σ̄ii,   ∀i = 1, ..., m.    (5)

While convenient to implement, the convergence can be prohibitively slow in practice. In contrast,
the MacKay update rules are considerably faster to converge [15]. The idea here is to form the
gradient of (2), equate it to zero, and then form the fixed-point update

    γi → x̄i² / (1 − γi⁻¹ Σ̄ii),   ∀i = 1, ..., m.    (6)

However, neither the EM nor MacKay updates are guaranteed to converge to a local minimum or
even a saddle point of L(γ); both have fixed points whenever a γi = 0, whether at a minimizing
solution or not. Finally, a third algorithm has recently been proposed that optimally updates a single
hyperparameter γi at a time, which can be done very efficiently in closed form [16]. While extremely
fast to implement, as a greedy-like method it can sometimes be more prone to becoming trapped in
local minima when the number of features is large, e.g., m > n (results will be presented in a
forthcoming publication). Additionally, none of these methods are easily extended to more general
problems such as non-negative sparse coding, covariance component estimation, and classification
without introducing additional approximations.
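For reference, the E-step moments and both classical update rules can be written compactly (a sketch with our own names; in practice, entries with gamma_i = 0 would be masked out to avoid division by zero in the MacKay rule):

import numpy as np

def posterior_moments(Phi, y, gamma, lam):
    n = Phi.shape[0]
    Sigma_y = lam * np.eye(n) + (Phi * gamma) @ Phi.T
    K = np.linalg.solve(Sigma_y, Phi * gamma).T  # Gamma Phi^T Sigma_y^{-1}
    mu = K @ y  # posterior mean from Eq. (4)
    var = gamma - np.einsum('ij,ji->i', K, Phi * gamma)  # diagonal of Eq. (4)
    return mu, var

def em_update(mu, var):
    return mu**2 + var  # Eq. (5)

def mackay_update(mu, var, gamma):
    return mu**2 / (1.0 - var / gamma)  # Eq. (6)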
A second issue pertaining to the ARD model involves its connection with more traditional maximum
a posteriori (MAP) estimation methods for extracting sparse, relevant features using fixed, sparsity
promoting prior distributions (i.e., heavy-tailed and peaked). Presently, it is unclear how ARD,
which invokes a parameterized prior and transfers the estimation problem to hyperparameter space,
relates to MAP approaches which operate directly in x space. Nor is it intuitively clear why ARD
often works better in selecting optimal feature sets.
This paper introduces an alternative formulation of the ARD cost function using auxiliary functions that naturally addresses the above issues. In Section 2, the proposed reformulation of ARD is
conveniently optimized by solving a series of re-weighted ℓ1 problems. The result is an efficient algorithm that can be implemented using standard convex programming methods and is guaranteed to
converge to a local minimum (or saddle point) of L(γ). Section 3 then demonstrates that ARD is exactly equivalent to performing standard MAP estimation in weight space using a particular feature- and noise-dependent, non-factorial weight prior. We then show that this implicit prior maintains
several desirable advantages over conventional priors with respect to feature selection. Additionally,
these results suggest modifications of ARD for selecting relevant features and promoting sparse solutions in a variety of general situations. In particular, the methodology readily extends to handle
problems involving non-negative sparse coding, covariance component estimation, and classification
as discussed in Section 4.
2 ARD/SBL Optimization via Iterative Re-Weighted Minimum ℓ1
In this section we re-express L(γ) using auxiliary functions, which leads to an alternative update
procedure that circumvents the limitations of current approaches. In fact, a wide variety of alternative update rules can be derived by decoupling L(γ) using upper-bounding functions that are more
conveniently optimized. Here we focus on a particular instantiation of this idea that leads to an
iterative minimum ℓ1 procedure. The utility of this selection is that many powerful convex programming toolboxes have already been developed for solving these types of problems, especially
when structured dictionaries Φ are being used.
2.1 Algorithm Derivation

To start we note that the log-determinant term of L(γ) is concave in γ (see Section 3.1.5 of [1]),
and so can be expressed as a minimum over upper-bounding hyperplanes via

    log|Σy| = min_z zᵀγ − g*(z),    (7)

where g*(z) is the concave conjugate of log|Σy| that is defined by the duality relationship [1]

    g*(z) = min_γ zᵀγ − log|Σy|,    (8)

although for our purposes we will never actually compute g*(z). This leads to the following upper-bounding auxiliary cost function

    L(γ, z) ≜ zᵀγ − g*(z) + yᵀΣy⁻¹y ≥ L(γ).    (9)

For any fixed γ, the optimal (tightest) bound can be obtained by minimizing over z. The optimal
value of z equals the slope at the current γ of log|Σy|. Therefore, we have

    z_opt = ∇γ log|Σy| = diag[ΦᵀΣy⁻¹Φ].    (10)

This formulation naturally admits the following optimization scheme:

Step 1: Initialize each zi, e.g., zi = 1, ∀i.
Step 2: Solve the minimization problem

    γ → arg min_γ Lz(γ) ≜ zᵀγ + yᵀΣy⁻¹y.    (11)

Step 3: Compute the optimal z using (10).
Step 4: Iterate Steps 2 and 3 until convergence to some γ*.
Step 5: Compute x_ARD = E[x|y; γ*] = Γ*ΦᵀΣ_{y*}⁻¹y.
Lemma 1. The objective function in (11) is convex.

This can be shown using Example 3.4 and Section 3.2.2 in [1]. Lemma 1 implies that many standard
optimization procedures can be used for the minimization required by Step 2. For example, one
attractive option is to convert the problem to an equivalent least absolute shrinkage and selection
operator or "Lasso" [14] optimization problem according to the following:

Lemma 2. The objective function in (11) can be minimized by solving the weighted convex ℓ1-regularized cost function

    x* = arg min_x ||y − Φx||₂² + 2λ Σi zi^{1/2} |xi|    (12)

and then setting γi → zi^{−1/2} |x*,i| for all i (note that each zi will always be positive).
The proof of Lemma 2 can be briefly summarized using a re-expression of the data-dependent term
in (11) via

    yᵀΣy⁻¹y = min_x (1/λ)||y − Φx||₂² + Σi xi²/γi.    (13)

This leads to an upper-bounding auxiliary function for Lz(γ) given by

    Lz(γ, x) ≜ Σi (zi γi + xi²/γi) + (1/λ)||y − Φx||₂² ≥ Lz(γ),    (14)

which is jointly convex in x and γ (see Example 3.4 in [1]) and can be globally minimized by
solving over γ and then x. For any x, γi = zi^{−1/2}|xi| minimizes Lz(γ, x). When substituted into
(14) we obtain (12). When solved for x, the global minimum of (14) yields the global minimum of
(11) via the stated transformation.
In summary then, by iterating the above algorithm using Lemma 2 to implement Step 2, a convenient optimization method is obtained. Moreover, we do not even need to globally solve for x (or
equivalently γ) at each iteration as long as we strictly reduce (11) at each iteration. This is readily achievable using a variety of simple strategies. Additionally, if z is initialized to a vector of
ones, then the starting point (assuming Step 2 is computed in full) is the exact Lasso estimator. The
algorithm then refines this estimate through the specified re-weighting procedure.
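Putting Steps 1 through 5 together with Lemma 2 yields the compact procedure below (a sketch of our own; for self-containment the inner weighted Lasso of (12) is solved by a simple proximal-gradient loop, though any convex toolbox would do):

import numpy as np

def weighted_lasso(Phi, y, w, lam, iters=2000):
    # min_x ||y - Phi x||^2 + 2 lam sum_i w_i |x_i| via proximal gradient
    x = np.zeros(Phi.shape[1])
    step = 0.5 / np.linalg.norm(Phi, 2)**2  # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = 2.0 * Phi.T @ (Phi @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - 2.0 * lam * w * step, 0.0)
    return x

def ard_reweighted_l1(Phi, y, lam, outer_iters=15):
    n, m = Phi.shape
    z = np.ones(m)  # Step 1: the first pass is the standard Lasso
    for _ in range(outer_iters):
        x = weighted_lasso(Phi, y, np.sqrt(z), lam)  # Step 2 via Lemma 2
        gamma = np.abs(x) / np.sqrt(z)               # gamma_i = z_i^{-1/2} |x_i|
        Sigma_y = lam * np.eye(n) + (Phi * gamma) @ Phi.T
        z = np.einsum('ij,ji->i', Phi.T, np.linalg.solve(Sigma_y, Phi))  # Step 3, Eq. (10)
    return gamma * (Phi.T @ np.linalg.solve(Sigma_y, y))  # Step 5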
2.2 Global Convergence Analysis

Let A(γ) denote a mapping that assigns to every point in R^m_+ the subset of R^m_+ which satisfies
Steps 2 and 3 of the proposed algorithm. Such a mapping can be implemented via the methodology
described above. We allow A(γ) to be a point-to-set mapping to handle the case where the global
minimum of (11) is not unique, which could occur, for example, if two columns of Φ are identical.

Theorem 1. From any initialization point γ(0) ∈ R^m_+ the sequence of hyperparameter estimates
{γ(k)} generated via γ(k+1) ∈ A(γ(k)) is guaranteed to converge monotonically to a local minimum (or saddle point) of (2).
The proof is relatively straightforward and stems directly from the Global Convergence Theorem
(see for example [6]). A sketch is as follows: First, it must be shown that the mapping A(γ)
is compact. This condition is satisfied because if any element of γ is unbounded, L(γ) diverges to
infinity. In fact, for any fixed y, Φ and λ, there will always exist a radius r such that for any ||γ(0)|| ≤
r, ||γ(k)|| ≤ r for all k. Second, we must show that for any non-minimizing point of L(γ) denoted
γ′, L(γ″) < L(γ′) for all γ″ ∈ A(γ′). At any non-minimizing γ′ the auxiliary cost function
Lz′(γ) obtained from Step 3 will be strictly tangent to L(γ) at γ′. It will therefore necessarily have
a minimum elsewhere since the slope at γ′ is nonzero by definition. Moreover, because the log|·|
function is strictly concave, at this minimum the actual cost function will be reduced still further.
Consequently, the proposed updates represent a valid descent function. Finally, it must be shown
that A(γ) is closed at all non-stationary points. This follows from related arguments. The algorithm
could of course theoretically converge to a saddle point, but this is rare and any minimal perturbation
leads to escape.
Both EM and MacKay updates provably fail to satisfy one or more of the above criteria and so global
convergence cannot be guaranteed. With EM, the failure occurs because the associated updates do
not always strictly reduce L(γ). Rather, they only ensure that L(γ″) ≤ L(γ′) at all points. In
contrast, the MacKay updates do not even guarantee cost function decrease. Consequently, both
methods can become trapped at a solution such as γ = 0; a fixed point of the updates but not a
stationary point or local minimum of L(γ). However, in practice this seems to be more of an issue
with the MacKay updates. Related shortcomings of EM in this regard can be found in [19]. Finally,
the fast Tipping updates could potentially satisfy the conditions for global convergence, although
this matter is not discussed in [16].
3 Relating ARD to MAP Estimation

In hierarchical models such as ARD and SBL there has been considerable debate over how to best
perform estimation and inference [8]. Do we add a hyperprior and then integrate out γ and perform
MAP estimation directly on x? Or is it better to marginalize over the coefficients x and optimize the
hyperparameters γ as we have described in this paper? In specific cases, arguments have been made
for the merits of one over the other based on intuition or heuristic arguments [8, 15]. But we would
argue that this distinction is somewhat tenuous because, as we will now show using ideas from the
previous section, the weights obtained from the ARD type-II ML procedure can equivalently be
viewed as arising from an explicit MAP estimate in x space. This notion is made precise as follows:
Theorem 2. Let x² ≜ [x1², ..., xm²]ᵀ and γ⁻¹ ≜ [γ1⁻¹, ..., γm⁻¹]ᵀ. Then the ARD coefficients
from (3) solve the MAP problem

    x_ARD = arg min_x ||y − Φx||₂² + λ h*(x²),    (15)

where h*(x²) is the concave conjugate of h(γ⁻¹) ≜ −log|Σy| and is a concave, non-decreasing
function of x².
This result can be established using much of the same analysis used in previous sections. Omitting
some details for the sake of brevity, using (13) we can create a strict upper-bounding auxiliary
function on L(γ):

    L(γ, x) = (1/λ)||y − Φx||₂² + Σi xi²/γi + log|Σy|.    (16)

If we optimize first over γ instead of x (allowable), the last two terms form the stated concave
conjugate function h*(x²). In turn, the minimizing x, which solves (15), is identical to that obtained
by ARD. The concavity of h*(x²) with respect to each |xi| follows from similar ideas.
Corollary 1. The regularization term in (15), and hence the implicit prior distribution on x given
by p(x) ∝ exp[−½ h*(x²)], is not generally factorable, meaning p(x) ≠ Πi pi(xi). Additionally, unlike traditional MAP procedures (e.g., Lasso, ridge regression, etc.), this prior is explicitly
dependent on both the dictionary Φ and the regularization term λ.

This result stems directly from the fact that h(γ⁻¹) is non-factorable and is dependent on Φ and
λ. The only exception occurs when ΦᵀΦ = I; here h*(x²) factors and can be expressed in closed
form independently of Φ, although the λ dependency remains.
3.1 Properties of the implicit ARD prior

To begin at the most superficial level, the Φ dependency of the ARD prior leads to scale-invariant
solutions, meaning the value of x_ARD is not affected if we rescale Φ, i.e., Φ → ΦD, where D is a
diagonal matrix. Rather, any rescaling D only affects the implicit initialization of the algorithm, not
the shape of the cost function.
More significantly, the ARD prior is particularly well-designed for finding sparse solutions. We
should note that concave, non-decreasing regularization functions are well-known to encourage
sparse representations. Since h*(x²) is such a function, it should therefore not be surprising that it
promotes sparsity to some degree. However, when selecting highly sparse subsets of features, the
factorial ℓ0 quasi-norm is often invoked as the ideal regularization term given unlimited computational resources. It is expressed via ||x||₀ ≜ Σi I[xi ≠ 0], where I[·] denotes the indicator function,
and so represents a count of the number of nonzero coefficients (and therefore features). By applying
an exp[−½(·)] transformation, we obtain the implicit (improper) prior distribution. The associated
MAP estimation problem (assuming the same standard Gaussian likelihood) involves solving

    min_x ||y − Φx||₂² + λ ||x||₀.    (17)
The difficulty here is that (17) is nearly impossible to solve in general; it is NP-hard owing to a
combinatorial number of local minima, and so the traditional idea is to replace ||·||₀ with a tractable
approximation. For this purpose, the ℓ1 norm is the optimal or tightest convex relaxation of the ℓ0
quasi-norm, and therefore it is commonly used, leading to the Lasso algorithm [14]. However, the
ℓ1 norm need not be the best relaxation in general. In Sections 3.2 and 3.3 we demonstrate that
the non-factorable, Φ-dependent h*(x²) provides a tighter, albeit non-convex, approximation that
promotes greater sparsity than ||x||₁ while conveniently producing many fewer local minima than
using ||x||₀ directly. We also show that, in certain settings, no Φ-independent, factorial regularization
term can achieve similar results. Consequently, the widely used family of ℓp quasi-norms, i.e.,
||x||p ≜ Σi |xi|^p, p < 1 [2], or the Gaussian entropy measure Σi log|xi| based on the Jeffreys
prior [4], provably fail in this regard.
3.2 Benefits of λ dependency

To explore the properties of h*(x²) regarding λ dependency alone, we adopt the simplifying assumption ΦᵀΦ = I. (Later we investigate the benefits of a non-factorial prior.) In this special case,
h*(x²) is factorable and can be expressed in closed form via

    h*(x²) = Σi h*(xi²) ≡ Σi [ 2|xi| / (|xi| + √(xi² + 4λ)) + log(2λ + xi² + |xi| √(xi² + 4λ)) ],    (18)

which is independent of Φ. A plot of h*(xi²) is shown in Figure 1 (left) below.
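The closed form (18) can be verified against its defining variational problem, h*(xi²) = min_{γi > 0} xi²/γi + log(λ + γi), which it matches up to an additive constant of log 2 (a quick check of our own):

import numpy as np
from scipy.optimize import minimize_scalar

lam = 0.1

def h_numeric(x):  # direct minimization over gamma
    obj = lambda g: x**2 / g + np.log(lam + g)
    return minimize_scalar(obj, bounds=(1e-12, 1e6), method='bounded').fun

def h_closed(x):   # Eq. (18)
    r = np.sqrt(x**2 + 4 * lam)
    return 2 * abs(x) / (abs(x) + r) + np.log(2 * lam + x**2 + abs(x) * r)

diffs = [h_closed(x) - h_numeric(x) for x in np.linspace(0.1, 3.0, 10)]
print(np.ptp(diffs))  # ~0: a constant offset of log(2), as expected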
The λ dependency is retained, however, and contributes two very desirable properties: (i) as a strictly
concave function of each |xi|, h*(x²) more closely approximates the ℓ0 quasi-norm than the ℓ1 norm,
while (ii) the associated cost function (15) is unimodal, unlike when λ-independent approximations,
e.g., the ℓp quasi-norm, are used. This can be explained as follows. When λ is small, the Gaussian
likelihood is highly restrictive, constraining most of its relative mass to a very localized region of x
space. Therefore, a tighter prior more closely resembling the ℓ0 quasi-norm can be used without the
risk of local minima, which occur when the spines of a sparse prior overlap non-negligible portions
of the likelihood (see Figure 6 in [15] for a good 2D visual of a sparse prior with characteristic spines
running along the coordinate axes). In the limit as λ → 0, h*(x²) converges to a scaled version of the
ℓ0 quasi-norm, yet no local minima exist because the likelihood in this case only permits a single
feasible solution with x = Φᵀy. In contrast, when λ is large, the likelihood is less constrained and a
looser prior is required to avoid local minima troubles, which will arise whenever the now relatively
diffuse likelihood intersects the sharp spines of a highly sparse prior. In this situation h*(x²) more
closely resembles a scaled version of the ℓ1 norm. The implicit ARD prior naturally handles this
transition, becoming sparser as λ decreases and vice versa. Hence the following property, which is
easy to show [18]:

Lemma 3. When ΦᵀΦ = I, (15) has no local minima, whereas (17) has 2^m local minima.

Use of the ℓ1 norm in place of h*(x²) also yields no local minima; however, it is a much looser
approximation of ℓ0 and penalizes coefficients linearly, unlike h*(x²). The benefits of λ dependency
in this regard can be formalized and will be presented in a subsequent paper. As a final point of
comparison, the actual weight estimate obtained from solving (15) when ΦᵀΦ = I is equivalent to
the non-negative garrote estimator that has been advocated for wavelet shrinkage [5, 18].
Figure 1: Left: 1D example of the implicit ARD prior. The ℓ1 and ℓ0 norms are included for comparison. Right: Plot of the ARD prior across the feasible region as parameterized by α. A factorial
prior given by −log p(x) ∝ Σi |xi|^{0.01} ≈ ||x||₀ is included for comparison. Both approximations
to the ℓ0 norm retain the correct global minimum, but only ARD smooths out local minima.
3.3 Benefits of a non-factorial prior

In contrast, the benefits of the typically non-factorial nature of h*(x²) are most pronounced when
m > n, meaning there are more features than the dimension n of the signal y. In a noiseless setting (with
λ → 0), we can explicitly quantify the potential of this property of the implicit ARD prior. In this
limiting situation, the canonical sparse MAP estimation problem (17) reduces to finding

    x₀ ≜ arg min_x ||x||₀ s.t. y = Φx.    (19)

By simple extension of results in [18], the global minimum of (15) in the limit as λ → 0 will
equal x₀, assuming the latter is unique. The real distinction then is regarding the number of local
minima. In this capacity the ARD MAP problem is superior to any possible factorial variant:
Theorem 3. In the limit as λ → 0 and assuming m > n, no factorial prior p(x) =
Πi exp[−½ fi(xi)] exists such that the corresponding MAP problem min_x ||y − Φx||₂² +
λ Σi fi(xi) is: (i) always globally minimized by a maximally sparse solution x₀ and, (ii) has
fewer local minima than when solving (15).

A sketch of the proof is as follows. First, for any factorial prior and associated regularization term
Σi fi(xi), the only way to satisfy (i) is if ∂fi(xi)/∂xi → ∞ as xi → 0. Otherwise, it will always be
possible to have a Φ and y such that x₀ is not the global minimum. It is then straightforward to show
that any fi(xi) with this property will necessarily have between C(m−1, n) + 1 and C(m, n) + 1 local minima
(where C(·,·) denotes the binomial coefficient).
Using results from [18], this is provably an upper bound on the number of local minima of (15).
Moreover, with the exception of very contrived situations, the number of ARD local minima will
be considerably less. In general, this result speaks directly to the potential limitations of restricting
oneself to factorial priors when maximal feature pruning is paramount.
Moreover, with the exception of very contrived situations, the number of ARD local minima will
be considerably less. In general, this result speaks directly to the potential limitations of restricting
oneself to factorial priors when maximal feature pruning is paramount.
While generally difficult to visualize, in restricted situations it is possible to explicitly illustrate
the type of smoothing over local minima that is possible using non-factorial priors. For example,
consider the case where m = n + 1 and Rank(?) = n, implying that ? has a null-space dimension
of one. Consequently, any feasible solution to y = ?x can be expressed as x = x 0 + ?v, where
v ? Null(?), ? is any real-valued scalar, and x0 is any fixed, feasible solution (e.g., the minimum
norm solution). We can now plot any prior distribution p(x), or equivalently ? log p(x), over the
1D feasible region of x space as a function of ? to view the local minima profile.
To demonstrate this idea, we chose n = 10, m = 11 and generated a ? matrix using iid N (0, 1)
entries. We then computed y = ?x0 , where kx0 k0 = 9 and nonzero entries are also iid unit
Gaussian. Figure 1 (right) displays the plots of two example priors in the feasible
region of y = ?x:
P
(i) the non-factorial implicit ARD prior, and (ii) the prior p(x) ? exp(? 12 i |xi |p ), p = 0.01. The
later is a factorial prior which converges to the ideal sparsity penalty when p ? 0. From the figure,
we observe that, while both priors peak at the x0 , the ARD prior has substantially smoothed away
local minima. While the implicit Lasso prior (which is equivalent to the assumption p = 1) also
smooths out local minima, the global minimum may be biased away from the maximally sparse
solution in many situations, unlike the ARD prior which provides a non-convex approximation with
its global minimum anchored at x0 .
4 Extensions

Thus far we have restricted attention to one particularly useful ARD-based model. But much of the
analysis can be extended to handle a variety of alternative data likelihoods and priors. A particularly
useful adaptation relevant to compressed sensing [17], manifold learning [13], and neuroimaging
[12, 18] is as follows. First, the data y can be replaced with an n × t observation matrix Y which is
generated via an unknown coefficient matrix X. The assumed likelihood model and prior are
d?
X
T ?1
1
1
2
p(Y |X) ? exp ? kY ? ?XkF , p(X) ? exp ? trace X ?x X , ?x ,
?i Ci .
2?
2
i=1
(20)
Here each of the d? matrices Ci ?s are known covariance components of which the irrelevant ones
are pruned by minimizing the analogous type-II likelihood function
?1
1
L(?) = log |?I + ??x ?T | + trace XX T ?I + ??x ?T
.
(21)
t
With minimal effort, this extension can be solved using the methodology described herein. The
primary difference is that Step 2 becomes a second-order cone (SOC) optimization problem for
which a variety of techniques exist for its minimization [2, 9].
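For concreteness, a small numpy sketch (our own; the component matrices C_i below are the standard rank-one ARD choice, used purely as a stand-in) evaluates the cost (21) for a candidate γ:

import numpy as np

def type2_cost(gamma, C_list, Phi, Y, lam):
    """L(gamma) = log|lam*I + Phi Sigma_x Phi^T|
                 + (1/t) trace[Y Y^T (lam*I + Phi Sigma_x Phi^T)^{-1}],
    with Sigma_x = sum_i gamma_i C_i as in (20)-(21)."""
    n, t = Y.shape
    Sigma_x = sum(g * C for g, C in zip(gamma, C_list))
    Sigma_y = lam * np.eye(n) + Phi @ Sigma_x @ Phi.T
    _, logdet = np.linalg.slogdet(Sigma_y)
    # Use a linear solve rather than forming the inverse explicitly.
    return logdet + np.trace(Y.T @ np.linalg.solve(Sigma_y, Y)) / t

# Tiny example: C_i = e_i e_i^T recovers the standard (diagonal) ARD model.
rng = np.random.default_rng(1)
n, m, t = 5, 8, 3
Phi = rng.standard_normal((n, m))
Y = rng.standard_normal((n, t))
C_list = [np.outer(e, e) for e in np.eye(m)]
print(type2_cost(np.ones(m), C_list, Phi, Y, lam=0.1))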
Another very useful adaptation involves adding a non-negativity constraint on the coefficients x,
e.g., non-negative sparse coding. This is easily incorporated into the MAP cost function (15) and
optimization problem (12); performance is often significantly better than the non-negative Lasso.
Results will be presented in a subsequent paper. It may also be possible to develop an effective
variant for handling classification problems that avoids additional approximations such as those
introduced in [15].
5 Discussion
While ARD-based approaches have enjoyed remarkable success in a number of disparate fields, they remain hampered to some degree by implementational limitations and a lack of clarity regarding the nature of the cost function and existing update rules. This paper addresses these issues by presenting a principled alternative algorithm based on auxiliary functions and a dual representation of the ARD objective. The resulting algorithm is initialized at the well-known Lasso solution and then iterates via a globally convergent re-weighted ℓ1 procedure that in many ways approximates ideal subset selection using the ℓ0 norm. Preliminary results using this methodology on toy problems as well as large neuroimaging simulations with m ≈ 100,000 are very promising (and will be reported in future papers). A good (highly sparse) solution is produced at every iteration, so early stopping is always feasible if desired. This produces a highly efficient, global competition among features that is potentially superior to the sequential (greedy) updates of [16] in terms of local minima avoidance in certain cases where Φ is highly overcomplete (i.e., m ≫ n). Moreover, it is also easily extended to handle additional constraints (e.g., non-negativity) or model complexity as occurs with general covariance component estimation. A related optimization strategy has also been reported in [3].

The analysis used in deriving this algorithm reveals that ARD is exactly equivalent to performing MAP estimation in x space using a principled, sparsity-inducing prior that is non-factorable and dependent on both the feature set and the noise parameter. We have shown that these qualities allow it to promote maximally sparse solutions at the global minimum while exhibiting drastically fewer local minima than competing priors. This might possibly explain the superior performance of ARD/SBL over the Lasso in a variety of disparate disciplines where sparsity is crucial [11, 12, 18]. These ideas raise a key question: if we do not limit ourselves to factorable, Φ- and λ-independent regularization terms/priors as is commonly done, then what is the optimal prior p(x) in the context of feature selection? Perhaps there is a better choice that does not neatly fit into current frameworks linked to empirical priors based on the Gaussian distribution. Note that the ℓ1 re-weighting scheme for optimization can be applied to a broad family of non-factorial, sparsity-inducing priors.
References
[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Signal Processing, vol. 53, no. 7, pp. 2477-2488, April 2005.
[3] M. Fazel, H. Hindi, and S. Boyd, "Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices," Proc. American Control Conf., vol. 3, pp. 2156-2162, June 2003.
[4] M.A.T. Figueiredo, "Adaptive sparseness using Jeffreys prior," Advances in Neural Information Processing Systems 14, pp. 697-704, 2002.
[5] H. Gao, "Wavelet shrinkage denoising using the nonnegative garrote," Journal of Computational and Graphical Statistics, vol. 7, no. 4, pp. 469-488, 1998.
[6] D.G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, Reading, Massachusetts, 2nd ed., 1984.
[7] D.J.C. MacKay, "Bayesian interpolation," Neural Comp., vol. 4, no. 3, pp. 415-447, 1992.
[8] D.J.C. MacKay, "Comparison of approximate methods for handling hyperparameters," Neural Comp., vol. 11, no. 5, pp. 1035-1068, 1999.
[9] D.M. Malioutov, M. Çetin, and A.S. Willsky, "Sparse signal reconstruction perspective for source localization with sensor arrays," IEEE Trans. Signal Processing, vol. 53, no. 8, pp. 3010-3022, August 2005.
[10] R.M. Neal, Bayesian Learning for Neural Networks, Springer-Verlag, New York, 1996.
[11] R. Pique-Regi, E.S. Tsau, A. Ortega, R.C. Seeger, and S. Asgharzadeh, "Wavelet footprints and sparse Bayesian learning for DNA copy number change analysis," Int. Conf. Acoustics, Speech and Signal Processing, April 2007.
[12] R.R. Ramírez, Neuromagnetic Source Imaging of Spontaneous and Evoked Human Brain Dynamics, PhD Thesis, New York University, 2005.
[13] J.G. Silva, J.S. Marques, and J.M. Lemos, "Selecting landmark points for sparse manifold learning," Advances in Neural Information Processing Systems 18, pp. 1241-1248, 2006.
[14] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, vol. 58, no. 1, pp. 267-288, 1996.
[15] M.E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
[16] M.E. Tipping and A.C. Faul, "Fast marginal likelihood maximisation for sparse Bayesian models," Ninth Int. Workshop on Artificial Intelligence and Statistics, Jan. 2003.
[17] M.B. Wakin, M.F. Duarte, S. Sarvotham, D. Baron, and R.G. Baraniuk, "Recovery of jointly sparse signals from a few random projections," Advances in Neural Information Processing Systems 18, pp. 1433-1440, 2006.
[18] D.P. Wipf, "Bayesian Methods for Finding Sparse Representations," PhD Thesis, UC San Diego, 2006.
[19] C.F. Wu, "On the convergence properties of the EM algorithm," The Annals of Statistics, vol. 11, pp. 95-103, 1983.
2,617 | 3,373 | An Analysis of Convex Relaxations for MAP Estimation
M. Pawan Kumar
Dept. of Computing
Oxford Brookes University
V. Kolmogorov
Computer Science
University College London
P.H.S. Torr
Dept. of Computing
Oxford Brookes University
[email protected]
[email protected]
[email protected]
Abstract
The problem of obtaining the maximum a posteriori estimate of a general discrete random field (i.e. a random field defined using a finite and discrete set of
labels) is known to be NP-hard. However, due to its central importance in many
applications, several approximate algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on
convex relaxations: (i) LP - S: the linear programming (LP) relaxation proposed by
Schlesinger [20] for a special case and independently in [4, 12, 23] for the general
case; (ii) QP - RL: the quadratic programming (QP) relaxation by Ravikumar and
Lafferty [18]; and (iii) SOCP - MS: the second order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki [16] for two label problems and
later extended in [14] for a general label set.
We show that the SOCP - MS and the QP - RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints/objective
function offered by QP and SOCP, the LP - S relaxation strictly dominates (i.e. provides a better approximation than) QP - RL and SOCP - MS. We generalize these
results by defining a large class of SOCP (and equivalent QP) relaxations which is
dominated by the LP - S relaxation. Based on these results we propose some novel
SOCP relaxations which strictly dominate the previous approaches.
1 Introduction
Discrete random fields are a powerful tool to obtain a probabilistic formulation for various applications in Computer Vision and related areas [3]. Hence, developing accurate and efficient algorithms
for performing inference on a given discrete random field is of fundamental importance. In this
work, we will focus on the problem of maximum a posteriori (MAP) estimation. MAP estimation
is a key step in obtaining the solutions to many applications such as stereo, image stitching and
segmentation [21]. Furthermore, it is closely related to many important Combinatorial Optimization
problems such as MAXCUT [6], multi-way cut [5], metric labelling [3, 11] and 0-extension [3, 9].
Given data D, a discrete random field models the distribution (i.e. either the joint or the conditional probability) of a labelling for a set of random variables. Each of these variables v = {v_0, v_1, …, v_{n−1}} can take a label from a discrete set l = {l_0, l_1, …, l_{h−1}}. A particular labelling of variables v is specified by a function f whose domain corresponds to the indices of the random variables and whose range is the index of the label set, i.e. f : {0, 1, …, n−1} → {0, 1, …, h−1}. In other words, random variable v_a takes label l_{f(a)}. For convenience, we assume the model to be a conditional random field (CRF) while noting that all the results of this paper also apply to Markov random fields (MRF).

A CRF specifies a neighbourhood relationship E between the random variables, i.e. (a, b) ∈ E if, and only if, v_a and v_b are neighbouring random variables. Within this framework, the conditional probability of a labelling f given data D is specified as Pr(f | D, θ) = (1/Z(θ)) exp(−Q(f; D, θ)). Here θ represents the parameters of the CRF and Z(θ) is a normalization constant which ensures that the probability sums to one (also known as the partition function). The energy Q(f; D, θ) is given by

Q(f; D, θ) = ∑_{v_a ∈ v} θ^1_{a;f(a)} + ∑_{(a,b) ∈ E} θ^2_{ab;f(a)f(b)}.

The term θ^1_{a;f(a)} is called a unary potential since its value depends on the labelling of one random variable at a time. Similarly, θ^2_{ab;f(a)f(b)} is called a pairwise potential as it depends on a pair of random variables. For simplicity, we assume that θ^2_{ab;f(a)f(b)} = w(a, b) d(f(a), f(b)), where w(a, b) is the weight that indicates the strength of the pairwise relationship between variables v_a and v_b, with w(a, b) = 0 if (a, b) ∉ E, and d(·, ·) is a distance function on the labels. As will be seen later, this formulation of the pairwise potentials would allow us to concisely describe our results.
The problem of MAP estimation is well known to be NP-hard in general. Since it plays a central
role in several applications, many approximate algorithms have been proposed in the literature. In
this work, we analyze three such algorithms which are based on convex relaxations. Specifically,
we consider: (i) LP - S, the linear programming (LP) relaxation of [4, 12, 20, 23]; (ii) QP - RL, the
quadratic programming (QP) relaxation of [18]; and (iii) SOCP - MS, the second order cone programming (SOCP) relaxation of [14, 16]. In order to provide an outline of these relaxations, we formulate
the problem of MAP estimation as an Integer Program (IP).
1.1 Integer Programming Formulation
We define a binary variable vector x of length nh. We denote the element of x at index a·h + i as x_{a;i}, where v_a ∈ v and l_i ∈ l. These elements x_{a;i} specify a labelling f such that x_{a;i} = 1 if f(a) = i and x_{a;i} = −1 otherwise. We say that the variable x_{a;i} belongs to variable v_a since it defines which label v_a does (or does not) take. Let X = xx^T. We refer to the (a·h + i, b·h + j)-th element of the matrix X as X_{ab;ij}, where v_a, v_b ∈ v and l_i, l_j ∈ l. Clearly, the following IP finds the labelling with the minimum energy, i.e. it is equivalent to the MAP estimation problem:

IP: x* = arg min_x ∑_{v_a, l_i} θ^1_{a;i} (1 + x_{a;i})/2 + ∑_{(a,b) ∈ E, l_i, l_j} θ^2_{ab;ij} (1 + x_{a;i} + x_{b;j} + X_{ab;ij})/4

s.t. x ∈ {−1, 1}^{nh},   (1)
∑_{l_i ∈ l} x_{a;i} = 2 − h,   (2)
X = xx^T.   (3)
Constraints (1) and (3) specify that the variables x and X are binary such that Xab;ij = xa;i xb;j .
We will refer to them as the integer constraints. Constraint (2), which specifies that each variable
should be assigned only one label, is known as the uniqueness constraint. Note that one uniqueness
constraint is specified for each variable va . Solving the above IP is in general NP-hard. It is therefore
common practice to obtain an approximate solution using convex relaxations. We describe four such
convex relaxations below.
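Before turning to the relaxations, a short Python sketch (ours; the toy potentials are arbitrary) makes the {−1, 1} encoding concrete by evaluating the IP objective for a given labelling:

import numpy as np

def ip_energy(f, theta1, theta2, h):
    """IP objective for labelling f, with theta1[a, i] = theta^1_{a;i} and
    theta2[(a, b)] an h x h block holding theta^2_{ab;ij} for edge (a, b)."""
    n = len(f)
    x = -np.ones((n, h))
    x[np.arange(n), f] = 1.0                 # x_{a;i} = 1 iff f(a) = i; (2) holds
    X = np.einsum('ai,bj->aibj', x, x)       # X_{ab;ij} = x_{a;i} x_{b;j}; (3) holds
    e = np.sum(theta1 * (1 + x) / 2)         # unary terms
    for (a, b), th in theta2.items():        # pairwise terms: the bracket equals 1
        e += np.sum(th * (1 + x[a][:, None]  # only when both labels are selected
                          + x[b][None, :] + X[a, :, b, :]) / 4)
    return e

theta1 = np.array([[0.0, 1.0], [1.0, 0.0]])            # 2 variables, 2 labels
theta2 = {(0, 1): np.array([[0.0, 2.0], [2.0, 0.0]])}
print(ip_energy([0, 1], theta1, theta2, h=2))          # -> 2.0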
1.2 Linear Programming Relaxation
The LP relaxation (proposed by Schlesinger [20] for a special case and independently in [4, 12, 23] for the general case), which we call LP - S, is given as follows:

LP - S: x* = arg min_x ∑_{v_a, l_i} θ^1_{a;i} (1 + x_{a;i})/2 + ∑_{(a,b) ∈ E, l_i, l_j} θ^2_{ab;ij} (1 + x_{a;i} + x_{b;j} + X_{ab;ij})/4

s.t. x ∈ [−1, 1]^{nh}, X ∈ [−1, 1]^{nh×nh},   (4)
∑_{l_i ∈ l} x_{a;i} = 2 − h,   (5)
∑_{l_j ∈ l} X_{ab;ij} = (2 − h) x_{a;i},   (6)
X_{ab;ij} = X_{ba;ji},   (7)
1 + x_{a;i} + x_{b;j} + X_{ab;ij} ≥ 0.   (8)
In the LP - S relaxation only those elements X_{ab;ij} of X are used for which (a, b) ∈ E and l_i, l_j ∈ l. Unlike the IP, the feasibility region of the above problem is relaxed such that the variables x_{a;i} and X_{ab;ij} lie in the interval [−1, 1]. Further, the constraint (3) is replaced by equation (6), which is called the marginalization constraint [23]. One marginalization constraint is specified for each (a, b) ∈ E and l_i ∈ l. Constraint (7) specifies that X is symmetric. Constraint (8) ensures that θ^2_{ab;ij} is multiplied by a number between 0 and 1 in the objective function. These constraints (7) and (8) are defined for all (a, b) ∈ E and l_i, l_j ∈ l. Note that the above constraints are not exhaustive, i.e. it is possible to specify other constraints for the problem of MAP estimation (as will be seen in the different relaxations described in the subsequent sections).
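As an illustration (our own sketch, using the cvxpy modeling library, which the paper does not prescribe), LP - S can be written with one X block per edge instead of a full nh × nh matrix:

import numpy as np
import cvxpy as cp

def lp_s(theta1, theta2, h):
    """LP-S relaxation (4)-(8); theta1[a, i] are unaries, theta2[(a, b)] is the
    h x h block of pairwise potentials theta^2_{ab;ij} for edge (a, b)."""
    n = theta1.shape[0]
    x = cp.Variable((n, h))
    ones = np.ones((1, h))
    cons = [x >= -1, x <= 1, cp.sum(x, axis=1) == 2 - h]       # (4), (5)
    obj = cp.sum(cp.multiply(theta1, (1 + x) / 2))
    for (a, b), th in theta2.items():
        Xab = cp.Variable((h, h))                              # X_{ab;ij}
        xa = cp.reshape(x[a, :], (h, 1)) @ ones                # rows carry x_{a;i}
        xb = np.ones((h, 1)) @ cp.reshape(x[b, :], (1, h))     # cols carry x_{b;j}
        cons += [Xab >= -1, Xab <= 1,                          # (4)
                 cp.sum(Xab, axis=1) == (2 - h) * x[a, :],     # (6) for edge (a, b)
                 cp.sum(Xab, axis=0) == (2 - h) * x[b, :],     # (6) for (b, a), via (7)
                 1 + xa + xb + Xab >= 0]                       # (8)
        obj += cp.sum(cp.multiply(th, (1 + xa + xb + Xab) / 4))
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.value, x.value

val, _ = lp_s(np.array([[0.0, 1.0], [1.0, 0.0]]),
              {(0, 1): np.array([[0.0, 2.0], [2.0, 0.0]])}, h=2)
print(val)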
1.3 Quadratic Programming Relaxation
We now describe the QP relaxation for the MAP estimation IP which was proposed by Ravikumar and Lafferty [18]. To this end, it is convenient to reformulate the objective function of the IP using a vector of unary potentials of length nh (denoted by θ̂^1) and a matrix of pairwise potentials of size nh × nh (denoted by θ̂^2). The element of the unary potential vector at index (a·h + i) is defined as

θ̂^1_{a;i} = θ^1_{a;i} − ∑_{v_c ∈ v} ∑_{l_k ∈ l} |θ^2_{ac;ik}|,

where v_a ∈ v and l_i ∈ l. The (a·h + i, b·h + j)-th element of the pairwise potential matrix θ̂^2 is defined such that

θ̂^2_{ab;ij} = ∑_{v_c ∈ v} ∑_{l_k ∈ l} |θ^2_{ac;ik}|  if a = b, i = j, and θ̂^2_{ab;ij} = θ^2_{ab;ij} otherwise,   (9)

where v_a, v_b ∈ v and l_i, l_j ∈ l. In other words, the potentials are modified by defining a pairwise potential θ̂^2_{aa;ii} and subtracting the value of that potential from the corresponding unary potential θ^1_{a;i}. The advantage of this reformulation is that the matrix θ̂^2 is guaranteed to be positive semidefinite, i.e. θ̂^2 ⪰ 0. Using the fact that for x_{a;i} ∈ {−1, 1}, (1 + x_{a;i})/2 = ((1 + x_{a;i})/2)^2, it can be shown that the following is equivalent to the MAP estimation problem [18]:

QP - RL: x* = arg min_x ((1 + x)/2)^T θ̂^1 + ((1 + x)/2)^T θ̂^2 ((1 + x)/2),   (10)

s.t. ∑_{l_i ∈ l} x_{a;i} = 2 − h, ∀v_a ∈ v,   (11)
x ∈ {−1, 1}^{nh},   (12)

where 1 is a vector of appropriate dimensions whose elements are all equal to 1. By relaxing the feasibility region of the above problem to x ∈ [−1, 1]^{nh}, the resulting QP can be solved in polynomial time since θ̂^2 ⪰ 0 (i.e. the relaxation of the QP (10)-(12) is convex). We call the above relaxation QP - RL. Note that in [18], the QP - RL relaxation was described using the variable y = (1 + x)/2. However, the above formulation can easily be shown to be equivalent to the one presented in [18].
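The reformulation (9) is mechanical; a sketch (ours, storing θ^2 densely as an nh × nh matrix whose diagonal is zero, since a CRF has no θ^2_{aa;ii} terms) builds the new potentials and checks positive semidefiniteness numerically:

import numpy as np

def qp_rl_potentials(theta1, theta2):
    """Build hat{theta}^1 and hat{theta}^2 of (9). theta1 is n x h; theta2 is the
    nh x nh pairwise matrix (symmetric, zero diagonal). hat{theta}^2 is diagonally
    dominant by construction, hence positive semidefinite."""
    abs_rowsum = np.sum(np.abs(theta2), axis=1)   # sum_{c,k} |theta^2_{ac;ik}|
    hat1 = theta1.reshape(-1) - abs_rowsum
    hat2 = theta2.copy()
    np.fill_diagonal(hat2, abs_rowsum)
    return hat1, hat2

def qp_rl_objective(x, hat1, hat2):
    """Objective (10) at any x in [-1, 1]^{nh}."""
    y = (1 + x) / 2
    return y @ hat1 + y @ hat2 @ y

rng = np.random.default_rng(2)
n, h = 3, 2
T2 = rng.standard_normal((n * h, n * h))
T2 = (T2 + T2.T) / 2                              # generic symmetric test matrix
np.fill_diagonal(T2, 0.0)
h1, h2 = qp_rl_potentials(rng.standard_normal((n, h)), T2)
print(np.linalg.eigvalsh(h2).min() >= -1e-9)      # True: hat{theta}^2 is PSD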
1.4 Semidefinite Programming Relaxation
The SDP relaxation of the MAP estimation problem replaces the non-convex constraint X = xx^T by the convex semidefinite constraint X − xx^T ⪰ 0 [6, 15], which can be expressed as

[ 1   x^T ]
[ x   X   ]  ⪰ 0,   (13)

using Schur's complement [2]. Further, like LP - S, it relaxes the integer constraints by allowing the variables x_{a;i} and X_{ab;ij} to lie in the interval [−1, 1] with X_{aa;ii} = 1 for all v_a ∈ v, l_i ∈ l. The SDP relaxation is a well-studied approach which provides accurate solutions for the MAP estimation problem (e.g. see [25]). However, due to its computational inefficiency, it is not practically useful for large scale problems with nh > 1000. See, however, [17, 19, 22].
1.5 Second Order Cone Programming Relaxation
We now describe the SOCP relaxation that was proposed by Muramatsu and Suzuki [16] for the MAXCUT problem (i.e. MAP estimation with h = 2) and later extended for a general label set [14]. This relaxation, which we call SOCP - MS, is based on the technique of Kim and Kojima [10], who observed that the SDP constraint can be further relaxed to second order cone (SOC) constraints. For this purpose, it employs a set of matrices S = {C^k | C^k = U^k (U^k)^T ⪰ 0, k = 1, 2, …, n_C}. Using the fact that the Frobenius dot product of two semidefinite matrices is non-negative, we get

‖(U^k)^T x‖^2 ≤ C^k • X,  k = 1, …, n_C.   (14)

Each of the above SOC constraints may involve some or all variables x_{a;i} and X_{ab;ij}. For example, if C^k_{ab;ij} = 0, then the k-th SOC constraint will not involve X_{ab;ij} (since its coefficient will be 0).

In order to describe the SOCP - MS relaxation, we consider a pair of neighbouring variables v_a and v_b, i.e. (a, b) ∈ E, and a pair of labels l_i and l_j. These two pairs define the following variables: x_{a;i}, x_{b;j}, X_{aa;ii} = X_{bb;jj} = 1 and X_{ab;ij} = X_{ba;ji} (since X is symmetric). For each such pair of variables and labels, the SOCP - MS relaxation specifies two SOC constraints which involve only the above variables [14, 16]. In order to specify the exact form of these SOC constraints, we need the following definitions.

Using the variables v_a and v_b (where (a, b) ∈ E) and labels l_i and l_j, we define the submatrices x^{(a,b,i,j)} and X^{(a,b,i,j)} of x and X respectively as:

x^{(a,b,i,j)} = [ x_{a;i} ; x_{b;j} ],  X^{(a,b,i,j)} = [ X_{aa;ii}  X_{ab;ij} ; X_{ba;ji}  X_{bb;jj} ].   (15)

The SOCP - MS relaxation specifies SOC constraints of the form (14) for all pairs of neighbouring variables (a, b) ∈ E and labels l_i, l_j ∈ l. To this end, it uses the following two matrices:

C^1_{MS} = [ 1  1 ; 1  1 ],  C^2_{MS} = [ 1  −1 ; −1  1 ].

Hence, in the SOCP - MS formulation, the MAP estimation IP is relaxed to

SOCP - MS: x* = arg min_x ∑_{v_a, l_i} θ^1_{a;i} (1 + x_{a;i})/2 + ∑_{(a,b) ∈ E, l_i, l_j} θ^2_{ab;ij} (1 + x_{a;i} + x_{b;j} + X_{ab;ij})/4

s.t. x ∈ [−1, 1]^{nh}, X ∈ [−1, 1]^{nh×nh},   (16)
∑_{l_i ∈ l} x_{a;i} = 2 − h,   (17)
(x_{a;i} − x_{b;j})^2 ≤ 2 − 2X_{ab;ij},   (18)
(x_{a;i} + x_{b;j})^2 ≤ 2 + 2X_{ab;ij},   (19)
X_{ab;ij} = X_{ba;ji}.   (20)

We refer the reader to [14, 16] for details.
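In code, the two MS constraints per edge and label pair are easy to state; a sketch using cvxpy (our tooling choice, not the paper's) follows:

import cvxpy as cp

def socp_ms_constraints(x, Xab, a, b):
    """Constraints (18)-(19) for one edge (a, b): for every label pair (i, j),
    (x_{a;i} - x_{b;j})^2 <= 2 - 2 X_{ab;ij} and
    (x_{a;i} + x_{b;j})^2 <= 2 + 2 X_{ab;ij}."""
    h = Xab.shape[0]
    cons = []
    for i in range(h):
        for j in range(h):
            cons.append(cp.square(x[a, i] - x[b, j]) <= 2 - 2 * Xab[i, j])
            cons.append(cp.square(x[a, i] + x[b, j]) <= 2 + 2 * Xab[i, j])
    return cons

x = cp.Variable((2, 2))
Xab = cp.Variable((2, 2))
print(len(socp_ms_constraints(x, Xab, 0, 1)))   # 2 constraints per label pair -> 8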
2 Comparing Relaxations
In order to compare the relaxations described above, we require the following definitions. We say that a relaxation A dominates the relaxation B (alternatively, B is dominated by A) if, and only if,

min_{(x,X) ∈ F(A)} e(x, X; θ) ≥ min_{(x,X) ∈ F(B)} e(x, X; θ),  ∀θ,   (21)

where F(A) and F(B) are the feasibility regions of the relaxations A and B respectively. The term e(x, X; θ) denotes the value of the objective function at (x, X) (i.e. the energy of the possibly fractional labelling (x, X)) for the MAP estimation problem defined over the CRF with parameter θ. Thus the optimal value of the dominating relaxation A is always greater than or equal to the optimal value of relaxation B. We note here that the concept of domination has been used previously in [4] (to compare LP - S with the linear programming relaxation in [11]).

Relaxations A and B are said to be equivalent if A dominates B and B dominates A, i.e. their optimal values are equal to each other for all CRFs. A relaxation A is said to strictly dominate relaxation B if A dominates B but B does not dominate A. In other words, there exists at least one CRF with parameter θ such that

min_{(x,X) ∈ F(A)} e(x, X; θ) > min_{(x,X) ∈ F(B)} e(x, X; θ).   (22)
Note that, by definition, the optimal value of any relaxation would always be less than or equal to
the energy of the optimal (i.e. the MAP) labelling. Hence, the optimal value of a strictly dominating
relaxation A is closer to the optimal value of the MAP estimation IP compared to that of relaxation
B. In other words, A provides a tighter lower bound for MAP estimation than B.
Our Results: We prove that LP - S strictly dominates SOCP - MS (see section 3). Further, in section 4, we show that QP - RL is equivalent to SOCP - MS. This implies that LP - S strictly dominates the
QP - RL relaxation. In section 5 we generalize the above results by proving that a large class of SOCP
(and equivalent QP) relaxations is dominated by LP - S. Based on these results, we propose a novel
set of constraints which result in SOCP relaxations that dominate LP - S, QP - RL and SOCP - MS. These
relaxations introduce SOC constraints on cycles and cliques formed by the neighbourhood relationship of the CRF. Note that we will only provide the statement of the results here due to page limit.
All the proofs are described in [13].
3 LP-S vs. SOCP-MS
We now show that for the MAP estimation problem the linear constraints of LP - S are stronger than the SOCP - MS constraints. In other words, the feasibility region of LP - S is a strict subset of the feasibility region of SOCP - MS (i.e. F(LP - S) ⊂ F(SOCP - MS)). This in turn would allow us to prove the following theorem.

Theorem 1: The LP - S relaxation strictly dominates the SOCP - MS relaxation.
4 QP-RL vs. SOCP-MS
We now prove that QP - RL and SOCP - MS are equivalent (i.e. their optimal values are equal for MAP estimation problems defined over all CRFs). Specifically, we consider a vector x which lies in the feasibility regions of the QP - RL and SOCP - MS relaxations, i.e. x ∈ [−1, 1]^{nh}. For this vector, we show that the values of the objective functions of the QP - RL and SOCP - MS relaxations are equal. This implies that if x* is an optimal solution of QP - RL for some CRF with parameter θ, then there exists an optimal solution (x*, X*) of the SOCP - MS relaxation. Further, if e_Q and e_S are the optimal values of the objective functions obtained using the QP - RL and SOCP - MS relaxations, then e_Q = e_S.

Theorem 2: The QP - RL relaxation and the SOCP - MS relaxation are equivalent.

Theorems 1 and 2 prove that the LP - S relaxation strictly dominates the QP - RL and SOCP - MS relaxations. A natural question that now arises is whether the additive bound of QP - RL (proved in [18]) is applicable to the LP - S and SOCP - MS relaxations. Our next theorem answers this question in the affirmative.

Theorem 3: Using the rounding scheme of [18], LP - S and SOCP - MS provide the same additive bound as the QP - RL relaxation, i.e. S/4 where S = ∑_{(a,b) ∈ E} ∑_{l_i, l_j ∈ l} |θ^2_{ab;ij}| (i.e. the sum of the absolute values of all pairwise potentials). Furthermore, this bound is tight.

The above bound was proved for the case of binary variables (i.e. h = 2) in [8] using a slightly different rounding scheme.
We now generalize the results of Theorem 1 by defining a large class of SOCP relaxations which
is dominated by LP - S. Specifically, we consider the SOCP relaxations which relax the non-convex
constraint X = xx? using a set of second order cone (SOC) constraints of the form
||(Uk )? x|| ? Ck ? X, k = 1, ? ? ? , nC
where C = U (U ) 0, for all k = 1, ? ? ? , nC .
k
k
(23)
k ?
Note that each SOCP relaxation belonging to this class would define an equivalent QP relaxation
(similar to the equivalent QP - RL relaxation defined by the SOCP - MS relaxation). Hence, all these QP
relaxations will also be dominated by the LP - S relaxation. Before we begin to describe our results
in detail, we need to set up some notation as follows.
(a)
(b)
(c)
Figure 1: (a) An example CRF defined over four variables which form a cycle. Note that the observed
nodes are not shown for the sake of clarity of the image. (b) The set E k specified by the matrix Ck
shown in equation (25), i.e. E k = {(a, b), (b, c), (c, d)}. (c) The set V k = {a, b, c, d}. See text for
definitions of these sets.
Notation: We consider an SOC constraint which is of the form described in equation (23), i.e.

‖(U^k)^T x‖^2 ≤ C^k • X,   (24)

where k ∈ {1, …, n_C}. In order to help the reader understand the notation better, we use an example CRF shown in Fig. 1(a). This CRF is defined over four variables v = {v_a, v_b, v_c, v_d} (connected to form a cycle of length 4), each of which takes a label from the set l = {l_0, l_1}. For this CRF we specify a constraint using a matrix C^k ⪰ 0 which is 0 everywhere, except for the following 4 × 4 submatrix:

[ C^k_{aa;00}  C^k_{ab;00}  C^k_{ac;00}  C^k_{ad;00} ]   [ 2 1 1 0 ]
[ C^k_{ba;00}  C^k_{bb;00}  C^k_{bc;00}  C^k_{bd;00} ] = [ 1 2 1 1 ]   (25)
[ C^k_{ca;00}  C^k_{cb;00}  C^k_{cc;00}  C^k_{cd;00} ]   [ 1 1 2 1 ]
[ C^k_{da;00}  C^k_{db;00}  C^k_{dc;00}  C^k_{dd;00} ]   [ 0 1 1 2 ]
Using the SOC constraint shown in equation (24) we define the following two sets: (i) The set E^k is defined such that (a, b) ∈ E^k if, and only if, it satisfies the following conditions:

(a, b) ∈ E,   (26)
∃ l_i, l_j ∈ l such that C^k_{ab;ij} ≠ 0.   (27)

Recall that E specifies the neighbourhood relationship for the given CRF. In other words, E^k is the subset of the edges in the graphical model of the CRF such that C^k specifies constraints for the random variables corresponding to those edges. For the example CRF (shown in Fig. 1(a)) and C^k matrix (in equation (25)), the set E^k obtained is shown in Fig. 1(b). (ii) The set V^k is defined as a ∈ V^k if, and only if, there exists a v_b ∈ v such that (a, b) ∈ E^k. In other words, V^k is the subset of hidden nodes in the graphical model of the CRF such that C^k specifies constraints for the random variables corresponding to those hidden nodes. Fig. 1(c) shows the set V^k for our example SOC constraint.

We also define a weighted graph G^k = (V^k, E^k) whose vertices are specified by the set V^k and whose edges are specified by the set E^k. The weight of an edge (a, b) ∈ E^k is given by w(a, b). Recall that w(a, b) specifies the strength of the pairwise relationship between two neighbouring variables v_a and v_b. Thus, for our example SOC constraint, the vertices of this graph are given in Fig. 1(c) while the edges are shown in Fig. 1(b). This graph can be viewed as a subgraph of the graphical model representation for the given CRF.
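A small sketch (our own, following the indexing of x at a·h + i) recovers E^k and V^k from a constraint matrix C^k, checked on the example of Fig. 1 and equation (25):

import numpy as np

def constraint_graph(Ck, edges, h):
    """Recover E^k via (26)-(27) and V^k from an nh x nh constraint matrix C^k.
    `edges` is the CRF neighbourhood E; variable a owns rows a*h .. a*h + h - 1."""
    Ek = []
    for (a, b) in edges:
        block = Ck[a * h:(a + 1) * h, b * h:(b + 1) * h]
        if np.any(block != 0):        # exists (l_i, l_j) with C^k_{ab;ij} != 0
            Ek.append((a, b))
    Vk = sorted({v for e in Ek for v in e})
    return Ek, Vk

# The 4-variable cycle of Fig. 1 with labels l = {l_0, l_1}; equation (25) only
# sets the (l_0, l_0) entry of each block.
n, h = 4, 2
Ck = np.zeros((n * h, n * h))
sub = np.array([[2, 1, 1, 0],
                [1, 2, 1, 1],
                [1, 1, 2, 1],
                [0, 1, 1, 2]])
for p in range(n):
    for q in range(n):
        Ck[p * h, q * h] = sub[p, q]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # the cycle a-b-c-d
print(constraint_graph(Ck, edges, h))             # ([(0, 1), (1, 2), (2, 3)], [0, 1, 2, 3])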
Theorem 4: SOCP relaxations (and the equivalent QP relaxations) which define constraints only using graphs G^k = (V^k, E^k) which form (arbitrarily large) trees are dominated by the LP - S relaxation.

We note that the above theorem can be proved using the results of [24] on moment constraints (which imply that LP - S provides the exact solution for MAP estimation problems defined over tree-structured random fields). However, our alternative proof presented in [13] allows us to generalize the results of Theorem 4 for certain cycles as follows.

Theorem 5: When d(i, j) ≥ 0 for all l_i, l_j ∈ l, the SOCP relaxations which define constraints only using non-overlapping graphs G^k which form (arbitrarily large) even cycles with all positive or all negative weights are dominated by the LP - S relaxation.
The above theorem can be proved for cycles of any length whose weights are all negative by a similar
construction. Further, it also holds true for odd cycles (i.e. cycles of odd number of variables) which
have only one positive or only one negative weight. However, as will be seen in the next section,
unlike trees it is not possible to extend these results for any general cycle.
6 Some Useful SOC Constraints
We now describe two SOCP relaxations which include all the marginalization constraints specified
in LP - S. Note that the marginalization constraints can be incorporated within the SOCP framework
but not in the QP framework.
6.1 The SOCP-C Relaxation
The SOCP - C relaxation (where C denotes cycles) defines second order cone (SOC) constraints using positive semidefinite matrices C such that the graphs G (defined in section 5) form cycles. Let the variables corresponding to the vertices of one such cycle G of length c be denoted as v_C = {v_b | b ∈ {a_1, a_2, …, a_c}}. Further, let l_C = {l_j | j ∈ {i_1, i_2, …, i_c}} ⊆ l be a set of labels for the variables v_C. In addition to the marginalization constraints, the SOCP - C relaxation specifies the following SOC constraint:

‖U^T x‖^2 ≤ C • X,   (28)

such that the graph G defined by the above constraint forms a cycle. The matrix C is 0 everywhere except the following elements:

C_{a_k a_l; i_k i_l} = λ_c if k = l, and C_{a_k a_l; i_k i_l} = D_c(k, l) otherwise.   (29)

Here D_c is a c × c matrix which is defined as follows:

D_c(k, l) = 1 if |k − l| = 1, (−1)^{c−1} if |k − l| = c − 1, and 0 otherwise,   (30)

and λ_c is the absolute value of the smallest eigenvalue of D_c. In other words, the submatrix of C defined by v_C and l_C has diagonal elements equal to λ_c and off-diagonal elements equal to the elements of D_c. Clearly, C = U U^T ⪰ 0 since its only non-zero submatrix λ_c I + D_c (where I is a c × c identity matrix) is positive semidefinite. This allows us to define a valid SOC constraint as shown in inequality (28). We choose to define the SOC constraint (28) only for those sets of labels l_C which satisfy the following:

∑_{(a_k, a_l) ∈ E} D_c(k, l) θ^2_{a_k a_l; i_k i_l} ≤ ∑_{(a_k, a_l) ∈ E} D_c(k, l) θ^2_{a_k a_l; j_k j_l},  ∀{j_1, j_2, …, j_c}.   (31)

Note that this choice is motivated by the fact that the variables X_{a_k a_l; i_k i_l} corresponding to these sets v_C and l_C are assigned trivial values by the LP - S relaxation in the presence of non-submodular terms.
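The pieces of the SOCP - C constraint are easy to compute numerically; a short sketch (ours) builds D_c from (30) together with λ_c, and verifies that λ_c I + D_c is positive semidefinite:

import numpy as np

def cycle_matrix(c):
    """D_c of (30) together with lambda_c, the absolute value of its smallest
    eigenvalue, so that lambda_c * I + D_c is positive semidefinite."""
    D = np.zeros((c, c))
    for k in range(c):
        for l in range(c):
            if abs(k - l) == 1:
                D[k, l] = 1.0
            elif abs(k - l) == c - 1:
                D[k, l] = (-1.0) ** (c - 1)
    lam = abs(np.linalg.eigvalsh(D).min())
    return D, lam

D, lam = cycle_matrix(4)
print(lam, np.linalg.eigvalsh(lam * np.eye(4) + D).min() >= -1e-9)   # PSD check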
Since marginalization constraints are included in the SOCP - C relaxation, the value of the objective function obtained by solving this relaxation would at least be equal to the value obtained by the LP - S relaxation (i.e. SOCP - C dominates LP - S; see Case II in section 2). We can further show that in the case where |l| = 2 and the constraint (28) is defined over a frustrated cycle (i.e. a cycle with an odd number of non-submodular terms), SOCP - C strictly dominates LP - S. One such example is given in [13]. Note that if the given CRF contains no frustrated cycle, then it can be solved exactly using the method described in [7].

The constraint defined in equation (28) is similar to the (linear) cycle inequality constraints [1], which are given by

∑_{k,l} D_c(k, l) X_{a_k a_l; i_k i_l} ≥ 2 − c.   (32)

We believe that the feasibility region defined by cycle inequalities is a strict subset of the feasibility region defined by equation (28). In other words, a relaxation defined by adding cycle inequalities to LP - S would strictly dominate SOCP - C. We are not aware of a formal proof for this. We now describe the SOCP - Q relaxation.
6.2 The SOCP-Q Relaxation
In the previous section we saw that LP - S dominates SOCP relaxations whose constraints are defined on trees. However, the SOCP - C relaxation, which defines its constraints using cycles, strictly dominates LP - S. This raises the question whether matrices C which result in more complicated graphs G would provide an even better relaxation for the MAP estimation problem. In this section, we answer this question in the affirmative. To this end, we define an SOCP relaxation which specifies constraints such that the resulting graph G forms a clique. We denote this relaxation by SOCP - Q (where Q indicates cliques).

The SOCP - Q relaxation contains the marginalization constraints and the cycle inequalities (defined above). In addition, it also defines SOC constraints on graphs G which form a clique. We denote the variables corresponding to the vertices of clique G as v_Q = {v_b | b ∈ {a_1, a_2, …, a_q}}. Let l_Q = {l_j | j ∈ {i_1, i_2, …, i_q}} be a set of labels for these variables v_Q. Given this set of variables v_Q and labels l_Q, we define an SOC constraint using a matrix C of size nh × nh which is zero everywhere except for the elements C_{a_k a_l; i_k i_l} = 1. Clearly, C is a rank 1 matrix with eigenvalue 1 and eigenvector u which is zero everywhere except u_{a_k; i_k} = 1, where v_{a_k} ∈ v_Q and l_{i_k} ∈ l_Q. This implies that C ⪰ 0, which enables us to obtain the following SOC constraint:

( ∑_k x_{a_k; i_k} )^2 ≤ q + ∑_{k ≠ l} X_{a_k a_l; i_k i_l}.   (33)

We choose to specify the above constraint only for the sets of labels l_Q which satisfy the following condition:

∑_{(a_k, a_l) ∈ E} θ^2_{a_k a_l; i_k i_l} ≤ ∑_{(a_k, a_l) ∈ E} θ^2_{a_k a_l; j_k j_l},  ∀{j_1, j_2, …, j_q}.   (34)

Again, this choice is motivated by the fact that the variables X_{a_k a_l; i_k i_l} corresponding to these sets v_Q and l_Q are assigned trivial values by the LP - S relaxation in the presence of non-submodular pairwise potentials.

When the clique contains a frustrated cycle, it can be shown that SOCP - Q dominates the LP - S relaxation (similar to SOCP - C). Further, using a counter-example, it can be proved that the feasibility region given by cycle inequalities is not a subset of the feasibility region defined by constraint (33). One such example is given in [13].
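A sketch (ours) of the clique constraint (33) in cvxpy, taking the q scalar variables x_{a_k; i_k} and the off-diagonal entries X_{a_k a_l; i_k i_l} as inputs:

import cvxpy as cp

def clique_constraint(x_vars, X_pairs, q):
    """Constraint (33) for a clique over q (variable, label) pairs:
    (sum_k x_k)^2 <= q + sum_{k != l} X_{kl}.
    x_vars : the q scalar variables x_{a_k; i_k}
    X_pairs: dict mapping (k, l) with k < l to the scalar X_{a_k a_l; i_k i_l}"""
    s = cp.sum(cp.hstack(x_vars))
    # X is symmetric, so each unordered pair contributes twice to the sum.
    off_diag = cp.sum(cp.hstack([2 * X_pairs[k, l] for (k, l) in X_pairs]))
    return [cp.square(s) <= q + off_diag]

xs = [cp.Variable() for _ in range(3)]
Xp = {(k, l): cp.Variable() for k in range(3) for l in range(k + 1, 3)}
print(len(clique_constraint(xs, Xp, q=3)))   # one SOC-representable constraint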
7 Discussion
We presented an analysis of approximate algorithms for MAP estimation which are based on convex
relaxations. The surprising result of our work is that despite the flexibility in the form of the objective
function/constraints offered by QP and SOCP, the LP - S relaxation dominates a large class of QP
and SOCP relaxations. It appears that the authors who have previously used SOCP relaxations in
the Combinatorial Optimization literature [16] and those who have reported QP relaxation in the
Machine Learning literature [18] were unaware of this result. We also proposed two new SOCP
relaxations (SOCP - C and SOCP - Q) and presented some examples to prove that they provide a better
approximation than LP - S. An interesting direction for future research would be to determine the best
SOC constraints for a given MAP estimation problem (e.g. with truncated linear pairwise potentials).
Acknowledgments: We thank Pradeep Ravikumar and John Lafferty for careful reading of the manuscript and
for pointing out an error in our description of the SOCP - MS relaxation.
References
[1] F. Barahona and A. Mahjoub. On the cut polytope. Mathematical Programming, 36:157-173, 1986.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11):1222-1239, 2001.
[4] C. Chekuri, S. Khanna, J. Naor, and L. Zosin. Approximation algorithms for the metric labelling problem via a new linear programming formulation. In SODA, 2001.
[5] E. Dalhaus, D. Johnson, C. Papadimitriou, P. Seymour, and M. Yannakakis. The complexity of multiterminal cuts. SICOMP, 23(4):864-894, 1994.
[6] M. Goemans and D. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115-1145, 1995.
[7] P. Hammer, P. Hansen, and B. Simeone. Roof duality, complementation and persistency in quadratic 0-1 optimization. Mathematical Programming, 28:121-155, 1984.
[8] P. Hammer and B. Kalantari. A bound on the roof duality gap. Technical Report RRR 46, Rutgers Center for Operations Research, Rutgers University, 1987.
[9] A. Karzanov. Minimum 0-extension of graph metrics. Euro. J. of Combinatorics, 19:71-101, 1998.
[10] S. Kim and M. Kojima. Second-order cone programming relaxation of nonconvex quadratic optimization problems. Technical report, Tokyo Institute of Technology, 2000.
[11] J. Kleinberg and E. Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. In STOC, pages 14-23, 1999.
[12] A. Koster, C. van Hoesel, and A. Kolen. The partial constraint satisfaction problem: Facets and lifting theorems. Operations Research Letters, 23(3-5):89-97, 1998.
[13] M. P. Kumar, V. Kolmogorov, and P. H. S. Torr. An analysis of convex relaxations for MAP estimation. Technical report, Oxford Brookes University, 2007. Available at http://cms.brookes.ac.uk/staff/PawanMudigonda/.
[14] M. P. Kumar, P. H. S. Torr, and A. Zisserman. Solving Markov random fields using second order cone programming relaxations. In CVPR, volume I, pages 1045-1052, 2006.
[15] J. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal of Optimization, 11:796-817, 2001.
[16] M. Muramatsu and T. Suzuki. A new second-order cone programming relaxation for max-cut problems. Journal of Operations Research of Japan, 43:164-177, 2003.
[17] C. Olsson, A. Eriksson, and F. Kahl. Solving large scale binary quadratic problems: Spectral methods vs. semidefinite programming. In CVPR, pages 1-8, 2007.
[18] P. Ravikumar and J. Lafferty. Quadratic programming relaxations for metric labelling and Markov random field MAP estimation. In ICML, 2006.
[19] C. Schellewald and C. Schnorr. Subgraph matching with semidefinite programming. In IWCIA, 2003.
[20] M. Schlesinger. Sintaksicheskiy analiz dvumernykh zritelnikh singnalov v usloviyakh pomekh (Syntactic analysis of two-dimensional visual signals in noisy conditions). Kibernetika, 4:113-130, 1976.
[21] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields. In ECCV, pages II: 16-29, 2006.
[22] P. H. S. Torr. Solving Markov random fields using semidefinite programming. In AISTATS, 2003.
[23] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on trees: Message passing and linear programming. IEEE Trans. on Information Theory, 51(11):3697-3717, 2005.
[24] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Technical Report 649, University of California, Berkeley, 2003.
[25] M. Wainwright and M. Jordan. Treewidth-based conditions for exactness of the Sherali-Adams and Lasserre relaxations. Technical Report 671, University of California, Berkeley, 2004.
2,618 | 3,374 | Boosting Algorithms for Maximizing the Soft Margin
Manfred K. Warmuth*
Dept. of Engineering
University of California
Santa Cruz, CA, U.S.A.
Karen Glocer
Dept. of Engineering
University of California
Santa Cruz, CA, U.S.A.
Gunnar Rätsch
Friedrich Miescher Laboratory
Max Planck Society
Tübingen, Germany
Abstract
We present a novel boosting algorithm, called SoftBoost, designed for sets of binary labeled examples that are not necessarily separable by convex combinations of base hypotheses. Our algorithm achieves robustness by capping the distributions on the examples. Our update of the distribution is motivated by minimizing a relative entropy subject to the capping constraints and constraints on the edges of the obtained base hypotheses. The capping constraints imply a soft margin in the dual optimization problem. Our algorithm produces a convex combination of hypotheses whose soft margin is within δ of its maximum. We employ relative entropy projection methods to prove an O(ln N / δ^2) iteration bound for our algorithm, where N is the number of examples.

We compare our algorithm with other approaches including LPBoost, BrownBoost, and SmoothBoost. We show that there exist cases where the number of iterations required by LPBoost grows linearly in N instead of the logarithmic growth for SoftBoost. In simulation studies we show that our algorithm converges about as fast as LPBoost, faster than BrownBoost, and much faster than SmoothBoost. In a benchmark comparison we illustrate the competitiveness of our approach.
1 Introduction
Boosting methods have been used with great success in many applications like OCR, text classification, natural language processing, drug discovery, and computational biology [13]. For AdaBoost
[7] it was frequently observed that the generalization error of the combined hypotheses kept decreasing after the training error had already reached zero [19]. This sparked a series of theoretical
studies trying to understand the underlying principles that govern the behavior of ensemble methods
[19, 1]. It became apparent that some of the power of ensemble methods lies in the fact that they
tend to increase the margin of the training examples. This was consistent with the observation that
AdaBoost works well on low-noise problems, such as digit recognition tasks, but not as well on tasks
with high noise. On such tasks, better generalization can be achieved by not enforcing a large margin
on all training points. This experimental observation was supported by the study of [19], where the
generalization error of ensemble methods was bounded by the sum of two terms: the fraction of
training points which have a margin smaller than some value ? plus a complexity term that depends
on the base hypothesis class and ?. While this worst-case bound can only capture part of what is
going on in practice, it nevertheless suggests that in some cases it pays to allow some points to have
small margin or be misclassified if this leads to a larger overall margin on the remaining points.
To cope with this problem, it was necessary to construct variants of AdaBoost which trade off the
fraction of examples with margin at least ? with the size of the margin ?. This was typically done
by preventing the distribution maintained by the algorithm from concentrating too much on the
most difficult examples. This idea is implemented in many algorithms including AdaBoost with
soft margins [15], MadaBoost [5], ν-Arc [16, 14], SmoothBoost [21], LPBoost [4], and several
others (see references in [13]). For some of these algorithms, significant improvements were shown
compared to the original AdaBoost algorithm on high noise data.
* Supported by NSF grant CCR 9821087.
In parallel, there has been a significant interest in how the linear combination of hypotheses generated by AdaBoost is related to the maximum margin solution [1, 19, 4, 18, 17]. It was shown that
AdaBoost generates a combined hypothesis with a large margin, but not necessarily the maximum
hard margin [15, 18]. This observation motivated the development of many Boosting algorithms
that aim to maximize the margin [1, 8, 4, 17, 22, 18]. AdaBoost* [17] and TotalBoost [22] provably converge to the maximum hard margin within precision ν in 2 ln(N)/ν^2 iterations. The other algorithms have worse or no known convergence rates. However, such margin-maximizing algorithms
are of limited interest for a practitioner working with noisy real-world data sets, as overfitting is
even more problematic for such algorithms than for the original AdaBoost algorithm [1, 8].
In this work we combine these two lines of research into a single algorithm, called SoftBoost, that
for the first time implements the soft margin idea in a practical boosting algorithm. SoftBoost
finds in O(ln(N )/? 2 ) iterations a linear combination of base hypotheses whose soft margin is at
least the optimum soft margin minus ?. BrownBoost [6] does not always optimize the soft margin.
SmoothBoost and MadaBoost can be related to maximizing the soft margin, but while they have
known iterations bounds in terms of other criteria, it is unknown how quickly they converge to
the maximum soft margin. From a theoretical point of view the optimization problems underlying
SoftBoost as well as LPBoost are appealing, since they directly maximize the margin of a (typically
large) subset of the training data [16]. This quantity plays a crucial role in the generalization error
bounds [19].
Our new algorithm is most similar to LPBoost because its goal is also to optimize the soft margin.
The most important difference is that we use slightly relaxed constraints and a relative entropy to
the uniform distribution as the objective function. This leads to a distribution on the examples that
is closer to the uniform distribution. An important result of our work is to show that this strategy
may help to increase the convergence speed: We will give examples where LPBoost converges much
more slowly than our algorithm?linear versus logarithmic growth in N .
The paper is organized as follows: in Section 2 we introduce the notation and the basic optimization
problem. In Section 3 we discuss LPBoost and give a separable setting where N/2 iterations are
needed by LPBoost to achieve a hard margin within precision .99. In Section 4 we present our new
SoftBoost algorithm and prove its iteration bound. We provide an experimental comparison of the
algorithms on real and synthetic data in Section 5, and conclude with a discussion in Section 6.
2 Preliminaries
In the boosting setting, we are given a set of N labeled training examples (xn , yn ), n = 1 . . . N ,
where the instances xn are in some domain X and the labels yn ? ?1. Boosting algorithms maintain
a distribution d on the N examples, i.e. d lies in the N dimensional probability simplex P N . Intuitively, the hard to classify examples receive more weight. In each iteration, the algorithm gives the
current distribution to an oracle (a.k.a. base learning algorithm), which returns a new base hypothesis h : X ? [?1, 1]N with a certain guarantee of performance. This guarantee will be discussed at
the end of this section.
One measure of the performance of a base hypothesis h with respect to distribution d is its edge,
PN
?h = n=1 dn yn h(xn ). When the range of h is ?1 instead of the interval [-1,1], then the edge is
just an affine transformation of the weighted error ?h of hypothesis h: i.e. ?h (d) = 21 ? 21 ?h . A
hypothesis that predicts perfectly has edge ? = 1, a hypothesis that always predicts incorrectly has
edge ? = ?1, and a random hypothesis has edge ? ? 0. The higher the edge, the more useful is the
hypothesis for classifying the training examples. The edge of a set of hypotheses is defined as the
maximum edge of the set.
After a hypothesis is received, the algorithm must update its distribution d on the examples. Boosting algorithms (for the separable case) commonly update their distribution by placing a constraint
on the edge of most recent hypothesis. Such algorithms are called corrective [17]. In totally corrective updates, one constrains the distribution to have small edge with respect to all of the previous
hypotheses [11, 22]. The update developed in this paper is an adaptation of the totally corrective
update of [22] that handles the inseparable case. The final output of the boosting algorithm is always
PT
a convex combination of base hypotheses fw (xn ) = t=1 wt ht (xn ), where ht is the hypothesis
added at iteration t and wt is its coefficient. The margin of a labeled example (xn , yn ) is defined as
2
?n = yn fw (xn ). The (hard) margin of a set of examples is taken to be the minimum margin of the
set.
It is convenient to define an N -dimensional vector um that combines the base hypothesis hm with
m
the labels yn of the N examples: um
n := yn h (xn ). With this notation, the edge of the t-th
t
hypothesis becomes d ? u and the margin of the n-th example w.r.t. a convex combination w of the
Pt?1
first t ? 1 hypotheses is m=1 um
n wm .
For a given set of hypotheses {h1 , . . . , ht }, the following linear programming problem (1) optimizes
the minimum soft margin. The term ?soft? here refers to a relaxation of the margin constraint. We
now allow examples to lie below the margin but penalize them linearly via slack variables ?n . The
dual problem (2) minimizes the maximum edge when the distribution is capped with 1/?, where
? ? {1, . . . , N }:
1 XN
?t? (?) = min ?
(2)
??t (?) = max ? ?
?n
(1)
d,?
n=1
w,?,?
?
Xt
s.t. d ? um ? ?, for 1 ? m ? t,
s.t.
um
wm ? ? ? ?n , for 1 ? n ? N,
n
1
m=1
d ? P N , d ? 1.
?
w ? P t , ? ? 0.
By duality, ??t (?) = ?t? (?). Note that the relationship between capping and the hinge loss has
long been exploited by the SVM community [3, 20] and has also been used before for Boosting in
[16, 14]. In particular, it is known that ? in (1) is chosen such that N ? ? examples have margin at
least ?. This corresponds to ? active constraints in (2). The case ? = 1 is degenerate: there are no
capping constraints in (2) and this is equivalent to the hard margin case.1
Assumption on the weak learner We assume that for any distribution d ? ?1 1 on the examples,
the oracle returns a hypothesis h with edge at least g, for some fixed g. This means that for the
corresponding u vector, d ? u ? g. For binary valued features, this is equivalent to the assumption
that the base learner always returns a hypothesis with error at most 12 ? 12 g.
Adding a new constraint can only increase the value ?t? (?) of the minimization problem (2) and
therefore ?t? (?) is non-decreasing in t. It is natural to define ? ? (?) as the value of (2) w.r.t. the entire
hypothesis set from which the oracle can choose. Clearly ?t? (?) approaches ? ? (?) from below.
Also, the guarantee g of the oracle can be at most ? ? (?) because for the optimal distribution d? that
realizes ? ? (?), all hypotheses have edge at most ? ? (?). For computational reasons, g might however
be lower than ? ? (?) and in that case the optimum soft margin we can achieve is g.
3 LPBoost
In iteration t, the LPBoost algorithm [4] sends its current distribution dt?1 to the oracle and receives
a hypothesis ht that satisfies dt?1 ? ut ? g. It then updates its distribution to dt by solving the linear
programming problem (1) based on the t hypotheses received so far.
The goal of the boosting algorithms is to produce a convex combination of T hypotheses such that
?T (?) ? g ? ?. The simplest way to achieve this is to break when this condition is satisfied.
Although the guarantee g is typically not known, it is upper bounded by b
?t = min1?m?t dt?1 ? ut
and therefore LPBoost uses the more stringent stopping criterion ?t (?) ? b
?t ? ?.
To our knowledge, there is no known iteration bound for LPBoost even though it provably converges
to the ?-optimal solution of the optimization problem after it has seen all hypotheses [4, 10]. Empirically, the convergence speed depends on the linear programming optimizer, e.g. simplex or interior
point solver [22]. For the first time, we are able to establish a lower bound showing that, independent
of the optimizer, LPBoost can require ?(N ) iterations:
Theorem 1 There exists a case where LPBoost requires N/2 iterations to achieve a hard margin
that is within ? = .99 of the optimum hard margin.
Proof. Assume we are in the hard margin case (? = 1). The counterexample has N examples and
N
N
?
2 + 1 base hypothesis. After 2 iterations, the optimal value ?t (1) for the chosen hypotheses will
1
Please note that [20] have previously used the parameter ? with a slightly different meaning, namely ?/N
in our notation. We use an unnormalized version of ? denoting a number of examples instead of a fraction.
3
Algorithm 1 LPBoost with accuracy param. ? and capping parameter ?
1. Input: S = h(x1 , y1 ), . . . , (xN , yN )i, accuracy ?, capping parameter ? ? [1, N ].
2. Initialize: d0 to the uniform distribution and ?
b0 to 1.
3. Do for t = 1, . . .
(a) Send dt?1 to oracle and obtain hypothesis ht .
Set utn = ht (xn )yn and ?
bt = min{b
?t?1 , dt?1 ? ut }.
t?1
t
(Assume d
? u ? g, where edge guarantee g is unknown.)
(b) Update the distribution to any dt that solves the LP problem
(dt , ?t? ) = argmin ?
s.t. d ? um ? ?, for 1 ? m ? t; d ? P N , d ?
d,?
1
1.
?
(c) If ?t? ? ?
bt ? ? then set T = t and break.2
PT
4. Output: fw (x) = m=1 wm hm (x), where the coefficients wm maximize the soft
margin over the hypothesis set {h1 , . . . , hT } using the LP problem (1).
2
When g is known, then one can break already when ?t? (?) ? g ? ?.
still be close to ?1, whereas after the last hypothesis is added, this value is at least ?/2. Here ? is a
precision parameter that is an arbitrary small number.
1
2
3
4
5
Figure 1 shows the case n \ t
1
+1
?1 + 5?
?1 + 7?
?1 + 9?
?1 + ?
where N = 8 and T = 5,
2
+1
?1 + 5?
?1 + 7?
?1 + 9?
?1 + ?
but it is trivial to generalize
3
+1
?1 + 5?
?1 + 7?
?1 + 9?
?1 + ?
this example to any even N .
4
+1
?1
+
5?
?1
+
7?
?1
+
9?
?1 + ?
There are 8 examples/rows
5
?1 + 2?
+1
?1 + 7?
?1 + 9?
+1 ? ?
and the five columns are the
6
?1 + 3?
?1 + 4?
+1
?1 + 9?
+1 ? ?
t
u ?s of the five available base
7
?1 + 3?
?1 + 5?
?1 + 6?
+1
+1 ? ?
hypotheses. The examples
8
?1 + 3?
?1 + 5?
?1 + 7?
?1 + 8?
+1 ? ?
are separable because if we ?t? (1) ?1 + 2? ?1 + 4?
?1 + 6?
?1 + 8?
? ?/2
put half of the weight on the
t
Figure 1: The u vectors that are hard for LPBoost (for ? = 1).
first and last hypothesis, then
the margins of all examples are at least ?/2.
We assume that in each iteration the oracle will return the remaining hypothesis with maximum
edge. This will result in LPBoost choosing the hypotheses in order, and there will never be any ties.
The initial distribution d0 is uniform. At the end of iteration t (1 ? t ? N/2), the distribution dt
will focus all its weight on example N/2 + t, and the optimum mixture of the columns will put all
of its weight on the tth hypothesis that was just received. In other words the value will be the bolded
entries in Figure 1: ?1 + 2?t at the end of iteration t = 1, . . . , N/2. After N/2 iterations the value
?t? (1) of the underlying LP problem will still be close to ?1, because ? can be made arbitrary small.
We reasoned already that the value for all N/2 + 1 hypotheses will be positive. So if ? is small
enough, then after N/2 iterations LPBoost is still at least .99 away from the optimal solution.
Although the example set used in the above proof is linearly separable, we can modify it explicitly
to argue that capping the distribution on examples will not help in the sense that ?soft? LPBoost
with ? > 1 can still have linear iteration bounds. To negate the effect of capping, simply pad out
? = N ? examples, and after
the problem by duplicating all of the rows ? times. There will now be N
?
N
N
2 = 2? iterations, the value of the game is still close to ?1. This is not a claim that capping has no
value. It remains an important technique for making an algorithm more robust to noise. However, it
is not sufficient to improve the iteration bound of LPBoost from linear growth in N to logarithmic.
Another attempt might be to modify LPBoost so that at each iteration a base hypothesis is chosen
that increases the value of the optimization problem the most. Unfortunately we found similar ?(N )
counter examples to this heuristic (not shown). It is also easy to see that the algorithms related to
the below SoftBoost algorithm choose the last hypothesis after first and finish in just two iterations.
4
Algorithm 2 SoftBoost with accuracy param. ? and capping parameter ?
1. Input: S = h(x1 , y1 ), . . . , (xN , yN )i, desired accuracy ?, and capping parameter
? ? [1, N ].
2. Initialize: d0 to the uniform distribution and ?
b0 to 1.
3. Do for t = 1, . . .
(a) Send dt?1 to the oracle and obtain hypothesis ht .
Set utn = ht (xn )yn and ?
bt = min{b
?t?1 , dt?1 ? ut }.
t?1
t
(Assume d
? u ? g, where edge guarantee g is unknown.)
(b) Update3
dt = argmin ?(d, d0 ),
d
s.t. d?um ? ?
bt ??, for 1 ? m ? t,
X
dn = 1, d ?
n
1
1.
?
(c) If above infeasible or dt contains a zero then T = t and break.
PT
4. Output: fw (x) = m=1 wm hm (x), where the coefficients wm maximize the soft
margin over the hypothesis set {h1 , . . . , ht } using the LP problem (1).
3
When g is known, replace the upper bound ?
bt ? ? by g ? ?.
4 SoftBoost
In this section, we present the SoftBoost algorithm, which adds capping to the TotalBoost algorithm
of [22]. SoftBoost takes as input a sequence of examples S = h(x1 , y1 ), . . . , (xN , yN )i, an accuracy
parameter ?, and a capping parameter ?. The algorithm has an oracle available with unknown
guarantee g. Its initial distribution d0 is uniform. In each iteration t, the algorithm prompts the oracle
for a new base hypothesis, incorporates it into the constraint
set, and updates its distribution dt?1 to
P
t
0
d by minimizing the relative entropy ?(d, d ) := n dn ln ddn0 subject to linear constraints:
n
t+1
0
d
= argmind ?(d, d )
s.t. P
d ? um ? b
?t ? ?, for 1 ? m ? t (where b
?t = min1?m?t dm?1 ? um ),
1
d
=
1,
d
?
1.
n n
?
It is easy to solve this optimization problem with vanilla sequential quadratic programming methods
(see [22] for details). Observe that removing the relative entropy term from the objective, results
in a feasibility problem for linear programming where the edges are upper bounded by ?
bt ? ?. If
we remove the relative entropy and minimize the upper bound on the edges, then we arrive at the
optimization problem of LPBoost, and logarithmic growth in the number of examples is no longer
possible. The relative entropy in the objective assures that the probabilities of the examples are
always proportional to their exponentiated negative soft margins (not shown). That is, more weight
is put on the examples with low soft margin, which are the examples that are hard to classify.
4.1 Iteration bounds for SoftBoost
Our iteration bound for SoftBoost is very similar to the bound proven for TotalBoost [22], differing
only in the additional details related to capping.
Theorem 2 SoftBoost terminates after at most ? ?22 ln(N/?)? iterations with a convex combination
that is at most ? below the optimum value g.
Proof. We begin by observing that if the optimization problem at iteration t is infeasible, then
?t? (?) > ?
bt ? ? ? g ? ?. Also if dt contains a zero, then since the objective function ?(d, d0 ) is
strictly convex in d and minimized at the interior point d0 , there is no optimal solution in the interior
of the simplex. Hence, ?t? (?) = ?
bt ? ? ? g ? ?.
Let Ct be the convex subset of probability vectors d ? P N satisfying d ? ?1 1 and maxtm=1 d ? ut ?
?
bt ? ?. Notice that C0 is the N dimensional probability simplex where the components are capped
to ?1 . The distribution dt?1 at iteration t ? 1 is the projection of d0 onto the closed convex set Ct?1 .
Because adding a new hypothesis in iteration t results in an additional constraint and b
?t ? ?
bt?1 ,
5
we have Ct ? Ct?1 . If t ? T ? 1, then our termination condition assures that at iteration t ? 1
the set Ct?1 has a feasible solution in the interior of the simplex. Also, d0 lies in the interior and
dt ? Ct ? Ct?1 . These preconditions assure that at iteration t ? 1, the projection dt?1 of d0 onto
Ct?1 , exists and the Generalized Pythagorean Theorem for Bregman divergences [2, 9] is applicable:
?(dt , d0 ) ? ?(dt?1 , d0 ) ? ?(dt , dt?1 ).
t
t?1
(3)
2
By Pinsker?s inequality, ?(dt , dt?1 ) ? (||d ?d2 ||1 ) , and by H?older?s inequality, ||dt?1 ?dt ||1 ?
||dt?1 ? dt ||1 ||ut ||? ? dt?1 ? ut ? dt ? ut . Also dt?1 ? ut ? ?
bt by the definition of ?
bt , and the
constraints on the optimization problem assure that dt ? ut ? ?
bt ? ? and thus dt?1 ? ut ? dt ? ut ?
2
?
bt ?(b
?t ??) = ?. We conclude that ?(dt , dt?1 ) ? ?2 at iterations 1 through T ?1. By summing (3)
over the first T ? 1 iterations, we obtain
?(dT , d0 ) ? ?(d0 , d0 ) ? (T ? 1)
?2
.
2
Since the left side is at most ln(N/?), the bound of the theorem follows.
When ? = 1, then capping is vacuous and the algorithm and its iteration bound coincides with the
bound for TotalBoost. Note that the upper bound ln(N/?) on the relative entropy decreases with ?.
When ? = N , then the distribution stays at d0 and the iteration bound is zero.
5 Experiments
In a first study, we use experiments on synthetic data to illustrate the general behavior of the considered algorithms.2 We generated a synthetic data set by starting with a random matrix of 2000
rows and 100 columns, where each entry was chosen uniformly in [0, 1]. For the first 1000 rows, we
added 1/2 to the first 10 columns and rescaled such that the entries in those columns were again in
[0, 1]. The rows of this matrix are our examples and the columns and their negation are the base hypotheses, giving us a total of 200 of them. The first 1000 examples were labeled +1 and the rest ?1.
This results in a well separable dataset. To illustrate how the algorithms deal with the inseparable
case, we flipped the sign of a random 10% of the data set. We then chose a random 500 examples as
our training set and the rest as our test set. In every boosting iteration we chose the base hypothesis
which has the largest edge with respect to the current distribution on the examples.
We have trained LPBoost and SoftBoost for different values of ? and recorded the generalization
error (cf. Figure 2; ? = 10?3 ). We should expect that for small ? (e.g. ?/N < 10%) the data is
not easily separable, even when allowing ? wrong predictions. Hence the algorithm may mistakenly
concentrate on the random directions for discrimination. If ? is large enough, most incorrectly
labeled examples are likely to be identified as margin errors (?i > 0) and the performance should
stabilize. In Figure 2 we observe this expected behavior and also that for large ? the classification
performance decays again. The generalization performances of LPBoost and SoftBoost are very
similar, which is expected as they both attempt to maximize the soft-margin.
Using the same data set, we analysed the convergence speed of several algorithms: LPBoost, SoftBoost, BrownBoost, and SmoothBoost. We chose ? = 10?2 and ? = 200.3 For every iteration
we record all margins and compute the soft margin objective (1) for optimally chosen ? and ??s.
Figure 3 plots this value against the number of iterations for the four algorithms. SmoothBoost
takes dramatically longer to converge to the maximum soft margin than the other other three algorithms. In our experiments it nearly converges to the maximum soft margin objective, even though
no theoretical evidence is known for this observed convergence. Among the three remaining algorithms, LPBoost and SoftBoost converge in roughly the same number of iterations, but SoftBoost
has a slower start. BrownBoost terminates in fewer iterations than the other algorithms but does not
maximize the soft margin.4 This is not surprising as there is no theoretical reason to expect such a
result.
2
Our code is available at https://sourceforge.net/projects/nboost
Smaller choices of ? lead to an even slower convergence of SmoothBoost.
4
SmoothBoost has two parameters: a guarantee g on the edge of the base learner and the target margin
g/2
?. We chose g = ? ? (?) (computed with LPBoost) and ? = 2+g/2
as proposed in [21]. Brownboost?s one
parameter, c = 0.35, was chosen via cross-validation.
3
6
0.18
0.05
? LPBoost
0.04
0.16
soft margin objective
classification error
0.03
0.14
SoftBoost ?
0.12
0.1
? BrownBoost
LPBoost ?
0.02
0.01
? SoftBoost
0
SmoothBoost ?
?0.01
0.08
?0.02
0.06
0
0.1
0.2
0.3
0.4
?/N
0.5
0.6
0.7
0.8
0
10
1
2
10
3
10
10
number of iterations
Figure 2: Generalization performance of SoftBoost
(solid) and LPBoost (dotted) on a synthetic data set
with 10% label-noise for different values of ?.
Figure 3: Soft margin objective vs. the number of
iterations for LPBoost, SoftBoost, BrownBoost and
SmoothBoost.
Finally, we present a small comparison on ten benchmark data sets derived from the UCI benchmark
repository as previously used in [15]. We analyze the performance of AdaBoost, LPBoost, SoftBoost, BrownBoost [6] and AdaBoostReg [15] using RBF networks as base learning algorithm.5
The data comes in 100 predefined splits into training and test sets. For each of the splits we use
5-fold cross-validation to select the optimal regularization parameter for each of the algorithms.
This leads to 100 estimates of the generalization error for each method and data set. The means
and standard deviations are given in Table 1.6 As before, the generalization performances of SoftBoost and LPBoost are very similar. However, the soft margin algorithms outperform AdaBoost on
most data sets. The genaralization error of BrownBoost lies between that of AdaBoost and SoftBoost. AdaBoostReg performs as well as SoftBoost, but there are no iteration bounds known for this
algorithm.
Even though SoftBoost and LPBoost often have similar generalization error on natural datasets, the
number of iterations needed by both algorithms can be radically different (see Theorem 1). Also, in
[22] there are some artificial data sets where TotalBoost (i.e. SoftBoost with ? = 1) outperformed
LPBoost i.t.o. generalization error.
Banana
B.Cancer
Diabetes
German
Heart
Ringnorm
F.Solar
Thyroid
Titanic
Waveform
AdaBoost
13.3 ? 0.7
32.1 ? 3.8
27.9 ? 1.5
26.9 ? 1.9
20.1 ? 2.7
1.9 ? 0.3?
36.1 ? 1.5
4.4 ? 1.9?
22.8 ? 1.0
10.5 ? 0.4
11.1
27.8
24.4
24.6
18.4
1.9
35.7
4.9
22.8
10.1
LPBoost
? 0.6
? 4.3
? 1.7
? 2.1
? 3.0
? 0.2
? 1.6
? 1.9
? 1.0
? 0.5
SoftBoost
11.1 ? 0.5
28.0 ? 4.5
24.4 ? 1.7
24.7 ? 2.1
18.2 ? 2.7
1.8 ? 0.2
35.5 ? 1.4
4.9 ? 1.9
23.0 ? 0.8
9.8 ? 0.5
BrownBoost
12.9 ? 0.7
30.2 ? 3.9
27.2 ? 1.6
24.8 ? 1.9
20.0 ? 2.8
1.9 ? 0.2
36.1 ? 1.4
4.6 ? 2.1
22.8 ? 0.8
10.4 ? 0.4
AdaBoost reg
11.3 ? 0.6
27.3 ? 4.3
24.5 ? 1.7
25.0 ? 2.2
17.6 ? 3.0
1.7 ? 0.2
34.4 ? 1.7
4.9 ? 2.0
22.7 ? 1.0
10.4 ? 0.7
Table 1: Generalization error estimates and standard deviations for ten UCI benchmark data sets. SoftBoost
and LPBoost outperform AdaBoost and BrownBoost on most data sets.
6 Conclusion
We prove by counterexample that LPBoost cannot have an O(ln N ) iteration bound. This counterexample may seem similar to the proof that the Simplex algorithm for LP can take exponentially more
steps than interior point methods. However this similarity is only superficial. First, our iteration
bound does not depend on the LP solver used within LPBoost. This is because in the construction,
the interim solutions are always unique and thus all LP solvers will produce the same solution. Second, the iteration bound essentially says that column generation methods (of which LPBoost is a
canonical example) should not solve the current subproblem at iteration t optimally. Instead a good
algorithm should loosen the constraints and spread the weight via a regularization such as the relative entropy. These two tricks used by the SoftBoost algorithm make it possible to obtain iteration
5
The data is from http://theoval.cmp.uea.ac.uk/?gcc/matlab/index.shtml. The RBF
networks were obtained from the authors of [15], including the hyper-parameter settings for each data set.
6
Note that [15] contains a similar benchmark comparison. It is based on a different model selection setup
leading to underestimates of the generalization error. Presumably due to slight differences in the RBF hyperparameters settings, our results for AdaBoost often deviate by 1-2%.
7
bounds that grow logarithmic in N . The iteration bound for our algorithm is a straightforward extension of a bound given in [22] that is based on Bregman projection methods. By using a different
divergence in SoftBoost, such as the sum of binary relative entropies, the algorithm morphs into a
?soft? version of LogitBoost (see discussion in [22]) which has essentially the same iteration bound
as SoftBoost. We think that the use of Bregman projections illustrates the generality of the methods. Although the proofs seem trivial in hindsight, simple logarithmic iteration bounds for boosting
algorithms that maximize the soft margin have eluded many researchers (including the authors) for
a long time. Note that duality methods typically can be used in place of Bregman projections. For
example in [12], a number of iteration bounds for boosting algorithms are proven with both methods.
On a more technical level, we show that LPBoost may require N/2 examples to get .99 close to the
maximum hard margin. We believe that similar methods can be used to show that ?(N/?) examples
may be needed to get ? close. However the real challenge is to prove that LPBoost may require
?(N/? 2 ) examples to get ? close.
References
[1] L. Breiman. Prediction games and arcing algorithms. Neural Computation, 11(7):1493?1518, 1999. Also
Technical Report 504, Statistics Department, University of California Berkeley.
[2] Y. Censor and S. A. Zenios. Parallel Optimization. Oxford, New York, 1997.
[3] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273?297, 1995.
[4] A. Demiriz, K.P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation.
Machine Learning, 46(1-3):225?254, 2002.
[5] C. Domingo and O. Watanabe. Madaboost: A modification of Adaboost. In Proc. COLT ?00, pages
180?189, 2000.
[6] Y. Freund. An adaptive version of the boost by majority algorithm. Mach. Learn., 43(3):293?318, 2001.
[7] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting. Journal of Computer and System Sciences, 55(1):119?139, 1997.
[8] A.J. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In
Proceedings of the Fifteenth National Conference on Artifical Intelligence, 1998.
[9] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning
Research, 1:281?309, 2001.
[10] R. Hettich and K.O. Kortanek. Semi-infinite programming: Theory, methods and applications. SIAM
Review, 3:380?429, September 1993.
[11] J. Kivinen and M. K. Warmuth. Boosting as entropy projection. In Proc. 12th Annu. Conference on
Comput. Learning Theory, pages 134?144. ACM Press, New York, NY, 1999.
[12] J. Liao. Totally Corrective Boosting Algorithms that Maximize the Margin. PhD thesis, University of
California at Santa Cruz, December 2006.
[13] R. Meir and G. R?atsch. An introduction to boosting and leveraging. In S. Mendelson and A. Smola,
editors, Proc. 1st Machine Learning Summer School, Canberra, LNCS, pages 119?184. Springer, 2003.
[14] G. R?atsch. Robust Boosting via Convex Optimization: Theory and Applications. PhD thesis, University
of Potsdam, Germany, December 2001.
[15] G. R?atsch, T. Onoda, and K.-R. M?uller. Soft margins for AdaBoost. Machine Learning, 42(3):287?320,
2001.
[16] G. R?atsch, B. Sch?olkopf, A.J. Smola, S. Mika, T. Onoda, and K.-R. M?uller. Robust ensemble learning. In A.J. Smola, P.L. Bartlett, B. Sch?olkopf, and D. Schuurmans, editors, Advances in Large Margin
Classifiers, pages 207?219. MIT Press, Cambridge, MA, 2000.
[17] G. R?atsch and M. K. Warmuth. Efficient margin maximizing with boosting. Journal of Machine Learning
Research, 6:2131?2152, December 2005.
[18] C. Rudin, I. Daubechies, and R.E. Schapire. The dynamics of adaboost: Cyclic behavior and convergence
of margins. Journal of Machine Learning Research, 5:1557?1595, 2004.
[19] R.E. Schapire, Y. Freund, P.L. Bartlett, and W.S. Lee. Boosting the margin: A new explanation for the
effectiveness of voting methods. The Annals of Statistics, 26(5):1651?1686, 1998.
[20] B. Sch?olkopf, A.J. Smola, R.C. Williamson, and P.L. Bartlett. New support vector algorithms. Neural
Comput., 12(5):1207?1245, 2000.
[21] Rocco A. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning
Research, 4:633?648, 2003.
[22] M.K. Warmuth, J. Liao, and G. R?atsch. Totally corrective boosting algorithms that maximize the margin.
In Proc. ICML ?06, pages 1001?1008. ACM Press, 2006.
8
| 3374 |@word repository:1 version:3 c0:1 termination:1 d2:1 simulation:1 minus:1 solid:1 initial:2 cyclic:1 series:1 contains:3 denoting:1 current:4 surprising:1 analysed:1 must:1 cruz:3 remove:1 designed:1 plot:1 update:9 discrimination:1 v:1 half:1 fewer:1 intelligence:1 rudin:1 warmuth:5 record:1 manfred:2 boosting:25 five:2 dn:3 competitiveness:1 prove:4 combine:2 introduce:1 expected:2 roughly:1 behavior:4 frequently:1 decreasing:2 param:2 solver:3 totally:4 becomes:1 begin:1 project:1 bounded:3 underlying:3 notation:3 what:1 argmin:2 minimizes:1 developed:1 differing:1 hindsight:1 transformation:1 guarantee:8 duplicating:1 every:2 berkeley:1 voting:1 growth:4 tie:1 um:9 wrong:1 classifier:1 uk:1 grant:1 yn:12 planck:1 before:2 positive:1 engineering:2 modify:2 limit:1 mach:1 oxford:1 might:2 plus:1 chose:4 mika:1 suggests:1 ringnorm:1 limited:1 range:1 practical:1 unique:1 practice:1 implement:1 digit:1 lncs:1 drug:1 projection:7 convenient:1 word:1 refers:1 get:3 onto:2 smoothboost:10 interior:6 close:6 cannot:1 put:3 selection:1 optimize:2 equivalent:2 maximizing:5 send:2 straightforward:1 starting:1 convex:10 madaboost:3 handle:1 annals:1 pt:4 play:1 target:1 construction:1 programming:7 us:1 hypothesis:52 domingo:1 diabetes:1 trick:1 assure:2 recognition:1 satisfying:1 predicts:2 labeled:5 lpboost:43 observed:2 role:1 min1:2 subproblem:1 capture:1 worst:1 precondition:1 trade:1 counter:1 decrease:1 rescaled:1 govern:1 complexity:1 constrains:1 pinsker:1 dynamic:1 trained:1 depend:1 solving:1 learner:3 easily:1 corrective:5 fast:1 artificial:1 hyper:1 choosing:1 whose:2 apparent:1 larger:1 valued:1 heuristic:1 solve:2 say:1 statistic:2 think:1 demiriz:1 noisy:1 final:1 sequence:1 net:1 adaptation:1 uci:2 degenerate:1 achieve:4 olkopf:3 sourceforge:1 convergence:7 optimum:5 produce:3 converges:4 help:2 illustrate:3 ac:1 school:1 b0:2 received:3 kortanek:1 solves:1 implemented:1 come:1 concentrate:1 direction:1 waveform:1 stringent:1 require:3 generalization:13 preliminary:1 strictly:1 extension:1 considered:1 uea:1 great:1 presumably:1 claim:1 achieves:1 inseparable:2 optimizer:2 proc:4 outperformed:1 applicable:1 realizes:1 label:3 largest:1 weighted:1 minimization:1 uller:2 mit:1 clearly:1 always:6 aim:1 pn:1 cmp:1 breiman:1 shtml:1 arcing:1 derived:1 focus:1 improvement:1 sense:1 censor:1 stopping:1 typically:4 entire:1 bt:14 pad:1 going:1 misclassified:1 germany:2 provably:1 overall:1 dual:2 classification:3 among:1 colt:1 development:1 initialize:2 construct:1 never:1 reasoned:1 biology:1 placing:1 flipped:1 icml:1 nearly:1 simplex:6 others:1 report:1 minimized:1 employ:1 divergence:2 national:1 maintain:1 negation:1 attempt:2 interest:2 mixture:1 predefined:1 bregman:4 edge:22 closer:1 grove:1 necessary:1 taylor:1 desired:1 theoretical:4 instance:1 classify:2 soft:30 column:8 deviation:2 subset:2 entry:3 uniform:6 predictor:1 too:1 optimally:2 morphs:1 synthetic:4 combined:2 st:1 herbster:1 siam:1 stay:1 lee:1 off:1 theoval:1 quickly:1 thesis:2 daubechies:1 recorded:1 again:2 satisfied:1 choose:2 slowly:1 worse:1 leading:1 return:4 stabilize:1 coefficient:3 explicitly:1 depends:2 view:1 h1:3 break:4 closed:1 observing:1 analyze:1 reached:1 wm:6 start:1 parallel:2 solar:1 minimize:1 accuracy:5 became:1 bolded:1 ensemble:5 generalize:1 weak:1 researcher:1 definition:1 against:1 underestimate:1 servedio:1 dm:1 proof:5 dataset:1 concentrating:1 knowledge:1 ut:12 organized:1 higher:1 dt:37 adaboost:19 done:1 though:3 generality:1 just:3 smola:4 working:1 receives:1 mistakenly:1 
glocer:1 grows:1 believe:1 effect:1 hence:2 regularization:2 laboratory:1 deal:1 game:2 please:1 maintained:1 unnormalized:1 coincides:1 criterion:2 generalized:1 trying:1 theoretic:1 performs:1 meaning:1 novel:1 empirically:1 exponentially:1 discussed:1 slight:1 significant:2 cambridge:1 counterexample:3 vanilla:1 language:1 had:1 shawe:1 longer:2 similarity:1 base:18 add:1 recent:1 optimizes:1 certain:1 ubingen:1 inequality:2 binary:3 success:1 exploited:1 utn:2 seen:1 minimum:2 additional:2 relaxed:1 converge:4 maximize:9 semi:1 d0:16 smooth:1 technical:2 faster:2 cross:2 long:2 feasibility:1 loosen:1 prediction:2 variant:1 basic:1 miescher:1 essentially:2 liao:2 fifteenth:1 iteration:55 achieved:1 penalize:1 receive:1 whereas:1 interval:1 grow:1 malicious:1 sends:1 crucial:1 sch:3 rest:2 subject:2 tend:1 december:3 incorporates:1 leveraging:1 seem:2 effectiveness:1 practitioner:1 split:2 enough:2 easy:2 finish:1 brownboost:12 perfectly:1 identified:1 zenios:1 idea:2 motivated:2 bartlett:3 karen:1 york:2 matlab:1 dramatically:1 useful:1 santa:3 ten:2 simplest:1 tth:1 http:2 schapire:3 outperform:2 exist:1 meir:1 nsf:1 problematic:1 notice:1 dotted:1 canonical:1 sign:1 ccr:1 gunnar:1 four:1 nevertheless:1 ht:10 kept:1 relaxation:1 fraction:3 sum:2 arrive:1 place:1 hettich:1 decision:1 bound:29 ct:8 pay:1 summer:1 fold:1 quadratic:1 oracle:10 constraint:14 sparked:1 totalboost:5 generates:1 speed:3 thyroid:1 min:3 separable:7 interim:1 department:1 combination:8 smaller:2 slightly:2 terminates:2 appealing:1 lp:7 making:1 modification:1 intuitively:1 taken:1 heart:1 ln:8 previously:2 remains:1 discus:1 slack:1 assures:2 german:1 needed:3 end:3 available:3 observe:2 ocr:1 away:1 robustness:1 slower:2 original:2 remaining:3 cf:1 hinge:1 giving:1 establish:1 nboost:1 society:1 objective:8 already:3 quantity:1 added:3 strategy:1 rocco:1 september:1 majority:1 argue:1 trivial:2 reason:2 enforcing:1 provable:1 code:1 index:1 relationship:1 minimizing:2 difficult:1 unfortunately:1 setup:1 negative:1 gcc:1 unknown:4 allowing:1 upper:5 observation:3 datasets:1 benchmark:5 arc:1 incorrectly:2 banana:1 y1:3 arbitrary:2 community:1 prompt:1 vacuous:1 namely:1 required:1 friedrich:1 california:4 eluded:1 learned:1 potsdam:1 boost:1 capped:2 able:1 below:4 challenge:1 max:2 including:4 explanation:1 power:1 natural:3 kivinen:1 older:1 improve:1 imply:1 titanic:1 hm:3 text:1 deviate:1 review:1 discovery:1 relative:10 freund:3 loss:1 expect:2 generation:2 proportional:1 proven:2 versus:1 validation:2 affine:1 sufficient:1 consistent:1 principle:1 editor:2 classifying:1 row:5 cancer:1 supported:2 last:3 infeasible:2 adaboostreg:2 side:1 allow:2 understand:1 exponentiated:1 xn:14 world:1 preventing:1 author:2 commonly:1 made:1 adaptive:1 far:1 cope:1 overfitting:1 active:1 summing:1 conclude:2 table:2 learn:1 onoda:2 robust:3 ca:2 superficial:1 schuurmans:2 williamson:1 necessarily:2 domain:1 spread:1 linearly:3 noise:6 hyperparameters:1 logitboost:1 x1:3 canberra:1 ny:1 precision:3 watanabe:1 comput:2 lie:5 capping:16 theorem:5 removing:1 annu:1 xt:1 showing:1 decay:1 svm:1 negate:1 cortes:1 evidence:1 exists:2 mendelson:1 vapnik:1 adding:2 sequential:1 phd:2 illustrates:1 margin:65 entropy:11 logarithmic:6 simply:1 likely:1 tracking:1 springer:1 corresponds:1 radically:1 satisfies:1 acm:2 ma:1 goal:2 rbf:3 replace:1 bennett:1 feasible:1 hard:12 fw:4 infinite:1 uniformly:1 wt:2 called:3 total:1 duality:2 experimental:2 atsch:7 select:1 support:2 mark:1 pythagorean:1 artifical:1 dept:2 reg:1 |
2,619 | 3,375 | On Ranking in Survival Analysis: Bounds on the
Concordance Index
Vikas C. Raykar, Harald Steck, Balaji Krishnapuram
CAD and Knowledge Solutions (IKM CKS), Siemens Medical Solutions Inc., Malvern, USA
{vikas.raykar,harald.steck,balaji.krishnapuram}@siemens.com
Cary Dehing-Oberije, Philippe Lambin
Maastro Clinic, University Hospital Maastricht, University Maastricht, GROW, The Netherlands
{cary.dehing,philippe.lambin}@maastro.nl
Abstract
In this paper, we show that classical survival analysis involving censored data
can naturally be cast as a ranking problem. The concordance index (CI), which
quantifies the quality of rankings, is the standard performance measure for model
assessment in survival analysis. In contrast, the standard approach to learning the
popular proportional hazard (PH) model is based on Cox?s partial likelihood. We
devise two bounds on CI?one of which emerges directly from the properties of
PH models?and optimize them directly. Our experimental results suggest that all
three methods perform about equally well, with our new approach giving slightly
better results. We also explain why a method designed to maximize the Cox?s
partial likelihood also ends up (approximately) maximizing the CI.
1
Introduction
Survival analysis is a well-established field in medical statistics concerned with analyzing/predicting
the time until the occurrence of an event of interest?e.g., death, onset of a disease, or failure of a
machine. It is applied not only in clinical research, but also in epidemiology, reliability engineering,
marketing, insurance, etc. The time between a well-defined starting point and the occurrence of the
event is called the survival time or failure time, measured in clock time or in another appropriate
scale, e.g., mileage of a car. Survival time data are not amenable to standard statistical methods
because of its two special features?(1) the continuous survival time often follows a skewed distribution, far from normal, and (2) a large portion of the data is censored (see Sec. 2). In this paper we
take a machine learning perspective and cast survival analysis as a ranking problem?where the task
is to rank the data points based on their survival times rather than to predict the actual survival times.
One of the most popular performance measures for assessing learned models in survival analysis is
the Concordance Index (CI), which is similar to the Wilcoxon-Mann-Whitney statistic [13, 10] used
in bi-partite ranking problems.
Given the CI as a performance measure, we develop approaches that learn models by directly optimizing the CI. As optimization of the CI is computationally expensive, we focus on maximizing two
lower bounds on the CI, namely the log-sigmoid and the exponential bounds, which are described in
Sec. 4, 5, and 6. Interestingly, the log-sigmoid bound arises in a natural way from the Proportional
Hazard (PH) model, which is the standard model used in classical survival analysis, see Sec. 5.2.
Moreover, as the PH models are learned by optimizing Cox?s partial likelihood in classical survival
analysis, we show in Sec. 8 that maximizing this likelihood also ends up (approximately) maximizing the CI. Our experiments in Sec. 9 show that optimizing our two lower bounds and Cox?s
likelihood yields very similar results with respect to the CI, with the proposed lower bounds being
slightly better.
1
2
Survival analysis
Survival analysis has been extensively studied in the statistics community for decades, e.g., [4, 8].
A primary focus is to build statistical models for survival time Ti? of individual i of a population.
2.1
Censored data
A major problem is the fact that the period of observation Ci? can be censored for many individuals
i. For instance, a patient may move to a different town and thus be no longer available for a clinical
trial. Also at the end of the trial a lot of patients may actually survive. For such cases the exact
survival time may be longer than the observation period. Such data are referred to as right-censored,
and Ci? is also called the censoring time. For such individuals, we only know that they survived for
at least Ci? , i.e., our actual observation is Ti = min(Ti? , Ci? ).
Let xi ? Rd be the associated d-dimensional vector of covariates (explanatory variables) for the
ith individual. In clinical studies, the covariates typically include demographic variables, such as
age, gender, or race; diagnosis information like lab tests; or treatment information, e.g., dosage. An
important assumption generally made is that Ti? and Ci? are independent conditional on xi , i.e., the
cause for censoring is independent of the survival time. With the indicator function ?i , which equals
1 if failure is observed (Ti? ? Ci? ) and 0 if data is censored (Ti? > Ci? ), the available training data
can be summarized as D = {Ti , xi , ?i }N
i=1 for N patients. The objective is to learn a predictive
model for the survival time as a function of the covariates.
2.2
Failure time distributions
The failures times are typically modeled to follow a distribution, which absorbs both truly random
effects and causes unexplained by the (available) covariates. This distribution is characterized by the
survival function S(t) = Pr[T > t] for t > 0, which is the probability that the individual is still alive
at time t. A related function commonly used is the hazard function. If T has density function p, then
the hazard function is defined by ?(t) = lim?t?0 Pr[t < T ? t + ?t|T > t]/?t = p(t)/S(t). The
hazard function measures the instantaneous
rate of failure, and provides more insight into the failure
Rt
mechanisms. The function ?(t) = 0 ?(u)du is called the cumulative hazard function, and it holds
that S(t) = e??(t) [4].
2.3
Proportional hazard model
Proportional hazard (PH) models have become the standard for studying the effect of the covariates
on the survival time distributions, e.g., [8]. Specifically, the PH model assumes a multiplicative
effect of the covariates on the hazard function, i.e.,
?(t|x) = ?0 (t)ew
>
x
,
(1)
where ?(t|x) is the hazard function of a person with covariates x; ?0 (t) is the so-called baseline
hazard function (i.e., when x = 0), which is typically based on the exponential or the Weibull
>
distributions; w is a set of unknown regression parameters, and ew x is the relative hazard function.
Equivalent formulations for the cumulative hazard function and the survival function include
w> x
?(t|x) = ?0 (t)e
2.4
,
and S(t|x) = e
??0 (t)ew
>x
=e
h > R
i
? ew x ?0 (t)dt
.
(2)
Cox?s partial likelihood
Cox noticed that a semi-parametric approach is sufficient for estimating the weights w in PH models
[2, 3], i.e., the baseline hazard function can remain completely unspecified. Only a parametric
assumption concerning the effect of the covariates on the hazard function is required. Parameter
estimates in the PH model are obtained by maximizing Cox?s partial likelihood (of the weights)
[2, 3]:
>
Y
ew xi
P
L(w) =
.
(3)
w > xj
Tj ?Ti e
T uncensored
i
2
2
0
?2
?4
?6
?8
?10
?10 ?8 ?6 ?4 ?2
(a)
(b)
Indicator function
Log?sigmoid lower bound
Exponential lower bound
0
2
4
6
8 10
z
(c)
Figure 1: Order graphs representing the ranking constraints. (a) No censored data and (b) with censored data.
The empty circle represents a censored point. The points are arranged in the increasing value of their survival
times with the lowest being at the bottom. (c) Two concave lower bounds on the 0-1 indicator function.
Each term in the product is the probability that the ith individual failed at time Ti given that exactly
one failure has occurred at time Ti and all individuals for which Tj ? Ti are at risk of failing. Cox
and others have shown that this partial log-likelihood can be treated as an ordinary log-likelihood to
derive valid (partial) maximum likelihood estimates of w [2, 3].
The interesting properties of the Cox?s partial likelihood include: (1) due to its parametric form, it
can be optimized in a computationally efficient way; (2) it depends only on the ranks of the observed
survival times, cf. the inequality Tj ? Ti in Eq. 3, rather than on their actual numerical values. We
outline this connection to the ranking of the times Ti ?and hence the concordance index?in Sec. 8.
3
Ordering of Survival times
Casting survival analysis as ranking problem is an elegant way of dealing not only with the typically
skewed distributions of survival times, but also with the censoring of the data: Two subjects? survival
times can be ordered not only if (1) both of them are uncensored but also if (2) the uncensored time
of one is smaller than the censored survival time of the other. This can be visualized by means of an
order graph G = (V, E), cf. also Fig. 1. The set of vertices V represents all the individuals, where
each filled vertex indicates an observed/uncensored survival time, while an empty circle denotes a
censored observation. Existence of an edge Eij implies that Ti < Tj . An edge cannot originate
from a censored point.
3.1
Concordance index
For these reasons, the concordance index (CI) or c-index is one of the most commonly used performance measures of survival models, e.g., [6]. It can be interpreted as the fraction of all pairs of
subjects whose predicted survival times are correctly ordered among all subjects that can actually be
ordered. In other words, it is the probability of concordance between the predicted and the observed
survival. It can be written as
1 X
1f (xi )<f (xj )
(4)
c(D, G, f ) =
|E|
Eij
with the indicator function 1a<b = 1 if a < b, and 0 otherwise; |E| denotes the number of edges in
the order graph. f (xi ) is the predicted survival time for subject i by the model f . Equivalently, the
concordance index can also be written explicitly as
X
X
1
c=
1f (xi )<f (xj ) .
(5)
|E|
Ti uncensored Tj >Ti
This index is a generalization of the Wilcoxon-Mann-Whitney statistics [13, 10] and thus of the
area under the ROC curve (AUC) to regression problems in that it can (1) be applied to continuous
3
output variables and (2) account for censoring of the data. Like for the AUC, c = 1 indicates perfect
prediction accuracy and c = 0.5 is as good as a random predictor.
3.2
Maximizing the CI?The Ranking Problem
Since we evaluate the predictive accuracy of a survival model in terms of the concordance index,
it is natural to formulate the learning problem to directly maximize the concordance index. Note
that, while the concordance index has been used widely to evaluate a learnt model, it is not generally
used as an objective function during training. As the concordance index is invariant to any monotone
transformation of the survival times, the model learnt by maximizing the c-index is actually a ranking/scoring function. Our goal is to predict whether the survival time of one individual is larger than
the one of another individual. Very often the doctor would like to know whether a particular kind
of treatment results in an increase in the survival time and the exact absolute value of the survival
time is not important. In terms of ranking problems studied in machine learning this is an N -partite
ranking problem, where every data point is a class in itself. Formulating it as a ranking problem allows us to naturally incorporate the censored data. Once we have formulated it as a ranking problem
we can use various ranking algorithms proposed in the machine learning literature [5, 7, 1, 12]. In
this paper we use the algorithm proposed by [12].
More formally, we would like to learn a ranking function f from a suitable function class F, such
that f (xi ) > f (xj ) implies that the survival time of patient i is larger than the one of patient j. Given
the data D and the order graph G, the optimal ranking function is fb = arg maxf ?F c(D, G, f ). As
to prevent overfitting on the training data, regularization can be added to this equation, see Secs. 5
and 6. In many cases, sufficient regularization is also achieved by restricting the function class F,
e.g., it may contain only linear functions. For ease of exposition we will consider the family of linear
ranking functions 1 in this paper: F = {fw }, where for any x, w ? Rd , fw (x) = w> x.
4
Lower bounds on the CI
Maximizing the CI is a discrete optimization problem, which is computationally expensive. For
this reason, we resort to maximizing a differentiable and concave lower bound on the 0-1 indicator
function in the concordance index, cf. Eqs. 4 and 5. In this paper we focus on the log-sigmoid lower
bound [12], cf. Sec. 5, and exponential lower bound, cf. Sec. 6, which are suitably scaled as to be
tight at the origin and also in the asymptotic limit of large positive values, see also Fig. 1(c). We will
also show how these bounds relate to the classical approaches in survival analysis: as it turns out,
for the family of linear ranking functions, these two approaches are closely related to the PH model
commonly used in survival analysis, cf. Sec. 5.2.
5
Log-sigmoid lower bound
The first subsection discusses the lower bound on the concordance index based on the log-sigmoid
function. The second subsection shows that this bound arises naturally when using proportional
hazard models.
5.1
Lower bound
The sigmoid function is defined as ?(z) = 1/(1+e?z ), While it is an approximation to the indicator
function, it is not a lower bound. In contrast, the scaled version of the log of the sigmoid function,
log [2?(z)]/ log 2, is a lower bound on the indicator function (Fig. 1(c)), i.e.,
1z>0 ? 1 + (log ?(z)/log 2).
(6)
The log-sigmoid function is concave and asymptotically linear for large negative values, and may
hence be considered a differentiable approximation to the hinge loss, which is commonly used for
1
Generalization to non-linear functions can be achieved easily by using kernels: the linear ranking function
class F is replaced by H, a reproducing kernel Hilbert space (RKHS). The ranking function then is of the form
P
f (x) = N
i=1 ?i k(x, xi ) where k is the kernel of the RHKS H.
4
training support vector machines. The lower bound on the concordance index (cf. Eq. 4) follows
immediately:
1 X
1 X
c=
1f (xj )?f (xi )>0 ?
1 + (log ?[f (xj ) ? f (xi )]/log 2) ? b
cLS ,
(7)
|E|
|E|
Eij
Eij
which can efficiently be maximized by gradient-based methods (cf. Sec 7). Given the linear ranking
function fw (x) = w> x, the bound b
cLS becomes
1 X
b
cLS (w) =
1 + (log ?[w> (xj ? xi )]/log 2).
(8)
|E|
Eij
As to avoid overfitting, we penalize functions with a large norm w in the standard way, and obtain
the regularized version
?
b
cLSreg (w) = ? kwk2 + b
cLS (w).
(9)
2
5.2
Connection to the PH model
The concordance index can be interpreted as the probability of correct ranking (as defined by the
given order graph) given a function f . Its probabilistic version can thus be cast as a likelihood.
Under the assumption that each pair (j, i) is independent of any other pair, the log-likelihood reads
Y
L(fw , D, G) = log
Pr [fw (xi ) < fw (xj )|w] .
(10)
Eij
As this independence assumption obviously does not hold among all pairs due to transitivity (even
though the individual samples i are assumed i.i.d.), it provides a lower bound on the concordance
index.
While the probability of correct pairwise ordering, Pr [fw (xi ) < fw (xj )|w], is often chosen to be
sigmoid in the ranking literature [1], we show in the following that the sigmoid function arises
naturally in the context of PH models. Let T (w> x) denote the survival time for the patient with
covariates x or relative log-hazard w> x. A larger hazard corresponds to a smaller survival time, cf.
Sec. 2. Hence
Z ?
Pr [fw (xi ) < fw (xj )|w] = Pr[T (w> xj ) > T (w> xi )|w] =
Pr[T (w> xj ) > t]p(t|xi )dt
Z ?
Z ? 0
0
=
S(t|xj )p(t|xi )dt =
?S(t|xj )S (t|xi )dt,
0
0
where p(t|xi ) is the density function of T for patient i with covariate xi , and S(t|xi ) is the corre0
sponding survival function; S (t) = dS(t)/dt = ?p(t). Using Eq. 2 of the PH model, we continue
the manipulations:
Z ?
>
>
??0 (t) ew xj +ew xi
0
w > xi
Pr [fw (xi ) < fw (xj )|w] = ?e
e
?0 (t)dt
0
>
=
ew xi
= ?[w> (xi ? xj )].
>
ew xj + ew> xi
(11)
This derivation shows that the probability of correct pairwise ordering indeed follows the sigmoid
function. Assuming a prior Pr[w] = N (w|0, ??1 ) for regularization, the optimal maximum aposteriori (MAP) estimator is of the form w
bMAP = arg max L(w), where the posterior L(w) takes
the form of a penalized log-likelihood:
X
?
L(w) = ? kwk2 +
log ? wT (xj ? xi ) .
(12)
2
Eij
This expression is equivalent to (8) except for a few constants that are irrelevant for optimization
problem, which justifies our choice of regularization in Eq. 8.
5
6
Exponential lower bound
The exponential 1 ? e?z can serve as an alternative lower bound on the step indicator function (see
Fig. 1(c)). The concordance index can then be lower-bounded by
1 X
c ?
1 ? e?[f (xj )?f (xi )] ? b
cE .
(13)
|E|
Eij
Analogous to the log-sigmoid bound, for the linear ranking function fw (x) = w> x, the lower bound
b
cE simplifies to
>
1 X
1 ? e?w (xj ?xi ) ,
(14)
b
cE (w) =
|E|
Eij
and, penalizing functions with large norm w, the regularized version reads
>
?
1 X
b
cEreg (w) = ? kwk2 +
1 ? e?w (xj ?xi ) .
2
|E|
(15)
Eij
7
Gradient based learning
In order to maximize the regularized concave surrogate we can use any gradient-based learning
technique. We use the Polak-Ribi`
ere variant of nonlinear conjugate gradients (CG) algorithm [11].
The CG method only needs the gradient g(w) and does not require evaluation of the function. It also
avoids the need for computing the second derivatives. The convergence of CG is much faster than
that of the steepest descent. Using the fact that d?(z)/dz = ?(z)[1 ? ?(z)] and 1 ? ?(z)P
= ?(?z),
1
the gradient of Eq. 9 (log-sigmoid bound) is given by ?w b
cLSreg (w) = ??w ? |E| log
Eij (xi ?
2
xj )? wT (xi ? xj ) , and the gradient of Eq. 15 (exponential bound) by ?w b
cEreg (w) = ??w ?
P
1
?w> (xj ?xi )
.
Eij (xi ? xj )e
|E|
8
Is Cox?s partial likelihood a lower bound on the CI ?
Our experimental results (Sec. 9) indicate that the Coxs method and our proposed methods showed
similar performance when assessed using the CI. While our proposed method was formulated to
explicitly maximize a lower bound on the concordance index, the Coxs method maximized the
partial likelihood. One suspects whether Coxs partial likelihood itself is a lower bound on the
concordance index. The argument presented below could give an indication as to why a method
which maximizes the partial likelihood also ends up (approximately) maximizing the concordance
index. We re-write the exponential bound on the CI for proportional hazard models from Sec. 6
X
X
X
X
>
>
>
1
1
b
cE (w) =
1 ? e?w (xi ?xj ) = 1 ?
e?w xi [
ew xj ]
|E|
|E|
Ti uncensored Tj ?Ti
Ti uncensored
Tj ?Ti
!
>
X
1
ew xi
No
1/zi , where zi = P
? [0, 1].
(16)
= 1?
w> xj
|E| No
Tj ?Ti e
T uncensored
i
Note that we have replaced Tj > Ti by Tj ? Ti , assuming that there are no ties in the data, i.e., no
two survival times are identical, analogous to Cox?s partial likelihood approach (cf. Sec. 2.4). The
number of uncensored observations
is denoted by No . The Cox?s partial likelihood can be written in
Q
o
terms of zi as L(w) = Ti uncensored zi = hzi iN
geom , where hzi igeom denotes the geometric mean of
the zi with uncensored Ti . Using the inequality zi ? min zi the concordance index can be bounded
as
No 1
c?1?
.
(17)
|E| min zi
This says maximizing min zi maximizes a lower bound on the concordance index. While this does
not say anything about the Cox?s partial likelihood it still gives a useful insight. Since max zi = 1
(because zi = 1 for the largest uncensored Ti ), maximizing min zi can be expected to approximately
maximize the geometric mean of zi , and hence the Cox?s partial likelihood.
6
Table 1: Summary of the five data sets used. N is the number of patients. d is the number of covariates used.
Dataset
MAASTRO
SUPPORT-1
SUPPORT-2
SUPPORT-4
MELANOMA
9
N
285
477
314
149
191
d
19
26
26
26
4
Missing
3.6%
14.9%
16.6%
22.0%
0.0%
Censored
30.5%
36.4%
43.0%
10.7%
70.2%
Experiments
In this section we compare the performance of the two different lower bounds on the CI?the logsigmoid, exponential, and Cox?s partial likelihood?on five medical data sets.
9.1
Medical datasets
Table 1 summarizes the five data sets we used in our experiments. A substantial amount of data
is censored and also missing. The MAASTRO dataset concerns the survival time of non-small
cell lung cancer patients, which we analyzed as part of our collaboration. The other medical data
sets are publicly available: The SUPPORT dataset 2 is a random sample from Phases I and II of
the SUPPORT [9](Study to Understand Prognoses Preferences Outcomes and Risks of Treatment)
study. As suggested in [6] we split the dataset into three different datasets, each corresponding to a
different cause of death. The MELANOMA data 3 is from a clinical study of skin cancer.
9.2
Evaluation procedure
For each data set, 70% of the examples were used for training and the remaining 30% as the hold-out
set for testing. We chose the optimal value of the regularization parameter (cf. Eqs. 9 and 15) based
on five-fold cross validation on the training set. The tolerance for the conjugate gradient procedure
was set to $10^{-3}$. The conjugate-gradient optimization procedure was initialized to the zero vector.
All the covariates were normalized to have zero mean and unit variance. As missing values were
not the focus of this paper, we used a simple imputation technique. For each missing value, we
imputed a sample drawn from a Gaussian distribution with its mean and variance estimated from the
available values of the other patients.
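As a rough illustration of this imputation step (our own sketch, not the authors' code), each missing entry can be replaced column-wise by a draw from a Gaussian fitted to the observed entries of that covariate:

```python
import numpy as np

def impute_gaussian(X, rng=None):
    """Fill NaNs column-wise with draws from N(mean, var) of the observed values."""
    rng = rng or np.random.default_rng(0)
    X = X.copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        missing = np.isnan(col)
        if missing.any():
            mu, sd = np.nanmean(col), np.nanstd(col)
            col[missing] = rng.normal(mu, sd, size=missing.sum())
    return X
```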
9.3 Results
The performance was evaluated in terms of the concordance index and the results are tabulated in
Table 2. We compare the following methods: (1) Cox's partial likelihood method, and (2) the proposed ranking methods with log-sigmoid and exponential lower bounds. The following observations
can be made: (1) The proposed linear ranking method performs slightly better than Cox's partial likelihood method, but the difference does not appear significant. This agrees with our insight
that Cox's partial likelihood may also end up maximizing the CI. (2) The exponential bound shows
slightly better performance than the log-sigmoid bound, which may indicate that the tightness of the
bound for positive z in Fig. 1(c) is more important than for negative z in our data sets. However, the
difference is not significant.
10 Conclusions
In this paper, we outlined several approaches for maximizing the concordance index, the standard
performance measure in survival analysis when cast as a ranking problem. We showed that, for the
widely-used proportional hazard models, the log-sigmoid function arises as a natural lower bound
on the concordance index. We presented an approach for directly optimizing this lower bound in
a computationally efficient way. This optimization procedure can also be applied to other lower
bounds, like the exponential one. Apart from that, we showed that maximizing Cox's partial likelihood can be understood as (approximately) maximizing a lower bound on the concordance index,
which explains the high CI scores of proportional hazard models observed in practice. Optimization
of each of these three lower bounds results in about the same CI score in our experiments, with our
new approach giving tentatively better results.
2 http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/DataSets
3 www.stat.uni-muenchen.de/service/datenarchiv/melanoma/melanoma_e.html
Table 2: Concordance indices for the different methods and datasets. The mean and the standard deviation
are computed over a five fold cross-validation. The results are also shown for a fixed holdout set.

Dataset    Method        CI training set    CI test set        CI holdout set
                         mean [± std]       mean [± std]
MAASTRO    Cox PH        0.65 [±0.02]       0.57 [±0.09]       0.64
           log-sigmoid   0.69 [±0.02]       0.60 [±0.06]       0.64
           exponential   0.69 [±0.02]       0.64 [±0.08]       0.65
SUPPORT-1  Cox PH        0.76 [±0.01]       0.74 [±0.05]       0.79
           log-sigmoid   0.83 [±0.01]       0.77 [±0.04]       0.79
           exponential   0.83 [±0.01]       0.79 [±0.02]       0.82
SUPPORT-2  Cox PH        0.70 [±0.02]       0.63 [±0.06]       0.69
           log-sigmoid   0.79 [±0.01]       0.68 [±0.06]       0.65
           exponential   0.78 [±0.02]       0.68 [±0.09]       0.70
SUPPORT-4  Cox PH        0.78 [±0.01]       0.68 [±0.09]       0.64
           log-sigmoid   0.80 [±0.01]       0.74 [±0.12]       0.71
           exponential   0.79 [±0.01]       0.73 [±0.03]       0.71
MELANOMA   Cox PH        0.63 [±0.03]       0.62 [±0.09]       0.54
           log-sigmoid   0.76 [±0.02]       0.70 [±0.10]       0.55
           exponential   0.76 [±0.01]       0.65 [±0.11]       0.55
Acknowledgements
We are grateful to R. Bharat Rao for encouragement and support of this work, and to the anonymous
reviewers for their valuable comments.
References
[1] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[2] D. R. Cox. Regression models and life-tables (with discussion). Journal of the Royal Statistical Society, Series B, 34(2):187-220, 1972.
[3] D. R. Cox. Partial likelihood. Biometrika, 62(2):269-276, 1975.
[4] D. R. Cox and D. Oakes. Analysis of survival data. Chapman and Hall, 1984.
[5] Y. Freund, R. Iyer, and R. Schapire. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[6] F. E. Harrell Jr. Regression Modeling Strategies, With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer, 2001.
[7] R. Herbrich, T. Graepel, P. Bollmann-Sdorra, and K. Obermayer. Learning preference relations for information retrieval. ICML-98 Workshop: Text Categorization and Machine Learning, pages 80-84, 1998.
[8] J. D. Kalbfleisch and R. L. Prentice. The statistical analysis of failure time data. Wiley-Interscience, 2002.
[9] W. A. Knaus, F. E. Harrell, J. Lynn, et al. The SUPPORT prognostic model: Objective estimates of survival for seriously ill hospitalized adults. Annals of Internal Medicine, 122:191-203, 1995.
[10] H. B. Mann and D. R. Whitney. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, 18(1):50-60, 1947.
[11] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[12] V. C. Raykar, R. Duraiswami, and B. Krishnapuram. A fast algorithm for learning large scale preference relations. In M. Meila and X. Shen, editors, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pages 385-392, 2007.
[13] F. Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80-83, December 1945.
2,620 | 3,376 | Statistical Analysis of Semi-Supervised Regression
John Lafferty
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Larry Wasserman
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Semi-supervised methods use unlabeled data in addition to labeled data to construct predictors. While existing semi-supervised methods have shown some
promising empirical performance, their development has been largely based
on heuristics. In this paper we study semi-supervised learning from the viewpoint
of minimax theory. Our first result shows that some common methods based on
regularization using graph Laplacians do not lead to faster minimax rates of convergence. Thus, the estimators that use the unlabeled data do not have smaller
risk than the estimators that use only labeled data. We then develop several new
approaches that provably lead to improved performance. The statistical tools of
minimax analysis are thus used to offer some new perspective on the problem of
semi-supervised learning.
1 Introduction
Suppose that we have labeled data $L = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ and unlabeled data $U = \{X_{n+1}, \ldots, X_N\}$ where $N \gg n$ and $X_i \in \mathbb{R}^D$. Ordinary regression and classification techniques
use $L$ to predict $Y$ from $X$. Semi-supervised methods also use the unlabeled data $U$ in an attempt
to improve the predictions. To justify these procedures, it is common to invoke one or both of the
following assumptions:
Manifold Assumption (M): The distribution of $X$ lives on a low dimensional manifold.
Semi-Supervised Smoothness Assumption (SSS): The regression function $m(x) = \mathbb{E}(Y \mid X = x)$ is very smooth where the density $p(x)$ of $X$ is large. In particular, if there
is a path connecting $X_i$ and $X_j$ on which $p(x)$ is large, then $Y_i$ and $Y_j$ should be similar
with high probability.
While these assumptions are somewhat intuitive, and synthetic examples can easily be constructed to
demonstrate good performance of various techniques, there has been very little theoretical analysis
of semi-supervised learning that rigorously shows how the assumptions lead to improved performance of the estimators.
In this paper we provide a statistical analysis of semi-supervised methods for regression, and propose some new techniques that provably lead to better inferences, under appropriate assumptions. In
particular, we explore precise formulations of SSS, which is motivated by the intuition that high density level sets correspond to clusters of similar objects, but as stated above is quite vague. To the best
of our knowledge, no papers have made the assumption precise and then explored its consequences
in terms of rates of convergence, with the exception of one of the first papers on semi-supervised
learning, by Castelli and Cover (1996), which evaluated a simple mixture model, and the recent
paper of Rigollet (2006) in the context of classification. This situation is striking, given the level
of activity in this area within the machine learning community; for example, the recent survey of
semi-supervised learning by Zhu (2006) contains 163 references.
Among our findings are:
1. Under the manifold assumption M, the semi-supervised smoothness assumption SSS is
superfluous. This point was made heuristically by Bickel and Li (2006), but we show
that in fact ordinary regression methods are automatically adaptive if the distribution of X
concentrates on a manifold.
2. Without the manifold assumption M, the semi-supervised smoothness assumption SSS as
usually defined is too weak, and current methods don't lead to improved inferences. In
particular, methods that use regularization based on graph Laplacians do not achieve faster
rates of convergence.
3. Assuming specific conditions that relate m and p, we develop new semi-supervised methods that lead to improved estimation. In particular, we propose estimators that reduce bias
by estimating the Hessian of the regression function, improve the choice of bandwidths
using unlabeled data, and estimate the regression function on level sets.
The focus of the paper is on a theoretical analysis of semi-supervised regression techniques, rather
than the development of practical new algorithms and techniques. While we emphasize regression,
most of our results have analogues for classification. Our intent is to bring the statistical perspective
of minimax analysis to bear on the problem, in order to study the interplay between the labeled
sample size and the unlabeled sample size, and between the regression function and the data density.
By studying simplified versions of the problem, our analysis suggests how precise formulations of
assumptions M and SSS can be made and exploited to lead to improved estimators.
2 Preliminaries
The data are $(X_1, Y_1, R_1), \ldots, (X_N, Y_N, R_N)$ where $R_i \in \{0, 1\}$ and we observe $Y_i$ only if $R_i = 1$.
The labeled data are $L = \{(X_i, Y_i) : R_i = 1\}$ and the unlabeled data are $U = \{(X_i, Y_i) : R_i = 0\}$.
For convenience, assume that data are labeled so that $R_i = 1$ for $i = 1, \ldots, n$ and $R_i = 0$ for
$i = n + 1, \ldots, N$. Thus, the labeled sample size is $n$, and the unlabeled sample size is $u = N - n$.
Let $p(x)$ be the density of $X$ and let $m(x) = \mathbb{E}(Y \mid X = x)$ denote the regression function. Assume
that $R \perp\!\!\!\perp Y \mid X$ (missing at random) and that $R_i \mid X_i \sim \mathrm{Bernoulli}(\pi(X_i))$. Finally, let $\pi = P(R_i = 1) = \int \pi(x)\, p(x)\, dx$. For simplicity we assume that $\pi(x) = \pi$ for all $x$. The missing at random
assumption $R \perp\!\!\!\perp Y \mid X$ is crucial, although this point is rarely emphasized in the machine learning
literature.
It is clear that without some further conditions, the unlabeled data are useless. The key assumption
we need is that there is some correspondence between the shape of the regression function m and
the shape of the data density p.
We will use minimax theory to judge the quality of an estimator. Let $\mathcal{R}$ denote a class of regression
functions and let $\mathcal{F}$ denote a class of density functions. In the classical setting, we observe labeled
data $(X_1, Y_1), \ldots, (X_n, Y_n)$. The pointwise minimax risk, or mean squared error (MSE), is defined
by
$$R_n(x) = \inf_{\hat m_n}\ \sup_{m \in \mathcal{R},\, p \in \mathcal{F}} \mathbb{E}\big(\hat m_n(x) - m(x)\big)^2 \tag{1}$$
where the infimum is over all estimators. The global minimax risk is defined by
$$R_n = \inf_{\hat m_n}\ \sup_{m \in \mathcal{R},\, p \in \mathcal{F}} \mathbb{E} \int \big(\hat m_n(x) - m(x)\big)^2\, dx. \tag{2}$$
A typical assumption is that $\mathcal{R}$ is the Sobolev space of order two, meaning essentially that $m$ has
smooth second derivatives. In this case we have¹ $R_n \asymp n^{-4/(4+D)}$. The minimax rate is achieved
by kernel estimators and local polynomial estimators. In particular, for kernel estimators if we use
a product kernel with common bandwidth $h_n$ for each variable, choosing $h_n \asymp n^{-1/(4+D)}$ yields an
estimator with the minimax rate. The difficulty is that the rate $R_n \asymp n^{-4/(4+D)}$ is extremely slow
when $D$ is large.

¹ We write $a_n \asymp b_n$ to mean that $a_n/b_n$ is bounded away from 0 and infinity for large $n$. We have suppressed some technicalities such as moment assumptions on $\epsilon = Y - m(X)$.
In more detail, let $C > 0$ and let $B$ be a positive definite matrix, and define
$$\mathcal{R} = \Big\{ m : \big| m(x) - m(x_0) - (x - x_0)^\top \nabla m(x_0) \big| \le \tfrac{C}{2}\, (x - x_0)^\top B\, (x - x_0) \Big\} \tag{3}$$
$$\mathcal{F} = \big\{ p : p(x) \ge b > 0,\ \ |p(x_1) - p(x_2)| \le c\, \|x_1 - x_2\|_2 \big\}. \tag{4}$$
Fan (1993) shows that the local linear estimator is asymptotically minimax for this class. This estimator is given by $\hat m_n(x) = a_0$ where $(a_0, a_1)$ minimizes $\sum_{i=1}^n (Y_i - a_0 - a_1^\top (X_i - x))^2\, K(H^{-1/2}(X_i - x))$, where $K$ is a symmetric kernel and $H$ is a matrix of bandwidths.
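A direct implementation of the local linear estimator can be sketched as follows (our code; a Gaussian kernel is assumed for K, which the text does not specify):

```python
import numpy as np

def local_linear(x, X, Y, H):
    """Local linear fit at x: returns a0 from
    min sum_i (Y_i - a0 - a1'(X_i - x))^2 K(H^{-1/2}(X_i - x))."""
    L_inv = np.linalg.inv(np.linalg.cholesky(H))       # so that u'H^{-1}u = ||L_inv u||^2
    U = (X - x) @ L_inv.T
    w = np.exp(-0.5 * np.sum(U**2, axis=1))            # Gaussian kernel weights
    Z = np.hstack([np.ones((len(X), 1)), X - x])       # design matrix [1, X_i - x]
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)
    return beta[0]                                     # a0 = fitted value at x
```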
The asymptotic MSE of the local linear estimator $\hat m(x)$ using the labeled data is
$$R(H) = \Big( \tfrac{1}{2}\, \mu_2(K)\, \mathrm{tr}\big(H_m(x) H\big) \Big)^2 + \frac{\sigma_0\, \sigma^2}{n\, |H|^{1/2}\, p(x)} + o\big(\mathrm{tr}^2(H)\big) \tag{5}$$
where $H_m(x)$ is the Hessian of $m$ at $x$, $\mu_2(K) = \int K^2(u)\, du$ and $\sigma_0$ is a constant. The optimal bandwidth matrix $H_*$ is given by
$$H_* = \left( \frac{\sigma_0\, \sigma^2\, |H_m|^{1/2}}{\mu_2^2(K)\, n\, D\, p(x)} \right)^{2/(D+4)} (H_m)^{-1} \tag{6}$$
and $R(H_*) = O(n^{-4/(4+D)})$. This result is important to what follows, because it suggests that if the
Hessian $H_m$ of the regression function is related to the Hessian $H_p$ of the data density, one may be
able to estimate the optimal bandwidth matrix from unlabeled data in order to reduce the risk.
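Eq. (6) suggests a plug-in rule: given estimates of the Hessian H_m and of p(x), the bandwidth matrix follows directly. A sketch under that reading (ours; the kernel constants sigma0 and mu2 are passed in as placeholders):

```python
import numpy as np

def plugin_bandwidth(Hm, p_x, n, sigma2, sigma0=1.0, mu2=1.0):
    """H* of Eq. (6): a scalar factor times the inverse Hessian of m at x."""
    D = Hm.shape[0]
    det_Hm = abs(np.linalg.det(Hm))
    scale = (sigma0 * sigma2 * np.sqrt(det_Hm) /
             (mu2**2 * n * D * p_x)) ** (2.0 / (D + 4))
    return scale * np.linalg.inv(Hm)
```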
3 The Manifold Assumption
It is common in the literature to invoke both M and SSS. But if M holds, SSS is not needed. This
is argued by Bickel and Li (2006) who say, "We can unwittingly take advantage of low dimensional
structure without knowing it."
Suppose $X \in \mathbb{R}^D$ has support on a manifold $M$ with dimension $d < D$. Let $\hat m_h$ be the local linear
estimator with diagonal bandwidth matrix $H = h^2 I$. Then Bickel and Li show that the bias and
variance are
$$b(x) = h^2 J_1(x)\,(1 + o_P(1)) \quad \text{and} \quad v(x) = \frac{J_2(x)}{n h^d}\,(1 + o_P(1)) \tag{7}$$
for some functions $J_1$ and $J_2$. Choosing $h \asymp n^{-1/(4+d)}$ yields a risk of order $n^{-4/(4+d)}$, which is the
optimal rate for data that lie on a manifold of dimension $d$.
To use the above result we would need to know $d$. Bickel and Li argue heuristically that the following
procedure will lead to a reasonable bandwidth. First, estimate $d$ using the procedure in Levina
and Bickel (2005). Now let $B = \{\beta_1/n^{1/(\hat d + 4)}, \ldots, \beta_B/n^{1/(\hat d + 4)}\}$ be a set of bandwidths, scaling
the asymptotic order $n^{-1/(\hat d + 4)}$ by different constants. Finally, choose the bandwidth $h \in B$ that
minimizes a local cross-validation score.
We now show that, in fact, one can skip the step of estimating $d$. Let $E_1, \ldots, E_n$ be independent
Bernoulli($\tfrac12$) random variables. Split the data into two groups, so that $I_0 = \{i : E_i = 0\}$ and
$I_1 = \{i : E_i = 1\}$. Let $\mathcal{H} = \{n^{-1/(4+d)} : 1 \le d \le D\}$. Construct $\hat m_h$ for $h \in \mathcal{H}$ using the data
in $I_0$, and estimate the risk from $I_1$ by setting $\hat R(h) = |I_1|^{-1} \sum_{i \in I_1} (Y_i - \hat m_h(X_i))^2$. Finally, let $\hat h$
minimize $\hat R(h)$ and set $\hat m = \hat m_{\hat h}$. For simplicity, let us assume that both $Y_i$ and $X_i$ are bounded by a
finite constant $B$.
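The splitting procedure just described is short to code. A sketch (ours), reusing the local_linear helper from the earlier snippet:

```python
import numpy as np

def adaptive_bandwidth_fit(X, Y, D, rng=None):
    """Data-splitting bandwidth selection over H = {n^(-1/(4+d)) : 1 <= d <= D}."""
    rng = rng or np.random.default_rng(0)
    n = len(X)
    E = rng.integers(0, 2, size=n)                  # Bernoulli(1/2) split
    I0, I1 = E == 0, E == 1
    bandwidths = [n ** (-1.0 / (4 + d)) for d in range(1, D + 1)]
    risks = []
    for h in bandwidths:
        H = (h**2) * np.eye(X.shape[1])             # diagonal bandwidth matrix h^2 I
        preds = np.array([local_linear(x, X[I0], Y[I0], H) for x in X[I1]])
        risks.append(np.mean((Y[I1] - preds) ** 2)) # empirical risk R_hat(h)
    return bandwidths[int(np.argmin(risks))]        # h_hat minimizing R_hat
```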
Theorem 1. Suppose that $|Y_i| \le B$ and $|X_{ij}| \le B$ for all $i$ and $j$. Assume the conditions in
Bickel and Li (2006). Suppose that the data density $p(x)$ is supported on a manifold of dimension
$d \ge 4$. Then we have that
$$\mathbb{E}\big(\hat m(x) - m(x)\big)^2 = \widetilde O\left( \frac{1}{n^{4/(4+d)}} \right). \tag{8}$$
The notation $\widetilde O$ allows for logarithmic factors in $n$.
Proof. The risk is, up to a constant, $R(h) = \mathbb{E}(Y - \hat m_h(X))^2$, where $(X, Y)$ is a new pair
and $Y = m(X) + \epsilon$. Note that $(Y - \hat m_h(X))^2 = Y^2 - 2 Y \hat m_h(X) + \hat m_h^2(X)$, so $R(h) = \mathbb{E}(Y^2) - 2\mathbb{E}(Y \hat m_h(X)) + \mathbb{E}\,\hat m_h^2(X)$. Let $n_1 = |I_1|$. Then,
$$\hat R(h) = \frac{1}{n_1} \sum_{i \in I_1} Y_i^2 - \frac{2}{n_1} \sum_{i \in I_1} Y_i\, \hat m_h(X_i) + \frac{1}{n_1} \sum_{i \in I_1} \hat m_h^2(X_i). \tag{9}$$
By conditioning on the data in $I_0$ and applying Bernstein's inequality, we have
$$P\Big( \max_{h \in \mathcal{H}} |\hat R(h) - R(h)| > \epsilon \Big) \le \sum_{h \in \mathcal{H}} P\big( |\hat R(h) - R(h)| > \epsilon \big) \le D e^{-n c \epsilon^2} \tag{10}$$
for some $c > 0$. Setting $\epsilon_n = \sqrt{C \log n / n}$ for suitably large $C$, we conclude that
$$P\left( \max_{h \in \mathcal{H}} |\hat R(h) - R(h)| > \sqrt{\frac{C \log n}{n}} \right) \to 0. \tag{11}$$
Let $h_*$ minimize $R(h)$ over $\mathcal{H}$. Then, except on a set of probability tending to 0,
$$R(\hat h) \le \hat R(\hat h) + \sqrt{\frac{C \log n}{n}} \le \hat R(h_*) + \sqrt{\frac{C \log n}{n}} \tag{12}$$
$$\le R(h_*) + 2\sqrt{\frac{C \log n}{n}} = O\left( \frac{1}{n^{4/(4+d)}} \right) + O\left( \sqrt{\frac{\log n}{n}} \right) = \widetilde O\left( \frac{1}{n^{4/(4+d)}} \right) \tag{13}$$
where we used the assumption $d \ge 4$ in the last equality. If $d = 4$ then $O(\sqrt{\log n / n}) = \widetilde O(n^{-4/(4+d)})$; if $d > 4$ then $O(\sqrt{\log n / n}) = o(n^{-4/(4+d)})$.
We conclude that ordinary regression methods are automatically adaptive, and achieve the low-dimensional minimax rate if the distribution of $X$ concentrates on a manifold; there is no need for
semi-supervised methods in this case. Similar results apply to classification.
4 Kernel Regression with Laplacian Regularization
In practice, it is unlikely that the distribution of X would be supported exactly on a low-dimensional
manifold. Nevertheless, the shape of the data density p(x) might provide information about the
regression function m(x), in which case the unlabeled data are informative.
Several recent methods for semi-supervised learning attempt to exploit the smoothness assumption
SSS using regularization operators defined with respect to graph Laplacians (Zhu et al., 2003; Zhou
et al., 2004; Belkin et al., 2005). The technique of Zhu et al. (2003) is based on Gaussian random
fields and harmonic functions defined with respect to discrete Laplace operators. To express this
method in statistical terms, recall that standard kernel regression corresponds to the locally constant
estimator
$$\hat m_n(x) = \arg\min_{m(x)} \sum_{i=1}^n K_h(X_i, x)\, (Y_i - m(x))^2 = \frac{\sum_{i=1}^n K_h(X_i, x)\, Y_i}{\sum_{i=1}^n K_h(X_i, x)} \tag{14}$$
where $K_h$ is a symmetric kernel depending on bandwidth parameters $h$. In the semi-supervised
approach of Zhu et al. (2003), the locally constant estimate $\hat m(x)$ is formed using not only the
labeled data, but also using the estimates at the unlabeled points. Suppose that the first $n$ data points
$(X_1, Y_1), \ldots, (X_n, Y_n)$ are labeled, and the next $u = N - n$ points are unlabeled, $X_{n+1}, \ldots, X_{n+u}$.
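In code, the locally constant (Nadaraya-Watson) smoother of Eq. (14) is essentially a one-liner per query point (our sketch, again with a Gaussian kernel):

```python
import numpy as np

def nadaraya_watson(x, X, Y, h):
    """Locally constant kernel smoother of Eq. (14) at query point x."""
    w = np.exp(-0.5 * np.sum(((X - x) / h) ** 2, axis=1))  # kernel weights
    return np.sum(w * Y) / np.sum(w)
```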
The semi-supervised regression estimate is then $(\hat m(X_1), \hat m(X_2), \ldots, \hat m(X_N))$ given by
$$\hat m = \arg\min_m \sum_{i=1}^N \sum_{j=1}^N K_h(X_i, X_j)\, \big(m(X_i) - m(X_j)\big)^2 \tag{15}$$
where the minimization is carried out subject to the constraint $m(X_i) = Y_i$, $i = 1, \ldots, n$. Thus,
the estimates are coupled, unlike the standard kernel regression estimate (14) where the estimate at
each point $x$ can be formed independently, given the labeled data.
The estimator can be written in closed form as a linear smoother $\hat m = C^{-1} B\, Y = G\, Y$ where
$\hat m = (\hat m(X_{n+1}), \ldots, \hat m(X_{n+u}))^\top$ is the vector of estimates over the unlabeled test points, and $Y = (Y_1, \ldots, Y_n)^\top$ is the vector of labeled values. The $(N-n) \times (N-n)$ matrix $C$ and the $(N-n) \times n$ matrix
$B$ denote blocks of the combinatorial Laplacian on the data graph corresponding to the labeled and
unlabeled data:
$$\Delta = \begin{pmatrix} A & B^\top \\ B & C \end{pmatrix} \tag{16}$$
where the Laplacian $\Delta = (\Delta_{ij})$ has entries
$$\Delta_{ij} = \begin{cases} \sum_k K_h(X_i, X_k) & \text{if } i = j \\ -K_h(X_i, X_j) & \text{otherwise.} \end{cases} \tag{17}$$
This expresses the effective kernel $G$ in terms of geometric objects such as heat kernels for the
discrete diffusion equations (Smola and Kondor, 2003).
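Once the Laplacian blocks are assembled, the harmonic solution is a single linear solve. A minimal sketch (ours; note that with the sign convention of Eq. (17), where B is a block of the Laplacian and hence has non-positive off-diagonal entries, the solve reads m = -C^{-1}B Y):

```python
import numpy as np

def harmonic_estimate(X, Y, n, h):
    """X: all N points (first n labeled), Y: the n labels.
    Returns estimates at the N - n unlabeled points."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-0.5 * d2 / h**2)            # Gaussian kernel weights K_h(X_i, X_j)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian, Eq. (17)
    B = L[n:, :n]                           # unlabeled-labeled block
    C = L[n:, n:]                           # unlabeled-unlabeled block
    # harmonic solution: C m_u + B Y = 0, so m_u = -C^{-1} B Y
    return np.linalg.solve(C, -(B @ Y))
```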
This estimator assumes the noise is zero, since $\hat m(X_i) = Y_i$ for $i = 1, \ldots, n$. To work in the
standard model $Y = m(X) + \epsilon$, the natural extension of the harmonic function approach is manifold regularization (Belkin et al., 2005; Sindhwani et al., 2005; Tsang and Kwok, 2006). Here the
estimator is chosen to minimize the regularized empirical risk functional
$$R_\gamma(m) = \sum_{i=1}^N \sum_{j=1}^n K_H(X_i, X_j)\, \big(Y_j - m(X_i)\big)^2 + \gamma \sum_{i=1}^N \sum_{j=1}^N K_H(X_i, X_j)\, \big(m(X_j) - m(X_i)\big)^2 \tag{18}$$
where $H$ is a matrix of bandwidths and $K_H(X_i, X_j) = K(H^{-1/2}(X_i - X_j))$. When $\gamma = 0$ the
standard kernel smoother is obtained. The regularization term is
$$J(m) \equiv \sum_{i=1}^N \sum_{j=1}^N K_H(X_i, X_j)\, \big(m(X_j) - m(X_i)\big)^2 = 2\, m^\top \Delta\, m \tag{19}$$
where $\Delta$ is the combinatorial Laplacian associated with $K_H$. This regularization term is motivated
by the semi-supervised smoothness assumption: it favors functions $m$ for which $m(X_i)$ is close to
$m(X_j)$ when $X_i$ and $X_j$ are similar, according to the kernel function. The name manifold regularization is justified by the fact that $\tfrac12 J(m) \approx \int_M \|\nabla m(x)\|^2\, dM_x$, the energy of $m$ over the manifold.
While this regularizer has primarily been used for SVM classifiers (Belkin et al., 2005), it can be
used much more generally. For an appropriate choice of $\gamma$, minimizing the functional (18) can be
expected to give essentially the same results as the harmonic function approach that minimizes (15).
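Setting the gradient of the quadratic functional (18) to zero gives a closed-form linear system; a sketch of that solve (our derivation and code, not from the paper):

```python
import numpy as np

def manifold_regularized_fit(X, Y, n, h, gamma):
    """Minimize Eq. (18) over m = (m(X_1), ..., m(X_N)); first n points labeled."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-0.5 * d2 / h**2)               # K_H(X_i, X_j)
    L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
    k = W[:, :n].sum(axis=1)                   # sum_{j <= n} K_H(X_i, X_j)
    b = W[:, :n] @ Y                           # sum_{j <= n} K_H(X_i, X_j) Y_j
    # Gradient of (18) vanishes when (diag(k) + 2*gamma*L) m = b
    return np.linalg.solve(np.diag(k) + 2 * gamma * L, b)
```

With gamma = 0 this reduces to the locally constant smoother of Eq. (14) evaluated at each data point, as the text states.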
Theorem 2. Suppose that $D \ge 2$. Let $\tilde m_{H,\gamma}$ minimize (18), and let $\Delta_{p,H}$ be the differential
operator defined by
$$\Delta_{p,H} f(x) = \frac{1}{2}\, \mathrm{trace}\big(H_f(x)\, H\big) + \frac{\nabla p(x)^\top H\, \nabla f(x)}{p(x)}. \tag{20}$$
Then the asymptotic MSE of $\tilde m_{H,\gamma}(x)$ is
$$\widetilde M = c_2^2 \left( \Big(I - \frac{\gamma}{\pi + \gamma}\, \Delta_{p,H}\Big)^{-1} \Delta_{p,H}\, m(x) \right)^2 + \frac{c_1}{n\, (\pi + \gamma)\, p(x)\, |H|^{1/2}} + o\big(\mathrm{tr}^2(H)\big) \tag{21}$$
where $\pi = P(R_i = 1)$.
Note that the bias of the standard kernel estimator, in the notation of this theorem, is $b(x) = c_2\, \Delta_{p,H}\, m(x)$, and the variance is $V(x) = c_1 / (n\, p(x)\, |H|^{1/2})$. Thus, this result agrees with the standard
supervised MSE in the special case $\gamma = 0$. It follows from this theorem that $\widetilde M = M + o(\mathrm{tr}^2(H))$
where $M$ is the usual MSE for a kernel estimator. Therefore, the minimum of $\widetilde M$ has the same
leading order in $H$ as the minimum of $M$.
The proof is given in the full version of the paper. The implication of this theorem is that the
estimator that uses Laplacian regularization has the same rate of convergence as the usual kernel
estimator, and thus the unlabeled data have not improved the estimator asymptotically.
5 Semi-Supervised Methods With Improved Rates
The previous result is negative, in the sense that it shows unlabeled data do not help to improve the
rate of convergence. This is because the bias and variance of a manifold regularized kernel estimator are of the same order in H as the bias and variance of standard kernel regression. We now
demonstrate how improved rates of convergence can be obtained by formulating and exploiting appropriate SSS assumptions. We describe three different approaches: semi-supervised bias reduction,
improved bandwidth selection, and averaging over level sets.
5.1 Semi-Supervised Bias Reduction
We first show a positive result by formulating an SSS assumption that links the shape of p to the
shape of m by positing a relationship between the Hessian Hm of m and the Hessian H p of p. Under
this SSS assumption, we can improve the rate of convergence by reducing the bias.
To illustrate the idea, take $p(x)$ known (i.e., $N = \infty$) and suppose that $H_m(x) = H_p(x)$. Define
$$\tilde m_n(x) = \hat m_n(x) - \tfrac12\, \mu_2^2(K)\, \mathrm{tr}\big(H_m(x)\, H\big) \tag{22}$$
where $\hat m_n(x)$ is the local linear estimator.
Theorem 3. The risk of $\tilde m_n(x)$ is $O\big(n^{-8/(8+D)}\big)$.
Proof. First note that the variance of the estimator $\tilde m_n$, conditional on $X_1, \ldots, X_n$, is
$\mathrm{Var}(\tilde m_n(x) \mid X_1, \ldots, X_n) = \mathrm{Var}(\hat m_n(x) \mid X_1, \ldots, X_n)$. Now, the term $\tfrac12 \mu_2^2(K)\, \mathrm{tr}(H_m(x) H)$ is precisely the bias of the local linear estimator, under the SSS assumption that $H_p(x) = H_m(x)$. Thus,
the first order bias term has been removed. The result now follows from the fact that the next term
in the bias of the local linear estimator is of order $O(\mathrm{tr}^2(H))$, whose contribution to the risk is $O(\mathrm{tr}^4(H))$.
By assuming $2\ell$ derivatives are matched, we get the rate $n^{-(4+4\ell)/(4+4\ell+D)}$. When $p$ is estimated
from the data, the risk will be inflated by $N^{-4/(4+D)}$ assuming standard smoothness assumptions
on $p$. This term will not dominate the improved rate $n^{-(4+4\ell)/(4+4\ell+D)}$ as long as $N > n^\ell$. The
assumption that $H_m = H_p$ can be replaced by the more realistic assumption that $H_m = g(p; \theta)$
for some parameterized family of functions $g(\cdot\,; \theta)$. Semiparametric methods can then be used to
estimate $\theta$. This approach is taken in the following section.
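A sketch of the bias correction of Eq. (22) under the assumption H_m = H_p, with the density Hessian estimated by finite differences of a KDE built from the unlabeled sample (all implementation choices here are ours):

```python
import numpy as np

def bias_corrected_estimate(x, X_lab, Y_lab, X_unlab, H, mu2=1.0, eps=1e-2):
    m_hat = local_linear(x, X_lab, Y_lab, H)    # helper sketched earlier

    def kde(z, h=0.5):                          # unnormalized Gaussian KDE
        return np.mean(np.exp(-0.5 * np.sum(((X_unlab - z) / h) ** 2, axis=1)))

    # Estimate H_p(x) by second-order finite differences, and plug it in
    # for H_m(x) as the SSS assumption H_m = H_p licenses.
    D = len(x)
    Hp = np.zeros((D, D))
    for a in range(D):
        for b in range(D):
            ea, eb = np.eye(D)[a] * eps, np.eye(D)[b] * eps
            Hp[a, b] = (kde(x + ea + eb) - kde(x + ea - eb)
                        - kde(x - ea + eb) + kde(x - ea - eb)) / (4 * eps**2)
    return m_hat - 0.5 * mu2**2 * np.trace(Hp @ H)   # Eq. (22)
```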
5.2 Improved Bandwidth Selection
Let $\hat H$ be the estimated bandwidth using the labeled data. We will now show how a bandwidth
$\hat H_*$ can be estimated using the labeled and unlabeled data together, such that, under appropriate
assumptions,
$$\limsup_{n \to \infty} \frac{|R(\hat H_*) - R(H_*)|}{|R(\hat H) - R(H_*)|} = 0, \qquad \text{where } H_* = \arg\min_H R(H). \tag{23}$$
Therefore, the unlabeled data allow us to construct an estimator that gets closer to the oracle risk.
The improvement is weaker than the bias adjustment method. But it has the virtue that the optimal
local linear rate is maintained even if the proposed model linking $H_m$ to $p$ is incorrect.
We begin in one dimension to make the ideas clear. Let $\hat m_H$ denote the local linear estimator with
bandwidth $H \in \mathbb{R}$, $H > 0$. To use the unlabeled data, note that the optimal (global) bandwidth
is $H_* = \big(c_2 B / (4 n c_1 A)\big)^{1/5}$ where $A = \int m''(x)^2\, dx$ and $B = \int dx / p(x)$. Let $\hat p(x)$ be the kernel
density estimator of $p$ using $X_1, \ldots, X_N$ and bandwidth $h = O(N^{-1/5})$. We assume
(SSS) $\quad m''(x) = G_\theta(p)$ for some function $G$ depending on finitely many parameters $\theta$.
Now let $\widehat{m''}(x) = G_{\hat\theta}(\hat p)$, and define
$$\hat H_* = \left( \frac{c_2\, \hat B}{4 n c_1\, \hat A} \right)^{1/5}$$
where $\hat A = \int \big(\widehat{m''}(x)\big)^2\, dx$ and $\hat B = \int dx / \hat p(x)$.
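In code, the plug-in bandwidth is explicit; a sketch (ours), where m2_of_p stands for the assumed link G mapping density values to m''(x):

```python
import numpy as np

def plugin_H_star(X_all, m2_of_p, n, c1=1.0, c2=1.0, grid=None):
    """H_hat* = (c2*B_hat / (4*n*c1*A_hat))^(1/5), with A_hat and B_hat
    computed from a KDE of p on all N points (labeled + unlabeled)."""
    grid = np.linspace(X_all.min(), X_all.max(), 400) if grid is None else grid
    h = len(X_all) ** (-0.2)                        # KDE bandwidth O(N^(-1/5))
    p_hat = np.array([np.mean(np.exp(-0.5 * ((X_all - g) / h) ** 2))
                      / (h * np.sqrt(2 * np.pi)) for g in grid])
    dx = grid[1] - grid[0]
    A_hat = np.sum(m2_of_p(p_hat) ** 2) * dx        # integral of (m'')^2
    B_hat = np.sum(1.0 / np.clip(p_hat, 1e-6, None)) * dx   # integral of 1/p
    return (c2 * B_hat / (4 * n * c1 * A_hat)) ** 0.2
```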
Theorem 4. Suppose that $\widehat{m''}(x) - m''(x) = O_P(N^{-\xi})$ where $\xi > \tfrac25$. Let $N = N(n) \to \infty$ as
$n \to \infty$. If $N / n^{1/4} \to \infty$, then
$$\limsup_{n \to \infty} \frac{|R(\hat H_*) - R(H_*)|}{|R(\hat H) - R(H_*)|} = 0. \tag{24}$$
Proof. The risk is
$$R(H) = c_1 H^4 \int \big(m''(x)\big)^2\, dx + \frac{c_2}{n H} \int \frac{dx}{p(x)} + o\Big( \frac{1}{n H} \Big). \tag{25}$$
The oracle bandwidth is $H_* = c_3 / n^{1/5}$ and then $R(H_*) = O(n^{-4/5})$. Now let $\hat H$ be the bandwidth
estimated by cross-validation. Then, since $R'(H_*) = 0$ and $H_* = O(n^{-1/5})$, we have
$$R(\hat H) = R(H_*) + \frac{(\hat H - H_*)^2}{2}\, R''(H_*) + O\big(|\hat H - H_*|^3\big) \tag{26}$$
$$= R(H_*) + \frac{(\hat H - H_*)^2}{2}\, O(n^{-2/5}) + O\big(|\hat H - H_*|^3\big). \tag{27}$$
From Girard (1998), $\hat H - H_* = O_P(n^{-3/10})$. Hence, $R(\hat H) - R(H_*) = O_P(n^{-1})$. Also, $\hat p(x) - p(x) = O(N^{-2/5})$. Since $\widehat{m''}(x) - m''(x) = O_P(N^{-\xi})$,
$$\hat H_* - H_* = O_P\Big( \frac{N^{-\xi}}{n^{1/5}} \Big) + O_P\Big( \frac{N^{-2/5}}{n^{1/5}} \Big). \tag{28}$$
The first term is $o_P(n^{-3/10})$ since $N > n^{1/4}$. The second term is $o_P(n^{-3/10})$ since $\xi > 2/5$. Thus
$R(\hat H_*) - R(H_*) = o_P(1/n)$ and the result follows.
The proof in the multidimensional case is essentially the same as in the one dimensional case, except
that we use the multivariate version of Girard's result, namely, $\hat H - H_* = O_P(n^{-(D+2)/(2(D+4))})$.
This leads to the following result.
Theorem 5. Let $N = N(n)$. If $N / n^{D/4} \to \infty$ and $\hat\theta - \theta = O_P(N^{-\xi})$ for some $\xi > \frac{2}{4+D}$, then
$$\limsup_{n \to \infty} \frac{|R(\hat H_*) - R(H_*)|}{|R(\hat H) - R(H_*)|} = 0. \tag{29}$$
5.3 Averaging over Level Sets
Recall that SSS is motivated by the intuition that high density level sets should correspond to clusters
of similar objects. Another approach to quantifying SSS is to make this cluster assumption explicit.
Rigollet (2006) shows one way to do this in classification. Here we focus on regression.
Suppose that $L = \{x : p(x) > \lambda\}$ can be decomposed into a finite number of connected, compact,
convex sets $C_1, \ldots, C_g$ where $\lambda$ is chosen so that $L^c$ has negligible probability. For $N$ large we can
replace $L$ with $\hat L = \{x : \hat p(x) > \lambda\}$ with small loss in accuracy, where $\hat p$ is an estimate of $p$ using
the unlabeled data; see Rigollet (2006) for details. Let $k_j = \sum_{i=1}^n I(X_i \in C_j)$ and for $x \in C_j$
define
$$\hat m(x) = \frac{\sum_{i=1}^n Y_i\, I(X_i \in C_j)}{k_j}. \tag{30}$$
Thus, $\hat m(x)$ simply averages the labels of the data that fall in the set to which $x$ belongs. If the
regression function is slowly varying over this set, the risk should be small. A similar estimator
is considered by Cortes and Mohri (2006), but they do not provide estimates of the risk.
Theorem 6. The risk of $\hat m(x)$ for $x \in L \cap C_j$ is bounded by
$$O\Big( \frac{1}{n\, q_j} \Big) + O\big( \xi_j^2\, \delta_j^2 \big) \tag{31}$$
where $\xi_j = \sup_{x \in C_j} \|\nabla m(x)\|$, $\delta_j = \mathrm{diameter}(C_j)$ and $q_j = P(X \in C_j)$.
Proof. Since the $k_j$ are Binomial, $k_j = n q_j\, (1 + o(1))$ almost surely. Thus, the variance of $\hat m(x)$
is $O(1/(n q_j))$. The mean, given $X_1, \ldots, X_n$, is
$$\frac{1}{k_j} \sum_{i : X_i \in C_j} m(X_i) = m(x) + \frac{1}{k_j} \sum_{i : X_i \in C_j} \big( m(X_i) - m(x) \big). \tag{32}$$
Now $m(X_i) - m(x) = (X_i - x)^\top \nabla m(u_i)$ for some $u_i$ between $x$ and $X_i$. Hence, $|m(X_i) - m(x)| \le \|X_i - x\|\, \sup_{x \in C_j} \|\nabla m(x)\|$ and so the bias is bounded by $\xi_j\, \delta_j$.
This result reveals an interesting bias-variance tradeoff. Making $\lambda$ smaller decreases the variance
and increases the bias. Suppose the two terms are balanced at $\lambda = \lambda_*$. Then we will beat the usual
rate of convergence if $q_j(\lambda_*) > n^{-D/(4+D)}$.
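A one-dimensional sketch of the level-set estimator (ours): threshold a kernel density estimate, split the retained region into connected components, and average the labels within the component containing the query point:

```python
import numpy as np

def level_set_regression(X_lab, Y_lab, X_unlab, lam, h=0.5):
    """1-D sketch of Eq. (30): components of {p_hat > lam}, label averages within."""
    X_all = np.concatenate([X_lab, X_unlab])
    grid = np.linspace(X_all.min(), X_all.max(), 500)
    p_hat = np.array([np.mean(np.exp(-0.5 * ((X_all - g) / h) ** 2)) for g in grid])
    keep = p_hat > lam
    comps, start = [], None                       # connected grid components
    for i, k in enumerate(keep):
        if k and start is None:
            start = i
        elif not k and start is not None:
            comps.append((grid[start], grid[i - 1]))
            start = None
    if start is not None:
        comps.append((grid[start], grid[-1]))

    def predict(x):
        for lo, hi in comps:
            if lo <= x <= hi:
                inside = (X_lab >= lo) & (X_lab <= hi)
                if inside.any():
                    return Y_lab[inside].mean()   # Eq. (30)
        return Y_lab.mean()                       # fallback off the level set
    return predict, comps
```

Larger lam gives smaller components (smaller bias via delta_j) but fewer labeled points per component (larger variance), mirroring the tradeoff above.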
6 Conclusion
Semi-supervised methods have been very successful in many problems. Our results suggest that the
standard explanations for this success are not correct. We have indicated some new approaches to
understanding and exploiting the relationship between the labeled and unlabeled data. Of course, we
make no claim that these are the only ways of incorporating unlabeled data. But our results indicate
that decoupling the manifold assumption and the semi-supervised smoothness assumption is crucial
to clarifying the problem.
7 Acknowledgments
We thank Partha Niyogi for several interesting discussions. This work was supported in part by NSF
grant CCF-0625879.
References
Belkin, M., Niyogi, P. and Sindhwani, V. (2005). On manifold regularization. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTAT 2005).
Bickel, P. and Li, B. (2006). Local polynomial regression on unknown manifolds. Tech. rep., Department of Statistics, UC Berkeley.
Castelli, V. and Cover, T. (1996). The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Trans. on Info. Theory 42 2101-2117.
Cortes, C. and Mohri, M. (2006). On transductive regression. In Advances in Neural Information Processing Systems (NIPS), vol. 19.
Fan, J. (1993). Local linear regression smoothers and their minimax efficiencies. The Annals of Statistics 21 196-216.
Girard, D. (1998). Asymptotic comparison of (partial) cross-validation, GCV and randomized GCV in nonparametric regression. Ann. Statist. 12 315-334.
Levina, E. and Bickel, P. (2005). Maximum likelihood estimation of intrinsic dimension. In Advances in Neural Information Processing Systems (NIPS), vol. 17.
Niyogi, P. (2007). Manifold regularization and semi-supervised learning: Some theoretical analyses. Tech. rep., Departments of Computer Science and Statistics, University of Chicago.
Rigollet, P. (2006). Generalization error bounds in semi-supervised classification under the cluster assumption. arxiv.org/math/0604233.
Sindhwani, V., Niyogi, P., Belkin, M. and Keerthi, S. (2005). Linear manifold regularization for large scale semi-supervised learning. In Proc. of the 22nd ICML Workshop on Learning with Partially Classified Training Data.
Smola, A. and Kondor, R. (2003). Kernels and regularization on graphs. In Conference on Learning Theory, COLT/KW.
Tsang, I. and Kwok, J. (2006). Large-scale sparsified manifold regularization. In Advances in Neural Information Processing Systems (NIPS), vol. 19.
Zhou, D., Bousquet, O., Lal, T., Weston, J. and Schölkopf, B. (2004). Learning with local and global consistency. In Advances in Neural Information Processing Systems (NIPS), vol. 16.
Zhu, X. (2006). Semi-supervised learning literature review. Tech. rep., University of Wisconsin.
Zhu, X., Ghahramani, Z. and Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. In ICML-03, 20th International Conference on Machine Learning.
2,621 | 3,377 | EEG-Based Brain-Computer Interaction: Improved
Accuracy by Automatic Single-Trial Error Detection
Pierre W. Ferrez
IDIAP Research Institute
Centre du Parc
Av. des Pr?es-Beudin 20
1920 Martigny, Switzerland
[email protected]
Jos?e del R. Mill?an
IDIAP Research Institute
Centre du Parc
Av. des Pr?es-Beudin 20
1920 Martigny, Switzerland
[email protected] ?
Abstract
Brain-computer interfaces (BCIs), as any other interaction modality based on
physiological signals and body channels (e.g., muscular activity, speech and gestures), are prone to errors in the recognition of subject?s intent. An elegant approach to improve the accuracy of BCIs consists in a verification procedure directly based on the presence of error-related potentials (ErrP) in the EEG recorded
right after the occurrence of an error. Six healthy volunteer subjects with no prior
BCI experience participated in a new human-robot interaction experiment where
they were asked to mentally move a cursor towards a target that can be reached
within a few steps using motor imagination. This experiment confirms the previously reported presence of a new kind of ErrP. These ?Interaction ErrP? exhibit a
first sharp negative peak followed by a positive peak and a second broader negative
peak (?290, ?350 and ?470 ms after the feedback, respectively). But in order to
exploit these ErrP we need to detect them in each single trial using a short window following the feedback associated to the response of the classifier embedded
in the BCI. We have achieved an average recognition rate of correct and erroneous
single trials of 81.8% and 76.2%, respectively. Furthermore, we have achieved an
average recognition rate of the subject?s intent while trying to mentally drive the
cursor of 73.1%. These results show that it?s possible to simultaneously extract
useful information for mental control to operate a brain-actuated device as well
as cognitive states such as error potentials to improve the quality of the braincomputer interaction. Finally, using a well-known inverse model (sLORETA), we
show that the main focus of activity at the occurrence of the ErrP are, as expected,
in the pre-supplementary motor area and in the anterior cingulate cortex.
1 Introduction
* This work is supported by the European IST Programme FET Project FP6-003758 and by the Swiss National Science Foundation NCCR "IM2". This paper only reflects the authors' views and funding agencies are not liable for any use that may be made of the information contained herein.

People with severe motor disabilities (spinal cord injury (SCI), amyotrophic lateral sclerosis (ALS), etc.) need alternative ways of communication and control for their everyday life. Over the past two decades, numerous studies proposed electroencephalogram (EEG) activity for direct brain-computer interaction [1]-[2]. EEG-based brain-computer interfaces (BCIs) provide disabled people with new tools for control and communication and are promising alternatives to invasive methods. However, as any other interaction modality based on physiological signals and body channels (e.g., muscular activity, speech and gestures), BCIs are prone to errors in the recognition of the subject's intent, and those errors can be frequent. Indeed, even well-trained subjects rarely reach 100% of success. In
contrast to other interaction modalities, a unique feature of the "brain channel" is that it conveys both information from which we can derive mental control commands to operate a brain-actuated device as well as information about cognitive states that are crucial for a purposeful interaction, all this on the millisecond range. One of these states is the awareness of erroneous responses, which a number of groups have recently started to explore as a way to improve the performance of BCIs [3]-[6].
In particular, [6] recently reported the presence of a new kind of error potentials (ErrP) elicited by erroneous feedback provided by a BCI during the recognition of the subject's intent. In this study subjects were asked to reach a target by sending repetitive manual commands to pass over several steps. The system was executing commands with an 80% accuracy, so that at each step there was a 20% probability that the system delivered an erroneous feedback. The main components of these "Interaction ErrP" are a negative peak 250 ms after the feedback, a positive peak 320 ms after the feedback and a second broader negative peak 450 ms after the feedback. To exploit these ErrP for BCIs, it is mandatory to detect them no longer in grand averages but in each single trial using a short window following the feedback associated to the response of the BCI. The reported average recognition rates of correct and erroneous single trials are 83.5% and 79.2%, respectively. These results tend to show that ErrP could be a potential tool to improve the quality of the brain-computer interaction. However, it is to note that in order to isolate the issue of the recognition of ErrP from the more difficult and general problem of a whole BCI, where erroneous feedback can be due to non-optimal performance of both the interface (i.e., the classifier embedded into the interface) and the user himself, the subjects delivered commands manually. The key issue now is to investigate whether subjects also show ErrP while already engaged in tasks that require a high level of concentration such as motor imagination, and no longer only in easy tasks such as pressing a key.
The objective of the present study is to investigate the presence of these ErrP in a real BCI task. Subjects no longer deliver manual commands, but focus on motor imagination tasks to reach targets randomly selected by the system. In this paper we report new experimental results recorded with six healthy volunteer subjects with no prior BCI experience during a simple human-robot interaction that confirm the previously reported existence of a new kind of ErrP [6], which is satisfactorily recognized in single trials using a short window just after the feedback. Furthermore, using a window just before the feedback, we report a 73.1% accuracy in the recognition of the subject's intent during mental control of the BCI. This confirms the fact that EEG simultaneously conveys information from which we can derive mental commands as well as information about cognitive states, and shows that both can be sufficiently well recognized in each single trial to provide the subject with an improved brain-computer interaction. Finally, using a well-known inverse model called sLORETA [7] that non-invasively estimates the intracranial activity from scalp EEG, we show that the main focus of activity at the occurrence of ErrP seems to be located in the pre-supplementary motor area (pre-SMA) and in the anterior cingulate cortex (ACC), as expected [8], [9].
Figure 1: Illustration of the protocol. (1) The target (blue) appears 2 steps on the left side of the cursor (green).
(2) The subject is imagining a movement of his/her left hand and the cursor moves 1 step to the left. (3) The
subject still focuses on his/her left hand, but the system moves the cursor in the wrong direction. (4) Correct
move to the left, compensating the error. (5) The cursor reaches the target. (6) A new target (red) appears 3
steps on the right side of the cursor, the subject will now imagine a movement of his/her right foot. The system
moved the cursor with an error rate of 20%; i.e., at each step, there was a 20% probability that the robot made
a movement in the wrong direction.
2 Experimental setup
The first step to integrate ErrP detection in a BCI is to design a protocol where the subject focusses on a mental task for device control and on the feedback delivered by the BCI for ErrP detection. To test the ability of BCI users to concentrate simultaneously on a mental task and to be aware of the BCI feedback at each single trial, we have simulated a human-robot interaction task where the subject has to bring the robot to targets 2 or 3 steps either to the left or to the right. This virtual interaction is implemented by means of a green square cursor that can appear on any of 20 positions along a horizontal line. The goal with this protocol is to bring the cursor to a target that randomly appears either on the left (blue square) or on the right (red square) of the cursor. The target is no further away than 3 positions from the cursor (symbolizing the current position of the robot). This prevents the subject from habituating to one of the stimuli since the cursor reaches the target within a small number of steps. Figure 1 illustrates the protocol with the target (blue) initially positioned 2 steps away on the left side of the cursor (green). An error occurred at step (3) so that the cursor reaches the target in 5 steps. Each target corresponds to a specific mental task. The subjects were asked to imagine a movement of their left hand for the left target and to imagine a movement of their right foot for the right target (note that subject no. 1 selected left foot for the left target and right hand for the right target). However, since the subjects had no prior BCI experience, the system was not moving the cursor according to the mental commands of the subject, but with an error rate of 20%, to avoid random or totally biased behavior of the cursor.
Six healthy volunteer subjects with no prior BCI experience participated in these experiments. After the presentation of the target, the subject focuses on the corresponding mental task until the cursor reaches the target. The system moved the cursor with an error rate of 20%; i.e., at each step, there was a 20% probability that the cursor moved in the opposite direction. When the cursor reached a target, it briefly turned from green to light green and then a new target was randomly selected by the system. If the cursor didn't reach the target after 10 steps, a new target was selected. As shown in Figure 2, while the subject focuses on a specific mental task, the system delivers a feedback about every 2 seconds. This provides a window just before the feedback for BCI classification and a window just after the feedback for ErrP detection for every single trial. Subjects performed 10 sessions of 3 minutes on 2 different days (the delay between the two days of measurements varied from 1 week to 1 month), corresponding to ~75 single trials per session. The 20 sessions were split into 4 groups of 5, so that classifiers were built using one group and tested on the following group. The classification rates presented in Section 3 are therefore the average of 3 prediction performances: classification of group n + 1 using group n to build a classifier. This rule applies for both mental task classification and ErrP detection.
Figure 2: Timing of the protocol. The system delivers a feedback about every 2 seconds; this provides a window just before the feedback for BCI classification and a window just after the feedback for ErrP detection for every single trial. As a new target is presented, the subject focuses on the corresponding mental task until the target is reached.
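For concreteness, the cursor task with its 20% error rate can be simulated in a few lines (our sketch of the task logic, not the authors' software):

```python
import random

def run_trial(target_offset, p_error=0.2, max_steps=10, rng=None):
    """target_offset: signed steps to the target (e.g. -2 = two steps left).
    Returns the per-step feedback labels ('correct' / 'error')."""
    rng = rng or random.Random(0)
    pos, feedback = 0, []
    for _ in range(max_steps):
        intended = -1 if target_offset < pos else 1       # subject's intent
        move = intended if rng.random() > p_error else -intended
        pos += move
        feedback.append('correct' if move == intended else 'error')
        if pos == target_offset:
            break
    return feedback

print(run_trial(-2))   # e.g. ['correct', 'error', 'correct', 'correct', 'correct']
```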
EEG potentials were acquired with a portable system (Biosemi ActiveTwo) by means of a cap with
64 integrated electrodes covering the whole scalp uniformly. The sampling rate was 512 Hz and
signals were measured at full DC. Raw EEG potentials were first spatially filtered by subtracting
from each electrode the average potential (over the 64 channels) at each time step. The aim of this
re-referencing procedure is to suppress the average brain activity, which can be seen as underlying
background activity, so as to keep the information coming from local sources below each electrode.
Then for off-line mental tasks classification, the power spectrum density (PSD) of EEG channels
was estimated over a window of one second just before the feedback. PSD was estimated using
the Welch method resulting in spectra with a 2 Hz resolution from 6 to 44 Hz. The most relevant
EEG channels and frequencies were selected by a simple feature selection algorithm based on the
overlap of the distributions of the different classes. For off-line ErrP detection, we applied a 1-10
Hz bandpass filter as ErrP are known to be a relatively slow cortical potential. EEG signals were
then subsampled from 512 Hz to 64 Hz (i.e., we took one point out of 8) before classification,
which was entirely based on temporal features. Indeed the actual input vector for the statistical
classifier described below is a 150 ms window starting 250 ms after the feedback for channels FCz
and Cz. The choice of these channels follows the fact that ErrP are characterized by a fronto-central
distribution along the midline.
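The two per-trial feature pipelines can be sketched as follows (our code; it assumes scipy and trials stored as channels x samples arrays at 512 Hz, and the exact window lengths are our guesses where the text leaves them open):

```python
import numpy as np
from scipy.signal import welch, butter, filtfilt

FS = 512

def mi_features(eeg_1s):
    """Motor-imagery features: Welch PSD at ~2 Hz resolution over 6-44 Hz,
    after common-average re-referencing (eeg_1s: channels x 512 samples)."""
    car = eeg_1s - eeg_1s.mean(axis=0, keepdims=True)   # common average reference
    freqs, psd = welch(car, fs=FS, nperseg=256)         # 512/256 -> 2 Hz bins
    band = (freqs >= 6) & (freqs <= 44)
    return psd[:, band]

def errp_features(eeg_post, channels=(0, 1)):
    """ErrP features: 1-10 Hz bandpass, downsample 512 -> 64 Hz, then a 150 ms
    window starting 250 ms after the feedback on e.g. channels FCz and Cz."""
    car = eeg_post - eeg_post.mean(axis=0, keepdims=True)
    b, a = butter(4, [1, 10], btype='band', fs=FS)
    filt = filtfilt(b, a, car, axis=1)[:, ::8]          # keep 1 point out of 8
    start, stop = int(0.25 * 64), int(0.40 * 64)        # 250-400 ms window
    return filt[list(channels), start:stop].ravel()
```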
For both mental tasks and ErrP classification, the two different classes (left or right for mental tasks
and error or correct for ErrP) are recognized by a Gaussian classifier. The output of the statistical
classifier is an estimation of the posterior class probability distribution for a single trial; i.e., the
probability that a given single trial belongs to one of the two classes. In this statistical classifier,
every Gaussian unit represents a prototype of one of the classes to be recognized, and we use several
prototypes per class. During learning, the centers of the classes of the Gaussian units are pulled
towards the trials of the class they represent and pushed away from the trials of the other class. No
artifact rejection algorithm (for removing or filtering out eye or muscular movements) was applied
and all trials were kept for analysis. It is worth noting, however, that after a visual a-posteriori check
of the trials we found no evidence of muscular artifacts that could have contaminated one condition
differently from the other. More details on the Gaussian classifier and the analysis procedure to rule
out ocular/muscular artifacts as the relevant signals for both classifiers (BCI itself and ErrP) can be
found in [10].
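As a compact stand-in for the prototype-based Gaussian classifier described above (not the authors' update rule), one can fit a small Gaussian mixture per class and output posterior class probabilities, here via scikit-learn:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class PrototypeGaussianClassifier:
    """Per-class Gaussian mixtures (several prototypes per class); outputs
    posterior class probabilities assuming equal class priors."""
    def __init__(self, n_prototypes=3):
        self.k = n_prototypes

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.gmms = {c: GaussianMixture(self.k, covariance_type='diag',
                                        random_state=0).fit(X[y == c])
                     for c in self.classes}
        return self

    def posterior(self, X):
        ll = np.column_stack([self.gmms[c].score_samples(X) for c in self.classes])
        ll -= ll.max(axis=1, keepdims=True)     # stabilize before exponentiating
        p = np.exp(ll)
        return p / p.sum(axis=1, keepdims=True)
```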
Figure 3: (Top) Discriminant power (DP) of frequencies. Sensory motor rhythm (12-16 Hz) and some beta
components are discriminant for all subjects. (Bottom) Discriminant power (DP) of electrodes. The most
relevant electrodes are in the central area (C3, C4 and Cz) according to the ERD/ERS location for hand and
foot movement or imagination.
3
3.1
Experimental results
Mental tasks classification
Subjects were asked to imagine a movement of their left hand when the left target was proposed and
to imagine a movement of their right foot when the right target was proposed (note that subject n° 1
was imagining left foot for the left target and right hand for the right target). The most relevant
EEG channels and frequencies were selected by a simple feature selection algorithm based on the
overlap of the distributions of the different classes. Figure 3 shows the discriminant power (DP) of
frequencies (top) and electrodes (bottom) for the 6 subjects. For frequencies, the DP is based on the
best electrode, and for electrodes it is based on the best frequency. Table 1 shows the classification
rates for the two mental tasks and the general BCI accuracy for the 6 subjects and their average; it
also shows the features (electrodes and frequencies) used for classification.
For all 6 subjects, the 12-16 Hz band (sensory motor rhythm (SMR)) appears to be relevant for
classification. Subjects 1, 3 and 5 show a peak in DP for frequencies around 25 Hz (beta band). For
subject 2 this peak in the beta band is centered at 20 Hz and for subject 6 it is centered at 30 Hz.
Finally subject 4 shows no particular discriminant power in the beta band. Previous studies confirm
these results. Indeed, SMR and beta rhythm over left and/or right sensorimotor cortex have been successfully used for BCI control [11]. Event-related de-synchronization (ERD) and synchronization
(ERS) refer to large-scale changes in neural processing. During periods of inactivity, brain areas are
in a kind of idling state with large populations of neurons firing in synchrony resulting in an increase
of amplitude of specific alpha (8-12 Hz) and beta (12-26 Hz) bands. During activity, populations of
neurons work at their own pace and the power of this idling state is reduced; the cortex has become
desynchronized [12]. In our case, the most relevant electrodes for all subjects are in the C3, C4 or
Cz area. These locations confirm previous studies since C3 and C4 areas usually show ERD/ERS
during hands movement or imagination whereas foot movement or imagination are focused in the
Cz area [12].
Table 1: Percentages (mean and standard deviations) of correctly recognized single trials for the 2 motor
imagination tasks for the 6 subjects and the average of them. All subjects show classification rates of about
70-75% for motor imagination and the general BCI accuracy is 73%. Features used for classification are also
shown.
Subject  Electrodes            Frequencies [Hz]     Left hand [%]   Right foot [%]   Accuracy [%]
#1*      C3 CP3 CP1 CPz CP2    10 12 14 26          77.2 ± 3.7      70.4 ± 3.2       73.8 ± 4.8
#2       C4 CP4 P4             10 12 14 18 20 22    71.8 ± 9.0      80.9 ± 7.1       76.4 ± 6.4
#3       C3 C4 C6 CP6 CP4      14 16 26             76.4 ± 5.8      62.6 ± 6.7       69.5 ± 9.8
#4       Cz C2 C4              12 14                79.6 ± 1.6      66.3 ± 10.1      73.0 ± 9.4
#5       Cz C4 CP4             12 24 26             73.5 ± 16.1     71.9 ± 13.3      72.7 ± 1.1
#6       CPz Cz CP6 CP4        12 14 28 30 32       77.9 ± 7.4      69.0 ± 13.7      73.5 ± 6.3
Avg      -                     -                    76.1 ± 2.9      70.2 ± 6.2       73.1 ± 4.2
* Left foot and Right hand
All 6 subjects show classification rates of about 70-75% for motor imagination. These figures were
achieved with a relatively low number of features (up to 5 electrodes and up to 6 frequencies) and the
general BCI accuracy is 73%. This level of performance can appear relatively low for a 2-class BCI.
However, keeping in mind, first, that all subjects had no prior BCI experience and, second, that these
figures were obtained exclusively in prediction (i.e., classifiers were always tested on new data), the
performance is satisfactory.
3.2
Error-related potentials
Figure 4 shows the averages of error trials (red curve), of correct trials (green curve) and the difference error-minus-correct (blue curve) for channel FCz for the six subjects (top). A first small
positive peak shows up about ∼230 ms after the feedback (t=0). A negative peak clearly appears
∼290 ms after the feedback for 5 subjects. This negative peak is followed by a positive peak ∼350
ms after the feedback. Finally, a second broader negative peak occurs about ∼470 ms after the
feedback. Figure 4 also shows the scalp potentials topographies (right) for the average of the six
subjects, at the occurrence of the four previously described peaks: a first fronto-central positivity
appears after ∼230 ms, followed by a fronto-central negativity at ∼290 ms, a fronto-central positivity at ∼350 ms and a fronto-central negativity at ∼470 ms. All six subjects show similar ErrP time
courses whose amplitudes slightly differ from one subject to the other. These experiments seem to
confirm the existence of a new kind of error-related potentials [6]. Furthermore, the fronto-central
focus at the occurrence of the different peaks tends to confirm the hypothesis that ErrP are generated
in a deep brain region called anterior cingulate cortex [8][9] (see also Section 3.3).
Table 2 reports the recognition rates (mean and standard deviations) for the six subjects plus the
average of them. These results show that single-trial recognition of erroneous and correct responses
are above 75% and 80%, respectively. Besides the crucial importance of integrating ErrP into the
BCI in a way that keeps the subject comfortable, for example by reducing as much as possible the
rejection of actually correct commands, a key point for the exploitation of the automatic recognition
of interaction errors is that they translate into an actual improvement of the performance of the BCI.
Table 2 also shows the performance of the BCI in terms of bit rate (bits per trial) with and without
ErrP detection, and the induced increase in performance (for details see [6]). The benefit of
integrating ErrP detection is obvious since it at least doubles the bit rate for five of the six subjects
and the average increase is 124%.
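The bit-rate computation itself is deferred to [6]; for orientation, the sketch below uses the standard Wolpaw information-transfer-rate formula for an N-class BCI with accuracy P. It reproduces the no-ErrP figures in Table 2 (e.g., P = 0.738 for subject 1 gives 0.170 bits/trial), though the exact computation in [6] may differ:

```python
import math

def wolpaw_bits_per_trial(p, n_classes=2):
    """Wolpaw ITR: B = log2 N + P log2 P + (1-P) log2((1-P)/(N-1)), 0 < P < 1."""
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

print(wolpaw_bits_per_trial(0.738))  # ~0.170 bits/trial, as in Table 2
```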
Figure 4: (Top) Averages of error trials (red curve), of correct trials (green curve) and the difference error-minus-correct (blue curve) for channel FCz for the six subjects. All six subjects show similar ErrP time courses
whose amplitudes slightly differ from one subject to the other. (Bottom) Scalp potentials topographies for the
average of the six subjects, at the occurrence of the four described peaks. All focuses are located in frontocentral areas, over the anterior cingulate cortex (ACC).
Table 2: Percentages (mean and standard deviations) of correctly recognized error trials and correct trials for
the six subjects and their average. The table also shows the BCI performance in terms of bit rate and its
increase using ErrP detection. Classification rates are above 75% and 80% for error trials and correct trials,
respectively. The benefit of integrating ErrP detection is obvious since it at least doubles the bit rate for five of
the six subjects.
Subject  Error [%]     Correct [%]   BCI accuracy [%]   Bit rate (no ErrP)   Bit rate (ErrP)   Increase [%]
                                     (from Table 1)     [bits/trial]         [bits/trial]
#1       77.7 ± 13.9   76.8 ± 5.4    73.8 ± 4.8         0.170                0.345             103
#2       75.4 ± 5.5    80.1 ± 7.9    76.4 ± 6.4         0.212                0.385             82
#3       74.0 ± 12.9   85.9 ± 1.6    69.5 ± 9.8         0.113                0.324             187
#4       84.3 ± 7.7    80.1 ± 5.5    73.0 ± 9.4         0.159                0.403             154
#5       75.3 ± 6.0    85.6 ± 5.2    72.7 ± 1.1         0.154                0.371             141
#6       70.7 ± 11.4   82.2 ± 5.1    73.5 ± 6.3         0.166                0.333             101
Avg      76.2 ± 4.6    81.8 ± 3.5    73.1 ± 4.2         0.160                0.359             124
3.3
Estimation of intracranial activity
Estimating the neuronal sources that generate a given potential map at the scalp surface (EEG)
requires the solution of the so-called inverse problem. This inverse problem is always initially
undetermined, i.e., there is no unique solution, since a given potential map at the surface can be
generated by many different intracranial activity maps. The inverse problem requires supplementary
a priori constraints in order to be uniquely solved. The ultimate goal is to unmix the signals
measured at the scalp and to attribute to each brain area its own estimated temporal activity. The
sLORETA inverse model [7] is a standardized low resolution brain electromagnetic tomography.
This software, known for its zero localization error, was used as a localization tool to estimate
the focus of intracranial activity at the occurrence of the four ErrP peaks described in Section 3.2.
Figure 5 shows Talairach slices of localized activity for the grand average of the six subjects at the
occurrence of the four described peaks and at the occurrence of a late positive component showing
up 650 ms after the feedback. As expected, the areas involved in error processing, namely the
pre-supplementary motor area (pre-SMA, Brodmann area 6) and the rostral cingulate zone (RCZ,
Brodmann areas 24 & 32) are systematically activated [8][9]. For the second positive peak (350
ms) and mainly for the late positive component (650 ms), parietal areas are also activated. These
associative areas (somatosensory association cortex, Brodmann areas 5 & 7) could be related to
the fact that the subject becomes aware of the error. It has been proposed that the positive peak
was associated with conscious error recognition in the case of error potentials elicited in a reaction-task
paradigm [13]. In our case, activation of parietal areas 350 ms after the feedback agrees with
this hypothesis.
Figure 5: Talairach slices of localized activity for the grand average of the six subjects at the occurrence of
the four peaks described in Section 3.2 and at the occurrence of a late positive component showing up 650
ms after the feedback. Supplementary motor cortex and anterior cingulate cortex are systematically activated.
Furthermore, for the second positive peak (350 ms) and mainly for the late positive component (650 ms),
parietal areas are also activated. This parietal activation could reflect the fact that the subject is aware of the
error.
4
Discussion
In this study we have reported results on the detection of the neural correlate of error awareness for
improving the performance and reliability of BCI. In particular, we have confirmed the existence of
a new kind of error-related potential elicited in reaction to an erroneous recognition of the subject's
intention. More importantly, we have shown the feasibility of simultaneously and satisfactorily detecting erroneous responses of the interface and classifying motor imagination for device control at
the level of single trials. However, the introduction of an automatic response rejection strongly interferes with the BCI. The user needs to process additional information, which induces a higher workload
and may considerably slow down the interaction. These issues have to be investigated when running
online BCI experiments integrating automatic error detection. Given the promising results obtained
in this simulated human-robot interaction, we are currently working on the actual integration of online ErrP detection into our BCI system. The preliminary results are very promising and confirm
that the online detection of errors is a tool of great benefit, especially for subjects with no prior
BCI experience or showing low BCI performance. In parallel, we are exploring how to increase the
recognition rate of single-trial erroneous and correct responses.
In this study we have also shown that, as expected, typical cortical areas involved in error processing such as pre-supplementary motor area and anterior cingulate cortex are systematically activated
at the occurrence of the different peaks. The software used for the estimation of the intracranial
activity (sLORETA) is only a localization tool. However, Babiloni et al. [14] have recently developed the so-called CCD ("cortical current density") inverse model that estimates the activity of the
cortical mantle. Since ErrP seem to be generated by cortical areas, we plan to use this method to
best discriminate erroneous and correct responses of the interface. As a matter of fact, a key issue to
improve classification is the selection of the most relevant current dipoles out of a few thousands. In
fact, the very preliminary results using the CCD inverse model confirm the reported localization in
the pre-supplementary motor area and in the anterior cingulate cortex and thus we may well expect
a significant improvement in recognition rates by focusing on the dipoles estimated in those specific
brain areas.
More generally, the work described here suggests that it could be possible to recognize in real time
high-level cognitive and emotional states from EEG (as opposed, and in addition, to motor commands) such as alarm, fatigue, frustration, confusion, or attention that are crucial for an effective and
purposeful interaction. Indeed, the rapid recognition of these states will lead to truly adaptive interfaces that customize dynamically in response to changes of the cognitive and emotional/affective
states of the user.
References
[1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113:767–791, 2002.
[2] J. del R. Millán, F. Renkens, J. Mouriño, and W. Gerstner. Non-invasive brain-actuated control of a mobile robot by human EEG. IEEE Transactions on Biomedical Engineering, 51:1026–1033, 2004.
[3] G. Schalk, J.R. Wolpaw, D.J. McFarland, and G. Pfurtscheller. EEG-based communication: presence of an error potential. Clinical Neurophysiology, 111:2138–2144, 2000.
[4] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio. Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2):127–131, 2003.
[5] L.C. Parra, C.D. Spence, A.D. Gerson, and P. Sajda. Response error correction: a demonstration of improved human-machine performance using real-time EEG monitoring. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2):173–177, 2003.
[6] P.W. Ferrez and J. del R. Millán. You are wrong! Automatic detection of interaction errors from brain waves. In Proc. 19th Int. Joint Conf. Artificial Intelligence, 2005.
[7] R.D. Pascual-Marqui. Standardized low resolution brain electromagnetic tomography (sLORETA): technical details. Methods & Findings in Experimental & Clinical Pharmacology, 24D:5–12, 2002.
[8] C.B. Holroyd and M.G.H. Coles. The neural basis of human error processing: reinforcement learning, dopamine and the error-related negativity. Psychological Review, 109:679–709, 2002.
[9] K. Fiehler, M. Ullsperger, and Y. von Cramon. Neural correlates of error detection and error correction: is there a common neuroanatomical substrate? European Journal of Neuroscience, 19:3081–3087, 2004.
[10] P.W. Ferrez and J. del R. Millán. Error-related EEG potentials in brain-computer interfaces. In G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, editors, Toward Brain-Computer Interfacing, pages 291–301. The MIT Press, 2007.
[11] D. McFarland and J.R. Wolpaw. Sensorimotor rhythm-based brain-computer interface (BCI): feature selection by regression improves performance. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13(3):372–379, 2005.
[12] G. Pfurtscheller and F.H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110:1842–1857, 1999.
[13] S. Nieuwenhuis, K.R. Ridderinkhof, J. Blom, G.P.H. Band, and A. Kok. Error-related brain potentials are differently related to awareness of response errors: evidence from an antisaccade task. Psychophysiology, 38:752–760, 2001.
[14] F. Babiloni, C. Babiloni, L. Locche, F. Cincotti, P.M. Rossini, and F. Carducci. High-resolution electroencephalogram: source estimates of Laplacian-transformed somatosensory-evoked potentials using a realistic subject head model constructed from magnetic resonance imaging. Medical & Biological Engineering and Computing, 38:512–519, 2000.
Learning Transformational
Invariants from Natural Movies
Charles F. Cadieu & Bruno A. Olshausen
Helen Wills Neuroscience Institute
University of California, Berkeley
Berkeley, CA 94720
{cadieu, baolshausen}@berkeley.edu
Abstract
We describe a hierarchical, probabilistic model that learns to extract complex motion from movies of the natural environment. The model consists of two hidden
layers: the first layer produces a sparse representation of the image that is expressed in terms of local amplitude and phase variables. The second layer learns
the higher-order structure among the time-varying phase variables. After training on natural movies, the top layer units discover the structure of phase-shifts
within the first layer. We show that the top layer units encode transformational
invariants: they are selective for the speed and direction of a moving pattern,
but are invariant to its spatial structure (orientation/spatial-frequency). The diversity of units in both the intermediate and top layers of the model provides a set
of testable predictions for representations that might be found in V1 and MT. In
addition, the model demonstrates how feedback from higher levels can influence
representations at lower levels as a by-product of inference in a graphical model.
1
Introduction
A key attribute of visual perception is the ability to extract invariances from visual input. In the
realm of object recognition, the goal of invariant representation is quite clear: a successful object
recognition system must be invariant to image variations resulting from different views of the same
object. While spatial invariants are essential for forming a useful representation of the natural environment, there is another, equally important form of visual invariance, namely transformational
invariance. A transformational invariant refers to the dynamic visual structure that remains the same
when the spatial structure changes. For example, the property that a soccer ball moving through the
air shares with a football moving through the air is a transformational invariant; it is specific to how
the ball moves but invariant to the shape or form of the object. Here we seek to learn such invariants
from the statistics of natural movies.
There have been numerous efforts to learn spatial invariants [1, 2, 3] from the statistics of natural
images, especially with the goal of producing representations useful for object recognition [4, 5, 6].
However, there have been few attempts to learn transformational invariants from natural sensory
data. Previous efforts have either relied on using unnatural, hand-tuned stimuli [7, 8, 9], or unrealistic
supervised learning algorithms using only rigid translation of an image [10]. Furthermore, it is
unclear to what extent these models have captured the diversity of transformations in natural visual
scenes or to what level of abstraction their representations produce transformational invariants.
Previous work learning sparse codes of image sequences has shown that it is possible to recover
local, direction-selective components (akin to translating Gabors) [11]. However, this type of model
does not capture the abstract property of motion because each unit is bound to a specific orientation,
spatial-frequency and location within the image?i.e., it still suffers from the aperture problem.
Here we describe a hierarchical probabilistic generative model that learns transformational invariants from unsupervised exposure to natural movies. A key aspect of the model is the factorization
of visual information into form and motion, as compared to simply extracting these properties separately. The latter approach characterizes most models of form and motion processing in the visual
cortical hierarchy [6, 12], but suffers from the fact that information about these properties is not
bound together?i.e., it is not possible to reconstruct an image sequence from a representation in
which form and motion have been extracted by separate and independent mechanisms. While reconstruction is not the goal of vision, the ability to interact with the environment is key, and thus
binding these properties together is likely to be crucial for properly interacting with the world. In
the model we propose here, form and motion are factorized, meaning that extracting one property
depends upon the other. It specifies not only how they are extracted, but how they are combined to
provide a full description of image content.
We show that when such a model is adapted to natural movies, the top layer units learn to extract
transformational invariants. The diversity of units in both the intermediate layer and top layer provides a set of testable predictions for representations that might be found in V1 and MT. The model
also demonstrates how feedback from higher levels can influence representations at lower levels as
a by-product of inference in a graphical model.
2
Hierarchical Model
In this section we introduce our hierarchical generative model of time-varying images. The model
consists of an input layer and two hidden layers as shown in Figure 1. The input layer represents the
time-varying image pixel intensities. The first hidden layer is a sparse coding model utilizing complex basis functions, and shares many properties with subspace-ICA [13] and the standard energy
model of complex cells [14]. The second hidden layer models the dynamics of the complex basis
function phase variables.
2.1
Sparse coding with complex basis functions
In previous work it has been shown that many of the observed response properties of neurons in V1
may be accounted for in terms of a sparse coding model of images [15, 16]:
I(x,t) = \sum_i u_i(t)\, A_i(x) + n(x,t)    (1)
where I(x,t) is the image intensity as a function of space (x ∈ R²) and time, A_i(x) is a spatial basis
function with coefficient u_i, and the term n(x,t) corresponds to Gaussian noise with variance σ_N² that
is small compared to the image variance. The sparse coding model imposes a kurtotic, independent
prior over the coefficients, and when adapted to natural image patches the Ai (x) converge to a set of
localized, oriented, multiscale functions similar to a Gabor wavelet decomposition of images.
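As an illustration, here is a minimal sketch of sampling from the generative model in Eq. (1); the Laplacian draw for the sparse coefficients and all array shapes are our own assumptions:

```python
import numpy as np

def synthesize(A, rng, noise_std=0.1):
    """Draw one image from Eq. (1): I = sum_i u_i A_i + noise.

    A: (n_pixels, n_basis) basis functions; u: sparse (kurtotic) coefficients.
    """
    n_pixels, n_basis = A.shape
    u = rng.laplace(scale=1.0, size=n_basis)      # sparse, kurtotic prior
    return A @ u + noise_std * rng.standard_normal(n_pixels)

rng = np.random.default_rng(0)
A = rng.standard_normal((400, 200))               # e.g. 20x20 pixel patches
image = synthesize(A, rng)
```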
We propose here a generalization of the sparse coding model to complex variables that is primarily
motivated by two observations of natural image statistics. The first observation is that although
the prior is factorial, the actual joint distribution of coefficients, even after learning, exhibits strong
statistical dependencies. These are most clearly seen as circularly symmetric, yet kurtotic distributions among pairs of coefficients corresponding to neighboring basis functions, as first described by
Zetzsche [17]. Such a circularly symmetric distribution strongly suggests that these pairs of coefficients are better described in polar coordinates rather than Cartesian coordinates?i.e., in terms of
amplitude and phase. The second observation comes from considering the dynamics of coefficients
through time. As pointed out by Hyvarinen [3], the temporal evolution of a coefficient in response
to a movie, ui (t), can be well described in terms of the product of a smooth amplitude envelope
multiplied by a quickly changing variable. A similar result from Kording [1] indicates that temporal
continuity in amplitude provides a strong cue for learning local invariances. These results are closely
related to the trace learning rule of Foldiak [18] and slow feature analysis [19].
With these observations in mind, we have modified the sparse coding model by utilizing a complex
basis function model as follows:
I(x,t) = \sum_i \mathrm{Re}\{ z_i^*(t)\, A_i(x) \} + n(x,t)    (2)
where the basis functions now have real and imaginary parts, A_i(x) = A_i^R(x) + j A_i^I(x), and the
coefficients are also complex, with z_i(t) = a_i(t) e^{jφ_i(t)}. (* indicates the complex conjugate and the
notation Re{·} denotes taking the "real part" of the argument.) The resulting generative model can
also be written as:
I(x,t) = \sum_i a_i(t) \left[ \cos\phi_i(t)\, A_i^R(x) + \sin\phi_i(t)\, A_i^I(x) \right] + n(x,t)    (3)
Thus, each pair of basis functions A_i^R, A_i^I forms a 2-dimensional subspace and is controlled by an
amplitude a_i and phase φ_i that determine the position within each subspace. Note that the basis
functions are only functions of space. Therefore, the temporal dynamics within image sequences
will be expressed in the temporal dynamics of the amplitude and phase.
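A sketch of this synthesis step (Eq. (3)), reconstructing one frame from amplitudes, phases, and the paired real/imaginary basis functions; variable names are assumptions:

```python
import numpy as np

def reconstruct_frame(a, phi, A_real, A_imag):
    """Eq. (3): I = sum_i a_i [cos(phi_i) A_i^R + sin(phi_i) A_i^I].

    a, phi: (n_basis,) amplitudes and phases for one frame;
    A_real, A_imag: (n_pixels, n_basis) quadrature basis pairs.
    """
    return A_real @ (a * np.cos(phi)) + A_imag @ (a * np.sin(phi))
```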
The prior over the complex coefficients, z, is designed so as to enforce circularly symmetric distributions and smooth amplitude dynamics as observed from time-varying natural images:
P(a_i(t) \mid a_i(t-1)) \propto e^{-Sp_a(a_i(t)) - Sl_a(a_i(t),\, a_i(t-1))}    (4)
The first term in the exponential imposes a sparse prior on the coefficient amplitudes. Here we
use a cost linear in the amplitude, Sp_a(a_i(t)) = λ a_i(t) for a constant λ (we have found other
kurtotic priors to yield similar results). Since there is no prior over the phases, this will result in
circularly symmetric kurtotic distributions over each subspace. The second term in the exponential
imposes temporal stability on the time rate of change of the amplitudes and is given by
Sl_a(a_i(t), a_i(t−1)) = (a_i(t) − a_i(t−1))².
For a sequence of images the resulting negative log-posterior for the first hidden layer becomes:
"
#2
XX
X
X
X
?
1
E1 =
I
(x,t) ?
<{z
(t) Ai (x)}
+
Sp(a
(t)) +
Sl(ai (t), ai (t?1)) (5)
i
2
i
?
t
x
N
i
i,t
i,t
While this model by no means captures the full joint distribution of coefficients, it does at least
capture the circular symmetric dependencies among pairs of coefficients, which allows for the explicit representation of amplitude and phase. As we shall see, this representation serves as a staging
ground for learning higher-order dependencies over space and time.
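For concreteness, here is a sketch of evaluating the first-layer energy of Eq. (5); σ_N and the sparsity constant λ are treated as free parameters here:

```python
import numpy as np

def energy_E1(I, a, phi, A_real, A_imag, sigma_n=0.1, lam=0.1):
    """Eq. (5) for a (n_pixels, T) movie; a, phi are (n_basis, T)."""
    recon = A_real @ (a * np.cos(phi)) + A_imag @ (a * np.sin(phi))
    err = ((I - recon) ** 2).sum() / (2 * sigma_n ** 2)
    sparse = lam * a.sum()                          # Sp(a) = lam * a
    slow = ((a[:, 1:] - a[:, :-1]) ** 2).sum()      # Sl(a_t, a_{t-1})
    return err + sparse + slow
```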
2.2
Phase Transformations
Given the decomposition into amplitude and phase variables, we now have a non-linear representation of image content that enables us to learn its structure in another linear generative model. In
particular, the dynamics of objects moving in continuous trajectories through the world over short
epochs will be encoded in the population activity of the phase variables φ_i. Furthermore, because
we have encoded these trajectories with an angular variable, many transformations in the image domain that would otherwise be nonlinear in the coefficients ui will now be linearized. This linear
relationship allows us to model the time-rate of change of the phase variables with a simple linear
generative model.
We thus model the first-order time derivative of the phase variables as follows:
\dot\phi_i(t) = \sum_k D_{ik}\, w_k(t) + \varepsilon_i(t)    (6)
where φ̇_i = φ_i(t) − φ_i(t−1), and D is the basis function matrix specifying how the high-level
variables w_k influence the phase shifts φ̇_i. The additive noise term, ε_i, represents uncertainty or
noise in the estimate of the phase time-rate of change. As before, we impose a sparse, independent
distribution on the coefficients wk , in this case with a sparse cost function given as:
S_w(w_k(t)) = \lambda \log\left[ 1 + \left( \frac{w_k(t)}{\sigma} \right)^2 \right]    (7)
The uncertainty over the phase shifts is given by a von Mises distribution: p(ε_i) ∝ exp(κ cos ε_i).
Thus, the log-posterior over the second layer units is given by
E_2 = \sum_t \Big[ -\sum_{i \in \{a_i(t) > 0\}} \kappa \cos\big(\dot\phi_i - [Dw(t)]_i\big) + \sum_k S_w(w_k(t)) \Big]    (8)
Figure 1: Graph of the hierarchical model showing the relationship among hidden variables.
Because the angle of a variable with 0 amplitude is undefined, we exclude angles where the corresponding amplitude is 0 from our cost function.
Note that in the first layer we did not introduce any prior on the phase variables. With our second
hidden layer, E2 can be viewed as a log-prior on the time rate of change of the phase variables:
φ̇_i(t). For example, when [Dw(t)]_i = 0, the prior on φ̇_i(t) is peaked around 0, or no change in phase.
Activating the w variables moves the prior away from φ̇_i(t) = 0, encouraging certain patterns of
phase shifting that will in turn produce patterns of motion in the image domain.
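A sketch of the second-layer energy of Eqs. (6)-(8); κ, λ, and σ are assumed constants:

```python
import numpy as np

def energy_E2(dphi, a, D, w, kappa=2.0, lam=0.1, sigma=1.0):
    """Eq. (8). dphi, a: (n_basis, T) phase steps and amplitudes;
    D: (n_basis, n_units); w: (n_units, T)."""
    pred = D @ w                                   # [Dw(t)]_i
    active = a > 0                                 # angles defined only here
    fit = -kappa * np.cos(dphi - pred)[active].sum()
    sparse = lam * np.log1p((w / sigma) ** 2).sum()
    return fit + sparse
```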
The structure of the complete graphical model is shown in Figure 1.
2.3
Learning and inference
A variational learning algorithm is used to adapt the basis functions in both layers. First we infer
the maximum a posteriori estimate of the variables a, ?, and w for the current values of the basis
functions. Given the map estimate of these variables we then perform a gradient update on the basis
functions. The two steps are iterated until convergence.
To infer coefficients in both the first and second hidden layers we perform gradient descent with
respect to the coefficients of the total cost function (E1 + E2 ). The resulting dynamics for the
amplitudes and phases in the first layer are given by
\Delta a_i(t) \propto \mathrm{Re}\{b_i(t)\} - Sp'(a_i(t)) - Sl'(a_i(t),\, a_i(t-1))    (9)

\Delta \phi_i(t) \propto \mathrm{Im}\{b_i(t)\}\, a_i(t) - \kappa \sin\big(\dot\phi_i(t) - [Dw(t)]_i\big) + \kappa \sin\big(\dot\phi_i(t+1) - [Dw(t+1)]_i\big)    (10)

with b_i(t) = \frac{1}{\sigma_N^2}\, e^{-j\phi_i(t)} \sum_x A_i(x) \Big[ I(x,t) - \sum_{i'} \mathrm{Re}\{ z_{i'}^*(t)\, A_{i'}(x) \} \Big]. (Im{·} denotes the imaginary part.)
The dynamics for the second layer coefficients wk are given by
\Delta w_k(t) \propto \sum_{i \in \{a_i(t) > 0\}} \kappa \sin\big(\dot\phi_i - [Dw(t)]_i\big)\, D_{ik} - S_w'(w_k(t))    (11)
Note that the two hidden layers are coupled, since the inference of w depends on φ, and the inference
of φ in turn depends on w, in addition to I and a. Thus, the phases are computed from a combination
of bottom-up (I), horizontal (a) and top-down (w) influences.
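A hedged sketch of one such inference step (Eqs. (9)-(11)), written for a single frame with the amplitude-slowness and look-ahead terms omitted for brevity; step sizes and constants are assumptions:

```python
import numpy as np

def inference_step(I_t, a, phi, dphi, A, D, w,
                   kappa=2.0, lam=0.1, sigma_n=0.1, step=0.01):
    """One gradient step of Eqs. (9)-(11) for a single frame t.

    A = A_real + 1j*A_imag, shape (n_pixels, n_basis); z = a*exp(1j*phi);
    dphi = phi(t) - phi(t-1). Here Sp'(a) = lam and S_w uses sigma = 1.
    """
    z = a * np.exp(1j * phi)
    residual = I_t - (A @ np.conj(z)).real            # Eq. (2) reconstruction error
    b = np.exp(-1j * phi) * (A.T @ residual) / sigma_n ** 2
    phase_err = kappa * np.sin(dphi - D @ w)          # shared by Eqs. (10), (11)
    a = a + step * (b.real - lam)                     # Eq. (9)
    phi = phi + step * (b.imag * a - phase_err)       # Eq. (10), current-frame term
    active = a > 0
    sw_prime = 2 * lam * w / (1 + w ** 2)             # derivative of Eq. (7)
    w = w + step * (D[active].T @ phase_err[active] - sw_prime)   # Eq. (11)
    return a, phi, w
```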
The learning rule for the first layer basis functions is given by the gradient of E1 with respect to
Ai (x), using the values of the complex coefficients inferred in eqs. 9 and 10 above:
"
#
X
X
?
?Ai (x) ? ?12
I (x,t) ?
<{zi (t) Ai (x)} zi (t)
(12)
N
t
i
The learning rule for the second layer basis functions is given by the gradient of E2 with respect to
D, using the values of φ̇ and w inferred above:
\Delta D_{ik} \propto \sum_{t:\, a_i(t) > 0} \sin\big(\dot\phi_i - [Dw(t)]_i\big)\, w_k(t)    (13)
After each gradient update the basis functions are normalized to have unit length.
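A sketch of the two learning updates (Eqs. (12)-(13)) followed by the unit-length renormalization; the learning rate η (eta) is an assumption:

```python
import numpy as np

def learning_step(I, a, phi, dphi, A, D, w, eta=0.001, sigma_n=0.1):
    """Basis updates of Eqs. (12)-(13) followed by renormalization.

    I: (n_pixels, T); a, phi, dphi: (n_basis, T); w: (n_units, T);
    A complex (n_pixels, n_basis); D real (n_basis, n_units).
    """
    z = a * np.exp(1j * phi)
    residual = I - (A @ np.conj(z)).real              # (n_pixels, T)
    A = A + (eta / sigma_n ** 2) * residual @ z.T     # Eq. (12)
    S = np.sin(dphi - D @ w) * (a > 0)                # only where a_i(t) > 0
    D = D + eta * S @ w.T                             # Eq. (13)
    A = A / np.linalg.norm(A, axis=0)                 # unit-length basis functions
    D = D / np.linalg.norm(D, axis=0)
    return A, D
```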
3
Results
3.1
Simulation procedures
The model was trained on natural image sequences obtained from Hans van Hateren's repository at
http://hlab.phys.rug.nl/vidlib/. The movies were spatially lowpass filtered and whitened
as described previously [15]. Note that no whitening in time was performed since the temporal
structure will be learned by the hierarchical model. The movies consisted of footage of animals in
grasslands along rivers and streams. They contain a variety of motions due to the movements of
animals in the scene, camera motion, tracking (which introduces background motion), and motion
borders due to occlusion.
We trained the first layer of the model on 20x20 pixel image patches, using 400 complex basis
functions Ai in the first hidden layer initialized to random values. During this initial phase of
learning only the terms in E1 are used to infer the ai and ?i . Once the first layer reaches convergence,
we begin training the second layer, using 100 bases, Di , initialized to random values. The second
layer bases are initially trained on the MAP estimates of the first layer ?? i inferred using E1 only.
After the second layer begins to converge we infer coefficients in both the first layer and the second
layer simultaneously using all terms in E1 + E2 (we observed that this improved convergence in
the second layer). We then continued learning in both layers until convergence. The bootstrapping
of the second layer was used to speed convergence and we did not observe much change in the first
layer basis functions after the initial convergence. We have run the algorithm multiple times and
have observed qualitatively similar results on each run. Here we describe the results of one run.
3.2
Learned complex basis functions
After learning, the first layer complex basis functions converge to a set of localized, oriented, and
bandpass functions with real and imaginary parts roughly in quadrature. The population of filters as
a whole tile the joint spaces of orientation, position, and center spatial frequency. Not surprisingly,
this result shares similarities to previous results described in [1] and [3]. Figure 2(a) shows the real
part, imaginary part, amplitude, and angle of two representative basis functions as a function of
space. Examining the amplitude of the basis function we see that it is localized and has a roughly
Gaussian envelope. The angle as a function of space reveals a smooth ramping of the phase in the
direction perpendicular to the basis functions? orientation.
Figure 2: Learned complex basis functions. (a) Real part A_i^R, imaginary part A_i^I, amplitude |A_i|, and angle ∠A_i of two representative basis functions. (b) Image sequences Re{A_191 z_191^*} and Re{A_292 z_292^*} generated as the amplitude a(t) and phase φ(t) vary (for panel (b) see the animation in movie TransInv Figure2.mov). (c) Tiling of the position (left) and spatial-frequency (right) domains by the population of basis functions.
A useful way of visualizing what a generative model has learned is to generate images while varying
the coefficients. Figure 2(b) displays the resulting image sequences produced by two representative
basis functions as the amplitude and phase follow the indicated time courses. The amplitude has
the effect of controlling the presence of the feature within the image and the phase is related to the
position of the edge within the image. Importantly for our hierarchical model, the time derivative,
or slope of the phase through time is directly related to the movement of the edge through time.
Figure 2(c) shows how the population of complex basis functions tiles the space of position (left)
and spatial-frequency (right). Each dot represents a different basis function according to its maximum amplitude in the space domain, or its maximum amplitude in the frequency domain computed
via the 2D Fourier transform of each complex pair (which produces a single peak in the spatialfrequency plane). The basis functions uniformly tile both domains. This visualization will be useful
for understanding what the phase shifting components D in the second layer have learned.
3.3
Learned phase-shift components
Figure 3 shows a random sampling of 16 of the learned phase-shift components, Di , visualized
in both the space domain and frequency domain depictions of the first-layer units. The strength
of connection for each component is denoted by hue (red +, blue -, gray 0). Some have a global
influence over all spatial positions within the 20x20 input array (e.g., row 1, column 1), while
others have influence only over a local region (e.g., row 1, column 6). Those with a linear ramp
in the Fourier domain correspond to rigid translation, since the higher spatial-frequencies will spin
their phases at proportionally higher rates (and negative spatial-frequencies will spin in the opposite
direction). Some functions we believe arise from aliased temporal structure in the movies (row 1,
column 5), and others are unknown (row 2, column 4). We are actively seeking methods to quantify
these classes of learned phase-shift components.
Figure 3: Learned phase-shift components, shown in both the spatial domain and the frequency domain.
The phase shift components generate movements within the image that are invariant to aspects of
the spatial structure such as orientation and spatial-frequency. We demonstrate this in Figure 4 by
showing the generated transforms for 4 representative phase-shift components. The illustrated transformation components produce: (a) global translation, (b) local translation, (c) horizontal dilation
and contraction, and (d) local warping. See the caption of Figure 4 for a more detailed description
of the generated motions. We encourage the reader to view the accompanying videos.
4
Discussion and conclusions
The computational vision community has spent considerable effort on developing motion models.
Of particular relevance to our work is the Motion-Energy model [14], which signals motion via
the amplitudes of quadrature pair filter outputs, similar to the responses of complex neurons in V1.
Simoncelli & Heeger have shown how it is possible to extract motion by pooling over a population
of such units lying within a common plane in the 3D Fourier domain [12]. It has not been shown
how the representations in these models could be learned from natural images. Furthermore, it is
unclear how more complicated transformations, other than local translations, would be represented
by such a model, or indeed how the entire joint space of position, direction and speed should be
tiled to provide a complete description of time-varying images. Our model addresses each of these
problems: it learns from the statistics of natural movies how to best tile the joint domain of position
and motion, and it captures complex motion beyond uniform translation.
Central to our model is the representation of phase. The use of phase information for computing
motion is not new, and was used by Fleet and Jepson [20] to compute optic flow. In addition, as
shown in Eero Simoncelli's Thesis, one can establish a formal equivalence between phase-based
methods and motion energy models. Here we argue that phase provides a convenient representation
as it linearizes trajectories in coefficient space and thus allows one to capture the higher-order structure via a simple linear generative model. Whether or how phase is represented in V1 is not known,
Figure 4: Visualization of learned transformational invariants (best viewed as animations in
movie TransInv Figure4x.mov, x=a,b,c,d). Each phase-shift component produces a pattern of
motion that is invariant to the spatial structure contained within the image. Each panel displays the
induced image transformations for a different basis function, Di . Induced motions are shown for
four different image patches with the original static patch displayed in the center position. Induced
motions are produced by turning on the respective coefficient wi positively (patches to the left of
center) and negatively (patches to the right of center). The final image in each sequence shows the
pixel-wise variance of the transformation (white values indicate where image pixels are changing
through time, which may be difficult to discern in this static presentation). The example in (a) produces global motion in the direction of 45 deg. The strongly oriented structure within the first two
patches clearly moves along the axis of motion. Patches with more complicated spatial structure (4th
patch) also show similar motion. The next example (b) produces local vertical motion in the lower
portion of the image patch only. Note that in the first patch the strong edge in the lower portion of
the patch moves while the edge in the upper portion remains fixed. Again, this component produces
similar transformations irrespective of the spatial structure contained in the image. The example in
(c) produces horizontal motion in the left part of the image in the opposite direction of horizontal
motion in the right half (the two halves of the image either converge or diverge). Note that the
oriented structure in the first two patches becomes more closely spaced in the leftmost patch and is
more widely spaced in the right most image. This is seen clearly in the third image as the spacing
between the vertical structure is most narrow in the leftmost image and widest in the rightmost image. The example in (d) produces warping in the upper part of the visual field. This example does
not lend itself to a simple description, but appears to produce a local rotation of the image patch.
but it may be worth looking for units that have response properties similar to those of the "phase
units" in our model.
Our model also has implications for other aspects of visual processing and cortical architecture.
Under our model we may reinterpret the hypothesized split between the dorsal and ventral visual
streams. Instead of independent processing streams focused on form perception and motion perception, the two streams may represent complementary aspects of visual information: spatial invariants
and transformational invariants. Indeed, the pattern-invariant direction tuning of neurons in MT is
strikingly similar to that found in our model [21]. Importantly though, in our model information
about form and motion is bound together since it is computed by a process of factorization rather
than by independent mechanisms in separate streams.
Our model also illustrates a functional role for feedback between higher visual areas and primary
visual cortex, not unlike the proposed inference pathways suggested by Lee and Mumford [22]. The
first layer units are responsive to visual information in a narrow spatial window and narrow spatial
frequency band. However, the top layer units receive input from a diverse population of first layer
units and can thus disambiguate local information by providing a bias to the time rate of change
of the phase variables. Because the second layer weights D are adapted to the statistics of natural
movies, these biases will be consistent with the statistical distribution of motion occurring in the
natural environment. This method can thus deal with artifacts such as noise or temporal aliasing and
can be used to disambiguate local motions confounded by the aperture problem.
Our model could be extended in a number of ways. Most obviously, the graphical model in Figure 1
begs the question of what would be gained by modeling the joint distribution over the amplitudes,
ai , in addition to the phases. To some degree, this line of approach has already been pursued by
Karklin & Lewicki [2], and they have shown that the high level units in this case learn spatial
invariants within the image. We are thus eager to combine both of these models into a unified model
of higher-order form and motion in images.
References
[1] W. Einhäuser, C. Kayser, P. König, and K.P. Körding. Learning the invariance properties of complex cells from their responses to natural stimuli. European Journal of Neuroscience, 15(3):475–486, 2002.
[2] Y. Karklin and M.S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17(2):397–423, 2005.
[3] A. Hyvärinen, J. Hurri, and J. Väyrynen. Bubbles: a unifying framework for low-level statistical properties of natural image sequences. Journal of the Optical Society of America A, 20(7):1237–1252, 2003.
[4] G. Wallis and E.T. Rolls. Invariant face and object recognition in the visual system. Progress in Neurobiology, 51(2):167–194, 1997.
[5] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. Computer Vision and Pattern Recognition, 2004.
[6] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. Robust object recognition with cortex-like mechanisms. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 411–426, 2007.
[7] S.J. Nowlan and T.J. Sejnowski. A selection model for motion processing in area MT of primates. Journal of Neuroscience, 15(2):1195–1214, 1995.
[8] K. Zhang, M.I. Sereno, and M.E. Sereno. Emergence of position-independent detectors of sense of rotation and dilation with Hebbian learning: an analysis. Neural Computation, 5(4):597–612, 1993.
[9] E.T. Rolls and S.M. Stringer. Invariant global motion recognition in the dorsal visual system: a unifying theory. Neural Computation, 19(1):139–169, 2007.
[10] D.B. Grimes and R.P.N. Rao. Bilinear sparse coding for invariant vision. Neural Computation, 17(1):47–73, 2005.
[11] B.A. Olshausen. Probabilistic Models of Perception and Brain Function, chapter Sparse codes and spikes, pages 257–272. MIT Press, 2002.
[12] E.P. Simoncelli and D.J. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743–761, 1998.
[13] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[14] E.H. Adelson and J.R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284–299, 1985.
[15] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37:3311–3325, 1997.
[16] A.J. Bell and T. Sejnowski. The independent components of natural images are edge filters. Vision Research, 37:3327–3338, 1997.
[17] C. Zetzsche, G. Krieger, and B. Wegmann. The atoms of vision: Cartesian or polar? Journal of the Optical Society of America A, 16(7):1554–1565, 1999.
[18] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, 1991.
[19] L. Wiskott and T.J. Sejnowski. Slow feature analysis: unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
[20] D.J. Fleet and A.D. Jepson. Computation of component image velocity from local phase information. International Journal of Computer Vision, 5:77–104, 1990.
[21] J.A. Movshon, E.H. Adelson, M.S. Gizzi, and W.T. Newsome. The analysis of moving visual patterns. Pattern Recognition Mechanisms, 54:117–151, 1985.
[22] T.S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A, 20(7):1434–1448, 2003.
2,623 | 3,379 | Gates
Tom Minka
Microsoft Research Ltd.
Cambridge, UK
John Winn
Microsoft Research Ltd.
Cambridge, UK
Abstract
Gates are a new notation for representing mixture models and context-sensitive
independence in factor graphs. Factor graphs provide a natural representation for
message-passing algorithms, such as expectation propagation. However, message
passing in mixture models is not well captured by factor graphs unless the entire mixture is represented by one factor, because the message equations have a
containment structure. Gates capture this containment structure graphically, allowing both the independences and the message-passing equations for a model
to be readily visualized. Different variational approximations for mixture models
can be understood as different ways of drawing the gates in a model. We present
general equations for expectation propagation and variational message passing in
the presence of gates.
1 Introduction
Graphical models, such as Bayesian networks and factor graphs [1], are widely used to represent
and visualise fixed dependency relationships between random variables. Graphical models are also
commonly used as data structures for inference algorithms since they allow independencies between
variables to be exploited, leading to significant efficiency gains. However, there is no widely used
notation for representing context-specific dependencies, that is, dependencies which are present or
absent conditioned on the state of another variable in the graph [2]. Such a notation would be
necessary not only to represent and communicate context-specific dependencies, but also to be able
to exploit context-specific independence to achieve efficient and accurate inference.
A number of notations have been proposed for representing context-specific dependencies, including: case factor diagrams [3], contingent Bayesian networks [4] and labeled graphs [5]. None of
these has been widely adopted, raising the question: what properties would a notation need, to
achieve widespread use? We believe it would need to be:
- simple to understand and use,
- flexible enough to represent context-specific independencies in real world problems,
- usable as a data structure to allow existing inference algorithms to exploit context-specific independencies for efficiency and accuracy gains,
- usable in conjunction with existing representations, such as factor graphs.
This paper introduces the gate, a graphical notation for representing context-specific dependencies
that we believe achieves these desiderata. Section 2 describes what a gate is and shows how it can
be used to represent context-specific independencies in a number of example models. Section 3
motivates the use of gates for inference and section 4 expands on this by showing how gates can be
used within three standard inference algorithms: Expectation Propagation (EP), Variational Message
Passing (VMP) and Gibbs sampling. Section 5 shows how the placement of gates can tradeoff cost
versus accuracy of inference. Section 6 discusses the use of gates to implement inference algorithms.
[Figure 1 appears here: four gate diagrams, panels (a)-(d); see caption below.]
Figure 1: Gate examples (a) The dashed rectangle indicates a gate containing a Gaussian factor,
with selector variable c. (b) Two gates with different key values used to construct a mixture of two
Gaussians. (c) When multiple gates share a selector variable, they can be drawn touching with the
selector variable connected to only one of the gates. (d) A mixture of N Gaussians constructed using
both a gate and a plate. For clarity, factors corresponding to variable priors have been omitted.
2 The Gate
A gate encloses part of a factor graph and switches it on or off depending on the state of a latent
selector variable. The gate is on when the selector variable has a particular value, called the key,
and off for all other values. A gate allows context-specific independencies to be made explicit in the
graphical model: the dependencies represented by any factors inside the gate are present only in the
context of the selector variable having the key value. Mathematically, a gate represents raising the
Q
?(c=key)
contained factors to the power zero if the gate is off, or one if it is on: ( i fi (x))
where c is
the selector variable. In diagrams, a gate is denoted by a dashed box labelled with the value of key,
with the selector variable connected to the box boundary. The label may be omitted if c is boolean
and key is true. Whilst the examples in this paper refer to factor graphs, gate notation can also be
used in both directed Bayesian networks and undirected graphs.
A simple example of a gate is shown in figure 1a. This example represents the term
\mathcal{N}(x; m, p^{-1})^{\delta(c=\mathrm{true})} so that when c is true the gate is on and x has a Gaussian distribution with
mean m and precision p. Otherwise, the gate is off and x is uniformly distributed (since it is connected to nothing).
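As a minimal numerical sketch of this semantics (the helper names below are illustrative, not from any library), the gated term contributes the Gaussian density when the selector matches the key and contributes 1 otherwise:

```python
import numpy as np

def gaussian_pdf(x, m, p):
    # N(x; m, p^{-1}) parameterised by precision p
    return np.sqrt(p / (2 * np.pi)) * np.exp(-0.5 * p * (x - m) ** 2)

def gated_factor(x, c, key, m, p):
    # f(x)^{delta(c = key)}: active only when the selector takes the key value
    return gaussian_pdf(x, m, p) if c == key else 1.0

print(gated_factor(0.3, c=True, key=True, m=0.0, p=1.0))   # Gaussian density
print(gated_factor(0.3, c=False, key=True, m=0.0, p=1.0))  # 1.0: gate is off
```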
By using several gates with different key values, multiple components of a mixture can be represented. Figure 1b shows how a mixture of two Gaussians can be represented using two gates with
different key values, true and false. If c is true, x will have distribution \mathcal{N}(m_1, p_1^{-1}), otherwise x will have distribution \mathcal{N}(m_2, p_2^{-1}). When multiple gates have the same selector variable but different key values, they can be drawn as in figure 1c, with the gate rectangles touching and the selector
variable connected to only one of the gates. Notice that in this example, an integer selector variable
is used and the key values are the integers 1,2,3.
For large homogeneous mixtures, gates can be used in conjunction with plates [6]. For example,
figure 1d shows how a mixture of N Gaussians can be represented by placing the gate, Gaussian
factor and mean/precision variables inside a plate, so that they are replicated N times.
Gates may be nested inside each other, implying a conjunction of their conditions. To avoid ambiguities, gates cannot partially overlap, nor can a gate contain its own selector variable.
[Figure 2 appears here: panel (a) shows pixel intensities x_1, x_2, x_3 linked through gates with edge selectors e_12, e_23; panel (b) shows a genetic variant g_n feeding a gated linear model of a quantitative trait x_n; see caption below.]
Figure 2: Examples of models which use gates (a) A line process where neighboring pixel intensities are independent if an edge exists between them. (b) Testing for dependence between a genetic
variant gn and an observed quantitative trait xn . The selector variable c encodes whether the linear
dependency represented by the structure inside the gate is present or absent.
Gates can also contain variables, as well as factors. Such variables have the behaviour that, when
the gate is off, they revert to having a default value of false or zero, depending on the variable type.
Mathematically, a variable inside a gate represents a Dirac delta when the gate is off: \delta(x)^{1-\delta(c=\mathrm{key})}, where \delta(x) is one only when x has its default value. Figure 2b shows an example where variables are contained in gates; this example is described in the following section.
2.1 Examples of models with gates
Figure 2a shows a line process from [7]. The use of gates makes clear the assumption that two neighboring image pixels xi and xj have a dependency between their intensity values, unless there is an
edge eij between them. An opaque three-way factor would hide this context-specific independence.
Gates can also be used to test for independence. In this case the selector variable is connected only
to the gate, as shown in the example of figure 2b. This is a model used in functional genomics [8]
where the aim is to detect associations between a genetic variant gn and some quantitative trait xn
(such as height, weight, intelligence etc.) given data from a set of N individuals. The binary selector
variable c switches on or off a linear model of the genetic variant?s contribution yn to the trait xn ,
across all individuals. When the gate is off, yn reverts to the default value of 0 and so the trait is
explained only by a Gaussian-distributed background model zn . Inferring the posterior distribution
of c allows associations between the genetic variation and the trait to be detected.
3 How gates arise from message-passing on mixture models
Factor graph notation arises naturally when describing message passing algorithms, such as the
sum-product algorithm. Similarly, the gate notation arises naturally when considering the behavior
of message passing algorithms on mixture models.
As a motivating example, consider the mixture model of figure 1b when the precisions p_1 and p_2 are constant. Using 1 and 2 as keys instead of true and false, the joint distribution is: p(x, c, m_1, m_2) = p(c)\,p(m_1)\,p(m_2)\,f(x|m_1)^{\delta(c=1)} f(x|m_2)^{\delta(c=2)}, where f is the Gaussian distribution. If we apply a mean-field approximation to this model, we obtain the following fixed-point system:

q(c = k) \propto p(c = k) \exp\Big( \sum_x q(x) \sum_{m_k} q(m_k) \log f(x|m_k) \Big)   (1)

q(m_k) \propto p(m_k) \Big[ \exp\Big( \sum_x q(x) \log f(x|m_k) \Big) \Big]^{q(c=k)}   (2)

q(x) \propto \prod_k \Big[ \exp\Big( \sum_{m_k} q(m_k) \log f(x|m_k) \Big) \Big]^{q(c=k)}   (3)
These updates can be interpreted as message-passing combined with "blurring" (raising to a power between 0 and 1). For example, the update for q(m_k) can be interpreted as (message from prior) \times (blurred message from f). The update for q(x) can be interpreted as (blurred message from m_1) \times (blurred message from m_2). Blurring occurs whenever a message is sent from a factor having a random exponent to a factor without that exponent. Thus the exponent acts like a container,
affecting all messages that pass out of it. Hence, we use a graphical notation where a gate is a container, holding all the factors switched by the gate. Graphically, the blurring operation then happens
whenever a message leaves a gate. Messages passed into a gate and within a gate are unchanged.
This graphical property holds true for other algorithms as well. For example, EP on this model will
blur the message from f to m_k and from f to x, where "blurring" means a linear combination with
the 1 function followed by KL-projection.
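A minimal sketch of the fixed-point system (1)-(3) for this example, on a discretised grid with fixed unit precisions and illustrative priors; the q(c=k) exponents implement the blurring in log space:

```python
import numpy as np

xs = np.linspace(-4, 4, 81)                     # grid for x
ms = np.linspace(-4, 4, 81)                     # grid for each mean m_k
logf = -0.5 * (xs[:, None] - ms[None, :]) ** 2  # log f(x|m) up to a constant

def normalise(p):
    return p / p.sum()

prior_c = np.array([0.5, 0.5])
prior_m = [normalise(np.exp(-0.5 * (ms + 2.0) ** 2)),  # p(m_1)
           normalise(np.exp(-0.5 * (ms - 2.0) ** 2))]  # p(m_2)
q_x = normalise(np.ones_like(xs))
q_m = [p.copy() for p in prior_m]

for _ in range(50):
    E = np.stack([logf @ q_m[k] for k in range(2)])    # E_k(x) = sum_m q(m_k) log f
    q_c = normalise(prior_c * np.exp(E @ q_x))         # update (1)
    for k in range(2):
        q_m[k] = normalise(prior_m[k] * np.exp(q_c[k] * (q_x @ logf)))  # update (2)
    q_x = normalise(np.exp(q_c @ E))                   # update (3): blurred product

print(q_c)  # posterior over the gate selector
```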
3.1 Why gates are not equivalent to "pick" factors
It is possible to rewrite this model so that the f factors do not have exponents, and therefore
would not be in gates. However, this will necessarily change the approximation. This is because the blurring effect caused by exponents operates in one direction only, while the blurring effect caused by intermediate factors is always bidirectional. For example, suppose we
try to write the model using a factor \mathrm{pick}(x|c, h_1, h_2) = \delta(x - h_1)^{\delta(c=1)} \delta(x - h_2)^{\delta(c=2)}.
We can introduce latent variables (h1 , h2 ) so that the model becomes p(x, c, m1 , m2 , h1 , h2 ) =
p(c)p(m1 )p(m2 )f (h1 |m1 )f (h2 |m2 )pick(x|c, h1 , h2 ). The pick factor will correctly blur the
downward messages from (m1 , m2 ) to x. However, the pick factor will also blur the message
upward from x before it reaches the factor f , which is incorrect.
Another approach is to pick from (m1 , m2 ) before reaching the factor f , so that the model becomes
p(x, c, m1 , m2 , m) = p(c)p(m1 )p(m2 )f (x|m)pick(m|c, m1 , m2 ). In this case, the message from
x to f is not blurred, and the upward messages to (m1 , m2 ) are blurred, which is correct. However,
the downward messages from (m1 , m2 ) to f are blurred before reaching f , which is incorrect.
3.2 Variables inside gates
Now consider an example where it is natural to consider a variable to be inside a gate. The model is: p(x, c, m_1, m_2, y) = p(c)\,p(m_1)\,p(m_2) \prod_k \big(f_1(x|y) f_2(y|m_k)\big)^{\delta(c=k)}. If we use a structured variational approximation where y is conditioned on c, then the fixed-point equations are [9]:
q(c = k) \propto p(c = k) \exp\Big( \sum_x q(x) \sum_y q(y|c = k) \log f_1(x|y) \Big) \exp\Big( \sum_y q(y|c = k) \sum_{m_k} q(m_k) \log f_2(y|m_k) \Big) \exp\Big( -\sum_y q(y|c = k) \log q(y|c = k) \Big)   (4)

q(y|c = k) \propto \exp\Big( \sum_x q(x) \log f_1(x|y) \Big) \exp\Big( \sum_{m_k} q(m_k) \log f_2(y|m_k) \Big)   (5)

q(m_k) \propto p(m_k) \Big[ \exp\Big( \sum_y q(y|c = k) \log f_2(y|m_k) \Big) \Big]^{q(c=k)}   (6)

q(x) \propto \prod_k \Big[ \exp\Big( \sum_y q(y|c = k) \log f_1(x|y) \Big) \Big]^{q(c=k)}   (7)
Notice that only the messages to x and mk are blurred; the messages to and from y are not blurred.
Thus we can think of y as sitting inside the gate. The message from the gate to c can be interpreted
as the evidence for the submodel containing f1 , f2 , and y.
4 Inference with gates
In the previous section, we explained why the gate notation arises when performing message passing
in some example mixture models. In this section, we describe how gate notation can be generally
incorporated into Variational Message Passing [10], Expectation Propagation [11] and Gibbs Sampling [7] to allow each of these algorithms to support context-specific independence.
For reference, Table 1 shows the messages needed to apply standard EP or VMP using a fully factorized approximation q(\mathbf{x}) = \prod_i q(x_i). Notice that VMP uses different messages to and from deterministic factors, that is, factors which have the form f_a(x_i, \mathbf{x}_{a \setminus i}) = \delta(x_i - h(\mathbf{x}_{a \setminus i})) where x_i is the derived child variable. Different VMP messages are also used to and from such deterministic derived variables. For both algorithms the marginal distributions are obtained as q(x_i) = \prod_a m_{a \to i}(x_i), except for derived child variables in VMP where q(x_i) = m_{par \to i}(x_i). The (approximate) model evidence is obtained by a product of contributions, one from each variable and each factor. Table 1 shows these contributions for each algorithm, with the exception that deterministic factors and their derived variables contribute 1 under VMP.
When performing inference on models with gates, it is useful to employ a normalised form of gate
model. In this form, variables inside a gate have no links to factors outside the gate, and a variable
outside a gate links to at most one factor inside the gate. Both of these requirements can be achieved
by splitting a variable into a copy inside and a copy outside the gate, connected by an equality factor
inside the gate. A factor inside a gate should not connect to the selector of the gate; it should be
given the key value instead. In addition, gates should be balanced by ensuring that if a variable links
EP (any factor):
  variable to factor:  m_{i \to a}(x_i) = \prod_{b \neq a} m_{b \to i}(x_i)
  factor to variable:  m_{a \to i}(x_i) = \mathrm{proj}\big[ m_{i \to a}(x_i) \sum_{x_a \setminus x_i} \big( \prod_{j \neq i} m_{j \to a}(x_j) \big) f_a(x_a) \big] \big/ m_{i \to a}(x_i)

VMP (stochastic factor):
  variable to factor:  m_{i \to a}(x_i) = \prod_{a'} m_{a' \to i}(x_i)
  factor to variable:  m_{a \to i}(x_i) = \exp\big( \sum_{x_a \setminus x_i} \big( \prod_{j \neq i} m_{j \to a}(x_j) \big) \log f_a(x_a) \big)

VMP (deterministic factor, message to a parent):
  m_{a \to i}(x_i) = \exp\big( \sum_{x_a \setminus (x_i, x_{ch})} \big( \prod_{k \neq (i, ch)} m_{k \to a}(x_k) \big) \log \tilde{f}_a(x_a) \big),  where  \tilde{f}_a(x_a) = \sum_{x_{ch}} m_{ch \to a}(x_{ch}) f_a(x_a)

VMP (deterministic factor, message to the derived child):
  variable to factor:  m_{i \to a}(x_i) = m_{par \to i}(x_i)
  factor to variable:  m_{a \to i}(x_i) = \mathrm{proj}\big[ \sum_{x_a \setminus x_i} \big( \prod_{j \neq i} m_{j \to a}(x_j) \big) f_a(x_a) \big]

Evidence contributions:
  EP:   s_i = \sum_{x_i} \prod_a m_{a \to i}(x_i);   s_a = \sum_{x_a} \big( \prod_{j \in a} m_{j \to a}(x_j) \big) f_a(x_a) \big/ \prod_{j \in a} \sum_{x_j} m_{j \to a}(x_j)\, m_{a \to j}(x_j)
  VMP:  s_i = \exp\big( -\sum_{x_i} q(x_i) \log q(x_i) \big);   s_a = \exp\big( \sum_{x_a} \big( \prod_{j \in a} m_{j \to a}(x_j) \big) \log f_a(x_a) \big)

Table 1: Messages and evidence computations for EP and VMP. The top part of the table shows messages between a variable x_i and a factor f_a. The notation j \in a refers to all neighbors of the factor, j \neq i is all neighbors except i, par is the parent factor of a derived variable, and ch is the child variable of a deterministic factor. The proj[p] operator returns an exponential-family distribution whose sufficient statistics match p. The bottom part of the table shows the evidence contributions for variables and factors in each algorithm.
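For discrete variables the proj[\cdot] operator is the identity, so the EP factor-to-variable message above reduces to a sum-product update. A minimal sketch (our own helper, applied to an arbitrary random factor) is:

```python
import numpy as np

def ep_factor_to_variable(f, messages, i):
    """EP message m_{a->i}(x_i) for a discrete factor (proj = identity here).

    f: array of shape (K_1, ..., K_n) holding f_a(x_a);
    messages: list of incoming messages m_{j->a}(x_j), one per argument.
    """
    g = f.copy()
    for j, m in enumerate(messages):
        if j != i:                          # multiply in all messages except i's
            shape = [1] * f.ndim
            shape[j] = len(m)
            g = g * m.reshape(shape)
    axes = tuple(k for k in range(f.ndim) if k != i)
    return g.sum(axis=axes)                 # marginalise over x_a \ x_i

f = np.random.rand(2, 3, 2)                 # a random factor over three variables
msgs = [np.ones(2), np.ones(3) / 3, np.array([0.2, 0.8])]
print(ep_factor_to_variable(f, msgs, i=1))
```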
to a factor in a gate with selector variable c, the variable also links to factors in gates keyed on all
other values of the selector variable c. This can be achieved by connecting the variable to uniform
factors in gates for any missing values of c. After balancing, each gate is part of a gate block: a set
of gates activated by different values of the same condition variable. See [12] for details.
4.1 Variational Message Passing with gates
VMP can be augmented to run on a gate model in normalised form, by changing only the messages
out of the gate and by introducing messages from the gate to the selector variable. Messages sent
between nodes inside the gate and messages into the gate are unchanged from standard VMP. The
variational distributions for variables inside gates are implicitly conditioned on the gate selector, as
at the end of section 3. In the following, an individual gate is denoted g, its selector variable c and
its key kg . See [12] for the derivations.
The messages out of a gate are modified as follows:
- The message from a factor f_a inside a gate g with selector c to a variable outside g is the usual VMP message, raised to the power m_{c \to g}(c = k_g), except in the following case.
- Where a variable x_i is the child of a number of deterministic factors inside a gate block G with selector variable c, the variable is treated as derived and the message is a moment-matched average of the individual VMP messages. Then the message to x_i is
m_{G \to i}(x_i) = \mathrm{proj}\Big[ \sum_{g \in G} m_{c \to g}(c = k_g)\, m_{g \to i}(x_i) \Big]   (8)
where m_{g \to i}(x_i) is the usual VMP message from the unique parent factor in g and proj is
a moment-matching projection onto the exponential family.
The message from a gate g to its selector variable c is a product of evidence messages from the
contained nodes:
m_{g \to c}(c = k_g) = \prod_{a \in g} s_a \prod_{i \in g} s_i,   m_{g \to c}(c \neq k_g) = 1   (9)
where sa and si are the VMP evidence messages from a factor and variable, respectively (Table 1).
The set of contained factors includes any contained gates, which are treated as single factors by the
containing gate. Deterministic variables and factors send evidence messages of 1, except where a
deterministic factor fa parents a variable xi outside g. Instead of sending sa = 1, the factor sends:
s_a = \exp\Big( \sum_{x_i} m_{a \to i}(x_i) \log m_{i \to a}(x_i) \Big)   (10)

The child variable x_i outside the gate also has a different evidence message:

s_i = \exp\Big( -\sum_{x_i} m_{G \to i}(x_i) \log m_{i \to a}(x_i) \Big)   (11)

where m_{G \to i} is the message from the parents (8) and m_{i \to a} is the message from x_i to any parent.
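A minimal log-space sketch of the gate-to-selector message (9) and the blurring of outgoing factor messages, assuming a binary selector whose complementary gate sends the constant message 1:

```python
import numpy as np

def gate_to_selector(log_s_factors, log_s_variables):
    # Eq. (9): m_{g->c}(c = key) = prod_a s_a * prod_i s_i; the message is 1 otherwise.
    return sum(log_s_factors) + sum(log_s_variables)   # log-message for c = key

def blur(log_msg, responsibility):
    # A factor inside gate g sends its usual VMP message raised to the power
    # m_{c->g}(c = key), i.e. scaled by the gate's responsibility in log space.
    return responsibility * log_msg

log_m = gate_to_selector([-1.3, -0.7], [-0.2])     # toy evidence terms
q_on = np.exp(log_m) / (np.exp(log_m) + 1.0)       # other gate contributes log 1 = 0
print(q_on, blur(np.array([0.5, -0.5]), q_on))
```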
To allow for nested gates, we must also define an evidence message for a gate:

s_g = \Big( \prod_{a \in g} s_a \prod_{i \in g} s_i \Big)^{q(c=k_g)}   (12)

4.2 Expectation Propagation with gates
As with VMP, EP can support gate models in normalised form by making small modifications to the
message-passing rules. Once again, messages between nodes inside a gate are unchanged. Recall
that, following gate balancing, all gates are part of gate blocks. In the following, an individual gate
is denoted g, its selector variable c and its key kg . See [12] for the derivations.
The messages into a gate are as follows:
- The message from a selector variable to each gate in a gate block G is the same. It is the product of all messages into the variable excluding messages from gates in G.
- The message from a variable to each neighboring factor inside a gate block G is the same. It is the product of all messages into the variable excluding messages from any factor in G.
Let nbrs(g) be the set of variables outside of g connected to some factor in g. Each gate computes
an intermediate evidence-like quantity sg defined as:
s_g = \prod_{a \in g} s_a \prod_{i \in g} s_i \prod_{i \in \mathrm{nbrs}(g)} s_{ig},   where   s_{ig} = \sum_{x_i} m_{i \to g}(x_i)\, m_{g \to i}(x_i)   (13)
where m_{g \to i} is the usual EP message to x_i from its (unique) neighboring factor in g. The third term
is used to cancel the denominators of sa (see definition in Table 1). Given this quantity, the messages
out of a gate may now be specified:
- The combined message from all factors in a gate block G with selector variable c to a variable x_i is the weighted average of the messages sent by each factor:

m_{G \to i}(x_i) = \mathrm{proj}\Big[ \sum_{g \in G} m_{c \to g}(c = k_g)\, s_g\, s_{ig}^{-1}\, m_{g \to i}(x_i)\, m_{i \to g}(x_i) \Big] \big/ m_{i \to g}(x_i)   (14)

(Note m_{i \to g}(x_i) is the same for each gate g.)
- The message from a gate block G to its selector variable c is:

m_{G \to c}(c = k_g) = s_g \big/ \sum_{g' \in G} s_{g'}   (15)
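A sketch of equations (13) and (15) for one gate block over discrete variables (the container layout below is illustrative):

```python
import numpy as np

def ep_gate_block_selector(s_factors, s_variables, in_msgs, out_msgs):
    """s_factors[g], s_variables[g]: lists of evidence terms per gate g;
    in_msgs[g][i], out_msgs[g][i]: m_{i->g}(x_i) and m_{g->i}(x_i) per neighbour."""
    s_g = []
    for g in range(len(s_factors)):
        s = np.prod(s_factors[g]) * np.prod(s_variables[g])
        for m_in, m_out in zip(in_msgs[g], out_msgs[g]):
            s *= np.dot(m_in, m_out)        # s_{ig} = sum_x m_{i->g} m_{g->i}
        s_g.append(s)
    s_g = np.array(s_g)
    return s_g / s_g.sum()                  # Eq. (15): m_{G->c}(c = k_g)

m = ep_gate_block_selector(
    s_factors=[[0.8], [0.3]], s_variables=[[1.0], [1.0]],
    in_msgs=[[np.array([0.5, 0.5])], [np.array([0.5, 0.5])]],
    out_msgs=[[np.array([0.6, 0.4])], [np.array([0.2, 0.8])]])
print(m)
```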
Finally, the evidence contribution of a gate block with selector c is:

s_c = \frac{\sum_{g \in G} s_g}{\prod_{i \in \mathrm{nbrs}(g)} \sum_{x_i} m_{i \to g}(x_i)\, m_{G \to i}(x_i)}   (16)

4.3 Gibbs sampling with gates
Gibbs sampling can easily extend to gates which contain only factors. Gates containing variables
require a facility for computing the evidence of a submodel, which Gibbs sampling does not provide.
Note also that Gibbs sampling does not support deterministic factors. Thus the graph should only
be normalised up to these constraints. The algorithm starts by setting the variables to initial values
and sending these values to their neighboring factors. Then for each variable xi in turn:
1. Query each neighboring factor for a conditional distribution for xi . If the factor is in a gate
that is currently off, replace with a uniform distribution. For a gate g with selector xi , the
conditional distribution is proportional to s for the key value and 1 otherwise, where s is
the product of all factors in g.
2. Multiply the distributions from neighboring factors together to get the variable's conditional
distribution. Sample a new value for the variable from its conditional distribution.
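A minimal sketch of this per-variable step for discrete variables; gates are represented as optional (selector, key) pairs attached to factors, and an off gate simply contributes a factor of 1:

```python
import numpy as np

def gibbs_step(state, var, domain, factors, rng):
    """Resample state[var]. factors: list of (fn, gate) pairs where gate is
    None or (selector_var, key); fn maps the full state dict to a potential."""
    logp = np.zeros(len(domain))
    for k, value in enumerate(domain):
        trial = dict(state, **{var: value})
        for fn, gate in factors:
            if gate is not None:
                sel, key = gate
                on = (value == key) if sel == var else (trial[sel] == key)
                if not on:
                    continue                # off gate => uniform (factor of 1)
            logp[k] += np.log(fn(trial))
    p = np.exp(logp - logp.max())
    state[var] = domain[rng.choice(len(domain), p=p / p.sum())]

rng = np.random.default_rng(0)
state = {"c": 0, "x": 0}
factors = [(lambda s: 0.7 if s["x"] == 1 else 0.3, ("c", 1)),  # gated factor
           (lambda s: 0.5, None)]                              # ungated factor
gibbs_step(state, "x", [0, 1], factors, rng)
print(state)
```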
5 Enlarging gates to increase approximation accuracy
Gates induce a structured approximation as in [9], so by moving nodes inside or outside of gates,
you can trade off inference accuracy versus cost. Because one gate of a gate block is always on, any
node (variable or factor) outside a gate block G can be equivalently placed inside each gate of G.
This increases accuracy since a separate set of messages will be maintained for each case, but it may
increase the cost.
For example, Archambeau and Verleysen [14] suggested a structured approximation for Student-t
mixture models, instead of the factorised approximation of [13]. Their modification can be viewed
as a gate enlargement (figure 3). By enlarging the gate block to include u_{nm}, the blurring between the multiplication factor and u_{nm} is removed, increasing accuracy. This comes at no additional cost since u_{nm} is only used by one gate and therefore only one message is needed per n and m.
[Figure 3 appears here: the Student-t mixture model of [13] (a) and the enlarged-gate variant of [14] (b), drawn with Dirichlet, Discrete, Gaussian and Gamma factors over plates m=1..M and n=1..N; see caption below.]
Figure 3: Student-t mixture model using gates (a) Model from [13] (b) Structured approximation
suggested by [14], which can be interpreted as enlarging the gate.
6 Discussion and conclusions
Gates have proven very useful to us when implementing a library for inference in graphical models. By using gates, the library allows mixtures of arbitrary sub-models, such as mixtures of factor analysers. Gates are also used for computing the evidence for a model, by placing the entire
model in a gate with binary selector variable b. The log evidence is then the log-odds of b, that is,
log P(b = true) - log P(b = false). Similarly, gates are used for model comparison by placing
each model in a different gate of a gate block. The marginal over the selector gives the posterior
distribution over models.
Graphical models not only provide a visual way to represent a probabilistic model, but they can
also be used as a data structure for performing inference on that model. We have shown that gates
are similarly effective both as a graphical modelling notation and as a construct within an inference
algorithm.
References
[1] B. Frey, F. Kschischang, H. Loeliger, and N. Wiberg. Factor graphs and algorithms. In Proc. of the 35th Allerton Conference on Communication, Control and Computing, 1998.
[2] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In Proc. of the 12th Conference on Uncertainty in Artificial Intelligence, pages 115-123, 1996.
[3] D. McAllester, M. Collins, and F. Pereira. Case-factor diagrams for structured probabilistic modeling. Uncertainty in Artificial Intelligence, 2004.
[4] B. Milch, B. Marthi, D. Sontag, S. Russell, D. L. Ong, and A. Kolobov. Approximate inference for infinite contingent Bayesian networks. In Proc. of the 6th Workshop on Artificial Intelligence and Statistics, 2005.
[5] E. Mjolsness. Labeled graph notations for graphical models: Extended report. Technical Report TR# 04-03, UCI ICS, March 2004.
[6] W. L. Buntine. Operations for learning with graphical models. JAIR, 2:159-225, 1994.
[7] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Anal. Machine Intell., 6:721-741, 1984.
[8] E. S. Lander and D. Botstein. Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps. Genetics, 121(1):185-199, 1989.
[9] W. A. J. J. Wiegerinck. Variational approximations between mean field theory and the junction tree algorithm. In UAI, pages 626-633, 2000.
[10] J. Winn and C. M. Bishop. Variational Message Passing. JMLR, 6:661-694, 2005.
[11] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362-369, 2001.
[12] T. Minka and J. Winn. Gates: A graphical notation for mixture models. Technical report, Microsoft Research Ltd, 2008.
[13] M. Svensén and C. M. Bishop. Robust Bayesian mixture modelling. Neurocomputing, 64:235-252, 2005.
[14] C. Archambeau and M. Verleysen. Robust Bayesian clustering. Neural Networks, 20:129-138, 2007.
2,624 | 338 | Discovering Discrete Distributed Representations
with Iterative Competitive Learning
Michael C. Mozer
Department of Computer Science
and Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Abstract
Competitive learning is an unsupervised algorithm that classifies input patterns into mutually exclusive clusters. In a neural net framework, each cluster is represented by a processing unit that competes with others in a winner-take-all pool for an input pattern. I present a simple extension to the algorithm that allows it to construct discrete, distributed representations. Discrete
representations are useful because they are relatively easy to analyze and
their information content can readily be measured. Distributed representations are useful because they explicitly encode similarity. The basic idea is
to apply competitive learning iteratively to an input pattern, and after each
stage to subtract from the input pattern the component that was captured in
the representation at that stage. This component is simply the weight vector
of the winning unit of the competitive pool. The subtraction procedure forces
competitive pools at different stages to encode different aspects of the input.
The algorithm is essentially the same as a traditional data compression technique known as multistep vector quantization, although the neural net perspective suggests potentially powerful extensions to that approach.
1 INTRODUCTION
Competitive learning (Grossberg, 1976; Kohonen, 1982; Rumelhart & Zipser, 1985; von
der Malsburg, 1973) is an unsupervised algorithm that classifies input patterns into mutually exclusive clusters. In a neural net framework, each cluster is represented by a processing unit that competes with others in a winner-take-all pool for each input pattern.
Competitive learning thus constructs a local representation in which a single unit is activated in response to an input. I present a simple extension to the algorithm that allows
it to construct discrete, distributed representations. Discrete representations are useful
because they are relatively easy to analyze and their information content can readily be
measured. Distributed representations are useful because they explicitly encode similarity. I begin by describing the standard competitive learning algorithm.
2 COMPETITIVE LEARNING
Consider a two layer network with \alpha input units and \beta competitive units. Each competitive unit represents a different classification of the input. The competitive units are activated by the input units and are connected in a winner-take-all pool such that a single
competitive unit becomes active. Formally,
y_i = \begin{cases} 1 & \text{if } \|w_i - x\| \le \|w_j - x\| \text{ for all } j \\ 0 & \text{otherwise,} \end{cases}

where y_i is the activity of competitive unit i, x is the input activity vector, w_i is the vector of connection strengths from the input units to competitive unit i, and \|\cdot\| denotes the L2 vector norm. The conventional weight update rule is:
\Delta w_i = \epsilon\, y_i\, (x - w_i),

where \epsilon is the step size. This algorithm moves each weight vector toward the center of a cluster of input patterns.
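A minimal numpy sketch of the winner-take-all rule and update above (\epsilon as in the text; the data and helper are illustrative):

```python
import numpy as np

def competitive_learning(X, n_units, eps=0.05, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(n_units, X.shape[1]))  # weight vectors w_i
    for _ in range(epochs):
        for x in rng.permutation(X):
            i = np.argmin(np.linalg.norm(W - x, axis=1))    # winner: min ||w_i - x||
            W[i] += eps * (x - W[i])                        # Delta w_i = eps (x - w_i)
    return W

X = np.vstack([np.random.default_rng(1).normal(c, 0.1, (50, 2))
               for c in [(-1, -0.5), (-1, 0.5), (1, -0.5), (1, 0.5)]])
print(competitive_learning(X, n_units=4).round(2))
```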
The algorithm attempts to develop the best possible representation of the input with only \beta discrete alternatives. This representation is simply the weight vector of the winning competitive unit, w_{winner}. What does it mean to develop the best representation? Following Durbin (1990), competitive learning can be viewed as performing gradient descent in the error measure

E = -\sum_{p=1}^{\#\mathrm{items}} \ln \sum_{i=1}^{\beta} e^{-\|w_i - x(p)\|^2 / T}   (1)

as T \to 0, where p is an index over patterns. T is a parameter in a soft competitive learning model (Bridle, 1989; Rumelhart, in press) which specifies the degree of competition; the winner-take-all version of competitive learning is obtained at the limit of T \to 0.
3 EXTENDING COMPETITIVE LEARNING
Competitive learning constructs a local representation of the input. How might competitive learning be extended to construct distributed representations? One idea is to have
several independent competitive pools, each of which may form its own partition of the
input space. This often fails because all pools will discover the same partitioning if this
partitioning is unequivocally better than others. Thus, we must force different pools to
encode different components of the input.
In the one-pool competitive learning network, the component of the input not encoded is
simply
x' = x - w_{winner}.
If competitive learning is reapplied with x' instead of x, the algorithm is guaranteed to
extract information not captured by the first pool of competitive units because this information has been subtracted out. This procedure can be invoked iteratively to capture different aspects of the input in an arbitrary number of competitive pools, hence the name
iterative competitive learning or ICL. The same idea is at the heart of Sanger's (1989)
and Hrycej's (1989) algorithms for performing principal components analysis. Whereas
these algorithms discover continuous-valued feature dimensions, ICL is concerned with
the discovery of discrete-valued features. Of course, the continuous features can be
quantized to form discrete features, an idea that both Sanger and Hrycej explore, but
there is a cost to this, as I elaborate later.
To formalize the ICL model, consider a network composed of an arbitrary number of
stages (Figure 1). Each stage, s, consists of \alpha input units and \beta^{(s)} competitive units. Both the input and competitive units at a given stage feed activity to the input units at the next higher stage. The activity of the input units at stage 1, x^{(1)}, is given by the external input.
At subsequent stages, s,
x^{(s)} = x^{(s-1)} - \big[W^{(s-1)}\big]^{T} y^{(s-1)},

where W and y are as before with an additional index for the stage number.
[Figure 1 appears here: a two-stage network diagram with weight matrices W^{(1)}, W^{(2)} and competitive activities y^{(1)}, y^{(2)}; see caption below.]
Figure 1: The Iterative Competitive Learning Model
To reconstruct the original input pattern from the activities of the competitive units, the
components captured by the winning unit at each stage are simply summed together:
\hat{x} = \sum_s \big[W^{(s)}\big]^{T} y^{(s)}.   (2)
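The encode/decode loop implied by this construction can be sketched as follows (the stage weight matrices are assumed already trained; rows of each W^{(s)} are the unit weight vectors):

```python
import numpy as np

def icl_encode(x, stages):
    """stages: list of weight matrices W^(s), rows are the unit weight vectors."""
    code, residual = [], x.copy()
    for W in stages:
        i = int(np.argmin(np.linalg.norm(W - residual, axis=1)))  # winning unit
        code.append(i)
        residual = residual - W[i]       # x^(s+1) = x^(s) - w_winner^(s)
    return code

def icl_decode(code, stages):
    return sum(W[i] for W, i in zip(stages, code))  # Equation (2)

stages = [np.array([[-1.0, 0.0], [1.0, 0.0]]),
          np.array([[0.0, -0.5], [0.0, 0.5]])]      # the rectangle example
print(icl_encode(np.array([1.0, 0.5]), stages))     # -> [1, 1]
print(icl_decode([1, 1], stages))                   # -> [1.  0.5]
```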
A variant of ICL has been independently proposed by Ambros-Ingerson, Granger, and
Lynch (1990).1 Their algorithm, inspired by a neurobiological model, is the same as ICL
except for the competitive unit activation rule which uses an inner product instead of distance measure:
1 I thank Todd Leen and Steve Rehfuss for bringing this work to my attention.
y_i = \begin{cases} 1 & \text{if } x^{T} w_i \ge a \text{ and } x^{T} w_i \ge x^{T} w_j \text{ for all } j \\ 0 & \text{otherwise.} \end{cases}
The problem with this rule is that it is difficult to interpret what exactly the network is
computing, e.g., what aspect of the input is captured by the winning unit, whether the input can be reconstructed from the resulting activity pattern, and what information is discarded. The ICL activation rule, in combination with the learning rule, has a clear computational justification by virtue of the underlying objective measure (Equation 1) that is
being optimized.
It also turns out, much to my dismay, that ICL is virtually identical to a conventional
technique in data compression known as multistep vector quantization (Gray, 1984).
More on this later.
4 A SIMPLE EXAMPLE
Consider a set of four input patterns forming a rectangle in 2D space, located at (-1,-.5),
(-1,.5), (1,-.5), and (1,.5), and an ICL network with two stages each containing two competitive units. The first stage discovers the primary dimension of variation - along the
x-axis. That is, the units develop weight vectors (-1,0) and (1,0). Removing this component from the input, the four points become (0,-.5), (0,.5), (0,-.5), (0,.5). Thus, the
two points on the left side of the rectangle are collapsed together with the two points on
the right side. The second stage of the network then discovers the secondary dimension
of variation - along the y-axis.
The response of the ICL network to each input pattern can be summarized by the set of
competitive units, one per stage, that are activated. If the two units at each stage are
numbered a and 1, four response patterns will be generated: {O,O}, {0,1}, {1,0}, {1,1}.
Thus, ICL has discovered a two-bit code to represent the four inputs. The result will be
the same if instead of just four inputs, the input environment consists of four clusters of
points centered on the corners of the rectangle. In this case, the two-bit code will not
describe each input uniquely, but it will distinguish the clusters.
5 IMAGE COMPRESSION
Because ICL discovers compact and discrete codes, the algorithm should be useful for
data and image compression. In such problems, a set of raw data must be transformed
into a compact representation which can then be used to reconstruct the original data.
ICL performs such a transformation, with the resulting code consisting of the competitive
unit response pattern. The reconstruction is achieved by Equation 2.
I experimented with a 600x460 pixel image having 8 bits of gray level information per
pixel. ICL was trained on random 8x8 patches of the image for a total of 125,000 training trials. The network had 64 input units and 80 stages, each with two competitive units.
The initial weights were random, selected from a Normal distribution with mean zero and
standard deviation .0001. A fixed \epsilon of .01 was used. Figure 2 shows incoming connection strengths to the competitive units in the first nine stages. The connection strengths
are depicted as an 8x8 grid of cells whose shading indicates the weight from the
corresponding position in the image patch to the competitive unit.
[Figure 2 appears here: a 3x3 grid of 8x8 weight images, panels labeled Stage 1 through Stage 9.]
Figure 2: Input-to-Competitive Unit Connection Strengths at Stages 1-9
Following training, the image is compressed by dividing the image into nonoverlapping
8x8 patches, presenting each in turn to ICL, obtaining the compressed code, and then
reconstructing the patch from the code. With an s stage network and two units per stage,
the compressed code contains s bits. Thus, the number of bits per pixel in the
compressed code is s /(8x8). To obtain different levels of compression, the number of
stages in ICL can be varied. Fortunately, this does not require retraining ICL because the
features detected at each stage do not depend on the number of stages; the earlier stages
capture the most significant variation in the input. Thus, if the network is trained with 80
stages, one can use just the first 32 to compress the image, achieving a .5 bit per pixel encoding.
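Compression of a full image then just applies the first s stages to each nonoverlapping patch (a sketch reusing icl_encode from above; names are illustrative):

```python
import numpy as np

def compress_image(img, stages, patch=8):
    H, W = img.shape
    codes = []
    for r in range(0, H - H % patch, patch):
        for c in range(0, W - W % patch, patch):
            x = img[r:r + patch, c:c + patch].ravel().astype(float)
            codes.append(icl_encode(x, stages))  # one bit per two-unit stage
    return codes                                 # len(stages)/64 bits per pixel

# e.g. 32 stages on 8x8 patches -> 32/64 = 0.5 bits per pixel
```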
The image used to train ICL was originally used in a neural net image compression study
by Cottrell, Munro, and Zipser (1989). Their compression scheme used a three-layer
back propagation autoencoder to map an image patch back to itself through a hidden
layer. The hidden layer, with fewer units than the input layer, served as the encoding.
Because hidden unit activities are continuous valued, it was necessary to quantize the activities. Using a standard measure of performance, the signal-to-noise ratio (the logarithm of the average energy relative to the average reconstruction error), ICL outperforms Cottrell et al.'s network (Table 1).
This result is not surprising. In the data compression literature, vector quantization approaches - similar to ICL - usually work better than transformation-based approaches
- e.g., Cottrell et al. (1989), Sanger (1989). The reason is that transformation-based approaches do not take quantization into account in the development of the code. That is,
in transformation-based approaches, the training procedure, which discovers the code,
and the quantization step, which turns this code into a form that can be used for digital
data transmission or storage, are two distinct processes. In Cottrell et al.'s network, a
hidden unit encoding is learned without considering the demands of quantization. There
is no assurance that the quantized code will retain the information in the signal. In contrast, ICL takes quantization into account during training.
Table 1: Signal-to-Noise Ratio for Different Compression Levels
compression        Cottrell et al.    ICL
1.25 bits/pixel    2.324              2.366
1 bit/pixel        2.170              2.270
.75 bits/pixel     1.746              2.146
.5 bits/pixel      not available      1.975
6 COMPARISON TO VECTOR QUANTIZATION APPROACHES
As I mentioned previously, ICL is essentially a neural net reformulation of a conventional data compression scheme called multistep vector quantization. However, adopting a
neural net perspective suggests several promising variants of the approach. These variants result from viewing the encoding task as an optimization problem (i.e., finding
weights that minimize Equation 1). I mention three variants, the first two of which are
methods for finding the solution more efficiently and consistently, the final one is a
powerful extension to the algorithm that I believe has not yet been studied in the vector
quantization literature.
6.1 AVOIDING LOCAL OPTIMA
As Rumelhart and Zipser (1985) and others have noted, competitive learning experiences
a serious problem from locally optimal solutions in which one competitive unit captures
most or all of the input patterns while others capture none. To eliminate such situations, I
have introduced a secondary error term whose purpose is to force the competitive units to
win equally often:
~ 1 - 2
Esec - L(t>: - Yi) ,
i-I
tJ
where Yi is the mean activity of competitive unit i over trials. Based on the soft competitive learning model with T>O, this yields the weight update rule
~wi - Y(X-wi)(1-~Yi)'
where y is the step size. Because this constraint should not be part of the ultimate solution, y must gradually be reduced to zero. In the image compression simulation, y was set
to .005 initially and was decreased by .0001 every 100 training trials. This is a more
principled solution to the local optimum problem than the "leaky learning" idea suggested by Rumelhart and Zipser. It can also be seen as an alternative or supplement to the
schemes proposed for selecting the initial code (weights) in the vector quantization literature.
Discovering Discrete Distributed Representations
6.2
CONSTRAINTS ON THE WEIGHTS
I have explored a further idea to increase the likelihood of converging on a good solution
and to achieve more rapid convergence. The idea is based on two facts. First, in an optimal solution, the weight vector of a competitive unit should be the mean of the inputs
captured by that unit. This gives rise to the second observation: beyond stage 1, the mean input, \bar{x}, should be zero.
If the competitive pools contain two units, these facts lead to a strong constraint on the
weights:
0 = \bar{x}^{(s)} = \frac{\sum_{p \in PART_1} x^{(s)}(p) + \sum_{p \in PART_2} x^{(s)}(p)}{n_1 + n_2} = \frac{n_1 w_1^{(s)} + n_2 w_2^{(s)}}{n_1 + n_2}

where x^{(s)}(p) is the input vector in stage s for pattern p, PART_1 and PART_2 are the two clusters of input patterns partitioned by the competitive units at stage s-1, and n_1 and n_2 are the number of elements in each cluster.
The consequence is that, in an optimal solution,
w_1 = -\frac{n_2}{n_1}\, w_2.

(This property is observed in Figure 2.) Constraining the weights in this manner, and performing gradient descent in the ratio n_2/n_1, as well as in the weight parameters themselves, the quality of the solution and the convergence rate are dramatically improved.
6.3 GENERALIZING THE TRANSFORMATION BETWEEN STAGES
At each stage s, the winning competitive unit specifies a transformation of x^{(s)} to obtain x^{(s+1)}. In ICL, this transformation is simply a translation. There is no reason why this could not be generalized to include rotation and dilation as well, i.e.,

x^{(s+1)} = T^{(s)}_{winner}\, x^{(s)},

where T_{winner} is a transformation matrix that includes the translation specified by w_{winner}.
(For this notation to be formally correct, x must be augmented by an element having constant value 1 to allow for translations.) The rotation and dilation parameters can be
learned via gradient descent search in the error measure given in Equation 1. Reconstruction involves inverting the sequence of transformations:
\hat{x} = \big[T^{(1)}_{winner}\big]^{-1} \big[T^{(2)}_{winner}\big]^{-1} \cdots \big(0\ 0\ 0\ \cdots\ 1\big)^{T}.
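In homogeneous coordinates a stage and its inverse are plain matrix operations; a sketch follows (choosing the winner as the unit whose transform leaves the smallest residual is our assumption):

```python
import numpy as np

def make_transform(w, theta=0.0):
    # homogeneous matrix: translate by -w, then rotate by theta
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    Tr = np.eye(3)
    Tr[:2, 2] = -np.asarray(w)
    return R @ Tr

def apply_stage(x_h, transforms):
    # winner = transform leaving the smallest residual (the constant homogeneous
    # coordinate adds the same amount to every norm, so the argmin is unaffected)
    i = int(np.argmin([np.linalg.norm(T @ x_h) for T in transforms]))
    return i, transforms[i] @ x_h          # x^(s+1) = T_winner x^(s)

x_h = np.array([1.0, 0.5, 1.0])            # input augmented with a constant 1
T = [make_transform([1.0, 0.0], theta=0.3), make_transform([-1.0, 0.0])]
i, x_next = apply_stage(x_h, T)
x_back = np.linalg.inv(T[i]) @ x_next      # reconstruction inverts the transform
print(i, np.round(x_next, 3), np.round(x_back, 3))
```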
A simple example of a situation in which this generalized transformation can be useful is
depicted in Figure 3. After subtracting out the component detected at stage 1, the two
clusters may be rotated into alignment, allowing the second stage to capture the remaining variation in the input. Whether or not this extension proves useful has yet to be tested. However, the connectivity patterns in Figure 2 certainly suggest that factoring out
variations in orientation might permit an even more compact representation of the input
data.
Figure 3: A Sample Input Space With Four Data Points
Acknowledgements
This research was supported by NSF grant IRI-9058450 and grant 90-21 from the James
S. McDonnell Foundation. My thanks to Paul Smolensky for helpful comments on this
work and to Gary Cottrell for providing the image data and associated software.
References
Ambros-Ingerson, J., Granger, G., & Lynch, G. (1990). Simulation of paleocortex performs hierarchical clustering. Science, 247, 1344-1348.
Bridle, J. (1990). Training stochastic model recognition algorithms as networks can lead to maximum mutual
information estimation of parameters. In D. S. Touretzky (Ed.), Advances in neural information processing systems 2 (pp. 211-217). San Mateo, CA: Morgan Kaufmann.
Cottrell, G. W., Munro, P., & Zipser, D. (1989). Image compression by back propagation: An example of extensional programming. In N. Sharkey (Ed.), Models of cognition: A review of cognitive science (pp.
208-240). Norwood, NJ: Ablex.
Durbin, R. (April, 1990). Principled competitive learning in both unsupervised and supervised networks. Poster presented at the conference on Neural Networks for Computing, Snowbird, Utah.
Gray, R. M. (1984). Vector quantization. IEEE ASSP Magazine, 4-29.
Grossberg, S. (1976). Adaptive pattern classification and universal recoding. I: Parallel development and coding of neural feature detectors. Biological Cybernetics, 23, 121-134.
Hrycej, T. (1989). Unsupervised learning by backward inhibition. Proceedings of the Eleventh International
Joint Conference on Artificial Intelligence (pp. 170-175). Los Altos, CA: Morgan Kaufmann.
Kohonen, T. (1982). Clustering, taxonomy, and topological maps of patterns. In M. Lang (Ed.), Proceedings of
the Sixth International Conference on Pattern Recognition (pp. 114-125). Silver Spring, MD: IEEE
Computer Society Press.
Rumelhart, D. E. (in press). Connectionist processing and learning as statistical inference. In Y. Chauvin & D.
E. Rumelhart (Eds.), Backpropagation: Theory, architectures, and applications. Hillsdale, NJ: Erlbaum.
Rumelhart, D. E., & Zipser, D. (1985). Feature discovery by competitive learning. Cognitive Science, 9,
75-112.
Sanger, T. D. (1989). Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2, 459-473.
von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik,
14,85-100.
2,625 | 3,380 | Sparse probabilistic projections
C?edric Archambeau
Department of Computer Science
University College London, United Kingdom
[email protected]
Francis R. Bach
INRIA - Willow Project
Ecole Normale Sup?erieure, Paris, France
[email protected]
Abstract
We present a generative model for performing sparse probabilistic projections,
which includes sparse principal component analysis and sparse canonical correlation analysis as special cases. Sparsity is enforced by means of automatic relevance determination or by imposing appropriate prior distributions, such as generalised hyperbolic distributions. We derive a variational Expectation-Maximisation
algorithm for the estimation of the hyperparameters and show that our novel probabilistic approach compares favourably to existing techniques. We illustrate how
the proposed method can be applied in the context of cryptoanalysis as a preprocessing tool for the construction of template attacks.
1
Introduction
Principal component analysis (PCA) is widely used for data pre-processing, data compression and
dimensionality reduction. However, PCA suffers from the fact that each principal component is a
linear combination of all the original variables. It is thus often difficult to interpret the results. In
recent years, several methods for sparse PCA have been designed to find projections which retain
maximal variance, while enforcing many entries of the projection matrix to be zero [20, 6]. While
most of these methods are based on convex or partially convex relaxations of the sparse PCA problem, [16] has looked at using the probabilistic PCA framework of [18] along with \ell_1-regularisation.
Canonical correlation analysis (CCA) is also commonly used in the context of dimensionality reduction. The goal here is to capture features that are common to several views of the same data.
Recent attempts for constructing sparse CCA include [10, 19].
In this paper, we build on the probabilistic interpretation of CCA outlined by [2] and further extended
by [13]. We introduce a general probabilistic model, which allows us to infer from an arbitrary
number of views of the data, both a shared latent representation and individual low-dimensional
representations of each one of them. Hence, the probabilistic reformulations of PCA and CCA fit
this probabilistic framework. Moreover, we are interested in sparse solutions, as these are important
for interpretation purposes, denoising or feature extraction. We consider a Bayesian approach to
the problem. A proper probabilistic approach allows us to treat the trade-off between the modelling
accuracy (of the high-dimensional observations by low-dimensional latent variables) and the degree
of sparsity of the projection directions in principled way. For example, we do not need to estimate
the sparse components successively, using, e.g., deflation, but we can estimate all sparse directions
jointly as we are taking the uncertainty of the latent variable into account.
In order to ensure sparse solutions we propose two strategies. The first one, discussed in Appendix A, is based on automatic relevance determination (ARD) [14]. No parameter needs to be
set in advance. The entries in the projection matrix which are not well determined by the data are
automatically driven to zero. The second approach uses priors from the generalised hyperbolic family [3], and more specifically the inverse Gamma. In this case, the degree of sparsity can be adjusted,
eventually leading to very sparse solutions if desired. For both approaches we derive a variational
EM algorithm [15].
[Figure 1 appears here. Panel (a): the graphical model. Panel (b): the marginal prior p(λ) plotted for a/DQ = 0.1, 1 and 10.]
Figure 1: (a) Graphical model (see text for details). Arrows denote conditional dependencies.
Shaded and unshaded nodes are respectively observed and unobserved random variables. Plates
indicate repetitions. (b) Marginal prior on the individual matrix entries (b = 1).
2 Generative model
We consider the graphical model shown in Figure 1(a). For each observation, we have P independent measurements x_1, ..., x_P in different measurement spaces or views. The measurement x_p ∈ ℝ^{D_p} is modelled as a mix of a common (or view independent) continuous latent vector y_0 ∈ ℝ^{Q_0} and a view dependent continuous latent vector y_p ∈ ℝ^{Q_p}, such that

x_p = W_p y_0 + V_p y_p + μ_p + ε_p,   W_p ∈ ℝ^{D_p×Q_0},  V_p ∈ ℝ^{D_p×Q_p},   (1)

where {μ_p}_{p=1}^P are the view dependent offsets and ε_p ~ N(0, τ_p^{-1} I_{D_p}) is the residual error in view p.
We are interested in the case where y_0 and y_p are low-dimensional vectors, i.e., Q_0, Q_p ≪ D_p for all p. We impose Gaussian priors on the latent vectors:

y_0 ~ N(0, Φ_0^{-1}),   y_p ~ N(0, Φ_p^{-1}),   p ∈ {1, ..., P}.   (2)
The resulting generative model comprises a number of popular probabilistic projection techniques as
special cases. If there is a single view (and a single latent cause) and the prior covariance is diagonal,
we recover probabilistic factor analysis [9]. If the prior is also isotropic, then we get probabilistic
PCA [18]. If there are two views, we recover probabilistic CCA [2].
We seek a solution for which the matrices {W_p}_{p=1}^P and {V_p}_{p=1}^P are sparse, i.e. most of their entries are zero. One way to achieve sparsity is by means of ARD-type priors [14]. In this framework, a zero-mean Gaussian prior is imposed on the entries of the weight matrices:

w_{i_p j} ~ N(0, 1/γ_{i_p j}),   i_p ∈ {1, ..., D_p},  j ∈ {1, ..., Q_0},   (3)
v_{i_p k_p} ~ N(0, 1/γ_{i_p k_p}),   i_p ∈ {1, ..., D_p},  k_p ∈ {1, ..., Q_p}.   (4)
Type II maximum likelihood then leads to a sparse solution when considering independent hyperparameters. The updates arising in the context of probabilistic projections are given in Appendix A.
Since marginalisation with respect to both the latent vectors and the weights is intractable, we apply
variational EM [15]. Unfortunately, following this route does not allow us to adjust the degree of
sparsity, which is important e.g. for interpretation purposes or for feature extraction.
Hence, we seek a more flexible approach. In the remainder of this paper, we will assume that the marginal prior on each weight λ_ij, which is either an entry of {W_p}_{p=1}^P or {V_p}_{p=1}^P and will be defined shortly, has the form of an (infinite) weighted sum of scaled Gaussians:

p(λ_ij) = ∫ N(0, γ_ij^{-1}) p(γ_ij) dγ_ij.   (5)
We will choose the prior over γ_ij in such a way that the resulting marginal prior over the corresponding λ_ij induces sparsity. A similar approach was followed in the context of sparse nonparametric Bayesian regression in [4, 5].
2.1 Compact reformulation of the generative model
Before discussing the approximate inference scheme, we rewrite the model in a more compact way.
Let us denote the nth observation, the corresponding latent vector and the means respectively by

x_n = (x_{n1}^⊤, ..., x_{nP}^⊤)^⊤,   z_n = (y_{n0}^⊤, y_{n1}^⊤, ..., y_{nP}^⊤)^⊤,   μ = (μ_1^⊤, ..., μ_P^⊤)^⊤.

The generative model can be reformulated as follows:

z_n ~ N(0, Φ^{-1}),   Φ ∈ ℝ^{Q×Q},  Q = Q_0 + ∑_p Q_p,   (6)
λ_ij | γ_ij ~ N(0, γ_ij^{-1}),   i ∈ {1, ..., D},  j ∈ {1, ..., Q},  D = ∑_p D_p,   (7)
x_n | z_n, Λ ~ N(Λ z_n + μ, Ψ^{-1}),   Λ ∈ ℝ^{D×Q},  Ψ ∈ ℝ^{D×D},   (8)
where

    Λ = [ Λ_1 ]   [ W_1  V_1  ⋯   0  ]
        [  ⋮  ] = [  ⋮         ⋱  ⋮  ],    Ψ = [ τ_1 I_{D_1}  ⋯       0       ]
        [ Λ_P ]   [ W_P   0   ⋯  V_P ]         [      ⋮       ⋱       ⋮       ]
                                               [      0       ⋯  τ_P I_{D_P} ]
Note that we do not assume that the latent spaces are correlated, as Φ = diag{Φ_0, Φ_1, ..., Φ_P}. This is consistent with the fact that the common latent space is modelled independently through y_0. Subsequently, we will also denote the matrix of the hyperparameters by Γ ∈ ℝ^{D×Q}, where we set (and fix) γ_ij = ∞ for all λ_ij = 0.
2.2 Sparsity-inducing prior over the individual scale variables
We impose an inverse Gamma prior on the scale variable γ_ij:

γ_ij ~ IG(a/DQ, b),   (9)

for all i and j. The shape parameter a and the scale parameter b are non-negative. The marginal prior on the weight λ_ij is then in the class of the generalised hyperbolic distributions [3] and is defined in terms of the modified Bessel function of the third kind K_ω(·):

p(λ_ij) = (√(2 b^{a/DQ}) / (√π Γ(a/DQ))) (λ_ij² / 2b)^{a/2DQ − 1/4} K_{a/DQ − 1/2}(√(2b λ_ij²))   (10)

for λ_ij ≠ 0, and

lim_{λ_ij→0} p(λ_ij) = √(b/2π) Γ(a/DQ − 1/2) / Γ(a/DQ)  if a/DQ > 1/2,  and ∞ otherwise.   (11)

The function Γ(·) is the (complete) Gamma function.
The effective prior on the individual weights is shown in Figure 1(b). Intuitively, the joint distribution over the weights is sparsity inducing as it is sharply peaked around zero (and in fact infinite for sufficiently small a). It favours only a small number of weights to be non-zero if the scale variable b is sufficiently large. For a more formal discussion in the context of regression we refer to [7].
It is interesting to note that for a/DQ = 1 we recover the popular Laplace prior, which is equivalent to the ℓ1-regulariser or the LASSO [17], and for a/DQ → 0 and b → 0 the resulting prior is the Normal-Jeffreys prior. In fact, the automatic thresholding method described in Appendix A also fits into the framework defined by (5). However, it corresponds to imposing a flat prior on the scale variables over the log-scale, which is a limiting case of the Gamma distribution. When imposing independent Gamma priors on the scale variables, the effective joint marginal is a product of Student-t distributions, which again is sharply peaked around zero and sparsity inducing.
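For intuition, the curves of Figure 1(b) can be reproduced by evaluating the density numerically; a sketch using SciPy, where the parameterisation follows the reconstruction of eq. (10) above and should be treated as an assumption:

```python
import numpy as np
from scipy.special import kv, gammaln

def log_marginal_prior(lam, r, b):
    """log p(lambda) per the reconstruction of eq. (10); r = a/DQ, lam != 0."""
    lam2 = lam ** 2
    return (0.5 * np.log(2.0) + 0.5 * r * np.log(b)
            - 0.5 * np.log(np.pi) - gammaln(r)
            + (0.5 * r - 0.25) * np.log(lam2 / (2.0 * b))
            + np.log(kv(r - 0.5, np.sqrt(2.0 * b * lam2))))

lam = np.linspace(-10, 10, 401)
lam = lam[lam != 0.0]              # the density may diverge at zero, eq. (11)
for r in (0.1, 1.0, 10.0):         # the three curves of Figure 1(b)
    density = np.exp(log_marginal_prior(lam, r, b=1.0))
```

Note that for a/DQ = 1 the Bessel function satisfies K_{1/2}(z) = √(π/2z) e^{−z}, and the density collapses to the Laplace form, consistent with the remark above.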
3 Variational approximation
We view {z_n}_{n=1}^N and the matrix Γ as latent variables, and optimise the parameters θ = {Λ, μ, Φ, Ψ} by EM. In other words, we view the weight matrix Λ as a matrix of parameters and estimate the entries by maximum a posteriori (MAP) learning. The other parameters are estimated by maximum likelihood (ML).
The variational free energy is given by

F_q(x_1, ..., x_N, θ) = − ∑_{n=1}^N ⟨ln p(x_n, z_n, Γ | θ)⟩_q − H[q(z_1, ..., z_N, Γ)],   (12)

where ⟨·⟩_q denotes the expectation with respect to the variational distribution q and H[·] is the differential entropy. Since the Kullback-Leibler divergence (KL) is non-negative, the negative free energy is a lower bound on the log-marginal likelihood:

∑_{n=1}^N ln p(x_n | θ) = −F_q({x_n}, θ) + KL[q({z_n}, Γ) ‖ p({z_n}, Γ | {x_n}, θ)] ≥ −F_q({x_n}, θ).   (13)
Interestingly, it is not required to make a factorised approximation of the joint posterior q to find a tractable solution. Indeed, the posterior q factorises naturally given the data and the weights, such that the posteriors we will obtain in the E-step are exact.
The variational EM finds maximum likelihood estimates for the parameters by cycling through the following two steps until convergence:

1. The posteriors over the latent variables are computed for fixed parameters by minimising the KL in (13). It can be shown that the variational posteriors are given by

q(z_1, ..., z_N) ∝ ∏_{n=1}^N exp(⟨ln p(x_n, z_n, Γ | θ)⟩_{q(Γ)}),   (14)
q(Γ) ∝ exp(⟨ln p({x_n}, {z_n} | Γ, θ)⟩_{q(z_1,...,z_N)}) p(Γ).   (15)

2. The variational free energy (12) is minimised wrt the parameters for fixed q. This leads in effect to type II ML estimates for the parameters and is equivalent to maximising the expected complete log-likelihood:

θ ← argmax_θ ∑_{n=1}^N ⟨ln p(x_n, z_n, Γ | θ)⟩_q.   (16)
Depending on the initialisation, the variational EM algorithm converges to a local maximum of the log-marginal likelihood. The convergence can be checked by monitoring the variational lower bound, which monotonically increases during the optimisation. The explicit expression of the variational bound is omitted here due to lack of space.
3.1 Posterior of the latent vectors
The joint posterior of the latent vectors factorises into N posteriors due to the fact that the observations are independent. Hence, the posterior of each low-dimensional latent vector is given by

q(z_n) = N(ẑ_n, Ŝ_n),   (17)

where ẑ_n = Ŝ_n Λ^⊤ Ψ (x_n − μ) is the mean and Ŝ_n = (Λ^⊤ Ψ Λ + Φ)^{-1} is the covariance.
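In code, the E-step (17) amounts to one matrix inversion shared across all observations; a minimal sketch with numpy (the array shapes are assumptions):

```python
import numpy as np

def posterior_latents(X, Lam, Psi, Phi, mu):
    """q(z_n) = N(zhat_n, Shat) from eq. (17); Shat is shared across n.
    X: (N, D) data, Lam: (D, Q), Psi: (D, D) noise precision,
    Phi: (Q, Q) latent precision, mu: (D,) offsets."""
    Shat = np.linalg.inv(Lam.T @ Psi @ Lam + Phi)
    Zhat = (X - mu) @ Psi @ Lam @ Shat      # row n equals zhat_n
    return Zhat, Shat
```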
3.2 Posterior of the scale variables
The inverse Gamma distribution is not conjugate to the exponential family. However, the posterior over matrix Γ is tractable. It has the form of a product of generalised inverse Gaussian distributions (see Appendix B for a formal definition):

q(Γ) = ∏_{i=1}^D ∏_{j=1}^Q p(γ_ij | λ_ij) = ∏_{i=1}^D ∏_{j=1}^Q N^−(ω̂_ij, χ̂_ij, φ̂_ij),   (18)

where ω̂_ij = −a/DQ + 1/2 is the index and χ̂_ij = λ_ij² and φ̂_ij = 2b are the shape parameters. The factorised form arises from the scale variables being independent conditioned on the weights.
3.3 Update for the parameters
Based on the properties of the Gaussian and the generalised inverse Gaussian, we can compute the variational lower bound, which can then be maximised. This leads to the following updates:

μ ← (1/N) ∑_{n=1}^N (x_n − Λ ẑ_n),   Φ^{-1} ← (1/N) ∑_{n=1}^N diag{ẑ_n ẑ_n^⊤ + Ŝ_n},   (19)
λ_i ← ( ∑_{n=1}^N ⟨z_n z_n^⊤⟩ + Ψ(i,i)^{-1} Γ̄_i )^{-1} ∑_{n=1}^N (x_n(i) − μ(i)) ẑ_n,   (20)
τ_p^{-1} ← (1/(N D_p)) ∑_{n=1}^N [(x_{np} − μ_p)^⊤(x_{np} − μ_p) − 2(x_{np} − μ_p)^⊤ Λ_p ẑ_n + ⟨(Λ_p z_n)^⊤ Λ_p z_n⟩],   (21)

where the required expectations are given by

⟨z_n z_n^⊤⟩ = Ŝ_n + ẑ_n ẑ_n^⊤,   Γ̄_i = diag{⟨γ_i1⟩, ..., ⟨γ_iQ⟩},   (22)
⟨(Λ_p z_n)^⊤ Λ_p z_n⟩ = tr{⟨z_n z_n^⊤⟩ Λ_p^⊤ Λ_p},   ⟨γ_ij⟩ = √(χ̂_ij/φ̂_ij) K_{ω̂+1}(√(χ̂_ij φ̂_ij)) / K_{ω̂}(√(χ̂_ij φ̂_ij)).   (23)
Note that diag{·} denotes a block-diagonal operation in (19). More importantly, since we are seeking a sparse projection matrix, we do not suffer from the rotational ambiguity problem, as is for example the case in standard probabilistic PCA.
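A sketch of the corresponding M-step, following the reconstruction of (19)-(20) above (`posterior_latents` is the helper sketched in Section 3.1; the τ_p update of (21) is omitted for brevity, and `gamma_bar` stands for the Γ̄_i rows computed via (23)):

```python
import numpy as np

def m_step(X, Zhat, Shat, Lam, Psi, gamma_bar):
    """One M-step pass over eqs. (19)-(20); gamma_bar[i] is the Q-vector
    of posterior means <gamma_ij> for row i of Lambda (from eq. (23))."""
    N, D = X.shape
    mu = (X - Zhat @ Lam.T).mean(axis=0)              # eq. (19), left
    Ezz = Zhat.T @ Zhat / N + Shat                    # average of <z_n z_n^T>
    Phi_inv = np.diag(np.diag(Ezz))                   # eq. (19), right
    R = X - mu                                        # centred data
    S_zz = N * Ezz                                    # sum_n <z_n z_n^T>
    Lam_new = np.empty_like(Lam)
    for i in range(D):                                # eq. (20), row by row
        A = S_zz + np.diag(gamma_bar[i]) / Psi[i, i]
        Lam_new[i] = np.linalg.solve(A, Zhat.T @ R[:, i])
    return mu, Phi_inv, Lam_new
```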
4 Experiments

4.1 Synthetic denoising experiments
Because of identifiability issues, which are the subject of ongoing work, we prefer to compare various methods for sparse PCA in a denoising experiment. That is, we assume that the data were generated from sparse components plus some noise and we compare the various sparse PCA methods on the denoising task, i.e., on the task of recovering the original data. We generated the data as follows: select uniformly at random M = 4 unit norm sparse vectors in P = 10 dimensions with known number S = 4 of non-zero entries, then generate i.i.d. values of the random variables Z from three possible distributions (Gaussian, Laplacian, uniform), then add isotropic noise of relative standard deviation 1/2. When the latent variables are Gaussian, our model exactly matches the data and our method should provide a better fit; however, we also consider situations where the model is misspecified in order to study the robustness of our probabilistic model.
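The data-generation protocol can be reproduced directly; a sketch, where the normalisation and noise convention follow the description above and the remaining details are guesses:

```python
import numpy as np

def make_denoising_data(n, P=10, M=4, S=4, latent="gaussian", seed=0):
    """n observations from M sparse unit-norm components plus noise."""
    rng = np.random.default_rng(seed)
    W = np.zeros((P, M))
    for m in range(M):
        idx = rng.choice(P, size=S, replace=False)    # S non-zero entries
        W[idx, m] = rng.normal(size=S)
        W[:, m] /= np.linalg.norm(W[:, m])            # unit norm
    draw = {"gaussian": rng.normal,
            "laplace": rng.laplace,
            "uniform": lambda size: rng.uniform(-1.0, 1.0, size)}[latent]
    Z = draw(size=(n, M))
    X_clean = Z @ W.T
    noise = rng.normal(size=(n, P)) * 0.5 * X_clean.std()  # rel. std 1/2
    return X_clean + noise, W
```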
We consider our two models: SCA-1 (which uses automatic relevance determination type sparsity priors) and SCA-2 (which uses the generalised hyperbolic distribution), where we used 6 latent dimensions (larger than 4) and fixed hyperparameters that lead to vague priors. Those two models thus have no free parameters, and we compare them to the following methods, which all have two regularisation parameters (rank and regularisation): DSPCA [6], the method of Zou [20] and the recent method of [16], which essentially considers a probabilistic PCA with ℓ1-penalty on the weights.
In Table 1 we report mean-squared reconstruction error averaged over 10 replications. It can be seen that the two proposed probabilistic approaches perform similarly and significantly outperform other sparse PCA methods, even when the model is misspecified.
4.2 Template attacks
Power consumption and electromagnetic radiation are among the most extensively used side-channels for analysing physically observable cryptographic devices. A common belief is that the useful information for attacking a device is hidden at times where the traces (or time series) have large variance. Once the relevant samples have been identified, they can be used to construct templates, which can then be used to assess if a device is secure. A simple, yet very powerful, approach recently proposed by [1] is to select time samples based on PCA.
N     SCA-1   SCA-2   Zou    DSPCA   L1-PCA
100   39.9    40.8    42.2   42.9    50.8
200   36.5    36.8    40.8   41.4    50.4
400   35.5    35.5    39.8   40.3    42.5

N     SCA-1   SCA-2   Zou    DSPCA   L1-PCA
100   39.9    40.9    42.6   43.6    49.8
200   36.8    37.0    40.9   41.1    48.1
400   36.4    36.4    40.5   40.7    46.8

N     SCA-1   SCA-2   Zou    DSPCA   L1-PCA
100   39.3    40.3    42.7   43.4    48.5
200   36.5    36.7    40.2   40.8    46.2
300   35.8    35.8    40.6   40.9    41.0

Table 1: Denoising experiment with sparse PCA (we report mean squared errors): (top) Gaussian distributed latent vectors, (middle) latent vectors generated from the uniform distribution, (bottom) latent vectors generated from the Laplace distribution.
[Figure 2 appears here: two panels, each showing a power trace and the first three principal directions λ_1, λ_2, λ_3 over time t. (a) Probabilistic PCA. (b) Sparse probabilistic PCA (SCA-2).]

Figure 2: Power traces and first three principal directions.
Figure 2(a) shows the weight associated to each time sample by the first three principal directions found by PCA. The problem with this approach is that all time samples get a non-zero weight. As a result, the user has to define a threshold manually in order to decide whether the information leakage at time t is relevant or not. Figure 2(b) shows the weight associated to the time samples by SCA-2 when using a Laplace prior (i.e. for a/DQ = 1). It can be observed that one gets a much better picture of where the relevant information is. Clearly, sparse probabilistic PCA can be viewed as being more robust to spurious noise and provides a more reliable and amenable solution.
5 Conclusion
In this paper we introduced a general probabilistic model for inferring sparse probabilistic projection matrices. Sparsity was enforced by either imposing an ARD-type prior or by means of a Normal-inverse Gamma prior. Although the inverse Gamma is not conjugate to the exponential family, the posterior is tractable as it is a special case of the generalised inverse Gaussian [12], which in turn is a conjugate prior to this family. Future work will include the validation of the method on a wide range of applications and in particular as a feature extraction tool.
Acknowledgments
We are grateful to the PASCAL European network of excellence for partially supporting this work.
A Automatic thresholding of the weights by ARD
In this section, we provide the updates for achieving automatic thresholding of projection matrix
entries in a probabilistic setting. We apply Tipping's sparse Bayesian theory [8], which is closely related to ARD [14]. More specifically, we assume the prior over the scale variables is uniform over a log-scale, which is a limiting case of the Gamma distribution.
Let us view {z_n}_{n=1}^N and Λ as latent variables and optimise the parameters θ = {Γ, μ, Φ, Ψ} by variational EM. The variational free energy is given by

F_q(x_1, ..., x_N, θ) = − ∑_{n=1}^N ⟨ln p(x_n, z_n, Λ | θ)⟩_q − H[q(z_1, ..., z_N, Λ)].   (24)
In order to find a tractable solution, we further have to assume that the approximate posterior q has a factorised form. We can then compute the posterior of the low-dimensional latent vectors:

q(z_n) = N(ẑ_n, Ŝ_n),   (25)

where ẑ_n = Ŝ_n Λ̂^⊤ Ψ (x_n − μ) and Ŝ_n = (Λ̂^⊤ Ψ Λ̂ + ∑_i Ψ(i,i) Σ̄_i + Φ)^{-1}. And the posterior of the weights is given by

q(Λ) = ∏_{i=1}^D q(λ_i) = ∏_{i=1}^D N(λ̂_i, Σ̄_i),   (26)

where λ̂_i = Σ̄_i Ψ(i,i) ∑_n (x_n(i) − μ(i)) ẑ_n and Σ̄_i = (Γ_i + Ψ(i,i) ∑_n {Ŝ_n + ẑ_n ẑ_n^⊤})^{-1}. The partially factorised form ∏_i q(λ_i) arises naturally. Note also that the update for the mean weights has the same form as in (20). Finally, the updates for the parameters are found by maximising the negative free energy, which corresponds to performing type II ML for the scaling variables. This yields

μ ← (1/N) ∑_{n=1}^N (x_n − Λ̂ ẑ_n),   Φ^{-1} ← (1/N) ∑_{n=1}^N diag{ẑ_n ẑ_n^⊤ + Ŝ_n},   γ_ij ← ⟨λ_ij²⟩^{-1},   (27)
τ_p^{-1} ← (1/(N D_p)) ∑_{n=1}^N [(x_{np} − μ_p)^⊤(x_{np} − μ_p) − 2(x_{np} − μ_p)^⊤ Λ̂_p ẑ_n + ⟨(Λ_p z_n)^⊤ Λ_p z_n⟩],   (28)

where ⟨λ_ij²⟩ = λ̂_ij² + Σ̄_i(j, j) and ⟨(Λ_p z_n)^⊤ Λ_p z_n⟩ = tr{(ẑ_n ẑ_n^⊤ + Ŝ_n)(Λ̂_p^⊤ Λ̂_p + ∑_{i_p} Σ̄_{i_p})}.
B Generalised inverse Gaussian distribution
The generalised inverse Gaussian distribution is in the class of generalised hyperbolic distributions.
It is defined as follows [12, 11]:

y ~ N^−(ω, χ, φ) = ((φ/χ)^{ω/2} / (2 K_ω(√(χφ)))) y^{ω−1} e^{−(χ y^{−1} + φ y)/2},   (29)

where y > 0 and K_ω(·) is the modified Bessel function of the third kind¹ with index ω.
The following expectations are useful [12]:

⟨y⟩ = √(χ/φ) R_ω(√(χφ)),   ⟨y^{−1}⟩ = √(φ/χ) R_{−ω}(√(χφ)),   ⟨ln y⟩ = ln √(χ/φ) + d ln K_ω(√(χφ))/dω,   (30)

where R_ω(·) ≡ K_{ω+1}(·)/K_ω(·).
¹The modified Bessel function of the third kind is known under various names. In particular, it is also known as the modified Bessel function of the second kind (cf. E. W. Weisstein: "Modified Bessel Function of the Second Kind." From MathWorld: http://mathworld.wolfram.com/ModifiedBesselFunctionoftheSecondKind.html).
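Numerically, the moments (30) reduce to a few Bessel-function evaluations; a sketch, where the derivative with respect to the index is approximated by central differences:

```python
import numpy as np
from scipy.special import kv

def gig_moments(omega, chi, phi, h=1e-5):
    """<y>, <1/y> and <ln y> for y ~ N^-(omega, chi, phi), per eq. (30)."""
    s = np.sqrt(chi * phi)
    R = lambda w: kv(w + 1.0, s) / kv(w, s)   # R_w(s) = K_{w+1}(s)/K_w(s)
    mean_y = np.sqrt(chi / phi) * R(omega)
    mean_inv_y = np.sqrt(phi / chi) * R(-omega)
    dlogK = (np.log(kv(omega + h, s)) - np.log(kv(omega - h, s))) / (2 * h)
    mean_log_y = 0.5 * np.log(chi / phi) + dlogK
    return mean_y, mean_inv_y, mean_log_y
```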
Inverse Gamma distribution
When φ = 0 and ω < 0, the generalised inverse Gaussian distribution reduces to the inverse Gamma distribution:

IG(a, b) = (b^a / Γ(a)) x^{−a−1} e^{−b/x},   a, b > 0.   (31)

It is straightforward to verify this result by posing a = −ω and b = χ/2, and noting that

lim_{y→0} K_ω(y) = Γ(−ω) 2^{−ω−1} y^{ω}   (32)

for ω < 0.
References
[1] C. Archambeau, E. Peeters, F.-X. Standaert, and J.-J. Quisquater. Template attacks in principal subspaces. In L. Goubin and M. Matsui, editors, 8th International Workshop on Cryptographic Hardware and Embedded Systems (CHES), volume 4249 of Lecture Notes in Computer Science, pages 1-14. Springer, 2006.
[2] F. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical Report 688, Department of Statistics, University of California, Berkeley, 2005.
[3] O. Barndorff-Nielsen and R. Stelzer. Absolute moments of generalized hyperbolic distributions and approximate scaling of normal inverse Gaussian Lévy processes. Scandinavian Journal of Statistics, 32(4):617-637, 2005.
[4] P. J. Brown and J. E. Griffin. Bayesian adaptive lassos with non-convex penalization. Technical Report CRiSM 07-02, Department of Statistics, University of Warwick, 2007.
[5] F. Caron and A. Doucet. Sparse Bayesian nonparametric regression. In 25th International Conference on Machine Learning (ICML). ACM, 2008.
[6] A. d'Aspremont, E. L. Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49(3):434-448, 2007.
[7] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96:1348-1360, 2001.
[8] A. C. Faul and M. E. Tipping. Analysis of sparse Bayesian learning. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14 (NIPS), pages 383-389. The MIT Press, 2002.
[9] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, Department of Computer Science, University of Toronto, 1996.
[10] D. Hardoon and J. Shawe-Taylor. Sparse canonical correlation analysis. Technical report, PASCAL EPrints, 2007.
[11] W. Hu. Calibration of multivariate generalized hyperbolic distributions using the EM algorithm, with applications in risk management, portfolio optimization and portfolio credit risk. PhD thesis, Florida State University, United States of America, 2005.
[12] B. Jørgensen. Statistical Properties of the Generalized Inverse Gaussian Distribution. Springer-Verlag, 1982.
[13] A. Klami and S. Kaski. Local dependent components. In Z. Ghahramani, editor, 24th International Conference on Machine Learning (ICML), pages 425-432. Omnipress, 2007.
[14] D. J. C. MacKay. Bayesian methods for backprop networks. In E. Domany, J. L. van Hemmen, and K. Schulten, editors, Models of Neural Networks, III, pages 211-254. 1994.
[15] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. The MIT Press, 1998.
[16] C. D. Sigg and J. M. Buhmann. Expectation-maximization for sparse and non-negative PCA. In 25th International Conference on Machine Learning (ICML). ACM, 2008.
[17] R. Tibshirani. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society B, 58:267-288, 1996.
[18] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society B, 61:611-622, 1999.
[19] D. Torres, D. Turnbull, B. K. Sriperumbudur, L. Barrington, and G. Lanckriet. Finding musically meaningful words using sparse CCA. In NIPS Workshop on Music, Brain and Cognition, 2007.
[20] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265-286, 2006.
Feature Abstraction for Text Classification
Doug Downey
Electrical Engineering and Computer Science Department
Northwestern University
Evanston, IL 60208
[email protected]
Oren Etzioni
Turing Center, Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195
[email protected]
Abstract
Is accurate classification possible in the absence of hand-labeled data? This paper introduces the Monotonic Feature (MF) abstraction, where the probability of class membership increases monotonically with the MF's value. The paper proves
that when an MF is given, PAC learning is possible with no hand-labeled data
under certain assumptions.
We argue that MFs arise naturally in a broad range of textual classification applications. On the classic "20 Newsgroups" data set, a learner given an MF and
unlabeled data achieves classification accuracy equal to that of a state-of-the-art
semi-supervised learner relying on 160 hand-labeled examples. Even when MFs
are not given as input, their presence or absence can be determined from a small
amount of hand-labeled data, which yields a new semi-supervised learning method
that reduces error by 15% on the 20 Newsgroups data.
1 Introduction
Is accurate classification possible in the complete absence of hand-labeled data? A priori, the answer would seem to be no, unless the learner has knowledge of some additional problem structure. This
paper identifies a problem structure, called Monotonic Features (MFs), that enables the learner to
automatically assign probabilistic labels to data. A feature is monotonic when the probability of
class membership increases monotonically with that feature's value, all else being equal.
MFs occur naturally in a broad range of textual classification tasks. For example, it can be shown that Naive Bayes text classifiers return probability estimates that are monotonic in the frequency of a word, for the class in which the word is most common. Thus, if we are trying to discriminate between documents about New York and Boston, then we expect to find that the Naive Bayes feature measuring the frequency of "Giants" in the corpus is an MF for the class New York, and likewise for "Patriots" and Boston.
In document classification, the name of the class is a natural MF: the more times it is repeated in a document, all other things being equal, the more likely it is that the document belongs to the
class. We demonstrate this to be the case empirically in Section 4, extending the experiments of
[8] and [3]. Similarly, information retrieval systems classify documents into relevant and irrelevant
documents based, in part, on the Term-Frequency-Inverse-Document-Frequency (TF-IDF) metric,
and then proceed to rank the relevant documents. The term frequency component of this metric is a
monotonic feature.
The power of MFs is not restricted to textual classification. Consider Information Extraction (IE), where strings are extracted from sentences and classified into categories (e.g. City or Film) based on their proximity to "extraction patterns". For example, the phrase "cities such as" is an extraction pattern. Any proper noun immediately following this pattern is likely to denote a city, as in the phrase "cities such as Boston, Seattle, and New York" [9]. When classifying a proper noun, the number of times that it follows an extraction pattern in a corpus turns out to be a powerful MF.
This observation is implicit in the combinatorial model of unsupervised IE put forth in [6]. Finally,
MF-based techniques have been demonstrated to be effective for word-sense disambiguation using a
set of manually-specified MFs [13]; this work was later extended to automatically derive MFs from
resources like Wordnet [10].
Thus, MFs have been used implicitly in a broad range of textual classification tasks. This paper
makes the MF abstraction explicit, provides a formal theory of MFs, an automatic method for explicitly detecting and utilizing MFs, and quantifies the method's benefits empirically.
1.1 Contribution
Typically, MFs cannot serve directly as classifiers. Instead, this paper presents theoretical and empirical results showing that even relatively weak MFs can be used to induce a noisy labeling over
examples, and these examples can then be used to train effective classifiers utilizing existing supervised or semi-supervised techniques.
Our contributions are as follows:
1. We prove that the Monotonic Feature (MF) structure guarantees PAC learnability using only
unlabeled data, and that MFs are distinct from and complementary to standard biases used in semi-supervised learning, including the manifold and cluster assumptions.
2. We present a general technique, called MFA, for employing MFs in combination with an arbitrary concept learning algorithm. We demonstrate experimentally that MFA can outperform state-of-the-art techniques for semi-supervised document classification, including Naive Bayes with Expectation Maximization (NB-EM), and Label Propagation, on the 20 Newsgroups data set [11].
The remainder of the paper is organized as follows. Section 2 formally defines our problem structure
and the properties of monotonic features. Section 3 presents our theoretical results, and formalizes
the relationship between the MF approach and previous work. We present experimental results in
Section 4, and conclude with directions for future work.
2 Formal Framework
We consider a semi-supervised classification task, in which the goal is to produce a mapping from an instance space X consisting of d-tuples x = (x_1, ..., x_d), to a binary output space Y = {0, 1}.¹ We denote the concept class of mappings f : X → Y as C.
We assume the following inputs:
• A set of zero or more labeled examples D_L = {(x_i, y_i) | i = 1 ... n}, drawn i.i.d. from a distribution P(x, y) for x ∈ X and y ∈ Y.
• A set of zero or more unlabeled examples D_U = {(x_i) | i = 1 ... u} drawn from the marginal distribution P(x) = ∑_y P(x, y).
• A set M ⊆ {1, ..., d} of zero or more monotonic features for the positive class y = 1. The monotonic features have properties specified below.
The goal of the classification task is to produce a mapping c ∈ C that maximizes classification accuracy evaluated over a set of test examples drawn i.i.d. from P(x, y).
¹For convenience, we restrict our formal framework to the binary case, but the techniques and analysis can be extended trivially to the multi-class case.
We further define C_M ⊆ C as the concept class of binary classifiers that use only the monotonic features. Similarly, let C_¬M ⊆ C indicate the concept class of binary classifiers using only the non-monotonic features.
Monotonic features exhibit a monotonically increasing relationship with the probability that an example is a member of the positive class. More formally, we define monotonic features as follows:
Definition 1 A monotonic feature for class y is a feature i ∈ {1, ..., d} for which the following three properties hold:
• The domain of x_i is fully ordered and discrete, and has finite support.²
• The conditional probability that an example is an element of class y = 1 increases strictly monotonically with the value of x_i. That is, P(y = 1 | x_i = r) > P(y = 1 | x_i = r′) if r > r′.
• The monotonicity is non-trivial in that P(x_i) has positive probability for more than one feature value. That is, there exist r > r′ and ε > 0 such that P(x_i = r), P(x_i = r′) > ε.
With this definition, we can state precisely the monotonic feature structure:
Definition 2 For a learning problem from the input space X of d-tuples x = (x1 , . . . , xd ) to the
output space Y, the monotonic feature structure (MFS) holds if and only if at least one of the
features i ? {1, . . . , d} is a monotonic feature for the positive class y = 1.
When tasked with a learning problem for which the MFS holds, three distinct configurations of the input are possible. First, monotonic features may be known in the absence of labeled data (|M| > 0, D_L = ∅). This is the setting considered in previous applications of monotonic features, as discussed in the introduction. Second, monotonic features may be unknown, but labeled data may be provided (M = ∅, |D_L| > 0); this corresponds to standard semi-supervised learning. In this case, the MFS can still be exploited by identifying monotonic features using the labeled data. Lastly, both monotonic features and labeled data may be provided (|M|, |D_L| > 0). We provide algorithms for each case and evaluate each experimentally in Section 4.
3 Theory of Monotonic Features
This section shows that under certain assumptions, knowing the identity of a single monotonic feature is sufficient to PAC learn a target concept from only unlabeled data. Further, we prove that
monotonic features become more informative relative to labeled examples as the feature set size
increases. Lastly, we discuss and formally establish distinctions between the monotonic feature
abstraction and other semi-supervised techniques.
We start by introducing the conditional independence assumption, which states that the monotonic features are conditionally independent of the non-monotonic features given the class. Formally, the conditional independence assumption is satisfied iff P({x_i : i ∈ M} | y, {x_j : j ∉ M}) = P({x_i : i ∈ M} | y). While this assumption is clearly an idealization, it is not uncommon in semi-supervised learning (for example, an analogous assumption was introduced to theoretically demonstrate the power of co-training [2]). Further, techniques based upon the idealization of conditional independence are often effective in practice (e.g., Naive Bayes Classifiers).
We show that when the concept class C_¬M is learnable in the PAC model with classification noise, and the conditional independence assumption holds, then knowledge of a single monotonic feature makes the full concept class C learnable from only unlabeled data. Our result builds on a previous theorem from [2], and requires the following definition:
Definition 3 A classifier h ∈ C_M is weakly-useful iff there exists ε > 0 such that P(h(x) = 1) ≥ ε and P(y = 1 | h(x) = 1) ≥ P(y = 1) + ε.
²For convenience, we present our analysis in terms of discrete and finite monotonic features, but the results can be extended naturally to the continuous case.
Theorem 4 If the conditional independence assumption is satisfied and the concept class C_¬M is learnable in the PAC model with classification noise, then given a single monotonic feature, C is learnable from unlabeled data only.
Proof Sketch. The result follows from Theorem 1 in [2] and an application of Hoeffding bounds to show that the monotonic feature can be used to construct a weakly-useful classifier.³
The next theorem demonstrates that monotonic features are relatively more informative than labeled examples as the feature space increases in size. This result suggests that MF-based approaches to text classification may become increasingly valuable over time, as corpora become larger and the number of distinct words and phrases available to serve as features increases. We compare the value of monotonic features and labeled examples in terms of information gain, defined below. For convenience, these results are presented using a feature space X_B, in which all features are binary-valued.
Definition 5 The information gain with respect to an unlabeled example's label y provided by a variable v is defined as the reduction in entropy of y when v is given, that is:

∑_{y′=0,1} P(y = y′ | v) log P(y = y′ | v) − P(y = y′) log P(y = y′).
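Concretely, for a binary label this quantity is the drop in binary entropy of y once v is observed; a small sketch, assuming the empirical probability estimates are given:

```python
import numpy as np

def info_gain(p_y1, p_y1_given_v):
    """Definition 5 for binary y: H(y) - H(y | v = observed value)."""
    def H(p):
        p = np.clip(p, 1e-12, 1.0 - 1e-12)   # guard against log(0)
        return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return H(p_y1) - H(p_y1_given_v)
```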
Next, we define the two properties of the classification task that our theorem requires. Informally
speaking, the first property states that the feature space does not have fully redundant features,
whereas the second states that examples which are far apart have less dependent labels than those
which are close together. We would expect these properties to hold for most tasks in practice.
Definition 6 A distribution D on (X_B, Y) has bounded feature dependence if there exists ε_F > 0 such that the conditional probability P_D(x_i = r | {x_j = r_j : j ≠ i}) < 1 − ε_F for all i, r, and sets {x_j = r_j : j ≠ i} of assignments to one or more x_j.
Definition 7 A distribution D on (X_B, Y) has distance-diminishing information gain if the information gain of an example x with respect to the label of any neighboring example x′ is less than K_I γ_I^r for some γ_I < 1, where r is the Hamming distance between x and x′.
The following theorem shows that whenever the above properties hold to a sufficient degree, the expected information gain from a labeled example falls as the size of the feature space increases.
Theorem 8 For a learning problem governed by distribution D with bounded feature dependence and distance-diminishing information gain, with ε_F > γ_I/(γ_I + 1), as the number of features d increases, the expected information gain provided by a labeled example about unlabeled examples' labels decreases to zero. However, the information gain from an MF x_f with given relationship P(y | x_f) remains constant as d increases.
The portion of Theorem 8 which concerns information gain of a labeled example is a version of the well-known "curse of dimensionality" [1], which states that the number of examples needed to estimate a function scales exponentially with the number of dimensions under certain assumptions. Theorem 8 differs in detail, however; it states the curse of dimensionality in terms of information gain, making possible a direct comparison with monotonic features.
3.1 Relation to Other Approaches
In the introduction, we identified several learning methods that utilized Monotonic Features (MFs)
implicitly, which was a key motivation for formalizing MFs. This section explains the ways in which
MF-based classification is distinct from previous semi-supervised learning methods.
When MFs are provided as input, they can be viewed as a kind of "labeled feature" studied in [7]. However, instead of a generalized expectation criteria, we use the prior to generate noisy labels for examples. Thus, MFs can complement any concept learning algorithm, not just discriminative probabilistic models as in [7]. Moreover, while [7] focuses on a problem setting in which selected features are labeled by hand, we show in Section 4 that MFs can either obviate hand-labeled data, or can be estimated automatically from a small set of hand-labeled instances.
³Proofs of the theorems in this paper can be found in [5], Chapter 2.
Co-training [2] is a semi-supervised technique that also considers a partition of the feature space into two distinct "views". One might ask if monotonic feature classification is equivalent to co-training with the monotonic features serving as one view, and the other features forming the other. However, co-training requires labeled data to train classifiers for each view, unlike monotonic feature classification, which can operate without any labeled data. Thus, there are cases where an MF-based algorithm like MFA is applicable, but co-training is not.
Even when labeled data is available, co-training takes the partition of the feature set as input, whereas monotonic features can be detected automatically using the labeled data. Also, co-training is an iterative algorithm in which the most likely examples of a class according to one view are used to train a classifier on the other view in a mutually recursive fashion. For a given set of monotonic features, however, iterating this process is ineffective, because the most likely examples of a class according to the monotonic feature view are fixed by the monotonicity property.
3.1.1 Semi-supervised Smoothness Assumptions
The MFS is provably distinct from certain smoothness properties typically assumed in previous
semi-supervised learning methods, known as the cluster and manifold assumptions. The cluster
assumption states that in the target concept, the boundaries between classes occupy relatively lowdensity regions of the distribution P (x). The manifold assumption states that the distribution P (x)
is embedded on a manifold of strictly lower dimension than the full input space X . It can be shown
that classification tasks with the MFS exist for which neither the cluster assumption nor the manifold
assumption holds. Similarly, we can construct classification tasks exhibiting the manifold assumption, the cluster assumption, or their conjunction, but without the MFS. Thus, we state the following
theorem.
Theorem 9 The monotonic feature structure neither implies nor is implied by the manifold assumption, the cluster assumption, or their conjunction or disjuntion.
4 Experiments
This section reports on our experiments in utilizing MFs for text classification. As discussed in the introduction, MFs have been used implicitly by several classification methods in numerous tasks. Here we quantify their impact on the standard "20 Newsgroups" dataset [11]. We show that MFs can be employed to perform accurate classification even without labeled examples, extending the results from [8] and [3] to a semi-supervised setting. Further, we also demonstrate that whether or not the identities of MFs are given, exploiting the MF structure by learning MFs can improve performance.
4.1 General Methods for Monotonic Feature Classification
Here, we define a set of abstract methods for incorporating monotonic features into any existing learning algorithm. The first method, MFA, is an abstraction of the MF word sense disambiguation algorithm first introduced in [13]. It is applicable when monotonic features are given but labeled examples are not provided. The second, MFA-SSL, applies in the standard semi-supervised learning case when some labeled examples are provided, but the identities of the MFs are unknown and must be learned. Lastly, MFA-BOTH applies when both labeled data and MF identities are given.
MFA proceeds as shown in Figure 1. MFA labels the unlabeled examples D_U as elements of class y = 1 iff some monotonic feature value x_i for i ∈ M exceeds a threshold τ. The threshold is set using unlabeled data so as to maximize the minimum probability mass on either side of the threshold.⁴ This set of bootstrapped examples D_L′ is then fed as training data into a supervised or semi-supervised algorithm Θ(D_L′, D_U), and MFA outputs the resulting classifier. In general, the MFA schema can be instantiated with any concept learning algorithm Θ.
⁴This policy is suggested by the proof of Theorem 4, in which the only requirement of the threshold is that sufficient mass lies on each side.
MFA(M, D_U, Θ)
1. D_L′ = labeled examples (x, y) such that y = 1 iff x_i > τ for some i ∈ M
2. Output Θ(D_L′, D_U)

Figure 1: Pseudocode for MFA. The inputs are M, a set of monotonic features, D_U, a set of unlabeled examples, and Θ(L, U), a supervised or semi-supervised machine learning algorithm which outputs a classifier given labeled data L and unlabeled data U. The threshold τ is derived from the unlabeled data and M (see text).
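A compact sketch of MFA in Python, with scikit-learn's multinomial Naive Bayes standing in for Θ; the estimator choice and the exact tie-breaking in the threshold search are assumptions, but the thresholding policy follows footnote 4 (maximize the minimum mass on either side of τ):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def mfa(X_unlabeled, mf_columns, learner=None):
    """MFA (Figure 1): bootstrap labels from the MFs, then train Theta.
    X_unlabeled: (u, d) non-negative count features; mf_columns: MF indices."""
    mf_max = X_unlabeled[:, mf_columns].max(axis=1)
    # Choose tau maximising the minimum probability mass on either side.
    candidates = np.unique(mf_max)
    masses = [min((mf_max > t).mean(), (mf_max <= t).mean())
              for t in candidates]
    tau = candidates[int(np.argmax(masses))]
    y_boot = (mf_max > tau).astype(int)       # Step 1: bootstrapped D_L'
    clf = learner if learner is not None else MultinomialNB()
    clf.fit(X_unlabeled, y_boot)              # Step 2: Theta(D_L', D_U)
    return clf
```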
MFA-SSL(D_L, D_U, Θ, Θ_M)
1. M = the k strongest monotonic features in D_L
2. D_L′ = examples from D_U probabilistically labeled with Θ_M(M, D_L, D_U)
3. Output Θ(D_L ∪ D_L′, D_U)

Figure 2: Pseudocode for MFA-SSL. The inputs D_U and Θ are the same as those of MFA (see Figure 1). The additional inputs include labeled data D_L and a machine learning algorithm Θ_M(M, L, U) which, given labeled data L and unlabeled data U, outputs a probabilistic classifier that uses only monotonic features M. k is a parameter of MFA-SSL (see text).
When MFs are unknown, but some labeled data is given, the MFA-SSL algorithm (Figure 2) attempts to identify MFs using the labeled training data D_L, adding the most strongly monotonic features to the set M. Monotonicity strength can be measured in various ways; in our experiments, we rank each feature x_i by the quantity f(y, x_i) = ∑_r P(y, x_i = r)·r for each class y.⁵ MFA-SSL adds monotonic features to M in descending order of this value, up to a limit of k = 5 per class.⁶
MFA-SSL then invokes a given machine learning algorithm Θ_M(M, D_L, D_U) to learn a probabilistic classifier that employs only the monotonic features in M. MFA-SSL uses this classifier to probabilistically label the examples in D_U to form D_L′. MFA-SSL then returns Θ(D_L ∪ D_L′, D_U) as in MFA. Note that when no monotonic features are identified, MFA-SSL defaults to the underlying algorithm Θ.
When monotonic features are known and labeled examples are available, we run a derivative of MFA-SSL denoted as MFA-BOTH. The algorithm is the same as MFA-SSL, except that any given monotonic features are added to the learned set in Step 1 of Figure 2, and examples bootstrapped using the given monotonic features (from Step 1 in Figure 1) are added to D_L′.
4.2 Experimental Methodology and Baseline Methods
The task we investigate is to determine from the text of a newsgroup post the newsgroup in which
it appeared. We used bag-of-word features after converting terms to lowercase, discarding the 100
most frequent terms and all terms appearing only once. Below, we present results averaged over four
disjoint training sets of variable size, using a disjoint test set of 5,000 documents and an unlabeled
set of 10,000 documents.
We compared the monotonic feature approach with two alternative algorithms, which represent two
distinct points in the space of semi-supervised learning algorithms. The first, NB-EM, is a semisupervised Naive Bayes with Expectation Maximization algorithm [12], employing settings previously shown to be effective on the 20 Newsgroups data. The second, LP, is a semi-supervised
graph-based label propagation algorithm recently employed for text classification [4]. We found
that on this dataset, the NB-EM algorithm substantially outperformed LP (providing a 41% error
reduction in the experiments in Figure 3), so below we compare exclusively with NB-EM.
When the identities of monotonic features are given, we obtained one-word monotonic features simply using the newsgroup name, with minor modifications to expand abbreviations. This methodology closely followed that of [8]. For example, the occurrence count of the term "politics" was a monotonic feature for the talk.politics.misc newsgroup. We also expanded the set of monotonic features to include singular/plural variants.
⁵This measure is applicable for features with numeric values. For non-numeric features, alternative measures (e.g. rank correlation) could be employed to detect MFs.
⁶A sensitivity analysis revealed that varying k by up to 40% in either direction did not decrease performance of MFA-SSL in the experiments in Section 4.3.
We employed Naive Bayes classifiers for both Θ and Θ_M. We weighted the set of examples labeled using the monotonic features (D_L′) equally with the original labeled set, increasing the weight by the equivalent of 200 labeled examples when monotonic features are given to the algorithm.
4.3 Experimental Results
The first question we investigate is what level of performance MFA can achieve without labeled training data, when monotonic features are given. The results of this experiment are shown in Table 1. MFA achieves accuracy on the 20-way classification task of 0.563. Another way to measure this accuracy is in terms of the number of labeled examples that a baseline semi-supervised technique would require in order to achieve comparable performance. We found that MFA outperformed NB-EM with up to 160 labeled examples. This first experiment is similar to that of [8], except that instead of evaluating against only supervised techniques, we use a more comparable semi-supervised baseline (NB-EM).
Could the monotonic features, on their own, suffice to directly classify the test data? To address this question, the table also reports the performance of using the given monotonic features exclusively to label the test data (MF Alone), without using the semi-supervised technique Θ. We find that the bootstrapping step provides large benefits to performance; MFA has an effective number of labeled examples eight times more than that of MF Alone.
                              Random Baseline   MF Alone   MFA
Accuracy                      5%                24%        56% (2.33x)
Labeled Example Equivalent    0                 20         160 (8x)

Table 1: Performance of MFA when monotonic features are given, and no labeled examples are provided. MFA achieves accuracy of 0.563, which is ten fold that of a Random Baseline classifier that assigns labels randomly, and more than double that of "MF Alone", which uses only the monotonic features and ignores the other features. MFA's accuracy exceeds that of the NB-EM baseline with 160 labeled training examples, and is eight fold that of "MF Alone".
The second question we investigate is whether the monotonic feature approach can improve performance even when the class name is not given. MFA-SSL takes the same inputs as the NB-EM technique, without the identities of monotonic features. The performance of MFA-SSL as the size of the labeled data set varies is shown in Figure 3. The graph shows that for small labeled data sets of size 100-400, MFA-SSL outperforms NB-EM by an average error reduction of 15%. These results are statistically significant (p < 0.001, Fisher Exact Test). One important question is whether MFA-SSL's performance advantage over NB-EM is in fact due to the presence of monotonic features, or
if it instead results from simply utilizing feature selection in Step 2 of Figure 2. We investigated this
by replacing MFA-SSL's monotonic feature measure f(y, x_i) with a standard information gain measure, and learning an equal number of features distinct from those selected by MFA-SSL originally. This method has performance essentially equivalent to that of NB-EM, suggesting that MFA-SSL's performance advantage is not due merely to feature selection.
Lastly, when both monotonic features and labeled examples are available, MFA-BOTH reduces error over the NB-EM baseline by an average of 31% across the training set sizes shown in Figure 3. For additional analysis of the above experiments, and results in another domain, see [5].
5 Conclusions
We have presented a general framework for utilizing Monotonic Features (MFs) to perform classification without hand-labeled data, or in a semi-supervised setting where monotonic features can be discovered from small numbers of hand-labeled examples. While our experiments focused on the 20 Newsgroups data set, we have complemented them with both a theoretical analysis and an enumeration of a wide variety of algorithms that have used MFs implicitly.
[Figure 3 appears here: classification accuracy versus labeled training set size for NB-EM, MFA-SSL, and MFA-BOTH.]

Figure 3: Performance in document classification. MFA-SSL reduces error over the NB-EM baseline by 15% for training sets between 100 and 400 examples, and MFA-BOTH reduces error by 31% overall.
Acknowledgements
We thank Stanley Kok, Daniel Lowd, Mausam, Hoifung Poon, Alan Ritter, Stefan Schoenmackers, and Dan Weld for helpful comments. This research was supported in part by NSF grants IIS-0535284 and IIS-0312988, ONR grant N00014-08-1-0431 as well as gifts from Google, and carried out at the University of Washington's Turing Center. The first author was supported by a Microsoft Research Graduate Fellowship sponsored by Microsoft Live Labs.
References
[1] R. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, 1961.
[2] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT: Proceedings of the Workshop on Computational Learning Theory, Morgan Kaufmann Publishers, pages 92-100, 1998.
[3] M.-W. Chang, L.-A. Ratinov, D. Roth, and V. Srikumar. Importance of semantic representation: Dataless classification. In D. Fox and C. P. Gomes, editors, AAAI, pages 830-835. AAAI Press, 2008.
[4] J. Chen, D.-H. Ji, C. L. Tan, and Z.-Y. Niu. Semi-supervised relation extraction with label propagation. In HLT-NAACL, 2006.
[5] D. Downey. Redundancy in Web-scale Information Extraction: Probabilistic Model and Experimental Results. PhD thesis, University of Washington, 2008.
[6] D. Downey, O. Etzioni, and S. Soderland. A Probabilistic Model of Redundancy in Information Extraction. In Procs. of IJCAI, 2005.
[7] G. Druck, G. Mann, and A. McCallum. Learning from labeled features using generalized expectation criteria. In Proceedings of SIGIR, 2008.
[8] A. Gliozzo, C. Strapparava, and I. Dagan. Investigating unsupervised learning for text categorization bootstrapping. In Proceedings of HLT 2005, pages 129-136, Morristown, NJ, USA, 2005.
[9] M. Hearst. Automatic Acquisition of Hyponyms from Large Text Corpora. In Procs. of the 14th International Conference on Computational Linguistics, pages 539-545, Nantes, France, 1992.
[10] R. Mihalcea and D. I. Moldovan. An automatic method for generating sense tagged corpora. In AAAI/IAAI, pages 461-466, 1999.
[11] T. M. Mitchell. Machine Learning. McGraw-Hill, New York, 1997.
[12] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 39(2/3):103-134, 2000.
[13] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Meeting of the Association for Computational Linguistics, pages 189-196, 1995.
2,627 | 3,382 | An ideal observer model of infant object perception
Charles Kemp
Department of Psychology
Carnegie Mellon University
[email protected]
Fei Xu
Department of Psychology
University of British Columbia
[email protected]
Abstract
Before the age of 4 months, infants make inductive inferences about the motions
of physical objects. Developmental psychologists have provided verbal accounts
of the knowledge that supports these inferences, but often these accounts focus on
categorical rather than probabilistic principles. We propose that infant object perception is guided in part by probabilistic principles like persistence: things tend
to remain the same, and when they change they do so gradually. To illustrate this
idea we develop an ideal observer model that incorporates probabilistic principles
of rigidity and inertia. Like previous researchers, we suggest that rigid motions
are expected from an early age, but we challenge the previous claim that the inertia principle is relatively slow to develop [1]. We support these arguments by
modeling several experiments from the developmental literature.
Over the past few decades, ingenious experiments [1, 2] have suggested that infants rely on systematic expectations about physical objects when interpreting visual scenes. Looking time studies
suggest, for example, that infants expect objects to follow continuous trajectories through time and
space, and understand that two objects cannot simultaneously occupy the same location. Many of
these studies have been replicated several times, but there is still no consensus about the best way to
characterize the knowledge that gives rise to these findings.
Two main approaches can be found in the literature. The verbal approach uses natural language
to characterize principles of object perception [1, 3]: for example, Spelke [4] proposes that object
perception is consistent with principles including continuity ("a moving object traces exactly one
connected path over space and time") and cohesion ("a moving object maintains its connectedness
and boundaries"). The mechanistic approach proposes that physical knowledge is better characterized by describing the mechanisms that give rise to behavior, and researchers working in this
tradition often develop computational models that support their theoretical proposals [5]. We pursue
a third approach, the ideal observer approach [6, 7, 8], that combines aspects of both previous
traditions. Like the verbal approach, our primary goal is to characterize principles that account for
infant behavior, and we will not attempt to characterize the mechanisms that produce this behavior.
Like the mechanistic approach, we emphasize the importance of formal models, and suggest that
these models can capture forms of knowledge that are difficult for verbal accounts to handle.
Ideal observer models [6, 9] specify the conclusions that normatively follow given a certain source
of information and a body of background knowledge. These models can therefore address questions
about the information and the knowledge that support perception. Approaches to the information
question characterize the kinds of perceptual information that human observers use. For example,
Geisler [9] discusses which components of the information available at the retina contribute to visual perception, and Banks and Shannon [10] use ideal observer models to study the perceptual
consequences of immaturities in the retina. Approaches to the knowledge question characterize the
background assumptions that are combined with the available input in order to make inductive inferences. For example, Weiss and Adelson [7] describe several empirical phenomena that are consistent
with the a priori assumption that motions tend to be slow and smooth. There are few previous attempts to develop ideal observer models of infant perception, and most of them focus only on the
information question [10]. This paper addresses the knowledge question, and proposes that the ideal
observer approach can help to identify the minimal set of principles needed to account for the visual
competence of young infants.
Most verbal theories of object perception focus on categorical principles [4], or principles that make
a single distinction between possible and impossible scenes. We propose that physical knowledge
in infancy is also characterized by probabilistic principles, or expectations that make some possible
scenes more surprising than others. We demonstrate the importance of probabilistic principles by
focusing on two examples: the rigidity principle states that objects usually maintain their shape and
size when they move, and the inertia principle states that objects tend to maintain the same pattern of
motion over time. Both principles capture important regularities, but exceptions to these regularities
are relatively common.
Focusing on rigidity and inertia allows us to demonstrate two contributions that probabilistic approaches can make. First, probabilistic approaches can reinforce current proposals about infant
perception. Spelke [3] suggests that rigidity is a core principle that guides object perception from a
very early age, and we demonstrate how this idea can be captured by a model that also tolerates exceptions, such as non-rigid biological motion. Second, probabilistic approaches can identify places
where existing proposals may need to be revised. Spelke [3] argues that the principle of inertia is
slow to develop, but we suggest that a probabilistic version of this principle can help to account for
inferences made early in development.
1 An ideal observer approach
An ideal observer approach to object perception can be formulated in terms of a generative model
for scenes. Scenes can be generated in three steps. First we choose the number n of objects that
will appear in the scene, and generate the shape, visual appearance, and initial location of each
object. We then choose a velocity field for each object which specifies how the object moves and
changes shape over time. Finally, we create a visual scene by taking a two-dimensional projection
of the moving objects generated in the two previous steps. An ideal observer approach explores
the idea that the inferences made by infants approximate the optimal inferences with respect to this
generative model.
We work within this general framework but make two simplifications. We will not discuss how the
shapes and visual appearances of objects are generated, and we make the projection step simple by
working with a two-dimensional world. These simplifications allow us to focus on the expectations
about velocity fields that guide motion perception in infants. The next two sections present two prior
distributions that can be used to generate velocity fields. The first is a baseline prior that does not
incorporate probabilistic principles, and the second incorporates probabilistic versions of rigidity
and inertia. The two priors capture different kinds of knowledge, and we argue that the second
provides the more accurate characterization of the knowledge that infants bring to object perception.
1.1 A baseline prior on velocity fields
Our baseline prior is founded on five categorical principles that are closely related to principles
discussed by Spelke [3, 4]. The principles we consider rely on three basic notions: space, time, and
matter. We also refer to particles, which are small pieces of matter that occupy space-time points.
Particles satisfy several principles:
C1. Temporal continuity. Particles are not created or destroyed. In other words, every particle
that exists at time t1 must also exist at time t2 .
C2. Spatial continuity. Each particle traces a continuous trajectory through space.
C3. Exclusion. No two particles may occupy the same space-time point.
An object is a collection of particles, and these collections satisfy two principles:
C4. Discreteness. Each particle belongs to exactly one object.
C5. Cohesion. At each point in time, the particles belonging to an object occupy a single
connected region of space.
Suppose that we are interested in a space-time window specified by a bounded region of space and a
bounded interval of time. For simplicity, we will assume that space is two-dimensional, and that the
space-time window corresponds to the unit cube. Suppose that a velocity field $\vec{v}$ assigns a velocity
$(v_x, v_y)$ to each particle in the space-time window, and let $\vec{v}_i$ be the field created by considering
only particles that belong to object $i$. We develop a theory of object perception by defining a prior
distribution $p(\vec{v})$ on velocity fields.
Consider first the distribution $p(\vec{v}_1)$ on fields for a single object. Any field that violates one or more
of principles C1–C5 is assigned zero probability. For instance, fields where part of an object winks
out of existence violate the principle of temporal continuity, and fields where an object splits into
two distinct pieces violate the principle of cohesion. Many fields, however, remain, including fields
that specify non-rigid motions and jagged trajectories. For now, assume that we are working with
a space of fields that is bounded but very large, and that the prior distribution over this space is
uniform for all fields consistent with principles C1–C5:
$$p(\vec{v}_1) \propto f(\vec{v}_1) = \begin{cases} 0 & \text{if } \vec{v}_1 \text{ violates C1--C5} \\ 1 & \text{otherwise} \end{cases} \qquad (1)$$
Consider now the distribution $p(\vec{v}_1, \vec{v}_2)$ on fields for pairs of objects. Principles C1 through C5 rule
out some of these fields, but again we must specify a prior distribution on those that remain. Our
prior is induced by the following principle:
C6. Independence. Velocity fields for multiple objects are independently generated subject to
principles C1 through C5.
More formally, the independence principle specifies how the prior for the multiple object case is
related to the prior $p(\vec{v}_1)$ on velocity fields for a single object (Equation 1):
$$p(\vec{v}_1, \ldots, \vec{v}_n) \propto f(\vec{v}_1, \ldots, \vec{v}_n) = \begin{cases} 0 & \text{if } \{\vec{v}_i\} \text{ collectively violate C1--C5} \\ f(\vec{v}_1) \cdots f(\vec{v}_n) & \text{otherwise} \end{cases} \qquad (2)$$
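To make the structure of Equations 1 and 2 concrete, the following is a minimal Python sketch. The checkers `violates_c1_to_c5` and `collectively_violate` are hypothetical placeholders (the paper does not say how the categorical principles would be tested computationally), and the representation of a field is left abstract.

```python
import numpy as np

def f_single(field, violates_c1_to_c5):
    """Unnormalized prior weight for one object's field (Equation 1).

    `violates_c1_to_c5` is a hypothetical predicate that returns True when
    the field breaks any of the categorical principles C1-C5."""
    return 0.0 if violates_c1_to_c5(field) else 1.0

def f_joint(fields, violates_c1_to_c5, collectively_violate):
    """Unnormalized prior weight for several objects' fields (Equation 2).

    `collectively_violate` is a hypothetical joint check, e.g. two objects
    occupying the same space-time point (exclusion, C3)."""
    if collectively_violate(fields):
        return 0.0
    return float(np.prod([f_single(v, violates_c1_to_c5) for v in fields]))
```

Under this reading, Equation 2 is simply an indicator for the joint constraints multiplied by the product of the single-object weights.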
1.2 A smoothness prior on velocity fields
We now develop a prior $p(\vec{v})$ that incorporates probabilistic expectations about the motion of physical objects. Consider again the prior $p(\vec{v}_1)$ on the velocity field $\vec{v}_1$ of a single object. Principles
C1–C5 make a single cut that distinguishes possible from impossible fields, but we need to consider
whether infants have additional knowledge that makes some of the possible fields less surprising
than others. One informal idea that seems relevant is the notion of persistence [11]: things tend to
remain the same, and when they change they do so gradually. We focus on two versions of this idea
that may guide expectations about velocity fields:
S1. Spatial smoothness. Velocity fields tend to be smooth in space.
S2. Temporal smoothness. Velocity fields tend to be smooth in time.
A field is "smooth in space" if neighboring particles tend to have similar velocities at any instant
of time. The smoothest possible field will be one where all particles have the same velocity at
any instant; in other words, where an object moves rigidly. The principle of spatial smoothness
therefore captures the idea that objects tend to maintain the same shape and size.
A field is "smooth in time" if any particle tends to have similar velocities at nearby instants of time.
The smoothest possible field will be one where each particle maintains the same velocity throughout
the entire interval of interest. The principle of temporal smoothness therefore captures the idea that
objects tend to maintain their initial pattern of motion. For instance, stationary objects tend to remain
stationary, moving objects tend to keep moving, and a moving object following a given trajectory
tends to continue along that trajectory.
Principles S1 and S2 are related to two principles (rigidity and inertia) that have been discussed
in the developmental literature. The rigidity principle states that objects "tend to maintain their size
and shape over motion" [3], and the inertia principle states that objects move smoothly in the absence
of obstacles [4]. Some authors treat these principles rather differently: for instance, Spelke suggests
that rigidity is one of the core principles that guides object perception from a very early age [3], but
that the principle of inertia is slow to develop and is weak or fragile once acquired. Since principles
S1 and S2 seem closely related, the suggestion that one develops much later than the other seems
counterintuitive. The rest of this paper explores the idea that both of these principles are needed to
characterize infant perception.
Our arguments will be supported by formal analyses, and we therefore need formal versions of
S1 and S2. There may be different ways to formalize these principles, but we present a simple
[Figure 1 graphic: (a) scenes L1, L2, and U; (b) bar plot of $\log\left(p(H_1|\vec{v})/p(H_2|\vec{v})\right)$ for each scene under the baseline and smoothness models, ranging roughly from -200 to 200]
Figure 1: (a) Three scenes inspired by the experiments of Spelke and colleagues [12, 13]. Each
scene can be interpreted as a single object, or as a small object on top of a larger object. (b) Relative
preferences for the one-object and two-object interpretations according to two models. The baseline
model prefers the one-object interpretation in all three cases, but the smoothness model prefers the
one-object interpretation only for scenes L1 and L2.
approach that builds on existing models of motion perception in adults [7, 8]. We define measures
of instantaneous roughness that capture how rapidly a velocity field $\vec{v}$ varies in space and time:
$$R_{\text{space}}(\vec{v}, t) = \frac{1}{\text{vol}(O(t))} \int_{O(t)} \left\| \frac{\partial \vec{v}(x,y,t)}{\partial x} \right\|^2 + \left\| \frac{\partial \vec{v}(x,y,t)}{\partial y} \right\|^2 dx\,dy \qquad (3)$$
$$R_{\text{time}}(\vec{v}, t) = \frac{1}{\text{vol}(O(t))} \int_{O(t)} \left\| \frac{\partial \vec{v}(x,y,t)}{\partial t} \right\|^2 dx\,dy \qquad (4)$$
where $O(t)$ is the set of all points that are occupied by the object at time $t$, and $\text{vol}(O(t))$ is the
volume of the object at time $t$. $R_{\text{space}}(\vec{v}, t)$ will be large if neighboring particles at time $t$ tend to
have different velocities, and $R_{\text{time}}(\vec{v}, t)$ will be large if many particles are accelerating at time $t$.
We combine our two roughness measures to create a single smoothness function $S(\cdot)$ that measures
the smoothness of a velocity field:
$$S(\vec{v}) = -\lambda_{\text{space}} \int R_{\text{space}}(\vec{v}, t)\,dt \; - \; \lambda_{\text{time}} \int R_{\text{time}}(\vec{v}, t)\,dt \qquad (5)$$
where $\lambda_{\text{space}}$ and $\lambda_{\text{time}}$ are positive weights that capture the importance of spatial smoothness and
temporal smoothness. For all analyses in this paper we set $\lambda_{\text{space}} = 10000$ and $\lambda_{\text{time}} = 250$, which
implies that violations of spatial smoothness are penalized more harshly than violations of temporal
smoothness. We now replace Equation 1 with a prior on velocity fields that takes smoothness into
account:
$$p(\vec{v}_1) \propto f(\vec{v}_1) = \begin{cases} 0 & \text{if } \vec{v}_1 \text{ violates C1--C5} \\ \exp(S(\vec{v}_1)) & \text{otherwise} \end{cases} \qquad (6)$$
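To illustrate Equations 3-5 and the prior weight in Equation 6, here is a rough sketch that discretizes a velocity field on a space-time grid and replaces the derivatives and integrals with finite differences and sums, in the spirit of the discrete fields described later in the paper. The array layout, the occupancy mask, and the forward-difference scheme are assumptions made for the example.

```python
import numpy as np

def smoothness(v, occupied, lambda_space=10000.0, lambda_time=250.0):
    """Discrete approximation to S(v) in Equation 5.

    v        : array of shape (T, H, W, 2) holding (vx, vy) on a space-time grid
    occupied : boolean array of shape (T, H, W), True where the object is."""
    # Forward differences stand in for the partial derivatives in Equations 3-4.
    dv_dx = np.diff(v, axis=2, append=v[:, :, -1:])
    dv_dy = np.diff(v, axis=1, append=v[:, -1:, :])
    dv_dt = np.diff(v, axis=0, append=v[-1:])
    S = 0.0
    for t in range(v.shape[0]):
        vol = occupied[t].sum()
        if vol == 0:
            continue
        m = occupied[t]
        r_space = ((dv_dx[t][m] ** 2).sum() + (dv_dy[t][m] ** 2).sum()) / vol
        r_time = (dv_dt[t][m] ** 2).sum() / vol
        S -= lambda_space * r_space + lambda_time * r_time
    return S

def prior_weight(v, occupied, violates_c1_to_c5):
    """Unnormalized f(v) from Equation 6; the C1-C5 checker is hypothetical.
    With weights this large the prior is extremely peaked, so real use
    would stay in the log domain rather than exponentiate."""
    return 0.0 if violates_c1_to_c5(v) else np.exp(smoothness(v, occupied))
```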
Combining Equation 6 with Equation 2 specifies a model of object perception that incorporates
probabilistic principles of rigidity and inertia.
2 Empirical findings: spatial smoothness
There are many experiments where infants aged 4 months and younger appear to make inferences
that are consistent with the principle of rigidity. This section suggests that the principle of spatial
smoothness can account for these results. We therefore propose that a probabilistic principle (spatial
smoothness) can explain all of the findings previously presented in support of a categorical principle
(rigidity), and can help in addition to explain how infants perceive non-rigid motion.
One set of studies explores inferences about the number of objects in a scene. When a smaller block
is resting on top of a larger block (L1 in Figure 1a), 3-month-olds infer that the scene includes a
single object [12]. The same result holds when the small and large blocks are both moving in the
same direction (L2 in Figure 1a) [13]. When these blocks are moving in opposite directions (U in
Figure 1a), however, infants appear to infer that the scene contains two objects [13]. Results like
these suggest that infants may have a default expectation that objects tend to move rigidly.
We compared the predictions made by two models about the scenes in Figure 1a. The smoothness
model uses a prior $p(\vec{v}_1)$ that incorporates principles S1 and S2 (Equation 6), and the baseline model
is identical except that it sets $\lambda_{\text{space}} = \lambda_{\text{time}} = 0$. Both models therefore incorporate principles C1–C6, but only the smoothness model captures the principle of spatial smoothness.
Given any of the scenes in Figure 1a, an infant must solve two problems: she must compute the
velocity field $\vec{v}$ for the scene and must decide whether this field specifies the motion of one or two
objects. Here we focus on the second problem, and assume that the infant's perceptual system has
already computed a veridical velocity field for each scene that we consider. In principle, however,
the smoothness prior in Equation 6 can address both problems. Previous authors have shown how
smoothness priors can be used to compute velocity fields given raw image data [7, 8].
Let $H_1$ be the hypothesis that a given velocity field corresponds to a single object, and let $H_2$ be the
hypothesis that the field specifies the motions of two objects. We assume that the prior probabilities
of these hypotheses are equal, and that $P(H_1) = P(H_2) = 0.5$. An ideal observer can use the
posterior odds ratio to choose between these hypotheses:
$$\frac{P(H_1|\vec{v})}{P(H_2|\vec{v})} = \frac{P(\vec{v}|H_1)}{P(\vec{v}|H_2)} \frac{P(H_1)}{P(H_2)} \approx \frac{f(\vec{v})}{f(\vec{v}_A, \vec{v}_B)} \cdot \frac{\int f(\vec{v}_1, \vec{v}_2)\, d\vec{v}_1\, d\vec{v}_2}{\int f(\vec{v}_1)\, d\vec{v}_1} \qquad (7)$$
Equation 7 follows from Equations 2 and 6, and from approximating $P(\vec{v}|H_2)$ by considering only
the two object interpretation $(\vec{v}_A, \vec{v}_B)$ with maximum posterior probability. For each scene in Figure 1a, the best two object interpretation will specify a field $\vec{v}_A$ for the small upper block, and a field
$\vec{v}_B$ for the large lower block.
To approximate the posterior odds ratio in Equation 7 we compute rough approximations of
$\int f(\vec{v}_1)\,d\vec{v}_1$ and $\int f(\vec{v}_1, \vec{v}_2)\,d\vec{v}_1\,d\vec{v}_2$ by summing over a finite space of velocity fields. As described in
the supporting material, we consider all fields that can be built from objects with 5 possible shapes,
900 possible starting locations, and 10 possible trajectories. For computational tractability, we convert each continuous velocity field to a discrete field defined over a space-time grid with 45 cells
along each spatial dimension and 21 cells along the temporal dimension.
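A sketch of how this finite-space approximation to Equation 7 might look. The enumeration of candidate fields and the weight functions `f_single` and `f_joint` stand in for the hypothesis space just described; this is illustrative, not the authors' implementation.

```python
import numpy as np

def log_odds_one_vs_two(v_observed, single_fields, pair_fields, f_single, f_joint):
    """Approximate log[P(H1|v) / P(H2|v)] as in Equation 7.

    single_fields : finite list of candidate one-object fields
    pair_fields   : finite list of candidate two-object interpretations (vA, vB),
                    assumed here to be consistent with the observed field
    f_single, f_joint : unnormalized prior weights (Equations 6 and 2)."""
    pair_fields = list(pair_fields)
    z1 = sum(f_single(v) for v in single_fields)     # approximates the integral of f(v1)
    z2 = sum(f_joint(pair) for pair in pair_fields)  # approximates the integral of f(v1, v2)
    best_pair = max(pair_fields, key=f_joint)        # maximum-posterior two-object reading
    return (np.log(f_single(v_observed)) - np.log(f_joint(best_pair))
            + np.log(z2) - np.log(z1))
```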
Our results show that both models prefer the one-object hypothesis H1 when presented with scenes
L1 and L2 (Figure 1b). Since there are many more two-object scenes than one-object scenes, any
typical two-object interpretation is assigned lower prior probability than a typical one-object interpretation. This preference for simpler interpretations is a consequence of the Bayesian Occam's
razor. The baseline model makes the same kind of inference about scene U, and again prefers the
one-object interpretation. Like infants, however, the smoothness model prefers the two-object interpretation of scene U. This model assigns low probability to a one-object interpretation where
adjacent points on the object have very different velocities, and this preference for smooth motion
is strong enough to overcome the simplicity preference that makes the difference when interpreting
the other two scenes.
Other experiments from the developmental literature have produced results consistent with the principle of spatial smoothness. For example, 3.5-month-olds are surprised when a tall object is fully
hidden behind a short screen, 4-month-olds are surprised when a large object appears to pass through
a small slot, and 4.5-month-olds expect a swinging screen to be interrupted when an object is placed
in its path [1, 2]. All three inferences appear to rely on the expectation that objects tend not to shrink
or to compress like foam rubber. Many of these experiments are consistent with an account that
simply rules out non-rigid motion instead of introducing a graded preference for spatial smoothness.
Biological motions, however, are typically non-rigid, and experiments suggest that infants can track
and make inferences about objects that follow non-rigid trajectories [14]. Findings like these call
for a theory like ours that incorporates a preference for rigid motion, but recognizes that non-rigid
motions are possible.
3 Empirical findings: temporal smoothness
We now turn to the principle of temporal smoothness (S2) and discuss some of the experimental
evidence that bears on this principle. Some researchers suggest that a closely related principle
(inertia) is slow to develop, but we argue that expectations about temporal smoothness are needed to
capture inferences made before the age of 4 months.
Baillargeon and DeVos [15] describe one relevant experiment that explores inferences about moving
objects and obstacles. During habituation, 3.5-month-old infants saw a car pass behind an occluder
and emerge from the other side (habituation stimulus H in Figure 2a). An obstacle was then placed
in the direct path of the car (unlikely scenes U1 and U2) or beside this direct path (likely scene L),
and the infants again saw the car pass behind the occluder and emerge from the other side. Looking
[Figure 2 graphic: (a) habituation stimulus H and test scenes L, U1, U2; (b) bar plot (roughly 0 to 600) of $\log(p(L)/p(U1))$, $\log(p(L)/p(U2))$, $\log(p_H(L)/p_H(U1))$, and $\log(p_H(L)/p_H(U2))$ for the baseline and smoothness models]
Figure 2: (a) Stimuli inspired by the experiments of [15]. The habituation stimulus H shows a block
passing behind a barrier and emerging on the other side. After habituation, a new block is added
either out of the direct path of the first block (L) or directly in the path of the first block (U1 and
U2). In U1, the first block leaps over the second block, and in U2 the second block hops so that
the first block can pass underneath. (b) Relative probabilities of scenes L, U1 and U2 according to
two models. The baseline model finds all three scenes equally likely a priori, and considers L and
U2 equally likely after habituation. The smoothness model considers L more likely than the other
scenes both before and after habituation.
[Figure 3 graphic: (a) habituation stimuli H1 and H2 with test scenes L and U; (b) bar plot (roughly -100 to 300) of $\log(p(L)/p(U))$, $\log(p_{H1}(L)/p_{H1}(U))$, and $\log(p_{H2}(L)/p_{H2}(U))$ for the baseline and smoothness models; (c) the proposed stronger test with a barrier]
Figure 3: (a) Stimuli inspired by the experiments of Spelke et al. [16]. (b) Model predictions. After
habituation to H1, the smoothness model assigns roughly equal probabilities to L and U. After
habituation to H2, the model considers L more likely. (c) A stronger test of the inertia principle.
Now the best interpretation of stimulus U involves multiple changes of direction.
time measurements suggested that the infants were more surprised to see the car emerge when the
obstacle lay within the direct path of the car. This result is consistent with the principle of temporal
smoothness, which suggests that infants expected the car to maintain a straight-line trajectory, and
the obstacle to remain stationary.
We compared the smoothness model and the baseline model on a schematic version of this task. To
model this experiment, we again assume that the infant's perceptual system has recovered a veridical
velocity field, but now we must allow for occlusion. An ideal observer approach that treats a two
dimensional scene as a projection of a three dimensional world can represent the occluder as an
object in its own right. Here, however, we continue to work with a two dimensional world, and treat
the occluded parts of the scene as missing data. An ideal observer approach should integrate over all
possible values of the missing data, but for computational simplicity we approximate this approach
by considering only one or two high-probability interpretations of each occluded scene.
We also need to account for habituation, and for cases where the habituation stimulus includes occlusion. We assume that an ideal observer computes a habituation field $\vec{v}_H$, or the velocity field with
maximum posterior probability given the habituation stimulus. In Figure 2a, the inferred habituation
field $\vec{v}_H$ specifies a trajectory where the block moves smoothly from the left to the right of the scene.
We now assume that the observer expects subsequent velocity fields to be similar to $\vec{v}_H$. Formally,
we use a product-of-experts approach to define a post-habituation distribution on velocity fields:
$$p_H(\vec{v}) \propto p(\vec{v})\, p(\vec{v} \mid \vec{v}_H) \qquad (8)$$
The first expert $p(\vec{v})$ uses the prior distribution in Equation 6, and the second expert $p(\vec{v} \mid \vec{v}_H)$ assumes
that field $\vec{v}$ is drawn from a Gaussian distribution centered on $\vec{v}_H$. Intuitively, after habituation to $\vec{v}_H$
the second expert expects that subsequent velocity fields will be similar to $\vec{v}_H$. More information
about this model of habituation is provided in the supporting material.
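Equation 8 amounts to re-weighting the prior by a Gaussian expert centered on the habituation field. A minimal sketch, assuming fields are flattened to vectors and a bandwidth `sigma` chosen for illustration (the paper leaves these details to its supporting material):

```python
import numpy as np

def post_habituation_log_weight(v, v_H, log_prior, sigma=1.0):
    """Unnormalized log p_H(v) from Equation 8: log p(v) + log N(v; v_H, sigma^2 I).

    v, v_H    : velocity fields flattened into 1-D arrays of equal length
    log_prior : function returning log p(v) (e.g. S(v) plus the C1-C5 check)
    sigma     : assumed bandwidth of the Gaussian habituation expert."""
    gaussian_expert = -np.sum((v - v_H) ** 2) / (2.0 * sigma ** 2)
    return log_prior(v) + gaussian_expert
```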
Given these assumptions, the black and dark gray bars in Figure 2 indicate relative a priori probabilities for scenes L, U1 and U2. The baseline model considers all three scenes equally probable,
but the smoothness model prefers L. After habituation, the baseline model is still unable to account
for the behavioral data, since it considers scenes L and U2 to be equally probable. The smoothness
model, however, continues to prefer L.
We previously mentioned three consequences of the principle of temporal smoothness: stationary
objects tend to remain stationary, moving objects tend to keep moving, and moving objects tend
to maintain a steady trajectory. The "car and obstacle" task addresses the first and third of these
proposals, but other tasks provide support for the second. Many authors have studied settings where
one moving object comes to a stop, and a second object starts to move [17]. Compared to the case
where the first object collides with the second, infants appear to be surprised by the "no-contact"
case where the two objects never touch. This finding is consistent with the temporal smoothness
principle, which predicts that infants expect the first object to continue moving until forced to stop,
and expect the second object to remain stationary until forced to start.
Other experiments [18] provide support for the principle of temporal smoothness, but there are also
studies that appear inconsistent with this principle. In one of these studies [16], infants are initially
habituated to a block that moves from one corner of an enclosure to another (H1 in Figure 3a).
After habituation, infants see a block that begins from a different corner, and now the occluder
is removed to reveal the block in a location consistent with a straight-line trajectory (L) or in a
location that matches the final resting place during the habituation phase (U). Looking times suggest
that infants aged 4-12 months are no more surprised by the inertia-violating outcome (U) than the
inertia-consistent outcome (L). The smoothness model, however, can account for this finding. The
outcome in U is contrary to temporal smoothness but consistent with habituation, and the tradeoff
between these factors leads the model to assign roughly the same probability to scenes L and U
(Figure 3b).
Only one of the inertia experiments described by Spelke et al. [16] and Spelke et al. [1] avoids this
tradeoff between habituation and smoothness. This experiment considers a case where the habituation stimulus (H2 in Figure 3a) is equally similar to the two test stimuli. The results suggest that
8-month-olds are now surprised by the inertia-violating outcome, and the predictions of our model are
consistent with this finding (Figure 3b). 4- and 6-month-olds, however, continue to look equally at the
two outcomes. Note, however, that the trajectories in Figure 3 include at most one inflection point.
Experiments that consider trajectories with many inflection points can provide a more powerful way
of exploring whether 4-month-olds have expectations about temporal smoothness.
One possible experiment is sketched in Figure 3c. The task is very similar to the task in Figure 3a,
except that a barrier is added after habituation. In order for the block to end up in the same location
as before, it must now follow a tortuous path around the barrier (U). Based on the principle of
temporal smoothness, we predict that 4-month-olds will be more surprised to see the outcome in
stimulus U than the outcome in stimulus L. This experimental design is appealing in part because
previous work shows that infants are surprised by a case similar to U where the barrier extends all
the way from one wall to the other [16], and our proposed experiment is a minor variant of this task.
Although there is room for debate about the status of temporal smoothness, we presented two reasons to revisit the conclusion that this principle develops relatively late. First, some version of this
principle seems necessary to account for experiments like the car and obstacle experiment in Figure 2. Second, most of the inertia experiments that produced null results use a habituation stimulus
which may have prevented infants from revealing their default expectations, and the one experiment
that escapes this objection considers a relatively minor violation of temporal smoothness. Additional
experiments are needed to explore this principle, but we predict that the inertia principle will turn
out to be yet another example of knowledge that is available earlier than researchers once thought.
4 Discussion and Conclusion
We argued that characterizations of infant knowledge should include room for probabilistic expectations, and that probabilistic expectations about spatial and temporal smoothness appear to play a role
in infant object perception. To support these claims we described an ideal observer model that includes both categorical (C1 through C5) and probabilistic principles (S1 and S2), and demonstrated
that the categorical principles alone are insufficient to account for several experimental findings. Our
two probabilistic principles are related to principles (rigidity and inertia) that have previously been
described as categorical principles. Although rigidity and inertia appear to play a role in some early
inferences, formulating these principles as probabilistic expectations helps to explain how infants
deal with non-rigid motion and violations of inertia.
Our analysis focused on some of the many existing experiments in the developmental literature, but
new experiments will be needed to explore our probabilistic approach in depth. Categorical versions
of a given principle (e.g. rigidity) allow room for only two kinds of behavior depending on whether
the principle is violated or not. Probabilistic principles can be violated to a greater or lesser extent,
and our approach predicts that violations of different magnitude may lead to different behaviors.
Future studies of rigidity and inertia can consider violations of these principles that range from
mild (Figure 3a) to severe (Figure 3c), and can explore whether infants respond to these violations
differently. Future work should also consider whether the categorical principles we described (C1
through C5) are better characterized as probabilistic expectations. In particular, future studies can
explore whether young infants consider large violations of cohesion (C5) or spatial continuity (C2)
more surprising than smaller violations of these principles.
Although we did not focus on learning, our approach allows us to begin thinking formally about
how principles of object perception might be acquired. First, we can explore how parameters like
the smoothness parameters in our model (?space and ?time ) might be tuned by experience. Second,
we can use statistical model selection to explore transitions between different sets of principles.
For instance, if a learner begins with the baseline model we considered (principles C1?C6), we
can explore which subsequent observations provide the strongest statistical evidence for smoothness
principles S1 and S2, and how much of this evidence is required before an ideal learner would
prefer our smoothness model over the baseline model. It is not yet clear which principles of object
perception could be learned, but the ideal observer approach can help to resolve this question.
References
[1] E. S. Spelke, K. Breinlinger, J. Macomber, and K. Jacobson. Origins of knowledge. Psychological Review, 99:605–632, 1992.
[2] R. Baillargeon, L. Kotovsky, and A. Needham. The acquisition of physical knowledge in infancy. In D. Sperber, D. Premack, and A. J. Premack, editors, Causal Cognition: A multidisciplinary debate, pages 79–116. Clarendon Press, Oxford, 1995.
[3] E. S. Spelke. Principles of object perception. Cognitive Science, 14:29–56, 1990.
[4] E. Spelke. Initial knowledge: six suggestions. Cognition, 50:431–445, 1994.
[5] D. Mareschal and S. P. Johnson. Learning to perceive object unity: a connectionist account. Developmental Science, 5:151–172, 2002.
[6] D. Kersten and A. Yuille. Bayesian models of object perception. Current Opinion in Neurobiology, 13:150–158, 2003.
[7] Y. Weiss and E. H. Adelson. Slow and smooth: a Bayesian theory for the combination of local motion signals in human vision. Technical Report A.I. Memo No. 1624, MIT, 1998.
[8] A. L. Yuille and N. M. Grzywacz. A mathematical analysis of the motion coherence theory. International Journal of Computer Vision, 3:155–175, 1989.
[9] W. S. Geisler. Physical limits of acuity and hyperacuity. Journal of the Optical Society of America, 1(7):775–782, 1984.
[10] M. S. Banks and E. Shannon. Spatial and chromatic visual efficiency in human neonates. In Visual perception and cognition in infancy, pages 1–46. Lawrence Erlbaum Associates, Hillsdale, NJ, 1993.
[11] R. Baillargeon. Innate ideas revisited: for a principle of persistence in infants' physical reasoning. Perspectives on Psychological Science, 3(3):2–13, 2008.
[12] R. Kestenbaum, N. Termine, and E. S. Spelke. Perception of objects and object boundaries by three-month-old infants. British Journal of Developmental Psychology, 5:367–383, 1987.
[13] E. S. Spelke, C. von Hofsten, and R. Kestenbaum. Object perception and object-directed reaching in infancy: interaction of spatial and kinetic information for object boundaries. Developmental Psychology, 25:185–196, 1989.
[14] G. Huntley-Fenner, S. Carey, and A. Solimando. Objects are individuals but stuff doesn't count: perceived rigidity and cohesiveness influence infants' representations of small groups of discrete entities. Cognition, 85:203–221, 2002.
[15] R. Baillargeon and J. DeVos. Object permanence in young infants: further evidence. Child Development, 61(6):1227–1246, 1991.
[16] E. S. Spelke, G. Katz, S. E. Purcell, S. M. Ehrlich, and K. Breinlinger. Early knowledge of object motion: continuity and inertia. Cognition, 51:131–176, 1994.
[17] L. Kotovsky and R. Baillargeon. Reasoning about collisions involving inert objects in 7.5-month-old infants. Developmental Science, 3(3):344–359, 2000.
[18] T. Wilcox and A. Schweinle. Infants' use of speed information to individuate objects in occlusion events. Infant Behavior and Development, 26:253–282, 2003.
2,628 | 3,383 | Spectral Hashing
Yair Weiss
School of Computer Science, Hebrew University, 91904 Jerusalem, Israel
yweiss@cs.huji.ac.il
Antonio Torralba
CSAIL, MIT, 32 Vassar St., Cambridge, MA 02139
torralba@csail.mit.edu
Rob Fergus
Courant Institute, NYU, 715 Broadway, New York, NY 10003
fergus@cs.nyu.edu
Abstract
Semantic hashing [1] seeks compact binary codes of data-points so that the
Hamming distance between codewords correlates with semantic similarity.
In this paper, we show that the problem of finding a best code for a given
dataset is closely related to the problem of graph partitioning and can
be shown to be NP hard. By relaxing the original problem, we obtain a
spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence
of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of
manifolds, we show how to efficiently calculate the code of a novel datapoint. Taken together, both learning the code and applying it to a novel
point are extremely simple. Our experiments show that our codes outperform the state of the art.
1 Introduction
With the advent of the Internet, it is now possible to use huge training sets to address
challenging tasks in machine learning. As a motivating example, consider the recent work
of Torralba et al. who collected a dataset of 80 million images from the Internet [2, 3]. They
then used this weakly labeled dataset to perform scene categorization. To categorize a novel
image, they simply searched for similar images in the dataset and used the labels of these
retrieved images to predict the label of the novel image. A similar approach was used in [4]
for scene completion.
Although conceptually simple, actually carrying out such methods requires highly efficient
ways of (1) storing millions of images in memory and (2) quickly finding similar images to
a target image.
Semantic hashing, introduced by Salakhutdinov and Hinton[5] , is a clever way of addressing
both of these challenges. In semantic hashing, each item in the database is represented by a
compact binary code. The code is constructed so that similar items will have similar binary
codewords and there is a simple feedforward network that can calculate the binary code for
a novel input. Retrieving similar neighbors is then done simply by retrieving all items with
codes within a small Hamming distance of the code for the query. This kind of retrieval can
be amazingly fast - millions of queries per second on standard computers. The key for this
method to work is to learn a good code for the dataset. We need a code that is (1) easily
computed for a novel input (2) requires a small number of bits to code the full dataset and
(3) maps similar items to similar binary codewords.
To simplify the problem, we will assume that the items have already been embedded in
a Euclidean space, say Rd , in which Euclidean distance correlates with the desired similarity. The problem of finding such a Euclidean embedding has been addressed in a large
1
number of machine learning algorithms (e.g. [6, 7]). In some cases, domain knowledge can
be used to define a good embedding. For example, Torralba et al. [3] found that a 512
dimensional descriptor known as the GIST descriptor, gives an embedding where Euclidean
distance induces a reasonable similarity function on the items. But simply having Euclidean
embedding does not give us a fast retrieval mechanism.
If we forget about the requirement of having a small number of bits in the codewords, then
it is easy to design a binary code so that items that are close in Euclidean space will map
to similar binary codewords. This is the basis of the popular locality sensitive hashing
method E2LSH [8]. As shown in [8], if every bit in the code is calculated by a random linear
projection followed by a random threshold, then the Hamming distance between codewords
will asymptotically approach the Euclidean distance between the items. But in practice this
method can lead to very inefficient codes. Figure 1 illustrates the problem on a toy dataset
of points uniformly sampled in a two dimensional rectangle. The figure plots the average
precision at Hamming distance 1 using a E2LSH encoding. As the number of bits increases
the precision improves (and approaches one with many bits), but the rate of convergence
can be very slow.
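For reference, a minimal sketch of the random-projection encoding just described: each bit is a random linear projection followed by a random threshold. Drawing the thresholds from the range of each projected coordinate is an assumption for the example, not a detail taken from [8].

```python
import numpy as np

def make_lsh_code(X, n_bits, rng=np.random.default_rng(0)):
    """Binary codes from random projections with random thresholds (LSH-style).

    X : (n, d) data matrix. Returns an (n, n_bits) array of 0/1 bits."""
    n, d = X.shape
    directions = rng.normal(size=(d, n_bits))   # random projection directions
    projections = X @ directions                # (n, n_bits)
    # Random thresholds drawn from the range of each projected coordinate.
    lo, hi = projections.min(axis=0), projections.max(axis=0)
    thresholds = rng.uniform(lo, hi)
    return (projections > thresholds).astype(np.uint8)
```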
Rather than using random projections to define the bits in a code, several authors have
pursued machine learning approaches. In [5] the authors used an autoencoder with several
hidden layers. The architecture can be thought of as a restricted Boltzmann machine (RBM)
in which there are only connections between layers and not within layers. In order to learn 32
bits, the middle layer of the autoencoder has 32 hidden units, and noise was injected during
training to encourage these bits to be as binary as possible. This method indeed gives codes
that are much more compact than the E2LSH codes. In [9] they used multiple stacked RBMs
to learn a non-linear mapping between input vector and code bits. Backpropagation using
an Neighborhood Components Analysis (NCA) objective function was used to refine the
weights in the network to preserve the neighborhood structure of the input space. Figure 1
shows that the RBM gives much better performance compared to random bits. A simpler
machine learning algorithm (Boosting SSC) was pursued in [10] who used adaBoost to
classify a pair of input items as similar or nonsimilar. Each weak learner was a decision
stump, and the output of all the weak learners on a given output is a binary code. Figure 1
shows that this boosting procedure also works much better than E2LSH codes, although
slightly worse than the RBMs¹.
The success of machine learning approaches over LSH is not limited to synthetic data. In [5],
RBMs gave several orders of magnitude improvement over LSH in document retrieval tasks.
In [3] both RBMs and Boosting were used to learn binary codes for a database of millions
of images and were found to outperform LSH. Also, the retrieval speed using these short
binary codes was found to be significantly faster than LSH (which was faster than other
methods such as KD trees).
The success of machine learning methods leads us to ask: what is the best code for performing semantic hashing for a given dataset? We formalize the requirements for a good code
and show that these are equivalent to a particular form of graph partitioning. This shows
that even for a single bit, the problem of finding optimal codes is NP hard. On the other
hand, the analogy to graph partitioning suggests a relaxed version of the problem that leads
to very efficient eigenvector solutions. These eigenvectors are exactly the eigenvectors used
in many spectral algorithms including spectral clustering and Laplacian eigenmaps [6, 11].
This leads to a new algorithm, which we call "spectral hashing", where the bits are calculated by thresholding a subset of eigenvectors of the Laplacian of the similarity graph. By utilizing recent results on the convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel datapoint. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state of the art.
¹ All methods here use the same retrieval algorithm, i.e. semantic hashing. In many applications of LSH and Boosting SSC, a different retrieval algorithm is used whereby the binary code only creates a shortlist and exhaustive search is performed on the shortlist. Such an algorithm is impractical for the scale of data we are considering.
[Figure 1 plots: proportion of good neighbors at Hamming distance < 2 versus number of bits (0-35) for LSH, stumps boosting SSC, and an RBM with two hidden layers, alongside the training samples and each method's partition of the space.]
Figure 1: Building hash codes to find neighbors. Neighbors are defined as pairs of points in 2D whose Euclidean distance is less than $\varepsilon$. The toy dataset is formed by uniformly sampling points in a two-dimensional rectangle. The figure plots the average precision (number of neighbors in the original space divided by number of neighbors in a Hamming ball using the hash codes) at Hamming distance $\leq 1$ for three methods. The plots on the left show how each method partitions the space to compute the bits that represent each sample. Despite the simplicity of this toy data, the methods still require many bits in order to get good performance.
2 Analysis: what makes a good code
As mentioned earlier, we seek a code that is (1) easily computed for a novel input, (2) requires a small number of bits to code the full dataset, and (3) maps similar items to similar binary codewords. Let us first ignore the first requirement, that codewords be easily computed for
a novel input and search only for a code that is efficient (i.e. requires a small number of
bits) and similarity preserving (i.e. maps similar items to similar codewords). For a code
to be efficient, we require that each bit has a 50% chance of being one or zero, and that
different bits are independent of each other. Among all codes that have this property, we
will seek the ones where the average Hamming distance between similar points is minimal.
Let $\{y_i\}_{i=1}^n$ be the list of codewords (binary vectors of length $k$) for $n$ datapoints and $W_{n \times n}$ be the affinity matrix. Since we are assuming the inputs are embedded in $\mathbb{R}^d$ so that Euclidean distance correlates with similarity, we will use $W(i,j) = \exp(-\|x_i - x_j\|^2/\varepsilon^2)$. Thus the parameter $\varepsilon$ defines the distance in $\mathbb{R}^d$ which corresponds to similar items. Using this notation, the average Hamming distance between similar neighbors can be written $\sum_{ij} W_{ij} \|y_i - y_j\|^2$. If we relax the independence assumption and require the bits to be uncorrelated we obtain the following problem:
$$\text{minimize:} \quad \sum_{ij} W_{ij} \|y_i - y_j\|^2 \qquad (1)$$
$$\text{subject to:} \quad y_i \in \{-1, 1\}^k, \qquad \sum_i y_i = 0, \qquad \frac{1}{n}\sum_i y_i y_i^T = I$$
where the constraint $\sum_i y_i = 0$ requires each bit to fire 50% of the time, and the constraint $\frac{1}{n}\sum_i y_i y_i^T = I$ requires the bits to be uncorrelated.
Observation: For a single bit, solving problem 1 is equivalent to balanced graph partitioning and is NP-hard.
Proof: Consider an undirected graph whose vertices are the datapoints and where the weight between items $i$ and $j$ is given by $W(i,j)$. Consider a code with a single bit. The bit partitions the graph into two equal parts $(A, B)$: vertices where the bit is on and vertices where the bit is off. For a single bit, $\sum_{ij} W_{ij}\|y_i - y_j\|^2$ is simply the weight of the edges cut by the partition: $\mathrm{cut}(A, B) = \sum_{i \in A,\, j \in B} W(i,j)$. Thus problem 1 is equivalent to minimizing $\mathrm{cut}(A, B)$ with the requirement that $|A| = |B|$, which is known to be NP-hard [12].
For k bits the problem can be thought of as trying to find k independent balanced partitions,
each of which should have as low cut as possible.
2.1 Spectral Relaxation
By introducing an $n \times k$ matrix $Y$ whose $j$th row is $y_j^T$ and a diagonal $n \times n$ matrix $D(i,i) = \sum_j W(i,j)$, we can rewrite the problem as:
$$\text{minimize:} \quad \mathrm{trace}(Y^T (D - W) Y) \qquad (2)$$
$$\text{subject to:} \quad Y(i,j) \in \{-1, 1\}, \qquad Y^T 1 = 0, \qquad Y^T Y = I$$
This is of course still a hard problem, but by removing the constraint that $Y(i,j) \in \{-1, 1\}$ we obtain an easy problem whose solutions are simply the $k$ eigenvectors of $D - W$ with minimal eigenvalue (after excluding the trivial eigenvector $1$, which has eigenvalue 0).
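As a concrete illustration, here is a minimal numpy sketch of this relaxation for a small training set: it forms the affinity matrix defined above, takes the $k$ eigenvectors of $D - W$ with smallest eigenvalue (skipping the trivial constant eigenvector), and thresholds them at zero. The dense $O(n^2)$ computation and the function name are our own simplifications, and this only yields codes for training items, which motivates the out-of-sample extension below.

import numpy as np

def relaxed_codes(X, k, eps):
    """Binary codes for the training items via the spectral relaxation (2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / eps ** 2)                 # affinity W(i, j)
    D = np.diag(W.sum(axis=1))
    evals, evecs = np.linalg.eigh(D - W)       # eigenvalues in ascending order
    Y = evecs[:, 1:k + 1]                      # drop the trivial eigenvector 1
    return (Y > 0).astype(np.uint8)            # threshold at zero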
2.2 Out-of-Sample Extension
The fact that the solutions to the relaxed problem are the $k$ eigenvectors of $D - W$ with minimal eigenvalue would suggest simply thresholding these eigenvectors to obtain a binary code. But this would only tell us how to compute the code representation of items in the training set. This is the problem of out-of-sample extension of spectral methods, which is often solved using the Nyström method [13, 14]. But note that the cost of calculating the Nyström extension of a new datapoint is linear in the size of the dataset. In our setting, where there can be millions of items in the dataset, this is impractical. In fact, calculating the Nyström extension is as expensive as doing exhaustive nearest neighbor search.
In order to enable efficient out-of-sample extension we assume the datapoints $x_i \in \mathbb{R}^d$ are samples from a probability distribution $p(x)$. The equations in problem 1 are now seen to be sample averages, which we replace with their expectations:
$$\text{minimize:} \quad \int \|y(x_1) - y(x_2)\|^2 \, W(x_1, x_2) \, p(x_1) p(x_2) \, dx_1 dx_2 \qquad (3)$$
$$\text{subject to:} \quad y(x) \in \{-1, 1\}^k, \qquad \int y(x) p(x) \, dx = 0, \qquad \int y(x) y(x)^T p(x) \, dx = I$$
with $W(x_1, x_2) = e^{-\|x_1 - x_2\|^2/\varepsilon^2}$. Relaxing the constraint that $y(x) \in \{-1, 1\}^k$ now gives a spectral problem whose solutions are eigenfunctions of the weighted Laplace-Beltrami operators defined on manifolds [15, 16, 13, 17]. More explicitly, define the weighted Laplacian $L_p$ as an operator that maps a function $f$ to $g = L_p f$ by $g(x)p(x) = D(x)f(x)p(x) - \int_s W(s,x)f(s)p(s)\,ds$ with $D(x) = \int_s W(x,s)$. The solutions to the relaxation of problem 3 are functions that satisfy $L_p f = \lambda f$ with minimal eigenvalue (ignoring the trivial solution $f(x) = 1$, which has eigenvalue 0). As discussed in [16, 15, 13], with proper normalization, the eigenvectors of the discrete Laplacian defined by $n$ points sampled from $p(x)$ converge to eigenfunctions of $L_p$ as $n \to \infty$.
What do the eigenfunctions of $L_p$ look like? One important special case is when $p(x)$ is a separable distribution. A simple case of a separable distribution is a multidimensional uniform distribution $\Pr(x) = \prod_i u_i(x_i)$, where $u_i$ is a uniform distribution in the range $[a_i, b_i]$. Another example is a multidimensional Gaussian, which is separable once the space has been rotated so that the Gaussian is axis-aligned.
Observation: [17] If $p(x)$ is separable, and similarity between datapoints is defined as $e^{-\|x_i - x_j\|^2/\varepsilon^2}$, then the eigenfunctions of the continuous weighted Laplacian $L_p$ have an outer-product form. That is, if $\Phi_i(x)$ is an eigenfunction of the weighted Laplacian defined on $\mathbb{R}^1$ with eigenvalue $\lambda_i$, then $\Phi_i(x_1)\Phi_j(x_2)\cdots\Phi_d(x_d)$ is an eigenfunction of the $d$-dimensional problem with eigenvalue $\lambda_i \lambda_j \cdots \lambda_d$.
Specifically, for the case of a uniform distribution on $[a, b]$, the eigenfunctions of the one-dimensional Laplacian $L_p$ are extremely well studied objects in mathematics. They correspond to the fundamental modes of vibration of a metallic plate. The eigenfunctions $\Phi_k(x)$ and eigenvalues $\lambda_k$ are:
$$\Phi_k(x) = \sin\left(\frac{\pi}{2} + \frac{k\pi}{b-a}\,x\right) \qquad (4)$$
$$\lambda_k = 1 - e^{-\frac{\varepsilon^2}{2}\left|\frac{k\pi}{b-a}\right|^2} \qquad (5)$$
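A small numeric sketch of equations 4-5 makes the ordering behavior discussed below concrete: for a 2:1 rectangle, ranking candidate (dimension, frequency) pairs by eigenvalue selects low frequencies and the long side first. The side lengths and the value of $\varepsilon$ are arbitrary illustrative choices.

import numpy as np

eps = 1.0
sides = {0: 2.0, 1: 1.0}                        # dimension -> length (b - a)
cands = [(1 - np.exp(-(eps ** 2 / 2) * (k * np.pi / L) ** 2), d, k)
         for d, L in sides.items() for k in range(1, 6)]
for lam, d, k in sorted(cands)[:4]:             # four smallest eigenvalues
    print(f"dimension {d}, mode k={k}, eigenvalue {lam:.3f}")
# the long side (length 2) is cut first, and low frequencies come before high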
A similar equation is also available for the one-dimensional Gaussian. In this case the eigenfunctions of the one-dimensional Laplacian $L_p$ are (in the limit of small $\varepsilon$) solutions to the Schrödinger equations and are related to Hermite polynomials. Figure 2 shows the analytical eigenfunctions for a 2D rectangle in order of increasing eigenvalue. The eigenvalue (which corresponds to the cut) determines which $k$ bits will be used. Note that the eigenvalue depends on the aspect ratio of the rectangle and the spatial frequency: it is better to cut the long dimension before the short one, and low spatial frequencies are preferred. Note that the eigenfunctions do not depend on the radius of similar neighbors $\varepsilon$. The radius does change the eigenvalue but does not affect the ordering.
We distinguish between single-dimension eigenfunctions, which are of the form $\Phi_k(x_1)$ or $\Phi_k(x_2)$, and outer-product eigenfunctions, which are of the form $\Phi_k(x_1)\Phi_l(x_2)$. These outer-product eigenfunctions are shown marked with a red border in the figure. As we now discuss, these outer-product eigenfunctions should be avoided when building a hashing code.
Observation: Suppose we build a code by thresholding the $k$ eigenfunctions of $L_p$ with minimal eigenvalue, $y(x) = \mathrm{sign}(\Phi_k(x))$. If any of the eigenfunctions is an outer-product eigenfunction, then that bit is a deterministic function of other bits in the code.
Proof: This follows from the fact that $\mathrm{sign}(\Phi_1(x_1)\Phi_2(x_2)) = \mathrm{sign}(\Phi_1(x_1))\,\mathrm{sign}(\Phi_2(x_2))$.
This observation highlights the simplification we made in relaxing the independence constraint and requiring that the bits be uncorrelated. Indeed the bits corresponding to outer-product eigenfunctions are approximately uncorrelated but they are surely not independent.
The exact form of the eigenfunctions for 1D continuous Laplacian for different distributions
is a matter of ongoing research [17]. We have found, however, that the bit codes obtained
by thresholding the eigenfunctions are robust to the exact form of the distribution. In particular, simply fitting a multidimensional rectangle distribution to the data (by using PCA
to align the axes, and then assuming a uniform distribution on each axis) works surprisingly
well for a wide range of distributions. In particular, using the analytic eigenfunctions of a
uniform distribution on data sampled from a Gaussian, works as well as using the numerically calculated eigenvectors and far better than boosting or RBMs trained on the Gaussian
distribution.
To summarize, given a training set of points $\{x_i\}$ and a desired number of bits $k$, the spectral hashing algorithm works by:
• Finding the principal components of the data using PCA.
• Calculating the $k$ smallest single-dimension analytical eigenfunctions of $L_p$ using a rectangular approximation along every PCA direction. This is done by evaluating the $k$ smallest eigenvalues for each direction using equation 5, thus creating a list of $dk$ eigenvalues, and then sorting this list to find the $k$ smallest eigenvalues.
• Thresholding the analytical eigenfunctions at zero, to obtain binary codes.
A sketch of these three steps follows.
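Below is a compact, hedged sketch of the three steps in Python/numpy; `fit` and `encode` are our own names, and since $\lambda_k$ in equation 5 is monotone in $(k\pi/(b-a))^2$ for a fixed $\varepsilon$, the sketch sorts candidate bits by that quantity directly.

import numpy as np

def fit(X, k):
    """Learn a spectral hashing code: PCA, rectangle fit, mode selection."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Z = (X - mu) @ Vt.T                          # PCA-aligned coordinates
    lo, hi = Z.min(axis=0), Z.max(axis=0)        # rectangle per direction
    cand = [((m * np.pi / (hi[d] - lo[d])) ** 2, d, m)
            for d in range(Z.shape[1]) for m in range(1, k + 1)]
    modes = sorted(cand)[:k]                     # k smallest eigenvalues
    return mu, Vt, lo, hi, modes

def encode(X, params):
    """Threshold the analytic eigenfunctions (equation 4) at zero."""
    mu, Vt, lo, hi, modes = params
    Z = (X - mu) @ Vt.T
    bits = [np.sin(np.pi / 2 + m * np.pi * (Z[:, d] - lo[d]) / (hi[d] - lo[d]))
            for _, d, m in modes]
    return (np.stack(bits, axis=1) > 0).astype(np.uint8)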
Figure 2: Left: Eigenfunctions for a uniform rectangular distribution in 2D. Right: Thresholded eigenfunctions. Outer-product eigenfunctions have a red frame. The eigenvalues depend on the aspect ratio of the rectangle and the spatial frequency of the cut: it is better to cut the long dimension first, and lower spatial frequencies are better than higher ones.
[Figure 3 panels: codes obtained with LSH, Boosting SSC, an RBM with two hidden layers, and spectral hashing at (a) 3 bits, (b) 7 bits, and (c) 15 bits.]
Figure 3: Comparison of neighborhoods defined by Hamming balls of different radii using codes obtained with LSH, Boosting, RBM and spectral hashing when using 3, 7 and 15 bits. The yellow dot denotes a test sample. The red points correspond to the locations that are within a Hamming distance of zero. Green corresponds to a Hamming ball of radius 1, and blue to radius 2.
This simple algorithm has two obvious limitations. First, it assumes that a multidimensional uniform distribution generated the data. We have experimented with using multidimensional Gaussians instead. Second, even though it avoids the trivial three-way dependencies that arise from outer-product eigenfunctions, other high-order dependencies between the bits may exist. We have experimented with using only frequencies that are powers of two to avoid these dependencies. Neither of these more complicated variants of spectral hashing gave a significant improvement in performance in our experiments.
Figure 4a compares the performance of spectral hashing to LSH, RBMs and Boosting on a 2D rectangle, and Figure 3 visualizes the Hamming balls for the different methods. Despite the simplicity of spectral hashing, it outperforms the other methods. Even when we apply RBMs and Boosting to the output of spectral hashing, the performance does not improve. A similar pattern of results is shown on high-dimensional synthetic data (Figure 4b).
Some insight into the superior performance can be obtained by comparing the partitions that each bit defines on the data (Figures 2 and 1). Recall that we seek partitions that give low cut value and are approximately independent. LSH, which uses random linear partitions, may give very unbalanced partitions. RBMs and Boosting both find good partitions, but the partitions can be highly dependent on each other.
3 Results
In addition to the synthetic results we applied the different algorithms to the image databases discussed in [3]. Figure 5 shows retrieval results for spectral hashing, RBMs and boosting on the "labelme" dataset. Note that even though spectral hashing uses a terrible model of the statistics of the database (it simply assumes an N-dimensional rectangle), it performs better than boosting, which actually uses the distribution (the difference in performance relative to RBMs is not significant). Not only is the performance numerically better, but
[Figure 4 plots: proportion of good neighbors at Hamming distance < 2 versus number of bits (0-35) for spectral hashing, boosting + spectral hashing, RBM, RBM + spectral hashing, stumps boosting SSC, and LSH, on (a) a 2D uniform distribution and (b) a 10D uniform distribution.]
Figure 4: Left: results on 2D rectangles with different methods. Even though spectral hashing is the simplest, it gives the best performance. Right: a similar pattern of results for a 10-dimensional distribution.
[Figure 5 plots: retrieval examples (input, gist neighbors, spectral hashing 10 bits, boosting 10 bits) and proportion of good neighbors at Hamming distance < 2 versus number of bits (0-60) for spectral hashing, RBM, Boosting SSC, and LSH.]
Figure 5: Performance of different binary codes on the LabelMe dataset described in [3]. The
data is certainly not uniformly distributed, and yet spectral hashing gives better retrieval
performance than boosting and LSH.
our visual inspection of the retrieved neighbors suggests that with a small number of bits,
the retrieved images are better using spectral hashing than with boosting.
Figure 6 shows retrieval results on a dataset of 80 million images. This dataset is obviously more challenging, and even using exhaustive search some of the retrieved neighbors are semantically quite different. Still, the majority of retrieved neighbors seem to be semantically relevant, and with 64 bits spectral hashing enables this performance in fractions of a second.
4 Discussion
We have discussed the problem of learning a code for semantic hashing. We defined a hard
criterion for a good code that is related to graph partitioning and used a spectral relaxation
to obtain an eigenvector solution. We used recent results on convergence of graph Laplacian
eigenvectors to obtain analytic solutions for certain distributions and showed the importance
of avoiding redundant bits that arise from separable distributions.
The final algorithm we arrive at, spectral hashing, is extremely simple: one simply performs PCA on the data and then fits a multidimensional rectangle. The aspect ratio of this multidimensional rectangle determines the code using a simple formula. Despite this simplicity, the method is comparable, if not superior, to state-of-the-art methods.
[Figure 6 panels: gist neighbors; spectral hashing at 32 bits; spectral hashing at 64 bits.]
Figure 6: Retrieval results on a dataset of 80 million images using the original gist descriptor, and hash codes built with spectral hashing with 32 bits and 64 bits. The input image corresponds to the image in the top-left corner; the rest are the 24 nearest neighbors using Hamming distance for the hash codes and L2 for gist.
References
[1] R. R. Salakhutdinov and G. E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, 2007.
[2] A. Torralba, R. Fergus, and W. T. Freeman. Tiny images. Technical Report MIT-CSAIL-TR-2007-024, Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 2007.
[3] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large databases for recognition. In CVPR, 2008.
[4] James Hays and Alexei A. Efros. Scene completion using millions of photographs. ACM Transactions on Graphics (SIGGRAPH 2007), 26(3), 2007.
[5] R. R. Salakhutdinov and G. E. Hinton. Semantic hashing. In SIGIR Workshop on Information Retrieval and Applications of Graphical Models, 2007.
[6] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, pages 585-591, 2001.
[7] Geoffrey E. Hinton and Sam T. Roweis. Stochastic neighbor embedding. In NIPS, pages 833-840, 2002.
[8] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In FOCS, pages 459-468, 2006.
[9] R. R. Salakhutdinov and G. E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AI and Statistics, 2007.
[10] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter sensitive hashing. In ICCV, 2003.
[11] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001.
[12] J. Shi and J. Malik. Normalized cuts and image segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 731-737, 1997.
[13] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, and Marie Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197-2219, 2004.
[14] Charless Fowlkes, Serge Belongie, Fan R. K. Chung, and Jitendra Malik. Spectral grouping using the Nyström method. IEEE Trans. Pattern Anal. Mach. Intell., 26(2):214-225, 2004.
[15] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: diffusion maps. Proceedings of the National Academy of Sciences, 102(21):7426-7431, May 2005.
[16] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 2007.
[17] Boaz Nadler, Stephane Lafon, Ronald R. Coifman, and Ioannis G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. arXiv, 2008. http://arxiv.org/.
in non-homogeneous datasets
Gerald Quon1 , Yee Whye Teh2 , Esther Chan3 , Timothy Hughes3 , Michael Brudno1,3 ,
Quaid Morris3
1
Department of Computer Science, and 3 Banting and Best Department of Medical Research,
University of Toronto, Canada,
2
Gatsby Computational Neuroscience Unit, University College London, United Kingdom
{gerald.quon,quaid.morris}@utoronto.ca
Abstract
We address the challenge of assessing conservation of gene expression in complex, non-homogeneous datasets. Recent studies have demonstrated the success
of probabilistic models in studying the evolution of gene expression in simple
eukaryotic organisms such as yeast, for which measurements are typically scalar
and independent. Models capable of studying expression evolution in much more
complex organisms such as vertebrates are particularly important given the medical and scientific interest in species such as human and mouse. We present Brownian Factor Phylogenetic Analysis, a statistical model that makes a number of
significant extensions to previous models to enable characterization of changes
in expression among highly complex organisms. We demonstrate the efficacy of
our method on a microarray dataset profiling diverse tissues from multiple vertebrate species. We anticipate that the model will be invaluable in the study of gene
expression patterns in other diverse organisms as well, such as worms and insects.
1 Introduction
High-throughput functional data is emerging as an indispensible resource for generating a complete
picture of genome-wide gene and protein function. Currently, gene function is often inferred through
sequence comparisons with genes of known function in other species, though sequence similarity
is no guarantee of shared biological function. Gene duplication, one of the primary forces of genomic evolution, often gives rise to genes with high sequence similarity but distinct biological roles
[1]. Differences in temporal and spatial gene expression patterns have also been posited to explain
phenotypic differences among animals despite a surprisingly large degree of gene sequence similarity [2]. This observation and the increasingly wide availability of genome-wide gene expression
profiles from related organisms has motivated us to develop statistical models to study the evolution of gene expression along phylogenies, in order to identify lineages where gene expression and
therefore gene function is likely to be conserved or diverged.
Comparing gene expression patterns between distantly related multi-cellular organisms is challenging because it is difficult to collect a wide range of functionally matching tissue samples. In some
cases, matching samples simply may not exist because some organismal functions have been redistributed among otherwise homologous organs. For example, processes such as B-cell development
are performed by both distinct and overlapping sets of tissues: primarily bone marrow in mammals;
Bursa of Fabricus and bone marrow in birds; and likely kidney, spleen, and/or thymus in teleost fish
(who lack bone marrow) [3]. Matching samples can also be hard to collect because anatomical arrangements of some of the queried organisms make isolation of specific tissues virtually impossible.
For example, in frog, the kidneys are immediately adjacent to the ovaries and are typically covered in
oocytes. By allowing tissue samples to be mixed and heterogeneous, though functionally related, it
becomes possible to compare expression patterns describing a much larger range of functions across
a much larger range of organisms.
Current detailed statistical models of expression data assume measurements from matched samples
in each organism. As such, comparative studies of gene expression to date have either resorted to
simple, non-phylogenetic measures to compare expression patterns [4], or restricted their comparisons to single-cellular organisms [5] or clearly homologous tissues in mammals [6].
Here, we present Brownian Factor Phylogenetic Analysis (BFPA), a new model of gene expression evolution that removes the earlier limitations of matched samples, therefore allowing detailed
comparisons of expression patterns from the widely diverged multi-cellular organisms. Our model
takes as input expression profiles of orthologous genes in multiple present-day organisms and a phylogenetic tree connecting those organisms, and simultaneously reconstructs the expression profiles
for the ancestral nodes in the phylogenetic tree while detecting links in the phylogeny where rapid
change of the expression profile has occurred.
We model the expression data from related organisms using a mixture of Gaussians model related
to a mixture of constrained factor analyzers [7]. In our model, each mixture component represents a
different pattern of conservation and divergence of gene expression along each link of the phylogenetic tree. We assume a constrained linear mapping between the heterogeneous samples in different
organisms and fit this mapping using maximum likelihood. We show that by expanding the amount
of expression data that can be compared between species, our model generates more useful information for predicting gene function and is also better able to reconstruct the evolutionary history of
gene expression as evidenced by its increased accuracy in reconstructing gene expression levels.
2 Previous work
Recent evolutionary models of gene expression treat it as a quantitative (i.e. real-valued) trait and model evolutionary change in expression levels as a Brownian motion process [8, 9]. Assuming Brownian motion, a given gene's expression level $x_s$ in a child species $s$ after a divergence time $t_s$ from an ancestral species $\alpha(s)$ is predicted to be Gaussian distributed with a mean $x_{\alpha(s)}$ equal to the gene's expression level in the ancestor and variance $\sigma^2 t_s$:
$$x_s \sim \mathcal{N}(x_{\alpha(s)}, \sigma^2 t_s) \qquad (1)$$
where $\sigma^2$ represents the expected rate of change per unit time. The ancestor-child relationships are specified using a phylogeny, such as that shown in Figure 1a for the vertebrates. The leaves of the phylogeny are associated with present-day species and the internal branch points with shared ancestors. The exact position of the root of the phylogeny (not shown in the figure, but somewhere along branch "T") cannot be established without additional information, and the outgroup species "T" is
directed Gaussian graphical model, e.g. Figure 1b, whose nodes are variables representing expression levels in the corresponding species and whose directed edges point from immediate ancestors
to their children species. The conditional probability distribution (CPD) at each node is given by
Equation 1.
Typical uses of these evolutionary models are to compare different hypotheses about divergence
times [8] or the structure of the phylogeny [9] by calculating the likelihood of the present-day expression levels under various hypotheses. To avoid assigning this prior over the root node and
thus introducing bias [10], Felsenstein developed a method called restricted maximum likelihood
(REML) [11], which specifies a distribution over the observed differences between present-day expression levels rather than the expression levels themselves.
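To make the generative assumption of equation 1 concrete, here is a minimal sketch of sampling a single gene's expression level down a phylogeny under Brownian motion. The dictionary encoding of the tree, the divergence times, and the value of $\sigma^2$ are our own illustrative assumptions, not values from the paper.

import numpy as np

def sample_brownian(tree, root_value, sigma2, seed=0):
    """tree maps child -> (parent, divergence time t); children must be
    listed after their parents (insertion order is the traversal order)."""
    rng = np.random.default_rng(seed)
    x = {"root": root_value}
    for child, (parent, t) in tree.items():
        x[child] = rng.normal(loc=x[parent], scale=np.sqrt(sigma2 * t))
    return x

tree = {"HM": ("root", 1.0), "H": ("HM", 0.5), "M": ("HM", 0.5)}
print(sample_brownian(tree, root_value=0.0, sigma2=0.1))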
3 Brownian Factor Phylogenetic Analysis: A model of expression evolution
In the following section, we propose changes to the Brownian motion model that not only allow
for unmatched tissue samples, but also leverage the change observed in expression levels across
multiple genes in order to classify genes into different patterns of expression evolution. We use $x_s^i$ to indicate the hidden expression profile of the $i$-th gene (out of $N$ ortholog groups) in species $s$.
Figure 1: Our statistical model and associated species phylogenies. (a) The phylogeny of the species measured in our dataset of human (H), mouse (M), chicken (C), frog (F), and tetraodon (T), as well as an example phylogeny of three hypothetical species $x_1$, $x_2$, and $x_3$ used to illustrate our model. (b) Our statistical model, showing how the outgroup species $x_3$ and its corresponding observed expression levels $\tilde{x}_3$ are used as a gene expression prior. Edge weights on the graph depict scaling factors applied to the variance terms $\Phi$, which are specified by each conservation pattern $c$. 1 denotes no scaling on that branch, whereas $\lambda > 1$ depicts a longer, and thus unconserved, branch. This particular conservation pattern represents a phylogeny where all species have conserved expression. The scale on the bottom shows hypothetical values for $x_1$, $x_2$, and $x_3$, as well as the inferred value for $x_{12}$. (c) The same model except applied to a conservation pattern where species $x_3$ is determined to exhibit significantly different expression levels (rapid change).
The input to our model are vectors of tissue-specific expression levels $\{\tilde{x}_s^i\}_{i=1}^N$ for $N$ genes over present-day species $s \in \{P \cup o\}$; we distinguish the chosen outgroup species $o$ from the rest of the present-day species $P$. $\tilde{x}_s^i \in \mathbb{R}^{d_s}$, where $d_s$ is the number of tissues in species $s$. The goal of our model is to infer each gene's corresponding pattern of gene expression evolution (conservation pattern) $\{c^i\}_{i=1}^N$ and latent expression levels $\{x_s^i\}_{i=1}^N$ for all species $s \in \{P \cup o \cup A\}$, where $A$ represents the internal ancestral species in the phylogenetic tree (Figure 1). The likelihood function $L = P\left(\{\tilde{x}_P^i, x_{P \cup o \cup A}^i, c^i\}_{i=1}^N \mid \{\tilde{x}_o^i\}_{i=1}^N, \Theta\right)$ is shown below, where $\alpha(s)$ refers to the parent species of $s$, $\Theta = (\Lambda, \Phi, \Psi, \lambda, \pi)$ are the model parameters, and $\mathcal{N}(x; \mu, \Sigma)$ is the density of $x$ under a multivariate normal distribution with mean $\mu$ and covariance $\Sigma$:
$$L = \prod_i \left[\prod_{s \in P \cup A} P(x_s^i \mid x_{\alpha(s)}^i, c^i, \Theta)\right] \left[\prod_{s \in P} P(\tilde{x}_s^i \mid x_s^i, \Theta)\right] P(x_o^i \mid \tilde{x}_o^i, \Theta)\, P(c^i \mid \Theta)$$
$$P(x_s^i \mid x_{\alpha(s)}^i, c^i = K_j, \Theta) = \mathcal{N}\left(x_s^i;\ \Lambda_s x_{\alpha(s)}^i,\ \lambda_s^{K_{j,s}} \Phi_s\right) \qquad (2)$$
$$P(\tilde{x}_s^i \mid x_s^i, \Theta) = \mathcal{N}\left(\tilde{x}_s^i;\ x_s^i,\ \Psi_s\right) \qquad (3)$$
$$P(c^i = K_j \mid \Theta) = \pi_j \qquad (4)$$
Modeling branch lengths. Equation 2 reflects the central assumption of Brownian motion models [8, 9, 10] described in Equation 1, which BFPA extends in two directions. First, we constrain all variances $\Phi_s$ to be diagonal in order to estimate tissue-specific drift rates, as tissues are known to vary widely in expression divergence rates [12]. Secondly, we note that in studying a diverse lineage such as vertebrates, we expect to see large changes in expression for genes that have diverged in function, as compared to genes of conserved function. We therefore model the drift of a gene's expression levels along each branch of the tree as following one of two rates: a slow rate, reflecting a functional constraint, and a fast rate, reflecting neutral or selected change. Correspondingly, for each branch of the phylogenetic tree above the species $s$, we define two rate parameters, $\lambda_s^2$ or $\lambda_s^1$, termed a short and long branch respectively ($\lambda_s^2 < \lambda_s^1$). We fix $\lambda_s^2 = 1.0$ and initialize $\lambda_s^1$ to a much larger value to maintain this relationship during learning, thus modeling fast-moving genes as outliers. Our method of modelling constrained and unconstrained change as scalar multiples of a common variance is similar to the discrete gamma method [13].
Linear relationship between ancestral and child tissues. We model tissues of child species as linear combinations of ancestral tissues. The matrix of coefficients $\Lambda_s$ that relates expression levels in the child species' tissues to those of its parent species is heavily constrained to leverage our prior understanding of the relationships of specific tissues [14]. To construct $\Lambda_s$, pairs of tissues that were clearly homologous (i.e. the heart) had their corresponding entry in $\Lambda_s$ fixed at 1, and all other entries in the same row set to zero. For the remaining tissues, literature searches were conducted to determine which groups of tissues had broadly related function (i.e. immune tissues), and those entries were allowed to vary from zero. All other entries were constrained to be zero. A sketch of one way to encode these constraints appears below.
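The following is a hedged illustration of how the constraints on $\Lambda_s$ could be encoded: entries for clearly homologous tissue pairs fixed at 1, a mask of free entries for broadly related tissue groups, and zeros elsewhere. The tissue names and groupings are hypothetical, and the paper does not prescribe this particular representation.

import numpy as np

child = ["heart", "spleen", "thymus"]                 # tissues in child species
parent = ["heart", "immune"]                          # tissues in parent species
fixed_one = {("heart", "heart")}                      # homologous: fixed at 1
free = {("spleen", "immune"), ("thymus", "immune")}   # related: learned in EM

Lam = np.zeros((len(child), len(parent)))             # fixed part of Lambda_s
mask = np.zeros_like(Lam)                             # 1 marks a free entry
for i, c in enumerate(child):
    for j, p in enumerate(parent):
        if (c, p) in fixed_one:
            Lam[i, j] = 1.0       # rest of this row stays constrained to zero
        elif (c, p) in free:
            mask[i, j] = 1.0
print(Lam, mask, sep="\n")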
Distinguishing intra- and inter-species variation. Equation 3 relates the observed expression levels of present-day species to the noiseless, inferred expression levels of the corresponding hidden nodes of each observed species. The variance factor $\Psi_s$ is an estimate of the variation expected due to noise in the array measurements, and is estimated via maximum likelihood using multiple identical probes present on each microarray.
Conservation pattern estimation. Our goal is to identify different types of expression evolution, including punctuated evolution, fully conserved expression, or rapid change along all branches of the phylogeny. We model the problem as a mixture model of conservation patterns, in which each conservation pattern specifies either constrained or fast change along each branch of the tree. Each conservation pattern $K_j \in \{1, 2\}^{|P \cup A|}$ specifies a configuration of $\lambda_s^1$ or $\lambda_s^2$ for each species $s$ ($K_{j,s} \in \{1, 2\}$ specifies $\lambda_s^{K_{j,s}}$). However, not all $2^{|P \cup A|}$ possible patterns of short and long branches can be uniquely considered. In particular, a tree containing at least one ancestor incident to two long branches and one short branch is ambiguous, because this tree cannot be distinguished from the same tree with that ancestor incident to three long branches. As a post-processing step, we consider short branches in those cases to be long, and sum over such ambiguous trees, leaving a total of $J$ possible conservation patterns. Each pattern $K_j$ is assigned a prior probability $P(K_j) = \pi_j$ that is learned, as reflected in Equation 4.
4 Inference
Because our graphical model contains no cycles, we can apply belief propagation to perform exact inference and obtain the posterior distributions $P(c^i = K_j \mid \tilde{x}^i, \Theta)$, $\forall i, j$:
$$\gamma_{ij} = P(c^i = K_j \mid \tilde{x}^i, \Theta) \propto \int P(x_{P \cup o \cup A}^i, \tilde{x}_P^i, c^i = K_j \mid \tilde{x}_o^i, \Theta)\, dx_{P \cup o \cup A}^i \qquad (5)$$
We can also estimate the distributions over expression levels of a species $s_0$ as
$$P(x_{s_0}^i \mid \tilde{x}^i, \Theta) \propto \sum_j \int P(x_{P \cup o \cup A}^i, \tilde{x}_P^i, c^i = K_j \mid \tilde{x}_o^i, \Theta)\, dx_{P \cup o \cup A \setminus s_0}^i \qquad (6)$$
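As an assumed implementation detail: once belief propagation has produced the per-pattern log marginal likelihoods (an $N \times J$ array, here called log_lik), the posteriors $\gamma_{ij}$ of equation 5 reduce to a numerically stable normalization. A minimal sketch with our own naming:

import numpy as np

def responsibilities(log_lik, pi):
    """gamma[i, j] = P(c^i = K_j | observed data), rows summing to one."""
    log_post = log_lik + np.log(pi)                  # add the log prior pi_j
    log_post -= log_post.max(axis=1, keepdims=True)  # guard against underflow
    gamma = np.exp(log_post)
    return gamma / gamma.sum(axis=1, keepdims=True)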
5 Learning
Applying the expectation maximization (EM) algorithm yields the following maximum likelihood estimates of the model parameters, where $E_{s,s|K_j} = E[x_s^i x_s^{iT} \mid \tilde{x}_s^i, c^i = K_j]$, $E_{s,\alpha(s)|K_j} = E[x_s^i x_{\alpha(s)}^{iT} \mid \tilde{x}_s^i, c^i = K_j]$, and $E_{\alpha(s),\alpha(s)|K_j} = E[x_{\alpha(s)}^i x_{\alpha(s)}^{iT} \mid \tilde{x}_s^i, c^i = K_j]$:
$$\hat{\Lambda}_s = \left(\sum_{i=1}^N \sum_{j=1}^J \frac{\gamma_{ij}}{\lambda_s^{K_{j,s}}} E_{s,\alpha(s)|K_j}\right) \left(\sum_{i=1}^N \sum_{j=1}^J \frac{\gamma_{ij}}{\lambda_s^{K_{j,s}}} E_{\alpha(s),\alpha(s)|K_j}\right)^{-1} \qquad (7)$$
$$\hat{\Phi}_s = \frac{1}{N} \operatorname{diag}\left\{\sum_{i=1}^N \sum_{j=1}^J \frac{\gamma_{ij}}{\lambda_s^{K_{j,s}}} \left(E_{s,s|K_j} - 2\Lambda_s E_{s,\alpha(s)|K_j}^T + \Lambda_s E_{\alpha(s),\alpha(s)|K_j} \Lambda_s^T\right)\right\}$$
$$\hat{\lambda}_s^k = \left(\sum_i \sum_j [K_{j,s} = k]\,\gamma_{ij} \dim(x_s^i)\right)^{-1} \sum_i \sum_j [K_{j,s} = k]\,\gamma_{ij} \left(\operatorname{tr}\left[E_{s,s|K_j} \Phi_s^{-1}\right] + \operatorname{tr}\left[\Lambda_s^T \Phi_s^{-1} \left(-2 E_{s,\alpha(s)|K_j} + \Lambda_s E_{\alpha(s),\alpha(s)|K_j}\right)\right]\right)$$
$$\hat{\pi}_j = \frac{\sum_{i=1}^N \gamma_{ij}}{N} \qquad (8)$$
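Given the responsibilities $\gamma$, the $\pi$ update in equation 8 and the hard pattern assignment used later in the stability experiments are one line each; a sketch (the function names are ours):

import numpy as np

def m_step_pi(gamma):
    return gamma.mean(axis=0)        # pi_j = (1/N) * sum_i gamma_ij, eqn (8)

def map_patterns(gamma):
    return gamma.argmax(axis=1)      # most likely conservation pattern per gene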
Although we have rooted the phylogeny using a present-day species rather than placing a hypothetical root as has been done in previous Brownian motion models, these two models are related because they are equivalent under the condition that all samples are matched. First, note that in traditional Brownian motion models, the location of the root is arbitrary if one assumes a constant, improper prior over the root expression levels, since any choice of root would give rise to the same probability distribution over the expression levels. By using a present-day species with observed expression levels as the root node, we avoid integrating over this improper prior. Because the root node prior is constant, the likelihood of the other present-day species conditioned on this present-day root expression level is a constant times the likelihood of all present-day species' expression levels. Our conditional model therefore assigns the same likelihoods and marginals as REML.
6 Results
We present the results of applying our model to a novel dataset consisting of gene expression measurements of 4770 genes with unique, unambiguous orthology, i.e., each of the 4770 genes is present
in only a single copy, across the following five present-day organisms: human, mouse, chicken, frog,
and tetraodon. The phylogeny relating these species is shown in Figure 1, with nodes labelled by the first letter of the species name. We set tetraodon as the root, so $o = T$ and $P = \{H, M, C, F\}$, and we label the internal ancestors by concatenating the labels of their present-day descendants, so $A = \{HM, HMC, HMCF\}$.
Replicate microarray probe intensity measurements were taken for the 4770 genes across a total of
161 tissues (i.e., 322 microarrays in total) in the five organisms: 46 tissues from human, 55 from
mouse, and 20 from each of the other three organisms. We applied a standard pre-processing pipeline
to the array set to remove experimental artifacts and to transform the probe intensity measurements
on each array to a common, variance-stabilized scale. Each array was first spatially detrended as
described in [15]. Within a species, all arrays share the same probe set, so we applied VSN [16]
to the arrays from each species to estimate an array-specific affine transform to transform the probe
intensities to species-specific units. We next applied an arcsinh transform to the probe intensities
to make the variance of the noise independent of the intensity measurement. For the final two preprocessing steps, we placed the transformed intensity measurements into a matrix for each species.
The rows of this matrix correspond to genes and the columns to the measured tissues. First, to remove probe bias in the transformed intensities, we subtracted the row median from each element; then, to attempt to transform measurements from different species to a common scale, we subtracted the column means from each element and divided by the column length. A sketch of these final steps follows.
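Below is a hedged sketch of these final per-species steps (the arcsinh transform and the two normalizations); spatial detrending and VSN are assumed to have been applied already, and the function name is ours.

import numpy as np

def normalize(M):
    """M: genes x tissues matrix of probe intensities for one species."""
    M = np.arcsinh(M)                                    # variance stabilization
    M = M - np.median(M, axis=1, keepdims=True)          # remove per-gene probe bias
    M = M - M.mean(axis=0, keepdims=True)                # center each tissue column
    return M / np.linalg.norm(M, axis=0, keepdims=True)  # unit column length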
First, we investigate the stability of our conservation pattern estimates by using parameters trained on
different random subsamples of our genes. We then evaluate the predictive value of our algorithm
BFPA using two tasks: a) predicting gene expression profiles in a new species given expression
profiles in other species, and b) predicting Gene Ontology annotation using the conservation pattern
inferred by our model.
To perform the stability experiments, we first randomly split the dataset into five subsets, and used
each subset individually to train the model using 100 iterations of EM. We then estimated $P(c^i \mid \tilde{x}^i, \Theta)$ for the four other subsets of genes, and classified each gene into its most likely conservation pattern.
Hence, each gene is classified four times by non-overlapping training sets. Figure 2 shows that
the classifications are quite stable and that most genes are classified into few conservation patterns.
Most genes that were uniquely classified into a single conservation pattern either were classified as
fully (all) conserved or completely unconserved, resulting in relatively few high-confidence lineagespecific genes.
6.1 Functional associations of co-transcriptionally evolving genes
Pairs of genes exhibiting correlated expression also tend to perform similar functions. This guilt-by-association principle is often used to initially assign putative functions to genes. For example, a popular method for analyzing gene expression datasets is to cluster genes based on the pairwise
[Figure 2 plots: histograms of the number of genes versus (left) the number of conservation patterns each gene was classified into (1-4) and (right) the number of conserved species (none, 2, 3, 4, all) among uniquely classified genes.]
Figure 2: Stability of conservation pattern assignments to genes. (left) Each gene was placed into
one of four bins, denoting the number of unique patterns it was classified into. Most genes were
consistently classified into one conservation pattern for all four of its independent classifications.
(right) For all genes uniquely classified into a single conservation pattern, the number of present-day species adjacent to conserved links was computed. Most genes were either classified as fully (all) conserved or completely unconserved.
Pearson correlation coefficient (PCC), then measure the enrichment of these clusters in Gene Ontology (GO) function and process annotations [17]. In this section, we introduce the evolutionary correlation coefficient (ECC), a simple modification of PCC to integrate model predictions, and examine whether genes with the same annotated function are more similar in rank according to the ECC or PCC measures. ECC scales the positively-transformed PCC by the marginal probability of the genes following the same expression evolution, assuming independent evolution:
$$\mathrm{ECC}(\tilde{x}^i, \tilde{x}^k) = \left(1 + \mathrm{PCC}(\tilde{x}^i, \tilde{x}^k)\right) \sum_j P(c^i = j \mid \tilde{x}^i, \Theta)\, P(c^k = j \mid \tilde{x}^k, \Theta)$$
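A minimal sketch of this similarity, assuming the pattern posteriors $\gamma$ (the $N \times J$ matrix of Section 4) have been precomputed; the function name is ours.

import numpy as np

def ecc_matrix(expr, gamma):
    """expr: N x d expression matrix; gamma: N x J pattern posteriors."""
    pcc = np.corrcoef(expr)            # pairwise Pearson correlation of genes
    same_pattern = gamma @ gamma.T     # sum_j P(c^i = j) P(c^k = j)
    return (1.0 + pcc) * same_pattern  # the ECC of the equation above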
ECC can be applied using the output of either BFPA or the Brownian model. For the Brownian
model, we trained and made predictions using only those matched samples in all five species. Those
ten samples are the central nervous system (CNS), intestine, heart, kidney, liver, eye, muscle, spleen,
stomach, and testis. We also introduce ECC-sequence, designed to measure the value of evolutionary
information derived from sequence. First, the protein sequences of each gene were aligned using
default parameters of MUSCLE [18]. These alignments were then inputted into PAML [19] together
with the species tree shown in Figure 1 to estimate branch lengths. The PCC measure for each pair
of genes was then scaled by the Pearson correlation coefficient of the branch lengths estimated by
PAML to produce ECC-sequence.
For all models, we first used the ECC/PCC similarity metric for each gene to rank all other genes
in order of expression similarity. We then apply the Wilcoxon Rank Sum test to evaluate whether
genes with the same GO annotations, as annotated for the mouse ortholog, are significantly higher in
rank than all other genes. For this analysis, we only considered GO Process categories which have
at least one of the 4770 genes annotated in that category. We also removed all genes which were not
annotated in any category, resulting in a total of 3319 genes and 4246 categories.
Figure 3 illustrates the distribution of smallest p-values achieved by each gene over all of their annotated functions. PCC is used as a baseline performance measure as it does not consider evolutionary
information. We see that all evolutionary-based models outperform PCC in ranking genes with similar function much closer on average. ECC-sequence performs worse than PCC, suggesting that
expression-based evolutionary metrics may provide additional information compared to those based
on sequence. The relative performance of BFPA versus Brownian reflects an overall significant performance gap between our models and the existing ones. A control measure ECC-random is shown,
which is computed by randomizing the gene labels of the data in each of the five organisms before
learning. Finally, Brown+prior measures the performance of the Brownian model when the conservation pattern priors are allowed to be estimated, and performs better than the Brownian model but
worse than BFPA, as expected. All differences between the distributions are statistically significant,
as all pairwise p-values computed by the Kolmogorov-Smirnov test are less than $10^{-6}$.
[Figure 3 plots: (left) number of genes versus $-\log_{10}$(p-value) for BFPA, Brown+prior, Brown, PCC, ECC-sequence, and ECC-random; (right) difference in wins ([BFPA]wins minus [Brown]wins and [BFPA]wins minus [baseline]wins) for each species H, M, C, F.]
Figure 3: Model performance. (left) A reverse cumulative distribution plot of p-values obtained
from applying the Wilcoxon Rank Sum test using either a PCC or ECC-based similarity metric. The
smallest p-value achieved for each gene across all its annotated functions is used in the distribution.
Position $(x, y)$ indicates that for $y$ genes, their p-value was less than $10^{-x}$. Higher lines on the graph
translate into stronger associations between expression levels and gene function, which we interpret
as better performance. (right) This graph shows the difference in the total number of expression
values for which a particular method achieves the lowest error, sorted by species.
6.2 Reconstruction of gene expression levels
Here we report the performance of our model in predicting the expression level of a gene in each
of human, mouse, chicken, and frog, given its expression levels in the other species. Tetraodon is
not predicted because it acts as an outgroup in our model. The model was trained using 100 EM
iterations on half of the dataset, which was then used to predict the expression levels for each gene
in each species in the other half of the dataset, and vice versa. To create a baseline performance
measure, we computed the error when using an average of the four other species to predict the
expression level of a gene in the fifth species. We only compute predictions for the ten matched
samples across all species so that we can compare errors made by our model against those of Brownian and the baseline, which require matched samples. Figure 3 shows that with the exception of
the comparison against Brownian in chicken, BFPA achieves lower error than both Brownian and
baseline in predicting expression measurements.
7 Discussion
We have presented a new model for the simultaneous evolution of gene expression levels across multiple tissues and organs. Given expression data from present-day species, our model can be used to
simultaneously infer the ancestral expression levels of orthologous genes as well as determine where
in the phylogeny the gene expression levels underwent substantial change. BFPA extends previous
Brownian models [8, 9] by introducing a constrained factor analysis framework to account for complex tissue relationships between different species and by adapting the discrete gamma method [13]
to model quantitative gene expression data. Our model performs better than other Brownian models
in functional association and expression prediction experiments, demonstrating that the evolutionary history we infer better recovers the function of the gene. We have shown that this is in large
part due to our ability to consider species-specific tissue measurements, a feature not implemented
in any existing model to the best of our knowledge. We also showed that gene expression-based
phylogenetic data may provide information not contained in sequence-based phylogenetic data in
terms of helping predict the functional association of genes.
Our model has a number of other applications outside of using it to study the evolutionary history of
gene expression. Our ability to identify genes with conserved expression across multiple species will
help in the inference of gene function from annotated to non-annotated species because unconserved
expression patterns indicate a likely change in the biological function of a gene. We also expect
that by identifying species that share a conserved expression pattern, our model will aid in the
identification of transcriptional cis-regulatory elements by focusing the search for cis-elements to
those species identified as conserved in expression.
While we have taken different profiled samples as representing different tissues, our methodology
can be easily expanded to study evolutionary change in gene expression in response to different
growth conditions or environmental stresses, as with those studied in [5]. Our methodology is
also easily extendible to other model organisms for which there are genomes and expression data
for multiple closely related species (e.g. yeast, worm, fly, plants). We anticipate that the results
obtained will be invaluable in the study of genome evolution and identification of cis-regulatory
elements, whose phylogeny should reflect that of the gene expression patterns.
All data used in this publication can be obtained by a request to the authors.
References
[1] Li, W., Yang, J., Gu, X. (2005) Expression divergence between duplicate genes. Trends Genet., 21, 602-607.
[2] Chen, K., Rajewsky, N. (2007) The evolution of gene regulation by transcription factors and microRNAs.
Nature Rev. Genet., 8, 93-103.
[3] Yergeau, D.A. et al. (2005) bloodthirsty, an RBCC/TRIM gene required for erythropoiesis in zebrafish.
Dev. Biol., 283, 97-112.
[4] Stuart, J.M., Segal, E., Koller, D., Kim, S.K. (2003) A gene-coexpression network for global discovery of
conserved genetic modules. Science, 302, 249-255.
[5] Tirosh, I., Weinberger, A., Carmi, M., Barkai, N. (2006) A genetic signature of interspecies variations in
gene expression. Nat. Genet., 38, 830-834.
[6] Khaitovich, P. et al. (2005) A neutral model of transcriptome evolution. PLoS. Biol., 2, 682-689.
[7] Ghahramani, Z., & Hinton, G.E. (1996) The EM algorithm for mixtures of factor analyzers. Technical
Report CRG-TR-96-2, University of Toronto.
[8] Gu, X. (2004) Statistical framework for phylogenomic analysis of gene family expression profiles. Genetics,
167, 531-542.
[9] Oakley, T.H. et al. (2005) Comparative methods for the analysis of gene-expression evolution: an example
using yeast functional genomic data. Mol. Biol. Evol., 22, 40-50.
[10] Felsenstein, J. (2004) Inferring phylogenies. Sunderland (Massachusetts): Sinauer Associates. 664 p.
[11] Felsenstein, J. (1981) Evolutionary trees from gene-frequencies and quantitative characters - finding maximum likelihood estimates. Evolution, 35, 1229-1242.
[12] Khaitovich et al. (2006) Evolution of primate gene expression. Nat. Rev. Genet., 7, 693-702.
[13] Yang, Z. (1994) Maximum likelihood phylogenetic estimation from DNA sequences with variable rates
over sites: approximate methods. J. Mol. Evol., 39, 306-314.
[14] Kardong, K.V. (2006) Vertebrates: comparative anatomy, function, evolution. McGraw-Hill. 782 p.
[15] Zhang, W., Morris, Q.D. et al. (2004) The functional landscape of mouse gene expression. J. Biol., 3, 21.
[16] Huber, W. et al. (2002) Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformatics, 18, S96-104.
[17] The Gene Ontology Consortium. (2000) Gene Ontology: tool for the unification of biology. Nature Genet.,
25, 25-29.
[18] Edgar, R.C. (2004) MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Nucleic Acids Res., 32, 1792-1797.
[19] Yang, Z. (2007) PAML 4: phylogenetic analysis by maximum likelihood. Mol. Biol. Evol., 24, 1586-1591.
8
Multi-task Gaussian Process Learning of Robot
Inverse Dynamics
Kian Ming A. Chai
Christopher K. I. Williams
Stefan Klanke
Sethu Vijayakumar
School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK
{k.m.a.chai, c.k.i.williams, s.klanke, sethu.vijayakumar}@ed.ac.uk
Abstract
The inverse dynamics problem for a robotic manipulator is to compute the torques
needed at the joints to drive it along a given trajectory; it is beneficial to be able
to learn this function for adaptive control. A robotic manipulator will often need
to be controlled while holding different loads in its end effector, giving rise to a
multi-task learning problem. By placing independent Gaussian process priors over
the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-task similarity depends on
the underlying inertial parameters. Experiments demonstrate that this multi-task
formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single tasks or pooling
the data over all tasks.
1 Introduction
The inverse dynamics problem for a robotic manipulator is to compute the torques $\tau$ needed at the joints to drive it along a given trajectory, i.e. the motion specified by the joint angles $q(t)$, velocities $\dot{q}(t)$ and accelerations $\ddot{q}(t)$, through time t. Analytical models for the inverse dynamics $\tau(q, \dot{q}, \ddot{q})$
are often infeasible, for example due to uncertainty in the physical parameters of the robot, or the
difficulty of modelling friction. This leads to the need to learn the inverse dynamics.
A given robotic manipulator will often need to be controlled while holding different loads in its end
effector. We refer to different loadings as different contexts. The inverse dynamics functions depend
on the different contexts. A simple approach is to learn a different mapping for each context, but
it is more attractive if one can exploit commonality in these related tasks to improve performance,
i.e. to carry out multi-task learning (MTL) [1, 2]. The aim of this paper is to show how this can be
carried out for the inverse dynamics problem using a multi-task Gaussian process (GP) framework.
In §2 we discuss the relevant theory for the problem. Details of how we optimize the hyperparameters of the multi-task GP are given in §3, and model selection is described in §4. Relationships to other work are discussed in §5, and the experimental setup and results are given in §6.
2 Theory
We first describe the relationship of inverse dynamics functions among contexts in §2.1. In §2.2 we review the multi-task GP regression model proposed in [3], and in §2.3 we describe how to derive a
multi-task GP model for the inverse-dynamics problem.
2.1 Linear relationship of inverse dynamics between contexts
Suppose we have a robotic manipulator consisting of J joints, and a set of M loads. Figure 1 illustrates a six-jointed manipulator, with joint j connecting links j-1 and j. We wish to learn the inverse
Figure 1: Schematic of the PUMA 560 without the end-effector (to be connected to joint 6). The six joints are: 1 waist, 2 shoulder, 3 elbow, 4 wrist rotation, 5 wrist bend, 6 flange; the base carries joint 1.
Figure 2: A schematic diagram of how the different functions are related. A plate repeats its contents over the specified range: the functions $h_j$ and $y_{jJ,1}, \ldots, y_{jJ,10}$ are shared among the contexts $m = 1, \ldots, M$, while the parameters $\pi^m_{J,1}, \ldots, \pi^m_{J,10}$ are shared among the joints $j = 1, \ldots, J$.
dynamics model of the manipulator for the m-th context, i.e. when it handles the m-th load in its end-effector connected to the last link. We denote this by $\tau^m(x) \in \mathbb{R}^J$, with $x \stackrel{\rm def}{=} (q^T, \dot{q}^T, \ddot{q}^T)^T \in \mathbb{R}^{3J}$. It can be shown that the required torque for the j-th joint can be written as [4]
$$\tau_j^m(x) = \textstyle\sum_{j'=j}^{J} y_{jj'}(x)^T \pi_{j'}^m, \qquad y_{jj'} : \mathbb{R}^{3J} \mapsto \mathbb{R}^{10}, \qquad (1)$$
where the $y_{jj'}$s are vector-valued functions of $x$, and $\pi_{j'}^m \in \mathbb{R}^{10}$ is the vector of inertial parameters^1 of the j'-th joint when manipulating the m-th load. The inertial parameters for a joint depend on the
physical characteristics of its corresponding link (e.g. mass) and are independent of x.
When, as in our case, the loads are rigidly attached to the end effector, each load may be considered
as part of the last link, and thus modifies the inertia parameters for the last link only [5]. The
parameters for the other links remain unchanged since the parameters are local to the links and their
frames. Denoting the common inertial parameters of the j'-th link by $\pi^\bullet_{j'}$, we can write
$$\tau_j^m(x) = h_j(x) + y_{jJ}(x)^T \pi_J^m, \quad \text{where} \quad h_j(x) \stackrel{\rm def}{=} \textstyle\sum_{j'=j}^{J-1} y_{jj'}(x)^T \pi^\bullet_{j'}. \qquad (2)$$
Define $\tilde{y}_j(x) \stackrel{\rm def}{=} (h_j(x), (y_{jJ}(x))^T)^T$ and $\tilde{\pi}^m \stackrel{\rm def}{=} (1, (\pi_J^m)^T)^T$; then $\tau_j^m(x) = \tilde{y}_j(x)^T \tilde{\pi}^m$. Note that the $\tilde{y}_j$s are shared among the contexts, while the $\tilde{\pi}^m$s are shared among the J links, as illustrated
that the y
in Figure 2. This decomposition is not unique, since given a non-singular square 11?11 matrix Aj ,
def
def
? j (x) and ?m
? m , we also have
setting z j (x) =
A?T
j = Aj ?
j y
? j (x)T A?1
? m = z j (x)T ?m
?jm (x) = y
j .
j Aj ?
(3)
?
? is identifiable only up to a linear combination. Note that in
Hence the vector of parameters ?
general the matrix Aj may vary across the joints.
2.2 Multi-task GP regression model
We give a brief summary of the multi-task Gaussian process (GP) regression model described in [3].
This model learns M related functions $\{f^m\}_{m=1}^M$ by placing a zero mean GP prior which directly induces correlations between tasks. Let $t^m$ be the observation of the m-th function at $x$. Then the model is given by
$$\langle f^m(x)\, f^{m'}(x')\rangle \stackrel{\rm def}{=} K^f_{mm'}\, k^x(x, x'), \qquad t^m \sim \mathcal{N}(f^m(x), \sigma_m^2), \qquad (4)$$
where $k^x$ is a covariance function over inputs, $K^f$ is a positive semi-definite (p.s.d.) matrix of inter-task similarities, and $\sigma_m^2$ is the noise variance for the m-th task.
2.3 Multi-task GP model for multiple contexts
We now show that the multi-task GP model can be used for inferring inverse dynamics for multiple
contexts. We begin by placing independent zero mean GP priors on all the component functions of
$z_1(\cdot), \ldots, z_J(\cdot)$. Let $\alpha$ be an index into the elements of the vector function $z_j(\cdot)$; then our prior is
$$\langle z_{j\alpha}(x)\, z_{j'\alpha'}(x')\rangle = \delta_{jj'}\, \delta_{\alpha\alpha'}\, k_j^x(x, x'). \qquad (5)$$
^1 We may also formulate our model using the more general vector of dynamic parameters, which includes also the friction parameters, motor inertia etc. However, these additional parameters are independent of the load, and so can be absorbed into the function $h_j$ in eq. 2.
In addition to the independence specified by the Kronecker delta functions $\delta_{jj'}\,\delta_{\alpha\alpha'}$, this model also imposes
the constraint that all component functions for a given joint j share the same covariance function
$k_j^x(\cdot, \cdot)$. With this prior over the $z_j$s, the Gaussian process prior for $\tau_j^m(\cdot)$ is given by
$$\langle \tau_j^m(x)\, \tau_{j'}^{m'}(x')\rangle = \delta_{jj'}\, (K_j^\rho)_{mm'}\, k_j^x(x, x'), \qquad (6)$$
where we have set $P_j \stackrel{\rm def}{=} (\rho_j^1 | \cdots | \rho_j^M)$ and $K_j^\rho \stackrel{\rm def}{=} P_j^T P_j$, so that $(\rho_j^m)^T \rho_j^{m'} = (K_j^\rho)_{mm'}$, the $(m, m')$-th entry of the positive semi-definite matrix $K_j^\rho$. Notice that $K_j^\rho$ defines the similarity between different contexts. The rank of $K_j^\rho$ is the rank of $P_j$, and is upper bounded by $\min(M, 11)$,
reflecting the fact that there are at most 11 underlying latent functions (see Figure 2).
Let $t_j^m(x)$ be the observed value of $\tau_j^m(x)$. The deviations from $\tau_j^m(x)$ may be modelled with $t_j^m(x) \sim \mathcal{N}(\tau_j^m(x), (\sigma_j^m)^2)$, though in practice we let $\sigma_j \stackrel{\rm def}{=} \sigma_j^1 = \cdots = \sigma_j^M$, sharing the variance parameters among the contexts. This completes the correspondence with the multi-task GP
model in eq. 4. Note, however, that in this case we have J multi-task GP models, one for each joint.
This model is a simple and convenient one where the prior, likelihood and posterior factorize over
joints. Hence inference and hyperparameter learning can be done separately for each joint.
Making predictions As in [3], inference in our model can be done by using the standard GP
formulae for the mean and variance of the predictive distribution with the covariance function given
in eq. 6 together with the normal noise model. The observations over all contexts for a given joint j
will be used to make the predictions. For the case of complete data (where there are observations at
the same set of x-values for all contexts) one can exploit the Kronecker-product structure [3, eq. 2].
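For intuition, here is a minimal numerical sketch (our own, not from [3]; the RBF kernel, the task-similarity matrix and all sizes are assumptions) of the predictive mean in the complete-data case. It builds the stacked covariance $K^\rho \otimes K^x + \sigma^2 I$ explicitly; np.kron stands in for the more efficient Kronecker identities of [3, eq. 2].

import numpy as np

rng = np.random.default_rng(0)
M, N, Ns = 3, 40, 5
X, Xs = rng.standard_normal((N, 4)), rng.standard_normal((Ns, 4))

def kx(A, B, ell=1.0):
    # Assumed squared-exponential covariance over inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

Krho = np.array([[1.0, 0.8, 0.5], [0.8, 1.0, 0.6], [0.5, 0.6, 1.0]])
sigma2 = 0.01
T = rng.standard_normal((N, M))        # complete design: all tasks observed at all X

# Covariance of the stacked (task-major) observations, then the GP posterior mean.
C = np.kron(Krho, kx(X, X)) + sigma2 * np.eye(M * N)
alpha = np.linalg.solve(C, T.T.reshape(-1))
mean = (np.kron(Krho, kx(Xs, X)) @ alpha).reshape(M, Ns)
print(mean.shape)                       # (3, 5): one row of predictions per task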
2.3.1 The relationship among task similarity matrices
Let $\tilde{\Pi} \stackrel{\rm def}{=} (\tilde{\pi}^1 | \cdots | \tilde{\pi}^M)$. Recall that $\tilde{\pi}^m$ is an 11-dimensional vector. However, if the different loads in the end effector do not explore the full space (e.g. if some of the inertial parameters are constant over all loads), then it can happen that $s \stackrel{\rm def}{=} \mathrm{rank}(\tilde{\Pi}) \le \min(M, 11)$.
It is worthwhile to investigate the relationship between $K_j^\rho$ and $K_{j'}^\rho$, $j \ne j'$. Recall from eq. 3 that $\rho_j^m = A_j \tilde{\pi}^m$, where $A_j$ is a full-rank square matrix. This gives $P_j = A_j \tilde{\Pi}$ and $K_j^\rho = \tilde{\Pi}^T A_j^T A_j \tilde{\Pi}$, so that $\mathrm{rank}(K_j^\rho) = \mathrm{rank}(\tilde{\Pi})$. Therefore the $K_j^\rho$s have the same rank for all joints, although their exact values may differ. This observation will be useful for model selection in §4.
3 Learning the hyperparameters - a staged optimization heuristic
In this section, we drop the joint index j for the sake of brevity and clarity. The following applies
separately for each joint. Let $t^m$ be the vector of $n_m$ observed torques at the joint for context m, and $X^m$ be the corresponding $3J \times n_m$ design matrix. Further, let $X$ be the $3J \times N$ design matrix of distinct x-configurations observed over all M contexts. Given this data, we wish to optimize the marginal likelihood $L(\theta_x, K^\rho, \sigma^2) \stackrel{\rm def}{=} p(\{t^m\}_{m=1}^M \mid X, \theta_x, K^\rho, \sigma^2)$, where $\theta_x$ are the parameters of $k^x$. As pointed out in [3], one may approach this either using general gradient-based optimization,
or using expectation-maximization. In this paper, the former is used.
In general, the objective function $L(\theta_x, K^\rho, \sigma^2)$ will have multiple modes, and it is a difficult problem to locate the best mode. We propose a staged strategy during optimization to help
localize the search region. This is outlined below, with details given in the subsections that follow.
Require: Starting positions $\theta_{x0}$, $K_0^\rho$, $\sigma_0^2$, and rank r.
{All arg max operations are understood to find only the local maximum.}
1: Starting from $\theta_{x0}$ and $\sigma_0^2$, find $(\theta_{x1}, \sigma_1^2) = \arg\max_{\theta_x, \sigma^2} L(\theta_x, K_0^\rho, \sigma^2)$.
2: Calculate $K_1^\rho$ based on details in §3.2.
3: Starting from $\theta_{x1}$, $K_1^\rho$, and $\sigma_0^2$, find $(\theta_{x\mathrm{ans}}, K_{\mathrm{ans}}^\rho, \sigma_{\mathrm{ans}}^2) = \arg\max_{\theta_x, K^\rho, \sigma^2} L(\theta_x, K^\rho, \sigma^2)$.
The optimization order reflects the relative importance of the different constituents of the model. The most important is $k^x$, hence the estimation of $\theta_x$ begins in step 1; the least important is $\sigma^2$, hence its estimation from the initial value $\sigma_0^2$ is in step 3. For our application, we find that this strategy works better than one which simultaneously optimizes for all the parameters.
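As a toy rendering of this schedule (entirely our own sketch: nll is an arbitrary smooth stand-in for the negative log marginal likelihood, and the step-2 value stands in for the estimate of §3.2), the three stages can be driven by a generic optimizer:

import numpy as np
from scipy.optimize import minimize

def nll(theta_x, krho_flat, sigma2):
    # Stand-in for -log L(theta_x, K^rho, sigma^2) of one joint.
    return (theta_x - 1.0) ** 2 + np.sum((krho_flat - 0.5) ** 2) + (sigma2 - 0.1) ** 2

theta_x0, krho0, sigma2_0 = 0.0, np.ones(3), 1.0   # K0^rho = 1 1^T, flattened

# Step 1: optimize theta_x and sigma^2 with K^rho held at its initial value.
p1 = minimize(lambda p: nll(p[0], krho0, p[1]), [theta_x0, sigma2_0]).x
theta_x1, sigma2_1 = p1

# Step 2: a data-driven initial K^rho (stands in for the Sec. 3.2 estimate).
krho1 = 0.4 * np.ones(3)

# Step 3: refine everything jointly, starting from (theta_x1, krho1, sigma2_0).
p3 = minimize(lambda p: nll(p[0], p[1:4], p[4]),
              np.r_[theta_x1, krho1, sigma2_0]).x
print(np.round(p3, 3))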
3.1 The initial choice of $K^\rho$
The choice of $K_0^\rho$ is important, since it affects the search very early on. Reasonable values that admit ready interpretations are the matrix of ones $\mathbf{1}\mathbf{1}^T$ and the identity matrix $I$. For $K_0^\rho = \mathbf{1}\mathbf{1}^T$, we initially assume the contexts to be indistinguishable from each other; while for $K_0^\rho = I$, we
initially assume the contexts to be independent given the kernel parameters, which is a multi-task
learning model that has been previously explored, e.g. [6]. These two are at the opposite extremes
in the spectrum of inter-context/task correlation, and we believe the merit of each will be application
dependent. Since these two models have the same number of free parameters, we select the one with
the higher likelihood as the starting point for the search in step 2. However, we note that in some
applications there may be reasons to prefer one over the other.
3.2 Computation of $K_1^\rho$ in step 2
Given estimates $\theta_{x1}$ and $\sigma_1^2$, we wish to estimate a $K_1^\rho$ from which the likelihood can be optimized in step 3. Here we give the sequence of considerations that leads to a formula for computing $K_1^\rho$.
Let $K_1^x$ be the covariance matrix for all pairs in $X$, using $\theta_{x1}$ for $k^x$. Let $T$ be an $N \times M$ matrix which corresponds to the true values of the torque function $\tau^m(x_i)$ for $m = 1, \ldots, M$ and $i = 1, \ldots, N$. Then, as per the EM step discussed in [3, eq. 4], we have
$$K^\rho_{\rm EM} = N^{-1} \langle T^T (K_1^x)^{-1} T \rangle_{\tilde{\theta}_0} \approx N^{-1} \langle T \rangle^T_{\tilde{\theta}_0} (K_1^x)^{-1} \langle T \rangle_{\tilde{\theta}_0}, \qquad (7)$$
where the expectations are taken w.r.t. a GP with parameters $\tilde{\theta}_0 = (\theta_{x1}, K_0^\rho, \sigma_1^2)$, and the $(i, m)$-th entry of $\langle T \rangle_{\tilde{\theta}_0}$ is the mean of $\tau^m(x_i)$ under this GP. The approximation neglects the GP's variance; this is justifiable since the current aim is to obtain a starting estimate of $K^\rho$ for a search procedure.
There are two weaknesses with eq. 7 that we shall address. The first is that the rank of $\langle T \rangle_{\tilde{\theta}_0}$ is upper bounded by that of $K_0^\rho$, so that the rank of $K^\rho_{\rm EM}$ is similarly upper bounded.^2 This property is undesirable, particularly when $K_0^\rho = \mathbf{1}\mathbf{1}^T$. We ameliorate this by replacing $\langle \tau^m(x_i) \rangle_{\tilde{\theta}_0}$ with the corresponding observed value $t^m(x_i)$ wherever it is available, and call the resultant matrix $T_{\rm aug}$. The second weakness is that with the commonly used covariance functions, $K_1^x$ will typically have rapidly decaying eigenvalues [7, §4.3.1]. To overcome this, we regularize its inversion by adding $\eta^2 I$ to the diagonal of $K_1^x$ to give $K^\rho_{\rm aug} = N^{-1} T_{\rm aug}^T (K_1^x + \eta^2 I)^{-1} T_{\rm aug}$. We set $\eta^2$ to $\mathrm{tr}(T_{\rm aug}^T T_{\rm aug})/(MN)$, so that $\mathrm{tr}(K^\rho_{\rm aug}) = M$ if $K_1^x$ were the zero matrix.
Finally, the required $K_1^\rho$ is obtained from $K^\rho_{\rm aug}$ by constraining it to have rank r. This is currently achieved by computing the eigen-decomposition of $K^\rho_{\rm aug}$ and keeping only the top r eigenvectors/values; it could also be implemented using an incomplete Cholesky decomposition.

^2 This is not due to our approximation; indeed, it can be shown that the rank of $K^\rho_{\rm EM}$ is upper bounded by that of $K_0^\rho$ even if the exact EM update in eq. 7 has been used.
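A minimal sketch of this construction follows (our own rendering of the §3.2 recipe; the shapes and data are synthetic stand-ins for $K_1^x$ and $T_{\rm aug}$):

import numpy as np

def initial_Krho(K1x, T_aug, r):
    # Regularized estimate in the spirit of eq. 7, with observed values
    # substituted, followed by a rank-r truncation via eigen-decomposition.
    N, M = T_aug.shape
    eta2 = np.trace(T_aug.T @ T_aug) / (M * N)
    K_aug = T_aug.T @ np.linalg.solve(K1x + eta2 * np.eye(N), T_aug) / N
    w, V = np.linalg.eigh(K_aug)                 # eigenvalues in ascending order
    w, V = np.clip(w[-r:], 0.0, None), V[:, -r:] # keep the top r, clipped at zero
    return (V * w) @ V.T                         # rank-r p.s.d. approximation

rng = np.random.default_rng(0)
N, M, r = 30, 5, 2
A = rng.standard_normal((N, N))
K1x = A @ A.T / N
K1 = initial_Krho(K1x, rng.standard_normal((N, M)), r)
print(np.linalg.matrix_rank(K1))                 # 2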
3.3 Incorporating a novel task
Above we have assumed that data from all contexts is available at training time. However, we may encounter a new context for which we have not seen much data. In this case we fix $\theta_x$ and $\sigma^2$ while extending $K^\rho$ by an extra row and column for the new context, and it is only this new border which needs to be learned by maximising the marginal likelihood. Note that as $K^\rho$ is p.s.d. this means learning only at most M new parameters, or fewer if we exploit the rank-constraint property of $K^\rho$.
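Under the factorization $K^\rho = P^T P$, incorporating a new context amounts to learning one new column of the factor; the sketch below (ours; random numbers stand in for the likelihood-maximizing values) shows the bordered matrix staying p.s.d. by construction.

import numpy as np

def extend_Krho(P, rho_new):
    # Border K^rho = P^T P with a new context's factor column rho_new; at
    # most rank(P) new free parameters are introduced.
    P_ext = np.hstack([P, rho_new[:, None]])     # r x (M + 1)
    return P_ext.T @ P_ext                       # (M+1) x (M+1), p.s.d.

rng = np.random.default_rng(0)
r, M = 3, 6
P = rng.standard_normal((r, M))
K_ext = extend_Krho(P, rng.standard_normal(r))
print(K_ext.shape, np.all(np.linalg.eigvalsh(K_ext) > -1e-10))

In practice the new column would be set by maximizing the marginal likelihood with $\theta_x$ and $\sigma^2$ held fixed, as described above.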
4 Model selection
The choice of the rank r of $K_j^\rho$ in the model is important, since it reflects on the rank s of $\tilde{\Pi}$. In our model, r is not a hyperparameter to be optimized. Thus to infer its value we rely on an information
criterion to select the most parsimonious correct model. Here, we use the Bayesian Information
Criterion (BIC), but the use of Akaike or Hannan-Quinn criteria is similar.
Let $L_{jr}$ be the likelihood for each joint at optimized hyperparameters $\theta_{xj}$, $K_j^\rho$, and $\sigma_j^2$, when $K_j^\rho$ is constrained to have rank r; let $n_j^m$ be the number of observations for the j-th joint in the m-th context, and $n \stackrel{\rm def}{=} \sum_{j,m} n_j^m$ be the total number of observations; and let $d_j$ be the dimensionality of $\theta_{xj}$. Since the likelihood of the model factorizes over joints, we have
$$\mathrm{BIC}(r) = -2 \sum_{j=1}^{J} \log L_{jr} + \Big[ \sum_{j=1}^{J} d_j + \frac{J}{2}\, r(2M + 1 - r) + J \Big] \log n, \qquad (8)$$
2
where r(2M + 1 ? r)/2 is the number of parameters needed to define an incomplete Cholesky
decomposition of rank r for an M ?M matrix. For selecting the appropriate rank of the Kj? s, we
compute and compare BIC(r) for different values of r.
5 Relationships to other work
We consider related work first with regard to the inverse dynamics problem, and then to multi-task
learning with Gaussian processes.
Learning methods for the single-context inverse dynamics problem can be found in e.g. [8], where
the locally weighted projection regression (LWPR) method is used. Gaussian process methods for
the same problem have also been shown to be effective [7, §2.5; 9]. The LWPR method has been extended to the multi-context situation by Petkos and Vijayakumar [5]. If the inertial parameters $\pi_J^m$s are known for at least 11 contexts then the estimated torque functions can be used to estimate the underlying $y_{jj'}$s using linear regression, and prediction in a novel context (with limited training
data) will depend on estimating the inertial parameters for that context. Assuming the original
estimated torque functions are imperfect, having more than 11 models for distinct known inertial
parameters will improve load estimation. If the inertial parameters are unknown, the novel torque
function can still be represented as a linear combination of a set of 11 linearly independent torque
functions, and so one can estimate the inverse dynamics in a novel context by linear regression on
those estimated functions. In contrast to the known case, however, no more than 11 models can be
used [5, §V]. Another difference between known and unknown parameters is that in the former case the resulting $\pi_J^m$s are interpretable, while in the latter there is ambiguity due to the $A_j$s in eq. 3.
Comparing our approach with [5], we note that: (a) their approach does not exploit the knowledge
that the torque functions for the different contexts are known to share latent functions as in eq. 2,
and thus it may be useful to learn the M inverse dynamics models jointly. This is expected to be
particularly advantageous when the data for each task explores rather different portions of x-space;
(b) rather than relying on least-squares methods (which assume equal error variances everywhere),
our fully probabilistic model will propagate uncertainties (co-variances for jointly Gaussian models)
automatically; and (c) eq. 6 shows that we do not need to be limited to exactly 11 reference contexts,
either fewer or more than 11 can be used. On the other hand, using the LWPR methods will generally
give rise to better computational scaling for large data-sets (although see approximate GP methods
in [7, ch. 8]), and are perhaps less complex than the method in this paper.
Earlier work on multiple model learning such as Multiple Model Switching and Tuning (MMST)
[10] uses an inverse dynamics model and a controller for each context, switching among the models
to the one producing the most accurate predictions. The models are linear-in-the-parameters with
known non-linear regressor functions of x, and the number of models are assumed known. MMST
involves very little dynamics learning, estimating only the linear parameters of the models. A closely
related approach is Modular Selection and Identification for Control (MOSAIC) [11], which uses
inverse dynamics models for control and forward dynamics models for context identification. However, MOSAIC was developed and tested on linear dynamics models without the insights into how
eq. 1 may be used across contexts for more efficient and robust learning and control.
Early references to general multi-task learning are [1] and [2]. There has been a lot of work in recent
years on MTL with e.g. neural networks, Dirichlet processes, Gaussian processes and support vector
machines. Some previous models using GPs are summarized in [3]. An important related work is the
semiparametric latent factor model [12] which has a number of latent processes which are linearly
combined to produce observable functions as in eq. 3. However, in our model all the latent functions
share a common covariance function, which reduces the number of free parameters and should thus
help to reduce over-fitting. Also we note that the regression experiments by Teh et al. [12, ?4] used
a forward dynamics problem on a four-jointed robot arm for a single context, with an artificial linear
mixing of the four target joint accelerations to produce six response variables. In contrast, we have
shown how linear mixing arises naturally in a multi-context inverse dynamics situation. In relation
Figure 3: The four paths p1, p2, p3, p4, plotted in 3-D (axes x/m, y/m, z/m). The robot base is located at (0, 0, 0).
Table 1: The trajectories at which the training
samples for each load are acquired. All loads
have training samples from the common trajectory (p2 , s3 ). For the multiple-contexts setting,
c15 , and hence (p4 , s4 ), is not used for training.
        s1     s2     s3            s4
p1      c1     c7     c13           c14
p2      c6     c12    c1 ... c15    c5
p3      c11    c3     c4            c10
p4      c2     c8     c9            c15*
Table 2: The average nMSEs of the predictions by LR and sGP, for joint 3 and for both kinds of test
sets. Training set sizes given in the second row. The nMSEs are averaged over loads c1 . . . c15 .
        average nMSE for the interpm sets       average nMSE for the extrapm sets
        20       170      1004     4000         20       170      1004     4000
LR      1e-1     7e-4     6e-4     6e-4         5e-1     2e-1     2e-1     2e-1
sGP     1e-2     2e-7     2e-8     3e-9         1e-1     3e-2     4e-3     3e-3
to work by Bonilla et al. [3] described in section 2.2, we note that the factorization between inter-task similarity $K^f$ and a common covariance function $k^x$ is an assumption there, while we have shown
that such decomposition is inherent in our application.
6 Experiments
Data We investigate the effectiveness of our model with the Puma 560 (Figure 1), which has
J = 6 degrees of freedom. We learn the inverse dynamic models of this robot manipulating M = 15
different loads c1 , . . . , c15 through four different figure-of-eight paths at four different speeds. The
data for our experiments is obtained using a realistic simulation package [13], which models both
Coulomb and viscous frictional forces. Figure 3 shows the paths p1 , . . . , p4 which are placed at
0.35m, 0.45m, 0.55m and 0.65m along the x-axis, at 0.36m, 0.40m, 0.44m and 0.48m along the
z-axis, and rotated about the z-axis by -10°, 0°, 10° and 20°. There are four speeds $s_1, \ldots, s_4$,
finishing a path in 20s, 15s, 10s and 5s respectively. In general, loads can have very different
physical characteristics; in our case, this is done by representing each load as a cuboid with differing
dimensions and mass, and attaching each load rigidly to a random point at the end-effector. The
masses range evenly from 0.2kg for c1 to 3.0kg for c15 ; details of the other parameters are omitted
due to space constraints.
For each load cm , 4000 data points are sampled at regular intervals along the path for each path-speed
(trajectory) combination $(p_\cdot, s_\cdot)$. Each sample is the pair $(t, x)$, where $t \in \mathbb{R}^J$ are the observed torques at the joints, and $x \in \mathbb{R}^{3J}$ are the joint angles, velocities and accelerations. This set of data
is partitioned into train and test sets in the manner described below.
Acquiring training data combinatorially by sampling for every possible load-trajectory pair may be
prohibitively expensive. One may imagine, however, that training data for the handling of a load can
be obtained along a fixed reference trajectory Tr for calibration purposes, and also along a trajectory
typical for that load, say Tm for the mth load. Thus, for each load, 2000 random training samples
are acquired at a common reference trajectory Tr = (p2 , s3 ), and an additional 2000 random training
samples are acquired at a trajectory unique to each load; Table 1 gives the combinations. Therefore
each load has a training set of 4000 samples, but acquired only on two different trajectories.
Following [14], two kinds of test sets are used to assess our models for (a) control along a repeated
trajectory (which is of practical interest in industry), and (b) control along arbitrary trajectories
(which is of general interest to roboticists). The test for (a) assesses the accuracy of torque predictions for staying within the trajectories that were used for training. In this case, the test set for load
cm , denoted by interpm for interpolation, consists of the rest of the samples from Tr and Tm that are
not used for training. The test for (b) assesses the accuracy also for extrapolation to trajectories not
sampled for training. The test set for this, denoted by extrapm , consists of all the samples that are
not training samples for cm .
In addition, we consider a data-poor scenario, and investigate the quality of the models using randomly selected subsets of the training data. The sizes of these subsets range from 20 to 4000.
Results comparing GP with linear regression We first compare learning the inverse dynamics
with Bayesian linear regression (LR) to learning with single-task Gaussian processes (sGP). For each
context and each joint, we train a LR model and a sGP model with the corresponding training data
? 1), where sgn(?) is the component-wise signum
separately. For LR, the covariates are (x, sgn(q),
of its arguments; regression coefficients ? and noise variance ? 2 are given a broad normal-inversegamma prior p(?, ? 2 ) ? N (?|0, ? 2 ? 108 I)IG(? 2 |1, 1), though note that the mean predictions do
not depend on the parameters of the inverse-gamma prior on ? 2 . The covariance function of each
? a squared exponential kernel
sGP model is a sum of an inhomogeneous linear kernel on (x, sgn(q)),
on x, and an independent noise component [7, ?4.2], with the first two using the automatic relevance
determination parameterization [7, ?5.1]. The hyperparameters of sGP are initialized by giving equal
weightings among the covariates and among the components of the covariance function, and then
learnt by optimizing the marginal likelihood independently for each context and each joint.
The trained LR and sGP models are used to predict torques for the interpm and extrapm data sets. For
each test set, the normalized mean square error (nMSE) of the predictions are computed, by dividing
the MSE by the variance of the test data. The nMSEs are then averaged over the 15 contexts for
the interpm and extrapm tests. Table 2 shows how the averages for joint 3 vary with the number
of training samples. Similar relative results are obtained for the other joints. The results show that
sGP outperforms LR for both the test cases. As one would expect, the errors of LR level-off early
at around 200 training samples, while the quality of predictions by sGP continues to improve with
training sample size, especially so for the interpm sets. Both sGP and LR do reasonably well on the
interpm sets, but not so well on the extrapm sets. This suggests that learning from multiple contexts
which have training data from different parts of the trajectory space will be advantageous.
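For reference, the evaluation metric used throughout is straightforward to compute; below is our own transcription, with synthetic arrays standing in for predictions and test targets.

import numpy as np

def nmse(pred, target):
    # Normalized mean squared error: the MSE divided by the variance of the
    # test targets, as reported in Table 2 and Figure 4.
    return np.mean((pred - target) ** 2) / np.var(target)

rng = np.random.default_rng(0)
target = rng.standard_normal(1000)
pred = target + 0.1 * rng.standard_normal(1000)
print(nmse(pred, target))    # about 0.01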
Results for multi-task GP We now investigate the merit of using MTL, using the training data
tabulated in Table 1 for loads c1 , . . . , c14 . We use n to denote the number of observed torques for
each joint totalled across the 14 contexts. Note that trajectory (p4 , s4 ) is entirely unobserved during
learning, but is included in the extrapm sets. We learn the hyperparameters of a multi-task GP model
(mGP) for each joint by optimizing the marginal likelihood for all training data (accumulated across
contexts) for that joint, as discussed in §3, using the same kernel and parameterization as for the
sGP. This is done for ranks 2, 4, 5, 6, 8 and 10. Finally, a common rank r for all the joints is chosen
using the selection criterion given in ?4. We denote the selected set of mGP models by mGP-BIC.
In addition to comparing with sGP, we also compare mGP-BIC with two other na??ve schemes: (a)
denoted by iGP, a collection of independent GPs for the contexts, but sharing kernel parameters of
$k_j^x$ among the contexts; and (b) denoted by pGP, a single GP for each joint that learns by pooling
all training data from all the contexts. The iGP and pGP models can be seen as restrictions of the
multi-task GP model, restricting $K_j^\rho$ to the identity matrix $I$ and the matrix of ones $\mathbf{1}\mathbf{1}^T$ respectively.
As discussed in ?3, the hyperparameters for the mGPs are initialized to either those of pGP or those
of iGP during optimization, choosing the one with the higher marginal likelihood. For our data,
we find that the choice is mostly iGP; pGP is only chosen for the case of joint 1 and n < 532. In
addition, the chosen ranks based on the BIC are r = 4 for all cases of n, except for n = 476 and
n = 1820 when r = 5 is selected instead.
Figure 4 gives results of sGP, iGP, pGP and mGP-BIC for both the interpm and extrapm test sets,
and for joints 1 and 4. Plots for the other joints are omitted due to space constraints, but they are
qualitatively similar to the plots for joint 4. The plots are the average nMSEs over the 14 contexts
against n. The vertical scales of the plots indicate that extrapolation is at least an order of magnitude
harder than interpolation. Since the training data are subsets selected independently for the different
values of n, the plots reflect the underlying variability in sampling. Nevertheless, we can see that
mGP-BIC performs favorably in almost all the cases, and especially so for the extrapolation task.
For joint 1, we see a close match between the predictive performances of mGP-BIC and pGP, with
mGP-BIC slightly better than pGP for the interpolation task. This is due to the limited variation
among observed torques for this joint across the different contexts for the range of end-effector
Figure 4: Average nMSEs of sGP, iGP, pGP and mGP-BIC against n (on log2 scale), for (a) joint 1, interpm tests; (b) joint 1, extrapm tests; (c) joint 4, interpm tests; (d) joint 4, extrapm tests. Ticks on the x-axes mark the specified values of n (280, 532, 896, 1820). The vertical scales of the plots vary; a value above the upper limit of its vertical range is plotted with a nominal value near the top instead.
movements investigated here. Therefore it is not surprising that pGP produces good predictions
for joint 1. For the other joints, iGP is usually the next best after mGP-BIC. In particular, iGP is
better than sGP, showing that (in this case) combining all the data to estimate the parameters of a
single common covariance function is better than separating the data to estimate the parameters of
14 covariance functions.
7 Summary
We have shown how the structure of the multiple-context inverse dynamics problem maps onto a
multi-task GP prior as given in eq. 6, how the corresponding marginal likelihood can be optimized
effectively, and how the rank of the $K_j^\rho$s can be chosen. We have demonstrated experimentally
that the results of the multi-task GP method (mGP) are generally superior to sGP, iGP and pGP.
Therefore it is advantageous to learn inverse dynamics models jointly using mGP-BIC, especially
when each context/task explores different portions of the data space, a common case in dynamics
learning. In future work we would like to investigate if coupling learning over joints is beneficial.
Acknowledgments
We thank Sam Roweis for suggesting pGP as a baseline. This work is supported in part by the EU
PASCAL2 ICT Programme, and in part by the EU FP6 SENSOPAC project grant to SV and SK.
KMAC would also like to thank DSO NL for financial support.
References
[1] R. Caruana. Multitask Learning. Machine Learning, 28(1), July 1997.
[2] S. Thrun and L. Pratt, editors. Learning to Learn. Kluwer Academic Publishers, 1998.
[3] E. Bonilla, K. M. A. Chai, and C. K. I. Williams. Multi-task Gaussian Process Prediction. NIPS 20, 2008.
[4] L. Sciavicco and B. Siciliano. Modelling and Control of Robot Manipulators. Springer, 2000.
[5] G. Petkos and S. Vijayakumar. Load estimation and control using learned dynamics models. IROS, 2007.
[6] T. P. Minka and R. W. Picard. Learning How to Learn is Learning with Point Sets, 1997. URL http://research.microsoft.com/~minka/papers/point-sets.html. Revised 1999.
[7] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[8] S. Vijayakumar and S. Schaal. LWPR: An O(n) Algorithm for Incremental Real Time Learning in High Dimensional Space. ICML 2000, 2000.
[9] D. Nguyen-Tuong, J. Peters, and M. Seeger. Computed torque control with nonparametric regression models. ACC 2008, 2008.
[10] M. Kemal Çılız and K. S. Narendra. Adaptive control of robotic manipulators using multiple models and switching. Int. J. Rob. Res., 15(6):592-610, 1996.
[11] M. Haruno, D. M. Wolpert, and M. Kawato. MOSAIC Model for Sensorimotor Learning and Control. Neural Comp., 13(10):2201-2220, 2001.
[12] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. 10th AISTATS, 2005.
[13] P. I. Corke. A robotics toolbox for MATLAB. IEEE Rob. and Auto. Magazine, 3(1):24-32, 1996.
[14] E. Burdet and A. Codourey. Evaluation of parametric and nonparametric nonlinear adaptive controllers. Robotica, 16(1):59-73, 1998.
Optimization on a Budget: A Reinforcement
Learning Approach
Ian Fasel
Department of Computer Sciences
University of Texas at Austin
[email protected]
Paul Ruvolo
Department of Computer Science
University of California San Diego
La Jolla, CA 92093
[email protected]
Javier Movellan
Machine Perception Laboratory
University of California San Diego
[email protected]
Abstract
Many popular optimization algorithms, like the Levenberg-Marquardt algorithm
(LMA), use heuristic-based "controllers" that modulate the behavior of the optimizer during the optimization process. For example, in the LMA a damping parameter $\lambda$ is dynamically modified based on a set of rules that were developed
using heuristic arguments. Reinforcement learning (RL) is a machine learning
approach to learn optimal controllers from examples and thus is an obvious candidate to improve the heuristic-based controllers implicit in the most popular and
heavily used optimization algorithms.
Improving the performance of off-the-shelf optimizers is particularly important
for time-constrained optimization problems. For example the LMA algorithm has
become popular for many real-time computer vision problems, including object
tracking from video, where only a small amount of time can be allocated to the
optimizer on each incoming video frame.
Here we show that a popular modern reinforcement learning technique using a
very simple state space can dramatically improve the performance of general purpose optimizers, like the LMA. Surprisingly the controllers learned for a particular
domain also work well in very different optimization domains. For example we
used RL methods to train a new controller for the damping parameter of the LMA.
This controller was trained on a collection of classic, relatively small, non-linear
regression problems. The modified LMA performed better than the standard LMA
on these problems. This controller also dramatically outperformed the standard
LMA on a difficult computer vision problem for which it had not been trained.
Thus the controller appeared to have extracted control rules that were not just
domain specific but generalized across a range of optimization domains.
1 Introduction
Most popular optimization algorithms, like the Levenberg-Marquardt algorithm (LMA) use simple
"controllers" that modulate the behavior of the optimization algorithm based on the state of the optimization process. For example, in the LMA a damping factor $\lambda$ modifies the descent step to
behave more like Gradient Descent or more like the Gauss-Newton optimization algorithm [1, 2].
The LMA uses the following heuristic for controlling $\lambda$: if an iteration of the LMA with the current damping factor $\lambda_t$ reduces the error, then the new parameters produced by the LMA iteration are accepted and the damping factor is divided by a constant term $\eta > 0$, i.e., $\lambda_{t+1} = \lambda_t / \eta$. Otherwise, if the error is not reduced, the new parameters are not accepted, the damping factor is multiplied by $\eta$, and the LMA iteration is repeated with the new damping parameter. While various heuristic
arguments have been used to justify this particular way of controlling the damping factor, it is not
clear whether this "controller" is optimal or whether it can be significantly improved.
Improving the performance of off-the-shelf optimizers is particularly important for time-constrained
optimization problems. For example the LMA algorithm has become popular for many real-time
computer vision problems, including object tracking from video, where only a small amount of time
can be allocated to the optimizer on each incoming video frame. Time constrained optimization
is in fact becoming an increasingly important problem in applications such as operations research,
robotics, and machine perception. In these problems the focus is on achieving the best possible
solution in a fixed amount of time. Given the special properties of time constrained optimization
problems it is likely that the heuristic-based controllers used in off-the-shelf optimizers may not be
particularly efficient. Additionally, standard techniques for non-linear optimization like the LMA
do not address issues such as when to stop a fruitless local search or when to revisit a previously
visited part of the parameter space.
Reinforcement learning (RL) is a machine learning approach to learn optimal controllers by examples and thus is an obvious candidate to improve the heuristic-based controllers used in the most
popular and heavily used optimization algorithms. An advantage of RL methods over other approaches to optimal control is that they do not require prior knowledge of the underlying system
dynamics and the system designer is free to choose reward metrics that best match the desiderata for
controller performance. For example, in the case of optimization under time constraints a suitable
reward could be to achieve the minimum loss within a fixed amount of time.
2 Related Work
The idea of using RL in optimization problems is not new [3, 4, 5, 6, 7]. However, previous
approaches have focused on using RL methods to develop problem-specific optimizers for NP-complete problems. Here our focus is on using RL methods to modify the controllers implicit in
the most popular and heavily used optimization algorithms. In particular our goal is to make these
algorithms more efficient for optimization on time budget problems. As we will soon show, a simple RL approach can result in dramatic improvements in performance of these popular optimization
packages.
There has also been some work on empirical evaluations of the LMA algorithm versus other nonlinear optimization methods in the computer vision community. In [8], the LMA and Powell's dog-leg
method are compared on the problem of bundle adjustment. The approach outlined in this document
could in principle learn to combine these two methods to perform efficient optimization.
3 The Levenberg Marquardt Algorithm
Consider the problem of optimizing a loss function $f : \mathbb{R}^n \to \mathbb{R}$ over the space $\mathbb{R}^n$. There are
many approaches to this problem, including zeroth-order methods (such as the Metropolis-Hastings
algorithm), first order approaches, such as gradient descent and the Gauss-Newton method, and
second order approaches such as the Newton-Raphson algorithm.
Each of these algorithms have advantages and disadvantages. For example, on each iteration of gradient descent, parameters are changed in the opposite direction of the gradient of the loss function,
e.g.,
$$x_{k+1} = x_k - \epsilon \nabla_x f(x_k) \qquad (1)$$
Steepest Descent has convergence guarantees provided the value of $\epsilon$ is reduced over the course of
the optimization and in general is robust, but quite slow.
The Gauss-Newton method is a technique for minimizing sums of squares of non-linear functions.
Let $g$ be a function from $\mathbb{R}^n$ to $\mathbb{R}^m$ with a corresponding loss function $L(x) = g(x)^T g(x)$. The
if $f(x_k)^T f(x_k) > f(x_{k-1})^T f(x_{k-1})$ then
  $x_k \leftarrow x_{k-1}$
  $\lambda \leftarrow \eta \cdot \lambda$
else
  $\lambda \leftarrow \eta^{-1} \cdot \lambda$
end if
Figure 1: A heuristic algorithm for updating lambda during Levenberg-Marquardt non-linear least
squares optimization.
algorithm works by first linearizing the function g using its first order Taylor expansion. The sum of
squares loss function, L, then becomes a quadratic function that can be analytically minimized. Let
$H = J(x_k)^T J(x_k)$ and $d = J(x_k)^T g(x_k)$, where $J$ is the Jacobian of $g$ with respect to $x$. Each iteration of the Gauss-Newton method is of the following form:
$$x_{k+1} = x_k - H^{-1} d \qquad (2)$$
The Gauss-Newton method has a much faster convergence rate than gradient descent, however, it is
not as robust as gradient descent. It can actually perform very poorly when the linear approximation
to g is not accurate.
Levenberg-Marquardt [1] is a popular optimization algorithm that attempts to blend gradient descent and Gauss-Newton in order to obtain both the fast convergence rate of Gauss-Newton and the
convergence guarantees of gradient descent. The algorithm has the following update rule:
$$x_{k+1} = x_k - (H + \lambda\, \mathrm{diag}(H))^{-1} d \qquad (3)$$
This update rule is also known as damped Gauss-Newton because the $\lambda$ parameter serves to dampen the Gauss-Newton step by blending it with the gradient descent step. Marquardt proposed a heuristic-based control law to dynamically modify $\lambda$ during the optimization process (see Figure 1). This
control has become part of most LMA packages.
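Putting eq. 3 and the Figure 1 heuristic together, a minimal self-contained LMA looks like the sketch below. This is our own rendering, not a particular library's implementation; production code adds safeguards and stopping tests.

import numpy as np

def lma(g, jac, x, lam=1e-3, eta=10.0, iters=50):
    # Damped Gauss-Newton step of eq. 3 with the Figure 1 rule for lambda.
    loss = lambda x: g(x) @ g(x)
    for _ in range(iters):
        J = jac(x)
        H, d = J.T @ J, J.T @ g(x)
        x_new = x - np.linalg.solve(H + lam * np.diag(np.diag(H)), d)
        if loss(x_new) < loss(x):
            x, lam = x_new, lam / eta   # accept; act more like Gauss-Newton
        else:
            lam = lam * eta             # reject; act more like gradient descent
    return x

# Example: fit a, b in the model y = a * exp(b * t) by non-linear least squares.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
g = lambda p: p[0] * np.exp(p[1] * t) - y            # residuals
jac = lambda p: np.stack([np.exp(p[1] * t),
                          p[0] * t * np.exp(p[1] * t)], axis=1)
print(lma(g, jac, np.array([1.0, 0.0])))             # approximately [2.0, -1.5]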
The LMA algorithm has recently become a very popular approach to solve real-time problems in
computer vision [9, 10, 11], such as object tracking and feature tracking in video. Due to the special
nature of this problem it is unclear whether the heuristic-based controller embedded in the algorithm
is optimal or could be significantly improved upon.
In the remainder of this document we explore whether reinforcement learning methods can help
improve the performance of LMA by developing an empirically learned controller of the damping
factor rather than the commonly used heuristic controller.
4 Learning Control Policies for Optimization Algorithms
An optimizer is an algorithm that uses some statistics about the current progress of the optimization
in order to produce a next iterate to evaluate. It is natural to frame optimization in the language
of control theory by thinking of the statistics of the optimization progress used by the controller
to choose the next iterate as the control state and the next iterate to visit as the control action. In
this work we choose to restrict our state space to a few statistics that capture both the current time
constraints and the recent progress of the optimization procedure. The action space is restricted by
making the observation that current methods for non-linear optimization provide good suggestions
for the next point to visit. In this way our action space encodes which one of a fixed set of optimization subroutines (see Section 3) to use for the next iteration, along with actions that control
various heuristic parameters for each optimization subroutine (for instance, schedules for updating $\epsilon$ in gradient descent and heuristics for modifying the value of $\lambda$ in the LMA).
In order to define the optimality of a controller we define a reward function that indicates the desirability of the solution found during optimization. In the context of optimization with semi-rigid
time constraints an appropriate reward function balances reduction in loss of the objective function
with the number of steps needed to achieve that reduction. In the case of optimization with a fixed
budget, a more natural choice might be the overall reduction in the loss function within the alloted
budget of function evaluations. For specific applications, in a similar spirit to the work of Boyan [6],
Initialize a policy $\pi_0$ that explores randomly
$S \leftarrow \{\}$
for i = 1 to n do
  Generate a random optimization problem U
  Optimize U for T time steps using policy $\pi_0$ and collect the samples $V = \{(s, a, r, s')\}$
  $S \leftarrow S \cup V$
end for
repeat
  Construct the approximate action-value function $Q^{\pi_t}$ using the samples S
  Set $\pi_{t+1}$ to be the one-step policy improvement of $\pi_t$ using $Q^{\pi_t}$
  $t \leftarrow t + 1$
until $Q^{\pi_{t-1}} \approx Q^{\pi_t}$
return $\pi_t$
Figure 2: Our algorithm for learning controllers for optimization on a budget. The construction
of the approximate action-value function and the policy improvement step are performed using the
techniques outlined in [12].
the reward function could be modified to include features of intermediate solutions that are likely to
indicate the desirability of the current point.
Given a state space, action space, and reward function for a given optimization problem, reinforcement learning methods provide an appropriate set of techniques for learning an optimal optimization
controller. While there are many reinforcement learning algorithms that are appropriate for our problem formulation, in this work we employ Least-Squares Policy Iteration (LSPI) [12]. Least Squares
Policy Iteration is particularly attractive since it handles continuous state spaces, is efficient in terms
of the number of interactions with the system needed to learn a good controller, does not need an
underlying model of the process dynamics, and learns models that are amenable to interpretation.
LSPI is an iterative procedure that repeatedly applies the following two steps until convergence:
approximating the action-value function as a linear combination of a fixed set of basis functions
and then improving the current policy greedily over the approximate value function. The bases are
functions of the state and action and can be non-linear. The method is efficient in terms of the
number of interactions required with the dynamical system and can reuse the same set of samples
to evaluate multiple policies, which is a crucial difference between LSPI and earlier methods like
LSTD. The output of the LSPI procedure is a weight vector that defines the action-value function of
the optimal policy as a linear combination of the basis vectors.
Our method for learning an optimization controller consists of two phases. In the first phase samples
are collected through interactions between a random optimization controller and an optimization
problem in a series of fixed-length optimization episodes. These samples are tuples of the form (s, a, r, s′), where s′ denotes the state arrived at when action a was executed starting from state s and reward r was received. The second phase of our algorithm applies LSPI to learn an action-value function and, implicitly, an optimal policy (which is given by the greedy maximization of the action-value function over actions for a given state). A sketch of our algorithm is given in Figure 2.
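For concreteness, the following is a minimal sketch of LSTDQ-based LSPI in the spirit of [12]. The feature map phi, the discount factor, the tolerance, and the small ridge term are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def lstdq(samples, phi, policy, n_feat, gamma=0.95):
    # Policy evaluation (LSTDQ): solve A w = b so that Q(s, a) = w . phi(s, a)
    # is the fixed point of the projected Bellman equation under `policy`.
    A = np.zeros((n_feat, n_feat))
    b = np.zeros(n_feat)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A + 1e-6 * np.eye(n_feat), b)  # small ridge for stability

def lspi(samples, phi, n_feat, actions, gamma=0.95, max_iter=20, tol=1e-4):
    # Alternate evaluation and greedy improvement until the weight vector
    # (and hence the action-value function) stops changing.
    w = np.zeros(n_feat)
    greedy = lambda s: max(actions, key=lambda a: phi(s, a) @ w)
    for _ in range(max_iter):
        w_new = lstdq(samples, phi, greedy, n_feat, gamma)
        done = np.linalg.norm(w_new - w) < tol
        w = w_new
        if done:
            break
    return w, greedy
```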
5 Experiments
We demonstrate the ability of our method both to achieve superior performance to off-the-shelf non-linear optimization techniques and to provide insight into the specific policies and action-value functions learned.
5.1 Optimizing Nonlinear Least-Squares Functions with a Fixed Budget
Both the classical non-linear problems and the facial expression recognition task were formulated in
terms of optimization given a fixed budget of function evaluations. This criterion suggests a natural
reward function where L is a loss function we are trying to minimize, B is the budget of function
evaluations, I is the indicator function, x0 is the initial point visited in the optimization, and xopt is
the point with the lowest loss visited in the current optimization episode:
r_k = I(k < B) · I(L(x_k) < L(x_opt)) · (L(x_opt) − L(x_k)) · (1 / L(x_0)).    (4)
This reward function encourages controllers that achieve large reductions in loss within the fixed
budget of function evaluations.
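Eq. (4) is straightforward to transcribe; the sketch below assumes the caller supplies the precomputed loss values.

```python
def reward(k, B, loss_k, loss_opt, loss_0):
    # Eq. (4): loss reduction relative to the starting loss, granted only
    # while budget remains and the new point beats the episode's best.
    if k < B and loss_k < loss_opt:
        return (loss_opt - loss_k) / loss_0
    return 0.0
```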
Each optimization problem takes the form of minimizing the sum of squares of non-linear functions and is thus well-suited to Levenberg-Marquardt style optimization. The action space we consider in our experiments consists of adjustments to the damping factor used in the LMA (maintain, decrease by a multiplicative factor, or increase by a multiplicative factor), the decision of whether or not to throw away the last descent step, along with two actions that are not available to the LMA. These additional actions include moving to a new random point in the domain of the objective function and also returning to the best point found so far and performing one descent step using the LMA (using the current damping factor). The number of actions available at each step is 8 (6 for the various combinations of adjustments to λ and whether to keep the previous iterate, along with the 2 additional actions just described).
The state space used to make the action decision includes a fixed-length window of history that
encodes whether a particular step in the past increased or decreased the residual error from the
previous iterate. This window is set to size 2 for most of our experiments; however, we did evaluate the relative improvement of using a window size of 1 versus 2 (see Figure 4). Also included in the state space are the number of function evaluations left in our budget and a problem-specific state
feature described in Section 5.3.
The state and action space are mapped through a collection of fixed basis functions which the LSPI
algorithm combines linearly to approximate the optimal action-value function. For most applications of LSPI these functions consist of radial-basis functions distributed throughout the continuous
state and action space. The basis we use in our problem treats each action independently and thus
constructs a tuple of basis functions for each action. To encode the number of evaluations left in
the optimization episode, we use a collection of radial-basis functions centered at different values
of budget remaining (specifically, we use basis functions spaced at 4-step intervals with a bandwidth of 0.3). The history window of whether the loss went up or down during recent iterations of
the algorithm is represented as a d-dimensional binary vector where d is the length of history window considered. For the facial expression recognition task the tuple includes an additional basis
described in Section 5.3.
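One possible realization of this per-action basis is sketched below. The 4-step center spacing and 0.3 bandwidth follow the text; the bias term, the exact kernel scaling, and the budget range are assumptions.

```python
import numpy as np

def basis(budget_left, history, action, n_actions=8,
          centers=np.arange(0.0, 61.0, 4.0), bandwidth=0.3):
    # Per-action feature block: a bias term, RBFs over the remaining budget,
    # and the binary error-history window; other actions' blocks stay zero.
    rbf = np.exp(-(budget_left - centers) ** 2 / (2.0 * bandwidth ** 2))
    block = np.concatenate(([1.0], rbf, np.asarray(history, dtype=float)))
    feats = np.zeros(n_actions * block.size)
    feats[action * block.size:(action + 1) * block.size] = block
    return feats
```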
5.2 Classical Nonlinear Least Squares Problems
In order to validate our approach we apply it to a dataset of classical non-linear optimization problems [13]. This dataset includes famous optimization problems that cover a wide variety
function. When restricted to a budget of 5 function evaluations, our method is able to learn a policy
which results in a 6% gain in performance (measured in total reduction in loss from the starting
point) when compared to the LMA.
5.3 Learning to Classify Facial Expressions
The box-filter features that proved successful for face detection in [14] have also shown promise for recognizing facial expressions when combined using boosting methods. The response of a box-filter to an image patch is obtained by weighting the sum of the pixel brightnesses in various boxes by a coefficient defined by the particular box-filter kernel. In our work we frame the problem of feature selection as an optimization procedure over a continuous parameter space. The parameter space defines an infinite set of box-filters that includes many of those proposed in [14] as special cases (see Figure 3). Each feature can be described as a vector in [0, 1]^6, where the 6 dimensions of the vector are depicted in Figure 3.
We learn a detector for the presence or absence of a smile using the pixel intensities of an image patch containing a face. We accomplish this by employing the sequential regression procedure L2-boost [15]. L2-boost creates a strong classifier by iteratively fitting the residuals of the current model
[Figure 3 image: schematic of the parameterized box filter.]
Figure 3: A parameterized feature space. The position of the cross-hairs in the middle of the box filter can float freely. This added generality allows the features proposed in [14] to be generated as special cases. A complete description of a feature is composed of 6 parameters: horizontal offset, vertical crossbar, vertical offset, filter height, horizontal crossbar, and filter width. The weighting coefficients for the four boxes (depicted in a checkerboard pattern) are determined by linear regression between the filter outputs of each box and the labels of the training set.
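To illustrate how such a parameterized filter might be evaluated, the sketch below computes a four-box response with an integral image. The mapping from the unit cube to pixel coordinates and the fixed checkerboard signs are assumptions; as the caption notes, the actual box weights are fit by regression.

```python
import numpy as np

def box_sum(ii, r0, c0, r1, c1):
    # Sum of patch[r0:r1, c0:c1] from the (padded) integral image ii.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def filter_response(patch, p):
    # p = (h_off, v_cross, v_off, height, h_cross, width) in [0, 1]^6.
    H, W = patch.shape
    ii = np.pad(patch, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r0, c0 = int(p[2] * H), int(p[0] * W)
    r1 = min(H, r0 + max(1, int(p[3] * H)))
    c1 = min(W, c0 + max(1, int(p[5] * W)))
    rc = r0 + max(1, int(p[1] * (r1 - r0)))   # vertical crossbar row
    cc = c0 + max(1, int(p[4] * (c1 - c0)))   # horizontal crossbar column
    quads = np.array([box_sum(ii, r0, c0, rc, cc), box_sum(ii, r0, cc, rc, c1),
                      box_sum(ii, rc, c0, r1, cc), box_sum(ii, rc, cc, r1, c1)])
    signs = np.array([+1.0, -1.0, -1.0, +1.0])  # checkerboard; real weights are fit
    return float(signs @ quads)
```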
over a collection of weak learners (in this case our parameterized features). The L2-boost procedure selects the box-filter at each iteration that most reduces the difference between the current predictions of the model and the correct image labels. Once a sufficiently good feature is found, this feature is added to the current ensemble. L2-boost learns a linear model for predicting the label of the image patch, since each weak learner (box-filter) is a linear filter on the pixel values and L2-boost combines weak learners in a linear fashion. The basis space for LSPI is augmented for this task by including a basis that specifies the number of features already selected by the L2-boost procedure.
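A schematic of the resulting loop is given below, reusing filter_response from the previous sketch. The propose callback, which yields candidate parameter vectors under a fixed evaluation budget, is exactly where a budgeted feature search (such as the learned controller) plugs in; all names are illustrative.

```python
import numpy as np

def l2_boost(patches, labels, propose, rounds=3, budget=20):
    # L2-boosting: each round fits a 1-D linear regression of the current
    # residual on each candidate filter's responses and keeps the best one.
    residual = np.asarray(labels, dtype=float).copy()
    ensemble = []
    for _ in range(rounds):
        best = None
        for params in propose(budget):
            f = np.array([filter_response(x, params) for x in patches])
            A = np.column_stack([f, np.ones_like(f)])
            coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
            fit = A @ coef
            sse = float(np.sum((residual - fit) ** 2))
            if best is None or sse < best[0]:
                best = (sse, params, coef, fit)
        _, params, coef, fit = best
        ensemble.append((params, coef))   # filter parameters + (slope, bias)
        residual -= fit                   # fit the residuals next round
    return ensemble
```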
We test our algorithm on the task of smile detection using a subset of 1,000 images from the GENKI dataset (a collection of 60,000 faces from the web). Along with information about the location of faces and facial features, human labelers have labeled each image as containing or not containing a smile. In this experiment our goal is to predict the human smile labels using the L2-boost procedure outlined above.
During each trial, 3 box filters are selected using the L2-boosting procedure. Within each round of feature selection, a total of 20 feature evaluations are allowed. We use the default version of the LMA as a mode of comparison. After collecting samples from 100 episodes of optimization on the GENKI dataset, LSPI is able to learn a policy that achieves a 2.66-fold greater reduction in total loss than the LMA on a test set of faces from the GENKI dataset (see Figure 4). Since the LMA does not have access to the ability to move to a new random part of the state space, a fairer comparison is to our method without access to this action. In this experiment our method is
still able to achieve a 20% greater reduction in total loss than the LMA.
Figure 4 shows that the policies learned using our method not only achieve a greater reduction in loss on the training set, but that this reduction in loss translates to a significant gain in performance for classification on a validation set of test images. Our method achieves between .036 and .083 better classification performance (as measured by area under the ROC curve) depending on the optimization budget. Note that given the relatively high baseline performance of the LMA on the smile detection task, an improvement of .083 in terms of area under the ROC translates to almost halving the error rate. Also of significance is that the information encoded in the state space does make a difference in the performance of the algorithm: learning a policy that uses a history window of error changes on the last two time steps achieves a 16% greater reduction in total loss than a policy learned with a history window of size 1.
Also of interest is the nature of the policies learned for smile detection on a fixed budget. The
policies learned exhibit the following general trend: during the early stages of selecting a specific
feature the learned policies either sample a new point in the feature space (if the error has increased
from the last iteration) or do a Levenberg-Marquardt step on the best point visited up until now (if
the error has gone down at the last iteration). This initial strategy makes sense since if the current
point does not look promising (error has increased) it is wise to try a different part of the state space,
[Figure 4, top panel: plot of area under the ROC (roughly 0.8 to 0.9) versus optimization budget per feature selection (0 to 60), comparing the learned method against the default LMA.]

Controller Type                               Average Reduction in Loss Relative to the LMA
Learned (history window = 1)                  2.3
Learned (history window = 2)                  2.66
Learned (no random restarts)                  1.2
Learned on Classical (no random restarts)    1.19
Default LMA                                   1.0

Figure 4: Top: The performance on detecting smile versus not smile is substantially better when using an optimization controller learned with our algorithm than when using the default LMA. In each run 3 features are selected by the L2-boost procedure. The number of feature evaluations per feature (the budget) varies along the x-axis. Bottom: This table describes the relative improvement in total loss reduction for policies learned using our method.
however, if the error is decreasing it is best to continue to apply local optimization methods. Later in the optimization, the policy always performs a Levenberg-Marquardt step on the current best point, no matter what the change in error was. This strategy makes sense since, once a few different parts of the state space have been investigated, the utility of sampling a new part of the state space is reduced.
Several trends can be seen by examining the basis weights learned by LSPI. The first is that the learned policy favors discarding the last iterate over keeping it (similar to the LMA). The second is that the policy favors increasing the damping parameter when the error has increased on the last iteration and decreasing the damping factor when the error has decreased (also similar to the LMA).
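That second trend is essentially the classic LMA damping schedule, for example (the factor of 10 is a conventional choice, not one reported here):

```python
def update_damping(lam, error_increased, factor=10.0):
    # Raise lambda (more gradient-descent-like) after a failed step,
    # lower it (more Gauss-Newton-like) after a successful one.
    return lam * factor if error_increased else lam / factor
```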
5.4 Cross Generalization
A property of choosing a general state space for our method is that the policies learned on one class
of optimization problem are applicable to other classes of optimization. The optimization controllers
learned in the classical least squares minimization task achieve a 19% improvement over the standard
LMA on the smile detection task. Applying the controllers learned on the smile detection task to
the classical least squares problem yields a more modest 5% improvement. These results support
the claim that our method is extracting useful structure for optimizing under a fixed budget and not
simply learning a controller that is amenable to a particular problem domain.
6 Conclusion
We have presented a novel approach to the problem of learning optimization procedures for optimization on a fixed budget. We have shown that our approach achieves better performance than
ubiquitous methods for non-linear least squares optimization on the task of optimizing within a
fixed budget of function evaluations for both classical non-linear functions and a difficult computer
vision task. We have also provided an analysis of the patterns learned by our method and how they
make sense in the context of optimization under a fixed budget. Additionally, we have presented
extensions to the features used in [14] that are significant in their own right.
In the future we will more fully explore the framework that we have outlined in this document.
The specific application of the framework in the current work (state, action, and bases), while quite effective, may be improved; for instance, by incorporating domain-specific features into the state space, richer policies might be learned. We also want to apply this technique to other
problems in machine perception. An upcoming project will test the viability of our technique for
finding feature point locations on a face that simultaneously exhibit high likelihood in terms of
appearance and high likelihood in terms of the relative arrangement of facial features. The real-time
constraints of this problem make it a particularly appropriate target for the methods presented in this
document.
References
[1] K. Levenberg, "A method for the solution of certain problems in least squares," Applied Math Quarterly, 1944.
[2] D. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM Journal of Applied Mathematics, 1963.
[3] V. V. Miagkikh and W. F. P. III, "Global search in combinatorial optimization using reinforcement learning algorithms," in Proceedings of the Congress on Evolutionary Computation, vol. 1. IEEE Press, 6-9 1999, pp. 189-196.
[4] Y. Zhang, "Solving large-scale linear programs by interior-point methods under the MATLAB environment," Optimization Methods and Software, vol. 10, pp. 1-31, 1998.
[5] L. M. Gambardella and M. Dorigo, "Ant-Q: A reinforcement learning approach to the traveling salesman problem," in International Conference on Machine Learning, 1995, pp. 252-260.
[6] J. A. Boyan and A. W. Moore, "Learning evaluation functions for global optimization and boolean satisfiability," in AAAI/IAAI, 1998, pp. 3-10.
[7] R. Moll, T. J. Perkins, and A. G. Barto, "Machine learning for subproblem selection," in ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2000, pp. 615-622.
[8] M. I. Lourakis and A. A. Argyros, "Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?" Proceedings of ICCV, 2005.
[9] D. Cristinacce and T. F. Cootes, "Feature detection and tracking with constrained local models," BMVC, pp. 929-938, 2006.
[10] M. Pollefeys, L. V. Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, and R. Koch, "Visual modeling with a hand-held camera," IJCV, vol. 59, no. 3, pp. 207-232, 2004.
[11] P. Beardsley, P. Torr, and A. Zisserman, "3D model acquisition from extended image sequences," Proceedings of ECCV, pp. 683-695, 1996.
[12] M. Lagoudakis and R. Parr, "Least-squares policy iteration," Journal of Machine Learning Research, 2003.
[13] H. B. Nielsen, "UCTP problems for unconstrained optimization," Technical Report, Technical University of Denmark, 2000.
[14] P. Viola and M. Jones, "Robust real-time object detection," International Journal of Computer Vision, 2002.
[15] P. Buhlmann and B. Yu, "Boosting with the L2 loss: Regression and classification," Journal of the American Statistical Association, 2003.
2,632 | 3,387 | Efficient Direct Density Ratio Estimation for
Non-stationarity Adaptation and Outlier Detection
Takafumi Kanamori
Nagoya University
Nagoya, Japan
[email protected]
Shohei Hido
IBM Research
Kanagawa, Japan
[email protected]
Masashi Sugiyama
Tokyo Institute of Technology
Tokyo, Japan
[email protected]
Abstract
We address the problem of estimating the ratio of two probability density functions
(a.k.a. the importance). The importance values can be used for various succeeding tasks such as non-stationarity adaptation or outlier detection. In this paper, we
propose a new importance estimation method that has a closed-form solution; the
leave-one-out cross-validation score can also be computed analytically. Therefore,
the proposed method is computationally very efficient and numerically stable. We
also elucidate theoretical properties of the proposed method such as the convergence rate and approximation error bound. Numerical experiments show that the
proposed method is comparable to the best existing method in accuracy, while it
is computationally more efficient than competing approaches.
1 Introduction
In the context of importance sampling, the ratio of two probability density functions is called the
importance. The problem of estimating the importance is gathering a lot of attention these days
since the importance can be used for various succeeding tasks, e.g.,
Covariate shift adaptation: Covariate shift is a situation in supervised learning where the distributions of inputs change between the training and test phases but the conditional distribution of
outputs given inputs remains unchanged [8]. Covariate shift is conceivable in many real-world
applications such as bioinformatics, brain-computer interfaces, robot control, spam filtering, and
econometrics. Under covariate shift, standard learning techniques such as maximum likelihood estimation or cross-validation are biased and therefore unreliable; the bias caused by covariate shift
can be compensated by weighting the training samples according to the importance [8, 5, 1, 9].
Outlier detection: The outlier detection task addressed here is to identify irregular samples in an
evaluation dataset based on a model dataset that only contains regular samples [7, 3]. The importance
values for regular samples are close to one, while those for outliers tend to be significantly deviated
from one. Thus the values of the importance could be used as an index of the degree of outlyingness.
Below, we refer to the two sets of samples as the training and test sets. A naive approach to estimating the importance is to first estimate the training and test densities from the sets of training and test
samples separately, and then take the ratio of the estimated densities. However, density estimation is
known to be a hard problem, particularly in high-dimensional cases, and an appropriate parametric model may not be available in practice; therefore this naive approach is not so effective.
To cope with this problem, we propose a direct importance estimation method that does not involve
density estimation. The proposed method, which we call least-squares importance fitting (LSIF), is
formulated as a convex quadratic program and therefore the unique global solution can be obtained.
We give a cross-validation method for model selection and a regularization path tracking algorithm
for efficient computation [4].
This regularization path tracking algorithm turns out to be computationally very efficient since
the entire solution path can be traced without a quadratic program solver. However, it tends to share a
common weakness of path tracking algorithms, i.e., accumulation of numerical errors. To overcome
this drawback, we develop an approximation algorithm called unconstrained LSIF (uLSIF), which
allows us to obtain the closed-form solution that can be stably computed just by solving a system
of linear equations. Thus uLSIF is computationally efficient and numerically stable. Moreover,
the leave-one-out error of uLSIF can also be computed analytically, which further improves the
computational efficiency in model selection scenarios.
We experimentally show that the accuracy of uLSIF is comparable to the best existing method while
its computation is much faster than the others in covariate shift adaptation and outlier detection.
2 Direct Importance Estimation
Formulation and Notation: Let D (⊂ R^d) be the data domain and suppose we are given independent and identically distributed (i.i.d.) training samples {x_i^tr}_{i=1}^{n_tr} from a training distribution with density p_tr(x), and i.i.d. test samples {x_j^te}_{j=1}^{n_te} from a test distribution with density p_te(x). We assume p_tr(x) > 0 for all x ∈ D. The goal of this paper is to estimate the importance

w(x) = p_te(x) / p_tr(x)

from {x_i^tr}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te}. Our key restriction is that we want to avoid estimating the densities p_te(x) and p_tr(x) when estimating the importance w(x).
Least-squares Approach: Let us model the importance w(x) by the following linear model:

ŵ(x) = α^T φ(x),    (1)

where ^T denotes the transpose, α = (α_1, ..., α_b)^T is a parameter to be learned, b is the number of parameters, and φ(x) = (φ_1(x), ..., φ_b(x))^T are basis functions such that φ(x) ≥ 0_b for all x ∈ D; 0_b denotes the b-dimensional vector with all zeros, and the inequality for vectors is applied element-wise. Note that b and {φ_ℓ(x)}_{ℓ=1}^b could depend on the samples, i.e., kernel models are also allowed. We explain how the basis functions {φ_ℓ(x)}_{ℓ=1}^b are chosen later.
We determine the parameter α so that the following squared error is minimized:

J_0(α) = (1/2) ∫ ( ŵ(x) − p_te(x)/p_tr(x) )² p_tr(x) dx
       = (1/2) ∫ ŵ(x)² p_tr(x) dx − ∫ ŵ(x) p_te(x) dx + C,

where C = (1/2) ∫ w(x) p_te(x) dx is a constant and therefore can be safely ignored. Let

J(α) = J_0(α) − C = (1/2) α^T H α − h^T α,    (2)
where H = ∫ φ(x) φ(x)^T p_tr(x) dx and h = ∫ φ(x) p_te(x) dx. Using the empirical approximation and taking into account the non-negativity of the importance function w(x), we obtain

min_{α ∈ R^b} [ (1/2) α^T Ĥ α − ĥ^T α + λ 1_b^T α ]   s.t.  α ≥ 0_b,    (3)

where Ĥ = (1/n_tr) Σ_{i=1}^{n_tr} φ(x_i^tr) φ(x_i^tr)^T and ĥ = (1/n_te) Σ_{j=1}^{n_te} φ(x_j^te); λ 1_b^T α is a regularization term for avoiding overfitting, λ ≥ 0, and 1_b is the b-dimensional vector with all ones.
The above problem is a convex quadratic program and therefore the global optimal solution can be obtained by standard software. We call this method Least-Squares Importance Fitting (LSIF).
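As a minimal illustration, the sketch below builds the empirical quantities and solves (3) by reducing it to non-negative least squares through a Cholesky factorization. The Gaussian basis, the random choice of centers, and the small ridge eps are implementation assumptions rather than details fixed by the paper.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_basis(X, C, sigma):
    # phi_l(x) = exp(-||x - c_l||^2 / (2 sigma^2)) for each center c_l.
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lsif(X_tr, X_te, sigma, lam, n_basis=100, eps=1e-8):
    # LSIF (3): min (1/2) a'Ha - h'a + lam 1'a  subject to  a >= 0.
    rng = np.random.default_rng(0)
    C = X_te[rng.choice(len(X_te), min(n_basis, len(X_te)), replace=False)]
    Phi_tr = gaussian_basis(X_tr, C, sigma)          # n_tr x b
    Phi_te = gaussian_basis(X_te, C, sigma)          # n_te x b
    H = Phi_tr.T @ Phi_tr / len(X_tr)
    h = Phi_te.mean(axis=0)
    # With H = L L', the objective equals (1/2)||L'a - c||^2 + const,
    # where L c = h - lam * 1_b, so NNLS applies directly.
    L = np.linalg.cholesky(H + eps * np.eye(len(C)))
    c = np.linalg.solve(L, h - lam)
    alpha, _ = nnls(L.T, c)
    return alpha, C
```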
Convergence Analysis of LSIF: Here, we theoretically analyze the convergence property of the solution α̂ of the LSIF algorithm. Let α* be the optimal solution of the "ideal" problem:

min_{α ∈ R^b} [ (1/2) α^T H α − h^T α + λ 1_b^T α ]   s.t.  α ≥ 0_b.    (4)

Let f(n) = ω(g(n)) mean that f(n) asymptotically dominates g(n), i.e., for all C > 0, there exists n_0 such that |C g(n)| < |f(n)| for all n > n_0. Then we have the following theorem.

Theorem 1 Assume that (a) the optimal solution of the problem (4) satisfies the strict complementarity condition, and (b) n_tr and n_te satisfy n_te = ω(n_tr²). Then we have E[J(α̂)] = J(α*) + O(n_tr^{-1}), where E denotes the expectation over all possible training samples of size n_tr and all possible test samples of size n_te.
Theorem 1 guarantees that LSIF converges to the ideal solution with order n_tr^{-1}. It is possible to explicitly obtain the coefficient of the term of order n_tr^{-1}, but we omit the detail due to lack of space.
Model Selection for LSIF: The performance of LSIF depends on the choice of the regularization parameter λ and the basis functions {φ_ℓ(x)}_{ℓ=1}^b (which we refer to as a model). Since our objective is to minimize the cost function J, it is natural to determine the model such that J is minimized.

Here, we employ cross-validation for estimating J(α̂), which has an accuracy guarantee for finite samples: First, the training samples {x_i^tr}_{i=1}^{n_tr} and test samples {x_j^te}_{j=1}^{n_te} are divided into R disjoint subsets {X_r^tr}_{r=1}^R and {X_r^te}_{r=1}^R, respectively. Then an importance estimate ŵ_r(x) is obtained using {X_j^tr}_{j≠r} and {X_j^te}_{j≠r}, and the cost J is approximated using the held-out samples X_r^tr and X_r^te as

Ĵ_r^(CV) = (1 / (2|X_r^tr|)) Σ_{x^tr ∈ X_r^tr} ŵ_r(x^tr)² − (1 / |X_r^te|) Σ_{x^te ∈ X_r^te} ŵ_r(x^te).

This procedure is repeated for r = 1, ..., R and the average Ĵ^(CV) is used as an estimate of J. We can show that Ĵ^(CV) gives an almost unbiased estimate of the true cost J, where the "almost"-ness comes from the fact that the number of samples is reduced due to data splitting.
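For concreteness, the per-fold score amounts to the following, assuming w_hat is the importance estimate fitted on the remaining folds and evaluates vectorized (illustrative names):

```python
import numpy as np

def j_cv_fold(w_hat, X_tr_holdout, X_te_holdout):
    # First term: empirical p_tr-expectation of w^2 / 2 on held-out training
    # points; second term: empirical p_te-expectation of w on held-out test points.
    return (0.5 * np.mean(w_hat(X_tr_holdout) ** 2)
            - np.mean(w_hat(X_te_holdout)))
```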
Heuristics of Basis Function Design: A good model may be chosen by cross-validation, given that a family of promising model candidates is prepared. As model candidates, we propose using a Gaussian kernel model centered at the test input points {x_j^te}_{j=1}^{n_te}, i.e.,

ŵ(x) = Σ_{ℓ=1}^{n_te} α_ℓ K_σ(x, x_ℓ^te),  where  K_σ(x, x′) = exp( −‖x − x′‖² / (2σ²) ).    (5)
nte
The reason why we chose the test input points {xte
j }j=1 as the Gaussian centers, not the training
tr ntr
input points {xi }i=1 , is as follows. By definition, the importance w(x) tends to take large values
if the training input density ptr (x) is small and the test input density pte (x) is large; conversely,
w(x) tends to be small (i.e., close to zero) if ptr (x) is large and pte (x) is small. When a function
is approximated by a Gaussian kernel model, many kernels may be needed in the region where the
output of the target function is large; on the other hand, only a small number of kernels would be
enough in the region where the output of the target function is close to zero. Following this heuristic,
we allocate many kernels at high test input density regions, which can be achieved by setting the
nte
Gaussian centers at the test input points {xte
j }j=1 .
ntr
te nte
Alternatively, we may locate (ntr + nte ) Gaussian kernels at both {xtr
i }i=1 and {xj }j=1 . However, in our preliminary experiments, this did not further improve the performance, but just slightly
nte
increased the computational cost. When nte is large, just using all the test input points {xte
j }j=1 as
Gaussian centers is already computationally rather demanding. To ease this problem, we practically
nte
propose using a subset of {xte
j }j=1 as Gaussian centers for computational efficiency, i.e.,
w(x)
b
=
Pb
?=1
?? K? (x, c? ),
(6)
nte
where c? is a template point randomly chosen from {xte
j }j=1 and b (? nte ) is a prefixed number.
In the experiments shown later, we fix the number of template points at b = min(100, nte ), and
optimize the kernel width ? and the regularization parameter ? by cross-validation with grid search.
Entire Regularization Path for LSIF: We can show that the LSIF solution α̂ is piecewise linear with respect to the regularization parameter λ. Therefore, the regularization path (i.e., the solutions for all λ) can be computed efficiently based on the parametric optimization technique [4].
A basic idea of regularization path tracking is to check the violation of the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient conditions for optimality of convex programs, when the regularization parameter λ is changed. Although the detail of the algorithm is omitted due to lack of space, we can show that a quadratic programming solver is no longer needed for obtaining the entire solution path of LSIF; just computing matrix inverses is enough. This highly contributes to saving computation time. However, in our preliminary experiments, the regularization path tracking algorithm turned out to be numerically rather unreliable since numerical errors tend to be accumulated when tracking the regularization path. This seems to be a common pitfall of solution path tracking algorithms in general.
3 Approximation Algorithm
Unconstrained Least-squares Approach: The approximation idea we introduce here is very simple: we ignore the non-negativity constraint of the parameters in the optimization problem (3). Thus

min_{β ∈ R^b} [ (1/2) β^T Ĥ β − ĥ^T β + (λ/2) β^T β ].    (7)

In the above, we included a quadratic regularization term λ β^T β / 2 instead of the linear one λ 1_b^T β, since the linear penalty term does not work as a regularizer without the non-negativity constraint. Eq. (7) is an unconstrained convex quadratic program, so the solution can be computed analytically. However, since we dropped the non-negativity constraint β ≥ 0_b, some of the learned parameters could be negative. To compensate for this approximation error, we modify the solution by

β̂ = max(0_b, β̃),  β̃ = (Ĥ + λ I_b)^{-1} ĥ,    (8)

where I_b is the b-dimensional identity matrix and the "max" operation for vectors is applied element-wise. This is the solution of the approximation method we propose in this section.
An advantage of the above unconstrained formulation is that the solution can be computed just by solving a system of linear equations. Therefore, the computation is fast and stable. We call this method unconstrained LSIF (uLSIF). Due to the ℓ2 regularizer, the solution tends to be close to 0_b to some extent; thus, the effect of ignoring the non-negativity constraint may not be so strong. Below, we theoretically analyze the approximation error of uLSIF.
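A minimal sketch of Eqs. (7)-(8), assuming the design matrices over a chosen basis have already been computed:

```python
import numpy as np

def ulsif_fit(Phi_tr, Phi_te, lam):
    # Eq. (8): solve one ridge-regularized linear system, then clip at zero.
    b = Phi_tr.shape[1]
    H_hat = Phi_tr.T @ Phi_tr / len(Phi_tr)   # empirical H
    h_hat = Phi_te.mean(axis=0)               # empirical h
    beta_tilde = np.linalg.solve(H_hat + lam * np.eye(b), h_hat)
    return np.maximum(0.0, beta_tilde)
```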
Convergence Analysis of uLSIF: Here, we theoretically analyze the convergence property of the solution β̂ of the uLSIF algorithm. Let β* be the optimal solution of the "ideal" problem: β* = max(0_b, β̃*), where β̃* = argmin_{β ∈ R^b} [ (1/2) β^T H β − h^T β + (λ/2) β^T β ]. Then we have

Theorem 2 Assume that (a) β̃_ℓ* ≠ 0 for ℓ = 1, ..., b, and (b) n_tr and n_te satisfy n_te = ω(n_tr²). Then we have E[J(β̂)] = J(β*) + O(n_tr^{-1}).

Theorem 2 guarantees that uLSIF converges to the ideal solution with order n_tr^{-1}. It is possible to explicitly obtain the coefficient of the term of order n_tr^{-1}, but we omit the detail due to lack of space.
We can also derive upper bounds on the difference between LSIF and uLSIF and show that uLSIF
gives a good approximation to LSIF. However, we do not go into the detail due to space limitation.
Efficient Computation of the Leave-one-out Cross-validation Score: Another practically very important advantage of uLSIF is that the score of leave-one-out cross-validation (LOOCV) can also be computed analytically; thanks to this property, the computational complexity of performing LOOCV is of the same order as computing a single solution. In the current setting, we are given two sets of samples, {x_i^tr}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te}, which generally have different sample sizes. For simplicity, we assume that n_tr < n_te and that the i-th training sample x_i^tr and the i-th test sample x_i^te are held out at the same time; the test samples {x_j^te}_{j=n_tr+1}^{n_te} are always used for importance estimation.
Let β̂^(i) be the parameter learned without the i-th training sample x_i^tr and the i-th test sample x_i^te. Then the LOOCV score is expressed as

(1/n_tr) Σ_{i=1}^{n_tr} [ (1/2) ( φ(x_i^tr)^T β̂^(i) )² − φ(x_i^te)^T β̂^(i) ].

Our approach to efficiently computing the LOOCV score is to use the Sherman-Woodbury-Morrison formula for computing matrix inverses; β̂^(i) can be expressed in closed form as

β̂^(i) = max{ 0_b, ((n_tr − 1) n_te / (n_tr (n_te − 1))) ( a + ( φ(x_i^tr)^T a / (n_tr − φ(x_i^tr)^T a_tr) ) a_tr )
              − ((n_tr − 1) / (n_tr (n_te − 1))) ( a_te + ( φ(x_i^tr)^T a_te / (n_tr − φ(x_i^tr)^T a_tr) ) a_tr ) },

where a = A^{-1} ĥ, a_tr = A^{-1} φ(x_i^tr), a_te = A^{-1} φ(x_i^te), and A = Ĥ + ((n_tr − 1) λ / n_tr) I_b. This implies that the matrix inverse needs to be computed only once (i.e., A^{-1}) for calculating the LOOCV scores. Thus LOOCV can be carried out very efficiently without repeating hold-out loops.
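The closed form above transcribes directly; A^{-1} is formed once and reused for every held-out pair. The dense inverse is kept here for readability (a factorization would be preferable in practice):

```python
import numpy as np

def ulsif_loocv(Phi_tr, Phi_te, lam):
    n_tr, b = Phi_tr.shape
    n_te = len(Phi_te)
    H_hat = Phi_tr.T @ Phi_tr / n_tr
    h_hat = Phi_te.mean(axis=0)
    A_inv = np.linalg.inv(H_hat + (n_tr - 1) * lam / n_tr * np.eye(b))
    a = A_inv @ h_hat
    score = 0.0
    for i in range(n_tr):
        p_tr, p_te = Phi_tr[i], Phi_te[i]
        a_tr, a_te = A_inv @ p_tr, A_inv @ p_te
        denom = n_tr - p_tr @ a_tr
        coef = (n_tr - 1) / (n_tr * (n_te - 1))
        beta = coef * (n_te * (a + (p_tr @ a) / denom * a_tr)
                       - (a_te + (p_tr @ a_te) / denom * a_tr))
        beta = np.maximum(0.0, beta)               # the max(0_b, .) clipping
        score += 0.5 * (p_tr @ beta) ** 2 - p_te @ beta
    return score / n_tr
```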
4 Relation to Existing Methods
The kernel density estimator (KDE) is a non-parametric technique for estimating a probability density function. KDE can be used for importance estimation by first estimating p̂_tr(x) and p̂_te(x) separately from {x_i^tr}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te} and then estimating the importance by ŵ(x) = p̂_te(x) / p̂_tr(x). KDE is efficient in computation since no optimization is involved, and model selection is possible by likelihood cross-validation. However, KDE may suffer from the curse of dimensionality.
The kernel mean matching (KMM) method allows us to directly obtain an estimate of the importance
values at training points without going through density estimation [5]. KMM can overcome the curse
of dimensionality by directly estimating the importance using a special property of the Gaussian
reproducing kernel Hilbert space. However, there is no objective model selection method for the
regularization parameter and kernel width. As for the regularization parameter, we may follow a
suggestion in the original paper, which is justified by a theoretical argument to some extent [5].
As for the Gaussian width, we may adopt a popular heuristic to use the median distance between
samples, although there seems no strong justification for this. The computation of KMM is rather
demanding since a quadratic programming problem has to be solved.
Other approaches to directly estimating the importance are to fit an importance model directly to the true importance: a method based on logistic regression (LogReg) [1], or a method based on the
kernel model (6) (which is called the Kullback-Leibler importance estimation procedure, KLIEP)
[9, 6]. Model selection of these methods is possible by cross-validation, which is a significant
advantage over KMM. However, LogReg and KLIEP are computationally rather expensive since
non-linear optimization problems have to be solved.
The proposed LSIF is qualitatively similar to LogReg and KLIEP, i.e., it can avoid density estimation, model selection is possible, and non-linear optimization is involved. However, LSIF is advantageous over LogReg and KLIEP in that it is equipped with a regularization path tracking algorithm.
Thanks to this, model selection of LSIF is computationally much more efficient than LogReg and
KLIEP. However, the regularization path tracking algorithm tends to be numerically unstable.
The proposed uLSIF inherits good properties of existing methods, such as avoiding density estimation and being equipped with a built-in model selection method. In addition to these preferable properties,
the solution of uLSIF can be computed analytically through matrix inversion and therefore uLSIF
is computationally very efficient and numerically stable. Furthermore, the closed-form solution of
uLSIF allows us to compute the LOOCV score analytically without repeating hold-out loops, which
highly contributes to reducing the computation time in the model selection phase.
5 Experiments
Importance Estimation: Let p_tr(x) be the d-dimensional normal distribution with mean zero and covariance identity; let p_te(x) be the d-dimensional normal distribution with mean (1, 0, ..., 0)^T and covariance identity. The task is to estimate the importance at the training input points: {w(x_i^tr)}_{i=1}^{n_tr}. We fixed the number of test input points at n_te = 1000 and consider the following two settings for the number n_tr of training samples and the input dimension d: (a) n_tr = 100 and d = 1, 2, ..., 20; (b) d = 10 and n_tr = 50, 60, ..., 150. We run the experiments 100 times for each d, each n_tr, and each method, and evaluate the quality of the importance estimates {ŵ_i}_{i=1}^{n_tr} by the normalized mean
squared error (NMSE): (1/n_tr) Σ_{i=1}^{n_tr} ( ŵ(x_i^tr) − w(x_i^tr) )², where Σ_{i=1}^{n_tr} ŵ(x_i^tr) and Σ_{i=1}^{n_tr} w(x_i^tr) are each normalized to one.

[Figure 1 plots: NMSE in log scale for KDE, KMM, LogReg, KLIEP, and uLSIF.]
Figure 1: NMSEs averaged over 100 trials in log scale: (a) when the input dimension d is changed; (b) when the number of training samples n_tr is changed.
Figure 2: Mean computation time (after model selection) over 100 trials: (a) when d is changed; (b) when n_tr is changed.
Figure 3: Mean total computation time, including model selection of σ and λ over the 9 × 9 grid: (a) when d is changed; (b) when n_tr is changed.
NMSEs averaged over 100 trials (a) as a function of the input dimension d and (b) as a function of the training sample size n_tr are plotted in log scale in Figure 1. Error bars are omitted for clear visibility; instead, the best method in terms of the mean error and the ones comparable to it based on the t-test at the significance level 1% are marked with one symbol, and the methods with a significant difference with another. Figure 1(a) shows that the error of KDE sharply increases as the input dimension
grows, while LogReg, KLIEP, and uLSIF tend to give much smaller errors than KDE. This would
be the fruit of directly estimating the importance without going through density estimation. KMM
tends to perform poorly, which is caused by an inappropriate choice of the Gaussian kernel width.
This implies that the popular heuristic of using the median distance between samples as the Gaussian
width is not always appropriate. On the other hand, model selection in LogReg, KLIEP, and uLSIF
seems to work quite well. Figure 1(b) shows that the errors of all methods tend to decrease as the
number of training samples grows. Again LogReg, KLIEP, and uLSIF tend to give much smaller
errors than KDE and KMM.
Next we investigate the computation time. Each method has a different model selection strategy,
i.e., KMM does not involve any cross-validation, KDE and KLIEP involve cross-validation over
the kernel width, and LogReg and uLSIF involve cross-validation over both the kernel width and
the regularization parameter. Thus the naive comparison of the total computation time is not so
meaningful. For this reason, we first investigate the computation time of each importance estimation
method after the model parameters are fixed. The average CPU computation time over 100 trials
are summarized in Figure 2. Figure 2(a) shows that the computation time of KDE, KLIEP, and
uLSIF is almost independent of the input dimensionality d, while that of KMM and LogReg is
rather dependent on d. Among them, the proposed uLSIF is one of the fastest methods. Figure 2(b)
shows that the computation time of LogReg, KLIEP, and uLSIF is nearly independent of the training
sample size ntr , while that of KDE and KMM sharply increase as ntr increases.
Both LogReg and uLSIF have very good accuracy and their computation time after model selection
is comparable. Finally, we compare the entire computation time of LogReg and uLSIF including
cross-validation, which is summarized in Figure 3. We note that the Gaussian width σ and the regularization parameter λ are chosen over the 9 × 9 equidistant grid in this experiment for both
LogReg and uLSIF. Therefore, the comparison of the entire computation time is fair. Figures 3(a)
and 3(b) show that uLSIF is approximately 5 to 10 times faster than LogReg.
Overall, uLSIF is shown to be comparable to the best existing method (LogReg) in terms of the
accuracy, but is computationally more efficient than LogReg.
Covariate Shift Adaptation in Regression and Classification: Next, we illustrate how the importance estimation methods could be used in covariate shift adaptation [8, 5, 1, 9]. Covariate shift is
a situation in supervised learning where the input distributions change between the training and test
phases but the conditional distribution of outputs given inputs remains unchanged. Under covariate
shift, standard learning techniques such as maximum likelihood estimation or cross-validation are
biased; the bias caused by covariate shift can be asymptotically canceled by weighting the samples
according to the importance. In addition to training input samples {x_i^tr}_{i=1}^{n_tr} following a training input density p_tr(x) and test input samples {x_j^te}_{j=1}^{n_te} following a test input density p_te(x), suppose that training output samples {y_i^tr}_{i=1}^{n_tr} at the training input points {x_i^tr}_{i=1}^{n_tr} are given. The task is to predict the outputs for test inputs.
We use the kernel model

f̂(x; θ) = Σ_{ℓ=1}^{t} θ_ℓ K_h(x, m_ℓ)

for function learning, where K_h(x, x′) is the Gaussian kernel (5) and m_ℓ is a template point randomly chosen from {x_j^te}_{j=1}^{n_te}. We set the number of kernels at t = 50. We learn the parameter θ by importance-weighted regularized least squares (IWRLS):
min_θ [ Σ_{i=1}^{n_tr} ŵ(x_i^tr) ( f̂(x_i^tr; θ) − y_i^tr )² + λ ‖θ‖² ].    (9)

It is known that IWRLS is consistent when the true importance w(x_i^tr) is used as the weights; unweighted RLS is not consistent due to covariate shift, given that the true learning target function f(x) is not realizable by the model f̂(x) [8].
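Eq. (9) has the usual weighted ridge-regression closed form. The sketch below assumes Phi is the n_tr x t design matrix of the kernel model at the training inputs and w holds the estimated importance weights (illustrative names):

```python
import numpy as np

def iwrls(Phi, y, w, lam):
    # Normal equations of (9): (Phi' W Phi + lam I) theta = Phi' W y.
    t = Phi.shape[1]
    G = Phi.T @ (Phi * w[:, None]) + lam * np.eye(t)
    return np.linalg.solve(G, Phi.T @ (w * y))
```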
The kernel width h and the regularization parameter λ in IWRLS (9) are chosen by importance-weighted CV (IWCV) [9]. More specifically, we first divide the training samples {z_i^tr | z_i^tr = (x_i^tr, y_i^tr)}_{i=1}^{n_tr} into R disjoint subsets {Z_r^tr}_{r=1}^R. Then a function f̂_r(x) is learned using {Z_j^tr}_{j≠r} by IWRLS and its mean test error for the remaining samples Z_r^tr is computed:

(1 / |Z_r^tr|) Σ_{(x,y) ∈ Z_r^tr} ŵ(x) loss( f̂_r(x), y ),    (10)

where loss(ŷ, y) is (ŷ − y)² in regression and (1/2)(1 − sign{ŷ y}) in classification. We repeat this procedure for r = 1, ..., R and choose the kernel width h and the regularization parameter λ so that the average of the above mean test error over all r is minimized. We set the number of folds in IWCV at R = 5. IWCV is shown to be an (almost) unbiased estimator of the generalization error, while unweighted CV with misspecified models is biased due to covariate shift.
The datasets provided by DELVE and IDA are used for performance evaluation, where training input points are sampled with bias in the same way as in [9]. We set the number of samples at n_tr = 100 and n_te = 500 for all datasets. We compare the performance of KDE, KMM, LogReg, KLIEP, and uLSIF, as well as the uniform weight (Uniform, i.e., no adaptation is made). The experiments are repeated 100 times for each dataset and we evaluate the mean test error: (1/n_te) Σ_{j=1}^{n_te} loss( f̂(x_j^te), y_j^te ). The results are summarized in Table 1, where all the error values are normalized by that of the uniform weight (no adaptation). For each dataset, the best method and the ones comparable to it based on the Wilcoxon signed rank test at the significance level 1% are described in bold face. The upper half corresponds to regression datasets taken from DELVE while the lower half corresponds to classification datasets taken from IDA.
The table shows that the generalization performance of uLSIF tends to be better than that of Uniform,
KDE, KMM, and LogReg, while it is comparable to the best existing method (KLIEP). The mean
computation time over 100 trials is described in the bottom row of the table, where the value is
normalized so that the computation time of uLSIF is one. This shows that uLSIF is computationally
more efficient than KLIEP. Thus, the proposed uLSIF is overall shown to work well in covariate shift adaptation with low computational cost.
Outlier Detection: Here, we consider an outlier detection problem of finding irregular samples in a dataset (the "evaluation dataset") based on another dataset (the "model dataset") that only contains
Table 1: Covariate shift adaptation. Mean and standard deviation of test error over 100 trials (smaller is better).

Dataset     Uniform      KDE          KMM          LogReg       KLIEP        uLSIF
kin-8fh     1.00(0.34)   1.22(0.52)   1.55(0.39)   1.31(0.39)   0.95(0.31)   1.02(0.33)
kin-8fm     1.00(0.39)   1.12(0.57)   1.84(0.58)   1.38(0.57)   0.86(0.35)   0.88(0.39)
kin-8nh     1.00(0.26)   1.09(0.20)   1.19(0.29)   1.09(0.19)   0.99(0.22)   1.02(0.18)
kin-8nm     1.00(0.30)   1.14(0.26)   1.20(0.20)   1.12(0.21)   0.97(0.25)   1.04(0.25)
abalone     1.00(0.50)   1.02(0.41)   0.91(0.38)   0.97(0.49)   0.97(0.69)   0.96(0.61)
image       1.00(0.51)   0.98(0.45)   1.08(0.54)   0.98(0.46)   0.94(0.44)   0.98(0.47)
ringnorm    1.00(0.04)   0.87(0.04)   0.87(0.04)   0.95(0.08)   0.99(0.06)   0.91(0.08)
twonorm     1.00(0.58)   1.16(0.71)   0.94(0.57)   0.91(0.61)   0.91(0.52)   0.88(0.57)
waveform    1.00(0.45)   1.05(0.47)   0.98(0.31)   0.93(0.32)   0.93(0.34)   0.92(0.32)
Average     1.00(0.38)   1.07(0.40)   1.17(0.37)   1.07(0.37)   0.95(0.35)   0.96(0.36)
Time        --           0.82         3.50         3.27         3.64         1.00

Table 2: Outlier detection. Mean AUC values over 20 trials (larger is better).

Dataset     uLSIF   KLIEP   LogReg   KMM     OSVM    LOF     KDE
banana      .851    .815    .447     .578    .360    .915    .934
b-cancer    .463    .480    .627     .576    .508    .488    .400
diabetes    .558    .615    .599     .574    .563    .403    .425
f-solar     .416    .485    .438     .494    .522    .441    .378
german      .574    .572    .556     .529    .535    .559    .561
heart       .659    .647    .833     .623    .681    .659    .638
image       .812    .828    .600     .813    .540    .930    .916
splice      .713    .748    .368     .541    .737    .778    .845
thyroid     .534    .720    .745     .681    .504    .111    .256
titanic     .525    .534    .602     .502    .456    .525    .461
t-norm      .905    .902    .161     .439    .846    .889    .875
w-form      .890    .881    .243     .477    .861    .887    .861
Average     .661    .685    .530     .608    .596    .629    .623
Time        1.00    11.7    5.35     751     12.4    85.5    8.70
regular samples. Defining the importance over the two sets of samples, we can see that the importance values for regular samples are close to one, while those for outliers tend to deviate significantly from one. Thus the importance values can be used as an index of the degree of outlyingness in this scenario. Since the evaluation dataset has wider support than the model dataset, we regard the evaluation dataset as the training set (i.e., the denominator of the importance) and the model dataset as the test set (i.e., the numerator of the importance). Then outliers tend to have smaller importance values (i.e., close to zero).
We again test KMM, LogReg, KLIEP, and uLSIF for importance estimation; in addition, we test
native outlier detection methods such as the one-class support vector machine (OSVM) [7], the
local outlier factor (LOF) [3], and the kernel density estimator (KDE). The datasets provided by
IDA are used for performance evaluation. These datasets are binary classification datasets consisting
of training and test samples. We allocate all positive training samples to the "model" set, while all positive test samples and 1% of the negative test samples are assigned to the "evaluation" set. Thus, we
regard the positive samples as regular and the negative samples as irregular.
The mean AUC values over 20 trials as well as the computation time are summarized in Table 2, showing that uLSIF works fairly well. KLIEP works slightly better than uLSIF, but uLSIF is computationally much more efficient. LogReg overall works rather well, but it performs poorly on some datasets and therefore its average AUC value is small. KMM and OSVM are not comparable to uLSIF in either AUC or computation time. LOF and KDE work reasonably well in terms of AUC, but their computational cost is high. Thus, the proposed uLSIF is overall shown to work well and to be computationally efficient in outlier detection as well.
6 Conclusions
We proposed a new method for importance estimation that can avoid solving the substantially more difficult task of density estimation. We are currently exploring various possible applications of importance estimation methods beyond covariate shift adaptation and outlier detection, e.g., feature selection, conditional distribution estimation, and independent component analysis; we believe that importance estimation could be used as a new versatile tool in machine learning.
References
[1] S. Bickel et al. Discriminative learning for differing training and test distributions. ICML 2007.
[2] S. Bickel et al. Dirichlet-enhanced spam filtering based on biased samples. NIPS 2006.
[3] M. M. Breunig et al. LOF: Identifying density-based local outliers. SIGMOD 2000.
[4] T. Hastie et al. The entire regularization path for the support vector machine. JMLR 2004.
[5] J. Huang et al. Correcting sample selection bias by unlabeled data. NIPS 2006.
[6] X. Nguyen et al. Estimating divergence functions and the likelihood ratio. NIPS 2007.
[7] B. Schölkopf et al. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443-1471, 2001.
[8] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
[9] M. Sugiyama et al. Direct importance estimation with model selection. NIPS 2007.
2,633 | 3,388 | Risk Bounds for Randomized Sample Compressed
Classifiers
Mohak Shah
Centre for Intelligent Machines
McGill University
Montreal, QC, Canada, H3A 2A7
[email protected]
Abstract
We derive risk bounds for the randomized classifiers in the Sample Compression setting, where the classifier specification utilizes two sources of information, viz. the compression set and the message string. By extending the recently proposed Occam's Hammer principle to the data-dependent setting, we derive point-wise versions of the bounds on the stochastic sample compressed classifiers and also recover the corresponding classical PAC-Bayes bound. We further show how these compare favorably to the existing results.
1 Introduction
The Sample compression framework [Littlestone and Warmuth, 1986, Floyd and Warmuth, 1995]
has resulted in an important class of learning algorithms known as sample compression algorithms.
These algorithms have been shown to be competitive with the state-of-the-art algorithms such as
the SVM in practice [Marchand and Shawe-Taylor, 2002, Laviolette et al., 2005]. Moreover, the
approach has also resulted in practical realizable bounds and has shown significant promise in using
these bounds in model selection.
On another learning theoretic front, the PAC-Bayes approach [McAllester, 1999] has shown that stochastic classifier selection can prove to be more powerful than outputting a deterministic classifier. With regard to the sample compression setting, this was further confirmed in the case of the sample compressed Gibbs classifier by Laviolette and Marchand [2007]. However, the specific classifier output by the algorithm (according to a selected posterior) is generally of immediate interest, since this is the classifier whose future performance is of relevance in practice. Diluting such guarantees in terms of the expectancy of the risk over the posterior on the classifier space, although it gives tighter risk bounds, results in averaged statements about the expected true error.
A significant result in obtaining such guarantees for the specific randomized classifier has appeared in the form of Occam's Hammer [Blanchard and Fleuret, 2007]. It deals with bounding the performance of algorithms that result in a set output when given training data. With respect to classifiers, this results in a bound on the true risk of the randomized classifier output by the algorithm in accordance with a posterior over the classifier space learned from the training data. Blanchard and Fleuret [2007] also present a PAC-Bayes bound for the data-independent setting (when the classifier space is defined independently of the training data).
Motivated by this result, we derive risk bounds for the randomized sample compressed classifiers.
Note that the classifier space in the case of sample compression settings, unlike other settings, is
data-dependent in the sense that it is defined upon the specification of training data.1 The rest of
1
Note that the classifier space depends on the amount of the training data as we see further and not on
the training data themselves. Hence, a data-independent prior over the classifier space can still be obtained in
this setting, e.g., in the PAC-Bayes case, owing to the independence of the classifier space definition from the
content of the training data.
the paper is organized as follows: Section 2 provides background on sample compressed classifiers and establishes the context; Section 3 then states Occam's Hammer for the data-independent setting. We then derive bounds for the randomized sample compressed classifier in Section 4, followed by showing how we can recover bounds for the sample compressed Gibbs case (the classical PAC-Bayes bound for sample compressed classifiers) in Section 5. We conclude in Section 6.
2 Sample Compressed (SC) Classifiers
We consider binary classification problems where the input space X consists of an arbitrary subset of ℝⁿ and the output space Y = {−1, +1}. An example z = (x, y) is an input-output pair where x ∈ X and y ∈ Y. Sample compression learning algorithms are characterized as follows:
Given a training set S = {z₁, . . . , z_m} of m examples, the classifier A(S) returned by algorithm A is described entirely by two complementary sources of information: a subset z_i of S, called the compression set, and a message string σ which represents the additional information needed to obtain a classifier from the compression set z_i. Given a training set S, the compression set z_i is defined by a vector i of indices, i = (i₁, i₂, . . . , i_{|i|}), with i_j ∈ {1, . . . , m} ∀j and i₁ < i₂ < . . . < i_{|i|}, where |i| denotes the number of indices present in i. Hence, for a single index j, z_j denotes the jth example of S, whereas z_i denotes the subset of examples of S that are pointed to by the vector of indices i defined
above. We will use ī to denote the vector of indices not present in i. Hence, we have S = z_i ∪ z_ī for any vector i ∈ I, where I denotes the set of the 2^m possible realizations of i.
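To make the indexing concrete, here is a minimal Python sketch (with names of our own choosing) of how a sample splits into the compression set z_i and its complement z_ī:

```python
def split_sample(S, i):
    # S: list of m examples (x, y); i: sorted vector of 1-based indices.
    # Returns the compression set z_i and its complement z_ibar.
    chosen = set(i)
    z_i = [S[j - 1] for j in i]
    z_ibar = [S[j - 1] for j in range(1, len(S) + 1) if j not in chosen]
    return z_i, z_ibar
```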
Finally, a learning algorithm is a sample compression learning algorithm (that is, one identified solely by a compression set z_i and a message string σ) iff there exists a reconstruction function R : (X × Y)^{|i|} × M → H associated with A. Here, H is the (data-dependent) classifier space and K ⊆ I × M s.t. M = ∪_{i∈I} M(i). That is, R outputs a classifier R(σ, z_i) when given an arbitrary compression set z_i ⊆ S and a message string σ chosen from the set M(z_i) of all distinct messages that can be supplied to R with the compression set z_i.
We seek a tight risk bound for arbitrary reconstruction functions that holds uniformly for all compression sets and message strings. For this, we adopt the PAC setting where each example z is drawn according to a fixed, but unknown, probability distribution D on X × Y. The true risk R(f) of any classifier f is defined as the probability that it misclassifies an example drawn according to D:
$$R(f) \stackrel{\mathrm{def}}{=} \Pr_{(x,y)\sim D}\big(f(x)\neq y\big) = \mathbf{E}_{(x,y)\sim D}\, I\big(f(x)\neq y\big)$$
where I(a) = 1 if predicate a is true and 0 otherwise. Given a training set S = {z₁, . . . , z_m} of m examples, the empirical risk R_S(f) on S of any classifier f is defined according to:
$$R_S(f) \stackrel{\mathrm{def}}{=} \frac{1}{m}\sum_{i=1}^{m} I\big(f(x_i)\neq y_i\big) = \mathbf{E}_{(x,y)\sim S}\, I\big(f(x)\neq y\big)$$
Let Z^m denote the collection of m random variables whose instantiation gives a training sample S = z^m = {z₁, . . . , z_m}. To obtain the tightest possible risk bound, we will fully exploit the fact that the distribution of classification errors is a binomial. We now discuss the generic Occam's Hammer principle (w.r.t. the classification scenario) and then go on to show how it can be applied to the sample compression setting.
3 Occam's Hammer for the data-independent setting
In this section, we briefly detail Occam's Hammer [Blanchard and Fleuret, 2007] for the data-independent setting. For the sake of simplicity, we retain the key notations of Blanchard and Fleuret [2007]. Occam's Hammer works by bounding the probability of a bad event defined as follows. For every classifier h ∈ H and a confidence parameter δ ∈ [0, 1], the bad event B(h, δ) is defined as the region where the desired property of the classifier h does not hold, with probability δ. That is, Pr_{S∼D^m}[S ∈ B(h, δ)] ≤ δ. Further, it is assumed that this region is nondecreasing in δ. Intuitively, this means that with decreasing δ the bound on the true error of the classifier h becomes tighter.
With the above assumption satisfied, let P be a non-negative reference measure on the classifier space H, known as the volumic measure. Let Π be a probability distribution on H, absolutely continuous w.r.t. P, with density π = dΠ/dP. Let γ be a probability distribution on (0, +∞) (the inverse density prior). Then Occam's Hammer [Blanchard and Fleuret, 2007] states that:
Theorem 1 [Blanchard and Fleuret, 2007] Given the above assumption and P, Π, γ defined as above, define the level function
$$\Delta(h, u) = \min\big(\delta\,\pi(h)\,\Theta(u),\, 1\big),$$
where $\Theta(x) = \int_0^x u\, d\gamma(u)$ for x ∈ (0, +∞). Then for any algorithm S ↦ θ_S returning a probability density θ_S over H with respect to P, and such that (S, h) ↦ θ_S(h) is jointly measurable in its two variables, it holds that
$$\Pr_{S\sim D^m,\, h\sim Q}\Big[S \in B\big(h,\, \Delta(h, \theta_S(h)^{-1})\big)\Big] \leq \delta,$$
where Q is the distribution on H such that dQ/dP = θ_S.
Note above that Q is the (data-dependent) posterior distribution on H after observing the data sample S, while P is the data-independent prior on H; the subscript S in θ_S denotes this. Moreover, the distribution Π on the space of classifiers may or may not be data-dependent. As we will see later, in the case of sample compression learning settings we will consider priors over the space of classifiers without reference to the data (as in the PAC-Bayes case). To this end, we can either opt for a prior Π independent of the data or make it the same as the volumic measure P, which establishes a distribution on the classifier space without reference to the data.
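For concreteness, the following sketch evaluates the level function of Theorem 1 under the power-law choice of inverse density prior that yields Θ(u) = min((k+1)⁻¹ u^{(k+1)/k}, 1), the choice used later in Section 4.1; the function names and the specific γ are our assumptions, not part of Blanchard and Fleuret's statement.

```python
def theta_powerlaw(u, k):
    # Theta(u) = integral_0^u t dgamma(t) for the inverse density prior
    # giving Theta(u) = min((k + 1)^(-1) * u^((k + 1) / k), 1).
    return min(u ** ((k + 1.0) / k) / (k + 1.0), 1.0)

def level_function(delta, pi_h, theta_S_h, k):
    # Delta(h, u) = min(delta * pi(h) * Theta(u), 1), evaluated at
    # u = 1 / theta_S(h), the inverse posterior density w.r.t. P.
    return min(delta * pi_h * theta_powerlaw(1.0 / theta_S_h, k), 1.0)
```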
4 Bounds for Randomized SC Classifiers
We work in the sample compression setting and, as mentioned before, each classifier in this setting is denoted in terms of a compression set and a message string. A reconstruction function then uses these two information sources to reconstruct the classifier. This essentially means that we deal with a data-dependent hypothesis space. This is in contrast with other notions of hypothesis class complexity measures such as the VC dimension. The hypothesis space is defined, in our case, based on the size of the data sample (and not the actual contents of the sample). Hence, we consider priors built on the size of the possible compression sets and associated message strings. More precisely, we consider a prior distribution P with probability density P(z_i, σ) factorizable into its compression-set-dependent component and a message string component (conditioned on a given compression set), such that:
$$P(z_i, \sigma) = P_I(i)\, P_{M(z_i)}(\sigma) \qquad (1)$$
with $P_I(i) = \frac{1}{\binom{m}{|i|}}\, p(|i|)$ such that $\sum_{d=0}^{m} p(d) = 1$. The above choice of the form for P_I(i) is appropriate since we do not have any a priori information to distinguish one compression set from another. However, as we will see later, we should choose p(d) so as to give more weight to smaller compression sets.
Let P_K be the set of all distributions P on K satisfying the above equation. Then, we are interested in algorithms that output a posterior Q ∈ P_K over the space of classifiers with probability density Q(z_i, σ) factorizable as Q_I(i) Q_{M(z_i)}(σ). A sample compressed classifier is then defined by choosing a classifier (z_i, σ) according to the posterior Q(z_i, σ). This is basically the Gibbs classifier defined in the PAC-Bayes setting, where the idea is to bound the true risk of this Gibbs classifier, defined as R(G_Q) = E_{(z_i,σ)∼Q} R((z_i, σ)). On the other hand, we are interested in bounding the true risk of the specific classifier (z_i, σ) output according to Q. As shown in [Laviolette and Marchand, 2007], a rescaled posterior Q̄ of the following form can provide tighter guarantees while maintaining Occam's principle of parsimony.
Definition 2 Given a distribution Q ∈ P_K, we denote by Q̄ the distribution:
$$\bar Q(z_i,\sigma) \stackrel{\mathrm{def}}{=} \frac{Q(z_i,\sigma)}{|i|\; \mathbf{E}_{(z_i,\sigma)\sim Q}\frac{1}{|i|}} = \frac{Q_I(i)\,Q_{M(z_i)}(\sigma)}{|i|\; \mathbf{E}_{(z_i,\sigma)\sim Q}\frac{1}{|i|}} = \bar Q_I(i)\,Q_{M(z_i)}(\sigma) \qquad \forall (z_i,\sigma) \in K$$
Hence, note that the posterior is effectively rescaled over the compression set part: any classifier (z_i, σ) ∼ Q̄ corresponds to i ∼ Q̄_I, σ ∼ Q_{M(z_i)}. Further, if we denote by d_{Q̄} the expected value of the compression set size over the choice of parameters according to the rescaled posterior, $d_{\bar Q} \stackrel{\mathrm{def}}{=} \mathbf{E}_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}} |i|$, then
$$\mathbf{E}_{(z_i,\sigma)\sim Q}\frac{1}{|i|} = \frac{1}{\mathbf{E}_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}} |i|} = \frac{1}{d_{\bar Q}} \geq \frac{1}{m}$$
Now, we proceed to derive the bounds for the randomized sample compressed classifiers starting
with a PAC-Bayes bound.
4.1 A PAC-Bayes Bound for randomized SC classifier
We exploit the fact that the distribution of the errors is binomial and define the following error quantities (for a given i, and hence z_i, over z_ī):
Definition 3 Let S ∼ D^m with D a distribution on X × Y, and (z_i, σ) ∈ K. We denote by Bin_S(i, σ) the probability that the classifier R(σ, z_i), of (true) risk R(σ, z_i), makes $|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)$ or fewer errors on $z'_{\bar i} \sim D^{|\bar i|}$. That is,
$$\mathrm{Bin}_S(i,\sigma) = \sum_{\kappa=0}^{|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)} \binom{|\bar i|}{\kappa}\, \big(R(\sigma, z_i)\big)^{\kappa}\, \big(1 - R(\sigma, z_i)\big)^{|\bar i| - \kappa}$$
and by B_S(i, σ) the probability that this classifier makes exactly $|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)$ errors on $z'_{\bar i} \sim D^{|\bar i|}$. That is,
$$B_S(i,\sigma) = \binom{|\bar i|}{|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)}\, \big(R(z_i,\sigma)\big)^{|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)}\, \big(1 - R(z_i,\sigma)\big)^{|\bar i| - |\bar i|\, R_{z_{\bar i}}(z_i,\sigma)}$$
Now, approximating the binomial by the relative entropy Chernoff bound [Langford, 2005], we have, for a classifier f:
$$\sum_{j=0}^{m R_S(f)} \binom{m}{j}\, (R(f))^j\, (1-R(f))^{m-j} \;\leq\; \exp\big(-m\cdot \mathrm{kl}(R_S(f)\,\|\,R(f))\big)$$
for all R_S(f) ≤ R(f). As also shown in [Laviolette and Marchand, 2007], since $\binom{m}{j} = \binom{m}{m-j}$ and kl(R_S(f) ‖ R(f)) = kl(1−R_S(f) ‖ 1−R(f)), the above inequality holds true for each factor inside the sum on the left hand side. Consequently, in the case of a sample compressed classifier, ∀(z_i, σ) ∈ K and ∀S ∈ (X × Y)^m:
$$B_S(i,\sigma) \;\leq\; \exp\big(-|\bar i|\cdot \mathrm{kl}(R_{z_{\bar i}}(\sigma, z_i)\,\|\,R(\sigma, z_i))\big) \qquad (2)$$
Bounding this by δ yields:
$$\Pr_{S\sim D^m}\left[\mathrm{kl}(R_{z_{\bar i}}(\sigma, z_i)\,\|\,R(\sigma, z_i)) \leq \frac{\ln\frac{1}{\delta}}{|\bar i|}\right] \geq 1-\delta \qquad (3)$$
Now, consider the quantity inside the probability in Equation 3 as the bad event over classifiers defined by a compression set i and an associated message string σ. Let θ_{z^m}(i, σ) be the probability density of the rescaled data-dependent posterior distribution Q̄ over the classifier space with respect to the volumic measure P. We can now replace δ for this bad event by the delta of Occam's Hammer, defined as:
$$\ln\Big(\min\big(\delta\,\pi(h_S)\,\Theta(\theta_{z^m}(i,\sigma)^{-1}),\, 1\big)^{-1}\Big) = \ln_+ \frac{1}{\delta\,\pi(h)\,\min\big((k+1)^{-1}\,\theta_{z^m}(i,\sigma)^{-\frac{k+1}{k}},\, 1\big)}$$
$$= \ln_+ \frac{1}{\delta\,\pi(h)}\,\max\big((k+1)\,\theta_{z^m}(i,\sigma)^{\frac{k+1}{k}},\, 1\big)$$
$$\leq \ln_+ \frac{1}{\delta\,\pi(h)}\,(k+1)\,\max\big(\theta_{z^m}(i,\sigma)^{\frac{k+1}{k}},\, 1\big)$$
$$\leq \ln \frac{k+1}{\delta\,\pi(h)} + \Big(1+\frac{1}{k}\Big)\ln_+ \theta_{z^m}(i,\sigma)$$
where ln_+ denotes max(0, ln), the positive part of the logarithm.
However, note that we are interested in data-independent priors over the space of classifiers², and hence we consider our prior Π to be the same as the volumic measure P over the classifier space, yielding π as unity. That is, our prior gives a distribution over the classifier space without any regard to the data. Substituting for θ_{z^m}(i, σ) (the fraction of the respective densities, i.e., the Radon-Nikodym derivative)³, we obtain the following result:
Theorem 4 For any reconstruction function R : D^m × K → H and for any prior distribution P over compression sets and message strings, if the sample compression algorithm A(S) returns a posterior distribution Q, then, for δ ∈ (0, 1] and k > 0, we have:
$$\Pr_{S\sim D^m,\, i\sim\bar Q_I,\, \sigma\sim Q_{M(z_i)}}\left[\mathrm{kl}\big(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)\big) \leq \frac{1}{m-|i|}\left(\ln\frac{k+1}{\delta} + \Big(1+\frac{1}{k}\Big)\ln_+\frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)}\right)\right] \geq 1-\delta$$
where R_{z_ī}(z_i, σ) is the empirical risk of the classifier reconstructed from (z_i, σ) on the training examples not in the compression set and R(z_i, σ) is the corresponding true risk.
Note that we encounter the $\frac{1}{m-|i|}$ factor in the bound instead of $\frac{1}{m-d_{\bar Q}}$, unlike the bound of Laviolette and Marchand [2007]. This is because the PAC-Bayes bound of Laviolette and Marchand [2007] computes the expectancy over the kl-divergence of the empirical and true risk of the classifiers chosen according to Q̄. This, as a result of the rescaling of Q in preference of smaller compression sets, is reflected in the bound. On the other hand, the bound of Theorem 4 is a point-wise version bounding the true error of the specific classifier chosen according to Q̄, and hence concerns the specific compression set utilized by this classifier.
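As a numeric illustration, the risk bound implied by Theorem 4 can be computed by inverting the binary kl-divergence with a bisection search. This is our own sketch, assuming the density ratio term ln₊(Q̄(z_i,σ)/P(z_i,σ)) has already been evaluated for the drawn classifier:

```python
import math

def kl_bernoulli(q, p):
    # kl(q || p) for Bernoulli means, with 0 log 0 taken as 0.
    eps = 1e-12
    q, p = min(max(q, eps), 1 - eps), min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(q, bound):
    # Largest p >= q with kl(q || p) <= bound, found by bisection.
    lo, hi = q, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(q, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

def theorem4_risk_bound(emp_risk, m, ci_size, log_ratio, delta, k=1.0):
    # emp_risk: error rate on the m - |i| points outside the compression
    # set; log_ratio: ln of Qbar(z_i, sigma) / P(z_i, sigma).
    eps_bound = (math.log((k + 1) / delta)
                 + (1 + 1.0 / k) * max(0.0, log_ratio)) / (m - ci_size)
    return kl_inverse_upper(emp_risk, eps_bound)
```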
4.2 A Binomial Tail Inversion Bound for randomized SC classifier
A tighter condition can be imposed on the true risk of the classifier by considering the binomial tail inversion over the distribution of errors. The binomial tail inversion $\overline{\mathrm{Bin}}\big(\frac{k}{m}, \delta\big)$ is defined as the largest risk value that a classifier can have while still having a probability of at least δ of observing at most k errors out of m examples:
$$\overline{\mathrm{Bin}}\Big(\frac{k}{m},\, \delta\Big) \stackrel{\mathrm{def}}{=} \sup\Big\{ r : \mathrm{Bin}\Big(\frac{k}{m},\, r\Big) \geq \delta \Big\}$$
where
$$\mathrm{Bin}\Big(\frac{k}{m},\, r\Big) \stackrel{\mathrm{def}}{=} \sum_{j=0}^{k} \binom{m}{j}\, r^j\, (1-r)^{m-j}$$
From this definition, it follows that $\overline{\mathrm{Bin}}(R_S(f), \delta)$ is the smallest upper bound, holding with probability at least 1−δ, on the true risk of any classifier f with an observed empirical risk R_S(f) on a test set of m examples (test set bound):
$$\Pr_{Z^m}\Big[ R(f) \leq \overline{\mathrm{Bin}}\big(R_{Z^m}(f),\, \delta\big) \Big] \geq 1-\delta \quad \forall f \qquad (4)$$
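A minimal numeric sketch of the tail and its inversion (bisection over r; the function names are ours):

```python
import math

def binomial_tail(k, m, r):
    # Bin(k/m, r) = sum_{j=0}^{k} C(m, j) r^j (1 - r)^(m - j)
    return sum(math.comb(m, j) * r ** j * (1 - r) ** (m - j)
               for j in range(k + 1))

def binomial_tail_inversion(k, m, delta):
    # sup { r : Bin(k/m, r) >= delta }, via bisection on r in [0, 1].
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if binomial_tail(k, m, mid) >= delta:
            lo = mid
        else:
            hi = mid
    return lo
```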
This bound can be converted to a training set bound in a standard manner by considering a measure over the classifier space (see for instance [Langford, 2005, Theorem 4.1]). Moreover, in the sample compression case, we are interested in the empirical risk of the classifier on the examples not in the compression set (the consistent compression set assumption). Now, let δ_r be a δ-weighted measure on the classifier space, i.e., over i and σ. Then, for the compression sets and associated message strings,
² Hence the missing S in the subscript of π(h) in the r.h.s. above.
³ Alternatively, let P(z_i, σ) and Q̄(z_i, σ) denote the probability densities of the prior distribution P and the rescaled posterior distribution Q̄ over classifiers, such that dQ̄ = Q̄(z_i, σ) dν and dP = P(z_i, σ) dν w.r.t. some measure ν. This too yields $\frac{d\bar Q}{dP} = \frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)}$. Note that the final expression is independent of the underlying measure ν.
consider the following bad event, with the empirical risk of the classifier measured as in Bin_S((z_i, σ)), for i ∼ Q̄_I, σ ∼ Q_{M(z_i)}:
$$B(h, \delta) = \Big\{ R(z_i,\sigma) > \overline{\mathrm{Bin}}\big(R_{z_{\bar i}}(z_i,\sigma),\, \delta_r\big) \Big\}$$
Now, we replace δ_r with the level function of Occam's Hammer (with the same assumption of Π = P, π = 1):
$$\min\big(\delta\,\pi(h_S)\,\Theta(\theta_{z^m}(i,\sigma)^{-1}),\, 1\big) = \delta\cdot\min\big((k+1)^{-1}\,\theta_{z^m}(i,\sigma)^{-\frac{k+1}{k}},\, 1\big)$$
$$= \frac{\delta}{\max\big((k+1)\,\theta_{z^m}(i,\sigma)^{\frac{k+1}{k}},\, 1\big)}$$
$$\geq \frac{\delta}{(k+1)\,\max\big(\theta_{z^m}(i,\sigma)^{\frac{k+1}{k}},\, 1\big)}$$
$$\geq \frac{\delta}{(k+1)\,\theta_{z^m}(i,\sigma)^{\frac{k+1}{k}}}$$
Hence, we have proved the following:
Theorem 5 For any reconstruction function R : D^m × K → H and for any prior distribution P over the compression sets and message strings, if the sample compression algorithm A(S) returns a posterior distribution Q, then, for δ ∈ (0, 1] and k > 0, we have:
$$\Pr_{S\sim D^m,\; i\sim\bar Q_I,\; \sigma\sim Q_{M(z_i)}}\left[ R(z_i,\sigma) \leq \overline{\mathrm{Bin}}\left(R_{z_{\bar i}}(z_i,\sigma),\; \frac{\delta}{(k+1)\,\big(\bar Q(z_i,\sigma)/P(z_i,\sigma)\big)^{\frac{k+1}{k}}}\right)\right] \geq 1-\delta$$
We can obtain a looser bound by approximating the binomial tail inversion bound using [Laviolette et al., 2005, Lemma 1]:
Corollary 6 Given all our previous definitions, the following holds with probability 1−δ over the joint draw of S ∼ D^m and i ∼ Q̄_I, σ ∼ Q_{M(z_i)}:
$$R(z_i,\sigma) \leq 1 - \exp\left(\frac{-1}{m-|i|-|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)}\left[\ln\binom{m-|i|}{|\bar i|\, R_{z_{\bar i}}(z_i,\sigma)} + \ln\frac{k+1}{\delta} + \Big(1+\frac{1}{k}\Big)\ln\frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)}\right]\right)$$
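Combining Theorem 5 with the tail-inversion sketch above gives a direct numeric bound; `binomial_tail_inversion` is the hypothetical helper sketched after Equation 4, and the density ratio Q̄/P is assumed to be available:

```python
def theorem5_risk_bound(errors, m, ci_size, density_ratio, delta, k=1.0):
    # errors: mistakes on the m - |i| points outside the compression set;
    # density_ratio: Qbar(z_i, sigma) / P(z_i, sigma).
    delta_eff = delta / ((k + 1.0) * density_ratio ** ((k + 1.0) / k))
    return binomial_tail_inversion(errors, m - ci_size, delta_eff)
```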
5 Recovering the PAC-Bayes bound for SC Gibbs Classifier
Let us now see how a bound can be obtained for the Gibbs setting. We follow the general line of
argument of Blanchard and Fleuret [2007] to recover the PAC-Bayes bound for the Sample Compressed Gibbs classifier. However, note that we do this for the data-dependent setting here and also
utilize the rescaled posterior over the space of sample compressed classifiers.
The PAC-Bayes bound of Theorem 4 basically states that
$$\mathbf{E}_{S\sim D^m}\Big[\Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) > \epsilon(\delta)\big]\Big] \leq \delta$$
where
$$\epsilon(\delta) = \frac{1}{m-|i|}\left(\ln\frac{k+1}{\delta} + \Big(1+\frac{1}{k}\Big)\ln_+\frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)}\right)$$
Consequently,
$$\mathbf{E}_{S\sim D^m}\Big[\Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) > \epsilon(\delta\gamma)\big]\Big] \leq \delta\gamma$$
Now, bounding the argument of the expectancy above using the Markov inequality, we get:
$$\Pr_{S\sim D^m}\Big[\Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) > \epsilon(\delta\gamma)\big] > \gamma\Big] \leq \delta$$
Now, discretizing the argument over $(\delta_i, \gamma_i) = (\delta 2^{-i}, 2^{-i})$, we obtain
$$\Pr_{S\sim D^m}\Big[\Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) > \epsilon(\delta_i\gamma_i)\big] > \gamma_i\Big] \leq \delta_i$$
Taking the union bound over the δ_i, i ≥ 1, now yields:
$$\Pr_{S\sim D^m}\Big[\forall i \geq 0:\ \Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) > \epsilon(\delta 2^{-2i})\big] \leq 2^{-i}\Big] > 1-\delta$$
Now, let us consider the argument of the above statement for a fixed sample S. Then, for all i ≥ 0, the following holds with probability 1−δ:
$$\Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\left[\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) > \frac{1}{m-|i|}\left(\ln\frac{k+1}{\delta} + 2i\ln 2 + \Big(1+\frac{1}{k}\Big)\ln_+\frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)}\right)\right] \leq 2^{-i}$$
and hence:
$$\Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\xi^S(z_i,\sigma) > 2i\ln 2\big] \leq 2^{-i}$$
where:
$$\xi^S(z_i,\sigma) = (m-|i|)\,\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma)) - \ln\frac{k+1}{\delta} - \Big(1+\frac{1}{k}\Big)\ln_+\frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)}$$
We wish to bound, for the Gibbs classifier, $\mathbf{E}_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\,\xi^S(z_i,\sigma)$:
$$\mathbf{E}_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\xi^S(z_i,\sigma)\big] \leq \int_{2i\ln 2 > 0} \Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\xi^S(z_i,\sigma) \geq 2i\ln 2\big]\, d(2i\ln 2) \leq 2\ln 2 \sum_{i\geq 0} \Pr_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\big[\xi^S(z_i,\sigma) \geq 2i\ln 2\big] \leq 3 \qquad (5)$$
Now, we have:
Lemma 7 [Laviolette and Marchand, 2007] For any f : K → ℝ⁺, and for any Q, Q′ ∈ P_K related by
$$Q'(z_i,\sigma)\, f(z_i,\sigma) = \frac{1}{\mathbf{E}_{(z_i,\sigma)\sim Q}\frac{1}{f(z_i,\sigma)}}\; Q(z_i,\sigma),$$
we have:
$$\mathbf{E}_{(z_i,\sigma)\sim Q'}\big[f(z_i,\sigma)\,\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma))\big] \geq \frac{1}{\mathbf{E}_{(z_i,\sigma)\sim Q}\frac{1}{f(z_i,\sigma)}}\; \mathrm{kl}(R_S(G_Q)\,\|\,R(G_Q))$$
where R_S(G_Q) and R(G_Q) denote the empirical and true risk of the Gibbs classifier with posterior Q, respectively.
Hence, with Q′ = Q̄ and f(z_i, σ) = |ī| = m−|i|, Lemma 7 yields:
$$\mathbf{E}_{(z_i,\sigma)\sim\bar Q}\big[|\bar i|\;\mathrm{kl}(R_{z_{\bar i}}(z_i,\sigma)\,\|\,R(z_i,\sigma))\big] \geq \frac{1}{\mathbf{E}_{(z_i,\sigma)\sim Q}\frac{1}{m-|i|}}\;\mathrm{kl}(R_S(G_Q)\,\|\,R(G_Q)) = (m - d_{\bar Q})\,\mathrm{kl}(R_S(G_Q)\,\|\,R(G_Q)) \qquad (6)$$
Further,
$$\mathbf{E}_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\ln_+\frac{\bar Q(z_i,\sigma)}{P(z_i,\sigma)} = \mathbf{E}_{i\sim\bar Q_I,\,\sigma\sim Q_{M(z_i)}}\ln_+\frac{\bar Q(z_i,\sigma)}{P_I(i)\,P_{M(z_i)}(\sigma)}$$
$$= \mathbf{E}_{(z_i,\sigma)\sim P}\left[\frac{\bar Q(z_i,\sigma)}{P_I(i)\,P_{M(z_i)}(\sigma)}\cdot\ln_+\frac{\bar Q(z_i,\sigma)}{P_I(i)\,P_{M(z_i)}(\sigma)}\right]$$
$$\leq \mathbf{E}_{(z_i,\sigma)\sim P}\left[\frac{\bar Q(z_i,\sigma)}{P_I(i)\,P_{M(z_i)}(\sigma)}\cdot\ln\frac{\bar Q(z_i,\sigma)}{P_I(i)\,P_{M(z_i)}(\sigma)}\right] - \min_{0\leq x<1} x\ln x$$
$$\leq \mathrm{KL}(\bar Q\,\|\,P) + 0.5 \qquad (7)$$
Equations 6 and 7, along with Equation 5 and substituting k = m−1, yield the final result:
Theorem 8 For any reconstruction function R : D^m × K → H and for any prior distribution P over compression sets and message strings, for δ ∈ (0, 1], we have:
$$\Pr_{S\sim D^m}\left[\forall Q \in P_K:\ \mathrm{kl}(R_S(G_Q)\,\|\,R(G_Q)) \leq \frac{1}{m-d_{\bar Q}}\left(\Big(1+\frac{1}{m-1}\Big)\mathrm{KL}(\bar Q\,\|\,P) + \frac{1}{2(m-1)} + \ln\frac{m}{\delta} + 3.5\right)\right] \geq 1-\delta$$
Theorem 8 recovers almost exactly the PAC-Bayes bound for sample compressed classifiers of Laviolette and Marchand [2007]. The key differences are an additional $\frac{1}{(m-d_{\bar Q})(m-1)}$-weighted KL-divergence term, $\ln(\frac{m}{\delta})$ instead of $\ln(\frac{m+1}{\delta})$, and additional trailing terms bounded by $\frac{4}{m-d_{\bar Q}}$. Note that the bound of Theorem 8 is derived in a relatively more straightforward manner with the Occam's Hammer criterion.
6 Conclusion
It has been shown that stochastic classifier selection is preferable to deterministic selection by the PAC-Bayes principle, resulting in tighter risk bounds on the averaged risk of classifiers under the learned posterior. Further, this observation resulted in tight bounds in the case of stochastic sample compressed classifiers [Laviolette and Marchand, 2007], also showing that sparsity considerations are of importance even in this scenario via the rescaled posterior. However, of immediate
relevance are the guarantees of the specific classifier output by such algorithms according to the
learned posterior and hence a point-wise version of this bound is indeed needed. We have derived
bounds for such randomized sample compressed classifiers by adapting Occam's Hammer
to the data-dependent sample compression settings. This has resulted in bounds on the specific classifier output by a sample compression learning algorithm according to the learned data-dependent
posterior and is more relevant in practice. Further, we also showed how the classical PAC-Bayes bound for the sample compressed Gibbs classifier can be recovered in a more direct manner, and that this compares favorably to the existing result of Laviolette and Marchand [2007].
Acknowledgments
The author would like to thank John Langford for interesting discussions.
References
Gilles Blanchard and François Fleuret. Occam's hammer. In Proceedings of the 20th Annual Conference on Learning Theory (COLT-2007), volume 4539 of Lecture Notes in Computer Science, pages 112-126, 2007.
Sally Floyd and Manfred Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269-304, 1995.
John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 3:273-306, 2005.
François Laviolette and Mario Marchand. PAC-Bayes risk bounds for stochastic averages and majority votes of sample-compressed classifiers. Journal of Machine Learning Research, 8:1461-1487, 2007.
François Laviolette, Mario Marchand, and Mohak Shah. Margin-sparsity trade-off for the set covering machine. In Proceedings of the 16th European Conference on Machine Learning, ECML 2005, volume 3720 of Lecture Notes in Artificial Intelligence, pages 206-217. Springer, 2005.
N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, Santa Cruz, CA, 1986.
Mario Marchand and John Shawe-Taylor. The Set Covering Machine. Journal of Machine Learning Research, 3:723-746, 2002.
David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999.
2,634 | 3,389 | Unsupervised Learning of Visual Sense Models for
Polysemous Words
Kate Saenko
MIT CSAIL
Cambridge, MA
[email protected]
Trevor Darrell
UC Berkeley EECS / ICSI
Berkeley, CA
[email protected]
Abstract
Polysemy is a problem for methods that exploit image search engines to build object category models. Existing unsupervised approaches do not take word sense
into consideration. We propose a new method that uses a dictionary to learn models of visual word sense from a large collection of unlabeled web data. The use
of LDA to discover a latent sense space makes the model robust despite the very
limited nature of dictionary definitions. The definitions are used to learn a distribution in the latent space that best represents a sense. The algorithm then uses the
text surrounding image links to retrieve images with high probability of a particular dictionary sense. An object classifier is trained on the resulting sense-specific
images. We evaluate our method on a dataset obtained by searching the web for
polysemous words. Category classification experiments show that our dictionarybased approach outperforms baseline methods.
1 Introduction
We address the problem of unsupervised learning of object classifiers for visually polysemous words.
Visual polysemy means that a word has several dictionary senses that are visually distinct. Web
images are a rich and free resource compared to traditional human-labeled object datasets. Potential
training data for arbitrary objects can be easily obtained from image search engines like Yahoo or
Google. The drawback is that multiple word meanings often lead to mixed results, especially for
polysemous words. For example, the query ?mouse? returns multiple senses on the first page of
results: ?computer? mouse, ?animal? mouse, and ?Mickey Mouse? (see Figure 1.) The dataset thus
obtained suffers from low precision of any particular visual sense.
Some existing approaches attempt to filter out unrelated images, but do not directly address polysemy. One approach involves bootstrapping object classifiers from labeled image data [9], others
cluster the unlabeled images into coherent components [6],[2]. However, most rely on a labeled seed
set of inlier-sense images to initialize bootstrapping or to select the right cluster. The unsupervised
approach of [12] bootstraps an SVM from the top-ranked images returned by a search engine, with
the assumption that they have higher precision for the category. However, for polysemous words,
the top-ranked results are likely to include several senses.
We propose a fully unsupervised method that specifically takes word sense into account. The only
input to our algorithm is a list of words (such as all English nouns, for example) and their dictionary
entries. Our method is multimodal, using both web search images and the text surrounding them
in the document in which they are embedded. The key idea is to learn a text model of the word
sense, using an electronic dictionary such as Wordnet together with a large amount of unlabeled
text. The model is then used to retrieve images of a specific sense from the mixed-sense search
results. One application is an image search filter that automatically groups results by word sense for
easier navigation for the user. However, our main focus in this paper is on using the re-ranked images
Figure 1: Which sense of "mouse"? Mixed-sense images returned from an image keyword search.
as training data for an object classifier. The resulting classifier can predict not only the English word
that best describes an input image, but also the correct sense of that word.
A human operator can often refine the search by using more sense-specific queries, for example,
"computer mouse" instead of "mouse". We explore a simple method that does this automatically
by generating sense-specific search terms from entries in Wordnet (see Section 2.3). However,
this method must rely on one- to three-word combinations and is therefore brittle. Many of the
generated search terms are too unnatural to retrieve any results, e.g., "percoid bass". Some retrieve many unrelated images, such as the term "ticker" used as an alternative to "watch". We regard this
method as a baseline to our main approach, which overcomes these issues by learning a model of
each sense from a large amount of text obtained by searching the web. Web text is more natural
and is a closer match to the text surrounding web images than dictionary entries, which allows us to learn more robust models. Each dictionary sense is represented in the latent space of hidden "topics"
learned empirically for the polysemous word.
To evaluate our algorithm, we collect a dataset by searching the Yahoo Search engine for five polysemous words: "bass", "face", "mouse", "speaker" and "watch". Each of these words has anywhere
from three to thirteen noun senses. Experimental evaluation on this dataset includes both retrieval
and classification of unseen images into specific visual senses.
2 Model
The inspiration for our method comes from the fact that text surrounding web images indexed by a
polysemous keyword can be a rich source of information about the sense of that word. The main
idea is to learn a probabilistic model of each sense, as defined by entries in a dictionary (in our case,
Wordnet), from a large amount of unlabeled text. The use of a dictionary is key because it frees us
from needing a labeled set of images to learn the visual sense model.
Since this paper is concerned with objects rather than actions, we restrict ourselves to entries
for nouns. Like standard word sense disambiguation (WSD) methods, we make a one-sense-perdocument assumption [14], and rely on words co-occurring with the image in the HTML document
to indicate that sense. Our method consists of three steps: 1) discovering latent dimensions in text
associated with a keyword, 2) learning probabilistic models of dictionary senses in that latent space,
and 3) using the text-based sense models to construct sense-specific image classifiers. We will now
describe each step in detail.
2.1 Latent Text Space
Unlike words in text commonly used in WSD, image links are not guaranteed to be surrounded by
grammatical prose. This makes it difficult to extract structured features such as part-of-speech tags.
We therefore take a bag-of-words approach, using all available words near the image link to evaluate
the probability of the sense. The first idea is to use a large collection of such bags-of-words to learn
coherent dimensions which align with different senses or uses of the word.
We could use one of several existing techniques to discover latent dimensions in documents consisting of bags-of-words. We choose to use Latent Dirichlet Allocation, or LDA, as introduced by Blei et al. [4]. LDA discovers hidden topics, i.e., distributions over discrete observations (such as words), in the data. Each document is modeled as a mixture of topics z ∈ {1, ..., K}. A given collection of M documents, each containing a bag of N_d words, is assumed to be generated by the following process: First, we sample the parameters φ_j of a multinomial distribution over words from a Dirichlet prior with parameter β for each topic j = 1, ..., K. Then, for each document d, we sample the parameters θ_d of a multinomial distribution over topics from a Dirichlet prior with parameter α. Finally, for each word token i, we choose a topic z_i from the multinomial θ_d, and then choose a word w_i from the multinomial φ_{z_i}. The probability of generating a document is defined as
$$P(w_1, ..., w_{N_d} \mid \phi, \theta_d) = \prod_{i=1}^{N_d} \sum_{z=1}^{K} P(w_i \mid z, \phi)\, P(z \mid \theta_d) \qquad (1)$$
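The generative process can be summarized in a few lines of Python (a sketch with our own variable names, using NumPy's Dirichlet and multinomial samplers):

```python
import numpy as np

def generate_lda_corpus(M, N_d, K, V, alpha, beta, seed=0):
    # phi_j ~ Dir(beta) per topic; theta_d ~ Dir(alpha) per document;
    # then z ~ Mult(theta_d) and w ~ Mult(phi_z) for each word token.
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet([beta] * V, size=K)        # K x V topic-word dists
    docs = []
    for _ in range(M):
        theta = rng.dirichlet([alpha] * K)         # topic mixture theta_d
        z = rng.choice(K, size=N_d, p=theta)       # topic per token
        docs.append([rng.choice(V, p=phi[t]) for t in z])
    return docs, phi
```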
Our initial approach was to learn hidden topics using LDA directly on the words surrounding the
images. However, while the resulting topics were often aligned along sense boundaries, the approach
suffered from over-fitting, due to the irregular quality and low quantity of the data. Often, the only
clue to the image's sense is a short text fragment, such as "fishing with friends" for an image returned for the query "bass". To alleviate the overfitting problem, we instead create an additional dataset of
text-only web pages returned from regular web search. We then learn an LDA model on this dataset
and use the resulting distributions to train a model of the dictionary senses, described next.
2.2 Dictionary Sense Model
We use the limited text available in the Wordnet entries to relate dictionary senses to the topics formed above. For example, sense 1 of "bass" contains the definition "the lowest part of the musical range." To these words we also add the synonyms (e.g., "pitch"), the hyponyms, if they exist, and the first-level hypernyms (e.g., "sound property"). We denote the bag-of-words extracted from such a dictionary entry for sense s as e_s = w₁, w₂, ..., w_{E_s}, where E_s is the number of words in the bag.
The model is trained as follows: Given a query word with sense s ∈ {1, 2, ..., S}, we define the likelihood of a particular sense given the topic j as
$$P(s \mid z = j) \propto \frac{1}{E_s} \sum_{i=1}^{E_s} P(w_i \mid z = j), \qquad (2)$$
or the average likelihood of words in the definition. For a web image with an associated text document d = w₁, w₂, ..., w_D, the model computes the probability of a particular sense as
$$P(s \mid d) = \sum_{j=1}^{K} P(s \mid z = j)\, P(z = j \mid d). \qquad (3)$$
The above requires the distribution of LDA topics in the text context, P(z|d), which we compute by marginalizing across words and using Bayes' rule:
$$P(z = j \mid d) = \sum_{i=1}^{D} P(z = j \mid w_i) = \sum_{i=1}^{D} \frac{P(w_i \mid z = j)\, P(z = j)}{P(w_i)}, \qquad (4)$$
and also normalizing for the length of the text context. Finally, we define the probability of a particular dictionary sense given the image to be equal to P(s|d). Thus, our model is able to assign sense probabilities to images returned from the search engine, which in turn allows us to group the images according to sense.
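Putting Equations 2-4 together, sense scoring reduces to a few array operations; the following is a sketch under our own naming and array-layout assumptions (p_w_given_z is the K×V matrix of LDA topic-word probabilities):

```python
import numpy as np

def sense_probabilities(context_words, sense_bags, p_w_given_z, p_z, vocab):
    # context_words: words around the image link; sense_bags: one bag e_s
    # of Wordnet definition/synonym/hypernym words per dictionary sense.
    idx = [vocab[w] for w in context_words if w in vocab]
    p_w = p_w_given_z.T @ p_z                       # marginal P(w)
    p_z_d = (p_w_given_z[:, idx] * p_z[:, None] / p_w[idx]).sum(axis=1)
    p_z_d /= p_z_d.sum()                            # P(z|d), Eq. 4
    scores = []
    for bag in sense_bags:
        sidx = [vocab[w] for w in bag if w in vocab]
        p_s_z = p_w_given_z[:, sidx].mean(axis=1)   # P(s|z), Eq. 2
        scores.append(float(p_s_z @ p_z_d))         # P(s|d), Eq. 3
    return np.array(scores) / sum(scores)
```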
2.3 Visual Sense Model
The last step of our algorithm uses the sense model learned in the first two steps to generate training
data for an image-based classifier. The choice of classifier is not a crucial part of the algorithm. We
choose to use a discriminative classifier, in particular, a support vector machine (SVM), because of
its ability to generalize well in high-dimensional spaces without requiring a lot of training data.
Table 1: Dataset Description: sizes of the three datasets, and distribution of ground truth sense labels in the keyword dataset.

category | text-only | sense term | keyword | positive (good) | negative (partial, unrelated)
Bass | 984 | 357 | 678 | 146 | 532
Face | 961 | 798 | 756 | 130 | 626
Mouse | 987 | 726 | 768 | 198 | 570
Speaker | 984 | 2270 | 660 | 235 | 425
Watch | 936 | 2373 | 777 | 512 | 265
For each particular sense s, the model re-ranks the images according to the probability of that sense,
and selects the N highest-ranked examples as positive training data for the SVM. The negative training data is drawn from a "background" class, which in our case is the union of all other objects that
we are asked to classify. We represent images as histograms of visual words, which are obtained by
detecting local interest points and vector-quantizing their descriptors using a fixed visual vocabulary.
We compare our model with a simple baseline method that attempts to refine the search by automatically generating search terms from the dictionary entry. Experimentally, it was found that queries consisting of more than about three terms returned very few images. Consequently, the terms are generated by appending the polysemous word to its synonyms and first-level hypernyms. For example, sense 4 of "mouse" has synonym "computer mouse" and hypernym "electronic device", which produces the terms "computer mouse" and "mouse electronic device". An SVM classifier is then trained on the returned images.
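A sketch of this term generation using NLTK's WordNet interface (the exact filtering and formatting are our guesses):

```python
from nltk.corpus import wordnet as wn

def sense_search_terms(word):
    # One list of queries per noun sense: the sense's own synonyms, plus
    # the keyword appended to each first-level hypernym, e.g.
    # "computer mouse" and "mouse electronic device".
    terms = []
    for synset in wn.synsets(word, pos=wn.NOUN):
        synonyms = [l.replace('_', ' ') for l in synset.lemma_names()
                    if l.lower() != word]
        hypernyms = [l.replace('_', ' ')
                     for h in synset.hypernyms() for l in h.lemma_names()]
        terms.append(synonyms + ['%s %s' % (word, h) for h in hypernyms])
    return terms
```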
3 Datasets
To train and evaluate the outlined algorithms, we use three datasets: image search results using the
given keyword, image search results using sense-specific search terms, and text search results using
the given keyword.
The first dataset was collected automatically by issuing queries to the Yahoo Image Search™ website
and downloading the returned images and HTML web pages. The keywords used were: "bass", "face", "mouse", "speaker" and "watch". In the results, "bass" can refer to a fish or a musical term, as in "bass guitar"; "face" has a multitude of meanings, as in "human face", "animal face", "mountain face", etc.; "speaker" can refer to audio speakers or human speakers; "watch" can mean a timepiece, the act of watching, as in "hurricane watch", or the action, as in "watch out!" Samples
that had dead page links and/or corrupted images were removed from the dataset.
The images were labeled by a human annotator with one sense per keyword. The annotator labeled the presence of the following senses: "bass" as in fish, "face" as in a human face, "mouse" as in computer mouse, "speaker" as in an audio output device, and "watch" as in a timepiece. The annotator saw only the images, and not the text or the dictionary definitions. The labels used were
0 : unrelated, 1 : partial, or 2 : good. Images where the object was too small or occluded were
labeled partial. For evaluation, we used only good labels as positive, and grouped partial and
unrelated images into the negative class. The labels were only used in testing, and not in training.
The second image search dataset was collected in a similar manner but using the generated sensespecific search terms. The third, text-only dataset was collected via regular web search for the
original keywords. Neither of these two datasets were labeled. Table 1 shows the size of the datasets
and distribution of labels.
4 Features
When extracting words from web pages, all HTML tags are removed, and the remaining text is
tokenized. A standard stop-word list of common English words, plus a few domain-specific words
like "jpg", is applied, followed by a Porter stemmer [11]. Words that appear only once and the actual
word used as the query are pruned. To extract text context words for an image, the image link is
located automatically in the corresponding HTML page. All word tokens in a 100-token window
surrounding the location of the image link are extracted. The text vocabulary size used for the sense
model ranges between 12K-20K words for different keywords.
To extract image features, all images are resized to 300 pixels in width and converted to grayscale.
Two types of local feature points are detected in the image: edge features [6] and scale-invariant
salient points. In our experiments, we found that using both types of points boosts classification
performance relative to using just one type. To detect edge points, we first perform Canny edge
detection, and then sample a fixed number of points along the edges from a distribution proportional
to edge strength. The scales of the local regions around points are sampled uniformly from the range
of 10-50 pixels. To detect scale-invariant salient points, we use the Harris-Laplace [10] detector
with the lowest strength threshold set to 10. Altogether, 400 edge points and approximately the
same number of Harris-Laplace points are detected per image. A 128-dimensional SIFT descriptor
is used to describe the patch surrounding each interest point. After extracting a bag of interest point
descriptors for each image, vector quantization is performed. A codebook of size 800 is constructed
by k-means clustering a randomly chosen subset of the database (300 images per keyword), and
all images are converted to histograms over the resulting visual words. To be precise, the "visual words" are the cluster centers (codewords) of the codebook. No spatial information is included in the image representation; rather, it is treated as a bag-of-words.
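A minimal sketch of the quantization step, assuming the 128-d SIFT descriptors have already been extracted (scikit-learn's KMeans stands in for whatever k-means implementation was used):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptor_sample, n_words=800, seed=0):
    # k-means over a random subset of descriptors; the cluster centers
    # (codewords) are the visual words.
    return KMeans(n_clusters=n_words, random_state=seed).fit(descriptor_sample)

def bow_histogram(codebook, descriptors):
    # Quantize each descriptor to its nearest codeword, then histogram.
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```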
5 Experiments
5.1 Re-ranking Image Search Results
In the first set of experiments, we evaluate how well our text-based sense model can distinguish
between images depicting the correct visual sense and all the other senses. We train a separate LDA
model for each keyword on the text-only dataset, setting the number of topics K to 8 in each case.
Although this number is roughly equal to the average number of senses for the given keywords, we neither expect nor require each topic to align with one particular sense. In fact, multiple topics can
represent the same sense. Rather, we treat K as the dimensionality of the latent space that the model
uses to represent senses. While our intuition is that it should be on the order of the number of senses,
it can also be set automatically by cross-validation. In our initial experiments, different values of K
did not significantly alter the results.
To perform inference in LDA, a number of approximate inference algorithms can be applied. We use the Gibbs sampling approach of [7], implemented in the Matlab Topic Modeling Toolbox [13]. We used symmetric Dirichlet priors with scalar hyperparameters α = 50/K and β = 0.01, which have the effect of smoothing the empirical topic distribution, and 1000 iterations of Gibbs sampling.
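For reference, a compact collapsed Gibbs sampler with these hyperparameters looks roughly as follows (our own sketch, not the toolbox's code; docs are lists of word ids):

```python
import numpy as np

def gibbs_lda(docs, K, V, iters=1000, beta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    alpha = 50.0 / K                                  # symmetric priors
    ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):                    # initialize counts
        for n, w in enumerate(doc):
            k = z[d][n]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]                           # remove current token
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())      # resample its topic
                z[d][n] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    phi = (nkw + beta) / (nk[:, None] + V * beta)     # P(w|z)
    return phi, (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)
```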
The LDA model provides us with topic distributions P (w|z) and P (z). We complete training the
model by computing P (s|z) for each sense s in Wordnet, as in Equation 2. We train a separate model
for each keyword. We then compute P (s|d) for all text contexts d associated with images in the
keyword dataset, using Equation 3, and rank the corresponding images according to the probability
of each sense. Since we only have ground truth labels for a single sense per keyword (see Section
3), we evaluate the retrieval performance for that particular ground truth sense. Figure 2 shows
the resulting ROCs for each keyword, computed by thresholding P (s|d). For example, the first
plot shows ROCs obtained by the eight models corresponding to each of the senses of the keyword
"bass". The thick blue curve is the ROC obtained by the original Yahoo retrieval order. The other
thick curves show the dictionary sense models that correspond to the ground truth sense (a fish). The
results demonstrate that we are able to learn a useful sense model that retrieves far more positiveclass images than the original search engine order. This is important in order for the first step of
our method to be able to improve the precision of training data used in the second step. Note that,
for some keywords, there are multiple dictionary definitions that are difficult to distinguish visually,
for example, "human face" and "facial expression". In our evaluation, we did not make such fine-grained distinctions, but simply chose the sense that applied most generally.
In interactive applications, the human user can specify the intended sense of the word by providing
an extra keyword, such as by saying or typing "bass fish". The correct dictionary sense can then be
selected by evaluating the probability of the extra keyword under each sense model, and choosing
the highest-scoring one.
Figure 2: Retrieval of the ground truth sense from keyword search results. Thick blue lines are the
ROCs for the original Yahoo search ranks. Other thick lines are the ROCs obtained by our dictionary
model for the true senses, and thin lines are the ROCs obtained for the other senses.
5.2 Classifying Unseen Images
The goal of the second set of experiments is to evaluate the dictionary-based object classifier. We
train a classifier for the object corresponding to the ground-truth sense of each polysemous keyword
in our data. The classifiers are binary, assigning a positive label to the correct sense and a negative
label to incorrect senses and all other objects. The top N unlabeled images ranked by the sense model
are selected as positive training images. The unlabeled pool used in our model consists of both the
keyword and the sense-term datasets. N negative images are chosen at random from positive data
for all other keywords. A binary SVM with an RBF kernel is trained on the image features, with the
C and γ parameters chosen by four-fold cross-validation. The baseline search-terms algorithm that we compare against is trained on a random sample of N images from the sense-term dataset.
Figure 3: Classification accuracy for the search-terms baseline (terms) and our dictionary model (dict): (a) 1-SENSE test set, (b) MIX-SENSE test set, (c) 1-SENSE average accuracy vs. the number of training images N.
Recall
that this dataset was collected by simply searching with word combinations extracted from the target
sense definition. Training on the first N images returned by Yahoo did not qualitatively change the
results.
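A sketch of the classifier-training step using scikit-learn (the grid values are our assumptions; only the RBF kernel and four-fold cross-validation are stated in the text):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_sense_classifier(pos_hists, neg_hists):
    # pos_hists: bag-of-visual-words histograms of the top-N re-ranked
    # images; neg_hists: random positives of the other keyword objects.
    X = np.vstack([pos_hists, neg_hists])
    y = np.r_[np.ones(len(pos_hists)), np.zeros(len(neg_hists))]
    grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.01, 0.1, 1, 10]}
    search = GridSearchCV(SVC(kernel='rbf'), grid, cv=4)
    return search.fit(X, y).best_estimator_
```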
We evaluate the method on two test cases. In the first case, the negative class consists of only the
ground-truth senses of the other objects. We refer to this as the 1-SENSE test set. In the second
case, the negative class also includes other senses of the given keyword. For example, we test
detection of "computer mouse" among other keyword objects as well as "animal mouse", "Mickey Mouse", and other senses returned by the search, including unrelated images. We refer to this as the
MIX-SENSE test set. Figure 3 compares the classification accuracy of our classifier to the baseline
search-terms classifier. Average accuracy across ten trials with different random splits into train and
test sets is shown for each object. Figure 3(a) shows results on 1-SENSE and 3(b) on MIX-SENSE,
with N equal to 250. Figure 3(c) shows 1-SENSE results averaged over the categories, at different
numbers of training images N . In both test cases, our dictionary method significantly improves
on the baseline algorithm. As the per-object results show, we do much better for three of the five
objects, and comparably for the other two. One explanation why we do not see a large improvement
in the latter cases is that the automatically generated sense-specific search terms happened to return
relatively high-precision images. However, in the other three cases, the term generation fails while
our model is still able to capture the dictionary sense.
6 Related Work
A complete review of WSD work is beyond the scope of the present paper. Yarowsky [14] proposed
an unsupervised WSD method, and suggested the use of dictionary definitions as an initial seed.
Several approaches to building object models using image search results have been proposed, although none have specifically addressed polysemous words. Fei-Fei et al. [9] bootstrap object classifiers from existing labeled image data. Fergus et al. [6] cluster in the image domain and use a small validation set to select a single positive component. Schroff et al. [12] incorporate text features (such as whether the keyword appears in the URL) and use them to re-rank the images before training the image model. However, their text ranker is category-independent and does not learn which words are predictive of a specific sense. Berg et al. [2] discover topics using LDA in
the text domain, and then use them to cluster the images. However, their method requires manual
intervention by the user to sort the topics into positive and negative for each category. The combination of image and text features is used in some web retrieval methods (e.g., [5]); however, our work
is focused not on instance-based image retrieval, but on category-level modeling.
A related problem is modeling images annotated with words, such as the caption "sky, airplane", which are assigned by a human labeler. Barnard et al. [1] use visual features to help disambiguate word senses in such loosely labeled data. Models of annotated images assume that there
is a correspondence between each image region and a word in the caption (e.g. Corr-LDA, [3]).
Such models predict words, which serve as category labels, based on image content. In contrast,
our model predicts a category label based on all of the words in the web image?s text context. In
general, a text context word does not necessarily have a corresponding visual region, and vice versa.
In work closely related to Corr-LDA, a People-LDA [8] model is used to guide topic formation in
news photos and captions, using a specialized face recognizer. The caption data is less constrained
than annotations, including non-category words, but still far more constrained than text contexts.
7 Conclusion
We introduced a model that uses a dictionary and text contexts of web images to disambiguate image
senses. To the best of our knowledge, it is the first use of a dictionary in either web-based image
retrieval or classifier learning. Our approach harnesses the large amount of unlabeled text available
through keyword search on the web in conjunction with the dictionary entries to learn a generative
model of sense. Our sense model is purely unsupervised, and is appropriate for web images. The use
of LDA to discover a latent sense space makes the model robust despite the very limited nature of
dictionary definitions. The definition text is used to learn a distribution over the empirical text topics
that best represents the sense. As a final step, a discriminative classifier is trained on the re-ranked
mixed-sense images that can predict the correct sense for novel images.
We evaluated our model on a large dataset of over 10,000 images consisting of search results for
five polysemous words. Experiments included retrieval of the ground truth sense and classification of unseen images. On the retrieval task, our dictionary model improved on the baseline search
engine precision by re-ranking the images according to sense probability. On the classification
task, our method outperformed a baseline method that attempts to refine the search by generating
sense-specific search terms from WordNet entries. Classification also improved when the test objects
included the other senses of the keyword, making distinctions such as "loudspeaker" vs. "invited
speaker". Of course, we would not expect the dictionary senses to always produce accurate visual models, as many senses do not refer to objects (e.g. "bass voice"). Future work will include
annotating the data with more senses to further explore the "visualness" of some of them.
References
[1] K. Barnard, K. Yanai, M. Johnson, and P. Gabbur. Cross modal disambiguation. In Toward Category-Level Object Recognition, J. Ponce, M. Hebert, C. Schmid, eds., Springer-Verlag LNCS Vol. 4170, 2006.
[2] T. Berg and D. Forsyth. Animals on the web. In Proc. CVPR, 2006.
[3] D. Blei and M. Jordan. Modeling annotated data. In Proc. International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127-134. ACM Press, 2003.
[4] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. J. Machine Learning Research, 3:993-1022, Jan 2003.
[5] Z. Chen, L. Wenyin, F. Zhang and M. Li. Web mining for web image retrieval. J. of the American Society for Information Science and Technology, 51:10, pages 831-839, 2001.
[6] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. In Proc. ICCV, 2005.
[7] T. Griffiths and M. Steyvers. Finding scientific topics. In Proc. of the National Academy of Sciences, 101 (suppl. 1), pages 5228-5235, 2004.
[8] V. Jain, E. Learned-Miller, A. McCallum. People-LDA: Anchoring topics to people using face recognition. In Proc. ICCV, 2007.
[9] J. Li, G. Wang, and L. Fei-Fei. OPTIMOL: automatic Object Picture collecTion via Incremental MOdel Learning. In Proc. CVPR, 2007.
[10] K. Mikolajczyk and C. Schmid. Scale and affine invariant interest point detectors. IJCV, 2004.
[11] M. Porter. An algorithm for suffix stripping. Program, 14(3), pp. 130-137, 1980.
[12] F. Schroff, A. Criminisi and A. Zisserman. Harvesting image databases from the web. In Proc. ICCV, 2007.
[13] M. Steyvers and T. Griffiths. Matlab Topic Modeling Toolbox. http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm
[14] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. ACL, 1995.
2,635 | 339 | Oscillation Onset in Neural Delayed Feedback
Andre Longtin
Complex Systems Group and Center for Nonlinear Studies
Theoretical Division B213, Los Alamos National Laboratory
Los Alamos, NM 87545
Abstract
This paper studies dynamical aspects of neural systems with delayed negative feedback modelled by nonlinear delay-differential equations. These
systems undergo a Hopf bifurcation from a stable fixed point to a stable limit cycle oscillation as certain parameters are varied. It is shown
that their frequency of oscillation is robust to parameter variations and
noisy fluctuations, a property that makes these systems good candidates
for pacemakers. The onset of oscillation is postponed by both additive
and parametric noise in the sense that the state variable spends more time
near the fixed point than it would in the absence of noise. This is also the
case when noise affects the delayed variable, i.e. when the system has a
faulty memory. Finally, it is shown that a distribution of delays (rather
than a fixed delay) also stabilizes the fixed point solution.
1
INTRODUCTION
In this paper, we study the dynamics of a class of neural delayed feedback models
which have been used to understand equilibrium and oscillatory behavior in recurrent inhibitory circuits (Mackey and an der Heiden, 1984; Plant, 1981; Milton et
al., 1990) and brainstem reflexes such as the pupil light reflex (Longtin and Milton,
1989a,b; Milton et al., 1989; Longtin et al., 1990; Longtin, 1991) and respiratory
control (Glass and Mackey, 1979). These models are framed in terms of first-order
nonlinear delay-differential equations (DDE's) in which the state variable may represent, e.g., a membrane potential, a mean firing rate of a population of neurons or
a muscle activity. For example, the negative feedback dynamics of the human pupil
light reflex have been shown to be appropriately modelled by the following equation for pupil area (related to the activity of the iris muscles through the nonlinear
monotonically decreasing function g(A) ) (see Longtin and Milton, 1989a,b):
dg(A)/dA · dA(t)/dt + α g(A) = γ ln[ I(t−τ) A(t−τ) / φ̄ ]    (1)

I(t) is the external light intensity and φ̄ is the retinal light flux below which no
pupillary response occurs. The left hand side of Eq.(1) governs the response of the
system to the state-dependent forcing (i.e. stimulation) embodied in the term on
the right-hand side. The delay τ is essential to the understanding of the dynamics
of this reflex. It accounts for the fact that the iris muscles move in response to the
retinal light flux variations occurring about 300 msec earlier.
2
FOCUS AND MOTIVATION
For the sake of discussion, we shall focus on the following prototypical model of
delayed negative feedback

dx(t)/dt + α x(t) = f(μ; x(t − τ))    (2)

where μ is a vector of parameters and f is a monotonically decreasing function.
This equation typically exhibits a Hopf bifurcation (i.e. a qualitative change in
dynamics from a stable equilibrium solution to a stable limit cycle oscillation) as
the slope of the feedback function or the delay is increased past critical values.
Autonomous (as opposed to externally forced) oscillations are frequently observed
in real neural delayed feedback systems, which suggests that these systems may
exhibit a Hopf bifurcation. Further, it is clear that these systems operate despite
noisy environmental fluctuations. A clear understanding of the properties of these
systems can reveal useful information about their structure and the origin of the
"noisy" sources, as well as enable us to extract general functioning principles for
systems organized according to this scheme.
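To make the onset concrete, the following minimal Python sketch (our illustration, not part of the original analysis) integrates Eq.(2) by a plain Euler scheme with a constant initial history, using the negative feedback of Eq.(3) and the parameter values quoted in Section 4, where the deterministic bifurcation occurs at n = 8.18:

import numpy as np

def simulate_dde(alpha=3.21, lam=200.0, theta=50.0, tau=0.3,
                 n=10.0, dt=1e-3, t_end=20.0, x0=50.0):
    lag = int(round(tau / dt))            # delay expressed in time steps
    steps = int(round(t_end / dt))
    x = np.full(steps + lag, x0)          # constant history on [-tau, 0]
    for t in range(lag, steps + lag - 1):
        f = lam * theta**n / (theta**n + x[t - lag]**n)
        x[t + 1] = x[t] + dt * (-alpha * x[t] + f)
    return x[lag:]                        # drop the initial history

for n in (6, 12):                         # below / above the Hopf point
    x = simulate_dde(n=n)
    print("n = %2d: late-time range of x = %.2f" % (n, x[-5000:].ptp()))

Below the bifurcation the late-time range should shrink toward zero (fixed point); above it, it should settle near the limit cycle amplitude.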
We now focus our attention on three different dynamical aspects of these systems:
1) the stability of the oscillation frequency and amplitude to parameter variations
and to noise; 2) the postponement of oscillation onset due to noise; and 3) the
stabilization of the equilibrium behavior in the more realistic case involving a distribution of delays rather than a single fixed delay.
3
FREQUENCY AND AMPLITUDE
Under certain conditions, the neural delayed feedback system will settle onto equilibrium behavior after an initial transient. Mathematically, this corresponds to the
fixed point solution x* of Eq.(2) obtained by setting ẋ = 0. A supercritical Hopf
bifurcation occurs in Eq.(2) when the slope of the feedback function at this fixed
point, |df/dx| at x = x*, exceeds some value k₀ called the bifurcation value. It can also occur
when the delay exceeds a critical value. The case where the parameter α increases
is particularly interesting because the system can undergo a Hopf bifurcation at
α = α₁ followed by a restabilization of the fixed point through a reverse Hopf
bifurcation at α = α₂ > α₁ (see also Mackey, 1979).
Numerical simulations of Eq.(2) around the Hopf bifurcation point k₀ reveal that
the frequency is relatively constant while the amplitude Ampl grows as √(k − k₀).
However, in oscillatory time series from real neural delayed feedback systems, the
frequency and amplitude fluctuate near the bifurcation point, with relative amplitude fluctuations being generally larger than relative frequency fluctuations. This
point has been illustrated using data from the human pupil light reflex whose feedback gain is under experimental control (see Longtin, 1991; Longtin et al., 1990).
In the case of the pupil light reflex, the variations in the mean and standard deviation of amplitude and period accompanying increases in the bifurcation parameter
(the external gain) have been explained in the hypothesis that "neural noise" is
affecting the deterministic dynamics of the system. This noise is strongly amplified
near the bifurcation point where the solutions are only weakly stable (Longtin et
al., 1990). Thus the coupling of the noise to the system is most likely responsible
for the aperiodicity of the observed data.
The fact that the frequency is not significantly affected by the noise nor by variation
of the bifurcation parameter (especially in comparison to the amplitude fluctuations) suggests that neural delayed feedback circuits may be ideally suited to serve
as pacemakers. The frequency stability in regulatory biological systems has previously been emphasized by Rapp (1981) in the context of biochemical regulation.
4
STABILIZATION BY NOISE
In the presence of noise, oscillations can be seen in the solution of Eq.(2) even
when the bifurcation value is below that at which the deterministic bifurcation
occurs. This does not mean however that the bifurcation has occurred, since these
oscillations simply become more and more prominent as the bifurcation parameter is
increased, and no qualitative change in the solution can be seen. Such a qualitative
change does occur when the solution is viewed from a different standpoint. One
can in fact construct a histogram of the values taken on by the solution of the
model differential equation (or by the data: see Longtin, 1991). The value of this
(normalized) histogram at a given point in the state space (e.g. of pupil area values)
provides a measure of the fraction of the time spent by the system in the vicinity
of this point . The onset of oscillation can then be detected by a qualitative change
in this histogram, specifically when it goes from unimodal to bimodal (Longtin et
al., 1990). The distance between the two humps in the bimodal case is a measure
of the limit cycle amplitude. For short time series however (as is often the case in
neurophysiology), it is practically impossible to resolve this distance and thus to
ascertain whether a Hopf bifurcation has occurred.
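As a purely illustrative sketch of this diagnostic (this is not the procedure of Longtin et al., 1990; the bin count and smoothing window are arbitrary choices of ours), one might count histogram modes as follows:

import numpy as np

def histogram_modes(x, bins=60):
    counts, _ = np.histogram(x, bins=bins, density=True)
    c = np.convolve(counts, np.ones(5) / 5, mode="same")   # light smoothing
    peaks = [k for k in range(1, bins - 1)
             if c[k] > c[k - 1] and c[k] >= c[k + 1]]      # interior maxima
    return len(peaks)   # 1 suggests a fixed point, 2 a limit cycle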
Intensive simulations of Eq.(2) with either additive noise (i.e. added to Eq.(2)) or
parametric noise (e.g. on the magnitude of the feedback function) reveal that the
statistical limit cycle amplitude (the distance between the two humps or "order
parameter") is smaller than the amplitude in the absence of noise (Longtin et al.,
1990). The bifurcation diagram is similar to that in Figure 1. This implies that the
solution spends more time near the fixed point, i.e. that the fixed point is stabilized
by the noise (i.e. in the absence of noise, the limit cycle is larger and the system
spends less time near the unstable fixed point). In other words, the onset of the
Hopf bifurcation is postponed in the presence of these types of noise. Hence the
noise level in a neural system, whatever its source, may in fact control the onset of
an oscillation.
Figure 1. Magnitude of the Order Parameter as a Function of the Bifurcation
Parameter n for Noise on the Delayed State of the System.
In Figure 1 it is shown that the Hopf bifurcation is also postponed (the bifurcation
curve is shifted to higher parameter values with respect to the deterministic curve)
when the noise is applied to the delayed state variable x(t − τ) and f in Eq.(2) is
of the form (negative feedback):

f = λ θⁿ / (θⁿ + xⁿ(t − τ))    (3)
For parameter values α = 3.21, λ = 200, θ = 50, τ = 0.3, the deterministic Hopf
bifurcation occurs at n = 8.18. Colored (Ornstein-Uhlenbeck type) Gaussian noise
of standard deviation σ = 1.5 and correlation time 1 sec was added to the variable
x(t − τ). This numerical calculation can be interpreted as a simulation of the
behavior of a neural delayed feedback system with bad memory (i.e. in which there
is a small error on the value recalled from the past). Thus, faulty memory also
stabilizes the fixed point.
5
DISTRIBUTED DELAYS
The use of a single fixed delay in models of delayed feedback is often a good approximation and strongly warranted in a simple circuit comprising only a small number
of cells. However, neural systems often have a spatial extent due to the presence of
many parallel pathways in which the axon sizes are distributed according to a certain probability density. This leads to a distribution of conduction velocities down
these pathways and therefore to a distribution of propagation delays. In this case,
the dynamics are more appropriately modelled by an integro-differential equation
of the form

dx/dt + α x(t) = f(μ; z(t), x(t)),    z(t) = ∫_{−∞}^{t} K(t − u) x(u) du    (4)
The extent to which values of the state variable in the past affect its present evolution is determined by the kernel K(t). The fixed delay case corresponds to choosing
the kernel to be a Dirac delta distribution.
We have looked at the effect of a distributed delay on the Hopf bifurcation in our
prototypical delayed feedback system Eq.(2). Specifically, we have considered the
case where the kernel in Eq.(4) has the form of a gamma distribution

K_a^m(t) = (a^{m+1} / m!) t^m e^{−at},    a, m > 0.    (5)

The average delay of this kernel is T = (m+1)/a, and the kernel has the property that it
converges to the delta function in the limit where m and a go to infinity while
keeping the ratio T constant. For a kernel of a given order it is possible to convert
the DDE Eq.(2) into a set of (m+2) coupled ordinary differential equations (ODE's)
which approximate the DDE (an infinite set of ODE's is in this case equivalent to the
original DDE) (see Fargue, 1973; MacDonald, 1978; Cooke and Grossman, 1982). A sketch of this conversion is given below.
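The following Python sketch illustrates this "linear chain trick" for the gamma kernel of Eq.(5); the feedback function f (assumed here to act through z only) and the forward Euler integrator are our illustrative assumptions. Auxiliary variables u_0, ..., u_m obey du_0/dt = a(x − u_0) and du_j/dt = a(u_{j−1} − u_j), with z = u_m, giving m+2 coupled ODEs:

import numpy as np

def rhs(state, alpha, a, m, f):
    x, u = state[0], state[1:]          # u holds the m+1 chain variables
    dx = -alpha * x + f(u[-1])          # z(t) = u_m(t)
    du = np.empty(m + 1)
    du[0] = a * (x - u[0])
    du[1:] = a * (u[:-1] - u[1:])
    return np.concatenate(([dx], du))

def integrate(alpha, a, m, f, x0=1.0, dt=1e-3, t_end=50.0):
    state = np.full(m + 2, x0)          # x and all u_j start at x0
    for _ in range(int(t_end / dt)):    # plain forward Euler
        state = state + dt * rhs(state, alpha, a, m, f)
    return state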
We have investigated the occurrence of a Hopf bifurcation in the (m + 2) ODE's as
a function of the order m of the memory kernel (keeping T equal to the fixed delay
of the DDE being approximated). This involves doing a stability analysis around
the fixed point of the (m + 2) order system of ODE's and numerically determining
the value of the bifurcation parameter n at which the Hopf bifurcation occurs.
The result is shown in Figure 2, where we have plotted n versus the order m of
approximation. Note that at least a 3 dimensional system of ODE's is required for
a Hopf bifurcation to occur in such a system. Note also the fast convergence of n
to the bifurcation value for the DDE (5.04). These calculations were done for the
Mackey-Glass equation

dx/dt + α x(t) = λ θⁿ x(t − τ) / (θⁿ + xⁿ(t − τ))    (6)

with parameters θ = 1, α = 2, λ = 2, τ = 2 and n ∈ (1, 20). This equation is a
model for mixed feedback dynamics (i.e. a combination of positive and negative
feedback involving a single-humped feedback function). It displays the same qualitative features as Eq.(2) with the feedback given by Eq.(3) at the Hopf bifurcation
and was chosen for ease of computation since parameters can be chosen such that
the fixed point does not depend on the bifurcation parameter.
We can see that, for a memory kernel of a given order, the Hopf bifurcation occurs
at a higher value of the bifurcation parameter (which is proportional to the slope
of the feedback function at the fixed point) than for the DDE. This implies that a
stronger nonlinearity is required to set the ODE system into oscillation compared
to the DDE. In other words, the distributed delay system with the same feedback
function as the DDE is less prone to oscillate (see also MacDonald, 1978; Cooke
and Grossman, 1982).
Figure 2. Value of n at Which a Hopf Bifurcation Occurs Versus the Order m of
the Memory Kernel.
6
SUMMARY
In summary we have shown that neural delayed negative feedback systems can
exhibit either equilibrium or limit cycle behavior depending on their parameters
and on the noise levels. The constancy of their oscillation frequency, even in the
presence of noise, suggests their possible role as pacemakers in the nervous system.
Further, the equilibrium solution of these systems is stabilized by noise and by
distributed delays. We conjecture that these two effects may be related as they
somewhat share a common feature, in the sense that noise and distributed delays
tend to make the retarded action more diffuse. This is supported by the fact that a
system with bad memory (i.e. with noise on the delayed variable) also sees its fixed
point stabilized.
Acknowledgements
The author would like to thank Mackey for useful conversations as well as Christian
Cortis for his help with the numerical analysis in Section 5. This research was
supported by the Natural Sciences and Engineering Research Council of Canada
(NSERC) as well as the Complex Systems Group and the Center for Nonlinear
Studies at Los Alamos National Laboratory in the form of postdoctoral fellowships.
References
K.L. Cooke and Z. Grossman. (1982) Discrete delay, distributed delay and stability switches. J. Math. Anal. Appl. 86:592-627.
D. Fargue. (1973) Réductibilité des systèmes héréditaires à des systèmes dynamiques (régis par des équations différentielles aux dérivées partielles). C.R. Acad. Sci. Paris T.277, No.17 (Série B, 2e semestre):471-473.
L. Glass and M.C. Mackey. (1979) Pathological conditions resulting from instabilities in physiological control systems. Ann. N.Y. Acad. Sci. 316:214.
A. Longtin. (in press, 1991) Nonlinear dynamics of neural delayed feedback. In D. Stein (ed.), Proceedings of the 3rd Summer School on Complex Systems, Santa Fe Institute Studies in the Sciences of Complexity, Lect. Vol. III. Redwood City, CA: Addison-Wesley.
A. Longtin and J.G. Milton. (1989a) Modelling autonomous oscillations in the human pupil light reflex using nonlinear delay-differential equations. Bull. Math. Biol. 51:605-624.
A. Longtin and J.G. Milton. (1989b) Insight into the transfer function, gain and oscillation onset for the pupil light reflex using nonlinear delay-differential equations. Biol. Cybern. 61:51-59.
A. Longtin, J.G. Milton, J. Bos and M.C. Mackey. (1990) Noise and critical behavior of the pupil light reflex at oscillation onset. Phys. Rev. A 41:6992-7005.
N. MacDonald. (1978) Time lags in biological models. Lecture Notes in Biomathematics 27. Berlin: Springer Verlag.
M.C. Mackey. (1979) Periodic auto-immune hemolytic anemia: an induced dynamical disease. Bull. Math. Biol. 41:829-834.
M.C. Mackey and U. an der Heiden. (1984) The dynamics of recurrent inhibition. J. Math. Biol. 19:211-225.
J.G. Milton, U. an der Heiden, A. Longtin and M.C. Mackey. (in press, 1990) Complex dynamics and noise in simple neural networks with delayed mixed feedback. Biomed. Biochem. Acta 8/9.
J.G. Milton, A. Longtin, A. Beuter, M.C. Mackey and L. Glass. (1989) Complex dynamics and bifurcations in neurology. J. Theor. Biol. 138:129-147.
R.E. Plant. (1981) A Fitzhugh differential-difference equation modelling recurrent neural feedback. SIAM J. Appl. Math. 40:150-162.
P.E. Rapp. (1981) Frequency encoded biochemical regulation is more accurate than amplitude dependent control. J. Theor. Biol. 90:531-544.
2,636 | 3,390 | Efficient Exact Inference in Planar Ising Models
Nicol N. Schraudolph
Dmitry Kamenetsky
[email protected]
[email protected]
National ICT Australia, Locked Bag 8001, Canberra ACT 2601, Australia
& RSISE, Australian National University, Canberra ACT 0200, Australia
Abstract
We give polynomial-time algorithms for the exact computation of lowest-energy
states, worst margin violators, partition functions, and marginals in certain binary
undirected graphical models. Our approach provides an interesting alternative to
the well-known graph cut paradigm in that it does not impose any submodularity
constraints; instead we require planarity to establish a correspondence with perfect
matchings in an expanded dual graph. Maximum-margin parameter estimation for
a boundary detection task shows our approach to be efficient and effective. A C++
implementation is available from http://nic.schraudolph.org/isinf/.
1
Introduction
Undirected graphical models are a popular tool in machine learning; they represent real-valued
energy functions of the form

E′(y) := Σ_{i∈V} E′_i(y_i) + Σ_{(i,j)∈E} E′_{ij}(y_i, y_j),    (1)
where the terms in the first sum range over the nodes V = {1, 2, . . . , n}, and those in the second sum
over the edges E ⊆ V × V of an undirected graph G(V, E).
The junction tree decomposition provides an efficient framework for exact statistical inference in
graphs that are (or can be turned into) trees of small cliques. The resulting algorithms, however, are
exponential in the clique size, i.e., the treewidth of the original graph. This is prohibitively large
for many graphs of practical interest; for instance, it grows as O(n) for an n × n square lattice.
Many approximate inference techniques have been developed so as to deal with such graphs, such
as pseudo-likelihood, mean field approximation, loopy belief propagation, and tree reweighting.
1.1
The Ising Model
Efficient exact inference is possible in certain graphical models with binary node labels. Here we
focus on Ising models, whose energy functions have the form E : {0, 1}ⁿ → ℝ with

E(y) := Σ_{(i,j)∈E} [y_i ≠ y_j] E_{ij},    (2)
where [·] denotes the indicator function, i.e., the cost E_{ij} is incurred only in those states y where y_i
and y_j disagree. Compared to the general model (1) for binary nodes, (2) imposes two additional
restrictions: zero node energies, and edge energies in the form of disagreement costs. At first glance
these constraints look severe; for instance, such systems must obey the symmetry E(y) = E(¬y),
where ¬ denotes Boolean negation (ones' complement). It is well known, however, that adding a
single node makes the Ising model (2) as expressive as the general model (1) for binary variables:
Theorem 1 Every energy function of the form (1) over n binary variables is equivalent to an Ising
energy function of the form (2) over n + 1 variables, with the additional variable held constant.
[Figure 1: diagram panels (a)-(d) omitted.]
Figure 1: Equivalent Ising model (with disagreement costs) for a given (a) node energy E′_i, (b) edge
energy E′_{ij} in a binary graphical model; (c) equivalent submodular model if E_{ij} > 0 and E_{0i} > 0
but E_{0j} < 0; (d) equivalent directed model of Kolmogorov and Zabih [1], Fig. 2d.
Proof by construction: Two energy functions are equivalent if they differ only by a constant. Without loss of generality, denote the additional variable y0 and hold it constant at y0 := 0. Given an
energy function of the form (1), construct an Ising model with disagreement costs as follows:
1. For each node energy function E′_i(y_i), add a disagreement cost term E_{0i} := E′_i(1) − E′_i(0);
2. For each edge energy function E′_{ij}(y_i, y_j), add the three disagreement cost terms

E_{ij} := ½ [(E′_{ij}(0, 1) + E′_{ij}(1, 0)) − (E′_{ij}(0, 0) + E′_{ij}(1, 1))],
E_{0i} := E′_{ij}(1, 0) − E′_{ij}(0, 0) − E_{ij}, and    (3)
E_{0j} := E′_{ij}(0, 1) − E′_{ij}(0, 0) − E_{ij}.
Summing the above terms, the total bias of node i (i.e., its disagreement cost with the bias node) is

E_{0i} = E′_i(1) − E′_i(0) + Σ_{j:(i,j)∈E} [E′_{ij}(1, 0) − E′_{ij}(0, 0) − E_{ij}].    (4)
This construction defines an Ising model whose energy in every configuration y is shifted, relative
to that of the general model we started with, by the same constant amount, namely E′(0):

∀y ∈ {0, 1}ⁿ:  E([y₀, y]) = E′(y) − Σ_{i∈V} E′_i(0) − Σ_{(i,j)∈E} E′_{ij}(0, 0) = E′(y) − E′(0).    (5)
The two models' energy functions are therefore equivalent.
Note how in the above construction the label symmetry E(y) = E(¬y) of the plain Ising model
(2) is conveniently broken by the introduction of a bias node, through the convention that y₀ := 0. A sketch of this construction in code follows.
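A minimal Python sketch of the construction (container conventions and function names are ours, not taken from the paper's C++ implementation):

from collections import defaultdict

def to_ising(node_energy, edge_energy):
    """node_energy: {i: (E'_i(0), E'_i(1))} for i = 1..n;
    edge_energy: {(i, j): e} with e[(yi, yj)] = E'_ij(yi, yj)."""
    E = defaultdict(float)                   # disagreement costs
    for i, (e0, e1) in node_energy.items():
        E[(0, i)] += e1 - e0                 # step 1: node bias terms
    for (i, j), e in edge_energy.items():
        Eij = 0.5 * ((e[0, 1] + e[1, 0]) - (e[0, 0] + e[1, 1]))
        E[(i, j)] += Eij                     # step 2, Eq.(3)
        E[(0, i)] += e[1, 0] - e[0, 0] - Eij
        E[(0, j)] += e[0, 1] - e[0, 0] - Eij
    return dict(E)                           # energies shifted by E'(0)

def ising_energy(E, y):                      # Eq.(2); convention y[0] == 0
    return sum(c for (i, j), c in E.items() if y[i] != y[j])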
1.2
Energy Minimization via Graph Cuts
Definition 2 The cut C of a binary graphical model G(V, E) induced by state y ∈ {0, 1}ⁿ is the set
C(y) := {(i, j) ∈ E : y_i ≠ y_j}; its weight |C(y)| is the sum of the weights of its edges.
Any given state y partitions the nodes of a binary graphical model into two sets: those labeled '0',
and those labeled '1'. The corresponding graph cut is the set of edges crossing the partition; since
only they contribute disagreement costs to the Ising model (2), we have ∀y: |C(y)| = E(y). The
lowest-energy state of an Ising model therefore induces its minimum-weight cut.
Minimum-weight cuts can be computed in polynomial time in graphs whose edge weights are all
non-negative. Introducing one more node, with the constraint y_{n+1} := 1, allows us to construct an
equivalent energy function by replacing each negatively weighted bias edge E_{0i} < 0 by an edge to
the new node n + 1 with the positive weight E_{i,n+1} := −E_{0i} > 0 (Figure 1c). This still leaves us
with the requirement that all non-bias edges be non-negative. This submodularity constraint implies
that agreement between nodes must be locally preferable to disagreement, a severe limitation.
Graph cuts have been widely used in machine learning to find lowest-energy configurations, in
particular in image processing. Our construction (Figure 1c) differs from that of Kolmogorov and
Zabih [1] (Figure 1d) in that we do not employ the notion of directed edges. (In directed graphs, the
weight of a cut is the sum of the weights of only those edges crossing the cut in a given direction.)
Figure 2: (a) a non-plane drawing of a planar graph; (b) a plane drawing of the same graph; (c) a
different plane drawing of the same graph, with the same planar embedding as (b); (d) a plane drawing
of the same graph with a different planar embedding.
2
Planarity
Unlike graph cut methods, the inference algorithms we describe below do not depend on submodularity; instead they require that the model graph be planar, and that a planar embedding be provided.
2.1
Embedding Planar Graphs
Definition 3 Let G(V, E) be an undirected, connected graph. For each vertex i ∈ V, let E_i denote
the set of edges in E incident upon i, considered as being oriented away from i, and let σ_i be a cyclic
permutation of E_i. A rotation system for G is a set of permutations Σ = {σ_i : i ∈ V}.
Rotation systems [2] directly correspond to topological graph embeddings in orientable surfaces:
Theorem 4 (White and Beineke [2], p. 22f) Each rotation system determines an embedding of G
in some orientable surface S such that ∀i ∈ V, any edge (i, j) ∈ E_i is followed by σ_i(i, j) in (say)
clockwise orientation, and such that the faces F of the embedding, given by the orbits of the mapping
(i, j) → σ_j(j, i), are 2-cells (topological disks).
Note that while in graph visualisation 'embedding' is often used as a synonym for 'drawing', in
modern topological graph theory it stands for 'rotation system'. We adopt the latter usage, which
views embeddings as equivalence classes of graph drawings characterized by identical cyclic ordering of the edges incident upon each vertex. For instance, σ₄(4, 5) = (4, 3) in Figures 2b and 2c
(same embedding) but σ₄(4, 5) = (4, 1) in Figure 2d (different embedding). A sample face in Figures 2b-2d is given by the orbit (4, 1) → σ₁(1, 4) = (1, 2) → σ₂(2, 1) = (2, 4) → σ₄(4, 2) = (4, 1).
The genus g of the embedding surface S can be determined from the Euler characteristic

|V| − |E| + |F| = 2 − 2g,    (6)

where |F| is found by counting the orbits of the rotation system, as described in Theorem 4. Since
planar graphs are exactly those that can be embedded on a surface of genus g = 0 (a topological
sphere), we arrive at a purely combinatorial definition of planarity:
Definition 5 A graph G(V, E) is planar iff it has a rotation system Σ producing exactly 2 + |E| − |V|
orbits. Such a system is called a planar embedding of G, and G(V, E, Σ) is called a plane graph. This definition translates directly into code, as sketched below.
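As a sketch of Definition 5 in executable form (the adjacency representation is our assumption), the faces of a rotation system can be counted by tracing the orbits of the mapping (i, j) → σ_j(j, i), and planarity tested via Eq.(6) with g = 0:

def count_faces(sigma):
    """sigma: {i: [edges (i, j) in cyclic order]} for each vertex i."""
    succ = {}                                   # (i, j) -> sigma_i(i, j)
    for i, cyc in sigma.items():
        for k, e in enumerate(cyc):
            succ[e] = cyc[(k + 1) % len(cyc)]
    darts, faces = set(succ), 0
    while darts:
        e = darts.pop()                         # trace one face orbit
        faces += 1
        i, j = e
        nxt = succ[(j, i)]
        while nxt != e:
            darts.remove(nxt)
            i, j = nxt
            nxt = succ[(j, i)]
    return faces

def is_plane(sigma):
    V = len(sigma)
    E = sum(len(c) for c in sigma.values()) // 2
    return count_faces(sigma) == 2 + E - V      # genus 0 by Eq.(6)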
Our inference algorithms require a plane graph as input. In certain domains (e.g., when working with
geographic information) a plane drawing of the graph (from which the corresponding embedding is
readily determined) may be available. Where it is not, we employ the algorithm of Boyer and
Myrvold [3] which, given any connected graph G as input, produces in linear time either a planar
embedding for G or a proof that G is non-planar. Source code for this step is freely available [3, 4].
2.2
The Planarity Constraint
In Section 1.1 we have mapped a general binary graphical model to an Ising model with an additional
bias node; now we require that that Ising model be planar. What does that imply for the original,
general model? If all nodes of the graph are to be connected to the bias node without violating
planarity, the graph has to be outerplanar, i.e., have a planar embedding in which all its nodes lie on
the external face, a very severe restriction.
Figure 3: Possible cuts (bold blue dashes) of a square face of the model graph (dashed) and complementary perfect matchings (bold red lines) of its expanded dual (solid lines).
The situation improves, however, if only a subset B ⊆ V of nodes have non-zero bias (4): then the
graph only has to be B-outerplanar, i.e., have a planar embedding in which all nodes in B lie on the
same face. In image processing, for instance, where it is common to operate on a square grid of
pixels, we can permit bias for all nodes on the perimeter of the grid. In general, a planar embedding
which maximizes a weighted sum over the nodes bordering a given face can be found in linear time
[5]; by setting node weights to some measure of bias (such as E_{0i}²) we can efficiently obtain the
planar Ising model closest (in that measure) to any given planar binary graphical model.
In contrast to submodularity, B-outerplanarity is a structural constraint. This has the advantage that
once a model obeying the constraint is selected, inference (e.g., parameter estimation) can proceed
via unconstrained methods (e.g., optimization). Finally, we note that all our algorithms can be
extended to work for non-planar graphs as well. They then take time exponential in the genus of the
embedding though still polynomial in the size of the graph; for graphs of low genus this may well
be preferable to current approximative methods.
3
Computing Optimal States via Maximum-Weight Perfect Matching
The relationship between the states of a planar Ising model and perfect matchings ('dimer coverings'
to physicists) was first discovered by Kasteleyn [6] and Fisher [7]. Globerson and Jaakkola [8]
presented a more direct construction for triangulated graphs, which we generalize here.
3.1
The Expanded Dual Graph
Definition 6 The dual G*(F, E) of an embedded graph G(V, E, Σ) has a vertex for each face of G,
with edges connecting vertices corresponding to faces that are adjacent (i.e., share an edge) in G.
Each edge of the dual crosses exactly one edge of the original graph; due to this one-to-one relationship we will consider the dual to have the same set of edges E (with the same energies) as the
original.
We now expand the dual graph by replacing each node with a q-clique, where q is the degree of the
node, as shown in Figure 3 for q = 4. The additional edges internal to each q-clique are given zero
energy so as to leave the model unaffected. For large q the introduction of these O(q²) internal edges
slows down subsequent computations (solid line in Figure 4, left); this can be avoided by subdividing
the offending q-gonal face with chords (which are also given zero energy) before constructing the
dual. Our implementation performs best when 'octangulating' the graph, i.e., splitting octagons off
all faces with q > 13; this is more efficient than a full triangulation (Figure 4, left).
3.2
Complementary Perfect Matchings
Definition 7 A perfect matching of a graph G(V, E) is a subset M ⊆ E of edges wherein exactly
one edge is incident upon each vertex in V; its weight |M| is the sum of the weights of its edges.
Theorem 8 For every cut C of an embedded graph G(V, E, Σ) there exists at least one (if G is triangulated: exactly one) perfect matching M of its expanded dual complementary to C, i.e., E\M = C.
Proof sketch Consider the complement E\C of the cut as a partial matching of the expanded dual.
By definition, C intersects any cycle of G, and therefore also the perimeters of G's faces F, in an
even number of edges. In each clique of the expanded dual, C's complement thus leaves an even
number of nodes unmatched; M can therefore be completed using only edges interior to the cliques.
In a 3-clique, there is only one way to do this, so M is unique if G is triangulated.
In other words, there exists a surjection from perfect matchings in the expanded dual of G to cuts in
G. Furthermore, since we have given edges interior to the cliques of the expanded dual zero energy,
every perfect matching M complementary to a cut C of our Ising model (2) obeys the relation

|M| + |C| = Σ_{(i,j)∈E} E_{ij} = const.    (7)
This means that instead of a minimum-weight cut in a graph we can look for a maximum-weight
perfect matching in its expanded dual. But will that matching always be complementary to a cut?
Theorem 9 Every perfect matching M of the expanded dual of a plane graph G(V, E, Σ) is complementary to a cut C of G, i.e., E\M = C.
Proof sketch In each clique of the expanded dual, an even number of nodes is matched by edges
interior to the clique. The complement E\M of the matching in G thus contains an even number of
edges around the perimeter of each face of the embedding. By induction over faces, this holds for
every contractible (on the embedding surface) cycle of G. Because a plane is simply connected, all
cycles in a plane graph are contractible; thus E\M is a cut.
This is where planarity matters: Surfaces of non-zero genus are not simply connected, and thus
non-plane graphs may contain non-contractible cycles; our construction does not guarantee that the
complement E\M of a perfect matching of the expanded dual contains an even number of edges
along such cycles. For planar graphs, however, the above theorems allow us to leverage known
polynomial-time algorithms for perfect matchings into inference methods for Ising models.
3.3
The Lowest-Energy (MAP or Ground) State
The blossom-shrinking algorithm [9, 10] is a sophisticated method to efficiently compute the
maximum-weight perfect matching of a graph. It can be implemented to run in as little as
O(|E| |V| log |V|) time. Although the Blossom IV code we are using [11] is asymptotically less
efficient, at O(|E| |V|²), we have found it to be very fast in practice (Figure 4, left).
We can now efficiently compute the lowest-energy state of a planar Ising model as follows: Find a
planar embedding of the model graph (Section 2.1), construct its expanded dual (Section 3.1), and
run the blossom-shrinking algorithm on that to compute its maximum-weight perfect matching. Its
complement in the original model is the minimum-weight graph cut (Section 3.2). We can identify
the state which induces this cut via a depth-first graph traversal that labels nodes as it encounters
them, starting by labeling the bias node y0 := 0; this is shown below as Algorithm 1.
Algorithm 1 Find State from Corresponding Graph Cut
Input:  1. Ising model graph G(V, E)
        2. graph cut C(y) ⊆ E
        ∀i ∈ {0, 1, 2, . . . , n}: y_i := unknown;
        dfs_state(0, 0);
Output: state vector y

procedure dfs_state(i ∈ {0, 1, 2, . . . , n}, s ∈ {0, 1})
    if y_i = unknown then
        y_i := s;
        ∀(i, j) ∈ E_i:
            if (i, j) ∈ C then dfs_state(j, ¬s);
            else dfs_state(j, s);
    else assert y_i = s;

A runnable transcription is sketched below.
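A direct Python transcription of Algorithm 1 (made iterative to avoid deep recursion on large grids; the edge and cut containers are our assumptions):

def state_from_cut(n, edges, cut):
    """edges[i]: neighbours of node i (node 0 is the bias node);
    cut: set of frozenset({i, j}) pairs forming C(y)."""
    y = [None] * (n + 1)
    stack = [(0, 0)]                    # dfs_state(0, 0)
    while stack:
        i, s = stack.pop()
        if y[i] is None:
            y[i] = s
            for j in edges[i]:
                s2 = 1 - s if frozenset((i, j)) in cut else s
                stack.append((j, s2))
        else:
            assert y[i] == s            # a valid cut is consistent
    return y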
3.4
The Worst Margin Violator
Maximum-margin parameter estimation in graphical models involves determining the worst margin
violator, the state that minimizes, relative to a given target state y*, the margin energy

M(y|y*) := E(y) − d(y|y*),    (8)

where d(·|·) is a measure of divergence in state space. If d(·|·) is the weighted Hamming distance

d(y|y*) := Σ_{(i,j)∈E} [[y_i ≠ y_j] ≠ [y*_i ≠ y*_j]] v_{ij},    (9)
[Figure 4: three log-log plots omitted; axes are CPU time (seconds) and memory (bytes), plotted against graph size.]
Figure 4: Cost of inference on a ring graph, plotted against ring size. Left & center: CPU time on
Apple MacBook with 2.2 GHz Intel Core2 Duo processor; right: storage size. Left: MAP state via
Blossom IV [11] on original, triangulated, and octangulated ring; center & right: marginal probabilities, full matrix K (double precision, no prefactoring) vs. prefactored half-Kasteleyn bitmatrix H.
where the v_{ij} ≥ 0 are constant weighting factors (in the simplest case: all ones) on the edges of our
Ising model, then it is easily verified that the margin energy (8) is implemented (up to a shift that
depends only on y*) by an isomorphic Ising model with disagreement costs

E_{ij} + (2 [y*_i ≠ y*_j] − 1) v_{ij}.    (10)

We can thus use our algorithm of Section 3.3 to efficiently find the worst margin violator,
argmin_y M(y|y*), for maximum-margin parameter estimation. A sketch of the cost augmentation (10) follows.
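A one-function sketch of the cost augmentation (10), with container conventions assumed:

def augment_costs(E, y_star, v):
    """E, v: {(i, j): value}; y_star: target labels indexed by node.
    Returns loss-augmented disagreement costs per Eq.(10), whose ground
    state is the worst margin violator."""
    return {(i, j): Eij + (2 * (y_star[i] != y_star[j]) - 1) * v[i, j]
            for (i, j), Eij in E.items()}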
4
Computing the Partition Function and Marginal Probabilities¹
A Markov random field (MRF) over our Ising model (2) models the distribution

P(y) = (1/Z) e^{−E(y)},  where  Z := Σ_y e^{−E(y)}    (11)

is the MRF's partition function. As it involves a summation over exponentially many states y,
calculating the partition function is generally intractable. For planar graphs, however, the generating
function for perfect matchings can be calculated in polynomial time via the determinant of a skew-symmetric matrix [6, 7]. Due to the close relationship with graph cuts (Section 3.2) we can calculate
Z in (11) likewise. Elaborating on work of Globerson and Jaakkola [8], we first convert the Ising
model graph into a Boolean 'half-Kasteleyn' matrix H:
1. plane triangulate the embedded graph so as to make the relationship between cuts and
complementary perfect matchings a bijection (cf. Section 3.2);
2. orient the edges of the graph such that the in-degree of every node is odd;
3. construct the Boolean half-Kasteleyn matrix H from the oriented graph;
4. prefactor the triangulation edges (added in Step 1) out of H.
Our Step 2 simplifies equivalent operations in previous constructions [6-8], Step 3 differs in that it
only sets unit (i.e., +1) entries in a Boolean matrix, and Step 4 can dramatically reduce the size of H
for compact storage (as a bit matrix) and faster subsequent computations (Figure 4, center & right).
For a given set of disagreement edge costs E_k, k ∈ {1, 2, . . . , |E|}, on that graph, we then build from
H and the E_k the skew-symmetric, real-valued Kasteleyn matrix K:
1. K := H;
2. ∀k ∈ {1, 2, . . . , |E|}: K_{2k−1,2k} := K_{2k−1,2k} + e^{E_k};
3. K := K − Kᵀ.
The partition function for perfect matchings is √|K| [6-8], so we factor K and use (7) to compute
the log partition function for (11) as

ln Z = ½ ln|K| − Σ_{k∈E} E_k.

Its derivative yields the marginal probability of disagreement on the k-th edge, and is computed via the inverse of K:

P(k ∈ C) := −∂ ln Z / ∂E_k = 1 − (1/(2|K|)) ∂|K|/∂E_k = 1 − ½ tr(K⁻¹ ∂K/∂E_k) = 1 + K⁻¹_{2k−1,2k} K_{2k−1,2k}.    (12)
¹ We only have space for a high-level overview here; see [12] for full details.
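Assuming H is held as a dense 0/1 numpy array with the entry for edge k at row 2k−2, column 2k−1 (0-based indexing; the actual implementation instead stores H as a prefactored bit matrix), steps 1-3 and Eqs. (11)-(12) might be sketched as:

import numpy as np

def log_partition_and_marginals(H, E):
    K = H.astype(float)                         # step 1
    for k, Ek in enumerate(E):                  # step 2
        K[2 * k, 2 * k + 1] += np.exp(Ek)
    K = K - K.T                                 # step 3: skew-symmetric
    _, logdet = np.linalg.slogdet(K)
    logZ = 0.5 * logdet - sum(E)                # ln Z = (1/2) ln|K| - sum_k E_k
    Kinv = np.linalg.inv(K)
    marginals = np.array([1.0 + Kinv[2 * k, 2 * k + 1] * K[2 * k, 2 * k + 1]
                          for k in range(len(E))])   # Eq.(12)
    return logZ, marginals                      # marginals[k] = P(k in cut)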
Figure 5: Boundary detection by maximum-margin training of planar Ising grids; from left to right:
Ising model (100 × 100 grid), original image, noisy mask, and MAP segmentation of the Ising grid.
5
Maximum Likelihood vs. Maximum Margin CRF Parameter Estimation
Our algorithms can be applied to regularized parameter estimation in conditional random fields
(CRFs). In a linear planar Ising CRF, the disagreement costs E_{ij} in (2) are computed as inner products between features (sufficient statistics) x of the modeled data and corresponding parameters θ
of the model, and (11) is used to model the conditional distribution P(y|x, θ). Maximum likelihood
(ML) parameter estimation then seeks to minimize wrt. θ the L2-regularized negative log likelihood
L_ML(θ) := ½ λ‖θ‖² − ln P(y*|x, θ) = ½ λ‖θ‖² + E(y*|x, θ) + ln Z(θ|x)    (13)
of a given target labeling y*,² with regularization parameter λ. This is a smooth, convex objective
that can be optimized via batch or online implementations of gradient methods such as LBFGS [13];
the gradient of the log partition function in (13) is obtained by computing the marginals (12). For
maximum margin (MM) parameter estimation [14] we instead minimize
L_MM(θ) := ½ λ‖θ‖² + E(y*|x, θ) − min_y M(y|y*, x, θ)    (14)
         = ½ λ‖θ‖² + E(y*|x, θ) − E(ŷ|x, θ) + d(ŷ|y*),

where ŷ := argmin_y M(y|y*, x, θ) is the worst margin violator, i.e., the state that minimizes the
margin energy (8). L_MM(θ) is convex but non-smooth; we can minimize it via bundle methods such
as the BT bundle trust algorithm [15], making use of the convenient lower bound ∀θ: L_MM(θ) ≥ 0.
To demonstrate the scalability of planar Ising models, we designed a simple boundary detection
task based on images from the GrabCut Ground Truth image segmentation database [16]. We took
100 × 100 pixel subregions of images that depicted a segmentation boundary, and corrupted the
segmentation mask with pink noise, produced by convolving a white noise image (all pixels i.i.d.
uniformly random) with a Gaussian density with one pixel standard deviation. We then employed
a planar Ising model to recover the original boundary, namely a 100 × 100 square grid with
one additional edge pegged to a high energy, encoding prior knowledge that two opposing corners
of the grid depict different regions (Figure 5, left). The energy of the other edges was E_{ij} :=
⟨[1, |x_i − x_j|], θ⟩, where x_i is the pixel intensity at node i. We did not employ a bias node for this
task, and simply set λ = 1. A sketch of this edge-energy computation is given below.
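A sketch of the edge-energy computation for the grid (the node indexing scheme and looping are our assumptions; the actual implementation is in C++):

import numpy as np

def grid_edge_energies(img, theta):
    """img: 2-D array of pixel intensities; theta: parameter vector of shape (2,)."""
    E = {}
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            i = r * w + c                       # node index of pixel (r, c)
            for dr, dc in ((0, 1), (1, 0)):     # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    feat = np.array([1.0, abs(img[r, c] - img[rr, cc])])
                    E[(i, rr * w + cc)] = float(feat @ theta)
    return E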
Note that this is a huge model: 10,000 nodes and 19,801 edges. Computing the partition function or
marginals would require inverting a Kasteleyn matrix with over 1.5 × 10⁹ entries; minimizing (13) is
therefore computationally infeasible for us. Computing a ground state via the algorithm described
in Section 3, by contrast, takes only 0.3 seconds on an Apple MacBook with a 2.2 GHz Intel Core2
Duo processor. We can therefore efficiently minimize (14) to obtain the MM parameter vector θ*,
then compute the CRF's MAP (i.e., ground) state for rapid prediction.
Figure 5 (right) shows how even for a signal-to-noise (S/N) ratio of 1:8, our approach is capable
of recovering the original segmentation boundary quite well, with only 0.67% of nodes mislabeled
here. For S/N ratios of 1:9 and lower the system was unable to locate the boundary; for S/N ratios
of 1:7 and higher we obtained perfect reconstruction. Further experiments are reported in [12].
On smaller grids, ML parameter estimation and marginals for prediction become computationally
feasible, if slower than the MM/MAP approach. This will allow direct comparison of ML vs. MM for
parameter estimation, and MAP vs. marginals for prediction, to our knowledge for the first time on
graphs intractable for the junction tree approach, such as the grids often used in image processing.
² For notational clarity we suppress here the fact that we are usually modeling a collection of data items.
6
Discussion
We have proposed an alternative algorithmic framework for efficient exact inference in binary graphical models, which replaces the submodularity constraint of graph cut methods with a planarity constraint. Besides proving efficient and effective in first experiments, our approach opens up a number
of interesting research directions to be explored:
Our algorithms can all be extended to nonplanar graphs, at a cost exponential in the genus of the
embedding. We are currently developing these extensions, which may prove of great practical value
for graphs that are 'almost' planar; examples include road networks (where edge crossings arise
from overpasses without on-ramps) and graphs describing the tertiary structure of proteins [17].
These algorithms also provide a foundation for the future development of efficient approximate
inference methods for nonplanar Ising models.
Our method for calculating the ground state (Section 3) actually works for nonplanar graphs whose
ground state does not contain frustrated non-contractible cycles. The QPBO graph cut method [18]
finds ground states that do not contain any frustrated cycles, and otherwise yields a partial labeling.
Can we likewise obtain a partial labeling of ground states with frustrated non-contractible cycles?
The existence of two distinct tractable frameworks for inference in binary graphical models implies
a yet more powerful hybrid: Consider a graph each of whose biconnected components is either
planar or submodular. As a whole, this graph may be neither planar nor submodular, yet efficient
exact inference in it is clearly possible by applying the appropriate framework to each component.
Can this hybrid approach be extended to cover less obvious situations?
References
[1] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Trans. Pattern Analysis and Machine Intelligence, 26(2):147-159, 2004.
[2] A. T. White and L. W. Beineke. Topological graph theory. In L. W. Beineke and R. J. Wilson, editors, Selected Topics in Graph Theory, chapter 2, pages 15-49. Academic Press, 1978.
[3] J. M. Boyer and W. J. Myrvold. On the cutting edge: Simplified O(n) planarity by edge addition. Journal of Graph Algorithms and Applications, 8(3):241-273, 2004. Reference implementation (C source code): http://jgaa.info/accepted/2004/BoyerMyrvold2004.8.3/planarity.zip
[4] A. Windsor. Planar graph functions for the boost graph library. C++ source code, boost file vault: http://boost-consulting.com/vault/index.php?directory=Algorithms/graph, 2007.
[5] C. Gutwenger and P. Mutzel. Graph embedding with minimum depth and maximum external face. In G. Liotta, editor, Graph Drawing 2003, volume 2912 of LNCS, pages 259-272. Springer Verlag, 2004.
[6] P. W. Kasteleyn. The statistics of dimers on a lattice: I. The number of dimer arrangements on a quadratic lattice. Physica, 27(12):1209-1225, 1961.
[7] M. E. Fisher. Statistical mechanics of dimers on a plane lattice. Phys. Rev., 124(6):1664-1672, 1961.
[8] A. Globerson and T. Jaakkola. Approximate inference using planar graph decomposition. In B. Schölkopf, J. Platt, and T. Hofmann (eds), Advances in Neural Information Processing Systems 19, 2007. MIT Press.
[9] J. Edmonds. Maximum matching and a polyhedron with 0,1-vertices. Journal of Research of the National Bureau of Standards, 69B:125-130, 1965.
[10] J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17:449-467, 1965.
[11] W. Cook and A. Rohe. Computing minimum-weight perfect matchings. INFORMS Journal on Computing, 11(2):138-148, 1999. C source code: http://www.isye.gatech.edu/~wcook/blossom4
[12] N. N. Schraudolph and D. Kamenetsky. Efficient exact inference in planar Ising models. Technical Report 0810.4401, arXiv, 2008. http://aps.arxiv.org/abs/0810.4401
[13] S. V. N. Vishwanathan, N. N. Schraudolph, M. Schmidt, and K. Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In Proc. Intl. Conf. Machine Learning, pages 969-976, New York, NY, USA, 2006. ACM Press.
[14] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and B. Schölkopf (eds), Advances in Neural Information Processing Systems 16, pages 25-32, 2004. MIT Press.
[15] H. Schramm and J. Zowe. A version of the bundle idea for minimizing a nonsmooth function: Conceptual idea, convergence analysis, numerical results. SIAM J. Optimization, 2:121-152, 1992.
[16] C. Rother, V. Kolmogorov, A. Blake, and M. Brown. GrabCut ground truth database, 2007. http://research.microsoft.com/vision/cambridge/i3l/segmentation/GrabCut.htm
[17] S. V. N. Vishwanathan, K. Borgwardt, and N. N. Schraudolph. Fast computation of graph kernels. In B. Schölkopf, J. Platt, and T. Hofmann (eds), Advances in Neural Information Processing Systems 19, 2007.
[18] V. Kolmogorov and C. Rother. Minimizing nonsubmodular functions with graph cuts: a review. IEEE Trans. Pattern Analysis and Machine Intelligence, 29(7):1274-1279, 2007.
Hebbian Learning of Bayes Optimal Decisions
Bernhard Nessler∗, Michael Pfeiffer∗, and Wolfgang Maass
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
{nessler,pfeiffer,maass}@igi.tugraz.at
Abstract
Uncertainty is omnipresent when we perceive or interact with our environment,
and the Bayesian framework provides computational methods for dealing with
it. Mathematical models for Bayesian decision making typically require datastructures that are hard to implement in neural networks. This article shows that
even the simplest and experimentally best supported type of synaptic plasticity,
Hebbian learning, in combination with a sparse, redundant neural code, can in
principle learn to infer optimal Bayesian decisions. We present a concrete Hebbian
learning rule operating on log-probability ratios. Modulated by reward-signals,
this Hebbian plasticity rule also provides a new perspective for understanding
how Bayesian inference could support fast reinforcement learning in the brain.
In particular we show that recent experimental results by Yang and Shadlen [1] on
reinforcement learning of probabilistic inference in primates can be modeled in
this way.
1 Introduction
Evolution is likely to favor those biological organisms which are able to maximize the chance of
achieving correct decisions in response to multiple unreliable sources of evidence. Hence one may
argue that probabilistic inference, rather than logical inference, is the "mathematics of the mind",
and that this perspective may help us to understand the principles of computation and learning in
the brain [2]. Bayesian inference, or equivalently inference in Bayesian networks [3] is the most
commonly considered framework for probabilistic inference, and a mathematical theory for learning
in Bayesian networks has been developed.
Various attempts to relate these theoretically optimal models to experimentally supported models for
computation and plasticity in networks of neurons in the brain have been made. [2] models Bayesian
inference through an approximate implementation of the Belief Propagation algorithm (see [3]) in a
network of spiking neurons. For reduced classes of probability distributions, [4] proposed a method
for spiking network models to learn Bayesian inference with an online approximation to an EM
algorithm. The approach of [5] interprets the weight $w_{ji}$ of a synaptic connection between neurons representing the random variables $x_i$ and $x_j$ as $\log \frac{p(x_i, x_j)}{p(x_i)\, p(x_j)}$, and presents algorithms for learning these weights.
Neural correlates of variables that are important for decision making under uncertainty had been
presented e.g. in the recent experimental study by Yang and Shadlen [1]. In their study they found
that firing rates of neurons in area LIP of macaque monkeys reflect the log-likelihood ratio (or log-odd) of the outcome of a binary decision, given visual evidence. The learning of such log-odds
for Bayesian decision making can be reduced to learning weights for a linear classifier, given an
appropriate but fixed transformation from the input to possibly nonlinear features [6]. We show
∗ Both authors contributed equally to this work.
that the optimal weights for the linear decision function are actually log-odds themselves, and the
definition of the features determines the assumptions of the learner about statistical dependencies
among inputs.
In this work we show that simple Hebbian learning [7] is sufficient to implement learning of Bayes
optimal decisions for arbitrarily complex probability distributions. We present and analyze a concrete learning rule, which we call the Bayesian Hebb rule, and show that it provably converges
towards correct log-odds. In combination with appropriate preprocessing networks this implements
learning of different probabilistic decision making processes like e.g. Naive Bayesian classification.
Finally we show that a reward-modulated version of this Hebbian learning rule can solve simple
reinforcement learning tasks, and also provides a model for the experimental results of [1].
2 A Hebbian rule for learning log-odds
We consider the model of a linear threshold neuron with output $y_0$, where $y_0 = 1$ means that the neuron is firing and $y_0 = 0$ means non-firing. The neuron's current decision $\hat{y}_0$ whether to fire or not is given by a linear decision function $\hat{y}_0 = \mathrm{sign}(w_0 \cdot \text{constant} + \sum_{i=1}^{n} w_i y_i)$, where the $y_i$ are the current firing states of all presynaptic neurons and $w_i$ are the weights of the corresponding synapses. We propose the following learning rule, which we call the Bayesian Hebb rule:
$$\Delta w_i = \begin{cases} \eta\,(1 + e^{-w_i}), & \text{if } y_0 = 1 \text{ and } y_i = 1 \\ -\eta\,(1 + e^{w_i}), & \text{if } y_0 = 0 \text{ and } y_i = 1 \\ 0, & \text{if } y_i = 0. \end{cases} \tag{1}$$
This learning rule is purely local, i.e. it depends only on the binary firing state of the pre- and postsynaptic neuron $y_i$ and $y_0$, the current weight $w_i$ and a learning rate $\eta$. Under the assumption of a stationary joint probability distribution of the pre- and postsynaptic firing states $y_0, y_1, \ldots, y_n$ the Bayesian Hebb rule learns log-probability ratios of the postsynaptic firing state $y_0$, conditioned on a corresponding presynaptic firing state $y_i$. We consider in this article the use of the rule in a supervised, teacher forced mode (see Section 3), and also in a reinforcement learning mode (see Section 4). We will prove that the rule converges globally to the target weight value $w_i^*$, given by
$$w_i^* = \log \frac{p(y_0 = 1 \mid y_i = 1)}{p(y_0 = 0 \mid y_i = 1)}. \tag{2}$$
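To make the rule concrete, here is a minimal Python sketch of update (1) with the log-odd (2) as its fixed point. The data-generating distribution, the seed, and the constant learning rate η = 0.05 are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.05                     # constant learning rate (illustrative choice)
w = 0.0                        # synaptic weight, initialized arbitrarily

# Assumed toy distribution: p(y0=1 | yi=1) = 0.8
p_y0_given_yi1 = 0.8
w_target = np.log(p_y0_given_yi1 / (1.0 - p_y0_given_yi1))  # eq. (2)

for _ in range(20000):
    yi = rng.random() < 0.5                                  # presynaptic state
    y0 = rng.random() < (p_y0_given_yi1 if yi else 0.5)      # postsynaptic state
    if yi:                                 # rule (1): update only when yi = 1
        if y0:
            w += eta * (1.0 + np.exp(-w))
        else:
            w -= eta * (1.0 + np.exp(w))

print(w, w_target)   # w fluctuates around the target log-odd, here log 4 = 1.386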
We first show that the expected update $E[\Delta w_i]$ under (1) vanishes at the target value $w_i^*$:
$$E[\Delta w_i^*] = 0 \;\Leftrightarrow\; p(y_0{=}1, y_i{=}1)\,\eta\,(1 + e^{-w_i^*}) - p(y_0{=}0, y_i{=}1)\,\eta\,(1 + e^{w_i^*}) = 0$$
$$\Leftrightarrow\; \frac{p(y_0{=}1, y_i{=}1)}{p(y_0{=}0, y_i{=}1)} = \frac{1 + e^{w_i^*}}{1 + e^{-w_i^*}} \;\Leftrightarrow\; w_i^* = \log \frac{p(y_0{=}1 \mid y_i{=}1)}{p(y_0{=}0 \mid y_i{=}1)}. \tag{3}$$
Since the above is a chain of equivalence transformations, this proves that $w_i^*$ is the only equilibrium value of the rule. The weight vector $\mathbf{w}^*$ is thus a global point-attractor with regard to expected weight changes of the Bayesian Hebb rule (1) in the $n$-dimensional weight-space $\mathbb{R}^n$.

Furthermore we show, using the result from (3), that the expected weight change at any current value of $w_i$ points in the direction of $w_i^*$. Consider some arbitrary intermediate weight value $w_i = w_i^* + 2\varepsilon$:
$$E[\Delta w_i]\big|_{w_i^* + 2\varepsilon} = E[\Delta w_i]\big|_{w_i^* + 2\varepsilon} - E[\Delta w_i]\big|_{w_i^*} \;\propto\; p(y_0{=}1, y_i{=}1)\, e^{-w_i^*}(e^{-2\varepsilon} - 1) - p(y_0{=}0, y_i{=}1)\, e^{w_i^*}(e^{2\varepsilon} - 1)$$
$$= \big(p(y_0{=}0, y_i{=}1)\, e^{-\varepsilon} + p(y_0{=}1, y_i{=}1)\, e^{\varepsilon}\big)\big(e^{-\varepsilon} - e^{\varepsilon}\big). \tag{4}$$
The first factor in (4) is always non-negative, hence $\varepsilon < 0$ implies $E[\Delta w_i] > 0$, and $\varepsilon > 0$ implies $E[\Delta w_i] < 0$. The Bayesian Hebb rule is therefore always expected to perform updates in the right direction, and the initial weight values or perturbations of the weights decay exponentially fast.
Already after having seen a finite set of examples $\langle y_0, \ldots, y_n \rangle \in \{0,1\}^{n+1}$, the Bayesian Hebb rule closely approximates the optimal weight vector $\hat{\mathbf{w}}$ that can be inferred from the data. A traditional frequentist's approach would use counters $a_i = \#[y_0{=}1 \wedge y_i{=}1]$ and $b_i = \#[y_0{=}0 \wedge y_i{=}1]$ to estimate every $w_i^*$ by
$$\hat{w}_i = \log \frac{a_i}{b_i}. \tag{5}$$
A Bayesian approach would model $p(y_0 \mid y_i)$ with an (initially flat) Beta-distribution, and use the counters $a_i$ and $b_i$ to update this belief [3], leading to the same MAP estimate $\hat{w}_i$. Consequently, in both approaches a new example with $y_0 = 1$ and $y_i = 1$ leads to the update
$$\hat{w}_i^{new} = \log \frac{a_i + 1}{b_i} = \log\Big(\frac{a_i}{b_i}\big(1 + \tfrac{1}{a_i}\big)\Big) = \hat{w}_i + \log\Big(1 + \frac{1}{N_i}(1 + e^{-\hat{w}_i})\Big), \tag{6}$$
where $N_i := a_i + b_i$ is the number of previously processed examples with $y_i = 1$, thus $\frac{1}{a_i} = \frac{1}{N_i}(1 + \frac{b_i}{a_i})$. Analogously, a new example with $y_0 = 0$ and $y_i = 1$ gives rise to the update
$$\hat{w}_i^{new} = \log \frac{a_i}{b_i + 1} = \log\Big(\frac{a_i}{b_i} \cdot \frac{1}{1 + 1/b_i}\Big) = \hat{w}_i - \log\Big(1 + \frac{1}{N_i}(1 + e^{\hat{w}_i})\Big). \tag{7}$$
Furthermore, $\hat{w}_i^{new} = \hat{w}_i$ for a new example with $y_i = 0$. Using the approximation $\log(1 + x) \approx x$ the update rules (6) and (7) yield the Bayesian Hebb rule (1) with an adaptive learning rate $\eta_i = \frac{1}{N_i}$ for each synapse.

In fact, a result of Robbins-Monro (see [8] for a review) implies that the updating of weight estimates $\hat{w}_i$ according to (6) and (7) converges to the target values $w_i^*$ not only for the particular choice $\eta_i^{(N_i)} = \frac{1}{N_i}$, but for any sequence $\eta_i^{(N_i)}$ that satisfies $\sum_{N_i=1}^{\infty} \eta_i^{(N_i)} = \infty$ and $\sum_{N_i=1}^{\infty} (\eta_i^{(N_i)})^2 < \infty$. More than that, the Supermartingale Convergence Theorem (see [8]) guarantees convergence in distribution even for a sufficiently small constant learning rate.
Learning rate adaptation
One can see from the above considerations that the Bayesian Hebb rule with a constant learning rate $\eta$ converges globally to the desired log-odds. A too small constant learning rate, however, tends to slow down the initial convergence of the weight vector, and a too large constant learning rate produces larger fluctuations once the steady state is reached.

(6) and (7) suggest a decaying learning rate $\eta_i^{(N_i)} = \frac{1}{N_i}$, where $N_i$ is the number of preceding examples with $y_i = 1$. We will present a learning rate adaptation mechanism that avoids biologically implausible counters, and is robust enough to deal even with non-stationary distributions.

Since the Bayesian Hebb rule and the Bayesian approach of updating Beta-distributions for conditional probabilities are closely related, it is reasonable to expect that the distribution of weights $w_i$ over longer time periods with a non-vanishing learning rate will resemble a $\mathrm{Beta}(a_i, b_i)$-distribution transformed to the log-odd domain. The parameters $a_i$ and $b_i$ in this case are not exact counters anymore but correspond to virtual sample sizes, depending on the current learning rate. We formalize this statistical model of $w_i$ by
$$\sigma(w_i) = \frac{1}{1 + e^{-w_i}} \sim \mathrm{Beta}(a_i, b_i) \;\Longleftrightarrow\; w_i \sim \frac{\Gamma(a_i + b_i)}{\Gamma(a_i)\,\Gamma(b_i)}\, \sigma(w_i)^{a_i}\, \sigma(-w_i)^{b_i}.$$
In practice this model turned out to capture quite well the actually observed quasi-stationary distribution of $w_i$. In [9] we show analytically that $E[w_i] \approx \log \frac{a_i}{b_i}$ and $\mathrm{Var}[w_i] \approx \frac{1}{a_i} + \frac{1}{b_i}$. A learning rate adaptation mechanism at the synapse that keeps track of the observed mean and variance of the synaptic weight can therefore recover estimates of the virtual sample sizes $a_i$ and $b_i$. The following mechanism, which we call variance tracking, implements this by computing running averages of the weights and of the squares of weights in $\hat{w}_i$ and $\hat{q}_i$:
$$\eta_i^{new} \leftarrow \frac{\hat{q}_i - \hat{w}_i^2}{1 + \cosh \hat{w}_i}, \qquad \hat{w}_i^{new} \leftarrow (1 - \eta_i)\,\hat{w}_i + \eta_i\, w_i, \qquad \hat{q}_i^{new} \leftarrow (1 - \eta_i)\,\hat{q}_i + \eta_i\, w_i^2. \tag{8}$$
In practice this mechanism decays like $\frac{1}{N_i}$ under stationary conditions, but is also able to handle changing input distributions. It was used in all presented experiments for the Bayesian Hebb rule.
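A small Python sketch of the variance tracking mechanism (8), wrapped around a single-synapse version of rule (1); the simulated distribution, the initial values, and the clipping of η are assumptions made for illustration:

```python
import numpy as np

def hebb_update(w, y0, eta):
    """One Bayesian Hebb step (1) for a synapse whose presynaptic unit fired."""
    return w + eta * (1.0 + np.exp(-w)) if y0 else w - eta * (1.0 + np.exp(w))

rng = np.random.default_rng(1)
w, w_avg, q_avg = 0.0, 0.0, 1.0   # weight and running averages of w and w^2

for t in range(30000):
    y0 = rng.random() < 0.8                               # assumed p(y0=1 | yi=1)
    eta = (q_avg - w_avg**2) / (1.0 + np.cosh(w_avg))     # eq. (8)
    eta = min(max(eta, 1e-5), 0.5)                        # clip: added numerical safeguard
    w = hebb_update(w, y0, eta)
    w_avg = (1.0 - eta) * w_avg + eta * w
    q_avg = (1.0 - eta) * q_avg + eta * w**2

print(w)   # settles near log(0.8/0.2) = 1.386 with shrinking fluctuations
```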
3 Hebbian learning of Bayesian decisions
We now show how the Bayesian Hebb rule can be used to learn Bayes optimal decisions. The first application is the Naive Bayesian classifier, where a binary target variable $x_0$ should be inferred from a vector of multinomial variables $\mathbf{x} = \langle x_1, \ldots, x_m \rangle$, under the assumption that the $x_i$'s are conditionally independent given $x_0$, thus $p(x_0, \mathbf{x}) = p(x_0) \prod_{k=1}^{m} p(x_k \mid x_0)$. Using basic rules of probability theory the posterior probability ratio for $x_0 = 1$ and $x_0 = 0$ can be derived:
$$\frac{p(x_0{=}1 \mid \mathbf{x})}{p(x_0{=}0 \mid \mathbf{x})} = \frac{p(x_0{=}1)}{p(x_0{=}0)} \prod_{k=1}^{m} \frac{p(x_k \mid x_0{=}1)}{p(x_k \mid x_0{=}0)} = \left(\frac{p(x_0{=}1)}{p(x_0{=}0)}\right)^{(1-m)} \prod_{k=1}^{m} \frac{p(x_0{=}1 \mid x_k)}{p(x_0{=}0 \mid x_k)} \tag{9}$$
$$= \left(\frac{p(x_0{=}1)}{p(x_0{=}0)}\right)^{(1-m)} \prod_{k=1}^{m} \prod_{j=1}^{m_k} \left(\frac{p(x_0{=}1 \mid x_k{=}j)}{p(x_0{=}0 \mid x_k{=}j)}\right)^{I(x_k = j)},$$
where $m_k$ is the number of different possible values of the input variable $x_k$, and the indicator function $I$ is defined as $I(\mathrm{true}) = 1$ and $I(\mathrm{false}) = 0$.
Let the $m$ input variables $x_1, \ldots, x_m$ be represented through the binary firing states $y_1, \ldots, y_n \in \{0,1\}$ of the $n$ presynaptic neurons in a population coding manner. More precisely, let each input variable $x_k \in \{1, \ldots, m_k\}$ be represented by $m_k$ neurons, where each neuron fires only for one of the $m_k$ possible values of $x_k$. Formally we define the simple preprocessing (SP)
$$\mathbf{y}^T = \big[\phi(x_1)^T, \ldots, \phi(x_m)^T\big] \quad \text{with} \quad \phi(x_k)^T = [I(x_k = 1), \ldots, I(x_k = m_k)]. \tag{10}$$
The binary target variable x0 is represented directly by the binary state y0 of the postsynaptic neuron.
Substituting the state variables $y_0, y_1, \ldots, y_n$ in (9) and taking the logarithm leads to
$$\log \frac{p(y_0{=}1 \mid \mathbf{y})}{p(y_0{=}0 \mid \mathbf{y})} = (1 - m) \log \frac{p(y_0{=}1)}{p(y_0{=}0)} + \sum_{i=1}^{n} y_i \log \frac{p(y_0{=}1 \mid y_i{=}1)}{p(y_0{=}0 \mid y_i{=}1)}.$$
Hence the optimal decision under the Naive Bayes assumption is
$$\hat{y}_0 = \mathrm{sign}\Big((1 - m)\, w_0^* + \sum_{i=1}^{n} w_i^*\, y_i\Big).$$
The optimal weights $w_0^*$ and $w_i^*$,
$$w_0^* = \log \frac{p(y_0{=}1)}{p(y_0{=}0)} \quad \text{and} \quad w_i^* = \log \frac{p(y_0{=}1 \mid y_i{=}1)}{p(y_0{=}0 \mid y_i{=}1)} \quad \text{for } i = 1, \ldots, n,$$
are obviously log-odds which can be learned by the Bayesian Hebb rule (the bias weight $w_0$ is simply learned as an unconditional log-odd).
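As a sketch of how these pieces fit together, the following Python fragment implements the simple preprocessing (10) and the resulting Naive Bayes decision function; the toy setup with m = 3 ternary attributes is an assumption for illustration, and the weights would be trained with rule (1):

```python
import numpy as np

def simple_preprocessing(x, m_k):
    """Population code (10): one-hot encode each attribute x[k] in {1..m_k[k]}."""
    return np.concatenate([np.eye(mk)[xk - 1] for xk, mk in zip(x, m_k)])

m_k = [3, 3, 3]        # three ternary input variables (toy setup)
n = sum(m_k)
w = np.zeros(n)        # one weight per population-coded unit, learned by rule (1)
w0 = 0.0               # bias: unconditional log-odd of y0

def predict(x):
    y = simple_preprocessing(x, m_k)
    m = len(m_k)
    return np.sign((1 - m) * w0 + w @ y)   # Naive Bayes decision function

print(predict([2, 1, 3]))   # +1 or -1 once the weights have been trained
```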
3.1 Learning Bayesian decisions for arbitrary distributions
We now address the more general case, where conditional independence of the input variables $x_1, \ldots, x_m$ cannot be assumed. In this case the dependency structure of the underlying distribution is given in terms of an arbitrary Bayesian network BN for discrete variables (see e.g. Figure 1 A). Without loss of generality we choose a numbering scheme of the nodes of the BN such that the node to be learned is $x_0$ and its direct children are $x_1, \ldots, x_{m'}$. This implies that the BN can be described by $m + 1$ (possibly empty) parent sets defined by
$$\mathcal{P}_k = \{ i \mid \text{a directed edge } x_i \rightarrow x_k \text{ exists in BN and } i \geq 1 \}.$$
The joint probability distribution on the variables $x_0, \ldots, x_m$ in BN can then be factored and evaluated for $x_0 = 1$ and $x_0 = 0$ in order to obtain the probability ratio
$$\frac{p(x_0{=}1, \mathbf{x})}{p(x_0{=}0, \mathbf{x})} = \frac{p(x_0{=}1 \mid \mathbf{x})}{p(x_0{=}0 \mid \mathbf{x})} = \frac{p(x_0{=}1 \mid \mathbf{x}_{\mathcal{P}_0})}{p(x_0{=}0 \mid \mathbf{x}_{\mathcal{P}_0})} \prod_{k=1}^{m'} \frac{p(x_k \mid \mathbf{x}_{\mathcal{P}_k}, x_0{=}1)}{p(x_k \mid \mathbf{x}_{\mathcal{P}_k}, x_0{=}0)} \prod_{k=m'+1}^{m} \frac{p(x_k \mid \mathbf{x}_{\mathcal{P}_k})}{p(x_k \mid \mathbf{x}_{\mathcal{P}_k})}.$$
Figure 1: A) An example Bayesian network with general connectivity. B) Population coding applied to the Bayesian network shown in panel A. For each combination of values of the variables $\{x_k, \mathbf{x}_{\mathcal{P}_k}\}$ of a factor there is exactly one neuron (indicated by a black circle) associated with the factor that outputs the value 1. In addition ORs of these values are computed (black squares). We refer to the resulting preprocessing circuit as generalized preprocessing (GP).
Obviously, the last term cancels out, and by applying Bayes' rule and taking the logarithm the target log-odd can be expressed as a sum of conditional log-odds only:
$$\log \frac{p(x_0{=}1 \mid \mathbf{x})}{p(x_0{=}0 \mid \mathbf{x})} = \log \frac{p(x_0{=}1 \mid \mathbf{x}_{\mathcal{P}_0})}{p(x_0{=}0 \mid \mathbf{x}_{\mathcal{P}_0})} + \sum_{k=1}^{m'} \left( \log \frac{p(x_0{=}1 \mid x_k, \mathbf{x}_{\mathcal{P}_k})}{p(x_0{=}0 \mid x_k, \mathbf{x}_{\mathcal{P}_k})} - \log \frac{p(x_0{=}1 \mid \mathbf{x}_{\mathcal{P}_k})}{p(x_0{=}0 \mid \mathbf{x}_{\mathcal{P}_k})} \right). \tag{11}$$
We now develop a suitable sparse encoding of $x_1, \ldots, x_m$ into binary variables $y_1, \ldots, y_n$ (with $n \geq m$) such that the decision function (11) can be written as a weighted sum, and the weights correspond to conditional log-odds of $y_i$'s. Figure 1 B illustrates such a sparse code: One binary variable is created for every possible value assignment to a variable and all its parents, and one additional binary variable is created for every possible value assignment to the parent nodes only. Formally, the previously introduced population coding operator $\phi$ is generalized such that $\phi(x_{i_1}, x_{i_2}, \ldots, x_{i_l})$ creates a vector of length $\prod_{j=1}^{l} m_{i_j}$ that equals zero in all entries except for one 1-entry which identifies by its position in the vector the present assignment of the input variables $x_{i_1}, \ldots, x_{i_l}$. The concatenation of all these population coded groups is collected in the vector $\mathbf{y}$ of length $n$:
$$\mathbf{y}^T = \big[\phi(\mathbf{x}_{\mathcal{P}_0})^T,\; \phi(x_1, \mathbf{x}_{\mathcal{P}_1})^T,\; -\phi(\mathbf{x}_{\mathcal{P}_1})^T,\; \ldots,\; \phi(x_{m'}, \mathbf{x}_{\mathcal{P}_{m'}})^T,\; -\phi(\mathbf{x}_{\mathcal{P}_{m'}})^T\big]. \tag{12}$$
The negated vector parts in (12) correspond to the negative coefficients in the sum in (11). Inserting the sparse coding (12) into (11) allows writing the Bayes optimal decision function (11) as a pure sum of log-odds of the target variable:
$$\hat{x}_0 = \hat{y}_0 = \mathrm{sign}\Big(\sum_{i=1}^{n} w_i^*\, y_i\Big), \quad \text{with} \quad w_i^* = \log \frac{p(y_0{=}1 \mid y_i \neq 0)}{p(y_0{=}0 \mid y_i \neq 0)}.$$
Every synaptic weight wi can be learned efficiently by the Bayesian Hebb rule (1) with the formal
modification that the update is not only triggered by $y_i = 1$ but in general whenever $y_i \neq 0$ (which
obviously does not change the behavior of the learning process). A neuron that learns with the
Bayesian Hebb rule on inputs that are generated by the generalized preprocessing (GP) defined in
(12) therefore approximates the Bayes optimal decision function (11), and converges quite fast to
the best performance that any probabilistic inference could possibly achieve (see Figure 2B).
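A hedged Python sketch of the generalized preprocessing (12), written out for a minimal toy structure (one child x1 with parent set P1 = {2}, both ternary, and an empty parent set P0, which contributes nothing); the structure and arities are assumptions for illustration:

```python
import numpy as np

def phi(values, arities):
    """One-hot code of a joint assignment; values[k] ranges over 1..arities[k]."""
    idx = 0
    for v, a in zip(values, arities):    # mixed-radix index of the assignment
        idx = idx * a + (v - 1)
    return np.eye(int(np.prod(arities)))[idx]

def gp_encode(x1, x2):
    """Eq. (12) for the assumed toy network (P0 empty, so phi(x_P0) is omitted)."""
    return np.concatenate([
        phi((x1, x2), (3, 3)),    #  phi(x_1, x_{P1})
        -phi((x2,), (3,)),        # -phi(x_{P1})
    ])

y = gp_encode(2, 3)   # length 9 + 3 = 12, entries in {-1, 0, 1}
```

The Bayesian Hebb rule is then applied to every component with the y_i ≠ 0 trigger described above.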
4 The Bayesian Hebb rule in reinforcement learning
We show in this section that a reward-modulated version of the Bayesian Hebb rule enables a learning agent to solve simple reinforcement learning tasks. We consider the standard operant conditioning scenario, where the learner receives at each trial an input $\mathbf{x} = \langle x_1, \ldots, x_m \rangle$, chooses an action $\hat{a}$ out of a set of possible actions $\mathcal{A}$, and receives a binary reward signal $r \in \{0, 1\}$ with probability $p(r \mid \mathbf{x}, a)$. The learner's goal is to learn (as fast as possible) a policy $\pi(\mathbf{x}, a)$ so that action selection according to this policy maximizes the average reward. In contrast to the previous learning tasks, the learner has to explore different actions for the same input to learn the reward probabilities for all possible actions. The agent might for example choose actions stochastically with $\pi(\mathbf{x}, a = \hat{a}) = p(r = 1 \mid \mathbf{x}, a = \hat{a})$, which corresponds to the matching behavior phenomenon often observed in biology [10]. This policy was used during training in our computer experiments.

The goal is to infer the probability of binary reward, so it suffices to learn the log-odds $\log \frac{p(r=1 \mid \mathbf{x}, a)}{p(r=0 \mid \mathbf{x}, a)}$ for every action, and choose the action that is most likely to yield reward (e.g. by a Winner-Take-All structure). If the reward probability for an action $a = \hat{a}$ is defined by some Bayesian network BN, one can rewrite this log-odd as
$$\log \frac{p(r{=}1 \mid \mathbf{x}, a{=}\hat{a})}{p(r{=}0 \mid \mathbf{x}, a{=}\hat{a})} = \log \frac{p(r{=}1 \mid a{=}\hat{a})}{p(r{=}0 \mid a{=}\hat{a})} + \sum_{k=1}^{m} \log \frac{p(x_k \mid \mathbf{x}_{\mathcal{P}_k}, r{=}1, a{=}\hat{a})}{p(x_k \mid \mathbf{x}_{\mathcal{P}_k}, r{=}0, a{=}\hat{a})}. \tag{13}$$
In order to use the Bayesian Hebb rule, the input vector $\mathbf{x}$ is preprocessed to obtain a binary vector $\mathbf{y}$. Both a simple population code such as (10), or generalized preprocessing as in (12) and Figure 1B can be used, depending on the assumed dependency structure. The reward log-odd (13) for the preprocessed input vector $\mathbf{y}$ can then be written as a linear sum
$$\log \frac{p(r{=}1 \mid \mathbf{y}, a{=}\hat{a})}{p(r{=}0 \mid \mathbf{y}, a{=}\hat{a})} = w_{\hat{a},0}^* + \sum_{i=1}^{n} w_{\hat{a},i}^*\, y_i,$$
where the optimal weights are $w_{\hat{a},0}^* = \log \frac{p(r=1 \mid a=\hat{a})}{p(r=0 \mid a=\hat{a})}$ and $w_{\hat{a},i}^* = \log \frac{p(r=1 \mid y_i \neq 0, a=\hat{a})}{p(r=0 \mid y_i \neq 0, a=\hat{a})}$. These log-odds can be learned for each possible action $\hat{a}$ with a reward-modulated version of the Bayesian
Hebb rule (1):
$$\Delta w_{\hat{a},i} = \begin{cases} \eta\,(1 + e^{-w_{\hat{a},i}}), & \text{if } r = 1,\; y_i \neq 0,\; a = \hat{a} \\ -\eta\,(1 + e^{w_{\hat{a},i}}), & \text{if } r = 0,\; y_i \neq 0,\; a = \hat{a} \\ 0, & \text{otherwise.} \end{cases} \tag{14}$$
The attractive theoretical properties of the Bayesian Hebb rule for the prediction case apply also to
the case of reinforcement learning. The weights corresponding to the optimal policy are the only
equilibria under the reward-modulated Bayesian Hebb rule, and are also global attractors in weight
space, independently of the exploration policy (see [9]).
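The following Python sketch turns (14) into a complete toy agent. The bandit-style environment, the number of actions and features, the learning rate, and the normalization of the matching-behavior policy across actions are all illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions, n_features, eta = 4, 12, 0.02
W = np.zeros((n_actions, n_features))   # one weight vector per action
W0 = np.zeros(n_actions)                # per-action bias log-odds

for trial in range(5000):
    y = (rng.random(n_features) < 0.3).astype(float)    # preprocessed input (toy)
    logodds = W0 + W @ y                                # estimated log p(r=1)/p(r=0)
    p_hat = 1.0 / (1.0 + np.exp(-logodds))
    a = rng.choice(n_actions, p=p_hat / p_hat.sum())    # normalized matching policy
    r = rng.random() < 0.2 + 0.15 * a                   # assumed reward model

    # Reward-modulated Bayesian Hebb rule (14), applied to the chosen action only;
    # the bias is treated as a weight on an always-active unit.
    active = y != 0
    if r:
        W[a, active] += eta * (1.0 + np.exp(-W[a, active]))
        W0[a] += eta * (1.0 + np.exp(-W0[a]))
    else:
        W[a, active] -= eta * (1.0 + np.exp(W[a, active]))
        W0[a] -= eta * (1.0 + np.exp(W0[a]))
```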
5 Experimental Results

5.1 Results for prediction tasks
We have tested the Bayesian Hebb rule on 400 different prediction tasks, each of them defined by a
general (non-Naive) Bayesian network of 7 binary variables. The networks were randomly generated
by the algorithm of [11]. From each network we sampled 2000 training and 5000 test examples, and
measured the percentage of correct predictions after every update step.
The performance of the predictor was compared to the Bayes optimal predictor, and to online logistic
regression, which fits a linear model by gradient descent on the cross-entropy error function. This
non-Hebbian learning approach is in general the best performing online learning approach for linear
discriminators [3]. Figure 2A shows that the Bayesian Hebb rule with the simple preprocessing (10)
generalizes better from a few training examples, but is outperformed by logistic regression in the
long run, since the Naive Bayes assumption is not met. With the generalized preprocessing (12), the
Bayesian Hebb rule learns fast and converges to the Bayes optimum (see Figure 2B). In Figure 2C
we show that the Bayesian Hebb rule is robust to noisy updates, a condition very likely to occur in biological systems. We modified the weight update $\Delta w_i$ such that it was uniformly distributed in the interval $\Delta w_i \pm x\%$, for the noise levels $x$ shown in Figure 2C. Even such imprecise implementations of the Bayesian Hebb rule perform
very well. Similar results can be obtained if the exp-function in (1) is replaced by a low-order Taylor
approximation.
5.2 Results for action selection tasks
The reward-modulated version (14) of the Bayesian Hebb rule was tested on 250 random action selection tasks with m = 6 binary input attributes, and 4 possible actions.
[Figure 2 appears here: three panels A-C plot prediction correctness against the number of training examples; legends include Bayesian Hebb SP, Bayesian Hebb GP, logistic regression, Naive Bayes, the Bayes optimum, and several noise levels.]
Figure 2: Performance comparison for prediction tasks. A) The Bayesian Hebb rule with simple
preprocessing (SP) learns as fast as Naive Bayes, and faster than logistic regression (with optimized
constant learning rate). B) The Bayesian Hebb rule with generalized preprocessing (GP) learns fast
and converges to the Bayes optimal prediction performance. C) Even a very imprecise implementation of the Bayesian Hebb rule (noisy updates, uniformly distributed in $\Delta w_i \pm x\%$) yields almost
the same learning performance.
For every action a random Bayesian network [11] was drawn to model the input and reward distributions (see [9] for details). The agent received stochastic binary rewards for every chosen action, updated the weights $w_{\hat{a},i}$ according to (14), and measured the average reward on 500 independent test trials.
In Figure 3A we compare the reward-modulated Bayesian Hebb rule with simple population coding
(10) (Bayesian Hebb SP), and generalized preprocessing (12) (Bayesian Hebb GP), to the standard
learning model for simple conditioning tasks, the non-Hebbian Rescorla-Wagner rule [12]. The
reward-modulated Bayesian Hebb rule learns as fast as the Rescorla-Wagner rule, and achieves in
combination with generalized preprocessing a higher performance level. The widely used tabular
Q-learning algorithm, in comparison, is slower than the other algorithms, since it does not generalize,
but it converges to the optimal policy in the long run.
5.3 A model for the experiment of Yang and Shadlen
In the experiment by Yang and Shadlen [1], a monkey had to choose between gazing towards a red
target R or a green target G. The probability that a reward was received at either choice depended
on four visual input stimuli that had been shown at the beginning of the trial. Every stimulus was
one shape out of a set of ten possibilities and had an associated weight, which had been defined by
the experimenter. The sum of the four weights yielded the log-odd of obtaining a reward at the red
target, and a reward for each trial was assigned accordingly to one of the targets. The monkey thus
had to combine the evidence from four visual stimuli to optimize its action selection behavior.
In the model of the task it is sufficient to learn weights only for the action a = R, and select
this action whenever the log-odd using the current weights is positive, and G otherwise. A simple
population code as in (10) encoded the 4-dimensional visual stimulus into a 40-dimensional binary
vector y. In our experiments, the reward-modulated Bayesian Hebb rule learns this task as fast and
with similar quality as the non-Hebbian Rescorla-Wagner rule. Furthermore Figures 3B and 3C
show that it produces after learning similar behavior as that reported for two monkeys in [1].
6 Discussion
We have shown that the simplest and experimentally best supported local learning mechanism, Hebbian learning, is sufficient to learn Bayes optimal decisions. We have introduced and analyzed the
Bayesian Hebb rule, a training method for synaptic weights, which converges fast and robustly to
optimal log-probability ratios, without requiring any communication between plasticity mechanisms
for different synapses. We have shown how the same plasticity mechanism can learn Bayes optimal
decisions under different statistical independence assumptions, if it is provided with an appropriately
preprocessed input. We have demonstrated on a variety of prediction tasks that the Bayesian Hebb
rule learns very fast, and with an appropriate sparse preprocessing mechanism for groups of statistically dependent features its performance converges to the Bayes optimum. Our approach therefore
suggests that sparse, redundant codes of input features may simplify synaptic learning processes in
spite of strong statistical dependencies. Finally we have shown that Hebbian learning also suffices
[Figure 3 appears here: panel A plots average reward against trials (curves: Bayesian Hebb SP, Bayesian Hebb GP, Rescorla-Wagner, Q-Learning, Optimal Selector); panels B and C plot the percentage of red choices against the evidence for red (logLR).]
Figure 3: A) On 250 4-action conditioning tasks with stochastic rewards, the reward-modulated
Bayesian Hebb rule with simple preprocessing (SP) learns similarly to the Rescorla-Wagner rule,
and substantially faster than Q-learning. With generalized preprocessing (GP), the rule converges to
the optimal action-selection policy. B, C) Action selection policies learned by the reward-modulated
Bayesian Hebb rule in the task by Yang and Shadlen [1] after 100 (B), and 1000 (C) trials are
qualitatively similar to the policies adopted by monkeys H and J in [1] after learning.
for simple instances of reinforcement learning. The Bayesian Hebb rule, modulated by a signal
related to rewards, enables fast learning of optimal action selection. Experimental results of [1] on
reinforcement learning of probabilistic inference in primates can be partially modeled in this way
with regard to resulting behaviors.
An attractive feature of the Bayesian Hebb rule is its ability to deal with the addition or removal
of input features through the creation or deletion of synaptic connections, since no relearning of
weights is required for the other synapses. In contrast to discriminative neural learning rules, our
approach is generative, which according to [13] leads to faster generalization. Therefore the learning
rule may be viewed as a potential building block for models of the brain as a self-organizing and fast
adapting probabilistic inference machine.
Acknowledgments
We would like to thank Martin Bachler, Sophie Deneve, Rodney Douglas, Konrad Koerding, Rajesh
Rao, and especially Dan Roth for inspiring discussions. Written under partial support by the Austrian Science Fund FWF, project # P17229-N04, project # S9102-N04, and project # FP6-015879
(FACETS) as well as # FP7-216593 (SECO) of the European Union.
References
[1] T. Yang and M. N. Shadlen. Probabilistic reasoning by neurons. Nature, 447:1075–1080, 2007.
[2] R. P. N. Rao. Neural models of Bayesian belief propagation. In K. Doya, S. Ishii, A. Pouget, and R. P. N. Rao, editors, Bayesian Brain, pages 239–267. MIT Press, 2007.
[3] C. M. Bishop. Pattern Recognition and Machine Learning. Springer (New York), 2006.
[4] S. Deneve. Bayesian spiking neurons I, II. Neural Computation, 20(1):91–145, 2008.
[5] A. Sandberg, A. Lansner, K. M. Petersson, and Ö. Ekeberg. A Bayesian attractor network with incremental learning. Network: Computation in Neural Systems, 13:179–194, 2002.
[6] D. Roth. Learning in natural language. In Proc. of IJCAI, pages 898–904, 1999.
[7] D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.
[8] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[9] B. Nessler, M. Pfeiffer, and W. Maass. Journal version. In preparation, 2009.
[10] L. P. Sugrue, G. S. Corrado, and W. T. Newsome. Matching behavior and the representation of value in the parietal cortex. Science, 304:1782–1787, 2004.
[11] J. S. Ide and F. G. Cozman. Random generation of Bayesian networks. In Proceedings of the 16th Brazilian Symposium on Artificial Intelligence, pages 366–375, 2002.
[12] R. A. Rescorla and A. R. Wagner. Classical conditioning II. In A. H. Black and W. F. Prokasy, editors, A theory of Pavlovian conditioning, pages 64–99. 1972.
[13] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers. NIPS, 14:841–848, 2002.
Joint support recovery under high-dimensional scaling:
Benefits and perils of ℓ1,∞-regularization
Sahand Negahban
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720-1770
sahand [email protected]
Martin J. Wainwright
Department of Statistics, and Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720-1770
[email protected]
Abstract
Given a collection of r ≥ 2 linear regression problems in p dimensions, suppose that the regression coefficients share partially common supports. This set-up suggests the use of ℓ1/ℓ∞-regularized regression for joint estimation of the p × r matrix of regression coefficients. We analyze the high-dimensional scaling of ℓ1/ℓ∞-regularized quadratic programming, considering both consistency rates in ℓ∞-norm, and also how the minimal sample size n required for performing variable selection grows as a function of the model dimension, sparsity, and overlap between the supports. We begin by establishing bounds on the ℓ∞-error as well as sufficient conditions for exact variable selection for fixed design matrices, as well as designs drawn randomly from general Gaussian matrices. These results show that the high-dimensional scaling of ℓ1/ℓ∞-regularization is qualitatively similar to that of ordinary ℓ1-regularization. Our second set of results applies to design matrices drawn from standard Gaussian ensembles, for which we provide a sharp set of necessary and sufficient conditions: the ℓ1/ℓ∞-regularized method undergoes a phase transition characterized by the rescaled sample size θ1,∞(n, p, s, α) = n/{(4 − 3α)s log(p − (2 − α)s)}. More precisely, for any δ > 0, the probability of successfully recovering both supports converges to 1 for scalings such that θ1,∞ ≥ 1 + δ, and converges to 0 for scalings for which θ1,∞ ≤ 1 − δ. An implication of this threshold is that use of ℓ1,∞-regularization yields improved statistical efficiency if the overlap parameter is large enough (α > 2/3), but performs worse than a naive Lasso-based approach for moderate to small overlap (α < 2/3). We compare our method to those of Lacy and Bernstein [1, 2]; we illustrate the close agreement between these theoretical predictions and the actual behavior in simulations.
1 Introduction
The area of high-dimensional statistical inference is concerned with the behavior of models and algorithms in which the dimension p is comparable to, or possibly even larger than, the sample size n. In the absence of additional structure, it is well-known that many standard procedures (among them linear regression and principal component analysis) are not consistent unless the ratio p/n converges to zero. Since this scaling precludes having p comparable to or larger than n, an active line of research is based on imposing structural conditions on the data (for instance, sparsity, manifold constraints, or graphical model structure) and then studying conditions under which various polynomial-time methods are either consistent, or conversely inconsistent.
This paper deals with high-dimensional scaling in the context of solving multiple regression problems, where the regression vectors are assumed to have shared sparse structure. More specifically, suppose that we are given a collection of r different linear regression models in p dimensions, with regression vectors β^i ∈ R^p, for i = 1, ..., r. We let S(β^i) = {j | β^i_j ≠ 0} denote the support set of β^i. In many applications, among them sparse approximation, graphical model selection, and image reconstruction, it is natural to impose a sparsity constraint, corresponding to restricting the cardinality |S(β^i)| of each support set. Moreover, one might expect some amount of overlap between the sets S(β^i) and S(β^j) for indices i ≠ j, since they correspond to the sets of active regression coefficients in each problem. For instance, consider the problem of image denoising or reconstruction, using wavelets or some other type of multiresolution basis. It is well known that natural images tend to have sparse representations in such bases. Moreover, similar images, say the same scene taken from multiple cameras, would be expected to share a similar subset of active features in the reconstruction. Similarly, in analyzing the genetic underpinnings of a given disease, one might have results from different subjects and/or experiments, meaning that the covariate realizations and regression vectors would differ in their numerical values, but one expects the same subsets of genes to be active in controlling the disease, which translates to a condition of shared support in the regression coefficients. Given these structural conditions of shared sparsity in these and other applications, it is reasonable to consider how this common structure can be exploited so as to increase the statistical efficiency of estimation procedures.
In this paper, we study the high-dimensional scaling of block ℓ1/ℓ∞ regularization. Our main contribution is to obtain some precise (and arguably surprising) insights into the benefits and dangers of using block ℓ1/ℓ∞ regularization, as compared to simpler ℓ1-regularization (a separate Lasso for each regression problem). We begin by providing a general set of sufficient conditions for consistent support recovery for both fixed design matrices, and random Gaussian design matrices. In addition to these basic consistency results, we then seek to characterize rates, for the particular case of standard Gaussian designs, in a manner precise enough to address the following questions.

(a) First, under what structural assumptions on the data does the use of ℓ1/ℓ∞ block-regularization provide a quantifiable reduction in the scaling of the sample size n, as a function of the problem dimension p and other structural parameters, required for consistency?

(b) Second, are there any settings in which ℓ1/ℓ∞ block-regularization can be harmful relative to computationally less expensive procedures?

Answers to these questions yield useful insight into the tradeoff between computational and statistical efficiency. Indeed, the convex programs that arise from using block-regularization typically require a greater computational cost to solve. Accordingly, it is important to understand under what conditions this increased computational cost guarantees that fewer samples are required for achieving a fixed level of statistical accuracy.

As a representative instance of our theory, consider the special case of standard Gaussian design matrices and two regression problems (r = 2), with the supports S(β^1) and S(β^2) each of size s and overlapping in a fraction α ∈ [0, 1] of their entries. For this problem, we prove that block ℓ1/ℓ∞ regularization undergoes a phase transition in terms of the rescaled sample size
$$\theta_{1,\infty}(n, p, s, \alpha) := \frac{n}{(4 - 3\alpha)\, s \log(p - (2 - \alpha)s)}. \tag{1}$$
In words, for any δ > 0 and for scalings of the quadruple (n, p, s, α) such that θ1,∞ ≥ 1 + δ, the probability of successfully recovering both S(β^1) and S(β^2) converges to one, whereas for scalings such that θ1,∞ ≤ 1 − δ, the probability of success converges to zero. By comparison to previous theory on the behavior of the Lasso (ordinary ℓ1-regularized quadratic programming), the scaling (1) has two interesting implications. For the s-sparse regression problem with standard Gaussian designs, the Lasso has been shown [10] to undergo a phase transition as a function of the rescaled sample size
$$\theta_{\mathrm{Las}}(n, p, s) := \frac{n}{2 s \log(p - s)}, \tag{2}$$
so that solving two separate Lasso problems, one for each regression problem, would recover both supports for problem sequences (n, p, s) such that θLas > 1. Thus, one consequence of our analysis is to provide a precise confirmation of the natural intuition: if the data is well-aligned with the regularizer, then block-regularization increases statistical efficiency. On the other hand, our analysis also conveys a cautionary message: if the overlap is too small (more precisely, if α < 2/3), then block ℓ1,∞ is actually harmful relative to the naive Lasso-based approach. This fact illustrates that some care is required in the application of block regularization schemes.
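The crossover at α = 2/3 can be read off by comparing the two rescaled sample sizes numerically; the following Python snippet is a sketch (the problem sizes are arbitrary):

```python
import numpy as np

def n_block(p, s, alpha):
    """Sample size at which theta_{1,inf} = 1 in eq. (1)."""
    return (4 - 3 * alpha) * s * np.log(p - (2 - alpha) * s)

def n_lasso(p, s):
    """Sample size at which theta_Las = 1 in eq. (2), per regression problem."""
    return 2 * s * np.log(p - s)

p, s = 1024, 32
for alpha in (0.0, 0.5, 2/3, 0.9, 1.0):
    print(alpha, n_block(p, s, alpha) / n_lasso(p, s))
# ratio > 1 (block worse) for alpha < 2/3, roughly 1 at 2/3, < 1 (block better) above
```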
The remainder of this paper is organized as follows. In Section 2, we provide a precise description of the
problem. Section 3 is devoted to the statement of our main result, some discussion of its consequences, and
illustration by comparison to empirical simulations.
2 Problem set-up

We begin by setting up the problem to be studied in this paper, including multivariate regression and a family of block-regularized programs for estimating sparse vectors.

2.1 Multivariate regression
In this problem, we consider the following form of multivariate regression. For each i = 1, ..., r, let β^i ∈ R^p be a regression vector, and consider the family of linear observation models
$$y^i = X^i \beta^i + w^i, \qquad i = 1, 2, \ldots, r. \tag{3}$$
Here each X^i ∈ R^{n×p} is a design matrix, possibly different for each vector β^i, and w^i ∈ R^n is a noise vector. We assume that the noise vectors w^i and w^j are independent for different regression problems i ≠ j. In this paper, we assume that each w^i has a multivariate Gaussian N(0, σ² I_{n×n}) distribution. However, we note that qualitatively similar results will hold for any noise distribution with sub-Gaussian tails (see the book [1] for more background).
2.2 Block-regularization schemes
For compactness in notation, we frequently use B to denote the p × r matrix with β^i ∈ R^p as the ith column. Given a parameter q ∈ [1, ∞], we define the ℓ1/ℓq block-norm as follows:
$$\|B\|_{\ell_1/\ell_q} := \sum_{k=1}^{p} \big\|(\beta_k^1, \beta_k^2, \ldots, \beta_k^r)\big\|_q, \tag{4}$$
corresponding to applying the ℓq norm to each row of B, and the ℓ1-norm across all of these blocks. We note that all of these block norms are special cases of the CAP family of penalties [12].
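As a quick numerical sketch (not from the paper), the block norm (4) for a small matrix B:

```python
import numpy as np

def block_norm(B, q):
    """ell_1/ell_q block norm (4): ell_q over each row, then ell_1 over rows."""
    return np.linalg.norm(B, ord=q, axis=1).sum()

B = np.array([[1.0, -2.0],
              [0.0,  0.0],
              [3.0,  0.5]])
print(block_norm(B, np.inf))   # |-2| + 0 + 3 = 5.0
print(block_norm(B, 1))        # 3 + 0 + 3.5 = 6.5
```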
This family of block-regularizers (4) suggests a natural family of M-estimators for estimating B, based on solving the block-ℓ1/ℓq-regularized quadratic program
$$\widehat{B} \in \arg\min_{B \in \mathbb{R}^{p \times r}} \Big\{ \frac{1}{2n} \sum_{i=1}^{r} \|y^i - X^i \beta^i\|_2^2 + \lambda_n \|B\|_{\ell_1/\ell_q} \Big\}, \tag{5}$$
where λ_n > 0 is a user-defined regularization parameter. Note that the data term is separable across the different regression problems i = 1, ..., r, due to our assumption of independence on the noise vectors. Any coupling between the different regression problems is induced by the block-norm regularization.
In the special case of univariate regression (r = 1), the parameter q plays no role, and the block-regularized scheme (5) reduces to the Lasso [7, 3]. If q = 1 and r ≥ 2, the block-regularization function (like the data term) is separable across the different regression problems i = 1, ..., r, and so the scheme (5) reduces to solving r separate Lasso problems. For r ≥ 2 and q = 2, the program (5) is frequently referred to as the group Lasso [11, 6]. Another important case [9, 8], and the focus of this paper, is block ℓ1/ℓ∞ regularization.

The motivation for using block ℓ1/ℓ∞ regularization is to encourage shared sparsity among the columns of the regression matrix B. Geometrically, like the ℓ1 norm that underlies the ordinary Lasso, the ℓ1/ℓ∞ block norm has a polyhedral unit ball. However, the block norm captures potential interactions between the columns β^i in the matrix B. Intuitively, taking the maximum encourages the elements (β_k^1, β_k^2, ..., β_k^r) in any given row k = 1, ..., p to be zero simultaneously, or to be non-zero simultaneously. Indeed, if β_k^i ≠ 0 for at least one i ∈ {1, ..., r}, then there is no additional penalty to have β_k^j ≠ 0 as well, as long as |β_k^j| ≤ |β_k^i|.
2.3 Estimation in ℓ∞ norm and support recovery
For a given λ_n > 0, suppose that we solve the block ℓ1/ℓ∞ program, thereby obtaining an estimate
$$\widehat{B} \in \arg\min_{B \in \mathbb{R}^{p \times r}} \Big\{ \frac{1}{2n} \sum_{i=1}^{r} \|y^i - X^i \beta^i\|_2^2 + \lambda_n \|B\|_{\ell_1/\ell_\infty} \Big\}. \tag{6}$$
We note that under high-dimensional scaling (p ≫ n), this convex program (6) is not necessarily strictly convex, since the quadratic term is rank deficient and the block ℓ1/ℓ∞ norm is polyhedral. However, a consequence of our analysis is that under appropriate conditions, the optimal solution B̂ is in fact unique.
In this paper, we study the accuracy of the estimate B̂ as a function of the sample size n, regression dimensions
p and r, and the sparsity index s = max_{i=1,...,r} |S(β^i)|. There are various metrics with which to assess the
"closeness" of the estimate B̂ to the truth B, including predictive risk, various types of norm-based bounds on
the difference B̂ − B, and variable selection consistency. In this paper, we prove results bounding the ℓ∞/ℓ∞
difference
‖B̂ − B‖_{ℓ∞/ℓ∞} := max_{k=1,...,p} max_{i=1,...,r} |B̂_{ki} − B_{ki}|.
In addition, we prove results on support recovery criteria. Recall that for each vector β^i ∈ ℝ^p, we use S(β^i) =
{k | β_k^i ≠ 0} to denote its support set. The problem of union support recovery corresponds to recovering the
set
J := ∪_{i=1}^{r} S(β^i),   (7)
corresponding to the subset J ⊆ {1, ..., p} of indices that are active in at least one regression problem. Note
that the cardinality |J| is upper bounded by rs, but can be substantially smaller (as small as s) if there is
overlap among the different supports.
In some results, we also study the more refined criterion of recovering the individual signed supports, meaning
the signed quantities sign(β_k^i), where the sign function is given by
sign(t) = +1 if t > 0;  0 if t = 0;  −1 if t < 0.   (8)
There are multiple ways in which the support (or signed support) can be estimated, depending on whether we
use primal or dual information from an optimal solution.
ℓ1/ℓ∞ primal recovery: Solve the block-regularized program (6), thereby obtaining a (primal) optimal solution B̂ ∈ ℝ^{p×r}, and estimate the signed support vectors
[S_pri(β̂^i)]_k = sign(β̂_k^i).   (9)
ℓ1/ℓ∞ dual recovery: Solve the block-regularized program (6), thereby obtaining a primal solution B̂ ∈
ℝ^{p×r}. For each row k = 1, ..., p, compute the set M_k := arg max_{i=1,...,r} |β̂_k^i|. Estimate the signed support via:
[S_dua(β̂^i)]_k = sign(β̂_k^i) if i ∈ M_k;  0 otherwise.   (10)
As our development will clarify, this procedure corresponds to estimating the signed support on the basis of a
dual optimal solution associated with the optimal primal solution.
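To make the two recovery rules concrete, here is a small NumPy sketch of the primal rule (9) and the dual rule (10); `B_hat` is assumed to be the p × r primal solution, and a tolerance handles the possibly non-unique row maxima in M_k.

```python
import numpy as np

def primal_signed_support(B_hat):
    """Rule (9): read the signed support directly off the primal solution."""
    return np.sign(B_hat)  # p x r matrix with entries in {-1, 0, +1}

def dual_signed_support(B_hat, tol=1e-12):
    """Rule (10): keep sign(B_hat[k, i]) only where |B_hat[k, i]| attains
    the row maximum (the set M_k, which may be non-unique)."""
    S = np.zeros_like(B_hat)
    row_max = np.max(np.abs(B_hat), axis=1, keepdims=True)
    in_Mk = np.abs(B_hat) >= row_max - tol
    S[in_Mk] = np.sign(B_hat)[in_Mk]
    return S
```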
2.4 Notational conventions
Throughout this paper, we use the index i ∈ {1, ..., r} as a superscript in indexing the different regression
problems, or equivalently the columns of the matrix B ∈ ℝ^{p×r}. Given a design matrix X ∈ ℝ^{n×p} and a subset
S ⊆ {1, ..., p}, we use X_S to denote the n × |S| sub-matrix obtained by extracting those columns indexed by
S. For a pair of matrices A ∈ ℝ^{m×ℓ} and B ∈ ℝ^{m×n}, we use the notation ⟨A, B⟩ := A^T B for the resulting
ℓ × n matrix.
We use the following standard asymptotic notation: for functions f, g, the notation f(n) = O(g(n)) means that
there exists a fixed constant 0 < C < +∞ such that f(n) ≤ C g(n); the notation f(n) = Ω(g(n)) means that
f(n) ≥ C g(n); and f(n) = Θ(g(n)) means that f(n) = O(g(n)) and f(n) = Ω(g(n)).
3 Main results and their consequences
In this section, we provide precise statements of the main results of this paper. Our first main result (Theorem 1)
provides sufficient conditions for deterministic design matrices X^1, ..., X^r. This result allows for an arbitrary
number r of regression problems. Not surprisingly, these results show that the high-dimensional scaling of block
ℓ1/ℓ∞ is qualitatively similar to that of ordinary ℓ1-regularization: for instance, in the case of random Gaussian
designs and bounded r, our sufficient conditions in [5] ensure that n = Ω(s log p) samples are sufficient to
recover the union of supports correctly with high probability, which matches known results on the Lasso [10].
As discussed in the introduction, we are also interested in the more refined question: can we provide necessary and sufficient conditions that are sharp enough to reveal quantitative differences between ordinary ℓ1-regularization and block regularization? In order to provide precise answers to this question, our final two results
concern the special case of r = 2 regression problems, both with supports of size s that overlap in a fraction α
of their entries, and with design matrices drawn randomly from the standard Gaussian ensemble. In this setting,
our final two results (Theorems 2 and 3) show that block ℓ1/ℓ∞ regularization undergoes a phase transition
specified by the rescaled sample size. We then discuss some consequences of these results, and illustrate their
sharpness with some simulation results.
3.1 Sufficient conditions for deterministic designs
In addition to the sample size n, problem dimensions p and r, and sparsity index s, our results are stated in
terms of the minimum eigenvalue C_min of the |J| × |J| matrices (1/n)⟨X_J^i, X_J^i⟩, that is,
λ_min( (1/n) ⟨X_J^i, X_J^i⟩ ) ≥ C_min   for all i = 1, ..., r,   (11)
as well as an ℓ∞-operator norm of their inverses:
||| ( (1/n) ⟨X_J^i, X_J^i⟩ )^{−1} |||_∞ ≤ D_max   for all i = 1, ..., r.   (12)
It is natural to think of these quantities as being constants (independent of p and s), although our results do allow
them to scale.
We assume that the columns of each design matrix X^i, i = 1, ..., r, are normalized so that
‖X_k^i‖₂² ≤ 2n   for all k = 1, 2, ..., p.   (13)
The choice of the factor 2 in this bound is for later technical convenience. We also require that the following
incoherence condition on the design matrix is satisfied: there exists some γ ∈ (0, 1] such that
max_{ℓ ∈ J^c} Σ_{i=1}^{r} ‖⟨X_ℓ^i, X_J^i⟩ (⟨X_J^i, X_J^i⟩)^{−1}‖₁ ≤ (1 − γ),   (14)
and we also define the support minimum value B_min = min_{k∈J} max_{i=1,...,r} |β_k^i|.
For a parameter δ > 1 (to be chosen by the user), we define the probability
φ₁(δ, p, s) := 1 − 2 exp(−(δ − 1)[r + log p]) − 2 exp(−(δ² − 1) log(rs)),   (15)
which specifies the precise rate with which the "high probability" statements in Theorem 1 hold.
Theorem 1. Consider the observation model (3) with design matrices X^i satisfying the column bound (13) and
incoherence condition (14). Suppose that we solve the block-regularized ℓ1/ℓ∞ convex program (6) with regularization parameter
λ_n² ≥ (4δ²σ²/γ²) (r² + r log p)/n
for some δ > 1. Then with probability greater than φ₁(δ, p, s) → 1, we are guaranteed that:
(a) The block-regularized program has a unique solution B̂ such that ∪_{i=1}^{r} S(β̂^i) ⊆ J, and it satisfies the
elementwise bound
max_{i=1,...,r} max_{k=1,...,p} |β̂_k^i − β_k^i| ≤ δ √( 4σ² log(rs) / (C_min n) ) + D_max λ_n =: b₁(δ, λ_n, n, s).   (16)
(b) If in addition B_min ≥ b₁(δ, λ_n, n, s), then ∪_{i=1}^{r} S(β̂^i) = J, so that the solution B̂ correctly specifies
the union of supports J.
Remarks: To clarify the scope of the claims, part (b) guarantees that the estimator recovers the union support
J correctly, whereas neither part guarantees that for any given i = 1, ..., r and k ∈ S(β^i), the sign sign(β̂_k^i) is
correct. Note that we are guaranteed that β̂_k^i = 0 for all k ∉ J. However, within the union support J, when
using the primal recovery method, it is possible to have false non-zeros, i.e., there may be an index k ∈ J\S(β^i)
such that β̂_k^i ≠ 0. Of course, this cannot occur if the support sets S(β^i) are all equal. This phenomenon is
related to geometric properties of the block ℓ1/ℓ∞ norm: in particular, for any given index k, when β̂_k^j ≠ 0 for
some j ∈ {1, ..., r}, then there is no further penalty to having β̂_k^i ≠ 0 for other column indices i ≠ j.
The dual signed support recovery method (10) is more conservative in estimating the individual support sets.
In particular, for any given i ∈ {1, ..., r}, it only allows an index k to enter the signed support estimate
S_dua(β̂^i) when |β̂_k^i| achieves the maximum magnitude (possibly non-unique) across all indices i = 1, ..., r.
Consequently, Theorem 1 guarantees that the dual signed support method will never incorrectly include an index in the
individual supports. However, it may incorrectly exclude indices of some supports; like the primal support
estimator, though, it is always guaranteed to correctly recover the union of supports J.
We note that it is possible to ensure, under some conditions, that the dual support method will correctly
recover each of the individual signed supports, without any incorrect exclusions. However, as illustrated by
Theorem 2, doing so requires additional assumptions on the size of the gap |β_k^i| − |β_k^j| for indices k ∈ B :=
S(β^i) ∩ S(β^j).
3.2 Sharp results for standard Gaussian ensembles
Our results thus far show that under standard mutual incoherence or irrepresentability conditions, the block ℓ1/ℓ∞
method produces consistent estimators for n = Ω(s log(p − s)). In qualitative terms, these results match known
scaling for the Lasso, or ordinary ℓ1-regularization. In order to provide keener insight into the (dis)advantages
associated with using ℓ1/ℓ∞ block regularization, we specialize the remainder of our analysis to the case of
r = 2 regression problems, where the corresponding design matrices X^i, i = 1, 2, are sampled from the standard
Gaussian ensemble [2, 4], i.e., with i.i.d. rows N(0, I_{p×p}). Our goal in studying this special case is to be able
to make quantitative comparisons with the Lasso.
We consider a sequence of models indexed by the triplet (p, s, α), corresponding to the problem dimension
p, support size s, and overlap parameter α ∈ [0, 1]. We assume that s ≤ p/2, capturing the intuition of a
(relatively) sparse model. Suppose that for a given model, we take n = n(p, s, α) observations according to
equation (3). We can then study the probability of successful recovery as a function of the model triplet and
the sample size n.
In order to state our main results, we define the order parameter, or rescaled sample size,
θ_{1,∞}(n, p, s, α) := n / [ (4 − 3α) s log(p − (2 − α)s) ].
We also define the support gap B_gap = |β^1| − |β^2| (taken elementwise), as well as the c*-gap
c* = (1/√n) ‖T(B_gap)‖_∞, where T(B_gap) = λ_n − B_gap.
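For reference, the order parameter and the gap statistic can be computed as in the sketch below; the exact form of T(B_gap) follows our reading of the definitions above and should be treated as an assumption rather than a verbatim transcription.

```python
import numpy as np

def theta_block(n, p, s, alpha):
    """Rescaled sample size theta_{1,inf}(n, p, s, alpha)."""
    return n / ((4.0 - 3.0 * alpha) * s * np.log(p - (2.0 - alpha) * s))

def c_star(beta1, beta2, lam_n, n):
    """Gap statistic c* = (1/sqrt(n)) * ||T(B_gap)||_inf,
    with B_gap = |beta1| - |beta2| and T(B_gap) = lam_n - B_gap."""
    B_gap = np.abs(beta1) - np.abs(beta2)
    return np.max(np.abs(lam_n - B_gap)) / np.sqrt(n)
```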
3.2.1 Sufficient conditions
We begin with a result that provides sufficient conditions for support recovery using block ℓ1/ℓ∞ regularization.
Theorem 2 (Achievability). Given the observation model (3) with random design X drawn with i.i.d. standard
Gaussian entries, consider problem sequences (n, p, s, α) for which θ_{1,∞}(n, p, s, α) > 1 + ν for some
ν > 0. If we solve the block-regularized program (6) with λ_n = √(log p / n) and c* → 0, then with probability
greater than 1 − c₁ exp(−c₂ log(p − (2 − α)s)), the following properties hold:
(i) The block ℓ_{1,∞}-program (6) has a unique solution (β̂^1, β̂^2), with supports S(β̂^1) ⊆ J and S(β̂^2) ⊆
J. Moreover, we have the elementwise bound
max_{i=1,2} max_{k=1,...,p} |β̂_k^i − β_k^i| ≤ √(4s/n) + λ_n ( √(100 log(s)/n) + 1 ) =: b₃(ν, λ_n, n, s).   (17)
(ii) If the support minimum B_min > 2 b₃(ν, λ_n, n, s), then the primal support method successfully recovers
the support union J = S(β^1) ∪ S(β^2). Moreover, using the primal signed support recovery method (9),
we have
[S_pri(β̂^i)]_k = sign(β_k^i)   for all k ∈ S(β^i).   (18)
3.2.2 Necessary conditions
We now turn to the question of finding matching necessary conditions for support recovery.
Theorem 3 (Lower bounds). Given the observation model (3) with random design X drawn with i.i.d. standard
Gaussian entries:
(a) For problem sequences (n, p, s, α) such that θ_{1,∞}(n, p, s, α) < 1 − ν for some ν > 0 and for any
non-increasing regularization sequence λ_n > 0, no solution B̂ = (β̂^1, β̂^2) to the block-regularized
program (6) has the correct support union S(β^1) ∪ S(β^2).
(b) Recalling the definition of B_gap, define the rescaled gap limit
c₂(λ_n, B_gap) := lim sup_{(n,p,s)} ‖T(B_gap)‖₂ / √(n s).
If the sample size n is bounded as
n < (1 − ν) [ (4 − 3α) + (c₂(λ_n, B_gap))² ] s log[p − (2 − α)s]
for some ν > 0, then the dual recovery method (10) fails to recover the individual signed supports.
It is important to note that c* ≥ c₂, which implies that as long as c* → 0, then c₂ → 0, so that the
conditions of Theorem 3(a) and (b) are equivalent. However, note that if c₂ does not go to 0, then in fact the
method could fail to recover the correct support even if θ_{1,∞} > 1 + ν. This result is key to understanding the
ℓ_{1,∞}-regularization term. The gap between the vectors plays a fundamental role in reducing the sampling
complexity: if the gap is too large, then the sampling efficiency is greatly reduced as compared to when
the gap is very small. In summary, while (a) and (b) seem equivalent on the surface, the requirement in (b) is in
fact stronger than that in (a) and demonstrates the importance of the gap condition (c* → 0) in Theorem 2. It shows that if
the gap is too large, then correct joint support recovery is not possible.
3.3 Illustrative simulations and some consequences
In this section, we provide some illustrative simulations of the phase transitions predicted by Theorems 2 and 3, and show that the theory provides an accurate description of practice even for relatively small
problem sizes (e.g., p = 128). Figure 1 plots the probability of successful recovery of the individual signed supports using dual support recovery (10), namely P[S_dua(β̂^1) = S_±(β^1), S_dua(β̂^2) = S_±(β^2)],
versus the order parameter θ_{1,∞}(n, p, s, α). The plot contains four sets of "stacked" curves, each corresponding
to a different choice of the overlap parameter, ranging from α = 1 (left-most stack) to α = 0.1 (right-most
stack). Each stack contains three curves, corresponding to the problem sizes p ∈ {128, 256, 512}. In all cases,
we fixed the support size s = 0.1p. The stacking behavior of these curves demonstrates that we have isolated
the correct order parameter, and the step-function behavior is consistent with the theoretical predictions of a
sharp threshold.
Theorems 2 and 3 have some interesting consequences, particularly in comparison to the behavior of the "naive"
Lasso-based individual decoding of signed supports, that is, the method that simply applies the Lasso (ordinary
ℓ1-regularization) to each column i = 1, 2 separately. By known results [10] on the Lasso, the performance of
this naive approach is governed by the order parameter
θ_Las(n, p, s) = n / (2 s log(p − s)),   (19)
meaning that for any ν > 0, it succeeds for sequences such that θ_Las > 1 + ν, and conversely fails for sequences
such that θ_Las < 1 − ν. To compare the two methods, we define the relative efficiency coefficient R(θ_{1,∞}, θ_Las) :=
θ_Las(n, p, s) / θ_{1,∞}(n, p, s, α). A value of R < 1 implies that the block method is more efficient, while R > 1
implies that the naive method is more efficient.
With this notation, we have the following:
Corollary 1. The relative efficiency of the block ℓ_{1,∞} program (6) compared to the Lasso is given by
R(θ_{1,∞}, θ_Las) = [(4 − 3α)/2] · log(p − (2 − α)s) / log(p − s). Thus, for sublinear sparsity s/p → 0, the block scheme has greater
statistical efficiency for all overlaps α ∈ (2/3, 1], but lower statistical efficiency for overlaps α ∈ [0, 2/3).
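Corollary 1 is easy to probe numerically. The sketch below (our own code, with illustrative values of p and s) evaluates R across overlaps and shows the crossover near α = 2/3.

```python
import numpy as np

def relative_efficiency(p, s, alpha):
    """R < 1 means block l1/l-inf needs fewer samples than r separate Lassos."""
    return (4.0 - 3.0 * alpha) / 2.0 * np.log(p - (2.0 - alpha) * s) / np.log(p - s)

p, s = 10_000, 100  # sublinear sparsity: s/p small
for alpha in (0.0, 0.5, 2.0 / 3.0, 0.8, 1.0):
    print(f"alpha = {alpha:.3f}  R = {relative_efficiency(p, s, alpha):.3f}")
# The prefactor (4 - 3*alpha)/2 crosses 1 exactly at alpha = 2/3, and for
# s/p -> 0 the ratio of logarithms tends to 1, so R crosses 1 there too.
```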
[Figure 1: probability of success versus the control parameter θ, for the ℓ_{1,∞} relaxation with s = 0.1p, overlaps α = 1, 0.7, 0.4, 0.1, and problem sizes p = 128, 256, 512.]
Figure 1. Probability of success in recovering the joint signed supports plotted against the control parameter θ_{1,∞} =
n/[(4 − 3α) s log(p − (2 − α)s)] for linear sparsity s = 0.1p. Each stack of graphs corresponds to a fixed overlap α, as
labeled on the figure. The three curves within each stack correspond to problem sizes p ∈ {128, 256, 512}; note how
they all align with each other and exhibit step-like behavior, consistent with Theorems 2 and 3. The vertical lines
correspond to the thresholds θ*_{1,∞}(α) predicted by Theorems 2 and 3; note the close agreement between theory and
simulation.
References
[1] V. V. Buldygin and Y. V. Kozachenko. Metric characterization of random variables and random processes. American Mathematical Society, Providence, RI, 2000.
[2] E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2006.
[3] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Computing, 20(1):33–61, 1998.
[4] D. L. Donoho and J. M. Tanner. Counting faces of randomly-projected polytopes when the projection radically lowers dimension. Technical report, Stanford University, 2006. Submitted to Journal of the AMS.
[5] S. Negahban and M. J. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of ℓ1,∞-regularization. Technical report, Department of Statistics, UC Berkeley, January 2009.
[6] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection for grouped classification. Technical report, Statistics Department, UC Berkeley, 2007.
[7] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[8] J. A. Tropp, A. C. Gilbert, and M. J. Strauss. Algorithms for simultaneous sparse approximation. Signal Processing, 86:572–602, April 2006. Special issue on "Sparse approximations in signal and image processing".
[9] B. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 27:349–363, 2005.
[10] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity using ℓ1-constrained quadratic programs. Technical Report 709, Department of Statistics, UC Berkeley, 2006.
[11] Y. Kim, J. Kim, and Y. Kim. Blockwise sparse regression. Statistica Sinica, 16(2), 2006.
[12] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Technical report, Statistics Department, UC Berkeley, 2007.
2,639 | 3,393 | Bayesian Kernel Shaping for Learning Control
Jo-Anne Ting¹, Mrinal Kalakrishnan¹, Sethu Vijayakumar² and Stefan Schaal¹,³
¹Computer Science, U. of Southern California, Los Angeles, CA 90089, USA
²School of Informatics, University of Edinburgh, Edinburgh, EH9 3JZ, UK
³ATR Computational Neuroscience Labs, Kyoto 619-02, Japan
Abstract
In kernel-based regression learning, optimizing each kernel individually is useful
when the data density, curvature of regression surfaces (or decision boundaries)
or magnitude of output noise varies spatially. Previous work has suggested gradient descent techniques or complex statistical hypothesis methods for local kernel
shaping, typically requiring some amount of manual tuning of meta parameters.
We introduce a Bayesian formulation of nonparametric regression that, with the
help of variational approximations, results in an EM-like algorithm for simultaneous estimation of regression and kernel parameters. The algorithm is computationally efficient, requires no sampling, automatically rejects outliers and has only
one prior to be specified. It can be used for nonparametric regression with local
polynomials or as a novel method to achieve nonstationary regression with Gaussian processes. Our methods are particularly useful for learning control, where
reliable estimation of local tangent planes is essential for adaptive controllers and
reinforcement learning. We evaluate our methods on several synthetic data sets
and on an actual robot which learns a task-level control law.
1 Introduction
Kernel-based methods have been highly popular in statistical learning, starting with Parzen windows,
kernel regression, locally weighted regression and radial basis function networks, and leading to
newer formulations such as Reproducing Kernel Hilbert Spaces, Support Vector Machines, and
Gaussian process regression [1]. Most algorithms start with parameterizations that are the same for
all kernels, independent of where in data space the kernel is used, but later recognize the advantage
of locally adaptive kernels [2, 3, 4]. Such locally adaptive kernels are useful in scenarios where the
data characteristics vary greatly in different parts of the workspace (e.g., in terms of data density,
curvature and output noise). For instance, in Gaussian process (GP) regression, using a nonstationary
covariance function, e.g., [5], allows for such a treatment. Performing optimizations individually for
every kernel, however, becomes rather complex and is prone to overfitting due to a flood of open
parameters. Previous work has suggested gradient descent techniques with cross-validation methods
or involved statistical hypothesis testing for optimizing the shape and size of a kernel in a learning
system [6, 7].
In this paper, we consider local kernel shaping by averaging over data samples with the help of
locally polynomial models and formulate this approach, in a Bayesian framework, for both function
approximation with piecewise linear models and nonstationary GP regression. Our local kernel
shaping algorithm is computationally efficient (capable of handling large data sets), can deal with
functions of strongly varying curvature, data density and output noise, and even rejects outliers
automatically. Our approach to nonstationary GP regression differs from previous work by avoiding
Markov Chain Monte Carlo (MCMC) sampling [8, 9] and by exploiting the full nonparametric
characteristics of GPs in order to accommodate nonstationary data.
One of the core application domains for our work is learning control, where computationally efficient
function approximation and highly accurate local linearizations from data are crucial for deriving
controllers and for optimizing control along trajectories [10]. The high variations from fitting noise,
seen in Fig. 3, are harmful to the learning system, potentially causing the controller to be unstable.
Our final evaluations illustrate such a scenario by learning an inverse kinematics model for a real
robot arm.
2 Bayesian Local Kernel Shaping
We develop our approach in the context of nonparametric locally weighted regression with locally linear polynomials [11], assuming, for notational simplicity, only a one-dimensional output; extensions to multi-output settings are straightforward. We assume a training set of N samples,
D = {x_i, y_i}_{i=1}^N, drawn from a nonlinear function y = f(x) + ε that is contaminated with mean-zero (but potentially heteroscedastic) noise ε. Each data sample consists of a d-dimensional input
vector x_i and an output y_i. We wish to approximate a locally linear model of this function at a
query point x_q ∈ ℝ^{d×1} in order to make a prediction y_q = b^T x_q, where b ∈ ℝ^{d×1}. We assume
the existence of a spatially localized weighting kernel w_i = K(x_i, x_q, h) that assigns a weight to
every {x_i, y_i} according to its Euclidean distance in input space from the query point x_q. A popular
choice for K is the Gaussian kernel, but other kernels may be used as well [11]. The bandwidth
h ∈ ℝ^{d×1} of the kernel is the crucial parameter that determines the local model's quality of fit. Our
goal is to find a Bayesian formulation of determining b and h simultaneously.
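For orientation, a minimal non-Bayesian locally weighted linear regression prediction with a hand-set Gaussian kernel looks as follows; the Bayesian formulation developed next replaces the fixed bandwidth h with an inferred one. The code and its small ridge term are our own illustrative choices.

```python
import numpy as np

def lwr_predict(X, y, x_q, h):
    """Locally weighted linear prediction at query x_q with a fixed
    Gaussian kernel; h holds one bandwidth per input dimension.
    A small ridge term keeps the weighted normal equations solvable."""
    w = np.exp(-0.5 * np.sum(h * (X - x_q) ** 2, axis=1))  # kernel weights
    W = np.diag(w)
    b = np.linalg.solve(X.T @ W @ X + 1e-8 * np.eye(X.shape[1]),
                        X.T @ W @ y)
    return b @ x_q
```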
2.1 Model
For the locally linear model at the query point x_q, we can introduce hidden random variables z [12] and modify the linear model y_i = b^T x_i so that y_i = Σ_{m=1}^{d} z_im + ε, where z_im = b_m^T x_im + ε_zm, and ε_zm ∼ Normal(0, ψ_zm), ε ∼ Normal(0, σ²) are both additive noise terms. Note that x_im = [x_im 1]^T and b_m = [b_m b_m0]^T, where x_im is the mth coefficient of x_i, b_m is the mth coefficient of b and b_m0 is the offset value. The z variables allow us to derive computationally efficient O(d) EM-like updates, as we will see later. The prediction at the query point x_q is then Σ_m b_m^T x_qm.
[Figure 1: Graphical model. Random variables are in circles, and observed random variables are in shaded double circles.]
We assume the following prior distributions for our model, shown graphically in Fig. 1:
p(y_i | z_i) ∼ Normal(1^T z_i, σ²)
p(z_im | x_im) ∼ Normal(b_m^T x_im, ψ_zm)
p(b_m | ψ_zm) ∼ Normal(0, ψ_zm Σ_{b_m,0})
p(ψ_zm) ∼ Scaled-Inv-χ²(n_m0, ψ_{zm,0})
where 1 is a vector of 1s, z_i ∈ ℝ^{d×1}, z_im is the mth coefficient of z_i, and Σ_{b_m,0} is the prior
covariance matrix of b_m and a 2 × 2 diagonal matrix. n_m0 and ψ_{zm,0} are the prior parameters of
the Scaled-Inverse-χ² distribution (n_m0 is the number of degrees of freedom parameter and ψ_{zm,0} is
the scale parameter). The Scaled-Inverse-χ² distribution was used for ψ_zm since it is the conjugate
prior for the variance parameter of a Gaussian distribution.
In contrast to classical treatments of Bayesian weighted regression [13], where the weights enter
as a heteroscedastic correction on the noise variance of each data sample, we associate a scalar
indicator-like weight, w_i ∈ {0, 1}, with each sample {x_i, y_i} in D. The sample is fully included in
the local model if w_i = 1 and excluded if w_i = 0. We define the weight w_i to be w_i = ∏_{m=1}^{d} w_im,
where w_im is the weight component in the mth input dimension. While previous methods model the
weighting kernel K as some explicit function, we model the weights w_im as Bernoulli-distributed
random variables, i.e., p(w_im) ∼ Bernoulli(q_im), choosing a symmetric bell-shaped function for the
parameter q_im: q_im = 1/(1 + (x_im − x_qm)^{2r} h_m), where x_qm is the mth coefficient of x_q, h_m is the mth
coefficient of h, and r > 0 is a positive integer¹. As pointed out in [11], the particular mathematical
formulation of a weighting kernel is largely computationally irrelevant for locally weighted learning.
Our choice of function for q_im was dominated by the desire to obtain analytically tractable learning
updates. We place a Gamma prior over the bandwidth h_m, i.e., p(h_m) ∼ Gamma(a_hm0, b_hm0),
where a_hm0 and b_hm0 are parameters of the Gamma distribution, to ensure a positive weighting
kernel width.
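The weighting kernel defined above is straightforward to state in code; the sketch below follows the definition of q_im and the product weight w_i, with r = 2 as used in the experiments later.

```python
import numpy as np

def q_weight(x_i, x_q, h, r=2):
    """Per-dimension Bernoulli parameters q_im = 1 / (1 + (x_im - x_qm)^(2r) h_m)."""
    return 1.0 / (1.0 + (x_i - x_q) ** (2 * r) * h)

def sample_weight(x_i, x_q, h, r=2):
    """Expected overall sample weight: the product of q_im over the d dimensions."""
    return np.prod(q_weight(x_i, x_q, h, r))
```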
2.2 Inference
We can treat the entire regression problem as an EM learning problem [14, 15] and maximize the log
likelihood log p(y|X) for generating the observed data. We can maximize this incomplete log likelihood by maximizing the expected value of the complete log likelihood p(y, Z, b, w, h, σ², ψ_z | X) =
∏_{i=1}^{N} p(y_i, z_i, b, w_i, h, σ², ψ_z | x_i). In our model, each data sample i has an indicator-like scalar
weight w_i associated with it, allowing us to express the complete log likelihood L, in a similar
fashion to mixture models, as:
# d
#
d
Y
Y
wi Y
2
2
L = log
p(yi |zi , ? )p(zi |xi , b, ?z )
p(wim )
p(bm |?zm )p(?zm )p(hm )p(? )
m=1
i=1
m=1
Expanding the log p(wim ) term from the expression above results in a problematic ? log(1 +
2r
(xim ? xqm ) ) term that prevents us from deriving an analytically tractable expression for the
posterior of hm . To address this, we use a variational approach on concave/convex functions suggested by [16] to produce analytically tractable expressions. We can find a lower bound on the
2r
term so that ? log(1 + xim ? xqm )2r ? ??im (xim ? xqm ) hm , where ?im is a variational
parameter to be optimized in the M-step of our final EM-like algorithm. Our choice of weighting
kernel allows us to find a lower bound to L in this manner. We explored the use of other weighting
kernels (e.g., a quadratic negative exponential), but had issues with finding a lower bound to the
problematic terms in log p(wim ) such that analytically tractable inference for hm could be done.
? due to lack of space, we give the expression for L
? in the apThe resulting lower bound to L is L;
?
pendix. The expectation of L should be taken with respect to the true posterior distribution of all
hidden variables Q(b, ?z , z, h). Since this is an analytically tractable expression, a lower bound
can be formulated using a technique from variational calculus where we make a factorial approximation of the true posterior, e.g., Q(b, ?z , z, h) = Q(b, ?z )Q(h)Q(z) [15], that allows resulting
posterior distributions over hidden variables to become analytically tractable. The posterior of wim ,
p(wim = 1|yi , zi , xi , ?, wi,k6=m ), is inferred using Bayes? rule:
Qd
p(yi , zi |xi , ?, wi,k6=m , wim = 1)
Qd
p(yi , zi |xi , ?, wi,k6=m , wim = 1)
t=1,t6=m hwit i
t=1,t6=m hwit i
p(wim = 1)
p(wim = 1) + p(wim = 0)
(1)
where θ = {b, ψ_z, h} and w_{i,k≠m} denotes the set of weights {w_ik}_{k=1,k≠m}^{d}. For the dimension
m, we account for the effect of weights in the other d − 1 dimensions; this is a result of w_i
being defined as the product of weights in all dimensions. The posterior mean of w_im is then
⟨p(w_im = 1 | y_i, z_i, x_i, θ, w_{i,k≠m})⟩, and ⟨w_i⟩ = ∏_{m=1}^{d} ⟨w_im⟩, where ⟨·⟩ denotes the expectation
operator. We omit the full set of posterior EM update equations (please refer to the appendix for
this) and list only the posterior updates for h_m, w_im, b_m and z_i:
Σ_{b_m} = ( Σ_{b_m,0}^{−1} + Σ_{i=1}^{N} ⟨w_i⟩ x_im x_im^T )^{−1}
⟨b_m⟩ = Σ_{b_m} Σ_{i=1}^{N} ⟨w_i⟩ ⟨z_im⟩ x_im
Σ_{z_i|y_i,x_i} = Ψ_zN/⟨w_i⟩ − (1/s_i) (Ψ_zN/⟨w_i⟩) 1 1^T (Ψ_zN/⟨w_i⟩)
⟨z_i⟩ = (Ψ_zN 1 / (s_i ⟨w_i⟩)) y_i + ( I_{d,d} − Ψ_zN 1 1^T / (s_i ⟨w_i⟩) ) b_{x_i}
⟨w_im⟩ = q_im A_i^{∏_{k=1,k≠m}^{d} ⟨w_ik⟩} / ( q_im A_i^{∏_{k=1,k≠m}^{d} ⟨w_ik⟩} + 1 − q_im )
⟨h_m⟩ = ( a_hm0 + N − Σ_{i=1}^{N} ⟨w_im⟩ ) / ( b_hm0 + Σ_{i=1}^{N} λ_im (x_im − x_qm)^{2r} )
¹(x_im − x_qm) is taken to the power 2r in order to ensure that the resulting expression is positive. Adjusting
r affects how long the tails of the kernel are. We use r = 2 for all our experiments.
where I_{d,d} is a d × d identity matrix, b_{x_i} is a d × 1 vector with coefficients ⟨b_m⟩^T x_im, ⟨w_i⟩ =
∏_{m=1}^{d} ⟨w_im⟩, Ψ_zN is a diagonal matrix with ψ_zm on its diagonal, s_i = σ² + 1^T (Ψ_zN/⟨w_i⟩) 1 (to avoid division by zero, ⟨w_i⟩ needs to be capped to some small non-zero value), q_im = λ_im = 1/(1 + (x_im −
x_qm)^{2r} ⟨h_m⟩), and A_i = N(y_i; 1^T ⟨z_i⟩, σ²) ∏_{m=1}^{d} N(z_im; ⟨b_m⟩^T x_im, ψ_zm). Closer examination
of the expression for ⟨b_m⟩ shows that it is a standard Bayesian weighted regression update [13], i.e.,
a data sample i with lower weight w_i will be downweighted in the regression. Since the weights are
influenced by the residual error at each data point (see the posterior update for ⟨w_im⟩), an outlier will
be downweighted appropriately and eliminated from the local model. Fig. 2 shows how local kernel
shaping is able to ignore outliers that a classical GP fits.
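To summarize the updates, the following is a simplified sketch of the EM-like loop for a single input dimension (d = 1), following the update equations as reconstructed above. It keeps σ² fixed at a crude estimate, uses a vague prior on b, and ties λ_i = q_i as stated in the text; it is a sketch under these assumptions, not the authors' reference implementation.

```python
import numpy as np

def kernel_shaping_1d(x, y, x_q, r=2, a_h0=1e-6, b_h0=1e-6, iters=1000):
    """Simplified EM-like loop for one input dimension (locally linear model at x_q)."""
    N = len(x)
    h = a_h0 / b_h0 * 1e-6 + 1.0        # broad initial kernel, h near 1
    d2r = (x - x_q) ** (2 * r)
    w = np.ones(N)                      # expected sample weights <w_i>
    sigma2 = np.var(y)                  # crude, fixed noise estimate
    for _ in range(iters):
        # Bayesian weighted regression for b (vague prior, small ridge)
        Xb = np.stack([x, np.ones(N)], axis=1)   # [x_i, 1]: slope and offset
        W = np.diag(w)
        b = np.linalg.solve(Xb.T @ W @ Xb + 1e-8 * np.eye(2), Xb.T @ W @ y)
        resid2 = (y - Xb @ b) ** 2
        # E-step for w: prior q_i against the Gaussian data likelihood
        q = 1.0 / (1.0 + d2r * h)
        A = np.exp(-0.5 * resid2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        w = q * A / (q * A + (1.0 - q))
        # M-step for h (posterior mean of the Gamma, with lambda_i = q_i)
        h_new = (a_h0 + N - np.sum(w)) / (b_h0 + np.sum(q * d2r))
        if abs(h_new - h) < 1e-10:
            break
        h = h_new
    return b, h, w
```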
A few remarks should be made regarding the initialization of priors used in the posterior EM updates. Σ_{b_m,0} can be set to 10⁶ I to reflect a large uncertainty associated with the prior distribution of b. The initial noise variance, ψ_{zm,0}, should be set to the best guess on the noise variance. To adjust the strength of this prior, n_m0 can be set to the number of samples one believes to have seen with noise variance ψ_{zm,0}. Finally, the initial h of the weighting kernel should be set so that the kernel is broad and wide. We use values of a_hm0 = b_hm0 = 10⁻⁶ so that h_m0 = 1 with high uncertainty. Note that some sort of initial belief about the noise level is needed to distinguish between noise and structure in the training data. Aside from the initial prior on ψ_zm, we used the same priors for all data sets in our evaluations.
[Figure 2: Effect of outliers (in black circles): training data fit by a stationary GP and by kernel shaping.]
2.3 Computational Complexity
For one local model, the EM update equations have a computational complexity of O(Nd) per EM
iteration, where d is the number of input dimensions and N is the size of the training set. This efficiency
arises from the introduction of the hidden random variables z_i, which allows ⟨z_i⟩ and Σ_{z_i|y_i,x_i} to
be computed in O(d) and avoids a d × d matrix inversion, which would typically require O(d³).
Some nonstationary GP methods, e.g., [5], require O(N³) + O(N²) for training and prediction,
while other more efficient stationary GP methods, e.g., [17], require O(M²N) + O(M²) training
and prediction costs (where M ≪ N is the number of pseudoinputs used in [17]). Our algorithm
requires O(N d I_EM), where I_EM is the number of EM iterations (with a maximal cap of 1000
iterations used). Our algorithm also does not require any MCMC sampling as in [8, 9], making it
more appealing to real-time applications.
3 Extension to Gaussian Processes
We can apply the algorithm in section 2 not only to locally weighted learning with linear models, but
also to derive a nonstationary GP method. A GP is defined by a mean and a covariance function,
where the covariance function K captures dependencies between any two points as a function of
the corresponding inputs, i.e., k(x_i, x_j) = cov(f(x_i), f(x_j)), where i, j = 1, ..., N. Standard GP
models use a stationary covariance function, where the covariance between any two points in the
training data is a function of the distances |x_i − x_j|, not of their locations. Stationary GPs perform
suboptimally for functions that have different properties in various parts of the input space (e.g.,
discontinuous functions), where the stationary assumption fails to hold. Various methods have been
proposed to specify nonstationary GPs. These include defining a nonstationary Matérn covariance
function [5], adopting a mixture of local experts approach [18, 8, 9] to use independent GPs to
cover data in different regions of the input space, and using multidimensional scaling to map a
nonstationary spatial GP into a latent space [19].
Given the data set D drawn from the function y = f(x) + ε, as previously introduced in section 2, we
propose an approach to specify a nonstationary covariance function. Assuming the use of a quadratic
negative exponential covariance function, the covariance function of a stationary GP is k(x_i, x_j) =
v₁² exp(−0.5 Σ_{m=1}^{d} h_m (x_im − x_jm)²) + v₀, where the hyperparameters {h₁, h₂, ..., h_d, v₀, v₁} are
optimized. In a nonstationary GP, the covariance function could then take the form k(x_i, x_j) =
v₁² exp(−0.5 Σ_{m=1}^{d} (x_im − x_jm)² (2 h_im h_jm)/(h_im + h_jm)) + v₀, where h_im is the bandwidth of the local model
centered at x_im and h_jm is the bandwidth of the local model centered at x_jm.² We learn first the
values of {h_im}_{m=1}^{d} for all training data samples i = 1, ..., N using our proposed local kernel
shaping algorithm and then optimize the hyperparameters v₀ and v₁. To make a prediction for a test
sample x_q, we also learn the values of {h_qm}_{m=1}^{d}, i.e., the bandwidth of the local model centered at
x_q. Importantly, since the covariance function of the GP is derived from locally constant models, we
learn with locally constant, instead of locally linear, polynomials. We use r = 1 for the weighting
kernel in order to keep the degree of nonlinearity consistent with that in the covariance function (i.e.,
quadratic). Even though the weighting kernel used in the local kernel shaping algorithm is not a
quadratic negative exponential, it has a similar bell shape, but with a flatter top and shorter tails.
Because of this, our augmented GP is an approximated form of a nonstationary GP. Nonetheless,
it is able to capture nonstationary properties of the function f without needing MCMC sampling,
unlike previously proposed nonstationary GP methods [8, 9].
²This is derived from the definition of K as a positive semi-definite matrix, i.e., where the integral is the product of two quadratic negative exponentials, one with parameter h_im and the other with parameter h_jm.
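Given the per-sample bandwidths produced by kernel shaping, the resulting nonstationary covariance matrix can be assembled as in the sketch below (variable names are ours); note the v₀ term is added to every entry, following the covariance function as written above.

```python
import numpy as np

def nonstationary_K(X, H, v0, v1):
    """Assemble the nonstationary covariance matrix from local bandwidths.
    X: N x d inputs; H: N x d bandwidths learned by kernel shaping, one
    per sample and input dimension; v0 is added to every entry."""
    N = X.shape[0]
    K = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            hm = 2.0 * H[i] * H[j] / (H[i] + H[j])  # elementwise over dims
            K[i, j] = v1 ** 2 * np.exp(-0.5 * np.sum(hm * (X[i] - X[j]) ** 2))
    return K + v0
```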
4 Experimental Results
4.1 Synthetic Data
First, we show our local kernel shaping algorithm's bandwidth adaptation abilities on several synthetic data sets, comparing it to a stationary GP and our proposed augmented nonstationary GP.
For ease of visualization, we consider the following one-dimensional functions, similar to those
in [5]: i) a function with a discontinuity, ii) a spatially inhomogeneous function, and iii) a straight
line function. The data set for function i) consists of 250 training samples, 201 test inputs (evenly
spaced across the input space) and output noise with σ² = 0.3025; the data set for function ii) consists of 250 training samples, 101 test inputs and an output signal-to-noise ratio (SNR) of 10; and
the data set for function iii) has 50 training samples, 21 test inputs and an output SNR of 100.
Fig. 3 shows the predicted outputs of a stationary GP, augmented nonstationary GP and the local
kernel shaping algorithm for data sets i)-iii). The local kernel shaping algorithm smoothes over
regions where a stationary GP overfits, and yet it still manages to capture regions of highly varying
curvature, as seen in Figs. 3(a) and 3(b). It correctly adjusts the bandwidths h to the curvature
of the function. When the data looks linear, the algorithm opens up the weighting kernel so that
all data samples are considered, as Fig. 3(c) shows. Our proposed augmented nonstationary GP
can handle the nonstationary nature of the data sets as well, and its performance is quantified
in Table 1. Returning to our motivation to use these algorithms to obtain linearizations for learning
control, it is important to realize that the high variations from fitting noise, as shown by the stationary
GP in Fig. 3, are detrimental for learning algorithms, as the slope (or tangent hyperplane, for high-dimensional data) would be wrong.
Table 1 reports the normalized mean squared prediction error (nMSE) values for the function i) and
function ii) data sets, averaged over 20 random data sets. Fig. 4 shows results of the local kernel
shaping algorithm and the proposed augmented nonstationary GP on the "real-world" motorcycle
data set [20] consisting of 133 samples (with 80 equally spaced input query points used for prediction). We also show results from a previously proposed MCMC-based nonstationary GP method: an
alternate infinite mixture of GP experts [9]. We can see that the augmented nonstationary GP and
the local kernel shaping algorithm both capture the leftmost flatter region of the function, as well as
some of the more nonlinear and noisier regions after 30 msec.
4.2 Robot Data
Next, we move on to an example application: learning an inverse kinematics model for a 3 degree-of-freedom (DOF) haptic robot arm (manufactured by SensAble, shown in Fig. 5(a)) in order to control
the end-effector along a desired trajectory. This will allow us to verify that the kernel shaping algorithm can successfully deal with a large, noisy real-world data set with outliers and non-stationary
properties, typical characteristics of most control learning problems.
[Figure 3: Predicted outputs using a stationary GP, our augmented nonstationary GP and local kernel shaping on (a) function i), (b) function ii) and (c) function iii). The figures on the bottom show the bandwidths learnt by local kernel shaping and the corresponding weighting kernels (in dotted black lines) for input query points (shown in red circles).]
We collected 60,000 data samples from the arm while it performed random sinusoidal movements
within a constrained box volume of Cartesian space. Each sample consists of the arm's joint angles
q, joint velocities q̇, end-effector position in Cartesian space x, and end-effector velocities ẋ. From
this data, we first learn a forward kinematics model: ẋ = J(q) q̇, where J(q) is the Jacobian matrix.
The transformation from q̇ to ẋ can be assumed to be locally linear at a particular configuration q
of the robot arm. We learn the forward model using kernel shaping, building a local model around
each training point only if that point is not already sufficiently covered by an existing local model
(e.g., having an activation weight of less than 0.2); a sketch of this allocation rule is given below. Using insights into robot geometry, we localize
the models only with respect to q, while the regression of each model is trained only on a mapping
from q̇ to ẋ; these geometric insights are easily incorporated as priors in the Bayesian model. This
procedure resulted in 56 models being built to cover the entire space of training data.
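The allocation rule can be sketched as follows; `activation` stands for a learned local model's weighting kernel evaluated at a configuration, and the 0.2 threshold is the one quoted above. This is an illustrative sketch, not the authors' code.

```python
def allocate_centers(Q, activation, threshold=0.2):
    """Greedy allocation of local models over joint-angle samples Q.
    activation(c, q) returns the weight that a local model centered at c
    assigns to configuration q; a new model is created only when no
    existing model activates above the threshold."""
    centers = []
    for q in Q:
        if not any(activation(c, q) > threshold for c in centers):
            centers.append(q)
    return centers
```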
We artificially introduce a redundancy in our inverse kinematics problem on the 3-DOF arm by
specifying the desired trajectory (x, ẋ) only in terms of x, z positions and velocities, i.e., the movement is supposed to be in a vertical plane in front of the robot. Analytically, the inverse kinematics
equation is q̇ = J#(q) ẋ − α (I − J# J) ∂g/∂q, where J#(q) is the pseudo-inverse of the Jacobian. The
second term is an optimal solution to the redundancy problem, specified here by a cost function g
in terms of joint angles q. To learn a model for J#, we can reuse the local regions of q from the
forward model, where J# is also locally linear. The redundancy issue can be solved by applying
an additional weight to each data point according to a reward function [21]. In our case, the task is
specified in terms of {ẋ, ż}, so we define a reward based on a desired y coordinate, y_des, that we
would like to enforce as a soft constraint. Our reward function is g = exp(−(1/2) h (k (y_des − y) − ẏ)²), where
k is a gain and h specifies the steepness of the reward. This ensures that the learnt inverse model
chooses a solution which produces a ẏ that pushes the y coordinate toward y_des. We invert each
forward local model using a weighted linear regression, where each data point is weighted by the
weight from the forward model and additionally weighted by the reward.
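The reward weighting is equally simple to state; the sketch below computes g and the combined per-sample weight for the reward-weighted inverse regression, with the gain k and steepness h left as free parameters (the default values are our own).

```python
import numpy as np

def reward(y, y_dot, y_des, k=1.0, h=1.0):
    """Soft-constraint reward g = exp(-0.5 * h * (k * (y_des - y) - y_dot)^2)."""
    return np.exp(-0.5 * h * (k * (y_des - y) - y_dot) ** 2)

def inverse_model_weights(w_forward, y, y_dot, y_des, k=1.0, h=1.0):
    """Per-sample weights for the reward-weighted inverse regression:
    forward-model weight times the reward."""
    return w_forward * reward(y, y_dot, y_des, k, h)
```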
We test the performance of this inverse model (Learnt IK) in a figure-eight tracking task, as shown
in Fig. 5(b). As seen, the learnt model performs as well as the analytical inverse kinematics solution
(IK), with root mean squared tracking errors in positions and velocities very close to those of the
analytical solution. This demonstrates that kernel shaping is an effective learning algorithm for use
in robot control learning applications.
Table 1: Average normalized mean squared prediction error values for a stationary GP model, our
augmented nonstationary GP and local kernel shaping, averaged over 20 random data sets.

Method                       | Function i)     | Function ii)
Stationary GP                | 0.1251 ± 0.013  | 0.0230 ± 0.0047
Augmented nonstationary GP   | 0.0110 ± 0.0078 | 0.0212 ± 0.0067
Local Kernel Shaping         | 0.0092 ± 0.0068 | 0.0217 ± 0.0058
[Figure 4: Motorcycle impact data set from [20] (acceleration in g versus time in ms), with predicted results shown for (a) the alternate infinite mixture of GP experts (AiMoGPE), taken from [9], (b) our augmented nonstationary GP and (c) local kernel shaping.]
Applying any arbitrary nonlinear regression method (such as a GP) to the inverse kinematics problem
would, in fact, lead to unpredictably bad performance. The inverse kinematics problem is a one-to-many mapping, and it requires careful design of a learning problem to avoid problems with non-convex
solution spaces [22]. Our suggested method of learning linearizations with a forward mapping
(which is a proper function), followed by learning an inverse mapping within the local region of
the forward mapping, is one of the few clean approaches to the problem. Instead of using locally
linear methods, one could also use density-based estimation techniques like mixture models [23].
However, these methods must select the correct mode in order to arrive at a valid solution, and
this final step may be computationally intensive or involve heuristics. For these reasons, applying
an MCMC-type approach or GP-based method to the inverse kinematics problem was omitted as a
comparison.
5 Discussion
We presented a full Bayesian treatment of nonparametric local multi-dimensional kernel adaptation
that simultaneously estimates the regression and kernel parameters. The algorithm can also be integrated into nonlinear algorithms, offering a valuable and flexible tool for learning. We show that our
local kernel shaping method is particularly useful for learning control, demonstrating results on an
inverse kinematics problem, and envision extensions to more complex problems with redundancy,
e.g., learning inverse dynamics models of complete humanoid robots.
[Figure 5: (a) The SensAble Phantom robot arm; (b) desired versus actual trajectories (x versus z, in m) for the analytical IK and the learnt IK solutions.]
Note that our algorithm requires only one prior to be set by the user, i.e., the prior on the output noise. All other priors are
initialized the same for all data sets and kept uninformative. In its current form, our Bayesian kernel
shaping algorithm is built for high-dimensional inputs due to its low computational complexity:
it scales linearly with the number of input dimensions. However, numerical problems may arise
in the case of redundant and irrelevant input dimensions. Future work will address this issue through
the use of an automatic relevance determination feature. Other future extensions include an online
implementation of the local kernel shaping algorithm.
References
[1] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8. MIT Press, 1995.
[2] J. H. Friedman. A variable span smoother. Technical report, Stanford University, 1984.
[3] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:213–225, 1990.
[4] J. Fan and I. Gijbels. Local polynomial modeling and its applications. Chapman and Hall, 1996.
[5] C. J. Paciorek and M. J. Schervish. Nonstationary covariance functions for Gaussian process regression. In Advances in Neural Information Processing Systems 16. MIT Press, 2004.
[6] J. Fan and I. Gijbels. Data-driven bandwidth selection in local polynomial fitting: Variable bandwidth and spatial adaptation. Journal of the Royal Statistical Society B, 57:371–395, 1995.
[7] S. Schaal and C. G. Atkeson. Assessing the quality of learned local models. In G. Tesauro, J. Cowan, and J. Alspector, editors, Advances in Neural Information Processing Systems, pages 160–167. Morgan Kaufmann, 1994.
[8] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian processes. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[9] E. Meeds and S. Osindero. An alternative infinite mixture of Gaussian process experts. In Advances in Neural Information Processing Systems 17. MIT Press, 2005.
[10] C. Atkeson and S. Schaal. Robot learning from demonstration. In Proceedings of the 14th International Conference on Machine Learning, pages 12–20. Morgan Kaufmann, 1997.
[11] C. Atkeson, A. Moore, and S. Schaal. Locally weighted learning. AI Review, 11:11–73, April 1997.
[12] A. D'Souza, S. Vijayakumar, and S. Schaal. The Bayesian backfitting relevance vector machine. In Proceedings of the 21st International Conference on Machine Learning. ACM Press, 2004.
[13] A. Gelman, J. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman and Hall, 2000.
[14] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[15] Z. Ghahramani and M. J. Beal. Graphical models and variational methods. In D. Saad and M. Opper, editors, Advanced Mean Field Methods: Theory and Practice. MIT Press, 2000.
[16] T. S. Jaakkola and M. I. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10:25–37, 2000.
[17] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.
[18] V. Tresp. Mixtures of Gaussian processes. In Advances in Neural Information Processing Systems 13. MIT Press, 2000.
[19] A. M. Schmidt and A. O'Hagan. Bayesian inference for nonstationary spatial covariance structure via spatial deformations. Journal of the Royal Statistical Society, Series B, 65:745–758, 2003.
[20] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, Series B, 47:1–52, 1985.
[21] J. Peters and S. Schaal. Learning to control in operational space. International Journal of Robotics Research, 27:197–212, 2008.
[22] M. I. Jordan and D. E. Rumelhart. Internal world models and supervised learning. In Machine Learning: Proceedings of the Eighth International Workshop, pages 70–85. Morgan Kaufmann, 1991.
[23] Z. Ghahramani. Solving inverse problems using an EM approach to density estimation. In Proceedings of the 1993 Connectionist Models Summer School, pages 316–323. Erlbaum Associates, 1994.
Self-organization using synaptic plasticity
Vicenç Gómez¹
[email protected]
Hilbert J Kappen¹
[email protected]
Andreas Kaltenbrunner²
[email protected]
Vicente López²
[email protected]
¹Department of Biophysics, Radboud University Nijmegen, 6525 EZ Nijmegen, The Netherlands
²Barcelona Media - Innovation Centre, Av. Diagonal 177, 08018 Barcelona, Spain
Abstract
Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of
this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as
the interaction strength between the neurons increases. In this work we show how
a network of spiking neurons is able to self-organize towards a critical state for
which the range of possible inter-spike-intervals (dynamic range) is maximized.
Self-organization occurs via synaptic dynamics that we analytically derive. The
resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses.
1 Introduction
It is accepted that neural activity self-regulates to prevent neural circuits from becoming hyper- or
hypoactive by means of homeostatic processes [14]. Closely related to this idea is the claim that
optimal information processing in complex systems is attained at a critical point, near a transition between an ordered and an unordered regime of dynamics [3, 11, 9]. Recently, Kinouchi and
Copelli [8] provided a realization of this claim, showing that sensitivity and dynamic range of a
network are maximized at the critical point of a non-equilibrium phase transition. Their findings
may explain how sensitivity over high dynamic ranges is achieved by living organisms.
Self-Organized Criticality (SOC) [1] has been proposed as a mechanism for neural systems which
evolve naturally to a critical state without any tuning of external parameters. In a critical state, typical macroscopic quantities present structural or temporal scale-invariance. Experimental results [2]
show the presence of neuronal avalanches of scale-free distributed sizes and durations, thus giving evidence of SOC under suitable conditions. A possible regulation mechanism may be provided
by synaptic plasticity, as proposed in [10], where synaptic depression is shown to cause the mean
synaptic strengths to approach a critical value for a range of interaction parameters which grows
with the system size.
In this work we analytically derive a local synaptic rule that can drive and maintain a neural network
near the critical state. According to the proposed rule, synapses are either strengthened or weakened
whenever a post-synaptic neuron receives either more or less input from the population than required to fire at its natural frequency. This simple principle is enough for the network to self-organize at a critical region where the dynamic range is maximized. We illustrate this using a model
of non-leaky spiking neurons with delayed coupling for which a phase transition was analyzed in [7].
2 The model
The model under consideration was introduced in [12] and can be considered as an extension of [15,
5]. The state of a neuron i at time t is encoded by its activation level a_i(t), which performs at discrete
timesteps a random walk with positive drift towards an absorbing barrier L. This spontaneous
evolution is modelled using a Bernoulli process with parameter p. When the threshold L is reached,
the states of the other units j in the network are increased after one timestep by the synaptic efficacy
ε_ji, a_i is reset to 1, and the unit i remains insensitive to incoming spikes during the following
timestep. The evolution of a neuron i can be described by the following recursive rules:
a_i(t+1) = a_i(t) + Σ_{j=1, j≠i}^N ε_ij H_L(a_j(t)) + 1   with probability p,     if a_i(t) < L
a_i(t+1) = a_i(t) + Σ_{j=1, j≠i}^N ε_ij H_L(a_j(t))        with probability 1 − p, if a_i(t) < L        (1)
a_i(t+1) = 1 + Σ_{j=1, j≠i}^N ε_ij H_L(a_j(t))             if a_i(t) ≥ L

where H_L(x) is the Heaviside step function: H_L(x) = 1 if x ≥ L, and 0 otherwise.
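To make the update rules concrete, the following is a minimal NumPy sketch of Eq. (1); the function name, the uniform efficacy matrix and the random initial states are our choices, not part of the original model specification:

```python
import numpy as np

def simulate(N=100, L=100, p=0.9, mu=1.0, T=2000, seed=0):
    """Minimal sketch of the dynamics in Eq. (1).

    All efficacies eps[i, j] = epsilon_ij are set uniformly so that the
    characteristic parameter of Eq. (2) equals `mu`.
    """
    rng = np.random.default_rng(seed)
    eps = np.full((N, N), (L - 1) / ((N - 1) * mu))
    np.fill_diagonal(eps, 0.0)                    # no self-coupling
    a = rng.integers(1, L, size=N).astype(float)  # random initial states below L
    spike_times = []
    for t in range(T):
        fired = (a >= L).astype(float)            # H_L(a_j(t)) for every unit
        drive = eps @ fired                       # messages received by each unit
        step = (rng.random(N) < p).astype(float)  # Bernoulli(p) spontaneous drift
        a = np.where(fired > 0, 1.0 + drive, a + drive + step)
        spike_times.append(np.flatnonzero(fired))
    return spike_times
```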
Using the mean synaptic efficacy ⟨ε⟩ = Σ_{i=1}^N Σ_{j≠i}^N ε_ij / (N(N − 1)), we describe the degree of interaction between the units with the following characteristic parameter:

μ = (L − 1) / ((N − 1)⟨ε⟩),        (2)
which indicates whether the spontaneous dynamics (μ > 1) or the message interchange mechanism (μ ≤ 1) dominates the behavior of the system. As illustrated in the right raster-plot of Figure 1, at μ > 1 neurons fire irregularly as independent oscillators, whereas at μ ≤ 1 (central raster-plot) they synchronize into several phase-locked clusters. The lower μ, the fewer clusters can be observed. For μ = 0.5 the network is fully synchronized (left raster-plot).
In [7] it is shown that the system undergoes a phase transition around the critical value μ = 1. The study provides upper (τ_max) and lower bounds (τ_min) for the mean inter-spike-interval (ISI) τ of the ensemble and shows that the range of possible ISIs taking the average network behavior (Δτ = τ_max − τ_min) is maximized at μ = 1. This is illustrated in Figure 1 and has been observed as well in [8] for a similar neural model.

The average of the mean ISI ⟨τ⟩ is of order N^x with exponent x = 1 for μ > 1, x = 0.5 for μ = 1, and x = 0 for μ < 1 as N → ∞, and can be approximated as shown in [7] with¹:
τ_app = 1 + (L − 1 − N⟨ε⟩)/(2p) + √( ((L − 1 − N⟨ε⟩)/(2p) + 1)² + N⟨ε⟩/(2p) ).        (3)
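Eq. (3) is a closed-form expression that is cheap to evaluate; a direct transcription (naming is ours):

```python
import numpy as np

def tau_app(eps_mean, N, L, p):
    """Approximate mean inter-spike interval of Eq. (3)."""
    x = (L - 1 - N * eps_mean) / (2 * p)
    return 1 + x + np.sqrt((x + 1) ** 2 + N * eps_mean / (2 * p))

# at the critical point mu = 1 (here with L = N, so <eps> = 1) the ISI grows like sqrt(N)
for n in (100, 1000, 10000):
    print(n, tau_app(eps_mean=1.0, N=n, L=n, p=0.9))
```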
3 Self-organization using synaptic plasticity
We now introduce synaptic dynamics in the model. We first present the dissipated spontaneous evolution, a magnitude also maximized at μ = 1. The gradient of this magnitude turns out to be simple analytically and leads to a plasticity rule that can be expressed using only local information encoded in the post-synaptic unit.
3.1 The dissipated spontaneous evolution
During one ISI, we distinguish between the spontaneous evolution carried out by a unit and the
actual spontaneous evolution needed for a unit to reach the threshold L. The difference of both
quantities can be regarded as a surplus of spontaneous evolution, which is dissipated during an ISI.
¹The equation was denoted ⟨τ⟩_min in [7]. We slightly modified it using ⟨ε⟩ and replacing μ by Eq. (2).
[Figure 1: three raster plots (neuron index vs. time) showing full synchronization at μ = 0.5, clustering at μ ≤ 1 and noisy firing at μ > 1, above the curve Δτ = τ_max − τ_min plotted against μ ∈ [0.5, 1.5].]
Figure 1: Number of possible ISIs according to the bound Δτ = τ_max − τ_min derived in [7]. For μ > 1 the network presents sub-critical behavior and is dominated by the noise. For μ < 1 it shows super-critical behavior. Criticality is produced at μ = 1, which coincides with the onset of sustained activity. At this point, the network is also broken down into a maximal number of clusters of units which fire according to a periodic pattern.
Figure 2a shows an example trajectory of a neuron's state. First, we calculate the spontaneous evolution of the given unit during one ISI, which is just its number of stochastic state transitions during an ISI of length τ (thick black lines in Figure 2a). These state transitions occur with probability p at every timestep except for the timestep directly after spiking. Using the average ISI-length ⟨τ⟩ over many spikes and all units we can calculate the average total spontaneous evolution:

E_total = (⟨τ⟩ − 1)p.        (4)
Since the state of a given unit can exceed the threshold because of the received messages from the rest of the population (blue dashed lines in Figure 2a), a fraction of (4) is actually not required to induce a spike in that unit, and is therefore dissipated. We can obtain this fraction by subtracting from (4) the actual number of state transitions that was required to reach the threshold L. The latter quantity can be referred to as the effective spontaneous evolution E_eff and is on average L − 1 minus (N − 1)⟨ε⟩, the mean evolution caused by the messages received from the rest of the units during an ISI. For μ ≤ 1, the activity is self-sustained and the messages from other units are enough to drive a unit above the threshold. In this case, all the spontaneous evolution is dissipated and E_eff = 0.
Summarizing, we have that:

E_eff = max{0, L − 1 − (N − 1)⟨ε⟩} = { L − 1 − (N − 1)⟨ε⟩  for μ ≥ 1;   0  for μ < 1 }        (5)
If we subtract (5) from E_total (4), we obtain the mean dissipated spontaneous evolution, which is visualized as red dimensioning in Figure 2a:

E_diss = E_total − E_eff = (⟨τ⟩ − 1)p − max{0, L − 1 − (N − 1)⟨ε⟩}.        (6)
Using (3) as an approximation of ⟨τ⟩ we can get an analytic expression for E_diss. Figures 2b and c show this analytic curve E_diss as a function of μ together with the outcome of simulations.

At μ > 1 the units reach the threshold L mainly because of their spontaneous evolution. Hence, E_total ≈ E_eff and E_diss ≈ 0. The difference between E_total and E_eff increases as μ approaches 1 because the message interchange progressively dominates the dynamics. At μ < 1, we have E_eff = 0. In this scenario E_diss = E_total is mainly determined by the ISI ⟨τ⟩ and thus decays again for decreasing μ. The maximum can be found at μ = 1.
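The curve of Figure 2b follows directly from Eqs. (3)-(6); a small sketch that reproduces it (naming is ours):

```python
import numpy as np

def dissipated_evolution(mu, N, L, p):
    """E_diss of Eq. (6), with <tau> replaced by tau_app of Eq. (3)."""
    eps_mean = (L - 1) / ((N - 1) * mu)       # invert Eq. (2)
    x = (L - 1 - N * eps_mean) / (2 * p)
    tau = 1 + x + np.sqrt((x + 1) ** 2 + N * eps_mean / (2 * p))
    e_total = (tau - 1) * p                                  # Eq. (4)
    e_eff = np.maximum(0.0, L - 1 - (N - 1) * eps_mean)      # Eq. (5)
    return e_total - e_eff                                   # Eq. (6)

mus = np.linspace(0.5, 1.5, 201)
curve = dissipated_evolution(mus, N=1000, L=1000, p=0.9)
print(mus[np.argmax(curve)])   # the maximum sits at mu = 1, as in Fig. 2b
```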
3.2 Synaptic dynamics
After having presented our magnitude of interest we now derive a plasticity rule in the model. Our approach essentially assumes that updates of the individual synapses ε_ij are made in the direction of the gradient of E_diss.
[Figure 2 panels: (a) an example state trajectory a_i(t) reaching the threshold L, with the spontaneous evolution, the messages from other units (e.g., ε_12 + ε_13) and the dissipated surplus marked; (b) E_diss vs. μ; (c) E_total, E_eff and E_diss vs. μ together with simulation points.]
Figure 2: (a) Example trajectory of the state of a neuron: the dissipated spontaneous evolution E_diss is the difference between the total spontaneous evolution E_total (thick black lines) and the actual evolution required to reach the threshold E_eff (dark gray dimensioning) in one ISI. (b) E_diss is maximized at the critical point. (c) The three different evolutions involved in the analysis (parameters for (b) and (c) are N = L = 1000 and p = 0.9; for the mean ISI we used τ_app of Eq. (3)).
The analytical results are rather simple and allow a clear interpretation of the underlying mechanism governing the dynamics of the network under the proposed synaptic rule.
We start approximating the terms N⟨ε⟩ and (N − 1)⟨ε⟩ by the sum of all pre-synaptic efficacies ε_ik:

N⟨ε⟩ = (N − 1)⟨ε⟩ + ⟨ε⟩ ≈ (N − 1)⟨ε⟩ = Σ_{i=1}^N Σ_{k≠i} ε_ik / N ≈ Σ_{k≠i} ε_ik.        (7)
This can be done for large N and if we suppose that the distribution of ε_ik is the same for all i. E_diss is now defined in terms of each individual neuron i as:
E_diss^i = [ (L − 1 − Σ_{k≠i} ε_ik)/(2p) + √( ((L − 1 − Σ_{k≠i} ε_ik)/(2p) + 1)² + Σ_{k≠i} ε_ik/(2p) ) ] p − max{0, L − 1 − Σ_{k≠i} ε_ik}.        (8)
An update of ε_ij occurs when a spike from the pre-synaptic unit j induces a spike in a post-synaptic unit i. Other scheduling schemes are also possible. The results are robust as long as synaptic updates are produced at the spike-time of the post-synaptic neuron.
Δε_ij = λ ∂E_diss^i/∂ε_ij = λ ( ∂E_total^i/∂ε_ij − ∂E_eff^i/∂ε_ij ),        (9)

where the constant λ scales the amount of change in the synapse. We can write the gradient as:
∂E_diss^i/∂ε_ij = −1/2 − ( (L − 1 − Σ_{k≠i} ε_ik)/(2p) + 1/2 ) / ( 2√( ((L − 1 − Σ_{k≠i} ε_ik)/(2p) + 1)² + Σ_{k≠i} ε_ik/(2p) ) )
                + { 0 if (L − 1 − Σ_{k≠i} ε_ik) < 0;   indef. if (L − 1 − Σ_{k≠i} ε_ik) = 0;   1 if (L − 1 − Σ_{k≠i} ε_ik) > 0 }.        (10)
For a plasticity rule to be biologically plausible it must be local, so only information encoded in the states of the pre-synaptic neuron j and the post-synaptic neuron i must be considered to update ε_ij.
[Figure 3 panels: ΔE plotted against the effective threshold L_i ∈ [−500, 500]; (a) the components dE_total, dE_total + 1 and dE_diss; (b) the rule for c = 0.05, 0.5 and 5.]
Figure 3: Plasticity rule. (a) First derivative of the dissipated spontaneous evolution E_diss for μ = 1, L = 1000 and c = 0.9. (b) The same rule for different values of c.
We propagate Σ_{k≠i} ε_ik to the state of the post-synaptic unit i by considering, for every unit i, an effective threshold L_i which decreases deterministically every time an incoming pulse is received [6]. At the end of an ISI, L_i ≈ L − 1 − Σ_{k≠i} ε_ik and encodes implicitly all pre-synaptic efficacies of i. Intuitively, L_i indicates how the activity received from the population in the last ISI differs from the activity required to induce a spike in i.
The only term involving non-local information in (10) is the noise rate p. We replace it by a constant c and show later its limited influence on the synaptic rule. With these modifications we can write the derivative of E_diss^i with respect to ε_ij as a function of only local terms:
∂E_diss^i/∂ε_ij = (−L_i − c) / ( 2√( (L_i + 2c)² + 2c(L − L_i) ) ) + sgn(L_i)/2        (11)
Note that, although the derivation based on the surplus spontaneous evolution (10) may involve
information not locally accessible to the neuron, the derived rule (11) only requires a mechanism to
keep track of the difference between the natural ISI and the actual one.
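As a minimal sketch, the local rule of Eqs. (9) and (11) reduces to a single function of the post-synaptic effective threshold L_i; the function name and the default values of c and λ are ours:

```python
import numpy as np

def synaptic_update(L_i, L, c=1.0, lam=0.01):
    """Update Delta-eps_ij = lam * dE_diss/d(eps_ij) of Eqs. (9) and (11).

    L_i > 0: the unit needed spontaneous evolution to fire -> strengthen.
    L_i < 0: the unit received more input than required     -> weaken.
    L_i = 0: np.sign gives 0, matching the convention in the text.
    """
    grad = (-L_i - c) / (2.0 * np.sqrt((L_i + 2 * c) ** 2 + 2 * c * (L - L_i)))
    grad += np.sign(L_i) / 2.0
    return lam * grad

# reproduces the shape of Fig. 3: largest updates near L_i = 0, vanishing for |L_i| >> 0
print(synaptic_update(np.array([-250.0, -1.0, 1.0, 250.0]), L=1000))
```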
We can understand the mechanism involved in a particular synaptic update by analyzing Eq. (11) in detail. In the case of a negative effective threshold (L_i < 0), unit i receives more input from the rest of the units than required to spike, which translates into a weakening of the synapse. Conversely, if L_i > 0 some spontaneous evolution was required for the unit i to fire, Eq. (11) is positive and the synapse is strengthened. The intermediate case (L_i = 0) corresponds to μ = 1 and no synaptic update is needed (nor is it defined). We will thus consider it 0 for practical purposes.
Figure 3a shows Eq. (11) in bold lines together with ∂E_total^i/∂ε_ij (dashed line, corresponding to μ < 1) and ∂E_total^i/∂ε_ij + 1 (dashed-dotted, μ > 1), for different values of the effective threshold L_i of a given unit at the end of an ISI. E_total indicates the amount of synaptic change and E_eff determines whether the synapse is strengthened or weakened. The largest updates occur in the transition from a positive to a negative L_i and tend to zero for larger absolute values of L_i. Therefore, significant updates correspond to those synapses whose post-synaptic neurons have received, during the last ISI, an amount of activity from the whole network similar to the one required to fire.
We remark on the similarity between Figure 3b and the curve characterizing spike-timing-dependent plasticity (STDP) [4, 13]. Although in STDP the change in the synaptic conductances is determined by the relative spike timing of the pre-synaptic and post-synaptic neurons, and here it is determined by L_i at the spiking time of the post-synaptic unit i, the largest changes in STDP also occur in an abrupt transition from strengthening to weakening, corresponding to L_i = 0 in Figure 3a.
Figure 3b illustrates the role of c in the plasticity rule. For small c, updates are only significant in a
tiny range of Li values near zero. For higher values of c, the interval of relevant updates is widened.
The shape of the rule, however, is preserved, and the role of c is just to scale the change in the
synapse. For the rest of this manuscript, we will use c = 1.
[Figure 4: eight panels of μ trajectories versus number of periods, for λ = 0.1 (left) and λ = 0.01 (right), starting above (top) and below (bottom) the critical point; small insets zoom into the fluctuations around μ = 1.]
Figure 4: Empirical results of convergence toward μ = 1 for three different initial states above (top four plots) and below (bottom four plots) the critical point. Horizontal axes denote the number of ISIs of the same random unit during the simulations. On the left, results using the constant λ = 0.1. Larger panels show the full trajectory until 10³ timesteps after convergence. Smaller panels are a zoom of the first trajectory, μ_0 = 1.1 (top) and μ_0 = 0.87 (bottom). Right panels show the same type of results but using a smaller constant λ = 0.01.
3.3 Simulations
In this section we show empirical results for the proposed plasticity rule. We focus our analysis on
the time τ_conv required for the system to converge toward the critical point. In particular, we analyze how τ_conv depends on the initial configuration and on the constant λ.
For the experiments we use a network composed of N = 500 units with homogeneous L = 500 and
p = 0.9. Synapses are initialized homogeneously and random initial states are chosen for all units
in each trial. Every time a unit i fires, we update its afferent synapses ε_ij, for all j ≠ i, which breaks the homogeneity in the interaction strengths. The network starts with a certain initial condition μ_0 and evolves according to its original discrete dynamics, Eq. (1), together with the plasticity rule (9).

To measure the time τ_conv necessary to reach a value close to μ = 1 for the first time, we select a neuron i randomly and compute μ every time i fires. We assume convergence when μ ∈ (1 − δ, 1 + δ) for the first time. In these initial experiments, δ is set to λ/5 and λ is either 0.1 or 0.01.
We performed 50 random experiments for different initial configurations. In all cases, after
an initial transient, the network settles close to μ = 1, presenting some fluctuations. These fluctuations did not grow even after 10⁶ ISIs in all realizations. Figure 4 shows examples for μ_0 ∈ {0.58, 0.7, 0.87, 1.1, 1.3, 1.7}.
We can see that for larger updates of the synapses (λ = 0.1) the network converges faster. However, fluctuations around the reached state, slightly above μ = 1, are approximately one order of magnitude bigger than for λ = 0.01. We therefore conclude that λ determines the speed of convergence and the quality and stability of the dynamics at the critical state: high values of λ cause fast convergence but make the dynamics of the network less stable at the critical state.
We now study how τ_conv depends on μ_0 in more detail. Given N, L, c and λ, we can approximate the global change in μ after one entire ISI of a random unit, assuming that all neurons change their afferent synapses uniformly. This gives us a recursive definition for the sequence of μ_t's generated by the synaptic plasticity rule:

Δ(μ_t) = −λ (N − 1) (μ_t² / (L − 1)) [ (−L_eff(μ_t) − c) / ( 2√( (L_eff(μ_t) + 2c)² + 2c(L − L_eff(μ_t)) ) ) + sgn(μ_t − 1)/2 ],
[Figure 5: (a) periods (ISIs) and (b) timesteps required to reach μ = 1, plotted on logarithmic axes against μ_0 ∈ [0.5, 2], comparing simulations with τ_conv and τ_conv_steps for λ = 0.1 and λ = 0.01.]
Figure 5: Number of ISIs (a) and timesteps (b) required to reach the critical state as a function of the initial configuration μ_0. Rounded dots indicate empirical results as averages over 10 different realizations starting from the same μ_0. Continuous curves correspond to Eq. (12). Parameter values are N = 500, L = 500, p = 0.9, c = 1, δ = λ/5.
where L_eff(μ_t) = (L − 1)(1 − 1/μ_t) and μ_{t+1} = μ_t + Δ(μ_t). Then the number of ISIs and the number of timesteps can be obtained by²:

τ_conv = min{ t : |μ_t − 1| ≤ δ },        τ_conv_steps = Σ_{t=0}^{τ_conv} τ_app(μ_t).        (12)
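A sketch of the recursion behind Eq. (12), under the uniform-update approximation and our reading of Δ(μ_t) above (the naming and the termination guard are ours):

```python
import numpy as np

def predicted_convergence(mu0, N=500, L=500, p=0.9, c=1.0, lam=0.1, max_isi=10**7):
    """Iterate mu_{t+1} = mu_t + Delta(mu_t); return (tau_conv, tau_conv_steps)."""
    delta = lam / 5.0
    mu, n_isi, n_steps = mu0, 0, 0.0
    while abs(mu - 1.0) > delta and n_isi < max_isi:
        # accumulate tau_app(mu_t) of Eq. (3) before updating mu (footnote 2)
        eps_mean = (L - 1.0) / ((N - 1) * mu)
        x = (L - 1.0 - N * eps_mean) / (2 * p)
        n_steps += 1 + x + np.sqrt((x + 1) ** 2 + N * eps_mean / (2 * p))
        # Delta(mu_t) built from the local rule of Eq. (11)
        L_eff = (L - 1.0) * (1.0 - 1.0 / mu)
        grad = (-L_eff - c) / (2 * np.sqrt((L_eff + 2 * c) ** 2 + 2 * c * (L - L_eff)))
        grad += np.sign(mu - 1.0) / 2.0
        mu += -lam * (N - 1) * mu ** 2 / (L - 1.0) * grad
        n_isi += 1
    return n_isi, n_steps

print(predicted_convergence(1.3), predicted_convergence(0.7))
```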
Figure 5 shows empirical values of τ_conv and τ_conv_steps for several values of μ_0 together with the approximations (12). Despite the inhomogeneous coupling strengths, the analytical approximations (continuous lines) of the experiments (circles) are quite accurate. Typically, for μ_0 < 1 more spikes are required for convergence than for μ_0 > 1. However, the opposite occurs if we consider timesteps as time units. A hysteresis effect (described in [7]), present in the system if μ_0 < 1, causes the system to be more resistant against synaptic changes, which increases the number of updates (spikes) necessary to achieve the same effect as for μ_0 > 1. Nevertheless, since the ISIs are much shorter for supercritical coupling, the actual number of time steps is still lower than for subcritical coupling.
4 Discussion
Based on the amount of spontaneous evolution which is dissipated during an ISI, we have derived a
local synaptic mechanism which causes a network of spiking neurons to self-organize near a critical
state. Our motivation differs from those of similar studies, for instance [8], where the average branching ratio σ of the network is used to characterize criticality. Briefly, σ is defined as the average number of excitations created in the next time step by a spike of a given neuron.

The inverse of μ plays the role of the branching ratio σ in our model. If we initialize the units uniformly in [1, L], we have approximately one unit in every subinterval of length μ⟨ε⟩, and in consequence, the closest unit to the threshold spikes in 1/μ of the cases if it receives a spike. For μ > 1, a spike of a neuron rarely induces another neuron to spike, so σ < 1. Conversely, for μ < 1, the spike of a single neuron triggers more than one neuron to spike (σ > 1). Only for μ = 1 does the spike of a neuron elicit on the order of one spike (σ = 1). Our study thus represents a realization of a local synaptic mechanism which induces global homeostasis towards an optimal branching factor.
This idea is also related to the SOC rule proposed in [3], where a mechanism is defined for threshold
gates (binary units) in terms of bit flip probabilities instead of spiking neurons. As in our model,
criticality is achieved via synaptic scaling, where each neuron adjusts its synaptic input according to
an effective threshold called margin.
²The value of τ_app(μ_t) has to be calculated using an ⟨ε⟩ corresponding to μ_t in Eq. (3).
When the network is operating at the critical regime, the dynamics can be seen as balancing between
a predictable pattern of activity and uncorrelated random behavior typically present in SOC. One
would also expect to find macroscopic magnitudes distributed according to scale-free distributions.
Preliminary results indicate that, if the stochastic evolution is reset to zero (p = 0) at the critical
state, inducing an artificial spike on a randomly selected unit causes neuronal avalanches of sizes
and lengths which span several orders of magnitude and follow heavy tailed distributions. These
results are in concordance with what is usually found for SOC and will be published elsewhere.
The spontaneous evolution can be interpreted for instance as activity from other brain areas not
considered in the pool of the simulated units, or as stochastic sensory input. Our results indicate
that the amount of this stochastic activity that is absorbed by the system is maximized at an optimal
state, which in a sense minimizes the possible effect of fluctuations due to noise on the behavior of
the system.
The application of the synaptic rule to information processing is left for future research. We anticipate, however, that external perturbations applied when the network is critical would cause transient activity. During the transient, synapses could be modified according to some other form of learning to encode the proper values which drive the whole network to attain a characteristic synchronized pattern for the external stimuli presented. We conjecture that the hysteresis effect shown in the regime of μ < 1 may be suitable for such purposes, since the network is then able to keep the same
pattern of activity until the critical state is reached again.
Acknowledgments
We thank Joaquín J. Torres and Max Welling for useful suggestions and interesting discussions.
References
[1] P. Bak. How nature works: The Science of Self-Organized Criticality. Springer, 1996.
[2] J. M. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35):11167–11177, December 2003.
[3] N. Bertschinger, T. Natschläger, and R. A. Legenstein. At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks. In Advances in Neural Information Processing Systems 17, pages 145–152. MIT Press, Cambridge, MA, 2005.
[4] G. Q. Bi and M. M. Poo. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18:10464–10472, 1998.
[5] G. L. Gerstein and B. Mandelbrot. Random walk models for the spike activity of a single neuron. Biophys. J., 4:41–68, 1964.
[6] V. Gómez, A. Kaltenbrunner, and V. López. Event modeling of message interchange in stochastic neural ensembles. In IJCNN'06, Vancouver, BC, Canada, pages 81–88, 2006.
[7] A. Kaltenbrunner, V. Gómez, and V. López. Phase transition and hysteresis in an ensemble of stochastic spiking neurons. Neural Computation, 19(11):3011–3050, 2007.
[8] O. Kinouchi and M. Copelli. Optimal dynamical range of excitable networks at criticality. Nature Physics, 2:348, 2006.
[9] C. G. Langton. Computation at the edge of chaos: Phase transitions and emergent computation. Physica D Nonlinear Phenomena, 42:12–37, June 1990.
[10] A. Levina, J. M. Herrmann, and T. Geisel. Dynamical synapses causing self-organized criticality in neural networks. Nature Physics, 3(12):857–860, 2007.
[11] N. H. Packard. Adaptation toward the edge of chaos. In: Dynamics Patterns in Complex Systems, pages 293–301. World Scientific: Singapore, 1988. A. J. Mandell, J. A. S. Kelso, and M. F. Shlesinger, editors.
[12] F. Rodríguez, A. Suárez, and V. López. Period focusing induced by network feedback in populations of noisy integrate-and-fire neurons. Neural Computation, 13(11):2495–2516, 2001.
[13] S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9):919–926, 2000.
[14] G. G. Turrigiano and S. B. Nelson. Homeostatic plasticity in the developing nervous system. Nature Reviews Neuroscience, 5(2):97–107, 2004.
[15] C. Van Vreeswijk and L. F. Abbott. Self-sustained firing in populations of integrate-and-fire neurons. SIAM J. Appl. Math., 53(1):253–264, 1993.
Variational Mixture of Gaussian Process Experts
Chao Yuan and Claus Neubauer
Siemens Corporate Research
Integrated Data Systems Department
755 College Road East, Princeton, NJ 08540
{chao.yuan,claus.neubauer}@siemens.com
Abstract
Mixture of Gaussian processes models extend a single Gaussian process with the ability to model multi-modal data and to reduce training complexity. Previous inference algorithms for these models are mostly based on Gibbs sampling,
which can be very slow, particularly for large-scale data sets. We present a new
generative mixture of experts model. Each expert is still a Gaussian process but
is reformulated by a linear model. This breaks the dependency among training
outputs and enables us to use a much faster variational Bayesian algorithm for
training. Our gating network is more flexible than previous generative approaches
as inputs for each expert are modeled by a Gaussian mixture model. The number
of experts and number of Gaussian components for an expert are inferred automatically. A variety of tests show the advantages of our method.
1 Introduction
Despite its widespread success in regression problems, Gaussian process (GP) has two limitations. First, it cannot handle data with multi-modality. Multi-modality can exist in the input dimension (e.g., non-stationarity), in the output dimension (given the same input, the output has multiple modes), or in a combination of both. Secondly, the cost of training is O(N³), where N is the size of
the training set, which can be too expensive for large data sets. Mixture of GP experts models were
proposed to tackle the above problems (Rasmussen & Ghahramani [1]; Meeds & Osindero [2]).
Monte Carlo Markov Chain (MCMC) sampling methods (e.g., Gibbs sampling) are the standard
approaches to train these models, which theoretically can achieve very accurate results. However,
MCMC methods can be slow to converge and their convergence can be difficult to diagnose. It is
thus important to explore alternatives.
In this paper, we propose a new generative mixture of Gaussian processes model for regression problems and apply variational Bayesian methods to train it. Each Gaussian process expert is described
by a linear model, which breaks the dependency among training outputs and makes variational
inference feasible. The distribution of inputs for each expert is modeled by a Gaussian mixture
model (GMM). Thus, our gating network can handle missing inputs and is more flexible than single
Gaussian-based gating models [2-4]. The number of experts and the number of components for
each GMM are automatically inferred. Training using variational methods is much faster than using
MCMC. The rest of this paper is organized as follows. Section 2 surveys the related work. Section
3 describes the proposed algorithm. We present test results in Section 4 and summarize this paper
in Section 5.
2 Related work
Gaussian process is a powerful tool for regression problems (Rasmussen & Williams [5]). It elegantly models the dependency among data with a Gaussian distribution: P(Y) = N(Y | 0, K + σ_n² I),
[Figure 1: graphical model with nodes L, t, z, p, q_l, α_y, C, y, x, m_lc, α_x, m_0, R_0, R_lc, r, v_l, S, θ_l, β_l, I_l, a, b.]
Figure 1: The graphical model representation for the proposed mixture of experts model. It consists of a hyperparameter set Θ = {L, α_y, C, α_x, m_0, R_0, r, S, θ_{1:L}, I_{1:L}, a, b} and a parameter set Φ = {p, q_l, m_lc, R_lc, v_l, β_l | l = 1, 2, ..., L and c = 1, 2, ..., C}. The local expert is a GP linear model to predict output y from input x; the gating network is a GMM for input x. Data can be generated as follows. Step 1, determine hyperparameters Θ. Step 2, sample parameters Φ. Step 3, to sample one data point x and y, we sequentially sample expert indicator t, cluster indicator z, x and y. Step 3 is independently repeated until enough data points are generated.
where Y = {y_{1:N}} are N training outputs and I is an identity matrix. We will use y_{1:N} to denote y_1, y_2, ..., y_N. The kernel matrix K considered here consists of kernel functions between pairs of inputs x_i and x_j: K_ij = k(x_i, x_j) = σ_f² exp(−Σ_{m=1}^d (x_im − x_jm)²/(2σ_gm²)), where d is the dimension of the input x. The d + 2 hyperparameters σ_n, σ_f, σ_g1, σ_g2, ..., σ_gd can be efficiently estimated from the data. However, Gaussian process has difficulties in modeling large-scale data and multi-modal data. The first issue was addressed by various sparse Gaussian processes [6-9, 16].
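For illustration, the kernel matrix and the resulting prior covariance of the training outputs can be computed as follows (a sketch; the function name is ours):

```python
import numpy as np

def ard_kernel(X1, X2, sigma_f, sigma_g):
    """K_ij = sigma_f^2 exp(-sum_m (x_im - x_jm)^2 / (2 sigma_gm^2))."""
    diff = (X1[:, None, :] - X2[None, :, :]) / sigma_g   # pairwise scaled differences
    return sigma_f ** 2 * np.exp(-0.5 * (diff ** 2).sum(axis=-1))

# prior over training outputs: P(Y) = N(Y | 0, K + sigma_n^2 I)
X = np.random.randn(6, 2)
K = ard_kernel(X, X, sigma_f=1.0, sigma_g=np.array([1.0, 2.0]))
cov_Y = K + 0.1 ** 2 * np.eye(len(X))
```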
The mixture of experts (MoE) framework offers a natural solution for multi-modality problems
(Jacobs et al. [10]). Early MoE work used linear experts [3, 4, 11, 12] and some of them were
neatly trained via variational methods [4, 11, 12]. However, these methods cannot model nonlinear
data sets well. Tresp [13] proposed a mixture of GPs model that can be trained fast using the EM
algorithm. However, hyperparameters including the number of experts needed to be specified and the
training complexity issue was not addressed. By introducing the Dirichlet process mixture (DPM)
prior, infinite mixture of GPs models are able to infer the number of experts, both hyperparameters
and parameters via Gibbs sampling [1, 2]. However, these models are trained by MCMC methods,
which demand expensive training and testing time (as collected samples are usually combined to
give predictive distributions). How to select samples and how many samples to be used are still
challenging problems.
3 Algorithm description
Fig.1 shows the graphical model of the proposed mixture of experts. It consists of the local expert
part and gating network part, which are covered in Sections 3.1 and 3.2, respectively. In Section 3.3,
we describe how to perform variational inference of this model.
3.1 Local Gaussian process expert
A local Gaussian process expert is specified by the following linear model given the expert indicator t = l (where l = 1 : L) and other related variables:

P(y | x, t = l, v_l, θ_l, I_l, β_l) = N(y | v_lᵀ φ_l(x), β_l⁻¹).        (1)
This linear model is symbolized by the inner product of the weight vector v_l and a nonlinear feature vector φ_l(x). φ_l(x) is a vector of kernel functions between a test input x and a subset of training inputs: [k_l(x, x_{I_l1}), k_l(x, x_{I_l2}), ..., k_l(x, x_{I_lM})]ᵀ. The active set I_l denotes the indices of the selected M training samples. How to select I_l will be addressed in Section 3.3; for now let us assume that we use the whole training set as the active set. v_l has a Gaussian distribution N(v_l | 0, U_l⁻¹) with 0 mean and inverse covariance U_l. U_l is set to K_l + σ_hl² I, where K_l is an M × M kernel matrix consisting of kernel functions between training samples in the active set. σ_hl² is needed to avoid singularity of U_l. θ_l = {σ_hl, σ_fl, σ_gl1, σ_gl2, ..., σ_gld} denotes the set of hyperparameters for this linear model. Note that φ_l(x) depends on θ_l. β_l is the inverse variance of this linear model. The prior of β_l is set as a Gamma distribution: Γ(β_l | a, b) ∝ b^a β_l^{a−1} e^{−b β_l} with hyperparameters a and b.
It is easy to see that for each expert, y is a Gaussian process defined on x. Such a linear model was proposed by Silverman [14] and was used by sparse Gaussian process models [6, 8]. If we set σ_hl² = 0 and β_l = 1/σ_nl², the joint distribution of the training outputs Y, assuming they are from the same expert l, can be proved to be N(Y | 0, K_l + σ_nl² I). This has exactly the same form as a regular Gaussian process. However, the largest advantage of this linear model is that it breaks the dependency of y_{1:N} once t_{1:N} are given; i.e., P(y_{1:N} | x_{1:N}, t_{1:N}, v_{1:L}, θ_{1:L}, I_{1:L}, β_{1:L}) = Π_{n=1}^N P(y_n | x_n, t_n = l, v_l, θ_l, I_l, β_l). This makes the variational inference of the mixture of Gaussian processes feasible.
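A sketch of the resulting expert prediction, assuming the squared-exponential kernel above for k_l (names are ours):

```python
import numpy as np

def phi(x, X_active, sigma_f, sigma_g):
    """Feature vector [k_l(x, x_{Il1}), ..., k_l(x, x_{IlM})]^T of one expert."""
    diff = (X_active - x) / sigma_g
    return sigma_f ** 2 * np.exp(-0.5 * (diff ** 2).sum(axis=1))

def expert_predict(x, X_active, v, beta, sigma_f, sigma_g):
    """Predictive mean and variance of y | x for one expert, Eq. (1)."""
    mean = phi(x, X_active, sigma_f, sigma_g) @ v
    return mean, 1.0 / beta
```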
3.2 Gating network
A gating network determines which expert to use based on input x. We consider a generative gating network, where the expert indicator t is generated by a categorical distribution P(t = l) = p_l. p = [p_1 p_2 ... p_L] is given a symmetric Dirichlet distribution P(p) = Dir(p | α_y/L, α_y/L, ..., α_y/L). Given expert indicator t = l, we assume that x follows a Gaussian mixture model (GMM) with C components. Each component (cluster) is modeled by a Gaussian distribution P(x | t = l, z = c, m_lc, R_lc) = N(x | m_lc, R_lc⁻¹). z is the cluster indicator, which has a categorical distribution P(z = c | t = l, q_l) = q_lc. In addition, we give m_lc a Gaussian prior N(m_lc | m_0, R_0⁻¹), R_lc a Wishart prior W(R_lc | r, S) and q_l a symmetric Dirichlet prior Dir(q_l | α_x/C, α_x/C, ..., α_x/C).

In previous generative gating networks [2-4], the expert indicator also acts as the cluster indicator (or t = z) such that inputs for an expert can only have one Gaussian distribution. In comparison, our model is more flexible by modeling inputs x for each expert as a Gaussian mixture distribution. One can also put a prior (e.g., inverse Gamma distribution) on α_x and α_y as done in [1, 2, 15]. In this paper we treat them as fixed hyperparameters.
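At prediction time the gating posterior P(t = l | x) follows from Bayes' rule over this generative model; a sketch using SciPy's Gaussian density (naming and data layout are ours):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gate_posterior(x, p, q, m, cov):
    """P(t = l | x) proportional to p_l * sum_c q_lc N(x | m_lc, R_lc^{-1}).

    p: (L,) mixing weights; q[l]: cluster weights of expert l;
    m[l][c]: cluster means; cov[l][c]: cluster covariances R_lc^{-1}.
    """
    w = np.array([
        p[l] * sum(q[l][c] * multivariate_normal.pdf(x, m[l][c], cov[l][c])
                   for c in range(len(q[l])))
        for l in range(len(p))
    ])
    return w / w.sum()
```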
3.3 Variational inference
Variational EM algorithm   Given a set of training data D = {(x_n, y_n) | n = 1 : N}, the task of learning is to estimate the unknown hyperparameters and infer the posterior distribution of the parameters. This problem is nicely addressed by the variational EM algorithm. The objective is to maximize log P(D | Θ) over hyperparameters Θ. Parameters Φ, expert indicators T = {t_{1:N}} and cluster indicators Z = {z_{1:N}} are treated as hidden variables, denoted by Ω = {Φ, T, Z}.
It is possible to estimate all hyperparameters via the EM algorithm. However, most of the hyperparameters are generic and are thus fixed as follows. m_0 and R_0 are set to be the mean and inverse covariance of the training inputs, respectively. We fix the degrees of freedom r = d and the scale matrix S = 100I for the Wishart distribution. α_x, α_y, C and L are all set to 10. Following Bishop & Svensén [12], we set a = 0.01 and b = 0.0001. Such settings give broad priors to the parameters and make our model sufficiently flexible. Our algorithm is not found to be sensitive to these generic hyperparameters. The only hyperparameters that remain to be estimated are {θ_{1:L}, I_{1:L}}. Note that these GP-related hyperparameters are problem specific and should not be assumed known.
In the E-step, based on the current estimates of Θ, the posterior probability of the hidden variables P(Ω | D, Θ) is computed. Variational inference is involved in this step by approximating P(Ω | D, Θ) with a factorized distribution

Q(Ω) = Π_{l,c} Q(m_lc) Q(R_lc) · Π_l Q(q_l) Q(v_l) Q(β_l) · Q(p) · Π_n Q(t_n, z_n).        (2)
Each hidden variable has the same type of posterior distribution as its conjugate prior. To compute the distribution for a hidden variable Ω_i, we need to compute the posterior mean of log P(D, Ω | Θ) over all hidden variables except Ω_i: ⟨log P(D, Ω | Θ)⟩_{Ω\Ω_i}. The derivation is standard and is thus omitted.
Variational inference for each hidden variable takes linear time with respect to N, C and L, because the factorized form of P(D, Ω | Θ) leads to separation of the hidden variables in log P(D, Ω | Θ). If we switch from our linear model to a regular Gaussian process, one will encounter a prohibitive complexity of O(L^N) for integrating log P(y_{1:N} | x_{1:N}, t_{1:N}, Θ) over t_{1:N}. Also note that C = L = 10 represents the maximum number of clusters and experts. The actual number is usually smaller. During iteration, if a cluster c for expert l does not have a single training sample supporting it (Q(t_n = l, z_n = c) > 0), this cluster and its associated parameters m_lc and R_lc will be removed. Similarly, we remove an expert l if no Q(t_n = l) > 0. These C and L choices are flexible enough for all our tests, but for more complicated data, larger values may be needed.
In the M-step, we search for Θ which maximizes ⟨log P(D, Ω | Θ)⟩_Ω. We employ the conjugate gradient method to estimate θ_{1:L} similarly to [5]. Both the E-step and the M-step are repeated until the algorithm converges. For better efficiency, we do not select the active sets I_{1:L} in each M-step; instead, we fix I_{1:L} during the EM algorithm and only update I_{1:L} once when the EM algorithm converges. The details are given after we introduce the algorithm initialization.
Initialization Without proper initialization, variational methods can be easily trapped into local
optima. Consequently, using pure randomization methods, one cannot rely on a single result, but
has to run the algorithm multiple times and then either pick the best result [12] or average the results
[11]. We present a new initialization method that only needs the algorithm to run once. Our method
is based on the assumption that the combined data including x and y for an expert are usually
distributed locally in the combined (d + 1)-dimensional space. Therefore, clustering methods such as k-means can be used to cluster the data, one cluster per expert.
Experts are initialized incrementally as follows. First, all training data are used to train one expert.
Secondly, we cluster all training data into two clusters and train one expert per cluster. We do this
four times and collect a total of L = 1 + 2 + 3 + 4 = 10 experts. Different experts represent
different local portions of training data in different scales. Although our assumption may not be true
in some cases (e.g., one expert's data intersect with another's), this initialization method does give us a
meaningful starting point. In practice, we find it effective and reliable.
Active set selection   We now address the problem of selecting the active set I_l of size M in defining the feature vector φ_l for expert l. The posterior distribution Q(v_l) can be proved to be Gaussian with inverse covariance Ũ_l = ⟨β_l⟩ Σ_n T_nl φ_l(x_n) φ_l(x_n)ᵀ + K_l + σ_hl² I and mean ṽ_l = Ũ_l⁻¹ ⟨β_l⟩ Σ_n T_nl y_n φ_l(x_n). T_nl is an abbreviation for Q(t_n = l) and ⟨β_l⟩ is the posterior mean of β_l. Inverting Ũ_l has a complexity of O(M³). Thus, for small data sets, the active set can be set to the full training set (M = N). But for large data sets, we have to select a subset with M < N.
The active set I_l is randomly initialized. With I_l fixed, we run the variational EM algorithm and obtain Q(Ω) and Θ. Now we want to improve our results by updating I_l. Our method is inspired by the maximum a posteriori probability (MAP) criterion used by sparse Gaussian processes [6, 8]. Specifically, the optimization target in our case is max_{I_l, v_l} P(v_l | D) ∝ Q(v_l) with the posterior distributions of the other hidden variables fixed. The justification of this choice is that a good I_l should be strongly supported by the data D such that Q(v_l) is highly peaked. Since Q(v_l) is Gaussian, v_l is always ṽ_l at the optimal point and thus this optimization is equivalent to maximizing the determinant of the inverse covariance

max_{I_l} |Ũ_l| = | ⟨β_l⟩ Σ_n T_nl φ_l(x_n) φ_l(x_n)ᵀ + K_l + σ_hl² I |.        (3)
Note that if T_nl is one for all n, our method turns into a MAP-based sparse Gaussian process. However, even in that case, our criterion max_{I_l, v_l} P(v_l | D) differs from the max_{I_l, v_l} P(D | v_l) P(v_l) derived in previous MAP-based work [6, 8]. First, the denominator P(D) is ignored by previous methods, although it actually depends on I_l. Secondly, |K_l + σ_hl² I| in P(v_l) is also ignored in previous methods. For these reasons, previous methods are not real MAP estimation but approximations of it.
Looking for the globally optimal active set of size M is not feasible. Thus, similarly to many sparse Gaussian processes, we consider a greedy algorithm that adds one index to I_l at a time. For a candidate index i, computing the new Ũ_l requires O(NM); incrementally updating the Cholesky factorization of Ũ_l requires O(M²) and computing the new |Ũ_l| needs O(1). Therefore, checking one candidate i takes O(NM). We consider selecting the best index from κ = 100 randomly selected candidates [6, 8], which makes the total time for adding one index O(κNM). For adding all M indices, the total time is O(κNM²). Such a complexity is comparable to that of [6], but higher than those of [7, 8]. Note that this time is needed for each of the L experts.
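A sketch of the greedy selection under Eq. (3); for clarity each candidate is scored by recomputing the log-determinant from scratch rather than by the incremental Cholesky update described above, and `kern(A, B)` is assumed to return the kernel matrix between two sets of inputs:

```python
import numpy as np

def greedy_active_set(X, Tnl, beta, sigma_h, kern, M, n_cand=100, seed=0):
    """Greedily maximize |U~_l| of Eq. (3) for one expert."""
    rng = np.random.default_rng(seed)
    active = []
    for _ in range(M):
        pool = np.setdiff1d(np.arange(len(X)), active)
        cands = rng.choice(pool, size=min(n_cand, len(pool)), replace=False)
        best_i, best_val = None, -np.inf
        for i in cands:
            idx = active + [int(i)]
            Phi = kern(X, X[idx])                        # rows are phi_l(x_n)^T
            U = beta * (Phi * Tnl[:, None]).T @ Phi \
                + kern(X[idx], X[idx]) + sigma_h ** 2 * np.eye(len(idx))
            val = np.linalg.slogdet(U)[1]                # log |U~_l|
            if val > best_val:
                best_i, best_val = int(i), val
        active.append(best_i)
    return active
```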
In summary, the variational EM algorithm with active set selection proceeds as follows. During initialization, training data are clustered and assigned to each expert by the k-means clustering algorithm noted above; the data assigned to each expert are used for randomly selecting the active set and then training the linear model. During each iteration, we run variational EM to update parameters and hyperparameters; when the EM algorithm converges, we update the active set and Q(v_l) for each expert. Such an iteration is repeated until convergence.
It is also possible to define the feature vector φ_l(x) as [k(x, x̄_1), k(x, x̄_2), ..., k(x, x̄_M)]ᵀ, where each x̄ is a pseudo-input (Snelson & Ghahramani [9]). In this way, these pseudo-inputs X̄ can
be viewed as hyperparameters and can be optimized in the same variational EM algorithm without
resorting to a separate update for active sets as we do. This is theoretically more sound. However,
it leads to a large number of hyperparameters to be optimized. Although overfitting may not be an
issue, the authors cautioned that this method can be vulnerable to local optima.
Predictive distribution   Once training is done, for a test input x*, its predictive distribution P(y* | x*, D, Θ) is evaluated as follows:

P(y* | x*, D, Θ) = ∫ P(y* | x*, Φ, Θ) P(Φ | D, Θ) dΦ ≈ ∫ P(y* | x*, Φ, Θ) Q(Φ) dΦ
                 ≈ P(y* | x*, ⟨p⟩, {⟨q_l⟩}, {⟨m_lc⟩}, {⟨R_lc⟩}, {⟨v_l⟩}, {⟨β_l⟩}, {θ_l}, {I_l}).        (4)

The first approximation uses the results from the variational inference. Note that the expert indicators T and cluster indicators Z are integrated out. Suppose that there are sufficient training data; the posterior distributions of all parameters are then usually highly peaked. This leads to the second approximation, where the integral reduces to a point evaluation at the posterior mean of each parameter. Eq. (4) can be easily computed using the standard predictive algorithm for a mixture of linear experts. See the appendix for more details.
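A sketch of the point-evaluation predictive of Eq. (4): each expert contributes a Gaussian weighted by the gating posterior, from which the predictive mean or samples (as in Fig. 2, right) are easily obtained (naming is ours):

```python
import numpy as np

def predictive(gate_w, expert_means, expert_vars, n_samples=0, seed=0):
    """Mixture predictive of Eq. (4) at one test input.

    gate_w: normalized P(t = l | x*); expert_means[l] and expert_vars[l]
    are <v_l>^T phi_l(x*) and 1/<beta_l> for each expert l.
    """
    gate_w = np.asarray(gate_w, dtype=float)
    mean = gate_w @ np.asarray(expert_means)           # predictive mean
    if n_samples:
        rng = np.random.default_rng(seed)
        ls = rng.choice(len(gate_w), size=n_samples, p=gate_w)
        samples = rng.normal(np.asarray(expert_means)[ls],
                             np.sqrt(np.asarray(expert_vars)[ls]))
        return mean, samples
    return mean
```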
4 Test results
For all data sets, we normalize each dimension of data to zero mean and unit variance before using
them for training. After training, to plot fitting results, we de-normalize data into their original
scales.
Artificial toy data We consider the toy data set used by [2], which consists of four continuous
functions covering input ranges (0, 15), (35, 60), (45, 80) and (80, 100), respectively. Different
levels of noise (with standard deviations std = 7, 7, 4 and 2) are added to different functions. This is
a challenging multi-modality problem in both input and output dimensions. Fig.2 (left) shows 400
points generated by this toy model, each point with an equal probability 0.25 of being assigned to one of
the four functions. Using these 400 points as training data, our method found two experts that fit the
data nicely. Fig.2 (left) shows the results.
In general, expert one represents the last two functions while expert two represents the first two
functions. One may desire to recover each function separately by an expert. However, note the fact
that the first two functions have the same noise level (std = 7); so it is reasonable to use just one GP to model these two functions. In fact, we recovered a very close estimated std = 1/√⟨β_2⟩ = 6.87 for the second expert. The stds of the last two functions are also close (4 vs. 2), and are also similar to 1/√⟨β_1⟩ = 2.48 of the first expert. Note that the GP for expert one appears to fit the data of the
first function comparably well to that of expert two. However, the gating network does not support
this: the means of the GMM for expert one do not cover the region of the first function.
Ref.[2] and our method performed similarly well in discovering different modalities in different
input regions. We did not plot the mean of the predictive distribution as this data set has multiple
modes in the output dimension. Our results were produced using an active set size M = 60. Larger
active sets did not give appreciably better results.
Motorcycle data Our algorithm was also applied to the 2D motorcycle data set [14], which contains
133 data points with input-dependent noise as shown in Fig.2 (right). Our algorithm yielded two
experts with the first expert modeling the majority of the points and the second expert only depicting
the beginning part. The estimated stds of the two experts are 23.46 and 2.21, respectively. This
appears to correctly represent different levels of noise present in different parts of the data.
[Figure 2 appears here: left panel, the toy data; right panel, the motorcycle data. Legend entries: data for expert 1/2, GP for expert 1/2, m for expert 1/2, mean of experts, posterior samples.]
Figure 2: Test results for toy data (left) and motorcycle data (right). Each data point is assigned to an expert $l$ based on its posterior probability $Q(t_n = l)$ and is referred to as "data for expert $l$". The means of the GMM for each expert are also shown at the bottom as "m for expert $l$". In the right figure, the mean of the predictive distribution is shown as a solid line and samples drawn from the predictive distribution are shown as dots (100 samples for each of the 45 horizontal locations).
We also plot the mean of the predictive distribution (4) in Fig.2 (right). Our mean result compares favorably with other methods using medians of mixtures [1, 2]. In particular, our result is similar to that of [1] at input $\le 30$. At input $> 35$, the result of [1] abruptly becomes flat while our result is smooth and appears to fit the data better. The result of [2] is jagged, which may suggest using more Gibbs samples for smoother results. In terms of the full predictive (posterior) distribution (represented by samples in Fig.2 (right)), our results are better at input $\le 40$, as more artifacts are produced by [1, 2] (especially between 15 and 25). However, our results have more artifacts at input $> 40$, because that region shares the same std = 23.46 as the region where the input is between 15 and 40. The active set size of our method was set to 40. Training using Matlab 7 on a Pentium 2.4 GHz machine took 20 seconds, compared to one hour spent by the Gibbs sampling method [1].
Robot arm data We consider the two-link robot arm data set used by [12]. Fig.3 (left) shows the kinematics of such a 2D robot. The joint angles are limited to the ranges $0.3 \le \theta_1 \le 1.2$ and $\pi/2 \le \theta_2 \le 3\pi/2$. Based on the forward kinematic equations (see [12]), the end point position $(x_1, x_2)$ has a unique solution given values of the joint angles $(\theta_1, \theta_2)$. However, we are interested in the inverse kinematics problem: given the end point position, we want to estimate the joint angles. We randomly generated 2000 points based on the forward kinematics, with the first 1000 points for training and the remaining 1000 points for testing. Although noise can be added, we did not do so, to make our results comparable to those of [12].
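For concreteness, the following sketch generates such a data set from forward kinematics. The link lengths (0.8 and 0.2) are an assumption borrowed from the common two-link benchmark; [12] fixes its own values.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.8, 0.2  # assumed link lengths; not necessarily those of [12]

def forward_kinematics(theta1, theta2):
    # One common two-link parameterization of the end point (x1, x2).
    x1 = L1 * np.cos(theta1) - L2 * np.cos(theta1 + theta2)
    x2 = L1 * np.sin(theta1) - L2 * np.sin(theta1 + theta2)
    return x1, x2

# Draw joint angles uniformly from the stated ranges, then map them forward.
n = 2000
theta1 = rng.uniform(0.3, 1.2, size=n)
theta2 = rng.uniform(np.pi / 2, 3 * np.pi / 2, size=n)
x1, x2 = forward_kinematics(theta1, theta2)

inputs = np.stack([x1, x2], axis=1)        # end point positions
targets = np.stack([theta1, theta2], 1)    # joint angles to be estimated
train_X, test_X = inputs[:1000], inputs[1000:]
train_Y, test_Y = targets[:1000], targets[1000:]
```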
Since this problem involves predicting two correlated outputs at the same time, we used an independent set of local experts for each output but let the two outputs share the same gating network. This was easily accommodated in our algorithm. Our algorithm found five experts vs. the 16 experts used by [12]. The average number of GMM components is 3. We use residue plots [12] to present results (see Fig.3). Compared to that of [12], the first residue plot is much cleaner, suggesting that our errors are much smaller. This is expected, as we use more powerful GP experts vs. the linear experts used by [12]. The second residue plot (not used in [12]) also gives a clean result but is worse than the first plot. This is because the modality with the smaller posterior probability is more likely to be replaced by false-positive modes. The active set size was set to 100. A larger size did not improve the results.
DELVE data We applied our algorithm to three widely used DELVE data sets: Boston, Kin-8nm
and Pumadyn-32nm. These data sets appear to be single-modal, because impressive results were achieved by a single GP. The purpose of this test is to check how our algorithm (intended for multi-modality) handles single modality without knowing it. We followed the standard DELVE testing
framework: for the Boston data, there are two tests each using 128 training examples; for both
Kin-8nm and Pumadyn-32nm data, there are four tests, each using 1024 training examples.
Table 1 shows the standardised squared errors for the test. The scores from all previous methods are
copied from Waterhouse [11]. We used the full training set as the active set. Reducing the active
[Figure 3 appears here: left panel, the two-link robot arm with joint angles $\theta_1, \theta_2$ and regions A, B, C; middle and right panels, the first and second residue plots on the unit square.]
Figure 3: Test results for the robot arm data set. Left: illustration of the robot kinematics (adapted from [12]). Our task is to estimate the joint angles $(\theta_1, \theta_2)$ based on the end point positions. In region B, there are two modalities for the same end point position. In regions A and C, there is only one modality. Middle: the first residue plot. For a test point, its predictive distribution is a Gaussian mixture. The mean of the Gaussian distribution with the highest probability was fed into the forward kinematics to obtain the estimated end point position. A line was drawn between the estimated and real end point positions; the length of the line indicates the magnitude of the error. The average line length (error) is a very small 0.00067, so many lines appear as dots. Right: the second residue plot, using the mean of the Gaussian distribution with the second highest probability, only for region B. The average line length is 0.001. Both residue plots are needed to check whether both modalities are detected correctly.
Data sets    gp              mars            mlp             me              vmgp
Boston       0.194 ± 0.061   0.157 ± 0.009   0.094 ± 0.013   0.159 ± 0.023   0.157 ± 0.002
Kin8nm       0.116 ± 0.006   0.460 ± 0.013   0.046 ± 0.023   0.182 ± 0.020   0.119 ± 0.005
Pum32nm      0.044 ± 0.009   0.061 ± 0.003   --              0.701 ± 0.079   0.041 ± 0.005
Table 1: Standardised squared errors of different methods on the DELVE data sets. Our method
(vmgp) is compared with a single Gaussian process trained using a maximum a posteriori method
(gp), a bagged version of MARS (mars), a multi-layer perceptron trained using hybrid MCMC (mlp)
and a committee of mixtures of linear experts (me) [11].
set compromised the results, suggesting that for these high dimensional data sets, a large number
of training examples are required; and for the present training sets, each training example carries
information not represented by others. We started with ten experts and found an average of 2, 1 and
2.75 experts for these data sets, respectively. The average number of GMM components for these
data sets are 8.5, 10 and 9.5, respectively, indicating that more GMM components are needed for
modeling higher dimensional inputs. Our results are comparable to and sometimes better than those
of previous methods.
Finally, to test how our active set selection algorithm performs, we conducted a standard test for
sparse GPs: 7168 samples from Pumadyn-32nm were used for training and the remaining 1024
were for testing. The active set size M was varied from 10 to 150. The error was 0.0569 when
M = 10, but quickly reduced to 0.0225, the same as the benchmark error in [7], when M = 25. We
rapidly achieved 0.0196 at M = 50 and the error did not decrease after that. This result is better
than that of [7] and comparable to the best result of [9].
5 Conclusions
We present a new mixture of Gaussian processes model and apply a variational Bayesian method to train it. The proposed algorithm nicely addresses the data multi-modality and training complexity issues of a single Gaussian process. Our method achieved results comparable to previous MCMC-based models on several 2D data sets. One future direction is to compare all algorithms using high-dimensional data so we can draw more meaningful conclusions. However, one clear advantage of our method is that training is much faster. This makes our method more suitable for many real-world applications where speed is critical.
Our active set selection method works well on the Pumadyn-32nm data set, but this test was done in the context of a mixture of GPs. To make a fair comparison to other sparse GPs, we can set L = 1 and also try more data sets. It is worth noting that in the current implementation, the active set size M is fixed for all experts. This could be improved by using a smaller M for an expert with a smaller number of supporting training samples.
Acknowledgments
Thanks to Carl Rasmussen and Christopher Williams for sharing the GPML matlab package.
Appendix

Eq. (4) can be expressed as a weighted sum of all experts, where hyperparameters and parameters are omitted:
$$P(y^*|x^*) = \sum_{l} \sum_{c} P(t^* = l, z^* = c \mid x^*)\, P(y^* \mid x^*, t^* = l). \tag{A-1}$$
The first term in (A-1) is the posterior probability for expert $t^* = l$, and it is the sum over $c$ of
$$P(t^* = l, z^* = c \mid x^*) = \frac{P(x^* \mid t^* = l, z^* = c)\, P(t^* = l, z^* = c)}{\sum_{l'} \sum_{c'} P(x^* \mid t^* = l', z^* = c')\, P(t^* = l', z^* = c')}, \tag{A-2}$$
where $P(t^* = l, z^* = c) = \langle p_l \rangle \langle q_{lc} \rangle$. The second term in (A-1) is the predictive probability for $y^*$ given expert $l$, which is Gaussian.
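A minimal sketch of this computation is given below, assuming the gating term $P(x^* \mid t^*=l, z^*=c)$ is Gaussian and each expert returns a Gaussian predictive mean and variance. All names are illustrative placeholders for the posterior means $\langle p_l \rangle$, $\langle q_{lc} \rangle$ and the fitted gating and expert models.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def predictive_density(y_star, x_star, p, q, gating, experts):
    """P(y*|x*) via (A-1)-(A-2): mix Gaussian expert predictives.

    p[l]         : posterior mean of the expert prior <p_l>
    q[l][c]      : posterior mean of the cluster prior <q_lc>
    gating[l][c] : (mean, cov) of the Gaussian P(x* | t*=l, z*=c)
    experts[l]   : callable x -> (mean, var) of P(y* | x*, t*=l)
    """
    L = len(p)
    # (A-2): unnormalized joint weights, then normalize over all (l, c).
    w = [[multivariate_normal.pdf(x_star, mean=m, cov=c_) * p[l] * q[l][c]
          for c, (m, c_) in enumerate(gating[l])] for l in range(L)]
    total = sum(sum(row) for row in w)
    # (A-1): sum over experts, with per-expert weight summed over clusters c.
    dens = 0.0
    for l in range(L):
        mean, var = experts[l](x_star)
        dens += (sum(w[l]) / total) * norm.pdf(y_star, mean, np.sqrt(var))
    return dens
```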
References
[1] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[2] E. Meeds and S. Osindero. An alternative infinite mixture of Gaussian process experts. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.
[3] L. Xu, M. I. Jordan, and G. E. Hinton. An alternative model for mixtures of experts. In Advances in Neural Information Processing Systems 7. MIT Press, 1995.
[4] N. Ueda and Z. Ghahramani. Bayesian model search for mixture models based on optimizing variational bounds. Neural Networks, 15(10):1223-1241, 2002.
[5] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[6] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[7] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on Artificial Intelligence and Statistics 9, 2003.
[8] S. S. Keerthi and W. Chu. A matching pursuit approach to sparse Gaussian process regression. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.
[9] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.
[10] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixture of local experts. Neural Computation, 3:79-87, 1991.
[11] S. Waterhouse. Classification and regression using mixtures of experts. PhD thesis, Department of Engineering, Cambridge University, 1997.
[12] C. M. Bishop and M. Svensén. Bayesian hierarchical mixtures of experts. In Proc. Uncertainty in Artificial Intelligence, 2003.
[13] V. Tresp. Mixtures of Gaussian processes. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[14] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. J. Royal Stat. Society B, 47(1):1-52, 1985.
[15] C. E. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[16] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Computation, 14(3):641-668, 2002.
Characteristic Kernels on Groups and Semigroups
Kenji Fukumizu
Institute of Statistical Mathematics
4-6-7 Minami-Azabu, Minato-ku, Tokyo 106-8569 Japan
[email protected]

Arthur Gretton
MPI for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
[email protected]

Bharath Sriperumbudur
Department of ECE, UC San Diego / MPI for Biological Cybernetics
[email protected]

Bernhard Schölkopf
MPI for Biological Cybernetics
[email protected]
Abstract
Embeddings of random variables in reproducing kernel Hilbert spaces (RKHSs)
may be used to conduct statistical inference based on higher order moments. For
sufficiently rich (characteristic) RKHSs, each probability distribution has a unique
embedding, allowing all statistical properties of the distribution to be taken into
consideration. Necessary and sufficient conditions for an RKHS to be characteristic exist for Rn . In the present work, conditions are established for an RKHS
to be characteristic on groups and semigroups. Illustrative examples are provided,
including characteristic kernels on periodic domains, rotation matrices, and Rn+ .
1
Introduction
Recent studies have shown that mapping random variables into a suitable reproducing kernel Hilbert
space (RKHS) gives a powerful and straightforward method of dealing with higher-order statistics
of the variables. For sufficiently rich RKHSs, it becomes possible to test whether two samples
are from the same distribution, using the difference in their RKHS mappings [8]; as well as testing
independence and conditional independence [6, 9]. It is also useful to optimize over kernel mappings
on distributions, for instance to find the most predictive subspace in regression [5], or for ICA [1].
Key to the above work is the notion of a characteristic kernel, as introduced in [5, 6]: it gives an
RKHS for which probabilities have unique images (i.e., the mapping is injective). Such RKHSs
are sufficiently rich in the sense required above. Universal kernels on compact metric spaces [16]
are characteristic [8], as are Gaussian and Laplace kernels on Rn [6]. Recently, it has been shown
[14] that a continuous shift-invariant $\mathbb{R}$-valued positive definite kernel on $\mathbb{R}^n$ is characteristic if and only if the support of its Fourier transform is the entire $\mathbb{R}^n$. This completely determines the set of characteristic ones in the convex cone of continuous shift-invariant positive definite kernels on $\mathbb{R}^n$.
One of the chief advantages of kernel methods is that they allow us to deal straightforwardly with
complex domains, through use of a kernel function to determine the similarity between objects in
these domains [13]. A question that naturally arises is whether characteristic kernels can be defined
on spaces besides Rn . Several such domains constitute topological groups/semigroups, and our
focus is on kernels defined by their algebraic structure. Broadly speaking, our approach is based on
extensions of Fourier analysis to groups and semigroups, where we apply appropriate extensions of
Bochner's theorem to obtain the required conditions on the kernel.
The most immediate generalization of the results in [14] is to locally compact Abelian groups, of
which $(\mathbb{R}^n, +)$ is one example. Thus, in Section 2 we provide a review of characteristic kernels on $(\mathbb{R}^n, +)$ from this viewpoint. In Section 3 we derive necessary and sufficient conditions for kernels on locally compact Abelian groups to be characteristic. Besides $(\mathbb{R}^n, +)$, such groups include $[0, 1]^n$
with periodic boundary conditions [13, Section 4.4.4]. We next address non-Abelian compact groups
in Section 4, for which we obtain a sufficient condition for a characteristic kernel. We illustrate with
the example of SO(3), which describes rotations in R3 , and is used in fields such as geophysics
[10] and robotics [15]. Finally, in Section 5, we consider the Abelian semigroup $(\mathbb{R}^n_+, +)$, where $\mathbb{R}_+ = [0, \infty)$. This semigroup has many practical applications, including expressions of nonnegative measures or frequencies on $n$ points [3]. Note that in all cases, we provide specific examples of
characteristic kernels to illustrate the properties required.
2 Preliminaries: Characteristic kernels and shift-invariant kernels

Let $X$ be a random variable taking values on a measurable space $(\Omega, \mathcal{B})$, and $\mathcal{H}$ be a RKHS defined by a measurable kernel $k$ on $\Omega$ such that $E[\sqrt{k(X, X)}] < \infty$. The mean element $m_X$ of $X$ is defined by the element in $\mathcal{H}$ such that $\langle m_X, f \rangle_{\mathcal{H}} = E[f(X)]$ $(\forall f \in \mathcal{H})$ (see [6, 7]). By plugging $f = k(\cdot, y)$ into the definition, the explicit functional form of $m_X$ is given by $m_X(y) = E[k(y, X)]$.

A bounded measurable kernel $k$ on $\Omega$ is called characteristic if
$$\{P : \text{probability on } (\Omega, \mathcal{B})\} \to \mathcal{H}, \qquad P \mapsto m_P = E_{X \sim P}[k(\cdot, X)] \tag{1}$$
is injective ([5, 6]). Therefore, by definition, a characteristic kernel uniquely determines a probability by its mean element. This property is important in making inference on properties of distributions. It guarantees, for example, that $\mathrm{MMD} = \|m_X - m_Y\|_{\mathcal{H}}$ is a (strict) distance on the space of probabilities on $\Omega$ [8]. The following result provides the necessary and sufficient condition for a kernel to be characteristic and shows its associated RKHS to be a rich function class.

Lemma 1 ([7] Prop. 5). Let $(\Omega, \mathcal{B})$ be a measurable space, $k$ be a bounded measurable positive definite kernel on $\Omega$, and $\mathcal{H}$ be the associated RKHS. Then, $k$ is characteristic if and only if $\mathcal{H} + \mathbb{R}$ (the direct sum of the two RKHSs) is dense in $L^2(P)$ for every probability $P$ on $(\Omega, \mathcal{B})$.

The above lemma and Theorem 3 of [6] imply that characteristic kernels give a criterion of (conditional) independence through (conditional) covariance on an RKHS, which enables statistical tests of independence with kernels [6]. This also explains the practical importance of characteristic kernels.
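For finite samples, this distance is typically estimated from Gram matrices. Below is a minimal sketch of the (biased) empirical MMD with a Gaussian kernel, an illustrative characteristic kernel on $\mathbb{R}^n$; the bandwidth is an arbitrary assumption.

```python
import numpy as np

def gaussian_gram(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased estimate of MMD^2 = ||m_X - m_Y||_H^2 from samples X and Y.
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())

rng = np.random.default_rng(0)
P1 = rng.normal(0.0, 1.0, size=(500, 1))
P2 = rng.normal(0.0, 1.0, size=(500, 1))   # same distribution as P1
Q = rng.normal(0.5, 1.0, size=(500, 1))    # shifted distribution
print(mmd2_biased(P1, P2), mmd2_biased(P1, Q))  # near zero vs clearly larger
```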
The following result shows that the characteristic property is invariant under some conformal mappings introduced in [17], and provides a construction to generate new characteristic kernels.

Lemma 2. Let $\Omega$ be a topological space with Borel $\sigma$-field, $k$ be a measurable positive definite kernel on $\Omega$ such that $\int_\Omega k(\cdot, y)\, d\mu(y) = 0$ means $\mu = 0$ for a finite Borel measure $\mu$, and $f : \Omega \to \mathbb{C}$ be a bounded continuous function such that $f(x) > 0$ for all $x \in \Omega$ and $k(x, x)|f(x)|^2$ is bounded. Then, the kernel $\tilde{k}(x, y) = f(x)\, k(x, y)\, f(y)$ is characteristic.

Proof. Let $P$ and $Q$ be Borel probabilities such that $\int \tilde{k}(\cdot, x)\, dP(x) = \int \tilde{k}(\cdot, x)\, dQ(x)$. We have $\int k(\cdot, x) f(x)\, d(P - Q)(x) = 0$, which means $fP = fQ$. We have $P = Q$ by the positivity and continuity of $f$.
We will focus on spaces with algebraic structure for a better description of characteristic kernels. Let $G$ be a group. A function $\phi : G \to \mathbb{C}$ is called positive definite if $k(x, y) = \phi(y^{-1} x)$ is a positive definite kernel. We call this type of positive definite kernel shift-invariant, because $k(zx, zy) = \phi((zy)^{-1} zx) = \phi(y^{-1} x) = k(x, y)$ for any $z \in G$.

There are many examples of shift-invariant positive definite kernels on the additive group $\mathbb{R}^n$: the Gaussian RBF kernel $k(x, y) = \exp(-\|x - y\|^2 / \sigma^2)$ and the Laplacian kernel $k(x, y) = \exp(-\alpha \sum_{i=1}^n |x_i - y_i|)$ are famous ones. In the case of $\mathbb{R}^n$, the following Bochner's theorem is well known:
Theorem 3 (Bochner). Let $\phi : \mathbb{R}^n \to \mathbb{C}$ be a continuous function. $\phi$ is positive definite if and only if there is a unique finite non-negative Borel measure $\Lambda$ on $\mathbb{R}^n$ such that
$$\phi(x) = \int_{\mathbb{R}^n} e^{\sqrt{-1}\, x^T \omega}\, d\Lambda(\omega). \tag{2}$$
Bochner's theorem completely characterizes the set of continuous shift-invariant positive definite kernels on $\mathbb{R}^n$ by the Fourier transform. It also implies that the continuous positive definite functions form a convex cone with the extreme points given by the Fourier kernels $\{e^{\sqrt{-1}\, x^T \omega} \mid \omega \in \mathbb{R}^n\}$.
It is interesting to determine the class of continuous shift-invariant "characteristic" kernels on $\mathbb{R}^n$. [14] gives a complete solution: if $\mathrm{supp}(\Lambda) = \mathbb{R}^n$,¹ then $\phi(x - y)$ is characteristic. In addition, if a continuous positive definite function of the form in Eq. (2) is real-valued and characteristic, then $\mathrm{supp}(\Lambda) = \mathbb{R}^n$. The basic idea is the following: since the mean element $E_P[\phi(y - X)]$ is equal to the convolution $\phi * P$, the Fourier transform rewrites the definition of the characteristic property as
$$(\hat{P} - \hat{Q})\hat{\phi} = 0 \implies P = Q,$$
where $\hat{\cdot}$ denotes the Fourier transform, and we use $\widehat{\phi * P} = \hat{\phi}\hat{P}$. Hence, it is natural to expect that if $\hat{\phi}$ is everywhere positive, then $(\hat{P} - \hat{Q})$ must be zero, which means $P = Q$.

We will extend these results to more general algebraic objects, such as groups and semigroups, on which Fourier analysis and Bochner's theorem can be extended.
3 Characteristic kernels on locally compact Abelian groups

It is known that most of the results on Fourier analysis for $\mathbb{R}^n$ extend to any locally compact Abelian (LCA) group, which is an Abelian (i.e. commutative) topological group with a Hausdorff and locally compact topology. The basic terminology is provided in the supplementary material for readers who are not familiar with it. The group operation is denoted by "+" in Abelian cases.

Hereafter, for a LCA group $G$, we consider only the probability measures included in the set of finite regular measures $M(G)$ (see Supplements) to discuss the characteristic property. This slightly restricts the class of measures, but removes only pathological ones.
3.1 Fourier analysis on LCA groups

We briefly summarize the results needed for our main theorems. For details, see [12, 11].

For a LCA group $G$, there exists a non-negative regular measure $m$ on $G$ such that $m(E + x) = m(E)$ for every $x \in G$ and every Borel set $E$ in $G$. This measure is called the Haar measure. We use $dx$ to denote the Haar measure of $G$. With the Haar measure, the integral is shift-invariant, that is,
$$\int_G f(x + y)\, dx = \int_G f(x)\, dx \qquad (\forall y \in G).$$
The space $L^p(G, dx)$ is simply denoted by $L^p(G)$.
A function $\chi : G \to \mathbb{C}$ is called a character of $G$ if $\chi(x + y) = \chi(x)\chi(y)$ and $|\chi(x)| = 1$. The set of all continuous characters of $G$ forms an Abelian group with the operation $(\chi_1 \chi_2)(x) = \chi_1(x)\chi_2(x)$. By convention, the group operation is denoted by addition "+", instead of multiplication; i.e., $(\chi_1 + \chi_2)(x) = \chi_1(x)\chi_2(x)$. This group is called the dual group of $G$, and denoted by $\hat{G}$.

For any $x \in G$, the function $\hat{x}$ on $\hat{G}$ given by $\hat{x}(\chi) = \chi(x)$ $(\chi \in \hat{G})$ defines a character of $\hat{G}$. It is known that $\hat{G}$ is a LCA group if the weakest topology is introduced so that $\hat{x}$ is continuous for each $x \in G$. We can therefore consider the dual of $\hat{G}$, denoted by $\hat{\hat{G}}$, and the group homomorphism
$$G \to \hat{\hat{G}}, \qquad x \mapsto \hat{x}.$$
The Pontryagin duality guarantees that this homomorphism is an isomorphism and a homeomorphism; thus $\hat{\hat{G}}$ can be identified with $G$. In view of the duality, it is customary to write $(x, \chi) := \chi(x)$. We have $(-x, \chi) = (x, -\chi) = \chi(x)^{-1} = \overline{(x, \chi)}$, where $\bar{z}$ is the complex conjugate of $z$.
Let $f \in L^1(G)$ and $\mu \in M(G)$. The Fourier transforms of $f$ and $\mu$ are respectively defined by
$$\hat{f}(\chi) = \int_G (-x, \chi)\, f(x)\, dx, \qquad \hat{\mu}(\chi) = \int_G (-x, \chi)\, d\mu(x), \qquad (\chi \in \hat{G}). \tag{3}$$
Let $f \in L^\infty(G)$, $g \in L^1(G)$, and $\mu, \nu \in M(G)$. The convolutions are defined respectively by
$$(g * f)(x) = \int_G f(x - y)\, g(y)\, dy, \quad (\mu * f)(x) = \int_G f(x - y)\, d\mu(y), \quad (\mu * \nu)(E) = \int_G \chi_E(x + y)\, d\mu(x)\, d\nu(y).$$
¹ For a finite regular measure, there is a largest open set $U$ with $\mu(U) = 0$. The complement of $U$ is called the support of $\mu$, and denoted by $\mathrm{supp}(\mu)$. See the supplementary material for the details.
$g * f$ is uniformly continuous on $G$. For any $f, g \in L^1(G)$ and $\mu, \nu \in M(G)$, we have the formulas
$$\widehat{f * g} = \hat{f}\hat{g}, \qquad \widehat{\mu * f} = \hat{\mu}\hat{f}, \qquad \widehat{\mu * \nu} = \hat{\mu}\hat{\nu}. \tag{4}$$
The following facts are basic ([12], Section 1.3).

Proposition 4. For $\mu \in M(G)$, the Fourier transform $\hat{\mu}$ is bounded and uniformly continuous.

Theorem 5 (Uniqueness theorem). If $\mu \in M(G)$ satisfies $\hat{\mu} = 0$, then $\mu = 0$.

It is known that the dual group of the LCA group $\mathbb{R}^n$ is $\{e^{\sqrt{-1}\, \omega^T x} \mid \omega \in \mathbb{R}^n\}$, which can be identified with $\mathbb{R}^n$. The above definition and properties of the Fourier transform for LCA groups are extensions of the ordinary Fourier transform for $\mathbb{R}^n$. Bochner's theorem can also be extended.

Theorem 6 (Bochner's theorem; e.g., [12] Section 1.4.3). A continuous function $\phi$ on $G$ is positive definite if and only if there is a unique non-negative measure $\Lambda \in M(\hat{G})$ such that
$$\phi(x) = \int_{\hat{G}} (x, \chi)\, d\Lambda(\chi) \qquad (x \in G). \tag{5}$$
3.2 Shift-invariant characteristic kernels on LCA groups

Based on Bochner's theorem, a sufficient condition for the characteristic property is obtained.

Theorem 7. Let $\phi$ be a continuous positive definite function on a LCA group $G$ given by Eq. (5) with $\Lambda$. If $\mathrm{supp}(\Lambda) = \hat{G}$, then the positive definite kernel $k(x, y) = \phi(x - y)$ is characteristic.

Proof. It suffices to prove that if $\mu \in M(G)$ satisfies $\mu * \phi = 0$ then $\mu = 0$. We have $\int_G (\phi * \mu)(x)\, d\bar{\mu}(x) = 0$. On the other hand, by using Fubini's theorem,
$$\int_G (\phi * \mu)(x)\, d\bar{\mu}(x) = \int_G \int_G \phi(x - y)\, d\mu(y)\, d\bar{\mu}(x) = \int_G \int_G \int_{\hat{G}} (x - y, \chi)\, d\Lambda(\chi)\, d\mu(y)\, d\bar{\mu}(x)$$
$$= \int_{\hat{G}} \int_G (x, \chi)\, d\bar{\mu}(x) \int_G (-y, \chi)\, d\mu(y)\, d\Lambda(\chi) = \int_{\hat{G}} |\hat{\mu}(\chi)|^2\, d\Lambda(\chi).$$
Since $\hat{\mu}$ is continuous and $\mathrm{supp}(\Lambda) = \hat{G}$, we have $\hat{\mu} = 0$, which means $\mu = 0$ by Theorem 5.
In real-valued cases, the condition $\mathrm{supp}(\Lambda) = \hat{G}$ is almost necessary.

Theorem 8. Let $\phi$ be an $\mathbb{R}$-valued continuous positive definite function on a LCA group $G$ given by Eq. (5) with $\Lambda$. The kernel $\phi(x - y)$ is characteristic if and only if (i) $0 \in \hat{G}$ is not open and $\mathrm{supp}(\Lambda) = \hat{G}$, or (ii) $0 \in \hat{G}$ is open and $\mathrm{supp}(\Lambda) \supset \hat{G} \setminus \{0\}$. The case (ii) occurs if $G$ is compact.

Proof. It suffices to prove the only-if part. Assume $k(x, y) = \phi(x - y)$ is characteristic. It is obvious that $k$ is characteristic if and only if so is $k(x, y) + 1$. Thus, we can assume $0 \in \mathrm{supp}(\Lambda)$. Suppose $\mathrm{supp}(\Lambda) \neq \hat{G}$. Since $\phi$ is real-valued, $\Lambda(-E) = \Lambda(E)$ for every Borel set $E$. Thus $U := \hat{G} \setminus \mathrm{supp}(\Lambda)$ is a non-empty open set, with $-U = U$, and $0 \notin U$ by assumption. Let $\chi_0 \in U$ and $\Phi : \hat{G} \times \hat{G} \to \hat{G}$, $(\chi_1, \chi_2) \mapsto \chi_1 - \chi_2$. Take an open neighborhood $W$ of $0$ in $\hat{G}$ with compact closure such that $W \times W \subset \Phi^{-1}(U - \chi_0)$. Then, $(W + (-W) + \chi_0) \cup (W + (-W) - \chi_0) \subset U$.

Let $g = \chi_W * \chi_{-W}$, where $\chi_E$ denotes the indicator function of a set $E$. $g$ is continuous, and $\mathrm{supp}(g) \subset \mathrm{cl}(W + (-W))$. Also, $g$ is positive definite, since
$$\sum_{i,j} c_i \bar{c}_j g(x_i - x_j) = \sum_{i,j} c_i \bar{c}_j \int_G \chi_W(x_i - x_j - y)\, \chi_{-W}(y)\, dy = \sum_{i,j} c_i \bar{c}_j \int_G \chi_W(x_i - y)\, \chi_{-W}(y - x_j)\, dy = \int_G \Big|\sum_i c_i \chi_W(x_i - y)\Big|^2 dy \ge 0.$$
By Bochner's theorem and the Pontryagin duality, there is a non-negative measure $\nu \in M(G)$ such that
$$g(\chi) = \int_G (x, \chi)\, d\nu(x) \qquad (\chi \in \hat{G}).$$
It follows that $g(\chi - \chi_0) + g(\chi + \chi_0) = \int_G \{(x, \chi - \chi_0) + (x, \chi + \chi_0)\}\, d\nu(x) = \int_G (x, \chi)\, d((\bar{\chi}_0 + \chi_0)\nu)(x)$.

Since $\mathrm{supp}(g) \subset \mathrm{cl}(W + (-W))$, the left-hand side is non-zero only in $(W + (-W) + \chi_0) \cup (W + (-W) - \chi_0) \subset U$, which does not contain $0$. Thus, by setting $\chi = 0$, we have
$$((\bar{\chi}_0 + \chi_0)\nu)(G) = 0. \tag{6}$$
The measure $(\bar{\chi}_0 + \chi_0)\nu$ is real-valued, and non-zero since the function $g(\chi - \chi_0) + g(\chi + \chi_0)$ is not constant zero. Let $m = |(\bar{\chi}_0 + \chi_0)\nu|(G)$, and define the non-negative measures
$$\mu_1 = |(\bar{\chi}_0 + \chi_0)\nu|/m, \qquad \mu_2 = \{|(\bar{\chi}_0 + \chi_0)\nu| - (\bar{\chi}_0 + \chi_0)\nu\}/m.$$
Both $\mu_1$ and $\mu_2$ are probability measures on $G$ by Eq. (6), and $\mu_1 \neq \mu_2$. From Fubini's theorem,
$$m \cdot ((\mu_1 - \mu_2) * \phi)(x) = \int_G \phi(x - y)(\bar{\chi}_0(y) + \chi_0(y))\, d\nu(y)$$
$$= \int_{\hat{G}} (x, \chi) \int_G \{(y, \chi - \chi_0) + (y, \chi + \chi_0)\}\, d\nu(y)\, d\Lambda(\chi) = \int_{\hat{G}} (x, \chi)\{g(\chi - \chi_0) + g(\chi + \chi_0)\}\, d\Lambda(\chi).$$
Since the integrand is zero on $\mathrm{supp}(\Lambda)$, we have $(\mu_1 - \mu_2) * \phi = 0$, which derives a contradiction.

The last assertion is obvious, since $\hat{G}$ is discrete if and only if $G$ is compact [12, Sec. 1.7.3].
Theorems 7 and 8 are generalizations of the results in [14]. From Theorem 8, we can see that the characteristic property is stable under products for shift-invariant kernels.

Corollary 9. Let $\phi_1(x - y)$ and $\phi_2(x - y)$ be $\mathbb{R}$-valued continuous shift-invariant characteristic kernels on a LCA group $G$, and suppose (i) $G$ is non-compact, or (ii) $G$ is compact and $2\chi \neq 0$ for any nonzero $\chi \in \hat{G}$. Then $(\phi_1 \phi_2)(x - y)$ is characteristic.

Proof. We show the proof only for (i). Let $\Lambda_1, \Lambda_2$ be the non-negative measures giving $\phi_1$ and $\phi_2$, respectively, in Eq. (5). By Theorem 8, $\mathrm{supp}(\Lambda_1) = \mathrm{supp}(\Lambda_2) = \hat{G}$. This means $\mathrm{supp}(\Lambda_1 * \Lambda_2) = \hat{G}$. The proof is completed because $\Lambda_1 * \Lambda_2$ gives the positive definite function $\phi_1 \phi_2$.
Example 1 ($(\mathbb{R}^n, +)$): As already shown in [6, 14], the Gaussian RBF kernel $\exp(-\frac{1}{2\sigma^2}\|x - y\|^2)$ and the Laplacian kernel $\exp(-\alpha \sum_{i=1}^n |x_i - y_i|)$ are characteristic on $\mathbb{R}^n$. An example of a positive definite kernel that is not characteristic on $\mathbb{R}^n$ is $\mathrm{sinc}(x - y) = \frac{\sin(x - y)}{x - y}$.
Example 2 ($([0, 2\pi), +)$): The addition is made modulo $2\pi$. The dual group is $\{e^{\sqrt{-1}\, n x} \mid n \in \mathbb{Z}\}$, which is isomorphic to $\mathbb{Z}$. The Fourier transform is equal to the ordinary Fourier expansion. The following are examples of characteristic kernels given by the expression
$$\phi(x) = \sum_{n=-\infty}^{\infty} a_n e^{\sqrt{-1}\, n x}, \qquad a_0 \ge 0, \quad a_n > 0\ (n \neq 0), \quad \sum_{n=0}^{\infty} a_n < \infty.$$
(1) $a_0 = \pi^2/3$, $a_n = 2/n^2$ $(n \neq 0)$: $\quad k_1(x, y) = (\pi - (x - y)_{\mathrm{mod}\, 2\pi})^2$.
(2) $a_0 = 1/2$, $a_n = 1/(1 + n^2)$ $(n \neq 0)$: $\quad k_2(x, y) = \cosh(\pi - (x - y)_{\mathrm{mod}\, 2\pi})$.
(3) $a_0 = 0$, $a_n = \rho^{|n|}/|n|$ $(n \neq 0)$, $(|\rho| < 1)$: $\quad k_3(x, y) = -\log(1 - 2\rho\cos(x - y) + \rho^2)$.
(4) $a_n = \rho^{|n|}$, $(0 < \rho < 1)$: $\quad k_4(x, y) = 1/(1 - 2\rho\cos(x - y) + \rho^2)$ (Poisson kernel).

Examples of non-characteristic kernels on $[0, 2\pi)$ include $\cos(x - y)$, and the Fejér and Dirichlet kernels.
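The closed form in (1) can be sanity-checked numerically: truncating the Fourier series $\pi^2/3 + \sum_{n \ge 1}(4/n^2)\cos(nu)$ should converge to $(\pi - u\ \mathrm{mod}\ 2\pi)^2$. The sketch below is such a check, not part of the paper.

```python
import numpy as np

def k1_series(u, n_terms=2000):
    # Truncated series: pi^2/3 + sum_{n>=1} (4/n^2) cos(n u).
    n = np.arange(1, n_terms + 1)
    return np.pi**2 / 3 + (4.0 / n**2 * np.cos(np.outer(u, n))).sum(axis=1)

def k1_closed(u):
    # Closed form (pi - (u mod 2pi))^2 for the shift u = x - y.
    return (np.pi - np.mod(u, 2.0 * np.pi)) ** 2

u = np.linspace(0.0, 2.0 * np.pi, 9, endpoint=False)
print(np.max(np.abs(k1_series(u) - k1_closed(u))))  # ~1e-3, shrinking with n_terms
```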
4 Characteristic kernels on compact groups

We discuss non-Abelian cases in this section. Non-Abelian groups include various matrix groups, such as $SO(3) = \{A \in M(3 \times 3; \mathbb{R}) \mid A^T A = I_3,\ \det A = 1\}$, which represents rotations in $\mathbb{R}^3$. $SO(3)$ is used in practice as the data space for rotational data, which appear in many fields such as geophysics [10] and robotics [15]. Providing useful positive definite kernels on this class is important in those application areas. First, we give a brief summary of known results on Fourier analysis on locally compact and compact groups. See [11, 4] for the details.
4.1 Unitary representations and Fourier analysis

Let $G$ be a locally compact group, which may not be Abelian. A unitary representation $(T, H)$ of $G$ is a group homomorphism $T$ into the group $U(H)$ of unitary operators on some nonzero Hilbert space $H$, that is, a map $T : G \to U(H)$ that satisfies $T(xy) = T(x)T(y)$ and $T(x^{-1}) = T(x)^{-1} = T(x)^*$, and for which $x \mapsto T(x)u$ is continuous from $G$ to $H$ for any $u \in H$.

For a unitary representation $(T, H)$ of a locally compact group $G$, a subspace $V$ of $H$ is called $G$-invariant if $T(x)V \subset V$ for every $x \in G$. A unitary representation $(T, H)$ is irreducible if there are no closed $G$-invariant subspaces except $\{0\}$ and $H$. Unitary representations $(T_1, H_1)$ and $(T_2, H_2)$ are said to be equivalent if there is a unitary isomorphism $A : H_1 \to H_2$ such that $T_1 = A^{-1} T_2 A$. The following facts are basic (e.g., [4], Sections 3.1, 5.1).

Theorem 10. (i) If $G$ is a compact group, every irreducible unitary representation $(T, H)$ of $G$ is finite dimensional, that is, $H$ is finite dimensional. (ii) If $G$ is an Abelian group, every irreducible unitary representation of $G$ is one dimensional; these are the continuous characters of $G$.
It is possible to extend Fourier analysis to locally compact non-Abelian groups. Unlike the Abelian case, a Fourier transform by characters is not possible; instead, we need to consider unitary representations and an operator-valued Fourier transform. Since extending the results of the LCA case to general groups involves very complicated topology, we focus on compact groups. Also, for simplicity, we assume that $G$ is second countable, i.e., there is a countable open basis on $G$.

We define $\hat{G}$ to be the set of equivalence classes of irreducible unitary representations of a compact group $G$. The equivalence class of a unitary representation $(T, H_T)$ is denoted by $[T]$, and the dimensionality of $H_T$ by $d_T$. We fix a representative $T$ for every $[T] \in \hat{G}$.

It is known that on a compact group $G$ there is a Haar measure $m$, which is a left- and right-invariant non-negative finite measure. We normalize it so that $m(G) = 1$ and denote it by $dx$.

Let $(T, H_T)$ be a unitary representation. For $f \in L^1(G)$ and $\mu \in M(G)$, the Fourier transforms of $f$ and $\mu$ are defined by the "operator-valued" functions on $\hat{G}$,
$$\hat{f}(T) = \int_G f(x)\, T(x^{-1})\, dx = \int_G f(x)\, T(x)^*\, dx, \qquad \hat{\mu}(T) = \int_G T(x^{-1})\, d\mu(x) = \int_G T(x)^*\, d\mu(x),$$
respectively. These are operators on $H_T$. This is a natural extension of the Fourier transform on LCA groups, where $\hat{G}$ is the set of characters serving as the Fourier kernel in view of Theorem 10.
We can define the "inverse Fourier transform". Let $A_T$ ($[T] \in \hat{G}$) be an operator on $H_T$. The series
$$\sum_{[T] \in \hat{G}} d_T\, \mathrm{Tr}[A_T\, T(x)] \tag{7}$$
is said to be absolutely convergent if $\sum_{[T] \in \hat{G}} d_T\, \mathrm{Tr}[|A_T|] < \infty$, where $|A| = \sqrt{A^* A}$. It is obvious that if the above series is absolutely convergent, the convergence is uniform on $G$. It is known that if $G$ is second countable, $\hat{G}$ is at most countable, thus the sum is taken over a countable set.

Bochner's theorem can be extended to compact groups as follows [11, Section 34.10].

Theorem 11. A continuous function $\phi$ on a compact group $G$ is positive definite if and only if the Fourier transform $\hat{\phi}(T)$ is positive semidefinite, gives an absolutely convergent series Eq. (7), and
$$\phi(x) = \sum_{[T] \in \hat{G}} d_T\, \mathrm{Tr}[\hat{\phi}(T)\, T(x)]. \tag{8}$$
The proof of the "if" part is easy; in fact,
$$\sum_{i,j} c_i \bar{c}_j\, \phi(x_j^{-1} x_i) = \sum_{i,j} c_i \bar{c}_j \sum_{[T] \in \hat{G}} d_T\, \mathrm{Tr}[\hat{\phi}(T)\, T(x_j^{-1} x_i)] = \sum_{[T]} d_T \sum_{i,j} c_i \bar{c}_j\, \mathrm{Tr}[T(x_i)\, \hat{\phi}(T)\, T(x_j)^*]$$
$$= \sum_{[T]} d_T\, \mathrm{Tr}\Big[\Big(\sum_i c_i T(x_i)\Big)\, \hat{\phi}(T)\, \Big(\sum_j c_j T(x_j)\Big)^*\Big] \ge 0.$$
4.2 Shift-invariant characteristic kernels on compact groups

We have the following sufficient condition for the characteristic property on compact groups.

Theorem 12. Let $\phi$ be a positive definite function of the form Eq. (8) on a compact group $G$. If $\hat{\phi}(T)$ is strictly positive definite for every $[T] \in \hat{G} \setminus \{1\}$, the kernel $\phi(y^{-1} x)$ is characteristic.

Proof. Let $P, Q \in M(G)$ be probabilities on $G$. Define $\mu = P - Q$, and suppose $\int_G \phi(y^{-1} x)\, d\mu(y) = 0$. If we take the integral over $x$ with the measure $\bar{\mu}$, Fubini's theorem shows
$$0 = \int_G \int_G \sum_{[T]} d_T\, \mathrm{Tr}[\hat{\phi}(T)\, T(y^{-1} x)]\, d\mu(y)\, d\bar{\mu}(x) = \sum_{[T]} d_T\, \mathrm{Tr}\big[\hat{\phi}(T)\, \hat{\mu}(T)\, \hat{\mu}(T)^*\big].$$
Since $d_T > 0$ and $\hat{\phi}(T)$ is strictly positive, $\hat{\mu}(T) = 0$ for every $[T] \in \hat{G}$, that is, $\int_G T(x)^*\, d\mu(x) = O$. If we fix an orthonormal basis of $H_T$ and express $T(x)$ by its matrix elements $T_{ij}(x)$, we have
$$\int_G \overline{T_{ij}(x)}\, d\mu(x) = 0 \qquad (\forall [T] \in \hat{G},\ i, j = 1, \ldots, d_T).$$
The Peter-Weyl theorem (e.g., [4, Section 5.2]) shows that $\{\sqrt{d_T}\, T_{ij}(x) \mid [T] \in \hat{G},\ i, j = 1, \ldots, d_T\}$ is a complete orthonormal basis of $L^2(G)$, which means $\mu = 0$.
It is interesting to ask whether Theorem 8 can be extended to compact groups. The same proof does not apply, however, because application of Bochner's theorem to a positive definite function on $\hat{G}$ is not possible due to the lack of duality.
Example of SO(3). It is known that $\widehat{SO(3)}$ consists of $(T_n, H_n)$ $(n = 0, 1, 2, \ldots)$, where $d_{T_n} = 2n + 1$. We omit the explicit form of $T_n$, which is known (e.g., [4], Section 5.4), but use the character defined by $\chi_n(x) = \mathrm{Tr}[T_n(x)]$. It is known that $\chi_n$ is given by
$$\chi_n(A) = \frac{\sin((2n + 1)\theta)}{\sin \theta} \qquad (n = 0, 1, 2, \ldots),$$
where $e^{\pm\sqrt{-1}\,\theta}$ $(0 \le \theta \le \pi)$ are the eigenvalues of $A$, i.e., $\cos \theta = \frac{1}{2}(\mathrm{Tr}[A] - 1)$. Since plugging $\hat{\phi}(T_n) = a_n I_d$ into Eq. (8) yields $a_n \chi_n$ for each term, we see that a sequence $\{a_n\}_{n=0}^{\infty}$ such that $a_0 \ge 0$, $a_n > 0$ $(n \ge 1)$, and $\sum_{n=0}^{\infty} a_n (2n + 1)^2 < \infty$ defines a characteristic positive definite kernel on $SO(3)$ by
$$k(A, B) = \sum_{n=0}^{\infty} (2n + 1)\, a_n\, \frac{\sin((2n + 1)\theta)}{\sin \theta} \qquad \Big(\cos \theta = \tfrac{1}{2}(\mathrm{Tr}[B^{-1} A] - 1),\quad 0 \le \theta \le \pi\Big).$$
Some examples are listed below ($\rho$ is a parameter such that $|\rho| < 1$).
$$\text{(1)}\ a_n = \frac{1}{(2n + 1)^4}: \qquad k_1(A, B) = \frac{1}{\sin \theta} \sum_{n=0}^{\infty} \frac{\sin((2n + 1)\theta)}{(2n + 1)^3} = \frac{\theta\pi(\pi - \theta)}{8 \sin \theta}.$$
$$\text{(2)}\ a_n = \frac{\rho^{2n+1}}{(2n + 1)^2}: \qquad k_2(A, B) = \sum_{n=0}^{\infty} \frac{\rho^{2n+1} \sin((2n + 1)\theta)}{(2n + 1) \sin \theta} = \frac{1}{2 \sin \theta} \arctan \frac{2\rho \sin \theta}{1 - \rho^2}.$$
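The sketch below evaluates $k_1$ directly from the closed form, recovering the rotation angle from the trace ($B^{-1} = B^T$ for rotation matrices). The handling of the $\sin\theta \to 0$ limit (where the expression tends to $\pi^2/8$) is our own numerical guard, not from the paper.

```python
import numpy as np

def rotation_angle(A, B):
    # theta in [0, pi] with cos(theta) = (Tr(B^T A) - 1) / 2 for A, B in SO(3).
    c = 0.5 * (np.trace(B.T @ A) - 1.0)
    return np.arccos(np.clip(c, -1.0, 1.0))

def k1_so3(A, B):
    # k1(A, B) = theta*pi*(pi - theta) / (8 sin theta); limit pi^2/8 at theta in {0, pi}.
    theta = rotation_angle(A, B)
    s = np.sin(theta)
    if s < 1e-9:
        return np.pi**2 / 8.0
    return theta * np.pi * (np.pi - theta) / (8.0 * s)

def rot_z(t):  # rotation by angle t about the z-axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

print(k1_so3(rot_z(0.3), rot_z(1.0)))  # depends only on the relative angle 0.7
```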
5 Characteristic kernels on the semigroup $\mathbb{R}^n_+$
In this section, we consider kernels on an Abelian semigroup $(S, +)$. In this case, a kernel based on the semigroup structure is defined by $k(x, y) = \phi(x + y)$. For an Abelian semigroup $(S, +)$, a semicharacter is defined as a map $\rho : S \to \mathbb{C}$ such that $\rho(x + y) = \rho(x)\rho(y)$.

While extensions of Bochner's theorem are known for semigroups [2], the topology on the set of semicharacters is not as obvious as for LCA groups, and a straightforward extension of the results in Section 3 is difficult. We focus only on the semigroup $(\mathbb{R}^n_+, +)$, where $\mathbb{R}_+ = [0, \infty)$. This semigroup has many practical applications in data analysis, including expressions of nonnegative measures or frequencies on $n$ points [3]. For $\mathbb{R}^n_+$, it is easy to see that the bounded continuous semicharacters are given by $\{\prod_{i=1}^n e^{-\lambda_i x_i} \mid \lambda_i \ge 0\ (i = 1, \ldots, n)\}$ [2, Section 4.4].

For $\mathbb{R}^n_+$, the Laplace transform replaces the Fourier transform in Bochner's theorem.

Theorem 13 ([2], Section 4.4). Let $\phi$ be a bounded continuous function on $\mathbb{R}^n_+$. $\phi$ is positive definite if and only if there exists a unique non-negative measure $\Lambda \in M(\mathbb{R}^n_+)$ such that
$$\phi(x) = \int_{\mathbb{R}^n_+} e^{-\sum_{i=1}^n t_i x_i}\, d\Lambda(t) \qquad (\forall x \in \mathbb{R}^n_+). \tag{9}$$
Based on the above theorem, we have the following sufficient condition for the characteristic property.

Theorem 14. Let $\phi$ be a positive definite function given by Eq. (9). If $\mathrm{supp}\,\Lambda = \mathbb{R}^n_+$, then the positive definite kernel $k(x, y) = \phi(x + y)$ is characteristic.

Proof. Let $P$ and $Q$ be probabilities on $\mathbb{R}^n_+$, and $\mu = P - Q$. Define the Laplace transform by $\mathcal{L}\mu(t) = \int_{\mathbb{R}^n_+} e^{-\sum_{i=1}^n t_i x_i}\, d\mu(x)$. It is easy to see $\mathcal{L}\mu$ is bounded and continuous on $\mathbb{R}^n_+$. Suppose $\int \phi(x + y)\, d\mu(y) = 0$ for all $x \in \mathbb{R}^n_+$. In exactly the same way as in the proof of Theorem 7, we have $\mathcal{L}P = \mathcal{L}Q$. By the uniqueness part of Theorem 13, we conclude $P = Q$.
We show some examples of characteristic kernels on $(\mathbb{R}^n_+, +)$. Let $a = (a_i)_{i=1}^n$ and $b = (b_i)_{i=1}^n$ $(a_i \ge 0,\ b_i \ge 0)$ be non-negative measures on $n$ points.
$$\text{(1)}\ \Lambda = \prod_{i=1}^n t_i^{\alpha-1} e^{-t_i}\ (\alpha > 0): \qquad k_1(a, b) = \prod_{i=1}^n (a_i + b_i + 1)^{-\alpha}.$$
$$\text{(2)}\ \Lambda = \prod_{i=1}^n t_i^{-3/2} e^{-\beta^2/(4 t_i)}\ (\beta > 0): \qquad k_2(a, b) = e^{-\beta \sum_{i=1}^n \sqrt{a_i + b_i}}.$$
Since the proof of Theorem 14 shows that $\int \phi(x + y)\, d\mu(y) = 0$ means $\mu = 0$ for $\mu \in M(\mathbb{R}^n_+)$, Lemma 2 shows that
$$\tilde{k}_2(a, b) = \exp\Big(-\beta \Big[\sum_{i=1}^n \sqrt{(a_i + b_i)/2} - \Big(\sum_{i=1}^n \sqrt{a_i} + \sum_{i=1}^n \sqrt{b_i}\Big)\Big/ 2\Big]\Big)$$
is also characteristic. The exponent has the form $h\big(\frac{a+b}{2}\big) - \frac{h(a) + h(b)}{2}$ with $h(c) = \sum_{i=1}^n \sqrt{c_i}$, which compares the value of $h$ of the merged measure $(a + b)/2$ with the average of $h(a)$ and $h(b)$. This type of kernel on non-negative measures is discussed in [3] in connection with semigroup structure.
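Evaluating these kernels is direct; a minimal sketch ($\beta$ is an arbitrary illustrative value):

```python
import numpy as np

def k2(a, b, beta=1.0):
    # k2(a, b) = exp(-beta * sum_i sqrt(a_i + b_i)) on R^n_+.
    return float(np.exp(-beta * np.sqrt(a + b).sum()))

def k2_tilde(a, b, beta=1.0):
    # Normalized variant from Lemma 2; note k2_tilde(a, a) = 1 exactly.
    h = lambda c: np.sqrt(c).sum()
    return float(np.exp(-beta * (h((a + b) / 2.0) - (h(a) + h(b)) / 2.0)))

a = np.array([0.2, 0.5, 1.3])
b = np.array([0.1, 0.9, 0.4])
print(k2(a, b), k2_tilde(a, b), k2_tilde(a, a))  # last value is 1.0
```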
6 Conclusions

We have discussed conditions under which kernels defined by the algebraic structure of groups and semigroups are characteristic. For locally compact Abelian groups, the continuous shift-invariant $\mathbb{R}$-valued characteristic kernels are completely determined: they are the Fourier inverses of positive measures with support equal to the entire dual group. For compact (non-Abelian) groups, we showed a sufficient condition for continuous shift-invariant characteristic kernels in terms of the operator-valued Fourier transform. We also showed a condition for the semigroup $\mathbb{R}^n_+$. In the advanced theory of harmonic analysis, Bochner's theorem and Fourier analysis can be extended to more general algebraic structures to some extent. It would be interesting to generalize the results in this paper to such classes.

In practical applications of machine learning, we are given a finite sample from a distribution, rather than the distribution itself. In this setting, it becomes important to choose the best possible kernel for inference on this sample. While the characteristic property gives a necessary requirement for RKHS embeddings of distributions to be distinguishable, it does not address optimal kernel choice at finite sample sizes. Theoretical approaches to this problem are the basis for future work.
References
[1] F. R. Bach and M. I. Jordan. Kernel independent component analysis. JMLR, 3:1-48, 2002.
[2] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer, 1984.
[3] M. Cuturi, K. Fukumizu, and J.-P. Vert. Semigroup kernels on measures. JMLR, 6:1169-1198, 2005.
[4] G. B. Folland. A Course in Abstract Harmonic Analysis. CRC Press, 1995.
[5] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. JMLR, 5:73-99, 2004.
[6] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. Advances in NIPS 20, 489-496. MIT Press, 2008.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. The Annals of Statistics, 2009, in press.
[8] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample-problem. Advances in NIPS 19. MIT Press, 2007.
[9] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. Advances in NIPS 20, 585-592. MIT Press, 2008.
[10] M. S. Hanna and T. Chang. Fitting smooth histories to rotation data. Journal of Multivariate Analysis, 75:47-61, 2000.
[11] E. Hewitt and K. A. Ross. Abstract Harmonic Analysis II. 1970.
[12] W. Rudin. Fourier Analysis on Groups. Interscience, 1962.
[13] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
[14] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In Proc. COLT 2008, to appear, 2008.
[15] O. Stavdahl, A. K. Bondhus, K. Y. Pettersen, and K. E. Malvig. Optimal statistical operators for 3-dimensional rotational data: geometric interpretations and application to prosthesis kinematics. Robotica, 23(3):283-292, 2005.
[16] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. JMLR, 2:67-93, 2001.
[17] S. Wu and S.-I. Amari. Conformal transformation of kernel functions: A data-dependent way to improve support vector machine classifiers. Neural Processing Letters, 15(1):59-67, 2002.
Particle Filter-based Policy Gradient in POMDPs
Romain Deguest*
CMAP, Ecole Polytechnique
[email protected]

Pierre-Arnaud Coquelin
CMAP, Ecole Polytechnique
[email protected]

Rémi Munos
INRIA Lille - Nord Europe, SequeL project
[email protected]
Abstract
Our setting is a Partially Observable Markov Decision Process with continuous
state, observation and action spaces. Decisions are based on a Particle Filter for
estimating the belief state given past observations. We consider a policy gradient
approach for parameterized policy optimization. For that purpose, we investigate
sensitivity analysis of the performance measure with respect to the parameters of
the policy, focusing on Finite Difference (FD) techniques. We show that the naive
FD is subject to variance explosion because of the non-smoothness of the resampling procedure. We propose a more sophisticated FD method which overcomes
this problem and establish its consistency.
1
Introduction
We consider a Partially Observable Markov Decision Problem (POMDP) (see e.g. (Lovejoy, 1991;
Kaelbling et al., 1998)) defined by a state process (Xt )t?1 ? X, an observation process (Yt )t?1 ?
Y , a decision (or action) process (At )t?1 ? A which depends on a policy (mapping from all possible
observation histories to actions), and a reward function r : X ? R. Our goal is to find a policy
? that maximizes a performance measure J(?), function of future rewards, for example in a finite
horizon setting:
n
X
?
def ?
J(?) = E
r(Xt ) .
(1)
t=1
Other performance measures (such as in infinite horizon with discounted rewards) could be handled
as well. In this paper, we consider the case of continuous state, observation, and action spaces.
The state process is a Markov decision process taking its values in a (measurable) state space X,
with initial probability measure ? ? M(X) (i.e. X1 ? ?), and which can be simulated using a
transition function F and independent random numbers, i.e. for all t ? 1,
i.i.d.
Xt+1 = F (Xt , At , Ut ), with Ut ? ?,
(2)
where F : X ? A ? U ? X and (U, ?(U ), ?) is a probability space. In many practical situations
U = [0, 1]p and Ut is a p-uple of pseudo random numbers. For simplicity, we adopt the notations
def
F (x0 , a0 , u) = F? (u), where F? is the first transition function (i.e. X1 = F? (U0 ) with U0 ? ?).
The observation process $(Y_t)_{t \ge 1}$ lies in a (measurable) space $Y$ and is linked with the state process by the conditional probability measure $\mathbb{P}(Y_t \in dy_t \mid X_t = x_t) = g(x_t, y_t)\, dy_t$, where $g : X \times Y \to [0, 1]$ is the marginal density function of $Y_t$ given $X_t$. We assume that observations are conditionally independent given the state process. Here also, we assume that we can simulate an observation using a transition function $G$ and independent random numbers, i.e. $\forall t \ge 1$, $Y_t = G(X_t, V_t)$, where $V_t \stackrel{\mathrm{i.i.d.}}{\sim} \nu$ (for the sake of simplicity we consider the same probability space $(U, \sigma(U), \nu)$). Now, the action process $(A_t)_{t \ge 1}$ depends on a policy $\pi$ which assigns to each possible observation history $Y_{1:t}$ (where we adopt the usual notation "$1:t$" to denote the collection of integers $s$ such that $1 \le s \le t$), an action $A_t \in A$.

* Also affiliated with Columbia University.
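A minimal sketch of this generative view, with toy stand-ins for F, G, the reward, and a simple observation-based policy (all assumptions for illustration only), estimating J by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, a, u):   # toy state transition X_{t+1} = F(X_t, A_t, U_t)
    return 0.9 * x + a + 0.1 * u

def G(x, v):      # toy observation Y_t = G(X_t, V_t)
    return x + 0.1 * v

def reward(x):
    return -x * x

def rollout(policy, n=20):
    # One trajectory; returns sum_{t=1}^n r(X_t).
    x, total = rng.normal(), 0.0
    for _ in range(n):
        total += reward(x)
        y = G(x, rng.normal())
        a = policy(y)          # memoryless observation-based policy, for simplicity
        x = F(x, a, rng.normal())
    return total

# Monte Carlo estimate of J(pi) by averaging independent rollouts.
J_hat = np.mean([rollout(lambda y: -0.5 * y) for _ in range(1000)])
print(J_hat)
```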
In this paper we will consider policies that depend on the belief state (also called the filtering distribution) conditioned on past observations. The belief state, written $b_t$, belongs to $\mathcal{M}(X)$ (the space of all probability measures on $X$) and is defined by $b_t(dx_t, Y_{1:t}) \stackrel{\mathrm{def}}{=} \mathbb{P}(X_t \in dx_t \mid Y_{1:t})$; it will be written $b_t(dx_t)$ or even $b_t$ for simplicity when there is no risk of confusion. Because of the Markov property of the state dynamics, the belief state $b_t(\cdot, Y_{1:t})$ is the most informative representation about the current state $X_t$ given the history of past observations $Y_{1:t}$. It represents sufficient statistics for designing an optimal policy in the class of observation-based policies.

The temporal and causal dependencies of the dynamics of a generic POMDP using belief-based policies are summarized in Figure 1 (left): at time $t$, the state $X_t$ is unknown, only $Y_t$ is observed, which enables (at least in theory) updating $b_t$ based on the previous belief $b_{t-1}$. The policy $\pi$ takes as input the belief state $b_t$ and returns an action $A_t$ (the policy may be deterministic or stochastic).

However, since the belief state is an infinite dimensional object, and thus cannot be represented in a computer, we first simplify the class of policies considered here to be defined over a finite dimensional space of belief features $f : \mathcal{M}(X) \to \mathbb{R}^K$ which represent relevant statistics of the filtering distribution. We write $b_t(f_k)$ for the value of the $k$-th feature (among $K$) (where we use the usual notation $b(f) \stackrel{\mathrm{def}}{=} \int_X f(x)\, b(dx)$ for any function $f$ defined on $X$ and measure $b \in \mathcal{M}(X)$), and denote $b_t(f)$ the vector (of size $K$) with components $b_t(f_k)$. Examples of features are: $f(x) = x$ (mean value), $f(x) = x x^\top$ (for the covariance matrix). Other more complex features (e.g. an entropy measure) could be used as well. Such a policy $\pi : \mathbb{R}^K \to A$ selects an action $A_t = \pi(b_t(f))$, which in turn yields a new state $X_{t+1}$.
Except for simple cases, such as finite-state finite-observation processes (where a Viterbi algorithm could be applied (Rabiner, 1989)) and the case of linear dynamics and Gaussian noise (where a Kalman filter could be used), there is no closed-form representation of the belief state. Thus b_t must be approximated in our general setting. A popular method for approximating the filtering distribution is known as Particle Filtering (PF) (also called Interacting Particle Systems or Sequential Monte Carlo). Such particle-based approaches have been used in many applications (see e.g. (Doucet et al., 2001) and (Del Moral, 2004) for a Feynman-Kac framework), for example for parameter estimation in Hidden Markov Models and control (Andrieu et al., 2004) and mobile robot localization (Fox et al., 2001). A PF approximates the belief state b_t ∈ M(X) by a set of particles x_t^{1:N} (points of X), which are updated sequentially at each new observation by a transition-selection procedure. In particular, the belief feature b_t(f) is approximated by (1/N) Σ_{i=1}^N f(x_t^i), and the policy is thus a function that takes as input the activation of the feature f at the positions of the particles: A_t = π((1/N) Σ_{i=1}^N f(x_t^i)). For such methods, the general scheme for POMDPs using Particle Filter-based policies is described in Figure 1 (right).
In this paper, we consider a class of policies π_θ parameterized by a (multi-dimensional) parameter θ, and we search for the value of θ that maximizes the resulting criterion J(π_θ), written J(θ) from now on for simplicity. We focus on a policy gradient approach: the POMDP is replaced by an optimization problem on the space of policy parameters, and a (stochastic) gradient ascent on J(θ) is considered. For that purpose (and this is the object of this work) we investigate the estimation of ∇J(θ) (where the gradient ∇ refers to the derivative w.r.t. θ), with an emphasis on finite-difference techniques. There are many works on such policy gradient approaches in the field of Reinforcement Learning, see e.g. (Baxter & Bartlett, 1999), but the policies considered are generally not based on the result of a PF. Here, we explicitly consider a class of policies that are based on a belief state constructed by a PF. Our motivation for investigating this case rests on two facts: (1) the belief state represents sufficient statistics for optimality, as mentioned above; (2) PFs are a very popular and efficient tool for constructing the belief state in continuous domains.
After recalling the general approach for evaluating the performance of a PF-based policy (Section 2), we describe (in Section 3.1) a naive Finite-Difference (FD) approach (defined by a step size h) for estimating ∇J(θ). We discuss the bias-variance tradeoff and explain the problem of variance explosion when h is small. This problem is a consequence of the discontinuity of the resampling operation w.r.t. the parameter θ. Our contribution is detailed in Section 3.2: we propose a modified FD estimate for ∇J(θ) which (along the random sample path) has bias O(h²) and variance O(1/N), thus overcoming the drawback of the previous naive method. An algorithm is described and illustrated in Section 4 on a simple problem where the optimal policy exhibits a tradeoff between greedy reward optimization and localization.
[Figure 1 omitted: two panel diagrams showing, over times t−1, t, t+1, the reward r_t, state X_t, observation Y_t, belief state b_t, belief features b_t(f), policy π_θ, and action A_t; the right panel replaces the belief state by particles x_t^{1:N} and features f(x_t^{1:N}).]

Figure 1: Left figure: causal and temporal dependencies in a POMDP. Right figure: PF-based scheme for POMDPs, where the belief feature b_t(f) is approximated by (1/N) Σ_{i=1}^N f(x_t^i).
2 Particle Filters (PF)
We first describe a generic PF for estimating the belief state based on past observations. In Subsection 2.1 we detail how to control a real-world POMDP, and in Subsection 2.2 how to estimate the performance of a given policy in simulation. In both cases, we assume that the models of the dynamics (state, observation) are known. The basic PF, called the Bootstrap Filter (see (Doucet et al., 2001) for details), approximates the belief state b_n by an empirical distribution b_n^N def= Σ_{i=1}^N w_n^i δ_{x_n^i} (where δ denotes a Dirac distribution) made of N particles x_n^{1:N}. It consists in iterating the two following steps: at time t, given observation y_t,
• Transition step (also called importance sampling or mutation): a successor particle population x̃_t^{1:N} is generated according to the state dynamics from the previous population x_{t−1}^{1:N}. The (importance sampling) weights

      w_t^i = g(x̃_t^i, y_t) / Σ_{j=1}^N g(x̃_t^j, y_t)

  are evaluated.

• Selection step: resample (with replacement) N particles x_t^{1:N} from the set x̃_t^{1:N} according to the weights w_t^{1:N}. We write x_t^{1:N} = x̃_t^{k_t^{1:N}}, where k_t^{1:N} are the selection indices.
Resampling is used to avoid the problem of degeneracy of the algorithm, i.e. that most of the weights decrease to zero. It consists in selecting new particle positions so as to preserve a consistency property (i.e. Σ_{i=1}^N w_t^i φ(x̃_t^i) = E[(1/N) Σ_{i=1}^N φ(x_t^i)]). The simplest version, introduced in (Gordon et al., 1993), chooses the selection indices k_t^{1:N} by independent sampling from the set 1 : N according to a multinomial distribution with parameters w_t^{1:N}, i.e. P(k_t^i = j) = w_t^j, for all 1 ≤ i ≤ N. The idea is to replicate the particles in proportion to their weights. Many variants have been proposed in the literature, among which the stratified resampling method (Kitagawa, 1996), which is optimal in terms of variance; see e.g. (Cappé et al., 2005).
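As a concrete illustration, here is a minimal numpy sketch of one transition–selection iteration of the bootstrap filter with multinomial resampling. The transition function F and the observation density g are user-supplied callables, and the Gaussian noise shape is an assumption of ours; none of the names below come from the paper.

import numpy as np

def pf_step(particles, action, y, F, g, rng):
    """One bootstrap-filter iteration: mutation, weighting, selection.

    particles: (N, d) array holding x_{t-1}^{1:N}; F(x, a, u) is the state
    transition (vectorized over particles); g(x, y) returns the observation
    likelihood of y for each particle.
    """
    N = len(particles)
    u = rng.standard_normal(particles.shape)   # random numbers u_{t-1}^{1:N} ~ nu
    proposed = F(particles, action, u)         # mutation: successor population
    w = g(proposed, y)                         # unnormalized importance weights
    w = w / w.sum()                            # weights w_t^{1:N}
    k = rng.choice(N, size=N, p=w)             # multinomial selection indices k_t^{1:N}
    return proposed[k], w, k

Stratified resampling would replace the rng.choice call while preserving the same consistency property.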
Convergence of b_n^N(f) to b_n(f) (e.g. Laws of Large Numbers or Central Limit Theorems) is discussed in (Del Moral, 2004) or (Douc & Moulines, 2008). For our purpose we note that, under weak conditions on the feature f, we have the consistency property b^N(f) → b(f), almost surely.
2.1 Control of a real system by a PF-based policy

We describe in Algorithm 1 how one may use a PF-based policy π_θ for the control of a real-world system. Note that from our definition of F_0, the particles are initialized i.i.d. as x̃_1^{1:N} ∼ μ.
2.2 Estimation of J(θ) in simulation

Now, for the purpose of policy optimization, one should be able to evaluate the performance of a policy in simulation. J(θ), defined by (1), may be estimated in simulation provided that
Algorithm 1 Control of a real-world POMDP
for t = 1 to n do
    Observe y_t.
    Particle transition step: set x̃_t^{1:N} = F(x_{t−1}^{1:N}, a_{t−1}, u_{t−1}^{1:N}) with u_{t−1}^{1:N} i.i.d. ∼ ν;
        set w_t^i = g(x̃_t^i, y_t) / Σ_{j=1}^N g(x̃_t^j, y_t).
    Particle resampling step: set x_t^{1:N} = x̃_t^{k_t^{1:N}}, where k_t^{1:N} are given by the selection step
        according to the weights w_t^{1:N}.
    Select action: a_t = π_θ( (1/N) Σ_{i=1}^N f(x_t^i) ).
end for
the dynamics of the state and observation are known. Making explicit the dependency on the random sample path, written ω (which accounts for the state and observation stochastic dynamics and the random numbers used in the PF-based policy), we write J(θ) = E_ω[J_ω(θ)], where J_ω(θ) def= Σ_{t=1}^n r(X_{t,ω}(θ)), making the dependency of the state on θ and ω explicit.

Algorithm 2 describes how to evaluate a PF-based policy in simulation. The function returns an estimate, written J_ω^N(θ), of J_ω(θ). Using the previously mentioned asymptotic convergence results for PF, one has lim_{N→∞} J_ω^N(θ) = J_ω(θ), almost surely (a.s.). In order to approximate J(θ), one would perform several calls to the algorithm, receiving J_{ω_m}^N(θ) (for 1 ≤ m ≤ M), and calculate their empirical mean (1/M) Σ_{m=1}^M J_{ω_m}^N(θ), which tends to J(θ) a.s. when M, N → ∞.
Algorithm 2 Estimation of J_ω(θ) in simulation
for t = 1 to n do
    Define state: x_t = F(x_{t−1}, a_{t−1}, u_{t−1}) with u_{t−1} ∼ ν.
    Define observation: y_t = G(x_t, v_t) with v_t ∼ ν.
    Particle transition step: set x̃_t^{1:N} = F(x_{t−1}^{1:N}, a_{t−1}, u_{t−1}^{1:N}) with u_{t−1}^{1:N} i.i.d. ∼ ν;
        set w_t^i = g(x̃_t^i, y_t) / Σ_{j=1}^N g(x̃_t^j, y_t).
    Particle resampling step: set x_t^{1:N} = x̃_t^{k_t^{1:N}}, where k_t^{1:N} are given by the selection step
        according to the weights w_t^{1:N}.
    Select action: a_t = π_θ( (1/N) Σ_{i=1}^N f(x_t^i) ).
end for
Return J_ω^N(θ) def= Σ_{t=1}^n r(x_t).
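A direct Python transcription of Algorithm 2 may help fix ideas; this is a sketch under the assumption that the model callables (F, G, g, the reward r, the feature map f, and a sampler for μ) are supplied by the user and vectorized over a leading particle axis. The noise shapes and names are illustrative only.

import numpy as np

def estimate_J(theta, policy, sample_x1, F, G, g, r, f, N, n, rng):
    """One-sample-path estimate J_omega^N(theta) of J(theta) (Algorithm 2)."""
    x = sample_x1(rng, 1)[0]           # true state X_1 ~ mu
    particles = sample_x1(rng, N)      # particle population x_1^{1:N} ~ mu
    total = 0.0
    for t in range(n):
        y = G(x, rng.standard_normal())                 # observation y_t
        w = g(particles, y)                             # importance weights
        particles = particles[rng.choice(N, size=N, p=w / w.sum())]
        a = policy(theta, f(particles).mean(axis=0))    # PF-based action a_t
        total += r(x)
        x = F(x, a, rng.standard_normal(np.shape(x)))   # advance the true state
        particles = F(particles, a, rng.standard_normal(particles.shape))
    return total

Averaging M independent calls gives the empirical mean described above; the particle mutation at the end of each loop iteration plays the role of the transition step at the next time index.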
3 A policy gradient approach
Now we want to optimize the value of the parameter in simulation. Then, once a "good" parameter θ* is found, we would use Algorithm 1 to control the real system with the corresponding PF-based policy π_{θ*}. Gradient approaches have been studied in the field of continuous-space Hidden Markov Models in (Fichoud et al., 2003; Cérou et al., 2001; Doucet & Tadic, 2003). These authors used a likelihood ratio approach to evaluate ∇J(θ). Such methods suffer from high variance, in particular for problems with small noise. In order to reduce the variance, it has been proposed in (Poyadjis et al., 2005) to use a marginal particle filter instead of a simple path-based particle filter. This approach is efficient in terms of variance reduction, but its computational complexity is O(N²). Here we investigate a pathwise (i.e. along the random sample path ω) sensitivity analysis of J_ω(θ) (w.r.t. θ) for the purpose of (stochastic) gradient optimization. We start with a naive Finite Difference (FD) approach and show the problem of variance explosion. Then we provide an alternative, called common-indices FD, which overcomes this problem.

In the sequel, we make the assumption that all relevant functions (F, g, f, π) are continuously differentiable w.r.t. their respective variables. Note that although this is not explicitly mentioned, all such functions may depend on time.
3.1 Naive Finite-Difference (FD) method

Let us consider the derivative of J(θ) component-wise, writing ∇J(θ) for the derivative of J(θ) w.r.t. a one-dimensional parameter. If the parameter θ is multi-dimensional, the derivative is calculated in each direction. For h > 0 we define the centered finite-difference quotient

\[ I_h \overset{\mathrm{def}}{=} \frac{J(\theta + h) - J(\theta - h)}{2h}. \]

Since J(θ) is differentiable, lim_{h→0} I_h = ∇J(θ). Consequently, a method for approximating ∇J(θ) consists in estimating I_h for a sufficiently small h. We know that J(θ) can be numerically estimated by (1/M) Σ_{m=1}^M J_{ω_m}^N(θ). Thus, it seems natural to estimate I_h by

\[ I_h^{N,M} \overset{\mathrm{def}}{=} \frac{1}{2h} \Big[ \frac{1}{M} \sum_{m=1}^{M} J_{\omega_m}^N(\theta + h) - \frac{1}{M} \sum_{m'=1}^{M} J_{\omega_{m'}}^N(\theta - h) \Big], \]

where we used independent random numbers to evaluate J(θ + h) and J(θ − h). From the consistency of the PF, we deduce that lim_{h→0} lim_{M,N→∞} I_h^{N,M} = ∇J(θ). This naive FD estimate exhibits the following bias-variance tradeoff (see (Coquelin et al., 2008) for the proof):
Proposition 1 (Bias-variance tradeoff). Assume that J(θ) is three times continuously differentiable in a small neighborhood of θ. Then the asymptotic (as N → ∞) bias of the naive FD estimate I_h^{N,M} is of order O(h²) and its variance is O(N⁻¹M⁻¹h⁻²).
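The O(h²) bias is the usual centered-difference Taylor argument; a sketch, for a scalar θ:

\[
J(\theta \pm h) = J(\theta) \pm h\,\nabla J(\theta) + \tfrac{h^2}{2}\,\nabla^2 J(\theta) \pm \tfrac{h^3}{6}\,\nabla^3 J(\xi_\pm)
\quad\Longrightarrow\quad
I_h = \frac{J(\theta+h) - J(\theta-h)}{2h} = \nabla J(\theta) + O(h^2).
\]

The variance order comes from the 1/(2h)² prefactor on the difference of two independent Monte Carlo estimates, each of variance O(N⁻¹M⁻¹).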
In order to reduce the bias, one should choose a small h, but then the variance blows up. Additional computational resources (a larger number of particles N) help control the variance. However, in practice, e.g. for stochastic optimization, this leads to an intractable amount of computation, since any consistent FD-based optimization algorithm (such as the Kiefer-Wolfowitz algorithm) needs to consider a sequence of steps h that decreases with the number of gradient iterations. But if the number of particles is bounded, the variance term diverges, which may prevent the stochastic gradient algorithm from converging to a local optimum.

In order to reduce the variance of the previous estimator when h is small, one may use common random numbers to estimate both J(θ + h) and J(θ − h) (i.e. ω_m = ω_{m'}). The variance then reduces to O(N⁻¹M⁻¹h⁻¹) (see e.g. (Glasserman, 2003)), which still explodes for small h.

Under the additional assumption that, along almost every random sample path ω, the function θ ↦ J_ω^N(θ) is a.s. continuous, the variance would reduce to O(N⁻¹M⁻¹) (see Section 7.1 of (Glasserman, 2003)). Unfortunately, this is not the case here, because of the discontinuity of the PF resampling operation w.r.t. θ. Indeed, for a fixed ω, the selection indices k_t^{1:N} (taking values in the finite set 1 : N) are usually a non-smooth function of the weights w_t^{1:N}, which depend on θ. Therefore the naive FD method cannot be applied to PFs in general, because of variance explosion of the estimate when h is small, even when using common random numbers.
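For reference, a sketch of the naive estimator with common random numbers, obtained by seeding the generator identically for the two perturbed runs; estimate_J stands for any path-wise simulator such as the Algorithm 2 sketch above (names ours).

import numpy as np

def naive_fd(estimate_J, theta, h, M, seed=0):
    """Centered FD estimate I_h^{N,M} of the gradient, with common random numbers."""
    diffs = []
    for m in range(M):
        up = estimate_J(theta + h, np.random.default_rng(seed + m))    # same omega_m
        down = estimate_J(theta - h, np.random.default_rng(seed + m))  # for both runs
        diffs.append(up - down)
    return float(np.mean(diffs)) / (2.0 * h)

Even with identical seeds, the selection indices inside the two runs eventually differ, so the difference does not vanish with h and the 1/(2h) factor makes the variance explode.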
3.2 Common-indices Finite-Difference method

Let us consider J_ω(θ) = Σ_{t=1}^n r(X_{t,ω}(θ)), making explicit the dependency of the state on θ and a random sample path ω. Under our assumptions, the gradient ∇J_ω(θ) is well defined. Now let us fix ω. For clarity, we omit the ω dependency when no confusion is possible. The function θ ↦ X_t(θ) (for any 1 ≤ t < n) is smooth because all transition functions are smooth, the policy is smooth, and the belief state b_t is smooth w.r.t. θ. Making the dependency of the belief feature b_{t,θ}(f) on θ explicit, we write:

    θ —smooth→ b_{t,θ}(f) —smooth→ X_t(θ) —smooth→ J_ω(θ).
As already mentioned, the problem with the naive FD method is that the PF estimate b_{t,θ}^N(f) = (1/N) Σ_{i=1}^N f(x_t^i(θ)) of b_{t,θ}(f) is not smooth w.r.t. θ, because it depends on the selection indices k_{1:t}^{1:N}(θ) which, taken as a function of θ (through the weights), are not continuous. We write:

    θ —non-smooth→ b_{t,θ}^N(f) = (1/N) Σ_{i=1}^N f(x_t^i(θ)) —smooth→ J_ω^N(θ).

So a natural idea to recover continuity in an FD method consists in using exactly the same selection indices for the quantities related to θ + h and θ − h. However, using the same indices means using the same weights during the selection procedure for both trajectories. This would lead to a wrong estimator, because the weights depend strongly on θ through the observation function g.
Our idea is thus to use the same selection indices but use a likelihood ratio in the belief feature estimation. More precisely, let us write k_t^{1:N}(θ) for the selection indices obtained for parameter θ, and consider a parameter θ' in a small neighborhood of θ. Then a PF estimate for b_{t,θ'}(f) is

\[ b^N_{t,\theta'}(f) \overset{\mathrm{def}}{=} \sum_{i=1}^{N} \frac{l_t^i(\theta, \theta')}{\sum_{j=1}^{N} l_t^j(\theta, \theta')} \, f\big(x_t^i(\theta')\big), \quad \text{with} \quad l_t^i(\theta, \theta') \overset{\mathrm{def}}{=} \prod_{s=1}^{t} \frac{g\big(x_s^i(\theta'), y_s(\theta')\big)}{g\big(x_s^i(\theta), y_s(\theta)\big)}, \tag{3} \]

the l_t^i being the likelihood ratios computed along the particle paths, and where the particles x_{1:t}^{1:N}(θ') have been generated using the same selection indices k_{1:t}^{1:N}(θ) (and the same random sample path ω) as those used for θ. The next result states the consistency of this estimate and is our main contribution (see (Coquelin et al., 2008) for the proof).
Proposition 2. Under weak conditions on f (see e.g. (Del Moral & Miclo, 2000)), there exists a neighborhood of θ such that, for any θ' in this neighborhood, b_{t,θ'}^N(f) defined by (3) is a consistent estimator of b_{t,θ'}(f), i.e. lim_{N→∞} b_{t,θ'}^N(f) = b_{t,θ'}(f) almost surely.
Thus, for any perturbed value θ' around θ, we may run a PF where, in the resampling step, we use the same selection indices k_{1:n}^{1:N}(θ) as those obtained for θ. The mapping θ' ↦ b_{t,θ'}^N(f) is then smooth. We write:

    θ' —smooth→ b_{t,θ'}^N(f) defined by (3) —smooth→ J_ω^N(θ').

From the previous proposition we deduce that J_ω^N(θ') is a consistent estimator of J_ω(θ').
A possible implementation of the gradient estimation is described by Algorithm 3. The algorithm works by updating three families of state, observation, and particle populations, denoted "+", "−", and "o" for the parameter values θ + h, θ − h, and θ respectively. For the performance measure defined by (1), the algorithm returns the common-indices FD estimator

\[ \nabla_h J_\omega^N \overset{\mathrm{def}}{=} \frac{1}{2h} \sum_{t=1}^{n} \big[ r(x_t^+) - r(x_t^-) \big], \]

where x_{1:n}^+ and x_{1:n}^− are upper and lower trajectories simulated under the random sample path ω. Note that although the selection indices are the same, the particle populations "+", "−", and "o" are different, but very close (when h is small). Hence the likelihood ratios l_t^{1:N} converge to 1 when h → 0, which removes a source of variance when h is small.

The resulting estimator ∇_h^M J^N def= (1/M) Σ_{m=1}^M ∇_h J_{ω_m}^N of ∇J(θ) averages, over M sample paths ω_{1:M}, the return of Algorithm 3 called M times. This estimator overcomes the drawbacks of the naive FD estimate: its asymptotic bias is of order O(h²) (like any centered FD scheme), but its variance is of order O(N⁻¹M⁻¹) (the Central Limit Theorem applies to the belief feature estimator (3) and thus to ∇_h J_ω^N as well). Since the variance does not degenerate when h is small, one should choose h as small as possible to reduce the mean-squared estimation error.

The complexity of Algorithm 3 is linear in the number of particles N. Note that in the current implementation we use 3 populations of particles per derivative. Of course, we could consider a non-centered FD scheme approximating the derivative by [J(θ + h) − J(θ)]/h, which is first-order but only requires 2 particle populations. If the parameter is multidimensional, the full gradient estimate can be obtained by using K + 1 populations of particles. Of course, in gradient ascent methods, such FD gradient estimates may be advantageously combined with techniques such as simultaneous perturbation stochastic approximation (Spall, 2000), or conjugate and second-order gradient approaches.
Note that when h → 0, our estimator converges to an Infinitesimal Perturbation Analysis (IPA) estimator (Glasserman, 1991). The same ideas as those presented above could be used to derive an IPA estimator. The advantage of IPA is that it would use only one population of particles (for the full gradient), which may be interesting when the number of parameters K is large. However, its main drawback is that it requires computing analytically the derivatives of all the functions w.r.t. their respective variables, which may be time-consuming for the programmer.
4 Numerical Experiment

Because of space constraints, our purpose here is simply to illustrate numerically the theoretical findings on the previous FD methods (in terms of bias and variance contributions), rather than to provide a full example of POMDP policy optimization. We consider a very simple navigation task for a 2d robot, defined by its coordinates x_t ∈ R².
Algorithm 3 Common-indices Finite-Difference estimate of ∇J_ω
Initialize likelihood ratios: set l_0^{1:N,+} = 1 and l_0^{1:N,−} = 1.
for t = 1 to n do
    State processes: sample u_{t−1} ∼ ν and set
        x_t^o = F(x_{t−1}^o, a_{t−1}^o, u_{t−1}),  x_t^+ = F(x_{t−1}^+, a_{t−1}^+, u_{t−1}),  x_t^− = F(x_{t−1}^−, a_{t−1}^−, u_{t−1}).
    Observation processes: sample v_t ∼ ν and set
        y_t^o = G(x_t^o, v_t),  y_t^+ = G(x_t^+, v_t),  y_t^− = G(x_t^−, v_t).
    Particle transition step: draw u_{t−1}^{1:N} i.i.d. ∼ ν and set
        x̃_t^{1:N,o} = F(x_{t−1}^{1:N,o}, a_{t−1}^o, u_{t−1}^{1:N}),
        x̃_t^{1:N,+} = F(x_{t−1}^{1:N,+}, a_{t−1}^+, u_{t−1}^{1:N}),  x̃_t^{1:N,−} = F(x_{t−1}^{1:N,−}, a_{t−1}^−, u_{t−1}^{1:N}).
    Set w_t^i = g(x̃_t^{i,o}, y_t^o) / Σ_{j=1}^N g(x̃_t^{j,o}, y_t^o).
    Set l_t^{i,+} = [g(x̃_t^{i,+}, y_t^+) / g(x̃_t^{i,o}, y_t^o)] l_{t−1}^{i,+}  and  l_t^{i,−} = [g(x̃_t^{i,−}, y_t^−) / g(x̃_t^{i,o}, y_t^o)] l_{t−1}^{i,−}.
    Particle resampling step: let k_t^{1:N} be the selection indices obtained from the weights w_t^{1:N}; set
        x_t^{1:N,o} = x̃_t^{k_t^{1:N},o},  x_t^{1:N,+} = x̃_t^{k_t^{1:N},+},  x_t^{1:N,−} = x̃_t^{k_t^{1:N},−},
        l_t^{1:N,+} = l_t^{k_t^{1:N},+},  l_t^{1:N,−} = l_t^{k_t^{1:N},−}.
    Actions: set a_t^o = π_θ( (1/N) Σ_{i=1}^N f(x_t^{i,o}) ),
        a_t^+ = π_{θ+h}( Σ_{i=1}^N [l_t^{i,+} / Σ_{j=1}^N l_t^{j,+}] f(x_t^{i,+}) ),
        a_t^− = π_{θ−h}( Σ_{i=1}^N [l_t^{i,−} / Σ_{j=1}^N l_t^{j,−}] f(x_t^{i,−}) ).
end for
Return ∇_h J_ω^N = (1/2h) Σ_{t=1}^n [ r(x_t^+) − r(x_t^−) ].
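The following numpy sketch condenses Algorithm 3 into a single function. It keeps the three populations in dictionaries keyed by "o", "+", and "-", shares all random numbers and selection indices across them, and carries the likelihood ratios. The loop ordering is slightly rearranged relative to the pseudocode, all callables are our assumptions, and f is assumed to map an (N, d) particle array to an (N, K) feature array.

import numpy as np

def common_indices_fd(theta, h, policy, sample_x1, F, G, g, r, f, N, n, rng):
    """Common-indices FD estimate of dJ/dtheta along one sample path omega."""
    th = {"o": theta, "+": theta + h, "-": theta - h}
    x0 = sample_x1(rng, 1)[0]
    x = {s: x0.copy() for s in "o+-"}        # three coupled state processes
    p0 = sample_x1(rng, N)
    part = {s: p0.copy() for s in "o+-"}     # three particle populations
    lr = {s: np.ones(N) for s in "+-"}       # likelihood ratios l_t^{1:N,+/-}
    a = {}
    diff = 0.0
    for t in range(n):
        v = rng.standard_normal()            # shared observation noise v_t
        y = {s: G(x[s], v) for s in "o+-"}
        if t > 0:
            u = rng.standard_normal(part["o"].shape)   # shared particle noise
            for s in "o+-":
                part[s] = F(part[s], a[s], u)
            for s in "+-":
                lr[s] = lr[s] * g(part[s], y[s]) / g(part["o"], y["o"])
        w = g(part["o"], y["o"])
        k = rng.choice(N, size=N, p=w / w.sum())       # common selection indices
        for s in "o+-":
            part[s] = part[s][k]
        for s in "+-":
            lr[s] = lr[s][k]
        a["o"] = policy(th["o"], f(part["o"]).mean(axis=0))
        for s in "+-":
            wt = lr[s] / lr[s].sum()         # reweighted belief feature, cf. Eq. (3)
            a[s] = policy(th[s], (wt[:, None] * f(part[s])).sum(axis=0))
        diff += r(x["+"]) - r(x["-"])
        ux = rng.standard_normal(np.shape(x["o"]))     # shared state noise u_t
        for s in "o+-":
            x[s] = F(x[s], a[s], ux)
    return diff / (2.0 * h)

Because the same indices k are reused for all three populations, the particle positions and ratios vary smoothly with h, and the ratios stay close to 1 for small h, which is exactly what keeps the variance at O(N⁻¹M⁻¹).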
The observation is a noisy measurement of the squared distance to the origin (the goal): y_t def= ‖x_t‖² + v_t, where v_t ∼ N(0, σ_y²) (σ_y² is the variance of the noise). At each time step, the agent may choose a direction a_t (with ‖a_t‖ = 1), which moves the state by a step d in the corresponding direction: x_{t+1} = x_t + d a_t + u_t, where u_t i.i.d. ∼ N(0, σ_x² I) is an additive noise. The initial state x_1 is drawn from μ, a uniform distribution over the square [−1, 1]². We consider a class of policies that depend on a single belief feature: the mean of the belief state (i.e. f(x) = x). The PF-based policy thus uses the barycenter of the particle population, m_t def= (1/N) Σ_{i=1}^N x_t^i. Let us write m^⊥ for the +90° rotation of a vector m. We consider the policies

\[ \pi_\theta(m) = \frac{-\big[(1-\theta)\, m + \theta\, m^\perp\big]}{\big\| (1-\theta)\, m + \theta\, m^\perp \big\|}, \]

parameterized by θ ∈ [0, 1]. The chosen action is thus a_t = π_θ(m_t). If the robot were well localized (i.e. m_t close to x_t), the policy π_{θ=0} would move the robot towards the goal, whereas π_{θ=1} would move it in an orthogonal direction. The performance measure (to be minimized) is defined as J(θ) = E[‖x_n‖²], where n is a fixed time. We plot in Figure 2 the performance and gradient estimates obtained when running Algorithms 2 and 3, respectively. We used the numerical values N = 10³, M = 10², h = 10⁻⁶, n = 10, σ_x = 0.05, σ_y = 0.05, d = 0.1.
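For completeness, a sketch of the experiment's policy in Python; the sign convention (−m points from the barycenter toward the goal at the origin) is our reading of the formula above.

import numpy as np

def policy(theta, m):
    """pi_theta(m): unit-norm blend of the homing direction -m and its +90-degree rotation."""
    m_perp = np.array([-m[1], m[0]])            # +90 degree rotation of m
    d = -((1.0 - theta) * m + theta * m_perp)
    return d / np.linalg.norm(d)

m = np.array([0.3, -0.2])
print(policy(0.0, m))   # points from the barycenter toward the origin
print(policy(1.0, m))   # orthogonal direction, which aids localization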
It is interesting to note that in this problem the performance is optimal for θ* ≈ 0.3 (slightly better than for θ = 0). θ = 0 would correspond to the best feedback policy if the state were perfectly known. However, moving in a direction orthogonal to the goal helps improve localization: the optimal policy exhibits a tradeoff between greedy optimization and localization.
    h        Bias / Variance (NFD)     Bias / Variance (CIFD)
    10^0     0.57 / 6.05 × 10⁻³        0.428 / 0.022
    10⁻²     0.31 / 0.13               0.00192 / 0.019
    10⁻⁴     unreliable / 25.3         0.00247 / 0.02
    10⁻⁶     unreliable / 6980         0.00162 / 0.0188

The table above shows the (empirically measured) bias and variance of the naive FD (NFD) method (using common random numbers) and the common-indices FD (CIFD) method, for the specific value θ = 0.5 (with N = 10³, M = 500). As predicted, the variance of the NFD approach makes this method inapplicable, whereas that of the CIFD is reasonable.
[Figure 2 omitted: plots of the performance estimate (left) and the gradient estimate (right) as functions of the parameter θ ∈ [0, 1].]

Figure 2: Left: estimator (1/M) Σ_{m=1}^M J_{ω_m}^N(θ) of J(θ) and confidence intervals ±√(Var[J_ω^N(θ)]/M). Right: estimator (1/M) Σ_{m=1}^M ∇_h J_{ω_m}^N(θ) of ∇J(θ) and confidence intervals ±√(Var[∇_h J_ω^N(θ)]/M).
References

Andrieu, C., Doucet, A., Singh, S., & Tadic, V. (2004). Particle methods for change detection, system identification, and control. Proceedings of the IEEE, 92, 423–438.

Baxter, J., & Bartlett, P. (1999). Direct gradient-based reinforcement learning. Journal of Artificial Intelligence Research.

Cappé, O., Douc, R., & Moulines, E. (2005). Comparison of resampling schemes for particle filtering. 4th International Symposium on Image and Signal Processing and Analysis.

Cérou, F., LeGland, F., & Newton, N. (2001). Stochastic particle methods for linear tangent filtering equations, pages 231–240. IOS Press, Amsterdam.

Coquelin, P., Deguest, R., & Munos, R. (2008). Sensitivity analysis in particle filters. Application to policy optimization in POMDPs (Technical Report RR-6710). INRIA.

Del Moral, P. (2004). Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer.

Del Moral, P., & Miclo, L. (2000). Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. Séminaire de Probabilités de Strasbourg, 34, 1–145.

Douc, R., & Moulines, E. (2008). Limit theorems for weighted samples with applications to sequential Monte Carlo methods. To appear in Annals of Statistics.

Doucet, A., de Freitas, N., & Gordon, N. (2001). Sequential Monte Carlo Methods in Practice. Springer.

Doucet, A., & Tadic, V. (2003). Parameter estimation in general state-space models using particle methods. Annals of the Institute of Statistical Mathematics.

Fichoud, J., LeGland, F., & Mevel, L. (2003). Particle-based methods for parameter estimation and tracking: numerical experiments (Technical Report 1604). IRISA.

Fox, D., Thrun, S., Burgard, W., & Dellaert, F. (2001). Particle filters for mobile robot localization. In Sequential Monte Carlo Methods in Practice. Springer, New York.

Glasserman, P. (1991). Gradient Estimation via Perturbation Analysis. Kluwer.

Glasserman, P. (2003). Monte Carlo Methods in Financial Engineering. Springer.

Gordon, N., Salmond, D., & Smith, A. F. M. (1993). Novel approach to nonlinear and non-Gaussian Bayesian state estimation. Proceedings IEE-F (pp. 107–113).

Kaelbling, L. P., Littman, M. L., & Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101, 99–134.

Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5, 1–25.

Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observable Markov decision processes. Annals of Operations Research, 28, 47–66.

Poyadjis, G., Doucet, A., & Singh, S. (2005). Particle methods for optimal filter derivative: application to parameter estimation. IEEE ICASSP.

Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, 257–286.

Spall, J. C. (2000). Adaptive stochastic approximation by the simultaneous perturbation method. IEEE Transactions on Automatic Control, 45, 1839–1853.
Syntactic Topic Models
David Blei
Department of Computer Science
35 Olden Street
Princeton University
Princeton, NJ 08540
[email protected]
Jordan Boyd-Graber
Department of Computer Science
35 Olden Street
Princeton University
Princeton, NJ 08540
[email protected]
Abstract
We develop the syntactic topic model (STM), a nonparametric Bayesian model
of parsed documents. The STM generates words that are both thematically and
syntactically constrained, which combines the semantic insights of topic models
with the syntactic information available from parse trees. Each word of a sentence
is generated by a distribution that combines document-specific topic weights and
parse-tree-specific syntactic transitions. Words are assumed to be generated in an
order that respects the parse tree. We derive an approximate posterior inference
method based on variational methods for hierarchical Dirichlet processes, and we
report qualitative and quantitative results on both synthetic data and hand-parsed
documents.
1 Introduction
Probabilistic topic models provide a suite of algorithms for finding low dimensional structure in a
corpus of documents. When fit to a corpus, the underlying representation often corresponds to the
"topics" or "themes" that run through it. Topic models have improved information retrieval [1], word
sense disambiguation [2], and have additionally been applied to non-text data, such as for computer
vision and collaborative filtering [3, 4].
Topic models are widely applied to text despite a willful ignorance of the underlying linguistic
structures that exist in natural language. In a topic model, the words of each document are assumed
to be exchangeable; their probability is invariant to permutation. This simplification has proved
useful for deriving efficient inference techniques and quickly analyzing very large corpora [5].
However, exchangeable word models are limited. While useful for classification or information
retrieval, where a coarse statistical footprint of the themes of a document is sufficient for success,
exchangeable word models are ill-equipped for problems relying on more fine-grained qualities of
language. For instance, although a topic model can suggest documents relevant to a query, it cannot
find particularly relevant phrases for question answering. Similarly, while a topic model might
discover a pattern such as ?eat? occurring with ?cheesecake,? it lacks the representation to describe
selectional preferences, the process where certain words restrict the choice of the words that follow.
It is in this spirit that we develop the syntactic topic model, a nonparametric Bayesian topic model
that can infer both syntactically and thematically coherent topics. Rather than treating words as the
exchangeable unit within a document, the words of the sentences must conform to the structure of a
parse tree. In the generative process, the words arise from a distribution that has both a document-specific thematic component and a parse-tree-specific syntactic component.
We illustrate this idea with a concrete example. Consider a travel brochure with the sentence "In the near future, you could find yourself in ____." Both the low-level syntactic context of a word and its document context constrain the possibilities of the word that can appear next. Syntactically, it
[Figure 1 omitted: (a) the overall graphical model, with latent variables β, τ_k, π_k, θ_d, and hyperparameters α, σ, α_T, α_D over M documents; (b) the per-sentence graphical model for the parse of "Some phrases laid in his mind for years."]

Figure 1: In the graphical model of the STM, a document is made up of a number of sentences, represented by a tree of latent topics z which in turn generate words w. These words' topics are chosen by the topic of their parent (as encoded by the tree), the topic weights for a document θ, and the node's parent's successor weights π. (For clarity, not all dependencies of sentence nodes are shown.) The structure of variables for sentences within the document plate is on the right, as demonstrated by an automatic parse of the sentence "Some phrases laid in his mind for years." The STM assumes that the tree structure and words are given, but the latent topics z are not.
is going to be a noun, consistent with being the object of the preposition "of." Thematically, because it is in a travel brochure, we would expect to see words such as "Acapulco," "Costa Rica," or "Australia" more than "kitchen," "debt," or "pocket." Our model can capture these kinds of regularities and
exploit them in predictive problems.
Previous efforts to capture local syntactic context include semantic space models [6] and similarity
functions derived from dependency parses [7]. These methods successfully determine words that
share similar contexts, but do not account for thematic consistency. They have difficulty with polysemous words such as "fly," which can be either an insect or a term from baseball. With a sense
of document context, i.e., a representation of whether a document is about sports or animals, the
meaning of such terms can be distinguished.
Other techniques have attempted to combine local context with document coherence using linear
sequence models [8, 9]. While these models are powerful, ordering words sequentially removes
the important connections that are preserved in a syntactic parse. Moreover, these models generate words either from the syntactic or thematic context. In the syntactic topic model, words are
constrained to be consistent with both.
The remainder of this paper is organized as follows. We describe the syntactic topic model, and
develop an approximate posterior inference technique based on variational methods. We study its
performance both on synthetic data and hand-parsed data [10]. We show that the STM captures
relationships missed by other models and achieves lower held-out perplexity.
2 The syntactic topic model
We describe the syntactic topic model (STM), a document model that combines observed syntactic
structure and latent thematic structure. To motivate this model, we return to the travel brochure
sentence "In the near future, you could find yourself in ____." The word that fills in the blank is
constrained by its syntactic context and its document context. The syntactic context tells us that it is
an object of a preposition, and the document context tells us that it is a travel-related word.
The STM attempts to capture these joint influences on words. It models a document corpus as
exchangeable collections of sentences, each of which is associated with a tree structure such as a
parse tree (Figure 1(b)). The words of each sentence are assumed to be generated from a distribution
influenced both by their observed role in that tree and by the latent topics inherent in the document.
The latent variables that comprise the model are topics, topic transition vectors, topic weights, topic
assignments, and top-level weights. Topics are distributions over a fixed vocabulary (τ_k in Figure 1). Each is further associated with a topic transition vector (π_k), which weights changes in topics between parent and child nodes. Topic weights (θ_d) are per-document vectors indicating the degree to which each document is "about" each topic. Topic assignments (z_n, associated with each internal node of Figure 1(b)) are per-word indicator variables that refer to the topic from which the corresponding
word is assumed to be drawn. The STM is a nonparametric Bayesian model. The number of topics
is not fixed, and indeed can grow with the observed data.
The STM assumes the following generative process for a document collection.

1. Choose global topic weights β ∼ GEM(α).
2. For each topic index k = {1, . . .}:
   (a) Choose topic τ_k ∼ Dir(σ).
   (b) Choose topic transition distribution π_k ∼ DP(α_T, β).
3. For each document d = {1, . . . , M}:
   (a) Choose topic weights θ_d ∼ DP(α_D, β).
   (b) For each sentence in the document:
      i. Choose topic assignment z_0 ∝ θ_d π_start.
      ii. Choose root word w_0 ∼ mult(1, τ_{z_0}).
      iii. For each additional word w_n and parent p_n, n ∈ {1, . . . , d_n}:
         • Choose topic assignment z_n ∝ θ_d π_{z_{p(n)}}.
         • Choose word w_n ∼ mult(1, τ_{z_n}).
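To make the generative story concrete, here is a small numpy sketch at truncation level K. The DP draws are replaced by finite Dirichlet stand-ins with base measure β, which only approximates the process above, and all constants are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
K, V = 16, 1000                       # truncation level, vocabulary size
alpha, alpha_T, alpha_D, sigma = 1.0, 1.0, 1.0, 0.1

# Global topic weights beta ~ GEM(alpha), via truncated stick-breaking
sticks = rng.beta(1.0, alpha, size=K)
sticks[-1] = 1.0                      # final stick-breaking proportion set to one (cf. Sec. 3)
beta = sticks * np.concatenate(([1.0], np.cumprod(1.0 - sticks[:-1])))

tau = rng.dirichlet(np.full(V, sigma), size=K)     # topics tau_k ~ Dir(sigma)
pi = rng.dirichlet(alpha_T * beta + 1e-8, size=K)  # transitions: finite stand-in for DP(alpha_T, beta)
theta = rng.dirichlet(alpha_D * beta + 1e-8)       # document weights: stand-in for DP(alpha_D, beta)

def draw(parent_topic):
    """Topic from the renormalized pointwise product theta * pi_parent, then a word."""
    p = theta * pi[parent_topic]
    z = rng.choice(K, p=p / p.sum())
    return z, rng.choice(V, p=tau[z])

A parse tree would be filled by calling draw at the root with a distinguished start index (standing in for π_start) and recursing over the children.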
The distinguishing feature of the STM is that the topic assignment is drawn from a distribution that
combines two vectors: the per-document topic weights and the transition probabilities of the topic
assignment from its parent node in the parse tree. By merging these vectors, the STM models both
the local syntactic context and corpus-level semantics of the words in the documents. Because they
depend on their parents, the topic assignments and words are generated by traversing the tree.
A natural alternative model would be to traverse the tree and choose the topic assignment from either the parental topic transition π_{z_{p(n)}} or the document topic weights θ_d, based on a binary selector variable. This would be an extension of [8] to parse trees, but it does not enforce that words be syntactically consistent with their parent nodes and thematically consistent with a topic of the document; only one of the two conditions must hold. Rather, this approach draws on the idea behind the product of experts [11], multiplying two vectors and renormalizing to obtain a new distribution. Taking the point-wise product can be thought of as viewing one distribution through the "lens" of another, effectively choosing only words whose appearance can be explained by both.
The STM is closely related to the hierarchical Dirichlet process (HDP). The HDP is an extension of
Dirichlet process mixtures to grouped data [12]. Applied to text, the HDP is a probabilistic topic
model that allows each document to exhibit multiple topics. It can be thought of as the "infinite"
topic version of latent Dirichlet allocation (LDA) [13]. The difference between the STM and the
HDP is in how the per-word topic assignment is drawn. In the HDP, this topic assignment is drawn
directly from the topic weights and, thus, the HDP assumes that words within a document are exchangeable. In the STM, the words are generated conditioned on their parents in the parse tree. The
exchangeable unit is a sentence.
The STM is also closely related to the infinite tree with independent children [14]. The infinite tree
models syntax by basing the latent syntactic category of children on the syntactic category of the
parent. The STM reduces to the infinite tree when θ_d is fixed to a vector of ones.
3 Approximate posterior inference
The central computational problem in topic modeling is to compute the posterior distribution of the
latent structure conditioned on an observed collection of documents. Specifically, our goal is to
compute the posterior topics, topic transitions, per-document topic weights, per-word topic assignments, and top-level weights conditioned on a set of documents, each of which is a collection of
parse trees.
This posterior distribution is intractable to compute. In typical topic modeling applications, it is
approximated with either variational inference or collapsed Gibbs sampling. Fast Gibbs sampling
relies on the conjugacy between the topic assignment and the prior over the distribution that generates it. The syntactic topic model does not enjoy such conjugacy because the topic assignment is
drawn from a multiplicative combination of two Dirichlet distributed vectors. We appeal to variational inference.
In variational inference, the posterior is approximated by positing a simpler family of distributions,
indexed by free variational parameters. The variational parameters are fit to be close in relative
entropy to the true posterior. This is equivalent to maximizing Jensen's bound on the marginal
probability of the observed data [15].
We use a fully factorized variational distribution,

\[ q(\beta, z, \theta, \pi \,|\, \beta^*, \phi, \gamma, \nu) = q(\beta|\beta^*) \prod_d q(\theta_d|\gamma_d) \prod_k q(\pi_k|\nu_k) \prod_n q(z_n|\phi_n). \tag{1} \]

Following [16], q(β|β*) is not a full distribution, but a degenerate point estimate truncated so that all weights whose index is greater than K are zero in the variational distribution. The variational parameters γ_d and ν_k index Dirichlet distributions, and φ_n is a topic multinomial for the nth word.
From this distribution, the Jensen's lower bound on the log probability of the corpus is

\[ \mathcal{L}(\gamma, \nu, \phi;\, \alpha, \alpha_D, \alpha_T, \sigma) = \mathbb{E}_q[\log p(\beta|\alpha) + \log p(\theta|\alpha_D, \beta) + \log p(\pi|\alpha_T, \beta) + \log p(z|\theta, \pi) + \log p(w|z, \tau) + \log p(\tau|\sigma)] - \mathbb{E}_q[\log q(\theta) + \log q(\pi) + \log q(z)]. \tag{2} \]
Expanding E_q[log p(z|θ, π)] is difficult, so we add an additional slack parameter, ω_n, to approximate the expression. This derivation and the complete likelihood bound are given in the supplement. We
use coordinate ascent to optimize the variational parameters to be close to the true posterior.
Per-word variational updates. The variational update for the topic assignment of the nth word is

\[ \phi_{n,i} \propto \exp\Big\{ \Psi(\gamma_i) - \Psi\Big(\sum_{j=1}^{K}\gamma_j\Big) + \sum_{j=1}^{K} \phi_{p(n),j}\Big(\Psi(\nu_{j,i}) - \Psi\Big(\sum_{k=1}^{K}\nu_{j,k}\Big)\Big) + \sum_{c \in c(n)} \sum_{j=1}^{K} \phi_{c,j}\Big(\Psi(\nu_{i,j}) - \Psi\Big(\sum_{k=1}^{K}\nu_{i,k}\Big)\Big) - \sum_{c \in c(n)} \omega_c^{-1} \sum_j \frac{\gamma_j \, \nu_{i,j}}{\sum_k \gamma_k \sum_k \nu_{i,k}} + \log \tau_{i,w_n} \Big\}. \tag{3} \]
The influences on the estimated posterior of a topic assignment are: the document's topic weights θ, the topic of the node's parent p(n), the topics of the node's children c(n), the expected transitions between topics π, and the probability of the word within a topic, τ_{i,w_n}.

Most terms in Equation 3 are familiar from variational inference for probabilistic topic models, as the digamma functions appear in the expectations of multinomial distributions. The second-to-last term is new, however, because we cannot assume that the point-wise product of π_k and θ_d sums to one. We approximate the normalizer of their product by introducing ω; its update is
\[ \omega_n = \sum_{i=1}^{K} \sum_{j=1}^{K} \phi_{p(n),j} \, \frac{\gamma_i \, \nu_{j,i}}{\sum_{k=1}^{K} \gamma_k \, \sum_{k=1}^{K} \nu_{j,k}}. \]
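A numpy/scipy transcription of these two updates, as we read them from Equation 3 and the ω update; this is a sketch of the coordinate step under our own conventions, not the authors' reference implementation.

import numpy as np
from scipy.special import digamma

def update_phi(gamma, nu, log_tau_w, phi_parent, phi_children, omega_children):
    """Update phi_n for one word (Eq. 3). gamma: (K,); nu: (K, K), row j holding
    the Dirichlet parameters of pi_j; log_tau_w: (K,) log word likelihood per topic."""
    e_log_theta = digamma(gamma) - digamma(gamma.sum())
    e_log_pi = digamma(nu) - digamma(nu.sum(axis=1, keepdims=True))
    s = e_log_theta + phi_parent @ e_log_pi + log_tau_w
    e_theta = gamma / gamma.sum()
    e_pi = nu / nu.sum(axis=1, keepdims=True)
    for phi_c, omega_c in zip(phi_children, omega_children):
        s += e_log_pi @ phi_c              # each child constrains the row pi_i
        s -= (e_pi @ e_theta) / omega_c    # slack-normalizer correction
    s = np.exp(s - s.max())                # normalize in a numerically safe way
    return s / s.sum()

def update_omega(gamma, nu, phi_parent):
    """Slack normalizer omega_n approximating E[sum_i theta_i pi_{z_p(n),i}]."""
    e_theta = gamma / gamma.sum()
    e_pi = nu / nu.sum(axis=1, keepdims=True)
    return float(phi_parent @ (e_pi @ e_theta))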
Variational Dirichlet distributions and topic composition. This normalizer term also appears in the derivative of the likelihood bound with respect to γ and ν (the parameters of the variational distributions on θ and π, respectively), which cannot be solved in closed form. We use conjugate gradient optimization to determine the appropriate updates for these parameters [17].
Top-level weights. Finally, we consider the top-level weights. The first K − 1 stick-breaking proportions are drawn from a Beta distribution with parameters (1, α), but we assume that the final stick-breaking proportion is unity (thus β* is non-zero only for indices 1, . . . , K). Thus, we only optimize the first K − 1 positions and implicitly take β*_K = 1 − Σ_{i=1}^{K−1} β*_i. This constrained optimization is performed using the barrier method [17].
4 Empirical results
Before considering real-world data, we demonstrate the STM on synthetic natural language data. We
generated synthetic sentences composed of verbs, nouns, prepositions, adjectives, and determiners.
Verbs were only in the head position; prepositions could appear below nouns or verbs; nouns only
appeared below verbs; prepositions or determiners and adjectives could appear below nouns. Each
of the parts of speech except for prepositions and determiners were sub-grouped into themes, and
a document contains a single theme for each part of speech. For example, a document can only
contain nouns from a single "economic," "academic," or "livestock" theme.
Using a truncation level of 16, we fit three different nonparametric Bayesian language models to
the synthetic data (Figure 2).¹ The infinite tree model is aware of the tree structure but not documents [14]. It is able to separate parts of speech successfully except for adjectives and determiners
(Figure 2(a)). However, it ignored the thematic distinctions that actually divided the terms between
documents. The HDP is aware of document groupings and treats the words exchangeably within
them [12]. It is able to recover the thematic topics, but has missed the connections between the parts
of speech, and has conflated multiple parts of speech (Figure 2(b)).
The STM is able to capture the topical themes and recover parts of speech (with the exception of
prepositions that were placed in the same topic as nouns with a self loop). Moreover, it was able to
identify the same interconnections between latent classes that were apparent from the infinite tree.
Nouns are dominated by verbs and prepositions, and verbs are the root (head) of sentences.
Qualitative description of topics learned from hand-annotated data The same general properties, but with greater variation, are exhibited in real data. We converted the Penn Treebank [10], a
corpus of manually curated parse trees, into a dependency parse [18]. The vocabulary was pruned
to terms that appeared in at least ten documents.
Figure 3 shows a subset of topics learned by the STM with truncation level 32. Many of the resulting topics illustrate both syntactic and thematic consistency. A few nonspecific function topics
emerged (pronoun, possessive pronoun, general verbs, etc.). Many of the noun categories were more
specialized. For instance, Figure 3 shows clusters of nouns relating to media, individuals associated
with companies ("mr," "president," "chairman"), and abstract nouns related to stock prices ("shares," "quarter," "earnings," "interest"), all of which feed into a topic that modifies nouns ("his," "their," "other," "last"). Thematically related topics are separated by both function and theme.
This division between functional and topical uses for the latent classes can also be seen in the
values for the per-document multinomial over topics. A number of topics in Figure 3(b), such as 17,
15, 10, and 3, appear to some degree in nearly every document, while other topics are used more
sparingly to denote specialized content. With α = 0.1, this plot also shows that the nonparametric
Bayesian framework is ignoring many later topics.
Perplexity. To study the performance of the STM on new data, we estimated the held-out probability of previously unseen documents with an STM trained on a portion of the Penn Treebank. For each position in the parse trees, we estimate the probability of the observed word. We compute the perplexity as the exponential of the negative per-word average log probability. The lower the perplexity, the better the model has captured the patterns in the data. We also computed perplexity for
individual parts of speech to study the differences in predictive power between content words, such
as nouns and verbs, and function words, such as prepositions and determiners. This illustrates how
different algorithms better capture aspects of context. We expect function words to be dominated by
local context and content words to be determined more by the themes of the document.
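Perplexity itself is a one-liner once per-word log probabilities are available; a sketch, where log_probs and tags are hypothetical arrays of held-out word log probabilities and their part-of-speech tags:

import numpy as np

def perplexity(log_probs):
    """exp of the negative mean per-word log probability; lower is better."""
    return float(np.exp(-np.mean(log_probs)))

# Per-part-of-speech perplexity by filtering held-out positions:
# ppl_nouns = perplexity([lp for lp, tag in zip(log_probs, tags) if tag == "NOUN"])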
This trend is seen not only in the synthetic data (Figure 4(a)), where parsing models better predict
functional categories like prepositions and document-only models fail to account for patterns of
verbs and determiners, but also in real data. Figure 4(b) shows that HDP and STM both perform
better than parsing models in capturing the patterns behind nouns, while both the STM and the
infinite tree have lower perplexity for verbs. Like parsing models, our model was better able to
¹In Figure 2 and Figure 3, we mark topics which represent a single part of speech and are essentially the lone representative of that part of speech in the model. This is a subjective determination by the authors, does not reflect any specialization or special treatment of topics by the model, and is done merely for didactic purposes.
[Figure 2 omitted: topic graphs learned by each model on the synthetic data, with panels (a) parse transition only, (b) document multinomial only, and (c) the combination of parse transition and document multinomial.]

Figure 2: Three models were fit to the synthetic data described in Section 4. Each box illustrates the top five words of a topic; boxes that represent homogeneous parts of speech have rounded edges and are shaded. Edges between topics are labeled with estimates of their transition weight π. While the infinite tree model (a) is able to reconstruct the parts of speech used to generate the data, it lumps all topics into the same categories. Although the HDP (b) can discover themes of recurring words, it cannot determine the interactions between topics or separate out ubiquitous words that occur in all documents. The STM (c) is able to recover the structure.
predict the appearance of prepositions, but also remained competitive with HDP on content words.
On the whole, the STM had lower perplexity than HDP and the infinite tree.
5 Discussion
We have introduced and evaluated the syntactic topic model, a nonparametric Bayesian model of
parsed documents. The STM achieves better perplexity than the infinite tree or the hierarchical
Dirichlet process and uncovers patterns in text that are both syntactically and thematically consistent.
This dual relevance is useful for work in natural language processing. For example, recent work [19,
20] in the domain of word sense disambiguation has attempted to combine syntactic similarity with
topical information in an ad hoc manner to improve the predominant sense algorithm [21]. The
syntactic topic model offers a principled way to learn both simultaneously rather than combining
two heterogeneous methods.
The STM is not a full parsing model, but it could be used as a means of integrating document
context into parsing models. This work's central premise is consistent with the direction of recent
improvements in parsing technology in that it provides a method for refining the parts of speech
present in a corpus. For example, lexicalized parsers [22] create rules specific to individual terms,
and grammar refinement [23] divides general roles into multiple, specialized ones. The syntactic
topic model offers an alternative method of finding more specific rules by grouping words together
that appear in similar documents and could be extended to a full parser.
[Figure 3 omitted: (a) a graph of selected topics and their strongest links, labeled "Sinks and sources"; (b) a heat map of topic usage across documents, labeled "Topic usage".]

Figure 3: Selected topics (along with strong links) after a run of the syntactic topic model with a truncation level of 32. As in Figure 2, parts of speech that aren't subdivided across themes are indicated. In the Treebank corpus (left), head words (verbs) are shared, but the nouns split off into many separate specialized categories before feeding into pronoun sinks. The specialization of topics is also visible in plots of the variational parameter γ normalized for the first 300 documents of the Treebank (right), where three topic columns have been identified. Many topics are used to some extent in every document, showing that they are performing a functional role, while others are used more sparingly for semantic content.
[Figure 4 omitted: bar charts of per-part-of-speech perplexity for the HDP, the infinite tree with independent children, and the STM, with panels (a) Synthetic and (b) Treebank.]

Figure 4: After fitting three models on synthetic data, the syntactic topic model has better (lower) perplexity on all word classes except for adjectives. HDP is better able to capture document-level patterns of adjectives. The infinite tree captures prepositions best, which have no cross-document variation. On real data (Figure 4(b)), the syntactic topic model was better able to combine the strengths of the infinite tree on functional categories like prepositions with the strengths of the HDP on content categories like nouns to attain lower overall perplexity.
While traditional topic models reveal groups of words that are used in similar documents, the STM
uncovers groups that are used the same way in similar documents. This decomposition is useful for
tasks that require a more fine-grained representation of language than the bag of words can offer or
for tasks that require a broader context than parsing models.
References
[1] Wei, X., B. Croft. LDA-based document models for ad-hoc retrieval. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. 2006.
[2] Cai, J. F., W. S. Lee, Y. W. Teh. NUS-ML: Improving word sense disambiguation using topic features. In Proceedings of SemEval-2007. Association for Computational Linguistics, 2007.
[3] Fei-Fei Li, P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR '05 - Volume 2, pages 524-531. IEEE Computer Society, Washington, DC, USA, 2005.
[4] Marlin, B. Modeling user rating profiles for collaborative filtering. In S. Thrun, L. Saul, B. Schölkopf, eds., Advances in Neural Information Processing Systems. MIT Press, Cambridge, MA, 2004.
[5] Griffiths, T., M. Steyvers. Probabilistic topic models. In T. Landauer, D. McNamara, S. Dennis, W. Kintsch, eds., Latent Semantic Analysis: A Road to Meaning. Laurence Erlbaum, 2006.
[6] Padó, S., M. Lapata. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199, 2007.
[7] Lin, D. An information-theoretic definition of similarity. In Proceedings of International Conference of Machine Learning, pages 296-304. 1998.
[8] Griffiths, T. L., M. Steyvers, D. M. Blei, et al. Integrating topics and syntax. In L. K. Saul, Y. Weiss, L. Bottou, eds., Advances in Neural Information Processing Systems, pages 537-544. MIT Press, Cambridge, MA, 2005.
[9] Gruber, A., M. Rosen-Zvi, Y. Weiss. Hidden topic Markov models. In Proceedings of Artificial Intelligence and Statistics. San Juan, Puerto Rico, 2007.
[10] Marcus, M. P., B. Santorini, M. A. Marcinkiewicz. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19(2):313-330, 1994.
[11] Hinton, G. Products of experts. In Proceedings of the Ninth International Conference on Artificial Neural Networks, pages 1-6. IEEE, Edinburgh, Scotland, 1999.
[12] Teh, Y. W., M. I. Jordan, M. J. Beal, et al. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[13] Blei, D., A. Ng, M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[14] Finkel, J. R., T. Grenager, C. D. Manning. The infinite tree. In Proceedings of Association for Computational Linguistics, pages 272-279. Association for Computational Linguistics, Prague, Czech Republic, 2007.
[15] Jordan, M., Z. Ghahramani, T. S. Jaakkola, et al. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[16] Liang, P., S. Petrov, M. Jordan, et al. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of Empirical Methods in Natural Language Processing, pages 688-697. 2007.
[17] Boyd, S., L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[18] Johansson, R., P. Nugues. Extended constituent-to-dependency conversion for English. In (NODALIDA). 2007.
[19] Koeling, R., D. McCarthy. Sussx: WSD using automatically acquired predominant senses. In Proceedings of SemEval-2007. Association for Computational Linguistics, 2007.
[20] Boyd-Graber, J., D. Blei. PUTOP: Turning predominant senses into a topic model for WSD. In Proceedings of SemEval-2007. Association for Computational Linguistics, 2007.
[21] McCarthy, D., R. Koeling, J. Weeds, et al. Finding predominant word senses in untagged text. In Proceedings of Association for Computational Linguistics, pages 280-287. Association for Computational Linguistics, 2004.
[22] Collins, M. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589-637, 2003.
[23] Klein, D., C. Manning. Accurate unlexicalized parsing. In Proceedings of Association for Computational Linguistics, pages 423-430. Association for Computational Linguistics, 2003.
2,645 | 3,399 | Partially Observed Maximum Entropy
Discrimination Markov Networks
Jun Zhu†
Eric P. Xing‡
Bo Zhang†
† State Key Lab of Intelligent Tech & Sys, Tsinghua National TNList Lab, Dept. Comp Sci & Tech, Tsinghua University, Beijing China. [email protected]; [email protected]
‡ School of Comp. Sci., Carnegie Mellon University, Pittsburgh, PA 15213, [email protected]
Abstract
Learning graphical models with hidden variables can offer semantic insights to
complex data and lead to salient structured predictors without relying on expensive, sometime unattainable fully annotated training data. While likelihood-based
methods have been extensively explored, to our knowledge, learning structured
prediction models with latent variables based on the max-margin principle remains
largely an open problem. In this paper, we present a partially observed Maximum
Entropy Discrimination Markov Network (PoMEN) model that attempts to combine the advantages of Bayesian and margin based paradigms for learning Markov
networks from partially labeled data. PoMEN leads to an averaging prediction rule
that resembles a Bayes predictor and is more robust to overfitting, but is also built
on desirable discriminative laws resembling those of the M3N. We develop an
EM-style algorithm utilizing existing convex optimization algorithms for M3N as
a subroutine. We demonstrate competent performance of PoMEN over existing
methods on a real-world web data extraction task.
1 Introduction
Inferring structured predictions based on high-dimensional, often multi-modal and hybrid covariates remains a central problem in data mining (e.g., web-info extraction), machine intelligence (e.g.,
machine translation), and scientific discovery (e.g., genome annotation). Several recent approaches
to this problem are based on learning discriminative graphical models defined on composite features that explicitly exploit the structured dependencies among input elements and structured interpretational outputs. Different learning paradigms have been explored, including the maximum
conditional likelihood [7] and max-margin learning [2, 12, 13], with remarkable success.
However, the problem of structured input/output learning can be intriguing and significantly more
difficult when there exist hidden substructures in the data, which is not uncommon in realistic problems. As is well-known in the probabilistic graphical model literature, hidden variables can facilitate natural incorporation of structured domain knowledge such as latent semantic concepts or unobserved dependence hierarchies into the model, which can often result in more intuitive representation
and more compact parameterization of the model; but learning a partially observed model is often
non-trivial because it involves optimizing against a more complex cost function, which is usually
not convex and requires additional efforts to impute or marginalize out hidden variables. Most existing work along this line, such as the hidden CRF for object recognition [9] and scene segmentation
[14] and the dynamic hierarchical MRF for web data extraction [18], falls in the likelihood-based
learning. For the max-margin learning, which is arguably a more desirable discriminative learning
paradigm in many application scenarios, learning a Makov network with hidden variables can be
extremely difficult and little work has been done except [11], where, in order to obtain a convex program, the uncertainty in mixture modeling is simplified by a reduction using the MAP component.
A major reason for the difficulty of considering latent structures in max-margin models is the lack of
a natural probabilistic interpretation of such models, which on the other hand offers the key insight
in likelihood-based learning to design algorithms such as EM for learning partially observed models. Recent work on semi-supervised or unsupervised max-margin learning [1, 4, 16] was all short of
an explicit probabilistic interpretation of their algorithms of handling latent variables. The recently
proposed Maximum Entropy Discrimination Markov Networks (MaxEnDNet) [20, 19] represent a
key advance in this direction. MaxEnDNet offers a general framework to combine Bayesian-style
learning and max-margin learning in structured prediction. Given a prior distribution of a structured-prediction
ensemble of prediction models, MaxEnDNet adopts a structured minimum relative entropy principle to learn a posterior distribution of the prediction model in a subspace defined by a set of expected margin constraints. This elegant combination of probabilistic and maximum margin concepts
provides a natural path to incorporate hidden structured variables in learning max-margin Markov
networks (M3 N), which is the focus of this paper.
It has been shown in [20] that, in the fully observed case, MaxEnDNet subsumes the standard M3 N
[12]. But MaxEnDNet in its full generality offers a number of important advantages while retaining
all the merits of the M3 N. For example, structured prediction under MaxEnDNet is based on an averaging model and therefore enjoys a desirable smoothing effect, with a uniform convergence bound
on generalization error, as shown in [20]; MaxEnDNet admits a prior that can be designed to introduce useful regularization effects, such as a sparsity bias, as explored in the Laplace M3 N [19, 20].
In this paper, we explore yet another advantage of MaxEnDNet stemmed from the Bayesian-style
max-margin learning formalism on incorporating hidden variables. We present the partially observed MaxEnDNet (PoMEN), which offers a principled way to incorporate latent structures carrying domain knowledge and learn a discriminative model with partially labeled data. The reducibility of MaxEnDNet to M3 N renders many existing convex optimization algorithms developed for
learning M3 N directly applicable as subroutines for learning our proposed model. We describe an
EM-style algorithm for PoMEN based on existing algorithms for M3 N. As a practical application,
we apply the proposed model to a web data extraction task, product information extraction, where
collecting fully labeled training data is very difficult. The results show the promise of max-margin
learning as opposed to likelihood-based estimation in the presence of hidden variables.
The paper is organized as follows. Section 2 reviews the basic max-margin structured prediction
formalism and MaxEnDNet. Section 3 presents the partially observed MaxEnDNet. Section 4
applies the model to real web data extraction, and Section 5 brings this paper to a conclusion.
2 Preliminaries
Our goal is to learn a predictive function $h : \mathcal{X} \mapsto \mathcal{Y}$ from a structured input $\mathbf{x} \in \mathcal{X}$ to a structured output $\mathbf{y} \in \mathcal{Y}$, where $\mathcal{Y} = \mathcal{Y}_1 \times \cdots \times \mathcal{Y}_l$ represents a combinatorial space of structured interpretations of multi-facet objects. For example, in part-of-speech (POS) tagging, $\mathcal{Y}_i$ consists of all the POS tags, each label $\mathbf{y} = (y_1, \ldots, y_l)$ is a sequence of POS tags, and each input $\mathbf{x}$ is a sentence (word sequence). We assume that the feasible set of labels $\mathcal{Y}(\mathbf{x}) \subseteq \mathcal{Y}$ is finite for any $\mathbf{x}$.

Let $F(\mathbf{x}, \mathbf{y}; \mathbf{w})$ be a parametric discriminant function. A common choice of $F$ is a linear model, where $F$ is defined by a set of $K$ feature functions $f_k : \mathcal{X} \times \mathcal{Y} \mapsto \mathbb{R}$ and their weights $w_k$: $F(\mathbf{x}, \mathbf{y}; \mathbf{w}) = \mathbf{w}^\top \mathbf{f}(\mathbf{x}, \mathbf{y})$. A commonly used predictive function is:

$h_0(\mathbf{x}; \mathbf{w}) = \arg\max_{\mathbf{y} \in \mathcal{Y}(\mathbf{x})} F(\mathbf{x}, \mathbf{y}; \mathbf{w})$.  (1)
By using different loss functions, the parameters w can be estimated by maximizing the conditional
likelihood [7] or by maximizing the margin [2, 12, 13] on labeled training data.
2.1 Maximum margin Markov networks
Under the M3 N formalism, which we will generalize in this paper, given a set of fully labeled
training data $\mathcal{D} = \{(\mathbf{x}^i, \mathbf{y}^i)\}_{i=1}^N$, max-margin learning [12] solves the following optimization problem and achieves an optimum point estimate of the weight vector $\mathbf{w}$:

P0 (M3N):  $\min_{\mathbf{w} \in \mathcal{F}_0,\, \boldsymbol{\xi} \in \mathbb{R}_+^N}\ \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^N \xi_i$,  (2)

where $\xi_i$ represents a slack variable absorbing errors in training data, $C$ is a positive constant, $\mathbb{R}_+$ denotes the non-negative reals, and $\mathcal{F}_0$ is the feasible space for $\mathbf{w}$: $\mathcal{F}_0 = \{\mathbf{w} : \mathbf{w}^\top \Delta\mathbf{f}_i(\mathbf{y}) \geq \Delta\ell_i(\mathbf{y}) - \xi_i;\ \forall i, \forall \mathbf{y} \neq \mathbf{y}^i\}$, in which $\Delta\mathbf{f}_i(\mathbf{y}) = \mathbf{f}(\mathbf{x}^i, \mathbf{y}^i) - \mathbf{f}(\mathbf{x}^i, \mathbf{y})$, $\mathbf{w}^\top \Delta\mathbf{f}_i(\mathbf{y})$ is the "margin" between the true label $\mathbf{y}^i$ and a prediction $\mathbf{y}$, and $\Delta\ell_i(\mathbf{y})$ is a loss function with respect to $\mathbf{y}^i$.

Various loss functions have been proposed for P0. In this paper, we adopt the hamming loss [12]: $\Delta\ell_i(\mathbf{y}) = \sum_{j=1}^{|\mathbf{x}^i|} \mathbb{I}(y_j \neq y_j^i)$, where $\mathbb{I}(\cdot)$ is an indicator function that equals 1 if the argument is true and 0 otherwise. The optimization problem P0 is intractable because of the exponential number of constraints in $\mathcal{F}_0$. Exploring sparse dependencies among individual labels $y_i$ in $\mathbf{y}$, as reflected
of constraints in F0 . Exploring sparse dependencies among individual labels yi in y, as reflected
in the specific design of the feature functions (e.g., based on pair-wise labeling potentials), efficient
optimization algorithms based on cutting-plane [13] or message-passing [12], and various gradientbased methods [3, 10] have been proposed to obtain approximate solution to P0. As described
shortly, these algorithms can be directly employed as subroutines in solving our proposed model.
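To make the pieces of P0 concrete, here is a minimal sketch in Python of the hamming loss and the margin quantity that appear in the constraints of $\mathcal{F}_0$. The toy feature map, label set, and data are illustrative assumptions, not the paper's actual features.

```python
# Minimal sketch of the P0 quantities for a toy sequence-labeling instance.
import numpy as np

LABELS = [0, 1, 2]  # toy tag set, an assumption for illustration

def features(x, y):
    """Toy joint feature map f(x, y): per-label emission sums."""
    f = np.zeros(len(LABELS))
    for xj, yj in zip(x, y):
        f[yj] += xj  # emission-style feature; purely illustrative
    return f

def hamming_loss(y_true, y_pred):
    """Delta ell_i(y) = sum_j I(y_j != y_j^i)."""
    return sum(int(a != b) for a, b in zip(y_true, y_pred))

def margin(w, x, y_true, y_alt):
    """w^T Delta f_i(y) = w^T (f(x, y^i) - f(x, y))."""
    return w @ (features(x, y_true) - features(x, y_alt))

# The constraints in F0 require, for every alternative labeling y != y^i:
#     margin(w, x, y_true, y) >= hamming_loss(y_true, y) - xi_i
x, y_true, y_alt = [1.0, 0.5, 2.0], [0, 1, 2], [0, 2, 2]
w = np.array([0.3, -0.1, 0.6])
print(margin(w, x, y_true, y_alt), hamming_loss(y_true, y_alt))
```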
2.2 Maximum Entropy Discrimination Markov Networks
Instead of predicting based on a single rule $F(\cdot; \mathbf{w})$ as in M3N using $\mathbf{w}$, the structured maximum entropy discrimination formalism [19] facilitates a Bayes-style prediction by averaging $F(\cdot; \mathbf{w})$ over a distribution of rules according to a posterior distribution of the weights, $p(\mathbf{w})$:

$h_1(\mathbf{x}) = \arg\max_{\mathbf{y} \in \mathcal{Y}(\mathbf{x})} \int p(\mathbf{w})\, F(\mathbf{x}, \mathbf{y}; \mathbf{w})\, d\mathbf{w}$,  (3)
where p(w) is learned by solving an optimization problem referred to as a maximum entropy discrimination Markov network (MaxEnDNet, or MEN) [20] that elegantly combines Bayesian-style
learning with max-margin learning. In a MaxEnDNet, a prior over w is introduced to regularize its
distribution, and the margins resulting from predictor (3) are used to define a feasible distribution
subspace. More formally, given a set of fully observed training data D and a prior distribution
p0 (w), MaxEnDNet solves the following problem for an optimal posterior p(w|D) or p(w):
P1 (MaxEnDNet):  $\min_{p(\mathbf{w}) \in \mathcal{F}_1,\, \boldsymbol{\xi} \in \mathbb{R}_+^N}\ KL(p(\mathbf{w})\,\|\,p_0(\mathbf{w})) + U(\boldsymbol{\xi})$,  (4)

where the objective function $KL(p(\mathbf{w})\,\|\,p_0(\mathbf{w})) + U(\boldsymbol{\xi})$ is known as the generalized entropy [8, 5], or regularized KL-divergence, and $U(\boldsymbol{\xi})$ is a closed proper convex function over the slack variables $\boldsymbol{\xi}$. $U$ is also known as an additional "potential" term in the maximum entropy principle. The feasible distribution subspace $\mathcal{F}_1$ is defined as follows:

$\mathcal{F}_1 = \big\{ p(\mathbf{w}) : \int p(\mathbf{w}) [\Delta F_i(\mathbf{y}; \mathbf{w}) - \Delta\ell_i(\mathbf{y})]\, d\mathbf{w} \geq -\xi_i,\ \forall i, \forall \mathbf{y} \big\}$,

where $\Delta F_i(\mathbf{y}; \mathbf{w}) = F(\mathbf{x}^i, \mathbf{y}^i; \mathbf{w}) - F(\mathbf{x}^i, \mathbf{y}; \mathbf{w})$.
P1 is a variational optimization problem over $p(\mathbf{w})$ in the feasible subspace $\mathcal{F}_1$. Since both the KL-divergence and the $U$ function in P1 are convex, and the constraints in $\mathcal{F}_1$ are linear, P1 is a convex
program. Thus, one can apply the calculus of variations to the Lagrangian to obtain a variational
extremum, followed by a dual transformation of P1. As proved in [20], solution to P1 leads to a
GLIM for p(w), whose parameters are closely connected to the solution of the M3 N.
Theorem 1 (MaxEnDNet (adapted from [20])) The variational optimization problem P1 underlying a MaxEnDNet gives rise to the following optimum distribution of Markov network parameters:

$p(\mathbf{w}) = \frac{1}{Z(\alpha)}\, p_0(\mathbf{w}) \exp\Big\{ \sum_{i,\mathbf{y}} \alpha_i(\mathbf{y}) [\Delta F_i(\mathbf{y}; \mathbf{w}) - \Delta\ell_i(\mathbf{y})] \Big\}$,  (5)

where $Z(\alpha)$ is a normalization factor and the Lagrange multipliers $\alpha_i(\mathbf{y})$ (corresponding to the constraints in $\mathcal{F}_1$) can be obtained by solving the following dual problem of P1:

D1:  $\max_{\alpha}\ -\log Z(\alpha) - U^*(\alpha)$  s.t. $\alpha_i(\mathbf{y}) \geq 0,\ \forall i, \forall \mathbf{y}$,

where $U^*(\cdot)$ is the conjugate of the slack function $U(\cdot)$, i.e., $U^*(\alpha) = \sup_{\boldsymbol{\xi}} \big( \sum_{i,\mathbf{y}} \alpha_i(\mathbf{y}) \xi_i - U(\boldsymbol{\xi}) \big)$.
P
It can be shown that when F (x, y; w) = w> f (x, y), U (?) = C i ?P
i , and p0 (w) is a standard
Gaussian N (w|0, I), then p(w) is also a Gaussian with shifted mean i,y ?i (y)?fi (y) and covariance matrix I, where the Lagrangian multipliers ?i (y) can be obtained by solving problem D1
of the form that is isomorphic to the dual of M3 N. When applying this p(w) to Eq. (3), one can
obtain a predictor that is identical to that of the M3 N.
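As a concrete illustration of this reduction, the following sketch computes the Gaussian posterior mean and the resulting averaging predictor under the linear/Gaussian assumptions just stated. The dual variables here are placeholders that would normally come from an M3N dual solver; all names and the toy data are ours, not the authors'.

```python
# Minimal sketch: MaxEnDNet posterior mean under a Gaussian prior,
# and the averaging predictor (3) it induces.
import numpy as np

def posterior_mean(alpha, delta_f):
    """mu_w = sum_{i,y} alpha_i(y) * Delta f_i(y)."""
    dim = len(next(iter(delta_f[0].values())))
    mu = np.zeros(dim)
    for alphas_i, feats_i in zip(alpha, delta_f):
        for y, a in alphas_i.items():
            mu += a * feats_i[y]
    return mu

# Toy usage: two instances with precomputed Delta f_i(y) per alternative y.
delta_f = [{"y'": np.array([1.0, 0.0]), "y''": np.array([0.0, 1.0])},
           {"y'": np.array([0.5, 0.5])}]
alpha = [{"y'": 0.2, "y''": 0.1}, {"y'": 0.3}]  # would come from a dual solver
mu_w = posterior_mean(alpha, delta_f)

# Averaging predictor in the Gaussian case: argmax_y mu_w^T f(x, y).
def predict(mu_w, feature_fn, x, candidate_ys):
    return max(candidate_ys, key=lambda y: mu_w @ feature_fn(x, y))
```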
From the above reduction, it should be clear that M3 N is a special case of MaxEnDNet. But the
MaxEnDNet in its full generality offers a number of important advantages while retaining all the
Figure 1: (a) A web page with two data records containing 7 and 8 elements respectively; (b) A partial vision
tree of the page in Figure 1(a), where grey nodes are the roots of the two records; (c) A label hierarchy for
product information extraction, where the root node represents an entire instance (a web page); leaf nodes are
the attributes (i.e. Name, Image, Price, and Description); and inner nodes are the intermediate class labels
defined for parts of a web page, e.g. {N, I} is a class label for blocks containing both Name and Image.
merits of the M3 N. First, the MaxEnDNet prediction is based on model averaging and therefore
enjoys a desirable smoothing effect, with a uniform convergence bound on generalization error, as
shown in [20]. Second, MaxEnDNet admits a prior that can be designed to introduce useful regularization effects, such as a sparsity bias, as explored in the Laplace M3 N [19, 20]. Third, as explored
in this paper, MaxEnDNet offers a principled way to incorporate hidden generative models underlying the structured predictions, but allows the predictive model to be discriminatively trained based
on partially labeled data. In the sequel, we introduce partially observed MaxEnDNet (PoMEN), that
combines (possibly latent) generative model and discriminative training for structured prediction.
3 Partially Observed MaxEnDNet
Consider, for example, the problem of web data extraction, which is to identify interested information from web pages. Each sample is a data record or an entire web page which is represented as a set
of HTML elements. One striking characteristic of web data extraction is that various types of structural dependencies between HTML elements exist, e.g. the HTML tag tree or the Document Object
Model (DOM) structure is itself hierarchical. In [17], fully observed hierarchical CRFs are shown
to have great promise and achieve better performance than flat models like linear-chain CRFs [7].
One method to construct a hierarchical model is to first use a parser to construct a so called vision
tree [17]. For example, Figure 1(b) is a part of the vision tree of the page in Figure 1(a). Then, based
on the vision tree, a hierarchical model can be constructed accordingly to extract the interested attributes, e.g. a product?s name, image, price, description, etc. In such a hierarchical extraction
model, inner nodes are useful to incorporate long distance dependencies, and the variables at one
level are refinements of the variables at upper levels. To reflect the refinement relationship, the class
labels defined as in [17] are also organized in a hierarchy as in Figure 1(c). Due to concerns over
labeling cost and annotation-ambiguity caused by the overlapping of class labels as in Figure 1(c),
it is desirable to effectively learn a hierarchical extraction model with partially labeled data.
Without loss of generality, assume that the structured labeling of a sample consists of two parts: an observed part $\mathbf{y}$ and a hidden part $\mathbf{z}$. Both $\mathbf{y}$ and $\mathbf{z}$ are structured labels, and furthermore the hidden variables are not isolated, but are statistically dependent on each other and on the observed data according to a graphical model $p(\mathbf{y}, \mathbf{z}, \mathbf{w}|\mathbf{x}) = p(\mathbf{w}, \mathbf{z}|\mathbf{x})\, p(\mathbf{y}|\mathbf{x}, \mathbf{z}, \mathbf{w})$, where $p(\mathbf{y}|\mathbf{x}, \mathbf{z}, \mathbf{w})$ takes the form of a Boltzmann distribution $p(\mathbf{y}|\mathbf{x}, \mathbf{z}, \mathbf{w}) = \frac{1}{Z} \exp\{F(\mathbf{x}, \mathbf{y}, \mathbf{z}; \mathbf{w})\}$ and $\mathbf{x}$ is a global condition as in CRFs [7]. Following the spirit of a margin-based structured predictor such as M3N, we employ only the unnormalized energy function $F(\mathbf{x}, \mathbf{y}, \mathbf{z}; \mathbf{w})$ (which usually consists of linear combinations of feature functions or potentials) as the cost function for structured prediction, and we adopt a prediction rule directly extended from the MaxEnDNet: average over all the possible models defined by different $\mathbf{w}$, and at the same time marginalize over all hidden variables $\mathbf{z}$. That is,

$h_2(\mathbf{x}) = \arg\max_{\mathbf{y} \in \mathcal{Y}(\mathbf{x})} \sum_{\mathbf{z}} \int p(\mathbf{w}, \mathbf{z})\, F(\mathbf{x}, \mathbf{y}, \mathbf{z}; \mathbf{w})\, d\mathbf{w}$.  (6)
Now our problem is learning the optimum $p(\mathbf{w}, \mathbf{z})$ from data. Let $\{\mathbf{z}\} \triangleq (\mathbf{z}^1, \ldots, \mathbf{z}^N)$ denote the ensemble of hidden labels of all the samples. Analogous to the setup for learning the MaxEnDNet, we specify a prior distribution $p_0(\{\mathbf{z}\})$ over all the hidden structured labels. The feasible space $\mathcal{F}_2$ of $p(\mathbf{w}, \{\mathbf{z}\})$ can be defined as follows according to the margin constraints:

$\mathcal{F}_2 = \big\{ p(\mathbf{w}, \{\mathbf{z}\}) : \sum_{\mathbf{z}} \int p(\mathbf{w}, \mathbf{z}) [\Delta F_i(\mathbf{y}, \mathbf{z}; \mathbf{w}) - \Delta\ell_i(\mathbf{y})]\, d\mathbf{w} \geq -\xi_i,\ \forall i, \forall \mathbf{y} \big\}$,
where $\Delta F_i(\mathbf{y}, \mathbf{z}; \mathbf{w}) = F(\mathbf{x}^i, \mathbf{y}^i, \mathbf{z}; \mathbf{w}) - F(\mathbf{x}^i, \mathbf{y}, \mathbf{z}; \mathbf{w})$, and $p(\mathbf{w}, \mathbf{z})$ is the marginal distribution of $p(\mathbf{w}, \{\mathbf{z}\})$ on a single sample, which will be used in (6) to compute the structured prediction.
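For intuition, here is a minimal sketch of the prediction rule (6) in the linear, factorized case, where it reduces to $\arg\max_{\mathbf{y}} \mu_{\mathbf{w}}^\top \mathbb{E}_{p(\mathbf{z})}[\mathbf{f}(\mathbf{x}, \mathbf{y}, \mathbf{z})]$. The brute-force enumeration over hidden labelings is an illustrative assumption; real models would exploit the graph structure instead.

```python
# Minimal sketch of the PoMEN prediction rule (6), linear/factorized case.
import itertools
import numpy as np

def expected_features(feature_fn, x, y, p_z, z_values, z_len):
    """E_{p(z)}[f(x, y, z)] by brute-force enumeration of hidden labelings."""
    exp_f = None
    for z in itertools.product(z_values, repeat=z_len):
        f = p_z(z) * feature_fn(x, y, z)
        exp_f = f if exp_f is None else exp_f + f
    return exp_f

def h2(mu_w, feature_fn, x, candidate_ys, p_z, z_values, z_len):
    """argmax_y mu_w^T E_{p(z)}[f(x, y, z)]."""
    return max(candidate_ys,
               key=lambda y: mu_w @ expected_features(
                   feature_fn, x, y, p_z, z_values, z_len))

# Toy usage: binary hidden labels of length 2 under a uniform p(z).
feat = lambda x, y, z: np.array([x * (y == 0), sum(z)], dtype=float)
p_uniform = lambda z: 0.25
print(h2(np.array([0.7, 0.1]), feat, 1.0, [0, 1], p_uniform, (0, 1), 2))
```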
Again we learn the optimum $p(\mathbf{w}, \{\mathbf{z}\})$ based on a structured minimum relative entropy principle as in MaxEnDNet. Specifically, letting $p_0(\mathbf{w}, \{\mathbf{z}\})$ represent a given joint prior over the parameters and the hidden variables, we define the PoMEN problem that gives rise to the optimum $p(\mathbf{w}, \{\mathbf{z}\})$:

P2 (PoMEN):  $\min_{p(\mathbf{w}, \{\mathbf{z}\}) \in \mathcal{F}_2,\, \boldsymbol{\xi} \in \mathbb{R}_+^N}\ KL(p(\mathbf{w}, \{\mathbf{z}\})\,\|\,p_0(\mathbf{w}, \{\mathbf{z}\})) + U(\boldsymbol{\xi})$.  (7)
Analogous to P1, P2 is a variational optimization problem over p(w, {z}) in the feasible space F2 .
Again since both the KL and the U function in P2 are convex, and the constraints in F2 are linear,
P2 is a convex program. Thus, we can employ a technique similar to that used to solve MaxEnDNet
to solve the PoMEN problem.
3.1 Learning PoMEN
For a fully general p(w, {z}) where hidden variables in all samples are coupled, solving P2 based on
an extension of Theorem 1 would involve very high-dimensional integration and summation that is
in practice intractable. In this paper we consider a simpler case where the hidden labels of different
samples are iid and independent of the parameter w in both the prior and the posterior distributions,
that is, $p_0(\mathbf{w}, \{\mathbf{z}\}) = p_0(\mathbf{w}) \prod_{i=1}^N p_0(\mathbf{z}^i)$ and $p(\mathbf{w}, \{\mathbf{z}\}) = p(\mathbf{w}) \prod_{i=1}^N p(\mathbf{z}^i)$. This assumption
will hold true in a graphical model where w corresponds to only the observed y variables at the
bottom of a hierarchical model. For many practical applications such as the hierarchical web-info
extraction, such a model is realistic and adequate. For more general models where dependencies are
more global, we can use the above factored model as a generalized mean field approximation to the
true distribution, but this extension is beyond the scope of this paper, and will be explored later in
the full paper. Generalizing Theorem 1, following a coordinate descent principle, now we present
an alternating minimization (EM-style) procedure for P2:
Step 1: keep $p(\mathbf{z})$ fixed, infer $p(\mathbf{w})$ by solving the following problem:

$\min_{p(\mathbf{w}) \in \mathcal{F}_1',\, \boldsymbol{\xi} \in \mathbb{R}_+^N}\ KL(p(\mathbf{w})\,\|\,p_0(\mathbf{w})) + C \sum_i \xi_i$,  (8)

where $\mathcal{F}_1' = \{p(\mathbf{w}) : \int p(\mathbf{w})\, \mathbb{E}_{p(\mathbf{z})}[\Delta F_i(\mathbf{y}, \mathbf{z}; \mathbf{w}) - \Delta\ell_i(\mathbf{y})]\, d\mathbf{w} \geq -\xi_i,\ \forall i, \forall \mathbf{y}\}$, which is a generalized version of $\mathcal{F}_1$ with hidden variables. Thus, we can apply the same convex optimization techniques as used for solving problem P1. Specifically, assume that the prior distribution $p_0(\mathbf{w})$ is a standard normal and $F(\mathbf{x}, \mathbf{y}, \mathbf{z}; \mathbf{w}) = \mathbf{w}^\top \mathbf{f}(\mathbf{x}, \mathbf{y}, \mathbf{z})$; then the solution (i.e., posterior distribution) is $p(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mu_{\mathbf{w}}, I)$, where $\mu_{\mathbf{w}} = \sum_{i,\mathbf{y}} \alpha_i(\mathbf{y})\, \mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})]$. The dual variables $\alpha$ are obtained by solving a dual problem:
$\max_{\alpha \in \mathcal{P}(C)}\ \sum_{i,\mathbf{y}} \alpha_i(\mathbf{y}) \Delta\ell_i(\mathbf{y}) - \frac{1}{2} \Big\| \sum_{i,\mathbf{y}} \alpha_i(\mathbf{y})\, \mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})] \Big\|^2$,  (9)

where $\mathcal{P}(C) = \{\alpha : \sum_{\mathbf{y}} \alpha_i(\mathbf{y}) = C;\ \alpha_i(\mathbf{y}) \geq 0,\ \forall i, \forall \mathbf{y}\}$. This dual problem is isomorphic to the dual form of the M3N optimization problem, and we can use existing algorithms developed for M3N, such as [12, 3], to solve it. Alternatively, we can solve the following primal problem by employing existing subgradient [10] or cutting-plane [13] algorithms:
$\min_{\mathbf{w} \in \mathcal{F}_0',\, \boldsymbol{\xi} \in \mathbb{R}_+^N}\ \frac{1}{2}\mathbf{w}^\top \mathbf{w} + C \sum_{i=1}^N \xi_i$,  (10)

where $\mathcal{F}_0' = \{\mathbf{w} : \mathbf{w}^\top \mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})] \geq \Delta\ell_i(\mathbf{y}) - \xi_i;\ \xi_i \geq 0,\ \forall i, \forall \mathbf{y}\}$, which is a generalized version of $\mathcal{F}_0$. It is easy to show that the solution to this primal problem is the posterior mean of $p(\mathbf{w})$, which will be used to make predictions in the predictive function $h_2$. Note that the primal problem is very similar to that of M3N, except for the expectations in $\mathcal{F}_0'$. This is not surprising, since it can be shown that M3N is a special case of MaxEnDNet. We discuss how to efficiently compute the expectations $\mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})]$ in Step 2.
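A minimal subgradient sketch of the primal problem (10), in the spirit of [10], might look as follows. Loss-augmented inference is done by brute force over a small candidate set, which is an assumption for illustration; structured models would use dynamic programming instead, and all names are ours.

```python
# Minimal subgradient sketch for primal (10): minimize
#   0.5 * ||w||^2 + C * sum_i hinge over expected features.
import numpy as np

def subgradient_step(w, exp_feats, losses, y_true_idx, C, eta):
    """One pass over training instances.

    exp_feats[i][y] = E_{p(z)}[f(x^i, y, z)] (precomputed per candidate y)
    losses[i][y]    = Delta ell_i(y)
    y_true_idx[i]   = index of the true labeling in the candidate list
    """
    g = w.copy()  # gradient of the 0.5 * ||w||^2 term
    for i, feats in enumerate(exp_feats):
        f_true = feats[y_true_idx[i]]
        # Loss-augmented inference: the most violated candidate labeling.
        y_hat = max(range(len(feats)),
                    key=lambda y: w @ feats[y] + losses[i][y])
        # Subgradient of the hinge term when the constraint is violated.
        if w @ (f_true - feats[y_hat]) < losses[i][y_hat]:
            g += C * (feats[y_hat] - f_true)
    return w - eta * g
```

Iterating this step with a decaying step size eta would, under the usual subgradient-method conditions, converge to the posterior mean used by the predictor.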
Step 2: keep $p(\mathbf{w})$ fixed; based on the factorization assumptions $p(\{\mathbf{z}\}) = \prod_i p(\mathbf{z}^i)$ and $p_0(\{\mathbf{z}\}) = \prod_i p_0(\mathbf{z}^i)$, the distribution $p(\mathbf{z})$ for each sample $i$ can be obtained by solving the following problem:

$\min_{p(\mathbf{z}) \in \mathcal{F}_1^\star,\, \xi_i \in \mathbb{R}_+}\ KL(p(\mathbf{z})\,\|\,p_0(\mathbf{z})) + C \xi_i$,  (11)
where $\mathcal{F}_1^\star = \{p(\mathbf{z}) : \sum_{\mathbf{z}} p(\mathbf{z}) \int p(\mathbf{w})[\mathbf{w}^\top \Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z}) - \Delta\ell_i(\mathbf{y})]\, d\mathbf{w} \geq -\xi_i,\ \forall \mathbf{y}\}$. Since $p(\mathbf{w})$ is a normal distribution, as shown in Step 1, $\mathcal{F}_1^\star = \{p(\mathbf{z}) : \sum_{\mathbf{z}} p(\mathbf{z})[\mu_{\mathbf{w}}^\top \Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z}) - \Delta\ell_i(\mathbf{y})] \geq -\xi_i,\ \forall \mathbf{y}\}$. Similarly, by introducing a set of Lagrange multipliers $\lambda(\mathbf{y})$, we can get:

$p(\mathbf{z}) = \frac{1}{Z(\lambda)}\, p_0(\mathbf{z}) \exp\Big\{ \sum_{\mathbf{y}} \lambda(\mathbf{y}) [\mu_{\mathbf{w}}^\top \Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z}) - \Delta\ell_i(\mathbf{y})] \Big\}$,
and the dual variables $\lambda(\mathbf{y})$ can be obtained by solving the following dual problem:

$\max_{\lambda \in \mathcal{P}_i(C)}\ -\log \sum_{\mathbf{z}} p_0(\mathbf{z}) \exp\Big\{ \sum_{\mathbf{y}} \lambda(\mathbf{y}) [\mu_{\mathbf{w}}^\top \Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z}) - \Delta\ell_i(\mathbf{y})] \Big\}$,  (12)

where $\mathcal{P}_i(C) = \{\lambda : \sum_{\mathbf{y}} \lambda(\mathbf{y}) = C,\ \lambda(\mathbf{y}) \geq 0,\ \forall \mathbf{y}\}$. This non-linear constrained optimization problem can be solved with existing solvers, like IPOPT [15]. With a little algebra, we can compute the gradients as follows:

$\frac{\partial \log Z(\lambda)}{\partial \lambda(\mathbf{y})} = \mu_{\mathbf{w}}^\top\, \mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})] - \Delta\ell_i(\mathbf{y})$.
To efficiently calculate the expectations $\mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})]$ required in Step 1 and in the above gradients, we make a gentle assumption that the prior distribution $p_0(\mathbf{z})$ is an exponential distribution of the following form:

$p_0(\mathbf{z}) = \exp\Big\{ \sum_m \Phi_m(\mathbf{z}) \Big\}$.  (13)
This assumption is general enough for our purpose, and covers the following commonly used priors:

i. Log-linear Prior: defined by a set of feature functions and their weights. For example, in a pairwise Markov network, we can define the prior model as $p_0(\mathbf{z}) \propto \exp\big\{ \sum_{(i,j) \in E} \sum_k \theta_k g_k(z_i, z_j) \big\}$, where the $g_k(z_i, z_j)$ are feature functions and the $\theta_k$ are weights.

ii. Independent Prior: defined as $p_0(\mathbf{z}) = \prod_{j=1}^{\ell} p_0(z_j)$. In the logarithm space, we can write it as $p_0(\mathbf{z}) = \exp\big\{ \sum_{j=1}^{\ell} \log p_0(z_j) \big\}$.

iii. Markov Prior: the prior model has the Markov property w.r.t. the model's structure. For example, for a chain graph, the prior distribution can be written as $p_0(\mathbf{z}) = p(z_1) \prod_{j=2}^{\ell} p_0(z_j | z_{j-1})$. Similarly, in the logarithm space, $p_0(\mathbf{z}) = \exp\big\{ \log p_0(z_1) + \sum_{j=2}^{\ell} \log p_0(z_j | z_{j-1}) \big\}$.
With the above assumption, $p(\mathbf{z})$ is an exponential family distribution, and the expectations $\mathbb{E}_{p(\mathbf{z})}[\Delta\mathbf{f}_i(\mathbf{y}, \mathbf{z})]$ can be efficiently calculated by exploiting the sparseness of the model's structure to compute marginal probabilities, e.g. $p(z_i)$ and $p(z_i, z_j)$ in pairwise Markov networks. When the model's tree width is not large, this can be done exactly. For complex models, approximate inference like loopy belief propagation and variational methods can be applied. However, since the number of constraints in (12) is exponential in the size of the observed labels, the optimization problem cannot be solved efficiently as stated. A key observation, as explored in [12], is that we can interpret $\lambda(\mathbf{y})$ as a probability distribution over $\mathbf{y}$ because of the regularity constraints $\sum_{\mathbf{y}} \lambda(\mathbf{y}) = C,\ \lambda(\mathbf{y}) \geq 0,\ \forall \mathbf{y}$. Thus, we can introduce a set of marginal dual variables and transform the dual problem (12) into an equivalent form with a polynomial number of constraints. The derivative with respect to each marginal dual parameter has the same structure as the above gradients.
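Putting the two steps together, the overall alternating procedure can be sketched as follows. The two solver calls are stand-ins (our names, not the authors') for an existing M3N solver in Step 1 and a constrained solver such as IPOPT for problem (12) in Step 2.

```python
# High-level sketch of the EM-style alternating procedure of Section 3.1.
def learn_pomen(data, p_z_init, solve_m3n_dual, fit_pz, n_iters=3):
    """Alternate between p(w) (Step 1) and p(z) (Step 2).

    solve_m3n_dual(data, p_z) -> posterior mean mu_w, via any M3N solver
    fit_pz(sample, mu_w)      -> updated p(z^i), e.g. via IPOPT on (12)
    Both callables are assumptions standing in for existing solvers.
    """
    p_z = p_z_init
    mu_w = None
    for _ in range(n_iters):
        # Step 1: with p(z) fixed, expected features define an M3N-style
        # problem whose solution gives the posterior mean mu_w.
        mu_w = solve_m3n_dual(data, p_z)
        # Step 2: with mu_w fixed, update each sample's hidden-label
        # distribution under the expected-margin constraints.
        p_z = [fit_pz(sample, mu_w) for sample in data]
    return mu_w, p_z
```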
4 Experiments
We apply PoMEN to the problem of web data extraction, and compare it with partially observed
CRFs (PoHCRF) [9], and fully observed hierarchical CRFs (HCRF) [17] and hierarchical M3 N
(HM3 N) which has the same hierarchical model structure as the HCRF.
4.1 Data Sets, Evaluation Criteria, and Prior for Latent Variables
We concern ourselves with the problem of identifying product items for sale on the web. For each
product item, four attributes (Name, Image, Price, and Description) are extracted in our experiments.
The evaluation data consists of product web pages generated from 37 different templates. For each
template, there are 5 pages for training and 10 for testing. We evaluate all the methods on two
different levels of inputs, record level and page level. For record-level evaluation, we assume that
data records are given, and we compare different models on accuracy of extracting attributes in the
given records. For page-level evaluation, the inputs are raw web pages and all the models perform
[Figure 2 graphic: curves of F1 for the attributes Name, Image, Price, and Description, plus average F1 and block instance accuracy, versus training ratio, comparing HCRF, PoHCRF, HM3N, and PoM3N.]
Figure 2: (a) The F1 and block instance accuracy of record-level evaluation from 4 models under different
amount of training data. (b) The F1 and its variance on the attributes: Name, Image, Price, and Description.
[Figure 3 graphic: average F1 and block instance accuracy versus training ratio for HCRF, PoHCRF, HM3N, and PoM3N, under the two partial-labeling strategies ST1 and ST2.]
Figure 3: The average F1 and block instance accuracy of different models with different ratios of training data
for two types of page-level evaluation: (a) ST1; and (b) ST2.
both record detection and attribute extraction simultaneously as in [17]. In the 185 training pages,
there are 1585 data records in total; in the 370 testing pages, 3391 data records are collected. As
for evaluation criteria, we use the standard precision, recall, and their harmonic value F1 for each
attribute and the two comprehensive measures, i.e. average F1 and block instance accuracy, as
defined in [17]. We adopt an independent prior described earlier for the latent variables, each factor
p0 (zi ) over a single latent label is assumed to be uniform.
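For concreteness, a minimal sketch of these criteria (per-attribute F1, average F1, and block instance accuracy) is given below; the data layout is an assumption, and the exact definitions follow [17].

```python
# Minimal sketch of the evaluation criteria: per-attribute F1, average F1,
# and block instance accuracy (fraction of records with all attributes
# extracted correctly).
def f1_score(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def evaluate(records, attributes):
    """records: list of dicts mapping attribute -> (gold_set, pred_set)."""
    f1s = {}
    for a in attributes:
        tp = sum(len(g & p) for r in records for g, p in [r[a]])
        fp = sum(len(p - g) for r in records for g, p in [r[a]])
        fn = sum(len(g - p) for r in records for g, p in [r[a]])
        f1s[a] = f1_score(tp, fp, fn)
    block_acc = sum(all(r[a][0] == r[a][1] for a in attributes)
                    for r in records) / len(records)
    return f1s, sum(f1s.values()) / len(f1s), block_acc
```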
4.2 Record-Level Evaluation
In this evaluation, partially observed training data are the data records whose leaf nodes are labeled
and inner nodes are hidden. We randomly select m = 5, 10, 20, 30, 40, or 50 percent of the training
records as training data, and test on all the testing records. For each m, 10 independent experiments
were conducted and the average performance is summarized in Figure 2. From Figure 2(a), it can
be seen that the HM3 N performs slightly better than HCRF trained on fully labeled data. For the
two partially observed models, PoMEN performs much better than PoHCRF in both average F1
and block instance accuracy, and with lower variances of the score, especially when the training
set is small. As the number of training data increases, PoMEN performs comparably w.r.t. the
fully observed HM3 N. For all the models, higher scores and lower variances are achieved with
more training data. Figure 2(b) shows the F1 score on each attribute. Overall, for attributes Image,
Price, and Description, although all models generally perform better with more training data, the
improvement is small; and the differences between different models are small. This is possibly
because the features of these attributes are usually consistent and distinctive, and therefore easier to
learn and predict. For the attribute Name, however, a large number of training data are needed to
learn a good model because its underlying features have diverse appearance on web pages.
4.3 Page-Level Evaluation
Experiments on page-level prediction are conducted similarly to the above, and the results are summarized in Figure 3. Two different partial labeling strategies are used to generate training data. ST1:
label the leaf nodes and the nodes that represent data records; ST2: label more information based
on ST1, e.g., label also the nodes above the ?Data Record? nodes in the hierarchy as in Figure 1(c).
Due to space limitation, we only report average F1 and block instance accuracy.
For ST1, PoMEN achieves better scores and lower variances than PoHCRF in both average F1 and
block instance accuracy. The HM3 N performs slightly better than HCRF (both trained on full labeling), and PoMEN performs comparably with the fully observed HCRF in block instance accuracy.
For ST2, with more supervision information, PoHCRF achieves higher performance that is comparable to that of HM3 N in average F1, but slightly lower than HM3 N in block instance accuracy. For
the latent models, PoHCRF performs slightly better in average F1, and PoMEN does better in block
instance accuracy; moreover, the variances of PoMEN are much smaller than those of PoHCRF in
both average F1 and block instance accuracy. We can also see that PoMEN does not change much
when additional label information is provided in ST2. Thus, the max-margin principle could provide
a better paradigm than the likelihood-based estimation for learning latent hierarchical models.
For the second step of learning PoMEN, the IPOPT solver [15] was used to compute the distribution
p(z). Interestingly, the performance of PoMEN does not change much during the iteration, and
our results were achieved within 3 iterations. It is possible that in hierarchical models, since inner
variables usually represent overlapping concepts, the initial distribution are already reasonably good
to describe confidence on the labeling due to implicit consistence across the labels. This is unlike
the multi-label learning [6] where only one of the multiple labels is true and during the iteration
more probability mass should be redistributed on the true label during the EM iterations.
5
Conclusions
We have presented an extension of the standard max-margin learning to address the challenging
problem of learning Markov networks with the existence of structured hidden variables. Our approach is a generalization of the maximum entropy discrimination Markov networks (MaxEnDNet),
which offer a general framework to combine Bayesian-style and max-margin learning and subsume
the standard M3 N as a special case, to consider structured hidden variables. For the partially observed MaxEnDNet, we developed an EM-style algorithm based on existing convex optimization
algorithms developed for the standard M3 N. We applied the proposed model to a real-world web
data extraction task and showed that learning latent hierarchical models based on the max-margin
principle could be better than the likelihood-based learning with hidden variables.
Acknowledgments
This work was done while J.Z. was a visiting researcher at CMU under a State Scholarship from China, with support from NSF DBI-0546594 and DBI-0640543 awarded to E.X.; J.Z. and B.Z. are also supported by Chinese
NSF Grant 60621062 and 60605003; National Key Foundation R&D Projects 2003CB317007, 2004CB318108
and 2007CB311003; and Basic Research Foundation of Tsinghua National Lab for Info Sci & Tech.
References
[1] Y. Altun, D. McAllester, and M. Belkin. Maximum margin semi-supervised learning for structured
variables. In NIPS, 2006.
[2] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden markov support vector machines. In ICML, 2003.
[3] P. Bartlett, M. Collins, B. Taskar, and D. McAllester. Exponentiated gradient algorithms for larg-margin
structured classification. In NIPS, 2004.
[4] U. Brefeld and T. Scheffer. Semi-supervised learning for structured output variables. In ICML, 2006.
[5] M. Dudík, S.J. Phillips, and R.E. Schapire. Maximum entropy density estimation with generalized
regularization and an application to species distribution modeling. JMLR, (8):1217-1260, 2007.
[6] R. Jin and Z. Ghahramani. Learning with multiple labels. In NIPS, 2002.
[7] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In ICML, 2001.
[8] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In NIPS, 2001.
[9] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In NIPS, 2004.
[10] N.D. Ratliff, J.A. Bagnell, and M.A. Zinkevich. (online) subgradient methods for structured prediction.
In AISTATS, 2007.
[11] F. Sha and L. Saul. Large margin hidden markov models for automatic speech recognition. In NIPS, 2006.
[12] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003.
[13] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for
interdependent and structured output spaces. In ICML, 2004.
[14] J. Verbeek and B. Triggs. Scene segmentation with conditional random fields learned from partially
labeled images. In NIPS, 2007.
[15] A. Wächter and L.T. Biegler. On the implementation of a primal-dual interior point filter line search
algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25-57, 2006.
[16] L. Xu, D. Wilkinson, F. Southey, and D. Schuurmans. Discriminative unsupervised learning of structured
predictors. In ICML, 2006.
[17] J. Zhu, Z. Nie, J.-R. Wen, B. Zhang, and W.-Y. Ma. Simultaneous record detection and attribute labeling
in web data extraction. In SIGKDD, 2006.
[18] J. Zhu, Z. Nie, B. Zhang, and J.-R. Wen. Dynamic hierarchical markov random fields and their application
to web data extraction. In ICML, 2007.
[19] J. Zhu, E.P. Xing, and B. Zhang. Laplace maximum margin markov networks. In ICML, 2008.
[20] J. Zhu, E.P. Xing, and B. Zhang. Maximum entropy discrimination markov networks. Technical Report
CMU-ML-08-104, Machine Learning Department, Carnegie Mellon University, 2008.
2,646 | 34 |
A Computer Simulation of Olfactory Cortex With Functional Implications for
Storage and Retrieval of Olfactory Information
Matthew A. Wilson and James M. Bower
Computation and Neural Systems Program
Division of Biology, California Institute of Technology, Pasadena, CA 91125
ABSTRACT
Based on anatomical and physiological data, we have developed a computer simulation of piriform (olfactory) cortex which is capable of reproducing spatial and temporal patterns of actual
cortical activity under a variety of conditions. Using a simple Hebb-type learning rule in conjunction with the cortical dynamics which emerge from the anatomical and physiological organization of the model, the simulations are capable of establishing cortical representations for different input patterns. The basis of these representations lies in the interaction of sparsely distributed, highly divergent/convergent interconnections between modeled neurons. We have shown that
different representations can be stored with minimal interference, and that following learning
these representations are resistant to input degradation, allowing reconstruction of a representation following only a partial presentation of an original training stimulus. Further, we have
demonstrated that the degree of overlap of cortical representations for different stimuli can
also be modulated. For instance similar input patterns can be induced to generate distinct cortical
representations (discrimination). while dissimilar inputs can be induced to generate overlapping
representations (accommodation). Both features are presumably important in classifying olfactory stimuli.
INTRODUCTION
Piriform cortex is a primary olfactory cerebral cortical structure which receives
second order input from the olfactory receptors via the olfactory bulb (Fig. 1). It
is believed to play a significant role in the classification and storage of olfactory
information1,2,3. For several years we have been using computer simulations as a
tool for studying information processing within this cortex4,5. While we are ultimately interested in higher order functional questions, our first modeling objective
was to construct a computer simulation which contained sufficient neurobiological
detail to reproduce experimentally obtained cortical activity patterns. We believe
this first step is crucial both to establish correspondences between the model and
the cortex, and to assure that the model is capable of generating output that can
be compared to data from actual physiological experiments. In the current case,
having demonstrated that the behavior of the simulation at least approximates
that of the actual cortex4 (Fig. 3), we are now using the model to explore the
types of processing which could be carried out by this cortical structure. In particular, in this paper we will describe the ability of the simulated cortex to store and
recall cortical activity patterns generated by stimuli under various conditions. We
believe this approach can be used to provide experimentally testable hypotheses
concerning the functional organization of this cortex which would have been difficult to deduce solely from neurophysiological or neuroanatomical data.
© American Institute of Physics 1988
[Figure 1 block diagram: Olfactory Receptors → Olfactory Bulb → (LOT) → Piriform Cortex, with connections to Higher Cortical Areas, Hippocampus, Entorhinal Cortex, and Other Olfactory Structures]
Fig. 1. Simplified block diagram of the olfactory system and closely related structures.
MODEL DESCRIPTION
This model is largely instructed by the neurobiology of piriform cortex [3]. Axonal conduction velocities, time delays, and the general properties of neuronal integration and the major intrinsic neuronal connections approximate those currently
described in the actual cortex. However, the simulation reduces both the number
and complexity of the simulated neurons (see below). As additional information
concerning the these or other important features of the cortex is obtained it will be
incorporated in the model. Bracketed numbers in the text refer to the relevant
mathematical expressions found in the appendix.
Neurons. The model contains three distinct populations of intrinsic cortical
neurons, and a fourth set of cells which simulate cortical input from the olfactory
bulb (Fig. 2). The intrinsic neurons consist of an excitatory population of pyramidal neurons (which are the principle neuronal type in this cortex), and two populations of inhibitory interneurons. In these simulations each population is modeled
as 100 neurons arranged in a 10x10 array (the actual piriform cortex of the rat
contains on the order of 10^6 neurons). The output of each modeled cell type consists of an all-or-none action potential which is generated when the membrane
potential of the cell crosses a threshold [2.3]. This output reaches other neurons
after a delay which is a function of the velocity of the fiber which connects them
and the cortical distance from the originating neuron to each target neuron [2.0,
2.4]. When an action potential arrives at a destination cell it triggers a conductance change in a particular ionic channel type in that cell which has a characteristic time course, amplitude, and waveform [2.0, 2.1]. The effect of this conductance
change on the transmembrane potential is to drive it towards the equilibrium
potential of that channel. Na+, CI-, and K+ channels are included in the model.
These channels are differentially activated by activity in synapses associated with
different cell types (see below).
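For concreteness, the following minimal sketch (in Python, with illustrative names, constants, and a fixed Euler time step that are assumptions of this sketch, not the authors' implementation) shows the leaky-integrator update and threshold test described above (see Appendix eqs. 1.0-1.1):

```python
import numpy as np

# Illustrative Euler step for one model cell (Appendix eqs. 1.0-1.1).
def step_cell(V, g, E_chan, E_rest, r_leak, C_m, dt, threshold):
    """Advance the membrane potential of one cell by dt milliseconds."""
    I_chan = np.sum(g * (E_chan - V))            # each channel drives V toward E_k
    dV = (I_chan + (E_rest - V) / r_leak) / C_m  # leaky integration, eq. (1.0)
    V = V + dt * dV
    spike = V >= threshold                        # all-or-none action potential
    return V, bool(spike)

# Example with three channel types (Na+, Cl-, K+ as in the model);
# the numbers here are hypothetical.
V, fired = step_cell(V=-65.0, g=np.array([0.5, 0.2, 0.1]),
                     E_chan=np.array([55.0, -70.0, -90.0]),
                     E_rest=-65.0, r_leak=10.0, C_m=1.0, dt=0.1,
                     threshold=-50.0)
```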
[Figure 2 labels: LOT afferent fiber; rostrally and caudally directed association fibers; local association fiber; local feedback inhibition]
Fig. 2. Schematic diagram of piriform cortex showing an excitatory pyramidal cell and two
inhibitory interneurons with their local interactions. Circles indicate sites of synaptic modifiability.
Connection Patterns. In the olfactory system, olfactory receptors project to the
olfactory bulb which, in turn, projects directly to the piriform cortex and other olfactory structures (Fig. 1). The input to the piriform cortex from the olfactory bulb is
delivered via a fiber bundle known as the lateral olfactory tract (LOT). This fiber
tract appears to make sparse, non-topographic, excitatory connections with pyramidal and feedforward inhibitory neurons across the extent of the cortex [3,6]. In the
model this input is simulated as 100 independent cells each of which make random connections (p=O.05) with pyramidal and feedforward inhibitory neurons
(Fig. 1 and 2).
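The random afferent map can be sketched in a few lines; the seed and array layout below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)                 # seed is an arbitrary choice
n_fibers, n_pyramidal, p = 100, 100, 0.05

# Each LOT fiber contacts each pyramidal (and feedforward inhibitory) cell
# independently with probability 0.05, giving the sparse, non-topographic
# afferent map described above.
lot_to_pyr = rng.random((n_fibers, n_pyramidal)) < p
print(lot_to_pyr.sum(), "synapses; expected about",
      int(p * n_fibers * n_pyramidal))
```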
In addition to the input connections from the olfactory bulb, there is also an
extensive set of connections between the neurons intrinsic to the cortex (Fig. 2).
For example, the association fiber system arises from pyramidal cells and makes
sparse, distributed excitatory connections with other pyramidal cells all across the
cortex [7,8,9]. In the model these connections are randomly distributed with 0.05
probability. In the model and in the actual cortex, pyramidal cells also make excitatory connections with nearby feedforward and feedback inhibitory cells. These
interneurons, in turn, make reciprocal inhibitory connections with the group of
nearby pyramidal cells. The primary effect of the feedback inhibitory neurons is to
inhibit pyramidal cell firing through a Cl- mediated current shunting mechanism [10,11,12]. Feedforward interneurons inhibit pyramidal cells via a long latency,
long duration, K+ mediated hyperpolarizing potential [12,13]. Pyramidal cell axons
also constitute the primary output of both the model and the actual piriform cortex [7,14].
Synaptic Properties and Modification Rules. In the model, each synaptic connection has an associated weight which determines the peak amplitude of the conductance change induced in the postsynaptic cell following presynaptic activity
[2.0]. To study learning in the model, synaptic weights associated with some of
the fiber systems are modifiable in an activity-dependent fashion (Fig. 2). The
basic modification rule in each case is Hebb-like; i.e. change in synaptic strength
is proportional to presynaptic activity multiplied by the offset of the postsynaptic
membrane potential from a baseline potential. This baseline potential is set
slightly more positive than the Cl- equilibrium potential associated with the shunting feedback inhibition. This means that synapses activated while a destination
cell is in a depolarized or excited state are strengthened, while those activated
during a period of inhibition are weakened. In the model, synapses which follow
this rule include the association fiber connections between excitatory pyramidal
neurons as well as the connections between inhibitory neurons and pyramidal neurons. Whether these synapses are modifiable in this way in the actual cortex is a
subject of active research in our lab. However, the model does mimic the actual
synaptic properties associated with the input pathway (LOT) which we have
shown to undergo a transient increase in synaptic strength following activation
which is independent of postsynaptic potential 15. This increase is not pennanent
and the synaptic strength subsequently returns to its baseline value.
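A minimal sketch of the Hebb-like rule as stated, assuming a learning-rate parameter eta; names and scaling are assumptions of this sketch, not the authors' implementation:

```python
# The weight change is presynaptic activity times the offset of the
# postsynaptic potential from a baseline set just above the Cl-
# equilibrium potential.
def hebb_update(w, pre, V_post, V_baseline, eta=0.01):
    # Active synapses strengthen while the cell is depolarized and
    # weaken while it is inhibited (V_post < V_baseline).
    return w + eta * pre * (V_post - V_baseline)
```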
Generation of Physiological Responses. Neurons in the model are represented
as first-order "leaky" integrators with multiple, time-varying inputs [1.0]. During
simulation runs, membrane potentials and currents as well as the time of
occurrence of action potentials are stored for comparison with actual data. An
explicit compartmental model (5 compartments) of the pyramidal cells is used to
generate the spatial current distributions used for calculation of field potentials
(evoked potentials, EEGs) [3.0,4.0].
Stimulus Characteristics. To compare the responses of the model to those of
the actual cortex, we mimicked actual experimental stimulation protocols in the
simulated cortex and contrasted the resulting intracellular and extracellular
records. For example, shock stimuli applied to the LOT are often used to elicit
characteristic cortical evoked potentials in vivo [16,17,18]. In the model we simulated
this stimulus paradigm by simultaneously activating all 100 input fibers. Another
measure of cortical activity used most successfully by Freeman and colleagues
involves recording EEG activity from pirifonn cortex in behaving animals 19,20.
These odor-like responses were generated in the model through steady, random
stimulation of the input fibers.
To study learning in the model, once physiological measures were established,
it was required that we use more refined stimulation procedures. In the absence of
any specific information about actual input activity patterns along the LOT, we
constructed each stimulus out of a randomly selected set of 10 out of the 100 input
fibers. Each stimulus episode consisted of a burst of activity in this subset of
fibers with a duration of 10 msec at 25 msec intervals to simulate the 40 Hz periodicity of the actual olfactory bulb input. This pattern of activity was repeated in
trials of 200 msec duration which roughly corresponds to the theta rhythm periodicity of bulbar activity and respiration [21,22]. Each trial was then presented 5 times
for a total exposure time of 1 second (cortical time). During this period the Hebb-type learning rule could be used to modify the connection weights in an activity-dependent fashion.
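A sketch of this stimulus protocol, assuming a 1 ms time resolution; the seed and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)                      # arbitrary seed
n_fibers = 100
active = rng.choice(n_fibers, size=10, replace=False)  # 10 of the 100 fibers

# One 200 ms trial: 10 ms bursts every 25 ms mimic the 40 Hz
# periodicity of the olfactory bulb input.
trial = np.zeros((200, n_fibers), dtype=bool)
for onset in range(0, 200, 25):
    trial[onset:onset + 10, active] = True

stimulus = np.tile(trial, (5, 1))                   # 5 trials = 1 s cortical time
```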
Output Measure for Learning. Given that the sole output of the cortex is in the
fonn of action potentials generated by the pyramidal cells, the output measure of
the model was taken to be the vector of spike frequency for all pyramidal neurons
over a 200 msec trial, with each element of the vector corresponding to the firing
frequency of a single pyramidal cell. Figures 5 through 8 show the 10 by 10 array
of pyramidal cells. The size of the box placed at each cell position represents the
magnitude of the spike frequency for that cell. To evaluate learning effects, overlap
comparisons between response pairs were made by taking the normalized dot
product of their response vectors and expressing that value as a percent overlap
(Fig. 4).
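The overlap measure is just a normalized dot product expressed as a percentage; a sketch, assuming NumPy:

```python
import numpy as np

def percent_overlap(rates_a, rates_b):
    """Normalized dot product of two spike-frequency vectors, as a percent."""
    denom = np.linalg.norm(rates_a) * np.linalg.norm(rates_b)
    return 100.0 * float(np.dot(rates_a, rates_b)) / denom
```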
[Figure 3 traces: simulated responses (left) and actual responses (right)]
Fig. 3. Simulated physiological responses of the model compared with actual cortical responses. Upper: Simulated intracellular response of a single cell to paired stimulation of the input
system (LOT) (left) compared with actual response (right) (Haberly & Bower, 1984). Middle:
Simulated extracellular response recorded at the cortical surface to stimulation of the LOT
(left), compared with actual response (right) (Haberly, 1973b). Lower: Simulated EEG
response recorded at the cortical surface to odor-like input (left); for actual EEG see Freeman
1978.
Computational Requirements. All simulations were carried out on a Sun
Microsystems 3/260 model microcomputer equipped with 8 Mbytes of memory and
a floating point accelerator. Average time for a 200 msec simulation was 3 cpu
minutes.
RESULTS
Physiological Responses
As described above, our initial modeling objective was to accurately simulate
a wide range of activity patterns recorded, by ourselves and others, in piriform
cortex using various physiological procedures. Comparisons between actual and
simulated records for several types of response are shown in figure 3. In general,
the model replicated known physiological responses quite well (Wilson et al in
preparation describes, in detail, the analysis of the physiological results). For
example in response to shock stimulation of the input pathway (LOT), the model
reproduces the principal characteristics of both the intracellular and location-dependent extracellular waveforms recorded in the actual cortex [9,17,18] (Fig. 3).
[Figure 4 axes: Percent Overlap with Final Response Pattern (60-100%) vs. Number of Trials (0-5)]
Fig. 4. Convergence of the cortical response during training with a single stimulus with synaptic
modification.
[Figure 5 panels: Full Stimulus vs. 50% Stimulus, Before Training (56% overlap) and After Training (80% overlap)]
Fig. 5. Reconstruction of cortical response patterns with partially degraded stimuli. Left:
Response, before training, to the full stimulus (left) and to the same stimulus with 50% of the
input fibers inactivated (right). There is a 44% degradation in the response. Right: Response
after training, to the full stimulus (left), and to the same stimulus with 50% of the input
fibers inactivated (right). As a result of training, the degradation is now only 20%.
[Figure 6 panels: Trained on A | Trained on B | Retains A Response]
Fig. 6. Storage of multiple patterns. Left: Response to stimulus A after training. Middle:
Response to stimulus B after training on A followed by training on B. Right: Response to
stimulus A after training on A followed by training on B. When compared with the original
response (left) there is an 85% congruence.
Further, in response to odor-like stimulation the model exhibits 40 Hz oscillations
which are characteristic of the EEG activity in olfactory cortex in awake, behaving
animals [19]. Although beyond the scope of the present paper, the simulation also
duplicates epileptiform [9] and damped oscillatory [16] type activity seen in the cortex
under special stimulus or pharmacological conditions [4].
Learning
Having simulated characteristic physiological responses, we wished to
explore the capabilities of the model to store and recall information. Learning in
this case is defined as the development of a consistent representation in the activity of the cortex for a particular input pattern with repeated stimulation and synaptic modification. Figure 4 shows how the network converges, with training, on a
representation for a stimulus. Having demonstrated that, we studied three properties of learned responses - the reconstruction of trained cortical response patterns
with partially degraded stimuli, the simultaneous storage of separate stimulus
response patterns, and the modulation of cortical response patterns independent
of relative stimulus characteristics.
Reconstruction of Learned Cortical Response Patterns "with Partially Degraded Stimuli. We were interested in knowing what effect training would have on the
sensitivity of cortical responses to fluctuations in the input signal. First we presented the model with a random stimulus A for one trial (without synaptic modification). On the next trial the model was presented with a degraded version of A
in which half of the original 10 input fibers were inactivated. Comparison of the
responses to these two stimuli in the naive cortex showed a 44% variation. Next,
the model was trained on the full stimulus A for 1 second (with synaptic modification). Again, half of the input was removed and the model was presented with the
degraded stimulus for 1 trial (without synaptic modification). In this case the
[Figure 7 panels: Stimulus A | Stimulus B, Before Training (27% overlap) and After Training (46% overlap)]
Fig. 7. Results of merging cortical response patterns for dissimilar stimuli. Left: Response to
stimulus A and stimulus B before training. Stimuli A and B do not activate any input fibers in
common but still have a 27% overlap in cortical response patterns. Right: Response to stimulus A and stimulus B after training in the presence of a common modulatory input E1. The
overlap in cortical response patterns is now 46%.
difference between cortical responses was only 20% (Fig. 5), showing that training
increased the robustness of the response to degradation of the stimulus.
Storage of Two Patterns. The model was first trained on a random stimulus A
for 1 second. The response vector for this case was saved. Then, continuing with
the weights obtained during this training, the model was trained on a new nonoverlapping (i.e. different input fibers activated) stimulus B. Both stimulus A and
stimulus B alone activated roughly 25% of the cortical pyramidal neurons with 25%
overlap between the two responses. Following the second training period we
assessed the amount of interference in recalling A introduced by training with B
by presenting stimulus A again for a single trial (without synaptic modification).
The variation between the response to A following additional training with B and
the initially saved response to A alone was less than 15% (Fig. 6) demonstrating
that learning B did not substantially interfere with the ability to recall A.
Modulation of Cortical Response Patterns.
It has been previously demonstrated that the stimulus evoked response of olfactory cortex can be modulated by
factors not directly tied to stimulus qualities, such as the behavioral state of the
animal [1,20,23]. Accordingly we were interested in knowing whether the representations stored in the model could be modulated by the influence of such a "state"
input.
One potential role of a "state" input might be to merge the cortical response
patterns for dissimilar stimuli; an effect we refer to as accommodation. To test this
in the model, we presented it with a random input stimulus A for 1 trial. It was
then presented with a random input stimulus B (non-overlapping input fibers).
The amount of overlap in the cortical responses for these untrained cases was
27%. Next, the model was trained for 1 second on stimulus A in the presence of an
additional random "state" stimulus E1 (activity in a set of 10 input fibers distinct
[Figure 8 panels: Stimulus A | Stimulus B, Before Training (77% overlap) and After Training (45% overlap)]
Fig. 8. Results of differentiating cortical response patterns for similar stimuli. Left:
Response to stimulus A and stimulus B before training. Stimuli A and B activate 75% of
their input fibers in common and have a 77% overlap in cortical response patterns. Right:
Response to stimulus A and stimulus B after training A in the presence of modulatory input
E1 and training B with a different modulatory input E2. The overlap in cortical response patterns is now 45%.
from both A and B). The model was then trained on stimulus B in the presence of
the same "state" stimulus El. After training, the model was presented with stimulus A alone for 1 trial and stimulus B alone for 1 trial. Results showed that now.
even without the coincident E 1 input, the amount of overlap between A and B
responses was found to have increased to 46% (Fig 7). The role of El in this case
was to provide a common stimulus component during learning which reinforced
shared components of the responses to input stimuli A and B.
To test the ability of a state stimulus to induce differentiation of cortical
response patterns for similar stimuli, we presented the model with a random input
stimulus A for 1 trial, followed by 1 trial of a random input stimulus B (75% of the
input fibers overlapping), The amount of overlap in the cortical responses for these
untrained cases was 77%. Next, the model was trained for a period of 1 second on
stimulus A in the presence of an additional random "state" stimulus E1 (a set of
10 input fibers not overlapping either A or B). It was then trained on input stimulus B in the presence of a different random "state" stimulus E2 (10 input fibers not
overlapping either A, B, or E1). After this training the model was presented with
stimulus A alone for 1 trial and stimulus B alone for 1 trial. The amount of overlap
was found to have decreased to 45% (Fig. 8). In this situation E1 and E2 provided
a differential signal during learning which reinforced distinct components of the
responses to input stimuli A and B.
DISCUSSION
Physiological Responses. Detailed discussion of the mechanisms underlying
the simulated patterns of physiological activity in the cortex is beyond the scope
of the current paper. However, the model has been of value in suggesting roles for
specific features of the cortex in generating physiologically recorded activity. For
example, while actual input to the cortex from the olfactory bulb is modulated into
40 Hz bursts24 , continuous stimulation of the model allowed us to demonstrate
the model's capability for intrinsic periodic activity independent of the complementary pattern of stimulation from the olfactory bulb. While a similar ability has
also been demonstrated by models of Freeman [25], by studying this oscillating
property in the model we were able to associate these oscillatory characteristics
with specific interactions of local and distant network properties (e.g. inhibitory
and excitatory time constants and trans-cortical axonal conduction velocities).
This result suggests underlying mechanisms for these oscillatory patterns which
may be somewhat different than those previously proposed.
Learning. The main subject of this paper is the examination of the learning
capabilities of the cortical model. In this model, the apparently sparse, highly distributed pattern of connectivity characteristic of piriform cortex is fundamental to
the way in which the model learns. Essentially, the highly distributed pattern of
connections allows the model to develop stimulus-specific cortical response patterns by extracting correlations from randomly distributed input and association
fiber activity. These correlations are, in effect, stored in the synaptic weights of
the association fiber and local inhibitory connections.
The model has also demonstrated robustness of a learned cortical response
against degradation of the input signal. A key to this property is the action of
sparsely distributed association fibers which provide reinforcement for previously
established patterns of cortical activity. This property arises from the modification
of synaptic weights due to correlations in activity between intra-cortical association fibers. As a result of this modification the activity of a subset of pyramidal
neurons driven by a degraded input drives the remaining neurons in the response.
In general, in the model, similar stimuli will map onto similar cortical responses and dissimilar stimuli will map onto dissimilar cortical responses. However, a
presumably important function of the cortex is not simply to store sensory information, but to represent incoming stimuli as a function of the absolute stimulus
qualities and the context in which the stimulus occurs. The fact that many of the
structures that piriform cortex projects to (and receives projections from) may be
involved in multimodal "state" generation [14] is circumstantial evidence that such
modulation may occur. We have demonstrated in the model that such a modulatory input can modify the representations generated by pairs of stimuli so as to
push the representations of like stimuli apart and pull the representations of dissimilar stimuli together. It should be pointed out that this modulatory input was
not an "instructive" signal which explicitly directed the course of the representation, but rather a "state" signal which did not require a priori knowledge of the
representational structure. In the model, this modulatory phenomenon is a simple
consequence of the degree of overlap in the combined (odor stimulus + modulator)
stimulus. Both cases approached approximately 50% overlap in cortical responses
reflecting the approximately 50% overlap in the combined stimuli for both cases.
Of interest was the use of the model's reconstructive capabilities to maintain the
modulated response to each input stimulus even in the absence of the modulatory
input.
CAVEATS AND CONCLUSIONS
Our approach to studying this system involves using computer simulation to
investigate mechanisms of information processing which could be implemented
given what is known about biological constraints. The significance of results presented here lies primarily in the finding that the structure of the model and the
parameter settings which were appropriate for the reproduction of physiological
responses were also appropriate for the proper convergence of a simple, biologically plausible learning rule under various conditions. Of course, the model we
have developed is only an approximation to the actual cortex limited by our knowledge of its organization and the computing power available. For example, the
actual piriform cortex of the rat contains on the order of 10^6 cells (compared with
10^2 in the simulations) with a sparsity of connection on the order of p = 0.001
(compared with p=0.05 in the simulations). Our continuing research effort will
include explorations of the scaling properties of the network.
Other assumptions made in the context of the current model include the
assumption that the representation of information in piriform cortex is in the form
of spatial distributions of rate-coded outputs. Information contained in the spatiotemporal patterns of activity was not analyzed, although preliminary observation
suggests that this may be of significance. In fact, the dynamics of the model itself
suggest that temporally encoded information in the input at various time scales
may be resolvable by the cortex. Additionally, the output of the cortex was
assumed to have spatial uniformity, i.e. no differential weighting of information
was made on the basis of spatial location in the cortex. But again, observation of
the dynamics of the model, as well as the details of known anatomical distribution
patterns for axonal connections, indicate that this is a major oversimplification.
Preliminary evidence from the model would indicate that some form of hierarchical
structuring of information along rostral/caudal lines may occur. For example it
may be that cells found in progressively more rostral locations would have
increasingly non-specific odor responses.
Further investigations of learning within the model will explore each of these
issues more fully, with attempts to correlate simulated findings with actual recordings from awake, behaving animals. At the same time, new data pertaining to the
structure of the cortex will be incorporated into the model as it emerges.
ACKNOWLEDGEMENTS
We wish to thank Dr. Lewis Haberly and Dr. Joshua Chover for their roles in
the development and continued support of the modeling effort. We also wish to
thank Dave Bilitch for his technical assistance. This work was supported by NIH
grant NS22205, NSF grant EET-8700064, the Lockheed Corporation, and a fellowship from the ARCS foundation.
APPENDIX

Somatic Integration
\[ \frac{dV_i}{dt} = \frac{1}{c_m}\left[\,\sum_{k=1}^{n} I_{ik}(t) + \frac{E_r - V_i(t)}{r_l}\right] \tag{1.0} \]
\[ I_{ik}(t) = g_{ik}(t)\,\bigl(E_k - V_i(t)\bigr) \tag{1.1} \]
n = number of input types
E_r = resting potential
r_l = membrane leakage resistance
V_i(t) = membrane potential of the i-th cell
I_ik(t) = current into cell i due to input type k
E_k = equilibrium potential associated with input type k
c_m = membrane capacitance
g_ik(t) = conductance due to input type k in cell i

Spike Propagation and Synaptic Input
\[ g_{ik}(t) = \sum_j w_{ij}\,\delta_{jk}\,A_{ij}\,\sum_{t_s} F_k\bigl(t - t_s - \Lambda_{ij}\bigr) \tag{2.0} \]
(the inner sum runs over the spike times t_s of cell j, i.e. the times at which S_j = 1)
\[ A_{ij} = \bigl(1 - P_k^{\min}\bigr)\,e^{-L_{ij} P_k} + P_k^{\min} \tag{2.1} \]
\[ S_j(t) = 1 \ \text{if}\ V_j(t) \ge T_j\ \text{and the cell is past its refractory period}\ t_{ref};\quad S_j(t) = 0 \ \text{otherwise} \tag{2.2} \]
\[ \Lambda_{ij} = \epsilon_k + L_{ij}/v_k \tag{2.3} \]
\[ L_{ij} = |i - j|\,\Delta \tag{2.4} \]
n_cells = number of cells in the simulation
Δ = distance between adjacent cells
d_k = duration of conductance change due to input type k
v_k = velocity of signals for input type k
ε_k = latency for input type k
P_k = spatial attenuation factor for input type k
P_k^min = minimum spatial attenuation for input type k
t_ref = refractory period
T_j = threshold for cell j
L_ij = distance from cell i to cell j
δ_jk = distribution of synaptic density for input type k
w_ij = synaptic weight from cell j to cell i
g_ik(t) = conductance due to input type k in cell i
F_k(t) = conductance waveform for input type k
S_j(t) = spike output of cell j at time t
U(t) = unit step function

Field Potentials
\[ V_j(t) \approx k \sum_{i=1}^{n_{cells}} \sum_{n=1}^{n_{segs}} \frac{I_{in}(t)}{\sqrt{(x_i - x_j)^2 + (z_{rec} - z_n)^2}} \tag{3.0} \]
n_cells = number of cells in the simulation
n_segs = number of segments in the compartmental model
V_j(t) = approximate extracellular field potential at cell j
I_in(t) = membrane current for segment n in cell i
k = constant proportional to the extracellular resistance per unit length R_e
z_rec = depth of recording site
z_n = depth of segment n
x_j = x location of the j-th cell
R_e = extracellular resistance per unit length

Dendritic Model
\[ c_m^n \frac{dV_n}{dt} = \sum_{c=1}^{n_{chan}} g_{nc}(t)\,\bigl(E_c - V_n(t)\bigr) + \frac{E_r - V_n(t)}{r_m^n} + I_n^{ax}(t) \tag{4.0} \]
\[ I_n^{ax}(t) = \frac{V_{n+1}(t) - V_n(t)}{r_a^{n+1}} + \frac{V_{n-1}(t) - V_n(t)}{r_a^{n}} \tag{4.1} \]
\[ r_a^n = \frac{4 R_i\, l_n}{\pi d_n^2}, \qquad r_m^n = \frac{R_m}{\pi l_n d_n}, \qquad c_m^n = C_m\, \pi l_n d_n \tag{4.2} \]
n_chan = number of different channel types per segment
V_n(t) = membrane potential of the n-th segment
c_m^n = membrane capacitance for segment n
r_a^n = axial resistance for segment n
r_m^n = membrane resistance for segment n
I_n(t) = membrane current for segment n
l_n = length of segment n
d_n = diameter of segment n
R_m = membrane resistivity
g_nc(t) = conductance of channel c in segment n
E_c = equilibrium potential associated with channel c
I_n^ax(t) = axial current between segments n±1 and n
R_i = intracellular resistivity per unit length
R_e = extracellular resistance per unit length
C_m = capacitance per unit surface area
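A worked consequence of eqs. (1.0)-(1.1), added here for clarity and not part of the original appendix: with constant conductances, setting dV_i/dt = 0 gives the steady-state potential,

```latex
% Steady state of eq. (1.0) with constant conductances g_{ik}:
% 0 = \sum_k g_{ik}(E_k - V_i) + (E_r - V_i)/r_l, so
\[
  V_i^{\infty} \;=\; \frac{\sum_k g_{ik}\,E_k \;+\; E_r/r_l}
                          {\sum_k g_{ik} \;+\; 1/r_l},
\]
% a conductance-weighted average of the channel equilibrium potentials and
% the resting potential. With E_Cl near rest, a large Cl- conductance pulls
% V_i toward rest without hyperpolarizing it: the shunting inhibition
% described in the text.
```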
REFERENCES
1. W. J. Freeman, J. Neurophysiol., 23, 111 (1960).
2. T. Tanabe, M. Iino, and S. F. Takagi, J. Neurophysiol., 38, 1284 (1975).
3. L. B. Haberly, Chemical Senses, 10, 219 (1985).
4. M. Wilson, J. M. Bower, J. Chover, and L. B. Haberly, Soc. Neuro. Abs., 11,
317 (1986).
5. M. Wilson and J. M. Bower, Soc. Neurosci. Abs., 12, 310 (1987).
6. M. Devor, J. Comp. Neur., 166, 31 (1976).
7. L. B. Haberly and J. L. Price, J. Comp. Neurol., 178, 711 (1978a).
8. L. B. Haberly and S. Presto, J. Comp. Neur., 248, 464 (1986).
9. L. B. Haberly and J. M. Bower, J. Neurophysiol., 51, 90 (1984).
10. M. A. Biedenbach and C. F. Stevens, J. Neurophysiol., 32, 193 (1969).
11. M. A. Biedenbach and C. F. Stevens, J. Neurophysiol., 32, 204 (1969).
12. M. Satou, K. Mori, Y. Tazawa, and S. F. Takagi, J. Neurophysiol., 48, 1157
(1982).
13. O. F. Tseng and L. B. Haberly, Soc. Neurosci. Abs. 12,667 (1986).
14. L. B. Luskin and J. L. Price, J. Comp. Neur., 216, 264 (1983).
15. J. M. Bower and L. B. Haberly, Proc. Natl. Acad. Sci. USA, 83, 1115
(1985).
16. W. J. Freeman, J. Neurophysiol., 31, 1 (1968).
17. L. B. Haberly, J. Neurophysiol., 36, 762 (1973).
18. L. B. Haberly, J. Neurophysiol., 36, 775 (1973).
19. W. J. Freeman, Electroenceph. and Clin. Neurophysiol., 44, 586 (1978).
20. W.J. Freeman and W. Schneider, Psychophysiology, 19,44 (1982).
21. F. Macrides and S. L. Chorover, Science, 175,84 (1972).
22. F. Macrides, H. B. Eichenbaum, and W. B. Forbes, J. Neurosci., 2, 12, 1705
(1982).
23. P. D. MacLean, N. H. Horwitz, and F. Robinson, Yale J. BioI. Med., 25, 159
(1952).
24. E. D. Adrian, Electroenceph. and Clin. Neurophysiol., 2, 377 (1950).
25. W. J. Freeman, Exp. Neurol., 10, 525 (1964).
2,647 | 340 | Basis-Function Trees as a Generalization of Local
Variable Selection Methods for Function
Approximation
Terence D. Sanger
Dept. Electrical Engineering and Computer Science
Massachusetts Institute of Technology, E25-534
Cambridge, MA 02139
Abstract
Local variable selection has proven to be a powerful technique for approximating functions in high-dimensional spaces. It is used in several
statistical methods, including CART, ID3, C4, MARS, and others (see the
bibliography for references to these algorithms). In this paper I present
a tree-structured network which is a generalization of these techniques.
The network provides a framework for understanding the behavior of such
algorithms and for modifying them to suit particular applications.
1 INTRODUCTION
Function approximation on high-dimensional spaces is often thwarted by a lack of
sufficient data to adequately "fill" the space, or lack of sufficient computational
resources. The technique of local variable selection provides a partial solution to
these problems by attempting to approximate functions locally using fewer than the
complete set of input dimensions.
Several algorithms currently exist which take advantage of local variable selection,
including AID (Morgan and Sonquist, 1963, Sonquist et al., 1971), k-d Trees (Bentley, 1975), ID3 (Quinlan, 1983, Schlimmer and Fisher, 1986, Sun et al., 1988),
CART (Breiman et al., 1984), C4 (Quinlan, 1987), and MARS (Friedman, 1988),
as well as closely related algorithms such as GMDH (Ivakhnenko, 1971, Ikeda et
al., 1976, Barron et al., 1984) and SONN (Tenorio and Lee, 1989). Most of these
algorithms use tree structures to represent the sequential incorporation of increasing numbers of input variables. The differences between these techniques lie in the
representation ability of the networks they generate, and the methods used to grow
and prune the trees. In the following I will show why trees are a natural structure
for these techniques, and how all these algorithms can be seen as special cases of a
general method I call "Basis Function Trees". I will also propose a new algorithm
called an "LMS tree" which has a simple and fast network implementation.
2 SEPARABLE BASIS FUNCTIONS
Consider approximating a scalar function f(x) of d-dimensional input x by
\[ f(x_1, \ldots, x_d) \approx \sum_{i=1}^{L} c_i\, u_i(x_1, \ldots, x_d) \tag{1} \]
where the u_i's are a finite set of nonlinear basis functions, and the c_i's are constant
coefficients. If the u_i's are separable functions we can assume without loss of generality that there exists a finite set of scalar-input functions \{\phi_n\}_{n=1}^{N} (which includes
the constant function), such that we can write
\[ u_i(x_1, \ldots, x_d) = \phi_{r_1^i}(x_1) \cdots \phi_{r_d^i}(x_d) \tag{2} \]
where x_p is the p-th component of x, \phi_{r_p^i}(x_p) is a scalar function of scalar input x_p,
and r_p^i is an integer from 1 to N specifying which function \phi is chosen for the p-th
dimension of the i-th basis function u_i.
If there are d input dimensions and N possible scalar functions \phi_n, then there are
N^d possible basis functions u_i. If d is large, then there will be a prohibitively
large number of basis functions and coefficients to compute. This is one form of
Bellman's "curse of dimensionality" (Bellman, 1961). The purpose of local variable
selection methods is to find a small basis which uses products of fewer than d of
the \phi_n's. If the \phi_n's are local functions, then this will select different subsets of the
input variables for different ranges of their values. Most of these methods work by
incrementally increasing both the number and order of the separable basis functions
until the approximation error is below some threshold.
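A toy illustration of the N^d blow-up in eq. (2); the sizes below are hypothetical:

```python
import itertools

N, d = 4, 3
phis = [f"phi_{n}" for n in range(1, N + 1)]

# Every separable basis function u_i is one phi per input dimension,
# so the full product basis has N**d members.
full_basis = list(itertools.product(phis, repeat=d))
print(len(full_basis), "==", N ** d)   # 64 == 64; at N=10, d=20 this is 10**20
```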
3 TREE STRUCTURES
Polynomials have a natural representation as a tree structure. In this representation,
the output of a subtree of a node determines the weight from that node to its parent.
For example, in figure 1, the subtree computes its output by summing the weights
a and b multiplied by the inputs x and y, and the result ax + by becomes the weight
from the input x at the first layer. The depth of the tree gives the order of the
polynomial, and a leaf at a particular depth p represents a monomial of order p
which can be found by taking products of all inputs on the path back to the root.
Now, if we expand equation 1 to get
\[ f(x_1, \ldots, x_d) \approx \sum_{i=1}^{L} c_i\, \phi_{r_1^i}(x_1) \cdots \phi_{r_d^i}(x_d) \tag{3} \]
we see that the approximation is a polynomial in the terms \phi_{r_p^i}(x_p). So the
[Figure 1 tree diagram: leaves x and y feeding a root whose output is x(ax+by)+cy+dz]
Figure 1: Tree representation of the polynomial ax^2 + bxy + cy + dz.
approximation on separable basis functions can be described as a tree where the "inputs"
are the one-dimensional functions \phi_n(x_p), as in figure 2.
Most local variable selection techniques can be described in this manner. The
differences in representation abilities of the different networks are determined by
the choice of the one-dimensional basis functions \phi_n. Classification algorithms such
as CART, AID, C4, or ID3 use step-functions so that the resulting approximation
is piecewise constant. MARS uses a cubic spline basis so that the result is piecewise
cubic.
I propose that these algorithms can be extended by considering many alternate
bases. For example, for bandlimited functions the Fourier basis may be useful, for
which \phi_n(x_p) = sin(n x_p) for n odd, and \phi_n(x_p) = cos(n x_p) for n even. Alternatively, local
Gaussians may be used to approximate a radial basis function representation. Or
the bits of a binary input could be used to perform Boolean operations. I call the
class of all such algorithms "Basis Function Trees" to emphasize the idea that the
basis functions are arbitrary.
It is important to realize that Basis Function Trees are fundamentally different
from the usual structure of multi-layer neural networks, in which the result of a
computation at one layer provides the data input to the next layer. In these tree
algorithms, the result of a computation at one layer determines the weights at the
next layer. Lower levels control the behavior of the processing at higher levels, but
the input data never traverses more than a single level.
4 WEIGHT LEARNING AND TREE GROWING
In addition to the choice of basis functions, one also has a choice of learning algorithm. Learning determines both the tree structure and the weights.
There are many ways to adjust the weights. Since the entire network is equivalent
to a single-layer network described by (1), The mean-squared output error can
be minimized either directly using pseudo-inverse techniques, or iteratively using
Figure 2: Tree representation of an approximation over separable basis functions.
recursive least squares (Ljung and Soderstrom, 1983) or the Widrow-Hoff LMS
algorithm (Widrow and Hoff, 1960). Iterative techniques are often less robust and
can take longer to converge than direct techniques, but they do not require storage
of the entire data set and can adapt to nonstationary input distributions.
Since the efficiency of local variable selection methods will depend on the size of
the tree, good tree growing and pruning algorithms are essential for performance.
Tree-growing algorithms are often called "splitting rules", and the choice of rule
should depend on the data set as well as the type of basis functions. AID and
the "Regression Tree" method in CART split below the leaf with maximum meansquared prediction error. MARS tests all possible splits by forming the new trees
and estimating a "generalized cross-validation" criterion which penalizes both for
output error and for increasing tree size. This method is likely to be more noisetolerant, but it may also be significantly slower since the weights must be re-trained
for every subtree which is tested. Most methods include a tree-pruning stage which
attempts to reduce the size of the final tree.
5 LMS TREES
I now propose a new member of the class of local variable selection algorithms which
I call an "LMS Tree" (Sanger, 1991, Sanger, 1990a, Sanger, 1990b). LMS Trees can
use arbitrary basis functions, but they are characterized by the use of a recursive
algorithm to learn the weights as well as to grow new subtrees.
The LMS tree will be built using one dimension of the input at a time. The approximation to f(x_1, \ldots, x_d) using only the first dimension of the input is given
by
\[ f(x_1, \ldots, x_d) \approx \hat f(x_1) = \sum_{n=1}^{N} a_n \phi_n(x_1). \tag{4} \]
703
704
Sanger
I use the Widrow-Hoff LMS learning rule (Widrow and Hoff, 1960) to minimize the
mean-squared approximation error based on only the first dimension:
\[ \Delta a_n = \eta\,\bigl(f(x_1, \ldots, x_d) - \hat f(x_1)\bigr)\,\phi_n(x_1) \tag{5} \]
where \eta is a rate term, and \Delta a_n is the change in the weight a_n made in response
to the current value of x_1. After convergence, \hat f(x_1) is the best approximation to
f based on linear combinations of \phi_1(x_1), \ldots, \phi_N(x_1), and the expected value of
the weight change E[\Delta a_n] will be zero. However, there may still be considerable
variance of the weight changes, so that E[(\Delta a_n)^2] \neq 0. The weight change variance
indicates that there is "pressure" to increase or decrease the weights for certain
input values, and it is related to the output error by
\[ \frac{\sum_{n=1}^{N} E[(\Delta a_n)^2]}{\min_{x_1} \sum_{n=1}^{N} \phi_n^2(x_1)} \;\ge\; E[(f - \hat f)^2] \;\ge\; \max_n \frac{E[(\Delta a_n)^2]}{E[(\phi_n(x_1))^2]} \tag{6} \]
(Sanger, 1990b). So the output error will be zero if and only if E[(\Delta a_n)^2] = 0 for
all n.
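A sketch of the first-layer update of eqs. (4)-(5); eta and the names are assumptions, and the weight changes returned here are what eq. (6) relates to the output error through their variance:

```python
import numpy as np

def lms_step(weights, phi, f_target, eta=0.05):
    """One Widrow-Hoff step on the first-layer weights, given phi_n(x1)."""
    err = f_target - float(np.dot(weights, phi))   # f - f_hat(x1)
    delta = eta * err * phi                        # Delta a_n of eq. (5)
    return weights + delta, delta
```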
We can decrease the weight change variance by using another network based on
x_2 to add a variable term to the weight a_{r_1} with largest variance, so that the new
network is given by
\[ \hat f(x_1, x_2) = \sum_{n \neq r_1} a_n \phi_n(x_1) + \Bigl(a_{r_1} + \sum_{m=1}^{N} a_{r_1,m}\,\phi_m(x_2)\Bigr)\,\phi_{r_1}(x_1). \tag{7} \]
\Delta a_{r_1} becomes the error term used to train the second-level weights a_{r_1,m}, so that
\Delta a_{r_1,m} = \Delta a_{r_1}\,\phi_m(x_2). In general, the weight change at any layer in the tree is
the error term for the layer below, so that
\[ \Delta a_{r_1, \ldots, r_p, m} = \Delta a_{r_1, \ldots, r_p}\,\phi_m(x_{p+1}) \tag{8} \]
where the root of the recursion is \Delta a_e = \eta\,(f(x_1, \ldots, x_d) - \hat f), and a_e is a constant
term associated with the root of the tree.
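The recursion of eqs. (7)-(8) can be sketched as a small tree of weight nodes; the structure and names below are assumptions of this sketch, not Sanger's 22-line C program:

```python
import numpy as np

class LMSNode:
    """One LMS-tree node: weights over phi_m plus optional subtrees."""

    def __init__(self, n_basis):
        self.w = np.zeros(n_basis)      # weights a_{...,m} over phi_m
        self.children = {}              # m -> LMSNode over the next dimension

    def output(self, phis, depth=0):
        # A subtree's output is added to its parent's weight (eq. 7).
        w_eff = self.w.copy()
        for m, child in self.children.items():
            w_eff[m] += child.output(phis, depth + 1)
        return float(np.dot(w_eff, phis[depth]))

    def update(self, delta_parent, phis, depth=0):
        # The weight change at one layer is the error term for the layer
        # below (eq. 8): Delta a_{...,m} = Delta a_{...} * phi_m.
        delta = delta_parent * phis[depth]
        self.w += delta
        for m, child in self.children.items():
            child.update(delta[m], phis, depth + 1)

# Hypothetical usage: a root over phi(x1) with one subtree over phi(x2).
root = LMSNode(3)
root.children[1] = LMSNode(3)
phis = [np.array([1.0, 0.5, -0.2]), np.array([0.3, 0.9, 0.1])]
f_hat = root.output(phis)
root.update(0.05 * (1.7 - f_hat), phis)   # root error: eta * (f - f_hat)
```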
As described so far, the algorithm imposes an arbitrary ordering on the dimensions
x_1, \ldots, x_d. This can be avoided by using all dimensions at once. The first layer tree
would be formed by the additive approximation
\[ f(x_1, \ldots, x_d) \approx \sum_{p=1}^{d} \sum_{n=1}^{N} a_{(n,p)}\,\phi_n(x_p). \tag{9} \]
New subtrees would include all dimensions and could be grown below any \phi_n(x_p).
Since this technique generates larger trees, tree pruning becomes very important.
In practice, most of the weights in large trees are often close to zero, so after a
network has been trained, weights below a threshold level can be set to zero and
any leaf with a zero weight can be removed.
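A pruning sketch against the LMSNode class above: after training, weights below a threshold are zeroed and the subtrees hanging off them removed; the threshold value is an assumption:

```python
import numpy as np

def prune(node, threshold=1e-3):
    small = np.abs(node.w) < threshold
    node.w[small] = 0.0
    node.children = {m: prune(child, threshold)
                     for m, child in node.children.items() if not small[m]}
    return node
```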
LMS trees have the advantage of being extremely fast and easy to program. (For
example, a 49-input network was trained to a size of 20 subtrees on 40,000 data
Method                         Basis Functions                Tree Growing
-----------------------------  -----------------------------  ----------------------------------------------
MARS                           Truncated cubic polynomials    Exhaustive search for the split which minimizes
                                                              a cross-validation criterion
CART (Regression), AID         Step functions                 Split leaf with largest mean-squared prediction
                                                              error (= weight variance)
CART (Classification),         Step functions                 Choose split which maximizes an information
ID3, C4                                                       criterion
k-d Trees                      Step functions                 Split leaf with the most data points
GMDH, SONN                     Data dimensions                Find product of existing terms which maximizes
                                                              correlation to desired function
LMS Trees                      Any; all dimensions present    Split leaf with largest weight change variance
                               at each level

Figure 3: Existing tree algorithms.
samples in approximately 30 minutes of elapsed time on a Sun-4 computer. The
LMS tree algorithm required 22 lines of C code (Sanger, 1990b).) The LMS rule
trains the weights and automatically provides the weight change variance which is
used to grow new subtrees. The data set does not have to be stored, so no memory
is required at nodes. Because the weight learning and tree growing both use the
recursive LMS rule, trees can adapt to slowly-varying nonstationary environments.
6 CONCLUSION
Figure 3 shows how several of the existing tree algorithms fit into the framework
presented here. Some aspects of these algorithms are not well described by this
framework. For instance, in MARS the location of the spline functions can depend
on the data, so the \phi_n's do not form a fixed finite basis set. GMDH is not well
described by a tree structure, since new leaves can be formed by taking products of
existing leaves, and thus the approximation order can increase by more than 1 as
each layer is added. However, it seems that the essential features of these algorithms
and the way in which they can help avoid the "curse of dimensionality" are well
explained by this formulation.
Acknowledgements
Thanks are due to John Moody for introducing me to MARS, to Chris Atkeson for introducing me to the other statistical methods, and to the many people at NIPS who gave
useful comments and suggestions. The LMS Tree technique was inspired by a course at
MIT taught by Chris Atkeson, Michael Jordan, and Marc Raibert. This report describes
research done within the laboratory of Dr. Emilio Bizzi in the department of Brain and
Cognitive Sciences at MIT. The author was supported by an NDSEG fellowship from the
U.S. Air Force.
References
Barron R. L., Mucciardi A. N., Cook F. J., Craig J. N., Barron A. R., 1984, Adaptive
learning networks: Development and application in the United States of algorithms related
to GMDH, In Farlow S. J., ed., Self-Organizing Methods in Modeling, Marcel Dekker, New
York.
Bellman R. E., 1961, Adaptive Control Processes, Princeton Univ. Press, Princeton, N J.
Bentley J. H., 1975, Multidimensional binary search trees used for associative searching,
Communications ACM, 18(9):509-517.
Breiman L., Friedman J., Olshen R., Stone C. J., 1984, Classification and Regression
Trees, Wadsworth, Belmont, California.
Friedman J. H., 1988, Multivariate adaptive regression splines, Technical Report 102,
Stanford Univ. Lab for Computational Statistics.
Ikeda S., Ochiai M., Sawaragi Y., 1976, Sequential GMDH algorithm and its application
to river flow prediction, IEEE Trans. Systems, Man, and Cybernetics, SMC-6(7):473-479.
Ivakhnenko A. G., 1971, Polynomial theory of complex systems, IEEE Trans. Systems,
Man, and Cybernetics, SMC-1(4):364-378.
Ljung L., Soderstrom T., 1983, Theory and Practice of Recursive Identification, MIT
Press, Cambridge, MA.
Morgan J. N., Sonquist J. A., 1963, Problems in the analysis of survey data, and a
proposal, J. Am. Statistical Assoc., 58:415-434.
Quinlan J. R., 1983, Learning efficient classification procedures and their application to
chess end games, In Michalski R. S., Carbonell J. G., Mitchell T. M., eds., Machine
Learning: An Artificial Intelligence Approach, chapter 15, pages 463-482, Tioga P., Palo
Alto.
Quinlan J. R., 1987, Simplifying decision trees, Int. J. Man-Machine Studies, 27:221-234.
Sanger T. D., 1990a, Basis-function trees for approximation in high-dimensional spaces,
In Touretzky D., Elman J., Sejnowski T., Hinton G., eds., Proceedings of the 1990 Connectionist Models Summer School, pages 145-151, Morgan Kaufmann, San Mateo, CA.
Sanger T. D., 1990b, A tree-structured algorithm for function approximation in high
dimensional spaces, IEEE Trans. Neural Networks, in press.
Sanger T. D., 1991, A tree-structured algorithm for reducing computation in networks
with separable basis functions, Neural Computation, 3(1), in press.
Schlimmer J. C., Fisher D., 1986, A case study of incremental concept induction, In Proc.
AAAI-86, Fifth National Conference on AI, pages 496-501, Los Altos, Morgan Kaufmann.
Sonquist J. A., Baker E. L., Morgan J. N., 1971, Searching for structure, Institute for
Social Research, Univ. Michigan, Ann Arbor.
Sun G. Z., Lee Y. C., Chen H. H., 1988, A novel net that learns sequential decision process,
In Anderson D. Z., ed., Neural Information Processing Systems, pages 760-766, American
Institute of Physics, New York.
Tenorio M. F., Lee W.-T., 1989, Self organizing neural network for optimum supervised
learning, Technical Report TR-EE 89-30, Purdue Univ. School of Elec. Eng.
Widrow B., Hoff M. E., 1960, Adaptive switching circuits, In IRE WESCON Conv.
Record, Part 4, pages 96-104.
2,648 | 3,400 | Fast Rates for Regularized Objectives
Karthik Sridharan, Nathan Srebro, Shai Shalev-Shwartz
Toyota Technological Institute ? Chicago
Abstract
We study convergence properties of empirical minimization of a stochastic
strongly convex objective, where the stochastic component is linear. We show
that the value attained by the empirical minimizer converges to the optimal value
with rate 1/n. The result applies, in particular, to the SVM objective. Thus, we
obtain a rate of 1/n on the convergence of the SVM objective (with fixed regularization parameter) to its infinite-data limit. We demonstrate how this is essential
for obtaining certain types of oracle inequalities for SVMs. The results extend
also to approximate minimization as well as to strong convexity with respect to an
arbitrary norm, and so also to objectives regularized using other ℓ_p norms.
1  Introduction

We consider the problem of (approximately) minimizing a stochastic objective

    F(w) = E_θ[ f(w; θ) ]                                                             (1)

where the optimization is with respect to w ∈ W, based on an i.i.d. sample θ_1, ..., θ_n. We focus
on problems where f(w; θ) has a generalized linear form:

    f(w; θ) = ℓ(⟨w, φ(θ)⟩, θ) + r(w) .                                                (2)

The relevant special case is regularized linear prediction, where θ = (x, y), ℓ(⟨w, φ(x)⟩, y) is the
loss of predicting ⟨w, φ(x)⟩ when the true target is y, and r(w) is a regularizer.

It is well known that when the domain W and the mapping φ(·) are bounded, and the function
ℓ(z; θ) is Lipschitz continuous in z, the empirical averages

    F̂(w) = Ê[f(w; θ)] = (1/n) Σ_{i=1}^n f(w; θ_i)                                    (3)
converge uniformly to their expectations F(w) with rate √(1/n). This justifies using the empirical
minimizer

    ŵ = arg min_{w∈W} F̂(w),                                                          (4)

and we can then establish convergence of F(ŵ) to the population optimum

    F(w*) = min_{w∈W} F(w)                                                            (5)

with a rate of √(1/n).
Recently, Hazan et al [1] studied an online analogue to this problem, and established that if
f(w; θ) is strongly convex in w, the average online regret diminishes with a much faster rate,
namely (log n)/n. The function f(w; θ) becomes strongly convex when, for example, we have
r(w) = (λ/2)‖w‖² as in SVMs and other regularized learning settings.

In this paper we present an analogous "fast rate" for empirical minimization of a strongly convex
stochastic objective. In fact, we do not need to assume that we perform the empirical minimization
exactly: we provide uniform (over all w ∈ W) guarantees on the population sub-optimality
F(w) − F(w*) in terms of the empirical sub-optimality F̂(w) − F̂(ŵ), with a rate of 1/n. This is a
stronger type of result than what can be obtained with an online-to-batch conversion, as it applies
to any possible solution w, and not only to some specific algorithmically defined solution. For
example, it can be used to analyze the performance of approximate minimizers obtained through
approximate optimization techniques. Specifically, consider f(w; θ) as in (2), where ℓ(z; θ) is
convex and L-Lipschitz in z, the norm of φ(θ) is bounded by B, and r is λ-strongly convex. We
show that for any a > 0 and δ > 0, with probability at least 1 − δ, for all w (of arbitrary magnitude):

    F(w) − F(w*) ≤ (1 + a)(F̂(w) − F̂(ŵ)) + O( (1 + 1/a) L²B² log(1/δ) / (λn) ) .      (6)
We emphasize that here and throughout the paper the big-O notation hides only fixed numeric constants.
It might not be surprising that requiring strong convexity yields a rate of 1/n. Indeed, the connection
between strong convexity, variance bounds, and rates of 1/n, is well known. However, it is interesting to note the generality of the result here, and the simplicity of the conditions. In particular, we do
not require any "low noise" conditions, nor that the loss function is strongly convex (it need only be
weakly convex).
In particular, (6) applies, under no additional conditions, to the SVM objective. We therefore obtain
convergence with a rate of 1/n for the SVM objective. This 1/n rate on the SVM objective is
always valid, and does not depend on any low-noise conditions or on specific properties of the
kernel function. Such a "fast" rate might seem surprising at a first glance to the reader familiar with
the 1/√n rate on the expected loss of the SVM optimum. There is no contradiction here: what we
establish is that although the loss might converge at a rate of 1/√n, the SVM objective (regularized
loss) always converges at a rate of 1/n.

In fact, in Section 3 we see how a rate of 1/n on the objective corresponds to a rate of 1/√n on the
loss. Specifically, we perform an oracle analysis of the optimum of the SVM objective (rather than
of empirical minimization subject to a norm constraint, as in other oracle analyses of regularized
linear learning), based on the existence of some (unknown) low-norm, low-error predictor w_o.

Strong convexity is a concept that depends on a choice of norm. We state our results in a general
form, for any choice of norm ‖·‖. Strong convexity of r(w) must hold with respect to the chosen
norm ‖·‖, and the data φ(θ) must be bounded with respect to the dual norm ‖·‖*, i.e. we must
have ‖φ(θ)‖* ≤ B. This allows us to apply our results also to more general forms of regularizers,
including squared ℓ_p norm regularizers, r(w) = (λ/2)‖w‖_p², for 1 < p ≤ 2 (see Corollary 2).
However, the reader may choose to read the paper always thinking of the norm ‖w‖, and so also its
dual norm ‖w‖*, as the standard ℓ2-norm.
2  Main Result

We consider a generalized linear function f : W × Θ → R, that can be written as in (2), defined
over a closed convex subset W of a Banach space equipped with norm ‖·‖.

Lipschitz continuity and boundedness  We require that the mapping φ(·) is bounded by B,
i.e. ‖φ(θ)‖* ≤ B, and that the function ℓ(z; θ) is L-Lipschitz in z ∈ R for every θ.

Strong Convexity  We require that F(w) is λ-strongly convex w.r.t. the norm ‖w‖. That is, for all
w1, w2 ∈ W and α ∈ [0, 1] we have:

    F(αw1 + (1 − α)w2) ≤ αF(w1) + (1 − α)F(w2) − (λ/2) α(1 − α) ‖w1 − w2‖² .

Recalling that w* = arg min_w F(w), this ensures (see for example [2, Lemma 13]):

    F(w) ≥ F(w*) + (λ/2) ‖w − w*‖²                                                    (7)

We require only that the expectation F(w) = E[f(w; θ)] is strongly convex. Of course, requiring
that f(w; θ) is λ-strongly convex for all θ (with respect to w) is enough to ensure the condition.
In particular, for a generalized linear function of the form (2) it is enough to require that ℓ(z; y) is
convex in z and that r(w) is λ-strongly convex (w.r.t. the norm ‖w‖).
We now provide a faster convergence rate using the above conditions.

Theorem 1. Let W be a closed convex subset of a Banach space with norm ‖·‖ and dual norm ‖·‖*,
and consider f(w; θ) = ℓ(⟨w, φ(θ)⟩; θ) + r(w) satisfying the Lipschitz continuity, boundedness,
and strong convexity requirements with parameters B, L, and λ. Let w*, ŵ, F(w) and F̂(w) be as
defined in (1)-(5). Then, for any δ > 0 and any a > 0, with probability at least 1 − δ over a sample
of size n, we have that for all w ∈ W: (where [x]_+ = max(x, 0))

    F(w) − F(w*) ≤ (1 + a) [F̂(w) − F̂(w*)]_+ + 8 (1 + 1/a) L²B² (32 + log(1/δ)) / (λn)
                 ≤ (1 + a) (F̂(w) − F̂(ŵ)) + 8 (1 + 1/a) L²B² (32 + log(1/δ)) / (λn) .
It is particularly interesting to consider regularizers of the form r(w) = (λ/2)‖w‖_p², which are
λ(p − 1)-strongly convex w.r.t. the corresponding ℓ_p-norm [2]. Applying Theorem 1 to this case
yields the following bound:

Corollary 2. Consider an ℓ_p norm and its dual ℓ_q, with 1 < p ≤ 2, 1/q + 1/p = 1, and the objective
f(w; θ) = ℓ(⟨w, φ(θ)⟩; θ) + (λ/2)‖w‖_p², where ‖φ(θ)‖_q ≤ B and ℓ(z; y) is convex and L-Lipschitz
in z. The domain is the entire Banach space W = ℓ_p. Then, for any δ > 0 and any a > 0,
with probability at least 1 − δ over a sample of size n, we have that for all w ∈ W = ℓ_p (of any
magnitude):

    F(w) − F(w*) ≤ (1 + a)(F̂(w) − F̂(ŵ)) + O( (1 + 1/a) L²B² log(1/δ) / ((p − 1)λn) ) .
Corollary 2 allows us to analyze the rate of convergence of the regularized risk for ℓ_p-regularized
linear learning. That is, training by minimizing the empirical average of:

    f(w; x, y) = ℓ(⟨w, x⟩, y) + (λ/2)‖w‖_p²                                           (8)

where ℓ(z, y) is some convex loss function and ‖x‖_q ≤ B. For example, in SVMs we use the ℓ2
norm, and so bound ‖x‖₂ ≤ B, and the hinge loss ℓ(z, y) = [1 − yz]_+, which is 1-Lipschitz. What
we obtain is a bound on how quickly we can minimize the expectation F(w) = E[ℓ(⟨w, x⟩, y)] +
(λ/2)‖w‖_p², i.e. the regularized expected loss, or in other words, how quickly we converge to the
infinite-data optimum of the objective.
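As a concrete illustration of minimizing the empirical objective in (8) for p = 2, the following is a
minimal sketch using projected subgradient descent with 1/(λt) step sizes; the function names, the
step-size schedule, and the projection radius are illustrative assumptions, not something specified in
the paper.

```python
import numpy as np

def svm_objective(w, X, y, lam):
    # Empirical regularized hinge loss, i.e. F_hat(w) in (8) with the l2 norm:
    # mean_i [1 - y_i <w, x_i>]_+ + (lam / 2) ||w||^2
    margins = 1.0 - y * (X @ w)
    return np.mean(np.maximum(margins, 0.0)) + 0.5 * lam * np.dot(w, w)

def minimize_svm_objective(X, y, lam, n_steps=1000):
    n, d = X.shape
    w = np.zeros(d)
    radius = np.sqrt(2.0 / lam)      # F_hat(0) <= 1 implies ||w*|| <= sqrt(2/lam)
    for t in range(1, n_steps + 1):
        active = (1.0 - y * (X @ w)) > 0.0            # examples with nonzero hinge loss
        g = lam * w - (X[active].T @ y[active]) / n   # a subgradient of F_hat at w
        w -= g / (lam * t)                            # 1/(lam*t) steps suit strong convexity
        norm = np.linalg.norm(w)
        if norm > radius:                             # project back onto the feasible ball
            w *= radius / norm
    return w
```

The 1/(λt) schedule exploits exactly the λ-strong convexity of the objective that drives the fast rate
discussed above.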
We see, then, that the SVM objective converges to its optimum value at a fast rate of 1/n, without
any special assumptions. This still does not mean that the expected loss L(ŵ) = E[ℓ(⟨ŵ, x⟩, y)]
converges at this rate. This behavior is empirically demonstrated on the left plot of Figure 1. For
each data set size we plot the excess expected loss L(ŵ) − L(w*) and the sub-optimality of the
regularized expected loss F(ŵ) − F(w*) (recall that F(ŵ) = L(ŵ) + (λ/2)‖ŵ‖²). Although the
regularized expected loss converges to its infinite-data limit, i.e. to the population minimizer, with
rate roughly 1/n, the expected loss L(ŵ) converges at a slower rate of roughly √(1/n).

Studying the convergence rate of the SVM objective allows us to better understand and appreciate
analysis of computational optimization approaches for this objective, as well as obtain oracle
inequalities on the generalization loss of ŵ, as we do in the following Section.
Before moving on, we briefly provide an example of applying Theorem 1 with respect to the ℓ1
norm. The bound in Corollary 2 diverges when p → 1 and the Corollary is not applicable for
ℓ1 regularization. This is because ‖w‖₁² is not strongly convex w.r.t. the ℓ1-norm. An example of a
regularizer that is strongly convex with respect to the ℓ1 norm is the (unnormalized) entropy
regularizer [3]: r(w) = Σ_{i=1}^d |w_i| log(|w_i|). This regularizer is 1/B_w²-strongly convex
w.r.t. ‖w‖₁, as long as ‖w‖₁ ≤ B_w (see [2]), yielding:

Corollary 3. Consider a function f(w; θ) = ℓ(⟨w, φ(θ)⟩; θ) + Σ_{i=1}^d |w_i| log(|w_i|), where
‖φ(θ)‖∞ ≤ B and ℓ(z; y) is convex and L-Lipschitz in z. Take the domain to be the ℓ1 ball
[Figure 1: two log-log plots, with panel labels "Suboptimality of Objective" and "Excess Expected
Loss"; the y-axes range over 10^-2 to 10^-1 and the x-axes over n = 10^3 to 10^4.]

Figure 1: Left: Excess expected loss L(ŵ) − L(w*) and sub-optimality of the regularized expected
loss F(ŵ) − F(w*) as a function of training set size, for a fixed λ = 0.8. Right: Excess expected loss
L(ŵ) − min_{w_o} L(w_o), relative to the overall optimal w_o = arg min_w L(w), with λ_n = √(300/n).
Both plots are on a logarithmic scale and refer to a synthetic example with x uniform over
[−1.5, 1.5]^300, and y = sign(x₁) when |x₁| > 1 but uniform otherwise.
W = {w ∈ R^d : ‖w‖₁ ≤ B_w}. Then, for any δ > 0 and any a > 0, with probability at least 1 − δ
over a sample of size n, we have that for all w ∈ W:

    F(w) − F(w*) ≤ (1 + a)(F̂(w) − F̂(ŵ)) + O( (1 + 1/a) L²B²B_w² log(1/δ) / n ) .
3  Oracle Inequalities for SVMs

In this Section we apply the results from the previous Section to obtain an oracle inequality on the
expected loss L(w) = E[ℓ(⟨w, x⟩, y)] of an approximate minimizer of the SVM training objective
F̂_λ(w) = Ê[f_λ(w)], where

    f_λ(w; x, y) = ℓ(⟨w, x⟩, y) + (λ/2)‖w‖² ,                                         (9)

and ℓ(z, y) is the hinge loss, or any other 1-Lipschitz loss function. As before we denote
B = sup_x ‖x‖ (all norms in this Section are ℓ2 norms).

We assume, as an oracle assumption, that there exists a good predictor w_o with low norm ‖w_o‖
and which attains low expected loss L(w_o). Consider an optimization algorithm for F̂_λ(w) that is
guaranteed to find ŵ such that F̂_λ(ŵ) ≤ min_w F̂_λ(w) + ε_opt. Using the results of Section 2, we
can translate this approximate optimality of the empirical objective to an approximate optimality of
the expected objective F_λ(w) = E[f_λ(w)]. Specifically, applying Corollary 2 with a = 1 we have
that with probability at least 1 − δ:

    F_λ(ŵ) − F_λ(w*) ≤ 2 ε_opt + O( B² log(1/δ) / (λn) ) .                            (10)

Optimizing to within ε_opt = O(B²/(λn)) is then enough to ensure

    F_λ(ŵ) − F_λ(w*) = O( B² log(1/δ) / (λn) ) .                                      (11)
In order to translate this to a bound on the expected loss L(ŵ) we consider the following
decomposition:

    L(ŵ) = L(w_o) + (F_λ(ŵ) − F_λ(w*)) + (F_λ(w*) − F_λ(w_o)) + (λ/2)‖w_o‖² − (λ/2)‖ŵ‖²
         ≤ L(w_o) + O( B² log(1/δ) / (λn) ) + 0 + (λ/2)‖w_o‖²                         (12)

where we used the bound (11) to bound the second term, the optimality of w* to ensure the third
term is non-positive, and we also dropped the last, non-positive, term.
This might seem like a rate of 1/n on the generalization error, but we need to choose λ so as to
balance the second and third terms. The optimal choice for λ is

    λ(n) = c · B √(log(1/δ)) / (‖w_o‖ √n) ,                                           (13)

for some constant c. We can now formally state our oracle inequality, which is obtained by
substituting (13) into (12):

Corollary 4. Consider an SVM-type objective as in (9). For any w_o and any δ > 0, with probability
at least 1 − δ over a sample of size n, we have that for all ŵ s.t. F̂_{λ(n)}(ŵ) ≤ min_w F̂_{λ(n)}(w) +
O(B²/(λn)), where λ(n) is chosen as in (13), the following holds:

    L(ŵ) ≤ L(w_o) + O( √( B² ‖w_o‖² log(1/δ) / n ) ) .

Corollary 4 is demonstrated empirically on the right plot of Figure 1.
The way we set λ(n) in Corollary 4 depends on ‖w_o‖. However, using

    λ(n) = B √(log(1/δ)) / √n                                                         (14)

we obtain:

Corollary 5. Consider an SVM-type objective as in (9) with λ(n) set as in (14). For any δ > 0,
with probability at least 1 − δ over a sample of size n, we have that for all ŵ s.t. F̂_{λ(n)}(ŵ) ≤
min_w F̂_{λ(n)}(w) + O(B²/(λn)), the following holds:

    L(ŵ) ≤ inf_{w_o} ( L(w_o) + O( √( B² (‖w_o‖⁴ + 1) log(1/δ) / n ) ) )

The price we pay here is that the bound of Corollary 5 is larger by a factor of ‖w_o‖ relative to the
bound of Corollary 4. Nevertheless, this bound allows us to converge with a rate of √(1/n) to the
expected loss of any fixed predictor.

It is interesting to repeat the analysis of this Section using the more standard result:

    F_λ(ŵ) − F_λ(w*) ≤ F̂_λ(ŵ) − F̂_λ(w*) + O( √( B_w² B² / n ) )                      (15)

for ‖w‖ ≤ B_w, where we ignore the dependence on δ. Setting B_w = √(2/λ), as this is a bound on
the norm of both the empirical and population optimums, and using (15) instead of Corollary 2 in
our analysis yields the oracle inequality:

    L(ŵ) ≤ L(w_o) + O( ( B² ‖w_o‖² log(1/δ) / n )^{1/3} )                             (16)

The oracle analysis studied here is very simple: our oracle assumption involves only a single
predictor w_o, and we make no assumptions about the kernel or the noise. We note that a more
sophisticated analysis has been carried out by Steinwart et al [4], who showed that rates faster than
1/√n are possible under certain conditions on the noise and the complexity of the kernel class. In
Steinwart et al's analyses the estimation rates (i.e. rates for the expected regularized risk) are given
in terms of the approximation error quantity (λ/2)‖w*‖² + L(w*) − L*, where L* is the Bayes risk.
In our result we consider the estimation rate for the regularized objective independent of the
approximation error.
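For a rough numeric feel for how the three guarantees compare, the toy computation below evaluates
their dominant terms (absolute constants dropped; B, ‖w_o‖, and δ are set to illustrative values; none
of this is from the paper):

```python
import math

def oracle_bounds(n, B=1.0, w_norm=3.0, log_term=math.log(1.0 / 0.05)):
    """Dominant excess-loss terms (constants dropped) for a fixed predictor w_o."""
    cor4 = math.sqrt(B**2 * w_norm**2 * log_term / n)            # Corollary 4, needs ||w_o||
    cor5 = math.sqrt(B**2 * (w_norm**4 + 1) * log_term / n)      # Corollary 5, lambda as in (14)
    std  = (B**2 * w_norm**2 * log_term / n) ** (1.0 / 3.0)      # bound (16) via the sqrt-rate (15)
    return cor4, cor5, std

for n in (10**3, 10**5, 10**7):
    print(n, ["%.4f" % b for b in oracle_bounds(n)])
```

The n^{-1/3} term of (16) dominates both n^{-1/2} terms for large n, which is the point of the
comparison above.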
4  Proof of Main Result

To prove Theorem 1 we use techniques of reweighing and peeling following Bartlett et al [5].

For each w, we define g_w(θ) = f(w; θ) − f(w*; θ), and so our goal is to bound the expectation of
g_w in terms of its empirical average. We denote by G = {g_w | w ∈ W}.

Since our desired bound is not exactly uniform, and we would like to pay different attention to
functions depending on their expected sub-optimality, we will instead consider the following
reweighted class. For any r > 0 define

    G_r = { g_w^r = g_w / 4^{k(w)} : w ∈ W, k(w) = min{ k′ ∈ Z+ : E[g_w] ≤ r 4^{k′} } }   (17)

where Z+ is the set of non-negative integers. In other words, g_w^r ∈ G_r is just a scaled version of
g_w ∈ G and the scaling factor ensures that E[g_w^r] ≤ r.
We will begin by bounding the variation between expected and empirical average values of g^r ∈ G_r.
This is typically done in terms of the complexity of the class G_r. However, we will instead use the
complexity of a slightly different class of functions, which ignores the non-random (i.e. non-data-dependent) regularization terms r(w). Define:

    H_r = { h_w^r = h_w / 4^{k(w)} : w ∈ W, k(w) = min{ k′ ∈ Z+ : E[g_w] ≤ r 4^{k′} } }   (18)

where

    h_w(θ) = g_w(θ) − ( r(w) − r(w*) ) = ℓ(⟨w, φ(θ)⟩; θ) − ℓ(⟨w*, φ(θ)⟩; θ).              (19)

That is, h_w^r(θ) is the data-dependent component of g_w^r, dropping the (scaled) regularization
terms. With this definition we have E[g_w^r] − Ê[g_w^r] = E[h_w^r] − Ê[h_w^r] (the regularization
terms on the left hand side cancel out), and so it is enough to bound the deviation of the empirical
means in H_r. This can be done in terms of the Rademacher complexity of the class, R(H_r)
[6, Theorem 5]: For any δ > 0, with probability at least 1 − δ,

    sup_{h^r ∈ H_r} ( E[h^r] − Ê[h^r] ) ≤ 2 R(H_r) + ( sup_{h^r ∈ H_r, θ} |h^r(θ)| ) √( log(1/δ) / (2n) ) .   (20)

We will now proceed to bounding the two terms on the right hand side:
Lemma 6. sup_{h^r ∈ H_r, θ} |h^r(θ)| ≤ LB √(2r/λ).

Proof. From the definition of h_w^r given in (18)-(19), the Lipschitz continuity of ℓ(·; θ), and the
bound ‖φ(θ)‖* ≤ B, we have for all w, θ:

    |h_w^r(θ)| ≤ |h_w(θ)| / 4^{k(w)} ≤ LB ‖w − w*‖ / 4^{k(w)}                         (21)

We now use the strong convexity of F(w), and in particular eq. (7), as well as the definitions of g_w
and k(w), and finally note that 4^{k(w)} ≥ 1, to get:

    ‖w − w*‖ ≤ √( (2/λ)(F(w) − F(w*)) ) = √( (2/λ) E[g_w] ) ≤ √( (2/λ) 4^{k(w)} r ) ≤ √( (2/λ) 16^{k(w)} r )   (22)

Substituting (22) in (21) yields the desired bound.

Lemma 7. R(H_r) ≤ 2LB √( 2r / (λn) ).
Proof. We will use the following generic bound on the Rademacher complexity of linear
functionals [7, Theorem 1]: for any t(w) which is λ-strongly convex (w.r.t. a norm with dual
norm ‖·‖*),

    R({ θ ↦ ⟨w, φ(θ)⟩ | t(w) ≤ a }) ≤ (sup_θ ‖φ(θ)‖*) √( 2a / (λn) ) .                (23)

For each a > 0, define H(a) = {h_w : w ∈ W, E[g_w] ≤ a}. First note that E[g_w] = F(w) −
F(w*) is λ-strongly convex. Using (23) and the Lipschitz composition property we therefore have
R(H(a)) ≤ LB √( 2a / (λn) ). Now:

    R(H_r) = R( ∪_{j=0}^∞ 4^{−j} H(r 4^j) ) ≤ Σ_{j=0}^∞ 4^{−j} R(H(4^j r))
           ≤ LB Σ_{j=0}^∞ 4^{−j/2} √( 2r / (λn) ) = 2LB √( 2r / (λn) ) .
We now proceed to bounding E[g_w] = F(w) − F(w*) and thus proving Theorem 1. For any r > 0,
with probability at least 1 − δ we have:

    E[g_w] − Ê[g_w] = 4^{k(w)} ( E[g_w^r] − Ê[g_w^r] ) = 4^{k(w)} ( E[h_w^r] − Ê[h_w^r] ) ≤ 4^{k(w)} √r D   (24)

where D = LB (4√2 + √(log(1/δ))) / √(λn) ≤ 2LB √( (32 + log(1/δ)) / (λn) ) is obtained by
substituting Lemmas 6 and 7 into (20). We now consider two possible cases: k(w) = 0 and k(w) > 0.

The case k(w) = 0 corresponds to functions with an expected value close to optimal: E[g_w] ≤ r,
i.e. F(w) ≤ F(w*) + r. In this case (24) becomes:

    E[g_w] ≤ Ê[g_w] + √r D                                                            (25)

We now turn to functions for which k(w) > 0, i.e. with expected values further away from optimal.
In this case, the definition of k(w) ensures 4^{k(w)−1} r < E[g_w], and substituting this into (24) we
have E[g_w] − Ê[g_w] ≤ (4 E[g_w] / √r) D. Rearranging terms yields:

    E[g_w] ≤ Ê[g_w] / ( 1 − 4D/√r )                                                   (26)

Combining the two cases (25) and (26) (and requiring r ≥ (4D)² so that 1/(1 − 4D/√r) ≥ 1), we
always have:

    E[g_w] ≤ ( 1 / (1 − 4D/√r) ) [ Ê[g_w] ]_+ + √r D                                  (27)

Setting r = (1 + 1/a)² (4D)² yields the bound in Theorem 1.
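To see why this choice of r gives the (1 + a) factor, note the short computation below, with D
denoting the deviation term defined after (24):

```latex
\sqrt{r} = \left(1 + \tfrac{1}{a}\right) 4D
\;\Longrightarrow\;
\frac{4D}{\sqrt{r}} = \frac{a}{1+a}
\;\Longrightarrow\;
\frac{1}{1 - 4D/\sqrt{r}} = 1 + a ,
\qquad
\sqrt{r}\, D = 4\left(1 + \tfrac{1}{a}\right) D^2 .
```

Plugging these into (27) gives E[g_w] ≤ (1 + a)[Ê[g_w]]_+ + 4(1 + 1/a)D², which, with the bound
D ≤ 2LB √((32 + log(1/δ))/(λn)), matches Theorem 1 up to the numeric constant.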
5  Comparison with Previous "Fast Rate" Guarantees

Rates faster than 1/√n for estimation have been previously explored under various conditions,
where strong convexity has played a significant role. Lee et al [8] showed faster rates for
squared loss, exploiting the strong convexity of this loss function, but only under a finite
pseudo-dimensionality assumption, which does not hold in SVM-like settings. Bousquet [9] provided
similar guarantees when the spectrum of the kernel matrix (covariance of the data) is exponentially
decaying. Tsybakov [10] introduced a margin condition under which rates faster than 1/√n are
shown possible. It is also possible to ensure rates of 1/n by relying on low noise conditions [9, 11],
but here we make no such assumption.

Most methods for deriving fast rates first bound the variance of the functions in the class by some
monotone function of their expectations. Then, using methods as in Bartlett et al [5], one can
get bounds that have a localized complexity term and additional terms of order faster than 1/√n.
However, it is important to note that the localized complexity term typically dominates the rate and
still needs to be controlled. For example, Bartlett et al [12] show that strict convexity of the loss
function implies a variance bound, and provide a general result that can enable obtaining faster rates
as long as the complexity term is low. For instance, for classes with finite VC dimension V, the
resulting rate is n^{−(V+2)/(2V+2)}, which indeed is better than 1/√n but is not quite 1/n. Thus we
see that even for a strictly convex loss function, such as the squared loss, additional conditions are
necessary in order to obtain "fast" rates.

In this work we show that strong convexity not only implies a variance bound but in fact can be used
to bound the localized complexity. An important distinction is that we require strong convexity of
the function F(w) with respect to the norm ‖w‖. This is rather different than requiring the loss
function z ↦ ℓ(z, y) to be strongly convex on the reals. In particular, the loss of a linear predictor,
w ↦ ℓ(⟨w, x⟩, y), can never be strongly convex in a multi-dimensional space, even if ℓ is strongly
convex, since it is flat in directions orthogonal to x.

As mentioned, f(w; x, y) = ℓ(⟨w, x⟩, y) can never be strongly convex in a high-dimensional space.
However, we actually only require the strong convexity of the expected loss F(w). If the loss
function ℓ(z, y) is λ-strongly convex in z, and the eigenvalues of the covariance of x are bounded
away from zero, strong convexity of F(w) can be ensured. In particular, F(w) would be
cλ-strongly convex, where c is the minimal eigenvalue of Cov[x]. This enables us to use Theorem 1
to obtain rates of 1/n on the expected loss itself. However, we cannot expect the eigenvalues to be
bounded away from zero in very high dimensional spaces, limiting the applicability of this result to
low-dimensional spaces where, as discussed above, other results also apply.

An interesting observation about our proof technique is that the only concentration inequality we
invoked was McDiarmid's inequality (in [6, Theorem 5], to obtain (20), a bound on the deviations
in terms of the Rademacher complexity). This was possible because we could make a localization
argument for the ℓ∞ norm of the functions in our function class in terms of their expectation.

6  Summary

We believe this is the first demonstration that, without any additional requirements, the SVM
objective converges to its infinite-data limit with a rate of O(1/n). This improves on previous results
that considered the SVM objective only under special additional conditions. The results extend also
to other regularized objectives.

Although the quantity that is ultimately of interest to us is the expected loss, and not the regularized
expected loss, it is still important to understand the statistical behavior of the regularized expected
loss. This is the quantity that we actually optimize, track, and often provide bounds on (e.g. in
approximate or stochastic optimization approaches). A better understanding of its behavior can allow
us both to theoretically explore the behavior of regularized learning methods, to better understand
empirical behavior observed in practice, and to appreciate guarantees of stochastic optimization
approaches for such regularized objectives. As we saw in Section 3, deriving such fast rates is also
essential for obtaining simple and general oracle inequalities, which also help us guide our choice of
regularization parameters.
References
[1] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
[2] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University, 2007.
[3] T. Zhang. Covering number bounds of certain regularized linear function classes. J. Mach. Learn. Res., 2:527-550, 2002.
[4] I. Steinwart, D. Hush, and C. Scovel. A new concentration result for regularized risk minimizers. High-dimensional Probability IV, in IMS Lecture Notes, 51:260-275, 2006.
[5] P. L. Bartlett, O. Bousquet, and S. Mendelson. Localized Rademacher complexities. In COLT '02: Proceedings of the 15th Annual Conference on Computational Learning Theory, pages 44-58, London, UK, 2002. Springer-Verlag.
[6] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. In O. Bousquet, U. von Luxburg, and G. Rätsch, editors, Advanced Lectures in Machine Learning, pages 169-207. Springer, 2004.
[7] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In NIPS, 2008.
[8] W. S. Lee, P. L. Bartlett, and R. C. Williamson. The importance of convexity in learning with squared loss. In Computational Learning Theory, pages 140-146, 1996.
[9] O. Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, Ecole Polytechnique, 2002.
[10] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135-166, 2004.
[11] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Annals of Statistics, 35:575, 2007.
[12] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138-156, March 2006.
| 3400 |@word version:1 briefly:1 norm:32 stronger:1 covariance:2 boundedness:2 ecole:1 scovel:2 surprising:2 must:3 written:1 chicago:1 enables:1 plot:4 mcdiarmid:1 zhang:1 learing:1 prove:1 theoretically:1 indeed:2 expected:25 roughly:2 p1:1 nor:1 behavior:5 multi:1 relying:1 equipped:1 becomes:2 begin:1 provided:1 bounded:6 notation:1 what:3 guarantee:4 every:1 exactly:2 ensured:1 k2:3 scaled:2 uk:1 classifier:1 mcauliffe:1 before:2 positive:2 dropped:1 limit:3 mach:1 approximately:1 lugosi:1 might:4 studied:2 r4:1 practice:1 regret:2 llipschitz:1 empirical:17 word:2 get:2 cannot:1 close:1 risk:6 applying:3 optimize:1 demonstrated:2 kale:1 attention:1 convex:32 simplicity:1 contradiction:1 deriving:2 proving:1 population:4 variation:1 analogous:1 limiting:1 annals:2 target:1 satisfying:1 particularly:1 observed:1 role:1 ensures:3 technological:1 mentioned:1 pd:1 convexity:17 complexity:12 ultimately:1 weakly:1 depend:1 ov:1 localization:1 various:1 regularizer:3 fast:9 london:1 shalev:2 quite:1 larger:1 nineteenth:1 otherwise:1 statistic:2 itself:1 online:5 eigenvalue:3 relevant:1 combining:1 translate:2 exploiting:1 convergence:7 optimum:6 requirement:2 diverges:1 rademacher:4 converges:7 help:1 depending:1 eq:1 strong:14 involves:1 implies:2 direction:1 stochastic:6 vc:1 enable:1 require:7 larizer:1 generalization:2 opt:3 strictly:1 hold:4 considered:1 mapping:2 substituting:4 estimation:3 diminishes:1 applicable:1 saw:1 minimization:5 always:4 gaussian:1 rather:2 kalai:1 corollary:14 focus:1 attains:1 dependent:1 minimizers:2 entire:1 typically:2 arg:3 dual:5 overall:1 colt:1 classification:1 special:3 never:2 kw:7 cancel:1 thinking:1 familiar:1 bw:7 karthik:1 recalling:1 interest:1 yielding:1 regularizers:3 necessary:1 minw:3 orthogonal:1 iv:1 desired:2 re:1 minimal:1 r4k:2 instance:1 applicability:1 deviation:2 subset:2 uniform:4 predictor:5 kq:1 gr:4 supx:1 synthetic:1 lee:2 quickly:2 w1:3 squared:4 thesis:2 choose:2 american:1 b2:1 depends:2 closed:2 hazan:2 kwk:10 analyze:2 sup:4 bayes:1 aggregation:1 shai:1 minimize:1 variance:4 who:1 yield:6 definition:4 proof:4 recall:1 improves:1 sophisticated:1 actually:2 attained:1 done:2 strongly:22 generality:1 just:1 hand:2 steinwart:4 glance:1 continuity:3 believe:1 requiring:4 true:1 concept:1 regularization:7 read:1 boucheron:1 gw:31 reweighted:1 covering:1 unnormalized:1 suboptimality:1 generalized:3 demonstrate:1 polytechnique:1 invoked:1 recently:1 empirically:2 exponentially:1 banach:3 extend:1 discussed:1 association:1 ims:1 refer:1 composition:1 significant:1 rd:5 kw1:1 moving:1 hide:1 showed:2 optimizing:1 inf:1 certain:3 verlag:1 inequality:10 kwk1:4 additional:5 converge:4 ing:1 faster:8 long:2 a1:4 controlled:1 prediction:2 sition:1 expectation:6 kernel:5 agarwal:1 w2:4 strict:1 subject:1 sridharan:2 seem:2 jordan:1 integer:1 enough:4 bartlett:6 wo:13 proceed:2 tewari:1 tsybakov:2 svms:4 sign:1 algorithmically:1 track:1 dropping:1 nevertheless:1 k4:1 monotone:1 luxburg:1 extends:1 throughout:1 reader:2 scaling:1 bound:30 pay:2 guaranteed:1 played:1 oracle:12 annual:2 constraint:1 flat:1 kwkp:5 bousquet:5 nathan:1 argument:1 min:7 optimality:8 ball:1 march:1 slightly:1 wi:4 kakade:1 previously:1 turn:1 studying:1 apply:3 away:3 generic:1 batch:1 slower:1 existence:1 ensure:4 hinge:2 yz:1 establish:2 appreciate:2 objective:29 quantity:3 reweighing:1 concentration:3 dependence:1 minimizing:2 balance:1 demonstration:1 hebrew:1 negative:1 unknown:1 perform:2 conversion:1 observation:1 finite:2 arbitrary:2 lb:7 introduced:1 
namely:1 connection:1 distinction:1 established:1 hush:1 nip:1 including:1 max:1 analogue:1 regularized:21 predicting:1 hr:17 advanced:1 carried:1 understanding:1 l2:4 relative:2 loss:39 expect:1 lecture:2 interesting:4 srebro:1 localized:4 editor:1 course:1 summary:1 repeat:1 last:1 side:2 allow:1 understand:3 guide:1 institute:1 dimension:1 numeric:1 valid:1 doesn:1 ignores:1 functionals:1 excess:4 approximate:7 emphasize:1 ignore:1 xi:7 shwartz:2 spectrum:1 continuous:1 learn:1 rearranging:1 obtaining:3 williamson:1 domain:3 main:2 big:1 noise:5 bounding:3 x1:2 sub:5 kxk2:1 toyota:1 third:2 hw:21 peeling:1 theorem:10 specific:2 explored:1 decay:1 svm:16 dominates:1 essential:2 exists:1 mendelson:1 hrw:3 importance:1 phd:2 magnitude:2 justifies:1 margin:2 entropy:1 logarithmic:2 explore:1 kxk:1 datadependent:1 applies:3 springer:2 corresponds:2 minimizer:4 goal:1 lipschitz:10 price:1 infinite:4 specifically:3 uniformly:1 lemma:4 kxkq:1 atsch:1 formally:1 highdimensional:1 support:1 |
2,649 | 3,401 | Near-optimal Regret Bounds for
Reinforcement Learning
Peter Auer
Thomas Jaksch
Ronald Ortner
University of Leoben, Franz-Josef-Strasse 18, 8700 Leoben, Austria
{auer,tjaksch,rortner}@unileoben.ac.at
Abstract
For undiscounted reinforcement learning in Markov decision processes (MDPs)
we consider the total regret of a learning algorithm with respect to an optimal
policy. In order to describe the transition structure of an MDP we propose a new
parameter: An MDP has diameter D if for any pair of states s, s′ there is a policy
which moves from s to s′ in at most D steps (on average). We present a
reinforcement learning algorithm with total regret Õ(DS√(AT)) after T steps for
any unknown MDP with S states, A actions per state, and diameter D. This bound
holds with high probability. We also present a corresponding lower bound of
Ω(√(DSAT)) on the total regret of any learning algorithm.
1  Introduction

In a Markov decision process (MDP) M with finite state space S and finite action space A, a learner
in state s ∈ S needs to choose an action a ∈ A. When executing action a in state s, the learner
receives a random reward r with mean r̄(s, a) according to some distribution on [0, 1]. Further,
according to the transition probabilities p(s′|s, a), a random transition to a state s′ ∈ S occurs.

Reinforcement learning of MDPs is a standard model for learning with delayed feedback. In contrast
to important other work on reinforcement learning, where the performance of the learned policy is
considered (see e.g. [1, 2] and also the discussion and references given in the introduction of [3]),
we are interested in the performance of the learning algorithm during learning. For that, we compare
the rewards collected by the algorithm during learning with the rewards of an optimal policy.
In this paper we will consider undiscounted rewards. The accumulated reward of an algorithm A
after T steps in an MDP M is defined as

    R(M, A, s, T) := Σ_{t=1}^T r_t ,

where s is the initial state and r_t are the rewards received during the execution of algorithm A. The
average reward

    ρ(M, A, s) := lim_{T→∞} (1/T) E[R(M, A, s, T)]

can be maximized by an appropriate stationary policy π : S → A which defines an optimal action
for each state [4].
The difficulty of learning an MDP does not only depend on its size (given by the number of states
and actions), but also on its transition structure. In order to measure this transition structure we
propose a new parameter, the diameter D of an MDP. The diameter D is the time it takes to move
from any state s to any other state s′, using an appropriate policy for this pair of states s and s′:

Definition 1. Let T(s′|M, π, s) be the first (random) time step in which state s′ is reached when
policy π is executed on MDP M with initial state s. Then the diameter of M is given by

    D(M) := max_{s,s′∈S} min_{π:S→A} E[ T(s′|M, π, s) ] .
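As an aside, the minimal expected hitting times in Definition 1 can be computed by value iteration
for a unit-cost stochastic shortest-path problem. The sketch below is an illustrative NumPy
implementation (the function names are assumptions, not from the paper); it presumes every state
s′ is reachable from every s, i.e. a communicating MDP.

```python
import numpy as np

def min_expected_hitting_times(P, target, n_iter=10_000, tol=1e-9):
    """h[s] = min over policies of E[steps to reach `target` from s].

    P has shape (S, A, S): P[s, a, s2] = p(s2 | s, a). Value iteration for a
    unit-cost stochastic shortest-path problem:
        h[target] = 0,   h[s] = 1 + min_a sum_s2 P[s, a, s2] * h[s2].
    """
    S = P.shape[0]
    h = np.zeros(S)
    for _ in range(n_iter):
        q = 1.0 + P @ h                  # shape (S, A): expected cost of each action
        h_new = q.min(axis=1)
        h_new[target] = 0.0
        if np.max(np.abs(h_new - h)) < tol:
            return h_new
        h = h_new
    return h

def diameter(P):
    # D(M) = max over (s, s') of the minimal expected time to move from s to s'.
    S = P.shape[0]
    best = 0.0
    for t in range(S):
        h = min_expected_hitting_times(P, t)
        best = max(best, max(h[s] for s in range(S) if s != t))
    return best
```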
A finite diameter seems necessary for interesting bounds on the regret of any algorithm with respect
to an optimal policy. When a learner explores suboptimal actions, this may take it into a "bad part"
of the MDP from which it may take about D steps to reach again a "good part" of the MDP.
Hence, the learner may suffer regret D for such exploration, and it is very plausible that the diameter
appears in the regret bound.
For MDPs with finite diameter (which usually are called communicating, see e.g. [4]) the optimal
average reward ρ* does not depend on the initial state (cf. [4], Section 8.3.3), and we set

    ρ*(M) := ρ*(M, s) := max_π ρ(M, π, s).

The optimal average reward is the natural benchmark for a learning algorithm A, and we define the
total regret of A after T steps as¹

    Δ(M, A, s, T) := T ρ*(M) − R(M, A, s, T).

In the following, we present our reinforcement learning algorithm UCRL2 (a variant of the UCRL
algorithm of [5]) which uses upper confidence bounds to choose an optimistic policy. We show
that the total regret of UCRL2 after T steps is Õ(D|S|√(|A|T)). A corresponding lower bound of
Ω(√(D|S||A|T)) on the total regret of any learning algorithm is given as well. These results establish
the diameter as an important parameter of an MDP. Further, the diameter seems to be more natural
than other parameters that have been proposed for various PAC and regret bounds, such as the mixing
time [3, 6] or the hitting time of an optimal policy [7] (cf. the discussion below).
1.1  Relation to previous Work

We first compare our results to the PAC bounds for the well-known algorithms E3 of Kearns,
Singh [3], and R-Max of Brafman, Tennenholtz [6] (see also Kakade [8]). These algorithms achieve
ε-optimal average reward with probability 1 − δ after time polynomial in 1/ε, 1/δ, |S|, |A|, and the
mixing time T_ε^mix (see below). As the polynomial dependence on ε is of order 1/ε³, the PAC
bounds translate into T^(2/3) regret bounds at the best. Moreover, both algorithms need the ε-return
mixing time T_ε^mix of an optimal policy π* as input parameter. This parameter T_ε^mix is the
number of steps until the average reward of π* over these T_ε^mix steps is ε-close to the optimal
average reward ρ*. It is easy to construct MDPs of diameter D with T_ε^mix ≈ D/ε. This additional
dependency on ε further increases the exponent in the above mentioned regret bounds for E3 and
R-Max. Also, the exponents of the parameters |S| and |A| in the PAC bounds of [3] and [6] are
substantially larger than in our bound.

The MBIE algorithm of Strehl and Littman [9, 10], similarly to our approach, applies confidence
bounds to compute an optimistic policy. However, Strehl and Littman consider only a discounted
reward setting, which seems to be less natural when dealing with regret. Their definition of regret
measures the difference between the rewards² of an optimal policy and the rewards of the learning
algorithm along the trajectory taken by the learning algorithm. In contrast, we are interested in the
regret of the learning algorithm with respect to the rewards of the optimal policy along the trajectory
of the optimal policy.

Tewari and Bartlett [7] propose a generalization of the index policies of Burnetas and Katehakis [11].
These index policies choose actions optimistically by using confidence bounds only for the estimates
in the current state. The regret bounds for the index policies of [11] and the OLP algorithm of [7]
are asymptotically logarithmic in T. However, unlike our bounds, these bounds depend on the gap
between the "quality" of the best and the second best action, and these asymptotic bounds also hide
an additive term which is exponential in the number of states. Actually, it is possible to prove a
corresponding gap-dependent logarithmic bound for our UCRL2 algorithm as well (cf. Remark 4
below). This bound holds uniformly over time and under weaker assumptions: While [7] and [11]
consider only ergodic MDPs in which any policy will reach every state after a sufficient number of
steps, we make only the more natural assumption of a finite diameter.
¹ It can be shown that max_A E[R(M, A, s, T)] = T ρ*(M) + O(D(M)) and max_A R(M, A, s, T) =
T ρ*(M) + Õ(√T) with high probability.
² Actually, the state values.

2  Results
We summarize the results achieved for our algorithm UCRL2 which is described in the next section,
and also state a corresponding lower bound. We assume an unknown MDP M to be learned, with
S := |S| states, A := |A| actions, and finite diameter D := D(M). Only S and A are known to the
learner, and UCRL2 is run with parameter δ.

Theorem 2. With probability 1 − δ it holds that for any initial state s ∈ S and any T > 1, the regret
of UCRL2 is bounded by

    Δ(M, UCRL2, s, T) ≤ c₁ · DS √( T A log(T/δ) ) ,

for a constant c₁ which is independent of M, T, and δ.

It is straightforward to obtain from Theorem 2 the following sample complexity bound.

Corollary 3. With probability 1 − δ the average per-step regret is at most ε for any

    T ≥ c₂ (D²S²A / ε²) log( DSA / (δε) )

steps, where c₂ is a constant independent of M.
Remark 4. The proof method of Theorem 2 can be modified to give for each initial state s and T > 1
an alternative upper bound on the expected regret,

    E[Δ(M, UCRL2, s, T)] ≤ c₃ · D²S²A log(T) / g ,

where g := ρ*(M) − max_{π,s} { ρ(M, π, s) : ρ(M, π, s) < ρ*(M) } is the gap between the optimal
average reward and the second best average reward achievable in M.

These new bounds are improvements over the bounds that have been achieved in [5] for the original
UCRL algorithm in various respects: the exponents of the relevant parameters have been decreased
considerably, the parameter D we use here is substantially smaller than the corresponding mixing
time in [5], and finally, the ergodicity assumption is replaced by the much weaker and more natural
assumption that the MDP has finite diameter.
The following is an accompanying lower bound on the expected regret.

Theorem 5. For some c₄ > 0, any algorithm A, and any natural numbers S, A ≥ 10, D ≥
20 log_A S, and T ≥ DSA, there is an MDP³ M with S states, A actions, and diameter D, such
that for any initial state s ∈ S the expected regret of A after T steps is

    E[Δ(M, A, s, T)] ≥ c₄ · √(DSAT) .
In a different setting, a modification of UCRL2 can also deal with changing MDPs.

Remark 6. Assume that the MDP (i.e. its transition probabilities and reward distributions) is
allowed to change ℓ times up to step T, such that the diameter is always at most D (we assume an
initial change at time t = 1). In this model we measure regret as the sum of missed rewards compared
to the ℓ policies which are optimal after the changes of the MDP. Restarting UCRL2 with parameter
δ/ℓ² at steps ⌈i³/ℓ²⌉ for i = 1, 2, 3, ..., this regret is upper bounded by

    c₅ · ℓ^(1/3) T^(2/3) DS √( A log(T/δ) )

with probability 1 − 2δ.

MDPs with a different model of changing rewards have already been considered in [12]. There, the
transition probabilities are assumed to be fixed and known to the learner, but the rewards are allowed
to change in every step. A best possible upper bound of O(√T) on the regret against an optimal
stationary policy, given all the reward changes in advance, is derived.

³ The diameter of any MDP with S states and A actions is at least log_A S.
Input: A confidence parameter δ ∈ (0, 1).
Initialization: Set t := 1, and observe the initial state s₁.
For episodes k = 1, 2, ... do

  Initialize episode k:
  1. Set the start time of episode k, t_k := t.
  2. For all (s, a) in S × A initialize the state-action counts for episode k, v_k(s, a) := 0.
     Further, set the state-action counts prior to episode k,
         N_k(s, a) := #{ τ < t_k : s_τ = s, a_τ = a } .
  3. For s, s′ ∈ S and a ∈ A set the observed accumulated rewards and the transition
     counts prior to episode k,
         R_k(s, a) := Σ_{τ=1}^{t_k−1} r_τ · 1_{s_τ=s, a_τ=a} ,
         P_k(s, a, s′) := #{ τ < t_k : s_τ = s, a_τ = a, s_{τ+1} = s′ } ,
     and compute estimates r̂_k(s, a) := R_k(s, a)/max{1, N_k(s, a)},
     p̂_k(s′|s, a) := P_k(s, a, s′)/max{1, N_k(s, a)}.

  Compute policy π̃_k:
  4. Let M_k be the set of all MDPs with states and actions as in M, and with transition
     probabilities p̃(·|s, a) close to p̂_k(·|s, a), and rewards r̃(s, a) ∈ [0, 1] close
     to r̂_k(s, a), that is,
         | r̃(s, a) − r̂_k(s, a) | ≤ √( 7 log(2SAt_k/δ) / (2 max{1, N_k(s, a)}) )        (1)
         ‖ p̃(·|s, a) − p̂_k(·|s, a) ‖₁ ≤ √( 14S log(2At_k/δ) / max{1, N_k(s, a)} ) .    (2)
  5. Use extended value iteration (Section 3.1) to find a policy π̃_k and an optimistic
     MDP M̃_k ∈ M_k such that
         ρ̃_k := min_s ρ(M̃_k, π̃_k, s) ≥ max_{M′∈M_k, π, s′} ρ(M′, π, s′) − 1/√t_k .    (3)

  Execute policy π̃_k:
  6. While v_k(s_t, π̃_k(s_t)) < max{1, N_k(s_t, π̃_k(s_t))} do
     (a) Choose action a_t := π̃_k(s_t), obtain reward r_t, and observe next state s_{t+1}.
     (b) Update v_k(s_t, a_t) := v_k(s_t, a_t) + 1.
     (c) Set t := t + 1.

Figure 1: The UCRL2 algorithm.
3  The UCRL2 Algorithm

Our algorithm is a variant of the UCRL algorithm in [5]. As its predecessor, UCRL2 implements
the paradigm of "optimism in the face of uncertainty". As such, it defines a set M of statistically
plausible MDPs given the observations so far, and chooses an optimistic MDP M̃ (with respect to
the achievable average reward) among these plausible MDPs. Then it executes a policy π̃ which is
(nearly) optimal for the optimistic MDP M̃.

More precisely, UCRL2 (Figure 1) proceeds in episodes and computes a new policy π̃_k only at the
beginning of each episode k. The lengths of the episodes are not fixed a priori, but depend on
the observations made. In Steps 2-3, UCRL2 computes estimates p̂_k(s′|s, a) and r̂_k(s, a) for the
transition probabilities and mean rewards from the observations made before episode k. In Step 4,
a set M_k of plausible MDPs is defined in terms of confidence regions around the estimated mean
rewards r̂_k(s, a) and transition probabilities p̂_k(s′|s, a). This guarantees that with high probability
the true MDP M is in M_k. In Step 5, extended value iteration (see below) is used to choose a
near-optimal policy π̃_k on an optimistic MDP M̃_k ∈ M_k. This policy π̃_k is executed throughout
episode k (Step 6). Episode k ends when a state s is visited in which the action a = π̃_k(s) induced
by the current policy has been chosen in episode k equally often as before episode k. Thus, the total
number of occurrences of any state-action pair is at most doubled during an episode. The counts
v_k(s, a) keep track of these occurrences in episode k.⁴
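The following is a compact Python sketch of this episode structure; the `env.reset`/`env.step`
interface is an assumption for illustration, and `extended_value_iteration` is sketched after
Section 3.1 below.

```python
import numpy as np

def confidence_radii(N, t_k, S, A, delta):
    # Per-(s,a) confidence radii from conditions (1) and (2) in Figure 1.
    n = np.maximum(1, N)
    d_r = np.sqrt(7.0 * np.log(2 * S * A * t_k / delta) / (2.0 * n))  # reward radius
    d_p = np.sqrt(14.0 * S * np.log(2 * A * t_k / delta) / n)         # L1 transition radius
    return d_r, d_p

def ucrl2(env, S, A, delta, T):
    N = np.zeros((S, A))          # visits prior to the current episode
    P = np.zeros((S, A, S))       # cumulative transition counts
    R = np.zeros((S, A))          # cumulative rewards
    s, t = env.reset(), 1
    while t <= T:
        t_k = t                   # start a new episode
        v = np.zeros((S, A))      # within-episode counts
        p_hat = P / np.maximum(1, N)[:, :, None]
        r_hat = R / np.maximum(1, N)
        d_r, d_p = confidence_radii(N, t_k, S, A, delta)
        pi = extended_value_iteration(p_hat, r_hat, d_r, d_p,
                                      eps=1.0 / np.sqrt(t_k))
        # Execute pi until some (s, a) doubles its visit count (Step 6)
        while t <= T and v[s, pi[s]] < max(1, N[s, pi[s]]):
            a = pi[s]
            s_next, r = env.step(a)
            v[s, a] += 1; P[s, a, s_next] += 1; R[s, a] += r
            s, t = s_next, t + 1
        N += v
```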
3.1  Extended Value Iteration

In Step 5 of the UCRL2 algorithm we need to find a near-optimal policy π̃_k for an optimistic MDP.
While value iteration typically calculates a policy for a fixed MDP, we also need to select an
optimistic MDP M̃_k which gives almost maximal reward among all plausible MDPs. This can be
achieved by extending value iteration to search also among the plausible MDPs. Formally, this can
be seen as undiscounted value iteration [4] on an MDP with extended action set. We denote the state
values of the i-th iteration by u_i(s) and the normalized state values by u′_i(s), and get for all s ∈ S:

    u₀(s) = 0,
    u_{i+1}(s) = max_{a∈A} { r̃_k(s, a) + max_{p(·)∈P(s,a)} Σ_{s′∈S} p(s′) · u_i(s′) } ,   (4)

where r̃_k(s, a) are the maximal rewards satisfying condition (1) in algorithm UCRL2, and P(s, a)
is the set of transition probabilities p̃(·|s, a) satisfying condition (2).

While (4) may look like a step of value iteration with an infinite action space, max_p p · u_i is actually
a linear optimization problem over the convex polytope P(s, a). This implies that only the finite
number of vertices of the polytope need to be considered as extended actions, which guarantees
convergence of the value iteration.⁵

The value iteration is stopped when

    max_{s∈S} { u_{i+1}(s) − u_i(s) } − min_{s∈S} { u_{i+1}(s) − u_i(s) } < 1/√t_k ,       (5)

which means that the change of the state values is almost uniform and actually close to the average
reward of the optimal policy. It can be shown that the actions, rewards, and transition probabilities
chosen in (4) for this i-th iteration define an optimistic MDP M̃_k and a policy π̃_k which satisfy
condition (3) of algorithm UCRL2.
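Below is a minimal sketch of extended value iteration, using the efficient inner maximization
mentioned in footnote 5 (after sorting the state values, shift probability mass toward the
highest-valued state subject to the L1 constraint (2)); the helper names are assumptions for
illustration.

```python
import numpy as np

def inner_max(p_hat_sa, d_p_sa, u):
    # max over p with ||p - p_hat(.|s,a)||_1 <= d_p_sa of sum_s' p(s') u(s'):
    # put as much mass as allowed on the best state, take it from the worst ones.
    order = np.argsort(u)                 # ascending: worst state first
    p = p_hat_sa.copy()
    best = order[-1]
    p[best] = min(1.0, p_hat_sa[best] + d_p_sa / 2.0)
    i = 0
    while p.sum() > 1.0 + 1e-12 and i < len(order):
        s2 = order[i]
        p[s2] = max(0.0, p[s2] - (p.sum() - 1.0))
        i += 1
    return float(p @ u)

def extended_value_iteration(p_hat, r_hat, d_r, d_p, eps):
    S, A = r_hat.shape
    u = np.zeros(S)
    while True:
        q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                r_opt = min(1.0, r_hat[s, a] + d_r[s, a])   # optimistic reward in [0, 1]
                q[s, a] = r_opt + inner_max(p_hat[s, a], d_p[s, a], u)
        u_new = q.max(axis=1)
        diff = u_new - u
        if diff.max() - diff.min() < eps:   # stopping rule (5); span is shift-invariant
            return q.argmax(axis=1)         # greedy policy, i.e. pi_tilde_k
        u = u_new - u_new.min()             # normalize; keeps values bounded, cf. (7)
```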
4  Analysis of UCRL2 and Proof Sketch of Theorem 2

In the following we present an outline of the main steps of the proof of Theorem 2. Details and the
complete proofs can be found in the full version of the paper [13]. We also make the assumption
that the rewards r(s, a) are deterministic and known to the learner.⁶ This simplifies the exposition.
Considering unknown stochastic rewards adds little to the proof and only lower order terms to the
regret bounds. We also assume that the true MDP M satisfies the confidence bounds in Step 4 of
algorithm UCRL2 such that M ∈ M_k. This can be shown to hold with sufficiently high probability
(using a union bound over all T).

We start by considering the regret in a single episode k. Since the optimistic average reward ρ̃_k
of the optimistically chosen policy π̃_k is essentially larger than the true optimal average reward ρ*,
it is sufficient to calculate by how much the optimistic average reward ρ̃_k overestimates the actual
rewards of policy π̃_k. By the choice of π̃_k and M̃_k in Step 5 of UCRL2, ρ̃_k ≥ ρ* − 1/√t_k.

⁴ Since the policy π̃_k is fixed for episode k, v_k(s, a) ≠ 0 only for a = π̃_k(s). Nevertheless, we
find it convenient to use a notation which explicitly includes the action a in v_k(s, a).
⁵ Because of the special structure of the polytope P(s, a), the linear program in (4) can be solved
very efficiently in O(S) steps after sorting the state values u_i(s′). For the formal convergence proof
also the periodicity of optimal policies in the extended MDP needs to be considered.
⁶ In this case all plausible MDPs considered in Steps 4 and 5 of algorithm UCRL2 would give these
rewards.
Thus the regret Δ_k during episode k is bounded as

    Δ_k := Σ_{t=t_k}^{t_{k+1}−1} (ρ* − r_t) ≤ Σ_{t=t_k}^{t_{k+1}−1} (ρ̃_k − r_t) + (t_{k+1} − t_k)/√t_k .

The sum over k of the second term on the right hand side is O(√T) and will not be considered
further in this proof sketch. The first term on the right hand side can be rewritten using the known
deterministic rewards r(s, a) and the occurrences of state-action pairs (s, a) in episode k,

    Σ_{t=t_k}^{t_{k+1}−1} (ρ̃_k − r_t) = Σ_{(s,a)} v_k(s, a) ( ρ̃_k − r(s, a) ) .          (6)

4.1  Extended Value Iteration revisited
To proceed, we reconsider the extended value iteration in Section 3.1. As an important observation
for our analysis, we find that for any iteration i the range of the state values is bounded by the
diameter of the MDP M,

    max_s u_i(s) − min_s u_i(s) ≤ D.                                                  (7)

To see this, observe that u_i(s) is the total expected reward after i steps of an optimal non-stationary
i-step policy starting in state s, on the MDP with extended action set as considered for the extended
value iteration. The diameter of this extended MDP is at most D as it contains the actions of the true
MDP M. If there were states with u_i(s₁) − u_i(s₀) > D, then an improved value for u_i(s₀) could
be achieved by the following policy: First follow a policy which moves from s₀ to s₁ most quickly,
which takes at most D steps on average. Then follow the optimal i-step policy for s₁. Since only D
of the i rewards of the policy for s₁ are missed, this policy gives u_i(s₀) ≥ u_i(s₁) − D, proving (7).

For the convergence criterion (5) it can be shown that at the corresponding iteration

    | u_{i+1}(s) − u_i(s) − ρ̃_k | ≤ 1/√t_k

for all s ∈ S, where ρ̃_k is the average reward of the policy π̃_k chosen in this iteration on the
optimistic MDP M̃_k.⁷ Expanding u_{i+1}(s) according to (4), we get

    u_{i+1}(s) = r(s, π̃_k(s)) + Σ_{s′} p̃_k(s′|s, π̃_k(s)) · u_i(s′)

and hence

    | ρ̃_k − r(s, π̃_k(s)) − ( Σ_{s′} p̃_k(s′|s, π̃_k(s)) · u_i(s′) − u_i(s) ) | ≤ 1/√t_k .
Defining r_k := (r(s, π̃_k(s)))_s as the (column) vector of rewards for policy π̃_k,
P̃_k := (p̃_k(s′|s, π̃_k(s)))_{s,s′} as the transition matrix of π̃_k on M̃_k, and
v_k := (v_k(s, π̃_k(s)))_s as the (row) vector of visit counts for each state and the corresponding
action chosen by π̃_k, we can rewrite (6) as

    Δ_k ≲ Σ_{(s,a)} v_k(s, a) ( ρ̃_k − r(s, a) ) ≤ v_k ( P̃_k − I ) u_i + Σ_{(s,a)} v_k(s, a)/√t_k ,   (8)

recalling that v_k(s, a) = 0 for a ≠ π̃_k(s). Since the rows of P̃_k sum to 1, we can replace u_i by
w_k with w_k(s) = u_i(s) − min_s u_i(s) (we again use the subscript k to reference the episode).
The last term on the right hand side of (8) is of lower order, and by (7) we have

    Δ_k ≲ v_k ( P̃_k − I ) w_k ,                                                       (9)
    ‖w_k‖∞ ≤ D.                                                                        (10)

⁷ This is quite intuitive. We expect to receive average reward ρ̃_k per step, such that the difference
of the state values after i + 1 and i steps should be about ρ̃_k.
4.2  Completing the Proof

Replacing the transition matrix P̃_k of the policy π̃_k in the optimistic MDP M̃_k by the transition
matrix P_k of π̃_k in the true MDP M, we get

    Δ_k ≲ v_k ( P̃_k − I ) w_k = v_k ( P̃_k − P_k + P_k − I ) w_k
        = v_k ( P̃_k − P_k ) w_k + v_k ( P_k − I ) w_k .                               (11)

The intuition about the second term in (11) is that the counts of the state visits v_k are relatively close
to the stationary distribution of the transition matrix P_k, such that v_k(P_k − I) should be small. The
formal proof requires the definition of a suitable martingale and the use of concentration inequalities
for this martingale. This yields

    Σ_k v_k ( P_k − I ) w_k = O( D √( T log(T/δ) ) )

with high probability, which gives a lower order term in our regret bound. Thus, our regret bound is
with high probability, which gives a lower order term in our regret bound. Thus, our regret bound is
? k and M are in the set of plausible MDPs Mk ,
mainly determined by the first term in (11). Since M
this term can be bounded using condition (2) in algorithm U CRL 2:
XX
? k ? P k wk =
? k (s, s0 ) ? P k (s, s0 ) ? wk (s0 )
?k . v k P
vk s, ?
?k (s) ? P
s0
s
?
X
?
X
?
vk s, ?
?k (s) ?
P
k (s, ?) ? P k (s, ?)
? kw k k?
1
s
q 14S log(2AT /?)
vk s, ?
?k (s) ? 2 max{1,N
?k (s))} ? D .
k (s,?
(12)
s
Let N(s, a) := Σ_k v_k(s, a), such that Σ_{(s,a)} N(s, a) = T, and recall that N_k(s, a) =
Σ_{i<k} v_i(s, a). By the condition of the while-loop in Step 6 of algorithm UCRL2, we have that
v_k(s, a) ≤ N_k(s, a). Summing (12) over all episodes k we get

    Σ_k Δ_k ≲ const · D · Σ_k Σ_{(s,a)} v_k(s, a) √( S log(AT/δ) / max{1, N_k(s, a)} )
            = const · D · √( S log(AT/δ) ) · Σ_{(s,a)} Σ_k v_k(s, a) / √( max{1, N_k(s, a)} )
            ≤ const · D · √( S log(AT/δ) ) · Σ_{(s,a)} √( N(s, a) )                        (13)
            ≤ const · D · √( S log(AT/δ) ) · √(SAT) .                                      (14)

Here we used for (13) that

    Σ_{k=1}^n x_k / √( X_{k−1} ) ≤ ( √2 + 1 ) √( X_n ) ,

where X_k = max{1, Σ_{i=1}^k x_i} and 0 ≤ x_k ≤ X_{k−1}, and we used Jensen's inequality
for (14). Noting that Theorem 2 holds trivially true for T ≤ A gives the bound of the theorem.
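For completeness, a one-line argument for this inequality (ignoring the clipping at 1 in the definition
of X_k): since 0 ≤ x_k ≤ X_{k−1} implies X_k ≤ 2X_{k−1},

```latex
\frac{x_k}{\sqrt{X_{k-1}}}
  = \left(\sqrt{X_k} - \sqrt{X_{k-1}}\right)
    \frac{\sqrt{X_k} + \sqrt{X_{k-1}}}{\sqrt{X_{k-1}}}
  \le \left(\sqrt{2} + 1\right)\left(\sqrt{X_k} - \sqrt{X_{k-1}}\right),
```

using X_k = X_{k−1} + x_k; summing over k telescopes to (√2 + 1)√(X_n).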
5  The Lower Bound (Proof Sketch for Theorem 5)

We first consider an MDP with two states s₀ and s₁, and A′ = ⌊(A − 1)/2⌋ actions. For each
action a, let r(s₀, a) = 0, r(s₁, a) = 1, and p(s₀|s₁, a) = δ where δ = 10/D. For all but a single
"good" action a* let p(s₁|s₀, a) = δ, while p(s₁|s₀, a*) = δ + ε for some 0 < ε < δ. The diameter
of this MDP is 1/δ. The average reward of a policy which chooses action a* in state s₀ is
(δ + ε)/(2δ + ε) > 1/2, while the average reward of any other policy is 1/2. Thus the regret suffered
by a suboptimal action in state s₀ is Θ(ε/δ). The main observation for the proof of the lower bound
is that any algorithm needs to probe Ω(A′) actions in state s₀ for Ω(δ/ε²) times on average, to detect
the "good" action a* reliably.

Considering k := ⌊S/2⌋ copies of this MDP where only one of the copies has such a "good"
action a*, we find that Ω(kA′) actions in the s₀-states of the copies need to be probed for Ω(δ/ε²)
times to detect the "good" action. Setting ε = √(δkA′/T), suboptimal actions need to be taken
Ω(kA′ δ/ε²) = Ω(T) times, which gives Ω(Tε/δ) = Ω(√(TDSA)) regret.

Finally, we need to connect the k copies into a single MDP. This can be done by introducing A′ + 1
additional deterministic actions per state, which do not leave the s₁-states but connect the s₀-states
of the k copies by inducing an A′-ary tree structure on the s₀-states (1 action for going toward the
root, A′ actions to go toward the leaves). The diameter of the resulting MDP is at most 2(D/10 +
⌈log_{A′} k⌉), which is twice the time to travel to or from the root for any state in the MDP. Thus
we have constructed an MDP with ≈ S states, ≈ A actions, and diameter ≈ D which forces regret
Ω(√(DSAT)) on any algorithm. This proves the theorem.
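For concreteness, a small NumPy sketch of the two-state building block used in this construction
(the array layout and function name are illustrative assumptions):

```python
import numpy as np

def hard_two_state_mdp(A_prime, D, eps, good_action=0):
    """Transition tensor P[s, a, s'] and rewards r[s, a] for the two-state block.
    State 0 pays reward 0, state 1 pays reward 1; delta = 10 / D."""
    delta = 10.0 / D
    assert 0 < eps < delta
    P = np.zeros((2, A_prime, 2))
    r = np.zeros((2, A_prime))
    r[1, :] = 1.0
    P[1, :, 0] = delta                    # leave the rewarding state with prob. delta
    P[1, :, 1] = 1.0 - delta
    P[0, :, 1] = delta                    # all actions in state 0 ...
    P[0, :, 0] = 1.0 - delta
    P[0, good_action, 1] = delta + eps    # ... except the single good one
    P[0, good_action, 0] = 1.0 - delta - eps
    return P, r
```

The gap in average reward between playing a* and any other action in s₀ is
(δ + ε)/(2δ + ε) − 1/2 = Θ(ε/δ), as used above.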
Acknowledgments
This work was supported in part by the Austrian Science Fund FWF (S9104-N13 SP4). The research
leading to these results has received funding from the European Community's Seventh Framework
Programme (FP7/2007-2013) under grant agreements no. 216886 (PASCAL2 Network of Excellence)
and no. 216529 (Personal Information Navigator Adapting Through Viewing, PinView). This
publication only reflects the authors' views.
References
[1] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[2] Michael J. Kearns and Satinder P. Singh. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[3] Michael J. Kearns and Satinder P. Singh. Near-optimal reinforcement learning in polynomial time. Mach. Learn., 49:209-232, 2002.
[4] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1994.
[5] Peter Auer and Ronald Ortner. Logarithmic online regret bounds for reinforcement learning. In Advances in Neural Information Processing Systems 19, pages 49-56. MIT Press, 2007.
[6] Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. J. Mach. Learn. Res., 3:213-231, 2002.
[7] Ambuj Tewari and Peter Bartlett. Optimistic linear programming gives logarithmic regret for irreducible MDPs. In Advances in Neural Information Processing Systems 20, pages 1505-1512. MIT Press, 2008.
[8] Sham M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[9] Alexander L. Strehl and Michael L. Littman. A theoretical analysis of model-based interval estimation. In Proc. 22nd ICML 2005, pages 857-864, 2005.
[10] Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for Markov decision processes. J. Comput. System Sci., 74(8):1309-1331, 2008.
[11] Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Math. Oper. Res., 22(1):222-255, 1997.
[12] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Experts in a Markov decision process. In Advances in Neural Information Processing Systems 17, pages 401-408. MIT Press, 2005.
[13] Peter Auer, Thomas Jaksch, and Ronald Ortner. Near-optimal regret bounds for reinforcement learning. Technical Report CIT-2009-01, University of Leoben, Chair for Information Technology, 2009. http://institute.unileoben.ac.at/infotech/publications/TR/CIT-2009-01.pdf.
2,650 | 3,402 | Global Ranking Using Continuous Conditional
Random Fields
Tao Qin¹, Tie-Yan Liu¹, Xu-Dong Zhang², De-Sheng Wang², Hang Li¹
¹Microsoft Research Asia, ²Tsinghua University
¹{taoqin, tyliu, hangli}@microsoft.com
²{zhangxd, wangdsh_ee}@tsinghua.edu.cn
Abstract
This paper studies the global ranking problem using learning to rank methods. Conventional learning to rank methods are usually designed for "local ranking", in the
sense that the ranking model is defined on a single object, for example, a document
in information retrieval. For many applications, this is a very loose approximation.
Relations always exist between objects and it is better to define the ranking model
as a function on all the objects to be ranked (i.e., the relations are also included).
This paper refers to the problem as global ranking and proposes employing a Continuous Conditional Random Fields (CRF) for conducting the learning task. The
Continuous CRF model is defined as a conditional probability distribution over
ranking scores of objects conditioned on the objects. It can naturally represent
the content information of objects as well as the relation information between
objects, necessary for global ranking. Taking two specific information retrieval
tasks as examples, the paper shows how the Continuous CRF method can perform
global ranking better than baselines.
1 Introduction
Learning to rank is aimed at constructing a model for ordering objects by means of machine learning.
It is useful in many areas including information retrieval, data mining, natural language processing,
bioinformatics, and speech recognition. In this paper, we take information retrieval as an example.
Traditionally learning to rank is restricted to ?local ranking?, in which the ranking model is defined
on a single object. In other words, the relations between the objects are not directly represented
in the model. In many application tasks this is far from being enough, however. For example, in
Pseudo Relevance Feedback [17, 8], we manage to rank documents on the basis of not only relevance
of documents to the query, but also similarity between documents. Therefore, the use of a model
solely based on individual documents would not be sufficient. (Previously, heuristic methods were
developed for Pseudo Relevance Feedback.) Similar things happen in the tasks of Topic Distillation
[12, 11] and Subtopic Retrieval [18]. Ideally, in information retrieval we would exploit a ranking
model defined as a function on all the documents with respect to the query. In other words, ranking
should be conducted on the basis of the contents of objects as well as the relations between objects.
We refer to this setting as "global ranking" and give a formal description of it with information
retrieval as an example.
The Conditional Random Fields (CRF) technique is a powerful tool for relational learning, because it allows the use of both relations between objects and contents of objects [16]. However, conventional
CRF cannot be directly applied to global ranking because it is a discrete model in the sense that
the output variables are discrete [16]. In this work, we propose a Continuous CRF model (C-CRF)
to deal with the problem. The C-CRF model is defined as a conditional probability distribution
over ranking scores of objects (documents) conditioned on the objects (documents). The specific
probability distribution can be represented by an undirected graph, and the output variables (ranking scores) can be continuous. To our knowledge, this is the first time such kind of CRF model is
proposed.
We apply C-CRF to two global ranking tasks: Pseudo Relevance Feedback and Topic Distillation.
Experimental results on benchmark data show that our method performs better than baseline methods.
2 Global Ranking Problem
Document ranking in information retrieval is a problem as follows. When the user submits a query,
the system retrieves all the documents containing at least one query term, calculates a ranking score
for each of the documents using the ranking model, and sorts the documents according to the ranking
scores. The scores can represent relevance, importance, and/or diversity of documents.
(q)
(q)
(q)
Let q denote a query. Let x(q) = {x1 , x2 , . . . , xn(q) } denote the documents retrieved with q, and
(q)
(q)
(q)
y (q) = {y1 , y2 , . . . , yn(q) } denote the ranking scores assigned to the documents. Here n(q) stands
for the number of documents retrieved with q. Note that the numbers vary according to queries. We
assume that y (q) is determined by a ranking model.
We call the ranking ?local ranking?, if the ranking model is defined as
(q)
yi
(q)
= f (xi ), i = 1, . . . , n(q)
(1)
Furthermore, we call the ranking ?global ranking?, if the ranking model is defined as
y (q) = F (x(q) )
(2)
The major difference between the two is that F takes on all the documents together as its input,
while f takes on an individual document as its input. In other words, in global ranking, we use not
only the content information of documents but also the relation information between documents.
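Schematically (our own sketch, not code from the paper), the two settings differ only in the model's signature: a local ranker scores each document independently, while a global ranker maps the whole document list to a score vector and may exploit inter-document relations.

```python
from typing import Callable, List, Sequence

Doc = dict  # stand-in type for a retrieved document

def local_ranking(docs: Sequence[Doc], f: Callable[[Doc], float]) -> List[float]:
    # y_i = f(x_i): each score depends on one document only
    return [f(d) for d in docs]

def global_ranking(docs: Sequence[Doc],
                   F: Callable[[Sequence[Doc]], List[float]]) -> List[float]:
    # y = F(x): the model sees all documents (and their relations) at once
    return F(docs)
```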
There are many specific application tasks that can be viewed as examples of global ranking. These
include Pseudo Relevance Feedback, Topic Distillation, and Subtopic Retrieval.
3 Continuous CRF for Global Ranking

3.1 Continuous CRF
Let {h_k(y_i^(q), x^(q))}_{k=1..K1} be a set of real-valued feature functions defined on the document set x^(q) and ranking score y_i^(q) (i = 1, ..., n^(q)), and let {g_k(y_i^(q), y_j^(q), x^(q))}_{k=1..K2} be a set of real-valued feature functions defined on y_i^(q), y_j^(q), and x^(q) (i, j = 1, ..., n^(q), i ≠ j).

A Continuous Conditional Random Field is a conditional probability distribution with the following density function,

    Pr(y^(q) | x^(q)) = (1/Z(x^(q))) exp{ Σ_i Σ_{k=1..K1} α_k h_k(y_i^(q), x^(q)) + Σ_{i,j} Σ_{k=1..K2} β_k g_k(y_i^(q), y_j^(q), x^(q)) },   (3)

where α is a K1-dimensional parameter vector, β is a K2-dimensional parameter vector, and Z(x^(q)) is a normalization function,

    Z(x^(q)) = ∫_{y^(q)} exp{ Σ_i Σ_{k=1..K1} α_k h_k(y_i^(q), x^(q)) + Σ_{i,j} Σ_{k=1..K2} β_k g_k(y_i^(q), y_j^(q), x^(q)) } dy^(q).   (4)

Given a set of documents x^(q) for a query, we select the ranking score vector y^(q) with the maximum conditional probability Pr(y^(q) | x^(q)) as the output of our proposed global ranking model:

    F(x^(q)) = arg max_{y^(q)} Pr(y^(q) | x^(q)).   (5)
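To make Eq. (3) concrete, the following sketch (ours, in NumPy) evaluates the exponent of the C-CRF density, i.e., the unnormalized log-probability, for given vertex features h_k and edge features g_k; computing Z itself requires the Gaussian-integral identities exploited in Section 4. The function name and argument layout are our own assumptions.

```python
import numpy as np

def ccrf_log_potential(y, x, h_feats, g_feats, alpha, beta):
    """Unnormalized log Pr(y|x) of Eq. (3).

    y       : (n,) ranking scores for one query
    h_feats : list of K1 functions h_k(y_i, x) -> float
    g_feats : list of K2 functions g_k(y_i, y_j, x) -> float
    alpha   : (K1,) vertex-feature weights
    beta    : (K2,) edge-feature weights
    """
    n = len(y)
    vertex = sum(alpha[k] * h_feats[k](y[i], x)
                 for k in range(len(h_feats)) for i in range(n))
    edge = sum(beta[k] * g_feats[k](y[i], y[j], x)
               for k in range(len(g_feats))
               for i in range(n) for j in range(n) if i != j)
    return vertex + edge
```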
C-CRF is a graphical model, as depicted in Figure 1. In the conditioned undirected graph, a white vertex represents a ranking score, a gray vertex represents a document, an edge between two white vertices represents the dependency between ranking scores, and an edge between a gray vertex and a white vertex represents the dependency of a ranking score on its document (content). (In principle a ranking score can depend on all the documents of the query; here for ease of presentation we only consider the simple case in which it depends only on the corresponding document.) In C-CRF, feature function h_k represents the dependency between the ranking score of a document and the content of it, and feature function g_k represents a relation between the ranking scores of two documents. Different retrieval tasks may have different relations (e.g., similarity relation, parent-child relation), as will be explained in Section 4. For ease of reference, we call the feature functions h_k vertex features, and the feature functions g_k edge features.

Note that in conventional CRF the output random variables are discrete, while in C-CRF the output variables are continuous. This makes the inference of C-CRF largely different from that of conventional CRF, as will be seen in Section 4.

[Figure 1: Continuous CRF Model — an undirected graph over documents x_1, ..., x_6 (gray vertices) and their ranking scores y_1, ..., y_6 (white vertices).]
3.2 Learning

In the inference of C-CRF, the parameters {α, β} are given, while in learning they are to be estimated. Given training data {x^(q), y^(q)}_{q=1..N}, where each x^(q) = {x_1^(q), x_2^(q), ..., x_{n^(q)}^(q)} is a set of documents of query q, and each y^(q) = {y_1^(q), y_2^(q), ..., y_{n^(q)}^(q)} is a set of ranking scores associated with the documents of query q, we employ Maximum Likelihood Estimation to estimate the parameters {α, β} of C-CRF. Specifically, we calculate the conditional log likelihood of the training data with respect to the C-CRF model,

    L(α, β) = Σ_{q=1..N} log Pr(y^(q) | x^(q); α, β).   (6)

We then use Gradient Ascent to maximize the log likelihood, and use the optimal parameters α̂, β̂ to rank the documents of a new query.
4 Case Study

4.1 Pseudo Relevance Feedback (PRF)
Pseudo Relevance Feedback (PRF) [17, 8] is an example of global ranking, in which similarity between documents is considered in the ranking process. Conceptually, in this task one first conducts
a round of ranking, assuming that the top ranked documents are relevant; then conducts another
round of ranking, using similarity information between the top ranked documents and the other documents to boost some relevant documents dropped in the first round. The underlying assumption
is that similar documents are likely to have similar ranking scores. Here we consider a method of
using C-CRF for performing the task.
4.1.1 Continuous CRF for Pseudo Relevance Feedback
We first introduce vertex feature functions. The relevance of a document to the query depends on many factors, such as term frequency, page importance, and so on. For each factor we define a vertex feature function. Suppose that x_{i,k}^(q) is the k-th relevance factor of document x_i^(q) with respect to query q, extracted by operator t_k: x_{i,k}^(q) = t_k(x_i^(q), q). We define the k-th feature function¹ h_k(y_i, x) as

    h_k(y_i, x) = −(y_i − x_{i,k})².   (7)

Next, we introduce the edge feature function. Recall that there are two rounds in PRF: the first round scores each document, and the second round re-ranks the documents considering similarity between documents. Here the similarities between any two documents are supposed to be given. We incorporate them into the edge feature function:

    g(y_i, y_j, x) = −(1/2) S_{i,j} (y_i − y_j)²,   (8)

where S_{i,j} is the similarity between documents x_i and x_j, which can be extracted by some operator s from the raw content² of documents x_i and x_j: S_{i,j} = s(x_i, x_j). The larger S_{i,j} is, the more similar the two documents are. Since only the similarity relation is considered in this task, we have only one edge feature function (K2 = 1).
The C-CRF for Pseudo Relevance Feedback then becomes

    Pr(y|x) = (1/Z(x)) exp{ Σ_i Σ_{k=1..K1} −α_k (y_i − x_{i,k})² + Σ_{i,j} −(β/2) S_{i,j} (y_i − y_j)² },   (9)

where Z(x) is defined as

    Z(x) = ∫_y exp{ Σ_i Σ_{k=1..K1} −α_k (y_i − x_{i,k})² + Σ_{i,j} −(β/2) S_{i,j} (y_i − y_j)² } dy.   (10)

To guarantee that the exponent in Eq. (10) is integrable, we must have α_k > 0 and β > 0.³

The term Σ_i Σ_{k=1..K1} −α_k (y_i − x_{i,k})² in Eq. (9) plays a role similar to the first round of PRF: the ranking score y_i is determined solely by the relevance factors of document x_i. The term Σ_{i,j} −(β/2) S_{i,j} (y_i − y_j)² in Eq. (9) plays a role similar to the second round of PRF: it makes sure that similar documents have similar ranking scores. We can see that C-CRF combines the two rounds of ranking of PRF into one.
To rank the documents of a query, we calculate the ranking scores of documents with respect to this query in the following way:

    F(x) = arg max_y Pr(y|x; α, β) = (αᵀe I + βD − βS)⁻¹ Xα,   (11)

where e is a K1-dimensional all-ones vector, I is an n × n identity matrix, S is a similarity matrix with S_{i,j} = s(x_i, x_j), D is an n × n diagonal matrix with D_{i,i} = Σ_j S_{i,j}, and X is a factor matrix with X_{i,k} = x_{i,k}. If we ignore the relation between documents and set β = 0, then the ranking model degenerates to F(x) = Xα, which is equivalent to a linear model used in conventional local ranking.

For n documents, the time complexity of straightforwardly computing the ranking model (11) is of order O(n³), and thus the computation is expensive. The main cost of the computation comes from matrix inversion. We employ a fast computation technique to quickly perform the task. First, we make S a sparse matrix, which has at most K non-zero values in each row and each column. We can do so by only considering the similarity between each document and its K/2 nearest neighbors. Next, we use the Gibbs-Poole-Stockmeyer algorithm [9] to convert S to a banded matrix. Finally, we solve the following system of linear equations and take the solution as the ranking scores:

    (αᵀe I + βD − βS) F(x) = Xα.   (12)

Since S is a banded matrix, the scores F(x) in Eq. (12) can be computed with time complexity of O(n) when K ≪ n [5]. That is to say, the time complexity of testing a new query is comparable with those of existing local ranking methods.
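A sketch of the inference step of Eqs. (11)–(12) with SciPy (our own illustration, not the authors' code): with a sparse similarity matrix S, solving the linear system avoids the explicit O(n³) inverse; `spsolve` stands in here for a dedicated banded solver.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def prf_scores(X, alpha, beta, S):
    """Solve (alpha^T e I + beta D - beta S) F = X alpha for the scores F.

    X     : (n, K1) relevance-factor matrix
    alpha : (K1,) positive vertex weights
    beta  : positive edge weight (scalar)
    S     : (n, n) sparse symmetric similarity matrix (nearest-neighbor sparsified)
    """
    n = X.shape[0]
    D = sp.diags(np.asarray(S.sum(axis=1)).ravel())
    A = alpha.sum() * sp.eye(n) + beta * (D - S)
    return spsolve(A.tocsc(), X @ alpha)
```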
¹We omit the superscript (q) in this section when there is no confusion.
²Note that S_{i,j} is not computed from the ranking factors of documents x_i and x_j but from their raw terms. For more details, please refer to our technical report [13].
³α_k > 0 means that the factor x_{i,k} is positively correlated with the ranking score y_i. Considering that some factor may be negatively correlated with y_i, we double a factor x_{i,k} into two factors x_{i,k} and x_{i,k'} = −x_{i,k} in experiments. Then if α_{k'} > α_k, one can conclude that the factor x_{i,k} is negatively correlated with y_i.
Algorithm 1 Learning Algorithm of Continuous CRF for Pseudo Relevance Feedback
  Input: training data {(x^(1), y^(1)), ..., (x^(N), y^(N))}, number of iterations T and learning rate η
  Initialize parameters log α_k and log β
  for t = 1 to T do
    for i = 1 to N do
      Compute gradients ∇log α_k and ∇log β using Eq. (13) and (14) for a single query (x^(i), y^(i), S^(i)).
      Update log α_k = log α_k + η · ∇log α_k and log β = log β + η · ∇log β
    end for
  end for
  Output: parameters of CRF model α_k and β.
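A schematic NumPy version of Algorithm 1 (ours; the gradient routine is a placeholder for an implementation of Eqs. (13)–(14) below): the positivity constraints are enforced by updating log α and log β and exponentiating.

```python
import numpy as np

def train_ccrf(queries, grad_fn, T=50, eta=1e-3, K1=10):
    """Stochastic gradient ascent on L(alpha, beta) in log-parameter space.

    queries : list of (X, y, S) triples, one per training query
    grad_fn : callable returning (dL/d log alpha, dL/d log beta) for one
              query, e.g., an implementation of Eqs. (13)-(14)
    """
    log_alpha = np.zeros(K1)   # alpha = 1 initially
    log_beta = 0.0             # beta = 1 initially
    for _ in range(T):
        for X, y, S in queries:
            g_a, g_b = grad_fn(X, y, S, np.exp(log_alpha), np.exp(log_beta))
            log_alpha += eta * g_a
            log_beta += eta * g_b
    return np.exp(log_alpha), np.exp(log_beta)
```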
4.1.2 Learning

In learning, we try to maximize the log likelihood. Note that maximization of L(α, β) in Eq. (6) is a constrained optimization problem because we need to guarantee that α_k > 0 and β > 0. Gradient Ascent cannot be directly applied to such a constrained optimization problem. Here we adopt a technique similar to that in [3]. Specifically, we maximize L(α, β) with respect to log α_k and log β instead of α_k and β. As a result, the new optimization problem becomes unconstrained and Gradient Ascent can be used. Algorithm 1 shows the learning algorithm based on Stochastic Gradient Ascent⁴, in which the gradients ∇log α_k and ∇log β for a single query can be computed as follows⁵ (we state them in trace notation; the full derivation is in the technical report [13]):

    ∇log α_k = ∂L(α, β)/∂ log α_k = α_k [ (1/2) tr(A⁻¹) − 2 X_{,k}ᵀ A⁻¹ b + bᵀ A⁻¹ A⁻¹ b − Σ_i (y_i² − 2 y_i x_{i,k}) ],   (13)

    ∇log β = ∂L(α, β)/∂ log β = β [ (1/2) tr(A⁻¹(D − S)) + bᵀ A⁻¹ (D − S) A⁻¹ b − (1/2) Σ_{i,j} S_{i,j} (y_i − y_j)² ],   (14)

where A = αᵀe I + βD − βS, b = Xα, X_{,k} denotes the k-th column of the factor matrix X, and tr(·) denotes the matrix trace.

⁴Stochastic Gradient means conducting gradient ascent from one query to another.
⁵Details can be found in [13].
4.2 Topic Distillation (TD)
Topic Distillation [12] is another example of global ranking. In this task, one selects a page that can
best represent the topic of the query from a web site by using structure (relation) information of the
site. If both a page and its parent page are concerned with the topic, then the parent page is preferred
(to be ranked higher) [12, 11]. Here we apply C-CRF to Topic Distillation.
4.2.1 Continuous CRF for Topic Distillation
We define the vertex feature function h_k(y_i, x) in the same way as in Eq. (7).
Recall that in Topic Distillation, a page is more preferred than its child page if both of them are
relevant to a query. Here the parent-child relation between two pages is supposed to be given. We
incorporate them into the edge feature function. Specifically, we define the (and the only) edge
feature function as
    g(y_i, y_j, x) = R_{i,j} (y_i − y_j),   (15)

where R_{i,j} = r(x_i, x_j) denotes the parent-child relation: r(x_i, x_j) = 1 if document x_i is the parent of x_j, and r(x_i, x_j) = 0 otherwise.
The C-CRF for Topic Distillation then becomes

    Pr(y|x) = (1/Z(x)) exp{ Σ_i Σ_{k=1..K1} −α_k (y_i − x_{i,k})² + Σ_{i,j} β R_{i,j} (y_i − y_j) },   (16)

where Z(x) is defined as

    Z(x) = ∫_y exp{ Σ_i Σ_{k=1..K1} −α_k (y_i − x_{i,k})² + Σ_{i,j} β R_{i,j} (y_i − y_j) } dy.   (17)

To guarantee that the exponent in Eq. (17) is integrable, we must have α_k > 0.

The C-CRF can naturally model Topic Distillation: if the value of R_{i,j} is one, then the value of y_i is larger than that of y_j with high probability.

To rank the documents of a query, we calculate the ranking scores in the following way:

    F(x) = arg max_y Pr(y|x; α, β) = (1/(2αᵀe)) (2Xα + β(Dr − Dc)e),   (18)

where Dr and Dc are two diagonal matrices with Dr_{i,i} = Σ_j R_{i,j} and Dc_{i,i} = Σ_j R_{j,i}.

Similarly to Pseudo Relevance Feedback, if we ignore the relation between documents and set β = 0, the ranking model degenerates to a linear ranking model of conventional local ranking.
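For Topic Distillation the maximizer in Eq. (18) is available in closed form; a small sketch (ours, not the authors' code):

```python
import numpy as np

def td_scores(X, alpha, beta, R):
    """Closed-form scores of Eq. (18).

    X : (n, K1) relevance-factor matrix; alpha: (K1,) positive weights
    R : (n, n) 0/1 array, R[i, j] = 1 iff document i is the parent of document j
    """
    dr = R.sum(axis=1)   # diag of Dr: number of children of each document
    dc = R.sum(axis=0)   # diag of Dc: number of parents of each document
    return (2.0 * X @ alpha + beta * (dr - dc)) / (2.0 * alpha.sum())
```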
4.2.2 Learning

In learning, we use Gradient Ascent to maximize the log likelihood. We use the same technique as that for PRF to guarantee α_k > 0. The gradients of L(α, β) with respect to log α_k and β can be found⁶ in Eq. (19) and (20). Due to space limitations, we omit the details of the learning algorithm, which is similar to Algorithm 1. Writing a = αᵀe and b = 2Xα + β(Dr − Dc)e, the gradients for a single query are

    ∇log α_k = ∂L(α, β)/∂ log α_k = α_k [ n/(2a) + (1/(4a²)) bᵀb − (1/a) bᵀX_{,k} + Σ_i x_{i,k}² − Σ_i (y_i − x_{i,k})² ],   (19)

    ∇β = ∂L(α, β)/∂β = −(1/(2a)) bᵀ(Dr − Dc)e + Σ_{i,j} R_{i,j} (y_i − y_j),   (20)

where n denotes the number of documents for the query and X_{,k} denotes the k-th column of the factor matrix X.
4.3 Continuous CRF for Multiple Relations

We only considered using one type of relation in the previous two cases. We can also conduct global ranking by utilizing multiple types of relations. C-CRF is a powerful tool to perform the task: it can easily incorporate various types of relations as edge feature functions. For example, we can combine the similarity relation and the parent-child relation by using the following C-CRF model:

    Pr(y|x) = (1/Z(x)) exp{ Σ_i Σ_{k=1..K1} −α_k (y_i − x_{i,k})² + Σ_{i,j} [ β₁ R_{i,j} (y_i − y_j) − β₂ (S_{i,j}/2) (y_i − y_j)² ] }.

In this case, the ranking scores of documents for a new query are calculated as follows:

    F(x) = arg max_y Pr(y|x; α, β) = (αᵀe I + β₂D − β₂S)⁻¹ ( Xα + (β₁/2)(Dr − Dc)e ).
5 Experiments
We empirically tested the performance of C-CRF on both Pseudo Relevance Feedback and Topic
Distillation⁷. As data, we used LETOR [10], which is a public dataset for learning to rank research.
⁶Please refer to [13] for the derivation of the two equations.
⁷Please refer to [13] for more details of experiments.
Table 1: Ranking Accuracy

PRF on OHSUMED Data
Algorithms   NDCG@1  NDCG@2  NDCG@5
BM25         0.3994  0.3931  0.3972
BM25-PRF     0.3962  0.4277  0.3981
RankSVM      0.4952  0.4755  0.4579
ListNet      0.5231  0.4970  0.4662
C-CRF        0.5443  0.4986  0.4808

TD on TREC2004 Data
Algorithms   NDCG@1  NDCG@2  NDCG@5
BM25         0.3067  0.2933  0.2293
ST           0.3200  0.3133  0.3232
SS           0.3200  0.3200  0.3227
RankSVM      0.4400  0.4333  0.3935
ListNet      0.4400  0.4267  0.4209
C-CRF        0.5200  0.4733  0.4428
We made use of OHSUMED in LETOR for Pseudo Relevance Feedback and TREC2004 in LETOR
for Topic Distillation. As evaluation measure, we utilized NDCG@n (Normalized Discounted Cumulative Gain) [6].
As baseline methods for the two tasks, we used several local ranking algorithms such as BM25,
RankSVM [7] and ListNet [2]. BM25 is a widely used non-learning ranking method. RankSVM
is a state-of-the-art algorithm of the pairwise approach to learning to rank, and ListNet is a stateof-the-art algorithm of the listwise approach. For Pseudo Relevance Feedback, we also compared
with a traditional feedback method based on BM25 (BM25-PRF for short). For Topic Distillation,
we also compared with two traditional methods, sitemap based term propagation (ST) and sitemap
based score propagation (SS) [11], which propagate the relevance along sitemap structure. These
algorithms can be regarded as a kind of global ranking methods but they are not based on supervised
learning. We conducted 5 fold cross validation for C-CRF and all the baseline methods, using the
partition provided in LETOR.
The first part of Table 1 shows the ranking accuracies of BM25, BM25-PRF, RankSVM, ListNet, and C-CRF, in terms of NDCG averaged over five trials on OHSUMED data. C-CRF's performance
is superior to the performances of RankSVM and ListNet. This is particularly true for NDCG@1;
C-CRF achieves about 5 points higher accuracy than RankSVM and more than 2 points higher
accuracy than ListNet. The results indicate that C-CRF based global ranking can indeed improve
search relevance. C-CRF also outperforms BM25-PRF, the traditional method of using similarity
information for ranking. The result suggests that it is better to employ a supervised learning approach
for the task.
The second part of Table 1 shows the performances of BM25, SS, ST, RankSVM, ListNet, and C-CRF in terms of NDCG averaged over 5 trials on TREC data. C-CRF outperforms RankSVM and
ListNet at all NDCG positions. This is particularly true for NDCG@1. C-CRF achieves 8 points
higher accuracy than RankSVM and ListNet, which is a more than 15% relative improvement. The
result indicates that C-CRF based global ranking can achieve better results than local ranking for this
task. C-CRF also outperforms SS and ST, the traditional method of using parent-child information
for Topic Distillation. The result suggests that it is better to employ a learning based approach.
6
Related Work
Most existing work on using relation information in learning is for classification (e.g., [19, 1]) and
clustering (e.g., [4, 15]). To the best of our knowledge, there was not much work on using relation for
ranking, except Relational Ranking SVM (RRSVM) proposed in [14], which is based on a similar
motivation as our work.
There are large differences between RRSVM and C-CRF, however. For RRSVM, it is hard to combine the uses of multiple types of relations. In contrast, C-CRF can easily do it by incorporating the relations in different edge feature functions. There is a hyperparameter in RRSVM representing the trade-off between content and relation information; it needs to be manually tuned. This is not
necessary for C-CRF, however, because the trade-off between them is handled naturally by the feature weights in the model, which can be learnt automatically. Furthermore, in some cases certain
approximation must be made on the model in RRSVM (e.g. for Topic Distillation) in order to fit
into the learning framework of SVM. Such kind of approximation is unnecessary in C-CRF anyway.
Besides, C-CRF achieves better ranking accuracy than that reported for RRSVM [14] on the same
benchmark dataset.
7 Conclusions
We studied learning to rank methods for global ranking problem, in which we use both content
information of objects and relation information between objects for ranking. A Continuous CRF
(C-CRF) model was proposed for performing the learning task. Taking Pseudo Relevance Feedback
and Topic Distillation as examples, we showed how to use C-CRF in global ranking. Experimental
results on benchmark data show that C-CRF improves upon the baseline methods in the global
ranking tasks.
There are still issues which we need to investigate at the next step. (1) We have studied the method
of learning C-CRF with Maximum Likelihood Estimation. It is interesting to see how to apply
Maximum A Posteriori Estimation to the problem. (2) We have assumed absolute ranking scores
given in training data. We will study how to train C-CRF with relative preference data. (3) We have
studied two global ranking tasks: Pseudo Relevance Feedback and Topic Distillation. We plan to
look at other tasks in the future.
References

[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res., 7:2399–2434, 2006.
[2] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In ICML '07, pages 129–136, 2007.
[3] W. Chu and Z. Ghahramani. Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6:1019–1041, 2005.
[4] I. S. Dhillon. Co-clustering documents and words using bipartite spectral graph partitioning. In KDD '01, 2001.
[5] G. H. Golub and C. F. V. Loan. Matrix Computations (3rd ed.). Johns Hopkins University Press, 1996.
[6] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422–446, 2002.
[7] T. Joachims. Optimizing search engines using clickthrough data. In KDD '02, pages 133–142, 2002.
[8] K. L. Kwok. A document-document similarity measure based on cited titles and probability theory, and its application to relevance feedback retrieval. In SIGIR '84, pages 221–231, 1984.
[9] J. G. Lewis. Algorithm 582: The Gibbs-Poole-Stockmeyer and Gibbs-King algorithms for reordering sparse matrices. ACM Trans. Math. Softw., 8(2):190–194, 1982.
[10] T.-Y. Liu, J. Xu, T. Qin, W.-Y. Xiong, and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. In SIGIR '07 Workshop, 2007.
[11] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A study of relevance propagation for web search. In SIGIR '05, pages 408–415, 2005.
[12] T. Qin, T.-Y. Liu, X.-D. Zhang, G. Feng, D.-S. Wang, and W.-Y. Ma. Topic distillation via sub-site retrieval. Information Processing & Management, 43(2):445–460, 2007.
[13] T. Qin, T.-Y. Liu, X.-D. Zhang, D.-S. Wang, and H. Li. Global ranking of documents using continuous conditional random fields. Technical Report MSR-TR-2008-156, Microsoft Corporation, 2008.
[14] T. Qin, T.-Y. Liu, X.-D. Zhang, D.-S. Wang, W.-Y. Xiong, and H. Li. Learning to rank relational objects and its application to web search. In WWW '08, 2008.
[15] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[16] C. Sutton and A. McCallum. An introduction to conditional random fields for relational learning. In L. Getoor and B. Taskar, editors, Introduction to Statistical Relational Learning. MIT Press, 2006.
[17] T. Tao and C. Zhai. Regularized estimation of mixture models for robust pseudo-relevance feedback. In SIGIR '06, pages 162–169, 2006.
[18] C. X. Zhai, W. W. Cohen, and J. Lafferty. Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In SIGIR '03, pages 10–17, 2003.
[19] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In 18th Annual Conf. on Neural Information Processing Systems, 2003.
2,651 | 3,403 | Local Gaussian Process Regression
for Real Time Online Model Learning and Control
Duy Nguyen-Tuong, Jan Peters, Matthias Seeger
Max Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
{duy,jan.peters,matthias.seeger}@tuebingen.mpg.de
Abstract
Learning in real-time applications, e.g., online approximation of the inverse dynamics model for model-based robot control, requires fast online regression techniques. Inspired by local learning, we propose a method to speed up standard
Gaussian process regression (GPR) with local GP models (LGP). The training
data is partitioned in local regions, for each an individual GP model is trained.
The prediction for a query point is performed by weighted estimation using nearby
local models. Unlike other GP approximations, such as mixtures of experts, we
use a distance based measure for partitioning of the data and weighted prediction.
The proposed method achieves online learning and prediction in real-time. Comparisons with other non-parametric regression methods show that LGP has higher
accuracy than LWPR and is close to the performance of standard GPR and ν-SVR.
1 Introduction
Precise models of technical systems can be crucial in technical applications. Especially in robot
tracking control, only a well-estimated inverse dynamics model can allow both high accuracy and
compliant control. For complex robots such as humanoids or light-weight arms, it is often hard to
model the system sufficiently well and, thus, modern regression methods offer a viable alternative
[7,8]. For most real-time applications, online model learning poses a difficult regression problem due
to three constraints, i.e., firstly, the learning and prediction process should be very fast (e.g., learning
needs to take place at a speed of 20-200Hz and prediction at 200Hz to a 1000Hz). Secondly, the
learning system needs to be capable at dealing with large amounts of data (i.e., with data arriving
at 200Hz, less than ten minutes of runtime will result in more than a million data points). And,
thirdly, the data arrives as a continuous stream, thus, the model has to be continuously adapted to
new training examples over time.
These problems have been addressed by real-time learning methods such as locally weighted projection regression (LWPR) [7, 8]. Here, the true function is approximated with local linear functions
covering the relevant state-space and online learning became computationally feasible due to low
computational demands of the local projection regression which can be performed in real-time. The
major drawback of LWPR is the required manual tuning of many highly data-dependent metaparameters [15]. Furthermore, for complex data, large numbers of linear models are necessary in order to
achieve a competitive approximation.
A powerful alternative for accurate function approximation in high-dimensional space is Gaussian
process regression (GPR) [1]. Since the hyperparameters of a GP model can be adjusted by maximizing the marginal likelihood, GPR requires little effort and is easy and flexible to use. However,
the main limitation of GPR is that the computational complexity scales cubically with the training
examples n. This drawback prevents GPR from applications which need large amounts of training
data and require fast computation, e.g., online learning of inverse dynamics model for model-based
robot control. Many attempts have been made to alleviate this problem, for example, (i) sparse
Gaussian process (SGP) [2], and (ii) mixture of experts (ME) [3, 4]. In SGP, the training data is
approximated by a smaller set of so-called inducing inputs [2, 5]. Here, the difficulty is to choose an
appropriate set of inducing inputs, essentially replacing the full data set [2]. In contrast to SGP, ME
divide the input space in smaller subspaces by a gating network, within which a Gaussian process
expert, i.e., Gaussian local model, is trained [4, 6]. The computational cost is then significantly reduced due to much smaller number of training examples within a local model. The ME performance
depends largely on the way of partitioning the training data and the choice of an optimal number of
local models for a particular data set [4].
In this paper, we combine the basic idea behind both approaches, i.e., LWPR and GPR, attempting
to get as close as possible to the speed of local learning while having a comparable accuracy to
Gaussian process regression. This results in an approach inspired by [6, 8] using many local GPs in
order to obtain a significant reduction of the computational cost during both prediction and learning
step allowing the application to online learning. For partitioning the training data, we use a distance based measure, where the corresponding hyperparameters are optimized by maximizing the
marginal likelihood.
The remainder of the paper is organized as follows: first, we give a short review of standard GPR in
Section 2. Subsequently, we describe our local Gaussian process models (LGP) approach in Section
3 and discuss how it inherits the advantages of both GPR and LWPR. Furthermore, the learning
accuracy and performance of our LGP approach will be compared with other important standard
methods in Section 4, e.g., LWPR [8], standard GPR [1], sparse online Gaussian process regression
(OGP) [5] and ν-support vector regression (ν-SVR) [11], respectively. Finally, our LGP method is
evaluated for an online learning of the inverse dynamics models of real robots for accurate tracking
control in Section 5. Here, the online learning is demonstrated by rank-one update of the local GP
models [9]. The tracking task is performed in real-time using model-based control [10]. To our best
knowledge, it is the first time that GPR is successfully used for high-speed online model learning
in real time control on a physical robot. We present the results on a version of the Barrett WAM
showing that with the online learned model using LGP the tracking accuracy is superior compared
to state-of-the art model-based methods [10] while remaining fully compliant.
2
Regression with standard GPR
Given a set of n training data points {xi , yi }ni=1 , we would like to learn a function f (xi ) transforming the input vector xi into the target value yi given by yi = f (xi )+i , where i is Gaussian
2
noise with zero mean and variance
?n [1]. As a result, the observed targets can also be described
2
by y ? N 0, K(X, X) + ?n I , where K(X, X) denotes the covariance matrix. As covariance
function, a Gaussian kernel is frequently used [1]
1
k (xp , xq ) = ?s2 exp ? (xp ?xq )T W(xp ?xq ) ,
(1)
2
where ?s2 denotes the signal variance and W are the widths of the Gaussian kernel. The joint
distribution of the observed target values and predicted value for a query point x? is given by
y
K(X, X) + ?n2 I k(X, x? )
? N 0,
.
(2)
f (x? )
k(x? , X)
k(x? , x? )
The conditional distribution yields the predicted mean value f (x? ) with the corresponding variance
V (x? ) [1]
?1
f (x? ) = kT? K + ?n2 I
y = kT? ? ,
(3)
?1
V (x? ) = k(x? , x? ) ? kT? K + ?n2 I
k? ,
with k? = k(X, x? ), K = K(X, X) and ? denotes the so-called prediction vector. The hyperparameters of a Gaussian process with Gaussian kernel are ? = [?n2 , ?f2 , W] and their optimal value
for a particular data set can be derived by maximizing the log marginal likelihood using common
optimization procedures, e.g., Quasi-Newton methods [1].
2
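A compact NumPy sketch of Eqs. (1)–(3) (ours, not the authors' implementation; a Cholesky factorization replaces the explicit inverse for numerical stability):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gauss_kernel(Xp, Xq, W, s2):
    # k(x_p, x_q) = s2 * exp(-0.5 (x_p - x_q)^T W (x_p - x_q)), W diagonal
    d = Xp[:, None, :] - Xq[None, :, :]
    return s2 * np.exp(-0.5 * np.einsum('ijk,k,ijk->ij', d, W, d))

def gpr_predict(X, y, X_star, W, s2, noise):
    K = gauss_kernel(X, X, W, s2) + noise * np.eye(len(X))
    L = cho_factor(K, lower=True)
    alpha = cho_solve(L, y)                      # prediction vector
    k_star = gauss_kernel(X_star, X, W, s2)
    mean = k_star @ alpha                        # Eq. (3), mean
    v = cho_solve(L, k_star.T)
    var = s2 - np.einsum('ij,ji->i', k_star, v)  # Eq. (3), variance
    return mean, var
```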
Algorithm 1: Partitioning of training data and model learning.
  Input: new data point {x, y}.
  for k = 1 to number of local models do
    Compute distance to the k-th local model: w_k = exp(−0.5 (x − c_k)ᵀ W (x − c_k))
  end for
  Take the nearest local model: v = max_k(w_k)
  if v > w_gen then
    Insert {x, y} into the nearest local model: X_new = [X, x], y_new = [y, y]
    Update the corresponding center: c_new = mean(X_new)
    Compute the inverse covariance matrix and the prediction vector of the local model:
      K_new = K(X_new, X_new)
      α_new = (K_new + σ²I)⁻¹ y_new
  else
    Create a new model: c_{k+1} = x, X_{k+1} = [x], y_{k+1} = [y]
    Initialize the new inverse covariance matrix and the new prediction vector.
  end if

Algorithm 2: Prediction for a query point.
  Input: query data point x, M.
  Determine the M local models closest to x.
  for k = 1 to M do
    Compute distance to the k-th local model: w_k = exp(−0.5 (x − c_k)ᵀ W (x − c_k))
    Compute the local mean using the k-th local model: ȳ_k = k_kᵀ α_k
  end for
  Compute the weighted prediction using the M local models: ŷ = Σ_{k=1..M} w_k ȳ_k / Σ_{k=1..M} w_k.

[Figure 1: Robot arms used for data generation and evaluation: (a) SARCOS arm, (b) Barrett WAM.]

3 Approximation using Local GP Models
The major limitation of GPR is the expensive computation of the inverse matrix (K + σ_n²I)⁻¹, which yields a cost of O(n³). To reduce this computational cost, we cluster the training data in local
regions and, subsequently, train the corresponding GP models on these local clusters. The mean
prediction for a query point is then made by weighted prediction using the nearby local models
in the neighborhood. Thus, the algorithm consists out of two stages: (i) localization of data, i.e.,
allocation of new input points and learning of corresponding local models, (ii) prediction for a query
point.
3.1
Partitioning and Training of Local Models
Clustering input data is efficiently performed by considering a distance measure of the input point x
to the centers of all local models. The distance measure wk is given by the kernel used to learn the
local GP models, e.g., Gaussian kernel
1
T
wk = exp ? (x ? ck ) W (x ? ck ) ,
(4)
2
where ck denotes the center of the k-th local model and W a diagonal matrix represented the kernel
width. It should be noted, that we use the same kernel width for computing wk as well as for training
of all local GP models as given in Section 2. The kernel width W is obtained by maximizing the
log likelihood on a subset of the whole training data points. For doing so, we subsample the training
data and, subsequently, perform an optimization procedure.
During the localization process, a new model with center ck+1 is created, if all distance measures wk
fall below a limit value wgen . The new data point x is then set as new center ck+1 . Thus, the number
of local models is allowed to increase as the trajectories become more complex. Otherwise, if a new
point is assigned to a particular k-th model, the center ck is updated as mean of corresponding local
data points. With the newly assigned input point, the inverse covariance matrix of the corresponding local model can be updated. The localization procedure is summarized in Algorithm 1.

The main computational cost of this algorithm is O(N³) for inverting the local covariance matrix, where N denotes the number of data points in a local model. Furthermore, we can control the complexity by limiting the number of data points in a local model. Since the number of local data points increases continuously over time, we can comply with this limit by deleting old data points as new ones are included. Insertion and deletion of data points can be decided by evaluating the information gain of the operation. The cost for inverting the local covariance matrix can be further reduced, as we only need to update the full inverse matrix once it is computed. The update can be efficiently performed in a stable manner using a rank-one update [9], which has a complexity of O(N²).
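When a point is added to a local model, the inverse (K + σ²I)⁻¹ can thus be grown incrementally instead of recomputed from scratch; a sketch (ours) using the standard block-matrix inversion identity, which realizes the O(N²) update referred to above:

```python
import numpy as np

def grow_inverse(K_inv, k_new, k_self, noise):
    """Update inv(K + noise*I) after appending one data point.

    K_inv  : (N, N) current inverse of the regularized kernel matrix
    k_new  : (N,)   kernel values between the new point and the stored points
    k_self : scalar k(x_new, x_new)
    """
    u = K_inv @ k_new
    gamma = (k_self + noise) - k_new @ u   # Schur complement (scalar)
    N = K_inv.shape[0]
    out = np.empty((N + 1, N + 1))
    out[:N, :N] = K_inv + np.outer(u, u) / gamma
    out[:N, N] = -u / gamma
    out[N, :N] = -u / gamma
    out[N, N] = 1.0 / gamma
    return out
```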
3.2 Prediction using Local Models
The prediction for a mean value ŷ is performed using weighted averaging over M local predictions ŷ_k for a query point x [8]. The weighted prediction ŷ is then given by ŷ = E{ŷ_k | x} = Σ_{k=1..M} ŷ_k p(k|x). According to Bayes' theorem, the probability of model k given x can be expressed as p(k|x) = p(k, x) / Σ_{k=1..M} p(k, x) = w_k / Σ_{k=1..M} w_k. Hence, we have

    ŷ = Σ_{k=1..M} w_k ŷ_k / Σ_{k=1..M} w_k.   (5)

The probability p(k|x) can be interpreted as a normalized distance of the query point x to the local model k, where the measure w_k is used as given in Equation (4). Thus, each local prediction ŷ_k, determined using Equation (3), is additionally weighted by the distance w_k between the corresponding center c_k and the query point x. The search for the M local models can be quickly done by evaluating the distances between the query point x and all model centers c_k. The prediction procedure is summarized in Algorithm 2.
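Algorithm 2 in NumPy form (a sketch, ours; `local_models` is assumed to hold each model's center, training inputs, and prediction vector α):

```python
import numpy as np

def lgp_predict(x, local_models, W, s2, M=3):
    """Weighted prediction of Eq. (5) over the M most activated local models."""
    centers = np.array([m['center'] for m in local_models])
    d = centers - x
    w = np.exp(-0.5 * np.einsum('ij,j,ij->i', d, W, d))
    nearest = np.argsort(w)[-M:]                # M nearest local models
    num, den = 0.0, 0.0
    for k in nearest:
        m = local_models[k]
        diff = m['X'] - x
        k_vec = s2 * np.exp(-0.5 * np.einsum('ij,j,ij->i', diff, W, diff))
        num += w[k] * (k_vec @ m['alpha'])      # w_k * local mean ybar_k
        den += w[k]
    return num / den
```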
4 Learning Inverse Dynamics
We have evaluated our algorithm using high-dimensional robot data taken from real robots, e.g.,
the 7 degree-of-freedom (DoF) anthropomorphic SARCOS master arm and 7-DoF Barrett whole
arm manipulator shown in Figure 1, as well as a physically realistic SL simulation [12]. We compare the learning performance of LGP with the state-of-the-art in non-parametric regression, e.g.,
LWPR, ν-SVR, OGP and standard GPR, in the context of approximating inverse robot dynamics. For evaluating ν-SVR and GPR, we have employed the libraries [13] and [14].
4.1 Dynamics Learning Accuracy Comparison
For the comparison of the accuracy of our method in the setting of learning inverse dynamics, we
use three data sets, (i) SL simulation data (SARCOS model) as described in [15] (14094 training
points, 5560 test points), (ii) data from the SARCOS master arm (13622 training points, 5500 test
points) [8], (iii) a data set generated from our Barrett arm (13572 training points, 5000 test points).
Given samples x = [q, q̇, q̈] as input, where q, q̇, q̈ denote the joint angles, velocities and accelerations, and using the corresponding joint torques y = [u] as targets, we have a proper regression problem.
For the considered 7 degrees of freedom robot arms, we, thus, have data with 21 input dimensions
(for each joint, we have an angle, a velocity and an acceleration) and 7 targets (a torque for each
joint). We learn the robot dynamics model in this 21-dim space for each DoF separately employing
LWPR, ν-SVR, GPR, OGP and LGP, respectively.
Partitioning of the training examples for LGP can be performed either in the same input space (where
the model is learned) or in another space which has to be physically consistent with the approximated
function. In the following, we localize the data depending on the position of the robot. Thus, the
partitioning of training data is performed in a 7-dim space (7 joint angles). After determining wk
for all k local models in the partitioning space, the input point will be assigned to the nearest local
model, i.e., the local model with the maximal value of distance measure wk .
[Figure 2: nMSE per degree of freedom for LWPR, OGP, ν-SVR, GPR and LGP; panels: (a) approximation error using SL data (SARCOS model), (b) approximation error using SARCOS data, (c) approximation error using Barrett WAM data.]

Figure 2: Approximation error as nMSE for each DoF. The error is computed after prediction on the test sets with simulated data from the SL SARCOS model and real robot data from the Barrett and SARCOS master arm, respectively. In most cases, LGP outperforms LWPR and OGP in learning accuracy while being competitive to ν-SVR and standard GPR. It should be noted that the nMSE depends on the target variances. Due to smaller variances in the Barrett data, the corresponding nMSE also has a larger scale compared to SARCOS.
Figure 2 shows the normalized mean squared error (nMSE) of the evaluation on the test set for each
of the three evaluated scenarios, i.e., the simulated SARCOS arm in (a), the real SARCOS arm in
(b) and the Barrett arm in (c). Here, the normalized mean squared error is defined as nMSE =
Mean squared error/Variance of target. During the prediction on the test set using LGP, we take the
most activated local models, i.e., the ones which are next to the query point.
It should be noted that the choice of the limit value w_gen during the partitioning step is crucial for the performance of LGP and, unfortunately, is an open parameter. If w_gen is too small, a lot of local models will be generated with small numbers of training points. It turns out that these small local models do not perform well in generalization for unknown data. If w_gen is large, the local models also become large, which increases the computational complexity. Here, the training data are clustered in about 30 local regions, ensuring that each local model has a sufficient amount of data points for high accuracy (in practice, roughly a hundred data points for each local model suffice) while having sufficiently few that the solution remains feasible in real-time (on our current hardware, a Core Duo at 2 GHz, that means less than 1000 data points). On average, each local model has approximately 500 training examples. This small number of training inputs enables a fast training for each local model, i.e., the matrix inversion. For estimating the hyperparameters using likelihood optimization, we subsample the training data, which results in a subset of about 1000 data points.

[Figure 3: prediction time in ms (logarithmic scale) versus number of training points, for LWPR, ν-SVR, GPR and LGP.]

Figure 3: Average time in milliseconds needed for prediction of 1 query point. The computation time is plotted logarithmically with respect to the number of training examples. The time as stated above is the required time for prediction of all 7 DoF. Here, LWPR presents the fastest method due to simple regression models. Compared to global regression methods such as standard GPR and ν-SVR, local GP makes a significant improvement in terms of prediction time.
Considering the approximation error on the test set shown in Figure 2(a-c), it can be seen that
LGP generalizes well using only few local models for prediction. In all cases, LGP outperforms
LWPR and OGP while being close in learning accuracy to global methods such as GPR and ν-SVR. The mean-prediction for GPR is determined according to Equation (3), where we precomputed
the prediction vector α from the training data. When a query point appears, the kernel vector k_*ᵀ is evaluated for this particular point. The operation of mean-prediction then has the order of O(n) for standard GPR (similarly, for ν-SVR) and O(NM) for LGP, where n denotes the total number of
training points, M number of local models and N number of data points in a local model.
4.2 Comparison of Computation Speed for Prediction
Besides the reduction of training time (i.e., matrix inversion), the prediction time is also reduced significantly compared to GPR and ν-SVR due to the fact that only a small number of local models
in the vicinity of the query point are needed during prediction for LGP. Thus, the prediction time
can be controlled by the number of local models. A large number of local models may provide a
smooth prediction but on the other hand increases the time complexity.
The comparison of prediction speed is shown in Figure 3. Here, we train LWPR, ν-SVR, GPR
and LGP on 5 different data sets with increasing training examples (1065, 3726, 7452, 10646 and
14904 data points, respectively). Subsequently, using the trained models we compute the average
time needed to make a prediction for a query point for all 7 DoF. For LGP, we take a limited number
of local models in the vicinity for prediction, e.g., M = 3. Since our control system requires a
minimal prediction rate at 100 Hz (10 ms) in order to ensure system stability, data sets with more
than 15000 points are not applicable for standard GPR or ν-SVR due to high computation demands
for prediction.
The results show that the computation time requirements of ν-SVR and GPR rise very fast with the size of the training data set, as expected. LWPR remains the best method in terms of computational
complexity only increasing at a very low speed. However, as shown in Figure 3, the cost for LGP is
significantly lower than that of ν-SVR and GPR and increases at a much lower rate. In practice, we can also curb the computation demands of single models by deleting old data points as new ones are assigned to the model. As an approach to deleting and inserting data points, we can use the information
gain of the corresponding local model as a principled measure. It can be seen from the results that
LGP represents a compromise between learning accuracy and computational complexity. For large
data sets (e.g., more than 5000 training examples), LGP reduces the prediction cost considerably
while keeping a good learning performance.
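A minimal sketch of such a capacity-limited local model is given below (our own construction; random deletion stands in for the information-gain criterion mentioned above, and the class name and fields are hypothetical):

```python
import random

class LocalModel:
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.points = []                 # list of (x, y) training pairs

    def insert(self, x, y):
        if len(self.points) >= self.capacity:
            # Simplest policy: evict a random point. An information-gain
            # criterion would instead evict the least informative point.
            self.points.pop(random.randrange(len(self.points)))
        self.points.append((x, y))
        # ...followed by an incremental update of the local kernel factors.
```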
5 Application in Model-based Robot Control

[Figure 4 diagram: the desired trajectory [qd, q̇d, q̈d] feeds the Local GP model; its feedforward torque uFF is summed with the feedback Kp e + Kv ė to form the command u applied to the robot.]
Figure 4: Schematic showing model-based robot control. The learned dynamics model can be updated online using LGP.

In this section, we first use the inverse dynamics models learned in Section 4.1 for a model-based tracking control task [10] in the setting shown in Figure 4. Here, the learned model of the robot is applied for an online prediction of the feedforward torques uFF given the desired trajectory [qd, q̇d, q̈d]. Subsequently, the model approximated by LGP is adapted online. To demonstrate online learning, the local GP models are updated in real-time using the rank-one update. As shown in Figure 4, the controller command u consists of the feedforward part uFF and the feedback part uFB = Kp e + Kv ė, where e = qd − q denotes the tracking error and Kp, Kv are the position-gain and velocity-gain, respectively. During the control experiment we set the gains to very low values, taking the aim of compliant control into account. As a result, the learned model has a stronger effect on computing the predicted torque uFF and, hence, a better learning performance of each method results in a lower tracking error.
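One control cycle of this computed-torque scheme might be sketched as follows (our illustration; the model interface `model.predict` and the matrix gains are assumptions, not code from the paper):

```python
def control_step(model, q, q_dot, q_des, qd_des, qdd_des, Kp, Kv):
    """One cycle of model-based tracking control, run at >= 100 Hz."""
    u_ff = model.predict(q_des, qd_des, qdd_des)  # learned inverse dynamics
    e = q_des - q                                 # position tracking error
    e_dot = qd_des - q_dot                        # velocity tracking error
    u_fb = Kp @ e + Kv @ e_dot                    # low-gain PD feedback
    return u_ff + u_fb                            # joint torque command u
```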
For comparison with the learned models, we also compute the feedforward torque using the rigid-body (RB) formulation, which is a common approach in robot control [10]. The control task is performed in real-time on the Barrett WAM, as shown in Figure 1.
[Figure 5: bar charts of tracking error (RMSE) per degree of freedom (1-7). Panel (a) compares RBD, LWPR, ν-SVR, GPR and LGP offline; panel (b) compares LGP offline, GPR and LGP online.]
Figure 5: (a) Tracking error as RMSE on the test trajectory for each DoF with the Barrett WAM. (b) Tracking error after online learning with LGP. The model uncertainty is reduced with online learning using LGP. With online learning, LGP is able to outperform offline-learned models using standard GPR on test trajectories.
As desired trajectory, we generate a test trajectory similar to the one used for learning the inverse dynamics models in Section 4.1. Figure 5 (a) shows the tracking errors on the test trajectory for all 7 DoFs, where the error is computed as the root mean squared error (RMSE). Here, LGP provides a competitive control performance compared to GPR while being superior to LWPR and the state-of-the-art rigid-body model. It can be seen that for several DoFs the tracking errors are large, for example for the 5th, 6th and 7th DoF. The reason is that for these DoFs the unknown nonlinearities are time-dependent, e.g., the gear drive of the 7th DoF, which cannot be approximated well using just one offline-learned model. Since it is not possible to cover the complete state space with a single data set, online learning is necessary.
5.1 Online Learning of Inverse Dynamics Models with LGP
The ability to adapt the learned inverse dynamics models online with LGP rests on the rank-one update of the local models, which has a complexity of O(n²) [9]. Since the number of training examples in each local model is limited (500 points on average), the update procedure is fast enough for real-time application. For online learning, the models are updated as shown in Figure 4. To do so, we regularly sample the joint torques u and the corresponding robot trajectories [q, q̇, q̈] online. For the time being, as a new point is inserted we randomly delete another data point from the local model if the maximal number of data points is reached. The process of insertion and deletion of data points can be further improved by considering the information gain (and information loss) of the operation. Figure 5 (b) shows the tracking error after online learning with LGP. It can be seen that the errors for each DoF are significantly reduced with online LGP compared to those with offline-learned models. With online learning, LGP is also able to outperform standard GPR.
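The rank-one update cited here ([9]) is, in essence, the standard O(n²) Cholesky-factor update below (our transcription of the textbook routine, not code from the paper); it is a building block for incrementally refactoring a local model's kernel matrix when points are exchanged:

```python
import numpy as np

def chol_rank1_update(L, v):
    """In-place update of lower-triangular L so that L L^T += v v^T; O(n^2)."""
    v = v.astype(float).copy()
    n = L.shape[0]
    for k in range(n):
        r = np.hypot(L[k, k], v[k])            # Givens rotation radius
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * v[k + 1:]) / c
            v[k + 1:] = c * v[k + 1:] - s * L[k + 1:, k]
    return L
```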
6 Conclusion
With LGP, we combine the fast computation of local regression with more accurate probabilistic regression methods, while requiring little tuning effort. LGP achieves higher learning accuracy than locally linear methods such as LWPR while having lower computational cost than GPR and ν-SVR. The reduced cost allows LGP to learn the model online, which is necessary in order to generalize the model to all trajectories. Model-based tracking control using the online-learned model achieves superior control performance compared to the state-of-the-art method as well as to offline-learned models on unknown trajectories.
References
[1] C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine Learning. Massachusetts Institute of Technology: MIT Press, 2006.
[2] J. Q. Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," Journal of Machine Learning Research, 2005.
[3] V. Tresp, "Mixtures of Gaussian processes," Advances in Neural Information Processing Systems, 2001.
[4] C. E. Rasmussen and Z. Ghahramani, "Infinite mixtures of Gaussian process experts," Advances in Neural Information Processing Systems, 2002.
[5] L. Csato and M. Opper, "Sparse online Gaussian processes," Neural Computation, 2002.
[6] E. Snelson and Z. Ghahramani, "Local and global sparse Gaussian process approximations," Artificial Intelligence and Statistics, 2007.
[7] S. Schaal, C. G. Atkeson, and S. Vijayakumar, "Scalable techniques from nonparametric statistics for real-time robot learning," Applied Intelligence, pp. 49-60, 2002.
[8] S. Vijayakumar, A. D'Souza, and S. Schaal, "Incremental online learning in high dimensions," Neural Computation, 2005.
[9] M. Seeger, "Low rank update for the Cholesky decomposition," Tech. Rep., 2007. [Online]. Available: http://www.kyb.tuebingen.mpg.de/bs/people/seeger/
[10] J. J. Craig, Introduction to Robotics: Mechanics and Control, 3rd ed. Prentice Hall, 2004.
[11] B. Schölkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Cambridge, MA: MIT Press, 2002.
[12] S. Schaal, "The SL simulation and real-time control software package," Tech. Rep., 2006. [Online]. Available: http://www-clmc.usc.edu/publications/S/schaal-TRSL.pdf
[13] C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines, 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[14] M. Seeger, LHOTSE: Toolbox for Adaptive Statistical Models, 2007, http://www.kyb.tuebingen.mpg.de/bs/people/seeger/lhotse/.
[15] D. Nguyen-Tuong, J. Peters, and M. Seeger, "Computed torque control with nonparametric regression models," Proceedings of the 2008 American Control Conference (ACC 2008), 2008.
| 3403 |@word version:1 inversion:2 stronger:1 open:1 simulation:3 covariance:7 decomposition:1 reduction:2 outperforms:2 current:1 realistic:1 enables:1 kyb:2 update:9 intelligence:2 gear:1 xk:1 short:1 core:1 sarcos:11 provides:1 firstly:1 become:2 viable:1 consists:2 combine:2 manner:1 expected:1 roughly:1 mpg:3 frequently:1 mechanic:1 torque:7 inspired:2 little:2 considering:3 increasing:2 estimating:1 suffice:1 duo:1 interpreted:1 cial:1 runtime:1 control:24 partitioning:9 planck:1 local:74 limit:3 approximately:1 rbd:1 initialization:1 fastest:1 limited:2 decided:1 practice:2 lost:1 procedure:5 jan:2 significantly:4 projection:2 pre:1 svr:20 get:1 tuong:2 close:3 cal:1 prentice:1 context:1 www:4 demonstrated:1 center:8 maximizing:4 williams:1 stability:1 curb:1 updated:3 limiting:1 target:7 gps:1 velocity:3 approximated:5 expensive:1 observed:2 inserted:1 csie:1 region:3 yk:2 principled:1 transforming:1 complexity:8 insertion:2 dynamic:14 trained:3 compromise:1 duy:2 localization:3 f2:1 joint:7 represented:1 train:2 fast:7 describe:1 kp:3 query:15 artificial:1 neighborhood:1 dof:10 larger:1 otherwise:1 ability:1 statistic:2 gp:12 online:37 advantage:1 matthias:2 propose:1 maximal:2 remainder:1 adaptation:1 inserting:1 relevant:1 achieve:1 inducing:2 kv:3 olkopf:1 cluster:2 requirement:1 incremental:1 depending:1 pose:1 nearest:3 predicted:3 qd:3 lhotse:2 drawback:2 subsequently:5 ogp:8 require:1 generalization:1 clustered:1 alleviate:1 anthropomorphic:1 ntu:1 biological:1 secondly:1 adjusted:1 insert:1 sufficiently:2 considered:1 hall:1 exp:4 major:2 achieves:3 estimation:1 applicable:1 create:1 successfully:1 weighted:8 mit:2 gaussian:21 aim:1 ck:13 command:1 publication:1 derived:1 inherits:1 schaal:4 improvement:1 rank:5 likelihood:5 tech:2 seeger:7 contrast:1 nonparameteric:1 dim:2 dependent:2 rigid:2 cubically:1 quasi:1 lwpr:18 metaparameters:1 germany:1 flexible:1 art:4 marginal:3 once:1 having:4 represents:1 sachusetts:1 few:2 modern:1 randomly:1 individual:1 usc:1 attempt:1 freedom:7 highly:1 evaluation:2 mixture:4 arrives:1 light:1 behind:1 activated:1 accurate:3 kt:4 capable:1 necessary:3 spemannstra:1 ynew:2 divide:1 old:2 dofs:3 desired:2 plotted:1 minimal:1 delete:1 cost:10 subset:2 too:1 considerably:1 vijayakumar:2 compliant:3 modelbased:1 continuously:2 quickly:1 squared:4 choose:1 expert:4 american:1 account:1 amples:1 de:3 nonlinearities:1 summarized:2 wk:17 depends:2 stream:1 performed:9 root:1 lot:1 view:1 candela:1 doing:2 reached:1 competitive:3 rmse:4 ni:1 accuracy:12 became:1 variance:6 largely:1 efficiently:2 yield:2 generalize:1 bayesian:1 craig:1 trajectory:10 drive:1 cybernetics:1 acc:1 manual:1 ed:1 pp:1 gain:6 knowledge:1 organized:1 appears:1 higher:2 nately:1 improved:1 formulation:1 evaluated:4 done:1 furthermore:3 just:1 stage:1 smola:1 hand:1 replacing:1 cnew:1 manipulator:1 effect:1 normalized:3 true:1 hence:2 assigned:4 vicinity:2 regularization:1 sgp:3 during:5 width:4 covering:1 noted:3 m:2 pdf:1 complete:1 snelson:1 superior:3 common:2 physical:1 million:1 thirdly:1 significant:2 cambridge:1 tuning:2 rd:1 pm:6 similarly:1 robot:21 stable:1 scenario:1 ubingen:1 rep:2 yi:3 seen:4 employed:1 determine:1 signal:1 ii:3 full:2 reduces:1 smooth:1 technical:2 offer:1 lin:1 controlled:1 ensuring:1 prediction:41 scalable:1 regression:20 basic:1 controller:1 essentially:1 metric:1 schematic:1 physically:2 kernel:10 robotics:1 csato:1 separately:1 addressed:1 else:1 adhere:1 crucial:1 sch:1 unlike:1 hz:5 regularly:1 feedforward:3 iii:1 easy:1 enough:1 
idea:1 effort:2 peter:3 oder:1 amount:4 nonparametric:1 ten:1 locally:2 hardware:1 reduced:5 generate:1 http:4 sl:5 outperform:2 millisecond:1 estimated:1 rb:1 demonstrating:1 localize:1 libsvm:2 inverse:16 angle:3 powerful:1 wgen:5 master:3 uncertainty:1 package:1 place:1 comparable:1 uff:3 xnew:4 adapted:2 constraint:1 n3:1 software:1 nearby:2 speed:7 attempting:1 according:2 smaller:4 partitioned:1 tw:1 wam:5 b:2 taken:1 computationally:1 equation:3 remains:2 discus:1 turn:1 precomputed:1 cjlin:1 needed:3 end:3 generalizes:1 operation:3 available:2 appropriate:1 alternative:2 ktk:1 denotes:6 remaining:1 clustering:1 ensure:1 newton:1 unifying:1 ghahramani:2 especially:1 approximating:1 parametric:2 diagonal:1 subspace:1 distance:11 simulated:2 me:3 tuebingen:3 reason:1 difficult:1 stated:1 rise:1 proper:1 unknown:3 perform:2 allowing:1 precise:1 souza:1 inverting:2 required:2 toolbox:1 optimized:1 learned:13 deletion:2 diction:1 able:2 beyond:1 below:1 max:2 deleting:3 difficulty:1 arm:12 dated:1 technology:1 library:2 created:1 xq:3 review:1 comply:1 determining:1 beside:1 fully:1 generation:1 limitation:2 allocation:1 humanoid:1 degree:7 sufficient:1 xp:3 consistent:1 keeping:1 arriving:1 rasmussen:3 offline:6 allow:1 institute:2 fall:1 taking:1 sparse:5 ghz:1 feedback:1 dimension:2 opper:1 evaluating:3 made:2 adaptive:1 nguyen:2 employing:1 atkeson:1 approximate:1 dealing:1 global:3 knew:2 xi:4 continuous:1 search:1 additionally:1 learn:4 complex:3 pk:1 main:2 s2:2 noise:1 hyperparameters:4 whole:2 n2:6 subsample:2 allowed:1 nmse:8 body:2 position:2 gpr:36 minute:1 theorem:1 gating:1 showing:2 barrett:12 demand:3 logarithmic:1 prevents:1 expressed:1 tracking:15 chang:1 ma:2 conditional:1 acceleration:2 feasible:2 hard:1 included:1 determined:2 infinite:1 reducing:2 averaging:1 called:2 total:1 support:3 cholesky:1 people:2 |
2,652 | 3,404 | On Bootstrapping the ROC Curve
Patrice Bertail
CREST (INSEE) & MODAL'X - Université Paris 10
[email protected]
Stéphan Clémençon
Telecom Paristech (TSI) - LTCI UMR Institut Telecom/CNRS 5141
[email protected]
Nicolas Vayatis
ENS Cachan & UniverSud - CMLA UMR CNRS 8536
[email protected]
Abstract
This paper is devoted to thoroughly investigating how to bootstrap the ROC curve, a widely used visual tool for evaluating the accuracy of test/scoring statistics in the bipartite setup. The issue of confidence bands for the ROC curve is considered, and a resampling procedure based on a smooth version of the empirical distribution, called the "smoothed bootstrap", is introduced. Theoretical arguments and simulation results are presented to show that the "smoothed bootstrap" is preferable to a "naive" bootstrap in order to construct accurate confidence bands.
1 Introduction
Since the seminal contribution of [14], so-called ROC curves (ROC standing for Receiver Operating Characteristic) have been extensively used in a wide variety of applications (anomaly detection in signal analysis, medical diagnosis, search engines, credit-risk screening) as a visual tool for evaluating the performance of a test statistic regarding its capacity to discriminate between two populations, see [8]. Whereas the statistical properties of their empirical counterparts have only lately been studied from the asymptotic angle, see [18, 13, 11, 16], ROC curves have also recently received much attention in the machine-learning literature through the development of statistical learning procedures tailored for the ranking problem, see [10, 2]. The latter consists of determining, based on training data, a test statistic s(X) (also called a scoring function) with a ROC curve "as high as possible" at all points of the ROC space. Given a candidate s(X), it is thus of prime importance to assess its performance by computing a confidence band for the corresponding ROC curve, preferably in a data-driven fashion. Indeed, in such a functional setup, resampling-based procedures should naturally be preferred to those relying on computing or simulating the (Gaussian) limiting distribution, as first observed in [19, 21, 20], where the use of the bootstrap is promoted for building confidence bands in the ROC space.

By building on recent works, see [17, 12], the purpose of this paper is to investigate how the bootstrap approach should be practically implemented, based on a thorough analysis of the asymptotic properties of empirical ROC curves. Beyond the pointwise analysis developed in the studies mentioned above, here we tackle the problem from a functional angle, considering the entire ROC curve or parts of it. This viewpoint indeed appears particularly relevant in scoring applications. Although the asymptotic results established in this paper are of a theoretical nature, they are considerably meaningful from a computational perspective. It turns out indeed that smoothing is the
key ingredient for the bootstrap confidence band to be accurate, whereas a naive bootstrap approach
would yield bands of low coverage probability in this case and should consequently be avoided by practitioners when analyzing ROC curves.
The rest of the paper is organized as follows. In Section 2, notations are first set out and certain
key notions of ROC analysis are briefly recalled. The choice of an adequate (pseudo-)metric on the
ROC space, a crucial point of the analysis, is also considered. The smoothed bootstrap algorithm
is presented in Section 3, together with the theoretical results establishing its asymptotic accuracy
as well as preliminary simulation results illustrating the impact of smoothing on the bootstrap performance. In Section 4, the gain in terms of convergence rate acquired by the smoothing step is
thoroughly discussed. We refer to [1] for technical proofs.
2 Background
Here we briefly recall basic concepts of the bipartite ranking problem as well as key results related to the statistical estimation of ROC curves. We also set out the notation that shall be needed throughout the paper. Although the results contained in this paper can be formulated without referring to the bipartite ranking framework, for the purpose of motivating the present analysis we intentionally connect them to this major statistical learning problem, which has recently revitalized interest in assessing the accuracy of empirical ROC curves, see [4].
2.1 Assumptions and notation
In the bipartite ranking problem, the goal is to order all the elements X of a set X by degree of relevance, when relevance may be observed through some binary indicator variable Y. Precisely, one has a system consisting of a binary random output Y, taking its values in {−1, 1} say, and a random input X, taking its values in a (generally high-dimensional) feature space X, which models some observation for predicting Y. The probabilistic model is the same as for standard binary classification, but the prediction task is different. In the case of information retrieval for instance, the goal is to order all documents x of the list X by degree of relevance for a particular request (rather than simply classifying them as relevant or not, as in classification). This amounts to assigning to each document x in X a score s(x) indicating its degree of relevance for this specific query. The challenge is thus to build a scoring function s : X → ℝ from sampling data, so as to rank the observations x by increasing order of their score s(x) as accurately as possible: the higher the score s(X) is, the more likely one should be to observe Y = +1.
True ROC curves. A standard way of measuring the ranking performance consists of plotting the ROC curve, namely the graph of the mapping

ROCs : α ∈ (0, 1) ↦ 1 − Gs ∘ Hs^{-1}(1 − α),

where Gs (respectively Hs) denotes the cdf of s(X) conditioned on Y = +1 (resp. conditioned on Y = −1), and F^{-1}(α) = inf{x ∈ ℝ : F(x) ≥ α} denotes the generalized inverse of any cdf F on ℝ. It boils down to plotting the true positive rate versus the false positive rate when testing the assumption "H0 : Y = −1" based on the statistic s(X). This functional performance measure induces a partial order on the set of scoring functions, according to which it may be shown, by standard Neyman-Pearson arguments, that increasing transforms of the regression function η(x) = P(Y = +1 | X = x) are the optimal scoring functions (the test statistic η(X) is uniformly more powerful, i.e. ∀α ∈ (0, 1), ROCη(α) ≥ ROCs(α), for any scoring function s(x)).
Empirical ROC curve estimates. Practical learning strategies for selecting a good scoring function are based on training data Dn = {(Xi, Yi)}1≤i≤n and should thus rely on accurate empirical estimates of the true ROC curves. Let p = P(Y = +1). For any scoring function candidate s(X), an empirical counterpart of ROCs is naturally obtained by computing

∀α ∈ (0, 1),  ROĈs(α) = 1 − Ĝs ∘ Ĥs^{-1}(1 − α)

from the empirical cdf estimates

Ĝs(x) = (1/n+) Σ_{i=1}^{n} I{Yi=+1} K(x − s(Xi))  and  Ĥs(x) = (1/n−) Σ_{i=1}^{n} I{Yi=−1} K(x − s(Xi)),

where n+ = Σ_{i=1}^{n} I{Yi = +1} = n − n− is the (random) number of positive instances among the sample (distributed as the binomial Bin(n, p)) and K(u) denotes the step function I{u≥0}. In order to obtain smoothed versions G̃s(x) and H̃s(x) of the latter cdfs, a typical choice consists of picking instead a function K(u) of the form ∫_{v≥0} Kh(u − v) dv, with Kh(u) = h^{-1}K(h^{-1}u), where K ≥ 0 is a regularizing Parzen-Rosenblatt kernel (i.e. a bounded, square-integrable function such that ∫K(v) dv = 1) and h > 0 is the smoothing bandwidth; see Remark 1 for a practical view of smoothing. Here and throughout, I{A} denotes the indicator function of any event A.
Metrics on the ROC space. When it comes to measuring closeness between curves in the ROC space, various metrics may be used, see [9]. Viewing the ROC space as a subset of the Skorohod space D([0, 1]) of càdlàg functions f : [0, 1] → ℝ, the standard metric induced by the sup norm ||·||_∞ appears as a natural choice. As shall be seen below, asymptotic arguments for grounding the bootstrapping of the empirical ROC curve fluctuations, when measured in terms of the sup norm ||·||_∞, are rather straightforward. However, given the geometry of empirical ROC curves, this metric is not always convenient for our purpose and may produce very wide, and thus non-informative, confidence bands. For analyzing stepwise graphs, such as empirical ROC curves, we shall consider the closely related pseudo-metric defined as follows:

∀(f1, f2) ∈ D([0, 1])²,  dB(f1, f2) = sup_{t∈[0,1]} dB(f1, f2; t),

where dB(f1, f2; t) = min{ |f1(t) − f2(t)|, |f2^{-1} ∘ f1(t) − t|, |f1^{-1} ∘ f2(t) − t| }. We clearly have dB(f1, f2) ≤ ||f1 − f2||_∞. The major advantage of considering this pseudo-metric is that it provides control on vertical and horizontal jumps of ROC curves at the same time, treating both types of error in a symmetric fashion. Equipped with this pseudo-metric, two piecewise-constant ROC curves may be close to each other even if their jumps do not exactly match. This is clearly appropriate for describing the fluctuations of the empirical ROC curve (and the deviation between the latter and its bootstrap counterpart as well). This way, dB permits the construction of bands of reasonable size, well adapted to the stepwise shape of empirical ROC curves, with better coverage probabilities. In this respect, the closely related Hausdorff distance (i.e. the distance between the graphs completed by linear segments at jump points) would also be a pertinent choice. However, providing a theoretical basis in the case of the Hausdorff distance is very challenging and will not be addressed in this paper, owing to space limitations.
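For intuition only, a discretized evaluation of dB for two nondecreasing curves sampled on a common grid might look as follows (our sketch; the generalized inverses are approximated by binary search on the grid):

```python
import numpy as np

def d_B(f1, f2, grid):
    """Discretized pseudo-metric d_B for nondecreasing curves on a common grid."""
    def geninv(f, y):
        # Generalized inverse f^{-1}(y) = inf{t : f(t) >= y} on the grid.
        idx = np.clip(np.searchsorted(f, y, side="left"), 0, len(grid) - 1)
        return grid[idx]
    vert = np.abs(f1 - f2)                  # |f1(t) - f2(t)|
    horiz1 = np.abs(geninv(f2, f1) - grid)  # |f2^{-1}(f1(t)) - t|
    horiz2 = np.abs(geninv(f1, f2) - grid)  # |f1^{-1}(f2(t)) - t|
    return float(np.max(np.minimum(np.minimum(vert, horiz1), horiz2)))
```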
As the goal pursued in the present paper is to build a confidence band for the ROC curve of a given diagnostic test statistic s(X), in the ROC space viewed as a subspace of the Skorohod space D([0, 1]) equipped with a proper (pseudo-)metric, we shall omit the index s in the quantities considered and denote the r.v. s(X) by Z (and the s(Xi)'s by Zi, 1 ≤ i ≤ n) for notational simplicity. Throughout the paper, we assume that H(dx) and G(dx) are continuous probability distributions, with densities h(x) and g(x) respectively. Finally, denote by P the joint distribution of (Z, Y) on ℝ × {−1, +1} and by Pn its empirical version based on the sample Dn = {(Zi, Yi)}1≤i≤n. Equipped with the notation above, one may write P(dz, y) = p I{y=+1} G(dz) + (1 − p) I{y=−1} H(dz).
2.2 Asymptotic law - Gaussian approximation

In the situation described above, the next theorem establishes the strong consistency of the empirical ROC curve in sup norm and provides a strong approximation at the rate 1/√n, up to logarithmic factors, for the fluctuation process:

rn(α) = √n (ROĈn(α) − ROC(α)), α ∈ [0, 1].

This (Gaussian) approximation plays a crucial role in understanding the asymptotic behavior of the empirical ROC curve and of its bootstrap counterpart. The following assumptions are required.

H1 The slope of the ROC curve is bounded: sup_{α∈[0,1]} g(H^{-1}(α))/h(H^{-1}(α)) < ∞.
H2 H is twice differentiable on [0, 1]. Furthermore, ∀α ∈ [0, 1], h(α) > 0 and there exists γ > 0 such that sup_{α∈[0,1]} α(1 − α) · d log(h ∘ H^{-1}(α))/dα ≤ γ < ∞.
Theorem 1 (Functional limit theorem) Suppose that H1-H2 are fulfilled. Then,

(i) the empirical ROC curve is strongly consistent:

sup_{α∈[0,1]} |ROĈn(α) − ROC(α)| → 0 a.s. as n → ∞;

(ii) there exists a sequence of pairs of independent Brownian bridges {(B1^(n)(α), B2^(n)(α))}_{α∈[0,1]} such that, almost surely and uniformly over [0, 1],

rn(α) = z^(n)(α) + o( (log log n)^{λ1(γ)} (log n)^{λ2(γ)} / √n ),   (1)

where

z^(n)(α) = (1 − p)^{-1/2} [ g(H^{-1}(1 − α)) / h(H^{-1}(1 − α)) ] B1^(n)(α) + p^{-1/2} B2^(n)(ROC(α))

and

λ1(γ) = 0, λ2(γ) = 1, if γ < 1;
λ1(γ) = 0, λ2(γ) = 2, if γ = 1;
λ1(γ) = γ, λ2(γ) = γ − 1 + ε, ε > 0, if γ > 1.

These results may be immediately derived from classical strong approximations for the empirical and quantile processes, see [5, 18]. Incidentally, we mention that the approximation rate is not always log²(n)/√n, contrarily to what is claimed in [18].

We point out that, owing to the presence of the term (g/h)(H^{-1}(1 − α)) in it, the Gaussian approximant can hardly be used for constructing ROC confidence bands. To avoid the explicit computation of density estimates, bootstrap confidence sets should certainly be preferred in practice.
3 Bootstrapping empirical ROC curves
Beyond consistency of the empirical curve in sup norm and the asymptotic normality of the fluctuation process, we now tackle the question of constructing confidence bands for the true ROC curve via the bootstrap approach introduced by [6], extending the pointwise results established in [17]. The latter suggests to consider, as an estimate of the law of the fluctuation process rn = {rn(α)}_{α∈[0,1]}, the conditional law given Dn of the bootstrapped fluctuation process

rn* = { √n (ROC*(α) − ROĈ(α)) }_{α∈[0,1]},   (2)

where ROC* is the ROC curve corresponding to a sample Dn* = {(Zi*, Yi*)}1≤i≤n of i.i.d. random pairs with a common distribution P̃n close to Pn. We shall also consider

dn* = √n dB(ROC*, ROĈ),   (3)

whose random fluctuations, given Dn, are expected to mimic those of dn = √n dB(ROĈ, ROC).

The difficulty is twofold. Firstly, the target of the bootstrap procedure is here a distribution on a path space, the ROC space being viewed as a subspace of D([0, 1]) equipped with either ||·||_∞ or else dB(·, ·). Secondly, both rn and dn are functionals of the quantile process {Ĥ^{-1}(α)}_{α∈[0,1]}. It is well-known that the naive bootstrap (i.e. resampling from the raw empirical distribution) generally provides bad approximations of the distribution of empirical quantiles in practice: the rate of convergence for a given quantile is indeed of order O_P(n^{-1/4}), see [7], whereas the rate of the Gaussian approximation is n^{-1/2}. As shall be seen below, the same phenomenon may naturally be observed for ROC curves. In a similar fashion to what is generally recommended for empirical quantiles, we suggest to implement a smoothed version of the bootstrap algorithm in order to improve the approximation rate of the distribution of ||rn||_∞, respectively of dn's distribution. In short, this boils down to resampling the data from a smoothed version of the empirical distribution Pn.
3.1 The Algorithm
Here we describe the algorithm for building a confidence band at level 1 − ε in the ROC space from sampling data Dn = {(Zi, Yi); 1 ≤ i ≤ n}. Set n+ = Σ_{1≤i≤n} I{Yi=1} = n − n−. It is performed in four steps as follows.

ALGORITHM - SMOOTHED ROC BOOTSTRAP

1. Based on Dn, compute the empirical class cdf estimates Ĝ and Ĥ, as well as their smoothed versions G̃ and H̃. Plot the ROC curve estimate:
ROĈ(α) = 1 − Ĝ ∘ Ĥ^{-1}(1 − α), α ∈ [0, 1].

2. From the smooth distribution estimate
P̃n(dz, y) = (n+/n) I{y=+1} G̃(dz) + (n−/n) I{y=−1} H̃(dz),
draw a bootstrap sample Dn* = {(Zi*, Yi*)}1≤i≤n conditioned on Dn.

3. Based on Dn*, compute the bootstrap versions G* and H* of the empirical class cdf estimates. Plot the bootstrap ROC curve
ROC*(α) = 1 − G* ∘ H*^{-1}(1 − α), α ∈ [0, 1].

4. Eventually, get the bootstrap confidence band at level 1 − ε, defined as the ball of center ROĈ and radius t_ε/√n in D([0, 1]), where t_ε is defined by P*(||rn*||_∞ ≤ t_ε) = 1 − ε in the case of the sup norm, or by P*(dn* ≤ t_ε) = 1 − ε when considering the dB distance, denoting by P*(·) the conditional probability given the original data Dn.
Before turning to the theoretical properties of this algorithm and related numerical experiments, a
few remarks are in order.
Remark 1 (Monte-Carlo approximation) From a computational angle, the true smoothed bootstrap distribution must in its turn be approximated, using a Monte-Carlo scheme. A convenient way of doing this in practice, while preserving the theoretical advantages of smoothing, consists of drawing B bootstrap samples of size n with replacement from the original data and then perturbing each drawn data point by an independent centered Gaussian random variable of variance h² (this procedure is equivalent to drawing bootstrap data from a smooth estimate P̃n(dz, dy) computed using a Gaussian kernel Kh(u) = (2πh²)^{-1/2} exp(−u²/(2h²))), see [22]. Regarding the choice of the number of bootstrap replications, picking B = n does not modify the rate of convergence. However, choosing B of magnitude comparable to n so that ε(1 + B) is an integer may be more appropriate: the relevant quantile of the approximate bootstrap distribution is then uniquely defined, and this does not modify the rate of convergence either, see [15].
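Combining the Algorithm with Remark 1, a minimal Monte-Carlo sketch of the smoothed bootstrap band (ours; it reuses the hypothetical empirical_roc helper sketched in Section 2.1, labels are assumed to be ±1, and all defaults are our choices) is:

```python
import numpy as np

def smoothed_bootstrap_radius(scores, labels, h, B=999, eps=0.05, grid=None):
    """Half-width of a (1 - eps) sup-norm band around the empirical ROC curve."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    n = len(scores)
    roc_hat = empirical_roc(scores, labels, grid)      # sketch from Section 2.1
    rng = np.random.default_rng(0)
    sup_dev = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)               # resample with replacement
        z = scores[idx] + h * rng.standard_normal(n)   # Gaussian jitter = smoothing
        roc_star = empirical_roc(z, labels[idx], grid)
        sup_dev[b] = np.sqrt(n) * np.max(np.abs(roc_star - roc_hat))
    return np.quantile(sup_dev, 1 - eps) / np.sqrt(n)  # band radius t_eps / sqrt(n)
```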
Remark 2 (On tuning parameters) The primary tuning parameters of the Algorithm are those related to the smoothing stage. When using a Gaussian regularizing kernel, one should typically choose a bandwidth hn of order n^{-1/5} in order to minimize the mean squared error.

Remark 3 (On recentering) From the asymptotic analysis viewpoint, it would be fairly equivalent to recenter by a smoothed version of the original empirical curve, 1 − G̃ ∘ H̃^{-1}(1 − ·), in the computation of the bootstrap fluctuation process. However, numerically speaking, computing the sup norm of the estimate (2) is much more tractable, insofar as it solely requires evaluating the distance between piecewise-constant curves over the pooled set of jump points. It should also be noticed that smoothing the original curve, as proposed in [17], should be avoided in practice, since it hides the jump locations, which constitute the essential part of the information.
3.2 Asymptotic analysis
We now investigate the accuracy of the bootstrap estimate output by the Algorithm. The result stated in the next theorem extends those established in [17] in the pointwise framework. The functional nature of the approximation result below is essential, since it should be emphasized that, in most ranking applications, assessing the uncertainty about the whole estimated ROC curve, or at least some part of it, is what really matters. In the sequel, we assume that the kernel K used in the smoothing step is "pyramidal" (e.g. Gaussian or of the form I{u∈[−1,+1]}).
Theorem 2 (Asymptotic accuracy) Suppose that the hypotheses of Theorem 1 are fulfilled. Assume further that the smoothed versions of the cdfs G̃ and H̃ are computed at step 1 using a scaled kernel K_hn(u), with hn → 0 as n → ∞ in such a way that n hn³ → ∞ and n hn⁵ log² n → 0. Then, the bootstrap distribution estimates output by the Algorithm are such that

sup_{t∈ℝ} |P*(||rn*||_∞ ≤ t) − P(||rn||_∞ ≤ t)|  and  sup_{t∈ℝ} |P*(dn* ≤ t) − P(dn ≤ t)|

are of order o_P( log(hn^{-1}) / √(n hn) ).
Hence, up to logarithmic factors, choosing hn ~ 1/(n^{1/5} log^{2+δ} n) with δ > 0 yields an approximation error of order n^{-2/5} for the bootstrap estimate. Although its rate is slower than that of the Gaussian approximation (1), the smoothed bootstrap method remains very appealing from a computational perspective, the construction of confidence bands from simulated Brownian bridges being very difficult to implement in practice. As shall be seen below, the rate reached by the smoothed bootstrap distribution is nevertheless a great improvement over the naive bootstrap approach (see the discussion below).
Remark 4 (Bootstrapping summary statistics) From Theorem 1 above, asymptotic validity of the smooth bootstrap method for estimating the distribution of the fluctuations of a functional φ(ROĈ) of the empirical ROC curve may be deduced, as soon as the function φ defined on D([0, 1]) is sufficiently smooth (namely, continuously Hadamard differentiable). For instance, it could be applied to summary statistics involving a specific piece of the ROC curve only, in order to focus on the "best instances" [3], or more classically to the area under the ROC curve (AUC). However, in the latter case, due to the fact that this particular summary statistic is of the form of a U-statistic [2], the naive bootstrap rate is faster than the one obtained here (of order n^{-1}).
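For the AUC case mentioned here, the U-statistic in question is just the Mann-Whitney statistic; a brief sketch (ours) is:

```python
import numpy as np

def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the U-statistic P_hat(Z_pos > Z_neg) + 0.5 * P_hat(ties)."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))
```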
3.3 Simulation results
The striking advantage of the smoothed bootstrap is the improved rate of convergence of the resulting estimator. Furthermore, choosing dB for measuring the magnitude of curve fluctuations has an even larger impact on the accuracy of the empirical bands. As an illustration of this theoretical result, we now display simulation results, emphasizing the gain obtained by smoothing and by considering the pseudo-metric dB.

We present confidence bands for a single trajectory and estimates of the coverage probability of the bands for a simple binormal model:

Yi = +1 if β0 + β1 X + ε > 0, and Yi = −1 otherwise,

where ε and X are independent standard normal r.v.'s. In this example, the scoring function s(x) is the maximum likelihood estimator of the probit model on the training set. We choose here β0 = β1 = 1, n = 1000, B = 999 and a targeted coverage probability of 0.95. Coverage probabilities are estimated over 2000 replications of the procedure, using the package ROCR of the statistical software R. As mentioned before, choosing ||·||_∞ yields very large bands with coverage probability close to 1! Though still large, bands based on the pseudo-metric dB are clearly much more informative (see Fig. 1). It should be noticed that the coverage improvement obtained by smoothing is clearer in the pointwise estimation setup (here α = 0.2) but much more difficult to evidence for confidence bands.
Table 1: Empirical coverage probabilities for 95% empirical bands/intervals.

Method                               Coverage (%)
Naive bootstrap, ||rn||_∞            100
Smoothed bootstrap, ||rn||_∞         100
Naive bootstrap, dn                  90.3
Smoothed bootstrap, dn               93.1
Naive bootstrap, rn(0.2)             89.7
Smoothed bootstrap, rn(0.2)          92.5
[Figure 1 panels (true positive rate vs. false positive rate): the ||·||_∞ confidence band, the dB confidence band, and a pointwise smoothed-bootstrap confidence interval.]
Figure 1: ROC confidence bands.
4 Discussion
Let us now give an insight into the reason why the smoothed bootstrap procedure outperforms the bootstrap without smoothing. In most statistical problems where the nonparametric bootstrap is useful, there is no particular reason for implementing it from a smoothed version of the empirical df rather than from the raw empirical distribution itself, see [22]. However, in the present case, smoothing affects the rate of convergence. Suppose indeed that the bootstrap process (2) is built by drawing from the raw cdfs Ĝ and Ĥ instead of their smoothed versions at step 2 of the Algorithm. Then, for any α ∈ (0, 1), sup_{t∈ℝ} |P*(rn*(α) ≤ t) − P(rn(α) ≤ t)| = O_P(n^{-1/4}). Hence, the naive bootstrap induces an error of order O(n^{-1/4}) which cannot be improved, whereas the rate n^{-2/5} is attained by the smoothed bootstrap (in a similar fashion to the functional setup), provided that the amount of smoothing is properly chosen. Heuristically, this is a consequence of the oscillation behavior of the deviation between the bootstrap quantile H*^{-1}(1 − α) and its expected value Ĥ^{-1}(1 − α) given the data Dn, due to the fact that the step cdf Ĥ is not regular around Ĥ^{-1}(1 − α): this corresponds to a jump with probability one.

Higher-order accuracy. A classical way of improving the pointwise approximation rate consists of bootstrapping a standardized version of the r.v. rn(α). It is natural to consider, as standardization factor, the square root of an estimate of the asymptotic variance:

σ²(α) = var(z^(n)(α)) = [α(1 − α)/(1 − p)] · [g(H^{-1}(1 − α))/h(H^{-1}(1 − α))]² + ROC(α)(1 − ROC(α))/p.   (4)
An estimate σ̂n² of plug-in type could be considered, obtained by plugging n+/n, ROĈ and the smoothed density estimators ĥ = H̃′ and ĝ = G̃′ into (4) instead of their (unknown) theoretical counterparts. More interestingly, from a computational viewpoint, a bootstrap estimator of the variance could also be used. Following the argument used in [17] for a smoothed original estimate of the ROC curve, one may show that a smoothed bootstrap of the studentized statistic rn(α)/σn(α) yields a better pointwise rate of convergence than 1/√n, the rate of the Gaussian approximation in the Central Limit Theorem. Precisely, for a given α ∈ (0, 1), if the bandwidth used in the computation of σn²(α) is chosen of order n^{-1/3}, we have:

sup_{t∈ℝ} | P*( rn*(α)/σn*(α) ≤ t ) − P( rn(α)/σn(α) ≤ t ) | = O_P( n^{-2/3} ),   (5)
denoting by σn*²(α) the bootstrap counterpart of σn²(α). Notice that the bandwidth used in the standardization step (i.e. for estimating the variance) is not the same as the one used at the resampling stage of the procedure. This is a key point for achieving second-order accuracy. This time, the smoothed (studentized) bootstrap method widely outperforms the Gaussian approach when the matter is to build confidence intervals for the ordinate ROĈ(α) of a point of abscissa α on the empirical ROC curve. However, it is not yet clear whether this result remains true for confidence bands, when considering the whole ROC curve (this would actually require establishing an Edgeworth expansion for the supremum ||rn/σ̂n||_∞). This will be the scope of further research.
References
[1] P. Bertail, S. Clémençon, and N. Vayatis. On constructing accurate confidence bands for ROC curves through smooth resampling. Technical report, 2008. http://hal.archives-ouvertes.fr/hal-00335232/fr/.
[2] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In Proceedings of COLT 2005 (P. Auer and R. Meir, eds.), LNAI 3559, Springer, 2005.
[3] S. Clémençon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 5:197-227, 2007.
[4] W. Cohen, R. Schapire, and Y. Singer. Learning to order things. Journal of Artificial Intelligence Research, 10:243-270, 1999.
[5] M. Csorgo and P. Revesz. Strong Approximations in Probability and Statistics. Academic Press, 1981.
[6] B. Efron. Bootstrap methods: another look at the jackknife. Annals of Statistics, 7:1-26, 1979.
[7] M. Falk and R. Reiss. Weak convergence of smoothed and nonsmoothed bootstrap quantile estimates. Annals of Probability, 17:362-371, 1989.
[8] T. Fawcett. ROC graphs: Notes and practical considerations for data mining researchers. Technical Report HPL-2003-4, 2003.
[9] P. Flach. The geometry of ROC space: understanding machine learning metrics through ROC isometrics. In T. Fawcett and N. Mishra, editors, Proceedings of the 20th International Conference on Machine Learning (ICML'03), AAAI Press, pages 194-201, 2003.
[10] Y. Freund, R. Iyer, R. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[11] P. Ghosal and J. Gu. Bayesian ROC curve estimation under binormality using a partial likelihood based on ranks. Submitted for publication, 2007.
[12] P. Ghosal and J. Gu. Strong approximations for resample quantile processes and application to ROC methodology. Submitted for publication, 2007.
[13] A. Girling. ROC confidence bands: An empirical evaluation. Journal of the Royal Statistical Society, Series B, 62:367-382, 2000.
[14] D. Green and J. Swets. Signal Detection Theory and Psychophysics. Wiley, NY, 1966.
[15] P. Hall. On the number of bootstrap simulations required to construct a confidence interval. Annals of Statistics, 14:1453-1462, 1986.
[16] P. Hall and R. Hyndman. Improved methods for bandwidth selection when estimating ROC curves. Statistics and Probability Letters, 64:181-189, 2003.
[17] P. Hall, R. Hyndman, and Y. Fan. Nonparametric confidence intervals for receiver operating characteristic curves. Biometrika, 91:743-750, 2004.
[18] F. Hsieh and B. Turnbull. Nonparametric and semi-parametric statistical estimation of the ROC curve. The Annals of Statistics, 24:25-40, 1996.
[19] S. Macskassy and F. Provost. Confidence bands for ROC curves: methods and an empirical study. In Proceedings of the First Workshop on ROC Analysis in AI (ROCAI-2004) at ECAI-2004, 2004.
[20] S. Macskassy, F. Provost, and S. Rosset. Bootstrapping the ROC curve: an empirical evaluation. In Proceedings of the ICML-2005 Workshop on ROC Analysis in Machine Learning (ROCML-2005), 2005.
[21] S. Macskassy, F. Provost, and S. Rosset. ROC confidence bands: An empirical evaluation. In Proceedings of the 22nd International Conference on Machine Learning (ICML-2005), 2005.
[22] B. Silverman and G. Young. The bootstrap: to smooth or not to smooth? Biometrika, 74:469-479, 1987.
| 3404 |@word h:2 illustrating:1 version:12 briefly:2 norm:6 flach:1 nd:1 relevancy:1 heuristically:1 simulation:5 hsieh:1 mention:1 series:1 score:3 selecting:1 denoting:2 document:2 bootstrapped:1 interestingly:1 outperforms:2 mishra:1 com:1 assigning:1 dx:2 must:1 yet:1 numerical:1 informative:2 shape:1 pertinent:1 treating:1 plot:2 resampling:6 discrimination:1 pursued:1 intelligence:1 short:1 provides:3 boosting:1 location:1 preference:1 firstly:1 dn:20 replication:2 consists:5 acquired:2 swets:1 expected:2 indeed:5 behavior:2 relying:1 equipped:4 considering:5 increasing:2 provided:1 estimating:3 notation:4 bounded:2 what:3 developed:1 ag:1 bootstrapping:5 pseudo:7 thorough:1 preferably:1 tackle:2 preferable:1 universit:1 exactly:1 scaled:1 biometrika:2 control:1 medical:1 omit:1 positive:9 before:2 modify:2 limit:2 consequence:1 analyzing:2 establishing:1 fluctuation:10 path:1 solely:1 lugosi:1 umr:2 twice:1 studied:1 suggests:1 challenging:1 cdfs:1 practical:3 testing:1 practice:5 implement:2 edgeworth:1 silverman:1 bootstrap:52 procedure:8 area:1 empirical:39 convenient:2 confidence:27 regular:1 suggest:1 get:1 cannot:1 close:3 selection:1 operator:1 risk:2 seminal:1 equivalent:1 dz:7 center:1 emenc:4 attention:1 straightforward:1 simplicity:1 immediately:1 estimator:4 insight:1 population:1 notion:1 limiting:1 resp:1 target:1 play:1 suppose:3 enhanced:1 anomaly:1 cmla:2 construction:1 annals:4 hypothesis:1 element:1 approximated:1 particularly:1 observed:3 role:1 connected:1 mentioned:2 segment:1 bipartite:4 f2:9 basis:1 gu:2 joint:1 various:1 describe:1 monte:1 query:1 artificial:1 choosing:4 h0:1 whose:1 widely:2 larger:1 say:1 drawing:3 otherwise:1 statistic:16 itself:1 patrice:1 advantage:3 differentiable:2 sequence:1 arlo:1 fr:5 relevant:2 hadamard:1 combining:1 kh:3 convergence:8 assessing:2 extending:1 produce:1 incidentally:1 clearer:1 measured:1 op:4 received:1 strong:5 implemented:1 coverage:8 come:1 radius:1 closely:2 owing:2 centered:1 viewing:1 implementing:1 bin:1 require:1 f1:9 really:1 preliminary:1 secondly:1 practically:1 sufficiently:1 considered:4 credit:1 normal:1 exp:1 great:1 hall:3 mapping:1 scope:1 around:1 major:2 resample:1 purpose:3 estimation:5 bn2:1 proc:1 khn:1 bridge:2 establishes:1 tool:2 minimization:1 clearly:3 gaussian:12 always:2 supt:1 rather:3 pn:4 avoid:1 publication:2 derived:1 focus:1 notational:1 improvement:2 rank:2 likelihood:2 properly:1 cnrs:2 entire:1 typically:1 lnai:1 issue:1 classification:2 among:1 colt:1 development:1 smoothing:14 fairly:1 psychophysics:1 construct:3 sampling:2 look:1 icml:3 mimic:1 report:2 piecewise:2 few:1 falk:1 geometry:2 consisting:1 replacement:1 n1:1 ltci:1 detection:2 practicioners:1 screening:1 interest:1 investigate:2 mining:1 evaluation:3 certainly:1 ouvertes:1 devoted:1 accurate:4 partial:2 clemencon:1 institut:1 theoretical:8 instance:5 measuring:2 turnbull:1 deviation:2 subset:1 motivating:1 considerably:1 rosset:2 thoroughly:2 st:1 referring:1 density:3 deduced:1 international:2 standing:1 probabilistic:1 sequel:1 receiving:1 picking:2 parzen:1 continuously:1 together:1 central:1 aaai:1 choose:2 hn:3 classically:1 approximant:1 b2:2 pooled:1 matter:2 ranking:8 ad:1 nhn:2 performed:1 view:1 h1:2 piece:1 root:1 doing:1 sup:13 reached:1 slope:1 contribution:1 ass:1 square:3 minimize:1 accuracy:8 variance:4 characteristic:2 yield:4 weak:1 raw:3 bayesian:1 accurately:1 carlo:1 trajectory:1 researcher:1 submitted:2 ed:1 intentionally:1 naturally:3 proof:1 boil:2 gain:2 recall:1 efron:1 organized:1 
actually:1 auer:1 appears:2 higher:2 attained:1 isometric:1 methodology:1 modal:1 improved:3 though:1 strongly:1 furthermore:2 stage:2 hpl:1 lent:1 horizontal:1 puted:1 hal:2 building:3 grounding:1 validity:1 concept:1 true:9 lgorithm:1 counterpart:6 hausdorff:2 hence:2 symmetric:1 uniquely:1 auc:1 universud:1 generalized:1 consideration:1 recently:2 common:1 functional:6 cohen:1 discussed:1 extend:1 numerically:1 refer:1 ai:1 tuning:2 consistency:2 operating:1 brownian:2 recent:1 hide:1 perspective:2 inf:1 driven:1 prime:1 claimed:1 certain:1 binary:3 yi:11 scoring:11 integrable:1 seen:3 promoted:1 surely:1 recommended:1 signal:2 ii:1 semi:1 smooth:9 technical:3 match:1 faster:1 plug:1 academic:1 retrieval:1 plugging:1 impact:2 prediction:1 involving:1 basic:1 regression:1 overage:1 hyndman:2 metric:12 df:1 studentized:2 kernel:5 tailored:1 fawcett:2 vayatis:5 whereas:4 background:1 addressed:1 interval:5 else:1 pyramidal:1 revitalized:1 crucial:2 rest:1 contrarily:1 archive:1 induced:1 db:11 thing:1 integer:1 presence:1 stephan:1 insofar:1 variety:1 affect:1 zi:5 bandwidth:5 regarding:2 ndb:2 whether:1 speaking:1 hardly:1 remark:6 adequate:1 constitute:1 generally:3 useful:1 clear:1 amount:2 transforms:1 nonparametric:3 band:28 extensively:1 induces:2 http:1 schapire:2 meir:1 exist:1 notice:1 fulfilled:2 estimated:1 rosenblatt:1 diagnosis:2 write:1 shall:7 macskassy:3 key:4 four:1 nevertheless:1 achieving:1 drawn:1 neither:1 graph:4 angle:3 inverse:1 powerful:1 uncertainty:1 striking:1 package:1 letter:1 throughout:3 reasonable:1 almost:1 oscillation:1 draw:1 cachan:2 dy:1 comparable:1 display:1 fan:1 bertail:2 g:2 adapted:1 precisely:2 software:1 argument:4 min:1 according:1 request:1 ball:1 appealing:1 b:1 dv:2 remains:2 turn:2 describing:1 eventually:2 needed:1 singer:2 jacknife:1 tractable:1 permit:1 observe:1 appropriate:2 simulating:1 slower:1 original:5 denotes:3 binomial:1 standardized:1 completed:1 log2:3 quantile:7 build:4 establish:1 classical:2 society:1 noticed:2 question:1 quantity:1 strategy:1 primary:1 parametric:1 skorohod:2 subspace:2 distance:5 simulated:1 capacity:1 revesz:1 reason:2 onte:1 pointwise:5 index:1 illustration:1 providing:1 setup:4 difficult:2 fe:1 stated:1 ethod:1 proper:1 unknown:1 vertical:1 observation:2 moothed:4 situation:1 rn:21 reproducing:1 smoothed:22 provost:3 ephan:1 ordinate:1 introduced:2 ghosal:2 namely:2 paris:1 required:2 pair:1 engine:1 recalled:1 established:3 beyond:2 below:5 challenge:1 built:1 royal:1 green:1 event:1 natural:2 rely:1 difficulty:1 predicting:1 indicator:2 turning:1 normality:1 scheme:1 improve:1 lately:1 naive:9 literature:1 understanding:2 determining:1 asymptotic:12 law:3 freund:1 probit:1 limitation:1 versus:1 ingredient:1 var:1 h2:3 degree:3 consistent:1 standardization:2 plotting:2 viewpoint:3 editor:1 classifying:1 pi:1 summary:3 soon:1 ecai:1 wide:2 taking:2 recentering:1 distributed:1 curve:54 evaluating:2 jump:6 avoided:2 functionals:1 crest:1 approximate:1 preferred:2 supremum:1 tsi:1 investigating:1 b1:1 receiver:1 xi:4 search:1 continuous:1 why:1 table:1 nature:2 nicolas:1 improving:1 expansion:1 cl:4 constructing:3 whole:2 n2:3 fig:1 telecom:3 en:5 roc:95 quantiles:2 fashion:4 ny:1 wiley:1 explicit:1 candidate:2 young:1 down:2 theorem:8 emphasizing:1 bad:1 specific:2 list:1 closeness:1 evidence:1 exists:1 stepwise:2 essential:2 false:4 workshop:2 importance:1 magnitude:2 iyer:1 conditioned:3 recenter:1 logarithmic:2 simply:1 likely:1 visual:2 contained:1 springer:1 corresponds:1 cdf:8 conditional:2 goal:2 
formulated:1 viewed:2 consequently:1 targeted:1 twofold:1 paristech:2 typical:1 uniformly:2 called:3 meaningful:1 indicating:1 latter:5 relevance:3 evaluate:1 reiss:1 regularizing:2 phenomenon:1 |
2,653 | 3,405 | Hierarchical Semi-Markov Conditional Random
Fields for Recursive Sequential Data
Tran The Truyen†, Dinh Q. Phung†, Hung H. Bui‡∗, and Svetha Venkatesh†
† Department of Computing, Curtin University of Technology
GPO Box U1987 Perth, WA 6845, Australia
[email protected]
{D.Phung,S.Venkatesh}@curtin.edu.au
‡ Artificial Intelligence Center, SRI International
333 Ravenswood Ave, Menlo Park, CA 94025, USA
[email protected]
Abstract
Inspired by the hierarchical hidden Markov models (HHMM), we present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of embedded undirected Markov chains to model complex hierarchical, nested Markov
processes. It is parameterised in a discriminative framework and has polynomial
time algorithms for learning and inference. Importantly, we develop efficient algorithms for learning and constrained inference in a partially-supervised setting,
which is an important issue in practice, where labels can only be obtained sparsely.
We demonstrate the HSCRF in two applications: (i) recognising human activities
of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase
chunking. We show that the HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.
1 Introduction
Modelling hierarchical aspects in complex stochastic processes is an important research issue in
many application domains ranging from computer vision, text information extraction, computational linguistics to bioinformatics. For example, in a syntactic parsing task known as noun-phrase
chunking, noun-phrases (NPs) and part-of-speech tags (POS) are two layers of semantics associated
with words in the sentence. Previous approach first tags the POS and then feeds these tags as input
to the chunker. The POS tagger takes no information of the NPs. This may not be optimal, as a
noun-phrase is often very informative to infer the POS tags belonging to the phrase. Thus, it is more
desirable to jointly model and infer both the NPs and the POS tags at the same time.
Many graphical models have been proposed to address this challenge, typically extending the flat
hidden Markov models (e.g., hierarchical HMM (HHMM) [2], DBN [6]). These models are, however, generative in that they are forced to consider the modelling of the joint distribution Pr(x, z) for
both the observation z and the label x. An attractive alternative is to model the distribution Pr(x|z)
directly, avoiding the modelling of z. This line of research has recently attracted much interest, and
one of the significant results was the introduction of the conditional random field (CRF) [4]. Work
in CRFs was originally limited to flat structures for efficient inference, and subsequently extended to
∗
Hung Bui is supported by the Defense Advanced Research Projects Agency (DARPA) under Contract
No. FA8750-07-D-0185/0004. Any opinions, findings and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of DARPA, or the Air Force Research
Laboratory (AFRL).
hierarchical structures, such as the dynamic CRFs (DCRF) [10], and hierarchical CRFs [5]. These
models assume predefined structures; therefore, they are not flexible enough to adapt to many real-world datasets. For example, in the noun-phrase chunking problem, no prior hierarchical structures are
known. Rather, if such a structure exists, it can only be discovered after the model has been successfully built and learned.
In addition, most discriminative structured models are trained in a completely supervised fashion
using fully labelled data, and limited research has been devoted to dealing with the partially labelled
data (e.g. [3, 12]). In several domains, it is possible to obtain some labels with minimal effort.
Such information can be used either for training or for decoding. We term the process of learning
with partial labels partial-supervision, and the process of inference with partial labels constrained
inference. Both processes require the construction of appropriate constrained inference algorithms.
We are motivated by the HHMM [2], a directed, generative model parameterised as a standard
Bayesian network. To address the above issues, we propose the Hierarchical Semi-Markov Conditional Random Field (HSCRF), which is a recursive, undirected graphical model that generalises the
undirected Markov chains and allows hierarchical decomposition. The HSCRF is parameterised as
a standard log-linear model, and thus can naturally incorporate discriminative modelling. For example, the noun-phrase chunking problem can be modeled as a two level HSCRF, where the top level
represents the NP process, the bottom level the POS process. The two processes are conditioned on
the sequence of words in the sentence. Each NP generally spans one or more words, each of which
has a POS tag. Rich contextual information such as starting and ending of the phrase, the phrase
length, and the distribution of words falling inside the phrase can be effectively encoded. At the
same time, like the HHMM, exact inference in the HSCRFs can be performed in polynomial time in
a manner similar to the Asymmetric Inside-Outside algorithm (AIO) [1].
We demonstrate the effectiveness of HSCRFs in two applications: (i) segmenting and labelling
activities of daily living (ADLs) in an indoor environment and (ii) jointly modelling noun-phrases
and part-of-speech tags in shallow parsing. Our experimental results in the first application show that
the HSCRFs are capable of learning rich, hierarchical activities with good accuracy and exhibit
better performance when compared to DCRFs and flat-CRFs. Results for the partially observable
case also demonstrate that significant reduction of training labels still results in models that perform
reasonably well. We also show that observing a small amount of labels can significantly increase
the accuracy during decoding. In noun-phrase chunking, the HSCRFs can achieve higher accuracy
than standard CRF-based techniques and the recent DCRFs. Our contributions from this paper are
thus: i) the introduction of the novel and Hierarchical Semi-Markov Conditional Random Field
to model nested Markovian processes in a discriminative framework, ii) the development of an
efficient generalised Asymmetric Inside-Outside (AIO) algorithm for partially supervised learning
and constrained inference, and iii) the applications of the proposed HSCRFs in human activities
recognition, and in shallow parsing of natural language.
Due to space constraints, in this paper we present only the main ideas and empirical evaluations. Complete details and extensions can be found in the technical report [11]. The next section introduces
necessary notations and provides a model description for the HSCRF, followed by the discussion
on learning and inference for the fully and partially observed data cases in sections 3 and 4, respectively. Applications for recognition of activities and natural language parsing are presented in section 5. Finally,
discussions on the implications of the HSCRF and conclusions are given in section 6.
2 Model Definition and Parameterisation

2.1 The Hierarchical Semi-Markov Conditional Random Fields
Consider a hierarchically nested Markov process with D levels where, by convention, the top level is the dummy root level that generates all subsequent Markov chains. Then, as in the generative process of the hierarchical HMMs [2], the parent state embeds a child Markov chain whose states may in turn contain grand-child Markov chains. The relation among these nested Markov chains is defined via the model topology, which is a state hierarchy of depth D. It specifies a set of states S^d at each level 1 ≤ d ≤ D, i.e., S^d = {1, ..., |S^d|}, where |S^d| is the number of states at level d. For each state s^d ∈ S^d with d ≠ D, the model also defines a set of children associated with it at the next level, ch(s^d) ⊆ S^{d+1}; conversely, each child s^{d+1} is associated with a set of parental states at the upper level, pa(s^{d+1}) ⊆ S^d. Unlike the original HHMMs proposed in [2], where a tree structure is explicitly enforced on the state hierarchy, the HSCRFs allow arbitrary sharing of children among parental states, as addressed in [1]. This topology generalization implies that fewer sub-states are required when D is large, and thus leads to fewer parameters and possibly less training data and time complexity [1].
To provide intuition, the temporal evolution can be informally described as follows. Start with the
root node at the top level; as soon as a new state is created at level d ≠ D, it initialises a child
state at level d+1. The initialisation continues downward until reaching the bottom level¹. The
child process at level d+1 then executes recursively until it terminates, and when it does, control
returns to its parent at the upper level d. At this point, the parent decides either to transit to a
new state at the same level or to return control to the grand-parent at the upper level d-1.

¹ In HHMMs, the bottom level is also called the production level, in which the states emit
observational symbols. In HSCRFs, this generative process is not assumed.
The key intuition for this hierarchical nesting process is that the lifespan of a child process is a
sub-segment of the lifespan of its parent. To be more precise, consider the case in which a parent
process s^d_{i:j} at level d starts a new state² at time i and persists until time j. At time i, the
parent initialises a child state s^{d+1}_i which continues until it ends at time k < j, at which
point the child transits to a new child state s^{d+1}_{k+1}. The child process exits at time j, at
which point control from the child level is returned to the parent s^d_{i:j}. Upon receiving
control, the parent state s^d_{i:j} may transit to a new parent state s^d_{j+1:l}, or end at j and
return control to the grand-parent at level d-1.

² Our notation s^d_{i:j} denotes the set of variables from time i to j at level d, i.e.,
s^d_{i:j} = {s^d_i, s^d_{i+1}, ..., s^d_j}.
Figure 1: Graphical presentation for HSCRFs (leftmost). Graph structures for state-persistence
(middle-top), initialisation and ending (middle-bottom), and state-transition (rightmost).
The HSCRF, which is a multi-level temporal graphical model of length T with D levels, can be
described formally as follows (Fig. 1). It starts from the root level, indexed as 1, runs for T time
slices, and at each time slice a hierarchy of D states is generated. At each level d and time index
i, there is a node representing a state variable x^d_i ∈ S^d = {1, 2, ..., |S^d|}. Associated with
each x^d_i is an ending indicator e^d_i, which can be either 1 or 0 to signify whether the state
x^d_i terminates or continues its execution to the next time slice. The nesting nature of the
HSCRFs is formally realised by imposing the following constraints on the value assignment of
ending indicators (a small validity-check sketch follows the list):

- The root state persists during the course of evolution, i.e., e^1_{1:T-1} = 0, e^1_T = 1, and all
  states end at the last time slice, i.e., e^{1:D}_T = 1.
- When a state finishes, all of its descendants must also finish, i.e., e^d_i = 1 implies
  e^{d+1:D}_i = 1; and when a state persists, all of its ancestors must also persist, i.e.,
  e^d_i = 0 implies e^{1:d-1}_i = 0.
- When a state transits, its parent must remain unchanged, i.e., e^d_i = 1, e^{d-1}_i = 0; and
  states at the bottom level terminate at every single slice, i.e., e^D_i = 1 for all i ∈ [1, T].
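To make these constraints concrete, here is a minimal validity check for an assignment of ending
indicators. It is only a sketch: the function name and the array layout (1-based indexing with
index 0 unused) are our own choices, not part of the model.

```python
def valid_ending_indicators(e, D, T):
    """Check the hierarchical constraints on ending indicators.

    e[d][i] is the indicator at level d = 1..D and time i = 1..T
    (index 0 unused); this layout is an illustrative assumption.
    """
    # Root persists throughout; everything ends at the last slice.
    if any(e[1][i] != 0 for i in range(1, T)):
        return False
    if any(e[d][T] != 1 for d in range(1, D + 1)):
        return False
    for i in range(1, T + 1):
        # Bottom-level states terminate at every slice.
        if e[D][i] != 1:
            return False
        for d in range(1, D + 1):
            # A finishing state forces all its descendants to finish;
            # equivalently, a persisting state keeps its ancestors alive.
            if e[d][i] == 1 and any(e[dd][i] == 0 for dd in range(d + 1, D + 1)):
                return False
    return True
```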
Thus, specific value assignments of ending indicators provide contexts that realise the evolution of
the model states in both hierarchical (vertical) and temporal (horizontal) directions. Each context
at a level, together with its associated state variables, forms a contextual clique, and here we
identify four contextual clique types (cf. Fig. 1):
- State-persistence: corresponds to the lifetime of a state at a given level. Specifically, given a
  context c = (e^d_{i-1:j} = (1, 0, ..., 0, 1)), then σ^{persist,d}_{i:j} = (x^d_{i:j}, c) is a
  contextual clique that specifies the life span [i, j] of any state s = x^d_{i:j}.
- State-transition: corresponds to a state at level d ∈ [2, D] at time i transiting to a new state.
  Specifically, given a context c = (e^{d-1}_i = 0, e^d_i = 1), then σ^{transit,d}_i =
  (x^{d-1}_{i+1}, x^d_{i:i+1}, c) is a contextual clique that specifies the transition of x^d_i to
  x^d_{i+1} at time i under the same parent x^{d-1}_{i+1}.
- State-initialisation: corresponds to a state at level d ∈ [1, D-1] initialising a new child state
  at level d+1 at time i. Specifically, given a context c = (e^d_{i-1} = 1), then σ^{init,d}_i =
  (x^d_i, x^{d+1}_i, c) is a contextual clique that specifies the initialisation at time i from the
  parent x^d_i to the first child x^{d+1}_i.
- State-exiting: corresponds to a state at level d ∈ [1, D-1] ending at time i. Specifically, given
  a context c = (e^d_i = 1), then σ^{exit,d}_i = (x^d_i, x^{d+1}_i, c) is a contextual clique that
  specifies the ending of x^d_i at time i with the last child x^{d+1}_i.
In the HSCRF, we are interested in the conditional setting, in which the entire state and ending
variables (x^{1:D}_{1:T}, e^{1:D}_{1:T}) are conditioned on an observational sequence z. For
example, in computational linguistics, the observation is often the sequence of words, and the state
variables might be the part-of-speech tags and the phrases.

To capture the correlation between variables and such conditioning, we define a non-negative
potential function φ(σ, z) over each contextual clique σ. Figure 2 shows the notation for the
potentials that correspond to the four contextual clique types identified above. Details of the
potential specification are given in Section 2.2.
State persistence potential:    R^{d,s,z}_{i:j} = φ(σ^{persist,d}_{i:j}, z) where s = x^d_{i:j}.
State transition potential:     A^{d,s,z}_{u,v,i} = φ(σ^{transit,d}_i, z) where s = x^{d-1}_{i+1} and u = x^d_i, v = x^d_{i+1}.
State initialisation potential: π^{d,s,z}_{u,i} = φ(σ^{init,d}_i, z) where s = x^d_i, u = x^{d+1}_i.
State ending potential:         E^{d,s,z}_{u,i} = φ(σ^{exit,d}_i, z) where s = x^d_i, u = x^{d+1}_i.

Figure 2: Shorthands for contextual clique potentials.
Let V = (x^{1:D}_{1:T}, e^{1:D}_{1:T}) denote the set of all variables and let τ^d = {i_k}^m_{k=1}
denote the set of all time indices where e^d_{i_k} = 1. A configuration ζ of the model is a complete
assignment of all the states and ending indicators which satisfies the set of hierarchical
constraints described earlier in this section. The joint potential defined for each configuration is
the product of all contextual clique potentials over all ending time indices i ∈ [1, T] and all
semantic levels d ∈ [1, D]:

Φ(ζ, z) = ∏_d [ ∏_{(i_k, i_{k+1}) ∈ τ^d} R^{d,s,z}_{i_k+1 : i_{k+1}} ]
              [ ∏_{i_k ∈ τ^{d-1}, i_k ∉ τ^d} A^{d,s,z}_{u,v,i_k} ]
              [ ∏_{i_k ∈ τ^d} π^{d,s,z}_{u,i_k+1} ]
              [ ∏_{i_k ∈ τ^d} E^{d,s,z}_{u,i_k} ]

The conditional distribution is given as

Pr(ζ | z) = (1 / Z(z)) Φ(ζ, z)                                                    (1)

where Z(z) = Σ_ζ Φ(ζ, z) is the partition function for normalisation.

2.2 Log-linear Parameterisation
In our HSCRF setting, there is a feature vector f^d_σ(σ^d, z) associated with each type of
contextual clique σ^d, in that φ(σ^d, z) = exp(λ^d · f^d_σ(σ^d, z)), where a · b denotes the inner
product of two vectors a and b. Thus, the features are active only in the context in which the
corresponding contextual cliques appear. For the state-persistence contextual clique, the features
incorporate the state duration, the start time i and the end time j of the state. Other feature
types incorporate the time index at which the features are triggered. In what follows, we omit z for
clarity, and implicitly use it as part of the partition function Z and the potential φ(·).
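As an illustration of this parameterisation, the sketch below evaluates a clique potential as the
exponential of the inner product λ · f; the sparse dictionary encoding of the two vectors is an
assumption made for readability, not part of the model.

```python
import math

def clique_potential(weights, features):
    """phi(sigma, z) = exp(lambda . f(sigma, z)) for one contextual clique.

    Both vectors are sparse {feature_name: value} dictionaries;
    absent keys contribute zero to the inner product.
    """
    score = sum(w * features.get(name, 0.0) for name, w in weights.items())
    return math.exp(score)
```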
3 Unconstrained Inference and Fully Supervised Learning
Typical inference tasks in the HSCRF include computing the partition function, the MAP assignment,
and feature expectations. The key insight is context-specific independence, which is due to the
hierarchical constraints described in Section 2.1. Let us call the set of variable assignments
Π^{d,s}_{i:j} = (x^d_{i:j} = s, e^d_{i-1} = 1, e^d_j = 1, e^{d:D}_{i:j-1} = 0) the symmetric Markov
blanket. Given Π^{d,s}_{i:j}, the set of variables inside the blanket is independent of those
outside it. A similar relation holds with respect to the asymmetric Markov blanket, which includes
the set of variable assignments Γ^{d,s}_{i:j}(u) = (x^d_{i:j} = s, x^{d+1}_j = u, e^d_{i-1} = 1,
e^{d+1:D}_j = 1, e^{d:D}_{i:j-1} = 0). Figure 3 depicts an asymmetric Markov blanket (the covering
arrowed line) containing a smaller asymmetric blanket (the left arrowed line) and a symmetric
blanket (the double-arrowed line).

Figure 3: Decomposition with respect to symmetric/asymmetric Markov blankets (levels d and d+1).

Denote by Δ^{d,s}_{i:j} the sum of products of all clique potentials falling inside the symmetric
Markov blanket Π^{d,s}_{i:j}, taken over all possible value assignments of the set of variables
inside Π^{d,s}_{i:j}. In the same manner, let Λ^{d,s}_{i:j}(u) be the sum of products of all clique
potentials falling inside the asymmetric Markov blanket Γ^{d,s}_{i:j}(u). Let Δ̂^{d,s}_{i:j} be a
shorthand for Δ^{d,s}_{i:j} R^{d,s}_{i:j}. Using the context-specific independence described above
and the decomposition depicted in Figure 3, the following recursions arise:

Δ^{d,s}_{i:j} = Σ_{u ∈ S^{d+1}} Λ^{d,s}_{i:j}(u) E^{d,s}_{u,j}

Λ^{d,s}_{i:j}(u) = Σ_{k=i+1}^{j} Σ_{v ∈ S^{d+1}} Λ^{d,s}_{i:k-1}(v) Δ̂^{d+1,u}_{k:j} A^{d+1,s}_{v,u,k-1}
                   + π^{d,s}_{u,i} Δ̂^{d+1,u}_{i:j}                                (2)

As the symmetric Markov blanket Π^{1,s}_{1:T} and the set x^1_{1:T} = s cover every state variable,
the partition function can be computed as Z = Σ_{s ∈ S^1} Δ̂^{1,s}_{1:T}.
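The recursions in (2) translate almost directly into a memoised program. The sketch below is
schematic: the potential lookups R, A, π (here `Pi`) and E are supplied as callables, and the base
case at the bottom level d = D (where every state lasts exactly one slice, since e^D_i = 1 for all
i) is our reading of the constraints in Section 2.1 rather than something spelled out above.

```python
from functools import lru_cache

def partition_function(D, T, S, R, A, Pi, E):
    """Memoised inside masses for the HSCRF recursions in Eq. (2).

    S[d] lists the states at level d; R, A, Pi, E are callables returning
    the clique potentials. A sketch only: the argument conventions and
    the bottom-level base case are our assumptions.
    """
    @lru_cache(maxsize=None)
    def delta(d, s, i, j):                     # Delta^{d,s}_{i:j}
        if d == D:                             # bottom states last one slice
            return 1.0 if i == j else 0.0
        return sum(lam(d, s, i, j, u) * E(d, s, u, j) for u in S[d + 1])

    @lru_cache(maxsize=None)
    def delta_hat(d, s, i, j):                 # Delta_hat = Delta * R
        return delta(d, s, i, j) * R(d, s, i, j)

    @lru_cache(maxsize=None)
    def lam(d, s, i, j, u):                    # Lambda^{d,s}_{i:j}(u)
        total = Pi(d, s, u, i) * delta_hat(d + 1, u, i, j)
        for k in range(i + 1, j + 1):
            for v in S[d + 1]:
                total += (lam(d, s, i, k - 1, v) * delta_hat(d + 1, u, k, j)
                          * A(d + 1, s, v, u, k - 1))
        return total

    # Z = sum over root states of Delta_hat^{1,s}_{1:T}.
    return sum(delta_hat(1, s, 1, T) for s in S[1])
```

Replacing the sums by maximisations yields the max-product variant used for MAP decoding, as noted
next.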
MAP assignment is essentially the max-product problem, which can be solved by turning all summations in (2) into corresponding maximisations.
Parameter estimation in HSCRFs, as in other log-linear models, requires the computation of feature
expectations as part of the log-likelihood gradient (e.g., see [4]). The gradient is then fed into
any black-box standard numerical optimisation algorithm. As the feature expectations are rather
involved, we omit the details here. Rather, we include as an example the expectation of the
state-persistence features:

Σ_{i ∈ [1,T]} Σ_{j ∈ [i,T]} E[ f^{d,s}_{persist}(i, j) δ(Π^{d,s}_{i:j} ∈ ζ) ]
    = (1/Z) Σ_{i ∈ [1,T]} Σ_{j ∈ [i,T]} Δ^{d,s}_{i:j} Ω^{d,s}_{i:j} R^{d,s}_{i:j} f^{d,s}_{persist}(i, j)

where f^{d,s}_{persist}(i, j) is the state-persistence feature vector for the state s = x^d_{i:j}
starting at i and ending at j; Ω^{d,s}_{i:j} is the sum of products of all clique potentials falling
outside the symmetric Markov blanket Π^{d,s}_{i:j}; and δ(Π^{d,s}_{i:j} ∈ ζ) is the indicator that
the Markov blanket Π^{d,s}_{i:j} is part of the random configuration ζ.
4 Constrained Inference and Partially Supervised Learning
It may happen that the training data is not completely labelled, possibly due to a lack of labelling
resources [12]. In this case, the learning algorithm should be robust enough to handle missing
labels. On the other hand, during inference we may obtain partial, high-quality labels from external
sources [3]. This requires the inference algorithm to be responsive to the available labels, which
may help to improve performance.

In general, when we make observations, we observe some states and some ending indicators. Let
Ṽ = {x̃, ẽ} be the sets of observed state and ending variables respectively. The procedures that
compute the auxiliary variables such as Δ^{d,s}_{i:j} and Λ^{d,s}_{i:j}(u) must be modified to
address the constraints arising from these observations. For example, computing Δ^{d,s}_{i:j}
assumes Π^{d,s}_{i:j}, which implies the constraint that the state s at level d starts at i and
persists until terminating at j. If any observations are made that invalidate this constraint (e.g.,
there is an x̃^d_k ≠ s for some k ∈ [i, j]), then Δ^{d,s}_{i:j} will be zero. Therefore, in general,
the computation of each auxiliary variable is multiplied by an identity function that enforces
consistency between the observations and the constraints required by that variable.
As an example, consider the computation of Δ^{d,s}_{i:j}. The sum Δ^{d,s}_{i:j} is only consistent
if all of the following conditions are satisfied: (a) if there are observed states at level d within
the interval [i, j], they must equal s; (b) if the ending indicator ẽ^d_{i-1} is observed, then
ẽ^d_{i-1} = 1; (c) if the ending indicator ẽ^d_k is observed for some k ∈ [i, j-1], then
ẽ^d_k = 0; and (d) if the ending indicator ẽ^d_j is observed, then ẽ^d_j = 1. These conditions are
captured by the following identity function:

I[Π^{d,s}_{i:j}] = δ(x̃^d_{k ∈ [i,j]} = s) δ(ẽ^d_{i-1} = 1) δ(ẽ^d_{k ∈ [i:j-1]} = 0) δ(ẽ^d_j = 1)   (3)

When observations are made, the first equation in (2) is thus replaced by

Δ^{d,s}_{i:j} = I[Π^{d,s}_{i:j}] Σ_{u ∈ S^{d+1}} Λ^{d,s}_{i:j}(u) E^{d,s}_{u,j}                      (4)
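The identity function in (3) is simply a predicate over the partial observations. In the sketch
below, `None` marks an unobserved entry; this encoding, and the convention that index 0 of the
ending indicators is observed as 1, are our own.

```python
def consistency_indicator(x_obs, e_obs, d, s, i, j):
    """I[Pi^{d,s}_{i:j}] from Eq. (3): 1 iff the partial observations
    are compatible with state s occupying [i, j] at level d.

    x_obs[d][k] / e_obs[d][k] hold the observed state / ending indicator,
    or None when unobserved; e_obs[d][0] is taken to be 1 by convention.
    """
    if any(x_obs[d][k] is not None and x_obs[d][k] != s
           for k in range(i, j + 1)):
        return 0                                   # condition (a)
    if e_obs[d][i - 1] is not None and e_obs[d][i - 1] != 1:
        return 0                                   # condition (b)
    if any(e_obs[d][k] is not None and e_obs[d][k] != 0
           for k in range(i, j)):
        return 0                                   # condition (c)
    if e_obs[d][j] is not None and e_obs[d][j] != 1:
        return 0                                   # condition (d)
    return 1
```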
5 Applications

We describe two applications of the proposed hierarchical semi-Markov CRFs in this section:
activity recognition in Section 5.1 and shallow parsing in Section 5.2.
5.1 Recognising Indoor Activities
In this experiment, we evaluate the HSCRFs on a relatively small dataset from the domain of indoor
video surveillance. The task is to recognise trajectories and activities that a person performs in a
kitchen, from noisy locations extracted from video. The data, originally described in [7], has 45
training and 45 test sequences, each of which corresponds to one of 3 persistent activities:
(1) preparing short meal, (2) having snack and (3) preparing normal meal. The persistent activities
share some of the 12 sub-trajectories. Each sub-trajectory is a sub-sequence of discrete locations.
Thus the data naturally has a state hierarchy of depth 3: the dummy root for each location sequence,
the persistent activities, and the sub-trajectories. The input observations to the model are simply
sequences of discrete locations.
At each level d and time t we count an error if the predicted state is not the same as the ground-truth.
First, we examine the fully observed case where the HSCRF is compared against the DCRF [10] at
both data levels, and against the Sequential CRF (SCRF) [4] at the bottom level. Table 1 (the left
half) shows that (a) both the multilevel models significantly outperform the flat model and (b) the
HSCRF outperforms the DCRF.
Alg.     d=2    d=3        Alg.        d=2    d=3
HSCRF    100    93.9       PO-HSCRF    80.2   90.4
DCRF     96.5   89.7       PO-SCRF     -      83.5
SCRF     -      82.6

Table 1: Accuracy (%) for fully observed data (left), and partially observed (PO) data (right).
Next, we consider partially-supervised learning in which about 50% of start/end times of a state and
state labels are observed at the second level. All ending indicators are known at the bottom level.
The results are reported in Table 1 (the right half). As can be seen, although only 50% of the state
labels and state start/end times are observed, the model learned is still performing well with accuracy
of 80.2% and 90.4% at levels 2 and 3, respectively.
We next consider the issue of partially observing labels during decoding and test the effect using
degraded learned models. Such degraded models (emulating noisy training data or lack of training
time) are extracted from the 10th iteration of the fully observed data case. The labels are provided
at random times. Figure 4a shows the decoding accuracy as a function of available state labels. It is
interesting to observe that a moderate amount of observed labels (e.g., 20-40%) causes the accuracy
rate to go up considerably.
5.2 POS Tagging and Noun-Phrase Chunking
In this experiment, we apply the HSCRF to the task of noun-phrase chunking. The data is from the
CoNLL-2000 shared task³, in which 8926 English sentences from the Wall Street Journal corpus are
used for training and 2012 sentences for testing. Each word in a pre-processed sentence is labelled
by two labels: the part-of-speech (POS) and the noun-phrase (NP). There are 48 POS labels and 3 NP
labels (B-NP for the beginning of a noun-phrase, I-NP for inside a noun-phrase, and O for others).
Each noun-phrase generally spans more than one word. To reduce the computational burden, we reduce
the POS tag set to 5 groups: noun, verb, adjective, adverb and others. Since in our HSCRFs we do
not have to explicitly indicate which node is the beginning of a segment, the NP label set can be
reduced further to NP for noun-phrase and O for anything else.
Figure 4: (a) Decoding accuracy (average F1-score, %) of indoor activities and sub-trajectories as
a function of the portion of available label/start/end-time information. (b) F1-scores of SCRF,
Semi-CRF, DCRF, HSCRF, DCRF+POS and HSCRF+POS on CoNLL-2000 noun-phrase chunking as a function of
the number of training sentences. HSCRF+POS and DCRF+POS mean HSCRF and DCRF with POS given at
test time, respectively.
We build an HSCRF topology of 3 levels, where the root is just a dummy node, the second level has
2 NP states, and the bottom level has 5 POS states. For comparison, we implement a DCRF, a SCRF,
and a semi-Markov CRF (Semi-CRF) [8]. The DCRF has a grid structure of depth 2, one chain
modelling the NP process and the other the POS process. Since the state spaces are relatively
small, we are able to run exact inference in the DCRF by collapsing both the NP and POS state
spaces to a combined state space of size 3 × 5 = 15. The SCRF and Semi-CRF model only the NP
process, taking the POS tags and words as input.

We extract raw features from the text in a way similar to that in [10]. The features for the SCRF
and the Semi-CRF also include the POS tags. Words with fewer than 3 occurrences are not used. This
reduces the vocabulary and the feature size significantly. We also make use of bi-grams with
similar selection criteria. Furthermore, we use a contextual window of 5 instead of 7 as in [10].
This setting gives rise to about 32K raw features. The model feature is factorised as
f(x_c, z) = I(x_c) g_c(z), where I(x_c) is a binary function on the assignment of the clique
variables x_c, and g_c(z) are the raw features.
Although both the HSCRF and the Semi-CRF are capable of modelling arbitrary segment durations, we
use a simple exponential distribution (i.e., weighted features activated at each time step are
added up) since it can be processed sequentially and thus is very efficient. For learning, we use a
simple online stochastic gradient ascent method. At test time, since the SCRF and the Semi-CRF are
able to use the POS tags as input, it is not fair for the DCRF and HSCRF to have to predict those
labels during inference. Instead, we also give the POS tags to the DCRF and HSCRF and perform
constrained inference to predict only the NP labels. This boosts the performance of the two
multi-level models significantly.

³ http://www.cnts.ua.ac.be/conll2000/chunking/
Let us look at the difference between the flat setting of the SCRF and Semi-CRF and the multi-level
setting of the DCRF and HSCRF. Let x = (x^{np}, x^{pos}). Essentially, the multi-level models
capture the distribution Pr(x|z) = Pr(x^{np}|x^{pos}, z) Pr(x^{pos}|z), while the flat models
ignore the factor Pr(x^{pos}|z). At test time in the multi-level models, we predict only x^{np} by
finding the maximiser of Pr(x^{np}|x^{pos}, z). The factor Pr(x^{pos}|z) may seem wasted because we
do not use it at test time. However, Pr(x^{pos}|z) does give extra information about the joint
distribution Pr(x|z); that is, modelling the POS process may help to obtain a smoother estimate of
the NP distribution.
The performance of these models is depicted in Figure 4b; we are interested only in the prediction
of the noun-phrases since this data has POS tags. Without POS tags given at test time, both the
HSCRF and the DCRF perform worse than the SCRF. This is not surprising because the POS tags are
always given in the case of the SCRF. However, with POS tags, the HSCRF consistently works better
than all other models.
6 Discussion and Conclusions
The HSCRFs presented here are not a standard graphical model since the clique structures are not
predefined: the potentials are defined on-the-fly depending on the assignments of the ending
indicators. Although the model topology is identical to that of shared-structure HHMMs [1], the
unrolled temporal representation is an undirected graph, and the model distribution is formulated
in a discriminative way. Furthermore, the state-persistence potentials capture duration information
that is not available in the DBN representation of the HHMMs in [6]. The segmental nature of the
HSCRF thus incorporates the recent semi-Markov CRF [8] as a special case [11].

Our HSCRF is related to the conditional probabilistic context-free grammar (C-PCFG) [9] in the
same way that the HHMM is related to the PCFG. However, the context-free grammar does not limit
the depth of the semantic hierarchy, making it unnecessarily difficult to map many hierarchical
problems into its form. Secondly, it lacks a graphical model representation, and thus does not
enjoy the rich set of approximate inference techniques available for graphical models.
References

[1] H. H. Bui, D. Q. Phung, and S. Venkatesh. Hierarchical hidden Markov models with general state
hierarchy. In AAAI, pages 324-329, San Jose, CA, Jul 2004.
[2] S. Fine, Y. Singer, and N. Tishby. The hierarchical hidden Markov model: Analysis and
applications. Machine Learning, 32(1):41-62, 1998.
[3] T. Kristjannson, A. Culotta, P. Viola, and A. McCallum. Interactive information extraction
with constrained conditional random fields. In AAAI, pages 412-418, San Jose, CA, 2004.
[4] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In ICML, pages 282-289, 2001.
[5] L. Liao, D. Fox, and H. Kautz. Hierarchical conditional random fields for GPS-based activity
recognition. In Proceedings of the International Symposium of Robotics Research (ISRR). Springer
Verlag, 2005.
[6] K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis,
Computer Science Division, University of California, Berkeley, Jul 2002.
[7] N. Nguyen, D. Phung, S. Venkatesh, and H. H. Bui. Learning and detecting activities from
movement trajectories using the hierarchical hidden Markov models. In CVPR, volume 2, pages
955-960, Jun 2005.
[8] S. Sarawagi and W. W. Cohen. Semi-Markov conditional random fields for information extraction.
In NIPS, 2004.
[9] C. Sutton. Conditional probabilistic context-free grammars. Master's thesis, University of
Massachusetts, 2004.
[10] C. Sutton, A. McCallum, and K. Rohanimanesh. Dynamic conditional random fields: Factorized
probabilistic models for labeling and segmenting sequence data. JMLR, 8:693-723, Mar 2007.
[11] T. T. Truyen, D. Q. Phung, H. H. Bui, and S. Venkatesh. Hierarchical semi-Markov conditional
random fields for recursive sequential data. Technical report, Curtin University of Technology,
http://www.computing.edu.au/~trantt2/pubs/hcrf.pdf, 2008.
[12] J. Verbeek and B. Triggs. Scene segmentation with CRFs learned from partially labeled images.
In Advances in Neural Information Processing Systems 20, pages 1553-1560. MIT Press, 2008.
Efficient Inference in Phylogenetic InDel Trees
Alexandre Bouchard-Côté*    Michael I. Jordan*†    Dan Klein*
*Computer Science Division, †Department of Statistics
University of California at Berkeley
Berkeley, CA 94720
{bouchard,jordan,klein}@cs.berkeley.edu
Abstract

Accurate and efficient inference in evolutionary trees is a central problem in computational
biology. While classical treatments have made unrealistic site independence assumptions, ignoring
insertions and deletions, realistic approaches require tracking insertions and deletions along the
phylogenetic tree, a challenging and unsolved computational problem. We propose a new ancestry
resampling procedure for inference in evolutionary trees. We evaluate our method in two problem
domains, multiple sequence alignment and reconstruction of ancestral sequences, and show
substantial improvement over the current state of the art.
1 Introduction
Phylogenetic analysis plays a significant role in modern biological applications such as ancestral
sequence reconstruction and multiple sequence alignment [1, 2, 3]. While insertions and deletions
(InDels) of nucleotides or amino acids are an important aspect of phylogenetic inference, they pose
formidable computational challenges and they are usually handled with heuristics [4, 5, 6]. Routine
application of approximate inference techniques fails because of the intricate nature of the combinatorial space underlying InDel models.
Concretely, the models considered in the phylogenetic literature take the form of a tree-shaped
graphical model where nodes are string-valued random variables representing a fragment of DNA,
RNA or protein of a species. Edges denote evolution from one species to another, with conditional
probabilities derived from the stochastic model described in Sec. 2. Usually, only the terminal nodes
are observed, while the internal nodes are hidden. The interpretation is that the sequence at the root
is the common ancestor of those at the terminal nodes, and it subsequently evolved in a branching
process following the topology of the tree. We will concentrate on the problem of computing the
posterior of these hidden nodes rather than the problem of selecting the topology of the tree;
hence we will assume the tree is known or estimated with some other algorithm (a guide tree
assumption). This graphical model can be misleading. It encodes only one type of independence
relation, that between generations. There is another important structure that can be exploited.
Informally, InDel
events that operate at the beginning of the sequences should not affect, for instance, those at the
end. However, because alignments between the sequences are unknown in practice, it is difficult to
exploit this structure in a principled way.
In many previous works [4, 5, 6], the following heuristic approach is taken to perform inference on
the hidden nodes (refer to Fig. 1): First, a guide tree (d) and a multiple sequence alignment (a) (a
transitive alignment between the characters in the sequences of the modern species) are computed
using heuristics [7, 8]. Second, the problem is cast into several easy subproblems as follows. For
each equivalence class in the multiple sequence alignment (called a site, corresponding to a column
in Fig. 1(b)), a new graphical model is created with the same tree structure as the original problem,
but where there is exactly one character in each node rather than a string. For nodes with a character
in the current equivalence class, the node in this new tree is observed, and the rest of the nodes
are considered as unobserved data (Fig. 1(c)). Note that the question marks are not the gaps
commonly seen in linearized representations of multiple alignments, but rather phantom characters.
Finally, each site is assumed independent of the others, so the subproblems can be solved
efficiently by running the forward-backward algorithm on each site.

Figure 1: Comparison of different approaches to phylogenetic modeling: (a,b,c,d) heuristics based
on site independence; (e) Single Sequence Resampling; (f) Ancestry Resampling. The boxes denote
the structures that can be sampled or integrated out in one step by each method.
This heuristic has several problems, the most important being that it does not allow explicit modeling of insertions and deletions (InDel), which are frequent in real biological data and play an
important role in evolution [9]. If InDels are included in the probabilistic model, there is no longer
a deterministic notion of site on which independence assumptions can be made. This complicates
inference substantially. For instance, in the standard TKF91 model [10], the fastest known algorithm
for computing exact posteriors takes time O(2^F N^F), where F is the number of leaves and N is the
geometric mean sequence length [11].
Holmes et al. [2] developed an approximate Markov chain Monte Carlo (MCMC) inference procedure for the TKF91 model. Their algorithm proceeds by sampling the entire sequence corresponding
to a single species conditioning on its parent and children (Fig. 1(e)). We will call this type of kernel
a Single Sequence Resampling (SSR) move. Unfortunately, chains based exclusively on SSR have
performance problems.
There are two factors behind these problems. The first factor is a random walk behavior that arises
in tall chains found in large or unbalanced trees [2, 12]: initially, the InDel events resampled at
the top of the tree are independent of all the observations. It takes time for the information from
the observations to propagate up the tree. The second factor is the computational cost of each SSR
move, which is O(N^3) with the TKF91 model and binary trees. For long sequences, this becomes
prohibitive, so it is common to use a "maximum deviation" pruning strategy (i.e., putting a bound
on the relative positions of characters that mutate from one to the other) to speed things up [12]. We
observed that this pruning can substantially hurt the quality of the estimated posterior (see Sec. 4).
In this paper, we present a novel MCMC procedure for phylogenetic InDel models that we refer to as
Ancestry Resampling (AR). AR addresses both of the efficiency and accuracy problems that arise for
SSR. The intuition behind the AR approach is to use an MCMC kernel that combines the advantages
of the two approaches described above: like the forward-backward algorithm in the site-independent
case, AR always directly conditions on some part of the observed data, but, like SSR, it is capable
of resampling the InDel history. This is illustrated in Fig. 1(f).
2 Model

For concreteness, we describe the algorithms in the context of the standard TKF91 model [10], but
in Sec. 5 we discuss how the ideas extend to other models. We assume that a phylogenetic directed
tree topology τ = (V, E) is fixed, where nodes in this tree are string-valued random variables,
from an alphabet of K characters (K is four for nucleotide sequences and about twenty for
amino-acid sequences). Also known is a positive time length t_e associated with each edge e ∈ E.
We start the description of the model in the simple case of a single branch of known length t, with
a string x at the root and a string y at the leaf. The model, TKF91, is a string-valued
continuous-time Markov chain (CTMC). There is one rate μ for deletion (death in the original TKF
terminology) and one rate λ for insertions, which can occur either to the right of one of the
existing characters (birth), or to the left of the sequence (immigration). Additionally, there is
an independent CTMC substitution process on each character.

Fortunately, the TKF91 model has a closed form for the conditional distribution over strings y at
the leaf given the string x at the root. The derivation of this conditional distribution is
presented in [10] and its form is:

P(a character in x survived and has n descendants in y)    = α β^{n-1} (1 - β)          for n = 1, 2, ...
P(a character in x died and has n descendants in y)        = (1 - α)(1 - γ)             for n = 0
                                                           = (1 - α) γ β^{n-1} (1 - β)  for n = 1, 2, ...
P(immigrants inserted at the left have n descendants in y) = β^n (1 - β)                for n = 0, 1, ...

In defining descendants, we count the character itself, its children, grandchildren, etc. Here
α, β, γ are functions of t, λ, μ; see [2] for the details. Since we only work with these
conditionals, note that the situation resembles that of a standard weighted edit process with a
specific, branch-length dependent structure over insertions and deletions.
works as follows: starting at the root, we generate the first string according to the stationary distribution of TKF91. Then, for each outgoing edge e, we use the known time te and the equations above
to generate a child string. We continue in preorder recursively.
2.1
Auxiliary variables
We now define some auxiliary variables that will be useful in the next section. Between each pair
of nodes a, b ? V connected by an edge and with respective strings x, y , we define an alignment
random variable: its values are bipartite matchings between the characters of the strings x and y.
Links in this alignment denote survival of a character (allowing zero or more substitutions). Note
that this alignment is monotonic: if character i in x is linked to character j in y, then the characters
i0 > i in x can only be unlinked or linked to a character with index j 0 > j in y. The random variable
that consists of the alignments and the strings for all the edges and nodes in the phylogenetic tree ?
will be called a derivation.
Note also that a derivation D defines another graph that we will call a derivation graph. Its nodes
are the characters of all the strings in the tree. We put an edge between two characters x, y in this
graph iff two properties hold. Let a, b ? V be the nodes corresponding to the strings from which
respectively x, y belongs to. We put an edge between x, y iff (1) there is an edge between a and b
in E and (2) there is a link between x, y in the alignment of the corresponding strings. Examples of
derivation graphs are shown in Fig. 2.
3 Efficient inference

The approximate inference algorithm we propose, Ancestry Resampling (AR), is based on the
Metropolis-Hastings (MH) framework. While the SSR kernel resamples the whole sequence corresponding
to a single node, AR works around the difficulties of SSR by jointly resampling a "thin vertical
slice" (Fig. 1(f)) of the tree, composed of a short substring in every node. As we will see, with
the right definition of vertical slice, this yields a valid and efficient MH algorithm.
3.1 Ancestry Resampling

We will call one of these "thin slices" an ancestry A, and we now discuss what its definition
should be. Some care will be needed to ensure irreducibility and reversibility of the sampler.
Figure 2: (a): the simple guide tree used in this example (left) and the corresponding sequences
and alignments (right). (a,b,c): the definitions of A_0, A_∞, A respectively are shaded (the
"selected characters"). (d,e): An example showing the non-reversibility problem with A_∞.
We first augment the state of the AR sampler to include the derivation auxiliary variable described
in Sec. 2.1. Let D be the current derivation and let x be a substring of one of the terminal nodes,
say of node e. We will call x an anchor. The ancestry will depend on both a derivation and an
anchor. The overall MH sampler is a mixture of proposal distributions indexed by a set of anchors
covering all the characters in the terminal strings. Each proposal resamples a new value of A(D, x)
given the terminal nodes, keeping A(D, x)^c frozen.

We first let A_0(D, x) be the set of characters connected to some character in x in the derivation
graph of D (see Fig. 2(a)). This set A_0(D, x) is not a suitable definition of vertical slice, but
will be useful for constructing the correct one. It is unsuitable for two reasons. First, it does
not yield an irreducible chain, as illustrated in the same figure, where nine of the characters of
this sample (those inside the dashed curve) will never be resampled, no matter which substring of
the terminal node is selected as anchor. Secondly, we would like the vertical slices to be
contiguous substrings rather than general subsequences, to ease implementation.
We therefore modify the definition recursively as follows (see Fig. 2(b) for an illustration). For
i > 0, we say that a character token y is in A_i(D, x) if one of the following conditions is true:

1. y is connected to A_{i-1}(D, x),
2. y appears in a string ···y′···y···y″··· such that both y′ and y″ are in A_{i-1}(D, x),
3. y appears in a string ···y′···y··· such that y′ is in A_{i-1}(D, x) and x is a suffix,
4. y appears in a string ···y···y′··· such that y′ is in A_{i-1}(D, x) and x is a prefix.

Then, we define A_∞(D, x) := ∪_{i=0}^{∞} A_i(D, x). In words, a symbol is in A_∞(D, x) if it is
linked to an anchored character through the alignments, or if it is "squeezed" between previously
connected characters. Cases 3 and 4 handle the boundaries of strings. With this property,
irreducibility could be established with some conditions on the anchors, but it turns out that this
definition is still not quite right.
With A_∞, the main problem arises when one tries to establish reversibility of the chain. This is
illustrated in Fig. 2(d): the chain first transitions to a new state by altering the circled link.
One can see that with the definition of A_∞(D, x) given above, from the state in Fig. 2(e), the
state in Fig. 2(d) is now unreachable by the same resampling operator, the reason being that the
substring labeled z in the figure belongs to the frozen part of the state if the transition is
visited backwards. While there exist MCMC methods that are not based on reversible chains [13], we
prefer to take a simpler approach: a variation on our definition solves the issue, informally by
taking the vertical slice A(D, x) to be the "complement of the ancestry taken on the complement of
the anchor." More precisely, if x′ x x″ is the string at the anchor node e (with x′ and x″ the
substrings to the left and right of the anchor x), we let the resampled section be
A(D, x) := (A_∞(D, x′) ∪ A_∞(D, x″))^c. This creates slightly thicker slices (Fig. 2(c)) but
solves the reversibility problem. We will call A(D, x) the ancestry of the anchor x. With this
definition, the proposal distribution can be made reversible using an MH acceptance ratio; it is
also irreducible.
The problem of resampling a single slice decomposes along the tree structure τ, but an unbounded
number of InDels could a priori occur inside the thin slice. It may seem at first glance that we
are back at our initial problem: sampling from a tree-structured directed graphical model where the
support of each node is a countably infinite space. But in fact, we have made progress: the
distribution is now concentrated on very short sequences. Indeed, the anchors x can be taken
relatively small (we used anchors of length 3 to 5 in our experiments).
Another important property to notice is that given an assignment of the random variable A(D, x), it
is possible to compute efficiently and exactly an unnormalized probability for this assignment. The
summation over the possible alignments can be done using a standard quadratic dynamic program
known in its max version as the Needleman-Wunsch algorithm [14].
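As a concrete rendering of that quadratic dynamic program, the sketch below computes the sum over
all monotone alignments of two strings of the product of local weights; the three weight callables
stand in for the model-derived terms and are placeholders of ours. Replacing the sum of the three
cases by `max` recovers the familiar max version.

```python
def alignment_marginal(x, y, w_match, w_del, w_ins):
    """Sum over all monotone alignments of strings x and y of the product
    of local weights: the summation analogue of Needleman-Wunsch.

    w_match(a, b), w_del(a), w_ins(b) are nonnegative weights supplied by
    the model.
    """
    n, m = len(x), len(y)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    F[0][0] = 1.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0:
                F[i][j] += F[i - 1][j] * w_del(x[i - 1])      # gap in y
            if j > 0:
                F[i][j] += F[i][j - 1] * w_ins(y[j - 1])      # gap in x
            if i > 0 and j > 0:
                F[i][j] += F[i - 1][j - 1] * w_match(x[i - 1], y[j - 1])
    return F[n][m]
```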
3.2 Cylindric proposal
We now introduce the second idea that makes efficient inference possible: when resampling an
ancestry given its complement, rather than allowing all possible strings for the resampled value of
A(D, x), we restrict the choices to the set of substitutes that are close to its current value. We
formalise closeness as follows. Let a_1, a_2 be two values for the ancestry A(D, x). We define the
cylindric distance as the maximum over all nodes e of the Levenshtein edit distance between the
substrings in a_1 and a_2 at node e. Fix some positive integer m. The proposal distribution
considers the substitute ancestries within a ball of radius m centered at the current state in the
cylindric metric. The value m = 1 worked well in practice.
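The cylindric distance is straightforward to compute. In the sketch below, an ancestry is encoded,
for illustration only, as a mapping from tree nodes to the substring it assigns to each node.

```python
def levenshtein(a, b):
    """Standard edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def cylindric_distance(a1, a2):
    """Max over tree nodes of the edit distance between the substrings
    that ancestries a1 and a2 assign to that node."""
    return max(levenshtein(a1[node], a2[node]) for node in a1)
```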
Here the number of states in the tree-structured dynamic program at each node is polynomial in the
lengths of the strings in the current ancestry. A sample can therefore be obtained easily, using
the observation made above that the unnormalized probability can be computed.¹ Next, we compute the
acceptance ratio, i.e.:

min{ 1, [ P(a^p) Q(a^c | a^p) ] / [ P(a^c) Q(a^p | a^c) ] }

where a^c, a^p are the current and proposed ancestry values and Q(a_2 | a_1) is the transition
probability of the MH kernel, proportional to P(·), but with support restricted to the cylindric
ball centered at a_1.
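The accept/reject step is then the standard Metropolis-Hastings rule; the sketch below assumes the
unnormalised target P(·) and the restricted proposal Q(·|·) have already been evaluated as numbers.

```python
import random

def mh_accept(p_cur, p_prop, q_cur_given_prop, q_prop_given_cur):
    """Metropolis-Hastings accept/reject for the cylindric proposal.

    p_* are unnormalised target probabilities P(.); q_* are the proposal
    probabilities Q(.|.) restricted to the cylindric ball.
    """
    ratio = (p_prop * q_cur_given_prop) / (p_cur * q_prop_given_cur)
    return random.random() < min(1.0, ratio)
```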
4 Experiments
We consider two tasks: reconstruction of ancestral sequences and prediction of alignments between
multiple genetically-related proteins. We are interested in comparing the ancestry sampling method
(AR) presented in this paper with the Markov kernel used in previous literature (SSR).
4.1 Reconstruction of ancestral sequences
Given a set of genetically-related sequences, the reconstruction task is to infer properties of the
common ancestor of these modern species. This task has important scientific applications: for
instance, in [1], the ratio of G+C nucleotide content of ribosomal RNA sequences was estimated to
assess the environmental temperature of the common ancestor to all life forms (this ratio is strongly
correlated with the optimal growth temperature of prokaryotes).
Just as in the task of topology reconstruction, there are no gold ancestral sequences available to evaluate ancestral sequence reconstruction. For this reason, we take the same approach as in topology
reconstruction and perform comparisons on synthetic data [15].
We generated a root node from the DNA alphabet and evolved it down a binary tree of seven nodes.
Only the leaves were given to the algorithms (a total of 124010 nucleotides); the hidden nodes were
held out. Since our goal in this experiment is to compare inference algorithms rather than methods
of estimation, we gave both algorithms the true parameters, i.e., those that were used to generate
the data.

¹ What we are using here is actually a nested dynamic program, meaning that the computation of a
probability in the outer dynamic program (DP) requires the computation of an inner, simpler DP.
While this may seem prohibitive, it is made feasible by designing the sampling kernels so that the
inner DP is executed most of the time on small problem instances. We also cached the small-DP cost
matrices.

Figure 3: Left: Single Sequence Resampling versus Ancestry Resampling on the sequence
reconstruction task (error versus time). Right: Detrimental effect of a maximum deviation
heuristic, which is not needed with AR samplers.
The task is to predict the sequences at the root node, with error measured using the Levenshtein
edit distance l. For both algorithms, we used a standard approximation to minimum Bayes risk
decoding to produce the final reconstruction. If s_1, s_2, ..., s_I are the samples collected up to
iteration I, we return the sample s_i minimising Σ_{j ∈ 1...I} l(s_i, s_j).
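This decoding rule has a one-line implementation: return the collected sample that minimises its
total distance to the others (a quadratic-time loop; the distance argument could be the Levenshtein
helper sketched earlier).

```python
def mbr_decode(samples, dist):
    """Approximate minimum Bayes risk decoding: among the collected
    samples s_1..s_I, return the one minimising sum_j dist(s_i, s_j)."""
    return min(samples, key=lambda s: sum(dist(s, t) for t in samples))
```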
Fig. 3 (left) shows the error as a function of time for the two algorithms, both implemented efficiently
in Java. Although the computational cost for one pass through the data was higher with AR, the AR
method proved to be dramatically more effective: after only one pass through the data (345s), AR
already performed better than running SSR for nine hours. Moreover, AR steadily improved its
performance as more samples were collected, keeping its error at each iteration to less than half of
that of the competitor.
Fig. 3 (right) shows the detrimental effect of a maximum deviation heuristic. This experiment was
performed under the same setup described in this section. While the maximum deviation heuristic
is necessary for SSR to be able to handle the long sequences found in biological datasets, it is not
necessary for AR samplers.
4.2 Protein multiple sequence alignment
We also performed experiments on the task of protein multiple sequence alignment, for which the
BAliBASE [16] dataset provides a standard benchmark. BAliBASE contains annotations created by
biologists using secondary structure elements and other biological cues.
Note first that we can get a multiple sequence alignment from an InDel evolutionary model. For a
set S of sequences to align, construct a phylogenetic tree such that its terminal leaves coincide
with S. A multiple sequence alignment can then be extracted from the inferred derivation D as
follows: deem the amino acids x, y ∈ S aligned iff y ∈ A_0(D, x).

The state-of-the-art among multiple sequence alignment systems based on an evolutionary model is
Handel [2]. It is based on TKF91 and produces a multiple sequence alignment as described above.
The key difference with our approach is that their inference algorithm is based on SSR rather than
the AR move that we advocate in this paper.
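The extraction rule above amounts to taking connected components of the derivation graph, which a
union-find pass computes directly; the representation of characters as hashable (node, position)
identifiers is our own choice for this sketch.

```python
def msa_columns(characters, links):
    """Group characters into multiple-alignment columns by transitive
    closure of derivation-graph links (union-find).

    `characters` is an iterable of hashable ids, e.g. (node, position);
    `links` is an iterable of (id1, id2) pairs from the pairwise alignments.
    """
    parent = {c: c for c in characters}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]      # path halving
            c = parent[c]
        return c

    for a, b in links:
        parent[find(a)] = find(b)              # union the two components

    columns = {}
    for c in characters:
        columns.setdefault(find(c), []).append(c)
    return list(columns.values())
```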
While other heuristic approaches are known to perform better than Handel on this dataset [8, 17],
they are not based on explicit evolutionary models. They perform better because they leverage more
sophisticated features such as affine gap penalties and hydrophobic core modeling. While these
features can be incorporated in our model, we leave this for future work since the topic of this paper
is inference.
System            SP     CS
SSR (Handel)      0.77   0.63
AR (this paper)   0.86   0.77

Figure 4: Left: performance on the ref1 directory of BAliBASE (table above). Center, right: Column
Score (CS) and Sum-of-Pairs score (SP) as a function of the depth of the generating trees.
We built evolutionary trees using weighbor [7]. We ran each system for the same time on the
sequences in the ref1 directory of BAliBASE v.1. Decoding for this experiment was done by picking
the sample with highest likelihood. We report in Fig. 4(left) the CS and SP Scores, the two standard
metrics for this task. Both are recall measures on the subset of the alignments that were labeled,
called the core blocks; see, e.g., [17] for the details. For both metrics, our approach
performs better.

In order to investigate where the advantage comes from, we did another multiple alignment
experiment, plotting performance after a fixed time as a function of the depth of the trees. If the
random walk argument presented in the introduction holds, we would expect the advantage of AR over
SSR to increase as the tree gets taller. This prediction is confirmed as illustrated in Fig. 4
(center, right). For short trees, the two algorithms perform equally, SSR beating AR slightly for
trees with three nodes, which is not surprising since SSR actually performs exact inference in this
tiny topology. However, as the trees get taller, the task becomes more difficult, and only AR
maintains good performance.

5 Conclusion

We have described a principled inference procedure for InDel trees, and evaluated its performance
against a state-of-the-art statistical alignment procedure, showing its clear superiority. In
contrast to heuristics such as Clustalw [8], it can be used both for reconstruction of ancestral
sequences and for multiple alignment.

While our algorithm was described in the context of TKF91, it can be extended to more sophisticated
models. Incorporating affine gap penalties and hydrophobic core modeling is of particular interest
as they are known to dramatically improve multiple alignment performance [2]. These models
typically do not have closed forms for the conditional probabilities, but this could be alleviated
by using a discretization of longer branches. This creates tall trees, but as we have seen, AR
still performs very well in this setting.

References

[1] N. Galtier, N. Tourasse, and M. Gouy. A nonhyperthermophilic common ancestor to extant life
forms. Science, 283:220-221, 1999.
[2] I. Holmes and W. J. Bruno. Evolutionary HMM: a Bayesian approach to multiple alignment.
Bioinformatics, 17:803-820, 2001.
[3] J. Felsenstein. Inferring Phylogenies. Sinauer Associates, 2003.
[4] Z. Yang and B. Rannala. Bayesian phylogenetic inference using DNA sequences: A Markov chain
Monte Carlo method. Molecular Biology and Evolution, 14:717-724, 1997.
[5] B. Mau and M. A. Newton. Phylogenetic inference for binary data on dendrograms using Markov
chain Monte Carlo. Journal of Computational and Graphical Statistics, 6:122-131, 1997.
chain Monte common
Carlo. Journal
andScience,
Graphical Statistics, 6:122?131,
ltier, N. Tourasse, and M. Gouy. AMarkov
nonhyperthermophilic
ancestoroftoComputational
extant life forms.
1997.
20?221, 1999.
mes and W. J. Bruno. Evolutionary hmm: a bayesian approach to multiple alignment. Bioinformatics, 17:803?
001.
7
7
[6] S. Li, D. K. Pearl, and H. Doss. Phylogenetic tree construction using Markov chain Monte Carlo. Journal of the American Statistical Association, 95:493–508, 2000.
[7] W. J. Bruno, N. D. Socci, and A. L. Halpern. Weighted neighbor joining: A likelihood-based approach to distance-based phylogeny reconstruction. Molecular Biology and Evolution, 17:189–197, 2000.
[8] D. G. Higgins and P. M. Sharp. CLUSTAL: a package for performing multiple sequence alignment on a microcomputer. Gene, 73:237–244, 1988.
[9] J. L. Thorne, H. Kishino, and J. Felsenstein. Inching toward reality: an improved likelihood model of sequence evolution. Journal of Molecular Evolution, 34:3–16, 1992.
[10] J. L. Thorne, H. Kishino, and J. Felsenstein. An evolutionary model for maximum likelihood alignment of DNA sequences. Journal of Molecular Evolution, 33:114–124, 1991.
[11] G. A. Lunter, I. Miklós, Y. S. Song, and J. Hein. An efficient algorithm for statistical multiple alignment on arbitrary phylogenetic trees. Journal of Computational Biology, 10:869–889, 2003.
[12] A. Bouchard-Côté, P. Liang, D. Klein, and T. L. Griffiths. A probabilistic approach to diachronic phonology. In Proceedings of EMNLP 2007, 2007.
[13] P. Diaconis, S. Holmes, and R. M. Neal. Analysis of a non-reversible Markov chain sampler. Technical report, Cornell University, 1997.
[14] S. Needleman and C. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48:443–453, 1970.
[15] K. St. John, T. Warnow, B. M. E. Moret, and L. Vawter. Performance study of phylogenetic methods: (unweighted) quartet methods and neighbor-joining. Journal of Algorithms, 48:173–193, 2003.
[16] J. Thompson, F. Plewniak, and O. Poch. BAliBASE: A benchmark alignments database for the evaluation of multiple sequence alignment programs. Bioinformatics, 15:87–88, 1999.
[17] C. B. Do, M. S. P. Mahabhashyam, M. Brudno, and S. Batzoglou. PROBCONS: Probabilistic consistency-based multiple sequence alignment. Genome Research, 15:330–340, 2005.
2,655 | 3,407 |
An Empirical Analysis of Domain Adaptation
Algorithms for Genomic Sequence Analysis
Gabriele Schweikert¹
Max Planck Institutes
Spemannstr. 35-39, 72070 Tübingen, Germany
[email protected]
Christian Widmer¹
Friedrich Miescher Laboratory
Spemannstr. 39, 72070 Tübingen, Germany
ZBIT, Tübingen University
Sand 14, 72076 Tübingen, Germany
[email protected]
Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
Spemannstr. 38, 72070 Tübingen, Germany
[email protected]
Gunnar Rätsch
Friedrich Miescher Laboratory
Spemannstr. 39, 72070 Tübingen, Germany
[email protected]
Abstract
We study the problem of domain transfer for a supervised classification task in
mRNA splicing. We consider a number of recent domain transfer methods from
machine learning, including some that are novel, and evaluate them on genomic
sequence data from model organisms of varying evolutionary distance. We find
that in cases where the organisms are not closely related, the use of domain adaptation methods can help improve classification performance.
1 Introduction
Ten years ago, an eight-year lasting collaborative effort resulted in the first completely sequenced
genome of a multi-cellular organism, the free-living nematode Caenorhabditis elegans. Today, a
decade after the accomplishment of this landmark, 23 eukaryotic genomes have been completed and
more than 400 are underway. The genomic sequence builds the basis for a large body of research on
understanding the biochemical processes in these organisms. Typically, the more closely related the
organisms are, the more similar the biochemical processes. It is the hope of biological research that
by analyzing a wide spectrum of model organisms, one can approach an understanding of the full
biological complexity. For some organisms, certain biochemical experiments can be performed more
readily than for others, facilitating the analysis of particular processes. This understanding can then
be transferred to other organisms, for instance by verifying or refining models of the processes?at
a fraction of the original cost. This is but one example of a situation where transfer of knowledge
across domains is fruitful.
In machine learning, the above information transfer is called domain adaptation, where one aims
to use data or a model of a well-analyzed source domain to obtain or refine a model for a less
analyzed target domain. For supervised classification, this corresponds to the case where there are
ample labeled examples (xi , yi ), i = 1, . . . , m for the source domain, but only few such examples
(x_i, y_i), i = m + 1, ..., m + n for the target domain (n ≪ m). The examples are assumed to be
drawn independently from the joint probability distributions PS (X, Y ) and PT (X, Y ), respectively.
The distributions PS (X, Y ) = PS (Y |X) ? PS (X) and PT (X, Y ) = PT (Y |X) ? PT (X) can differ
in several ways:
(1) In the classical covariate shift case, it is assumed that only the distribution of the input features
P(X) varies between the two domains: P_S(X) ≠ P_T(X). The conditional, however, remains
¹ These authors contributed equally.
invariant, PS (Y |X) = PT (Y |X). For a given feature vector x the label y is thus independent of
the domain from which the example stems. An example thereof would be if a function of some
biological material is conserved between two organisms, but its composition has changed (e.g. a
part of a chromosome has been duplicated).
(2) In a more difficult scenario the conditionals differ between domains, P_S(Y|X) ≠ P_T(Y|X),
while P (X) may or may not vary. This is the more common case in biology. Here, two organisms
may have evolved from a common ancestor and a certain biological function may have changed
due to evolutionary pressures. The evolutionary distance may be a good indicator for how well
the function is conserved. If this distance is small, we have reason to believe that the conditionals
may not be completely different, and knowledge of one of them should then provide us with some
information also about the other one.
While such knowledge transfer is crucial for biology, and performed by biologists on a daily basis,
surprisingly little work has been done to exploit it using machine learning methods on biological
databases. The present paper attempts to fill this gap by studying a realistic biological domain
transfer problem, taking into account several of the relevant dimensions in a common experimental
framework:
- methods: over the last years, the field of machine learning has seen a strong increase in interest in the domain adaptation problem, reflected for instance by a recent NIPS workshop
- domain distance: ranging from close organisms, where simply combining training sets does the job, to distant organisms where more sophisticated methods can potentially show their strengths
- data set sizes: whether or not it is worth transferring knowledge from a distant organism is expected to depend on the amount of data available for the target system
With the above in mind, we selected the problem of mRNA splicing (see Figure A1 in the Appendix
for more details) to assay the above dimensions of domain adaptation on a task which is relevant to
modern biology. The paper is organized as follows: In Section 2, we will describe the experimental design including the datasets, the underlying classification model, and the model selection and
evaluation procedure. In Section 3 we will briefly review a number of known algorithms for domain
adaptation, and propose certain variations. In Section 4 we show the results of our comparison with
a brief discussion.
2 Experimental Design
2.1 A Family of Classification Problems
We consider the task of identifying so-called acceptor splice sites within a large set of potential
splice sites based on a sequence window around a site. The idea is to consider the recognition
of splice sites in different organisms: In all cases, we used the very well studied model organism
C. elegans as the source domain. As target organisms we chose two additional nematodes, namely,
the close relative C. remanei, which diverged from C. elegans 100 million years ago [10], and the
more distantly related P. pacificus, a lineage which has diverged from C. elegans more than 200
million years ago [7]. As a third target organism we used D. melanogaster, which is separated
from C. elegans by 990 million years [11]. Finally, we consider the plant A. thaliana, which has
diverged from the other organisms more than 1,600 million years ago. It is assumed that a larger
evolutionary distance will likely also have led to an accumulation of functional differences in the
molecular splicing machinery. We therefore expect that the differences of classification functions
for recognizing splice sites in these organisms will increase with increasing evolutionary distance.
2.2 The Classification Model
It has been demonstrated that Support Vector Machines (SVMs) [1] are well suited for the task of
splice site predictions across a wide range of organisms [9]. In this work, the so-called Weighted
Degree kernel has been used to measure the similarity between two example sequences x and x′ of fixed length L by counting co-occurring substrings in both sequences at the same position:
$$k_{\mathrm{wd}}(x, x') = \frac{1}{L}\sum_{d=1}^{\delta}\beta_d \sum_{l=1}^{L-d+1} I\big(x[l:l+d] = x'[l:l+d]\big) \qquad (1)$$
where $x[l:l+d]$ is the substring of length d of x at position l and $\beta_d = 2\,\frac{\delta-d+1}{\delta^2+\delta}$ is the weighting of the substring lengths.
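To make Eq. (1) concrete, here is a minimal sketch of the kernel in Python (the function name and the toy sequences are ours, not from the paper; plain strings stand in for the DNA sequences):

```python
import numpy as np

def wd_kernel(x, y, delta):
    """Weighted Degree kernel of Eq. (1) for two equal-length strings."""
    L = len(x)
    assert len(y) == L
    k = 0.0
    for d in range(1, delta + 1):
        beta_d = 2.0 * (delta - d + 1) / (delta ** 2 + delta)
        # count positions where the length-d substrings of x and y coincide
        matches = sum(x[l:l + d] == y[l:l + d] for l in range(L - d + 1))
        k += beta_d * matches
    return k / L

print(wd_kernel("ACGTACGT", "ACGAACGT", delta=3))
```

For δ = 1 this reduces to counting matching characters, which is the setting used in the experiments below.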
In our previous study we have used sequences of length L = 140 and substrings of length δ = 22 for splice site detection [9]. With the four-letter DNA sequence alphabet {A, C, G, T} this leads to a very high-dimensional feature space (> 10^13 dimensions). Moreover, to achieve the best classification
examples).
For the designed experimental comparison we had to run all algorithms many times for different
training set sizes, organisms and model parameters. We chose the source and target training set as
large as possible: in our case at most 100,000 examples per domain. Moreover, we did not have efficient implementations that can make use of kernels available for all algorithms. Hence, in order to perform
this study and to obtain comparable results, we had to restrict ourselves to a case where we can explicitly work in the feature space, if necessary (i.e., δ not much larger than two). We chose δ = 1. Note that this choice does not limit the generality of this study, as there is no strong reason
why efficient implementations that employ kernels could not be developed for all methods. The
development of large scale methods, however, was not the main focus of this study.
Note that the above choices required an equivalent of about 1500 days of computing time on state-of-the-art CPU cores. We therefore refrained from including more methods, examples or dimensions.
2.3 Splits and Model Selection
In the first set of experiments we randomly selected a source dataset of 100,000 examples from
C. elegans, while data sets of sizes 2,500, 6,500, 16,000, 40,000 and 100,000 were selected for
each target organism. Subsequently we performed a second set of experiments where we combined
several sources. For our comparison we used 25,000 labeled examples from each of four remaining
organisms to predict on a target organism. We ensured that the positives to negatives ratio is at
1/100 for all datasets. Two thirds of each target set were used for training, while one third was used
for evaluation in the course of hyper-parameter tuning.1 Additionally, test sets of 60,000 examples
were set aside for each target organism. All experiments were repeated three times with different
training splits (source and target), except the last one which always used the full data set. Reported
will be the average area under the precision-recall-curve (auPRC) and its standard deviation, which
is considered a sensible measure for imbalanced classification problems. The data and additional
information will be made available for download on a supplementary website.2
3 Methods for Domain Adaptation
Regarding the distributional view that was presented in Section 1, the problem of splice site prediction can be affected by both evils simultaneously, namely P_S(X) ≠ P_T(X) and P_S(Y|X) ≠ P_T(Y|X), which is also the most realistic scenario in the case of modeling most biological processes. In this paper, we will therefore drop the classical covariate shift assumption, and allow for different predictive functions P_S(Y|X) ≠ P_T(Y|X).
3.1 Baseline Methods (SVMS and SVMT )
As baseline methods for the comparison we consider two methods: (a) training on the source data
only (SVMS ) and (b) training on the target data only (SVMT ). For SVMS we use the source data
for training however we tune the hyper-parameter on the available target data. For SVMT we use
the available target data for training (67%) and model selection (33%). The resulting functions are
$$f_S(x) = \langle \Phi(x), w_S \rangle + b_S \qquad \text{and} \qquad f_T(x) = \langle \Phi(x), w_T \rangle + b_T.$$
¹ Details on the hyper-parameter settings and tuning are shown in Table A2 in the appendix.
² http://www.fml.mpg.de/raetsch/projects/genomedomainadaptation
3.2 Convex Combination (SVMS +SVMT )
The most straightforward idea for domain adaptation is to reuse the two optimal functions fT and
fS as generated by the baseline methods SVMS and SVMT and combine them in a convex manner:
$$F(x) = \alpha f_T(x) + (1 - \alpha) f_S(x).$$
Here, α ∈ [0, 1] is the convex combination parameter that is tuned on the evaluation set (33%) of
the target domain. A great benefit of this approach is its efficiency.
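A minimal sketch of this tuning step follows (all names are ours; for brevity we select α by 0/1 accuracy on the target evaluation set, whereas the paper evaluates with auPRC):

```python
import numpy as np

def tune_alpha(f_s_eval, f_t_eval, y_eval, grid=np.linspace(0.0, 1.0, 21)):
    """Pick the convex-combination weight on a held-out target set.
    f_s_eval / f_t_eval are the raw SVM outputs of the two base models."""
    best_alpha, best_acc = 0.0, -np.inf
    for alpha in grid:
        pred = np.sign(alpha * f_t_eval + (1.0 - alpha) * f_s_eval)
        acc = np.mean(pred == y_eval)
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha
```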
3.3 Weighted Combination (SVMS+T )
Another simple idea is to train the method on the union of source and target data. The relative
importance of each domain is integrated into the loss term of the SVM and can be adjusted by
setting domain-dependent cost parameters CS and CT for the m and n training examples from the
source and target domain, respectively:
$$\min_{w,\xi}\quad \frac{1}{2}\|w\|^2 + C_S\sum_{i=1}^{m}\xi_i + C_T\sum_{i=m+1}^{m+n}\xi_i \qquad (2)$$
$$\text{s.t.}\quad y_i\big(\langle w, \Phi(x_i)\rangle + b\big) \ge 1 - \xi_i, \qquad \xi_i \ge 0 \qquad \forall i \in [1, m+n]$$
This method has two model parameters and requires training on the union of the training sets. Since
the computation time of most classification methods increases super-linearly and full model selection may require to train many parameter combinations, this approach is computationally quite
demanding.
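One way to realize the cost-weighted problem (2) without a custom solver is via per-example weights in an off-the-shelf SVM; below is a sketch using scikit-learn's sample_weight (this is an assumption on our part, not the implementation used in the paper):

```python
import numpy as np
from sklearn.svm import SVC

def train_weighted_combination(X_src, y_src, X_tgt, y_tgt, C_S, C_T):
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    # per-example costs: C_S for source points, C_T for target points
    w = np.concatenate([np.full(len(y_src), C_S), np.full(len(y_tgt), C_T)])
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X, y, sample_weight=w)
    return clf
```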
3.4 Dual-task Learning (SVMS,T)
One way of extending the weighted combination approach is a variant of multi-task learning [2].
The idea is to solve the source and target classification problems simultaneously and couple the two
solutions via a regularization term. This idea can be realized by the following optimization problem:
$$\min_{w_S, w_T, \xi}\quad \frac{1}{2}\|w_S - w_T\|^2 + C\sum_{i=1}^{m+n}\xi_i \qquad (3)$$
$$\text{s.t.}\quad y_i\big(\langle w_S, \Phi(x_i)\rangle + b\big) \ge 1 - \xi_i \quad \forall i \in 1, \dots, m$$
$$\phantom{\text{s.t.}}\quad y_i\big(\langle w_T, \Phi(x_i)\rangle + b\big) \ge 1 - \xi_i \quad \forall i \in m+1, \dots, m+n$$
$$\phantom{\text{s.t.}}\quad \xi_i \ge 0 \quad \forall i \in 1, \dots, m+n$$
Please note that now wS and wT are optimized. The above optimization problem can be solved using a standard QP solver. In a preliminary experiment we used the optimization package CPLEX to solve this problem, which took too long as the number of variables is relatively large. Hence, we decided to approximate the soft-margin loss using the logistic loss l(f(x), y) = log(1 + exp(−y f(x))) and to use a conjugate gradient method³ to minimize the resulting objective function in terms of wS and wT.
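A sketch of the smoothed dual-task objective (3) with the logistic surrogate, minimized here with SciPy's conjugate-gradient method in place of Rasmussen's minimize; the bias term b is omitted for brevity and all names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def dual_task_objective(w_flat, X_s, y_s, X_t, y_t, C):
    d = X_s.shape[1]
    w_s, w_t = w_flat[:d], w_flat[d:]
    # logistic surrogate l(f, y) = log(1 + exp(-y f)), computed stably
    loss_s = np.logaddexp(0.0, -y_s * (X_s @ w_s)).sum()
    loss_t = np.logaddexp(0.0, -y_t * (X_t @ w_t)).sum()
    return 0.5 * np.sum((w_s - w_t) ** 2) + C * (loss_s + loss_t)

def fit_dual_task(X_s, y_s, X_t, y_t, C=1.0):
    d = X_s.shape[1]
    res = minimize(dual_task_objective, np.zeros(2 * d),
                   args=(X_s, y_s, X_t, y_t, C), method="CG")
    return res.x[:d], res.x[d:]
```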
3.5 Kernel Mean Matching (SVMS→T)
Kernel methods map the data into a reproducing kernel Hilbert space (RKHS) by means of a mapping $\Phi : \mathcal{X} \to \mathcal{H}$ related to a positive definite kernel via $k(x, x') = \langle \Phi(x), \Phi(x') \rangle$. Depending on the choice of kernel, the space $\mathcal{H}$ may be spanned by a large number of higher order features of the data. In such cases, higher order statistics for a set of input points can be computed in $\mathcal{H}$ by simply taking the mean (i.e., the first order statistics). In fact, it turns out that for a certain class of kernels, the mapping
$$\mu : (x_1, \dots, x_n) \mapsto \frac{1}{n}\sum_{i=1}^{n}\Phi(x_i)$$
³ We used Carl Rasmussen's minimize function.
is injective [5]; in other words, given knowledge of (only) the mean (the right hand side), we can completely reconstruct the set of points. For a characterization of this class of kernels, see for instance [4]. It is often not necessary to retain all information (indeed, it may be useful to specify which information we want to retain and which one we want to disregard, see [8]). Generally speaking, the higher-dimensional H is, the more information is contained in the mean.
In [6] it was proposed that one could use this for covariate shift adaptation, moving the mean of a
source distribution (over the inputs only) towards the mean of a target distribution by re-weighting
the source training points. We have applied this to our problem, but found that a variant of this
approach performed better. In this variant, we do not re-weight the source points, but rather we
translate each point towards the mean of the target inputs:
$$\tilde\Phi(x_j) = \Phi(x_j) - \eta\left(\frac{1}{m}\sum_{i=1}^{m}\Phi(x_i) \;-\; \frac{1}{n}\sum_{i=m+1}^{m+n}\Phi(x_i)\right) \qquad \forall j = 1, \dots, m.$$
This also leads to a modified source input distribution which is statistically more similar to the
target distribution and which can thus be used to improve performance when training the target task.
Unlike [6], we do have a certain amount of labels also for the target distribution. We make use of
them by performing the shift separately for each class y ? {?1}:
!
m
m+n
X
X
1
1
? j ) = ?(xj ) ? ?
?(x
[[yi = y]]?(xi ) ?
[[yi = y]]?(xi )
my i=1
ny i=m+1
for all j = m + 1, . . . , m + n with yj = y, where my and ny are the number of source and target
examples with label y, respectively. The shifted examples can now be used in different ways to
obtain a final classifier. We decided to use the weighted combination with CS = CT for comparison.
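For explicit feature spaces, as used in this study, the class-wise translation reduces to shifting raw feature vectors; a minimal sketch follows (η and all names are ours; for concreteness we translate the source examples here, and the symmetric choice of translating the other set only flips a sign):

```python
import numpy as np

def shift_towards_target(X_src, y_src, X_tgt, y_tgt, eta):
    """Translate source examples so that, per class, their mean moves
    towards the corresponding target class mean (explicit feature space)."""
    X_shifted = X_src.copy()
    for label in (-1, +1):
        mu_src = X_src[y_src == label].mean(axis=0)
        mu_tgt = X_tgt[y_tgt == label].mean(axis=0)
        X_shifted[y_src == label] -= eta * (mu_src - mu_tgt)
    return X_shifted
```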
3.6 Feature Augmentation (SVMS⊕T)
In [3] a method was proposed that augments the features of source and target examples in a domain-specific way:
$$\tilde\Phi(x_i) = (\Phi(x_i), \Phi(x_i), 0)^\top \quad \text{for } i = 1, \dots, m$$
$$\tilde\Phi(x_i) = (\Phi(x_i), 0, \Phi(x_i))^\top \quad \text{for } i = m+1, \dots, m+n.$$
The intuition behind this idea is that there exists one set of parameters that models the properties common to both sets and two additional sets of parameters that model the specifics of the two domains. It can easily be seen that the kernel for the augmented feature space can be computed as:
$$k_{\mathrm{AUG}}(x_i, x_j) = \begin{cases} 2\,\langle\Phi(x_i), \Phi(x_j)\rangle & \text{if } [\![i \le m]\!] = [\![j \le m]\!] \\ \langle\Phi(x_i), \Phi(x_j)\rangle & \text{otherwise} \end{cases}$$
This means that the "similarity" between two examples is twice as high if the examples were drawn from the same domain as if they were drawn from different domains. Instead of the factor 2,
we used a hyper-parameter B in the following.
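A sketch of the augmented kernel with the factor generalized to the hyper-parameter B (names are ours; K_base is any precomputed kernel matrix over the raw inputs):

```python
import numpy as np

def augmented_kernel(K_base, is_source, B=2.0):
    """K_base: (N, N) kernel matrix; is_source: boolean array of length N.
    Same-domain pairs are scaled by B, cross-domain pairs are left as-is."""
    same_domain = np.equal.outer(is_source, is_source)
    return np.where(same_domain, B * K_base, K_base)
```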
3.7 Combination of Several Sources
Most of the above algorithms can be extended in one way or another to integrate several source domains. In this work we consider only three possible algorithms: (a) convex combinations of several
domains, (b) KMM on several domains and (c) an extension of the dual-task learning approach to
multi-task learning. We briefly describe these methods below:
Multiple Convex Combinations (M-SVMS +SVMT ) The most general version would be to optimize all convex combination coefficients independently. If done in a grid-search-like manner, it
becomes prohibitive for more than say three source domains. In principle, one can optimize these
coefficients also by solving a linear program. In preliminary experiments we tried both approaches
and they typically did not lead to better results than the following combination:
$$F(x) = \alpha f_T(x) + (1 - \alpha)\,\frac{1}{|\mathcal{S}|}\sum_{S \in \mathcal{S}} f_S(x),$$
where $\mathcal{S}$ is the set of all considered source domains. We therefore only considered this way of
combining the predictions.
Multiple KMM (M-SVMS→T) Here, we shift the source examples of each domain independently towards the target examples, but by the same relative distance (η). Then we train one classifier on
the shifted source examples as well as the target examples.
Multi-task Learning (M-SVMS,T ) We consider the following version of multi-task learning:
$$\min_{\{w_D\}_{D \in \mathcal{D}},\,\xi}\quad \frac{1}{2}\sum_{D_1 \in \mathcal{D}}\sum_{D_2 \in \mathcal{D}} \gamma_{D_1,D_2}\,\|w_{D_1} - w_{D_2}\|^2 \;+\; \sum_i \xi_i \qquad (4)$$
$$\text{s.t.}\quad y_i\big(\langle w_{D_j}, \Phi(x_i)\rangle + b\big) \ge 1 - \xi_i, \qquad \xi_i \ge 0 \qquad (5)$$
for all examples $(x_i, y_i)$ in domain $D_j \in \mathcal{D}$, where $\mathcal{D}$ is the set of all considered domains. $\Gamma$ is a set of regularization parameters, which we parametrized by two parameters $C_S$ and $C_T$ in the following way: $\gamma_{D_1,D_2} = C_S$ if $D_1$ and $D_2$ are source domains and $C_T$ otherwise.
4 Experimental Results
We considered two different settings for the comparison. For the first experiment we assume that
there is one source domain with enough data that should be used to improve the performance in
the target domain. In the second setting we analyze whether one can benefit from several source
domains.
4.1 Single Source Domain
Due to space constraints, we restrict ourselves to presenting a summary of our results with a focus on best and worst performing methods. The detailed results are given in Figure A2 in the
appendix, where we show the median auPRC of the methods SVMT, SVMS, SVMS→T, SVMS+T, SVMS+SVMT, SVMS⊕T and SVMS,T for the considered tasks. The summary is given in Figure 1, where we illustrate which method performed best (green), similarly well (within a confidence interval of $\sigma/\sqrt{n}$) as the best (light green), considerably worse than the best (yellow), not significantly better than the worst (light red), or worst (red). From these results we can make the following
observations:
1. Independent of the task, if there is very little target data available, the training on source
data performs much better than training on the target data. Conversely, if there is much
target data available then training on it easily outperforms training on the source data.
2. For a larger evolutionary distance of the target organisms to source organism C. elegans, a
relatively small number of target training examples for the SVMT approach is sufficient to
achieve similar performance to the SVMS approach, which is always trained on 100,000
examples. We call the number of target examples with equal source and target performance
the break-even point. For instance, for the closely related organism C. remanei one needs
nearly as many target data as source data to achieve the same performance. For the most
distantly related organism A. thaliana, less than 10% target data is sufficient to outperform
the source model.
3. In almost all cases, the performance of domain adaptation algorithms is considerably higher
than source (SVMS ) and target only (SVMT ). This is most pronounced near the break-even
point, e.g. 3% improvement for C. remanei and 14% for D. melanogaster.
4. Among the domain adaptation algorithms, the dual-task learning approach (SVMS,T) performed best most often (12/20 cases). The convex combination approach (SVMS+SVMT) was second most often best (5/20).
From our observations we can conclude that the simple convex combination approach works surprisingly well. It is only outperformed by the dual-task learning algorithm which performs consistently
well for all organisms and target training set sizes.
2,656 | 3,408 |
Adaptive Template Matching with
Shift-Invariant Semi-NMF
Jonathan Le Roux
Graduate School of Information
Science and Technology
The University of Tokyo
[email protected]
Alain de Cheveigné
CNRS, Université Paris 5,
and École Normale Supérieure
[email protected]
Lucas C. Parra*
Biomedical Engineering
City College of New York
City University of New York
[email protected]
Abstract
How does one extract unknown but stereotypical events that are linearly superimposed within a signal with variable latencies and variable amplitudes?
One could think of using template matching or matching pursuit to find the arbitrarily shifted linear components. However, traditional matching approaches require that the templates be known a priori. To overcome this restriction we use instead semi Non-Negative Matrix Factorization (semi-NMF) that we extend to allow for time shifts when matching the templates to the signal. The algorithm estimates templates directly from the data along with their non-negative amplitudes. The resulting method can be thought of as an adaptive template matching procedure. We demonstrate the procedure on the task of extracting spikes from single channel extracellular recordings. On these data the algorithm essentially performs spike detection and unsupervised spike clustering. Results on simulated data and extracellular recordings indicate that the method performs well for signal-to-noise ratios of 6 dB or higher and that spike templates are recovered accurately provided they are sufficiently different.
* Corresponding author

1 Introduction

It is often the case that an observed waveform is the superposition of elementary waveforms, taken from a limited set and added with variable latencies and variable but positive amplitudes. Examples are a music waveform, made up of the superposition of stereotyped instrumental notes, or extracellular recordings of nerve activity, made up of the superposition of spikes from multiple neurons. In these examples, the elementary waveforms include both positive and negative excursions, but they usually contribute with a positive weight. Additionally, the elementary events are often temporally compact and their occurrence temporally sparse. Conventional template matching uses a known template and correlates it with the signal; events are assumed to occur at times where the correlation is high. Multiple template matching raises combinatorial issues that are addressed by Matching Pursuit [1]. However these techniques assume a preexisting dictionary of templates. We wondered whether one can estimate the templates directly from the data, together with their timing and amplitude.

Over the last decade a number of blind decomposition methods have been developed that address a similar problem: given data, can one find the amplitudes and profiles of constituent signals that explain the data in some optimal way. This includes independent component analysis (ICA), non-negative matrix factorization (NMF), and a variety of other blind source separation algorithms. The different algorithms all assume a linear superposition of the templates, but vary in their specific assumptions about the statistics of the templates and the mixing process. These assumptions are necessary to obtain useful results because the problem is under-constrained.

ICA does not fit our needs because it does not implement the constraint that components (templates) are added with positive weights. NMF constrains weights to be non-negative but requires templates to also be non-negative. We will use instead the semi-NMF algorithm of Chris Ding [2, 3] that allows factoring a matrix into a product of a non-negative and an arbitrary matrix. To accommodate time shifts we modify it following the ideas of Morten Mørup [4] who presented a shift-invariant version of the NMF algorithm, which also includes sparsity constraints. We begin with the conventional formulation of the NMF modeling task as a matrix factorization problem and then derive in the subsequent section the case of a 1D sequence of data. NMF models a data matrix X as a factorization,
$$\hat X = AB, \qquad (1)$$
with $A \ge 0$ and $B \ge 0$, and finds these coefficients such that the square modeling error $\|X - \hat X\|^2$ is minimized. Matrix A can be thought of as component amplitudes and the rows of matrix B are the component templates. Semi-NMF drops the non-negative constraint for B, while shift-NMF allows the component templates to be shifted in time. In the NMF algorithm, there is an update equation for A and an update equation for B. Semi-NMF and shift-NMF each modifies one of these equations, fortunately not the same, so their updates can be interleaved without interference.
2 Review of semi-NMF

Assume we are given N observations or segments of data with T samples arranged as a matrix $X_{nt}$. (The segments can also represent different epochs, trials, or even different channels.) The goal is to model this data as a linear superposition of K component templates $B_{kt}$ with amplitudes $A_{nk}$, i.e.,
$$\hat X_{nt} = \sum_k A_{nk} B_{kt} = A^k_n B_{kt}. \qquad (2)$$
The second expression here uses Einstein notation: indices that appear both as superscript and subscript within a product are to be summed. In contrast to matrix notation, all dimensions of an expression are apparent, including those that are absorbed by a sum, and the notation readily extends to more than two dimensions, which we will need when we introduce delays. We use this notation throughout the paper and include explicit sum signs only to avoid possible confusion.

Now, to minimize the modeling error
$$E = \|X - \hat X\|^2_2 = \sum_{nt}\big(X_{nt} - A^k_n B_{kt}\big)^2, \qquad (3)$$
the semi-NMF algorithm iterates between finding the optimum B for a given A, which is trivially given by the classic least squares solution,
$$B_{kt} = \big(A^n_k A_{k'n}\big)^{-1} A^n_{k'} X_{tn}, \qquad (4)$$
and improving the estimate of A for a given B with the multiplicative update
$$A_{nk} \leftarrow A_{nk}\,\sqrt{\frac{(X_{nt}B_{kt})^{+} + A_{nk'}\,(B_{k't}B_{kt})^{-}}{(X_{nt}B_{kt})^{-} + A_{nk'}\,(B_{k't}B_{kt})^{+}}}. \qquad (5)$$
In these expressions, $k'$ is a summation index; $(M)^{-1}$ stands for the matrix inverse of M; and $(M)^{+} = \frac12(|M| + M)$ and $(M)^{-} = \frac12(|M| - M)$ are to be applied to each element of matrix M. The multiplicative update (5) ensures that A remains non-negative in each step; while, barring constraints for B, the optimum solution for B for a given A is found in a single step with (4).
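A minimal NumPy sketch of alternating the updates (4) and (5) follows (all names are ours; a small ε guards the division):

```python
import numpy as np

def seminmf(X, K, n_iter=200, eps=1e-9, rng=np.random.default_rng(0)):
    """Semi-NMF: X (N, T) ~ A (N, K) @ B (K, T), with A >= 0, B unconstrained."""
    N, T = X.shape
    A = rng.uniform(size=(N, K))
    pos = lambda M: (np.abs(M) + M) / 2
    neg = lambda M: (np.abs(M) - M) / 2
    for _ in range(n_iter):
        # Eq. (4): unconstrained least squares for B given A (pinv for robustness)
        B = np.linalg.pinv(A.T @ A) @ A.T @ X
        # Eq. (5): multiplicative update keeping A non-negative
        XB, BB = X @ B.T, B @ B.T
        A *= np.sqrt((pos(XB) + A @ neg(BB)) / (neg(XB) + A @ pos(BB) + eps))
    return A, B
```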
3 Shift-invariant semi-NMF

3.1 Formulation of the model for a 1D sequence

Consider now the case where the data is given as a 1-dimensional time sequence $X_t$. In the course of time, various events of unknown identity and variable amplitude appear in this signal. We describe an event of type k with a template $B_{kl}$ of length L. Time index l now represents a time lag measured from the onset of the template. An event can occur at any point in time, say at time sample n, and it may have a variable amplitude. In addition, we do not know a priori what the event type is and so we assign to each time sample n and each event type k an amplitude $A_{nk} \ge 0$. The goal is to find the templates B and amplitudes A that explain the data. In this formulation of the model, the timing of an event is given by a non-zero sample in the amplitude matrix A. Ideally, each event is identified uniquely and is well localized in time. This means that for a given n the estimated amplitudes are positive for only one k, and neighboring samples in time have zero amplitudes. This new model can be written as
$$\hat X_t = \sum_n A^k_n B_{k,t-n} \qquad (6)$$
$$\phantom{\hat X_t} = \sum_n \sum_l A^k_n\,\delta_{n,t-l}\,B_{kl}. \qquad (7)$$
The Kronecker delta $\delta_{tl}$ was used to induce the desired shifts n. We can dispense with the cumbersome shift in the index if we introduce
$$\tilde A^{kl}_t = \sum_n A^k_n\,\delta_{n,t-l}. \qquad (8)$$
The tensor $\tilde A^{kl}_t$ represents a block Toeplitz matrix, with K blocks of dimension T × L. Each block implements a convolution of the k-th template $B_{kl}$ with the amplitude signal $A_{nk}$. With this definition the model is written now simply as
$$\hat X_t = \tilde A^{kl}_t B_{kl}, \qquad (9)$$
with $A_{nk} \ge 0$. We will also require a unit-norm constraint on the K templates in B, namely $B_{kl}B^{kl} = 1$, to disambiguate the arbitrary scale in the product of A and B.
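In code, the action of Ã in Eq. (9) is simply a sum of K convolutions of the amplitude signals with the templates; a minimal sketch (all names are ours):

```python
import numpy as np

def reconstruct(A, B):
    """A: (T, K) non-negative amplitudes; B: (K, L) templates."""
    T, K = A.shape
    L = B.shape[1]
    X_hat = np.zeros(T + L - 1)
    for k in range(K):
        # each column of A is convolved with the corresponding template
        X_hat += np.convolve(A[:, k], B[k])
    return X_hat[:T]  # trim to the observed length T
```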
3.2 Optimization criterion with sparseness prior

Under the assumption that the data represent a small set of well-localized events, matrix A should consist of a sparse series of pulses, the other samples having zero amplitude. To favor solutions having this property, we use a generalized Gaussian distribution as prior probability for the amplitudes. Assuming Gaussian white noise, the new cost function given by the negative log-posterior reads (up to a scaling factor)
$$E = \frac12\,\|X - \hat X\|^2_2 + \lambda\,\|A\|^\beta_\beta \qquad (10)$$
$$\phantom{E} = \frac12\sum_t\Big(X_t - \tilde A^{kl}_t B_{kl}\Big)^2 + \lambda\sum_{nk} A^{\beta}_{nk}, \qquad (11)$$
where $\|\cdot\|_p$ denotes the $L_p$ norm (or quasi-norm for $0 < p < 1$). The shape parameter β of the generalized Gaussian distribution controls the odds of observing low versus high amplitude values and should be chosen based on the expected rate of events. For our data we mostly choose β = 1/4. The parameter λ is a normalization constant which depends on the power of the noise, $\sigma^2_N$, and the power of the amplitudes, $\sigma^2_A$, with $\lambda = \frac{\sigma^2_N}{\sigma^\beta_A}\,\big(\Gamma(3/\beta)/\Gamma(1/\beta)\big)^{\beta/2}$.
3.3 A update

The update for A which minimizes this cost function is similar to update (5) with some modifications. In (5), the amplitudes A can be treated as a matrix of dimensions T × K and each update can be applied separately for every n. Here the problem is no longer separable in n and we need to treat A as a 1 × TK matrix. B is now a TK × T matrix of shifted templates defined as $\tilde B_{nkt} = B_{k,t-n}$. The new update equation is similar to (5), but differs in the $BB^T$ term:
$$A_{nk} \leftarrow A_{nk}\,\sqrt{\frac{\big(\tilde X^{l}_{n} B_{kl}\big)^{+} + A^{n'k'}\big(\tilde B_{n'k'}\tilde B_{nk}\big)^{-}}{\big(\tilde X^{l}_{n} B_{kl}\big)^{-} + A^{n'k'}\big(\tilde B_{n'k'}\tilde B_{nk}\big)^{+} + \lambda\beta A^{\beta-1}_{nk}}}. \qquad (12)$$
The summation in the $BB^T$ term is over t, and is 0 most of the time when the events do not overlap. We also defined $\tilde X^{l}_{n} = X_{n+l}$, and the time index in the summation $\tilde X^{l}_{n} B_{kl}$ extends only over lags l from 0 to L − 1. To limit the memory cost of this operation, we implemented it by computing only the non-zero parts of the TK × TK matrix $BB^T$ as 2L − 1 blocks of size K × K. The extra term in the denominator of (12) is the gradient of the sparseness term in (11). A convergence proof for (12) can be obtained by modifying the convergence proof of the semi-NMF algorithm in [2] to include the extra $L_\beta$ norm as penalty term. The proof relies on a new inequality on the $L_\beta$ norm recently introduced by Kameoka to prove the convergence of his complex-NMF framework [5].
3.4 B update

The templates B that minimize the square modeling error, i.e., the first term of the cost function (11), are given by a least-squares solution which now writes:
$$B_{kl} = \big(\tilde A^{t}_{kl}\,\tilde A_{tk'l'}\big)^{-1}\,\tilde A^{k'l'}_{t}\,X^{t}. \qquad (13)$$
The matrix inverse is now over a matrix of LK by LK elements. Note that the sparseness prior will act to reduce the magnitude of A. Any scaling of A can be compensated by a corresponding inverse scaling of B so that the first term of the cost function remains unaffected. The unit-norm constraint for the templates B therefore prevents A from shrinking arbitrarily.
3.5 Normalization

The normalization constraint of the templates B can be implemented using Lagrange multipliers, leading to the constrained least squares solution:
$$B_{kl} = \big(\tilde A^{t}_{kl}\,\tilde A_{tk'l'} + \Lambda_{kl,k'l'}\big)^{-1}\,\tilde A^{k'l'}_{t}\,X^{t}. \qquad (14)$$
Here, $\Lambda_{kl,k'l'}$ represents a diagonal matrix of size KL × KL with K different Lagrange multipliers as parameters that need to be adjusted so that $B_{kl}B^{kl} = 1$ for all k. This can be done with a Newton–Raphson root search of the K functions $f_k(\Lambda) = B_{kl}B^{kl} - 1$. The K-dimensional search for the Lagrange multipliers in Λ can be interleaved with updates of A and B. For simplicity however, in our first implementation we used the unconstrained least squares solution (Λ = 0) and renormalized B and A every 10 iterations.
4 Performance evaluations
We evaluated the algorithm on synthetic and real data. Synthetic data are used to provide
a quantitative evaluation of performance as a function of SNR and the similarity of different
templates. The algorithm is then applied to extracellular recordings of neuronal spiking
activity and we evaluate its ability to recover two distinct spike types that are typically
superimposed in this data.
Figure 1: Example of synthetic spike trains and estimated model parameters at an SNR
of 2 (6 dB). Top left: synthetic data. Bottom left: synthetic parameters (templates B and
weight matrices A). Top right: reconstructed data. Bottom right: estimated parameters.
4.1 Quantitative evaluation on synthetic data

The goal of these simulations is to measure performance based on known truth data. We report detection rate, false-alarm rate, and classification error. In addition we report how accurately the templates have been recovered. We generated synthetic spike trains with two types of "spikes" and added Gaussian white noise. Figure 1 shows an example for SNR = σ_A/σ_N = 2 (or 6 dB). The two sets of panels show the templates B (original on the left and recovered on the right), amplitudes A (same as above) and noisy data X (left) and estimated X̂ (right). The figure shows the model parameters which resulted in a minimum cost. Clearly, for this SNR the templates have been recovered accurately and their occurrences within the waveform have been found with only a few missing events.

Performance as a function of varying SNR is shown in Figure 2. Detection rate is measured as the number of events recovered over the total number of events in the original data. False-alarms occur when noise is interpreted as actual events. Presence or absence of a recovered event is determined by comparing the original pulse train with the reconstructed pulse train A (channel number k is ignored). Templates in this example have a correlation time (3 dB down) of 2-4 samples and so we tolerate a misalignment of events of up to ±2 samples. We simulated 30 events with amplitudes uniformly distributed in [0, 1]. The algorithm tends to miss smaller events with amplitudes comparable to the noise amplitude. To capture this effect, we also report a detection rate that is weighted by event amplitude. Some events may be detected but assigned to the wrong template. We therefore report also classification performance. Finally, we report the goodness of fit as R² for the templates B and the continuous valued amplitudes A for the events that are present in the original data.

Note that the proposed algorithm implements implicitly a clustering and classification process. Obviously, the performance of this type of unsupervised clustering will degrade as the templates become more and more similar. Figure 2 shows the same performance numbers as a function of the similarity of the templates (without additive noise). A similarity of 0 corresponds to the templates shown as examples in Figure 1 (these are almost orthogonal with a cosine of 74°), and similarity 1 means identical templates. Evidently the algorithm is most reliable when the target templates are dissimilar.
4.2 Analysis of extracellular recordings
The original motivation for this algorithm was to analyze extracellular recordings from
single electrodes in the guinea pig cochlear nucleus. Spherical and globular bushy cells in
the anteroventral cochlear nucleus (AVCN) are assumed to function as reliable relays of
spike trains from the auditory nerve, with "primary-like" responses that resemble those of auditory nerve fibers. Every incoming spike evokes a discharge within the outgoing axon [6].
Figure 2: Left graph: performance as a function of SNR. Error bars represent standard
deviation over 100 repetitions with varying random amplitudes and random noise. Top left:
detection rate. Top center: weighted detection rate. Top right: misclassification rate (events attributed to the wrong template). Bottom left: false alarm rate (detected events which do not correspond to an event in the original data). Bottom center: R² of the templates B. Bottom right: R² of the amplitudes A. Right graph: same as a function of similarity
between templates.
However, recent observations give a more nuanced picture, suggesting that the post-synaptic
spike may sometimes be suppressed according to a process that is not well understood [7].
Extracellular recordings from primary-like cells within AVCN with a single electrode typically show a succession of events made up of three sub-events: a small pre-synaptic spike
from the large auditory nerve fiber terminal, a medium-sized post-synaptic spike from the initial segment of the axon where it is triggered (the IS spike), and a large-sized spike produced by back-propagation into the soma and dendrites of the cell (the soma-dendritic or SD spike) (Fig. 3). Their relative amplitudes depend upon the position of the electrode tip relative to the cell. Our aim is to isolate each of these components to understand the process by which the SD spike is sometimes suppressed. The events may overlap in time (in particular the SD spike always overlaps with an IS spike), with varying positive amplitudes. They are temporally compact, on the order of a millisecond, and they occur repeatedly but sparsely throughout the recording, with positive amplitudes. The assumptions of our algorithm are met by these data, as well as by multi-unit recordings reflecting the activity of several neurons (the "spike sorting problem").

In the portions of our data that are sufficiently sparse (spontaneous activity), the components may be separated by an ad-hoc procedure: (a) trigger on the high-amplitude IS-soma complexes and set to zero, (b) trigger on the remaining isolated IS spikes and average to derive an IS spike template (the pre-synaptic spike is treated as part of the IS spike), (c) find the best match (in terms of regression) of the initial portion of the template to the initial portion of each IS-SD complex, (d) subtract the matching waveform to isolate the SD spikes, realign, and average to derive an SD spike template. The resulting templates are shown in Fig. 3 (top right). This ad-hoc procedure is highly dependent on prior assumptions, and we wished to have a more general and "agnostic" method to apply to a wider range of situations.

Figure 3 (bottom) shows the result of our automated algorithm. The automatically recovered spike templates seem to capture a number of the key features. Template 1, in blue, resembles the SD spike, and template 2, in red, is similar to the IS spike. The SD spikes are larger and have sharper peaks as compared to the IS spikes, while the IS spikes have an initial peak at 0.7 ms leading the main spike. The larger size of the extracted spikes corresponding to template 1 is correctly reflected in the histogram of the recovered amplitudes. However the estimated spike shapes are inaccurate. The main difference is in the small peak preceding template 1. This is perhaps to be expected as the SD spike is always preceded in the raw data by a smaller IS spike. The expected templates were very similar (with a cosine of 38° as estimated from the manually extracted spikes), making the task particularly difficult.
Figure 3: Experimental results on extracellular recordings. Top left: reconstructed waveform
(blue) and residual between the original data and the reconstructed waveform (red). Top
right: templates B estimated manually from the data. Bottom left: estimated templates
B. Bottom right: distribution of estimated amplitudes A. The SD spikes (blue) generally
occur with larger amplitudes than the IS spikes (red).
4.3 Implementation details
As with the original NMF and semi-NMF algorithms, the present algorithm is only locally
convergent. To obtain good solutions, we restart the algorithm several times with random
initializations for A (drawn independently from the uniform distribution in [0, 1]) and select
the solution with the maximum posterior likelihood or minimum cost (11). In addition to
these multiple restarts, we use a few heuristics that are motivated by the desired result
of spike detection. We can thus prevent the algorithm from converging to some obviously
suboptimal solutions:
Re-centering the templates: We noticed that local minima with poor performance typically occurred when the templates B were not centered within the L lags. In those cases
the main peaks could be adjusted to fit the data, but the portion of the template that
extends outside the window of L samples could not be adjusted. To prune these suboptimal
solutions, it was sufficient to center the templates during the updates while shifting the
amplitudes accordingly.
Pruning events: We observed that spikes tended to generate non-zero amplitudes in A
in clusters of 1 to 3 samples. After convergence we compact these to pulses of 1-sample
duration located at the center of these clusters. Spike amplitude was preserved by scaling
the pulse amplitudes to match the sum of amplitudes in the cluster.
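A minimal sketch of this pruning step (a Python illustration with a hypothetical helper name; we assume the amplitudes of one template form a 1-D non-negative array):

    import numpy as np

    def prune_events(a, max_gap=1):
        """Compact clusters of adjacent non-zero amplitudes into single pulses.

        a: 1-D array of non-negative amplitudes for one template (one row of A).
        Each cluster of adjacent non-zero samples is replaced by one pulse at
        the cluster center whose amplitude is the sum over the cluster."""
        out = np.zeros_like(a)
        nz = np.flatnonzero(a)
        if nz.size == 0:
            return out
        # split indices into clusters of consecutive samples
        breaks = np.where(np.diff(nz) > max_gap)[0] + 1
        for cluster in np.split(nz, breaks):
            center = cluster[(len(cluster) - 1) // 2]
            out[center] = a[cluster].sum()  # preserve total spike amplitude
        return out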
Re-training with a less conservative sparseness constraint: To ensure that templates B are not affected by noise, we initially train the algorithm with a strong penalty term (large λ, effectively assuming a strong noise power σ_N²). Only spikes with large amplitudes remain after convergence, and the templates are determined by only those strong spikes that have high SNR. After extracting the templates accurately, we retrain the model amplitudes A while keeping the templates B fixed, assuming now a weaker noise power (smaller λ).
As a result of these steps, the algorithm converged frequently to good solutions (approximately 50% of the time on the simulated data). The performance reported here represents
the results with minimum error after 6 random restarts.
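The restart logic itself is straightforward. The sketch below assumes a function fit_semi_nmf implementing the update iterations and returning the cost (11); both the name and the signature are placeholders, not the authors' interface:

    import numpy as np

    def fit_with_restarts(X, K, fit_semi_nmf, n_restarts=6, seed=0):
        """Run the locally convergent algorithm several times from random
        initializations of A and keep the minimum-cost solution.

        fit_semi_nmf(X, A0) is assumed to return (A, B, cost); the shape of
        A0 (here N x K) is also an assumption about the model layout."""
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_restarts):
            A0 = rng.uniform(0.0, 1.0, size=(X.shape[0], K))  # uniform in [0, 1]
            A, B, cost = fit_semi_nmf(X, A0)
            if best is None or cost < best[2]:
                best = (A, B, cost)
        return best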
5 Discussion and outlook
Alternative models: The present 1D formulation of the problem is similar to that of
Morten Mørup [4], who presented a 2D version of this model that is limited to non-negative
templates. We have also derived a version of the model with observations X arranged as
a matrix, as well as a version in which event timing is encoded explicitly as time delays τn
following [8]. We are omitting these alternative formulations here for the sake of brevity.
Alternative priors: In addition to the generalized Gaussian prior, we tested also Gaussian
process priors [9] to encourage orthogonality between the k sequences and refractoriness in
time. However, we found that the quadratic expression of a Gaussian process competed
with the Lp sparseness term. In the future, we intend to combine both criteria by allowing
for correlations in the generalized Gaussian. The corresponding distributions are known
as elliptically symmetric densities [10] and the corresponding process is called a spherically
invariant random process, e.g., [11].
Sparseness and dimensionality reduction: As with many linear decomposition methods, a key feature of the algorithm is to represent the data within a small linear subspace.
This is particularly true for the semi-NMF algorithm since, provided a sufficiently large K
and without enforcing a sparsity constraint, the positivity constraint on A actually amounts
to no constraint at all (identical templates with opposite sign can accomplish the same
as allowing negative A). For instance, without sparseness constraint on the amplitudes, a
trivial solution in our examples above would be a template B_{1l} with a single positive spike somewhere and another template B_{2l} with a single negative spike, and all the time course encoded in A_{n1} and A_{n2}.
MISO identification: The identifiability problem is compounded by the fact that the estimation of the templates B in the present formulation represents a multiple-input single-output (MISO) system identification problem. In the general case, MISO identification is known to be under-determined [12]. In the present case, the ambiguities of MISO identification may be limited due to the fact that we allow only for a limited system length L as
compared to the number of samples N . Essentially, as the number of examples increases
with increasing length of the signal X, the ambiguity in B is reduced.
These issues will be addressed in future work.
References
[1] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, pp. 3397-3415, 1993.
[2] C. Ding, T. Li, and M. I. Jordan, "Convex and semi-nonnegative matrix factorization for clustering and low-dimension representation," Lawrence Berkeley National Laboratory, Tech. Rep. LBNL-60428, 2006.
[3] T. Li and C. Ding, "The relationships among various nonnegative matrix factorization methods for clustering," in Proc. ICDM, 2006, pp. 362-371.
[4] M. Mørup, M. N. Schmidt, and L. K. Hansen, "Shift invariant sparse coding of image and music data," Technical University of Denmark, Tech. Rep. IMM2008-04659, 2008.
[5] H. Kameoka, N. Ono, K. Kashino, and S. Sagayama, "Complex NMF: A new sparse representation for acoustic signals," in Proc. ICASSP, Apr. 2009.
[6] P. X. Joris, L. H. Carney, P. H. Smith, and T. C. T. Yin, "Enhancement of neural synchronization in the anteroventral cochlear nucleus. I. Responses to tones at the characteristic frequency," J. Neurophysiol., vol. 71, pp. 1022-1036, 1994.
[7] S. Arkadiusz, M. Sayles, and I. M. Winter, "Spike waveforms in the anteroventral cochlear nucleus revisited," in ARO Midwinter Meeting, Abstract #678, 2008.
[8] M. Mørup, K. H. Madsen, and L. K. Hansen, "Shifted non-negative matrix factorization," in Proc. MLSP, 2007, pp. 139-144.
[9] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, ser. Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, Jan. 2006.
[10] K. Fang, S. Kotz, and K. Ng, Symmetric Multivariate and Related Distributions. London: Chapman and Hall, 1990.
[11] M. Rangaswamy, D. Weiner, and A. Oeztuerk, "Non-Gaussian random vector identification using spherically invariant random processes," IEEE Trans. Aerospace and Electronic Systems, vol. 29, no. 1, pp. 111-123, Jan. 1993.
[12] J. Benesty, J. Chen, and Y. Huang, Microphone Array Signal Processing. Berlin, Germany: Springer-Verlag, 2008.
Covariance Estimation for High Dimensional Data
Vectors Using the Sparse Matrix Transform
Guangzhi Cao
Charles A. Bouman
School of Electrical and Computer Enigneering
Purdue University
West Lafayette, IN 47907
{gcao, bouman}@purdue.edu
Abstract
Covariance estimation for high dimensional vectors is a classically difficult problem in statistical analysis and machine learning. In this paper, we propose a
maximum likelihood (ML) approach to covariance estimation, which employs a
novel sparsity constraint. More specifically, the covariance is constrained to have
an eigen decomposition which can be represented as a sparse matrix transform
(SMT). The SMT is formed by a product of pairwise coordinate rotations known
as Givens rotations. Using this framework, the covariance can be efficiently estimated using greedy minimization of the log likelihood function, and the number
of Givens rotations can be efficiently computed using a cross-validation procedure. The resulting estimator is positive definite and well-conditioned even when
the sample size is limited. Experiments on standard hyperspectral data sets show
that the SMT covariance estimate is consistently more accurate than both traditional shrinkage estimates and recently proposed graphical lasso estimates for a
variety of different classes and sample sizes.
1 Introduction
Many problems in statistical pattern recognition and analysis require the classification and analysis
of high dimensional data vectors. However, covariance estimation for high dimensional vectors is
a classically difficult problem because the number of coefficients in the covariance grows as the
dimension squared [1, 2]. This problem, sometimes referred to as the curse of dimensionality [3],
presents a classic dilemma in statistical pattern analysis and machine learning.
In a typical application, one measures n versions of a p dimensional vector. If n < p, then the sample
covariance matrix will be singular with p ? n eigenvalues equal to zero. Over the years, a variety
of techniques have been proposed for computing a nonsingular estimate of the covariance. For
example, regularized and shrinkage covariance estimators [4, 5, 6] are examples of such techniques.
In this paper, we propose a new approach to covariance estimation, which is based on constrained
maximum likelihood (ML) estimation of the covariance [7]. In particular, the covariance is constrained to have an eigen decomposition which can be represented as a sparse matrix transform
(SMT) [8, 9]. The SMT is formed by a product of pairwise coordinate rotations known as Givens
rotations [10]. Using this framework, the covariance can be efficiently estimated using greedy minimization of the log likelihood function, and the number of Givens rotations can be efficiently computed using a cross-validation procedure. The estimator obtained using this method is always positive definite and well-conditioned even when the sample size is limited.
In order to validate our model, we perform experiments using a standard set of hyperspectral data
[11], and we compare against both traditional shrinkage estimates and recently proposed graphical
lasso estimates [12] for a variety of different classes and sample sizes. Our experiments show that,
for this example, the SMT covariance estimate is consistently more accurate. The SMT method
also has a number of other advantages. It seems to be particularly good when estimating small
eigenvalues and their associated eigenvectors. The cross-validation procedure used to estimate the
SMT model order requires little additional computation, and the resulting eigen decomposition can
be computed with very little computation (on the order of p² operations).
2 Covariance estimation for high dimensional vectors
In the general case, we observe a set of n vectors, y1 , y2 , ? ? ? , yn , where each vector, yi , is p dimensional. Without loss of generality, we assume yi has zero mean. We can represent this data as the
following p × n matrix

Y = [y1, y2, ..., yn].   (1)
If the vectors yi are identically distributed, then the sample covariance is given by
S = (1/n) Y Y^t,   (2)
and S is an unbiased estimate of the true covariance matrix, with R = E[yi yi^t] = E[S].
While S is an unbiased estimate of R, it is also singular when n < p. This is a serious deficiency
since as the dimension p grows, the number of vectors needed to estimate R also grows. In practical
applications, n may be much smaller than p which means that most of the eigenvalues of R are
erroneously estimated as zero.
A variety of methods have been proposed to regularize the estimate of R so that it is not singular.
Shrinkage estimators are a widely used class of estimators which regularize the covariance matrix by
shrinking it toward some target structures [4, 5, 13]. Shrinkage estimators generally have the form
R̂ = αD + (1 - α)S, where D is some positive definite matrix. Some popular choices for D are the
identity matrix (or its scaled version) [5, 13] and the diagonal entries of S, i.e. diag(S) [5, 14]. In
both cases, the shrinkage intensity α can be estimated using cross-validation or bootstrap methods.
Recently, a number of methods have been proposed for regularizing the estimate by making either
the covariance or its inverse sparse [6, 12]. For example, the graphical lasso method enforces sparsity
by imposing an L1 norm constraint on the inverse covariance [12]. Banding or thresholding can also
be used to obtain a sparse estimate of the covariance [15].
2.1 Maximum likelihood covariance estimation
Our approach will be to compute a constrained maximum likelihood (ML) estimate of the covariance
R, under the modeling assumption that eigenvectors of R may be represented as a sparse matrix
transform (SMT) [8, 9]. To do this, we first decompose R as
R = E Λ E^t,   (3)

where E is the orthonormal matrix of eigenvectors and Λ is the diagonal matrix of eigenvalues.
Then we will estimate the covariance by maximizing the likelihood of the data Y subject to the
constraint that E is an SMT. By varying the order, K, of the SMT, we may then reduce or increase
the regularizing constraint on the covariance.
If we assume that the columns of Y are independent and identically distributed Gaussian random
vectors with mean zero and positive-definite covariance R, then the likelihood of Y given R is given
by
p_R(Y) = (2π)^{-np/2} |R|^{-n/2} exp( -(1/2) tr{Y^t R^{-1} Y} ).   (4)
The log-likelihood of Y is then given by [7]
log p_{(E,Λ)}(Y) = -(n/2) tr{diag(E^t S E) Λ^{-1}} - (n/2) log|Λ| - (np/2) log(2π),   (5)
where R = E Λ E^t is specified by the orthonormal eigenvector matrix E and diagonal eigenvalue matrix Λ. Jointly maximizing the likelihood with respect to E and Λ then results in the ML estimates
Figure 1: (a) 8-point FFT. (b) The SMT implementation of ỹ = E y. The SMT can be viewed as a generalization of the FFT and orthonormal wavelet transforms.
of E and Λ, given by [7]

Ê = arg min_{E ∈ Ω} |diag(E^t S E)|   (6)
Λ̂ = diag(Ê^t S Ê),   (7)
where Ω is the set of allowed orthonormal transforms. So we may compute the ML estimate by first
solving the constrained optimization of (6), and then computing the eigenvalue estimates from (7).
2.2 ML estimation of eigenvectors using SMT model
The ML estimate of E can be improved if the feasible set of eigenvector transforms, Ω, can be constrained to a subset of all possible orthonormal transforms. By constraining Ω, we effectively
regularize the ML estimate by imposing a model. However, as with any model-based approach, the
key is to select a feasible set, Ω, which is as small as possible while still accurately modeling the
behavior of the data.
Our approach is to select Ω to be the set of all orthonormal transforms that can be represented as an
SMT of order K [9]. More specifically, a matrix E is an SMT of order K if it can be written as a
product of K sparse orthonormal matrices, so that

E = ∏_{k=0}^{K-1} E_k = E_0 E_1 ⋯ E_{K-1},   (8)
where every sparse matrix, Ek , is a Givens rotation operating on a pair of coordinate indices (ik , jk )
[10]. Every Givens rotation Ek is an orthonormal rotation in the plane of the two coordinates, ik
and jk , which has the form
E_k = I + Θ(i_k, j_k, θ_k),   (9)

where Θ(i_k, j_k, θ_k) is defined as

[Θ]_ij =  cos(θ_k) - 1   if i = j = i_k or i = j = j_k
          sin(θ_k)       if i = i_k and j = j_k
         -sin(θ_k)       if i = j_k and j = i_k
          0              otherwise.   (10)
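As an illustration, the rotation of (9)-(10) can be materialized as a dense matrix (a Python sketch for clarity only; in practice E_k acts as a sparse update on two coordinates):

    import numpy as np

    def givens_rotation(p, i, j, theta):
        """Return the p x p Givens rotation E_k = I + Theta(i, j, theta) of (9)-(10)."""
        E = np.eye(p)
        c, s = np.cos(theta), np.sin(theta)
        E[i, i] = c          # cos(theta) - 1 added to the diagonal 1
        E[j, j] = c
        E[i, j] = s
        E[j, i] = -s
        return E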
Figure 1(b) shows the flow diagram for the application of an SMT to a data vector y. Notice that each 2D rotation, E_k, plays a role analogous to a "butterfly" used in a traditional fast Fourier transform (FFT) [16] in Fig. 1(a). However, unlike an FFT, the organization of the butterflies in an SMT is unstructured, and each butterfly can have an arbitrary rotation angle θ_k. This more general structure
allows an SMT to implement a larger set of orthonormal transformations. In fact, the SMT can
be used to represent any orthonormal wavelet transform because, using the theory of paraunitary
wavelets, orthonormal wavelets can be represented as a product of Givens rotations and delays [17]. More generally, when K = p(p - 1)/2, the SMT can be used to exactly represent any p × p orthonormal transformation [7].
Using the SMT model constraint, the ML estimate of E is given by

Ê = arg min_{E = ∏_{k=0}^{K-1} E_k} |diag(E^t S E)|.   (11)
Unfortunately, evaluating the constrained ML estimate of (11) requires the solution of an optimization problem with a nonconvex constraint. So evaluation of the globally optimal solutions is difficult.
Therefore, our approach will be to use greedy minimization to compute a locally optimal solution to
(11). The greedy minimization approach works by selecting each new butterfly Ek to minimize the
cost, while fixing the previous butterflies, El for l < k.
This greedy optimization algorithm can be implemented with the following simple recursive procedure. We start by setting S_0 = S to be the sample covariance and initialize k = 0. Then we apply the following two steps for k = 0 to K - 1:

E*_k = arg min_{E_k} |diag(E_k^t S_k E_k)|   (12)
S_{k+1} = E*_k^t S_k E*_k.   (13)
The resulting values of Ek? are the butterflies of the SMT.
The problem remains of how to compute the solution to (12). In fact, this can be done quite easily
by first determining the two coordinates, ik and jk , that are most correlated,
(i_k, j_k) ← arg min_{(i,j)} ( 1 - [S_k]_{ij}² / ([S_k]_{ii} [S_k]_{jj}) ).   (14)
It can be shown that this coordinate pair, (ik , jk ), can most reduce the cost in (12) among all possible
coordinate pairs [7]. Once ik and jk are determined, we apply the Givens rotation Ek? to minimize
the cost in (12), which is given by
E*_k = I + Θ(i_k, j_k, θ_k),   (15)

where

θ_k = (1/2) atan(-2[S_k]_{i_k j_k}, [S_k]_{i_k i_k} - [S_k]_{j_k j_k}).   (16)
By iterating (12) and (13) K times, we obtain the constrained ML estimate of E, given by

Ê = ∏_{k=0}^{K-1} E*_k.   (17)
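The recursion (12)-(17) maps almost directly to code. The following sketch is our own numpy illustration (not the authors' implementation); each rotation is applied as a sparse two-sided update of S_k:

    import numpy as np

    def smt_estimate(S, K):
        """Greedy SMT covariance estimation, following (12)-(17).

        Returns the Givens rotations (i_k, j_k, theta_k) and the eigenvalue
        estimates diag(S_K) as in (7); a sketch, not an optimized solver."""
        Sk = S.copy()
        rotations = []
        for _ in range(K):
            # (14): choose the most correlated coordinate pair
            C = Sk ** 2 / np.outer(np.diag(Sk), np.diag(Sk))
            np.fill_diagonal(C, -np.inf)
            i, j = np.unravel_index(np.argmax(C), C.shape)
            # (16): two-argument arctangent gives the rotation angle
            theta = 0.5 * np.arctan2(-2.0 * Sk[i, j], Sk[i, i] - Sk[j, j])
            # (13): S_{k+1} = E_k^t S_k E_k, applied only to rows/columns i, j
            c, s = np.cos(theta), np.sin(theta)
            G = np.array([[c, s], [-s, c]])     # 2x2 block of E_k on (i, j)
            idx = [i, j]
            Sk[idx, :] = G.T @ Sk[idx, :]
            Sk[:, idx] = Sk[:, idx] @ G
            rotations.append((i, j, theta))
        return rotations, np.diag(Sk).copy()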
The model order, K, can be determined by a simple cross-validation procedure. For example, we
can partition the data into three subsets, and K is chosen to maximize the average likelihood of the
left-out subsets given the estimated covariance using the other two subsets. Once K is determined,
the proposed covariance estimator is re-computed using all the data and the estimated model order.
The SMT covariance estimator obtained as above has some interesting properties. First, it is positive
definite even for the limited sample size n < p. Also, it is permutation invariant, that is, the
covariance estimator does not depend on the ordering of the data. Finally, the eigen decomposition
E t y can be computed very efficiently by applying the K sparse rotations in sequence.
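For example, Ê^t y can be applied with on the order of K multiplies rather than a dense p × p product; a sketch using the rotation list produced above:

    import numpy as np

    def smt_transform(y, rotations):
        """Compute E^t y using the K sparse rotations (about 4K multiplies)."""
        y = np.array(y, dtype=float, copy=True)
        for i, j, theta in rotations:        # E^t = E_{K-1}^t ... E_1^t E_0^t
            c, s = np.cos(theta), np.sin(theta)
            yi, yj = y[i], y[j]
            y[i] = c * yi - s * yj           # rows i, j of E_k^t
            y[j] = s * yi + c * yj
        return y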
2.3 SMT Shrinkage Estimator
In some cases, the accuracy of the SMT estimator can be improved by shrinking it towards the sample covariance. Let R̂_SMT represent the SMT covariance estimator. Then the SMT shrinkage estimate (SMT-S) can be obtained as

R̂_SMT-S = α R̂_SMT + (1 - α) S,   (18)

where the parameter α can be computed using cross-validation. Notice that

p_{R̂_SMT-S}(Y) = p_{Ê^t R̂_SMT-S Ê}(Ê^t Y) = p_{α Λ̂ + (1-α) Ê^t S Ê}(Ê^t Y),   (19)

so cross-validation can be efficiently implemented as in [5].
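A sketch of the SMT-S combination of (18), assuming the rotations and eigenvalues produced by the greedy procedure above (a dense E is rebuilt only for clarity):

    import numpy as np

    def smt_shrinkage_estimate(S, rotations, eigvals, alpha):
        """SMT-S estimate of (18): alpha * R_SMT + (1 - alpha) * S."""
        p = S.shape[0]
        E = np.eye(p)
        for i, j, theta in rotations:        # E = E_0 E_1 ... E_{K-1}
            c, s = np.cos(theta), np.sin(theta)
            Ei, Ej = E[:, i].copy(), E[:, j].copy()
            E[:, i] = c * Ei - s * Ej
            E[:, j] = s * Ei + c * Ej
        R_smt = (E * eigvals) @ E.T          # E diag(eigvals) E^t
        return alpha * R_smt + (1.0 - alpha) * S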
3 Experimental results
The effectiveness of the SMT covariance estimation depends on how well the SMT model can capture the behavior of real data vectors. Therefore in this section, we compare the performance of the
SMT covariance estimator to commonly used shrinkage and graphical lasso estimators. We do this
comparison using hyperspectral remotely sensed data as our high dimensional data vectors.
The hyperspectral data we use is available with the recently published book [11]. Figure 2(a) shows
a simulated color IR view of an airborne hyperspectral data flightline over the Washington DC Mall.
The sensor system measured the pixel response in 191 effective bands in the 0.4 to 2.4 ?m region of
the visible and infrared spectrum. The data set contains 1208 scan lines with 307 pixels in each scan
line. The image was made using bands 60, 27 and 17 for the red, green and blue colors, respectively.
The data set also provides ground truth pixels for five classes designated as grass, water, roof, street,
and tree. In Fig. 2(a), the ground-truth pixels of the grass class are outlined with a white rectangle.
Figure 2(b) shows the spectrum of the grass pixels, and Fig. 2(c) shows multivariate Gaussian vectors
that were generated using the measured sample covariance for the grass class.
For each class, we computed the ?true? covariance by using all the ground truth pixels to calculate
the sample covariance. The covariance is computed by first subtracting the sample mean vector
for each class, and then computing the sample covariance for the zero mean vectors. The number
of pixels for the ground-truth classes of grass, water, roof, street, and tree are 1928, 1224, 3579,
416, and 388, respectively. In each case, the number of ground truth pixels was much larger than
191, so the true covariance matrices are nonsingular, and accurately represent the covariance of the
hyperspectral data for that class.
3.1 Review of alternative estimators
A popular choice of the shrinkage target is the diagonal of S [5, 14]. In this case, the shrinkage
estimator is given by
R̂ = α diag(S) + (1 - α) S.   (20)
We use an efficient algorithm implementation of the leave-one-out likelihood (LOOL) cross-validation method to choose α, as suggested in [5].
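A simple held-out-likelihood grid search for α is sketched below; note this is a plain cross-validation stand-in for illustration, not the fast LOOL algorithm of [5]:

    import numpy as np

    def gauss_loglik(Y, R):
        """Zero-mean Gaussian log-likelihood of the columns of Y, cf. (4)."""
        n, p = Y.shape[1], R.shape[0]
        _, logdet = np.linalg.slogdet(R)
        quad = np.trace(Y.T @ np.linalg.solve(R, Y))
        return -0.5 * (n * logdet + quad + n * p * np.log(2 * np.pi))

    def choose_alpha(Y_train, Y_val, alphas=np.linspace(0.01, 0.99, 50)):
        """Grid-search the shrinkage intensity of (20) on held-out data."""
        S = Y_train @ Y_train.T / Y_train.shape[1]
        D = np.diag(np.diag(S))
        scores = [gauss_loglik(Y_val, a * D + (1 - a) * S) for a in alphas]
        return alphas[int(np.argmax(scores))]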
An alternative estimator is the graphic lasso (glasso) estimate recently proposed in [12] which is an
L1 regularized maximum likelihood estimate, such that
R̂ = arg max_{R ∈ Ω} log p(Y | R) - ρ ‖R⁻¹‖₁,   (21)
where Ω denotes the set of p × p positive definite matrices and ρ the regularization parameter. We used the R code for glasso that is publicly available online. We found cross-validation estimation of ρ to be difficult, so in each case we manually selected the value of ρ to minimize the Kullback-Leibler distance to the known covariance.
3.2 Gaussian case
First, we compare how different estimators perform when the data vectors are samples from an ideal
multivariate Gaussian distribution. To do this, we first generated zero mean multivariate vectors
with the true covariance for each of the five classes. Next we estimated the covariance using the
four methods, the shrinkage estimator, glasso, SMT and SMT shrinkage estimation. In order to
determine the effect of sample size, we also performed each experiment for a sample size of n = 80,
40, and 20, respectively. Every experiment was repeated 10 times.
In order to get an aggregate assessment of the effectiveness of SMT covariance estimation, we compared the estimated covariance for each method to the true covariance using the Kullback-Leibler
(KL) distance [7]. The KL distance is a measure of the error between the estimated and true distribution. Figure 3(a)(b) and (c) show plots of the KL distances as a function of sample size for the
four estimators. The error bars indicate the standard deviation of the KL distance due to random
variation in the sample statistics. Notice that the SMT shrinkage (SMT-S) estimator is consistently
the best of the four.
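For reference, the KL distance between two zero-mean Gaussian models reduces to a closed form in the covariances; a sketch (assuming this is the convention used in [7]):

    import numpy as np

    def gauss_kl(R_hat, R):
        """KL divergence D( N(0, R_hat) || N(0, R) ) of zero-mean Gaussians."""
        p = R.shape[0]
        M = np.linalg.solve(R, R_hat)            # R^{-1} R_hat
        _, logdet_R = np.linalg.slogdet(R)
        _, logdet_Rhat = np.linalg.slogdet(R_hat)
        return 0.5 * (np.trace(M) - p + logdet_R - logdet_Rhat)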
Figure 2: (a) Simulated color IR view of an airborne hyperspectral data set over the Washington DC Mall [11]. (b) Ground-truth pixel spectra of the grass class, outlined with the white rectangle in (a). (c) Synthesized data spectra using the Gaussian distribution.
Figure 4(a) shows the estimated eigenvalues for the grass class with n = 80. Notice that the eigenvalues of the SMT and SMT-S estimators are much closer to the true values than the shrinkage and
glasso methods. In particular, the SMT estimators generate good estimates, especially for the small
eigenvalues.
Table 1 compares the computational complexity, CPU time and model order for the four estimators.
The CPU time and model order were measured for the Gaussian case of the grass class with n = 80.
Notice that even with the cross validation, the SMT and SMT-S estimators are much faster than
glasso. This is because the SMT transform is a sparse operator. In this case, the SMT uses an
average of K = 495 rotations, which is equal to K/p = 495/191 = 2.59 rotations (or equivalently
multiplies) per spectral sample.
3.3 Non-Gaussian case
In practice, the sample vectors may not be from an ideal multivariate Gaussian distribution. In
order to see the effect of the non-Gaussian statistics on the accuracy of the covariance estimate,
we performed a set of experiments which used random samples from the ground truth pixels as
input. Since these samples are from the actual measured data, their distribution is not precisely
Gaussian. Using these samples, we computed the covariance estimates for the five classes using the
four different methods with sample sizes of n = 80, 40, and 20.
Plots of the KL distances for the non-Gaussian grass case¹ are shown in Fig. 3(d), (e) and (f), and Figure 4(b) shows the estimated eigenvalues for grass with n = 80. Note that the results are similar to those found for the ideal Gaussian case.
4 Conclusion
We have proposed a novel method for covariance estimation of high dimensional data. The new
method is based on constrained maximum likelihood (ML) estimation in which the eigenvector
transformation is constrained to be the composition of K Givens rotations. This model seems to
capture the essential behavior of the data with a relatively small number of parameters. The constraint set is a K dimensional manifold in the space of orthonormal transforms, but since it is not a
linear space, the resulting ML estimation optimization problem does not yield a closed form global
optimum. However, we show that a recursive local optimization procedure is simple, intuitive, and
yields good results.
We also demonstrate that the proposed SMT covariance estimation methods substantially reduce
the error in the covariance estimate as compared to current state-of-the-art estimates for a standard hyperspectral data set. The MATLAB code for SMT covariance estimation is available at:
https://engineering.purdue.edu/~bouman/publications/pub_smt.html.
¹ In fact, these are the KL distances between the estimated covariance and the sample covariance computed from the full set of training data, under the assumption of a multivariate Gaussian distribution.
Figure 3: Kullback-Leibler distance from true distribution versus sample size for various classes ((a)(b)(c): Gaussian case; (d)(e)(f): non-Gaussian case; panels (a)/(d) grass, (b)/(e) water, (c)/(f) street).
Figure 4: The distribution of estimated eigenvalues for the grass class with n = 80: (a) Gaussian case, (b) non-Gaussian case.
Estimator        Complexity (without CV)   CPU time (seconds)                 Model order
Shrinkage Est.   p                         8.6 (with cross-validation)        1
glasso           p³ I                      422.6 (without cross-validation)   4939
SMT              p² + Kp                   6.5 (with cross-validation)        495
SMT-S            p² + Kp                   7.2 (with cross-validation)        496

Table 1: Comparison of computational complexity, CPU time, and model order for various covariance estimators. The complexity is without cross-validation and does not include the computation of the sample covariance (order of np²). The CPU time and model order were measured for the Gaussian case of the grass class with n = 80. I is the number of cycles used in glasso.
Acknowledgments
This work was supported by the National Science Foundation under Contract CCR-0431024. We
would also like to thank James Theiler (J.T.) and Mark Bell for their insightful comments and suggestions.
References
[1] C. Stein, B. Efron, and C. Morris, "Improving the usual estimator of a normal covariance matrix," Dept. of Statistics, Stanford University, Report 37, 1972.
[2] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. Boston, MA: Academic Press, 1990.
[3] A. K. Jain, R. P. Duin, and J. Mao, "Statistical pattern recognition: A review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, 2000.
[4] J. H. Friedman, "Regularized discriminant analysis," Journal of the American Statistical Association, vol. 84, no. 405, pp. 165-175, 1989.
[5] J. P. Hoffbeck and D. A. Landgrebe, "Covariance matrix estimation and classification with limited training data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 7, pp. 763-767, 1996.
[6] P. J. Bickel and E. Levina, "Regularized estimation of large covariance matrices," Annals of Statistics, vol. 36, no. 1, pp. 199-227, 2008.
[7] G. Cao and C. A. Bouman, "Covariance estimation for high dimensional data vectors using the sparse matrix transform," Purdue University, Technical Report ECE 08-05, 2008.
[8] G. Cao, C. A. Bouman, and K. J. Webb, "Fast reconstruction algorithms for optical tomography using sparse matrix representations," in Proceedings of the 2007 IEEE International Symposium on Biomedical Imaging, April 2007.
[9] G. Cao, C. A. Bouman, and K. J. Webb, "Non-iterative MAP reconstruction using sparse matrix representations," submitted to IEEE Trans. on Image Processing.
[10] W. Givens, "Computation of plane unitary rotations transforming a general matrix to triangular form," Journal of the Society for Industrial and Applied Mathematics, vol. 6, no. 1, pp. 26-50, March 1958.
[11] D. A. Landgrebe, Signal Theory Methods in Multispectral Remote Sensing. New York: Wiley-Interscience, 2005.
[12] J. Friedman, T. Hastie, and R. Tibshirani, "Sparse inverse covariance estimation with the graphical lasso," Biostatistics, vol. 9, no. 3, pp. 432-441, Jul. 2008.
[13] M. J. Daniels and R. E. Kass, "Shrinkage estimators for covariance matrices," Biometrics, vol. 57, no. 4, pp. 1173-1184, 2001.
[14] J. Schafer and K. Strimmer, "A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics," Statistical Applications in Genetics and Molecular Biology, vol. 4, no. 1, 2005.
[15] P. J. Bickel and E. Levina, "Covariance regularization by thresholding," Department of Statistics, UC Berkeley, Technical Report 744, 2007.
[16] J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, vol. 19, no. 90, pp. 297-301, April 1965.
[17] A. Soman and P. Vaidyanathan, "Paraunitary filter banks and wavelet packets," in Proc. ICASSP-92, vol. 4, pp. 397-400, Mar. 1992.
Phonetic Classification and Recognition
Using the Multi-Layer Perceptron
Hong C. Leung, James R. Glass,
Michael S. Phillips, and Victor W. Zue
Spoken Language Systems Group
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139, U.S.A.
Abstract
In this paper, we will describe several extensions to our earlier work, utilizing a segment-based approach. We will formulate our segmental framework
and report our study on the use of multi-layer perceptrons for detection
and classification of phonemes. We will also examine the outputs of the
network, and compare the network performance with other classifiers. Our
investigation is performed within a set of experiments that attempts to
recognize 38 vowels and consonants in American English independent of
speaker. When evaluated on the TIMIT database, our system achieves an
accuracy of 56%.
1 Introduction
Thus far, the neural networks research community has placed heavy emphasis on
the problem of pattern classification. In many applications, including speech recognition, one must also address the issue of detection. Thus, for example, one must
detect the presence of phonetic segments as well as classify them. Recently, the
community has moved more towards recognition of continuous speech. A network
is typically used to label every frame of speech in a frame-based recognition system [Franzini 90, Morgan 90, Tebelskis 90].
Our goal is to study and exploit the capability of ANN for speech recognition,
based on the premise that ANN may offer a flexible framework for us to utilize our
improved, albeit incomplete, speech knowledge. As an intermediate milestone, this
paper extends our earlier work on phonetic classification to context-independent
phonetic recognition. Thus we need to locate as well as identify the phonetic units.
Our system differs from the majority of approaches in that a segmental framework is
adopted. The network is used in conjunction with acoustic segmentation procedures
to provide a phonetic string for the entire utterance.
2 Segmental Formulation
In our segmental framework, a phonetic unit is mapped to a segment explicitly
delineated by a begin and end time in the speech signal. This is motivated by the
belief that a segmental framework offers us more flexibility in applying our speech
knowledge than is afforded by a frame-based approach. As a result, a segment-based
approach could ultimately lead to superior modelling of the temporal variations in
the realization of underlying phonological units.
Let Ŝ denote the best sequence of phonetic units in an utterance. To simplify the problem, we assume that p(s_i) = p(s_i | α_j), where s_i stands for the i-th time segment that has one and only one phoneme in it, and α_j stands for the best phoneme label in s_i. Thus the probability of the best sequence, p(Ŝ), is:

p(Ŝ) = max_s ∏_{s_i ∈ s} p(α_j) p(s_i),   1 ≤ j ≤ N,   (1)
where s is any possible sequence of time segments consisting of {s_1, s_2, ...}, p(s_i) is
the probability of a valid time segment, and N is the number of possible phonetic
units. In order to perform recognition, the two probabilities in Equation 1 must be
estimated. The first term, p(α_j), is a set of phoneme probabilities and thus can be viewed as a classification problem. The second term, p(s_i), is a set of probabilities
of valid time regions and thus can be estimated as a segmentation problem.
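Under our reading of Equation 1, the maximization can be carried out with a standard dynamic-programming search over segmentations. The sketch below works in log probabilities; seg_logprob and best_phone_logprob are hypothetical stand-ins for the two trained estimators:

    import math

    def decode(T, seg_logprob, best_phone_logprob):
        """Best segmentation score for an utterance with T boundary candidates.

        seg_logprob(t0, t1): log p(s_i) for a segment spanning [t0, t1).
        best_phone_logprob(t0, t1): max_j log p(alpha_j) for that segment."""
        best = [-math.inf] * (T + 1)
        best[0] = 0.0
        back = [0] * (T + 1)
        for t1 in range(1, T + 1):
            for t0 in range(t1):
                score = best[t0] + seg_logprob(t0, t1) + best_phone_logprob(t0, t1)
                if score > best[t1]:
                    best[t1], back[t1] = score, t0
        return best[T], back   # backtracing through back[] recovers the segments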
2.1 Segmentation
In order to estimate the segment probabilities, p(s_i), in Equation 1, we have formulated segmentation as a boundary classification problem. Let b_l and b_r be the left and right boundaries of a time segment, s_i, respectively, as shown in Figure 1a. Let {b_1, b_2, ..., b_K} be the set of boundaries that might exist within s_i. These boundaries can be proposed by a boundary detector, or they can simply occur at every frame of speech. We define p(s_i) to be the joint probability that the left and right boundaries exist and all other boundaries within s_i do not exist. To reduce the complexity of the problem, assume b_j is statistically independent of b_k for all j ≠ k. Thus,

p(s_i) = p(b_l, b̄_1, b̄_2, ..., b̄_K, b_r) = p(b_l) p(b̄_1) p(b̄_2) ⋯ p(b̄_K) p(b_r),   (2)

where b̄_k denotes the event that the k-th boundary does not exist.
Figure 1: Schematic diagrams for estimation of (a) the segment probability, p(s_i), and (b) the boundary probability, p(b_k). The boundaries can be proposed by a boundary detector, or they can simply occur at every frame. See text.
where p(b_l) and p(b_r) stand for the probability that the left and right boundary exist, respectively, and p(b̄_k) stands for the probability that the k-th boundary does not exist. As a result, the probability of a segment, p(s_i), can be obtained by computing the probabilities of the boundaries subsumed by the segment. As we will
discuss in a later section, by using the time-aligned transcription, we can train the
boundary probabilities in a supervised manner.
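In log form, Equation 2 is simply a sum of boundary log probabilities; a sketch (p_boundary is assumed to hold the classifier's boundary-existence probabilities):

    import math

    def segment_logprob(p_boundary, l, r):
        """log p(s_i) per Equation 2, for a segment with boundaries l and r."""
        logp = math.log(p_boundary[l]) + math.log(p_boundary[r])
        for k in range(l + 1, r):            # interior boundaries must not exist
            logp += math.log(1.0 - p_boundary[k])
        return logp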
2.2 Phonetic Classification
Once the probability of a segment, p(s_i), is obtained, we still need to classify it, i.e., compute the probabilities of the phonetic units in the segment, p(α_j). Again,
the time-aligned transcription can be used to train the probabilities in a supervised
manner. We have discussed this in earlier papers [Leung 89, Leung 90]. In a later
section, we will discuss some of our recent experimental results.
3 Experiments
3.1 Tasks and Corpora
The experiments described in this paper deal with classification and recognition of
38 phonetic labels representing 14 vowels, 3 semivowels, 3 nasals, 8 fricatives, 2
affricates, 6 stops, 1 flap and 1 silence. Within the context of classification, the
networks are given a segment of the speech signal, and are asked to determine
its phonetic identity. Within the context of recognition, the networks are given
an utterance, and are asked to determine the identity and locations of the phonetic units in the utterance. All experiments were based on the sentences in the
TIMIT database [Lamel 86]. As summarized in Table 1, Corpus I contains 1,750 sx
sentences spoken by 350 male and female speakers, resulting in a total of 64,000
phonetic tokens. Corpus II contains 4,400 sx and si sentences spoken by 550 male
and female speakers, resulting in a total of 165,000 phonetic tokens.
3.2 Phonetic Classification
As previously discussed, estimation of the probability p(α_j) in Equation 1 can be
viewed as a classification problem. Many statistical classifiers can be used. We have
Corpus   Set        Speakers   Sentences   Tokens    Type
I        training   300        1,500       55,000    sx
I        testing    50         250         9,000     sx
II       training   500        4,000       150,000   sx/si
II       testing    50         400         15,000    sx/si

Table 1: Corpora I and II extracted from the TIMIT database. Corpus I contains only sx sentences, whereas Corpus II contains both sx and si sentences. The speakers in the testing sets for both Corpus I and Corpus II are the same.
chosen to use the MLP, due to its discriminatory capability, as well as its flexibility
in that it does not make assumptions about specific statistical distributions or
distance metrics. In addition, earlier work shows that the outputs of MLP can
approximate posteriori probabilities [Bourlard 88]. To train the network, we adopt
procedures such as center initialization, input normalization, adaptive gain, and
modular training [Leung 90]. The input representation was identical to that in the
SUMMIT system, and consisted of 82 acoustic attributes [Zue 89]. These segmental
attributes were generated automatically by a search procedure that uses the training
data to determine the settings of the free parameters of a set of generic property
detectors using an optimization procedure [Phillips 88].
3.3 Boundary Classification
In our segmental framework formulated in Equation 1, the main difference between
classification and recognition is the incorporation of a probability for each segment,
p(s_i). As described previously in Equation 2, we have simplified the problem of estimating p(s_i) to one of determining the probability that a boundary exists, p(b_k). To estimate p(b_k), an MLP with two output units is used, one for the valid boundaries and the other for the extraneous boundaries. By referencing the time-aligned phonetic transcription, the desired outputs of the network can be determined. In our current implementation p(b_l) is determined using four abutting segments, as shown in Figure 1b. These segments are proposed by the boundary detector in the SUMMIT system. Let t_l stand for the time at which b_l is located, and s_l stand for the segment between t_l and t_{l+1}, where t_{l+1} > t_l. The boundary probability, p(b_l), is then determined by using the average mean-rate response [Seneff 88] in s_{l-2}, s_{l-1}, s_l, and s_{l+1} as inputs to the MLP. Thus the network has altogether 160 input units.
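For illustration, assuming each segment's average mean-rate response is a 40-dimensional vector (so that four abutting segments give 4 × 40 = 160 inputs — the 40-dimensional figure is our assumption), the input for boundary b_l could be assembled as:

    import numpy as np

    def boundary_input(mean_rate, l):
        """Concatenate the average mean-rate responses of segments
        s_{l-2}, s_{l-1}, s_l, s_{l+1} into one MLP input vector.

        mean_rate[i] is assumed to be a 40-dim vector for segment s_i."""
        return np.concatenate([mean_rate[l - 2], mean_rate[l - 1],
                               mean_rate[l], mean_rate[l + 1]])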
3.4 Results
3.4.1 Phonetic Classification
In the phonetic classification experiments, the system classified a token extracted
from a phonetic transcription that had been aligned with the speech waveform.
Since there was no detection involved in these experiments only substitution errors
were possible.
Corpus   Classifier   Correct   Parameters
I        SUMMIT       70%       2,200
I        Gaussian     70%       128,000
I        MLP          74%       15,000
II       MLP          76%       30,000

Table 2: Phonetic classification results using the SUMMIT classifier, Gaussian classifier, and MLP. Also shown are the number of parameters in the classifiers.
In the first set of experiments, we compared results based on Corpus I, using different classifiers. As Table 2 shows, the baseline speaker-independent classification
performance of SUMMIT on the testing data was 70%. When Gaussian classifiers
with full covariance matrices were used, we found that the performance was also about 70%. Finally, when the MLP is used, a performance of 74% is achieved.
Although the sx sentences were designed to be phonetically balanced, the 1,750
sentences in Corpus I are not distinct. In the second set of experiments, we evaluated
the MLP classifier on Corpus II, which includes both the sx and si sentences.¹ As
shown in Table 2, the classifier achieves 76%.
Parameters: The networks used as described in Table 2 have only 1 hidden layer.
The number of hidden units in the network can be 128 or 256, resulting in 15,000
or 30,000 connections. For comparison, Table 2 also shows the number of parameters for the SUMMIT and Gaussian classifiers. While the SUMMIT classifier requires
only about 2,200 parameters, the Gaussian classifiers require as much as 128,000
parameters, an order of magnitude more than the MLP. These numbers also give us
some idea about the computational requirements for different classifiers, since the
required number of multiplications is about the same as the number of parameters.
Network Outputs: We have chosen the network to estimate the phoneme probabilities. When the network is trained, the target values are either 1 or 0. However, if
the network is over-trained, its output values may approach either 1 or 0, resulting
in poor estimates of the posterior probabilities. Figure 2 shows two distributions
for the output values of the network for 3600 tokens from the test set. Figure 2a
corresponds to the ratio of the highest output value to the sum of the network output values, whereas Figure 2b corresponds to the second highest output value. We
can see that both distributions are quite broad, suggesting that the network often
makes "soft" decisions about the phoneme labels. We feel that this is important
since in speech recognition, we often need to combine scores or probabilities from
different parts of the system.
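The two statistics histogrammed in Figure 2 are easily computed from the raw outputs; a sketch, with outputs assumed to be an N × 38 array of non-negative activations:

    import numpy as np

    def output_statistics(outputs):
        """Return (a) max output / sum of outputs and (b) the second-highest
        output for each token, as histogrammed in Figure 2."""
        sorted_out = np.sort(outputs, axis=1)
        top_ratio = sorted_out[:, -1] / outputs.sum(axis=1)
        second = sorted_out[:, -2]
        return top_ratio, second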
3.4.2 Boundary Classification
We have evaluated the boundary classifier using the training and testing data in
Corpus I. By using 32 hidden units, the network can classify 87% of the boundaries
in the test set correctly.
¹ All the si sentences in TIMIT are distinct.
Figure 2: Histograms for the output values of the network extracted from 3600 samples: (a) the highest output values, and (b) the second highest output values.
Corpus   Classifier   Segment              Accuracy
I        Baseline     Binary Hierarchy     47%
I        MLP          Binary Hierarchy     50%
I        MLP          Stochastic Pruning   54%
II       MLP          Stochastic Pruning   56%

Table 3: Phonetic recognition results using binary hierarchy (dendrogram) and boundary pruning. No duration, bigram, or trigram statistics have been used. Errors include substitutions, deletions, and insertions.
3.4.3 Phonetic Recognition
One of the disadvantages of our segmental framework is that the amount of computation involved can be very significant, since a segment can begin and end at any
frame of an utterance. We have explored various pruning strategies. In this paper,
we will report our results using stochastic pruning and binary hierarchy [Leung 90a].
We have found that such pruning strategies can reduce the amount of computation
by about 3 orders of magnitude.
The results of the phonetic recognition experiments are shown in Table 3. No
duration, bigram, or trigram statistics have been used. The baseline performance
of the current SUMMIT system on Corpus I is 47%, including substitution, deletion,
and insertion errors. When the MLP was used in place of the classifier in the current
SUMMIT system, also using the binary hierarchical representation, the performance
improved to 50%. When the MLP was used with the stochastic pruning technique, the
performance improved to 54%. Finally, by using the network trained and tested on
Corpus II, the performance improved to 56%.
4 Discussion
In summary, we have discussed a segmental approach for phonetic recognition.
We have also examined the outputs of the network, and compared performance
results and computational requirements with different classifiers. We have shown
that decisions made by the network are quite "soft", and that the network yields
results that compare favorably with more traditional classifiers. Future work includes the use
of context-dependent models for phonetic and boundary classification, utilization
of other phonological units, and extension to recognition of continuous speech.
References
[Bourlard 88] Bourlard, H., and C.J. Wellekens, "Links between Markov Models and Multilayer Perceptrons," Advances in Neural Information Processing Systems 1, Morgan Kaufmann, 1988.
[Franzini 90] Franzini, M.A., K.F. Lee, and A. Waibel, "Connectionist Viterbi Training: A New Hybrid Method for Continuous Speech Recognition," Proc. ICASSP-90, Albuquerque, NM, USA, 1990.
[Lamel 86] Lamel, L.F., R.H. Kassel, and S. Seneff, "Speech Database Development: Design and Analysis of the Acoustic Phonetic Corpus," Proc. DARPA Speech Recognition Workshop, 1986.
[Leung 89] Leung, H.C., The Use of Artificial Neural Networks for Phonetic Recognition, Ph.D. Thesis, Mass. Inst. of Tech., 1989.
[Leung 90] Leung, H.C., and V.W. Zue, "Phonetic Classification Using Multi-Layer Perceptrons," Proc. ICASSP-90, Albuquerque, 1990.
[Leung 90a] Leung, H., Glass, J., Phillips, M., and Zue, V., "Detection and Classification of Phonemes Using Context-independent Error Back-Propagation," Proc. International Conference on Spoken Language Processing, Kobe, Japan, 1990.
[Morgan 90] Morgan, N., and H. Bourlard, "Continuous Speech Recognition Using Multilayer Perceptrons with Hidden Markov Models," Proc. ICASSP-90, Albuquerque, NM, USA, 1990.
[Phillips 88] Phillips, M.S., "Automatic Discovery of Acoustic Measurements for Acoustic Classification," J. Acoust. Soc. Amer., Vol. 84, 1988.
[Seneff 88] Seneff, S., "A Joint Synchrony/Mean-Rate Model of Auditory Speech Processing," J. of Phonetics, 1988.
[Tebelskis 90] Tebelskis, J., and A. Waibel, "Large Vocabulary Recognition Using Linked Predictive Neural Networks," Proc. ICASSP-90, Albuquerque, NM, USA, 1990.
[Zue 89] Zue, V., J. Glass, M. Phillips, and S. Seneff, "The MIT SUMMIT Speech Recognition System: A Progress Report," Proceedings of DARPA Speech and Natural Language Workshop, February, 1989.
The Gaussian Process Density Sampler
Ryan Prescott Adams*
Cavendish Laboratory
University of Cambridge
Cambridge CB3 0HE, UK
rpa23@cam.ac.uk

Iain Murray
Dept. of Computer Science
University of Toronto
Toronto, Ontario M5S 3G4
murray@cs.toronto.edu

David J. C. MacKay
Cavendish Laboratory
University of Cambridge
Cambridge CB3 0HE, UK
mackay@mrao.cam.ac.uk
Abstract
We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples
drawn from the GPDS are consistent with exact, independent samples from a fixed
density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using
Markov chain Monte Carlo, which gives samples from the posterior distribution
over density functions and from the predictive distribution on data space. We can
also infer the hyperparameters of the Gaussian process. We compare this density
modeling technique to several existing techniques on a toy problem and a skullreconstruction task.
1 Introduction
We present the Gaussian Process Density Sampler (GPDS), a generative model for probability density functions, based on a Gaussian process. We are able to draw exact and exchangeable data from
a fixed density drawn from the prior. Given data, this generative prior allows us to perform inference of the unnormalized density. We perform this inference by expressing the generative process in
terms of a latent history, then constructing a Markov chain Monte Carlo algorithm on that latent history. The central idea of the GPDS is to allow nonparametric Bayesian density estimation where the
prior is specified via a Gaussian process covariance function that encodes the intuition that "similar data should have similar probabilities."
One way to perform Bayesian nonparametric density estimation is to use a Dirichlet process to
define a distribution over the weights of the components in an infinite mixture model, using a simple
parametric form for each component. Alternatively, Neal [1] generalizes the Dirichlet process itself,
introducing a spatial component to achieve an exchangeable prior on discrete or continuous density
functions with hierarchical characteristics. Another way to define a nonparametric density is to
transform a simple latent distribution through a nonlinear map, as in the Density Network [2] and
the Gaussian Process Latent Variable Model [3]. Here we use the Gaussian process to define a prior
on the density function itself.
2 The prior on densities
We consider densities on an input space X that we will call the data space. In this paper, we assume without loss of generality that X is the d-dimensional real space $\mathbb{R}^d$. We first construct a Gaussian process prior with the data space X as its input and the one-dimensional real space $\mathbb{R}$ as its output. The Gaussian process defines a distribution over functions from X to $\mathbb{R}$. We define a mean function $m(\cdot): X \to \mathbb{R}$ and a positive definite covariance function $K(\cdot, \cdot): X \times X \to \mathbb{R}$. We

* http://www.inference.phy.cam.ac.uk/rpa23/
[Figure 1: four contour plots, panels (a)-(d), each showing a sampled density with data points on axes from -3 to 3.]

Figure 1: Four samples from the GPDS prior are shown, with 200 data samples. The contour lines show the approximate unnormalized densities. In each case the base measure is the zero-mean spherical Gaussian with unit variance. The covariance function was the squared exponential: $K(x, x') = \alpha \exp(-\frac{1}{2}\sum_i \ell_i^{-2}(x_i - x_i')^2)$, with parameters varied as labeled in each subplot: (a) $\ell_x = 1$, $\ell_y = 1$, $\alpha = 1$; (b) $\ell_x = 1$, $\ell_y = 1$, $\alpha = 10$; (c) $\ell_x = 0.2$, $\ell_y = 0.2$, $\alpha = 5$; (d) $\ell_x = 0.1$, $\ell_y = 2$, $\alpha = 5$. $\Phi(\cdot)$ is the logistic function in these plots.
assume that these functions are together parameterized by a set of hyperparameters $\theta$. Given these two functions and their hyperparameters, for any finite subset of X with cardinality N there is a multivariate Gaussian distribution on $\mathbb{R}^N$ [4]. We will take the mean function to be zero.

Probability density functions must be everywhere nonnegative and must integrate to unity. We define a map from a function $g(x): X \to \mathbb{R}$, $x \in X$, to a proper density $f(x)$ via

$$f(x) = \frac{1}{Z_\Phi[g]}\, \Phi(g(x))\, \pi(x) \qquad (1)$$

where $\pi(x)$ is an arbitrary base probability measure on X, and $\Phi(\cdot): \mathbb{R} \to (0, 1)$ is a nonnegative function with upper bound 1. We take $\Phi(\cdot)$ to be a sigmoid, e.g. the logistic function or cumulative normal distribution function. We use the bold notation $\mathbf{g}$ to refer to the function $g(x)$ compactly as a vector of (infinite) length, versus its value at a particular $x$. The normalization constant is a functional of $g(x)$:

$$Z_\Phi[g] = \int dx'\, \Phi(g(x'))\, \pi(x'). \qquad (2)$$
Through the map defined by Equation 1, a Gaussian process prior becomes a prior distribution over normalized probability density functions on X. Figure 1 shows several sample densities from this prior, along with sample data.
3 Generating exact samples from the prior
We can use rejection sampling to generate samples from a common density drawn from the prior described in Section 2. A rejection sampler requires a proposal density that provides an upper bound for the unnormalized density of interest. In this case, the proposal density is $\pi(x)$ and the unnormalized density of interest is $\Phi(g(x))\pi(x)$.

If $g(x)$ were known, rejection sampling would proceed as follows: first generate proposals $\{\tilde{x}_q\}$ from the base measure $\pi(x)$. The proposal $\tilde{x}_q$ would be accepted if a variate $r_q$ drawn uniformly from (0, 1) was less than $\Phi(g(\tilde{x}_q))$. These samples would be exact in the sense that they were not biased by the starting state of a finite Markov chain. However, in the GPDS, $g(x)$ is not known: it is a random function drawn from a Gaussian process prior. We can nevertheless use rejection sampling by "discovering" $g(x)$ as we proceed at just the places we need to know it, by sampling from the prior distribution of the latent function. As it is necessary only to know $g(x)$ at the $\{\tilde{x}_q\}$ to accept or reject these proposals, the samples are still exact. This retrospective sampling trick has been used in a variety of other MCMC algorithms for infinite-dimensional models [5, 6]. The generative procedure is shown graphically in Figure 2.
In practice, we generate the samples sequentially, as in Algorithm 1, so that we may be assured
of having as many accepted samples as we require. In each loop, a proposal is drawn from the
base measure ?(x) and the function g(x) is sampled from the Gaussian process at this proposed
coordinate, conditional on all the function values already sampled. We will call these data the
conditioning set for the function g(x) and will denote the conditioning inputs X and the conditioning
[Figure 2: five panels (a)-(e) on [0, 1], illustrating the sampling steps with the sets $\{\tilde{g}_q\}$, $\{\tilde{x}_q\}$, and $\{r_q\}$ marked, and the accepted region shaded.]

Figure 2: These figures show the procedure for generating samples from a single density drawn from the GP-based prior. (a): Draw Q samples $\{\tilde{x}_q\}^Q$ from the base measure $\pi(x)$, which in this case is uniform on [0, 1]. (b): Sample the function $g(x)$ at the randomly chosen locations, generating the set $\{\tilde{g}_q = g(\tilde{x}_q)\}^Q$. The squashed function $\Phi(g(x))$ is shown. (c): Draw a set of variates $\{r_q\}^Q$ uniformly beneath the bound in the vertical coordinate. (d): Accept only the points whose uniform draws are beneath the squashed function value, i.e. $r_q < \Phi(\tilde{g}_q)$. (e): The accepted points $(\tilde{x}_q, r_q)$ are uniformly drawn from the shaded area beneath the curve and the marginal distribution of the accepted $\tilde{x}_q$ is proportional to $\Phi(g(x))\pi(x)$.
function values G. After the function is sampled, a uniform variate is drawn from beneath the bound
and compared to the ?-squashed function at the proposal location.
The sequential procedure is exchangeable, which means that the probability of the data is identical
under reordering. First, the base measure draws are i.i.d. Second, conditioned on the proposals
from the base measure, the Gaussian process is a simple multivariate Gaussian distribution, which
is exchangeable in its components. Finally, conditioned on the draw from the Gaussian process,
the acceptance/rejection steps are independent Bernoulli samples, and the overall procedure is exchangeable. This property is important because it ensures that the sequential procedure generates
data from the same distribution as the simultaneous procedure described above. More broadly, exchangeable priors are useful in Bayesian modeling because we may consider the data conditionally
independent, given the latent density.
Algorithm 1: Generate P exact samples from the prior
Purpose: Draw P exact samples from a common density on X drawn from the prior in Equation 1
Inputs: GP hyperparameters $\theta$, number of samples to generate P
 1: Initialize empty conditioning sets for the Gaussian process: $X = \emptyset$ and $G = \emptyset$
 2: repeat
 3:   Draw a proposal from the base measure: $\tilde{x} \sim \pi(x)$
 4:   Sample the function from the Gaussian process at $\tilde{x}$: $\tilde{g} \sim \mathrm{GP}(g \mid X, G, \tilde{x}, \theta)$
 5:   Draw a uniform variate on [0, 1]: $r \sim \mathcal{U}(0, 1)$
 6:   if $r < \Phi(\tilde{g})$ (acceptance rule) then
 7:     Accept $\tilde{x}$
 8:   else
 9:     Reject $\tilde{x}$
10:   end if
11:   Add $\tilde{x}$ and $\tilde{g}$ to the conditioning sets: $X = X \cup \tilde{x}$ and $G = G \cup \tilde{g}$
12: until P samples have been accepted
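A minimal sketch of Algorithm 1 follows (assumed, not the authors' code), using a one-dimensional squared-exponential covariance. The latent function is "discovered" retrospectively: each proposal's function value is drawn from the Gaussian conditional given all previously sampled values.

import numpy as np

def k_se(a, b, amp=1.0, ell=1.0):
    # Squared-exponential covariance between 1-D input arrays a and b.
    return amp * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def sample_gpds_prior(n_samples, base_sampler, rng, jitter=1e-8):
    X, G, accepted = [], [], []
    while len(accepted) < n_samples:
        x = base_sampler(rng)                          # proposal from pi(x)
        if not X:
            mu, var = 0.0, k_se(np.array([x]), np.array([x]))[0, 0]
        else:
            Xa, Ga = np.array(X), np.array(G)
            K = k_se(Xa, Xa) + jitter * np.eye(len(Xa))
            kx = k_se(Xa, np.array([x]))[:, 0]
            w = np.linalg.solve(K, kx)                 # GP conditional at x
            mu = w @ Ga
            var = max(k_se(np.array([x]), np.array([x]))[0, 0] - kx @ w, jitter)
        g = mu + np.sqrt(var) * rng.standard_normal()  # sample g(x) given the conditioning sets
        if rng.uniform() < 1.0 / (1.0 + np.exp(-g)):   # acceptance rule: r < Phi(g), logistic Phi
            accepted.append(x)
        X.append(x)                                    # grow the conditioning sets
        G.append(g)
    return np.array(accepted)

rng = np.random.default_rng(0)
samples = sample_gpds_prior(50, lambda r: r.normal(), rng)

Every proposal, accepted or not, grows the conditioning set, so the naive solve above costs $O(n^3)$ per step; an incremental Cholesky update would reduce this constant in practice.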
4 Inference
We have N data $\mathcal{D} = \{x_n\}_{n=1}^N$ which we model as having been drawn independently from an unknown density $f(x)$. We use the GPDS prior from Section 2 to specify our beliefs about $f(x)$, and
we wish to generate samples from the posterior distribution over the latent function g(x) corresponding to the unknown density. We may also wish to generate samples from the predictive distribution
or perform hierarchical inference of the prior hyperparameters.
By using the GPDS prior to model the data, we are asserting that the data can be explained as the
result of the procedure described in Section 3. We do not, however, know what rejections were made
en route to accepting the observed data. These rejections are critical to defining the latent function
g(x). One might think of defining a density as analogous to putting up a tent: pinning the canvas
down with pegs is just as important as putting up poles. In density modeling, defining regions with
little probability mass is just as important as defining the areas with significant mass.
Although the rejections are not known, the generative procedure provides a probabilistic model that
allows us to traverse the posterior distribution over possible latent histories that resulted in the data.
If we define a Markov chain whose equilibrium distribution is the posterior distribution over latent
histories, then we may simulate plausible explanations of every step taken to arrive at the data.
Such samples capture all the information available about the unknown density, and with them we
may ask additional questions about g(x) or run the generative procedure further to draw predictive
samples. This approach is related to that described by Murray [7], who performed inference on an
exactly-coalesced Markov chain [8], and by Beskos et al. [5].
We model the data as having been generated exactly as in Algorithm 1, with P = N, i.e. run until exactly N proposals were accepted. The state space of the Markov chain on latent histories in the GPDS consists of: 1) the values of the latent function $g(x)$ at the data, denoted $G_N = \{g_n\}_{n=1}^N$; 2) the number of rejections $M$; 3) the locations of the $M$ rejected proposals, denoted $\mathcal{M} = \{x_m\}_{m=1}^M$; and 4) the values of the latent function $g(x)$ at the $M$ rejected proposals, denoted $G_{\mathcal{M}} = \{g_m = g(x_m)\}_{m=1}^M$. We will address hyperparameter inference in Section 4.3.

We perform Gibbs-like sampling of the latent history by alternating between modification of the number of rejections $M$ and block updating of the rejection locations $\mathcal{M}$ and latent function values $G_{\mathcal{M}}$ and $G_N$. We will maintain an explicit ordering of the latent rejections for reasons of clarity, although this is not necessary due to exchangeability. We will also assume that $\Phi(\cdot)$ is the logistic function, i.e. $\Phi(z) = (1 + \exp\{-z\})^{-1}$.
4.1 Modifying the number of latent rejections
We propose a new number of latent rejections $\hat{M}$ by drawing it from a proposal distribution $q(\hat{M} \leftarrow M)$. If $\hat{M}$ is greater than $M$, we must also propose new rejections to add to the latent state. We take advantage of the exchangeability of the process to generate the new rejections: we imagine these proposals were made after the last observed datum was accepted, and our proposal is to call them rejections and move them before the last datum. If $\hat{M}$ is less than $M$, we do the opposite by proposing to move some rejections to after the last acceptance.

When proposing additional rejections, we must also propose times for them among the current latent history. There are $\binom{\hat{M}+N-1}{\hat{M}-M}$ such ways to insert these additional rejections into the existing latent history, such that the sampler terminates after the Nth acceptance. When removing rejections, we must choose which ones to place after the data, and there are $\binom{M}{M-\hat{M}}$ possible sets. Upon simplification, the proposal ratios for both addition and removal of rejections are identical:
$$\overbrace{\frac{q(M \leftarrow \hat{M})}{q(\hat{M} \leftarrow M)} \frac{\binom{\hat{M}+N-1}{\hat{M}-M}}{\binom{\hat{M}}{\hat{M}-M}}}^{\hat{M} > M} = \overbrace{\frac{q(M \leftarrow \hat{M})}{q(\hat{M} \leftarrow M)} \frac{\binom{M}{M-\hat{M}}}{\binom{M+N-1}{M-\hat{M}}}}^{\hat{M} < M} = \frac{q(M \leftarrow \hat{M})\, M!\, (\hat{M}+N-1)!}{q(\hat{M} \leftarrow M)\, \hat{M}!\, (M+N-1)!}.$$
When inserting rejections, we propose the locations of the additional proposals, denoted $\mathcal{M}^+$, and the corresponding values of the latent function, denoted $G_{\mathcal{M}}^+$. We generate $\mathcal{M}^+$ by making $\hat{M}-M$ independent draws from the base measure. We draw $G_{\mathcal{M}}^+$ jointly from the Gaussian process prior, conditioned on all of the current latent state, i.e. $(\mathcal{M}, G_{\mathcal{M}}, \mathcal{D}, G_N)$. The joint probability of this state is

$$p(\mathcal{D}, \mathcal{M}, \mathcal{M}^+, G_N, G_{\mathcal{M}}, G_{\mathcal{M}}^+) = \left[\prod_{n=1}^{N} \pi(x_n)\Phi(g_n)\right] \left[\prod_{m=1}^{M} \pi(x_m)(1-\Phi(g_m))\right] \left[\prod_{m=M+1}^{\hat{M}} \pi(x_m)\right] \times \mathrm{GP}(G_{\mathcal{M}}, G_N, G_{\mathcal{M}}^+ \mid \mathcal{D}, \mathcal{M}, \mathcal{M}^+). \qquad (3)$$
The joint in Equation 3 expresses the probability of all the base measure draws, the values of the function draws from the Gaussian process, and the acceptance or rejection probabilities of the proposals excluding the newly generated points. When we make an insertion proposal, exchangeability allows us to shuffle the ordering without changing the probability; the only change is that now we must account for labeling the new points as rejections. In the acceptance ratio, all terms except for the "labeling probability" cancel. The reverse proposal is similar; however, we denote the removed proposal locations as $\mathcal{M}^-$ and the corresponding function values as $G_{\mathcal{M}}^-$. The overall acceptance ratios for insertions or removals are

$$a = \begin{cases} \dfrac{q(M \leftarrow \hat{M})\, M!\, (\hat{M}+N-1)!}{q(\hat{M} \leftarrow M)\, \hat{M}!\, (M+N-1)!} \displaystyle\prod_{g \in G_{\mathcal{M}}^+} (1-\Phi(g)) & \text{if } \hat{M} > M \\ \dfrac{q(M \leftarrow \hat{M})\, M!\, (\hat{M}+N-1)!}{q(\hat{M} \leftarrow M)\, \hat{M}!\, (M+N-1)!} \displaystyle\prod_{g \in G_{\mathcal{M}}^-} (1-\Phi(g))^{-1} & \text{if } \hat{M} < M. \end{cases} \qquad (4)$$
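The log of the acceptance ratio in Equation 4 for an insertion move can be computed with log-factorials for numerical stability. A minimal sketch (assumed, not the authors' code): log_q_rev and log_q_fwd are the log-probabilities of the reverse and forward count proposals, and g_plus holds the latent function values at the newly proposed rejections.

import numpy as np
from scipy.special import gammaln

def log_accept_insert(M, M_hat, N, g_plus, log_q_rev, log_q_fwd):
    log_fact = lambda n: gammaln(n + 1)                 # log n!
    log_a = (log_q_rev - log_q_fwd
             + log_fact(M) + log_fact(M_hat + N - 1)
             - log_fact(M_hat) - log_fact(M + N - 1))
    # labeling probability: each new point is labeled a rejection with prob. 1 - Phi(g)
    phi = 1.0 / (1.0 + np.exp(-np.asarray(g_plus)))
    log_a += np.sum(np.log1p(-phi))
    return log_a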
4.2 Modifying rejection locations and function values
Given the number of latent rejections $M$, we propose modifying their locations $\mathcal{M}$, their latent function values $G_{\mathcal{M}}$, and the values of the latent function at the data $G_N$. We will denote these proposals as $\hat{\mathcal{M}} = \{\hat{x}_m\}_{m=1}^M$, $\hat{G}_{\mathcal{M}} = \{\hat{g}_m = \hat{g}(\hat{x}_m)\}_{m=1}^M$ and $\hat{G}_N = \{\hat{g}_n = \hat{g}(x_n)\}_{n=1}^N$, respectively. We make simple perturbative proposals of $\mathcal{M}$ via a proposal density $q(\hat{\mathcal{M}} \leftarrow \mathcal{M})$. For the latent function values, however, perturbative proposals will be poor, as the Gaussian process typically defines a narrow mass. To avoid this, we propose modifications to the latent function that leave the prior invariant.
We make joint proposals of $\hat{\mathcal{M}}$, $\hat{G}_{\mathcal{M}}$ and $\hat{G}_N$ in three steps. First, we draw new rejection locations from $q(\hat{\mathcal{M}} \leftarrow \mathcal{M})$. Second, we draw a set of $M$ intermediate function values from the Gaussian process at $\hat{\mathcal{M}}$, conditioned on the current rejection locations and their function values, as well as the function values at the data. Third, we propose new function values at $\hat{\mathcal{M}}$ and the data $\mathcal{D}$ via an underrelaxation proposal of the form

$$\hat{g}(x) = \alpha\, g(x) + \sqrt{1-\alpha^2}\, h(x)$$

where $h(x)$ is a sample from the Gaussian process prior and $\alpha$ is in $[0, 1)$. This is a variant of the overrelaxed MCMC method discussed by Neal [9]. This procedure leaves the Gaussian process prior invariant, but makes conservative proposals if $\alpha$ is near one. After making a proposal, we accept or reject via the ratio of the joint distributions:

$$a = \frac{q(\mathcal{M} \leftarrow \hat{\mathcal{M}}) \left[\prod_{m=1}^{M} \pi(\hat{x}_m)(1-\Phi(\hat{g}_m))\right] \left[\prod_{n=1}^{N} \Phi(\hat{g}_n)\right]}{q(\hat{\mathcal{M}} \leftarrow \mathcal{M}) \left[\prod_{m=1}^{M} \pi(x_m)(1-\Phi(g_m))\right] \left[\prod_{n=1}^{N} \Phi(g_n)\right]}.$$
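A minimal sketch of the underrelaxed function proposal (assumed, not the authors' code): because $g$ and the fresh prior draw $h$ are independent zero-mean Gaussians with the same covariance $K$, the combination $\alpha g + \sqrt{1-\alpha^2}\, h$ has that covariance too, so the prior is left invariant.

import numpy as np

def underrelaxed_proposal(g, K, alpha, rng, jitter=1e-8):
    L = np.linalg.cholesky(K + jitter * np.eye(len(g)))
    h = L @ rng.standard_normal(len(g))   # fresh draw from the GP prior at the same inputs
    return alpha * g + np.sqrt(1.0 - alpha ** 2) * h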
4.3 Hyperparameter inference
Given a sample from the posterior on the latent history, we can also perform a Metropolis-Hastings step in the space of hyperparameters. Parameters $\theta$ governing the covariance function and mean function of the Gaussian process provide common examples of hyperparameters, but we might also introduce parameters $\phi$ that control the behavior of the base measure $\pi(x)$. We denote the proposal distributions for these parameters as $q(\hat{\theta} \leftarrow \theta)$ and $q(\hat{\phi} \leftarrow \phi)$, respectively. With priors $p(\theta)$ and $p(\phi)$, the acceptance ratio for a Metropolis-Hastings step is

$$a = \frac{q(\theta \leftarrow \hat{\theta})\, q(\phi \leftarrow \hat{\phi})\, p(\hat{\theta})\, p(\hat{\phi})\, \mathcal{N}(\{G_{\mathcal{M}}, G_N\} \mid \mathcal{M}, \mathcal{D}, \hat{\theta})}{q(\hat{\theta} \leftarrow \theta)\, q(\hat{\phi} \leftarrow \phi)\, p(\theta)\, p(\phi)\, \mathcal{N}(\{G_{\mathcal{M}}, G_N\} \mid \mathcal{M}, \mathcal{D}, \theta)} \left[\prod_{m=1}^{M} \frac{\pi(x_m \mid \hat{\phi})}{\pi(x_m \mid \phi)}\right] \left[\prod_{n=1}^{N} \frac{\pi(x_n \mid \hat{\phi})}{\pi(x_n \mid \phi)}\right].$$
4.4 Prediction
The predictive distribution is the one that arises on the space X when the posterior on the latent function $g(x)$ (and perhaps hyperparameters) is integrated out. It is the expected distribution of the next datum, given the ones we have seen and taking into account our uncertainty. In the GPDS we sample from the predictive distribution by running the generative process of Section 3, initialized to the current latent history sample from the Metropolis-Hastings procedure described above.

It may also be desirable to estimate the actual value of the predictive density. We use the method of Chib and Jeliazkov [10], and observe by detailed balance of a Metropolis-Hastings move:

$$p(x \mid g, \theta, \phi)\, \pi(x') \min\left\{1, \frac{\Phi(g(x'))}{\Phi(g(x))}\right\} = p(x' \mid g, \theta, \phi)\, \pi(x) \min\left\{1, \frac{\Phi(g(x))}{\Phi(g(x'))}\right\}.$$
[Figure 3: three panels (a)-(c) showing the rejection locations and latent function values before and after the proposal.]

Figure 3: These figures show the sequence of proposing new rejection locations, new function values at those locations, and new function values at the data. (a): The current state, with rejections labeled $\mathcal{M} = \{x_m\}$ on the left, along with the values of the latent function $G_{\mathcal{M}} = \{g_m\}$. On the right side are the data $\mathcal{D} = \{x_n\}$ and the corresponding values of the latent function $G_N = \{g_n\}$. (b): New rejections $\hat{\mathcal{M}} = \{\hat{x}_m\}$ are proposed via $q(\hat{\mathcal{M}} \leftarrow \mathcal{M})$, and the latent function is sampled at these points. (c): The latent function is perturbed at the new rejection locations and at the data via an underrelaxed proposal.
We find the expectation of each side under the posterior of $g$ and the hyperparameters $\theta$ and $\phi$:

$$\int\!\!\int d\theta\, d\phi\; p(\theta, \phi \mid \mathcal{D}) \int dg\; p(g \mid \theta, \mathcal{D}) \int dx'\; p(x \mid g, \theta, \phi)\, \pi(x') \min\left\{1, \frac{\Phi(g(x'))}{\Phi(g(x))}\right\}$$
$$= \int\!\!\int d\theta\, d\phi\; p(\theta, \phi \mid \mathcal{D}) \int dg\; p(g \mid \theta, \mathcal{D}) \int dx'\; p(x' \mid g, \theta, \phi)\, \pi(x) \min\left\{1, \frac{\Phi(g(x))}{\Phi(g(x'))}\right\}.$$

This gives an expression for the predictive density:

$$p(x \mid \mathcal{D}) = \frac{\int\!\int\!\int\!\int d\theta\, d\phi\, dg\, dx'\; p(\theta, \phi, g, x' \mid \mathcal{D})\, \pi(x) \min\left\{1, \frac{\Phi(g(x))}{\Phi(g(x'))}\right\}}{\int\!\int\!\int\!\int d\theta\, d\phi\, dg\, dx'\; p(\theta, \phi, g \mid x, \mathcal{D})\, \pi(x') \min\left\{1, \frac{\Phi(g(x'))}{\Phi(g(x))}\right\}} \qquad (5)$$

Both the numerator and the denominator in Equation 5 are expectations that can be estimated by averaging over the output from the GPDS Metropolis-Hastings sampler. The denominator requires sampling from the posterior distribution with the data augmented by $x$.
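A minimal sketch (assumed, not the authors' code) of estimating Equation 5 by Monte Carlo: the numerator averages over posterior samples of $(g, x')$, and the denominator averages over samples from the posterior with the data augmented by $x$. Each argument is a hypothetical array holding one entry per posterior sample.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predictive_density(pi_x, g_x, g_xp, pi_xp_aug, g_x_aug, g_xp_aug):
    # pi_x: pi(x) per sample; g_x, g_xp: g evaluated at x and at the sampled x'
    num = np.mean(pi_x * np.minimum(1.0, sigmoid(g_x) / sigmoid(g_xp)))
    # *_aug arrays come from the x-augmented posterior run
    den = np.mean(pi_xp_aug * np.minimum(1.0, sigmoid(g_xp_aug) / sigmoid(g_x_aug)))
    return num / den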
5 Results
We examined the GPDS prior and the latent history inference procedure on a toy data set and on a skull reconstruction task. We compared the approach described in this paper to a kernel density estimate (Parzen windows), an infinite mixture of Gaussians (iMoG), and Dirichlet diffusion trees (DFT). The kernel density estimator used a spherical Gaussian with the bandwidth set via ten-fold cross validation. Neal's Flexible Bayesian Modeling (FBM) Software [1] was used for the implementation of both iMoG and DFT.

The toy data problem consisted of 100 uniform draws from a two-dimensional ring with radius 1.5, and zero-mean Gaussian noise added with $\sigma = 0.2$. The test data were 50 additional samples, and comparison used mean log probability of the test set. Each of the three Bayesian methods improved on the Parzen window estimate by two or more nats, with the DFT approach being the most successful. A bar plot of these results is shown in Figure 5.
We also compared the methods on a real-data task. We modeled the joint density of ten measurements of linear distances between anatomical landmarks on 228 rhesus macaque (Macaca mulatta)
skulls. These linear distances were generated from three-dimensional coordinate data of anatomical
landmarks taken by a single observer from dried skulls using a digitizer [11]. Linear distances are
commonly used in morphological studies as they are invariant under rotation and translation of the
objects being compared [12]. Figure 4 shows a computed tomography (CT) scan reconstruction of
a macaque skull, along with the ten linear distances used. Each skull was measured three times in
different trials, and these were modeled separately. 200 randomly-selected skulls were used as a
training set and 28 were used as a test set. To be as fair as possible, the data was logarithmically
transformed and whitened as a preprocessing step, to have zero sample mean and spherical sample
covariance. Each of the Bayesian approaches outperformed the Parzen window technique in mean
log probability of the test set, with comparable results for each. This result is not surprising, as
flexible nonparametric Bayesian models should have roughly similar expressive capabilities. These
results are shown in Figure 5.
[Figure 5 bar plot: improvement over Parzen (nats), 0.0 to 3.0, for GPDS, iMoG, and DFT on the Ring data and Mac T1, Mac T2, Mac T3.]

Figure 4: The macaque skull data are linear distances calculated between three-dimensional coordinates of anatomical landmarks. These are superior and inferior views of a computed tomography (CT) scan of a male macaque skull, with the ten linear distances superimposed. The anatomical landmarks are based on biological relevance and repeatability across individuals.

Figure 5: This bar plot shows the improvement of the GPDS, infinite mixture of Gaussians (iMoG), and Dirichlet diffusion trees (DFT) in mean log probability (base e) of the test set over cross-validated Parzen windows on the toy ring data and the macaque data. The baseline log probability of the Parzen method for the ring data was -2.253 and for the macaque data was -15.443, -15.742, and -15.254 for each of three trials.
6 Discussion
Valid MCMC algorithms for fully Bayesian kernel regression methods are well-established. This
work introduces the first such prior that enables tractable density estimation, complementing alternatives such as Dirichlet Diffusion Trees [1] and infinite mixture models.
Although the GPDS has similar motivation to the logistic Gaussian process [13, 14, 15, 16], it differs
significantly in its applicability and practicality. All known treatments of the logistic GP require a
finite-dimensional proxy distribution. This proxy distribution is necessary both for tractability of
inference and for estimation of the normalization constant. Due to the complexity constraints of both
the basis-function approach of Lenk [15] and the lattice-based approach of [16], these have only been
implemented on single-dimensional toy problems. The GPDS construction we have presented here
not only avoids numerical estimation of the normalization constant, but allows infinite-dimensional
inference both in theory and in practice.
6.1 Computational complexity
The inference method for the GPDS prior is "practical" in the sense that it can be implemented without approximations, but it has potentially-steep computational costs. To compare two latent histories in a Metropolis-Hastings step we must evaluate the marginal likelihood of the Gaussian process. This requires a matrix decomposition whose cost is $O((N+M)^3)$. The model explicitly allows $M$ to be any nonnegative integer and so this cost is unbounded. The expected cost of an M-H step is determined by the expected number of rejections $M$. For a given $g(x)$, the expected $M$ is $N(Z_\Phi[g]^{-1} - 1)$. This expression is derived from the observation that $\pi(x)$ provides an upper bound on the function $\Phi(g(x))\pi(x)$ and the ratio of acceptances to rejections is determined by the proportion of the mass of $\pi(x)$ contained by $\Phi(g(x))\pi(x)$.
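Since $Z_\Phi[g] = \mathbb{E}_\pi[\Phi(g(x))]$, a simple Monte Carlo estimate of the expected number of rejections is available. A minimal sketch (assumed), with a sine function as a hypothetical stand-in for $g$:

import numpy as np

def expected_rejections(g_of, base_sampler, N, n_mc, rng):
    x = base_sampler(rng, n_mc)                     # draws from pi(x)
    Z = np.mean(1.0 / (1.0 + np.exp(-g_of(x))))     # estimate of Z_Phi[g] = E_pi[Phi(g(x))]
    return N * (1.0 / Z - 1.0)                      # expected rejections before N acceptances

rng = np.random.default_rng(1)
est = expected_rejections(lambda x: np.sin(2.0 * x),
                          lambda r, n: r.normal(size=n), N=100, n_mc=10000, rng=rng)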
We are optimistic that more sophisticated Markov chain Monte Carlo techniques may realize
constant-factor performance gains over the basic Metropolis?Hasting scheme presented here, without compromising the correctness of the equilibrium distribution. Sparse approaches to Gaussian
process regression that improve the asymptotically cubic behavior may also be relevant to the GPDS,
but it is unclear that these will be an improvement over other approximate GP-based schemes for
density modeling.
6.2 Alternative inference methods
In developing inference methods for the GPDS prior, we have also explored the use of exchange
sampling [17, 7]. Exchange sampling is an MCMC technique explicitly developed for the situation
where there is an intractable normalization constant that prevents exact likelihood evaluation, but
exact samples may be generated for any particular parameter setting. Undirected graphical models
such as the Ising and Potts models provide common examples of cases where exchange sampling
is applicable via coupling from the past [8]. Using the exact sampling procedure of Section 3,
it is applicable to the GPDS as well. Exchange sampling for the GPDS, however, requires more
evaluations of the function g(x) than the latent history approach. In practice the latent history
approach of Section 4 does perform better.
Acknowledgements
The authors wish to thank Radford Neal and Zoubin Ghahramani for valuable comments. Ryan Adams' research is supported by the Gates Cambridge Trust. Iain Murray's research is supported by the government of Canada. The authors thank the Caribbean Primate Research Center, the University of Puerto Rico, Medical Sciences Campus, Laboratory of Primate Morphology and Genetics, and the National Institutes of Health (Grant RR03640 to CPRC) for support.
References
[1] R. M. Neal. Defining priors for distributions using Dirichlet diffusion trees. Technical Report 0104, Department of Statistics, University of Toronto, 2001.
[2] D. J. C. MacKay. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research, Section A, 354(1):73-80, 1995.
[3] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783-1816, 2005.
[4] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[5] A. Beskos, O. Papaspiliopoulos, G. O. Roberts, and P. Fearnhead. Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society: Series B, 68:333-382, 2006.
[6] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169-186, 2008.
[7] I. Murray. Advances in Markov chain Monte Carlo methods. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, London, 2007.
[8] J. G. Propp and D. B. Wilson. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms, 9(1&2):223-252, 1996.
[9] R. M. Neal. Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation, 1998.
[10] S. Chib and I. Jeliazkov. Marginal likelihood from the Metropolis-Hastings output. Journal of the American Statistical Association, 96(453):270-281, 2001.
[11] K. E. Willmore, C. P. Klingenberg, and B. Hallgrimsson. The relationship between fluctuating asymmetry and environmental variance in rhesus macaque skulls. Evolution, 59(4):898-909, 2005.
[12] S. R. Lele and J. T. Richtsmeier. An invariant approach to statistical analysis of shapes. Chapman and Hall/CRC Press, London, 2001.
[13] T. Leonard. Density estimation, stochastic processes and prior information. Journal of the Royal Statistical Society, Series B, 40(2):113-146, 1978.
[14] D. Thorburn. A Bayesian approach to density estimation. Biometrika, 73(1):65-75, 1986.
[15] P. J. Lenk. Towards a practicable Bayesian nonparametric density estimator. Biometrika, 78(3):531-543, 1991.
[16] S. T. Tokdar and J. K. Ghosh. Posterior consistency of logistic Gaussian process priors in density estimation. Journal of Statistical Planning and Inference, 137:34-42, 2007.
[17] I. Murray, Z. Ghahramani, and D. J. C. MacKay. MCMC for doubly-intractable distributions. In Proceedings of the 22nd Annual Conference on Uncertainty in Artificial Intelligence (UAI), pages 359-366, 2006.
Skill characterization based on betweenness
Özgür Şimşek*
Andrew G. Barto
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
{ozgur|barto}@cs.umass.edu
Abstract
We present a characterization of a useful class of skills based on a graphical representation of an agent's interaction with its environment. Our characterization uses
betweenness, a measure of centrality on graphs. It captures and generalizes (at
least intuitively) the bottleneck concept, which has inspired many of the existing
skill-discovery algorithms. Our characterization may be used directly to form a
set of skills suitable for a given task. More importantly, it serves as a useful guide
for developing incremental skill-discovery algorithms that do not rely on knowing
or representing the interaction graph in its entirety.
1 Introduction
The broad problem we consider is how to equip artificial agents with the ability to form useful
high-level behaviors, or skills, from available primitives. For example, for a robot performing tasks
that require manipulating objects, grasping is a useful skill that employs lower-level sensory and
motor primitives. In approaching this problem, we distinguish between two related questions: What
constitutes a useful skill? And, how can an agent identify such skills autonomously? Here, we
address the former question with the objective of guiding research on the latter.
Our main contribution is a characterization of a useful class of skills based on a graphical representation of the agent?s interaction with its environment. Specifically, we use betweenness, a measure
of centrality on graphs [1, 2], to define a set of skills that allows efficient navigation on the interaction graph. In the game of Tic-Tac-Toe, these skills translate into setting up a fork, creating an
opportunity to win the game. In the Towers of Hanoi puzzle, they include clearing the stack above
the largest disk and clearing one peg entirely, making it possible to move the largest disk.
Our characterization may be used directly to form a set of skills suitable for a given task if the
interaction graph is readily available. More importantly, this characterization is a useful guide for
developing low-cost, incremental algorithms for skill discovery that do not rely on complete representation of the interaction graph. We present one such algorithm here and perform preliminary
analysis.
Our characterization captures and generalizes (at least intuitively) the bottleneck concept, which
has inspired many of the existing skill-discovery algorithms [3, 4, 5, 6, 7, 8, 9]. Bottlenecks have
been described as regions that the agent tends to visit frequently on successful trajectories but not on
unsuccessful ones [3], border states of strongly connected areas [6], and states that allow transitions
to a different part of the environment [7]. The canonical example is a doorway connecting two rooms.
We hope that our explicit and concrete description of what makes a useful skill will lead to further
development of these existing algorithms and inspire alternative methods.
* Now at the Max Planck Institute for Human Development, Center for Adaptive Behavior and Cognition, Berlin, Germany.
Figure 1: A visual representation of betweenness on two sample graphs.
2 Skill Definition
We assume that the agent's interaction with its environment may be represented as a Markov Decision Process (MDP). The interaction graph is a directed graph in which the vertices represent
the states of the MDP and the edges represent possible state transitions brought about by available
actions. Specifically, the edge $u \to v$ is present in the graph if and only if the corresponding state
transition has a strictly positive probability through the execution of at least one action. The weight
on each edge is the expected cost of the transition, or expected negative reward.
Our claim is that states that have a pivotal role in efficiently navigating the interaction graph are useful subgoals to reach, and that the following quantity is a useful measure of how pivotal a vertex $v$ is:

$$\sum_{s \neq t \neq v} \frac{\sigma_{st}(v)}{\sigma_{st}}\, w_{st},$$

where $\sigma_{st}$ is the number of shortest paths from vertex $s$ to vertex $t$, $\sigma_{st}(v)$ is the number of such paths that pass through vertex $v$, and $w_{st}$ is the weight assigned to paths from vertex $s$ to vertex $t$.
With uniform path weights, the above expression equals betweenness, a measure of centrality on
graphs [1, 2]. It gives the fraction of shortest paths on the graph (between all possible sources and
destinations) that pass through the vertex of interest. If there are multiple shortest paths from a
given source to a given destination, they are given equal weights that sum to one. Betweenness
may be computed in O(nm) time and O(n + m) space on unweighted graphs with n nodes and m
edges [10]. On weighted graphs, the space requirement remains the same, but the time requirement
increases to $O(nm + n^2 \log n)$.
In our use of betweenness, we include path weights to take into account the reward function. Depending on the reward function (or a probability distribution over possible reward functions), some parts of the interaction graph may be given more weight than others, depending on how well they serve the agent's needs.
We define as subgoals those states that correspond to local maxima of betweenness on the interaction
graph, in other words, states that have a higher betweenness than other states in their neighborhood.
Here, we use a simple definition of neighborhood, including in it only the states that are one hop
away, which may be revised in the future. Skills for efficiently reaching the local maxima of betweenness represent a set of behaviors that may be combined in different ways to efficiently reach
different regions, serving as useful building blocks for navigating the graph.
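The following is a minimal sketch (assumed, not the authors' code) of this computation with uniform path weights, using the networkx library: betweenness is computed with Brandes' algorithm, and a state is returned as a subgoal when its betweenness strictly exceeds that of every state one hop away along incoming or outgoing edges. The path-weight generalization $w_{st}$ would require a custom accumulation; the built-in routine covers only the uniform-weight case.

import networkx as nx

def betweenness_subgoals(G):
    b = nx.betweenness_centrality(G)   # Brandes' algorithm; O(nm) on unweighted graphs
    subgoals = []
    for v in G.nodes:
        nbrs = set(G.successors(v)) | set(G.predecessors(v))
        if nbrs and all(b[v] > b[u] for u in nbrs):
            subgoals.append(v)
    return subgoals

# Example: two 10-state cliques joined through a "doorway" state 20.
left = nx.complete_graph(range(0, 10), create_using=nx.DiGraph)
right = nx.complete_graph(range(10, 20), create_using=nx.DiGraph)
G = nx.compose(left, right)
G.add_edges_from([(0, 20), (20, 0), (10, 20), (20, 10)])
print(betweenness_subgoals(G))         # the doorway state is the local maximum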
Figure 1 is a visual representation of betweenness on two sample graphs, computed using uniform
edge and path weights. The gray-scale shading on the vertices corresponds to the relative values of
betweenness, with black representing the highest betweenness on the graph and white representing
the lowest. The graph on the left corresponds to a gridworld in which a doorway connects two
rooms. The graph on the right has a doorway of a different type: an edge connecting two otherwise
distant nodes. In both graphs, states that are local maxima of betweenness correspond to our intuitive
choice of subgoals.
Figure 2: Betweenness in Taxi, Playroom, and Tic-Tac-Toe (from left to right). Edge directions are
omitted in the figure.
3 Examples
We applied the skill definition of Section 2 to various domains in the literature: Taxi [11], Playroom [12, 13], and the game of Tic-Tac-Toe. Interaction graphs of these domains, displaying betweenness values as gray-scale shading on the vertices, are shown in Figure 2. In Taxi and Playroom,
graph layouts were determined by a force-directed algorithm that models the edges as springs and
minimizes the total force on the system. We considered a node to be a local maximum if its betweenness was higher than or equal to those of its immediate neighbors, taking into account both
incoming and outgoing edges. Unless stated otherwise, actions had uniform cost and betweenness
was computed using uniform path weights.
Taxi This domain includes a taxi and a passenger on the 5 ? 5 grid shown in Figure 4. At each
grid location, the taxi has six primitive actions: north, east, south, west, pick-up, and
put-down. The navigation actions succeed in moving the taxi in the intended direction with probability 0.8; with probability 0.2, the action takes the taxi to the right or left of the intended direction.
If the direction of movement is blocked, the taxi remains in the same location. Pick-up places the
passenger in the taxi if the taxi is at the passenger location; otherwise it has no effect. Similarly,
put-down delivers the passenger if the passenger is inside the taxi and the taxi is at the destination; otherwise it has no effect. The source and destination of all passengers are chosen uniformly
at random from among the grid squares R, G, B, Y. We used a continuing version of this problem in
which a new passenger appears after each successful delivery.
The highest local maxima of betweenness are at the four regions of the graph that correspond to
passenger delivery. Other local maxima belong to one of the following categories: (1) taxi is at
the passenger location¹, (2) taxi is at one of the passenger wait locations with the passenger in the
taxi², (3) taxi and passenger are both at destination, (4) the taxi is at x = 2, y = 3, a navigational
bottleneck on the grid, and (5) the taxi is at x = 3, y = 3, another navigational bottleneck. The
corresponding skills are (approximately) those that take the taxi to the passenger location, to the
destination (having picked up the passenger), or to a navigational bottleneck. These skills closely
resemble those that are hand-coded for this domain in the literature.
Playroom We created a Markov version of this domain in which an agent interacts with a number
of objects in its surroundings: a light switch, a ball, a bell, a button for turning music on and off,
¹ Except when the passenger is waiting at Y, in which case the taxi is at x = 1, y = 3.
² For wait location Y, the corresponding subgoal has the taxi at x = 1, y = 3, having picked up the passenger.
[Figure 3 plots: cumulative number of steps versus episodes completed for Primitives, Random, and Skills agents; panels for Rooms, Shortcut, and Playroom. In Playroom, the curve labels Random-100, Skills-100, Random-300, and Skills-300 give initiation-set sizes.]
Figure 3: Learning performance in Rooms, Shortcut, and Playroom.
and a toy monkey. The agent has an eye, a hand, and a marker it can place on objects. Its actions
consist of looking at a randomly selected object, looking at the object in its hand, holding the object
it is looking at, looking at the object that the marker is placed on, placing the marker on the object it
is looking at, moving the object in its hand to the location it is looking at, flipping the light switch,
pressing the music button, and hitting the ball towards the marker. The first two actions succeed
with probability 1, while the remaining actions succeed with probability 0.75, producing no change
in the environment if they fail. In order to operate on an object, the agent must be looking at the
object and holding the object in its hand. To be able to press the music button successfully, the light
should be on. The toy monkey starts to make frightened sounds if the bell is rung while the music
is playing; it stops only when the music is turned off. If the ball hits the bell, the bell rings for one
decision stage.
The MDP state consists of the object that the agent is looking at, the object that the agent is holding,
the object that the marker is placed on, music (on/off), light (on/off), monkey (frightened/not), and
bell (ringing/not). The six different clusters of the interaction graph in Figure 2 emerge naturally
from the force-directed layout algorithm and correspond to the different settings of the music, light,
and monkey variables. There are only six such clusters because not all variable combinations are
possible. Betweenness peaks at regions that immediately connect neighboring clusters, corresponding to skills that change the setting of the music, light, or monkey variables.
Tic-Tac-Toe In the interaction graph, the node at the center of the interaction graph is the empty
board, with other board configurations forming rings around it with respect to their distance from
this initial configuration. The innermost ring shows states in which both players have played a
single turn. The agent played first. The opponent followed a policy that (1) placed the third mark in
a row, whenever possible, winning the game, (2) blocked the agent from completing a row, and (3)
placed its mark on a random empty square, with decreasing priority. Our state representation was
invariant with respect to rotational and reflective symmetries of the board. We assigned a weight of
+1 to paths that terminate at a win for the agent and 0 to all other paths. The state with the highest
betweenness is the one shown in Figure 4. The agent is the X player and will go next. This state
gives the agent two possibilities for setting up a fork (board locations marked with *), creating an
opportunity to win on the next turn. There were nine other local maxima that similarly allowed the
agent to immediately create a fork. In addition, there were a number of ?trivial? local maxima that
allowed the agent to immediately win the game.
4 Empirical Performance
We evaluated the impact of our skills on the agent?s learning performance in Taxi, Playroom, TicTac-Toe, and two additional domains, called Rooms and Shortcut, whose interaction graphs are those
presented in Figure 1. Rooms is a gridworld in which a doorway connects two rooms. At each state,
the available actions are north, south, east, and west. They move the agent in the intended
direction with probability 0.8 and in a uniform random direction with probability 0.2. The local
4
5
R
G
*
X
*
4
3
2
y
1
Y
1
B
2
3
4
5
x
O
O
X
Figure 4: Learning performance in Taxi and Tic-Tac-Toe.
maxima of betweenness are the two states that surround the doorway, which have a slightly higher
betweenness than the doorway itself. The transition dynamics of Shortcut are identical, except there
is one additional long-range action, connecting two particular states, which are the local maxima of
betweenness in this domain.
We represented skills using the options framework [14, 15]. The initiation set was restricted to
include a certain number of states and included those states with the least distance to the subgoal on
the interaction graph. The skills terminated with probability one outside the initiation set and at the
subgoal, with probability zero at all other states. The skill policy was the optimal policy for reaching
the subgoal. We compared three agents: one that used only the primitive actions of the domain, one
that used primitives and our skills, and one that used primitives and a control group of skills whose
subgoals were selected randomly. The number of subgoals used and the size of the initiation sets
were identical in the two skill conditions. The agent used Q-learning with ε-greedy exploration with
ε = 0.05. When using skills, it performed both intra-option and macro-Q updates [16]. The learning
rate (α) was kept constant at 0.1. Initial Q-values were 0. The discount rate was set to 1 in episodic
tasks and to 0.99 in continuing tasks.
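To make the update rules concrete, here is a minimal tabular sketch of this setup (our illustration, not the authors' code; QAgent and all names are hypothetical): the ε-greedy choice, the one-step Q-learning backup for primitives, and the macro-Q backup applied when an option terminates after k steps.

```python
import random
from collections import defaultdict

# Minimal sketch of the learning setup described above; all names are
# illustrative, not taken from the paper's implementation.
class QAgent:
    def __init__(self, actions, epsilon=0.05, alpha=0.1, gamma=0.99):
        self.Q = defaultdict(float)          # Q[(state, action)] -> value, init 0
        self.actions = actions
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def choose(self, s):
        # epsilon-greedy: explore with probability epsilon, else act greedily
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(s, a)])

    def update_primitive(self, s, a, r, s2):
        # standard one-step Q-learning backup for a primitive action
        target = r + self.gamma * max(self.Q[(s2, b)] for b in self.actions)
        self.Q[(s, a)] += self.alpha * (target - self.Q[(s, a)])

    def update_macro(self, s, option, r_cum, s2, k):
        # macro-Q backup: the option ran k steps and earned the discounted
        # cumulative reward r_cum before terminating in state s2
        target = r_cum + self.gamma ** k * max(self.Q[(s2, b)] for b in self.actions)
        self.Q[(s, option)] += self.alpha * (target - self.Q[(s, option)])
```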
Figure 3 shows performance results in Rooms, Shortcut, and PlayRoom, where we had the agent
perform 100 different episodic tasks, choosing a single goal state uniformly randomly in each task.
The reward was −0.001 for each transition and an additional +1 for transitions into the goal state.
The initial state was selected randomly. The labels in the figure indicate the size of the initiation sets.
If no number is present, the skills were made available everywhere in the domain. The availability of
our skills (those identified using local maxima of betweenness) revealed a big improvement compared to using primitive actions only. In some cases, random skills improved performance
as well, but this improvement was much smaller than that obtained by our skills.
Figure 4 shows similar results in Taxi and Tic-Tac-Toe. The figure shows mean performance in 100
trials. In Taxi, we examined performance on the single continuing task that rewarded the agent for
delivering passengers. Reward was −1 for each action, an additional +50 for passenger delivery, and
an additional −10 for an unsuccessful pick-up or put-down. In Tic-Tac-Toe, the agent received
a reward of −0.001 for each action, an additional +1 for winning the game, and an additional −1
for losing. Creating an individual skill for reaching each of the identified subgoals (which is what
we have done in other domains) generates skills that are not of much use in Tic-Tac-Toe because
reaching any particular board configuration is usually not possible. Instead, we defined a single skill
with multiple subgoals: the ten local maxima of betweenness that allow the agent to set up a fork.
We set the initial Q-value of this skill to 1 at the start state to ensure that the skill got executed
frequently enough. It is not clear what this single skill can be meaningfully compared to, so we do
not provide a control condition with randomly-selected subgoals.
Our analysis shows that, in a diverse set of domains, the skill definition of Section 2 gives rise
to skills that are consistent with common sense, are similar to skills people handcraft for these
domains, and improve learning performance. The improvements in performance are greater than
those observed when using a control group of randomly-generated skills, suggesting that they should
not be attributed to the presence of skills alone but to the presence of the specific skills that are
formed based on betweenness.
5
Related Work
A graphical approach to forming high-level behavioral units was first suggested by Amarel in his
classic analysis of the missionaries and cannibals problem [17]. Amarel advocated representing action consequences in the environment as a graph and forming skills that correspond to navigating this
graph by exploiting its structural regularities. He did not, however, propose any general mechanism
that can be used for this purpose.
Our skill definition captures the "bottleneck" concept, which has inspired many of the existing skill
discovery algorithms [3, 6, 4, 5, 7, 8, 9]. There is clearly an overlap between our skills and the skills
that are generated by these algorithms. Here, we review these algorithms, with a focus on the extent
of this overlap and sample efficiency.
McGovern & Barto [3] examine past trajectories to identify states that are common in successful
trajectories but not in unsuccessful ones. An important concern with their method is its need for
excessive exploration of the environment. It can be applied only after the agent has successfully
performed the task at least once. Typically, it requires many additional successful trajectories. Furthermore, a fundamental property of this algorithm prevents it from identifying a large portion of
our subgoals. It examines different paths that reach the same destination, while we look for the most
efficient ways of navigating between different source and destination pairs. Bottlenecks that are not
on the path to the goal state would not be identified by this algorithm, while we consider such states
to be useful subgoals.
Stolle & Precup [4] and Stolle [5] address this last concern by obtaining their trajectories from
multiple tasks that start and terminate at different states. As the number of tasks increases, the
subgoals identified by their algorithms become more similar to ours. Unfortunately, however, sample
efficiency is even a larger concern with these algorithms, because they require the agent to have
already identified the optimal policy, not for only a single task, but for many different tasks in the
domain.
Menache et al. [6] and Mannor et al. [8] take a graphical approach and use the MDP state-transition
graph to identify subgoals. They apply a clustering algorithm to partition the graph into blocks and
create skills that efficiently take the agent to states that connect the different blocks. The objective is
to identify blocks that are highly connected within themselves but weakly connected to each other.
Different clustering techniques and cut metrics may be used towards this end. Rooms and Playroom
are examples of where these algorithms can succeed. Tic-Tac-Toe and Shortcut are examples of
where they fail.
Şimşek, Wolfe & Barto [9] address certain shortcomings of global graph partitioning by constructing their graphs from short trajectories. Şimşek & Barto [7] take a different approach and search for
states that introduce short-term novelty. Although their algorithm does not explicitly use the connectivity structure of the domain, it shares some of the limitations of graph partitioning as we discuss
more fully in the next section. We claim that the more fundamental property that makes a doorway
a useful subgoal is that it is between many source-destination pairs and that graph partitioning can
not directly tap into this property, although it can sometimes do it indirectly.
6
An Incremental Discovery Algorithm
Our skill definition may be used directly to form a set of skills suitable for a given environment.
Because of its reliance on complete knowledge of the interaction graph and the computational cost
of betweenness, the use of our approach as a skill-discovery method is limited, although there are
conditions under which it would be useful. An important research question is whether approximate
methods may be developed that do not require complete representation of the interaction graph.
Although betweenness of a given vertex is a global graph property that can not be estimated reliably
without knowledge of the entire graph, it should be possible to reliably determine the local maxima
of betweenness using limited information. Here, we investigate this possibility by combining the
descriptive contributions of the present paper with algorithmic insights of earlier work. In particular,
we apply the statistical approach from Şimşek & Barto [7] and Şimşek, Wolfe & Barto [9] using the
skill description in the present paper.
The resulting algorithm is founded on the premise that local maxima of betweenness of the interaction graph are likely to be local maxima on its subgraphs. While any single subgraph would not
be particularly useful to identify such vertices, a collection of subgraphs may allow us to correctly
identify them. The algorithm proceeds as follows. The agent uses short trajectories to construct subgraphs of the interaction graph and identifies the local maxima of betweenness on these subgraphs.
From each subgraph, it obtains a new observation for every state represented on it. This is a positive
observation if the state is a local maximum, a negative observation otherwise. We use the decision
rule from Şimşek, Wolfe & Barto [9], making a particular state a subgoal if there are at least n_o
observations on this state and if the proportion of positive observations is at least p_+. The agent
continues this incremental process indefinitely.
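A compact sketch of this decision rule, assuming the networkx package and our own illustrative function names, might look as follows; positive observations are accumulated per state across subgraphs built from recent transitions.

```python
import networkx as nx
from collections import defaultdict

# Illustrative sketch: every window of recent transitions yields a subgraph;
# states that are local maxima of betweenness on it get a positive observation.
def update_subgoals(recent_transitions, stats, n_o=10, p_plus=0.2):
    g = nx.DiGraph(recent_transitions)          # edges are (s, s') pairs
    bc = nx.betweenness_centrality(g)
    for v in g.nodes():
        neighbors = set(g.successors(v)) | set(g.predecessors(v))
        is_max = all(bc[v] >= bc[u] for u in neighbors)
        pos, tot = stats[v]
        stats[v] = (pos + int(is_max), tot + 1)
    # promote states with >= n_o observations whose positive fraction >= p_plus
    return {v for v, (pos, tot) in stats.items()
            if tot >= n_o and pos / tot >= p_plus}

stats = defaultdict(lambda: (0, 0))             # state -> (positive, total)
```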
Figure 5 shows the results of applying this algorithm on two domains. The first is a gridworld
with six rooms. The second is also a gridworld, but the grid squares are one of two types with
different rewards. The lightly colored squares produce a reward of −0.001 for actions that originate
on them, while the darker squares produce −0.1. The reward structure creates two local maxima
of betweenness on the graph. These are the regions that look like doorways in the figure; they
are useful subgoals for the same reasons that doorways are. Graph partitioning does not succeed
in identifying these states because the structure is not created through node connectivity. Similarly,
the algorithms by Şimşek & Barto [7] and Şimşek, Wolfe & Barto [9] are also not suitable for this
domain. We applied them and found that they identified very few subgoals (<0.05/trial) randomly
distributed in the domain.
In both domains, we had the agent perform a random walk of 40,000 steps. Every 1000 transitions,
the agent created a new interaction graph using the last 1000 transitions. Figure 5 shows the number
of times each state was identified as a subgoal in 100 trials, using n_o = 10, p_+ = 0.2. The individual
graphs had on average 156 nodes in the six-room gridworld and 224 nodes in the other one.
We present this algorithm here as a proof of concept, to demonstrate the feasibility of incremental
algorithms. An interesting direction is to develop algorithms that actively explore to discover local
maxima of betweenness rather than only passively mining available trajectories.
Figure 5: Subgoal frequency in 100 trials using the incremental discovery algorithm.
7
Acknowledgments
This work is supported in part by the National Science Foundation under grant CNS-0619337 and by
the Air Force Office of Scientific Research under grant FA9550-08-1-0418. Any opinions, findings,
conclusions or recommendations expressed here are those of the authors and do not necessarily
reflect the views of the sponsors.
References
[1] L. C. Freeman. A set of measures of centrality based upon betweenness. Sociometry, 40:35–41, 1977.
[2] L. C. Freeman. Centrality in social networks: Conceptual clarification. Social Networks, 1:215–239, 1979.
[3] A. McGovern and A. G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In Proceedings of the Eighteenth International Conference on Machine Learning, 2001.
[4] M. Stolle and D. Precup. Learning options in reinforcement learning. Lecture Notes in Computer Science, 2371:212–223, 2002.
[5] M. Stolle. Automated discovery of options in reinforcement learning. Master's thesis, McGill University, 2004.
[6] I. Menache, S. Mannor, and N. Shimkin. Q-Cut - dynamic discovery of sub-goals in reinforcement learning. In Proceedings of the Thirteenth European Conference on Machine Learning, 2002.
[7] Ö. Şimşek and A. G. Barto. Using relative novelty to identify useful temporal abstractions in reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[8] S. Mannor, I. Menache, A. Hoze, and U. Klein. Dynamic abstraction in reinforcement learning via clustering. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[9] Ö. Şimşek, A. P. Wolfe, and A. G. Barto. Identifying useful subgoals in reinforcement learning by local graph partitioning. In Proceedings of the Twenty-Second International Conference on Machine Learning, 2005.
[10] U. Brandes. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25(2):163–177, 2001.
[11] T. G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.
[12] A. G. Barto, S. Singh, and N. Chentanez. Intrinsically motivated learning of hierarchical collections of skills. In Proceedings of the Third International Conference on Developmental Learning, 2004.
[13] S. Singh, A. G. Barto, and N. Chentanez. Intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems, 2005.
[14] R. S. Sutton, D. Precup, and S. P. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.
[15] D. Precup. Temporal abstraction in reinforcement learning. PhD thesis, University of Massachusetts Amherst, 2000.
[16] A. McGovern, R. S. Sutton, and A. H. Fagg. Roles of macro-actions in accelerating reinforcement learning. In Grace Hopper Celebration of Women in Computing, 1997.
[17] S. Amarel. On representations of problems of reasoning about actions. In Machine Intelligence 3, pages 131–171. Edinburgh University Press, 1968.
Deli Zhao and Xiaoou Tang
Department of Information Engineering, Chinese University of Hong Kong
Hong Kong, China
{dlzhao,xtang}@ie.cuhk.edu.hk
Abstract
Detecting underlying clusters from large-scale data plays a central role in machine
learning research. In this paper, we tackle the problem of clustering complex data
of multiple distributions and multiple scales. To this end, we develop an algorithm named Zeta l-links (Zell) which consists of two parts: Zeta merging with
a similarity graph and an initial set of small clusters derived from local l-links
of samples. More specifically, we propose to structurize a cluster using cycles in
the associated subgraph. A new mathematical tool, Zeta function of a graph, is
introduced for the integration of all cycles, leading to a structural descriptor of a
cluster in determinantal form. The popularity character of a cluster is conceptualized as the global fusion of variations of such a structural descriptor by means
of the leave-one-out strategy in the cluster. Zeta merging proceeds, in the hierarchical agglomerative fashion, according to the maximum incremental popularity
among all pairwise clusters. Experiments on toy data clustering, imagery pattern
clustering, and image segmentation show the competitive performance of Zell.
The 98.1% accuracy, in the sense of the normalized mutual information (NMI), is
obtained on the FRGC face data of 16028 samples and 466 facial clusters.
1
Introduction
Pattern clustering is a classic topic in pattern recognition and machine learning. In general, algorithms for clustering fall into two categories: partitional clustering and hierarchical clustering. Hierarchical clustering proceeds by merging small clusters (agglomerative) or dividing large clusters
into small ones (divisive). The key point of agglomerative merging is the measurement of structural affinity between clusters. This paper is devoted to handling the problem of data clustering via
hierarchical agglomerative merging.
1.1 Related work
The representative algorithms for partitional clustering are the traditional K-means and the latest
Affinity Propagation (AP) [1]. It is known that the K-means is sensitive to the selection of initial
K centroids. The AP algorithm addresses this issue as follows: each sample is initially viewed as an
exemplar, and then exemplar-to-member and member-to-exemplar messages are competitively transmitted
among all samples until a group of good exemplars and their corresponding clusters emerges. Besides
the superiority of finding good clusters, AP exhibits the surprising ability of handling large-scale
data. However, AP is computationally expensive to acquire clusters when the number of clusters is
set in advance. Both K-means and AP encounter difficulty on multiple manifolds mixed data.
The classic algorithms for agglomerative clustering include three kinds of linkage algorithms: the
single, complete, and average Linkages. Linkages are free from the restriction on data distributions,
but are quite sensitive to local noisy links. A novel agglomerative clustering algorithm was recently
developed by Ma et al. [2] with the lossy coding theory of multivariate mixed data. The core of
their algorithm is to characterize the structures of clusters by means of the variational coding length
of coding any two merged clusters together against coding them only individually. The coding length
Figure 1: A small graph with four vertices and five edges can be decomposed into three cycles. The
complexity of the graph can be characterized by the collective dynamics of these basic cycles.
based algorithm exhibits the exceptional performance for clustering multivariate Gaussian data or
subspace data. However, it is not suitable for manifold-valued data.
Spectral clustering algorithms are another group of popular algorithms developed in recent years.
The Normalized Cuts (Ncuts) algorithm [3] was developed for image segmentation and data clustering. Ng et al.'s algorithm [4] is mainly for data clustering, and Newman's work [5] is applied for
community detection in complex networks. Spectral clustering can handle complex data of multiple
distributions. However, it is sensitive to noise and the variation of local data scales.
In general, the following four factors pertaining to data are still problematic for most clustering algorithms: 1) mixing distributions such as multivariate Gaussians of different derivations, subspaces
of different dimensions, or globally curved manifolds of different dimensions; 2) multiple scales; 3)
global sampling densities; and 4) noise. To attack these problems, it is worthwhile to develop new
approaches that are conceptually different from existing ones.
1.2
Our work
To address issues for complex data clustering, we develop a new clustering approach called Zeta
l-links, or Zell. The core of the algorithm is based on a new cluster descriptor that is essentially
the integration of all cycles in the cluster by means of the Zeta function of the corresponding graph.
The Zeta function leads to a rational form of cyclic interactions of members in the cluster, where
cycles are employed as primitive structures of clusters. With the cluster descriptor, the popularity
of a cluster is quantified as the global fusion of variations of the structural descriptor by the leave-one-out strategy in the cluster. This definition of the popularity is expressible by the diagonals of a matrix
inverse. The structural inference between clusters may be performed with this popularity character.
Based on the novel popularity character, we propose a clustering method, named Zeta merging in the
hierarchical agglomerative fashion. This method has no additional assumptions on data distributions
and data scales. As a subsidiary procedure for Zeta merging, we present a simple method called l-links, to find the initial set of clusters as the input of Zeta merging. The Zell algorithm is the
combination of Zeta merging and l-links. Directed graph construction is derived from l-links.
2
Cyclizing a cluster with Zeta function
Our ideas are mainly inspired by recent progress in the study of the collective dynamics of complex networks. Experiments have validated that the stochastic states of a neuronal network are partially modulated by the information that cyclically transmits [6], and that the proportion of cycles in a network
is strongly relevant to the level of its complexity [7]. Recent studies [8], [9] unveil that short cycles
and Hamilton cycles in graphs play a critical role in the structural connectivity and community of a
network. This progress inspires us to formalize the structural complexity of a cluster by means of
cyclic interactions of its members. As illustrated in Figure 1, the relationship between samples can
be characterized by the combination of all cycles in the graph. Thus the structural complexity of the
graph can be conveyed by the collective dynamics of these basic cycles. Therefore, we may characterize a cluster with the global combination of structural cycles in the associated graph. To do so,
we need to model cycles of different lengths and combine them together as a structural descriptor.
2.1
Modeling cycles of equal length
We here model cycles using the sum-product codes to structurize a cluster. Formally, let C =
{x1 , . . . , xn } denote the set of sample vectors in a cluster C. Suppose that W is the weighted
adjacency matrix of the graph associated with C. A vertex of the graph represents a member in
C. For generality, the graph is assumed to be directed, meaning that W may be asymmetric. Let
$\gamma_\ell = \{p_1 \to p_2 \to \cdots \to p_{\ell-1} \to p_\ell,\ p_\ell \to p_1\}$ denote any cycle $\gamma_\ell$ of length $\ell$ defined on
$W$. We apply the factorial codes to retrieve the structural information of cycle $\gamma_\ell$, thus defining
$\omega_{\gamma_\ell} = W_{p_\ell \to p_1} \prod_{k=1}^{\ell-1} W_{p_k \to p_{k+1}}$, where $W_{p_k \to p_{k+1}}$ is the $(p_k, p_{k+1})$ entry of $W$. The value $\omega_{\gamma_\ell}$
provides a kind of degree measure of interactions among the $\gamma_\ell$-associated vertices. For the set $K_\ell$ of
all cycles of length $\ell$, the sum-product code $\phi_\ell$ is written as:
$$\phi_\ell = \sum_{\gamma_\ell \in K_\ell} \omega_{\gamma_\ell} = \sum_{\gamma_\ell \in K_\ell} W_{p_\ell \to p_1} \prod_{k=1}^{\ell-1} W_{p_k \to p_{k+1}}. \qquad (1)$$
The value $\phi_\ell$ may be viewed as the quantified indication of global interactions among C at the $\ell$-cycle scale. The structural complexity of the graph is measured by these quantities for cycles of all
different lengths, i.e., $\{\phi_1, \ldots, \phi_\ell, \ldots, \phi_\infty\}$. Further, we need to perform the functional integration
of these individual measures. The Zeta function of a graph may play a role for such a task.
2.2 Integrating cycles using Zeta function
Zeta functions are widely applied in pure mathematics as tools of performing statistics in number
theory, computing algebraic invariants in algebraic geometry, and measuring complexities in dynamic
systems. The forms of Zeta functions are diverse. The Zeta function we use here is defined as:
$$\zeta_z = \exp\left(\sum_{\ell=1}^{\infty} \phi_\ell \frac{z^\ell}{\ell}\right), \qquad (2)$$
where $z$ is a real-valued variable. Here $\zeta_z$ may be viewed as a kind of functional organization of all
cycles in $\{K_1, \ldots, K_\ell, \ldots, K_\infty\}$ in a global sense. What is interesting is that $\zeta_z$ admits a rational
form [10], which makes the intractable manipulations arising in (1) tractable.
Theorem 1. $\zeta_z = 1/\det(I - zW)$, where $z < 1/\rho(W)$ and $\rho(W)$ denotes the spectral radius of the
matrix $W$.
From Theorem 1, we see that the global interaction of elements in C is quantified by a quite simple
expression of determinantal form.
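Theorem 1 is easy to verify numerically. The sketch below (our own check, not from the paper) uses the fact that the closed-walk cycle sums satisfy $\phi_\ell = \mathrm{tr}(W^\ell)$, so the truncated series from Eqs. (1)-(2) should match $-\ln\det(I - zW)$:

```python
import numpy as np

# Numerical check of Theorem 1 on a random weighted digraph. With
# phi_l = trace(W^l), the truncated series sum_l phi_l z^l / l should
# approach -ln det(I - zW) whenever z < 1/rho(W).
rng = np.random.default_rng(0)
W = rng.random((6, 6)) * (rng.random((6, 6)) < 0.4)   # sparse nonnegative weights
z = 0.5 / max(abs(np.linalg.eigvals(W)))              # safely inside the radius

series = sum(np.trace(np.linalg.matrix_power(W, l)) * z ** l / l
             for l in range(1, 200))
closed_form = -np.log(np.linalg.det(np.eye(6) - z * W))
print(series, closed_form)    # the two values agree to high precision
```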
2.3 Modeling popularity
The popularity of a group of samples means how much these samples in the group are perceived to
be a whole cluster. To model the popularity, we need to formalize the complexity descriptor of the
cluster C. With the cyclic integration $\zeta_z$ from the preceding section, the complexity of the cluster can
be measured by the polynomial entropy $\eta_C$ of logarithm form:
$$\eta_C = \ln \zeta_z = \sum_{\ell=1}^{\infty} \frac{z^\ell}{\ell}\,\phi_\ell = -\ln\det(I - zW). \qquad (3)$$
The entropy $\eta_C$ will be employed to model the popularity of C. As we analyzed at the beginning
of Section 2, cycles are strongly associated with the structural communities of a network. To model
the popularity, therefore, we may investigate the variational information of cycles by successively
leaving one member of C out. More precisely, let $\varepsilon_C$ denote the popularity character of C. Then $\varepsilon_C$ is
defined as the averaged sum of the reductive entropies:
$$\varepsilon_C = \frac{1}{n}\sum_{p=1}^{n}\left(\eta_C - \eta_{C\setminus x_p}\right) = \eta_C - \frac{1}{n}\sum_{p=1}^{n}\eta_{C\setminus x_p}. \qquad (4)$$
Let $(\cdot)^T$ denote the transpose of a matrix and $e_p$ the $p$-th standard basis vector, whose $p$-th element
is 1 and 0 elsewhere. We have the following theorem.
Theorem 2. $\varepsilon_C = \frac{1}{n} \ln \prod_{p=1}^{n} e_p^T (I - zW)^{-1} e_p$.
By an analysis of inequalities,¹ we may obtain that $\varepsilon_C$ is bounded as $0 < \varepsilon_C \leq \eta_C/n$. The popularity
measure $\varepsilon_C$ is a structural character of C, which can be exploited to handle problems in learning
such as clustering, ranking, and classification.
The computation of $\varepsilon_C$ involves the inverse of $(I - zW)$. In general, the complexity
of computing $(I - zW)^{-1}$ is $O(n^3)$. However, $\varepsilon_C$ depends only on the diagonals of $(I - zW)^{-1}$
rather than the full dense matrix. This unique property reduces the computation of $\varepsilon_C$ to a complexity
of $O(n^{1.5})$ via a specialized algorithm for computing the diagonals of the inverse of a sparse matrix [11].
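For intuition, a direct dense evaluation of Theorem 2 takes only a few lines (this O(n³) version is for clarity only; the paper's O(n^{1.5}) route relies on the sparse diagonal-inverse algorithm of [11]):

```python
import numpy as np

# Dense sketch of the popularity character in Theorem 2:
# epsilon_C = (1/n) * sum_p ln [ (I - zW)^{-1} ]_{pp}.
def popularity(W, z):
    n = W.shape[0]
    diag = np.diag(np.linalg.inv(np.eye(n) - z * W))
    return np.log(diag).sum() / n
```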
2.4 Structural affinity measurement
Given a set of initial clusters Cc = {C1 , . . . , Cm } and the adjacency matrix P of the corresponding
samples, the affinities between clusters or data groups can be measured via the corresponding popularity character $\varepsilon_C$. Under our framework, an intuitive inference is that the two clusters that share
the largest reciprocal popularity have the most consistent structures, meaning the two clusters are
most relevant from the structural point of view. Formally, for two given data groups Ci and Cj from
Cc , the criterion of reciprocal popularity may be written as
$$\delta\varepsilon_{C_i \cup C_j} = \delta\varepsilon_{C_i} + \delta\varepsilon_{C_j} = (\varepsilon_{C_i | C_i \cup C_j} - \varepsilon_{C_i}) + (\varepsilon_{C_j | C_i \cup C_j} - \varepsilon_{C_j}), \qquad (5)$$
where the conditional popularity $\varepsilon_{C_i | C_i \cup C_j}$ is defined as $\varepsilon_{C_i | C_i \cup C_j} = \frac{1}{|C_i|} \ln \prod_{x_p \in C_i} e_p^T (I - zP_{C_i \cup C_j})^{-1} e_p$ and $P_{C_i \cup C_j}$ is the submatrix of P corresponding to the samples in $C_i$ and $C_j$. The
incremental popularity $\delta\varepsilon_{C_i}$ embodies the information gain of $C_i$ after being merged with $C_j$. The
larger the value of $\delta\varepsilon_{C_i \cup C_j}$ is, the more likely the two data groups $C_i$ and $C_j$ are perceived to be one
cluster. Therefore, $\delta\varepsilon_{C_i \cup C_j}$ may be exploited to measure the structural affinity between two groups
of samples from a whole set of samples.
3
Zeta merging
We will develop the clustering algorithm using the structural character $\varepsilon_C$. The automatic detection
of the number of clusters is also taken into consideration.
3.1
Algorithm of Zeta merging
With the criterion of structural affinity in Section 2.4, it is straightforward to write the procedures of
clustering in the hierarchical agglomerative way. The algorithm may proceed from the pair $\{C_i, C_j\}$
that has the largest incremental popularity $\delta\varepsilon_{C_i \cup C_j}$, i.e., $\{C_i, C_j\} = \arg\max_{i,j} \delta\varepsilon_{C_i \cup C_j}$. We name the
method Zeta merging; its procedures are provided in Algorithm 1. In general, Zeta merging
will proceed smoothly if the damping factor $z$ is bounded as $0 < z < \frac{1}{2\|P\|_1}$.
Algorithm 1 Zeta merging
inputs: the weighted adjacency matrix P, the m initial clusters $C_c = \{C_1, \ldots, C_m\}$, and the
number $m_c$ ($m_c \leq m$) of resulting clusters. Set $t = m$.
while 1 do
  if $t = m_c$ then break; end if
  Search two clusters $C_i$ and $C_j$ such that $\{C_i, C_j\} = \arg\max_{\{C_i, C_j\} \subset C_c} \delta\varepsilon_{C_i \cup C_j}$.
  $C_c \leftarrow \{C_c \setminus \{C_i, C_j\}\} \cup \{C_i \cup C_j\}$; $t \leftarrow t - 1$.
end while
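A straightforward, unoptimized reading of Algorithm 1 in code (our sketch, not the authors' implementation; `eps` and `cond_eps` evaluate $\varepsilon_C$ and the conditional popularity of Eq. (5) on dense submatrices, and all cluster pairs are scanned exhaustively):

```python
import numpy as np

# Illustrative dense implementation of Algorithm 1. eps() is the popularity
# of Theorem 2 on a cluster's submatrix; cond_eps() is the conditional
# popularity of Eq. (5), with diagonals read off inside the merged block.
def eps(P, members, z):
    sub = P[np.ix_(members, members)]
    diag = np.diag(np.linalg.inv(np.eye(len(members)) - z * sub))
    return np.log(diag).sum() / len(members)

def cond_eps(P, members, union, z):
    sub = P[np.ix_(union, union)]
    diag = np.diag(np.linalg.inv(np.eye(len(union)) - z * sub))
    idx = [union.index(m) for m in members]
    return np.log(diag[idx]).sum() / len(members)

def zeta_merge(P, clusters, m_c, z=0.01):
    clusters = [list(c) for c in clusters]
    while len(clusters) > m_c:
        best, pair = -np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                u = clusters[i] + clusters[j]
                gain = (cond_eps(P, clusters[i], u, z) - eps(P, clusters[i], z)
                        + cond_eps(P, clusters[j], u, z) - eps(P, clusters[j], z))
                if gain > best:
                    best, pair = gain, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)   # merge the best pair, one merge per step
    return clusters
```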
The merits of Zeta merging are that it is free from the restriction of data distributions and is less
affected by the factor of multiple scales in data. Affinity propagation in Zeta merging proceeds on
the graph according to cyclic associations, requiring no specification of data distributions. Moreover,
the popularity character $\varepsilon_C$ of each cluster is obtained from the averaged amount of variational
information conveyed by $\eta_C$. Thus the size of a cluster has little influence on the value $\delta\varepsilon_{C_i \cup C_j}$.
What is most important is that cycles rooted at each point in C globally interact with all other points.
Thus, the global descriptor $\eta_C$ and the popularity character $\varepsilon_C$ are not sensitive to the local data scale
at each point, leading to the robustness of Zeta merging against the variation of data scales.
3.2
Number of clusters in Zeta merging
In some circumstances, it is needed to automatically detect the number of underlying clusters from
given data. This functionality can be reasonably realized in Zeta merging if each cluster corresponds
to a diagonal block structure in P, up to some permutations. The principle is that the minimum
$\delta\varepsilon_{C_i \cup C_j}$ will be zero when a set of separable clusters emerges, behind which is the mathematical
principle that inverting a block-diagonal matrix is equivalent to inverting the matrices on the diagonal
blocks. In practice, however, the minimum $\delta\varepsilon_{C_i \cup C_j}$ has a jumping variation on the stable part of its
curve instead of exactly arriving at zero due to the perturbation of the interlinks between clusters.
Then the number of clusters corresponds to the step at the jumping point.
4
The Zell algorithm
An issue arising in Zeta merging is the determination of the initial set of clusters. Here, we give a
method by performing local single Linkages ( message passing by minimum distances). The method
of graph construction is also discussed here.
Figure 2: Schematic illustration of l-links. From left to right: data with two seed points (red markers), 2-links grown from two seed points, and 2-links from four seed points. The same cluster is
denoted by the markers with the same color of edges.
4.1 Detecting l-links
Given the sample set $C_y = \{y_1, \ldots, y_{m_o}\}$, we first get the set $S_i^{2K}$ of the 2K nearest neighbors of
the point $y_i$. Then from $y_i$, messages are passed among $S_i^{2K}$ in the sense of minimum distances
(or general dissimilarities), thus locally forming an acyclic directed subgraph at each point. We call
such an acyclic directed subgraph l-links, where l is the number of steps of message passing
among $S_i^{2K}$. In general, l is a small integer, e.g., $l \in \{2, 3, 4, \ldots\}$. The further manipulation is to
merge l-links that share common vertices. A simple schematic example is shown in Figure 2. The
specific procedures are provided in Algorithm 2.
Algorithm 2 Detecting l-links
inputs: the sample set $C_y = \{y_1, \ldots, y_{m_o}\}$, the number l of l-links, the number K of nearest
neighbors for each point, where $l < K$.
Initialization: $C_c = \{C_i \,|\, C_i = \{y_i\},\ i = 1, \ldots, m_o\}$ and $q = 1$.
for i from 1 to $m_o$ do
  Search the 2K nearest neighbors of $y_i$ and form $S_i^{2K}$.
  Iteratively perform $C_i \leftarrow C_i \cup \{y_j\}$ if $y_j = \arg\min_{y_j \in S_i^{2K}} \min_{y \in C_i} \mathrm{distance}(y, y_j)$, until $|C_i| > l$.
  Perform $C_j \leftarrow C_i \cup C_j$, $C_c \leftarrow C_c \setminus C_i$, and $q \leftarrow q + 1$, if $|C_i \cap C_j| > 0$, where $j = 1, \ldots, q$.
end for
4.2 Graph construction
The directional connectivity of l-links leads us to build a directed graph whose vertex yi directionally
points to its K nearest neighbors. The method of graph construction is presented in Algorithm 3.
The free parameter $\sigma$ in (6) is estimated according to the criterion that the geometric mean of all
similarities between each point and its three nearest neighbors is set to be a, where a is a given
parameter in (0, 1]. It is easy to see that $\rho(P) < 1$ here.
Algorithm 3 Directed graph construction
inputs: the sample set $C_y$, the number K of nearest neighbors, and a free parameter $a \in (0, 1]$.
Estimate the parameter $\sigma$ by $\sigma^2 = -\frac{1}{3 m_o \ln a} \sum_{y_i \in C_y} \sum_{y_j \in S_i^3} [\mathrm{distance}(y_i, y_j)]^2$.
Define the entry of the i-th row and j-th column of the weighted adjacency matrix P as
$$P_{i \to j} = \begin{cases} \exp\left(-\dfrac{[\mathrm{distance}(y_i, y_j)]^2}{\sigma^2}\right), & \text{if } y_j \in S_i^K, \\ 0, & \text{otherwise.} \end{cases} \qquad (6)$$
Perform the sum-to-one operation for each row, i.e., $P_{i \to j} \leftarrow P_{i \to j} / \sum_{j=1}^{m_o} P_{i \to j}$.
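Algorithm 3 translates directly into code (a sketch under our reading of the $\sigma$ estimate; it assumes $a < 1$ so that $\ln a < 0$, and `D` is again a pairwise distance matrix):

```python
import numpy as np

# Sketch of Algorithm 3: estimate sigma so the geometric mean of the
# 3-nearest-neighbor similarities equals a, then build the row-stochastic
# K-NN adjacency matrix P.
def build_graph(D, K=20, a=0.95):
    n = D.shape[0]
    nn3 = np.sort(D, axis=1)[:, 1:4]                 # 3 nearest neighbors
    sigma2 = -(nn3 ** 2).sum() / (3 * n * np.log(a))
    P = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:K + 1]:          # K nearest neighbors
            P[i, j] = np.exp(-D[i, j] ** 2 / sigma2)
    return P / P.sum(axis=1, keepdims=True)          # sum-to-one rows
```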
4.3 Zeta l-links (Zell)
Our algorithm for data clustering is in effect to perform Zeta merging on the initial set of small
clusters derived from l-links. So, we name our algorithm Zeta l-links, or Zell. The complete
implementation of the Zell algorithm is to consecutively perform Algorithm 3, Algorithm 2, and Algorithm 1. In practice, the steps in Algorithm 3 and Algorithm 2 are operated together to enhance
¹Interested readers may refer to the full version of this paper for proofs.
Figure 3: Clustering on toy data. (a) Generated data of 12 clusters. The number of each cluster is
shown in the figure. The data are of different distributions, consisting of multiple manifolds (two
circles and a hyperbola), subspaces (two pieces of lines and a piece of the rectangular strip), and
six Gaussians. The densities of clusters are diverse. The differences between the sizes of different
clusters are large. The scales of the data vary. For each cluster in the manifold and subspace data, the
points are randomly generated with different deviations. (b) Clusters yielded by Zell (given number
of clusters). The different colors denote different clusters. (c) Clusters automatically detected by Zell
on the data composed of six Gaussians and the short line. (d) Curve of minimum Delta popularity
($\delta\varepsilon$). (e) Enlarged part of (d) and the curve of its first-order differences. The point marked by the
square is the detected jumping point. (f) The block structures of P corresponding to the data in (c).
the efficiency of Zell. Zeta merging may also be combined with K-means and Affinity Propagation for clustering. These two algorithms work well for producing small clusters. So, they can be
employed to generate initial clusters as the input of Zeta merging.
5
Experiment
Experiments are conducted on clustering toy data, hand-written digits and cropped faces from captured images, and segmenting images to test the performance of Zell. The quantitative performance
of the algorithms is measured by the normalized mutual information (NMI) [12] which is widely
used in learning communities. The NMI quantifies the normalized statistical information shared
between two distributions. The larger the NMI is, the better the clustering performance of the algorithm is.
Four representative algorithms are taken into comparison, i.e., K-centers, (average) Linkage, Affinity
Propagation (AP), and Normalized Cuts (Ncuts). Here we use K-centers instead of K-means because
it can handle the case where distances between points are not measured by Euclidean norms. For
fair comparison, we run Ncuts on the graph whose parameters are set the same with the graph used
by Zell. The parameters for Zell are set as z = 0.01, a = 0.95, K = 20, and l = 2.
5.1
On toy data
We first perform an experiment on a group of toy data of diverse distributions with multiple densities, multiple scales, and significantly different sizes of clusters. As shown in Figures 3 (b) and (c),
the Zell algorithm accurately detects the underlying clusters. Particularly, Zell is capable of simultaneously differentiating the cluster with five members and the cluster with 1500 members. This
functionality is critically important for finding genes from microarray expressions in bioinformatics.
Figures 3 (d) and (e) show the curves of the minimum variational $\delta\varepsilon$ (for the data in Figure 3 (c)), where
the number of clusters is determined at the largest gap of the curve in the stable part. However, the
method presented in Section 3.2 fails to automatically detect the number of clusters for the data in
Figure 3 (a), because the corresponding P matrix has no clear diagonal block structures.
Table 1: Imagery data. MNIST and USPS: digit databases. ORL and FRGC: face databases. The
last row shows the numbers of clusters automatically detected by Zell on the five data sets.
Data set                       | MNIST     | USPS     | ORL       | sFRGC   | FRGC
Number of samples              | 5139      | 11000    | 400       | 11092   | 16028
Number of clusters             | 5         | 10       | 40        | 186     | 466
Average number of each cluster | 1027 ± 64 | 1100 ± 0 | 10 ± 0    | 60 ± 14 | 34 ± 24
Dimension of each sample       | 784       | 256      | 2891      | 2891    | 2891
Detected number of clusters    | 11        | 8        | 85 (K = 5) | 229    | 511
Table 2: Quantitative clustering results on imagery data. NMI: normalized mutual information. The
"pref" means the preference value used in Affinity Propagation for clustering of given numbers.
K = 5 for the ORL data set.
      Algorithm | K-centers | Linkage | Ncuts | Affinity propagation (pref) | Zell
      MNIST     | 0.228     | 0.496   | 0.737 | 0.451 (-871906470)          | 0.865
NMI   USPS      | 0.183     | 0.095   | 0.443 | 0.313 (-417749850)          | 0.772
      ORL       | 0.393     | 0.878   | 0.939 | 0.877 (-6268)               | 0.940
      sFRGC     | 0.106     | 0.934   | 0.953 | 0.899 (-16050)              | 0.988
      FRGC      | 0.187     | 0.950   | 0.924 | 0.906 (-7877)               | 0.981
5.2 On imagery data
The imagery patterns we adopt are the hand-written digits in the MNIST and USPS
databases and the facial images in the ORL and FRGC (Face Recognition Grand Challenge,
http://www.frvt.org/FRGC/) databases. The MNIST and USPS data sets are downloaded from Sam
Roweis?s homepage (http://www.cs.toronto.edu/?roweis). For MNIST, we select all the images of
digits from 0 to 4 in the testing set for experiment. For FRGC, we use the facial images in the target
set of experiment 4 in the FRGC version 2. Besides the whole target set, we also select a subset from
it. Such persons are selected as another group of clusters if the number of faces for each person is no
less than forty. The information of data sets is provided in Table 1. For digit patterns, the Frobenius
norm is employed to measure distances of digit pairs without feature extraction. For face patterns,
however, we extract visual features of each face by means of the local binary pattern algorithm. The
Chi-square metric is exploited to compute distances, defined as $\mathrm{distance}(\bar{y}, \hat{y}) = \sum_i \frac{(\bar{y}_i - \hat{y}_i)^2}{\bar{y}_i + \hat{y}_i}$.
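In code this is a one-liner (the small `eps` guarding empty histogram bins is our addition, not the paper's):

```python
import numpy as np

# Chi-square distance between two nonnegative feature histograms, as used
# above for the local-binary-pattern face features.
def chi_square(u, v, eps=1e-12):
    return np.sum((u - v) ** 2 / (u + v + eps))
```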
The quantitative results are given in Table 2. We see that Zell consistently outperforms the other
algorithms across the five data sets. In particular, the performance of Zell is encouraging on the
FRGC data set which has the largest numbers of clusters and samples. As reported in [1], AP does
significantly outperform K-centers. However, AP shows the unsatisfactory performance on the digit
data where the manifold structures may occur due to that the styles of digits vary significantly. The
average Linkage also exhibits such phenomena. The results achieved by Ncuts are also competitive.
However, Ncuts is overall unstable, for example, yielding the low accuracy on the USPS data. The
results in Table 3 confirm the stability of Zell over variations of the free parameters. Actually, l
affects the performance of Zell when it is larger, because it may incur incorrect initial clusters.
5.3 Image segmentation
We show several examples of the application of Zell on image segmentation from the Berkeley
segmentation database. The weighted adjacency matrix P is defined as $P_{i \to j} = \exp\left(-\frac{(I_i - I_j)^2}{\sigma^2}\right)$
if $I_j \in N_i^8$ and 0 otherwise, where $I_i$ is the intensity value of a pixel and $N_i^8$ denotes the set of
pixels in the 8-neighborhood of $I_i$. Figure 4 displays the segmentation results of different numbers of
segments for each image. Overall, attentional regions are merged by Zell. Note that small attentional
regions take priority over large ones in being merged. Therefore, Zell yields many small
attentional regions as final clusters.
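The pixel graph just described can be built as follows (dense and unoptimized for clarity, our sketch only; a sparse matrix would be used in practice, and `sigma` plays the same role as the free parameter in Eq. (6)):

```python
import numpy as np

# Sketch of the segmentation graph: each pixel links to its 8-neighborhood
# with intensity-based weights, then rows are normalized to sum to one.
def pixel_graph(img, sigma=10.0):
    img = img.astype(float)                  # assume grayscale intensities
    h, w = img.shape
    P = np.zeros((h * w, h * w))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if (dy or dx) and 0 <= yy < h and 0 <= xx < w:
                        j = yy * w + xx
                        P[i, j] = np.exp(-(img[y, x] - img[yy, xx]) ** 2 / sigma ** 2)
    return P / P.sum(axis=1, keepdims=True)
```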
6
Conclusion
An algorithm, named Zell, has been developed for data clustering. The cyclization of a cluster is the
fundamental principle of Zell. The key point of the algorithm is the integration of structural cycles
by the Zeta function of a graph. A popularity character measuring the compactness of a cluster
is defined via the Zeta function, on which the core of Zell for agglomerative clustering is based. An
Table 3: Results yielded by Zell over variations of free parameters on the sFRGC data. The initial
set is {z = 0.01, a = 0.95, K = 20, l = 3}. When one of them varies, the others are kept invariant.
Parameter | z                  | a                        | K                 | l
Range     | 10^{-{1,2,3,4}}    | 0.2 × {1, 2, 3, 4, 4.75} | 10 × {2, 3, 4, 5} | {2, 3, 4}
NMI       | 0.988 ± 0          | 0.988 ± 0.00019          | 0.987 ± 0.0015    | 0.988 ± 0.0002
Figure 4: Image segmentation by Zell from the Berkeley segmentation database.
approach for finding initial small clusters is presented, which is based on the merging of local links
among samples. The directed graph used in this paper is derived from the directionality of l-links.
Experimental results on toy data, hand-written digits, facial images, and image segmentation show
the competitive performance of Zell. We hope that Zell brings a new perspective on complex data
clustering.
Acknowledgement
We thank Yaokun Wu and Sergey Savchenko for their continuing help on algebraic graph theory.
We are also grateful of the interesting discussion with Yi Ma and John Wright on clustering and
classification. Feng Li and Xiaodi Hou are acknowledged for their kind help. The reviewers'
insightful comments and suggestions are also greatly appreciated.
References
[1] Frey, B.J. & Dueck, D. (2007) Clustering by passing messages between data points. Science 315:972-976.
[2] Ma, Y. Derksen, H. Hong, W. & Wright, J. (2007) Segmentation of multivariate mixed data via lossy data
coding and compression. IEEE Trans. on Pattern Recognition and Machine Intelligence 29:1546-1562.
[3] Shi, J.B. & Malik, J. (2000) Normalized cuts and image segmentation. IEEE Trans. on Pattern Recognition
and Machine Intelligence 22(8):888-905.
[4] Ng, A.Y., Jordan, M.I. & Weiss, Y. (2001) On spectral clustering: analysis and an algorithm. Advances in
Neural Information Processing Systems. Cambridge, MA: MIT Press.
[5] Newman, M.E.J. (2006) Finding community structure in networks using the eigenvectors of matrices. Physical Review E 74(3).
[6] Destexhe, A. & Contreras, D. (2006) Neuronal computations with stochastic network states. Science,
314(6):85-90.
[7] Sporns, O. Tononi, G. & Edelman, G.M. (2000) Theoretical neuroanatomy: relating anatomical and functional connectivity in graphs and cortical connection matrices. Cerebral Cortex, 10:127-141.
[8] Bagrow, J. Bollt, E. & Costa, L.F. (2007) On short cycles and their role in network structure.
http://arxiv.org/abs/cond-mat/0612502.
[9] Bianconi, G. & Marsili, M. (2005) Loops of any size and Hamilton cycles in random scale-free networks.
Journal of Statistical Mechanics, P06005.
[10] Savchenko, S.V. (1993) The zeta-function and Gibbs measures. Russ. Math. Surv. 48(1):189-190.
[11] Li, S. Ahmed, S. Klimeck, G. & Darve, E. (2008) Computing entries of the inverse of a sparse matrix
using the FIND algorithm. Journal of Computational Physics 227:9408-9427.
[12] Strehl, A. & Ghosh, J. (2002) Cluster ensembles - a knowledge reuse framework for combining multiple
partitions. Journal of Machine Learning Research 3:583-617.
with Convex Optimization
Kevyn Collins-Thompson*
Microsoft Research
1 Microsoft Way
Redmond, WA U.S.A. 98052
[email protected]
Abstract
Query expansion is a long-studied approach for improving retrieval effectiveness
by enhancing the user?s original query with additional related words. Current
algorithms for automatic query expansion can often improve retrieval accuracy
on average, but are not robust: that is, they are highly unstable and have poor
worst-case performance for individual queries. To address this problem, we introduce a novel formulation of query expansion as a convex optimization problem
over a word graph. The model combines initial weights from a baseline feedback algorithm with edge weights based on word similarity, and integrates simple
constraints to enforce set-based criteria such as aspect balance, aspect coverage,
and term centrality. Results across multiple standard test collections show consistent and significant reductions in the number and magnitude of expansion failures,
while retaining the strong positive gains of the baseline algorithm. Our approach
does not assume a particular retrieval model, making it applicable to a broad class
of existing expansion algorithms.
1 Introduction
A major goal of current information retrieval research is to develop algorithms that can improve
retrieval effectiveness by inferring a more complete picture of the user's information need, beyond
that provided by the user's query text. A query model captures a richer representation of the context
and goals of a particular information need. For example, in the language modeling approach to
retrieval [9], a simple query model may be a unigram language model, with higher probability given
to terms related to the query text. Once estimated, a query model may be used for such tasks as
query expansion, suggesting alternate query terms to the user, or personalizing search results [11].
In this paper, we focus on the problem of automatically inferring a query model from the top-ranked
documents obtained from an initial query. This task is known as pseudo-relevance feedback or blind
feedback, because we do not assume any direct input from the user other than the initial query text.
Despite decades of research, even state-of-the-art methods for inferring query models, and in particular pseudo-relevance feedback, still suffer from some serious drawbacks. First, past research
efforts have focused largely on achieving good average performance, without regard for the stability
of individual retrieval results. The result is that current models are highly unstable and have bad
worst-case performance for individual queries. This is one significant reason that Web search engines still make little or no use of automatic feedback methods. In addition, current methods do not
*This work was primarily done while the author was at the Language Technologies Institute, School of
Computer Science, Carnegie Mellon University.
adequately capture the relationships or tradeoffs between competing objectives, such as maximizing
the expected relevance weights of selected words versus the risks of those choices. This in turn leads
to several problems.
First, when term risk is ignored, the result will be less reliable algorithms for query models, as we
show in Section 3. Second, selection of expansion terms is typically done in a greedy fashion by
rank or score, which ignores the properties of the terms as a set and leads to the problem of aspect
imbalance, a major source of retrieval failures [2]. Third, few existing expansion algorithms can
operate selectively; that is, automatically detect when a query is risky to expand, and then avoid or
reduce expansion in such cases. The few algorithms we have seen that do attempt selective expansion
are not especially effective, and rely on sometimes complex heuristics that are integrated in a way
that is not easy to untangle, modify or refine. Finally, for a given task there may be additional
factors that must be constrained, such as the computational cost of sending many expansion terms
to the search engine. To our knowledge such situations are not handled by any current query model
estimation methods in a principled way.
To remedy these problems, we need a better theoretical framework for query model estimation: one
that incorporates both risk and reward data about terms, that detect risky situations and expands
selectively, that can incorporate arbitrary additional problem constraints such as a computational
budget, and has fast practical implementations.
Our solution is to develop a novel formulation of query model estimation as a convex optimization
problem [1], by casting the problem in terms of constrained graph labeling. Informally, we seek
query models that use a set of terms with high expected relevance but low expected risk. This idea
has close connections with models of risk in portfolio optimization [7]. An optimization approach
frees us from the need to provide a closed-form formula for term weighting. Instead, we specify a
(convex) objective function and a set of constraints that a good query model should satisfy, letting
the solver do the work of searching the space of feasible query models. This approach gives a natural
way to perform selective expansion: if there is no feasible solution to the optimization problem, we
do not attempt to expand the original query. More generally, it gives a very flexible framework for
integrating different criteria for expansion as optimization constraints or objectives.
Our risk framework consists of two key parts. First, we seek to minimize an objective function that
consists of two criteria: term relevance, and term risk. Term risk in turn has two subcomponents:
the individual risk of a term, and the conditional risk of choosing one term given we have already
chosen another. Second, we specify constraints on what "good" sets of terms should look like. These
constraints are chosen to address traditional reasons for query drift. With these two parts, we obtain
a simple convex program for solving for the relative term weights in a query model.
2 Theoretical model
Our aim in this section is to develop a constrained optimization program to find stable, effective
query models. Typically, our optimization will embody a basic tradeoff between wanting to use
evidence that has strong expected relevance, such as expansion terms with high relevance model
weights, and the risk or confidence in using that evidence. We begin by describing the objectives
and constraints over term sets that might be of interest for estimating query models. We then describe
a set of (sometimes competing) constraints whose feasible set reflects query models that are likely to
be effective and reliable. Finally, we put all these together to form the convex optimization problem.
2.1 Query model estimation as graph labeling
We can gain some insight into the problem of query model estimation by viewing the process of
building a query as a two-class labeling problem over terms. Given a vocabulary V , for each term
t ∈ V we decide to either add term t to the query (assign label "1" to the term), or to leave it out
(assign label "0"). The initial query terms are given a label of "1". Our goal is to find a function
f : V → {0, 1} that classifies the finite set V of |V| = K terms, choosing one of the two labels for
each term. The terms are typically related, so that the pairwise similarity σ(i, j) between any two
terms w_i, w_j is represented by the weight of the edge connecting w_i and w_j in the undirected graph
G = (V, E), where E is the set of all edges. The cost function L(f) captures our displeasure with a
given f, according to how badly the labeling produced by f violates the following two criteria.
Figure 1: Query model estimation as a constrained graph labeling problem using two labels (relevant, non-relevant) on a graph of pairwise term relations. The square nodes X, Y, and Z represent
query terms, and circular nodes represent potential expansion terms. Dark nodes represent terms
with high estimated label weights that are likely to be added to the initial query. Additional constraints can select sets of terms having desirable properties for stable expansion, such as a bias
toward relevant labels related to multiple query terms (right).
• The cost c_{i:k} gives the cost of labeling term t_i with label k ∈ {0, 1}.
• The cost σ_{i,j} · d(f(i), f(j)) gives the penalty for assigning labels f(i) and f(j) to items i and j when their similarity is σ_{i,j}. The function d(u, v) is a metric that is the same for all edges. Typically, similar items are expected to have similar labels and thus a penalty is assigned to the degree this expectation is violated.
For this study, we assume a very simple metric in which d(i, j) = 1 if i ≠ j and 0 otherwise. In
a probabilistic setting, finding the most probable labeling can be viewed as a form of maximum a
posteriori (MAP) estimation over the Markov random field defined by the term graph.
Although this problem is NP-hard for arbitrary configurations, various approximation algorithms
exist that run in polynomial time by relaxing the constraints. Here we relax the condition that the
labels be integers in {0, 1} and allow real values in [0, 1]. A review of relaxations for the more
general metric labeling problem is given by Ravikumar and Lafferty [10]. The basic relaxation we
use is

    maximize    Σ_{s;j} c_{s;j} x_{s;j} + Σ_{s,t;j,k} σ_{s,j;t,k} x_{s;j} x_{t;k}
    subject to  Σ_j x_{s;j} = 1                                          (1)
                0 ≤ x_{s;j} ≤ 1.
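As a concrete illustration, the sketch below evaluates this relaxed objective in Python for the two-label case, under a simplifying assumption of ours: the pairwise weights are treated as label-agnostic agreement rewards, i.e. σ_{s,j;t,k} = σ_{s,t} when j = k and 0 otherwise.

```python
import numpy as np

def relaxed_objective(x, c, sigma):
    """Evaluate the relaxed labeling objective of eq. (1) for two labels.

    x     : (K,) relaxed weights of label "1", each in [0, 1]
    c     : (K, 2) assignment costs c[s, j] for labels j in {0, 1}
    sigma : (K, K) pairwise term weights (assumed label-agnostic here)
    """
    # with two labels, x[s] is the weight of label 1 and 1 - x[s] of label 0
    X = np.stack([1.0 - x, x], axis=1)      # (K, 2) relaxed label weights
    unary = np.sum(c * X)                   # sum_{s,j} c[s,j] x[s,j]
    pairwise = np.sum(sigma * (X @ X.T))    # rewards agreement of similar terms
    return unary + pairwise
```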
The variable x_{s;j} denotes the assignment value of label j for term s. Our method obtains its initial
assignment costs cs;j from a baseline feedback method, given an observed query and corresponding
set of query-ranked documents. For our baseline expansion method, we use the strong default feedback algorithm included in Indri 2.2 based on Lavrenko?s Relevance Model [5]. Further details are
available in [4].
In the next section, we discuss how to specify values for c_{s;j} and σ_{s,j;t,k} that make sense for query
model estimation. For a two-label problem where j ∈ {0, 1}, the values of x_i for one label completely determine the values for the other, since they must sum to 1, so it suffices to optimize over
only the x_{i;1}, and for simplicity we simply refer to x_i instead of x_{i;1}.
Our goal is to find a set of weights x = (x1 , . . . , xK ) where each xi corresponds to the weight
in the final query model of term wi and thus is the relative value of each word in the expanded
query. The graph labeling formulation may be interpreted as combining two natural objectives:
the first maximizes the expected relevance of the selected terms, and the second minimizes the
risk associated with the selection. We now describe each of these in more detail, followed by a
description of additional set-based constraints that are useful for query expansion.
2.2 Relevance objectives
Given an initial set of term weights from a baseline expansion method c = (c_1, ..., c_K), the expected relevance over the vocabulary V of a solution x is given by the weighted sum c · x = Σ_k c_k x_k.
Essentially, maximizing expected relevance biases the "relevant" labels toward those words with the
highest c_i values. Other relevance objective functions are also possible, as long as they are convex.
For example, if c and x represent probability distributions over terms, then we could replace c ? x
with KL(c||x) as an objective since KL-divergence is also convex in c and x.
The initial assignment costs (label values) c can be set using a number of methods depending on
how scores from the baseline expansion model are normalized. In the case of Indri?s language
model-based expansion, we are given estimates of the Relevance Model p(w|R) over the highest-ranking k documents¹. We can also estimate a non-relevance model p(w|N) using the collection to
approximate non-relevant documents, or using the lowest-ranked k documents out of the top 1000
retrieved by the initial query Q. To set c_{s:1}, we first compute p(R|w) for each word w via Bayes' theorem,

    p(R|w) = p(w|R) / (p(w|R) + p(w|N))                                  (2)

assuming p(R) = p(N) = 1/2. Using the notation p(R|Q) and p(R|Q̄) to denote our belief that any query word or non-query word respectively should have label 1, the initial expected label value is then

    c_{s:1} = p(R|Q) + (1 − p(R|Q)) · p(R|w_s)    if s ∈ Q
    c_{s:1} = p(R|Q̄) · p(R|w_s)                   if s ∉ Q              (3)

for the "relevant" label. We use p(R|Q) = 0.75 and p(R|Q̄) = 0.5. Since the label values must sum to one, for binary labels we have c_{s:0} = 1 − c_{s:1}.
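For concreteness, a minimal sketch of eqs. (2)-(3) follows; the function name and array layout are our own, not part of the paper.

```python
import numpy as np

def initial_label_values(p_w_R, p_w_N, query_mask, p_R_Q=0.75, p_R_notQ=0.5):
    """Initial 'relevant' label values c_{s:1} from eqs. (2)-(3).

    p_w_R, p_w_N : (K,) relevance / non-relevance model probabilities
    query_mask   : (K,) boolean, True for the original query terms
    """
    # eq. (2): p(R|w) under the equal priors p(R) = p(N) = 1/2
    p_R_w = p_w_R / (p_w_R + p_w_N)
    # eq. (3): query terms are boosted toward label 1
    c1 = np.where(query_mask, p_R_Q + (1.0 - p_R_Q) * p_R_w, p_R_notQ * p_R_w)
    c0 = 1.0 - c1   # binary label values must sum to one
    return c0, c1
```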
2.3 Risk objectives
Optimizing for expected term relevance only considers one dimension of the problem. A second
critical objective is minimizing the risk associated with a particular term labeling. We adapt an
informal definition of risk here in which the variance of the expected relevance is a proxy for uncertainty, encoded in the matrix Σ with entries σ_ij. Using a betting analogy, the weights x = {x_i}
represent wagers on the utility of the query model terms. A risky strategy would place all bets on the
single term with highest relevance score. A lower-risk strategy would distribute bets among terms
that had both a large estimated relevance and low redundancy, to cover all aspects of the query.
Conditional term risk. First, we consider the conditional risk σ_ij between pairs of terms w_i and w_j. To quantify conditional risk, we measure the redundancy of choosing word w_i given that w_j has already been selected. This relation is expressed by choosing a symmetric similarity measure σ(w_i, w_j) between w_i and w_j, which is rescaled into a distance-like measure d(w_i, w_j) with the formula

    σ_ij = d(w_i, w_j) = β · exp(−γ · σ(w_i, w_j))                       (4)

The quantities β and γ are scaling constants that depend on the output scale of σ, and the choice of γ also controls the relative importance of individual vs. conditional term risk. In this study, our σ(w_i, w_j) measure is based on term associations over the 2 × 2 contingency table of term-document counts. For this experiment we used the Jaccard coefficient; future work will examine others.
Individual risk. We say that a term related to multiple query terms exhibits term centrality. Previous work has shown that central terms are more likely to be effective for expansion than terms related to few query terms [3][12]. We use term centrality to quantify a term's individual risk, and define it for a term w_i in terms of the vector d_i of all similarities of w_i with all query terms. The covariance matrix Σ then has diagonal entries

    σ_ii = ||d_i||²₂ = Σ_{w_q ∈ Q} d²(w_i, w_q)                          (5)
¹We use the symbols R and N to represent relevance and non-relevance respectively.
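A sketch of how the risk matrix Σ of eqs. (4)-(5) could be assembled from a precomputed similarity matrix; the constants and defaults here are illustrative, not the paper's tuned values.

```python
import numpy as np

def risk_covariance(sim, query_idx, beta=1.0, gamma=0.75):
    """Build Sigma from pairwise term similarities (a sketch).

    sim       : (K, K) symmetric similarities sigma(w_i, w_j), e.g. Jaccard
    query_idx : indices of the original query terms within the vocabulary
    """
    # eq. (4): conditional risk; high similarity maps to low distance
    d = beta * np.exp(-gamma * sim)
    Sigma = d.copy()
    # eq. (5): individual risk from centrality w.r.t. the query terms
    d_q = d[:, query_idx]                       # distances to the query terms
    np.fill_diagonal(Sigma, np.sum(d_q ** 2, axis=1))
    return Sigma
```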
[Figure 2: three panels of term graphs with query terms X and Y: (a) Aspect balance, bad vs. good; (b) Aspect coverage, low vs. high; (c) Term centering, variable vs. centered.]
Figure 2: Three complementary criteria for expansion term weighting on a graph of candidate terms,
and two query terms X and Y . The aspect balance constraint (left) prefers sets of expansion terms
that balance the representation of X and Y . The aspect coverage constraint (center) increases recall
by allowing more expansion candidates within a distance threshold of each term. Term centering
(right) prefers terms near the center of the graph, and thus more likely to be related to both terms,
with minimum variation in the distances to X and Y .
Other definitions of centrality are certainly possible, e.g. depending on generative assumptions for
term distributions.
We can now combine relevance and risk into a single objective, and control the tradeoff with a single parameter κ, by minimizing the function

    L(x) = −c^T x + (κ/2) · x^T Σ x.                                     (6)
If Σ is estimated from term co-occurrence data in the top-retrieved documents, then the condition
to minimize x^T Σ x also encodes the fact that we want to select expansion terms that are not all in
the same co-occurrence cluster. Rather, we prefer a set of expansion terms that are more diverse,
covering a larger range of potential topics.
2.4 Set-based constraints
One limitation of current query model estimation methods is that they typically make greedy term-by-term decisions using a threshold, without considering the qualities of the set of terms as a whole.
A one-dimensional greedy selection by term score, especially for a small number of terms, has the
risk of emphasizing terms related to one aspect and not others. This in turn increases the risk of
query drift after expansion. We now define several useful constraints on query model terms: aspect
balance, aspect coverage, and query term support. Figure 2 gives graphical examples of aspect
balance, aspect coverage, and the term centrality objective.
Aspect balance. We make the simplistic assumption that each of a query's terms represents a separate and unique aspect of the user's information need. We create the matrix A from the vectors σ_k(w_i) for each query term q_k, by setting A_{ki} = σ_k(w_i) = σ_{ik}. In effect, Ax gives the projection of the solution model x on each query term's feature vector σ_k. We define the requirement that x be in balance to be that the vector Ax be element-wise close to the mean vector µ of the σ_k, within a tolerance ζ_µ, which we denote (with some flexibility in notation) by

    Ax ≤ µ + ζ_µ.                                                        (7)

To demand an exact solution, we set ζ_µ = 0. In reality, some slack is desirable for slightly better results and so we use a small positive value for ζ_µ such as 1.0.
Query term support. Another important constraint is that the set of initial query terms Q be
predicted by the solution labeling. We express this mathematically by requiring that the weights for the "relevant" label on the query terms, x_{i:1}, lie in a range l_i ≤ x_i ≤ u_i and in particular be above the threshold l_i for x_i ∈ Q. Currently l_i is set to a default value of 0.95 for all query terms, and zero for all other terms. u_i is set to 1.0 for all terms. Term-specific values for l_i may also be desirable to
reflect the rarity or ambiguity of individual query terms.
    minimize    −c^T x + (κ/2) · x^T Σ x           Relevance, term centrality & risk   (9)
    subject to  Ax ≤ µ + ζ_µ                       Aspect balance                      (10)
                g_i^T x ≥ φ_i,  w_i ∈ Q            Aspect coverage                     (11)
                l_i ≤ x_i ≤ u_i,  i = 1, ..., K    Query term support, positivity      (12)
Figure 3: The basic constrained quadratic program QMOD used for query model estimation.
Aspect coverage. One of the strengths of query expansion is its potential for solving the vocabulary mismatch problem by finding different words to express the same information need. Therefore,
we can also require a minimal level of aspect coverage. That is, we may require more than just that
terms are balanced evenly among all query terms: we may care about the absolute level of support
that exists. For example, suppose our information sources are feedback terms, and we have two
possible term weightings that are otherwise feasible solutions. The first weighting has only enough
terms selected to give a minimal non-zero but even covering to all aspects. The second weighting
scheme has three times as many terms, but also gives an even covering. Assuming no conflicting
constraints such as maximum query length, we may prefer the second weighting because it increases
the chance we find the right alternate words for the query, potentially improving recall.
We denote the set of distances to neighboring words of query term q_i by the vector g_i. The projection g_i^T x gives us the aspect coverage, or how well the words selected by the solution x "cover" term q_i. The more expansion terms near q_i that are given higher weights, the larger this value becomes. When only the query term is covered, the value of g_i^T x = σ_ii. We want the aspect coverage for each of the vectors g_i to exceed a threshold φ_i, and this is expressed by the constraint

    g_i^T x ≥ φ_i.                                                       (8)
Putting together the relevance and risk objectives, and constraining by the set properties, results in
the following complete quadratic program for query model estimation, which we call QMOD and is
shown in Figure 3. The role of each constraint is given in italics.
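As an illustration only, the QMOD program maps almost line-for-line onto a modern convex solver. The sketch below uses the cvxpy library (our choice; the paper does not name a solver) and assumes Σ is positive semidefinite so that the quadratic objective is convex.

```python
import cvxpy as cp

def solve_qmod(c, Sigma, A, mu, G, phi, l, u, kappa=1.0, zeta=1.0):
    """Solve the QMOD program of Figure 3 (a sketch, not the authors' code).

    c : (K,) relevance weights; Sigma : (K, K) risk matrix, assumed PSD
    A : (Q, K) aspect matrix; mu : (Q,) mean aspect vector
    G : (Q, K) rows g_i of coverage features; phi : (Q,) thresholds
    l, u : (K,) lower/upper bounds encoding query term support
    """
    x = cp.Variable(len(c))
    objective = cp.Minimize(-c @ x + (kappa / 2) * cp.quad_form(x, Sigma))
    constraints = [A @ x <= mu + zeta,   # aspect balance, eq. (10)
                   G @ x >= phi,         # aspect coverage, eq. (11)
                   x >= l, x <= u]       # query support and positivity, eq. (12)
    prob = cp.Problem(objective, constraints)
    prob.solve()
    # an infeasible program triggers selective expansion: keep the original query
    return x.value if prob.status == "optimal" else None
```

An infeasible solver status plays the role of the selective-expansion test described in Section 3.2.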
3 Evaluation
In this section we summarize the effectiveness of using the QMOD convex programs to estimate
query models and examine how well the QMOD feasible set is calibrated to the empirical risk of
expansion. For space reasons we are unable to include a complete sensitivity analysis of the effect
of the various constraints. The best risk-reward tradeoff is generally obtained with a strong query
support constraint (li near 1.0) and moderate balance between individual and conditional term risk.
We used the following default values for the control parameters: κ = 1.0, γ = 0.75, ζ_µ = 1.0, φ_i = 0.1, u_i = 1.0, and l_i = 0.95 for query terms and l_i = 0 for non-query terms.
3.1 Robustness of Model Estimation
In this section we evaluate the robustness of the query models estimated using the convex program
in Fig. 3 over several TREC collections. We created a histogram of MAP improvement across sets
of topics. This is a fine-grained look that shows the distribution of gain or loss in MAP for a given
feedback method. Using these histograms we can distinguish between two systems that might have
the same number of failures, but which help or hurt queries by very different magnitudes. The
number of queries helped or hurt by expansion is shown, binned by the loss or gain in average
precision by using feedback. The baseline feedback here was Indri 2.2 (Modified Relevance Model
with stoplist) [8]. The robustness histogram with results combined for all collections is shown in
Fig. 4. Both algorithms achieve the same gain in average precision over all collections (15%). Yet
considering the expansion failures whose loss in average precision is more than 10%, the robust
version hurts more than 60% fewer queries.
[Figure 4: paired histograms of query counts binned by percent change in average precision, from [-100, -90) through [90, 100) and 100+; panel (a) Queries hurt, panel (b) Queries helped; y-axis: Number of Queries.]
Figure 4: Comparison of expansion robustness for four TREC collections combined (TREC 1&2,
TREC 7, TREC 8, wt10g). The histograms show counts of queries, binned by percent change
in average precision. The dark bars show robust expansion performance using the QMOD convex
program with default control parameters. The light bars show baseline expansion performance using
term relevance weights only. Both methods improve average precision by an average of 15%, but
the robust version hurts significantly fewer queries, as evident by the greatly reduced tail on the left
histogram (queries hurt).
3.2 Calibration of Feasible Set
If the constraints of a convex program are well-designed for stable query expansion, the odds of an
infeasible solution should be much greater than 50% for queries that are risky. In those cases, the
algorithm will not attempt to enhance the query. Conversely, the odds of finding a feasible query
model should ideally increase for thoese queries that are more amenable to expansion. Overall, 17%
of all queries had infeasible programs. We binned these queries according to the actual gain or loss
that would have been achieved with the baseline expansion, normalized by the original number of
queries appearing in each bin when the (non-selective) baseline expansion is used. This gives the
log-odds of reverting to the original query for any given gain/loss level.
The results are shown in in Figure 5. As predicted, the QMOD algorithm is more likely to decide
infeasibility for the high-risk zones at the extreme ends of the scale. Furthermore, the odds of finding
a feasible solution do indeed increase directly with the actual benefits of using expansion, up to a
point where we reach an average precision gain of 75% and higher. At this point, such high-reward
queries are considered high risk by the algorithm, and the likelihood of reverting to the original
query increases dramatically again. This analysis makes clear that the selective expansion behavior
of the convex algorithm is well-calibrated to the true expansion benefit.
4 Conclusions
We have presented a new research approach to query model estimation, showing how to adapt convex
optimization methods to the problem by casting it as constrained graph labeling. By integrating
relevance and risk objectives with additional constraints to selectively reduce expansion for the most
risky queries, our approach is able to significantly reduce the downside risk of a strong baseline
algorithm while retaining its strong gains in average precision.
Our expansion framework is quite general and easily accommodates further extensions and refinements. For example, similar to methods used for portfolio optimization [6], we can assign a computational cost to each term having non-zero weight, and add budget constraints to prefer more efficient
expansions. In addition, sensitivity analysis of the constraints is likely to provide useful information
for active learning: interesting extensions to semi-supervised learning are possible to incorporate
additional observations such as relevance feedback from the user. Finally, there are a number of
Figure 5: The log-odds of reverting to the original query as a result of selective expansion. Queries
are binned by the percent change in average precision if baseline expansion were used. Columns
above the line indicate greater-than-even odds that we revert to the original query.
higher-level control parameters and it would be interesting to determine the optimal settings. The
values we use have not been extensively tuned, so that further performance gains may be possible.
Acknowledgments
We thank Jamie Callan, John Lafferty, William Cohen, and Susan Dumais for their valuable feedback on many aspects of this work.
References
[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] C. Buckley. Why current IR engines fail. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2004), pages 584–585, 2004.
[3] K. Collins-Thompson and J. Callan. Query expansion using random walk models. In Proc. of the 14th International Conf. on Information and Knowledge Management (CIKM 2005), pages 704–711, 2005.
[4] K. Collins-Thompson and J. Callan. Estimation and use of uncertainty in pseudo-relevance feedback. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2007), pages 303–310, 2007.
[5] V. Lavrenko. A Generative Theory of Relevance. PhD thesis, Univ. of Massachusetts, Amherst, 2004.
[6] M. S. Lobo, M. Fazel, and S. Boyd. Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research, 152(1):376–394, 2007.
[7] H. M. Markowitz. Portfolio selection. Journal of Finance, 7(1):77–91, 1952.
[8] D. Metzler and W. B. Croft. Combining the language model and inference network approaches to retrieval. Information Processing and Management, 40(5):735–750, 2004.
[9] J. M. Ponte and W. B. Croft. A language modeling approach to information retrieval. In Proc. of the 1998 ACM SIGIR Conference on Research and Development in Information Retrieval, pages 275–281, 1998.
[10] P. Ravikumar and J. Lafferty. Quadratic programming relaxations for metric labeling and Markov random field MAP estimation. In Proceedings of the 23rd International Conference on Machine Learning (ICML 2006), pages 737–744, 2006.
[11] J. Teevan, S. T. Dumais, and E. Horvitz. Personalizing search via automated analysis of interests and activities. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2005), pages 449–456, New York, NY, USA, 2005. ACM.
[12] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of the 1996 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 4–11, 1996.
Efficient Sampling for Gaussian Process Inference
using Control Variables
Michalis K. Titsias, Neil D. Lawrence and Magnus Rattray
School of Computer Science, University of Manchester
Manchester M13 9PL, UK
Abstract
Sampling functions in Gaussian process (GP) models is challenging because of
the highly correlated posterior distribution. We describe an efficient Markov chain
Monte Carlo algorithm for sampling from the posterior process of the GP model.
This algorithm uses control variables which are auxiliary function values that provide a low dimensional representation of the function. At each iteration, the algorithm proposes new values for the control variables and generates the function
from the conditional GP prior. The control variable input locations are found by
minimizing an objective function. We demonstrate the algorithm on regression
and classification problems and we use it to estimate the parameters of a differential equation model of gene regulation.
1 Introduction
Gaussian processes (GPs) are used for Bayesian non-parametric estimation of unobserved or latent
functions. In regression problems with Gaussian likelihoods, inference in GP models is analytically
tractable, while for classification deterministic approximate inference algorithms are widely used
[16, 4, 5, 11]. However, in recent applications of GP models in systems biology [1] that require the
estimation of ordinary differential equation models [2, 13, 8], the development of deterministic approximations is difficult since the likelihood can be highly complex. Other applications of Gaussian
processes where inference is intractable arise in spatio-temporal models and geostatistics and deterministic approximations have also been developed there [14]. In this paper, we consider Markov
chain Monte Carlo (MCMC) algorithms for inference in GP models. An advantage of MCMC over
deterministic approximate inference is that it provides an arbitrarily precise approximation to the
posterior distribution in the limit of long runs. Another advantage is that the sampling scheme will
often not depend on details of the likelihood function, and is therefore very generally applicable.
In order to benefit from the advantages of MCMC it is necessary to develop an efficient sampling
strategy. This has proved to be particularly difficult in many GP applications, because the posterior
distribution describes a highly correlated high-dimensional variable. Thus simple MCMC sampling
schemes such as Gibbs sampling can be very inefficient. In this contribution we describe an efficient MCMC algorithm for sampling from the posterior process of a GP model which constructs
the proposal distributions by utilizing the GP prior. This algorithm uses control variables which are
auxiliary function values. At each iteration, the algorithm proposes new values for the control variables and samples the function by drawing from the conditional GP prior. The control variables are
highly informative points that provide a low dimensional representation of the function. The control
input locations are found by minimizing an objective function. The objective function used is the
expected least squares error of reconstructing the function values from the control variables, where
the expectation is over the GP prior.
We demonstrate the proposed MCMC algorithm on regression and classification problems and compare it with two Gibbs sampling schemes. We also apply the algorithm to inference in a systems
biology model where a set of genes is regulated by a transcription factor protein [8]. This provides
an example of a problem with a non-linear and non-factorized likelihood function.
2 Sampling algorithms for Gaussian Process models
In a GP model we assume a set of inputs (x_1, ..., x_N) and a set of function values f = (f_1, ..., f_N) evaluated at those inputs. A Gaussian process places a prior on f which is an N-dimensional Gaussian distribution, so that p(f) = N(f | µ, K). The mean µ is typically zero and the covariance matrix K is defined by the kernel function k(x_n, x_m) that depends on parameters θ. GPs are widely used for supervised learning [11], in which case we have a set of observed pairs (y_i, x_i), where i = 1, ..., N, and we assume a likelihood model p(y|f) that depends on parameters α. For regression or classification problems, the latent function values are evaluated at the observed inputs and the likelihood factorizes according to p(y|f) = Π_{i=1}^N p(y_i | f_i). However, for other types of applications, such as modelling latent functions in ordinary differential equations, the above factorization is not applicable. Assuming that we have obtained suitable values for the model parameters (θ, α), inference over f is done by applying Bayes' rule:

    p(f|y) ∝ p(y|f) p(f).                                                (1)
For regression, where the likelihood is Gaussian, the above posterior is a Gaussian distribution that
can be obtained using simple algebra. When the likelihood p(y|f ) is non-Gaussian, computations
become intractable and we need to carry out approximate inference.
The MCMC algorithm we consider is the general Metropolis-Hastings (MH) algorithm [12]. Suppose we wish to sample from the posterior in eq. (1). The MH algorithm forms a Markov chain. We
initialize f^(0) and we consider a proposal distribution Q(f^(t+1) | f^(t)) that allows us to draw a new state given the current state. The new state is accepted with probability min(1, A), where

    A = [p(y|f^(t+1)) p(f^(t+1)) Q(f^(t) | f^(t+1))] / [p(y|f^(t)) p(f^(t)) Q(f^(t+1) | f^(t))].    (2)
To apply this generic algorithm, we need to choose the proposal distribution Q. For GP models,
finding a good proposal distribution is challenging since f is high dimensional and the posterior
distribution can be highly correlated.
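For concreteness, one generic MH update implementing eq. (2) might look as follows; all arguments are placeholder callables for user-supplied log densities.

```python
import numpy as np

def mh_step(f, log_lik, log_prior, propose, log_q):
    """One Metropolis-Hastings update of the latent function f (a sketch).

    propose(f) draws f_new ~ Q(. | f); log_q(a, b) returns log Q(a | b).
    The acceptance probability min(1, A) follows eq. (2).
    """
    f_new = propose(f)
    log_A = (log_lik(f_new) + log_prior(f_new) + log_q(f, f_new)
             - log_lik(f) - log_prior(f) - log_q(f_new, f))
    accept = np.log(np.random.rand()) < log_A
    return (f_new, True) if accept else (f, False)
```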
To motivate the algorithm presented in section 2.1, we discuss two extreme options for specifying the proposal distribution Q. One simple way to choose Q is to set it equal to the GP prior
p(f ). This gives us an independent MH algorithm [12]. However, sampling from the GP prior is
very inefficient as it is unlikely to obtain a sample that will fit the data. Thus the Markov chain
will get stuck in the same state for thousands of iterations. On the other hand, sampling from the
prior is appealing because any generated sample satisfies the smoothness requirement imposed by
the covariance function. Functions drawn from the posterior GP process should satisfy the same
smoothness requirement as well.
The other extreme choice for the proposal, that has been considered in [10], is to apply Gibbs
sampling where we iteratively draw samples from each posterior conditional density p(f_i | f_{\i}, y),
with f_{\i} = f \ f_i. However, Gibbs sampling can be extremely slow for densely discretized functions,
as in the regression problem of Figure 1, where the posterior GP process is highly correlated. To
clarify this, note that the variance of the posterior conditional p(f_i | f_{\i}, y) is smaller than or equal to the
variance of the conditional GP prior p(f_i | f_{\i}). However, p(f_i | f_{\i}) may already have a tiny variance
caused by the conditioning on all remaining latent function values. For the one-dimensional example
in Figure 1, Gibbs sampling is practically not applicable. We further study this issue in section 4.
A similar algorithm to Gibbs sampling can be expressed by using the sequence of the conditional
densities p(f_i | f_{\i}) as a proposal distribution for the MH algorithm¹. We call this algorithm the
Gibbs-like algorithm. This algorithm can exhibit a high acceptance rate, but it is inefficient to
sample from highly correlated functions. A simple generalization of the Gibbs-like algorithm that
is more appropriate for sampling from smooth functions is to divide the domain of the function
into regions and sample the entire function within each region by conditioning on the remaining
function regions. Local region sampling iteratively draws each block of function values f_k from the conditional GP prior p(f_k^(t+1) | f_{\k}^(t)), where f_{\k} = f \ f_k. However, this scheme is still inefficient for sampling from highly correlated functions, since the variance of the proposal distribution can be very small close to the boundaries between neighbouring function regions. The description of this algorithm is given in the supplementary material. In the next section we discuss an algorithm using control variables that can efficiently sample from highly correlated functions.

¹Thus we replace the proposal distribution p(f_i | f_{\i}, y) with the prior conditional p(f_i | f_{\i}).
2.1 Sampling using control variables
Let fc be a set of M auxiliary function values that are evaluated at inputs Xc and drawn from the
GP prior. We call fc the control variables and their meaning is analogous to the auxiliary inducing
variables used in sparse GP models [15]. To compute the posterior p(f |y) based on control variables
we use the expression

    p(f|y) = ∫ p(f | f_c, y) p(f_c | y) df_c.                            (3)

Assuming that f_c is highly informative about f, so that p(f | f_c, y) ≈ p(f | f_c), we can approximately sample from p(f|y) in a two-stage manner: firstly sample the control variables from p(f_c | y) and then generate f from the conditional prior p(f | f_c). This scheme allows us to introduce a MH algorithm, where we need to specify only a proposal distribution q(f_c^(t+1) | f_c^(t)) that will mimic sampling from p(f_c | y), and always sample f from the conditional prior p(f | f_c). The whole proposal distribution takes the form

    Q(f^(t+1), f_c^(t+1) | f^(t), f_c^(t)) = p(f^(t+1) | f_c^(t+1)) q(f_c^(t+1) | f_c^(t)).    (4)
Each proposed sample is accepted with probability min(1, A), where A is given by

    A = [p(y|f^(t+1)) p(f_c^(t+1)) / (p(y|f^(t)) p(f_c^(t)))] · [q(f_c^(t) | f_c^(t+1)) / q(f_c^(t+1) | f_c^(t))].    (5)
The usefulness of the above sampling scheme stems from the fact that the control variables can form
a low-dimensional representation of the function. Assuming that these variables are much fewer
than the points in f , the sampling is mainly carried out in the low dimensional space. In section 2.2
we describe how to select the number M of control variables and the inputs Xc so as fc becomes
highly informative about f. In the remainder of this section we discuss how we set the proposal distribution q(f_c^(t+1) | f_c^(t)).
A suitable choice for q is to use a Gaussian distribution with diagonal or full covariance matrix.
The covariance matrix can be adapted during the burn-in phase of MCMC in order to increase
the acceptance rate. Although this scheme is general, it has practical limitations. Firstly, tuning
a full covariance matrix is time consuming and in our case this adaptation process must be carried out simultaneously with searching for an appropriate set of control variables. Also, since the
terms involving p(fc ) do not cancel out in the acceptance probability in eq. (5), using a diagonal
covariance for the q distribution has the risk of proposing control variables that may not satisfy
the GP prior smoothness requirement. To avoid these problems, we define q by utilizing the GP
prior. According to eq. (3) a suitable choice for q must mimic the sampling from the posterior
p(fc |y). Given that the control points are far apart from each other, Gibbs sampling in the control
variables space can be efficient. However, iteratively sampling f_{c_i} from the conditional posterior p(f_{c_i} | f_{c_{\i}}, y) ∝ p(y|f_c) p(f_{c_i} | f_{c_{\i}}), where f_{c_{\i}} = f_c \ f_{c_i}, is intractable for non-Gaussian likelihoods². An attractive alternative is to use a Gibbs-like algorithm where each f_{c_i} is drawn from the conditional GP prior p(f_{c_i}^(t+1) | f_{c_{\i}}^(t)) and is accepted using the MH step. More specifically, the proposal distribution draws a new f_{c_i}^(t+1) for a certain control variable i from p(f_{c_i}^(t+1) | f_{c_{\i}}^(t)) and generates the function f^(t+1) from p(f^(t+1) | f_{c_i}^(t+1), f_{c_{\i}}^(t)). The sample (f_{c_i}^(t+1), f^(t+1)) is accepted using the MH step. This scheme of sampling the control variables one-at-a-time and resampling f is
iterated between different control variables. A complete iteration of the algorithm consists of a full
scan over all control variables. The acceptance probability A in eq. (5) becomes the likelihood ratio
and the prior smoothness requirement is always satisfied. The iteration between different control
variables is illustrated in Figure 1.
²This is because we need to integrate out f in order to compute p(y|f_c).
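A minimal sketch of one full scan of the control-variable sampler follows. This is our own illustrative implementation, not the authors' code; in practice the matrix inverses would be precomputed and reused, and jitter added throughout for numerical stability.

```python
import numpy as np

def control_scan(f, fc, Kcc, Kfc, Kff, log_lik, rng):
    """One scan over control variables with MH acceptance (eqs. (4)-(5)).

    fc : (M,) control values; Kcc : (M, M) prior covariance of fc
    Kfc : (N, M) cross-covariance of f and fc; Kff : (N, N) prior covariance
    log_lik(f) returns log p(y|f); the acceptance ratio is the likelihood ratio.
    """
    M, N = len(fc), len(f)
    for i in range(M):
        rest = [j for j in range(M) if j != i]
        # conditional GP prior p(fc_i | fc_{\i})
        Kri = Kcc[np.ix_(rest, [i])]
        Krr_inv = np.linalg.inv(Kcc[np.ix_(rest, rest)])
        mi = (Kri.T @ Krr_inv @ fc[rest]).item()
        vi = max((Kcc[i, i] - Kri.T @ Krr_inv @ Kri).item(), 1e-10)
        fc_new = fc.copy()
        fc_new[i] = mi + np.sqrt(vi) * rng.standard_normal()
        # draw f from the conditional prior p(f | fc_new)
        A = Kfc @ np.linalg.inv(Kcc)
        cov_f = Kff - A @ Kfc.T + 1e-8 * np.eye(N)   # jitter for stability
        f_new = rng.multivariate_normal(A @ fc_new, cov_f)
        if np.log(rng.random()) < log_lik(f_new) - log_lik(f):
            f, fc = f_new, fc_new
    return f, fc
```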
Figure 1: Visualization of iterating between control variables. The red solid line is the current f^(t), the blue line is the proposed f^(t+1), the red circles are the current control variables f_c^(t), while the diamond (in magenta) is the proposed control variable f_{c_i}^(t+1). The blue solid vertical line represents the distribution p(f_{c_i}^(t+1) | f_{c_{\i}}^(t)) (with two-standard-error bars) and the shaded area shows the effective proposal p(f^(t+1) | f_{c_{\i}}^(t)).
Although the control variables are sampled one-at-a-time, f can still be drawn with considerable variance. To clarify this, note that when the control variable f_{c_i} changes, the effective proposal distribution for f is

    p(f^(t+1) | f_{c_{\i}}^(t)) = ∫ p(f^(t+1) | f_{c_i}^(t+1), f_{c_{\i}}^(t)) p(f_{c_i}^(t+1) | f_{c_{\i}}^(t)) df_{c_i}^(t+1),

which is the conditional GP prior given all the control points apart from the current point f_{c_i}. This conditional prior can have considerable variance close to f_{c_i} and in all regions that are not close to the remaining control variables. As illustrated in Figure 1, the iteration over different control variables allows f to be drawn with considerable variance everywhere in the input space.
2.2 Selection of the control variables
To apply the previous algorithm we need to select the number, M , of the control points and the
associated inputs Xc . Xc must be chosen so that knowledge of fc can determine f with small
error. The prediction of f given f_c is equal to K_{f,c} K_{c,c}^{-1} f_c, which is the mean of the conditional prior p(f | f_c). A suitable way to search over X_c is to minimize the reconstruction error ||f − K_{f,c} K_{c,c}^{-1} f_c||² averaged over any possible value of (f, f_c):

    G(X_c) = ∫ ||f − K_{f,c} K_{c,c}^{-1} f_c||² p(f | f_c) p(f_c) df df_c = Tr(K_{f,f} − K_{f,c} K_{c,c}^{-1} K_{f,c}^T).
The quantity inside the trace is the covariance of p(f |fc ) and thus G(Xc ) is the total variance of
this distribution. We can minimize G(Xc ) w.r.t. Xc using continuous optimization similarly to the
approach in [15]. Note that when G(Xc ) becomes zero, p(f |fc ) becomes a delta function.
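For illustration, G(X_c) can be computed directly from kernel evaluations, as in the sketch below, where kern(A, B) is a user-supplied covariance function and a small jitter keeps the solve stable.

```python
import numpy as np

def total_conditional_variance(X, Xc, kern):
    """G(Xc) = Tr(Kff - Kfc Kcc^{-1} Kfc^T), the total variance of p(f|fc)."""
    Kff = kern(X, X)
    Kfc = kern(X, Xc)
    Kcc = kern(Xc, Xc) + 1e-8 * np.eye(len(Xc))
    return np.trace(Kff - Kfc @ np.linalg.solve(Kcc, Kfc.T))
```

Control inputs would then be added (and locally optimized) until this quantity falls below 5% of Tr(Kff), the threshold used in the experiments.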
To find the number M of control points we minimize G(Xc ) by incrementally adding control variables until the total variance of p(f |fc ) becomes smaller than a certain percentage of the total variance of the prior p(f ). 5% was the threshold used in all our experiments. Then we start the simulation and we observe the acceptance rate of the Markov chain. According to standard heuristics
[12] which suggest that desirable acceptance rates of MH algorithms are around 1/4, we require a
full iteration of the algorithm (a complete scan over the control variables) to have an acceptance rate
larger than 1/4. When for the current set of control inputs Xc the chain has a low acceptance rate, it
means that the variance of p(f |fc ) is still too high and we need to add more control points in order
to further reduce G(Xc ). The process of observing the acceptance rate and adding control variables
is continued until we reach the desirable acceptance rate.
When the training inputs X are placed uniformly in the space, and the kernel function is stationary,
the minimization of G places Xc in a regular grid. In general, the minimization of G places the
control inputs close to the clusters of the input data in such a way that the kernel function is taken
into account. This suggests that G can also be used for learning inducing variables in sparse GP
models in a unsupervised fashion, where the observed outputs y are not involved.
3 Applications
We consider two applications where exact inference is intractable due to a non-linear likelihood
function: classification and parameter estimation in a differential equation model of gene regulation.
Classification: Deterministic inference methods for GP classification are described in [16, 4, 7].
Among these approaches, the Expectation-Propagation (EP) algorithm [9] is found to be the most
efficient [6]. Our MCMC implementation confirms these findings since sampling using control
variables gave similar classification accuracy to EP.
Transcriptional regulation: We consider a small biological sub-system where a set of target genes
are regulated by one transcription factor (TF) protein. Ordinary differential equations (ODEs) can
provide an useful framework for modelling the dynamics in these biological networks [1, 2, 13, 8].
The concentration of the TF and the gene specific kinetic parameters are typically unknown and
need to be estimated by making use of a set of observed gene expression levels. We use a GP prior
to model the unobserved TF activity, as proposed in [8], and apply full Bayesian inference based on
the MCMC algorithm presented previously.
Barenco et al. [2] introduce a linear ODE model for gene activation from TF. This approach was
extended in [13, 8] to account for non-linear models. The general form of the ODE model for
transcription regulation with a single TF has the form
    dy_j(t)/dt = B_j + S_j g(f(t)) − D_j y_j(t),                         (6)

where the changing level of a gene j's expression, y_j(t), is given by a combination of basal transcription rate, B_j, sensitivity, S_j, to its governing TF's activity, f(t), and the decay rate of the
mRNA, D_j. The differential equation can be solved for y_j(t), giving

    y_j(t) = B_j / D_j + A_j e^{−D_j t} + S_j e^{−D_j t} ∫_0^t g(f(u)) e^{D_j u} du,    (7)
where the A_j term arises from the initial condition. Due to the non-linearity of the g function that transforms the TF, the integral in the above expression cannot be obtained analytically. However, numerical
integration can be used to accurately approximate the integral with a dense grid (u_i)_{i=1}^P of points in the time axis and evaluating the function at the grid points f_p = f(u_p). In this case the integral in the above equation can be written as Σ_{p=1}^{P_t} w_p g(f_p) e^{D_j u_p}, where the weights w_p arise from the numerical integration method used and, for example, can be given by the composite Simpson rule.
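A sketch of evaluating eq. (7) on the dense grid is shown below. For simplicity it uses trapezoidal quadrature weights rather than the composite Simpson rule, and the response function g is a placeholder argument (the exponential default is only an illustrative choice).

```python
import numpy as np

def gene_expression(t_grid, f_grid, B, S, D, A, g=np.exp):
    """Numerically evaluate y_j(t) of eq. (7) at every grid point (a sketch).

    t_grid : (P,) dense time grid u_p; f_grid : (P,) TF values f(u_p)
    B, S, D, A : kinetic parameters of one gene; g : TF response function
    """
    integrand = g(f_grid) * np.exp(D * t_grid)
    # cumulative trapezoidal integral of g(f(u)) e^{D u} from 0 to each u_p
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t_grid))))
    return B / D + A * np.exp(-D * t_grid) + S * np.exp(-D * t_grid) * integral
```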
The TF concentration f(t) in the above system of ODEs is a latent function that needs to be estimated. Additionally, the kinetic parameters of each gene, α_j = (B_j, D_j, S_j, A_j), are unknown and
also need to be estimated. To infer these quantities we use mRNA measurements (obtained from
microarray experiments) of N target genes at T different time steps. Let y_{jt} denote the observed
gene expression level of gene j at time t and let y = {y_{jt}} collect together all these observations.
Assuming Gaussian noise for the observed gene expressions, the likelihood of our data has the form

    p(y | f, {α_j}_{j=1}^N) = Π_{j=1}^N Π_{t=1}^T p(y_{jt} | f_{1≤p≤P_t}, α_j),    (8)
where each probability density in the above product is a Gaussian with mean given by eq. (7) and
f_{1≤p≤P_t} denotes the TF values up to time t. Notice that this likelihood is non-Gaussian due to the non-linearity of g. Further, this likelihood does not have a factorized form, as in the regression and classification cases, since an observed gene expression depends on the protein concentration activity at all previous time points. Also note that the discretization of the TF in P time points corresponds to a very dense grid, while the gene expression measurements are sparse, i.e. P ≫ T.
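Reusing the gene_expression sketch above, the log of the non-factorized likelihood in eq. (8) could be evaluated as follows (again illustrative; it assumes the observation times lie on the dense grid).

```python
import numpy as np

def log_likelihood(Y, obs_times, t_grid, f_grid, params, noise_var, g=np.exp):
    """Log of eq. (8) for Gaussian observation noise (a sketch).

    Y : (N_genes, T) observed expressions; params[j] = (B, S, D, A)
    Each y_{jt} depends on all TF values up to time t through eq. (7).
    """
    idx = np.searchsorted(t_grid, obs_times)    # grid indices of the T times
    ll = 0.0
    for j, (B, S, D, A) in enumerate(params):
        y_model = gene_expression(t_grid, f_grid, B, S, D, A, g)[idx]
        ll += (-0.5 * np.sum((Y[j] - y_model) ** 2) / noise_var
               - 0.5 * len(obs_times) * np.log(2 * np.pi * noise_var))
    return ll
```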
To apply full Bayesian inference in the above model, we need to define prior distributions over all
unknown quantities. The protein concentration f is a positive quantity, thus a suitable prior is to
consider a GP prior for log f . The kinetic parameters of each gene are all positive scalars. Those
parameters are given vague gamma priors. Sampling the GP function is done exactly as described
in section 2; we have only to plug the likelihood from eq. (8) into the MH step. Sampling of
the kinetic parameters is carried out using Gaussian proposal distributions with diagonal covariance
matrices that sample the positive kinetic parameters in the log space.
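A minimal sketch of such a log-space random-walk proposal is given below; note the Jacobian factor that must be included in the MH ratio when positive parameters are updated on the log scale.

```python
import numpy as np

def propose_kinetics(alpha, step, rng):
    """Gaussian random walk in log space for positive kinetic parameters.

    Returns the proposed parameters and the log Jacobian correction
    sum(log alpha_new - log alpha) to add to the MH acceptance ratio.
    """
    log_new = np.log(alpha) + step * rng.standard_normal(len(alpha))
    alpha_new = np.exp(log_new)
    log_jacobian = np.sum(log_new - np.log(alpha))
    return alpha_new, log_jacobian
```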
[Figure 2 plots: (a) KL(real||empirical) against MCMC iterations (×10⁴) for gibbs, region and control; (b) KL(real||empirical) against input dimension, with error bars; (c) number of control variables and average correlation coefficient (corrCoef) against input dimension.]
Figure 2: (a) shows the evolution of the KL divergence (against the number of MCMC iterations) between the
true posterior and the empirically estimated posteriors for a 5-dimensional regression dataset. (b) shows the
mean values with one-standard error bars of the KL divergence (against the input dimension) between the true
posterior and the empirically estimated posteriors. (c) plots the number of control variables together with the
average correlation coefficient of the GP prior.
4 Experiments
In the first experiment we compare Gibbs sampling (Gibbs), sampling using local regions (region)
(see the supplementary file) and sampling using control variables (control) in standard regression
problems of varied input dimensions. The performance of the algorithms can be accurately assessed
by computing the KL divergences between the exact Gaussian posterior p(f |y) and the Gaussians
obtained by MCMC. We fix the number of training points to N = 200 and we vary the input dimension d from 1 to 10. The training inputs X were chosen randomly inside the unit hypercube
[0, 1]d . Thus, we can study the behavior of the algorithms w.r.t. the amount of correlation in the
posterior GP process, which depends on how densely the function is sampled. The larger the dimension, the sparser the function is sampled. The outputs Y were chosen by randomly producing a GP function using the squared-exponential kernel σ_f² exp(−||x_m − x_n||² / (2ℓ²)), where (σ_f², ℓ²) = (1, 100), and then adding noise with variance σ² = 0.09. The burn-in period was 10⁴ iterations³. For a certain dimension d the algorithms were initialized to the same state obtained by randomly drawing from the GP prior. The parameters (σ_f², ℓ², σ²) were fixed to the values that generated the data. The
experimental setup was repeated 10 times so as to obtain confidence intervals. We used thinned
samples (by keeping one sample every 10 iterations) to calculate the means and covariances of the
200-dimensional posterior Gaussians. Figure 2(a) shows the KL divergence against the number of
MCMC iterations for the 5-dimensional input dataset. It seems that for 200 training points and 5
dimensions, the function values are still highly correlated and thus Gibbs takes much longer for the
KL divergence to drop to zero. Figure 2(b) shows the KL divergence against the input dimension
after fixing the number of iterations to be 3 × 10⁴. Clearly Gibbs is very inefficient in low dimensions because of the highly correlated posterior. As the dimension increases and the functions become
more sparsely sampled, Gibbs improves and eventually the KL divergence approaches zero. The region
algorithm works better than Gibbs but in low dimensions it also suffers from the problem of high
correlation. For the control algorithm we observe that the KL divergence is very close to zero for
all dimensions. Figure 2(c) shows the increase in the number of control variables used as the input
dimension increases. The same plot shows the decrease of the average correlation coefficient of the
GP prior as the input dimension increases. This is very intuitive, since one should expect the number
of control variables to increase as the function values become more independent.
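For reference, the KL divergence between two multivariate Gaussians used in this comparison has a closed form; below is a minimal sketch (assuming NumPy), where the first pair of moments would be the exact posterior ("real") and the second the moments estimated from the thinned samples ("empirical").

import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0,cov0) || N(mu1,cov1) ) = 0.5 * ( tr(cov1^{-1} cov0)
    + (mu1-mu0)' cov1^{-1} (mu1-mu0) - d + log det(cov1) - log det(cov0) )."""
    d = len(mu0)
    diff = np.asarray(mu1) - np.asarray(mu0)
    trace_term = np.trace(np.linalg.solve(cov1, cov0))
    quad_term = diff @ np.linalg.solve(cov1, diff)
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (trace_term + quad_term - d + logdet1 - logdet0)

Calling gauss_kl(mu_exact, cov_exact, mu_mcmc, cov_mcmc) gives the KL(real||empirical) values plotted in Figure 2.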
Next we consider two GP classification problems for which exact inference is intractable. We used
the Wisconsin Breast Cancer (WBC) and the Pima Indians Diabetes (PID) binary classification
datasets. The first consists of 683 examples (9 input dimensions) and the second of 768 examples
(8 dimensions). 20% of the examples were used for testing in each case. The MCMC samplers
were run for 5 × 10⁴ iterations (thinned to one sample every five iterations) after a burn-in of 10⁴
iterations. The hyperparameters were fixed to those obtained by EP. Figures 3(a) and (b) show
³For Gibbs we used 2 × 10⁴ iterations, since the region and control algorithms require additional iterations
during the adaptation phase.
[Figure 3 appears here: panels (a) and (b) plot the log likelihood of the samples against MCMC iterations; panel (c) shows grouped bars for gibbs, contr and ep on the two datasets. See the caption below.]
Figure 3: We show results for GP classification. Log-likelihood values are shown for MCMC samples obtained
from (a) Gibbs and (b) control applied to the WBC dataset. In (c) we show the test errors (grey bars) and the
average negative log likelihoods (black bars) on the WBC (left) and PID (right) datasets and compare with EP.
[Figure 4 appears here: first row with panels "p26 sesn1 Gene - first Replica" (inferred protein, with an inset), a predicted gene-expression panel, and a "Decay rates" bar plot over the targets DDB2, p26 sesn1, TNFRSF10b, CIp1/p21 and BIK; second row with the inferred LexA profile and the "dinI Gene" and "yjiW Gene" panels. See the caption below.]
Figure 4: First row: The left plot shows the inferred TF concentration for p53; the small plot on top-right shows
the ground-truth protein concentration obtained by a Western blot experiment [2]. The middle plot shows the
predicted expression of a gene obtained by the estimated ODE model; red crosses correspond to the actual gene
expression measurements. The right-hand plot shows the estimated decay rates for all 5 target genes used to
train the model. Grey bars display the parameters found by MCMC and black bars the parameters found in [2]
using a linear ODE model. Second row: The left plot shows the inferred TF for LexA. Predicted expressions of
two target genes are shown in the remaining two plots. Error bars in all plots correspond to 95% credibility intervals.
the log-likelihood for MCMC samples on the WBC dataset, for the Gibbs and control algorithms
respectively. It can be observed that mixing is far superior for the control algorithm and it has also
converged to a much higher likelihood. In Figure 3(c) we compare the test error and the average
negative log likelihood in the test data obtained by the two MCMC algorithms with the results from
EP. The proposed control algorithm shows similar classification performance to EP, while the Gibbs
algorithm performs significantly worse on both datasets.
In the final two experiments we apply the control algorithm to infer the protein concentration of TFs
that activate or repress a set of target genes. The latent function in these problems is always one-dimensional and densely discretized, and thus the control algorithm is the only one that can converge
to the GP posterior process in a reasonable time.
We first consider the TF p53, which is a tumour suppressor activated during DNA damage. Seven
samples of the expression levels of five target genes in three replicas are collected as the raw time
course data. The non-linear activation of the protein follows the Michaelis-Menten kinetics inspired
response [1] that allows saturation effects to be taken into account, so that g(f(t)) = f(t)/(γ_j + f(t)) in eq.
(6), where the Michaelis constant for the jth gene is given by γ_j. Note that since f(t) is positive the
GP prior is placed on the log f (t). To apply MCMC we discretize f using a grid of P = 121 points.
During sampling, 7 control variables were needed to obtain the desirable acceptance rate. Running
time was 4 hours for 5 × 10⁵ sampling iterations plus 5 × 10⁴ burn-in iterations. The first row of
Figure 4 summarizes the estimated quantities obtained from MCMC simulation.
Next we consider the TF LexA in E.Coli that acts as a repressor. In the repression case there is an
analogous Michaelis-Menten model [1] where the non-linear function g takes the form g(f(t)) =
1/(γ_j + f(t)). Again the GP prior is placed on the log of the TF activity. We applied our method to
the same microarray data considered in [13] where mRNA measurements of 14 target genes are
collected over six time points. For this dataset, the expression of the 14 genes was available for
T = 6 times. The GP function f was discretized using 121 points. The result for the inferred
TF profile along with predictions of two target genes are shown in the second row of Figure 4.
Our inferred TF profile and reconstructed target gene profiles are similar to those obtained in [13].
However, for certain genes, our model provides a better fit to the gene profile.
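A small sketch of these predictions is given below (assuming NumPy, and assuming that eq. (6) has the usual basal/sensitivity/decay form dy/dt = B + S·g(f(t)) − D·y, as in [2], solved by Euler discretization on the dense TF grid; all parameter names are illustrative, not from the paper).

import numpy as np

def predicted_expression(f, dt, B, S, D, gamma, y0, repress=False):
    """Euler integration of the kinetic ODE dy/dt = B + S*g(f(t)) - D*y(t)
    for one gene on the dense TF grid f (spacing dt), with Michaelis-Menten
    response g(f) = f/(gamma + f) (activation, the p53 case) or
    g(f) = 1/(gamma + f) (repression, the LexA case)."""
    f = np.asarray(f, dtype=float)
    g = 1.0 / (gamma + f) if repress else f / (gamma + f)
    y = np.empty_like(f)
    y[0] = y0
    for t in range(1, len(f)):
        y[t] = y[t - 1] + dt * (B + S * g[t - 1] - D * y[t - 1])
    return y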
5 Discussion
Gaussian processes allow for inference over latent functions using a Bayesian estimation framework.
In this paper, we presented an MCMC algorithm that uses control variables. We showed that this
sampling scheme can efficiently deal with highly correlated posterior GP processes. MCMC allows
for full Bayesian inference in the transcription factor networks application. An important direction
for future research will be scaling the models used to much larger systems of ODEs with multiple interacting transcription factors. In such large systems, where MCMC can become slow, a combination
of our method with the fast sampling scheme in [3] could be used to speed up the inference.
Acknowledgments
This work is funded by EPSRC Grant No EP/F005687/1 "Gaussian Processes for Systems Identification with Applications in Systems Biology".
References
[1] U. Alon. An Introduction to Systems Biology: Design Principles of Biological Circuits. Chapman and
Hall/CRC, 2006.
[2] M. Barenco, D. Tomescu, D. Brewer, J. Callard, R. Stark, and M. Hubank. Ranked prediction of p53
targets using hidden variable dynamic modeling. Genome Biology, 7(3), 2006.
[3] B. Calderhead, M. Girolami, and N.D. Lawrence. Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes. In Neural Information Processing Systems, 22, 2008.
[4] L. Csato and M. Opper. Sparse online Gaussian processes. Neural Computation, 14:641–668, 2002.
[5] M. N. Gibbs and D. J. C. MacKay. Variational Gaussian process classifiers. IEEE Transactions on Neural
Networks, 11(6):1458–1464, 2000.
[6] M. Kuss and C. E. Rasmussen. Assessing Approximate Inference for Binary Gaussian Process Classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
[7] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the informative
vector machine. In Advances in Neural Information Processing Systems, 13. MIT Press, 2002.
[8] N. D. Lawrence, G. Sanguinetti, and M. Rattray. Modelling transcriptional regulation using Gaussian
processes. In Advances in Neural Information Processing Systems, 19. MIT Press, 2007.
[9] T. Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362–369, 2001.
[10] R. M. Neal. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Technical report, Dept. of Statistics, University of Toronto, 1997.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, 2nd edition, 2004.
[13] S. Rogers, R. Khanin, and M. Girolami. Bayesian model-based inference of transcription factor activity.
BMC Bioinformatics, 8(2), 2006.
[14] H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models using
integrated nested Laplace approximations. NTNU Statistics Preprint, 2007.
[15] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural
Information Processing Systems, 13. MIT Press, 2006.
[16] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
2,664 | 3,415 | Bounds on marginal probability distributions
Joris Mooij
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]
Bert Kappen
Department of Biophysics
Radboud University Nijmegen, the Netherlands
[email protected]
Abstract
We propose a novel bound on single-variable marginal probability distributions in
factor graphs with discrete variables. The bound is obtained by propagating local
bounds (convex sets of probability distributions) over a subtree of the factor graph,
rooted in the variable of interest. By construction, the method not only bounds
the exact marginal probability distribution of a variable, but also its approximate
Belief Propagation marginal ("belief"). Thus, apart from providing a practical
means to calculate bounds on marginals, our contribution also lies in providing
a better understanding of the error made by Belief Propagation. We show that
our bound outperforms the state-of-the-art on some inference problems arising in
medical diagnosis.
1 Introduction
Graphical models are used in many different fields. A fundamental problem in the application of
graphical models is that exact inference is NP-hard [1]. In recent years, much research has focused
on approximate inference techniques, such as sampling methods and deterministic approximation
methods, e.g., Belief Propagation (BP) [2]. Although the approximations obtained by these methods can be very accurate, there are only few useful guarantees on the error of the approximation,
and often it is not known (without comparing with the intractable exact solution) how accurate an
approximate result is. Thus it is desirable to calculate, in addition to the approximate results, tight
bounds on the approximation error.
There exist various methods to bound the BP error [3, 4, 5, 6], which can be used, in conjunction
with the results of BP, to calculate bounds on the exact marginals. Furthermore, upper bounds on
the partition sum, e.g., [7, 8], can be combined with lower bounds on the partition sum, such as
the well-known mean field bound or higher-order lower bounds [9], to obtain bounds on marginals.
Finally, a method called Bound Propagation [10] directly calculates bounds on marginals. However,
most of these bounds (with the exception of [3, 10]) have only been formulated for the special case
of pairwise interactions, which limits their applicability, excluding for example the interesting class
of Bayesian networks.
In this contribution we describe a novel bound on exact single-variable marginals in factor graphs
which is not limited to pairwise interactions. The original motivation for this work was to better
understand and quantify the BP error. This has led to bounds which are at the same time bounds
for the exact single-variable marginals as well as for the BP beliefs. A particularly nice feature of
our bounds is that their computational cost is relatively low, provided that the number of possible
values of each variable in the factor graph is small. On the other hand, the computational complexity
is exponential in the number of possible values of the variables, which limits application to factor
graphs in which each variable has a low number of possible values. On these factor graphs however,
our bound can significantly outperform existing methods, either in terms of accuracy or in terms
of computation time (or both). We illustrate this on two toy problems and on real-world problems
arising in medical diagnosis.
The basic idea underlying our method is that we recursively propagate bounds over a particular subtree of the factor graph. The propagation rules are similar to those of Belief Propagation; however,
instead of propagating messages, we propagate convex sets of messages. This can be done in such
a way that the final "beliefs" at the root node of the subtree are convex sets which contain the exact
marginal of the root node (and, by construction, also its BP belief). In the next section, we describe
our method in more detail. Due to space constraints, we have omitted the proofs and other technical details; these are provided in a technical report [11], which also reports additional experimental
results and presents an extension that uses self-avoiding-walk trees instead of subtrees (inspired by
[6]).
2 Theory
2.1 Preliminaries
Factorizing probability distributions. Let V := {1, . . . , N} and consider N discrete random
variables (x_i)_{i∈V}. Each variable x_i takes values in a discrete domain X_i. We will use the following
multi-index notation: let A = {i_1, i_2, . . . , i_m} ⊆ V with i_1 < i_2 < . . . < i_m; we write X_A := X_{i_1} ×
X_{i_2} × · · · × X_{i_m}, and for any family (Y_i)_{i∈B} with A ⊆ B ⊆ V, we write Y_A := (Y_{i_1}, Y_{i_2}, . . . , Y_{i_m}).
We consider a probability distribution over x = (x_1, . . . , x_N) ∈ X_V that can be written as a product
of factors (ψ_I)_{I∈F}:

P(x) = (1/Z) ∏_{I∈F} ψ_I(x_{N_I}),   where   Z = ∑_{x∈X_V} ∏_{I∈F} ψ_I(x_{N_I}).   (1)
For each factor index I ∈ F, there is an associated subset N_I ⊆ V of variable indices and the
factor ψ_I is a nonnegative function ψ_I : X_{N_I} → [0, ∞). For a Bayesian network, the factors are
(conditional) probability tables. In case of Markov random fields, the factors are often called potentials. In general, the normalizing constant ("partition sum") Z is not known and exact computation
of Z is infeasible, due to the fact that the number of terms to be summed is exponential in the
number of variables N. Similarly, computing marginal distributions P(x_A) for subsets of variables
A ⊆ V is intractable in general. In this article, we focus on the task of obtaining rigorous bounds on
single-variable marginals P(x_i) = ∑_{x_{V∖{i}}} P(x).
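To make the quantity being bounded concrete, the following brute-force sketch (assuming NumPy; the dictionary encoding of the factors is our own) computes P(x_i) of eq. (1) by exhaustive enumeration; it is exponential in N and only useful for checking bounds on tiny factor graphs.

import itertools
import numpy as np

def exact_marginal(factors, domains, i):
    """Single-variable marginal P(x_i) of eq. (1) by brute force.
    domains[v] is the number of states of variable v, and factors maps a
    tuple of variable indices N_I to the corresponding array psi_I."""
    p = np.zeros(domains[i])
    for x in itertools.product(*(range(d) for d in domains)):
        w = 1.0
        for scope, psi in factors.items():
            w *= psi[tuple(x[v] for v in scope)]
        p[x[i]] += w
    return p / p.sum()      # dividing by the partition sum Z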
Factor graphs. We can represent the structure of the probability distribution (1) using a factor
graph (V, F, E). This is a bipartite graph, consisting of variable nodes i ∈ V, factor nodes I ∈ F,
and edges e ∈ E, with an edge {i, I} between i ∈ V and I ∈ F if and only if the factor ψ_I
depends on x_i (i.e., if i ∈ N_I). We will represent factor nodes visually as rectangles and variable
nodes as circles. Figure 1(a) shows a simple example of a factor graph. The set of neighbors
of a factor node I is precisely N_I; similarly, we denote the set of neighbors of a variable node i by
N_i := {I ∈ F : i ∈ N_I}. We will assume throughout this article that the factor graph corresponding
to (1) is connected.
Convexity. We denote the set of extreme points of a convex set X ⊆ ℝ^d by ext(X). For a subset
Y ⊆ ℝ^d, the convex hull of Y is defined as the smallest convex set X ⊆ ℝ^d with Y ⊆ X; we denote
the convex hull of Y as conv(Y).
Measures. For A ⊆ V, define M_A := [0, ∞)^{X_A} as the set of nonnegative functions on X_A. Each
element of M_A can be identified with a finite measure on X_A; therefore we will call the elements
of M_A "measures on A". We write M*_A := M_A ∖ {0}.
Operations on measures. Adding two measures Ψ, Φ ∈ M_A results in the measure Ψ + Φ in M_A.
For A, B ⊆ V, we can multiply a measure on M_A with a measure on M_B to obtain a measure
on M_{A∪B}; a special case is multiplication with a scalar. Note that there is a natural embedding of
M_A in M_B for A ⊆ B ⊆ V obtained by multiplying a measure Ψ ∈ M_A by 1_{B∖A} ∈ M_{B∖A},
the constant function with value 1 on X_{B∖A}. Another important operation is the partial summation:
given A ⊆ B ⊆ V and Ψ ∈ M_B, define ∑_{x_A} Ψ to be the measure in M_{B∖A} obtained by summing
Ψ over all x_A ∈ X_A, i.e., ∑_{x_A} Ψ : x_{B∖A} ↦ ∑_{x_A∈X_A} Ψ(x_A, x_{B∖A}).
Operations on sets of measures. We will define operations on sets of measures by applying the
operation on elements of these sets and taking the set of the resulting measures; e.g., if we have two
subsets Ξ_A ⊆ M_A and Ξ_B ⊆ M_B for A, B ⊆ V, we define the product of the sets Ξ_A and Ξ_B to be
the set of the products of elements of Ξ_A and Ξ_B, i.e., Ξ_A Ξ_B := {Ψ_A Ψ_B : Ψ_A ∈ Ξ_A, Ψ_B ∈ Ξ_B}.
Completely factorized measures. For A ⊆ V, we will define Q_A to be the set of completely
factorized measures on A, i.e., Q_A := ∏_{a∈A} M_{{a}}. Note that M_A is the convex hull of Q_A.
Indeed, we can write each measure Ψ ∈ M_A as a convex combination of measures in Q_A which
are zero everywhere except at one particular value of their argument. We denote Q*_A := Q_A ∖ {0}.
Normalized (probability) measures. We denote with P_A the set of probability measures on A,
i.e., P_A = {Ψ ∈ M_A : ∑_{x_A} Ψ = 1}. The set P_A is called a simplex. Note that a simplex is
convex; the simplex P_A has precisely #(X_A) extreme points, each of which corresponds to putting
all probability mass on one of the possible values of x_A. We define the normalization operator N
which normalizes measures, i.e., for Ψ ∈ M*_A we define N Ψ := (1/Z) Ψ with Z = ∑_{x_A} Ψ.
Boxes. Let a, b ∈ ℝ^d such that a_α ≤ b_α for all components α = 1, . . . , d. Then we define the
box with lower bound a and upper bound b by B(a, b) := {x ∈ ℝ^d : a_α ≤ x_α ≤
b_α for all α = 1, . . . , d}. Note that a box is convex; indeed, its extreme points are the "corners", of
which there are 2^d.
Smallest bounding boxes. Let X ⊆ ℝ^d be bounded. The smallest bounding box of X is defined
as B(X) := B(a, b), where the lower bound a is given by the pointwise infimum of X and the
upper bound b is given by the pointwise supremum of X, that is, a_α := inf{x_α : x ∈ X} and
b_α := sup{x_α : x ∈ X} for all α = 1, . . . , d. Note that B(X) = B(conv(X)). Therefore, if
X is convex, the smallest bounding box for X depends only on the extreme points ext(X), i.e.,
B(X) = B(ext(X)); this bounding box can be easily calculated if the number of extreme points is
not too large.
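In code, the smallest bounding box of a finite point set is just a componentwise minimum and maximum (a minimal sketch assuming NumPy):

import numpy as np

def bounding_box(points):
    """Smallest bounding box B(X) of a finite set X of points in R^d:
    lower bound = pointwise infimum, upper bound = pointwise supremum."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)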
2.2 The basic tools
To calculate marginals of subsets of variables in some factor graph, several operations performed
on measures are relevant: normalization, taking products of measures, and summing over subsets of
variables. Here we study the interplay between convexity and these operations. This will turn out
to be useful later on, because our bounds make use of convex sets of measures that are propagated
over the factor graph.
The interplay between convexity and normalization, taking products and partial summation is described by the following lemma.
Lemma 1 Let A ⊆ V and let Ξ ⊆ M*_A. Then:
1. conv(N Ξ) = N(conv Ξ);
2. for all B ⊆ V, Ψ ∈ M_B: conv(Ψ Ξ) = Ψ(conv Ξ);
3. for all B ⊆ A: conv(∑_{x_B} Ξ) = ∑_{x_B} conv Ξ.
The next lemma concerns the interplay between convexity and taking products; it says that if we
take the product of convex sets of measures on different spaces, the resulting set is contained in the
convex hull of the product of the extreme points of the convex sets.
Lemma 2 Let (A_t)_{t=1,...,T} be disjoint subsets of V. For each t = 1, . . . , T, let Ξ_t ⊆ M_{A_t} be
convex with a finite number of extreme points. Then conv(∏_{t=1}^T Ξ_t) = conv(∏_{t=1}^T ext Ξ_t).
The third lemma says that the product of several boxes on the same subset A of variables can be
easily calculated: the product of the boxes is again a box, with as lower (upper) bound the product
of the lower (upper) bounds of the boxes.
Lemma 3 Let A ⊆ V and for each t = 1, . . . , T, let ξ_t, η_t ∈ M_A such that ξ_t ≤ η_t. Then
∏_{t=1}^T B(ξ_t, η_t) = B(∏_{t=1}^T ξ_t, ∏_{t=1}^T η_t).
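Lemma 3 translates into one line of code each for the lower and upper bounds (a sketch assuming NumPy and nonnegative bounds, as holds for the measures used here):

import numpy as np

def box_product(boxes):
    """Product of boxes B(xi_t, eta_t) on the same set of variables
    (Lemma 3): since all bounds are nonnegative, the product of the
    lower (upper) bounds is the lower (upper) bound of the product."""
    lowers = np.prod([lo for lo, hi in boxes], axis=0)
    uppers = np.prod([hi for lo, hi in boxes], axis=0)
    return lowers, uppers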
We are now ready to state the basic lemma. It basically says that one can bound the marginal of
a variable by replacing a factor depending on some other variables by a product of single-variable
[Figure 1 appears here: panels (a)-(e) showing the example factor graph with variables i, j, k and factors J, K, L; the cloned node j′ with the factor δ; the panel with question marks; the chosen subtree; and the propagated messages B_{J→i}, B_{K→i}, B_{j→J}, B_{k→K}, B_{L→j}, B_{L→k}, B_{j→L} and simplices P_j. See the caption below.]
Figure 1: (a) Example factor graph with three variable nodes (i, j, k) and three factor nodes
(J, K, L), with probability distribution P(x_i, x_j, x_k) = (1/Z) ψ_J(x_i, x_j) ψ_K(x_i, x_k) ψ_L(x_j, x_k); (b)
Cloning node j by adding a new variable j′ and a factor ψ_δ(x_j, x_{j′}) = δ_{x_j}(x_{j′}); (c) Illustration
of the bound on P(x_i) based on (b): "what can we say about the range of P(x_i) when the factors
corresponding to the nodes marked with question marks are arbitrary?"; (d) Subtree of the factor
graph; (e) Propagating convex sets of measures (boxes or simplices) on the subtree (d), leading to a
bound B_i on the marginal probability of x_i in G.
factors and bounding the result. This can be exploited to simplify the computational complexity of
bounding the marginal. An example of its use will be given in the next subsection.
Lemma 4 Let A, B, C ⊆ V be mutually disjoint subsets of variables. Let Ψ ∈ M_{A∪B∪C} such that
for each x_C ∈ X_C, ∑_{x_{A∪B}} Ψ > 0. Then:

B( N( ∑_{x_B,x_C} Ψ M*_C ) ) = B( N( ∑_{x_B,x_C} Ψ Q*_C ) ).

Proof. Note that M*_C is the convex hull of Q*_C and apply Lemma 1.
The positivity condition is a technical condition, which in our experience is fulfilled for many practically relevant factor graphs.
2.3 A simple example
Before proceeding to our main result, we first illustrate for a simple case how the basic lemma can
be employed to obtain computationally tractable bounds on marginals. We derive a bound for the
marginal of the variable x_i in the factor graph in Figure 1(a). We start by cloning the variable x_j,
i.e., adding a new variable x_{j′} that is constrained to take the same value as x_j. In terms of the
factor graph, we add a variable node j′ and a factor node δ, connected to variable nodes j and j′,
with corresponding factor ψ_δ(x_j, x_{j′}) := δ_{x_j}(x_{j′}); see also Figure 1(b). Clearly, the marginal of x_i
satisfies:
P(x_i) = N( ∑_{x_j} ∑_{x_k} ψ_J ψ_K ψ_L ) = N( ∑_{x_j} ∑_{x_{j′}} ∑_{x_k} ψ_J ψ_K ψ_L δ_{x_j}(x_{j′}) ),

where it should be noted that the first occurrence of ψ_L is shorthand for ψ_L(x_j, x_k), but the second
occurrence is shorthand for ψ_L(x_{j′}, x_k). Noting that ψ_δ ∈ M*_{{j,j′}} and applying the basic lemma
with A = {i}, B = {k}, C = {j, j′} and Ψ = ψ_J ψ_K ψ_L yields:
P(x_i) ∈ N( ∑_{x_j} ∑_{x_{j′}} ∑_{x_k} ψ_J ψ_K ψ_L M*_{{j,j′}} ) ⊆ B N( ∑_{x_j} ∑_{x_{j′}} ∑_{x_k} ψ_J ψ_K ψ_L Q*_{{j,j′}} ).
Applying the distributive law, we obtain (see also Figure 1(c)):

P(x_i) ∈ B N( ( ∑_{x_j} ψ_J M*_{{j}} ) ( ∑_{x_k} ψ_K ∑_{x_{j′}} ψ_L M*_{{j′}} ) ),
which we relax to

P(x_i) ∈ B N( B N( ∑_{x_j} ψ_J P_{j} ) B N( ∑_{x_k} ψ_K B N( ∑_{x_{j′}} ψ_L P_{j′} ) ) ).
Now it may seem that this smallest bounding box would be difficult to compute. Fortunately, we
only need to compute the extreme points of these sets because of convexity. Since smallest bounding
boxes only depend on extreme points, we conclude that
P(x_i) ∈ B N( B N( ∑_{x_j} ψ_J ext P_{j} ) B N( ∑_{x_k} ψ_K B N( ∑_{x_{j′}} ψ_L ext P_{j′} ) ) ),
which can be calculated efficiently if the number of possible values of each variable is small.
2.4 The main result
The example in the previous subsection can be generalized as follows. First, one chooses a particular
subtree of the factor graph, rooted in the variable for which one wants to calculate a bound on its
marginal. Then, one propagates messages (which are either bounding boxes or simplices) over this
subtree, from the leaf nodes towards the root node. The update equations resemble those of Belief
Propagation. The resulting ?belief? at the root node is a box that bounds the exact marginal of
the root node. The choice of the subtree is arbitrary; different choices lead to different bounds in
general. We now describe this ?box propagation? algorithm in more detail.
Definition 5 Let (V, F, E) be a factor graph. We call the bipartite graph (V′, F′, E′) a subtree of
(V, F, E) with root i if i ∈ V′ ⊆ V, F′ ⊆ F, E′ ⊆ E such that (V′, F′, E′) is a tree with root i and for
all {j, J} ∈ E′, j ∈ V′ and J ∈ F′ (i.e., there are no "loose edges").¹ We denote the parent of j ∈ V′
according to (V′, F′, E′) by par(j) and similarly, we denote the parent of J ∈ F′ by par(J).
An illustration of a possible subtree of the factor graph in Figure 1(a) is the one shown in Figure
1(d). The bound that we will obtain using this subtree corresponds to the example described in the
previous subsection.
In the following, we will use the topology of the original factor graph (V, F, E) whenever we refer
to neighbors of variables or factors. Each edge of the subtree will carry one message, oriented such
that it "flows" towards the root node. In addition, we define messages entering the subtree for all
"missing" edges in the subtree (see also Figure 1(e)). Because of the bipartite character of the factor
graph, we can distinguish two types of messages: messages B_{J→j} ⊆ M_j sent to a variable j ∈ V
from a neighboring factor J ∈ N_j, and messages B_{j→J} ⊆ M_j sent to a factor J ∈ F from a
neighboring variable j ∈ N_J. The messages entering the subtree are all defined to be simplices;
more precisely, we define the incoming messages

B_{j→J} = P_j   for all J ∈ F′, {j, J} ∈ E ∖ E′,
B_{J→j} = P_j   for all j ∈ V′, {j, J} ∈ E ∖ E′.
We propagate messages towards the root i of the tree using the following update rules (note the
similarity with the BP update rules). The message sent from a variable j ∈ V′ to its parent J =
par(j) ∈ F′ is defined as

B_{j→J} = ∏_{K∈N_j∖J} B_{K→j}   if all incoming B_{K→j} are boxes,
B_{j→J} = P_j   if at least one of the B_{K→j} is the simplex P_j,
where the product of the boxes can be calculated using Lemma 3. The message sent from a factor
J ∈ F′ to its parent k = par(J) ∈ V′ is defined as

B_{J→k} = B N( ∑_{x_{N_J∖k}} ψ_J ∏_{l∈N_J∖k} B_{l→J} ) = B N( ∑_{x_{N_J∖k}} ψ_J ∏_{l∈N_J∖k} ext B_{l→J} ),   (2)
¹Note that this corresponds to the notion of subtree of a bipartite graph; for a subtree of a factor graph, one
sometimes imposes the additional constraint that for all factors J ∈ F′, all its connecting edges {J, j} with
j ∈ N_J have to be in E′; here we do not impose this additional constraint.
where the second equality follows from Lemmas 1 and 2. The final "belief" B_i at the root node i is
calculated by

B_i = B N( ∏_{K∈N_i} B_{K→i} )   if all incoming B_{K→i} are boxes,
B_i = P_i   if at least one of the B_{K→i} is the simplex P_i.
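To make these update rules concrete, here is a minimal sketch of the two message computations (assuming NumPy; the ('box', lo, hi) / ('simplex', d) message encoding, the function names, and the convention that ψ_J is an array with the receiving variable on axis 0 followed by one axis per remaining neighbor are our own choices; the scheduling of messages over the subtree is omitted):

import itertools
import numpy as np

def corners(msg):
    """Extreme points of an incoming message: the 2^d corners of a box
    ('box', lo, hi), or the d unit vectors for a simplex ('simplex', d)."""
    if msg[0] == 'simplex':
        return list(np.eye(msg[1]))
    lo, hi = np.asarray(msg[1]), np.asarray(msg[2])
    return [np.where(np.array(bits), hi, lo)
            for bits in itertools.product((0, 1), repeat=len(lo))]

def variable_to_factor(incoming, d):
    """Message B_{j->J}: the product of the incoming boxes (Lemma 3), or
    the simplex P_j if any incoming message is a simplex (or there is none)."""
    if not incoming or any(m[0] == 'simplex' for m in incoming):
        return ('simplex', d)
    lo = np.prod([m[1] for m in incoming], axis=0)
    hi = np.prod([m[2] for m in incoming], axis=0)
    return ('box', lo, hi)

def factor_to_variable(psi, incoming):
    """Message B_{J->k} of eq. (2): for every combination of extreme points
    of the incoming messages, sum out the other variables of psi, normalize,
    and take the smallest bounding box of the resulting distributions."""
    outs = []
    for combo in itertools.product(*(corners(m) for m in incoming)):
        m = psi
        for vec in reversed(combo):      # contract the last axis each time
            m = m @ vec
        outs.append(m / m.sum())         # the normalization operator N
    outs = np.array(outs)
    return ('box', outs.min(axis=0), outs.max(axis=0))

The belief at the root is computed analogously, by normalizing the corners of the product of the incoming boxes and taking their bounding box.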
We can now formulate our main result, which gives a rigorous bound on the exact single-variable
marginal of the root node:
Theorem 6 Let (V, F, E) be a factor graph with corresponding probability distribution (1). Let
i ∈ V and (V′, F′, E′) be a subtree of (V, F, E) with root i ∈ V′. Apply the "box propagation"
algorithm described above to calculate the final "belief" B_i on the root node i. Then P(x_i) ∈ B_i.
Proof sketch The first step consists in extending the subtree such that each factor node has the
right number of neighboring variables by cloning the missing variables. The second step consists
of applying the basic lemma where the set C consists of all the variable nodes of the subtree which
have connecting edges in E ∖ E′, together with all the cloned variable nodes. Then we apply the
distributive law, which can be done because the extended subtree has no cycles. Finally, we relax
the bound by adding additional normalizations and smallest bounding boxes at each factor node in
the subtree. It should now be clear that the "box propagation" algorithm described above precisely
calculates the smallest bounding box at the root node i that corresponds to this procedure.
Because each subtree of the original factor graph is also a subtree of the computation tree for i
[12], the bounds on the (exact) marginals that we just derived are at the same time bounds on the
approximate Belief Propagation marginals (beliefs):
Corollary 7 In the situation described in Theorem 6, the final bounding box Bi also bounds the
(approximate) Belief Propagation marginal of the root node i, i.e., P_BP(x_i) ∈ B_i.
2.5 Related work
We briefly discuss the relationship of our bound to previous work. More details are provided in [11].
The bound in [6] is related to the bound we present here; however, the bound in [6] differs from ours
in that it (i) goes deeper into the computation tree by propagating bounds over self-avoiding-walk
(SAW) trees instead of mere subtrees, (ii) uses a different parameterization of the propagated bounds
and a different update rule, and (iii), it is only formulated for the special case of factors depending
on two variables, while it is not entirely obvious how to extend the result to more general factor
graphs.
Another method to obtain bounds on exact marginals is "Bound Propagation" [10]. The idea underlying Bound Propagation is very similar to the one employed in this work, with one crucial
difference. For a variable i ∈ V, we define the sets Δi := ∪_{I∈N_i} N_I (consisting of all variables that appear in some factor in which i participates) and ∂i := Δi ∖ {i} (the Markov blanket of i). Whereas
our method uses a cavity approach, using as basis equation

P(x_i) ∝ ∑_{x_{∂i}} ( ∏_{I∈N_i} ψ_I ) P^{∖i}(x_{∂i}),   where   P^{∖i}(x_{∂i}) ∝ ∑_{x_{V∖Δi}} ∏_{I∈F∖N_i} ψ_I,

and bounding the quantity P(x_i) by optimizing over P^{∖i}(x_{∂i}), the basis equation employed by Bound
Propagation is P(x_i) = ∑_{x_{∂i}} P(x_i | x_{∂i}) P(x_{∂i}) and the optimization is over P(x_{∂i}). Unlike in our
case, the computational complexity of Bound Propagation is exponential in the size of the Markov
blanket, because of the required calculation of the conditional distribution P(x_i | x_{∂i}). On the other
hand, the advantage of this approach is that a bound on P(x_j) for j ∈ ∂i is also a bound on P(x_{∂i}),
which in turn gives rise to a bound on P(x_i). In this way, bounds can propagate through the graphical
model, eventually yielding a new (tighter) bound on P(x_{∂i}). Although the iteration can result in
rather tight bounds, the main disadvantage of Bound Propagation is its computational cost: it is
exponential in the Markov blanket and often many iterations are needed for the bounds to become
tight.
[Figure 2 appears here: three scatter plots of bound gaps. Left: PROMEDAS, gaps of [10] against gaps of BoxProp (log-log axes). Middle: 100x100 grid with strong interactions, gaps of [6] against gaps of BoxProp. Right: 8x8 toroidal grid with medium interactions, gaps of the methods [9]-[7], MF-[7], [9]-[8], MF-[8], [3]+[8], [10], [6], BoxProp, [5] and [4]. See the caption below.]
Figure 2: Comparisons of various methods on different factor graphs: PROMEDAS (left), a large grid
with strong interactions (middle) and a small grid with medium-strength interactions (right).
3 Experiments
In this section, we present only a few empirical results due to space constraints. More details and
additional experimental results are given in [11]. We have compared different methods for calculating bounds on single-variable marginals; for each method and each variable, we calculated the gap
(tightness) of the bound, which we defined as the ℓ∞ distance between the upper and lower bound of
the bounding box. We have investigated three different types of factor graphs; the results are shown
in Figure 2. The factor graphs used for our experiments are provided as supplementary material to
the electronic version of this article at books.nips.cc. We also plan to release the source code
of several methods as part of a new release of the approximate inference library libDAI [13]. For
our method, we chose the subtrees in a breadth-first manner.
First, we applied our bound on simulated PROMEDAS patient cases [14]. These factor graphs have
binary variables and singleton, pairwise and triple interactions (containing zeros). We generated nine
different random instances. For each instance, we calculated bounds for each "finding" variable in
that instance using our method ("BoxProp") and the method in [10]. Note that the tightness of
both bounds varies widely depending on the instance and on the variable of interest. Our bound was
tighter than the bound from [10] for all but one out of 1270 variables. Furthermore, whereas [10]
had only finished on 7 out of 9 instances after running for 75000 s (after which we decided to abort
the calculation on the remaining two instances), our method only needed 51 s to calculate all nine
instances.
We also compared our method with the method described in [6] on a large grid of 100 × 100 binary
(±1-valued) variables with strong interactions. Note that this is an intractable problem for exact
inference methods. The single-variable factors were chosen as exp(θ_i x_i) with θ_i ∼ N(0, 1), the
pair factors were exp(J_{ij} x_i x_j) with J_{ij} ∼ N(0, 1). We truncated the subtree to 400 nodes and the
SAW tree to 10⁵ nodes. Note that our method yields the tightest bound for almost all variables.
Finally, we compared our method with several other methods referred to in Section 1 on a small
8 × 8 grid with medium-strength interactions (similarly chosen as for the large grid, but with
θ_i ∼ N(0, 0.2²) and J_{ij} ∼ N(0, 0.2²)). The small size of the grid was necessary because some
methods would need several days to handle a large grid. In this case, the method by [6] yields the
tightest bounds, followed by [10], and our method gets a third place. Note that many methods return
completely uninformative bounds in this case.
4 Conclusion and discussion
We have described a novel bound on exact single-variable marginals, which is at the same time a
bound on the (approximate) Belief Propagation marginals. Contrary to many other existing bounds,
it is formulated for the general case of factor graphs with discrete variables and factors depending on
an arbitrary number of variables. The bound is calculated by propagating convex sets of measures
over a subtree of the factor graph, with update equations resembling those of BP. For variables with
a limited number of possible values, the bounds can be computed efficiently. We have compared our
bounds with existing methods and conclude that our method belongs to the best methods, but that
it is difficult to say in general which method will yield the tightest bounds for a given variable in a
specific factor graph. Our method could be further improved by optimizing over the choice of the
subtree.
Although our bounds are a step forward in quantifying the error of Belief Propagation, the actual
error made by BP is often at least one order of magnitude lower than the tightness of these bounds.
This is due to the fact that (loopy) BP cycles information through loops in the factor graph; this
cycling apparently often improves the results. The interesting and still unanswered question is why
it makes sense to cycle information in this way and whether this error reduction effect can be quantified.
Acknowledgments
We thank Wim Wiegerinck for several fruitful discussions, Bastian Wemmenhove for providing the PROMEDAS
test cases, and Martijn Leisink for kindly providing his implementation of Bound Propagation. The research
reported here was supported by the Interactive Collaborative Information Systems (ICIS) project (supported by
the Dutch Ministry of Economic Affairs, grant BSIK03024), the Dutch Technology Foundation (STW), and the
IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886.
References
[1] G.F. Cooper. The computational complexity of probabilistic inferences. Artificial Intelligence, 42(2-3):393–405, March 1990.
[2] J. Pearl. Probabilistic Reasoning in Intelligent systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[3] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. Tree-based reparameterization framework for analysis
of sum-product and related algorithms. IEEE Transactions on Information Theory, 49(5):1120–1146,
May 2003.
[4] S. C. Tatikonda. Convergence of the sum-product algorithm. In Proceedings 2003 IEEE Information
Theory Workshop, pages 222–225, April 2003.
[5] Nobuyuki Taga and Shigeru Mase. Error bounds between marginal probabilities and beliefs of loopy
belief propagation algorithm. In MICAI, pages 186–196, 2006.
[6] A. Ihler. Accuracy bounds for belief propagation. In Proceedings of the 23rd Annual Conference on
Uncertainty in Artificial Intelligence (UAI-07), July 2007.
[7] T. S. Jaakkola and M. Jordan. Recursive algorithms for approximating probabilities in graphical models.
In Proc. Conf. Neural Information Processing Systems (NIPS 9), pages 487–493, Denver, CO, 1996.
[8] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition
function. IEEE Transactions on Information Theory, 51:2313–2335, July 2005.
[9] M. A. R. Leisink and H. J. Kappen. A tighter bound for graphical models. In Lawrence K. Saul, Yair
Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 13 (NIPS*2000),
pages 266–272, Cambridge, MA, 2001. MIT Press.
[10] M. Leisink and B. Kappen. Bound propagation. Journal of Artificial Intelligence Research, 19:139–154,
2003.
[11] J. M. Mooij and H. J. Kappen. Novel bounds on marginal probabilities. arXiv.org, arXiv:0801.3797
[math.PR], January 2008. Submitted to Journal of Machine Learning Research.
[12] S. C. Tatikonda and M. I. Jordan. Loopy belief propagation and Gibbs measures. In Proc. of the 18th
Annual Conf. on Uncertainty in Artificial Intelligence (UAI-02), pages 493–500, San Francisco, CA, 2002.
Morgan Kaufmann Publishers.
[13] J. M. Mooij. libDAI: A free/open source C++ library for discrete approximate inference methods, 2008.
http://mloss.org/software/view/77/.
[14] B. Wemmenhove, J. M. Mooij, W. Wiegerinck, M. Leisink, H. J. Kappen, and J. P. Neijt. Inference in
the Promedas medical expert system. In Proceedings of the 11th Conference on Artificial Intelligence in
Medicine (AIME 2007), volume 4594 of Lecture Notes in Computer Science, pages 456–460. Springer,
2007.
2,665 | 3,416 | Fast Computation of Posterior Mode in Multi-Level
Hierarchical Models
Liang Zhang
Department of Statistical Science
Duke University
Durham, NC 27708
[email protected]
Deepak Agarwal
Yahoo! Research
2821 Mission College Blvd.
Santa Clara, CA 95054
[email protected]
Abstract
Multi-level hierarchical models provide an attractive framework for incorporating
correlations induced in a response variable that is organized hierarchically. Model
fitting is challenging, especially for a hierarchy with a large number of nodes. We
provide a novel algorithm based on a multi-scale Kalman filter that is both scalable
and easy to implement. For Gaussian response, we show our method provides the
maximum a-posteriori (MAP) parameter estimates; for non-Gaussian response,
parameter estimation is performed through a Laplace approximation. However,
the Laplace approximation provides biased parameter estimates that are corrected
through a parametric bootstrap procedure. We illustrate through simulation studies
and analyses of real world data sets in health care and online advertising.
1 Introduction
In many real-world prediction problems, the response variable of interest is clustered hierarchically.
For instance, in studying the immunization status of a set of children in a particular geographic location, the children are naturally clustered by families, which in turn are clustered into communities.
The clustering often induces correlations in the response variable; models that exploit this provide
significant improvement in predictive performance. Multi-level hierarchical models provide an attractive framework for modeling such correlations. Although routinely applied to moderate sized
data (few thousand nodes) in several fields like epidemiology, social sciences, biology [3], model
fitting is computationally expensive and is usually performed through a Cholesky decomposition
of a q (number of nodes in the hierarchy) dimensional matrix. Recently, such models have shown
promise in a novel application of internet advertising [1] where the goal is to select top-k advertisements to be shown on a webpage to maximize the click-through rates. To capture the semantic
meaning of content in a parsimonious way, it is commonplace to classify webpages and ads into
large pre-defined hierarchies. The hierarchy in such applications consists of several levels and the
total number of nodes may run into millions. Moreover, the main goal is to exploit the hierarchy
for obtaining better predictions; computing the full posterior predictive distribution is of secondary
importance. Existing fitting algorithms are difficult to implement and do not scale well for such
problems. In this paper, we provide a novel, fast and easy to implement algorithm to compute the
posterior mode of parameters for such models on datasets organized hierarchically into millions of
nodes with several levels. The key component of our algorithm is a multi-scale Kalman filter that
expedites the computation of an expensive to compute conditional posterior.
The central idea in multi-level hierarchical (MLH hereafter) models is "shrinkage" across the nodes
in the hierarchy. More specifically, these models assume a multi-level prior wherein parameters of
children nodes are assumed to be drawn from a distribution centered around the parameter of the
parent. This bottom-up, recursive assumption provides a posterior whose estimates at the finest resolution are smoothed using data on the lineage path of the node in the hierarchy. The fundamental
Notation : Meaning
T_j : Level j of the hierarchy T
m_j : The number of nodes at level j in T
q : The total number of nodes in T
pa(r) : The parent node of node r in T
c_i(r) : The ith child node of node r in T
n_r : The number of observations at leaf node r
y_ir : The ith observation (response) at leaf node r
Y : {y_ir, i = 1, ..., n_r, r ∈ T}
x_ir : The ith observation (p-dimensional covariates) at leaf node r
X : {x_ir, i = 1, ..., n_r, r ∈ T}
β : The regression parameter vector associated with X
φ_r^j : The random effect parameter at node r at level j
φ : {φ_r^j, r ∈ T, j = 1, ..., L}
V : The residual variance of y_ir, if y_ir has a Gaussian model
σ_j : The variance of φ_r^j for all the nodes at level j
σ : {σ_1, ..., σ_L}
ν_{r|r}^j : The mean of φ_r^j | {y_ir′, i = 1, ..., n_r′, ∀r′ ≺ r}
σ_{r|r}^j : The variance of φ_r^j | {y_ir′, i = 1, ..., n_r′, ∀r′ ≺ r}
ν_r^j : The mean of φ_r^j | {y_ir′, i = 1, ..., n_r′, ∀r′ ∈ T_L}
σ_r^j : The variance of φ_r^j | {y_ir′, i = 1, ..., n_r′, ∀r′ ∈ T_L}

Table 1: A list of the key notations.
assumption is that the hierarchy, determined from domain knowledge, provides a natural clustering
to account for latent processes generating the data which, when incorporated into the model, improve predictions. Although MLH models are intuitive, parameter estimation presents a formidable
challenge, especially for large hierarchies. For Gaussian response, the main computational bottleneck is the Cholesky factorization of a dense covariance matrix whose order depends on the number
of nodes; this is expensive for large problems. For non-Gaussian response (e.g., binary data), the non-quadratic nature of the log-likelihood adds the additional challenge of approximating an integral
whose dimension depends on the number of nodes in the hierarchy. This is an active area of research
in statistics with several solutions being proposed, such as [5] (see references therein as well). For
Cholesky factorization, techniques based on sparse factorization of the covariance matrix have been
recently proposed in [5]. For non-Gaussian models, solutions require marginalization over a high-dimensional integral, which is often accomplished through higher-order Taylor series approximations [6].
However, these techniques involve linear algebra that is often non-intuitive and difficult to implement. A more natural computational scheme that exploits the structure of the model is based on
Gibbs sampling; however, it is not scalable due to slow convergence.
Our contributions are as follows: we provide a novel fitting procedure based on a multi-scale Kalman
filter algorithm that directly exploits the hierarchical structure of the problem and computes the
posterior mode of the MLH parameters. The complexity of our method is almost linear in the number of
nodes in the hierarchy. Beyond scalability, our fitting procedure is more intuitive and easy to
implement. We note that although multi-scale Kalman filters have been studied in the electrical
engineering literature [2] and in spatial statistics, their application to fitting MLH models is
novel. Moreover, fitting such models to non-Gaussian data presents formidable challenges, as we
illustrate in the paper. We provide strategies to overcome those through a bootstrap correction, and
compare with the commonly used cross-validation approach. Our methods are illustrated on simulated
data, benchmark data, and data obtained from an internet advertising application.
2 MLH for Gaussian Responses
Assume we have a hierarchy T consisting of L levels (root is level 0), for which m_j, j = 0, ..., L,
denotes the number of nodes at level j. Denote the set of nodes at level j in the hierarchy T as T_j.
For node r in T, denote the parent of r as pa(r), and the ith child of node r as c_i(r). If a node
r' is a descendant of r, we say r' \prec r. Since the hierarchy has L levels, T_L denotes the set of
leaf nodes in the hierarchy. Let y_{ir}, i = 1, ..., n_r denote the observations at leaf node r, and
x_{ir} denote the p-dimensional covariate vector associated with y_{ir}. For simplicity, we assume
all observations are available at leaf nodes (a more general case where each node in the hierarchy
can have observations
is easily obtained from our algorithm). Consider the Gaussian MLH defined by

    y_{ir} | \phi_r^L \sim N(x_{ir}' \beta + \phi_r^L, V),    (1)

where \beta is a fixed-effect parameter vector and \phi_r^j is a random effect associated with node r
at level j, with joint distribution defined through a set of hierarchical conditional distributions
p(\phi_r^j | \phi_{pa(r)}^{j-1}), j = 0, ..., L, where \phi^0 = 0. The form of
p(\phi_r^j | \phi_{pa(r)}^{j-1}), j = 1, ..., L, is assumed to be

    \phi_r^j | \phi_{pa(r)}^{j-1} \sim N(\phi_{pa(r)}^{j-1}, \sigma_j),  j = 1, ..., L,    (2)

where \sigma = (\sigma_1, ..., \sigma_L) is a vector of level-specific variance components that
control the amount of smoothing. To complete the model specification in a Bayesian framework, we put
a vague prior on V (\pi(V) \propto 1/V) and a mild quadratic prior on \sigma_i
(\pi(\sigma_i | V) \propto V/(V + \sigma_i)^2). For \beta, we assume a non-informative prior, i.e.,
\pi(\beta) \propto 1.
The specification of MLH given by Equation 2 is referred to as the centered parametrization and was
shown to provide good performance in a fully Bayesian framework by [9]. An equivalent way of
specifying MLH is obtained by associating independent random variables b_r^j \sim N(0, \sigma_j)
with the nodes and replacing \phi_r^L in (1) by the sum of the b_r^j parameters along the lineage
path from root to leaf node in the hierarchy. We denote this compactly as z_r' b, where b is a
vector of b_r^j for all the nodes in the hierarchy, and z_r is a vector of 0/1's turned on for the
nodes in the path of node r. More compactly, let y = {y_{ir}, i = 1, ..., n_r, r \in T}, and let X
as well as Z be the corresponding matrices of vectors x_{ir} and z_r for i = 1, ..., n_r and
r \in T; then y \sim N(X\beta + Zb, V I) with b \sim N(0, \Sigma(\sigma)).
The problem is to compute the posterior mode of (\beta_{p x 1}, b_{q x 1}, \sigma_{L x 1}, V), where
q = \sum_{j=1}^{L} m_j. The main computational bottleneck is computing the Cholesky factor of the
q x q matrix (Z'Z + \Sigma^{-1}); this is expensive for large values of q. Existing state-of-the-art
methods are based on sparse Cholesky factorization; we provide a more direct way. In fact, our
method provides a MAP estimate of the parameters in the Gaussian case. For the non-Gaussian case,
we provide an approximation to the MAP through the Laplace method coupled with a bootstrap
correction. We also note that our method applies if the random effects are vectors and enter into
equation (2) as a linear combination with some covariate vector. In this paper, we illustrate
through a scalar.
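To make the data-generating process concrete, the following is a minimal sketch of simulating from
(1)-(2) on a small two-level hierarchy. The tree layout, sizes and parameter values are illustrative
choices (matching the simulation study of Section 2.2), not part of the model specification.

    import numpy as np

    rng = np.random.default_rng(0)
    fanout, n_obs, p = 3, 5, 3
    sigma = [0.0, 4.0, 1.0]        # sigma_1 = 4, sigma_2 = 1 (index 0 unused)
    V, beta = 10.0, np.ones(p)

    X_blocks, y_blocks = [], []
    for a in range(fanout):                              # level-1 nodes
        phi1 = rng.normal(0.0, np.sqrt(sigma[1]))        # phi^1 ~ N(0, sigma_1)
        for b in range(fanout):                          # level-2 leaf nodes
            phi2 = rng.normal(phi1, np.sqrt(sigma[2]))   # equation (2)
            X = rng.normal(size=(n_obs, p))
            y = X @ beta + phi2 + rng.normal(0.0, np.sqrt(V), n_obs)  # equation (1)
            X_blocks.append(X)
            y_blocks.append(y)
    X_all, y_all = np.vstack(X_blocks), np.concatenate(y_blocks)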
2.1 Model Fitting
Throughout, we work with the parametrization specified by \phi. The main component of our fitting
algorithm is computing the conditional posterior distribution of
\phi = {\phi_r^j, r \in T, j = 1, ..., L} given (\beta, V, \sigma). Since the parameters V and
\sigma are unknown, we estimate them through an EM algorithm. The multi-scale Kalman filter
(described next) computes the conditional posterior of \phi mentioned above and is used in the
inner loop of the EM.

As in temporal state space models, the Kalman filter consists of two steps: (a) filtering, where one
propagates information from the leaves to the root, and (b) smoothing, where information is
propagated from the root all the way down to the leaves.
Filtering:
Denote the current estimates of \beta, \sigma and V as \hat\beta, \hat\sigma, and \hat V
respectively. Then e_{ir} = y_{ir} - x_{ir}' \hat\beta are the residuals, and
Var(\phi_r^j) = \Delta_j = \sum_{i=1}^{j} \hat\sigma_i, r \in T_j, are the marginal variances of the
random effects. If the conditional posterior distribution is
\phi_r^L | {y_{ir}, i = 1, ..., n_r} \sim N(\phi_{r|r}^L, \sigma_{r|r}^L), the first step is to
update \phi_{r|r}^L and \sigma_{r|r}^L for all leaf random effects \phi_r^L using the standard
Bayesian update formula for Gaussian models:

    \phi_{r|r}^L = \Delta_L \sum_{i=1}^{n_r} e_{ir} / (\hat V + n_r \Delta_L),    (3)

    \sigma_{r|r}^L = \Delta_L \hat V / (\hat V + n_r \Delta_L).    (4)
Next, the posteriors \phi_r^j | {y_{ir'}, i = 1, ..., n_{r'}, \forall r' \prec r} \sim
N(\phi_{r|r}^j, \sigma_{r|r}^j) are recursively updated from j = L-1 down to j = 1, by regressing
the parent node effect towards each child and combining information from all the children.

To provide intuition about the regression step, it is useful to invert the state equation (2) and
express the distribution of \phi_{pa(r)}^{j-1} conditional on \phi_r^j. Note that

    \phi_{pa(r)}^{j-1} = E(\phi_{pa(r)}^{j-1} | \phi_r^j)
                         + (\phi_{pa(r)}^{j-1} - E(\phi_{pa(r)}^{j-1} | \phi_r^j)).    (5)

Simple algebra provides the conditional expectation and variance of \phi_{pa(r)}^{j-1} | \phi_r^j as

    \phi_{pa(r)}^{j-1} = B_j \phi_r^j + \epsilon_r^j,    (6)

where B_j = \sum_{i=1}^{j-1} \hat\sigma_i / \sum_{i=1}^{j} \hat\sigma_i = \Delta_{j-1}/\Delta_j is
the correlation between any two siblings at level j, and \epsilon_r^j \sim N(0, B_j \hat\sigma_j).
First, a new prior is obtained for the parent node based on the current estimate of each child by
plugging the current estimates of a child into equation (6). For the ith child of node r (here we
assume that r is at level j-1, and c_i(r) is at level j),

    \phi_{r|c_i(r)}^{j-1} = B_j \phi_{c_i(r)|c_i(r)}^j,    (7)

    \sigma_{r|c_i(r)}^{j-1} = B_j^2 \sigma_{c_i(r)|c_i(r)}^j + B_j \hat\sigma_j.    (8)

Next, we combine the information obtained by the parent from all its children:

    \phi_{r|r}^{j-1} = \sigma_{r|r}^{j-1} \sum_{i=1}^{k_r} ( \phi_{r|c_i(r)}^{j-1} / \sigma_{r|c_i(r)}^{j-1} ),    (9)

    1/\sigma_{r|r}^{j-1} = \Delta_{j-1}^{-1} + \sum_{i=1}^{k_r} ( 1/\sigma_{r|c_i(r)}^{j-1} - \Delta_{j-1}^{-1} ),    (10)

where k_r is the number of children of node r at level j-1.
Smoothing:
In the smoothing step, parents propagate information recursively from the root to the leaves to
provide us with the posterior of each \phi_r^j based on the entire data. Denoting the posterior
mean and variance of \phi_r^j given all the observations by \bar\phi_r^j and \bar\sigma_r^j
respectively, the update equations are given below. For level-1 nodes, set
\bar\phi_r^1 = \phi_{r|r}^1 and \bar\sigma_r^1 = \sigma_{r|r}^1. For node r at other levels,

    \bar\phi_r^j = \phi_{r|r}^j + \sigma_{r|r}^j B_j ( \bar\phi_{pa(r)}^{j-1} - \phi_{pa(r)|r}^{j-1} ) / \sigma_{pa(r)|r}^{j-1},    (11)

    \bar\sigma_r^j = \sigma_{r|r}^j + (\sigma_{r|r}^j)^2 B_j^2 ( \bar\sigma_{pa(r)}^{j-1} - \sigma_{pa(r)|r}^{j-1} ) / (\sigma_{pa(r)|r}^{j-1})^2,    (12)

and let

    \bar\sigma_{r,pa(r)}^{j,j-1} = \sigma_{r|r}^j B_j \bar\sigma_{pa(r)}^{j-1} / \sigma_{pa(r)|r}^{j-1}.    (13)

The computational complexity of the algorithm is linear in the number of nodes in the hierarchy,
and for each parent node we perform an operation which is cubic in the number of children. Hence,
for most hierarchies that arise in practical applications, the complexity is "essentially" linear
in the number of nodes.
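A matching sketch of the downward pass, under the same assumed data layout as the filtering sketch
above: the smoothing gain is the factor \sigma_{r|r}^j B_j / \sigma_{pa(r)|r}^{j-1} multiplying the
parent correction in (11)-(12). The cross term (13) would be computed analogously from the gain.

    import numpy as np

    def smooth_down(children, level, phi, var, sigma_hat, L):
        """Downward pass, equations (11)-(12); phi, var are the filtered
        means/variances returned by the upward pass."""
        Delta = np.cumsum(sigma_hat)
        phi_bar, var_bar = dict(phi), dict(var)   # level-1 nodes keep filtered values
        def visit(r):
            j = level[r]
            for c in children[r]:
                if j >= 1:                        # the parent carries a random effect
                    B = Delta[j] / Delta[j + 1]
                    pred_m = B * phi[c]                               # equation (7)
                    pred_v = B ** 2 * var[c] + B * sigma_hat[j + 1]   # equation (8)
                    gain = var[c] * B / pred_v                        # smoothing gain
                    phi_bar[c] = phi[c] + gain * (phi_bar[r] - pred_m)        # eq (11)
                    var_bar[c] = var[c] + gain ** 2 * (var_bar[r] - pred_v)   # eq (12)
                visit(c)
        visit(0)
        return phi_bar, var_bar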
Expectation Maximization:
To estimate all parameters simultaneously, we use an EM algorithm which assumes the \phi parameters
to be the missing latent variables. The expectation step consists of computing the expected value of
the complete log-posterior with respect to the conditional distribution of the missing data \phi,
obtained using the multi-scale Kalman filter algorithm. The maximization step obtains revised
estimates of the other parameters by maximizing the expected complete log-posterior:

    \hat V = [ \sum_{r \in T_L} ( \sum_{i=1}^{n_r} (e_{ir} - \bar\phi_r^L)^2 + n_r \bar\sigma_r^L ) ] / \sum_{r \in T_L} n_r.    (14)

For j = 1, ..., L,

    \hat\sigma_j = \sum_{r \in T_j} ( \bar\sigma_r^j + \bar\sigma_{pa(r)}^{j-1} - 2 \bar\sigma_{r,pa(r)}^{j,j-1} + (\bar\phi_r^j - \bar\phi_{pa(r)}^{j-1})^2 ) / m_j.    (15)

Updating \hat\beta:
We use the posterior mean of \phi obtained from the Kalman filtering step to compute the posterior
mean of \beta, as given in equation (16):

    \hat\beta = (X'X)^{-1} X' (Y - \bar\phi_L),    (16)

where \bar\phi_L is the vector of \bar\phi_r^L corresponding to each observation y_{ir} at the
different leaf nodes r.
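The M-step updates (14)-(16) are a few lines of code. In the sketch below, pa, nodes_at_level,
leaves and cov_bar (holding the cross terms of (13)) are assumed containers, not names from the
paper; the beta update (16) is shown as a comment since it only needs the stacked design matrix.

    import numpy as np

    def m_step(e, phi_bar, var_bar, cov_bar, pa, nodes_at_level, leaves, L):
        # equation (14): residual variance, computed from the leaves
        num = sum(((e[r] - phi_bar[r]) ** 2).sum() + len(e[r]) * var_bar[r]
                  for r in leaves)
        V_hat = num / sum(len(e[r]) for r in leaves)
        # equation (15): level-specific variances sigma_hat[j], j = 1..L
        sigma_hat = np.zeros(L + 1)
        for j in range(1, L + 1):
            s = 0.0
            for r in nodes_at_level[j]:
                p = pa[r]
                pb = phi_bar[p] if j > 1 else 0.0    # phi at the root is fixed to 0
                vb = var_bar[p] if j > 1 else 0.0
                cb = cov_bar[r] if j > 1 else 0.0    # cross term from (13)
                s += var_bar[r] + vb - 2.0 * cb + (phi_bar[r] - pb) ** 2
            sigma_hat[j] = s / len(nodes_at_level[j])
        return V_hat, sigma_hat

    # equation (16), with phi_bar_L stacked to match the rows of X:
    # beta_hat = np.linalg.solve(X.T @ X, X.T @ (y - phi_bar_L))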
2.2 Simulation Performance
We first perform a simulation study with a hierarchy described in [7, 8]. The data focus on 2449
Guatemalan children who belong to 1558 families, who in turn live in 161 communities. The response
variable of interest is binary, with a positive label assigned to a child if he/she received a full
set of immunizations. The actual data contain 15 covariates capturing individual, family and
community level characteristics, as shown in Table 2. For our simulation study, we consider only
three covariates, with the coefficient vector \beta set with entries all equal to 1. We simulated
Gaussian responses as follows: y_{ir} | b \sim N(x_{ir}' \beta + b_r^1 + b_r^2, 10), where
b_r^1 \sim N(0, 4) and b_r^2 \sim N(0, 1). We simulated 100 data sets and compared the estimates
from the Kalman filter to those obtained from the standard routine lme4 in the statistical software
R. Results from our procedure agreed almost exactly with those obtained from lme4, and our
computation was many times faster than lme4. The EM method converged rapidly and required at most
30 iterations.
3 MLH for Non-Gaussian Responses
We discuss model fitting for a Bernoulli response, but note that other distributions in the
generalized linear model family can be easily fitted using the procedure. Let
y_{ir} \sim Bernoulli(p_{ir}), i.e., P(y_{ir}) = p_{ir}^{y_{ir}} (1 - p_{ir})^{1 - y_{ir}}. Let
\theta_{ir} = \log( p_{ir} / (1 - p_{ir}) ) be the log-odds. The MLH logistic regression is
defined as

    \theta_{ir} = x_{ir}' \beta + \phi_r^L,    (17)

with the same multi-level prior as described in equation (2). The non-conjugacy of the normal
multi-level prior makes the computation more difficult. We take recourse to a Taylor series
approximation coupled with the Kalman filter algorithm. The estimates obtained are biased; we
recommend cross-validation and a parametric bootstrap (adapted from [4]) to correct for the bias.
The bootstrap procedure, though expensive, is easily parallelizable and accurate.
3.1 Approximation Methods
Let \hat\theta_{ir} = x_{ir}' \hat\beta + \hat\phi_r^L, where \hat\beta, \hat\phi_r^L are the
current estimates of the parameters in our algorithm. We do a quadratic approximation of the
log-likelihood through a second-order Taylor expansion (Laplace approximation) around
\hat\theta_{ir}. This enables us to do the calculations as in the Gaussian case, with the response
y_{ir} being replaced by Z_{ir}, where

    Z_{ir} = \hat\theta_{ir} + (2 y_{ir} - 1) / g( (2 y_{ir} - 1) \hat\theta_{ir} ),    (18)

and g(x) = 1/(1 + \exp(-x)). Approximately,

    Z_{ir} \sim N( x_{ir}' \beta + \phi_r^L, 1 / ( g(\hat\theta_{ir}) g(-\hat\theta_{ir}) ) ).    (19)

Algorithm 1 The bootstrap procedure
  Let \eta = (\beta, \sigma).
  Obtain \hat\eta as an initial estimate of \eta. Bias b^{(0)} = 0.
  for i = 1 to N do
    \tilde\eta = \hat\eta - b^{(i)}.
    for j = 1 to M do
      Use \tilde\eta to simulate new data j, by simulating \phi and the corresponding Y.
      For data j, obtain a new estimate of \eta as \hat\eta^{(j)}.
    end for
    b^{(i+1)} = (1/M) \sum_{j=1}^{M} \hat\eta^{(j)} - \tilde\eta.
  end for
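A minimal sketch of Algorithm 1 follows. Here fit and simulate are placeholders for the
Kalman-filter fitting routine and the model simulator described above, and the fixed-point reading
of the bias update (simulating under the corrected value and comparing the refits to it) is our
interpretation of the algorithm.

    import numpy as np

    def bootstrap_bias_correct(fit, simulate, data, n_outer=100, n_inner=50):
        """Iterative parametric-bootstrap bias correction (Algorithm 1 with
        N = n_outer, M = n_inner); fit(data) returns eta = (beta, sigma) as an
        array, simulate(eta) draws a new dataset from the model."""
        eta_hat = fit(data)                  # initial (biased) estimate
        bias = np.zeros_like(eta_hat)
        for _ in range(n_outer):
            eta_tilde = eta_hat - bias       # current bias-corrected value
            refits = [fit(simulate(eta_tilde)) for _ in range(n_inner)]
            bias = np.mean(refits, axis=0) - eta_tilde
        return eta_hat - bias                # at the fixed point, E[fit] = eta_hat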
Now denote e_{ir} = Z_{ir} - x_{ir}' \hat\beta, and denote the approximated variance of Z_{ir} by
V_{ir}. Analogous to equations (3) and (4), the resulting filtering step for the leaf nodes becomes:

    \phi_{r|r}^L = \sigma_{r|r}^L \sum_{i=1}^{n_r} e_{ir} / V_{ir},    (20)

    \sigma_{r|r}^L = ( 1/\Delta_L + \sum_{i=1}^{n_r} 1/V_{ir} )^{-1}.    (21)

The step for estimating \beta becomes

    \hat\beta = (X' W X)^{-1} X' W (Z - \bar\phi_L),    (22)

where W = diag(1/V_{ir}). All the other computational steps remain the same as in the Gaussian case.
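The working-response construction (18)-(19) is a one-liner in code. This sketch assumes y is a 0/1
NumPy array and theta_hat the current linear predictor; everything else matches the equations above.

    import numpy as np

    def working_response(y, theta_hat):
        """Laplace working response: returns the pseudo-observations Z (18)
        and their approximate variances V (19)."""
        g = lambda x: 1.0 / (1.0 + np.exp(-x))      # logistic function
        s = 2.0 * y - 1.0                           # maps {0, 1} to {-1, +1}
        Z = theta_hat + s / g(s * theta_hat)        # equation (18)
        V = 1.0 / (g(theta_hat) * g(-theta_hat))    # variance in equation (19)
        return Z, V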
3.2 Bias correction
Table 2 shows estimates of parameters obtained from our approximation method in the column titled
KF. Compared to the unbiased estimates obtained from the slow Gibbs sampler, it is clear that our
estimates are biased. Our bias correction procedure is described in Algorithm 1. In general, a value
of M = 50 with about 100-200 iterations worked well for us. The bias-corrected estimates are
reported under KF-B in Table 2. The estimates after bootstrap correction are closer to the estimates
obtained from Gibbs sampling. It is also customary to estimate hyper-parameters like \sigma using a
tuning dataset. To test the performance of such a strategy, we created a two-dimensional grid for
(\sqrt{\sigma_1}, \sqrt{\sigma_2}) for the epidemiological Guatemalan data set, ranging in
[.1, 3] x [.1, 3], and computed the log-likelihood on a 10% randomly sampled hold-out set. For each
point on the two-dimensional grid, we estimated the other parameters \beta and \phi using our EM
algorithm without updating the value of \sigma. The estimates at the optimal value of \sigma are
shown in Table 2 under KF-C. These estimates are better than KF but worse than KF-B.

Based on our findings, we recommend KF-B when computing resources are available (especially multiple
processors) and running time is not a big constraint; if runtime is an issue, we recommend a grid
search using a small number of points around the initial estimate.
Effects                                      KF      KF-B    KF-C    Gibbs
Fixed effects
  Individual
    Child age >= 2 years                     0.99    1.77    1.18    1.84
    Mother age >= 25 years                  -0.09   -0.16   -0.10   -0.26
    Birth order 2-3                         -0.10   -0.18   -0.25   -0.29
    Birth order 4-6                          0.13    0.25    0.10    0.21
    Birth order >= 7                         0.20    0.36    0.21    0.50
  Family
    Indigenous, no Spanish                  -0.05    0.02    0.30    0.02
    Indigenous Spanish                       0.00    0.02    0.27    0.21
    Mother's education primary               0.22    0.32    0.53    0.04
    Mother's education secondary or better   0.23    0.27    0.48    0.35
    Husband's education primary             -0.11   -0.22    0.39   -0.08
    Husband's education secondary or better  0.01   -0.11    0.35    0.24
    Husband's education missing              0.44    0.48    0.59    0.00
    Mother ever worked                       0.44    0.46    0.55    0.42
  Community
    Rural                                   -0.50   -0.91   -0.62   -0.96
    Proportion indigenous, 1981             -0.67   -1.23   -0.89   -1.22
Random effects
  Standard deviations sigma
    Family                                   0.74    2.40    1.92    2.60
    Community                                0.56    1.05    0.81    1.13

Table 2: Estimates for the binary MLH model of complete immunization (Kalman filtering results).

4 Content Match Data Analysis
We analyze data from an internet advertising application where every showing of an ad on a web page
(called an impression) constitutes an event. The goal is to rank ads on a given page based on
click-through rates. Building a predictive model for click-rates via features derived from pages and
ads is an attractive approach. In our case, semantic features are obtained by classifying pages and
ads into a large seven-level content hierarchy that is manually constructed by humans. We form a new
hierarchy (a pyramid) by taking the cross product of the two hierarchies. This is used to estimate
smooth click-rates of (page, ad) pairs.
4.1 Training and Test Data
Although the page and ad hierarchies consist of 7 levels, classification is often done at coarser
levels by the classifier. In fact, the average level at which classification took place is 3.8. To
train our model, we only consider the top 3 levels of the original hierarchy. Pages and ads that are
classified at coarser levels are randomly assigned to the children nodes. Overall, the pyramid has
441, 25751 and 241292 nodes for the top 3 levels. The training data were collected by confining to a
specific subset of data which is sufficient to illustrate our methodology but in no way
representative of the actual publisher traffic received by the ad-network under consideration. The
training data we collected span 23 days and consist of approximately 11M binary observations with
approximately 1.9M clicks. The test set consisted of 1 day's worth of data with approximately .5M
observations. We randomly split the test data into 20 equal-sized partitions to report our results.
The covariates include the position at which an ad is shown; ranking ads on pages after adjusting
for positional effects is important since the positional effects introduce strong bias in the
estimates. In the training data, a large fraction of leaf nodes in the pyramid (approx. 95%) have
zero clicks; this provides a good motivation to fit the binary MLH on this data to get smoother
estimates at leaf nodes by using information at coarser resolutions.
4.2 Results
We compare the following models using log-likelihood on the test data: a) The model which predicts
a constant probability for all examples, b) 3 level MLH but without positional effects, c) top 2 level
MLH to illustrate the gains of using information at a finer resolution, and d) 3 level MLH with
positional effects to illustrate the generality of the approach; one can incorporate both additional
features and the hierarchy into a single model. Figure 1 shows the distribution of average test
likelihood on the partitions. As expected, all variations of MLH are better than the constant model.
The MLH model which uses only 2 levels is inferior to the 3 level MLH while the general model
that uses both covariates and hierarchy is the best.
[Figure 1: boxplots of test log-likelihood (y-axis, ticks at -2.65, -2.55, -2.45) for the models
2lev, 3lev, 3lev-pos and con (x-axis: Model).]
Figure 1: Distribution of test log-likelihood on 20 equal-sized splits of test data.
5 Discussion
In applications where data are aggregated at multiple resolutions with sparsity at finer
resolutions, multi-level hierarchical models provide an attractive class of models to reduce
variance by smoothing estimates at finer resolutions using data at coarser resolutions. However, the
smoothing provides a better bias-variance tradeoff only when the hierarchy provides a natural
clustering for the response variable and captures some latent characteristics of the process; this
is often true in practice. We proposed a fast novel algorithm to fit these models based on a
multi-scale Kalman filter that is both scalable and easy to implement. For the non-Gaussian case,
the estimates are biased, but performance can be improved by using a bootstrap correction or
estimation through a tuning set. In future work, we will report on models that generalize our
approach to an arbitrary number of hierarchies that may all have different structure. This is a
challenging problem since, in general, the cross-product of trees is not a hierarchy but a graph.
References
[1] D. Agarwal, A. Broder, D. Chakrabarti, D. Diklic, V. Josifovski, and M. Sayyadian. Estimating
rates of rare events at multiple resolutions. In KDD, pages 16-25, 2007.
[2] K. C. Chou, A. S. Willsky, and R. Nikoukhah. Multiscale systems, Kalman filters, and Riccati
equations. IEEE Transactions on Automatic Control, 39:479-492, 1994.
[3] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models.
Cambridge University Press, 2007.
[4] A. Y. C. Kuk. Asymptotically unbiased estimation in generalized linear models with random
effects. Journal of the Royal Statistical Society, Series B (Methodological), 57:395-407, 1995.
[5] J. C. Pinheiro and D. M. Bates. Mixed-Effects Models in S and S-PLUS. Springer-Verlag, New
York, 2000.
[6] S. W. Raudenbush, M. L. Yang, and M. Yosef. Maximum likelihood for generalized linear models
with nested random effects via high-order, multivariate Laplace approximation. Journal of
Computational and Graphical Statistics, 9(1):141-157, 2000.
[7] G. Rodriguez and N. Goldman. An assessment of estimation procedures for multilevel models with
binary responses. Journal of the Royal Statistical Society, Series A, 158:73-89, 1995.
[8] G. Rodriguez and N. Goldman. Improved estimation procedures for multilevel models with binary
response: A case-study. Journal of the Royal Statistical Society, Series A, 164(2):339-355, 2001.
[9] S. K. Sahu and A. E. Gelfand. Identifiability, improper priors, and Gibbs sampling for
generalized linear models. Journal of the American Statistical Association, 94(445):247-254, 1999.
Estimation of Information Theoretic Measures for
Continuous Random Variables
Fernando Pérez-Cruz
Princeton University, Electrical Engineering Department
B-311 Engineering Quadrangle, 08544 Princeton (NJ)
[email protected]
Abstract
We analyze the estimation of information theoretic measures of continuous random variables such as
differential entropy, mutual information or Kullback-Leibler divergence. The objective of this paper is two-fold. First, we prove that the
information theoretic measure estimates using the k-nearest-neighbor density estimation with fixed k converge almost surely, even though the k-nearest-neighbor
density estimation with fixed k does not converge to its true measure. Second,
we show that the information theoretic measure estimates do not converge for k
growing linearly with the number of samples. Nevertheless, these nonconvergent
estimates can be used for solving the two-sample problem and assessing if two
random variables are independent. We show that the two-sample and independence tests based on these nonconvergent estimates compare favorably with the
maximum mean discrepancy test and the Hilbert Schmidt independence criterion.
1
Introduction
Kullback-Leibler divergence, mutual information and differential entropy are central to information
theory [5]. The divergence [17] measures the "distance" between two density distributions while
mutual information measures the information one random variable contains about a related random
variable [23]. In machine learning, statistics and neuroscience the information theoretic measures
also play a leading role. For instance, the divergence is the error exponent in large deviation theory
[5] and the divergence can be directly applied to solving the two-sample problem [1]. Mutual information is extensively used to assess whether two random variables are independent [2] and has been
proposed to solve the all-relevant feature selection problem [8, 24]. Information-theoretic analysis of
neural data is unavoidable given the questions neurophysiologists are interested in¹. There are other
relevant applications in different research areas in which divergence estimation is used to measure
the difference between two density functions, such as multimedia [19] and text [13] classification,
among others.
The estimation of information theoretic quantities can be traced back to the late fifties [7], when Dobrushin estimated the differential entropy for one-dimensional random variables. The review paper
by Beirlant et al. [4] analyzes the different contributions of nonparametric differential entropy estimation for continuous random variables. The estimation of the divergence and mutual information
for continuous random variables has been addressed by many different authors [25, 6, 26, 18, 20, 16],
see also the references therein. Most of these approaches are based on estimating the densities first.
For example, in [25], the authors propose to estimate the densities based on data-dependent histograms with a fixed number of samples from q(x) in each bin. The authors of [6] compute relative
frequencies on data-driven partitions achieving local independence for estimating mutual information. Also, in [20, 21], the authors compute the divergence using a variational approach, in which
¹ See [22] for a detailed discussion on mutual information estimation in neuroscience.
convergence is proven ensuring that the estimate for p(x)/q(x) or log p(x)/q(x) converges to the
true measure ratio or its log ratio.
There are only a handful of approaches that use k-nearest-neighbors (k-nn) density estimation [26,
18, 16] for estimating the divergence and mutual information for finite k. Although finite k-nn
density estimation does not converge to the true measure, the authors are able to prove mean-square
consistency of their divergence estimators imposing some regularity constraint over the densities.
These proofs are based on the results reported in [15] for estimating the differential entropy with
k-nn density estimation.
The results in this paper are two-fold. First, we prove almost sure convergence of our divergence
estimate based on k-nn density estimation with finite k. Our result is based on describing the
statistics of p(x)/\hat p(x) as a waiting-time distribution independent of p(x). We can readily
apply this result to the estimation of the differential entropy and mutual information.
Second, we show that for k linearly growing with the number of samples, our estimates do not converge nor present known statistics. But they can be reliably used for solving the two-sample problem
or assessing if two random variables are independent. We show that for this choice of k, the estimates of the divergence or mutual information perform, respectively, as well as the maximum mean
discrepancy (MMD) test in [9] and the Hilbert Schmidt independence criterion (HSIC) proposed in
[10].
The rest of the paper is organized as follows. We prove in Section 2 the almost sure convergence
of the divergence estimate based on k-nn density estimation with fixed k. We extend this result
for differential entropy and mutual information in Section 3. In Section 4 we present some examples to illustrate the convergence of our estimates and to show how can they be used to assess the
independence of related random variables. Section 5 concludes the paper with some final remarks.
2
Estimation of the Kullback-Leibler Divergence
If the densities P and Q exist with respect to a Lebesgue measure, the Kullback-Leibler divergence
is given by:
    D(P \| Q) = \int_{\mathbb{R}^d} p(x) \log \frac{p(x)}{q(x)} \, dx \ge 0.    (1)
This divergence is finite whenever P is absolutely continuous with respect to Q and it is zero only
if P = Q.
The idea of using k-nn density estimation to estimate the divergence was put forward in [26, 18],
where they prove mean-square consistency of their estimator for finite k. In this paper, we prove
the almost sure convergence of this divergence estimator, using waiting-times distributions without
needing to impose additional conditions on the density models. Given a set with n i.i.d. samples
from p(x), X = \{x_i\}_{i=1}^{n}, and m i.i.d. samples from q(x), X' = \{x'_j\}_{j=1}^{m}, we
estimate D(P\|Q) from a k-nn density estimate of p(x) and q(x) as follows:

    \hat D_k(P\|Q) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{\hat p_k(x_i)}{\hat q_k(x_i)}
                   = \frac{d}{n} \sum_{i=1}^{n} \log \frac{s_k(x_i)}{r_k(x_i)} + \log \frac{m}{n-1},    (2)
where

    \hat p_k(x_i) = \frac{k}{n-1} \, \frac{\Gamma(d/2+1)}{\pi^{d/2}} \, \frac{1}{r_k(x_i)^d},    (3)

    \hat q_k(x_i) = \frac{k}{m} \, \frac{\Gamma(d/2+1)}{\pi^{d/2}} \, \frac{1}{s_k(x_i)^d},    (4)

r_k(x_i) and s_k(x_i) are, respectively, the Euclidean distances to the k-nn of x_i in
X \setminus x_i and in X', and \pi^{d/2}/\Gamma(d/2+1) is the volume of the unit ball in
\mathbb{R}^d.
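For concreteness, a minimal sketch of the estimator (2)-(4) follows; using scikit-learn's
NearestNeighbors for the k-nn searches is our own implementation choice, not part of the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def kl_divergence_knn(X, Xp, k=1):
        """X: (n, d) samples from p; Xp: (m, d) samples from q; equation (2)."""
        n, d = X.shape
        m = Xp.shape[0]
        # r_k: distance to the k-th neighbour of x_i in X \ {x_i}
        # (index 0 of the in-sample query is the point itself, at distance 0)
        r = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[0][:, k]
        # s_k: distance to the k-th neighbour of x_i among the samples from q
        s = NearestNeighbors(n_neighbors=k).fit(Xp).kneighbors(X)[0][:, k - 1]
        return d * np.mean(np.log(s / r)) + np.log(m / (n - 1.0))

Before proving that (2) converges almost surely to D(P\|Q), let us show an intermediate necessary
result.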
Lemma 1. Given n i.i.d. samples, X = \{x_i\}_{i=1}^{n}, from an absolutely continuous probability
distribution P, the limiting distribution of p(x)/\hat p_1(x) is exponentially distributed with
unit mean for any x in the support of p(x).
Proof. Let's initially assume p(x) is a d-dimensional uniform distribution with a given support.
The set S_{x,R} = \{x_i : \|x_i - x\|_2 \le R, x_i \in X\} contains all the samples from X inside
the ball centered at x of radius R. The radius R has to be small enough for the ball centered at x
to be contained within the support of p(x).

The samples in \{\|x_i - x\|_2^d : x_i \in S_{x,R}\} are consequently uniformly distributed between
0 and R^d. Thereby, the limiting distribution of r_1(x)^d = \min_{x_j \in S_{x,R}} \|x_j - x\|_2^d
is exponential, as it measures the waiting time between the origin and the first event of a
uniformly-spaced sample (see Theorem 2.4 in [3]). Since p(x) n \pi^{d/2}/\Gamma(d/2+1) is the mean
number of samples per unit ball centered at x, p(x)/\hat p_1(x) is distributed as a unit-mean
exponential distribution as n tends to infinity.

For non-uniform absolutely continuous P, P(r_1(x) > \epsilon) \to 0 as n \to \infty for any x in
the support of p(x) and any \epsilon > 0. Therefore, as n tends to infinity,
p(\arg\min_{x_j \in S_{x,R}} \|x_j - x\|_2^d) \to p(x) and the limiting distribution of
p(x)/\hat p_1(x) is a unit-mean exponential distribution.
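A quick numerical check of Lemma 1 for the uniform density on [0, 1] (so p(x) = 1 and d = 1): the
rescaled nearest-neighbour distance should behave like an Exp(1) variable. The sample sizes and the
query point are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials, x0 = 2000, 5000, 0.5
    z = []
    for _ in range(trials):
        X = rng.uniform(size=n)
        r1 = np.abs(X - x0).min()        # nearest-neighbour distance to x0
        # 1-nn density estimate at x0 (x0 is a query point, so n samples are used)
        p_hat = 1.0 / (2.0 * n * r1)
        z.append(1.0 / p_hat)            # p(x0)/p_hat(x0), since p = 1
    print(np.mean(z), np.var(z))         # both should be close to 1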
Corollary 1. Given n i.i.d. samples, X = \{x_i\}_{i=1}^{n}, from an absolutely continuous
probability distribution P, the limiting distribution of p(x)/\hat p_k(x) is a unit-mean,
1/k-variance gamma distribution for any x in the support of p(x).

Proof. In the previous proof, instead of measuring the waiting time to the first event, we compute
the waiting time to the k-th event of a uniformly-spaced sample. This waiting-time limiting
distribution is a unit-mean, 1/k-variance Erlang (gamma) distribution [14].
Corollary 2. Given n i.i.d. samples, X = \{x_i\}_{i=1}^{n}, from an absolutely continuous
probability distribution P, then \hat p_k(x) \to p(x) in probability for any x in the support of
p(x), if k \to \infty and k/n \to 0 as n \to \infty.

Proof. The k-nn in X tends to x as k/n \to 0 and n \to \infty. Thereby the limiting distribution of
p(x)/\hat p_k(x) is a unit-mean, 1/k-variance gamma distribution. As k \to \infty the variance of
the gamma distribution goes to zero, and consequently \hat p_k(x) converges to p(x).
The second corollary is the widely known result that k-nn density estimation converges to the true
measure if k \to \infty and k/n \to 0. We have included it in the paper for clarity and
completeness. If k grows linearly with n, the k-nn sample in X does not converge to x, which
prevents p(x)/\hat p_k(x) from having known statistics. For this growth of k, the divergence
estimate does not converge to D(P\|Q).

Now we can prove the almost sure convergence to (1) of the estimate in (2) based on the k-nn
density estimation.
Theorem 1. Let P and Q be absolutely continuous probability measures and let P be absolutely
continuous with respect to Q. Let X = \{x_i\}_{i=1}^{n} and X' = \{x'_i\}_{i=1}^{m} be i.i.d.
samples, respectively, from P and Q. Then

    \hat D_k(P\|Q) \xrightarrow{a.s.} D(P\|Q).    (5)
Proof. We can rearrange \hat D_k(P\|Q) in (2) as follows:

    \hat D_k(P\|Q) = \frac{1}{n}\sum_{i=1}^{n} \log\frac{\hat p_k(x_i)}{\hat q_k(x_i)}
                   = \frac{1}{n}\sum_{i=1}^{n} \log\frac{p(x_i)}{q(x_i)}
                     - \frac{1}{n}\sum_{i=1}^{n} \log\frac{p(x_i)}{\hat p_k(x_i)}
                     + \frac{1}{n}\sum_{i=1}^{n} \log\frac{q(x_i)}{\hat q_k(x_i)}.    (6)

The first term is the empirical estimate of (1) and, by the law of large numbers [11], it converges
almost surely to its mean, D(P\|Q).

The limiting distributions of p(x_i)/\hat p_k(x_i) and q(x_i)/\hat q_k(x_i) are unit-mean,
1/k-variance gamma distributions, independent of i, p(x) and q(x) (see Corollary 1). In the large
sample limit,

    \frac{1}{n}\sum_{i=1}^{n} \log\frac{p(x_i)}{\hat p_k(x_i)} \xrightarrow{a.s.}
        \frac{k^k}{(k-1)!}\int_0^{\infty} \log(z)\, z^{k-1} e^{-kz}\, dz    (7)

by the law of large numbers [11].

Finally, the sum of almost surely convergent terms also converges almost surely [11], which
completes our proof.
The k-nn based divergence estimator is biased, because the convergence rate of p(x_i)/\hat p_k(x_i)
and q(x_i)/\hat q_k(x_i) to the unit-mean, 1/k-variance gamma distribution depends on the density
models, and we should not expect the two rates to be identical. If p(x) = q(x), the divergence is
zero and our estimate is unbiased for any k (even if k/n does not tend to zero), since the
statistics of the second and third terms in (6) are identical and they cancel each other out for
any n (their expected means are the same). We use the Monte Carlo based test described in [9] with
our divergence estimator to solve the two-sample problem and decide whether the samples in X and X'
actually came from the same distribution.
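A sketch of such a resampling test: under the null hypothesis the pooled sample is exchangeable, so
the observed statistic is compared with its permutation distribution. The generic stat argument can
be the divergence sketch above; the details of the bootstrap in [9] differ, so this is an
illustration rather than their exact procedure.

    import numpy as np

    def two_sample_test(X, Xp, stat, n_perm=200, alpha=0.05, seed=0):
        """Returns True if the null hypothesis p = q is rejected at level alpha."""
        observed = stat(X, Xp)
        pooled = np.vstack([X, Xp])
        n = X.shape[0]
        rng = np.random.default_rng(seed)
        null = []
        for _ in range(n_perm):
            idx = rng.permutation(pooled.shape[0])      # reshuffle the labels
            null.append(stat(pooled[idx[:n]], pooled[idx[n:]]))
        return observed > np.quantile(null, 1.0 - alpha)

    # e.g., the k = n/2 variant used later in the experiments could be run as
    # two_sample_test(X, Xp, lambda a, b: kl_divergence_knn(a, b, k=a.shape[0] // 2))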
3 Differential Entropy and Mutual Information Estimation
The results obtained for the divergence can be readily applied to estimate the differential entropy
of a random variable or the mutual information between two correlated random variables.

The differential entropy for an absolutely continuous random variable P is given by

    h(x) = -\int p(x) \log p(x)\, dx.    (8)

We can estimate the differential entropy given a set with n i.i.d. samples from P,
X = \{x_i\}_{i=1}^{n}, using k-nn density estimation as follows:

    \hat h_k(x) = -\frac{1}{n}\sum_{i=1}^{n} \log \hat p_k(x_i),    (9)

where \hat p_k(x_i) is given by (3).
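A minimal sketch of (9) follows; the optional correction adds back
\gamma_k = \log k - \psi(k) from Theorem 2 below (for k = 1, the Euler-Mascheroni constant), which
removes the asymptotic offset of the plug-in estimate. The use of scikit-learn and SciPy is our own
choice.

    import math
    import numpy as np
    from scipy.special import digamma
    from sklearn.neighbors import NearestNeighbors

    def entropy_knn(X, k=1, correct=True):
        """X: (n, d) i.i.d. samples; plug-in entropy estimate, equation (9)."""
        n, d = X.shape
        r = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[0][:, k]
        vol = math.pi ** (d / 2.0) / math.gamma(d / 2.0 + 1.0)    # unit-ball volume
        log_p_hat = np.log(k) - np.log((n - 1.0) * vol * r ** d)  # log of (3)
        h = -np.mean(log_p_hat)                                   # equation (9)
        if correct:
            h += math.log(k) - digamma(k)   # add gamma_k back to cancel the offset
        return h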
Theorem 2. Let P be an absolutely continuous probability measure and let X = \{x_i\}_{i=1}^{n} be
i.i.d. samples from P. Then

    \hat h_k(x) \xrightarrow{a.s.} h(x) - \gamma_k,    (10)

where

    \gamma_k = -\frac{k^k}{(k-1)!}\int_0^{\infty} \log(z)\, z^{k-1} e^{-kz}\, dz,    (11)

and \gamma_1 \cong 0.5772 is known as the Euler-Mascheroni constant [12].
Proof. We can rearrange \hat h_k(x) in (9) as follows:

    \hat h_k(x) = -\frac{1}{n}\sum_{i=1}^{n} \log \hat p_k(x_i)
                = -\frac{1}{n}\sum_{i=1}^{n} \log p(x_i)
                  + \frac{1}{n}\sum_{i=1}^{n} \log\frac{p(x_i)}{\hat p_k(x_i)}.    (12)

The first term is the empirical estimate of (8) and, by the law of large numbers [11], it converges
almost surely to its mean, h(x).

The limiting distribution of p(x_i)/\hat p_k(x_i) is a unit-mean, 1/k-variance gamma distribution,
independent of i and p(x) (see Corollary 1). In the large sample limit,

    \frac{1}{n}\sum_{i=1}^{n} \log\frac{p(x_i)}{\hat p_k(x_i)} \xrightarrow{a.s.}
        \frac{k^k}{(k-1)!}\int_0^{\infty} \log(z)\, z^{k-1} e^{-kz}\, dz = -\gamma_k    (13)

by the law of large numbers [11].

Finally, the sum of almost surely convergent terms also converges almost surely [11], which
completes our proof.
Now, we can use the expansion of the conditional differential entropy, the mutual information and
the conditional mutual information to prove the convergence of their estimates based on k-nn
density estimation:

• Conditional differential entropy:

    h(y|x) = -\int p(x,y) \log\frac{p(y,x)}{p(x)}\, dx\, dy,    (14)

    \hat h(y|x) = -\frac{1}{n}\sum_{i=1}^{n} \log\frac{\hat p(y_i,x_i)}{\hat p(x_i)}
        \xrightarrow{a.s.} h(y|x).    (15)

• Mutual information:

    I(x;y) = \int p(x,y) \log\frac{p(y,x)}{p(x)p(y)}\, dx\, dy,    (16)

    \hat I(x;y) = \frac{1}{n}\sum_{i=1}^{n} \log\frac{\hat p(y_i,x_i)}{\hat p(x_i)\hat p(y_i)}
        \xrightarrow{a.s.} I(x;y) - \gamma_k.    (17)

• Conditional mutual information:

    I(x;y|z) = \int p(x,y,z) \log\frac{p(y,x,z)p(z)}{p(x,z)p(y,z)}\, dx\, dy\, dz,    (18)

    \hat I(x;y|z) = \frac{1}{n}\sum_{i=1}^{n}
        \log\frac{\hat p(y_i,x_i,z_i)\hat p(z_i)}{\hat p(x_i,z_i)\hat p(y_i,z_i)}
        \xrightarrow{a.s.} I(x;y|z).    (19)

Here each \hat p is the corresponding k-nn density estimate. The offsets \gamma_k cancel in (15)
and (19) because the same number of density estimates appears in the numerator and the denominator,
while a single net offset remains in (17).
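Since \hat I(x;y) in (17) decomposes into three entropy estimates, it can be sketched by reusing the
entropy_knn function from the sketch above; with correct=True each entropy term is individually
corrected, while correct=False reproduces the plug-in behaviour of (17) up to the -\gamma_k offset.

    import numpy as np

    def mutual_information_knn(X, Y, k=1, correct=True):
        """Plug-in estimate via I(x;y) = h(x) + h(y) - h(x,y)."""
        XY = np.hstack([X, Y])
        return (entropy_knn(X, k, correct) + entropy_knn(Y, k, correct)
                - entropy_knn(XY, k, correct))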
4 Experiments
We have carried out two sets of experiments. In the first one, we show the convergence of the
divergence estimate to its limiting value as the number of samples tends to infinity, and we
compare the divergence estimation to the MMD test in [9] on the MNIST dataset. In the second
experiment, we test whether two random variables are independent and compare the obtained results
to the HSIC proposed in [10].
We first compare the divergence between a uniform distribution between 0 and 1 in d dimensions and
a zero-mean Gaussian distribution with identity covariance matrix. We plot the divergence estimates
for d = 1 and d = 5 in Figure 1 as a function of n, for k = 1, k = \sqrt{n} and k = n/2, with
m = n.
[Figure 1: two panels, (a) d = 1 and (b) d = 5, plotting the KLD estimates for k = 1, k = \sqrt{n}
and k = n/2 against n.]
Figure 1: We plot the divergence for d = 1 in (a) and d = 5 in (b). The solid lines represent the
divergence estimates for k = 1, k = \sqrt{n} and k = n/2, and the dash-dotted line represents the
true divergence. The dashed lines represent ±3 standard deviations for each divergence estimate. We
have not added symbols to them to avoid cluttering the images further, and from the plots it should
be clear which confidence interval is assigned to which estimate.
As expected, the divergence estimate for k = n/2 does not converge to the true divergence, as the
limiting distributions of p(x)/\hat p_k(x) and q(x)/\hat q_k(x) are unknown and depend on p(x) and
q(x), respectively. Nevertheless, this estimate converges faster to its limiting value, and its
variance is much smaller than that of the divergence estimates with k = \sqrt{n} or k = 1. This may
indicate that using k = n/2 might be a better option for solving the two-sample problem than
actually trying to estimate the true divergence, as theorized in [9].
Both divergence estimates for k = 1 and k = \sqrt{n} converge to the true divergence as the number
of samples tends to infinity. The convergence of the divergence estimate for k = 1 is significantly
faster than that with k = \sqrt{n}, because p(x)/\hat p_1(x) converges much faster to its limiting
distribution than p(x)/\hat p_{\sqrt{n}}(x). p(x)/\hat p_1(x) converges faster because the nearest
neighbor of x is much closer than the \sqrt{n}-nearest-neighbor, and we need the k-nn to be close
enough to x for p(x)/\hat p_k(x) to be close to its limiting distribution. As d grows, the
divergence estimates need many more samples to converge, and even for small dimensions the number
of samples can be enormously large.
Nevertheless, we can still use this divergence estimate to assess whether two sets of samples come
from the same distribution, because the divergence estimate for p(x) = q(x) is unbiased for any k.
In Figure 2(a) we plot the divergence estimate between the threes and twos handwritten digits in
the MNIST dataset (http://yann.lecun.com/exdb/mnist/) in a 784-dimensional space. In Figure 2(a) we
plot the mean values of the divergence estimators \hat D_1(3\|2) (solid line) and \hat D_1(3\|3)
(dashed line) over 100 experiments, together with their 90% confidence intervals. For comparison
purposes we plot the MMD test from [9], in which a kernel method was proposed for solving the
two-sample problem. We use the code available at http://www.kyb.mpg.de/bs/people/arthur/mmd.htm and
use its bootstrap estimate for our comparisons. For n = 5 the error rate for the test using k = 1
is 1%, for k = \sqrt{n} it is 7%, for k = n/2 it is 43%, and for the MMD test it is 34%. For
n \ge 10 all tests reported zero error rate. It seems that the k = 1 test is more powerful than the
MMD test in this case, at least for small n. But we can see that the confidence interval for the
MMD test decreases faster than that of the test based on the divergence estimate with k = 1, and we
should expect better performance for larger n, similar to the divergence estimate with k = n/2.
[Figure 2: two panels against n; (a) the divergence estimates D(3||3) and D(3||2), (b) the maximum
mean discrepancy statistics MMD(3||3) and MMD(3||2).]
Figure 2: In (a) we plot \hat D_1(3\|2) (solid), \hat D_1(3\|3) (dashed) and their 90% confidence
intervals (dotted). In (b) we repeat the same plots using the MMD test from [9].
In the second example we compute the mutual information between y_1 and y_2, which are given by

    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
      = \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix}
        \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},    (20)

where x_1 and x_2 are independent and uniformly distributed between 0 and 1, and
\theta \in [0, \pi/4]. If \theta is zero, y_1 and y_2 are independent. Otherwise they are not
independent, but still uncorrelated for any \theta.

We carry out a test for deciding if y_1 and y_2 are independent. The test is identical to the one
described in [10], and we use the Monte Carlo resampling technique proposed in that paper with a
95% confidence interval and 1000 repetitions. In Figure 3 we report the acceptance of the null
hypothesis (y_1 and y_2 are independent) as a function of \theta for n = 100 in (a) and as a
function of n for \theta = \pi/8 in (b). We compute the mutual information with k = 1,
k = \sqrt{n} and k = n/2 for our test, and compare it to the HSIC in [10].
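The toy data of (20) can be generated in a few lines. Feeding (y_1, y_2) to the mutual-information
sketch above, inside a resampling test that shuffles one coordinate, reproduces the setup of
Figure 3; this is our illustration of the test, not the exact implementation of [10].

    import numpy as np

    def rotated_uniform(n, theta, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(size=(n, 2))          # x1, x2 ~ U[0, 1], independent
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, s], [-s, c]])       # rotation matrix of equation (20)
        y = x @ R.T                           # rows y_i = R x_i
        return y[:, :1], y[:, 1:]             # y1, y2 as column vectors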
[Figure 3: acceptance rate of H0 for the four tests (MI with k = n/2, \sqrt{n}, 1, and HSIC); panel
(a) against \theta at n = 100, panel (b) against n at \theta = \pi/8.]
Figure 3: We plot the acceptance of the null hypothesis (y_1 and y_2 are independent) for a 95%
confidence interval, in (a) as a function of \theta and in (b) as a function of n. The solid line
uses the mutual information estimate with k = n/2 and the dash-dotted line uses the HSIC. The
dashed and dotted lines, respectively, use the mutual information estimates with k = \sqrt{n} and
k = 1.
The HSIC test and the test based on the mutual information estimate with k = n/2 perform equally
well at predicting whether y_1 and y_2 are independent, while the tests based on the mutual
information estimates with k = 1 and k = \sqrt{n} clearly underperform. This example shows that if
our goal is to predict whether two random variables are independent, we are better off using HSIC
or a nonconvergent estimate of the mutual information, rather than trying to compute the mutual
information as accurately as possible. Furthermore, in our tests, computing HSIC for n = 5000 is
over 10 times more computationally costly (running time) than computing the mutual information for
k = n/2.²
As we saw in the case of the divergence estimate in Figure 1, mutual information is more accurately
estimated when k = 1, but at the cost of a higher variance. If our objective is to estimate the mutual
information (or the divergence), we should use a small value of k, ideally k = 1. However, if we are
interested in assessing whether two random variables are independent, it is better to use k = n/2,
because the variance of the estimate is much lower, even though it does not converge to the mutual
information (or the divergence).
5 Conclusions
We have proved that the estimates of the differential entropy, mutual information and divergence
based on k-nn density estimation for finite k converge almost surely, even though the density
estimate does not converge. The previous literature could only prove mean-squared consistency, and
it required imposing some constraints on the density models. The proof in this paper relies on
describing the limiting distribution of p(x)/\hat p_k(x). This limiting distribution can be easily
described using waiting-times distributions, such as the exponential or the Erlang distributions.

We have shown, experimentally, that fixing k = 1 achieves the fastest convergence rate, at the
expense of a higher variance for our estimator. The divergence, mutual information and differential
entropy estimates using k = 1 are much better than the estimates using k = \sqrt{n}, even though
for k = \sqrt{n} we can prove that \hat p_k(x) converges to p(x), while for finite k this
convergence does not occur.
Finally, if we are interested in solving the two-sample problem or assessing if two random variables
are independent, it is best to fix k to a fraction of n (we have used k = n/2 in our experiments),
although in this case the estimates do not converge to the true value. Nevertheless, their variances
are significantly lower, which allows our tests to perform better. The tests with k = n/2 perform as
well as the MMD test for solving the two sample problem and the HSIC for assessing independence.
² For computing the HSIC test we use A. Gretton's code at
http://www.kyb.mpg.de/bs/people/arthur/indep.htm, and for finding the k-nn we use the sort
function in Matlab.
Acknowledgment
Fernando Pérez-Cruz is supported by Marie Curie Fellowship 040883-AI-COM. This work was partially
funded by the Spanish government (Ministerio de Educación y Ciencia TEC2006-13514-C02-01/TCM).
References
[1] N. Anderson, P. Hall, and D. Titterington. Two-sample test statistics for measuring
discrepancies between two multivariate probability density functions using kernel-based density
estimates. Journal of Multivariate Analysis, 50(1):41-54, July 1994.
[2] F. R. Bach and M. I. Jordan. Kernel independent component analysis. JMLR, 3:1-48, 2004.
[3] K. Balakrishnan and A. P. Basu. The Exponential Distribution: Theory, Methods and
Applications. Gordon and Breach Publishers, Amsterdam, Netherlands, 1996.
[4] J. Beirlant, E. Dudewicz, L. Gyorfi, and E. van der Meulen. Nonparametric entropy estimation:
An overview. International Journal of Mathematical and Statistical Sciences, pages 17-39, 1997.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, USA, 1991.
[6] G. A. Darbellay and I. Vajda. Estimation of the information by an adaptive partitioning of the
observation space. IEEE Trans. Information Theory, 45(4):1315-1321, May 1999.
[7] R. L. Dobrushin. A simplified method for experimental estimate of the entropy of a stationary
sequence. Theory of Probability and its Applications, (4):428-430, 1958.
[8] F. Fleuret. Fast binary feature selection with conditional mutual information. JMLR,
5:1531-1555, 2004.
[9] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the
two-sample-problem. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural
Information Processing Systems 19, Cambridge, MA, 2007. MIT Press.
[10] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical
test of independence. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in
Neural Information Processing Systems 20, Cambridge, MA, 2008. MIT Press.
[11] G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. Oxford University
Press, Oxford, UK, 3rd edition, 2001.
[12] Julian Havil. Gamma: Exploring Euler's Constant. Princeton University Press, New York, USA,
2003.
[13] I. S. Dhillon, S. Mallela, and R. Kumar. A divisive information-theoretic feature clustering
algorithm for text classification. JMLR, 3:1265-1287, March 2003.
[14] Leonard Kleinrock. Queueing Systems. Volume 1: Theory. Wiley, New York, USA, 1975.
[15] L. F. Kozachenko and N. N. Leonenko. Sample estimate of the entropy of a random vector.
Problems Inform. Transmission, 23(2):95-101, April 1987.
[16] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review
E, 69(6):1-16, June 2004.
[17] S. Kullback and R. A. Leibler. On information and sufficiency. Ann. Math. Stats.,
22(1):79-86, March 1951.
[18] N. N. Leonenko, L. Pronzato, and V. Savani. A class of Renyi information estimators for
multidimensional densities. Annals of Statistics, 2007. Submitted.
[19] P. J. Moreno, P. P. Ho, and N. Vasconcelos. A Kullback-Leibler divergence based kernel for
SVM classification in multimedia applications. Technical Report HPL-2004-4, HP Laboratories, 2004.
[20] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Nonparametric estimation of the likelihood
ratio and divergence functionals. In IEEE Int. Symp. Information Theory, Nice, France, June 2007.
[21] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the
likelihood ratio by penalized convex risk minimization. In J. C. Platt, D. Koller, Y. Singer, and
S. Roweis, editors, Advances in Neural Information Processing Systems 20, Cambridge, MA, 2008. MIT
Press.
[22] L. Paninski. Estimation of entropy and mutual information. Neural Computation,
15(6):1191-1253, June 2003.
[23] C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., pages 379-423,
1948.
[24] K. Torkkola. Feature extraction by non-parametric mutual information maximization. JMLR,
3:1415-1438, 2003.
[25] Q. Wang, S. Kulkarni, and S. Verdú. Divergence estimation of continuous distributions based
on data-dependent partitions. IEEE Trans. Information Theory, 51(9):3064-3074, September 2005.
[26] Q. Wang, S. Kulkarni, and S. Verdú. A nearest-neighbor approach to estimating divergence
between continuous random vectors. In IEEE Int. Symp. Information Theory, Seattle, USA, July 2006.
2,667 | 3,418 | Exploring Large Feature Spaces
with Hierarchical Multiple Kernel Learning
Francis Bach
INRIA - Willow Project, École Normale Supérieure
45, rue d'Ulm, 75230 Paris, France
[email protected]
Abstract
For supervised and unsupervised learning, positive definite kernels allow to use
large and potentially infinite dimensional feature spaces with a computational cost
that only depends on the number of observations. This is usually done through
the penalization of predictor functions by Euclidean or Hilbertian norms. In this
paper, we explore penalizing by sparsity-inducing norms such as the ℓ1-norm or
the block ℓ1-norm. We assume that the kernel decomposes into a large sum of
individual basis kernels which can be embedded in a directed acyclic graph; we
show that it is then possible to perform kernel selection through a hierarchical
multiple kernel learning framework, in polynomial time in the number of selected
kernels. This framework is naturally applied to non linear variable selection; our
extensive simulations on synthetic datasets and datasets from the UCI repository
show that efficiently exploring the large feature space through sparsity-inducing
norms leads to state-of-the-art predictive performance.
1 Introduction
In the last two decades, kernel methods have been a prolific theoretical and algorithmic machine
learning framework. By using appropriate regularization by Hilbertian norms, representer theorems
enable to consider large and potentially infinite-dimensional feature spaces while working within an
implicit feature space no larger than the number of observations. This has led to numerous works on
kernel design adapted to specific data types and generic kernel-based algorithms for many learning
tasks (see, e.g., [1, 2]).
Regularization by sparsity-inducing norms, such as the ℓ1-norm, has also attracted a lot of interest in
recent years. While early work has focused on efficient algorithms to solve the convex optimization
problems, recent research has looked at the model selection properties and predictive performance of
such methods, in the linear case [3] or within the multiple kernel learning framework (see, e.g., [4]).
In this paper, we aim to bridge the gap between these two lines of research by trying to use ℓ1-norms
inside the feature space. Indeed, feature spaces are large and we expect the estimated predictor
function to require only a small number of features, which is exactly the situation where ℓ1-norms
have proven advantageous. This leads to two natural questions that we try to answer in this paper: (1)
Is it feasible to perform optimization in this very large feature space with cost which is polynomial
in the size of the input space? (2) Does it lead to better predictive performance and feature selection?
More precisely, we consider a positive definite kernel that can be expressed as a large sum of positive
definite basis or local kernels. This exactly corresponds to the situation where a large feature space is
the concatenation of smaller feature spaces, and we aim to do selection among these many kernels,
which may be done through multiple kernel learning. One major difficulty however is that the
number of these smaller kernels is usually exponential in the dimension of the input space and
applying multiple kernel learning directly in this decomposition would be intractable.
In order to peform selection efficiently, we make the extra assumption that these small kernels can
be embedded in a directed acyclic graph (DAG). Following [5], we consider in Section 2 a specific combination of ℓ2-norms that is adapted to the DAG, and will restrict the authorized sparsity
patterns; in our specific kernel framework, we are able to use the DAG to design an optimization
algorithm which has polynomial complexity in the number of selected kernels (Section 3). In simulations (Section 5), we focus on directed grids, where our framework allows to perform non-linear
variable selection. We provide extensive experimental validation of our novel regularization framework; in particular, we compare it to the regular ℓ2-regularization and show that it is always competitive and often leads to better performance, both on synthetic examples, and standard regression
and classification datasets from the UCI repository.
Finally, we extend in Section 4 some of the known consistency results of the Lasso and multiple kernel learning [3, 4], and give a partial answer to the model selection capabilities of our regularization
framework by giving necessary and sufficient conditions for model consistency. In particular, we
show that our framework is adapted to estimating consistently only the hull of the relevant variables.
Hence, by restricting the statistical power of our method, we gain computational efficiency.
2 Hierarchical multiple kernel learning (HKL)
We consider the problem of predicting a random variable Y ∈ 𝒴 ⊂ ℝ from a random variable X ∈ 𝒳, where 𝒳 and 𝒴 may be quite general spaces. We assume that we are given n i.i.d. observations (x_i, y_i) ∈ 𝒳 × 𝒴, i = 1, ..., n. We define the empirical risk of a function f from 𝒳 to ℝ as (1/n) Σ_{i=1}^{n} ℓ(y_i, f(x_i)), where ℓ : 𝒴 × ℝ → ℝ⁺ is a loss function. We only assume that ℓ is convex with respect to the second parameter (but not necessarily differentiable). Typical examples of loss functions are the square loss for regression, i.e., ℓ(y, ŷ) = (1/2)(y − ŷ)² for y ∈ ℝ, and the logistic loss ℓ(y, ŷ) = log(1 + e^{−yŷ}) or the hinge loss ℓ(y, ŷ) = max{0, 1 − yŷ} for binary classification, where y ∈ {−1, 1}, leading respectively to logistic regression and support vector machines [1, 2].
2.1 Graph-structured positive definite kernels
We assume that we are given a positive definite kernel k : 𝒳 × 𝒳 → ℝ, and that this kernel can be expressed as the sum, over an index set V, of basis kernels k_v, v ∈ V, i.e., for all x, x′ ∈ 𝒳, k(x, x′) = Σ_{v∈V} k_v(x, x′). For each v ∈ V, we denote by F_v and Φ_v the feature space and feature map of k_v, i.e., for all x, x′ ∈ 𝒳, k_v(x, x′) = ⟨Φ_v(x), Φ_v(x′)⟩. Throughout the paper, we denote by ‖u‖ the Hilbertian norm of u and by ⟨u, v⟩ the associated dot product, where the precise space is omitted and can always be inferred from the context.
Our sum assumption corresponds to a situation where the feature map Φ(x) and feature space F for k is the concatenation of the feature maps Φ_v(x) for each kernel k_v, i.e., F = ∏_{v∈V} F_v and Φ(x) = (Φ_v(x))_{v∈V}. Thus, looking for a certain β ∈ F and a predictor function f(x) = ⟨β, Φ(x)⟩ is equivalent to looking jointly for β_v ∈ F_v, for all v ∈ V, and f(x) = Σ_{v∈V} ⟨β_v, Φ_v(x)⟩.
As mentioned earlier, we make the assumption that the set V can be embedded into a directed acyclic graph. Directed acyclic graphs (DAGs) allow to naturally define the notions of parents, children, descendants and ancestors. Given a node w ∈ V, we denote by A(w) ⊂ V the set of its ancestors, and by D(w) ⊂ V the set of its descendants. We use the convention that any w is a descendant and an ancestor of itself, i.e., w ∈ A(w) and w ∈ D(w). Moreover, for W ⊂ V, we let denote sources(W) the set of sources of the graph G restricted to W (i.e., nodes in W with no parents belonging to W). Given a subset of nodes W ⊂ V, we can define the hull of W as the union of all ancestors of w ∈ W, i.e., hull(W) = ∪_{w∈W} A(w). Given a set W, we define the set of extreme points of W as the smallest subset T ⊂ W such that hull(T) = hull(W) (note that it is always well defined, as T = ∩_{T⊂V, hull(T)=hull(W)} T). See Figure 1 for examples of these notions.
The goal of this paper is to perform kernel selection among the kernels k_v, v ∈ V. We essentially use the graph to limit the search to specific subsets of V. Namely, instead of considering all possible subsets of active (relevant) vertices, we are only interested in estimating correctly the hull of these relevant vertices; in Section 2.2, we design a specific sparsity-inducing norm adapted to hulls.
In this paper, we primarily focus on kernels that can be expressed as "products of sums", and on the associated p-dimensional directed grids, while noting that our framework is applicable to many other kernels. Namely, we assume that the input space 𝒳 factorizes into p components 𝒳 = 𝒳_1 × ··· × 𝒳_p and that we are given p sequences of length q+1 of kernels k_{ij}(x_i, x′_i), i ∈ {1, ..., p}, j ∈ {0, ..., q},
Figure 1: Example of graph and associated notions. (Left) Example of a 2D-grid. (Middle) Example of sparsity pattern (×, in light blue) and the complement of its hull (+, in light red). (Right) Dark blue points (×) are extreme points of the set of all active points (blue ×); dark red points (+) are the sources of the set of all red points (+).
such that
k(x, x′) = Σ_{j_1,...,j_p = 0}^{q} ∏_{i=1}^{p} k_{i j_i}(x_i, x′_i) = ∏_{i=1}^{p} ( Σ_{j_i = 0}^{q} k_{i j_i}(x_i, x′_i) ).
We thus have a sum of (q+1)^p kernels, that can be computed efficiently as a product of p sums. A natural DAG on V = ∏_{i=1}^{p} {0, ..., q} is defined by connecting each (j_1, ..., j_p) to (j_1+1, j_2, ..., j_p), ..., (j_1, ..., j_{p−1}, j_p+1). As shown in Section 2.2, this DAG will correspond to the constraint of selecting a given product of kernels only after all the subproducts are selected. Those DAGs are especially suited to nonlinear variable selection, in particular with the polynomial and Gaussian kernels. In this context, products of kernels correspond to interactions between certain variables, and our DAG implies that we select an interaction only after all sub-interactions were already selected.
Polynomial kernels We consider 𝒳_i = ℝ, k_{ij}(x_i, x′_i) = C(q, j) (x_i x′_i)^j, with C(q, j) the binomial coefficient; the full kernel is then equal to k(x, x′) = ∏_{i=1}^{p} Σ_{j=0}^{q} C(q, j) (x_i x′_i)^j = ∏_{i=1}^{p} (1 + x_i x′_i)^q. Note that this is not exactly the usual polynomial kernel (whose feature space is the space of multivariate polynomials of total degree less than q), since our kernel considers polynomials of maximal degree q.
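To make the "product of sums" computation concrete, here is a small sketch (our own, under the polynomial decomposition just described) showing that the sum of all (q+1)^p basis kernels is obtained from p sums of q+1 terms, without ever enumerating the grid.

```python
import numpy as np
from math import comb

def grid_kernel(x, y, base_kernels):
    """Sum over the whole grid of prod_i k_{i, j_i}(x_i, y_i), computed as a
    product of p sums; base_kernels[i][j] is the scalar kernel k_{ij}."""
    value = 1.0
    for i, kernels_i in enumerate(base_kernels):
        value *= sum(k(x[i], y[i]) for k in kernels_i)
    return value

q, p = 4, 3
base = [[(lambda s, t, j=j: comb(q, j) * (s * t) ** j) for j in range(q + 1)]
        for _ in range(p)]
x, y = np.random.randn(p), np.random.randn(p)
# For this decomposition the product of sums collapses to prod_i (1 + x_i y_i)^q.
assert np.isclose(grid_kernel(x, y, base), np.prod((1.0 + x * y) ** q))
```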
Gaussian kernels We also consider 𝒳_i = ℝ, and the Gaussian-RBF kernel e^{−b(x−x′)²}. The following decomposition is the eigendecomposition of the non centered covariance operator for a normal distribution with variance 1/4a (see, e.g., [6]):
e^{−b(x−x′)²} = Σ_{k=0}^{∞} (b/A)^k / (2^k k!) [e^{−(b/A)(a+c) x²} H_k(√(2c) x)] [e^{−(b/A)(a+c) (x′)²} H_k(√(2c) x′)],
where c² = a² + 2ab, A = a + b + c, and H_k is the k-th Hermite polynomial. By appropriately truncating the sum, i.e., by considering that the first q basis kernels are obtained from the first q single Hermite polynomials, and the (q+1)-th kernel is summing over all other kernels, we obtain a decomposition of a uni-dimensional Gaussian kernel into q+1 components (q of them are one-dimensional, the last one is infinite-dimensional, but can be computed by differencing). The decomposition ends up being close to a polynomial kernel of infinite degree, modulated by an exponential [2]. One may also use an adaptive decomposition using kernel PCA (see, e.g., [2, 1]), which is equivalent to using the eigenvectors of the empirical covariance operator associated with the data (and not the population one associated with the Gaussian distribution with same variance). In simulations, we tried both with no significant differences.
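The truncation can be checked numerically. The sketch below (our own) evaluates the first q terms with the Hermite recurrence and compares against e^{−b(x−x′)²}; the overall k-independent normalization constant √(1 − (b/A)²), which follows from Mehler's formula, is made explicit here because the scaling convention of the basis functions is not fixed by the text above.

```python
import numpy as np

def hermite(k, t):
    """Physicists' Hermite polynomial H_k(t) via the recurrence
    H_0 = 1, H_1 = 2t, H_{n+1} = 2t H_n - 2n H_{n-1}."""
    h_prev, h = np.ones_like(t), 2.0 * t
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, 2.0 * t * h - 2.0 * n * h_prev
    return h

def truncated_gaussian_kernel(x, y, a, b, q):
    """Sum of the first q basis kernels of the decomposition above."""
    c = np.sqrt(a * a + 2.0 * a * b)
    A = a + b + c
    total, fact = 0.0, 1.0
    for k in range(q):
        if k > 0:
            fact *= k                         # running factorial k!
        coeff = (b / A) ** k / (2.0 ** k * fact)
        phi_x = np.exp(-(b / A) * (a + c) * x ** 2) * hermite(k, np.sqrt(2.0 * c) * x)
        phi_y = np.exp(-(b / A) * (a + c) * y ** 2) * hermite(k, np.sqrt(2.0 * c) * y)
        total += coeff * phi_x * phi_y
    return np.sqrt(1.0 - (b / A) ** 2) * total

x, y, a, b = 0.3, -0.5, 0.25, 1.0
print(abs(np.exp(-b * (x - y) ** 2) - truncated_gaussian_kernel(x, y, a, b, q=30)))
```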
ANOVA kernels When q = 1, the directed grid is isomorphic to the power set (i.e., the set of subsets) with the inclusion DAG. In this setting, we can decompose the ANOVA kernel [2] as
∏_{i=1}^{p} (1 + e^{−b(x_i−x′_i)²}) = Σ_{J⊂{1,...,p}} ∏_{i∈J} e^{−b(x_i−x′_i)²} = Σ_{J⊂{1,...,p}} e^{−b‖x_J−x′_J‖₂²},
and our framework will select the relevant subsets for the Gaussian kernels.
Kernels or features? In this paper, we emphasize the kernel view, i.e., we are given a kernel (and
thus a feature space) and we explore it using ℓ1-norms. Alternatively, we could use the feature view,
i.e., we have a large structured set of features that we try to select from; however, the techniques
developed in this paper assume that (a) each feature might be infinite-dimensional and (b) that we
can sum all the local kernels efficiently (see in particular Section 3.2). Following the kernel view
thus seems slightly more natural.
2.2 Graph-based structured regularization
Given β ∈ ∏_{v∈V} F_v, the natural Hilbertian norm ‖β‖ is defined through ‖β‖² = Σ_{v∈V} ‖β_v‖². Penalizing with this norm is efficient because summing all kernels k_v is assumed feasible in polynomial time and we can bring to bear the usual kernel machinery; however, it does not lead to sparse solutions, where many β_v will be exactly equal to zero.
As said earlier, we are only interested in the hull of the selected elements β_v ∈ F_v, v ∈ V; the hull of a set I is characterized by the set of v such that D(v) ⊂ I^c, i.e., such that all descendants of v are in the complement I^c: hull(I) = {v ∈ V, D(v) ⊂ I^c}^c. Thus, if we try to estimate hull(I), we need to determine which v ∈ V are such that D(v) ⊂ I^c. In our context, we are hence looking at selecting vertices v ∈ V for which β_{D(v)} = (β_w)_{w∈D(v)} = 0.
We thus consider the following structured block ℓ1-norm defined as
Σ_{v∈V} d_v ‖β_{D(v)}‖ = Σ_{v∈V} d_v ( Σ_{w∈D(v)} ‖β_w‖² )^{1/2},
where (d_v)_{v∈V} are positive weights. Penalizing by such a norm will indeed impose that some of the vectors β_{D(v)} ∈ ∏_{w∈D(v)} F_w are exactly zero. We thus consider the following minimization problem¹:
min_{β ∈ ∏_{v∈V} F_v}  (1/n) Σ_{i=1}^{n} ℓ(y_i, Σ_{v∈V} ⟨β_v, Φ_v(x_i)⟩) + (λ/2) ( Σ_{v∈V} d_v ‖β_{D(v)}‖ )².   (1)
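For illustration, the regularizer of Eq. (1) can be evaluated directly from the DAG. The following sketch (our own, with child links as a dictionary and each β_v stored as a NumPy vector) computes Ω(β) = Σ_v d_v ‖β_{D(v)}‖.

```python
import numpy as np

def descendants(children, v):
    """D(v): v together with every node reachable through child links."""
    seen, stack = set(), [v]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(children.get(w, []))
    return seen

def structured_norm(beta, d, children):
    """Omega(beta) = sum_v d_v * sqrt(sum_{w in D(v)} ||beta_w||^2)."""
    sq = {w: float(np.dot(beta[w], beta[w])) for w in beta}
    return sum(d[v] * np.sqrt(sum(sq[w] for w in descendants(children, v)))
               for v in beta)

# Tiny chain a -> b: penalizes beta_{D(a)} = (beta_a, beta_b) and beta_{D(b)}.
children = {"a": ["b"], "b": []}
beta = {"a": np.array([1.0, 2.0]), "b": np.array([3.0])}
d = {"a": 1.0, "b": 1.0}
print(structured_norm(beta, d, children))  # sqrt(14) + 3
```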
Our Hilbertian norm is a Hilbert space instantiation of the hierarchical norms recently introduced
by [5] and also considered by [7] in the MKL setting. If all Hilbert spaces are finite dimensional, our
particular choice of norms corresponds to an "ℓ1-norm of ℓ2-norms". While with uni-dimensional groups/kernels, the "ℓ1-norm of ℓ∞-norms" allows an efficient path algorithm for the square loss and when the DAG is a tree [5], this is not possible anymore with groups of size larger than one, or when the DAG is not a tree. In Section 3, we propose a novel algorithm to solve the associated
optimization problem in time polynomial in the number of selected groups/kernels, for all group
sizes, DAGs and losses. Moreover, in Section 4, we show under which conditions a solution to the
problem in Eq. (1) consistently estimates the hull of the sparsity pattern.
Finally, note that in certain settings (finite dimensional Hilbert spaces and distributions with absolutely continuous densities), these norms have the effect of selecting a given kernel only after all of
its ancestors [5]. This is another explanation why hulls end up being selected, since to include a
given vertex in the models, the entire set of ancestors must also be selected.
3 Optimization problem
In this section, we give optimality conditions for the problems in Eq. (1), as well as optimization
algorithms with polynomial time complexity in the number of selected kernels. In simulations we
consider total numbers of kernels larger than 10^30, and thus such efficient algorithms are essential
to the success of hierarchical multiple kernel learning (HKL).
3.1 Reformulation in terms of multiple kernel learning
Following [8, 9], we can simply derive an equivalent formulation of Eq. (1). Using Cauchy-Schwarz inequality, we have that for all η ∈ ℝ^V such that η ≥ 0 and Σ_{v∈V} d_v² η_v ≤ 1,
( Σ_{v∈V} d_v ‖β_{D(v)}‖ )² ≤ Σ_{v∈V} ‖β_{D(v)}‖² / η_v = Σ_{w∈V} ( Σ_{v∈A(w)} η_v^{−1} ) ‖β_w‖²,
with equality if and only if η_v = d_v^{−1} ‖β_{D(v)}‖ ( Σ_{v∈V} d_v ‖β_{D(v)}‖ )^{−1}. We associate to the vector η ∈ ℝ^V the vector ζ ∈ ℝ^V such that ∀w ∈ V, ζ_w^{−1} = Σ_{v∈A(w)} η_v^{−1}. We use the natural convention that if η_v is equal to zero, then ζ_w is equal to zero for all descendants w of v. We let denote H the set of allowed η and Z the set of all associated ζ. The sets H and Z are in bijection, and we can interchangeably use η ∈ H or the corresponding ζ(η) ∈ Z. Note that Z is in general not convex² (unless the DAG is a tree, see [10]), and if ζ ∈ Z, then ζ_w ≤ ζ_v for all w ∈ D(v), i.e., weights of descendant kernels are smaller, which is consistent with the known fact that kernels should always be selected after all their ancestors.
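The map η ↦ ζ can be written down directly. This sketch (our own, reusing the `ancestors` helper from the earlier graph sketch) implements ζ_w^{−1} = Σ_{v∈A(w)} η_v^{−1} together with the zero convention stated above.

```python
def zeta_from_eta(eta, parents):
    """zeta_w = 1 / sum_{v in A(w)} 1/eta_v; zeta_w = 0 whenever some
    ancestor v of w has eta_v = 0 (the convention used in the text)."""
    zeta = {}
    for w in eta:
        anc = ancestors(parents, w)
        if any(eta[v] == 0.0 for v in anc):
            zeta[w] = 0.0
        else:
            zeta[w] = 1.0 / sum(1.0 / eta[v] for v in anc)
    return zeta
```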
The problem in Eq. (1) is thus equivalent to
min_{η∈H} min_{β ∈ ∏_{v∈V} F_v}  (1/n) Σ_{i=1}^{n} ℓ(y_i, Σ_{v∈V} ⟨β_v, Φ_v(x_i)⟩) + (λ/2) Σ_{w∈V} ζ_w(η)^{−1} ‖β_w‖².   (2)
Using the change of variable β̃_v = β_v ζ_v^{−1/2} and Φ̃(x) = (ζ_v^{1/2} Φ_v(x))_{v∈V}, this implies that given the optimal η (and associated ζ), β corresponds to the solution of the regular supervised learning problem with kernel matrix K = Σ_{w∈V} ζ_w K_w, where K_w is the n × n kernel matrix associated with kernel k_w. Moreover, the solution is then β_w = ζ_w Σ_{i=1}^{n} α_i Φ_w(x_i), where α ∈ ℝ^n are the dual parameters associated with the single kernel learning problem.
¹ We consider the square of the norm, which does not change the regularization properties, but allows simple links with multiple kernel learning.
² Although Z is not convex, we can still maximize positive linear combinations over Z, which is the only needed operation (see [10] for details).
Thus, the solution is entirely determined by α ∈ ℝ^n and η ∈ ℝ^V (and its corresponding ζ ∈ ℝ^V). More precisely, we have (see proof in [10]):
Proposition 1 The pair (α, η) is optimal for Eq. (1), with ∀w, β_w = ζ_w Σ_{i=1}^{n} α_i Φ_w(x_i), if and only if (a) given η, α is optimal for the single kernel learning problem with kernel matrix K = Σ_{w∈V} ζ_w(η) K_w, and (b) given α, η ∈ H maximizes Σ_{w∈V} ( Σ_{v∈A(w)} η_v^{−1} )^{−1} α^⊤ K_w α.
Moreover, the total duality gap can be upper-bounded as the sum of the two separate duality gaps for the two optimization problems, which will be useful in Section 3.2 (see [10] for more details). Note that in the case of "flat" regular multiple kernel learning, where the DAG has no edges, we obtain back usual optimality conditions [8, 9].
Following a common practice for convex sparsity problems [11], we will try to solve a small problem where we assume we know the set of v such that ‖β_{D(v)}‖ is equal to zero (Section 3.3). We then "simply" need to check that variables in that set may indeed be left out of the solution. In the next section, we show that this can be done in polynomial time although the number of kernels to consider leaving out is exponential (Section 3.2).
3.2 Conditions for global optimality of reduced problem
We let denote J the complement of the set of norms which are set to zero. We thus consider the optimal solution β of the reduced problem (on J), namely,
min_{β_J ∈ ∏_{v∈J} F_v}  (1/n) Σ_{i=1}^{n} ℓ(y_i, Σ_{v∈J} ⟨β_v, Φ_v(x_i)⟩) + (λ/2) ( Σ_{v∈V} d_v ‖β_{D(v)∩J}‖ )²,   (3)
with optimal primal variables β_J, dual variables α and optimal pair (η_J, ζ_J). We now consider necessary conditions and sufficient conditions for this solution (augmented with zeros for non active variables, i.e., variables in J^c) to be optimal with respect to the full problem in Eq. (1). We denote by δ = Σ_{v∈J} d_v ‖β_{D(v)∩J}‖ the optimal value of the norm for the reduced problem.
Proposition 2 (N_J) If the reduced solution is optimal for the full problem in Eq. (1) and all kernels in the extreme points of J are active, then we have max_{t∈sources(J^c)} α^⊤ K_t α / d_t² ≤ δ².
Proposition 3 (S_{J,ε}) If max_{t∈sources(J^c)} Σ_{w∈D(t)} α^⊤ K_w α / ( Σ_{v∈A(w)∩D(t)} d_v )² ≤ δ² + ε/λ, then the total duality gap is less than ε.
The proof is fairly technical and can be found in [10]; this result constitutes the main technical contribution of the paper: it essentially allows to solve a very large optimization problem over exponentially many dimensions in polynomial time.
The necessary condition (N_J) does not cause any computational problems. However, the sufficient condition (S_{J,ε}) requires to sum over all descendants of the active kernels, which is impossible in practice (as shown in Section 5, we consider V of cardinality often greater than 10^30). Here, we need to bring to bear the specific structure of the kernel k. In the context of directed grids we consider in this paper, if d_v can also be decomposed as a product, then Σ_{v∈A(w)∩D(t)} d_v is also factorized, and we can compute the sum over all w ∈ D(t) in linear time in p. Moreover we can cache the sums Σ_{w∈D(t)} K_w / ( Σ_{v∈A(w)∩D(t)} d_v )² in order to save running time.
3.3 Dual optimization for reduced or small problems
When kernels k_v, v ∈ V have low-dimensional feature spaces, we may use a primal representation and solve the problem in Eq. (1) using generic optimization toolboxes adapted to conic constraints (see, e.g., [12]). However, in order to reuse existing optimized supervised learning code and use high-dimensional kernels, it is preferable to use a dual optimization. Namely, we use the same technique as [8]: we consider for ζ ∈ Z, the function B(ζ) = min_{β ∈ ∏_{v∈V} F_v} (1/n) Σ_{i=1}^{n} ℓ(y_i, Σ_{v∈V} ⟨β_v, Φ_v(x_i)⟩) + (λ/2) Σ_{w∈V} ζ_w^{−1} ‖β_w‖², which is the optimal value of the single kernel learning problem with kernel matrix Σ_{w∈V} ζ_w K_w. Solving Eq. (2) is equivalent to minimizing B(ζ(η)) with respect to η ∈ H.
If a ridge (i.e., a positive diagonal) is added to the kernel matrices, the function B is differentiable [8]. Moreover, the function η ↦ ζ(η) is differentiable on (ℝ₊*)^V. Thus, the function η ↦ B[ζ((1−ε)η + (ε/|V|) d^{−2})], where d^{−2} is the vector with elements d_v^{−2}, is differentiable if ε > 0. We can then use the same projected gradient descent strategy as [8] to minimize it. The overall complexity of the algorithm is then proportional to O(|V| n²) (to form the kernel matrices) plus the complexity of solving a single kernel learning problem, typically between O(n²) and O(n³). Note that this algorithm is only used for small reduced subproblems for which V has small cardinality.
3.4 Kernel search algorithm
We are now ready to present the detailed algorithm which extends the feature search algorithm of [11]. Note that the kernel matrices are never all needed explicitly, i.e., we only need them (a) explicitly to solve the small problems (but we need only a few of those) and (b) implicitly to compute the sufficient condition (S_{J,ε}), which requires to sum over all kernels, as shown in Section 3.2.
• Input: kernel matrices K_v ∈ ℝ^{n×n}, v ∈ V, maximal gap ε, maximal number of kernels Q
• Algorithm
  1. Initialization: set J = sources(V), compute (α, η) solutions of Eq. (3), obtained using Section 3.3
  2. While (N_J) and (S_{J,ε}) are not satisfied and #(V) ≤ Q
     – If (N_J) is not satisfied, add violating variables in sources(J^c) to J;
       else, add violating variables in sources(J^c) of (S_{J,ε}) to J
     – Recompute (α, η) optimal solutions of Eq. (3)
• Output: J, α, η
The previous algorithm will stop either when the duality gap is less than ε or when the maximal number of kernels Q has been reached. In practice, when the weights d_v increase with the depth of v in the DAG (which we use in simulations), the small duality gap generally occurs before we reach a problem larger than Q. Note that some of the iterations only increase the size of the active sets to check the sufficient condition for optimality; forgetting those does not change the solution, only the fact that we may actually know that we have an ε-optimal solution.
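The loop structure can be summarized in a few lines. The sketch below (our own) keeps the control flow of the algorithm box, with `solve_reduced`, `necessary_violators` and `sufficient_violators` as hypothetical callbacks standing in for the reduced solver of Eq. (3) and the tests (N_J) and (S_{J,ε}).

```python
def hkl_active_set(initial_sources, solve_reduced, necessary_violators,
                   sufficient_violators, max_kernels):
    """Grow the active set J until both optimality checks pass or the
    kernel budget is exhausted; returns the last reduced solution."""
    J = set(initial_sources)
    alpha, eta = solve_reduced(J)
    while len(J) <= max_kernels:
        violated = necessary_violators(J, alpha, eta)   # subset of sources(J^c)
        if not violated:
            violated = sufficient_violators(J, alpha, eta)
            if not violated:
                break                                   # (N_J) and (S_{J,eps}) hold
        J |= set(violated)
        alpha, eta = solve_reduced(J)
    return J, alpha, eta
```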
In order to obtain a polynomial complexity, the maximal out-degree of the DAG (i.e., the maximal number of children of any given node) should be polynomial as well. Indeed, for the directed p-grid (with maximum out-degree equal to p), the total running time complexity is a function of the number of observations n, and the number R of selected kernels; with proper caching, we obtain the following complexity, assuming O(n³) for the single kernel learning problem, which is conservative: O(n³R + n²Rp² + n²R²p), which decomposes into solving O(R) single kernel learning problems, caching O(Rp) kernels, and computing O(R²p) quadratic forms for the sufficient conditions.
4 Consistency conditions
As said earlier, the sparsity pattern of the solution of Eq. (1) will be equal to its hull, and thus we can only hope to obtain consistency of the hull of the pattern, which we consider in this section. For simplicity, we consider the case of finite dimensional Hilbert spaces (i.e., F_v = ℝ^{f_v}) and the square loss. We also hold fixed the vertex set of V, i.e., we assume that the total number of features is fixed, and we let n tend to infinity and λ = λ_n decrease with n.
Following [4], we make the following assumptions on the underlying joint distribution of (X, Y): (a) the joint covariance matrix Σ of (Φ_v(x))_{v∈V} (defined with appropriate blocks of size f_v × f_w) is invertible, (b) E(Y|X) = Σ_{w∈W} ⟨β_w, Φ_w(x)⟩ with W ⊂ V and var(Y|X) = σ² > 0 almost surely. With these simple assumptions, we obtain (see proof in [10]):
Proposition 4 (Sufficient condition) If
max_{t∈sources(W^c)} Σ_{w∈D(t)} ‖Σ_{wW} Σ_{WW}^{−1} Diag(d_v ‖β_{D(v)}‖^{−1})_{v∈W} β_W‖² / ( Σ_{v∈A(w)∩D(t)} d_v )² < 1,
then β and the hull of W are consistently estimated when λ_n n^{1/2} → ∞ and λ_n → 0.
Proposition 5 (Necessary condition) If β and the hull of W are consistently estimated for some sequence λ_n, then max_{t∈sources(W^c)} ‖Σ_{tW} Σ_{WW}^{−1} Diag(d_v / ‖β_{D(v)}‖)_{v∈W} β_W‖² / d_t² ≤ 1.
Note that the last two propositions are not consequences of the similar results for flat MKL [4],
because the groups that we consider are overlapping. Moreover, the last propositions show that we
indeed can estimate the correct hull of the sparsity pattern if the sufficient condition is satisfied. In
particular, if we can make the groups such that the between-group correlation is as small as possible,
[Figure 2 here (image not reproduced): two plots of test set error versus log₂(p) for HKL, greedy, and L2.]
Figure 2: Comparison on synthetic examples: mean squared error over 40 replications (with halved standard deviations). Left: non rotated data, right: rotated data. See text for details.
| dataset      | n    | p  | k    | #(V)   | L2        | greedy     | lasso-α    | MKL       | HKL       |
|--------------|------|----|------|--------|-----------|------------|------------|-----------|-----------|
| abalone      | 4177 | 10 | pol4 | ≈10^7  | 44.2±1.3  | 43.9±1.4   | 47.9±0.7   | 44.5±1.1  | 43.3±1.0  |
| abalone      | 4177 | 10 | rbf  | ≈10^10 | 43.0±0.9  | 45.0±1.7   | 49.0±1.7   | 43.7±1.0  | 43.0±1.1  |
| bank-32fh    | 8192 | 32 | pol4 | ≈10^22 | 40.1±0.7  | 39.2±0.8   | 41.3±0.7   | 38.7±0.7  | 38.9±0.7  |
| bank-32fh    | 8192 | 32 | rbf  | ≈10^31 | 39.0±0.7  | 39.7±0.7   | 66.1±6.9   | 38.4±0.7  | 38.4±0.7  |
| bank-32fm    | 8192 | 32 | pol4 | ≈10^22 | 6.0±0.1   | 5.0±0.2    | 7.0±0.2    | 6.1±0.3   | 5.1±0.1   |
| bank-32fm    | 8192 | 32 | rbf  | ≈10^31 | 5.7±0.2   | 5.8±0.4    | 36.3±4.1   | 5.9±0.2   | 4.6±0.2   |
| bank-32nh    | 8192 | 32 | pol4 | ≈10^22 | 44.3±1.2  | 46.3±1.4   | 45.8±0.8   | 46.0±1.2  | 43.6±1.1  |
| bank-32nh    | 8192 | 32 | rbf  | ≈10^31 | 44.3±1.2  | 49.4±1.6   | 93.0±2.8   | 46.1±1.1  | 43.5±1.0  |
| bank-32nm    | 8192 | 32 | pol4 | ≈10^22 | 17.2±0.6  | 18.2±0.8   | 19.5±0.4   | 21.0±0.7  | 16.8±0.6  |
| bank-32nm    | 8192 | 32 | rbf  | ≈10^31 | 16.9±0.6  | 21.0±0.6   | 62.3±2.5   | 20.9±0.7  | 16.4±0.6  |
| boston       | 506  | 13 | pol4 | ≈10^9  | 17.1±3.6  | 24.7±10.8  | 29.3±2.3   | 22.2±2.2  | 18.1±3.8  |
| boston       | 506  | 13 | rbf  | ≈10^12 | 16.4±4.0  | 32.4±8.2   | 29.4±1.6   | 20.7±2.1  | 17.1±4.7  |
| pumadyn-32fh | 8192 | 32 | pol4 | ≈10^22 | 57.3±0.7  | 56.4±0.8   | 57.5±0.4   | 56.4±0.7  | 56.4±0.8  |
| pumadyn-32fh | 8192 | 32 | rbf  | ≈10^31 | 57.7±0.6  | 72.2±22.5  | 89.3±2.0   | 56.5±0.8  | 55.7±0.7  |
| pumadyn-32fm | 8192 | 32 | pol4 | ≈10^22 | 6.9±0.1   | 6.4±1.6    | 7.5±0.2    | 7.0±0.1   | 3.1±0.0   |
| pumadyn-32fm | 8192 | 32 | rbf  | ≈10^31 | 5.0±0.1   | 46.2±51.6  | 44.7±5.7   | 7.1±0.1   | 3.4±0.0   |
| pumadyn-32nh | 8192 | 32 | pol4 | ≈10^22 | 84.2±1.3  | 73.3±25.4  | 84.8±0.5   | 83.6±1.3  | 36.7±0.4  |
| pumadyn-32nh | 8192 | 32 | rbf  | ≈10^31 | 56.5±1.1  | 81.3±25.0  | 98.1±0.7   | 83.7±1.3  | 35.5±0.5  |
| pumadyn-32nm | 8192 | 32 | pol4 | ≈10^22 | 60.1±1.9  | 69.9±32.8  | 78.5±1.1   | 77.5±0.9  | 5.5±0.1   |
| pumadyn-32nm | 8192 | 32 | rbf  | ≈10^31 | 15.7±0.4  | 67.3±42.4  | 95.9±1.9   | 77.6±0.9  | 7.2±0.1   |

Table 1: Mean squared errors (multiplied by 100) on UCI regression datasets, normalized so that the total variance to explain is 100. See text for details.
we can ensure correct hull selection. Finally, it is worth noting that if the ratios d_w / max_{v∈A(w)} d_v tend to infinity slowly with n, then we always consistently estimate the depth of the hull, i.e., the optimal interaction complexity. We are currently investigating extensions to the non parametric case [4], in terms of pattern selection and universal consistency.
5 Simulations
Synthetic examples We generated regression data as follows: n = 1024 samples of p ∈ [2², 2⁷] variables were generated from a random covariance matrix, and the label y ∈ ℝ was sampled as a
random sparse fourth order polynomial of the input variables (with constant number of monomials).
We then compare the performance of our hierarchical multiple kernel learning method (HKL) with
the polynomial kernel decomposition presented in Section 2 to other methods that use the same
kernel and/or decomposition: (a) the greedy strategy of selecting basis kernels one after the other, a
procedure similar to [13], and (b) the regular polynomial kernel regularization with the full kernel
(i.e., the sum of all basis kernels). In Figure 2, we compare the two approaches on 40 replications in
the following two situations: original data (left) and rotated data (right), i.e., after the input variables
were transformed by a random rotation (in this situation, the generating polynomial is not sparse
anymore). We can see that in situations where the underlying predictor function is sparse (left),
HKL outperforms the two other methods when the total number of variables p increases, while in
the other situation where the best predictor is not sparse (right), it performs only slightly better: i.e.,
in non sparse problems, ℓ1-norms do not really help, but do help a lot when sparsity is expected.
UCI datasets For regression datasets, we compare HKL with polynomial (degree 4) and Gaussian-RBF kernels (each dimension decomposed into 9 kernels) to the following approaches with the same
| dataset   | n    | p   | k    | #(V)    | L2        | greedy    | HKL       |
|-----------|------|-----|------|---------|-----------|-----------|-----------|
| mushrooms | 1024 | 117 | pol4 | ≈10^82  | 0.4±0.4   | 0.1±0.1   | 0.1±0.2   |
| mushrooms | 1024 | 117 | rbf  | ≈10^112 | 0.1±0.2   | 0.1±0.2   | 0.1±0.2   |
| ringnorm  | 1024 | 20  | pol4 | ≈10^14  | 3.8±1.1   | 5.9±1.3   | 2.0±0.3   |
| ringnorm  | 1024 | 20  | rbf  | ≈10^19  | 1.2±0.4   | 2.4±0.5   | 1.6±0.4   |
| spambase  | 1024 | 57  | pol4 | ≈10^40  | 8.3±1.0   | 9.7±1.8   | 8.1±0.7   |
| spambase  | 1024 | 57  | rbf  | ≈10^54  | 9.4±1.3   | 10.6±1.7  | 8.4±1.0   |
| twonorm   | 1024 | 20  | pol4 | ≈10^14  | 2.9±0.5   | 4.7±0.5   | 3.2±0.6   |
| twonorm   | 1024 | 20  | rbf  | ≈10^19  | 2.8±0.6   | 5.1±0.7   | 3.2±0.6   |
| magic04   | 1024 | 10  | pol4 | ≈10^7   | 15.9±1.0  | 16.0±1.6  | 15.6±0.8  |
| magic04   | 1024 | 10  | rbf  | ≈10^10  | 15.7±0.9  | 17.7±1.3  | 15.6±0.9  |

Table 2: Error rates (multiplied by 100) on UCI binary classification datasets. See text for details.
kernel: regular Hilbertian regularization (L2), same greedy approach as earlier (greedy), regularization by the ℓ1-norm directly on the vector α, a strategy which is sometimes used in the context of sparse kernel learning [14] but does not use the Hilbertian structure of the kernel (lasso-α), multiple kernel learning with the p kernels obtained by summing all kernels associated with a single variable (MKL). For all methods, the kernels were held fixed, while in Table 1, we report the performance for the best regularization parameters obtained by 10 random half splits.
We can see from Table 1 that HKL outperforms other methods, in particular for the datasets bank-32nm, bank-32nh, pumadyn-32nm, and pumadyn-32nh, which are datasets dedicated to non-linear regression. Note also that we efficiently explore DAGs with very large numbers of vertices #(V).
For binary classification datasets, we compare HKL (with the logistic loss) to two other methods (L2,
greedy) in Table 2. For some datasets (e.g., spambase), HKL works better, but for some others, in
particular when the generating problem is known to be non sparse (ringnorm, twonorm), it performs
slightly worse than other approaches.
6 Conclusion
We have shown how to perform hierarchical multiple kernel learning (HKL) in polynomial time in
the number of selected kernels. This framework may be applied to many positive definite kernels
and we have focused on polynomial and Gaussian kernels used for nonlinear variable selection.
In particular, this paper shows that trying to use ℓ1-type penalties may be advantageous inside the
feature space. We are currently investigating applications to string and graph kernels [2].
References
[1] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
[2] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Camb. U. P., 2004.
[3] P. Zhao and B. Yu. On model selection consistency of Lasso. JMLR, 7:2541–2563, 2006.
[4] F. Bach. Consistency of the group Lasso and multiple kernel learning. JMLR, 9:1179–1225, 2008.
[5] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Ann. Stat., To appear, 2008.
[6] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In Proc. ICML, 2000.
[7] M. Szafranski, Y. Grandvalet, and A. Rakotomamonjy. Composite kernel learning. In Proc. ICML, 2008.
[8] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. JMLR, 9:2491–2521, 2008.
[9] M. Pontil and C. A. Micchelli. Learning the kernel function via regularization. JMLR, 6:1099–1125, 2005.
[10] F. Bach. Exploring large feature spaces with hierarchical MKL. Technical Report 00319660, HAL, 2008.
[11] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient sparse coding algorithms. In NIPS, 2007.
[12] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
[13] K. Bennett, M. Momma, and J. Embrechts. MARK: A boosting algorithm for heterogeneous kernel models. In Proc. SIGKDD, 2002.
| 3418 |@word repository:2 middle:1 momma:1 polynomial:25 norm:31 advantageous:2 seems:1 hu:1 simulation:6 tried:1 decomposition:7 covariance:4 selecting:4 ecole:1 outperforms:2 existing:1 spambase:3 magic04:2 mushroom:2 attracted:1 must:1 j1:4 maxv:1 greedy:8 selected:12 half:1 recompute:1 boosting:1 node:4 bijection:1 org:1 hermite:2 c2:1 replication:2 descendant:7 inside:2 forgetting:1 expected:1 indeed:5 decomposed:2 cache:1 considering:2 cardinality:1 project:1 estimating:2 moreover:7 underlying:2 maximizes:1 factorized:1 string:1 developed:1 nj:4 exactly:5 preferable:1 k2:7 classifier:1 appear:1 positive:9 before:1 local:2 xv:1 limit:1 consequence:1 path:1 simplemkl:1 inria:1 might:1 plus:1 initialization:1 ringnorm:3 directed:9 union:1 block:3 definite:6 practice:3 procedure:1 pontil:1 universal:1 empirical:2 composite:2 boyd:1 regular:5 close:1 selection:15 operator:2 risk:1 applying:1 context:5 impossible:1 equivalent:5 map:3 szafranski:1 roth:1 williams:1 truncating:1 convex:6 focused:2 simplicity:1 vandenberghe:1 dw:1 rocha:1 population:1 notion:3 ulm:1 pa:1 element:2 associate:1 decrease:1 mentioned:1 complexity:8 cristianini:1 mine:1 solving:3 predictive:3 efficiency:1 basis:6 joint:2 univ:1 quite:1 whose:1 larger:4 solve:6 jointly:1 itself:1 sequence:2 differentiable:4 propose:1 interaction:4 product:6 maximal:6 j2:1 uci:5 relevant:4 inducing:4 kv:9 olkopf:1 parent:2 generating:2 rotated:3 help:2 derive:1 stat:1 eq:12 implies:2 convention:2 correct:2 hull:23 centered:1 enable:1 require:1 really:1 decompose:1 proposition:7 exploring:3 extension:1 hold:1 considered:1 normal:1 algorithmic:1 major:1 early:1 smallest:1 omitted:1 a2:1 fh:4 proc:3 applicable:1 label:1 currently:2 bridge:1 schwarz:1 grouped:1 qv:3 minimization:1 hope:1 mit:1 always:5 gaussian:7 aim:2 normale:1 pn:5 caching:2 factorizes:1 focus:2 consistently:5 check:2 hk:3 seeger:1 sigkdd:1 entire:1 typically:1 ancestor:7 willow:1 france:1 interested:2 transformed:1 overall:1 among:2 classification:4 dual:4 hilbertian:7 art:1 fairly:1 equal:7 never:1 ng:1 wtas:1 kw:6 yu:2 unsupervised:1 constitutes:1 representer:1 problem1:1 icml:2 report:2 others:1 prolific:1 primarily:1 cardinal:1 few:1 individual:1 n1:3 ab:1 interest:1 extreme:3 upperbounded:1 light:2 primal:2 held:1 kt:1 edge:1 partial:1 necessary:4 machinery:1 unless:1 tree:3 euclidean:1 taylor:1 theoretical:1 kij:2 earlier:4 cost:2 rakotomamonjy:2 vertex:6 subset:6 deviation:1 monomials:1 predictor:4 answer:2 synthetic:4 density:2 twonorm:3 lee:1 invertible:1 connecting:1 pumadyn:10 squared:2 satisfied:3 nm:5 slowly:1 worse:1 zhao:2 leading:1 coding:1 explicitly:2 depends:1 vi:1 try:4 lot:2 view:3 francis:2 sup:1 competitive:1 red:3 reached:1 capability:1 contribution:1 minimize:1 square:4 ni:1 variance:3 efficiently:5 correspond:2 worth:1 explain:1 reach:1 naturally:2 associated:11 proof:3 gain:1 stop:1 dataset:2 sampled:1 hilbert:4 actually:1 back:1 dt:1 supervised:3 violating:2 formulation:1 done:3 implicit:1 smola:1 correlation:1 working:1 nonlinear:2 overlapping:1 mkl:5 logistic:3 hal:1 effect:2 k22:1 normalized:1 regularization:12 hence:2 equality:1 wp:1 d2t:1 interchangeably:1 abalone:2 generalized:1 trying:2 ridge:1 performs:2 dedicated:1 bring:2 novel:2 recently:1 common:1 rotation:1 camb:1 ji:1 qp:3 jp:5 nh:6 exponentially:1 extend:1 significant:1 cambridge:1 dag:17 erieure:1 grid:5 consistency:7 inclusion:1 canu:1 shawe:1 dot:1 pq:1 add:2 matrixp:1 multivariate:1 halved:1 recent:2 certain:3 inequality:1 binary:3 success:1 yi:6 greater:1 impose:1 
2,668 | 3,419 | On the asymptotic equivalence between differential
Hebbian and temporal difference learning using a
local third factor
Christoph Kolodziejski^{1,2}, Bernd Porr^3, Minija Tamosiunaite^{1,2,4}, Florentin Wörgötter^{1,2}
^1 Bernstein Center for Computational Neuroscience Göttingen
^2 Georg-August University Göttingen, Department of Nonlinear Dynamics, Bunsenstr. 10, 37073 Göttingen, Germany
^3 University of Glasgow, Department of Electronics & Electrical Engineering, Glasgow, GT12 8LT, Scotland
^4 Vytautas Magnus University, Department of Informatics, Vileikos 8, 44404, Kaunas, Lithuania
kolo|minija|[email protected], [email protected]
Abstract
In this theoretical contribution we provide mathematical proof that two of the
most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically
equivalent when timing the learning with a local modulatory signal. This opens the
opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation based perspective that is more closely related to
the biophysics of neurons.
1 Introduction
The goal of this study is to prove that the most influential form of reinforcement learning (RL)
[1], which relies on the temporal difference (TD) learning rule [2], is equivalent to correlation based
learning (Hebb, CL) which is convergent over wide parameter ranges when using a local third factor,
as a gating signal, together with a differential Hebbian emulation of CL.
Recently there have been several contributions towards solving the question of equivalence of different rules [3, 4, 5, 6], which presented specific solutions to be discussed later (see section 4).
Thus, there is more and more evidence emerging that Hebbian learning and reinforcement learning
can be brought together under a more unifying framework. Such an equivalence would have substantial influence on our understanding of network learning as these two types of learning could be
interchanged under these conditions.
The idea of differential Hebbian learning was first used by Klopf [7] to describe classical conditioning relating to the stimulus substitution model of Sutton [8]. One of its most important features
is the implicit introduction of negative weight changes (LTD), which leads to intrinsic stabilization
properties in networks. Earlier approaches had to explicitly introduce negative weight changes into
the learning rule, e.g. by ways of a threshold [9].
One drawback of reinforcement learning algorithms, like temporal difference learning, is their use
of discrete time and discrete non-overlapping states. In real neural systems, time is continuous and
the state space can only be represented by the activity of neurons, many of which will be active at
the same time and for the same "space". This creates a rather continuous state space representation
in real systems. In order to allow for overlapping states or for generalizing over a wider range of
input regions, RL algorihtms are usually extended by value function approximation methods [1].
However, while biologically more realistic [10], this makes initially elegant RL algorithms often
quite opaque and convergence can many times not be guaranteed anymore [11]. Here we are not
concerned with function approximation, but instead address the question of how to transform an RL
algorithm (TD-learning) to continuous time using differential Hebbian learning with a local third
factor and remaining fully compatible with neuronally plausible operations.
Biophysical considerations about how such a third factor might be implemented in real neural tissue
are of secondary importance for this study. At this stage we are concerned with a formal proof only.
1.1 Emulating RL by Temporal Difference Learning
Reinforcement learning maximizes the rewards r(s) an agent will receive in the future when following a policy π traveling along states s. The return R is defined as the sum of the future rewards: R(s_i) = Σ_k γ^k r(s_{i+k+1}), where future rewards are discounted by a factor 0 < γ ≤ 1. One central goal of RL is to determine the values V(s) for each state, given by the average expected return E^π{R} that can be obtained when following policy π. Many algorithms exist to determine the values, almost all of which rely on the temporal difference (TD) learning rule (Eq. 1) [2].
Every time the agent encounters a state s_i, it updates the value V(s_i) with the discounted value V(s_{i+1}) and the reward r(s_{i+1}) of the next state that is associated with the consecutive state s_{i+1}:
V(s_i) ← (1 − ε) V(s_i) + ε (r(s_{i+1}) + γ V(s_{i+1}))   (1)
where ε is the learning rate. This rule is called TD(λ = 0), short TD(0), as it only evaluates adjacent states. For values of λ ≠ 0 more of the recently visited states are used for value-function update. TD(0) is by far the most influential RL learning rule as it is the simplest way to assure optimality of learning [12, 1].
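For reference, the update of Eq. 1 is a one-liner in discrete time. The sketch below (our own, with an episode recorded as (state, next reward, next state) triples) applies it along a trajectory.

```python
def td0_episode(V, episode, eps, gamma):
    """TD(0), Eq. 1: V(s_i) <- (1-eps) V(s_i) + eps (r(s_{i+1}) + gamma V(s_{i+1}))."""
    for s, r_next, s_next in episode:
        V[s] = (1.0 - eps) * V[s] + eps * (r_next + gamma * V[s_next])
    return V
```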
1.2 Differential Hebbian learning with a local third factor
In traditional Hebbian learning, the change of a weight ρ relies on the correlation between input u(t) and output v(t) of a neuron: ρ′(t) = μ u(t) v(t), where μ is the learning rate and prime denotes the temporal derivative. If we consider the change of the post-synaptic signal and, therefore, replace v(t) with v′(t), we will arrive at differential Hebbian learning. Then, also negative weight changes are possible and this yields properties similar to experimental neurophysiological observations (spike-timing dependent plasticity, [13]).
In order to achieve the equivalence (see section 4 for a discussion) we additionally introduce a local third modulatory factor M_k(t) responsible for controlling the learning [14]. Here local means that each input u_k(t) controls a separate third factor M_k(t) which in turn modulates only the weight change of the corresponding weight ρ_k(t). The local three-factor differential-Hebbian learning rule is then:
ρ′_k(t) = μ · u_k(t) · v′(t) · M_k(t)   (2)
where u_k(t) is the considered pre-synaptic signal and
v(t) = Σ_n ρ_n(t) u_n(t)   (3)
the post-synaptic activity of a model neuron with weights ρ_n(t). We will assume in the following that our modulatory signal M_k(t) is either 1 or 0, thus represented by a step function.
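A direct way to read Eqs. 2 and 3 is as a single Euler step. The sketch below is our own discretization, using the quasi-static approximation v′(t) ≈ Σ_n ρ_n u′_n(t) that is also invoked in the derivation of Section 2.

```python
import numpy as np

def three_factor_step(rho, u, du, M, mu, dt):
    """One Euler step of rho_k' = mu * u_k(t) * v'(t) * M_k(t), Eq. 2,
    with v(t) = sum_n rho_n(t) u_n(t), Eq. 3. Arrays are indexed by synapse k;
    du holds the temporal derivatives u_k'(t), and M is a 0/1 gating vector."""
    v_prime = float(np.dot(rho, du))   # v' with rho' terms neglected (quasi-static)
    return rho + dt * mu * u * v_prime * M
```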
2 Analytical derivation
We are going to analyze the weight change of weight ρ_i(t) when considering two consecutive signals u_i(t) and u_{i+1}(t), with the index i representing a temporal (and not e.g. a spatial) ordering. The local third factor M_i(t) opens a time window for its corresponding weight ρ_i(t) in which changes can occur. Although this time window could be located anywhere depending on the input u_i(t), it should be placed at the end of the state s_i(t), as it only makes sense if states correlate with their successor.
The relation between state s(t) and input u(t) is determined by a convolution: u(t) = ∫₀^∞ s(z) h(t−z) dz with filter function h(t), which is identical for all states. As we are using only states that are either on or off during a visiting duration S, the input functions u(t) do not differ between states. Therefore we will use u_i(t) (with index i) having a particular state in mind and u(t) (without index i) when pointing to functional development.
Furthermore we define the time period between the end of a state s_i(t) and the beginning of the next state s_{i+1}(t) as T (T < 0 in case of overlapping states). Concerning the modulatory third factor M_i(t) we define its length as L, and the time period between the beginning of M_i(t) and the end of the corresponding state s_i(t) as O. These four parameters (L, O, T, and S) are constant over states and are displayed in detail in Fig. 1 B.
Analysis of the differential equation
For the following analysis we need to substitute Eq. 3 in Eq. 2 and solve this differential equation
which consists of a homogeneous and an inhomogeneous part:
X
??i (t) = ?
? ? Mi (t) ? ui (t)[ui (t) ? ?i (t)]? + ?
? ? Mi (t) ? ui (t)[
uj (t) ? ?j (t)]?
(4)
j6=i
where the modulator Mi (t) is defining the integration boundaries. The first summand leads us to the
homogeneous solution which we will define as auto-correlation ?ac (t). The second summand(s) on
the other hand will lead to the inhomogeneous solution and this we will define as cross-correlation
?cc (t). Together we have ?(t) = ?ac (t) + ?cc (t).
In general the overall change of the weight ?i (t) after integrating over the visiting duration of si (t)
cc
and si+1 (t) and using the modulatory signal Mi (t) is: ??i =: ?i = ?ac
i + ?i
Without restrictions, we can now limit further analysis of Eq. 4, in particular of the cross-correlation
term, to the case of j = i + 1 as the modulatory factor only effects the weight of the following state.
??
u?
Since weight changes are in general slow, we can assume a quasi-static process ( ?ii ? uii , ? ? 0).
As a consequence, the derivatives of ? on the right hand side of Eq. 4 can be neglected.
The solution of the auto-correlation ρ_i^{ac}(t) is then in general:
ρ_i^{ac}(t) = ρ_i(t₀) e^{μ M_i(t) (1/2) [u_i²(t) − u_i²(t₀)]}   (5)
and the overall weight change with the third factor being present between t = O+S and t = O+S+L (Fig. 1 B) is therefore:
Γ_i^{ac} = ρ_i (e^{(μ/2) [u_i²(O+S+L) − u_i²(O+S)]} − 1)   (6)
Using again the argument of a quasi-static process (μ → 0), we can expand the exponential function to the first order:
Γ_i^{ac} := −(μ/2) ρ_i [u_i²(O+S) − u_i²(O+S+L) + o(μ)]   (7)
         = −μ ρ_i α   (8)
where we have defined α in the following way:
α(L, O, S) = (1/2) [u²(O+S) − u²(O+S+L) + o(μ)]   (9)
which is independent of i since we assume all state signals as identical.
Next we investigate the cross-correlation ρ^{cc}(t), again under the assumption of a quasi-static process. This leads us to:
ρ_i^{cc}(t) = ρ_i^{cc}(t₀) + μ ρ_{i+1} ∫₀^t M_i(z) u_i(z) u′_{i+1}(z) dz   (10)
which yields, assuming a time shift between signals u_i and u_{i+1} of S+T, i.e. u_i(t−S−T) = u_{i+1}(t), an overall weight change of
Γ_i^{cc} = μ ρ_{i+1} ∫_{O+S}^{O+S+L} u_i(z) u′_i(z−S−T) dz := μ ρ_{i+1} τ   (11)
whereas the third factor was being present between t = O+S and t = O+S+L (Fig. 1 B). Additionally we defined τ as follows:
τ(L, O, T, S) = ∫_{O−T}^{O+L−T} u(z+S+T) u′(z) dz   (12)
which, too, is independent of i.
Both α and τ depend on the actually used signal shape u(t) and the values for the parameters L, O, T and S.
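Both quantities are easy to evaluate numerically. The sketch below is our own: it uses the closed form of the double-exponential signal u(t) = ∫₀^min(t,S) (e^{−a(t−z)} − e^{−b(t−z)}) dz from the figures, and the parameter values at the bottom are illustrative, not taken from the paper.

```python
import numpy as np

def u(t, a=0.006, b=0.066, S=500.0):
    """Closed form of u(t) = int_0^min(t,S) (e^{-a(t-z)} - e^{-b(t-z)}) dz."""
    t = np.asarray(t, dtype=float)
    m = np.minimum(np.maximum(t, 0.0), S)
    val = (np.exp(-a * (t - m)) - np.exp(-a * t)) / a \
        - (np.exp(-b * (t - m)) - np.exp(-b * t)) / b
    return np.where(t > 0.0, val, 0.0)

def alpha(L, O, S=500.0):
    """Eq. 9 (without the o(mu) term)."""
    return 0.5 * (u(O + S, S=S) ** 2 - u(O + S + L, S=S) ** 2)

def tau(L, O, T, S=500.0, dz=0.01):
    """Eq. 12, with u'(z) approximated by central differences."""
    z = np.arange(O - T, O + L - T, dz)
    du = (u(z + dz, S=S) - u(z - dz, S=S)) / (2.0 * dz)
    return float(np.sum(u(z + S + T, S=S) * du) * dz)

L0, O0, T0 = 40.0, 2.0, 0.0   # illustrative values; scan O and T to map gamma
g = tau(L0, O0, T0) / float(alpha(L0, O0))
print(alpha(L0, O0), tau(L0, O0, T0), g)
```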
2.2 Analysis of the network
After the analysis of the auto- and cross-correlation of Eq. 4 we are going to discuss the weight changes in a network context with a reward only at the terminal state (non-terminal reward states will be discussed in section 4). Without restrictions, we can limit this discussion to the situation in Fig. 1 A where we have one intermediate state transition (from s_i to s_{i+1}) and a final one (from s_{i+1} to s_R) which yields a reward. The weight associated with the reward state s_R is set to a constant value unequal to zero.
Therefore three-factor differential Hebbian learning will influence two synaptic connections, ρ_i and ρ_{i+1}, of states s_i and s_{i+1} respectively, which directly project onto neuron v.
Fig. 1 B shows a realistic situation of state transitions, leaving the old state s_{i−1} and entering the new state s_i and so on. The signals as such could be considered as membrane voltages or firing rates of neurons.
[Figure 1 (image not reproduced): panel A sketches the network, with states s_i, s_{i+1}, s_R and weights ρ converging on neuron v, each state gating its own modulatory factor M; panel B shows the corresponding signal traces with the timing parameters O, L, T and S.]
Figure 1: The setup is shown in panel A and the signal structure in panel B. (A) Three states, including the rewarded state, converge on the neuron which learns according to Eq. 2. Each state s_i controls the occurrence of the modulatory factor M_i which in turn will influence learning at synapse ρ_i. The states s will be active according to the direction arrow. (B) The lower part shows the states s_i which have a duration of length S. We assume that the duration for the transition between two states is T. In the middle the output v and the signals u are depicted. Here u is given by u(t) = ∫₀^S (e^{−a(t−z)} − e^{−b(t−z)}) dz. The third factor M_i is released for the duration L after a time delay of O and is shown in the upper part. For each state the weight change, separated into auto-correlation Γ^{ac} and cross-correlation Γ^{cc}, and their dependence on the weights according to Eq. 7 and 11 are indicated.
We will start our considerations with the weight change of ρ_i, which is only influenced by the visiting state s_i itself and by the transition between s_i and s_{i+1}. The weight change Γ_i^{ac} caused by the auto-correlation (s_i with itself) is governed by the weight ρ_i of state s_i (see Eq. 8) and is negative as the signal u_i at the end of the state decays (α is positive, though, because we factorized a minus sign from Eq. 6 to Eq. 7). The cross-correlation (Γ_i^{cc}), however, is proportional to the weight ρ_{i+1} of the following state s_{i+1} (see Eq. 11) and is positive because the positive derivative of the next state signal u_{i+1} correlates with the signal u_i of state s_i. According to these considerations the contributions for the Γ_{i+1}-values can be discussed in an identical way for the following sequence (s_{i+1}, s_R).
In general the weight after a single trial is the sum of the old weight ρ_i with the two Γ-values:
ρ_i → ρ_i + Γ_i^{ac} + Γ_i^{cc}   (13)
Using Eq. 8 and Eq. 11 we can reformulate Eq. 13 into
ρ_i → ρ_i − μ α ρ_i + μ τ ρ_{i+1}   (14)
Substituting ε = μα and γ = τ/α we get
ρ_i → (1 − ε) ρ_i + ε γ ρ_{i+1}   (15)
At this point we can make the transition from weights ρ_i (differential Hebbian learning) to states V(s_i) (temporal difference learning). Additionally we note that sequences only terminate at i+1, thus this index will capture the reward state s_R and its value r(s_{i+1}), while this is not the case for all other indices (see section 4 for a detailed discussion of rewards at non-terminal states). Consequently this gives us an equation almost identical to Eq. 1:
V(s_i) → (1 − ε) V(s_i) + ε γ [r(s_{i+1}) + V(s_{i+1})]   (16)
where one small difference arises as in Eq. 16 the reward is scaled by γ. However, this has no influence as numerical reward values are arbitrary. Thus, if learning follows this third factor differential Hebbian rule, weights will converge to the optimal estimated TD-values. This proves that, under some conditions for α and τ (see below), TD(0) and the here proposed three-factor differential Hebbian learning are indeed asymptotically equivalent.
2.3 Analysis of α and τ
Here we will take a closer look at the signal shape and the parameters (L, O, T and S) which influence the values of α (Eq. 9) and τ (Eq. 12) and therefore γ = τ/α. For guaranteed convergence these values are constrained by two conditions, τ ≥ 0 and α > 0 (where α = 0 is allowed in case of τ = 0), which come from Eq. 14. A non-positive value of α would lead to divergent weights ρ and a negative value of τ to oscillating weight pairs (ρ_i, ρ_{i+1}). However, even if fulfilled, these conditions will not always lead to meaningful weight developments. A τ-value of 0 leaves all weights at their initial weight values, and discount factors represented by γ-values exceeding 1 are usually not considered in reinforcement learning [1]. Thus it makes sense to introduce more rigorous conditions and demand that 0 < γ ≤ 1 and ε > 0.
[Figure 2 (image not reproduced): three panels of γ-values over O/P ∈ [−2, 2] and T/P ∈ [−2, 2], one for each L/P ∈ {1/3, 2/3, 4/3}, with a gray scale coding γ between 0 and 1.]
Figure 2: Shown are γ-values dependent on the ratio O/P and T/P for three different values of L/P (1/3, 2/3, and 4/3). Here P is the length of the rising as well as the falling phase. The shape of the signal u is given by u(t) = ∫₀^S (e^{−a(t−z)} − e^{−b(t−z)}) dz with parameters a = 0.006 and b = 0.066. The individual figures are subdivided into a patterned area where the weights will diverge (α = 0, see Eq. 7), a striped area where no overlap between both signals and the third factor exists, and into a white area that consists of γ-values which, however, are beyond a meaningful range (γ > 1). The detailed gray shading represents γ-values (0 < γ ≤ 1) for which convergence is fulfilled.
Furthermore, as these conditions depend on the signal shape, the following theoretical considerations
need to be guided by biophysics. Hence, we will discuss neuronally plausible signals that can arise
at a synapse. This constrains u to functions that posses only one maximum and divide the signal
into a rising and a falling phase.
One quite general possibility for the shape of the signal u is the function used in Fig. 1 for which we
investigate the area of convergence. As we have three (we do not have to consider the parameter S
if we take this value to be large compared to |T |, L or O) parameters to be varied, Fig. 2 shows the
?-value in 3 different panels. In each panel we varied the parameters O and T from minus to plus
2 P where P is the time the signal u needs to reach the maximum. In each of the panels we plot
?-values for a particular value of L.
Regarding κ, the condition formed by Eq. 9 on the shape of the signal u(t) is in general already
fulfilled when using neuronally plausible signals and the third factor at the end of each state. As the
signals start to decay after the end of a state visit, u(O + S) is always larger than u(O + S + L)
and therefore κ > 0. Only if the third factor is shifted (due to the parameter O, see Fig. 1 B for
more details) to regions of the signal u where the decay has not yet started (O < −L) or has already
ended (O > P) does the difference of u(O + S) and u(O + S + L) become 0, which by Eq. 9 leads to κ = 0.
This is indicated by the patterned area in Fig. 2.
A gray shading displays in detail the γ-values for which the condition is fulfilled, whereas white represents those areas for which we obtain γ > 1. The striped area indicates parameter configurations
for which no overlap between two consecutive signals and the third factor exists (μ = 0, hence γ = 0).
The different frames show clearly that the area of convergence changes only gradually and that this
area increases with increasing duration of the third factor. Altogether this shows that,
for a general neuronally plausible signal shape u, the condition for asymptotic equivalence between
temporal difference learning and differential Hebbian learning with a local third factor is fulfilled
over a wide parameter range.
3
Simulation of a small network
In this section we show that we can reproduce the behavior of TD-learning in a small linear network
with two terminal states. This is done with a network of neurons designed according to our algorithm
with a local third factor. The obtained weights of the differential Hebbian learning neuron represent
the corresponding TD-values (see Fig. 3 A). It is known that in a linear TD-learning system with
two terminal states (one is rewarded, the other not) and a γ-value close to 1, the values at the end of
learning will represent the probability of reaching the reward state starting at the corresponding state
(compare [1]). This is shown, including the weight development, in panel (B).
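To make the setup concrete, a minimal TD(0) sketch of this experiment is given below (Python; the random-walk policy, learning rate, and exact state layout are assumptions, and the differential Hebbian circuitry itself is not modeled). With γ close to 1, the learned values approach the probability of reaching the rewarded terminal from each state:

import random

N = 9          # chain of states 0..N; states 0 and N are terminal (assumed layout)
gamma = 0.99   # discount factor close to 1
alpha = 0.1    # learning rate (assumed value)
V = [0.0] * (N + 1)

for trial in range(8000):
    s = random.randint(1, N - 1)
    while 0 < s < N:
        s2 = s + random.choice((-1, 1))   # unbiased random walk
        r = 1.0 if s2 == 0 else 0.0       # reward only at terminal state 0
        # terminals keep V = 0, so bootstrapping stops there
        V[s] += alpha * (r + gamma * V[s2] - V[s])
        s = s2

# for gamma -> 1, V[s] approaches the hitting probability (N - s) / N
for s in range(1, N):
    print(s, round(V[s], 2), round((N - s) / N, 2))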
[Figure 3, panel A: a linear chain of states N, N−1, N−2, ..., 2, 1, 0 with reward R = 0 everywhere except R = 1 at the terminal end, together with the corresponding network units M_N, ..., M_0 and S_N, ..., S_0. Panel B: weight development over 8000 trials, with the weights settling at values between 0 and 1.]
Figure 3: The linear state arrangement and the network architecture are shown in panel A. The corresponding weights after a typical experiment are depicted in panel B. The lines represent the mean
of the last 2000 weight-values of each state and are distributed uniformly (compare [1]). The signal
shape is given by $u(t) = \int_0^S \big(e^{-a(t-z)} - e^{-b(t-z)}\big)\, dz$ with parameters a = 0.006 and b = 0.066.
Furthermore, O = P/20, L = P, T = 0 (which yields γ ≈ 1), N = 9, and the learning rate is set to 0.01.
4
Discussion
The TD-rule has become the most influential algorithm in reinforcement learning because of its
tremendous simplicity and proven convergence to the optimal value function [1]. It has been successfully transferred to control problems, too, in the form of Q- or SARSA learning [15, 16], which
use the same algorithmic structure while maintaining similar advantageous mathematical properties
[15].
In this study we have shown that TD(0)-learning and differential Hebbian learning modulated by
a local third factor are equivalent under certain conditions. This proof relies only on commonly
applicable, fairly general assumptions, thus yielding a generic result that does not constrain the design of
larger networks. However, the way in which the timing of the third factor is implemented in networks
will be an important issue when constructing such networks.
Several earlier results have pointed to the possibility of an equivalence between RL and CL. Izhikevich [3] solved the distal reward problem using a spiking neural network, yet with fixed exponential
functions [17] to emulate differential Hebbian characteristics. His approach is related to neurophysiological findings on spike-timing dependent plasticity (STDP, [13]). Each synapse learned the
correlation between conditioned stimuli and unconditioned stimuli (e.g. a reward) through STDP
and a third signal. Furthermore, Roberts [4] showed that asymmetrical STDP and temporal
difference learning are related. In our differential Hebbian learning model, in contrast to the work
described above, STDP emerges automatically because of the use of the derivative in the postsynaptic potential (Eq. 2). Rao and Sejnowski [18] showed that using the temporal difference will
directly lead to STDP, but they could not provide a rigorous proof for the equivalence. Recently, it
has been shown that the online policy-gradient RL-algorithm (OLPOMDP, [19]) can be emulated
by spike timing dependent plasticity [5], however, in a complex way using a global reward signal.
On the other hand, the observations reported here provide a rather simple, equivalent correlation-based implementation of TD and support the importance of three-factor learning for providing a link
between conventional Hebbian approaches and reinforcement learning.
In most physiological experiments [20, 21, 22] the reward is given at the end of the stimulus sequence. Our assumption that the reward state is a terminating state, and therefore occurs only at the end
of the learning sequence, thus conforms to this paradigm. However, for TD in general we cannot
assume that the reward is only provided at the end. Differential Hebbian learning will then lead to
a slightly different solution compared to TD-learning. This solution has already been discussed in
another context [23]. Specifically, the difference in our case is the final result for the state-value after
convergence for states that provide a reward: we get V(s) = γV(s_{i+1}) + r(s_{i+1}) − r(s_i), compared
to TD learning: V(s) = γV(s_{i+1}) + r(s_{i+1}). It would be interesting to assess with physiological
and/or behavioral experiments which of the two equations more closely represents experimental
reality.
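As a small worked illustration of this difference (assumed numbers, not from the paper), both recursions can be iterated to their fixed points on a three-state chain whose middle transition delivers reward:

gamma = 0.9
r = [0.0, 1.0, 0.0]            # reward on entering each state (assumed)
V_td = [0.0, 0.0, 0.0]         # chain s0 -> s1 -> s2, state 2 terminal
V_dh = [0.0, 0.0, 0.0]

for _ in range(100):           # iterate both recursions to their fixed points
    for i in (1, 0):
        V_td[i] = gamma * V_td[i + 1] + r[i + 1]
        V_dh[i] = gamma * V_dh[i + 1] + r[i + 1] - r[i]

print(V_td)  # [1.0, 0.0, 0.0]
print(V_dh)  # approx [0.1, -1.0, 0.0]: states that deliver reward are shifted by -r(s_i)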
Our results rely in a fundamental way on the third factor M_i, and the analysis performed in this study
indicates that the third factor is necessary for the emulation of TD-learning by a differential Hebb
rule. Explaining the reason for this requires a closer look at the temporal difference learning rule.
We find that the TD-rule requires a leakage term proportional to −V(s). If this term did not exist, values would
diverge. It has been shown [24] that in differential Hebbian learning without a third factor
the auto-correlation part, which is the source of the needed leakage (see Eq. 13 and Eq. 7), is non-existent. This shows that just through a well-timed third factor the ratio between the cross-correlation
and the auto-correlation term is correctly adjusted. This ratio is ultimately responsible for the γ-value
we obtain when using differential Hebbian learning to emulate TD-learning.
References
[1] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[2] R. S. Sutton. Learning to predict by the method of temporal differences. Mach. Learn., 3:9–44, 1988.
[3] E. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb. Cortex, 17:2443–2452, 2007.
[4] P.D. Roberts, R.A. Santiago, and G. Lafferriere. An implementation of reinforcement learning based on spike-timing dependent plasticity. Biol. Cybern., in press.
[5] R. V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput., 19:1468–1502, 2007.
[6] W. Potjans, A. Morrison, and M. Diesmann. A spiking neural network model of an actor-critic learning agent. Neural Comput., 21:301–339, 2009.
[7] A. H. Klopf. A neuronal model of classical conditioning. Psychobiol., 16(2):85–123, 1988.
[8] R. Sutton and A. Barto. Towards a modern theory of adaptive networks: Expectation and prediction. Psychol. Review, 88:135–170, 1981.
[9] E. Oja. A simplified neuron model as a principal component analyzer. J. Math. Biol., 15(3):267–273, 1982.
[10] M. Tamosiunaite, J. Ainge, T. Kulvicius, B. Porr, P. Dudchenko, and F. Wörgötter. Pathfinding in real and simulated rats: On the usefulness of forgetting and frustration for navigation learning. J. Comp. Neurosci., 25(3):562–582, 2008.
[11] M. Wiering. Convergence and divergence in standard averaging reinforcement learning. In J. Boulicaut, F. Esposito, F. Giannotti, and D. Pedreschi, editors, Proceedings of the 15th European Conference on Machine Learning ECML'04, pages 477–488, 2004.
[12] P. Dayan and T. Sejnowski. TD(λ) converges with probability 1. Mach. Learn., 14(3):295–301, 1994.
[13] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997.
[14] B. Porr and F. Wörgötter. Learning with "relevance": Using a third factor to stabilise Hebbian learning. Neural Comput., 19:2694–2719, 2007.
[15] C. Watkins and P. Dayan. Technical note: Q-learning. Mach. Learn., 8:279–292, 1992.
[16] S. P. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Mach. Learn., 38(3):287–308, 2000.
[17] W. Gerstner, R. Kempter, L. van Hemmen, and H. Wagner. A neuronal learning rule for submillisecond temporal coding. Nature, 383:76–78, 1996.
[18] R. Rao and T. Sejnowski. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Comput., 13:2221–2237, 2001.
[19] J. Baxter, P. L. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. J. Artif. Intell. Res., 15:351–381, 2001.
[20] W. Schultz, P. Apicella, E. Scarnati, and T. Ljungberg. Neuronal activity in monkey ventral striatum related to the expectation of reward. J. Neurosci., 12(12):4595–4610, 1992.
[21] P. R. Montague, P. Dayan, and T. J. Sejnowski. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. Neurosci., 16(5):1936–1947, 1996.
[22] G. Morris, A. Nevet, D. Arkadir, E. Vaadia, and H. Bergman. Midbrain dopamine neurons encode decisions for future action. Nat. Neurosci., 9(8):1057–1063, 2006.
[23] P. Dayan. Matters temporal. Trends Cogn. Sci., 6(3):105–106, 2002.
[24] C. Kolodziejski, B. Porr, and F. Wörgötter. Mathematical properties of neuronal TD-rules and differential Hebbian learning: A comparison. Biol. Cybern., 98(3):259–272, 2008.
2,669 | 342 | An Analog VLSI Splining Network
Daniel B. Schwartz and Vijay K. Samalam
GTE Laboratories, Inc.
40 Sylvan Rd.
Waltham, MA 02254
Abstract
We have produced a VLSI circuit capable of learning to approximate arbitrary smooth functions of a single variable using a technique closely related to
splines. The circuit effectively has 512 knots spaced on a uniform grid and
has full support for learning. The circuit also can be used to approximate
multi-variable functions as sum of splines.
An interesting, and as yet nearly untapped, set of applications for VLSI implementation of neural network learning systems can be found in adaptive control and
non-linear signal processing. In most such applications, the learning task consists
of approximating a real function of a small number of continuous variables from
discrete data points. Special purpose hardware is especially interesting for applications of this type since they generally require real time on-line learning and there
can be stiff constraints on the power budget and size of the hardware. Frequently,
the already difficult learning problem is made more complex by the non-stationary
nature of the underlying process.
Conventional feed-forward networks with sigmoidal units are clearly inappropriate
for applications of this type. Although they have exhibited remarkable performance
in some types of time series prediction problems (for example, Weigend, 1990 and
Atlas, 1990), their learning rates in general are too slow for on-line learning. On-line
performance can be improved most easily by using networks with more constrained
architecture, effectively making the learning problem easier by giving the network a
hint about the learning task. Networks that build local representations of the data,
such as radial basis functions, are excellent candidates for these type of problems.
One great advantage of such networks is that they require only a single layer of
units. If the position and width of the units are fixed, the learning problem is linear
in the coefficients and local. By local we mean the computation of a weight change
requires only information that is locally available to each weight, a highly desirable
property for VLSI implementation. If the learning algorithm is allowed to adjust
both the position and width of the units then many of the advantages of locally
tuned units are lost.
A number of techniques have been proposed for the determination of the width
and placement of the units. One of the most direct is to center a unit at every
data point and to adjust the widths of the units so the receptive fields overlap
with those of neighboring data points (Broomhead, 1989). The proliferation of
units can be limited by using unsupervised clustering techniques to clump the data
followed by the allocation of units to fit the clumps (Moody, 1989). Others have
advocated assigning new units only when the error on a new data point is larger than
a threshold and otherwise making small adjustments in the weights and parameters
of the existing units (Platt, 1990). All of these methods suffer from the common
problem of requiring an indeterminate quantity of resources in contrast with the
fixed resources available from most VLSI circuits. Even worse, when used with
non-stationary processes a mechanism is needed to deallocate units as well as to
allocate them. The resource allocation/deallocation problem is a serious barrier to
implementing these algorithms as autonomous VLSI microsystems.
A Splining Network
To avoid the resource allocation problem we propose a network that uses all of
its weights and units regardless of the problem. We avoid over-parameterization
of the training data by building constraints on smoothness into the network, thus
reducing the number of degrees of freedom available to the training process. In
its simplest guise, the network approximates arbitrary 1-d smooth functions with a linear superposition of locally tuned units spaced on a uniform grid,
$$g(z) = \sum_i w_i\, f_\sigma(z - i\,\Delta z) \qquad (1)$$
where σ is the radius of the unit's receptive field and the w_i are the weights. f_σ is a
bump of width σ, such as a Gaussian or a cubic spline basis function. Mathematically
the network is closely related to function approximation using B-splines (Lancaster,
1986) with uniformly spaced knots. However, in B-spline interpolation the overlap
of the basis functions is normally determined by the degree of the spline whereas
we use the degree of overlap as a free parameter to constrain the smoothness of
the network's output. As mentioned earlier, the network is linear in its weights
so gradient descent with a quadratic cost function (LMS) is an effective training
procedure.
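As a software illustration of this point, a minimal LMS sketch of the network in equation 1 is given below (Python; the knot count, Gaussian bump, toy target function, and step size are assumptions, not the chip's values):

import numpy as np

K = 64                           # number of uniformly spaced knots (assumed)
centers = np.linspace(0.0, 1.0, K)
sigma = 2.0 / K                  # receptive-field width: the smoothness knob

def features(z):
    # one Gaussian bump per knot; a cubic spline basis would work as well
    return np.exp(-((z - centers) / sigma) ** 2)

rng = np.random.default_rng(0)
w = np.zeros(K)                  # the weights of equation 1
eta = 0.2                        # LMS step size (assumed)
for _ in range(20000):
    z = rng.uniform(0.0, 1.0)
    target = np.sin(2 * np.pi * z)               # toy target function
    phi = features(z)
    w += eta * (target - phi @ w) * phi / (phi @ phi)   # normalized LMS

print(abs(features(0.3) @ w - np.sin(2 * np.pi * 0.3)))  # small residual

Because the model is linear in w, this quadratic cost has a single optimum, which is what makes simple LMS adequate here.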
The weights needed for this network can easily be implemented in CMOS with an
array of transconductance amplifiers. The amplifiers are wired as voltage followers
with their outputs tied together and the weights are represented by voltages V_i
at the non-inverting inputs of the amplifiers. If the outputs of the locally tuned
units are represented by unipolar currents I_i, these currents can be used to bias the
transconductance amplifiers, and the result is (Mead, 1989)
$$V_{out} = \frac{\sum_i I_i V_i}{\sum_i I_i}$$
provided that care is taken to control the non-linearities of the amplifiers. However,
while the weights have a simple implementation in analog VLSI circuitry, the input
units du not. A number of circuits exist whose transfer characteristics can be shaped
to be a suitable bump but none of those known to the authors allow the width of
the bump to be adjusted over a wide range without the use of resistors.
Generating the Receptive Fields
Input units with tunable receptive fields can be generated quite efficiently by breaking them up into two layers of circuitry, as shown in figure 1. The input layer place-encodes the input signal, i.e., only one or perhaps a small cluster of units is active
at a time. The output of the place encoding units either injects or controls the injection of current into the laterally connected spreading layer.
[Figure 1 diagram: input, place encoding layer, spreading layer, weight layer, output.]
Figure 1: An architecture that allows the width and shape of the receptive fields to
be varied over a wide range. The elements of the 'spreading layer' are passive and
can sink current to ground.
The elements in
the spreading layer all contain ground terminals and the current sunk by each one
determines the bias current applied to the associated weight. Clearly, the distribution of currents flowing to ground through the spreading layer form a smooth bump
such that when excitation is applied to tap j of the spreading layer,
$$I_i = I_0\, f_\sigma(i - j)$$
where f_σ is the bump called for by equation 1. In our earliest realizations of
this network the input layer was a crude flash A-to-D converter and the input
to the circuit was analog. In the current generation the input is digital with the
place encoding performed by a conventional address decoder. If desired, input
quantization can be avoided by using a layer of amplifiers that generate smooth
bumps of fixed width to generate the input place encoding.
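A minimal software analogue of this two-layer construction is shown below (Python; the decay constant and tap count are assumed, and transistor non-idealities are ignored): the place encoder selects a tap, the spreading layer turns it into a bump of bias currents, and the output is the current-weighted mean of the stored weight voltages, as in the follower-aggregation equation above.

import numpy as np

n_taps = 50
a = 0.2                                  # spreading-layer decay (assumed)
taps = np.arange(n_taps)

def bias_currents(j):
    # exponential bump of bias currents centered on the excited tap j
    return np.exp(-a * np.abs(taps - j))

def output(j, weights):
    I = bias_currents(j)
    # follower aggregation: current-weighted mean of the weight voltages
    return np.sum(I * weights) / np.sum(I)

w = np.linspace(0.0, 1.0, n_taps)        # some stored weight voltages
print(output(10, w), output(40, w))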
The simplest candidate to implement the spreading layer in conventional CMOS
is a set of diode connected n-channel transistors laterally connected by n-channel
pass transistors. The gate voltages of the diode connected transistors determine
the bias currents Ii of the weights. Ignoring the body effect and assuming weak
inversion in the current sink, this type of network tends to give bumps with rather
sharp peaks, $I_i \approx \sum_j I_0 e^{-a|i-j|}$, where |i − j| is the distance from the point where the
excitation is applied. Figure 2 shows a more sophisticated version of this circuit
in which the output of the place encoding units applies excitation to the spreading
network through a p-channel transistor. The shape of the bumps can be softened by limiting the amount of current drawn by the current sinks with an n-channel cascode transistor in series with the current sink.
[Figure 2 schematic: a section of the spreading layer, with bias voltages from the place encoder and connections to the weights.]
Figure 2: A schematic of a section of the spreading layer. Roughly speaking, the
n-channel pass transistor controls the extent of the tails of the bumps and the
p-channel pass transistor and the cascode transistor control its width.
Some experimental results for this type of
circuit are shown in figure 3a. More control can be obtained by using complementary
pass transistors. The use of p-channel pass transistors alone unexpectedly results
in bumps that are nearly square (figure 3b). These can be smoothed by using both flavors of pass transistor simultaneously (figure 3c).
The Weights
As described earlier, the implementation of the output weights is based on the
computation of means by the well known follower-aggregation circuit. With typical
transconductance amplifiers, this averaging is linear only when the voltages being
averaged are distributed over a voltage range of no more than a few times kT/e
in weak inversion. In the circuits described here the linear range has been widened
to nearly a volt by reducing the transconductance of the readout amplifiers through
the combination of low width to length ratio input transistors and relatively large
tail currents.
The weights V_i are stored on MOS capacitors and are programmed by the gated transconductance amplifier shown in figure 4.
[Figure 3 plots (a)-(c): measured receptive-field shapes, plotted as output current vs. tap number.]
Figure 3: Experimental measurements of the receptive field shapes obtained from
different types of networks. (a) n-channel transistors for several gate voltages. (b)
p-channel transistors for several gate voltages. (c) Both n-channel and p-channel
pass transistors.
Figure 4: Schematic of an output weight including the circuitry to generate weight
updates. To minimize leakage and charge injection simultaneously, the pass transistors used to gate the weight change amplifier are of minimum size and a separate
transistor turns off the output transistors of the amplifier.
Since this amplifier computes the difference between the target voltage and the actual output of the network, the learning rule is just LMS,
$$\Delta V_i = \frac{g_i\, T}{C}\,\big(V_{target} - V_{out}\big),$$
where C is the capacitance of the storage capacitor and T is the duration of weight
changes. The transconductance g_i of the weight change amplifier is determined by
the strength of the excitation current from the spreading layer, g_i ∝ I_i in weak inversion.
Since the weight changes are governed by strengths of the excitation currents from
the spreading layer, clusters of weights are changed at a time. This enhances the
fault tolerance of the circuit since the group of weights surrounding a bad one can
compensate for it.
Experimental Evaluation
Several different chips have been fabricated in 2µ p-well CMOS and tested to evaluate the principles described here. The most recent of these has 512 weights arranged
in a 64 x 8 matrix connected to form a one dimensional array. The active area of
this chip is 4.1mm x 3.7mm. The input signal is digital with the place encoding
performed by a conventional address decoder. To maximize the flexibility of the
chip, the excitation is applied to the spreading layer by a register located in each
cell. By writing to multiple registers between resets, the spreading layer can be excited at multiple points simultaneously. This feature allows the chip to be treated
as a single I-dimensional spline with 512 weights or, for example, as the sum of
four distinct I-dimensional splines each made up of 128 weights. One of the most
noticeable virtues of this design is the simplicity of the layout due to the absence
of any dear distinction between 'weights' and 'units'. The primitive cell consists of
a register, a piece of the spreading network, a weight change amplifier, a storage
capacitor and output amplifier. All but a tiny fraction of the chip is a tiling of
this primitive cell. The excess circuitry consists of the address decoders, a timing
circuit to control the duration of weight changes and some biasing circuitry for the
spreading layer.
To execute LMS learning, the user need only provide a sequence of target voltages
and a current proportional to the duration of weight changes. Under reasonable
operating conditions a weight updates cycle takes less than 11'8 implying a weight
change rate of 5 x 108 connections/second. The response of the chip to a single
weight change after initialization is shown in in figure 5a. One feature of this plot
is striking - even though the distribution of offsets in the individual amplifiers has
a variance of 13mV, the ripple in the output of the chip is about a ImV. For some
computations, it appears the limiting factor on the accuracy of the chip is the rate
of weight decay, about IOmV/s.
As a more strenuous test of the functionality of the chip we trained it to predict
chaotic time series generated by the well-known logistic equation,
$$x_{t+1} = 4 a x_t (1 - x_t), \qquad a < 1.$$
Some experimental results for the mean prediction error are shown in figure 5b.
In these experiments, a mean prediction error of 3% is achieved, which is well
above the intrinsic accuracy of the circuit. A detailed examination of the error
rate as a function of the size and shape of the bumps indicates that the problem
lies in the long tails exhibited by the spreading layer when the n-channel pass
transistors are turned on. This tail falls off very slowly due to the body effect.
One remedy to this problem is to actively bias the gates of the n-channel pass
transistors to be a programmed offset above their source voltages (Mead, 1989). A
simpler solution is to subtract a fixed current from each of the bias current defined
by the spreading layer. This solution costs a mere 4 transistors and has the added
benefit of guaranteeing that the bumps will always have a finite support.
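The logistic-map experiment can be mimicked in simulation with the same kind of LMS spline network (a sketch with assumed parameters; the chip itself uses 512 quantized inputs and analog weights):

import numpy as np

a = 0.9                        # chaotic regime of x_{t+1} = 4 a x_t (1 - x_t)
x = [0.3]
for _ in range(5000):
    x.append(4 * a * x[-1] * (1 - x[-1]))

K = 64
centers = np.linspace(0.0, 1.0, K)
sigma = 2.0 / K
feat = lambda z: np.exp(-((z - centers) / sigma) ** 2)

w, eta, errs = np.zeros(K), 0.5, []
for t in range(len(x) - 1):
    phi = feat(x[t])
    err = x[t + 1] - phi @ w          # one-step prediction error
    w += eta * err * phi / (phi @ phi)
    errs.append(abs(err))
print("mean |error| over last 1000 steps:", np.mean(errs[-1000:]))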
Conclusion
We have demonstrated that neural network learning can be efficiently mapped onto analog VLSI, provided that the network architecture and training procedure are tailored to match the constraints imposed by VLSI.
[Figure 5 plots: (a) network response vs. input value; (b) mean prediction error vs. time.]
Figure 5: Some experimental results from a splining circuit. (a) The response of
the circuit to learning one data point after initialization of the weights to a constant
value. (b) Experimental mean prediction error while learning a chaotic time series.
Besides the computational speed and low power consumption (300 pA) that follow directly from this mapping
onto VLSI, the circuit also demonstrates intrinsic fault tolerance to defects in the
weights.
Acknowledgements
This work was initially inspired by a discussion with A. G. Barto and R. S. Sutton.
A discussion with J. Moody was also helpful.
References
[1] L. Atlas, R. Cole, Y. Muthusamy, A. Lippman, J. Connor, D. Park, M. El-Sharkawi, and R. J. Marks II. A performance comparison of trained multi-layer perceptrons and trained classification trees. IEEE Proceedings, 1990.
[2] D. S. Broomhead and D. Lowe. Multivariable functional interpolation and adaptive networks. Complex Systems, 2:321–355, 1988.
[3] P. Lancaster and K. Salkauskas. Curve and Surface Fitting. Academic Press, 1986.
[4] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[5] J. Moody and C.J. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1(2), 1989.
[6] J. Platt. A resource-allocating neural network for function interpolation. In Richard P. Lippman, John Moody, and David S. Touretzky, editors, Advances in Neural Information Processing Systems 3, 1991.
[7] A. S. Weigend, B. A. Huberman, and D. E. Rumelhart. Predicting the future: A connectionist approach. International Journal of Neural Systems, 3, 1990.
constraint:3 constrain:1 encodes:1 speed:1 transconductance:6 injection:2 relatively:1 softened:1 combination:1 wi:1 making:2 taken:1 resource:5 equation:2 turn:1 mechanism:1 needed:2 know:1 addison:1 tiling:1 available:3 uq:1 gate:5 clustering:1 ccd:1 giving:1 especially:1 build:1 approximating:1 leakage:1 capacitance:1 already:1 quantity:1 added:1 receptive:6 enhances:1 gradient:1 distance:1 separate:1 mapped:1 decoder:3 consumption:1 extent:1 broom:1 assuming:1 length:1 besides:1 ratio:1 difficult:1 implementation:4 design:1 gated:1 darken:1 finite:1 descent:1 head:1 varied:1 smoothed:1 arbitrary:2 sharp:1 david:1 inverting:1 widened:1 connection:1 tap:2 distinction:1 address:3 microsystems:1 biasing:1 including:1 power:2 suitable:1 overlap:3 treated:1 examination:1 predicting:1 acknowledgement:1 interesting:2 generation:1 allocation:3 proportional:1 remarkable:1 digital:2 degree:3 principle:1 editor:1 tiny:1 changed:1 free:1 bias:6 allow:1 wide:2 fall:1 barrier:1 distributed:1 tolerance:2 benefit:1 curve:1 computes:1 forward:1 made:2 adaptive:2 author:1 avoided:1 excess:1 approximate:2 active:2 continuous:1 nature:1 transfer:1 channel:13 ignoring:1 du:1 excellent:1 complex:2 allowed:1 complementary:1 body:2 cubic:1 slow:1 guise:1 position:2 resistor:1 candidate:2 crude:1 tied:1 breaking:1 governed:1 lie:1 bad:1 xt:1 offset:2 decay:1 virtue:1 intrinsic:2 quantization:1 effectively:2 ci:6 budget:1 vijay:1 easier:1 flavor:1 subtract:1 adjustment:1 applies:1 determines:1 ma:1 flash:1 absence:1 change:10 determined:2 typical:1 reducing:2 uniformly:1 averaging:1 huberman:1 gte:1 called:1 pas:10 experimental:6 perceptrons:1 support:2 mark:1 iiyi:1 evaluate:1 tested:1 |
2,670 | 3,420 | Automatic online tuning for fast Gaussian summation
Vlad I. Morariu¹*, Balaji V. Srinivasan¹, Vikas C. Raykar², Ramani Duraiswami¹, and Larry S. Davis¹
¹University of Maryland, College Park, MD 20742
²Siemens Medical Solutions Inc., USA, 912 Monroe Blvd, King of Prussia, PA 19406
[email protected], [email protected], [email protected],
[email protected], [email protected]
Abstract
Many machine learning algorithms require the summation of Gaussian kernel
functions, an expensive operation if implemented straightforwardly. Several methods have been proposed to reduce the computational complexity of evaluating such
sums, including tree and analysis based methods. These achieve varying speedups
depending on the bandwidth, dimension, and prescribed error, making the choice
between methods difficult for machine learning tasks. We provide an algorithm
that combines tree methods with the Improved Fast Gauss Transform (IFGT). As
originally proposed the IFGT suffers from two problems: (1) the Taylor series
expansion does not perform well for very low bandwidths, and (2) parameter selection is not trivial and can drastically affect performance and ease of use. We
address the first problem by employing a tree data structure, resulting in four evaluation methods whose performance varies based on the distribution of sources and
targets and input parameters such as desired accuracy and bandwidth. To solve the
second problem, we present an online tuning approach that results in a black box
method that automatically chooses the evaluation method and its parameters to
yield the best performance for the input data, desired accuracy, and bandwidth.
In addition, the new IFGT parameter selection approach allows for tighter error
bounds. Our approach chooses the fastest method at negligible additional cost,
and has superior performance in comparisons with previous approaches.
1
Introduction
Gaussian summations occur in many machine learning algorithms, including kernel density estimation [1], Gaussian process regression [2], fast particle smoothing [3], and kernel based machine
learning techniques that need to solve a linear system with a similarity matrix [4]. In such algorithms,
the sum $g(y_j) = \sum_{i=1}^{N} q_i e^{-\|x_i - y_j\|^2/h^2}$ must be computed for j = 1, . . . , M, where {x_1, . . . , x_N}
and {y1 , . . . , yM } are d-dimensional source and target (or reference and query) points, respectively;
qi is the weight associated with xi ; and h is the bandwidth. Straightforward computation of the
above sum is computationally intensive, taking O(M N ) time.
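For reference, a direct evaluation of this sum looks as follows (a NumPy sketch; the released library is implemented differently, and all sizes here are arbitrary):

import numpy as np

def gauss_transform_direct(X, Y, q, h):
    """g(y_j) = sum_i q_i exp(-||x_i - y_j||^2 / h^2), O(M N d) time."""
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # (M, N) distances
    return np.exp(-d2 / h ** 2) @ q

X = np.random.rand(1000, 3); Y = np.random.rand(500, 3)
q = np.random.rand(1000); h = 0.4
g = gauss_transform_direct(X, Y, q, h)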
To reduce the computational complexity, Greengard and Strain proposed the Fast Gauss Transform
(FGT) [5], using two expansions, the far-field Hermite expansion and the local Taylor expansion, and
a translation process that converts between the two, yielding an overall complexity of O(M + N ).
However, due to the expensive translation operation, O(p^d) constant term, and the box based data
structure, this method becomes less effective for higher dimensions (e.g. d > 3) [6].
Dual-tree methods [7, 8, 9, 10] approach the problem by building two separate trees for the source
and target points respectively, and recursively considering contributions from nodes of the source
tree to nodes of the target tree. The most recent works [9, 10] present new expansions and error
control schemes that yield improved results for bandwidths in a large range above and below the optimal bandwidth, as determined by the standard least-squares cross-validation score [11]. Efficiency
across bandwidth scales is important in cases where the optimal bandwidth must be searched for.
*Our code is available for download as open source at http://sourceforge.net/projects/figtree.
Another approach, the Improved Fast Gauss Transform (IFGT) [6, 12, 13], uses a Taylor expansion
and a space subdivision different than the original FGT, allowing for efficient evaluation in higher
dimensions. This approach also achieves O(M + N ) asymptotic computational complexity. However, the approach as initially presented in [6, 12] was not accompanied by an automatic parameter
selection algorithm. Because the parameters interact in a non-trivial way, some authors designed
simple parameter selection methods to meet the error bounds, but which did not maximize performance [14]; others attempted, unsuccessfully, to choose parameters, reporting times of ??? for
IFGT [9, 10]. Recently, Raykar et al [13] presented an approach which selects parameters that minimize the constant term that appears in the asymptotic complexity of the method, while guaranteeing
that error bounds are satisfied. This approach is automatic, but only works for uniformly distributed
sources, a situation often not met in practice. In fact, Gaussian summations are often used because
a simple distribution cannot be assumed. In addition, the IFGT performs poorly at low bandwidths
because of the number of Taylor expansion terms that must be retained to meet error bounds.
We address both problems with IFGT: 1) small bandwidth performance, and 2) parameter selection.
First we employ a tree data structure [15, 16] that allows for fast neighbor search and greatly speeds
up computation for low bandwidths. This gives rise to four possible evaluation methods that are
chosen based on input parameters and data distributions: direct evaluation, direct evaluation using
tree data structure, IFGT evaluation, and IFGT evaluation using tree data structure (denoted by
direct, direct+tree, ifgt, and ifgt+tree, respectively). We improve parameter selection by removing
the assumption that data is uniformly distributed and by providing a method for selecting individual
source and target truncation numbers that allows for tighter error bounds. Finally, we provide an
algorithm that automatically selects the evaluation method that is likely to be fastest for the given
data, bandwidth, and error tolerance. This is done in a way that is automatic and transparent to the
user, as for other software packages such as FFTW [17] and ATLAS [18]. The algorithm is tested on
several datasets, including those in [10], and in each case found to perform as expected.
2
Improved Fast Gauss Transform
We briefly summarize the IFGT, which is described in detail [13, 12, 6]. The speedup is achieved by
employing a truncated Taylor series factorization, using a space sub-division to reduce the number
of terms needed to satisfy the error bound, and ignoring sources whose contributions are negligible.
The approximation is guaranteed to satisfy the absolute error $|\hat g(y_j) - g(y_j)|/Q \le \epsilon$, where $Q = \sum_i |q_i|$. The factorization that IFGT uses involves the truncated multivariate Taylor expansion
$$e^{-\|y_j - x_i\|^2/h^2} = e^{-\|x_i - x_*\|^2/h^2}\, e^{-\|y_j - x_*\|^2/h^2} \sum_{|\alpha| \le p-1} \frac{2^{|\alpha|}}{\alpha!} \left(\frac{y_j - x_*}{h}\right)^{\alpha} \left(\frac{x_i - x_*}{h}\right)^{\alpha} \;+\; \epsilon_{ij}$$
where α is multi-index notation¹ and ε_ij is the error induced by truncating the series to exclude
terms of degree p and higher and can be bounded by
$$\epsilon_{ij} \le \frac{2^p}{p!} \left(\frac{\|x_i - x_*\|}{h}\right)^{p} \left(\frac{\|y_j - x_*\|}{h}\right)^{p} e^{-(\|x_i - x_*\| - \|y_j - x_*\|)^2/h^2}. \qquad (1)$$
Because reducing the distance ||x_i − x_*|| also reduces the error bound given above, the sources can
be divided into K clusters, so the Taylor series center of expansion for source x_i is the center of
the cluster to which the source belongs. Because of the rapid decay of the Gaussian function, the
contribution of sources in cluster k can be ignored if ||y_j − c_k|| > r_y^k = r_x^k + h√(log(1/ε)), where
c_k and r_x^k are the center and radius of the k-th cluster, respectively.
In [13], the authors ensure that the error bound is met by choosing the truncation number p_i for each
source so that ε_ij ≤ ε. This guarantees that |ĝ(y_j) − g(y_j)| = |Σ_{i=1}^N q_i ε_ij| ≤ Σ_{i=1}^N |q_i| ε = Qε.
Because ||y_j − c_k|| cannot be computed for each ε_ij term (to prevent quadratic complexity), the
authors use the worst case scenario; denoting d_ik = ||x_i − c_k|| and d_jk = ||y_j − c_k||, the bound on
the error term ε_ij is maximized at
$$\tilde d_{jk} = \frac{d_{ik} + \sqrt{d_{ik}^2 + 2 p_i h^2}}{2},$$
or d̃_jk = r_y^k, whichever is smaller (since targets further than r_y^k from c_k will not consider cluster k).
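A direct transcription of this per-source truncation rule (a Python sketch; naive use of the factorial, no vectorization):

import math

def source_truncation(d_ik, h, r_y, eps, p_max):
    """Smallest p with the worst-case point-wise bound (1) below eps."""
    for p in range(1, p_max + 1):
        d_j = min((d_ik + math.sqrt(d_ik**2 + 2 * p * h**2)) / 2.0, r_y)
        b = (2.0**p / math.factorial(p)) \
            * (d_ik * d_j / h**2)**p \
            * math.exp(-((d_ik - d_j)**2) / h**2)
        if b <= eps:
            return p
    return p_max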
¹Multi-index α = (α_1, ..., α_d) is a d-tuple of nonnegative integers, its length is |α| = α_1 + ... + α_d, its factorial is defined as α! = α_1! α_2! ... α_d!, and for x = (x_1, ..., x_d) ∈ R^d, x^α = x_1^{α_1} x_2^{α_2} ... x_d^{α_d}.
[Figure 1 panels: direct, direct+tree, ifgt, and ifgt+tree, each showing sources, targets, cluster centers c, and radius r.]
Figure 1: The four evaluation methods. Target is displayed elevated to separate it from sources.
The algorithm proceeds as follows. First, the number of clusters K, maximum truncation number
pmax , and the cut-off radius r are selected by assuming that sources are uniformly distributed. Next,
K-center clustering is performed to obtain c1 , . . . , cK , and the set of sources S is partitioned into
S_1, . . . , S_K. Using the max cluster radius r_x, the truncation number p_max is found that satisfies the
worst-case error bound. Choosing p_i for each source x_i so that ε_ij ≤ ε, source contributions are
accumulated to cluster centers:
$$C_k^{\alpha} = \frac{2^{|\alpha|}}{\alpha!} \sum_{x_i \in S_k} q_i\, e^{-\|x_i - c_k\|^2/h^2} \left(\frac{x_i - c_k}{h}\right)^{\alpha} \mathbf{1}_{\{|\alpha| \le p_i - 1\}}$$
For each y_j, influential clusters for which ||y_j − c_k|| ≤ r_y^k = r_x^k + r are found, and contributions
from those clusters are evaluated:
$$\hat g(y_j) = \sum_{\|y_j - c_k\| \le r_y^k} \;\; \sum_{|\alpha| \le p_{\max}-1} C_k^{\alpha}\, e^{-\|y_j - c_k\|^2/h^2} \left(\frac{y_j - c_k}{h}\right)^{\alpha}$$
The clustering step can be performed in O(N K) time using a simple algorithm [19] due to Gonzalez,
or in optimal O(N log K) time using the algorithm by Feder and Greene [20]. Because the number
of values of α such that |α| ≤ p is r_pd = C(p + d, d), the total complexity of the algorithm is
O((N + M n_c)(log K + r_(p_max−1)d)), where n_c is the number of cluster centers that are within the
cut-off radius of a target point. Note that for fixed p, r_pd is polynomial in the dimension d rather than
exponential. Searching for clusters within the cut-off radius of each target can take time O(M K),
but efficient data-structures can be used to reduce the cost to O(M nc log K).
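A much-simplified one-dimensional sketch of the accumulate-and-evaluate steps is given below (Python; the real IFGT uses multi-indices for d > 1, k-center clustering, per-source truncation numbers, and an efficient neighbor search, all of which are replaced here by a uniform grid of centers and a single p):

import math
import numpy as np

def ifgt_1d(x, y, q, h, K=16, p=8, eps=1e-6):
    """Simplified 1-d IFGT sketch: grid clusters, one truncation number p."""
    c = np.linspace(x.min(), x.max(), K)              # cluster centers
    lab = np.abs(x[:, None] - c[None, :]).argmin(1)   # nearest-center labels
    r_y = np.abs(x - c[lab]).max() + h * math.sqrt(math.log(1.0 / eps))
    # accumulate per-cluster coefficients C_k^alpha for alpha = 0..p-1
    C = np.zeros((K, p))
    for k in range(K):
        xk = (x[lab == k] - c[k]) / h
        w = q[lab == k] * np.exp(-xk**2)
        for al in range(p):
            C[k, al] = (2.0**al / math.factorial(al)) * np.sum(w * xk**al)
    # evaluate each target against clusters inside the cut-off radius
    g = np.zeros(len(y))
    for j, yj in enumerate(y):
        for k in range(K):
            if abs(yj - c[k]) > r_y:
                continue
            dy = (yj - c[k]) / h
            g[j] += math.exp(-dy**2) * sum(C[k, al] * dy**al
                                           for al in range(p))
    return g

x, y = np.random.rand(2000), np.random.rand(1000)
q, h = np.random.rand(2000), 0.25
err = ifgt_1d(x, y, q, h) - np.exp(-(y[:, None] - x)**2 / h**2) @ q
print(np.abs(err).max() / q.sum())   # should be small, roughly near eps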
3
Fast Fixed-Radius Search with Tree Data Structure
One problem that becomes apparent from the point-wise error bound on ε_ij is that as bandwidth h
decreases, the error bound increases, and either dik = ||xi ? ck || must be decreased (by increasing
the number of clusters K) or the maximum truncation number pmax must be increased to continue
satisfying the desired error. An increase in either K or pmax increases the total cost of the algorithm.
Consequently, the algorithm originally presented above does not perform well for small bandwidths.
However, few sources have a contribution greater than q_i ε at low bandwidths, since the cut-off radius
becomes very small. Also, because the number of clusters increases as the bandwidth decreases, we
need an efficient way of searching for clusters that are within the cut-off radius. For this reason, a
tree data structure can be used since it allows for efficient fixed-radius nearest neighbor search. If h
is moderately low, a tree data structure can be built on the cluster centers, such that the nc influential
clusters within the cut-off radius can be found in O(nc log K) time [15, 16]. If the bandwidth is
very low, then it is more efficient to simply find all source points xi that influence a target yj and
perform exact evaluation for those source points. Thus, if ns source points are within the cut-off
radius of yj , then the time to build the structure is O(N log N ) and the time to perform a query is
O(ns log N ) for each target. Thus, we have four methods that may be used for evaluation of the
Gauss Transform: direct evaluation, direct evaluation with the tree data structure, IFGT evaluation,
and IFGT evaluation with a tree data structure on the cluster centers. Figure 1 shows a graphical
representation of the four methods. Because the running times of the four methods for various
parameters can differ greatly (i.e. using direct+tree evaluation when ifgt is optimal could result in a
running time that is many orders of magnitude larger), we will need an efficient and online method
selection approach, which is presented in section 5.
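These fixed-radius searches map directly onto standard k-d tree libraries; the sketch below uses SciPy (an assumption; the released code implements its own tree) to realize the direct+tree method:

import numpy as np
from scipy.spatial import cKDTree

def direct_tree(X, Y, q, h, eps=1e-6):
    """Exact evaluation over only the sources within the cut-off radius."""
    r = h * np.sqrt(np.log(1.0 / eps))   # beyond r, each term is < q_i * eps
    tree = cKDTree(X)                    # O(N log N) build
    g = np.zeros(len(Y))
    for j, nbrs in enumerate(tree.query_ball_point(Y, r)):
        if nbrs:
            d2 = ((X[nbrs] - Y[j]) ** 2).sum(axis=1)
            g[j] = np.exp(-d2 / h ** 2) @ q[nbrs]
    return g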
[Figure 2 plots: left, max cluster radius r_x vs. number of clusters K (actual vs. predicted); right, speedup vs. bandwidth h for d = 1, ..., 6.]
Figure 2: Selecting p_max and K using cluster radius, for M = N = 20000, sources distributed as a mixture of
25 Gaussians N(µ ∼ U[0,1]^d, σ = 4^{−4} I), targets as U[0,1]^d, ε = 10^{−2}. Left: Predicted cluster radius K^{−1/d}
vs actual cluster radius for d = 3. Right: Speedup from using the actual cluster radius.
4
Choosing IFGT Parameters
As mentioned in Section 1, the process of choosing the parameters is non-trivial. In [13], the pointwise error bounds described in Eq. 1 were used in an automatic parameter selection scheme that is
optimized when sources are uniformly distributed. We remove the uniformity assumption and also
make the error bounds tighter by selecting individual source and target truncation numbers to satisfy
cluster-wise error bounds instead of the worst-case point-wise error bounds. The first improvement
provides significant speedup in cases where sources are not uniformly distributed, and the second
improvement results in general speedup since we are no longer considering the error contribution of
just the worst source point, but considering the total error of each cluster instead.
4.1
Number of Clusters and Maximum Truncation Number
The task of selecting the number of clusters K and maximum truncation number pmax is difficult
because they depend on each other indirectly through the source distribution. For example, increasing K decreases the cluster radius, which allows for a lower truncation number while still satisfying
the error bound; conversely, increasing pmax allows clusters to have a larger radius, which allows
for a smaller K. Ideally, both parameters should be as low as possible since they both affect computational complexity. Unfortunately, we cannot find the balance between the two without analyzing
the source distribution because it influences the rate at which the cluster radius decreases. The uniformity assumption leads to an estimate of maximum cluster radius, rx ? K ?1/d [13]. However,
few interesting datasets are uniformly distributed, and when the assumption is violated, as in Fig. 2,
actual rx will decrease faster than K ?1/d , leading to over-clustering and increased running time.
Our solution is to perform clustering as part of the parameter selection process, obtaining the actual
cluster radii for each value of K. Using this approach, parameters are selected in a way that the
algorithm is tuned to the actual distribution of the sources.
We can take advantage of the incremental nature of some clustering algorithms such as the greedy algorithm proposed by Gonzalez [19] or the first phase of the Feder and Greene algorithm [20], which
provide a 2-approximation and 6-approximation of the optimal k-center clustering, respectively. We
can then increment the value K, obtain the maximum cluster radius, and then find the lowest p that
satisfies the error bound, picking the final value K which yields the lowest computational cost.
Note that if we simply set the maximum number of clusters to Klimit = N , we would spend
O(N log N ) time to estimate parameters. However, in practice, the optimal value of K is low
relative to N , and it is possible to detect when we cannot lower cost further by increasing K or
lowering pmax , thus allowing the search to terminate early. In addition, in Section 5, we show how
the data distribution allows us to intelligently choose Klimit .
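The joint search over K and p_max can be sketched as follows (Python; the greedy Gonzalez update is shown explicitly, while the cost model is a crude stand-in for the constants used by the actual implementation):

import math
import numpy as np

def smallest_p(rx, h, eps, p_cap=100):
    # smallest truncation number meeting the worst-case bound (1) at radius rx
    for p in range(1, p_cap + 1):
        dj = (rx + math.sqrt(rx**2 + 2 * p * h**2)) / 2.0
        b = (2.0**p / math.factorial(p)) * (rx * dj / h**2)**p \
            * math.exp(-((rx - dj)**2) / h**2)
        if b <= eps:
            return p
    return p_cap

def choose_K_pmax(X, h, eps, K_limit, d):
    # incremental Gonzalez k-center: each step adds the point farthest from
    # the current centers, so the actual radius for every K comes out for free
    n = len(X)
    dist = np.linalg.norm(X - X[0], axis=1)   # distance to nearest center
    best = None
    for K in range(1, K_limit + 1):
        rx = dist.max()                        # actual max cluster radius
        p = smallest_p(rx, h, eps)
        cost = 2 * n * math.comb(p - 1 + d, d) + d * n * K   # crude model
        if best is None or cost < best[0]:
            best = (cost, K, p)
        nxt = int(dist.argmax())               # Gonzalez: add farthest point
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return best[1], best[2]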
4.2
Individual Truncation Numbers by Cluster-wise Error Bounds
Once the maximum truncation number pmax is selected, we can guarantee that the worst sourcetarget pairwise error is below the desired error bound. However, simply setting each source and
target truncation number to pmax wastes computational resources since most source-target pairs do
not contribute much error. This problem is addressed in [13] by allowing each source to have its own
truncation number based on its distance from the cluster center and assuming the worst placement of any target. However, this means that each cluster will have to compute r_(p_i−1)d coefficients, where p_i is the truncation number of its farthest point.
[Figure 3 plot: speedup vs. bandwidth h for d = 1, ..., 6.]
Figure 3: Speedup obtained by using cluster-wise instead of point-wise truncation numbers, for
M = N = 4000, sources distributed as a mixture of 25 Gaussians N(µ ∼ U[0,1]^d, σ = 4^{−4} I), targets as U[0,1]^d, ε = 10^{−4}.
For d = 1, the gain from lowering truncation numbers is not large enough to make up for overhead costs.
We propose a method for further decreasing most individual source and target truncation numbers
by considering the total error incurred by evaluation at any target
X
$$|\hat g(y_j) - g(y_j)| \;\le \sum_{k:\, \|y_j - c_k\| \le r_y^k} \; \sum_{x_i \in S_k} |q_i|\, \epsilon_{ij} \;+ \sum_{k:\, \|y_j - c_k\| > r_y^k} \; \sum_{x_i \in S_k} |q_i|\, \epsilon$$
where the left term on the r.h.s. is the error from truncating the Taylor series for the clusters that
are within the cut-off radius, and the right term bounds the error from ignoring clusters outside the
cut-off radius, r_y. Instead of ensuring that ε_ij ≤ ε for all (i, j) pairs, we ensure
$$\sum_{x_i \in S_k} |q_i|\, \epsilon_{ij} \;\le \sum_{x_i \in S_k} |q_i|\, \epsilon \;=\; Q_k \epsilon$$
for all clusters. In this case, if a cluster is outside the cut-off radius, then the error incurred is no
greater than Q_k ε; otherwise, the cluster-wise error bounds guarantee that the error is still no greater
than Q_k ε. Summing over all clusters we have
$$|\hat g(y_j) - g(y_j)| \;\le\; \sum_k Q_k \epsilon \;=\; Q \epsilon,$$
our desired error bound. The lowest truncation number that satisfies the cluster-wise error for each
cluster is found in O(pmax N ) time by evaluating the cluster-wise error for all clusters for each
value of p = {1 . . . pmax }. In addition, we can find individual target point truncation numbers by
not only considering the worst case target distance r_y^k when computing cluster error contributions,
but considering target errors for sources at varying distance ranges from each cluster center. This
yields concentric regions around each cluster, each of which has its own truncation number, which
can be used for targets in that region. Our approach satisfies the error bound more tightly and reduces
computational cost (a sketch follows this list) because:
• Each cluster's maximum truncation number no longer depends only on its farthest point, so if most points are clustered close to the center the maximum truncation will be lower;
• The weight of each source point is considered in the error contributions, so if a source point is far away but has a weight of q_i = 0 its error contribution will be ignored; and finally
• Each target can use a truncation number that depends on its distance from the cluster.
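A sketch of the cluster-wise truncation selection described above (Python; it reuses the point-wise bound of Eq. 1 per source but accumulates the |q_i|-weighted error over each cluster):

import math
import numpy as np

def pointwise_bound(d_ik, h, p, r_y):
    # worst-case point-wise bound (1) for a source at distance d_ik
    d_j = min((d_ik + math.sqrt(d_ik**2 + 2 * p * h**2)) / 2.0, r_y)
    return (2.0**p / math.factorial(p)) * (d_ik * d_j / h**2)**p \
           * math.exp(-((d_ik - d_j)**2) / h**2)

def cluster_truncation(d, q, h, r_y, eps, p_max):
    """Smallest p with sum_i |q_i| eps_ij(p) <= eps * sum_i |q_i|.

    d: distances of the cluster's sources to its center; q: their weights.
    """
    Qk = np.abs(q).sum()
    for p in range(1, p_max + 1):
        err = sum(abs(qi) * pointwise_bound(di, h, p, r_y)
                  for qi, di in zip(q, d))
        if err <= eps * Qk:
            return p
    return p_max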
5 Automatic Tuning via Method Selection
For any input source and target point distribution, requested absolute error, and Gaussian bandwidth,
we have the option of evaluating the Gauss Transform using any one of four methods: direct, direct+tree, ifgt, and ifgt+tree. As Fig. 4 shows, choosing the wrong method can result in orders of
magnitude more time to evaluate the sum. Thus, we require an efficient scheme to automatically
choose the best method online based on the input. The scheme must use the distribution of both the
source and target points in making its decision, while at the same time avoiding long computations
that would defeat the purpose of automatic method selection.
Note that if we know d, M, N, n_s, n_c, K, and pmax, we can calculate the cost of each method:

Cost_direct(d, N, M) = O(dMN)
Cost_direct+tree(d, N, M, n_s) = O(d(N + M n_s) log N)
Cost_ifgt(d, N, M, K, n_c, pmax) = O(dN log K + (N + M n_c) r_{(pmax−1)d} + dMK)
Cost_ifgt+tree(d, N, M, K, n_c, pmax) = O((N + M n_c)(d log K + r_{(pmax−1)d}))
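These asymptotic formulas are easy to turn into comparable numeric estimates. A minimal sketch, assuming r_{(p−1)d} is the usual count of d-variate Taylor-expansion terms of total degree below p, i.e. C(p−1+d, d) as in [13], and ignoring the implementation-specific constants discussed after Algorithm 1:

import math

def n_terms(p, d):
    # r_{(p-1)d}: number of d-variate Taylor terms of total degree < p,
    # i.e. C(p-1+d, d) (the count used in the IFGT literature [13]).
    return math.comb(p - 1 + d, d)

def cost_direct(d, N, M):
    return d * M * N

def cost_direct_tree(d, N, M, ns):
    return d * (N + M * ns) * math.log(N)

def cost_ifgt(d, N, M, K, nc, pmax):
    return d * N * math.log(K) + (N + M * nc) * n_terms(pmax, d) + d * M * K

def cost_ifgt_tree(d, N, M, K, nc, pmax):
    return (N + M * nc) * (d * math.log(K) + n_terms(pmax, d))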
Figure 4: Running times of the four methods (direct, direct+tree, ifgt, ifgt+tree) and our automatic method selection for M = N = 4000, sources distributed as a mixture of 25 N(µ ∼ U[0,1]^d, Σ = 4⁻⁴ I), targets as U[0,1]^d, ε = 10⁻⁴. Left: CPU time (seconds) versus bandwidth h, example for d = 4. Right: ratio of automatic to fastest method and of automatic to slowest method for d = 1, ..., 6, showing that method selection incurs very small overhead while preventing potentially large slowdowns.
Algorithm 1 Method Selection
1: Calculate n̂_s, an estimate of n_s
2: Calculate Cost_direct(d, N, M) and Cost_direct+tree(d, N, M, n̂_s)
3: Calculate the highest Klimit ≥ 0 such that for some n_c and pmax, min(Cost_ifgt, Cost_ifgt+tree) ≤ min(Cost_direct, Cost_direct+tree)
4: if Klimit > 0 then
5:   Compute pmax and K ≤ Klimit that minimize the estimated cost of IFGT
6:   Calculate n̂_c, an estimate of n_c
7:   Calculate Cost_ifgt+tree(d, N, M, K, n̂_c, pmax) and Cost_ifgt(d, N, M, K, n̂_c, pmax)
8: end if
9: return arg min_i Cost_i
More precise equations and the correct constants that relate the four costs can be obtained directly from the specific implementation of each method (this could be done by inspection, or automatically offline or at compile time to account for hardware). A simple approach to estimating the distribution-dependent n_s and n_c is to build a tree on sample source points and compute the average number of neighbors to a sampled set of targets. The asymptotic complexity of this approximation is the same as that of direct+tree, unless sub-linear sampling is used at the expense of accuracy in predicting cost. However, n_s and n_c can be estimated in O(M + N) time even without sampling by using techniques from the field of database management systems for estimating spatial join selectivity [21]. Given n_s, we predict the cost of direct+tree, and estimate Klimit as the highest value that might yield lower costs than direct or direct+tree. If Klimit > 0, we can then estimate the parameters and costs of ifgt or ifgt+tree. Finally, we pick the method with the lowest cost. As Figure 4 shows, our method selection approach chooses the correct method across bandwidths at very low computational cost.
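The selection logic can be sketched in a few lines, reusing the cost functions from the sketch after the cost table above. Here estimate_ns, estimate_nc, and choose_ifgt_params are hypothetical callables standing in for the estimation procedures just described, and the loop bounding Klimit is a crude proxy for step 3 of Algorithm 1 (it keeps only the K-dependent terms of the IFGT cost as a lower bound), not the paper's exact rule.

import math

def select_method(d, N, M, estimate_ns, estimate_nc, choose_ifgt_params):
    ns = estimate_ns()
    costs = {"direct": cost_direct(d, N, M),
             "direct+tree": cost_direct_tree(d, N, M, ns)}
    best_direct = min(costs.values())

    # Crude stand-in for step 3: Cost_ifgt >= dN log K + dMK, so stop
    # raising K once even this lower bound exceeds the direct-style costs.
    K_limit, K = 0, 1
    while d * N * math.log(K + 1) + d * M * (K + 1) <= best_direct:
        K += 1
        K_limit = K

    if K_limit > 0:
        K, pmax = choose_ifgt_params(K_limit)  # minimize IFGT cost, K <= K_limit
        nc = estimate_nc(K)
        costs["ifgt"] = cost_ifgt(d, N, M, K, nc, pmax)
        costs["ifgt+tree"] = cost_ifgt_tree(d, N, M, K, nc, pmax)
    return min(costs, key=costs.get)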
6 Experiments
Performance Across Bandwidths. We empirically evaluate our method on the same six real-world datasets as in [10] and compare against the authors' reported results. As in [10], we scale the data to fit the unit hypercube and evaluate the Gauss transform using all 50K points as sources and targets, with bandwidths varying from 10⁻³ to 10³ times the optimal bandwidth. Because our method satisfies an absolute error, we use for the absolute error ε the highest value that guarantees a relative error of 10⁻² (to achieve this, ε ranges from 10⁻¹ to 10⁻⁴ by factors of 10). We do not include the time required to choose ε (since we are doing this only to evaluate the running times of the two methods for the same relative errors), but we do include the time to automatically select the method and parameters. Since the code of [10] is not currently available, our experiments do not use the same machine as [10], and the CPU times are scaled based on the reported/computed times needed by the naive approach on the corresponding machines. Figure 5 shows the normalized running times of our method versus the Dual-Tree methods DFD, DFDO, DFTO, and DITO. For most bandwidths our method is generally faster by about one order of magnitude (sometimes as much as 1000 times faster). For near-optimal bandwidths, our approach is either faster than or comparable to the other approaches.
Figure 5: Comparison with Dual-Tree methods for six real-world datasets (lower is faster). Each panel plots CPU time relative to the naive method against the bandwidth scale h/h*: sj2 (d = 1, h* = 0.001395), mockgalaxy (d = 3, h* = 0.000768), bio5 (d = 5, h* = 0.000567), pall7 (d = 7, h* = 0.001319), covtype (d = 10, h* = 0.015476), and CoocTexture (d = 16, h* = 0.026396).

Gaussian Process Regression. Gaussian process regression (GPR) [22] provides a Bayesian framework for non-parametric regression. The computational complexity of straightforward GPR is O(N³), which is undesirable for large datasets. The core computation in GPR involves the solution of a linear system for the dense covariance matrix K + σ²I, where [K]_ij = K(x_i, x_j). Our method can be used to accelerate this solution for Gaussian processes with Gaussian covariance, given by K(x, x′) = σ_f² exp(−Σ_{k=1}^d (x_k − x′_k)² / h_k²) [22]. Given the training set D = {x_i, y_i}_{i=1}^N and a new point x*, the training phase involves computing α = (K + σ²I)⁻¹ y, and the prediction of y* is given by y* = k(x*)ᵀ α, where k(x*) = [K(x*, x_1), ..., K(x*, x_N)]. The system can be solved efficiently by a conjugate gradient method using IFGT for matrix-vector multiplication. Further, the accuracy of the matrix-vector product can be reduced as the iterations proceed (i.e., ε is modified every iteration) if we use inexact Krylov subspaces [23] for the conjugate gradient iterations.
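As an illustration of the training phase, here is a minimal conjugate-gradient sketch in which the product Kv is supplied by a callable (for instance an IFGT evaluator, or the automatic selector choosing a method per iteration). It uses plain CG with a fixed tolerance, not the inexact-Krylov scheme of [23]; the matvec interface is our assumption, not code from the paper.

import numpy as np

def gp_train_cg(matvec, y, sigma2, tol=1e-6, max_iter=200):
    # Solve (K + sigma^2 I) alpha = y for the GPR weight vector alpha,
    # where matvec(v) returns an (approximate) product K @ v.
    alpha = np.zeros_like(y)
    r = y.copy()          # residual for the zero initial guess
    p = r.copy()
    rs = float(r @ r)
    for _ in range(max_iter):
        Ap = matvec(p) + sigma2 * p
        step = rs / float(p @ Ap)
        alpha += step * p
        r -= step * Ap
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha

The prediction for a new point then needs only the dot product k(x*)ᵀ alpha, which can itself be accelerated in the same way.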
We apply our method for Gaussian process regression on four standard datasets: robotarm, abalone, housing, and elevator². We present the results of the training phase (though we also speed up the prediction phase). For each dataset we ran five experiments: the first four fixed one of the four methods (direct, direct+tree, ifgt, ifgt+tree) and used it for all conjugate gradient iterations; the fifth automatically selected the best method at each iteration (denoted by auto in Figure 6). To validate our solutions, we measured the relative error between the vectors found by the direct method and our approximate methods; the errors were small, ranging from about 10⁻¹⁰ to about 10⁻⁵. As expected, auto chose the correct method for each dataset, incurring only a small overhead cost. Also, for the abalone dataset, auto outperformed all of the fixed-method experiments; as the right side of Figure 6 shows, halfway through the iterations, the required accuracy decreased enough to make ifgt faster than direct evaluation. By switching methods dynamically, the automatic selection approach outperformed any fixed method, further demonstrating the usefulness of our online tuning approach.
Fast Particle Smoothing. Finally, we embed our automatic method selection in the two-filter particle smoothing demo provided by the authors of [3]³. For a data size of 1000 and a tolerance set at 10⁻⁶, the run-times are 18.26s, 90.28s and 0.56s for the direct, dual-tree and automatic (ifgt was chosen) methods respectively. The RMS error of all methods from the ground truth values was observed as 2.904 × 10⁻⁴.
² The last three datasets can be downloaded from http://www.liaad.up.pt/~ltorgo/Regression/DataSets.html; the first, robotarm, is a synthetic dataset generated as in [2].
³ The code was downloaded from http://www.cs.ubc.ca/~awll/nbody/demos.html
Figure 6: GPR Results. Left: CPU times, reproduced in the table below. Right: Desired accuracy (−log(desired accuracy)) per iteration for the abalone dataset, comparing IFGT and the direct method over 20 iterations.

            Robotarm  Abalone  Housing  Elevator
Dims        2         7        12       18
Size        1000      4177     506      8752
direct      0.578s    16.1s    0.313s   132s
ifgt        0.0781s   32.3s    1317s    133s
direct+tree 5.45s     328s     2.27s    0.516s
ifgt+tree   0.0781s   35.2s    549s     101s
auto        0.0938s   14.5s    0.547s   0.797s
7 Conclusion
We presented an automatic online tuning approach to Gaussian summations that combines a tree data structure with IFGT, is well suited for both high and low bandwidths, and can be treated by users as a black box. The approach also tunes the IFGT parameters to the source distribution, and provides tighter error bounds. Experiments demonstrated that our approach outperforms competing methods for most bandwidth settings, and dynamically adapts to various datasets and input parameters.
Acknowledgments. We would like to thank the U.S. Government VACE program for supporting
this work. This work was also supported by a NOAA-ESDIS Grant to ASIEP at UMD.
References
[1] M.P. Wand and M.C. Jones. Kernel Smoothing. Chapman and Hall, 1995.
[2] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In NIPS, 1995.
[3] M. Klaas, M. Briers, N. de Freitas, A. Doucet, S. Maskell, and D. Lang. Fast particle smoothing: if I had
a million particles. In ICML, 2006.
[4] N. de Freitas, Y. Wang, M. Mahdaviani, and D. Lang. Fast Krylov methods for N-body learning. In NIPS,
2006.
[5] L. Greengard and J. Strain. The fast Gauss transform. SIAM J. Sci. Stat. Comput., 1991.
[6] C. Yang, R. Duraiswami, N. A. Gumerov, and L. S. Davis. Improved fast Gauss transform and efficient
kernel density estimation. In ICCV, 2003.
[7] A. G. Gray and A. W. Moore. ?N-body? problems in statistical learning. In NIPS, 2000.
[8] A. G. Gray and A. W. Moore. Nonparametric density estimation: Toward computational tractability. In
SIAM Data Mining, 2003.
[9] D. Lee, A. Gray, and A. Moore. Dual-tree fast Gauss transforms. In NIPS, 2006.
[10] D. Lee and A. G. Gray. Faster Gaussian summation: Theory and experiment. In UAI, 2006.
[11] B. W. Silverman. Density estimation for statistics and data analysis. Chapman and Hal, 1986.
[12] C. Yang, R. Duraiswami, and L. S. Davis. Efficient kernel machines using the improved fast Gauss
transform. In NIPS, 2004.
[13] V. Raykar, C. Yang, R. Duraiswami, and N. Gumerov. Fast computation of sums of Gaussians in high
dimensions. UMD-CS-TR-4767, 2005.
[14] D. Lang, M. Klaas, and N. de Freitas. Empirical testing of fast kernel density estimation algorithms.
Technical Report UBC TR-2005-03, University of British Columbia, Vancouver, 2005.
[15] S. Arya and D. Mount. Approximate nearest neighbor queries in fixed dimensions. In SODA, 1993.
[16] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM, 1998.
[17] M. Frigo and S. G. Johnson. The design and implementation of FFTW3. Proceedings of the IEEE, 2005.
[18] R. C. Whaley, A. Petitet, and J. J. Dongarra. Automated empirical optimization of software and the ATLAS project. Parallel Computing, 27(1–2):3–35, 2001.
[19] T. F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, October 1985.
[20] T. Feder and D. H. Greene. Optimal algorithms for approximate clustering. In STOC, 1988.
[21] C. Faloutsos, B. Seeger, A. Traina, and C. Traina. Spatial join selectivity using power laws. In SIGMOD
Conference, 2000.
[22] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[23] V. Simoncini and D. Szyld. Theory of inexact Krylov subspace methods and applications to scientific
computing. Technical Report 02-4-12, Temple University, 2002.
2,671 | 3,421 | Interpreting the Neural Code with Formal Concept Analysis

Dominik Endres, Peter Földiák
School of Psychology, University of St. Andrews
KY16 9JP, UK
{dme2,pf2}@st-andrews.ac.uk
Abstract
We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we
demonstrate how to explore the semantic relationships in the neural representation
of large sets of stimuli. FCA provides a way of displaying and interpreting such
relationships via concept lattices. We explore the effects of neural code sparsity on
the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context
needed by FCA. Prominent features of the resulting concept lattices are discussed,
including hierarchical face representation and indications for a product-of-experts
code in real neurons.
1 Introduction
Mammalian brains consist of billions of neurons, each capable of independent electrical activity.
From an information-theoretic perspective, the patterns of activation of these neurons can be understood as the codewords comprising the neural code. The neural code describes which pattern of
activity corresponds to what information item. We are interested in the (high-level) visual system,
where such items may indicate the presence of a stimulus object or the value of some stimulus attribute, assuming that each time this item is represented the neural activity pattern will be the same
or at least similar. Neural decoding is the attempt to reconstruct the stimulus from the observed pattern of activation in a given population of neurons [1, 2, 3, 4]. Popular decoding quality measures,
such as Fisher's linear discriminant [5] or mutual information [6] capture how accurately a stimulus can be determined from a neural activity pattern (e.g., [4]). While these measures are certainly
useful, they tell us little about the structure of the neural code, which is what we are concerned
with here. Furthermore, we would also like to elucidate how this structure relates to the represented
information items, i.e. we are interested in the semantic aspects of the neural code.
To explore the relationship between the representations of related items, Földiák [7] demonstrated that a sparse neural code can be interpreted as a graph (a kind of 'semantic net'). In this interpretation, the neural responses are assumed to be binary (active/inactive). Each codeword can then be represented as a set of active units (a subset of all units). The codewords can now be partially ordered under set inclusion: codeword A ≤ codeword B iff the set of active neurons of A is a subset of the active neurons of B.
between the represented information items. There is a duality between the information items and
the sets representing them: a more general class corresponds to a smaller subset of active neurons,
and more specific items are represented by larger sets [7]. Additionally, storing codewords as sets is
especially efficient for sparse codes. The resulting graphs (lattices) are an interesting representation
of the relationships implicit in the code.
We would also like to be able to represent how the relationship between sets of active neurons translates into the corresponding relationship between the encoded stimuli. These observations can be
formalized by the well developed branch of mathematical order theory called Formal Concept Analysis (FCA) [8, 9]. In FCA, data from a binary relation (or formal context) is represented as a concept
lattice. Each concept has a set of formal objects as an extent and a set of formal attributes as an intent. In our application, the stimuli are the formal objects, and the neurons are the formal attributes.
The FCA approach exploits the duality of extensional and intensional descriptions and allows to
visually explore the data in lattice diagrams. FCA has shown to be useful for data exploration and
knowledge discovery in numerous applications in a variety of fields [10, 11].
We give a short introduction to FCA in section 2 and demonstrate how the sparseness (or denseness)
of the neural code affects the structure of the concept lattice in section 3. Section 4 describes the
generative classifier model which we use to build the formal context from the responses of neurons
in the high-level visual cortex of monkeys. Finally, we discuss the concept lattices so obtained in
section 5.
2 Formal Concept Analysis

Central to FCA [9] is the notion of the formal context K := (G, M, I), which is comprised of a set of formal objects G, a set of formal attributes M and a binary relation I ⊆ G × M between members of G and M. In our application, the members of G are visual stimuli, whereas the members of M are the neurons. If neuron m ∈ M responds when stimulus g ∈ G is presented, then we write (g, m) ∈ I or gIm. It is customary to represent the context as a cross table, where the row (column) headings are the object (attribute) names. For each pair (g, m) ∈ I, the corresponding cell in the cross table has an 'x'. Table 1, left, shows a simple example context.
             n1   n2   n3
monkeyFace    x    x
monkeyHand         x
humanFace     x
spider                  x

concept   extent (stimuli)          intent (neurons)
0         ALL                       NONE
1         spider                    n3
2         humanFace, monkeyFace     n1
3         monkeyFace, monkeyHand    n2
4         monkeyFace                n1, n2
5         NONE                      ALL

Table 1: Left: a simple example context, represented as a cross-table. The objects (rows) are 4 visual stimuli, the attributes (columns) are 3 (hypothetical) neurons n1, n2, n3. An 'x' in a cell indicates that a stimulus elicited a response from the corresponding neuron. Right: the concepts of this context. Concepts are lectically ordered [9]. Colors correspond to Fig. 1.
Define the prime operator for subsets A ⊆ G as A′ = {m ∈ M | ∀g ∈ A : gIm}, i.e. A′ is the set of all attributes shared by the objects in A. Likewise, for B ⊆ M define B′ = {g ∈ G | ∀m ∈ B : gIm}, i.e. B′ is the set of all objects having all attributes in B.

Definition 2.1 [9] A formal concept of the context K is a pair (A, B) with A ⊆ G, B ⊆ M such that A′ = B and B′ = A. A is called the extent and B is the intent of the concept (A, B). B(K) denotes the set of all concepts of the context K.

In other words, given the relation I, (A, B) is a concept if A determines B and vice versa. A and B are sometimes called closed subsets of G and M with respect to I. Table 1, right, lists all concepts of the context in Table 1, left. One can visualize the defining property of a concept as follows: if (A, B) is a concept, reorder the rows and columns of the cross table such that all objects in A are in adjacent rows, and all attributes in B are in adjacent columns. The cells corresponding to all g ∈ A and m ∈ B then form a rectangular block of 'x's with no empty spaces in between. In the example above, this can be seen (without reordering rows and columns) for concepts 1, 3, 4. For a graphical representation of the relationships between concepts, one defines an order on B(K):

Definition 2.2 [9] If (A1, B1) and (A2, B2) are concepts of a context, (A1, B1) is a subconcept of (A2, B2) if A1 ⊆ A2 (which is equivalent to B1 ⊇ B2). In this case, (A2, B2) is a superconcept of (A1, B1) and we write (A1, B1) ≤ (A2, B2). The relation ≤ is called the order of the concepts.
It can be shown [8, 9] that B(K) and the concept order form a complete lattice. The concept lattice
of the context in table 1, with full and reduced labeling, is shown in fig.1. Full labeling means that
a concept node is depicted with its full extent and intent. A reduced labeled concept lattice shows
an object only in the smallest (w.r.t. the concept order of definition 2.2) concept of whose extent
the object is a member. This concept is called the object concept, or the concept that introduces
the object. Likewise, an attribute is shown only in the largest concept of whose intent the attribute
is a member, the attribute concept, which introduces the attribute. The closedness of extents and
intents has an important consequence for neuroscientific applications. Adding attributes to M (e.g.
responses of additional neurons) will very probably grow B(K). However, the original concepts
will be embedded as a substructure in the larger lattice, with their ordering relationships preserved.
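To make the definitions concrete, the following brute-force sketch (illustrative only, not the Colibri Concepts tool used in Section 5) enumerates the concepts of the Table 1 context by closing every attribute subset; the dictionary layout of the context is our own choice.

from itertools import chain, combinations

# The context of Table 1: stimulus (object) -> set of responding neurons.
context = {
    "monkeyFace": {"n1", "n2"},
    "monkeyHand": {"n2"},
    "humanFace":  {"n1"},
    "spider":     {"n3"},
}
attributes = {"n1", "n2", "n3"}

def prime_objects(A):
    # A' = attributes shared by all objects in A (all attributes if A is empty).
    sets = [context[g] for g in A]
    return set.intersection(*sets) if sets else set(attributes)

def prime_attributes(B):
    # B' = objects having all attributes in B.
    return {g for g, attrs in context.items() if B <= attrs}

def concepts():
    # Close every attribute subset B: A = B', then B'' = A'; each closed
    # pair (A, B'') is a formal concept. Brute force, fine for toy contexts.
    found = set()
    subsets = chain.from_iterable(combinations(sorted(attributes), r)
                                  for r in range(len(attributes) + 1))
    for B in map(set, subsets):
        A = prime_attributes(B)
        found.add((frozenset(A), frozenset(prime_objects(A))))
    return found

for A, B in sorted(concepts(), key=lambda c: (len(c[1]), sorted(c[1]))):
    print(sorted(A), sorted(B))   # prints the six concepts of Table 1, right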
Figure 1: Concept lattice computed from the context in table 1. Each node is a concept, arrows
represent superconcept relation, i.e. an arrow from X to Y reads: X is a superconcept of Y . Colors
correspond to table 1, right. The number in the leftmost compartment is the concept number. Middle
compartment contains the extent, rightmost compartment the intent. Left: fully labeled concepts,
i.e. all members of extents and intents are listed in each concept node. Right: reduced labeling.
An object/attribute is only listed in the extent/intent of the smallest/largest concept that contains it.
Reduced labeling is very useful for drawing large concept lattices.
The lattice diagrams make the ordering relationship between the concepts graphically explicit: concept 3 contains all 'monkey-related' stimuli, concept 2 encompasses all 'faces'. They have a common child, concept 4, which is the 'monkeyFace' concept. The 'spider' concept (concept 1) is incomparable to any other concept except the top and the bottom of the lattice. Note that these relationships arise as a consequence of the (here hypothetical) response behavior of the neurons. We will show (section 5) that the response patterns of real neurons can lead to similarly interpretable structures.

From a decoding perspective, a fully labeled concept shows those stimuli that have activated at least the set of neurons in the intent. In contrast, the stimuli associated with a concept in reduced labeling will activate the set of neurons in the intent, but no others. The fully labeled concepts show stimuli encoded by activity of the active neurons of the concept without knowledge of the firing state of the other neurons. Reduced labels, on the other hand, show those stimuli that elicited a response only from the neurons in the intent.
3 Concept lattices of local, sparse and dense codes
One feature of neural codes which has attracted a considerable amount of interest is their sparseness.
In the case of a binary neural code, the sparseness of a codeword is inversely related to the fraction of
active neurons. The average sparseness across all codewords is the sparseness of the code [12, 13].
Sparse codes, i.e. codes where this fraction is low, are found interesting for a variety of reasons:
they offer a good compromise between encoding capacity, ease of decoding and robustness [14],
they seem to be employed in the mammalian visual processing system [15] and they are well suited
to representing the visual environment we live in [15, 16]. It is also possible to define sparseness
for graded or even continuous-valued responses (see e.g. [17, 4, 13]). To study what structural
effects different levels of sparseness would have on a neural code, we generated random codes, i.e.
each of 10 stimuli was associated with randomly drawn responses of 10 neurons, subject to the
constraints that the code be perfectly decodable and that the sparseness of each codeword was equal
to the sparseness of the code. Fig.2 shows the contexts (represented as cross-tables) and the concept
lattices of a local code (activity ratio 0.1), a sparse code (activity ratio 0.2) and a dense code (activity
ratio 0.5).
Figure 2: Contexts (represented as cross-tables of 10 stimuli by 10 neurons) and concept lattices for a local, sparse and dense random neural code. Each context was built out of the responses of 10 (hypothetical) neurons to 10 stimuli. Each node represents a concept; the left (right) compartment contains the number of introduced stimuli (neurons). In a local code, the response patterns to different stimuli have no overlapping activations, hence the lattice representing this code is an antichain with top and bottom element added. Each concept in the antichain introduces (at least) one stimulus and (at least) one neuron. In contrast, a dense code results in a lot of concepts which introduce neither a stimulus nor a neuron. The lattice of the dense code is also substantially longer than that of the sparse and local codes.
The most obvious difference between the lattices is the total number of concepts. A dense code,
even for a small number of stimuli, will give rise to a lot of concepts, because the neuron sets representing the stimuli are very probably going to have non-empty intersections. These intersections are
potentially the intents of concepts which are larger than those concepts that introduce the stimuli.
Hence, the latter are found towards the bottom of the lattice. This implies that they have large intents,
which is of course a consequence of the density of the code. Determining these intents thus requires
the observation of a large number of neurons, which is unappealing from a decoding perspective.
The local code does not have this drawback, but is hampered by a small encoding capacity (maximal
number of concepts with non-empty extents): the concept lattice in fig.2 is the largest one which can
be constructed for a local code comprised of 10 binary neurons. Which of the above structures is
most appropriate depends on the conceptual structure of environment to be encoded.
4 Building a formal context from responses of high-level visual neurons
To explore whether FCA is a suitable tool for interpreting real neural codes, we constructed formal
contexts from the responses of high-level visual cortical cells in area STSa (part of the temporal lobe)
of monkeys. Characterizing the responses of these cells is a difficult task. They exhibit complex
nonlinearities and invariances which make it impossible to apply linear techniques, such as reverse
correlation [18, 19, 20]. The concept lattice obtained by FCA might enable us to display and browse
these invariances: if the response of a subset of cells indicates the presence of an invariant feature
in a stimulus, then all stimuli having this feature should form the extent of a concept whose intent
is given by the responding cells, much like the ?monkey? and ?face? concepts in the example in
section 2.
4.1 Physiological data
The data were obtained through [21], where the experimental details can be found. Briefly, spike
trains were obtained from neurons within the upper and lower banks of the superior temporal sulcus
(STSa) via standard extracellular recording techniques [22] from an awake and behaving monkey
(Macaca mulatta) performing a fixation task. This area contains cells which are responsive to faces.
The recorded firing patterns were turned into distinct samples, each of which contained the spikes from 300 ms before to 600 ms after the stimulus onset, with a temporal resolution of 1 ms. The stimulus set consisted of 1704 images, containing color and black and white views of human and monkey head and body, animals, fruits, natural outdoor scenes, abstract drawings and cartoons. Stimuli were presented for 55 ms each without inter-stimulus gaps in random sequences. While this rapid serial visual presentation (RSVP) paradigm complicates the task of extracting stimulus-related information from the spiketrains, it has the advantage of allowing for the testing of a large number of stimuli. A given cell was tested on a subset of 600 or 1200 of these stimuli; each stimulus was presented between 1 and 15 times.
4.2 Bayesian thresholding
Before we can apply FCA, we need to extract a binary attribute from the raw spiketrains. While
FCA can also deal with many-valued attributes, see [23, 9], we will employ binary thresholding as a
starting point. Moreover, when time windows are limited (e.g. in the RSVP condition) it is usually
impossible to extract more than 1 bit of stimulus identity-related information from a spiketrain per
stimulus [24]. We do not suggest that real neurons have a binary activation function. We are merely
concerned with finding a maximally informative response binarization, to allow for the construction
of meaningful concepts. We do this by Bayesian thresholding, as detailed in appendix A. This
procedure also avails us of a null hypothesis H0 =?the responses contain no information about the
stimuli?.
4.3 Cell selection
The experimental data consisted of recordings from 26 cells. To minimize the risk that the computed neural responses were a result of random fluctuations, we excluded a cell if 1) H0 was more probable than 10⁻⁶, or 2) the posterior standard deviations of the counting window parameters were larger than 20 ms, indicating large uncertainties about the response timing. Cells which did not respond above the threshold included all cells excluded by the above criteria (except one). Furthermore, since not all cells were tested on all stimuli, we also had to select pairs of subsets of cells and stimuli such that all cells in a pair were tested on all stimuli. Incidentally, this selection can also be accomplished with FCA, by determining the concepts of a context with gJm = 'stimulus g was tested on cell m' and selecting those with a large number of stimuli × number of cells. Two of these cell and stimulus subset pairs ('A', containing 364 stimuli and 13 cells, and 'B', containing 600 stimuli and 12 cells) were selected for further analysis.
5 Results

To analyze the neural code, the thresholded neural responses were used to build stimulus-by-cell-response contexts. We performed FCA on these with Colibri Concepts¹, created stimulus image montages and plotted the lattices². The complete concept lattices were too large to display on a page. Graphs of lattices A and B with reduced labeling on the stimuli are included in the supplementary material (files latticeA_neuroFCA.pdf and latticeB_neuroFCA.pdf). In these graphs, the top of the frame around each concept image contains the concept number and the list of cells in the intent.

¹ see http://code.google.com/p/colibri-concepts/
² with ImageMagick, http://www.imagemagick.org and Graphviz, http://www.graphviz.org

Figure 3: A: a subgraph of lattice A with reduced labeling on the stimuli, i.e. stimuli are only shown in their object concepts. The ∩ indicates that an extent is the intersection of its superconcepts' extents, i.e. no new stimuli were introduced by this concept. All cells forming this part of the concept lattice were responsive to faces. B: a subgraph of lattice B, fully labeled. The concepts on the right side are not exclusively 'face' concepts, but most members of their extents have something 'roundish' about them.
Fig. 3, A shows a subgraph from lattice A, which exclusively contained 'face' concepts. This subgraph, with full labeling, is also part of the supplementary material (file faceSubgraphLatticeA_neuroFCA.pdf). The top concepts introduce human and cartoon faces, i.e. their extents consist of general 'face' images, while their intents are small (3 cells). In contrast, the lower concepts introduce mostly single monkey faces, with the bottom concepts having an intent of 7 cells. We may interpret this as an indication that the neural code has a higher 'resolution' for faces of conspecifics than for faces in general, i.e. other monkeys are represented in greater detail in a monkey's brain than humans or cartoons. This feature can be observed in most lattices we generated.
Fig. 3, B shows a subgraph from lattice B with full labeling. The concepts in the left half of the graph are face concepts, whereas the extents of the concepts in the right half also contain a number of non-face stimuli. Most of the latter have something 'roundish' about them. The bottom concept, being subordinate to both the 'round' and the 'face' concepts, encompasses stimuli with both characteristics, which points towards a product-of-experts encoding [25]. This example also highlights another advantage of FCA over standard hierarchical analysis techniques, e.g. hierarchical clustering: it does not impose a tree structure when the data do not support it (a shortcoming of the analysis in [26]).
For preliminary validation, we experimented with stimulus shuffling (i.e. randomly assigning stimuli to the recorded responses) to determine whether the found concepts are indeed meaningful. This procedure leaves the lattice structure intact, but mixes up the extents. A 'naive' observer was then no longer able to label the concepts (as in Fig. 3: 'round', 'face' or 'conspecifics'). Evidence of concept stability was obtained by trying different binarization thresholds: as stated in appendix A, we used a threshold probability of 0.5. This threshold can be raised up to 0.7 without losing any of the conceptual structures described in Fig. 3, although some of the stimuli migrate upwards in the lattice.
6 Conclusion
We demonstrated the potential usefulness of FCA for the exploration and interpretation of neural
codes. This technique is feasible even for high-level visual codes, where linear decoding methods
[19, 20] fail, and it provides qualitative information about the structure of the code which goes
beyond stimulus label decoding [4]. Clearly, this application of FCA is still in its infancy. It would
be very interesting to repeat the analysis presented here on data obtained from simultaneous multicell recordings, to elucidate whether the conceptual structures derived by FCA are used for decoding
by real brains. On a larger scale than single neurons, FCA could also be employed to study the
relationships in fMRI data [27].
Acknowledgment D. Endres was supported by MRC fellowship G0501319.
References
[1] A. P. Georgopoulos, A. B. Schwartz, and R. E. Kettner. Neuronal population coding of movement direction. Science, 233(4771):1416–1419, 1986.
[2] P. Földiák. The 'Ideal Homunculus': Decoding neural population responses by Bayesian inference. Perception, 22 suppl:43, 1993.
[3] M. W. Oram, P. Földiák, D. I. Perrett, and F. Sengpiel. The 'Ideal Homunculus': decoding neural population signals. Trends in Neurosciences, 21:259–265, June 1998.
[4] R. Q. Quiroga, L. Reddy, C. Koch, and I. Fried. Decoding visual inputs from multiple neurons in the human temporal lobe. J Neurophysiol, 98(4):1997–2007, 2007.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, New York, Chichester, 2001.
[6] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991.
[7] P. Földiák. Sparse neural representation for semantic indexing. In XIII Conference of the European Society of Cognitive Psychology (ESCOP-2003), 2003. http://www.st-andrews.ac.uk/~pf2/escopill2.pdf.
[8] R. Wille. Restructuring lattice theory: an approach based on hierarchies of concepts. In I. Rival, editor, Ordered Sets, pages 445–470. Reidel, Dordrecht-Boston, 1982.
[9] Bernhard Ganter and Rudolf Wille. Formal Concept Analysis: Mathematical Foundations. Springer, 1999.
[10] B. Ganter, G. Stumme, and R. Wille, editors. Formal Concept Analysis, Foundations and Applications, volume 3626 of Lecture Notes in Computer Science. Springer, 2005.
[11] U. Priss. Formal concept analysis in information science. Annual Review of Information Science and Technology, 40:521–543, 2006.
[12] P. Földiák. Sparse coding in the primate cortex. In Michael A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 1064–1068. MIT Press, second edition, 2002.
[13] P. Földiák and D. Endres. Sparse coding. Scholarpedia, 3(1):2984, 2008. http://www.scholarpedia.org/article/Sparse_coding.
[14] P. Földiák. Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics, 64:165–170, 1990.
[15] B. A. Olshausen, D. J. Field, and A. Pelah. Sparse coding with an overcomplete basis set: a strategy employed by V1. Vision Res., 37(23):3311–3325, 1997.
[16] E. P. Simoncelli and B. A. Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193–1216, 2001.
[17] E. T. Rolls and A. Treves. The relative advantages of sparse versus distributed encoding for neuronal networks in the brain. Network, 1:407–421, 1990.
[18] P. Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, London, Cambridge, 2001.
[19] J. P. Jones and L. A. Palmer. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1233–1258, 1987.
[20] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455–463, 2002.
[21] P. Földiák, D. Xiao, C. Keysers, R. Edwards, and D. I. Perrett. Rapid serial visual presentation for the determination of neural selectivity in area STSa. Progress in Brain Research, pages 107–116, 2004.
[22] M. W. Oram and D. I. Perrett. Time course of neural responses discriminating different views of the face and head. Journal of Neurophysiology, 68(1):70–84, 1992.
[23] R. Wille and F. Lehmann. A triadic approach to formal concept analysis. In G. Ellis, R. Levinson, W. Rich, and J. F. Sowa, editors, Conceptual Structures: Applications, Implementation and Theory, pages 32–43. Springer, Berlin-Heidelberg-New York, 1995.
[24] D. Endres. Bayesian and Information-Theoretic Tools for Neuroscience. PhD thesis, School of Psychology, University of St. Andrews, U.K., 2006. http://hdl.handle.net/10023/162.
[25] G. E. Hinton. Products of experts. In Ninth International Conference on Artificial Neural Networks (ICANN 99), number 470, 1999.
[26] R. Kiani, H. Esteky, K. Mirpour, and K. Tanaka. Object category structure in response patterns of neuronal population in monkey inferior temporal cortex. Journal of Neurophysiology, 97(6):4296–4309, April 2007.
[27] K. N. Kay, T. Naselaris, R. J. Prenger, and J. L. Gallant. Identifying natural images from human brain activity. Nature, 452:352–355, 2008. http://dx.doi.org/10.1038/nature06713.
[28] D. Endres and P. Földiák. Exact Bayesian bin classification: a fast alternative to Bayesian classification and its application to neural response analysis. Journal of Computational Neuroscience, 24(1):24–35, 2008. DOI: 10.1007/s10827-007-0039-5.
A Method of Bayesian thresholding
A standard way of obtaining binary responses from neurons is thresholding the spike count within
a certain time window. This is a relatively straightforward task, if the stimuli are presented well
separated in time and a lot of trials per stimulus are available. Then latencies and response offsets
are often clearly discernible and thus choosing the time window is not too difficult. However, under
RSVP conditions with few trials per stimulus, response separation becomes more tricky, as the
responses to subsequent stimuli will tend to follow each other without an intermediate return to
baseline activity. Moreover, neural responses tend to be rather noisy. We will therefore employ a
simplified version of the generative Bayesian Bin classification algorithm (BBCa) [28], which was
shown to perform well on RSVP data [24].
BBCa was designed for the purpose of inferring stimulus labels g from a continuous-valued, scalar
measure z of a neural response. The range of z is divided into a number of contiguous bins. Within
each bin, the observation model for the g is a Bernoulli scheme with a Dirichlet prior over its parameters. It is shown in [28] that one can iterate/integrate over all possible bin boundary configurations
efficiently, thus making exact Bayesian inference feasible. We make two simplifications to BBCa:
1) z is discrete, because we are counting spikes and 2) we use models with only 1 bin boundary in
the range of z. The bin membership of a given neural response can then serve as the binary attribute
required for FCA, since BBCa weighs bin configurations by their classification (i.e. stimulus label
decoding) performance. We proceed in a straight Bayesian fashion: since the bin membership is the
only variable we are interested in, all other parameters (counting window size and position, class
membership probabilities, bin boundaries) are marginalized. This minimizes the risk of spurious results due to 'contrived' information (i.e. choices of parameters) made at some stage of the inference process. Afterwards, the probability that the response belongs to the upper bin is thresholded at a probability of 0.5. BBCa can also be used for model comparison. Running the algorithm with no bin boundaries in the range of z effectively yields the probability of the data given the 'null hypothesis'
H0 : z does not contain any information about g. We can then compare it against the alternative
hypothesis described above (i.e. the information which bin z is in tells us something about g) to
determine whether the cell has responded at all.
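For concreteness, the following is a much-simplified sketch of the boundary marginalization just described: it fixes the counting window, places a flat prior over the single bin boundary, scores each boundary by the product of the two bins' Dirichlet-multinomial evidences, and returns the posterior probability of the upper bin (to be thresholded at 0.5). It omits the marginalization over window parameters and the H0 comparison, so it is a stand-in for BBCa [28], not a reimplementation.

import math
from collections import Counter

def log_evidence(counts, alpha, n_labels):
    # Dirichlet-multinomial log evidence of the label counts in one bin,
    # with a symmetric Dirichlet(alpha) prior (zero-count labels drop out).
    n = sum(counts.values())
    out = math.lgamma(n_labels * alpha) - math.lgamma(n_labels * alpha + n)
    return out + sum(math.lgamma(alpha + c) - math.lgamma(alpha)
                     for c in counts.values())

def upper_bin_probability(samples, n_labels, alpha=1.0):
    # samples: list of (spike_count z, stimulus label g) pairs.
    zs = sorted({z for z, _ in samples})
    assert len(zs) > 1, "need at least two distinct spike counts"
    scored = []
    for t in zs[:-1]:                     # boundary: z <= t is the lower bin
        lo = Counter(g for z, g in samples if z <= t)
        hi = Counter(g for z, g in samples if z > t)
        scored.append((t, log_evidence(lo, alpha, n_labels)
                          + log_evidence(hi, alpha, n_labels)))
    m = max(lp for _, lp in scored)
    w = [(t, math.exp(lp - m)) for t, lp in scored]
    total = sum(wt for _, wt in w)
    return {z: sum(wt for t, wt in w if z > t) / total for z in zs}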
2,672 | 3,422 | A Convex Upper Bound on the Log-Partition Function for Binary Graphical Models

Laurent El Ghaoui
Department of Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, CA 94720
elghaoui@eecs.berkeley.edu

Assane Gueye
Department of Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, CA 94720
agueye@eecs.berkeley.edu
Abstract
We consider the problem of bounding from above the log-partition function corresponding to
second-order Ising models for binary distributions. We introduce a new bound, the cardinality
bound, which can be computed via convex optimization. The corresponding error on the log-partition function is bounded above by twice the distance, in model parameter space, to a class of
"standard" Ising models, for which variable inter-dependence is described via a simple mean field
term. In the context of maximum-likelihood, using the new bound instead of the exact log-partition
function, while constraining the distance to the class of standard Ising models, leads not only to a
good approximation to the log-partition function, but also to a model that is parsimonious and easily interpretable. We compare our bound with the log-determinant bound introduced by Wainwright
and Jordan (2006), and show that when the $\ell_1$-norm of the model parameter vector is small enough,
the latter is outperformed by the new bound.
1 Introduction
1.1 Problem statement
This paper is motivated by the problem of fitting binary distributions to experimental data. In the second-order Ising
model, PUT REF HERE the fitted distribution p is assumed to have the parametric form
$$p(x; Q, q) = \exp(x^T Q x + q^T x - Z(Q, q)), \quad x \in \{0, 1\}^n,$$
where $Q = Q^T \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$ contain the parameters of the model, and $Z(Q, q)$, the normalization constant, is
called the log-partition function of the model. Noting that $x^T Q x + q^T x = x^T (Q + D(q)) x$ for every $x \in \{0, 1\}^n$,
we will without loss of generality assume that $q = 0$, and denote by $Z(Q)$ the corresponding log-partition function
$$Z(Q) := \log\left(\sum_{x \in \{0,1\}^n} \exp[x^T Q x]\right). \qquad (1)$$
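For small $n$, the exact log-partition function (1) can be computed by brute-force enumeration, which is useful for sanity-checking the bounds developed below. The following is a minimal Python sketch (the function name and the use of NumPy are our own choices; the paper does not prescribe an implementation):

```python
import itertools
import numpy as np

def log_partition(Q):
    """Exact log-partition function Z(Q) of the Ising model in (1),
    by brute-force enumeration of {0,1}^n (feasible only for small n)."""
    n = Q.shape[0]
    energies = []
    for bits in itertools.product([0.0, 1.0], repeat=n):
        x = np.array(bits)
        energies.append(x @ Q @ x)
    energies = np.array(energies)
    # log-sum-exp, shifted by the max for numerical stability
    m = energies.max()
    return m + np.log(np.exp(energies - m).sum())
```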
In the Ising model, the maximum-likelihood approach to fitting data leads to the problem
$$\min_{Q \in \mathcal{Q}} \; Z(Q) - \mathrm{Tr}\, QS, \qquad (2)$$
where $\mathcal{Q}$ is a subset of the set $\mathcal{S}^n$ of symmetric matrices, and $S \in \mathcal{S}^n_+$ is the empirical second-moment matrix. When
$\mathcal{Q} = \mathcal{S}^n$, the dual to (2) is the maximum entropy problem
$$\max_p \; H(p) \;:\; p \in \mathcal{P}, \;\; S = \sum_{x \in \{0,1\}^n} p(x)\, x x^T, \qquad (3)$$
where $\mathcal{P}$ is the set of distributions with support in $\{0, 1\}^n$, and $H$ is the entropy
$$H(p) = -\sum_{x \in \{0,1\}^n} p(x) \log p(x). \qquad (4)$$
The constraints of problem (3) define a polytope in $\mathbb{R}^{2^n}$ called the marginal polytope.
For general $Q$'s, computing the log-partition function is NP-hard. Hence, except for special choices of $Q$, the
maximum-likelihood problem (2) is also NP-hard. It is thus desirable to find computationally tractable approximations to the log-partition function, such that the resulting maximum-likelihood problem is also tractable. In this
regard, convex upper bounds on the log-partition function are of particular interest, and are our focus here: convexity
usually brings about computational tractability, while using upper bounds yields a parameter $Q$ that is suboptimal for
the exact problem.
Using an upper bound in lieu of $Z(Q)$ in (2) leads to a problem we will generically refer to as the pseudo maximum-likelihood problem. This corresponds to a relaxation of the maximum-entropy problem, which is (3) when $\mathcal{Q} = \mathcal{S}^n$.
Such relaxations may involve two ingredients: an upper bound on the entropy, and an outer approximation to the
marginal polytope.
1.2 Prior work
Due to the vast applicability of Ising models, the problem of approximating their log-partition function, and the
related maximum-likelihood problem, has received considerable attention in the literature for decades, first in statistical
physics, and more recently in machine learning.
The so-called log-determinant bound has been recently introduced, for a large class of Markov random fields, by
Wainwright and Jordan [2]. (Their paper provides an excellent overview of the prior work, in the general context of
graphical models.) The log-determinant bound is based on an upper bound on the differential entropy of a continuous
random variable, which is attained for a Gaussian distribution. The log-determinant bound enjoys good tractability
properties, both for the computation of the log-partition function, and in the context of the maximum-likelihood
problem (2). A recent paper by Ravikumar and Lafferty [1] discusses using bounds on the log-partition function to
estimate marginal probabilities for a large class of graphical models, which adds extra motivation for the present study.
1.3 Main results and outline
The main purpose of this note is to introduce a new upper bound on the log-partition function that is computationally
tractable. The new bound is convex in Q, and leads to a restriction to the maximum-likelihood problem that is also
tractable. Our development crucially involves a specific class of Ising models, which we'll refer to as standard Ising
models, in which the model parameter $Q$ has the form $Q = \mu I + \nu \mathbf{1}\mathbf{1}^T$, where $\mu, \nu$ are arbitrary scalars. Such models
are indeed standard in statistical physics: the first term $\mu I$ describes interaction with the external magnetic field, and
the second ($\nu \mathbf{1}\mathbf{1}^T$) is a simple mean field approximation to ferro-magnetic coupling.
For standard Ising models, it can be shown that the log-partition function has a computationally tractable, closed-form
expression. Due to space limitations, the proof is omitted in this paper. Our bound is constructed so as to be exact in
the case of standard Ising models. In fact, the error between our bound and the true value of the log-partition function
is bounded above by twice the $\ell_1$-norm distance from the model parameters ($Q$) to the class of standard Ising models.
The outline of the note reflects our main results: in section 2, we introduce our bound, and show that the approximation
error is bounded above by twice the distance to the class of standard Ising models. We discuss in section 3 the use of our
bound in the context of the maximum-likelihood problem (2) and its dual (3). In particular, we discuss how imposing
a bound on the distance to the class of standard Ising models may be desirable, not only to obtain an accurate approximation to the log-partition function, but also to find a parsimonious model, having good interpretability properties.
We then compare the new bound with the log-determinant bound of Wainwright and Jordan in section 4. We show
that our new bound outperforms the log-determinant bound when the norm $\|Q\|_1$ is small enough (less than $0.08n$),
and provide numerical experiments supporting the claim that our comparison analysis is quite conservative: our bound
appears to be better over a wide range of values of $\|Q\|_1$.
Notation. Throughout the note, $n$ is a fixed integer. For $k \in \{0, \ldots, n\}$, define $\Gamma_k := \{x \in \{0, 1\}^n : \mathrm{Card}(x) = k\}$. Let $c_k = |\Gamma_k|$ denote the cardinality of $\Gamma_k$, and $\rho_k := 2^{-n} c_k$ the probability of $\Gamma_k$ under the uniform distribution.
For a distribution $p$, the notation $\mathbf{E}_p$ refers to the corresponding expectation operator, and $\mathrm{Prob}_p(S)$ to the probability
of the event $S$ under $p$. The set $\mathcal{P}$ is the set of distributions with support on $\{0, 1\}^n$.
For $X \in \mathbb{R}^{n \times n}$, the notation $\|X\|_1$ denotes the sum of the absolute values of the elements of $X$, and $\|X\|_\infty$ the
largest of these values. The set $\mathcal{S}^n$ is the set of symmetric matrices, $\mathcal{S}^n_+$ the set of symmetric positive semidefinite
matrices. We use the notation $X \succeq 0$ for the statement $X \in \mathcal{S}^n_+$. If $x \in \mathbb{R}^n$, $D(x)$ is the diagonal matrix with $x$
on its diagonal. If $X \in \mathbb{R}^{n \times n}$, $d(X)$ is the $n$-vector formed with the diagonal elements of $X$. Finally, $\mathcal{X}$ is the set
$\{(X, x) \in \mathcal{S}^n \times \mathbb{R}^n : d(X) = x\}$ and $\mathcal{X}_+ = \{(X, x) \in \mathcal{S}^n \times \mathbb{R}^n : X \succeq xx^T,\; d(X) = x\}$.
2 The Cardinality Bound
2.1 The maximum bound
To ease our derivation, we begin with a simple bound based on replacing each term in the log-partition function by its
maximum over $\{0, 1\}^n$. This leads to an upper bound on the log-partition function:
$$Z(Q) \le n \log 2 + \phi_{\max}(Q), \quad \text{where} \quad \phi_{\max}(Q) := \max_{x \in \{0,1\}^n} x^T Q x.$$
Computing the above quantity is in general NP-hard. Starting with the expression
$$\phi_{\max}(Q) = \max_{(X,x) \in \mathcal{X}_+} \mathrm{Tr}\, QX \;:\; \mathrm{rank}(X) = 1,$$
and relaxing the rank constraint leads to the upper bound $\phi_{\max}(Q) \le \psi_{\max}(Q)$, where $\psi_{\max}(Q)$ is defined via a
semidefinite program:
$$\psi_{\max}(Q) = \max_{(X,x) \in \mathcal{X}_+} \mathrm{Tr}\, QX, \qquad (5)$$
where $\mathcal{X}_+ = \{(X, x) \in \mathcal{S}^n \times \mathbb{R}^n : X \succeq xx^T,\; d(X) = x\}$. For later reference, we note the dual form:
$$\psi_{\max}(Q) = \min_{t, \lambda} \; t \;:\; \begin{pmatrix} D(\lambda) - Q & \frac{1}{2}\lambda \\ \frac{1}{2}\lambda^T & t \end{pmatrix} \succeq 0 \qquad (6)$$
$$\phantom{\psi_{\max}(Q)} = \min_{\lambda} \; \frac{1}{4} \lambda^T (D(\lambda) - Q)^{-1} \lambda \;:\; D(\lambda) \succ Q. \qquad (7)$$
The corresponding bound on the log-partition function, referred to as the maximum bound, is
$$Z(Q) \le Z_{\max}(Q) := n \log 2 + \psi_{\max}(Q).$$
The complexity of this bound (using interior-point methods) is roughly $O(n^3)$.
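As an illustration of how (5) can be evaluated in practice, here is a sketch using CVXPY (our choice of modeling tool; the paper only refers to interior-point methods). The constraint $X \succeq xx^T$ is encoded via the standard Schur-complement trick:

```python
import cvxpy as cp
import numpy as np

def psi_max(Q):
    """SDP relaxation (5): psi_max(Q) = max Tr(QX) over (X, x) in X_+."""
    n = Q.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    x = cp.Variable((n, 1))
    # [[X, x], [x^T, 1]] >= 0  is equivalent to  X >= x x^T
    M = cp.bmat([[X, x], [x.T, np.ones((1, 1))]])
    constraints = [M >> 0, cp.diag(X) == x[:, 0]]
    prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints)
    prob.solve()
    return prob.value

def z_max(Q):
    # Maximum bound: Z(Q) <= n log 2 + psi_max(Q)
    return Q.shape[0] * np.log(2.0) + psi_max(Q)
```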
Let us make a few observations before proceeding. First, the maximum bound is a convex function of $Q$, which is
important in the context of the maximum-likelihood problem (2). Second, we have $Z_{\max}(Q) \le n \log 2 + \|Q\|_1$,
which follows from (5), together with the fact that any matrix $X$ that is feasible for that problem satisfies $\|X\|_\infty \le 1$.
Finally, we observe that the function $Z_{\max}$ is Lipschitz continuous, with constant 1 with respect to the $\ell_1$-norm. It can
be shown that the same property holds for the log-partition function $Z$ itself; due to space limitations, the proof is
omitted in this paper. Indeed, for all symmetric matrices $Q, R$ we have the sub-gradient inequality
$$Z_{\max}(R) \ge Z_{\max}(Q) + \mathrm{Tr}\, X^{\mathrm{opt}} (R - Q),$$
where $X^{\mathrm{opt}}$ is any optimal variable for problem (5). Since any feasible $X$ satisfies $\|X\|_\infty \le 1$, we can bound
the term $\mathrm{Tr}\, X^{\mathrm{opt}} (Q - R)$ from below by $-\|Q - R\|_1$, and after exchanging the roles of $Q, R$, obtain the desired result.
2.2 The cardinality bound
For every $k \in \{0, \ldots, n\}$, consider the subset of variables with cardinality $k$, $\Gamma_k := \{x \in \{0, 1\}^n : \mathrm{Card}(x) = k\}$.
This defines a partition of $\{0, 1\}^n$, thus
$$Z(Q) = \log\left(\sum_{k=0}^{n} \sum_{x \in \Gamma_k} \exp[x^T Q x]\right).$$
We can refine the maximum bound by replacing the terms in the log-partition by their maximum over $\Gamma_k$, leading to
$$Z(Q) \le \log\left(\sum_{k=0}^{n} c_k \exp[\phi_k(Q)]\right),$$
where, for $k \in \{0, \ldots, n\}$, $c_k = |\Gamma_k|$, and
$$\phi_k(Q) := \max_{x \in \Gamma_k} x^T Q x.$$
Computing $\phi_k(Q)$ for arbitrary $k \in \{0, \ldots, n\}$ is NP-hard. Based on the identity
$$\phi_k(Q) = \max_{(X,x) \in \mathcal{X}_+} \mathrm{Tr}\, QX \;:\; x^T x = k,\; \mathbf{1}^T X \mathbf{1} = k^2,\; \mathrm{rank}\, X = 1, \qquad (8)$$
and using rank relaxation as before, we obtain the bound $\phi_k(Q) \le \psi_k(Q)$, where
$$\psi_k(Q) = \max_{(X,x) \in \mathcal{X}_+} \mathrm{Tr}\, QX \;:\; \mathrm{Tr}\, X = k,\; \mathbf{1}^T X \mathbf{1} = k^2 \qquad (9)$$
(here $\mathrm{Tr}\, X = k$ is the convex counterpart of $x^T x = k$: on rank-one binary solutions, $x^T x = \mathbf{1}^T x = \mathrm{Tr}\, X$, since $d(X) = x$).
We define the cardinality bound as
$$Z_{\mathrm{card}}(Q) := \log\left(\sum_{k=0}^{n} c_k \exp[\psi_k(Q)]\right).$$
The complexity of computing $\psi_k(Q)$ (using interior-point methods) is roughly $O(n^3)$. The upper bound $Z_{\mathrm{card}}(Q)$ is
computed via $n+1$ semidefinite programs of the form (9). Hence, its complexity is roughly $O(n^4)$.
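Under the same assumptions as the earlier sketch (CVXPY as the modeling tool), adding the two cardinality constraints to the program for (5) gives $\psi_k(Q)$, and the bound itself is a log-sum-exp over the $n+1$ solved programs; `scipy` supplies the binomial log-coefficients:

```python
import cvxpy as cp
import numpy as np
from scipy.special import gammaln, logsumexp

def psi_k(Q, k):
    """SDP (9): the cardinality-k relaxation of phi_k(Q)."""
    n = Q.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    x = cp.Variable((n, 1))
    M = cp.bmat([[X, x], [x.T, np.ones((1, 1))]])   # X >= x x^T via Schur
    constraints = [M >> 0, cp.diag(X) == x[:, 0],
                   cp.trace(X) == k,                 # Tr X = 1^T x = k
                   cp.sum(X) == k ** 2]              # 1^T X 1 = k^2
    prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints)
    prob.solve()
    return prob.value

def z_card(Q):
    """Cardinality bound: log sum_k c_k exp[psi_k(Q)], c_k = binom(n, k)."""
    n = Q.shape[0]
    log_ck = [gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
              for k in range(n + 1)]
    return logsumexp([lc + psi_k(Q, k) for k, lc in enumerate(log_ck)])
```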
Problem (9) admits the dual form
$$\psi_k(Q) := \min_{t, \lambda, \mu, \nu} \; t + k\mu + \nu k^2 \;:\; \begin{pmatrix} D(\lambda) + \mu I + \nu \mathbf{1}\mathbf{1}^T - Q & \frac{1}{2}\lambda \\ \frac{1}{2}\lambda^T & t \end{pmatrix} \succeq 0. \qquad (10)$$
The fact that $\psi_k(Q) \le \psi_{\max}(Q)$ for every $k$ is obtained upon setting $\mu = \nu = 0$ in the semi-definite programming
problem (10). In fact, we have
$$\psi_k(Q) = \min_{\mu, \nu} \; k\mu + k^2\nu + \psi_{\max}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T). \qquad (11)$$
The above expression can be directly obtained from the following, valid for every $\mu, \nu$:
$$\phi_k(Q) = k\mu + k^2\nu + \phi_k(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T) \le k\mu + k^2\nu + \phi_{\max}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T) \le k\mu + k^2\nu + \psi_{\max}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T).$$
It can be shown (the proof is omitted due to space limitations) that in the case of standard Ising models, that is, if $Q$
has the form $\mu I + \nu \mathbf{1}\mathbf{1}^T$ for some scalars $\mu, \nu$, then the bound $\psi_k(Q)$ is exact. Since the values of $x^T Q x$ when $x$
ranges over $\Gamma_k$ are constant, the cardinality bound is also exact.
By construction, $Z_{\mathrm{card}}(Q)$ is guaranteed to be better (lower) than $Z_{\max}(Q)$, since the latter is obtained upon replacing
$\psi_k(Q)$ by its upper bound $\psi_{\max}(Q)$ for every $k$. The cardinality bound thus satisfies
$$Z(Q) \le Z_{\mathrm{card}}(Q) \le Z_{\max}(Q) \le n \log 2 + \|Q\|_1. \qquad (12)$$
Using the same technique as in the context of the maximum bound, we can show that each function $\psi_k$ is Lipschitz-continuous, with constant 1 with respect to the $\ell_1$-norm. Using the Lipschitz continuity of positively weighted log-sum-exp functions (with constant 1 with respect to the $\ell_\infty$-norm), we deduce that $Z_{\mathrm{card}}(Q)$ is also Lipschitz-continuous:
for all symmetric matrices $Q, R$,
$$|Z_{\mathrm{card}}(Q) - Z_{\mathrm{card}}(R)| = \left| \log\left(\sum_{k=0}^{n} c_k \exp[\psi_k(Q)]\right) - \log\left(\sum_{k=0}^{n} c_k \exp[\psi_k(R)]\right) \right| \le \max_{0 \le k \le n} |\psi_k(Q) - \psi_k(R)| \le \|Q - R\|_1,$$
as claimed.
2.3 Quality analysis
We now seek to establish conditions on the model parameter $Q$ which guarantee that the approximation error
$Z_{\mathrm{card}}(Q) - Z(Q)$ is small. The analysis relies on the fact that, for standard Ising models, the error is zero.
We begin by establishing an upper bound on the difference between the maximal and minimal values of $x^T Q x$ when
$x \in \Gamma_k$. We have the bound
$$\min_{x \in \Gamma_k} x^T Q x \ge \eta_k(Q) := \min_{(X,x) \in \mathcal{X}_+} \mathrm{Tr}\, QX \;:\; \mathrm{Tr}\, X = k,\; \mathbf{1}^T X \mathbf{1} = k^2.$$
In the same fashion as for the quantity $\psi_k(Q)$, we can express $\eta_k(Q)$ as
$$\eta_k(Q) = \max_{\mu, \nu} \; k\mu + k^2\nu + \psi_{\min}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T),$$
where $\psi_{\min}(Q) := \min_{(X,x) \in \mathcal{X}_+} \mathrm{Tr}\, QX$. Based on this expression, we have, for every $k$:
$$0 \le \psi_k(Q) - \eta_k(Q) = \min_{\mu, \nu, \mu_0, \nu_0} \; k(\mu - \mu_0) + k^2(\nu - \nu_0) + \psi_{\max}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T) - \psi_{\min}(Q - \mu_0 I - \nu_0 \mathbf{1}\mathbf{1}^T)$$
$$\le \min_{\mu, \nu} \; \psi_{\max}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T) - \psi_{\min}(Q - \mu I - \nu \mathbf{1}\mathbf{1}^T) \le 2 \min_{\mu, \nu} \|Q - \mu I - \nu \mathbf{1}\mathbf{1}^T\|_1,$$
where we have used the fact that, for every symmetric matrix $R$, we have
$$0 \le \psi_{\max}(R) - \psi_{\min}(R) = \max_{(X,x), (Y,y) \in \mathcal{X}_+} \mathrm{Tr}\, R(X - Y) \le \max_{\|X\|_\infty \le 1,\, \|Y\|_\infty \le 1} \mathrm{Tr}\, R(X - Y) = 2\|R\|_1.$$
Using again the Lipschitz continuity properties of the weighted log-sum-exp function, we obtain that, for every $Q$, the
absolute error between $Z(Q)$ and $Z_{\mathrm{card}}(Q)$ is bounded as follows:
$$0 \le Z_{\mathrm{card}}(Q) - Z(Q) \le \log\left(\sum_{k=0}^{n} c_k \exp[\psi_k(Q)]\right) - \log\left(\sum_{k=0}^{n} c_k \exp[\eta_k(Q)]\right) \le \max_{0 \le k \le n} (\psi_k(Q) - \eta_k(Q)) \le 2 D_{\mathrm{st}}(Q),$$
$$D_{\mathrm{st}}(Q) := \min_{\mu, \nu} \|Q - \mu I - \nu \mathbf{1}\mathbf{1}^T\|_1. \qquad (13)$$
Thus, a measure of quality is $D_{\mathrm{st}}(Q)$, the distance, in $\ell_1$-norm, between the model and the class of standard Ising
models. Note that this measure is easily computed, in $O(n^2 \log n)$ time, by first setting $\nu$ to be the median of the
values $Q_{ij}$, $1 \le i < j \le n$, and then setting $\mu$ to be the median of the values $Q_{ii} - \nu$, $i = 1, \ldots, n$.
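The median computation translates directly into a few lines of NumPy; the following is a sketch of the $O(n^2 \log n)$ procedure just described (the function name is ours). Since $Q$ is symmetric, the median over all off-diagonal entries equals the median over the upper triangle:

```python
import numpy as np

def dist_to_standard_ising(Q):
    """D_st(Q) = min_{mu, nu} ||Q - mu*I - nu*11^T||_1, via medians."""
    n = Q.shape[0]
    off_diag = Q[~np.eye(n, dtype=bool)]      # all Q_ij with i != j
    nu = np.median(off_diag)                  # median of off-diagonal entries
    mu = np.median(np.diag(Q) - nu)           # median of Q_ii - nu
    residual = Q - mu * np.eye(n) - nu * np.ones((n, n))
    return np.abs(residual).sum(), mu, nu
```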
We summarize our findings so far with the following theorem:
Theorem 1 (Cardinality bound) The cardinality bound is
$$Z_{\mathrm{card}}(Q) := \log\left(\sum_{k=0}^{n} c_k \exp[\psi_k(Q)]\right),$$
where $\psi_k(Q)$, $k = 0, \ldots, n$, is defined via the semidefinite program (9), which can be solved in $O(n^3)$. The approximation error is bounded above by twice the distance (in $\ell_1$-norm) to the class of standard Ising models:
$$0 \le Z_{\mathrm{card}}(Q) - Z(Q) \le 2 \min_{\mu, \nu} \|Q - \mu I - \nu \mathbf{1}\mathbf{1}^T\|_1.$$
3 The Pseudo Maximum-Likelihood Problem
3.1 Tractable formulation
Using the bound $Z_{\mathrm{card}}(Q)$ in lieu of $Z(Q)$ in the maximum-likelihood problem (2) leads to a convex restriction of
that problem, referred to as the pseudo maximum-likelihood problem. This problem can be cast as
$$\min_{t, \lambda, \mu, \nu, Q} \; \log\left(\sum_{k=0}^{n} c_k \exp[t_k + k\mu_k + k^2\nu_k]\right) - \mathrm{Tr}\, QS$$
$$\text{s.t.} \quad Q \in \mathcal{Q}, \quad \begin{pmatrix} D(\lambda_k) + \mu_k I + \nu_k \mathbf{1}\mathbf{1}^T - Q & \frac{1}{2}\lambda_k \\ \frac{1}{2}\lambda_k^T & t_k \end{pmatrix} \succeq 0, \quad k = 0, \ldots, n.$$
The complexity of this bound is XXX. For numerical reasons, and without loss of generality, it is advisable to scale
the $c_k$'s and replace them by $\rho_k := 2^{-n} c_k \in [0, 1]$.
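A possible CVXPY rendering of this program is sketched below (again an assumption on our part; the paper does not commit to a solver or modeling language). The scaled weights $\rho_k$ enter as log-offsets inside the log-sum-exp, and the $\ell_1$-ball choice $\mathcal{Q} = \{Q : \|Q\|_1 \le \beta\}$ of section 3.3 is included as an optional constraint:

```python
import cvxpy as cp
import numpy as np
from scipy.special import gammaln

def pseudo_max_likelihood(S, beta=None):
    """Pseudo maximum-likelihood with the cardinality bound:
    minimize log sum_k rho_k exp[t_k + k*mu_k + k^2*nu_k] - Tr(QS),
    with one LMI constraint per k = 0, ..., n."""
    n = S.shape[0]
    Q = cp.Variable((n, n), symmetric=True)
    t = cp.Variable(n + 1)
    mu = cp.Variable(n + 1)
    nu = cp.Variable(n + 1)
    lam = cp.Variable((n + 1, n))
    ones = np.ones((n, n))
    # log rho_k = log binom(n, k) - n log 2
    log_rho = np.array([gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                        for k in range(n + 1)]) - n * np.log(2.0)
    constraints = []
    for k in range(n + 1):
        lk = cp.reshape(lam[k, :], (n, 1))
        top_left = cp.diag(lam[k, :]) + mu[k] * np.eye(n) + nu[k] * ones - Q
        M = cp.bmat([[top_left, lk / 2],
                     [lk.T / 2, cp.reshape(t[k], (1, 1))]])
        constraints.append(M >> 0)
    if beta is not None:
        constraints.append(cp.sum(cp.abs(Q)) <= beta)  # ||Q||_1 <= beta
    ks = np.arange(n + 1)
    objective = (cp.log_sum_exp(log_rho + t + cp.multiply(ks, mu)
                                + cp.multiply(ks ** 2, nu))
                 - cp.trace(Q @ S))
    prob = cp.Problem(cp.Minimize(objective), constraints)
    prob.solve()
    return Q.value, prob.value
```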
3.2 Dual and interpretation
When $\mathcal{Q} = \mathcal{S}^n$, the dual to the above problem is
$$\max_{(Y_k, y_k, q_k)_{k=0}^{n}} \; -D(q \| \rho) \;:\; S = \sum_{k=0}^{n} Y_k, \quad q \ge 0, \quad q^T \mathbf{1} = 1,$$
$$\begin{pmatrix} Y_k & y_k \\ y_k^T & q_k \end{pmatrix} \succeq 0, \quad d(Y_k) = y_k, \quad \mathbf{1}^T y_k = k q_k, \quad \mathbf{1}^T Y_k \mathbf{1} = k^2 q_k, \quad k = 0, \ldots, n,$$
where $\rho$ is the distribution on $\{0, \ldots, n\}$ with $\rho_k = \mathrm{Prob}_u\, \Gamma_k = 2^{-n} c_k$, and $D(q \| \rho)$ is the relative entropy
(Kullback-Leibler divergence) between the distributions $q, \rho$:
$$D(q \| \rho) := \sum_{k=0}^{n} q_k \log \frac{q_k}{\rho_k}.$$
To interpret this dual, we assume without loss of generality $q > 0$, and use the variables $X_k := q_k^{-1} Y_k$, $x_k := q_k^{-1} y_k$.
We obtain the equivalent (non-convex) formulation
$$\max_{(X_k, x_k, q_k)_{k=0}^{n}} \; -D(q \| \rho) \;:\; S = \sum_{k=0}^{n} q_k X_k, \quad q \ge 0, \quad q^T \mathbf{1} = 1, \qquad (14)$$
$$(X_k, x_k) \in \mathcal{X}_+, \quad \mathbf{1}^T x_k = k, \quad \mathbf{1}^T X_k \mathbf{1} = k^2, \quad k = 0, \ldots, n.$$
The above problem can be obtained as a relaxation to the dual of the exact maximum-likelihood problem (2), which
is the maximum entropy problem (3). The relaxation involves two steps: one is to form an outer approximation to the
marginal polytope, the other is to find an upper bound on the entropy function (4).
First observe that we can express any distribution on $\{0, 1\}^n$ as
$$p(x) = \sum_{k=0}^{n} q_k p_k(x), \qquad (15)$$
where
$$q_k = \mathrm{Prob}_p\, \Gamma_k = \sum_{x \in \Gamma_k} p(x), \qquad p_k(x) = \begin{cases} q_k^{-1} p(x) & \text{if } x \in \Gamma_k, \\ 0 & \text{otherwise.} \end{cases}$$
Note that the functions $p_k$ are valid distributions on $\{0, 1\}^n$ as well as on $\Gamma_k$.
To obtain an outer approximation to the marginal polytope, we then write the moment-matching equality constraint in
problem (3) as
$$S = \mathbf{E}_p\, xx^T = \sum_{k=0}^{n} q_k X_k,$$
where the $X_k$'s are the second-order moment matrices with respect to $p_k$:
$$X_k = \mathbf{E}_{p_k}\, xx^T = q_k^{-1} \sum_{x \in \Gamma_k} p(x)\, xx^T.$$
To relax the constraints in the maximum-entropy problem (3), we simply use the valid constraints $X_k \succeq x_k x_k^T$,
$d(X_k) = x_k$, $\mathbf{1}^T x_k = k$, $\mathbf{1}^T X_k \mathbf{1} = k^2$, where $x_k$ is the mean under $p_k$:
$$x_k = \mathbf{E}_{p_k}\, x = q_k^{-1} \sum_{x \in \Gamma_k} p(x)\, x.$$
This process yields exactly the constraints of the relaxed problem (14).
To finalize our relaxation, we now form an upper bound on the entropy function (4). To this end, we use the fact that,
since each $p_k$ has support in $\Gamma_k$, its entropy is bounded above by $\log |\Gamma_k|$, as follows:
$$-H(p) = \sum_{x \in \{0,1\}^n} p(x) \log p(x) = \sum_{k=0}^{n} \sum_{x \in \Gamma_k} q_k p_k(x) \log(q_k p_k(x)) = \sum_{k=0}^{n} q_k (\log q_k - H(p_k))$$
$$\ge \sum_{k=0}^{n} q_k (\log q_k - \log |\Gamma_k|) = \sum_{k=0}^{n} q_k \log \frac{q_k}{\rho_k} - n \log 2 \qquad (|\Gamma_k| = 2^n \rho_k),$$
which is, up to a constant, the objective of problem (14).
3.3 Ensuring quality via bounds on $Q$
We consider the (exact) maximum-likelihood problem (2), with $\mathcal{Q} = \{Q = Q^T : \|Q\|_1 \le \beta\}$:
$$\min_{Q = Q^T} \; Z(Q) - \mathrm{Tr}\, QS \;:\; \|Q\|_1 \le \beta, \qquad (16)$$
and its convex relaxation:
$$\min_{Q = Q^T} \; Z_{\mathrm{card}}(Q) - \mathrm{Tr}\, QS \;:\; \|Q\|_1 \le \beta. \qquad (17)$$
The feasible sets of problems (16) and (17) are the same, and on this set the difference in the objective functions is uniformly
bounded by $2\beta$. Thus, any $\beta$-suboptimal solution of the relaxation (17) is guaranteed to be $3\beta$-suboptimal for the exact
problem (16).
In practice, the $\ell_1$-norm constraint in (17) encourages sparsity of $Q$, hence the interpretability of the model. It also has
good properties in terms of the generalization error. As seen above, the constraint also implies a better approximation
to the exact problem (16). All these benefits come at the expense of goodness-of-fit, as the constraint reduces the
expressive power of the model. This is an illustration of the intimate connections between computational and statistical
properties of the model.
A more accurate bound on the approximation error can be obtained by imposing the following constraint on $Q$ and
two new variables $\mu, \nu$:
$$\|Q - \mu I - \nu \mathbf{1}\mathbf{1}^T\|_1 \le \beta.$$
We can draw similar conclusions as before. Here, the resulting model will not be sparse, in the sense of having many
elements of $Q$ equal to zero. However, it will still be quite interpretable, as the bound above will encourage the number
of off-diagonal elements in $Q$ that differ from their median to be small.
A yet more accurate control on the approximation error can be induced by the constraints $\psi_k(Q) \le \beta + \eta_k(Q)$ for
every $k$, each of which can be expressed as an LMI constraint. The corresponding constrained relaxation to the
maximum-likelihood problem has the form
$$\min_{t^\pm, \lambda^\pm, \mu^\pm, \nu^\pm, Q} \; \log\left(\sum_{k=0}^{n} c_k \exp[t_k^+ + k\mu_k^+ + k^2\nu_k^+]\right) - \mathrm{Tr}\, QS$$
$$\text{s.t.} \quad \begin{pmatrix} \mathrm{diag}(\lambda_k^+) + \mu_k^+ I + \nu_k^+ \mathbf{1}\mathbf{1}^T - Q & \frac{1}{2}\lambda_k^+ \\ \frac{1}{2}(\lambda_k^+)^T & t_k^+ \end{pmatrix} \succeq 0, \quad k = 0, \ldots, n,$$
$$\phantom{\text{s.t.}} \quad \begin{pmatrix} Q - \mathrm{diag}(\lambda_k^-) - \mu_k^- I - \nu_k^- \mathbf{1}\mathbf{1}^T & \frac{1}{2}\lambda_k^- \\ \frac{1}{2}(\lambda_k^-)^T & t_k^- \end{pmatrix} \succeq 0, \quad k = 0, \ldots, n,$$
$$\phantom{\text{s.t.}} \quad t_k^+ - t_k^- \le \beta, \quad k = 0, \ldots, n.$$
Using this model instead of the ones we saw previously, we sacrifice less on the front of the approximation to the true
likelihood, at the expense of increased computational effort.
4 Links with the Log-Determinant Bound
4.1 The log-determinant bounds
The bound in Wainwright and Jordan [2] is based on an upper bound on the (differential) entropy of a continuous
random variable, which is attained for a Gaussian distribution. It has the form $Z(Q) \le Z_{\mathrm{ld}}(Q)$, with
$$Z_{\mathrm{ld}}(Q) := \nu n + \max_{(X,x) \in \mathcal{X}_+} \; \mathrm{Tr}\, QX + \frac{1}{2} \log \det\left(X - xx^T + \frac{1}{12} I\right), \qquad (18)$$
where $\nu := (1/2) \log(2\pi e) \approx 1.42$. Wainwright and Jordan suggest further relaxing this bound to one which is easier
to compute:
$$Z_{\mathrm{ld}}(Q) \le Z_{\mathrm{rld}}(Q) := \nu n + \max_{(X,x) \in \mathcal{X}} \; \mathrm{Tr}\, QX + \frac{1}{2} \log \det\left(X - xx^T + \frac{1}{12} I\right). \qquad (19)$$
Like $Z$ and the bounds examined previously, the bounds $Z_{\mathrm{ld}}$ and $Z_{\mathrm{rld}}$ are Lipschitz-continuous, with constant 1 with
respect to the $\ell_1$-norm. The proof starts with the representations above, and exploits the fact that $\|Q\|_1$ is an upper
bound on $\mathrm{Tr}\, QX$ when $(X, x) \in \mathcal{X}_+$.
The dual of the log-determinant bound has the form (see appendix (??))
$$Z_{\mathrm{ld}}(Q) = \frac{1}{2} \log \pi - \frac{n}{2} \log 2 + \min_{t, \lambda, F, g, h} \; t + \frac{1}{12} \mathrm{Tr}(D(\lambda) - Q - F) - \frac{1}{2} \log \det \begin{pmatrix} D(\lambda) - Q - F & -\frac{1}{2}\lambda - g \\ -\frac{1}{2}\lambda^T - g^T & t - h \end{pmatrix}$$
$$\text{s.t.} \quad \begin{pmatrix} F & g \\ g^T & h \end{pmatrix} \succeq 0. \qquad (20)$$
The relaxed counterpart $Z_{\mathrm{rld}}(Q)$ is obtained upon setting $F, g, h$ to zero in the dual above:
$$Z_{\mathrm{rld}}(Q) = \frac{1}{2} \log \pi - \frac{n}{2} \log 2 + \min_{t, \lambda} \; t + \frac{1}{12} \mathrm{Tr}(D(\lambda) - Q) - \frac{1}{2} \log \det \begin{pmatrix} D(\lambda) - Q & -\frac{1}{2}\lambda \\ -\frac{1}{2}\lambda^T & t \end{pmatrix}.$$
Using Schur complements to eliminate the variable $t$, we further obtain
$$Z_{\mathrm{rld}}(Q) = \frac{1}{2} \log \pi + \frac{n}{2} + \min_{\lambda} \; \frac{1}{4} \lambda^T (D(\lambda) - Q)^{-1} \lambda + \frac{1}{12} \mathrm{Tr}(D(\lambda) - Q) - \frac{1}{2} \log \det(D(\lambda) - Q). \qquad (21)$$
4.2 Comparison with the maximum bound
We first note the similarity in structure between the dual of problem (5) defining $Z_{\max}(Q)$ and that of the relaxed log-determinant bound.
Despite these connections, the log-determinant bound is neither better nor worse than the cardinality or maximum
bounds. Actually, for some special choices of $Q$ (e.g., when $Q$ is diagonal), the cardinality bound is exact, while the
log-determinant one is not. Conversely, one can choose $Q$ so that $Z_{\mathrm{card}}(Q) > Z_{\mathrm{ld}}(Q)$, so no bound dominates the
other. The same can be said for $Z_{\max}(Q)$ (see section 4.4 for numerical examples).
However, when we impose an extra condition on $Q$, namely a bound on its $\ell_1$-norm, more can be said. The analysis is
based on the case $Q = 0$, and exploits the Lipschitz continuity of the bounds with respect to the $\ell_1$-norm.
First notice (although not shown in this paper because of space limitations) that, for $Q = 0$, the relaxed log-determinant
bound writes
$$Z_{\mathrm{rld}}(0) = \frac{n}{2} \log \frac{2\pi e}{3} + \frac{1}{2} = Z_{\max}(0) + \frac{n}{2} \log \frac{\pi e}{6} + \frac{1}{2}.$$
Now invoke the Lipschitz continuity properties of the bounds $Z_{\mathrm{rld}}(Q)$ and $Z_{\max}(Q)$, and obtain that
$$Z_{\mathrm{rld}}(Q) - Z_{\max}(Q) = (Z_{\mathrm{rld}}(Q) - Z_{\mathrm{rld}}(0)) + (Z_{\mathrm{rld}}(0) - Z_{\max}(0)) + (Z_{\max}(0) - Z_{\max}(Q))$$
$$\ge -2\|Q\|_1 + (Z_{\mathrm{rld}}(0) - Z_{\max}(0)) = -2\|Q\|_1 + \frac{n}{2} \log \frac{\pi e}{6} + \frac{1}{2}.$$
This proves that if $\|Q\|_1 \le \frac{n}{4} \log \frac{\pi e}{6} + \frac{1}{4}$, then the relaxed log-determinant bound $Z_{\mathrm{rld}}(Q)$ is worse (larger) than the
maximum bound $Z_{\max}(Q)$. We can strengthen the above condition to $\|Q\|_1 \le 0.08n$.
4.3 Summary of comparison results
To summarize our findings:
Theorem 2 (Comparison) We have, for every $Q$:
$$Z(Q) \le Z_{\mathrm{card}}(Q) \le Z_{\max}(Q) \le n \log 2 + \|Q\|_1.$$
In addition, we have $Z_{\max}(Q) \le Z_{\mathrm{rld}}(Q)$ whenever $\|Q\|_1 \le 0.08n$.
4.4 A numerical experiment
We now illustrate our findings on the comparison between the log-determinant bounds and the cardinality and maximum bounds. We set the size of our model to be $n = 20$, and for a range of values of a parameter $\alpha$, generate $N = 10$
random instances of $Q$ with $\|Q\|_1 = \alpha$. Figure ?? shows the average values of the bounds, as well as the associated
error bars. Clearly, the new bound outperforms the log-determinant bounds for a wide range of values of $\alpha$. Our
predicted threshold value of $\|Q\|_1$ at which the new bound becomes worse, namely $\alpha = 0.08n \approx 1.6$, is seen to be
very conservative with respect to the observed threshold of $\alpha \approx 30$. On the other hand, we observe that for large
values of $\|Q\|_1$, the log-determinant bounds do behave better. Across the range of $\alpha$, we note that the log-determinant
bound is indistinguishable from its relaxed counterpart.
5 Conclusion and Remarks
We have introduced a new upper bound (the cardinality bound) for the log-partition function corresponding to second-order Ising models for binary distributions. We have shown that such a bound can be computed via convex optimization,
and, when compared to the log-determinant bound introduced by Wainwright and Jordan (2006), the cardinality bound
performs better when the $\ell_1$-norm of the model parameter vector is small enough.
Although not shown in the paper, the cardinality bound becomes exact in the case of standard Ising models, while the
maximum bound (for example) is not exact for such models.
As was shown in section 2, the cardinality bound was computed by defining a partition of $\{0, 1\}^n$. This idea can be
generalized to form a class of bounds which we call partition bounds. It turns out that partition bounds are closely
linked to the more general class of bounds that are based on worst-case probability analysis.
We acknowledge the importance of applying our bound to real-world data. We hope to include such results in subsequent versions of this paper.
References
[1] P. Ravikumar and J. Lafferty. Variational Chernoff bounds for graphical models. In Proc. Advances in Neural
Information Processing Systems (NIPS), December 2007.
[2] Martin J. Wainwright and Michael I. Jordan. Log-determinant relaxation for approximate inference in discrete
Markov random fields. IEEE Trans. Signal Processing, 2006.
2,673 | 3,423 | Bounding Performance Loss in Approximate MDP
Homomorphisms
Jonathan J. Taylor
Dept. of Computer Science
University of Toronto
Toronto, Canada, M5S 3G4
[email protected]
Doina Precup
School of Computer Science
McGill University
Montreal, Canada, H3A 2A7
[email protected]
Prakash Panangaden
School of Computer Science
McGill University
Montreal, Canada, H3A 2A7
[email protected]
Abstract
We define a metric for measuring behavior similarity between states in a Markov
decision process (MDP), which takes action similarity into account. We show
that the kernel of our metric corresponds exactly to the classes of states defined
by MDP homomorphisms (Ravindran & Barto, 2003). We prove that the difference in the optimal value function of different states can be upper-bounded by
the value of this metric, and that the bound is tighter than previous bounds provided by bisimulation metrics (Ferns et al. 2004, 2005). Our results hold both
for discrete and for continuous actions. We provide an algorithm for constructing
approximate homomorphisms, by using this metric to identify states that can be
grouped together, as well as actions that can be matched. Previous research on
this topic is based mainly on heuristics.
1 Introduction
Markov Decision Processes (MDPs) are a very popular formalism for decision making under uncertainty (Puterman, 1994). A significant problem is computing the optimal strategy when the state
and action space are very large and/or continuous. A popular approach is state abstraction, in which
states are grouped together in partitions, or aggregates, and the optimal policy is computed over
these. Li et al. (2006) provide a nice comparative survey of approaches to state abstraction. The
work we present in this paper bridges two such methods: bisimulation-based approaches and methods based on MDP homomorphisms.
Bisimulation is a well-known, well-studied notion of behavioral equivalence between systems
(Larsen & Skou, 1991; Milner, 1995), which has been specialized for MDPs by Givan et al. (2003). In
recent work, Ferns et al. (2004, 2005, 2006) introduced (pseudo)metrics for measuring the similarity
of states, which provide approximations to bisimulation. One of the disadvantages of bisimulation
and the corresponding metrics is that they require that the behavior matches for exactly the same
actions. However, in many cases of practical interest, actions with the exact same label may not
match, but the environment may contain symmetries and other types of special structure, which may
allow correspondences between states by matching their behavior with different actions. This idea
was formalized by (Ravindran & Barto, 2003) with the concept of MDP homomorphisms. MDP homomorphisms specify a map matching equivalent states as well as equivalent actions in such states.
This matching can then be used to transfer policies between different MDPs. However, like any
equivalence relations in probabilistic systems, MDP homomorphisms are brittle: a small change
in the transition probabilities or the rewards can cause two previously equivalent state-action pairs
to become distinct. This implies that such approaches do not work well in situations in which the
model of the system is estimated from data. As a solution to this problem, Ravindran & Barto
(2004) proposed using approximate homomorphisms, which allow aggregating states that are not
exactly equivalent. They define an MDP over these partitions and quantify the approximate loss
resulting from using this MDP, compared to the original system. As expected, the bound depends on
the quality of the partition. Subsequent work (e.g. Wolfe & Barto, 2006) constructs such partitions
heuristically.
In this paper, we attempt to construct provably good, approximate MDP homomorphisms from first
principles. First, we relate the notion of MDP homomorphisms to the concept of lax bisimulation,
explored recently in the process algebra literature (Arun-Kumar, 2006). This allows us to define a
metric on states, similarly to existing bisimulation metrics. Interestingly, this approach works both
for discrete and for continuous actions. We show that the difference in the optimal value function of
two states is bounded above by this metric. This allows us to provide a state aggregation algorithm
with provable approximation guarantees. We illustrate empirically the fact that this approach can
provide much better state space compression than the use of existing bisimulation metrics.
2 Background
A finite Markov decision process (MDP) is a tuple $\langle S, A, P, R \rangle$, where $S$ is a finite set of states, $A$ is a
set of actions, $P : S \times A \times S \to [0, 1]$ is the transition model, with $P(s, a, s')$ denoting the probability
of transition from state $s$ to $s'$ under action $a$, and $R : S \times A \to \mathbb{R}$ is the reward function, with $R(s, a)$
being the reward for performing action a in state s. For the purpose of this paper, the state space S
is assumed to be finite, but the action set A could be finite or infinite (as will be detailed later). We
assume without loss of generality that rewards are bounded in [0, 1].
A deterministic policy $\pi : S \to A$ specifies which action should be taken in every state. By following
policy $\pi$ from state $s$, an agent can expect a value of $V^\pi(s) = E(\sum_{t=1}^{\infty} \gamma^{t-1} r_t \,|\, s_0 = s, \pi)$, where $\gamma \in (0, 1)$
is a discount factor and $r_t$ is the sample reward received at time $t$. In a finite MDP, the optimal
value function $V^*$ is unique and satisfies the following formulas, known as the Bellman optimality
equations:
$$V^*(s) = \max_{a \in A} \left( R(s, a) + \gamma \sum_{s'} P(s, a, s') V^*(s') \right), \quad \forall s \in S$$
If the action space is continuous, we will assume that it is compact, so the max can be taken and
the above results still hold (Puterman, 1994). Given the optimal value function, an optimal policy
is easily inferred by simply taking at every state the greedy action with respect to the one-step-lookahead value. It is well known that the optimal value function can be computed by turning the
above equation into an update rule, which can be applied iteratively.
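For concreteness, here is a minimal NumPy sketch of this update rule (value iteration) for a finite MDP; the tensor layout and tolerance are our own choices:

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """Turn the Bellman optimality equations into an iterative update.
    P: (|S|, |A|, |S|) transition tensor; R: (|S|, |A|) reward matrix."""
    n_states = R.shape[0]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V        # Q-values, shape (|S|, |A|)
        V_new = Q.max(axis=1)        # greedy backup over actions
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new
```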
Ideally, if the state space is very large, "similar" states should be grouped together in order to speed
up this type of computation. Bisimulation for MDPs (Givan et al., 2003) is a notion of behavioral
equivalence between states. A relation $E \subseteq S \times S$ is a bisimulation relation if:
$$sEu \Rightarrow \forall a.\,(R(s, a) = R(u, a) \;\text{ and }\; \forall X \in S/E.\,\Pr(X|s, a) = \Pr(X|u, a))$$
where $S/E$ denotes the partition of $S$ into $E$-equivalent subsets of states. The relation $\sim$ is the union
of all bisimulation relations, and two states in an MDP are said to be bisimilar if $s \sim u$. From this
definition, it follows that bisimilar states can match each other's actions to achieve the same returns.
Hence, bisimilar states have the same optimal value (Givan et al., 2003). However, bisimulation is
not robust to small changes in the rewards or the transition probabilities.
One way to avoid this problem is to quantify the similarity between states using a (pseudo)-metric.
Ferns et al. (2004) proposed a bisimulation metric, defined as the least fixed point of the following
operator on the lattice of 1-bounded metrics $d : S \times S \to [0, 1]$:
$$G(d)(s, u) = \max_a \left( c_r |R(s, a) - R(u, a)| + c_p K(d)(P(s, a, \cdot), P(u, a, \cdot)) \right) \qquad (1)$$
The first term above measures reward similarity. The second term is the Kantorovich metric between
the probability distributions of the two states. Given probability distributions P and Q over the state
space S, and a semimetric d on S, the Kantorovich metric K(d)(P, Q) is defined by the following
linear program:
$$\max_{v_i} \; \sum_{i=1}^{|S|} (P(s_i) - Q(s_i))\, v_i \quad \text{subject to: } \forall i, j.\; v_i - v_j \le d(s_i, s_j) \;\text{ and }\; \forall i.\; 0 \le v_i \le 1,$$
which has the following equivalent dual program:
$$\min_{\lambda_{kj}} \; \sum_{k,j=1}^{|S|} \lambda_{kj}\, d(s_k, s_j) \quad \text{subject to: } \forall k.\; \sum_j \lambda_{kj} = P(s_k), \;\; \forall j.\; \sum_k \lambda_{kj} = Q(s_j) \;\text{ and }\; \forall k, j.\; \lambda_{kj} \ge 0.$$
Ferns et al. (2004) showed that by applying (1) iteratively, the least fixed point $e_{fix}$ can be obtained,
and that $s$ and $u$ are bisimilar if and only if $e_{fix}(s, u) = 0$. In other words, bisimulation is the kernel
of this metric.
3 Lax bisimulation
environment may contain symmetries and other types of special structure, which may allow correspondences between different actions at certain states. For example, consider the environment in
Figure 1. Because of symmetry, going south in state N6 is "equivalent" to going north in state S6.
However, no two states are bisimilar. Recent work in process algebra has rethought the definition of
bisimulation to allow certain distinct actions to be essentially equivalent (Arun-Kumar, 2006). Here,
we define lax bisimulation in the context of MDPs.
Definition 1. A relation $B$ is a lax (probabilistic) bisimulation relation if whenever $sBu$ we have that:
$\forall a\, \exists b$ such that $R(s, a) = R(u, b)$ and for all $B$-closed sets $X$ we have that $\Pr(X|s, a) = \Pr(X|u, b)$,
and vice versa. The lax bisimulation $\approx$ is the union of all the lax bisimulation relations.
It is easy to see that B is an equivalence relation and we denote the equivalence classes of S by
S/B. Note that the definition above assumes that any action can be matched by any other action.
However, the set of actions that can be used to match another action can be restricted based on prior
knowledge.
Lax bisimulation is very closely related to the idea of MDP homomorphisms (Ravindran & Barto,
2003). We now formally establish this connection.
Definition 2. (Ravindran & Barto, 2003) An MDP homomorphism $h$ from $M = \langle S, A, P, R \rangle$ to $M' = \langle S', A', P', R' \rangle$ is a tuple of surjections $\langle f, \{g_s : s \in S\} \rangle$ with $h(s, a) = (f(s), g_s(a))$, where $f : S \to S'$
and $g_s : A \to A'$, such that $R(s, a) = R'(f(s), g_s(a))$ and $P(s, a, f^{-1}(f(s'))) = P'(f(s), g_s(a), f(s'))$.
Hence, a homomorphism puts in correspondence states, and has a state-dependent mapping between
actions as well. We now show that homomorphisms are identical to lax probabilistic bisimulation.
Theorem 3. Two states $s$ and $u$ are lax bisimilar if and only if they are related by some MDP homomorphism $\langle f, \{g_s : s \in S\} \rangle$, in the sense that $f(s) = f(u)$.
Proof: For the first direction, let $h$ be an MDP homomorphism and define the relation $B$ such that $sBu$
iff $f(s) = f(u)$. Since $g_u$ is a surjection to $A'$, there must be some $b \in A$ with $g_u(b) = g_s(a)$. Hence,
$$R(s, a) = R'(f(s), g_s(a)) = R'(f(u), g_u(b)) = R(u, b)$$
Let $X$ be a non-empty $B$-closed set such that $f^{-1}(f(s')) = X$ for some $s'$. Then:
$$P(s, a, X) = P'(f(s), g_s(a), f(s')) = P'(f(u), g_u(b), f(s')) = P(u, b, X)$$
so $B$ is a lax bisimulation relation.
For the other direction, let $B$ be a lax bisimulation relation. We will construct an MDP homomorphism in which $sBu \Rightarrow f(s) = f(u)$. Consider the partition $S/B$ induced by the equivalence
relation $B$ on the set $S$. For each equivalence class $X \in S/B$, we choose a representative state $s_X \in X$
and define $f(s_X) = s_X$ and $g_{s_X}(a) = a$, $\forall a \in A$. Then, for any $s \in X$, we define $f(s) = s_X$. From
Definition 1, we have that $\forall a\, \exists b$ s.t. $\Pr(X'|s, a) = \Pr(X'|s_X, b)$, $\forall X' \in S/B$. Hence, we set $g_s(a) = b$.
Then, we have:
$$P'(f(s), g_s(a), f(s')) = P'(f(s_X), b, f^{-1}(f(s'))) = P(s_X, b, f^{-1}(f(s'))) = P(s, a, f^{-1}(f(s')))$$
Also, $R'(f(s), g_s(a)) = R'(f(s_X), b) = R(s_X, b) = R(s, a)$. Hence, we constructed a homomorphism. $\square$
4 A metric for lax bisimulation
following the approach used by Ferns et al. (2004) for defining the bisimulation metric between
states. We want to say that states s and u are close exactly when every action of one state is close to
some action available in the other state. In order to capture this meaning, we first define similarity
between state-action pairs, then we lift this to states using the Hausdorff metric (Munkres, 1999).
Definition 4. Let $c_r, c_p \ge 0$ be constants with $c_r + c_p \le 1$. Given a 1-bounded semi-metric $d$ on $S$,
the metric $\delta(d) : (S \times A) \times (S \times A) \to [0, 1]$ is defined as follows:
$$\delta(d)((s, a), (u, b)) = c_r |R(s, a) - R(u, b)| + c_p K(d)(P(s, a, \cdot), P(u, b, \cdot))$$
We now have to measure the distance between the set of actions at state $s$ and the set of actions
at state u. Given a metric between pairs of points, the Hausdorff metric can be used to measure the
distance between sets of points. It is defined as follows.
Definition 5. Given a finite 1-bounded metric space $(\mathcal{M}, d)$, let $\mathcal{P}(\mathcal{M})$ be the set of compact subsets of $\mathcal{M}$
(e.g., closed and bounded in $\mathbb{R}$). The Hausdorff metric $H(d) : \mathcal{P}(\mathcal{M}) \times \mathcal{P}(\mathcal{M}) \to [0, 1]$ is defined as:
$$H(d)(X, Y) = \max\left(\sup_{x \in X} \inf_{y \in Y} d(x, y),\; \sup_{y \in Y} \inf_{x \in X} d(x, y)\right)$$
Definition 6. Denote $X_s = \{(s, a) \,|\, a \in A\}$. Let $\mathbb{M}$ be the set of all semimetrics on $S$. We define the
operator $F : \mathbb{M} \to \mathbb{M}$ as $F(d)(s, u) = H(\delta(d))(X_s, X_u)$.
We note that the same definition can be applied both for discrete and for compact continuous action
spaces. If the action set is compact, then $X_s = \{s\} \times A$ is also compact, so the Hausdorff metric is
still well defined. For simplicity, we consider the discrete case, so that max and min are defined.
Theorem 7. $F$ is monotonic and has a least fixed point $d_{fix}$, in which $d_{fix}(s, u) = 0$ iff $s \approx u$.
The proof is similar in flavor to (Ferns et al., 2004) and we omit it for lack of space.
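Although the paper gives no pseudocode, the fixed point $d_{fix}$ can be approximated by iterating $F$ from the zero metric. The sketch below does this naively, reusing the `kantorovich` helper sketched earlier; it is meant only to make Definitions 4-6 concrete (it scales poorly, since every iteration solves an LP per pair of state-action pairs):

```python
import numpy as np

def lax_metric(P, R, c_r, c_p, n_iters=100):
    """Iterate the operator F of Definition 6 from d = 0 toward d_fix.
    P: (|S|, |A|, |S|) transitions; R: (|S|, |A|) rewards."""
    n_s, n_a = R.shape
    d = np.zeros((n_s, n_s))
    for _ in range(n_iters):
        # delta(d)((s,a),(u,b)) for all pairs of state-action pairs
        delta = np.zeros((n_s, n_a, n_s, n_a))
        for s in range(n_s):
            for a in range(n_a):
                for u in range(n_s):
                    for b in range(n_a):
                        delta[s, a, u, b] = (
                            c_r * abs(R[s, a] - R[u, b])
                            + c_p * kantorovich(P[s, a], P[u, b], d))
        # Hausdorff distance between the action sets X_s and X_u
        d_new = np.zeros_like(d)
        for s in range(n_s):
            for u in range(n_s):
                d_new[s, u] = max(delta[s, :, u, :].min(axis=1).max(),
                                  delta[s, :, u, :].min(axis=0).max())
        if np.abs(d_new - d).max() < 1e-8:
            break
        d = d_new
    return d
```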
As both $e_{fix}$ and $d_{fix}$ quantify the difference in behaviour between states, it is not surprising to see
that they constrain the difference in optimal value. Indeed, the bound below has previously been
shown in (Ferns et al., 2004) for $e_{fix}$, but we also show that our metric $d_{fix}$ is tighter.
Theorem 8. Let $e_{fix}$ be the metric defined in (Ferns et al., 2004). Then we have:
$$c_r |V^*(s) - V^*(u)| \le d_{fix}(s, u) \le e_{fix}(s, u)$$
Proof: We show via induction on $n$ that for the sequence of iterates $V_n$ encountered during value
iteration, $c_r |V_n(s) - V_n(u)| \le d_n(s, u) \le e_n(s, u)$, and then the result follows by merely taking
limits.
For the base case, note that $c_r |V_0(s) - V_0(u)| = d_0(s, u) = e_0(s, u) = 0$.
Assume this holds for $n$. By the monotonicity of $F$, we have that $F(d_n)(s, u) \le F(e_n)(s, u)$. Now,
for any $a$, $\delta(e_n)((s, a), (u, a)) \le G(e_n)(s, u)$, which implies:
$$F(e_n)(s, u) \le \max\left(\max_a \delta(e_n)((s, a), (u, a)),\; \max_b \delta(e_n)((s, b), (u, b))\right) \le G(e_n)(s, u),$$
so $d_{n+1} \le e_{n+1}$. Without loss of generality, assume that $V_{n+1}(s) > V_{n+1}(u)$. Then, with $a^*$ and $b^*$ denoting maximizing actions at $s$ and $u$ respectively:
$$c_r |V_{n+1}(s) - V_{n+1}(u)| = c_r \left| \max_a \Big(R(s, a) + \gamma \sum_{s'} P(s, a, s') V_n(s')\Big) - \max_b \Big(R(u, b) + \gamma \sum_{s'} P(u, b, s') V_n(s')\Big) \right|$$
$$= c_r \left| \Big(R(s, a^*) + \gamma \sum_{s'} P(s, a^*, s') V_n(s')\Big) - \Big(R(u, b^*) + \gamma \sum_{s'} P(u, b^*, s') V_n(s')\Big) \right|$$
$$= c_r \min_b \left| \Big(R(s, a^*) + \gamma \sum_{s'} P(s, a^*, s') V_n(s')\Big) - \Big(R(u, b) + \gamma \sum_{s'} P(u, b, s') V_n(s')\Big) \right|$$
$$\le c_r \max_a \min_b \left| \Big(R(s, a) + \gamma \sum_{s'} P(s, a, s') V_n(s')\Big) - \Big(R(u, b) + \gamma \sum_{s'} P(u, b, s') V_n(s')\Big) \right|$$
$$\le \max_a \min_b \left( c_r |R(s, a) - R(u, b)| + c_p \Big| \sum_{s'} (P(s, a, s') - P(u, b, s'))\, \frac{c_r \gamma}{c_p} V_n(s') \Big| \right)$$
Now, since $\gamma \le c_p$, we have $0 \le \frac{c_r \gamma}{c_p} V_n(s') \le \frac{(1 - c_p)\gamma}{c_p (1 - \gamma)} \le 1$, and by the induction hypothesis
$$\frac{c_r \gamma}{c_p} V_n(s) - \frac{c_r \gamma}{c_p} V_n(u) \le c_r |V_n(s) - V_n(u)| \le d_n(s, u).$$
So $\{\frac{c_r \gamma}{c_p} V_n(s') : s' \in S\}$ is a feasible solution to the LP for $K(d_n)(P(s, a), P(u, b))$. We then continue the
inequality: $c_r |V_{n+1}(s) - V_{n+1}(u)| \le \max_a \min_b \left( c_r |R(s, a) - R(u, b)| + c_p K(d_n)(P(s, a), P(u, b)) \right) = F(d_n)(s, u) = d_{n+1}(s, u)$. $\square$
5 State aggregation
We now show how we can use this notion of lax bisimulation metrics to construct approximate MDP
homomorphisms. First, if we have an MDP homomorphism, we can use it to provide a state space
aggregation, as follows.
Definition 9. Given an MDP $M$ and a homomorphism, an aggregated MDP $M'$ is given by
$(S', A, \{P(C, a, D) : a \in A;\, C, D \in S'\}, \{R(C, a) : a \in A,\, C \in S'\}, \rho, g_s : s \in S)$, where $S'$ is a partition of
$S$, $\rho : S \to S'$ maps states to their aggregates, each $g_s : A \to A$ relabels the action set, and we have that,
$\forall C, D \in S'$ and $a \in A$,
$$P(C, a, D) = \frac{1}{|C|} \sum_{s \in C} P(s, g_s(a), D) \quad \text{and} \quad R(C, a) = \frac{1}{|C|} \sum_{s \in C} R(s, g_s(a))$$
Note that all the states in a partition have actions that are relabelled specifically so they can exactly
match each other's behaviour. Thus, a policy in the aggregate MDP can be lifted to the original
MDP by using this relabeling.
Definition 10. If $M'$ is an aggregation of MDP $M$ and $\pi'$ is a policy in $M'$, then the lifted policy is
defined by $\pi(s) = g_s(\pi'(\rho(s)))$.
Using a lax bisimulation metric, it is possible to choose appropriate re-labelings so that states within
a partition can approximately match each other?s actions.
Definition 11. Given a lax bisimulation metric $d$ and an MDP $M$, we say that an aggregated MDP $M'$
is $d$-consistent if each aggregated class $C$ has a state $s \in C$, called the representative of $C$, such that:
$$\forall u \in C,\; \delta(d)((s, g_s(a)), (u, g_u(a))) \le F(d)(s, u)$$
When the re-labelings are chosen in this way, we can solve for the optimal value function of the
aggregated MDP and be assured that for each state, its true optimal value is close to the optimal
value of the partition in which it is contained.
Theorem 12. If $M'$ is a $\hat{d}$-consistent aggregation of an MDP $M$ and $n \in \mathbb{N}$, then $\forall s \in S$ we have:
$$c_r |V_n(\rho(s)) - V_n(s)| \le m(\rho(s)) + M \sum_{k=1}^{n-1} \gamma^{n-k},$$
where $m(C) = 2 \max_{u \in C} \hat{d}(s^*, u)$, $s^*$ denotes the representative state of $C$, and $M = \max_C m(C)$. Furthermore, if $\pi'$ is a policy in $M'$ and $\pi$ is the corresponding lifted policy in $M$, then:
$$c_r |V_n^{\pi'}(\rho(s)) - V_n^{\pi}(s)| \le m(\rho(s)) + M \sum_{k=1}^{n-1} \gamma^{n-k}$$
Proof: We have
$$|V_{n+1}(\rho(s)) - V_{n+1}(s)| = \left| \max_a \Big( R(\rho(s), a) + \gamma \sum_{D \in S'} P(\rho(s), a, D) V_n(D) \Big) - \max_a \Big( R(s, a) + \gamma \sum_{s'} P(s, a, s') V_n(s') \Big) \right|$$
$$\le \max_a \frac{1}{|\rho(s)|} \sum_{u \in \rho(s)} \left( |R(u, g_u(a)) - R(s, g_s(a))| + \gamma \Big| \sum_{D \in S'} P(u, g_u(a), D) V_n(D) - \sum_{s'} P(s, g_s(a), s') V_n(s') \Big| \right)$$
$$\le \max_a \frac{1}{|\rho(s)|} \sum_{u \in \rho(s)} \left( |R(u, g_u(a)) - R(s, g_s(a))| + \gamma \Big| \sum_{s'} \big( P(u, g_u(a), s') V_n(\rho(s')) - P(s, g_s(a), s') V_n(s') \big) \Big| \right)$$
$$\le \max_a \frac{1}{|\rho(s)|} \sum_{u \in \rho(s)} \left( |R(u, g_u(a)) - R(s, g_s(a))| + \gamma \Big| \sum_{s'} (P(u, g_u(a), s') - P(s, g_s(a), s')) V_n(s') \Big| + \gamma \Big| \sum_{s'} P(u, g_u(a), s') (V_n(\rho(s')) - V_n(s')) \Big| \right)$$
$$\le \max_a \frac{1}{c_r |\rho(s)|} \sum_{u \in \rho(s)} \left( c_r |R(s, g_s(a)) - R(u, g_u(a))| + c_p \Big| \sum_{s'} (P(u, g_u(a), s') - P(s, g_s(a), s'))\, \frac{c_r \gamma}{c_p} V_n(s') \Big| \right) + \gamma \max_a \frac{1}{|\rho(s)|} \sum_{u \in \rho(s)} \sum_{s'} P(u, g_u(a), s') |V_n(\rho(s')) - V_n(s')|$$
From Theorem 8, we know that $\{\frac{c_r \gamma}{c_p} V_n(s') : s' \in S\}$ is a feasible solution to the primal LP for
$K(d_n)(P(s, g_s(a)), P(u, g_u(a)))$. Let $z$ be the representative used for $\rho(s)$. Then we can continue
as follows:
$$c_r |R(s, g_s(a)) - R(u, g_u(a))| + c_p K(d_n)(P(s, g_s(a)), P(u, g_u(a)))$$
$$\le c_r |R(s, g_s(a)) - R(u, g_u(a))| + c_p K(\hat{d})(P(s, g_s(a)), P(u, g_u(a)))$$
$$\le c_r |R(s, g_s(a)) - R(z, g_z(a))| + c_p K(\hat{d})(P(s, g_s(a)), P(z, g_z(a))) + c_r |R(z, g_z(a)) - R(u, g_u(a))| + c_p K(\hat{d})(P(z, g_z(a)), P(u, g_u(a))) = \hat{d}(s, z) + \hat{d}(z, u) \le m(\rho(s))$$
We continue with the original inequality using these two results:
$$|V_{n+1}(\rho(s)) - V_{n+1}(s)| \le \frac{1}{c_r |\rho(s)|} \sum_{u \in \rho(s)} m(\rho(s)) + \gamma \max_{s'} |V_n(\rho(s')) - V_n(s')|$$
$$\le \frac{m(\rho(s))}{c_r} + \gamma \max_{s'} \frac{1}{c_r} \left( m(\rho(s')) + M \sum_{k=1}^{n-1} \gamma^{n-k} \right) \le \frac{1}{c_r} \left( m(\rho(s)) + M \sum_{k=1}^{n} \gamma^{(n+1)-k} \right)$$
The second part of the proof is nearly identical, except that instead of maximizing over actions, the action
selected by the policy, $a = \pi'(\rho(s))$, and the lifted policy, $g_s(a) = \pi(s)$, are used. $\square$
By taking limits we get the following theorem:
Theorem 13. If $M'$ is a $d_{fix}$-consistent aggregation of an MDP $M$, then $\forall s \in S$ we have:
$$c_r |V^*(\rho(s)) - V^*(s)| \le m(\rho(s)) + \frac{\gamma M}{1 - \gamma}$$
Furthermore, if $\pi'$ is any policy in $M'$ and $\pi$ is the lifted policy in $M$, then
$$c_r |V^{\pi'}(\rho(s)) - V^{\pi}(s)| \le m(\rho(s)) + \frac{\gamma M}{1 - \gamma}$$
where $m(C) = 2 \max_{u \in C} d_{fix}(s^*, u)$, $s^*$ is the representative state of $C$, and $M = \max_C m(C)$.
One appropriate way to aggregate states is to choose some desired error bound $\epsilon > 0$ and ensure
that the states in each partition are within an $\epsilon$-ball. A simple way to do this is to pick states at
random and add to a partition each state within the $\epsilon$-ball; a sketch of this procedure appears below. Of course, better clustering heuristics can
be used here as well.
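A minimal sketch of this random $\epsilon$-ball partitioning, assuming a precomputed metric matrix `d` (e.g., an approximation of $d_{fix}$ such as the one sketched earlier):

```python
import numpy as np

def epsilon_aggregate(d, eps, rng=None):
    """Random epsilon-ball partitioning: repeatedly pick an unassigned state at
    random and group with it every unassigned state within eps under metric d.
    Returns (representative, block) pairs, matching Definition 11's notion of
    a representative state per aggregated class."""
    rng = rng or np.random.default_rng()
    n = d.shape[0]
    unassigned = set(range(n))
    partitions = []
    while unassigned:
        rep = rng.choice(sorted(unassigned))            # representative state
        block = [u for u in unassigned if d[rep, u] <= eps]
        partitions.append((rep, block))
        unassigned -= set(block)
    return partitions
```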
It has been noted that when the above condition holds, then under the unlaxed bisimulation metric
$e_{fix}$ we can be assured that, for each state $s$, $|V^*(\rho(s)) - V^*(s)|$ is bounded by $\frac{2\epsilon}{c_r(1-\gamma)}$. The theorem
above shows that under the lax bisimulation metric $d_{fix}$ this difference is actually bounded by $\frac{4\epsilon}{c_r(1-\gamma)}$.
However, as we illustrate in the next section, a massive reduction in the size of the state space can
be achieved by moving from $e_{fix}$ to $d_{fix}$, even when using $\epsilon' = \frac{\epsilon}{2}$.
For large systems, it might not be feasible to compute the metric $e_{fix}$ in the original MDP. In this
case, we might want to use some sort of heuristic or prior knowledge to create an aggregation.
Ravindran & Barto (2003) provided, based on a result from Whitt (1978), a bound on the difference
in values between the optimal policy in the aggregated MDP and the lifted policy in the original
MDP. We now show that our metric can be used to tighten this bound.
Theorem 14. If $M'$ is an aggregation of an MDP $M$, $\pi'$ is an optimal policy in $M'$, $\pi$ is the policy
lifted from $\pi'$ to $M$, and $d'_{fix}$ corresponds to our metric computed on $M'$, then
$$|V^{\pi}(s) - V^{\pi'}(\rho(s))| \le \frac{2}{1 - \gamma} \left( \max_{s,a} |R(s, g_s(a)) - R(\rho(s), a)| + \frac{\gamma}{c_r} \max_{s,a} K(d'_{fix})(P(s, g_s(a)), P(\rho(s), a)) \right)$$
[Figure 1: left, a cross-shaped grid world with arms N1-N6, S1-S6, E1-E6, W1-W6 around a center state C; right, a plot titled "Comparison of Laxed and Unlaxed Lumping Performance" showing the number of lumped states (0 to 30) against epsilon (0.0 to 1.0) for the unlaxed and laxed metrics.]
Figure 1: Example environment exhibiting symmetries (left). Aggregation performance (right)
Proof: We have:
$$|V^{\pi}(s) - V^{\pi'}(\rho(s))| \le \frac{2}{1 - \gamma} \max_{s,a} \left| R(s, g_s(a)) - R(\rho(s), a) + \gamma \sum_C (P(s, g_s(a), C) - P(\rho(s), a, C)) V^{\pi'}(C) \right|$$
$$\le \frac{2}{1 - \gamma} \left( \max_{s,a} |R(s, g_s(a)) - R(\rho(s), a)| + \gamma \max_{s,a} \left| \sum_C (P(s, g_s(a), C) - P(\rho(s), a, C)) V^{\pi'}(C) \right| \right)$$
$$\le \frac{2}{1 - \gamma} \left( \max_{s,a} |R(s, g_s(a)) - R(\rho(s), a)| + \frac{\gamma}{c_r} \max_{s,a} K(d'_{fix})(P(s, g_s(a)), P(\rho(s), a)) \right)$$
The first inequality originally comes from (Whitt, 1978) and is applied to MDPs in (Ravindran &
Barto, 2003). The last inequality holds since $\pi'$ is an optimal policy, and thus by Theorem 8 we know
that $\{c_r V^{\pi'}(C) : C \in S'\}$ is a feasible solution. $\square$
As a corollary, we can get the same bound as in (Ravindran & Barto, 2003) by bounding the Kantorovich metric by the total variation metric.
Definition 15. Given two finite distributions $P$ and $Q$, the total variation metric $TV(P, Q)$ is defined
as: $TV(P, Q) = \frac{1}{2} \sum_s |P(s) - Q(s)|$
Corollary 16. Let $\Delta = \max_{C,a} R(C, a) - \min_{C,a} R(C, a)$ be the maximum difference in rewards in the
aggregated MDP. Then:
$$|V^{\pi}(s) - V^{\pi'}(\rho(s))| \le \frac{2}{1 - \gamma} \left( \max_{s,a} |R(s, g_s(a)) - R(\rho(s), a)| + \frac{\gamma \Delta}{1 - \gamma} \max_{s,a} TV(P(s, g_s(a)), P(\rho(s), a)) \right)$$
Proof: This follows from the fact that:
$$\max_{C,D} d'_{fix}(C, D) \le c_r \Delta + c_p \max_{C,D} d'_{fix}(C, D) \;\Longrightarrow\; \max_{C,D} d'_{fix}(C, D) \le \frac{c_r \Delta}{1 - c_p} \le \frac{c_r \Delta}{1 - \gamma},$$
and, using the total variation as an approximation (Gibbs & Su, 2002), we have:
$$K(d'_{fix})(P(s, g_s(a)), P(\rho(s), a)) \le \max_{C,D} d'_{fix}(C, D) \cdot TV(P(s, g_s(a)), P(\rho(s), a)). \;\square$$
6 Illustration
Consider the cross-shaped MDP displayed in Figure 1. There is a reward of 1 in the center and the
probability of the agent moving in the intended direction is 0.8. For a given $\epsilon$, we used the random
partitioning algorithm outlined earlier to create a state aggregation. The graph plots the size of the
aggregated MDPs obtained against $\epsilon$, using the lax and the non-lax bisimulation metrics. In the case
of the lax metric, we used $\epsilon' = \epsilon/2$ to compensate for the factor of 2 difference in the error bound.
It is very revealing that the number of partitions drops very quickly and levels off at around 6 or 7 for
our algorithm. This is because the MDP is collapsing to a state space close to the natural choice of
$\{\{C\}\} \cup \{\{N_i, S_i, W_i, E_i\} : i \in \{1, 2, 3, 4, 5, 6\}\}$. Under the unlaxed metric, this is not likely to occur,
and thus the first states to be partitioned together are the ones neighbouring each other (which can
actually have quite different behaviours).
7 Discussion and future work
We defined a metric for measuring the similarity of state-action pairs in a Markov Decision Process
and used it in an algorithm for constructing approximate MDP homomorphisms. Our approach
works significantly better than the bisimulation metrics of Ferns et al., as it allows capturing different
regularities in the environment. The theoretical bound on the error in the value function presented
in (Ravindran & Barto, 2004) can be derived using our metric.
Although the metric is potentially expensive to compute, there are domains in which having an
accurate aggregation is worth it. For example, in mobile device applications, one may have big
computational resources initially to build an aggregation, but may then insist on a very coarse,
good aggregation, to fit on a small device. The metric can also be used to find subtasks in a larger
problem that can be solved using controllers from a pre-supplied library. For example, if a controller
is available to navigate single rooms, the metric might be used to lump states in a building schematic
into "rooms". The aggregate MDP can then be used to solve the high level navigational task using
the controller to navigate specific rooms.
An important avenue for future work is reducing the computational complexity of this approach.
Two sources of complexity include the quadratic dependence on the number of actions, and the
evaluation of the Kantorovich metric. The first issue can be addressed by sampling pairs of actions,
rather than considering all possibilities. We are also investigating the possibility of replacing the
Kantorovich metric (which is very convenient from the theoretical point of view) with a more practical approximation. Finally, the extension to continuous states is very important. We currently have
preliminary results on this issue, using an approach similar to (Ferns et al, 2005), which assumes
lower-semi-continuity of the reward function. However, the details are not yet fully worked out.
Acknowledgements: This work was funded by NSERC and CFI.
References
Arun-Kumar, S. (2006). On bisimilarities induced by relations on actions. SEFM '06: Proceedings of the Fourth
IEEE International Conference on Software Engineering and Formal Methods (pp. 41-49). Washington, DC,
USA: IEEE Computer Society.
Ferns, N., Castro, P. S., Precup, D., & Panangaden, P. (2006). Methods for computing state similarity in Markov
Decision Processes. Proceedings of the 22nd UAI.
Ferns, N., Panangaden, P., & Precup, D. (2004). Metrics for finite Markov decision processes. Proceedings of
the 20th UAI (pp. 162-169).
Ferns, N., Panangaden, P., & Precup, D. (2005). Metrics for Markov decision processes with infinite state
spaces. Proceedings of the 21st UAI (pp. 201-209).
Gibbs, A., & Su, F. (2002). On choosing and bounding probability metrics.
Givan, R., Dean, T., & Greig, M. (2003). Equivalence notions and model minimization in Markov Decision
Processes. Artificial Intelligence, 147, 163-223.
Larsen, K. G., & Skou, A. (1991). Bisimulation through probabilistic testing. Inf. Comput., 94, 1-28.
Li, L., Walsh, T. J., & Littman, M. L. (2006). Towards a unified theory of state abstraction for MDPs. Proceedings of the International Symposium on Artificial Intelligence and Mathematics.
Milner, R. (1995). Communication and concurrency. Prentice Hall International (UK) Ltd.
Munkres, J. (1999). Topology. Prentice Hall.
Puterman, M. L. (1994). Markov decision processes: Discrete stochastic dynamic programming. Wiley.
Ravindran, B., & Barto, A. G. (2003). Relativized options: Choosing the right transformation. Proceedings of
the 20th ICML (pp. 608-615).
Ravindran, B., & Barto, A. G. (2004). Approximate homomorphisms: A framework for non-exact minimization
in Markov Decision Processes. Proceedings of the Fifth International Conference on Knowledge Based
Computer Systems.
Whitt, W. (1978). Approximations of dynamic programs I. Mathematics of Operations Research, 3, 231-243.
Wolfe, A. P., & Barto, A. G. (2006). Decision tree methods for finding reusable MDP homomorphisms.
Proceedings of AAAI.
2,674 | 3,424 | Near-Minimax Recursive Density Estimation
on the Binary Hypercube
Maxim Raginsky
Duke University
Durham, NC 27708
[email protected]
Svetlana Lazebnik
UNC Chapel Hill
Chapel Hill, NC 27599
[email protected]
Rebecca Willett
Duke University
Durham, NC 27708
[email protected]
Jorge Silva
Duke University
Durham, NC 27708
[email protected]
Abstract
This paper describes a recursive estimation procedure for multivariate binary densities using orthogonal expansions. For d covariates, there are 2^d basis coefficients
to estimate, which renders conventional approaches computationally prohibitive
when d is large. However, for a wide class of densities that satisfy a certain sparsity condition, our estimator runs in probabilistic polynomial time and adapts to
the unknown sparsity of the underlying density in two key ways: (1) it attains
near-minimax mean-squared error, and (2) the computational complexity is lower
for sparser densities. Our method also allows for flexible control of the trade-off
between mean-squared error and computational complexity.
1 Introduction
Multivariate binary data arise in a variety of fields, such as biostatistics [1], econometrics [2] or
artificial intelligence [3]. In these and other settings, it is often necessary to estimate a probability density from a number of independent observations. Formally, we have n i.i.d. samples
from a probability density f (with respect to the counting measure) on the d-dimensional binary hypercube B^d, B = {0, 1}, and seek an estimate f̂ of f with a small mean-squared error
$$\mathrm{MSE}(f, \hat{f}) = \mathbb{E} \sum_{x \in B^d} \big(f(x) - \hat{f}(x)\big)^2.$$
In many cases of practical interest, the number of covariates d is much larger than log n, so direct
estimation of f as a multinomial density with 2^d parameters is both unreliable and impractical. Thus,
one has to resort to "nonparametric" methods and search for good estimators in a suitably defined
class whose complexity grows with n. Some nonparametric methods proposed in the literature, such
as kernels [4] and orthogonal expansions [5, 6], either have very slow rates of MSE convergence or
are computationally prohibitive for large d. For example, the kernel method [4] requires O(n²d)
operations to compute the estimate at any x ∈ B^d, yet its MSE decays as O(n^{-4/(4+d)}), which is
extremely slow when d is large. In contrast, orthogonal function methods generally have much better
MSE decay rates, but rely on estimating 2^d coefficients in a fixed basis, which requires enormous
computational resources for large d. For instance, using the Fast Hadamard Transform to estimate
the coefficients in the so-called Walsh basis using n samples requires O(nd·2^d) operations [5].
In this paper we take up the problem of accurate, computationally tractable estimation of a density
on the binary hypercube. We take the minimax point of view, where we assume that f comes from
a particular function class F and seek an estimator that approximately attains the minimax MSE
$$R_n^*(\mathcal{F}) = \inf_{\hat{f}} \sup_{f \in \mathcal{F}} \mathrm{MSE}(f, \hat{f}),$$
where the infimum is over all estimators based on n i.i.d. samples. We will define our function class
to reflect another feature often encountered in situations involving multivariate binary data: namely,
that the shape of the underlying density is strongly influenced by small constellations of the d covariates. For example, when working with panel data [2], it may be the case that the answers to some
specific subset of questions are highly correlated among a particular group of the panel participants,
and the responses of these participants to other questions are nearly random; moreover, there may
be several such distinct groups in the panel. To model such "constellation effects" mathematically,
we will consider classes of densities that satisfy a particular sparsity condition.
Our contribution consists in developing a thresholding density estimator that adapts to the unknown
sparsity of the underlying density in two key ways: (1) it is near-minimax optimal, with the error
decay rate depending upon the sparsity, and (2) it can be implemented using a recursive algorithm
that runs in probabilistic polynomial time and whose computational complexity is lower for sparser
densities. The algorithm entails recursively examining empirical estimates of whole blocks of the
2^d basis coefficients. At each stage of the algorithm, the weights of the coefficients estimated at
previous stages are used to decide which remaining coefficients are most likely to be significant,
and computing resources are allocated accordingly. We show that this decision is accurate with high
probability. An additional attractive feature of our approach is that it gives us a principled way of
trading off MSE against computational complexity by controlling the decay of the threshold as a
function of the recursion depth.
2 Preliminaries
We first list some definitions and results needed in the sequel. Throughout the paper, C and c denote
generic constants whose values may change from line to line. For two real numbers a and b, a ∧ b
and a ∨ b denote, respectively, the smaller and the larger of the two.
Biased Walsh bases. Let μ_d denote the counting measure on the d-dimensional binary hypercube
B^d. Then the space of all real-valued functions on B^d is the real Hilbert space L²(μ_d) with the
standard inner product ⟨f, g⟩ = Σ_{x∈B^d} f(x)g(x). Given any σ ∈ (0, 1), we can construct an
orthonormal system Φ_{d,σ} in L²(μ_d) as follows. Define two functions φ_{0,σ}, φ_{1,σ} : B → ℝ by
$$\phi_{0,\sigma}(x) = (1-\sigma)^{x/2}\,\sigma^{(1-x)/2} \quad\text{and}\quad \phi_{1,\sigma}(x) = (-1)^x\,\sigma^{x/2}(1-\sigma)^{(1-x)/2}, \qquad x \in \{0, 1\}. \tag{1}$$
Now, for any s = (s(1), …, s(d)) ∈ B^d define the function φ_{s,σ} : B^d → ℝ by
$$\phi_{s,\sigma}(x) = \prod_{i=1}^{d} \phi_{s(i),\sigma}(x(i)), \qquad \forall x = (x(1), \dots, x(d)) \in B^d \tag{2}$$
(this is written more succinctly as φ_{s,σ} = φ_{s(1),σ} ⊗ … ⊗ φ_{s(d),σ}, where ⊗ is the tensor product).
The set Φ_{d,σ} = {φ_{s,σ} : s ∈ B^d} is an orthonormal system in L²(μ_d), which is referred to as the
Walsh system with bias σ [8, 9]. Any function f ∈ L²(μ_d) can be uniquely represented as
$$f = \sum_{s \in B^d} \theta_{s,\sigma}\,\phi_{s,\sigma},$$
where θ_{s,σ} = ⟨f, φ_{s,σ}⟩. When σ = 1/2, we get the standard Walsh system used in [5, 6]; in that
case, we shall omit the index σ = 1/2 for simplicity. The product structure of the biased Walsh
bases makes them especially convenient for statistical applications, as it allows for a computationally
efficient recursive method for computing accurate estimates of squared coefficients in certain
hierarchically structured sets.
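To make the construction concrete, the following short sketch (our own illustration, not part of the paper's algorithm; all function names are ours) builds the biased Walsh system for a small d and checks orthonormality numerically:

```python
import itertools
import numpy as np

def phi_bit(s_bit, x_bit, sigma):
    """Single-coordinate functions phi_{0,sigma}, phi_{1,sigma} of Eq. (1)."""
    if s_bit == 0:
        return (1 - sigma) ** (x_bit / 2) * sigma ** ((1 - x_bit) / 2)
    return (-1) ** x_bit * sigma ** (x_bit / 2) * (1 - sigma) ** ((1 - x_bit) / 2)

def phi(s, x, sigma=0.5):
    """Tensor-product basis function phi_{s,sigma}(x) of Eq. (2)."""
    return float(np.prod([phi_bit(sb, xb, sigma) for sb, xb in zip(s, x)]))

d, sigma = 3, 0.3
cube = list(itertools.product([0, 1], repeat=d))
# Gram matrix of the system; orthonormality in L^2(mu_d) means it is the identity.
G = np.array([[sum(phi(s, x, sigma) * phi(t, x, sigma) for x in cube)
               for t in cube] for s in cube])
assert np.allclose(G, np.eye(2 ** d))
```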
Sparsity and weak-ℓ^p balls. We are interested in densities whose representations in some biased
Walsh basis satisfy a certain sparsity constraint. Given σ ∈ (0, 1) and a function f ∈ L²(μ_d), let
θ(f) denote the list of its coefficients in Φ_{d,σ}. We are interested in cases when the components
of θ(f) decay according to a power law. Formally, let θ_{(1)}, …, θ_{(M)}, where M = 2^d, be the
components of θ(f) arranged in decreasing order of magnitude: |θ_{(1)}| ≥ |θ_{(2)}| ≥ … ≥ |θ_{(M)}|.
Given some 0 < p < ∞, we say that θ(f) belongs to the weak-ℓ^p ball of radius R [10], and write
θ(f) ∈ wℓ^p(R), if
$$|\theta_{(m)}| \le R \cdot m^{-1/p}, \qquad 1 \le m \le M. \tag{3}$$
It is not hard to show that the coefficients of any probability density on B^d in Φ_{d,σ} are bounded by
R(σ) = [σ ∨ (1−σ)]^{d/2}. With this in mind, let us define the class F_d(p, σ) of all functions f on B^d
satisfying θ(f) ∈ wℓ^p(R(σ)) in ℝ^M. We are particularly interested in the case 0 < p < 2. When
σ = 1/2, with R(σ) = 2^{-d/2}, we shall write simply F_d(p).
We will need approximation properties of weak-ℓ^p balls as listed, e.g., in [11]. The basic fact is that
the power-law condition (3) is equivalent to the concentration estimate
$$\big|\{s \in B^d : |\theta_s| \ge \lambda\}\big| \le (R/\lambda)^p, \qquad \forall \lambda > 0. \tag{4}$$
For any 1 ≤ k ≤ M, let θ^k(f) denote the vector θ(f) with θ_{(k+1)}, …, θ_{(M)} set to zero. Then it
follows from (3) that ‖θ(f) − θ^k(f)‖_{ℓ²_M} ≤ CRk^{-r}, where r = 1/p − 1/2, and C is some constant
that depends only on p. Given any f ∈ F_d(p, σ) and denoting by f_k the function obtained from it
by retaining only the k largest coefficients, we get from Parseval's identity that
$$\|f - f_k\|_{L^2(\mu_d)} \le C R k^{-r}. \tag{5}$$
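As a quick numerical illustration of (3)–(5) (our own example, not from the paper), one can take the extremal coefficient sequence of a weak-ℓ^p ball and observe that the k-term approximation error tracks the Rk^{-r} rate:

```python
import numpy as np

p, M = 0.5, 2 ** 12
R = M ** -0.5                       # radius R = 2^{-d/2}, the sigma = 1/2 case
r = 1.0 / p - 0.5
m = np.arange(1, M + 1)
theta = R * m ** (-1.0 / p)         # boundary of the weak-l^p ball, Eq. (3)
for k in (16, 64, 256):
    err = np.sqrt((theta[k:] ** 2).sum())   # ||theta - theta^k|| in l^2
    print(k, err, R * k ** -r)              # error decays like C R k^{-r}, Eq. (5)
```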
To get a feeling for what the classes F_d(p, σ) could model in practice, we note that, for a fixed
σ ∈ (0, 1), the product of d Bernoulli(σ*) densities with σ* = √σ/(√σ + √(1−σ)) is the unique
sparsest density in the entire scale of F_d(p, σ) spaces with 0 < p < 2: all of its coefficients in
Φ_{d,σ} are zero, except for θ_{s,σ} with s = (0, …, 0), which is equal to (σ*/√σ)^d. Other densities in
{F_d(p, σ) : 0 < p < 2} include, for example, mixtures of components that, up to a permutation
of {1, …, d}, can be written as a tensor product of a large number of Bernoulli(σ*) densities and
some other density. The parameter σ can be interpreted either as the default noise level in measuring
an individual covariate or as a smoothness parameter that interpolates between the point masses
δ_{(0,…,0)} and δ_{(1,…,1)}. We assume that σ is known (e.g., from some preliminary exploration of the
data or from domain-specific prior information) and fixed.
In the following, we limit ourselves to the "noisiest" case σ = 1/2 with R(1/2) = 2^{-d/2}. Our
theory can be easily modified to cover any other σ ∈ (0, 1): one would need to replace R = 2^{-d/2}
with the corresponding R(σ) and use the bound ‖φ_{s,σ}‖_∞ ≤ R(σ) instead of ‖φ_s‖_∞ ≤ 2^{-d/2} when
estimating variances and higher moments.
3 Density estimation via recursive Walsh thresholding
We now turn to our problem of estimating a density f on B^d from a sample {X_i}_{i=1}^n when f ∈ F_d(p)
for some unknown 0 < p < 2. The minimax theory for weak-ℓ^p balls [10] says that
$$R_n^*(F_d(p)) \asymp C M^{-p/2} n^{-2r/(2r+1)}, \qquad r = 1/p - 1/2,$$
where M = 2^d. We shall construct an estimator that adapts to unknown sparsity of f in the sense
that it achieves this minimax rate up to a logarithmic factor without prior knowledge of p and that
its computational complexity improves as p → 0.
Our method is based on the thresholding of empirical Walsh coefficients. A thresholding estimator
is any estimator of the form
$$\hat{f} = \sum_{s \in B^d} \mathbb{1}\{T(\hat{\theta}_s) \ge \tau_n\}\, \hat{\theta}_s \phi_s,$$
where θ̂_s = (1/n) Σ_{i=1}^n φ_s(X_i) are empirical estimates of the Walsh coefficients of f, T(·) is
some statistic, and 𝟙{·} is an indicator function. The threshold τ_n depends on the sample size. For
example, in [5, 6] the statistic T(θ̂_s) = θ̂_s² was used with the threshold τ_n = 1/M(n+1). This
choice was motivated by the considerations of bias-variance trade-off for each individual coefficient.
While this is not an issue when d ? log n, it is clearly impractical when d ? log n. To deal with this
issue, we will consider a recursive thresholding approach that will allow us to reject whole groups
of coefficients based on efficiently computable statistics. This approach is motivated as follows. For
any 1 ? k ? d, we can write any f ? L2 (?d ) with the Walsh coefficients ?(f ) as
X
X X
?uv ?uv =
fu ? ?u ,
f=
u?Bk v?Bd?k
u?Bk
?
where uv denotes the concatenation of u ? B k and v ? B d?k and, for each u ? B k , fu =
P
P
?
2
2
2
v?Bd?k ?uv ?v lies in L (?d?k ). By Parseval?s identity, Wu = kfu kL2 (?d?k ) =
v?Bd?k ?uv .
2
This means that if Wu < ? for some u ? B k , then ?uv
< ? for every v ? B d?k . Thus, we could
start at u = 0 and u = 1 and check whether Wu ? ?. If not, then we would discard all ?uv with
v ? B d?1; otherwise, we would proceed on to u0 and u1. At the end of this process, we will be left
only with those s ? B d for which ?s2 ? ?. Let f? denote the resulting function. If f ? Fd (p) for
some p, then we will have kf ? f? k2L2 (?d ) ? CM ?1 (M ?)?2r/(2r+1) .
We will follow this reasoning in constructing our estimator. We begin by developing an estimator
for W_u. We will use the following fact, easily proved using the definitions (1) and (2) of the Walsh
functions: for any density f on B^d, any k and u ∈ B^k, we have
$$f_u(y) = \mathbb{E}_f\big[\phi_u(\xi_k(X)) \mathbb{1}\{\zeta_k(X) = y\}\big], \ \forall y \in B^{d-k}, \quad \text{and} \quad W_u = \mathbb{E}_f\big\{\phi_u(\xi_k(X)) f_u(\zeta_k(X))\big\},$$
where ξ_k(x) ≜ (x(1), …, x(k)) and ζ_k(x) ≜ (x(k+1), …, x(d)) for any x ∈ B^d. This suggests
that we can estimate W_u by
$$\widehat{W}_u = \frac{1}{n^2} \sum_{i_1=1}^{n} \sum_{i_2=1}^{n} \phi_u(\xi_k(X_{i_1}))\, \phi_u(\xi_k(X_{i_2}))\, \mathbb{1}\{\zeta_k(X_{i_1}) = \zeta_k(X_{i_2})\}. \tag{6}$$
Using induction and Eqs. (1) and (2), we can prove that Ŵ_u = Σ_{v∈B^{d−k}} θ̂²_{uv}. An advantage of
computing Ŵ_u indirectly via (6) rather than as a sum of θ̂²_{uv}, v ∈ B^{d−k}, is that, while the latter
has O(2^{d−k} n) complexity, the former has only O(n²d) complexity. This can lead to significant
computational savings for small k. When k ≥ d − log(nd), it becomes more efficient to use the
direct estimator.
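A direct transcription of (6) into code might look as follows (a sketch in our own notation, reusing phi from the earlier snippet); the pairwise sum makes the cost quadratic in n regardless of how many suffixes v there are:

```python
import numpy as np

def W_hat(u, X, sigma=0.5):
    """Estimate W_u via Eq. (6); X is an (n, d) 0/1 array, u a tuple of bits."""
    k, n = len(u), X.shape[0]
    head, tail = X[:, :k], X[:, k:]          # xi_k(X_i) and zeta_k(X_i)
    pu = np.array([phi(u, tuple(row), sigma) for row in head])  # phi_u(xi_k(X_i))
    # pairwise indicator that the suffixes zeta_k agree:
    same = (tail[:, None, :] == tail[None, :, :]).all(axis=2)
    return (pu[:, None] * pu[None, :] * same).sum() / n ** 2
```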
Now we can define our density estimation procedure. Instead of using a single threshold for all
1 ≤ k ≤ d, we consider a more flexible strategy: for every k, we shall compare each Ŵ_u to a
threshold that depends not only on n, but also on k. Specifically, we will let
$$\tau_{k,n} = \frac{\Lambda_k \log n}{n}, \qquad 1 \le k \le d, \tag{7}$$
where Λ = {Λ_k}_{k=1}^d satisfies Λ_1 ≥ Λ_k ≥ Λ_d > 0. (This k-dependent scaling will allow us to trade
off MSE and computational complexity.) Given τ ≜ {τ_{k,n}}_{k=1}^d, define the set A(τ) = {s ∈ B^d :
Ŵ_{ξ_k(s)} ≥ τ_{k,n}, ∀ 1 ≤ k ≤ d} and the corresponding estimator
$$\hat{f}_{\mathrm{RWT}} = \sum_{s \in B^d} \mathbb{1}\{s \in A(\tau)\}\, \hat{\theta}_s \phi_s, \tag{8}$$
where RWT stands for "recursive Walsh thresholding." To implement f̂_RWT on a computer, we adapt
the algorithm of Goldreich and Levin [12], originally developed for cryptography and later applied
to the problem of learning Boolean functions from membership queries [13]: we call the routine
RECURSIVEWALSH, shown in Algorithm 1, with u = ∅ (the empty string) and with τ from (7).
Analysis of the estimator. We now turn to the asymptotic analysis of the MSE and the computational complexity of f̂_RWT. We first prove that f̂_RWT adapts to unknown sparsity of f:
Theorem 3.1 Suppose the threshold sequence Λ = {Λ_k}_{k=1}^d is such that Λ_d ≥ (20d + 25)²/2^d.
Then for all 0 < p < 2 the estimator (8) satisfies
$$\sup_{f \in F_d(p)} \mathrm{MSE}(f, \hat{f}_{\mathrm{RWT}}) = \sup_{f \in F_d(p)} \mathbb{E}_f \|f - \hat{f}_{\mathrm{RWT}}\|^2_{L^2(\mu_d)} \le \frac{C}{2^d} \left( \frac{2^d \Lambda_1 \log n}{n} \right)^{2r/(2r+1)}, \tag{9}$$
where the constant C depends only on p.
Proof: Let us decompose the squared L² error of f̂_RWT as
$$\|f - \hat{f}_{\mathrm{RWT}}\|^2_{L^2(\mu_d)} = \sum_s \mathbb{1}\{s \in A(\tau)\} (\theta_s - \hat{\theta}_s)^2 + \sum_s \mathbb{1}\{s \in A(\tau)^c\} \theta_s^2 \triangleq T_1 + T_2.$$
Algorithm 1 RECURSIVEWALSH(u, τ)
  k ← length(u)
  if k = d then
    compute θ̂_u ← (1/n) Σ_{i=1}^n φ_u(X_i); if θ̂_u² ≥ τ_{d,n} then output (u, θ̂_u); return
  end if
  compute Ŵ_{u0} ← (1/n²) Σ_{i_1=1}^n Σ_{i_2=1}^n φ_{u0}(ξ_{k+1}(X_{i_1})) φ_{u0}(ξ_{k+1}(X_{i_2})) 𝟙{ζ_{k+1}(X_{i_1}) = ζ_{k+1}(X_{i_2})}
  compute Ŵ_{u1} ← (1/n²) Σ_{i_1=1}^n Σ_{i_2=1}^n φ_{u1}(ξ_{k+1}(X_{i_1})) φ_{u1}(ξ_{k+1}(X_{i_2})) 𝟙{ζ_{k+1}(X_{i_1}) = ζ_{k+1}(X_{i_2})}
  if Ŵ_{u0} < τ_{k+1,n} then return else RECURSIVEWALSH(u0, τ); end if
  if Ŵ_{u1} < τ_{k+1,n} then return else RECURSIVEWALSH(u1, τ); end if
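The pseudocode translates almost line-for-line into Python. The sketch below (ours; it reuses phi and W_hat from the snippets above and, unlike the terse pseudocode, explicitly visits both children of every surviving node) returns the surviving pairs (s, θ̂_s):

```python
import numpy as np

def recursive_walsh(u, X, tau, sigma=0.5, out=None):
    """Algorithm 1: collect all (s, theta_hat_s) passing the recursive thresholds.

    tau[k-1] holds tau_{k,n} for k = 1, ..., d, as in Eq. (7).
    """
    if out is None:
        out = []
    k, d = len(u), X.shape[1]
    if k == d:
        theta = np.mean([phi(u, tuple(row), sigma) for row in X])
        if theta ** 2 >= tau[d - 1]:
            out.append((u, theta))
        return out
    for bit in (0, 1):
        child = u + (bit,)
        if W_hat(child, X, sigma) >= tau[k]:   # compare to tau_{k+1,n}
            recursive_walsh(child, X, tau, sigma, out)
    return out

# usage sketch (constant scheme): n, d = X.shape
# tau = [2 * np.log(n) / (2 ** d * n)] * d
# coeffs = recursive_walsh((), X, tau)
```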
We start by observing that s ∈ A(τ) only if θ̂_s² ≥ τ_{d,n}, while for any s ∈ A(τ)^c there exists some
1 ≤ k ≤ d such that θ̂_s² < τ_{k,n} ≤ τ_{1,n}. Defining the sets A_1 = {s ∈ B^d : θ̂_s² ≥ τ_{d,n}} and
A_2 = {s ∈ B^d : θ̂_s² < τ_{1,n}}, we get T_1 ≤ Σ_s 𝟙{s∈A_1}(θ_s − θ̂_s)² and T_2 ≤ Σ_s 𝟙{s∈A_2} θ_s². Further,
defining B = {s ∈ B^d : θ_s² < τ_{d,n}/2} and S = {s ∈ B^d : θ_s² ≥ 3τ_{1,n}/2}, we can write
$$T_1 = \sum_s \mathbb{1}\{s \in A_1 \cap B\}(\theta_s - \hat{\theta}_s)^2 + \sum_s \mathbb{1}\{s \in A_1 \cap B^c\}(\theta_s - \hat{\theta}_s)^2 \triangleq T_{11} + T_{12},$$
$$T_2 = \sum_s \mathbb{1}\{s \in A_2 \cap S\} \theta_s^2 + \sum_s \mathbb{1}\{s \in A_2 \cap S^c\} \theta_s^2 \triangleq T_{21} + T_{22}.$$
First we deal with the easy terms T_{12}, T_{22}. Applying (4), (5) and a bit of algebra, we get
$$\mathbb{E}\, T_{12} \le \frac{1}{Mn} \big| \{ s : \theta_s^2 \ge \tau_{d,n}/2 \} \big| \le \frac{1}{Mn} \left( \frac{2}{M \tau_{d,n}} \right)^{p/2} \le \frac{1}{M}\, n^{-2r/(2r+1)}, \tag{10}$$
$$\mathbb{E}\, T_{22} \le \sum_{s \in B^d} \mathbb{1}\{\theta_s^2 < (3\Lambda_1/2) \log n / n\}\, \theta_s^2 \le \frac{C}{M} \left( \frac{M \Lambda_1 \log n}{n} \right)^{2r/(2r+1)}. \tag{11}$$
Next we deal with the large-deviation terms T_{11} and T_{21}. Using Cauchy–Schwarz, we get
$$\mathbb{E}\, T_{11} \le \sum_s \Big[ \mathbb{E}(\theta_s - \hat{\theta}_s)^4 \cdot \mathbb{P}(s \in A_1 \cap B) \Big]^{1/2}. \tag{12}$$
To estimate the fourth moment in (12), we use Rosenthal's inequality [14] to get 𝔼(θ_s − θ̂_s)⁴ ≤
c/M²n². To bound the probability that s ∈ A_1 ∩ B, we observe that s ∈ A_1 ∩ B implies that
|θ̂_s − θ_s| ≥ (1/5)√τ_{d,n}, and then use Bernstein's inequality [14] to get
$$\mathbb{P}\Big( |\hat{\theta}_s - \theta_s| \ge (1/5)\sqrt{\tau_{d,n}} \Big) \le 2 \exp\left( -\frac{\kappa^2 \log n}{2(1 + 2\kappa/3)} \right) = 2 n^{-\kappa^2/[2(1+2\kappa/3)]} \le 2 n^{-(\kappa-1)/2}$$
with κ = (1/5)√(MΛ_d) ≥ 4d + 5. Since n^{-(κ−1)/2} ≤ n^{-2(d+1)}, we have
$$\mathbb{E}\, T_{11} \le C n^{-(d+1)} \le C/(Mn). \tag{13}$$
Finally, 𝔼 T_{21} ≤ Σ_s ℙ(s ∈ A_2 ∩ S) θ_s². Using the same argument as above, we get ℙ(s ∈ A_2 ∩ S) ≤
2n^{-(κ−1)/2}, where κ = (1/5)√(MΛ_1). Since θ_s² ≤ 1/M for all s ∈ B^d and since Λ_1 ≥ Λ_d, this gives
$$\mathbb{E}\, T_{21} \le 2 n^{-2(d+1)} \le 2/(Mn). \tag{14}$$
Putting together Eqs. (10), (11), (13), and (14), we get (9), and the theorem is proved. □
Our second result concerns the running time of Algorithm 1. Let K(Λ, p) ≜ Σ_{k=1}^d Λ_k^{-p/2}.
Theorem 3.2 Given any δ ∈ (0, 1), provided each Λ_k is chosen so that
$$\sqrt{2^k \Lambda_k n \log n} \;\ge\; 5\Big( C_2 \sqrt{n} + \sqrt{(\log(d/\delta) + k)/\log e} \Big)^2, \tag{15}$$
Algorithm 1 runs in O(n²d (n/(M log n))^{p/2} K(Λ, p)) time with probability at least 1 − δ.
Proof: The complexity is determined by the number of calls to RECURSIVEWALSH. For each k,
a call to RECURSIVEWALSH is made at every u ∈ B^k with Ŵ_u ≥ τ_{k,n}. Let us say that a call to
RECURSIVEWALSH(u, τ) is correct if W_u ≥ τ_{k,n}/2. We will show that, with probability at least
1 − δ, only the correct calls are made. The probability of making at least one incorrect call is
$$\mathbb{P}\left( \bigcup_{k=1}^{d} \bigcup_{u \in B^k} \{\widehat{W}_u \ge \tau_{k,n},\ W_u < \tau_{k,n}/2\} \right) \le \sum_{k=1}^{d} \sum_{u \in B^k} \mathbb{P}\left( \widehat{W}_u \ge \tau_{k,n},\ W_u < \tau_{k,n}/2 \right).$$
For a given u ∈ B^k, Ŵ_u ≥ τ_{k,n} and W_u < τ_{k,n}/2 together imply that ‖f_u − f̂_u‖_{L²(μ_{d−k})} ≥
(1/5)√τ_{k,n}, where f̂_u ≜ Σ_{v∈B^{d−k}} θ̂_{uv} φ_v. Now, it can be shown that, for every u ∈ B^k, the norm
‖f_u − f̂_u‖_{L²(μ_{d−k})} can be expressed as a supremum of an empirical process [15] over a certain
function class that depends on k (details are omitted for lack of space). We can then use Talagrand's
concentration-of-measure inequality for empirical processes [16] to get
$$\mathbb{P}\left( \widehat{W}_u \ge \tau_{k,n},\ W_u < \tau_{k,n}/2 \right) \le \exp\left( -n C_1 \big( 2^k a_{k,n}^2 - 2^{k/2} a_{k,n} \big) \right),$$
where a_{k,n} ≜ (1/5)√(Λ_k log n / n) − C_2/√(2^k n), and C_1, C_2 are the absolute constants in Talagrand's
bound. If we choose Λ_k as in (15), then ℙ(Ŵ_u ≥ τ_{k,n}, W_u < τ_{k,n}/2) ≤ δ/(d·2^k) for all u ∈ B^k.
Summing over k, u ∈ B^k, we see that, with probability ≥ 1 − δ, only the correct calls will be made.
It remains to bound the number of the correct calls. For each k, W_u ≥ τ_{k,n}/2 implies that there
exists at least one v ∈ B^{d−k} such that θ²_{uv} ≥ τ_{k,n}/2. Since for every 1 ≤ k ≤ d each θ_s contributes
to exactly one W_u, we have by the pigeonhole principle that
$$\big| \{ u \in B^k : W_u \ge \tau_{k,n}/2 \} \big| \le \big| \{ s \in B^d : \theta_s^2 \ge \tau_{k,n}/2 \} \big| \le \big( 2/(M \tau_{k,n}) \big)^{p/2},$$
where in the second inequality we used (4) with R = 1/√M. Hence, the number of correct
recursive calls is bounded by N = Σ_{k=1}^d (2/(Mτ_{k,n}))^{p/2} = (2n/(M log n))^{p/2} K(Λ, p). At each call,
we compute an estimate of the corresponding Ŵ_{u0} and Ŵ_{u1}, which requires O(n²d) operations.
Therefore, with probability at least 1 − δ, the time complexity will be as stated in the theorem. □
MSE vs. complexity. By controlling the rate at which the sequence Λ_k decays with k, we can
trade off MSE against complexity. Consider the following two extreme cases: (1) Λ_1 = … =
Λ_d ≍ 1/M and (2) Λ_k ≍ 2^{d−k}/M. The first case, which reduces to term-by-term thresholding, achieves the best bias-variance trade-off with the MSE O((log n/n)^{2r/(2r+1)} (1/M)). However, it has K(Λ, p) = O(M^{p/2} d), resulting in O(d²n²(n/log n)^{p/2}) complexity. The second
case, which leads to a very severe estimator that will tend to reject a lot of coefficients, has MSE
of O((log n/n)^{2r/(2r+1)} M^{-1/(2r+1)}), but K(Λ, p) = O(M^{p/2}), leading to a considerably better
O(dn²(n/log n)^{p/2}) complexity. From the computational viewpoint, it is preferable to use rapidly
decaying thresholds. However, this reduction in complexity will be offset by a corresponding increase in MSE. In fact, using exponentially decaying Λ_k's in practice is not advisable, as its low
complexity is mainly due to the fact that it will tend to reject even the big coefficients very early on,
especially when d is large. To achieve a good balance between complexity and MSE, a moderately
decaying threshold sequence might be best, e.g., Λ_k ≍ (d − k + 1)^m/M for some m ≥ 1. As p → 0,
the effect of Λ on complexity becomes negligible, and the complexity tends to O(n²d).
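The schedules discussed above differ only in the k-dependence of Λ_k; a small helper (our own sketch, with names of our choosing) makes the trade-off explicit:

```python
import numpy as np

def thresholds(n, d, scheme="constant", m=1):
    """tau_{k,n} = Lambda_k * log(n) / n for the threshold families discussed above."""
    M, k = 2 ** d, np.arange(1, d + 1)
    if scheme == "constant":            # Lambda_k = 1/M: best MSE, highest cost
        lam = np.ones(d) / M
    elif scheme == "polynomial":        # Lambda_k = (d-k+1)^m / M: the compromise
        lam = (d - k + 1.0) ** m / M
    elif scheme == "exponential":       # Lambda_k = 2^{d-k}/M: cheapest, most severe
        lam = 2.0 ** (d - k) / M
    else:
        raise ValueError(scheme)
    return lam * np.log(n) / n
```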
Positivity and normalization issues. As is the case with orthogonal series estimators, f̂_RWT may
not necessarily be a bona fide density. In particular, there may be some x ∈ B^d such that f̂_RWT(x) <
0, and it may happen that ∫ f̂_RWT dμ_d ≠ 1. In principle, this can be handled by clipping the negative
values at zero and renormalizing, which can only improve the MSE. In practice, renormalization may
be computationally expensive when d is very large. If the estimate is suitably sparse, however, the
renormalization can be carried out approximately using Monte-Carlo methods.
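For moderate d, the clip-and-renormalize step can be done exactly; a minimal sketch (ours, reusing phi from above) reads:

```python
import itertools
import numpy as np

def clip_and_renormalize(coeffs, d, sigma=0.5):
    """Turn a sparse Walsh expansion into a bona fide density (feasible for small d)."""
    cube = list(itertools.product([0, 1], repeat=d))
    vals = np.array([sum(th * phi(s, x, sigma) for s, th in coeffs) for x in cube])
    vals = np.clip(vals, 0.0, None)     # clip negative values at zero
    return vals / vals.sum()            # renormalize; for large d this sum would be Monte-Carlo estimated
```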
4 Simulations
and an asymptotic analysis of its complexity. Although an extensive empirical evaluation is outside
the scope of this paper, we have implemented the proposed estimator, and now present some simulation results to demonstrate its small-sample performance. We generated synthetic observations from
a mixture density f on a 15-dimensional binary hypercube. The mixture has 10 components, where
each component is a product density with 12 randomly chosen covariates having Bernoulli(1/2)
distributions, and the other three having Bernoulli(0.9) distributions. For d = 15, it is still feasible
to quickly compute the ground truth, consisting of 32768 values of f and its Walsh coefficients.
These values are shown in Fig. 1 (left). As can be seen from the coefficient profile in the bottom of
the figure, this density is clearly sparse. Fig. 1 also shows the estimated probabilities and the Walsh
coefficients for sample sizes n = 5000 (middle) and n = 10000 (right).
[Figure 1: three panels — ground truth f (left), f̂_RWT for n = 5000 (middle), and f̂_RWT for n = 10000 (right).]
Figure 1: Ground truth (left) and estimated density for n = 5000 (middle) and n = 10000 (right) with
constant thresholding. Top: true and estimated probabilities (clipped at zero and renormalized) arranged in
lexicographic order. Bottom: absolute values of true and estimated Walsh coefficients arranged in lexicographic
order. For the estimated densities, the coefficient plots also show the threshold level (dotted line) and absolute
values of the rejected coefficients (lighter color).
[Figure 2: four panels — (a) MSE (× 2^d), (b) time (s), (c) recursive calls, and (d) coefficients estimated, each plotted against sample size n for the constant, log, and linear threshold schemes.]
Figure 2: Small-sample performance of f̂_RWT in estimating f with three different thresholding schemes:
(a) MSE; (b) running time (in seconds); (c) number of recursive calls; (d) number of coefficients retained by
the algorithm. All results are averaged over five independent runs for each sample size (the error bars show the
standard deviations).
To study the trade-off between MSE and complexity, we implemented three different thresholding
schemes: (1) constant, τ_{k,n} = 2 log n/(2^d n), (2) logarithmic, τ_{k,n} = 2 log(d − k + 2) log n/(2^d n),
and (3) linear, τ_{k,n} = 2(d − k + 1) log n/(2^d n). Up to the log n factor (dictated by the theory),
the thresholds at k = d are set to twice the variance of the empirical estimate of any coefficient
whose value is zero; this forces the estimator to reject empirical coefficients whose values cannot
be reliably distinguished from zero. Occasionally, spurious coefficients get retained, as can be seen
in Fig. 1 (middle) for the estimate for n = 5000. Fig. 2 shows the performance of f̂_RWT. Fig. 2(a)
is a plot of MSE vs. sample size. In agreement with the theory, MSE is the smallest for the constant thresholding scheme [which is simply an efficient recursive implementation of a term-by-term
thresholding estimator with τ_n ≍ log n/(Mn)], and then it increases for the logarithmic and for
the linear schemes. Fig. 2(b,c) shows the running time (in seconds) and the number of recursive
calls made to RECURSIVEWALSH vs. sample size. The number of recursive calls is a platform-independent way of gauging the computational complexity of the algorithm, although it should be
kept in mind that each recursive call has O(n²d) overhead. The running time increases polynomially with n, and is the largest for the constant scheme, followed by the logarithmic and the linear
schemes. We see that, while the MSE of the logarithmic scheme is fairly close to that of the constant
scheme, its complexity is considerably lower, in terms of both the number of recursive calls and the
running time. In all three cases, the number of recursive calls decreases with n due to the fact that
weight estimates become increasingly accurate with n, which causes the expected number of false
discoveries (i.e., making a recursive call at an internal node of the tree only to reject its descendants
later) to decrease. Finally, Fig. 2(d) shows the number of coefficients retained in the estimate. This
number grows with n as a consequence of the fact that the threshold decreases with n, while the
number of accurately estimated coefficients increases. The true density f has 40 parameters: 9 to
specify the weights of the components, 3 per component to locate the indices of the nonuniform
covariates, and the single Bernoulli parameter of the nonuniform covariates. It is interesting to note
that the maximal number of coefficients returned by our algorithm approaches 40.
Overall, these preliminary simulation results show that our implemented estimator behaves in accordance with the theory even in the small-sample regime. The performance of the logarithmic thresholding scheme is especially encouraging, suggesting that it may be possible to trade off MSE against
complexity in a way that will scale to large values of d. In the future, we plan to test our method
on high-dimensional real data sets. Our particular interest is in social network data, e.g., records of
meetings among large groups of individuals. These are represented by binary strings most of whose
entries are zero (i.e., only a very small number of people are present at any given meeting). To model
their densities, we plan to experiment with Walsh bases with σ biased toward unity.
Acknowledgments
This work was supported by NSF CAREER Award No. CCF-06-43947 and DARPA Grant No. HR0011-07-1003.
References
[1] I. Shmulevich and W. Zhang. Binary analysis and optimization-based normalization of gene expression
data. Bioinformatics 18(4):555–565, 2002.
[2] J.M. Carro. Estimating dynamic panel data discrete choice models with fixed effects. J. Econometrics
140:503–528, 2007.
[3] Z. Ghahramani and K. Heller. Bayesian sets. NIPS 18:435–442, 2006.
[4] J. Aitchison and C.G.G. Aitken. Multivariate binary discrimination by the kernel method. Biometrika
63(3):413–420, 1976.
[5] J. Ott and R.A. Kronmal. Some classification procedures for multivariate binary data using orthogonal
functions. J. Amer. Stat. Assoc. 71(354):391–399, 1976.
[6] W.-Q. Liang and P.R. Krishnaiah. Nonparametric iterative estimation of multivariate binary density. J.
Multivariate Anal. 16:162–172, 1985.
[7] J.S. Simonoff. Smoothing categorical data. J. Statist. Planning and Inference 47:41–60, 1995.
[8] M. Talagrand. On Russo's approximate zero-one law. Ann. Probab. 22:1576–1587, 1994.
[9] I. Dinur, E. Friedgut, G. Kindler and R. O'Donnell. On the Fourier tails of bounded functions over the
discrete cube. Israel J. Math. 160:389–421, 2007.
[10] I.M. Johnstone. Minimax Bayes, asymptotic minimax and sparse wavelet priors. In S.S. Gupta and
J.O. Berger, eds., Statistical Decision Theory and Related Topics V, pp. 303–326, Springer, 1994.
[11] E.J. Candès and T. Tao. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12):5406–5425, 2006.
[12] O. Goldreich and L. Levin. A hard-core predicate for all one-way functions. STOC, pp. 25–32, 1989.
[13] E. Kushilevitz and Y. Mansour. Learning decision trees using the Fourier spectrum. SIAM J. Comput.
22(6):1331–1348, 1993.
[14] W. Härdle, G. Kerkyacharian, D. Picard and A.B. Tsybakov. Wavelets, Approximation, and Statistical
Applications, Springer, 1998.
[15] S.A. van de Geer. Empirical Processes in M-Estimation, Cambridge Univ. Press, 2000.
[16] M. Talagrand. Sharper bounds for Gaussian and empirical processes. Ann. Probab. 22:28–76, 1994.
2,675 | 3,425 | Performance analysis for L2 kernel classification
Clayton D. Scott*
Department of EECS
University of Michigan
Ann Arbor, MI, USA
[email protected]
JooSeuk Kim
Department of EECS
University of Michigan
Ann Arbor, MI, USA
[email protected]
Abstract
We provide statistical performance guarantees for a recently introduced kernel
classifier that optimizes the L2 or integrated squared error (ISE) of a difference
of densities. The classifier is similar to a support vector machine (SVM) in that
it is the solution of a quadratic program and yields a sparse classifier. Unlike
SVMs, however, the L2 kernel classifier does not involve a regularization parameter. We prove a distribution free concentration inequality for a cross-validation
based estimate of the ISE, and apply this result to deduce an oracle inequality and
consistency of the classifier in the sense of both ISE and probability of error. Our
results also specialize to give performance guarantees for an existing method of
L2 kernel density estimation.
1 Introduction
In the binary classification problem we are given realizations (x_1, y_1), …, (x_n, y_n) of a jointly
distributed pair (X, Y), where X ∈ ℝ^d is a pattern and Y ∈ {−1, +1} is a class label. The goal
of classification is to build a classifier, i.e., a function taking X as input and outputting a label, such
that some measure of performance is optimized. Kernel classifiers [1] are an important family of
classifiers that have drawn much recent attention for their ability to represent nonlinear decision
boundaries and to scale well with increasing dimension d. A kernel classifier (without offset) has
the form
$$g(x) = \mathrm{sign}\left\{ \sum_{i=1}^{n} \alpha_i y_i k(x, x_i) \right\},$$
where α_i are parameters and k is a kernel function. For example, support vector machines (SVMs)
without offset have this form [2], as does the standard kernel density estimate (KDE) plug-in rule.
Recently Kim and Scott [3] introduced an L2 or integrated squared error (ISE) criterion to design the
coefficients α_i of a kernel classifier with Gaussian kernel. Their L2 classifier performs comparably
to existing kernel methods while possessing a number of desirable properties. Like the SVM, L2
kernel classifiers are the solutions of convex quadratic programs that can be solved efficiently using
standard decomposition algorithms. In addition, the classifiers are sparse, meaning most of the
coefficients ?i = 0, which has advantages for representation and evaluation efficiency. Unlike
the SVM, however, there are no free parameters to be set by the user except the kernel bandwidth
parameter.
In this paper we develop statistical performance guarantees for the L2 kernel classifier introduced
in [3]. The linchpin of our analysis is a new concentration inequality bounding the deviation of a
cross-validation based ISE estimate from the true ISE. This bound is then applied to prove an oracle
inequality and consistency in both ISE and probability of error. In addition, as a special case of
*Both authors supported in part by NSF Grant CCF-0830490
our analysis, we are able to deduce performance guarantees for the method of L2 kernel density
estimation described in [4, 5].
The ISE criterion has a long history in the literature on bandwidth selection for kernel density estimation [6] and more recently in parametric estimation [7]. The use of ISE for optimizing the weights
of a KDE via quadratic programming was first described in [4] and later rediscovered in [5]. In [8],
an ℓ1-penalized ISE criterion was used to aggregate a finite number of pre-determined densities.
Linear and convex aggregation of densities, based on an L2 criterion, are studied in [9], where the
densities are based on a finite dictionary or an independent sample. In contrast, our proposed method
allows data-adaptive kernels, and does not require an independent (holdout) sample.
In classification, some connections relating SVMs and ISE are made in [10], although no new algorithms are proposed. Finally, the "difference of densities" perspective has been applied to classification in other settings by [11], [12], and [13]. In [11] and [13], a difference of densities is used to
find smoothing parameters or kernel bandwidths. In [12], conditional densities are chosen among a
parameterized set of densities to maximize the average (bounded) density differences.
Section 2 reviews the L2 kernel classifier, and presents a slight modification needed for our analysis.
Our results are presented in Section 3. Conclusions are offered in the final section, and proofs are
gathered in an appendix.
2 L2 Kernel Classification
We review the previous work of Kim & Scott [3] and introduce an important modification. For
convenience, we relabel Y so that it belongs to {1, −γ} and denote I_+ = {i | Y_i = +1} and I_− =
{i | Y_i = −γ}. Let f_−(x) and f_+(x) denote the class-conditional densities of the pattern given the
label. From decision theory, the optimal classifier has the form
$$g^*(x) = \mathrm{sign}\{f_+(x) - \gamma f_-(x)\}, \tag{1}$$
where γ incorporates prior class probabilities and class-conditional error costs (in the Bayesian
setting) or a desired tradeoff between false positives and false negatives [14]. Denote the "difference
of densities" d_γ(x) := f_+(x) − γ f_−(x).
The class-conditional densities are modelled using the Gaussian kernel as
$$\hat{f}_+(x; \alpha) = \sum_{i \in I_+} \alpha_i k_\sigma(x, X_i), \qquad \hat{f}_-(x; \alpha) = \sum_{i \in I_-} \alpha_i k_\sigma(x, X_i)$$
with constraints α = (α_1, …, α_n) ∈ A, where
$$A = \Big\{ \alpha \ \Big|\ \sum_{i \in I_+} \alpha_i = \sum_{i \in I_-} \alpha_i = 1, \quad \alpha_i \ge 0 \ \forall i \Big\}.$$
The Gaussian kernel is defined as
$$k_\sigma(x, X_i) = \big(2\pi\sigma^2\big)^{-d/2} \exp\left( -\frac{\|x - X_i\|^2}{2\sigma^2} \right).$$
The ISE associated with α is
$$\mathrm{ISE}(\alpha) = \|\hat{d}_\gamma(x; \alpha) - d_\gamma(x)\|^2_{L^2} = \int \big( \hat{d}_\gamma(x; \alpha) - d_\gamma(x) \big)^2 dx$$
$$= \int \hat{d}_\gamma^2(x; \alpha)\, dx - 2 \int \hat{d}_\gamma(x; \alpha) d_\gamma(x)\, dx + \int d_\gamma^2(x)\, dx.$$
Since we do not know the true d_γ(x), we need to estimate the second term in the above equation,
$$H(\alpha) \triangleq \int \hat{d}_\gamma(x; \alpha) d_\gamma(x)\, dx, \tag{2}$$
by H_n(α), which will be explained in detail in Section 2.1. Then, the empirical ISE is
$$\widehat{\mathrm{ISE}}(\alpha) = \int \hat{d}_\gamma^2(x; \alpha)\, dx - 2 H_n(\alpha) + \int d_\gamma^2(x)\, dx. \tag{3}$$
Now, α̂ is defined as
$$\hat{\alpha} = \arg\min_{\alpha \in A} \widehat{\mathrm{ISE}}(\alpha) \tag{4}$$
and the final classifier will be
$$g(x) = \begin{cases} +1, & \hat{d}_\gamma(x; \hat{\alpha}) \ge 0 \\ -\gamma, & \hat{d}_\gamma(x; \hat{\alpha}) < 0. \end{cases}$$
2.1 Estimation of H(α)
In this section, we propose a method of estimating H(α) in (2). The basic idea is to view H(α) as
an expectation and estimate it using a sample average. In [3], the resubstitution estimator for H(α)
was used. However, since this estimator is biased, we use a leave-one-out cross-validation (LOOCV)
estimator, which is unbiased and facilitates our theoretical analysis. Note that the difference of
densities can be expressed as
$$\hat{d}_\gamma(x; \alpha) = \hat{f}_+(x) - \gamma \hat{f}_-(x) = \sum_{i=1}^{n} \alpha_i Y_i k_\sigma(x, X_i).$$
Then,
$$H(\alpha) = \int \hat{d}_\gamma(x; \alpha) d_\gamma(x)\, dx = \int \hat{d}_\gamma(x; \alpha) f_+(x)\, dx - \gamma \int \hat{d}_\gamma(x; \alpha) f_-(x)\, dx$$
$$= \sum_{i=1}^{n} \alpha_i Y_i \left( \int k_\sigma(x, X_i) f_+(x)\, dx - \gamma \int k_\sigma(x, X_i) f_-(x)\, dx \right) = \sum_{i=1}^{n} \alpha_i Y_i h(X_i),$$
where
$$h(X_i) \triangleq \int k_\sigma(x, X_i) f_+(x)\, dx - \gamma \int k_\sigma(x, X_i) f_-(x)\, dx. \tag{5}$$
We estimate each h(X_i) in (5) for i = 1, …, n using leave-one-out cross-validation:
$$\hat{h}_i \triangleq \begin{cases} \dfrac{1}{N_+ - 1} \displaystyle\sum_{j \in I_+, j \ne i} k_\sigma(X_j, X_i) - \dfrac{\gamma}{N_-} \displaystyle\sum_{j \in I_-} k_\sigma(X_j, X_i), & i \in I_+ \\[3ex] \dfrac{1}{N_+} \displaystyle\sum_{j \in I_+} k_\sigma(X_j, X_i) - \dfrac{\gamma}{N_- - 1} \displaystyle\sum_{j \in I_-, j \ne i} k_\sigma(X_j, X_i), & i \in I_- \end{cases}$$
where N_+ = |I_+|, N_− = |I_−|. Then, the estimate of H(α) is H_n(α) = Σ_{i=1}^n α_i Y_i ĥ_i.
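In matrix form, the LOOCV quantities reduce to row sums of the kernel Gram matrix. The following sketch (our own code, not from [3]) computes all ĥ_i at once:

```python
import numpy as np

def loocv_h(X, y, gamma, sigma):
    """Leave-one-out estimates h_hat_i; X is (n, d), y in {+1, -1} marks I+ / I-."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (d / 2)
    pos, neg = y > 0, y < 0
    Np, Nm = pos.sum(), neg.sum()
    Kp, Km = K[:, pos].sum(axis=1), K[:, neg].sum(axis=1)  # sums over I+ / I- (incl. self)
    diag = np.diag(K)
    h = np.empty(n)
    h[pos] = (Kp[pos] - diag[pos]) / (Np - 1) - gamma * Km[pos] / Nm
    h[neg] = Kp[neg] / Np - gamma * (Km[neg] - diag[neg]) / (Nm - 1)
    return h
```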
2.2 Optimization
The optimization problem (4) can be formulated as a quadratic program. The first term in (3) is
$$\int \hat{d}_\gamma^2(x; \alpha)\, dx = \int \left( \sum_{i=1}^{n} \alpha_i Y_i k_\sigma(x, X_i) \right)^2 dx = \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j Y_i Y_j \int k_\sigma(x, X_i) k_\sigma(x, X_j)\, dx = \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j Y_i Y_j k_{\sqrt{2}\sigma}(X_i, X_j)$$
by the convolution theorem for Gaussian kernels [15]. As we have seen in Section 2.1, the second
term H_n(α) in (3) is linear in α and can be expressed as Σ_{i=1}^n α_i c_i, where c_i = Y_i ĥ_i. Finally, since
the third term does not depend on α, the optimization problem (4) becomes the following quadratic
program (QP):
$$\hat{\alpha} = \arg\min_{\alpha \in A} \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j Y_i Y_j k_{\sqrt{2}\sigma}(X_i, X_j) - \sum_{i=1}^{n} c_i \alpha_i. \tag{6}$$
The QP (6) is similar to the dual QP of the 2-norm SVM with hinge loss [2] and can be solved by a
variant of the Sequential Minimal Optimization (SMO) algorithm [3].
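For prototyping, the QP (6) can also be attacked with projected gradient descent over the two simplices instead of SMO. The sketch below is ours and for illustration only: the clip-and-rescale step is a crude heuristic stand-in for an exact simplex projection, and a decomposition solver as in [3] is preferable in practice. It reuses loocv_h from the snippet above.

```python
import numpy as np

def l2_kernel_classifier_weights(X, y, gamma, sigma, iters=2000, lr=0.1):
    """Approximate solution of QP (6) by projected gradient over the set A."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    s2 = np.sqrt(2.0) * sigma                   # convolved bandwidth sqrt(2)*sigma
    K2 = np.exp(-sq / (2 * s2 ** 2)) / (2 * np.pi * s2 ** 2) ** (d / 2)
    Y = np.where(y > 0, 1.0, -gamma)            # relabeled classes {1, -gamma}
    Q = (Y[:, None] * Y[None, :]) * K2
    c = Y * loocv_h(X, y, gamma, sigma)         # linear term c_i = Y_i * h_hat_i
    alpha = np.where(y > 0, 1.0 / (y > 0).sum(), 1.0 / (y < 0).sum())
    for _ in range(iters):
        alpha = alpha - lr * (Q @ alpha - c)    # gradient of (1/2) a'Qa - c'a
        for mask in (y > 0, y < 0):             # push each block back toward its simplex
            a = np.clip(alpha[mask], 0.0, None)
            alpha[mask] = a / a.sum() if a.sum() > 0 else np.full(mask.sum(), 1.0 / mask.sum())
    return alpha

# prediction sketch: classify x by the sign of sum_i alpha_i * Y_i * k_sigma(x, X_i)
```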
3 Statistical performance analysis
In this section, we give a theoretical performance analysis of our proposed method. We assume that
{X_i}_{i∈I_+} and {X_i}_{i∈I_−} are i.i.d. samples from f_+(x) and f_−(x), respectively, and treat N_+ and
N_− as deterministic variables n_+ and n_− such that n_+ → ∞ and n_− → ∞ as n → ∞.
3.1 Concentration inequality for H_n(α)
Lemma 1. Conditioned on X_i, ĥ_i is an unbiased estimator of h(X_i), i.e.,
$$\mathbb{E}\big[ \hat{h}_i \mid X_i \big] = h(X_i).$$
Furthermore, for any ε > 0,
$$\mathbb{P}\left( \sup_{\alpha \in A} |H_n(\alpha) - H(\alpha)| > \epsilon \right) \le 2n \left( e^{-c(n_+ - 1)\epsilon^2} + e^{-c(n_- - 1)\epsilon^2} \right),$$
where c = 2(√(2π)σ)^{2d}/(1 + γ)⁴.
Lemma 1 implies that H_n(α) → H(α) almost surely for all α ∈ A simultaneously, provided that
σ, n_+, and n_− evolve as functions of n such that n_+σ^{2d}/ln n → ∞ and n_−σ^{2d}/ln n → ∞.
3.2 Oracle Inequality
Next, we establish an oracle inequality, which relates the performance of our estimator to that of the
best possible kernel classifier.
Theorem 1. Let ε > 0 and set δ = δ(ε) = 2n(e^{-c(n_+−1)ε²} + e^{-c(n_−−1)ε²}), where c =
2(√(2π)σ)^{2d}/(1 + γ)⁴. Then, with probability at least 1 − δ,
$$\mathrm{ISE}(\hat{\alpha}) \le \inf_{\alpha \in A} \mathrm{ISE}(\alpha) + 4\epsilon.$$
Proof. From Lemma 1, with probability at least 1 − δ,
$$\big| \mathrm{ISE}(\alpha) - \widehat{\mathrm{ISE}}(\alpha) \big| \le 2\epsilon, \qquad \forall \alpha \in A,$$
by using the fact ISE(α) − ÎSE(α) = 2(H_n(α) − H(α)). Then, with probability at least 1 − δ,
for all α ∈ A, we have
$$\mathrm{ISE}(\hat{\alpha}) \le \widehat{\mathrm{ISE}}(\hat{\alpha}) + 2\epsilon \le \widehat{\mathrm{ISE}}(\alpha) + 2\epsilon \le \mathrm{ISE}(\alpha) + 4\epsilon,$$
where the second inequality holds from the definition of α̂. This proves the theorem.
3.3 ISE consistency
Next, we have a theorem stating that ISE(α̂) converges to zero in probability.
Theorem 2. Suppose that for f = f_+ and f_−, the Hessian H_f(x) exists and each entry of H_f(x)
is piecewise continuous and square integrable. If σ, n_+, and n_− evolve as functions of n such that
σ → 0, n_+σ^{2d}/ln n → ∞, and n_−σ^{2d}/ln n → ∞, then ISE(α̂) → 0 in probability as n → ∞.
This result intuitively follows from the oracle inequality, since the standard Parzen window density
estimate is consistent and the uniform weights belong to the simplex A. The rigorous proof is omitted
due to space limitations.
3.4 Bayes Error Consistency
In classification, we are ultimately interested in minimizing the probability of error. Let us now
assume {X_i}_{i=1}^n is an i.i.d. sample from f(x) = p f_+(x) + (1 − p) f_−(x), where 0 < p < 1 is
the prior probability of the positive class. The consistency with respect to the probability of error
could be easily shown if we set γ to γ* = (1 − p)/p and apply Theorem 3 in [17]. However, since p is
unknown, we must estimate γ*. Note that N_+ and N_− are binomial random variables, and we may
estimate γ* as γ = N_−/N_+. The next theorem says the L2 kernel classifier is consistent with respect to
the probability of error.
Theorem 3. Suppose that the assumptions in Theorem 2 are satisfied. In addition, suppose that
f_− ∈ L²(ℝ^d), i.e., ‖f_−‖_{L²} < ∞. Let γ = N_−/N_+ be an estimate of γ* = (1 − p)/p. If σ evolves as
a function of n such that σ → 0 and nσ^{2d}/ln n → ∞ as n → ∞, then the L2 kernel classifier is
consistent. In other words, given training data D_n = ((X_1, Y_1), …, (X_n, Y_n)), the classification
error
$$L_n = \mathbb{P}\left\{ \mathrm{sgn}\big( \hat{d}_\gamma(X; \hat{\alpha}) \big) \ne Y \mid D_n \right\}$$
converges to the Bayes error L* in probability as n → ∞.
The proof is given in Appendix A.2.
3.5 Application to density estimation
By setting γ = 0, our goal becomes estimating f_+, and we recover the L2 kernel density estimate
of [4, 5] using leave-one-out cross-validation. Given an i.i.d. sample X_1, …, X_n from f(x), the L2
kernel density estimate of f(x) is defined as
$$\hat{f}(x; \hat{\alpha}) = \sum_{i=1}^{n} \hat{\alpha}_i k_\sigma(x, X_i)$$
with the α̂_i's optimized such that
$$\hat{\alpha} = \arg\min_{\substack{\sum_i \alpha_i = 1 \\ \alpha_i \ge 0}} \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j k_{\sqrt{2}\sigma}(X_i, X_j) - \sum_{i=1}^{n} \alpha_i \left( \frac{1}{n-1} \sum_{j \ne i} k_\sigma(X_i, X_j) \right).$$
Our concentration inequality, oracle inequality, and L2 consistency result immediately extend to
provide the same performance guarantees for this method. For example, we state the following
corollary.
Corollary 1. Suppose that the Hessian H_f(x) of a density function f(x) exists and each entry of
H_f(x) is piecewise continuous and square integrable. If σ → 0 and nσ^{2d}/ln n → ∞ as n → ∞,
then
$$\int \big( \hat{f}(x; \hat{\alpha}) - f(x) \big)^2 dx \to 0$$
in probability.
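The γ = 0 case is especially simple to prototype; the following sketch (ours, with the same projected-gradient heuristic used above, so again only an illustration rather than the solver of [4, 5]) computes the L2 KDE weights:

```python
import numpy as np

def l2_kde_weights(X, sigma, iters=2000, lr=0.1):
    """Weights of the L2 kernel density estimate (the gamma = 0 special case)."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (d / 2)
    s2 = np.sqrt(2.0) * sigma
    K2 = np.exp(-sq / (2 * s2 ** 2)) / (2 * np.pi * s2 ** 2) ** (d / 2)
    c = (K.sum(axis=1) - np.diag(K)) / (n - 1)  # LOOCV linear term
    a = np.full(n, 1.0 / n)
    for _ in range(iters):
        a = a - lr * (K2 @ a - c)
        a = np.clip(a, 0.0, None)
        a /= a.sum()                            # heuristic projection onto the simplex
    return a
```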
4 Conclusion
Through the development of a novel concentration inequality, we have established statistical performance guarantees on a recently introduced L2 kernel classifier. We view the relatively clean
analysis of this classifier as an attractive feature relative to other kernel methods. In future work, we
hope to invoke the full power of the oracle inequality to obtain adaptive rates of convergence, and
consistency for ? not necessarily tending to zero.
A Appendix
A.1 Proof of Lemma 1
Note that for any given i, (k_σ(X_j, X_i))_{j≠i} are independent and bounded by M = 1/(√(2π)σ)^d.
For random vectors Z ∼ f_+(x) and W ∼ f_−(x), h(X_i) in (5) can be expressed as
$$h(X_i) = \mathbb{E}\big[ k_\sigma(Z, X_i) \mid X_i \big] - \gamma\, \mathbb{E}\big[ k_\sigma(W, X_i) \mid X_i \big].$$
Since X_i ∼ f_+(x) for i ∈ I_+ and X_i ∼ f_−(x) for i ∈ I_−, it can be easily shown that
$$\mathbb{E}\big[ \hat{h}_i \mid X_i \big] = h(X_i).$$
For i ∈ I_+,
$$\mathbb{P}\left( \big| \hat{h}_i - h(X_i) \big| > \epsilon \ \Big|\ X_i = x \right) \le \mathbb{P}\left( \Big| \frac{1}{n_+ - 1} \sum_{j \in I_+, j \ne i} k_\sigma(X_j, X_i) - \mathbb{E}[k_\sigma(Z, X_i) \mid X_i] \Big| > \frac{\epsilon}{1 + \gamma} \ \Big|\ X_i = x \right)$$
$$+ \mathbb{P}\left( \gamma \Big| \frac{1}{n_-} \sum_{j \in I_-} k_\sigma(X_j, X_i) - \mathbb{E}[k_\sigma(W, X_i) \mid X_i] \Big| > \frac{\gamma \epsilon}{1 + \gamma} \ \Big|\ X_i = x \right). \tag{7}$$
By Hoeffding's inequality [16], the first term in (7) is
$$\mathbb{P}\left( \Big| \sum_{j \in I_+, j \ne i} k_\sigma(X_j, X_i) - (n_+ - 1)\mathbb{E}[k_\sigma(Z, X_i) \mid X_i] \Big| > \frac{(n_+ - 1)\epsilon}{1 + \gamma} \ \Big|\ X_i = x \right) \le 2 e^{-2(n_+ - 1)\epsilon^2 / (1+\gamma)^2 M^2}.$$
The second term in (7) is
$$\mathbb{P}\left( \Big| \sum_{j \in I_-} k_\sigma(X_j, X_i) - n_- \mathbb{E}[k_\sigma(W, X_i) \mid X_i] \Big| > \frac{n_- \epsilon}{1 + \gamma} \ \Big|\ X_i = x \right) \le 2 e^{-2 n_- \epsilon^2 / (1+\gamma)^2 M^2} \le 2 e^{-2(n_- - 1)\epsilon^2 / (1+\gamma)^2 M^2}.$$
Therefore,
$$\mathbb{P}\left( \big| \hat{h}_i - h(X_i) \big| > \epsilon \right) = \mathbb{E}\left[ \mathbb{P}\left( \big| \hat{h}_i - h(X_i) \big| > \epsilon \ \Big|\ X_i = X \right) \right] \le 2 e^{-2(n_+ - 1)\epsilon^2 / (1+\gamma)^2 M^2} + 2 e^{-2(n_- - 1)\epsilon^2 / (1+\gamma)^2 M^2}.$$
In a similar way, it can be shown that for i ∈ I_−,
$$\mathbb{P}\left( \big| \hat{h}_i - h(X_i) \big| > \epsilon \right) \le 2 e^{-2(n_+ - 1)\epsilon^2 / (1+\gamma)^2 M^2} + 2 e^{-2(n_- - 1)\epsilon^2 / (1+\gamma)^2 M^2}.$$
Then,
$$\mathbb{P}\left( \sup_{\alpha \in A} |H_n(\alpha) - H(\alpha)| > \epsilon \right) = \mathbb{P}\left( \sup_{\alpha \in A} \Big| \sum_{i=1}^{n} \alpha_i Y_i \big( \hat{h}_i - h(X_i) \big) \Big| > \epsilon \right) \le \mathbb{P}\left( \sup_{\alpha \in A} \sum_{i=1}^{n} \alpha_i |Y_i| \big| \hat{h}_i - h(X_i) \big| > \epsilon \right)$$
$$= \mathbb{P}\left( \sup_{\alpha \in A} \Big[ \sum_{i \in I_+} \alpha_i \big| \hat{h}_i - h(X_i) \big| + \gamma \sum_{i \in I_-} \alpha_i \big| \hat{h}_i - h(X_i) \big| \Big] > \epsilon \right)$$
$$\le \mathbb{P}\left( \sup_{\alpha \in A} \sum_{i \in I_+} \alpha_i \big| \hat{h}_i - h(X_i) \big| > \frac{\epsilon}{1+\gamma} \right) + \mathbb{P}\left( \sup_{\alpha \in A} \gamma \sum_{i \in I_-} \alpha_i \big| \hat{h}_i - h(X_i) \big| > \frac{\gamma\epsilon}{1+\gamma} \right)$$
$$= \mathbb{P}\left( \max_{i \in I_+} \big| \hat{h}_i - h(X_i) \big| > \frac{\epsilon}{1+\gamma} \right) + \mathbb{P}\left( \max_{i \in I_-} \big| \hat{h}_i - h(X_i) \big| > \frac{\epsilon}{1+\gamma} \right)$$
$$\le \sum_{i \in I_+} \mathbb{P}\left( \big| \hat{h}_i - h(X_i) \big| > \frac{\epsilon}{1+\gamma} \right) + \sum_{i \in I_-} \mathbb{P}\left( \big| \hat{h}_i - h(X_i) \big| > \frac{\epsilon}{1+\gamma} \right)$$
$$\le n_+ \left( 2 e^{-2(n_+ - 1)\epsilon^2 / (1+\gamma)^4 M^2} + 2 e^{-2(n_- - 1)\epsilon^2 / (1+\gamma)^4 M^2} \right) + n_- \left( 2 e^{-2(n_+ - 1)\epsilon^2 / (1+\gamma)^4 M^2} + 2 e^{-2(n_- - 1)\epsilon^2 / (1+\gamma)^4 M^2} \right)$$
$$= n \left( 2 e^{-2(n_+ - 1)\epsilon^2 / (1+\gamma)^4 M^2} + 2 e^{-2(n_- - 1)\epsilon^2 / (1+\gamma)^4 M^2} \right).$$
A.2 Proof of Theorem 3
From Theorem 3 in [17], it suffices to show that
$$\int \big( \hat{d}_\gamma(x; \hat{\alpha}) - d_{\gamma^*}(x) \big)^2 dx \to 0$$
in probability. From the triangle inequality,
$$\| \hat{d}_\gamma(x; \hat{\alpha}) - d_{\gamma^*}(x) \|_{L^2} = \| \hat{d}_\gamma(x; \hat{\alpha}) - d_\gamma(x) + (\gamma^* - \gamma) f_-(x) \|_{L^2}$$
$$\le \| \hat{d}_\gamma(x; \hat{\alpha}) - d_\gamma(x) \|_{L^2} + \| (\gamma^* - \gamma) f_-(x) \|_{L^2} = \sqrt{\mathrm{ISE}(\hat{\alpha})} + |\gamma - \gamma^*| \cdot \| f_-(x) \|_{L^2},$$
so we need to show that ISE(α̂) and γ converge in probability to 0 and γ*, respectively. The convergence of γ to γ* can be easily shown from the strong law of large numbers.
In the previous analyses, we have shown the convergence of ISE(α̂) by treating N_+, N_−, and γ
as deterministic variables, but now we turn to the case where these variables are random. Define an
event D = {N_+ ≥ np/2, N_− ≥ n(1−p)/2, γ ≤ 2γ*}. For any ε > 0,
$$\mathbb{P}\{\mathrm{ISE}(\hat{\alpha}) > \epsilon\} \le \mathbb{P}(D^c) + \mathbb{P}\{\mathrm{ISE}(\hat{\alpha}) > \epsilon,\ D\}.$$
The first term converges to 0 from the strong law of large numbers. Let us define a set S =
{(n_+, n_−) | n_+ ≥ np/2, n_− ≥ n(1−p)/2, n_−/n_+ ≤ 2γ*}. Then,
$$\mathbb{P}\{\mathrm{ISE}(\hat{\alpha}) > \epsilon,\ D\} = \sum \mathbb{P}\{\mathrm{ISE}(\hat{\alpha}) > \epsilon,\ D \mid N_+ = n_+, N_- = n_-\} \cdot \mathbb{P}\{N_+ = n_+, N_- = n_-\}$$
$$= \sum_{(n_+, n_-) \in S} \mathbb{P}\{\mathrm{ISE}(\hat{\alpha}) > \epsilon \mid N_+ = n_+, N_- = n_-\} \cdot \mathbb{P}\{N_+ = n_+, N_- = n_-\}$$
$$\le \max_{(n_+, n_-) \in S} \mathbb{P}\{\mathrm{ISE}(\hat{\alpha}) > \epsilon \mid N_+ = n_+, N_- = n_-\}. \tag{8}$$
Provided that σ → 0 and nσ^{2d}/ln n → ∞, any pair (n_+, n_−) ∈ S satisfies σ → 0, n_+σ^{2d}/ln n →
∞, and n_−σ^{2d}/ln n → ∞ as n → ∞, and thus the term in (8) converges to 0 from Theorem 2. This
proves the theorem.
References
[1] B. Schölkopf and A. J. Smola, Learning with Kernels, MIT Press, Cambridge, MA, 2002.
[2] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[3] J. Kim and C. Scott, "Kernel classification via integrated squared error," IEEE Workshop on Statistical Signal Processing, August 2007.
[4] D. Kim, Least Squares Mixture Decomposition Estimation, unpublished doctoral dissertation, Dept. of Statistics, Virginia Polytechnic Inst. and State Univ., 1995.
[5] Mark Girolami and Chao He, "Probability density estimation from optimally condensed data samples," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1253-1264, Oct 2003.
[6] B. A. Turlach, "Bandwidth selection in kernel density estimation: A review," Technical Report 9317, C.O.R.E. and Institut de Statistique, Université Catholique de Louvain, 1993.
[7] David W. Scott, "Parametric statistical modeling by minimum integrated square error," Technometrics 43, pp. 274-285, 2001.
[8] F. Bunea, A. B. Tsybakov, and M. H. Wegkamp, "Sparse density estimation with l1 penalties," Proceedings of 20th Annual Conference on Learning Theory, COLT 2007, Lecture Notes in Artificial Intelligence, v4539, pp. 530-543, 2007.
[9] Ph. Rigollet and A. B. Tsybakov, "Linear and convex aggregation of density estimators," https://hal.ccsd.cnrs.fr/ccsd-00068216, 2004.
[10] Robert Jenssen, Deniz Erdogmus, Jose C. Principe, and Torbjørn Eltoft, "Towards a unification of information theoretic learning and kernel methods," in Proc. IEEE Workshop on Machine Learning for Signal Processing (MLSP 2004), Sao Luis, Brazil.
[11] Peter Hall and Matthew P. Wand, "On nonparametric discrimination using density differences," Biometrika, vol. 75, no. 3, pp. 541-547, Sept 1988.
[12] P. Meinicke, T. Twellmann, and H. Ritter, "Discriminative densities from maximum contrast estimation," in Advances in Neural Information Processing Systems 15, Vancouver, Canada, 2002, pp. 985-992.
[13] M. Di Marzio and C. C. Taylor, "Kernel density classification and boosting: an L2 analysis," Statistics and Computing, vol. 15, pp. 113-123(11), April 2005.
[14] E. Lehmann, Testing Statistical Hypotheses, Wiley, New York, 1986.
[15] M. P. Wand and M. C. Jones, Kernel Smoothing, Chapman & Hall, 1995.
[16] L. Devroye and G. Lugosi, Combinatorial Methods in Density Estimation, 2001.
[17] Charles T. Wolverton and Terry J. Wagner, "Asymptotically optimal discriminant functions for pattern classification," IEEE Trans. Info. Theory, vol. 15, no. 2, pp. 258-265, Mar 1969.
An Efficient Sequential Monte Carlo Algorithm for
Coalescent Clustering
Dilan Görür
Gatsby Unit
University College London
Yee Whye Teh
Gatsby Unit
University College London
[email protected]
[email protected]
Abstract
We propose an efficient sequential Monte Carlo inference scheme for the recently
proposed coalescent clustering model [1]. Our algorithm has a quadratic runtime
while those in [1] are cubic. In experiments, we were surprised to find that in
addition to being more efficient, it is also a better sequential Monte Carlo sampler
than the best in [1], when measured in terms of variance of estimated likelihood
and effective sample size.
1 Introduction
Algorithms for automatically discovering hierarchical structure from data play an important role
in machine learning. In many cases the data itself has an underlying hierarchical structure whose
discovery is of interest, examples include phylogenies in biology, object taxonomies in vision or
cognition, and parse trees in linguistics. In other cases, even when the data is not hierarchically
structured, such structures are still useful simply as a statistical tool to efficiently pool information
across the data at different scales; this is the starting point of hierarchical modelling in statistics.
Many hierarchical clustering algorithms have been proposed in the past for discovering hierarchies.
In this paper we are interested in a Bayesian approach to hierarchical clustering [2, 3, 1]. This is
mainly due to the appeal of the Bayesian approach being able to capture uncertainty in learned structures in a coherent manner. Unfortunately, inference in Bayesian models of hierarchical clustering is often complex to implement, and computationally expensive as well.
In this paper we build upon the work of [1] who proposed a Bayesian hierarchical clustering model
based on Kingman's coalescent [4, 5]. [1] proposed both greedy and sequential Monte Carlo (SMC)
based agglomerative clustering algorithms for inferring hierarchical clustering which are simpler
to implement than Markov chain Monte Carlo methods. The algorithms work by starting with each
data item in its own cluster, and iteratively merge pairs of clusters until all clusters have been merged.
The SMC based algorithm has computational cost O(n^3) per particle, where n is the number of data
items.
We propose a new SMC based algorithm for inference in the coalescent clustering of [1]. The
algorithm is based upon a different perspective on Kingman's coalescent than that in [1], in which the computations required to consider whether to merge each pair of clusters at each iteration are not discarded in subsequent iterations. This improves the computational cost to O(n^2) per particle,
allowing this algorithm to be applied to larger datasets. In experiments we show that our new
algorithm achieves improved costs without sacrificing accuracy or reliability.
Kingman's coalescent originated in the population genetics literature, and there has been significant interest there in inference, including Markov chain Monte Carlo based approaches [6] and SMC approaches [7, 8]. The SMC approaches have an interesting relationship to our algorithm and to that of
[1]. While ours and [1] integrate out the mutations on the coalescent tree and sample the coalescent times, [7, 8] integrate out the coalescent times and sample mutations instead. Because of this difference, ours and that of [1] will be more efficient for higher dimensional data, as well as in other
cases where the state space is too large and sampling mutations will be inefficient.
In the next section, we review Kingman's coalescent and the existing SMC algorithms for inference
on this model. In Section 3, we describe a cheaper SMC algorithm. We compare our method with
that of [1] in Section 4 and conclude with a discussion in Section 5.
2 Hierarchical Clustering using Kingman's Coalescent
Kingman's coalescent [4, 5] describes the family relationship between a set of haploid individuals by constructing the genealogy backwards in time. Ancestral lines coalesce when the individuals share a common ancestor, and the genealogy is a binary tree rooted at the common ancestor of all the individuals under consideration. We briefly review the coalescent and the associated clustering model as presented in [1] before presenting a different formulation more suitable for our proposed algorithm.

Let π be the genealogy of n individuals. There are n−1 coalescent events in π; we order these events with i = 1 being the most recent one, and i = n−1 for the last event, when all ancestral lines have coalesced. Event i occurs at time T_i < 0 in the past, and involves the coalescing of two ancestors, denoted ρ_{li} and ρ_{ri}, into one denoted ρ_i. Let A_i be the set of ancestors right after coalescent event i, and A_0 be the full set of individuals at the present time T_0 = 0. To draw a sample π from Kingman's coalescent we sample the coalescent events one at a time, starting from the present. At iteration i we pick the pair of individuals ρ_{li}, ρ_{ri} uniformly at random from the n−i+1 individuals available in A_{i−1}, pick a waiting time δ_i ∼ Exp( (n−i+1 choose 2) ) from an exponential distribution with rate (n−i+1 choose 2) equal to the number of pairs available, and set A_i = A_{i−1} − {ρ_{li}, ρ_{ri}} + {ρ_i}, T_i = T_{i−1} − δ_i. The probability of π is thus:

    p(π) = ∏_{i=1}^{n−1} exp( −(n−i+1 choose 2) δ_i ).    (1)
The coalescent can be used as a prior over binary trees in a model where we have a tree-structured likelihood function for observations at the leaves. Let θ_i be the subtree rooted at ρ_i and x_i be the observations at the leaves of θ_i. [1] showed that by propagating messages up the tree the likelihood function can be written in a sequential form:

    p(x | π) = Z_0(x) ∏_{i=1}^{n−1} Z_{ρ_i}(x_i | θ_i),    (2)

where Z_{ρ_i} is a function only of the coalescent times associated with ρ_{li}, ρ_{ri}, ρ_i and of the local messages sent from ρ_{li}, ρ_{ri} to ρ_i, and Z_0(x) is an easily computed normalization constant in eq. (2).
Each function has the form (see [1] for further details):

    Z_{ρ_i}(x_i | θ_i) = ∫ p_0(y_i) ∏_{c=l_i, r_i} [ ∫ p(y_c | y_i, θ_i) M_{ρ_c}(y_c) dy_c ] dy_i    (3)

where M_{ρ_c} is the message from child ρ_c to ρ_i. The posterior is proportional to the product of eq. (1) and eq. (2), and our aim is to compute this posterior efficiently. For this purpose, we present a different perspective on constructing the coalescent in the following, and describe our sequential Monte Carlo algorithm in Section 3.
2.1 A regenerative race process
In this section we describe a different formulation of the coalescent based on the fact that each stage of the coalescent can be interpreted as a race between the (n−i+1 choose 2) pairs of individuals to coalesce. Each pair proposes a coalescent time, the pair with the most recent coalescent time "wins" the race and gets to coalesce, at which point the next stage starts with (n−i choose 2) pairs in the race. Naïvely this race process would require a total of O(n^3) pairs to propose coalescent times. We show that using the regenerative (memoryless) property of exponential distributions allows us to reduce this to O(n^2).
Algorithm 1 A regenerative race process for constructing the coalescent
inputs: number of individuals n
set starting time T_0 = 0 and A_0 the set of n individuals
for all pairs of existing individuals ρ_l, ρ_r ∈ A_0 do
    propose coalescent time t_lr using eq. (4)
end for
for all coalescence events i = 1 : n−1 do
    find the pair to coalesce (ρ_{li}, ρ_{ri}) using eq. (5)
    set coalescent time T_i = t_{li ri} and update A_i = A_{i−1} − {ρ_{li}, ρ_{ri}} + {ρ_i}
    remove pairs with ρ_l ∈ {ρ_{li}, ρ_{ri}}, ρ_r ∈ A_{i−1} \ {ρ_{li}, ρ_{ri}}
    for all new pairs with ρ_l = ρ_i, ρ_r ∈ A_i \ {ρ_i} do
        propose coalescent time using eq. (4)
    end for
end for
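The following is a minimal Python sketch (our own, not the authors' code) of Algorithm 1. Each pair proposes a time once via t = T_entry − Exp(1); proposals live in a max-heap and are lazily discarded when one endpoint has already coalesced, which implements the "remove pairs" step. The priority queue gives the bookkeeping cost noted in footnote 1 below.

```python
import heapq
import random

def sample_coalescent(n, seed=0):
    """Draw one coalescent genealogy over n individuals via the regenerative race."""
    rng = random.Random(seed)
    T = 0.0
    alive = set(range(n))          # current ancestral lines
    next_id = n
    heap = []                      # entries: (-t_lr, l, r), a max-heap on t_lr

    def propose(l, r, t_entry):
        t = t_entry - rng.expovariate(1.0)   # eq. (4): shift back by Exp(1)
        heapq.heappush(heap, (-t, l, r))

    for l in alive:
        for r in alive:
            if l < r:
                propose(l, r, T)

    events = []                    # (time, left_child, right_child, parent)
    while len(alive) > 1:
        # pop until the proposal's pair is still alive; dead proposals are
        # exactly the pairs that dropped out of the race at earlier stages
        while True:
            neg_t, l, r = heapq.heappop(heap)
            if l in alive and r in alive:
                break
        T = -neg_t                 # eq. (5): most recent proposed time wins
        alive -= {l, r}
        parent = next_id
        next_id += 1
        for other in alive:        # new pairs formed with the new ancestor
            propose(other, parent, T)
        alive.add(parent)
        events.append((T, l, r, parent))
    return events

print(sample_coalescent(5)[-1])    # last event: the root of the genealogy
```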
The same idea will allow us to reduce the computational cost of our SMC algorithm from O(n^3) to O(n^2).
At stage i of the coalescent we have n−i+1 individuals in A_{i−1}, and (n−i+1 choose 2) pairs in the race to coalesce. Each pair ρ_l, ρ_r ∈ A_{i−1}, ρ_l ≠ ρ_r proposes a coalescent time

    t_lr | T_{i−1} ∼ T_{i−1} − Exp(1),    (4)

that is, by subtracting from the last coalescent time a waiting time drawn from an exponential distribution of rate 1. The pair ρ_{li}, ρ_{ri} with the most recent coalescent time wins the race:

    (ρ_{li}, ρ_{ri}) = argmax_{(ρ_l, ρ_r)} { t_lr : ρ_l, ρ_r ∈ A_{i−1}, ρ_l ≠ ρ_r }    (5)

and coalesces into a new individual ρ_i at time T_i = t_{li ri}. At this point stage i+1 of the race begins, with some pairs dropping out of the race (specifically those with one half of the pair being either ρ_{li} or ρ_{ri}) and new ones entering (specifically those formed by pairing the new individual ρ_i with an existing one). Among the pairs (ρ_l, ρ_r) that neither dropped out nor just entered the race, consider the distribution of t_lr conditioned on the fact that t_lr < T_i (since (ρ_l, ρ_r) did not win the race at stage i). Using the memoryless property of the exponential distribution, we see that t_lr | T_i ∼ T_i − Exp(1), thus eq. (4) still holds and we need not redraw t_lr for the stage i+1 race. In other words, once t_lr is drawn, it can be reused for subsequent stages of the race until it either wins a race or drops out. The generative process is summarized in Algorithm 1.
We obtain the probability of the coalescent π as a product over the i = 1, ..., n−1 stages of the race, of the probability of each event "ρ_{li}, ρ_{ri} wins stage i and coalesces at time T_i" given more recent stages. The probability at stage i is simply the probability that t_{li ri} = T_i, and that all other proposed coalescent times t_lr < T_i, conditioned on the fact that the proposed coalescent times t_lr for all pairs at stage i are all less than T_{i−1}. This gives:

    p(π) = ∏_{i=1}^{n−1} p(t_{li ri} = T_i | t_{li ri} < T_{i−1}) ∏_{(ρ_l,ρ_r)≠(ρ_{li},ρ_{ri})} p(t_lr < T_i | t_lr < T_{i−1})    (6)

         = ∏_{i=1}^{n−1} [ p(t_{li ri} = T_i) / p(t_{li ri} < T_{i−1}) ] ∏_{(ρ_l,ρ_r)≠(ρ_{li},ρ_{ri})} [ p(t_lr < T_i) / p(t_lr < T_{i−1}) ]    (7)

where the second product runs over all pairs in stage i except the winning pair. Each pair that participated in the race has corresponding terms in eq. (7), starting at the stage when the pair entered the race, and ending with the stage when the pair either dropped out or won. As these terms cancel, eq. (7) simplifies to,

    p(π) = ∏_{i=1}^{n−1} p(t_{li ri} = T_i) ∏_{ρ_l∈{ρ_{li},ρ_{ri}}, ρ_r∈A_{i−1}\{ρ_{li},ρ_{ri}}} p(t_lr < T_i),    (8)

where the second product runs only over those pairs that dropped out after stage i. The first term is the probability of pair (ρ_{li}, ρ_{ri}) coalescing at time T_i given its entrance time, and the second term is the probability of pair (ρ_l, ρ_r) dropping out of the race at time T_i given its entrance time. We can verify that this expression equals eq. (1) by plugging in the probabilities for exponential distributions. Finally, multiplying the prior eq. (8) and the likelihood eq. (2) we have,
    p(x, π) = Z_0(x) ∏_{i=1}^{n−1} Z_{ρ_i}(x_i | θ_i) p(t_{li ri} = T_i) ∏_{ρ_l∈{ρ_{li},ρ_{ri}}, ρ_r∈A_{i−1}\{ρ_{li},ρ_{ri}}} p(t_lr < T_i).    (9)

3 Efficient SMC Inference on the Coalescent
Our sequential Monte Carlo algorithm for posterior inference is directly inspired by the regenerative race process described above. In fact the algorithm is structurally exactly as in Algorithm 1, but with each pair ρ_l, ρ_r proposing a coalescent time from a proposal distribution t_lr ∼ Q_lr instead of from eq. (4). The idea is that the proposal distribution Q_lr is constructed taking into account the observed data, so that Algorithm 1 produces better approximate samples from the posterior.

The overall probability of proposing π under the SMC algorithm can be computed similarly to eq. (6)-(8), and is,

    q(π) = ∏_{i=1}^{n−1} q_{li ri}(t_{li ri} = T_i) ∏_{ρ_l∈{ρ_{li},ρ_{ri}}, ρ_r∈A_{i−1}\{ρ_{li},ρ_{ri}}} q_lr(t_lr < T_i),    (10)
where q_lr is the density of Q_lr. As both eq. (9) and eq. (10) can be computed sequentially, the weight w associated with each sample π can be computed "on the fly" as the coalescent tree is constructed:

    w_0 = Z_0(x)
    w_i = w_{i−1} · [ Z_{ρ_i}(x_i | θ_i) p(t_{li ri} = T_i) / q_{li ri}(t_{li ri} = T_i) ] ∏_{ρ_l∈{ρ_{li},ρ_{ri}}, ρ_r∈A_{i−1}\{ρ_{li},ρ_{ri}}} [ p(t_lr < T_i) / q_lr(t_lr < T_i) ].    (11)
Finally we address the choice of proposal distribution Q_lr to use. [1] noted that Z_{ρ_i}(x_i | θ_i) acts as a "local likelihood" term in eq. (9). We make use of this observation and use eq. (4) as a "local prior", i.e. the following density for the proposal distribution Q_lr:

    q_lr(t_lr) ∝ Z_{ρ_lr}(x_lr | t_lr, ρ_l, ρ_r, θ_{i−1}) p(t_lr | T_{c(lr)})    (12)

where ρ_lr is a hypothetical individual resulting from coalescing ρ_l and ρ_r, T_{c(lr)} denotes the time when the pair (ρ_l, ρ_r) enters the race, x_lr are the data under ρ_l and ρ_r, and p(t_lr | T_{c(lr)}) = e^{t_lr − T_{c(lr)}} I(t_lr < T_{c(lr)}) is simply an exponential density with rate 1 that has been shifted and reflected. I(·) is an indicator function returning 1 if its argument is true, and 0 otherwise.
The proposal distribution in [1] also has a form similar to eq. (12), but with the exponential rate being (n−i+1 choose 2) instead, if the proposal was in stage i of the race. This dependence means that at each stage of the race the coalescent time proposal distribution needs to be recomputed for each pair, leading to an O(n^3) computation time. On the other hand, similar to the prior process, we need to propose a coalescent time for each pair only once, when it is first created. This results in O(n^2) computational complexity per particle¹.
Note that it may not always be possible (or efficient) to compute the normalizing constant of the density in eq. (12) (even if we can sample from it efficiently). This means that the weight updates eq. (11) cannot be computed. In that case, we can use an approximation Z̃_{ρ_lr} to Z_{ρ_lr} instead. In the following subsection we describe the independent-sites parent-independent model we used in the experiments, and how to construct Z̃_{ρ_lr}.
¹ Technically the time cost is O(n^2 (m + log n)), where n is the number of individuals, and m is the cost of sampling from and evaluating eq. (12). The additional log n factor comes about because a priority queue needs to be maintained to determine the winner of each stage efficiently, but this is negligible compared to m.
3.1 Independent-Sites Parent-Independent Likelihood Model
In our experiments we have only considered coalescent clustering of discrete data, though our approach can be applied more generally. Say each data item consists of a D-dimensional vector where each entry can take on one of K values. We use the independent-sites parent-independent mutation model over multinomial vectors in [1] as our likelihood model. Specifically, this model assumes that each point on the tree is associated with a D-dimensional multinomial vector, and each entry of this vector on each branch of the tree evolves independently (thus independent-sites), forward in time, with mutations occurring at rate λ_d on entry d. When a mutation occurs, a new value for the entry is drawn from a distribution φ_d, independently of the previous value at that entry (thus parent-independent). When a coalescent event is encountered, the mutation process evolves independently down both branches.

Some calculations show that the transition probability matrix of the mutation process associated with entry d on a branch of length t is e^{−λ_d t} I_K + (1 − e^{−λ_d t}) 1_K φ_d^T, where I_K is the identity matrix, 1_K is a vector of 1's, and we have implicitly represented the multinomial distribution φ_d as a vector of probabilities. The message for entry d from node ρ_i on the tree to its parent is a vector M_{ρ_i}^d = [M_{ρ_i}^{d1}, ..., M_{ρ_i}^{dK}]^T, normalized so that φ_d^T M_{ρ_i}^d = 1. The local likelihood term is then:

    Z_{ρ_lr}^d(x_lr | t_lr, ρ_l, ρ_r, θ_{i−1}) = 1 − e^{λ_d (2 t_lr − t_l − t_r)} ( 1 − Σ_{k=1}^K φ_{dk} M_{ρ_l}^{dk} M_{ρ_r}^{dk} )    (13)
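To make eq. (13) concrete, here is a small sketch (our own illustration; the message convention and all of the toy numbers are assumptions) that evaluates the per-entry local likelihood given the two children's normalized messages:

```python
import numpy as np

# Per-entry local likelihood of eq. (13).  M_l, M_r are the children's
# normalized messages (shape D x K, with phi_d . M = 1 per entry d);
# t_l, t_r are the children's coalescent times, t_lr < min(t_l, t_r) the
# proposed time; lam (D,) are mutation rates and phi (D x K) the
# parent-independent mutation distributions.
def local_likelihood(M_l, M_r, t_l, t_r, t_lr, lam, phi):
    inner = 1.0 - np.sum(phi * M_l * M_r, axis=1)   # 1 - sum_k phi_dk M_l^dk M_r^dk
    return 1.0 - np.exp(lam * (2 * t_lr - t_l - t_r)) * inner  # one factor per d

D, K = 4, 2
phi = np.full((D, K), 0.5)
lam = np.full(D, 1.0)
M_l = np.array([[2.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 0.0]])  # phi . M = 1
M_r = np.array([[2.0, 0.0], [0.0, 2.0], [0.0, 2.0], [2.0, 0.0]])
Z = local_likelihood(M_l, M_r, t_l=0.0, t_r=0.0, t_lr=-0.7, lam=lam, phi=phi)
print(Z, "log-proposal contribution:", np.log(Z).sum())
```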
The logarithm of the proposal density is then:

    log q_lr(t_lr) = constant + (t_lr − T_{c(lr)}) + Σ_{d=1}^D log Z_{ρ_lr}^d(x_lr | t_lr, ρ_l, ρ_r, θ_{i−1})    (14)
This is not of standard form, and we use an approximation log q̃_lr(t_lr) instead. Specifically, we use a piecewise linear log q̃_lr(t_lr), which can be easily sampled from, and for which the normalization term is easy to compute.

The approximation is constructed as follows. Note that log Z_{ρ_lr}^d(x_lr | t_lr, ρ_l, ρ_r, θ_{i−1}), as a function of t_lr, is concave if the term inside the parentheses in eq. (13) is positive, convex if negative, and constant if zero. Thus eq. (14) is a sum of linear, concave and convex terms. Using the upper and lower envelopes developed for adaptive rejection sampling [9], we can construct piecewise linear upper and lower envelopes for log q_lr(t_lr) by upper and lower bounding the concave and convex parts separately. The upper and lower envelopes give exact bounds on the approximation error introduced, and we can efficiently improve the envelopes until a given desired approximation error is achieved. Finally, we used the upper bound as our approximate log q̃_lr(t_lr). Note that the same issue arises in the proposal distribution for SMC-PostPost, and we used the same piecewise linear approximation. The details of this algorithm can be found in [10].
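The key property being exploited is that a density whose logarithm is piecewise linear can be normalized and sampled exactly, segment by segment. The following is a self-contained sketch of that step (our own minimal implementation; the breakpoints and log-density values are made up, and this is not the envelope-construction code of [10]):

```python
import numpy as np

def sample_pwl_logdensity(xs, gs, rng):
    """Draw one sample from q(t) proportional to exp(g(t)),
    where g is piecewise linear with values gs at breakpoints xs."""
    xs, gs = np.asarray(xs, float), np.asarray(gs, float)
    dx = np.diff(xs)
    a = np.diff(gs) / dx                       # slope on each segment
    mass = np.empty(len(dx))
    for j in range(len(dx)):                   # closed-form segment integrals
        mass[j] = (np.exp(gs[j]) * dx[j] if abs(a[j]) < 1e-12
                   else np.exp(gs[j]) * (np.exp(a[j] * dx[j]) - 1.0) / a[j])
    j = rng.choice(len(mass), p=mass / mass.sum())   # pick a segment
    u = rng.random()
    if abs(a[j]) < 1e-12:
        return xs[j] + u * dx[j]
    # invert the exponential segment CDF
    return xs[j] + np.log1p(u * (np.exp(a[j] * dx[j]) - 1.0)) / a[j]

rng = np.random.default_rng(0)
xs = [-3.0, -1.0, 0.0]                         # breakpoints, t < T_c(lr) = 0
gs = [-4.0, 0.5, -0.5]                         # log-density at the breakpoints
print([round(sample_pwl_logdensity(xs, gs, rng), 3) for _ in range(5)])
```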
4 Experiments
The improved computational cost of inference makes it possible to do Bayesian inference for the coalescence models on larger datasets. The SMC samplers converge to the exact solution in the limit of
infinite particles. However, it is not enough to be more efficient per particle; the crucial point is how
efficient the algorithm is overall. An important question is how many particles we need in practice.
To address this question, we compared the performance of our algorithm SMC1 to SMC-PostPost
on the synthetic data shown in Figure 1.² There are 15 binary 12-dimensional vectors in the dataset. There is overlap between the features of the data points; however, the data does not obey a tree structure, which results in a multimodal posterior. Both SMC1 and SMC-PostPost recover the structure with only a few particles. However, there is room for improvement, as the variance in the
likelihood obtained from multiple runs decreases with increasing number of particles. Since both
SMC algorithms are exact in the limit, the values should converge as we add more particles. We can
check convergence by observing the variance of likelihood estimates of multiple runs. The variance
² The comparison is done in the importance sampling setting, i.e. without using resampling, for comparison of the proposal distributions.
[Figure 1 appears here: the 12-dimensional binary feature vectors of the 15 synthetic data points (left), a sample tree over the labeled data points, and two matrices, (a) and (b).]
Figure 1: Synthetic data features are shown on the left; each data point is a binary column vector. A sample tree from the SMC1 algorithm demonstrates that the algorithm can capture the similarity structure. The true covariance of the data (a) and the distance on the tree learned by the SMC1 algorithm averaged over particles (b) are shown, showing that the overall structure was correctly captured. The results obtained from SMC-PostPost were very similar to SMC1 and are therefore not shown here.
should shrink as we increase the number of particles. Figure 2 shows the change in the estimated likelihood as a function of the number of particles. From this figure, we can conclude that the computationally cheaper algorithm SMC1 is also more efficient in the number of particles, as it gives more accurate answers with fewer particles.
[Figure 2 appears here: two plots against the number of particles (up to 1000, 7 runs each): the estimated likelihood (left, on a 10^-30 scale) and the effective sample size (right).]
Figure 2: The change in the likelihood (left) and the effective sample size (right) as a function of the number of particles for SMC1 (solid) and SMC-PostPost (dashed). The mean estimates of both algorithms are very close, with SMC1 having a much tighter variance. The variance of both algorithms shrinks and the effective sample size increases as the number of particles increases.
A quantity of interest in genealogical studies is the time to the most recent common ancestor
(MRCA), which is the time of the last coalescence event. Although there is not a physical interpretation of this quantity for hierarchical clustering, it gives us an indication about the variance of
the particles. We can observe the variation in the time to MRCA to assess convergence. Similar to
the variance behaviour in the likelihood, with a small number of particles SMC-PostPost has
higher variance than SMC1 . However, as there are more particles, results of the two algorithms
almost overlap. The mean time for each step of coalescence together with its variance for 7250
particles for both algorithms is depicted in Figure 3. It is interesting that the first few coalescence
times of SMC1 are shorter than those for SMC-PostPost. The distribution of the particle weights
is important for the efficiency of the importance sampler. Ideally, the weights would be uniform
such that each particle contributes equally to the posterior estimation. If there are only a few particles
that come from a high probability region, the weights of those particles would be much larger than
[Figure 3 appears here: coalescence times on a log scale (10^0 down to 10^-3) for each of the 14 coalescence steps, for smc1 and smcPostPost.]
Figure 3: Times for each coalescence step averaged over 7250 particles. Note that both algorithms almost converged to the same distribution when given enough resources. There is a slight difference in the mean coalescence time. It is interesting that the SMC1 algorithm proposes shorter times for the initial coalescence events.
the rest, resulting in a low effective sample size. We will discuss this point more in the next section.
Here, we note that for the synthetic dataset, the effective sample size of SMC-PostPost is very
poor, and that of SMC1 is much higher; see Figure 2.
5 Discussion
We described an efficient sequential Monte Carlo algorithm for inference in hierarchical clustering models that use Kingman's coalescent as a prior. Our method makes use of a regenerative perspective to construct the coalescent tree. Using this construction, we achieve quadratic run time per particle. By employing a tight upper bound on the local likelihood term, the proposed algorithm is applicable to general data generation processes.
We also applied our algorithm to inferring the structure in the phylolinguistic data used in [1]. We used the same Indo-European subset of the data, with the same subset of features; that is, 44 languages with 100 binary features. Three example trees with the largest weights out of 7750 samples
are depicted in Figure 4. Unfortunately, on this dataset, the effective sample size of both algorithms
is close to one. A usual method to circumvent the low effective sample size problem in sequential
Monte Carlo algorithms is to do resampling, that is, detecting the particles that will not contribute
much to the posterior from the partial samples and prune them away, multiplying the promising
samples. There are two stages to doing resampling. We need to decide at what point to prune away
samples, and how to select which samples to prune away. As shown by [11], different problems may
require different resampling algorithms. We tried resampling using Algorithm 5.1 of [12]; however, this yielded only a small improvement in the final performance for both algorithms on this data set.
Note that both algorithms use "local likelihoods" for calculating the weights; therefore the weights are not fully informative about the actual likelihood of the partial sample. Furthermore, in the recursive calculation of the weights in SMC1, to save computation we include the effect of a pair only when it either coalesces or ceases to exist. Therefore the partial weights are even less informative about the state of the sample, and the effective sample size cannot really give a full picture of whether the current sample is good or not. In fact, we did observe oscillations in the effective sample size calculated on the weights along the iterations, i.e. starting off with a high value, decreasing to virtually 1, and increasing again before termination, which also indicates that it is not clear which of the particles will eventually be more effective. An open question is how to incorporate a resampling algorithm to improve the efficiency.
[Figure 4 appears here: three binary trees over the 44 Indo-European WALS languages (Slavic, Baltic, Celtic, Germanic, Romance, Greek, Albanian, Indic, Iranian, and Armenian varieties), labeled (a), (b), and (c), with normalized weights 0.998921, 0.000379939, and 0.0151504 respectively.]
Figure 4: Tree structures inferred from WALS data. (a), (b) Samples from a run with 7750 particles without resampling. (c) Sample from a run with resampling. The values above the trees are normalized weights. Note that the weight of (a) is almost one, which means that the contribution from the rest of the particles is infinitesimal, although the tree structure in (b) also seems to capture the similarities between languages.

References

[1] Y. W. Teh, H. Daumé III, and D. M. Roy. Bayesian agglomerative clustering with coalescents. In Advances in Neural Information Processing Systems, volume 20, 2008.
[2] R. M. Neal. Defining priors for distributions using Dirichlet diffusion trees. Technical Report 0104, Department of Statistics, University of Toronto, 2001.
[3] C. K. I. Williams. A MCMC approach to hierarchical mixture modelling. In Advances in Neural Information Processing Systems, volume 12, 2000.
[4] J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27-43, 1982. Essays in Statistical Science.
[5] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235-248, 1982.
[6] J. Felsenstein. Evolutionary trees from DNA sequences: a maximum likelihood approach. Journal of Molecular Evolution, 17:368-376, 1981.
[7] R. C. Griffiths and S. Tavare. Simulating probability distributions in the coalescent. Theoretical Population Biology, 46:131-159, 1994.
[8] M. Stephens and P. Donnelly. Inference in molecular population genetics. Journal of the Royal Statistical Society, 62:605-655, 2000.
[9] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41:337-348, 1992.
[10] D. Görür and Y. W. Teh. Concave convex adaptive rejection sampling. Technical report, Gatsby Computational Neuroscience Unit, 2008.
[11] Y. Chen, J. Xie, and J. Liu. Stopping-time resampling for sequential Monte Carlo methods. Journal of the Royal Statistical Society, 67, 2005.
[12] P. Fearnhead. Sequential Monte Carlo Method in Filter Theory. PhD thesis, Merton College, University of Oxford, 1998.
Regularized Learning with Networks of Features
Ted Sandler, Partha Pratim Talukdar, and Lyle H. Ungar
Department of Computer & Information Science, University of Pennsylvania
{tsandler,partha,ungar}@cis.upenn.edu
John Blitzer
Department of Computer Science, U.C. Berkeley
[email protected]
Abstract
For many supervised learning problems, we possess prior knowledge about which
features yield similar information about the target variable. In predicting the topic
of a document, we might know that two words are synonyms, and when performing image recognition, we know which pixels are adjacent. Such synonymous or
neighboring features are near-duplicates and should be expected to have similar
weights in an accurate model. Here we present a framework for regularized learning when one has prior knowledge about which features are expected to have similar and dissimilar weights. The prior knowledge is encoded as a network whose
vertices are features and whose edges represent similarities and dissimilarities between them. During learning, each feature's weight is penalized by the amount
it differs from the average weight of its neighbors. For text classification, regularization using networks of word co-occurrences outperforms manifold learning and compares favorably to other recently proposed semi-supervised learning
methods. For sentiment analysis, feature networks constructed from declarative
human knowledge significantly improve prediction accuracy.
1 Introduction
For many important problems in machine learning, we have a limited amount of labeled training
data and a very high-dimensional feature space. A common approach to alleviating the difficulty
of learning in these settings is to regularize a model by penalizing a norm of its parameter vector.
The most commonly used norms in classification, L1 and L2 , assume independence among model
parameters [1]. However, we often have access to information about dependencies between parameters. For example, with spatio-temporal data, we usually know which measurements were taken at
points nearby in space and time. And in natural language processing, digital lexicons such as WordNet can indicate which words are synonyms or antonyms [2]. For the biomedical domain, databases
such as KEGG and DIP list putative protein interactions [3, 4]. And in the case of semi-supervised
learning, dependencies can be inferred from unlabeled data [5, 6]. Consequently, we should be able
to learn models more effectively if we can incorporate dependency structure directly into the norm
used for regularization.
Here we introduce regularized learning with networks of features, a framework for constructing customized norms on the parameters of a model when we have prior knowledge about which parameters
are likely to have similar values. Since our focus is on classification, the parameters we consider are
feature weights in a linear classifier. The prior knowledge is encoded as a network or graph whose
nodes represent features and whose edges represent similarities between the features in terms of how
likely they are to have similar weights. During learning, each feature's weight is penalized by the
amount it differs from the average weight of its neighbors. This regularization objective is closely
connected to the unsupervised dimensionality reduction method, locally linear embedding (LLE),
proposed by Roweis and Saul [7]. In LLE, each data instance is assumed to be a linear combination of its nearest neighbors on a low dimensional manifold. In this work, each feature's weight is
preferred (though not required) to be a linear combination of the weights of its neighbors.
Similar to other recent methods for incorporating prior knowledge in learning, our framework can
be viewed as constructing a Gaussian prior with non-diagonal covariance matrix on the model parameters [6, 8]. However, instead of constructing the covariance matrix directly, it is induced from
a network. The network is typically sparse in that each feature has only a small number of neighbors. However, the induced covariance matrix is generally dense. Consequently, we can implicitly
construct rich and dense covariance matrices over large feature spaces without incurring the space
and computational blow-ups that would be incurred if we attempted to construct these matrices
explicitly.
Regularization using networks of features is especially appropriate for high-dimensional feature
spaces such as are encountered in text processing where the local distances required by traditional manifold classification methods [9, 10] may be difficult to estimate accurately, even with
large amounts of unlabeled data. We show that regularization with feature-networks derived from
word co-occurrence statistics outperforms manifold regularization and another, more recent, semisupervised learning approach [5] on the task of text classification. Feature network based regularization also supports extensions which provide flexibility in modeling parameter dependencies,
allowing for feature dissimilarities and the introduction of feature classes whose weights have common but unknown means. We demonstrate that these extensions improve classification accuracy
on the task of classifying product reviews in terms of how favorable they are to the products in
question [11]. Finally, we contrast our approach with related regularization methods.
2 Regularized Learning with Networks of Features
We assume a standard supervised learning framework in which we are given a training set of instances T = {(x_i, y_i)}_{i=1}^n with x_i ∈ R^d and associated labels y_i ∈ Y. We wish to learn a linear classifier parameterized by weight vector w ∈ R^d by minimizing a convex loss function l(x, y; w) over the training instances, (x_i, y_i). For many problems, the dimension, d, is much larger than the number of labeled instances, n. Therefore, it is important to impose some constraints on w. Here we do this using a directed network or graph, G, whose vertices, V = {1, ..., d}, correspond to the features of our model and whose edges link features whose weights are believed to be similar. The edges of G are non-negative, with larger weights indicating greater similarity. Conversely, a weight of zero means that two features are not believed a priori to be similar. As has been shown elsewhere [5, 6, 8], such similarities can be inferred from prior domain knowledge, auxiliary task learning, and statistics computed on unlabeled data. For the time being we assume that G is given, and defer its construction until section 4, the experimental work.

The weights of G are encoded by a matrix, P, where P_ij ≥ 0 gives the weight of the directed edge from vertex i to vertex j. We constrain the out-degree of each vertex to sum to one, Σ_j P_ij = 1, so that no feature "dominates" the graph. Because the semantics of the graph are that linked features should have similar weights, we penalize each feature's weight by the squared amount it differs from the weighted average of its neighbors. This gives us the following criterion to optimize in learning:
    loss(w) = Σ_{i=1}^n l(x_i, y_i; w) + α Σ_{j=1}^d ( w_j − Σ_k P_jk w_k )² + β ‖w‖²_2,    (1)
k
where we have added a ridge term to make the loss strictly convex. The hyperparameters ? and ?
specify the amount of network and ridge regularization respectively. The regularization penalty can
be rewritten as w? M w where M = ? (I ? P )? (I ? P ) + ? I. The matrix M is symmetric positive
definite, and therefore our criterion possesses a Bayesian interpretation in which the weight vector,
w, is a priori normally distributed with mean zero and covariance matrix 2M ?1 .
Minimizing equation (1) is equivalent to finding the MAP estimate for w. The gradient of (1) with respect to w is ∇_w loss = Σ_{i=1}^n ∇_w l(x_i, y_i; w) + 2Mw and therefore requires only an additional matrix multiply on top of computing the loss over the training data. If P is sparse, as it is in our experiments (i.e., it has only kd entries for k ≪ d), then the matrix multiply is O(d). Thus equation (1) can be minimized very quickly. Additionally, the induced covariance matrix M^{−1} will typically be dense even though P is sparse, showing that we can construct dense covariance structures over w without incurring storage and computation costs.
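As an illustration, here is a minimal sketch of the objective and its gradient (our own code, not the authors'; squared loss stands in for the generic convex loss l, and the tiny graph is made up). Note that the gradient needs only sparse multiplies by P and its transpose:

```python
import numpy as np
from scipy import sparse

def fnr_loss_and_grad(w, X, y, P, alpha, beta):
    """Objective of eq. (1) with squared loss, and its gradient."""
    resid = X @ w - y                      # squared-loss residuals
    diff = w - P @ w                       # w_j - sum_k P_jk w_k, per feature
    loss = resid @ resid + alpha * (diff @ diff) + beta * (w @ w)
    # penalty gradient: 2 (alpha (I-P)^T (I-P) + beta I) w
    grad = 2 * (X.T @ resid) + 2 * alpha * (diff - P.T @ diff) + 2 * beta * w
    return loss, grad

d, n = 6, 20
rng = np.random.default_rng(1)
rows = [0, 0, 1, 2, 3, 4, 5]
cols = [1, 2, 0, 0, 4, 3, 4]
vals = [0.5, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0]   # each row sums to one
P = sparse.csr_matrix((vals, (rows, cols)), shape=(d, d))
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
loss, grad = fnr_loss_and_grad(np.zeros(d), X, y, P, alpha=1.0, beta=0.1)
print(loss, grad[:3])
```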
2.1 Relationship to Locally Linear Embedding
Locally linear embedding (LLE) is an unsupervised learning method for embedding high dimensional data in a low dimensional vector space. The data {X̃_i}_{i=1}^n is assumed to lie on a low dimensional manifold of dimension c within a high dimensional vector space of dimension d, with c ≪ d. Since the data lies on a manifold, each point is approximately a convex combination of its nearest neighbors on the manifold. That is, X̃_i ≈ Σ_{j→i} P_ij X̃_j, where j → i denotes the samples, j, which lie close to i on the manifold. As above, the matrix P has non-negative entries and its rows sum to one. The set of low dimensional coordinates, {Ỹ_i}_{i=1}^n, Ỹ_i ∈ R^c, are found by minimizing the sum of squares cost:

    cost({Ỹ_i}) = Σ_i ‖ Ỹ_i − Σ_j P_ij Ỹ_j ‖²_2,    (2)

subject to the constraint that the {Ỹ_i} have unit variance in each of the c dimensions. The solution to equation (2) is found by performing eigen-decomposition on the matrix (I − P)^T (I − P) = U Λ U^T, where U is the matrix of eigenvectors and Λ is the diagonal matrix of eigenvalues. The LLE coordinates are obtained from the eigenvectors u_1, ..., u_c whose eigenvalues λ_1, ..., λ_c are smallest¹, by setting Ỹ_i = (u_{1i}, ..., u_{ci})^T. Looking at equation (1) and ignoring the ridge term, it is clear that our feature network regularization penalty is identical to LLE except that the embedding is found for the feature weights rather than the data instances. However, there is a deeper connection.

If we let L(Y, Xw) denote the unregularized loss over the training set, where X is the n × d matrix of instances and Y is the n-vector of class labels, we can express equation (1) in matrix form as

    w* = argmin_w L(Y, Xw) + w^T ( α (I − P)^T (I − P) + β I ) w.    (3)

Defining X̃ to be XU(αΛ + βI)^{−1/2}, where U and Λ are from the eigen-decomposition above, it is not hard to show that equation (3) is equivalent to the alternative ridge regularized learning problem

    w̃* = argmin_{w̃} L(Y, X̃ w̃) + w̃^T w̃.    (4)

That is, the two minimizers, w* and w̃*, yield the same predictions: Ŷ = X w* = X̃ w̃*. Consequently, we can view feature network regularization as: 1) finding an embedding for the features using LLE in which all of the eigenvectors are used and scaled by the inverse square-roots of their eigenvalues (plus a smoothing term, βI, that makes the inverse well-defined); 2) projecting the data instances onto these coordinates; and 3) learning a ridge-penalized model on the new representation. In using all of the eigenvectors, the dimensionality of the feature embedding is not reduced. However, in scaling the eigenvectors by the inverse square-roots of their eigenvalues, the directions of least cost in the network regularized problem become the directions of maximum variance in the associated ridge regularized problem, and hence are the directions of least cost in the ridge problem. As a result, the effective dimensionality of the learning problem is reduced to the extent that the distribution of inverted eigenvalues is sharply peaked. When the best representation for classification has high dimension, it is faster to solve (3) than to compute a large eigenvector basis and solve (4). In the high dimensional problems of section 4, we find that regularization with feature networks outperforms LLE-based regression.
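The equivalence of (3) and (4) is easy to verify numerically with squared loss, where both problems have closed-form solutions; the following check (our own sketch, with a random row-stochastic P) confirms that the two solutions give identical predictions:

```python
import numpy as np

# Check: w* = (X^T X + M)^{-1} X^T y with M = alpha (I-P)^T (I-P) + beta I
# gives the same predictions as the ridge problem on Xt = X U (alpha L + beta I)^{-1/2}.
rng = np.random.default_rng(0)
n, d, alpha, beta = 15, 8, 2.0, 0.3
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
P = rng.random((d, d))
np.fill_diagonal(P, 0.0)
P /= P.sum(axis=1, keepdims=True)                    # rows sum to one

lam, U = np.linalg.eigh((np.eye(d) - P).T @ (np.eye(d) - P))
M = U @ np.diag(alpha * lam + beta) @ U.T
w = np.linalg.solve(X.T @ X + M, X.T @ y)            # solution of eq. (3)

Xt = X @ U @ np.diag((alpha * lam + beta) ** -0.5)
wt = np.linalg.solve(Xt.T @ Xt + np.eye(d), Xt.T @ y)  # solution of eq. (4)

print(np.allclose(X @ w, Xt @ wt))                   # True: same predictions
```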
3 Extensions to Feature Network Regularization
In this section, we present a number of extensions and alternatives to feature network regularization as
formulated in section 2, including the modeling of classes of features whose weights are believed
to share the same unknown means, the incorporation of feature dissimilarities, and two alternative
regularization criteria based on the graph Laplacian.
¹ More precisely, eigenvectors u_2, ..., u_{c+1} are used so that the {Ỹ_i} are centered.
3.1 Regularizing with Classes of Features
In machine learning, features can often be grouped into classes, such that all the weights of the
features in a given class are drawn from the same underlying distribution. For example, words can
be grouped by part of speech, by meaning (as in WordNet's synsets), or by clustering based on the
words they co-occur with or the documents they occur in. Using an appropriately constructed feature
graph, we can model the case in which the underlying distributions are believed to be Gaussians with
known, identical variances but with unknown means. That is, the case in which there are k disjoint
classes of features {C_i}_{i=1}^k whose weights are drawn i.i.d. N(μ_i, σ²), with μ_i unknown but σ²
known and shared across all classes.
The straightforward approach to modeling this scenario might seem to be to link all the features within a class to each other, forming a clique, but this does not lead to the desired interpretation. Additionally, the number of edges in this construction scales quadratically in the clique sizes, resulting in feature graphs that are not sparse. Our approach is therefore to create k additional "virtual" features, f_1, ..., f_k, that do not appear in any of the data instances but whose weights μ̂_1, ..., μ̂_k serve as the estimates for the true but unknown means, μ_1, ..., μ_k. In creating the feature graph, we link each feature to the virtual feature for its class with an edge of weight one. The virtual features themselves do not possess any outgoing links.
Denoting the class of feature i as c(i), and setting the hyperparameters α and β in equation (1) to 1/(2σ²) and 0, respectively, yields a network regularization cost of (1/2) σ^{−2} Σ_{i=1}^d (w_i − μ̂_{c(i)})². Since the virtual features do not appear in any instances, i.e. their values are zero in every data instance, their weights are free to take on whatever values minimize the network regularization cost in (1), in particular the estimates of the class means, μ_1, ..., μ_k. Consequently, minimizing the network regularization penalty maximizes the log-likelihood for the intended scenario. We can extend this construction to model the case in which the feature weights are drawn from a mixture of Gaussians by connecting each feature to a number of virtual features with edge weights that sum to one.
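A sketch of the corresponding graph construction (our own illustration; the helper name and the toy class assignment are assumptions):

```python
import numpy as np
from scipy import sparse

# Virtual-feature construction: every real feature gets a single out-edge of
# weight one to the virtual feature of its class; virtual features have no
# out-edges, so their weights float to the class means when the network
# penalty is minimized.
def class_mean_graph(n_real, class_of):        # class_of[i] in {0, ..., k-1}
    k = max(class_of) + 1
    d = n_real + k                             # real features + k virtual ones
    rows = np.arange(n_real)
    cols = n_real + np.asarray(class_of)       # virtual feature for each class
    P = sparse.csr_matrix((np.ones(n_real), (rows, cols)), shape=(d, d))
    return P                                   # rows of real features sum to 1

P = class_mean_graph(5, [0, 0, 1, 1, 1])
print(P.toarray())
```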
3.2 Incorporating Feature Dissimilarities
Feature network regularization can also be extended to induce features to have opposing weights. Such feature "dissimilarities" can be useful in tasks such as sentiment prediction, where we would like weights for words such as "great" or "fantastic" to have opposite signs from their negated bigram counterparts "not great" and "not fantastic," and from their antonyms. To model dissimilarities, we construct a separate graph whose edges represent anti-correlations between features. Regularizing over this graph enforces each feature's weight to be equal to the negative of the average of the neighboring weights. To do this, we encode the dissimilarity graph using a matrix $Q$, defined analogously to the matrix $P$, and add the term $\sum_i \big(w_i + \sum_j Q_{ij} w_j\big)^2$ to the network regularization criterion, which can be written as $w^\top (I+Q)^\top (I+Q) w$. The matrix $(I+Q)^\top (I+Q)$ is positive semidefinite like its similarity graph counterpart. Goldberg et al. [12] use a similar construction with the graph Laplacian in order to incorporate dissimilarities between instances in manifold learning.
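A minimal numerical sketch of this penalty on hypothetical toy data:

import numpy as np

# Q is row-normalized over each feature's dissimilarity neighbors, so each
# residual is w_i plus the average weight of the features it should oppose.
d = 4
Q = np.zeros((d, d))
Q[0, 1] = 1.0          # e.g., feature 0 = "great", feature 1 = "not great"
Q[1, 0] = 1.0
I = np.eye(d)
w = np.array([0.8, -0.7, 0.1, 0.0])
penalty = w @ (I + Q).T @ (I + Q) @ w   # small when w_0 is close to -w_1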
3.3 Regularizing Features with the Graph Laplacian
A natural alternative to the network regularization criterion given in section 2 is to regularize the feature weights using a penalty derived from the graph Laplacian [13]. Here, the feature graph's edge weights are given by a symmetric matrix $W$, whose entries $W_{ij} \geq 0$ give the weight of the edge between features $i$ and $j$. The Laplacian penalty is $\frac{1}{2}\sum_{i,j} W_{ij}(w_i - w_j)^2$, which can be written as $w^\top (D - W) w$, where $D = \mathrm{diag}(W\mathbf{1})$ is the vertex degree matrix. The main difference between the Laplacian penalty and the network penalty in equation (1) is that the Laplacian penalizes each edge equally (modulo the edge weights) whereas the network penalty penalizes each feature equally. In graphs where there are large differences in vertex degree, the Laplacian penalty will therefore focus most of the regularization cost on features with many neighbors. Experiments in section 4 show that the criterion in (1) outperforms the Laplacian penalty as well as a related penalty derived from the normalized graph Laplacian, $\frac{1}{2}\sum_{i,j} W_{ij}\big(w_i/\sqrt{D_{ii}} - w_j/\sqrt{D_{jj}}\big)^2$. The normalized Laplacian penalty assumes that $\sqrt{D_{jj}}\, w_i \approx \sqrt{D_{ii}}\, w_j$, which is different from assuming that linked features should have similar weights.
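To make the contrast concrete, here is a small sketch (toy graph and names assumed) evaluating the three penalties side by side:

import numpy as np

W = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])                   # symmetric similarity graph
D = np.diag(W.sum(axis=1))                     # vertex degree matrix
P = W / W.sum(axis=1, keepdims=True)           # row-normalized for the network penalty
I = np.eye(3)
w = np.array([1.0, 0.5, -0.5])

network_pen = w @ (I - P).T @ (I - P) @ w      # one residual per feature
laplacian_pen = w @ (D - W) @ w                # one term per edge
Dinv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
norm_laplacian_pen = w @ (I - Dinv_sqrt @ W @ Dinv_sqrt) @ w  # degree-normalized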
[Figure 1 omitted: plots of classification accuracy vs. number of training instances (100 to 2000). Left panel legend: FNR, LLE Regression, PCR, Norm. Laplacian, Laplacian, Ridge Penalty. Right panel legend: FNR, Manifold (Loc/Glob), ASO Top, ASO Bottom.]
Figure 1: Left: Accuracy of feature network regularization (FNR) and five baselines on "20 newsgroups" data. Right: Accuracy of FNR compared to reported accuracies of three other semi-supervised learning methods.
4 Experiments
We evaluated logistic regression augmented with feature network regularization on two natural language processing tasks. The first was document classification on the 20 Newsgroups dataset, a
well-known document classification benchmark. The second was sentiment classification of product reviews, the task of classifying user-written reviews according to whether they are favorable or
unfavorable to the product under review based on the review text [11]. Feature graphs for the two
tasks were constructed using different information. For document classification, the feature graph
was constructed using feature co-occurrence statistics gleaned from unlabeled data. In sentiment
prediction, both co-occurrence statistics and prior domain knowledge were used.
4.1 Experiments on 20 Newsgroups
We evaluated feature network based regularization on the 20 newsgroups classification task using
all twenty classes. The feature set was restricted to the 11,376 words which occurred in at least 20
documents, not counting stop-words. Word counts were transformed by adding one and taking logs.
To construct the feature graph, each feature (word) was represented by a binary vector denoting its
presence/absence in each of the 20,000 documents of the dataset. To measure similarity between
features, we computed cosines between these binary vectors. Each feature was linked to the 25
other features with highest cosine scores, provided that the scores were above a minimum threshold
of 0.10. The edge weights of the graph were set to these cosine scores, and the matrix P was constructed by normalizing each vertex's out-degree to sum to one.
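The following sketch mirrors that construction (function and parameter names are illustrative; the dense feature-feature cosine matrix is used only for simplicity and would be memory-heavy at the full 11,376-word vocabulary):

import numpy as np

def build_feature_graph(X_binary, k=25, min_cos=0.10):
    # X_binary: documents x features presence/absence matrix.
    Xf = X_binary.astype(float)
    norms = np.linalg.norm(Xf, axis=0) + 1e-12
    cos = (Xf.T @ Xf) / np.outer(norms, norms)   # feature-feature cosines
    np.fill_diagonal(cos, 0.0)
    d = cos.shape[0]
    P = np.zeros((d, d))
    for i in range(d):
        nbrs = np.argsort(cos[i])[::-1][:k]      # k highest-cosine features
        nbrs = [j for j in nbrs if cos[i, j] >= min_cos]
        if nbrs:
            P[i, nbrs] = cos[i, nbrs]            # edge weights = cosine scores
            P[i] /= P[i].sum()                   # out-degree sums to one
    return P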
Figure 1 (left) shows feature network regularization compared against five other baselines: logistic regression with an L2 (ridge) penalty; principal components logistic regression (PCR), in which each instance was projected onto the largest 200 right singular vectors of the $n \times d$ matrix $X$; LLE-logistic regression, in which each instance was projected onto the smallest 200 eigenvectors of the matrix $(I-P)^\top (I-P)$ described in section 2; and logistic regression regularized by the normalized and unnormalized graph Laplacians described in section 3.3. Results at each training set size are averages of five trials with training sets sampled to contain an equal number of documents per class. For ridge, the amount of L2 regularization was chosen using cross validation on the training set. Similarly, for feature network regularization and the Laplacian regularizers, the hyperparameters $\alpha$ and $\beta$ were chosen through cross validation on the training set using a simple grid search. The ratio of $\alpha$ to $\beta$ tended to be around 100:1. For PCR and LLE-logistic regression, the number of eigenvectors used was chosen to give good performance on the test set at both large and small training set sizes. All models were trained using L-BFGS with a maximum of 200 iterations. Learning a single model took between 30 seconds and two minutes, with convergence typically achieved before the full 200 iterations.
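As a sketch of this training setup, assuming a binary-label simplification of the multiclass experiments (the objective is logistic loss plus the network and ridge penalties; names are hypothetical):

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def train_fnr_logistic(X, y, P, alpha, beta):
    # y in {-1, +1}; penalty is alpha/2 * ||(I - P) w||^2 + beta * ||w||^2.
    I = np.eye(P.shape[0])
    M = (I - P).T @ (I - P)

    def objective(w):
        z = X @ w
        loss = np.sum(np.logaddexp(0.0, -y * z))         # logistic loss
        grad = X.T @ (-y * expit(-y * z))
        reg = 0.5 * alpha * w @ M @ w + beta * w @ w
        grad_reg = alpha * (M @ w) + 2.0 * beta * w
        return loss + reg, grad + grad_reg

    res = minimize(objective, np.zeros(X.shape[1]), jac=True,
                   method="L-BFGS-B", options={"maxiter": 200})
    return res.x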
[Figure 2 omitted: accuracy (50 to 80) vs. number of training instances (2 to 1000) for the Books, DVDs, Electronics, and Kitchen Appliances datasets; legend: sim, sim+dissim, ridge.]
Figure 2: Accuracy of feature network regularization on the sentiment datasets using feature classes and dissimilarity edges to regularize the small set of SentiWordNet features.
The results in figure 1 show that feature network regularization with a graph constructed from unlabeled data outperforms all baselines and increases accuracy by 4%-17% over the plain ridge penalty,
an error reduction of 17%-30%. It also outperforms the related LLE regression. We conjecture this is because in tuning the hyperparameters, we can adaptively tune the dimensionality of the underlying data representation. Moreover, by scaling the eigenvectors by their eigenvalues, feature network regularization keeps more information about the directions of least cost in weight space
than does LLE regression, which does not rescale the eigenvectors but simply keeps or discards them
(i.e. scales them by 1 or 0).
Figure 1 (right) compares feature network regularization against two external approaches that leverage unlabeled data: a multi-task learning approach called alternating structure optimization (ASO),
and our reimplementation of a manifold learning method which we refer to as "local/global consistency" [5, 10]. To make a fair comparison against the reported results for ASO, training sets were
sampled so as not to necessarily contain an equal number of documents per class. Accuracies are
given for the highest and lowest performing variants of ASO reported in [5]. Our reimplementation
of local/global consistency used the same document preprocessing described in [10]. However, the
graph was constructed so that each document had only K = 10 neighbors (the authors in [10] use
a fully connected graph which does not fit in memory for the entire 20 newsgroups dataset). Classification accuracy of local/global consistency did not vary much with K and up to 500 neighbors
were tried for each document. Here we see that feature network regularization is competitive with
the other semi-supervised methods and performs best at all but the smallest training set size.
4.2 Sentiment Classification
For sentiment prediction, we obtained the product review datasets used in [11]. Each dataset consists of reviews downloaded from Amazon.com for one of four different product domains: books,
DVDs, electronics, and kitchen appliances. The reviews have an associated number of "stars," ranging from 0 to 5, rating the quality of a product. The goal of the task is to predict whether a review
has more than (positive) or less than (negative) 3 stars associated with it based only on the text in the
review. We performed two sets of experiments in which prior domain knowledge was incorporated
using feature networks. In both, we used a list of sentimentally-charged words obtained from the
SentiWordNet database [14], a database which associates positive and negative sentiment scores to
each word in WordNet. In the first experiment, we constructed a set of feature classes in the manner
described in section 3.1 to see if such classes could be used to boot-strap weight polarities for groups
of features. In the second, we computed similarities between words in terms of the similarity of their
co-occurrences with the sentimentally charged words.
From SentiWordNet we extracted a list of roughly 200 words with high positive and negative sentiment scores that also occurred in the product reviews at least 100 times. Words to which SentiWordNet gave a high "positive" score were placed in a "positive words" cluster and words given a high "negative" score were placed in a "negative words" cluster. As described in section 3.1, all words in the positive cluster were attached to a virtual feature representing the mean feature weight of the positive cluster words, and all words in the negative cluster were attached to a virtual weight representing the mean weight of the negative cluster words. We also added a dissimilarity edge (described in section 3.2) between the positive and negative clusters' virtual features to induce the two classes of features to have opposite means.
[Figure 3 omitted: accuracy (60 to 90) vs. number of training instances (50 to 1000) for the Books, DVDs, Electronics, and Kitchen Appliances datasets; legend: FNR, Ridge Penalty.]
Figure 3: Accuracy of feature network and ridge regularization on four sentiment classification datasets.
As shown in figure 2, imposing feature clusters on the
two classes of words improves performance noticeably while the addition of the feature dissimilarity
edge does not yield much benefit. When it helps, it is only for the smallest training set sizes.
This simple set of experiments demonstrated the applicability of feature classes for inducing groups
of features to have similar means, and that the words extracted from SentiWordNet were relatively
helpful in determining the sentiment of a review. However, the number of features used in these
experiments was too small to yield reasonable performance in an applied setting. Thus we extended
the feature sets to include all unigram and bigram word-features which occurred in ten or more
reviews. The total number of reviews and size of the feature sets is given in table 1.
The method used to construct the feature graph in the 20 newsgroups experiments was not well suited for sentiment prediction, since plain feature co-occurrence statistics tended to find groups of words that showed up in reviews for products of the same type, e.g., digital cameras or laptops. While such similarities are useful in predicting what type of product is being reviewed, they are of little help in determining whether a review is favorable or unfavorable. Thus, to align features along dimensions of "sentiment," we computed the correlations of all features with the SentiWordNet features, so that each word was represented as a 200-dimensional vector of correlations with these highly charged sentiment words. Distances between these correlation vectors were computed in order to determine which features should be linked. We next computed each feature's 100 nearest neighbors. Two features were linked if both were in the other's set of nearest 100 neighbors. For simplicity, the edge weights were set to one and the graph weight matrix was then row-normalized in order to construct the matrix P. The number of edges in each feature graph is given in table 1.

Dataset        Instances   Features   Edges
books          13,161      29,404     470,034
DVDs           13,005      31,475     419,178
electronics     8,922      15,104     343,890
kitchen         7,760      11,658     305,926

Table 1: Sentiment Data Statistics
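A sketch of this mutual-nearest-neighbor linking rule (names assumed; the brute-force distance computation is only practical for small feature sets):

import numpy as np

def sentiment_graph(C, k=100):
    # C: features x 200 matrix of correlations with the SentiWordNet words.
    d = C.shape[0]
    dist = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :k]          # each feature's k-NN
    knn = np.zeros((d, d), dtype=bool)
    for i in range(d):
        knn[i, nbrs[i]] = True
    W = (knn & knn.T).astype(float)                 # link iff mutual neighbors
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    return P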
The "kitchen" dataset was used as a development dataset in order to arrive at the method for constructing the feature graph and for choosing the hyperparameter values: $\alpha = 9.9$ and $\beta = 0.1$. Figure 3 gives accuracy results for all four sentiment datasets at training sets of 50 to 1000 instances. The results show that linking features which are similarly correlated with sentiment-loaded words yields improvements on every dataset and at every training set size.
5 Related Work
Most similar to the work presented here is that of the fused lasso (Tibshirani et al. [15]), which can be interpreted as using the graph Laplacian regularizer but with an L1 norm instead of L2 on the residuals of weight differences, $\sum_i \sum_{j \sim i} |w_i - w_j|$, with all edge weights set to one. As the authors
discuss, an L1 penalty prefers that weights of linked features be exactly equal so that the residual
vector of weight differences is sparse. L1 is appropriate if the true weights are believed to be exactly
equal, but in many settings, features are near copies of one another whose weights should be similar
rather than identical. Thus in these settings, penalizing squared differences rather than absolute
ones is more appropriate. Optimizing L1 feature weight differences also leads to a much harder
optimization problem, making it less applicable in large scale learning. Li and Li [13] regularize
feature weights using the normalized graph Laplacian in their work on biomedical prediction tasks.
As shown, this criterion does not work as well on the text prediction problems considered here.
Krupka and Tishby [8] proposed a method for inducing feature-weight covariance matrices using distances in a "meta-feature" space. Under their framework, two features positively covary if they are close in this space and approach independence as they grow distant. The authors represent each feature $i$ as a vector of meta-features, $u_i$, and compute the entries of the feature weight covariance matrix as $C_{ij} = \exp\big(-\frac{1}{2\sigma^2}\|u_i - u_j\|^2\big)$. Obviously, the choice of which is more appropriate, a feature graph or metric space, is application dependent. However, it is less obvious how to incorporate feature dissimilarities in a metric space. A second difference is that our work defines the regularizer in terms of $C^{-1} \equiv (I-P)^\top (I-P)$ rather than $C$ itself. While $C^{-1}$ is constructed to be sparse with a nearest neighbors graph, the induced covariance matrix $C$ need not be sparse. Thus, working with $C^{-1}$ allows for constructing dense covariance matrices without having to explicitly store them. Finally,
Raina et al. [6] learn a feature-weight covariance matrix via auxiliary task learning. Interestingly, the
entries of this covariance matrix are learned jointly with a regression model for predicting feature
weight covariances as a function of meta-features. However, since their approach explicitly predicts
each entry of the covariance matrix, they are restricted to learning smaller models, consisting of
hundreds rather than tens of thousands of features.
6 Conclusion
We have presented regularized learning with networks of features, a simple and flexible framework
for incorporating expectations about feature weight similarities in learning. Feature similarities
are modeled using a feature graph and the weight of each feature is preferred to be close to the
average of its neighbors. On the task of document classification, feature network regularization
is superior to several related criteria, as well as to a manifold learning approach where the graph
models similarities between instances rather than between features. Extensions for modeling feature
classes, as well as feature dissimilarities, yielded benefits on the problem of sentiment prediction.
References
[1] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer New York, 2001.
[2] C. Fellbaum. WordNet: an electronic lexical database. MIT Press, 1998.
[3] H. Ogata, S. Goto, K. Sato, W. Fujibuchi, H. Bono, and M. Kanehisa. KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Research, 27(1):29-34, 1999.
[4] I. Xenarios, D.W. Rice, L. Salwinski, M.K. Baron, E.M. Marcotte, and D. Eisenberg. DIP: The Database of Interacting Proteins. Nucleic Acids Research, 28(1):289-291, 2000.
[5] R.K. Ando and T. Zhang. A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. JMLR, 6:1817-1853, 2005.
[6] R. Raina, A.Y. Ng, and D. Koller. Constructing informative priors using transfer learning. In ICML, 2006.
[7] S.T. Roweis and L.K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 290(5500):2323-2326, 2000.
[8] E. Krupka and N. Tishby. Incorporating Prior Knowledge on Features into Learning. In AISTATS, 2007.
[9] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399-2434, 2006.
[10] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2004.
[11] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In ACL, 2007.
[12] A.B. Goldberg, X. Zhu, and S. Wright. Dissimilarity in Graph-Based Semi-Supervised Classification. In AISTATS, 2007.
[13] C. Li and H. Li. Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics, 24(9):1175-1182, 2008.
[14] A. Esuli and F. Sebastiani. SentiWordNet: A Publicly Available Lexical Resource For Opinion Mining. In LREC, 2006.
[15] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and Smoothness via the Fused Lasso. Journal of the Royal Statistical Society Series B, 67(1):91-108, 2005.
Supervised Bipartite Graph Inference
Yoshihiro Yamanishi
Mines ParisTech CBIO
Institut Curie, INSERM U900,
35 rue Saint-Honore, Fontainebleau, F-77300 France
[email protected]
Abstract
We formulate the problem of bipartite graph inference as a supervised learning
problem, and propose a new method to solve it from the viewpoint of distance
metric learning. The method involves the learning of two mappings of the heterogeneous objects to a unified Euclidean space representing the network topology of
the bipartite graph, where the graph is easy to infer. The algorithm can be formulated as an optimization problem in a reproducing kernel Hilbert space. We report
encouraging results on the problem of compound-protein interaction network reconstruction from chemical structure data and genomic sequence data.
1 Introduction
The problem of bipartite graph inference is to predict the presence or absence of edges between
heterogeneous objects known to form the vertices of the bipartite graph, based on the observation
about the heterogeneous objects. This problem is becoming a challenging issue in bioinformatics
and computational biology, because there are many biological networks which are represented by a
bipartite graph structure with vertices being heterogeneous molecules and edges being interactions
between them. Examples include compound-protein interaction networks consisting of interactions between ligand compounds and target proteins, metabolic networks consisting of interactions between substrates and enzymes, and host-pathogen protein-protein networks consisting of interactions between host proteins and pathogen proteins.
In particular, the prediction of compound-protein interaction networks is a key issue toward genomic
drug discovery, because drug development depends heavily on the detection of interactions between
ligand compounds and target proteins. The human genome sequencing project has made available the sequences of a large number of human proteins, while the high-throughput screening of
large-scale chemical compound libraries is enabling us to explore the chemical space of possible
compounds [1]. However, our knowledge about such compound-protein interactions is very limited. It is therefore important to detect unknown compound-protein interactions in order to identify potentially useful compounds such as imaging probes and drug leads from huge amounts of chemical and genomic data.
A major traditional method for predicting compound-protein interactions is docking simulation
[2]. However, docking simulation requires 3D structure information for the target proteins. Most
pharmaceutically useful target proteins are membrane proteins such as ion channels and G protein-coupled receptors (GPCRs). It is still extremely difficult and expensive to determine the 3D structures of membrane proteins, which limits the use of docking. There is therefore a strong incentive to
develop new useful prediction methods based on protein sequences, chemical compound structures,
and the available known compound-protein interaction information simultaneously.
Recently, several supervised methods for inferring a simple graph structure (e.g., protein network,
enzyme network) have been developed in the framework of kernel methods [3, 4, 5]. The corresponding algorithms of the previous methods are based on kernel canonical correlation analysis
[3], distance metric learning [4], and the EM algorithm [5], respectively. However, the previous methods can only predict edges between homogeneous objects such as protein-protein interactions and
enzyme-enzyme relations, so it is not possible to predict edges between heterogeneous objects such
as compound-protein interactions and substrate-enzyme interactions, because their frameworks are
based only on a simple graph structure with homogeneous vertices. In contrast, in this paper we
address the problem of supervised learning of the bipartite graph rather than the simple graph.
In this contribution, we develop a new supervised method for inferring the bipartite graph, borrowing
the idea of distance metric learning used in the framework for inferring the simple graph [4]. The
proposed method involves the learning of two mappings of the heterogeneous objects to a unified
Euclidean space representing the network topology of the bipartite graph, where the graph is easy to
infer. The algorithm can be formulated as an optimization problem in a reproducing kernel Hilbert
space. To our knowledge, there are no statistical methods to predict bipartite graphs from observed
data in a supervised context. In the results, we show the usefulness of the proposed method on the problem of compound-protein interaction network reconstruction from chemical structure data and genomic sequence data.
[Figure 1 omitted. Legend: vertex with attribute 1 in known graph; vertex with attribute 2 in known graph; additional vertex with attribute 1; additional vertex with attribute 2; known edge; predicted edge.]
Figure 1: An illustration of the problem of the supervised bipartite graph inference
2 Formalism of the supervised bipartite graph inference problem
Let us formally define the supervised bipartite graph inference problem. Suppose that we are given an undirected bipartite graph $G = (U + V, E)$, where $U = (u_1, \ldots, u_{n_1})$ and $V = (v_1, \ldots, v_{n_2})$ are sets of heterogeneous vertices and $E \subset (U \times V) \cup (V \times U)$ is a set of edges. Note that the attribute of $U$ is completely different from that of $V$. The problem is, given additional sets of vertices $U' = (u'_1, \ldots, u'_{m_1})$ and $V' = (v'_1, \ldots, v'_{m_2})$, to infer a set of new edges $E' \subset U' \times (V + V') \cup V' \times (U + U') \cup (U + U') \times V' \cup (V + V') \times U'$ involving the additional vertices in $U'$ and $V'$.
Figure 1 shows an illustration of this problem.
The prediction of compound-protein interaction networks is a typical problem which is suitable
in this framework from a practical viewpoint. In this case, U corresponds to a set of compounds
(known ligands), V corresponds to a set of proteins (known targets), and E corresponds to a set of
known compound-protein interactions (known ligand-target interactions). $U'$ corresponds to a set of additional compounds (new ligand candidates), $V'$ corresponds to a set of additional proteins (new target candidates), and $E'$ corresponds to a set of unknown compound-protein interactions (potential ligand-target interactions).
The prediction is performed based on available observations about the vertices. Sets of vertices $U = (u_1, \ldots, u_{n_1})$, $V = (v_1, \ldots, v_{n_2})$, $U' = (u'_1, \ldots, u'_{m_1})$ and $V' = (v'_1, \ldots, v'_{m_2})$ are represented by sets of observed data $X = (x_1, \ldots, x_{n_1})$, $Y = (y_1, \ldots, y_{n_2})$, $X' = (x'_1, \ldots, x'_{m_1})$ and $Y' = (y'_1, \ldots, y'_{m_2})$, respectively. For example, compounds are represented by molecular structures and proteins are represented by amino acid sequences. The question is how to predict unknown compound-protein interactions from compound structures and protein sequences using prior knowledge about known compound-protein interactions. Sets of $U$ and $V$ ($X$ and $Y$) are referred to as training sets, and heterogeneous objects are represented by $u$ and $v$ in the sense of vertices on the bipartite graph or by $x$ and $y$ in the sense of objects in the observed data below.
In order to deal with the data heterogeneity and take advantage of recent works on kernel similarity functions on general data structures [6], we will assume that $X$ is a set endowed with a positive definite kernel $k_u$, that is, a symmetric function $k_u: X^2 \to \mathbb{R}$ satisfying $\sum_{i,j=1}^{n_1} a_i a_j k_u(x_i, x_j) \geq 0$ for any $n_1 \in \mathbb{N}$, $(a_1, a_2, \ldots, a_{n_1}) \in \mathbb{R}^{n_1}$ and $(x_1, x_2, \ldots, x_{n_1}) \in X^{n_1}$. Similarly, we will assume that $Y$ is a set endowed with a positive definite kernel $k_v$, that is, a symmetric function $k_v: Y^2 \to \mathbb{R}$ satisfying $\sum_{i,j=1}^{n_2} a_i a_j k_v(y_i, y_j) \geq 0$ for any $n_2 \in \mathbb{N}$, $(a_1, a_2, \ldots, a_{n_2}) \in \mathbb{R}^{n_2}$ and $(y_1, y_2, \ldots, y_{n_2}) \in Y^{n_2}$.
3 Distance metric learning (DML) for the bipartite graph inference
3.1 Euclidean embedding and distance metric learning (DML)
Suppose that a bipartite graph must be reconstructed from the similarity information about $n_1$ objects $(x_1, \ldots, x_{n_1})$ in $X$ (observed data for $U$) and $n_2$ objects $(y_1, \ldots, y_{n_2})$ in $Y$ (observed data for $V$). One difficulty is that the attribute of observed data differs between $X$ and $Y$ in nature, so it is not possible to evaluate the link between $(x_1, \ldots, x_{n_1})$ and $(y_1, \ldots, y_{n_2})$ from the observed data directly. For example, in the case of compounds and proteins, each $x$ has a chemical graph structure and each $y$ has a sequence structure, so the data structures completely differ between $x$ and $y$. Therefore, we make an assumption that the $n_1$ objects $(x_1, \ldots, x_{n_1})$ and $n_2$ objects $(y_1, \ldots, y_{n_2})$ are implicitly embedded in a unified Euclidean space $\mathbb{R}^d$, and a graph is inferred on those heterogeneous points by the nearest neighbor approach, i.e., putting an edge between heterogeneous points that are close to each other.
We propose the following two-step procedure for the supervised bipartite graph inference:
1. embed the heterogeneous objects into a unified Euclidean space representing the network topology of the bipartite graph, where connecting heterogeneous vertices are close to each other, through mappings $f: X \to \mathbb{R}^d$ and $g: Y \to \mathbb{R}^d$;
2. apply the mappings $f$ and $g$ to $X'$ and $Y'$ respectively, and predict new edges between the heterogeneous objects if the distance between the points $\{f(x), x \in X \cup X'\}$ and $\{g(y), y \in Y \cup Y'\}$ is smaller than a fixed threshold $\delta$ (see the sketch below).
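A minimal sketch of step 2, assuming the embedded coordinates are already available as row matrices:

import numpy as np

def predict_edges(F, G, delta):
    # F: embedded u-side objects f(x), one row each; G: embedded v-side
    # objects g(y).  An edge is predicted wherever the Euclidean distance
    # falls below the threshold delta.
    dists = np.linalg.norm(F[:, None, :] - G[None, :, :], axis=2)
    return dists <= delta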
While the second step in this procedure is fixed, the first step can be optimized by supervised learning of $f$ and $g$ using the known bipartite graph. To do so, we require the mappings $f$ and $g$ to map adjacent heterogeneous vertices in the known bipartite graph onto nearby positions in a unified Euclidean space $\mathbb{R}^d$, in order to ensure that the known bipartite graph can be recovered to some extent by the nearest neighbor approach.
Given functions $f: X \to \mathbb{R}$ and $g: Y \to \mathbb{R}$, a possible criterion to assess whether connected (resp. disconnected) heterogeneous vertices are mapped onto similar (resp. dissimilar) points in $\mathbb{R}$ is the following:
$$R(f,g) = \frac{\sum_{(u_i,v_j)\in E} (f(x_i) - g(y_j))^2 - \sum_{(u_i,v_j)\notin E} (f(x_i) - g(y_j))^2}{\sum_{(u_i,v_j)\in U \times V} (f(x_i) - g(y_j))^2}. \qquad (1)$$
A small value of $R(f,g)$ ensures that connected heterogeneous vertices tend to be closer than disconnected heterogeneous vertices in the sense of quadratic error.
To represent the connectivity between heterogeneous vertices on the bipartite graph $G = (U+V, E)$, we define a kind of adjacency matrix $A_{uv}$, where element $(A_{uv})_{ij}$ is equal to 1 (resp. 0) if vertices $u_i$ and $v_j$ are connected (resp. disconnected). Note that the size of the matrix $A_{uv}$ is $n_1 \times n_2$. We also define a kind of degree matrix of the heterogeneous vertices as $D_u$ and $D_v$, where diagonal elements $(D_u)_{ii}$ and $(D_v)_{jj}$ are the degrees of vertices $u_i$ and $v_j$ (the numbers of edges involving vertices $u_i$ and $v_j$), respectively. Note that all non-diagonal elements in $D_u$ and $D_v$ are zero, and the sizes of the matrices are $n_1 \times n_1$ and $n_2 \times n_2$, respectively.
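For concreteness, a small sketch (names assumed) building these matrices from a list of known edges:

import numpy as np

def bipartite_matrices(edges, n1, n2):
    # (A_uv)_ij = 1 iff (u_i, v_j) is a known edge; D_u and D_v carry the
    # vertex degrees on their diagonals.
    A = np.zeros((n1, n2))
    for i, j in edges:
        A[i, j] = 1.0
    Du = np.diag(A.sum(axis=1))
    Dv = np.diag(A.sum(axis=0))
    return A, Du, Dv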
Let us denote by $f_U = (f(x_1), \ldots, f(x_{n_1}))^\top \in \mathbb{R}^{n_1}$ and $g_V = (g(y_1), \ldots, g(y_{n_2}))^\top \in \mathbb{R}^{n_2}$ the values taken by $f$ and $g$ on the training set. If we restrict $f_U$ and $g_V$ to have zero means, i.e. $\sum_{i=1}^{n_1} f(x_i) = 0$ and $\sum_{i=1}^{n_2} g(y_i) = 0$, then the criterion (1) can be rewritten as follows:
$$R(f,g) = 4\, \frac{\begin{pmatrix} f_U \\ g_V \end{pmatrix}^{\!\top} \begin{pmatrix} D_u & -A_{uv} \\ -A_{uv}^\top & D_v \end{pmatrix} \begin{pmatrix} f_U \\ g_V \end{pmatrix}}{\begin{pmatrix} f_U \\ g_V \end{pmatrix}^{\!\top} \begin{pmatrix} f_U \\ g_V \end{pmatrix}} - 2 \qquad (2)$$
To avoid the over-fitting problem and obtain meaningful solutions, we propose to regularize the criterion (1) by a smoothness functional on $f$ and $g$, based on a classical approach in statistical learning [7, 8]. We assume that $f$ and $g$ belong to the reproducing kernel Hilbert spaces (r.k.h.s.) $H_U$ and $H_V$ defined by the kernels $k_u$ on $X$ and $k_v$ on $Y$, and use the norms of $f$ and $g$ as regularization operators. Let us denote by $||f||$ and $||g||$ the norms of $f$ and $g$ in $H_U$ and $H_V$. Then, the regularized criterion to be minimized becomes:
$$R(f,g) = \frac{\begin{pmatrix} f_U \\ g_V \end{pmatrix}^{\!\top} \begin{pmatrix} D_u & -A_{uv} \\ -A_{uv}^\top & D_v \end{pmatrix} \begin{pmatrix} f_U \\ g_V \end{pmatrix} + \lambda_1 ||f||^2 + \lambda_2 ||g||^2}{\begin{pmatrix} f_U \\ g_V \end{pmatrix}^{\!\top} \begin{pmatrix} f_U \\ g_V \end{pmatrix}}, \qquad (3)$$
where $\lambda_1$ and $\lambda_2$ are regularization parameters which control the trade-off between minimizing the original criterion (1) and ensuring that the solution has a small norm in the r.k.h.s.
The criterion is defined up to a scaling of the functions, and the solution is therefore a direction in the r.k.h.s. Here we set additional constraints. In this case we impose the norm $||f|| = ||g|| = 1$, which corresponds to an orthogonal projection onto the direction selected in the r.k.h.s. Note that the criterion can be used for extracting a one-dimensional feature of the objects. In order to obtain a $d$-dimensional feature representation of the objects, we propose to iterate the minimization of the regularized criterion (3) under orthogonality constraints in the r.k.h.s., that is, we recursively define the $p$-th features $f_p$ and $g_p$ for $p = 1, \ldots, d$ as follows:
$$(f_p, g_p) = \arg\min \frac{\begin{pmatrix} f_U \\ g_V \end{pmatrix}^{\!\top} \begin{pmatrix} D_u & -A_{uv} \\ -A_{uv}^\top & D_v \end{pmatrix} \begin{pmatrix} f_U \\ g_V \end{pmatrix} + \lambda_1 ||f||^2 + \lambda_2 ||g||^2}{\begin{pmatrix} f_U \\ g_V \end{pmatrix}^{\!\top} \begin{pmatrix} f_U \\ g_V \end{pmatrix}} \qquad (4)$$
under the orthogonality constraints $f \perp f_1, \ldots, f_{p-1}$ and $g \perp g_1, \ldots, g_{p-1}$.
In the prediction process, we map any new objects $x' \in X'$ and $y' \in Y'$ by the mappings $f$ and $g$ respectively, and predict new edges between the heterogeneous objects if the distance between the points $\{f(x), x \in X \cup X'\}$ and $\{g(y), y \in Y \cup Y'\}$ is smaller than a fixed threshold $\delta$.
3.2 Algorithm
Let $k_u$ and $k_v$ be the kernels on the sets $X$ and $Y$, where the kernels are both centered in $H_U$ and $H_V$. According to the representer theorem [9] in the r.k.h.s., for any $p = 1, \ldots, d$, the solution to equation (4) has the following expansions:
$$f_p(x) = \sum_{j=1}^{n_1} \alpha_{p,j}\, k_u(x_j, x), \qquad g_p(y) = \sum_{j=1}^{n_2} \beta_{p,j}\, k_v(y_j, y), \qquad (5)$$
for some vectors $\alpha_p = (\alpha_{p,1}, \ldots, \alpha_{p,n_1})^\top \in \mathbb{R}^{n_1}$ and $\beta_p = (\beta_{p,1}, \ldots, \beta_{p,n_2})^\top \in \mathbb{R}^{n_2}$.
Let $K_u$ and $K_v$ be the Gram matrices of the kernels $k_u$ and $k_v$ such that $(K_u)_{ij} = k_u(x_i, x_j)$, $i,j = 1, \ldots, n_1$ and $(K_v)_{ij} = k_v(y_i, y_j)$, $i,j = 1, \ldots, n_2$. The corresponding feature vectors $f_{p,U}$ and $g_{p,V}$ can be written as $f_{p,U} = K_u \alpha_p$ and $g_{p,V} = K_v \beta_p$, respectively. The squared norms of features $f$ and $g$ in $H_U$ and $H_V$ are equal to $||f||^2 = \alpha^\top K_u \alpha$ and $||g||^2 = \beta^\top K_v \beta$, so the normalization constraints for $f$ and $g$ can be written as $\alpha^\top K_u \alpha = \beta^\top K_v \beta = 1$. The orthogonality constraints $f_p \perp f_q$ and $g_p \perp g_q$ ($p \neq q$) can be written as $\alpha_p^\top K_u \alpha_q = 0$ and $\beta_p^\top K_v \beta_q = 0$.
Using the above representations, the minimization problem of $R(f,g)$ is equivalent to finding $\alpha$ and $\beta$ which minimize
$$R(f,g) = \frac{\begin{pmatrix} \alpha \\ \beta \end{pmatrix}^{\!\top} \begin{pmatrix} K_u D_u K_u & -K_u A_{uv} K_v \\ -K_v A_{uv}^\top K_u & K_v D_v K_v \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} + \lambda_1 \alpha^\top K_u \alpha + \lambda_2 \beta^\top K_v \beta}{\begin{pmatrix} \alpha \\ \beta \end{pmatrix}^{\!\top} \begin{pmatrix} K_u K_u & 0 \\ 0 & K_v K_v \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}}, \qquad (6)$$
under the following orthogonality constraints:
$$\alpha^\top K_u \alpha_1 = \cdots = \alpha^\top K_u \alpha_{p-1} = 0, \qquad \beta^\top K_v \beta_1 = \cdots = \beta^\top K_v \beta_{p-1} = 0.$$
Taking the differential of equation (6) with respect to $\alpha$ and $\beta$ and setting it to zero, the solution for the first vectors $\alpha_1$ and $\beta_1$ can be obtained as the eigenvectors associated with the smallest (non-negative) eigenvalue in the following generalized eigenvalue problem:
$$\begin{pmatrix} K_u D_u K_u + \lambda_1 K_u & -K_u A_{uv} K_v \\ -K_v A_{uv}^\top K_u & K_v D_v K_v + \lambda_2 K_v \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \rho \begin{pmatrix} K_u K_u & 0 \\ 0 & K_v K_v \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \qquad (7)$$
Sequentially, the solutions of vectors $\alpha_1, \ldots, \alpha_d$ and $\beta_1, \ldots, \beta_d$ can be obtained as the eigenvectors associated with the $d$ smallest (non-negative) eigenvalues in the above generalized eigenvalue problem.
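A sketch of solving (7) numerically, assuming scipy is available; the small ridge added to the right-hand side is an implementation detail we introduce for numerical positive definiteness, not part of the paper:

import numpy as np
from scipy.linalg import eigh

def dml_features(Ku, Kv, A, lam1, lam2, d, eps=1e-8):
    Du = np.diag(A.sum(axis=1))
    Dv = np.diag(A.sum(axis=0))
    n1, n2 = Ku.shape[0], Kv.shape[0]
    L = np.block([[Ku @ Du @ Ku + lam1 * Ku, -Ku @ A @ Kv],
                  [-Kv @ A.T @ Ku,           Kv @ Dv @ Kv + lam2 * Kv]])
    R = np.block([[Ku @ Ku, np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), Kv @ Kv]]) + eps * np.eye(n1 + n2)
    vals, vecs = eigh(L, R)                     # ascending eigenvalues
    idx = np.where(vals >= 0)[0][:d]            # d smallest non-negative ones
    return vecs[:n1, idx], vecs[n1:, idx]       # alphas, betas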
4 Relationship with other methods
The process of embedding heterogeneous objects into the same space is similar to correspondence analysis (CA) [10] and Co-Occurrence Data Embedding (CODE) [11], which are unsupervised methods to embed the rows and columns of a contingency table (the adjacency matrix $A_{uv}$ in this study) into a low-dimensional Euclidean space. However, critical differences with our proposed method are as follows: i) the above methods cannot use observed data ($X$ and $Y$ in this study) about heterogeneous nodes for prediction, because the algorithms are based only on co-occurrence information ($A_{uv}$ in this study), and ii) we need to define a new representation of not only the objects in the training set but also additional objects outside of the training set. Therefore, it is not possible to directly apply the above methods to the bipartite graph inference problem.
Recall that the goal of the ordinary CA is to find embedding functions $\phi: U \to \mathbb{R}$ and $\psi: V \to \mathbb{R}$ which maximize the following correlation coefficient:
$$\mathrm{corr}(\phi, \psi) = \frac{\sum_{i,j} I\{(u_i, v_j) \in E\}\, \phi(u_i)\, \psi(v_j)}{\sqrt{\sum_i d_{u_i} \phi(u_i)^2}\, \sqrt{\sum_j d_{v_j} \psi(v_j)^2}}, \qquad (8)$$
where $I\{\cdot\}$ is an indicator function which returns 1 if the argument is true and 0 otherwise, $d_{u_i}$ (resp. $d_{v_j}$) is the degree of node $u_i$ (resp. $v_j$), and $\sum_i \phi(u_i) = 0$ (resp. $\sum_j \psi(v_j) = 0$) is assumed [10].
Here we attempt to consider an extension of the CA using the idea of kernel methods so that it can
work in the context of the bipartite graph inference problem. The method is referred to as kernel
correspondence analysis (KCA) below.
To formulate the KCA, we propose to replace the embedding functions $\phi: U \to \mathbb{R}$ and $\psi: V \to \mathbb{R}$ by functions $f: X \to \mathbb{R}$ and $g: Y \to \mathbb{R}$, where $f$ and $g$ belong to the r.k.h.s. $H_U$ and $H_V$ defined by the kernels $k_u$ on $X$ and $k_v$ on $Y$. Then, we consider maximizing the following regularized correlation coefficient:
$$\mathrm{corr}(f, g) = \frac{\sum_{i,j} I\{(u_i, v_j) \in E\}\, f(x_i)\, g(y_j)}{\sqrt{\sum_i d_{u_i} f(x_i)^2 + \lambda_1 ||f||^2}\, \sqrt{\sum_j d_{v_j} g(y_j)^2 + \lambda_2 ||g||^2}}, \qquad (9)$$
where $\lambda_1$ and $\lambda_2$ are regularization parameters which control the trade-off between maximizing the original correlation coefficient between two features and ensuring that the solution has a small norm in the r.k.h.s. In order to obtain a $d$-dimensional feature representation and deal with the scale issue, we propose to iterate the maximization of the regularized correlation coefficient (9) under orthogonality constraints in the r.k.h.s., that is, we recursively define the $p$-th features $f_p$ and $g_p$ for $p = 1, \ldots, d$ as $(f_p, g_p) = \arg\max \mathrm{corr}(f, g)$ under the orthogonality constraints $f \perp f_1, \ldots, f_{p-1}$ and $g \perp g_1, \ldots, g_{p-1}$ and the normalization constraints $||f|| = ||g|| = 1$.
Using the function expansions in equation (5) and the related matrix representations defined in the previous section, the maximization problem of the regularized correlation coefficient in equation (9) is equivalent to finding $\alpha$ and $\beta$ which maximize
$$\mathrm{corr}(f, g) = \frac{\alpha^\top K_u A_{uv} K_v \beta}{\sqrt{\alpha^\top K_u D_u K_u \alpha + \lambda_1 \alpha^\top K_u \alpha}\, \sqrt{\beta^\top K_v D_v K_v \beta + \lambda_2 \beta^\top K_v \beta}}. \qquad (10)$$
Taking the differential of equation (10) with respect to $\alpha$ and $\beta$ and setting it to zero, the solution for the first vectors $\alpha_1$ and $\beta_1$ can be obtained as the eigenvectors associated with the largest eigenvalue in the following generalized eigenvalue problem:
$$\begin{pmatrix} 0 & K_u A_{uv} K_v \\ K_v A_{uv}^\top K_u & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \rho \begin{pmatrix} K_u D_u K_u + \lambda_1 K_u & 0 \\ 0 & K_v D_v K_v + \lambda_2 K_v \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}. \qquad (11)$$
Sequentially, the solutions of vectors $\alpha_1, \ldots, \alpha_d$ and $\beta_1, \ldots, \beta_d$ can be obtained as the eigenvectors associated with the $d$ largest eigenvalues in the above generalized eigenvalue problem.
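The analogous sketch for the KCA problem (11), with the same numerical caveats as the DML sketch above:

import numpy as np
from scipy.linalg import eigh

def kca_features(Ku, Kv, A, lam1, lam2, d, eps=1e-8):
    Du = np.diag(A.sum(axis=1))
    Dv = np.diag(A.sum(axis=0))
    n1, n2 = Ku.shape[0], Kv.shape[0]
    L = np.block([[np.zeros((n1, n1)), Ku @ A @ Kv],
                  [Kv @ A.T @ Ku, np.zeros((n2, n2))]])
    R = np.block([[Ku @ Du @ Ku + lam1 * Ku, np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), Kv @ Dv @ Kv + lam2 * Kv]])
    vals, vecs = eigh(L, R + eps * np.eye(n1 + n2))
    idx = np.argsort(vals)[::-1][:d]            # d largest eigenvalues
    return vecs[:n1, idx], vecs[n1:, idx]       # alphas, betas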
The final form of KCA is similar to that of kernel canonical correlation analysis (KCCA) [12, 13], so KCA can be regarded as a variant of KCCA. However, the critical differences between KCA and KCCA are as follows: i) the objects are the same across the two data sources in KCCA, while the objects differ across the two data sources in KCA, and ii) KCCA cannot deal with co-occurrence information about the objects. In the experiments below, we are interested in the performance comparison between the distance learning in DML and the correlation maximization in KCA. A similar extension might be possible for CODE as well, but it is beyond the scope of this paper.
5 Experiment
5.1 Data
In this study we focus on compound-protein interaction networks made by four pharmaceutically
useful protein classes: enzymes, ion channels, G protein-coupled receptors (GPCRs), and nuclear
receptors. The information about compound-protein interactions was obtained from the KEGG BRITE [14], SuperTarget [15] and DrugBank databases [16]. The number of known interactions
involving enzymes, ion channels, GPCRs, and nuclear receptors is 5449, 3020, 1124, and 199, respectively. The number of proteins involving the interactions is 1062, 242, 134, and 38, respectively,
and the number of compounds involving the interactions is 1123, 475, 417, and 115, respectively.
The compound set includes not only drugs but also experimentally confirmed ligand compounds.
These data are regarded as gold standard sets to evaluate the prediction performance below.
Chemical structures of the compounds and amino acid sequences of the human proteins were obtained from the KEGG database [14]. We computed the kernel similarity value of chemical structures between compounds using the SIMCOMP algorithm [17], where the kernel similarity value
between two compounds is computed by Tanimoto coefficient defined as the ratio of common substructures between two compounds based on a graph alignment algorithm. We computed the sequence similarities between the proteins using Smith-Waterman scores based on the local alignment
between two amino acid sequences [18]. In this study we used the above similarity measures as
kernel functions, but the Smith-Waterman scores are not always positive definite, so we added an appropriate identity matrix such that the corresponding kernel Gram matrix is positive definite, which is related to [19]. All the kernel matrices are normalized such that all diagonals are ones.
5.2 Performance evaluation
As a baseline method, we used the nearest neighbor (NN) method, because this idea has been used in
traditional molecular screening in many public databases. Given a new ligand candidate compound,
we find a known ligand compound (in the training set) sharing the highest structure similarity with
the new compound, and predict the new compound to interact with proteins known to interact with
the nearest ligand compound. Likewise, given a new target candidate protein, we find a known target
protein (in the training set) sharing the highest sequence similarity with the new protein, and predict
the new protein to interact with ligand compounds known to interact with the nearest target protein.
Newly predicted compound-protein interaction pairs are assigned prediction scores with the highest
structure or sequence similarity values involving new compounds or new proteins in order to draw
the ROC curve below.
We tested the three different methods: NN, KCA, and DML on their abilities to reconstruct the four
compound-protein interaction networks. We performed the following 5-fold cross-validation procedure: the gold standard set was split into 5 subsets of roughly equal size by compounds and proteins,
each subset was then taken in turn as a test set, and the training was performed on the remaining 4 sets.
Table 1: AUC (ROC scores) for each interaction class, where "train c.", "train p.", "test c.", and "test p." indicate training compounds, training proteins, test compounds, and test proteins, respectively. Columns: nearest neighbor (NN), kernel correspondence analysis (KCA), and distance metric learning (DML).

Data               Prediction class           NN              KCA             DML
Enzyme             i) test c. vs train p.     0.655 ± 0.011   0.741 ± 0.011   0.843 ± 0.006
                   ii) train c. vs test p.    0.758 ± 0.008   0.839 ± 0.009   0.878 ± 0.003
                   iii) test c. vs test p.    0.500 ± 0.000   0.692 ± 0.008   0.782 ± 0.013
                   iv) all c. vs all p.       0.684 ± 0.006   0.778 ± 0.008   0.852 ± 0.020
Ion channel        i) test c. vs train p.     0.712 ± 0.004   0.768 ± 0.008   0.800 ± 0.004
                   ii) train c. vs test p.    0.896 ± 0.008   0.927 ± 0.004   0.945 ± 0.002
                   iii) test c. vs test p.    0.500 ± 0.000   0.748 ± 0.004   0.771 ± 0.008
                   iv) all c. vs all p.       0.770 ± 0.004   0.838 ± 0.005   0.864 ± 0.002
GPCR               i) test c. vs train p.     0.714 ± 0.005   0.848 ± 0.002   0.882 ± 0.005
                   ii) train c. vs test p.    0.781 ± 0.026   0.895 ± 0.025   0.936 ± 0.004
                   iii) test c. vs test p.    0.500 ± 0.000   0.823 ± 0.038   0.864 ± 0.013
                   iv) all c. vs all p.       0.720 ± 0.013   0.866 ± 0.015   0.904 ± 0.003
Nuclear receptor   i) test c. vs train p.     0.715 ± 0.009   0.808 ± 0.018   0.832 ± 0.013
                   ii) train c. vs test p.    0.683 ± 0.010   0.784 ± 0.012   0.812 ± 0.036
                   iii) test c. vs test p.    0.500 ± 0.000   0.670 ± 0.053   0.747 ± 0.049
                   iv) all c. vs all p.       0.675 ± 0.004   0.784 ± 0.011   0.815 ± 0.024
We draw a receiver operating characteristic (ROC) curve, the plot of true positives as a function of false positives based on various thresholds $\delta$, where true positives are correctly predicted interactions and false positives are predicted interactions that are not present in the gold standard set. The performance was evaluated by the AUC (area under the ROC curve) score. The regularization parameter $\lambda$ and the number of features $d$ are optimized by applying internal cross-validation within the training set with the AUC score as the target criterion in the case of KCA and DML. To obtain robust results, we repeated the above cross-validation experiment five times, and computed the average and standard deviation of the resulting AUC scores.
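A sketch of this evaluation, assuming candidate pairs are scored by negated embedding distance so that sweeping the threshold δ traces the ROC curve:

import numpy as np
from sklearn.metrics import roc_auc_score

def interaction_auc(dists, gold):
    # dists: predicted compound-protein distances; gold: 0/1 matrix of
    # known interactions.  Closer pairs should rank higher, hence -dists.
    return roc_auc_score(gold.ravel(), (-dists).ravel())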
Table 1 shows the resulting AUC scores for different sets of predictions depending on whether the
compound and/or the protein were in the initial training set or not. Compounds and proteins in the
training set are called training compounds and proteins whereas those not in the training set are
called test compounds and proteins. Four different classes are then possible: i) test compounds vs
training proteins, ii) training compounds vs test proteins, iii) test compounds vs test proteins, and
iv) all the possible predictions (the average of the above three parts). Comparing the three different
methods, DML seems to have the best performance for all four types of compound-protein interaction networks, and outperforms the other methods, KCA and NN, at a significant level. The worst
performance of NN implies that raw compound structure or protein sequence similarities do not
always reflect the tendency of interaction partners in true compound-protein interaction networks.
Among the four prediction classes, predictions where neither the protein nor the compound are in
the training set (iii) are weakest, but even then reliable predictions are possible in DML. Note that
the NN method cannot predict iii) test vs test interaction, because it depends on the template information about known ligand compounds and known target proteins. These results suggest that the
feature space learned by DML successfully represents the network topology of the bipartite graph
structure of compound-protein networks, and the correlation maximization learning used in KCA is
not enough to reflect the network topology of the bipartite graph.
6 Concluding remarks
In this paper, we developed a new supervised method to infer the bipartite graph from the viewpoint
of distance metric learning (DML). The originality of the proposed method lies in the embedding of
heterogeneous objects forming vertices on the bipartite graph into a unified Euclidian space and in
the learning of the distance between heterogeneous objects with different data structures in the unified feature space. We also discussed the relationship with correspondence analysis (CA) and kernel
canonical correlation analysis (KCCA). In the experiment, it is shown that the proposed method
DML outperforms the other methods on the problem of compound-protein interaction network reconstruction from chemical structure and genomic sequence data. From a practical viewpoint, the
proposed method is useful for virtual screening of a huge number of ligand candidate compounds
being generated with various biological assays and target candidate proteins toward genomic drug
discovery. It should be also pointed out that the proposed method can be applied to other network
prediction problems such as metabolic network reconstruction, host-pathogen protein-protein interaction prediction, and customer-product recommendation system as soon as they are represented by
bipartite graphs.
References
[1] C.M. Dobson. Chemical space and biology. Nature, 432:824-828, 2004.
[2] M. Rarey, B. Kramer, T. Lengauer, and G. Klebe. A fast flexible docking method using an incremental construction algorithm. J Mol Biol, 261:470-489, 1996.
[3] Y. Yamanishi, J.P. Vert, and M. Kanehisa. Protein network inference from multiple genomic data: a supervised approach. Bioinformatics, 20 Suppl 1:i363-370, 2004.
[4] J.-P. Vert and Y. Yamanishi. Supervised graph inference. Advances in Neural Information Processing Systems, pages 1433-1440, 2005.
[5] T. Kato, K. Tsuda, and K. Asai. Selective integration of multiple biological data for supervised network inference. Bioinformatics, 21:2488-2495, 2005.
[6] B. Schölkopf, K. Tsuda, and J.P. Vert. Kernel Methods in Computational Biology. MIT Press, 2004.
[7] G. Wahba. Splines Models for Observational Data: Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[8] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7:219-269, 1995.
[9] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[10] M.J. Greenacre. Theory and applications of correspondence analysis. Academic Press, 1984.
[11] A. Globerson, G. Chechik, F. Pereira, and N. Tishby. Euclidean embedding of co-occurrence data. Advances in Neural Information Processing Systems, pages 497-504, 2005.
[12] S. Akaho. A kernel method for canonical correlation analysis. International Meeting of the Psychometric Society (IMPS2001), 2001.
[13] F.R. Bach and M.I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[14] M. Kanehisa, S. Goto, M. Hattori, K.F. Aoki-Kinoshita, M. Itoh, S. Kawashima, T. Katayama, M. Araki, and M. Hirakawa. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res., 34(Database issue):D354-357, Jan 2006.
[15] S. Günther, M. Kuhn, M. Dunkel, et al. SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res, 2007.
[16] D.S. Wishart, C. Knox, A.C. Guo, D. Cheng, S. Shrivastava, D. Tzur, B. Gautam, and M. Hassanali. DrugBank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Res, 2007.
[17] M. Hattori, Y. Okuno, S. Goto, and M. Kanehisa. Development of a chemical structure comparison method for integrated analysis of chemical and genomic information in the metabolic pathways. J. Am. Chem. Soc., 125:11853-11865, 2003.
[18] T.F. Smith and M.S. Waterman. Identification of common molecular subsequences. J Mol Biol, 147:195-197, 1981.
[19] H. Saigo, J.P. Vert, N. Ueda, and T. Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20:1682-1689, 2004.
2,679 | 3,429 | Using Bayesian Dynamical Systems for
Motion Template Libraries
Silvia Chiappa, Jens Kober, Jan Peters
Max-Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
{silvia.chiappa,jens.kober,jan.peters}@tuebingen.mpg.de
Abstract
Motor primitives or motion templates have become an important concept for both
modeling human motor control as well as generating robot behaviors using imitation learning. Recent impressive results range from humanoid robot movement
generation to timing models of human motions. The automatic generation of skill
libraries containing multiple motion templates is an important step in robot learning. Such a skill learning system needs to cluster similar movements together and
represent each resulting motion template as a generative model which is subsequently used for the execution of the behavior by a robot system. In this paper,
we show how human trajectories captured as multi-dimensional time-series can be
clustered using Bayesian mixtures of linear Gaussian state-space models based on
the similarity of their dynamics. The appropriate number of templates is automatically determined by enforcing a parsimonious parametrization. As the resulting
model is intractable, we introduce a novel approximation method based on variational Bayes, which is especially designed to enable the use of efficient inference
algorithms. On recorded human Balero movements, this method is not only capable of finding reasonable motion templates but also yields a generative model
which works well in the execution of this complex task on a simulated anthropomorphic SARCOS arm.
1 Introduction
Humans demonstrate a variety and versatility of movements far beyond the reach of current anthropomorphic robots. It is widely believed that human motor control largely relies on a set of "mental
templates" [1], better known as motor primitives or motion templates. This concept has gained increasing attention both in the human motor control literature [1, 2] as well as in robot imitation
learning [3, 4]. The recent suggestion of Ijspeert et al. [3] to use dynamical systems as motor
primitives has allowed this approach to scale in the domain of humanoid robot imitation learning
and has yielded a variety of interesting applications as well as follow-up publications. However, up
to now, the focus of motion template learning has largely been on single template acquisition and
self-improvement. Future motor skill learning systems on the other hand need to be able to observe
several different behaviors from human presenters and compile libraries of motion templates directly
from these examples with as little predetermined structures as possible.
An important part of such a motor skill learning system is the clustering of many presented movements into different motion templates. Human trajectories are recorded as multi-dimensional time-series of joint angles as well as joint velocities using either a marker-based tracking setup (e.g.,
a VICON™ setup), a sensing suit (e.g., a SARCOS SenSuit) or a haptic interface (e.g., an anthropomorphic master arm). Inspired by Ijspeert et al. [3], we intend to use dynamical systems
as generative models of the presented trajectories, i.e., as motion templates. Our goal is to cluster
these multi-dimensional time-series automatically into a small number of motion templates without
pre-labeling of the trajectories or assuming an a priori number of templates. Thus, the system has
to discover the underlying motion templates, determine the number of templates as well as learn the
underlying skill sufficiently well for robot application.
In principle, one could use a non-generative clustering approach (e.g., a type of K-means) with a
method for selecting an appropriate number of clusters and, subsequently, fit a generative model to
each cluster. Here we prefer to take a different approach in which the clustering and learning of the
underlying time-series dynamics are performed at the same time. This way we aim at ensuring that
each obtained cluster can be modeled well by its representative generative model.
To date the majority of the work on time-series clustering using generative models has focused on
static mixture models. Clustering long or high-dimensional time-series is hard when approached
with static models, such that collapsing the trajectories to a few relevant features is often required.
This problem would be severe for a high-dimensional motor learning system where the data needs
to be represented at high sampling rates in order to ensure the capturing of all relevant details for
motor skill learning. In addition, it is difficult to ensure smoothness when the time-series display
high variability and, therefore, to obtain accurate generative models with static approaches.
A natural alternative is to use mixtures of temporal models which explicitly model the dynamics of
the time-series. In this paper, we use Mixtures of Linear Gaussian State-Space Models (LGSSMs).
LGSSMs are probabilistic temporal models which, despite their computational simplicity, can represent many natural dynamical processes [5]. As we will see later in this paper, LGSSMs are powerful
enough to model our time-series sufficiently accurately.
For determining the number of clusters, most probabilistic approaches in the past used to train a separate model for each possible cluster configuration, and then select the one which would optimize
the trade-off between accuracy and complexity, as measured for example by the Bayesian Information Criterion [6, 7]. The drawback of these approaches is that training many separate models can
lead to a large computational overhead, such that heuristics are often needed to restrict the number
of possible cluster configurations [7].
A less computationally expensive alternative is offered by recent Bayesian approaches where the
model parameters are treated as random variables and integrated out yielding the marginal likelihood
of the data. An appropriate prior distribution can be used to enforce a sparse representation, i.e., to
select the smallest set of parameters that explains the data well by making the remaining parameters
inactive. As a result, the structure selection can be achieved within the model, without the need to
train and compare several separate models.
As a Bayesian treatment of the Mixtures of Linear Gaussian State-Space Models is intractable, we
introduce a deterministic approximation based on variational Bayes. Importantly, our approximation
is especially designed to enable the use of standard LGSSM inference methods for the hidden state
variables, which has the advantage of minimizing numerically instabilities.
As a realistically difficult scenario in this first step towards large motor skill libraries, we have
selected the game of dexterity Balero (also known as Ball-In-A-Cup or Kendama, see [8]) as an
evaluation platform. Several substantially different types of movements exist for performing this
task and humans tend to have a large variability in movement execution [9]. From a robotics point
of view, Balero can be considered sufficiently complex as it involves movements in all major seven
degrees of freedom of a human arm as well as an anthropomorphic robot arm. We are able to show
that the presented method gives rise to a reasonable number of clusters representing quite distinct
movements and that the resulting generative models can be used successfully as motion templates
in physically realistic simulations.
In the remainder of the paper, we will proceed as follows. We will first introduce a generative
approach for clustering and modeling multi-dimensional time-series with Bayesian Mixtures of
LGSSMs and describe how this approach can be made tractable using a variational approximation.
We will then show that the resulting model can be used to infer the motion templates underlying a
set of human demonstrations, and give evidence that the generative model representing each motion
template is sufficiently accurate for control in a mechanically plausible simulation of the SARCOS
Master Arm.
2 Bayesian Mixtures of Linear Gaussian State-Space Models
Our goal is to model both human and robot movements in order to build motion template libraries. In
this section, we describe our Bayesian modeling approach and discuss both the underlying assumptions as well as how the structure of the model is selected. As the resulting model is not tractable for
analytical solution, we introduce an approximation method based on variational Bayes.
2.1 Modeling Approach
In our Bayesian approach to Mixtures of Linear Gaussian State-Space Models (LGSSMs), we are
given a set of N time-series^1 v_{1:T}^{1:N} of length T for which we define the following marginal
likelihood
p(v_{1:T}^{1:N} | \hat{\theta}^{1:K}, \alpha) = \sum_{z^{1:N}} \int_{\theta^{1:K}} \int_{\pi} p(v_{1:T}^{1:N} | z^{1:N}, \theta^{1:K}) \, p(\theta^{1:K} | \hat{\theta}^{1:K}) \, p(z^{1:N} | \pi) \, p(\pi | \alpha),
where z^n \in \{1, \dots, K\} indicates which of a set of K LGSSMs generated the sequence v_{1:T}^n. The
parameters of LGSSM k are denoted by \theta^k and have a prior distribution depending on hyperparameters \hat{\theta}^k. The K-dimensional vector \pi includes the prior probabilities of the time-series generation
for each LGSSM and has prior distribution hyperparameter \alpha.
The optimal hyperparameters are estimated by type-II maximum likelihood [10], i.e., by maximizing
the marginal likelihood over \hat{\theta}^{1:K} and \alpha. Clustering can be performed by inferring the LGSSM that
most likely generated the sequence v_{1:T}^n by computing \arg\max_k p(z^n = k | v_{1:T}^{1:N}, \hat{\theta}^{1:K}, \alpha).
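In code, once (approximate) posterior responsibilities are available, this hard assignment is a one-line argmax; the sketch below is our own illustration, with resp standing in for p(z^n = k | ·):

```python
import numpy as np

def cluster_assignments(resp):
    """resp[n, k] approximates p(z^n = k | v, theta_hat, alpha); rows sum to one.
    Returns, for each sequence n, the LGSSM most likely to have generated it."""
    return np.argmax(resp, axis=1)
```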
Modeling p(v_{1:T}^{1:N} | z^{1:N}, \hat{\theta}^{1:K}). As a generative temporal model for each time-series, we employ a Linear Gaussian State-Space Model [5] that assumes that the observations v_{1:T}, with v_t \in \Re^V,
are generated from a latent Markovian linear dynamical system with hidden states h_{1:T}, with
h_t \in \Re^H, according to^2
v_t = B h_t + \eta_t^v, \quad \eta_t^v \sim N(0_V, \Sigma_V),
h_t = A h_{t-1} + \eta_t^h, \quad \eta_t^h \sim N(\mu_t, \Sigma_H).    (1)
Standard LGSSMs assume a zero-mean hidden-state noise (\mu_t \equiv 0_H). In our application the use of
a time-dependent mean \mu_t \neq 0_H leads to a superior modeling accuracy. A probabilistic formulation
of the LGSSM is given by
p(v_{1:T}, h_{1:T} | \theta) = p(v_1 | h_1, \theta) p(h_1 | \theta) \prod_{t=2}^{T} p(v_t | h_t, \theta) p(h_t | h_{t-1}, \theta),
with p(h_t | h_{t-1}, \theta) = N(A h_{t-1} + \mu_t, \Sigma_H), p(h_1 | \theta) = N(\mu_1, \Sigma), p(v_t | h_t, \theta) = N(B h_t, \Sigma_V),
and \theta = \{A, B, \Sigma_H, \Sigma_V, \mu_{1:T}, \Sigma\}. Due to the simple structure of the model, performing inference,
that is to compute quantities such as p(h_t | v_{1:T}, \theta), can be efficiently achieved in O(T) operations.
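For concreteness, the generative process of Equation (1) can be simulated directly; the following is a minimal sketch of our own (not the authors' code), with all parameters assumed given:

```python
import numpy as np

def simulate_lgssm(A, B, Sigma_H, Sigma_V, mu, Sigma_1, T, rng=None):
    """Draw one sequence v_{1:T} from the LGSSM of Equation (1).

    A: (H, H) transition matrix, B: (V, H) emission matrix,
    Sigma_H / Sigma_V: hidden / observation noise covariances,
    mu: (T, H) time-dependent hidden-state noise means mu_t,
    Sigma_1: covariance of the initial hidden state h_1.
    """
    rng = np.random.default_rng() if rng is None else rng
    V = B.shape[0]
    h = rng.multivariate_normal(mu[0], Sigma_1)            # h_1 ~ N(mu_1, Sigma)
    v = np.empty((T, V))
    for t in range(T):
        if t > 0:
            # h_t = A h_{t-1} + eta_t^h, with eta_t^h ~ N(mu_t, Sigma_H)
            h = A @ h + rng.multivariate_normal(mu[t], Sigma_H)
        # v_t = B h_t + eta_t^v, with eta_t^v ~ N(0, Sigma_V)
        v[t] = B @ h + rng.multivariate_normal(np.zeros(V), Sigma_V)
    return v
```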
In the presented Bayesian approach, we define a prior distribution p(\theta | \hat{\theta}) over the parameters \theta,
where \hat{\theta} are the associated hyperparameters. More specifically, we define zero-mean Gaussians on
the elements of A and on the columns of B by^3
p(A | \alpha, \Sigma_H^{-1}) = \prod_{i,j=1}^{H} \sqrt{\frac{\alpha_{ij}}{2\pi [\Sigma_H]_{ii}}} \, e^{-\frac{\alpha_{ij}}{2} [\Sigma_H^{-1}]_{ii} A_{ij}^2}, \qquad p(B | \beta, \Sigma_V^{-1}) = \prod_{j=1}^{H} \frac{\beta_j^{V/2}}{\sqrt{|2\pi \Sigma_V|}} \, e^{-\frac{\beta_j}{2} B_j^T \Sigma_V^{-1} B_j},
where \alpha and \beta are a set of hyperparameters which need to be optimized. We make the assumption
that \Sigma_H^{-1}, \Sigma_V^{-1} and \Sigma^{-1} are diagonal and define Gamma distributions on them. For \mu_1 we define
a zero-mean Gaussian prior, while we formally treat \mu_{2:T} as hyperparameters and determine their
optimal values. These choices are made in order to render our Bayesian treatment feasible and to
obtain a sparse parametrization, as discussed in more detail below.
^1 v_{1:T}^{1:N} is a shorthand for v_1^1, \dots, v_T^1, \dots, v_1^N, \dots, v_T^N.
^2 Here, N(m, S) denotes a Gaussian with mean m and covariance S, and 0_X denotes an X-dimensional
zero vector. The initial latent state h_1 is drawn from N(\mu_1, \Sigma).
^3 [X]_{ij} and X_j denote the ij-th element and the j-th column of matrix X respectively. The dependency of
the priors on \Sigma_H and \Sigma_V is chosen specifically to render a variational implementation feasible.
In the resulting mixture model, we consider a set of K such Bayesian LGSSMs. The joint distribution over all sequences given the indicator variables and hyperparameters is defined as
p(v_{1:T}^{1:N} | z^{1:N}, \hat{\theta}^{1:K}) = \int_{\theta^{1:K}} \left\{ \prod_{n=1}^{N} p(v_{1:T}^n | z^n, \theta^{1:K}) \right\} \prod_{k=1}^{K} p(\theta^k | \hat{\theta}^k),
where p(v_{1:T}^n | z^n = k, \theta^{1:K}) \equiv p(v_{1:T}^n | \theta^k) denotes the probability of time-series v_{1:T}^n given that
parameters \theta^k have been employed to generate it.
Modeling p(z^{1:N} | \alpha). As prior for \pi, we define a symmetric Dirichlet distribution
p(\pi | \alpha) = \frac{\Gamma(\alpha)}{\Gamma(\alpha/K)^K} \prod_{k=1}^{K} \pi_k^{\alpha/K - 1},
where \Gamma(\cdot) is the Gamma function and \alpha denotes a hyperparameter that needs to be optimized. This
distribution is conjugate to the multinomial, which greatly simplifies our Bayesian treatment. To
model the joint indicator variables, we define
p(z^{1:N} | \alpha) = \int_{\pi} \left\{ \prod_{n=1}^{N} p(z^n | \pi) \right\} p(\pi | \alpha), \quad \text{where } p(z^n = k | \pi) \equiv \pi_k.
Such a Bayesian approach favors simple model structures. In particular, the priors on A^k and B^k enforce a sparse parametrization since, during learning, many \alpha_{ij}^k and \beta_j^k get close to infinity whereby
(the posterior distribution of) A_{ij}^k and B_j^k get close to zero (see [11] for an analysis of this pruning effect). This enables us to achieve structure selection within the model. Specifically, this approach ensures that the unnecessary LGSSMs are pruned out from the model during training (for certain k, all
elements of B^k are pruned out such that LGSSM k becomes inactive (p(z^n = k | v_{1:T}^{1:N}, \hat{\theta}^{1:K}, \alpha) = 0
for all n)).
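After training, the surviving structure can be read off directly; the check below is our own illustration of the two equivalent pruning criteria just described (vanishing responsibilities, or an emission matrix driven to zero):

```python
import numpy as np

def active_components(resp, B_means, tol=1e-8):
    """resp: (N, K) array of responsibilities q(z^n = k); B_means: list of K
    posterior mean matrices for B^k. Returns indices of non-pruned LGSSMs."""
    assigned = resp.max(axis=0) > tol                  # some sequence still uses k
    nonzero_B = np.array([np.abs(B).max() > tol for B in B_means])
    return np.where(assigned & nonzero_B)[0]
```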
2.2 Model Intractability and Approximate Solution
The Bayesian treatment of the model is non-trivial as the integration over the parameters \theta^{1:K} and \pi
renders the computation of the required posterior distributions intractable. This problem results from
the coupling in the posterior distributions between the hidden state variables h_{1:T}^{1:N} and the parameters
\theta^{1:K} as well as between the indicators z^{1:N} and \pi, \theta^{1:K}. To deal with this intractability, we use a
deterministic approximation method based on variational Bayes.
Variational Approximation. In our variational approach we introduce a new distribution q and
make the following approximation^4
p(z^{1:N}, h_{1:T}^{1:N}, \theta^{1:K} | v_{1:T}^{1:N}, \hat{\theta}^{1:K}, \alpha) \approx q(h_{1:T}^{1:N} | z^{1:N}) \, q(z^{1:N}) \, q(\theta^{1:K}).    (2)
That is, we approximate the posterior distribution of the hidden variables of the model by one in
which the hidden states are decoupled from the parameters given the indicator variables and in
which the indicators are decoupled from the parameters.
The approximation is achieved with a variational expectation-maximization algorithm which minimizes the KL divergence between the right and left hand sides of Equation (2), or, equivalently,
maximizes a tractable lower bound F(\hat{\theta}^{1:K}, \alpha, q) on the log-likelihood \log p(v_{1:T}^{1:N} | \hat{\theta}^{1:K}, \alpha) with
respect to q for fixed \hat{\theta}^{1:K} and \alpha and vice-versa. Observation v_{1:T}^n is then placed in the most likely
LGSSM by computing \arg\max_k q(z^n = k).
^4 Here, we describe a collapsed approximation over \pi [13]. To simplify the notation, we omit conditioning
on v_{1:T}^{1:N}, \hat{\theta}^{1:K}, \alpha for the q distribution.
Figure 1: This figure shows one of the Balero motion templates found by our clustering method,
i.e., the cluster C2 in Figure 2. Here, a sideways movement with a subsequent catch is performed
and the uppermost row illustrates this movement with a symbolic sketch. The middle row shows an
execution of the movement generated with the LGSSM representing the cluster C2. The lowest row
shows a recorded human movement which was attributed to cluster C2 by our method. Note that
movements generated from LGSSMs representing other clusters differ significantly.
Resulting Updates. While the space does not suffice for a complete derivation, we will briefly
sketch the updates for q. Additional details and the updates for the hyperparameters can be found
in [12]. The updates consist of a parameter update, an indicator variable update and a latent state
update. First, the approximate parameter posterior is given by
q(\theta^k) \propto p(\theta^k | \hat{\theta}^k) \, e^{\sum_{n=1}^{N} q(z^n = k) \langle \log p(v_{1:T}^n, h_{1:T}^n | \theta^k) \rangle_{q(h_{1:T}^n | z^n = k)}},
where \langle \cdot \rangle_q denotes expectation with respect to q. The specific choice for p(\theta^k | \hat{\theta}^k) makes the
computation of this posterior relatively straightforward, since q(\theta^k) is a distribution of the same
type. Second, the approximate posterior over the indicator variables is given by
q(z^n = k) \propto e^{H_q(h_{1:T}^n | z^n = k) + \langle \log p(z^n = k | z^{-n}, \alpha) \rangle_{\prod_{m \neq n} q(z^m)}} \, e^{\langle \log p(v_{1:T}^n, h_{1:T}^n | \theta^k) \rangle_{q(h_{1:T}^n | z^n = k) q(\theta^k)}},
where H_q(x) denotes the entropy of the distribution q(x) and z^{-n} includes all indicator variables
except for z^n. Due to the choice of a Dirichlet prior, the term p(z^n = k | z^{-n}, \alpha) = \int_{\pi} p(z^n = k | \pi) \, p(\pi | z^{-n}, \alpha) can be determined analytically. However, the required average over this term is
computationally expensive, and, thus, we approximate it using a second order expansion [13]. The
third and most challenging update is the one of the hidden states
q(h_{1:T}^n | z^n = k) \propto e^{\langle \log p(v_{1:T}^n, h_{1:T}^n | \theta^k) \rangle_{q(\theta^k)}}.    (3)
Whilst computing this joint density is relatively straightforward, the parameter and indicator variable
updates require the non-trivial estimation of the posterior averages \langle h_t^n \rangle and \langle h_t^n h_{t-1}^n \rangle with respect
to this distribution. Following a similar approach to the one proposed in [14] for the Bayesian
LGSSM, we reformulate the rhs of Equation (3) as proportional to the distribution of an augmented
LGSSM such that standard inference routines for the LGSSM can be used.
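For reference, with a symmetric Dirichlet(\alpha/K) prior the analytic term has the standard collapsed form p(z^n = k | z^{-n}, \alpha) = (N_k^{-n} + \alpha/K)/(N - 1 + \alpha), where N_k^{-n} counts the sequences assigned to component k excluding n. A minimal sketch of our own, using hard counts rather than the expectation under \prod_{m \neq n} q(z^m) that the algorithm actually requires:

```python
import numpy as np

def dirichlet_predictive(z, n, K, alpha):
    """p(z^n = k | z^{-n}, alpha) for a symmetric Dirichlet(alpha/K) prior.
    z: length-N integer array of hard assignments in {0, ..., K-1};
    returns a length-K probability vector."""
    counts = np.bincount(np.delete(z, n), minlength=K)   # N_k^{-n}
    return (counts + alpha / K) / (len(z) - 1 + alpha)
```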
3 Results
In this section we show that the model presented in Section 2 can be used effectively both for
inferring the motion templates underlying a set of human trajectories and for approximating motion
templates with dynamical systems. For doing so, we take the difficult task of Balero, also known
as Ball-In-A-Cup or Kendama, and collect human executions of this task using a motion capture
[Figure 2: nine 3D panels, one per cluster C1-C9, each showing cup trajectories against X, Y, Z axes; see the caption below.]
Figure 2: In this figure, we show nine plots where each plot represents one cluster found by our
method. Each of the five shown trajectories in the respective clusters represents a different recorded
Balero movement. For better visualization, we do not show joint trajectories here but rather the
trajectories of the cup which have an easier physical interpretation and, additionally, reveal the
differences between the isolated clusters. All axes show units in meters.
setup. We show that the presented model successfully extracts meaningful human motion templates
underlying Balero, and that the movements generated by the model are successful in simulation of
the Balero task on an anthropomorphic SARCOS arm.
3.1 Data Generation of Balero Motions
In the Balero game of dexterity, a human is given a toy consisting of a cup with a ball attached by
a string. The goal of the human is to toss the ball into the cup. Humans perform a wide variety of
different movements in order to achieve this task [9]. For example, three very distinct movements
are: (i) swing the hand slightly upwards to the side and then go back to catch the ball, (ii) hold the
cup high and then move very fast to catch the ball, and (iii) jerk the cup upwards and catch the ball
in a fast downwards movement. Whilst the difference in these three movements is significant and
can be easily detected visually, there exist many other movements for which this is not the case.
We collected 124 different Balero trajectories where the subject was free to select the employed
movement. For doing so, we used a VICON™ data collection system which samples the trajectories at 200Hz to track both the cup as well as all seven major degrees of freedom of the human arm.
For the evaluation of our method, we considered the seven joint angles of the human presenter as
well as the corresponding seven estimated joint velocities.
In the lowest row of Figure 1, we show how the human motion is collected with a VICON™ motion
tracking setup. As we will see later, this specific movement is assigned by our method to cluster C2
whose representative generative LGSSM can be used successfully for imitating this motion (middle
row). A sketch of the represented movement is shown in the top row of Figure 1.
3.2 Clustering and Imitation of Motion Templates
We trained the variational method with different initial conditions, hidden dimension H = 35 and a
number of clusters K which varied from 20 to 50 in order to avoid suboptimal results due to local
maxima.
The resulting clustering contains nine active motion templates. These are plotted in Figure 2, where,
instead of the 14-dimensional joint angles and velocities, we show the three-dimensional cup trajectories resulting from these joint movements, as it is easier for humans to make sense of cartesian
trajectories. Clusters C1, C2 and C3 are movements to the side which subsequently catch the ball.
Here, C1 is a short jerk, C3 appears to have a circular movement similar to a jerky movement, while
C2 uses a longer but smoother movement to induce kinetic energy in the ball. Motion templates
C4 and C5 are dropping movements where the cup moves down fast for more than 1.2m and then
[Figure 3: two rows of panels, positions in rad (top) and velocities in rad/s (bottom), plotted over time in s for two executions, grouped as (a) and (b); see the caption below.]
Figure 3: (a) Time-series recorded from two executions of the Balero movement assigned by our
model to cluster C1. In the first and second rows are plotted the positions and velocities respectively
(for better visualization each time-series component is plotted with its mean removed). (b) Two
executions of the Balero movement generated by our trained model using probability distributions
of cluster C1.
catches the ball. The template C5 is a smoother movement than C4 with a wider catching movement.
For C6 and C7, we observe a significantly different movement where the cup is jerked upwards dragging the ball in this direction and then catches the ball on the way down. Clusters C8 and C9 exhibit
the most interesting movement where the main motion is forward-backwards and the ball swings
into the cup. In C8 this task is achieved by moving upwards at the same time while in C9 there is
little loss of height.
To generate Balero movements with our trained model, we can use the recursive formulation of the
LGSSM given by Equation 1 where, for each cluster k, A^k, B^k and \mu_1^k are replaced by the mean
values of their inferred Gaussian q distributions, while the noise covariances are replaced by the
modes of their Gamma q distributions. The initial hidden state h_1 and the noise elements \eta_t^h and
\eta_t^v are sampled from their respective q distributions, whilst the inferred optimal values are used for
\mu_{2:T}^k.
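Reusing the simulate_lgssm sketch from Section 2.1, this generation step amounts to plugging in the posterior summaries for the chosen cluster; the container q_k and its field names below are our own assumptions, not the authors' interface:

```python
def generate_from_cluster(q_k, T, rng=None):
    """Generate one movement from the learned LGSSM of a cluster.
    q_k: hypothetical dict holding the q-distribution summaries for cluster k."""
    A_hat, B_hat = q_k["A_mean"], q_k["B_mean"]        # Gaussian posterior means
    mu = q_k["mu"]                                     # mean of mu_1, optimal mu_{2:T}
    Sigma_H = q_k["Sigma_H_mode"]                      # modes of the Gamma posteriors
    Sigma_V = q_k["Sigma_V_mode"]
    return simulate_lgssm(A_hat, B_hat, Sigma_H, Sigma_V, mu, q_k["Sigma_1"], T, rng)
```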
In Figure 3 (a) we plotted two recorded executions of the Balero task assigned by our model to cluster
C1. As we can see, the two executions have similar dynamics but also display some differences due
to human variability in performing the same type of movement. In Figure 3 (b) we plotted two
executions generated by our model using the learned distributions representing cluster C1. Our
model can generate time-series with very similar dynamics to the ones of the recorded time-series.
To investigate the accuracy of the obtained motion templates, we used them for executing Balero
movements on a simulated anthropomorphic SARCOS arm. Inspired by Miyamoto et al. [15], a
small visual feedback term based on a Jacobian transpose method was activated when the ball was
within 3cm in order to ensure task-fulfillment. We found that our motion templates are accurate
enough to generate successful task executions. This can be seen in Figure 1 for cluster C2 (middle
row) and in the video on the author?s website.
4 Conclusions
In this paper, we addressed the problem of automatic generation of skill libraries for both robot
learning and human motion analysis as an unsupervised time-series clustering and learning problem
based on human trajectories. We have introduced a novel Bayesian temporal mixture model based
on a variational approximation method which is especially designed to enable the use of efficient
inference algorithms. We demonstrated that our model gives rise to a meaningful clustering of
human executions of the difficult game of dexterity Balero and is able to generate time-series which
are very close to the recorded ones. Finally, we have shown that the model can be used to obtain
successful executions of the Balero movements on a physically realistic simulation of the SARCOS
Master Arm.
5 Acknowledgments
The authors would like to thank David Barber for useful discussions and Betty Mohler for help with
data collection.
References
[1] T. Flash and B. Hochner. Motor primitives in vertebrates and invertebrates. Current Opinion in
Neurobiology, 15(6):660?666, 2005.
[2] B. Williams, M. Toussaint, and A. Storkey. Modelling motion primitives and their timing in
biologically executed movements. In Advances in Neural Information Processing Systems 20,
pages 1609?1616, 2008.
[3] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in Neural Information Processing Systems 15, pages 1547?1554, 2003.
[4] S. Calinon, F. Guenter, and A. Billard. On learning, representing and generalizing a task in a
humanoid robot. IEEE Transactions on Systems, Man and Cybernetics, Part B, 37(2):286?298,
2007.
[5] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford Univ. Press,
2001.
[6] Y. Xiong and D-Y. Yeung. Mixtures of ARMA models for model-based time series clustering.
In Proceedings of the IEEE International Conference on Data Mining, pages 717?720, 2002.
[7] C. Li and G. Biswas. A Bayesian approach to temporal data clustering using hidden Markov
models. In Proceedings of the International Conference on Machine Learning, pages 543?550,
2000.
[8] J. Kober, B. Mohler and J. Peters. Learning perceptual coupling for motor primitives. International Conference on Intelligent Robots and Systems, pages 834?839, 2008.
[9] S. Fogel, J. Jacob, and C. Smith. Increased sleep spindle activity following simple motor procedural learning in humans. Actas de Fisiologia, 7(123), 2001.
[10] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge Univ.
Press, 2003.
[11] D. Wipf and J. Palmer and B. Rao. Perspectives on Sparse Bayesian Learning. In Advances in
Neural Information Processing Systems 16, 2004.
[12] S. Chiappa and D. Barber. Dirichlet Mixtures of Bayesian Linear Gaussian State-Space Models: a Variational Approach. Technical Report no. 161, MPI for Biological Cybernetics, Tübingen, Germany, 2007.
[13] K. Kurihara, M. Welling, and Y. W. Teh. Collapsed variational Dirichlet process mixture
models. In Proceedings of the International Joint Conference on Artificial Intelligence, pages
2796?2801, 2007.
[14] D. Barber and S. Chiappa. Unified inference for variational Bayesian linear Gaussian statespace models. In Advances in Neural Information Processing Systems 19, pages 81?88, 2007.
[15] H. Miyamoto and S. Schaal and F. Gandolfo and Y. Koike and R. Osu and E. Nakano and
Y. Wada and M. Kawato. A Kendama learning robot based on bi-directional theory. Neural
Networks, 9(8): 1281?1302, 1996
2,680 | 343 | Extensions of a Theory of Networks for
Approximation and Learning: Outliers and
Negative Examples
Federico Girosi
AI Lab. M.I.T.
Cambridge, MA 02139
Tomaso Poggio
AI Lab. M.I.T.
Cambridge, MA 02139
Bruno Caprile
I.R.S.T.
Povo, Italy, 38050
Abstract
Learning an input-output mapping from a set of examples can be regarded
as synthesizing an approximation of a multi-dimensional function. From
this point of view, this form of learning is closely related to regularization
theory, and we have previously shown (Poggio and Girosi, 1990a, 1990b)
the equivalence between regularization and a class of three-layer networks
that we call regularization networks. In this note, we extend the theory
by introducing ways of dealing with two aspects of learning: learning in
the presence of unreliable examples or outliers, and learning from positive and
negative examples.
1 Introduction
In previous papers (Poggio and Girosi, 1990a, 1990b) we have shown the equivalence
between certain regularization techniques and a class of three-layer networks - that
we call regularization networks - which are related to the Radial Basis Functions
interpolation method (Powell, 1987). In this note we indicate how it is possible
to extend our theory of learning in order to deal with 1) occurrence of unreliable
examples, 2) negative examples. Both problems are also interesting from the point
of view of classical approximation theory:
1. discounting "bad" examples corresponds to discarding, in the approximation
of a function, data points that are outliers.
2. learning by using negative examples - in addition to positive ones - corresponds
to approximating a function considering not only points which the function
ought to be close to, but also points - or regions - that the function must
avoid.
2 Unreliable data
Suppose that a set g = \{(x_i, y_i) \in R^n \times R\}_{i=1}^N of data has been obtained by randomly
sampling a function f, defined in R^n, in presence of noise, in a way that we can
write
y_i = f(x_i) + \epsilon_i,    i = 1, \dots, N
where \epsilon_i are independent random variables. We are interested in recovering an
estimate of the function f from the set of data g. Taking a probabilistic approach,
we can regard the function f as the realization of a random field with specified
prior probability distribution. Consequently, the data g and the function f are non-independent random variables, and, by using Bayes rule, it is possible to express
the conditional probability P[f|g] of the function f, given the examples g, in terms
of the prior probability of f, P[f], and the conditional probability of g given f,
P[g|f]:
P[f|g] \propto P[g|f] P[f].    (1)
A common choice (Marroquin et al., 1987) for the prior probability distribution
P[f] is
P[f] \propto e^{-\lambda \|P f\|^2},    (2)
where P is a differential operator (the so called stabilizer), \|\cdot\| is the L2 norm, and \lambda
is a positive real number. This form of probability distribution assigns significant
probability only to those functions for which the term \|P f\|^2 is "small", that is to
functions that do not vary too "quickly" in their domain.
If the noise is Gaussian, the probability P[g|f] can be written as:
P[g|f] \propto \prod_{i=1}^{N} e^{-\beta_i (y_i - f(x_i))^2},    (3)
where \beta_i = \frac{1}{2\sigma_i^2}, and \sigma_i is the variance of the noise related to the i-th data point.
The values of the variances are usually assumed to be equal to some known value
\sigma, that reflects the accuracy of the measurement apparatus.
However, in many
cases we do not have access to such information, and weaker assumptions have
to be made. A fairly natural and general one consists in regarding the variances of
the noise, as well as the function f, as random variables. Of course, some a priori
knowledge about these variables, represented by an appropriate prior probability
distribution, is needed. Let us denote by \beta the set of random variables \{\beta_i\}_{i=1}^N. By
means of Bayes rule we can compute the joint probability of the variables f and \beta.
Assuming that the field f and the set \beta are conditionally independent we obtain:
P[f, \beta | g] \propto P[g | f, \beta] P[f] P[\beta]    (4)
where P[\beta] is the prior probability of the set of variances \beta and P[g | f, \beta] is the
same as in eq. (3). Given the posterior probability (4) we are mainly interested in
computing an estimate of f. Thus what we really need to compute is the marginal
posterior probability of f, P_m[f], that is obtained integrating equation (4) over the
variables \beta_i:
P_m[f] = \int d\beta \, P[f, \beta | g].    (5)
A simple way to obtain an estimate of the function f from the probability distribution (5) consists in computing the so called MAP (Maximum A Posteriori) estimate,
that is the function that maximizes the posterior probability P_m[f]. The problem
of recovering the function f from the set of data g, with partial information about
the amount of Gaussian noise affecting the data, is therefore equivalent to solving
an appropriate variational problem. The specific form of the functional that has to
be maximized - or minimized - depends on the probability distributions P[f] and
P[\beta].
Here we consider the following situation: we have knowledge that a given percentage,
(1 - \epsilon), of the data is characterized by a Gaussian noise distribution of variance
\sigma_1 = (2\beta_1)^{-1/2}, whereas for the rest of the data the variance of the noise is a very
large number \sigma_2 = (2\beta_2)^{-1/2} (we will call these data "outliers"). This situation
yields the following probability distribution:
P[\beta] = \prod_{i=1}^{N} [(1 - \epsilon) \, \delta(\beta_i - \beta_1) + \epsilon \, \delta(\beta_i - \beta_2)].    (6)
In this case, choosing P[f] as in eq. (2), we can show that P_m[f] \propto e^{-H[f]}, where
H[f] = \sum_{i=1}^{N} V(\Delta_i) + \lambda \|P f\|^2,    (7)
with \Delta_i = y_i - f(x_i). Here V represents the effective potential
V(x) = -\log[(1 - \epsilon) \sqrt{\beta_1} \, e^{-\beta_1 x^2} + \epsilon \sqrt{\beta_2} \, e^{-\beta_2 x^2}],    (8)
depicted in fig. (1) for different values of \beta_2.
[Figure 1: plot of the effective potential V(x) over x \in [-4, 4].]
Figure 1: The effective potential V(x) for \epsilon = 0.1, \beta_1 = 3.0 and three different
values of \beta_2: 0.1, 0.03, 0.001.
The MAP estimate is therefore obtained minimizing the functional (7). The first
term enforces closeness to the data, while the second term enforces smoothness of
the solution, the trade off between these two opposite tendencies being controlled
by the parameter \lambda. Looking at fig. (1) we notice that, in the limit of \beta_2 \to 0, the effective potential V is quadratic if the absolute value of its argument is
smaller than a threshold, and constant otherwise (fig. 1). Therefore, data points
are taken into account when the interpolation error is smaller than a threshold, and
their contribution neglected otherwise.
If \beta_1 = \beta_2 = \bar{\beta}, that is if the distribution of the variables \beta_i is a delta function
centered on some value \bar{\beta}, the effective potential V(x) = \bar{\beta} x^2 is obtained. Therefore, this method becomes equivalent to the so called "regularization technique"
(Tikhonov and Arsenin, 1977) that has been extensively used to solve ill-posed
problems, of which the one we have just outlined is a particular example (Poggio
and Girosi, 1990a, 1990b). Suitable choices of the distribution P[\beta] result in other effective potentials (for example the potential V(x) = \sqrt{\epsilon + x^2} can be obtained),
and the corresponding estimators turn out to be similar to the well known robust
smoothing splines (Eubank, 1988).
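A short sketch of our own of the effective potential in eq. (8) (assuming, as in the derivation above, normalized Gaussian factors in eq. (3)); with the parameters of fig. (1) it is quadratic near the origin and flattens out beyond a threshold:

```python
import numpy as np

def effective_potential(x, eps, beta1, beta2):
    """V(x) = -log[(1-eps)*sqrt(beta1)*exp(-beta1 x^2) + eps*sqrt(beta2)*exp(-beta2 x^2)]."""
    x = np.asarray(x, dtype=float)
    mix = ((1.0 - eps) * np.sqrt(beta1) * np.exp(-beta1 * x**2)
           + eps * np.sqrt(beta2) * np.exp(-beta2 * x**2))
    return -np.log(mix)

# Parameters of fig. (1): eps = 0.1, beta1 = 3.0, beta2 in {0.1, 0.03, 0.001}.
v = effective_potential(np.linspace(-4.0, 4.0, 201), 0.1, 3.0, 0.001)
```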
=
The functional (7), with the choice expressed by eq. (2), admits a simple physical interpretation. Let us consider for simplicity a function defined on a. one-dimensional
la.ttice. The value of the function J(xd at. site i is regarded as the position of a
particle that can move only in the vertical direction. The particle is attracted according to a spring-like potential V - towards the data point and the neighboring
753
754
Girosi, Poggio, and Caprile
particles as well. The natUl'al trend of the system will be to mmmuze its total
energy which, in this scheme, is expressed by the functional (7): the first term is
associated to the springs connecting the particle to the data point, and the second
one, being associated to the the springs connecting neighboring particles, enforces
the smoothness of the final configuration. Notice that the potential energy of the
springs connecting the particle to the data point is not quadratic, as for the "standard" springs, resulting this in a non-linear relationship between the force and the
elongation. The potential energy becomes constant when the elongation is larger
than a fixed threshold, and the force (which is proportional to the first derivati ve of
the potential energy) goes to zero. In this sense we can say that the springs "break"
when we try to stretch them t.oo much (Geiger and Cirosi, 1990).
3 Negative examples
In many situations, further information about a function may consist in knowing
that its value at some given point has to be far from a given value (which, in this
context, can be considered as a "negative example"). We shall account for the
presence of negative examples by adding to the functional (7) a quadratic repulsive
term for each negative example (for a related trick, see Kass et al., 1987). However, the introduction of such a "repulsive spring" may make the functional (7)
unbounded from below, because the repulsive terms tend to push the value of the
function up to infinity. The simplest way to prevent this occurrence is either to
allow the spring constant to decrease with increasing elongation, or, in the extreme case, to break at some point. Hence, we can use the same model of nonlinear
spring of the previous section, and just reverse the sign of the associated potential. If \{(t_a, y_a) \in R^n \times R\}_{a=1}^K is the set of negative examples, and if we define
\Delta_a = y_a - f(t_a), the functional (7) becomes:
H[f] = \sum_{i=1}^{N} V(\Delta_i) - \sum_{a=1}^{K} V(\Delta_a) + \lambda \|P f\|^2.
4 Solution of the variational problem
An exhaustive discussion of the solution of the variational problem associated to
the functional (7) cannot be given here. We refer the reader to the papers of Poggio
and Girosi (1990a, 1990b) and Girosi, Poggio and Caprile (1990), and just sketch
the form of the solution. In both cases of unreliable and negative data, it can be
shown that the solution of the variational problem always has the form
f^*(x) = \sum_{i=1}^{N} c_i G(x; x_i) + \sum_{i=1}^{k} a_i \phi_i(x),    (9)
where G is the Green's function of the operator \hat{P}P (\hat{P} denoting the adjoint operator of P), and \{\phi_i(x)\}_{i=1}^{k} is a basis of functions for the null space of P (usually
polynomials of low degree) and \{c_i\}_{i=1}^{N} and \{a_i\}_{i=1}^{k} are coefficients to be computed.
Substituting the expansion (9) in the functional (7), the function H^*(c, a) = H[f^*]
is defined. The vectors c and a can then be found by minimizing the function
H^*(c, a).
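As an illustration of eq. (9) (a sketch of our own, not the authors' code), the solution can be evaluated as a weighted sum of Green's functions plus a null-space term; here we assume a Gaussian Green's function, as in the experiments of Section 5, and constant-plus-linear null-space basis functions:

```python
import numpy as np

def network_output(x, centers, c, a, sigma):
    """f*(x) = sum_i c_i G(x; x_i) + sum_j a_j phi_j(x), with an assumed Gaussian
    G(x; x_i) = exp(-||x - x_i||^2 / (2 sigma^2)) and phi = [1, x_1, ..., x_n]."""
    x = np.atleast_2d(x)                                   # (M, n) query points
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma**2))                     # (M, N) Green's functions
    phi = np.hstack([np.ones((x.shape[0], 1)), x])         # null-space basis
    return G @ c + phi @ a
```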
We shall finally notice that the solution (9) has a simple interpretation in terms
of feedforward networks with one layer of hidden units, of the same class of the
regularization networks introduced in previous papers (Poggio and Girosi, 1990a,
1990b). The only difference between these networks and the regularization networks
previously introduced consists in the function that has to be minimized in order to
find the weights of the network.
5 Experimental Results
In this section we report two examples of the application of these techniques to very
simple one-dimensional problems.
5.1 Unreliable data
The data set consisted of seven examples, randomly taken, within the interval
[-1, 1], from the graph of f(x) = cos(x). In order to create an outlier in the
data set, the value of the fourth point has been substituted with the value 1.5. The
Green's function of the problem was a Gaussian of variance \sigma = 0.3, the parameter
\epsilon was set to 0.1, the value of the regularization parameter \lambda was 10^{-2}, and the
parameters \beta_1 and \beta_2 were set respectively to 10.0 and 0.003. With this choice of
the parameters the effective potential was approximately constant for values of its
argument larger than 1. In figure (2a) we show the result that is obtained after
only 10 iterations of gradient descent: the spring of the outlier breaks, and it does
not influence the solution any more. The "hole" that the solution shows nearby the
outlier is a combined effect of the fact that the variance of the Gaussian Green's
function is small (\sigma = 0.3), and of the lack of data next to the outlier itself.
5.2 Negative examples
Again data to be approximated came from a random sampling of the function
f(x) = cos(x), in the interval [-1, 1]. The fourth data point was selected as the
negative example, and the parameters were set in a way that its spring would break
when the elongation exceeded the value 1. In figure (2b) we show a result obtained
with 500 iterations of a stochastic gradient descent algorithm, with a Gaussian
Green's function of variance \sigma = 0.4.
Acknowledgements We thank Cesare Furlanello for useful discussions and for a critical reading of the manuscript.
[Figure 2: two panels, (a) and (b), each plotting y against x; see the caption below.]
Figure 2: (a) Approximation in presence of an outlier (the data point whose value
is 1.5). (b) Approximation in presence of a negative example.
References
[1] R.L. Eubank. Spline Smoothing and Nonparametric Regression, volume 90 of
Statistics: Textbooks and Monographs. Marcel Dekker, Inc., New York, 1988.
[2] D. Geiger and F. Girosi. Parallel and deterministic algorithms for MRFs: surface reconstruction and integration. In O. Faugeras, editor, Lecture Notes in
Computer Science, Vol. 427: Computer Vision - ECCV 90. Springer-Verlag,
Berlin, 1990.
[3] F. Girosi, T. Poggio, and B. Caprile. Extensions of a theory of networks for
approximation and learning: outliers and negative examples. A.I. Memo 1220,
Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 1990.
[4] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. In
Proceedings of the First International Conference on Computer Vision, London,
1987. IEEE Computer Society Press, Washington, D.C.
[5] J. L. Marroquin, S. Mitter, and T. Poggio. Probabilistic solution of ill-posed
problems in computational vision. J. Amer. Stat. Assoc., 82:76-89, 1987.
[6] T. Poggio and F. Girosi. A theory of networks for learning. Science, 247:978-982,
1990a.
[7] T. Poggio and F. Girosi. Networks for approximation and learning. Proceedings
of the IEEE, 78(9), September 1990b.
[8] M. J. D. Powell. Radial basis functions for multivariable interpolation: a review. In J. C. Mason and M. G. Cox, editors, Algorithms for Approximation.
Clarendon Press, Oxford, 1987.
[9] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H.
Winston, Washington, D.C., 1977.
2,681 | 3,430 | Multiscale Random Fields with Application to
Contour Grouping
Longin Jan Latecki
Dept. of Computer and Info. Sciences
Temple University, Philadelphia, USA
[email protected]
ChengEn Lu
Dept. of Electronics and Info. Eng.
Huazhong Univ. of Sci. and Tech., China
[email protected]
Marc Sobel
Statistics Dept.
Temple University, Philadelphia, USA
[email protected]
Xiang Bai
Dept. of Electronics and Info. Eng.
Huazhong Univ. of Sci. and Tech., China
[email protected]
Abstract
We introduce a new interpretation of multiscale random fields (MSRFs) that admits efficient optimization in the framework of regular (single level) random fields
(RFs). It is based on a new operator, called append, that combines sets of random
variables (RVs) to single RVs. We assume that a MSRF can be decomposed into
disjoint trees that link RVs at different pyramid levels. The append operator is
then applied to map RVs in each tree structure to a single RV. We demonstrate
the usefulness of the proposed approach on a challenging task involving grouping
contours of target shapes in images. It provides a natural representation of multiscale contour models, which is needed in order to cope with unstable contour
decompositions. The append operator allows us to find optimal image segment
labels using the classical framework of relaxation labeling. Alternative methods
like Markov Chain Monte Carlo (MCMC) could also be used.
1 Introduction
Random Fields (RFs) have played an increasingly important role in the fields of image denoising,
texture discrimination, image segmentation and many other important problems in computer vision.
The images analyzed for these purposes typically have significant fractal properties which preclude
the use of models operating at a single resolution level. Such models, which aim to minimize meansquared estimation error, use only second-order image statistics which fail to accurately characterize
the images of interest. Multiscale random fields (MSRFs) resolve this problem by using information
at many different resolution levels [2, 15, 5]. In [6], a probabilistic model of multiscale conditional
random fields (mCRF) was proposed to segment images by labeling pixels using a predefined set of
class labels.
The main difference between MSRFs or mCRFs as known in the literature, e.g., [2, 15, 6, 5], and the proposed MSRF is the interpretation of the connections between different scales (levels). In the proposed approach, the random variables (RVs) linked by a tree substructure across different levels compete for their label assignments, while in the existing approaches the goal is to cooperate in the label assignments, which is usually achieved by averaging. In other words, usually the label assignment of a parent node is enforced to be compatible with the label assignment of its children by averaging. In contrast, in the proposed approach the parent node and all its children compete for the best possible label assignment.
Contour grouping is one of key approaches to object detection and recognition, which is a fundamental goal of computer vision. We introduce a novel MSRF interpretation, and show its benefits
in solving the contour grouping problem. The MSRF allows us to cast contour grouping as contour matching. Detection and grouping by shape has been investigated in earlier work. The basic
idea common to all methods is to define distance measures between shapes, and then accurately
label and/or classify shapes using these measures. Classical methods, of this type, such as shape
contexts [1] and chamfer matching [13] can not cope well with clutter and shape deformations.
Some researchers described the shape of the entire object using deformable contour fragments and
their relative positions [10, 12], but their detection results are always grassy contour edges. The
deformable template matching techniques often require either good initial positions or clean images
(or both) to avoid (false) local minima [14, 9]. Recently, Ferrari et al. [4] have used the sophisticated
edge detection methods of [8]; the resulting edges are linked to a network of connected contour segments by closing small gaps. Wu et al. [16] proposed an active basis model that provides deformable
template consisting of a small number of Gabor wavelet elements allowed to slightly perturb their
locations and orientations.
Our grouping is also based on the edge detection of [8], but we do not perform edge linking directly
for purposes of grouping. We perform matching a given contour model to edge segments in images.
This allows us to perform grouping and detection at the same time. Our method differs from former
sampled-points-based matching methods [14, 3]; we match the contour segments from the given
contour to segments in edge images directly.
We decompose a given closed contour of a model shape into a group of contour segments, and match
the resulting contour segments to edge segments in a given image. Our model contour decomposition
is flexible and admits a hierarchical structure, e.g., a parent contour segment is decomposed into two
or more child segments. In this way, our model can adapt to different configurations of contour
parts in edge images. The proposed MSRF interpretation allows us to formulate the problem of
contour grouping as a soft label assignment problem. Since in our approach a parent node and all its
children compete for the best possible label assignment, allowing us to examine multiple composite
hypotheses of model segments in the image, a successful contour grouping of edge segments is
possible even if significant contour parts are missing or are distorted. The competition is made
possible by the proposed append operator. It appends the random variables (RVs) representing the
parent and all its children nodes to a single new RV. Since the connectivity relation between each
pair of model segments is known, the soft label assignment and the competition for best labels make
accurate grouping results in real images possible.
We also want to stress that our grouping approach is based on matching of contour segments. The
advantages of segment matching over alternative techniques based on point matching are at least
twofold: 1) it permits deformable matching (i.e., the global shape will not be changed even when
some segments shift or rotate a little); 2) it is more stable than point matching, since contour segments are more informative than points as shape cues.
2 Multiscale Random Fields
Given a set of data points X = {x 1 , . . . , xn }, the goal of random fields is to find a label assignment
f that maximizes the posterior probability p(f |X) (of that assignment):
f = argmax p(f |X)
(1)
f
Thus, we want to select the label assignment with the largest possible probability given the observed
data. Although the proposed method is quite general, for clarity of presentation, we focus on an
application of interest to us: contour grouping based on contour part correspondence.
We take the contour of an example shape to be our shape model $S$. We assume that the model is composed of several contour segments $s_1, \dots, s_m$. In our application, the data points $X = \{x_1, \dots, x_n\}$ are contour segments extracted by some low level process in a given image. The random field is defined by a sequence of random variables $F = (F_1, \dots, F_m)$ associated with the nodes $s_i$ of the model graph. $F$ represents the mapping of the nodes (model segments) $S = \{s_1, \dots, s_m\}$ to the data points $X = \{x_1, \dots, x_n\}$ (i.e., $F : S \to X$). We write $F_i = x_j$ to denote the event that the model segment $s_i$ is assigned the image segment $x_j$ by the map $F$. (Observe that usually the assignment is defined in the reverse direction, i.e., from an image to the model.)
Our goal is to find a label assignment $f = (f_1, \dots, f_m) \in X^m$ that maximizes the probability $p(f \mid X) = p(F_1 = f_1, \dots, F_m = f_m \mid X)$, i.e.,
$$f = (f_1, \dots, f_m) = \operatorname*{argmax}_{(f_1, \dots, f_m)} p(F_1 = f_1, \dots, F_m = f_m \mid X). \qquad (2)$$
However, the object contour in the given image (which is composed of some subset of segments in $X = \{x_1, \dots, x_n\}$) may have a different decomposition into contour segments than is the case for the model $s_1, \dots, s_m$. This is the case, for example, if some parts of the true contour are missing,
i.e., some si may not correspond to parts in X. Therefore, a shape model is needed that can provide
robust detection and recognition under these conditions. We introduce such a model by imposing a
multiscale structure on contour segments of the model shape. Let the lowest level zero represent the finest subdivision of a given model contour $S$ into the segments $S^0 = \{s^0_1, \dots, s^0_{m_0}\}$. The $\ell$-th level partition subdivides the contour into the segments $S^\ell = \{s^\ell_1, \dots, s^\ell_{m_\ell}\}$ for $\ell = 1, \dots, \Lambda$, where $\Lambda$ denotes the highest (i.e., most coarse) pyramid level. For each pyramid level $\ell$, the segments $S^\ell$ partition the model contour $S$, i.e., $S = s^\ell_1 \cup \dots \cup s^\ell_{m_\ell}$. The segments $S^\ell$ in level $\ell$ refine the segments $S^{\ell+1}$ in level $\ell+1$, i.e., segments in the level $\ell+1$ are unions of one or more consecutive segments in the level $\ell$. On each level $\ell$ we have a graph structure $G^\ell = (S^\ell, E^\ell)$, where $E^\ell$ is the set of edges governing the relations between segments in $S^\ell$, and we have a forest composed of trees that link nodes at different levels. The number of the trees corresponds to the number of nodes on the highest level $s^\Lambda_1, \dots, s^\Lambda_{m_\Lambda}$, since each of these nodes is the root of one tree. We denote these trees with $T_1, \dots, T_{m_\Lambda}$. For example, in Fig. 1 we have eight segments on level zero $s^0_1, \dots, s^0_8$, and four segments on level one:
$$s^1_1 = s^0_1 \cup s^0_2, \quad s^1_2 = s^0_3 \cup s^0_4, \quad s^1_3 = s^0_5 \cup s^0_6, \quad s^1_4 = s^0_7 \cup s^0_8.$$
This construction leads to a tree structure relation among segments at different levels. For example, $T_1$ is a tree with $s^1_1$ (segment 1) as a parent node and with two children $s^0_1, s^0_2$ (segments 5 and 6).
[Figure 1 graphic: the model contour with numbered segments 1-12, the four trees T1-T4 linking level-one nodes to their level-zero children, and the level labels $S^1_i$, $S^0_i$.]
Figure 1: An example of a multiscale random field structure.
We associate a random variable $F^\ell_i$ with each segment $s^\ell_i$. The range of each random variable $F^\ell_i$ is the set of contour segments $X = \{x_1, \dots, x_n\}$ extracted in a given image. The random variables inherit the tree structure from the corresponding model segments. Thus, we obtain a multiscale random field with random variables (RVs)
$$F = (F^0_1, \dots, F^0_{m_0}, \dots, F^\ell_1, \dots, F^\ell_{m_\ell}, \dots, F^\Lambda_1, \dots, F^\Lambda_{m_\Lambda}), \qquad (3)$$
the relational structure (RS) $G^\ell = (S^\ell, E^\ell)$, and trees $T_1, \dots, T_{m_\Lambda}$. Our goal remains the same as
stated in (2), but the graph structure of the underlying RF is significantly more complicated by the
introduction of the multiscale tree relations. Therefore, the maximization in (2) is significantly more
complicated as well. Usually, the computation in multiscale random fields is based on modeling the
dependencies between the random variables related by the (aforementioned) tree structures.
In the proposed approach, we do not explicitly model these tree structure dependencies. Instead, we
build relations between them using the construction of a new random variable that explicitly relates
all random variables in each given tree. We introduce a new operator acting on random variables,
called the append operator. The operator combines a given set of random variables $\mathbf{Y} = \{Y_1, \dots, Y_k\}$ into a single random variable denoted
$$\ast\mathbf{Y} = Y_1 \ast \dots \ast Y_k. \qquad (4)$$
For simplicity, we assume, in the definition below, that $\{Y_1, \dots, Y_k\}$ are discrete random variables taking values in the set $X = \{x_1, \dots, x_n\}$. Our definition can be easily generalized to continuous random variables. The append random variable $\ast\mathbf{Y}$, with distribution defined below, takes values in the set of pairs $\{1, \dots, k\} \times X$. The distribution of the random variable $\ast\mathbf{Y}$ is given by
$$p(\ast\mathbf{Y} = (i, x_j)) = \frac{1}{k}\, p(Y_i = x_j), \qquad (5)$$
where index $i$ is over the RVs and index $j$ is over the labels. The intuition behind this construction
can be explained by the following simple example. Let $Y_1, Y_2$ be two discrete random variables with distributions
$$(p(Y_1 = 1), p(Y_1 = 2), p(Y_1 = 3)) \text{ and } (p(Y_2 = 1), p(Y_2 = 2), p(Y_2 = 3)), \qquad (6)$$
then the distribution of $Y_1 \ast Y_2$ is simply given by the vector
$$\tfrac{1}{2} \cdot (p(Y_1 = 1), p(Y_1 = 2), p(Y_1 = 3), p(Y_2 = 1), p(Y_2 = 2), p(Y_2 = 3)). \qquad (7)$$
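As an illustration of Eqs. (4)-(7), here is a minimal Python sketch of the append construction (the helper name `append_rvs` is ours, not from the paper):

```python
import numpy as np

def append_rvs(dists):
    """Append operator (*): combine k discrete RVs over the same label set
    into one RV over pairs (rv_index, label), per Eq. (5).

    dists: list of k 1-D arrays, each summing to 1.
    Returns a (k, n) array where entry (i, j) = p(*Y = (i, x_j)).
    """
    k = len(dists)
    return np.stack(dists) / k  # each p(Y_i = x_j) is scaled by 1/k

# Example matching Eqs. (6)-(7): two RVs over three labels.
p_y1 = np.array([0.2, 0.5, 0.3])
p_y2 = np.array([0.1, 0.1, 0.8])
p_append = append_rvs([p_y1, p_y2])
assert np.isclose(p_append.sum(), 1.0)  # a valid joint over (i, x_j)
```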
Armed with this construction, we return to our multiscale RF with RVs in (3). Recall that the RVs representing the nodes on the highest level $F^\Lambda_1, \dots, F^\Lambda_{m_\Lambda}$ are the roots of trees $T_1, \dots, T_{m_\Lambda}$. By slightly abusing our notation, we define $\ast T_i$ as the append of all random variables that are nodes of tree $T_i$. This construction allows us to reduce the multiscale RF with RVs in (3) to a RF with RVs
$$\mathbf{T} = (\ast T_1, \dots, \ast T_{m_\Lambda}). \qquad (8)$$
The graph structure of this new RF is defined by the graph $G = (\mathbf{T}, E)$ such that
$$(\ast T_i, \ast T_j) \in E \;\text{ iff }\; \exists \ell\; \exists a, b:\; (F^\ell_a, F^\ell_b) \in E^\ell \text{ and } F^\ell_a \in \ast T_i \text{ and } F^\ell_b \in \ast T_j. \qquad (9)$$
In simple words, $\ast T_i$ and $\ast T_j$ are related in $G$ iff on some level $\ell$ both trees have related random variables.

The construction in (8) and (9) maps a multiscale RF to a single level RF, i.e., to a random field with a simple graph structure $G$. The intuition is that we collapse all graphs $G^\ell = (S^\ell, E^\ell)$ for $\ell = 1, \dots, \Lambda$ to a single graph $G = (\mathbf{T}, E)$ by gluing all RVs in each tree $T_i$ to a single RV $\ast T_i$. Consequently, any existing RF optimization method can be applied to compute
$$\hat t = (\hat t_1, \dots, \hat t_{m_\Lambda}) = \operatorname*{argmax}_{(t_1, \dots, t_{m_\Lambda})} p(\ast T_1 = t_1, \dots, \ast T_{m_\Lambda} = t_{m_\Lambda} \mid X). \qquad (10)$$
We observe that when optimizing the new RF in (10), we can simply perform separate optimizations on each level, i.e., on each level $\ell$ we optimize (8) with respect to the graph structure $G^\ell$. Hence at each level $\ell$ we choose the maximum a posteriori estimate associated with the random field at that level. Our key contribution is the fact that these optimizing estimators are linked by the internal structure of the RVs $\ast T_i$.
After optimizing a regular RF in (10) that contains append RVs, we obtain as the solution updated distributions of the append RVs. From them, we can easily reconstruct the updated distributions of the original RVs from the multiscale RF in (2) by the construction of the append RVs. For example, if we obtain $(\tfrac{1}{10}, \tfrac{3}{5}, \tfrac{1}{10}, 0, \tfrac{1}{10}, \tfrac{1}{10})$ as the updated distribution of some RV $Y_1 \ast Y_2$, then we can easily derive the updated distributions of $Y_1, Y_2$ as
$$\left(p(Y_1 = 1) = \tfrac{1}{8},\; p(Y_1 = 2) = \tfrac{3}{4},\; p(Y_1 = 3) = \tfrac{1}{8}\right) \;\&\; \left(p(Y_2 = 1) = 0,\; p(Y_2 = 2) = \tfrac{1}{2},\; p(Y_2 = 3) = \tfrac{1}{2}\right).$$
To obtain the distributions of the compound RVs $Y_1, Y_2$, we only need to ensure that both distributions of $Y_1$ and $Y_2$ sum to one. Since we are usually interested in selecting a variable assignment with maximum posterior probability (10), we do not need to derive these distributions. Consequently, in this example, it is sufficient for us to determine that the assignment of $Y_1$ to label 2 maximizes $Y_1 \ast Y_2$.
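The inverse step, recovering per-RV distributions and the winning assignment from an append RV, can be sketched as follows (again a hypothetical helper, continuing the example above):

```python
import numpy as np

def split_append(p_append):
    """Recover per-RV distributions from an updated append distribution.

    p_append: (k, n) array, entry (i, j) = p(*Y = (i, x_j)).
    Returns the k renormalized distributions and the argmax pair.
    """
    dists = [row / row.sum() if row.sum() > 0 else row for row in p_append]
    winner = np.unravel_index(np.argmax(p_append), p_append.shape)
    return dists, winner  # winner = (rv_index, label_index)

# With the example above: (1/10, 3/5, 1/10) and (0, 1/10, 1/10)
p = np.array([[0.1, 0.6, 0.1], [0.0, 0.1, 0.1]])
dists, winner = split_append(p)  # dists ~ [(1/8, 3/4, 1/8), (0, 1/2, 1/2)]
assert winner == (0, 1)          # Y1 assigned to label 2 maximizes Y1 * Y2
```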
Going back to our application in contour grouping, the RV $\ast T_2$ is an append of three RVs representing segments 2, 7, 8 in Fig. 1. We observe that RVs appended to $\ast T_2$ compete in the label assignment. For example, if a given assignment of RV $\ast T_2$ to an image segment, say $x_5$, maximizes $\ast T_2$, then, by the position in the discrete distribution of $\ast T_2$, we can clearly identify which RV is the winner, i.e., which of the model segments 2, 7, 8 is assigned to image segment $x_5$. We can also make this competition soft (with more than one winner) if we select local maxima of the discrete distribution of $\ast T_2$, which may lead to assigning more than one of model segments 2, 7, 8 to image segments. In the computation model presented in the next section, we focus on finding a global maximum for each RV $\ast T_i$.
3 Computing the label assignment with relaxation labeling
There exist several approaches to compute the assignment f that optimizes the relational structure of
a given RF [7], i.e., approaches that solve Eq. (10), which is our formulation of the general RF Eq.
(2). In our implementation, we use a particularly simple approach of relaxation labeling introduced
by Rosenfeld et al. in [11]. However, a more powerful class of MCMC methods could also be used
[7]. In this section, we briefly describe the relaxation labeling (RL) method, and how it fits into our
framework.
We recall that our goal is to find a label assignment $t = (t_1, \dots, t_m)$ that maximizes the probability $p(t \mid X) = p(\ast T_1 = t_1, \dots, \ast T_m = t_m \mid X)$ in Eq. (10), where we have shortened $m = m_\Lambda$. One of the key ideas of using RL is to decompose $p(t \mid X)$ into individual probabilities $p(\ast T_a = (i_a, x_j))$, where index $a = 1, \dots, m$ ranges over the RVs of the RF, index $j = 1, \dots, n$ ranges over the possible labels, which in our case are the contour segments $X = \{x_1, \dots, x_n\}$ extracted from a given image, and index $i_a$ ranges over the RVs that are appended to $\ast T_a$, which we denote with $i_a \in a$. For brevity, we use the notation
$$p_a(i_a, x_j) = p(\ast T_a = (i_a, x_j)).$$
Going back to our example in Fig. 1, $p_2(7, x_5)$ denotes the probability that contour segment 7 is assigned to an image segment $x_5$, and 2 is the index of RV $\ast T_2$. We recall that $\ast T_2$ is an append of three RVs representing segments 2, 7, 8 in Fig. 1. In Section 5, $p_2(7, x_5)$ is modeled as a Gaussian of the shape dissimilarity between model contour segment 7 and image contour segment 5.

As is usually the case for RFs, we also consider binary relations between RVs that are adjacent in the underlying graph structure $G = (\mathbf{T}, E)$, which represent conditional probabilities $p(\ast T_a = (i_a, x_j) \mid \ast T_b = (i_b, x_k))$. They express the compatibility of these label assignments. Again for brevity, we use the notation
$$C_{a,b}((i_a, x_j), (i_b, x_k)) = p(\ast T_a = (i_a, x_j) \mid \ast T_b = (i_b, x_k)).$$
For example, $C_{2,3}((7, x_5), (9, x_8))$ models the compatibility of the assignment of model segment 7 (part of model tree 2) to image segment $x_5$ with the assignment of model segment 9 (part of model tree 3) to image segment $x_8$. This compatibility is a function of geometric relations between the segments. Since segment 9 is above segment 7 in the model contour, it is reasonable to assign high compatibility only if the same holds for the image segments, i.e., $x_8$ is above $x_5$.
The RL algorithm iteratively estimates the change in the probability $p_a(i_a, x_j)$ by
$$\Delta p_a(i_a, x_j) = \sum_{b=1,\dots,m:\, b \neq a}\; \sum_{i_b \in b}\; \sum_{x_k \in X:\, x_k \neq x_j} C_{a,b}((i_a, x_j), (i_b, x_k)) \cdot p_b(i_b, x_k), \qquad (11)$$
where $b$ varies over all append random variables $\ast T_b$ different from $\ast T_a$ and $i_b$ varies over all compound RVs that are combined by append to $\ast T_b$. Then the probability is updated by
$$p_a(i_a, x_j) = \frac{p_a(i_a, x_j)\left[1 + \Delta p_a(i_a, x_j)\right]}{\sum_{i_a \in a} \sum_{x_k \in X} p_a(i_a, x_k)\left[1 + \Delta p_a(i_a, x_k)\right]}. \qquad (12)$$
The double sum in the denominator simply normalizes the distribution of $\ast T_a$ so that it sums to one. The RL algorithm in our framework iterates steps (11) and (12) for all $a = 1, \dots, m$ (append RVs), all $i_a \in a$, and all labels $x_j \in X$. It can be shown that the RL algorithm is guaranteed to converge, but not necessarily to a global maximum [7].
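For concreteness, a minimal sketch of one sweep of Eqs. (11)-(12) follows (the array layout and helper name are our assumptions, not the authors' implementation):

```python
import numpy as np

def rl_sweep(p, C):
    """One relaxation-labeling sweep over all append RVs.

    p: dict a -> (k_a, n) array, p[a][i, j] = p_a(i, x_j).
    C: dict (a, b) -> (k_a, n, k_b, n) array of compatibilities,
       C[a, b][i, j, ib, kk] = C_{a,b}((i, x_j), (ib, x_kk)).
    """
    p_new = {}
    for a, pa in p.items():
        dp = np.zeros_like(pa)
        for b, pb in p.items():
            if b == a:
                continue
            # Eq. (11): sum over the neighbor's RVs and all its labels...
            contrib = np.einsum('ijkl,kl->ij', C[(a, b)], pb)
            # ...then subtract the x_k == x_j terms to keep only x_k != x_j
            diag = np.einsum('ijkj,kj->ij', C[(a, b)], pb)
            dp += contrib - diag
        unnorm = pa * (1.0 + dp)          # Eq. (12) numerator
        p_new[a] = unnorm / unnorm.sum()  # double-sum normalization
    return p_new
```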
4 A contour grouping example
We provide a simple but real example to illustrate how our multiscale RF framework solves a concrete contour grouping instance. We use the contour model presented in Fig. 1. Let $F_i$ be a RV corresponding to model contour segment $s_i$ for $i = 1, \dots, 12$. We have two levels $S^0 = \{F_5, \dots, F_{12}\}$ and $S^1 = \{F_1, \dots, F_4\}$. Both graph structures $G^0$ and $G^1$ are complete graphs. As described in Section 2, we have a MSRF with four trees. The append RVs determined by these trees are
$$\ast T_1 = F_1 \ast F_5 \ast F_6, \quad \ast T_2 = F_2 \ast F_7 \ast F_8, \quad \ast T_3 = F_3 \ast F_9 \ast F_{10}, \quad \ast T_4 = F_4 \ast F_{11} \ast F_{12}.$$
We obtain a regular (single level) RF with the four append RVs, $\mathbf{T} = (\ast T_1, \ast T_2, \ast T_3, \ast T_4)$, and with the graph structure $G = (\mathbf{T}, E)$ determined by Eq. (9).
Given an image as in Fig. 2(a), we first compute its edge map shown in Fig. 2(b), and use a low level edge linking to obtain edge segments in Fig. 2(c). The 16 edge segments in Fig. 2(c) form our label set $X = \{x_1, x_2, \dots, x_{16}\}$. Our goal is to find a label assignment to RVs $\ast T_a$ for $a = 1, 2, 3, 4$ with maximum posterior probability (10). However, the label set of each append RV is different, e.g., the label set of $\ast T_1$ is equal to $\{1, 5, 6\} \times X$, where $\ast T_1 = (1, x_5)$ denotes the assignment $F_1 = x_5$ representing the mapping of model segment 1 to image segment 5. Hence $p_1(i_a, x_j) = p(\ast T_1 = (i_a, x_j))$ for $i_a = 1$, $j = 5$ denotes the probability of mapping model segment $i_a = 1$ to image segment $j = 5$.

As described in Section 3, we use relaxation labeling to compute the maximum posterior probability (10). Initially, all probabilities $p_a(i_a, x_j)$ are set based on shape similarity between the involved model and image segments. The assignment compatibilities are determined using the geometric relations described in Section 5. After 200 iterations, RL finds the best assignment for each RV $\ast T_a$, as Fig. 2(d) illustrates. They are presented in the format RV: model segment $\to$ edge segment:
$$\ast T_1: 1 \to x_{12}; \quad \ast T_2: 5 \to x_{10}; \quad \ast T_3: 8 \to x_7; \quad \ast T_4: 4 \to x_5.$$
Observe that many model segments remained unmatched, since they do not have any corresponding segments in the image in Fig. 2(c). This very desirable property results from the label assignment competition within each append RV $\ast T_a$ for $a = 1, 2, 3, 4$. This fact demonstrates one of the main benefits of the proposed approach. We stress that we do not use any penalties for non-matching, which are usually used in classical RFs (e.g., nil variables in [7]), but are very hard to set in real applications.
[Figure 2 graphic: (a) input image, (b) edge map, (c) the 16 numbered edge segments, (d) the grouping result.]
Figure 2: (c) The 16 edge segments form our label set X = {x 1 , x2 , . . . x16 }. (d) The numbers and
colors indicate the assignment of the model segments from Fig. 1.
5 Geometric contour relations
In this section, we provide a brief description of contour segment relations used to assign labels
for contour grouping. Two kinds of relations are defined. First, the probability $p_a(i_a, x_j)$ is set to be a Gaussian of the shape dissimilarity between model segment $i_a$ and image segment $x_j$. The shape dissimilarity is computed by matching sequences of tangent directions at their sample points. To make our matching scale invariant, we sample each model and image segment with the same number of sample points. We also consider four binary relations to measure the compatibility between a pair of model segments and a pair of image segments: $d^{(1)}(i, i')$, the maximum distance between the end-points of two contour segments $i$ and $i'$; $d^{(2)}(i, i')$, the minimum distance between the end-points of two contour segments $i$ and $i'$; $d^{(3)}(i, i')$, the direction from the mid-point of $i$ to the mid-point of $i'$; $d^{(4)}(i, i')$, the distance between the mid-points of $i$ and $i'$. To make our relations scale invariant, all distances are normalized by the sum of the lengths of segments $i$ and $i'$. Then the compatibility between a pair of model segments $i_a, i_b$ and a pair of image segments $x_j, x_k$ is given by a mixture of Gaussians:
$$C_{a,b}((i_a, x_j), (i_b, x_k)) = \frac{1}{4}\sum_{r=1}^{4} N\!\left(d^{(r)}(i_a, i_b) - d^{(r)}(x_j, x_k),\; \sigma^{(r)}\right). \qquad (13)$$
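A minimal sketch of Eq. (13) follows; the $\sigma^{(r)}$ values below are placeholders, since the paper does not report them numerically:

```python
import numpy as np

def gaussian(x, sigma):
    # Zero-mean Gaussian density evaluated at x
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def compatibility(d_model, d_image, sigmas=(0.1, 0.1, 0.2, 0.1)):
    """Eq. (13): average of four Gaussians on the differences of the
    length-normalized pairwise relations d^(1)..d^(4)."""
    diffs = np.asarray(d_model) - np.asarray(d_image)
    return np.mean([gaussian(d, s) for d, s in zip(diffs, sigmas)])
```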
6 Experimental results
We begin with a comparison between the proposed append MSRF and a single level RF. Given an edge map in Fig. 3(b) extracted by the edge detector of [8], we employ a low level edge linking method to obtain edge segments as shown in Fig. 3(c), where the 27 edge segments form our label set $X = \{x_1, \dots, x_{27}\}$. Fig. 3(d) illustrates our shape contour model and its two level multiscale structure of 10 contour segments. Fig. 3(e) shows the result of contour grouping obtained in the framework of the proposed
append MSRF. The numbers and colors indicate the assignment of the model segments. The benefits of the flexible multiscale model structure are clearly visible. Out of 10 model segments, only 4 have corresponding edge segments in the image, and our approach correctly determined label assignments reflecting this fact.

In contrast, this is not the case for a single level RF. Fig. 3(f) shows a model with a fixed single level structure, and its contour grouping result computed with classical RL can be found in Fig. 3(g). We observe that model segment 2 on the giraffe's head has no matching contour in the image, but is nevertheless incorrectly assigned. This wrong assignment influences model contour 4, and leads to another wrong assignment. In the proposed approach, model contours 2 and 3 in Fig. 3(d) compete for label assignments. Since contour 3 finds a good match in the image, we correctly obtain (through our append RV structure) that there is no match for segment 2.
[Figure 3 graphic: (a) input image, (b) edge map, (c) the 27 numbered edge segments, (d) the two-level giraffe model with 10 segments, (e) the MSRF grouping result, (f) the fixed single-level model, (g) the single-level RL grouping result.]
Figure 3: (d-g) comparison of results obtained by the proposed MSRF to a single level RF.
By mapping the model segments to the image segments, we enforce the existence of a solution.
Even if no target shape is present in a given image, our approach will "hallucinate" a matching
configuration of edge segments in the image. A standard alternative in the framework of random
fields is to use a penalty for non-matching (dummy or null nodes). However, this requires several
constants, and it is a highly nontrivial problem to determine their values. In our approach, we
can easily distinguish hallucinated contours from true contours, since when the RF optimization is
completed, we obtain the assignment of contour segments, i.e., we know a global correspondence
between model segments and image segments. Based on this correspondence, we compute global
shape similarity, and discard solutions with low global similarity to the model contour. This requires
only one threshold on global shape similarity, which is relatively easy to set, and our experimental
results verify this fact. In Figs. 4 and 5, we show several examples of contour grouping obtained by
the proposed MSRF method on the ETHZ data set [4]. We only use two contour models, the swan
model (Fig. 1) and the giraffe model (Fig. 3(d)). Their original images are included as shape models
in the ETHZ data set. Model contours are decomposed into segments by introducing break points at
high curvature points. Edge contour segments in the test images have been automatically computed
by a low level edge linking process. Noise and shape variations cause the edge segments to vary a
lot from image to image. We also observe that grouped contours contain internal edge structures.
7 Conclusions
Since edges, and consequently, contour parts vary significantly in real images, it is necessary to make
decomposition of model contours into segments flexible. The proposed multiscale construction
permits us to have a very flexible decomposition that can adapt to different configurations of contour
parts in the image. We introduce a novel multiscale random field interpretation based on the append
operator that leads to efficient optimization. We applied the new algorithm to the ETHZ data set to
illustrate the application potential of the proposed method.
Figure 4: ETHZ data set grouping results for the Giraffe model.
Figure 5: ETHZ data set grouping results for the swan model.
Acknowledgments
This work was supported in part by the NSF Grants IIS-0534929, IIS-0812118 in the Robust Intelligence Cluster and by the DOE Grant DE-FG52-06NA27508.
References
[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Analysis and Machine Intelligence, 24(4):509-522, 2002.
[2] C. A. Bouman and M. Shapiro. A multiscale random field model for Bayesian image segmentation. IEEE Trans. on Image Processing, 3(2):162-177, 1994.
[3] H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. In CVPR, 2000.
[4] V. Ferrari, L. Fevrier, F. Jurie, and C. Schmid. Groups of adjacent contour segments for object detection. IEEE Trans. PAMI, 2008.
[5] A. R. Ferreira and H. K. H. Lee. Multiscale Modeling: A Bayesian Perspective. Springer-Verlag, Springer Series in Statistics, 2007.
[6] X. He, R. S. Zemel, and M. A. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In CVPR, volume 2, pages 695-702, 2004.
[7] S. Z. Li. Markov Random Field Modeling in Image Analysis. Springer-Verlag, Tokyo, 2001.
[8] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, colour and texture cues. IEEE Trans. PAMI, 26:530-549, 2004.
[9] G. McNeill and S. Vijayakumar. Part-based probabilistic point matching using equivalence constraints. In NIPS, 2006.
[10] A. Opelt, A. Pinz, and A. Zisserman. A boundary-fragment-model for object detection. In ECCV, 2006.
[11] A. Rosenfeld, R. Hummel, and S. Zucker. Scene labeling by relaxation operations. IEEE Trans. on Systems, Man and Cybernetics, 6:420-433, 1976.
[12] J. Shotton, A. Blake, and R. Cipolla. Contour-based learning for object detection. In ICCV, 2005.
[13] A. Thayananthan, B. Stenger, P. H. S. Torr, and R. Cipolla. Shape context and chamfer matching in cluttered scenes. In CVPR, 2003.
[14] Z. Tu and A. L. Yuille. Shape matching and recognition using generative models and informative features. In ECCV, 2004.
[15] A. S. Willsky. Multiresolution Markov models for signal and image processing. Proceedings of the IEEE, 90:1396-1458, 2002.
[16] Y. N. Wu, Z. Si, C. Fleming, and S.-C. Zhu. Deformable template as active basis. In ICCV, 2007.
2,682 | 3,431 | An Homotopy Algorithm for the Lasso with Online
Observations
Pierre J. Garrigues
Department of EECS
Redwood Center for Theoretical Neuroscience
University of California
Berkeley, CA 94720
[email protected]
Laurent El Ghaoui
Department of EECS
University of California
Berkeley, CA 94720
[email protected]
Abstract
It has been shown that the problem of $\ell_1$-penalized least-square regression, commonly referred to as the Lasso or Basis Pursuit DeNoising, leads to solutions that are sparse and therefore achieve model selection. We propose in this paper RecLasso, an algorithm to solve the Lasso with online (sequential) observations. We
introduce an optimization problem that allows us to compute an homotopy from
the current solution to the solution after observing a new data point. We compare our method to Lars and Coordinate Descent, and present an application to
compressive sensing with sequential observations. Our approach can easily be
extended to compute an homotopy from the current solution to the solution that
corresponds to removing a data point, which leads to an efficient algorithm for
leave-one-out cross-validation. We also propose an algorithm to automatically
update the regularization parameter after observing a new data point.
1 Introduction
Regularization using the $\ell_1$-norm has attracted a lot of interest in the statistics [1], signal processing [2], and machine learning communities. The $\ell_1$ penalty indeed leads to sparse solutions, which is a desirable property to achieve model selection, data compression, or for obtaining interpretable results. In this paper, we focus on the problem of $\ell_1$-penalized least-square regression commonly referred to as the Lasso [1]. We are given a set of training examples or observations $(y_i, x_i) \in \mathbb{R} \times \mathbb{R}^m$, $i = 1 \dots n$. We wish to fit a linear model to predict the response $y_i$ as a function of $x_i$ and a feature vector $\theta \in \mathbb{R}^m$, $y_i = x_i^T \theta + \epsilon_i$, where $\epsilon_i$ represents the noise in the observation. The Lasso optimization problem is given by
$$\min_\theta \; \frac{1}{2}\sum_{i=1}^{n} \left(x_i^T \theta - y_i\right)^2 + \mu_n \|\theta\|_1, \qquad (1)$$
where $\mu_n$ is a regularization parameter. The solution of (1) is typically sparse, i.e., the solution $\theta$ has few entries that are non-zero, and therefore identifies which dimensions in $x_i$ are useful to predict $y_i$.
The $\ell_1$-regularized least-square problem can be formulated as a convex quadratic problem (QP) with linear equality constraints. The equivalent QP can be solved using standard interior-point methods (IPM) [3] which can handle medium-sized problems. A specialized IPM for large-scale problems was recently introduced in [4]. Homotopy methods have also been applied to the Lasso to compute the full regularization path when $\mu$ varies [5][6][7]. They are particularly efficient when the solution is very sparse [8]. Other methods to solve (1) include iterative thresholding algorithms [9][10][11], feature-sign search [12], bound optimization methods [13] and gradient projection algorithms [14].
We propose an algorithm to compute the solution of the Lasso when the training examples
$(y_i, x_i)_{i=1 \dots N}$ are obtained sequentially. Let $\theta^{(n)}$ be the solution of the Lasso after observing $n$ training examples and $\theta^{(n+1)}$ the solution after observing a new data point $(y_{n+1}, x_{n+1}) \in \mathbb{R} \times \mathbb{R}^m$. We introduce an optimization problem that allows us to compute an homotopy from $\theta^{(n)}$ to $\theta^{(n+1)}$. Hence we use the previously computed solution as a "warm-start", which makes our method particularly efficient when the supports of $\theta^{(n)}$ and $\theta^{(n+1)}$ are close.
In Section 2 we review the optimality conditions of the Lasso, which we use in Section 3 to derive
our algorithm. We test in Section 4 our algorithm numerically, and show applications to compressive sensing with sequential observations and leave-one-out cross-validation. We also propose an
algorithm to automatically select the regularization parameter each time we observe a new data
point.
2 Optimality conditions for the Lasso
The objective function in (1) is convex and non-smooth since the $\ell_1$ norm is not differentiable when $\theta_i = 0$ for some $i$. Hence there is a global minimum at $\theta$ if and only if the subdifferential of the objective function at $\theta$ contains the 0-vector. The subdifferential of the $\ell_1$-norm at $\theta$ is the following set
$$\partial\|\theta\|_1 = \left\{ v \in \mathbb{R}^m : \begin{array}{ll} v_i = \operatorname{sgn}(\theta_i) & \text{if } |\theta_i| > 0 \\ v_i \in [-1, 1] & \text{if } \theta_i = 0 \end{array} \right\}.$$
Let $X \in \mathbb{R}^{n \times m}$ be the matrix whose $i$th row is equal to $x_i^T$, and $y = (y_1, \dots, y_n)^T$. The optimality conditions for the Lasso are given by
$$X^T(X\theta - y) + \mu_n v = 0, \quad v \in \partial\|\theta\|_1.$$
We define as the active set the indices of the elements of $\theta$ that are non-zero. To simplify notations we assume that the active set appears first, i.e., $\theta^T = (\theta_1^T, 0^T)$ and $v^T = (v_1^T, v_2^T)$, where $v_{1i} = \operatorname{sgn}(\theta_{1i})$ for all $i$, and $-1 \le v_{2j} \le 1$ for all $j$. Let $X = (X_1\; X_2)$ be the partitioning of $X$ according to the active set. If the solution is unique it can be shown that $X_1^T X_1$ is invertible, and we can rewrite the optimality conditions as
$$\begin{cases} \theta_1 = (X_1^T X_1)^{-1}\left(X_1^T y - \mu_n v_1\right) \\ -\mu_n v_2 = X_2^T(X_1 \theta_1 - y) \end{cases}.$$
Note that if we know the active set and the signs of the coefficients of the solution, then we can compute it in closed form.
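As a sanity check, here is a minimal NumPy sketch of this closed-form recovery (assuming the active set and signs are known; the variable names are ours):

```python
import numpy as np

def lasso_closed_form(X, y, active, v1, mu):
    """Solve the Lasso on a known active set with known signs.

    active: indices of non-zero coefficients; v1: their signs (+/-1).
    Returns the full coefficient vector theta.
    """
    X1 = X[:, active]
    theta1 = np.linalg.solve(X1.T @ X1, X1.T @ y - mu * v1)
    theta = np.zeros(X.shape[1])
    theta[active] = theta1
    # Sanity check (holds only for truly optimal inputs):
    # |X^T (X theta - y)| <= mu everywhere, with equality on the active set.
    assert np.all(np.abs(X.T @ (X @ theta - y)) <= mu + 1e-8)
    return theta
```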
3 Proposed homotopy algorithm

3.1 Outline of the algorithm
Suppose we have computed the solution $\theta^{(n)}$ to the Lasso with $n$ observations and that we are given an additional observation $(y_{n+1}, x_{n+1}) \in \mathbb{R} \times \mathbb{R}^m$. Our goal is to compute the solution $\theta^{(n+1)}$ of the augmented problem. We introduce the following optimization problem
$$\theta(t, \mu) = \operatorname*{argmin}_\theta \; \frac{1}{2}\left\| \begin{pmatrix} X \\ t\, x_{n+1}^T \end{pmatrix} \theta - \begin{pmatrix} y \\ t\, y_{n+1} \end{pmatrix} \right\|_2^2 + \mu\|\theta\|_1. \qquad (2)$$
We have $\theta^{(n)} = \theta(0, \mu_n)$ and $\theta^{(n+1)} = \theta(1, \mu_{n+1})$. We propose an algorithm that computes a path from $\theta^{(n)}$ to $\theta^{(n+1)}$ in two steps:

Step 1 Vary the regularization parameter from $\mu_n$ to $\mu_{n+1}$ with $t = 0$. This amounts to computing the regularization path between $\mu_n$ and $\mu_{n+1}$ as done in Lars. The solution path is piecewise linear and we do not review it in this paper (see [15][7][5]).

Step 2 Vary the parameter $t$ from 0 to 1 with $\mu = \mu_{n+1}$. We show in Section 3.2 how to compute this path.
3.2 Algorithm derivation
We show in this section that $\theta(t, \mu)$ is a piecewise smooth function of $t$. To make notations lighter we write $\theta(t) := \theta(t, \mu)$. We saw in Section 2 that the solution to the Lasso can be easily computed once the active set and signs of the coefficients are known. This information is available at $t = 0$, and we show that the active set and signs will remain the same for $t$ in an interval $[0, t^*)$ where the solution $\theta(t)$ is smooth. We denote such a point where the active set changes a "transition point" and show how to compute it analytically. At $t^*$ we update the active set and signs, which will remain valid until $t$ reaches the next transition point. This process is iterated until we know the active set and signs of the solution at $t = 1$, and therefore can compute the desired solution $\theta^{(n+1)}$.

We suppose as in Section 2 and without loss of generality that the solution at $t = 0$ is such that $\theta(0)^T = (\theta_1^T, 0^T)$ and $v^T = (v_1^T, v_2^T) \in \partial\|\theta(0)\|_1$ satisfy the optimality conditions.

Lemma 1. Suppose $\theta_{1i} \neq 0$ for all $i$ and $|v_{2j}| < 1$ for all $j$. There exists $t^* > 0$ such that for all $t \in [0, t^*)$, the solution of (2) has the same support and the same sign as $\theta(0)$.
Proof. The optimality conditions of (2) are given by
$$X^T(X\theta - y) + t^2 x_{n+1}\left(x_{n+1}^T \theta - y_{n+1}\right) + \mu w = 0, \qquad (3)$$
where $w \in \partial\|\theta\|_1$. We show that there exists a solution $\theta(t)^T = (\theta_1(t)^T, 0^T)$ and $w(t)^T = (v_1^T, w_2(t)^T) \in \partial\|\theta(t)\|_1$ satisfying the optimality conditions for $t$ sufficiently small. We partition $x_{n+1}^T = (x_{n+1,1}^T, x_{n+1,2}^T)$ according to the active set. We rewrite the optimality conditions as
$$\begin{cases} X_1^T(X_1 \theta_1(t) - y) + t^2 x_{n+1,1}\left(x_{n+1,1}^T \theta_1(t) - y_{n+1}\right) + \mu v_1 = 0 \\ X_2^T(X_1 \theta_1(t) - y) + t^2 x_{n+1,2}\left(x_{n+1,1}^T \theta_1(t) - y_{n+1}\right) + \mu w_2(t) = 0 \end{cases}.$$
Solving for $\theta_1(t)$ using the first equation gives
$$\theta_1(t) = \left(X_1^T X_1 + t^2 x_{n+1,1} x_{n+1,1}^T\right)^{-1} \left(X_1^T y + t^2 y_{n+1} x_{n+1,1} - \mu v_1\right). \qquad (4)$$
We can see that $\theta_1(t)$ is a continuous function of $t$. Since $\theta_1(0) = \theta_1$ and the elements of $\theta_1$ are all strictly positive, there exists $t_1^*$ such that for $t < t_1^*$, all elements of $\theta_1(t)$ remain positive and do not change signs. We also have
$$-\mu_{n+1} w_2(t) = X_2^T(X_1 \theta_1(t) - y) + t^2 x_{n+1,2}\left(x_{n+1,1}^T \theta_1(t) - y_{n+1}\right). \qquad (5)$$
Similarly $w_2(t)$ is a continuous function of $t$, and since $w_2(0) = v_2$, there exists $t_2^*$ such that for $t < t_2^*$ all elements of $w_2(t)$ are strictly smaller than 1 in absolute value. By taking $t^* = \min(t_1^*, t_2^*)$ we obtain the desired result.
The solution $\theta(t)$ will therefore be smooth until $t$ reaches a transition point where either a component of $\theta_1(t)$ becomes zero, or one of the components of $w_2(t)$ reaches one in absolute value. We now show how to compute the value of the transition point.

Let $\tilde X = \begin{pmatrix} X \\ x_{n+1}^T \end{pmatrix}$ and $\tilde y = \begin{pmatrix} y \\ y_{n+1} \end{pmatrix}$. We partition $\tilde X = (\tilde X_1\; \tilde X_2)$ according to the active set. We use the Sherman-Morrison formula and rewrite (4) as
$$\theta_1(t) = \tilde\theta_1 - \frac{(t^2 - 1)\,\bar e}{1 + \alpha(t^2 - 1)}\, u,$$
where $\tilde\theta_1 = (\tilde X_1^T \tilde X_1)^{-1}(\tilde X_1^T \tilde y - \mu v_1)$, $\bar e = x_{n+1,1}^T \tilde\theta_1 - y_{n+1}$, $\alpha = x_{n+1,1}^T (\tilde X_1^T \tilde X_1)^{-1} x_{n+1,1}$ and $u = (\tilde X_1^T \tilde X_1)^{-1} x_{n+1,1}$. Let $t_{1i}$ be the value of $t$ such that $\theta_{1i}(t) = 0$. We have
$$t_{1i} = \left(1 + \left(\frac{\bar e\, u_i}{\tilde\theta_{1i}} - \alpha\right)^{-1}\right)^{\frac{1}{2}}.$$
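Numerically, these quantities are cheap to form; a small NumPy sketch (our variable names, not the authors' code) of $\theta_1(t)$ and the zero-crossing candidates $t_{1i}$:

```python
import numpy as np

def sherman_morrison_path(Xt1, yt, x1, y_new, mu, v1):
    """Quantities for theta_1(t) = theta~_1 - (t^2-1) e_bar / (1 + alpha (t^2-1)) * u."""
    G = np.linalg.inv(Xt1.T @ Xt1)           # (X~_1^T X~_1)^{-1}
    theta_tilde = G @ (Xt1.T @ yt - mu * v1)
    e_bar = x1 @ theta_tilde - y_new         # x_{n+1,1}^T theta~_1 - y_{n+1}
    alpha = x1 @ G @ x1
    u = G @ x1
    return theta_tilde, e_bar, alpha, u

def t1_candidates(theta_tilde, e_bar, alpha, u):
    """Each active coefficient's zero-crossing value t_{1i} (inf if none in (0, 1])."""
    with np.errstate(divide='ignore', invalid='ignore'):
        t_sq = 1.0 + 1.0 / (e_bar * u / theta_tilde - alpha)
    t = np.sqrt(np.where(t_sq > 0, t_sq, np.nan))
    return np.where(np.isfinite(t) & (t <= 1.0), t, np.inf)
```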
We now examine the case where a component of $w_2(t)$ reaches one in absolute value. We first notice that
$$\begin{cases} x_{n+1,1}^T \theta_1(t) - y_{n+1} = \dfrac{\bar e}{1 + \alpha(t^2 - 1)} \\[1ex] \tilde X_1 \theta_1(t) - \tilde y = \tilde e - \dfrac{(t^2 - 1)\,\bar e}{1 + \alpha(t^2 - 1)}\, \tilde X_1 u \end{cases},$$
where $\tilde e = \tilde X_1 \tilde\theta_1 - \tilde y$. We can rewrite (5) as
$$-\mu w_2(t) = \tilde X_2^T \tilde e + \frac{\bar e\,(t^2 - 1)}{1 + \alpha(t^2 - 1)} \left(x_{n+1,2} - \tilde X_2^T \tilde X_1 u\right).$$
Let $c_j$ be the $j$th column of $\tilde X_2$, and $x^{(j)}$ the $j$th element of $x_{n+1,2}$. The $j$th component of $w_2(t)$ will become 1 in absolute value as soon as
$$\left| c_j^T \tilde e + \frac{\bar e\,(t^2 - 1)}{1 + \alpha(t^2 - 1)} \left(x^{(j)} - c_j^T \tilde X_1 u\right) \right| = \mu.$$
Let t+
2 j (resp. t2 j ) be the value such that w2j (t) = 1 (resp. w2j (t) = ?1). We have
?
?1 ! 12
?
?
?
e?(x(j) ?cT
?+
j X1 u)
?
??
?
?t2 j = 1 +
?
???cT
j e
?1 ! 12 .
?
(j)
T ?
?
e
?
(x
?c
X
u)
?
1
?
j
?
??
?
?t2 j = 1 +
?
??cT
j e
?
Hence the transition point will be equal to t0 = min{mini t1i , minj t+
2 j , minj t2 j } where we restrict ourselves to the real solutions that lie between 0 and 1. We now have the necessary ingredients
to derive the proposed algorithm.
Algorithm 1 RecLasso: homotopy algorithm for online Lasso
1: Compute the path from $\theta^{(n)} = \theta(0, \mu_n)$ to $\theta(0, \mu_{n+1})$.
2: Initialize the active set to the non-zero coefficients of $\theta(0, \mu_{n+1})$ and let $v = \operatorname{sign}(\theta(0, \mu_{n+1}))$.
   Let $v_1$ and $x_{n+1,1}$ be the subvectors of $v$ and $x_{n+1}$ corresponding to the active set, and $\tilde X_1$ the submatrix of $\tilde X$ whose columns correspond to the active set.
   Initialize $\tilde\theta_1 = (\tilde X_1^T \tilde X_1)^{-1}(\tilde X_1^T \tilde y - \mu v_1)$.
   Initialize the transition point $t^0 = 0$.
3: Compute the next transition point $t^0$. If it is smaller than the previous transition point or greater than 1, go to Step 5.
   Case 1 The component of $\theta_1(t^0)$ corresponding to the $i$th coefficient goes to zero:
      Remove $i$ from the active set.
      Update $v$ by setting $v_i = 0$.
   Case 2 The component of $w_2(t^0)$ corresponding to the $j$th coefficient reaches one in absolute value:
      Add $j$ to the active set.
      If the component reaches 1 (resp. $-1$), then set $v_j = 1$ (resp. $v_j = -1$).
4: Update $v_1$, $\tilde X_1$ and $x_{n+1,1}$ according to the updated active set.
   Update $\tilde\theta_1 = (\tilde X_1^T \tilde X_1)^{-1}(\tilde X_1^T \tilde y - \mu v_1)$ (rank 1 update).
   Go to Step 3.
5: Compute the final value at $t = 1$, where the values of $\theta^{(n+1)}$ on the active set are given by $\tilde\theta_1$.
The initialization amounts to computing the solution of the Lasso when we have only one data point $(y, x) \in \mathbb{R} \times \mathbb{R}^m$. In this case, the active set has at most one element. Let $i_0 = \operatorname{argmax}_i |x^{(i)}|$ and $v = \operatorname{sign}(y\,x^{(i_0)})$. We have
$$\theta^{(1)} = \begin{cases} \dfrac{1}{(x^{(i_0)})^2}\left(y\,x^{(i_0)} - \mu_1 v\right) e_{i_0} & \text{if } |y\,x^{(i_0)}| > \mu_1 \\ 0 & \text{otherwise} \end{cases}.$$
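This initialization is a one-liner in code; a direct NumPy transcription (a sketch, with $e_{i_0}$ realized by writing into a zero vector):

```python
import numpy as np

def reclasso_init(y, x, mu1):
    """Lasso solution for a single observation (y, x) with penalty mu1."""
    i0 = np.argmax(np.abs(x))
    theta = np.zeros_like(x, dtype=float)
    if abs(y * x[i0]) > mu1:
        theta[i0] = (y * x[i0] - mu1 * np.sign(y * x[i0])) / x[i0] ** 2
    return theta
```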
We illustrate our algorithm by showing the solution path when the regularization parameter and t
are successively varied with a simple numerical example in Figure 1.
3.3 Complexity
The complexity of our algorithm is dominated by the inversion of the matrix $\tilde X_1^T \tilde X_1$ at each transition point. The size of this matrix is bounded by $q = \min(n, m)$. As the update to this matrix after a
Figure 1: Solution path for both steps of our algorithm. We set $n = 5$, $m = 5$, $\mu_n = .1n$. All the values of $X$, $y$, $x_{n+1}$ and $y_{n+1}$ are drawn at random. On the left is the homotopy when the regularization parameter goes from $\mu_n = .5$ to $\mu_{n+1} = .6$. There is one transition point as $\theta_2$ becomes inactive. On the right is the piecewise smooth path of $\theta(t)$ when $t$ goes from 0 to 1. We can see that $\theta_3$ becomes zero, $\theta_2$ goes from being 0 to being positive, whereas $\theta_1$, $\theta_4$ and $\theta_5$ remain active with their signs unchanged. The three transition points are shown as black dots.
transition point is rank 1, the cost of computing the inverse is $O(q^2)$. Let $k$ be the total number of transition points after varying the regularization parameter from $\mu_n$ to $\mu_{n+1}$ and $t$ from 0 to 1. The complexity of our algorithm is thus $O(kq^2)$. In practice, the size of the active set $d$ is much lower than $q$, and if it remains $\leq d$ throughout the homotopy, the complexity is $O(kd^2)$. It is instructive to compare it with the complexity of recursive least-squares, which corresponds to $\mu_n = 0$ for all $n$ and $n > m$. For this problem the solution typically has $m$ non-zero elements, and therefore the cost of updating the solution after a new observation is $O(m^2)$. Hence if the solution is sparse ($d$ small) and the active set does not change much ($k$ small), updating the solution of the Lasso will be faster than updating the solution to the non-penalized least-square problem.

Suppose that we applied Lars directly to the problem with $n + 1$ observations without using knowledge of $\theta^{(n)}$, by varying the regularization parameter from a large value where the size of the active set is 0 to $\mu_{n+1}$. Let $k'$ be the number of transition points. The complexity of this approach is $O(k'q^2)$, and we can therefore compare the efficiency of these two approaches by comparing the number of transition points.
4 Applications

4.1 Compressive sensing
Let $\theta_0 \in \mathbb{R}^m$ be an unknown vector that we wish to reconstruct. We observe $n$ linear projections $y_i = x_i^T \theta_0 + \epsilon_i$, where $\epsilon_i$ is Gaussian noise of variance $\sigma^2$. In general one needs $m$ such measurements to reconstruct $\theta_0$. However, if $\theta_0$ has a sparse representation with $k$ non-zero coefficients, it has been shown in the noiseless case that it is sufficient to use $n \sim k \log m$ such measurements. This approach is known as compressive sensing [16][17] and has generated a tremendous amount of interest in the signal processing community. The reconstruction is given by the solution of the Basis Pursuit (BP) problem
$$\min_\theta \|\theta\|_1 \quad \text{subject to} \quad X\theta = y.$$
If measurements are obtained sequentially, it is advantageous to start estimating the unknown sparse signal as measurements arrive, as opposed to waiting for a specified number of measurements. Algorithms to solve BP with sequential measurements have been proposed in [18][19], and it has been shown that the change in the active set gives a criterion for how many measurements are needed to recover the underlying signal [19].

In the case where the measurements are noisy ($\sigma > 0$), a standard approach to recover $\theta_0$ is to solve the Basis Pursuit DeNoising problem instead [20]. Hence, our algorithm is well suited for
compressive sensing with sequential and noisy measurements. We compare our proposed algorithm to Lars as applied to the entire dataset each time we receive a new measurement. We also compare our method to coordinate descent [11] with warm start: when receiving a new measurement, we initialize coordinate descent (CD) to the current solution.

We sample measurements of a model where $m = 100$; the vector $\theta_0$ used to sample the data has 25 non-zero elements whose values are Bernoulli $\pm 1$, $x_i \sim \mathcal{N}(0, I_m)$, $\sigma = 1$, and we set $\mu_n = .1n$. The reconstruction error decreases as the number of measurements grows (not plotted). The parameter that controls the complexity of Lars and RecLasso is the number of transition points. We see in Figure 2 that this quantity is consistently smaller for RecLasso, and that after 100 measurements, when the support of the solution does not change much, there are typically fewer than 5 transition points for RecLasso. We also show in Figure 2 a timing comparison for the three algorithms, which we have each implemented in Python. We observed that CD requires a lot of iterations to converge to the optimal solution when $n < m$, and we found it difficult to set a stopping criterion that ensures convergence. Our algorithm is consistently faster than Lars and CD with warm-start.
Figure 2: Compressive sensing results. On the x-axis of the plots are the iterations of the algorithm,
where at each iteration we receive a new measurement. On the left is the comparison of the number
of transition points for Lars and RecLasso, and on the right is the timing comparison for the three
algorithms. The simulation is repeated 100 times and shaded areas represent one standard deviation.
4.2 Selection of the regularization parameter
We have supposed until now a pre-determined regularization schedule, an assumption that is not practical. The amount of regularization indeed depends on the variance of the noise present in the data, which is not known a priori. It is therefore not obvious how to determine the amount of regularization. We write $\mu_n = n\lambda_n$ such that $\lambda_n$ is the weighting factor between the average mean-squared error and the $\ell_1$-norm. We propose an algorithm that selects $\lambda_n$ in a data-driven manner. The problem with $n$ observations is given by
$$\theta(\lambda) = \operatorname*{argmin}_\theta \; \frac{1}{2n}\sum_{i=1}^{n}\left(x_i^T \theta - y_i\right)^2 + \lambda\|\theta\|_1.$$
We have seen previously that $\theta(\lambda)$ is piecewise linear, and we can therefore compute its gradient unless $\lambda$ is a transition point. Let $\operatorname{err}(\lambda) = (x_{n+1}^T \theta(\lambda) - y_{n+1})^2$ be the error on the new observation. We propose the following update rule to select $\lambda_{n+1}$:
$$\log \lambda_{n+1} = \log \lambda_n - \eta\, \frac{\partial \operatorname{err}}{\partial \log \lambda}(\lambda_n)$$
$$\Rightarrow\quad \lambda_{n+1} = \lambda_n \cdot \exp\left\{ 2n\eta\lambda_n\, x_{n+1,1}^T (X_1^T X_1)^{-1} v_1 \left(x_{n+1}^T \theta_1 - y_{n+1}\right) \right\},$$
where the solution after $n$ observations corresponding to the regularization parameter $\lambda_n$ is given by $(\theta_1^T, 0^T)$, and $v_1 = \operatorname{sign}(\theta_1)$. We therefore use the new observation as a test set, which allows us to update the regularization parameter before introducing the new observation by varying $t$ from 0
to 1. We perform the update in the log domain to ensure that $\lambda_n$ is always positive. We performed simulations using the same experimental setup as in Section 4.1 and using $\eta = .01$. We show in Figure 3 a representative example where $\lambda$ converges. We compared this value to the one we would obtain if we had a training and a test set with 250 observations each, such that we could fit the model on the training set for various values of $\lambda$ and see which one gives the smallest prediction error on the test set. We obtain a very similar result, and understanding the convergence properties of our proposed update rule for the regularization parameter is the object of current research.
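In code, the multiplicative update can be sketched as follows (our transcription of the rule above; `eta` is the step size $\eta$, and the inputs are the active-set quantities of the current solution):

```python
import numpy as np

def update_lambda(lam, eta, n, X1, v1, x_new_1, theta1, y_new):
    """Log-domain update of lambda using the new point as a test set.

    X1, v1, theta1, x_new_1: restrictions to the current active set
    (theta is zero elsewhere, so x_{n+1}^T theta = x_{n+1,1}^T theta_1).
    """
    resid = x_new_1 @ theta1 - y_new                  # x_{n+1}^T theta - y_{n+1}
    grad = x_new_1 @ np.linalg.solve(X1.T @ X1, v1)   # x_{n+1,1}^T (X1^T X1)^{-1} v1
    return lam * np.exp(2 * n * eta * lam * grad * resid)
```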
4.3 Leave-one-out cross-validation
We suppose in this section that we have access to a dataset $(y_i, x_i)_{i=1 \dots n}$ and that $\mu_n = n\lambda$. The parameter $\lambda$ is tied to the amount of noise in the data, which we do not know a priori. A standard approach to select this parameter is leave-one-out cross-validation. For a range of values of $\lambda$, we use $n - 1$ data points to solve the Lasso with regularization parameter $(n-1)\lambda$ and then compute the prediction error on the data point that was left out. This is repeated $n$ times such that each data point serves as the test set. Hence the best value for $\lambda$ is the one that leads to the smallest mean prediction error.

Our proposed algorithm can be adapted to the case where we wish to update the solution of the Lasso after a data point is removed. To do so, we compute the first homotopy by varying the regularization parameter from $n\lambda$ to $(n-1)\lambda$. We then compute the second homotopy by varying $t$ from 1 to 0, which has the effect of removing the data point that will be used for testing. As the algorithm is very similar to the one we proposed in Section 3.2, we omit the derivation. We sample a model with $n = 32$ and $m = 32$. The vector $\theta_0$ used to generate the data has 8 non-zero elements. We add Gaussian noise of variance 0.2 to the observations, and select $\lambda$ from a range of 10 values. We show in Figure 4 the histogram of the number of transition points for our algorithm when solving the Lasso with $n - 1$ data points (we solve this problem $10 \times n$ times). Note that in the majority of cases there are very few transition points, which makes our approach very efficient in this setting.
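Schematically, the resulting leave-one-out loop looks as follows; `solve_lasso`, `homotopy_mu`, and `homotopy_t` are assumed interfaces for the solver and the two homotopies, not names from the paper:

```python
import numpy as np

def loo_cv_error(X, y, lam, solve_lasso, homotopy_mu, homotopy_t):
    """Mean leave-one-out prediction error for a given lambda.

    solve_lasso(X, y, mu) -> theta; homotopy_mu moves the penalty from
    n*lam to (n-1)*lam; homotopy_t moves t from 1 to 0, removing the
    held-out point i. All three callables are assumed interfaces.
    """
    n = len(y)
    theta = solve_lasso(X, y, n * lam)
    errs = []
    for i in range(n):
        th = homotopy_mu(theta, X, y, n * lam, (n - 1) * lam)
        th = homotopy_t(th, X, y, i, (n - 1) * lam)  # t: 1 -> 0 on point i
        errs.append((X[i] @ th - y[i]) ** 2)
    return np.mean(errs)
```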
Figure 3: Evolution of the regularization parameter when using our proposed update rule.
Figure 4: Histogram of the number of transition points when removing an observation.

5 Conclusion
We have presented an algorithm to solve $\ell_1$-penalized least-square regression with online observations. We use the current solution as a "warm-start" and introduce an optimization problem that allows us to compute an homotopy from the current solution to the solution after observing a new data
point. The algorithm is particularly efficient if the active set does not change much, and we show a
computational advantage as compared to Lars and Coordinate Descent with warm-start for applications such as compressive sensing with sequential observations and leave-one-out cross-validation.
We have also proposed an algorithm to automatically select the regularization parameter where each
new measurement is used as a test set.
Acknowledgments
We wish to acknowledge support from NSF grant 0835531, and Guillaume Obozinski and Chris
Rozell for fruitful discussions.
References
[1] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[2] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129-159, 2001.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2004.
[4] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1(4):606-617, 2007.
[5] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407-499, 2004.
[6] M. R. Osborne, B. Presnell, and B. A. Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20:389-404, 2000.
[7] D. M. Malioutov, M. Cetin, and A. S. Willsky. Homotopy continuation for sparse signal representation. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Philadelphia, PA, March 2005.
[8] I. Drori and D. L. Donoho. Solution of l1 minimization problems by Lars/homotopy methods. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toulouse, France, May 2006.
[9] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57:1413-1541, 2004.
[10] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen. Locally competitive algorithms for sparse approximation. In Proceedings of the International Conference on Image Processing (ICIP), San Antonio, TX, September 2007.
[11] J. Friedman, T. Hastie, H. Hoefling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302-332, 2007.
[12] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Proceedings of the Neural Information Processing Systems (NIPS), 2007.
[13] M. Figueiredo and R. Nowak. A bound optimization approach to wavelet-based image deconvolution. In Proceedings of the International Conference on Image Processing (ICIP), Genova, Italy, September 2005.
[14] M. Figueiredo, R. Nowak, and S. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586-597, 2007.
[15] M. Osborne. An effective method for computing regression quantiles. IMA Journal of Numerical Analysis, Jan 1992.
[16] E. Candes. Compressive sampling. Proceedings of the International Congress of Mathematicians, 2006.
[17] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
[18] S. Sra and J. A. Tropp. Row-action methods for compressed sensing. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toulouse, France, May 2006.
[19] D. Malioutov, S. Sanghavi, and A. Willsky. Compressed sensing with sequential observations. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Las Vegas, NV, March 2008.
[20] Y. Tsaig and D. L. Donoho. Extensions of compressed sensing. Signal Processing, 86(3):549-571, 2006.
8
| 3431 |@word inversion:1 compression:1 turlach:1 norm:4 advantageous:1 simulation:2 decomposition:1 ipm:2 garrigues:1 contains:1 series:1 err:2 current:5 comparing:1 attracted:1 numerical:3 partition:2 remove:1 plot:1 interpretable:1 update:13 selected:2 ith:2 become:1 manner:1 introduce:4 indeed:2 cand:1 examine:1 v1t:3 automatically:3 xti:3 actual:1 subvectors:1 becomes:3 estimating:1 notation:2 bounded:1 underlying:1 medium:1 compressive:8 mathematician:1 berkeley:4 rm:7 partitioning:1 control:1 grant:1 omit:1 yn:14 positive:4 before:1 cetin:1 timing:2 congress:1 v2t:2 laurent:1 path:9 defrise:1 black:1 initialization:1 shaded:1 range:2 unique:1 practical:1 acknowledgment:1 testing:1 atomic:1 practice:1 recursive:1 jan:1 drori:1 area:1 projection:3 boyd:2 pre:1 interior:2 selection:5 close:1 equivalent:1 fruitful:1 center:1 go:6 convex:3 pure:1 m2:1 rule:3 vandenberghe:1 handle:1 coordinate:5 updated:1 resp:4 annals:2 suppose:5 lighter:1 pa:1 element:9 satisfying:1 particularly:3 t1i:3 updating:3 rozell:2 observed:1 solved:1 ensures:1 decrease:1 removed:1 ui:1 complexity:7 solving:2 rewrite:4 efficiency:1 basis:4 easily:2 icassp:4 various:1 tx:2 derivation:2 univ:1 effective:1 w2j:2 saunders:1 whose:3 solve:7 otherwise:1 reconstruct:2 compressed:5 toulouse:2 statistic:3 noisy:2 final:1 online:4 advantage:1 differentiable:1 propose:7 reconstruction:3 achieve:1 supposed:1 x1t:6 convergence:2 leave:5 converges:1 object:1 derive:2 illustrate:1 implemented:1 lars:9 sgn:2 homotopy:14 im:1 strictly:2 extension:1 sufficiently:1 wright:1 exp:1 predict:2 achieves:1 vary:2 smallest:2 saw:1 v1i:1 minimization:1 gaussian:2 always:1 shrinkage:1 varying:5 focus:1 consistently:2 rank:2 bernoulli:1 kim:1 el:1 stopping:1 i0:4 typically:3 entire:1 france:2 selects:1 arg:3 priori:2 initialize:4 equal:2 once:1 ng:1 sampling:1 represents:1 kd2:1 t2:15 sanghavi:1 simplify:1 piecewise:4 few:2 roof:1 ima:2 ourselves:1 friedman:1 interest:2 nowak:2 necessary:1 unless:1 desired:2 plotted:1 theoretical:1 column:2 cost:2 introducing:1 deviation:1 entry:1 kq:1 johnson:1 varies:1 eec:4 international:7 siam:1 gorinevsky:1 lee:1 receiving:1 invertible:1 daubechies:1 successively:1 opposed:1 de:1 coding:1 coefficient:6 satisfy:1 vi:3 depends:1 performed:1 lot:2 closed:1 observing:5 start:6 recover:2 competitive:1 square:8 variance:3 correspond:1 garrigue:1 iterated:1 xtn:7 malioutov:2 minj:2 reach:6 ty:1 obvious:1 dataset:2 knowledge:1 efron:1 cj:2 schedule:1 appears:1 response:1 done:1 generality:1 hoefling:1 until:4 tropp:1 grows:1 olshausen:1 effect:1 evolution:1 regularization:22 equality:1 hence:6 analytically:1 criterion:2 outline:1 l1:1 image:3 vega:1 recently:1 specialized:1 qp:2 numerically:1 measurement:16 cambridge:1 mathematics:1 similarly:1 sherman:1 dot:1 had:1 access:1 add:2 italy:1 driven:1 yi:9 seen:1 minimum:1 additional:1 greater:1 converge:1 determine:1 signal:12 morrison:1 full:1 desirable:1 smooth:5 faster:2 cross:5 prediction:3 regression:6 noiseless:1 histogram:2 represent:1 iteration:3 receive:2 subdifferential:2 whereas:1 interval:1 w2:11 nv:1 subject:1 fit:2 hastie:2 lasso:20 restrict:1 t0:5 inactive:1 presnell:1 penalty:1 speech:4 action:1 antonio:1 useful:1 amount:6 locally:1 generate:1 continuation:1 exist:1 nsf:1 notice:1 sign:12 neuroscience:1 tibshirani:3 write:2 waiting:1 lustig:1 drawn:1 v1:10 inverse:3 angle:1 baraniuk:1 arrive:1 throughout:1 genova:1 submatrix:1 bound:2 ct:3 quadratic:1 adapted:1 constraint:2 bp:2 x2:1 dominated:1 min:7 optimality:8 department:2 according:4 march:2 
battle:1 remain:4 smaller:3 ei0:1 ghaoui:1 koh:1 equation:1 previously:2 remains:1 x2t:3 needed:1 know:3 serf:1 pursuit:4 available:1 observe:2 v2:2 pierre:1 include:1 ensure:1 yx:3 k1:9 society:1 unchanged:1 objective:2 quantity:1 september:2 gradient:3 majority:1 chris:1 topic:2 willsky:2 index:1 mini:1 difficult:1 setup:1 unknown:2 perform:1 observation:21 acknowledge:1 descent:4 extended:1 communication:1 y1:1 rn:1 redwood:1 varied:1 community:2 introduced:1 specified:1 icip:2 meansquared:1 tsaig:1 california:2 acoustic:4 tremendous:1 nip:1 sparsity:1 elghaoui:1 royal:1 warm:5 regularized:2 raina:1 identifies:1 axis:1 philadelphia:1 review:3 understanding:1 python:1 loss:1 ingredient:1 validation:5 sufficient:1 thresholding:2 cd:3 row:2 penalized:4 soon:1 figueiredo:2 johnstone:1 taking:1 absolute:5 sparse:11 dimension:1 xn:24 transition:22 valid:1 computes:1 commonly:2 san:1 transaction:1 global:1 sequentially:2 active:25 xi:7 search:1 iterative:2 continuous:2 ca:2 sra:1 obtaining:1 mol:1 domain:1 vj:2 noise:5 osborne:2 repeated:2 x1:14 augmented:1 referred:2 representative:1 quantiles:1 wish:4 lie:1 tied:1 weighting:1 wavelet:1 removing:3 formula:1 showing:1 sensing:12 maxi:1 deconvolution:1 exists:3 sequential:7 te:1 chen:1 suited:1 pathwise:1 corresponds:2 obozinski:1 sized:1 formulated:1 goal:1 donoho:4 change:6 determined:1 denoising:2 lemma:1 total:1 experimental:1 e:1 la:1 select:5 guillaume:1 support:4 instructive:1 |
High-dimensional support union recovery in
multivariate regression
Guillaume Obozinski
Department of Statistics
UC Berkeley
[email protected]
Martin J. Wainwright
Department of Statistics
Dept. of Electrical Engineering and Computer Science
UC Berkeley
[email protected]
Michael I. Jordan
Department of Statistics
Department of Electrical Engineering and Computer Science
UC Berkeley
[email protected]
Abstract
We study the behavior of block $\ell_1/\ell_2$ regularization for multivariate regression, where a $K$-dimensional response vector is regressed upon a fixed set of $p$ covariates. The problem of support union recovery is to recover the subset of covariates that are active in at least one of the regression problems. Studying this problem under high-dimensional scaling (where the problem parameters as well as the sample size $n$ tend to infinity simultaneously), our main result is to show that exact recovery is possible once the order parameter given by $\theta_{\ell_1/\ell_2}(n, p, s) := n/[2\,\psi(B^*)\log(p-s)]$ exceeds a critical threshold. Here $n$ is the sample size, $p$ is the ambient dimension of the regression model, $s$ is the size of the union of supports, and $\psi(B^*)$ is a sparsity-overlap function that measures a combination of the sparsities and overlaps of the $K$ regression coefficient vectors that constitute the model. This sparsity-overlap function reveals that block $\ell_1/\ell_2$ regularization for multivariate regression never harms performance relative to a naive $\ell_1$-approach, and can yield substantial improvements in sample complexity (up to a factor of $K$) when the regression vectors are suitably orthogonal relative to the design. We complement our theoretical results with simulations that
demonstrate the sharpness of the result, even for relatively small problems.
1 Introduction
A recent line of research in machine learning has focused on regularization based on block-structured
norms. Such structured norms are well motivated in various settings, among them kernel learning [3, 8], grouped variable selection [12], hierarchical model selection [13], simultaneous sparse
approximation [10], and simultaneous feature selection in multi-task learning [7]. Block-norms that
compose an $\ell_1$-norm with other norms yield solutions that tend to be sparse like the Lasso, but the
structured norm also enforces blockwise sparsity, in the sense that parameters within blocks are
more likely to be zero (or non-zero) simultaneously.
The focus of this paper is the model selection consistency of block-structured regularization in the
setting of multivariate regression. Our goal is to perform model or variable selection, by which we
mean extracting the subset of relevant covariates that are active in at least one regression. We refer
to this problem as the support union problem. In line with a large body of recent work in statistical
machine learning (e.g., [2, 9, 14, 11]), our analysis is high-dimensional in nature, meaning that we
allow the model dimension p (as well as other structural parameters) to grow along with the sample
size $n$. A great deal of work has focused on the case of ordinary $\ell_1$-regularization (Lasso) [2, 11, 14], showing for instance that the Lasso can recover the support of a sparse signal even when $p \gg n$.
Some more recent work has studied consistency issues for block-regularization schemes, including
classical analysis (p fixed) of the group Lasso [1], and high-dimensional analysis of the predictive risk of block-regularized logistic regression [5]. Although there have been various empirical
demonstrations of the benefits of block regularization, the generalizations of the result of [11] obtained by [6, 4] fail to capture the improvements observed in practice. In this paper, our goal is to
understand the following question: under what conditions does block regularization lead to a quantifiable improvement in statistical efficiency, relative to more naive regularization schemes? Here
statistical efficiency is assessed in terms of the sample complexity, meaning the minimal sample size
n required to recover the support union; we wish to know how this scales as a function of problem parameters. Our main contribution is to provide a function quantifying the benefits of block
regularization schemes for the problem of multivariate linear regression, showing in particular that,
under suitable structural conditions on the data, the block-norm regularization we consider never
harms performance relative to more naive $\ell_1$-regularization and can lead to substantial gains in sample
complexity.
More specifically, we consider the following problem of multivariate linear regression: a group of
$K$ scalar outputs are regressed on the same design matrix $X \in \mathbb{R}^{n \times p}$. Representing the regression coefficients as a $p \times K$ matrix $B^*$, the regression model takes the form
$$Y = XB^* + W, \qquad (1)$$
where $Y \in \mathbb{R}^{n \times K}$ and $W \in \mathbb{R}^{n \times K}$ are matrices of observations and zero-mean noise respectively, and $B^*$ has columns $\beta^{*(1)}, \ldots, \beta^{*(K)}$, which are the parameter vectors of each univariate regression.
We are interested in recovering the union of the supports of the individual regressions; more specifically, if $S_k = \{i \in \{1, \ldots, p\} : \beta_i^{*(k)} \neq 0\}$, we would like to recover $S = \cup_k S_k$. The Lasso is often presented as a relaxation of the so-called $\ell_0$ regularization, i.e., the count of the number of non-zero
parameter coefficients, an intractable non-convex function. More generally, block-norm regularizations can be thought of as the relaxation of a non-convex regularization which counts the number of covariates $i$ for which at least one of the univariate regression parameters $\beta_i^{*(k)}$ is non-zero. More specifically, let $\beta_i^*$ denote the $i$th row of $B^*$, and define, for $q \geq 1$,
$$\|B^*\|_{\ell_0/\ell_q} = \left|\left\{i \in \{1, \ldots, p\} : \|\beta_i^*\|_q > 0\right\}\right| \quad\text{and}\quad \|B^*\|_{\ell_1/\ell_q} = \sum_{i=1}^p \|\beta_i^*\|_q.$$
All $\ell_0/\ell_q$ norms define the same function, but differ conceptually in that they lead to different $\ell_1/\ell_q$ relaxations. In particular, the $\ell_1/\ell_1$ regularization is the same as the usual Lasso. The other conceptually most natural block-norms are $\ell_1/\ell_2$ and $\ell_1/\ell_\infty$. While $\ell_1/\ell_\infty$ is of interest, it seems intuitively to be relevant essentially to situations where the support is exactly the same for all regressions, an assumption that we are not willing to make.
In the current paper, we focus on the $\ell_1/\ell_2$ case and consider the estimator $\hat{B}$ obtained by solving the following disguised second-order cone program:
$$\min_{B \in \mathbb{R}^{p \times K}} \left\{ \frac{1}{2n} |||Y - XB|||_F^2 + \lambda_n \|B\|_{\ell_1/\ell_2} \right\}, \qquad (2)$$
where $|||M|||_F := (\sum_{i,j} m_{ij}^2)^{1/2}$ denotes the Frobenius norm. We study the support union problem
under high-dimensional scaling, meaning that the number of observations n, the ambient dimension p and the size of the union of supports s can all tend to infinity. The main contribution of
this paper is to show that under certain technical conditions on the design and noise matrices, the
model selection performance of block-regularized $\ell_1/\ell_2$ regression (2) is governed by the control parameter $\theta_{\ell_1/\ell_2}(n, p; B^*) := \frac{n}{2\,\psi(B^*, \Sigma_{SS})\log(p-s)}$, where $n$ is the sample size, $p$ is the ambient dimension, $s = |S|$ is the size of the union of the supports, and $\psi(\cdot)$ is a sparsity-overlap function defined below. More precisely, the probability of correct support union recovery converges to one for all sequences $(n, p, s, B^*)$ such that the control parameter $\theta_{\ell_1/\ell_2}(n, p; B^*)$ exceeds a fixed critical threshold $\theta_{\mathrm{crit}} < +\infty$. Note that $\theta_{\ell_1/\ell_2}$ is a measure of the sample complexity of the problem, that is, the sample size required for exact recovery as a function of the problem parameters. Whereas the ratio $(n/\log p)$ is standard for high-dimensional theory on $\ell_1$-regularization (essentially due to covering numbers of $\ell_1$ balls), the function $\psi(B^*, \Sigma_{SS})$ is a novel and interesting quantity, which
measures both the sparsity of the matrix $B^*$ and the overlap between the different regression tasks (columns of $B^*$).
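For concreteness, here is a minimal numerical sketch of how one might compute the estimator in (2). This is purely illustrative: the paper treats (2) as a second-order cone program and does not prescribe a solver, so the proximal-gradient scheme below (with row-wise group soft-thresholding) is just one simple choice of ours:

```python
import numpy as np

def block_l1l2_regression(X, Y, lam, n_iter=500):
    """Minimize (1/2n)||Y - XB||_F^2 + lam * sum_i ||row_i(B)||_2
    by proximal gradient descent (ISTA-style)."""
    n, p = X.shape
    K = Y.shape[1]
    B = np.zeros((p, K))
    step = n / (np.linalg.norm(X, 2) ** 2)          # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        G = X.T @ (X @ B - Y) / n                   # gradient of the smooth part
        V = B - step * G
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
        B = scale * V                               # row-wise (group) soft-thresholding
    return B
```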
In Section 2, we introduce the models and assumptions, define key characteristics of the problem and
state our main result and its consequences. Section 3 is devoted to the proof of this main result, with
most technical results deferred to the appendix. Section 4 illustrates with simulations the sharpness
of our analysis and how quickly the asymptotic regime arises.
1.1 Notation
For a (possibly random) matrix $M \in \mathbb{R}^{p \times K}$, and for parameters $1 \leq a \leq b \leq \infty$, we distinguish the $\ell_a/\ell_b$ block norms from the $(a,b)$-operator norms, defined respectively as
$$\|M\|_{\ell_a/\ell_b} := \left(\sum_{i=1}^p \left(\sum_{k=1}^K |m_{ik}|^b\right)^{a/b}\right)^{1/a} \quad\text{and}\quad |||M|||_{a,b} := \sup_{\|x\|_b = 1} \|Mx\|_a, \qquad (3)$$
although $\ell_\infty/\ell_p$ norms belong to both families (see Lemma B.0.1). For brevity, we denote the spectral norm $|||M|||_{2,2}$ as $|||M|||_2$, and the $\ell_\infty$-operator norm $|||M|||_{\infty,\infty} = \max_i \sum_j |M_{ij}|$ as $|||M|||_\infty$.
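In numpy terms, the two families in (3) reduce to a few lines (a small helper of ours, not part of the paper):

```python
import numpy as np

def block_norm(M, a, b):
    """The l_a/l_b block norm of (3): l_b norm within each row, l_a across rows."""
    row_norms = np.linalg.norm(M, ord=b, axis=1)
    return np.linalg.norm(row_norms, ord=a)

def op_norm_inf(M):
    """|||M|||_{inf,inf} = max_i sum_j |M_ij| (maximum absolute row sum)."""
    return np.abs(M).sum(axis=1).max()
```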
2 Main result and some consequences
The analysis of this paper applies to multivariate linear regression problems of the form (1), in which
the noise matrix $W \in \mathbb{R}^{n \times K}$ is assumed to consist of i.i.d. elements $W_{ij} \sim N(0, \sigma^2)$. In addition, we assume that the design matrix $X$ has rows drawn in an i.i.d. manner from a zero-mean Gaussian $N(0, \Sigma)$, where $\Sigma \succ 0$ is a $p \times p$ covariance matrix.
Suppose that we partition the full set of covariates into the support set $S$ and its complement $S^c$, with $|S| = s$, $|S^c| = p - s$. Consider the following block decompositions of the regression coefficient matrix, the design matrix, and its covariance matrix:
$$B^* = \begin{bmatrix} B^*_S \\ B^*_{S^c} \end{bmatrix}, \qquad X = [X_S \;\; X_{S^c}], \qquad\text{and}\qquad \Sigma = \begin{bmatrix} \Sigma_{SS} & \Sigma_{SS^c} \\ \Sigma_{S^cS} & \Sigma_{S^cS^c} \end{bmatrix}.$$
We use $\beta^*_i$ to denote the $i$th row of $B^*$, and assume that the sparsity of $B^*$ is assessed as follows:
(A0) Sparsity: The matrix $B^*$ has row support $S := \{i \in \{1, \ldots, p\} \mid \beta^*_i \neq 0\}$, with $s = |S|$.
In addition, we make the following assumptions about the covariance $\Sigma$ of the design matrix:
(A1) Bounded eigenspectrum: There exist constants $C_{\min} > 0$ (resp. $C_{\max} < +\infty$) such that all eigenvalues of $\Sigma_{SS}$ (resp. $\Sigma$) are greater than $C_{\min}$ (resp. smaller than $C_{\max}$).
(A2) Mutual incoherence: There exists $\gamma \in (0, 1]$ such that $|||\Sigma_{S^cS}(\Sigma_{SS})^{-1}|||_\infty \leq 1 - \gamma$.
(A3) Self incoherence: There exists a constant $D_{\max}$ such that $|||(\Sigma_{SS})^{-1}|||_\infty \leq D_{\max}$.
Assumption A1 is a standard condition required to prevent excess dependence among elements of
the design matrix associated with the support S. The mutual incoherence assumption A2 is also
well known from previous work on model selection with the Lasso [10, 14]. These assumptions are
trivially satisfied by the standard Gaussian ensemble ($\Sigma = I_p$) with $C_{\min} = C_{\max} = D_{\max} = \gamma = 1$.
More generally, it can be shown that various matrix classes satisfy these conditions [14, 11].
2.1 Statement of main result
With the goal of estimating the union of supports $S$, our main result is a set of sufficient conditions using the following procedure. Solve the block-regularized problem (2) with regularization parameter $\lambda_n > 0$, thereby obtaining a solution $\hat{B} = \hat{B}(\lambda_n)$. Use this solution to compute an estimate of the support union as $S(\hat{B}) := \{i \in \{1, \ldots, p\} \mid \hat{\beta}_i \neq 0\}$. This estimator is unambiguously defined if the solution $\hat{B}$ is unique, and as part of our analysis, we show that the solution $\hat{B}$ is indeed unique with high probability in the regime of interest. We study the behavior of this estimator for a
sequence of linear regressions indexed by the triplet (n, p, s), for which the data follows the general
model presented in the previous section with defining parameters $B^*(n)$ and $\Sigma(n)$ satisfying A0-A3. As $(n, p, s)$ tends to infinity, we give conditions on the triplet and properties of $B^*$ for which $\hat{B}$ is unique, and such that $P[\hat{S} = S] \to 1$.
The central objects in our main result are the sparsity-overlap function and the sample complexity parameter, which we define here. For any vector $\beta_i \neq 0$, define $\zeta(\beta_i) := \frac{\beta_i}{\|\beta_i\|_2}$. We extend the function $\zeta$ to any matrix $B_S \in \mathbb{R}^{s \times K}$ with non-zero rows by defining the matrix $\zeta(B_S) \in \mathbb{R}^{s \times K}$ with $i$th row $[\zeta(B_S)]_i = \zeta(\beta_i)$. With this notation, we define the sparsity-overlap function $\psi(B)$ and the sample complexity parameter $\theta_{\ell_1/\ell_2}(n, p; B^*)$ as
$$\psi(B) := |||\zeta(B_S)^T(\Sigma_{SS})^{-1}\zeta(B_S)|||_2 \quad\text{and}\quad \theta_{\ell_1/\ell_2}(n, p; B^*) := \frac{n}{2\,\psi(B^*)\log(p-s)}. \qquad (4)$$
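Computing $\psi$ numerically is a one-liner given the support block $B_S$ and $\Sigma_{SS}$ (an illustrative helper; rows of $B_S$ are assumed non-zero, as in the definition):

```python
import numpy as np

def sparsity_overlap(B_S, Sigma_SS):
    """psi(B) = |||Z' Sigma_SS^{-1} Z|||_2 with Z = zeta(B_S): rows rescaled to unit norm."""
    Z = B_S / np.linalg.norm(B_S, axis=1, keepdims=True)   # zeta applied row-wise
    M = Z.T @ np.linalg.solve(Sigma_SS, Z)
    return np.linalg.norm(M, 2)                            # spectral norm
```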
Finally, we use $b^*_{\min} := \min_{i \in S} \|\beta^*_i\|_2$ to denote the minimal $\ell_2$ row-norm of the matrix $B^*_S$. With this notation, we have the following result:
Theorem 1. Consider a random design matrix $X$ drawn with i.i.d. $N(0, \Sigma)$ row vectors, an observation matrix $Y$ specified by model (1), and a regression matrix $B^*$ such that $(b^*_{\min})^2$ decays strictly more slowly than $\frac{f(p)}{n}\max\{s, \log(p-s)\}$, for any function $f(p) \to +\infty$. Suppose that we solve the block-regularized program (2) with regularization parameter $\lambda_n = \Theta\big(\sqrt{f(p)\log(p)/n}\big)$. For any sequence $(n, p, B^*)$ such that the $\ell_1/\ell_2$ control parameter $\theta_{\ell_1/\ell_2}(n, p; B^*)$ exceeds the critical threshold $\theta_{\mathrm{crit}}(\Sigma) := \frac{C_{\max}}{\gamma^2}$, then with probability greater than $1 - \exp(-\Theta(\log p))$:
(a) the block-regularized program (2) has a unique solution $\hat{B}$, and
(b) its support set $S(\hat{B})$ is equal to the true support union $S$.
Remarks: (i) For the standard Gaussian ensemble ($\Sigma = I_p$), the critical threshold is simply $\theta_{\mathrm{crit}}(\Sigma) = 1$. (ii) A technical condition that we require on the regularization parameter is
$$\frac{\lambda_n^2\, n}{\log(p-s)} \to +\infty, \qquad (5)$$
which is satisfied by the choice given in the statement.
2.2 Some consequences of Theorem 1
It is interesting to consider some special cases of our main result. The simplest special case is the
univariate regression problem ($K = 1$), in which case the function $\zeta(\beta^*)$ outputs an $s$-dimensional sign vector with elements $z^*_i = \mathrm{sign}(\beta^*_i)$, so that $\psi(\beta^*) = z^{*T}(\Sigma_{SS})^{-1}z^* = \Theta(s)$. Consequently, the order parameter of block $\ell_1/\ell_2$-regression for univariate regression is given by $\Theta(n/(2s\log(p-s)))$, which matches the scaling established in previous work on the Lasso [11].
More generally, given our assumption (A1) on $\Sigma_{SS}$, the sparsity overlap $\psi(B^*)$ always lies in the interval $[\frac{s}{KC_{\max}}, \frac{s}{C_{\min}}]$. At the most pessimistic extreme, suppose that $B^* := \beta^*\vec{1}_K^T$, that is, $B^*$ consists of $K$ copies of the same coefficient vector $\beta^* \in \mathbb{R}^p$, with support of cardinality $|S| = s$. We then have $[\zeta(B^*)]_{ik} = \mathrm{sign}(\beta^*_i)/\sqrt{K}$, from which we see that $\psi(B^*) = z^{*T}(\Sigma_{SS})^{-1}z^*$, with $z^*$ again the $s$-dimensional sign vector with elements $z^*_i = \mathrm{sign}(\beta^*_i)$, so that there is no benefit in sample complexity relative to the naive strategy of solving separate Lasso problems and constructing the union of the individually estimated supports. This might seem a pessimistic result, since under model (1), we essentially have $Kn$ observations of the coefficient vector $\beta^*$ with the same design matrix but $K$ independent noise realizations. However, the thresholds as well as the rates of convergence in high-dimensional results such as Theorem 1 are not determined by the noise variance, but rather by the number of interfering variables $(p-s)$.
At the most optimistic extreme, consider the case where $\Sigma_{SS} = I_s$ and (for $s > K$) suppose that $B^*$ is constructed such that the columns of the $s \times K$ matrix $\zeta(B^*)$ are all orthogonal and of equal length. Under this condition, we have
Corollary 1 (Orthonormal tasks). If the columns of the matrix $\zeta(B^*)$ are all orthogonal with equal length and $\Sigma_{SS} = I_{s \times s}$, then the block-regularized problem (2) succeeds in union support recovery once the sample complexity parameter $\frac{n}{2(s/K)\log(p-s)}$ is larger than 1.
For the standard Gaussian ensemble, it is known [11] that the Lasso fails with probability one for all sequences such that $n < (2-\nu)s\log(p-s)$ for any arbitrarily small $\nu > 0$. Consequently, Corollary 1 shows that under suitable conditions on the regression coefficient matrix $B^*$, $\ell_1/\ell_2$ regularization can provide a $K$-fold reduction in the number of samples required for exact support recovery.
As a third illustration, consider, for $\Sigma_{SS} = I_{s \times s}$, the case where the supports $S_k$ of the individual regression problems are all disjoint. The sample complexity parameter for each of the individual Lassos is $n/(2s_k\log(p-s_k))$ where $|S_k| = s_k$, so that the sample size required to recover the support union from individual Lassos scales as $n = \Theta(\max_k [s_k \log(p-s_k)])$. However, if the supports are all disjoint, then the columns of the matrix $Z^*_S = \zeta(B^*_S)$ are orthogonal, and $Z^{*T}_S Z^*_S = \mathrm{diag}(s_1, \ldots, s_K)$, so that $\psi(B^*) = \max_k s_k$ and the sample complexity is the same. In other words, even though there is no sharing of variables at all, there is surprisingly no penalty from regularizing jointly with the $\ell_1/\ell_2$-norm. However, this is not always true if $\Sigma_{SS} \neq I_{s \times s}$, and in many situations $\ell_1/\ell_2$-regularization can have higher sample complexity than separate Lassos.
3 Proof of Theorem 1
In addition to the previous notation, the proofs use the shorthands $\hat{\Sigma}_{SS} = \frac{1}{n}X_S^TX_S$, $\hat{\Sigma}_{S^cS} = \frac{1}{n}X_{S^c}^TX_S$, and $\Pi_S = X_S(\hat{\Sigma}_{SS})^{-1}X_S^T$, which denotes the orthogonal projection onto the range of $X_S$.
High-level proof outline: At a high level, our proof is based on the notion of what we refer to as a primal-dual witness: we first formulate the problem (2) as a second-order cone program (SOCP), with the same primal variable $B$ as in (2) and a dual variable $Z$ whose rows coincide at optimality with the subgradient of the $\ell_1/\ell_2$ norm. We then construct a primal matrix $\hat{B}$ along with a dual matrix $\hat{Z}$ such that, under the conditions of Theorem 1, with probability converging to 1:
(a) The pair $(\hat{B}, \hat{Z})$ satisfies the Karush-Kuhn-Tucker (KKT) conditions of the SOCP.
(b) In spite of the fact that for general high-dimensional problems (with $p \gg n$) the SOCP need not have a unique solution a priori, a strict feasibility condition satisfied by the dual variables $\hat{Z}$ guarantees that $\hat{B}$ is the unique optimal solution of (2).
(c) The support union $\hat{S}$ of $\hat{B}$ is identical to the support union $S$ of $B^*$.
At the core of our constructive procedure is the following convex-analytic result, which characterizes an optimal primal-dual pair for which the primal solution $\hat{B}$ correctly recovers the support set $S$:
Lemma 1. Suppose that there exists a primal-dual pair $(\hat{B}, \hat{Z})$ that satisfies the conditions:
$$\hat{Z}_S = \zeta(\hat{B}_S) \qquad (6a)$$
$$\hat{\Sigma}_{SS}(\hat{B}_S - B^*_S) - \frac{1}{n}X_S^TW = -\lambda_n\hat{Z}_S \qquad (6b)$$
$$\lambda_n\big\|\hat{Z}_{S^c}\big\|_{\ell_\infty/\ell_2} := \Big\|\hat{\Sigma}_{S^cS}(\hat{B}_S - B^*_S) - \frac{1}{n}X_{S^c}^TW\Big\|_{\ell_\infty/\ell_2} < \lambda_n \qquad (6c)$$
$$\hat{B}_{S^c} = 0. \qquad (6d)$$
Then $(\hat{B}, \hat{Z})$ is the unique optimal solution to the block-regularized problem, with $S(\hat{B}) = S$ by construction.
Appendix A proves Lemma 1, with the strict feasibility of $\hat{Z}_{S^c}$ given by (6c) used to certify uniqueness.
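Although conditions (6a)-(6d) serve here purely as a proof device, they also make a useful numerical sanity check on any solver's output. Below is a small sketch of ours (not part of the paper's argument); the implied dual variable encodes (6b) through the zero-gradient condition on the support:

```python
import numpy as np

def check_kkt(B_hat, X, Y, lam, S, tol=1e-6):
    """Numerically verify conditions (6a)-(6d) for a candidate solution B_hat.

    S is the index array of the true support; Z_hat is recovered from the
    gradient of the smooth term, which is condition (6b) applied to all rows.
    """
    n, p = X.shape
    R = X.T @ (X @ B_hat - Y) / n                      # gradient of the smooth part
    Z_hat = -R / lam                                   # implied dual variable
    rows = np.maximum(np.linalg.norm(B_hat[S], axis=1, keepdims=True), 1e-12)
    ok_a = np.allclose(Z_hat[S], B_hat[S] / rows, atol=tol)        # (6a)
    Sc = np.setdiff1d(np.arange(p), S)
    ok_c = np.linalg.norm(Z_hat[Sc], axis=1).max() < 1.0 - tol     # (6c), strict
    ok_d = np.allclose(B_hat[Sc], 0.0, atol=tol)                   # (6d)
    return bool(ok_a and ok_c and ok_d)
```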
3.1 Construction of primal-dual witness
Based on Lemma 1, we construct the primal-dual pair $(\hat{B}, \hat{Z})$ as follows. First, we set $\hat{B}_{S^c} = 0$, to satisfy condition (6d). Next, we obtain the pair $(\hat{B}_S, \hat{Z}_S)$ by solving a restricted version of (2):
$$\hat{B}_S = \arg\min_{B_S \in \mathbb{R}^{s \times K}}\left\{\frac{1}{2n}\left|\left|\left|Y - X\begin{bmatrix}B_S\\0_{S^c}\end{bmatrix}\right|\right|\right|_F^2 + \lambda_n\|B_S\|_{\ell_1/\ell_2}\right\}. \qquad (7)$$
Since $s < n$, the empirical covariance (sub)matrix $\hat{\Sigma}_{SS} = \frac{1}{n}X_S^TX_S$ is strictly positive definite with probability one, which implies that the restricted problem (7) is strictly convex and therefore has a unique optimum $\hat{B}_S$. We then choose $\hat{Z}_S$ to be the solution of equation (6b). Since any such matrix $\hat{Z}_S$ is also a dual solution to the SOCP (7), it must be an element of the subdifferential $\partial\|\hat{B}_S\|_{\ell_1/\ell_2}$. It remains to show that this construction satisfies conditions (6a) and (6c). In order to satisfy condition (6a), it suffices to show that $\hat{\beta}_i \neq 0$ for all $i \in S$. From equation (6b), and since $\hat{\Sigma}_{SS}$ is invertible, we may solve as follows:
$$(\hat{B}_S - B^*_S) = (\hat{\Sigma}_{SS})^{-1}\left(\frac{X_S^TW}{n} - \lambda_n\hat{Z}_S\right) =: \hat{U}_S. \qquad (8)$$
For any row $i \in S$, we have $\|\hat{\beta}_i\|_2 \geq \|\beta^*_i\|_2 - \|\hat{U}_S\|_{\ell_\infty/\ell_2}$. Thus, it suffices to show that the following event occurs with high probability,
$$\mathcal{E}(\hat{U}_S) := \left\{\|\hat{U}_S\|_{\ell_\infty/\ell_2} \leq \tfrac{1}{2}\,b^*_{\min}\right\}, \qquad (9)$$
to show that no row of $\hat{B}_S$ is identically zero. We establish this result later in this section.
Turning to condition (6c), by substituting expression (8) for the difference $(\hat{B}_S - B^*_S)$ into equation (6c), we obtain a $(p-s) \times K$ random matrix $V_{S^c}$, whose row $j \in S^c$ is given by
$$V_j := X_j^T\,[\Pi_S - I_n]\,\frac{W}{n} - \lambda_n\,X_j^T\,\frac{X_S}{n}\,(\hat{\Sigma}_{SS})^{-1}\hat{Z}_S. \qquad (10)$$
In order for condition (6c) to hold, it is necessary and sufficient that the probability of the event
$$\mathcal{E}(V_{S^c}) := \left\{\|V_{S^c}\|_{\ell_\infty/\ell_2} < \lambda_n\right\} \qquad (11)$$
converges to one as $n$ tends to infinity.
Correct inclusion of supporting covariates: We begin by analyzing the probability of $\mathcal{E}(\hat{U}_S)$.
Lemma 2. Under assumption A3 and conditions (5) of Theorem 1, with probability $1 - \exp(-\Theta(\log s))$, we have
$$\|\hat{U}_S\|_{\ell_\infty/\ell_2} \leq O\left(\sqrt{(\log s)/n}\right) + \lambda_n D_{\max} + O\left(\sqrt{s^2/n}\right).$$
This lemma is proved in the Appendix. With the assumed scaling $n = \Omega(s\log(p-s))$, and the assumed slow decrease of $b^*_{\min}$, which we write explicitly as $(b^*_{\min})^2 \geq \frac{1}{\delta_n^2}\,\frac{f(p)\max\{s, \log(p-s)\}}{n}$ for some $\delta_n \to 0$, we have
$$\frac{\|\hat{U}_S\|_{\ell_\infty/\ell_2}}{b^*_{\min}} \leq O(\delta_n), \qquad (12)$$
so that the conditions of Theorem 1 ensure that $\mathcal{E}(\hat{U}_S)$ occurs with probability converging to one.
Correct exclusion of non-support: Next we analyze the event $\mathcal{E}(V_{S^c})$. For simplicity, in the following arguments, we drop the index $S^c$ and write $V$ for $V_{S^c}$. In order to show that $\|V\|_{\ell_\infty/\ell_2} < \lambda_n$ with probability converging to one, we make use of the decomposition
$$\frac{1}{\lambda_n}\|V\|_{\ell_\infty/\ell_2} \leq \sum_{i=1}^3 T_i',$$
where
$$T_1' := \frac{1}{\lambda_n}\|\mathbb{E}[V \mid X_S]\|_{\ell_\infty/\ell_2}, \quad T_2' := \frac{1}{\lambda_n}\|\mathbb{E}[V \mid X_S, W] - \mathbb{E}[V \mid X_S]\|_{\ell_\infty/\ell_2}, \quad T_3' := \frac{1}{\lambda_n}\|V - \mathbb{E}[V \mid X_S, W]\|_{\ell_\infty/\ell_2}.$$
Lemma 3. Under assumption A2, $T_1' \leq 1 - \gamma$. Under conditions (5) of Theorem 1, $T_2' = o_p(1)$.
Therefore, to show that $\frac{1}{\lambda_n}\|V\|_{\ell_\infty/\ell_2} < 1$ with high probability, it suffices to show that $T_3' < \gamma$ with high probability. Until now, we haven't appealed to the sample complexity parameter $\theta_{\ell_1/\ell_2}(n, p; B^*)$. In the next section, we prove that $\theta_{\ell_1/\ell_2}(n, p; B^*) > \theta_{\mathrm{crit}}(\Sigma)$ implies that $T_3' < \gamma$ with high probability.
Lemma 4. Conditionally on $W$ and $X_S$, we have
$$\|V_j - \mathbb{E}[V_j \mid X_S, W]\|_2^2 \;\big|\; W, X_S \;\stackrel{d}{=}\; (\Sigma_{S^c|S})_{jj}\,\xi_j^T M_n\,\xi_j,$$
where $\xi_j \sim N(\vec{0}_K, I_K)$ and where the $K \times K$ matrix $M_n = M_n(X_S, W)$ is given by
$$M_n := \frac{\lambda_n^2}{n}\hat{Z}_S^T(\hat{\Sigma}_{SS})^{-1}\hat{Z}_S + \frac{1}{n^2}W^T(I_n - \Pi_S)W. \qquad (13)$$
But the covariance matrix $M_n$ is itself concentrated. Indeed,
Lemma 5. Under the conditions (5) of Theorem 1, for any $\epsilon > 0$, the following event $\mathcal{T}(\epsilon)$ has probability converging to 1:
$$\mathcal{T}(\epsilon) := \left\{|||M_n|||_2 \leq \lambda_n^2\,\frac{\psi(B^*)}{n}(1+\epsilon)\right\}. \qquad (14)$$
For any fixed $\epsilon > 0$, we have $P[T_3' \geq \gamma] \leq P[T_3' \geq \gamma \mid \mathcal{T}(\epsilon)] + P[\mathcal{T}(\epsilon)^c]$, but, from Lemma 5, $P[\mathcal{T}(\epsilon)^c] \to 0$, so that it suffices to deal with the first term.
Given that $(\Sigma_{S^c|S})_{jj} \leq (\Sigma_{S^cS^c})_{jj} \leq C_{\max}$ for all $j$, on the event $\mathcal{T}(\epsilon)$ we have
$$\max_{j \in S^c}(\Sigma_{S^c|S})_{jj}\,\xi_j^TM_n\xi_j \;\leq\; C_{\max}\,|||M_n|||_2\,\max_{j \in S^c}\|\xi_j\|_2^2 \;\leq\; C_{\max}\,\lambda_n^2\,\frac{\psi(B^*)}{n}(1+\epsilon)\,\max_{j \in S^c}\|\xi_j\|_2^2,$$
and
$$P[T_3' \geq \gamma \mid \mathcal{T}(\epsilon)] \leq P\left[\max_{j \in S^c}\|\xi_j\|_2^2 \geq 2t^*(n, B^*)\right] \quad\text{with}\quad t^*(n, B^*) := \frac{1}{2}\,\frac{n\,\gamma^2}{C_{\max}\,\psi(B^*)\,(1+\epsilon)}.$$
Finally, using the union bound and a large deviation bound for $\chi^2$ variates, we get the following condition, which is equivalent to the condition of Theorem 1, $\theta_{\ell_1/\ell_2}(n, p; B^*) > \theta_{\mathrm{crit}}(\epsilon)$:
Lemma 6. $P\left[\max_{j \in S^c}\|\xi_j\|_2^2 \geq 2t^*(n, B^*)\right] \to 0$ if $t^*(n, B^*) > (1+\nu)\log(p-s)$ for some $\nu > 0$.
4 Simulations
In this section, we illustrate the sharpness of Theorem 1 and furthermore ascertain how quickly the predicted behavior is observed as $n, p, s$ grow in different regimes, for two regression tasks (i.e., $K = 2$). In the following simulations, the matrix $B^*$ of regression coefficients is designed with entries $\beta^*_{ij} \in \{-1/\sqrt{2}, 1/\sqrt{2}\}$ to yield a desired value of $\psi(B^*)$. The design matrix $X$ is sampled from the standard Gaussian ensemble. Since $|\beta^*_{ij}| = 1/\sqrt{2}$ in this construction, we have $B^*_S = \zeta(B^*_S)$ and $b^*_{\min} = 1$. Moreover, since $\Sigma = I_p$, the sparsity-overlap $\psi(B^*)$ is simply $|||\zeta(B^*)^T\zeta(B^*)|||_2$. From our analysis, the sample complexity parameter $\theta_{\ell_1/\ell_2}$ is controlled by the "interference" of irrelevant covariates, and not by the variance of a noise component.
We consider linear sparsity with $s = \alpha p$, for $\alpha = 1/8$, for various ambient model dimensions $p \in \{32, 256, 1024\}$. For each value of $p$, we perform simulations varying the sample size $n$ to match corresponding values of the basic Lasso sample complexity parameter, given by $\theta_{\mathrm{Las}} := n/(2s\log(p-s))$, in the interval $[0.25, 1.5]$. In each case, we solve the block-regularized problem (2) with sample size $n = 2\theta_{\mathrm{Las}}\,s\log(p-s)$ using the regularization parameter $\lambda_n = \log(p-s)\sqrt{(\log s)/n}$. In all cases, the noise level is set at $\sigma = 0.1$.
For our construction of the matrices $B^*$, we choose both $p$ and the scalings for the sparsity so that the obtained values for $s$ are multiples of four, and construct the columns $Z^{(1)*}$ and $Z^{(2)*}$ of the matrix $\zeta(B^*)$ from copies of vectors of length 4. Denoting by $\otimes$ the usual matrix tensor product, we consider the following three cases (see the sketch after this list):
Identical regressions: We set $Z^{(1)*} = Z^{(2)*} = \frac{1}{\sqrt{2}}\vec{1}_s$, so that the sparsity-overlap is $\psi(B^*) = s$.
Orthogonal regressions: Here $B^*$ is constructed with $Z^{(1)*} \perp Z^{(2)*}$, so that $\psi(B^*) = \frac{s}{2}$, the most favorable situation. To achieve this, we set $Z^{(1)*} = \frac{1}{\sqrt{2}}\vec{1}_s$ and $Z^{(2)*} = \frac{1}{\sqrt{2}}\vec{1}_{s/2}\otimes(1, -1)^T$.
Intermediate angles: In this intermediate case, the columns $Z^{(1)*}$ and $Z^{(2)*}$ are at a $60°$ angle, which leads to $\psi(B^*) = \frac{3}{4}s$. We set $Z^{(1)*} = \frac{1}{\sqrt{2}}\vec{1}_s$ and $Z^{(2)*} = \frac{1}{\sqrt{2}}\vec{1}_{s/4}\otimes(1, 1, 1, -1)^T$.
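The three geometries can be reproduced in a few lines; the helper below is our own illustration (one can check that the resulting spectral norms are $s$, $s/2$, and $3s/4$ respectively):

```python
import numpy as np

def make_design(s, case="orthogonal"):
    """Columns Z1, Z2 of zeta(B*) for the three simulated geometries (s divisible by 4)."""
    z1 = np.ones(s) / np.sqrt(2)
    if case == "identical":                               # psi(B*) = s
        z2 = z1.copy()
    elif case == "orthogonal":                            # psi(B*) = s/2
        z2 = np.kron(np.ones(s // 2), [1, -1]) / np.sqrt(2)
    else:                                                 # 60 degrees: psi(B*) = 3s/4
        z2 = np.kron(np.ones(s // 4), [1, 1, 1, -1]) / np.sqrt(2)
    Z = np.column_stack([z1, z2])
    psi = np.linalg.norm(Z.T @ Z, 2)                      # Sigma_SS = I here
    return Z, psi
```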
Figure 1 shows plots of all three cases and the reference Lasso case for the three different values
of the ambient dimension and the two types of sparsity described above. Note how the curves all
undergo a threshold phenomenon, with the location consistent with the predictions of Theorem 1.
[Figure 1: three panels titled "p=32 s=p/8=4", "p=256 s=p/8=32", and "p=1024 s=p/8=128", each plotting P(support correct) on the vertical axis against $\beta$ on the horizontal axis, with four curves labeled L1, $Z_1 = Z_2$, $\angle(Z_1, Z_2) = 60°$, and $Z_1 \perp Z_2$.]
Figure 1. Plots of support recovery probability $P[\hat{S} = S]$ versus the basic $\ell_1$ control parameter $\theta_{\mathrm{Las}} = n/[2s\log(p-s)]$ for linear sparsity $s = p/8$, and for increasing values of $p \in \{32, 256, 1024\}$ from left to right. Each graph shows four curves corresponding to the case of independent $\ell_1$ regularization (pluses), and for $\ell_1/\ell_2$ regularization, the cases of identical regressions (crosses), intermediate angles (nablas), and orthogonal regressions (squares). As plotted in dotted vertical lines, Theorem 1 predicts that the identical case should succeed for $\theta_{\mathrm{Las}} > 1$ (same as the ordinary Lasso), the intermediate case for $\theta_{\mathrm{Las}} > 0.75$, and the orthogonal case for $\theta_{\mathrm{Las}} > 0.50$. The shift of these curves confirms this prediction.
5 Discussion
We studied support union recovery under high-dimensional scaling with $\ell_1/\ell_2$ regularization, and showed that its sample complexity is determined by the function $\psi(B^*)$. The latter integrates the sparsity of each univariate regression with the overlap of all the supports and the discrepancies between the estimated parameter vectors. In favorable cases, for $K$ regressions, the sample complexity for $\ell_1/\ell_2$ is $K$ times smaller than that of the Lasso. Moreover, this gain is not obtained at the expense of an assumption of shared support over the data. In fact, for standard Gaussian designs, the regularization seems "adaptive" in the sense that it does not perform worse than the Lasso for disjoint supports. This is not necessarily the case for more general designs, and in some situations, which need to be characterized in future work, it could do worse than the Lasso.
References
[1] F. Bach. Consistency of the group Lasso and multiple kernel learning. Technical report, INRIA - Département d'Informatique, École Normale Supérieure, 2008.
[2] F. Bach, G. Lanckriet, and M. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. Int. Conf. Machine Learning (ICML). Morgan Kaufmann, 2004.
[3] D. Donoho, M. Elad, and V. M. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Info Theory, 52(1):6-18, January 2006.
[4] H. Liu and J. Zhang. On the l1-lq regularized regression. Technical Report arXiv:0802.1517v1, Carnegie Mellon University, 2008.
[5] L. Meier, S. van de Geer, and P. Bühlmann. The group lasso for logistic regression. Technical report, Mathematics Department, Swiss Federal Institute of Technology Zürich, 2007.
[6] Y. Nardi and A. Rinaldo. On the asymptotic properties of the group lasso estimator for linear models. Electronic Journal of Statistics, 2:605-633, 2008.
[7] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 2009. To appear.
[8] M. Pontil and C. A. Micchelli. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099-1125, 2005.
[9] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. SpAM: sparse additive models. In Neural Info. Proc. Systems (NIPS) 21, Vancouver, Canada, December 2007.
[10] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Info Theory, 52(3):1030-1051, March 2006.
[11] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity using l1-constrained quadratic programs. Technical Report 709, Department of Statistics, UC Berkeley, 2006.
[12] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society B, 68(1):49-67, 2006.
[13] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Technical report, Statistics Department, UC Berkeley, 2007.
[14] P. Zhao and B. Yu. Model selection with the lasso. J. of Machine Learning Research, pages 2541-2567, 2007.
Extended Grassmann Kernels for
Subspace-Based Learning
Daniel D. Lee
GRASP Laboratory
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Jihun Hamm
GRASP Laboratory
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
Subspace-based learning problems involve data whose elements are linear subspaces of a vector space. To handle such data structures, Grassmann kernels have
been proposed and used previously. In this paper, we analyze the relationship between Grassmann kernels and probabilistic similarity measures. Firstly, we show
that the KL distance in the limit yields the Projection kernel on the Grassmann
manifold, whereas the Bhattacharyya kernel becomes trivial in the limit and is
suboptimal for subspace-based problems. Secondly, based on our analysis of the
KL distance, we propose extensions of the Projection kernel which can be extended to the set of affine as well as scaled subspaces. We demonstrate the advantages of these extended kernels for classification and recognition tasks with
Support Vector Machines and Kernel Discriminant Analysis using synthetic and
real image databases.
1 Introduction
In machine learning problems the data often live in a vector space, typically a Euclidean space.
However, there are many other kinds of non-Euclidean spaces suitable for data outside this conventional context. In this paper we focus on the domain where each data sample is a linear subspace
of vectors, rather than a single vector, of a Euclidean space. Low-dimensional subspace structures
are commonly encountered in computer vision problems. For example, the variation of images due
to the change of pose, illumination, etc., is well-approximated by the subspace spanned by a few "eigenfaces". More recent examples include the dynamical system models of video sequences from
human actions or time-varying textures, represented by the linear span of the observability matrices
[1, 14, 13].
Subspace-based learning is an approach to handle the data as a collection of subspaces instead of the
usual vectors. The appropriate data space for the subspace-based learning is the Grassmann manifold
$G(m, D)$, which is defined as the set of $m$-dimensional linear subspaces in $\mathbb{R}^D$. In particular, we
can define positive definite kernels on the Grassmann manifold, which allows us to treat the space as
if it were a Euclidean space. Previously, the Binet-Cauchy kernel [17, 15] and the Projection kernel
[16, 6] have been proposed and demonstrated the potential for subspace-based learning problems.
On the other hand, the subspace-based learning problem can be approached purely probabilistically. Suppose each set of vectors consists of i.i.d. samples from an arbitrary probability distribution. Then it is possible to compare two such distributions of vectors with probabilistic similarity measures, such
as the KL distance¹, the Chernoff distance, or the Bhattacharyya/Hellinger distance, to name a few [11, 7, 8, 18]. Furthermore, the Bhattacharyya affinity is indeed a positive definite kernel function on the space of distributions and has nice closed-form expressions for the exponential family [7].
¹ By distance we mean any nonnegative measure of similarity and not necessarily a metric.
In this paper, we investigate the relationship between the Grassmann kernels and the probabilistic distances. The link is provided by the probabilistic generalization of subspaces with a Factor Analyzer, which is a Gaussian "blob" that has nonzero volume along all dimensions.
Firstly, we show that the KL distance yields the Projection kernel on the Grassmann manifold in the
limit of zero noise, whereas the Bhattacharyya kernel becomes trivial in the limit and is suboptimal
for subspace-based problems. Secondly, based on our analysis of the KL distance, we propose an extension of the Projection kernel, which is originally confined to the set of linear subspaces, to the set of affine as well as scaled subspaces.
We will demonstrate the extended kernels with Support Vector Machines and Kernel Discriminant Analysis using synthetic and real image databases. The proposed kernels show better performance than the previously used kernels, such as the Binet-Cauchy and the Bhattacharyya kernels.
2 Probabilistic subspace distances and kernels
In this section we will consider the two well-known probabilistic distances, the KL distance and the
Bhattacharyya distance, and establish their relationships to the Grassmann kernels. Although these
probabilistic distances are not restricted to specific distributions, we will model the data distribution
as the Mixture of Factor Analyzers (MFA) [4]. If we have i = 1, ..., N sets in the data, then each set
is considered as i.i.d. samples from the i-th Factor Analyzer
$$x \sim p_i(x) = N(u_i, C_i), \qquad C_i = Y_iY_i^T + \sigma^2 I_D, \qquad (1)$$
where $u_i \in \mathbb{R}^D$ is the mean, $Y_i$ is a full-rank $D \times m$ matrix ($D > m$), and $\sigma$ is the ambient noise level. The FA model is a practical substitute for a Gaussian distribution in case the dimensionality $D$ of the images is greater than the number of samples $n$ in a set; otherwise it is impossible to estimate the full covariance $C$, let alone invert it.
More importantly, we use the FA distribution to provide the link between the Grassmann manifold and the space of probability distributions. In fact, a linear subspace can be considered as the "flattened" ($\sigma \to 0$) limit of a zero-mean ($u_i = 0$), homogeneous ($Y_i^TY_i = I_m$) FA distribution. We will look at the limits of the KL distance and the Bhattacharyya kernel under this condition.
2.1 KL distance in the limit
The (symmetrized) KL distance is defined as
$$J_{KL}(p_1, p_2) = \int [p_1(x) - p_2(x)]\,\log\frac{p_1(x)}{p_2(x)}\,dx. \qquad (2)$$
Let $C_i = \sigma^2 I + Y_iY_i^T$ be the covariance matrix, and define $\tilde{Y}_i = Y_i(\sigma^2 I + Y_i^TY_i)^{-1/2}$ and $\tilde{Z} = 2^{-1/2}[\tilde{Y}_1\;\tilde{Y}_2]$. In this case the KL distance is
$$J_{KL}(p_1, p_2) = \frac{1}{2}\,\mathrm{tr}(-\tilde{Y}_1^T\tilde{Y}_1 - \tilde{Y}_2^T\tilde{Y}_2) + \frac{\sigma^{-2}}{2}\,\mathrm{tr}(Y_1^TY_1 + Y_2^TY_2 - \tilde{Y}_1^TY_2Y_2^T\tilde{Y}_1 - \tilde{Y}_2^TY_1Y_1^T\tilde{Y}_2) + \frac{\sigma^{-2}}{2}\,(u_1 - u_2)^T\left(2I_D - \tilde{Y}_1\tilde{Y}_1^T - \tilde{Y}_2\tilde{Y}_2^T\right)(u_1 - u_2). \qquad (3)$$
Under the subspace condition ($\sigma \to 0$, $u_i = 0$, $Y_i^TY_i = I_m$, $i = 1, \ldots, N$), the KL distance simplifies to
$$J_{KL}(p_1, p_2) = \frac{1}{2}\left(-2\,\frac{m}{\sigma^2+1}\right) + \frac{\sigma^{-2}}{2}\left(2m - \frac{2}{\sigma^2+1}\,\mathrm{tr}(Y_1^TY_2Y_2^TY_1)\right) = \frac{1}{2\sigma^2(\sigma^2+1)}\left(2m - 2\,\mathrm{tr}(Y_1^TY_2Y_2^TY_1)\right).$$
We can ignore the multiplying factors which do not depend on $Y_1$ or $Y_2$, and rewrite the distance as $J_{KL}(p_1, p_2) = 2m - 2\,\mathrm{tr}(Y_1^TY_2Y_2^TY_1)$. We immediately realize that the distance $J_{KL}(p_1, p_2)$ coincides with the definition of the squared Projection distance [2, 16, 6], with the corresponding Projection kernel
$$k_{\mathrm{Proj}}(Y_1, Y_2) = \mathrm{tr}(Y_1^TY_2Y_2^TY_1). \qquad (4)$$
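In code, (4) is a one-liner on orthonormal bases (an illustrative helper of ours; $Y_1, Y_2$ are assumed to be $D \times m$ matrices with orthonormal columns):

```python
import numpy as np

def k_proj(Y1, Y2):
    """Projection kernel (4): equals tr(Y1' Y2 Y2' Y1) = ||Y1' Y2||_F^2."""
    return np.linalg.norm(Y1.T @ Y2, 'fro') ** 2
```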
2.2 Bhattacharyya kernel in the limit
Jebara and Kondor [7, 8] proposed the Probability Product kernel
$$k_{\mathrm{Prob}}(p_1, p_2) = \int [p_1(x)\,p_2(x)]^{\rho}\,dx \quad (\rho > 0), \qquad (5)$$
which includes the Bhattacharyya kernel ($\rho = 1/2$) as a special case.
Under the subspace condition ($\sigma \to 0$, $u_i = 0$, $Y_i^TY_i = I_m$, $i = 1, \ldots, N$), the kernel $k_{\mathrm{Prob}}$ becomes
$$k_{\mathrm{Prob}}(p_1, p_2) = \sigma^{(1-2\rho)D}(2\rho)^{-D}\pi^{-D/2}\;\sigma^{2\rho(m-D)+D}(\sigma^2+1)^{-m}\det(I_{2m} - \tilde{Z}^T\tilde{Z})^{-1/2} \qquad (6)$$
$$\propto \det\left(I_m - \frac{1}{(2\sigma^2+1)^2}\,Y_1^TY_2Y_2^TY_1\right)^{-1/2}. \qquad (7)$$
Suppose the two subspaces $\mathrm{span}(Y_1)$ and $\mathrm{span}(Y_2)$ intersect only at the origin, that is, the singular values of $Y_1^TY_2$ are strictly less than 1. In this case $k_{\mathrm{Prob}}$ has a finite value as $\sigma \to 0$ and the inversion in (7) is well-defined. In contrast, the diagonal terms of $k_{\mathrm{Prob}}$ become
$$k_{\mathrm{Prob}}(Y_1, Y_1) = \det\left(\left(1 - \frac{1}{(2\sigma^2+1)^2}\right)I_m\right)^{-1/2} = \left(\frac{(2\sigma^2+1)^2}{4\sigma^2(\sigma^2+1)}\right)^{m/2}, \qquad (8)$$
which diverges to infinity as $\sigma \to 0$. This implies that after normalizing the kernel by the diagonal terms, the resulting kernel becomes a trivial kernel,
$$\tilde{k}_{\mathrm{Prob}}(Y_i, Y_j) = \begin{cases} 1, & \mathrm{span}(Y_i) = \mathrm{span}(Y_j) \\ 0, & \text{otherwise} \end{cases} \quad\text{as } \sigma \to 0. \qquad (9)$$
The derivations are detailed in the thesis [5]. As we claimed earlier, the Probability Product kernel,
including the Bhattacharyya kernel, loses its discriminating power as the distributions become close
to subspaces.
3 Extended Projection Kernel
Based on the analysis of the previous section, we will extend the Projection kernel (4) to more
general spaces than the Grassmann manifold in this section. We will examine the two directions of
extension: from linear to affine, and from homogeneous to scaled subspaces.
3.1 Extension to affine subspaces
An affine subspace in $\mathbb{R}^D$ is a linear subspace with an "offset". In that sense a linear subspace is simply an affine subspace with a zero offset. Analogously to the (linear) Grassmann manifold, we can define an affine Grassmann manifold as the set of all $m$-dimensional affine subspaces in $\mathbb{R}^D$.² The affine span is defined from an orthonormal basis $Y \in \mathbb{R}^{D \times m}$ and an offset $u \in \mathbb{R}^D$ by
$$\mathrm{aspan}(Y, u) := \{x \mid x = Yv + u, \; v \in \mathbb{R}^m\}. \qquad (10)$$
By definition, the representation of an affine space by $(Y, u)$ is not unique, and there is an invariance condition for the equivalence of representations:
Definition 1 (invariance to representations). $\mathrm{aspan}(Y_1, u_1) = \mathrm{aspan}(Y_2, u_2)$ if and only if $\mathrm{span}(Y_1) = \mathrm{span}(Y_2)$ and $Y_1^{\perp}(Y_1^{\perp})^Tu_1 = Y_2^{\perp}(Y_2^{\perp})^Tu_2$, where $Y^{\perp}$ is any orthonormal basis for the orthogonal complement of $\mathrm{span}(Y)$.
Similarly to the definition of Grassmann kernels [6], we can now formally define the affine Grassmann kernel as follows. Let $k : (\mathbb{R}^{D \times m} \times \mathbb{R}^D) \times (\mathbb{R}^{D \times m} \times \mathbb{R}^D) \to \mathbb{R}$ be a real-valued symmetric function, $k(Y_1, u_1, Y_2, u_2) = k(Y_2, u_2, Y_1, u_1)$.
² The Grassmann manifold is defined as a quotient space $O(D)/O(m) \times O(D-m)$, where $O$ is the orthogonal group. The affine Grassmann manifold is similarly defined as $E(D)/E(m) \times O(D-m)$, where $E$ is the Euclidean group. For more explanation, please refer to [5].
Definition 2. A real-valued symmetric function $k$ is an affine Grassmann kernel if it is positive definite and invariant to different representations: $k(Y_1, u_1, Y_2, u_2) = k(Y_3, u_3, Y_4, u_4)$ for any $Y_1, Y_2, Y_3, Y_4$ and $u_1, u_2, u_3, u_4$ such that $\mathrm{aspan}(Y_1, u_1) = \mathrm{aspan}(Y_3, u_3)$ and $\mathrm{aspan}(Y_2, u_2) = \mathrm{aspan}(Y_4, u_4)$.
With this definition we check whether the KL distance in the limit suggests an affine Grassmann kernel. The KL distance with only the homogeneity condition $Y_1^TY_1 = Y_2^TY_2 = I_m$ becomes
$$J_{KL}(p_1, p_2) \approx \frac{1}{2\sigma^2}\left[2m - 2\,\mathrm{tr}(Y_1^TY_2Y_2^TY_1) + (u_1 - u_2)^T(2I_D - Y_1Y_1^T - Y_2Y_2^T)(u_1 - u_2)\right].$$
Ignoring the multiplicative factor, the first term is the same as the original Projection kernel, which we will denote as the "linear" kernel to emphasize the underlying assumption:
$$k_{\mathrm{Lin}}(Y_1, Y_2) = \mathrm{tr}(Y_1Y_1^TY_2Y_2^T). \qquad (11)$$
The second term gives rise to a new "kernel",
$$k_u(Y_1, u_1, Y_2, u_2) = u_1^T(2I_D - Y_1Y_1^T - Y_2Y_2^T)u_2, \qquad (12)$$
which measures the similarity of the offsets $u_1$ and $u_2$ scaled by $2I - Y_1Y_1^T - Y_2Y_2^T$. However, this term is unfortunately not invariant under the invariance condition. We instead propose the slight modification
$$k(Y_1, u_1, Y_2, u_2) = u_1^T(I - Y_1Y_1^T)(I - Y_2Y_2^T)u_2.$$
The proof that the proposed form is invariant and positive definite is straightforward and is omitted. Combined with the linear term $k_{\mathrm{Lin}}$, this defines the new "affine" kernel
$$k_{\mathrm{Aff}}(Y_1, u_1, Y_2, u_2) = \mathrm{tr}(Y_1Y_1^TY_2Y_2^T) + u_1^T(I - Y_1Y_1^T)(I - Y_2Y_2^T)u_2. \qquad (13)$$
As we can see, the KL distance with only the homogeneity condition has two terms, one related to the subspace $Y$ and one to the offset $u$. This suggests a general construction rule for affine kernels: if we have two separate positive definite kernels, one for subspaces and one for offsets, we can add or multiply them together to construct new kernels [10].
3.2 Extension to scaled subspaces
We have assumed homogeneous subspaces so far. However, if the subspaces are computed from the PCA of real data, the eigenvalues in general will have non-homogeneous values. To incorporate these scales for affine subspaces, we now allow $Y$ to be non-orthonormal and check whether the resulting kernel is still valid.
Let $Y_i$ be a full-rank $D \times m$ matrix, and let $\hat{Y}_i = Y_i(Y_i^TY_i)^{-1/2}$ be the orthonormalization of $Y_i$. Ignoring the multiplicative factors, the limiting ($\sigma \to 0$) "kernel" from (3) becomes
$$k = \frac{1}{2}\,\mathrm{tr}(\hat{Y}_1\hat{Y}_1^TY_2Y_2^T + Y_1Y_1^T\hat{Y}_2\hat{Y}_2^T) + u_1^T(2I - \hat{Y}_1\hat{Y}_1^T - \hat{Y}_2\hat{Y}_2^T)u_2,$$
which is again not well-defined.
The second term is the same as (12) in the previous subsection, and can be modified in the same way to $k_u = u_1^T(I - \hat{Y}_1\hat{Y}_1^T)(I - \hat{Y}_2\hat{Y}_2^T)u_2$. The first term is not positive definite, and there are several ways to remedy it. We propose to use the following form,
$$k(Y_1, Y_2) = \frac{1}{2}\,\mathrm{tr}(Y_1\hat{Y}_1^T\hat{Y}_2Y_2^T + \hat{Y}_1Y_1^TY_2\hat{Y}_2^T) = \mathrm{tr}(\hat{Y}_1^T\hat{Y}_2Y_2^TY_1),$$
among other possibilities.
The sum of the two modified terms is the proposed "affine scaled" kernel:
$$k_{\mathrm{AffSc}}(Y_1, u_1, Y_2, u_2) = \mathrm{tr}(Y_1\hat{Y}_1^T\hat{Y}_2Y_2^T) + u_1^T(I - \hat{Y}_1\hat{Y}_1^T)(I - \hat{Y}_2\hat{Y}_2^T)u_2. \qquad (14)$$
This is a positive definite kernel, which can be shown from the definition.
Summary of the extended Projection kernels
The proposed kernels are summarized below. Let $Y_i$ be a full-rank $D \times m$ matrix, and let $\hat{Y}_i = Y_i(Y_i^TY_i)^{-1/2}$ be the orthonormalization of $Y_i$ as before.
$$k_{\mathrm{Lin}}(Y_1, Y_2) = \mathrm{tr}(\hat{Y}_1^T\hat{Y}_2\hat{Y}_2^T\hat{Y}_1), \qquad k_{\mathrm{LinSc}}(Y_1, Y_2) = \mathrm{tr}(\hat{Y}_1^T\hat{Y}_2Y_2^TY_1),$$
$$k_{\mathrm{Aff}}(Y_1, u_1, Y_2, u_2) = \mathrm{tr}(\hat{Y}_1^T\hat{Y}_2\hat{Y}_2^T\hat{Y}_1) + u_1^T(I - \hat{Y}_1\hat{Y}_1^T)(I - \hat{Y}_2\hat{Y}_2^T)u_2,$$
$$k_{\mathrm{AffSc}}(Y_1, u_1, Y_2, u_2) = \mathrm{tr}(\hat{Y}_1^T\hat{Y}_2Y_2^TY_1) + u_1^T(I - \hat{Y}_1\hat{Y}_1^T)(I - \hat{Y}_2\hat{Y}_2^T)u_2. \qquad (15)$$
We also spherize the kernels,
$$\tilde{k}(Y_1, u_1, Y_2, u_2) = k(Y_1, u_1, Y_2, u_2)\,k(Y_1, u_1, Y_1, u_1)^{-1/2}\,k(Y_2, u_2, Y_2, u_2)^{-1/2},$$
so that $\tilde{k}(Y_1, u_1, Y_1, u_1) = 1$ for any $Y_1$ and $u_1$.
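A direct numpy transcription of (15) and the spherization follows; this is a sketch of ours, not the authors' code, with offsets $u$ passed as 1-D arrays (the caveat below on how $Y$ and $u$ are estimated still applies):

```python
import numpy as np

def _orth(Y):
    """Y_hat = Y (Y'Y)^{-1/2}; via the SVD Y = U S V', this equals U V'."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def k_lin(Y1, Y2):
    return np.linalg.norm(_orth(Y1).T @ _orth(Y2), 'fro') ** 2

def k_lin_sc(Y1, Y2):
    return np.trace(_orth(Y1).T @ _orth(Y2) @ Y2.T @ Y1)

def _offset_term(Yh1, u1, Yh2, u2):
    # u1' (I - Yh1 Yh1') (I - Yh2 Yh2') u2 as a dot product of projected offsets
    return (u1 - Yh1 @ (Yh1.T @ u1)) @ (u2 - Yh2 @ (Yh2.T @ u2))

def k_aff(Y1, u1, Y2, u2):
    Yh1, Yh2 = _orth(Y1), _orth(Y2)
    return np.linalg.norm(Yh1.T @ Yh2, 'fro') ** 2 + _offset_term(Yh1, u1, Yh2, u2)

def k_aff_sc(Y1, u1, Y2, u2):
    Yh1, Yh2 = _orth(Y1), _orth(Y2)
    return np.trace(Yh1.T @ Yh2 @ Y2.T @ Y1) + _offset_term(Yh1, u1, Yh2, u2)

def spherize(k, a1, a2):
    """k_tilde = k(x1,x2) / sqrt(k(x1,x1) k(x2,x2)); a1, a2 are argument tuples."""
    return k(*a1, *a2) / np.sqrt(k(*a1, *a1) * k(*a2, *a2))
```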
There is a caveat in implementing these kernels. Although we used the same notations $Y$ and $\hat{Y}$ for both linear and affine kernels, they are computed differently: for linear kernels, $Y$ and $\hat{Y}$ are computed from the data assuming $u = 0$, whereas for affine kernels, $Y$ and $\hat{Y}$ are computed after removing the estimated mean $u$ from the data.
3.3 Extension to nonlinear subspaces
A systematic way of extending the Projection kernel from linear/affine subspaces to nonlinear spaces
is to use an implicit map via a kernel function, where the latter kernel is to be distinguished from the
former kernels. Note that the proposed kernels (15) can be computed only from the inner products
of the column vectors of the Y's and u's, including the orthonormalization procedure. If we replace the
inner products yi'yj of those vectors by a positive definite function f(yi, yj) on Euclidean spaces,
this implicitly defines a nonlinear feature space. This 'doubly kernel' approach has already been
proposed for the Binet-Cauchy kernel [17, 8] and for probabilistic distances in general [18]. We
can adopt the trick for the extended Projection kernels as well to extend the kernels to operate on
'nonlinear subspaces'³.
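As a rough sketch of the doubly-kernel computation (ours; the RBF kernel f and the coefficient matrices A1, A2 are our illustrative assumptions, with each Ai, obtained for example from kernel PCA, assumed to satisfy the RKHS orthonormality Ai' Kii Ai = I):

import numpy as np

def rbf(X1, X2, gamma=1.0):
    # f(y_i, y_j) = exp(-gamma ||y_i - y_j||^2); data points are columns
    sq = (np.sum(X1 ** 2, axis=0)[:, None]
          + np.sum(X2 ** 2, axis=0)[None, :] - 2.0 * X1.T @ X2)
    return np.exp(-gamma * sq)

def k_lin_rkhs(X1, A1, X2, A2, gamma=1.0):
    # With Yhat_i = Phi(X_i) A_i orthonormal in the feature space,
    # tr(Yhat1' Yhat2 Yhat2' Yhat1) = tr(M M') with M = A1' K12 A2,
    # where K12 is the cross-Gram matrix of the kernel f.
    M = A1.T @ rbf(X1, X2, gamma) @ A2
    return float(np.trace(M @ M.T))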
4 Experiments with synthetic data

In this section we demonstrate the application of the extended Projection kernels to two-class classification problems with Support Vector Machines (SVMs).

4.1 Synthetic data
The extended kernels are defined under different assumptions of data distribution. To test the kernels
we generate three types of data ('easy', 'intermediate' and 'difficult') from the MFA distribution,
which cover the different ranges of data distribution.
A total of N = 100 FA distributions are generated in D = 10 dimensional space. The parameters
of each FA distribution pi(x) = N(ui, Ci) are randomly chosen such that (a sampling sketch follows the list):
• 'Easy' data have well-separated means ui and homogeneous scales Yi'Yi
• 'Intermediate' data have partially overlapping means ui and homogeneous scales Yi'Yi
• 'Difficult' data have totally overlapping means (u1 = ... = uN = 0) and randomly chosen
scales between 0 and 1.
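For concreteness, points from a single FA distribution pi(x) = N(ui, Ci) can be sampled as below (our sketch, assuming the probabilistic-PCA form Ci = Yi Yi' + sigma^2 I that matches the estimation procedure used later):

import numpy as np

def sample_fa(u, Y, sigma, n, rng):
    # x = u + Y z + sigma * eps with z ~ N(0, I_m), eps ~ N(0, I_D),
    # so that marginally x ~ N(u, Y Y' + sigma^2 I)
    D, m = Y.shape
    Z = rng.standard_normal((m, n))
    E = rng.standard_normal((D, n))
    return u[:, None] + Y @ Z + sigma * E

rng = np.random.default_rng(0)
X = sample_fa(np.zeros(10), rng.standard_normal((10, 3)), 0.1, 50, rng)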
The class label for each distribution pi is assigned as follows. We choose the pair of distributions p+
and p− which are farthest apart from each other among all pairs of distributions. Then the labels
of the remaining distributions pi are determined from whether they are closer to p+ or p−. The
distances are measured by the KL distance JKL.
³ That is, the preimage corresponding to the linear subspaces in the RKHS via the feature map.

4.2 Algorithms and results

We compare the performance of the Euclidean SVM with linear/polynomial/RBF kernels and the
performance of the SVM with Grassmann kernels. To test the original SVMs, we randomly sampled
n = 50 points from each FA distribution pi(x). We evaluate the algorithm with N-fold cross validation by holding out one set and training with the other N − 1 sets. The polynomial kernel used is
k(x1, x2) = (⟨x1, x2⟩ + 1)³.
To test the Grassmann SVM, we first estimated the mean ui and the basis Yi from the n = 50 points of
each FA distribution pi(x) used for the original SVM. The Yi, ui and σ are estimated simply from
probabilistic PCA [12], although they can also be estimated by the Expectation Maximization
approach.
Six different Grassmann kernels are compared: 1) the original and the extended Projection kernels (Linear, Linear Scaled, Affine, Affine Scaled), 2) the Binet-Cauchy
kernel kBC(Y1, Y2) = (det Y1'Y2)² = det Y1'Y2Y2'Y1, and 3) the Bhattacharyya kernel
kBhat(p1, p2) = ∫ [p1(x) p2(x)]^{1/2} dx adapted for FA distributions.
We evaluate the algorithms with a leave-one-out test by holding out one subspace and training with the other N − 1
subspaces.
Table 1: Classification rates of the Euclidean SVMs and the Grassmann SVMs. BC and Bhat
are short for the Binet-Cauchy and Bhattacharyya kernels, respectively.

               Euclidean         Grassmann                                Probabilistic
               Linear   Poly     Linear   Lin Sc   Aff     Aff Sc   BC      Bhat
Easy           84.63    79.85    55.10    55.30    92.70   92.30    54.70   46.10
Intermediate   62.40    61.76    68.10    67.50    85.20   83.60    60.90   59.00
Difficult      52.00    63.74    80.10    81.00    80.30   81.20    68.90   77.30
Table 1 shows the classification rates of the Euclidean SVMs and the Grassmann SVMs, averaged
over 10 trials. The results show that the best rates are obtained from the extended kernels, and the
Euclidean kernels lag behind for all three types of data. Interestingly, the polynomial kernels often
perform worse than the linear kernels, and the RBF kernel performed even worse, which we do not
report. For the 'difficult' data where the means are zero, the linear SVMs degrade to the chance
level (50%), which agrees with the intuitive picture that any decision hyperplane passing through the
origin will roughly halve the points from a zero-mean distribution. As expected, the linear kernel
is inappropriate for data with nonzero offsets ('easy' and 'intermediate'), whereas the affine kernel
performs well regardless of the offsets. However, there is no significant difference between the
homogeneous and the scaled kernels. The Binet-Cauchy and the Bhattacharyya kernels mostly
underperformed.
We conclude that under certain conditions the extended kernels have clear advantages over the original linear kernels and the Euclidean kernels for the subspace-based classification problem.
5 Experiments with real-world data

In this section we demonstrate the application of the extended Projection kernels to recognition
problems with the kernel Fisher Discriminant Analysis [10].

5.1 Databases
The Yale face database and the Extended Yale face database [3] together consist of pictures of 38
subjects with 9 different poses and 45 different lighting conditions. The ETH-80 [9] is an object
database designed for object categorization tests under varying poses. The database consists of pictures of 8 object categories and 10 object instances for each category, recorded under 41 different
poses.

These databases have naturally factorized structures which make them ideal for testing subspace-based
learning algorithms. In the Yale Face database, a set consists of images of a person at a fixed pose under all illumination conditions. By treating the set as a point in the Grassmann manifold, we can
perform illumination-invariant learning tasks with the data. For the ETH-80 database, a set consists of
images of all possible poses of an object from a category. Also by treating such a set as a point in the
Grassmann manifold, we can perform pose-invariant learning tasks with the data.

There are a total of N = 279 and 80 sets as described above, respectively. The images are resized
to dimension D = 504 and 1024 respectively, and subspaces of dimension up to m = 9 are used
to compute the kernels. The subspace parameters Yi, ui and σ are estimated
from probabilistic PCA [12].
5.2 Algorithms and results

We perform subject recognition tests with Yale Face, and categorization tests with the ETH-80 database.
Since these databases are highly multiclass (31 and 8 classes) relative to the total number of samples, we use kernel Discriminant Analysis to reduce dimensionality and extract features, in conjunction with a 1-NN classifier. Six different Grassmann kernels are compared: the extended
Projection (Lin/LinSc/Aff/AffSc) kernels, the Binet-Cauchy kernel, and the Bhattacharyya kernel.
The baseline algorithm (Eucl) is Linear Discriminant Analysis applied to the original images in
the data from which the subspaces are computed.

Figure 1 summarizes the average recognition/categorization rates from 9- and 10-fold cross validation with the Yale Face and ETH-80 databases respectively. The results show that the best rates are
achieved from the extended kernels: the linear scaled kernel in Yale Face and the affine kernel in ETH-80. However, the differences within the extended kernels are small. The performance of the extended
kernels remains relatively unaffected by the subspace dimensionality, which is a convenient property
in practice since we do not know the true dimensionality a priori. However, the Binet-Cauchy and the
Bhattacharyya kernels do not perform as well, and degrade fast as the subspace dimension increases.
An analysis of their poor performance is given in the thesis [5].
6 Conclusion

In this paper we analyzed the relationship between probabilistic distances and the geometric Grassmann kernels, especially the KL distance and the Projection kernel. This analysis helps us to understand the limitations of the Bhattacharyya kernel for subspace-based problems, and also suggests
extensions of the Projection kernel. With synthetic and real data we demonstrated that the extended
kernels can outperform the original Projection kernel, as well as the previously used Bhattacharyya
and Binet-Cauchy kernels, for subspace-based classification problems. The relationship between
other probabilistic distances and the Grassmann kernels is yet to be fully explored, and we expect to
see more results from a follow-up study.
References
[1] Gianfranco Doretto, Alessandro Chiuso, Ying Nian Wu, and Stefano Soatto. Dynamic textures. Int. J.
Comput. Vision, 51(2):91–109, 2003.
[2] Alan Edelman, Tomás A. Arias, and Steven T. Smith. The geometry of algorithms with orthogonality
constraints. SIAM J. Matrix Anal. Appl., 20(2):303–353, 1999.
[3] Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach.
Intell., 23(6):643–660, 2001.
[4] Zoubin Ghahramani and Geoffrey E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, Department of Computer Science, University of Toronto, 1996.
[5] Jihun Hamm. Subspace-based Learning with Grassmann Manifolds. Ph.D. thesis in
Electrical and Systems Engineering, University of Pennsylvania, 2008. Available at
http://www.seas.upenn.edu/~jhham/Papers/thesis-jh.pdf.
[6] Jihun Hamm and Daniel Lee. Grassmann discriminant analysis: a unifying view on subspace-based
learning. In Int. Conf. Mach. Learning, 2008.
[Figure 1: two panels, 'Yale Face' and 'ETH-80', plotting rate (%) from 40 to 100 against subspace dimension m = 1, 3, 5, 7, 9 for the methods Eucl, Lin, LinSc, Aff, AffSc, BC, and Bhat.]
Figure 1: Comparison of Grassmann kernels for face recognition/object categorization tasks with
kernel discriminant analysis. The extended Projection kernels (Lin/LinSc/Aff/AffSc) outperform
the baseline method (Eucl) and the Binet-Cauchy (BC) and the Bhattacharyya (Bhat) kernels.
[7] Tony Jebara and Risi Imre Kondor. Bhattacharyya and expected likelihood kernels. In COLT, pages 57–71,
2003.
[8] Risi Imre Kondor and Tony Jebara. A kernel between sets of vectors. In Proc. of the 20th Int. Conf. on
Mach. Learn., pages 361–368, 2003.
[9] Bastian Leibe and Bernt Schiele. Analyzing appearance and contour based methods for object categorization. CVPR, 02:409, 2003.
[10] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[11] Gregory Shakhnarovich, John W. Fisher, and Trevor Darrell. Face recognition from long-term observations. In Proc. of the 7th Euro. Conf. on Computer Vision, pages 851–868, London, UK, 2002.
[12] Michael E. Tipping and Christopher M. Bishop. Probabilistic principal component analysis. Journal of
the Royal Statistical Society Series B, 61(3):611–622, 1999.
[13] Pavan Turaga, Ashok Veeraraghavan, and Rama Chellappa. Statistical analysis on Stiefel and Grassmann
manifolds with applications in computer vision. In CVPR, 2008.
[14] Ashok Veeraraghavan, Amit K. Roy-Chowdhury, and Rama Chellappa. Matching shape sequences
in video with applications in human movement analysis. IEEE Trans. Pattern Anal. Mach. Intell.,
27(12):1896–1909, 2005.
[15] S.V.N. Vishwanathan and Alexander J. Smola. Binet-Cauchy kernels. In NIPS, 2004.
[16] Liwei Wang, Xiao Wang, and Jufu Feng. Subspace distance analysis with application to adaptive Bayesian
algorithm for face recognition. Pattern Recogn., 39(3):456–464, 2006.
[17] Lior Wolf and Amnon Shashua. Learning over sets using kernel principal angles. J. Mach. Learn. Res.,
4:913–931, 2003.
[18] Shaohua Kevin Zhou and Rama Chellappa. From sample similarity to ensemble similarity: Probabilistic distance measures in Reproducing Kernel Hilbert Space. IEEE Trans. Pattern Anal. Mach. Intell.,
28(6):917–929, 2006.
2,685 | 3,434 | Probabilistic detection of short events, with
application to critical care monitoring
Norm Aleks
U.C. Berkeley
[email protected]

Diane Morabito
U.C. San Francisco
[email protected]

Stuart Russell
U.C. Berkeley
[email protected]

Kristan Staudenmayer
Stanford University
[email protected]

Michael G. Madden
National U. of Ireland, Galway
[email protected]

Mitchell Cohen
U.C. San Francisco
[email protected]

Geoffrey Manley
U.C. San Francisco
[email protected]
Abstract
We describe an application of probabilistic modeling and inference technology to
the problem of analyzing sensor data in the setting of an intensive care unit (ICU).
In particular, we consider the arterial-line blood pressure sensor, which is subject
to frequent data artifacts that cause false alarms in the ICU and make the raw data
almost useless for automated decision making. The problem is complicated by
the fact that the sensor data are averaged over fixed intervals whereas the events
causing data artifacts may occur at any time and often have durations significantly
shorter than the data collection interval. We show that careful modeling of the
sensor, combined with a general technique for detecting sub-interval events and
estimating their duration, enables detection of artifacts and accurate estimation
of the underlying blood pressure values. Our model's performance identifying
artifacts is superior to two other classifiers' and about as good as a physician's.
1 Introduction
The work we report here falls under the general heading of state estimation, i.e., computing the
posterior distribution P(Xt |e1:t ) for the state variables X of a partially observable stochastic system,
given a sequence of observations e1:t . The specific setting for our work at the Center for Biomedical
Informatics in Critical Care (C-BICC) is an intensive care unit (ICU) at San Francisco General
Hospital (SFGH) specializing in traumatic brain injury, part of a major regional trauma center. In this
setting, the state variables Xt include aspects of patient state, while the evidence variables Et include
up to 40 continuous streams of sensor data such as blood pressures (systolic/diastolic/mean, arterial
and venous), oxygenation of blood, brain, and other tissues, intracranial pressure and temperature,
inspired and expired oxygen and CO2 , and many other measurements from the mechanical ventilator.
A section of data from these sensors is shown in Figure 1(a). It illustrates a number of artifacts,
including, in the top traces, sharp deviations in blood pressure due to external interventions in the
arterial line; in the middle traces, ubiquitous drop-outs in the venous oxygen level; and in the lower
traces, many jagged spikes in measured lung compliance due to coughing.
The artifacts cannot be modeled simply as 'noise' in the sensor model; many are extended over time
(some for as long as 45 minutes) and most exhibit complex patterns of their own. Simple techniques
for 'cleaning' such data, such as median filtering, fail. Instead, we follow the general approach
suggested by Russell and Norvig (2003), which involves careful generative modeling of sensor state
using dynamic Bayesian networks (DBNs).
This paper focuses on the arterial-line blood pressure sensor (Figure 1(b)), a key element of the
monitoring system. As we describe in Section 2, this sensor is subject to multiple artifacts, including
1
[Figure 1(b) schematic labels: flush solution (heparinized saline); pressure bag and gauge; transducer; input to bedside monitor; 3-way stopcock with site for zeroing or blood draw; radial artery catheter. Panels (a) and (b).]
Figure 1: (a) One day's worth of minute-by-minute monitoring data for an ICU patient. (b) Arterial-line blood pressure measurement.
artificially low or high values due to zeroing, line flushes, or the drawing of blood samples. These
artifacts not only complicate the state estimation and diagnosis task; they also corrupt recorded data
and cause a large number of false alarms in the ICU, which lead in turn to true alarms being ignored
and alarms being turned off (Tsien & Fackler, 1997). By modeling the artifact-generating processes,
we hope to be able to infer the true underlying blood pressure even when artifacts occur.
To this point, the task described would be an applied Bayesian modeling problem of medium difficulty. What makes it slightly unusual and perhaps of more general interest is the fact that our
sensor data are recorded as averages over each minute (our analysis is off-line, for the purpose of
making recorded data useable for biomedical research), whereas the events of interest (in this case,
re-zeroings, line flushes, and blood draws) can occur at any time and have durations ranging from
under 5 seconds to over 100 seconds. Thus, the natural time step for modeling the sensor state transitions might be one second, whereas the measurement interval is much larger. This brings up the
question of how a 'slow' (one-minute) model might be constructed and how it relates to a 'fast'
(one-second) model. This is an instance of a very important issue studied in the dynamical systems
and chemical kinetics literatures under the heading of separation of time scales (see, e.g., Rao &
Arkin, 2003). Fortunately, in our case the problem has a simple, exact solution: Section 3 shows
that a one-minute model can be derived efficiently, offline, from the more natural one-second model
and gives exactly the same evidence likelihood. The more general problem of handling multiple
time scales within DBNs, noted by Aliferis and Cooper (1996), remains open.
Section 4 describes the complete model for blood pressure estimation, including artifact models, and
Section 5 then evaluates the model on real patient data. We show a number of examples of artifacts,
their detection, and inference of the underlying state values. We analyze model performance over
more than 300 hours of data from 7 patients, containing 228 artifacts. Our results show very high
precision and recall rates for event detection; we are able to eliminate over 90% of false alarms for
blood pressure while missing fewer than 1% of the true alarms.
Our work is not the first to consider the probabilistic analysis of intensive care data. Indeed, one
of the best known early Bayes net applications was the ALARM model for patient monitoring under ventilation (Beinlich et al., 1989), although this model had no temporal element. The work
most closely related to ours is that of Williams, Quinn, and McIntosh (2005), who apply factorial
switching Kalman filters, a particular class of DBNs, to artifact detection in neonatal ICU data.
Their (one-second) model is roughly analogous to the models described by Russell and Norvig,
using Boolean state variables to represent events that block normal sensor readings.

Figure 2: 1-second (top) and 1-minute-average (bottom) data for systolic/mean/diastolic pressures.
On the left, a blood draw and line flush in quick succession. On the right, a zeroing.

Sieben and Gather (2007) have applied discriminative models (decision forests and, more recently, SVMs) to
correction of one-second-resolution heart-rate data. Another important line of work is the MIMIC
project, which, like ours, aims to apply model-based methods to the interpretation of ICU data (Heldt
et al., 2006).
2 Blood Pressure Monitoring
Blood pressure informs much of medical thinking and is typically measured continuously in
the ICU. The most common ICU blood pressure measurement device is the arterial line, illustrated
in Figure 1(b); a catheter placed into one of the patient's small arteries is connected to a pressure
transducer whose output is displayed on a bedside monitor.
Because blood flow varies during the cardiac cycle, blood pressure is pulsatile. In medical records,
including our data set, blood pressure measurements are summarized in two or three values: systolic
blood pressure, which is the maximum reached during the cardiac cycle, diastolic, which is the
corresponding minimum, and sometimes the mean.
We consider the three common artifact types illustrated in Figure 2: 1) in a blood draw, sensed
pressure gradually climbs toward that of the pressure bag, then suddenly returns to the blood pressure
when the stopcock is closed, seconds or minutes later; 2) in a line flush, the transducer is exposed to
bag pressure for a few seconds; 3) during zeroing, the transducer is exposed to atmospheric pressure
(defined as zero). We refer to blood draws and line flushes collectively as 'bag events.'
Figure 2(top) shows the artifacts using data collected at one-second intervals. However, the data
we work with are the one-minute means of the one-second data, as shown in Figure 2(bottom). A
fairly accurate simplification is that each second's reading reflects either the true blood pressure
or an artifactual pressure; thus our model for the effect of averaging is that each recorded one-minute datum is a linear function of the true pressure, the artifactual pressure(s), and the fraction
of the minute occupied by artifact(s). Using systolic pressure s as an example, for an artifact of
length p (as a fraction of the averaging interval) and mean artifact pressure x, the apparent pressure
is s̄ = px + (1 − p)s.
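In code, this mixing model and its inversion are one-liners (our illustrative sketch, not the deployed system):

def apparent_pressure(s_true, x_artifact, p_fraction):
    # sbar = p*x + (1 - p)*s: minute-average corrupted by an artifact
    return p_fraction * x_artifact + (1.0 - p_fraction) * s_true

def true_pressure(s_bar, x_artifact, p_fraction):
    # invert the mixture for s; valid only when p_fraction < 1
    return (s_bar - p_fraction * x_artifact) / (1.0 - p_fraction)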
Our DBN model in Section 4 includes summary variables and equations relating the one-minute
readings to the true underlying pressures, artifacts' durations, bag and atmospheric pressure, etc.;
it can therefore estimate the duration and other characteristics of artifacts that have corrupted the
data. Patterns produced by artifacts in the one-minute data are highly varied, but it turns out (see
Section 5) that the detailed modeling pays off in revealing the characteristic relationships that follow
from the nature of the corrupting events.
3 Modeling Sub-Interval Events
The data we work with are generated by a combination of physiological processes that vary over
timescales of several minutes and artifactual events lasting perhaps only a few seconds. A natural
Figure 3: (left) DBN model showing relationships among the fast event variables f_i, interval count
variables G_{Nj}, and measurement variables E_{Nj}. (right) A reduced model with the same distributions
for G_0, G_N, . . . , G_{Nt}.

choice would be a 'fast' time step for the DBN model, e.g., 1 second: on this timescale, the sensor
state variables indicate whether or not an artifactual event is currently in progress. The transition
model for these variables indicates the probability at each second that a new event begins and the
probability that an event already in progress continues. Assuming for now that there is only one
event type, and given memoryless (geometric) distribution of durations such as we see in Section 5,
only two parameters are necessary: p = P( fi = 1| fi1 = 1) and q = P( fi = 1| fi1 = 0). Both can be
estimated simply by measuring event frequencies and durations.
The main drawback of using a fast time step is computational: inference must be carried out over 60
time steps for every one measurement that arrives. Furthermore, much of this inference is pointless
given the lack of evidence at all the intervening time steps.
We could instead build a model with a ?slow? time step of one minute, so that evidence arrives at
each time step. The problem here is to determine the structure and parameters of such a model. First,
to explain the evidence, we?ll need a count variable saying how many seconds of the minute were
occupied by events. It is easy to see that this variable must depend on the corresponding variable
one minute earlier: for example, if the preceding minute was fully occupied by a blood draw event,
then the blood draw was in progress at the beginning of the current minute, so the current minute is
likely to be at least partially occupied by the event. (If there are multiple mutually exclusive event
types, then each count variable depends on all the preceding variables.) Each count variable can
take on 61 values, which leads to huge conditional distributions summarizing how the preceding 60
seconds could be divided among the various event types. Estimating these seems hopeless.
However, as we will now see, CPTs for the slow model need not be estimated or guessed?they can
be derived from the fast model. This is the typical situation with separation of time scales: slowtime-scale models are computationally more tractable but can only be constructed by deriving them
from a fast-time-scale model.
Consider a fast model as shown in Figure 3(a). Let the fast time step be and a measurement interval
be N (where N = 60 in our domain). fi = 1 iff an event is occurring at time i ; GN j Ni =j1
f
N( j1) i
counts the number of fast time steps within the jth measurement interval during which an event is
occurring. The jth observed measurement EN j is determined entirely by GN j ; therefore, it suffices
GNt .
to consider the joint distribution over G0 GN
To obtain a model containing only variables at the slow intervals, we simply need to sum out the
fi variables other than the ones at interval boundaries. We can do this topologically by a series of
arc reversal and node removal operations (Shachter, 1986); a simple proof by induction (omitted)
shows that, regardless of the number of fast steps per slow step, we obtain the reduced structure
GNt .
in Figure 3(b). By construction, this model gives the same joint distribution for G0 GN
Importantly, neither fN j nor GN j depends on GN( j1) .1
To complete the reduced model, we need the conditional distributions P(GN j | fN( j1) ) and
P( fN j | fN( j1) GN j ). That is, how many ?ones? do we expect in an interval, given the event status at the beginning of the interval, and what is the probability that an event is occurring at the
beginning of the next interval, given also the number of ones in the current interval? Given the fast
model?s parameters p and q, these quantities can be calculated offline using dynamic programming:
1 Intuitively, the distribution over G
N j for the Nth interval is determined by the value of f at the beginning of
the interval, independent of GN( j1) , whereas fN j depends on the count GN j for the preceding interval because,
for example, a high count implies that an event is likely to be in progress at the end of the interval.
4
a table is constructed for the variables fi and Ci for i from 1 up to N, where Ci is the number of ones
up to i ? 1 and C0 = 0. The recurrences for fi and Ci are as follows:
P(Ci , fi = 1| f0 ) = p P(Ci?1 =Ci ? 1, fi?1 = 1| f0 ) + q P(Ci?1 =Ci , fi?1 = 0| f0 )
P(Ci , fi = 0| f0 ) = (1 ? p) P(Ci?1 =Ci ? 1, fi?1 = 1| f0 ) + (1 ? q) P(Ci?1 =Ci , fi?1 = 0| f0 )
(1)
(2)
Extracting the required conditional probabilities from the table is straightforward. The table is of
size O(N 2 ), so the total time to compute the table is negligible for any plausible value of N. Now
we have the following result:
Theorem 1 Given the conditional distributions computed by Equations 1 and 2, the reduced model
in Figure 3(b) yields the same distribution for the count sequence G0 , GN , . . . , GNt as the fine-grained
model in Figure 3(a).
The conditional distributions that we obtain by dynamic programming have some interesting limit
cases. In particular, when events are short compared to measurement intervals and occur frequently,
we expect the dependence on fN( j?1) to disappear and the distribution for GN j to be approximately
N
Gaussian with mean 1+p/(1?q)
. When p = q, the fi s become i.i.d. and GN j is exactly binomial?the
recurrences compute the binomial coefficients via Pascal?s rule.
Generalizing the analysis to the case of multiple disjoint event types (i.e., fi takes on more than
two values) is mathematically straightforward and the details are omitted. There is, however, a
complexity problem as the number of event types increases. The count variables GN j , HN j , and so
on at time N j are all dependent on each other given fN( j?1) , and fN j depends on all of them; thus,
using the approach given above, the precomputed tables will scale exponentially with the number
of event types. This is not a problem in our application, where we do not expect sensors to have
more than a few distinct types of ?error? state. Furthermore, if each event type occurs independently
of the others (except for the mutual exclusion constraint), then the conditional distribution for the
count variable of each depends not on the combination of counts for the other types but on the sum
of those counts, leading to low-order polynomial growth in the table sizes.
The preceding analysis covers only the case in which fi depends just on fi?1 , leading to independently occurring events with a geometric length distribution. Constructing models with other length
distributions is a well-studied problem in statistics and most cases can be well approximated with
a modest increase in the size of the dynamic programming table. Handling non-independent event
occurrence is often more important; for example, blood draws may occur in clusters if multiple samples are required. Such dependencies can be handled by augmenting the state with timer variables,
again at modest cost.
Before we move on to describe the complete model, it is important to note that a model with a finer
time scale that the measurement frequency can provide useful extra information. By analogy with
sub-pixel localization in computer vision, such a model can estimate the time of occurrence of an
event within a measurement interval.
4
Combined model
The complete model for blood pressure measurements is shown in Figure 4. It has the same basic
structure as the reduced model in Figure 3(b) but extends it in various ways.
The evidence variables E_{Nj} are just the three reported blood pressure values ObservedDiaBP, ObservedSysBP, and ObservedMeanBP. These reflect, with some Gaussian noise, idealized Apparent values, determined in turn by
• the true time-averaged pressures: TrueDiaBP, TrueSysBP, and TrueMeanBP;
• the total duration of artifacts within the preceding minute (i.e., the G_{Nj} variables): BagTime
and ZeroTime;
• the average induced pressure to which the transducer is exposed during each event type:
BagPressure and ZeroPressure (these have their own slowly varying dynamics).
Figure 4: The blood pressure artifact detection DBN. Gray edges connect nodes within a time slice;
black edges are between time slices. 'Nodes' without surrounding ovals are deterministic functions
included for clarity.
The Apparent variables are deterministic functions of their parents. For example, we have

ApparentDiaBP = (1/N) [BagTime · BagPressure + ZeroTime · ZeroPressure + (N − BagTime − ZeroTime) · TrueDiaBP]
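For instance, the Apparent layer is a direct deterministic computation (our sketch; names mirror the figure):

def apparent_bp(true_bp, bag_pressure, zero_pressure,
                bag_time, zero_time, N=60):
    # minute-average of true and artifact-induced pressures; the same
    # function serves ApparentDiaBP, ApparentSysBP, and ApparentMeanBP
    patient_time = N - bag_time - zero_time
    return (bag_time * bag_pressure + zero_time * zero_pressure
            + patient_time * true_bp) / N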
The physiological state variables in this model are TrueSystolicFraction (the average portion of each
heartbeat spent ejecting blood), TruePulseBP (the peak-to-trough size of the pressure wave generated by each heartbeat), and TrueMeanBP. For simplicity, basic physiologic factors are modeled
with random walks weighted toward physiologically sensible values.2
The key event variable in the model, corresponding to f_{Nj} in Figure 3(b), is EndingValveState.
This has three values for the three possible stopcock positions at the one-minute boundary: open
to patient, open to bag, or open to air. The CPTs for this variable and for its children (at the next
time step) BagTime and ZeroTime are the ones computed by the method of Section 3. The CPT for
EndingValveState has 3 × 3 × 61 × 61 = 33,489 entries.
5 Experimental Results
To estimate the CPT parameters (P(f_{t+1} = 1 | f_t = 0) and P(f_{t+1} = 1 | f_t = 1)) for the one-second
model, and to evaluate the one-minute model's performance, we first needed ground truth for event
occurrence and length. By special arrangement we were able to obtain 300 hours of 1Hz data, in
which the artifacts we describe here are obvious to the human eye; one of us (a physician) then
tagged each of those data points for artifact presence and type, giving the ground truth. (There were
a total of 228 events of various lengths in the 300 hours' data.) With half the annotated data we
verified that event durations were indeed approximately geometrically distributed, and estimated the
one-second CPT parameters; from those, as described in Section 3, we calculated corresponding
one-minute-interval CPTs.
Using averaging equivalent to that used by the regular system, we transformed the other half of
the high-resolution data into 1-minute average blood pressures with associated artifact-time ground
truth. We then used standard particle filtering (Gordon et al., 1993) with 8000 particles to derive
posteriors for true blood pressure and the presence and length of each type of artifact at each minute.
For comparison, we also evaluated three other artifact detectors:
• a support vector machine (SVM) using blood pressures at times t, t − 1, t − 2, and t − 3 as
its features;
• a deterministic model-based detector, based on the linear-combination model of Section 2,
which calculates three estimates of artifact pressure and length, pairwise among the current measured systolic, diastolic, and mean pressures, to explain the current measurements
given the assumption that the true blood pressure is that recorded at the most recent minute
during which no artifact was detected; it predicts artifact presence if the sum of the estimates' squared distances from their mean is below some threshold. (Because this model's
prediction for any particular minute depends on its prediction at the previous minute, its
sensitivity and specificity do not vary monotonically with changes in the threshold; the
ROC curve shown is of only the undominated points.)
• a physician working only with the one-minute-average data.

² More accurate modeling of the physiology actually improves the accuracy of artifact detection, but this
point is explored in a separate paper.

Figure 5: ROC curves for the DBN's performance detecting bag events (left) and zeroing events
(right), as compared with an SVM, a deterministic model-based detector, and a physician.

Figure 6: Two days' blood pressure data for one patient, with the hypertension threshold overlaid.
Raw data are on the left; on the right are filtering results showing elimination (here) of false declarations of hypertension.
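Returning to the deterministic model-based detector: under the Section 2 mixing model, each pair of channels gives closed-form estimates of the artifact fraction p and artifact pressure x, since x is shared and cancels on subtraction. An illustrative sketch (ours, not the authors' implementation; the text leaves open exactly which estimates enter the score, so we score the x estimates):

import numpy as np

def pair_estimate(obs_a, obs_b, true_a, true_b):
    # solve obs = p*x + (1 - p)*true on two channels for (p, x);
    # assumes true_a != true_b and p > 0 (a sketch, not robust code)
    p = 1.0 - (obs_a - obs_b) / (true_a - true_b)
    x = (obs_a - (1.0 - p) * true_a) / p
    return p, x

def artifact_score(obs, true):
    # obs, true: (systolic, diastolic, mean) triples; a small score means
    # the three pairwise estimates agree, so an artifact is declared
    # when the score falls below a threshold
    xs = [pair_estimate(obs[i], obs[j], true[i], true[j])[1]
          for i, j in ((0, 1), (0, 2), (1, 2))]
    return float(np.sum((np.asarray(xs) - np.mean(xs)) ** 2))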
Figure 5(left) shows results for the detection of bag events. The DBN achieves a true positive rate
of 80% with almost no false positives, or a TPR of 90% with 10% false positives. It does less well
with zeroing events, as shown in Figure 5(b), achieving a TPR of nearly 70% with minimal false
positives, but beyond that having unacceptable false positive levels. The physician had an even lower
false positive rate for each artifact type, but with a true positive rate of only about 50%; the SVM
and deterministic model-based detector both had better-than-chance performance but were clinically
useless due to high false positive rates.
The model's accuracy in tracking true blood pressure is harder to evaluate because we have no
minute-by-minute gold standard. (Arterial blood pressure measurements as we've described them,
despite their artifacts, are the gold standard in the ICU. Placing a second arterial line, besides being
subject to the same artifacts, also exposes patients to unnecessary infection risk.) However, on a
more qualitative level, four physicians in our group have examined many hours of measured and
inferred blood pressure traces, a typical example of which is shown in Figure 7, and have nearly
always agreed with the inference results. Where the system's inferences are questionable, examining
other sensors often helps to reveal whether a pressure change was real or artifactual.
Figure 7: Sensed blood pressure (dark lines) and inferred true blood pressure (lighter bands, representing mean ± 1 SD) across an observed blood draw with following zeroing. The lowest two lines
show the inferred fraction of each minute occupied by bag or zero artifact.
6 Conclusions and Further Work
We have applied dynamic Bayesian network modeling to the problem of handling aggregated data
with sub-interval artifacts. In preliminary experiments, this model of a typical blood pressure sensor
appears quite successful at tracking true blood pressure and identifying and classifying artifacts.
Our approach has reduced the need for learning (as distinct from modeling and inference) to the
small but crucial role of determining the distribution of event durations. It is interesting that the more
straightforward learning approach, the SVM described above, had performance markedly inferior to
the generative model's.
Modified to run at 1Hz, this model could run on-line at the bedside, helping to reduce false alarms.
We are currently extending the model to include more sensors and physiological state variables and
anticipate further improvements in detection accuracy as a result of combining multiple sensors.
References
Aliferis, C., & Cooper, G. (1996). A structurally and temporally extended Bayesian belief network
model: Definitions, properties, and modeling techniques. Proc. Uncertainty in Artificial Intelligence (pp. 28–39).
Beinlich, I., Suermondt, H., Chavez, R., & Cooper, G. (1989). The ALARM monitoring system.
Proc. Second European Conference on Artificial Intelligence in Medicine (pp. 247–256).
Gordon, N. J., Salmond, D., & Smith, A. (1993). Novel approach to nonlinear/non-Gaussian
Bayesian state estimation. Radar and Signal Processing, IEE Proceedings-F, 140, 107–113.
Heldt, T., Long, W., Verghese, G., Szolovits, P., & Mark, R. (2006). Integrating data, models, and
reasoning in critical care. Proceedings of the 28th IEEE EMBS International Conference (pp.
350–353).
Rao, C. V., & Arkin, A. P. (2003). Stochastic chemical kinetics and the quasi-steady-state assumption: Application to the Gillespie algorithm. Journal of Chemical Physics, 18.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach. Upper Saddle River,
New Jersey: Prentice-Hall. 2nd edition.
Shachter, R. D. (1986). Evaluating influence diagrams. Operations Research, 34, 871–882.
Sieben, W., & Gather, U. (2007). Classifying alarms in intensive care: analogy to hypothesis testing. Lecture Notes in Computer Science, 130–138.
Tsien, C. L., & Fackler, J. C. (1997). Poor prognosis for existing monitors in the intensive care unit.
Critical Care Medicine, 25, 614–619.
Williams, C. K. I., Quinn, J., & McIntosh, N. (2005). Factorial switching Kalman filters for condition monitoring in neonatal intensive care. NIPS. Vancouver, Canada.
2,686 | 3,435 | Shared Segmentation of Natural Scenes
Using Dependent Pitman-Yor Processes
Erik B. Sudderth and Michael I. Jordan
Electrical Engineering & Computer Science, University of California, Berkeley
[email protected], [email protected]
Abstract
We develop a statistical framework for the simultaneous, unsupervised segmentation and discovery of visual object categories from image databases. Examining
a large set of manually segmented scenes, we show that object frequencies and
segment sizes both follow power law distributions, which are well modeled by the
Pitman-Yor (PY) process. This nonparametric prior distribution leads to learning
algorithms which discover an unknown set of objects, and segmentation methods
which automatically adapt their resolution to each image. Generalizing previous applications of PY processes, we use Gaussian processes to discover spatially
contiguous segments which respect image boundaries. Using a novel family of
variational approximations, our approach produces segmentations which compare
favorably to state-of-the-art methods, while simultaneously discovering categories
shared among natural scenes.
1 Introduction
Images of natural environments contain a rich diversity of spatial structure at both coarse and fine
scales. We would like to build systems which can automatically discover the visual categories
(e.g., foliage, mountains, buildings, oceans) which compose such scenes. Because the 'objects'
of interest lack rigid forms, they are poorly suited to traditional, fixed aspect detectors. In simple
cases, topic models can be used to cluster local textural elements, coarsely representing categories
via a bag of visual features [1, 2]. However, spatial structure plays a crucial role in general scene
interpretation [3], particularly when few labeled training examples are available.
One approach to modeling additional spatial dependence begins by precomputing one, or several,
segmentations of each input image [4?6]. However, low-level grouping cues are often ambiguous,
and fixed partitions may improperly split or merge objects. Markov random fields (MRFs) have
been used to segment images into one of several known object classes [7, 8], but these approaches
require manual segmentations to train category-specific appearance models. In this paper, we instead
develop a statistical framework for the unsupervised discovery and segmentation of visual object
categories. We approach this problem by considering sets of images depicting related natural scenes
(see Fig. 1(a)). Using color and texture cues, our method simultaneously groups dense features
into spatially coherent segments, and refines these partitions using shared appearance models. This
extends the cosegmentation framework [9], which matches two views of a single object instance, to
simultaneously segment multiple object categories across a large image database. Some recent work
has pursued similar goals [6, 10], but robust object discovery remains an open challenge.
Our models are based on the Pitman-Yor (PY) process [11], a nonparametric Bayesian prior on
infinite partitions. This generalization of the Dirichlet process (DP) leads to heavier-tailed, power
law distributions for the frequencies of observed objects or topics. Using a large database of manual
scene segmentations, Sec. 2 demonstrates that PY priors closely match the true distributions of
natural segment sizes, and frequencies with which object categories are observed. Generalizing
the hierarchical DP [12], Sec. 3 then describes a hierarchical Pitman-Yor (HPY) mixture model
which shares 'bag of features' appearance models among related scenes. Importantly, this approach
coherently models uncertainty in the number of object categories and instances.
[Figure 1: panels (a)-(d). Log-log plots for the forest (top) and insidecity (bottom) categories: segment-label frequencies with fits PY(0.39, 3.70)/DP(11.40) and PY(0.47, 6.90)/DP(33.00); segment areas and per-image segment counts with fits PY(0.02, 2.20)/DP(2.40) and PY(0.32, 0.80)/DP(2.90). Axes: 'Proportion of Segments', 'Segment Labels (sorted by frequency)', 'Proportion of Image Area', 'Number of Segments per Image'.]
Figure 1: Validation of stick-breaking priors for the statistics of human segmentations of the forest (top) and insidecity (bottom) scene categories. We compare observed frequencies (black) to those predicted by Pitman-Yor process (PY, red circles) and Dirichlet process (DP, green squares) models. For each model, we also display 95% confidence intervals (dashed). (a) Example human segmentations, where each segment has a text label such as sky, tree trunk, car, or person walking. The full segmented database is available from LabelMe [14]. (b) Frequency with which different semantic text labels, sorted from most to least frequent on a log-log scale, are associated with segments. (c) Number of segments occupying varying proportions of the image area, on a log-log scale. (d) Counts of segments of size at least 5,000 pixels in 256 × 256 images of natural scenes.
As described in Sec. 4, we use thresholded Gaussian processes to link assignments of features to
regions, and thereby produce smooth, coherent segments. Simulations show that our use of continuous latent variables captures long-range dependencies neglected by MRFs, including intervening
contour cues derived from image boundaries [13]. Furthermore, our formulation naturally leads
to an efficient variational learning algorithm, which automatically searches over segmentations of
varying resolution. Sec. 5 concludes by demonstrating accurate segmentation of complex images,
and discovery of appearance patterns shared across natural scenes.
2 Statistics of Natural Scene Categories
To better understand the statistical relationships underlying natural scenes, we analyze manual segmentations of Oliva and Torralba's eight categories [3]. A non-expert user partitioned each image
into a variable number of polygonal segments corresponding to distinctive objects or scene elements
(see Fig. 1(a)). Each segment has a semantic text label, allowing study of object co-occurrence frequencies across related scenes. There are over 29,000 segments in the collection of 2,688 images.1
2.1 Stick Breaking and Pitman-Yor Processes
The relative frequencies of different object categories, as well as the image areas they occupy, can be
statistically modeled via distributions on potentially infinite partitions. Let π = (π_1, π_2, π_3, . . .), with Σ_{k=1}^∞ π_k = 1, denote the probability mass associated with each subset. In nonparametric Bayesian statistics, prior models for partitions are often defined via a stick-breaking construction:

π_k = w_k ∏_{ℓ=1}^{k-1} (1 - w_ℓ) = w_k (1 - Σ_{ℓ=1}^{k-1} π_ℓ),   w_k ∼ Beta(1 - γ_a, γ_b + kγ_a)   (1)

This Pitman-Yor (PY) process [11], denoted by π ∼ GEM(γ_a, γ_b), is defined by two hyperparameters satisfying 0 ≤ γ_a < 1, γ_b > -γ_a. When γ_a = 0, we recover a Dirichlet process (DP) with concentration parameter γ_b. This construction induces a distribution on π such that subsets with more mass π_k typically have smaller indexes k. When γ_a > 0, E[w_k] decreases with k, and the resulting partition frequencies follow heavier-tailed, power law distributions.
While the sequences of beta variables underlying PY processes lead to infinite partitions, only a random, finite subset of size K_ε = |{k | π_k > ε}| will have probability greater than any threshold ε. Implicitly, nonparametric models thus also place priors on the number of latent classes or objects.
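As a concrete illustration, here is a minimal Python sketch that draws truncated GEM stick-breaking weights; the 1000-stick truncation and the PY(0.39, 3.70) / DP(11.40) settings (the forest-label fits of Fig. 1) are illustrative choices, not part of the model definition.

import numpy as np

def sample_gem(gamma_a, gamma_b, trunc=1000, rng=None):
    # Truncated draw of pi ~ GEM(gamma_a, gamma_b) via Eq. (1);
    # gamma_a = 0 recovers a Dirichlet process.
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, trunc + 1)
    w = rng.beta(1.0 - gamma_a, gamma_b + k * gamma_a)
    return w * np.concatenate(([1.0], np.cumprod(1.0 - w)[:-1]))

pi_py = np.sort(sample_gem(0.39, 3.70))[::-1]   # PY fit to forest labels (Fig. 1)
pi_dp = np.sort(sample_gem(0.00, 11.40))[::-1]  # closest DP fit
print(pi_py[100], pi_dp[100])  # the PY tail decays polynomially, the DP tail geometrically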
¹See LabelMe [14]: http://labelme.csail.mit.edu/browseLabelMe/spatial envelope 256x256 static 8outdoorcategories.html
2.2 Object Label Frequencies
Pitman-Yor processes have been previously used to model the well-known power law behavior of
text sequences [15, 16]. Intuitively, the labels assigned to segments in the natural scene database
have similar properties: some (like sky, trees, and building) occur frequently, while others (rainbow,
lichen, scaffolding, obelisk, etc.) are more rare. Fig. 1(b) plots the observed frequencies with which
unique text labels, sorted from most to least frequent, occur in two scene categories. The overlaid
quantiles correspond to the best fitting DP and PY processes, with parameters (γ̂_a, γ̂_b) estimated via maximum likelihood. When γ̂_a > 0, log E[π_k | γ̂] ≈ -γ̂_a^{-1} log(k) + c(γ̂_a, γ̂_b) for large k [11], producing power law behavior which accurately predicts observed object frequencies. In contrast, the closest fitting DP model (γ̂_a = 0) significantly underestimates the number of rare labels.
We have quantitatively assessed the accuracy of these models using bootstrap significance tests [17].
The PY process provides a good fit for all categories, while there is significant evidence against the
DP in most cases. By varying PY hyperparameters, we also capture interesting differences among
scene types: urban, man-made environments have many more unique objects than natural ones.
2.3 Segment Counts and Size Distributions
We have also used the natural scene database to quantitatively validate PY priors for image partitions [17]. For natural environments, the DP and PY processes both provide accurate fits. However,
some urban environments have many more small objects, producing power law area distributions
(see Fig. 1(c)) better captured by PY processes. As illustrated in Fig. 1(d), PY priors also model
uncertainty in the number of segments at various resolutions.
While power laws are often used simply as a descriptive summary of observed statistics, PY processes provide a consistent generative model which we use to develop effective segmentation algorithms. We do not claim that PY processes are the only valid prior for image areas; for example,
log-normal distributions have similar properties, and may also provide a good model [18]. However, PY priors lead to efficient variational inference algorithms, avoiding the costly MCMC search
required by other segmentation methods with region size priors [18, 19].
3 A Hierarchical Model for Bags of Image Features
We now develop hierarchical Pitman-Yor (HPY) process models for visual scenes. We first describe
a "bag of features" model [1, 2] capturing prior knowledge about region counts and sizes, and then
extend it to model spatially coherent shapes in Sec. 4. Our baseline bag of features model directly
generalizes the stick-breaking representation of the hierarchical DP developed by Teh et al. [12].
N-gram language models based on HPY processes [15, 16] have somewhat different forms.
3.1 Hierarchical Pitman-Yor Processes
Each image is first divided into roughly 1,000 superpixels [18] using a variant of the normalized
cuts spectral clustering algorithm [13]. We describe the texture of each superpixel via a local texton
histogram [20], using band-pass filter responses quantized to Wt = 128 bins. Similarly, a color
histogram is computed by quantizing the HSV color space into Wc = 120 bins. Superpixel i in
image j is then represented by histograms x_ji = (x_ji^t, x_ji^c) indicating its texture x_ji^t and color x_ji^c.
Figure 2 contains a directed graphical model summarizing our HPY model for collections of local image features. Each of the potentially infinite set of global object categories occurs with frequency β_k, where β ∼ GEM(γ_a, γ_b) as motivated in Sec. 2.2. Each category k also has an associated appearance model θ_k = (θ_k^t, θ_k^c), where θ_k^t and θ_k^c parameterize multinomial distributions on the W_t texture and W_c color bins, respectively. These parameters are regularized by Dirichlet priors θ_k^t ∼ Dir(λ_t), θ_k^c ∼ Dir(λ_c), with hyperparameters chosen to encourage sparse distributions.
Consider a dataset containing J images of related scenes, each of which is allocated an infinite set of potential segments or regions. As in Sec. 2.3, region t occupies a random proportion π_jt of the area in image j, where π_j ∼ GEM(α_a, α_b). Each region is also associated with a particular global object category k_jt ∼ β. For each superpixel i, we then independently select a region t_ji ∼ π_j, and sample features using parameters determined by that segment's global object category:

p(x_ji^t, x_ji^c | t_ji, k_j, θ) = Mult(x_ji^t | θ_{z_ji}^t) Mult(x_ji^c | θ_{z_ji}^c),   z_ji ≜ k_{j t_ji}   (2)
As in other adaptations of topic models to visual data [8], we assume that different feature channels
vary independently within individual object categories and segments.
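To make the generative process explicit, here is a small forward-sampling sketch of this bag-of-features model; the truncation levels K and T, the GEM and Dirichlet hyperparameter values, and the superpixel count are illustrative assumptions rather than values fixed by the model.

import numpy as np

rng = np.random.default_rng(0)
K, T, Wt, Wc = 30, 15, 128, 120               # truncations and histogram bins

def gem(a, b, K, rng):                        # truncated, renormalized GEM draw (Eq. 1)
    w = rng.beta(1.0 - a, b + a * np.arange(1, K + 1))
    pi = w * np.concatenate(([1.0], np.cumprod(1.0 - w)[:-1]))
    return pi / pi.sum()

beta = gem(0.5, 5.0, K, rng)                  # global category frequencies
theta_t = rng.dirichlet(0.1 * np.ones(Wt), size=K)   # per-category texture multinomials
theta_c = rng.dirichlet(0.1 * np.ones(Wc), size=K)   # per-category color multinomials

def sample_image(n_superpixels=1000):
    pi_j = gem(0.02, 2.2, T, rng)             # region areas for image j
    k_jt = rng.choice(K, size=T, p=beta)      # each region's global category
    t_ji = rng.choice(T, size=n_superpixels, p=pi_j)  # region per superpixel
    z_ji = k_jt[t_ji]                         # induced category z_ji = k_{j t_ji}
    x_t = np.array([rng.choice(Wt, p=theta_t[z]) for z in z_ji])  # texture bins, Eq. (2)
    x_c = np.array([rng.choice(Wc, p=theta_c[z]) for z in z_ji])  # color bins, Eq. (2)
    return x_t, x_c, t_ji

x_t, x_c, t_ji = sample_image()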
[Figure 2 graphics: directed graphical model over variables w_k, θ_k, v_jt, k_jt, t_ji, x_ji with plates of sizes N_j, T, J, and K, together with beta densities over stick-breaking proportions and the corresponding threshold densities for GEM(0, 10), GEM(0.1, 2), and GEM(0.5, 5); see caption below.]
Figure 2: Stick-breaking representation of a hierarchical Pitman-Yor (HPY) model for J groups of features. Left: Directed graphical model in which global category frequencies β ∼ GEM(γ) are constructed from stick-breaking proportions w_k ∼ Beta(1 - γ_a, γ_b + kγ_a), as in Eq. (1). Similarly, v_jt ∼ Beta(1 - α_a, α_b + tα_a) define region areas π_j ∼ GEM(α) for image j. Each of the N_j features x_ji is independently sampled as in Eq. (2). Upper right: Beta distributions from which stick proportions w_k are sampled for three different PY processes: k = 1 (blue), k = 10 (red), k = 20 (green). Lower right: Corresponding distributions on thresholds for an equivalent generative model employing zero mean, unit variance Gaussians (dashed black). See Sec. 4.1.
3.2 Variational Learning for HPY Mixture Models
To allow efficient learning of HPY model parameters from large image databases, we have developed a mean field variational method which combines and extends previous approaches for DP
mixtures [21, 22] and finite topic models. Using the stick-breaking representation of Fig. 2, and a
factorized variational posterior, we optimize the following lower bound on the marginal likelihood:
log p(x | α, γ, λ) ≥ H(q) + E_q[log p(x, k, t, v, w, θ | α, γ, λ)]   (3)

q(k, t, v, w, θ) = [∏_{k=1}^K q(w_k | ω_k) q(θ_k | η_k)] · ∏_{j=1}^J [∏_{t=1}^T q(v_jt | ν_jt) q(k_jt | κ_jt)] ∏_{i=1}^{N_j} q(t_ji | τ_ji)
Here, H(q) is the entropy. We truncate the variational posterior [21] by setting q(v_jT = 1) = 1 for each image or group, and q(w_K = 1) = 1 for the shared global clusters. Multinomial assignments q(k_jt | κ_jt), q(t_ji | τ_ji), and beta stick proportions q(w_k | ω_k), q(v_jt | ν_jt), then have closed form update equations. To avoid bias, we sort the current sets of image segments, and global categories, in order of decreasing aggregate assignment probability after each iteration [22].
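That reordering step can be realized as below; resp is a hypothetical (N × K) matrix of assignment probabilities, standing in for whichever responsibility statistics an implementation actually tracks.

import numpy as np

def reorder_clusters(resp):
    # Sort clusters by decreasing aggregate assignment probability,
    # the bias-avoiding relabeling of [22] applied after each iteration.
    order = np.argsort(-resp.sum(axis=0))
    return resp[:, order], order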
4 Segmentation with Spatially Dependent Pitman-Yor Processes
We now generalize the HPY image segmentation model of Fig. 2 to capture spatial dependencies.
For simplicity, we consider a single-image model in which features xi are assigned to regions by
indicator variables z_i, and each segment k has its own appearance parameters θ_k (see Fig. 3). As in
Sec. 3.1, however, this model is easily extended to share appearance parameters among images.
4.1 Coupling Assignments using Thresholded Gaussian Processes
Consider a generative model which partitions data into two clusters via assignments z_i ∈ {0, 1} sampled such that P[z_i = 1] = v. One representation of this sampling process first generates a Gaussian auxiliary variable u_i ∼ N(0, 1), and then chooses z_i according to the following rule:

z_i = 1 if u_i < Φ^{-1}(v), and z_i = 0 otherwise, where Φ(u) ≜ (1/√(2π)) ∫_{-∞}^u e^{-s²/2} ds   (4)

Here, Φ(u) is the standard normal cumulative distribution function (CDF). Since Φ(u_i) is uniformly distributed on [0, 1], we immediately have P[z_i = 1] = P[u_i < Φ^{-1}(v)] = P[Φ(u_i) < v] = v.
We adapt this idea to PY processes using the stick-breaking representation of Eq. (1). In particular, we note that if z_i ∼ π where π_k = v_k ∏_{ℓ=1}^{k-1} (1 - v_ℓ), a simple induction argument shows that v_k = P[z_i = k | z_i ≠ k - 1, . . . , 1]. The stick-breaking proportion v_k is thus the conditional probability of choosing cluster k, given that clusters with indexes ℓ < k have been rejected. Combining
[Figure 3 graphics: directed graphical model (variables v_k, u_ki, z_i, x_i; plate sizes K and N) and three sampled image partitions built from ordered Gaussian surfaces; see caption below.]
Figure 3: A nonparametric Bayesian approach to image segmentation in which thresholded Gaussian processes generate spatially dependent Pitman-Yor processes. Left: Directed graphical model in which expected segment areas π ∼ GEM(α) are constructed from stick-breaking proportions v_k ∼ Beta(1 - α_a, α_b + kα_a). Zero mean Gaussian processes (u_ki ∼ N(0, 1)) are cut by thresholds Φ^{-1}(v_k) to produce segment assignments z_i, and thereby features x_i. Right: Three randomly sampled image partitions (columns), where assignments (bottom, color-coded) are determined by the first of the ordered Gaussian processes u_k to cross Φ^{-1}(v_k).
this insight with Eq. (4), we can generate samples z_i ∼ π as follows:

z_i = min{k | u_ki < Φ^{-1}(v_k)},  where u_ki ∼ N(0, 1) and u_ki ⊥ u_ℓi for k ≠ ℓ   (5)
As illustrated in Fig. 3, each cluster k is now associated with a zero mean Gaussian process (GP) u_k, and assignments are determined by the sequence of thresholds in Eq. (5). If the GPs have identity
covariance functions, we recover the basic HPY model of Sec. 3.1. More general covariances can
be used to encode the prior probability that each feature pair occupies the same segment. Intuitively,
the ordering of segments underlying this dependent PY model is analogous to layered appearance
models [23], in which foreground layers occlude those that are farther from the camera.
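The following sketch samples one such partition on a small pixel grid; the grid size, the squared-exponential covariance with length-scale 5, the diagonal jitter, and the stick hyperparameters are illustrative assumptions rather than values used in the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
side, K = 32, 20                                  # grid size and layer truncation
xy = np.stack(np.meshgrid(np.arange(side), np.arange(side)), -1).reshape(-1, 2)
d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
Cov = np.exp(-d2 / (2 * 5.0 ** 2)) + 1e-6 * np.eye(len(xy))   # unit-variance GP kernel

v = rng.beta(1 - 0.3, 3.0 + 0.3 * np.arange(1, K + 1))        # stick proportions v_k
u = np.linalg.cholesky(Cov) @ rng.standard_normal((len(xy), K))  # K independent GP layers
below = u < norm.ppf(v)[None, :]                  # test u_ki < Phi^{-1}(v_k)
below[:, -1] = True                               # truncation: last layer absorbs the rest
z = below.argmax(axis=1).reshape(side, side)      # first layer below threshold, Eq. (5)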
To retain the power law prior on segment sizes justified in Sec. 2.3, we transform priors on stick proportions v_k ∼ Beta(1 - α_a, α_b + kα_a) into corresponding random thresholds v̄_k ≜ Φ^{-1}(v_k):

p(v̄_k | α) = N(v̄_k | 0, 1) · Beta(Φ(v̄_k) | 1 - α_a, α_b + kα_a)   (6)
Fig. 2 illustrates the threshold distributions corresponding to several different PY stick-breaking priors. As the number of features N becomes large relative to the GP covariance length-scale, the proportion assigned to segment k approaches π_k, where π ∼ GEM(α_a, α_b) as desired.
4.2 Variational Learning for Dependent PY Processes
Substantial innovations are required to extend the variational method of Sec. 3.2 to the Gaussian processes underlying our dependent PY processes. Complications arise due to the threshold assignment
process of Eq. (5), which is "stronger" than the likelihoods typically used in probit models for GP classification, as well as the non-standard threshold prior of Eq. (6). In the simplest case, we place factorized Gaussian variational posteriors on thresholds q(v̄_k) = N(v̄_k | μ_k, λ_k) and assignment surfaces q(u_ki) = N(u_ki | ν_ki, δ_ki), and exploit the following key identities:

P_q[u_ki < v̄_k] = Φ((μ_k - ν_ki) / √(λ_k + δ_ki)),   E_q[log Φ(v̄_k)] ≤ log E_q[Φ(v̄_k)] = log Φ(μ_k / √(1 + λ_k))   (7)
The first expression leads to closed form updates for Dirichlet appearance parameters q(θ_k | η_k), while the second evaluates the beta normalization constants in Eq. (6). We then jointly optimize each layer's threshold q(v̄_k) and assignment surface q(u_k), fixing all other layers, via backtracking conjugate gradient (CG) with line search. For details and further refinements, see [17].
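Both identities in Eq. (7) are easy to confirm numerically; the sketch below checks them by Monte Carlo for arbitrary illustrative values of the variational means and variances.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu, lam = 0.4, 0.7        # q(vbar_k) = N(mu, lam), lam a variance
nu, delta = -0.2, 1.3     # q(u_ki)  = N(nu, delta)
vbar = mu + np.sqrt(lam) * rng.standard_normal(10**6)
u = nu + np.sqrt(delta) * rng.standard_normal(10**6)

print(np.mean(u < vbar), norm.cdf((mu - nu) / np.sqrt(lam + delta)))   # first identity
print(np.mean(norm.cdf(vbar)), norm.cdf(mu / np.sqrt(1 + lam)))        # second identity
print(np.mean(np.log(norm.cdf(vbar))))   # Jensen: E[log Phi] <= log E[Phi]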
Figure 4: Five samples from each of four prior models for image partitions (color coded). Top Left: Nearest neighbor Potts MRF with K = 10 states. Top Right: Potts MRF with potentials biased by DP samples [28]. Bottom Left: Softmax model in which spatially varying assignment probabilities are coupled by logistically transformed GPs [25-27]. Bottom Right: PY process assignments coupled by thresholded GPs (as in Fig. 3).
4.3 Related Work
Recently, Duan et al. [24] proposed a generalized spatial Dirichlet process which links assignments
via thresholded GPs, as in Sec. 4.1. However, their focus is on modeling spatial random effects
for prediction tasks, as opposed to the segmentation tasks which motivate our generalization to PY
processes. Unlike our HPY extension, they do not consider approaches to sharing parameters among
related groups or images. Moreover, their basic Gibbs sampler takes 12 hours on a toy dataset with
2,000 observations; our variational method jointly segments 200 scenes in comparable time.
Several authors have independently proposed a spatial model based on pointwise, multinomial logistic transformations of K latent GPs [25-27]. This produces a field of smoothly varying multinomial distributions π̄_i, from which segment assignments are independently sampled as z_i ∼ π̄_i. As shown in Fig. 4, this softmax construction produces noisy, less spatially coherent partitions. Moreover, its bias towards partitions with K segments of similar size is a poor fit for natural scenes.
A previous nonparametric image segmentation method defined its prior as a normalized product of a DP sample π ∼ GEM(0, α) and a nearest neighbor MRF with Potts potentials [28]. This construction effectively treats log π as the canonical, rather than moment, parameters of the MRF, and does not produce partitions whose size distribution matches GEM(0, α). Due to the phase transition which occurs with increasing potential strength, Potts models assign low probability to realistic image partitions [29]. Empirically, the DP-Potts product construction seems to have similar issues (see Fig. 4), although it can still be effective with strongly informative likelihoods [28].
5 Results
Figure 5 shows segmentation results for images from the scene categories considered in Sec. 2.
We compare the bag of features PY model (PY-BOF), dependent PY with distance-based squared
exponential covariance (PY-Dist), and dependent PY with covariance that incorporates intervening
contour cues (PY-Edge) based on the Pb detector [20]. The conditionally specified PY-Edge model scales the covariance between superpixels i and j by √(1 - b_ij), where b_ij is the largest Pb response on the straight line connecting them. We convert these local covariance estimates into a globally consistent, positive definite matrix via an eigendecomposition. For the results in Figs. 5 and 6, we independently segment each image, without sharing appearance models or supervised training.
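The paper does not spell out the projection; one standard way to turn local estimates into a positive definite matrix via an eigendecomposition is to clip small and negative eigenvalues, as sketched here (the eps floor is an arbitrary choice).

import numpy as np

def nearest_psd(C, eps=1e-8):
    # Symmetrize, then clip eigenvalues below eps; returns a globally
    # consistent positive definite surrogate for the local estimates C.
    C = 0.5 * (C + C.T)
    evals, evecs = np.linalg.eigh(C)
    return (evecs * np.maximum(evals, eps)) @ evecs.T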
We compare our results to the normalized cuts spectral clustering method with varying numbers of
segments (NCut(K)), and a high-quality affinity function based on color, texture, and intervening
contour cues [13]. Our PY models consistently capture variability in the number of true segments,
and detect both large and small regions. In contrast, normalized cuts is implicitly biased towards
regions of equal size, which produces distortions. To quantitatively evaluate results, we measure
overlap with held-out human segments via the Rand index [30]. As summarized in Fig. 6, PY-BOF
performs well for some images with unambiguous features, but PY-Edge is often substantially better.
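For reference, the Rand index [30] used above can be computed from the contingency table of the two labelings; a minimal sketch, assuming non-negative integer labels over the same set of pixels:

import numpy as np
from scipy.special import comb

def rand_index(a, b):
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    table = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(table, (a, b), 1)                  # contingency counts
    same_a = comb(table.sum(axis=1), 2).sum()    # pairs together in labeling a
    same_b = comb(table.sum(axis=0), 2).sum()    # pairs together in labeling b
    same_ab = comb(table, 2).sum()               # pairs together in both
    total = comb(a.size, 2)
    # agreements: pairs together in both labelings, or apart in both
    return (total + 2 * same_ab - same_a - same_b) / total

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))    # identical partitions -> 1.0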
We have also experimented with our hierarchical PY extension, in which color and texture distributions are shared between images. As shown in Fig. 7, many of the inferred global visual categories
align reasonably with semantic categories (e.g., sky, foliage, mountains, or buildings).
6 Discussion
We have developed a nonparametric framework for image segmentation which uses thresholded
Gaussian processes to produce spatially coupled Pitman-Yor processes. This approach produces
empirically justified power law priors for region areas and object frequencies, allows visual appear-
Figure 5: Segmentation results for two images (rows) from each of the coast, mountain, and tallbuilding scene
categories. From left to right, columns show LabelMe human segments, image with boundaries inferred by
PY-Edge, and segments for PY-Edge, PY-Dist, PY-BOF, NCut(3), NCut(4), and NCut(6). Best viewed in color.
[Figure 6 graphics: scatter plots and average-Rand-index curves comparing Normalized Cuts, PY Gaussian (Edge Covar), PY Gaussian (Distance Covar), and PY Bag of Features, with Rand index on the vertical axes and the number of Normalized Cuts regions on the horizontal axes of panels (b) and (d); see caption below.]
Figure 6: Quantitative comparison of segmentation results to human segments, using the Rand index. (a) Scatter plot of PY-Edge and NCut(4) Rand indexes for 200 mountain images. (b) Average Rand indexes for mountain images. We plot the performance of NCut(K) versus the number of segments K, compared to the variable
resolution segmentations of PY-Edge, PY-Dist, and PY-BOF. (c) Scatter plot of PY-Edge and NCut(6) Rand
indexes for 200 tallbuilding images. (d) Average Rand indexes for tallbuilding images.
ance models to be flexibly shared among natural scenes, and leads to efficient variational inference
algorithms which automatically search over segmentations of varying resolution. We believe this
provides a promising starting point for discovery of shape-based visual appearance models, as well
as weakly supervised nonparametric learning in other, non-visual application domains.
Acknowledgments We thank Charless Fowlkes and David Martin for the Pb boundary estimation and segmentation code, Antonio Torralba for helpful conversations, and Sra. Barriuso for her image labeling expertise.
This research supported by ONR Grant N00014-06-1-0734, and DARPA IPTO Contract FA8750-05-2-0249.
References
[1] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, volume 2, pages 524-531, 2005.
[2] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their location in images. In ICCV, volume 1, pages 370-377, 2005.
[3] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42(3):145-175, 2001.
Figure 7: Most significant segments associated with each of three shared, global visual categories (rows) for
hierarchical PY-Edge models trained with 200 images of mountain (left) or tallbuilding (right) scenes.
[4] L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent object segmentation and classification. In ICCV, 2007.
[5] B. C. Russell, A. A. Efros, J. Sivic, W. T. Freeman, and A. Zisserman. Using multiple segmentations to discover objects and their extent in image collections. In CVPR, volume 2, pages 1605-1614, 2006.
[6] S. Todorovic and N. Ahuja. Learning the taxonomy and models of categories present in arbitrary images. In ICCV, 2007.
[7] X. He, R. S. Zemel, and M. A. Carreira-Perpiñán. Multiscale conditional random fields for image labeling. In CVPR, volume 2, pages 695-702, 2004.
[8] J. Verbeek and B. Triggs. Region classification with Markov field aspect models. In CVPR, 2007.
[9] C. Rother, V. Kolmogorov, T. Minka, and A. Blake. Cosegmentation of image pairs by histogram matching: Incorporating a global constraint into MRFs. In CVPR, volume 1, pages 993-1000, 2006.
[10] M. Andreetto, L. Zelnik-Manor, and P. Perona. Non-parametric probabilistic image segmentation. In ICCV, 2007.
[11] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Prob., 25(2):855-900, 1997.
[12] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. J. Amer. Stat. Assoc., 101(476):1566-1581, December 2006.
[13] C. Fowlkes, D. Martin, and J. Malik. Learning affinity functions for image segmentation: Combining patch-based and gradient-based approaches. In CVPR, volume 2, pages 54-61, 2003.
[14] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: A database and web-based tool for image annotation. IJCV, 77:157-173, 2008.
[15] S. Goldwater, T. L. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. In NIPS 18, pages 459-466. MIT Press, 2006.
[16] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Coling/ACL, 2006.
[17] E. B. Sudderth and M. I. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. Technical report, Dept. of Statistics, University of California, Berkeley. In preparation, 2009.
[18] X. Ren and J. Malik. Learning a classification model for segmentation. In ICCV, 2003.
[19] Z. Tu and S. C. Zhu. Image segmentation by data-driven Markov chain Monte Carlo. IEEE Trans. PAMI, 24(5):657-673, May 2002.
[20] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. PAMI, 26(5):530-549, May 2004.
[21] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayes. Anal., 1(1):121-144, 2006.
[22] K. Kurihara, M. Welling, and Y. W. Teh. Collapsed variational Dirichlet process mixture models. In IJCAI 20, pages 2796-2801, 2007.
[23] J. Y. A. Wang and E. H. Adelson. Representing moving images with layers. IEEE Trans. IP, 3(5):625-638, September 1994.
[24] J. A. Duan, M. Guindani, and A. E. Gelfand. Generalized spatial Dirichlet process models. Biometrika, 94(4):809-825, 2007.
[25] C. Fernández and P. J. Green. Modelling spatially correlated data via mixtures: A Bayesian approach. J. R. Stat. Soc. B, 64(4):805-826, 2002.
[26] M. A. T. Figueiredo. Bayesian image segmentation using Gaussian field priors. In CVPR Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, 2005.
[27] M. W. Woolrich and T. E. Behrens. Variational Bayes inference of spatial mixture models for segmentation. IEEE Trans. MI, 25(10):1380-1391, October 2006.
[28] P. Orbanz and J. M. Buhmann. Smooth image segmentation by nonparametric Bayesian inference. In ECCV, volume 1, pages 444-457, 2006.
[29] R. D. Morris, X. Descombes, and J. Zerubia. The Ising/Potts model is not well suited to segmentation tasks. In IEEE DSP Workshop, 1996.
[30] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. IEEE Trans. PAMI, 29(6):929-944, June 2007.
Model Selection in Gaussian Graphical Models:
High-Dimensional Consistency of ℓ1-regularized MLE
Pradeep Ravikumar†, Garvesh Raskutti†, Martin J. Wainwright†,‡ and Bin Yu†,‡
Department of Statistics†, Department of EECS‡,
University of California, Berkeley
{pradeepr,garveshr,wainwright,binyu}@stat.berkeley.edu
Abstract
We consider the problem of estimating the graph structure associated with a Gaussian
Markov random field (GMRF) from i.i.d. samples. We study the performance of the ℓ1-regularized maximum likelihood estimator in the high-dimensional setting, where the number of nodes in the graph p, the number of edges in the graph s and the maximum node degree d are allowed to grow as a function of the number of samples n. Our main result provides sufficient conditions on (n, p, d) for the ℓ1-regularized MLE estimator to recover all the edges of the graph with high probability. Under some conditions on the model covariance, we show that model selection can be achieved for sample sizes n = Ω(d² log(p)), with the error decaying as O(exp(-c log(p))) for some constant c. We
illustrate our theoretical results via simulations and show good correspondences between
the theoretical predictions and behavior in simulations.
1 Introduction
The area of high-dimensional statistics deals with estimation in the "large p, small n" setting, where p and n correspond, respectively, to the dimensionality of the data and the sample size. Such high-dimensional problems arise in a variety of applications, among them remote sensing, computational biology and natural language processing, where the model dimension may be comparable or substantially larger than the sample size. It is well-known that such high-dimensional scaling can lead to dramatic breakdowns in many classical procedures. In the absence of additional model assumptions, it is frequently impossible to obtain consistent procedures when p ≫ n. Accordingly, an active line of statistical research is based on imposing various restrictions on the model (for instance, sparsity, manifold structure, or graphical model structure) and then studying the scaling behavior of different estimators as a function of sample size n, ambient dimension p and additional parameters related to these structural assumptions.
In this paper, we study the problem of estimating the graph structure of a Gauss Markov random field
(GMRF) in the high-dimensional setting. This graphical model selection problem can be reduced to
the problem of estimating the zero-pattern of the inverse covariance or concentration matrix Θ*. A
line of recent work [1, 2, 3, 4] has studied estimators based on minimizing Gaussian log-likelihood penalized by the ℓ1 norm of the entries (or the off-diagonal entries) of the concentration matrix. The
penalized by the ?1 norm of the entries (or the off-diagonal entries) of the concentration matrix. The
resulting optimization problem is a log-determinant program, which can be solved in polynomial
time with interior point methods [5], or by faster co-ordinate descent algorithms [3, 4]. In recent
work, Rothman et al. [1] have analyzed some aspects of high-dimensional behavior, in particular
establishing consistency in Frobenius norm under certain conditions on the model covariance and
under certain scalings of the sparsity, sample size, and ambient model dimension.
The main contribution of this paper is to provide sufficient conditions for model selection consistency of ℓ1-regularized Gaussian maximum likelihood. It is worth noting that such a consistency
result for structure learning of Gaussian graphical models cannot be derived from Frobenius norm
consistency alone. For any concentration matrix Θ, denote the set of its non-zero off-diagonal entries by E(Θ) = {(s, t) | s ≠ t, Θ_st ≠ 0}. (As will be clarified below, the notation E alludes to the fact that this set corresponds to the edges in the graph defining the GMRF.) Under certain technical conditions to be specified, we prove that the ℓ1-regularized (on off-diagonal entries of Θ) Gaussian MLE recovers this edge set with high probability, meaning that P[E(Θ̂) = E(Θ*)] → 1. In many applications of graphical models (e.g., protein networks, social network analysis), it is this edge structure itself, as opposed to the weights Θ*_st on the edges, that is of primary interest. Moreover, we note that model selection consistency is useful even when one is interested in convergence in spectral or Frobenius norm; indeed, having extracted the set E(Θ*), we could then restrict to this subset, and estimate the non-zero entries of Θ* at the faster rates applicable to the reduced dimension.
The remainder of this paper is organized as follows. In Section 2, we state our main result, discuss
its connections to related work, and some of its consequences. Section 3 provides an outline of the
proof. In Section 4, we provide some simulations that illustrate our results.
Notation. For the convenience of the reader, we summarize here notation to be used throughout the paper. Given a vector u ∈ R^d and parameter a ∈ [1, ∞], we use ‖u‖_a to denote the usual ℓ_a norm. Given a matrix U ∈ R^{p×p} and parameters a, b ∈ [1, ∞], we use |||U|||_{a,b} to denote the induced matrix-operator norm max_{‖y‖_a = 1} ‖Uy‖_b; see [6] for background. Three cases of particular importance in this paper are the spectral norm |||U|||_2, corresponding to the maximal singular value of U; the ℓ_∞/ℓ_∞-operator norm, given by

|||U|||_∞ := max_{j=1,...,p} Σ_{k=1}^p |U_jk|,   (1)

and the ℓ_1/ℓ_1-operator norm, given by |||U|||_1 = |||U^T|||_∞. Finally, we use ‖U‖_∞ to denote the element-wise maximum max_{i,j} |U_ij|; note that this is not a matrix norm, but rather a norm on the vectorized form of the matrix. For any matrix U ∈ R^{p×p}, we use vec(U) ∈ R^{p²} to denote its vectorized form, obtained by stacking up the rows of U. We use ⟨⟨U, V⟩⟩ := Σ_{i,j} U_ij V_ij to denote the trace inner product on the space of symmetric matrices. Note that this inner product induces the Frobenius norm |||U|||_F := √(Σ_{i,j} U_ij²). Finally, for asymptotics, we use the following standard notation: we write f(n) = O(g(n)) if f(n) ≤ c g(n) for some constant c < ∞, and f(n) = Ω(g(n)) if f(n) ≥ c′ g(n) for some constant c′ > 0. The notation f(n) ≍ g(n) means that f(n) = O(g(n)) and f(n) = Ω(g(n)).
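These norms map directly onto a few NumPy one-liners; a small sketch with an arbitrary example matrix:

import numpy as np

U = np.array([[1.0, -2.0], [0.5, 3.0]])
linf_op = np.abs(U).sum(axis=1).max()     # |||U|||_inf: maximum absolute row sum, Eq. (1)
l1_op = np.abs(U).sum(axis=0).max()       # |||U|||_1 = |||U^T|||_inf: maximum column sum
spectral = np.linalg.norm(U, 2)           # |||U|||_2: maximal singular value
elementwise = np.abs(U).max()             # ||U||_inf, the element-wise maximum
frobenius = np.linalg.norm(U, 'fro')      # |||U|||_F, induced by the trace inner product
trace_inner = (U * U).sum()               # <<U, U>> = squared Frobenius norm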
2 Background and statement of main result
In this section, we begin by setting up the problem, with some background on Gaussian MRFs and
ℓ1-regularization. We then state our main result, and discuss some of its consequences.
2.1 Gaussian MRFs and ℓ1 penalized estimation
Consider an undirected graph G = (V, E) with p = |V | vertices, and let X = (X1 , . . . , Xp )
denote a p-dimensional Gaussian random vector, with variate X_i identified with vertex i ∈ V. A Gauss-Markov random field (MRF) is described by a density of the form

f(x_1, . . . , x_p; Θ*) = (det(Θ*))^{1/2} / (2π)^{p/2} · exp(-(1/2) xᵀ Θ* x).   (2)
As illustrated in Figure 1, Markov structure is reflected in the sparsity pattern of the inverse covariance or concentration matrix Θ*, a p × p symmetric matrix. In particular, by the Hammersley-Clifford theorem [7], it must satisfy Θ*_ij = 0 for all (i, j) ∉ E. Consequently, the problem of graphical model selection is equivalent to estimating the off-diagonal zero-pattern of the concentration matrix, that is, the set E(Θ*) := {(i, j) ∈ V × V | i ≠ j, Θ*_ij ≠ 0}.
In this paper, we study the minimizer of the ℓ1-penalized Gaussian negative log-likelihood. Letting ⟨⟨A, B⟩⟩ := Σ_{i,j} A_ij B_ij be the trace inner product on the space of symmetric matrices, this objective function takes the form

Θ̂ = arg min_{Θ ≻ 0} { ⟨⟨Θ, Σ̂⟩⟩ - log det(Θ) + λ_n ‖Θ‖_{1,off} } = arg min_{Θ ≻ 0} g(Θ; Σ̂, λ_n).   (3)
[Figure 1 graphics: a five-node undirected graph and the 5 × 5 zero pattern of its inverse covariance; see caption below.]
Figure 1. (a) Simple undirected graph. A Gauss Markov random field has a Gaussian variable X_i associated with each vertex i ∈ V. This graph has p = 5 vertices, maximum degree d = 3 and s = 6 edges. (b) Zero pattern of the inverse covariance Θ* associated with the GMRF in (a). The set E(Θ*) corresponds to the off-diagonal non-zeros (white blocks); the diagonal is also non-zero (grey squares), but these entries do not correspond to edges. The black squares correspond to non-edges, or zeros in Θ*.
Here Σ̂ denotes the sample covariance, that is, Σ̂ := (1/n) Σ_{ℓ=1}^n X^{(ℓ)} [X^{(ℓ)}]ᵀ, where each X^{(ℓ)} is drawn in an i.i.d. manner according to the density (2). The quantity λ_n > 0 is a user-defined regularization parameter, and ‖Θ‖_{1,off} := Σ_{i≠j} |Θ_ij| is the off-diagonal ℓ1 regularizer; note that it does not include the diagonal. Since the negative log-determinant is a strictly convex function [5], this problem always has a unique solution, so that there is no ambiguity in equation (3).
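For intuition, here is a minimal first-order sketch of the estimator (3): proximal-gradient steps that soft-threshold only the off-diagonal entries, with backtracking to keep the iterate positive definite. It is a simplification for illustration, not the coordinate-descent glasso solver of [3, 4]; the step sizes and iteration count are arbitrary.

import numpy as np

def soft_threshold_offdiag(M, tau):
    # Proximal operator of tau * ||.||_{1,off}: shrink off-diagonals, keep the diagonal.
    S = np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
    np.fill_diagonal(S, np.diag(M))
    return S

def graphical_lasso_ista(Sigma_hat, lam, n_iter=500):
    Theta = np.diag(1.0 / np.diag(Sigma_hat))         # positive definite start
    for _ in range(n_iter):
        grad = Sigma_hat - np.linalg.inv(Theta)       # gradient of the smooth part of g
        t = 1.0
        while True:                                   # backtrack to stay in the PD cone
            Theta_new = soft_threshold_offdiag(Theta - t * grad, t * lam)
            try:
                np.linalg.cholesky(Theta_new)
                break
            except np.linalg.LinAlgError:
                t *= 0.5
        Theta = Theta_new
    return Theta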
We let E(Θ̂) = {(i, j) | i ≠ j, Θ̂_ij ≠ 0} denote the edge set associated with the estimate. Of interest in this paper is studying the probability P[E(Θ*) = E(Θ̂)] as a function of the graph size p (which serves as the "model dimension" for the Gauss-Markov model), the sample size n, and the structural properties of Θ̂. In particular, we define both the sparsity index

s := |E(Θ*)| = |{(i, j) ∈ V × V | i ≠ j, Θ*_ij ≠ 0}|,   (4)

corresponding to the total number of edges, and the maximum degree or row cardinality

d := max_{j=1,...,p} |{i | Θ*_ij ≠ 0}|,   (5)

corresponding to the maximum number of non-zeros in any row of Θ*, or equivalently the maximum degree in the graph G, where we include the diagonal in the degree count.
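Given any estimate, these quantities are simple support statistics; a sketch (the tolerance is an arbitrary numerical threshold, and the edge set here counts ordered pairs):

import numpy as np

def support_stats(Theta, tol=1e-8):
    # Edge set E(Theta) over ordered pairs, sparsity index s of Eq. (4),
    # and maximum row cardinality d of Eq. (5), diagonal included.
    mask = np.abs(Theta) > tol
    off = mask & ~np.eye(Theta.shape[0], dtype=bool)
    E = set(zip(*np.nonzero(off)))
    return E, len(E), mask.sum(axis=1).max()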
2.2 Statement of main result
Our assumptions involve the Hessian with respect to Θ of the objective function g defined in equation (3), evaluated at the true model Θ*. Using standard results on matrix derivatives [5], it can be shown that this Hessian takes the form

Γ* := ∇²_Θ g(Θ) |_{Θ=Θ*} = (Θ*)^{-1} ⊗ (Θ*)^{-1},   (6)

where ⊗ denotes the Kronecker matrix product. By definition, Γ* is a p² × p² matrix indexed by vertex pairs, so that entry Γ*_{(j,k),(ℓ,m)} corresponds to the second partial derivative ∂²g / (∂Θ_jk ∂Θ_ℓm), evaluated at Θ = Θ*. When X has multivariate Gaussian distribution, then Γ* is the Fisher information of the model, and by standard results on cumulant functions in exponential families [8], we have the more specific expression Γ*_{(j,k),(ℓ,m)} = cov{X_j X_k, X_ℓ X_m}. For this reason, Γ* can be viewed as an edge-based counterpart to the usual covariance matrix Σ*.
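The vertex-pair indexing is easy to mishandle; the sketch below builds Γ* with a Kronecker product for a small chain-graph Θ* (an arbitrary example) and reads off one entry.

import numpy as np

Theta = np.array([[2.0, -0.5, 0.0],
                  [-0.5, 2.0, -0.5],
                  [0.0, -0.5, 2.0]])          # small chain-graph concentration matrix
Sigma = np.linalg.inv(Theta)
Gamma = np.kron(Sigma, Sigma)                 # Gamma* = (Theta*)^{-1} (x) (Theta*)^{-1}, Eq. (6)

p = Theta.shape[0]
j, k, l, m = 0, 1, 1, 2                       # entry indexed by the vertex pairs (j,k), (l,m)
print(Gamma[p * j + k, p * l + m], Sigma[j, l] * Sigma[k, m])  # equal by Kronecker structure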
We define the set of non-zero off-diagonal entries in the model concentration matrix Θ*:

S(Θ*) := {(i, j) ∈ V × V | i ≠ j, Θ*_ij ≠ 0},   (7)

and let S̄(Θ*) := S(Θ*) ∪ {(1, 1), . . . , (p, p)} be the augmented set including the diagonal. We let S^c(Θ*) denote the complement of S̄(Θ*) in the set {1, . . . , p} × {1, . . . , p}, corresponding to all pairs (ℓ, m) for which Θ*_ℓm = 0. When it is clear from context, we shorten our notation for these sets to S and S^c, respectively. Finally, for any two subsets T and T′ of V × V, we use Γ*_{TT′} to denote the |T| × |T′| matrix with rows and columns of Γ* indexed by T and T′ respectively.
We require the following conditions on the Fisher information matrix Γ*:

[A1] Incoherence condition: This condition captures the intuition that variable-pairs which are non-edges cannot exert an overly strong effect on variable-pairs which form edges of the Gaussian graphical model:

|||Γ*_{S^c S} (Γ*_{SS})^{-1}|||_∞ ≤ (1 - α),  for some fixed α > 0.   (8)

We note that similar conditions arise in the analysis of the Lasso in linear regression [9, 10, 11].

[A2] Covariance control: There exist constants K_{Σ*}, K_{Γ*} < ∞ such that

|||(Θ*)^{-1}|||_∞ ≤ K_{Σ*},  and |||(Γ*_{SS})^{-1}|||_∞ ≤ K_{Γ*}.   (9)

These assumptions require that the elements along any row of (Θ*)^{-1} and (Γ*_{SS})^{-1} have bounded ℓ1 norms. Note that similar assumptions are also required for consistency in Frobenius norm [1].
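Assumption [A1] can be checked numerically for any candidate model; a sketch using the augmented index set S (edges plus the diagonal) in row-major vertex-pair order:

import numpy as np

def incoherence(Theta, tol=1e-12):
    # Left-hand side of condition (8); [A1] asks this to be at most 1 - alpha.
    p = Theta.shape[0]
    Gamma = np.kron(np.linalg.inv(Theta), np.linalg.inv(Theta))
    S = [p * i + j for i in range(p) for j in range(p)
         if i == j or abs(Theta[i, j]) > tol]
    Sc = [idx for idx in range(p * p) if idx not in S]
    M = Gamma[np.ix_(Sc, S)] @ np.linalg.inv(Gamma[np.ix_(S, S)])
    return np.abs(M).sum(axis=1).max()     # the ell_inf operator norm of Eq. (1)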
Recall from equations (4) and (5) the definitions of the sparsity index s and maximum degree d,
respectively. With this notation, we have:
Theorem 1. Consider a Gaussian distribution with concentration matrix Θ* that satisfies conditions (A1) and (A2). Suppose the penalty is set as λ_n = C₁ √(log p / n), and the minimum edge-weight θ*_min := min_{(i,j)∈S} |Θ*_ij| scales as θ*_min > C₂ √(log p / n) for some constants C₁, C₂ > 0. Further, suppose the triple (n, d, p) satisfies the scaling

n > L d² log(p),   (10)

for some constant L > 0. Then the estimated edge set E(Θ̂) recovers the true edge set w.h.p., in particular,

P[E(Θ̂) = E(Θ*)] ≥ 1 - exp(-c log p) → 1,   (11)

for some constant c > 0.
Remarks: Rothman et al. [1] prove that the error of the estimator in Frobenius norm obeys the bound |||Θ̂ - Θ*|||²_F = O({(s + p) log p}/n), with high probability. We note that model selection consistency does not follow from this result, since an estimate may be close in Frobenius norm while differing substantially in terms of zero-pattern. In one sense, the model selection criterion is more demanding, since given knowledge of the edge set E(Θ*), one could restrict estimation procedures to this subset, and so achieve faster rates. On the other hand, Theorem 1 requires incoherence conditions [A1] on the covariance matrix, which are not required for Frobenius norm consistency [1].
2.3 Comparison to neighbor-based graphical model selection
It is interesting to compare the estimator to the Gaussian neighborhood regression method studied by Meinshausen and Bühlmann [9], in which each node is linearly regressed with an ℓ1 penalty (Lasso) on the rest of the nodes; and the location of the non-zero regression weights is taken as the neighborhood estimate of that node. These neighborhoods are then combined, by either an OR rule or an AND rule, to estimate the full graph. Wainwright [12] shows that the rate n ≍ d log p is a sharp threshold for the success/failure of neighborhood selection by Lasso. By a union bound over the p nodes, it follows this threshold holds for the Meinshausen and Bühlmann approach as well. This is superior to the scaling in our result (10). However, the two methods rely on slightly different underlying assumptions, and the current form of the neighborhood-based approach requires solving a total of p Lasso programs, as opposed to a single log-determinant problem. Below we show two cases where the Lasso irrepresentability condition holds, while the log-determinant requirement fails. However, in general, we do not know whether the log-determinant irrepresentability strictly dominates its analog for the Lasso.
2.3.1 Illustration of irrepresentability: Diamond graph
Consider the following Gaussian MRF example from [13]. Figure 2(a) shows a diamond-shaped
graph G = (V, E), with vertex set V = {1, 2, 3, 4} and edge-set as the fully connected graph over V
with the edge (1, 4) removed.

Figure 2: (a) Graph of the example discussed by [13]. (b) A simple 4-node star graph.

The covariance matrix Σ* is parameterized by the correlation parameter ρ ∈ [0, 1/√2]: the diagonal entries are set to Σ*_ii = 1, for all i ∈ V; the entries corresponding to edges are set to Σ*_ij = ρ for (i, j) ∈ E\{(2, 3)}, Σ*_23 = 0; and finally the entry corresponding to the non-edge is set as Σ*_14 = 2ρ². For this model, [13] showed that the ℓ1-regularized MLE Θ̂ fails to recover the graph structure for any sample size, if ρ > -1 + (3/2)^{1/2} ≈ 0.23. It is instructive to compare this necessary condition to the sufficient condition provided in our analysis, namely the incoherence Assumption [A1] as applied to the Hessian Γ*. For this particular example, a little calculation shows that Assumption [A1] is equivalent to the constraint 4|ρ|(|ρ| + 1) < 1, an inequality which holds for all ρ ∈ (-0.2017, 0.2017). Note that the upper value 0.2017 is just below the necessary threshold discussed by [13]. On the other hand, the irrepresentability condition for the Lasso requires only that 2|ρ| < 1, i.e., ρ ∈ (-0.5, 0.5). Thus, in the regime |ρ| ∈ [0.2017, 0.5), the Lasso irrepresentability condition holds while our log-determinant counterpart fails.
2.3.2 Illustration of irrepresentability: Star graphs
A second interesting example is the star-shaped graphical model, illustrated in Figure 2(b), which consists of a single hub node connected to the rest of the spoke nodes. We consider a four node graph, with vertex set V = {1, 2, 3, 4} and edge-set E = {(1, s) | s ∈ {2, 3, 4}}. The covariance matrix Σ* is parameterized by the correlation parameter ρ ∈ [-1, 1]: the diagonal entries are set to Σ*_ii = 1, for all i ∈ V; the entries corresponding to edges are set to Σ*_ij = ρ for (i, j) ∈ E; while the non-edge entries are set as Σ*_ij = ρ² for (i, j) ∉ E. Consequently, for this particular example, Assumption [A1] reduces to the constraint |ρ|(|ρ| + 2) < 1, which holds for all ρ ∈ (-0.414, 0.414). The irrepresentability condition for the Lasso on the other hand allows the full range ρ ∈ (-1, 1). Thus there is again a regime, |ρ| ∈ [0.414, 1), where the Lasso irrepresentability condition holds while the log-determinant counterpart fails.
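The two closed-form conditions derived above can be compared directly; the ρ values swept below are arbitrary probes of the regimes just described.

for rho in (0.10, 0.25, 0.45, 0.60):
    diamond_ok = 4 * abs(rho) * (abs(rho) + 1) < 1   # [A1] for the diamond graph
    star_ok = abs(rho) * (abs(rho) + 2) < 1          # [A1] for the 4-node star
    lasso_diamond = 2 * abs(rho) < 1                 # Lasso irrepresentability, diamond
    lasso_star = abs(rho) < 1                        # Lasso irrepresentability, star
    print(f"rho={rho:.2f}: diamond logdet={diamond_ok} lasso={lasso_diamond}; "
          f"star logdet={star_ok} lasso={lasso_star}")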
3 Proof outline
Theorem 1 follows as a corollary to Theorem 2 in Ravikumar et al. [14], an extended and more general version of this paper. There we consider the more general problem of estimation of the covariance matrix of a random vector (that need not necessarily be Gaussian) from i.i.d. samples; and where we relax Assumption [A2], and allow the quantities K_{Σ*}, K_{Γ*} to grow with sample size n.
version [14]. Our proofs are based on a technique that we call a primal-dual witness method, used
previously in analysis of the Lasso [12]. It involves following a specific sequence of steps to cone Z)
e of symmetric matrices that together satisfy the optimality conditions associated
struct a pair (?,
with the convex program (3) with high probability. Thus, when the constructive procedure succeeds,
e is equal to the unique solution ?
b of the convex program (3), and Ze is an optimal solution to its
?
5
b inherits from ?
e various optimality properties in terms of its disdual. In this way, the estimator ?
tance to the truth ?? , and its recovery of the signed sparsity pattern. To be clear, our procedure for
e is not a practical algorithm for solving the log-determinant problem (3), but rather is
constructing ?
used as a proof technique for certifying the behavior of the ?1 -regularized MLE (3).
3.1 Primal-dual witness approach
At the core of the primal-dual witness method are the standard convex optimality conditions that characterize the optimum Θ̂ of the convex program (3). For future reference, we note that the sub-differential of the norm ‖·‖_{1,off} evaluated at some Θ consists of the set of all symmetric matrices Z ∈ R^{p×p} such that

Z_ij = 0 if i = j;  Z_ij = sign(Θ_ij) if i ≠ j and Θ_ij ≠ 0;  Z_ij ∈ [-1, +1] if i ≠ j and Θ_ij = 0.   (12)
Lemma 1. For any λ_n > 0 and sample covariance Σ̂ with strictly positive diagonal, the ℓ1-regularized log-determinant problem (3) has a unique solution Θ̂ ≻ 0 characterized by

Σ̂ - Θ̂^{-1} + λ_n Z̃ = 0,   (13)

where Z̃ is an element of the subdifferential ∂‖Θ̂‖_{1,off}.
Based on this lemma, we construct the primal-dual witness solution (Θ̃, Z̃) as follows:

(a) We determine the matrix Θ̃ by solving the restricted log-determinant problem

Θ̃ := arg min_{Θ ≻ 0, Θ_{S^c} = 0} { ⟨⟨Θ, Σ̂⟩⟩ - log det(Θ) + λ_n ‖Θ‖_{1,off} }.   (14)

Note that by construction, we have Θ̃ ≻ 0, and moreover Θ̃_{S^c} = 0.

(b) We choose Z̃_S as a member of the sub-differential of the regularizer ‖·‖_{1,off}, evaluated at Θ̃.

(c) We set Z̃_{S^c} as

Z̃_{S^c} = (1/λ_n) { -Σ̂_{S^c} + [Θ̃^{-1}]_{S^c} },   (15)

which ensures that the constructed matrices (Θ̃, Z̃) satisfy the optimality condition (13).

(d) We verify the strict dual feasibility condition |Z̃_ij| < 1 for all (i, j) ∈ S^c.
To clarify the nature of the construction, steps (a) through (c) suffice to obtain a pair (Θ̃, Z̃) that satisfy the optimality conditions (13), but do not guarantee that Z̃ is an element of the sub-differential ∂‖Θ̃‖_{1,off}. By construction, specifically step (b) of the construction ensures that the entries of Z̃ in S satisfy the sub-differential conditions, since Z̃_S is a member of the sub-differential of ∂‖Θ̃_S‖_{1,off}. The purpose of step (d), then, is to verify that the remaining elements of Z̃ satisfy the necessary conditions to belong to the sub-differential.

If the primal-dual witness construction succeeds, then it acts as a witness to the fact that the solution Θ̃ to the restricted problem (14) is equivalent to the solution Θ̂ to the original (unrestricted) problem (3). We exploit this fact in our proof of Theorem 1: we first show that the primal-dual witness technique succeeds with high probability, from which we can conclude that the support of the optimal solution Θ̂ is contained within the support of the true Θ*. The next step requires checking that none of the entries in Θ̃_S constructed in Equation (14) are zero. It is to verify this that we require the lower bound assumption in Theorem 1 on the value of the minimum value θ*_min.
4 Experiments
In this section, we describe some experiments which illustrate the model selection rates in Theorem 1. We solved the ℓ1-penalized log-determinant optimization problem using the "glasso" program [4], which builds on the block co-ordinate descent algorithm of [3]. We report experiments for star-shaped graphs, which consist of one node connected to the rest of the nodes. These graphs allow us to vary both d and p, since the degree of the central hub can be varied between 1 and p - 1. Applying the algorithm to these graphs should therefore provide some insight on how the required number of samples n is related to d and p. We tested varying graph sizes p from p = 64 upwards to p = 375. The edge-weights were set as entries in the inverse of a covariance matrix Σ* with diagonal entries set as Σ*_ii = 1 for all i = 1, . . . , p, and Σ*_ij = 2.5/d for all (i, j) ∈ E, so that the quantities (K_{Σ*}, K_{Γ*}, α) remain constant.
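An end-to-end run of this setup can be sketched as follows, reusing the graphical_lasso_ista solver sketched after Eq. (3); the seed, the support threshold, and the choice C₁ = 1 in λ_n are arbitrary, and the spoke-spoke covariances ρ² are chosen so that the inverse is exactly star-supported.

import numpy as np

rng = np.random.default_rng(4)
p, d, n = 64, 8, 4000
rho = 2.5 / d                                 # edge covariances, as in Fig. 4

Sigma = np.eye(p)
Sigma[0, 1:d] = Sigma[1:d, 0] = rho           # hub node 0 linked to spokes 1..d-1
Sigma[1:d, 1:d] = rho ** 2                    # spoke pairs correlate through the hub
np.fill_diagonal(Sigma, 1.0)
Theta_star = np.linalg.inv(Sigma)             # star-supported concentration matrix

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Sigma_hat = X.T @ X / n
lam = np.sqrt(np.log(p) / n)                  # lambda_n = C_1 sqrt(log p / n), C_1 = 1

Theta_hat = graphical_lasso_ista(Sigma_hat, lam)
E_hat = np.abs(Theta_hat) > 1e-6
E_star = np.abs(Theta_star) > 1e-6
np.fill_diagonal(E_hat, False); np.fill_diagonal(E_star, False)
print("exact edge recovery:", np.array_equal(E_hat, E_star))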
Dependence on graph size:
[Figure 3 graphics: two panels plotting probability of success against n and against n/log p for star graphs with p = 64, 100, 225, 375; see caption below.]
Figure 3. Simulations for a star graph with varying number of nodes p, fixed maximal degree d = 40, and edge covariances Σ*_ij = 1/16 for all edges. Plots of probability of correct signed edge-set recovery versus the sample size n in panel (a), and versus the rescaled sample size n/log p in panel (b). Each point corresponds to the average over N = 100 trials.
Panel (a) of Figure 3 plots the probability of correct signed edge-set recovery against the sample size
n for a star-shaped graph of three different graph sizes p. For each curve, the probability of success
starts at zero (for small sample sizes n), but then transitions to one as the sample size is increased.
As would be expected, it is more difficult to perform model selection for larger graph sizes, so
that (for instance) the curve for p = 375 is shifted to the right relative to the curve for p = 64.
Panel (b) of Figure 3 replots the same data, with the horizontal axis rescaled by (1/ log p). This
scaling was chosen because our theory predicts that the sample size should scale logarithmically
with p (see equation (10)). Consistent with this prediction, when plotted against the rescaled sample
size n/ log p, the curves in panel (b) all stack up. Consequently, the ratio (n/ log p) acts as an
effective sample size in controlling the success of model selection, consistent with the predictions
of Theorem 1.
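The rescaling check itself is mechanical; here is a sketch reusing `star_graph_model` and `restricted_glasso` from the earlier snippets. The recovery tolerance, the λ_n schedule, and the trial count N are illustrative choices of ours (and much smaller than the N = 100 used above).

```python
import numpy as np

def signed_support_recovered(Theta_hat, Theta_star, tol=1e-3):
    off = ~np.eye(Theta_star.shape[0], dtype=bool)
    s_hat = np.sign(np.where(np.abs(Theta_hat) > tol, Theta_hat, 0.0))
    return bool(np.array_equal(s_hat[off], np.sign(Theta_star)[off]))

def success_prob(p, d, n, N=20, seed=1):
    # Fraction of N trials in which the estimate (the proximal sketch run
    # with full support, i.e. the unrestricted glasso problem) recovers the
    # signed edge set; plotting this against n / log(p) should reproduce
    # the stacking effect of panel (b).
    rng = np.random.default_rng(seed)
    Theta_star = star_graph_model(p, d)
    Sigma_star = np.linalg.inv(Theta_star)
    lam = np.sqrt(np.log(p) / n)         # theory-driven choice of lambda_n
    full = np.ones((p, p), dtype=bool)
    hits = 0
    for _ in range(N):
        X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)
        hits += signed_support_recovered(
            restricted_glasso(X.T @ X / n, full, lam), Theta_star)
    return hits / N

# success_prob(64, 40, 400) gives one point on the p = 64 curve.
```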
Dependence on the maximum node degree:
Panel (a) of Figure 4 plots the probability of correct signed edge-set recovery against the sample size
n for star-shaped graphs; each curve corresponds to a different choice of maximum node degree d,
allowing us to investigate the dependence of the sample size on this parameter. So as to control these
comparisons, we fixed the number of nodes to p = 200. Observe how the plots in panel (a) shift to
the right as the maximum node degree d is increased, showing that star-shaped graphs with higher
degrees are more difficult. In panel (b) of Figure 4, we plot the same data versus the rescaled sample
size n/d. Recall that if all the curves were to stack up under this rescaling, then it means the required
sample size n scales linearly with d. These plots are closer to aligning than the unrescaled plots, but
the agreement is not perfect. In particular, observe that the curve for the largest degree d = 100 (right-most in panel (a)) remains a bit to the right in panel (b), which suggests that a somewhat more aggressive rescaling, perhaps n/d^γ for some γ ∈ (1, 2), is appropriate. The sufficient condition from Theorem 1, as summarized
[Two-panel figure: both panels titled "Truncated Star with Varying d"; y-axis "Prob. of success"; x-axis "n" in panel (a) and "n/d" in panel (b); one curve per hub degree d ∈ {50, 60, 70, 80, 90, 100}.]
Figure 4. Simulations for star graphs with fixed number of nodes p = 200, varying maximal (hub) degree d, and edge covariances Θ*_ij = 2.5/d. Plots of probability of correct signed edge-set recovery versus the sample size n in panel (a), and versus the rescaled sample size n/d in panel (b).
in equation (10), is n = Ω(d² log p), which appears to be overly conservative based on these data. Thus, it might be possible to tighten our theory under certain regimes.
References
[1] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electron. J. Statist., 2:494–515, 2008.
[2] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[3] A. d'Aspremont, O. Banerjee, and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl., 30(1):56–66, 2008.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2007.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[6] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[7] S. L. Lauritzen. Graphical Models. Oxford University Press, Oxford, 1996.
[8] L. D. Brown. Fundamentals of Statistical Exponential Families. Institute of Mathematical Statistics, Hayward, CA, 1986.
[9] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Ann. Statist., 34(3):1436–1462, 2006.
[10] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals. IEEE Trans. Info. Theory, 51(3):1030–1051, 2006.
[11] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006.
[12] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity using the Lasso. Technical Report 709, UC Berkeley, May 2006. To appear in IEEE Trans. Info. Theory.
[13] N. Meinshausen. A note on the Lasso for graphical Gaussian model selection. Statistics and Probability Letters, 78(7):880–884, 2008.
[14] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Technical Report 767, Department of Statistics, UC Berkeley, November 2008.
2,688 | 3,437 | Empirical performance maximization
for linear rank statistics
Stéphan Clémençon
Telecom Paristech (TSI) - LTCI UMR Institut Telecom/CNRS 5141
[email protected]
Nicolas Vayatis
ENS Cachan & UniverSud - CMLA UMR CNRS 8536
[email protected]
Abstract
The ROC curve is known to be the golden standard for measuring performance of
a test/scoring statistic regarding its capacity of discrimination between two populations in a wide variety of applications, ranging from anomaly detection in signal
processing to information retrieval, through medical diagnosis. Most practical
performance measures used in scoring applications such as the AUC, the local
AUC, the p-norm push, the DCG and others, can be seen as summaries of the
ROC curve. This paper highlights the fact that many of these empirical criteria
can be expressed as (conditional) linear rank statistics. We investigate the properties of empirical maximizers of such performance criteria and provide preliminary
results for the concentration properties of a novel class of random variables that
we will call a linear rank process.
1 Introduction
In the context of ranking, several performance measures may be considered. Even in the simplest
framework of bipartite ranking, where a binary label is available, there is not one and only natural criterion, but many possible options. The ROC curve provides a complete description of performance
but its functional nature renders direct optimization strategies rather complex. Empirical risk minimization strategies are thus based on summaries of the ROC curve, which take the form of empirical
risk functionals where the averages involved are no longer taken over i.i.d. sequences. The most
popular choice is the so-called AUC criterion (see [AGH+ 05] or [CLV08] for instance), but when
top-ranked instances are more important, various choices can be considered: the Discounted Cumulative Gain or DCG [CZ06], the p-norm push (see [Rud06]), or the local AUC (refer to [CV07]).
The present paper starts from the simple observation that all these summary criteria have a common
feature: conditioned upon the labels, they all belong to the class of linear rank statistics. Such statistics have been extensively studied in the mathematical statistics literature because of their optimality
properties in hypothesis testing, see [HS67]. Now, in the statistical learning view, with the importance of excess risk bounds, the theory of rank tests needs to be revisited and new problems come
up. The arguments required to deal with risk functionals based on linear rank statistics have been
sketched in [CV07] in a special case. The empirical AUC, known as the Wilcoxon-Mann-Whitney
statistic, is also a U -statistic and this particular dependence structure was extensively exploited in
[CLV08]. In the present paper, we describe the generic structure of linear rank statistics as an orthogonal decomposition after projection onto the space of sums of i.i.d. random variables (Section
2). This projection method is the key to all statistical results related to maximizers of such criteria:
consistency, (fast) rates of convergence or model selection. We relate linear rank statistics to performance measures relevant for the ranking problem by showing that the targets of ranking algorithms correspond to optimal ordering rules in that sense (Section 3). Eventually, we provide some preliminary results in Section 4 for empirical maximizers of performance criteria based on linear rank
statistics with smooth score-generating functions.
2 Criteria based on linear rank statistics
Along the paper, we shall consider the standard binary classification model. Take a random pair (X, Y) ∈ X × {−1, +1}, where X is an observation vector in a high-dimensional space X ⊂ R^d and Y is a binary label, and denote by P the distribution of (X, Y). The dependence structure between X and Y can be described by conditional distributions. We can consider two descriptions: either P = (μ, η), where μ is the marginal distribution of X and η is the posterior distribution defined by η(x) = P{Y = 1 | X = x} for all x ∈ R^d; or else P = (p, G, H), with p = P{Y = 1} being the proportion of positive instances, G = L(X | Y = +1) the conditional distribution of positive instances, and H = L(X | Y = −1) the conditional distribution of negative instances. A sample of size n of i.i.d. realizations of this statistical model can be represented as a set of pairs {(X_i, Y_i)}_{1≤i≤n}, where (X_i, Y_i) is a copy of (X, Y), but also as a set {X_1^+, . . . , X_k^+, X_1^−, . . . , X_m^−}, where L(X_i^+) = G, L(X_i^−) = H, and k + m = n. In this setup, the integers k and m are random, drawn as binomials of size n and respective parameters p and 1 − p.
2.1 Motivation
Most of the statistical learning theory has been developed for empirical risk minimizers (ERM) of
sums of i.i.d. random variables. Mathematical results were elaborated with the use of empirical
processes techniques and particularly concentration inequalities for such processes (see [BBL05]
for an overview). This was made possible by the standard assumption that, in a batch setup, for
the usual prediction problems (classification, regression or density estimation), the sample data
{(Xi , Yi )}i=1,...,n are i.i.d. random variables. Another reason is that the error probability in these
problems involves only "first-order" events, depending only on (X_1, Y_1). In classification, for instance, most theoretical developments were focused on the error probability P{Y_1 ≠ g(X_1)} of a classifier g : X → {−1, +1}, which is hardly considered in practice because the two populations are
rarely symmetric in terms of proportions or costs. For prediction tasks such as ranking or scoring,
more involved statistics need to be considered, such as the Area Under the ROC curve (AUC), the
local AUC, the Discounted Cumulative Gain (DCG), the p-norm push, etc. For instance, the AUC,
a very popular performance measure in various scoring applications, such as medical diagnosis or
credit-risk screening, can be seen as a probability of an "event of order two", i.e. depending on (X_1, Y_1), (X_2, Y_2). In information retrieval, the DCG is the reference measure and it seems to have
a rather complicated statistical structure. The first theoretical studies either attempt to get back to
sums of i.i.d. random variables by artificially reducing the information available (see [AGH+ 05],
[Rud06]) or adopt a plug-in strategy ([CZ06]). Our approach is to i) avoid plug-in in order to understand the intimate nature of the learning problem, ii) keep all the information available and provide
the analysis of the full statistic. We shall see that this approach requires the development of new
tools for handling the concentration properties of rank processes, namely collections of rank statistics indexed by classes of functions, which have never been studied before.
2.2 Empirical performance of scoring rules
The learning task on which we focus here is known as the bipartite ranking problem. The goal of
ranking is to order the instances X_i by means of a real-valued scoring function s : X → R, given the
binary labels Yi . We denote by S the set of all scoring functions. It is natural to assume that a good
scoring rule s would assign higher ranks to the positive instances (those for which Yi = +1) than to
the negative ones. The rank of the observation X_i induced by the scoring function s is expressed as Rank(s(X_i)) = Σ_{j=1}^n I{s(X_j) ≤ s(X_i)}, and it ranges from 1 to n. In the present paper, we consider a
particular class of simple (conditional) linear rank statistics inspired from the Wilcoxon statistic.
Definition. 1 Let φ : [0, 1] → [0, 1] be a nondecreasing function. We define the "empirical W-ranking performance measure" as the empirical risk functional

    Ŵ_n(s) = Σ_{i=1}^n I{Y_i = +1} φ( Rank(s(X_i)) / (n + 1) ),   ∀s ∈ S.

The function φ is called the "score-generating function" of the "rank process" {Ŵ_n(s)}_{s∈S}.
We refer to the book by Serfling [Ser80] for properties and asymptotic theory of rank statistics.
We point out that our definition does not match exactly with the standard definition of linear rank statistics. Indeed, in our case, the coefficients of the ranks in the sum are random because they involve the variables Y_i. We will call the statistics Ŵ_n(s) conditional linear rank statistics.
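Definition 1 translates directly into code; the following is a minimal sketch (the O(n²) rank computation is kept for transparency, an argsort-based version would be faster, and the toy data below are our own illustrative choice).

```python
import numpy as np

def W_hat(scores, labels, phi):
    # Empirical W-ranking performance of Definition 1: the sum over positive
    # instances of phi(Rank(s(X_i)) / (n + 1)), with
    # Rank(s(X_i)) = #{j : s(X_j) <= s(X_i)}.
    s = np.asarray(scores, dtype=float)
    n = len(s)
    ranks = (s[None, :] <= s[:, None]).sum(axis=1)
    return float(np.sum(phi(ranks[np.asarray(labels) == 1] / (n + 1.0))))

# With phi(u) = u this is (up to normalization) the Wilcoxon statistic:
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=200)
s = y + rng.normal(scale=1.0, size=200)   # informative toy scores
print(W_hat(s, y, lambda u: u))
```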
It is a very natural idea to consider ranking criteria based on ranks. Observe indeed that the performance of a given scoring function s is invariant by increasing transforms of the latter, when evaluated
through the empirical W -ranking performance measure. For specific choices of the score-generating
function φ, we recover the main examples mentioned in the introduction, and many relevant criteria can be accurately approximated by statistics of this form (each of the choices below is sketched in code right after the list):
• φ(u) = u - this choice leads to the celebrated Wilcoxon-Mann-Whitney statistic, which is related to the empirical version of the AUC (see [CLV08]).

• φ(u) = u · I{u ≥ u_0}, for some u_0 ∈ (0, 1) - such a score-generating function corresponds to the local AUC criterion, introduced recently in [CV07]. Such a criterion is of interest when one wants to focus on the highest ranks.

• φ(u) = u^p - this is another choice which puts emphasis on high ranks, but in a smoother way than the previous one. This is related to the p-norm push approach taken in [Rud06]. However, we point out that the criterion studied in the latter work relies on a different definition of the rank of an observation. Namely, the rank of positive instances among negative instances (and not in the pooled sample) is used. This choice permits to use independence, which makes the technical part much simpler, at the price of increasing the variance of the criterion.

• φ(u) = φ_n(u) = c((n + 1)u) · I{u ≥ k/(n + 1)} - this corresponds to the DCG criterion in the bipartite setup, one of the "gold standard" quality measures in information retrieval, when grades are binary (namely I{Y_i = +1}). The c(i)'s denote the discount factors, c(i) measuring the importance of rank i. The integer k denotes the number of top-ranked instances to take into account. Notice that, with our indexation, top positions correspond to the largest ranks and the sequence {c_i} should be chosen to be increasing.
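In code, the four choices of score-generating function read as follows. This is a sketch; in particular the DCG discount c(i) = 1/log2(2 + n − i), increasing in i as required by our indexation, is our own illustrative convention and not prescribed by the text above.

```python
import numpy as np

phi_auc   = lambda u: u                      # Wilcoxon-Mann-Whitney / AUC
phi_local = lambda u, u0=0.9: u * (u >= u0)  # local AUC, focus on top ranks
phi_push  = lambda u, p=4: u ** p            # p-norm push

def phi_dcg(u, n, k):
    # DCG with binary grades: rank i = (n + 1) * u; the discount c(i) is
    # increasing in i since top positions carry the largest ranks here.
    i = (n + 1) * u
    return (u >= k / (n + 1.0)) / np.log2(2.0 + n - i)
```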
2.3 Uniform approximation of linear rank statistics
This subsection describes the main result of the present analysis, which shall serve as the essential
tool for deriving statistical properties of maximizers of empirical W -ranking performance measures. For a given scoring function s, we denote by Gs , respectively Hs , the conditional cumulative
distribution function of s(X) given Y = +1, respectively Y = −1. With these notations, the unconditional cdf of s(X) is then F_s = pG_s + (1 − p)H_s. For averages of non-i.i.d. random variables, the
underlying statistical structure can be revealed by orthogonal projections onto the space of sums of
i.i.d. random variables in many situations. This projection argument was the key for the study of empirical AUC maximization, which involved U -processes, see [CLV08]. In the case of U -statistics,
this orthogonal decomposition is known as the Hoeffding decomposition and the remainder may be
expressed as a degenerate U -statistic, see [Hoe48]. For rank statistics, a similar though less accurate
decomposition can be considered. We refer to [Haj68] for a systematic use of the projection method
for investigating the asymptotic properties of general statistics.
Lemma. 2 ([Haj68]) Let Z_1, . . . , Z_n be independent r.v.'s and T = T(Z_1, . . . , Z_n) be a square integrable statistic. The r.v. T̂ = Σ_{i=1}^n E[T | Z_i] − (n − 1)E[T] is called the Hájek projection of T. It satisfies

    E[T̂] = E[T]   and   E[(T̂ − T)²] = E[(T − E[T])²] − E[(T̂ − E[T̂])²].
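The identity in Lemma 2 is easy to verify by simulation. A sketch for T = Z_1 Z_2 + Z_2 Z_3 + Z_1 Z_3 with Z_i i.i.d. N(μ, 1), where E[T | Z_i] = 2μZ_i + μ² is available in closed form; the choice of T and μ is ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, N = 1.0, 200000
Z = rng.normal(mu, 1.0, size=(N, 3))
T = Z[:, 0] * Z[:, 1] + Z[:, 1] * Z[:, 2] + Z[:, 0] * Z[:, 2]
# Hajek projection: sum_i E[T | Z_i] - (n - 1) E[T], with n = 3, E[T] = 3 mu^2:
T_hat = 2 * mu * Z.sum(axis=1) + 3 * mu**2 - 2 * 3 * mu**2

# Lemma 2: E[(T_hat - T)^2] = Var(T) - Var(T_hat); both sides ~ 3 here.
print(np.mean((T_hat - T) ** 2), T.var() - T_hat.var())
```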
From the perspective of ERM in statistical learning theory, through the projection method, well-known concentration results for standard empirical processes may carry over to more complex collections of r.v.'s such as rank processes, as shown by the next approximation result.
Proposition. 3 Consider a score-generating function φ which is twice continuously differentiable on [0, 1]. We set

    Φ_s(x) = φ(F_s(s(x))) + p ∫_{s(x)}^{+∞} φ'(F_s(u)) dG_s(u)   for all x ∈ X.

Let S_0 ⊂ S be a VC major class of functions. Then, we have: ∀s ∈ S_0,

    Ŵ_n(s) = V̂_n(s) + R̂_n(s),

where V̂_n(s) = Σ_{i=1}^n I{Y_i = +1} Φ_s(X_i) and R̂_n(s) = O_P(1) as n → ∞, uniformly over s ∈ S_0.
The notation OP (1) means bounded in probability and the integrals are represented in the sense of
the Lebesgue-Stieltjes integral. Details of the proof can be found in the Appendix.
Remark 1 (ON THE COMPLEXITY ASSUMPTION.) On the terminology of major sets and major classes, we refer to [Dud99]. In the proof of Proposition 3, we need to control the complexity of subsets of the form {x ∈ X : s(x) ≥ t}. The stipulated complexity assumption guarantees that this collection of sets indexed by (s, t) ∈ S_0 × R forms a VC class.

Remark 2 (ON THE SMOOTHNESS ASSUMPTION.) We point out that it is also possible to deal with discontinuous score-generating functions, as seen in [CV07]. In this case, the lack of smoothness of φ has to be compensated by smoothness assumptions on the underlying conditional distributions. Another approach would consist of approximating Ŵ_n(s) by the empirical W-ranking criterion in which the score-generating function φ is replaced by a smooth approximation. Owing to space limitations, here we only handle the smooth case.
An essential hint to the study of the asymptotic behavior of a linear rank statistic consists in rewriting it as a function of the sampling cdf. Denoting by F̂_s(x) = n^{−1} Σ_{i=1}^n I{s(X_i) ≤ x} the empirical counterpart of F_s(x), we have:

    Ŵ_n(s) = Σ_{i=1}^k φ( (n/(n + 1)) F̂_s(s(X_i^+)) ),

which may easily be shown to converge to E[φ(F_s(s(X))) | Y = +1] as n → ∞, see [CS58].
Definition. 4 For a given score-generating function φ, we will call the functional

    W_φ(s) = E[φ(F_s(s(X))) | Y = +1]

a "W-ranking performance measure".

The following result is a consequence of Proposition 3, and its proof can be found in the Appendix.

Proposition. 5 Let S_0 ⊂ S be a VC major class of functions with VC dimension V, and let φ be a score-generating function of class C¹. Then, as n → ∞, we have with probability one:

    (1/n) sup_{s∈S_0} | Ŵ_n(s) − k W_φ(s) | → 0.
3 Optimality

We introduce the class S* of scoring functions obtained as strictly increasing transformations of the regression function η:

    S* = { s* = T ∘ η | T : [0, 1] → R strictly increasing }.

The class S* contains the optimal scoring rules for the bipartite ranking problem. The next paragraphs motivate the use of W-ranking performance measures as optimization criteria for this problem.
3.1 ROC curves

A classical tool for measuring the performance of a scoring rule s is the so-called ROC curve

    ROC(s, ·) : α ∈ [0, 1] ↦ 1 − G_s ∘ H_s^{−1}(1 − α),

where H_s^{−1}(x) = inf{ t ∈ R | H_s(t) ≥ x }. In the case where s = η, we will denote ROC(η, α) = ROC*(α), for any α ∈ [0, 1]. The set of points (α, β) ∈ [0, 1]² which can be achieved as (α, ROC(s, α)) for some scoring function s is called the ROC space.

It is a well-known fact that the regression function provides an optimal scoring function for the ROC curve. This fact relies on a simple application of the Neyman-Pearson lemma. We refer to [CLV08] for the details. Using the fact that, for a given scoring function, the ROC curve is invariant by increasing transformations of the scoring function, we get the following result:

Lemma. 6 For any scoring function s and any α ∈ [0, 1], we have:

    ∀s* ∈ S*,   ROC(s, α) ≤ ROC(s*, α) = ROC*(α).
The next result states that the set of optimal scoring functions coincides with the set of maximizers of the W_φ-ranking performance, provided that the score-generating function φ is strictly increasing.

Proposition. 7 Assume that the score-generating function φ is strictly increasing. Then, we have:

    ∀s ∈ S,   W_φ(s) ≤ W_φ(η).

Moreover, W_φ* := W_φ(η) = W_φ(s*) for any s* ∈ S*.
Remark 3 (ON PLUG-IN RANKING RULES) Theoretically, a possible approach to ranking is the plug-in method ([DGL96]), which consists of using an estimate η̂ of the regression function as a scoring function. As shown by the subsequent bound, when φ is differentiable with a bounded derivative and η̂ is close to η in the L¹ sense, plug-in leads to a nearly optimal ordering in terms of the W-ranking criterion:

    W_φ* − W_φ(η̂) ≤ (1 − p) ‖φ'‖_∞ E[ |η̂(X) − η(X)| ].

However, one faces difficulties with the plug-in approach when dealing with high-dimensional data (see [GKKW02]), which provides the motivation for exploring algorithms based on W-ranking performance maximization.
3.2 Connection to hypothesis testing

From the angle embraced in this paper, the ranking problem is tightly related to hypothesis testing. Denote by X^+ and X^− two r.v.'s distributed as G and H respectively. As a first go, we can reformulate the ranking problem as the one of finding a scoring function s such that s(X^−) is stochastically smaller than s(X^+), which means, for example, that: ∀t ∈ R, P{s(X^−) ≥ t} ≤ P{s(X^+) ≥ t}. It is easy to see that the latter statement means that the ROC curve of s dominates the first diagonal of the ROC space. We point out the fact that the first diagonal corresponds to nondiscriminating scoring functions s_0 such that H_{s_0} = G_{s_0}. However, searching for a scoring function s fulfilling this property is generally not sufficient in practice. Heuristically, one would like to pick an s in order to be as far as possible from the case where "G_s = H_s". This requires specifying a certain measure of dissimilarity between distributions. In this respect, various criteria may be considered, such as the L¹-Mallows metric (see the next remark). Indeed, assuming temporarily that s is fixed and considering the problem of testing similarity vs. dissimilarity between two distributions H_s and G_s based on two independent samples s(X_1^+), . . . , s(X_k^+) and s(X_1^−), . . . , s(X_m^−), it is well known that nonparametric tests based on linear rank statistics have optimality properties. We refer to Chapter 9 in [Ser80] for an overview of rank procedures for testing homogeneity, which may yield relevant criteria in the ranking context.
Remark 4 (CONNECTION BETWEEN AUC AND THE L¹-MALLOWS METRIC) Consider the AUC criterion: AUC(s) = ∫_0^1 ROC(s, α) dα. It is well known that this criterion may be interpreted as the "rate of concording pairs": AUC(s) = P{s(X) < s(X') | Y = −1, Y' = +1}, where (X, Y) and (X', Y') denote independent copies. Furthermore, it may easily be shown that

    AUC(s) = 1/2 + ∫_{−∞}^{∞} { H_s(t) − G_s(t) } dF(t),

where the cdf F may be taken as any linear convex combination of H_s and G_s. Provided that H_s is stochastically smaller than G_s and that F(dt) is the uniform distribution over (0, 1) (this is always possible, even if it means replacing s by F ∘ s, which leaves the ordering untouched), the second term may be identified as the L¹-Mallows distance between H_s and G_s, a well-known probability metric widely considered in the statistical literature (also known as the L¹-Wasserstein metric).
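The two expressions for AUC(s) in Remark 4 are easy to check numerically. A sketch on a toy model where the scores are Gaussian, N(0, 1) for negatives and N(1, 1) for positives; this is an illustrative choice of ours, for which AUC = Φ(1/√2) ≈ 0.76.

```python
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 4000)        # s(X) | Y = -1  ~  H_s
pos = rng.normal(1.0, 1.0, 4000)        # s(X) | Y = +1  ~  G_s

# Rate of concording pairs: P{s(X) < s(X') | Y = -1, Y' = +1}.
auc_pairs = (neg[:, None] < pos[None, :]).mean()

# 1/2 + integral of (H_s - G_s) dF, with F the pooled (p = 1/2) mixture:
# averaging H_s - G_s over a sample drawn from F is a Monte Carlo integral.
pooled = np.concatenate([neg, pos])
H = np.searchsorted(np.sort(neg), pooled, side="right") / len(neg)
G = np.searchsorted(np.sort(pos), pooled, side="right") / len(pos)
auc_int = 0.5 + np.mean(H - G)

print(auc_pairs, auc_int)               # both ~ 0.76, matching Remark 4
```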
4 A generalization error bound

We now provide a bound on the generalization ability of scoring rules based on empirical maximization of W-ranking performance criteria.

Theorem. 8 Set the empirical W-ranking performance maximizer ŝ_n = arg max_{s∈S_0} Ŵ_n(s). Under the same assumptions as in Proposition 3, and assuming in addition that the class of functions Φ_s induced by S_0 is also a VC major class of functions, we have, for any δ > 0, with probability 1 − δ:

    W_φ* − W_φ(ŝ_n) ≤ c_1 √(V/n) + c_2 √(log(1/δ)/n),

for some positive constants c_1, c_2.

The proof is a straightforward consequence of Proposition 3, and it can be found in the Appendix.
5 Conclusion

In this paper, we considered a general class of performance measures for ranking/scoring which can be described as conditional linear rank statistics. Our overall setup encompasses, in particular, known criteria used in medical diagnosis and information retrieval. We have described the statistical nature of such statistics, proved that they are compatible with optimal scoring functions in the bipartite setup, and provided a preliminary generalization bound with a √n-rate of convergence. By doing so, we provided the first results on a class of linear rank processes. Further work is needed to identify a variance control assumption in order to derive fast rates of convergence, and to obtain consistency under weaker complexity assumptions. Moreover, it is not yet clear how to formulate convex surrogates for such functionals.
Appendix - Proofs
Proof of Proposition 5

By virtue of the finite increment theorem, we have:

    sup_{s∈S_0} | Ŵ_n(s) − k W_φ(s) | ≤ k ‖φ'‖_∞ ( 1/(n + 1) + sup_{(s,t)∈S_0×R} | F̂_s(t) − F_s(t) | ),

and the desired result immediately follows from the application of the VC inequality; see Remark 1.
Proof of Proposition 3

Since φ is of class C², a Taylor expansion at the second order immediately yields:

    Ŵ_n(s) = Σ_{i=1}^k φ(F_s(s(X_i^+))) + B̂_n(s) + R̂_n(s),

with

    B̂_n(s) = Σ_{i=1}^k ( Rank(s(X_i^+))/(n + 1) − F_s(s(X_i^+)) ) φ'(F_s(s(X_i^+))),

    |R̂_n(s)| ≤ Σ_{i=1}^k ( Rank(s(X_i^+))/(n + 1) − F_s(s(X_i^+)) )² ‖φ''‖_∞.

Following in the footsteps of [Haj68], we first compute the projection of B̂_n(s) onto the space Σ of r.v.'s of the form Σ_{i≤n} f_i(X_i, Y_i) such that E[f_i²(X_i, Y_i)] < ∞ for all i ∈ {1, . . . , n}:

    P_Σ(B̂_n(s)) = Σ_{i=1}^k Σ_{j=1}^n E[ ( Rank(s(X_i^+))/(n + 1) − F_s(s(X_i^+)) ) φ'(F_s(s(X_i^+))) | X_j, Y_j ].

This projection may be split into two terms:

    (I) = Σ_{i=1}^n I{Y_i=+1} ( E[Rank(s(X_i)) | s(X_i)]/(n + 1) − F_s(s(X_i)) ) φ'(F_s(s(X_i))),

    (II) = Σ_{i=1}^n I{Y_i=+1} Σ_{j≠i} E[ ( Rank(s(X_i))/(n + 1) − F_s(s(X_i)) ) φ'(F_s(s(X_i))) | s(X_j), Y_j ].

The first term is easily handled and may be seen as negligible (it is of order O_P(n^{−1/2})), since Rank(s(X_i)) = n F̂_s(s(X_i)) and, by assumption, sup_{(s,t)∈S×R} | F̂_s(t) − F_s(t) | = O_P(n^{−1/2}) (see Remark 1). Up to an additive term of order O_P(1) uniformly over s ∈ S, the second term may be rewritten as

    (II) = (1/(n + 1)) Σ_{i=1}^n I{Y_i=+1} Σ_{j≠i} E[ I{s(X_j) ≤ s(X_i)} φ'(F_s(s(X_i))) | s(X_j), Y_j ]
         = (k/(n + 1)) Σ_{j=1}^n ∫_{s(X_j)}^{∞} φ'(F_s(u)) dG_s(u) − (1/(n + 1)) Σ_{i=1}^n I{Y_i=+1} ∫_{s(X_i)}^{∞} φ'(F_s(u)) dG_s(u).

As Σ_{i=1}^n I{Y_i=+1} ∫_{s(X_i)}^{∞} φ'(F_s(u)) dG_s(u) / (n + 1) ≤ sup_{t∈[0,1]} φ'(t) and k/(n + 1) → p, we get that, uniformly over s ∈ S_0:

    Σ_{i=1}^k φ(F_s(s(X_i^+))) + P_Σ(B̂_n(s)) = V̂_n(s) + O_P(1)   as n → ∞.

The term R̂_n(s) is negligible since, up to the multiplicative constant ‖φ''‖_∞, it is bounded by

    (1/(n + 1)²) Σ_{i=1}^n E[ ( 1 − 2F_s(s(X_i)) + Σ_{k≠i} { I{s(X_k) ≤ s(X_i)} − F_s(s(X_i)) } )² ].

As F_s is bounded by 1, it suffices to observe that for all i:

    E[ ( Σ_{k≠i} { I{s(X_k) ≤ s(X_i)} − F_s(s(X_i)) } )² | s(X_i) ] = Σ_{k≠i} E[ ( I{s(X_k) ≤ s(X_i)} − F_s(s(X_i)) )² | s(X_i) ].

Bounding the conditional variance E[( I{s(X_k) ≤ s(X_i)} − F_s(s(X_i)) )² | s(X_i)] of the binomial r.v. by 1/4, one finally gets that R̂_n(s) is of order O_P(1) uniformly over s ∈ S_0.

Eventually, one needs to evaluate the accuracy of the approximation yielded by the projection, B̂_n(s) − { P_Σ(B̂_n(s)) − (n − 1) E[B̂_n(s)] }. Write, for all s ∈ S_0,

    B̂_n(s) = n Û_n(s) + Σ_{i=1}^n I{Y_i=+1} ( 1/(n + 1) − F_s(s(X_i)) ) φ'(F_s(s(X_i))),

where Û_n(s) = Σ_{i≠j} q_s((X_i, Y_i), (X_j, Y_j)) / (n(n + 1)) is a U-statistic with kernel:

    q_s((x, y), (x', y')) = I{y=+1} · I{s(x') ≤ s(x)} · φ'(F_s(s(x))).

Hence, we have

    n^{−1} ( B̂_n(s) − { P_Σ(B̂_n(s)) − (n − 1) E[B̂_n(s)] } ) = Û_n(s) − { P_Σ(Û_n(s)) − (n − 1) E[Û_n(s)] },

which actually corresponds to the degenerate part of the Hoeffding decomposition of the U-statistic Û_n(s). Now, given that sup_{s∈S_0} ‖q_s‖_∞ < ∞, it follows from Theorem 11 in [CLV08] for instance, combined with the basic symmetrization device applied to the kernel q_s, that

    sup_{s∈S_0} | Û_n(s) − { P_Σ(Û_n(s)) − (n − 1) E[Û_n(s)] } | = O_P(n^{−1})   as n → ∞,

which concludes the proof.
Proof of Proposition 7

Using the decomposition F_s = p G_s + (1 − p) H_s, we are led to the following expression:

    p W_φ(s) = ∫_0^1 φ(u) du − (1 − p) E[ φ(F_s(s(X))) | Y = −1 ].

Then, using a change of variable:

    E[ φ(F_s(s(X))) | Y = −1 ] = ∫_0^1 φ( p(1 − ROC(s, α)) + (1 − p)(1 − α) ) dα.

It is now easy to conclude, since φ is increasing (by assumption) and because of the optimality of the elements of S* in the sense of Lemma 6.
Proof of Theorem 8

Observe that, by virtue of Proposition 3,

    W_φ* − W_φ(ŝ_n) ≤ 2 sup_{s∈S_0} | Ŵ_n(s)/k − W_φ(s) | ≤ (2/k) sup_{s∈S_0} | V̂_n(s) − k W_φ(s) | + O_P(n^{−1}),

and the desired bound derives from the VC inequality applied to the sup term, noticing that it follows from our assumptions that {(x, y) ↦ I{y=+1} Φ_s(x)}_{s∈S_0} is a VC class of functions.
References
[AGH+05] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research, 6:393–425, 2005.
[BBL05] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[CLV08] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. The Annals of Statistics, 36(2):844–874, 2008.
[CS58] H. Chernoff and I. R. Savage. Asymptotic normality and efficiency of certain nonparametric test statistics. Ann. Math. Stat., 29:972–994, 1958.
[CV07] S. Clémençon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 8:2671–2699, 2007.
[CZ06] D. Cossock and T. Zhang. Subset ranking using regression. In H. U. Simon and G. Lugosi, editors, Proceedings of COLT 2006, volume 4005 of Lecture Notes in Computer Science, pages 605–619, 2006.
[DGL96] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[Dud99] R. M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[GKKW02] L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer, 2002.
[Haj68] J. Hájek. Asymptotic normality of simple linear rank statistics under alternatives. Ann. Math. Stat., 39:325–346, 1968.
[Hoe48] W. Hoeffding. A class of statistics with asymptotically normal distribution. Ann. Math. Stat., 19:293–325, 1948.
[HS67] J. Hájek and Z. Šidák. Theory of Rank Tests. Academic Press, 1967.
[Rud06] C. Rudin. Ranking with a p-norm push. In H. U. Simon and G. Lugosi, editors, Proceedings of COLT 2006, volume 4005 of Lecture Notes in Computer Science, pages 589–604, 2006.
[Ser80] R. J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 1980.
2,689 | 3,438 | On the Reliability of Clustering Stability in the Large
Sample Regime - Supplementary Material
Ohad Shamir† and Naftali Tishby†,‡
† School of Computer Science and Engineering
‡ Interdisciplinary Center for Neural Computation
The Hebrew University
Jerusalem 91904, Israel
{ohadsh,tishby}@cs.huji.ac.il
A Exact Formulation of the Sufficient Conditions
In this section, we give a mathematically rigorous formulation of the sufficient conditions discussed
in the main paper. For that we will need some additional notation.
First of all, it will be convenient to define a scaled version of our distance measure d_D(A_k(S_1), A_k(S_2)) between clusterings. Formally, define the random variable

    d_D^m(A_k(S_1), A_k(S_2)) := √m · d_D(A_k(S_1), A_k(S_2)) = √m Pr_{x∼D}( argmax_i f_θ̂,i(x) ≠ argmax_i f_θ̃,i(x) ),

where θ̂, θ̃ ∈ Θ are the solutions returned by A_k(S_1), A_k(S_2), and S_1, S_2 are random samples, each of size m, drawn i.i.d. from the underlying distribution D. The scaling by the square root of the sample size will allow us to analyze the non-trivial asymptotic behavior of these distance measures, which without scaling simply converge to zero in probability as m → ∞.
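A Monte Carlo sketch of this scaled distance, with plain k-means centroids standing in for the abstract algorithm A_k and nearest-centroid assignment playing the role of argmax_i f_θ,i(x); the greedy label matching and all helper names are our own illustrative choices, not part of the formal framework.

```python
import numpy as np

def km_centroids(S, k, iters=50, seed=0):
    # Plain Lloyd iterations; stands in for the clustering algorithm A_k.
    rng = np.random.default_rng(seed)
    C = S[rng.choice(len(S), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((S[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([S[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return C

def scaled_distance(S1, S2, k, ref):
    # sqrt(m) * Pr_{x~D}(assignments disagree), the probability estimated on
    # a large reference sample `ref` drawn from D. Cluster indices are
    # aligned by greedily matching centroids (condition 1 below makes the
    # association of clusters to indices stable near theta^0).
    m = len(S1)
    C1, C2 = km_centroids(S1, k), km_centroids(S2, k)
    a1 = np.argmin(((ref[:, None, :] - C1[None]) ** 2).sum(-1), axis=1)
    a2 = np.argmin(((ref[:, None, :] - C2[None]) ** 2).sum(-1), axis=1)
    perm = np.argmin(((C2[:, None, :] - C1[None]) ** 2).sum(-1), axis=1)
    return np.sqrt(m) * np.mean(a1 != perm[a2])
```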
For some ε > 0 and a set S ⊆ R^n, let B_ε(S) be the ε-neighborhood of S, namely

    B_ε(S) := { x ∈ X : inf_{y∈S} ‖x − y‖_2 ≤ ε }.

In this paper, when we talk about neighborhoods in general, we will always assume they are uniform (namely, contain an ε-neighborhood for some positive ε).
We will also need to define the following variant of d_D^m(A_k(S_1), A_k(S_2)), where we restrict ourselves to the mass in some subset of R^n. Formally, we define the restricted distance between two clusterings, with respect to a set B ⊆ R^n, as

    d_D^m(A_k(S_1), A_k(S_2), B) := √m Pr_{x∼D}( argmax_i f_θ̂,i(x) ≠ argmax_i f_θ̃,i(x) ∧ x ∈ B ).   (1)

In particular, d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}(∪_{i,j} F_{θ⁰,i,j})) refers to the mass which switches clusters and is also inside an r/√m-neighborhood of the limit cluster boundaries (where the boundaries are defined with respect to f_{θ⁰}(·)). Once again, when S_1, S_2 are random samples, we can think of it as a random variable with respect to drawing and clustering S_1, S_2.
Conditions. The following conditions shall be assumed to hold:

1. Consistency Condition: θ̂ converges in probability (over drawing and clustering a sample of size m, m → ∞) to some θ⁰ ∈ Θ. Furthermore, the association of clusters to indices {1, . . . , k} is constant in some neighborhood of θ⁰.

2. Central Limit Condition: √m(θ̂ − θ⁰) converges in distribution to a multivariate zero mean Gaussian random variable Z.
3. Regularity Conditions:

   (a) f_θ(x) is Sufficiently Smooth: For any θ in some neighborhood of θ⁰, and any x in some neighborhood of the cluster boundaries ∪_{i,j} F_{θ⁰,i,j}, f_θ(x) is twice continuously differentiable with respect to θ, with a non-zero first derivative and a uniformly bounded second derivative for any x. Both f_{θ⁰}(x) and (∂/∂θ)f_{θ⁰}(x) are twice differentiable with respect to any x ∈ X, with a uniformly bounded second derivative.

   (b) Limit Cluster Boundaries are Reasonably Nice: For any two clusters i, j, F_{θ⁰,i,j} is either empty, or a compact, non-self-intersecting, orientable (n − 1)-dimensional hypersurface in R^n with finite positive volume, a boundary (edge), and with a neighborhood contained in X in which the underlying density function p(·) is continuous. Moreover, the gradient ∇(f_{θ⁰,i}(·) − f_{θ⁰,j}(·)) has positive magnitude everywhere on F_{θ⁰,i,j}.

   (c) Intersections of Cluster Boundaries are Relatively Negligible: For any two distinct non-empty cluster boundaries F_{θ⁰,i,j} and F_{θ⁰,i',j'}, we have that

       (1/ε) ∫_{B_δ(F_{θ⁰,i,j} ∩ F_{θ⁰,i',j'}) ∩ B_ε(F_{θ⁰,i,j}) ∩ B_ε(F_{θ⁰,i',j'})} 1 dx   and   (1/ε) ∫_{B_ε(∂F_{θ⁰,i,j})} 1 dx

       converge to 0 as ε, δ → 0 (in any manner), where ∂F_{θ⁰,i,j} is the edge of F_{θ⁰,i,j}.

   (d) Minimal Parametric Stability: It holds for some γ > 0 that

       Pr( d_D^m(A_k(S_1), A_k(S_2)) ≠ d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}(∪_{i,j} F_{θ⁰,i,j})) ) = O(r^{−3−γ}) + o(1),

       where o(1) → 0 as m → ∞. Namely, the mass of D which switches between clusters is with high probability inside thin strips around the limit cluster boundaries, and this high probability increases at least polynomially as the width of the strips increases (see below for a further discussion of this).
The regularity assumptions are relatively mild, and can usually be inferred based on the consistency and central limit conditions, as well as the specific clustering framework that we are considering. For example, condition 3c and the assumptions on F_{θ⁰,i,j} in condition 3b are fulfilled in a clustering framework where the clusters are separated by hyperplanes. As to condition 3d, suppose our clustering framework is such that the cluster boundaries depend on θ̂ in a smooth manner. Then the asymptotic normality of θ̂, with variance O(1/m), and the compactness of X, will generally imply that the cluster boundaries obtained from clustering a sample are contained with high probability inside strips of width O(1/√m) around the limit cluster boundaries. More specifically, the asymptotic probability of this happening for strips of width r/√m will be exponentially high in r, due to the asymptotic normality of θ̂. As a result, the mass which switches between clusters, when we compare two independent clusterings, will be inside those strips with probability exponentially high in r. Therefore, condition 3d will hold by a large margin, since only polynomially high probability is required there.
B Proofs - General Remarks
The proofs will use the additional notation and the sufficient conditions as presented in Sec. A. Throughout the proofs, we will sometimes use the stochastic order notation O_p(·) and o_p(·) (cf. [8]), defined as follows. Let {X_m} and {Y_m} be sequences of random vectors, defined on the same probability space. We write X_m = O_p(Y_m) to mean that for each δ > 0 there exists a real number M such that Pr(‖X_m‖ ≥ M‖Y_m‖) < δ if m is large enough. We write X_m = o_p(Y_m) to mean that Pr(‖X_m‖ ≥ ε‖Y_m‖) → 0 for each ε > 0. Notice that {Y_m} may also be non-random. For example, X_m = o_p(1) means that X_m → 0 in probability. When we write, for example, X_m = Y_m + o_p(1), we mean that X_m − Y_m = o_p(1).
C Proof of Proposition 1
By condition 3a, f_θ(x) has a first order Taylor expansion with respect to any θ̂ close enough to θ⁰, with a remainder term uniformly bounded for any x:

    f_θ̂(x) = f_{θ⁰}(x) + ( (∂/∂θ) f_{θ⁰}(x) )ᵀ (θ̂ − θ⁰) + o(‖θ̂ − θ⁰‖).   (2)

By the asymptotic normality assumption, √m ‖θ̂ − θ⁰‖ = O_p(1), hence ‖θ̂ − θ⁰‖ = O_p(1/√m). Therefore, we get from Eq. (2) that

    √m ( f_θ̂(x) − f_{θ⁰}(x) ) = ( (∂/∂θ) f_{θ⁰}(x) )ᵀ ( √m (θ̂ − θ⁰) ) + o_p(1),   (3)

where the remainder term o_p(1) does not depend on x. By regularity condition 3a and compactness of X, (∂/∂θ) f_{θ⁰}(·) is a uniformly bounded vector-valued function from X to the Euclidean space in which θ resides. As a result, the mapping θ̂ ↦ ( (∂/∂θ) f_{θ⁰}(·) )ᵀ θ̂ is a mapping from Θ, with the metric induced by the Euclidean space in which it resides, to the space of all uniformly bounded R^k-valued functions on X. We can turn the latter space into a metric space by equipping it with the obvious extension of the supremum norm (namely, for any two functions f(·), g(·), ‖f − g‖ := sup_{x∈X} ‖f(x) − g(x)‖_∞, where ‖·‖_∞ is the infinity norm in Euclidean space). With this norm, the mapping above is a continuous mapping between two metric spaces. We also know that √m(θ̂ − θ⁰) converges in distribution to a multivariate Gaussian random variable Z. By the continuous mapping theorem [8] and Eq. (3), this implies that √m( f_θ̂(·) − f_{θ⁰}(·) ) converges in distribution to a Gaussian process G(·), where

    G(·) := ( (∂/∂θ) f_{θ⁰}(·) )ᵀ Z.   (4)
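Equation (4) is explicit enough to compute with: the variance of any contrast G_i(x) − G_j(x), which appears in Proposition D.1 below, is just a quadratic form in the Jacobian of f_θ at θ⁰ and the covariance of Z. A sketch follows; the k-means-style association f_{θ,i}(x) = −‖x − μ_i‖² and the identity covariance are illustrative stand-ins, not part of the general framework.

```python
import numpy as np

def var_contrast(jac_i, jac_j, cov_Z):
    # Var(G_i(x) - G_j(x)) for G(x) = (d f_theta(x)/d theta |_{theta^0})^T Z.
    v = jac_i - jac_j
    return float(v @ cov_Z @ v)

# Example: theta = (mu_1, mu_2) in R^2 each, f_{theta,i}(x) = -||x - mu_i||^2,
# so d f_i / d mu_i = 2 (x - mu_i) and d f_i / d mu_j = 0 for j != i.
x, mu1, mu2 = np.array([0.5, 0.2]), np.zeros(2), np.array([1.0, 0.0])
jac1 = np.concatenate([2 * (x - mu1), np.zeros(2)])
jac2 = np.concatenate([np.zeros(2), 2 * (x - mu2)])
cov_Z = np.eye(4)            # placeholder asymptotic covariance of Z
print(var_contrast(jac1, jac2, cov_Z))
```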
D Proof of Thm. 1

D.1 A High Level Description of the Proof
The full proof of Thm. 1 is rather long and technical, mostly due to the many technical subtleties that need to be taken care of. Since these might obscure the main ideas, we present here separately a general overview of the proof, without the finer details.

The purpose of the stability estimator η̂^k_{m,q}, scaled by √m, boils down to trying to assess the "expected" value of the random variable d_D^m(A_k(S_1), A_k(S_2)): we estimate q instantiations of d_D^m(A_k(S_1), A_k(S_2)), and take their average. Our goal is to show that this average, taking m → ∞, is likely to be close to the value instab̂(A_k, D) as defined in the theorem. The most straightforward way to go about it is to prove that instab̂(A_k, D) actually equals lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2))], and then use some large deviation bound to prove that √m η̂^k_{m,q} is indeed close to it with high probability, if q is large enough. Unfortunately, computing lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2))] is problematic. The reason is that the convergence tools at our disposal deal with convergence in distribution of random variables, but convergence in distribution does not necessarily imply convergence of expectations. In other words, we can try and analyze the asymptotic distribution of d_D^m(A_k(S_1), A_k(S_2)), but the expected value of this asymptotic distribution is not necessarily the same as lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2))]. As a result, we will have to take a more indirect route.

Here is the basic idea: instead of analyzing the asymptotic expectation of d_D^m(A_k(S_1), A_k(S_2)), we analyze the asymptotic expectation of a different random variable, d_D^m(A_k(S_1), A_k(S_2), B), which was formally defined in Eq. (1). Informally, recall that d_D^m(A_k(S_1), A_k(S_2)) is the mass of the underlying distribution D which switches between clusters, when we draw and cluster two independent samples of size m. Then d_D^m(A_k(S_1), A_k(S_2), B) measures the subset of this mass which lies inside some B ⊆ R^n. In particular, following the notation of Sec. A, we will pick B to be B_{r/√m}(∪_{i,j} F_{θ⁰,i,j}) for some r > 0. In words, this constitutes strips of width r/√m around the limit cluster boundaries. Writing the above expression for B as B_{r/√m}, we have that if r is large enough, then d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}) is equal to d_D^m(A_k(S_1), A_k(S_2)) with very high probability over drawing and clustering a pair of samples, for any large enough sample size m. Basically, this is because the fluctuations of the cluster boundaries, based on drawing and clustering a random sample of size m, cannot be too large, and therefore the mass which switches clusters is concentrated around the limit cluster boundaries, if m is large enough.

The advantage of the "surrogate" random variable d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}) is that it is bounded for any finite r, unlike d_D^m(A_k(S_1), A_k(S_2)). With bounded random variables, convergence in distribution does imply convergence of expectations, and as a result we are able to calculate lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2), B_{r/√m})] explicitly. This will turn out to be very close to instab̂(A_k, D) as it appears in the theorem (in fact, we can make it arbitrarily close to instab̂(A_k, D) by making r large enough). Using the fact that d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}) and d_D^m(A_k(S_1), A_k(S_2)) are equal with very high probability, we show that, conditioned on a highly probable event, √m η̂^k_{m,q} is an unbiased estimator of d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}), based on q instantiations, for any sample size m. As a result, using large deviation bounds, we get that √m η̂^k_{m,q} is close to d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}), with a high probability which does not depend on m. Therefore, as m → ∞, √m η̂^k_{m,q} will be close to lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2), B_{r/√m})] with high probability. By picking r to scale appropriately with q, our theorem follows.
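In code, the estimator described above is just an average of q independent instantiations of the scaled distance. A sketch reusing `scaled_distance` from the snippet in Sec. A; note that drawing disjoint subsamples from one large pool, as done here, only approximates genuinely i.i.d. sample pairs.

```python
import numpy as np

def instability_estimate(data, k, m, q, ref, seed=0):
    # sqrt(m) * eta_hat^k_{m,q}: average over q pairs of size-m subsamples
    # of the scaled distance between the two induced clusterings.
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(q):
        idx = rng.choice(len(data), 2 * m, replace=False)
        vals.append(scaled_distance(data[idx[:m]], data[idx[m:]], k, ref))
    return float(np.mean(vals))
```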
For convenience, the proof is divided into two parts: in Subsec. D.2, we calculate lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2), B_{r/√m})] explicitly, while Subsec. D.3 executes the general plan outlined above to prove our theorem.

A few more words are in order about the calculation of lim_{m→∞} E[d_D^m(A_k(S_1), A_k(S_2), B_{r/√m})] in Subsec. D.2, since it is rather long and involved in itself. Our goal is to perform this calculation without going through an intermediate step of explicitly characterizing the distribution of d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}). This is because the distribution might be highly dependent on the specific clustering framework, and thus it is unsuitable for the level of generality which we aim at (in other words, we do not wish to assume a specific clustering framework). The idea is as follows: recall that d_D^m(A_k(S_1), A_k(S_2), B_{r/√m}) is the mass of the underlying distribution D, inside strips of width r/√m around the limit cluster boundaries, which switches clusters when we draw and cluster two independent samples of size m. For any x ∈ X, let A_x be the event that x switched clusters. Then we can write E[d_D^m(A_k(S_1), A_k(S_2), B_{r/√m})], by Fubini's theorem, as:

    E[d_D^m(A_k(S_1), A_k(S_2), B_{r/√m})] = √m E ∫_{B_{r/√m}} 1(A_x) p(x) dx = √m ∫_{B_{r/√m}} Pr(A_x) p(x) dx.   (5)
The heart of the proof is Lemma D.5, which considers what happens to the integral above inside a single strip near one of the limit cluster boundaries F_{θ⁰,i,j}. The main body of the proof then shows how the result of Lemma D.5 can be combined to give the asymptotic value of Eq. (5) when we take the integral over all of B_{r/√m}. The bottom line is that we can simply sum the contributions from each strip, because the intersection of these different strips is asymptotically negligible. All the other lemmas in Subsec. D.2 develop technical results needed for our proof.

Finally, let us describe the proof of Lemma D.5 in a bit more detail. It starts with an expression equivalent to the one in Eq. (5), and transforms it to an expression composed of a constant value and a remainder term which converges to 0 as m → ∞. The development can be divided into a number of steps. The first step is rewriting everything using the asymptotic Gaussian distribution of the cluster association function f_θ̂(x) for each x, plus remainder terms (Eq. (13)). Since we are integrating over x, special care is given to show that the convergence to the asymptotic distribution is uniform for all x in the domain of integration. The second step is to rewrite the integral (which is over a strip around the cluster boundary) as a double integral along the cluster boundary itself, and along a normal segment at any point on the cluster boundary (Eq. (14)). Since the strips become arbitrarily small as m → ∞, the third step consists of rewriting everything in terms of a Taylor expansion around each point on the cluster boundary (Eq. (16), Eq. (17) and Eq. (18)). The fourth and final step is a change of variables, and after a few more manipulations we get the required result.
D.2 Part 1: Auxiliary Result
As described in the previous subsection, we will need an auxiliary result (Proposition D.1 below), characterizing the asymptotic expected value of $d^m_D(A_k(S_1),A_k(S_2),B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j}))$.
Proposition D.1. Let $r>0$. Assuming the set of conditions from Sec. A holds, $\lim_{m\to\infty}\mathbb{E}\,d^m_D(A_k(S_1),A_k(S_2),B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j}))$ is equal to
$$2\left(\frac{1}{\sqrt\pi}-h(r)\right)\sum_{1\le i<j\le k}\int_{F_{\theta_0,i,j}}\frac{p(x)\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}{\|\nabla(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))\|}\,dx,$$
where $h(r)=O(\exp(-r^2))$.
To prove this result, we will need several technical lemmas.
Lemma D.1. Let $S$ be a hypersurface in $\mathbb R^n$ which fulfills the regularity conditions 3b and 3c for any $F_{\theta_0,i,j}$, and let $g(\cdot)$ be a continuous real function on $\mathcal X$. Then for any $\epsilon>0$,
$$\frac{1}{\epsilon}\int_{B_\epsilon(S)}g(x)\,dx = \frac{1}{\epsilon}\int_S\int_{-\epsilon}^{\epsilon}g(x+yn_x)\,dy\,dx + o(1), \qquad (6)$$
where $n_x$ is a unit normal vector to $S$ at $x$, and $o(1)\to 0$ as $\epsilon\to 0$.
Proof. Let $B'_\epsilon(S)$ be a strip around $S$, composed of all points which are on some normal to $S$ and close enough to $S$:
$$B'_\epsilon(S) := \{x'\in\mathbb R^n : \exists x\in S,\ \exists y\in[-\epsilon,\epsilon],\ x'=x+yn_x\}.$$
Since $S$ is orientable, then for small enough $\epsilon>0$, $B'_\epsilon(S)$ is diffeomorphic to $S\times[-\epsilon,\epsilon]$. In particular, the map $\varphi : S\times[-\epsilon,\epsilon]\to B'_\epsilon(S)$, defined by
$$\varphi(x,y)=x+yn_x,$$
will be a diffeomorphism. Let $D\varphi(x,y)$ be the Jacobian of $\varphi$ at the point $(x,y)\in S\times[-\epsilon,\epsilon]$. Note that $D\varphi(x,0)=1$ for every $x\in S$.
We now wish to claim that as $\epsilon\to 0$,
$$\frac{1}{\epsilon}\int_{B_\epsilon(S)}g(x)\,dx = \frac{1}{\epsilon}\int_{B'_\epsilon(S)}g(x)\,dx + o(1). \qquad (7)$$
To see this, we begin by noting that $B'_\epsilon(S)\subseteq B_\epsilon(S)$. Moreover, any point in $B_\epsilon(S)\setminus B'_\epsilon(S)$ has the property that its projection to the closest point in $S$ is not a normal to $S$, and thus must be $\epsilon$-close to the edge of $S$. As a result of regularity condition 3c for $S$, and the fact that $g(\cdot)$ is continuous and hence uniformly bounded in the volume of integration, we get that the integration of $g(\cdot)$ over $B_\epsilon\setminus B'_\epsilon$ is asymptotically negligible (as $\epsilon\to 0$), and hence Eq. (7) is justified.
By the change of variables theorem from multivariate calculus, followed by Fubini's theorem, and using the fact that $D\varphi$ is continuous and equals 1 on $S\times\{0\}$,
$$\frac{1}{\epsilon}\int_{B'_\epsilon(S)}g(x)\,dx = \frac{1}{\epsilon}\int_{S\times[-\epsilon,\epsilon]}g(x+yn_x)\,D\varphi(x,y)\,dx\,dy = \frac{1}{\epsilon}\int_{-\epsilon}^{\epsilon}\int_S g(x+yn_x)\,D\varphi(x,y)\,dx\,dy = \frac{1}{\epsilon}\int_{-\epsilon}^{\epsilon}\int_S g(x+yn_x)\,dx\,dy + o(1),$$
where $o(1)\to 0$ as $\epsilon\to 0$. Combining this with Eq. (7) yields the required result.
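As a quick numerical sanity check of Lemma D.1 (not part of the proof), one can compare the two sides for a concrete hypersurface. The sketch below uses the unit circle in $\mathbb R^2$ and an arbitrary smooth test function; both choices are assumptions for illustration only.

```python
import numpy as np
rng = np.random.default_rng(0)

eps = 0.01
g = lambda p: p[..., 0] ** 2                     # arbitrary smooth test function

# LHS: (1/eps) * integral of g over the strip B_eps(S), S = unit circle,
# estimated by Monte Carlo over the bounding box.
pts = rng.uniform(-1 - eps, 1 + eps, size=(4_000_000, 2))
in_strip = np.abs(np.linalg.norm(pts, axis=1) - 1.0) <= eps
box_area = (2 * (1 + eps)) ** 2
lhs = g(pts[in_strip]).sum() / len(pts) * box_area / eps

# RHS: (1/eps) * surface integral of the inner integral along the normal.
t = np.linspace(0.0, 2 * np.pi, 10_000, endpoint=False)
y = np.linspace(-eps, eps, 41)
xs = np.stack([np.outer(1 + y, np.cos(t)), np.outer(1 + y, np.sin(t))], axis=-1)
inner = np.trapz(g(xs), y, axis=0)               # inner integral over [-eps, eps]
rhs = np.trapz(np.r_[inner, inner[:1]], dx=2 * np.pi / len(t)) / eps

print(lhs, rhs)  # both ~2*pi, agreeing up to Monte Carlo noise and the o(1) term
```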
Lemma D.2. Let $(g_m : \mathcal X\to\mathbb R)_{m=1}^{\infty}$ be a sequence of integrable functions, such that $g_m(x)\to 0$ uniformly for all $x$ as $m\to\infty$. Then for any $i,j\in\{1,\dots,k\}$, $i\neq j$,
$$\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}\sqrt m\,g_m(x)\,p(x)\,dx \to 0 \quad \text{as } m\to\infty.$$
Proof. By the assumptions on $(g_m(\cdot))_{m=1}^{\infty}$, there exists a sequence of positive constants $(b_m)_{m=1}^{\infty}$, converging to 0, such that
$$\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}\sqrt m\,g_m(x)\,p(x)\,dx \le b_m\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}\sqrt m\,p(x)\,dx.$$
For large enough $m$, $p(x)$ is bounded and continuous in the volume of integration. Applying Lemma D.1 with $\epsilon = r/\sqrt m$, we have that as $m\to\infty$,
$$b_m\sqrt m\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}p(x)\,dx = b_m\sqrt m\int_{F_{\theta_0,i,j}}\int_{-r/\sqrt m}^{r/\sqrt m}p(x+yn_x)\,dy\,dx + o(1) \le b_m\sqrt m\,\frac{C}{\sqrt m} + o(1) = b_m C + o(1)$$
for some constant $C$ depending on $r$ and the upper bound on $p(\cdot)$. Since $b_m$ converges to 0, the expression in the lemma converges to 0 as well.
Lemma D.3. Let $(X_m)$ and $(Y_m)$ be sequences of real random variables, such that $X_m, Y_m$ are defined on the same probability space, and $X_m - Y_m$ converges to 0 in probability. Assume that $Y_m$ converges in distribution to a continuous random variable $Y$. Then $|\Pr(X_m\le c) - \Pr(Y_m\le c)|$ converges to 0 uniformly for all $c\in\mathbb R$.
Proof. We will use the following standard fact (see for example section 7.2 of [4]): for any two real random variables $A, B$, any $c\in\mathbb R$ and any $\epsilon>0$, it holds that
$$\Pr(A\le c) \le \Pr(B\le c+\epsilon) + \Pr(|A-B|>\epsilon).$$
From this inequality, it follows that for any $c\in\mathbb R$ and any $\epsilon>0$,
$$|\Pr(X_m\le c)-\Pr(Y_m\le c)| \le \Pr(Y_m\le c+\epsilon)-\Pr(Y_m\le c) + \Pr(Y_m\le c)-\Pr(Y_m\le c-\epsilon) + \Pr(|X_m-Y_m|\ge\epsilon). \qquad (8)$$
We claim that the r.h.s. of Eq. (8) converges to 0 uniformly for all $c$, from which the lemma follows. To see this, we begin by noticing that $\Pr(|X_m-Y_m|\ge\epsilon)$ converges to 0 for any $\epsilon$ by definition of convergence in probability. Next, $\Pr(Y_m\le c')$ converges to $\Pr(Y\le c')$ uniformly for all $c'\in\mathbb R$, since $Y$ is continuous (see section 1 of [6]). Moreover, since $Y$ is a continuous random variable, we have that its distribution function is uniformly continuous, hence $\Pr(Y\le c+\epsilon)-\Pr(Y\le c)$ and $\Pr(Y\le c)-\Pr(Y\le c-\epsilon)$ converge to 0 as $\epsilon\to 0$, uniformly for all $c$. Therefore, by letting $m\to\infty$, and $\epsilon\to 0$ at an appropriate rate compared to $m$, we have that the l.h.s. of Eq. (8) converges to 0 uniformly for all $c$.
Lemma D.4. $\Pr(\langle a,\sqrt m(f_{\hat\theta}(x)-f_{\theta_0}(x))\rangle < b)$ converges to $\Pr(\langle a, G(x)\rangle < b)$ uniformly for any $x\in\mathcal X$, any $a\neq 0$ in some bounded subset of $\mathbb R^k$, and any $b\in\mathbb R$.
Proof. By Eq. (3),
$$\sqrt m\left(f_{\hat\theta}(x)-f_{\theta_0}(x)\right) = \left(\frac{\partial}{\partial\theta}f_{\theta_0}(x)\right)\left(\sqrt m(\hat\theta-\theta_0)\right) + o_p(1),$$
where the remainder term does not depend on $x$. Thus, for any $a$ in a bounded subset of $\mathbb R^k$,
$$\left\langle a,\sqrt m\left(f_{\hat\theta}(x)-f_{\theta_0}(x)\right)\right\rangle = \left\langle a\left(\frac{\partial}{\partial\theta}f_{\theta_0}(x)\right)^{\!\top},\ \sqrt m(\hat\theta-\theta_0)\right\rangle + o_p(1), \qquad (9)$$
where the convergence in probability is uniform for all bounded $a$ and $x\in\mathcal X$.
We now need a result which tells us when convergence in distribution is uniform. Using Thm. 4.2 in [6], we have that if a sequence of random vectors $(X_m)_{m=1}^{\infty}$ in Euclidean space converges to a random variable $X$ in distribution, then $\Pr(\langle y, X_m\rangle < b)$ converges to $\Pr(\langle y, X\rangle < b)$ uniformly for any vector $y$ and $b\in\mathbb R$. We note that a stronger result (Thm. 6 in [2]) apparently allows us to extend this to cases where $X_m$ and $X$ reside in some infinite dimensional, separable Hilbert space (for example, if $\Theta$ is a subset of an infinite dimensional reproducing kernel Hilbert space in kernel clustering). Therefore, recalling that $\sqrt m(\hat\theta-\theta_0)$ converges in distribution to a random normal vector $Z$, we have that uniformly for all $x, a, b$,
$$\Pr\left(\left\langle a\left(\frac{\partial}{\partial\theta}f_{\theta_0}(x)\right)^{\!\top},\ \sqrt m(\hat\theta-\theta_0)\right\rangle < b\right) = \Pr\left(\left\langle a\left(\frac{\partial}{\partial\theta}f_{\theta_0}(x)\right)^{\!\top},\ Z\right\rangle < b\right) + o(1) = \Pr\left(\langle a, G(x)\rangle < b\right) + o(1). \qquad (10)$$
Here we think of $a\left((\partial/\partial\theta)f_{\theta_0}(x)\right)^{\!\top}$ as the vector $y$ to which we apply the theorem. By regularity condition 3a, and assuming $a\neq 0$, we have that $\left\langle a\left((\partial/\partial\theta)f_{\theta_0}(x)\right)^{\!\top}, Z\right\rangle$ is a continuous real random variable for any $x$, unless $Z=0$, in which case the lemma is trivial. Therefore, the conditions of Lemma D.3 apply: the two sides of Eq. (9) give us two sequences of random variables which converge in probability to each other, and by Eq. (10) we have convergence in distribution of one of the sequences to a fixed continuous random variable. Therefore, using Lemma D.3, we have that
$$\Pr\left(\left\langle a,\sqrt m\left(f_{\hat\theta}(x)-f_{\theta_0}(x)\right)\right\rangle < b\right) = \Pr\left(\left\langle a\left(\frac{\partial}{\partial\theta}f_{\theta_0}(x)\right)^{\!\top},\ \sqrt m(\hat\theta-\theta_0)\right\rangle < b\right) + o(1), \qquad (11)$$
where the convergence is uniform for any bounded $a\neq 0$, $b$ and $x\in\mathcal X$. Combining Eq. (10) and Eq. (11) gives us the required result.
Lemma D.5. Fix some two clusters $i, j$. Assuming the expression below is integrable, we have that
$$\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}2\sqrt m\,\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)<0)\,\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)>0)\,p(x)\,dx$$
$$= 2\left(\frac{1}{\sqrt\pi}-h(r)\right)\int_{F_{\theta_0,i,j}}\frac{p(x)\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}{\|\nabla(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))\|}\,dx + o(1),$$
where $o(1)\to 0$ as $m\to\infty$ and $h(r)=O(\exp(-r^2))$.
Proof. Define $a\in\mathbb R^k$ as $a_i=1$, $a_j=-1$, and 0 for any other entry. Applying Lemma D.4, with $a$ as above, we have that uniformly for all $x$ in some small enough neighborhood around $F_{\theta_0,i,j}$:
$$\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)<0) = \Pr\left(\sqrt m(f_{\hat\theta,i}(x)-f_{\theta_0,i}(x)) - \sqrt m(f_{\hat\theta,j}(x)-f_{\theta_0,j}(x)) < \sqrt m(f_{\theta_0,j}(x)-f_{\theta_0,i}(x))\right)$$
$$= \Pr\left(G_i(x)-G_j(x) < \sqrt m(f_{\theta_0,j}(x)-f_{\theta_0,i}(x))\right) + o(1),$$
where $o(1)$ converges uniformly to 0 as $m\to\infty$.
Since $G_i(x)-G_j(x)$ has a zero mean normal distribution, we can rewrite the above (if $\operatorname{Var}(G_i(x)-G_j(x))>0$) as
$$\Pr\left(\frac{G_i(x)-G_j(x)}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}} < \frac{\sqrt m(f_{\theta_0,j}(x)-f_{\theta_0,i}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right) + o(1) = \Phi\left(\frac{\sqrt m(f_{\theta_0,j}(x)-f_{\theta_0,i}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right) + o(1), \qquad (12)$$
where $\Phi(\cdot)$ is the cumulative standard normal distribution function. Notice that by some abuse of notation, the expression is also valid in the case where $\operatorname{Var}(G_i(x)-G_j(x)) = 0$. In that case, $G_i(x)-G_j(x)$ is equal to 0 with probability 1, and thus $\Pr(G_i(x)-G_j(x) < \sqrt m(f_{\theta_0,j}(x)-f_{\theta_0,i}(x)))$ is 1 if $f_{\theta_0,j}(x)-f_{\theta_0,i}(x)\ge 0$ and 0 if $f_{\theta_0,j}(x)-f_{\theta_0,i}(x)<0$. This is equal to Eq. (12) if we are willing to assume that $\Phi(\infty)=1$, $\Phi(0/0)=1$, $\Phi(-\infty)=0$.
Therefore, we can rewrite the l.h.s. of the equation in the lemma statement as
$$\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}\left[2\sqrt m\,\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right)\left(1-\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right)\right) + \sqrt m\,o(1)\right]p(x)\,dx.$$
The integration of the remainder term can be rewritten as $o(1)$ by Lemma D.2, and we get that the expression can be rewritten as:
$$2\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}\sqrt m\,\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right)\left(1-\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right)\right)p(x)\,dx + o(1). \qquad (13)$$
One can verify that the expression inside the integral is a continuous function of $x$, by the regularity conditions and the expression for $G(\cdot)$ as proven in Sec. C (namely Eq. (4)). We can therefore apply Lemma D.1, and again take all the remainder terms outside of the integral by Lemma D.2, to get that the above can be rewritten as
$$2\int_{F_{\theta_0,i,j}}\int_{-r/\sqrt m}^{r/\sqrt m}\sqrt m\,\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x+yn_x)-f_{\theta_0,j}(x+yn_x))}{\sqrt{\operatorname{Var}(G_i(x+yn_x)-G_j(x+yn_x))}}\right)\left(1-\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x+yn_x)-f_{\theta_0,j}(x+yn_x))}{\sqrt{\operatorname{Var}(G_i(x+yn_x)-G_j(x+yn_x))}}\right)\right)p(x)\,dy\,dx + o(1), \qquad (14)$$
where $n_x$ is a unit normal to $F_{\theta_0,i,j}$ at $x$.
Inspecting Eq. (14), we see that $y$ ranges over an arbitrarily small domain as $m\to\infty$. This suggests that we can rewrite the above using Taylor expansions, which is what we shall do next.
Let us assume for a minute that $\operatorname{Var}(G_i(x)-G_j(x))>0$ for some point $x\in F_{\theta_0,i,j}$. One can verify that by the regularity conditions and the expression for $G(\cdot)$ in Eq. (4), the expression
$$\frac{f_{\theta_0,i}(\cdot)-f_{\theta_0,j}(\cdot)}{\sqrt{\operatorname{Var}(G_i(\cdot)-G_j(\cdot))}} \qquad (15)$$
is twice differentiable, with a uniformly bounded second derivative. Therefore, we can rewrite the expression in Eq. (15) as its first-order Taylor expansion around each $x\in F_{\theta_0,i,j}$, plus a remainder term which is uniform for all $x$:
$$\frac{f_{\theta_0,i}(x+yn_x)-f_{\theta_0,j}(x+yn_x)}{\sqrt{\operatorname{Var}(G_i(x+yn_x)-G_j(x+yn_x))}} = \frac{f_{\theta_0,i}(x)-f_{\theta_0,j}(x)}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}} + \nabla\!\left(\frac{f_{\theta_0,i}(x)-f_{\theta_0,j}(x)}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\right)yn_x + O(y^2).$$
Since $f_{\theta_0,i}(x)-f_{\theta_0,j}(x)=0$ for any $x\in F_{\theta_0,i,j}$, the expression reduces after a simple calculation to
$$\frac{\nabla(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\,yn_x + O(y^2).$$
Notice that $\nabla(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))$ (the gradient of $f_{\theta_0,i}(x)-f_{\theta_0,j}(x)$) has the same direction as $n_x$ (the normal to the cluster boundary). Therefore, the expression above can be rewritten, up to a sign, as
$$\frac{\|\nabla(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))\|}{\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}}\,y + O(y^2).$$
As a result, denoting $s(x) := \nabla(f_{\theta_0,i}(x)-f_{\theta_0,j}(x))/\sqrt{\operatorname{Var}(G_i(x)-G_j(x))}$, we have that
$$\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x+yn_x)-f_{\theta_0,j}(x+yn_x))}{\sqrt{\operatorname{Var}(G_i(x+yn_x)-G_j(x+yn_x))}}\right)\left(1-\Phi\!\left(\frac{\sqrt m(f_{\theta_0,i}(x+yn_x)-f_{\theta_0,j}(x+yn_x))}{\sqrt{\operatorname{Var}(G_i(x+yn_x)-G_j(x+yn_x))}}\right)\right) \qquad (16)$$
$$= \Phi\!\left(\sqrt m\left(\|s(x)\|y + O(y^2)\right)\right)\left(1-\Phi\!\left(\sqrt m\left(\|s(x)\|y + O(y^2)\right)\right)\right)$$
$$= \Phi\!\left(\sqrt m\,\|s(x)\|y\right)\left(1-\Phi\!\left(\sqrt m\,\|s(x)\|y\right)\right) + O(\sqrt m\,y^2). \qquad (17)$$
In the preceding development, we have assumed that $\operatorname{Var}(G_i(x)-G_j(x))>0$. However, notice that the expressions in Eq. (16) and Eq. (17), without the remainder term, are both equal (to zero) even if $\operatorname{Var}(G_i(x)-G_j(x))=0$ (with our previous abuse of notation that $\Phi(-\infty)=0$, $\Phi(\infty)=1$). Moreover, since $y$ takes values in $[-r/\sqrt m, r/\sqrt m]$, the remainder term $O(\sqrt m\,y^2)$ is at most $O(\sqrt m\,r/m)=O(r/\sqrt m)$, so it can be rewritten as $o(1)$, which converges to 0 as $m\to\infty$.
In conclusion, and again using Lemma D.2 to take the remainder terms outside of the integral, we can rewrite Eq. (14) as
$$2\int_{F_{\theta_0,i,j}}\int_{-r/\sqrt m}^{r/\sqrt m}\sqrt m\,\Phi\!\left(\sqrt m\,\|s(x)\|y\right)\left(1-\Phi\!\left(\sqrt m\,\|s(x)\|y\right)\right)p(x)\,dy\,dx + o(1). \qquad (18)$$
We now perform a change of variables, letting $z_x = \sqrt m\,\|s(x)\|y$ in the inner integral, and get
$$2\int_{F_{\theta_0,i,j}}\int_{-r\|s(x)\|}^{r\|s(x)\|}\frac{1}{\|s(x)\|}\,\Phi(z_x)\left(1-\Phi(z_x)\right)p(x)\,dz_x\,dx + o(1),$$
which is equal by the mean value theorem to
$$2\left(\int_{F_{\theta_0,i,j}}\frac{p(x)}{\|s(x)\|}\,dx\right)\left(\int_{-r\|s(x_0)\|}^{r\|s(x_0)\|}\Phi(z_{x_0})\left(1-\Phi(z_{x_0})\right)dz_{x_0}\right) + o(1) \qquad (19)$$
for some $x_0\in F_{\theta_0,i,j}$.
By regularity condition 3b, it can be verified that $\|s(x)\|$ is positive or infinite for any $x\in F_{\theta_0,i,j}$. As a result, as $r\to\infty$, we have that
$$\int_{-r\|s(x_0)\|}^{r\|s(x_0)\|}\Phi(z_{x_0})(1-\Phi(z_{x_0}))\,dz_{x_0} \;\longrightarrow\; \int_{-\infty}^{\infty}\Phi(z_{x_0})(1-\Phi(z_{x_0}))\,dz_{x_0} = \frac{1}{\sqrt\pi},$$
and the convergence to $1/\sqrt\pi$ is at a rate of $O(\exp(-r^2))$. Combining this with Eq. (19) gives us the required result.
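The closing constant is easy to check numerically; the following snippet (an illustration, not part of the argument) confirms that $\int_{-\infty}^{\infty}\Phi(z)(1-\Phi(z))\,dz = 1/\sqrt\pi \approx 0.5642$.

```python
import numpy as np
from scipy.stats import norm

z = np.linspace(-12.0, 12.0, 1_000_001)
val = np.trapz(norm.cdf(z) * (1.0 - norm.cdf(z)), z)
print(val, 1.0 / np.sqrt(np.pi))   # both ~0.5641895835
```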
Proof of Proposition D.1. We can now turn to prove Proposition D.1 itself. For any $x\in\mathcal X$, let $A_x$ be the event (over drawing and clustering a sample pair) that $x$ switched clusters. For any $F_{\theta_0,i,j}$ and sample size $m$, define $F^m_{\theta_0,i,j}$ to be the subset of $F_{\theta_0,i,j}$ which is at a distance of at least $m^{-1/4}$ from any other cluster boundary (with respect to $\theta_0$). Formally,
$$F^m_{\theta_0,i,j} := \left\{x\in F_{\theta_0,i,j} : \forall\,\left(\{i',j'\}\neq\{i,j\},\ F_{\theta_0,i',j'}\neq\emptyset\right),\ \inf_{y\in F_{\theta_0,i',j'}}\|x-y\|\ge m^{-1/4}\right\}.$$
Letting $S_1, S_2$ be two independent samples of size $m$, we have by Fubini's theorem that
$$\mathbb{E}\,d^m_D(A_k(S_1),A_k(S_2),B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j})) = \sqrt m\,\mathbb{E}_{S_1,S_2}\int_{B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j})}\mathbf 1(A_x)\,p(x)\,dx = \int_{B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j})}\sqrt m\,\Pr(A_x)\,p(x)\,dx$$
$$= \int_{B_{r/\sqrt m}(\cup_{i,j}F^m_{\theta_0,i,j})}\sqrt m\,\Pr(A_x)\,p(x)\,dx + \int_{B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j}\setminus F^m_{\theta_0,i,j})}\sqrt m\,\Pr(A_x)\,p(x)\,dx.$$
As to the first integral, notice that each point in $F^m_{\theta_0,i,j}$ is separated from any point in any other $F^m_{\theta_0,i',j'}$ by a distance of at least $2m^{-1/4}$. Therefore, for large enough $m$, the sets $B_{r/\sqrt m}(F^m_{\theta_0,i,j})$ are disjoint for each $i,j$, and we can rewrite the above as:
$$\sum_{1\le i<j\le k}\int_{B_{r/\sqrt m}(F^m_{\theta_0,i,j})}\sqrt m\,\Pr(A_x)\,p(x)\,dx + \int_{B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j}\setminus F^m_{\theta_0,i,j})}\sqrt m\,\Pr(A_x)\,p(x)\,dx.$$
As to the second integral, notice that the integration is over points which are at a distance of at most $r/\sqrt m$ from some $F_{\theta_0,i,j}$, and also at a distance of at most $m^{-1/4}$ from some other $F_{\theta_0,i',j'}$. By regularity condition 3c, and the fact that $m^{-1/4}\to 0$, it follows that this integral converges to 0 as $m\to\infty$, and we can rewrite the above as:
$$\sum_{1\le i<j\le k}\int_{B_{r/\sqrt m}(F^m_{\theta_0,i,j})}\sqrt m\,\Pr(A_x)\,p(x)\,dx + o(1). \qquad (20)$$
If there were only two clusters $i, j$, then
$$\Pr(A_x) = 2\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)<0)\,\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)>0).$$
This is simply by definition of $A_x$: the probability that under one clustering, based on a random sample, $x$ is more associated with cluster $i$, and that under a second clustering, based on another independent random sample, $x$ is more associated with cluster $j$.
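This factorization is easy to confirm by simulation. In the toy model below, the score difference at a fixed $x$ under each independent clustering is drawn as a Gaussian with an assumed bias; the empirical switching frequency matches $2p(1-p)$.

```python
import numpy as np
rng = np.random.default_rng(1)

# Assumed toy model: score difference f_i(x) - f_j(x) ~ N(mu, 1) per sample.
mu, n = 0.3, 1_000_000
d1 = rng.normal(mu, 1.0, n)   # clustering based on sample S1
d2 = rng.normal(mu, 1.0, n)   # clustering based on sample S2
switched = np.mean((d1 < 0) != (d2 < 0))
p = (d1 < 0).mean()
print(switched, 2 * p * (1 - p))   # the two quantities agree
```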
In general, we will have more than two clusters. However, notice that any point $x$ in $B_{r/\sqrt m}(F^m_{\theta_0,i,j})$ (for some $i,j$) is much closer to $F_{\theta_0,i,j}$ than to any other cluster boundary. This is because its distance to $F_{\theta_0,i,j}$ is on the order of $1/\sqrt m$, while its distance to any other boundary is on the order of $m^{-1/4}$. Therefore, if $x$ does switch clusters, then it is highly likely to switch between cluster $i$ and cluster $j$. Formally, by regularity condition 3d (which ensures that the cluster boundaries experience at most $O(1/\sqrt m)$ fluctuations), we have that uniformly for any $x$,
$$\Pr(A_x) = 2\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)<0)\,\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)>0) + o(1),$$
where $o(1)$ converges to 0 as $m\to\infty$.
Substituting this back into Eq. (20), using Lemma D.2 to take the remainder term outside the integral, and using regularity condition 3c in the reverse direction to transform integrals over $F^m_{\theta_0,i,j}$ back into $F_{\theta_0,i,j}$ with asymptotically negligible remainder terms, we get that the quantity we are interested in can be written as
$$\sum_{1\le i<j\le k}\int_{B_{r/\sqrt m}(F_{\theta_0,i,j})}2\sqrt m\,\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)<0)\,\Pr(f_{\hat\theta,i}(x)-f_{\hat\theta,j}(x)>0)\,p(x)\,dx + o(1).$$
Now we can apply Lemma D.5 to each summand, and get the required result.
D.3 Part 2: Proof of Thm. 1
For notational convenience, we will denote
$$d^m_D(r) := d^m_D(A_k(S_1),A_k(S_2),B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j}))$$
whenever the omitted terms are obvious from context. If $\mathrm{instab}(\widehat A_k,D)=0$, the proof of the theorem is straightforward. In this special case, by definition of $\mathrm{instab}(\widehat A_k,D)$ in Thm. 1 and Proposition D.1, we have that $d^m_D(r)$ converges in probability to 0 for any $r$. By regularity condition 3d, for any fixed $q$, $\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i))$ converges in probability to 0 (because $d^m_D(A_k(S^1_i),A_k(S^2_i)) = d^m_D(A_k(S^1_i),A_k(S^2_i),B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j}))$ with arbitrarily high probability as $r$ increases). Therefore, $\sqrt m\,\hat\eta^k_{m,q}$, which is a plug-in estimator of the expected value of $\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i))$, converges in probability to 0 for any fixed $q$ as $m\to\infty$, and the theorem follows for this special case. Therefore, we will assume from now on that $\mathrm{instab}(\widehat A_k,D)>0$.
We need the following variant of Hoeffding's bound, adapted to conditional probabilities.
Lemma D.6. Fix some $r>0$. Let $X_1,\dots,X_q$ be real, nonnegative, independent and identically distributed random variables, such that $\Pr(X_1\in[0,r])>0$. For any $X_i$, let $Y_i$ be a random variable on the same probability space, such that $\Pr(Y_i=X_i\,|\,X_i\in[0,r])=1$. Then for any $\epsilon>0$,
$$\Pr\left(\left|\frac{1}{q}\sum_{i=1}^q X_i - \mathbb E[Y_1\,|\,X_1\in[0,r]]\right|\ge\epsilon\ \middle|\ \forall i,\ X_i\in[0,r]\right) \le 2\exp\left(-\frac{2q\epsilon^2}{r^2}\right).$$
Proof. Define an auxiliary set of random variables $Z_1,\dots,Z_q$, such that $\Pr(Z_i\le a) = \Pr(X_i\le a\,|\,X_i\in[0,r])$ for any $i, a$. In words, $X_i$ and $Z_i$ have the same distribution conditioned on the event $X_i\in[0,r]$. Also, we have that $Y_i$ has the same distribution conditioned on $X_i\in[0,r]$. Therefore, $\mathbb E[Y_1\,|\,X_1\in[0,r]] = \mathbb E[X_1\,|\,X_1\in[0,r]]$, and as a result $\mathbb E[Y_1\,|\,X_1\in[0,r]] = \mathbb E[Z_1]$. Therefore, the probability in the lemma above can be written as
$$\Pr\left(\left|\frac{1}{q}\sum_{i=1}^q Z_i - \mathbb E[Z_1]\right|\ge\epsilon\right),$$
where the $Z_i$ are bounded in $[0,r]$ with probability 1. Applying the regular Hoeffding's bound gives us the required result.
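The lemma can be illustrated on an assumed concrete instance: an Exp(1) variable conditioned to $[0,r]$, sampled directly by inverse CDF exactly as in the proof's reduction to the $Z_i$. The observed deviation frequency sits below the stated bound.

```python
import numpy as np
rng = np.random.default_rng(2)

r, q, eps, trials = 2.0, 200, 0.25, 20_000
bound = 2 * np.exp(-2 * q * eps**2 / r**2)

# Z_i ~ law of X_i given X_i in [0, r], with X ~ Exp(1); per the proof, the
# conditional probability in the lemma equals the plain Hoeffding probability
# for the Z_i, which we estimate directly.
u = rng.uniform(size=(trials, q))
z = -np.log(1.0 - u * (1.0 - np.exp(-r)))        # inverse-CDF sampling on [0, r]
cond_mean = (1.0 - (r + 1.0) * np.exp(-r)) / (1.0 - np.exp(-r))
freq = np.mean(np.abs(z.mean(axis=1) - cond_mean) >= eps)
print(freq, "<=", bound)   # e.g. 0.0 <= ~0.0039
```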
We now turn to the proof of the theorem. Let $A^m_r$ be the event that for all subsample pairs $\{S^1_i, S^2_i\}$, $d^m_D(A_k(S^1_i),A_k(S^2_i),B_{r/\sqrt m}(\cup_{i,j}F_{\theta_0,i,j})) = d^m_D(A_k(S^1_i),A_k(S^2_i))$. Namely, this is the event that for all subsample pairs, the mass which switches clusters when we compare the two resulting clusterings is always in an $r/\sqrt m$-neighborhood of the limit cluster boundaries.
Since $p(\cdot)$ is bounded, we have that $d^m_D(r)$ is deterministically bounded by $O(r)$, with implicit constants depending only on $D$ and $\theta_0$. Using the law of total expectation, this implies that
$$\left|\mathbb E[d^m_D(r)] - \mathbb E[d^m_D(r)\,|\,A^m_r]\right| = \left|\Pr(A^m_r)\,\mathbb E[d^m_D(r)\,|\,A^m_r] + (1-\Pr(A^m_r))\,\mathbb E[d^m_D(r)\,|\,\neg A^m_r] - \mathbb E[d^m_D(r)\,|\,A^m_r]\right|$$
$$= \left(1-\Pr(A^m_r)\right)\left|\mathbb E[d^m_D(r)\,|\,\neg A^m_r] - \mathbb E[d^m_D(r)\,|\,A^m_r]\right| \le \left(1-\Pr(A^m_r)\right)O(r). \qquad (21)$$
For any two events $A, B$, we have by the law of total probability that $\Pr(A) = \Pr(B)\Pr(A|B) + \Pr(B^c)\Pr(A|B^c)$. From this it follows that $\Pr(A)\le\Pr(B) + \Pr(A|B^c)$. As a result, for any $\epsilon>0$,
$$\Pr\left(\left|\sqrt m\,\hat\eta^k_{m,q} - \mathrm{instab}(\widehat A_k,D)\right| > \epsilon\right) \le \Pr\left(\left|\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i)) - \mathrm{instab}(\widehat A_k,D)\right| > \frac{\epsilon}{2}\right)$$
$$+ \Pr\left(\left|\sqrt m\,\hat\eta^k_{m,q} - \mathrm{instab}(\widehat A_k,D)\right| > \epsilon\ \middle|\ \left|\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i)) - \mathrm{instab}(\widehat A_k,D)\right| \le \frac{\epsilon}{2}\right). \qquad (22)$$
We will assume w.l.o.g. that $\epsilon/2 < \mathrm{instab}(\widehat A_k,D)$. Otherwise, we can upper bound $\Pr(|\sqrt m\,\hat\eta^k_{m,q} - \mathrm{instab}(\widehat A_k,D)| > \epsilon)$ in the equation above by replacing $\epsilon$ with some smaller quantity $\epsilon'$ for which $\epsilon'/2 < \mathrm{instab}(\widehat A_k,D)$.
We start by analyzing the conditional probability, forming the second summand in Eq. (22). Recall that $\hat\eta^k_{m,q}$, after clustering the $q$ subsample pairs $\{S^1_i,S^2_i\}_{i=1}^q$, uses an additional i.i.d. sample $S^3$ of size $m$ to empirically estimate $\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i))/(\sqrt m\,q)\in[0,1]$. This is achieved by calculating the average percentage of instances in $S^3$ which switch between clusterings. Thus, conditioned on the event appearing in the second summand of Eq. (22), $\hat\eta^k_{m,q}$ is simply an empirical average of $m$ i.i.d. random variables in $[0,1]$, whose expected value, denoted as $v$, is a strictly positive number in the range $(\mathrm{instab}(\widehat A_k,D)\pm\epsilon/2)/\sqrt m$. Thus, the second summand of Eq. (22) refers to an event where this empirical average is at a distance of at least $\epsilon/(2\sqrt m)$ from its expected value. We can therefore apply a large deviation result to bound this probability. Since the expectation itself is a (generally decreasing) function of the sample size $m$, we will need something a bit stronger than the regular Hoeffding's bound. Using a relative entropy version of Hoeffding's bound [5], we have
that the second summand in Eq. (22) is upper bounded by:
$$\exp\left(-m\,D_{kl}\!\left[v+\frac{\epsilon/2}{\sqrt m}\,\middle\|\,v\right]\right) + \exp\left(-m\,D_{kl}\!\left[\max\left(0,\,v-\frac{\epsilon/2}{\sqrt m}\right)\middle\|\,v\right]\right), \qquad (23)$$
where $D_{kl}[p\|q] := p\log(p/q) + (1-p)\log((1-p)/(1-q))$ for any $q\in(0,1)$ and any $p\in[0,1]$. Using the fact that $D_{kl}[p\|q]\ge(p-q)^2/(2\max\{p,q\})$, we get that Eq. (23) can be upper bounded by a quantity which converges to 0 as $m\to\infty$. As a result, the second summand in Eq. (22) converges to 0 as $m\to\infty$.
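The stated lower bound on the relative entropy can be spot-checked numerically over a grid, as in the sketch below (illustrative only; the variable names are arbitrary).

```python
import numpy as np

def dkl(p, q):
    out = np.zeros_like(p)
    m = p > 0
    out[m] += p[m] * np.log(p[m] / q[m])
    m = p < 1
    out[m] += (1 - p[m]) * np.log((1 - p[m]) / (1 - q[m]))
    return out

P, Q = np.meshgrid(np.linspace(0, 1, 501), np.linspace(0.001, 0.999, 499))
gap = dkl(P, Q) - (P - Q) ** 2 / (2 * np.maximum(P, Q))
print(gap.min())   # nonnegative up to rounding: the bound holds on the grid
```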
As to the first summand in Eq. (22), using the triangle inequality and switching sides allows us to upper bound it by:
$$\Pr\left(\left|\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i)) - \mathbb E[d^m_D(r)\,|\,A^m_r]\right| \ge \frac{\epsilon}{2} - \left|\mathbb E[d^m_D(r)\,|\,A^m_r] - \mathbb E[d^m_D(r)]\right| - \left|\mathbb E\,d^m_D(r) - \mathrm{instab}(\widehat A_k,D)\right|\right). \qquad (24)$$
By the definition of $\mathrm{instab}(\widehat A_k,D)$ as it appears in Thm. 1, and Proposition D.1,
$$\left|\lim_{m\to\infty}\mathbb E\,d^m_D(r) - \mathrm{instab}(\widehat A_k,D)\right| = O(h(r)) = O(\exp(-r^2)). \qquad (25)$$
Using Eq. (25) and Eq. (21), we can upper bound Eq. (24) by
$$\Pr\left(\left|\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i)) - \mathbb E[d^m_D(r)\,|\,A^m_r]\right| \ge \frac{\epsilon}{2} - \left(1-\Pr(A^m_r)\right)O(r) - O(\exp(-r^2)) - o(1)\right), \qquad (26)$$
where $o(1)\to 0$ as $m\to\infty$. Moreover, by using the law of total probability and Lemma D.6, we have that for any $\delta>0$,
$$\Pr\left(\left|\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i)) - \mathbb E[d^m_D(r)\,|\,A^m_r]\right| > \delta\right)$$
$$\le \left(1-\Pr(A^m_r)\right)\cdot 1 + \Pr(A^m_r)\,\Pr\left(\left|\frac{1}{q}\sum_{i=1}^q d^m_D(A_k(S^1_i),A_k(S^2_i)) - \mathbb E[d^m_D(r)\,|\,A^m_r]\right| > \delta\ \middle|\ A^m_r\right)$$
$$\le \left(1-\Pr(A^m_r)\right) + 2\Pr(A^m_r)\exp\left(-\frac{2q\delta^2}{r^2}\right). \qquad (27)$$
Lemma D.6 can be applied because $d^m_D(A_k(S^1_i),A_k(S^2_i)) = d^m_D(r)$ for any $i$, if $A^m_r$ occurs. If $m, r$ are such that
$$\frac{\epsilon}{2} - \left(1-\Pr(A^m_r)\right)O(r) - O(\exp(-r^2)) - o(1) > 0, \qquad (28)$$
we can substitute this expression instead of $\delta$ in Eq. (27), and get that Eq. (26) is upper bounded by
$$\left(1-\Pr(A^m_r)\right) + 2\Pr(A^m_r)\exp\left(-\frac{2q\left(\frac{\epsilon}{2} - (1-\Pr(A^m_r))O(r) - O(\exp(-r^2)) - o(1)\right)^2}{r^2}\right). \qquad (29)$$
Let
$$g_m(r) := \Pr_{S_1,S_2\sim D^m}\left(d^m_D(r)\neq d^m_D(A_k(S_1),A_k(S_2))\right), \qquad g(r) := \lim_{m\to\infty}g_m(r).$$
By regularity condition 3d, $g(r) = O(r^{-3-\nu})$ for some $\nu>0$. Also, we have that $\Pr(A^m_r) = (1-g_m(r))^q$, and therefore $\lim_{m\to\infty}\Pr(A^m_r) = (1-g(r))^q$ for any fixed $q$. In consequence, as $m\to\infty$, Eq. (29) converges to
$$\left(1-(1-g(r))^q\right) + 2(1-g(r))^q\exp\left(-\frac{2q\left(\frac{\epsilon}{2} - (1-(1-g(r))^q)O(r) - O(\exp(-r^2))\right)^2}{r^2}\right). \qquad (30)$$
Now we use the fact that $r$ can be chosen arbitrarily. In particular, let $r = q^{1/(2+\nu/2)}$, where $\nu>0$ is the same quantity appearing in condition 3d. It follows that
$$1-(1-g(r))^q \le q\,g(r) = O(q/r^{3+\nu}) = O\!\left(q^{1-\frac{3+\nu}{2+\nu/2}}\right)$$
$$\left(1-(1-g(r))^q\right)O(r) \le q\,g(r)\,O(r) = O\!\left(q^{1-\frac{2+\nu}{2+\nu/2}}\right) = O\!\left(q^{-\frac{\nu}{4+\nu}}\right)$$
$$q/r^2 = q^{1-\frac{1}{1+\nu/4}}$$
$$\exp(-r^2) = \exp\!\left(-q^{\frac{1}{1+\nu/4}}\right).$$
It can be verified that the equations above imply the validity of Eq. (28) for large enough $m$ and $q$ (and hence $r$). Substituting these equations into Eq. (30), we get the upper bound
$$O\!\left(q^{1-\frac{3+\nu}{2+\nu/2}}\right) + \exp\left(-2q^{1-\frac{1}{1+\nu/4}}\left(\frac{\epsilon}{2} - O\!\left(q^{-\frac{\nu}{4+\nu}}\right) - O\!\left(\exp\!\left(-q^{\frac{1}{1+\nu/4}}\right)\right)\right)^2\right).$$
Since $\nu>0$, it can be verified that the first summand asymptotically dominates the second summand (as $q\to\infty$), and can be bounded in turn by $o(q^{-1/2})$.
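The exponent bookkeeping above is mechanical and can be verified symbolically, for example with SymPy (an illustrative check, not part of the proof):

```python
import sympy as sp

nu = sp.symbols("nu", positive=True)
alpha = 1 / (2 + nu / 2)          # r = q**alpha

# q / r**(3+nu) = q**(1 - (3+nu)*alpha), and so on; compare claimed exponents:
assert sp.simplify(1 - (3 + nu) * alpha - (1 - (3 + nu) / (2 + nu / 2))) == 0
assert sp.simplify(1 - (2 + nu) * alpha - (-nu / (4 + nu))) == 0
assert sp.simplify(1 - 2 * alpha - (1 - 1 / (1 + nu / 4))) == 0
assert sp.simplify(2 * alpha - 1 / (1 + nu / 4)) == 0
print("exponent algebra checks out")
```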
Summarizing, we have that the first summand in Eq. (22) converges to $o(q^{-1/2})$ as $m\to\infty$, and the second summand in Eq. (22) converges to 0 as $m\to\infty$, for any fixed $\epsilon>0$, and thus $\Pr(|\sqrt m\,\hat\eta^k_{m,q} - \mathrm{instab}(\widehat A_k,D)| > \epsilon)$ converges to $o(q^{-1/2})$.
E Proof of Thm. 2 and Thm. 3
The tool we shall use for proving Thm. 2 and Thm. 3 is the following general central limit theorem for Z-estimators (Thm. 3.3.1 in [8]). We will first quote the theorem and then explain the terminology used.
Theorem E.1 (Van der Vaart). Let $\Psi_m$ and $\Psi$ be random maps and a fixed map, respectively, from a subset $\Theta$ of some Banach space into another Banach space, such that as $m\to\infty$,
$$\frac{\left\|\sqrt m(\Psi_m-\Psi)(\hat\theta) - \sqrt m(\Psi_m-\Psi)(\theta_0)\right\|}{1+\sqrt m\,\|\hat\theta-\theta_0\|} \to 0 \qquad (31)$$
in probability, and such that the sequence $\sqrt m(\Psi_m-\Psi)(\theta_0)$ converges in distribution to a tight random element $Z$. Let $\theta\mapsto\Psi(\theta)$ be Fréchet-differentiable at $\theta_0$ with an invertible derivative $\dot\Psi_{\theta_0}$, which is assumed to be a continuous linear operator¹. If $\Psi(\theta_0)=0$ and $\Psi_m(\hat\theta)\to 0$ in probability, and $\hat\theta$ converges in probability to $\theta_0$, then $\sqrt m(\hat\theta-\theta_0)$ converges in distribution to $-\dot\Psi_{\theta_0}^{-1}Z$.
A Banach space is any complete normed vector space (possibly infinite dimensional). A tight random element essentially means that an arbitrarily large portion of its distribution lies in compact sets. This condition is trivial when $\Theta$ is a subset of Euclidean space. Fréchet-differentiability of a function $f : U\to V$ at $x\in U$, where $U, V$ are Banach spaces, means that there exists a bounded linear operator $A : U\to V$ such that
$$\lim_{h\to 0}\frac{\|f(x+h)-f(x)-A(h)\|_V}{\|h\|_U} = 0.$$
This is equivalent to regular differentiability in finite dimensional settings.
It is important to note that the theorem is stronger than what we actually need, since we only consider
finite dimensional Euclidean spaces, while the theorem deals with possibly infinite dimensional
Banach spaces. In principle, it is possible to use this theorem to prove central limit theorems in
infinite dimensional settings, for example in kernel clustering where the associated reproducing
kernel Hilbert space is infinite dimensional. However, the required conditions become much less
trivial, and actually fail to hold in some cases (see below for further details).
We now turn to the proofs themselves. Since the proofs of Thm. 2 and Thm. 3 are almost identical,
we will prove them together, marking differences between them as needed. In order to allow uniform
notation in both cases, we shall assume that $\psi(\cdot)$ is the identity mapping in Bregman divergence clustering, and the feature map from $\mathcal X$ to $\mathcal H$ in kernel clustering.
With the assumptions that we made in the theorems, the only thing really left to show before applying Thm. E.1 is that Eq. (31) holds. Notice that it is enough to show that
$$\frac{\left\|\sqrt m(\Psi^i_m-\Psi^i)(\hat\theta) - \sqrt m(\Psi^i_m-\Psi^i)(\theta_0)\right\|}{1+\sqrt m\,\|\hat\theta-\theta_0\|} \to 0$$
for any $i\in\{1,\dots,k\}$. We will prove this in a slightly more complicated way than necessary, which also treats the case of kernel clustering where $\mathcal H$ is infinite-dimensional. By Lemma 3.3.5 in [8], since $\mathcal X$ is bounded, it is sufficient to show that for any $i$, there is some $\delta>0$ such that
$$\left\{\psi^i_{\theta,h}(\cdot) - \psi^i_{\theta_0,h}(\cdot)\right\}_{\|\theta-\theta_0\|\le\delta,\ h\in\mathcal X}$$
is a Donsker class, where
$$\psi^i_{\theta,h}(x) = \begin{cases}\langle\theta_i-\psi(x),\,\psi(h)\rangle & x\in C_{\theta,i}\\ 0 & \text{otherwise.}\end{cases}$$
Intuitively, a set of real functions $\{f(\cdot)\}$ from $\mathcal X$ (with any probability distribution $D$) to $\mathbb R$ is called Donsker if it satisfies a uniform central limit theorem. Without getting too much into the details, this means that if we sample $m$ elements i.i.d. from $D$, then $(f(x_1)+\dots+f(x_m))/\sqrt m$ converges in distribution (as $m\to\infty$) to a Gaussian random variable, and the convergence is uniform over all $f(\cdot)$ in the set, in an appropriately defined sense.
¹A linear operator is automatically continuous in finite dimensional spaces, not necessarily in infinite dimensional spaces.
We use the fact that if $\mathcal F$ and $\mathcal G$ are Donsker classes, then so are $\mathcal F+\mathcal G$ and $\mathcal F\cdot\mathcal G$ (see examples 2.10.7 and 2.10.8 in [8]). This allows us to reduce the problem to showing that the following three function classes, from $\mathcal X$ to $\mathbb R$, are Donsker:
$$\{\langle\theta_i,\psi(h)\rangle\}_{\|\theta-\theta_0\|\le\delta,\ h\in\mathcal X},\quad \{\langle\psi(\cdot),\psi(h)\rangle\}_{h\in\mathcal X},\quad \{\mathbf 1_{C_{\theta,i}}(\cdot)\}_{\|\theta-\theta_0\|\le\delta}. \qquad (32)$$
Notice that the first class is a set of bounded constant functions, while the third class is a set of indicator functions for all possible clusters. One can now use several tools to show that each class in Eq. (32) is Donsker. For example, consider a class of real functions on a bounded subset of some Euclidean space. By Thm. 8.2.1 in [3] (and its preceding discussion), the class is Donsker if any function in the class is differentiable to a sufficiently high order. This ensures that the first class in Eq. (32) is Donsker, because it is composed of constant functions. As to the second class in Eq. (32), the same holds in the case of Bregman divergence clustering (where $\psi(\cdot)$ is the identity function), because it is then just a set of linear functions. For finite dimensional kernel clustering, it is enough to show that $\{\langle\cdot,\psi(h)\rangle\}_{h\in\mathcal X}$ is Donsker (namely, the same class of functions after performing the transformation from $\mathcal X$ to $\psi(\mathcal X)$). This is again a set of linear functions in $\mathcal H_k$, a subset of some finite dimensional Euclidean space, and so it is Donsker. In infinite dimensional kernel clustering, our class of functions can be written as $\{k(\cdot,h)\}_{h\in\mathcal X}$, where $k(\cdot,\cdot)$ is the kernel function, so it is Donsker if the kernel function is differentiable to a sufficiently high order.
The third class in Eq. (32) is more problematic. By Theorem 8.2.15 in [3] (and its preceding discussion), it suffices that the boundary of each possible cluster is composed of a finite number of smooth surfaces (differentiable to a high enough order) in some Euclidean space. In Bregman divergence clustering, the clusters are separated by hyperplanes, which are linear functions (see appendix A in [1]), and thus the class is Donsker. The same holds for finite dimensional kernel clustering. This will still be true for infinite dimensional kernel clustering, if we can guarantee that any cluster in any solution close enough to $\theta_0$ in $\Theta$ will have smooth boundaries. Unfortunately, this does not hold in some important cases. For example, universal kernels (such as the Gaussian kernel) are capable of inducing cluster boundaries arbitrarily close in form to any continuous function, and thus our line of attack will not work in such cases. In a sense, this is not too surprising, since these kernels correspond to very 'rich' hypothesis classes, and it is not clear if a precise characterization of their stability properties, via central limit theorems, is at all possible.
Summarizing the above discussion, we have shown that for the settings assumed in our theorem, all
three classes in Eq. (32) are Donsker and hence Eq. (31) holds. We now return to deal with the other
ingredients required to apply Thm. E.1.
As to the asymptotic distribution of $\sqrt m(\Psi_m-\Psi)(\theta_0)$, since $\Psi(\theta_0)=0$ by assumption, we have that for any $i\in\{1,\dots,k\}$,
$$\sqrt m(\Psi^i_m-\Psi^i)(\theta_0) = \frac{1}{\sqrt m}\sum_{j=1}^m\psi^i(\theta_0,x_j), \qquad (33)$$
where $x_1,\dots,x_m$ is the sample by which $\Psi_m$ is defined. The r.h.s. of Eq. (33) is a sum of identically distributed, independent random variables with zero mean, normalized by $\sqrt m$. As a result, by the standard central limit theorem, $\sqrt m(\Psi^i_m-\Psi^i)(\theta_0)$ converges in distribution to a zero mean Gaussian random vector $Y$, with covariance matrix
$$V_i = \int_{C_{\theta_0,i}}p(x)\left(\psi(x)-\theta_{0,i}\right)\left(\psi(x)-\theta_{0,i}\right)^{\!\top}dx.$$
Moreover, it is easily verified that $\operatorname{Cov}(\psi^i(\theta_0,x),\psi^{i'}(\theta_0,x))=0$ for any $i\neq i'$. Therefore, $\sqrt m(\Psi_m-\Psi)(\theta_0)$ converges in distribution to a zero mean Gaussian random vector, whose covariance matrix $V$ is composed of $k$ diagonal blocks $(V_1,\dots,V_k)$, all other elements of $V$ being zero.
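The block-diagonal structure can be observed in a toy simulation. The sketch below uses a one-dimensional, two-cluster instance with squared-Euclidean (k-means) centroids, an assumed special case of the framework:

```python
import numpy as np
rng = np.random.default_rng(3)

theta0 = np.array([-1.0, 1.0])          # limit cluster centroids
n = 500_000
comp = rng.random(n) < 0.5              # assumed symmetric two-Gaussian mixture
x = np.where(comp, rng.normal(-1.0, 0.3, n), rng.normal(1.0, 0.3, n))

cell = (np.abs(x - theta0[1]) < np.abs(x - theta0[0])).astype(int)  # nearest centroid
psi = np.zeros((n, 2))
psi[np.arange(n), cell] = x - theta0[cell]     # per-point score contribution

print(np.round(np.cov(psi.T), 4))  # off-diagonal ~0: V is block diagonal
```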
Thus, we can use Thm. E.1 to get that $\sqrt m(\hat\theta-\theta_0)$ converges in distribution to a zero mean Gaussian random vector of the form $-\dot\Psi_{\theta_0}^{-1}Y$, which is a Gaussian random vector with a covariance matrix of the form $\dot\Psi_{\theta_0}^{-1}V\dot\Psi_{\theta_0}^{-1}$.
F Proof of Thm. 4
Since our algorithm returns a locally optimal solution with respect to the differentiable log-likelihood function, we can frame it as a Z-estimator of the derivative of the log-likelihood function with respect to the parameters, namely the score function
$$\Psi_m(\hat\theta) = \frac{1}{m}\sum_{i=1}^m\frac{\partial}{\partial\theta}\log(q(x_i|\hat\theta)).$$
This is a random mapping based on the sample $x_1,\dots,x_m$. Similarly, we can define $\Psi(\cdot)$ as the 'asymptotic' score function with respect to the underlying distribution $D$:
$$\Psi(\hat\theta) = \int_{\mathcal X}\frac{\partial}{\partial\theta}\log(q(x|\hat\theta))\,p(x)\,dx.$$
Under the assumptions we have made, the model $\hat\theta$ returned by the algorithm satisfies $\Psi_m(\hat\theta)=0$, and $\hat\theta$ converges in probability to some $\theta_0$ for which $\Psi(\theta_0)=0$. The asymptotic normality of $\sqrt m(\hat\theta-\theta_0)$ is now an immediate consequence of central limit theorems for 'maximum likelihood' Z-estimators, such as Thm. 5.21 in [7].
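For a concrete illustration of this Z-estimator normality (with an assumed exponential likelihood rather than the paper's model), one can simulate the MLE directly:

```python
import numpy as np
rng = np.random.default_rng(4)

theta0, m, reps = 2.0, 2_000, 5_000
x = rng.exponential(scale=1.0 / theta0, size=(reps, m))
theta_hat = 1.0 / x.mean(axis=1)          # MLE of the rate parameter
z = np.sqrt(m) * (theta_hat - theta0)
print(z.mean(), z.std())  # ~0 and ~theta0: Fisher information is 1/theta^2
```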
References
[1] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, 2005.
[2] P. Billingsley and F. Topsøe. Uniformity in weak convergence. Probability Theory and Related Fields, 7:1–16, 1967.
[3] R. Dudley. Uniform Central Limit Theorems. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1999.
[4] G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. Oxford University Press, 2001.
[5] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, Mar. 1963.
[6] R. R. Rao. Relations between weak and uniform convergence of measures with applications. The Annals of Mathematical Statistics, 33(2):659–680, June 1962.
[7] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[8] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, 1996.
2,690 | 3,439 | Bio-inspired Real Time Sensory Map Realignment in
a Robotic Barn Owl
Juan Huo, Zhijun Yang and Alan Murray
DTC, School of Informatics, School of Electronics & Engineering
The University of Edinburgh
Edinburgh, UK
{J.Huo, Zhijun.Yang, Alan.Murray}@ed.ac.uk
Abstract
The alignment of the visual and auditory maps in the Superior Colliculus (SC) of the barn owl is important for its accurate localization during prey capture. Prism wearing or blindness may interfere with this alignment and cause a loss of accurate prey-capture ability. However, the juvenile barn owl can recover its sensory map alignment by shifting its auditory map. This adaptation of map alignment is believed to be based on activity-dependent axon development in the Inferior Colliculus (IC). A model is built to explore this mechanism. In this model, the axon growth process is instructed by an inhibitory network in SC, while the strength of the inhibition is adjusted by Spike Timing Dependent Plasticity (STDP). We test and analyze this mechanism by applying the neural structures involved in spatial localization to a robotic system.
1 Introduction
The barn owl is a nocturnal predator with a highly capable auditory and visual localization system. During localization, sensory stimuli are translated into neuron responses, and the visual and auditory maps are formed. In the deep Superior Colliculus (SC), visual and auditory information are integrated together. Normally, the object localizations of the visual map and the auditory map are aligned with each other, but this alignment can be disrupted by wearing a prism or by blindness [1, 2]. The juvenile barn owl is able to adapt so that it foveates correctly on the source of auditory stimuli. A model based on the newest biological discoveries, accounting for a large amount of biological observations, has been developed to explore this adaptation in map alignment [3].
This model is applied to a robotic system emulating the heading behavior of the barn owl, so as to provide real-time visual and auditory information integration and map realignment. It also provides a new mechanism for hardware to mimic some of the brain's abilities and adapt to novel situations without instruction.
1.1 Biological Background
The Superior Colliculus (SC) receives different sensory inputs and sends its outputs to effect behavior. As a hub of sensory information, SC neurons access auditory stimuli from the Inferior Colliculus (IC) [4, 1], which includes the external Inferior Colliculus (ICx) and the central Inferior Colliculus (ICc). The ICx wraps around the ICc. As revealed by anatomical and physiological experiments, map adaptation takes place at two main sites: one is the axon connection between ICc and ICx, and the other is an inhibitory network in SC.
A large amount of evidence has shown that axon sprouting and retraction between ICc and ICx are guided by the inhibitory network in SC during prism learning [5, 6, 7]. Axons do not extend spontaneously;
Figure 1: (a) The simulation environment. (b) The information projection between ICc, ICx and SC.
they are promoted by neurotrophin (one kind of nerve growth factor) release and by electrical activity of the cell body [8]. The release of neurotrophin is triggered by a guiding signal from SC. In this paper we call this guiding signal the Map Adaptation Cue (MAC), as shown in Fig. 1(b). In the inhibitory network, the MAC is assumed to be introduced by an interneuron, which is plausibly a bimodal neuron [7]. A bimodal neuron can be potentiated by both visual input (from the retina) and auditory input (from ICx). Its response is markedly strengthened when the visual and auditory inputs are correlated [9]. Previous work has pointed out that Hebbian learning plays a main role in sensory information integration on the bimodal neuron [4]. This paper includes a closer representation of the biological structure.
2 Neural spike train
Neurons in the nervous system process and transmit information by neural spikes. A sensory stimulus is coded by a spatiotemporal spike pattern before being applied to the spiking neural network [10]. In this study, the input spike pattern was applied repeatedly and frequently, similar to the input stimuli. Spike patterns are constructed with fixed, manually set time intervals between spikes, using two discrete values of mean firing rate, high and low. As the neuron response in the visual map (retina layer) and the auditory map (ICx layer) has a center-surround profile (e.g. the 'Mexican hat' [11]), the receptive center has the highest firing rate. The spike patterns of the visual center and the auditory center, corresponding to the same target, are highly correlated with each other. Adjacent neurons respond with template spike trains of low firing rate. The spike patterns of the center neuron and the adjacent neurons are independent of each other. The remaining neurons have negligible activity. Another possible spike train generation method and its simulation results for this model can be found in [12].
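A minimal sketch of such a pattern generator is given below. The concrete rate values and the Bernoulli sampling (rather than manually fixed spike intervals, as in the paper) are assumptions for illustration only.

```python
import numpy as np
rng = np.random.default_rng(5)

def spike_pattern(center, n_paths=10, t_units=40, hi=0.5, lo=0.1):
    """Binary spike trains per pathway: the receptive center fires at a high
    rate, its neighbours at a low rate, the rest stay silent."""
    rates = np.zeros(n_paths)
    rates[center] = hi
    for nb in (center - 1, center + 1):
        if 0 <= nb < n_paths:
            rates[nb] = lo
    return (rng.random((n_paths, t_units)) < rates[:, None]).astype(int)

visual = spike_pattern(center=5)    # e.g. a target in the 0-18 degree pathway
auditory = spike_pattern(center=5)  # same target (the paper's patterns are
                                    # highly correlated; independent here)
```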
3 Neural Model
The simulation emulates a virtual barn owl at the center of a fixed, head-centered reference system, with the origin centered on the perch as in Fig. 1(a). Fig. 2(b) schematically illustrates the model, with 4 layers (ICc, ICx, SC, retina) and 10 pathways. Each pathway corresponds to 18° in azimuth. A single pathway is composed of the two basic sections shown in Fig. 2(a). Block I comprises the ICc, the ICx and the axon connections that map between them. Block II is both the detector of any shift between visual and auditory cues and the controller of the ICx/ICc mapping in block I. The connection between ICc and ICx in block I is instructed by the Map Adaptation Cue (MAC), which is generated by the interneuron in block II.
In block II, both the bimodal neurons and the interneurons in this model are Leaky Integrate-and-Fire (LIF) neurons (Equation 1). $g_e$ is the excitatory synaptic conductance, which is associated with the excitatory reversal potential $V_{exc}$. Similarly, $g_i$, the inhibitory conductance, is associated with the inhibitory reversal
Figure 2: (a) Schematic of the auditory and visual signal processing pathway. (b) Schematic of the network. Each single pathway represents 18° in azimuth. The visual stimulus arrives in the retina at $N_{42}$; $N_{22}$ receives the strongest MAC. The active growth cone from $N_{13}$ is attracted by neurotrophin. The dashed line is the new connection built when the growth cone reaches its threshold. The old connection between $N_{13}$ and $N_{23}$ is thus eliminated due to the lack of alignment between the auditory and visual stimuli.
potential $V_{inh}$. $g_l$ is the membrane conductance; the membrane resistance in this case is given by $R_m = 1/g_l$. When the membrane potential $V(t)$ reaches the threshold value of about −50 to −55 mV, $V(t)$ is reset to a value $V_{reset}$ [13]. In this model, $V_{reset}$ is chosen to be equal to $V_{rest}$, the resting membrane potential; here $V_{rest} = V_{reset} = -70$ mV. The other parameters of the neuron model are as follows: $V_{exc} = 0$ mV, $V_{inh} = -70$ mV, $\tau_m = C_m R_m = 5$ ms.
$$C_m\frac{dV(t)}{dt} = -g_l(V(t)-V_{rest}) - g_e(V(t)-V_{exc}) - g_i(V(t)-V_{inh}) \qquad (1)$$
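A minimal Euler-integration sketch of Eq. (1) is shown below; the conductances are expressed relative to $g_l$, and the 0.1 ms step size and the exact threshold value are assumed.

```python
import numpy as np

def lif_step(v, g_e, g_i, dt=0.1):
    """One Euler step of Eq. (1). Units are mV and ms; g_l is normalized
    to 1, so g_e and g_i are relative conductances (an assumption)."""
    c_m, g_l = 5.0, 1.0                    # tau_m = C_m * R_m = 5 ms
    v_rest, v_exc, v_inh = -70.0, 0.0, -70.0
    v_thresh, v_reset = -52.0, -70.0       # threshold ~ -50 to -55 mV
    dv = (-g_l * (v - v_rest) - g_e * (v - v_exc) - g_i * (v - v_inh)) / c_m
    v = v + dt * dv
    if v >= v_thresh:
        return v_reset, True               # spike emitted, potential reset
    return v, False
```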
The synapses connecting the sensory signals with the bimodal neuron are excitatory, while the synapse between the bimodal neuron and the interneuron is inhibitory. The synaptic weight change in this model is mediated by Spike Timing Dependent Plasticity (STDP). STDP is a learning rule in which the synaptic weight is strengthened or weakened by pairs of presynaptic and postsynaptic spikes within a time window [14].
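The paper does not give the STDP kernel explicitly; the following pair-based exponential rule is a standard form consistent with [14], with illustrative amplitudes and time constant:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike, depress otherwise (times in ms; values assumed)."""
    dt = t_post - t_pre
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))
```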
The whole network is shown in Fig. 2(b); neuron $N_{ij}$ denotes the neuron located in layer $i$ and pathway $j$. The development of an axon growth cone is activated by presynaptic spikes from its source layer, ICc (layer 1). The target layer, ICx (layer 2), releases neurotrophin when it is excited by MAC spikes. The concentration of neurotrophin $c_{2j}$ is set to be linearly proportional to the total MAC-induced synaptic activity, $P_{2j}$, which sums the MAC spikes of the ICx layer neurons. In Fig. 2(b), $N_{2j}(cen)$ is the ICx neuron that receives the strongest stimulation from the visual signal, via the retina and SC. The concentration of neurotrophin released by a neuron $N_{2j}$ depends upon the distance between $N_{2j}$ and $N_{2j}(cen)$, $\|N_{2j}-N_{2j}(cen)\|$. $c_{2j}$ receives contributions from all active release sites; however, each contribution decays with distance. To represent the effect of neighbouring neurons, a spreading kernel $D(N_{2j}-N_{2j}(cen))$ is used to weight $P_{2j}$; $D(N_{2j}-N_{2j}(cen))$ is an exponential decay function of the decay variable $\|N_{2j}-N_{2j}(cen)\|$. The concentration of neurotrophin also decays with each time step.
$$c(N_{2j}(cen)) = \sum_{N_{2j}}P(N_{2j})\,D(N_{2j}-N_{2j}(cen)) \qquad (2)$$
$$= \sum_{N_{2j}}P(N_{2j})\,e^{-\lambda\|N_{2j}-N_{2j}(cen)\|} \qquad (3)$$
When neurotrophin is released, the growth cone begins to grow, driven by neural activity. The growth cone activity is bounded by the presynaptic factor, a summation filter representing the linear sum of the presynaptic spikes of the corresponding neuron $N_{1j}$. The most active growth cone, from source neuron $N_{1j}(sou)$, has the highest probability of being extended. Let $N_{2j}(tag)$ be the target direction of the growth cone: $N_{2j}(tag)$ is identified when the accumulated neurotrophin $c_{2j}(tag)$ exceeds the threshold, the new connection between $N_{1j}(sou)$ and $N_{2j}(tag)$ is validated, and the neurotrophin is reset to its initial state. When the new connection is completed, the old connection that bifurcated from the same neuron is blocked [15, 16]. A sketch of this selection step is given after Eq. (4) below.
$$N_{2j}(tag) = \arg\max_{N_{2j}(tag)\in Y(N_{2j})} c_{2j} \qquad (4)$$
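A compact sketch of Eqs. (2)-(4), the neurotrophin spreading and the growth-cone target selection, follows; the decay constant `lam` and the threshold are assumed values.

```python
import numpy as np

def neurotrophin(mac_activity, positions, lam=1.0):
    """Eqs. (2)-(3): the concentration at each ICx neuron sums MAC-driven
    release P(N_2j) from all sites, weighted by an exponential spreading
    kernel D; lam is the (assumed) kernel decay constant."""
    d = np.abs(positions[:, None] - positions[None, :])
    return (np.exp(-lam * d) * np.asarray(mac_activity)[None, :]).sum(axis=1)

def growthcone_target(c, threshold=1.0):
    """Eq. (4): connect to the candidate with the highest accumulated
    neurotrophin, once it exceeds an (assumed) threshold."""
    j = int(np.argmax(c))
    return j if c[j] >= threshold else None
```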
4 Real-time Learning and Adaptation
4.1 Experiments
To analyze the capability of the model in a real-time robotic system, we use an e-puck robot equipped with two lateral microphones and a camera with a 30° prism, shown in Fig. 3. The e-puck robot communicates with a PC through a Bluetooth interface. We use the e-puck robot to emulate the barn owl head. The visual and auditory target (an LED and a loudspeaker) was fixed in one location, and the owl-head robot was moved in different directions manually or by motor command. The high firing rate spike pattern was fed into the center neurons of the ICc or retina layer, corresponding to the target localization in space. In the network model, each pathway represents an 18° field in space. We label the neurons corresponding to the azimuth range −90° to −72° as pathway 1, so that the azimuth range 0° to 18° is represented by pathway 6.
The chirp from the loudspeaker is a 1 kHz sine wave. The sound signal is processed by a Fast Fourier Transform (FFT). When the average amplitude of the input signal is above a threshold, the characteristic frequency $f$ and the phase difference $\Delta\phi$ between the left and right ear are calculated. With Equations 5 and 6, we get the interaural time difference $\Delta t$ and the target direction $\theta$ in azimuth. In these equations, $V$ is the speed of sound and $L$ is the diameter of the robot head.
$$\Delta t = \frac{\Delta\phi}{2\pi f} \qquad (5)$$
$$\theta = \frac{\Delta t\,V}{L} \qquad (6)$$
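The sketch below shows how Eqs. (5) and (6) can be computed from the two microphone signals; the FFT-peak frequency estimate, the sound speed, the head diameter, and reading Eq. (6) as radians are all assumptions of this illustration.

```python
import numpy as np

def azimuth_from_mics(left, right, fs, v=343.0, head_diam=0.07):
    """Eqs. (5)-(6): interaural time difference from the phase difference at
    the dominant FFT bin, mapped to an azimuth angle (degrees)."""
    spec_l, spec_r = np.fft.rfft(left), np.fft.rfft(right)
    k = int(np.argmax(np.abs(spec_l[1:]))) + 1     # dominant non-DC bin
    f = k * fs / len(left)                         # characteristic frequency
    dphi = np.angle(spec_l[k]) - np.angle(spec_r[k])
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi    # wrap phase to (-pi, pi]
    dt = dphi / (2 * np.pi * f)                    # Eq. (5)
    return np.degrees(dt * v / head_diam)          # Eq. (6), read in radians

# e.g. azimuth_from_mics(mic_l, mic_r, fs=16000) on 1 kHz chirp recordings
```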
4.2 Experiment Results
The experiment consisted of two steps. First, the owl-head robot without the prism was positioned to head towards different directions in a random sequence; for every stimulation, a visual or an audiovisual target was presented at one of the 10 available locations. Secondly, the owl-head robot wearing a prism with an azimuth displacement of 36° was presented with randomly selected directions in azimuth. In each direction, the target stimulus was repeated 75 times. Each stimulus introduces a spike cluster within 40 time units; these spikes are binary signals with equal amplitude. The experimental results show that the system was able to adjust itself under different initial conditions.
The results of 0° target localization in the first experiment are shown in Fig. 4. Since the visual and auditory signals are registered with each other, both the visual excitatory synapse (the arrow between $N_{4j}$ and $N_{3j}$ in Fig. 2(b)) and the auditory excitatory synapse (the arrow between $N_{2j}$ and $N_{3j}$ in Fig. 2(b)) are strengthened. This means the bimodal neuron becomes more active. Because of
Figure 3: (a) The e-puck robot wearing a prism. (b) The real-time experiment. (c) Visual and auditory input. We get the visual direction from the luminous image by identifying the position of the brightest pixel. The auditory signal is processed by FFT to identify the phase difference between the left and right ears, so as to find the auditory direction.
the inhibitory relationship between the bimodal neuron and the interneuron, the interneuron is strongly inhibited and its output is close to zero. Therefore, no neurotrophin is released in the ICx neurons, as shown in Fig. 4(a). The growth cone does not grow in the auditory layer, so there is no change to the original axon connection (Fig. 4(b)).
The results of 0° target localization in the second experiment are shown in Fig. 5 and Fig. 6. Because of the prism, the visual receptive center and the auditory receptive center are in different pathways, pathway 8 and pathway 6. The visual and auditory input spike trains are independent of each other in pathway 8, so both the visual and auditory synapses connected to the bimodal neuron are weakened. The reduced inhibition increases the spike output of the interneuron, which stimulates neurotrophin release in pathway 8. With a high neurotrophin value and high firing rate spike train input, the pathway 6 growth cone is the most active one at the source layer. When the growth cone grows to a certain level, the axon connection is updated, as shown in Fig. 6(b).
Because the camera is limited to a visual angle of −30° to 30°, the real-time robot experiment only tested pathways 4-7. The tests for the remaining pathways were simulated on the PC using data accessed from pathways 4-7. The final map realignment result is shown in Fig. 7.
5 Conclusion
Adaptability is a crucial issue in the design of autonomous systems. In this paper, we demonstrate a robust model that eliminates the visual and auditory localization disparity. This model explains the mechanism behind visual and auditory signal integration: Spike Timing Dependent Plasticity is accompanied by modulation of the signals between ICc and ICx neurons. The model also provides the first clear indication of the possible role of a 'Map Adaptation Cue' in map alignment. The real-time application in a robotic barn owl head shows the model can work in the real world that the brains of animals have to face.
By studying the brain wiring mechanism in the superior colliculus, we can better understand map alignment in the brain. Map alignment also exists in the hippocampus and cortex; it is believed that map alignment plays an important role in learning, perception and memory, which is our future work.
Figure 4: Visual and auditory localization signals from the same target are registered with each other. (a) No neurotrophin is released from the ICx layer at any time during the experiment. (b) The axon connection between ICc and ICx does not change. (c)(d) Here the target direction is 0°. Both the visual and auditory receptive centers correspond to pathway 6, and their synaptic weights increase simultaneously.
Figure 5: The visual and auditory localization signals are misaligned with each other. (a) Neurotrophin released from the target ICx neurons accumulates. (b) The axon connection between ICc and ICx does not change before the neurotrophin and growth cone reach a threshold. Here the visual receptive center is in pathway 8, while the auditory receptive center is in pathway 6. (c)(d) Both the visual and auditory synapses are weakened because the input spike trains are independent of each other.
Figure 6: A new axon connection is built. (a) After the axon connection is updated, the neurotrophin is reset to its original status. (b) The new axon connection is built while the old connection is inhibited. (c)(d) Both the visual and auditory synaptic weights begin to increase after the visual and auditory signals register with each other again.
Figure 7: The arrangement of axon connections between maps. A small square represents an original point-to-point connection. The black blocks represent the new connections after adaptation.
Another issue for further discussion is the MAC. Although it is clear that the MAC is generated by an inhibitory network in SC, whether it comes from the bimodal neuron or not remains unclear.
A video of the experiment can be found at:
http://www.see.ed.ac.uk/~s0454392/
Acknowledgments
For this research, we are grateful for Barbara Webb's suggestion of using the e-puck robot. We would like to thank the EPSRC Doctoral Training Centre in Neuroinformatics for its support. We also thank Leslie Smith for advice and assistance in model building.
2,691 | 344 | Neural Network Application to Diagnostics and
Control of Vehicle Control Systems
Kenneth A. Marko
Research Staff
Ford Motor Company
Dearborn, Michigan 48121
ABSTRACT
Diagnosis of faults in complex, real-time control systems is a
complicated task that has resisted solution by traditional methods. We
have shown that neural networks can be successfully employed to
diagnose faults in digitally controlled powertrain systems. This paper
discusses the means we use to develop the appropriate databases for
training and testing in order to select the optimum network architectures
and to provide reasonable estimates of the classification accuracy of
these networks on new samples of data. Recent work applying neural
nets to adaptive control of an active suspension system is presented.
1 INTRODUCTION
This paper reports on work performed on the application of artificial neural systems
(ANS) techniques to the diagnosis and control of vehicle systems. Specifically, we have
examined the diagnosis of common faults in powertrain systems and investigated the
problem of developing an adaptive controller for an active suspension system.
In our diagnostic investigations we utilize neural networks routinely to establish the
standards for diagnostic accuracy we can expect from analysis of vehicle data. Previously
we have examined the use of various ANS paradigms to diagnosis of a wide range of
faults in a carefully collected data set from a vehicle operated in a narrow range of speed
and load. Subsequently, we have explored the classification of a data set with a more
restricted set of faults, drawn from a much broader range of operating conditions. This
step was taken as concern about needs for specific, real-time continuous diagnostics
superseded the need to develop well-controlled, on-demand diagnostic testing. The
impetus arises from recently enacted legislation which dictates that such real-time
diagnosis of powertrain systems will be required on cars sold in the U.S. by the
mid-1990's. The difference between the two applications is simple: in the former studies
it was presumed that an independent agent has identified that a fault is present, the root
cause needs only to be identified. In the real-time problem, the diagnostic task is to detect
and identify the fault as soon as it occurs. Consequently, the real-time application is
more demanding. In analyzing this more difficult task, we explore some of the
complications that arise in developing successful classification schemes for the virtually
semi-infinite data streams that are produced in continuous operation of a vehicle fleet.
The obstacles to realized applications of neural nets in this area often stem from the
sophistication required of the classifier and the complexity of the problems addressed. The
limited computational resources on-board vehicles will determine the scope of the
diagnostic task and how implementations, such as ANS methods, will operate.
Finally, we briefly examine an extension of the ANS work to developing trainable
controllers for non-linear dynamic systems such as active suspension systems.
Preliminary work in this area indicates that effective controllers for non-linear plants can
be developed efficiently, despite the exclusion of an accurate plant model from the training
process. Although our studies were carried out in simulation, and accurate plant models
were therefore available, the capability to develop controllers in the absence of such
models is a significant step forward. Such controllers can be developed for existing, unmodeled hardware, and thereby reduce both the efforts required to develop control
algorithms by conventional means and the time to program the real-time controllers.
2 NEURAL NET DIAGNOSTICS OF CONTROL SYSTEMS
Our interest in neural networks for diagnosis of faults in control systems stemmed from
work on model-based diagnosis of faults in such systems, typically called plants. In the
model-based approach, a model of the system under control is developed and used to
predict the dynamic behavior of the system. With the system in operation, the plant
performance is observed. The expected behavior and the observed behavior are compared,
and if no differences are found, the plant is deemed to be operating normally. If deviations
are found, the differences indicate that a fault of some sort is present (failure detection),
and an analysis of the differences is used in an attempt to identify the cause (fault
identification). Successful implementations (Min, 1987; Liubakka et al, 1988; Rizzoni
et al, 1989) of fault detection and identification in complex systems linearized about
selected operating points were put together utilizing mathematical constructs called failure
detection filters. These filters are simply matrices which transform a set of observations
(which become an input vector to the filter) of a plant into another vector space (the
output vector or classification space). The form of these filters suggested to us that
neural networks could be used to learn similar transforms and thereby avoid the tedious
process of model development and validation and a priori identification of the detection
filter matrix elements. We showed previously that complex signal patterns from
operating internal combustion engines could be examined on a cycle by cycle basis (two
revolutions of the common four-stroke engine cycle) and used to correctly identify faults
present in the engine (Marko et al, 1989).
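To make the filter idea concrete, a minimal sketch follows. It is illustrative only: the filter matrix, fault directions, and threshold below are invented stand-ins, whereas in the cited work they are designed from a validated plant model.

```python
import numpy as np

# Minimal sketch of a failure detection filter: a fixed matrix maps an
# observation vector into a classification space where each fault has a
# known direction. All numbers below are invented for illustration.

F = np.array([[1.0, -0.5, 0.0],      # filter matrix (in the cited work its
              [0.0,  1.0, -1.0]])    # design comes from a plant model)

FAULT_DIRECTIONS = {                  # unit vectors in the output space
    "sensor_a_bias": np.array([1.0, 0.0]),
    "actuator_lag":  np.array([0.0, 1.0]),
}
THRESHOLD = 0.1                       # residual magnitude that flags a fault

def diagnose(observation):
    residual = F @ observation        # transform the observation (input vector)
    if np.linalg.norm(residual) < THRESHOLD:
        return "normal"               # no failure detected
    # Failure identification: pick the fault direction best aligned
    # with the residual.
    unit = residual / np.linalg.norm(residual)
    return max(FAULT_DIRECTIONS,
               key=lambda name: abs(unit @ FAULT_DIRECTIONS[name]))
```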
Typical data collected from an operating engine has been shown elsewhere (Marko et al,
1989). This demonstration was focussed on a production engine, limited to a small
operating range. One might suppose that a linear model-based diagnostic system could
be constructed for such a task, if one wished to expend the time and effort, and therefore this exercise was not a strenuous test of the neural network approach. Additionally, our expert diagnostician could examine the data traces and accurately identify the faults. However, we demonstrated that this problem, which had eluded automated solution by other means up to that time, could easily be handled by neural network classifiers and encouraged us to proceed to more difficult problems for which efficient, rigorous
procedures did not exist. We were prepared to tolerate developing empirical solutions to
our more difficult problems, since we did not expect that a thorough analytic
understanding would precede a demonstrated solution. The process outlined here utilized
neural network analysis almost exclusively (predominantly back-propagation) on these
problems. The understanding of the relationship of neural networks, the structure of the
data and the training and testing of the classifiers emerged after acceptable solutions using
the neural networks methods were obtained.
Consequently, the next problem addressed was that of identifying similar faults by
observing the system through the multiplex serial communication link resident on the
engine control computer. The serial link provides a simple hook-up procedure to the
vehicle without severing any links between plant and microcontroller. However, the chief
drawback of this approach is that it greatly complicates the recognition task. The
complication arises because the data from the plant is sampled too infrequently, is
"contaminated" by some processing in the controller. and delivered asynchronously to the
serial link with respect to events in the plant (the data output process is not permitted to
interrupt the real-time control requirements). In this case, a test sample of a smaller
number of faults was drawn from a vehicle operated in a similar limited range to the first
example and an attempt to detect and identify the faults was made using a variety of
networks. Unlike the previous case, it was impossible for any experienced technicians to identify the faults. Again, neural network classifiers were found to develop satisfactory solutions over these limited data sets, which were later verified by a number of careful statistical tests (Marko et al, 1990). This more complex problem also produced a wider range of performance among the various neural net paradigms studied, as shown in Figure 1, where the error rates for various classifiers on these data sets are shown in the graph.
These results suggested that not only would data quality and quantity need to be controlled
and improved, but that the problem itself would implicitly direct us to the choice of the
classifier paradigm. These issues are more thoroughly discussed elsewhere (Marko et al,
1990; Weiss et al, 1990), but the conclusion was that a complete, acceptable solution to
the real scope of this problem could not be developed with our group's resources for data
collection, data verification and classifier validation.
With these two experiences in mind, we could see that the first approach was an effective
means of handling the failure detection and identification (FDI) problem, while the latter,
although attractive from the standpoint of easy link-up to a vehicle, was for our
numerical analysis, a very difficult task. It seemed that the appropriate course was to
obtain reliable data, by observing the plant directly, and to perform the classification on that data. An effective scheme to accomplish this goal is to perform the classification task in the control microprocessor which has access to the direct data. Adopting this strategy,
we move the diagnostics from an off-board processor to the on-board processor, and
create a new set of possibilities for diagnostics.
With diagnostics contained in the controlling processor, diagnostics can be shifted from
an on-demand activity, undertaken at predetermined intervals or when the vehicle operator has detected a problem, to a continuous, real-time activity. This change implies that the
diagnostic algorithms will, for the most part, be evaluating a properly operating system
and only infrequently be required to detect a failure and identify the cause. Additionally,
the diagnostic algorithms will have to be very compact, since the current control
microprocessors have very limited time and memory for calculation compared to a
off-board PC. Furthermore, the classification task will need to be learned from a sample
of data which is minuscule compared to the data sets that the deployed diagnostics will
have to classify. This fact imposes on the training data set the requirement that it be an
accurate statistical sample of the much more voluminous real-world data. This situation
must prevail because we cannot anticipate the deployment of a classifier mal is
undergoing continuous training. A classifier capable of continuous adaptation would
require more computational capability, and quite likely a supervised learning environment.
The fact is, even for relatively simple diagnostics of operating engines, assembling a
large, accurate training data set off-line is a considerable task. This last issue is explored
in the next paragraph, but it seems to rule out early deployment of anything other than
pretrained classifiers until some experience with much larger data sets from deployed
diagnostic systems is obtained.
[Figure 1 plot: bar chart of error rates (vertical axis, 0 to 0.5) on the 60-Pin and DCL data sets for the classifiers N.N., RCE-Single, RCE-Mult., Backprop., Tree-Hyperplane, and Tree-C.R.]
Figure 1. Comparison of the performance of various neural network paradigms on two static data sets by leave-one-out testing from measurements performed on vehicles in a service bay. The network paradigms tested are nearest neighbor, Restricted Coulomb Energy (RCE) Single Unit, RCE Multiple Units, Backpropagation, Tree Classifier using hyperplane separation, and Tree Classifier using Center-Radius decision surfaces. The 60-Pin data is the data obtained directly from the engine; the DCL (Data Communication Link) data comes through the control microprocessor on a multiplexed two-wire link. Note that RCE-Multiple requires a priori knowledge about the problem which was unavailable for the DCL data, and that the complete statistical testing of backpropagation was impractical due to the length of time required to train each network.
We have examined this issue of real-time diagnostics as it applies to engine misfire
detection and identification. Data from normal and misfiring engines was required from a wide range of conditions, a task which consumes hours of test track driving. The set of measurements taken is extensive in order to be certain that the information obtained is a superset of the minimum set of information required. Additionally, great care needed to be exercised in establishing the accuracy of a training set for supervised learning.
Specifically, we needed to be certain that the only samples of misfires included were those
intentionally created, and not those which occurred spontaneously and were presumably
mislabeled as normal because no intentional fault was being introduced at that time. In
order to accomplish this purification of the training set, one must either have an
independent detector of misfires (none exists for a production engine operating in a
vehicle) or go through an iterative process to remove all the data vectors misclassified as
misfire from the data set after the network has completed training. Since the independent
assessment of misfire cannot be obtained, we must accept the latter method which is not
altogether satisfactory. The problem with the iterative method is that one must initially
exclude from the training set exactly the type of event that the system is being trained to
classify. We have to start with the assumption that any additional misfires, beyond the
number we introduce, are classification errors. We then reserve the right to amend this
judgment in light of further experience as we build up confidence in the classifier. The
results of our initial studies are shown in Fig. 2. Here we can see that a backpropagation neural network can classify a broad range of engine operation correctly, and that the network does quite well when we broaden the operating range almost to the performance
limits of the engine. The classification errors indicated in the more exhaustive study are
misfires detected when no misfire was introduced. At this stage of our investigation we
cannot be certain that these are real errors; they may very well be misfires occurring spontaneously or appearing as a result of an additional, unintentionally induced misfire in an engine cycle following the one in which the fault was introduced.
The results shown in Fig. 2 therefore represent a conservative estimate of the
classification errors that can be expected from tests of our engine data. The
backpropagation network we constructed demonstrated that misfire detection and
identification is attainable if adequate computation resources are available and appropriate
                 Limited operation        Extended operation
ANS class:       NORMAL    MISFIRE        NORMAL    MISFIRE
Real NORMAL         762          0          7419         13
Real MISFIRE          1         15             4        150
Figure 2. Classification accuracy of a backpropagation neural network trained on misfire data, tabulated as confusion matrices. Data similar to that shown in Fig. 2 is collected over a modest range of dynamic conditions and then over a very wide range of conditions (potholed roads, severe accelerations and braking, etc.) to estimate the performance limits of classifiers on such data. These misclassification rates are indicators of the best possible performance obtainable from such data, and therefore they are not reasonable estimates of what practical implementations of classifiers should produce.
care in obtaining a suitable training set is exercised. However, in order to make a neural net a practical means of performing this diagnosis aboard vehicles, we need to eliminate information from the input vector which has no effect on the classification accuracy; otherwise the computational task is hopelessly beyond the capability of the engine's microcontroller. This work is currently underway using a combination of a priori knowledge about the sensor information and principal component analysis of the data sets. Nonetheless, the neural network analysis has once again established that a solution exists and set standards for classification accuracy that we can hope to emulate with more
compact forms of classifiers.
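The input-pruning step described here might be sketched as follows; this is an illustration rather than our actual procedure, and the retained-variance cutoff is an assumption.

```python
import numpy as np

# Illustrative sketch of reducing the classifier's input vector with
# principal component analysis; the 99% variance cutoff is an assumption.

def reduce_inputs(X, retained_variance=0.99):
    """X: (n_samples, n_features) matrix of engine measurements."""
    Xc = X - X.mean(axis=0)                      # center the data
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)                    # variance per component
    k = int(np.searchsorted(np.cumsum(var), retained_variance) + 1)
    return Xc @ Vt[:k].T, Vt[:k]                 # projected data, projection
```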
3 NEURAL NET CONTROL OF ACTIVE SUSPENSION
The empirical approach to developing solutions for diagnostic problems suggested that a
similar tactic might be employed effectively to control problems for which developing
acceptable controllers for non-linear dynamic systems by conventional means was a
daunting task. We wished to explore the application of feed-forward networks to the
problem of learning to control a model of a non-linear active suspension system. This
problem was of interest because considerable effort had gone into designing controllers by
conventional means and a performance comparison could readily be made. In addition,
since active suspension systems are being investigated by a number of companies, we
wished to examine the possibility of developing model-independent controllers for such
systems, since effective hardware systems are usually available before thoroughly
validated system models appear. The initial results of this investigation, outlined below,
are quite encouraging.
A backpropagation network was trained to emulate an existing controller for an active
suspension as a first exercise to establish some feel for the complexity of the network
required to perform such a task. A complete description of the work can be found
elsewhere (Hampo, 1990), but briefly, a network with several hidden nodes was trained to provide performance equivalent to the conventional controller. Since this exercise simply
replicated an existing controller, the next step was to develop a controller in the absence
of any conventional controller. Therefore, a system model with a novel non-linearity was
developed and utilized to train a neural network to control such a plant. The architecture
for this control system is similar to that used by Nguyen and Widrow (Nguyen et al, 1990) and is described in detail elsewhere (Hampo et al, 1991). Once again, a backpropagation network, with only 2 hidden nodes, was trained to provide satisfactory performance in controlling the suspension system simulation running on a workstation. This small network learned the task with less than 1000 training vectors, the equivalent of less than 100 feet of bumpy road.
Finally, we examined the performance of the neural network on the same plant, but
without explicit use of the plant model in the control architecture. In this scheme, the
output error is derived from the difference between the observed performance and the
desired performance produced by a cost function based upon conventional measures of
suspension performance. In this Cost Function architecture, networks of similar size
were readily trained to control non-linear plants and attain performance equivalent to
conventional controllers hand-tuned for such plants. Controllers developed in this
manner provide a flexible means of approaching the problem of investigating tradeoffs
between the conflicting demands made on such suspension systems. These demands
include ride quality, vehicle control, and energy management. This control architecture is
being applied both to simulations of new systems and to actual, un-modeled hardware rigs
to expedite prototype development.
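A rough sketch of such a Cost Function training step is given below. It is schematic rather than the architecture used in this work: the tiny network, the cost weights, and the finite-difference gradient through the plant simulation (`plant_step`, a hypothetical helper) are all stand-ins.

```python
import numpy as np

# Schematic of training a neural controller against a cost function built
# from conventional suspension measures. The plant model, cost weights and
# finite-difference gradient are illustrative stand-ins.

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)) * 0.1, rng.normal(size=(1, 4)) * 0.1

def controller(state):                       # tiny feed-forward network
    hidden = np.tanh(W1 @ state)
    return W2 @ hidden, hidden

def cost(next_state):                        # ride quality vs. body travel
    body_accel, suspension_travel, _ = next_state
    return body_accel**2 + 0.1 * suspension_travel**2

def train_step(state, plant_step, lr=1e-3, eps=1e-4):
    global W1, W2
    u, hidden = controller(state)
    # Numerical gradient of the cost with respect to the control action u.
    dJ_du = (cost(plant_step(state, u + eps)) -
             cost(plant_step(state, u - eps))) / (2 * eps)
    # Backpropagate dJ/du through the network weights.
    W2 -= lr * dJ_du * hidden[None, :]
    W1 -= lr * dJ_du * ((W2.T * (1 - hidden[:, None]**2)) @ state[None, :])
```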
4 CONCLUSIONS
This brief summary of our investigations has shown that neural networks play an
important role in the development both of classification systems for diagnosis of faults in
control systems and of controllers for practical non-linear plants. In these tasks, neural
networks must compete with conventional methods. Conventional methods, although
endowed with a more thorough analytic understanding, have usually failed to provide
acceptable solutions to the problems we encountered as readily as have the neural network
methods. Therefore, the ANS methods have a crucial role in developing solutions.
Although neural networks provide these solutions expeditiously, we are just beginning to
understand how these solutions arise. The growth of this understanding will determine
the role neural networks play in the deployed implementations of these solutions.
References
1. P.S. Min, "Detection of Incipient Failures in Dynamic Systems", Ph.D. Thesis,
University of Michigan, 1987.
2. M.K. Liubakka, G. Rizzoni, W.B. Ribbens and K.A. Marko, "Failure Detection
Algorithms Applied to Control System Design for Improved Diagnostics and
Reliability", SAE Paper #880726, Detroit, Michigan, 1988.
3. G. Rizzoni, R. Hampo, M.K. Liubakka and K.A. Marko, "Real-Time Detection
Filters for On-Board Diagnosis of Incipient Failures", SAE Paper #890763, 1989.
4. K.A. Marko, J. James, J. Dosdall and J. Murphy, "Automotive Control System Diagnostics Using Neural Nets for Rapid Classification of Large Data Sets", Proceedings IJCNN, II-13, Washington, D.C., 1989.
5. K.A. Marko, L.A. Feldkamp and G.V. Puskorius, "Automotive Diagnostics Using
Trainable Classifiers: Statistical Testing and Paradigm Selection", Proceedings IJCNN, I-33, San Diego, California, 1990.
6. Sholom Weiss and Casimir Kulikowski, "Computer Systems That Learn", Morgan
Kaufman, San Mateo, California, 1990.
7. R.J. Hampo, "Neural Net Control of an Active Suspension System", M.S. Thesis, University of Michigan, 1990.
8. D. Nguyen and B. Widrow, "The Truck Backer-Upper: An Example of Self-Learning in Neural Networks", in Neural Networks for Control, ed. W.T. Miller, MIT Press,
Cambridge, Massachusetts, 1990.
9. RJ. Hampo and K.A. Marko, "Neural Net Architectures for Active Suspension
Control", paper submitted to UCNN, Seattle, Washington, 1991.
predominantly:1 common:2 strenuous:1 discussed:1 assembling:1 occurred:1 braking:1 significant:1 measurement:2 cambridge:1 ai:5 outlined:2 had:2 reliability:1 ride:1 access:1 expeditiously:1 operating:10 surface:1 etc:1 recent:1 exclusion:1 showed:1 certain:2 fault:21 morgan:1 minimum:1 additional:2 care:2 staff:1 employed:2 determine:1 paradigm:6 signal:1 ii:1 semi:1 multiple:2 expedite:1 rj:2 stem:1 technician:1 calculation:1 lin:1 serial:3 controlled:3 controller:18 represent:1 adopting:1 addition:1 addressed:2 interval:1 standpoint:1 crucial:1 operate:1 unlike:1 induced:1 virtually:1 easy:1 superset:1 automated:1 variety:1 architecture:6 identified:2 approaching:1 reduce:1 prototype:1 tradeoff:1 fleet:1 handled:1 effort:3 tabulated:1 proceed:1 cause:3 adequate:1 transforms:1 prepared:1 mid:1 ph:1 hardware:3 exist:1 shifted:1 diagnostic:10 correctly:2 track:1 diagnosis:10 group:1 incipient:2 four:1 drawn:2 verified:1 kenneth:1 utilize:1 undertaken:1 graph:1 legislation:1 compete:1 almost:2 reasonable:2 separation:1 decision:1 acceptable:4 encountered:1 truck:1 activity:2 automotive:2 speed:1 min:1 performing:1 relatively:1 developing:8 combination:1 unintentional:1 smaller:1 wi:1 severing:1 restricted:2 taken:2 resource:3 previously:2 discus:1 pin:2 needed:2 mind:1 powertrain:3 available:3 operation:3 endowed:1 detennine:1 appropriate:3 coulomb:1 appearing:1 altogether:1 broaden:1 running:1 include:1 completed:1 rce:2 kulikowski:1 build:1 establish:2 move:1 realized:1 occurs:1 quantity:1 strategy:1 traditional:1 link:7 me:1 collected:3 length:1 modeled:1 relationship:1 demonstration:1 difficult:4 dire:1 trace:1 implementation:3 design:1 perform:1 upper:1 observation:1 wire:1 sold:1 arc:1 situation:1 extended:1 communication:2 unmodeled:1 introduced:3 required:7 trainable:2 extensive:1 engine:16 eluded:1 learned:2 narrow:1 conflicting:1 established:1 hour:1 california:2 beyond:2 suggested:3 usually:2 pattern:1 below:1 program:1 reliable:1 memory:1 event:2 demanding:1 misclassification:1 suitable:1 indicator:1 scheme:3 brief:1 ne:2 hook:1 stan:1 superseded:1 carried:1 deemed:1 created:1 woo:1 understanding:4 underway:1 plant:17 expect:2 validation:2 agent:1 verification:1 imposes:1 production:2 elsewhere:4 course:1 summary:1 last:1 soon:1 understand:1 wide:3 neighbor:1 focussed:1 expend:1 evaluating:1 world:1 seemed:1 forward:2 made:3 adaptive:2 collection:1 replicated:1 san:2 compact:2 implicitly:1 active:9 investigating:1 continuous:5 misfiring:1 iterative:2 un:1 bay:1 chief:1 additionally:3 learn:2 obtaining:1 unavailable:1 investigated:2 complex:4 microprocessor:3 diag:1 did:2 arise:2 eural:1 fig:3 board:5 deployed:3 experienced:1 ijc:1 explicit:1 exercise:3 load:1 specific:1 revolution:1 undergoing:1 explored:2 concern:1 exists:2 effectively:1 resisted:1 prevail:1 occurring:1 demand:4 michigan:4 sophistication:1 simply:2 explore:2 likely:1 failed:1 hopelessly:1 contained:1 pretrained:1 applies:1 prop:1 goal:1 consequently:2 careful:1 acceleration:1 absence:2 considerable:2 change:1 included:1 specifically:2 infinite:1 typical:1 hyperplane:1 principal:1 conservative:1 called:2 select:1 internal:1 latter:2 arises:2 multiplexed:1 tested:1 handling:1 |
2,692 | 3,440 | Kernel Measures of Independence for non-iid Data
Xinhua Zhang
NICTA and Australian National University
Canberra, Australia
[email protected]
Le Song*
School of Computer Science
Carnegie Mellon University, Pittsburgh, USA
[email protected]
Arthur Gretton
MPI Tübingen for Biological Cybernetics
Tübingen, Germany
[email protected]
Alex Smola*
Yahoo! Research
Santa Clara, CA, United States
[email protected]
Abstract
Many machine learning algorithms can be formulated in the framework of statistical independence such as the Hilbert Schmidt Independence Criterion. In this
paper, we extend this criterion to deal with structured and interdependent observations. This is achieved by modeling the structures using undirected graphical
models and comparing the Hilbert space embeddings of distributions. We apply
this new criterion to independent component analysis and sequence clustering.
1
Introduction
Statistical dependence measures have been proposed as a unifying framework to address many machine learning problems. For instance, clustering can be viewed as a problem where one strives to
maximize the dependence between the observations and a discrete set of labels [14]. Conversely, if
labels are given, feature selection can be achieved by finding a subset of features in the observations
which maximize the dependence between labels and features [15]. Similarly in supervised dimensionality reduction [13], one looks for a low dimensional embedding which retains additional side
information such as class labels. Likewise, blind source separation (BSS) tries to unmix independent
sources, which requires a contrast function quantifying the dependence of the unmixed signals.
The use of mutual information is well established in this context, as it is theoretically well justified.
Unfortunately, it typically involves density estimation or at least a nontrivial optimization procedure
[11]. This problem can be averted by using the Hilbert Schmidt Independence Criterion (HSIC). The
latter enjoys concentration of measure properties and it can be computed efficiently on any domain
where a Reproducing Kernel Hilbert Space (RKHS) can be defined.
However, the application of HSIC is limited to independent and identically distributed (iid) data, a
property that many problems do not share (e.g., BSS on audio data). For instance many random
variables have a pronounced temporal or spatial structure. A simple motivating example is given in
Figure 1a. Assume that the observations xt are drawn iid from a uniform distribution on {0, 1} and
$y_t$ is determined by an XOR operation via $y_t = x_t \oplus x_{t-1}$. Algorithms which treat the observation
pairs $\{(x_t, y_t)\}_{t=1}^{T}$ as iid will consider the random variables x, y as independent. However, it is
trivial to detect the XOR dependence by using the information that xi and yi are, in fact, sequences.
In view of its importance, temporal correlation has been exploited in the independence test for blind
source separation. For instance, [9] used this insight to reject nontrivial nonseparability of nonlinear
mixtures, and [18] exploited multiple time-lagged second-order correlations to decorrelate over time.
These methods work well in practice. But they are rather ad hoc and appear very different from
standard criteria. In this paper, we propose a framework which extends HSIC to structured non-iid data. Our new approach is built upon the connection between exponential family models and
* This work was partially done when the author was with the Statistical Machine Learning Group of NICTA.
[Figure 1 diagrams: four graphical models over nodes $x_{t-1}, x_t, x_{t+1}$ and $y_{t-1}, y_t, y_{t+1}$ (with $z_t$, and $x_{st}, y_{st}$ in the mesh case); panels (a) XOR sequence, (b) iid, (c) first order sequential, (d) 2-dim mesh.]
Figure 1: From left to right: (a) Graphical model representing the XOR sequence, (b) a graphical
model representing iid observations, (c) a graphical model for first order sequential data, and (d) a
graphical model for dependency on a two dimensional mesh.
the marginal polytope in an RKHS. This is doubly attractive since distributions can be uniquely
identified by the expectation operator in the RKHS and moreover, for distributions with conditional
independence properties the expectation operator decomposes according to the clique structure of
the underlying undirected graphical model [2].
2
The Problem
Denote by X and Y domains from which we will be drawing observations Z :=
$\{(x_1, y_1), \ldots, (x_m, y_m)\}$ according to some distribution p(x, y) on $\mathcal{Z} := \mathcal{X} \times \mathcal{Y}$. Note that the
domains X and Y are fully general and we will discuss a number of different structural assumptions
on them in Section 3 which allow us to recover existing and propose new measures of dependence.
For instance x and y may represent sequences or a mesh for which we wish to establish dependence.
To assess whether x and y are independent we briefly review the notion of Hilbert Space embeddings
of distributions [6]. Subsequently we discuss properties of the expectation operator in the case of
conditionally independent random variables which will lead to a template for a dependence measure.
Hilbert Space Embedding of Distribution Let $\mathcal{H}$ be a RKHS on $\mathcal{Z}$ with kernel $v : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$. Moreover, let $\mathcal{P}$ be the space of all distributions over $\mathcal{Z}$, and let $p \in \mathcal{P}$. The expectation operator in $\mathcal{H}$ and its corresponding empirical average can be defined as in [6]
$$\mu[p] := \mathbf{E}_{z \sim p(z)}[v(z, \cdot)] \quad \text{such that} \quad \mathbf{E}_{z \sim p(z)}[f(z)] = \langle \mu[p], f \rangle \tag{1}$$
$$\mu[Z] := \frac{1}{m} \sum_{i=1}^{m} v((x_i, y_i), \cdot) \quad \text{such that} \quad \frac{1}{m} \sum_{i=1}^{m} f(x_i, y_i) = \langle \mu[Z], f \rangle. \tag{2}$$
The map $\mu : \mathcal{P} \to \mathcal{H}$ characterizes a distribution by an element in the RKHS. The following theorem shows that the map is injective [16] for a large class of kernels such as Gaussian and Laplacian RBF.
Theorem 1 If $\mathbf{E}_{z \sim p}[v(z, z)] < \infty$ and $\mathcal{H}$ is dense in the space of bounded continuous functions $C^0(\mathcal{Z})$ in the $L^\infty$ norm, then the map $\mu$ is injective.
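As a concrete illustration of (1)–(2), the sketch below evaluates inner products of empirical mean embeddings purely through kernel evaluations. It is an illustrative sketch rather than part of the method: the Gaussian RBF kernel and its bandwidth are choices made here, not prescribed by the text.

```python
import numpy as np

# Sketch: inner products between empirical mean embeddings reduce to
# averages of kernel evaluations, <mu[Z1], mu[Z2]> = mean_{i,j} v(z_i, z'_j).
# The Gaussian RBF kernel and bandwidth below are illustrative choices.

def rbf(Z1, Z2, sigma=1.0):
    sq = ((Z1[:, None, :] - Z2[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def embedding_inner(Z1, Z2, kernel=rbf):
    return kernel(Z1, Z2).mean()          # <mu[Z1], mu[Z2]>

def embedding_sq_distance(Z1, Z2, kernel=rbf):
    # ||mu[Z1] - mu[Z2]||^2, the quantity used to compare distributions
    return (embedding_inner(Z1, Z1, kernel)
            + embedding_inner(Z2, Z2, kernel)
            - 2 * embedding_inner(Z1, Z2, kernel))
```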
2.1 Exponential Families
We are interested in the properties of $\mu[p]$ in the case where p satisfies the conditional independence relations specified by an undirected graphical model. In [2], it is shown that for this case the
sufficient statistics decompose along the maximal cliques of the conditional independence graph.
More formally, denote by $\mathcal{C}$ the set of maximal cliques of the graph $G$ and let $z_c$ be the restriction of $z \in \mathcal{Z}$ to the variables on clique $c \in \mathcal{C}$. Moreover, let $v_c$ be universal kernels in the sense of [17] acting on the restrictions of $\mathcal{Z}$ on clique $c \in \mathcal{C}$. In this case, [2] showed that
$$v(z, z') = \sum_{c \in \mathcal{C}} v_c(z_c, z'_c) \tag{3}$$
can be used to describe all probability distributions with the above mentioned conditional independence relations using an exponential family model with v as its kernel. Since for exponential families
expectations of the sufficient statistics yield injections, we have the following result:
Corollary 2 On the class of probability distributions satisfying conditional independence properties according to a graph $G$ with maximal clique set $\mathcal{C}$ and with full support on their domain, the operator
$$\mu[p] = \sum_{c \in \mathcal{C}} \mu_c[p_c] = \sum_{c \in \mathcal{C}} \mathbf{E}_{z_c}[v_c(z_c, \cdot)] \tag{4}$$
is injective if the kernels $v_c$ are all universal. The same decomposition holds for the empirical counterpart $\mu[Z]$.
The condition of full support arises from the conditions of the Hammersley-Clifford Theorem [4, 8]:
without it, not all conditionally independent random variables can be represented as the product of
potential functions. Corollary 2 implies that we will be able to perform all subsequent operations on
structured domains simply by dealing with mean operators on the corresponding maximal cliques.
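In code, the decomposition (3)–(4) simply says that the joint kernel is a sum of per-clique kernels. The sketch below is illustrative: using a Gaussian RBF for every clique kernel is one valid universal choice, not the only one.

```python
import numpy as np

# Sketch of the decomposed kernel v(z, z') = sum_c v_c(z_c, z'_c) from (3).
# `cliques` lists index sets into the coordinates of z; an RBF for every
# clique kernel is an illustrative (universal) choice.

def clique_kernel(zc1, zc2, sigma=1.0):
    return np.exp(-np.sum((zc1 - zc2)**2) / (2 * sigma**2))

def decomposed_kernel(z1, z2, cliques):
    return sum(clique_kernel(z1[list(c)], z2[list(c)]) for c in cliques)

# Example: a chain z1 - z2 - z3 has maximal cliques (0, 1) and (1, 2).
z, zp = np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.5, 0.2])
value = decomposed_kernel(z, zp, cliques=[(0, 1), (1, 2)])
```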
2.2 Hilbert Schmidt Independence Criterion
Theorem 1 implies that we can quantify the difference between two distributions p and q by simply computing the square distance between their RKHS embeddings, i.e., $\|\mu[p] - \mu[q]\|_{\mathcal{H}}^2$. Similarly, we can quantify the strength of dependence between random variables x and y by simply measuring the square distance between the RKHS embeddings of the joint distribution p(x, y) and the product of the marginals $p(x) \cdot p(y)$ via
$$I(x, y) := \|\mu[p(x, y)] - \mu[p(x)p(y)]\|_{\mathcal{H}}^2. \tag{5}$$
Moreover, Corollary 2 implies that for an exponential family consistent with the conditional independence graph $G$ we may decompose $I(x, y)$ further into
$$I(x, y) = \sum_{c \in \mathcal{C}} \|\mu_c[p_c(x_c, y_c)] - \mu_c[p_c(x_c)\, p_c(y_c)]\|_{\mathcal{H}_c}^2$$
$$= \sum_{c \in \mathcal{C}} \left( \mathbf{E}_{(x_c y_c)(x'_c y'_c)} + \mathbf{E}_{x_c y_c x'_c y'_c} - 2\, \mathbf{E}_{(x_c y_c)\, x'_c y'_c} \right) \left[ v_c((x_c, y_c), (x'_c, y'_c)) \right] \tag{6}$$
where bracketed random variables in the subscripts are drawn from their joint distributions and unbracketed ones are from their respective marginals, e.g., $\mathbf{E}_{(x_c y_c)\, x'_c y'_c} := \mathbf{E}_{(x_c y_c)} \mathbf{E}_{x'_c} \mathbf{E}_{y'_c}$. Obviously
the challenge is to find good empirical estimates of (6). In its simplest form we may replace each of
the expectations by sums over samples, that is, by replacing
$$\mathbf{E}_{(x,y)}[f(x, y)] \approx \frac{1}{m} \sum_{i=1}^{m} f(x_i, y_i) \quad \text{and} \quad \mathbf{E}_{(x)(y)}[f(x, y)] \approx \frac{1}{m^2} \sum_{i,j=1}^{m} f(x_i, y_j). \tag{7}$$
3 Estimates for Special Structures
To illustrate the versatility of our approach we apply our model to a number of graphical models
ranging from independent random variables to meshes proceeding according to the following recipe:
1. Define a conditional independence graph.
2. Identify the maximal cliques.
3. Choose suitable joint kernels on the maximal cliques.
4. Exploit stationarity (if existent) in I(x, y) in (6).
5. Derive the corresponding empirical estimators for each clique, and hence for all of I(x, y).
3.1 Independent and Identically Distributed Data
As the simplest case, we first consider the graphical model in Figure 1b, where $\{(x_t, y_t)\}_{t=1}^{T}$ are iid random variables. Correspondingly the maximal cliques are $\{(x_t, y_t)\}_{t=1}^{T}$. We choose the joint kernel on the cliques to be
$$v_t((x_t, y_t), (x'_t, y'_t)) := k(x_t, x'_t)\, l(y_t, y'_t) \quad \text{hence} \quad v((x, y), (x', y')) = \sum_{t=1}^{T} k(x_t, x'_t)\, l(y_t, y'_t). \tag{8}$$
The representation for vt implies that we are taking an outer product between the Hilbert Spaces on
xt and yt induced by kernels k and l respectively. If the pairs of random variables (xt , yt ) are not
identically distributed, all that is left is to use (8) to obtain an empirical estimate via (7).
We may improve the estimate considerably if we are able to assume that all pairs (xt , yt )
are drawn from the same distribution p(xt , yt ). Consequently all coordinates of the mean
map are identical and we can use all the data to estimate just one of the discrepancies
$\|\mu_c[p_c(x_c, y_c)] - \mu_c[p_c(x_c)\, p_c(y_c)]\|^2$. The latter expression is identical to the standard HSIC criterion and we obtain the biased estimate
$$\hat{I}(x, y) = \frac{1}{T} \operatorname{tr} HKHL \quad \text{where} \quad K_{st} := k(x_s, x_t),\ L_{st} := l(y_s, y_t)\ \text{and}\ H_{st} := \delta_{st} - \frac{1}{T}. \tag{9}$$
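A direct transcription of the biased estimate (9) follows. Only the $\frac{1}{T} \operatorname{tr} HKHL$ form is taken from the text; the Gaussian kernels for k and l are illustrative choices.

```python
import numpy as np

# Biased HSIC estimate from (9): I_hat = (1/T) tr(H K H L), with
# H = I - (1/T) 11^T. The Gaussian kernels are illustrative choices.

def rbf_gram(X, sigma=1.0):
    """X: (T, d) array of observations; returns the (T, T) Gram matrix."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(X, Y, kernel=rbf_gram):
    T = X.shape[0]
    K, L = kernel(X), kernel(Y)
    H = np.eye(T) - np.ones((T, T)) / T         # centering matrix
    return np.trace(H @ K @ H @ L) / T
```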
3.2 Sequence Data
A more interesting application beyond iid data is sequences with a Markovian dependence as depicted in Figure 1c. Here the maximal cliques are the sets $\{(x_t, x_{t+1}, y_t, y_{t+1})\}_{t=1}^{T-1}$. More generally, for longer range dependency of order $\tau \in \mathbb{N}$, the maximal cliques will involve the random variables $(x_t, \ldots, x_{t+\tau}, y_t, \ldots, y_{t+\tau}) =: (x_{t,\tau}, y_{t,\tau})$.
We assume homogeneity and stationarity of the random variables: that is, all cliques share the same
sufficient statistics (feature map) and their expected value is identical. In this case the kernel
$$v_0((x_{t,\tau}, y_{t,\tau}), (x'_{t,\tau}, y'_{t,\tau})) := k(x_{t,\tau}, x'_{t,\tau})\, l(y_{t,\tau}, y'_{t,\tau})$$
can be used to measure discrepancy between the random variables. Stationarity means that $\mu_c[p_c(x_c, y_c)]$ and $\mu_c[p_c(x_c)\, p_c(y_c)]$ are the same for all cliques c, hence I(x, y) is a multiple of
the difference for a single clique.
Using the same argument as in the iid case, we can obtain a biased estimate of the measure of
dependence by using $K_{ij} = k(x_{i,\tau}, x_{j,\tau})$ and $L_{ij} = l(y_{i,\tau}, y_{j,\tau})$ instead of the definitions of K and
L in (9). This works well in experiments. In order to obtain an unbiased estimate we need some
more work. Recall the unbiased estimate of I(x, y) is a fourth order U-statistic [6].
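In code, the biased sequence estimator only changes the Gram matrices: each sample is replaced by the window $(x_i, \ldots, x_{i+\tau})$, as in the definitions of $K_{ij}$ and $L_{ij}$ above. A minimal sketch, with the kernel function passed in as an assumption:

```python
import numpy as np

# Sketch of the biased structured-HSIC estimate for sequences: each sample
# becomes the window (x_i, ..., x_{i+tau}), so that
# K_ij = k(x_{i,tau}, x_{j,tau}) and L_ij = l(y_{i,tau}, y_{j,tau}).

def windows(X, tau):
    """Stack tau+1 consecutive observations into one vector per time step."""
    T = X.shape[0] - tau
    return np.stack([X[i:i + tau + 1].ravel() for i in range(T)])

def structured_hsic_biased(X, Y, tau, kernel):
    Xw, Yw = windows(X, tau), windows(Y, tau)
    T = Xw.shape[0]
    H = np.eye(T) - np.ones((T, T)) / T
    return np.trace(H @ kernel(Xw) @ H @ kernel(Yw)) / T
```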
Theorem 3 An unbiased empirical estimator for $\|\mu[p(x, y)] - \mu[p(x)p(y)]\|^2$ is
$$\hat{I}(x, y) := \frac{(m-4)!}{m!} \sum_{(i,j,q,r)} h(x_i, y_i, \ldots, x_r, y_r), \tag{10}$$
where the sum is over all terms such that i, j, q, r are mutually different, and
$$h(x_1, y_1, \ldots, x_4, y_4) := \frac{1}{4!} \sum_{(t,u,v,w)}^{(1,2,3,4)} k(x_t, x_u)\, l(y_t, y_u) + k(x_t, x_u)\, l(y_v, y_w) - 2\, k(x_t, x_u)\, l(y_t, y_v),$$
and the latter sum denotes all ordered quadruples (t, u, v, w) drawn from (1, 2, 3, 4).
The theorem implies that in expectation h takes on the value of the dependence measure. To establish that this also holds for dependent random variables we use a result from [1] which establishes
convergence for stationary mixing sequences under mild regularity conditions, namely whenever
the kernel of the U-statistic h is bounded and the process generating the observations is absolutely
regular. See also [5, Section 4].
Theorem 4 Whenever I(x, y) > 0, that is, whenever the random variables are dependent, the estimate $\hat{I}(x, y)$ is asymptotically normal with
$$\sqrt{m}\,\big(\hat{I}(x, y) - I(x, y)\big) \xrightarrow{d} \mathcal{N}(0, 4\sigma^2) \tag{11}$$
where the variance is given by
$$\sigma^2 = \operatorname{Var}[h_3(x_1, y_1)] + 2 \sum_{t=1}^{\infty} \operatorname{Cov}\big(h_3(x_1, y_1), h_3(x_t, y_t)\big) \tag{12}$$
and
$$h_3(x_1, y_1) := \mathbf{E}_{(x_2, y_2, x_3, y_3, x_4, y_4)}\big[h(x_1, y_1, \ldots, x_4, y_4)\big]. \tag{13}$$
This follows from [5, Theorem 7], again under mild regularity conditions (note that [5] state their
results for U-statistics of second order, and claim the results hold for higher orders). The proof is
tedious but does not require additional techniques and is therefore omitted.
3.3 TD-SEP as a special case
So far we did not discuss the freedom of choosing different kernels. In general, an RBF kernel will
lead to an effective criterion for measuring the dependence between random variables, especially in
time-series applications. However, we could also choose linear kernels for k and l, for instance, to
obtain computational savings.
For a specific choice of cliques and kernels, we can recover the work of [18] as a special case of our
framework. In [18], for two centered scalar time series x and y, the contrast function is chosen as
the sum of same-time and time-lagged cross-covariance $\mathbf{E}[x_t y_t] + \mathbf{E}[x_t y_{t+\tau}]$. Using our framework, two types of cliques, $(x_t, y_t)$ and $(x_t, y_{t+\tau})$, are considered in the corresponding graphical model. Furthermore, we use a joint kernel of the form
$$\langle x_s, x_t \rangle \langle y_s, y_t \rangle + \langle x_s, x_t \rangle \langle y_{s+\tau}, y_{t+\tau} \rangle, \tag{14}$$
which leads to the estimator of structured HSIC: $I(x, y) = \frac{1}{T}(\operatorname{tr} HKHL + \operatorname{tr} HKHL_\tau)$. Here $L_\tau$ denotes the linear covariance matrix for the time-lagged y signals. For scalar time series, basic algebra shows that $\operatorname{tr} HKHL$ and $\operatorname{tr} HKHL_\tau$ are the estimators of $\mathbf{E}[x_t y_t]$ and $\mathbf{E}[x_t y_{t+\tau}]$ respectively (up to a multiplicative constant).
Further generalization can incorporate several time-lagged cross-covariances into the contrast function. For instance, TD-SEP [18] uses a range of time lags from 1 to $\tau$. That said, by using a nonlinear
kernel we are able to obtain better contrast functions, as we will show in our experiments.
3.4 Grid Structured Data
Structured HSIC can go beyond sequence data and be applied to more general dependence structures
such as 2-D grids for images. Figure 1d shows the corresponding graphical model. Here each node
of the graphical model is indexed by two subscripts, i for row and j for column. In the simplest
case, the maximal cliques are
$$\mathcal{C} = \{(x_{ij}, x_{i+1,j}, x_{i,j+1}, x_{i+1,j+1}, y_{ij}, y_{i+1,j}, y_{i,j+1}, y_{i+1,j+1})\}_{ij}.$$
In other words, we are using a cross-shaped stencil to connect vertices. Provided that the kernel v can
also be decomposed into the product of k and l, then a biased estimate of the independence measure
can be again formulated as tr HKHL up to a multiplicative constant. The statistical analysis of
U-statistics for stationary Markov random fields is highly nontrivial. We are not aware of results
equivalent to those discussed in Section 3.2.
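For the 2-D case, the clique construction amounts to extracting overlapping 2×2 patches whose Gram matrices then play the roles of K and L in tr HKHL. The sketch below builds those patch vectors; the layout details are illustrative.

```python
import numpy as np

# Sketch: build one vector per maximal clique of the 2-D grid model, i.e.
# the 2x2 patch (x_ij, x_{i+1,j}, x_{i,j+1}, x_{i+1,j+1}); Gram matrices of
# these patch vectors play the roles of K and L in tr HKHL.

def grid_cliques(X):
    """X: (rows, cols) field; returns a ((rows-1)*(cols-1), 4) patch matrix."""
    r, c = X.shape
    return np.stack([X[i:i+2, j:j+2].ravel()
                     for i in range(r - 1) for j in range(c - 1)])
```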
4 Experiments
Having a dependence measure for structured spaces is useful for a range of applications. Analogous
to iid HSIC, structured HSIC can be applied to non-iid data in applications such as independent
component analysis [12], independence test [6], feature selection [15], clustering [14], and dimensionality reduction [13]. The fact that structured HSIC can take into account the interdependency
between observations provides us with a principled generalization of these algorithms to, e.g., time
series analysis. In this paper, we will focus on two examples: independent component analysis,
where we wish to minimize the dependence, and time series segmentation, where we wish to maximize the dependence instead. Two simple illustrative experiments on independence test for XOR
binary sequence and Gaussian process can be found in the longer version of this paper.
4.1 Independent Component Analysis
In independent component analysis (ICA), we observe a time series of vectors u that corresponds to
a linear mixture u = As of n mutually independent sources s (each entry in the source vector here
is a random process, and depends on its past values; examples include music and EEG time series).
Based on the series of observations u, we wish to recover the sources using only the independence
assumption on s. Note that sources can only be recovered up to scaling and permutation. The core
of ICA is a contrast function that measures the independence of the estimated sources. An ICA
algorithm searches over the space of mixing matrix A such that this contrast function is minimized.
Thus, we propose to use structured HSIC as the contrast function for ICA. By incorporating time
lagged variables in the cliques, we expect that structured HSIC can better deal with the non-iid nature
of time series. In this respect, we generalize the TD-SEP algorithm [18], which implements this idea
using a linear kernel on the signal. Thus, we address the question of whether correlations between
higher order moments, as encoded using non-linear kernels, can improve the performance of TDSEP on real data.
Table 1: Median performance of ICA on music using HSIC, TDSEP, and structured HSIC. In the top
row, the number n of sources and m of samples are given. In the second row, the number of time lags
$\tau$ used by TDSEP and structured HSIC are given: thus the observation vectors $x_t, x_{t-1}, \ldots, x_{t-\tau}$
were compared. The remaining rows contain the median Amari divergence (multiplied by 100) for
the three methods tested. The original HSIC method does not take into account time dependence
($\tau = 0$), and returns a single performance number. Results are in all cases averaged over 136
repetitions: for two sources, this represents all possible pairings, whereas for larger n the sources
are chosen at random without replacement.
Method               n = 2, m = 5000      n = 3, m = 10000     n = 4, m = 10000
lag                   1     2     3        1     2     3        1     2     3
HSIC                      1.51                 1.70                 2.68
TDSEP                1.54  1.62  1.74     1.84  1.72  1.54     2.90  2.08  1.91
Structured HSIC      1.48  1.62  1.64     1.65  1.58  1.56     2.65  2.12  1.83
Data Following the settings of [7, Section 5.5], we unmixed various musical sources, combined
using a randomly generated orthogonal matrix A (since optimization over the orthogonal part of
a general mixing matrix is the more difficult step in ICA). We considered mixtures of two to four
sources, drawn at random without replacement from 17 possibilities. We used the sum of pairwise
dependencies as the overall contrast function when more than two sources were present.
Methods We compared structured HSIC to TD-SEP and iid HSIC. While iid HSIC does not take
the temporal dependence in the signal into account, it has been shown to perform very well for
iid data [12]. Following [7], we employed a Laplace kernel, k(x, x′) = exp(−λ‖x − x′‖) with λ = 3, for both structured and iid HSIC. For both methods, we used gradient descent
over the orthogonal group with a Golden search, and low rank Cholesky decompositions of the Gram
matrices to reduce computational cost, as in [3].
Results We chose the Amari divergence as the index for comparing performance of the various
ICA methods. This is a divergence measure between the estimated and true unmixing matrices,
which is invariant to the output ordering and scaling ambiguities. A smaller Amari divergence
indicates better performance. Results are shown in Table 1. Overall, contrast functions that take
time delayed information into account perform best, although the best time lag is different when the
number of sources varies.
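For reference, one common form of the Amari index is sketched below; the exact normalization behind Table 1 is not spelled out in the text, so the scaling used here (a value in [0, 1], reported multiplied by 100) is an assumption.

```python
import numpy as np

def amari_divergence(W, A):
    # Amari index of P = W A: zero iff P is a scaled permutation matrix,
    # i.e. the sources were recovered up to ordering and scale.
    P = np.abs(W @ A)
    n = P.shape[0]
    row_term = (P.sum(axis=1) / P.max(axis=1) - 1.0).sum()
    col_term = (P.sum(axis=0) / P.max(axis=0) - 1.0).sum()
    return (row_term + col_term) / (2.0 * n * (n - 1))
```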
4.2 Time Series Clustering and Segmentation
We can also extend clustering to time series and sequences using structured HSIC. This is carried
out in a similar way to the iid case. One can formulate clustering as generating the labels y from a
finite discrete set, such that their dependence on x is maximized [14]:
    maximize_y  tr(HKHL)  subject to constraints on y.        (15)
Here K and L are the kernel matrices for x and the generated y respectively. More specifically,
assuming L_{st} := δ(y_s, y_t) for discrete labels y, we recover clustering. Relaxing the discrete labels to y_t ∈ R with bounded norm ‖y‖_2 and setting L_{st} := y_s y_t, we obtain Principal Component Analysis. This reasoning for iid data carries over to sequences by introducing additional dependence structure through the kernels: K_{st} := k(x_{s,τ}, x_{t,τ}) and L_{st} := l(y_{s,τ}, y_{t,τ}). In general, the interacting label
sequences make the optimization in (15) intractable. However, for a class of kernels l an efficient
decomposition can be found by applying a reverse convolution on k: assume that l is given by
    l(y_{s,τ}, y_{t,τ}) = \sum_{u,v=0}^{τ} \tilde{l}(y_{s+u}, y_{t+v}) M_{uv},        (16)

where M ∈ R^{(τ+1)×(τ+1)} with M ⪰ 0, and \tilde{l} is a base kernel between individual time points. A common choice is \tilde{l}(y_s, y_t) = δ(y_s, y_t). In this case we can rewrite tr HKHL by applying the summation over M to HKH, i.e.,

    tr HKHL = \sum_{s,t=1}^{T} [HKH]_{st} \sum_{u,v=0}^{τ} \tilde{l}(y_{s+u}, y_{t+v}) M_{uv}
            = \sum_{s,t=1}^{T} \Big( \underbrace{\sum_{u,v=0,\ s-u,\,t-v ∈ [1,T]}^{τ} M_{uv} [HKH]_{s-u,t-v}}_{:= \tilde{K}_{st}} \Big) \tilde{l}(y_s, y_t).        (17)
Table 2: Segmentation errors by various methods on the four studied time series.

  Method                Swimming I   Swimming II   Swimming III   BCI
  structured HSIC       99.0         118.5         108.6          111.5
  spectral clustering   125          212.3         143.9          162
  HMM                   153.2        120           150            168
This means that we may apply the matrix M to HKH and thereby decouple the dependency within y. Denote the convolution by K̃ = [HKH] ⋆ M. Consequently, using K̃ we can directly apply (15) to time series and sequence data. In practice, approximate algorithms such as incomplete Cholesky decomposition are needed to efficiently compute K̃.
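A minimal sketch of this convolution step is given below; it applies M to HKH exactly as in the bracketed term of (17), using an explicit O(T²τ²) loop for clarity. The function name and the dense-matrix assumption are ours; at scale one would combine this with incomplete Cholesky as noted above.

```python
import numpy as np

def convolve_hkh(HKH, M):
    # K_tilde[s, t] = sum_{u,v} M[u, v] * HKH[s - u, t - v], restricted to
    # index pairs that stay inside the sequence, i.e. the term defining
    # K_tilde in Eq. (17).
    T = HKH.shape[0]
    K_tilde = np.zeros_like(HKH)
    for u in range(M.shape[0]):
        for v in range(M.shape[1]):
            K_tilde[u:, v:] += M[u, v] * HKH[:T - u, :T - v]
    return K_tilde
```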
Datasets We study two datasets in this experiment. The first dataset is collected by the Australian
Institute of Sports (AIS) from a 3-channel orientation sensor attached to a swimmer. The three time
series we used in our experiment have the following configurations: T = 23000 time steps with 4
laps; T = 47000 time steps with 16 laps; and T = 67000 time steps with 20 laps. The task is to
automatically find the starting and finishing time of each lap based on the sensor signals. We treated
this problem as a segmentation problem. Since the dataset contains 4 different styles of swimming, we used 6 as the number of clusters (there are 2 additional clusters for starting and finishing a lap). The second dataset is brain-computer interface data (data IVb of the Berlin BCI group¹). It contains EEG signals collected while a subject was performing three types of cued imagination. Furthermore, the relaxation period between two imaginations is also recorded in the EEG. Including the relaxation
period, the dataset consists of T = 10000 time points with 16 different segments. The task is to
automatically detect the start and end of an imagination. We used 4 clusters for this problem.
Methods We compared three algorithms: structured HSIC for clustering, spectral clustering [10],
and HMM. For structured HSIC, we used the maximal cliques of (x_t, y_{t−50:t+50}), where y is the discrete label sequence to be generated. The kernel l on y took the form of equation (16), with M ∈ R^{101×101} and M_{uv} := exp(−(u − v)^2). The kernel k on x was a Gaussian RBF: exp(−‖x − x′‖^2). As a baseline, we used spectral clustering with the same kernel k on x, and a first-order HMM with 6 hidden states and a diagonal Gaussian observation model².
Further details regarding preprocessing of the above two datasets (which is common to all algorithms
subsequently compared), parameters of algorithms and protocols of experiments, are available in the
longer version of this paper.
Results To evaluate the segmentation quality, the boundaries found by the various methods were compared to the ground truth. First, each detected boundary was matched to a true boundary, and the discrepancy between them was then added to the error. The overall error is this sum divided by the number of boundaries. Figure 2d gives an example of how to compute this error.
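A sketch of this error computation, under the assumption that each detected boundary is matched to its nearest true boundary (the exact matching rule is not fully specified in the text):

```python
import numpy as np

def segmentation_error(detected, truth):
    # Average absolute offset between each detected boundary and its
    # nearest true boundary, as illustrated in Figure 2d.
    detected, truth = np.asarray(detected), np.asarray(truth)
    return np.mean([np.abs(truth - b).min() for b in detected])
```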
According to Table 2, in all of the four time series we studied, segmentation using structured HSIC
leads to lower error compared with spectral clustering and HMM. For instance, structured HSIC reduces the segmentation error in the BCI dataset by nearly 1/3. To provide a visual feel for the improvement, we plot the true boundaries together with the segmentation results in Figures 2a–2c. Clearly, the segment boundaries produced by structured HSIC fit the ground truth better.
5 Conclusion
In this paper, we extended the Hilbert Schmidt Independence Criterion from iid data to structured
and non-iid data. Our approach is based on RKHS embeddings of distributions, and utilizes the efficient factorizations provided by the exponential family associated with undirected graphical models.
Encouraging experimental results were demonstrated on independence test, ICA, and segmentation
for time series. Further work will be done in the direction of applying structured HSIC to PCA and
feature selection on structured data.
Acknowledgements
NICTA is funded by the Australian Government's Backing Australia's Ability and Centre of Excellence programs. This work is also supported by the IST Program of the European Community, under the FP7 Network of Excellence, ICT-216886-NOE.
¹ http://ida.first.fraunhofer.de/projects/bci/competition-iii/desc-IVb.html
² http://www.torch.ch
[Figure 2 plots, panels (a)–(c): cluster label versus time step (0–10000) for structured HSIC, spectral clustering, and HMM, each overlaid with the ground truth; panel (d): schematic of the error computation.]
Figure 2: Segmentation results produced by (a) structured HSIC, (b) spectral clustering and (c) HMM. (d) An example of counting the segmentation error. The red line denotes the ground truth and the blue line the segmentation result. The error introduced for segment R1 to R1′ is a + b, while that for segment R2 to R2′ is c + d. The overall error in this example is then (a + b + c + d)/4.
References
[1] Aaronson, J., Burton, R., Dehling, H., Gilat, D., Hill, T., & Weiss, B. (1996). Strong laws for L- and U-statistics. Transactions of the American Mathematical Society, 348, 2845–2865.
[2] Altun, Y., Smola, A. J., & Hofmann, T. (2004). Exponential families for conditional random fields. In UAI.
[3] Bach, F. R., & Jordan, M. I. (2002). Kernel independent component analysis. JMLR, 3, 1–48.
[4] Besag, J. (1974). Spatial interaction and the statistical analysis of lattice systems (with discussion). J. Roy. Stat. Soc. B, 36(B), 192–326.
[5] Borovkova, S., Burton, R., & Dehling, H. (2001). Limit theorems for functionals of mixing processes with applications to dimension estimation. Transactions of the American Mathematical Society, 353(11), 4261–4318.
[6] Gretton, A., Fukumizu, K., Teo, C.-H., Song, L., Schölkopf, B., & Smola, A. (2008). A kernel statistical test of independence. Tech. Rep. 168, MPI for Biological Cybernetics.
[7] Gretton, A., Herbrich, R., Smola, A., Bousquet, O., & Schölkopf, B. (2005). Kernel methods for measuring independence. JMLR, 6, 2075–2129.
[8] Hammersley, J. M., & Clifford, P. E. (1971). Markov fields on finite graphs and lattices. Unpublished manuscript.
[9] Hosseni, S., & Jutten, C. (2003). On the separability of nonlinear mixtures of temporally correlated sources. IEEE Signal Processing Letters, 10(2), 43–46.
[10] Ng, A., Jordan, M., & Weiss, Y. (2002). On spectral clustering: Analysis and an algorithm. In NIPS.
[11] Nguyen, X., Wainwright, M. J., & Jordan, M. I. (2008). Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. In NIPS.
[12] Shen, H., Jegelka, S., & Gretton, A. (submitted). Fast kernel-based independent component analysis. IEEE Transactions on Signal Processing.
[13] Song, L., Smola, A., Borgwardt, K., & Gretton, A. (2007). Colored maximum variance unfolding. In NIPS.
[14] Song, L., Smola, A., Gretton, A., & Borgwardt, K. (2007). A dependence maximization view of clustering. In Proc. Intl. Conf. Machine Learning.
[15] Song, L., Smola, A., Gretton, A., Borgwardt, K., & Bedo, J. (2007). Supervised feature selection via dependence estimation. In ICML.
[16] Sriperumbudur, B., Gretton, A., Fukumizu, K., Lanckriet, G., & Schölkopf, B. (2008). Injective Hilbert space embeddings of probability measures. In COLT.
[17] Steinwart, I. (2002). The influence of the kernel on the consistency of support vector machines. JMLR, 2.
[18] Ziehe, A., & Müller, K.-R. (1998). TDSEP – an efficient algorithm for blind separation using time structure. In ICANN.
Scalable Algorithms for String Kernels with Inexact
Matching
Pavel P. Kuksa, Pai-Hsi Huang, Vladimir Pavlovic
Department of Computer Science,
Rutgers University, Piscataway, NJ 08854
{pkuksa,paihuang,vladimir}@cs.rutgers.edu
Abstract
We present a new family of linear time algorithms for string comparison with
mismatches under the string kernels framework. Based on sufficient statistics, our
algorithms improve theoretical complexity bounds of existing approaches while
scaling well in sequence alphabet size, the number of allowed mismatches and
the size of the dataset. In particular, on large alphabets and under loose mismatch constraints our algorithms are several orders of magnitude faster than the
existing algorithms for string comparison under the mismatch similarity measure.
We evaluate our algorithms on synthetic data and real applications in music genre
classification, protein remote homology detection and protein fold prediction. The
scalability of the algorithms allows us to consider complex sequence transformations, modeled using longer string features and larger numbers of mismatches,
leading to state-of-the-art performance with significantly reduced running times.
1 Introduction
Analysis of large scale sequential data has become an important task in machine learning and data
mining, inspired by applications such as biological sequence analysis, text and audio mining. Classification of string data, sequences of discrete symbols, has attracted particular interest and has led to
a number of new algorithms [1, 2, 3, 4]. They exhibit state-of-the-art performance on tasks such as
protein superfamily and fold prediction, music genre classification and document topic elucidation.
Classification of data in sequential domains is made challenging by the variability in the sequence
lengths, potential existence of important features on multiple scales, as well as the size of the alphabets and datasets. Typical alphabet sizes can vary widely, ranging in size from 4 nucleotides
in DNA sequences, up to thousands of words from a language lexicon for text documents. Strings
within the same class, such as the proteins in one fold or documents about politics, can exhibit wide
variability in the primary sequence content. Moreover, important datasets continue to increase in
size, easily reaching millions of sequences. As a consequence, the resulting algorithms need the
ability to efficiently handle large alphabets and datasets as well as establish measures of similarity
under complex sequence transformations in order to accurately classify the data.
A number of state-of-the-art approaches to scoring similarity between pairs of sequences in a
database rely on fixed, spectral representations of sequential data and the notion of mismatch kernels, c.f. [2, 3]. In that framework an induced representation of a sequence is typically that of
the spectra (counts) of all short substrings (k-mers) contained within a sequence. The similarity
score is established by allowing transformations of the original k-mers based on different models of
deletions, insertions and mutations. However, computing those representations efficiently for large
alphabet sizes and ?loose? similarity models can be computationally challenging. For instance,
the complexity of an efficient trie-based computation [3, 5] of the mismatch kernel between two
strings X and Y strongly depends on the alphabet size and the number of mismatches allowed as
O(k^{m+1} |Σ|^m (|X| + |Y|)) for k-mers (contiguous substrings of length k) with up to m mismatches and alphabet size |Σ|. This limits the applicability of such algorithms to simpler transformation
models (shorter k and m) and smaller alphabets, reducing their practical utility on complex real data.
As an alternative, more complex transformation models such as [2] lead to state-of-the-art predictive
performance at the expense of increased computational effort.
In this work we propose novel algorithms for modeling sequences under complex transformations
(such as multiple insertions, deletions, mutations) that exhibit state-of-the-art performance on a
variety of distinct classification tasks. In particular, we present new algorithms for inexact (e.g.
with mismatches) string comparison that improve currently known time bounds for such tasks and
show orders-of-magnitude running time improvements. The algorithms rely on an efficient implicit
computation of mismatch neighborhoods and k-mer statistic on sets of sequences. This leads to
a mismatch kernel algorithm with complexity O(c_{k,m} (|X| + |Y|)), where c_{k,m} is independent of the alphabet Σ. The algorithm can be easily generalized to other families of string kernels, such as
the spectrum and gapped kernels [6], as well as to semi-supervised settings such as the neighborhood kernel of [7]. We demonstrate the benefits of our algorithms on many challenging classification problems, such as detecting homology (evolutionary similarity) of remotely related proteins,
recognizing protein fold, and performing classification of music samples. The algorithms display
state-of-the-art classification performance and run substantially faster than existing methods. Low
computational complexity of our algorithms opens the possibility of analyzing very large datasets
under both fully-supervised and semi-supervised setting with modest computational resources.
2 Related Work
Over the past decade, various methods have been proposed to solve the string classification problem, including generative approaches, such as HMMs, and discriminative approaches. Among the discriminative approaches, kernel-based [8] machine learning methods provide the most accurate results on many sequence analysis tasks [2, 3, 4, 6].
Sequence matching is frequently based on common co-occurrence of exact sub-patterns (k-mers,
features), as in spectrum kernels [9] or substring kernels [10]. Inexact comparison in this framework
is typically achieved using different families of mismatch [3] or profile [2] kernels. Both spectrum-k
and mismatch(k,m) kernel directly extract string features based on the observed sequence, X. On
the other hand, the profile kernel, proposed by Kuang et al. in [2], builds a profile [11] PX and
uses a similar |?|k -dimensional representation, derived from PX . Constructing the profile for each
sequence may not be practical in some application domains, since the size of the profile is dependent
on the size of the alphabet set. While for bio-sequences |Σ| = 4 or 20, for music or text classification |Σ| can potentially be very large, on the order of tens of thousands of symbols. In this case, a very
simple semi-supervised learning method, the sequence neighborhood kernel, can be employed [7]
as an alternative to lone k-mers with many mismatches.
The most efficient available trie-based algorithms [3, 5] for mismatch kernels have a strong dependency on the size of alphabet set and the number of allowed mismatches, both of which need to be
restricted in practice to control the complexity of the algorithm. Under the trie-based framework, the
list of k-mers extracted from given strings is traversed in a depth-first search with branches corresponding to all possible ? ? ?. Each leaf node at depth k corresponds to a particular k-mer feature
(either exact or inexact instance of the observed exact string features) and contains a list of matching
features from each string. The kernel matrix is updated at leaf nodes with corresponding counts.
The complexity of the trie-based algorithm for mismatch kernel computation for two strings X and
Y is O(k m+1 |?|m (|X| + |Y |)) [3]. The algorithm complexity depends on the size of ? since during a trie traversal, possible substitutions are drawn from ? explicitly; consequently, to control the
complexity of the algorithm we need to restrict the number of allowed mismatches (m), as well as
the alphabet size (|?|). Such limitations hinder wide application of the powerful computational tool,
as in biological sequence analysis, mutation, insertions and deletions frequently co-occur, hence
establishing the need to relax the parameter m; on the other hand, restricting the size of the alphabet sets strongly limits applications of the mismatch model. While other efficient string algorithms
exist, such as [6, 12] and the suffix-tree based algorithms in [10], they do not readily extend to the
mismatch framework. In this study, we aim to extend the works presented in [6, 10] and close the
existing gap in theoretical complexity between the mismatch and other fast string kernels.
3 Combinatorial Algorithm
In this section we will develop our first improved algorithm for kernel computations with mismatches, which serves as a starting point for our main algorithm in Section 4.
3.1 Spectrum and Mismatch Kernels Definition
Given a sequence X with symbols from alphabet Σ, the spectrum-k kernel [9] and the mismatch(k,m) kernel [3] induce the following |Σ|^k-dimensional representation for the sequence:

    Φ(X) = \Big( \sum_{α ∈ X} I_m(α, γ) \Big)_{γ ∈ Σ^k},        (1)

where I_m(α, γ) = 1 if α ∈ N_{k,m}(γ), and N_{k,m}(γ) is the mutational neighborhood, the set of all k-mers that differ from γ by at most m mismatches. Note that, by definition, for spectrum kernels, m = 0.
The mismatch kernel is then defined as

    K(X, Y | k, m) = \sum_{γ ∈ Σ^k} c_m(γ|X) c_m(γ|Y),        (2)

where c_m(γ|X) = \sum_{α ∈ X} I_m(α, γ) is the number of times a contiguous substring of length k (k-mer) γ occurs in X with no more than m mismatches.
3.2 Intersection-based Algorithm
Our first algorithm presents a novel way of performing local inexact string matching with the following key properties:
a. parameter independent: the complexity is independent of |Σ| and the mismatch parameter m
b. in-place: only uses min(2m, k) + 1 extra space for an auxiliary look-up table
c. linear complexity: in k, the length of the substring (as opposed to exponential k^m)
To develop our first algorithm, we first write the mismatch kernel (Equation 2) in an equivalent form:
    K(X, Y | k, m)
      = \sum_{i_x=1}^{n_x-k+1} \sum_{i_y=1}^{n_y-k+1} \sum_{a ∈ Σ^k} I_m(a, x_{i_x:i_x+k-1}) I_m(a, y_{i_y:i_y+k-1})        (3)
      = \sum_{i_x=1}^{n_x-k+1} \sum_{i_y=1}^{n_y-k+1} |N(x_{i_x:i_x+k-1}, m) ∩ N(y_{i_y:i_y+k-1}, m)|        (4)
      = \sum_{i_x=1}^{n_x-k+1} \sum_{i_y=1}^{n_y-k+1} I(x_{i_x:i_x+k-1}, y_{i_y:i_y+k-1}),        (5)
where I(a, b) is the number of induced (neighboring) k-mers common between a, b (i.e. I(a, b) is
the size of intersection of mismatch neighborhoods of a and b). The key observation here is that if we
can compute I(a, b) efficiently then the kernel evaluation problem reduces to performing pairwise
comparison based on all pairs of observed k-mers, a and b, in the two sequences. The complexity
for such procedure is O(c|X||Y |) where c is the cost for evaluating I(a, b) for any given k-mers a
and b. In fact, for fixed k, m and ?, such quantity depends only on the Hamming distance d(a, b)
(i.e. the number of mismatches) and can be evaluated in advance, as we will show in Section 3.3. As
a result, the intersection values can be looked up in a table in constant time during matching. Note
the summation now shows no explicit dependency on |Σ| and m. In summary, given two strings X
and Y , the algorithm (Algorithm 1) compares pairs of observed k-mers from X and Y and computes
the mismatch kernel according to Equation 5.
Algorithm 1. (Hamming-Mismatch) Mismatch algorithm based on Hamming distance
Input: strings X, Y, |X| = n_x, |Y| = n_y, parameters k, m, lookup table I for intersection sizes
Evaluate kernel using Equation 5:
    K(X, Y | k, m) = \sum_{i_x=1}^{n_x-k+1} \sum_{i_y=1}^{n_y-k+1} I(d(x_{i_x:i_x+k-1}, y_{i_y:i_y+k-1}) | k, m),
where I(d) is the intersection size for distance d
Output: Mismatch kernel value K(X, Y | k, m)
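A direct sketch of Algorithm 1 follows (the function name is ours); it assumes `I_table[d]` holds the precomputed intersection size for Hamming distance d, with entries only up to min(2m, k) since I(d) = 0 beyond that.

```python
def mismatch_kernel_hamming(x, y, k, I_table):
    # Algorithm 1: pairwise comparison of all observed k-mers, replacing
    # explicit neighborhood generation with a table lookup on distance.
    x_kmers = [x[i:i + k] for i in range(len(x) - k + 1)]
    y_kmers = [y[j:j + k] for j in range(len(y) - k + 1)]
    kernel = 0
    for a in x_kmers:
        for b in y_kmers:
            d = sum(ca != cb for ca, cb in zip(a, b))  # Hamming distance
            if d < len(I_table):                       # I(d) = 0 for d > min(2m, k)
                kernel += I_table[d]
    return kernel
```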
The overall complexity of the algorithm is O(k n_x n_y) since the Hamming distances between all
k-mer pairs observed in X and Y need to be known. In the following section, we discuss how to
efficiently compute the size of the intersection.
3.3 Intersection Size: Closed Form Solution
The number of neighboring k-mers shared by two observed k-mers a and b can be directly computed, in a closed-form, from the Hamming distance d(a, b) for fixed k and m, requiring no explicit
traversal of the k-mer space as in the case of trie-based computations. We first consider the case
a = b (i.e. d(a, b) = 0). The intersection size corresponds to the size of the (k, m)-mismatch neighborhood, i.e. I(a, b) = |N_{k,m}| = \sum_{i=0}^{m} \binom{k}{i} (|Σ| − 1)^i. For higher values of the Hamming distance d, the key observation is that for fixed Σ, k, and m, given any distance d(a, b) = d, I(a, b) is
also a constant, regardless of the mismatch positions. As a result, intersection values can always be
pre-computed once, stored and looked up when necessary. To illustrate this, we show two examples
for m = 1, 2:
    I(a, b) (m = 1) =
        |N_{k,1}|,                                     d(a, b) = 0
        |Σ|,                                           d(a, b) = 1
        2,                                             d(a, b) = 2

    I(a, b) (m = 2) =
        |N_{k,2}|,                                     d(a, b) = 0
        1 + k(|Σ| − 1) + (k − 1)(|Σ| − 1)^2,           d(a, b) = 1
        1 + 2(k − 1)(|Σ| − 1) + (|Σ| − 1)^2,           d(a, b) = 2
        6(|Σ| − 1),                                    d(a, b) = 3
        \binom{4}{2},                                  d(a, b) = 4
In general, the intersection size can be found in a weighted form \sum_i w_i (|Σ| − 1)^i and can be pre-computed in constant time.
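For instance, the lookup table for m = 1 can be precomputed directly from the closed forms above (a short sketch; `comb` is Python's binomial coefficient):

```python
from math import comb

def neighborhood_size(k, m, sigma):
    # |N_{k,m}| = sum_{i=0}^{m} C(k, i) * (sigma - 1)^i
    return sum(comb(k, i) * (sigma - 1) ** i for i in range(m + 1))

def intersection_table_m1(k, sigma):
    # I(d) for the (k,1)-mismatch kernel: d = 0, 1, 2 (zero beyond 2m = 2).
    return [neighborhood_size(k, 1, sigma), sigma, 2]
```

This table plugs directly into the Hamming-distance algorithm sketched above.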
4 Mismatch Algorithm based on Sufficient Statistics
In this section, we further develop ideas from the previous section and present an improved mismatch
algorithm that does not require pairwise comparison of the k-mers between two strings and depends
linearly on sequence length. The crucial observation is that in Equation 5, I(a, b) is non-zero
only when d(a, b) ? 2m. As a result, the kernel computed in Equation 5 is incremented only by
min(2m, k) + 1 distinct values, corresponding to min(2m, k) + 1 possible intersection sizes. We
then can re-write the equation in the following form:
    K(X, Y | m, k) = \sum_{i_x=1}^{n_x-k+1} \sum_{i_y=1}^{n_y-k+1} I(x_{i_x:i_x+k-1}, y_{i_y:i_y+k-1}) = \sum_{i=0}^{min(2m,k)} M_i I_i,        (6)
where Ii is the size of the intersection of k-mer mutational neighborhood for Hamming distance
i, and Mi , the number of observed k-mer pairs in X and Y having Hamming distance i. The
problem of computing the kernel has been further reduced to a single summation. We have shown
in Section 3.3 that given any i, we can compute Ii in advance. The crucial task now becomes
computing the sufficient statistics Mi efficiently. In the following, we will show how to compute the
mismatch statistics {Mi } in O(ck,m (nx + ny )) time, where ck,m is a constant that does not depend
on the alphabet size. We formulate the task of inferring matching statistics {Mi } as the following
auxiliary counting problem:
Mismatch Statistic Counting: Given a set of n k-mers from two strings X and Y ,
for each Hamming distance i = 0, 1, ..., min(2m, k), output the number of k-mer
pairs (a, b), a ? X, b ? Y with d(a, b) = i.
In this problem it is not necessary to know the distance between each pair of k-mers; one only
needs to know the number of pairs (Mi ) at each distance i. We show next that the above problem
of computing matching statistics can be solved in linear time (in the number n of k-mers) using
multiple rounds of counting sort as a sub-algorithm.
We first consider the problem of computing the number of k-mer pairs at distance 0, i.e. the number of
exact matches. In this case, we can apply counting sort to order all k-mers lexicographically and
find the number of exact matches by scanning the sorted list. The counting then requires linear
O(kn) time. Efficient direct computation of Mi for any i > 0 is difficult (requires quadratic time);
we take another approach and first compute inexact cumulative mismatch statistics, C_i = M_i + \sum_{j=0}^{i-1} \binom{k-j}{i-j} M_j, that overcount the number of k-mer pairs at a given distance i, as follows. Consider two k-mers a and b. Pick i positions and remove from the k-mers the symbols at the corresponding positions to obtain (k−i)-mers a′ and b′. The key observation is that d(a′, b′) = 0 implies d(a, b) ≤ i. As a result, given n k-mers, we can compute the cumulative mismatch statistics C_i in linear time using \binom{k}{i} rounds of counting sort on (k−i)-mers. The exact mismatch statistics M_i can then be obtained from C_i by subtracting the exact counts to compensate for overcounting as follows:
    M_i = C_i − \sum_{j=0}^{i-1} \binom{k-j}{i-j} M_j,   i = 0, . . . , min(min(2m, k), k − 1)        (7)
The last mismatch statistic M_k can be computed by subtracting the preceding statistics M_0, . . . , M_{k−1} from the total number of possible matches:

    M_k = T − \sum_{j=0}^{k-1} M_j,   where T = (n_x − k + 1)(n_y − k + 1).        (8)
Our algorithm for mismatch kernel computations based on sufficient statistics is summarized in Algorithm 2. The overall complexity of the algorithm is O(n c_{k,m}) with the constant c_{k,m} = \sum_{l=0}^{min(2m,k)} \binom{k}{l} (k − l), independent of the size of the alphabet set, where \binom{k}{l} is the number of rounds of counting sort for evaluating the cumulative mismatch statistics C_l.
Algorithm 2. (Mismatch-SS) Mismatch kernel algorithm based on Sufficient Statistics
Input: strings X, Y, |X| = n_x, |Y| = n_y, parameters k, m, pre-computed intersection values I
1. Compute the min(2m, k) cumulative matching statistics, C_i, using counting sort
2. Compute the exact matching statistics, M_i, using Equation 7
3. Evaluate the kernel using Equation 6: K(X, Y | m, k) = \sum_{i=0}^{min(2m,k)} M_i I_i
Output: Mismatch kernel value K(X, Y | k, m)
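The following sketch mirrors Algorithm 2, with one simplification: hashing of the (k−i)-mers stands in for the paper's counting sort (the constant factors differ but the sufficient-statistics logic is identical). All function names are ours.

```python
from collections import Counter
from itertools import combinations
from math import comb

def cumulative_stat(x_kmers, y_kmers, k, i):
    # C_i: pairs that match exactly after deleting some i positions; a pair
    # at Hamming distance j <= i is counted C(k-j, i-j) times, cf. Eq. (7).
    total = 0
    for removed in combinations(range(k), i):
        keep = [p for p in range(k) if p not in removed]
        cx = Counter(tuple(a[p] for p in keep) for a in x_kmers)
        cy = Counter(tuple(b[p] for p in keep) for b in y_kmers)
        total += sum(cnt * cy[key] for key, cnt in cx.items())
    return total

def mismatch_kernel_ss(x, y, k, m, I_table):
    # Algorithm 2 (Mismatch-SS): recover M_i from C_i via Eqs. (7)-(8),
    # then return sum_i M_i * I_i as in Eq. (6).
    xk = [x[i:i + k] for i in range(len(x) - k + 1)]
    yk = [y[j:j + k] for j in range(len(y) - k + 1)]
    top = min(2 * m, k)
    M = []
    for i in range(min(top, k - 1) + 1):
        C_i = cumulative_stat(xk, yk, k, i)
        M.append(C_i - sum(comb(k - j, i - j) * M[j] for j in range(i)))
    if top == k:                      # Eq. (8): last statistic from the total
        M.append(len(xk) * len(yk) - sum(M))
    return sum(M[i] * I_table[i] for i in range(top + 1))
```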
5 Extensions
Our algorithmic approach can also be applied to a variety of existing string kernels, leading to very
efficient and simple algorithms that could benefit many applications.
Spectrum Kernels. The spectrum kernel [9] in our notation is the first sufficient statistic M_0, i.e. K(X, Y | k) = M_0, which can be computed in k rounds of counting sort (i.e. in O(kn) time).
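In code, M_0 is a single joint count of exactly matching k-mers (a sketch; hashing again stands in for counting sort, which gives the same value with the alphabet-independent O(kn) guarantee):

```python
from collections import Counter

def spectrum_kernel(x, y, k):
    # M_0: number of exactly matching k-mer pairs between x and y.
    cx = Counter(x[i:i + k] for i in range(len(x) - k + 1))
    cy = Counter(y[j:j + k] for j in range(len(y) - k + 1))
    return sum(cnt * cy[g] for g, cnt in cx.items())
```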
Gapped Kernels. The gapped kernels [6] measure similarity between strings X and Y based on the co-occurrence of gapped instances g, |g| = k + m > k, of k-long substrings:

    K(X, Y | k, g) = \sum_{γ ∈ Σ^k} \Big( \sum_{g ⊆ X, |g|=k+m} I(γ, g) \Big) \Big( \sum_{g ⊆ Y, |g|=k+m} I(γ, g) \Big),        (9)

where I(γ, g) = 1 when γ is a subsequence of g. Similar to the algorithmic approach for extracting cumulative mismatch statistics in Algorithm 2, to compute the gapped(g,k) kernel we perform a single round of counting sort over the k-mers contained in the g-mers. This gives a very simple and efficient O(\binom{g}{k} k n) time algorithm for gapped kernel computations.
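A small sketch of this counting pass (the names are ours): every g-long window contributes each of its distinct k-subsequences once, matching the indicator I(γ, g) in (9).

```python
from collections import Counter
from itertools import combinations

def gapped_features(s, k, g):
    # For each g-mer of s, add each of its distinct k-subsequences once.
    feats = Counter()
    for i in range(len(s) - g + 1):
        window = s[i:i + g]
        subseqs = {tuple(window[p] for p in positions)
                   for positions in combinations(range(g), k)}
        feats.update(subseqs)
    return feats

def gapped_kernel(x, y, k, g):
    fx, fy = gapped_features(x, k, g), gapped_features(y, k, g)
    return sum(cnt * fy[f] for f, cnt in fx.items())
```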
Wildcard kernels. The wildcard(k,m) kernel [6] in our notation is the sum of the cumulative statistics, K(X, Y | k, m) = \sum_{i=0}^{m} C_i, i.e. it can be computed in \sum_{i=0}^{m} \binom{k}{i} rounds of counting sort, giving a simple and efficient O(\sum_{i=0}^{m} \binom{k}{i} (k − i) n) algorithm.
Spatial kernels. The spatial(k,t,d) kernel [13] can be computed by sorting kt-mers iteratively for
every arrangement of t k-mers spatially constrained by distance d.
Neighborhood Kernels. The sequence neighborhood kernels [7] proved to be a powerful tool in
many sequence analysis tasks. The method uses the unlabeled data to form a set of neighbors for
train/test sequences and measure similarity of two sequences X and Y using their neighborhoods:
    K(X, Y) = \sum_{x ∈ N(X)} \sum_{y ∈ N(Y)} K(x, y),        (10)
where N (X) is the sequence neighborhood that contains neighboring sequences from the unlabeled
data set, including X itself. Note the kernel value, if computed directly using Equation 10, will incur
quadratic complexity in the size of the neighborhoods. Similar to the single string case, using our
algorithmic approach, to compute the neighborhood kernel (over the string sets), we can jointly sort
the observed k-mers in N (X) and N (Y ) and apply the desired kernel evaluation method (spectrum,
mismatch, or gapped). Under this setting, the neighborhood kernel can be evaluated in time linear to
the neighborhood size. This leads to very efficient algorithms for computing sequence neighborhood
kernels even for very large datasets, as we will show in the experimental section.
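In code, the semantics of (10) are just a double sum; the speedup described here comes from pooling the k-mers of all sequences in N(X) and N(Y) into two bags and running one joint counting pass (e.g. with the Mismatch-SS sketch above), rather than looping as in this naive reference version:

```python
def neighborhood_kernel(neighbors_x, neighbors_y, base_kernel):
    # Direct evaluation of Eq. (10); quadratic in the neighborhood sizes.
    return sum(base_kernel(x, y) for x in neighbors_x for y in neighbors_y)
```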
6 Evaluation
We study the performance of our algorithms, both in running time and predictive accuracy, on synthetic data and standard benchmark datasets for protein sequence analysis and music genre classification. The reduced running time requirements of our algorithms open the possibility to consider
"looser" mismatch measures with larger k and m. The results presented here demonstrate that such
mismatch kernels with larger (k, m) can lead to state-of-the-art predictive performance even when
compared with more complex models such as [2].
We use three standard benchmark datasets to compare with previously published results: the SCOP
dataset (7329 sequences with 2862 labeled) [7] for remote protein homology detection, the Ding-Dubchak dataset¹ (27 folds, 694 sequences) [14, 15] for protein fold recognition, and music genre data² (10 classes, 1000 sequences, |Σ| = 1024) [16] for multi-class genre prediction. For protein sequence classification under the semi-supervised setting, we also use the Protein Data Bank (PDB, 17,232 sequences), the Swiss-Prot (101,602 sequences), and the non-redundant (NR) databases
as the unlabeled datasets, following the setup of [17]. All experiments are performed on a single
2.8GHz CPU. The datasets used in our experiments and the supplementary data/code are available
at http://seqam.rutgers.edu/new-inexact/new-inexact.html.
6.1 Running time analysis
We compare the running time of our algorithm on synthetic and real data with the trie-based computations. For synthetic data, we generate strings of length n = 105 over alphabets of different
sizes and measure the running time of the trie-based and our sufficient statistics based algorithms
for evaluating mismatch string kernel. Figure 1 shows relative running time Ttrie /Tss , in logarithmic scale, of the mismatch-trie and mismatch-SS as a function of the alphabet size. As can be seen
from the plot, our algorithm demonstrates several orders of magnitude improvements, especially for
large alphabet sizes.
Table 1 compares the running times of our algorithm and the trie-based algorithm on different real datasets (proteins, DNA, text, music) for a single kernel entry (pair of strings) computation. We observe speed improvements ranging from 100 to 10^6 times depending on the alphabet size.
We also measure the running time for full 7329-by-7329 mismatch(5,2) kernel matrix computations for SCOP dataset under the supervised setting. The running time of our algorithm is 1525
seconds compared to 196052 seconds for the trie-based computations. The obtained speed-up of
128 times is as expected from the theoretical analysis (our algorithm performs 31 counting-sort
iterations in total over 5-, 4-, 3-, 2-, and 1- mers, which gives the running time ratio of approximately 125 when compared to the trie-based complexity). We observe similar improvements under
¹ http://ranger.uta.edu/~chqding/bioinfo.html
² http://opihi.cs.uvic.ca/sound/genres
[Figure 1 plot: relative running time T_trie/T_ss on a logarithmic scale (10^0 to 10^5) versus alphabet size (100 to 1000).]

Figure 1: Relative running time T_trie/T_ss (in logarithmic scale) of the mismatch-trie and mismatch-SS as a function of the alphabet size (mismatch(5,1) kernel, n = 10^5).

Table 1: Running time (in seconds) for kernel computation between two strings on real data

              long protein   protein   dna      text     music
  n           36672          116       570      29224    6892
  |Σ|         20             20        4        20398    1024
  (5,1)-trie  1.6268         0.0212    0.0260   242      526.8
  (5,1)-ss    0.1987         0.0052    0.0054   0.0178   0.0331
  time ratio  8              4         5        10^6     16,000
  (5,2)-trie  31.5519        0.2918    0.4800   -        -
  (5,2)-ss    0.2957         0.0067    0.0064   0.0649   0.0941
  time ratio  100            44        75       -        -
the semi-supervised setting for neighborhood mismatch kernels; for example, computing a smaller
neighborhood mismatch(5,2) kernel matrix for the labeled sequences only (2862-by-2862 matrix)
using the Swiss-Prot unlabeled dataset takes 1,480 seconds with our algorithm, whereas performing
the same task with the trie-based algorithm takes about 5 days.
6.2 Empirical performance analysis
In this section we show predictive performance results for several sequence analysis tasks using our
new algorithms. We consider the tasks of multi-class music genre classification [16], with results
in Table 2, and the protein remote homology (superfamily) prediction [9, 2, 18] in Table 3. We also
include preliminary results for multi-class fold prediction [14, 15] in Table 4.
On the music classification task, we observe significant improvements in accuracy for larger number
of mismatches. The obtained error rate (35.6%) on this dataset compares well with the state-of-the-art results based on the same signal representation in [16]. The remote protein homology detection,
as evident from Table 3, clearly benefits from larger number of allowed mismatches because the
remotely related proteins are likely to be separated by multiple mutations or insertions/deletions.
For example, we observe improvement in the average ROC-50 score from 41.92 to 52.00 under a
fully-supervised setting, and similar significant improvements in the semi-supervised settings. In
particular, the result on the Swiss-Prot dataset for the (7, 3)-mismatch kernel is very promising and
compares well with the best results of the state-of-the-art, but computationally more demanding,
profile kernels [2]. The neighborhood kernels proposed by Weston et al. have already shown very
promising results in [7], though slightly worse than the profile kernel. However, using our new
algorithm that significantly improves the speed of the neighborhood kernels, we show that with
larger number of allowed mismatches the neighborhood can perform even better than the stateof-the-art profile kernel: the (7,3)-mismatch neighborhood achieves the average ROC-50 score of
86.32, compared to 84.00 of the profile kernel on the Swiss-Prot dataset. This is an important result
that addresses a main drawback of the neighborhood kernels, the running time [7, 2].
Table 2: Classification performance on music genre classification (multi-class)

  Method           Error
  Mismatch (5,1)   42.6 ± 6.34
  Mismatch (5,2)   35.6 ± 4.99

Table 3: Classification performance on protein remote homology prediction

  dataset               mismatch (5,1)    mismatch (5,2)    mismatch (7,3)
                        ROC     ROC50     ROC     ROC50     ROC     ROC50
  SCOP (supervised)     87.75   41.92     90.67   49.09     91.31   52.00
  SCOP (unlabeled)      90.93   67.20     91.42   69.35     92.27   73.29
  SCOP (PDB)            97.06   80.39     97.24   81.35     97.93   84.56
  SCOP (Swiss-Prot)     96.73   81.05     97.05   82.25     97.78   86.32
For multi-class protein fold recognition (Table 4), we similarly observe improvements in performance for larger numbers of allowed mismatches. The balanced error of 25% for the (7,3)-mismatch
neighborhood kernel using Swiss-Prot compares well with the best error rate of 26.5% for the state-
of-the-art profile kernel with adaptive codes in [15] that used a much larger non-redundant (NR)
dataset. Using NR, the balanced error further reduces to 22.5% for the (7,3)-mismatch.
Table 4: Classification performance on fold prediction (multi-class)

                             Top 5   Balanced  Top 5              Top 5             Top 5               Top 5
  Method            Error    Error   Error     Balanced  Recall   Recall  Precision Precision  F1       F1
                                               Error
  Mismatch (5,1)    51.17    22.72   53.22     28.86     46.78    71.14   90.52     95.25      61.68    81.45
  Mismatch (5,2)    42.30    19.32   44.89     22.66     55.11    77.34   67.36     84.77      60.62    80.89
  Mismatch (5,2)*   27.42    14.36   24.98     13.36     75.02    86.64   79.01     91.02      76.96    88.78
  Mismatch (7,3)    43.60    19.06   47.13     22.76     52.87    77.24   84.65     91.95      65.09    83.96
  Mismatch (7,3)*   26.11    12.53   25.01     12.57     74.99    87.43   85.00     92.78      79.68    90.02
  Mismatch (7,3)†   23.76    11.75   22.49     12.14     77.59    87.86   84.90     91.99      81.04    89.88

  * used the Swiss-Prot sequence database; † used the NR (non-redundant) database

7 Conclusions
We presented new algorithms for inexact matching of the discrete-valued string representations that
reduce computational complexity of current algorithms, demonstrate state-of-the-art performance
and significantly improved running times. This improvement makes the string kernels with approximate but looser matching a viable alternative for practical tasks of sequence analysis. Our algorithms
work with large databases in supervised and semi-supervised settings and scale well in the alphabet
size and the number of allowed mismatches. As a consequence, the proposed algorithms can be
readily applied to other challenging problems in sequence analysis and mining.
References
[1] Jianlin Cheng and Pierre Baldi. A machine learning information retrieval approach to protein fold recognition. Bioinformatics, 22(12):1456–1463, June 2006.
[2] Rui Kuang, Eugene Ie, Ke Wang, Kai Wang, Mahira Siddiqi, Yoav Freund, and Christina S. Leslie. Profile-based string kernels for remote homology detection and motif extraction. In CSB, pages 152–160, 2004.
[3] Christina S. Leslie, Eleazar Eskin, Jason Weston, and William Stafford Noble. Mismatch string kernels for SVM protein classification. In NIPS, pages 1417–1424, 2002.
[4] Sören Sonnenburg, Gunnar Rätsch, and Bernhard Schölkopf. Large scale genomic sequence SVM classifiers. In ICML '05, pages 848–855, New York, NY, USA, 2005.
[5] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[6] Christina Leslie and Rui Kuang. Fast string kernels using inexact matching for protein sequences. J. Mach. Learn. Res., 5:1435–1455, 2004.
[7] Jason Weston, Christina Leslie, Eugene Ie, Dengyong Zhou, Andre Elisseeff, and William Stafford Noble. Semi-supervised protein classification using cluster kernels. Bioinformatics, 21(15):3241–3247, 2005.
[8] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, September 1998.
[9] Christina S. Leslie, Eleazar Eskin, and William Stafford Noble. The spectrum kernel: A string kernel for SVM protein classification. In Pacific Symposium on Biocomputing, pages 566–575, 2002.
[10] S. V. N. Vishwanathan and Alex Smola. Fast kernels for string and tree matching. Advances in Neural Information Processing Systems, 15, 2002.
[11] M. Gribskov, A.D. McLachlan, and D. Eisenberg. Profile analysis: detection of distantly related proteins. Proceedings of the National Academy of Sciences, 84:4355–4358, 1987.
[12] Juho Rousu and John Shawe-Taylor. Efficient computation of gapped substring kernels on large alphabets. J. Mach. Learn. Res., 6:1323–1344, 2005.
[13] Pavel Kuksa, Pai-Hsi Huang, and Vladimir Pavlovic. Fast protein homology and fold detection with sparse spatial sample kernels. In ICPR 2008, 2008.
[14] Chris H.Q. Ding and Inna Dubchak. Multi-class protein fold recognition using support vector machines and neural networks. Bioinformatics, 17(4):349–358, 2001.
[15] Iain Melvin, Eugene Ie, Jason Weston, William Stafford Noble, and Christina Leslie. Multi-class protein classification using adaptive codes. J. Mach. Learn. Res., 8:1557–1581, 2007.
[16] Tao Li, Mitsunori Ogihara, and Qi Li. A comparative study on content-based music genre classification. In SIGIR '03, pages 282–289, New York, NY, USA, 2003. ACM.
[17] Pavel Kuksa, Pai-Hsi Huang, and Vladimir Pavlovic. On the role of local matching for efficient semi-supervised protein sequence classification. In BIBM, 2008.
[18] Tommi Jaakkola, Mark Diekhans, and David Haussler. A discriminative framework for detecting remote protein homologies. In Journal of Computational Biology, volume 7, pages 95–114, 2000.
Supervised Exponential Family Principal Component
Analysis via Convex Optimization
Yuhong Guo
Computer Sciences Laboratory
Australian National University
[email protected]
Abstract
Recently, supervised dimensionality reduction has been gaining attention, owing
to the realization that data labels are often available and indicate important underlying structure in the data. In this paper, we present a novel convex supervised
dimensionality reduction approach based on exponential family PCA, which is
able to avoid the local optima of typical EM learning. Moreover, by introducing a sample-based approximation to exponential family models, it overcomes the
limitation of the prevailing Gaussian assumptions of standard PCA, and produces
a kernelized formulation for nonlinear supervised dimensionality reduction. A
training algorithm is then devised based on a subgradient bundle method, whose
scalability can be gained using a coordinate descent procedure. The advantage of
our global optimization approach is demonstrated by empirical results over both
synthetic and real data.
1 Introduction
Principal component analysis (PCA) has been extensively used for data analysis and processing.
It provides a closed-form solution for linear unsupervised dimensionality reduction through singular value decomposition (SVD) on the data matrix [8]. Probabilistic interpretations of PCA have
also been provided in [9, 16], which formulate PCA using a latent variable model with Gaussian
distributions. To generalize PCA to better suit non-Gaussian data, many extensions to PCA have
been proposed that relax the assumption of a Gaussian data distribution. Exponential family PCA
is the most prominent example, where the underlying dimensionality reduction principle of PCA
is extended to the general exponential family [4, 7, 13]. Previous work has shown that improved
quality of dimensionality reduction can be obtained by using exponential family models appropriate for the data at hand [4, 13]. Given data from a non-Gaussian distribution, these techniques are better able than PCA to capture the intrinsic low-dimensional structure. However, most existing
non-Gaussian dimensionality reduction methods rely on iterative local optimization procedures and
thus suffer from local optima, with the sole exception of [7] which shows a general convex form can
be obtained for dimensionality reduction with exponential family models.
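For contrast with the iterative methods discussed next, the closed-form PCA solution mentioned at the start of this section amounts to a few lines (a standard sketch, not specific to this paper):

```python
import numpy as np

def pca_svd(X, d):
    # Closed-form linear PCA: center the data, take the top-d right
    # singular vectors as principal directions, and project.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T, Vt[:d]  # low-dimensional scores, loadings
```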
Recently, supervised dimensionality reduction has begun to receive increased attention. As the goal
of dimensionality reduction is to identify the intrinsic structure of a data set in a low dimensional
space, there are many reasons why supervised dimensionality reduction is a meaningful topic to
study. First, data labels are almost always assigned based on some important intrinsic property of
the data. Such information should be helpful to suppress noise and capture the most useful aspects
of a compact representation of the data. Moreover, there are many high dimensional data sets with
label information available, e.g., face and digit images, and it is unwise to ignore them. A few supervised dimensionality reduction methods based on exponential family models have been proposed
in the literature. For example, a supervised probabilistic PCA (SPPCA) model was proposed in
[19]. SPPCA extends probabilistic PCA by assuming that both features and labels have Gaussian
distributions and are generated independently from the latent low dimensional space through linear
transformations. The model is learned by maximizing the marginal likelihood of the observed data
using an alternating EM procedure. A more general supervised dimensionality reduction approach
with generalized linear models (SDR GLM) was proposed in [12]. SDR GLM views both features
and labels as exponential family random variables and optimizes a weighted linear combination of
their conditional likelihood given latent low dimensional variables using an alternating EM-style
procedure with closed-form update rules. SDR GLM is able to deal with different data types by
using different exponential family models. Similar to SDR GLM, the linear supervised dimensionality reduction method proposed in [14] also takes advantage of exponential family models to deal
with different data types. However, it optimizes the conditional likelihood of labels given observed
features within a mixture model framework using an EM-style optimization procedure. Beyond the
PCA framework, many other supervised dimensionality reduction methods have been proposed in
the literature. Linear (Fisher) discriminant analysis (LDA) is a popular alternative [5], which maximizes between-class variance and minimizes within-class variance. Moreover, a kernelized Fisher
discriminant analysis (KDA) has been studied in [10]. Another notable nonlinear supervised dimensionality reduction approach is the colored maximum variance unfolding (MVU) approach proposed
in [15], which maximizes the variance aligning with the side information (e.g., label information),
while preserving the local distance structures from the data. However, colored MVU has only been
evaluated on training data.
In this paper, we propose a novel supervised exponential family PCA model (SEPCA). In the SEPCA
model, observed data x and its label y are assumed to be generated from the latent variables z via
conditional exponential family models; dimensionality reduction is conducted by optimizing the
conditional likelihood of the observations (x, y). By exploiting convex duality of the sub-problems
and eigenvector properties, a solvable convex formulation of the problem can be derived that preserves solution equivalence to the original. This convex formulation allows efficient global optimization algorithms to be devised. Moreover, by introducing a sample-based approximation to
exponential family models, SEPCA does not suffer from the limitations of implicit Gaussian assumptions and is able to be conveniently kernelized to achieve nonlinearity. A training algorithm
is then devised based on a subgradient bundle method, whose scalability can be gained through a
coordinate descent procedure. Finally, we present a simple formulation to project new testing data
into the embedded space. This projection can be used for other supervised dimensionality reduction
approach as well. Our experimental results over both synthetic and real data suggest that a more
global, principled probabilistic approach, SEPCA, is better able to capture subtle structure in the
data, particularly when good label information is present.
The remainder of this paper is organized as follows. First, in Section 2 we present the proposed
supervised exponential family PCA model and formulate a convex nondifferentiable optimization
problem. Then, an efficient global optimization algorithm is presented in Section 3. In Section 4,
we present a simple projection method for new testing points. We then present the experimental
results in Section 5. Finally, in Section 6 we conclude the paper.
2
Supervised Exponential Family PCA
We assume we are given a t × n data matrix, X, consisting of t observations of n-dimensional feature vectors, X_{i:}, and a t × k indicator matrix, Y, with each row indicating the class label of the corresponding observation X_{i:}; thus Σ_{j=1}^{k} Y_{ij} = 1. For simplicity, we assume the features in X are centered; that is, their empirical means are zero. We aim to recover a d-dimensional re-representation, a t × d matrix
Z, of the data (d < n). This is typically viewed as discovering a latent low dimensional manifold
in the high dimensional feature space. Since the label information Y is exploited in the discovery
process, this is called supervised dimensionality reduction. For recovering Z, a key restriction that
one would like to enforce is that the features used for coding, Z:j , should be linearly independent;
that is, one would like to enforce the constraint Z⊤Z = I, which ensures that the codes are expressed
by orthogonal features in the low dimensional representation.
Given the above setup, in this paper, we are attempting to address the problem of supervised dimensionality reduction using a probabilistic latent variable model. Our intuition is that the important
intrinsic structure (underlying feature representation) of the data should be able to accurately generate/predict the original data features and labels.
In this section, we formulate the low-dimensional principal component discovering problem as a
conditional likelihood maximization problem based on exponential family model representations,
which can be reformulated into an equivalent nondifferentiable convex optimization problem. We
then exploit a sample-based approximation to unify exponential family models for different data
types.
2.1
Convex Formulation of Supervised Exponential Family PCA
As with generalized exponential family PCA [4], we attempt to find a low-dimensional representation by maximizing the conditional likelihood of the observation matrix X and the label matrix Y given the latent
matrix Z, log P (X, Y |Z) = log P (X|Z) + log P (Y |Z). Using the general exponential family
representation, a regularized version of this maximization problem can be formulated as
max_{Z: Z⊤Z=I}  max_{W,Θ,b}   log P(X|Z, W) − (α/2) tr(WW⊤) + log P(Y|Z, Θ, b) − (β/2) ( tr(ΘΘ⊤) + b⊤b )

  = max_{Z: Z⊤Z=I}  max_{W,Θ,b}   tr(ZWX⊤) − Σ_i ( A(Z_{i:}, W) − log P_0(X_{i:}) ) − (α/2) tr(WW⊤)
        + tr(ZΘY⊤) + 1⊤Yb − Σ_i A(Z_{i:}, Θ, b) − (β/2) ( tr(ΘΘ⊤) + b⊤b )        (1)

where W is a d × n parameter matrix for the conditional model P(X|Z); Θ is a d × k parameter matrix for the conditional model P(Y|Z) and b is a k × 1 bias vector; 1 denotes the vector of all 1s; A(Z_{i:}, W) and A(Z_{i:}, Θ, b) are the log normalization functions that ensure valid probability distributions:

A(Z_{i:}, W) = log ∫ exp(Z_{i:} W x) P_0(x) dx .        (2)

A(Z_{i:}, Θ, b) = log Σ_{ℓ=1}^{k} exp( Z_{i:} Θ 1_ℓ + 1_ℓ⊤ b )        (3)

where 1_ℓ denotes a zero vector with a single 1 in the ℓth entry.
Note that the class variable y is discrete, thus maximizing log P(Y|Z, Θ, b) amounts to discriminative classification training. In fact, the second part of the objective function in (1) is simply a multi-class logistic regression. That is why we have incorporated an additional bias term b into the model.
Theorem 1 The optimization problem (1) is equivalent to
min_{U^x, U^y}  max_{M: I ⪰ M ⪰ 0, tr(M)=d}   Σ_i ( A*(U_{i:}^x) + log P_0(X_{i:}) ) + (1/2α) tr( (X − U^x)(X − U^x)⊤ M )
        + Σ_i A*(U_{i:}^y) + (1/2β) tr( (Y − U^y)(Y − U^y)⊤ (M + E) )        (4)

where E is the t × t matrix of all 1s; U^x is a t × n matrix; U^y is a t × k matrix; A*(U_{i:}^x) and A*(U_{i:}^y) are the Fenchel conjugates of A(Z_{i:}, W) and A(Z_{i:}, Θ, b) respectively; M = ZZ⊤ and Z can be recovered by taking the top d eigenvectors of M; and the model parameters W, Θ, b can be recovered by

W = (1/α) Z⊤(X − U^x),   Θ = (1/β) Z⊤(Y − U^y),   b = (1/β) (Y − U^y)⊤ 1
Proof: The proof is simple and based on standard results. Due to space limitations, we only summarize the key steps here. There are three steps. The first step is to derive the Fenchel conjugate dual for each log partition function, A(Z, ·), following [18, Section 3.3.3], which can be used to yield

max_{Z: Z⊤Z=I}  min_{U^x, U^y}   Σ_i ( A*(U_{i:}^x) + log P_0(X_{i:}) ) + (1/2α) tr( (X − U^x)(X − U^x)⊤ ZZ⊤ )
        + Σ_i A*(U_{i:}^y) + (1/2β) tr( (Y − U^y)(Y − U^y)⊤ (ZZ⊤ + E) )        (5)

which is equivalent to the original problem (1). The second step exploits the strong min-max property [2] and the relationship between the constraint sets

{M : M = ZZ⊤ for some Z such that Z⊤Z = I}  ⊆  {M : I ⪰ M ⪰ 0, tr(M) = d},

which allows one to further show that the optimization (4) is an upper bound relaxation of (5). The final equivalence proof is based on the result of [11], which shows that the substitution of ZZ⊤ with the matrix M does not produce a relaxation gap.
Note that (4) is a min-max optimization problem. Moreover, for each fixed M, the outer minimization problem is obviously convex, since the Fenchel conjugates, A*(U_{i:}^x) and A*(U_{i:}^y), are convex functions of U^x and U^y respectively [2]; that is, the objective function for the outer minimization is a
pointwise supremum over an infinite set of convex functions. Thus the overall min-max optimization
is convex [3], but apparently not necessarily differentiable. We will address the nondifferentiable
training issue in Section 3.
2.2
Sample-based Approximation
In the previous section, we have formulated our supervised exponential family PCA as a convex
optimization problem (4). However, before attempting to devise a training algorithm to solve it, we
have to provide some concrete forms for the Fenchel conjugate functions A*(U_{i:}^x) and A*(U_{i:}^y). For different exponential family models, the Fenchel conjugate functions A* are different; see [18, Table 2]. For example, since the y variable in our model is a discrete class variable, it takes a multinomial distribution. Thus the Fenchel conjugate function A*(U_{i:}^y) is given by

A*(U_{i:}^y) = A*(λ_{i:}^y) = tr( λ_{i:}^y log λ_{i:}^{y⊤} ),  where λ^y ≥ 0, λ^y 1 = 1        (6)
The specific exponential family model is determined by the data type and distribution. PCA and
SPPCA use Gaussian models, thus their performance might be degraded when the data distribution is non-Gaussian. However, it is tedious and sometimes hard to choose the most appropriate exponential family model for each specific application problem. Moreover, the log normalization function A and its Fenchel conjugate A* might not be easily computable. For these reasons, we propose to use a sample-based approximation to the integral (2) and achieve an empirical approximation to the true underlying exponential family model, as follows. If one replaces the integral definition (2) with the empirical definition Â(Z_{i:}, W) = log Σ_j exp( Z_{i:} W X_{j:}⊤ )/t, then the conjugate function can be given by

A*(U_{i:}^x) = A*(λ_{i:}^x) = tr( λ_{i:}^x log λ_{i:}^{x⊤} ) − log(1/t),  where λ^x ≥ 0, λ^x 1 = 1        (7)
With this sample-based approximation, problem (4) can be expressed as

min_{λ^x, λ^y}  max_{M: I ⪰ M ⪰ 0, tr(M)=d}   tr( λ^x log λ^{x⊤} ) + (1/2α) tr( (I − λ^x) K (I − λ^x)⊤ M )
        + tr( λ^y log λ^{y⊤} ) + (1/2β) tr( (Y − λ^y)(Y − λ^y)⊤ (M + E) )        (8)

subject to   λ^x ≥ 0, λ^x 1 = 1;   λ^y ≥ 0, λ^y 1 = 1        (9)
One benefit of working with this sample-based approximation is that it is automatically kernelized,
K = XX⊤, enabling non-linearity to be conveniently introduced.
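To make the dual formulation concrete, here is a minimal NumPy sketch of evaluating the objective of (8) at a given point; the function and variable names (sepca_objective, lam_x, lam_y) are ours for illustration and do not come from the paper.

```python
import numpy as np

def sepca_objective(lam_x, lam_y, M, K, Y, alpha, beta):
    """Evaluate the kernelized SEPCA objective of Eq. (8).
    lam_x: (t, t) and lam_y: (t, k) row-stochastic dual parameters;
    M: (t, t) PSD matrix with I >= M >= 0 and tr(M) = d;
    K: (t, t) kernel matrix (K = X X^T in the linear case);
    Y: (t, k) 0/1 class-indicator matrix."""
    t = K.shape[0]
    I, E = np.eye(t), np.ones((t, t))
    # negative entropies tr(lam log lam^T); clip guards 0 log 0
    ent_x = np.sum(lam_x * np.log(np.clip(lam_x, 1e-12, None)))
    ent_y = np.sum(lam_y * np.log(np.clip(lam_y, 1e-12, None)))
    fit_x = np.trace((I - lam_x) @ K @ (I - lam_x).T @ M) / (2 * alpha)
    fit_y = np.trace((Y - lam_y) @ (Y - lam_y).T @ (M + E)) / (2 * beta)
    return ent_x + fit_x + ent_y + fit_y
```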
3
Efficient Global Optimization
The optimization (8) derived in the previous section is a convex-concave min-max optimization problem. The inner maximization of (8) is a well-known problem with a closed-form solution [11]: M* = Z* Z*⊤ and Z* = Q_d^max( (I − λ^x) K (I − λ^x)⊤ + (Y − λ^y)(Y − λ^y)⊤ ), where Q_d^max(D) denotes the matrix formed by the top d eigenvectors of D. However, the overall outer minimization problem is nondifferentiable with respect to λ^x and λ^y. Thus standard first-order or second-order optimization techniques that rely on ordinary gradients cannot be applied here. In this section, we deploy a bundle method to solve this nondifferentiable min-max optimization.
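The closed-form inner solution translates directly into code. A minimal sketch, reusing the naming conventions of the previous snippet:

```python
import numpy as np

def inner_max_M(lam_x, lam_y, K, Y, d):
    """Closed-form inner maximization over M: M* = Z* Z*^T, where Z* holds
    the top-d eigenvectors of
    (I - lam_x) K (I - lam_x)^T + (Y - lam_y)(Y - lam_y)^T."""
    I = np.eye(K.shape[0])
    D = (I - lam_x) @ K @ (I - lam_x).T + (Y - lam_y) @ (Y - lam_y).T
    D = (D + D.T) / 2                # symmetrize against round-off error
    _, vecs = np.linalg.eigh(D)      # eigenvalues returned in ascending order
    Z = vecs[:, -d:]                 # top-d eigenvectors
    return Z @ Z.T, Z
```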
3.1
Bundle Method for Min-Max Optimization
The bundle method is an efficient subgradient method for nondifferentiable convex optimization; it
relies on the computation of subgradient terms of the objective function. A vector g is a subgradient
of a function f at point x if f(y) ≥ f(x) + g⊤(y − x), ∀y. To adapt standard bundle methods to our
specific min-max problem, we need to first address the critical issue of subgradient computation.
Proposition 1 Consider a joint function h(x, y) defined over x ∈ X and y ∈ Y, satisfying: (1) h(·, y) is convex for all y ∈ Y; (2) h(x, ·) is concave for all x ∈ X. Let f(x) = max_y h(x, y), and q(x_0) = argmax_y h(x_0, y). If g is a gradient of h(·, q(x_0)) at x = x_0, then g is a subgradient of f(x) at x = x_0.

Proof:
f(x) = max_y h(x, y) ≥ h(x, q(x_0))
     ≥ h(x_0, q(x_0)) + g⊤(x − x_0)    (since h(·, y) is convex for all y ∈ Y)
     = f(x_0) + g⊤(x − x_0)            (by the definitions of f(x) and q(x_0))

Thus g is a subgradient of f(x) at x = x_0 according to the definition of a subgradient.
According to Proposition 1, the subgradients of our outer minimization objective function f in (8)
over λ^x and λ^y can be given by

∂_{λ^x} f ∋ log λ^x + 1 − (1/α) M* (I − λ^x) K ,    ∂_{λ^y} f ∋ log λ^y + 1 − (1/β) M* (Y − λ^y)        (10)

where M* is the optimal inner maximization solution at the current point [λ^x, λ^y].
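A sketch of the subgradient computation in (10), assuming the inner_max_M helper from the previous snippet; as above, the names are illustrative:

```python
import numpy as np

def subgradients(lam_x, lam_y, K, Y, alpha, beta, d):
    """Subgradients of the outer objective at (lam_x, lam_y), following Eq. (10).
    M* is recomputed via the closed-form inner maximization."""
    M_star, _ = inner_max_M(lam_x, lam_y, K, Y, d)
    I = np.eye(K.shape[0])
    g_x = np.log(np.clip(lam_x, 1e-12, None)) + 1 - M_star @ (I - lam_x) @ K / alpha
    g_y = np.log(np.clip(lam_y, 1e-12, None)) + 1 - M_star @ (Y - lam_y) / beta
    return g_x, g_y
```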
Algorithm 1 illustrates the bundle method we developed to solve the infinite min-max optimization (8), where the linear constraints (9) over λ^x and λ^y can be conveniently incorporated into the
quadratic bound optimization. One important issue in this algorithm is how to manage the size of the
linear lower-bound constraints formed from the active set B (defined in Algorithm 1), as it incrementally increases as new points are explored. To solve this problem, we noticed that the Lagrangian dual parameters ω for the lower-bound constraints obtained by the quadratic optimization in step 1 form a sparse vector, indicating that many lower-bound constraints can be turned off. Moreover, any constraint that is turned off will mostly stay off in later steps. Therefore, for the bundle method we developed, whenever the size of B is larger than a given constant b, we keep the active points of B that correspond to the b largest ω values, and drop the remaining ones.
3.2
Coordinate Descent Procedure
An important factor affecting running efficiency is the size of the problem. The convex optimization (8) works in the dual parameter space, where the size of the parameters λ = {λ^x, λ^y}, t × (t + k), depends only on the number of training samples, t, not on the feature size, n. For high-dimensional small data sets (n ≫ t), our dual optimization is certainly a good option. However, as t increases, the problem size grows on the order of O(t²). It might soon become too
large to handle for the quadratic optimization step of the bundle method.
On the other hand, the optimization problem (8) possesses a nice semi-decomposable structure:
one equality constraint in (9) involves only one row of λ; that is, λ can be separated into
rows without affecting the equality constraints. Based on this observation, we develop a coordinate
descent procedure to obtain scalability of the bundle method over large data sets. Specifically, we
put an outer loop around the bundle method. Within each outer-loop iteration, we randomly separate the λ parameters into m groups, with each group containing a subset of the rows of λ; we then use the bundle method to sequentially optimize each subproblem defined on one group of λ parameters while keeping the remaining rows of λ fixed. Although coordinate descent with a
nondifferentiable convex objective is not guaranteed to converge to a minimum in general [17], we
have found that this procedure performs quite well in practice, as shown in the experimental results.
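A minimal sketch of this outer loop, assuming a hypothetical bundle_solver(lam_x, lam_y, rows) that runs Algorithm 1 on the subproblem restricted to the given rows; group counts are illustrative:

```python
import numpy as np

def coordinate_descent(lam_x, lam_y, bundle_solver, n_groups, n_outer=10):
    """Outer coordinate-descent loop over random row groups of the dual
    parameters; rows outside the current group stay fixed."""
    t = lam_x.shape[0]
    for _ in range(n_outer):
        for rows in np.array_split(np.random.permutation(t), n_groups):
            lam_x, lam_y = bundle_solver(lam_x, lam_y, rows)
    return lam_x, lam_y
```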
4
Projection for Testing Data
One important issue for supervised dimensionality reduction is to map new testing data into the
dimensionality-reduced principal dimensions. We deploy a simple procedure for this purpose. After
Algorithm 1 Bundle Method for Min-Max Optimization in (8)
Input: ε > 0, m ∈ (0, 1), b ∈ ℕ, ρ ∈ ℝ
Initialize: find an initial point λ° satisfying the linear constraints in (9); compute f(λ°).
Let ℓ = 1, λ^ℓ = λ°; compute g^ℓ ∈ ∂_λ f by (10); e^ℓ = f(λ°) − f(λ^ℓ) − g^{ℓ⊤}(λ° − λ^ℓ).
Let B = {(e^ℓ, g^ℓ)}, ē = ∞, ḡ = 0; ℓ = ℓ + 1.
repeat
  1. Solve the quadratic minimization for the solution λ̃ and the Lagrangian dual parameters ω w.r.t. the lower-bound linear constraints in B [1]:
       λ̃ = argmin_λ  f̂(λ) + (ρ/2) ‖λ − λ°‖² ,  subject to the linear constraints in (9),
     where f̂(λ) = f(λ°) + max{ −ē + ḡ⊤(λ − λ°),  max_{(e^i, g^i) ∈ B} [ −e^i + g^{i⊤}(λ − λ°) ] }.
  2. Define δ^ℓ = f(λ°) − [ f̂(λ̃) + (ρ/2) ‖λ̃ − λ°‖² ] ≥ 0. If δ^ℓ < ε, return.
  3. Conduct a line search to minimize f(λ^ℓ) with λ^ℓ = θλ° + (1 − θ)λ̃, for 0 < θ < 1.
  4. Compute g^ℓ ∈ ∂_λ f by (10); e^ℓ = f(λ°) − f(λ^ℓ) − g^{ℓ⊤}(λ° − λ^ℓ); update B = B ∪ {(e^ℓ, g^ℓ)}.
  5. If f(λ°) − f(λ^ℓ) ≥ m δ^ℓ, take a serious step:
       (1) update: e^i = e^i + f(λ^ℓ) − f(λ°) + g^{i⊤}(λ° − λ^ℓ);
       (2) update the aggregation: ḡ = Σ_i ω_i g^i, ē = Σ_i ω_i e^i;
       (3) update the stored solution: λ° = λ^ℓ, f(λ°) = f(λ^ℓ).
  6. If |B| > b, reduce the set B according to ω.
  7. ℓ = ℓ + 1.
until the maximum iteration number is reached
training, we obtain a low-dimensional representation Z for X, where Z can be viewed as a linear
projection of X in some transformed space φ(X) through a parameter matrix U, such that Z = φ(X)U = φ(X)φ(X)⊤ K⁺ φ(X)U, where K⁺ denotes the pseudo-inverse of K = φ(X)φ(X)⊤. Then a new testing sample x* can be projected by

z* = φ(x*) φ(X)⊤ K⁺ φ(X)U = k(x*, X) K⁺ Z        (11)
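A minimal sketch of this projection step, assuming a linear kernel; for other kernels, replace the k(x*, X) computation accordingly. The function name is ours for illustration:

```python
import numpy as np

def project_test_points(X_test, X_train, K_train, Z):
    """Project new points into the learned embedding via Eq. (11):
    z* = k(x*, X) K^+ Z."""
    K_test = X_test @ X_train.T          # k(x*, X) for the linear kernel
    return K_test @ np.linalg.pinv(K_train) @ Z
```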
5
Experimental Results
In order to evaluate the performance of the proposed supervised exponential family PCA (SEPCA)
approach, we conducted experiments over both synthetic and real data, and compared to supervised
dimensionality reduction with generalized linear models (SDR GLM), supervised probabilistic PCA
(SPPCA), linear discriminant analysis (LDA), and colored maximum variance unfolding (MVU).
The projection procedure (11) is used for colored MVU as well. In all the experiments, we used ρ = 1 for Algorithm 1, and α = 0.0001 for SDR GLM as suggested in [12].
5.1
Experiments on Synthetic Data
Two synthetic experiments were conducted to compare the five approaches under controlled conditions. The first synthetic data set is formed by first generating four Gaussian clusters in a two-dimensional space, with each cluster corresponding to one class, and then adding the third dimension to
each point by uniformly sampling from a fixed interval. This experiment attempts to compare the
performance of the five approaches in the situation where the data distribution does not satisfy the
Gaussian assumption. Figure 1 shows the projection results for each approach in a two-dimensional space for 120 testing points after being trained on a set with 80 points. In this case, SEPCA and LDA outperform the other three approaches.
The second synthetic experiment is designed to test the capability of performing nonlinear dimensionality reduction. The synthetic data is formed by first generating two circles in a two-dimensional
space (one circle is located inside the other one), with each circle corresponding to one class, and
then the third dimension sampled uniformly from a fixed interval. As SDR GLM does not provide
a nonlinear form, we conducted the experiment with only the remaining four approaches. For LDA,
we used its kernel variant, KDA. A Gaussian kernel with ? = 1 was used for SEPCA, SPPCA and
KDA. Figure 2 shows the projection results for each approach in a two dimensional space for 120
[Five scatter-plot panels, one per method: SEPCA, SDR-GLM, SPPCA, Colored-MVU, LDA; axis tick values omitted.]
Figure 1: Projection results on test data for synthetic experiment 1. Each color indicates one class.
[Four scatter-plot panels, one per method: SPPCA, SEPCA, KDA, Colored-MVU; axis tick values omitted.]
Figure 2: Projection results on test data for synthetic experiment 2. Each color indicates one class.
Figure 2 shows the projection results for each approach in a two-dimensional space for 120
testing points after being trained on a set with 95 points. Again, SEPCA and KDA achieve good
class separations and outperform the other two approaches.
5.2
Experiments on Real Data
To better characterize the performance of dimensionality reduction in a supervised manner, we conducted some experiments on a few high dimensional multi-class real world data sets. The left side
of Table 1 provides the information about these data sets. Our experiments were conducted in the
following way. We randomly selected 3-5 examples from each class to form the training set and
used the remaining examples as the test set. For each approach, we first learned the dimensionality
reduction model on the training set. Moreover, we also trained a logistic regression classifier using the projected training set in the reduced low dimensional space. (Note, for SEPCA, a classifier
was trained simultaneously during the process of dimensionality reduction optimization.) Then the
test data were projected into the low dimensional space according to each dimensionality reduction
model. Finally, the projected test set for each approach were classified using each corresponding
logistic regression classifier. The right side of Table 1 shows the classification accuracies on the test
set for each approach. To better understand the quality of the classification using projected data, we
also included the standard classification results, indicated as ?FULL?, using the original high dimensional data. (Note, we are not able to obtain any result for SDR GLM on the newsgroup data as it is
inefficient for very high dimensional data.) The results reported here are averages over 20 repeated
runs, and the projection dimension d = 10. Still the proposed SEPCA presents the best performance
among the compared approaches. But different from the synthetic experiments, LDA does not work
well on these real data sets.
The results on both synthetic and real data show that SEPCA outperforms the other four approaches.
This might be attributed to its adaptive exponential family model approximation and its global optimization, while SDR GLM and SPPCA apparently suffer from local optima.
6
Conclusions
In this paper, we propose a supervised exponential family PCA (SEPCA) approach, which can
be solved efficiently to find global solutions. Moreover, SEPCA overcomes the limitation of the
Gaussian assumption of PCA and SPPCA by using a data adaptive approximation for exponential
family models. A simple, straightforward projection method for new testing data has also been
constructed. Empirical study suggests that this SEPCA outperforms other supervised dimensionality
reduction approaches, such as SDR GLM, SPPCA, LDA and colored MVU.
Table 1: Data set statistics and test accuracy results (%)
Dataset    | #Data | #Dim  | #Class | FULL | SEPCA | SDR-GLM | SPPCA | LDA  | colored MVU
Yale       | 165   | 4096  | 15     | 65.3 | 64.4  | 58.8    | 51.6  | 31.0 | 21.1
YaleB      | 2414  | 1024  | 38     | 47.0 | 20.5  | 19.0    | 9.8   | 6.2  | 2.8
11 Tumor   | 174   | 12533 | 11     | 77.6 | 88.9  | 63.5    | 63.0  | 23.7 | 40.2
Usps3456   | 120   | 256   | 4      | 82.1 | 79.7  | 77.9    | 78.5  | 74.3 | 75.8
Newsgroup  | 19928 | 25284 | 20     | 32.1 | 16.9  | -       | 6.9   | 10.0 | 10.4
References
[1] A. Belloni. Introduction to bundle methods. Technical report, MIT, 2005.
[2] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2000.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] M. Collins, S. Dasgupta, and R. Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems (NIPS), 2001.
[5] R. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179-188, 1936.
[6] Y. Guo and D. Schuurmans. Convex relaxations of latent variable training. In Advances in Neural Information Processing Systems (NIPS), 2007.
[7] Y. Guo and D. Schuurmans. Efficient global optimization for exponential family PCA and low-rank matrix factorization. In Allerton Conf. on Commun., Control, and Computing, 2008.
[8] I. Jolliffe. Principal Component Analysis. Springer Verlag, 2002.
[9] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783-1816, 2005.
[10] S. Mika, G. Ratsch, J. Weston, B. Scholkopf, and K. Muller. Fisher discriminant analysis with kernels. In IEEE Neural Networks for Signal Processing Workshop, 1999.
[11] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Math. Prog., 62:321-357, 1993.
[12] I. Rish, G. Grabarnik, G. Cecchi, F. Pereira, and G. Gordon. Closed-form supervised dimensionality reduction with generalized linear models. In Proceedings of the International Conference on Machine Learning (ICML), 2008.
[13] Sajama and A. Orlitsky. Semi-parametric exponential family PCA. In Advances in Neural Information Processing Systems (NIPS), 2004.
[14] Sajama and A. Orlitsky. Supervised dimensionality reduction using mixture models. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
[15] L. Song, A. Smola, K. Borgwardt, and A. Gretton. Colored maximum variance unfolding. In Advances in Neural Information Processing Systems (NIPS), 2007.
[16] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, B, 6(3):611-622, 1999.
[17] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109:457-494, 2001.
[18] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Technical Report TR-649, UC Berkeley, Dept. Statistics, 2003.
[19] S. Yu, K. Yu, V. Tresp, H. Kriegel, and M. Wu. Supervised probabilistic principal component analysis. In Proceedings of the 12th ACM SIGKDD International Conf. on KDD, 2006.
2,695 | 3,443 | Breaking Audio CAPTCHAs
Jennifer Tam
Computer Science Department
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh 15217
[email protected]
Jiri Simsa
Computer Science Department
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh 15217
[email protected]
Sean Hyde
Electrical and Computer Engineering
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh 15217
[email protected]
Luis Von Ahn
Computer Science Department
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh 15217
[email protected]
Abstract
CAPTCHAs are computer-generated tests that humans can pass but current computer systems cannot. CAPTCHAs provide a method for automatically distinguishing a human from a computer program, and therefore can protect Web services from abuse by so-called "bots." Most CAPTCHAs consist of distorted images, usually text, for which a user must provide some description. Unfortunately, visual CAPTCHAs limit access to the millions of visually impaired people using the Web. Audio CAPTCHAs were created to solve this accessibility issue; however, the security of audio CAPTCHAs was never formally tested. Some visual CAPTCHAs have been broken using machine learning techniques, and we propose using similar ideas to test the security of audio CAPTCHAs. Audio CAPTCHAs are generally composed of a set of words to be identified, layered on top of noise. We analyzed the security of current audio CAPTCHAs from popular Web sites by using AdaBoost, SVM, and k-NN, and achieved correct solutions for test samples with accuracy up to 71%. Such accuracy is enough to consider these CAPTCHAs broken. Training several different machine learning algorithms on different types of audio CAPTCHAs allowed us to analyze the strengths and weaknesses of the algorithms so that we could suggest a design for a more robust audio CAPTCHA.
1
Introduction
CAPTCHAs [1] are automated tests designed to tell computers and humans apart by presenting users with a problem that humans can solve but current computer programs cannot. Because CAPTCHAs can distinguish between humans and computers with high probability, they are used for many different security applications: they prevent bots from voting continuously in online polls, automatically registering for millions of spam email accounts, automatically purchasing tickets to buy out an event, etc. Once a CAPTCHA is broken (i.e., computer programs can successfully pass the test), bots can impersonate humans and gain access to services that they should not. Therefore, it is important for CAPTCHAs to be secure.

To pass the typical visual CAPTCHA, a user must correctly type the characters displayed in an image of distorted text. Many visual CAPTCHAs have been broken with machine learning techniques [2]-[3], though some remain secure against such attacks. Because visually impaired users who surf the Web using screen-reading programs cannot see this type of CAPTCHA, audio CAPTCHAs were created. Typical audio CAPTCHAs consist of one or several speakers saying letters or digits at randomly spaced intervals. A user must correctly identify the digits or characters spoken in the audio file to pass the CAPTCHA. To make this test difficult for current computer systems, specifically automatic speech recognition (ASR) programs, background noise is injected into the audio files.

Since no official evaluation of existing audio CAPTCHAs has been reported, we tested the security of audio CAPTCHAs used by many popular Web sites by running machine learning experiments designed to break them. In the next section, we provide an overview of the literature related to our project. Section 3 describes our methods for creating training data, and section 4 describes how we create classifiers that can recognize letters, digits, and noise. In section 5, we discuss how we evaluated our methods on widely used audio CAPTCHAs and we give our results. In particular, we show that the audio CAPTCHAs used by sites such as Google and Digg are susceptible to machine learning attacks. Section 6 presents the proposed design of a new, more secure audio CAPTCHA based on our findings.
2
Literature review
To break the audio CAPTCHAs, we derive features from the CAPTCHA audio and use several machine learning techniques to perform ASR on segments of the CAPTCHA. There are many popular techniques for extracting features from speech. The three techniques we use are mel-frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP), and relative spectral transform-PLP (RASTA-PLP). MFCC is one of the most popular speech feature representations used. Similar to a fast Fourier transform (FFT), MFCC transforms an audio file into frequency bands, but (unlike FFT) MFCC uses mel-frequency bands, which are better for approximating the range of frequencies humans hear. PLP was designed to extract speaker-independent features from speech [4]. Therefore, by using PLP and a variant such as RASTA-PLP, we were able to train our classifiers to recognize letters and digits independently of who spoke them. Since many different people recorded the digits used in one of the types of audio CAPTCHAs we tested, PLP and RASTA-PLP were needed to extract the features that were most useful for solving them.

In [4]-[5], the authors conducted experiments on recognizing isolated digits in the presence of noise using both PLP and RASTA-PLP. However, the noise used consisted of telephone or microphone static caused by recording in different locations. The audio CAPTCHAs we use contain this type of noise, as well as added vocal noise and/or music, which is supposed to make the automated recognition process much harder.

The authors of [3] emphasize how many visual CAPTCHAs can be broken by successfully splitting the task into two smaller tasks: segmentation and recognition. We follow a similar approach in that we first automatically split the audio into segments, and then we classify these segments as noise or words.

In early March 2008, concurrent to our work, the blog of Wintercore Labs [6] claimed to have successfully broken the Google audio CAPTCHA. After reading their Web article and viewing the video of how they solve the CAPTCHAs, we are unconvinced that the process is entirely automatic, and it is unclear what their exact pass rate is. Because we cannot find any formal technical analysis of this program, we can be sure of neither its accuracy nor the extent of its automation.
3
Creation of training data
Since automated programs can attempt to pass a CAPTCHA repeatedly, a CAPTCHA is essentially broken when a program can pass it more than a non-trivial fraction of the time; e.g., a 5% pass rate is enough.

Our approach to breaking the audio CAPTCHAs began by first splitting the audio files into segments of noise or words: for our experiments, the words were spoken letters or digits. We used manual transcriptions of the audio CAPTCHAs to get information regarding the location of each spoken word within the audio file. We were able to label our segments accurately by using this information.

We gathered 1,000 audio CAPTCHAs from each of the following Web sites: google.com, digg.com, and an older version of the audio CAPTCHA in recaptcha.net. Each of the CAPTCHAs was annotated with the information regarding letter/digit locations provided by the manual transcriptions. For each type of CAPTCHA, we randomly selected 900 samples for training and used the remaining 100 for testing.

Using the digit/letter location information provided in the manual CAPTCHA transcriptions, each training CAPTCHA is divided into segments of noise, the letters a-z, or the digits 0-9, and labeled as such. We ignore the annotation information of the CAPTCHAs we use for testing, and therefore we cannot identify the size of those segments. Instead, each test CAPTCHA is divided into a number of fixed-size segments. The segments with the highest energy peaks are then classified using machine learning techniques (Figure 1). Since the size of a feature vector extracted from a segment generally depends on the size of the segment, using fixed-size segments allows each segment to be described with a feature vector of the same length. We chose the window size by listening to a few training segments and adjusting it accordingly to ensure that each segment contained an entire digit/letter. There is undoubtedly a more optimal way of selecting the window size; however, we were still able to break the three CAPTCHAs we tested with our method.

Figure 1: A test audio CAPTCHA with the fixed-size segments containing the highest energy peaks highlighted.
The information provided in the manual transcriptions of the audio CAPTCHAs contains a list of the time intervals within which words are spoken. However, these intervals are of variable size and the word might be spoken anywhere within this interval. To provide fixed-size segments for training, we developed the following heuristic. First, divide each file into variable-size segments using the time intervals provided and label each segment accordingly. Then, within each segment, detect the highest energy peak and return its fixed-size neighborhood labeled with the current segment's label. This heuristic achieved nearly perfect labeling accuracy for the training set. Rare mistakes occurred when the highest energy peak of a digit or letter segment corresponded to noise rather than to a digit or letter.

To summarize this subsection, an audio file is transformed into a set of fixed-size segments labeled as noise, a digit between 0 and 9, or a letter between a and z. These segments are then used for training. Classifiers are trained for one type of CAPTCHA at a time.
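A minimal sketch of this peak-based segmentation, with illustrative (not the paper's) window and count settings; the function name is ours:

```python
import numpy as np

def peak_segments(signal, sr, win_sec=0.5, n_segments=12):
    """Cut fixed-size windows around the highest-energy peaks, in the spirit
    of the heuristic above. Returns (start sample, window) pairs."""
    win = int(win_sec * sr)
    energy = signal.astype(float) ** 2
    segments = []
    for _ in range(n_segments):
        peak = int(np.argmax(energy))
        lo = max(0, peak - win // 2)
        hi = min(len(signal), lo + win)
        segments.append((lo, signal[lo:hi]))
        energy[lo:hi] = 0        # suppress this region before the next pass
    return segments
```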
4
Classifier construction
From the training data we extracted five sets of features using twelve MFCCs and twelfth-order spectral (SPEC) and cepstral (CEPS) coefficients from PLP and RASTA-PLP. The Matlab functions for extracting these features were provided online at [7] and as part of the Voicebox package. We use AdaBoost, SVM, and k-NN algorithms to implement automated digit and letter recognition. We detail our implementation of each algorithm in the following subsections.
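As a rough Python stand-in for this Matlab pipeline (librosa is our assumption; the paper used the rastamat and Voicebox Matlab code, and RASTA-PLP has no direct librosa equivalent), MFCC extraction for one segment might look like:

```python
import librosa  # assumed substitute; the paper's extraction was Matlab-based

def mfcc_features(segment, sr, n_mfcc=12):
    """Twelve MFCCs per frame, averaged over the segment to give one
    fixed-length feature vector per fixed-size window."""
    mfcc = librosa.feature.mfcc(y=segment.astype(float), sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)
```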
4.1
AdaBoost
Using decision stumps as weak classifiers for AdaBoost, anywhere from 11 to 37 ensemble
classifiers are built. The number of classifiers built depends on which type of CAPTCHA we
are solving. Each classifier trains on all the segments associated with that type of
CAPTCHA, and for the purpose of building a single classifier, segments are labeled as either -1 (negative example) or +1 (positive example). Using cross-validation, we chose to
use 50 iterations for our AdaBoost algorithm. A segment can then be classified as a
particular letter, digit, or noise according to the ensemble classifier that outputs the number
closest to 1.
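A sketch of this one-vs-rest setup using scikit-learn as an assumed substitute for the authors' implementation; function names are ours:

```python
from sklearn.ensemble import AdaBoostClassifier

def train_one_vs_rest_adaboost(X, labels, classes):
    """One binary ensemble per class, 50 boosting rounds each; sklearn's
    default base learner is a depth-1 decision tree, i.e. a decision stump."""
    models = {}
    for c in classes:
        y = [1 if lbl == c else -1 for lbl in labels]
        models[c] = AdaBoostClassifier(n_estimators=50).fit(X, y)
    return models

def adaboost_label(models, x):
    # pick the class whose ensemble output is closest to +1
    return max(models, key=lambda c: models[c].decision_function([x])[0])
```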
4.2
Support vector machine
To conduct digit recognition with SVM, we used the C++ implementation of libSVM [8], version 2.85, with C-SVM and an RBF kernel. First, all feature values are scaled to the range of
-1 to 1 as suggested by [8]. The scale parameters are stored so that test samples can be
scaled accordingly. Then, a single multiclass classifier is created for each set of features
using all the segments for a particular type of CAPTCHA. We use cross-validation and grid
search to discover the optimal slack penalty (C=32) and kernel parameter (γ=0.011).
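A sketch of the equivalent setup in scikit-learn (which wraps libSVM), with the grid-search values reported above; this is an assumed substitute, not the authors' code:

```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def train_svm(X_train, y_train):
    """Multiclass C-SVM with an RBF kernel; features scaled to [-1, 1] and
    the scaler kept so test samples are scaled identically."""
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)
    clf = SVC(C=32, gamma=0.011, kernel="rbf")
    clf.fit(scaler.transform(X_train), y_train)
    return scaler, clf
```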
4.3
k-nearest neighbor (k-NN)
We use k-NN as our final method for classifying digits. For each type of CAPTCHA, five different classifiers are created by using all of the training data and the five sets of features associated with that particular type of CAPTCHA. Again we use cross-validation to discover the optimal parameter, in this case k=1. We use Euclidean distance as our distance metric.
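The equivalent scikit-learn call, again as an assumed substitute for the authors' implementation:

```python
from sklearn.neighbors import KNeighborsClassifier

def train_knn(X_train, y_train):
    """k = 1 with Euclidean distance, matching the cross-validated setting
    above; one such classifier is built per feature set per CAPTCHA type."""
    return KNeighborsClassifier(n_neighbors=1, metric="euclidean").fit(X_train, y_train)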
5
Assessment of current audio CAPTCHAs
Our method for solving CAPTCHAs iteratively extracts an audio segment from a CAPTCHA, inputs the segment to one of our digit or letter recognizers, and outputs the label for that segment. We continue this process until the maximum solution size is reached or there are no unlabeled segments left. Some of the CAPTCHAs we evaluated have solutions that vary in length. Our method ensures that we get solutions of varying length that are never longer than the maximum solution length. A segment to be classified is identified by taking the neighborhood of the highest energy peak of an as yet unlabeled part of the CAPTCHA.
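A sketch of this decoding loop, reusing the peak_segments helper sketched in Section 3; classify_segment stands for any of the trained recognizers from Section 4, and the parameter values are illustrative:

```python
def solve_captcha(signal, sr, classify_segment, max_len=8):
    """Classify windows around the loudest unlabeled peaks, keep the
    non-noise labels, and read them out in time order."""
    found = []
    for start, window in peak_segments(signal, sr, n_segments=max_len + 4):
        label = classify_segment(window)
        if label != "noise":
            found.append((start, label))
        if len(found) == max_len:
            break
    return [label for _, label in sorted(found)]
```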
Once a prediction of the solution to the CAPTCHA is computed, it is compared to the true solution. Given that at least one of the audio CAPTCHAs allows users to make a mistake in one of the digits (e.g., reCAPTCHA), we compute the pass rate for each of the different types of CAPTCHAs with all of the following conditions:
- The prediction matches the true solution exactly.
- Inserting one digit into the prediction would make it match the solution exactly.
- Replacing one digit in the prediction would make it match the solution exactly.
- Removing one digit from the prediction would make it match the solution exactly.
However, since we are only sure that these conditions apply to reCAPTCHA audio CAPTCHAs, we also calculate the percentage of exact solution matches in our results for each type of audio CAPTCHA. These results are described in the following subsections.
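These conditions amount to allowing an edit distance of one between prediction and solution. A small sketch of the check:

```python
def passes_one_mistake(pred, truth):
    """The 'one mistake' passing conditions listed above: exact match, or a
    match after one insertion, replacement, or deletion in the prediction."""
    if pred == truth:
        return True
    if len(pred) == len(truth):                       # one replacement
        return sum(a != b for a, b in zip(pred, truth)) == 1
    shorter, longer = sorted((pred, truth), key=len)  # one insertion/deletion
    if len(longer) - len(shorter) != 1:
        return False
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
```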
5.1
Google
Google audio CAPTCHAs consist of one speaker saying random digits 0-9, the phrase "once again," followed by the exact same recorded sequence of digits originally presented. The background noise consists of human voices speaking backwards at varying volumes. A solution can range in length from five to eight words. We set our classifier to find the 12 loudest segments and classify these segments as digits or noise. Because the phrase "once again" marks the halfway point of the CAPTCHA, we preprocessed the audio to serve only this half of the CAPTCHA to our classifiers. It is important to note, however, that the classifiers were always able to identify the segment containing "once again," and these segments were identified before all other segments. Therefore, if necessary, we could have had our system cut the file in half after first labeling this segment.

For AdaBoost, we create 12 classifiers: one classifier for each digit, one for noise, and one for the phrase "once again." Our results (Table 1) show that at best we achieved a 90% pass rate using the "one mistake" passing conditions and a 66% exact solution match rate. Using SVM and the "one mistake" passing conditions, at best we achieve a 92% pass rate and a 67% exact solution match. For k-NN, the "one mistake" pass rate is 62% and the exact solution match rate is 26%.
Table 1: Google audio CAPTCHA results. Maximum 67% exact accuracy was achieved by SVM.

Features Used | AdaBoost one mistake | AdaBoost exact match | SVM one mistake | SVM exact match | k-NN one mistake | k-NN exact match
MFCC          | 88% | 61% | 92% | 67% | 30% | 1%
PLPSPEC       | 90% | 66% | 90% | 67% | 60% | 26%
PLPCEPS       | 90% | 66% | 92% | 67% | 62% | 23%
RASTAPLPSPEC  | 88% | 48% | 90% | 61% | 29% | 1%
RASTAPLPCEPS  | 90% | 63% | 92% | 67% | 33% | 2%

5.2
Digg
Digg CAPTCHAs also consist of one speaker, in this case saying a random combination of letters and digits. The background noise consists of static or what sounds like trickling water and is not continuous throughout the entire file. We noticed in our training data that the following characters were never present in a solution: 0, 1, 2, 5, 7, 9, i, o, z. Since the Digg audio CAPTCHA is also the verbal transcription of the visual CAPTCHA, we believe that these characters are excluded to avoid confusion between digits and letters that are similar in appearance. The solution length varies between three and six words. Using AdaBoost, we create 28 classifiers: one classifier for each digit or letter that appears in our training data and one classifier for noise. Perhaps because we had fewer segments to train with and there was a far higher proportion of noise segments, AdaBoost failed to produce any correct solutions. We believe that the overwhelming number of negative training examples versus the small number of positive training samples used to create each decision stump severely affected AdaBoost's ability to classify audio segments correctly.

A histogram of the training samples is provided in Figure 2 to illustrate the amount of training data available for each character. When using SVM, the best feature set passed with 96% using the "one mistake" passing conditions and passed with 71% when matching the solution exactly. For k-NN, the best feature set produced a 90% "one mistake" pass rate and a 49% exact solution match. Full results can be found in Table 2.
Table 2: Digg audio CAPTCHA results. Maximum 71% exact accuracy was achieved by SVM.

Features Used | AdaBoost one mistake | AdaBoost exact match | SVM one mistake | SVM exact match | k-NN one mistake | k-NN exact match
MFCC          | - | - | 96% | 71% | 89% | 49%
PLPSPEC       | - | - | 94% | 65% | 90% | 47%
PLPCEPS       | - | - | 96% | 71% | 64% | 17%
RASTAPLPSPEC  | - | - | 17% | 3%  | 67% | 17%
RASTAPLPCEPS  | - | - | 96% | 71% | 82% | 34%
[Bar chart: number of training segments per label (noise, digits, letters), y-axis "# of Segments" from 0 to 1000, x-axis "Segment Label".]
Figure 2: Digg CAPTCHA training data distribution.
5.3
reCAPTCHA
The older reCAPTCHA audio CAPTCHAs we tested consist of several speakers who speak random digits. The background noise consists of human voices speaking backwards at varying volumes. The solution is always eight digits long. For AdaBoost, we create 11 classifiers: one classifier for each digit and one classifier for noise. Because we know that the reCAPTCHA passing conditions are the "one mistake" passing conditions, SVM produces our best pass rate of 58%. Our best exact match rate is 45% (Table 3).
Table 3: reCAPTCHA audio CAPTCHA results. Maximum 45% exact accuracy was achieved by SVM.

Features Used | AdaBoost one mistake | AdaBoost exact match | SVM one mistake | SVM exact match | k-NN one mistake | k-NN exact match
MFCC          | 18% | 6%  | 56% | 43% | 22% | 11%
PLPSPEC       | 27% | 10% | 58% | 39% | 43% | 25%
PLPCEPS       | 23% | 10% | 56% | 45% | 29% | 14%
RASTAPLPSPEC  | 9%  | 3%  | 36% | 18% | 24% | 4%
RASTAPLPCEPS  | 9%  | 3%  | 46% | 30% | 32% | 12%

6
Properties of weak versus strong CAPTCHAs
From our results, we note that the easiest CAPTCHAs to break were from Digg. Google had the next strongest CAPTCHAs, followed by the strongest from reCAPTCHA. Although the Digg CAPTCHAs have the largest vocabulary, giving us less training data per label, the same woman recorded them all. More importantly, the same type of noise is used throughout the entire CAPTCHA. The noise sounds like running water and static, which sounds very different from the human voice and does not produce the same energy spikes needed to locate segments, therefore making segmentation quite easy. The CAPTCHAs from Google and reCAPTCHA used other human voices for background noise, making segmentation much more difficult. Although Google used a smaller vocabulary than Digg and also only used one speaker, Google's background noise made the CAPTCHA more difficult to solve. After listening to a few of Google's CAPTCHAs, we noticed that although the background noise consisted of human voices, the same background noise was repeated. reCAPTCHA had similar noise to Google, but with a larger selection of noise, thus making it harder to learn. reCAPTCHA also has the longest solution length, making it more difficult to get perfectly correct. Finally, reCAPTCHA used many different speakers, causing it to be the strongest CAPTCHA of the three we tested. In conclusion, an audio CAPTCHA that consists of a finite vocabulary and background noise should have multiple speakers and noise similar to the speakers.
7
Recommendations for creating stronger audio CAPTCHAs
Due to our success in solving audio CAPTCHAs, we have decided to start developing new audio CAPTCHAs that our methods, and machine learning methods in general, will be less likely to solve. From our experiments, we note that CAPTCHAs containing longer solutions and multiple speakers tend to be more difficult to solve. Also, because our methods depend on the amount of training data we have, having a large vocabulary would make it more difficult to collect enough training data. Already since obtaining these results, reCAPTCHA.net has updated their audio CAPTCHA to contain more distortions and a larger vocabulary: the digits 0 through 99. In designing a new audio CAPTCHA we are also concerned with the human pass rate. The current human pass rate for the reCAPTCHA audio CAPTCHAs is only 70%. To develop an audio CAPTCHA with an improved human pass rate, we plan to take advantage of the human mind's ability to understand distorted audio through context clues. By listening to a phrase instead of to random isolated words, humans are better able to decipher distorted utterances because they are familiar with the phrase or can use contextual clues to decipher the distorted audio. Using this idea, the audio for our new audio CAPTCHA will be taken from old-time radio programs in which the poor quality of the audio makes transcription by ASR systems difficult. Users will be presented with an audio clip consisting of a 4-6 word phrase. Half of the CAPTCHA consists of words which validate a user as human, while the other half of the words need to be transcribed. This is the same idea behind the visual reCAPTCHA that is currently digitizing text on which OCR fails. We expect that this new audio CAPTCHA will be more secure than the current version and easier for humans to pass. Initial experiments using this idea show this to be true [9].
8
Conclusion
We have succeeded in "breaking" three different types of widely used audio CAPTCHAs, even though these were developed with the purpose of defeating attacks by machine learning techniques. We believe our results can be improved by selecting optimal segment sizes, but that is unnecessary given our already high success rate. For our experiments, segment sizes were not chosen in a special way; occasionally this yielded results in which a segment contained only half of a word, causing our prediction to contain that particular word twice. We also believe that the AdaBoost results can be improved, particularly for the Digg audio CAPTCHAs, by ensuring that the number of negative training samples is closer to the number of positive training samples. We have shown that our approach is successful and can be used with many different audio CAPTCHAs that contain small finite vocabularies.
Acknowledgments
This work was partially supported by generous gifts from the Heinz Endowment, by an
equipment grant from Intel Corporation, and by the Army Research Office through grant
number DAAD19-02-1-0389 to CyLab at Carnegie Mellon University. Luis von Ahn was
partially supported by a Microsoft Research New Faculty Fellowship and a MacArthur
Fellowship. Jennifer Tam was partially supported by a Google Anita Borg Scholarship.
References
[1] L. von Ahn, M. Blum, and J. Langford. "Telling Humans and Computers Apart Automatically," Communications of the ACM, vol. 47, no. 2, pp. 57-60, Feb. 2004.
[2] G. Mori and J. Malik. "Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA," In Computer Vision and Pattern Recognition CVPR'03, June 2003.
[3] K. Chellapilla and P. Simard. "Using Machine Learning to Break Visual Human Interaction Proofs (HIPs)," Advances in Neural Information Processing Systems 17 (NIPS 2004), MIT Press.
[4] H. Hermansky. "Perceptual Linear Predictive (PLP) Analysis of Speech," J. Acoust. Soc. Am., vol. 87, no. 4, pp. 1738-1752, Apr. 1990.
[5] H. Hermansky, N. Morgan, A. Bayya, and P. Kohn. "RASTA-PLP Speech Analysis Technique," In Proc. IEEE Int'l Conf. Acoustics, Speech & Signal Processing, vol. 1, pp. 121-124, San Francisco, 1992.
[6] R. Santamarta. "Breaking Gmail's Audio Captcha," http://blog.wintercore.com/?p=11, 2008.
[7] D. Ellis. "PLP and RASTA (and MFCC, and inversion) in Matlab using melfcc.m and invmelfcc.m," http://www.ee.columbia.edu/~dpwe/resources/matlab/rastamat/, 2006.
[8] C. Chang and C. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
[9] A. Schlaikjer. "A Dual-Use Speech CAPTCHA: Aiding Visually Impaired Web Users while Providing Transcriptions of Audio Streams," Technical Report CMU-LTI-07-014, Carnegie Mellon University, November 2007.
2,696 | 3,444 | A Transductive Bound for the Voted Classifier with an
Application to Semi-supervised Learning
Massih R. Amini
Laboratoire d'Informatique de Paris 6
Université Pierre et Marie Curie, Paris, France
[email protected]
François Laviolette
Université Laval
Québec (QC), Canada
[email protected]
Nicolas Usunier
Laboratoire d'Informatique de Paris 6
Université Pierre et Marie Curie, Paris, France
[email protected]
Abstract
We propose two transductive bounds on the risk of majority votes that are estimated over
partially labeled training sets. The first one involves the margin distribution of the classifier and a risk bound on its associated Gibbs classifier. The bound is tight when
the Gibbs bound is tight and when the errors of the majority vote classifier are concentrated in a
zone of low margin. In semi-supervised learning, considering the margin as an indicator
of confidence constitutes the working hypothesis of algorithms which search the decision
boundary in low density regions. Following this assumption, we propose to bound the error probability of the voted classifier on the examples whose margins are above a fixed
threshold. As an application, we propose a self-learning algorithm which iteratively assigns pseudo-labels to the set of unlabeled training examples that have their margin above
a threshold obtained from this bound. Empirical results on different datasets show the
effectiveness of our approach compared to the same algorithm and the TSVM in which
the threshold is fixed manually.
1 Introduction
Ensemble methods [5] return a weighted vote of baseline classifiers. It is well known that under the PAC-Bayes framework [9], one can obtain an estimation of the generalization error (also called risk) of such
majority votes (referred to as the Bayes classifier). Unfortunately, those bounds are generally not tight, mainly
because they are obtained indirectly, via a bound on a randomized combination of the baseline classifiers
(called the Gibbs classifier). Although the PAC-Bayes theorem gives tight risk bounds for Gibbs classifiers,
the bounds for their associated Bayes classifiers come at the cost of a worse risk (trivially a factor of 2, or, under
some margin assumption, a factor of 1 + ε). In practice the Bayes risk is often smaller than the Gibbs risk.
In this paper we present a transductive bound over the Bayes risk. This bound is also based on the risk of
the associated Gibbs classifier, but it takes as an additional information the exact knowledge of the margin
distribution of unlabeled data. This bound is obtained by analytically solving a linear program. The intuitive
idea here is that given the risk of the Gibbs classifier and the margin distribution, the risk of the majority
vote classifier is maximized when all its errors are located on low margin examples. We show that our
bound is tight when the associated Gibbs risk can accurately be estimated and when the Bayes classifier
makes most of its errors on low margin examples.
The proof of this transductive bound makes use of the (joint) probability over an unlabeled data set that the
majority vote classifier makes an error and the margin is above a given threshold. This second result naturally leads to considering the conditional probability that the majority vote classifier makes an error knowing
that the margin is above a given threshold.
This conditional probability is related to the idea that the margin is an indicator of confidence, which is
recurrent in semi-supervised self-learning algorithms [3,6,10,11,12]. These methods first train a classifier
on the labeled training examples. The classifier outputs then serve to assign pseudo class-labels to unlabeled
data having margin above a given threshold. The supervised method is retrained using the initial labeled set
and its previous predictions on unlabeled data as additional labeled examples. Practical algorithms almost
always fix the margin threshold manually.
In the second part of the paper, we propose to find this margin threshold by minimizing the bound on
the conditional probability. Empirical results on different datasets show the effectiveness of our approach
compared to TSVM [7] and to the same algorithm with a manually fixed threshold, as in [11].
In the remainder of the paper, we present, in section 2, our transductive bounds and show their outcomes
in terms of sufficient conditions under which unlabeled data may be of help in the learning process and a
linear programming method to estimate these bounds. In section 4, we present experimental results obtained
with a self-learning algorithm on different datasets, in which we use the bound presented in section 2.2 for
choosing the threshold that serves in the label assignment step of the algorithm. Finally, in section 5 we
discuss the outcomes of this study and give some pointers to further research.
2 Transductive Bounds on the Risk of the Voted Classifier
We are interested in the study of binary classification problems where the input space X is a subset of R^d and the output space is Y = {-1, +1}. We furthermore suppose that the training set is composed of a labeled set Z_l = ((x_i, y_i))_{i=1}^l \in Z^l and an unlabeled set X_U = (x'_i)_{i=l+1}^{l+u} \in X^u, where Z denotes the set X \times Y. We suppose that each pair (x, y) \in Z_l is drawn i.i.d. with respect to a fixed but unknown probability distribution D over X \times Y, and we denote the marginal distribution over X by D_X.
To simplify the notation and the proofs, we restrict ourselves to the deterministic labeling case, that is, for each x' \in X_U there is exactly one possible label, which we denote by y'.^1
In this study, we consider learning algorithms that work in a fixed hypothesis space H of binary classifiers (defined without reference to the training data). After observing the training set S = Z_l \cup X_U, the task of the learner is to choose a posterior distribution Q over H such that the Q-weighted majority vote classifier B_Q (also called the Bayes classifier) will have the smallest possible risk on the examples of X_U. Recall that the Bayes classifier is defined by

B_Q(x) = \mathrm{sgn}\big[ E_{h \sim Q}\, h(x) \big] \quad \forall x \in X,   (1)

where sgn(x) = +1 if the real number x > 0 and -1 otherwise. We further denote by G_Q the associated Gibbs classifier, which, for classifying any example x \in X, chooses a classifier h randomly according to the distribution Q. We accordingly define the transductive risk of G_Q over the unlabeled set by:

R_u(G_Q) \stackrel{def}{=} \frac{1}{u} \sum_{x' \in X_U} E_{h \sim Q} [[h(x') \neq y']]   (2)

where [[\pi]] = 1 if the predicate \pi holds and 0 otherwise, and for every unlabeled example x' \in X_U we refer to y' as its true unknown class label. In section 2.1 we show that if we consider the margin as an indicator of confidence and we dispose of a tight upper bound R_u^\delta(G_Q) on the risk of G_Q which holds with probability 1 - \delta over the random choice of Z_l and X_U (for example using Theorem 17 or 18 of Derbeko et al. [4]), we are then able to accurately bound the transductive risk of the Bayes classifier:

R_u(B_Q) \stackrel{def}{=} \frac{1}{u} \sum_{x' \in X_U} [[B_Q(x') \neq y']]   (3)

This result follows from a bound on the joint Bayes risk:

R_{u \wedge \theta}(B_Q) \stackrel{def}{=} \frac{1}{u} \sum_{x' \in X_U} [[B_Q(x') \neq y' \wedge m_Q(x') > \theta]]   (4)

where m_Q(\cdot) = |E_{h \sim Q} h(\cdot)| denotes the unsigned margin function. One practical issue that arises from this result is the possibility of defining a threshold \theta for which the bound is optimal; we use this threshold in a self-learning algorithm that iteratively assigns pseudo-labels to unlabeled examples having margin above it. We finally denote by E_u z the expectation of a random variable z with respect to the uniform distribution over X_U, and for notational convenience we equivalently define P_u as the uniform probability distribution over X_U, i.e., for any subset A, P_u(A) = \frac{1}{u}\, card(A).
^1 The proofs can be extended to the more general noisy case, but one has to replace the summation \sum_{x' \in X_U} by \sum_{(x',y') \in X_U \times \{-1,+1\}} P_{(x,y) \sim D}(y'|x') in the definitions of equations (3) and (4).
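As a concrete illustration of these quantities (our own sketch, not part of the original paper), the following Python computes the margins m_Q(x'), the Gibbs risk (2), and the Bayes risk (3) from a matrix of base-classifier votes; the names H, q and y_true are hypothetical.

import numpy as np

def transductive_risks(H, q, y_true):
    # H: (T, u) matrix of base-classifier votes in {-1, +1} on X_U
    # q: (T,) posterior weights of Q, summing to 1
    # y_true: (u,) true labels of the unlabeled examples
    vote = q @ H                          # E_{h~Q} h(x') for each x'
    margins = np.abs(vote)                # unsigned margins m_Q(x')
    bayes = np.sign(vote)                 # majority vote B_Q(x')
    r_bayes = np.mean(bayes != y_true)    # R_u(B_Q), eq. (3)
    per_h_error = np.mean(H != y_true[None, :], axis=1)
    r_gibbs = q @ per_h_error             # R_u(G_Q), eq. (2)
    return margins, r_gibbs, r_bayes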
2.1 Main Result
Our main result is the following theorem which provides two bounds on the transductive risks of the Bayes
classifier (3) and the joint Bayes risk (4).
Theorem 1 Suppose that B_Q is as in (1). Then for all Q and all \delta \in (0, 1], with probability at least 1 - \delta:

R_u(B_Q) \le \inf_{\gamma \in (0,1]} \Big\{ P_u(m_Q(x') < \gamma) + \frac{1}{\gamma} \big\lfloor K_u^\delta(Q) - M_Q^{<}(\gamma) \big\rfloor_+ \Big\}   (5)

where K_u^\delta(Q) = R_u^\delta(G_Q) + \frac{1}{2}(E_u\, m_Q(x') - 1), M_Q^{C}(t) = E_u\big[ m_Q(x')\, [[m_Q(x')\ C\ t]] \big] for C being < or \le, and \lfloor x \rfloor_+ = [[x > 0]]\, x denotes the positive part.
More generally, with probability at least 1 - \delta, for all Q and all \theta \ge 0:

R_{u \wedge \theta}(B_Q) \le \inf_{\gamma \in (\theta,1]} \Big\{ P_u(\theta < m_Q(x') < \gamma) + \frac{1}{\gamma} \big\lfloor K_u^\delta(Q) + M_Q^{\le}(\theta) - M_Q^{<}(\gamma) \big\rfloor_+ \Big\}   (6)
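Since the infimum in (6) is attained at one of the distinct margin values (or at 1), the bound can be evaluated exactly by a simple scan. The sketch below is our own illustration of this computation, not code from the paper; margins holds the values m_Q(x') on X_U and r_gibbs_delta stands for an upper bound R_u^\delta(G_Q).

import numpy as np

def joint_bayes_bound(margins, r_gibbs_delta, theta=0.0):
    # Right-hand side of eq. (6) for a fixed theta.
    K = r_gibbs_delta + 0.5 * (np.mean(margins) - 1.0)   # K_u^delta(Q)
    M_le_theta = np.mean(margins * (margins <= theta))   # M_Q^{<=}(theta)
    best = 1.0                                           # trivial bound
    candidates = np.append(np.unique(margins[margins > theta]), 1.0)
    for gamma in candidates:
        P_band = np.mean((margins > theta) & (margins < gamma))
        M_lt_gamma = np.mean(margins * (margins < gamma))  # M_Q^{<}(gamma)
        slack = max(K + M_le_theta - M_lt_gamma, 0.0) / gamma
        best = min(best, P_band + slack)
    return best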
In section 2.2 we will prove that the bound (5) simply follows from (6). In order to better understand the former bound on the risk of the Bayes classifier, denote by F_u^\delta(Q) the right-hand side of equation (5):

F_u^\delta(Q) \stackrel{def}{=} \inf_{\gamma \in (0,1]} \Big\{ P_u(m_Q(x') < \gamma) + \frac{1}{\gamma} \big\lfloor K_u^\delta(Q) - M_Q^{<}(\gamma) \big\rfloor_+ \Big\}

and consider the following special case, where the classifier makes most of its errors on unlabeled examples with low margin. Proposition 2, together with the explanations that follow, makes this idea clearer.
Proposition 2 Assume that \forall x \in X_U, m_Q(x) > 0, and that \exists C \in (0, 1] such that \forall \gamma > 0:

P_u(B_Q(x') \neq y' \wedge m_Q(x') = \gamma) \neq 0 \;\Rightarrow\; P_u(B_Q(x') \neq y' \wedge m_Q(x') < \gamma) \ge C \cdot P_u(m_Q(x') < \gamma)

Then, with probability at least 1 - \delta:

F_u^\delta(Q) - R_u(B_Q) \le \frac{1 - C}{C}\, R_u(B_Q) + \frac{R_u^\delta(G_Q) - R_u(G_Q)}{\gamma^*}   (7)

where \gamma^* = \sup\{\gamma \,|\, P_u(B_Q(x') \neq y' \wedge m_Q(x') = \gamma) \neq 0\}.
Now, suppose that the margin is an indicator of confidence. Then a Bayes classifier that makes its errors mostly in low-margin regions will admit a coefficient C in inequality (7) close to 1, and the bound of (5) becomes tight (provided we have an accurate upper bound R_u^\delta(G_Q)). In the next section we provide proofs of all the statements above and show in Lemma 4 a simple way to compute the best margin threshold for which the general bound on the joint Bayes risk is the lowest.
2.2 Proofs

All our proofs are based on the relationship between R_u(G_Q) and R_u(B_Q) and on the following lemma:

Lemma 3 Let (\gamma_1, \dots, \gamma_N) be the ordered sequence of the distinct strictly positive values of the margin on X_U, that is, \{\gamma_i, i = 1..N\} = \{m_Q(x') \,|\, x' \in X_U \wedge m_Q(x') > 0\} and \forall i \in \{1, \dots, N-1\}, \gamma_i < \gamma_{i+1}. Denote moreover b_i = P_u(B_Q(x') \neq y' \wedge m_Q(x') = \gamma_i) for i \in \{1, \dots, N\}. Then,

R_u(G_Q) = \sum_{i=1}^N b_i \gamma_i + \frac{1}{2}\big(1 - E_u\, m_Q(x')\big)   (8)

\forall \theta \in [0, 1],\quad R_{u \wedge \theta}(B_Q) = \sum_{i=k+1}^N b_i \quad \text{with } k = \max\{i \,|\, \gamma_i \le \theta\}   (9)
Proof Equation (9) follows from the definition R_{u \wedge \theta}(B_Q) = P_u(B_Q(x') \neq y' \wedge m_Q(x') > \theta).
Equation (8) is obtained from the definition of the margin m_Q, which writes as

\forall x' \in X_U,\quad m_Q(x') = \big| E_{h \sim Q}[[h(x') = 1]] - E_{h \sim Q}[[h(x') = -1]] \big| = \big| 1 - 2\, E_{h \sim Q}[[h(x') \neq y']] \big|

By noticing that for all x' \in X_U the condition E_{h \sim Q}[[h(x') \neq y']] > 1/2 is equivalent to the statement y'\, E_{h \sim Q} h(x') < 0, that is B_Q(x') \neq y', we can rewrite m_Q without absolute values and hence get:

E_{h \sim Q}[[h(x') \neq y']] = \frac{1}{2}(1 + m_Q(x'))\, [[B_Q(x') \neq y']] + \frac{1}{2}(1 - m_Q(x'))\, [[B_Q(x') = y']]   (10)

Finally, equation (8) follows by taking the mean over x' \in X_U and by reorganizing the sum using the notations b_i and \gamma_i. Recall that the values x' for which m_Q(x') = 0 count for 0 in the sum that defines the Gibbs risk (see equation (2) and the definition of m_Q). □
Proof of Theorem 1 First, we notice that equation (5) follows from equation (6), using M_Q^{\le}(0) = 0 and the inequality

R_u(B_Q) = R_{u \wedge 0}(B_Q) + P_u(B_Q(x') \neq y' \wedge m_Q(x') = 0) \le R_{u \wedge 0}(B_Q) + P_u(m_Q(x') = 0)

For proving equation (6), we know from Lemma 3 that for a fixed \theta \in [0, 1] there exist (b_1, \dots, b_N) such that 0 \le b_i \le P_u(m_Q(x') = \gamma_i) which satisfy equations (8) and (9).
Let k = \max\{i \,|\, \gamma_i \le \theta\}. Assuming now that we can obtain an upper bound R_u^\delta(G_Q) of R_u(G_Q) which holds with probability 1 - \delta over the random choices of Z_l and X_U, from the definition (4) of R_{u \wedge \theta}(B_Q), with probability 1 - \delta we then have

R_{u \wedge \theta}(B_Q) \le \max_{b_1, \dots, b_N} \sum_{i=k+1}^N b_i \quad \text{u.c.} \quad \forall i,\ 0 \le b_i \le P_u(m_Q(x') = \gamma_i) \ \text{and}\ \sum_{i=1}^N b_i \gamma_i \le K_u^\delta(Q)   (11)

where K_u^\delta(Q) = R_u^\delta(G_Q) - \frac{1}{2}(1 - E_u\, m_Q(x')). It turns out that the right-hand side of equation (11) is the solution of a linear program that can be solved analytically and is attained for:

b_i^* = \begin{cases} 0 & \text{if } i \le k, \\ \min\Big( P_u(m_Q(x') = \gamma_i),\ \Big\lfloor \frac{K_u^\delta(Q) - \sum_{k<j<i} \gamma_j P_u(m_Q(x') = \gamma_j)}{\gamma_i} \Big\rfloor_+ \Big) & \text{elsewhere.} \end{cases}   (12)

For clarity, we defer the proof of equation (12) to Lemma 4, and continue the proof of equation (6).
Using the notations defined in Theorem 1, we rewrite \sum_{k<j<i} \gamma_j P_u(m_Q(x') = \gamma_j) as M_Q^{<}(\gamma_i) - M_Q^{\le}(\theta).
We further define I = \max\big\{ i \,\big|\, K_u^\delta(Q) + M_Q^{\le}(\theta) - M_Q^{<}(\gamma_i) > 0 \big\}, which implies \sum_{i=k+1}^N b_i^* = \sum_{i=k+1}^I b_i^* from equations (11) and (12), with b_I^* = \frac{K_u^\delta(Q) + M_Q^{\le}(\theta) - M_Q^{<}(\gamma_I)}{\gamma_I}. We hence obtain a bound on R_{u \wedge \theta}(B_Q):

R_{u \wedge \theta}(B_Q) \le P_u(\theta < m_Q(x') < \gamma_I) + \frac{K_u^\delta(Q) + M_Q^{\le}(\theta) - M_Q^{<}(\gamma_I)}{\gamma_I}   (13)

The proof of the second point of Theorem 1 is just a rewriting of this result: from the definition of \gamma_I, for any \gamma > \gamma_I the right-hand side of equation (6) is equal to P_u(m_Q(x') < \gamma), which is greater than the right-hand side of equation (13); moreover, for \gamma < \gamma_I, we notice that \gamma \mapsto P_u(m_Q(x') < \gamma) + \frac{K_u^\delta(Q) + M_Q^{\le}(\theta) - M_Q^{<}(\gamma)}{\gamma} decreases. □
Lemma 4 (equation (12)) Let g_i, i = 1..N be such that 0 < g_i < g_{i+1}, let p_i \ge 0, i = 1..N, B \ge 0 and k \in \{1, \dots, N\}. Then the optimal value of the linear program

\max_{q_1, \dots, q_N} \sum_{i=k+1}^N q_i \quad \text{u.c.} \quad \forall i,\ 0 \le q_i \le p_i \ \text{and}\ \sum_{i=1}^N q_i g_i \le B   (14)

is attained for q^* defined by: \forall i \le k,\ q_i^* = 0, and \forall i > k,\ q_i^* = \min\Big( p_i,\ \Big\lfloor \frac{B - \sum_{j<i} q_j^* g_j}{g_i} \Big\rfloor_+ \Big).
Proof Define O = \{0\}^k \times \prod_{i=k+1}^N [0, p_i]. We will show that problem (14) has a unique optimal solution in O, and that this solution is q^*. In the rest of the proof, we denote F(q) = \sum_{i=k+1}^N q_i.
First, the problem is convex, feasible (take \forall i, q_i = 0) and bounded. Therefore there is an optimal solution q^{opt} \in \prod_{i=1}^N [0, p_i]. Define q^{opt,O} by q_i^{opt,O} = q_i^{opt} if i > k and q_i^{opt,O} = 0 otherwise. Then q^{opt,O} \in O, it is clearly feasible, and F(q^{opt,O}) = F(q^{opt}). Therefore, there is an optimal solution in O.
Now, for (q, q') \in R^N \times R^N, define I(q, q') = \{i \,|\, q_i > q'_i\}, and consider the lexicographic order \succeq:

\forall (q, q') \in R^N \times R^N,\quad q \succeq q' \iff I(q', q) = \emptyset \ \text{or}\ \big( I(q, q') \neq \emptyset \ \text{and}\ \min I(q, q') < \min I(q', q) \big)

The crucial point is that q^* is the greatest feasible solution in O for \succeq. Indeed, notice first that q^* is necessarily feasible. Let M' be the set \{i > k \,|\, q_i^* < p_i\}; we then have two possibilities to consider. (a) M' = \emptyset: in this case q^* is simply the maximal element for \succeq in O. (b) M' \neq \emptyset: in this case, by construction of q^*, we have \sum_{i=1}^N q_i^* g_i = B; let K = \min\{i > k \,|\, q_i^* < p_i\}.
We claim that there is no feasible q such that q \succ q^*. By way of contradiction, suppose such a q exists and let M = \min I(q, q^*). Then, if M \le k, we have q_M > 0, and therefore q is not in O; if k < M < K, we have q_M > p_M, which yields the same result; and finally, if M \ge K, we have \sum_{i=1}^N q_i g_i > \sum_{i=1}^N q_i^* g_i = B, and q is not feasible.
We now show that if q \in O is feasible and q^* \succ q, then q is not optimal (which is equivalent to showing that an optimal solution in O must be the greatest feasible solution for \succeq).
Let q \in O be a feasible solution such that q^* \succ q. Since q^* \succ q, I(q^*, q) is not empty. If I(q, q^*) = \emptyset, we have F(q^*) > F(q), and therefore q is not optimal. We now treat the case where I(q, q^*) \neq \emptyset.
Let K = \min I(q^*, q) and M = \min I(q, q^*). We have q_M > 0 by definition, and K < M because q^* \succ q and q \in O. Let \lambda = \min\big( q_M,\ \frac{g_K}{g_M}(p_K - q_K) \big) and define q' by:

q'_i = q_i \ \text{if}\ i \notin \{K, M\},\qquad q'_K = q_K + \frac{g_M}{g_K}\lambda \quad \text{and} \quad q'_M = q_M - \lambda.

We can see that q' satisfies the box constraints by the definition of \lambda, and \sum_i q'_i g_i = \sum_i q_i g_i + g_K \frac{g_M}{g_K}\lambda - g_M \lambda = \sum_i q_i g_i \le B, so q' is feasible. Moreover, F(q') = F(q) + \lambda(\frac{g_M}{g_K} - 1) > F(q) since g_K < g_M and \lambda > 0. Thus, q is not optimal.
In summary, we have shown that there is an optimal solution in O, and that a feasible solution in O must be the greatest feasible solution for the lexicographic order \succeq in O to be optimal; this solution is q^*. □
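Computationally, Lemma 4 says that the linear program (14) is solved by a single greedy water-filling pass. Below is a small Python rendering of ours (0-based indices, so the first k entries are forced to zero as in the lemma):

def solve_greedy_lp(g, p, B, k):
    # Optimum of (14): g = increasing positive costs, p = box bounds,
    # B = budget; q[0..k-1] is forced to 0.
    q = [0.0] * len(g)
    budget = B
    for i in range(k, len(g)):
        q[i] = min(p[i], max(budget, 0.0) / g[i])  # min(p_i, floor_+ / g_i)
        budget -= q[i] * g[i]
    return q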
Proof of Proposition 2 First, let us claim that

R_u(B_Q) \ge P_u(B_Q(x') \neq y' \wedge m_Q(x') < \gamma^*) + \frac{1}{\gamma^*} \big\lfloor K_u(Q) - M_Q^{<}(\gamma^*) \big\rfloor_+   (15)

where \gamma^* = \sup\{\gamma \,|\, P_u(B_Q(x') \neq y' \wedge m_Q(x') = \gamma) \neq 0\} and K_u(Q) = R_u(G_Q) + \frac{1}{2}(E_u\, m_Q(x') - 1).
Indeed, assume for now that equation (15) is true. Then, by the assumption of the proposition, we have:

R_u(B_Q) \ge C \cdot P_u(m_Q(x') < \gamma^*) + \frac{1}{\gamma^*} \big\lfloor K_u(Q) - M_Q^{<}(\gamma^*) \big\rfloor_+   (16)

Since F_u^\delta(Q) \le P_u(m_Q(x') < \gamma^*) + \frac{1}{\gamma^*} \big\lfloor K_u^\delta(Q) - M_Q^{<}(\gamma^*) \big\rfloor_+, with probability at least 1 - \delta we obtain:

F_u^\delta(Q) - R_u(B_Q) \le (1 - C)\, P_u(m_Q(x') < \gamma^*) + \frac{R_u^\delta(G_Q) - R_u(G_Q)}{\gamma^*}   (17)

This is due to the fact that \lfloor K_u^\delta(Q) - M_Q^{<}(\gamma^*) \rfloor_+ - \lfloor K_u(Q) - M_Q^{<}(\gamma^*) \rfloor_+ \le R_u^\delta(G_Q) - R_u(G_Q) when R_u^\delta(G_Q) \ge R_u(G_Q). Taking once again equation (16), we have P_u(m_Q(x') < \gamma^*) \le \frac{1}{C} R_u(B_Q). Plugging this back into equation (17) yields Proposition 2.
Now, let us prove the claim (equation (15)). Since \forall x' \in X_U, m_Q(x') > 0, we have R_u(B_Q) = R_{u \wedge 0}(B_Q). Using the notations of Lemma 3, denote by K the index such that \gamma_K = \gamma^*. Then it follows from equation (8) of Lemma 3 that R_u(G_Q) = \sum_{i=1}^K b_i \gamma_i + \frac{1}{2}(1 - E_u\, m_Q(x')). Solving for b_K in this equality yields b_K = \frac{K_u(Q) - \sum_{i=1}^{K-1} b_i \gamma_i}{\gamma_K}, and we therefore have b_K \ge \frac{1}{\gamma_K} \lfloor K_u(Q) - M_Q^{<}(\gamma^*) \rfloor_+ since b_K \ge 0 and \forall i, b_i \le P_u(m_Q(x') = \gamma_i). Finally, from equation (9), we have R_u(B_Q) = \sum_{i=1}^K b_i, which implies equation (15) by using the lower bound on b_K and the fact that \sum_{i=1}^{K-1} b_i = P_u(B_Q(x') \neq y' \wedge m_Q(x') < \gamma^*). □
In general, good PAC-Bayesian approximations of R_u(G_Q) are difficult to carry out in supervised learning [4], mostly due to the huge number of instances needed to obtain accurate approximations of the distribution of the absolute values of the margin. In this section we have shown that the transductive setting allows for high precision in passing from the risk R_u(G_Q) of the Gibbs classifier to the risk R_u(B_Q) of the Bayes classifier, provided the Bayes classifier makes its errors mostly in low-margin regions.
3 Relationship with margin-based self-learning algorithms
In Proposition 2 we considered the hypothesis that the margin is an indicator of confidence as one of the sufficient conditions leading to a tight approximation of the risk of the Bayes classifier, R_u(B_Q). This assumption constitutes the working hypothesis of margin-based self-learning algorithms, in which a classifier is first built on the labeled training set. The output of the learner can then be used to assign pseudo class-labels to unlabeled examples having a margin above a fixed threshold (denoted by the set Z_U^♮ in what follows), and the supervised method is repeatedly retrained upon the set of the initial labeled examples and the unlabeled examples that have been classified in the previous steps. The idea behind this pseudo-labeling is that unlabeled examples having a margin above the threshold are less subject to error-prone labels, or equivalently, are those which have a small conditional Bayes error defined as:

R_{u|\theta}(B_Q) \stackrel{def}{=} P_u\big( B_Q(x') \neq y' \,\big|\, m_Q(x') > \theta \big) = \frac{R_{u \wedge \theta}(B_Q)}{P_u(m_Q(x') > \theta)}   (18)
In this case the label assignment of unlabeled examples upon a margin criterion has the effect of pushing the decision boundary away from the unlabeled data. This strategy follows the cluster assumption [10] used in the design of some semi-supervised learning algorithms, where the decision boundary is supposed to pass through a region of low pattern density. Though margin-based self-learning algorithms are inductive in essence, their learning phase is closely related to transductive learning, which predicts the labels of a given unlabeled set. Indeed, in both cases the pseudo class-label assignment of unlabeled examples is interrelated with their margin.
For all these algorithms the choice of the threshold is a crucial point: with a low threshold the risk of assigning false labels to examples is high, while a higher value of the threshold would not provide enough examples to enhance the current decision function. In order to examine the effect of fixing the threshold versus computing it automatically, we considered the margin-based self-training algorithm proposed by Tür et al. [11, Figure 6] (referred to as SLA in the following), in which unlabeled examples having margin above a fixed threshold are iteratively added to the labeled set and are not considered in later rounds for label distribution. In our approach (SLA^θ, figure 1), the best threshold minimizing the conditional Bayes error (18) from equation (6) of Theorem 1 is computed at each round of the algorithm (line 3), while the threshold is kept fixed in [11, Figure 6] (line 3 sits outside the repeat loop). The bound R_u^δ(G_Q) on the risk of the Gibbs classifier, which is involved in the computation of the threshold in equation (18), was fixed to its worst value 0.5.

Input: labeled and unlabeled training sets Z_l, X_U
Initialize:
  (1) Train a classifier H on Z_l
  (2) Set Z_U^♮ ← ∅
repeat
  (3) Compute the margin threshold θ minimizing (18) from (6)
  (4) S ← {(x', y') | x' ∈ X_U; m_Q(x') ≥ θ ∧ y' = sgn(H(x'))}
  (5) Z_U^♮ ← Z_U^♮ ∪ S, X_U ← X_U \ S
  (6) Learn a classifier H by optimizing a global loss function on Z_l and Z_U^♮
until X_U is empty or there are no additions to Z_U^♮
Output: the final classifier H

Figure 1: Self-learning algorithm (SLA^θ)
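A minimal Python sketch of figure 1 follows; it is our own rendering rather than the authors' code, and train_classifier and best_threshold are hypothetical stand-ins for the boosting step (6) and the threshold computation of line (3).

import numpy as np

def sla_theta(X_lab, y_lab, X_unlab, train_classifier, best_threshold):
    # Self-learning with the adaptive margin threshold of figure 1.
    H = train_classifier(list(X_lab), list(y_lab))      # step (1)
    pool = list(X_unlab)                                # remaining X_U
    X_pseudo, y_pseudo = [], []                         # the set Z_U-natural
    while pool:
        scores = np.array([H(x) for x in pool])
        margins = np.abs(scores)
        theta = best_threshold(margins)                 # step (3): minimize (18)
        keep = margins >= theta                         # step (4)
        if not keep.any():
            break
        X_pseudo += [x for x, m in zip(pool, keep) if m]
        y_pseudo += list(np.sign(scores[keep]))
        pool = [x for x, m in zip(pool, keep) if not m] # step (5)
        H = train_classifier(list(X_lab) + X_pseudo,    # step (6)
                             list(y_lab) + y_pseudo)
    return H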
4 Experiments and Results

In our experiments, we employed a boosting algorithm optimizing the following exponential loss^2 as the baseline learner (line (6), figure 1):

L_c(H, Z_l, Z_U^♮) = \frac{1}{l} \sum_{x \in Z_l} e^{-yH(x)} + \frac{1}{|Z_U^♮|} \sum_{x' \in Z_U^♮} e^{-y'H(x')}   (19)

^2 Bennett et al. [1] have shown that the minimization of (19) allows one to reach a local minimum of the margin loss function L_M(H, Z_l, Z_U^♮) = \frac{1}{l} \sum_{x \in Z_l} e^{-yH(x)} + \frac{1}{|Z_U^♮|} \sum_{x' \in Z_U^♮} e^{-|H(x')|}.
Here H = \sum_t \alpha_t h_t is a linear weighted sum of decision stumps h_t, each uniquely defined by an input feature j_t \in \{1, \dots, d\} and a threshold \theta_t as:

h_t(x) = 2\,[[\phi_{j_t}(x) > \theta_t]] - 1

with \phi_j(x) the j-th feature characteristic of x. Within this setting, the Gibbs classifier is defined as a random choice from the set of baseline classifiers \{h_t\}_{t=1}^T according to Q such that \forall t, P_Q(h_t) = \frac{|\alpha_t|}{\sum_t |\alpha_t|}. Accordingly, the Bayes classifier is simply the weighted voting classifier B_Q = sign(H). Although the self-learning model (SLA^θ) is an inductive algorithm, we carried out experiments in a transductive setting in order to compare results with the transductive SVM of Joachims [7] and the self-learning algorithm (SLA) described in [11, Figure 6]. For the latter, after training a classifier H on Z_l (figure 1, step 1), we fixed different margin thresholds considering the lowest and the highest output values of H over the labeled training examples. We evaluated the performance of the algorithms on 4 collections from the benchmark data sets^3 used in [3] as well as 2 data sets from the UCI repository [2]. In the latter case, we chose sets large enough for reasonable labeled/unlabeled partitioning that represent binary classification problems. Each experiment was repeated 20 times by partitioning, each time, the data set into two random labeled and unlabeled training sets.
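For concreteness, the decision stumps and the induced Gibbs/Bayes pair described above can be written as follows; this is our own illustration, with the weights alpha assumed to come from the boosting run.

import numpy as np

def stump(x, j, thresh):
    # h_t(x) = 2[[phi_j(x) > theta_t]] - 1, with phi_j the j-th feature
    return 2.0 * (x[j] > thresh) - 1.0

def bayes_output(x, stumps, alpha):
    # B_Q(x) = sign(H(x)) with H(x) = sum_t alpha_t h_t(x)
    H = sum(a * stump(x, j, th) for a, (j, th) in zip(alpha, stumps))
    return np.sign(H)

def gibbs_distribution(alpha):
    # P_Q(h_t) = |alpha_t| / sum_t |alpha_t|
    a = np.abs(np.asarray(alpha, dtype=float))
    return a / a.sum()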
Table 1: Means and standard deviations of the classification error on unlabeled training data over the 20 trials for each data set. d denotes the dimension; l and u refer respectively to the numbers of labeled and unlabeled examples in each data set. The symbol ↓ indicates that the performance is significantly worse than the best result in the same setting.

Dataset  d    l+u   |  l    SLA          SLA^θ        TSVM        |  l    SLA          SLA^θ        TSVM
COIL2    241  1500  |  10   .302↓±.042   .255±.019    .286↓±.031  |  100  .148↓±.015   .134±.011    .152↓±.016
DIGIT    241  1500  |  10   .201↓±.038   .149±.012    .156±.014   |  100  .091↓±.010   .071±.005    .087↓±.009
G241c    241  1500  |  10   .314↓±.037   .248±.018    .252±.021   |  100  .201↓±.017   .191±.014    .196±.022
USPS     241  1500  |  10   .342↓±.024   .278↓±.022   .261±.019   |  100  .114↓±.012   .112↓±.012   .103±.011
PIMA     8    768   |  10   .379↓±.026   .305±.021    .318↓±.018  |  50   .284↓±.019   .266±.018    .276±.021
WDBC     30   569   |  10   .168↓±.016   .124±.011    .141↓±.016  |  50   .112↓±.011   .079±.007    .108↓±.010
For each data set, means and standard deviations of the classification error on unlabeled training data over the 20 trials are shown in Table 1 for 2 different splits of the labeled and unlabeled sets. The symbol ↓ indicates that the performance is significantly worse than the best result, according to a Wilcoxon rank sum test used at a p-value threshold of 0.01 [8]. In addition, we show in figure 2 the evolution, on the COIL2, DIGIT and USPS data sets, of the classification error and of both risks of the Gibbs classifier (on the labeled and unlabeled training sets) for different numbers of rounds of the SLA^θ algorithm. These figures are obtained from one of the 20 trials that we ran for these collections.
The most important conclusion from these empirical results is that, for all data sets, the self-learning algorithm becomes competitive when the margin threshold is found automatically rather than fixed manually. The augmented self-learning algorithm achieves performance statistically better than or equivalent to that of TSVM in most cases, while it outperforms the initial method over all runs.
Figure 2: Classification error, and train and test Gibbs errors, with respect to the iterations of the SLA^θ algorithm for a fixed number of labeled training data, l = 10.

^3 http://www.kyb.tuebingen.mpg.de/ssl-book/benchmarks.html
In SLA^θ the automatic choice of the margin threshold has the effect of selecting, at the first rounds of the algorithm, many unlabeled examples whose class labels can be predicted with high confidence by the voted classifier. The exponential fall of the classification error in figure 2 can be explained by the addition of these highly informative pseudo-labeled examples at the first steps of the learning process (figure 1). After this fall, few examples are added because the learning algorithm does not increase the margin on unlabeled data. Hence, the number of additional pseudo-labeled examples decreases, resulting in a plateau in the classification error curves in figure 2. We further notice that the error of the Gibbs classifier on labeled data increases quickly to a stationary point and that its error on the unlabeled examples does not vary over time.
5 Conclusions
The contribution of this paper is twofold. First, we proposed a bound on the risk of the voted classifier using the margin distribution of unlabeled examples and an estimation of the risk of the Gibbs classifier. We have shown that our bound is a good approximation of the true risk when the errors of the associated Gibbs classifier can accurately be estimated and when the voted classifier makes most of its errors on low-margin examples.
The proof of the bound passes through a second bound, on the joint probability that the voted classifier makes an error and that the margin is above a given threshold. This tool led to the conditional probability that the voted classifier makes an error knowing that the margin is above a given threshold. We showed that the search for a margin threshold minimizing this conditional probability can be carried out by analytically solving a linear program.
This resolution led to our second contribution, which is to find the margin threshold automatically in a self-learning algorithm. Empirical results on a number of data sets have shown that the adaptive threshold enhances the performance of the self-learning algorithm.
References
[1] Bennett, K., Demiriz, A. & Maclin, R. (2002) Exploiting unlabeled data in ensemble methods. In Proc. ACM Int.
Conf. Knowledge Discovery and Data Mining, 289-296.
[2] Blake, C., Keogh, E. & Merz, C.J. (1998) UCI repository of machine learning databases. University of California,
Irvine. [on-line] http://www.ics.uci.edu/ mlearn/MLRepository.html
[3] Chapelle, O., Schölkopf, B. & Zien, A. (2006) Semi-supervised learning. MA: MIT Press.
[4] Derbeko, P., El-Yaniv, R. & Meir, R. (2004) Explicit learning curves for transduction and application to clustering
and compression algorithms. Journal of Artificial Intelligence Research 22:117-142.
[5] Dietterich, T.G. (2000) Ensemble Methods in Machine Learning. In First International Workshop on Multiple
Classifier Systems, 1-15.
[6] Grandvalet, Y. & Bengio, Y. (2005) Semi-supervised learning by entropy minimization. In Advances in Neural
Information Processing Systems 17, 529-536. Cambridge, MA: MIT Press.
[7] Joachims, T. (1999) Transductive Inference for Text Classification using Support Vector Machines. In Proceedings
of the 16th International Conference on Machine Learning, 200-209.
[8] Lehmann, E.L. (1975) Nonparametric Statistical Methods Based on Ranks. McGraw-Hill, New York.
[9] McAllester, D. (2003) Simplified PAC-Bayesian margin bounds. In Proc. od the 16th Annual Conference on
Learning Theory, Lecture Notes in Artificial Intelligence, 203-215.
[10] Seeger, M. (2002) Learning with labeled and unlabeled data. Technical report, Institute for Adaptive and Neural
Computation, University of Edinburgh.
[11] Tür, G., Hakkani-Tür, D.Z. & Schapire, R.E. (2005) Combining active and semi-supervised learning for spoken
language understanding. Journal of Speech Communication 45(2):171-186.
[12] Vittaut, J.-N., Amini, M.-R. & Gallinari, P. (2002) Learning Classification with Both Labeled and Unlabeled Data.
In European Conference on Machine Learning, 468-476.
2,697 | 3,445 | Regularized Policy Iteration
Amir-massoud Farahmand1, Mohammad Ghavamzadeh2, Csaba Szepesvári1, Shie Mannor3
1 Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
2 INRIA Lille - Nord Europe, Team SequeL, France
3 Department of ECE, McGill University, Canada - Department of EE, Technion, Israel
Abstract
In this paper we consider approximate policy-iteration-based reinforcement learning algorithms. In order to implement a flexible function approximation scheme
we propose the use of non-parametric methods with regularization, providing a
convenient way to control the complexity of the function approximator. We propose two novel regularized policy iteration algorithms by adding L2 -regularization
to two widely-used policy evaluation methods: Bellman residual minimization
(BRM) and least-squares temporal difference learning (LSTD). We derive efficient implementation for our algorithms when the approximate value-functions
belong to a reproducing kernel Hilbert space. We also provide finite-sample performance bounds for our algorithms and show that they are able to achieve optimal
rates of convergence under the studied conditions.
1 Introduction
A key idea in reinforcement learning (RL) is to learn an action-value function which can then be
used to derive a good control policy [15]. When the state space is large or infinite, value-function
approximation techniques are necessary, and their quality has a major impact on the quality of the
learned policy. Existing techniques include linear function approximation (see, e.g., Chapter 8 of
[15]), kernel regression [12], regression tree methods [5], and neural networks (e.g., [13]). The user
of these techniques often has to make non-trivial design decisions such as what features to use in
the linear function approximator, when to stop growing trees, how many trees to grow, what kernel
bandwidth to use, or what neural network architecture to employ. Of course, the best answers to
these questions depend on the characteristics of the problem in hand. Hence, ideally, these questions
should be answered in an automated way, based on the training data.
A highly desirable requirement for any learning system is to adapt to the actual difficulty of the
learning problem. If the problem is easier (than some other problem), the method should deliver
better solution(s) with the same amount of data. In the supervised learning literature, such procedures are called adaptive [7]. There are many factors that can make a problem easier, such as when
only a few of the inputs are relevant, when the input data lies on a low-dimensional submanifold of
the input space, when special noise conditions are met, when the expansion of the target function
is sparse in a basis, or when the target function is highly smooth. These are called the regularities
of the problem. An adaptive procedure is built in two steps: 1) designing flexible methods with
a few tunable parameters that are able to deliver "optimal" performance for any targeted regularity, provided that their parameters are chosen properly, and 2) tuning the parameters automatically
(automatic model-selection).
* Csaba Szepesvári is on leave from MTA SZTAKI. This research was funded in part by the National Science and Engineering Research Council (NSERC), iCore and the Alberta Ingenuity Fund. We acknowledge the insightful comments by the reviewers.

Smoothness is one of the most important regularities: in regression, when the target function has smoothness of order p, the optimal rate of convergence of the squared L2-error is n^{-2p/(2p+d)}, where n is the number of data points and d is the dimension of the input space [7]. Hence, the rate of convergence is higher for larger p's. Methods that achieve the optimal rate are more desirable, at
least in the limit for large n, and seem to perform well in practice. However, only a few methods
in the regression literature are known to achieve the optimal rates. In fact, it is known that tree
methods with averaging in the leaves, linear methods with piecewise constant basis functions, and
kernel estimates do not achieve the optimal rate, while neural networks and regularized least-squares
estimators do [7]. An advantage of using a regularized least-squares estimator compared to neural
networks is that these estimators do not get stuck in local minima and therefore their training is more
reliable.
In this paper we study how to add L2 -regularization to value function approximation in RL. The
problem setting is to find a good policy in a batch or active learning scenario for infinite-horizon
expected total discounted reward Markovian decision problems with continuous state and finite action spaces. We propose two novel policy evaluation algorithms by adding L2 -regularization to two
widely-used policy evaluation methods in RL: Bellman residual minimization (BRM) [16; 3] and
least-squares temporal difference learning (LSTD) [4]. We show how our algorithms can be implemented efficiently when the value-function approximator belongs to a reproducing kernel Hilbert
space. We also prove finite-sample performance bounds for our algorithms. In particular, we show
that they are able to achieve a rate that is as good as the corresponding regression rate when the
value functions belong to a known smoothness class. We further show that this rate of convergence
carries through to the performance of a policy found by running policy iteration with our regularized
policy evaluation methods. The results indicate that from the point of view of convergence rates
RL is not harder than regression estimation, answering an open question of Antos et al. [2]. Due
to space limitations, we do not present the proofs of our theorems in the paper; they can be found,
along with some empirical results using our algorithms, in [6].
To our best knowledge this is the first work that addresses finite-sample performance of a regularized
RL algorithm. While regularization in RL has not been thoroughly explored, there has been a few
works that used regularization. Xu et al. [17] used sparsification in LSTD. Although sparsification
does achieve some form of regularization, to the best of our knowledge the effect of sparsification
on generalization error is not well-understood. Note that sparsification is fundamentally different
from our approach. In our method the empirical error and the penalties jointly determine the solution, while in sparsification first a subset of points is selected independently of the empirical error,
which are then used to obtain a solution. Comparing the efficiency of these methods requires further
research, but the two methods can be combined, as was done in our experiments. Jung and Polani
[9] explored adding regularization to BRM, but this solution is restricted to deterministic problems.
The main contribution of that work was the development of fast incremental algorithms using sparsification techniques. L1 penalties have been considered by [11], who were similarly concerned with
incremental implementations and computational efficiency.
2 Preliminaries
As we shall work with continuous spaces, we first introduce a few concepts from analysis. This is
followed by an introduction to Markovian Decision Processes (MDPs) and the associated concepts
and notation.
For a measurable space with domain S, we let M(S) and B(S; L) denote the set of probability measures over S and the space of bounded measurable functions with domain S and bound 0 < L < \infty, respectively. For a measure \nu \in M(S) and a measurable function f : S \to R, we define the L2(\nu)-norm of f, \|f\|_\nu, and its empirical counterpart \|f\|_{\nu,n}, as follows:

\|f\|_\nu^2 = \int |f(s)|^2\, \nu(ds), \qquad \|f\|_{\nu,n}^2 \stackrel{def}{=} \frac{1}{n} \sum_{t=1}^n f^2(s_t),\quad s_t \sim \nu.   (1)

If \{s_t\} is ergodic, \|f\|_{\nu,n}^2 converges to \|f\|_\nu^2 as n \to \infty.
A finite-action discounted MDP is a tuple (X, A, P, S, \gamma), where X is the state space, A = \{a_1, a_2, \dots, a_M\} is the finite set of M actions, P : X \times A \to M(X) is the transition probability kernel with P(\cdot|x, a) defining the next-state distribution upon taking action a in state x, S(\cdot|x, a) gives the corresponding distribution of immediate rewards, and \gamma \in (0, 1) is a discount factor. We make the following assumption on the MDP:
Assumption A1 (MDP Regularity) The set of states X is a compact subspace of the d-dimensional Euclidean space, and the expected immediate rewards r(x, a) = \int r\, S(dr|x, a) are bounded by R_max.
We denote by \pi : X \to M(A) a stationary Markov policy. A policy is deterministic if it is a mapping from states to actions, \pi : X \to A. The value and the action-value functions of a policy \pi, denoted respectively by V^\pi and Q^\pi, are defined as the expected sum of discounted rewards that are encountered when the policy \pi is executed:

V^\pi(x) = E_\pi\Big[ \sum_{t=0}^\infty \gamma^t R_t \,\Big|\, X_0 = x \Big], \qquad Q^\pi(x, a) = E_\pi\Big[ \sum_{t=0}^\infty \gamma^t R_t \,\Big|\, X_0 = x, A_0 = a \Big].

Here R_t denotes the reward received at time step t: R_t \sim S(\cdot|X_t, A_t), X_t evolves according to X_{t+1} \sim P(\cdot|X_t, A_t), and A_t is sampled from the policy, A_t \sim \pi(\cdot|X_t). It is easy to see that for any policy \pi, the functions V^\pi and Q^\pi are bounded by V_max = Q_max = R_max/(1 - \gamma). The action-value function of a policy is the unique fixed point of the Bellman operator T^\pi : B(X \times A) \to B(X \times A) defined by

(T^\pi Q)(x, a) = r(x, a) + \gamma \int Q(y, \pi(y))\, P(dy|x, a).
Given an MDP, the goal is to find a policy that attains the best possible values, V^*(x) = \sup_\pi V^\pi(x), \forall x \in X. The function V^* is called the optimal value function. Similarly, the optimal action-value function is defined as Q^*(x, a) = \sup_\pi Q^\pi(x, a), \forall x \in X, \forall a \in A. We say that a deterministic policy \pi is greedy w.r.t. an action-value function Q, and write \pi = \hat{\pi}(\cdot; Q), if \pi(x) \in \mathrm{argmax}_{a \in A} Q(x, a), \forall x \in X. Greedy policies are important because any policy greedy w.r.t. Q^* is optimal. Hence, knowing Q^* is sufficient for behaving optimally. In this paper we deal with a variant of the policy iteration algorithm [8]: in the basic version of policy iteration, an optimal policy is found by computing a series of policies, each being greedy w.r.t. the action-value function of the previous one.
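In code, extracting a greedy policy from an action-value function is a pointwise argmax; the short sketch below is ours, with Q any callable on state-action pairs.

def greedy_policy(Q, actions):
    # pi(x) in argmax_{a in A} Q(x, a); ties broken by the first maximizer
    def pi(x):
        return max(actions, key=lambda a: Q(x, a))
    return pi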
Throughout the paper we denote by F^M \subset \{ f : X \times A \to R \} some subset of real-valued functions over the state-action space X \times A, and use it as the set of admissible functions in the optimization problems of our algorithms. We will treat f \in F^M as f \equiv (f_1, \dots, f_M), f_j(x) = f(x, a_j), j = 1, \dots, M. For \nu \in M(X), we extend \|\cdot\|_\nu and \|\cdot\|_{\nu,n} defined in Eq. (1) to F^M by \|f\|_\nu^2 = \frac{1}{M} \sum_{j=1}^M \|f_j\|_\nu^2, and

\|f\|_{\nu,n}^2 = \frac{1}{nM} \sum_{t=1}^n \sum_{j=1}^M I_{\{A_t = a_j\}} f_j^2(X_t) = \frac{1}{nM} \sum_{t=1}^n f^2(X_t, A_t),   (2)

where I_{\{\cdot\}} is the indicator function: for an event E, I_{\{E\}} = 1 if and only if E holds and I_{\{E\}} = 0, otherwise.
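The empirical norm (2) over a trajectory is a plain average; an illustrative one-liner of ours (the names are hypothetical):

def empirical_norm_sq(f, states, actions, M):
    # ||f||_{nu,n}^2 = (1 / (n M)) * sum_t f(X_t, A_t)^2, cf. eq. (2)
    n = len(states)
    return sum(f(x, a) ** 2 for x, a in zip(states, actions)) / (n * M)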
3 Approximate Policy Evaluation
The ability to evaluate a given policy is the core requirement for running policy iteration. Loosely speaking, in policy evaluation the goal is to find a "close enough" approximation V (or Q) of the value (or action-value) function of a fixed target policy \pi, V^\pi (or Q^\pi). There are several interpretations of the term "close enough" in this context, and it does not necessarily refer to the minimization of some norm. If Q^\pi (or noisy estimates of it) is available at a number of points (X_t, A_t), one can form a training set of examples of the form \{(X_t, A_t), Q_t\}_{1 \le t \le n}, where Q_t is an estimate of Q^\pi(X_t, A_t), and then use a supervised learning algorithm to infer a function Q that is meant to approximate Q^\pi.
Unfortunately, in the context of control, the target function Q^\pi is not known in advance, and high-quality samples of it are often very expensive to obtain, if this option is available at all. Most often these values have to be inferred from the observed system dynamics, where the observations do not necessarily come from following the target policy \pi. This is referred to as the off-policy learning problem in the RL literature. The difficulty that arises is similar to the problem of differing training and test distributions in supervised learning. Many policy evaluation techniques have been developed in RL. Here we concentrate on the ones that are directly related to our proposed algorithms.
3.1 Bellman Residual Minimization
The idea of Bellman residual minimization (BRM) goes back at least to the work of Schweitzer and Seidmann [14]. It was later used in the RL community by Williams and Baird [16] and Baird [3]. The basic idea of BRM comes from writing the fixed-point equation for the Bellman operator in the form Q^\pi - T^\pi Q^\pi = 0. When Q^\pi is replaced by some other function Q, the left-hand side becomes non-zero. The resulting quantity, Q - T^\pi Q, is called the Bellman residual of Q. If the magnitude of the Bellman residual, \|Q - T^\pi Q\|, is small, then Q can be expected to be a good approximation of Q^\pi. For an analysis using supremum norms see, e.g., [16]. It seems, however, more natural to use a weighted L2-norm to measure the magnitude of the Bellman residual, as it leads to an optimization problem with favorable characteristics and enables an easy connection to regression function estimation [7]. Hence, we define the loss function L_BRM(Q; \pi) = \|Q - T^\pi Q\|_\nu^2, where \nu is the stationary distribution of states in the input data. Using Eq. (2) with samples (X_t, A_t) and replacing (T^\pi Q)(X_t, A_t) with its sample-based approximation (\hat{T}^\pi Q)(X_t, A_t) = R_t + \gamma Q(X_{t+1}, \pi(X_{t+1})), the empirical counterpart of L_BRM(Q; \pi) can be written as

\hat{L}_{BRM}(Q; \pi, n) = \frac{1}{nM} \sum_{t=1}^n \Big[ Q(X_t, A_t) - \big( R_t + \gamma Q(X_{t+1}, \pi(X_{t+1})) \big) \Big]^2   (3)
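The empirical loss (3) over a trajectory of observed transitions reads as follows; this is our own sketch, with transitions a hypothetical list of (x, a, r, x_next) tuples.

def empirical_brm_loss(Q, pi, transitions, gamma, M):
    # L-hat_BRM(Q; pi, n), eq. (3)
    n = len(transitions)
    total = 0.0
    for x, a, r, x_next in transitions:
        residual = Q(x, a) - (r + gamma * Q(x_next, pi(x_next)))
        total += residual ** 2
    return total / (n * M)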
However, as is well known (e.g., see [15], [10]), in general \hat{L}_{BRM} is not an unbiased estimate of L_BRM: E[\hat{L}_{BRM}(Q; \pi, n)] \neq L_BRM(Q; \pi). The reason is that stochastic transitions may lead to a non-vanishing variance term in Eq. (3). A common suggestion to deal with this problem is to use uncorrelated or "double" samples in \hat{L}_{BRM}: according to this proposal, for each state-action pair in the sample, at least two next states should be generated (e.g., see [15]). This is neither realistic nor sample-efficient unless a generative model of the environment is available or the state transitions are deterministic. Antos et al. [2] recently proposed a de-biasing procedure for this problem; we will refer to it as modified BRM in this paper. The idea is to cancel the unwanted variance by introducing an auxiliary function h and a new loss function L_BRM(Q, h; \pi) = L_BRM(Q; \pi) - \|h - T^\pi Q\|_\nu^2, and approximating the action-value function Q^\pi by solving
\hat{Q}_{BRM} = \mathrm{argmin}_{Q \in F^M}\ \sup_{h \in F^M} L_BRM(Q, h; \pi),   (4)

where the supremum comes from the negative sign of \|h - T^\pi Q\|_\nu^2. They showed that optimizing the new loss function still makes sense and that the empirical version of this loss is unbiased. Solving Eq. (4) requires solving the following nested optimization problems:
h_Q^* = \mathrm{argmin}_{h \in F^M} \big\| h - T^\pi Q \big\|_\nu^2, \qquad \hat{Q}_{BRM} = \mathrm{argmin}_{Q \in F^M} \Big[ \big\| Q - T^\pi Q \big\|_\nu^2 - \big\| h_Q^* - T^\pi Q \big\|_\nu^2 \Big].   (5)

Of course, in practice T^\pi Q is replaced by its sample-based approximation \hat{T}^\pi Q.
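With a linear architecture Q(x, a) = phi(x, a)^T w, the inner problem of (5) is an ordinary least-squares projection, so the empirical modified-BRM objective can be evaluated as in this sketch of ours (Phi and Phi_next are assumed to collect the features of (X_t, A_t) and (X_{t+1}, pi(X_{t+1}))):

import numpy as np

def modified_brm_loss(w, Phi, Phi_next, r, gamma):
    # Empirical version of (5) for Q = Phi @ w
    y = r + gamma * Phi_next @ w            # sampled backup T-hat^pi Q
    u, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    h = Phi @ u                             # h*_Q: projection of y onto span(Phi)
    q = Phi @ w
    return np.mean((q - y) ** 2) - np.mean((h - y) ** 2)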
3.2 Least-Squares Temporal Difference Learning
Least-squares temporal difference learning (LSTD) was first proposed by Bradtke and Barto [4], and later extended to control by Lagoudakis and Parr [10], who called the resulting algorithm least-squares policy iteration (LSPI), an approximate policy iteration algorithm based on LSTD. Unlike BRM, which minimizes the distance between $Q$ and $T^\pi Q$, LSTD minimizes the distance between $Q$ and $\Pi T^\pi Q$, the back-projection of the image of $Q$ under the Bellman operator, $T^\pi Q$, onto the space of admissible functions $\mathcal{F}^M$ (see Figure 1). Formally, this means that LSTD minimizes the loss function $L_{LSTD}(Q; \pi) = \|Q - \Pi T^\pi Q\|_\nu^2$. It can also be seen as finding a good approximation to the fixed point of the operator $\Pi T^\pi$. The projection operator $\Pi : B(\mathcal{X} \times \mathcal{A}) \to B(\mathcal{X} \times \mathcal{A})$ is defined by $\Pi f = \operatorname{argmin}_{h \in \mathcal{F}^M} \|h - f\|_\nu^2$. In order to make this minimization problem computationally feasible, it is usually assumed that $\mathcal{F}^M$ is a linear subspace of $B(\mathcal{X} \times \mathcal{A})$. The LSTD solution can therefore be written as the solution of the following nested optimization problems:
$$h^*_Q = \operatorname*{argmin}_{h \in \mathcal{F}^M} \|h - T^\pi Q\|_\nu^2, \qquad \hat{Q}_{LSTD} = \operatorname*{argmin}_{Q \in \mathcal{F}^M} \big\|Q - h^*_Q\big\|_\nu^2, \qquad (6)$$
where the first equation finds the projection of $T^\pi Q$ onto $\mathcal{F}^M$, and the second one minimizes the distance between $Q$ and this projection.
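For orientation, when $\mathcal{F}^M$ is the linear span of a feature map, the fixed point of $\Pi T^\pi$ has the familiar closed form used by standard linear LSTD. The sketch below is that standard (unregularized) solver, included only as a reference point; the feature matrices are hypothetical inputs, and this is not the regularized estimator developed in Section 4.

import numpy as np

def linear_lstd(phi, phi_next, rewards, gamma, ridge=1e-8):
    # Solve Phi^T (Phi - gamma * Phi') w = Phi^T r for the LSTD weights.
    # phi      -- (n, p) features of (X_t, A_t)
    # phi_next -- (n, p) features of (X_{t+1}, pi(X_{t+1}))
    # rewards  -- (n,) observed rewards; ridge stabilizes the solve
    A = phi.T @ (phi - gamma * phi_next) + ridge * np.eye(phi.shape[1])
    b = phi.T @ rewards
    return np.linalg.solve(A, b)  # Q(x, a) = phi(x, a)^T w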
Figure 1: This figure shows the loss functions minimized by BRM, modified BRM, and LSTD. The function space $\mathcal{F}^M$ is represented by the plane. The Bellman operator $T^\pi$ maps an action-value function $Q \in \mathcal{F}^M$ to a function $T^\pi Q$. The vector connecting $T^\pi Q$ and its back-projection onto $\mathcal{F}^M$, $\Pi T^\pi Q$, is orthogonal to the function space $\mathcal{F}^M$. The BRM loss is the squared Bellman error, the distance between $Q$ and $T^\pi Q$. To obtain the modified BRM loss, the squared distance between $T^\pi Q$ and $\Pi T^\pi Q$ is subtracted from the squared Bellman error. LSTD aims at a function $Q$ that has minimum distance to $\Pi T^\pi Q$.
Antos et al. [2] showed that when $\mathcal{F}^M$ is linear, the solution of modified BRM (Eq. 4 or 5) coincides with the LSTD solution (Eq. 6). A quick explanation is as follows: the first equations in (5) and (6) are the same; the projected vector $h^*_Q - T^\pi Q$ must be perpendicular to $\mathcal{F}^M$, so by the Pythagorean theorem $\|Q - h^*_Q\|^2 = \|Q - T^\pi Q\|^2 - \|h^*_Q - T^\pi Q\|^2$; therefore the second equations in (5) and (6) have the same solution.
4 Regularized Policy Iteration Algorithms
In this section, we introduce two regularized policy iteration algorithms. These algorithms are instances of the generic policy- iteration method, whose pseudo- code is shown in Table 1. By assumption, the training sample Di used at the ith (1 ? i ? N ) iteration of the algorithm is a finite trajectory
{(Xt , At , Rt )}1?t?n generated
by a policy ?, thus, At = ?(Xt )
FittedPolicyQ(N ,Q(?1) ,PEval)
and Rt ? S(?|Xt , At ). Examples
// N : number of iterations
of such policy ? are ?i plus some
// Q(?1) : Initial action- value function
exploration or some stochastic
//
PEval: Fitting procedure
stationary policy ?b . The actionfor
i = 0 to N ? 1 do
(?1)
value function Q
is used to
(?) ? ?
? (?; Q(i?1) ) // the greedy policy w.r.t. Q(i?1) //
?
i
initialize the first policy. AlterGenerate
training sample Di
natively, one may start with an
(i)
Q
?
PEval(?
i , Di )
arbitrary initial policy. The proceend
for
dure PEval takes a policy ?i (here
? (?; Q(N ?1) )
return Q(N ?1) or ?N (?) = ?
the greedy policy w.r.t. the current
action- value function Q(i?1) )
along with training sample Di ,
Table 1: The pseudo- code of policy- iteration algorithm
and returns an approximation to
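A direct transcription of Table 1 into Python might look as follows. This is a sketch under our own simplifying assumptions: a finite action set for the greedy step, and placeholder sample_trajectory and peval callables standing in for the constraint-generation and fitting (PEval) procedures.

def fitted_policy_q(N, Q_init, peval, sample_trajectory, actions):
    # Generic policy iteration of Table 1 (illustrative sketch).
    Q = Q_init
    for i in range(N):
        # Greedy policy w.r.t. the current action-value function Q^(i-1)
        policy = lambda x, Q=Q: max(actions, key=lambda a: Q(x, a))
        D_i = sample_trajectory(policy)  # generate training sample D_i
        Q = peval(policy, D_i)           # fit Q^(i), e.g., REG-BRM or REG-LSTD
    return Q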
In REG-BRM, the next iteration is computed by solving the following nested optimization problems:
$$h^*(\cdot; Q) = \operatorname*{argmin}_{h \in \mathcal{F}^M} \Big[ \big\|h - \hat{T}^{\pi_i} Q\big\|_n^2 + \lambda_{h,n} J(h) \Big], \quad Q^{(i)} = \operatorname*{argmin}_{Q \in \mathcal{F}^M} \Big[ \big\|Q - \hat{T}^{\pi_i} Q\big\|_n^2 - \big\|h^*(\cdot; Q) - \hat{T}^{\pi_i} Q\big\|_n^2 + \lambda_{Q,n} J(Q) \Big], \qquad (7)$$
where $(\hat{T}^{\pi_i} Q)(Z_t) = R_t + \gamma Q(Z'_t)$ represents the empirical Bellman operator, $Z_t = (X_t, A_t)$ and $Z'_t = \big(X_{t+1}, \pi_i(X_{t+1})\big)$ represent state-action pairs, $J(h)$ and $J(Q)$ are penalty functions (e.g., norms), and $\lambda_{h,n}, \lambda_{Q,n} > 0$ are regularization coefficients.
In REG-LSTD, the next iteration is computed by solving the following nested optimization problems:
$$h^*(\cdot; Q) = \operatorname*{argmin}_{h \in \mathcal{F}^M} \Big[ \big\|h - \hat{T}^{\pi_i} Q\big\|_n^2 + \lambda_{h,n} J(h) \Big], \quad Q^{(i)} = \operatorname*{argmin}_{Q \in \mathcal{F}^M} \Big[ \big\|Q - h^*(\cdot; Q)\big\|_n^2 + \lambda_{Q,n} J(Q) \Big]. \qquad (8)$$
It is important to note that, unlike the non-regularized case described in Sections 3.1 and 3.2, REG-BRM and REG-LSTD do not have the same solution. Although the first equations in (7) and (8) are the same, the projected vector $h^*(\cdot; Q) - \hat{T}^{\pi_i} Q$ is not necessarily perpendicular to the admissible function space $\mathcal{F}^M$, because of the regularization term $\lambda_{h,n} J(h)$. As a result, the Pythagorean theorem does not hold: $\|Q - h^*(\cdot; Q)\|^2 \neq \|Q - \hat{T}^{\pi_i} Q\|^2 - \|h^*(\cdot; Q) - \hat{T}^{\pi_i} Q\|^2$, and therefore the objective functions of the second equations in (7) and (8) are not equal and do not have the same solution.
We now present algorithmic solutions for the REG-BRM and REG-LSTD problems described above. We can obtain $Q^{(i)}$ by solving the regularization problems of Eqs. (7) and (8) in a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ defined by a Mercer kernel $K$. In this case, we let the regularization terms $J(h)$ and $J(Q)$ be the squared RKHS norms of $h$ and $Q$, $\|h\|_{\mathcal{H}}^2$ and $\|Q\|_{\mathcal{H}}^2$, respectively. Using the Representer theorem, we can then obtain the following closed-form solutions for REG-BRM and REG-LSTD. This is not immediate, because the solutions of these procedures are defined by nested optimization problems.
Theorem 1. The optimizer $Q \in \mathcal{H}$ of Eqs. (7) and (8) can be written as $Q(\cdot) = \sum_{i=1}^{2n} \tilde{\alpha}_i k(\tilde{Z}_i, \cdot)$, where $\tilde{Z}_i = Z_i$ if $i \le n$ and $\tilde{Z}_i = Z'_{i-n}$ otherwise. The coefficient vector $\tilde{\alpha} = (\tilde{\alpha}_1, \ldots, \tilde{\alpha}_{2n})^\top$ can be obtained by
REG-BRM: $\tilde{\alpha} = (C K^Q + \lambda_{Q,n} I)^{-1} (D^\top + \gamma\, C_2^\top B^\top B)\, r$,
REG-LSTD: $\tilde{\alpha} = (F^\top F K^Q + \lambda_{Q,n} I)^{-1} F^\top E\, r$,
where $r = (R_1, \ldots, R_n)^\top$, $C = D^\top D - \gamma^2 (B C_2)^\top (B C_2)$, $B = K^h (K^h + \lambda_{h,n} I)^{-1} - I$, $D = C_1 - \gamma C_2$, $F = C_1 - \gamma E C_2$, $E = K^h (K^h + \lambda_{h,n} I)^{-1}$, and $K^h \in \mathbb{R}^{n \times n}$, $C_1, C_2 \in \mathbb{R}^{n \times 2n}$, and $K^Q \in \mathbb{R}^{2n \times 2n}$ are defined by $[K^h]_{ij} = k(Z_i, Z_j)$, $[C_1 K^Q]_{ij} = k(Z_i, \tilde{Z}_j)$, $[C_2 K^Q]_{ij} = k(Z'_i, \tilde{Z}_j)$, and $[K^Q]_{ij} = k(\tilde{Z}_i, \tilde{Z}_j)$.
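To illustrate how Theorem 1 might be used, the sketch below assembles the REG-LSTD coefficients. It is our rendering, with one assumption made explicit: since $K^Q$ is the kernel matrix over $\tilde{Z} = (Z_1, \ldots, Z_n, Z'_1, \ldots, Z'_n)$, the selector matrices $C_1 = [I \;\; 0]$ and $C_2 = [0 \;\; I]$ satisfy the defining identities $[C_1 K^Q]_{ij} = k(Z_i, \tilde{Z}_j)$ and $[C_2 K^Q]_{ij} = k(Z'_i, \tilde{Z}_j)$.

import numpy as np

def reg_lstd_coefficients(K_h, K_Q, rewards, gamma, lam_h, lam_Q):
    # Closed-form REG-LSTD coefficients of Theorem 1 (sketch).
    # K_h     -- (n, n) kernel matrix [K^h]_ij = k(Z_i, Z_j)
    # K_Q     -- (2n, 2n) kernel matrix over Z~
    # rewards -- (n,) vector r = (R_1, ..., R_n)
    n = K_h.shape[0]
    C1 = np.hstack([np.eye(n), np.zeros((n, n))])  # selects rows for Z_i
    C2 = np.hstack([np.zeros((n, n)), np.eye(n)])  # selects rows for Z'_i
    E = K_h @ np.linalg.inv(K_h + lam_h * np.eye(n))
    F = C1 - gamma * (E @ C2)
    lhs = F.T @ F @ K_Q + lam_Q * np.eye(2 * n)
    rhs = F.T @ (E @ rewards)
    return np.linalg.solve(lhs, rhs)  # Q(.) = sum_i alpha_i k(Z~_i, .)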
5 Theoretical Analysis of the Algorithms
In this section, we analyze the statistical properties of the policy iteration algorithms based on REG-BRM and REG-LSTD. We provide finite-sample convergence results for the error between $Q^{\pi_N}$, the action-value function of the policy $\pi_N$ obtained after $N$ iterations of the algorithms, and $Q^*$, the optimal action-value function. Due to space limitations, we only report the assumptions and main results here (see [6] for more details). We make the following assumptions in our analysis, some of which are only technical:
Assumption A2. (1) At every iteration, samples are generated i.i.d. using a fixed distribution over states $\nu$ and a fixed stochastic policy $\pi_b$; i.e., $\{(Z_t, R_t, Z'_t)\}_{t=1}^{n}$ are i.i.d. samples, where $Z_t = (X_t, A_t)$, $Z'_t = \big(X'_t, \pi(X'_t)\big)$, $X_t \sim \nu \in M(\mathcal{X})$, $A_t \sim \pi_b(\cdot|X_t)$, $X'_t \sim P(\cdot|X_t, A_t)$, and $\pi$ is the policy being evaluated. We further assume that $\pi_b$ selects all actions with non-zero probability.
(2) The function space $\mathcal{F}$ used in the optimization problems (7) and (8) is a Sobolev space $W^k(\mathbb{R}^d)$ with $2k > d$. We denote by $J_k(Q)$ the norm of $Q$ in this Sobolev space.
(3) The selected function space $\mathcal{F}^M$ contains the true action-value function, i.e., $Q^* \in \mathcal{F}^M$.
(4) For every function $Q \in \mathcal{F}^M$ with bounded norm $J(Q)$, its image under the Bellman operator, $T^\pi Q$, is in the same space, and we have $J(T^\pi Q) \le B\, J(Q)$ for some positive and finite $B$ that is independent of $Q$.
(5) We assume $\mathcal{F}^M \subset B(\mathcal{X} \times \mathcal{A}; Q_{\max})$ for some $Q_{\max} > 0$.
(1) indicates that the training sample should be generated by an i.i.d. process. This assumption is used mainly to simplify the proofs and can be extended to the case where the training sample is a single trajectory generated by a fixed policy with appropriate mixing conditions, as was done in [2]. (2) Using a Sobolev space allows us to explicitly show the effect of the smoothness order $k$ on the convergence rate of our algorithms and to make comparisons with regression learning settings. Note that Sobolev spaces are large: they are in fact more flexible than Hölder spaces (a generalization of Lipschitz spaces to higher-order smoothness) in that their norm measures the average smoothness of a function rather than its worst-case smoothness. Thus, functions that are smooth everywhere except on parts of small measure will have small Sobolev norms, i.e., they will be viewed as "simple", while the same functions would be viewed as "complex" in Hölder spaces. Our results actually extend to other RKHSs with well-behaved metric entropy capacity, i.e., $\log N(\varepsilon, \mathcal{F}) \le A \varepsilon^{-\alpha}$ for some $0 < \alpha < 2$ and some finite positive $A$. In (3), we assume that the considered function space is large enough to include the true action-value function; this is a standard assumption when studying rates of convergence in supervised learning [7]. (4) constrains the growth rate of the complexity of the norm of $Q$ under Bellman updates; we believe this is a reasonable assumption that will hold in most practical situations. Finally, (5) concerns the uniform boundedness of the functions in the selected function space. If the solutions of our optimization problems are not bounded, they must be truncated, and truncation arguments must then be used in the analysis. Truncation does not change the final result, so we do not address it, to avoid unnecessary clutter.
We now first derive an upper bound on the policy evaluation error in Theorem 2. We then show how
the policy evaluation error propagates through the iterations of policy iteration in Lemma 3. Finally,
we state our main result in Theorem 4, which follows directly from the first two results.
Theorem 2 (Policy Evaluation Error). Let Assumptions A1 and A2 hold. Choosing $\lambda_{Q,n} = c_1 \left( \frac{\log n}{n\, J_k^2(Q^\pi)} \right)^{\frac{2k}{2k+d}}$ and $\lambda_{h,n} = \Theta(\lambda_{Q,n})$, for any policy $\pi$ the following holds with probability at least $1 - \delta$, for some $c_1, c_2, c_3, c_4 > 0$:
$$\big\| \hat{Q} - T^\pi \hat{Q} \big\|_\nu^2 \le c_2 \big(J_k^2(Q^\pi)\big)^{\frac{d}{2k+d}} \left( \frac{\log n}{n} \right)^{\frac{2k}{2k+d}} + \frac{c_3 \log n + c_4 \log(1/\delta)}{n}.$$
Theorem 2 shows how the number of samples and the difficulty of the problem, as characterized by $J_k^2(Q^\pi)$, influence the policy evaluation error. With a large number of samples, we expect $\|\hat{Q} - T^\pi \hat{Q}\|_\nu^2$ to be small with high probability, where $\pi$ is the policy being evaluated and $\hat{Q}$ is its estimated action-value function using REG-BRM or REG-LSTD.
Let $\hat{Q}^{(i)}$ and $\varepsilon_i = \hat{Q}^{(i)} - T^{\pi_i} \hat{Q}^{(i)}$, $i = 0, \ldots, N-1$, denote the estimated action-value function and the Bellman residual at the $i$th iteration of our algorithms. Theorem 2 indicates that at each iteration $i$, the optimization procedure finds a function $\hat{Q}^{(i)}$ such that $\|\varepsilon_i\|_\nu^2$ is small with high probability. Lemma 3, which was stated as Lemma 12 in [2], bounds the final error after $N$ iterations as a function of the intermediate errors. Note that no assumption is made in this lemma on how the sequence $\hat{Q}^{(i)}$ is generated. In Lemma 3 and Theorem 4, $\rho \in M(\mathcal{X})$ is a measure used to evaluate the performance of the algorithms, and $C_{\rho,\nu}$ and $C_\nu$ are the concentrability coefficients defined in [2].
Lemma 3 (Error Propagation). Let $p \ge 1$ be a real number and $N$ a positive integer. Then, for any sequence of functions $\{Q^{(i)}\} \subset B(\mathcal{X} \times \mathcal{A}; Q_{\max})$, $0 \le i < N$, and $\varepsilon_i$ as defined above, the following inequalities hold:
$$\|Q^* - Q^{\pi_N}\|_{p,\rho} \le \frac{2\gamma}{(1-\gamma)^2} \Big[ C_{\rho,\nu}^{1/p} \max_{0 \le i < N} \|\varepsilon_i\|_{p,\nu} + \gamma^{N/p} R_{\max} \Big],$$
$$\|Q^* - Q^{\pi_N}\|_\infty \le \frac{2\gamma}{(1-\gamma)^2} \Big[ C_\nu^{1/p} \max_{0 \le i < N} \|\varepsilon_i\|_{p,\nu} + \gamma^{N/p} R_{\max} \Big].$$
Theorem 4 (Convergence Result). Let Assumptions A1 and A2 hold, let $\lambda_{h,n}$ and $\lambda_{Q,n}$ follow the same schedules as in Theorem 2, and let the number of samples $n$ be large enough. Then the error between the optimal action-value function $Q^*$ and the action-value function $Q^{\pi_N}$ of the policy obtained after $N$ iterations of the policy iteration algorithm based on REG-BRM or REG-LSTD satisfies, with probability at least $1 - \delta$, for some $c > 0$:
$$\|Q^* - Q^{\pi_N}\|_\rho \le \frac{2\gamma}{(1-\gamma)^2} \left[ 4 c^{1/2} C_{\rho,\nu}^{1/2} \left( \left( \frac{\log n}{n} \right)^{\frac{k}{2k+d}} + \left( \frac{\log(N/\delta)}{n} \right)^{1/2} \right) + \gamma^{N/2} R_{\max} \right],$$
$$\|Q^* - Q^{\pi_N}\|_\infty \le \frac{2\gamma}{(1-\gamma)^2} \left[ 4 c^{1/2} C_\nu^{1/2} \left( \left( \frac{\log n}{n} \right)^{\frac{k}{2k+d}} + \left( \frac{\log(N/\delta)}{n} \right)^{1/2} \right) + \gamma^{N/2} R_{\max} \right].$$
Theorem 4 shows the effect of the number of samples $n$, the degree of smoothness $k$, the number of iterations $N$, and the concentrability coefficients on the quality of the policy induced by the estimated action-value function. Three important observations are: 1) the main term in the rate of convergence is $O(\log(n)\, n^{-k/(2k+d)})$, which is the optimal rate for regression up to a logarithmic factor, and hence an optimal rate for value-function estimation; 2) the effect of smoothness $k$ is evident: of two problems with different degrees of smoothness, the smoother one is easier to learn, an intuitive but previously not rigorously proven result in the RL literature; and 3) increasing the number of iterations $N$ increases the error of the second term, but its effect is only logarithmic.
6 Conclusions and Future Work
In this paper we showed how $L^2$-regularization can be added to two widely-used policy evaluation methods in RL, Bellman residual minimization (BRM) and least-squares temporal difference learning (LSTD), yielding two regularized policy evaluation algorithms, REG-BRM and REG-LSTD. We then showed how these algorithms can be implemented efficiently when the value-function approximation belongs to a reproducing kernel Hilbert space (RKHS). We also proved finite-sample performance bounds for REG-BRM and REG-LSTD, and for the regularized policy iteration algorithms built on top of them. Our theoretical results indicate that our methods achieve the optimal rate of convergence under the studied conditions.
One remaining problem is how to find the regularization parameters $\lambda_{h,n}$ and $\lambda_{Q,n}$. Using cross-validation may lead to a completely self-tuning process. Another issue is the type of regularization. Here we used $L^2$-regularization; however, the idea extends naturally to $L^1$-regularization in the style of the Lasso, opening up the possibility of procedures that can handle a large number of irrelevant features. Although the i.i.d. sampling assumption is technical, extending our analysis to the case of correlated samples requires generalizing quite a few results in supervised learning; we believe this can be done without problems by following the work of [2]. Extending the results to continuous-action MDPs is another major challenge. Here the interesting question is whether it is possible to achieve better rates than the one currently available in the literature, which scales quite unfavorably with the dimension of the action space [1].
References
[1] A. Antos, R. Munos, and Cs. Szepesvári. Fitted Q-iteration in continuous action-space MDPs. In Advances in Neural Information Processing Systems 20 (NIPS-2007), pages 9–16, 2008.
[2] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71:89–129, 2008.
[3] L.C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the Twelfth International Conference on Machine Learning, pages 30–37, 1995.
[4] S.J. Bradtke and A.G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33–57, 1996.
[5] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. JMLR, 6:503–556, 2005.
[6] A. M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. L2-regularized policy iteration. 2009. (under preparation).
[7] L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer-Verlag, New York, 2002.
[8] R.A. Howard. Dynamic Programming and Markov Processes. The MIT Press, Cambridge, MA, 1960.
[9] T. Jung and D. Polani. Least squares SVM for least squares TD learning. In ECAI, pages 499–503, 2006.
[10] M. Lagoudakis and R. Parr. Least-squares policy iteration. JMLR, 4:1107–1149, 2003.
[11] M. Loth, M. Davy, and P. Preux. Sparse temporal difference learning using LASSO. In IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, 2007.
[12] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49:161–178, 2002.
[13] M. Riedmiller. Neural fitted Q iteration: first experiences with a data efficient neural reinforcement learning method. In 16th European Conference on Machine Learning, pages 317–328, 2005.
[14] P. J. Schweitzer and A. Seidmann. Generalized polynomial approximations in Markovian decision processes. Journal of Mathematical Analysis and Applications, 110:568–582, 1985.
[15] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. Bradford Book. MIT Press, 1998.
[16] R. J. Williams and L.C. Baird. Tight performance bounds on greedy policies based on imperfect value functions. In Proceedings of the Tenth Yale Workshop on Adaptive and Learning Systems, 1994.
[17] X. Xu, D. Hu, and X. Lu. Kernel-based least squares policy iteration for reinforcement learning. IEEE Trans. on Neural Networks, 18:973–992, 2007.
Online Metric Learning and Fast Similarity Search
Prateek Jain, Brian Kulis, Inderjit S. Dhillon, and Kristen Grauman
Department of Computer Sciences
University of Texas at Austin
Austin, TX 78712
{pjain,kulis,inderjit,grauman}@cs.utexas.edu
Abstract
Metric learning algorithms can provide useful distance functions for a variety
of domains, and recent work has shown good accuracy for problems where the
learner can access all distance constraints at once. However, in many real applications, constraints are only available incrementally, thus necessitating methods
that can perform online updates to the learned metric. Existing online algorithms
offer bounds on worst-case performance, but typically do not perform well in
practice as compared to their offline counterparts. We present a new online metric
learning algorithm that updates a learned Mahalanobis metric based on LogDet
regularization and gradient descent. We prove theoretical worst-case performance
bounds, and empirically compare the proposed method against existing online
metric learning algorithms. To further boost the practicality of our approach, we
develop an online locality-sensitive hashing scheme which leads to efficient updates to data structures used for fast approximate similarity search. We demonstrate our algorithm on multiple datasets and show that it outperforms relevant
baselines.
1 Introduction
A number of recent techniques address the problem of metric learning, in which a distance function
between data objects is learned based on given (or inferred) similarity constraints between examples [4, 7, 11, 16, 5, 15]. Such algorithms have been applied to a variety of real-world learning
tasks, ranging from object recognition and human body pose estimation [5, 9], to digit recognition [7], and software support [4] applications. Most successful results have relied on having access
to all constraints at the onset of the metric learning. However, in many real applications, the desired
distance function may need to change gradually over time as additional information or constraints
are received. For instance, in image search applications on the internet, online click-through data
that is continually collected may impact the desired distance function. To address this need, recent
work on online metric learning algorithms attempts to handle constraints that are received one at a
time [13, 4]. Unfortunately, current methods suffer from a number of drawbacks, including speed,
bound quality, and empirical performance.
Further complicating this scenario is the fact that fast retrieval methods must be in place on top
of the learned metrics for many applications dealing with large-scale databases. For example, in
image search applications, relevant images within very large collections must be quickly returned
to the user, and constraints and user queries may often be intermingled across time. Thus a good
online metric learner must also be able to support fast similarity search routines. This is problematic
since existing methods (e.g., locality-sensitive hashing [6, 1] or kd-trees) assume a static distance
function, and are expensive to update when the underlying distance function changes.
The goal of this work is to make metric learning practical for real-world learning tasks in which both
constraints and queries must be handled efficiently in an online manner. To that end, we first develop
an online metric learning algorithm that uses LogDet regularization and exact gradient descent. The
new algorithm is inspired by the metric learning algorithm studied in [4]; however, while the loss
bounds for the latter method are dependent on the input data, our loss bounds are independent of
the sequence of constraints given to the algorithm. Furthermore, unlike the Pseudo-metric Online
Learning Algorithm (POLA) [13], another recent online technique, our algorithm requires no eigenvector computation, making it considerably faster in practice. We further show how our algorithm
can be integrated with large-scale approximate similarity search. We devise a method to incrementally update locality-sensitive hash keys during the updates of the metric learner, making it possible
to perform accurate sub-linear time nearest neighbor searches over the data in an online manner.
We compare our algorithm to related existing methods using a variety of standard data sets. We
show that our method outperforms existing approaches, and even performs comparably to several
offline metric learning algorithms. To evaluate our approach for indexing a large-scale database, we
include experiments with a set of 300,000 image patches; our online algorithm effectively learns to
compare patches, and our hashing construction allows accurate fast retrieval for online queries.
1.1 Related Work
A number of recent techniques consider the metric learning problem [16, 7, 11, 4, 5]. Most work
deals with learning Mahalanobis distances in an offline manner, which often leads to expensive optimization algorithms. The POLA algorithm [13], on the other hand, is an approach for online learning
of Mahalanobis metrics that optimizes a large-margin objective and has provable regret bounds, although eigenvector computation is required at each iteration to enforce positive definiteness, which
can be slow in practice. The information-theoretic metric learning method of [4] includes an online variant that avoids eigenvector decomposition. However, because of the particular form of the
online update, positive-definiteness still must be carefully enforced, which impacts bound quality
and empirical performance, making it undesirable for both theoretical and practical purposes. In
contrast, our proposed algorithm has strong bounds, requires no extra work for enforcing positive
definiteness, and can be implemented efficiently. There are a number of existing online algorithms
for other machine learning problems outside of metric learning, e.g. [10, 2, 12].
Fast search methods are becoming increasingly necessary for machine learning tasks that must cope
with large databases. Locality-sensitive hashing [6] is an effective technique that performs approximate nearest neighbor searches in time that is sub-linear in the size of the database. Most existing
work has considered hash functions for Lp norms [3], inner product similarity [1], and other standard distances. While recent work has shown how to generate hash functions for (offline) learned
Mahalanobis metrics [9], we are not aware of any existing technique that allows incremental updates
to locality-sensitive hash keys for online database maintenance, as we propose in this work.
2 Online Metric Learning
In this section we introduce our model for online metric learning, develop an efficient algorithm to
implement it, and prove regret bounds.
2.1 Formulation and Algorithm
As in several existing metric learning methods, we restrict ourselves to learning a Mahalanobis
distance function over our input data, which is a distance function parameterized by a d ? d positive
definite matrix A. Given d-dimensional vectors u and v, the squared Mahalanobis distance between
them is defined as
dA (u, v) = (u ? v)T A(u ? v).
Positive definiteness of A assures that the distance function will return positive distances. We may
equivalently view such distance functions as applying a linear transformation to the input data and
computing the squared Euclidean distance in the transformed space; this may be seen by factorizing
the matrix A = GT G, and distributing G into the (u ? v) terms.
In general, one learns a Mahalanobis distance by learning the appropriate positive definite matrix $A$ based on constraints over the distance function. These constraints are typically distance or similarity constraints that arise from supervised information; for example, the distance between two points in the same class should be "small". In contrast to offline approaches, which assume all constraints
are provided up front, online algorithms assume that constraints are received one at a time. That is, we assume that at time step $t$, there exists a current distance function parameterized by $A_t$. A constraint is received, encoded by the triple $(u_t, v_t, y_t)$, where $y_t$ is the target distance between $u_t$ and $v_t$ (we restrict ourselves to distance constraints, though other constraints are possible). Using $A_t$, we first predict the distance $\hat{y}_t = d_{A_t}(u_t, v_t)$ using our current distance function, and incur a loss $\ell(\hat{y}_t, y_t)$. Then we update our matrix from $A_t$ to $A_{t+1}$. The goal is to minimize the sum of the losses over all time steps, i.e., $L_A = \sum_t \ell(\hat{y}_t, y_t)$. One common choice is the squared loss: $\ell(\hat{y}_t, y_t) = \frac{1}{2}(\hat{y}_t - y_t)^2$. We also consider a variant of the model where the input is a quadruple $(u_t, v_t, y_t, b_t)$, where $b_t = 1$ if we require that the distance between $u_t$ and $v_t$ be less than or equal to $y_t$, and $b_t = -1$ if we require that the distance between $u_t$ and $v_t$ be greater than or equal to $y_t$. In that case, the corresponding loss function is $\ell(\hat{y}_t, y_t, b_t) = \max(0, \frac{1}{2} b_t (\hat{y}_t - y_t))^2$.
A typical approach [10, 4, 13] to the above online learning problem is to solve for $A_{t+1}$ by minimizing a regularized loss at each step:
$$A_{t+1} = \operatorname*{argmin}_{A \succeq 0} \; D(A, A_t) + \eta\, \ell(d_A(u_t, v_t), y_t), \qquad (2.1)$$
where $D(A, A_t)$ is a regularization function and $\eta > 0$ is the regularization parameter. As in [4], we use the LogDet divergence $D_{ld}(A, A_t)$ as the regularization function. It is defined over positive definite matrices and is given by $D_{ld}(A, A_t) = \operatorname{tr}(A A_t^{-1}) - \log\det(A A_t^{-1}) - d$. This divergence has previously been shown to be useful in the context of metric learning [4]. It has a number of desirable properties for metric learning, including scale-invariance, automatic enforcement of positive definiteness, and a maximum-likelihood interpretation.
Existing approaches solve for $A_{t+1}$ by approximating the gradient of the loss function: $\ell'(d_A(u_t, v_t), y_t)$ is approximated by $\ell'(d_{A_t}(u_t, v_t), y_t)$ [10, 13, 4]. While for some regularization functions (e.g., the Frobenius and von Neumann divergences) such a scheme works out well, for LogDet regularization it can lead to non-definite matrices for which the regularization function is not even defined. This results in a scheme that has to adapt the regularization parameter in order to maintain positive definiteness [4].
In contrast, our algorithm proceeds by exactly solving for the updated parameters At+1 that minimize (2.1). Since we use the exact gradient, our analysis will become more involved; however, the
resulting algorithm will have several advantages over existing methods for online metric learning.
Using straightforward algebra and the Sherman-Morrison inverse formula, we can show that the
resulting solution to the minimization of (2.1) is:
$$A_{t+1} = A_t - \frac{\eta (\bar{y} - y_t)\, A_t z_t z_t^\top A_t}{1 + \eta (\bar{y} - y_t)\, z_t^\top A_t z_t}, \qquad (2.2)$$
where $z_t = u_t - v_t$ and $\bar{y} = d_{A_{t+1}}(u_t, v_t) = z_t^\top A_{t+1} z_t$. The detailed derivation will appear in a longer version. It is not immediately clear that this update can be applied, since $\bar{y}$ is a function of $A_{t+1}$. However, by multiplying the update in (2.2) on the left by $z_t^\top$ and on the right by $z_t$, and noting that $\hat{y}_t = z_t^\top A_t z_t$, we obtain the following:
$$\bar{y} = \hat{y}_t - \frac{\eta (\bar{y} - y_t)\, \hat{y}_t^2}{1 + \eta (\bar{y} - y_t)\, \hat{y}_t}, \quad \text{and so} \quad \bar{y} = \frac{\eta y_t \hat{y}_t - 1 + \sqrt{(\eta y_t \hat{y}_t - 1)^2 + 4 \eta \hat{y}_t^2}}{2 \eta \hat{y}_t}. \qquad (2.3)$$
We can solve directly for $\bar{y}$ using this formula and then plug it into the update (2.2). For the case when the input is a quadruple and the loss function is the squared hinge loss, we only perform the update (2.2) if the new constraint is violated.
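A minimal sketch of one LEGO step, first solving (2.3) for $\bar{y}$ and then applying the rank-one update (2.2); the variable names are ours, and no safeguards (e.g., for $\hat{y}_t = 0$) are included.

import numpy as np

def lego_update(A, u, v, y_target, eta):
    # One exact LEGO step via Eqs. (2.2)-(2.3).
    z = u - v
    y_hat = z @ A @ z  # current predicted distance y_hat_t
    # Closed-form solution of Eq. (2.3) for y_bar
    disc = (eta * y_target * y_hat - 1.0) ** 2 + 4.0 * eta * y_hat ** 2
    y_bar = (eta * y_target * y_hat - 1.0 + np.sqrt(disc)) / (2.0 * eta * y_hat)
    # Rank-one update of Eq. (2.2); positive definiteness is preserved
    Az = A @ z
    coeff = eta * (y_bar - y_target) / (1.0 + eta * (y_bar - y_target) * y_hat)
    return A - coeff * np.outer(Az, Az)

For the quadruple variant, one would call lego_update only when $b_t(\hat{y}_t - y_t) > 0$, i.e., when the constraint is violated.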
It is possible to show that the resulting matrix $A_{t+1}$ is positive definite; the proof appears in our longer version. The fact that this update maintains positive definiteness is a key advantage of our method over existing methods; POLA, for example, requires projection to the positive semidefinite cone via an eigendecomposition. The final loss bound in [4] depends on the regularization parameter $\eta_t$ from each iteration and is in turn dependent on the sequence of constraints, an undesirable property for online algorithms. In contrast, by minimizing the function we designate above in (2.1), our algorithm's updates automatically maintain positive definiteness. This means that the regularization parameter $\eta$ need not be changed according to the current constraint, and the resulting bounds (Section 2.2) and empirical performance are notably stronger.
We refer to our algorithm as LogDet Exact Gradient Online (LEGO), and use this name throughout
to distinguish it from POLA [13] (which uses a Frobenius regularization) and the Information Theoretic Metric Learning (ITML)-Online algorithm [4] (which uses an approximation to the gradient).
2.2 Analysis
We now briefly analyze the regret bounds for our online metric learning algorithm. Due to space
issues, we do not present the full proofs; please see the longer version for further details.
To evaluate the online learner's quality, we want to compare the loss of the online algorithm (which has access to one constraint at a time, in sequence) to the loss of the best possible offline algorithm (which has access to all constraints at once). Let $d^*_t = d_{A^*}(u_t, v_t)$ be the learned distance between points $u_t$ and $v_t$ with a fixed positive definite matrix $A^*$, and let $L_{A^*} = \sum_t \ell(d^*_t, y_t)$ be the loss suffered over all $t$ time steps. Note that the loss $L_{A^*}$ is with respect to a single matrix $A^*$, whereas $L_A$ (Section 2.1) is with respect to a matrix that is updated at every time step. Let $A^*$ be the optimal offline solution, i.e., the minimizer of the total incurred loss $L_{A^*}$. The goal is to demonstrate that the loss of the online algorithm $L_A$ is competitive with the loss of any offline algorithm. To that end, we now show that $L_A \le c_1 L_{A^*} + c_2$, where $c_1$ and $c_2$ are constants.
In the result below, we assume that the length of the data points is bounded: $\|u\|_2^2 \le R$ for all $u$. The following key lemma shows that we can bound the loss at each step of the algorithm:
Lemma 2.1. At each step $t$,
$$\frac{1}{2}\alpha_t (\hat{y}_t - y_t)^2 - \frac{1}{2}\beta_t \big(d_{A^*}(u_t, v_t) - y_t\big)^2 \le D_{ld}(A^*, A_t) - D_{ld}(A^*, A_{t+1}),$$
where $0 \le \alpha_t \le \dfrac{\eta}{1 + \eta \left( \frac{R}{2} + \sqrt{\frac{R^2}{4} + \frac{1}{\eta}} \right)^2}$, $\beta_t = \eta$, and $A^*$ is the optimal offline solution.
Proof. See longer version.
Theorem 2.2.
$$L_A \le \left( 1 + \eta \left( \frac{R}{2} + \sqrt{\frac{R^2}{4} + \frac{1}{\eta}} \right)^2 \right) L_{A^*} + 2\left( \frac{1}{\eta} + \left( \frac{R}{2} + \sqrt{\frac{R^2}{4} + \frac{1}{\eta}} \right)^2 \right) D_{ld}(A^*, A_0),$$
where $L_A = \sum_t \ell(\hat{y}_t, y_t)$ is the loss incurred by the series of matrices $A_t$ generated by Equation (2.3), $A_0 \succ 0$ is the initial matrix, and $A^*$ is the optimal offline solution.
Proof. The bound is obtained by summing the loss at each step using Lemma 2.1:
$$\sum_t \left[ \frac{1}{2}\alpha_t (\hat{y}_t - y_t)^2 - \frac{1}{2}\beta_t \big(d_{A^*}(u_t, v_t) - y_t\big)^2 \right] \le \sum_t \big[ D_{ld}(A^*, A_t) - D_{ld}(A^*, A_{t+1}) \big].$$
The result follows by plugging in the appropriate $\alpha_t$ and $\beta_t$, and observing that the right-hand side telescopes to $D_{ld}(A^*, A_0) - D_{ld}(A^*, A_{t+1}) \le D_{ld}(A^*, A_0)$ since $D_{ld}(A^*, A_{t+1}) \ge 0$.
For the squared hinge loss $\ell(\hat{y}_t, y_t, b_t) = \max(0, b_t(\hat{y}_t - y_t))^2$, the corresponding algorithm has the same bound.
The regularization parameter affects the tradeoff between $L_{A^*}$ and $D_{ld}(A^*, A_0)$: as $\eta$ grows, the coefficient of $L_{A^*}$ grows while the coefficient of $D_{ld}(A^*, A_0)$ shrinks. In most scenarios, $R$ is small; for example, when $R = 2$ and $\eta = 1$, the bound is $L_A \le (4 + 2\sqrt{2}) L_{A^*} + 2(4 + 2\sqrt{2}) D_{ld}(A^*, A_0)$. Furthermore, when there exists an offline solution with zero error, i.e., $L_{A^*} = 0$, then with a sufficiently large regularization parameter we know that $L_A \le 2R^2 D_{ld}(A^*, A_0)$. This bound is analogous to the bound proven in Theorem 1 of the POLA method [13]. Note, however, that our bound is much more favorable to scaling of the optimal solution $A^*$: the bound of POLA has a $\|A^*\|_F^2$ term, while our bound uses $D_{ld}(A^*, A_0)$. If we scale the optimal solution by $c$, the $D_{ld}(A^*, A_0)$ term scales by $O(c)$, whereas $\|A^*\|_F^2$ scales by $O(c^2)$. Similarly, our bound is tighter than that provided by the ITML-Online algorithm, since in ITML-Online the regularization parameter $\eta_t$ for step $t$ depends on the input data. An adversary can always provide an input $(u_t, v_t, y_t)$ that forces the regularization parameter to be decreased arbitrarily; that is, the need to maintain positive definiteness at each update can prevent ITML-Online from making progress towards an optimal metric.
3 Fast Online Similarity Searches
In many applications, metric learning is used in conjunction with nearest-neighbor searching, and
data structures to facilitate such searches are essential. For online metric learning to be practical
for large-scale retrieval applications, we must be able to efficiently index the data as updates to the
metric are performed. This poses a problem for most fast similarity searching algorithms, since each
update to the online algorithm would require a costly update to their data structures.
Our goal is to avoid expensive naive updates, where all database items are re-inserted into the search
structure. We employ locality-sensitive hashing to enable fast queries; but rather than re-hash all
database examples every time an online constraint alters the metric, we show how to incorporate
a second level of hashing that determines which hash bits are changing during the metric learning
updates. This allows us to avoid costly exhaustive updates to the hash keys, though occasional
updating is required after substantial changes to the metric are accumulated.
3.1 Background: Locality-Sensitive Hashing
Locality-sensitive hashing (LSH) [6, 1] produces a binary hash key $H(u) = [h_1(u)\, h_2(u) \ldots h_b(u)]$ for every data point. Each individual bit $h_i(u)$ is obtained by applying the locality-sensitive hash function $h_i$ to input $u$. To allow sub-linear time approximate similarity search for a similarity function "sim", a locality-sensitive hash function must satisfy the following property: $\Pr[h_i(u) = h_i(v)] = \mathrm{sim}(u, v)$, where "sim" returns values between 0 and 1. This means that the more similar two examples are, the more likely they are to collide in the hash table.
An LSH function for the case where "sim" is the inner product was developed in [1], in which a hash bit is the sign of an input's inner product with a random hyperplane. For Mahalanobis distances, the similarity function of interest is $\mathrm{sim}(u, v) = u^\top A v$. The hash function in [1] was extended to accommodate a Mahalanobis similarity function in [9]: $A$ can be decomposed as $G^\top G$, and the similarity function is then equivalently $\tilde{u}^\top \tilde{v}$, where $\tilde{u} = Gu$ and $\tilde{v} = Gv$. Hence, a valid LSH function for $u^\top A v$ is
$$h_{r,A}(u) = \begin{cases} 1, & \text{if } r^\top G u \ge 0, \\ 0, & \text{otherwise,} \end{cases} \qquad (3.1)$$
where $r$ is the normal to a random hyperplane. To perform sub-linear time nearest neighbor searches, a hash key is produced for all $n$ data points in our database. Given a query, its hash key is formed, and then an appropriate data structure can be used to extract potential nearest neighbors (see [6, 1] for details). Typically, the methods search only $O(n^{1/(1+\epsilon)})$ of the data points, where $\epsilon > 0$, to retrieve the $(1+\epsilon)$-nearest neighbors with high probability.
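A sketch of hash-key generation under a learned Mahalanobis metric, following Eq. (3.1). The helper name and the convention of stacking the $b$ hyperplane normals into a matrix are ours.

import numpy as np

def hash_key(u, G, R):
    # b-bit LSH key for the metric A = G^T G, as in Eq. (3.1).
    # u -- (d,) data point; G -- (d, d) factor of A; R -- (b, d) whose rows
    # are random hyperplane normals, e.g., sampled from a standard Gaussian.
    return (R @ (G @ u) >= 0).astype(np.uint8)  # one bit per hyperplane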
3.2 Online Hashing Updates
The approach described thus far is not immediately amenable to online updates. We can imagine producing a series of LSH functions $h_{r_1,A}, \ldots, h_{r_b,A}$ and storing the corresponding hash keys for each data point in our database. However, the hash functions as given in (3.1) depend on the Mahalanobis distance: when we update our matrix $A_t$ to $A_{t+1}$, the corresponding hash functions, parameterized by $G_t$, must also change. Updating all hash keys in the database would require $O(nd)$ time, which may be prohibitive. In the following we propose a more efficient approach.
Recall the update for $A$: $A_{t+1} = A_t - \frac{\eta(\bar{y} - y_t)\, A_t z_t z_t^\top A_t}{1 + \eta(\bar{y} - y_t)\, \hat{y}_t}$, which we will write as $A_{t+1} = A_t + \beta_t A_t z_t z_t^\top A_t$, where $\beta_t = -\eta(\bar{y} - y_t) / \big(1 + \eta(\bar{y} - y_t)\hat{y}_t\big)$. Let $G_t^\top G_t = A_t$. Then $A_{t+1} = G_t^\top (I + \beta_t G_t z_t z_t^\top G_t^\top) G_t$. The square root of $I + \beta_t G_t z_t z_t^\top G_t^\top$ is $I + \alpha_t G_t z_t z_t^\top G_t^\top$, where $\alpha_t = \big(\sqrt{1 + \beta_t z_t^\top A_t z_t} - 1\big) / (z_t^\top A_t z_t)$. As a result, $G_{t+1} = G_t + \alpha_t G_t z_t z_t^\top A_t$. The corresponding update to (3.1) is to find the sign of
$$r^\top G_{t+1} u = r^\top G_t u + \alpha_t\, r^\top G_t z_t z_t^\top A_t u. \qquad (3.2)$$
Suppose that the hash functions have been updated in full at some time step $t_1$ in the past. Now at time $t$, we want to determine which hash bits have flipped since $t_1$, or more precisely, which examples' product with some $r^\top G_t$ has changed from positive to negative, or vice versa. This amounts to determining all bits such that $\mathrm{sign}(r^\top G_{t_1} u) \neq \mathrm{sign}(r^\top G_t u)$, or equivalently, $(r^\top G_{t_1} u)(r^\top G_t u) \le 0$. Expanding the update given in (3.2), we can write $r^\top G_t u$ as $r^\top G_{t_1} u + \sum_{\ell=t_1}^{t-1} \alpha_\ell\, r^\top G_\ell z_\ell z_\ell^\top A_\ell u$. Therefore, finding the bits that have changed sign is equivalent to finding all $u$ such that $(r^\top G_{t_1} u)^2 + (r^\top G_{t_1} u)\Big( \sum_{\ell=t_1}^{t-1} \alpha_\ell\, r^\top G_\ell z_\ell z_\ell^\top A_\ell u \Big) \le 0$. We can use a second level of locality-sensitive hashing to approximately find all such $u$. Define a vector $\tilde{u} = \big[(r^\top G_{t_1} u)^2; \; (r^\top G_{t_1} u)\,u\big]$ and a "query" $\tilde{q} = \big[-1; \; -\sum_{\ell=t_1}^{t-1} \alpha_\ell\, A_\ell z_\ell z_\ell^\top G_\ell^\top r\big]$. Then the bits that have changed sign can be approximately identified by finding all examples $\tilde{u}$ such that $\tilde{q}^\top \tilde{u} \ge 0$. In other words, we look for all $\tilde{u}$ that have a large inner product with $\tilde{q}$, which translates the problem into a similarity search problem. This may be solved approximately using the locality-sensitive hashing scheme given in [1] for inner product similarity. Note that finding $\tilde{u}$ for each $r$ can be computationally expensive, so we search $\tilde{u}$ for only a randomly selected subset of the vectors $r$.
In summary, when performing online metric learning updates, instead of updating all the hash keys
at every step (which costs O(nd)), we delay updating the hash keys and instead determine approximately which bits have changed in the stored entries in the hash table since the last update. When we
have a nearest-neighbor query, we can quickly determine which bits have changed, and then use this
information to find a query?s approximate nearest neighbors using the current metric. Once many of
the bits have changed, we perform a full update to our hash functions.
Finally, we note that the above can be extended to the case where computations are done in kernel
space. We omit details due to lack of space.
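Putting the pieces together, a simplified bookkeeping sketch might maintain $G_t$ incrementally via the factor update derived above and check, for a sampled hyperplane $r$, which stored bits have flipped. This is our own rendering: it performs the changed-bit check exactly on a subset of hyperplanes rather than through the second-level LSH table described above, and beta denotes the $\beta_t$ from the rewriting $A_{t+1} = A_t + \beta_t A_t z_t z_t^\top A_t$.

import numpy as np

def update_G(G, A, z, beta):
    # Incremental factor update: G_{t+1} = G_t + alpha_t * G_t z z^T A_t.
    zAz = z @ A @ z
    alpha = (np.sqrt(1.0 + beta * zAz) - 1.0) / zAz
    return G + alpha * np.outer(G @ z, A @ z)

def changed_bits(r, G_old, G_new, U):
    # Indices of database points (rows of U) whose bit for hyperplane r
    # flipped between G_old and G_new; exact check for a sampled r.
    old_signs = (U @ (G_old.T @ r)) >= 0
    new_signs = (U @ (G_new.T @ r)) >= 0
    return np.nonzero(old_signs != new_signs)[0]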
4 Experimental Results
In this section we evaluate the proposed algorithm (LEGO) over a variety of data sets, and examine
both its online metric learning accuracy as well as the quality of its online similarity search updates.
As baselines, we consider the most relevant techniques from the literature: the online metric learners
POLA [13] and ITML-Online [4]. We also evaluate a baseline offline metric learner associated with
our method. For all metric learners, we gauge improvements relative to the original (non-learned)
Euclidean distance, and our classification error is measured with the k-nearest neighbor algorithm.
First we consider the same collection of UCI data sets used in [4]. For each data set, we provide the
online algorithms with 10,000 randomly selected constraints, and generate their target distances as in [4]: for same-class pairs, the target distance is set equal to the 5th percentile of all distances in the data, while for different-class pairs the 95th percentile is used. To tune the regularization parameter $\eta$ for POLA and LEGO, we apply a pre-training phase using 1,000 constraints. (This is not
required for ITML-Online, which automatically sets the regularization parameter at each iteration
to guarantee positive definiteness). The final metric (AT ) obtained by each online algorithm is used
for testing (T is the total number of time-steps). The left plot of Figure 1 shows the k-nn error rates
for all five data sets. LEGO outperforms the Euclidean baseline as well as the other online learners,
and even approaches the accuracy of the offline method (see [4] for additional comparable offline
learning results using [7, 15]). LEGO and ITML-Online have comparable running times. However,
our approach has a significant speed advantage over POLA on these data sets: on average, learning
with LEGO is 16.6 times faster, most likely due to the extra projection step required by POLA.
Next we evaluate our approach on a handwritten digit classification task, reproducing the experiment
used to test POLA in [13]. We use the same settings given in that paper. Using the MNIST data set,
we pose a binary classification problem between each pair of digits (45 problems in all). The training
and test sets consist of 10,000 examples each. For each problem, 1,000 constraints are chosen and
the final metric obtained is used for testing. The center plot of Figure 1 compares the test error
between POLA and LEGO. Note that LEGO beats or matches POLA?s test error in 33/45 (73.33%)
of the classification problems. Based on the additional baselines provided in [13], this indicates that
our approach also fares well compared to other offline metric learners on this data set.
We next consider a set of image patches from the Photo Tourism project [14], where user photos
from Flickr are used to generate 3-d reconstructions of various tourist landmarks. Forming the
reconstructions requires solving for the correspondence between local patches from multiple images
of the same scene. We use the publicly available data set that contains about 300,000 total patches
[Figure 1 plots: left, k-NN error on the UCI data sets (Wine, Iris, Bal-Scale, Ionosphere, Soybean) for ITML Offline, LEGO, ITML Online, POLA, and Baseline Euclidean (order of bars = order of legend); center, POLA error versus LEGO error on the MNIST data set; right, true positives versus false positives on the Photo Tourism data set.]
Figure 1: Comparison with existing online metric learning methods. Left: On the UCI data sets, our method
(LEGO) outperforms both the Euclidean distance baseline as well as existing metric learning methods, and
even approaches the accuracy of the offline algorithm. Center: Comparison of errors for LEGO and POLA
on 45 binary classification problems using the MNIST data; LEGO matches or outperforms POLA on 33 of the
45 total problems. Right: On the Photo Tourism data, our online algorithm significantly outperforms the L2
baseline and ITML-Online, does well relative to POLA, and nearly matches the accuracy of the offline method.
from images of three landmarks.¹ Each patch has a dimensionality of 4096, so for efficiency we
apply all algorithms in kernel space, and use a linear kernel. The goal is to learn a metric that
measures the distance between image patches better than L2 , so that patches of the same 3-d scene
point will be matched together, and (ideally) others will not. Since the database is large, we can also
use it to demonstrate our online hash table updates. Following [8], we add random jitter (scaling,
rotations, shifts) to all patches, and generate 50,000 patch constraints (50% matching and 50% nonmatching patches) from a mix of the Trevi and Halfdome images. We test with 100,000 patch pairs
from the Notre Dame portion of the data set, and measure accuracy with precision and recall.
The right plot of Figure 1 shows that LEGO and POLA are able to learn a distance function that
significantly outperforms the baseline squared Euclidean distance. However, LEGO is more accurate
than POLA, and again nearly matches the performance of the offline metric learning algorithm. On
the other hand, the ITML-Online algorithm does not improve beyond the baseline. We attribute
the poor accuracy of ITML-Online to its need to continually adjust the regularization parameter to
maintain positive definiteness; in practice, this often leads to significant drops in the regularization
parameter, which prevents the method from improving over the Euclidean baseline. In terms of
training time, on this data LEGO is 1.42 times faster than POLA (on average over 10 runs).
Finally, we present results using our online metric learning algorithm together with our online hash
table updates described in Section 3.2 for the Photo Tourism data. For our first experiment, we
provide each method with 50,000 patch constraints, and then search for nearest neighbors for 10,000
test points sampled from the Notre Dame images. Figure 2 (left plot) shows the recall as a function
of the number of patches retrieved for four variations: LEGO with a linear scan, LEGO with our
LSH updates, the L2 baseline with a linear scan, and L2 with our LSH updates. The results show
that the accuracy achieved by our LEGO+LSH algorithm is comparable to the LEGO+linear scan
(and similarly, L2 +LSH is comparable to L2 +linear scan), thus validating the effectiveness of our
online hashing scheme. Moreover, LEGO+LSH needs to search only 10% of the database, which
translates to an approximate speedup factor of 4.7 over the linear scan for this data set.
Next we show that LEGO+LSH performs accurate and efficient retrievals in the case where constraints and queries are interleaved in any order. Such a scenario is useful in many applications: for
example, an image retrieval system such as Flickr continually acquires new image tags from users
(which could be mapped to similarity constraints), but must also continually support intermittent
user queries. For the Photo Tourism setting, it would be useful in practice to allow new constraints
indicating true-match patch pairs to stream in while users continually add photos that should participate in new 3-d reconstructions with the improved match distance functions. To experiment with
this scenario, we randomly mix online additions of 50,000 constraints with 10,000 queries, and measure performance by the recall value for 300 retrieved nearest neighbor examples. We recompute the
hash-bits for all database examples if we detect changes in more than 10% of the database examples.
Figure 2 (right plot) compares the average recall value for various methods after each query. As
expected, as more constraints are provided, the LEGO-based accuracies all improve (in contrast to
the static L2 baseline, as seen by the straight line in the plot). Our method achieves similar accuracy
to both the linear scan method (LEGO Linear) as well as the naive LSH method where the hash
table is fully recomputed after every constraint update (LEGO Naive LSH). The curves stack up
¹ http://phototour.cs.washington.edu/patches/default.htm
[Figure 2 plots: left, recall versus number of nearest neighbors retrieved (N) for L2 Linear Scan, L2 LSH, LEGO Linear Scan, and LEGO LSH; right, average recall versus number of queries for LEGO LSH, LEGO Linear Scan, LEGO Naive LSH, and L2 Linear Scan.]
Figure 2: Results with online hashing updates. The left plot shows the recall value for increasing numbers of nearest neighbors retrieved. "LEGO LSH" denotes LEGO metric learning in conjunction with online searches using our LSH updates; "LEGO Linear" denotes LEGO learning with linear scan searches. L2 denotes the baseline Euclidean distance. The right plot shows the average recall values for all methods at different time instances as more queries are made and more constraints are added. Our online similarity search updates make it possible to efficiently interleave online learning and querying. See text for details.
appropriately given the levels of approximation: LEGO Linear yields the upper bound in terms of
accuracy, LEGO Naive LSH with its exhaustive updates is slightly behind that, followed by our
LEGO LSH with its partial and dynamic updates. In reward for this minor accuracy loss, however,
our method provides a speedup factor of 3.8 over the naive LSH update scheme. (In this case the
naive LSH scheme is actually slower than a linear scan, as updating the hash tables after every update
incurs a large overhead cost.) For larger data sets, we can expect even larger speed improvements.
Conclusions: We have developed an online metric learning algorithm together with a method to
perform online updates to fast similarity search structures, and have demonstrated their applicability
and advantages on a variety of data sets. We have proven regret bounds for our online learner that
offer improved reliability over state-of-the-art methods, both in the bounds themselves and in empirical
performance. A disadvantage of our algorithm is that the LSH parameters, e.g., ε and the number of
hash-bits, need to be selected manually, and may depend on the final application. For future work,
we hope to tune the LSH parameters automatically using a deeper theoretical analysis of our hash
key updates in conjunction with the relevant statistics of the online similarity search task at hand.
Acknowledgments: This research was supported in part by NSF grant CCF-0431257, NSF-ITR award IIS-0325116, NSF grant IIS-0713142, NSF CAREER award 0747356, Microsoft
Research, and the Henry Luce Foundation.
References
[1] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In STOC, 2002.
[2] L. Cheng, S. V. N. Vishwanathan, D. Schuurmans, S. Wang, and T. Caelli. Implicit Online Learning with
Kernels. In NIPS, 2006.
[3] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-Sensitive Hashing Scheme Based on p-Stable
Distributions. In SOCG, 2004.
[4] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-Theoretic Metric Learning. In ICML, 2007.
[5] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In
NIPS, 2007.
[6] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In VLDB, 1999.
[7] A. Globerson and S. Roweis. Metric Learning by Collapsing Classes. In NIPS, 2005.
[8] G. Hua, M. Brown, and S. Winder. Discriminant embedding for local image descriptors. In ICCV, 2007.
[9] P. Jain, B. Kulis, and K. Grauman. Fast Image Search for Learned Metrics. In CVPR, 2008.
[10] J. Kivinen and M. K. Warmuth. Exponentiated Gradient Versus Gradient Descent for Linear Predictors.
Inf. Comput., 132(1):1?63, 1997.
[11] M. Schultz and T. Joachims. Learning a Distance Metric from Relative Comparisons. In NIPS, 2003.
[12] S. Shalev-Shwartz and Y. Singer. Online Learning meets Optimization in the Dual. In COLT, 2006.
[13] S. Shalev-Shwartz, Y. Singer, and A. Ng. Online and Batch Learning of Pseudo-metrics. In ICML, 2004.
[14] N. Snavely, S. Seitz, and R. Szeliski. Photo Tourism: Exploring Photo Collections in 3D. In SIGGRAPH
Conference Proceedings, pages 835–846, New York, NY, USA, 2006. ACM Press. ISBN 1-59593-364-6.
[15] K. Weinberger, J. Blitzer, and L. Saul. Distance Metric Learning for Large Margin Nearest Neighbor
Classification. In NIPS, 2006.
[16] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance Metric Learning, with Application to Clustering with
Side-Information. In NIPS, 2002.
Resolution Limits of Sparse Coding in High Dimensions∗
Alyson K. Fletcher,† Sundeep Rangan,‡ and Vivek K Goyal§
Abstract
This paper addresses the problem of sparsity pattern detection for unknown k-sparse n-dimensional signals observed through m noisy, random linear measurements. Sparsity pattern recovery arises in a number of settings including statistical
model selection, pattern detection, and image acquisition. The main results in this
paper are necessary and sufficient conditions for asymptotically-reliable sparsity
pattern recovery in terms of the dimensions m, n and k as well as the signal-to-noise ratio (SNR) and the minimum-to-average ratio (MAR) of the nonzero entries
of the signal. We show that m > 2k log(n − k)/(SNR · MAR) is necessary for
any algorithm to succeed, regardless of complexity; this matches a previous sufficient condition for maximum likelihood estimation within a constant factor under
certain scalings of k, SNR and MAR with n. We also show a sufficient condition
for a computationally-trivial thresholding algorithm that is larger than the previous expression by only a factor of 4(1 + SNR) and larger than the requirement for
lasso by only a factor of 4/MAR. This provides insight on the precise value and
limitations of convex programming-based algorithms.
1 Introduction
Sparse signal models have been used successfully in a variety of applications including wavelet-based image processing and pattern recognition. Recent research has shown that certain naturally-occurring neurological processes may exploit sparsity as well [1–3]. For example, there is now
evidence that the V1 visual cortex naturally generates a sparse representation of the visual data
relative to a certain Gabor-like basis. Due to the nonlinear nature of sparse signal models, developing
and analyzing algorithms for sparse signal processing has been a major research challenge.
This paper considers the problem of estimating sparse signals in the presence of noise. We are
specifically concerned with understanding the theoretical estimation limits and how far practical
algorithms are from those limits. In the context of visual cortex modeling, this analysis may help
us understand what visual features are resolvable from visual data. To keep the analysis general, we
consider the following abstract estimation problem: An unknown sparse signal x is modeled as an
n-dimensional real vector with k nonzero components. The set of locations of the nonzero components
is called the sparsity pattern. We consider the problem of detecting the sparsity pattern of x from
an m-dimensional measurement vector y = Ax + d, where A ∈ R^{m×n} is a known measurement
matrix and d ∈ R^m is an additive noise vector with a known distribution. We are interested in
∗ This work was supported in part by a University of California President's Postdoctoral Fellowship, NSF CAREER Grant CCF-643836, and the Centre Bernoulli at École Polytechnique Fédérale de Lausanne.
† A. K. Fletcher (email: [email protected]) is with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley.
‡ S. Rangan (email: [email protected]) is with Qualcomm Technologies, Bedminster, NJ.
§ V. K. Goyal (email: [email protected]) is with the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, Massachusetts Institute of Technology.
                                    |  finite SNR                               |  SNR → ∞
------------------------------------+-------------------------------------------+----------------------------------
Any algorithm must fail             |  m < 2/(MAR·SNR) k log(n − k) + k − 1     |  m ≤ k
                                    |  (Theorem 1)                              |  (elementary)
Necessary and sufficient for lasso  |  unknown (expressions above and right     |  m ≈ 2k log(n − k) + k + 1
                                    |  are necessary)                           |  (Wainwright [14])
Sufficient for thresholding         |  m > 8(1 + SNR)/(MAR·SNR) k log(n − k)    |  m > (8/MAR) k log(n − k)
estimator (11)                      |  (Theorem 2)                              |  (from Theorem 2)

Table 1: Summary of Results on Measurement Scaling for Reliable Sparsity Recovery
(see body for definitions and technical limitations)
determining necessary and sufficient conditions on the ability to reliably detect the sparsity pattern
based on problem dimensions m, n and k, and signal and noise statistics.
Previous work. While optimal sparsity pattern detection is NP-hard [4], greedy heuristics (matching pursuit [5] and its variants) and convex relaxations (basis pursuit [6], lasso [7], and others) have
been widely-used since at least the mid 1990s. While these algorithms worked well in practice,
until recently, little could be shown analytically about their performance. Some remarkable recent
results are sets of conditions that can guarantee exact sparsity recovery based on certain simple
"incoherence" conditions on the measurement matrix A [8–10].
These conditions and others have been exploited in developing the area of "compressed sensing,"
which considers large random matrices A with i.i.d. components [11–13]. The main theoretical
results are conditions that guarantee sparse detection with convex programming methods. The best
of these results is due to Wainwright [14], who shows that the scaling
    m > 2k log(n − k) + k + 1    (1)
is necessary and sufficient for lasso to detect the sparsity pattern when A has Gaussian entries,
provided the SNR scales to infinity.
Preview. This paper presents new necessary and sufficient conditions, summarized in Table 1
along with Wainwright's lasso scaling (1). The parameters MAR and SNR represent the minimum-to-average and signal-to-noise ratio, respectively. The exact definitions and measurement model are
given below.
The necessary condition applies to all algorithms, regardless of complexity. Previous necessary conditions had been based on information-theoretic analyses such as [15–17]. More recent publications
with necessary conditions include [18–21]. As described in Section 3, our new necessary condition
is stronger than previous bounds in certain important regimes.
The sufficient condition is derived for a computationally-trivial thresholding estimator. By comparing with the lasso scaling, we argue that the main benefit of more sophisticated methods, such as
lasso, is not generally in the scaling with respect to k and n but rather in the dependence on the
minimum-to-average ratio.
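As a quick numerical illustration (ours, not from the paper), the finite-SNR expressions in Table 1 can be evaluated directly; the helper below takes the constants at face value, dropping the (2 − ε) and (1 + δ) slack factors.

```python
import math

def measurement_bounds(n, k, snr, mar):
    """Evaluate the finite-SNR expressions of Table 1 (slack factors dropped)."""
    klog = k * math.log(n - k)
    ml_fail_below = 2.0 * klog / (snr * mar) + k - 1        # Theorem 1: any algorithm fails
    lasso_threshold = 2.0 * klog + k + 1                    # Wainwright's lasso scaling (1)
    thresh_suffices = 8.0 * (1 + snr) * klog / (mar * snr)  # Theorem 2: thresholding succeeds
    return ml_fail_below, lasso_threshold, thresh_suffices

# e.g., n = 1000, k = 20, SNR = 10, MAR = 0.5:
print(measurement_bounds(1000, 20, 10.0, 0.5))
```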
2 Problem Statement
Consider estimating a k-sparse vector x ∈ R^n through a vector of observations,
    y = Ax + d,    (2)
where A ∈ R^{m×n} is a random matrix with i.i.d. N(0, 1/m) entries and d ∈ R^m is i.i.d. unit-variance Gaussian noise. Denote the sparsity pattern of x (positions of nonzero entries) by the set
Itrue, which is a k-element subset of the set of indices {1, 2, . . . , n}. Estimates of the sparsity
pattern will be denoted by Î with subscripts indicating the type of estimator. We seek conditions
under which there exists an estimator such that Î = Itrue with high probability.
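The measurement model (2) is straightforward to simulate; the sketch below is our illustration (signal magnitudes drawn uniformly are an arbitrary choice, not the paper's).

```python
import numpy as np

def generate_problem(n, k, m, min_mag=1.0, max_mag=2.0, seed=None):
    """Draw (y, A, x, support) from the model y = Ax + d of equation (2)."""
    rng = np.random.default_rng(seed)
    support = rng.choice(n, size=k, replace=False)        # Itrue
    x = np.zeros(n)
    x[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(min_mag, max_mag, size=k)
    A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))    # i.i.d. N(0, 1/m) entries
    d = rng.normal(0.0, 1.0, size=m)                      # unit-variance Gaussian noise
    return A @ x + d, A, x, np.sort(support)
```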
In addition to the signal dimensions, m, n and k, we will show that there are two variables that
dictate the ability to detect the sparsity pattern reliably: the signal-to-noise ratio (SNR), and what
we will call the minimum-to-average ratio (MAR).
The SNR is defined by
    SNR = E[‖Ax‖²] / E[‖d‖²] = E[‖Ax‖²] / m.    (3)
Since we are considering x as an unknown deterministic vector, the SNR can be further simplified
as follows: The entries of A are i.i.d. N(0, 1/m), so columns a_i ∈ R^m and a_j ∈ R^m of A satisfy
E[a_i' a_j] = δ_ij. Therefore, the signal energy is given by
    E[‖Ax‖²] = Σ_{i,j ∈ Itrue} E[a_i' a_j x_i x_j] = Σ_{i,j ∈ Itrue} x_i x_j δ_ij = ‖x‖².
Substituting into the definition (3), the SNR is given by
    SNR = (1/m) ‖x‖².    (4)
The minimum-to-average ratio of x is defined as
    MAR = min_{j ∈ Itrue} |x_j|² / (‖x‖²/k).    (5)
Since ‖x‖²/k is the average of {|x_j|² : j ∈ Itrue}, MAR ∈ (0, 1] with the upper limit occurring
when all the nonzero entries of x have the same magnitude.
One final value that will be important is the minimum component SNR, defined as
    SNRmin = min_{j ∈ Itrue} E[‖a_j x_j‖²] / E[‖d‖²] = (1/m) min_{j ∈ Itrue} |x_j|².    (6)
The quantity SNRmin has a natural interpretation: the numerator, min E[‖a_j x_j‖²], is the signal
power due to the smallest nonzero component of x, while the denominator, E[‖d‖²], is the total noise
power. The ratio SNRmin thus represents the contribution to the SNR from the smallest nonzero
component of the unknown vector x. Observe that (3) and (5) show
    SNRmin = (1/k) SNR · MAR.    (7)
Normalizations. Other works use a variety of normalizations, e.g.: the entries of A have variance
1/n in [13, 19]; the entries of A have unit variance and the variance of d is a variable σ² in [14, 17,
20, 21]; and our scaling of A and a noise variance of σ² are used in [22]. This necessitates great care
in comparing results.
To facilitate the comparison we have expressed all our results in terms of SNR, MAR and SNRmin
as defined above. All of these quantities are dimensionless, in that if either A and d or x and d are
scaled together, these ratios will not change. Thus, the results can be applied to any scaling of A, d
and x, provided that the quantities SNR, MAR and SNRmin are computed appropriately.
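Definitions (4)–(7) translate directly into code. The helper below is our illustration, valid under this paper's normalization of A and d:

```python
import numpy as np

def snr_quantities(x, m):
    """SNR (4), MAR (5), and SNRmin via identity (7) for a sparse vector x."""
    support = np.flatnonzero(x)
    k = support.size
    snr = np.dot(x, x) / m                                      # (4): ||x||^2 / m
    mar = np.min(np.abs(x[support]) ** 2) / (np.dot(x, x) / k)  # (5)
    snr_min = snr * mar / k                                     # (7), equal to (6)
    return snr, mar, snr_min
```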
3 Necessary Condition for Sparsity Recovery
We first consider sparsity recovery without being concerned with computational complexity of the
estimation algorithm. Since the vector x ∈ R^n is k-sparse, the vector Ax belongs to one of L = (n choose k)
subspaces spanned by k of the n columns of A. Estimation of the sparsity pattern is the selection
of one of these subspaces, and since the noise d is Gaussian, the probability of error is minimized
by choosing the subspace closest to the observed vector y. This results in the maximum likelihood
(ML) estimate.
Mathematically, the ML estimator can be described as follows. Given a subset J ⊆ {1, 2, . . . , n},
let P_J y denote the orthogonal projection of the vector y onto the subspace spanned by the vectors
{a_j : j ∈ J}. The ML estimate of the sparsity pattern is
    Î_ML = arg max_{J : |J| = k} ‖P_J y‖²,
where |J| denotes the cardinality of J. That is, the ML estimate is the set of k indices such that the
subspace spanned by the corresponding columns of A contains the maximum signal energy of y.
Since the number of subspaces L grows exponentially in n and k, an exhaustive search is, in general,
computationally infeasible. However, the performance of ML estimation provides a lower bound on
the number of measurements needed by any algorithm that cannot exploit a priori information on x
other than it being k-sparse.
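For the small problem sizes where it is feasible, the exhaustive ML search can be written directly from the definition of Î_ML; the sketch below (ours, not the authors' code) scores every size-k support by its projected energy ‖P_J y‖².

```python
import itertools
import numpy as np

def ml_estimate(y, A, k):
    """Exhaustive ML sparsity-pattern estimate: argmax over |J| = k of ||P_J y||^2.
    Cost is C(n, k) least-squares solves, so only practical for small n and k."""
    n = A.shape[1]
    best_support, best_energy = None, -np.inf
    for J in itertools.combinations(range(n), k):
        AJ = A[:, J]                                   # columns indexed by J
        coef, *_ = np.linalg.lstsq(AJ, y, rcond=None)  # P_J y = AJ @ coef
        proj = AJ @ coef
        energy = float(proj @ proj)                    # ||P_J y||^2
        if energy > best_energy:
            best_support, best_energy = J, energy
    return np.array(best_support)
```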
ML estimation for sparsity recovery was first examined in [17]. There, it was shown that there exists
a constant C > 0 such that the condition
    m > C max{ log(n − k)/SNRmin , k log(n/k) } = C max{ k log(n − k)/(SNR · MAR) , k log(n/k) }    (8)
is sufficient for ML to asymptotically reliably recover the sparsity pattern. Note that the equality between the two expressions in (8) is a consequence of (7). Our first theorem provides a corresponding
necessary condition.
Theorem 1 Let k = k(n), m = m(n), SNR = SNR(n) and MAR = MAR(n) be deterministic
sequences in n such that lim_{n→∞} k(n) = ∞ and
    m(n) < (2 − ε)/SNRmin · log(n − k) + k − 1 = (2 − ε)/(MAR · SNR) · k log(n − k) + k − 1    (9)
for some ε > 0. Then even the ML estimator asymptotically cannot detect the sparsity pattern, i.e.,
    lim_{n→∞} Pr( Î_ML = Itrue ) = 0.
Proof sketch: The basic idea in the proof is to consider an "incorrect" subspace formed by removing
one of the k correct vectors with the least energy, and adding one of the n − k incorrect vectors with
largest energy. The change in energy can be estimated using tail distributions of chi-squared random
variables. A complete proof appears in [23].
The theorem provides a simple lower bound on the minimum number of measurements required to
recover the sparsity pattern in terms of k, n and the minimum component SNR, SNRmin. Note that
the equivalence between the two expressions in (9) is due to (7).
Remarks.
1. The theorem strengthens an earlier necessary condition in [18] which showed that there exists
a C > 0 such that
    m = (C/SNR) k log(n − k)
is necessary for asymptotic reliable recovery. Theorem 1 strengthens the result to reflect the
dependence on MAR and make the constant explicit.
2. The theorem applies for any k(n) such that lim_{n→∞} k(n) = ∞, including both cases with
k = o(n) and k = Θ(n). In particular, under linear sparsity (k = αn for some constant α), the
theorem shows that
    m ≥ (2α/(MAR · SNR)) n log n
measurements are necessary for sparsity recovery. Similarly, if m/n is bounded above by a
constant, then sparsity recovery will certainly fail unless
    k = O(SNR · MAR · n / log n).
In particular, when SNR · MAR is bounded, the sparsity ratio k/n must approach zero.
3. In the case where SNR · MAR and the sparsity ratio k/n are both constant, the sufficient condition (8) reduces to
    m = (C/(SNR · MAR)) k log(n − k),
which matches the necessary condition in (9) within a constant factor.
4. In the case of MAR · SNR < 1, the bound (9) improves upon the necessary condition of [14] for
the asymptotic success of lasso by the factor (MAR · SNR)^{-1}.
[Figure 1 appears here: a row of seven color-intensity maps of ML success probability, one per (SNR, MAR) pair in {(1, 1), (2, 1), (5, 1), (10, 1), (10, 0.5), (10, 0.2), (10, 0.1)}, each with m from 1 to 40 on the vertical axis, k from 1 to 5 on the horizontal axis, and a color scale from 0 to 1. See the caption below.]
Figure 1: Simulated success probability of ML detection for n = 20 and many values of k, m, SNR,
and MAR. Each subfigure gives simulation results for k ∈ {1, 2, . . . , 5} and m ∈ {1, 2, . . . , 40}
for one (SNR, MAR) pair. Each subfigure heading gives (SNR, MAR). Each point represents at least
500 independent trials. Overlaid on the color-intensity plots is a black curve representing (9).
5. The bound (9) can be compared against information-theoretic bounds such as those in [15–17,
20, 21]. For example, a simple capacity argument in [15] shows that
    m ≥ 2 log₂ (n choose k) / log₂(1 + SNR)    (10)
is necessary. When the sparsity ratio k/n and SNR are both fixed, m can satisfy (10) while
growing only linearly with k. In contrast, Theorem 1 shows that with sparsity ratio and SNR ·
MAR fixed, m = Ω(k log(n − k)) is necessary for reliable sparsity recovery. That is, the number
of measurements must grow superlinearly in k in the linear sparsity regime with bounded SNR.
In the sublinear regime where k = o(n), the capacity-based bound (10) may be stronger than
(9) depending on the scaling of SNR, MAR and other terms.
6. Results more similar to Theorem 1, based on direct analyses of error events rather than
information-theoretic arguments, appeared in [18, 19]. The previous results showed that with
fixed SNR as defined here, sparsity recovery with m = O(k) must fail. The more refined
analysis in this paper gives the additional log(n − k) factor and the precise dependence on
MAR · SNR.
7. Theorem 1 is not contradicted by the relevant sufficient condition of [20, 21]. That sufficient
condition holds for scaling that gives linear sparsity and MAR · SNR = Θ(n log n). For
MAR · SNR = n log n, Theorem 1 shows that fewer than m ≈ 2k log k measurements will
cause ML decoding to fail, while [21, Thm. 3.1] shows that a typicality-based decoder will
succeed with m = Θ(k) measurements.
8. The necessary condition (9) shows a dependence on the minimum-to-average ratio MAR instead
of just the average power through SNR. Thus, the bound shows the negative effects of relatively
small components. Note that [17, Thm. 2] appears to have dependence on the minimum power
as well, but is actually only proven for the case MAR = 1.
Numerical validation. Computational confirmation of Theorem 1 is technically impossible, and
even qualitative support is hard to obtain because of the high complexity of ML detection. Nevertheless, we may obtain some evidence through Monte Carlo simulation.
Fig. 1 shows the probability of success of ML detection for n = 20 as k, m, SNR, and MAR are
varied. Signals with MAR < 1 are created by having one small nonzero component and k − 1 equal,
larger nonzero components. Taking any one column of one subpanel from bottom to top shows that
as m is increased, there is a transition from ML failing to ML succeeding. One can see that (9)
follows the failure-success transition qualitatively. In particular, the empirical dependence on SNR
and MAR approximately follows (9). Empirically, for the (small) value of n = 20, it seems that with
MAR · SNR held fixed, sparsity recovery becomes easier as SNR increases (and MAR decreases).
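An experiment in the spirit of Figure 1 can be assembled from the sketches above (it reuses `ml_estimate` from Section 3); the code below is our illustration of the simulation protocol, with one small nonzero entry and k − 1 equal larger ones to hit the target MAR (it assumes k ≥ 2).

```python
import numpy as np

def make_signal(n, k, m, snr, mar, rng):
    """k-sparse x with prescribed SNR (4) and MAR (5); assumes k >= 2."""
    energy = m * snr                    # ||x||^2, from (4)
    small = mar * energy / k            # squared magnitude of the smallest entry, from (5)
    big = (energy - small) / (k - 1)    # the k-1 equal larger entries
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = np.sqrt(np.r_[small, np.full(k - 1, big)])
    return x

def ml_success_rate(n, k, m, snr, mar, trials=200, seed=0):
    """Empirical Pr(I_ML = Itrue), reusing ml_estimate from the earlier sketch."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = make_signal(n, k, m, snr, mar, rng)
        A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
        y = A @ x + rng.normal(size=m)
        hits += np.array_equal(ml_estimate(y, A, k), np.sort(np.flatnonzero(x)))
    return hits / trials
```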
4 Sufficient Condition for Thresholding
Consider the following simple estimator. As before, let a_j be the jth column of the random matrix
A. Define the thresholding estimate as
    Î_thresh = { j : |a_j' y|² > μ },    (11)
where μ > 0 represents a threshold level. This algorithm simply correlates the observed signal
y with all the frame vectors a_j and selects the indices j where the correlation energy exceeds a
certain level μ. It is significantly simpler than both lasso and matching pursuit and is not meant to
be proposed as a competitive alternative. Rather, we consider thresholding simply to illustrate what
precise benefits lasso and more sophisticated methods bring.
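The estimator (11) is a one-liner in code; the sketch below (ours) also offers a variant that keeps the k largest correlations, which amounts to choosing the threshold μ so that exactly k indices pass.

```python
import numpy as np

def threshold_estimate(y, A, mu=None, k=None):
    """Thresholding estimate (11): { j : |a_j' y|^2 > mu }.
    If k is given instead of mu, return the k indices with largest |a_j' y|^2."""
    energy = (A.T @ y) ** 2          # correlation energies |a_j' y|^2 for all columns
    if k is not None:
        return np.sort(np.argsort(energy)[-k:])
    return np.flatnonzero(energy > mu)
```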
Sparsity pattern recovery by thresholding was studied in [24], which proves a sufficient condition
when there is no noise. The following theorem improves and generalizes the result to the noisy case.
Theorem 2 Let k = k(n), m = m(n), SNR = SNR(n) and MAR = MAR(n) be deterministic
sequences in n such that lim_{n→∞} k = ∞ and
    m > (8(1 + δ)(1 + SNR)/(MAR · SNR)) k log(n − k)    (12)
for some δ > 0. Then, there exists a sequence of threshold levels μ = μ(n), such that thresholding
asymptotically detects the sparsity pattern, i.e.,
    lim_{n→∞} Pr( Î_thresh = Itrue ) = 1.
Proof sketch: Using tail distributions of chi-squared random variables, it is shown that the minimum
value of the correlation |a_j' y|² over j ∈ Itrue is greater than the maximum correlation over
j ∉ Itrue. A complete proof appears in [23].
Remarks.
1. Comparing (9) and (12), we see that thresholding requires a factor of at most 4(1 + SNR)
more measurements than ML estimation. Thus, for a fixed SNR, the optimal scaling not only
does not require ML estimation, it does not even require lasso or matching pursuit; it can be
achieved with a remarkably simple method.
2. Nevertheless, the gap between thresholding and ML of 4(1 + SNR) measurements can be large.
This is most apparent in the regime where the SNR → ∞. For ML estimation, the lower bound
on the number of measurements required by ML decreases to k − 1 as SNR → ∞.¹ In contrast,
with thresholding, increasing the SNR has diminishing returns: as SNR → ∞, the bound on
the number of measurements in (12) approaches
    m > (8/MAR) k log(n − k).    (13)
Thus, even with SNR → ∞, the minimum number of measurements is not improved from
m = Θ(k log(n − k)).
These diminishing returns for improved SNR exhibited by thresholding are also a problem
for more sophisticated methods such as lasso. For example, as discussed earlier, the analysis
of [14] shows that when SNR · MAR → ∞, lasso requires
    m > 2k log(n − k) + k + 1    (14)
for reliable recovery. Therefore, like thresholding, lasso does not achieve a scaling better than
m = O(k log(n − k)), even at infinite SNR.
3. There is also a gap between thresholding and lasso. Comparing (13) and (14), we see that,
at high SNR, thresholding requires a factor of up to 4/MAR more measurements than lasso.
This factor is largest when MAR is small, which occurs when there are relatively small nonzero
components. Thus, in the high SNR regime, the main benefit of lasso is its ability to detect
small coefficients, even when they are much below the average power. However, if the range of
component magnitudes is not large, so MAR is close to one, lasso and thresholding have equal
performance within a constant factor.
¹ Of course, at least k + 1 measurements are necessary.
4. The high SNR limit (13) matches the sufficient condition in [24] for the noise free case, except
that the constant in (13) is tighter.
Numerical validation. Thresholding is extremely simple and can thus be simulated easily for
large problem sizes. The results of a large number of Monte Carlo simulations are presented in [23],
which also reports additional simulations of maximum likelihood estimation. With n = 100, the
sufficient condition predicted by (12) matches well to the parameter combinations where the probability of success drops below about 0.995.
5 Conclusions
We have considered the problem of detecting the sparsity pattern of a sparse vector from noisy
random linear measurements. Necessary and sufficient scaling laws for the number of measurements
to recover the sparsity pattern for different detection algorithms were derived. The analysis reveals
the effect of two key factors: the total signal-to-noise ratio (SNR), as well as the minimum-to-average ratio (MAR), which is a measure of the spread of component magnitudes. The product of
these factors is k times the SNR contribution from the smallest nonzero component; this product
often appears.
Our main conclusions are:
• Tight scaling laws for constant SNR and MAR. In the regime where SNR = Θ(1) and MAR =
Θ(1), our results show that the scaling of the number of measurements
    m = O(k log(n − k))
is both necessary and sufficient for asymptotically reliable sparsity pattern detection. Moreover, the scaling can be achieved with a thresholding algorithm, which is computationally simpler than even lasso or basis pursuit. Under the additional assumption of linear sparsity (k/n
fixed), this scaling is a larger number of measurements than predicted by previous information-theoretic bounds.
• Dependence on SNR. While the number of measurements required for exhaustive ML estimation and simple thresholding have the same dependence on n and k with the SNR fixed, the
dependence on SNR differs significantly. Specifically, thresholding requires a factor of up to
4(1 + SNR) more measurements than ML. Moreover, as SNR → ∞, the number of measurements required by ML may be as low as m = k + 1. In contrast, even letting SNR → ∞,
thresholding and lasso still require m = O(k log(n − k)) measurements.
• Lasso and dependence on MAR. Thresholding can also be compared to lasso, at least in the high
SNR regime. There is a potential gap between thresholding and lasso, but the gap is smaller
than the gap to ML. Specifically, in the high SNR regime, thresholding requires at most 4/MAR
more measurements than lasso. Thus, the benefit of lasso over simple thresholding is its ability
to detect the sparsity pattern even in the presence of relatively small nonzero coefficients (i.e.,
low MAR). However, when the components of the unknown vector have similar magnitudes
(MAR close to one), the gap between lasso and simple thresholding is reduced.
While our results provide both necessary and sufficient scaling laws, there is clearly a gap in terms
of the scaling with the SNR. We have seen that full ML estimation could potentially have a scaling
in SNR as small as m = O(1/SNR) + k − 1. An open question is whether there is any practical
algorithm that can achieve a similar scaling.
A second open issue is to determine conditions for partial sparsity recovery. The above results
define conditions for recovering all the positions in the sparsity pattern. However, in many practical
applications, obtaining some large fraction of these positions would be sufficient. Neither the limits
of partial sparsity recovery nor the performance of practical algorithms are completely understood,
though some results have been reported in [19–21, 25].
References
[1] M. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5:356–363, 2002.
[2] B. A. Olshausen and D. J. Field. Sparse coding of sensory inputs. Curr. Op. in Neurobiology, 14:481–487, 2004.
[3] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 2008. In press.
[4] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM J. Computing, 24(2):227–234, April 1995.
[5] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process., 41(12):3397–3415, Dec. 1993.
[6] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Comp., 20(1):33–61, 1999.
[7] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Stat. Soc., Ser. B, 58(1):267–288, 1996.
[8] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52(1):6–18, Jan. 2006.
[9] J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory, 50(10):2231–2242, Oct. 2004.
[10] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Inform. Theory, 52(3):1030–1051, March 2006.
[11] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, Feb. 2006.
[12] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, April 2006.
[13] E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, Dec. 2006.
[14] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity. arXiv:0605.740v1 [math.ST], May 2006.
[15] S. Sarvotham, D. Baron, and R. G. Baraniuk. Measurements vs. bits: Compressed sensing meets information theory. In Proc. 44th Ann. Allerton Conf. on Commun., Control and Comp., Monticello, IL, Sept. 2006.
[16] A. K. Fletcher, S. Rangan, and V. K. Goyal. Rate-distortion bounds for sparse approximation. In IEEE Statist. Sig. Process. Workshop, pages 254–258, Madison, WI, Aug. 2007.
[17] M. J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. Tech. Report 725, Univ. of California, Berkeley, Dept. of Stat., Jan. 2007.
[18] V. K. Goyal, A. K. Fletcher, and S. Rangan. Compressive sampling and lossy compression. IEEE Sig. Process. Mag., 25(2):48–56, March 2008.
[19] G. Reeves. Sparse signal sampling using noisy linear projections. Tech. Report UCB/EECS-2008-3, Univ. of California, Berkeley, Dept. of Elec. Eng. and Comp. Sci., Jan. 2008.
[20] M. Akçakaya and V. Tarokh. Shannon theoretic limits on noisy compressive sampling. arXiv:0711.0366v1 [cs.IT], Nov. 2007.
[21] M. Akçakaya and V. Tarokh. Noisy compressive sampling limits in linear and sublinear regimes. In Proc. Conf. on Inform. Sci. & Sys., Princeton, NJ, March 2008.
[22] J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. IEEE Trans. Inform. Theory, 52(9):4036–4048, Sept. 2006.
[23] A. K. Fletcher, S. Rangan, and V. K. Goyal. Necessary and sufficient conditions on sparsity pattern recovery. arXiv:0804.1839v1 [cs.IT], April 2008.
[24] H. Rauhut, K. Schnass, and P. Vandergheynst. Compressed sensing and redundant dictionaries. IEEE Trans. Inform. Theory, 54(5):2210–2219, May 2008.
[25] S. Aeron, M. Zhao, and V. Saligrama. On sensing capacity of sensor networks for the class of linear observation, fixed SNR models. arXiv:0704.3434v3 [cs.IT], June 2007.