Radial Basis Function Network for Multi-task Learning
Xuejun Liao
Department of ECE
Duke University
Durham, NC 27708-0291, USA
[email protected]
Lawrence Carin
Department of ECE
Duke University
Durham, NC 27708-0291, USA
[email protected]
Abstract
We extend radial basis function (RBF) networks to the scenario in which multiple correlated tasks are learned simultaneously, and present the corresponding learning algorithms. We develop the algorithms for learning the network structure, in either a supervised or unsupervised manner. Training data may also be actively selected to improve the network's generalization to test data. Experimental results based on real data demonstrate the advantage of the proposed algorithms and support our conclusions.
1 Introduction
In practical applications, one is frequently confronted with situations in which multiple tasks must be solved. Often these tasks are not independent, implying that what is learned from one task is transferable to another correlated task. By making use of this transferability, each task is made easier to solve. In machine learning, the concept of explicitly exploiting the transferability of expertise between tasks, by learning the tasks simultaneously under a unified representation, is formally referred to as "multi-task learning" [1].
In this paper we extend radial basis function (RBF) networks [4,5] to the scenario of multi-task learning and present the corresponding learning algorithms. Our primary interest is to learn the regression model of several data sets, where any given data set may be correlated with some other sets but not necessarily with all of them. The advantage of multi-task learning is usually manifested when the training set of each individual task is weak, i.e., it does not generalize well to the test data. Our algorithms aim to enhance, in a mutually beneficial way, the weak training sets of multiple tasks by learning them simultaneously. Multi-task learning becomes superfluous when the data sets all come from the same generating distribution, since in that case we can simply take their union and treat the union as a single task. In the other extreme, when all the tasks are independent, there is no correlation to utilize and we learn each task separately.
The paper is organized as follows. We define the structure of the multi-task RBF network in Section 2 and present the supervised learning algorithm in Section 3. In Section 4 we show how to learn the network structure in an unsupervised manner, and based on this we demonstrate how to actively select the training data, with the goal of improving the generalization to test data. We perform experimental studies in Section 5 and conclude the paper in Section 6.
2 Multi-Task Radial Basis Function Network
Figure 1 schematizes the radial basis function (RBF) network structure customized to multi-task learning. The network consists of an input layer, a hidden layer, and an output layer. The input layer receives a data point x = [x_1, ..., x_d]^T ∈ R^d and submits it to the hidden layer. Each node at the hidden layer has a localized activation φ_n(x) = φ(||x − c_n||, σ_n), n = 1, ..., N, where ||·|| denotes the vector norm and φ_n(·) is a radial basis function (RBF) localized around c_n with the degree of localization parameterized by σ_n. Choosing φ(z, σ) = exp(−z²/(2σ²)) gives the Gaussian RBF. The activations of all hidden nodes are weighted and sent to the output layer. Each output node represents a unique task and has its own hidden-to-output weights. The weighted activations of the hidden nodes are summed at each output node to produce the output for the associated task. Denoting w_k = [w_{0k}, w_{1k}, ..., w_{Nk}]^T as the weights connecting the hidden nodes to the k-th output node, the output for the k-th task, in response to input x, takes the form

f_k(x) = w_k^T Φ(x)    (1)

where Φ(x) = [φ_0(x), φ_1(x), ..., φ_N(x)]^T is a column containing N + 1 basis functions, with φ_0(x) ≡ 1 a dummy basis accounting for the bias in Figure 1.
[Figure 1: A multi-task RBF network, with an input layer x = [x_1, x_2, ..., x_d]^T (specified by the data dimensionality), a hidden layer of basis functions φ_1(x), ..., φ_N(x) plus a bias (to be learned by the algorithms), and an output layer of task responses f_1(x), ..., f_K(x) (specified by the number of tasks), connected by hidden-to-output weights w_1, ..., w_K. Each output node represents a unique task. Each task has its own hidden-to-output weights, but all tasks share the same hidden nodes. The activation of hidden node n is characterized by a basis function φ_n(x) = φ(||x − c_n||, σ_n); a typical choice of φ is φ(z, σ) = exp(−z²/(2σ²)), which gives the Gaussian RBF.]
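To make Eq. (1) concrete, here is a minimal sketch of the multi-task forward pass, assuming Gaussian RBFs; the function names and the toy centers and widths are ours, not the paper's:

```python
import numpy as np

def basis(X, centers, sigmas):
    """Basis responses for inputs X (m x d): returns an m x (N+1) matrix,
    with a constant dummy basis phi_0 = 1 accounting for the bias."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    G = np.exp(-d2 / (2.0 * sigmas ** 2))                      # Gaussian RBF phi(z, sigma)
    return np.hstack([np.ones((X.shape[0], 1)), G])

def multitask_rbf(X, centers, sigmas, W):
    """f_k(x) = w_k^T Phi(x) for all K tasks at once; W is (N+1) x K."""
    return basis(X, centers, sigmas) @ W

# toy usage: 2-D inputs, N = 3 shared hidden nodes, K = 2 tasks
rng = np.random.default_rng(0)
centers, sigmas = rng.normal(size=(3, 2)), np.full(3, 1.0)
W = rng.normal(size=(4, 2))              # shared bases, task-specific weights
print(multitask_rbf(rng.normal(size=(5, 2)), centers, sigmas, W).shape)  # (5, 2)
```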
3 Supervised Learning
Suppose we have K tasks and the data set of the k-th task is D_k = {(x_{1k}, y_{1k}), ..., (x_{J_k,k}, y_{J_k,k})}, where y_{ik} is the target (desired output) of x_{ik}. By definition, a given data point x_{ik} is said to be supervised if the associated target y_{ik} is provided and unsupervised if y_{ik} is not provided. The definition extends similarly to a set of data
Table 1: Learning Algorithm of the Multi-Task RBF Network

Input: {(x_{1k}, y_{1k}), ..., (x_{J_k,k}, y_{J_k,k})}_{k=1:K}, φ(·,·), λ, and σ; Output: Φ(·) and {w_k}_{k=1}^K.

1. For m = 1:K, n = 1:J_m, k = 1:K, i = 1:J_k, compute φ̂^{nm}_{ik} = φ(||x_{nm} − x_{ik}||, σ).
2. Let N = 0, Φ(·) = 1, e_0 = Σ_{k=1}^K [ Σ_{i=1}^{J_k} y_{ik}² − (J_k + λ)^{-1} (Σ_{i=1}^{J_k} y_{ik})² ].
   For k = 1:K, compute A_k = J_k + λ, w_k = (J_k + λ)^{-1} Σ_{i=1}^{J_k} y_{ik}.
3. For m = 1:K, n = 1:J_m:
   If φ̂^{nm} is not marked as "deleted":
     For k = 1:K, compute c_k = Σ_{i=1}^{J_k} Φ_{ik} φ̂^{nm}_{ik}, q_k = Σ_{i=1}^{J_k} (φ̂^{nm}_{ik})² + λ − c_k^T A_k^{-1} c_k.
     If there exists k such that q_k = 0, mark φ̂^{nm} as "deleted"; else compute δe(Φ, φ̂^{nm}) using (5).
4. If all {φ̂^{nm}}_{n=1:J_m, m=1:K} are marked as "deleted", go to 10.
5. Let (n*, m*) = arg max_{φ̂^{nm} not marked as "deleted"} δe(Φ, φ̂^{nm}); mark φ̂^{n*m*} as "deleted".
6. Tune the RBF parameter σ_{N+1} = arg max_σ δe(Φ, φ(||· − x_{n*m*}||, σ)).
7. Let φ_{N+1}(·) = φ(||· − x_{n*m*}||, σ_{N+1}); update Φ(·) ← [Φ^T(·), φ_{N+1}(·)]^T.
8. For k = 1:K, compute A_k^{new} and w_k^{new} respectively by (A-1) and (A-3) in the appendix; update A_k ← A_k^{new}, w_k ← w_k^{new}.
9. Let e_{N+1} = e_N − δe(Φ, φ_{N+1}); if the sequence {e_n}_{n=0:(N+1)} has converged, go to 10; else update N ← N + 1 and go back to 3.
10. Exit and output Φ(·) and {w_k}_{k=1}^K.
points. We are interested in learning the functions f_k(x) for the K tasks, based on ∪_{k=1}^K D_k.
The learning is based on minimizing the squared error

e(Φ, w) = Σ_{k=1}^K { Σ_{i=1}^{J_k} (w_k^T φ_{ik} − y_{ik})² + λ ||w_k||² }    (2)

where φ_{ik} = Φ(x_{ik}) for notational simplicity. The regularization terms λ||w_k||², k = 1, ..., K, are used to prevent singularity of the A matrices defined in (3), and λ is typically set to a small positive number. For fixed φ's, the w's are solved by minimizing e(Φ, w) with respect to w, yielding

w_k = A_k^{-1} Σ_{i=1}^{J_k} y_{ik} φ_{ik}   and   A_k = Σ_{i=1}^{J_k} φ_{ik} φ_{ik}^T + λ I,   k = 1, ..., K    (3)
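A direct sketch of Eq. (3), under the assumption that Phi_k stacks the basis responses as rows; the names are illustrative:

```python
import numpy as np

def task_weights(Phi_k, y_k, lam=1e-3):
    """Eq. (3): A_k = sum_i phi_ik phi_ik^T + lam*I, w_k = A_k^{-1} sum_i y_ik phi_ik.
    Phi_k stacks the basis vectors phi_ik as rows (J_k x (N+1))."""
    A_k = Phi_k.T @ Phi_k + lam * np.eye(Phi_k.shape[1])
    w_k = np.linalg.solve(A_k, Phi_k.T @ y_k)   # solve, rather than form A_k^{-1}
    return w_k, A_k
```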
In a multi-task RBF network, the input layer and output layer are respectively specified by the data dimensionality and the number of tasks. We now discuss how to determine the hidden layer (the basis functions Φ). Substituting the solutions of the w's in (3) into (2) gives

e(Φ) = Σ_{k=1}^K Σ_{i=1}^{J_k} (y_{ik}² − y_{ik} w_k^T φ_{ik})    (4)

where e(Φ) is a function of Φ only because the w's are now functions of Φ as given by (3). By minimizing e(Φ), we can determine Φ. Recalling that φ_{ik} is an abbreviation of Φ(x_{ik}) = [1, φ_1(x_{ik}), ..., φ_N(x_{ik})]^T, this amounts to determining N, the number of basis functions, and the functional form of each basis function φ_n(·), n = 1, ..., N. Consider the candidate functions {φ^{nm}(x) = φ(||x − x_{nm}||, σ) : n = 1, ..., J_m, m = 1, ..., K}. We learn the RBF network structure by selecting Φ(·) from these candidate functions such that e(Φ) in (4) is minimized. The following theorem tells us how to perform the selection in a sequential way; the proof is given in the Appendix.
Theorem 1 Let Φ(x) = [1, φ_1(x), ..., φ_N(x)]^T and let φ_{N+1}(x) be a single basis function. Assume the A matrices corresponding to Φ and [Φ, φ_{N+1}]^T are all non-degenerate. Then

δe(Φ, φ_{N+1}) = e(Φ) − e([Φ, φ_{N+1}]^T) = Σ_{k=1}^K (c_k^T w_k − Σ_{i=1}^{J_k} y_{ik} φ^{N+1}_{ik})² q_k^{-1}    (5)

where φ^{N+1}_{ik} = φ_{N+1}(x_{ik}), w_k and A_k are the same as in (3), and

c_k = Σ_{i=1}^{J_k} φ_{ik} φ^{N+1}_{ik},   d_k = Σ_{i=1}^{J_k} (φ^{N+1}_{ik})² + λ,   q_k = d_k − c_k^T A_k^{-1} c_k    (6)
By the conditions of the theorem, A_k^{new} is full rank and hence positive definite by construction. By (A-2) in the Appendix, q_k^{-1} is a diagonal element of (A_k^{new})^{-1}; therefore q_k^{-1} is positive, and by (5) δe(Φ, φ_{N+1}) > 0, which means that adding φ_{N+1} to Φ generally makes the squared error decrease. The decrease δe(Φ, φ_{N+1}) depends on φ_{N+1}. By sequentially selecting the basis functions that bring the maximum error reduction, we achieve the goal of minimizing e(Φ). The details of the learning algorithm are summarized in Table 1.
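A sketch of one greedy scoring step from Theorem 1, evaluating a single candidate basis against all tasks via Eqs. (5) and (6); A_invs is assumed to hold precomputed inverses A_k^{-1}, and all names are ours:

```python
import numpy as np

def delta_e(Phis, ys, ws, A_invs, phi_new, lam=1e-3):
    """Error reduction from adding one candidate basis (Eq. 5).
    Phis[k]: J_k x (N+1) basis responses; ys[k]: J_k targets;
    phi_new[k]: responses phi_{N+1}(x_ik) of the candidate on task k."""
    total = 0.0
    for Phi_k, y_k, w_k, A_inv, p in zip(Phis, ys, ws, A_invs, phi_new):
        c_k = Phi_k.T @ p                          # Eq. (6)
        q_k = p @ p + lam - c_k @ A_inv @ c_k
        if q_k <= 0:                               # degenerate candidate: skip it
            continue
        total += (c_k @ w_k - y_k @ p) ** 2 / q_k
    return total

# the greedy loop picks the argmax over candidates, then updates A_k, w_k
# via the rank-one formulas (A-1) and (A-3) in the appendix
```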
4 Active Learning
In the previous section, the data in D_k are supervised (provided with targets). In this section, we assume the data in D_k are initially unsupervised (only x is available, without access to the associated y), and we select a subset from D_k to be supervised (targets acquired) such that the resulting network generalizes well to the remaining data in D_k. This approach is generally known as active learning [6]. We first learn the basis functions Φ from the unsupervised data, and based on Φ we select data to be supervised. Both of these steps are based on the following theorem, the proof of which is given in the Appendix.
Theorem 2 Let there be K tasks and let the data set of the k-th task be D_k ∪ D̃_k, where D_k = {(x_{ik}, y_{ik})}_{i=1}^{J_k} and D̃_k = {(x_{ik}, y_{ik})}_{i=J_k+1}^{J_k+J̃_k}. Let there be two multi-task RBF networks, whose output nodes are characterized by f_k(·) and f_k^−(·), respectively, for task k = 1, ..., K. The two networks have the same given basis functions (hidden nodes) Φ(·) = [1, φ_1(·), ..., φ_N(·)]^T, but different hidden-to-output weights. The weights of f_k(·) are trained with D_k ∪ D̃_k, while the weights of f_k^−(·) are trained using D̃_k. Then for k = 1, ..., K, the squared errors committed on D_k by f_k(·) and f_k^−(·) are related by

0 ≤ [det Λ_k]^{-1} ≤ λ_{max,k}^{-1} ≤ [Σ_{i=1}^{J_k} (y_{ik} − f_k(x_{ik}))²] / [Σ_{i=1}^{J_k} (y_{ik} − f_k^−(x_{ik}))²] ≤ λ_{min,k}^{-1} ≤ 1    (7)

where Λ_k = [I + Φ_k^T (λI + Φ̃_k Φ̃_k^T)^{-1} Φ_k]² with Φ_k = [φ(x_{1k}), ..., φ(x_{J_k k})] and Φ̃_k = [φ(x_{J_k+1,k}), ..., φ(x_{J_k+J̃_k,k})], and λ_{max,k} and λ_{min,k} are respectively the largest and smallest eigenvalues of Λ_k.
Specializing Theorem 2 to the case J̃_k = 0, we have

Corollary 1 Let there be K tasks and let the data set of the k-th task be D_k = {(x_{ik}, y_{ik})}_{i=1}^{J_k}. Let the RBF network, whose output nodes are characterized by f_k(·) for task k = 1, ..., K, have given basis functions (hidden nodes) Φ(·) = [1, φ_1(·), ..., φ_N(·)]^T, and let the hidden-to-output weights of task k be trained with D_k. Then for k = 1, ..., K, the squared error committed on D_k by f_k(·) is bounded as

0 ≤ [det Λ_k]^{-1} ≤ λ_{max,k}^{-1} ≤ [Σ_{i=1}^{J_k} (y_{ik} − f_k(x_{ik}))²] / [Σ_{i=1}^{J_k} y_{ik}²] ≤ λ_{min,k}^{-1} ≤ 1,

where Λ_k = [I + λ^{-1} Φ_k^T Φ_k]² with Φ_k = [φ(x_{1,k}), ..., φ(x_{J_k,k})], and λ_{max,k} and λ_{min,k} are respectively the largest and smallest eigenvalues of Λ_k.
It is evident from the properties of the matrix determinant [7] and the definition of Λ_k that

det Λ_k = [det(λI + Φ_k Φ_k^T)]² [det(λI)]^{-2} = [det(λI + Σ_{i=1}^{J_k} φ_{ik} φ_{ik}^T)]² [det(λI)]^{-2}.

Using (3) we write succinctly det Λ_k = [det A_k]² [det(λI)]^{-2}. We are interested in selecting the basis functions Φ that minimize the error, before seeing the y's. By Corollary 1 and the equation det Λ_k = [det A_k]² [det(λI)]^{-2}, the squared error is lower bounded by Σ_{i=1}^{J_k} y_{ik}² [det(λI)]² [det A_k]^{-2}. Instead of minimizing the error directly, we minimize its lower bound. As [det(λI)]² Σ_{i=1}^{J_k} y_{ik}² does not depend on Φ, this amounts to selecting Φ to minimize (det A_k)^{-2}. To minimize the errors for all tasks k = 1, ..., K, we select Φ to minimize Π_{k=1}^K (det A_k)^{-2}.
The selection proceeds in a sequential manner. Suppose we have selected basis functions Φ = [1, φ_1, ..., φ_N]^T. The associated A matrices are A_k = Σ_{i=1}^{J_k} φ_{ik} φ_{ik}^T + λ I_{(N+1)×(N+1)}, k = 1, ..., K. Augmenting the basis functions to [Φ^T, φ_{N+1}]^T, the A matrices change to A_k^{new} = Σ_{i=1}^{J_k} [φ_{ik}^T, φ^{N+1}_{ik}]^T [φ_{ik}^T, φ^{N+1}_{ik}] + λ I_{(N+2)×(N+2)}. Using the determinant formula for block matrices [7], we get Π_{k=1}^K (det A_k^{new})^{-2} = Π_{k=1}^K (q_k det A_k)^{-2}, where q_k is the same as in (6). As A_k does not depend on φ_{N+1}, the left-hand side is minimized by maximizing Π_{k=1}^K q_k². The selection is easily implemented by making the following two minor modifications in Table 1: (a) in step 2, compute e_0 = Σ_{k=1}^K ln(J_k + λ)^{-2}; (b) in step 3, compute δe(Φ, φ̂^{nm}) = Σ_{k=1}^K ln q_k². Employing the logarithm gains additivity and does not affect the maximization.
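Under the unsupervised criterion, only the score in step 3 changes; a sketch in the same notation as the earlier snippet (our construction):

```python
import numpy as np

def unsupervised_score(Phis, A_invs, phi_new, lam=1e-3):
    """Score sum_k ln q_k^2 for a candidate basis: maximizing prod_k q_k^2
    minimizes the lower-bound criterion prod_k (det A_k^new)^{-2}."""
    score = 0.0
    for Phi_k, A_inv, p in zip(Phis, A_invs, phi_new):
        c_k = Phi_k.T @ p
        q_k = p @ p + lam - c_k @ A_inv @ c_k
        score += np.log(q_k ** 2)
    return score
```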
Based on the basis functions Φ determined above, we proceed to selecting the data to be supervised and determining the hidden-to-output weights w from the supervised data using the equations in (3). The selection of data is based on an iterative use of the following corollary, which is a specialization of Theorem 2 and was originally given in [8].
Corollary 2 Let there be K tasks and let the data set of the k-th task be D_k = {(x_{ik}, y_{ik})}_{i=1}^{J_k}. Let there be two RBF networks, whose output nodes are characterized by f_k(·) and f_k^+(·), respectively, for task k = 1, ..., K. The two networks have the same given basis functions Φ(·) = [1, φ_1(·), ..., φ_N(·)]^T, but different hidden-to-output weights. The weights of f_k(·) are trained with D_k, while the weights of f_k^+(·) are trained using D_k^+ = D_k ∪ {(x_{J_k+1,k}, y_{J_k+1,k})}. Then for k = 1, ..., K, the squared errors committed on (x_{J_k+1,k}, y_{J_k+1,k}) by f_k(·) and f_k^+(·) are related by

(f_k^+(x_{J_k+1,k}) − y_{J_k+1,k})² = η^{-1}(x_{J_k+1,k}) (f_k(x_{J_k+1,k}) − y_{J_k+1,k})²,

where η(x_{J_k+1,k}) = [1 + Φ^T(x_{J_k+1,k}) A_k^{-1} Φ(x_{J_k+1,k})]² ≥ 1 and A_k = λI + Σ_{i=1}^{J_k} Φ(x_{ik}) Φ^T(x_{ik}) is the same as in (3).
Two observations are made from Corollary 2. First, if η(x_{J_k+1,k}) ≈ 1, seeing y_{J_k+1,k} does not affect the error on x_{J_k+1,k}, indicating that D_k already contains sufficient information about (x_{J_k+1,k}, y_{J_k+1,k}). Second, if η(x_{J_k+1,k}) ≫ 1, seeing y_{J_k+1,k} greatly decreases the error on x_{J_k+1,k}, indicating that x_{J_k+1,k} is significantly dissimilar (novel) to D_k and must be supervised to reduce the error. Based on Corollary 2, the selection proceeds sequentially. Suppose we have selected data D_k = {(x_{ik}, y_{ik})}_{i=1}^{J_k}, from which we compute A_k. We select the next data point as x_{J_k+1,k} = arg max_{i>J_k, k=1,...,K} η(x_{ik}) = arg max_{i>J_k, k=1,...,K} [1 + Φ^T(x_{ik}) A_k^{-1} Φ(x_{ik})]². After x_{J_k+1,k} is selected, A_k is updated and the next selection begins. As the iteration advances, η will decrease until it reaches convergence. We use (3) to compute w from the selected x and their associated targets y, completing the learning of the RBF network.
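A sketch of the sequential data-selection rule of Corollary 2, with the novelty measure written as in our reconstruction above; the Sherman-Morrison update is a standard identity rather than a formula stated in the paper:

```python
import numpy as np

def select_next(Phi_pool, A_inv):
    """Novelty eta(x) = (1 + phi(x)^T A^{-1} phi(x))^2 over a pool of
    unsupervised points (rows of Phi_pool) for one task; the paper takes
    the argmax jointly over all tasks."""
    eta = (1.0 + np.einsum('ij,jk,ik->i', Phi_pool, A_inv, Phi_pool)) ** 2
    return int(np.argmax(eta)), eta

def update_A_inv(A_inv, phi_x):
    """Rank-one Sherman-Morrison update of A^{-1} after acquiring phi_x."""
    Av = A_inv @ phi_x
    return A_inv - np.outer(Av, Av) / (1.0 + phi_x @ Av)
```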
5 Experimental Results
In this section we compare the multi-task RBF network against single-task RBF networks via experimental studies. We consider three types of RBF networks to learn K tasks, each with its data set D_k. In the first, which we call the "one RBF network", we let the K tasks share both the basis functions Φ (hidden nodes) and the hidden-to-output weights w; thus we do not distinguish the K tasks and design a single RBF network to learn a union of them. The second is the multi-task RBF network, where the K tasks share the same Φ but each has its own w. In the third, we have K independent networks, each designed for a single task.
We use a school data set from the Inner London Education Authority, consisting of examination records of 15362 students from 139 secondary schools. The data are available
at http://multilevel.ioe.ac.uk/intro/datasets.html. This data set was originally used to study
the effectiveness of schools and has recently been used to evaluate multi-task algorithms
[2,3]. The goal is to predict the exam scores of the students based on 9 variables: year of
exam (1985, 1986, or 1987), school code (1-139), FSM (percentage of students eligible for
free school meals), VR1 band (percentage of students in school in VR band one), gender,
VR band of student (3 categories), ethnic group of student (11 categories), school gender
(male, female, or mixed), school denomination (3 categories). We consider each school a
task, leading to 139 tasks in total. The remaining 8 variables are used as inputs to the RBF
network. Following [2,3], we converted each categorical variable to a number of binary
variables, resulting in a total number of 27 input variables, i.e., x ? R27 . The exam score
is the target to be predicted.
The three types of RBF networks defined above are designed as follows. The multi-task RBF network is implemented as the structure shown in Figure 1 and trained with the learning algorithm in Table 1. The "one RBF network" is implemented as a special case of Figure 1, with a single output node, and trained using the union of supervised data from all 139 schools. We design 139 independent RBF networks, each of which is implemented with a single output node and trained using the supervised data from a single school. We use the Gaussian RBF φ_n(x) = exp(−||x − c_n||²/(2σ_n²)), where the c_n's are selected from training data points and the σ_n's are initialized as 20 and optimized as described in Table 1. The main role of the regularization parameter λ is to prevent the A matrices from being singular, and it does not seriously affect the results. In the results reported here, λ is set to 10^{-6}.
Following [2,3], we randomly take 75% of the 15362 data points as training (supervised) data and the remaining 25% as test data. The generalization performance is measured by the squared error (f_k(x_{ik}) − y_{ik})² averaged over all test data x_{ik} of tasks k = 1, ..., K. We made 10 independent trials to randomly split the data into training and test sets; the squared error averaged over the test data of all 139 schools and over the trials is shown in Table 2, for the three types of RBF networks.
Table 2: Squared error averaged over the test data of all 139 schools and the 10 independent trials for randomly splitting the school data into training (75%) and testing (25%) sets.

Multi-task RBF network    Independent RBF networks    One RBF network
109.89 ± 1.8167           136.41 ± 7.0081             149.48 ± 2.8093
Table 2 clearly shows that the multi-task RBF network outperforms the other two types of RBF networks by a considerable margin. The "one RBF network" ignores the differences between the tasks, and the independent RBF networks ignore the tasks' correlations; therefore they both perform inferiorly. The multi-task RBF network uses the shared hidden nodes (basis functions) to capture the common internal representation of the tasks, and meanwhile uses the independent hidden-to-output weights to learn the statistics specific to each task.
We now demonstrate the results of active learning. We use the method in Section 4 to actively split the data into training and test sets using a two-step procedure. First we learn the basis functions Φ of the multi-task RBF network using all 15362 data points (unsupervised). Based on the Φ, we then select the data to be supervised and use them as training data to learn the hidden-to-output weights w. To make the results comparable, we use the same training data to learn the other two types of RBF networks (including learning their own Φ and w). The networks are then tested on the remaining data.
Figure 2 shows the results of active learning. Each curve is the squared error averaged over the test data of all 139 schools, as a function of the number of training data. It is clear that the multi-task RBF network maintains its superior performance all the way down to 5000 training data points, whereas the performance of the independent RBF networks degrades seriously as the training data diminish. This demonstrates the increasing advantage of multi-task learning as the number of training data decreases. The "one RBF network" also seems insensitive to the number of training data, but it ignores the inherent dissimilarity between the tasks, which makes its performance inferior.
[Figure 2: Squared error averaged over the test data of all 139 schools, as a function of the number of training (supervised) data (5000 to 12000). The data are split into training and test sets via active learning. Curves: multi-task RBF network, independent RBF networks, and one RBF network.]
6 Conclusions
We have presented the structure and learning algorithms for multi-task learning with the radial basis function (RBF) network. By letting multiple tasks share the basis functions (hidden nodes) we impose a common internal representation for correlated tasks. Exploiting the inter-task correlation yields a more compact network structure that has enhanced generalization ability. Unsupervised learning of the network structure enables us to actively split the data into training and test sets. As data novel to those previously selected are selected next, the data that finally remain unselected (to be tested) are all similar to the selected data, which constitute the training set. This improves the generalization of the resulting network to the test data. These conclusions are substantiated via results on real multi-task data.
References
[1] R. Caruana (1997). Multitask learning. Machine Learning, 28:41-75.
[2] B. Bakker and T. Heskes (2003). Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research, 4:83-99.
[3] T. Evgeniou, C. A. Micchelli, and M. Pontil (2005). Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637.
[4] M. Powell (1987). Radial basis functions for multivariable interpolation: a review. In J. C. Mason and M. G. Cox, eds., Algorithms for Approximation, pp. 143-167.
[5] S. Chen, C. F. N. Cowan, and P. M. Grant (1991). Orthogonal least squares learning algorithm for radial basis function networks. IEEE Transactions on Neural Networks, 2(2):302-309.
[6] D. A. Cohn, Z. Ghahramani, and M. I. Jordan (1995). Active learning with statistical models. Advances in Neural Information Processing Systems, 7:705-712.
[7] V. Fedorov (1972). Theory of Optimal Experiments. Academic Press.
[8] M. Stone (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B, 36:111-147.
Appendix

Proof of Theorem 1: Let Φ^{new} = [Φ, φ_{N+1}]^T. By (3), the A matrices corresponding to Φ^{new} are

A_k^{new} = Σ_{i=1}^{J_k} [φ_{ik}; φ^{N+1}_{ik}] [φ_{ik}^T, φ^{N+1}_{ik}] + λ I_{(N+2)×(N+2)} = [A_k, c_k; c_k^T, d_k]    (A-1)

where c_k and d_k are as in (6). By the conditions of the theorem, the matrices A_k and A_k^{new} are all non-degenerate. Using the block matrix inversion formula [7] we get

(A_k^{new})^{-1} = [A_k^{-1} + A_k^{-1} c_k q_k^{-1} c_k^T A_k^{-1},  −A_k^{-1} c_k q_k^{-1};  −q_k^{-1} c_k^T A_k^{-1},  q_k^{-1}]    (A-2)

where q_k is as in (6). By (3), the weights w_k^{new} corresponding to [Φ^T, φ_{N+1}]^T are

w_k^{new} = (A_k^{new})^{-1} [Σ_{i=1}^{J_k} y_{ik} φ_{ik}; Σ_{i=1}^{J_k} y_{ik} φ^{N+1}_{ik}] = [w_k + A_k^{-1} c_k q_k^{-1} g_k; −q_k^{-1} g_k]    (A-3)

with g_k = c_k^T w_k − Σ_{i=1}^{J_k} y_{ik} φ^{N+1}_{ik}. Hence (φ^{new}_{ik})^T w_k^{new} = φ_{ik}^T w_k + φ_{ik}^T A_k^{-1} c_k q_k^{-1} g_k − φ^{N+1}_{ik} q_k^{-1} g_k, which is put into (4) to get

e(Φ^{new}) = Σ_{k=1}^K Σ_{i=1}^{J_k} [y_{ik}² − y_{ik} (φ^{new}_{ik})^T w_k^{new}]
           = Σ_{k=1}^K Σ_{i=1}^{J_k} (y_{ik}² − y_{ik} φ_{ik}^T w_k) − Σ_{k=1}^K (Σ_{i=1}^{J_k} y_{ik} φ_{ik}^T A_k^{-1} c_k − Σ_{i=1}^{J_k} y_{ik} φ^{N+1}_{ik}) g_k q_k^{-1}
           = e(Φ) − Σ_{k=1}^K (c_k^T w_k − Σ_{i=1}^{J_k} y_{ik} φ^{N+1}_{ik})² q_k^{-1},

where in arriving at the last equality we have used (3) and (4) and g_k = c_k^T w_k − Σ_{i=1}^{J_k} y_{ik} φ^{N+1}_{ik}. The theorem is proved.
Proof of Theorem 2: The proof applies to k = 1, ..., K. For any given k, define Φ_k = [φ(x_{1k}), ..., φ(x_{J_k k})], Φ̃_k = [φ(x_{J_k+1,k}), ..., φ(x_{J_k+J̃_k,k})], y_k = [y_{1k}, ..., y_{J_k k}]^T, ỹ_k = [y_{J_k+1,k}, ..., y_{J_k+J̃_k,k}]^T, f_k = [f_k(x_{1k}), ..., f_k(x_{J_k k})]^T, f_k^− = [f_k^−(x_{1k}), ..., f_k^−(x_{J_k k})]^T, and Ã_k = λI + Φ̃_k Φ̃_k^T. By (1), (3), and the conditions of the theorem,

f_k = Φ_k^T (Ã_k + Φ_k Φ_k^T)^{-1} (Φ_k y_k + Φ̃_k ỹ_k)
    (a)= Φ_k^T Ã_k^{-1} (Φ_k y_k + Φ̃_k ỹ_k) − Φ_k^T Ã_k^{-1} Φ_k (I + Φ_k^T Ã_k^{-1} Φ_k)^{-1} Φ_k^T Ã_k^{-1} (Φ_k y_k + Φ̃_k ỹ_k)
    (b)= y_k + (I + Φ_k^T Ã_k^{-1} Φ_k)^{-1} (f_k^− − y_k),

where equation (a) is due to the Sherman-Morrison-Woodbury formula and equation (b) results because f_k^− = Φ_k^T Ã_k^{-1} Φ̃_k ỹ_k. Hence f_k − y_k = (I + Φ_k^T Ã_k^{-1} Φ_k)^{-1} (f_k^− − y_k), which gives

Σ_{i=1}^{J_k} (y_{ik} − f_k(x_{ik}))² = (f_k − y_k)^T (f_k − y_k) = (f_k^− − y_k)^T Λ_k^{-1} (f_k^− − y_k)    (A-4)

where Λ_k = [I + Φ_k^T Ã_k^{-1} Φ_k]² = [I + Φ_k^T (λI + Φ̃_k Φ̃_k^T)^{-1} Φ_k]².

By construction, Λ_k has all its eigenvalues no less than 1, i.e., Λ_k = E_k^T diag[λ_{1k}, ..., λ_{J_k k}] E_k with E_k^T E_k = I and λ_{1k}, ..., λ_{J_k k} ≥ 1, which makes the first, second, and last inequalities in (7) hold. Using this expansion of Λ_k in (A-4) we get

Σ_{i=1}^{J_k} (f_k(x_{ik}) − y_{ik})² = (f_k^− − y_k)^T E_k^T diag[λ_{1k}^{-1}, ..., λ_{J_k k}^{-1}] E_k (f_k^− − y_k)
    ≤ λ_{min,k}^{-1} (f_k^− − y_k)^T E_k^T I E_k (f_k^− − y_k) = λ_{min,k}^{-1} Σ_{i=1}^{J_k} (f_k^−(x_{ik}) − y_{ik})²    (A-5)

where the inequality results because λ_{min,k} = min(λ_{1,k}, ..., λ_{J_k,k}). From (A-5) follows the fourth inequality in (7). The third inequality in (7) can be proven in a similar way.
Non-iterative Estimation with Perturbed Gaussian Markov Processes
Yunsong Huang
B. Keith Jenkins
Signal and Image Processing Institute
Department of Electrical Engineering-Systems
University of Southern California
Los Angeles, CA 90089-2564
{yunsongh,jenkins}@sipi.usc.edu
Abstract
We develop an approach for estimation with Gaussian Markov processes that imposes a smoothness prior while allowing for discontinuities. Instead of propagating information laterally between neighboring nodes in a graph, we study the posterior distribution of the hidden nodes as a whole: how it is perturbed by invoking discontinuities, or weakening the edges, in the graph. We show that the resulting computation amounts to feed-forward fan-in operations reminiscent of V1 neurons. Moreover, using suitable matrix preconditioners, the incurred matrix inverse and determinant can be approximated, without iteration, in the same computational style. Simulation results illustrate the merits of this approach.
1 Introduction
Two issues, (i) efficient representation, and (ii) efficient inference, are of central importance
in the area of statistical modeling of vision problems. For generative models, often the ease
of generation and the ease of inference are two conflicting features. Factor Analysis [1]
and its variants, for example, model the input as a linear superposition of basis functions.
While the generation, or synthesis, of the input is immediate, the inference part is usually
not. One may apply a set of filters, e.g., Gabor filters, to the input image. In so doing,
however, the statistical modeling is only deferred, and further steps, either implicit or explicit, are needed to capture the "code" carried by those filter responses. By characterizing mutual dependencies among adjacent nodes, Markov Random Fields (MRFs) [2] and graphical models [3] are other powerful ways of modeling the input, which, when continuous, is often conveniently assumed to be Gaussian. In vision applications, it is suitable to employ smoothness priors admitting discontinuities [4]. Examples include weak membranes
and plates [5], formulated in the context of variational energy minimization. Typically, the
inference for MRF or graphical models would incur lateral propagation of information between neighboring units [6]. This is appealing in the sense that it consists of only simple,
local operations carried out in parallel. However, the resulting latency could undermine the
plausibility that such algorithms are employed in human early vision inference tasks [7].
In this paper we take the weak membrane and plate as instances of Gaussian processes (GP). We show that the effect of marking each discontinuity (hereafter termed "bond-breaking") is to perturb the inverse of the covariance matrix of the hidden nodes x by a matrix of rank 1. When multiple bonds are broken, the computation of the posterior mean and covariance of x involves the inversion of a matrix that typically has a large condition number, implying very slow convergence of straightforward iterative approaches. We show that there exists a family of preconditioners that can bring the condition number close to 1, thereby greatly speeding up the iteration, to the extent that a single step suffices in practice. Therefore, the predominant computation employed in our approach is non-iterative, of fan-in and fan-out style. We also devise ways to learn the parameters regarding state and observation noise non-iteratively. Finally, we report experimental results of applying the proposed algorithm to image denoising.
2 Perturbing a Gaussian Markov Process (GMP)
Consider a spatially invariant GMP defined on a torus, x ∼ N(0, Q_0), whose energy¹, defined as x^T Q_0^{-1} x, is the sum of the energies of all edges in the graph, due to the Markovian property. In what follows, we perturb the potential matrix Q_0^{-1} by reducing the coupling energy of certain bonds². This relieves the smoothness constraint on the nodes connected via those bonds.

Suppose the energy reduction of a bond connecting nodes i and j (whose state vectors are x_i and x_j, respectively) can be expressed as (x_i^T f_i + x_j^T f_j)², where f_i and f_j are coefficient vectors. This becomes (x^T f)² if f is constructed to be a vector of the same size as x, with the only non-zero entries f_i and f_j corresponding to nodes i and j. This manipulation can be identified with a rank-1 perturbation of Q_0^{-1}, as Q_1^{-1} ≜ Q_0^{-1} − f f^T, which is equivalent to x^T Q_1^{-1} x ≜ x^T Q_0^{-1} x − (x^T f)², ∀x. We call this an elementary perturbation of Q_0^{-1}, and f an elementary perturbation vector associated with the particular bond.

When L such perturbations have taken place (cf. Fig. 1), we form the L perturbation vectors into a matrix F_1 = [f^1, ..., f^L], and then the collective perturbations yield

Q_1^{-1} = Q_0^{-1} − F_1 F_1^T    (1)

and thus

Q_1 = Q_0 + Q_0 F_1 (I − F_1^T Q_0 F_1)^{-1} F_1^T Q_0,    (2)

which follows from the Sherman-Morrison-Woodbury formula (SMWF).
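A small numerical check of Eqs. (1)-(2), assuming a perturbation mild enough that I − F^T Q_0 F stays positive definite; the matrices here are synthetic:

```python
import numpy as np

def perturbed_covariance(Q0, F):
    """Eq. (2): Q1 = Q0 + Q0 F (I - F^T Q0 F)^{-1} F^T Q0 (SMWF)."""
    M = np.eye(F.shape[1]) - F.T @ Q0 @ F
    return Q0 + Q0 @ F @ np.linalg.solve(M, F.T @ Q0)

# sanity check against directly inverting Eq. (1)
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
Q0 = A @ A.T + 6 * np.eye(6)            # a generic SPD covariance
F = 0.02 * rng.normal(size=(6, 2))      # small enough that Q1^{-1} stays SPD
Q1 = perturbed_covariance(Q0, F)
assert np.allclose(Q1, np.linalg.inv(np.linalg.inv(Q0) - F @ F.T))
```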
2.1 Perturbing a membrane and a plate
In a membrane model [5], x_i is scalar and the energy of the bond connecting x_i and x_j is (x_i − x_j)²/q, where q is a parameter denoting the variance of the state noise. Upon perturbation, this energy is reduced to ε²(x_i − x_j)²/q, where 0 < ε ≪ 1 ensures positivity of the energy. The energy reduction is then (1 − ε²)(x_i − x_j)²/q, from which we can identify f_i = √((1 − ε²)/q) and f_j = −f_i.

In the case of a plate [5], x_i = [u_i, u_{h,i}, u_{v,i}]^T, in which u_i represents the intensity, while u_{h,i} and u_{v,i} represent its gradient in the horizontal and vertical direction, respectively. We define the energy of a horizontal bond connecting node j and i as E_0^{(−,i)} = (u_{v,i} − u_{v,j})²/q + d^{(−,i)T} O^{-1} d^{(−,i)}, where

d^{(−,i)} = [u_i; u_{h,i}] − [1, 1; 0, 1] [u_j; u_{h,j}]   and   O = q [1/3, 1/2; 1/2, 1],

with the superscript (−, i) denoting the horizontal bond to the left of node i.

¹ Henceforth called bonds, as "edge" will refer to an intensity discontinuity in an image.
² The bond energy remains positive. This ensures the positive definiteness of the potential matrix.
The first and second terms of E_0^{(−,i)} would correspond to (∂²u(h,v)/∂h∂v)²/q and (∂²u(h,v)/∂h²)²/q, respectively, if u(h,v) were a continuous function of h and v (cf. [5]). If E_0^{(−,i)} is reduced to E_1^{(−,i)} = [(u_{v,i} − u_{v,j})² + (u_{h,i} − u_{h,j})²]/q, i.e., coupling between nodes i and j exists only through their gradient values, one can show that the energy reduction is E_0^{(−,i)} − E_1^{(−,i)} = [u_i − u_j − (u_{h,i} + u_{h,j})/2]² · 12/q. Taking the actual energy reduction to be (1 − ε²)(E_0^{(−,i)} − E_1^{(−,i)}), we can identify f_i^{(−,i)} = √(12(1 − ε²)/q) [1, −1/2, 0]^T and f_j^{(−,i)} = √(12(1 − ε²)/q) [−1, −1/2, 0]^T, where 0 < ε ≪ 1 ensures the positive definiteness of the resulting potential matrix. A similar procedure can be applied to a vertical bond in the plate, producing a perturbation vector f^{(|,i)} whose components are zero everywhere except for f_i^{(|,i)} = √(12(1 − ε²)/q) [1, 0, −1/2]^T and f_j^{(|,i)} = √(12(1 − ε²)/q) [−1, 0, −1/2]^T, for which node j is the lower neighbor of node i.
One can verify that x^T f = 0 when the plate assumes the shape of a linear slope, meaning that this perturbation produces no energy difference in such a case. (x^T f)² becomes significant when the perturbed, or broken, bond associated with f straddles a step discontinuity of the image. Such an f is thus related to edge detection.
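To make the identification above concrete, a sketch that assembles f for one broken membrane bond on a 1-D chain (our construction):

```python
import numpy as np

def membrane_bond_vector(n_nodes, i, j, q, eps):
    """Elementary perturbation vector for membrane bond (i, j):
    f_i = sqrt((1 - eps^2)/q), f_j = -f_i, zeros elsewhere."""
    f = np.zeros(n_nodes)
    f[i] = np.sqrt((1.0 - eps ** 2) / q)
    f[j] = -f[i]
    return f

# (x^T f)^2 vanishes on a constant x and is large across a step edge
x_step = np.concatenate([np.zeros(5), np.ones(5)])
f = membrane_bond_vector(10, 4, 5, q=1.0, eps=0.1)
print((x_step @ f) ** 2)   # ~0.99: this bond straddles the step
```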
2.2 Hidden state estimation
Standard formulae exist for the posterior covariance K and mean x̂ of x, given a noisy observation³ y = Cx + n, where n ∼ N(0, rI):

x̂_τ = K_τ C^T y/r,   and   K_τ = [Q_τ^{-1} + C^T C/r]^{-1},

for either the unperturbed (τ = 0) or perturbed (τ = 1) process. Thus,

K_1 = [Q_0^{-1} + C^T C/r − F_1 F_1^T]^{-1} = [K_0^{-1} − F_1 F_1^T]^{-1},    (3)

K_1 = K_0 + W_1 H_1^{-1} W_1^T, applying the SMWF,    (4)

where H_1 ≜ I − F_1^T K_0 F_1 and W_1 ≜ K_0 F_1,    (5)

and

x̂_1 = K_1 C^T y/r = K_0 C^T y/r + W_1 H_1^{-1} W_1^T C^T y/r = x̂_0 + x̂_c,    (6)

x̂_c ≜ W_1 H_1^{-1} z_1, where z_1 = W_1^T C^T y/r.    (7)
On a digital computer, the above computation can be efficiently implemented in the Fourier domain, despite the huge size of K_τ and Q_τ. For example, K_1 equals K_0 (a circulant matrix) plus a rank-L perturbation (cf. Eq. 4). Since each column of W_1 is a spatially shifted copy of a prototypical vector, arising from breaking either a horizontal or a vertical bond, convolution can be utilized in computing W_1^T C^T y. The computation of H_1^{-1} is deferred to Section 3. On a neural substrate, however, the computation can be implemented by inner products in parallel. For instance, z_1 r is the result of inner products between the input y and the feed-forward fan-in weights CW, coded by the dendrites of identical neurons, each situated at a broken bond. Let v_1 = H_1^{-1} z_1 be the responses of another layer of neurons. Then C x̂_c = CW v_1 amounts to the back-projection of layer v_1 to the input plane, with fan-out weights identical to the fan-in counterpart.

We can also apply the above procedure incrementally⁴, i.e., apply F_1 and then F_2, both consisting of a set of perturbation vectors.

³ The observation matrix C = I for a membrane, and C = I ⊗ [1, 0, 0] for a plate.
⁴ Latency considerations, however, preclude the practicability of fully incremental computation.
[Figure 1: A portion of an MRF. Solid and broken lines denote intact and broken bonds, respectively. Open circles denote hidden nodes x_i and filled circles denote observed nodes y_i.]

[Figure 2: The resulting receptive field of the edge detector produced by breaking the shaded bond shown in Fig. 1. The central vertical dashed line in (a) and (b) marks the location of the vertical streak of bonds shown as broken in Fig. 1. In (a), those bonds are not actually broken; in (b), they are. In (c), a central horizontal slice of (a) is plotted as a solid curve and the counterpart of (b) as a dashed curve.]
[Figure 3: Estimation of x given input y. x̂_0: by the unperturbed rod; x̂_1: coinciding perfectly with y, obtained by a rod whose two bonds at the step edges of y are broken; x̂_c: correction term, engendered by the perturbed rod.]
Quantities resulting from the τ-th perturbation step can be obtained from those of the (τ−1)-th step, simply by replacing the subscript/superscript "1" and "0" with τ and τ−1, respectively, in Eqs. 1 to 6. In particular,

W_2 = K_1 F_2 = K_0 F_2 + W_1 H_1^{-1} W_1^T F_2 = W̃_2 + ΔW_2,    (8)

where W̃_2 ≜ K_0 F_2 refers to the weights due to F_2 in the absence of perturbation F_1, which, when indeed existent, exerts a contextual effect on F_2, thereby contributing to the term ΔW_2 ≜ W_1 H_1^{-1} W_1^T F_2.
Figure 2 illustrates this effect on one perturbation vector (termed an "edge detector") in a membrane model, wherein "receptive field" refers to W̃_2 and W_2 in the case of panels (a) and (b), respectively. Evidently, the receptive field of W_2 across the contextual boundary is pinched off. Figure 3 shows the estimation of x, cf. Eqs. 6 and 7, using a 1D plate, i.e., a rod. We stress that once the relevant edges are detected, x̂_c is computed almost instantly, without the need for iterative refinement via lateral propagation. This could be related to the brightness filling-in signal [8].
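A sketch of the non-iterative estimate of Eqs. (4)-(7) on a small dense instance; in practice the products with K_0 would be carried out as convolutions in the Fourier domain, as the paper notes:

```python
import numpy as np

def perturbed_posterior_mean(Q0, F, C, y, r):
    """x1 = x0 + W1 H1^{-1} z1 (Eqs. 4-7), for a small dense instance."""
    K0 = np.linalg.inv(np.linalg.inv(Q0) + C.T @ C / r)   # unperturbed posterior cov.
    x0 = K0 @ C.T @ y / r                                 # unperturbed posterior mean
    W1 = K0 @ F                                           # one weight vector per broken bond
    H1 = np.eye(F.shape[1]) - F.T @ K0 @ F
    z1 = W1.T @ C.T @ y / r                               # feed-forward fan-in responses
    xc = W1 @ np.linalg.solve(H1, z1)                     # correction, no lateral iteration
    return x0 + xc
```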
2.3 Parameter estimation
As edge inference/detection is outside the scope of this paper, we limit our attention to finding optimal values for the parameters r and q. Although the EM algorithm is possible for that purpose, we strive for a non-iterative alternative. To that end, we reparameterize r and q into r and ρ = q/r. Given a possibly perturbed model M_τ, in which x ∼ N(0, Q_τ), we have y ∼ N(0, S_τ), where S_τ = rI + C Q_τ C^T. Note that S̃_τ ≜ S_τ/r does not depend on r when ρ is fixed, as Q_τ ∝ q ∝ r ⟹ S_τ ∝ r. Next, we aim to maximize the log-probability of y, which is a vector of N components (or pixels):

J̃_τ ≜ ln p(y | M_τ) = −(N ln(2π) + ln|S_τ| + y^T S_τ^{-1} y)/2
    = −(N ln(2π) + N ln r + ln|S̃_τ| + (y^T S̃_τ^{-1} y)/r)/2    (9)

Setting ∂J̃_τ/∂r = 0 ⟹ r* = E_τ/N, where E_τ ≜ y^T S̃_τ^{-1} y. Define

J ≜ N ln E_τ + ln|S̃_τ| = const. − 2 J̃_τ|_{r*}    (10)
fact that % governs the spatial scale of the process [5] and scale channels exist in primate
visual system, we compute J(%) for a preselected set of %, corresponding to spatial scales
half-octave apart, and then fit the resulting J?s with a cubic polynomial, whose location of
minimum suggests %?. We use this approach in Section 4.
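A sketch of the scale scan: evaluate J on a half-octave grid of ρ and fit a cubic; the use of np.polyfit, the fit in ln ρ, and the grid length are our choices, not specified in the paper:

```python
import numpy as np

def best_rho(rho_grid, J_values):
    """Fit J(ln rho) with a cubic and return the interior minimizer."""
    t = np.log(rho_grid)
    c = np.polyfit(t, J_values, 3)        # cubic polynomial coefficients
    roots = np.roots(np.polyder(c))       # stationary points of the cubic
    roots = roots[np.isreal(roots)].real
    roots = roots[(roots > t.min()) & (roots < t.max())]
    if roots.size == 0:                   # no interior extremum: best grid point
        return rho_grid[int(np.argmin(J_values))]
    t_star = roots[np.argmin(np.polyval(c, roots))]
    return float(np.exp(t_star))

# half-octave grid of candidate scales, as in the paper's scan
rho_grid = 2.0 ** (0.5 * np.arange(8))
```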
Computing J in Eq. 10 needs two identities, which are included here without proof (the second can be proven by using the SMWF and its associated determinant identity): E_τ = y^T(y − C x̂_τ) (cf. Appendix A of [5]), and |S_0|/|S_τ| = |B_τ|/|H_τ|, where

H_τ = I − F_τ^T K_0 F_τ,   and   B_τ ≜ I − F_τ^T Q_0 F_τ    (11)

That is, E_τ can be readily obtained once x̂_τ has been estimated, and |S̃_τ| = |S̃_0| |H_τ|/|B_τ|, in which |S̃_0| can be calculated in the spectral domain, as S_0 is circulant. The computation of |H_τ| and |B_τ| is dealt with in the next section.
3 Matrix Preconditioning
Some of the foregoing computation necessitates a matrix determinant and a matrix inverse, e.g., H^{-1} z_1 (cf. Eq. 7). Because H is typically poorly conditioned, plain iterative means of evaluating H^{-1} z would converge very slowly. Methods exist in the literature for finding a matrix P ([9] and references therein) satisfying the following two criteria: (1) inverting P is easy; (2) the condition number κ(P^{-1}H) approaches 1. Ideally, κ(P^{-1}H) = 1 implies P = H. Here we summarize our findings regarding the best class of preconditioners when H arises from some prototypical configurations of bond breaking. We call the following procedure Approximate Diagonalization (AD).
(1) "DFT". When a streak of broken bonds forms a closed contour, with a consistent polarity convention (e.g., the excitatory region of the receptive field of the edge detector associated with each bond lies inside the enclosed region), H and B (cf. Eq. 11) are approximately circulant. Let X be the unitary Fourier matrix of the same size as H; then H^e = X* H X would be approximately diagonal. Let Λ_H be diagonal, with Λ_{H,ij} = δ_{ij} H^e_{ii}. Then H̃ = X Λ_H X* is a circulant matrix approximating H; Π_i Λ_{H,ii} approximates |H|; and X Λ_H^{-1} X* approximates H^{-1}. In this way, a computation such as H^{-1} z_1 becomes X Λ_H^{-1} X* z_1, which amounts to simple fan-in and fan-out operations, if we regard each column of X as a fan-in weight vector. The quality of this preconditioner H̃ can be evaluated by both the condition number κ(H̃^{-1} H) and the relative error between the inverse matrices:

ε ≜ ||H̃^{-1} − H^{-1}||_F / ||H^{-1}||_F,    (12)

where ||·||_F denotes the Frobenius norm. The same X can approximately diagonalize B, and the product of the diagonal elements of the resulting matrix approximates |B|.
(2) "DCST". One end of the streak of broken bonds (the target contour) abuts another contour, and the other end is open (i.e., a line-end). Imagine a vibrational mode of the membrane/plate given the configuration of broken bonds. The vibrational contrast of the nodes across the broken bond at a line-end has to be small, since in the immediate vicinity there exist paths of intact bonds linking the two nodes. This suggests a Dirichlet boundary condition at the line-end. At the abutting end (i.e., a T-junction), however, the vibrational contrast can be large, since the nodes on different sides of the contour are practically decoupled. This suggests a von Neumann boundary condition. This analysis leads to using a transform (termed "HSWA" in [10]) which we call "DCST", denoting sine phase at the open end and cosine phase at the abutting end. The unitary transform matrix X is given by

X_{i,j} = (2/√(2L+1)) cos(π(i − 1/2)(j − 1/2)/(L + 1/2)),   1 ≤ i, j ≤ L,

where L is the number of broken bonds in the target contour.
(3) "DST". When the streak of broken bonds forms an open-ended contour, H can be approximately diagonalized by the Sine Transform (cf. the intuitive rationale stated in case (2)), of which the unitary transform matrix X is given by

X_{i,j} = √(2/(L+1)) sin(πij/(L+1)),   1 ≤ i, j ≤ L.
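Sketches of the two transform matrices as given above, plus how an approximate-diagonalization preconditioner is applied; ad_solve and the variable names are ours:

```python
import numpy as np

def dst_matrix(L):
    """Unitary DST: X[i,j] = sqrt(2/(L+1)) sin(pi*i*j/(L+1)), with 1-based i, j."""
    i = np.arange(1, L + 1)
    return np.sqrt(2.0 / (L + 1)) * np.sin(np.pi * np.outer(i, i) / (L + 1))

def dcst_matrix(L):
    """DCST of the paper: sine phase at the open end, cosine at the abutting end."""
    h = np.arange(1, L + 1) - 0.5         # i - 1/2 for i = 1..L
    return (2.0 / np.sqrt(2 * L + 1)) * np.cos(np.pi * np.outer(h, h) / (L + 0.5))

def ad_solve(H, X, z):
    """Approximate H^{-1} z as X diag(X* H X)^{-1} X* z: fan-in/fan-out only."""
    lam = np.einsum('ij,jk,ki->i', X.conj().T, H, X)   # diagonal of X* H X
    return X @ ((X.conj().T @ z) / lam)
```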
For a "clean" prototypical contour, the performance of such preconditioners is remarkable, typically producing 1 ≤ κ < 1.2 and ε < 0.05. When contours in the image are interconnected in a complex way, we first parse the image domain into non-overlapping enclosed regions, and then treat each region independently. A contour segment dividing two regions is shared between them, and thus contributes two copies, each belonging to one region [11].
4 Experiment
We test our approach on a real image (Fig. 4a), which is corrupted with three increasing levels of white Gaussian noise: SNR = 4.79 dB (Fig. 4b), 3.52 dB, and 2.34 dB. Our task is to estimate the original image, along with finding optimal q and r. We used both membrane and plate models, and in each case we used both the "direct" method, which directly computes H^{-1} in Eq. 7 and |H|/|B| required in Eq. 10, and the "AD" method, as described in Section 3, which computes those quantities approximately.

We first apply a Canny detector to generate an edge map (Fig. 4g) for each noisy image, which is then converted to broken bonds. The large number (over 10^4) of broken bonds makes the direct method impractical. In order to attain a "direct" result, we partition the image domain into a 5 × 5 array of blocks (one such block is delineated by the inner square in Fig. 4g), and focus on each of them in turn by retaining edges not more than 10 pixels from the target block (this block's outer scope is delineated by the outer square in Fig. 4g). When x̂ is inferred given this partial edge map, only its pixels within the block are considered valid and are retained. We mosaic up x̂ from all those blocks to get the complete inferred image. In "AD", we parse the contours in each block and apply different diagonalizers accordingly, as summarized in Section 3. The performance of the three types of AD is plotted in Fig. 5, from which it is evident that in the majority of cases κ < 1.5 and ε ≤ 10%. Figs. 4e and 4f illustrate the procedure to find the optimal q/r for a membrane and a plate, respectively, as explained in Section 2.3. Note how good the cubic polynomial fit is, and that the results of AD do not deviate much from those of the direct (rigorous) method. Figs. 4c and 4d show x̂ by a perturbed and an intact membrane model, respectively. Notice that the edges, for instance around Lena's shoulder and her hat, in Fig. 4d are more smeared than those in Fig. 4c (cf. Fig. 3). Table 1 summarizes the values of optimal q/r and mean-squared error (MSE). Our results compare favorably with those listed in the last column of the table, which is excerpted from [12].
[Figure 4: (a) Original image; (b) noisy image. Estimation by (c) a perturbed membrane, and (d) an intact membrane. The criterion function J of varying q/r for (e) a perturbed membrane and (f) a perturbed plate (same legend as in (e): direct, cubic fit, AD, extremum). (g) Canny edge map.]
[Figure 5: Histograms of the condition number κ after preconditioning, and the relative error ε as defined in Eq. 12, illustrating the performance of the preconditioners (a) DFT, (b) DST, and (c) DCST on their respective datasets. Horizontal axes indicate the number of occurrences in each bin.]
Table 1: Optimal q/r and MSE.

          membrane model              plate model               Improved
          direct        AD            direct        AD          Entropic [12]
SNR    q/r    MSE    q/r    MSE    q/r    MSE    q/r    MSE        MSE
4.79   0.456   92    0.444   92    0.067  100    0.075   98        121
3.52   0.299  104    0.311  104    0.044  111    0.049  108        138
2.34   0.217  115    0.233  115    0.033  119    0.031  121        166
5 Conclusions
We have shown how estimation with perturbed Gaussian Markov processes (hidden state and parameter estimation) can be carried out in a non-iterative way. We have adopted a holistic viewpoint: instead of focusing on each individual hidden node, we have taken each process as an entity under scrutiny. This paradigm shift changes the way information is stored and represented, from a scenario where the global pattern of the process is embodied entirely by local couplings to one where fan-in and fan-out weights, in addition to local couplings, reflect the patterns of larger scales.
Although edge detection has not been treated in this paper, our formulation is capable of
doing so, and our preliminary results are encouraging. It may be premature at this stage to
translate the operations of our model to neural substrate; we speculate nevertheless that our
approach may have relevance to understanding biological visual systems.
Acknowledgments
This work was supported in part by the TRW Foundation, ARO (Grant Nos. DAAG55-981-0293 and DAAD19-99-1-0057), and DARPA (Grant No. DAAD19-0010356).
References
[1] Z. Ghahramani and M. J. Beal (2000). Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems, volume 12. MIT Press.
[2] S. Z. Li (1995). Markov Random Field Modeling in Computer Vision. Springer-Verlag.
[3] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul (1999). An introduction to variational methods for graphical models. Machine Learning, 37:183-233.
[4] F. C. Jeng and J. W. Woods (1991). Compound Gauss-Markov random fields for image estimation. IEEE Transactions on Signal Processing, 39(3):683-697.
[5] A. Blake and A. Zisserman (1987). Visual Reconstruction. MIT Press.
[6] J. S. Yedidia, W. T. Freeman, and Y. Weiss (2001). Bethe free energy, Kikuchi approximations, and belief propagation algorithms. Technical Report TR2001-16, MERL, May 2001.
[7] S. Thorpe, D. Fize, and C. Marlot (1996). Speed of processing in the human visual system. Nature, 381:520-522.
[8] L. Pessoa and P. De Weerd, editors (2003). Filling-in: From Perceptual Completion to Cortical Reorganization. Oxford University Press.
[9] R. Chan, M. Ng, and C. Wong (1996). Sine transform based preconditioners for symmetric Toeplitz systems. Linear Algebra and its Applications, 232:237-259.
[10] S. A. Martucci (1994). Symmetric convolution and the discrete sine and cosine transforms. IEEE Transactions on Signal Processing, 42(5):1038-1051.
[11] H. Zhou, H. Friedman, and R. von der Heydt (2000). Coding of border ownership in monkey visual cortex. Journal of Neuroscience, 20(17):6594-6611.
[12] A. Ben Hamza, H. Krim, and G. B. Unal (2002). Unifying probabilistic and variational estimation. IEEE Signal Processing Magazine, pages 37-47, September 2002.
2,102 | 2,909 | Laplacian Score for Feature Selection
Xiaofei He (1), Deng Cai (2), Partha Niyogi (1)
(1) Department of Computer Science, University of Chicago
{xiaofei, niyogi}@cs.uchicago.edu
(2) Department of Computer Science, University of Illinois at Urbana-Champaign
[email protected]
Abstract
In supervised learning scenarios, feature selection has been studied
widely in the literature. Selecting features in unsupervised learning scenarios is a much harder problem, due to the absence of class labels that
would guide the search for relevant information. And, almost all previous unsupervised feature selection methods are "wrapper" techniques
that require a learning algorithm to evaluate the candidate feature subsets.
In this paper, we propose a "filter" method for feature selection which is
independent of any learning algorithm. Our method can be performed in
either supervised or unsupervised fashion. The proposed method is based
on the observation that, in many real world classification problems, data
from the same class are often close to each other. The importance of a
feature is evaluated by its power of locality preserving, or, Laplacian
Score. We compare our method with data variance (unsupervised) and
Fisher score (supervised) on two data sets. Experimental results demonstrate the effectiveness and efficiency of our algorithm.
1 Introduction
Feature selection methods can be classified into "wrapper" methods and "filter" methods
[4]. The wrapper model techniques evaluate the features using the learning algorithm that
will ultimately be employed. Thus, they "wrap" the selection process around the learning
algorithm. Most of the feature selection methods are wrapper methods. Algorithms based
on the filter model examine intrinsic properties of the data to evaluate the features prior to
the learning tasks. The filter based approaches almost always rely on the class labels, most
commonly assessing correlations between features and the class label. In this paper, we
are particularly interested in the filter methods. Some typical filter methods include data
variance, Pearson correlation coefficients, Fisher score, and Kolmogorov-Smirnov test.
Most of the existing filter methods are supervised. Data variance might be the simplest
unsupervised evaluation of the features. The variance along a dimension reflects its representative power. Data variance can be used as a criteria for feature selection and extraction.
For example, Principal Component Analysis (PCA) is a classical feature extraction method
which finds a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data.
Although the data variance criteria finds features that are useful for representing data, there
is no reason to assume that these features must be useful for discriminating between data in
different classes. Fisher score seeks features that are efficient for discrimination. It assigns
the highest score to the feature on which the data points of different classes are far from
each other while requiring data points of the same class to be close to each other. Fisher
criterion can be also used for feature extraction, such as Linear Discriminant Analysis
(LDA).
In this paper, we introduce a novel feature selection algorithm called Laplacian Score (LS).
For each feature, its Laplacian score is computed to reflect its locality preserving power.
LS is based on the observation that, two data points are probably related to the same topic
if they are close to each other. In fact, in many learning problems such as classification,
the local structure of the data space is more important than the global structure. In order to
model the local geometric structure, we construct a nearest neighbor graph. LS seeks those
features that respect this graph structure.
2 Laplacian Score
Laplacian Score (LS) is fundamentally based on Laplacian Eigenmaps [1] and Locality
Preserving Projection [3]. The basic idea of LS is to evaluate the features according to their
locality preserving power.
2.1 The Algorithm
Let L_r denote the Laplacian Score of the r-th feature. Let f_{ri} denote the i-th sample of the
r-th feature, i = 1, \ldots, m. Our algorithm can be stated as follows:
1. Construct a nearest neighbor graph G with m nodes. The i-th node corresponds
to x_i. We put an edge between nodes i and j if x_i and x_j are "close", i.e. x_i is
among the k nearest neighbors of x_j or x_j is among the k nearest neighbors of x_i. When
the label information is available, one can put an edge between two nodes sharing
the same label.
2. If nodes i and j are connected, put S_{ij} = e^{-\|x_i - x_j\|^2 / t}, where t is a suitable constant.
Otherwise, put Sij = 0. The weight matrix S of the graph models the local
structure of the data space.
3. For the r-th feature, we define:
f_r = [f_{r1}, f_{r2}, \ldots, f_{rm}]^T, \quad D = diag(S1), \quad 1 = [1, \ldots, 1]^T, \quad L = D - S
where the matrix L is often called graph Laplacian [2]. Let
\tilde{f}_r = f_r - \frac{f_r^T D 1}{1^T D 1} 1
4. Compute the Laplacian Score of the r-th feature as follows:
L_r = \frac{\tilde{f}_r^T L \tilde{f}_r}{\tilde{f}_r^T D \tilde{f}_r}    (1)
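As an illustrative sketch of steps 1-4 (our own, not part of the original algorithm statement; it assumes NumPy is available and uses a brute-force nearest-neighbor search), the unsupervised Laplacian Score can be computed as:

import numpy as np

def laplacian_score(X, k=5, t=1.0):
    # X: (m, d) data matrix, one sample per row; returns d scores (smaller is better).
    m, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # squared distances
    order = np.argsort(sq, axis=1)
    W = np.zeros((m, m), dtype=bool)
    for i in range(m):
        W[i, order[i, 1:k + 1]] = True       # k nearest neighbors of x_i
    W |= W.T                                  # symmetrize the graph
    S = np.where(W, np.exp(-sq / t), 0.0)     # heat-kernel weights on edges
    D = np.diag(S.sum(axis=1))
    L = D - S                                 # graph Laplacian
    one = np.ones(m)
    scores = np.empty(d)
    for r in range(d):
        f = X[:, r]
        ft = f - (f @ D @ one) / (one @ D @ one) * one   # remove weighted mean
        scores[r] = (ft @ L @ ft) / (ft @ D @ ft)        # equation (1)
    return scores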
3 Justification

3.1 Objective Function
Recall that given a data set we construct a weighted graph G with edges connecting nearby
points to each other. Sij evaluates the similarity between the i-th and j-th nodes. Thus,
the importance of a feature can be thought of as the degree it respects the graph structure.
To be specific, a "good" feature should be the one on which two data points are close to each
other if and only if there is an edge between these two points. A reasonable criterion for
choosing a good feature is to minimize the following objective function:
L_r = \frac{\sum_{ij} (f_{ri} - f_{rj})^2 S_{ij}}{Var(f_r)}    (2)
where Var(f_r) is the estimated variance of the r-th feature. By minimizing \sum_{ij} (f_{ri} - f_{rj})^2 S_{ij}, we prefer those features respecting the pre-defined graph structure. For a good
feature, the bigger S_{ij}, the smaller (f_{ri} - f_{rj}), and thus the Laplacian Score tends to be
small. Following some simple algebraic steps, we see that
\sum_{ij} (f_{ri} - f_{rj})^2 S_{ij} = \sum_{ij} (f_{ri}^2 + f_{rj}^2 - 2 f_{ri} f_{rj}) S_{ij}
= 2 \sum_{ij} f_{ri}^2 S_{ij} - 2 \sum_{ij} f_{ri} S_{ij} f_{rj} = 2 f_r^T D f_r - 2 f_r^T S f_r = 2 f_r^T L f_r
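The identity above is easy to verify numerically; the following sketch (ours, assuming NumPy and any symmetric weight matrix) checks it on random data:

import numpy as np

rng = np.random.default_rng(0)
S = rng.random((6, 6)); S = (S + S.T) / 2           # symmetric weights
D = np.diag(S.sum(axis=1)); L = D - S
f = rng.random(6)
lhs = sum((f[i] - f[j]) ** 2 * S[i, j] for i in range(6) for j in range(6))
assert np.isclose(lhs, 2 * f @ L @ f)                # matches 2 f^T L f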
By maximizing Var(f_r), we prefer those features with large variance which have more
representative power. Recall that the variance of a random variable a can be written as
follows:
Var(a) = \int_M (a - \mu)^2 \, dP(a), \qquad \mu = \int_M a \, dP(a)
where M is the data manifold, \mu is the expected value of a and dP is the probability
measure. By spectral graph theory [2], dP can be estimated by the diagonal matrix D on
the sample points. Thus, the weighted data variance can be estimated as follows:
Var(f_r) = \sum_i (f_{ri} - \mu_r)^2 D_{ii},
\mu_r = \frac{\sum_i f_{ri} D_{ii}}{\sum_i D_{ii}} = \frac{1}{\sum_i D_{ii}} \Big( \sum_i f_{ri} D_{ii} \Big) = \frac{f_r^T D 1}{1^T D 1}
To remove the mean from the samples, we define:
\tilde{f}_r = f_r - \frac{f_r^T D 1}{1^T D 1} 1
Thus,
Var(f_r) = \sum_i \tilde{f}_{ri}^2 D_{ii} = \tilde{f}_r^T D \tilde{f}_r.
Also, it is easy to show that \tilde{f}_r^T L \tilde{f}_r = f_r^T L f_r (please see Proposition 1 in Section 4.2 for
details). We finally get equation (1).
It is important to note that if we do not remove the mean, the vector f_r could be a nonzero constant vector such as 1. It is easy to check that 1^T L 1 = 0 and 1^T D 1 > 0. Thus,
L_r = 0. Unfortunately, this feature is clearly of no use since it contains no information.
With the mean removed, the new vector \tilde{f}_r is orthogonal to 1 with respect to D, i.e.
\tilde{f}_r^T D 1 = 0. Therefore, \tilde{f}_r cannot be any constant vector other than 0. If \tilde{f}_r = 0, then \tilde{f}_r^T L \tilde{f}_r = \tilde{f}_r^T D \tilde{f}_r = 0. Thus, the Laplacian Score L_r becomes a trivial solution and the r-th feature
is excluded from selection. While computing the weighted variance, the matrix D models
the importance (or local density) of the data points. We can also simply replace it by the
identity matrix I, in which case the weighted variance becomes the standard variance. To
be specific,
\tilde{f}_r = f_r - \frac{f_r^T I 1}{1^T I 1} 1 = f_r - \frac{f_r^T 1}{n} 1 = f_r - \mu 1
where \mu is the mean of the f_{ri}, i = 1, \ldots, n. Thus,
Var(f_r) = \frac{1}{n} \tilde{f}_r^T I \tilde{f}_r = \frac{1}{n} (f_r - \mu 1)^T (f_r - \mu 1),    (3)
which is just the standard variance.
In fact, the Laplacian scores can be thought of as the Rayleigh quotients for the features
with respect to the graph G, please see [2] for details.
3.2 Connection to Fisher Score
In this section, we provide a theoretical analysis of the connection between our algorithm
and the canonical Fisher score.
Given a set of data points with labels, \{x_i, y_i\}_{i=1}^n, y_i \in \{1, \ldots, c\}. Let n_i denote the
number of data points in class i. Let \mu_i and \sigma_i^2 be the mean and variance of class i,
i = 1, \ldots, c, corresponding to the r-th feature. Let \mu and \sigma^2 denote the mean and variance
of the whole data set. The Fisher score is defined below:
F_r = \frac{\sum_{i=1}^c n_i (\mu_i - \mu)^2}{\sum_{i=1}^c n_i \sigma_i^2}    (4)
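For concreteness, equation (4) translates into a few lines of code (an illustrative sketch of ours, assuming NumPy; population variances, i.e. ddof = 0, are used):

import numpy as np

def fisher_score(f, y):
    # f: values of one feature across all samples; y: class labels.
    f, y = np.asarray(f, dtype=float), np.asarray(y)
    mu = f.mean()
    num = den = 0.0
    for l in np.unique(y):
        fl = f[y == l]
        num += fl.size * (fl.mean() - mu) ** 2   # n_i (mu_i - mu)^2
        den += fl.size * fl.var()                # n_i sigma_i^2
    return num / den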
In the following, we show that Fisher score is equivalent to Laplacian score with a special
graph structure. We define the weight matrix as follows:
S_{ij} = \begin{cases} \frac{1}{n_l}, & y_i = y_j = l; \\ 0, & \text{otherwise.} \end{cases}    (5)
Without loss of generality, we assume that the data points are ordered according to which
class they are in, so that {x1 , ? ? ? , xn1 } are in the first class, {xn1 +1 , ? ? ? , xn1 +n2 } are in
the second class, etc. Thus, S can be written as follows:
S = \begin{pmatrix} S_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & S_c \end{pmatrix}
where S_i = \frac{1}{n_i} 1 1^T is an n_i \times n_i matrix. For each S_i, the row (or column) sum is equal
to 1, so D_i = diag(S_i 1) is just the identity matrix. Define f_r^1 = [f_{r1}, \ldots, f_{r n_1}]^T, f_r^2 =
[f_{r, n_1+1}, \ldots, f_{r, n_1+n_2}]^T, etc. We now make the following observations.
Observation 1. With the weight matrix S defined in (5), we have \tilde{f}_r^T L \tilde{f}_r = f_r^T L f_r = \sum_i n_i \sigma_i^2, where L = D - S.
To see this, define L_i = D_i - S_i = I_i - S_i, where I_i is the n_i \times n_i identity matrix. We
have
f_r^T L f_r = \sum_{i=1}^c (f_r^i)^T L_i f_r^i = \sum_{i=1}^c (f_r^i)^T \Big( I_i - \frac{1}{n_i} 1 1^T \Big) f_r^i = \sum_{i=1}^c n_i \, cov(f_r^i, f_r^i) = \sum_{i=1}^c n_i \sigma_i^2
Note that, since u^T L 1 = 1^T L u = 0, \forall u \in R^n, the value of f_r^T L f_r remains unchanged by
subtracting a constant vector (= \mu 1) from f_r. This shows that \tilde{f}_r^T L \tilde{f}_r = f_r^T L f_r = \sum_i n_i \sigma_i^2.
Observation 2. With the weight matrix S defined in (5), we have \tilde{f}_r^T D \tilde{f}_r = n \sigma^2.
To see this: by the definition of S, we have D = I. Thus, this is an immediate result from
equation (3).
Observation 3. With the weight matrix S defined in (5), we have \sum_{i=1}^c n_i (\mu_i - \mu)^2 = \tilde{f}_r^T D \tilde{f}_r - \tilde{f}_r^T L \tilde{f}_r.
To see this, notice
\sum_{i=1}^c n_i (\mu_i - \mu)^2 = \sum_{i=1}^c (n_i \mu_i^2 - 2 n_i \mu_i \mu + n_i \mu^2)
= \sum_{i=1}^c \frac{1}{n_i} (n_i \mu_i)^2 - 2\mu \sum_{i=1}^c n_i \mu_i + \mu^2 \sum_{i=1}^c n_i = \sum_{i=1}^c \frac{1}{n_i} (f_r^i)^T 1 1^T f_r^i - 2n\mu^2 + n\mu^2
= \sum_{i=1}^c (f_r^i)^T S_i f_r^i - \frac{1}{n} (n\mu)^2 = f_r^T S f_r - f_r^T \Big( \frac{1}{n} 1 1^T \Big) f_r
= f_r^T \Big( I - \frac{1}{n} 1 1^T \Big) f_r - f_r^T (I - S) f_r = n\sigma^2 - f_r^T L f_r = \tilde{f}_r^T D \tilde{f}_r - \tilde{f}_r^T L \tilde{f}_r.
This completes the proof.
We therefore get the following relationship between the Laplacian score and Fisher score:
Theorem 1. Let F_r denote the Fisher score of the r-th feature. With the weight matrix S
defined in (5), we have L_r = \frac{1}{1 + F_r}.
Proof. From Observations 1, 2 and 3, we see that
F_r = \frac{\sum_{i=1}^c n_i (\mu_i - \mu)^2}{\sum_{i=1}^c n_i \sigma_i^2} = \frac{\tilde{f}_r^T D \tilde{f}_r - \tilde{f}_r^T L \tilde{f}_r}{\tilde{f}_r^T L \tilde{f}_r} = \frac{1}{L_r} - 1.
Thus, L_r = \frac{1}{1 + F_r}.
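Theorem 1 can also be checked numerically. The sketch below (our own, assuming NumPy) builds S as in (5) for a random labelled feature and confirms the relationship:

import numpy as np

rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2], [4, 5, 6])
f = rng.random(y.size)

# Laplacian Score with the supervised weights of equation (5).
S = (y[:, None] == y[None, :]) / np.array([np.sum(y == l) for l in y])
D = np.diag(S.sum(axis=1))                  # equals the identity here
L = D - S
one = np.ones(y.size)
ft = f - (f @ D @ one) / (one @ D @ one) * one
Lr = (ft @ L @ ft) / (ft @ D @ ft)

# Fisher score of equation (4), with population variances (ddof = 0).
labels = np.unique(y)
ni = np.array([np.sum(y == l) for l in labels])
mi = np.array([f[y == l].mean() for l in labels])
vi = np.array([f[y == l].var() for l in labels])
Fr = (ni * (mi - f.mean()) ** 2).sum() / (ni * vi).sum()

assert np.isclose(Lr, 1.0 / (1.0 + Fr))     # Theorem 1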
4 Experimental Results
Several experiments were carried out to demonstrate the efficiency and effectiveness of our
algorithm. Our algorithm is an unsupervised filter method, while almost all the existing filter
methods are supervised. Therefore, we compared our algorithm with data variance which
can be performed in unsupervised fashion.
4.1 UCI Iris Data
Iris dataset, popularly used for testing clustering and classification algorithms, is taken from
UCI ML repository. It contains 3 classes of 50 instances each, where each class refers to
a type of Iris plant. Each instance is characterized by four features, i.e. sepal length, sepal
width, petal length, and petal width. One class is linearly separable from the other two,
but the other two are not linearly separable from each other. Out of the four features it is
known that the features F3 (petal length) and F4 (petal width) are more important for the
underlying clusters.
The class correlation for each feature is 0.7826, -0.4194, 0.9490 and 0.9565. We also used
the leave-one-out strategy to do classification using each single feature. We simply used the
nearest neighbor classifier. The classification error rates for the four features are 0.41, 0.52,
0.12 and 0.12, respectively. Our analysis indicates that F3 and F4 are better than F1 and F2
in the sense of discrimination. In Figure 1, we present a 2-D visualization of the Iris data.
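The per-feature error rates quoted above can be reproduced in a few lines (our own sketch; it assumes scikit-learn for the data, the classifier and the cross-validation loop):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
for r in range(X.shape[1]):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                          X[:, [r]], y, cv=LeaveOneOut()).mean()
    print(f"feature {r + 1}: leave-one-out error rate {1 - acc:.2f}")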
We compared three methods, i.e. Variance, Fisher score and Laplacian Score for feature
selection. All of them are filter methods which are independent of any learning task.
However, Fisher score is supervised, while the other two are unsupervised.
Figure 1: 2-D visualization of the Iris data (Feature 1 vs. Feature 2, and Feature 3 vs. Feature 4, with points labeled by class).
By using variance, the four features are sorted as F3, F1, F4, F2. Laplacian score (with
k \geq 15) sorts these four features as F3, F4, F1, F2. Laplacian score (with 3 \leq k < 15)
sorts these four features as F4, F3, F1, F2. With a larger k, we see more global structure
of the data set. Therefore, the feature F3 is ranked above F4 since the variance of F3 is
greater than that of F4. By using Fisher score, the four features are sorted as F3, F4, F1,
F2. This indicates that Laplacian score (unsupervised) achieved the same result as Fisher
score (supervised).
4.2 Face Clustering on PIE
In this section, we apply our feature selection algorithm to face clustering. By using Laplacian score, we select a subset of features which are the most useful for discrimination.
Clustering is then performed in such a subspace.
4.2.1 Data Preparation
The CMU PIE face database is used in this experiment. It contains 68 subjects with 41,368
face images as a whole. Preprocessing to locate the faces was applied. Original images
were normalized (in scale and orientation) such that the two eyes were aligned at the same
position. Then, the facial areas were cropped into the final images for matching. The size
of each cropped image is 32 \times 32 pixels, with 256 grey levels per pixel. Thus, each image
is represented by a 1024-dimensional vector. No further preprocessing is done. In this
experiment, we fixed the pose and expression. Thus, for each subject, we got 24 images
under different lighting conditions.
For each given number k, k classes were randomly selected from the face database. This
process was repeated 20 times (except for k = 68) and the average performance was computed. For each test (given k classes), two algorithms, i.e. feature selection using variance
and Laplacian score are used to select the features. The K-means was then performed in the
selected feature subspace. Again, the K-means was repeated 10 times with different initializations and the best result in terms of the objective function of K-means was recorded.
4.2.2 Evaluation Metrics
The clustering result is evaluated by comparing the obtained label of each data point with
that provided by the data corpus. Two metrics, the accuracy (AC) and the normalized
mutual information metric (MI) are used to measure the clustering performance [6]. Given
a data point xi , let ri and si be the obtained cluster label and the label provided by the data
corpus, respectively. The AC is defined as follows:
AC = \frac{\sum_{i=1}^n \delta(s_i, map(r_i))}{n}    (6)
where n is the total number of data points and \delta(x, y) is the delta function that equals one
if x = y and equals zero otherwise, and map(ri ) is the permutation mapping function that
maps each cluster label r_i to the equivalent label from the data corpus. The best mapping
can be found by using the Kuhn-Munkres algorithm [5].

Figure 2: Clustering performance (accuracy and normalized mutual information) versus the number of selected features, for Laplacian Score and Variance: (a) 5 classes, (b) 10 classes, (c) 30 classes, (d) 68 classes.
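A short sketch of the AC computation (ours; it assumes SciPy, whose linear_sum_assignment routine implements the Kuhn-Munkres method):

import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    classes = np.unique(true_labels)
    clusters = np.unique(cluster_labels)
    # overlap[r, s] = number of points with cluster label r and class label s
    overlap = np.array([[np.sum((cluster_labels == r) & (true_labels == s))
                         for s in classes] for r in clusters])
    rows, cols = linear_sum_assignment(-overlap)   # maximize matched points
    return overlap[rows, cols].sum() / len(true_labels)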
Let C denote the set of clusters obtained from the ground truth and C' the set obtained from our
algorithm. Their mutual information metric MI(C, C') is defined as follows:
MI(C, C') = \sum_{c_i \in C, c'_j \in C'} p(c_i, c'_j) \cdot \log_2 \frac{p(c_i, c'_j)}{p(c_i) \cdot p(c'_j)}    (7)
where p(c_i) and p(c'_j) are the probabilities that a data point arbitrarily selected from the
corpus belongs to the clusters c_i and c'_j, respectively, and p(c_i, c'_j) is the joint probability
that the arbitrarily selected data point belongs to the clusters c_i as well as c'_j at the same
time. In our experiments, we use the normalized mutual information \overline{MI} as follows:
\overline{MI}(C, C') = \frac{MI(C, C')}{\max(H(C), H(C'))}    (8)
where H(C) and H(C') are the entropies of C and C', respectively. It is easy to check
that \overline{MI}(C, C') ranges from 0 to 1. \overline{MI} = 1 if the two sets of clusters are identical, and
\overline{MI} = 0 if the two sets are independent.
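Equations (7) and (8) can likewise be written down directly (an illustrative sketch of ours, assuming NumPy):

import numpy as np

def normalized_mutual_information(a, b):
    n = len(a)
    mi = 0.0
    for ci in np.unique(a):
        for cj in np.unique(b):
            p_ij = np.sum((a == ci) & (b == cj)) / n
            if p_ij > 0:
                p_i, p_j = np.sum(a == ci) / n, np.sum(b == cj) / n
                mi += p_ij * np.log2(p_ij / (p_i * p_j))   # equation (7)

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / n
        return -(p * np.log2(p)).sum()

    return mi / max(entropy(a), entropy(b))                # equation (8)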
4.2.3 Results
We compared Laplacian score with data variance for clustering. Note that, we did not
compare with Fisher score because it is supervised and the label information is not available
in the clustering experiments. Several tests were performed with different numbers of
clusters (k=5, 10, 30, 68). In all the tests, the number of nearest neighbors in our algorithm
is taken to be 5. The experimental results are shown in Figure 2 and Table 1. As can
be seen, in all these cases, our algorithm performs much better than using variance for
feature selection. The clustering performance varies with the number of features. The best
performance is obtained at very low dimensionality (less than 200). This indicates that
feature selection is capable of enhancing clustering performance. In Figure 3, we show the
selected features in the image domain for each test (k=5, 10, 30, 68), using our algorithm,
data variance and Fisher score. The brightness of the pixels indicates their importance.
That is, the brighter the pixel, the more important the feature. As can be seen, Laplacian score
provides better approximation to Fisher score than data variance. Both Laplacian score
(a) Variance
(b) Laplacian Score
(c) Fisher Score
Figure 3: Selected features in the image domain, k = 5, 10, 30, 68. The brightness of the
pixels indicates their importance.
Table 1: Clustering performance comparisons (k is the number of clusters)
Accuracy
k   Method           20     50     100    200    300    500    1024
5   Laplacian Score  0.727  0.806  0.831  0.849  0.837  0.644  0.479
5   Variance         0.683  0.698  0.602  0.503  0.482  0.464  0.479
10  Laplacian Score  0.685  0.743  0.787  0.772  0.711  0.585  0.403
10  Variance         0.494  0.500  0.456  0.418  0.392  0.392  0.403
30  Laplacian Score  0.591  0.623  0.671  0.650  0.588  0.485  0.358
30  Variance         0.399  0.393  0.390  0.365  0.346  0.340  0.358
68  Laplacian Score  0.479  0.554  0.587  0.608  0.553  0.465  0.332
68  Variance         0.328  0.362  0.334  0.316  0.311  0.312  0.332

Mutual Information
k   Method           20     50     100    200    300    500    1024
5   Laplacian Score  0.807  0.866  0.861  0.862  0.850  0.652  0.484
5   Variance         0.662  0.697  0.609  0.526  0.495  0.482  0.484
10  Laplacian Score  0.811  0.849  0.865  0.842  0.796  0.705  0.538
10  Variance         0.609  0.632  0.600  0.563  0.538  0.529  0.538
30  Laplacian Score  0.807  0.826  0.849  0.831  0.803  0.735  0.624
30  Variance         0.646  0.649  0.649  0.624  0.611  0.608  0.624
68  Laplacian Score  0.778  0.830  0.833  0.843  0.814  0.760  0.662
68  Variance         0.639  0.686  0.661  0.651  0.642  0.643  0.662
and Fisher score have the brightest pixels in the area of two eyes, nose, mouth, and face
contour. This indicates that even though our algorithm is unsupervised, it can discover the
most discriminative features to some extent.
5 Conclusions
In this paper, we propose a new filter method for feature selection which is independent
of any learning task. It can be performed in either supervised or unsupervised fashion.
The new algorithm is based on the observation that local geometric structure is crucial
for discrimination. Experiments on Iris data set and PIE face data set demonstrate the
effectiveness of our algorithm.
References
[1] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Advances in Neural Information Processing Systems, Vol. 14, 2001.
[2] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number 92, 1997.
[3] X. He and P. Niyogi, "Locality Preserving Projections," Advances in Neural Information Processing Systems, Vol. 16, 2003.
[4] R. Kohavi and G. John, "Wrappers for Feature Subset Selection," Artificial Intelligence, 97(1-2):273-324, 1997.
[5] L. Lovasz and M. Plummer, Matching Theory, Akadémiai Kiadó, North Holland, 1986.
[6] W. Xu, X. Liu and Y. Gong, "Document Clustering Based on Non-negative Matrix Factorization," ACM SIGIR Conference on Information Retrieval, 2003.
world:1 contour:1 commonly:1 preprocessing:2 far:1 ml:1 global:2 corpus:4 xi:6 discriminative:1 search:1 table:2 domain:2 diag:2 did:1 linearly:2 whole:2 n2:2 repeated:2 x1:1 ef2:1 xu:1 representative:2 fashion:3 position:1 candidate:1 theorem:1 specific:2 intrinsic:1 importance:5 ci:8 locality:5 entropy:1 rayleigh:1 simply:2 ordered:1 holland:1 corresponds:1 truth:1 acm:1 identity:3 sorted:2 replace:1 absence:1 fisher:19 typical:1 except:1 principal:1 called:2 total:1 experimental:3 select:2 preparation:1 evaluate:4 d1:8 |
2,103 | 291 | 818
Smotroff
Dataflow Architectures: Flexible Platforms for Neural Network Simulation
Ira G. Smotroff
MITRE-Bedford Neural Network Group
The MITRE Corporation
Bedford, MA 01730
ABSTRACT
Dataflow architectures are general computation engines optimized for
the execution of fine-grain parallel algorithms. Neural networks can be
simulated on these systems with certain advantages. In this paper, we
review dataflow architectures, examine neural network simulation
performance on a new generation dataflow machine, compare that
performance to other simulation alternatives, and discuss the benefits
and drawbacks of the dataflow approach.
1 DATAFLOW ARCHITECTURES
Dataflow research has been conducted at MIT (Arvind & Culler, 1986) and elsewhere
(Hiraki, et al., 1987) for a number of years. Dataflow architectures are general
computation engines that treat each instruction of a program as a separate task which is
scheduled in an asynchronous, data-driven fashion. Dataflow programs are compiled into
graphs which explicitly describe the data dependencies of the computation. These graphs
are directly executed by the machine. Computations which are not linked by a path in the
graphs can be executed in parallel. Each machine has a large number of processing
elements with hardware that is optimized to reduce task switching overhead to a
minimum. As each computation executes and produces a result, it causes all of the
following computations that require the result to be scheduled. In this manner, fine grain
parallel computation is achieved, with the limit on the amount of possible parallelism
determined by the problem and the number of processing elements in the machine.
Figure 1: XOR network and its dataflow graph.
1.1 NEURAL NETWORKS & DATAFLOW
The most powerful hardware platforms for neural network simulation were enumerated
in the DARPA Neural Network Study (Lincoln Laboratory, 1988): Supercomputers offer
programming in sequential languages at great cost. Systolic Arrays such as the CMU
WARP (Pomerleau, 1988) and "Massively" Parallel machines such as the Connection
Machine (Hillis, 1987), offer power at increasingly reasonable costs, but require
specialized low-level programming to map the algorithm to the hardware. Specialized
VLSI and Optical devices (Alspector, 1989) (Farhat, 1987) (Rudnick & Hammerstrom,
1989) offer fast implementations of fixed algorithms 1.
Although dataflow architectures were not included on the DARPA list, there are good
reasons for using them for neural network simulation. First, there is a natural mapping
between neural networks and the dataflow graphs used to encode dataflow programs (see
Figure 1). By expressing a neural network simulation as a dataflow program, one gains
the data synchronization and the parallel execution efficiencies that the dataflow
architecture provides at an appropriate fine grain of abstraction. The close mapping may
allow simple compilation of neural network specifications into executable programs.
Second, this ease of programming makes the approach extremely flexible, so one can get
good performance on a new algorithm the first time it is run, without having to spend
additional time determining the best way to map it onto the hardware. Thus dataflow
simulations may be particularly appropriate for those who develop new learning
algorithms or architectures. Third, high level languages are being developed for dataflow
machines, providing environments in which neural nets can be combined with standard
calculations; this can't be done with much of the specialized neural network hardware.
Last, there may be ways to optimize dataflow architectures for neural network simulation.
1 Hammerstrom's device (Rudnick & Hammerstrom, 1989) may be micro-programmable.
Figure 2: Schematic of a tagged-token dataflow processor (tokens from the network pass through wait-match, instruction fetch, ALU, form-tag and form-token stages, with access to structure memory, before results return to the network).
2 TAGGED-TOKEN DATAFLOW
The Tagged-token dataflow approach represents each computation product as a token
which is passed to following computations. A schematic view of a tagged-token
processor is shown in Figure 2. Execution proceeds in a Wait-Match-Store cycle which
achieves data synchronization. An instruction to be executed waits in the wait-match
queue for a token with its operand. If a match occurs, the incoming token contains its
operand and one of two things happens: for a monadic operation, the instruction is
executed and the result is passed on; for a dyadic operation, a check is made to see if the
operand is the first or the second one to arrive. If it's the first, the location representing
the instruction is tagged, the operand is stored, and the instruction continues to wait. If
it's the second (Le. the instruction is tagged already) the instruction is executed and a
token containing the result is sent to all computations requiring the result. A schematic
view of the execution of the XOR network of Figure 1 on a tagged-token dataflow
machine is illustrated in Figure 3.
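As a toy model of the Wait-Match-Store cycle for dyadic operations (our own sketch in Python; the names and the in-memory dictionary are illustrative, not MIT's implementation):

waiting = {}   # instruction tag -> first-arrived operand

def receive_token(tag, value, execute):
    """Wait-Match-Store for a dyadic instruction identified by `tag`."""
    if tag not in waiting:
        waiting[tag] = value        # first operand: store it, keep waiting
    else:
        left = waiting.pop(tag)     # second operand: a match occurred
        execute(left, value)        # fire the instruction; result tokens follow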
2.1 SPLIT-PHASE TRANSACTIONS
In fine-grain parallel computations distributed over a number of physical devices, the
large number of network transactions represent a potential bottleneck. The tagged-token
dataflow architecture mitigates this problem in a way that enhances the overall parallel
execution time. Each network transaction is split into two phases. A process requests an
external data value and then goes to sleep. When the token bearing the requested value
returns, the process is awakened and the computation proceeds. In standard approaches, a
processor must idle while it waits for a result. This non-blocking approach allows other
computations to proceed while the value is in transit, thus masking memory and network
latencies. Independent threads of computation may be interwoven at each cycle, thus
allowing the maximum amount of parallel execution at each cycle. As long as the amount
of parallelism in the task (i.e. the length of each processor's task queue) is larger than the
network latency, the processors never idle. Consequently, massively parallel applications
such as neural simulations benefit most from the split-phase transaction approach.
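The split-phase idea can be mimicked with coroutines (an illustrative sketch only; Python's asyncio event loop stands in for the token network, and the latency constant is arbitrary):

import asyncio

memory = {0: 21}

async def remote_fetch(address):
    await asyncio.sleep(0.01)             # in flight: other work may be interleaved
    return memory[address]

async def worker(address):
    value = await remote_fetch(address)   # phase 1: issue request, then sleep
    return value * 2                      # phase 2: resume when the token returns

print(asyncio.run(worker(0)))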
3 NEURAL NETWORK DATAFLOW SIMULATION
To illustrate neural network execution on a dataflow processor, the XOR network in
Figure 1 was coded in the dataflow language ID (Nikhil, 1988) and run on the MIT GITA
(Graph Interpreter for Tagged-token Architecture) simulator (Nikhil, 1988). Figures 4-6
are ALU operations profiles with the vertical axis representing the number of processors
that could be simultaneously kept busy (i.e. the amount of parallelism in the task at a
particular instance) and the horizontal axis representing elapsed computation cycles. In
addition, Figures 4 & 5 are ideal simulations with communication latency of zero time
and an infinite number of processors available at all times. The ideal profile width
represents the absolute minimum time in which the dataflow calculation could possibly
be performed, and is termed the critical path. Figure 4 shows the execution profile for a
single linear threshold neuron processing its two inputs. The initial peak activity of eleven
corresponds to initialization activities, with later peaks corresponding to actual
computation steps. The complexity of the profile may be attributed to various dataflow
synchronization mechanisms. In figure 5, the ideal execution profile for the XOR net,
note the initialization peak similar to the one appearing in the single neuron profile; the
peak parallelism of fifty-five corresponds to all five neuron initializations occurring
simultaneously. This illustrates the ability of the dataflow approach to automatically
expose the inherent parallelism in the overall computation. Note also that the critical path
of one hundred fifty one is substantially less than five times the single neuron critical path
of eighty-five. Wherever possible, the dataflow approach has performed computation in
parallel, and the lengthening of the critical path can be attributed to those computations
which had to be delayed until prior computations became available.
Figure 6 represents the execution of the same XOR net under more realistic conditions in
which each token operation is subject to a finite network delay. The regular spacing of the
profile corresponds to the effect of the network delays. The interesting thing to observe is
that the overall critical path length has only increased slightly to one hundred seventy
because the average amount of parallelism available as tokens come in from the net is
higher. Dataflow's ability to interleave computations thus compensates for much of the
network latency effects.
Figure 4: Ideal parallelism profile for dataflow execution - single threshold neuron unit.
Figure 3: Execution of the XOR network of Figure 1 on a tagged-token
dataflow processor. The black dots represent active tokens, the white dots
represent waiting tokens, and the shaded boxes represent enabled operations
executing.
Figure 5: Ideal parallelism profile for dataflow execution of XOR network.
Figure 6: Parallelism profile for dataflow execution of XOR with constant
communication latency.
3.1 COST OF THE DATAFLOW APPROACH
The Tagged-Token Dataflow machine executing an ID program performs two to three
times as many instructions as an IBM 370 executing an equivalent FORTRAN program.
The overhead in dataflow programs is attributable to mechanisms which manage the
asynchronous parallel execution. Similar overhead would probably exist in specialized
neural network simulators written for dataflow machines. However, this overhead can be
justified because the maximum amount of parallelism in the computation is exposed in a
straightforward manner, which requires no additional programming effort. On
conventional multiprocessors, parallelism must be selectively tailored for each problem.
As the amount of parallelism increases, the associated costs increase as well; often they
will eventually surpass the cost of dataflow (Arvind, Culler & Ekanadham, 1988). Thus
the parallel performance on the dataflow machine will often surpass that of alternative
platforms despite the overhead.
4 THE MONSOON ARCHITECTURE
Early dataflow implementations using a Tagged Token approach had a number of
practical barriers (Papadoupoulos, 1988). While useful results were achieved, the cost and
expansion limits of the associative memory used for token matching made them
impractical. However, the systems did prove the utility of the Tagged Token approach.
Recently, the MONSOON architecture (Papadoupoulos, 1988) was developed to remedy
the problems encountered with Tagged Token architectures. The token-matching problem
has been solved by treating each token descriptor as an address in a global memory space
which is partitioned among the processors in the system; matching becomes a simple
RAM operation.
An initial MONSOON prototype has been constructed and an 8 processor machine is
scheduled to be built in 1990. Processor elements for that machine are CMOS gate-array
implementations being fabricated by Motorola. Each processor board will have a 100 ns
cycle time and process at a rate of 7-8 MIPS / 2-4 MFLOPS. Total memory for the 8
processor machine is 256 MBytes. Interconnect is provided by a 100 MByte/s packet
switch network. The throughput of the 8 processor machine is estimated at 56-64 MIPS /
16-32 MFLOPs. This translates to 2-3 million connections per second per processor and
16-24 million connections per second for the machine. Monsoon performance is in the
supercomputer class while the projected Monsoon cost is significantly less due to the use
of standard process technologies.
A 256 processor machine with CMOS VLSI processors is envisioned. Estimated
performance is 40 MIPS per processor and 10,240 MIPS for the machine. Aggregate
neural simulation performance is estimated at 2.5-3.8 billion connections per second,
assuming an interconnect network of suitable performance.
5 CONCLUSIONS
i) Dataflow architectures should be cost effective and flexible platforms for neural network simulation if they become widely available.
ii) As general architectures, their performance will not exceed that of specialized neural network architectures.
iii) Maximum parallelism is attained simply by using the dataflow approach: no machine or problem-specific tuning is needed. Thus dataflow is seen as an excellent tool for empirical simulation. Excellent performance may be obtained on cost effective hardware, with no special effort required for performance improvement.
iv) Dataflow architectures optimized for neural network simulation performance may be possible.
References
Alspector, J., Gupta, B. and Allen, R. B. (1989) Performance of a Stochastic Learning
Microchip. In D. S. Touretzky (ed.). Advances in Neural Information Processing Systems
1, 748-760. San Mateo, CA: Morgan Kaufmann.
Arvind and Culler, D. E. (1986) Dataflow Architectures, MIT Technical Report
MIT/LCS/TM-294, Cambridge, MA.
Arvind, Culler, D. E., Ekanadham, K. (1988) The Price of Asynchronous Parallelism: An
Analysis of Dataflow Architectures. MIT Laboratory for Computer Science, Computation
Structures Group Memo 278.
DARPA Neural Network Study (1988) Lincoln Laboratory, MIT, Lexington, MA.
Farhat, N. H., and Shai, Z. Y. (1987) Architectures and Methodologies for Self-Organization and Stochastic Learning in Opto-Electronic Analogs of Neural Nets. In
Proceedings of IEEE First International Conference on Neural Networks, III:565-576.
Hillis, W. D.(1986) The Connection Machine, Cambridge, MA: The MIT Press.
Hiraki, K., Sekiguchi, S. and Shimada, T. (1987) System Architecture of a Dataflow
Supercomputer. Technical Report, Computer Systems Division, Electrotechnical
Laboratory, 1-1-4 Umezono, Sakura-mura, Niihari-gun, Ibaraki, 305, Japan.
Nikhil, R. S. (1988) Id World Reference Manual, Computational Structures Group, MIT
Laboratory for Computer Science, Cambridge, MA.
Pomerleau, D. A., Gusciora, G. L., Touretzky, D. S., and Kung, H. T. (1988) Neural
Simulation at Warp Speed: How we got 17 Million Connections per Second. In
Proceedings of the IEEE International Conference on Neural Networks, II: 143-150, San
Diego.
Papadoupoulos, G. M. (1988) Implementation of a General Purpose Dataflow
Multiprocessor, PhD thesis, MIT Department of Electrical Engineering and Computer
Science, Cambridge, MA.
Rudnick, M. and Hammerstrom, D. (1989) An Interconnection Structure for Wafer Scale
Neurocomputers. In Proceedings of the 1988 Connectionist Models Summer School. San
Mateo, CA: Morgan Kaufmann.
PART X:
HISTORY OF NEURAL NETWORKS
2,104 | 2,910 | Policy-Gradient Methods for Planning
Douglas Aberdeen
Statistical Machine Learning, National ICT Australia, Canberra
[email protected]
Abstract
Probabilistic temporal planning attempts to find good policies for acting
in domains with concurrent durative tasks, multiple uncertain outcomes,
and limited resources. These domains are typically modelled as Markov
decision problems and solved using dynamic programming methods.
This paper demonstrates the application of reinforcement learning, in
the form of a policy-gradient method, to these domains. Our emphasis
is large domains that are infeasible for dynamic programming. Our approach is to construct simple policies, or agents, for each planning task.
The result is a general probabilistic temporal planner, named the Factored
Policy-Gradient Planner (FPG-Planner), which can handle hundreds of
tasks, optimising for probability of success, duration, and resource use.
1 Introduction
To date, only a few planning tools have attempted to handle general probabilistic temporal
planning problems. These tools have only been able to produce good policies for relatively
trivial examples. We apply policy-gradient reinforcement learning (RL) to these domains
with the goal of creating tools that produce good policies in real-world domains rather than
perfect policies in toy domains. We achieve this by: (1) factoring the policy into simple
independent policies for starting each task; (2) presenting each policy with critical observations instead of the entire state; (3) using function approximators for each policy; (4) using
local optimisation methods instead of global optimisation; and (5) using algorithms with
memory requirements that are independent of the state space size.
Policy gradient methods do not enumerate states and are applicable to multi-agent settings
with function approximation [1, 2], thus they are a natural match for our approach to handling large planning problems. We use the GPOMDP algorithm [3] to estimate the gradient
of a long-term average reward of the planner's performance, with respect to the parameters
of each task policy. We show that maximising a simple reward function naturally minimises
plan durations and maximises the probability of reaching the plan goal.
A frequent criticism of policy-gradient methods compared to traditional forward chaining
planners, or even compared to value-based RL methods, is the lack of a clearly interpretable policy. A minor contribution of this paper is a description of how policy-gradient
methods can be used to prune a decision tree over possible policies. After training, the
decision tree can be translated into a list of policy rules.
Previous probabilistic temporal planners include CPTP [4], Prottle [5], Tempastic [6] and a
military operations planner [7]. Most of these algorithms use some form of dynamic programming (either RTDP [8] or AO*) to associate values with each state/action pair. However,
this requires values to be stored for each encountered state. Even though these algorithms
do not enumerate the entire state space their ability to scale is limited by memory size. Even
problems with only tens of tasks can produce millions of relevant states. CPTP, Prottle, and
Tempastic minimise either plan duration or failure probability, not both. The FPG-Planner
minimises both of these metrics and can easily optimise over resources too.
2 Probabilistic temporal planning
Tasks are the basic planning unit corresponding to grounded1 durative actions. Tasks have
the effect of setting condition variables to true or false. Each task has a set of preconditions,
effects, resource requirements, and a fixed probability of failure. Durations may be fixed or
dependent on how long it takes for other conditions to be established. A task is eligible to
begin when its preconditions are satisfied and sufficient resources are available. A starting
task may have some immediate effects. As tasks end a set of effects appropriate to the
outcome are applied. Typically, but not necessarily, succeeding tasks set some facts to
true, while failing tasks do nothing or negate facts. Resources are occupied during task
execution and consumed when the task ends. Different outcomes can consume varying
levels of resources. The planning goal is to set a subset of the conditions to a desired value.
The closest work to that presented here is described by Peshkin et al. [1] which describes
how a policy-gradient approach can be applied to multi-agent MDPs. This work lays the
foundation for this application, but does not consider the planning domain specifically. It
is also applied to relatively small domains, where the state space could be enumerated.
Actions in temporal planning consist of launching multiple tasks concurrently. The number
of candidate actions available in a given state is the power set of the tasks that are eligible
to start. That is, with N eligible tasks there are 2^N possible actions. Current planners
explore this action space systematically, pruning actions that lead to low rewards. When
combined with probabilistic outcomes the state space explosion cripples existing planners
for tens of tasks and actions. A key reason we treat each task as an individual policy agent is to
deal with this explosion of the action space. We replace the single agent choosing from the
power-set of eligible tasks with a single simple agent for each task. The policy learnt by
each agent is whether to start its associated task given its observation, independent of the
decisions made by the other agents. This idea alone does not simplify the problem. Indeed,
if the agents received perfect state information they could learn to predict the decision of
the other agents and still act optimally. The significant reduction in complexity arises from:
(1) restricting the class of functions that represent agents, (2) providing only partial state
information, (3) optimising locally, using gradient ascent.
3 POMDP formulation of planning
Our intention is to deliberately use simple agents that only consider partial state information. This requires us to explicitly consider partial observability. A finite partially observable Markov decision process consists of: a finite set of states s ∈ S; a finite set of actions
a ∈ A; probabilities Pr[s′|s, a] of making state transition s → s′ under action a; a reward
for each state r(s): S → R; and a finite set of observation vectors o ∈ O seen by the agent
in place of the complete state descriptions. For this application, observations are drawn
deterministically given the state, but more generally may be stochastic. Goal states are
states where all the goal state variables are satisfied. From failure states it is impossible
to reach a goal state, usually because time or resources have run out. These two classes
of state are combined to form the set of reset states that produce an immediate reset to the
1
Grounded means that tasks do not have parameters that can be instantiated.
initial state s0 . A single trajectory through the state space consists of many individual trials
that automatically reset to s0 each time a goal state or failure state is reached.
Policies are stochastic, mapping observation vectors o to a probability over actions. Let N
be the number of basic tasks available to the planner. In our setting an action a is a binary
vector of length N. An entry of 1 at index n means "Yes" begin task n, and a 0 entry means
"No" do not start task n. The probability of actions is Pr[a|o, θ], where conditioning on θ
reflects the fact that the policy is controlled by a set of real-valued parameters θ ∈ R^p. This
paper assumes that all stochastic policies (i.e., any values for θ) reach reset states in finite
time when executed from s0 . This is enforced by limiting the maximum duration of a plan.
This ensures that the underlying MDP is ergodic, a necessary condition for GPOMDP. The
GPOMDP algorithm maximises the long-term average reward
\eta(\theta) = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} r(s_t).
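A minimal sketch of one task's policy agent (ours, assuming NumPy; we take a logistic form for Pr[a_n | o, θ_n], which is one simple choice of function approximator, not necessarily the planner's exact one):

import numpy as np

def act_and_grad(theta_n, obs, rng):
    """Sample Yes/No for task n; return the action and grad of log Pr[a|o, theta_n]."""
    p_yes = 1.0 / (1.0 + np.exp(-theta_n @ obs))
    a = rng.random() < p_yes                        # True means "Yes", start the task
    grad = ((1.0 - p_yes) if a else -p_yes) * obs   # d/dtheta of the log-likelihood
    return a, grad

Terms of this form are what the eligibility trace in the gradient estimator of Section 4 accumulates.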
In the context of planning, the instantaneous reward provides the agent with a measure of
progress toward the goal. A simple reward scheme is to set r(s) = 1 for all states s that
represent the goal state, and 0 for all other states. To maximise η(θ), successful planning
outcomes must be reached as frequently as possible. This has the desired property of
simultaneously minimising plan duration, as well as maximising the probability of reaching
the goal (failure states achieve no reward). It is tempting to provide a negative reward for
failure states, but this can introduce poor local maxima in the form of policies that avoid
negative rewards by avoiding progress altogether. We provide a reward of 1000 each time
the goal is achieved, plus an admissible heuristic reward for progress toward the goal. This
additional shaping reward provides a reward of 1 for every goal state variable achieved, and
-1 for every goal variable that becomes unset. Policies that are optimal with the additional
shaping reward are still optimal under the basic goal state reward [9].
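The reward scheme above can be read as the following small sketch (ours, not the paper's code; the dictionary-based state representation is an assumption made for illustration):

def shaped_reward(prev_goals, goals, reached_goal_state):
    """1000 on reaching the goal, +1 per goal variable newly achieved,
    -1 per goal variable that becomes unset (admissible shaping reward)."""
    r = 1000.0 if reached_goal_state else 0.0
    for var in goals:
        if goals[var] and not prev_goals[var]:
            r += 1.0
        elif prev_goals[var] and not goals[var]:
            r -= 1.0
    return r

# one goal variable becomes satisfied and the goal state is reached
print(shaped_reward({"island_secured": False}, {"island_secured": True}, True))  # 1001.0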
3.1
Planning state space
For probabilistic temporal planning our state description contains [7]: the state's absolute
time, a queue of impending events, the status of each task, the truth value of each condition,
and the available resources. In a particular state, only a subset of the tasks will satisfy all
preconditions for execution. We call these tasks eligible. When a decision to start a
fixed-duration task is made, an end-task event is added to a time-ordered event queue. The
event queue holds a list of events that the planner is committed to, although the outcome of
those events may be uncertain.
The generation of successor states is shown in Alg. 1. The algorithm begins by starting the
tasks given by the current action, implementing any immediate effects. An end-task event
is added at an appropriate time in the queue. The state update then proceeds to process
events until there is at least one task that is eligible to begin. Events have probabilistic
outcomes. Line 20 of Alg. 1 samples one possible outcome from the distribution imposed
by probabilities in the problem definition. Future states are only generated at points where
tasks can be started. Thus, if an event outcome is processed and no tasks are enabled, the
search recurses to the next event in the queue.
4
Factored Policy-Gradient
We assume the presence of policy agents, parameterised with independent sets of parameters for each agent, θ = {θ_1, . . . , θ_N}. We seek to adjust the parameters of the policy to
maximise the long-term average reward η(θ). The GPOMDP algorithm [3] estimates the
gradient ∇η(θ) of the long-term average reward with respect to the current set of policy parameters.
Alg. 1: findSuccessor(State s, Action a)
1: for each a_n = "Yes" in a do
2:   s.beginTask(n)
3:   s.addEvent(n, s.time + taskDuration(n))
4: end for
5: repeat
6:   if s.time > maximum makespan then
7:     s.failureLeaf = true
8:     return
9:   end if
10:  if s.operationGoalsMet() then
11:    s.goalLeaf = true
12:    return
13:  end if
14:  if ¬s.anyEligibleTasks() then
15:    s.failureLeaf = true
16:    return
17:  end if
18:  event = s.nextEvent()
19:  s.time = event.time
20:  sample outcome from event
21:  s.implementEffects(outcome)
22: until s.anyEligibleTasks()
Alg. 2: Gradient Estimator
1: Set s_0 to initial state, t = 0, e_t = [0]
2: while t < T do
3:   e_t = β e_{t−1}
4:   Generate observation o_t of s_t
5:   for each eligible task n do
6:     Sample a_tn = Yes or a_tn = No
7:     e_t = e_t + ∇ log Pr[a_tn | o, θ_n]
8:   end for
9:   Try action a_t = {a_t1, a_t2, . . . , a_tN}
10:  while mutex prohibits a_t do
11:    randomly disable task in a_t
12:  end while
13:  s_{t+1} = findSuccessor(s_t, a_t)
14:  ∇̂_t η(θ) = ∇̂_{t−1} η(θ) + (1/(t+1)) · (r(s_{t+1}) e_t − ∇̂_{t−1} η(θ))
15:  t ← t + 1
16: end while
17: Return ∇̂_T η(θ)

Once an estimate ∇̂η(θ) is computed over T simulation steps, we maximise the long-term average reward with the gradient ascent step θ ← θ + α ∇̂η(θ), where α is a small step size. The experiments in this paper use a line search to determine good values of α.
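As a compact restatement of Alg. 2, the sketch below (ours, not the planner's code; the env and agent interfaces are placeholder assumptions) shows the eligibility trace, the per-agent gradient accumulation, and the running-average update of Step 14:

import numpy as np

def gpomdp_gradient(env, agents, T, beta):
    """Estimate grad eta(theta) from one long trajectory, as in Alg. 2.
    agents[n].sample(obs) -> (decision, grad_log_prob), where grad_log_prob
    is a length-p vector that is zero outside agent n's parameter block."""
    p = sum(agent.num_params for agent in agents)
    e = np.zeros(p)            # eligibility trace e_t
    grad = np.zeros(p)         # running estimate of grad eta(theta)
    s = env.reset()
    for t in range(T):
        obs = env.observe(s)
        e *= beta              # discount the trace
        action = {}
        for n, agent in enumerate(agents):
            if env.eligible(s, n):
                action[n], g = agent.sample(obs)
                e += g
        s, r = env.step(s, action)          # findSuccessor plus reward r(s_{t+1})
        grad += (r * e - grad) / (t + 1.0)  # Step 14: running-average update
    return grad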
We do not guarantee that the best representable policy is found, but our experiments have
produced policies comparable to global methods like real-time dynamic programming [8].
The algorithm works by sampling a single long trajectory through the state space (Fig. 4):
(1) the first state represents time 0 in the plan; (2) the agents all receive the vector observation o_t of the current state s_t; (3) each agent representing an eligible task emits a probability
of starting; (4) each agent samples start or do not start and issues it as a planning action;
(5) the state transition is sampled with Alg. 1; (6) the agents receive the global reward for
the new state and update their gradient estimates. Steps 1 to 6 are repeated T times.
Each vector action a_t is a combination of independent "Yes" or "No" choices made by each
eligible agent. Each agent is parameterised by an independent set of parameters that make
up θ ∈ R^p: θ_1, θ_2, . . . , θ_N. If a_tn represents the binary decision made by agent n at time t
about whether to start its corresponding task, then the policy factors into

Pr[a_t | o_t, θ] = Pr[a_t1, . . . , a_tN | o_t, θ_1, . . . , θ_N]
                = Pr[a_t1 | o_t, θ_1] · . . . · Pr[a_tN | o_t, θ_N].
It is not necessary for all agents to receive the same observation, and it may be advantageous
to show different agents different parts of the state, leading to a decentralised planning
algorithm. Similar approaches are adopted by Peshkin et al. [1] and Tao et al. [2], using policy-gradient methods to train multi-agent systems. The main requirement for each policy-agent
is that log Pr[a_tn | o_t, θ_n] be differentiable with respect to the parameters for each choice
a_tn = "Yes" or "No". We now describe two such agents.
4.1
Linear approximator agents
One representation of agents is a linear network mapped into probabilities using a logistic
regression function:
[Fig. 3: Decision tree agent. Fig. 4: (Left) Individual task-policies make independent decisions: each eligible task agent maps the observation o_t of the current state (time, conditions, eligible tasks, task status, resources, event queue) to a start probability, and the joint action a_t is passed to findSuccessor(s_t, a_t) to produce the next state.]

Pr[a_tn = Yes | o_t, θ_n] = exp(o_t^T θ_n) / (exp(o_t^T θ_n) + 1)    (1)
If the dimension of the observation vector is |o| then each set of parameters θ_n can be
thought of as an |o| vector that represents the approximator weights for task n. The log
derivatives, necessary for Alg. 2, are given in [10]. Initially, the parameters are set to small
random values: a near uniform random policy. This encourages exploration of the action
space. Each gradient step typically moves the parameters closer to a deterministic policy.
After some experimentation we chose an observation vector that is a binary description of
the eligible tasks and the state variable truth values plus a constant 1 bit to provide bias to
the agents' linear networks.
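A minimal numpy version of such an agent (our sketch; the class interface is an assumption made for illustration) evaluates equation (1), samples a Yes/No decision, and returns the log-probability gradient needed for the eligibility trace:

import numpy as np

class LinearTaskAgent:
    """Logistic policy over a binary observation vector, as in equation (1)."""
    def __init__(self, obs_dim, scale=0.01, seed=0):
        self.rng = np.random.default_rng(seed)
        self.theta = scale * self.rng.standard_normal(obs_dim)  # near-uniform policy
        self.num_params = obs_dim

    def start_prob(self, obs):
        z = np.exp(obs @ self.theta)
        return z / (z + 1.0)                 # Pr[a_tn = Yes | o_t, theta_n]

    def sample(self, obs):
        p = self.start_prob(obs)
        start = self.rng.random() < p
        # gradient of log Pr: (1 - p) * o for "Yes", -p * o for "No"
        return start, ((1.0 - p) * obs if start else -p * obs)

agent = LinearTaskAgent(obs_dim=4)
obs = np.array([1.0, 0.0, 1.0, 1.0])         # task/condition bits plus a bias bit
print(agent.sample(obs))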
4.2
Decision tree agents
Often we have a selection of potential control rules. A decision tree can represent all
such control rules at the leaves. The nodes are additional parameterised or hardwired rules
that select between different branches, and therefore different control rules. An action
a is selected by starting at the root node and following a path down the tree, visiting a
set of decision nodes D. At each node we either apply a hard-coded branch selection
rule, or sample a stochastic branch rule from the probability distribution invoked by the
parameterisation. Assuming the independence of decisions at each node, the probability of
reaching an action leaf l equals the product of branch probabilities at each decision node
Pr[a = l | o, θ] = Π_{d∈D} Pr[d′ | o, θ_d],    (2)

where d represents the current decision node, and d′ represents the next node visited in the
tree. The final next node d′ is the leaf l. The probability of a branch followed as a result
of a hard-coded rule is 1. The individual Pr(d′ | o, θ_d) functions can be any differentiable
function of the observation vector o.
For multi-agent domains, such as our formulation of planning, we have a decision tree for
each task agent. We use the same initial tree (with different parameters) for each agent,
shown in Fig. 3. Nodes A, D, F, H represent hard-coded rules that switch with probability
one between the Yes and No branches based on a boolean observation that gives the truth
of the statement in the node for the current state. Nodes B, C, E, G are parameterised so
that they select branches stochastically. For this application, the probability of choosing
the Yes or No branches is a single parameter logistic function that is independent of the
observations. Parameter adjustments have the simple effect of pruning parts of the tree that
represent poor policies, leaving the hard-coded rules to choose the best action given the
observation. The policy encoded by the parameter is written in the node label. For example,
for task agent n and decision node C ("task duration matters?"), we have the probability

Pr(Yes | o, θ_{n,C}) = Pr(Yes | θ_{n,C}) = exp(θ_{n,C}) / (exp(θ_{n,C}) + 1).
The log gradient of this function is given in [10]. If we set the parameters to always select the
dashed branch in Fig. 3 we would be following the policy: if the task IS eligible, and the
probability of this task's success does NOT matter, and the duration of this task DOES matter,
and this task IS fast, then start, otherwise do not start. Apart from making it easy to interpret
the optimised decision tree as a set of (possibly stochastic) if-then rules, we can also
encode highly expressive policies with only a few parameters.
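The sketch below (ours; the dictionary tree encoding is invented for illustration) walks such a tree, multiplying branch probabilities as in equation (2); hard-coded nodes contribute a factor of 1:

import math
import random

def branch_prob(theta):
    """Single-parameter logistic gate, independent of the observation."""
    return math.exp(theta) / (math.exp(theta) + 1.0)

def sample_leaf(node, obs, thetas, path_prob=1.0):
    """Follow a path to an action leaf; returns (leaf, product of branch probs)."""
    if node.get("leaf") is not None:
        return node["leaf"], path_prob
    if node["hard"]:                                   # probability-1 branch
        child = node["yes"] if obs[node["obs_bit"]] else node["no"]
        return sample_leaf(child, obs, thetas, path_prob)
    p = branch_prob(thetas[node["name"]])              # stochastic branch
    if random.random() < p:
        return sample_leaf(node["yes"], obs, thetas, path_prob * p)
    return sample_leaf(node["no"], obs, thetas, path_prob * (1.0 - p))

start = {"leaf": "start"}
dont = {"leaf": "do not start"}
fast = {"leaf": None, "hard": True, "obs_bit": 0, "yes": start, "no": dont}
node_c = {"leaf": None, "hard": False, "name": "C", "yes": fast, "no": dont}
print(sample_leaf(node_c, obs=[True], thetas={"C": 0.0}))   # ('start', 0.5) half the time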
4.3
GPOMDP for planning
Alg. 2 describes the algorithm for computing ∇̂η(θ), based on GPOMDP [3]. The vector
quantity e_t is an eligibility trace. It has dimension p (the total number of parameters),
and can be thought of as storing the eligibility of each parameter for being reinforced
after receiving a reward. The gradient estimate provably converges to a biased estimate of
∇η(θ) as T → ∞. The quantity β ∈ [0, 1) controls the degree of bias in the estimate.
As β approaches 1, the bias of the estimates drops to 0. However, if β = 1, estimates
exhibit infinite variance in the limit as T → ∞. Thus the parameter β is used to achieve
a bias/variance tradeoff in our stochastic gradient estimates. GPOMDP gradient estimates
have been proven to converge, even under partial observability.
Line 8 computes the log gradient of the sampled action probability and adds the gradient
for the n'th agent's parameters into the eligibility trace. The gradient for parameters not
relating to agent n is 0. We do not compute Pr[a_tn | o_t, θ_n] or gradients for tasks with
unsatisfied preconditions. If all eligible agents decide not to start their tasks, we issue a
null-action. If the state event queue is not empty, we process the next event, otherwise time
is incremented by 1 to ensure all possible policies will eventually reach a reset state.
5
Experiments
5.1
Comparison with previous work
We compare the FPG-Planner with our earlier RTDP-based planner for military
operations [7], which is based on real-time dynamic programming [8]. The domains
come from the Australian Defence Science and Technology Organisation, and represent
military operations planning scenarios. There are two problems, the first with 18 tasks
and 12 conditions, and the second with 41 tasks and 51 conditions. The goal is to set the
"Objective island secured" variable to true. There are multiple interrelated tasks that can
lead to the goal state. Tasks fail or succeed with a known probability and can only execute
once, leading to relatively large probabilities of failure even for optimal plans. See [7] for
details. Unless stated, FPG-Planner experiments used T = 500,000 gradient estimation
steps and β = 0.9. Optimisation time was limited to 20 minutes wall clock time on a single
user 3 GHz Pentium IV with 1 GB RAM. All evaluations are based on 10,000 simulated
executions of finalised policies. Results quote the average duration, resource consumption,
and the percentage of plans that terminate in a failure state.
We repeat the comparison experiments 50 times with different random seeds and report mean and best results in Table 1.
Table 1: Two domains compared with a dynamic programming based planner.

                      RTDP                Factored Linear      Factored Tree
Problem         Dur    Res   Fail%    Dur    Res   Fail%    Dur    Res   Fail%
Assault Ave     171    8.0   26.1     105    8.3   26.6     115    8.3   27.1
Assault Best    113    6.2   24.0     93.1   8.7   23.1     112    8.4   25.6
Webber Ave      245    4.4   58.1     193    4.1   57.9     186    4.1   58.0
Webber Best     217    4.2   57.7     190    4.1   57.0     181    4.1   57.3
Table 2: Effect of different observations.

Observation        Dur   Res   Fail%
Eligible & Conds   105   8.3   26.6
Conds only         112   8.1   28.1
Eligible only      112   8.1   29.6

Table 3: Results for the Art45/25 domain.

Policy      Dur   Res   Fail%
Random      394   206   83.4
Naive       332   231   78.6
Linear      121   67    7.4
Dumb Tree   157   92    19.1
Prob Tree   156   62    10.9
Dur Tree    167   72    17.4
Res Tree    136   53    8.50
The "Best" plan minimises an arbitrarily chosen combined metric of 10 × fail% + dur. FPG-Planning with a linear approximator significantly
shortens the duration of plans, without increasing the failure rate. The very simple decision tree performs less well than the linear approximator, but better than the dynamic
programming algorithm. This is somewhat surprising given the simplicity of the tree for
each task. The shorter duration for the Webber decision tree is probably due to the slightly
higher failure rate. Plans failing early produce shorter durations.
Table 1 assumes that the observation vector o presented to linear agents is a binary description of the eligible tasks and the condition truth values plus a constant 1 bit to provide bias
to the agents' linear networks. Table 2 shows that giving the agents less information in the
observation harms performance.
5.2
Large artificial domains
Each scenario consists of N tasks and C state variables. The goal state of the synthetic
scenarios is to assert 90% of the state variables, chosen during scenario synthesis, to be
true. See [10] for details. All generated problems have scope for choosing tasks instead of
merely scheduling them. All synthetic scenarios are guaranteed to have at least one policy
which will reach the operation goal assuming all tasks succeed. Even a few tens of tasks
and conditions can generate a state space too large for main memory.
We generated 37 problems, each with 40 tasks and 25 conditions (Art40/25). Although the
number of tasks and conditions is similar to the Webber problem described above, these
problems present significantly more choices to the planner, making planning nontrivial. Unlike the initial experiments, all tasks can be repeated as often as necessary, so the
overall probability of failure depends on how well the planner chooses and orders tasks to
avoid running out of time and resources. Our RTDP based planner was not able to perform
any significant optimisation in 20 minutes due to memory problems. Thus, to demonstrate that FPG-Planning is having some effect, we compared the optimised policies to two simple policies.
The random policy starts each eligible task with probability 0.5. The naive policy starts
all eligible tasks. Both of these policies suffer from excessive resource consumption and
negative effects that can cause failure.
Table 3 shows that the linear approximator produces the best plans, but it requires C + 1
parameters per task. The results for the decision tree illustrated in Fig. 3 are given in the
"Prob Tree" row. This tree uses a constant 4 parameters per task, and consequently requires
fewer operations when computing gradients. The "Dumb" row is a decision stub, with one
parameter per task that simply learns whether to start when eligible. The remaining "Dur"
and "Res" Tree rows re-order the nodes in Fig. 3 to swap the nodes C and E respectively
with node B. This tests the sensitivity of the tree to node ordering. There appears to be
significant variation in the results. For example, when node E is swapped with B, the
resultant policies use fewer resources.
We also performed optimisation of a 200 task, 100 condition problem generated using the
same rules as the Art40/25 domain. The naive policy had a failure rate of 72.4%. No
time limit was applied. Linear network agents (20,200 parameters) optimised for 14 hours,
before terminating with small gradients, and resulted in a plan with 20.8% failure rate. The
decision tree agent (800 parameters) optimised for 6 hours before terminating with a 1.7%
failure rate. The smaller number of parameters and a priori policies embedded in the tree,
allow the decision tree to perform well in very large domains. Inspection of the resulting
parameters demonstrated that different tasks pruned different regions of the decision tree.
6
Conclusion
We have demonstrated an algorithm with great potential to produce good policies in real-world domains. Further work will refine our parameterised agents, and validate this approach on realistic larger domains. We also wish to characterise possible local minima.
Acknowledgements
Thank you to Olivier Buffet and Sylvie Thiébaux for many helpful comments. National
ICT Australia is funded by the Australian Government's Backing Australia's Ability program and the Centre of Excellence program. This project was also funded by the Australian
Defence Science and Technology Organisation.
References
[1] L. Peshkin, K.-E. Kim, N. Meuleau, and L. P. Kaelbling. Learning to cooperate via policy search. In UAI, 2000.
[2] Nigel Tao, Jonathan Baxter, and Lex Weaver. A multi-agent, policy-gradient approach to network routing. In Proc. ICML'01. Morgan Kaufmann, 2001.
[3] J. Baxter, P. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. JAIR, 15:351-381, 2001.
[4] Mausam and Daniel S. Weld. Concurrent probabilistic temporal planning. In Proc. International Conference on Automated Planning and Scheduling, Monterey, CA, June 2005. AAAI.
[5] I. Little, D. Aberdeen, and S. Thiébaux. Prottle: A probabilistic temporal planner. In Proc. AAAI'05, 2005.
[6] Hakan L. S. Younes and Reid G. Simmons. Policy generation for continuous-time stochastic domains with concurrency. In Proc. of ICAPS'04, volume 14, 2004.
[7] Douglas Aberdeen, Sylvie Thiébaux, and Lin Zhang. Decision-theoretic military operations planning. In Proc. ICAPS, volume 14, pages 402-411. AAAI, June 2004.
[8] A. G. Barto, S. Bradtke, and S. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72, 1995.
[9] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proc. ICML'99, 1999.
[10] Douglas Aberdeen. The factored policy-gradient planner. Technical report, NICTA, 2005.
2,105 | 2,911 | Location-based Activity Recognition
Lin Liao, Dieter Fox, and Henry Kautz
Computer Science & Engineering
University of Washington
Seattle, WA 98195
Abstract
Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and
label a person's activities and significant places from traces of GPS data.
In contrast to existing techniques, our approach simultaneously detects
and classifies the significant locations of a person and takes the high-level context into account. Our system uses relational Markov networks
to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply
FFT-based message passing to perform efficient summation over large
numbers of nodes in the networks. We present experiments that show
significant improvements over existing techniques.
1
Introduction
The problem of learning patterns of human behavior from sensor data arises in many areas
and applications of computer science, including intelligent environments, surveillance, and
assistive technology for the disabled. A focus of recent interest is the use of data from wearable sensors, and in particular, GPS (global positioning system) location data. Such data is
used to recognize the high-level activities in which a person is engaged and to determine
the relationship between activities and locations that are important to the user [1, 6, 8, 3].
Our goal is to segment the user's day into everyday activities such as "working," "visiting,"
"travel," and to recognize and label significant locations that are associated with one or
more activity, such as "work place," "friend's house," "user's bus stop." Such activity logs
can be used, for instance, for automated diaries or long-term health monitoring. Previous
approaches to location-based activity recognition suffer from design decisions that limit
their accuracy and flexibility:
First, previous work decoupled the subproblem of determining whether or not a geographic
location is significant and should be assigned a label, from that of labeling places and
activities. The first problem was handled by simply assuming that a location is significant
if and only if the user spends at least N minutes there, for some fixed threshold N [1, 6,
8, 3]. Some way of restricting the enormous set of all locations recorded for the user to
a meaningful subset is clearly necessary. However, in practice, any fixed threshold leads
to many errors. Some significant locations, for example, the place where the user drops
off his children at school, may be visited only briefly, and so would be excluded by a high
threshold. A lower threshold, however, would include too many insignificant locations, for
example, a place where the user briefly waited at a traffic light. The inevitable errors cannot
be resolved because information cannot flow from the label assignment process back to the
one that determines the domain to be labeled.
Second, concerns for computational efficiency prevented previous approaches from tackling the problem of activity and place labeling in full generality. [1] does not distinguish
between places and activities; although [8] does, the implementation limited places to a single activity. Neither approach models or labels the user's activities when moving between
places. [6] and [3] learn transportation patterns, but not place labels.
The third problem is one of the underlying causes of the other limitations. The representations and algorithms used in previous work make it difficult to learn and reason with the
kinds of non-local features that are useful in disambiguating human activity. For a simple
example, if a system could learn that a person rarely went to a restaurant more than once a
day, then it could correctly give a low probability to an interpretation of a day?s data under
which the user went to three restaurants. Our previous work [8] used clique templates in
relational Markov networks for concisely expressing global features, but the MCMC inference algorithm we used made it costly to reason with aggregate features, such as statistics
on the number of times a given activity occurs. The ability to efficiently leverage global
features of the data stream could enhance the scope and accuracy of activity recognition.
This paper presents a unified approach to automated activity and place labeling which overcomes these limitations. Contributions of this work include the following:
• We show how to simultaneously solve the tasks of identifying significant locations
and labeling both places and activities from raw GPS data, all in a conditionally
trained relational Markov network. Our approach is notable in that nodes representing significant places are dynamically added to the graph during inference.
No arbitrary thresholds regarding the time spent at a location or the number of
significant places are employed.
• Our model creates a complete interpretation of the log of a user's data, including
transportation activities as well as activities performed at particular places. It
allows different kinds of activities to be performed at the same location.
• We extend our work on using clique templates for global features to support efficient inference by belief propagation. We introduce, in particular, specialized Fast
Fourier Transform (FFT) templates for belief propagation over aggregate (counting) features, which reduce computation time by an exponential amount. Although [9] introduced the use of the FFT to compute probability distributions over
summations, our work appears to be the ?rst to employ it for full bi-directional
belief propagation.
This paper is organized as follows. We begin with a discussion of relational Markov networks and a description of an FFT belief propagation algorithm for aggregate statistical
features. Then we explain how to apply RMNs to the problem of location-based activity
recognition. Finally, we present experimental results on real-world data that demonstrate
significant improvement in coverage and accuracy over previous work.
2
Relational Markov Networks and Aggregate Features
2.1
Preliminaries
Relational Markov Networks (RMNs) [10] are extensions of Conditional Random Fields
(CRFs), which are undirected graphical models that were developed for labeling sequence
data [5]. CRFs have been shown to produce excellent results in areas such as natural
language processing [5] and computer vision [4]. RMNs extend CRFs by providing a
relational language for describing clique structures and enforcing parameter sharing at the
template level. Thereby RMNs provide a very flexible and concise framework for defining
the features we use in our activity recognition context.
A key concept of RMNs is the relational clique template; clique templates specify the structure of a
CRF in a concise way. In a nutshell, a clique template C ∈ C is similar to a database query
(e.g., SQL) in that it selects tuples of nodes from a CRF and connects them into cliques.
Each clique template C is additionally associated with a potential function φ_C(v_C) that
maps values of variables to a non-negative real number. Using a log-linear combination
of feature functions, we get φ_C(v_C) = exp{w_C^T · f_C(v_C)}, where f_C() defines a feature
vector for C and w_C^T is the transpose of the corresponding weight vector.
An RMN defines a conditional distribution p(y|x) over labels y given observations x. To
compute such a conditional distribution, the RMN generates a CRF with the cliques specified by the clique templates. All cliques that originate from the same template must share
the same weight vector w_C. The resulting cliques factorize the conditional distribution as

p(y | x) = (1/Z(x)) Π_{C∈C} Π_{v_C∈C} exp{w_C^T · f_C(v_C)},    (1)
where Z(x) is the normalizing partition function.
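For a toy model, the factorization of equation (1) can be computed by brute force as below (our sketch, not the authors' code; observations x are folded into the feature functions and the label space is binary for brevity):

import numpy as np
from itertools import product

def rmn_conditional(w, cliques, num_nodes, label_space=(0, 1)):
    """Brute-force p(y|x) of equation (1). cliques: list of (node_indices,
    feature_fn) pairs, all sharing the weight vector w of their template."""
    def log_score(y):
        return sum(float(w @ f([y[i] for i in idx])) for idx, f in cliques)
    ys = list(product(label_space, repeat=num_nodes))
    scores = np.array([log_score(y) for y in ys])
    probs = np.exp(scores - scores.max())
    return dict(zip(ys, probs / probs.sum()))        # division implements 1/Z(x)

# two label nodes and one pairwise "agreement" clique
agree = lambda labels: np.array([1.0 if labels[0] == labels[1] else 0.0])
print(rmn_conditional(np.array([1.5]), [((0, 1), agree)], num_nodes=2))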
The weights w of an RMN can be learned discriminatively by maximizing the log-likelihood of labeled training data [10, 8]. This requires running an inference procedure
at each iteration of the optimization and can be very expensive. To overcome this problem,
we instead maximize the pseudo-log-likelihood of the training data:

L(w) = Σ_{i=1}^{n} log p(y_i | MB(y_i), w) − (w^T w) / (2σ²)    (2)

where MB(y_i) is the Markov blanket of variable y_i. The rightmost term avoids overfitting
by imposing a zero-mean, Gaussian shrinkage prior on each component of the weights [10].
In the context of place labeling, [8] showed how to use non-zero mean priors in order to
transfer weights learned for one person to another person. In our experiments, learning the
weights using pseudo-log-likelihood is very efficient and performs well in our tests.
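The pseudo-log-likelihood objective of equation (2) reduces to per-node two-way normalizations; a sketch for binary labels (ours, with the Markov blanket folded into the feature function as an assumption) is:

import numpy as np

def pseudo_log_likelihood(w, labels, feats, sigma2=1.0):
    """Equation (2) for binary labels. feats(i, y) returns the feature vector
    for node i under label y, with the labels of MB(y_i) clamped to their
    observed values, so each term needs only a two-way normalization."""
    ll = 0.0
    for i, y_obs in enumerate(labels):
        scores = np.array([w @ feats(i, 0), w @ feats(i, 1)])
        ll += scores[y_obs] - np.logaddexp(scores[0], scores[1])
    return ll - (w @ w) / (2.0 * sigma2)     # zero-mean Gaussian shrinkage prior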
In our previous work [8] we used MCMC for inference. While this approach performed
well for the models considered in [8], it does not scale to more complex activity models
such as the one described here. Taskar and colleagues [10] relied on belief propagation
(BP) for inference. The BP (sum-product) algorithm converts a CRF to a pairwise representation and performs message passing, where the message from node i to its neighbor j
is computed as
m_ij(y_j) = Σ_{y_i} φ(y_i) ψ(y_i, y_j) Π_{k∈n(i)\j} m_ki(y_i),    (3)

where φ(y_i) is a local potential, ψ(y_i, y_j) is a pairwise potential, and {n(i) \ j} denotes i's
neighbors other than j. All messages are updated iteratively until they (possibly) converge.
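In code, one update of equation (3) for discrete messages is an elementwise product followed by a matrix-vector sum (our sketch; the final normalization is a common numerical convenience, not part of equation (3)):

import numpy as np

def bp_message(phi_i, psi, in_msgs):
    """Message m_ij(y_j) from node i to neighbor j, as in equation (3).
    phi_i: local potential over states of y_i; psi[y_i, y_j]: pairwise
    potential; in_msgs: messages m_ki from i's neighbors other than j."""
    belief = phi_i.copy()
    for m in in_msgs:          # product over k in n(i) \ j
        belief *= m
    msg = psi.T @ belief       # sum over y_i of psi(y_i, y_j) * belief(y_i)
    return msg / msg.sum()     # normalize for numerical stability

phi_i = np.array([0.7, 0.3])
psi = np.array([[0.9, 0.1], [0.1, 0.9]])
print(bp_message(phi_i, psi, [np.array([0.5, 0.5])]))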
However, our model takes into account aggregate features, such as summation. Performing
aggregation would require the generation of cliques that contain all nodes over which the
aggregation is performed. Since the complexity of standard BP is exponential in the number
of nodes in the largest clique, aggregation can easily make BP intractable.
2.2
Efficient summation templates
In our model, we address the inference of aggregate cliques at the template level within
the framework of BP. Each type of aggregation function is associated with a computation
template that specifies how to propagate messages through the clique. In this section, we
discuss an efficient computation template for summation.
To handle summation cliques with potentially large numbers of addends, our summation
template dynamically builds a summation tree, which is a pairwise Markov network as
shown in Fig. 1(a). In a summation tree, the leaves are the original addends and each
[Figure 1: (a) Summation tree that represents y_sum = Σ_{i=1}^{8} y_i, where the S_i's are auxiliary nodes to ensure the summation relation. (b) CRF for labeling activities and places. Each activity node a_i is connected to E observed local evidence nodes e_i^1 to e_i^E. Place nodes p_i are generated based on the inferred activities and each place is connected to all activity nodes that are within a certain distance.]
internal node y_jk represents the sum of its two children y_j and y_k, and this sum relation
is encoded by an auxiliary node S_jk and its potential. The state space of S_jk consists of
the joint (cross-product) state of its neighbors y_j, y_k, and y_jk. It is easy to see that the
summation tree guarantees that the root represents y_sum = Σ_{i=1}^{n} y_i, where y_1 to y_n are
the leaves of the tree. To define the BP protocol for summation trees, we need to specify two
types of messages: an upward message from an auxiliary node to its parent (e.g., m_{S12→y12}),
and a downward message from an auxiliary node to one of its two children (e.g., m_{S12→y1}).
Upward message update: Starting with Equation (3), we can update an upward message
m_{S_ij→y_ij} as follows:

m_{S_ij→y_ij}(y_ij) = Σ_{y_i, y_j} φ_S(y_i, y_j, y_ij) m_{y_i→S_ij}(y_i) m_{y_j→S_ij}(y_j)
                    = Σ_{y_i} m_{y_i→S_ij}(y_i) m_{y_j→S_ij}(y_ij − y_i)    (4)
                    = F^{−1}( F(m_{y_i→S_ij}) · F(m_{y_j→S_ij}) )    (5)

where φ_S(y_i, y_j, y_ij) is the local potential of S_ij encoding the equality y_ij = y_i + y_j. (4)
follows because all terms not satisfying the equality disappear. Therefore, message m_{S_ij→y_ij}
is the convolution of m_{y_i→S_ij} and m_{y_j→S_ij}. (5) follows from the convolution theorem, which
states that the Fourier transform of a convolution is the point-wise product of Fourier transforms [2], where F and F^{−1} represent the Fourier transform and its inverse, respectively.
When the messages are discrete functions, the Fourier transform and its inverse can be
computed efficiently using the Fast Fourier Transform (FFT) [2, 9]. The computational
complexity of one summation using FFT is O(k log k), where k is the maximum number
of states in y_i and y_j.
Downward message update: We also allow messages to pass from sum variables downward to their children. This is necessary if we want to use the belief on sum variables (e.g.,
knowledge on the number of homes) to change the distribution of individual variables (e.g.,
place labels). From Equation (3) we get the downward message m_{S_ij→y_i} as

m_{S_ij→y_i}(y_i) = Σ_{y_j, y_ij} φ_S(y_i, y_j, y_ij) m_{y_j→S_ij}(y_j) m_{y_ij→S_ij}(y_ij)
                  = Σ_{y_j} m_{y_j→S_ij}(y_j) m_{y_ij→S_ij}(y_i + y_j)    (6)
                  = F^{−1}( conj(F(m_{y_j→S_ij})) · F(m_{y_ij→S_ij}) )    (7)

where (6) again follows from the sum relation. Note that the downward message m_{S_ij→y_i}
turns out to be the correlation of messages m_{y_j→S_ij} and m_{y_ij→S_ij}. (7) follows from the correlation theorem [2], which is similar to the convolution theorem except that, for correlation, we
must compute the complex conjugate of the first Fourier transform, denoted conj(F(·)). Again,
for discrete messages, (7) can be evaluated efficiently using FFT.
At each level of a summation tree, the number of messages (nodes) is reduced by half and
the size of each message is doubled. Suppose the tree has n upward messages at the bottom
and the maximum size of a message is k. For large summation trees where n ≫ k, the
total complexity of updating the upward messages at all the log n levels follows now as

Σ_{i=1}^{log n} (n / 2^i) · O( 2^{i−1} k log(2^{i−1} k) ) = O( Σ_{i=1}^{log n} (n/2) log 2^{i−1} ) = O(n log² n)    (8)

Similar reasoning shows that the complexity of the downward pass is O(n log² n) as well.
Therefore, updating all messages in a summation clique takes O(n log² n) instead of time
exponential in n, as would be the case for a non-specialized implementation of aggregation.
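The two FFT-based updates, equations (5) and (7), come out to a few lines of numpy (our sketch, not the authors' code; messages are nonnegative vectors over states 0, ..., k−1, zero-padded so the circular transforms implement the linear convolution and correlation):

import numpy as np

def upward_message(m_yi, m_yj):
    """Equation (5): the message to the sum node is the convolution of the
    two incoming messages, computed with the FFT."""
    n = len(m_yi) + len(m_yj) - 1
    out = np.fft.ifft(np.fft.fft(m_yi, n) * np.fft.fft(m_yj, n)).real
    return np.clip(out, 0.0, None)

def downward_message(m_yj, m_ysum):
    """Equation (7): the message down to one addend is the correlation of
    m_yj and m_ysum; note the conjugated first transform."""
    n = len(m_ysum)
    out = np.fft.ifft(np.conj(np.fft.fft(m_yj, n)) * np.fft.fft(m_ysum, n)).real
    return np.clip(out[: n - len(m_yj) + 1], 0.0, None)

m_yi = np.array([0.2, 0.5, 0.3])       # belief over y_i in {0, 1, 2}
m_yj = np.array([0.6, 0.4])            # belief over y_j in {0, 1}
m_sum = upward_message(m_yi, m_yj)     # belief over y_i + y_j in {0, ..., 3}
print(m_sum, downward_message(m_yj, m_sum))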
3
Location-based Activity Model
3.1
Overview
To recognize activities and places, we first segment raw GPS traces by grouping consecutive GPS readings based on their spatial relationship. This segmentation can be performed
by simply combining all consecutive readings that are within a certain distance from each
other (10m in our implementation). However, it might be desirable to associate GPS traces
to a street map, for example, in order to relate locations to addresses in the map. To jointly
estimate the GPS to street association and trace segmentation, we construct an RMN that
takes into account the spatial relationship and temporal consistency between the measurements and their associations (see [7] for more details). In this section, we focus on inferring
activities and types of significant places after segmentation. To do so, we construct a hierarchical RMN that explicitly encodes the relations between activities and places. A CRF
instantiated from the RMN is shown in Fig. 1(b). At the lower level of the hierarchy, each
activity node is connected to various features, summarizing information resulting from the
GPS segmentation. These features include:
• Temporal information such as time of day, day of week, and duration of the stay;
• Average speed through a segment, for discriminating transportation modes;
• Information extracted from geographic databases, such as whether a location is
close to a bus route or bus stop, and whether it is near a restaurant or store;
• Additionally, each activity node is connected to its neighbors. These features measure compatibility between types of activities at neighboring nodes in the trace.
Our model also aims at determining those places that play a significant role in the activities
of a person, such as home, work place, friend's home, grocery stores, restaurants, and bus
stops. Such significant places comprise the upper level of the CRF shown in Fig. 1(b).
However, since these places are not known a priori, we must additionally detect a person's
significant places. To incorporate place detection into our system, we use an iterative algorithm that re-estimates activities and places. Before we describe this algorithm, let us
first look at the features that are used to determine the types of significant places under the
assumption that the locations and number of these places are known.
• The activities that occur at a place strongly indicate the type of the place. For
example, at a friend's home people either visit or pick up / drop off someone. Our
features consider the frequencies of the different activities at a place. This is done
by generating a clique for each place that contains all activity nodes in its vicinity.
For example, the nodes p_1, a_1, and a_{N−2} in Fig. 1(b) form such a clique.
• A person usually has only a limited number of different homes or work places. We
add two additional summation cliques that count the number of homes and work
places. These counts provide soft constraints that bias the system to generate
interpretations with reasonable numbers of homes and work places.
1. Input: GPS trace ⟨g_1, g_2, . . . , g_T⟩ and iteration counter i := 0
2. ⟨a_1, . . . , a_N⟩, ⟨e_1^1, . . . , e_1^E, . . .⟩ := trace_segmentation(⟨g_1, g_2, . . . , g_T⟩)
3. CRF_0 := instantiate_crf(⟨⟩, ⟨a_1, . . . , a_N⟩, ⟨e_1^1, . . . , e_1^E, . . .⟩)
   // Generate CRF containing activity and local evidence (lower two levels in Fig. 1(b))
4. a*_0 := BP_inference(CRF_0) // infer sequence of activities
5. do
6.   i := i + 1
7.   ⟨p_1, . . . , p_K⟩_i := generate_places(a*_{i−1}) // Instantiate places
8.   CRF_i := instantiate_crf(⟨p_1, . . . , p_K⟩_i, ⟨a_1, . . . , a_N⟩, ⟨e_1^1, . . . , e_1^E, . . .⟩)
9.   ⟨a*_i, p*_i⟩ := BP_inference(CRF_i) // inference in complete CRF
10. until a*_i = a*_{i−1}
11. return ⟨a*_i, p*_i⟩

Table 1: Algorithm for extracting and labeling activities and significant places.
Note that the above two types of aggregation features can generate large cliques in the CRF,
which could make standard inference intractable. In our inference, we use the optimized
summation templates discussed in Section 2.2.
3.2
Place Detection and Labeling Algorithm
Table 1 summarizes our algorithm for efficiently constructing a CRF that jointly estimates
a person's activities and the types of his significant places. The algorithm takes as input
a GPS trace. In Steps 2 and 3, this trace is segmented into activities a_i and their local
evidence e_i^j, which are then used to generate CRF_0 without significant places. BP inference
is first performed in this CRF so as to determine the activity estimate a*_0, which consists
of a sequence of locations and the most likely activity performed at that location (Step
4). Within each iteration of the loop starting at Step 5, such an activity estimate is used
to extract a set of significant places. This is done by classifying individual activities in the
sequence according to whether or not they belong to a significant place. For instance, while
walking, driving a car, or riding a bus are not associated with significant places, working
or getting on or off the bus indicate a significant place. All instances at which a significant
activity occurs generate a place node. Because a place can be visited multiple times within
a sequence, we perform clustering and merge duplicate places into the same place node.
This classification and clustering is performed by the algorithm generate_places() in Step
7. These places are added to the model and BP is performed in this complete CRF. Since a
CRF_i can have a different structure than the previous CRF_{i−1}, it might generate a different
activity sequence. If this is the case, the algorithm returns to Step 5 and re-generates the set
of places using the improved activity sequence. This process is repeated until the activity
sequence does not change. In our experiments we observed that this algorithm converges
very quickly, typically after three or four iterations.
4
Experimental Results
In our experiments, we collected GPS data traces from four different persons, approximately seven days of data per person. The data from each person consisted of roughly
40,000 GPS measurements, resulting in about 10,000 10m segments. We used leave-one-out cross-validation for evaluation. Learning from three persons' data took about one
minute and BP inference on the last person's data converged within one minute.
Extracting significant places
We compare our model with a widely-used approach that uses a time threshold to determine
whether or not a location is significant [1, 6, 8, 3]. We use four different thresholds from
[Figure 2 plots: (a) false positives vs. false negatives for the threshold method (1, 3, 5, and 10 min) and our model; (b) running time [s] vs. number of nodes for naive BP, MCMC, and optimized BP.]
Figure 2: (a) Accuracy of extracting places. (b) Computation times for summation cliques.
                                  Inferred labels
Truth        Work     Sleep  Leisure  Visit  Pickup  On/off car  Other  FN
Work         12 / 11  0      0 / 1    0      0       0           1      0
Sleep        0        21     1        2      0       0           0      0
Leisure      2        0      20 / 17  1 / 4  0       0           3      0
Visiting     0        0      0 / 2    7 / 5  0       0           2      0
Pickup       0        0      0        0      1       0           0      2
On/Off car   0        0      0        0      1       13 / 12     0      2 / 3
Other        0        0      0        0      0       0           37     1
FP           0        0      0        0      2       2           3

Table 2: Activity confusion matrix of cross-validation data with (left values) and without (right
values) considering places for activity inference (FN and FP are false negatives and false positives).
1 minute to 10 minutes, and we measure the false positive and false negative locations extracted from the GPS traces. As shown in Fig. 2(a), any fixed threshold is not satisfactory:
low thresholds have many false positives, and high thresholds result in many false negatives. In contrast, our model performs much better: it only generates 4 false positives and 3
false negatives. This experiment shows that using high-level context information drastically
improves the extraction of significant places.
Labeling places and activities
In our system the labels of activities generate instances of places, which then help to better
estimate the activities occurring in their spatial area. The confusion matrix given in Table 2
summarizes the activity estimation results achieved with our system on the cross-validation
data. The results are given with and without taking the detected places into account. More
speci?cally, without places are results achieved by CRF0 generated by Step 4 of the algorithm in Table 1, and results with places are those achieved after model convergence.
When the results of both approaches are identical, only one number is given, otherwise, the
?rst number gives the result achieved with the complete model. The table shows two main
results. First, the accuracy of our approach is quite high, especially when considering that
the system was evaluated on only one week of data and was trained on only three weeks
of data collected by different persons. Second, performing joint inference over activities
and places increases the quality of inference. The reason for this is that a place node connects all the activities occurring in its spatial area so that these activities can be labeled in
a more consistent way. A further evaluation of the detected places showed that our system
achieved 90.6% accuracy in place detection and labeling (see [7] for more results).
Efficiency of inference
We compared our optimized BP algorithm using FFT summation cliques with inference
based on MCMC and regular BP, using the model and data from [8]. Note that a naive implementation of BP is exponential in the number of nodes in a clique. In our experiments,
the test accuracies resulting from using the different algorithms are almost identical. Therefore, we only focus on comparing the efficiency and scalability of summation aggregations.
The running times for the different algorithms are shown in Fig. 2(b). As can be seen, naive
BP becomes extremely slow for only 20 nodes, MCMC only works for up to 500 nodes,
while our algorithm can perform summation for 2,000 variables within a few minutes.
5
Conclusions
We provided a novel approach to performing location-based activity recognition. In contrast to existing techniques, our approach uses one consistent framework for both low-level
inference and the extraction of a person's significant places. Thereby, our model is able
to take high-level context into account in order to detect the significant locations of a person. Furthermore, once these locations are determined, they help to better detect low-level
activities occurring in their vicinity.
Summation cliques are extremely important to introduce long-term, soft constraints into
activity recognition. We show how to incorporate such cliques into belief propagation using
bi-directional FFT computations. The clique templates of RMNs are well suited to specify
such clique-specific inference mechanisms and we are developing additional techniques,
including clique-specific MCMC and local dynamic programming.
Our experiments based on traces of GPS data show that our system significantly outperforms existing approaches. We demonstrate that the model can be trained from a group of
persons and then applied successfully to a different person, achieving more than 85% accuracy in determining low-level activities and above 90% accuracy in detecting and labeling
significant places. In future work, we will add more sensor data, including accelerometers,
audio signals, and barometric pressure. Using the additional information provided by these
sensors, we will be able to perform more fine-grained activity recognition.
Acknowledgments
The authors would like to thank Jeff Bilmes for useful comments. This work has partly been supported by DARPA's ASSIST and CALO Programme (contract numbers: NBCH-C-05-0137, SRI
subcontract 27-000968) and by the NSF under grant number IIS-0093406.
References
[1] D. Ashbrook and T. Starner. Using GPS to learn significant locations and predict movement across multiple users. Personal and Ubiquitous Computing, 7(5), 2003.
[2] E. Oran Brigham. Fast Fourier Transform and Its Applications. Prentice Hall, 1988.
[3] V. Gogate, R. Dechter, C. Rindt, and J. Marca. Modeling transportation routines using hybrid dynamic mixed networks. In Proc. of the Conference on Uncertainty in Artificial Intelligence, 2005.
[4] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In Proc. of the International Conference on Computer Vision, 2003.
[5] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of the International Conference on Machine Learning, 2001.
[6] L. Liao, D. Fox, and H. Kautz. Learning and inferring transportation routines. In Proc. of the National Conference on Artificial Intelligence, 2004.
[7] L. Liao, D. Fox, and H. Kautz. Hierarchical conditional random fields for GPS-based activity recognition. In Proc. of the 12th International Symposium of Robotics Research (ISRR), 2005.
[8] L. Liao, D. Fox, and H. Kautz. Location-based activity recognition using relational Markov networks. In Proc. of the International Joint Conference on Artificial Intelligence, 2005.
[9] Yongyi Mao, Frank R. Kschischang, and Brendan J. Frey. Convolutional factor graphs as probabilistic models. In Proc. of the Conference on Uncertainty in Artificial Intelligence, 2004.
[10] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proc. of the Conference on Uncertainty in Artificial Intelligence, 2002.
| 2911 |@word sri:1 propagate:1 pick:1 eld:3 concise:2 thereby:2 pressure:1 contains:1 exibility:1 rightmost:1 outperforms:1 existing:4 comparing:1 contextual:1 si:1 tackling:1 must:3 fn:2 dechter:1 partition:1 cant:29 drop:2 update:3 half:1 leaf:2 instantiate:3 intelligence:5 cult:1 mccallum:1 detecting:1 node:28 location:28 symposium:1 consists:2 introduce:2 pairwise:3 p8:1 roughly:1 p1:2 behavior:2 detects:1 considering:2 becomes:1 begin:1 provided:2 underlying:1 xed:3 kind:2 spends:1 developed:1 guarantee:1 pseudo:2 temporal:2 nutshell:1 grant:1 yn:1 segmenting:1 before:1 positive:5 engineering:1 local:8 frey:1 limit:1 encoding:1 merge:1 approximately:1 might:2 dynamically:2 someone:1 dif:1 limited:2 bi:2 acknowledgment:1 yj:19 practice:1 procedure:1 area:4 regular:1 get:2 cannot:2 doubled:1 close:1 prentice:1 context:5 map:3 transportation:5 crfs:3 maximizing:1 starting:2 duration:1 identifying:1 his:2 handle:1 updated:1 tting:1 hierarchy:1 suppose:1 play:1 user:11 programming:1 gps:17 us:3 associate:1 recognition:10 expensive:1 jk:1 satisfying:1 updating:2 walking:1 labeled:3 database:2 observed:2 taskar:2 subproblem:1 bottom:1 role:1 connected:4 went:2 counter:1 movement:1 yk:2 environment:1 complexity:4 dynamic:2 personal:1 trained:3 segment:4 creates:1 efficiency:1 resolved:1 easily:1 joint:3 darpa:1 various:1 assistive:1 instantiated:1 fast:3 describe:1 query:1 detected:2 labeling:13 aggregate:6 artificial:5 quite:1 encoded:1 widely:1 solve:1 loglikelihood:1 otherwise:1 ability:1 statistic:1 transform:7 jointly:2 highlevel:1 sequence:9 took:1 interaction:1 mb:2 product:3 neighboring:1 combining:1 loop:1 description:1 everyday:1 scalability:1 getting:1 seattle:1 rst:7 parent:1 convergence:1 produce:1 generating:1 converges:1 spent:1 help:2 friend:3 school:1 p2:1 coverage:1 auxiliary:4 signi:29 blanket:1 indicate:2 ning:1 vc:5 human:3 calo:1 sjk:2 require:1 abbeel:1 preliminary:1 mki:1 summation:24 yij:12 extension:1 considered:1 hall:1 exp:2 scope:1 week:3 predict:1 driving:1 consecutive:2 a2:1 estimation:1 proc:8 travel:1 label:10 visited:2 largest:1 successfully:1 clearly:1 sensor:5 gaussian:1 aim:1 shrinkage:1 surveillance:1 focus:3 improvement:2 likelihood:2 contrast:3 brendan:1 summarizing:1 detect:3 inference:22 typically:1 relation:5 koller:1 selects:1 upward:5 compatibility:1 among:1 denoted:1 priori:1 grocery:1 spatial:4 field:1 once:2 construct:2 comprise:1 washington:1 extraction:2 identical:2 represents:3 look:1 inevitable:1 future:1 intelligent:1 duplicate:1 employ:1 few:1 simultaneously:2 recognize:3 national:1 individual:2 connects:2 detection:3 interest:1 message:24 evaluation:2 light:1 necessary:2 decoupled:1 fox:4 tree:9 re:2 instance:4 soft:2 modeling:1 assignment:1 subset:1 too:1 person:20 starner:1 international:4 discriminating:1 stay:1 cantly:1 contract:1 off:5 probabilistic:3 enhance:1 yongyi:1 quickly:1 again:2 recorded:1 containing:1 possibly:1 myj:8 return:2 yp:1 account:5 potential:5 de:4 accelerometer:1 notable:1 explicitly:1 stream:1 performed:9 root:1 hg1:2 relied:1 aggregation:7 kautz:4 contribution:1 accuracy:9 convolutional:1 efficiently:1 directional:2 raw:2 monitoring:1 bilmes:1 cation:2 converged:1 explain:1 sharing:1 ed:2 addend:2 colleague:1 frequency:1 associated:4 wearable:1 stop:3 knowledge:1 car:3 improves:1 ubiquitous:1 organized:1 segmentation:5 routine:2 back:1 appears:1 day:6 specify:3 improved:1 evaluated:2 rmns:6 strongly:1 generality:1 done:2 furthermore:1 until:3 correlation:3 working:2 propagation:6 mode:1 quality:1 disabled:1 
leisure:2 riding:1 concept:1 geographic:2 contain:1 consisted:1 equality:2 assigned:1 vicinity:2 excluded:1 y12:1 iteratively:1 satisfactory:1 conditionally:1 during:1 subcontract:1 complete:4 demonstrate:2 crf:15 confusion:2 performs:3 reasoning:1 wise:1 ef:9 novel:1 specialized:2 rmn:6 overview:1 extend:2 interpretation:3 association:2 discussed:1 belong:1 expressing:1 measurement:2 significant:4 imposing:1 ai:2 consistency:1 language:2 henry:1 moving:1 sql:1 gt:2 add:2 recent:1 showed:2 diary:1 route:1 certain:2 store:2 yi:28 seen:1 additional:3 employed:1 speci:5 myi:4 determine:4 maximize:1 converge:1 signal:1 ii:3 full:2 desirable:1 multiple:2 infer:1 segmented:1 positioning:1 eji:1 cross:4 long:2 lin:1 prevented:1 e1:1 visit:2 a1:2 liao:4 vision:2 iteration:4 represent:2 achieved:5 robotics:1 want:1 comment:1 undirected:1 lafferty:1 ciently:3 ee:4 near:1 leverage:1 counting:1 extracting:3 easy:1 fft:9 automated:2 restaurant:4 reduce:1 regarding:1 whether:5 handled:1 assist:1 suffer:1 passing:2 cause:1 useful:2 amount:1 transforms:1 reduced:1 generate:9 nsf:1 correctly:1 per:1 discrete:2 group:1 key:1 four:3 threshold:11 enormous:1 achieving:1 neither:1 graph:2 isrr:1 sum:6 convert:1 inverse:2 uncertainty:3 place:73 almost:1 reasonable:1 home:7 decision:1 summarizes:2 distinguish:1 sleep:2 activity:73 occur:1 constraint:2 bp:17 encodes:2 wc:4 fourier:8 generates:3 speed:1 extremely:3 min:4 kumar:1 performing:3 developing:1 according:1 combination:1 conjugate:1 across:1 sij:17 dieter:1 equation:2 bus:6 describing:1 discus:1 turn:1 count:2 mechanism:1 apply:2 hierarchical:3 original:1 denotes:1 running:3 include:3 ensure:1 clustering:2 graphical:1 log2:3 cally:1 build:1 especially:1 disappear:1 added:2 occurs:2 costly:1 visiting:2 ow:1 distance:2 thank:1 street:2 originate:1 seven:1 collected:2 reason:3 enforcing:1 assuming:1 relationship:3 gogate:1 providing:1 potentially:1 yjk:1 relate:1 frank:1 trace:12 negative:6 design:1 implementation:4 perform:4 upper:1 observation:1 convolution:4 markov:9 pickup:2 relational:10 y1:2 arbitrary:1 inferred:2 introduced:1 optimized:3 concisely:1 learned:2 address:2 able:2 hp1:2 usually:1 pattern:3 fp:2 reading:3 including:4 belief:7 natural:1 hybrid:1 representing:1 technology:1 ne:4 extract:2 health:1 naive:3 prior:2 determining:3 discriminatively:1 mixed:1 generation:1 limitation:2 validation:3 consistent:2 classifying:1 share:1 pi:1 supported:1 last:1 transpose:1 hebert:1 drastically:1 bias:1 allow:1 neighbor:4 template:16 taking:1 leaveone:1 ha1:3 overcome:1 xn:2 world:1 avoids:1 author:1 made:1 programme:1 uni:1 overcomes:1 clique:29 nbch:1 global:4 e1i:1 tuples:1 factorize:1 discriminative:3 iterative:1 table:6 additionally:3 learn:4 transfer:1 kschischang:1 excellent:1 complex:3 constructing:1 domain:1 protocol:1 pk:3 main:1 child:4 repeated:1 fig:7 cient:4 en:1 brie:2 slow:1 inferring:2 waited:1 pereira:1 ciency:2 exponential:4 mao:1 house:1 third:1 grained:1 minute:6 theorem:3 exible:1 concern:1 normalizing:1 intractable:2 evidence:4 grouping:1 restricting:1 false:10 brigham:1 downward:6 occurring:3 suited:1 fc:3 simply:2 likely:1 g2:2 mij:1 truth:1 determines:1 extracted:2 conditional:6 goal:1 disambiguating:1 jeff:1 change:2 determined:1 except:1 wt:1 classi:3 total:1 pas:2 engaged:1 e:2 experimental:2 partly:1 meaningful:1 rarely:1 internal:1 support:1 people:1 arises:1 incorporate:2 mcmc:6 audio:1 |
2,106 | 2,912 | Sensory Adaptation within a Bayesian
Framework for Perception
Alan A. Stocker* and Eero P. Simoncelli
Howard Hughes Medical Institute and
Center for Neural Science
New York University
Abstract
We extend a previously developed Bayesian framework for perception
to account for sensory adaptation. We first note that the perceptual effects of adaptation seem inconsistent with an adjustment of the internally represented prior distribution. Instead, we postulate that adaptation
increases the signal-to-noise ratio of the measurements by adapting the
operational range of the measurement stage to the input range. We show
that this changes the likelihood function in such a way that the Bayesian
estimator model can account for reported perceptual behavior. In particular, we compare the model?s predictions to human motion discrimination
data and demonstrate that the model accounts for the commonly observed
perceptual adaptation effects of repulsion and enhanced discriminability.
1 Motivation
A growing number of studies support the notion that humans are nearly optimal when performing perceptual estimation tasks that require the combination of sensory observations
with a priori knowledge. The Bayesian formulation of these problems defines the optimal
strategy, and provides a principled yet simple computational framework for perception that
can account for a large number of known perceptual effects and illusions, as demonstrated
in sensorimotor learning [1], cue combination [2], or visual motion perception [3], just to
name a few of the many examples.
Adaptation is a fundamental phenomenon in sensory perception that seems to occur at all
processing levels and modalities. A variety of computational principles have been suggested as explanations for adaptation. Many of these are based on the concept of maximizing the sensory information an observer can obtain about a stimulus despite limited sensory
resources [4, 5, 6]. More mechanistically, adaptation can be interpreted as the attempt of
the sensory system to adjust its (limited) dynamic range such that it is maximally informative with respect to the statistics of the stimulus. A typical example is observed in the
retina, which manages to encode light intensities that vary over nine orders of magnitude
using ganglion cells whose dynamic range covers only two orders of magnitude. This is
achieved by adapting to the local mean as well as higher order statistics of the visual input
over short time-scales [7].
* corresponding author.
If a Bayesian framework is to provide a valid computational explanation of perceptual
processes, then it needs to account for the behavior of a perceptual system, regardless of
its adaptation state. In general, adaptation in a sensory estimation task seems to have two
fundamental effects on subsequent perception:
- Repulsion: The estimates of parameters of subsequent stimuli are repelled by
those of the adaptor stimulus, i.e. the perceived values for the stimulus variable
that is subject to the estimation task are more distant from the adaptor value after
adaptation. This repulsive effect has been reported for perception of visual speed
(e.g. [8, 9]), direction-of-motion [10], and orientation [11].
- Increased sensitivity: Adaptation increases the observer's discrimination ability around the adaptor (e.g. for visual speed [12, 13]); however, it also seems to decrease it further away from the adaptor, as shown in the case of direction-of-motion
discrimination [14].
In this paper, we show that these two perceptual effects can be explained within a Bayesian
estimation framework of perception. Note that our description is at an abstract functional
level - we do not attempt to provide a computational model for the underlying mechanisms
responsible for adaptation, and this clearly separates this paper from other work which
might seem at first glance similar [e.g., 15].
2 Adaptive Bayesian estimator framework
Suppose that an observer wants to estimate a property of a stimulus denoted by the variable
θ, based on a measurement m. In general, the measurement can be vector-valued, and is corrupted by both internal and external noise. Hence, combining the noisy information gained by the measurement m with a priori knowledge about θ is advantageous. According to Bayes' rule

    p(θ|m) = (1/α) p(m|θ) p(θ) .    (1)

That is, the probability of stimulus value θ given m (posterior) is the product of the likelihood p(m|θ) of the particular measurement and the prior p(θ). The normalization constant α serves to ensure that the posterior is a proper probability distribution. Under the assumption of a squared-error loss function, the optimal estimate θ̂(m) is the mean of the posterior, thus

    θ̂(m) = ∫₀^∞ θ p(θ|m) dθ .    (2)

Note that θ̂(m)
describes an estimate for a single measurement m. As discussed in [16],
the measurement will vary stochastically over the course of many exposures to the same
stimulus, and thus the estimator will also vary. We return to this issue in Section 3.2.
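As a concrete illustration of equations (1) and (2), the following sketch computes the posterior-mean estimate on a discretized stimulus grid. The exponential prior and the Gaussian likelihood width are illustrative assumptions, not quantities taken from the paper.

    import numpy as np

    def bayes_estimate(m, theta, likelihood, prior):
        """Posterior-mean estimate (Eq. 2) on a discrete grid of stimulus values.

        likelihood : p(m|theta) evaluated on the grid for the measurement m
        prior      : p(theta) evaluated on the grid
        """
        posterior = likelihood * prior
        posterior /= np.trapz(posterior, theta)    # the constant alpha in Eq. 1
        return np.trapz(theta * posterior, theta)  # mean of the posterior, Eq. 2

    theta = np.linspace(0.0, 20.0, 2001)
    prior = np.exp(-theta / 5.0)                   # assumed prior favoring low values
    prior /= np.trapz(prior, theta)
    m = 10.0                                       # a single noisy measurement
    likelihood = np.exp(-0.5 * ((m - theta) / 2.0) ** 2)  # assumed Gaussian noise
    print(bayes_estimate(m, theta, likelihood, prior))    # estimate falls below m = 10

Because the assumed prior decreases with θ, the estimate is pulled below the measurement, matching the shift illustrated in Figure 1a.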
Figure 1a illustrates a Bayesian estimator, in which the shape of the (arbitrary) prior distribution leads on average to a shift of the estimate toward a lower value of ? than the true
stimulus value ?stim . The likelihood and the prior are the fundamental constituents of the
Bayesian estimator model. Our goal is to describe how adaptation alters these constituents
so as to account for the perceptual effects of repulsion and increased sensitivity.
Adaptation does not change the prior ...
An intuitively sensible hypothesis is that adaptation changes the prior distribution. Since
the prior is meant to reflect the knowledge the observer has about the distribution of occurrences of the variable ? in the world, repeated viewing of stimuli with the same parameter
[Figure 1 graphic: panels (a) and (b), each plotting probability over θ, showing the posterior, likelihood, and prior curves; panel (b) adds a modified prior concentrated at θ_adapt and marks the resulting attraction of the estimate θ̂.]
Figure 1: Hypothetical model in which adaptation alters the prior distribution. a) Unadapted Bayesian estimation configuration in which the prior leads to a shift of the estimate
θ̂ relative to the stimulus parameter θ_stim. Both the likelihood function and the prior distribution contribute to the exact value of the estimate θ̂ (mean of the posterior). b) Adaptation acts by increasing the prior distribution around the value, θ_adapt, of the adapting stimulus parameter. Consequently, a subsequent estimate θ̂′ of the same stimulus parameter value θ_stim is attracted toward the adaptor. This is the opposite of observed perceptual effects,
and we thus conclude that adjustments of the prior in a Bayesian model do not account for
adaptation.
value θ_adapt should presumably increase the prior probability in the vicinity of θ_adapt. Figure 1b schematically illustrates the effect of such a change in the prior distribution. The
estimated (perceived) value of the parameter under the adapted condition is attracted to the
adapting parameter value. In order to account for observed perceptual repulsion effects,
the prior would have to decrease at the location of the adapting parameter, a behavior that
seems fundamentally inconsistent with the notion of a prior distribution.
... but increases the reliability of the measurements
Since a change in the prior distribution is not consistent with repulsion, we are led to the
conclusion that adaptation must change the likelihood function. But why, and how should
this occur?
In order to answer this question, we reconsider the functional purpose of adaptation. We assume that adaptation acts to allocate more resources to the representation of the parameter
values in the vicinity of the adaptor [4], resulting in a local increase in the signal-to-noise
ratio (SNR). This can be accomplished, for example, by dynamically adjusting the operational range to the statistics of the input. This kind of increased operational gain around
the adaptor has been effectively demonstrated in the process of retinal adaptation [17]. In
the context of our Bayesian estimator framework, and restricting to the simple case of a
scalar-valued measurement, adaptation results in a narrower conditional probability density p(m|θ) in the immediate vicinity of the adaptor, thus an increase in the reliability of the measurement m. This is offset by a broadening of the conditional probability density p(m|θ) in the region beyond the adaptor vicinity (we assume that total resources are
conserved, and thus an increase around the adaptor must necessarily lead to a decrease
elsewhere).
Figure 2 illustrates the effect of this local increase in signal-to-noise ratio on the likeli-
[Figure 2 graphic: two-dimensional conditional densities p(m|θ) in the unadapted and adapted states, with the adapted measurement noise (1/SNR) reduced around θ_adapt; horizontal slices at θ₁ and θ₂ give the likelihoods p(m|θ₁), p(m|θ₂) before and after adaptation.]
Figure 2: Measurement noise, conditionals and likelihoods. The two-dimensional conditional density, p(m|θ), is shown as a grayscale image for both the unadapted and adapted cases. We assume here that adaptation increases the reliability (SNR) of the measurement around the parameter value of the adaptor. This is balanced by a decrease in SNR of the measurement further away from the adaptor. Because the likelihood is a function of θ (horizontal slices, shown plotted at right), this results in an asymmetric change in the likelihood
that is in agreement with a repulsive effect on the estimate.
[Figure 3 graphic: panels (a) model prediction and (b) human data, plotting the shift in perceived direction Δθ̂ [deg] against test direction θ [deg] over the range −180 to 180 deg, with θ_adapt marked.]
Figure 3: Repulsion: Model predictions vs. human psychophysics. a) Difference in perceived direction in the pre- and post-adaptation condition, as predicted by the model. Post-adaptive percepts of motion direction are repelled away from the direction of the adaptor.
b) Typical human subject data show a qualitatively similar repulsive effect. Data (and fit)
are replotted from [10].
hood function. The two gray-scale images represent the conditional probability densities, p(m|θ), in the unadapted and the adapted state. They are formed by assuming additive noise on the measurement m of constant variance (unadapted) or with a variance that decreases symmetrically in the vicinity of the adaptor parameter value θ_adapt, and grows slightly in the region beyond. In the unadapted state, the likelihood is convolutional and the shape and variance are equivalent to the distribution of measurement noise. However, in the adapted state, because the likelihood is a function of θ (horizontal slice through the
conditional surface) it is no longer convolutional around the adaptor. As a result, the mean
is pushed away from the adaptor, as illustrated in the two graphs on the right. Assuming
that the prior distribution is fairly smooth, this repulsion effect is transferred to the posterior
distribution, and thus to the estimate.
3 Simulation Results
We have qualitatively demonstrated that an increase in the measurement reliability around
the adaptor is consistent with the repulsive effects commonly seen as a result of perceptual adaptation. In this section, we simulate an adapted Bayesian observer by assuming a
simple model for the changes in signal-to-noise ratio due to adaptation. We address both
repulsion and changes in discrimination threshold. In particular, we compare our model
predictions with previously published data from psychophysical experiments examining
human perception of motion direction.
3.1 Repulsion
In the unadapted state, we assume the measurement noise to be additive and normally
distributed, and constant over the whole measurement space. Thus, assuming that m and
θ live in the same space, the likelihood is a Gaussian of constant width. In the adapted state, we assume a simple functional description for the variance of the measurement noise around the adaptor. Specifically, we use a constant plus a difference of two Gaussians,
[Figure 4 graphic: panels (a) model and (b) human data, plotting relative discrimination threshold (roughly 0.8 to 1.8) against θ [deg] from −40 to 40, with θ_adapt marked.]
Figure 4: Discrimination thresholds: Model predictions vs. human psychophysics. a) The
model predicts that thresholds for direction discrimination are reduced at the adaptor. It
also predicts two side-lobes of increased threshold at further distance from the adaptor.
b) Data of human psychophysics are in qualitative agreement with the model. Data are
replotted from [14] (see also [11]).
each having equal area, with one twice as broad as the other (see Fig. 2). Finally, for
simplicity, we assume a flat prior, but any reasonable smooth prior would lead to results
that are qualitatively similar. Then, according to (2) we compute the predicted estimate of
motion direction in both the unadapted and the adapted case.
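The following sketch reproduces this construction numerically. The noise profile follows the description above (a constant plus a difference of two equal-area Gaussians, the broader twice the width of the narrower); the particular amplitudes and widths are assumptions chosen only to make the effect visible, not fitted values.

    import numpy as np

    def noise_std(theta, theta_adapt, base=8.0, amp=4.0, w=10.0):
        """Measurement noise s.d. as a function of theta: a constant minus a
        difference of two equal-area Gaussians (widths w and 2w) centered on
        the adaptor, so that the SNR is highest near theta_adapt."""
        d = theta - theta_adapt
        narrow = np.exp(-0.5 * (d / w) ** 2) / w
        broad = np.exp(-0.5 * (d / (2.0 * w)) ** 2) / (2.0 * w)
        return base - amp * w * (narrow - broad)

    def adapted_estimate(theta_stim, theta_adapt, grid):
        """Posterior-mean estimate with a flat prior, so the posterior equals
        the normalized likelihood; the theta-dependent noise makes the
        likelihood non-convolutional, as in Fig. 2."""
        sigma = noise_std(grid, theta_adapt)
        lik = np.exp(-0.5 * ((theta_stim - grid) / sigma) ** 2) / sigma
        lik /= np.trapz(lik, grid)
        return np.trapz(grid * lik, grid)

    grid = np.linspace(-180.0, 180.0, 3601)
    for ts in [-30.0, -10.0, 10.0, 30.0]:
        shift = adapted_estimate(ts, 0.0, grid) - ts
        print(ts, shift)  # a shift away from the adaptor at 0 indicates repulsion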
Figure 3a shows the predicted difference between the pre- and post-adaptive average estimate of direction, as a function of the stimulus direction, θ_stim. The adaptor is indicated with
an arrow. The repulsive effect is clearly visible. For comparison, Figure 3b shows human
subject data replotted from [10]. The perceived motion direction of a grating was estimated,
under both adapted and unadapted conditions, using a two-alternative-forced-choice experimental paradigm. The plot shows the change in perceived direction as a function of test
stimulus direction relative to that of the adaptor. Comparison of the two panels of Figure 3 indicates that despite the highly simplified construction of the model, the prediction is quite
good, and even includes the small but consistent repulsive effects observed 180 degrees
from the adaptor.
3.2 Changes in discrimination threshold
Adaptation also changes the ability of human observers to discriminate between the direction of two different moving stimuli. In order to model discrimination thresholds, we
need to consider a Bayesian framework that can account not only for the mean of the estimate but also its variability. We have recently developed such a framework, and used
it to quantitatively constrain the likelihood and the prior from psychophysical data [16].
This framework accounts for the effect of the measurement noise on the variability of the estimate θ̂. Specifically, it provides a characterization of the distribution p(θ̂|θ_stim) of the estimate for a given stimulus direction in terms of its expected value and its variance as a function of the measurement noise. As in [16] we write

    var{θ̂|θ_stim} = var{m} (∂θ̂(m)/∂m)² |_{m=θ_stim} .    (3)
Assuming that discrimination threshold is proportional to the standard deviation, √var{θ̂|θ_stim}, we can now predict how discrimination thresholds should change after adaptation. Figure 4a shows the predicted change in discrimination thresholds relative to the unadapted condition for the same model parameters as in the repulsion example (Figure 3a).
Thresholds are slightly reduced at the adaptor, but increase symmetrically for directions
further away from the adaptor. For comparison, Figure 4b shows the relative change in discrimination thresholds for a typical human subject [14]. Again, the behavior of the human
observer is qualitatively well predicted.
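Equation (3) can be evaluated by differentiating the estimator mapping numerically. The sketch below reuses noise_std() and adapted_estimate() from the previous listing; treating the threshold as equal (rather than merely proportional) to the standard deviation is a simplifying assumption, since only ratios of thresholds are compared.

    import numpy as np

    def threshold(theta_stim, theta_adapt, grid, h=0.5):
        """Eq. (3): sd{theta_hat|theta_stim} = sd{m} * |d theta_hat/dm|, with the
        derivative at m = theta_stim approximated by central differences."""
        d_est = (adapted_estimate(theta_stim + h, theta_adapt, grid)
                 - adapted_estimate(theta_stim - h, theta_adapt, grid)) / (2.0 * h)
        return noise_std(theta_stim, theta_adapt) * abs(d_est)

    grid = np.linspace(-180.0, 180.0, 3601)
    for ts in [0.0, 10.0, 20.0, 40.0]:
        # adapted threshold relative to the unadapted one (adaptor moved far away,
        # so the noise profile reduces to the constant baseline)
        print(ts, threshold(ts, 0.0, grid) / threshold(ts, 1e6, grid))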
4 Discussion
We have shown that adaptation can be incorporated into a Bayesian estimation framework
for human sensory perception. Adaptation seems unlikely to manifest itself as a change
in the internal representation of prior distributions, as this would lead to perceptual bias
effects that are opposite to those observed in human subjects. Instead, we argue that adaptation leads to an increase in reliability of the measurement in the vicinity of the adapting
stimulus parameter. We show that this change in the measurement reliability results in
changes of the likelihood function, and that an estimator that utilizes this likelihood function will exhibit the commonly-observed adaptation effects of repulsion and changes in
discrimination threshold. We further confirm our model by making quantitative predictions
and comparing them with known psychophysical data in the case of human perception of
motion direction.
Many open questions remain. The results demonstrated here indicate that a resource allocation explanation is consistent with the functional effects of adaptation, but it seems unlikely
that theory alone can lead to a unique quantitative prediction of the detailed form of these
effects. Specifically, the constraints imposed by biological implementation are likely to
play a role in determining the changes in measurement noise as a function of adaptor parameter value, and it will be important to characterize and interpret neural response changes
in the context of our framework. Also, although we have argued that changes in the prior
seem inconsistent with adaptation effects, it may be that such changes do occur but are
offset by the likelihood effect, or occur only on much longer timescales.
Last, if one considers sensory perception as the result of a cascade of successive processing
stages (with both feedforward and feedback connections), it becomes necessary to expand
the Bayesian description to describe this cascade [e.g., 18, 19]. For example, it may be
possible to interpret this cascade as a sequence of Bayesian estimators, in which the measurement of each stage consists of the estimate computed at the previous stage. Adaptation
could potentially occur in each of these processing stages, and it is of fundamental interest
to understand how such a cascade can perform useful stable computations despite the fact
that each of its elements is constantly readjusting its response properties.
References
[1] K. Körding and D. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(15):244-247, January 2004.
[2] D. C. Knill and W. Richards, editors. Perception as Bayesian Inference. Cambridge University Press, 1996.
[3] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598-604, June 2002.
[4] H. B. Barlow. Vision: Coding and Efficiency, chapter A theory about the functional role and synaptic mechanism of visual after-effects, pages 363-375. Cambridge University Press, 1990.
[5] M. J. Wainwright. Visual adaptation as optimal information transmission. Vision Research, 39:3960-3974, 1999.
[6] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26:695-702, June 2000.
[7] S. M. Smirnakis, M. J. Berry, D. K. Warland, W. Bialek, and M. Meister. Adaptation of retinal processing to image contrast and spatial scale. Nature, 386:69-73, March 1997.
[8] P. Thompson. Velocity after-effects: the effects of adaptation to moving stimuli on the perception of subsequently seen moving stimuli. Vision Research, 21:337-345, 1980.
[9] A. T. Smith. Velocity coding: evidence from perceived velocity shifts. Vision Research, 25(12):1969-1976, 1985.
[10] P. Schrater and E. Simoncelli. Local velocity representation: evidence from motion adaptation. Vision Research, 38:3899-3912, 1998.
[11] C. W. Clifford. Perceptual adaptation: motion parallels orientation. Trends in Cognitive Sciences, 6(3):136-143, March 2002.
[12] C. Clifford and P. Wenderoth. Adaptation to temporal modulation can enhance differential speed sensitivity. Vision Research, 39:4324-4332, 1999.
[13] A. Kristjansson. Increased sensitivity to speed changes during adaptation to first-order, but not to second-order motion. Vision Research, 41:1825-1832, 2001.
[14] R. E. Phinney, C. Bowd, and R. Patterson. Direction-selective coding of stereoscopic (cyclopean) motion. Vision Research, 37(7):865-869, 1997.
[15] N. M. Grzywacz and R. M. Balboa. A Bayesian framework for sensory adaptation. Neural Computation, 14:543-559, 2002.
[16] A. A. Stocker and E. P. Simoncelli. Constraining a Bayesian model of human visual speed perception. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems NIPS 17, pages 1361-1368, Cambridge, MA, 2005. MIT Press.
[17] D. Tranchina, J. Gordon, and R. M. Shapley. Retinal light adaptation - evidence for a feedback mechanism. Nature, 310:314-316, July 1984.
[18] S. Deneve. Bayesian inference in spiking neurons. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Adv. Neural Information Processing Systems (NIPS*04), vol 17, Cambridge, MA, 2005. MIT Press.
[19] R. Rao. Hierarchical Bayesian inference in networks of spiking neurons. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Adv. Neural Information Processing Systems (NIPS*04), vol 17, Cambridge, MA, 2005. MIT Press.
2,107 | 2,913 | Consensus Propagation
Ciamac C. Moallemi
Stanford University
Stanford, CA 95014 USA
[email protected]
Benjamin Van Roy
Stanford University
Stanford, CA 95014 USA
[email protected]
Abstract
We propose consensus propagation, an asynchronous distributed protocol for averaging numbers across a network. We establish convergence,
characterize the convergence rate for regular graphs, and demonstrate
that the protocol exhibits better scaling properties than pairwise averaging, an alternative that has received much recent attention. Consensus
propagation can be viewed as a special case of belief propagation, and
our results contribute to the belief propagation literature. In particular,
beyond singly-connected graphs, there are very few classes of relevant
problems for which belief propagation is known to converge.
1 Introduction
Consider a network of n nodes in which the ith node observes a number y_i ∈ [0, 1] and aims to compute the average Σ_{i=1}^n y_i / n. The design of scalable distributed protocols for
this purpose has received much recent attention and is motivated by a variety of potential
needs. In both wireless sensor and peer-to-peer networks, for example, there is interest
in simple protocols for computing aggregate statistics (see, for example, the references
in [1]), and averaging enables computation of several important ones. Further, averaging
serves as a primitive in the design of more sophisticated distributed information processing
algorithms. For example, a maximum likelihood estimate can be produced by an averaging
protocol if each node?s observations are linear in variables of interest and noise is Gaussian
[2]. As another example, averaging protocols are central to policy-gradient-based methods
for distributed optimization of network performance [3].
In this paper we propose and analyze a new protocol, consensus propagation, for asynchronous distributed averaging. As a baseline for comparison, we will also discuss another asynchronous distributed protocol, pairwise averaging, which has received much recent
attention. In pairwise averaging, each node maintains its current estimate of the average,
and each time a pair of nodes communicate, they revise their estimates to both take on the
mean of their previous estimates. Convergence of this protocol in a very general model of
asynchronous computation and communication was established in [4]. Recent work [5, 6]
has studied the convergence rate and its dependence on network topology and how pairs of
nodes are sampled. Here, sampling is governed by a certain doubly stochastic matrix, and
the convergence rate is characterized by its second-largest eigenvalue.
Consensus propagation is a simple algorithm with an intuitive interpretation. It can also be
viewed as an asynchronous distributed version of belief propagation as applied to approxi-
mation of conditional distributions in a Gaussian Markov random field. When the network
of interest is singly-connected, prior results about belief propagation imply convergence
of consensus propagation. However, in most cases of interest, the network is not singly-connected and prior results have little to say about convergence. In particular, Gaussian
belief propagation on a graph with cycles is not guaranteed to converge, as demonstrated
by examples in [7].
In fact, there are very few relevant cases where belief propagation on a graph with cycles is known to converge. Some fairly general sufficient conditions have been established
[8, 9, 10], but these conditions are abstract and it is difficult to identify interesting classes
of problems that meet them. One simple case where belief propagation is guaranteed to
converge is when the graph has only a single cycle [11, 12, 13]. Recent work proposes the
use of belief propagation to solve maximum-weight matching problems and proves convergence in that context [14]. [15] proves convergence in the application of belief propogation
to a classification problem. In the Gaussian case, [7, 16] provide sufficient conditions for
convergence, but these conditions are difficult to interpret and do not capture situations that
correspond to consensus propagation.
With this background, let us discuss the primary contributions of this paper: (1) we propose consensus propagation, a new asynchronous distributed protocol for averaging; (2) we
prove that consensus propagation converges even when executed asynchronously. Since
there are so few classes of relevant problems for which belief propagation is known to
converge, even with synchronous execution, this is surprising; (3) we characterize the convergence time in regular graphs of the synchronous version of consensus propagation in terms of the mixing time of a certain Markov chain over edges of the graph; (4) we
explain why the convergence time of consensus propagation scales more gracefully with
the number of nodes than does that of pairwise averaging, and for certain classes of graphs,
we quantify the improvement.
2 Algorithm
Consider a connected undirected graph (V, E) with |V| = n nodes. For each node i ∈ V, let N(i) = {j : (i, j) ∈ E} be the set of neighbors of i. Each node i ∈ V is assigned a number y_i ∈ [0, 1]. The goal is for each node to obtain an estimate of ȳ = Σ_{i∈V} y_i / n through an asynchronous distributed protocol in which each node carries out simple computations and communicates parsimonious messages to its neighbors.
We propose consensus propagation as an approach to the aforementioned problem. In this protocol, if a node i communicates to a neighbor j at time t, it transmits a message consisting of two numerical values. Let μ_ij^t ∈ R and K_ij^t ∈ R_+ denote the values associated with the most recently transmitted message from i to j at or before time t. At each time t, node j has stored in memory the most recent message from each neighbor: {μ_ij^t, K_ij^t | i ∈ N(j)}. The initial values in memory before receiving any messages are arbitrary.
Consensus propagation is parameterized by a scalar β > 0 and a non-negative matrix Q ∈ R_+^{n×n} with Q_ij > 0 if and only if i ≠ j and (i, j) ∈ E. Let Ẽ ⊆ V × V be a set consisting of two directed edges (i, j) and (j, i) per undirected edge (i, j) ∈ E. For each (i, j) ∈ Ẽ, it is useful to define the following three functions:
    F_ij(K) = (1 + Σ_{u∈N(i)\j} K_ui) / (1 + (1/(β Q_ij)) (1 + Σ_{u∈N(i)\j} K_ui)),    (1)

    G_ij(μ, K) = (y_i + Σ_{u∈N(i)\j} K_ui μ_ui) / (1 + Σ_{u∈N(i)\j} K_ui),
    X_i(μ, K) = (y_i + Σ_{u∈N(i)} K_ui μ_ui) / (1 + Σ_{u∈N(i)} K_ui).    (2)
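A direct transcription of (1) and (2) is sketched below, with the graph stored as an adjacency map N, observations y, and messages kept in dictionaries keyed by directed edges; this representation is an implementation choice, not part of the protocol.

    def F(i, j, K, N, beta, Q):
        """Eq. (1): update for the precision message K_ij."""
        s = 1.0 + sum(K[(u, i)] for u in N[i] if u != j)
        return s / (1.0 + s / (beta * Q[(i, j)]))

    def G(i, j, mu, K, N, y):
        """Eq. (2), first expression: update for the mean message mu_ij."""
        num = y[i] + sum(K[(u, i)] * mu[(u, i)] for u in N[i] if u != j)
        den = 1.0 + sum(K[(u, i)] for u in N[i] if u != j)
        return num / den

    def X(i, mu, K, N, y):
        """Eq. (2), second expression: node i's current estimate of the mean."""
        num = y[i] + sum(K[(u, i)] * mu[(u, i)] for u in N[i])
        den = 1.0 + sum(K[(u, i)] for u in N[i])
        return num / den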
For each t, denote by U_t ⊆ Ẽ the set of directed edges along which messages are transmitted at time t. Consensus propagation is presented below as Algorithm 1.
Algorithm 1 Consensus propagation.
 1: for time t = 1 to ∞ do
 2:    for all (i, j) ∈ U_t do
 3:       K_ij^t ← F_ij(K^{t−1})
 4:       μ_ij^t ← G_ij(μ^{t−1}, K^{t−1})
 5:    end for
 6:    for all (i, j) ∉ U_t do
 7:       K_ij^t ← K_ij^{t−1}
 8:       μ_ij^t ← μ_ij^{t−1}
 9:    end for
10:    x^t ← X(μ^t, K^t)
11: end for
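Using the Python transcriptions of F, G, and X given above, the synchronous special case (U_t = Ẽ for all t) of Algorithm 1 can be sketched as follows. The ring graph, β, Q ≡ 1, and the iteration count are illustrative choices.

    def synchronous_cp(N, y, beta, Q, T):
        """Synchronous consensus propagation: every directed edge fires each step."""
        edges = [(i, j) for i in N for j in N[i]]
        K = {e: 0.0 for e in edges}   # arbitrary initial messages
        mu = {e: 0.0 for e in edges}
        for _ in range(T):
            K_new = {(i, j): F(i, j, K, N, beta, Q) for (i, j) in edges}
            mu_new = {(i, j): G(i, j, mu, K, N, y) for (i, j) in edges}
            K, mu = K_new, mu_new
        return [X(i, mu, K, N, y) for i in sorted(N)]

    n = 8  # ring with Q_ij = 1 on every edge
    N = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    Q = {(i, j): 1.0 for i in N for j in N[i]}
    y = [i / (n - 1.0) for i in range(n)]
    print(synchronous_cp(N, y, beta=100.0, Q=Q, T=200))  # entries close to mean(y)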
Consensus propagation is a distributed protocol because computations at each node require only information that is locally available. In particular, the messages F_ij(K^{t−1}) and G_ij(μ^{t−1}, K^{t−1}) transmitted from node i to node j depend only on {μ_ui^{t−1}, K_ui^{t−1} | u ∈ N(i)}, which node i has stored in memory. Similarly, x_i^t, which serves as an estimate of ȳ, depends only on {μ_ui^t, K_ui^t | u ∈ N(i)}.
Consensus propagation is an asynchronous protocol because only a subset of the potential
messages are transmitted at each time. Our convergence analysis can also be extended to
accommodate more general models of asynchronism that involve communication delays,
as those presented in [17].
In our study of convergence time, we will focus on the synchronous version of consensus propagation. This is where U_t = Ẽ for all t. Note that synchronous consensus propagation is defined by:

    K^t = F(K^{t−1}),   μ^t = G(μ^{t−1}, K^{t−1}),   x^t = X(μ^t, K^t).    (3)
2.1 Intuitive Interpretation
Consider the special case of a singly connected graph. For any (i, j) ∈ Ẽ, there is a set S_ij ⊆ V of nodes that can transmit information to S_ji = V \ S_ij only through (i, j). In order for nodes in S_ji to compute ȳ, they must at least be provided with the average μ_ij^* among observations at nodes in S_ij and the cardinality K_ij^* = |S_ij|. The messages μ_ij^t and K_ij^t can be viewed as estimates. In fact, when β = ∞, μ_ij^t and K_ij^t converge to μ_ij^* and K_ij^*, as we will now explain.
Suppose the graph is singly connected, β = ∞, and transmissions are synchronous. Then,

    K_ij^t = 1 + Σ_{u∈N(i)\j} K_ui^{t−1},    (4)

for all (i, j) ∈ Ẽ. This is a recursive characterization of |S_ij|, and it is easy to see that it converges in a number of iterations equal to the diameter of the graph. Now consider the
iteration

    μ_ij^t = (y_i + Σ_{u∈N(i)\j} K_ui^{t−1} μ_ui^{t−1}) / (1 + Σ_{u∈N(i)\j} K_ui^{t−1}),

for all (i, j) ∈ Ẽ. A simple inductive argument shows that at each time t, μ_ij^t is an average among observations at K_ij^t nodes in S_ij, and after a number of iterations equal to the diameter of the graph, μ^t = μ^*. Further, for any i ∈ V,

    ȳ = (y_i + Σ_{u∈N(i)} K_ui^* μ_ui^*) / (1 + Σ_{u∈N(i)} K_ui^*),

so x_i^t converges to ȳ. This interpretation can be extended to the asynchronous case, where it elucidates the fact that μ^t and K^t become μ^* and K^* after every pair of nodes in the graph has established bilateral communication through some sequence of transmissions among adjacent nodes.
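This exact-averaging behavior is easy to check with the sketches above: letting β → ∞ makes F reduce to the recursion (4), and on a path graph the estimates reach the exact mean after a number of synchronous steps equal to the diameter. The path graph and observations below are illustrative.

    import math

    n = 5  # path graph: a singly connected example with diameter n - 1
    N = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
    Q = {(i, j): 1.0 for i in N for j in N[i]}
    y = [0.1, 0.9, 0.4, 0.6, 0.5]

    x = synchronous_cp(N, y, beta=math.inf, Q=Q, T=n - 1)
    print(x)  # every entry equals sum(y)/n = 0.5 (up to floating point)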
Suppose now that the graph has cycles. If β = ∞, for any (i, j) ∈ Ẽ that is part of a cycle, K_ij^t → ∞ whether transmissions are synchronous or asynchronous, so long as messages are transmitted along each edge of the cycle an infinite number of times. A heuristic fix might be to compose the iteration (4) with one that attenuates: K̃_ij^t ← 1 + Σ_{u∈N(i)\j} K_ui^{t−1}, and K_ij^t ← K̃_ij^t / (1 + ε_ij K̃_ij^t). Here, ε_ij > 0 is a small constant. The message is essentially unaffected when ε_ij K̃_ij^t is small but becomes increasingly attenuated as K̃_ij^t grows. This is exactly the kind of attenuation carried out by consensus propagation when βQ_ij = 1/ε_ij < ∞. Understanding why this kind of attenuation leads to desirable results is a subject of our analysis.
2.2 Relation to Belief Propagation
Consensus propagation can also be viewed as a special case of belief propagation. In this
context, belief propagation is used to approximate the marginal distributions of a vector
x ∈ R^n conditioned on the observations y ∈ R^n. The mode of each of the marginal distributions approximates ȳ.
Take the prior distribution over (x, y) to be the normalized product of potential functions {ψ_i(·) | i ∈ V} and compatibility functions {ψ_ij^β(·) | (i, j) ∈ E}, given by ψ_i(x_i) = exp(−(x_i − y_i)²), and ψ_ij^β(x_i, x_j) = exp(−βQ_ij(x_i − x_j)²), where Q_ij, for each (i, j) ∈ Ẽ, and β are positive constants. Note that β can be viewed as an inverse temperature parameter; as β increases, components of x associated with adjacent nodes become increasingly correlated.
Let Γ be a positive semidefinite symmetric matrix such that x^T Γ x = Σ_{(i,j)∈E} Q_ij (x_i − x_j)². Note that when Q_ij = 1 for all (i, j) ∈ E, Γ is the graph Laplacian. Given the vector y of observations, the conditional density of x is

    p^β(x) ∝ Π_{i∈V} ψ_i(x_i) Π_{(i,j)∈E} ψ_ij^β(x_i, x_j) = exp(−||x − y||₂² − β x^T Γ x).

Let x^β denote the mode of p^β(·). Since the distribution is Gaussian, each component x_i^β is also the mode of the corresponding marginal distribution. Note that x^β is the unique solution to the positive definite quadratic program

    minimize_x  ||x − y||₂² + β x^T Γ x.    (5)
The following theorem, whose proof can be found in [1], suggests that if β is sufficiently large each component x_i^β can be used as an estimate of the mean value ȳ.

Theorem 1. Σ_{i∈V} x_i^β / n = ȳ and lim_{β→∞} x_i^β = ȳ, for all i ∈ V.
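Since (5) is an unconstrained positive definite quadratic, its minimizer is available in closed form, x^β = (I + βΓ)^{-1} y, which gives a quick numerical check of Theorem 1. The ring graph and the β values below are illustrative.

    import numpy as np

    n = 8  # ring graph with Q_ij = 1, so Gamma is the graph Laplacian
    I = np.eye(n)
    Gamma = 2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)
    y = np.random.rand(n)

    for beta in [1.0, 10.0, 1000.0]:
        x = np.linalg.solve(I + beta * Gamma, y)  # minimizer of (5)
        print(beta, x.mean() - y.mean(), np.abs(x - y.mean()).max())
        # the first quantity is ~0 for every beta (the mean is preserved);
        # the second shrinks as beta grows, consistent with Theorem 1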
In belief propagation, messages are passed along edges of a Markov random field. In our
case, because of the structure of the distribution p^β(·), the relevant Markov random field has the same topology as the graph (V, E). The message M_ij(·) passed from node i to node j is a distribution on the variable x_j. Node i computes this message using incoming messages from other nodes as defined by the update equation

    M_ij^t(x_j) = κ ∫ ψ_ij^β(x_i′, x_j) ψ_i(x_i′) Π_{u∈N(i)\j} M_ui^{t−1}(x_i′) dx_i′.    (6)
Here, κ is a normalizing constant. Since our underlying distribution p^β(·) is Gaussian, it is natural to consider messages which are Gaussian distributions. In particular, let (μ_ij^t, K_ij^t) ∈ R × R_+ parameterize the Gaussian message M_ij^t(·) according to M_ij^t(x_j) ∝ exp(−K_ij^t (x_j − μ_ij^t)²). Then, (6) is equivalent to synchronous consensus propagation iterations for K^t and μ^t.
The sequence of densities

    p_j^t(x_j) ∝ ψ_j(x_j) Π_{i∈N(j)} M_ij^t(x_j) ∝ exp(−(x_j − y_j)² − Σ_{i∈N(j)} K_ij^t (x_j − μ_ij^t)²),

is meant to converge to an approximation of the marginal conditional distribution of x_j. As such, an approximation to x_j^β is given by maximizing p_j^t(·). It is easy to show that the maximum is attained by x_j^t = X_j(μ^t, K^t). With this and the aforementioned correspondences, we have shown that consensus propagation is a special case of belief propagation.
Readers familiar with belief propagation will notice that in the derivation above we have
used the sum product form of the algorithm. In this case, since the underlying distribution
is Gaussian, the max product form yields equivalent iterations.
3 Convergence
The following theorem is our main convergence result.
Theorem 2.
(i) There are unique vectors (μ*, K*) such that K* = F(K*) and μ* = G(μ*, K*).
(ii) Assume that each edge (i, j) ∈ Ẽ appears infinitely often in the sequence of communication sets {U_t}. Then, independent of the initial condition (μ^0, K^0), lim_{t→∞} K^t = K* and lim_{t→∞} μ^t = μ*.
(iii) Given (μ*, K*), if x̄ = X(μ*, K*), then x̄ is the mode of the distribution p^β(·).
The proof of this theorem can be found in [1], but it rests on two ideas. First, notice that,
according to the update equation (1), K^t evolves independently of μ^t. Hence, we analyze K^t first. Following the work of [7], we prove that the functions {F_ij(·)} are monotonic. This property is used to establish convergence to a unique fixed point. Next, we analyze μ^t assuming that K^t has already converged. Given fixed K, the update equations for μ^t are
linear, and we establish that they induce a contraction with respect to the maximum norm.
This allows us to establish existence of a fixed point and asynchronous convergence.
4 Convergence Time for Regular Graphs
In this section, we will study the convergence time of synchronous consensus propagation.
For ε > 0, we will say that an estimate x̂ of ȳ is ε-accurate if ||x̂ − ȳ1||_{2,n} ≤ ε. Here, for integer m, ||·||_{2,m} is the norm on R^m defined by ||x||_{2,m} = ||x||₂/√m. We are interested in the number of iterations required to obtain an ε-accurate estimate of the mean ȳ.
4.1 The Case of Regular Graphs
We will restrict our analysis of convergence time to cases where (V, E) is a d-regular graph,
for d ≥ 2. Extension of our analysis to broader classes of graphs remains an open issue. We will also make simplifying assumptions that Q_ij = 1, μ_ij^0 = y_i, and K^0 = [k_0]_ij for some scalar k_0 ≥ 0.
In this restricted setting, the subspace of constant K vectors is invariant under F. This implies that there is some scalar k* > 0 so that K* = [k*]_ij. This k* is the unique solution to the fixed point equation k* = (1 + (d−1)k*) / (1 + (1 + (d−1)k*)/β). Given a uniform initial condition K^0 = [k_0]_ij, we can study the sequence of iterates {K^t} by examining the scalar sequence {k_t}, defined by k_t = (1 + (d−1)k_{t−1}) / (1 + (1 + (d−1)k_{t−1})/β). In particular, we have K^t = [k_t]_ij, for all t ≥ 0.
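The scalar recursion and its fixed point are straightforward to check numerically; the values of d and β below are illustrative.

    def k_sequence(d, beta, T, k0=0.0):
        """Scalar iterates k_t = (1 + (d-1)k_{t-1}) / (1 + (1 + (d-1)k_{t-1})/beta)."""
        ks = [k0]
        for _ in range(T):
            s = 1.0 + (d - 1) * ks[-1]
            ks.append(s / (1.0 + s / beta))
        return ks

    ks = k_sequence(d=3, beta=50.0, T=100)
    k_star = ks[-1]
    s = 1.0 + 2 * k_star
    print(k_star, abs(k_star - s / (1.0 + s / 50.0)))  # fixed-point residual ~ 0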
Similarly, in this setting, the equations for the evolution of μ^t take the special form

    μ_ij^t = y_i / (1 + (d−1)k_{t−1}) + (1 − 1/(1 + (d−1)k_{t−1})) Σ_{u∈N(i)\j} μ_ui^{t−1} / (d−1).
Defining γ_t = 1/(1 + (d−1)k_t), we have, in vector form,

    μ^t = γ_{t−1} ỹ + (1 − γ_{t−1}) P̃ μ^{t−1},    (7)

where ỹ ∈ R^{nd} is a vector with ỹ_ij = y_i and P̃ ∈ R_+^{nd×nd} is a doubly stochastic matrix. The matrix P̃ corresponds to a Markov chain on the set of directed edges Ẽ. In this chain, an edge (i, j) transitions to an edge (u, i) with u ∈ N(i)\j, with equal probability assigned to each such edge. As in (3), we associate each μ^t with an estimate x^t of x^β according to x^t = y/(1 + dk_t) + dk_t Ã μ^t/(1 + dk_t), where Ã ∈ R_+^{n×nd} is a matrix defined by (Ãμ)_j = Σ_{i∈N(j)} μ_ij / d.
The update equation (7) suggests that the convergence of μ^t is intimately tied to a notion of mixing time associated with P̃. Let P̃* be the Cesàro limit P̃* = lim_{t→∞} (1/t) Σ_{τ=0}^{t−1} P̃^τ. Define the Cesàro mixing time τ* by τ* = sup_{t≥0} ||Σ_{τ=0}^t (P̃^τ − P̃*)||_{2,nd}. Here, ||·||_{2,nd} is the matrix norm induced by the corresponding vector norm ||·||_{2,nd}. Since P̃ is a stochastic matrix, P̃* is well-defined and τ* < ∞. Note that, in the case where P̃ is aperiodic, irreducible, and symmetric, τ* corresponds to the traditional definition of mixing time: the inverse of the spectral gap of P̃.
A time t_ε is said to be an ε-convergence time if estimates x^t are ε-accurate for all t ≥ t_ε. The following theorem, whose proof can be found in [1], establishes a bound on the convergence time of synchronous consensus propagation given appropriately chosen β, as a function of ε and τ*.

Theorem 3. Suppose k_0 ≥ k*. If d = 2 there exists a β = Θ((τ*/ε)²) and if d > 2 there exists a β = Θ(τ*/ε) such that some t_ε = O((τ*/ε) log(τ*/ε)) is an ε-convergence time. Alternatively, suppose k_0 = k*. If d = 2 there exists a β = Θ((τ*/ε)²) and if d > 2 there exists a β = Θ(τ*/ε) such that some t_ε = O((τ*/ε) log(1/ε)) is an ε-convergence time.
In the first part of the above theorem, k_0 is initialized arbitrarily so long as k_0 ≥ k*. Typically, one might set k_0 = 0 to guarantee this. The second case of interest is when k_0 = k*, so that k_t = k* for all t ≥ 0. Theorem 3 suggests that initializing with k_0 = k* leads to an improvement in convergence time. However, in our computational experience, we have found that an initial condition of k_0 = 0 consistently results in faster convergence than k_0 = k*. Hence, we suspect that a convergence time bound of O((τ*/ε) log(1/ε)) also holds for the case of k_0 = 0. Proving this remains an open issue. Theorem 3 posits choices of β that require knowledge of τ*, which may be both difficult to compute and may also require knowledge of the graph topology. This is not a major restriction, however. It is not difficult to imagine variations of Algorithm 1 which use a doubling sequence of guesses for the Cesàro mixing time τ*. Each guess leads to a choice of β and a number of iterations t_ε to run with that choice of β. Such a modified algorithm would still have an ε-convergence time of O((τ*/ε) log(τ*/ε)).
5 Comparison with Pairwise Averaging
Using the results of Section 4, we can compare the performance of consensus propagation
to that of pairwise averaging. Pairwise averaging is usually defined in an asynchronous
setting, but there is a synchronous counterpart which works as follows. Consider a doubly
stochastic symmetric matrix P ∈ R^{n×n} such that P_ij = 0 if (i, j) ∉ E. Evolve estimates according to x^t = P x^{t−1}, initialized with x^0 = y. Clearly x^t = P^t y → ȳ1 as t → ∞.
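For reference, a sketch of this synchronous counterpart follows; the uniform lazy-walk choice of P on a ring is one convenient doubly stochastic matrix, not the optimized one analyzed in [19].

    import numpy as np

    def pairwise_averaging(P, y, T):
        """Synchronous pairwise averaging: x^t = P x^{t-1}, starting from x^0 = y."""
        x = np.array(y, dtype=float)
        for _ in range(T):
            x = P @ x
        return x

    n = 8  # ring: stay with probability 1/2, move to each neighbor with 1/4
    I = np.eye(n)
    P = 0.5 * I + 0.25 * np.roll(I, 1, axis=1) + 0.25 * np.roll(I, -1, axis=1)
    y = np.random.rand(n)
    print(np.abs(pairwise_averaging(P, y, T=500) - y.mean()).max())  # decays like lambda_2^t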
In the case of a singly-connected graph, synchronous consensus propagation converges
exactly in a number of iterations equal to the diameter of the graph. Moreover, when
β = ∞, this convergence is to the exact mean, as discussed in Section 2.1. This is the best
one can hope for under any algorithm, since the diameter is the minimum amount of time
required for a message to travel between the two most distant nodes. On the other hand, for
a fixed accuracy ε, the worst-case number of iterations required by synchronous pairwise
averaging on a singly-connected graph scales at least quadratically in the diameter [18].
The rate of convergence of synchronous pairwise averaging is governed by the relation ||x^t − ȳ1||_{2,n} ≤ λ₂^t, where λ₂ is the second largest eigenvalue of P. Let τ₂ = 1/log(1/λ₂), and call it the mixing time of P. In order to guarantee ε-accuracy (independent of y), t > τ₂ log(1/ε) suffices and t = Ω(τ₂ log(1/ε)) is required [6].
Consider d-regular graphs and fix a desired error tolerance ε. The number of iterations required by consensus propagation is Θ(τ* log τ*), whereas that required by pairwise averaging is Θ(τ₂). Both mixing times depend on the size and topology of the graph. τ₂ is the mixing time of a process on nodes that transitions along edges, whereas τ* is the mixing time of a process on directed edges that transitions towards nodes. An important
distinction is that the former process is allowed to "backtrack" whereas the latter is not.
By this we mean that a sequence of states {i, j, i} can be observed in the vertex process,
but the sequence {(i, j), (j, i)} cannot be observed in the edge process. As we will now illustrate through an example, it is this difference that makes τ₂ larger than τ* and, therefore, pairwise averaging less efficient than consensus propagation.
In the case of a cycle (d = 2) with an even number of nodes n, minimizing the mixing time over P results in τ₂ = Θ(n²) [19]. For comparison, as demonstrated in the following theorem (whose proof can be found in [1]), τ* is linear in n.

Theorem 4. For the cycle with n nodes, τ* ≤ n/√2.
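For the cycle, P̃ can be written down explicitly (each directed edge has a single successor, so P̃ is a permutation with two orbits of length n), and τ* can then be evaluated directly from its definition. The sketch below does this; restricting the supremum to one period is valid here because the partial sums are periodic for a permutation, and the induced ||·||_{2,nd} matrix norm coincides with the spectral norm.

    import numpy as np

    def tau_star_cycle(n):
        """Numerical tau* for the edge chain of the n-cycle. For a permutation,
        the Cesaro limit is the average of the powers over one period n."""
        edges = [(i, (i + 1) % n) for i in range(n)] + [((i + 1) % n, i) for i in range(n)]
        idx = {e: k for k, e in enumerate(edges)}
        P = np.zeros((2 * n, 2 * n))
        for (i, j), k in idx.items():
            u = (2 * i - j) % n          # the neighbor of i other than j
            P[k, idx[(u, i)]] = 1.0      # edge (i, j) transitions from edge (u, i)
        powers = [np.eye(2 * n)]
        for _ in range(n - 1):
            powers.append(powers[-1] @ P)
        P_star = sum(powers) / n
        S, tau = np.zeros_like(P), 0.0
        for Pt in powers:
            S = S + (Pt - P_star)
            tau = max(tau, np.linalg.norm(S, 2))  # spectral norm of partial sums
        return tau

    for n in [8, 16, 32]:
        print(n, tau_star_cycle(n), n / np.sqrt(2))  # observed tau* vs. the bound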
Intuitively, the improvement in mixing time arises from the fact that the edge process moves
around the cycle in a single direction and therefore explores the entire graph within n
iterations. The vertex process, on the other hand, randomly transitions back and forth
among adjacent nodes, relying on chance to eventually explore the entire cycle.
The cycle example demonstrates a Θ(n/log n) advantage offered by consensus propagation. Comparisons of mixing times associated with other graph topologies remain an issue for future analysis. But let us close by speculating on a uniform grid of n nodes over the m-dimensional unit torus. Here, n^{1/m} is an integer, and each vertex has 2m neighbors, each a distance n^{−1/m} away. With P optimized, it can be shown that τ₂ = Θ(n^{2/m}) [20]. We put forth a conjecture on τ*.

Conjecture 1. For the m-dimensional torus with n nodes, τ* = Θ(n^{(2m−1)/m²}).
Acknowledgments
The authors wish to thank Balaji Prabhakar and Ashish Goel for their insights and comments. The
first author was supported by a Benchmark Stanford Graduate Fellowship. This research was supported in part by the National Science Foundation through grant IIS-0428868 and a supplement to
grant ECS-9985229 provided by the Management of Knowledge Intensive Dynamic Systems Program (MKIDS).
References
[1] C. C. Moallemi and B. Van Roy. Consensus propagation. Technical report, Management Science & Engineering Department, Stanford University, 2005. URL: http://www.moallemi.com/ciamac/papers/cp-2005.pdf.
[2] L. Xiao, S. Boyd, and S. Lall. A scheme for robust distributed sensor fusion based on average consensus. To appear in the proceedings of IPSN, 2005.
[3] C. C. Moallemi and B. Van Roy. Distributed optimization in adaptive networks. In Advances in Neural Information Processing Systems 16, 2004.
[4] J. N. Tsitsiklis. Problems in Decentralized Decision-Making and Computation. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 1984.
[5] D. Kempe, A. Dobra, and J. Gehrke. Gossip-based computation of aggregate information. In ACM Symposium on Theory of Computing, 2004.
[6] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Gossip algorithms: Design, analysis and applications. To appear in the proceedings of INFOCOM, 2005.
[7] P. Rusmevichientong and B. Van Roy. An analysis of belief propagation on the turbo decoding graph with Gaussian densities. IEEE Transactions on Information Theory, 47(2):745-765, 2001.
[8] S. Tatikonda and M. I. Jordan. Loopy belief propagation and Gibbs measures. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, 2002.
[9] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379-2413, 2004.
[10] A. T. Ihler, J. W. Fisher III, and A. S. Willsky. Message errors in belief propagation. In Advances in Neural Information Processing Systems, 2005.
[11] G. Forney, F. Kschischang, and B. Marcus. Iterative decoding of tail-biting trellises. In Proceedings of the 1998 Information Theory Workshop, 1998.
[12] S. M. Aji, G. B. Horn, and R. J. McEliece. On the convergence of iterative decoding on graphs with a single cycle. In Proceedings of CISS, 1998.
[13] Y. Weiss and W. T. Freeman. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12:1-41, 2000.
[14] M. Bayati, D. Shah, and M. Sharma. Maximum weight matching via max-product belief propagation. Preprint, 2005.
[15] V. Saligrama, M. Alanyali, and O. Savas. Asynchronous distributed detection in sensor networks. Preprint, 2005.
[16] Y. Weiss and W. T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13:2173-2200, 2001.
[17] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont, MA, 1997.
[18] S. Boyd, P. Diaconis, J. Sun, and L. Xiao. Fastest mixing Markov chain on a path. Submitted to The American Mathematical Monthly, 2003.
[19] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Mixing times for random walks on geometric random graphs. To appear in the proceedings of SIAM ANALCO, 2005.
[20] S. Roch. Bounded fastest mixing. Preprint, 2004.
2,108 | 2,914 | Non-Local Manifold Parzen Windows
Yoshua Bengio, Hugo Larochelle and Pascal Vincent
Dept. IRO, Université de Montréal
P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7, Qc, Canada
{bengioy,larocheh,vincentp}@iro.umontreal.ca
Abstract
To escape from the curse of dimensionality, we claim that one can learn
non-local functions, in the sense that the value and shape of the learned
function at x must be inferred using examples that may be far from x.
With this objective, we present a non-local non-parametric density estimator. It builds upon previously proposed Gaussian mixture models with
regularized covariance matrices to take into account the local shape of
the manifold. It also builds upon recent work on non-local estimators of
the tangent plane of a manifold, which are able to generalize in places
with little training data, unlike traditional, local, non-parametric models.
1 Introduction
A central objective of statistical machine learning is to discover structure in the joint distribution between random variables, so as to be able to make predictions about new combinations of values of these variables. A central issue in obtaining generalization is how
information from the training examples can be used to make predictions about new examples and, without strong prior assumptions (i.e. in non-parametric models), this may be
fundamentally difficult, as illustrated by the curse of dimensionality.
(Bengio, Delalleau and Le Roux, 2005) and (Bengio and Monperrus, 2005) present several arguments illustrating some fundamental limitations of modern kernel methods due
to the curse of dimensionality, when the kernel is local (like the Gaussian kernel). These
arguments are all based on the locality of the estimators, i.e., that very important information about the predicted function at x is derived mostly from the near neighbors of x in the
training set. This analysis has been applied to supervised learning algorithms such as SVMs
as well as to unsupervised manifold learning algorithms and graph-based semi-supervised
learning. The analysis in (Bengio, Delalleau and Le Roux, 2005) highlights intrinsic limitations of such local learning algorithms, that can make them fail when applied on problems
where one has to look beyond what happens locally in order to overcome the curse of dimensionality, or more precisely when the function to be learned has many variations while
there exist more compact representations of these variations than a simple enumeration.
This strongly suggests to investigate non-local learning methods, which can in principle
generalize at x using information gathered at training points xi that are far from x. We
present here such a non-local learning algorithm, in the realm of density estimation.
The proposed non-local non-parametric density estimator builds upon the Manifold Parzen
density estimator (Vincent and Bengio, 2003) that associates a regularized Gaussian with
each training point, and upon recent work on non-local estimators of the tangent plane of
a manifold (Bengio and Monperrus, 2005). The local covariance matrix characterizing the
density in the immediate neighborhood of a data point is learned as a function of that data
point, with global parameters. This makes it possible to generalize in places with little or
no training data, unlike traditional, local, non-parametric models. Here, the implicit assumption is that there is some kind of regularity in the shape of the density, such that learning about its shape in one region could be informative of the shape in another region that
is not adjacent. Note that the smoothness assumption typically underlying non-parametric
models relies on a simple form of such transfer, but only for neighboring regions, which is
not very helpful when the intrinsic dimension of the data (the dimension of the manifold
on which or near which it lives) is high or when the underlying density function has many
variations (Bengio, Delalleau and Le Roux, 2005). The proposed model is also related to
the Neighborhood Component Analysis algorithm (Goldberger et al., 2005), which learns
a global covariance matrix for use in the Mahalanobis distance within a non-parametric
classifier. Here we generalize this global matrix to one that is a function of the datum x.
2 Manifold Parzen Windows
In the Parzen Windows estimator, one puts a spherical (isotropic) Gaussian around each
training point xi , with a single shared variance hyper-parameter. One approach to improve
on this estimator, introduced in (Vincent and Bengio, 2003), is to use not just the presence
of xi and its neighbors but also their geometry, trying to infer the principal characteristics of
the local shape of the manifold (where the density concentrates), which can be summarized
in the covariance matrix of the Gaussian, as illustrated in Figure 1. If the data concentrates
in certain directions around xi , we want that covariance matrix to be "flat" (near zero
variance) in the orthogonal directions.
One way to achieve this is to parametrize each of these covariance matrices in terms of
"principal directions" (which correspond to the tangent vectors of the manifold, if the data
concentrates on a manifold). In this way we do not need to specify individually all the
entries of the covariance matrix. The only required assumption is that the "noise directions"
orthogonal to the "principal directions" all have the same variance.
$$\hat{p}(y) = \frac{1}{n} \sum_{i=1}^{n} N\left(y;\; x_i + \mu(x_i),\; S(x_i)\right) \qquad (1)$$

where N(y; x_i + µ(x_i), S(x_i)) is a Gaussian density at y, with mean vector x_i + µ(x_i) and
covariance matrix S(x_i) represented compactly by

$$S(x_i) = \sigma^2_{\mathrm{noise}}(x_i)\, I + \sum_{j=1}^{d} s_j^2(x_i)\, v_j(x_i)\, v_j(x_i)^\top \qquad (2)$$

where s_j²(x_i) and σ²_noise(x_i) are scalars, and v_j(x_i) denotes a "principal" direction with
variance s_j²(x_i) + σ²_noise(x_i), while σ²_noise(x_i) is the noise variance (the variance in all the
other directions). v_j(x_i)^⊤ denotes the transpose of v_j(x_i).
In (Vincent and Bengio, 2003), µ(x_i) = 0, and σ²_noise(x_i) = σ₀² is a global hyper-parameter,
while (λ_j(x_i), v_j) = (s_j²(x_i) + σ²_noise(x_i), v_j(x_i)) are the leading (eigenvalue, eigenvector) pairs from the eigen-decomposition of a locally weighted covariance
matrix (e.g. the empirical covariance of the vectors x_l − x_i, with x_l a near neighbor of x_i).
The "noise level" hyper-parameter σ₀² must be chosen such that the principal eigenvalues
are all greater than σ₀². Another hyper-parameter is the number d of principal components
to keep. Alternatively, one can choose σ²_noise(x_i) to be the (d + 1)-th eigenvalue, which
guarantees that λ_j(x_i) > σ²_noise(x_i), and gets rid of a hyper-parameter. This very simple
numerical experiments in which all hyper-parameters are chosen by cross-validation.
3 Non-Local Manifold Tangent Learning
In (Bengio and Monperrus, 2005) a manifold learning algorithm was introduced in which
the tangent plane of a d-dimensional manifold at x is learned as a function of x ? RD ,
using globally estimated parameters. The output of the predictor function F (x) is a d ? D
matrix whose d rows are the d (possibly non-orthogonal) vectors that span the tangent
plane. The training information about the tangent plane is obtained by considering pairs of
near neighbors xi and xj in the training set. Consider the predicted tangent plane of the
manifold at xi , characterized by the rows of F (xi ). For a good predictor we expect the
vector (xi ? xj ) to be close to its projection on the tangent plane, with local coordinates
w ? Rd . w can be obtained analytically by solving a linear system of dimension d.
The training criterion chosen in (Bengio and Monperrus, 2005) then minimizes the sum
over such (xi , xj ) of the sinus of the projection angle, i.e. ||F ? (xi )w ? (xj ? xi )||2 /||xj ?
xi ||2 . It is a heuristic criterion, which will be replaced in our new algorithm by one derived from the maximum likelihood criterion, considering that F (xi ) indirectly provides
the principal eigenvectors of the local covariance matrix at xi . Both criteria gave similar
results experimentally, but the model proposed here yields a complete density estimator. In
both cases F (xi ) can be interpreted as specifying the directions in which one expects to
see the most variations when going from xi to one of its near neighbors in a finite sample.
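For concreteness, the criterion for one neighbor pair can be computed as in the sketch below (our illustration; the function name is hypothetical). The local coordinates w solve the d-dimensional linear system (F F^⊤) w = F (x_j − x_i):

import numpy as np

def tangent_projection_error(F_xi, xi, xj):
    # Sinus-squared criterion for one pair (x_i, x_j):
    # ||F^T w - (x_j - x_i)||^2 / ||x_j - x_i||^2,
    # with F_xi a (d, D) matrix whose rows span the predicted tangent plane.
    delta = xj - xi
    w = np.linalg.solve(F_xi @ F_xi.T, F_xi @ delta)   # d-dimensional system
    residual = F_xi.T @ w - delta
    return (residual @ residual) / (delta @ delta)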
Figure 1: Illustration of the local parametrization of local or Non-Local Manifold Parzen.
The examples around training point x_i are modeled by a Gaussian. µ(x_i) specifies the
center of that Gaussian, which should be non-zero when x_i is off the manifold. The v_k's are
principal directions of the Gaussian and are tangent vectors of the manifold; in the figure, the
first principal direction v_1 has standard deviation √(s_1² + σ²_noise). σ_noise represents the thickness of the manifold.
4 Proposed Algorithm: Non-Local Manifold Parzen Windows
In equations (1) and (2) we wrote µ(x_i) and S(x_i) as if they were functions of x_i rather
than simply using indices µ_i and S_i. This is because we introduce here a non-local version of Manifold Parzen Windows inspired from the non-local manifold tangent learning
algorithm, i.e., in which we can share information about the density across different
regions of space. In our experiments we use a neural network of n_hid hidden neurons,
with x_i in input to predict µ(x_i), σ²_noise(x_i), and the s_j²(x_i) and v_j(x_i). The vectors computed by the neural network do not need to be orthonormal: we only need to consider the
subspace that they span. Also, the vectors' squared norm is used to infer s_j²(x_i), instead
of having a separate output for them. We will note F(x_i) the matrix whose rows are the
vectors output of the neural network. From it we obtain the s_j²(x_i) and v_j(x_i) by performing a singular value decomposition, i.e. $F^\top F = \sum_{j=1}^{d} s_j^2 v_j v_j^\top$. Moreover, to make sure
σ²_noise does not get too small, which could make the optimization unstable, we impose
σ²_noise(x_i) = s²_noise(x_i) + σ₀², where s_noise(·) is an output of the neural network and σ₀² is
a fixed constant.
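A small sketch (ours, with hypothetical names) of how S(x) would be assembled from the network outputs via this singular value decomposition:

import numpy as np

def covariance_from_network_outputs(F, s_noise, sigma0_sq=0.05):
    # F: (d, D) matrix of predicted (non-orthonormal) basis vectors.
    # s_noise: scalar network output; sigma0_sq: the fixed constant sigma_0^2.
    D = F.shape[1]
    _, s, Vt = np.linalg.svd(F, full_matrices=False)   # F^T F = sum_j s_j^2 v_j v_j^T
    sigma2_noise = s_noise ** 2 + sigma0_sq
    S = sigma2_noise * np.eye(D)
    for sj, vj in zip(s, Vt):
        S += sj ** 2 * np.outer(vj, vj)
    return S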
Imagine that the data were lying near a lower dimensional manifold. Consider a training
example x_i near the manifold. The Gaussian centered near x_i tells us how neighbors of
x_i are expected to differ from x_i. Its "principal" vectors v_j(x_i) span the tangent of the
manifold near x_i. The Gaussian center variation µ(x_i) tells us how x_i is located with
respect to its projection on the manifold. The noise variance σ²_noise(x_i) tells us how far
from the manifold to expect neighbors, and the directional variances s_j²(x_i) + σ²_noise(x_i)
tell us how far to expect neighbors on the different local axes of the manifold, near x_i's
projection on the manifold. Figure 1 illustrates this in 2 dimensions.
The important element of this model is that the parameters of the predictive neural network
can potentially represent non-local structure in the density, i.e., they make it possible to
discover shared structure among the different covariance matrices in the mixture. Here is
the pseudo code algorithm for training Non-Local Manifold Parzen (NLMP):
Algorithm NLMP::Train(X, d, k, k', µ(·), S(·), σ₀²)
Input: training set X, chosen number of principal directions d, chosen number of
neighbors k and k', initial functions µ(·) and S(·), and regularization hyper-parameter
σ₀².
(1) For x_i ∈ X
(2) Collect max(k, k') nearest neighbors of x_i.
    Below, call y_j one of the k nearest neighbors, y'_j one of the k' nearest neighbors.
(3) Perform a stochastic gradient step on parameters of S(·) and µ(·),
    using the negative log-likelihood error signal on the y_j, with a Gaussian
    of mean x_i + µ(x_i) and of covariance matrix S(x_i).
The approximate gradients are:
$$\frac{\partial C(y'_j, x_i)}{\partial \mu(x_i)} = -\frac{1}{n_{k'}(y'_j)}\, S(x_i)^{-1}\, (y'_j - x_i - \mu(x_i))$$

$$\frac{\partial C(y_j, x_i)}{\partial \sigma^2_{\mathrm{noise}}(x_i)} = \frac{0.5}{n_k(y_j)} \left( \mathrm{Tr}\left(S(x_i)^{-1}\right) - \left\| (y_j - x_i - \mu(x_i))^\top S(x_i)^{-1} \right\|^2 \right)$$

$$\frac{\partial C(y_j, x_i)}{\partial F(x_i)} = \frac{1}{n_k(y_j)}\, F(x_i)\, S(x_i)^{-1} \left( I - (y_j - x_i - \mu(x_i))(y_j - x_i - \mu(x_i))^\top S(x_i)^{-1} \right)$$

where n_k(y) = |N_k(y)| is the number of points in the training set that
have y among their k nearest neighbors.
(4) Go to (1) until a given criterion is satisfied (e.g. average NLL of NLMP density
estimation on a validation set stops decreasing)
Result: trained µ(·) and S(·) functions, with corresponding σ₀².
Deriving the gradient formula (the derivative of the log-likelihood with respect to the neural
network outputs) is lengthy but straightforward. The main trick is to do a Singular Value
Decomposition of the basis vectors computed by the neural network, and to use known
simplifying formulas for the derivative of the inverse of a matrix and of the determinant of
a matrix. Details on the gradient derivation and on the optimization of the neural network
are given in the technical report (Bengio and Larochelle, 2005).
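As an illustration, the gradient formulas of step (3) transcribe directly into numpy. The sketch below is ours (not the authors' code) and treats a single neighbor y with weight 1/n_k:

import numpy as np

def nlmp_gradients(y, xi, mu, F, S, n_k):
    # Gradients of the per-neighbor negative log-likelihood C(y, x_i),
    # with S the covariance of eq. (2) at x_i and r the residual to the mean.
    Sinv = np.linalg.inv(S)
    r = y - xi - mu
    dC_dmu = -(1.0 / n_k) * (Sinv @ r)
    Sinv_r = Sinv @ r
    dC_dsigma2 = 0.5 / n_k * (np.trace(Sinv) - Sinv_r @ Sinv_r)
    dC_dF = (1.0 / n_k) * F @ Sinv @ (np.eye(len(xi)) - np.outer(r, r) @ Sinv)
    return dC_dmu, dC_dsigma2, dC_dF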
5 Computationally Efficient Extension: Test-Centric NLMP
While the NLMP algorithm appears to perform very well, one of its main practical limitations for density estimation, one that it shares with Manifold Parzen, is the large amount of
computation required upon testing: for each test point x, the complexity of the computation
is O(n·d·D) (where D is the dimensionality of the input space ℝ^D).
However, there may be a different and cheaper way to compute an estimate of the density
at x. We build here on an idea suggested in (Vincent, 2003), which yields an estimator that
does not exactly integrate to one, but this is not an issue if the estimator is to be used for
applications such as classification. Note that in our presentation of NLMP, we are using
"hard" neighborhoods (i.e. a local weighting kernel that assigns a weight of 1 to the k
nearest neighbors and 0 to the rest) but it could easily be generalized to "soft" weighting,
as in (Vincent, 2003).
Let us decompose the true density at x as: p(x) = p(x | x ∈ B_k(x)) P(B_k(x)), where
B_k(x) represents the spherical ball centered on x and containing the k nearest neighbors
of x (i.e., the ball with radius ‖x − N_k(x)‖ where N_k(x) is the k-th neighbor of x in the
training set).
It can be shown that the above NLMP learning procedure looks for functions µ(·) and S(·)
that best characterize the distribution of the k training-set nearest neighbors of x as the
normal N(·; x + µ(x), S(x)). If we trust this locally normal (unimodal) approximation of
the neighborhood distribution to be appropriate then we can approximate p(x | x ∈ B_k(x))
by N(x; x + µ(x), S(x)). The approximation should be good when B_k(x) is small and
p(x) is continuous. Moreover, as B_k(x) contains k points among n, we can approximate
P(B_k(x)) by k/n.
This yields the estimator p̂(x) = N(x; x + µ(x), S(x)) · k/n, which requires only O(d·D) time
to evaluate at a test point. We call this estimator Test-centric NLMP, since it considers only
the Gaussian predicted at the test point, rather than a mixture of all the Gaussians obtained
at the training points.
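A minimal sketch of this estimator (ours; mu_fn and S_fn stand in for the trained predictors) evaluates the Gaussian in the log domain for numerical stability:

import numpy as np

def test_centric_nlmp_density(x, mu_fn, S_fn, k, n):
    # p_hat(x) = N(x; x + mu(x), S(x)) * k / n
    mu, S = mu_fn(x), S_fn(x)
    D = len(x)
    r = -mu                               # x - (x + mu(x))
    _, logdet = np.linalg.slogdet(S)
    log_gauss = -0.5 * (D * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(S, r))
    return np.exp(log_gauss) * k / n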
6 Experimental Results
We have performed comparative experiments on both toy and real-world data, on density
estimation and classification tasks. All hyper-parameters are selected by cross-validation,
and the cost on a large test set is used to compare the final performance of all algorithms.
Experiments on toy 2D data. To understand and validate the non-local algorithm we
tested it on toy 2D data where it is easy to understand what is being learned. The sinus
data set includes examples sampled around a sinus curve. In the spiral data set examples
are sampled near a spiral. Respectively, 57 and 113 examples are used for training, 23 and
48 for validation (hyper-parameter selection), and 920 and 3839 for testing. The following
algorithms were compared:
• Non-Local Manifold Parzen Windows. The hyper-parameters are the number of principal directions (i.e., the dimension of the manifold), the number of nearest neighbors k and
k', the minimum constant noise variance σ₀² and the number of hidden units of the neural
network.
• Gaussian mixture with full but regularized covariance matrices. Regularization is done
by setting a minimum constant value σ₀² to the eigenvalues of the Gaussians. It is trained
by EM and initialized using the k-means algorithm. The hyper-parameter is σ₀², and early
stopping of EM iterations is done with the validation set.
• Parzen Windows density estimator, with a spherical Gaussian kernel. The hyper-parameter is the spread of the Gaussian kernel.
• Manifold Parzen density estimator. The hyper-parameters are the number of principal
components, k of the nearest neighbor kernel and the minimum eigenvalue σ₀².
Note that, for these experiments, the number of principal directions (or components) was
fixed to 1 for both NLMP and Manifold Parzen.
Density estimation results are shown in table 1. To help understand why Non-Local Manifold Parzen works well on these data, figure 2 illustrates the learned densities for the sinus
and spiral data. Basically, it works better here because it yields an estimator that is less sensitive to the specific samples around each test point, thanks to its ability to share structure
across the whole training set.

Algorithm         sinus    spiral
Non-Local MP      1.144    -1.346
Manifold Parzen   1.345    -0.914
Gauss Mix Full    1.567    -0.857
Parzen Windows    1.841    -0.487

Table 1: Average out-of-sample negative log-likelihood on two toy problems, for Non-Local
Manifold Parzen, a Gaussian mixture with full covariance, Manifold Parzen, and Parzen
Windows. The non-local algorithm dominates all the others.

Algorithm         Valid.    Test
Non-Local MP      -73.10    -76.03
Manifold Parzen    65.21     58.33
Parzen Windows     77.87     65.94

Table 2: Average Negative Log-Likelihood on the digit rotation experiment, when testing on
a digit class (1's) not used during training, for Non-Local Manifold Parzen, Manifold Parzen,
and Parzen Windows. The non-local algorithm is clearly superior.
Figure 2: Illustration of the learned densities (sinus on top, spiral on bottom) for four compared models. From left to right: Non-Local Manifold Parzen, Gaussian mixture, Parzen
Windows, Manifold Parzen. Parzen Windows wastes probability mass in the spheres around
each point, while leaving many holes. Gaussian mixtures tend to choose too few components to avoid overfitting. The Non-Local Manifold Parzen exploits global structure to yield
the best estimator.
Experiments on rotated digits. The next experiment is meant to show both qualitatively
and quantitatively the power of non-local learning, by using 9 classes of rotated digit images
(from the first 729 examples of the USPS training set) to learn about the rotation manifold and
testing on the left-out class (digit 1), not used for training. Each training digit was rotated
by 0.1 and 0.2 radians and all these images were used as training data. We used NLMP
for training, and for testing we formed an augmented mixture with Gaussians centered not
only on the training examples, but also on the original unrotated 1 digits. We tested our
estimator on the rotated versions of each of the 1 digits. We compared this to Manifold
Parzen trained on the training data containing both the original and rotated images of the
training class digits and the unrotated 1 digits. The objective of the experiment was to see
if the model was able to infer the density correctly around the original unrotated images,
i.e., to predict a high probability for the rotated versions of these images. In table 2 we see
quantitatively that the non-local estimator predicts the rotated images much better.
As qualitative evidence, we used small steps in the principal direction predicted by Testcentric NLMP to rotate an image of the digit 1. To make this task even more illustrative of
the generalization potential of non-local learning, we followed the tangent in the direction
opposite to the rotations of the training set. It can be seen in figure 3 that the rotated
digit obtained is quite similar to the same digit analytically rotated.

Figure 3: From left to right: original image of a digit 1; rotated analytically by −0.2
radians; rotation predicted using Non-Local MP; rotation predicted using MP. Rotations
are obtained by following the tangent vector in small steps.

For comparison, we
tried to apply the same rotation technique to that digit, but by using the principal direction,
computed by Manifold Parzen, of its nearest neighbor?s Gaussian component in the training
set. This clearly did not work, and hence shows how crucial non-local learning is for this
task.
In this experiment, to make sure that NLMP focuses on the tangent plane of the rotation
manifold, we fixed the number of principal directions d = 1 and the number of nearest
neighbors k = 1, and also imposed µ(·) = 0. The same was done for Manifold Parzen.
Experiments on Classification by Density Estimation. The USPS data set was used
to perform a classification experiment. The original training set (7291) was split into a
training (first 6291) and validation set (last 1000), used to tune hyper-parameters. One
density estimator for each of the 10 digit classes is estimated. For comparison we also
show the results obtained with a Gaussian kernel Support Vector Machine (already used
in (Vincent and Bengio, 2003)). Non-local MP* refers to the variation described in (Bengio
and Larochelle, 2005), which attempts to train faster the components with larger variance.
The t-test statistic for the null hypothesis of no difference in the average classification
error on the test set of 2007 examples between Non-local MP and the strongest competitor
(Manifold Parzen) is shown in parenthesis. Figure 4 also shows some of the invariant
transformations learned by Non-local MP for this task.
Note that better SVM results (about 3% error) can be obtained using prior knowledge about
image invariances, e.g. with virtual support vectors (Decoste and Scholkopf, 2002). However, as far as we know the NLMP performance is the best on the original USPS dataset
among algorithms that do not use prior knowledge about images.
Algorithm          Valid.   Test               Hyper-Parameters
SVM                1.2%     4.68%              C = 100, σ = 8
Parzen Windows     1.8%     5.08%              σ = 0.8
Manifold Parzen    0.9%     4.08%              d = 11, k = 11, σ₀² = 0.1
Non-local MP       0.6%     3.64% (-1.5218)    d = 7, k = 10, k' = 10, σ₀² = 0.05, n_hid = 70
Non-local MP*      0.6%     3.54% (-1.9771)    d = 7, k = 10, k' = 4, σ₀² = 0.05, n_hid = 30

Table 3: Classification error obtained on USPS with SVM, Parzen Windows and Local and
Non-Local Manifold Parzen Windows classifiers. The hyper-parameters shown are those
selected with the validation set.
7 Conclusion
We have proposed a non-parametric density estimator that, unlike its predecessors, is able
to generalize far from the training examples by capturing global structural features of the
density.

Figure 4: Transformations learned by Non-local MP. The top row shows digits taken from
the USPS training set, and the two following rows display the results of steps taken by one
of the 7 principal directions learned by Non-local MP, the third one corresponding to more
steps than the second one.

It does so by learning a function with global parameters that successfully predicts
the local shape of the density, i.e., the tangent plane of the manifold along which the density
concentrates. Three types of experiments showed that this idea works, yields improved
density estimation and reduced classification error compared to its local predecessors.
Acknowledgments
The authors would like to thank the following funding organizations for support: NSERC,
MITACS, and the Canada Research Chairs. The authors are also grateful for the feedback
and stimulating exchanges that helped to shape this paper, with Sam Roweis and Olivier
Delalleau.
References
Bengio, Y., Delalleau, O., and Le Roux, N. (2005). The curse of dimensionality for local
kernel machines. Technical Report 1258, Département d'informatique et recherche
opérationnelle, Université de Montréal.
Bengio, Y. and Larochelle, H. (2005). Non-local manifold parzen windows. Technical report, Département d'informatique et recherche opérationnelle, Université de Montréal.
Bengio, Y. and Monperrus, M. (2005). Non-local manifold tangent learning. In Saul, L.,
Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems
17. MIT Press.
Decoste, D. and Scholkopf, B. (2002). Training invariant support vector machines. Machine Learning, 46:161?190.
Goldberger, J., Roweis, S., Hinton, G., and Salakhutdinov, R. (2005). Neighbourhood
component analysis. In Saul, L., Weiss, Y., and Bottou, L., editors, Advances in
Neural Information Processing Systems 17. MIT Press.
Vincent, P. (2003). Modèles à Noyaux à Structure Locale. PhD thesis, Université de
Montréal, Département d'informatique et recherche opérationnelle, Montreal, Qc.,
Canada.
Vincent, P. and Bengio, Y. (2003). Manifold parzen windows. In Becker, S., Thrun, S.,
and Obermayer, K., editors, Advances in Neural Information Processing Systems 15,
Cambridge, MA. MIT Press.
2,109 | 2,915 | Variable KD-Tree Algorithms for Spatial Pattern
Search and Discovery
Jeremy Kubica
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Joseph Masiero
Institute for Astronomy
University of Hawaii
Honolulu, HI 96822
Andrew Moore
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
[email protected]
[email protected]
Robert Jedicke
Institute for Astronomy
University of Hawaii
Honolulu, HI 96822
Andrew Connolly
Physics & Astronomy Department
University of Pittsburgh
Pittsburgh, PA 15213
[email protected]
[email protected]
Abstract
In this paper we consider the problem of finding sets of points that conform to a given underlying model from within a dense, noisy set of observations. This problem is motivated by the task of efficiently linking
faint asteroid detections, but is applicable to a range of spatial queries.
We survey current tree-based approaches, showing a trade-off exists between single tree and multiple tree algorithms. To this end, we present a
new type of multiple tree algorithm that uses a variable number of trees
to exploit the advantages of both approaches. We empirically show that
this algorithm performs well using both simulated and astronomical data.
1 Introduction
Consider the problem of detecting faint asteroids from a series of images collected on a
single night. Inherently, the problem is simply one of connect-the-dots. Over a single
night we can treat the asteroid?s motion as linear, so we want to find detections that, up
to observational errors, lie along a line. However, as we consider very faint objects, several difficulties arise. First, objects near our brightness threshold may oscillate around this
threshold, blinking into and out-of our images and providing only a small number of actual
detections. Second, as we lower our detection threshold we will begin to pick up more spurious noise points. As we look for really dim objects, the number of noise points increases
greatly and swamps the number of detections of real objects.
The above problem is one example of a model based spatial search. The goal is to identify
sets of points that fit some given underlying model. This general task encompasses a wide
range of real-world problems and spatial models. For example, we may want to detect
a specific configuration of corner points in an image or search for multi-way structure in
scientific data. We focus our discussion on problems that have a high density of both true
and noise points, but which may have only a few points actually from the model of interest.
Returning to the asteroid linking example, this corresponds to finding a handful of points
that lie along a line within a data set of millions of detections.
Below we survey several tree-based approaches for efficiently solving this problem. We
show that both single tree and conventional multiple tree algorithms can be inefficient and
that a trade-off exists between these approaches. To this end, we propose a new type of
multiple tree algorithm that uses a variable number of tree nodes. We empirically show
that this new algorithm performs well using both simulated and real-world data.
2 Problem Definition
Our problem consists of finding sets of points that fit a given underlying spatial model. In
doing so, we are effectively looking for known types of structure buried within the data. In
general, we are interested in finding sets with k or more points, thus providing a sufficient
amount of support to confirm the discovery. Finding this structure within the data may
either be our end goal, such as in asteroid linkage, or may just be a preprocessor for a more
sophisticated statistical test, such as renewal strings [1]. We are particularly interested in
high-density, low-support domains where there may be many hundreds of thousands of
points, but only a handful actually support our model.
Formally, the data consists of N unique D-dimensional points. We assume that the underlying model can be estimated from c unique points. Since k ≥ c, the model may be overconstrained. In these cases we divide the points into two sets: Model Points and Support
Points. Model points are the c points used to fully define the underlying model. Support
points are the remaining points used to confirm the model. For example, if we are searching for sets of k linear points, we could use a set's endpoints as model points and treat the
middle k − 2 as support points. Or we could allow any two points to serve as model points,
providing an exhaustive variant of the RANSAC algorithm [2].
The prototypical example used in this paper is the (linear) asteroid linkage problem:
For each pair of points find the k − 2 best support points for the line that
they define (such that we use at most one point at each time step).
In addition, we place restrictions on the validity of the initial pairs by providing velocity
bounds. It is important to note that although we use this problem as a running example, the
techniques described can be applied to a range of spatial problems.
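As a small illustration (our sketch; the function name is hypothetical, and the default bounds are the ones used in the experiments later in the paper), the velocity-bound check on an initial pair could look like:

import numpy as np

def pair_feasible(p_a, t_a, p_b, t_b, v_min=0.05, v_max=0.5):
    # speed, in degrees per day, implied by the two detections
    speed = np.linalg.norm(p_b - p_a) / abs(t_b - t_a)
    return v_min <= speed <= v_max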
3 Overview of Previous Approaches
3.1 Constructive Algorithms
Constructive algorithms ?build up? valid sets of points by repeatedly finding additional
points that are compatible with the current set. Perhaps the simplest approach is to perform
a two-tiered brute force search. First, we exhaustively test all sets of c points to determine
if they define a valid model. Then, for each valid set we test all of the remaining points for
support. For example in the asteroid linkage problem, we can initially search over all O(N²)
pairs of points and for each of the resulting lines test all O(N) points to determine if they
support that line. A similar approach within the domain of target tracking is sequential
tracking (for a good introduction see [3]), where points at early time steps are used to
estimate a track that is then projected to later time steps to find additional support points.
In large-scale domains, these approaches can often be made tractable by using spatial structure in the data. Again returning to our asteroid example, we can place the points in a
KD-tree [4]. We can then limit the number of initial pairs examined by using this tree to
find points compatible with our velocity constraints. Further, we can use the KD-tree to
only search for support points in localized regions around the line, ignoring large numbers
of obviously infeasible points. Similarly, trees have been used in tracking algorithms to
efficiently find points near predicted track positions [5]. We call these adaptations single
tree algorithms, because at any given time the algorithm is searching at most one tree.
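To make the single tree step concrete, the sketch below (our illustration, assuming scipy's cKDTree; it is not the authors' code) counts how many time steps contribute a detection near the line defined by an initial pair. For brevity it builds a tree per time step inside the loop; in practice the trees would be built once:

import numpy as np
from scipy.spatial import cKDTree

def line_support(points, times, p_a, t_a, p_b, t_b, eps=3e-4):
    # points: (N, 2) array of detections; times: (N,) array of their time stamps.
    v = (p_b - p_a) / (t_b - t_a)             # linear motion estimate
    support = 0
    for t in np.unique(times):
        tree = cKDTree(points[times == t])
        pred = p_a + v * (t - t_a)            # predicted position at time t
        if tree.query_ball_point(pred, eps):  # any detection within eps?
            support += 1
    return support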
3.2 Parameter Space Methods
Another approach is to search for valid sets of points by searching the model?s parameter
space, such as in the Hough transform [6]. The idea behind these approaches is that we can
test whether each point is compatible with a small set of model parameters, allowing us to
search parameter space to find the valid sets. However, this method can be expensive in
terms of both computation and memory, especially for high dimensional parameter spaces.
Further, if the model?s total support is low, the true model occurrences may be effectively
washed out by the noise. For these reasons we do not consider parameter space methods.
3.3 Multiple Tree Algorithms
The primary benefit of tree-based algorithms is that they are able to use spatial structure
within the data to limit the cost of the search. However, there is a clear potential to push
further and use structure from multiple aspects of the search at the same time. In doing
so we can hopefully avoid many of the dead ends and wrong turns that may result from
exploring bad initial associations in the first few points in our model. For example, in the
domain of asteroid linkage we may be able to limit the number of short, initial associations
that we have to consider by using information from later time steps. This idea forms the
basis of multiple tree search algorithms [7, 8, 9].
Multiple tree methods explicitly search for the entire set of points at once by searching
over combinations of tree nodes. In standard single tree algorithms, the search tries to find
individual points satisfying some criteria (e.g. the next point to add) and the search state
is represented by a single node that could contain such a point. In contrast, multiple tree
algorithms represent the current search state with multiple tree nodes that could contain
points that together conform to the model. Initially, the algorithm begins with k root nodes
from either the same or different tree data structures, representing the k different points that
must be found. At each step in the search, it narrows in on a set of mutually compatible
spatial regions and thus a set of individual points that fit the model by picking one of the
model nodes and recursively exploring its children. As with a standard ?single tree? search,
we constantly check for opportunities to prune the search.
There are several important drawbacks to multiple tree algorithms. First, additional trees
introduce a higher branching factor in the search and increase the potential for taking deep
"wrong turns." Second, care must be taken in order to deal with missing or a variable
number of support points. Kubica et al. discuss the use of an additional "missing" tree
node to handle these cases [9]. However, this approach can effectively make repeated
searches over subsets of trees, making it more expensive both in theory and practice.
4 Variable Tree Algorithms
In general we would like to exploit structural information from all aspects of our search
problem, but do so while branching the search on just the parameters of interest. To this
end we propose a new type of search that uses a variable number of tree nodes. Like a
standard multiple tree algorithm, the variable tree algorithm searches combinations of tree
nodes to find valid sets of points. However, we limit this search to just those points required
Figure 1: The model nodes' bounds (1 and 2) define a region of feasible support (shaded)
for any combination of model points from those nodes (A). As shown in (B), we can classify
entire support tree nodes as feasible (node b) or infeasible (nodes a and c).
to define, and thus bound, the models currently under consideration. Specifically, we use M
model tree nodes,¹ which guide the recursion and thus the search. In addition, throughout
the search we maintain information about other potential supporting points that can be used
to confirm the final track or prune the search due to a lack of support.
For example in the asteroid linking problem each line is defined by only 2 points, thus we
can efficiently search through the models using a multiple tree search with 2 model trees.
As shown in Figure 1.A, the spatial bounds of our current model nodes immediately limit
the set of feasible support points for all line segments compatible with these nodes. If we
track which support points are feasible, we can use this information to prune the search due
to a lack of support for any model defined by the points in those nodes.
The key idea behind the variable tree search is that we can use a dynamic representation of
the potential support. Specifically, we can place the support points in trees and maintain
a dynamic list of currently valid support nodes. As shown in Figure 1.B, by only testing
entire nodes (instead of individual points), we are using spatial coherence of the support
points to remove the expense of testing each support point at each step in the search. And
by maintaining a list of support tree nodes, we are no longer branching the search over
these trees. Thus we remove the need to make a hard "left or right" decision. Further, using
a combination of a list and a tree for our representation allows us to refine our support
representation on the fly. If we reach a point in the search where a support node is no
longer valid, we can simply drop it off the list. And if we reach a point where a support
node provides too coarse a representation of the current support space, we can simply
remove it and add both of its children to the list.
This leaves the question of when to split support nodes. If we split them too soon, we may
end up with many support nodes in our list and mitigate the benefits of the nodes' spatial
coherence. If we wait too long to split them, then we may have a few large support nodes
that cannot efficiently be pruned. Although we are still investigating splitting strategies, the
experiments in this paper use a heuristic that seeks to provide a small number of support
nodes that are a reasonable fit to the feasible region. We effectively split a support node
if doing so would allow one of its two children to be pruned. For KD-trees this roughly
means checking whether the split value lies outside the feasible region.
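A minimal sketch of this splitting check, assuming KD-tree nodes with hypothetical split_dim and split_value fields and a per-dimension feasible interval [lo, hi]:

def should_split(node, lo, hi):
    # Split a support node when its split value falls outside the feasible
    # interval on the split dimension, so one child could then be pruned.
    if node.is_leaf:
        return False
    s = node.split_value
    return s < lo[node.split_dim] or s > hi[node.split_dim]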
The full variable tree algorithm is given in Figure 2. A simple example of finding linear
tracks while using the track's endpoints (earliest and latest in time) as model points and
¹ Typically M = c, although in some cases it may be beneficial to use a different number of model
nodes.
Variable Tree Model Detection
Input: A set of M current model tree nodes M
       A set of current support tree nodes S
Output: A list Z of feasible sets of points
1.  S' ← {} and S_curr ← S
2.  IF we cannot prune based on the mutual compatibility of M:
3.    FOR each s ∈ S_curr
4.      IF s is compatible with M:
5.        IF s is "too wide":
6.          Add s's left and right child to the end of S_curr.
7.        ELSE
8.          Add s to S'.
9.    IF we have enough valid support points:
10.     IF all of m ∈ M are leaves:
11.       Test all combinations of points owned by the model nodes, using
          the support nodes' points as potential support.
12.       Add valid sets to Z.
13.     ELSE
14.       Let m* be the non-leaf model tree node that owns the most points.
15.       Search using m*'s left child in place of m* and S' instead of S.
          Search using m*'s right child in place of m* and S' instead of S.
Figure 2: A simple variable tree algorithm for spatial structure search. The algorithm
shown uses simple heuristics such as: searching the model node with the most points and
splitting a support node if it is too wide. These heuristics can be replaced by more accurate,
problem-specific ones.
using all other points for support is illustrated in Figure 3. The first column shows all
the tree nodes that are currently part of the search. The second and third columns show
the search's position on the two model trees and the current set of valid support nodes
respectively. Unlike the pure multiple tree search, the variable tree search does not "branch
off" on the support trees, allowing us to consider multiple support nodes from the same
time step at any point in the search. Again, it is important to note that by testing the
support points as we search, we are both incorporating support information into the pruning
decisions and "pruning" the support points for entire sets of models at once.
5 Results on the Asteroid Linking Domain
The goal of the single-night asteroid linkage problem is to find sets of 2-dimensional point
detections that correspond to a roughly linear motion model. In the below experiments we
are interested in finding sets of at least 7 detections from a sequence of 8 images. The
movements were constrained to have a speed between 0.05 and 0.5 degrees per day and
were allowed an observational error threshold of 0.0003 degrees. All experiments were run
on a dual 2.5 GHz Apple G5 with 4 GB of RAM.
The asteroid detection data consists of detections from 8 images of the night sky separated
by half-hour intervals. The images were obtained with the MegaCam instrument on the
3.6-meter Canada-France-Hawaii Telescope. The detections, along with confidence levels,
were automatically extracted from the images. We can pre-filter the data to pull out only
those observations above a given confidence threshold σ. This allows us to examine how
the algorithms perform as we begin to look for increasingly faint asteroids. It should be
noted that only limited preprocessing was done to the data, resulting in a very high level
[Figure 3 panels, from left to right: Search Step 1, Search Step 2, Search Step 5.]
Figure 3: The variable tree algorithm performs a depth first search over the model nodes.
At each level of the search the model nodes are checked for mutual compatibility and each
support node on the list is checked for compatibility with the set of model nodes. Since we
are not branching on the support nodes, we can split a support node and add both children
to our list. This figure shows the current model and support nodes and their spatial regions.
Table 1: The running times (in seconds) for the asteroid linkers with different detection
thresholds σ and thus different numbers N and density of observations.

σ       N       Single Tree   Multiple Tree   Variable Tree
10.0    3531    2             1               <1
8.0     5818    7             3               1
6.0     12911   61            30              4
5.0     24068   488           607             40
4.0     48646   2442          4306            205
of false detections. While future data sets will contain significantly reduced noise, it is
interesting to examine the performance of the algorithms on this real-world high noise,
high density data.
The results on the intra-night asteroid tracking domain, shown in Table 1, illustrate a clear
advantage to using a variable tree approach. As the significance threshold σ decreases,
the number and density of detections increases, allowing the support tree nodes to capture
feasibility information for a large number of support points. In contrast, neither the full
multiple tree algorithm nor the single-tree algorithm performed well. For the multiple tree
algorithm, this decrease in performance is likely due to a combination of the high number
of time steps, the allowance of a missing observation, and the high density. In particular,
the increased density can reduce opportunities for pruning, causing the algorithm to explore
deeper before backtracking.
Table 2: Average running times (in seconds) for a 2-dimensional rectangle search with
different numbers of points N. The brute force algorithm was only run to N = 2500.
N              500    1000   2000   2500   5000   10000   25000   50000
Brute Force    0.37   2.73   21.12  41.03  n/a    n/a     n/a     n/a
Single Tree    0.02   0.07   0.30   0.51   2.15   10.05   66.24   293.10
Multi-Tree     0.01   0.02   0.06   0.09   0.30   1.11    6.61    27.79
Variable-Tree  0.01   0.02   0.05   0.07   0.22   0.80    4.27    16.30
Table 3: Average running times (in seconds) for a rectangle search with different numbers
of required corners k. For this experiment N = 10000 and D = 3.
k              8      7      6      5      4
Single Tree    4.71   4.72   4.71   4.71   4.71
Multi-Tree     3.96   19.45  45.02  67.50  78.81
Variable-Tree  0.65   0.75   0.85   0.92   1.02
6 Experiments on the Simulated Rectangle Domain
We can apply the above techniques to a range of other model-based spatial search problems.
In this section we consider a toy template matching problem, finding axis-aligned hyperrectangles in D-dimensional space by finding k or more corners that fit a rectangle. We
use this simple, albeit artificial, problem both to demonstrate potential pattern recognition
applications and to analyze the algorithms as we vary the properties of the data.
Formally, we restrict the model to use the upper and lower corners as the two model points.
Potential support points are those points that fall within some threshold of the other 2^D − 2
corners. In addition, we restrict the allowable bounds of the rectangles by providing a
maximum width.
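For illustration, a brute-force version of this support test might look as follows (our sketch; the function name and the per-coordinate threshold test are assumptions):

import numpy as np
from itertools import product

def corner_support(points, lower, upper, eps=1e-4):
    # Count points within eps (per coordinate) of the 2^D - 2 non-model
    # corners of the axis-aligned rectangle [lower, upper].
    D = len(lower)
    support = 0
    for bits in product((0, 1), repeat=D):
        corner = np.where(np.array(bits) == 0, lower, upper)
        if np.array_equal(corner, lower) or np.array_equal(corner, upper):
            continue                      # skip the two model corners
        if np.any(np.all(np.abs(points - corner) <= eps, axis=1)):
            support += 1
    return support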
To evaluate the algorithms' relative performance, we used random data generated from a
uniform distribution on a unit hyper-cube. The threshold and maximum width were fixed
for all experiments at 0.0001 and 0.2 respectively. All experiments were run on a dual 2.5
GHz Apple G5 with 4 GB of RAM.
The first factor that we examined was how each algorithm scales with the number of points.
We generated random data with 5 known rectangles and N additional random points and
computed the average wall-clock running time (over ten trials) for each algorithm. The
results, shown in Table 2, show a graceful scaling of all of the multiple tree algorithms. In
contrast, the brute force and single tree algorithms run into trouble as the number of points
becomes moderately large. The variable tree algorithm consistently performs the best, as it
is able to avoid significant amounts of redundant computation.
One potential drawback of the full multiple tree algorithm is that since it branches on all
points, it may become inefficient as the allowable number of missing support points grows.
To test this we looked at 3-dimensional data and varied the minimum number of required
support points k. As shown in Table 3, all multiple tree methods become more expensive
as the number of required support points decreases. This is especially the case for the
multi-tree algorithm, which has to perform several almost identical searches to account for
missing points. However, the variable-tree algorithm's performance degrades gracefully
and is the best for all trials.
7 Conclusions
Tree-based spatial algorithms provide the potential for significant computational savings,
with multiple tree algorithms providing further opportunities to exploit structure in the
data. However, a distinct trade-off exists between ignoring structure from all aspects of
the problem and increasing the combinatorics of the search. We presented a variable tree
approach that exploits the advantages of both single tree and multiple tree algorithms. A
combinatorial search is carried out over just the minimum number of model points, while
still tracking the feasibility of the various support points. As shown in the above experiments, this approach provides significant computational savings over both the traditional
single tree and and multiple tree searches. Finally, it is interesting to note that the dynamic
support technique described in this paper is general and may be applied to a range of other
algorithms, such as the Fast Hough Transform [10], that maintain information on which
points support a given model.
Acknowledgments
Jeremy Kubica is supported by a grant from the Fannie and John Hertz Foundation. Andrew
Moore and Andrew Connolly are supported by a National Science Foundation ITR grant
(CCF-0121671).
References
[1] A.J. Storkey, N.C. Hambly, C.K.I. Williams, and R.G. Mann. Renewal Strings for
Cleaning Astronomical Databases. In UAI 19, 559-566, 2003.
[2] M.A. Fischler and R.C. Bolles. Random Sample Consensus: A Paradigm for Model
Fitting with Applications to Image Analysis and Automated Cartography. Comm. of
the ACM, 24:381–395, 1981.
[3] S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems. Artech
House, 1999.
[4] J.L. Bentley. Multidimensional Binary Search Trees Used for Associative Searching.
Comm. of the ACM, 18 (9), 1975.
[5] J. K. Uhlmann. Algorithms for multiple-target tracking. American Scientist,
80(2):128–141, 1992.
[6] P. V. C. Hough. Machine analysis of bubble chamber pictures. In International Conference on High Energy Accelerators and Instrumentation. CERN, 1959.
[7] A. Gray and A. Moore. N-body problems in statistical learning. In T. K. Leen and
T. G. Dietterich, editors, Advances in Neural Information Processing Systems. MIT
Press, 2001.
[8] G. R. Hjaltason and H. Samet. Incremental distance join algorithms for spatial
databases. In Proc. of the 1998 ACM-SIGMOD Conference, 237–248, 1998.
[9] J. Kubica, A. Moore, A. Connolly, and R. Jedicke. A Multiple Tree Algorithm for the
Efficient Association of Asteroid Observations. In KDD'05, August 2005.
[10] H. Li, M.A. Lavin, and R.J. Le Master. Fast Hough Transform: A Hierarchical
Approach. In Computer Vision, Graphics, and Image Processing, 36(2-3):139–161,
November 1986.
2,110 | 2,916 | Query By Committee Made Real
Ran Gilad-Bachrach∗‡
Amir Navot∗
Naftali Tishby∗†
∗ School of Computer Science and Engineering
† Interdisciplinary Center for Neural Computation
The Hebrew University, Jerusalem, Israel.
‡ Intel Research
Abstract
Training a learning algorithm is a costly task. A major goal of active
learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large scale problems
by using selective sampling. The algorithm overcomes the costly sampling step of the well known Query By Committee (QBC) algorithm by
projecting onto a low dimensional space. KQBC also enables the use
of kernels, providing a simple way of extending QBC to the non-linear
scenario. Sampling the low dimension space is done using the hit and
run random walk. We demonstrate the success of this novel algorithm by
applying it to both artificial and real world problems.
1 Introduction
Stone's celebrated theorem proves that given a large enough training sequence, even naive
algorithms such as the k-nearest neighbors can be optimal. However, collecting large training sequences poses two main obstacles. First, collecting these sequences is a lengthy and
costly task. Second, processing large datasets requires enormous resources. The selective
sampling framework [1] suggests permitting the learner some control over the learning process. In this way, the learner can collect a short and informative training sequence. This is
done by generating a large set of unlabeled instances and allowing the learner to select the
instances to be labeled.
The Query By Committee algorithm (QBC) [2] was the inspiration behind many algorithms
in the selective sampling framework [3, 4, 5]. QBC is a simple yet powerful algorithm. During learning it maintains a version space, the space of all the classifiers which are consistent
with all the previous labeled instances. Whenever an unlabeled instance is available, QBC
selects two random hypotheses from the version space and only queries for the label of the
new instance if the two hypotheses disagree. Freund et al. [6] proved that when certain
conditions apply, QBC will reach a generalization error of ε when using only O(log 1/ε)
labels. QBC works in an online fashion where each instance is considered only once to decide whether to query for its label or not. This is significant when there are a large number
of unlabeled instances. In this scenario, batch processing of the data is unfeasible (see e.g.
[7]). However, QBC was never implemented as is, since it requires the ability to sample
hypotheses from the version space, a task that all known methods do in an unreasonable
amount of time [8].
The algorithm we present in this paper uses the same skeleton as QBC, but replaces sampling from the high dimensional version space by sampling from a low dimensional projection of it. By doing so, we obtain an algorithm which can cope with large scale problems and at the same time allows the use of kernels. Although the algorithm uses linear classifiers at its core, the use of kernels makes it much broader in scope. This new sampling
method is presented in section 2. Section 3 gives a detailed description of the kernelized version, the Kernel Query By Committee (KQBC) algorithm. The last building block
is a method for sampling from convex bodies. We suggest the hit and run [9] random
walk for this purpose in section 4. A Matlab implementation of KQBC is available at
http://www.cs.huji.ac.il/labs/learning/code/qbc.
The empirical part of this work is presented in section 5. We demonstrate how KQBC
works on two binary classification tasks. The first is a synthetic linear classification task.
The second involves differentiating male and female facial images. We show that in both
cases, KQBC learns faster than Support Vector Machines (SVM) [10]. KQBC can be used
to select a subsample to which SVM is applied. In our experiments, this method was
superior to SVM; however, KQBC outperformed both.
Related work: Many algorithms for selective sampling have been suggested in the literature. However only a few of them have a theoretical justification. As already mentioned,
QBC has a theoretical analysis. Two other notable algorithms are the greedy active learning algorithm [11] and the perceptron based active learning algorithm [12]. The greedy
active learning algorithm has the remarkable property of being close to optimal in all settings. However, it operates in a batch setting, where selecting the next query point requires
reevaluation of the whole set of unlabeled instances. This is problematic when the dataset is
large. The perceptron based active learning algorithm, on the other hand, is extremely efficient in its computational requirements, but is restricted to linear classifiers since it requires
the explicit use of the input dimension.
Graepel et al. [13] presented a billiard walk in the version space as a part of the Bayes Point
Machine. Similar to the method presented here, the billiard walk is capable of sampling
hypotheses from the version space when kernels are used. The method presented here has
several advantages: it has better theoretical grounding and it is easier to implement.
2 A New Method for Sampling the Version-Space
The Query By Committee algorithm [2] provides a general framework that can be used with
any concept class. Whenever a new instance is presented, QBC generates two independent
predictions for its label by sampling two hypotheses from the version space¹. If the two
predictions differ, QBC queries for the label of the instance at hand (see algorithm 1). The
main obstacle in implementing QBC is the need to sample from the version space (step 2b).
It is not clear how to do this with reasonable computational complexity. As is the case for
most research in machine learning, we first focus on the class of linear classifiers and then
extend the discussion by using kernels. In the linear case, the dimension of the version
space is the input dimension which is typically large for real world problems. Thus direct
sampling is practically impossible. We overcome this obstacle by projecting the version
space onto a low dimensional subspace.
Assume that the learner has seen the labeled sample S = {(x_i, y_i)}_{i=1}^k, where x_i ∈ R^d and y_i ∈ {±1}. The version space is defined to be the set of all classifiers which correctly classify all the instances seen so far:

V = {w : ||w|| ≤ 1 and ∀i, y_i (w · x_i) > 0}    (1)

¹ The version space is the collection of hypotheses that are consistent with previous labels.
Algorithm 1 Query By Committee [2]
Inputs:
• A concept class C and a probability measure ν defined over C.
The algorithm:
1. Let S ← ∅, V ← C.
2. For t = 1, 2, . . .
   (a) Receive an instance x.
   (b) Let h1, h2 be two random hypotheses selected from ν restricted to V.
   (c) If h1(x) ≠ h2(x) then
       i. Ask for the label y of x.
       ii. Add the pair (x, y) to S.
       iii. Let V ← {c ∈ C : ∀(x, y) ∈ S, c(x) = y}.
QBC assumes a prior ν over the class of linear classifiers. The sample S induces a posterior over the class of linear classifiers which is the restriction of ν to V. Thus, the probability that QBC will query for the label of an instance x is exactly

2 Pr_{w~ν|V}[w · x > 0] Pr_{w~ν|V}[w · x < 0]    (2)

where ν|V is the restriction of ν to V.
From (2) we see that there is no need to explicitly select two random hypotheses. Instead,
we can use any stochastic approach that will query for the label with the same probability
as in (2). Furthermore, if we can sample ŷ ∈ {±1} such that

Pr[ŷ = 1] = Pr_{w~ν|V}[w · x > 0]    (3)

Pr[ŷ = −1] = Pr_{w~ν|V}[w · x < 0]    (4)

we can use it instead, by querying the label of x with a probability of 2 Pr[ŷ = 1] Pr[ŷ = −1]. Based on this observation, we introduce a stochastic algorithm which returns ŷ with probabilities as specified in (3) and (4). This procedure can replace the sampling step in the QBC algorithm.
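As an illustration, the query step amounts to only a few lines of code. The sketch below is our own and not from the paper; sample_w is a hypothetical helper standing in for whatever version-space sampler is available (e.g., the hit and run walk of section 4).

    import numpy as np

    def qbc_should_query(sample_w, x, rng):
        # Algorithm 1, steps 2(b)-(c): draw two hypotheses from the prior
        # restricted to the version space and query iff they disagree on x.
        # `sample_w(rng)` is a hypothetical helper returning one hypothesis.
        h1, h2 = sample_w(rng), sample_w(rng)
        return np.sign(h1 @ x) != np.sign(h2 @ x)

    def qbc_query_probability(p_pos):
        # Equivalent query probability (2): 2 Pr[w.x > 0] Pr[w.x < 0].
        return 2.0 * p_pos * (1.0 - p_pos)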
Let S = {(x_i, y_i)}_{i=1}^k be a labeled sample. Let x be an instance for which we need to decide whether to query for its label or not. We denote by V the version space as defined in (1) and denote by T the space spanned by x_1, . . . , x_k and x. QBC asks for two random hypotheses from V and queries for the label of x only if these two hypotheses predict different labels for x. Our procedure does the same thing, but instead of sampling the hypotheses from V we sample them from V ∩ T. One main advantage of this new procedure is that it samples from a space of low dimension and therefore its computational complexity is much lower. This is true since T is a space of dimension k + 1 at most, where k is the number of queries for labels QBC made so far. Hence, the body V ∩ T is a low-dimensional convex body² and thus sampling from it can be done efficiently. The input dimension plays a minor role in the sampling algorithm. Another important advantage is that it allows us to use kernels, and therefore gives a systematic way to extend QBC to the non-linear scenario. The use of kernels is described in detail in section 3.
The following theorem proves that indeed sampling from V ∩ T produces the desired results. It shows that if the prior ν (see algorithm 1) is uniform, then sampling hypotheses uniformly from V or from V ∩ T generates the same results.

² From the definition of the version space V it follows that it is a convex body.

Theorem 1 Let S = {(x_i, y_i)}_{i=1}^k be a labeled sample and x an instance. Let the version space be V = {w : ||w|| ≤ 1 and ∀i, y_i (w · x_i) > 0} and T = span(x, x_1, . . . , x_k). Then

Pr_{w~U(V)}[w · x > 0] = Pr_{w~U(V∩T)}[w · x > 0]

and

Pr_{w~U(V)}[w · x < 0] = Pr_{w~U(V∩T)}[w · x < 0]

where U(·) is the uniform distribution.
The proof of this theorem is given in the supplementary material [14].
3 Sampling with Kernels
In this section we show how the new sampling method presented in section 2 can be used together with kernels. QBC uses the random hypotheses for one purpose alone: to check the labels they predict for instances. In our new sampling method the hypotheses are sampled from V ∩ T, where T = span(x, x_1, . . . , x_k). Hence, any hypothesis is represented by w ∈ V ∩ T, which has the form

w = α_0 x + Σ_{j=1}^k α_j x_j    (5)
The label w assigns to an instance x' is

w · x' = (α_0 x + Σ_{j=1}^k α_j x_j) · x' = α_0 (x · x') + Σ_{j=1}^k α_j (x_j · x')    (6)
Note that in (6) only inner products are used, hence we can use kernels. Using these observations, we can sample a hypothesis by sampling α_0, . . . , α_k and define w as in (5). However, since the x_i's do not form an orthonormal basis of T, sampling the α's uniformly is not equivalent to sampling the w's uniformly. We overcome this problem by using an orthonormal basis of T. The following lemma shows a possible way in which the orthonormal basis for T can be computed when only inner products are used. The method presented here does not make use of the fact that we can build this basis incrementally.
Lemma 1 Let x_0, . . . , x_k be a set of vectors, let T = span(x_0, . . . , x_k) and let G = (g_{i,j}) be the Gram matrix such that g_{i,j} = x_i · x_j. Let λ_1, . . . , λ_r be the non-zero eigenvalues of G with the corresponding eigenvectors φ_1, . . . , φ_r. Then the vectors t_1, . . . , t_r such that

t_i = Σ_{l=0}^k (φ_i(l)/√λ_i) x_l

form an orthonormal basis of the space T.
The proof of Lemma 1 is given in the supplementary material [14]. This lemma is significant since the basis t_1, . . . , t_r enables us to sample from V ∩ T using simple techniques. Note that a vector w ∈ T can be expressed as Σ_{i=1}^r β(i) t_i. Since the t_i's form an orthonormal basis, ||w|| = ||β||. Furthermore, we can check the label w assigns to x_j by

w · x_j = Σ_i β(i) t_i · x_j = Σ_{i,l} β(i) (φ_i(l)/√λ_i) (x_l · x_j)

which is a function of the Gram matrix. Therefore, sampling from V ∩ T boils down to the problem of sampling from convex bodies, where instead of sampling a vector directly we sample the coefficients of the orthonormal basis t_1, . . . , t_r.
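A minimal NumPy sketch of Lemma 1 may make the construction concrete; it is our own illustration rather than code from the paper, and it assumes an exact, noise-free Gram matrix.

    import numpy as np

    def orthonormal_basis_coeffs(G, tol=1e-10):
        # Lemma 1: from the Gram matrix G (g_ij = x_i . x_j), return a matrix C
        # whose i-th row c_i gives t_i = sum_l c_i[l] * x_l.
        lam, phi = np.linalg.eigh(G)           # eigen-decomposition of G
        keep = lam > tol                       # keep the non-zero eigenvalues
        return (phi[:, keep] / np.sqrt(lam[keep])).T

    def assigned_sign(C, beta, G, j):
        # Sign that w = sum_i beta[i] t_i assigns to x_j, using only G:
        # w . x_j = sum_{i,l} beta[i] C[i,l] (x_l . x_j).
        return np.sign(beta @ (C @ G[:, j]))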
There are several methods for generating the final hypothesis to be used in the generalization phase. In the experiments reported in section 5 we randomly selected a single hypothesis from V ∩ T and used it to make all predictions, where V is the version space at the time when the learning terminated and T is the span of all instances for which KQBC queried for a label during the learning process.
4 Hit and Run
Hit and run [9] is a method for sampling from a convex body K using a random walk. Let z ∈ K. A single step of hit and run begins by choosing a random point u from the unit sphere. Afterwards the algorithm moves to a random point selected uniformly from l ∩ K, where l is the line passing through z and z + u.

Hit and run has several advantages over other random walks for sampling from convex bodies. First, its stationary distribution is indeed the uniform distribution, it mixes fast [9] and it does not require a "warm" starting point [15]. What makes it especially suitable for practical use is the fact that it does not require any parameter tuning other than the number of random steps. It is also very easy to implement.

Current proofs [9, 15] show that O*(d³) steps are needed for the random walk to mix. However, the constants in these bounds are very large. In practice hit and run mixes much faster than that. We have used it to sample from the body V ∩ T. The number of steps we used was very small, ranging from a couple of hundred to a couple of thousand. Our empirical study shows that this suffices to obtain impressive results.
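A sketch of a single hit and run step is given below; it is our own illustration, assuming the body is given by a bounded membership oracle, and it locates the chord endpoints by bisection rather than in closed form.

    import numpy as np

    def hit_and_run_step(z, inside, rng, n_bisect=30):
        # One hit and run step in a convex body K: pick a uniformly random
        # direction u, then move to a uniform point on the chord (z + t*u) in K.
        u = rng.standard_normal(z.shape)
        u /= np.linalg.norm(u)
        extents = []
        for sign in (+1.0, -1.0):
            lo, hi = 0.0, 1.0
            while inside(z + sign * hi * u):   # grow until we leave K
                hi *= 2.0
            for _ in range(n_bisect):          # bisect to the boundary
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if inside(z + sign * mid * u) else (lo, mid)
            extents.append(sign * lo)
        t = rng.uniform(extents[1], extents[0])
        return z + t * u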
5 Empirical Study
In this section we present the results of applying our new kernelized version of the query by
committee (KQBC) to two learning tasks. The first task requires classification of synthetic
data while the second is a real world problem.
5.1 Synthetic Data
In our first experiment we studied the task of learning a linear classifier in a d-dimensional space. The target classifier is the vector w* = (1, 0, . . . , 0); thus the label of an instance x ∈ R^d is the sign of its first coordinate. The instances were normally distributed N(μ = 0, Σ = I_d). In each trial we used 10000 unlabeled instances and let KQBC select
the instances to query for the labels. We also applied Support Vector Machine (SVM) to
the same data in order to demonstrate the benefit of using active learning. The linear kernel
was used for both KQBC and SVM. Since SVM is a passive learner, SVM was trained on
prefixes of the training data of different sizes. The results are presented in figure 1.
The difference between KQBC and SVM is notable. When both are applied to a 15-dimensional linear discrimination problem (figure 1b), SVM and KQBC have an error rate of ≈6% and ≈0.7% respectively after 120 labels. After such a short training sequence the
difference is of an order of magnitude. The same qualitative results appear for all problem
sizes.
As expected, the generalization error of KQBC decreases exponentially fast as the number of queries is increased, whereas the generalization error of SVM decreases only at an inverse-polynomial rate (the rate is O*(1/k) where k is the number of labels). This should not come as a surprise since Freund et al. [6] proved that this is the expected behavior. Note also that the bound of 50 · 2^(−0.67k/d) over the generalization error that was proved in [6] was replicated in our experiments (figure 1c).
[Figure 1 plots: generalization error (%) on a logarithmic y-axis versus the number of queries on the x-axis, for Kernel Query By Committee and Support Vector Machine, together with the bound curves 48·2^(−0.9k/5) in panel (a), 53·2^(−0.76k/15) in panel (b) and 50·2^(−0.67k/45) in panel (c).]
Figure 1: Results on the synthetic data. The generalization error (y-axis) in percents (in logarithmic scale) versus the number of queries (x-axis). Plots (a), (b) and (c) represent the synthetic task in 5, 15 and 45 dimensional spaces respectively. The generalization error of KQBC is compared to the generalization error of SVM. The results presented here are averaged over 50 trials. Note that the error rate of KQBC decreases exponentially fast. Recall that [6] proved a bound on the generalization error of 50 · 2^(−0.67k/d) where k is the number of queries and d is the dimension.
5.2 Face Images Classification
The learning algorithm was then applied in a more realistic setting. In the second task we
used the AR face images dataset [16]. The people in these images are wearing different
accessories, have different facial expressions and the faces are lit from different directions.
We selected a subset of 1456 images from this dataset. Each image was converted into grayscale and re-sized to 85 × 60 pixels, i.e., each image was represented as a 5100 dimensional
vector. See figure 2 for sample images. The task was to distinguish male and female
images. For this purpose we split the data into a training sequence of 1000 images and a
test sequence of 456 images. To test statistical significance we repeated this process 20
times, each time splitting the dataset into training and testing sequences.
We applied both KQBC and SVM to this dataset. We used the Gaussian kernel K(x_1, x_2) = exp(−||x_1 − x_2||²/2σ²) where σ = 3500, which is the value favorable by SVM. The results are presented in figure 3. It is apparent from figure 3 that KQBC outperforms SVM. When the budget allows for 100–140 labels, KQBC has an error rate of 2–3 percent less than the error rate of SVM. When 140 labels are used, KQBC outperforms SVM by 3.6% on average. This difference is significant as in 90% of the trials
Figure 2: Examples of face images used for the face recognition task.
[Figure 3 plot: generalization error (%) versus the number of labels (0 to 200), with legend entries Kernel Query By Committee (KQBC), Support Vector Machine (SVM), and SVM over KQBC selected instances.]
Figure 3: The generalization error of KQBC and SVM for the faces dataset (averaged over 20 trials). The generalization error (y-axis) versus the number of queries (x-axis) is compared for KQBC (solid) and SVM (dashed). When SVM was applied solely to the instances selected by KQBC (dotted line), the results are better than SVM but worse than KQBC.
KQBC outperformed SVM by more than 1%. In one of the cases, KQBC was 11% better.
We also used KQBC as an active selection method for SVM. We trained SVM over the
instances selected by KQBC. The generalization error obtained by this combined scheme
was better than the passive SVM but worse than KQBC.
In figure 4 we see the last images for which KQBC queried for labels. It is apparent that
the selection made by KQBC is non-trivial. All the images are either highly saturated or
partly covered by scarves and sunglasses. We conclude that KQBC indeed performs well
even when kernels are used.
6 Summary and Further Study
In this paper we present a novel version of the QBC algorithm. This novel version is
both efficient and rigorous. The time-complexity of our algorithm depends solely on the
number of queries made and not on the input dimension or the VC-dimension of the class.
Furthermore, our technique only requires inner products of the labeled data points - thus it
can be implemented with kernels as well.
We showed a practical implementation of QBC using kernels and the hit and run random
walk which is very close to the "provable" version. We conducted a couple of experiments
with this novel algorithm. In all our experiments, KQBC outperformed SVM significantly.
However, this experimental study needs to be extended. In the future, we would like to
compare our algorithm with other active learning algorithms, over a variety of datasets.
Figure 4: Images selected by KQBC. The last six faces for which KQBC queried for a
label. Note that three of the images are saturated and that two of these are wearing a scarf
that covers half of their faces.
References
[1] D. Cohn, L. Atlas, and R. Ladner. Training connectionist networks with queries and selective sampling. Advances in Neural Information Processing Systems 2, 1990.
[2] H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. Proc. of the Fifth Workshop on Computational Learning Theory, pages 287–294, 1992.
[3] C. Campbell, N. Cristianini, and A. Smola. Query learning with large margin classifiers. In Proc. 17th International Conference on Machine Learning (ICML), 2000.
[4] S. Tong. Active Learning: Theory and Applications. PhD thesis, Stanford University, 2001.
[5] G. Tur, R. Schapire, and D. Hakkani-Tür. Active learning for spoken language understanding. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, 2003.
[6] Y. Freund, H. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28:133–168, 1997.
[7] H. Mamitsuka and N. Abe. Efficient data mining by active learning. In Progress in Discovery Science, pages 258–267, 2002.
[8] R. Bachrach, S. Fine, and E. Shamir. Query by committee, linear separation and random walks. Theoretical Computer Science, 284(1), 2002.
[9] L. Lovász and S. Vempala. Hit and run is fast and fun. Technical Report MSR-TR-2003-05, Microsoft Research, 2003.
[10] B. Boser, I. Guyon, and V. Vapnik. Optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, pages 144–152, 1992.
[11] S. Dasgupta. Analysis of a greedy active learning strategy. In Neural Information Processing Systems (NIPS), 2004.
[12] S. Dasgupta, A. T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. In Proceedings of the 18th Annual Conference on Learning Theory (COLT), 2005.
[13] R. Herbrich, T. Graepel, and C. Campbell. Bayes point machines. Journal of Machine Learning Research, 1:245–279, 2001.
[14] R. Gilad-Bachrach, A. Navot, and N. Tishby. Query by committee made real - supplementary material. http://www.cs.huji.ac.il/~ranb/kqcb_supp.ps.
[15] L. Lovász and S. Vempala. Hit-and-run from a corner. In Proc. of the 36th ACM Symposium on the Theory of Computing (STOC), 2004.
[16] A. M. Martinez and R. Benavente. The AR face database. Technical report, CVC Tech. Rep. #24, 1998.
2,111 | 2,917 | Neuronal Fiber Delineation in Area of Edema
from Diffusion Weighted MRI
Ofer Pasternak∗
School of Computer Science
Tel-Aviv University
Tel-Aviv, ISRAEL 69978
[email protected]
Nathan Intrator
School of Computer Science
Tel-Aviv University
[email protected]
Nir Sochen
Department of Applied Mathematics
Tel-Aviv University
[email protected]
Yaniv Assaf
Department of Neurobiochemistry
Faculty of Life Science
Tel-Aviv University
[email protected]
Abstract
Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a non-invasive method for brain neuronal fiber delineation. Here we show a modification for DT-MRI that allows delineation of neuronal fibers which are infiltrated by edema. We use the Multiple Tensor Variational (MTV)
framework which replaces the diffusion model of DT-MRI with a multiple component model and fits it to the signal attenuation with a variational regularization mechanism. In order to reduce free water contamination we estimate the free water compartment volume fraction in
each voxel, remove it, and then calculate the anisotropy of the remaining
compartment. The variational framework was applied on data collected
with conventional clinical parameters, containing only six diffusion directions. By using the variational framework we were able to overcome
the highly ill posed fitting. The results show that we were able to find
fibers that were not found by DT-MRI.
1 Introduction
Diffusion weighted Magnetic Resonance Imaging (DT-MRI) enables the measurement of
the apparent water self-diffusion along a specified direction [1]. Using a series of Diffusion
Weighted Images (DWIs) DT-MRI can extract quantitative measures of water molecule
diffusion anisotropy which characterize tissue microstructure [2]. Such measures are in
particular useful for the segmentation of neuronal fibers from other brain tissue which then
allows a noninvasive delineation and visualization of major brain neuronal fiber bundles in
vivo [3]. Based on the assumptions that each voxel can be represented by a single diffusion
compartment and that the diffusion within this compartment has a Gaussian distribution
∗ http://www.cs.tau.ac.il/~oferpas
DT-MRI states the relation between the signal attenuation, E, and the diffusion tensor, D,
as follows [4, 5, 6]:
E(q_k) = A(q_k)/A(0) = exp(−b q_k^T D q_k),    (1)
where A(q_k) is the DWI for the k'th applied diffusion gradient direction q_k. The notation A(0) is for the non-weighted image and b is a constant reflecting the experimental diffusion weighting [2]. D is a second order tensor, i.e., a 3 × 3 positive semidefinite matrix, that requires at least 6 DWIs from different non-collinear applied gradient directions to uniquely determine it. The symmetric diffusion tensor has a spectral decomposition with three eigenvectors U_a and three positive eigenvalues λ_a. The relation between the eigenvalues determines the diffusion anisotropy using measures such as Fractional Anisotropy (FA) [5]:

FA = √( 3((λ_1 − D̄)² + (λ_2 − D̄)² + (λ_3 − D̄)²) / (2(λ_1² + λ_2² + λ_3²)) ),    (2)

where D̄ = (λ_1 + λ_2 + λ_3)/3. FA is relatively high in neuronal fiber bundles (white
matter), where the cylindrical geometry of fibers causes the diffusion perpendicular to
the fibers to be much smaller than parallel to them. Other brain tissues, such as gray matter and Cerebro-Spinal Fluid (CSF), are less confined in diffusion direction and exhibit isotropic diffusion. In cases of partial volume, where neuronal fibers reside with other tissue types in the same voxel or present complex architecture, the diffusion no longer has a single pronounced orientation and therefore the FA value of the fitted tensor is decreased. The decreased FA values cause errors in segmentation and in any subsequent fiber analysis.
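For concreteness, Eq. (2) amounts to a few lines of NumPy; the sketch below is our own illustration rather than the authors' code.

    import numpy as np

    def fractional_anisotropy(D):
        # FA of a 3x3 diffusion tensor D, following Eq. (2).
        lam = np.linalg.eigvalsh(D)            # eigenvalues lambda_1..3
        mean_d = lam.mean()                    # D-bar = (l1 + l2 + l3) / 3
        return np.sqrt(3.0 * np.sum((lam - mean_d) ** 2)
                       / (2.0 * np.sum(lam ** 2)))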
In this paper we focus on the case where partial volume occurs when fiber bundles are infiltrated with edema. Edema might occur in response to brain trauma, or surrounding a tumor.
The brain tissue accumulates water, which creates pressure and might change the fiber architecture, or infiltrate it. Since the edema consists mostly of relatively free diffusing water
molecules, the diffusion attenuation increases and the anisotropy decreases. We chose to
reduce the effect of edema by changing the diffusion model to a dual compartment model,
assuming an isotropic compartment added to a tensor compartment.
2 Theory
The method we offer is based on the dual compartment model, which was already demonstrated as able to reduce CSF contamination [7], where it required a large number of diffusion measurements with different diffusion times. Here we require conventional DT-MRI data of only six diffusion measurements, and apply it to the edema case.
2.1 The Dual Compartment Model
The dual compartment model is described as follows:
E(q_k) = f exp(−b q_k^T D_1 q_k) + (1 − f) exp(−b D_2).    (3)

The diffusion tensor of the tensor compartment is denoted by D_1, and the diffusion coefficient of the isotropic water compartment is denoted by D_2. The compartments have relative volumes of f and 1 − f. Finding the best fitting parameters D_1, D_2 and f is highly ill-posed, especially in the case of six measurements, where for any arbitrarily chosen isotropic compartment there could be found a tensor compartment which exactly fits the data.
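The forward model of Eq. (3) is straightforward to evaluate; the sketch below is a minimal illustration of it (not the authors' implementation), assuming unit gradient directions stacked as rows of q.

    import numpy as np

    def dual_compartment_signal(f, D1, D2, b, q):
        # Predicted attenuation E(q_k) of Eq. (3).
        #   f  : relative volume of the tensor compartment
        #   D1 : 3x3 diffusion tensor of the fiber compartment
        #   D2 : scalar diffusivity of the isotropic (free water) compartment
        #   q  : (K, 3) array of gradient directions, b the diffusion weighting
        tensor_term = np.exp(-b * np.einsum('ki,ij,kj->k', q, D1, q))
        return f * tensor_term + (1.0 - f) * np.exp(-b * D2)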
Figure 1: The initialization scheme. In addition to the DWI data, MTV uses the T2 image to initialize f. The initial orientations for the tensor compartment are those calculated by DT-MRI.
2.2 The Variational Framework
In order to stabilize the fitting process we chose to use the Multiple Tensor Variational
(MTV) framework [8] which was previously used to resolve partial volume caused by
complex fiber architecture [9], and to reduce CSF contamination in cases of hydrocephalus
[10]. We note that the dual compartment model is a special case of the more general multiple tensor model, where the number of the compartments is restricted to 2 and one of the
compartments is restricted to equal eigenvalues (isotropy). Therefore the MTV framework
adapted for separation of fiber compartments from edema is composed of the following
functional, whose minima should provide the wanted diffusion parameters:

S(f, D_1, D_2) = ∫_Ω ( α Σ_{k=1}^{d} (E(q_k) − Ê(q_k))² + φ(|∇U_1^i|) ) dΩ.    (4)

The notation Ê is for the observed diffusion signal attenuation, and E is calculated using (3) for the d different acquisition directions. Ω is the image domain with 3D axes (x, y, z), and |∇I| = √((∂I/∂x)² + (∂I/∂y)² + (∂I/∂z)²) is defined as the vector gradient norm. The notation U_1^i stands for the principal eigenvector of the i'th diffusion tensor. The fixed parameter α is set to keep the solution closer to the observed diffusion signal. The function φ is a diffusion flow function, which controls the regularization behavior. Here we chose to use φ_i(s) = √(1 + s²/K_i²), which leads to an anisotropic diffusion-like flow while preserving discontinuities [11]. The regularized fitting allows the identification of smoothed fiber compartments and reduces noise. The minimum of (4) solves the Euler-Lagrange equations, and can be found by the gradient descent scheme.
2.3 Initialization Scheme
Since the functional space is highly irregular (not enough measurements), the minimization process requires an initial guess (figure 1) which is as close as possible to the global minimum. In order to estimate a priori the relative volume of the isotropic compartment we used a normalized diffusion non-weighted image, where high contrast correlates with larger fluid volume. In order to estimate a priori the parameters of D_1 we used the result of conventional DT-MRI fitting on the original data. The DT-MRI results were spectrally decomposed and the eigenvectors were used as initial guesses for the eigenvectors of D_1. The initial guess for the eigenvalues of D_1 was set to λ_1 = 1.5, λ_2 = λ_3 = 0.4.
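A sketch of this initialization, under our own reading of the scheme (brighter normalized T2 taken as a larger free-water fraction; eigenvalue units as given in the text), might look as follows.

    import numpy as np

    def initialize_mtv(dt_tensors, t2_normalized):
        # dt_tensors    : (V, 3, 3) tensors from the conventional DT-MRI fit
        # t2_normalized : (V,) non-weighted image scaled to [0, 1]
        f0 = 1.0 - t2_normalized                # tensor-compartment fraction
        lam0 = np.array([0.4, 0.4, 1.5])        # initial eigenvalues (ascending)
        _, vecs = np.linalg.eigh(dt_tensors)    # keep the DT-MRI eigenvectors
        # Rebuild D1 = V diag(lam0) V^T; eigh sorts ascending, so 1.5 pairs
        # with the principal eigenvector.
        D1_0 = np.einsum('vij,j,vkj->vik', vecs, lam0, vecs)
        return f0, D1_0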
3 Methods
We demonstrate how partial volume of neuronal fiber and edema can be reduced by applying the modified MTV framework on a brain slice taken from a patient with severe edema surrounding a brain tumor. MRI was performed on a 1.5T MRI scanner (GE, Milwaukee). DT-MRI experiments were performed using a diffusion-weighted spin-echo echo-planar imaging (DWI-EPI) pulse sequence. The experimental parameters were as follows: TR/TE = 10000/98 ms, Δ/δ = 31/25 ms, b = 1000 s/mm², with six diffusion gradient directions. 48 slices of 3 mm thickness and no gap were acquired, covering the whole brain with a FOV of 240 mm² and a matrix of 128×128. The number of averages was 4, and the total experimental time was about 6 minutes. Head movement and image distortions were corrected using a mutual information based registration algorithm [12]. The corrected DWIs were fitted to the dual compartment model via the modified MTV framework, then the isotropic compartment was omitted. FA was calculated for the remaining tensor, for which FA higher than 0.25 was considered as white matter. We compared these results to single component DT-MRI with no regularization, which was also used for initialization of the MTV fitting.
4 Results and Discussion
Figure 2: A single slice of a patient with edema. (A) A non diffusion weighted image with the ROI marked, showing the tumor in black surrounded by severe edema which appears bright. (B) Normalized T2 of the ROI, used for f initialization. (C) FA map from DT-MRI (threshold FA > 0.25). Large parts of the corpus callosum are obscured. (D) FA map of D_1 from MTV (thresholds f > 0.35, FA > 0.25). A much larger part of the corpus callosum is revealed.
Figure 2 shows the edema case, where DTI was unable to delineate large parts of the corpus callosum. Since the corpus callosum is one of the largest fiber bundles in the brain, it was highly unlikely that the fibers were disconnected or had disappeared. The expected FA should have been of the same order as on the opposite side of the brain, where the corpus callosum shows high FA values. Applying the MTV on the slice and mapping the FA value of the tensor compartment reveals considerably more pixels of higher FA in the area of the corpus callosum. In general the FA values of most pixels were increased, which was predicted, since by removing any size of a sphere (isotropic compartment) we should be left with a shape which is less spherical, and therefore with increased FA. The benefit of using the MTV framework over an overall reduction of the FA threshold in recognizing neuronal fiber voxels is that the amount of FA increase is not uniform across all tissue types. In areas where the partial volume due to the edema was not big, the increase was much lower than in areas contaminated with edema. This keeps the nice contrast reflected by FA values between neuronal fibers and other tissue. Reducing the FA threshold on original DT-MRI results would cause a less clear separation between the fiber bundles and other tissue types. This tool could be used for fiber tracking in the vicinity of brain tumors, or with stroke, where edema contaminates the fibers and prevents fiber delineation with conventional DT-MRI.
5 Conclusions
We show that by modifying the MTV framework to fit the dual compartment model we
can reduce the contamination of edema, and delineate much larger fiber bundle areas. By
using the MTV framework we stabilize the fitting process, and also include some biological
constraints, such as the piece-wise smoothness nature of neuronal fibers in the brain. There
is no doubt that using a much larger number of diffusion measurements should increase the
stabilization of the process, and will increase its accuracy. However, more measurement
require much more scan time, which might not be available in some cases. The variational
framework is a powerful tool for the modeling and regularization of various mappings. It
is applied, with great success, to scalar and vector fields in image processing and computer
vision. Recently it has been generalized to deal with tensor fields which are of great interest
to brain research via the analysis of DWIs and DT-MRI. We show that the more realistic
model of multi-compartment voxels conjugated with the variational framework provides
much improved results.
Acknowledgments
We acknowledge the support of the Edersheim - Levi - Gitter Institute for Functional Human Brain Mapping of Tel-Aviv Sourasky Medical Center and Tel-Aviv University, the
Adams super-center for brain research of Tel-Aviv University, the Israel Academy of Sciences, Israel Ministry of Science, and the Tel-Aviv University research fund.
References
[1] E. Stejskal and J. E. Tanner. Spin diffusion measurements: Spin echoes in the presence of a time-dependent field gradient. J. Chem. Phys., 42:288–292, 1965.
[2] D. Le Bihan, J.-F. Mangin, C. Poupon, C. A. Clark, S. Pappata, N. Molko, and H. Chabriat. Diffusion tensor imaging: concepts and applications. Journal of Magnetic Resonance Imaging, 13:534–546, 2001.
[3] S. Mori and P. C. van Zijl. Fiber tracking: principles and strategies - a technical review. NMR Biomed., 15:468–480, 2002.
[4] P. J. Basser, J. Mattiello, and D. Le Bihan. MR diffusion tensor spectroscopy and imaging. Biophysical Journal, 66:259–267, 1994.
[5] P. J. Basser and C. Pierpaoli. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance, 111(3):209–219, June 1996.
[6] C. Pierpaoli, P. Jezzard, P. J. Basser, A. Barnett, and G. Di Chiro. Diffusion tensor MR imaging of human brain. Radiology, 201:637–648, 1996.
[7] C. Pierpaoli and D. K. Jones. Removing CSF contamination in brain DT-MRIs by using a two-compartment tensor model. In Proc. International Society for Magnetic Resonance in Medicine 12th Scientific Meeting ISMRM04, page 1215, Kyoto, Japan, 2004.
[8] O. Pasternak, N. Sochen, and Y. Assaf. Variational regularization of multiple diffusion tensor fields. In J. Weickert and H. Hagen, editors, Visualization and Processing of Tensor Fields. Springer, Berlin, 2005.
[9] O. Pasternak, N. Sochen, and Y. Assaf. Separation of white matter fascicles from diffusion MRI using φ-functional regularization. In Proceedings of the 12th Annual Meeting of the ISMRM, page 1227, 2004.
[10] O. Pasternak, N. Sochen, and Y. Assaf. CSF partial volume reduction in hydrocephalus using a variational framework. In Proceedings of the 13th Annual Meeting of the ISMRM, page 1100, 2005.
[11] G. Aubert and P. Kornprobst. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, volume 147 of Applied Mathematical Sciences. Springer-Verlag, 2002.
[12] G. K. Rohde, A. S. Barnett, P. J. Basser, S. Marenco, and C. Pierpaoli. Comprehensive approach for correction of motion and distortion in diffusion-weighted MRI. Magnetic Resonance in Medicine, 51:103–114, 2004.
2,112 | 2,918 | Learning Influence among Interacting
Markov Chains
Dong Zhang
IDIAP Research Institute
CH-1920 Martigny, Switzerland
[email protected]
Samy Bengio
IDIAP Research Institute
CH-1920 Martigny, Switzerland
[email protected]
Daniel Gatica-Perez
IDIAP Research Institute
CH-1920 Martigny, Switzerland
[email protected]
Deb Roy
Massachusetts Institute of Technology
Cambridge, MA 02142, USA
[email protected]
Abstract
We present a model that learns the influence of interacting Markov chains
within a team. The proposed model is a dynamic Bayesian network
(DBN) with a two-level structure: individual-level and group-level. The individual level models actions of each player, and the group level models
actions of the team as a whole. Experiments on synthetic multi-player
games and a multi-party meeting corpus show the effectiveness of the
proposed model.
1 Introduction
In multi-agent systems, individuals within a group coordinate and interact to achieve a goal.
For instance, consider a basketball game where a team of players with different roles, such
as attack and defense, collaborate and interact to win the game. Each player performs a set
of individual actions, evolving based on their own dynamics. A group of players interact
to form a team. Actions of the team and its players are strongly correlated, and different
players have different influence on the team. Taking another example, in conversational
settings, some people seem particularly capable of driving the conversation and dominating
its outcome. These people, skilled at establishing the leadership, have the largest influence
on the group decisions, and often shift the focus of the meeting when they speak [8].
In this paper, we quantitatively investigate the influence of individual players on their team
using a dynamic Bayesian network, which we call the two-level influence model. The proposed model explicitly learns the influence of individual players on the team with a two-level
structure. In the first level, we model actions of individual players. In the second one, we
model team actions as a whole. The model is then applied to determine (a) the influence of
players in multi-player games, and (b) the influence of participants in meetings.
The paper is organized as follows. Section 2 introduces the two-level influence model.
Section 3 reviews related models. Section 4 presents results on multi-player games, and
Section 5 presents results on a meeting corpus. Section 6 provides concluding remarks.
[Figure 1 diagram: (a) a per-player Markov chain with hidden states S_{t-1}^i, S_t^i, S_{t+1}^i and observations O_{t-1}^i, O_t^i, O_{t+1}^i; (b) the two-level influence model, with team states S_{t-1}^G, S_t^G, S_{t+1}^G coupled to the chains of players A, B and C (S^1, S^2, S^3); (c) the switching-parent structure, where Q = 1, . . . , N selects one of S^1, . . . , S^N as the parent of S^G.]
Figure 1: (a) Markov model for an individual player. (b) Two-level influence model (for simplicity, we omit the observation variables of individual Markov chains, and the switching parent variable Q). (c) Switching parents. Q is called a switching parent of S^G, and {S^1 · · · S^N} are conditional parents of S^G. When Q = i, S^i is the only parent of S^G.
2 Two-level Influence Model
The proposed model, called the two-level influence model, is a dynamic Bayesian network
(DBN) with a two-level structure: the player level and the team level (Fig. 1). The player
level represents the actions of individual players, evolving based on their own Markovian
dynamics (Fig. 1 (a)). The team level represents group-level actions (the action belongs to
the team as a whole, not to a particular player). In Fig. 1 (b), the arrows up (from players to
team) represent the influence of the individual actions on the group actions, and the arrows
down (from team to players) represent the influence of the group actions on the individual
actions. Let O^i and S^i denote the observation and state of the i'th player respectively, and let S^G denote the team state. For N players, and observation sequences of identical length T, the joint distribution of our model is given by
P(S, O) = ∏_{i=1}^N P(S_1^i) · ∏_{t=1}^T ∏_{i=1}^N P(O_t^i | S_t^i) · ∏_{t=1}^T P(S_t^G | S_t^1 · · · S_t^N) · ∏_{t=2}^T ∏_{i=1}^N P(S_t^i | S_{t-1}^i, S_{t-1}^G).    (1)
t=2 i=1
Regarding the player level, we model the actions of each individual with a first-order
Markov model (Fig. 1 (a)) with one observation variable O i and one state variable S i .
Furthermore, to capture the dynamics of all the players interacting as a team, we add a
hidden variable S G (team state), which is responsible to model the group-level actions.
Different from individual player state that has its own Markovian dynamics, team state is
not directly influenced by its previous state . S G could be seen as the aggregate behaviors of the individuals, yet provides a useful level of description beyond individual actions.
There are two kinds of relationships between the team and players: (1) The team state at
time t influences the players? states at the next time (down arrow in Fig. 1 (b)). In other
words, the state of the ith player at time t + 1 depends on its previous state as well as on
i
the team state, i.e., P (St+1
|Sti , StG ). (2) The team state at time t is influenced by all the
players? states at the current time (up arrow in Fig. 1 (b)), resulting in a conditional state
transition distribution P (StG |St1 ? ? ? StN ).
To reduce the model complexity, we add one hidden variable Q to the model, to switch parents for S^G. The idea of a switching parent (also called Bayesian multi-nets in [3]) is as follows: a variable, S^G in this case, has a set of parents {Q, S^1 · · · S^N} (Fig. 1(c)). Q is the switching parent that determines which of the other parents to use, conditioned on the current value of the switching parent. {S^1 · · · S^N} are the conditional parents. In Fig. 1(c), Q switches the parents of S^G among {S^1 · · · S^N}, corresponding to the distribution
P(S_t^G | S_t^1 · · · S_t^N) = Σ_{i=1}^N P(S_t^G, Q = i | S_t^1 · · · S_t^N)    (2)
= Σ_{i=1}^N P(Q = i | S_t^1 · · · S_t^N) P(S_t^G | S_t^1 · · · S_t^N, Q = i)    (3)
= Σ_{i=1}^N P(Q = i) P(S_t^G | S_t^i) = Σ_{i=1}^N α_i P(S_t^G | S_t^i).    (4)
From Eq. 3 to Eq. 4, we made two assumptions: (i) Q is independent of {S^1 · · · S^N}; and (ii) when Q = i, S_t^G only depends on S_t^i. The distribution over the switching-parent variable P(Q) essentially describes how much influence or contribution the state transitions of the player variables have on the state transitions of the team variable. We refer to α_i = P(Q = i) as the influence value of the i'th player. Obviously, Σ_{i=1}^N α_i = 1. If we further assume that all player variables have the same number of states N_S, and the team variable has N_G possible states, the joint log probability is given by
log P(S, O) = Σ_{i=1}^N Σ_{j=1}^{N_S} z_{j,1}^i · log P(S_1^i = j)    [initial probability]
+ Σ_{t=1}^T Σ_{i=1}^N Σ_{j=1}^{N_S} z_{j,t}^i · log P(O_t^i | S_t^i = j)    [emission probability]
+ Σ_{t=2}^T Σ_{i=1}^N Σ_{j=1}^{N_S} Σ_{k=1}^{N_S} Σ_{g=1}^{N_G} z_{j,t}^i · z_{k,t-1}^i · z_{g,t-1}^G · log P(S_t^i = j | S_{t-1}^i = k, S_{t-1}^G = g)    [group influence on individual transition]
+ Σ_{t=1}^T Σ_{k=1}^{N_S} Σ_{g=1}^{N_G} z_{g,t}^G · z_{k,t}^i · log{ Σ_{i=1}^N α_i P(S_t^G = g | S_t^i = k) },    (5)    [individual influence on group]
where the indicator variable z_{j,t} = 1 if S_t = j, and otherwise z_{j,t} = 0. We can see that the model has complexity O(T · N · N_G · N_S²). For T = 2000, N_S = 10, N_G = 5, N = 4, a total of 10⁶ operations is required, which is still tractable.
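To make the switching-parent combination concrete, the team transition of Eq. (4) can be evaluated as below; this is our own sketch with hypothetical array shapes, not the GMTK implementation used in the paper.

    import numpy as np

    def team_transition(alpha, pairwise, player_states):
        # Eq. (4): P(S_t^G | S_t^1..S_t^N) = sum_i alpha_i P(S_t^G | S_t^i).
        #   alpha         : (N,) influence values, summing to one
        #   pairwise      : (N, NS, NG) tables, pairwise[i, k, g] = P(S^G=g | S^i=k)
        #   player_states : (N,) current state index of each player
        rows = pairwise[np.arange(len(alpha)), player_states]   # (N, NG)
        return alpha @ rows                                     # (NG,) distribution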
For the model implementation, we used the Graphical Models Toolkit (GMTK) [4], a DBN
system for speech, language, and time series data. Specifically, we used the switching parents feature of GMTK, which greatly facilitates the implementation of the two-level model
to learn the influence values using the Expectation Maximization (EM) algorithm. Since
EM has the problem of local maxima, good initialization is very important. To initialize
the emission probability distribution in Eq. 5, we first train individual action models (Fig.
1 (a)) by pooling all observation sequences together. Then we use the trained emission
distribution from the individual action model to initialize the emission distribution of the
two-level influence model. This procedure is beneficial because we use data from all individual streams together, and thus have a larger amount of training data for learning.
3 Related Models
The proposed two-level influence model is related to a number of models, namely the
mixed-memory Markov model (MMM) [14, 11], the coupled HMM (CHMM) [13], the influence
model [1, 2, 6], and dynamical systems trees (DSTs) [10]. MMMs decompose a complex model
into mixtures of simpler ones, for example, a K-order Markov model into mixtures of first-order
models: P(S_t \mid S_{t-1} S_{t-2} \cdots S_{t-K}) = \sum_{i=1}^{K} \alpha_i P(S_t \mid S_{t-i}). The CHMM models
interactions of multiple Markov chains by directly linking the current state of one stream
with the previous states of all the streams (including itself): P(S^i_t \mid S^1_{t-1} S^2_{t-1} \cdots S^N_{t-1}).
However, the model becomes computationally intractable for more than two streams. The
influence model [1, 2, 6] simplifies the state transition distribution of the CHMM into a
convex combination of pairwise conditional distributions, i.e., P(S^i_t \mid S^1_{t-1} S^2_{t-1} \cdots S^N_{t-1}) =
\sum_{j=1}^{N} \alpha_{ji} P(S^i_t \mid S^j_{t-1}). We can see that the influence model and the MMM take the same strategy
to reduce complex models with large state spaces to a combination of simpler ones with
smaller state spaces. In [2, 6], the influence model was used to analyze speaking patterns
in conversations (i.e., turn-taking) to determine how much influence one participant has on
others. In such a model, \alpha_{ji} is regarded as the influence of the jth player on the ith player.
Figure 2: (a) A snapshot of the multi-player games: four players move along the paths
labeled in the map. (b) A snapshot of four-participant meetings.
All these models, however, limit themselves to modeling the interactions between individual
players, i.e., the influence of one player on another player. The proposed two-level
influence model extends these models by using the group-level variable S^G, which allows
us to model the influence between all the players and the team, P(S^G_t \mid S^1_t S^2_t \cdots S^N_t) =
\sum_{i=1}^{N} \alpha_i P(S^G_t \mid S^i_t), and by additionally conditioning the dynamics of each player on the team
state: P(S^i_{t+1} \mid S^i_t, S^G_t).
DSTs [10] have a tree structure that models interacting processes through parent hidden
Markov chains. There are two differences between DSTs and our model: (1) In DSTs, the
parent chain has its own Markovian dynamics, while the team state of our model is not
directly influenced by the previous team state. Thus, our model captures the emergent
phenomena in which the group action is "nothing more" than the aggregate behaviors of
individuals, yet it provides a useful level of representation beyond individual actions. (2)
The influence between players and team in our model is "bi-directional" (up and down arrows
in Fig. 1(b)). In DSTs, the influence between child and parent chains is "uni-directional":
parent chains can influence child chains, while child chains cannot influence their
parent chains.
4 Experiments on Synthetic Data
We first test our model on multi-player synthetic games, in which four players (labeled
A-D) move along a number of predetermined paths manually labeled in a map (Fig. 2(a)),
based on the following rules:
- Game I: Player A moves randomly. Players B and C are meticulously following
player A. Player D moves randomly.
- Game II: Player A moves randomly. Player B is meticulously following player
A. Player C moves randomly. Player D is meticulously following player C.
- Game III: All four players, A, B, C and D, move randomly.
A follower moves randomly until it lies on the same path as its target, and after that it tries
to reach the target by following the target's direction. The initial positions and speeds of
the players are randomly generated. The observation of an individual player is its motion trajectory
in the form of a sequence of positions, (x_1, y_1), (x_2, y_2) \cdots (x_t, y_t), each of which
belongs to one of 20 predetermined paths in the map. Therefore, we set N_S = 20. The
number of team states is set to N_G = 5. In experiments, we found that the final results
were not sensitive to the specific number of team states for this dataset over a wide range.
The length of each game sequence is T = 2000 frames. EM iterations were stopped once
the relative difference in the global log likelihood was less than 2%.
Figure 3: Influence values (y-axis) with respect to the EM iterations (x-axis) in the different
games, with one curve per player (A-D).
Fig. 3 shows the learned influence value for each of the four players in the different games
with respect to the number of EM iterations. We can see that for Game I, player A is the
leader based on the defined rules. The final learned influence value for player A is
almost 1, while the influence values for the other three players are almost 0. For Game II, players
A and C are both leaders based on the defined rules. The learned influence values
for players A and C are indeed close to 0.5, which indicates that they have similar influence on
the team. For Game III, the four players are moving randomly, and the learned influence
values are around 0.25, which indicates that all players have similar influence on the team.
The results on these toy data suggest that our model is capable of learning sensible values
for \{\alpha_i\}, in good agreement with the concept of influence we have described before.
5 Experiments on Meeting Data
As an application of the two-level influence model, we investigate the influence of participants
in meetings. Status, dominance, and influence are important concepts in social psychology
for which our model could be particularly suitable in a (dynamic) conversational
setting [8]. We used a public meeting corpus (available at http://mmm.idiap.ch),
which consists of 30 five-minute four-participant meetings collected in a room equipped
with synchronized multi-channel audio and video recorders [12]. A snapshot of the meeting
is shown in Fig. 2(b). These meetings have pre-defined topics and an action agenda,
designed to ensure discussions and monologues. Manual speech transcripts are also available.
We first describe how we manually collected influence judgements and the performance
measure we used. We then report our results using audio and language features,
compared with simple baseline methods.
5.1 Manually Labeling Influence Values and the Performance Measure
The manual annotation of the influence of meeting participants is to some degree a subjective
task, as a definite ground truth does not exist. In our case, each meeting was labeled by
three independent annotators who had no access to any information about the participants
(e.g., job titles and names). This was enforced to avoid any bias based on prior knowledge
of the meeting participants (e.g., a student would probably assign a large influence value to
his supervisor). After watching an entire meeting, the three annotators were asked to assign
a probability-based value (ranging from 0 to 1, all adding up to 1) to the meeting participants,
which indicated their influence in the meeting (Fig. 5(b-d)). From the three annotations, we
computed the pairwise Kappa statistics [7], a commonly used measure of inter-rater agreement.
The obtained pairwise Kappa ranges between 0.68 and 0.72, which demonstrates
good agreement among the different annotators. We estimated the ground-truth influence
values by averaging the results from the three annotators (Fig. 5(a)).
We use the Kullback-Leibler (KL) divergence to evaluate the results. For the jth meeting,
given an automatically determined influence distribution \hat P(Q) and the ground-truth
influence distribution P(Q), the KL divergence is given by
D^j(\hat P \,\|\, P) = \sum_{i=1}^{N} \hat P(Q = i) \log_2 \frac{\hat P(Q = i)}{P(Q = i)},
where N is the number of participants. The smaller D^j, the better the performance
(if \hat P = P, then D^j = 0). Note that the KL divergence is not symmetric.
We calculate the average KL divergence over all the meetings: D = \frac{1}{M} \sum_{j=1}^{M} D^j(\hat P \,\|\, P),
where M is the number of meetings.
[Figure 4 image: for persons A and B, aligned audio state strings (0 = silence, 1 = speaking) and language state strings (0 = silence, topic ids otherwise) over a timeline.]
Figure 4: Illustration of state sequences using audio and language features, respectively.
Using audio, there are two states: speaking and silence. Using language, the number of
states equals the number of PLSA topics plus one silence state.
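The KL measure above is straightforward to implement; a small sketch with invented influence values:

```python
import numpy as np

# A direct implementation of the evaluation measure: D(Phat || P) in bits.
# The distributions below are made up for illustration.
def kl_bits(p_hat, p):
    p_hat, p = np.asarray(p_hat, float), np.asarray(p, float)
    mask = p_hat > 0                      # 0 * log 0 = 0 by convention
    return float(np.sum(p_hat[mask] * np.log2(p_hat[mask] / p[mask])))

model = [0.50, 0.25, 0.15, 0.10]          # automatically estimated influence
truth = [0.45, 0.30, 0.15, 0.10]          # annotator ground truth
print(kl_bits(model, truth))              # smaller is better; 0 iff equal
```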
5.2 Audio and Language Features
We first extract audio features useful for detecting speaking turns in conversations. We compute
the SRP-PHAT measure using the signals from an 8-microphone array [12], which is
a continuous value indicating the speech activity of a particular participant. We use a
Gaussian emission probability, and set N_S = 2, with the two states corresponding to speaking and
non-speaking (silence), respectively (Fig. 4).
Additionally, language features were extracted from the manual transcripts. After removing
stop words, the meeting corpus contains 2175 unique terms. We then employed probabilistic
latent semantic analysis (PLSA) [9], a language model that projects documents in
the high-dimensional bag-of-words space into a topic-based space of lower dimension.
Each dimension in this new space represents a "topic", and each document is
represented as a mixture of topics. In our case, a document corresponds to one speech utterance
(t_s, t_e, w_1 w_2 \cdots w_k), where t_s is the start time, t_e is the end time, and w_1 w_2 \cdots w_k
is a sequence of words. PLSA is thus used as a feature extractor that can potentially
capture "topic turns" in meetings.
We embedded PLSA into our model by treating the states of individual players as instances
of PLSA topics (similar to [5]). Therefore, the PLSA model determines the emission probability
in Eq. 5. We repeat the PLSA topic within the same utterance (t_s \le t \le t_e). The
topic for the silence segments was set to 0 (Fig. 4). We can see that using audio-only features
is a special case of using language features, obtained by using only one topic in
the PLSA model (i.e., all utterances belong to the same topic). We set 10 topics in PLSA
(N_S = 10), and set N_G = 5 using simple a priori knowledge. EM iterations
were stopped once the relative difference in the global log likelihood was less than 2%.
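As a toy illustration of the Fig. 4 encoding (the utterance boundaries and topic ids below are invented, not taken from the corpus), a participant's language-state sequence can be built as:

```python
# A toy reconstruction of the Fig. 4 encoding: each participant's state at
# time t is the PLSA topic of the utterance covering t, or 0 (silence)
# outside utterances. Utterances and topic ids here are invented.
T = 25

def topic_state_sequence(utterances, T):
    """utterances: list of (t_start, t_end, topic_id), topic ids >= 1."""
    states = [0] * T                      # 0 = silence
    for ts, te, topic in utterances:
        for t in range(ts, te + 1):
            states[t] = topic
    return states

person_a = [(0, 4, 2), (9, 13, 3), (16, 21, 4)]
print("".join(str(s) for s in topic_state_sequence(person_a, T)))
```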
5.3 Results and Discussions
We compare our model with a method based on speaking length (how much time each
of the participants speaks). In this case, the influence value of a meeting participant is
defined to be proportional to his speaking length: P(Q = i) = L_i / \sum_{i=1}^{N} L_i, where L_i is
the speaking length of participant i. As a second baseline model, we randomly generated
1000 combinations of influence values (under the constraint that the sum of the four values
equals 1), and report the average performance.
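The speaking-length baseline amounts to a one-line normalization; a sketch with hypothetical speaking times:

```python
# The speaking-length baseline, exactly as defined in the text:
# P(Q = i) is proportional to participant i's total speaking time.
def speaking_length_baseline(lengths):
    total = sum(lengths)
    return [L / total for L in lengths]

# Hypothetical speaking times (seconds) for four participants.
print(speaking_length_baseline([120.0, 45.0, 80.0, 55.0]))
```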
The results are shown in Table 1 (left) and Fig. 5(e-h). We can see that the results of the
three methods, model + language, model + audio, and speaking length (Fig. 5(e-g)), are
significantly better than the result of randomization (Fig. 5(h)).
Figure 5: Influence values of the 4 participants (y-axis) in the 30 meetings (x-axis). (a)
Ground truth (average of the three human annotations A_1, A_2, A_3). (b) A_1: human annotation 1.
(c) A_2: human annotation 2. (d) A_3: human annotation 3. (e) Our model + language.
(f) Our model + audio. (g) Speaking length. (h) Randomization.
Table 1: Results on meetings ("model" denotes the two-level influence model).

  Method             KL divergence  |  Human Annotation    KL divergence
  model + Language   0.106          |  A_i vs. A_j         0.090
  model + Audio      0.135          |  A_i vs. \bar A_i    0.053
  Speaking length    0.226          |  A_i vs. GT          0.037
  Randomization      0.863          |
Using language features with our model achieves the best performance. Our model (using either audio or language
features) outperforms the speaking-length based method, which suggests that the learned
influence distributions are in better accordance with the influence distributions from human
judgements. As shown in Fig. 4, using audio features can be seen as a special case of using
language features. We use language features to capture "topic turns" by factorizing the two
states "speaking, silence" into more states: "topic1, topic2, ..., silence". We can see that
the result using language features is better than that using audio features. In other words,
compared with "speaking turns", "topic turns" improve the ability of our model to
learn the influence of participants in meetings.
It is interesting to look at the KL divergence between any pair of the three human annotations
(A_i vs. A_j), between any one and the average of the others (A_i vs. \bar A_i), and between any one
and the ground truth (A_i vs. GT). The average results are shown in Table 1 (right).
We can see that the result of A_i vs. GT is the best, which is reasonable since GT is
the average of A_1, A_2, and A_3. Fig. 6(a) shows the histogram of the KL divergence between
any pair of human annotations for the 30 meetings. The histogram has a distribution with
\mu = 0.09 and \sigma = 0.11. We can see that the results of our model (language: 0.106, audio:
0.135) are very close to the mean (\mu = 0.09), which indicates that our model is comparable
to human performance.
With our model, we can calculate the cumulative influence of each meeting participant over
time. Fig. 6(b) shows such an example using the two-level influence model with audio
features. We can see that the cumulative influence is related to the meeting agenda: the
meeting starts with the monologue of person1 (monologue1). The influence of person1 is
almost 1, while the influences of the other persons are nearly 0. When four participants are
involved in a discussion, the influence of person1 decreases, and the influences of the other
three persons increase. The influence of person4 increases quickly during monologue4.
The final influence of the participants becomes stable in the second discussion.
Figure 6: (a) Histogram of the KL divergence between any pair of the human annotations
(A_i vs. A_j) for the 30 meetings. (b) The evolution of cumulative influence over time (5
minutes), with one curve per participant. The dotted vertical lines indicate the predefined
meeting agenda (monologue1, discussion, monologue4, discussion).
6 Conclusions
We have presented a two-level influence model that learns the influence of all players within
a team. The model has a two-level structure: an individual level and a group level. The
individual level models the actions of individual players, and the group level models the group
as a whole. Experiments on synthetic multi-player games and a multi-party meeting corpus showed the
effectiveness of the proposed model. More generally, we anticipate that our approach to
multi-level influence modeling may provide a means for analyzing a wide range of social
dynamics to infer patterns of emergent group behaviors.
Acknowledgements
This work was supported by the Swiss National Center of Competence in Research on Interactive
Multimodal Information Management (IM2) and the EC project AMI (Augmented Multi-Party
Interaction) (pub. AMI-124). We thank Florent Monay (IDIAP) and Jeff Bilmes (University of
Washington) for sharing the PLSA code and the GMTK. We also thank the annotators for their efforts.
References
[1] C. Asavathiratham. The influence model: A tractable representation for the dynamics of networked Markov chains. Ph.D. dissertation, Dept. of EECS, MIT, Cambridge, 2000.
[2] S. Basu, T. Choudhury, B. Clarkson, and A. Pentland. Learning human interactions with the
influence model. MIT Media Laboratory Technical Note No. 539, 2001.
[3] J. Bilmes. Dynamic Bayesian multinets. In Uncertainty in Artificial Intelligence, 2000.
[4] J. Bilmes and G. Zweig. The Graphical Models Toolkit: An open source software system for
speech and time series processing. Proc. ICASSP, vol. 4:3916-3919, 2002.
[5] D. Blei and P. Moreno. Topic segmentation with an aspect hidden Markov model. Proc. of the ACM
SIGIR Conference on Research and Development in Information Retrieval, pages 343-348, 2001.
[6] T. Choudhury and S. Basu. Modeling conversational dynamics as a mixed memory Markov
process. Proc. of the Intl. Conference on Neural Information Processing Systems (NIPS), 2004.
[7] J. A. Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37-46, 1960.
[8] S. L. Ellyson and J. F. Dovidio, editors. Power, Dominance, and Nonverbal Behavior. Springer-Verlag, 1985.
[9] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine
Learning, 42:177-196, 2001.
[10] A. Howard and T. Jebara. Dynamical systems trees. In Uncertainty in Artificial Intelligence, 2001.
[11] K. Kirchhoff, S. Parandekar, and J. Bilmes. Mixed-memory Markov models for automatic
language identification. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2000.
[12] I. McCowan, D. Gatica-Perez, S. Bengio, G. Lathoud, M. Barnard, and D. Zhang. Automatic
analysis of multimodal group actions in meetings. IEEE Transactions on PAMI, 27(3), 2005.
[13] N. Oliver, B. Rosario, and A. Pentland. Graphical models for recognizing human interactions.
Proc. of the Intl. Conference on Neural Information Processing Systems (NIPS), 1998.
[14] L. K. Saul and M. I. Jordan. Mixed memory Markov models: Decomposing complex stochastic
processes as mixtures of simpler ones. Machine Learning, 37(1):75-87, 1999.
Searching for Character Models
Jaety Edwards
Department of Computer Science
UC Berkeley
Berkeley, CA 94720
[email protected]
David Forsyth
Department of Computer Science
UC Berkeley
Berkeley, CA 94720
[email protected]
Abstract
We introduce a method to automatically improve character models for a
handwritten script without the use of transcriptions and using a minimum
of document specific training data. We show that we can use searches for
the words in a dictionary to identify portions of the document whose
transcriptions are unambiguous. Using templates extracted from those
regions, we retrain our character prediction model to drastically improve
our search retrieval performance for words in the document.
1 Introduction
An active area of research in machine transcription of handwritten documents is reducing
the amount and expense of supervised data required to train prediction models. Traditional
OCR techniques require a large sample of hand segmented letter glyphs for training. This
per character segmentation is expensive and often impractical to acquire, particularly if the
corpora in question contain documents in many different scripts.
Numerous authors have presented methods for reducing the expense of training data by
removing the need to segment individual characters. Both Kopec et al [3] and LeCun et al
[5] have presented models that take as input images of lines of text with their ASCII transcriptions. Training with these datasets is made possible by explicitly modelling possible
segmentations in addition to having a model for character templates.
In their research on "wordspotting", Lavrenko et al [4] demonstrate that images of entire
words can be highly discriminative, even when the individual characters composing the
word are locally ambiguous. This implies that images of many sufficiently long words
should have unambiguous transcriptions, even when the character models are poorly tuned.
In our previous work, [2], the discriminatory power of whole words allowed us to achieve
strong search results with a model trained on a single example per character.
The above results have shown that A) one can learn new template models given images of
text lines and their associated transcriptions, [3, 5] without needing an explicit segmentation
and that B) entire words can often be identified unambiguously, even when the models for
individual characters are poorly tuned. [2, 4]. The first of these two points implies that
given a transcription, we can learn new character models. The second implies that for at
least some parts of a document, we should be able to provide that transcription "for free",
by matching against a dictionary of known words.
[Figure 1 image: states s1-s8, labeled with the character bigrams ?d, di, ix, xe, er, ri, is, s?, spanning an example line.]
Figure 1: A line, and the states that generate it. Each state s_t is defined by its left and
right characters c^l_t and c^r_t (e.g., 'x' and 'e' for s_4). In the image, a state spans half of each
of these two characters, starting just past the center of the left character and extending to
the center of the right character, i.e., the right half of the 'x' and the left half of the 'e'
in s_4. The relative positions of the two characters are given by a displacement vector d_t
(superimposed on the image as white lines). Associating states with intracharacter spaces
instead of with individual characters allows the bounding boxes of characters to overlap
while maintaining the independence properties of the Markov chain.
In this work we combine these two observations in order to improve character models
without the need for a document-specific transcription. We provide a generic dictionary of
words in the target language. We then identify "high confidence" regions of a document.
These are image regions for which exactly one word from our dictionary scores highly
under our model. Given a set of high-confidence regions, we effectively have a training
corpus of text images with associated transcriptions. In these regions, we infer a segmentation
and extract new character examples. Finally, we use these new exemplars to learn
an improved character prediction model. As in [2], our document in this work is a 12th-century
manuscript of Terence's Comedies obtained from Oxford's Bodleian library [1].
2 The Model
Hidden Markov Models are a natural and widely used method for modeling images of text.
In their simplest incarnation, a hidden state represents a character and the evidence variable
is some feature vector calculated at points along the line. If all characters were known to
be of a single fixed width, this model would suffice. The probability of a line under this
model is given as
p(\mathrm{line}) = p(c_1 \mid \star) \prod_{t>1} p(c_t \mid c_{t-1})\, p(\mathrm{im}_{[w(t-1)+1 \,:\, wt]} \mid c_t) \qquad (1)
where c_t represents the tth character on the line, \star represents the start state, w is the width
of a character, and \mathrm{im}_{[w(t-1)+1 \,:\, wt]} represents the columns of pixels beginning at column
w(t-1)+1 of the image and ending at column wt (i.e., the set of pixels spanned by c_t).
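A minimal sketch of Eq. 1, assuming a fixed character width w; the transition tables are random stand-ins and the emission function below is a placeholder, not the paper's template model:

```python
import numpy as np

# Sketch of Eq. 1 with a fixed character width w: the log probability of a
# line chains the character transition terms with per-window emission terms.
rng = np.random.default_rng(0)
C, w, T = 26, 8, 5                         # characters, width, line length
log_init = np.log(rng.dirichlet(np.ones(C)))
log_trans = np.log(rng.dirichlet(np.ones(C), size=C))   # P(c_t | c_{t-1})

def log_emit(image, start, c):
    """Stand-in for log p(im[start:start+w] | c); a real model would score
    the pixel columns under character c's template."""
    return float(-0.01 * np.sum((image[:, start:start + w] - c / C) ** 2))

def line_loglik(image, chars):
    ll = log_init[chars[0]] + log_emit(image, 0, chars[0])
    for t in range(1, len(chars)):
        ll += log_trans[chars[t - 1], chars[t]] + log_emit(image, w * t, chars[t])
    return ll

image = rng.random((16, w * T))            # fake grayscale line image
print(line_loglik(image, [3, 0, 19, 8, 13]))
```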
Unfortunately, characters' widths do vary quite substantially, and so we must extend the
model to accommodate different possible segmentations. A generalized HMM allows us to
do this. In this model a hidden state is allowed to emit a variable-length series of evidence
variables. We introduce an explicit distribution over the possible widths of a character.
Letting d_t be the displacement vector associated with the tth character, and c^x_t refer to the
x location of the left edge of a character on the line, the probability of a line under this
revised model is
p(\mathrm{line}) = p(c_1 \mid \star) \prod_{t>1} p(c_t \mid c_{t-1})\, p(d_t \mid c_t)\, p(\mathrm{im}_{[c^x_t+1 \,:\, c^x_t+d_t]} \mid d_t, c_t) \qquad (2)
This is the model we used in [2]. It performs far better than using an assumption of fixed
widths, but it still imposes unrealistic constraints on the relative positions of characters. In
particular, the portion of the ink generated by the current character is assumed to be independent
of the preceding character. In other words, the model assumes that the bounding
boxes of characters do not overlap. This constraint is obviously unrealistic. Characters
routinely overlap in our documents; 'f's, for instance, form ligatures with most following
characters. In previous work, we treated this overlap as noise, hurting our ability to
correctly localize templates. Under this model, local errors of alignment would also often
propagate globally, adversely affecting the segmentation of the whole line. For search,
this noisy segmentation still provides acceptable results. In this work, however, we need
to extract new templates, and thus correct localization and segmentation of templates is
crucial.
In our current work, we have relaxed this constraint, allowing characters to partially overlap.
We achieve this by changing hidden states to represent character bigrams instead of
single characters (Figure 1). In the image, a state now spans the pixels from just past the
center of the left character to the pixel containing the center of the right character. We
adjust our notation somewhat to reflect this change, letting s_t now represent the tth hidden
state and c^l_t and c^r_t be the left and right characters associated with s_t. d_t is now the
displacement vector between the centers of c^l_t and c^r_t.
The probability of a line under this, our actual, model is
p(\mathrm{line}) = p(s_1 \mid \star) \prod_{t>1} p(s_t \mid s_{t-1})\, p(d_t \mid c^l_t, c^r_t)\, p(\mathrm{im}_{[s^x_t+1 \,:\, s^x_t+d_t]} \mid c^l_t, c^r_t, d_t) \qquad (3)
This model allows overlap of bounding boxes, but it does still make the assumption that
the bounding box of the current character does not extend past the center of the previous
character. This assumption does not fully reflect reality either. In Figure 1, for example,
the left descender of the x extends back further than the center of the preceding character.
It does, however, accurately reflect the constraints within the heart of the line (excluding
ascenders and descenders). In practice, it has proven to generate very accurate segmentations. Moreover, the errors we do encounter no longer tend to affect the entire line, since
the model has more flexibility with which to readjust back to the correct segmentation.
2.1 Model Parameters
Our transition distribution between states is simply a 3-gram character model. We train this
model using a collection of ASCII Latin documents collected from the web. This set does
not include the transcriptions of our documents.
Conditioned on displacement vector, the emission model for generating an image chunk
given a state is a mixture of gaussians. We associate with each character a set of image
windows extracted from various locations in the document. We initialize these sets with
one example a piece from our hand cut set (Figure 2). We adjust the probability of an image
given the state to include the distribution over blocks by expanding the last term of Equation
3 to reflect this mixture. Letting b^c_k represent the kth exemplar in the set associated with
character c, the conditional probability of an image region spanning the columns from x to
x' is given as
p(\mathrm{im}_{x:x'} \mid c^l_t, c^r_t, d_t) = \sum_{i,j} p(\mathrm{im}_{x:x'} \mid b^{c^l_t}_i, b^{c^r_t}_j, d_t) \qquad (4)
In principle, the displacement vectors should now be associated with an individual block,
not a character. This is especially true when we have both upper and lower case letters.
However, our model does not seem particularly sensitive to this displacement distribution
and so in practice, we have a single, fairly loose, displacement distribution per character.
Given a displacement vector, we can generate the maximum likelihood template image
under our model by compositing the correct halves of the left and right blocks. Reshaping
the image window into a vector, the likelihood of an image window is then modeled as
a Gaussian, using the corresponding pixels in the template as the means and assuming
a diagonal covariance matrix. The covariance matrix largely serves to mask out empty
regions of a character's bounding box, so that we do not pay a penalty when the overlap of
two characters' bounding boxes contains only whitespace.
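The following sketch illustrates the compositing and diagonal-Gaussian scoring just described; the block images and variances are synthetic, and combining overlapping halves by taking the maximum ink value is our simplifying assumption:

```python
import numpy as np

# Sketch of the emission model in Section 2.1: composite the maximum
# likelihood template from the right half of the left block and the left
# half of the right block, then score the window under a diagonal Gaussian.
def composite_template(left_block, right_block, d):
    """d: x displacement between the two character centers (in columns)."""
    h = left_block.shape[0]
    lw, rw = left_block.shape[1] // 2, right_block.shape[1] // 2
    tmpl = np.zeros((h, d))
    tmpl[:, :lw] = left_block[:, -lw:]           # right half of the left char
    # left half of the right char; overlap merged by max ink (an assumption)
    tmpl[:, d - rw:] = np.maximum(tmpl[:, d - rw:], right_block[:, :rw])
    return tmpl

def log_gaussian(window, tmpl, var):
    diff = window - tmpl
    return float(-0.5 * np.sum(diff * diff / var + np.log(2 * np.pi * var)))

rng = np.random.default_rng(0)
left, right = rng.random((20, 12)), rng.random((20, 12))
tmpl = composite_template(left, right, d=9)
var = np.full(tmpl.shape, 0.25)          # large variance acts like a mask
window = tmpl + 0.1 * rng.standard_normal(tmpl.shape)
print(log_gaussian(window, tmpl, var))
```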
2.2 Efficiency Considerations
The number of possible different templates for a state is O(|B| \cdot |B| \cdot |D|), where |B| is
the number of different possible blocks and |D| is the number of candidate displacement
vectors. To make inference in this model computationally feasible, we first restrict the
domain of d. For a given pair of blocks b_l and b_r, we consider only displacement vectors
within some small x distance from a mean displacement m_{b_l, b_r}, and we have a uniform
distribution within this region. m is initialized from the known size of our single hand-cut
template. In the current work, we do not relearn the m; these are held fixed and assumed
to be the same for all blocks associated with the same letter.
Even when restricting the number of d's under consideration as discussed above, it is computationally
infeasible to consider every possible location and pair of blocks. We therefore
prune our candidate locations by looking at the likelihood of blocks in isolation, considering
only locations where there is a local optimum in the response function and whose
value is better than a given threshold. In this case our threshold for a given location is that
L(block) < 0.7 \cdot L(background), where L(x) represents the negative log likelihood of x.
In other words, a location has to look at least marginally more like a given block than it
looks like the background.
After pruning locations in this manner, we are left with a discrete set of "sites," where we
define a site as the tuple (block type, x location, y location). We can enumerate the set of
possible states by looking at every pair of sites whose displacement vector has a non-zero
probability.
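A sketch of the pruning rule, using synthetic negative-log-likelihood curves; the 0.7 factor is the threshold stated above:

```python
import numpy as np

# Sketch of the pruning rule in Section 2.2: keep only x locations where a
# block's negative log likelihood is a local minimum and beats the
# background by the stated factor, L(block) < 0.7 * L(background).
def candidate_sites(nll_block, nll_background, factor=0.7):
    keep = []
    for x in range(1, len(nll_block) - 1):
        local_min = (nll_block[x] <= nll_block[x - 1]
                     and nll_block[x] <= nll_block[x + 1])
        if local_min and nll_block[x] < factor * nll_background[x]:
            keep.append(x)
    return keep

rng = np.random.default_rng(0)
nll_b = rng.uniform(50, 150, size=200)       # synthetic response curve
nll_bg = np.full(200, 160.0)
print(candidate_sites(nll_b, nll_bg)[:10])
```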
2.3 Inference In The Model
The statespace defined above is a directed acyclic graph, anchored at the left edge and
right edges of a line of text. A path through this lattice defines both a transcription and
a segmentation of the line into individual characters. Inference in this model is relatively
straightforward because of our constraint that each character may overlap only one preceding and one following character, and our restriction of displacement vectors to a small
discrete range. The first restriction means that we need only consider binary relations between templates. The second preserves the independence relationships of an HMM. A
given state st is independent of the rest of the line given the values of all other states within
dmax of either edge of st (where dmax is the legal displacement vector with the longest
x component.) We can therefore easily calculate the best path or explicitly calculate the
posterior of a node by traversing the state graph in topological order, sorted from left to
right. The literature on Weighted Finite State Transducers ([6], [5]) is a good resource for
efficient algorithms on these types of statespace graph.
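Because the state graph is a DAG ordered by x position, the best path reduces to one dynamic-programming sweep; a minimal sketch with made-up edge weights:

```python
# Sketch of inference over the site graph (Section 2.3): states sorted left
# to right form a DAG, so the best path is one pass of dynamic programming.
# Edge weights stand in for transition + emission log probabilities.
def best_path(edges, start, end):
    """edges: list of (u, v, logp) with u < v (topological order by x)."""
    best = {start: 0.0}
    back = {}
    for u, v, logp in sorted(edges):          # sorted => u settled before v
        if u in best and best[u] + logp > best.get(v, float("-inf")):
            best[v] = best[u] + logp
            back[v] = u
    path, s = [end], end
    while s != start:
        s = back[s]
        path.append(s)
    return best[end], path[::-1]

edges = [(0, 1, -1.0), (0, 2, -2.5), (1, 3, -1.2), (2, 3, -0.1), (3, 4, -0.5)]
print(best_path(edges, 0, 4))   # -> (-2.7, [0, 1, 3, 4])
```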
3 Learning Better Character Templates
We initialize our algorithm with a set of hand-cut templates, exactly one per character (Figure
2), and our goal is to construct more accurate character models automatically from unsupervised
data. As noted above, we can easily calculate the posterior of a given site under
our model. (Recall that a site is a particular character template at a given (x, y) location in
the line.) The traditional EM approach to estimating new templates would be to use these
sites as training examples, weighted by their posteriors. Unfortunately, the constraints imposed
by 3- and even 4-gram character models seem to be insufficient: the posteriors of
sites are not discriminative enough to get learning off the ground.
Figure 2: Original Training Data. These 22 glyphs are our only document-specific training
data. We use the model based on these characters to extract the new examples shown below.
Figure 3: Examples of extracted templates. We extract new templates from high-confidence
regions. From these, we choose a subset to incorporate into the model as new exemplars.
Templates are chosen iteratively to best cover the space of training examples. Notice that
for 'q' and 'a', we have extracted capital letters, of which there were no examples in
our original set of glyphs. This happens when the combination of constraints from the
dictionary and the surrounding glyphs makes a 'q' or 'a' the only possible explanation for
this region, even though its local likelihood is poor.
The key to successfully learning new templates lies in the observation from our previous
work [2] that even when the posteriors of individual characters are not discriminative, one
can still achieve very good search results with the same model. The search word in effect
serves as its own language model, only allowing paths through the state graph that actually
contain it, and the longer the word the more it constrains the model. Whole words impose
much tighter constraints than a 2 or 3-gram character model, and it is only with this added
power that we can successfully learn new character templates.
We define the score for a search as the negative log likelihood of the best path containing
that word. With sufficiently long words, it becomes increasingly unlikely that a spurious
path will achieve a high score. Moreover, if we are given a large dictionary of words and
no alternative word explains a region of ink nearly as well as the best scoring word, then
we can be extremely confident that this is a true transcription of that piece of ink.
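Operationally, this amounts to a margin test between the best and second-best dictionary words for a region; a sketch with invented scores (the margin threshold is an assumption, not a value from the paper):

```python
# Sketch of the "high confidence" test: a region is accepted when exactly
# one dictionary word scores well, measured by the margin between the best
# and second-best word scores (scores are negative log likelihoods).
def confidence_margin(word_scores):
    """word_scores: dict word -> score (lower is better)."""
    ranked = sorted(word_scores.items(), key=lambda kv: kv[1])
    (best, s1), (_, s2) = ranked[0], ranked[1]
    return best, s2 - s1

scores = {"quid": 1840.0, "quod": 1893.5, "quin": 1901.2}   # invented values
word, margin = confidence_margin(scores)
if margin > 40.0:                      # threshold is an assumption
    print(f"high-confidence region: '{word}' (margin {margin:.1f})")
```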
Starting with a weak character model, we do not expect to find many of these "high confidence"
regions, but with a large enough document, we should expect to find some. From
these regions, we can extract new, reliable templates with which to improve our character
models. The most valuable of these new templates will be those that are significantly different
from any in our current set. For example, in Figure 3, note that our system identifies
capital Q's, even though our only input template was lower case. It identifies this ink as
a Q in much the same way that a person solves a crossword puzzle. We can easily infer
the missing character in the string "obv-ous" because the other letters constrain us to one
possible solution. Similarly, if other character templates in a word match well, then we can
unambiguously identify the other, more ambiguous ones. In our Latin case, "Quid" is the
only likely explanation for "-uid".
3.1 Extracting New Templates and Updating The Model
Within a high confidence region we have both a transcription and a localization of template
centers. It remains only to cut out new templates. We accomplish this by creating a template
image for the column of pixels from the corresponding block templates and then assigning
image pixels to the nearest template character (measured by Euclidean distance).
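A simplified sketch of this assignment step, using distance to character centers as a stand-in for distance to the nearest template pixel:

```python
import numpy as np

# Sketch of template extraction (Section 3.1): given localized character
# centers in a high-confidence region, assign every ink pixel to the
# nearest character and cut one new exemplar per character.
def extract_glyphs(ink_xy, centers, labels):
    """ink_xy: (P, 2) pixel coords; centers: (K, 2); labels: K char names."""
    ink_xy, centers = np.asarray(ink_xy, float), np.asarray(centers, float)
    d = np.linalg.norm(ink_xy[:, None, :] - centers[None, :, :], axis=2)
    owner = d.argmin(axis=1)                     # nearest-center assignment
    return {labels[k]: ink_xy[owner == k] for k in range(len(labels))}

pixels = [(1, 1), (2, 2), (9, 1), (10, 2), (18, 1)]
glyphs = extract_glyphs(pixels, centers=[(1, 1), (9, 1), (18, 1)],
                        labels=["q", "u", "i"])
print({c: len(p) for c, p in glyphs.items()})
```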
Given a set of templates extracted from high confidence regions, we choose a subset of
templates that best explain the remaining examples. We do this in a greedy fashion by
choosing the example whose likelihood is lowest under our current model and adding it to
our set. Currently, we threshold the number of new templates for the sake of efficiency. Finally,
given the new set of templates, we can add them to the model and rerun our searches,
potentially identifying new high confidence regions.
[Figure 4 plot: y-axis "Score Under Model" (roughly 3300-3400, worse at top, best at bottom), x-axis "Confidence Margins".]
Figure 4: Each line segment in the lower figure represents a proposed location for a word
from our dictionary. Its vertical height is the score of that location under our model. A
lower score is a better fit. The dotted line is the score of our model's best possible
path. Three correct words, "nec", "quin" and "dari", are actually on the best path. We
define the confidence margin of a location as the difference in score between the best-fitting
word from our dictionary and the next best.
Figure 5: Extracting Templates. For a region with sufficiently high confidence margin, we
construct the maximum likelihood template from our current exemplars (left), and we assign
pixels from the original image to a template based on its distance to the nearest pixel in
the template image, extracting new glyph exemplars (right). These new glyphs become the
exemplars for our next round of training.
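A sketch of the greedy selection described above; the similarity score is a toy stand-in for likelihood under the current exemplar set:

```python
# Sketch of greedy exemplar selection: repeatedly add the extracted template
# that the current set explains worst.
def greedy_select(candidates, score, budget):
    """score(t, chosen) = how well the chosen set explains t (higher is
    better); returns up to `budget` templates."""
    chosen = []
    pool = list(candidates)
    while pool and len(chosen) < budget:
        worst = min(pool, key=lambda t: score(t, chosen))
        chosen.append(worst)
        pool.remove(worst)
    return chosen

# Toy 1-D "templates": explanation score = negative distance to the
# nearest already-chosen template.
def score(t, chosen):
    return -min((abs(t - c) for c in chosen), default=float("inf"))

print(greedy_select([0.0, 0.1, 0.9, 1.0, 0.5], score, budget=3))
```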
4 Results
Our algorithm iteratively improves the character model by gathering new training data from
high confidence regions. Figure 3 shows that this method finds new templates significantly
different from the originals. In this document, our set of examples after one round appears
to cover the space of character images well, at least those in lower case. Our templates are
not perfect. The 'a', for instance, has become associated with at least one block that is in
fact an 'o'. These mistakes are uncommon, particularly if we restrict ourselves to longer
words. Those that do occur introduce a tolerable level of noise into our model. They make
certain regions of the document more ambiguous locally, but that local ambiguity can be
overcome with the context provided by surrounding characters and a language model.
Improved Character Models. We evaluate the method more quantitatively by testing the
impact of the new templates on the quality of searches performed against the document.
To search for a given word, we rank lines by the ratio of the maximum likelihood transcription/segmentation that contains the search word to the likelihood of the best possible
segmentation/transcription under our model. The lowest possible search score is 1, happening when the search word is actually a substring of the maximum likelihood transcription.
Higher scores mean that the word is increasingly unlikely under our model. In Figure 7, the
figure on the left shows the improvement in ranking of the lines that truly contain selected
search words. The odd rows (in red) are search results using only the original 22 glyphs,
while the even rows (in green) use an additional 332 glyphs extracted from high-confidence
regions. Search results are markedly improved in the second model. The word "est", for
instance, had only 15 of 24 of the correct lines in the top 100 under the original model,
while under the learned model all 24 are not only present but also more highly ranked.
[Figure 6 plot: for each word, line segments mark candidate locations and their scores (roughly 1840-1920 for one example line, 2600-2700 for another), shown for Rnd 1 and Rnd 2.]
Figure 6: Search results with (Rnd 1) initial templates only and with (Rnd 2) templates
extracted from high-confidence regions. We show results that have a score within 5% of the
best path. Solid lines are the results for the correct words ("iam", "nupta", "nuptiis",
"inquam", "(v|u)ideo", "videt", "postquam"). Dotted lines represent other search results,
where we have made a few larger in order to show those words that are the closest
competitors to the true word. Many alternative searches, like the highlighted "post", are
actually portions of the correct larger words. These restrict our selection of confidence
regions, but do not impinge on search quality. Each correct word has significantly improved
after one round of template re-estimation. "iam" has been correctly identified, and is a new
high-confidence region. Both "nuptiis" and "postquam" are now the highest-likelihood
words for their region barring smaller subsequences, and "videt" has narrowed the gap
between its competitor "video".
Improved Search. Figure 6 shows the improved performance of our refitted model for
a single line. Most words have greatly improved relative to their next best alternative.
"postquam" and "iam" were not even considered by the original model and now are nearly
optimal. The right of Figure 7 shows the average precision/recall curve under each model
for 21 words with more than 4 occurrences in the dataset. Precision is the percentage
of lines truly containing a word in the top n search results, and recall is the percentage
of all lines containing the word returned in the top n results. The learned model clearly
dominates. The new model also greatly improves performance for rare words. For 320
words occurring just once in the dataset, 50% are correctly returned as the top-ranked result
under the original model. Under the learned model, this number jumps to 78%.
5 Conclusions and Future Work
In most fonts, characters are quite ambiguous locally. An 'n' looks like a 'u', looks like
'ii', etc. This ambiguity is the major hurdle to the unsupervised learning of character
templates. Language models help, but the standard n-gram models provide insufficient
constraints, giving posteriors for character sites too uninformative to get EM off the ground.
[Figure 7, left panel "Selected Words, Top 100 Returned Lines", per-word counts (Rnd 1, Rnd 2)/total: est (15,24)/24; nescio (1,1)/1; postquam (0,2)/2; quod (14,14)/14; moram (0,2)/2; non (8,8)/8; quid (9,9)/9. Right panel: aggregate precision/recall curves (precision roughly 0.35-0.75) for the Original Model and the Refit Model.]
Figure 7: The figure on the left shows those lines with the top 100 scores that actually
contain the specified word. The first of each set of two rows (in red) is the results from
Round 1. The second (in green) is the results for Round 2. Almost all search words in our
corpus show a significant improvement. The numbers to the right, (x, y)/y, mean that out of
y lines that actually contained the search word in our document, x of them made it into
the top ten. On the right are average precision/recall curves for 21 high-frequency words
under the model with our original templates (Rnd 1) and after refitting with new extracted
templates (Rnd 2). Extracting new templates vastly improves our search quality.
An entire word is much different. Given a dictionary, we expect many word images to have
a single likely transcription even if many characters are locally ambiguous. We show that
we can identify these high confidence regions even with a poorly tuned character model. By
extracting new templates only from these regions of the document, we overcome the noise
problem and significantly improve our character models. We demonstrate this improvement
for the task of search where the refitted models have drastically better search responses than
with the original. Our method is indifferent to the form of the actual character emission
model. There is a rich literature in character prediction from isolated image windows, and
we expect that incorporating more powerful character models should provide even greater
returns and help us in learning less regular scripts.
Finding high confidence regions to extract good training examples is a broadly applicable
concept. We believe this work should extend to other problems, most notably speech
recognition. Looked at more abstractly, our use of the language model in this work is actually
encoding spatial constraints. The probability of a character given an image window
depends not only on the identity of the surrounding characters but also on their spatial
configuration. Integrating context into recognition problems is an area of intense research in
the computer vision community, and we are investigating extending the idea of confidence
regions to more general object recognition problems.
References
[1] Early Manuscripts at Oxford University. Bodleian Library MS. Auct. F. 2.13. http://image.ox.ac.uk/.
[2] J. Edwards, Y. W. Teh, D. Forsyth, R. Bock, M. Maire, and G. Vesom. Making Latin manuscripts
searchable using gHMMs. In NIPS 17, pages 385-392, 2005.
[3] G. Kopec and M. Lomelin. Document-specific character template estimation. In Proceedings,
Document Image Recognition III, SPIE, 1996.
[4] V. Lavrenko, T. Rath, and R. Manmatha. Holistic word recognition for handwritten historical
documents. In DIAL, pages 278-287, 2004.
[5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[6] M. Mohri, F. Pereira, and M. Riley. Weighted finite state transducers in speech recognition. ISCA
ITRW Automatic Speech Recognition, pages 97-106, 2000.
Discovering the Structure of a Reactive Environment by Exploration
Michael C. Mozer
Department of Computer Science
and Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Jonathan Bachrach
Department of Computer
and Information Science
University of Massachusetts
Amherst, MA 01003
ABSTRACT
Consider a robot wandering around an unfamiliar environment. performing actions and sensing the resulting environmental states. The robot's task is to construct an internal model of its environment. a model that will allow it to predict
the consequences of its actions and to determine what sequences of actions to
take to reach particular goal states. Rivest and Schapire (1987&, 1987b;
Schapire. 1988) have studied this problem and have designed a symbolic algorithm to strategically explore and infer the structure of "finite state" environments. The heart of this algorithm is a clever representation of the environment
called an update graph. We have developed a connectionist implementation of
the update graph using a highly-specialized network architecture. With back
propagation learning and a trivial exploration strategy - choosing random actions - the connectionist network can outperform the Rivest and Schapire algorithm on simple problems. The network has the additional strength that it
can accommodate stochastic environments. Perhaps the greatest virtue of the
connectionist approach is that it suggests generalizations of the update graph
representation that do not arise from a traditional, symbolic perspective.
1 INTRODUCTION
Consider a robot placed in an unfamiliar environment. The robot is allowed to wander
around the environment, performing actions and sensing the resulting environmental
states. With sufficient exploration, the robot should be able to construct an internal
model of the environment, a model that will allow it to predict the consequences of its actions and to determine what sequence of actions must be taken to reach a particular goal
state. In this paper, we describe a connectionist network that accomplishes this task,
based on a representation of finite-state automata developed by Rivest and Schapire
(1987a, 1987b; Schapire, 1988).
The environments we wish to consider can be modeled by a finite-state automaton (FSA).
In each environment, the robot has a set of discrete actions it can execute to move from
one environmental state to another. At each environmental state, a set of binary-valued
sensations can be detected by the robot. To illustrate the concepts and methods in our
work, we use as an extended example a simple environment, the n-room world (from
Rivest and Schapire). The n-room world consists of n rooms arranged in a circular
chain. Each room is connected to the two adjacent rooms. In each room is a light bulb
and light switch. The robot can sense whether the light in the room where it currently
stands is on or off. The robot has three possible actions: move to the next room down
the chain (D), move to the next room up the chain (U), and toggle the light switch in the
current room (T).
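To make the environment concrete, the following minimal Python sketch simulates an n-room world; the class and method names are our own illustration, not from the paper.

import random

class NRoomWorld:
    # Circular chain of n rooms, each with a light; actions are
    # 'U' (move up), 'D' (move down), and 'T' (toggle the local light).
    def __init__(self, n):
        self.n = n
        self.pos = 0                                   # robot's current room
        self.lights = [random.random() < 0.5 for _ in range(n)]

    def step(self, action):
        if action == 'U':
            self.pos = (self.pos + 1) % self.n
        elif action == 'D':
            self.pos = (self.pos - 1) % self.n
        elif action == 'T':
            self.lights[self.pos] = not self.lights[self.pos]
        return self.lights[self.pos]                   # the only sensation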
2 MODELING THE ENVIRONMENT
If the FSA corresponding to the n-room world is known, the sensory consequences of
any sequence of actions can be predicted. Further, the FSA can be used to determine a
sequence of actions to take to obtain a certain goal state. Although one might try
developing an algorithm to learn the FSA directly, there are several arguments against
doing so (Schapire, 1988). Most important is that the FSA often does not capture structure inherent in the environment. Rather than trying to learn the FSA, Rivest and
Schapire suggest learning another representation of the environment called an update
graph. The advantage of the update graph is that in environments with regularities, the
number of nodes in the update graph can be much smaller than in the FSA (e.g., 2n
versus 2^n for the n-room world). Rivest and Schapire's formal definition of the update
graph is based on the notion of tests that can be performed on the environment, and the
equivalence of different tests. In this section, we present an alternative, more intuitive
view of the update graph that facilitates a connectionist interpretation.
Consider a three-room world. To model this environment, the essential knowledge required is the status of the lights in the current room (CUR), the next room up from the
current room (UP), and the next room down from the current room (DOWN). Assume the
update graph has a node for each of these environmental variables. Further assume that
each node has an associated value indicating whether the light in the particular room is
on or off.
If we know the values of the variables in the current environmental state, what will their
new values be after taking some action, say U? When the robot moves to the next room
up, the new value of CUR becomes the previous value of UP; the new value of DOWN becomes the previous value of CUR; and in the three-room world, the new value of UP be-
comes the previous value of DOWN. As depicted in Figure 1a, this action thus results in
shifting values around in the three nodes. This makes sense because moving up does not
affect the status of any light, but it does alter the robot's position with respect to the three
rooms. Figure 1b shows the analogous flow of information for the action D. Finally, the
action T should cause the status of the current room's light to be complemented while the
other two rooms remain unaffected (Figure 1c). In Figure 1d, the three sets of links from
Figures 1a-c have been superimposed and have been labeled with the appropriate action.
One final detail: The Rivest and Schapire update graph formalism does not make use of
the "complementation" link. To avoid it, each node may be split into two values. one
representing the status of a room and the other its complement (Figure 1e). Toggling
thus involves exchanging the values of CUR and its complement. Just as the values of CUR, UP, and
DOWN must be shifted for the actions U and D, so must their complements.
Given the update graph in Figure 1e and the value of each node for the current environmental state, the result of any sequence of actions can be predicted simply by shifting
values around in the graph. Thus, as far as predicting the input/output behavior of the environment is concerned, the update graph serves the same purpose as the FSA.
A defining and nonobvious (from the current description) property of an update graph is
that each node has exactly one incoming link for each action. We call this the one-input-per-action constraint. For example, CUR gets input from its complement for the action T,
from UP for U, and from DOWN for D.
Figure 1: (a) Links between nodes indicating the desired information flow on performing the action U. CUR
represents the status of the lights in the current room, UP the status of the lights in the next room up, and DOWN
the status of the lights in the next room down. (b) Links between nodes indicating the desired information flow
on performing the action D. (c) Links between nodes indicating the desired information flow on performing the
action T. The "-" on the link from CUR to itself indicates that the value must be complemented. (d) Links
from the three separate actions superimposed and labeled by the action. (e) The complementation link can be
avoided by adding a set of nodes that represent the complements of the original set. This is the update graph for
a three-room world.
3 THE RIVEST AND SCHAPIRE ALGORITHM
Rivest and Schapire have developed a symbolic algorithm (hereafter, the RS algorithm) to
strategically explore an environment and learn its update graph representation. The RS
algorithm formulates explicit hypotheses about regularities in the environment and tests
these hypotheses one or a relatively small number at a time. As a result, the algorithm
may not make full use of the environmental feedback obtained. It thus seems worthwhile
to consider alternative approaches that could allow more efficient use of the environmental feedback, and hence, more efficient learning of the update graph. We have taken a connectionist approach, which has shown quite promising results in preliminary experiments
and suggests other significant benefits. We detail these benefits below, but must first
describe the basic approach.
4 THE UPDATE GRAPH AS A CONNECTIONIST NETWORK
How might we turn the update graph into a connectionist network? Start by assuming
one unit in a network for each node in the update graph. The activity level of the unit
represents the truth value associated with the update graph node. Some of these units
serve as "outputs" of the network. For example, in the three-room world, the output of
the network is the unit that represents the status of the current room. In other environments, there may be several sensations, in which case there will be several output units.
What is the analog of the labeled links in the update graph? The labels indicate that
values are to be sent down a link when a particular action occurs. In connectionist terms,
the links should be gated by the action. To elaborate, we might include a set of units that
represent the possible actions; these units act to multiplicatively gate the flow of activity
between units in the update graph. Thus, when a particular action is to be performed, the
corresponding action unit is activated, and the connections that are gated by this action
become enabled. If the action units form a local representation, i.e., only one is active at
a time, exactly one set of connections is enabled at a time. Consequently, the gated connections can be replaced by a set of weight matrices, one per action. To predict the
consequences of a particular action, the weight matrix for that action is plugged into the
network and activity is allowed to propagate through the connections. Thus, the network
is dynamically rewired contingent on the current action.
The effect of activity propagation should be that the new activity of a unit is the previous
activity of some other unit. A linear activation function is sufficient to achieve this:
X(t) = W_a(t) X(t-1),    (1)
where a(t) is the action selected at time t, W_a(t) is the weight matrix associated with this
action, and X(t) is the activity vector that results from taking action a(t). Assuming
weight matrices which have zeroes in each row except for one connection of strength 1
(the one-input-per-action constraint), the activation rule will cause activity values to be
copied around the network.
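A minimal sketch of this dynamically rewired propagation, with one weight matrix per action stored in a dictionary (the function and variable names are ours):

import numpy as np

def propagate(W, x_prev, action):
    # Equation 1: plug in the weight matrix gated by the current action.
    # With one-input-per-action matrices (a single 1 in each row), this
    # simply copies activity values around the network.
    return W[action] @ x_prev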
5 TRAINING THE NETWORK TO BE AN UPDATE GRAPH
We have described a connectionist network that can behave as an update graph, and now
tum to the procedure used to learn the connection strengths in this network. For expository purposes, assume that the number of units in the update graph is known in advance.
(This is not necessary, as we show in Mozer & Bachrach, 1989.) A weight matrix is required for each action, with a potential non-zero connection between every pair of units.
As in most connectionist learning procedures, the weight matrices are initialized to random values; the outcome of learning will be a set of matrices that represent the update
graph connectivity.
If the network is to behave as an update graph, the one-input-per-action constraint must
be satisfied. In terms of the connectivity matrices, this means that each row of each
weight matrix should have connection strengths of zero except for one value which is 1.
To achieve this property, additional constraints are placed on the weights. We have explored a combination of three constraints:
(1) Σ_j (w_aij)^2 = 1,    (2) Σ_j w_aij = 1,    and (3) w_aij ≥ 0,
where w_aij is the connection strength to i from j for action a. Constraint 1 is satisfied by
introducing an additional cost term to the error function. Constraints 2 and 3 are rigidly
enforced by renormalizing each row of W_a following each weight update. The normalization
procedure finds the shortest distance projection from the updated weight vector to the hyperplane specified by constraint 2 that also satisfies constraint 3.
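One standard way to realize this renormalization is a Euclidean (shortest-distance) projection of each weight row onto the set defined by constraints 2 and 3; the sort-based routine below is our own sketch and not necessarily the authors' exact procedure.

import numpy as np

def project_row(w):
    # Shortest-distance projection of one row onto {w : sum(w) = 1, w >= 0}.
    u = np.sort(w)[::-1]                       # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(w)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(w + theta, 0.0)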
At each time step t, the training procedure consists of the following sequence of events (sketched in code after the list):
1. An action, a(t), is selected at random.
2. The weight matrix for that action, W_a(t), is used to compute the activities at t, X(t),
from the previous activities X(t-1).
3. The selected action is performed on the environment and the resulting sensations are
observed.
4. The observed sensations are compared with the sensations predicted by the network
(i.e., the activities of units chosen to represent the sensations) to compute a measure of
error. To this error is added the contribution of constraint 1.
5. The back propagation "unfolding-in-time" procedure (Rumelhart, Hinton, & Williams,
1986) is used to compute the derivative of the error with respect to weights at the
current and earlier time steps, W_a(t-i), for i = 0 ... τ-1.
6. The weight matrices for each action are updated using the overall error gradient and
then are renormalized to enforce constraints 2 and 3.
7. The temporal record of unit activities, X(t-i) for i = 0 ... τ, which is maintained to
permit back propagation in time, is updated to reflect the new weights. (See further
explanation below.)
8. The activities of the output units at time t, which represent the predicted sensations,
are replaced by the actual observed sensations.
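A compact sketch of one pass through steps 1-8 for this linear network follows; step 7 (the activity refresh) is omitted for brevity, unit 0 plays the role of the output unit, env is an environment like the one sketched earlier, and project_row is the projection routine above. All names are our own.

import numpy as np

def train_step(env, W, acts, X, tau, lr):
    a = np.random.choice(list(W))                  # 1. pick a random action
    acts.append(a)
    X.append(W[a] @ X[-1])                         # 2. predict activities at t
    y = float(env.step(a))                         # 3. act; observe sensation
    delta = np.zeros_like(X[-1])
    delta[0] = 2.0 * (X[-1][0] - y)                # 4. squared-error gradient
    grads = {b: np.zeros_like(M) for b, M in W.items()}
    for i in range(1, min(tau, len(acts)) + 1):    # 5. unfold in time
        grads[acts[-i]] += np.outer(delta, X[-i - 1])
        delta = W[acts[-i]].T @ delta
    for b in W:                                    # 6. update and renormalize
        W[b] -= lr * grads[b]
        W[b] = np.array([project_row(row) for row in W[b]])
    X[-1][0] = y                                   # 8. clamp to the observation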
Steps 5-7 require further elaboration. The error measured at time t may be due to incorrect propagation of activities from time t-1, which would call for modification of the
weight matrix W_a(t). But the error may also be attributed to incorrect propagation of activities at earlier times. Thus, back propagation is used to assign blame to the weights at
earlier times. One critical parameter of training is the amount of temporal history, τ, to
consider. We have found that, for a particular problem, error propagation beyond a cer-
tain critical number of steps does not improve learning performance, although any fewer
does indeed harm performance. In the results described below, we set τ for a particular
problem to what appeared to be a safe limit: one less than the number of nodes in the update graph solution of the problem.
To back propagate error in time, we maintain a temporal record of unit activities. However, a problem arises with these activities following a weight update: the activities are
no longer consistent with the weights - i.e., Equation 1 is violated. Because the error
derivatives computed by back propagation are exact only when Equation 1 is satisfied,
future weight updates based on the inconsistent activities are not assured of being correct.
Empirically, we have found the algorithm extremely unstable if we do not address this
problem.
In most situations where back propagation is applied to temporally-extended sequences,
the sequences are of finite length. Consequently, it is possible to wait until the end of the
sequence to update the weights, at which point consistency between activities and
weights no longer matters because the system starts fresh at the beginning of the next sequence. In the present situation, however, the sequence of actions does not terminate.
We thus were forced to consider alternative means of ensuring consistency. The most
successful approach involved updating the activities after each weight change to force
consistency (step 7 of the list above). To do this, we propagated the earliest activities in
the temporal record. X(t--'t). forward again to time t, using the updated weight matrices.
6 RESULTS
Figure 2 shows the weights in the update graph network for the three-room world after
the robot has taken 6,000 steps. The Figure depicts a connectivity pattern identical to
that of the update graph of Figure Ie. To explain the correspondence, think of the diagram as being in the shape of a person who has a head, left and right arms, left and right
legs, and a heart. For the action U, the head - the output unit - receives input from
the left leg, the left leg from the heart, and the heart from the head, thereby forming a
three-unit loop. The other three units - the left arm, right arm, and right leg - form a
Figure 2: Weights learned after 6,000 exploratory steps in the three-room world. Each large diagram
represents the weights corresponding to one of the three actions. Each small diagram contained within a large
diagram represents the connection strengths feeding into a particular unit for a particular action. There are six
units, hence six small diagrams. The output unit, which indicates the state of the light in the current room, is the
protruding "head" of the large diagram. A white square in a particular position of a small diagram represents the
strength of connection from the unit in the homologous position in the large diagram to the unit represented by
the small diagram. The area of the square is proportional to the connection strength.
similar loop. For the action D, the same two loops are present but in the reverse direction. These two loops also appear in Figure 1e. For the action T, the left and right arms,
heart, and left leg each keep their current value, while the head and the right leg exchange values. This corresponds to the exchange of values between the CUR and complement
nodes of Figure 1e.
In addition to learning the update graph connectivity, the network has simultaneously
learned the correct activity values associated with each node for the current state of the
environment. Armed with this information, the network can predict the outcome of any
sequence of actions. Indeed, the prediction error drops to zero, causing learning to cease
and the network to become completely stable.
Now for the bad news: The network does not converge for every set of random initial
weights, and when it does, it requires on the order of 6,000 steps. However, when the
weight constraints are removed, the network converges without fail and in about 300
steps. In Mozer and Bachrach (1989), we consider why the weight constraints are hannful and suggest several remedies. Without weight constraints, the resulting weight matrix, which contains a collection of positive and negative weights of varying magnitudes,
is not readily interpreted. In the case of the n -room world, . one reason why the final
weights are difficult to interpret is because the net has discovered a solution that does not
satisfy the RS update graph formalism; it has discovered the notion of complementation
links of the sort shown in Figure 1d. With the use of complementation links, only three
units are required, not six. Consequently, the three unnecessary units are either cut out of
the solution or encode information redundantly.
Table 1 compares the performance of the RS algorithm against that of the connectionist
network without weight constraints for several environments. Performance is measured
in terms of the median number of actions the robot must take before it is able to predict
the outcome of subsequent actions. (Further details of the experiments can be found in
Mozer and Bachrach, 1989.) In simple environments, the connectionist update graph can
outperform the RS algorithm. This result is quite surprising when considering that the action sequence used to train the network is generated at random, in contrast to the RS algorithm, which involves a strategy for exploring the environment. We conjecture that the
network does as well as it does because it considers and updates many hypotheses in
parallel at each time step. In complex environments, however, the network does poorly.
By "complex", we mean that the number of nodes in the update graph is quite large and
the number of distinguishing environmental sensations is relatively small. For example,
the network failed to learn a 32-room world, whereas the RS algorithm succeeded. An
intelligent exploration strategy seems necessary in this case: random actions will take
too long to search the state space. This is one direction our future work will take.
Beyond the potential speedups offered by connectionist learning algorithms, the connectionist approach has other benefits.
Table 1: Number of Steps Required to Learn Update Graph

Environment            RS Algorithm    Connectionist Update Graph
Little Prince World    200             91
Car Radio World        27,695          8,167
Four-Room World        1,388           1,308
32-Room World          52,436          fails
- Performance of the network appears insensitive to prior knowledge of the number of
nodes in the update graph being learned. In contrast, the RS algorithm requires an
upper bound on the update graph complexity, and performance degrades significantly if
the upper bound isn't tight.
- The network is able to accommodate "noisy" environments, also in contrast to the RS
algorithm.
- During learning, the network continually makes predictions about what sensations will
result from a particular action, and these predictions improve with experience. The RS
algorithm cannot make predictions until learning is complete; it could perhaps be
modified to do so, but there would be an associated cost.
- Treating the update graph as matrices of connection strengths has suggested generalizations of the update graph formalism that don't arise from a more traditional analysis.
First, there is the fairly direct extension of allowing complementation links. Second,
because the connectionist network is a linear system, any rank-preserving linear
transform of the weight matrices will produce an equivalent system, but one that does
not have the local connectivity of the update graph (see Mozer & Bachrach, 1989).
The linearity of the network also allows us to use tools of linear algebra to analyze the
resulting connectivity matrices.
These benefits indicate that the connectionist approach to the environment-modeling
problem is worthy of further study. We do not wish to claim that the connectionist approach supersedes the impressive work of Rivest and Schapire. However, it offers complementary strengths and alternative conceptualizations of the learning problem.
Acknowledgements
Our thanks to Rob Schapire, Paul Smolensky, and Rich Sutton for helpful discussions. This work
was supported by a grant from the James S. McDonnell Foundation to Michael Mozer, grant 87-236 from the Sloan Foundation to Geoffrey Hinton, and grant AFOSR-87-0030 from the Air Force
Office of Scientific Research, Bolling AFB, to Andrew Barto.
References
Mozer, M. C., & Bachrach, J. (1989). Discovering the structure of a reactive environment by
exploration (Technical Report CU-CS-451-89). Boulder, CO: University of Colorado,
Department of Computer Science.
Rivest, R. L., & Schapire, R. E. (1987). Diversity-based inference of finite automata. In
Proceedings of the Twenty-Eighth Annual Symposium on Foundations of Computer
Science (pp. 78-87).
Rivest, R. L., & Schapire, R. E. (1987). A new approach to unsupervised learning in detenninistic
environments. In P. Langley (Ed.), Proceedings of the Fourth International Workshop on
Machine Learning (pp. 364-375).
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by
error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed
processing: Explorations in the microstructure of cognition. Volume I: Foundations (pp.
318-362). Cambridge, MA: MIT Press/Bradford Books.
Schapire, R. E. (1988). Diversity-based inference of finite automata. Unpublished master's thesis,
Massachusetts Institute of Technology, Cambridge, MA.
| 292 |@word cu:1 toggling:1 seems:2 r:11 propagate:2 fonn:2 thereby:1 accommodate:2 ld:1 initial:1 contains:1 hereafter:1 current:12 surprising:1 activation:2 must:7 readily:1 subsequent:1 shape:1 designed:1 drop:1 update:46 treating:1 discovering:6 selected:3 fewer:1 beginning:1 record:3 node:18 direct:1 become:2 symposium:1 incorrect:2 consists:2 indeed:2 behavior:1 actual:1 armed:1 little:1 considering:1 becomes:2 rivest:12 linearity:1 what:6 interpreted:1 developed:3 redundantly:1 temporal:4 every:2 act:1 exactly:2 unit:25 grant:3 appear:1 continually:1 positive:1 before:1 local:2 limit:1 consequence:4 sutton:1 rigidly:1 might:3 studied:1 equivalence:1 suggests:2 dynamically:1 co:2 procedure:5 langley:1 area:1 significantly:1 projection:1 wait:1 suggest:2 symbolic:3 get:1 cannot:1 clever:1 equivalent:1 williams:2 automaton:3 bachrach:10 clutent:1 rule:1 outperfonn:2 enabled:2 notion:2 exploratory:1 analogous:1 updated:4 colorado:2 exact:1 distinguishing:1 hypothesis:3 rumelhart:3 updating:1 cut:1 labeled:3 observed:3 capture:1 connected:1 news:1 removed:1 mozer:11 environment:36 complexity:1 renormalized:1 tight:1 algebra:1 serve:1 completely:1 represented:1 train:1 forced:1 describe:2 detected:1 choosing:1 outcome:3 quite:3 valued:1 say:1 think:1 transform:1 noisy:1 final:2 fsa:8 sequence:13 advantage:1 net:1 causing:1 loop:4 poorly:1 achieve:2 description:1 intuitive:1 regularity:2 produce:1 renormalizing:1 converges:1 illustrate:1 andrew:1 measured:2 flgure:2 predicted:4 involves:2 come:1 indicate:2 c:1 direction:2 sensation:10 safe:1 correct:2 owing:1 stochastic:1 exploration:10 require:1 exchange:2 feeding:1 assign:1 microstructure:1 generalization:2 preliminary:1 exploring:1 extension:1 around:5 cognition:1 predict:5 claim:1 purpose:2 label:1 currently:1 infonnation:6 radio:1 tool:1 unfolding:1 mit:1 modified:1 rather:1 avoid:1 sel:1 varying:1 barto:1 office:1 earliest:1 encode:1 rank:1 superimposed:2 indicates:2 contrast:3 sense:2 helpful:1 inference:2 waij:3 overall:1 fairly:1 construct:2 identical:1 represents:5 cer:1 unsupervised:1 alter:1 future:2 connectionist:19 report:1 intelligent:1 inherent:1 strategically:2 simultaneously:1 replaced:2 maintain:1 highly:1 circular:1 light:10 activated:1 chain:3 grapb:1 succeeded:1 detenninistic:1 necessary:2 experience:1 perfonnance:3 plugged:1 initialized:1 desired:3 prince:1 formalism:2 modeling:2 earlier:3 exchanging:1 cost:2 introducing:1 successful:1 too:1 person:1 thanks:1 amherst:1 ie:5 off:2 michael:2 connectivity:6 again:1 reflect:1 satisfied:3 thesis:1 cognitive:1 book:1 derivative:2 li:1 potential:2 diversity:2 matter:1 satisfy:1 sloan:1 performed:2 try:1 view:1 doing:1 analyze:1 start:2 sort:1 parallel:2 contribution:1 square:2 air:1 who:1 unaffected:1 complementation:5 history:1 anns:1 explain:1 reach:2 wai:1 ed:2 definition:1 against:2 pp:3 involved:1 james:1 associated:5 attributed:1 cur:12 propagated:1 massachusetts:2 knowledge:2 car:1 back:7 appears:1 tum:2 afb:1 arranged:1 execute:1 just:1 until:2 receives:1 propagation:11 perhaps:2 scientific:1 effect:1 concept:1 remedy:1 hence:2 white:1 adjacent:1 maintained:1 trying:1 toggle:1 complete:1 specialized:1 empirically:1 insensitive:1 volume:1 analog:1 interpretation:1 interpret:1 unfamiliar:2 significant:1 cambridge:2 consistency:3 blame:1 moving:1 robot:13 stable:1 impressive:1 longer:2 perspective:1 reverse:1 certain:1 binary:1 tenns:2 preserving:1 additional:3 contingent:1 accomplishes:1 determine:3 shortest:1 converge:1 full:1 infer:1 offer:1 long:1 elaboration:1 
ensuring:1 prediction:4 basic:1 represent:5 normalization:1 addition:1 whereas:1 diagram:9 median:1 facilitates:1 sent:1 flow:5 inconsistent:1 call:2 split:1 concerned:1 switch:2 affect:1 architecture:1 whether:2 six:3 wandering:1 cause:2 action:59 amount:1 mcclelland:1 schapire:15 shifted:1 per:4 discrete:1 four:1 graph:41 lain:1 enforced:1 fourth:1 master:1 bound:2 rewired:1 copied:1 correspondence:1 annual:1 activity:23 strength:10 constraint:15 nonobvious:1 u1:1 argument:1 extremely:1 performing:2 relatively:2 conjecture:1 speedup:1 department:2 developing:1 conceptualization:1 expository:1 combination:1 mcdonnell:1 smaller:1 remain:1 rob:1 modification:1 leg:6 boulder:2 heart:5 taken:3 equation:2 fail:1 know:1 serf:1 end:1 permit:1 worthwhile:1 appropriate:1 enforce:1 alternative:4 gate:1 original:1 include:1 move:4 added:1 occurs:1 strategy:3 degrades:1 traditional:2 gradient:1 distance:1 link:15 separate:1 considers:1 unstable:1 trivial:1 reason:1 fresh:1 fonnalism:1 assuming:1 length:1 modeled:1 multiplicatively:1 difficult:1 negative:1 implementation:1 twenty:1 gated:3 allowing:1 upper:2 finite:5 behave:2 defining:1 extended:2 hinton:3 situation:2 head:5 discovered:2 worthy:1 fonning:1 nwnber:1 complement:2 pair:1 required:4 specified:1 bolling:1 connection:12 unpublished:1 learned:3 address:1 able:3 beyond:2 suggested:1 below:3 pattern:1 eighth:1 appeared:1 smolensky:1 explanation:1 shifting:2 greatest:1 perfonned:1 event:1 critical:2 force:2 homologous:1 predicting:1 usui:1 arm:3 representing:1 improve:2 technology:1 temporally:1 isn:1 prior:1 acknowledgement:1 wander:1 afosr:1 proportional:1 versus:1 geoffrey:1 foundation:4 bulb:1 offered:1 sufficient:2 consistent:1 row:2 placed:2 supported:1 formal:1 allow:3 institute:1 taking:2 protruding:1 benefit:4 distributed:1 feedback:2 world:15 stand:1 rich:1 sensory:1 forward:1 collection:1 avoided:1 far:1 status:8 keep:1 active:1 incoming:1 harm:1 unnecessary:1 don:1 search:1 why:2 table:2 promising:1 learn:6 complex:2 assured:1 arise:2 paul:1 allowed:2 complementary:1 elaborate:1 depicts:1 fails:1 position:3 wish:2 explicit:1 down:10 bad:1 sensing:2 explored:1 list:1 cease:1 virtue:1 essential:1 adding:1 magnitude:1 depicted:1 simply:1 explore:2 failed:1 contained:1 corresponds:1 truth:1 environmental:10 satisfies:1 complemented:2 ma:3 goal:3 consequently:3 room:38 change:1 except:2 hyperplane:1 called:2 bradford:1 la:2 indicating:4 internal:3 arises:1 reactive:6 violated:1 perfonning:2 |
2,115 | 2,920 | Learning from Data of Variable Quality
Koby Crammer, Michael Kearns, Jennifer Wortman
Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19103
{crammer,mkearns,wortmanj}@cis.upenn.edu
Abstract
We initiate the study of learning from multiple sources of limited data,
each of which may be corrupted at a different rate. We develop a complete theory of which data sources should be used for two fundamental
problems: estimating the bias of a coin, and learning a classifier in the
presence of label noise. In both cases, efficient algorithms are provided
for computing the optimal subset of data.
1
Introduction
In many natural machine learning settings, one is not only faced with data that may be corrupted or deficient in some way (classification noise or other label errors, missing attributes,
and so on), but with data that is not uniformly corrupted. In other words, we might be presented with data of variable quality ? perhaps some small amount of entirely ?clean? data,
another amount of slightly corrupted data, yet more that is significantly corrupted, and so
on. Furthermore, in such circumstances we may often know at least an upper bound on the
rate and type of corruption in each pile of data. An extreme example is the recent interest in
settings where one has a very limited set of correctly labeled examples, and an effectively
unlimited set of entirely unlabeled examples, as naturally arises in problems such as classifying web pages [1]. Another general category of problems that falls within our interest
is when multiple piles of data are drawn from processes that differ perhaps slightly and in
varying amounts from the process we wish to estimate. For example, we might wish to
estimate a conditional distribution P(X|Y = y) but have only a small number of observations in which Y = y, but a larger number of observations in which Y = y' for values of
y' "near" to y. In such circumstances it might make sense to base our model on a larger
number of observations, at least for those y' closest to y.
While there is a large body of learning theory both for uncorrupted data and for data that
is uniformly corrupted in some way [2, 3], there is no general framework and theory for
learning from data of variable quality. In this paper we introduce such a framework, and
develop its theory, for two basic problems: estimating a bias from corrupted coins, and
learning a classifier in the presence of varying amounts of label noise. For the corrupted
coins case we provide an upper bound on the error that is expressed as a trade-off between
weighted approximation errors and larger amounts of data. This bound provides a building
block for the classification noise setting, in which we are able to give a bound on the
generalization error of empirical risk minimization that specifies the optimal subset of the
data to use. Both bounds can computed by simple and efficient algorithms. We illustrate
both problems and our algorithms with numerical simulations.
2
Estimating the Bias from Corrupted Coins
We begin by considering perhaps the simplest possible instance of the general class of
problems in which we are interested, namely, the problem of estimating the unknown
bias of a coin. In this version of the variable quality model, we will have access to different
amounts of data from "corrupted" coins whose bias differs from the one we wish to estimate. We use our solution for this simple problem as a building block for the classification
noise setting in Section 3.
2.1
Problem Description
Suppose we wish to estimate the bias θ of a coin given K piles of training observations
N1, ..., NK. Each pile Ni contains ni outcomes of flips of a coin with bias θi, where the
only information we are provided is that θi ∈ [θ - εi, θ + εi], and 0 ≤ ε1 ≤ ε2 ≤ ... ≤ εK.
We refer to the εi as bounds on the approximation errors of the corrupted coins. We denote
by hi the number of heads observed in the ith pile. Our immediate goal is to determine
which piles should be considered in order to obtain the best estimate of the true bias θ.
We consider estimates for θ obtained by merging some subset of the data into a single
unified pile, and computing the maximum likelihood estimate for θ, which is simply the
fraction of times heads appears as an outcome in the unified pile. Although one can consider using any subset of the data, it can be proved (and is intuitively obvious) that an
optimal estimate (in the sense that will be defined shortly) always uses a prefix of the data,
i.e. all data from the piles indexed 1 to k for some k ≤ K, and possibly a subset of the data
from pile k + 1. In fact, it will be shown that only complete piles need to be considered.
Therefore, from this point on we restrict ourselves to estimates of this form, and identify
them by the maximal index k of the piles used. The associated estimate is then simply

    θ̂k = (h1 + ... + hk) / (n1 + ... + nk).
We denote the expectation of this estimate by

    θ̄k = E[θ̂k] = (n1 θ1 + ... + nk θk) / (n1 + ... + nk).
To simplify the presentation we denote by ni,j the number of outcomes in piles Ni, ..., Nj,
that is, ni,j = Σ_{m=i}^{j} nm.
We now bound the deviation of the estimate θ̂k from the true bias of the coin θ using the
expectation θ̄k:

    |θ - θ̂k| = |θ - θ̄k + θ̄k - θ̂k|
             ≤ |θ - θ̄k| + |θ̄k - θ̂k|
             ≤ Σ_{i=1}^{k} (ni / n1,k) εi + |θ̄k - θ̂k|
The first inequality follows from the triangle inequality and the second from our assumptions. Using the Hoeffding inequality we can bound the second term and find that with high
probability for an appropriate choice of δ we have
    |θ - θ̂k| ≤ Σ_{i=1}^{k} (ni / n1,k) εi + sqrt( log(2K/δ) / (2 n1,k) ).    (1)
To summarize, we have proved the following theorem.
Theorem 1 Let θ̂k be the estimate obtained by using only the data from the first k piles.
Then for any δ > 0, with probability ≥ 1 - δ we have

    |θ - θ̂k| ≤ Σ_{i=1}^{k} (ni / n1,k) εi + sqrt( log(2K/δ) / (2 n1,k) )
simultaneously for all k = 1, . . . , K.
Two remarks are in place here. First, the theorem is data-independent since it does not take
into account the actual outcomes of the experiments h1 , . . . , hK . Second, the two terms in
the bound reflect the well-known trade-off between bias (approximation error) and variance
(estimation error). The first term bounds the approximation error of replacing the true coin
θ with the average θ̄k. The second term corresponds to the estimation error which arises
as a result of our finite sample size.
This theorem implies a natural algorithm: choose the number of piles k* as the minimizer of the bound over the number of piles used:

    k* = argmin_{k ∈ {1,...,K}} { Σ_{i=1}^{k} (ni / n1,k) εi + sqrt( log(2K/δ) / (2 n1,k) ) }.
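In code, evaluating the bound of Theorem 1 for every prefix and selecting k* is a direct transcription (variable names are ours):

import numpy as np

def best_prefix(eps, n, delta):
    # Evaluate the Theorem 1 bound for every prefix of piles and return
    # the minimizing number of piles k* (1-indexed) plus all K bounds.
    eps, n = np.asarray(eps, float), np.asarray(n, float)
    K = len(n)
    bounds = []
    for k in range(1, K + 1):
        n1k = n[:k].sum()
        approx = (n[:k] * eps[:k]).sum() / n1k              # weighted approx. errors
        estim = np.sqrt(np.log(2 * K / delta) / (2 * n1k))  # Hoeffding term
        bounds.append(approx + estim)
    return int(np.argmin(bounds)) + 1, bounds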
To conclude this section we argue that our choice of using a prefix of piles is optimal. First,
note that by adding a new pile with a corruption level smaller than the current corruption
level, we can always reduce the bounds. Thus it is optimal to use a prefix of the piles and
not to ignore piles with low corruption levels. Second, we need to show that if we decide
to use a pile, it will be optimal to use all of it. Note that we can choose to view each coin
toss as a separate pile with a single observation, thus yielding n1,K piles of size 1. The
following technical lemma states that under this view of singleton piles, once we decide to
add a pile with some corruption level, it will be optimal to use all singleton piles with the
same corruption level. The proof of this lemma is omitted due to lack of space.
Lemma 1 Assume that all the piles are of size ni = 1 and that εk ≤ εk+p = εk+p+1. Then
the following two inequalities cannot hold simultaneously:
    Σ_{i=1}^{k} (ni / n1,k) εi + sqrt( log(2 n1,K / δ) / (2 n1,k) )  >  Σ_{i=1}^{k+p} (ni / n1,k+p) εi + sqrt( log(2 n1,K / δ) / (2 n1,k+p) )

    Σ_{i=1}^{k+p} (ni / n1,k+p) εi + sqrt( log(2 n1,K / δ) / (2 n1,k+p) )  ≤  Σ_{i=1}^{k+p+1} (ni / n1,k+p+1) εi + sqrt( log(2 n1,K / δ) / (2 n1,k+p+1) )
In other words, if the bound on |θ - θ̂k+p| is smaller than the bound on |θ - θ̂k|, then
the bound on |θ - θ̂k+p+1| must be smaller than both unless εk+p+1 > εk+p. Thus if the
pth and (p+1)th samples are from the same original pile (and εk+p+1 = εk+p), then once we
decide to use samples through p, we will always want to include sample p + 1. It follows
that we must only consider using complete piles of data.
2.2
Corrupted Coins Simulations
The theory developed so far can be nicely illustrated via some simple simulations. We
briefly describe just one such experiment in which there were K = 8 piles. The target coin was fair: θ = 0.5. The approximation errors of the corrupted coins were
[Figure 1 appears here: the left panel plots Error versus Number of Examples Used (curves: Actual Error, Bound, Singletons Bound); the center panel illustrates the intervals e1 through e6, with e4 + B(4, k) marked; the right panel plots Error versus Number of Piles Used (curves: Error Bound, Actual Error, Achieved Error).]
Figure 1: Left: Illustration of the actual error and our error bounds for estimating the bias
of a coin. The error bars show one standard deviation. Center: Illustration of the interval
construction. Right: Illustration of actual error of a 20 dimensional classification problem
and the error bounds found using our methods.
~ε = (0.001, 0.01, 0.02, 0.03, 0.04, 0.2, 0.3, 0.5), and the numbers of outcomes in the corresponding piles were ~n = (10, 50, 100, 500, 1500, 2000, 3000, 10000). The following process was repeated 1,000 times. We set the probability of the ith coin to be θi = θ + εi
and sampled ni times from it. We then used all possible prefixes 1, ..., k of piles to estimate θ. For each k, we computed the bound for the estimate using piles 1, ..., k using
the theory developed in the previous section. To illustrate Lemma 1 we also computed the
bound using partial piles. This bound is slightly higher than the suggested bound since we
use effectively more piles (n1,K instead of K). As the lemma predicts, it is not valuable to
use subsets of piles. Simulations with other values of K, ~ε and ~n yield similar qualitative
behavior. We note that a strength of the theory developed is its generality, as it provides
bounds for any model parameters.
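A minimal re-creation of this experiment, not the original script, might look like:

import numpy as np

def simulate(theta, eps, n, delta, trials=1000):
    # Flip each corrupted coin (bias theta + eps_i), estimate theta from
    # every prefix of piles, and average the absolute error over trials.
    K = len(n)
    err = np.zeros(K)
    for _ in range(trials):
        heads = [np.random.binomial(n[i], theta + eps[i]) for i in range(K)]
        for k in range(1, K + 1):
            err[k - 1] += abs(sum(heads[:k]) / sum(n[:k]) - theta) / trials
    return err

# e.g. simulate(0.5, [.001, .01, .02, .03, .04, .2, .3, .5],
#               [10, 50, 100, 500, 1500, 2000, 3000, 10000], delta=0.05)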
The leftmost panel of Figure 1 summarizes the simulation results. Empirically, the best
estimate of the target coin is using the first four piles, while our algorithm suggests using
the first five piles. However, the empirical difference in quality between the two estimates
is negligible, so the theory has given near-optimal guidance in this case. We note that while
our bounds have essentially the right shape (which is what matters for the computation
of k*), numerically they are quite loose compared to the true behavior. There are various
limits to the numerical precision we should expect without increasing the complexity of the
theory; for example, the precision is limited by the accuracy of constants in the Hoeffding
inequality and the use of the union bound.
3
Classification with Label Noise
We next explore the problem of classification in the presence of multiple data sets with
varying amounts of label noise. The setting is as follows. We assume there is a fixed and
unknown binary function f : X → {0, 1} and a fixed and unknown distribution P on the
inputs X to f . We are presented again with K piles of data, N1 , ..., NK . Now each pile
Ni contains ni labeled examples (x, y) that are generated from the target function f with
label noise at rate ηi, where 0 ≤ η1 < η2 < ... < ηK. In other words, for each example
(x, y) in pile Ni, y = f(x) with probability 1 - ηi and y = ¬f(x) with probability ηi.
The goal is to decide which piles of data to use in order to choose a function h from a set
of hypothesis functions H with minimal generalization (true) error e(h) with respect to f
and P . As before, for any prefix of piles N1 , . . . , Nk , we examine the most basic estimator
based on this data, namely the hypothesis minimizing the observed or training error:
    ĥk = argmin_{h ∈ H} { êk(h) }

where êk(h) is the fraction of times h(x) ≠ y over all (x, y) ∈ N1 ∪ ... ∪ Nk. Thus we
examine the standard empirical risk minimization framework [2]. Generalizing from the
biased coin setting, we are interested in three primary questions: what can we say about
the deviation |e(ĥk) - êk(ĥk)|, which is the gap between the true and observed error of the
estimator ĥk; what is the optimal value of k; and how can we compute the corresponding
bounds?
We note that the classification noise setting can naturally be viewed as a special case of
a more general and challenging "agnostic" classification setting that we discuss briefly in
Section 4. Here we provide a more specialized solution that exploits particular properties
of class label noise.
We begin by observing that for any fixed function h, the question of how êk(h) is related
to e(h) bears great similarity to the biased coin setting. More precisely, the expected classification error of h on pile Ni only is

    (1 - ηi) e(h) + ηi (1 - e(h)) = e(h) + ηi (1 - 2 e(h)).
Thus if we set

    θ = e(h),    εi = ηi |1 - 2 e(h)|    (2)
and if we were only concerned with making the best use of the data in estimating e(h), we
could attempt to apply the theory developed in Section 2 using the reduction above. There
are two distinct and obvious difficulties. The first difficulty is that even restricting attention
to estimating e(h) for a fixed h, the values for ?i above (and thus the bounds computed
by the methods of Section 2) depend on e(h), which is exactly the unknown quantity we
would like to estimate. The second difficulty is that in order to bound the performance of
empirical error minimization within H, we must say something about the probability of
any h ? H being selected. We address each of these difficulties in turn.
3.1
Computing the Error Bound Matrix
For now we assume that {e(h) : h ∈ H} is a finite set containing M values e1 < ... < eM.
This assumption clearly holds if |H| is finite, and can be removed entirely by discretizing
the values in {e(h) : h ∈ H}. For convenience we assume that for all levels ei there exists
a function h ∈ H such that e(h) = ei. This assumption can also be removed (details of
both omitted due to space considerations). We define a matrix B of estimation errors as
follows. Each row i of B represents one possible value of e(h) = ei , while each column
k represents the use of only piles N1 , . . . , Nk of noisy labeled examples of the target f .
The entry B(i, k) will contain a bound on |e(h) - êk(h)| that is valid simultaneously for all
h ∈ H with e(h) = ei. In other words, for any such h, with high probability êk(h) falls in
the range [ei - B(i, k), ei + B(i, k)]. It is crucial to note that we do not need to know which
functions h ∈ H satisfy e(h) = ei in order to either compute or use the bound B(i, k), as
we shall see shortly. Rather, it is enough to know that for each h ∈ H, some row of B will
provide estimation error bounds for each k.
The values in B can now be calculated using the settings provided by Eq. (2) and the
bound in Eq. (1). However, since Eq. (1) applies to the case of a single biased coin and here
we have many (essentially one for each function at a given generalization error ei ), we must
modify it slightly. We can (pessimistically) bound the VC dimension of all functions with
error rate e(h) = ei by the VC dimension d of the entire class H. Formally, we replace the
square root term in Eq. (1) with the following expression, which is a simple application of
VC theory [2, 3]:
    O( sqrt( (1/n1,k) ( d log(n1,k / d) + log(KM/δ) ) ) ).    (3)
We note that in cases where we have more information on the structure of the generalization
errors in H, an accordingly modified equation can be used, which may yield considerably
improved bounds. For example, in the statistical physics theory of learning curves [4] it is
common to posit knowledge of the density or number of functions in H at a given generalization error ei . In such a case we could clearly substitute the VC dimension d by the
(potentially much smaller) VC dimension di of just this subclass.
In a moment we describe how the matrix B can be used to choose the number k of piles
to use, and to compute a bound on the generalization error of ĥk. We first formalize the
development above as an intermediate result.
Lemma 2 Suppose H is a set of binary functions with VC dimension d. Let M be the
number of noise levels and K be the number of piles. Then for all δ > 0, with probability
at least 1 - δ, for all i ∈ {1, ..., M}, for all h ∈ H with e(h) = ei, and for all k ∈
{1, ..., K} we have

    |e(h) - êk(h)| ≤ B(i, k).
The matrix B can be computed in time linear in its size O(KM ).
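A sketch of this computation follows; the constant factors hidden inside the O(.) of Eq. (3) are our own choice, so the resulting numbers (though not the shape of the bounds) depend on them.

import numpy as np

def error_bound_matrix(errs, etas, n, d, delta):
    # B[i, k-1] bounds |e(h) - e_hat_k(h)| for all h with e(h) = errs[i],
    # via the reduction theta = e_i, eps_j = eta_j * |1 - 2 e_i|.
    errs, etas, n = map(lambda v: np.asarray(v, float), (errs, etas, n))
    M, K = len(errs), len(n)
    B = np.zeros((M, K))
    for k in range(1, K + 1):
        n1k = n[:k].sum()
        noise = (n[:k] * etas[:k]).sum() / n1k
        # max(., e) guards the log when n1k < d; a cosmetic choice
        vc = np.sqrt((d * np.log(max(n1k / d, np.e)) + np.log(M * K / delta)) / n1k)
        for i, e in enumerate(errs):
            B[i, k - 1] = noise * abs(1 - 2 * e) + vc
    return B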
3.2
Putting It All Together
By Lemma 2, the matrix B gives, for each possible generalization error ei and each k, an
upper bound on the deviation between observed and true errors for functions of true error
ei when using piles N1, ..., Nk. It is thus natural to try to use column k of B to bound the
error of ĥk, the function minimizing the observed error on these piles.
Suppose we fix the number of piles used to be k. The observed error of any function
with true generalization error ei must, with high probability, lie in the interval Ii,k =
[ei - B(i, k), ei + B(i, k)]. By simultaneously considering these intervals for all values of
ei , we can put a bound on the generalization error of the best function in the hypothesis
class. This process is best illustrated by an example.
Consider a hypothesis space in which the generalization error of the available functions can
take on the discrete values 0, 0.1, 0.2, 0.3, 0.4, and 0.5. Suppose the matrix B has been
calculated as above and the kth column is (0.16, 0.05, 0.08, 0.14, 0.07, 0.1). We know, for
example, that all functions with true generalization error e2 = 0.1 will show an error in
the range I2,k = [0.05, 0.15], and that all functions with true generalization error e4 = 0.3
will show an error in the range I4,k = [0.16, 0.44]. The center panel of Figure 1 illustrates
the span of each interval.
Examining this diagram, it becomes clear that the function ĥk minimizing the error on
N1 ∪ ... ∪ Nk could not possibly be a function with true error e4 or higher as long as
H contains at least one function with true error e2 since the observed error of the latter
would necessarily be lower (with high probability). Likewise, it would not be possible
for a function with true error e5 or e6 to be chosen. However, a function with true error e3
could produce a lower observed error than one with true error e1 or e2 (since e3 - B(3, k) <
e2 + B(2, k) and e3 - B(3, k) < e1 + B(1, k)), and thus could be chosen as ĥk. Therefore,
the smallest bound we can place on the true error of ĥk in this example is e3 = 0.2.
In general, we know that ĥk will have true error corresponding to the midpoint of an interval which overlaps with the interval with the least upper bound (I2,k in this example). This
leads to an intuitive procedure for calculating a bound on the true error of ĥk. First, we determine the interval with the smallest upper bound, i*k = argmin_i {ei + B(i, k)}. Consider
the set of intervals which overlap with i*k, namely Jk = {i : ei - B(i, k) ≤ e_{i*k} + B(i*k, k)}.
It is possible for the smallest observed error to come from a function corresponding to any
of the intervals in Jk. Thus, a bound on the true error of ĥk can be obtained by taking the
maximum e(h) value for any function in Jk, i.e. C(k) = max_{i ∈ Jk} {ei}.
Our overall algorithm for bounding e(ĥk) and choosing k* can thus be summarized:
1. Compute the matrix B as described in Section 3.1.
2. Compute the vector C described above.
3. Output k* = argmin_k {C(k)}.
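Steps 1-3 can be sketched directly from the interval construction, using the matrix B from above (names are ours):

import numpy as np

def choose_k(errs, B):
    # For each k: find the interval with the least upper bound, collect
    # the overlapping intervals J_k, and set C(k) to the largest e_i in J_k.
    errs = np.asarray(errs, float)
    K = B.shape[1]
    C = np.zeros(K)
    for k in range(K):
        upper = errs + B[:, k]
        lower = errs - B[:, k]
        i_star = int(np.argmin(upper))
        C[k] = errs[lower <= upper[i_star]].max()   # max over the set J_k
    return int(np.argmin(C)) + 1, C                 # k* (1-indexed) and C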
We have established the following theorem.
Theorem 2 Suppose H is a set of binary functions with VC dimension d. Let M be the
number of noise levels and K be the number of piles. For all k = 1, ..., K, let ĥk =
argmin_h {êk(h)} be the function in H with the lowest empirical error evaluated using the
first k piles of data. Then for all δ > 0, with probability at least 1 - δ,

    e(ĥk) ≤ C(k).

The suggested choice of k is thus k* = argmin_k {C(k)}.
3.3
Classification Noise Simulations
In order to illustrate the methodology described in this section, simulations were run on a
classification problem in which samples ~x ∈ {0, 1}^20 were chosen uniformly at random,
and the target function f(~x) was 1 if and only if Σ_{i=1}^{20} xi > 10.
Classification models were created for k = 1, ..., K by training using the first k piles of data
using logistic regression with a learning rate of 0.0005 for a maximum of 5,000 iterations.
The generalization error for each model was determined by testing on a noise-free sample
of 500 examples drawn from the same uniform distribution. Bounds were calculated using
the algorithm described above with functions binned into 101 evenly spaced error values
~e = (0, 0.01, 0.02, ..., 1) with δ = 0.001.
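The data generation underlying this simulation can be sketched as follows (our own rendering of the setup just described):

import numpy as np

def make_pile(n, eta, dim=20, rng=np.random):
    # Uniform binary inputs; target is 1{sum(x) > dim/2}; each label is
    # flipped independently with probability eta.
    X = rng.randint(0, 2, size=(n, dim))
    y = (X.sum(axis=1) > dim // 2).astype(int)
    flip = rng.rand(n) < eta
    y[flip] = 1 - y[flip]
    return X, y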
The right panel of Figure 1 shows an example of the bounds found with K = 12 piles,
noise levels ~η = (0.001, 0.002, 0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5), and
sample sizes ~n = (20, 150, 300, 400, 500, 600, 700, 1000, 1500, 2000, 3000, 5000). The
algorithm described above correctly predicts that the eighth pile should be chosen as the
cutoff, yielding an optimal error value of 0.018. It is interesting to note that although the
error bounds shown are significantly higher than the actual error, the shapes of the curves
are similar. This phenomena is common to many uniform convergence bounds.
Further experimentation has shown that the algorithm described here works well in general
when there are small piles of low noise data and large piles of high noise data. Its predictions are more useful in higher dimensional space, since it is relatively easy to get good
predictions without much available data in lower dimensions.
4
Further Research
In research subsequent to the results presented here [5], we examine a considerably more
general "agnostic" classification setting [6]. As before, we assume there is a fixed and
unknown binary function f : X → {0, 1} and a fixed and unknown distribution P on the
inputs X to f . We are presented again with K piles of data, N1 , ..., NK . Now each pile Ni
contains ni labeled examples (x, y) that are generated from an unknown function hi such
that e(hi) = e(hi, f) = Pr_P[hi(x) ≠ f(x)] ≤ εi for given values ε1 ≤ ... ≤ εK. Thus
we are provided piles of labeled examples of unknown functions "nearby" the unknown
target f, where "nearby" is quantified by the sequence of εi.
In forthcoming work [5] we show that with high probability, for any k ≤ K

    e(ĥk, f) ≤ min_{h ∈ H} {e(f, h)} + 2 Σ_{i=1}^{k} (ni / n1,k) εi + O( sqrt( (1/n1,k) ( d log(n1,k / d) + log(K/δ) ) ) )
This result again allows us to express the optimal number of piles as a trade-off between
weighted approximation errors and increasing sample size. We suspect the result can be
extended to a wider class of loss functions than just classification.
References
[1] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings
of the Eleventh Annual Conference on Computational Learning Theory, pages 92?100, 1998.
[2] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press,
1994.
[4] D. Haussler, M. Kearns, H.S. Seung, and N. Tishby. Rigorous learning curve bounds from
statistical mechanics. In Proceedings of the Seventh Annual ACM Conference on Computational
Learning Theory, pages 76?87, 1994.
[5] K. Crammer, M. Kearns, and J. Wortman. Forthcoming. 2006.
[6] M. Kearns, R. Schapire, and L. Sellie. Towards efficient agnostic learning. Machine Learning,
17:115?141, 1994.
| 2920 |@word briefly:2 version:1 km:2 simulation:7 moment:1 reduction:1 mkearns:1 contains:4 prefix:5 current:1 yet:1 must:5 subsequent:1 numerical:2 shape:2 selected:1 accordingly:1 ith:2 provides:2 five:1 ik:1 qualitative:1 eleventh:1 introduce:1 upenn:1 expected:1 behavior:2 examine:3 mechanic:1 actual:6 considering:2 increasing:2 becomes:1 provided:4 estimating:7 begin:2 panel:3 agnostic:3 lowest:1 what:3 argmin:2 developed:4 unified:2 nj:1 subclass:1 exactly:1 classifier:2 before:2 negligible:1 modify:1 limit:1 might:3 quantified:1 suggests:1 challenging:1 co:1 limited:3 range:3 testing:1 union:1 block:2 differs:1 procedure:1 empirical:5 significantly:2 word:4 get:1 cannot:1 unlabeled:2 convenience:1 put:1 risk:2 missing:1 center:2 attention:1 estimator:2 haussler:1 target:6 suppose:5 construction:1 us:1 hypothesis:4 pa:1 jk:4 predicts:2 labeled:6 observed:9 trade:3 removed:2 valuable:1 complexity:1 seung:1 depend:1 triangle:1 various:1 distinct:1 describe:2 outcome:5 choosing:1 whose:1 quite:1 larger:3 say:2 noisy:1 sequence:1 maximal:1 combining:1 argmink:2 description:1 intuitive:1 convergence:1 produce:1 wider:1 illustrate:3 develop:2 eq:4 implies:1 come:1 differ:1 posit:1 attribute:1 vc:7 fix:1 generalization:13 hold:2 considered:2 great:1 smallest:3 omitted:2 estimation:4 label:7 weighted:2 minimization:3 mit:1 clearly:2 always:3 modified:1 rather:1 varying:3 likelihood:1 hk:4 prp:1 rigorous:1 sense:2 entire:1 interested:2 overall:1 classification:14 development:1 special:1 once:2 nicely:1 represents:2 koby:1 simplify:1 simultaneously:4 ourselves:1 n1:26 attempt:1 interest:2 extreme:1 yielding:2 partial:1 unless:1 indexed:1 guidance:1 minimal:1 instance:1 column:3 deviation:4 subset:6 entry:1 uniform:2 wortman:2 examining:1 seventh:1 tishby:1 corrupted:13 considerably:2 density:1 fundamental:1 off:3 physic:1 michael:1 together:1 again:3 reflect:1 nm:1 containing:1 choose:4 possibly:2 hoeffding:2 ek:2 account:1 singleton:2 summarized:1 matter:1 satisfy:1 h1:2 view:2 root:1 try:1 observing:1 square:1 ni:21 accuracy:1 variance:1 likewise:1 yield:2 identify:1 spaced:1 corruption:6 obvious:2 e2:5 naturally:2 associated:1 proof:1 di:1 sampled:1 proved:2 mitchell:1 knowledge:1 formalize:1 appears:1 higher:4 methodology:1 improved:1 evaluated:1 generality:1 furthermore:1 just:3 web:1 replacing:1 ei:21 lack:1 logistic:1 quality:5 perhaps:3 building:2 contain:1 true:20 i2:2 illustrated:2 leftmost:1 complete:3 consideration:1 common:2 specialized:1 empirically:1 numerically:1 refer:1 p20:1 access:1 similarity:1 base:1 add:1 something:1 closest:1 recent:1 inequality:5 binary:4 discretizing:1 uncorrupted:1 determine:1 ii:1 multiple:3 technical:1 long:1 e1:4 prediction:2 basic:2 regression:1 circumstance:2 expectation:2 essentially:2 iteration:1 achieved:1 want:1 interval:8 diagram:1 source:2 crucial:1 biased:3 suspect:1 deficient:1 near:2 presence:3 intermediate:1 enough:1 concerned:1 easy:1 forthcoming:2 pennsylvania:1 restrict:1 reduce:1 expression:1 e3:5 remark:1 useful:1 clear:1 amount:7 category:1 simplest:1 schapire:1 specifies:1 correctly:2 discrete:1 shall:1 sellie:1 express:1 putting:1 four:1 blum:1 drawn:2 pj:1 clean:1 cutoff:1 fraction:2 run:1 place:2 decide:4 summarizes:1 entirely:3 bound:51 hi:5 def:1 annual:2 strength:1 i4:1 binned:1 precisely:1 unlimited:1 nearby:2 span:1 min:1 relatively:1 smaller:4 slightly:4 em:1 making:1 intuitively:1 equation:1 jennifer:1 discus:1 loose:1 turn:1 initiate:1 know:5 flip:1 available:2 experimentation:1 apply:1 appropriate:1 coin:21 
shortly:2 pessimistically:1 original:1 substitute:1 include:1 calculating:1 exploit:1 question:2 quantity:1 primary:1 kth:1 separate:1 evenly:1 argue:1 index:1 illustration:3 minimizing:3 potentially:1 unknown:9 upper:5 observation:5 finite:3 immediate:1 extended:1 head:2 namely:3 established:1 address:1 able:1 bar:1 suggested:2 eighth:1 summarize:1 overlap:2 natural:3 difficulty:4 created:1 philadelphia:1 faced:1 val:1 loss:1 expect:1 bear:1 interesting:1 classifying:1 pile:62 row:2 free:1 bias:11 fall:2 taking:1 midpoint:1 curve:3 calculated:3 dimension:7 valid:1 pth:1 far:1 vazirani:1 ignore:1 conclude:1 xi:1 e5:2 necessarily:1 bounding:1 noise:17 fair:1 repeated:1 body:1 wiley:1 precision:2 wish:4 lie:1 theorem:6 e4:4 maxi:1 exists:1 restricting:1 merging:1 effectively:2 adding:1 ci:1 vapnik:1 illustrates:1 nk:10 gap:1 generalizing:1 simply:2 explore:1 expressed:1 applies:1 corresponds:1 minimizer:1 acm:1 conditional:1 goal:2 presentation:1 viewed:1 towards:1 toss:1 replace:1 argmini:1 determined:1 uniformly:3 kearns:5 lemma:7 formally:1 e6:2 latter:1 crammer:3 arises:2 argminh:1 phenomenon:1 |
2,116 | 2,921 | Learning Depth from Single Monocular Images
Ashutosh Saxena, Sung H. Chung, and Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
[email protected],
{codedeft,ang}@cs.stanford.edu
Abstract
We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which
we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.)
and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image.
Depth estimation is a challenging problem, since local features alone are
insufficient to estimate depth at a point, and one needs to consider the
global context of the image. Our model uses a discriminatively-trained
Markov Random Field (MRF) that incorporates multiscale local- and
global-image features, and models both depths at individual points as
well as the relation between depths at different points. We show that,
even on unstructured scenes, our algorithm is frequently able to recover
fairly accurate depthmaps.
1
Introduction
Recovering 3-D depth from images is a basic problem in computer vision, and has important applications in robotics, scene understanding and 3-D reconstruction. Most work
on visual 3-D reconstruction has focused on binocular vision (stereopsis) [1] and on other
algorithms that require multiple images, such as structure from motion [2] and depth from
defocus [3]. Depth estimation from a single monocular image is a difficult task, and requires that we take into account the global structure of the image, as well as use prior
knowledge about the scene. In this paper, we apply supervised learning to the problem
of estimating depth from single monocular images of unstructured outdoor environments,
ones that contain forests, trees, buildings, people, buses, bushes, etc.
In related work, Michels, Saxena & Ng [4] used supervised learning to estimate 1-D distances to obstacles, for the application of autonomously driving a remote control car. Nagai
et al. [5] performed surface reconstruction from single images for known, fixed objects
such as hands and faces. Gini & Marchi [6] used single-camera vision to drive an indoor robot, but relied heavily on known ground colors and textures. Shape from shading
[7] offers another method for monocular depth reconstruction, but is difficult to apply to
scenes that do not have fairly uniform color and texture. In work done independently of
ours, Hoiem, Efros and Hebert (personal communication) also considered monocular 3-D
reconstruction, but focused on generating 3-D graphical images rather than accurate metric depthmaps. In this paper, we address the task of learning full depthmaps from single
images of unconstrained environments.
Markov Random Fields (MRFs) and their variants are a workhorse of machine learning,
and have been successfully applied to numerous problems in which local features were insufficient and more contextual information had to be used. Examples include text segmentation [8], object classification [9], and image labeling [10]. To model spatial dependencies
in images, Kumar and Hebert's Discriminative Random Fields algorithm [11] uses logistic
regression to identify man-made structures in natural images. Because MRF learning is
intractable in general, most of these models are trained using pseudo-likelihood.
Our approach is based on capturing depths and relationships between depths using an MRF.
We began by using a 3-D distance scanner to collect training data, which comprised a
large set of images and their corresponding ground-truth depthmaps. Using this training
set, the MRF is discriminatively trained to predict depth; thus, rather than modeling the
joint distribution of image features and depths, we model only the posterior distribution
of the depths given the image features. Our basic model uses L2 (Gaussian) terms in
the MRF interaction potentials, and captures depths and interactions between depths at
multiple spatial scales. We also present a second model that uses L1 (Laplacian) interaction
potentials. Learning in this model is approximate, but exact MAP posterior inference is
tractable (similar to Gaussian MRFs) via linear programming, and it gives significantly
better depthmaps than the simple Gaussian model.
2
Monocular Cues
Humans appear to be extremely good at judging depth from single monocular images. [12]
This is done using monocular cues such as texture variations, texture gradients, occlusion,
known object sizes, haze, defocus, etc. [4, 13, 14] For example, many objects' texture will
look different at different distances from the viewer. Texture gradients, which capture the
distribution of the direction of edges, also help to indicate depth.1 Haze is another depth
cue, and is caused by atmospheric light scattering.
Most of these monocular cues are 'contextual information,' in the sense that they are global
properties of an image and cannot be inferred from small image patches. For example, occlusion cannot be determined if we look at just a small portion of an occluded object.
Although local information such as the texture and color of a patch can give some information about its depth, this is usually insufficient to accurately determine its absolute depth.
For another example, if we take a patch of a clear blue sky, it is difficult to tell if this patch
is infinitely far away (sky), or if it is part of a blue object. Due to ambiguities like these,
one needs to look at the overall organization of the image to determine depths.
3
Feature Vector
In our approach, we divide the image into small patches, and estimate a single depth value
for each patch. We use two types of features: absolute depth features?used to estimate the
absolute depth at a particular patch?and relative features, which we use to estimate relative
depths (magnitude of the difference in depth between two patches). We chose features that
capture three types of local cues: texture variations, texture gradients, and haze.
Texture information is mostly contained within the image intensity channel,2 so we apply
Laws' masks [15, 4] to this channel to compute the texture energy (Fig. 1). Haze is reflected
in the low frequency information in the color channels, and we capture this by applying a
local averaging filter (the first Laws mask) to the color channels. Lastly, to compute an
1 For example, a tiled floor with parallel lines will appear to have tilted lines in an image. The distant patches will have larger variations in the line orientations, and nearby patches will have smaller variations in line orientations. Similarly, a grass field when viewed at different distances will have different texture gradient distributions.
2 We represent each image in YCbCr color space, where Y is the intensity channel, and Cb and Cr are the color channels.
Figure 1: The convolutional filters used for texture energies and gradients. The first nine are
3x3 Laws' masks. The last six are the oriented edge detectors spaced at 30° intervals. The
nine Laws' masks are used to perform local averaging, edge detection and spot detection.
Figure 2: The absolute depth feature vector for a patch, which includes features from its
immediate neighbors and its more distant neighbors (at larger scales). The relative depth
features for each patch use histograms of the filter outputs.
estimate of texture gradient that is robust to noise, we convolve the intensity channel with
six oriented edge filters (shown in Fig. 1).
3.1 Features for absolute depth
Given some patch i in the image I(x, y), we compute summary statistics for it as follows. We use the output of each of the 17 (9 Laws' masks, 2 color channels and 6 texture gradients) filters $F_n(x, y)$, $n = 1, \ldots, 17$ as: $E_i(n) = \sum_{(x,y) \in \mathrm{patch}(i)} |I(x, y) * F_n(x, y)|^k$, where $k = \{1, 2\}$ give the sum absolute energy and sum squared energy respectively. This gives us an initial feature vector of dimension 34.
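As a concrete illustration, here is a minimal Python sketch of this feature computation. The function name, the stand-in random kernels, and the patch mask are assumptions of this illustration, not details from the paper:

import numpy as np
from scipy.signal import convolve2d

def patch_energy_features(channel, filters, patch_mask):
    """Sum-absolute (k=1) and sum-squared (k=2) filter energies for one patch.

    channel: 2-D image channel; filters: list of 17 2-D kernels;
    patch_mask: boolean array selecting the patch's pixels.
    Returns the 34-dimensional feature vector described in the text.
    """
    feats = []
    for F in filters:
        response = np.abs(convolve2d(channel, F, mode="same"))
        feats.append(response[patch_mask].sum())         # k = 1
        feats.append((response[patch_mask] ** 2).sum())  # k = 2
    return np.array(feats)

# Toy usage with random stand-in kernels (the real ones are the Laws' masks
# and oriented edge detectors of Fig. 1).
rng = np.random.default_rng(0)
filters = [rng.standard_normal((3, 3)) for _ in range(17)]
img = rng.standard_normal((107, 86))
mask = np.zeros(img.shape, dtype=bool)
mask[10:14, 10:14] = True
print(patch_energy_features(img, filters, mask).shape)  # (34,)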
To estimate the absolute depth at a patch, local image features centered on the patch are
insufficient, and one has to use more global properties of the image. We attempt to capture
this information by using image features extracted at multiple scales (image resolutions).
(See Fig. 2.) Objects at different depths exhibit very different behaviors at different resolutions, and using multiscale features allows us to capture these variations [16]. 3 In addition
to capturing more global information, computing features at multiple spatial scales also
help accounts for different relative sizes of objects. A closer object appears larger in the
image, and hence will be captured in the larger scale features. The same object when far
away will be small and hence be captured in the small scale features. Such features may be
strong indicators of depth.
To capture additional global features (e.g. occlusion relationships), the features used to
predict the depth of a particular patch are computed from that patch as well as the four
neighboring patches. This is repeated at each of the three scales, so that the feature vector
3 For example, blue sky may appear similar at different scales; but textured grass would not.
at a patch includes features of its immediate neighbors, and its far neighbors (at a larger
scale), and its very far neighbors (at the largest scale), as shown in Fig. 2. Lastly, many
structures (such as trees and buildings) found in outdoor scenes show vertical structure, in
the sense that they are vertically connected to themselves (things cannot hang in empty air).
Thus, we also add to the features of a patch additional summary features of the column it
lies in.
For each patch, after including features from itself and its 4 neighbors at 3 scales, and
summary features for its 4 column patches, our vector of features for estimating depth at a
particular patch is 19 × 34 = 646 dimensional.
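The following Python sketch assembles such a vector. The grid layout, the downsampling-by-2 between scales, and the mean-pooled column summaries are assumptions made for illustration; the paper does not specify these implementation details:

import numpy as np

def absolute_depth_feature(E, r, c):
    """Concatenate the 19 blocks of 34 features for the patch at (r, c).

    E[s] is an (H_s, W_s, 34) array of per-patch energy features at scale s
    (s = 0 finest). Assumes each coarser scale halves the grid resolution.
    """
    blocks = []
    for s, grid in enumerate(E):                   # 3 scales
        H, W, _ = grid.shape
        rs, cs = r // (2 ** s), c // (2 ** s)
        for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
            rr = int(np.clip(rs + dr, 0, H - 1))   # patch + 4 neighbors
            cc = int(np.clip(cs + dc, 0, W - 1))
            blocks.append(grid[rr, cc])
    col = E[0][:, c, :]                            # the patch's image column
    H0 = col.shape[0]
    for q in range(4):                             # 4 column summary blocks
        blocks.append(col[q * H0 // 4:(q + 1) * H0 // 4].mean(axis=0))
    return np.concatenate(blocks)                  # 15 + 4 = 19 blocks, 646 dims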
3.2 Features for relative depth
We use a different feature vector to learn the dependencies between two neighboring patches. Specifically, we compute a histogram (with 10 bins) of each of the 17 filter outputs $|I(x, y) * F_n(x, y)|$, giving us a total of 170 features $y_i$ for each patch i. These features are used to estimate how the depths at two different locations are related. We believe that learning these estimates requires less global information than predicting absolute depth,4 but more detail from the individual patches. Hence, we use as our relative depth features the differences between the histograms computed from two neighboring patches $y_{ij} = y_i - y_j$.
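A small sketch of these relative features, assuming (for illustration only) that filter responses are rescaled to [0, 1] so that bin edges can be shared across patches:

import numpy as np

def relative_depth_features(responses_i, responses_j, n_bins=10):
    """Histogram-difference features y_ij = y_i - y_j for two patches.

    responses_k: list of 17 arrays, the |I * F_n| outputs restricted to
    patch k, assumed rescaled to [0, 1].
    """
    def hist_vec(responses):
        parts = []
        for resp in responses:
            h, _ = np.histogram(resp, bins=n_bins, range=(0.0, 1.0))
            parts.append(h / max(resp.size, 1))   # normalized counts
        return np.concatenate(parts)              # 17 * 10 = 170 dims
    return hist_vec(responses_i) - hist_vec(responses_j)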
4
The Probabilistic Model
The depth of a particular patch depends on the features of the patch, but is also related to
the depths of other parts of the image. For example, the depths of two adjacent patches
lying in the same building will be highly correlated. We will use an MRF to model the
relation between the depth of a patch and the depths of its neighboring patches. In addition
to the interactions with the immediately neighboring patches, there are sometimes also
strong interactions between the depths of patches which are not immediate neighbors. For
example, consider the depths of patches that lie on a large building. All of these patches
will be at similar depths, even if there are small discontinuities (such as a window on the
wall of a building). However, when viewed at the smallest scale, some adjacent patches
are difficult to recognize as parts of the same object. Thus, we will also model interactions
between depths at multiple spatial scales.
Our first model will be a jointly Gaussian MRF. To capture the multiscale depth relations,
let us define $d_i(s)$ as follows. For each of three scales $s = 1, 2, 3$, define $d_i(s+1) = \frac{1}{5} \sum_{j \in N_s(i) \cup \{i\}} d_j(s)$. Here, $N_s(i)$ are the 4 neighbors of patch i at scale s. I.e., the depth at a higher scale is constrained to be the average of the depths at lower scales. Our model over depths is as follows:
$$P(d|X; \theta, \sigma) = \frac{1}{Z} \exp\left( -\sum_{i=1}^{M} \frac{(d_i(1) - x_i^T \theta_r)^2}{2\sigma_{1r}^2} - \sum_{s=1}^{3} \sum_{i=1}^{M} \sum_{j \in N_s(i)} \frac{(d_i(s) - d_j(s))^2}{2\sigma_{2rs}^2} \right) \qquad (1)$$
Here, M is the total number of patches in the image (at the lowest scale); $x_i$ is the absolute depth feature vector for patch i; and $\theta$ and $\sigma$ are parameters of the model. In detail, we use different parameters ($\theta_r$, $\sigma_{1r}$, $\sigma_{2r}$) for each row r in the image, because the images we consider are taken from a horizontally mounted camera, and thus different rows of the image have different statistical properties.5 Z is the normalization constant for the model.
4 For example, given two adjacent patches of a distinctive, unique color and texture, we may be able to safely conclude that they are part of the same object, and thus that their depths are close, even without more global features.
5 For example, a blue patch might represent sky if it is in the upper part of the image, and might be more likely to be water if in the lower part.
We estimate the parameters $\theta_r$ in Eq. 1 by maximizing the conditional likelihood $p(d|X; \theta_r)$ of the training data. Since the model is a multivariate Gaussian, the maximum likelihood estimate of the parameters $\theta_r$ is obtained by solving a linear least squares problem.
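In code, this step is a one-liner per image row; a sketch (the row-wise data layout is an assumption of this illustration):

import numpy as np

def fit_theta_rows(X_rows, d_rows):
    """Row-wise least-squares estimates of theta_r.

    X_rows[r]: (n_r, 646) features of all training patches in image row r;
    d_rows[r]: (n_r,) corresponding log-depth targets.
    """
    return [np.linalg.lstsq(X, d, rcond=None)[0]
            for X, d in zip(X_rows, d_rows)]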
The first term in the exponent above models depth as a function of multiscale features of a single patch i. The second term in the exponent places a soft 'constraint' on the depths to be smooth. If the variance term $\sigma_{2rs}^2$ is a fixed constant, the effect of this term is that it tends to smooth depth estimates across nearby patches. However, in practice the dependencies between patches are not the same everywhere, and our expected value for $(d_i - d_j)^2$ may depend on the features of the local patches.
Therefore, to improve accuracy we extend the model to capture the 'variance' term $\sigma_{2rs}^2$ in the denominator of the second term as a linear function of the patches i and j's relative depth features $y_{ijs}$ (discussed in Section 3.2). We use $\sigma_{2rs}^2 = u_{rs}^T |y_{ijs}|$. This helps determine which neighboring patches are likely to have similar depths. E.g., the 'smoothing' effect is much stronger if neighboring patches are similar. This idea is applied at multiple scales, so that we learn different $\sigma_{2rs}^2$ for the different scales s (and rows r of the image). The parameters $u_{rs}$ are chosen to fit $\sigma_{2rs}^2$ to the expected value of $(d_i(s) - d_j(s))^2$, with a constraint that $u_{rs} \geq 0$ (to keep the estimated $\sigma_{2rs}^2$ non-negative).
Similar to our discussion on $\sigma_{2rs}^2$, we also learn the variance parameter $\sigma_{1r}^2 = v_r^T x_i$ as a linear function of the features. The parameters $v_r$ are chosen to fit $\sigma_{1r}^2$ to the expected value of $(d_i(r) - \theta_r^T x_i)^2$, subject to $v_r \geq 0$.6 This $\sigma_{1r}^2$ term gives a measure of the uncertainty in the first term, and depends on the features. This is motivated by the observation that in some cases, depth cannot be reliably estimated from the local features. In this case, one has to rely more on neighboring patches' depths to infer a patch's depth (as modeled by the second term in the exponent).
After learning the parameters, given a new test-set image we can find the MAP estimate of the depths by maximizing Eq. 1 in terms of d. Since Eq. 1 is Gaussian, $\log P(d|X; \theta, \sigma)$ is quadratic in d, and thus its maximum is easily found in closed form (taking at most 2-3 seconds per image, including feature computation time).
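Concretely, setting the gradient of the quadratic exponent to zero gives a sparse linear system. The sketch below does this for a single-scale version of Eq. 1; restricting to s = 1 is a simplification for illustration, not the full model:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def map_depths_gaussian(X, theta, sigma1_sq, edges, sigma2_sq):
    """MAP depths for a one-scale version of Eq. (1).

    X: (M, F) patch features; theta: (F,); sigma1_sq: (M,) first-term
    variances; edges: list of neighbor pairs (i, j); sigma2_sq: per-edge
    variances. Solves the normal equations A d = b of the quadratic form.
    """
    M = X.shape[0]
    A = sp.lil_matrix((M, M))
    b = np.zeros(M)
    mu = X @ theta
    for i in range(M):
        A[i, i] += 1.0 / sigma1_sq[i]
        b[i] += mu[i] / sigma1_sq[i]
    for (i, j), s2 in zip(edges, sigma2_sq):
        A[i, i] += 1.0 / s2
        A[j, j] += 1.0 / s2
        A[i, j] -= 1.0 / s2
        A[j, i] -= 1.0 / s2
    return spla.spsolve(A.tocsr(), b)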
4.1 Laplacian model
We now present a second model that uses Laplacians instead of Gaussians to model the
posterior distribution of the depths. Our motivation for doing so is three-fold. First, a
histogram of the relative depths $(d_i - d_j)$ empirically appears Laplacian, which strongly
suggests that it is better modeled as one. Second, the Laplacian distribution has heavier
tails, and is therefore more robust to outliers in the image features and error in the trainingset depthmaps (collected with a laser scanner; see Section 5.1). Third, the Gaussian model
was generally unable to give depthmaps with sharp edges; in contrast, Laplacians tend to
model sharp transitions/outliers better. Our model is as follows:
$$P(d|X; \theta, \lambda) = \frac{1}{Z} \exp\left( -\sum_{i=1}^{M} \frac{|d_i(1) - x_i^T \theta_r|}{\lambda_{1r}} - \sum_{s=1}^{3} \sum_{i=1}^{M} \sum_{j \in N_s(i)} \frac{|d_i(s) - d_j(s)|}{\lambda_{2rs}} \right) \qquad (2)$$
Here, the parameters are the same as in Eq. 1, except for the variance terms: $\lambda_{1r}$ and $\lambda_{2rs}$ are the Laplacian spread parameters. Maximum-likelihood parameter estimation for the Laplacian model is not tractable (since the partition function depends on $\theta_r$). But by analogy to the Gaussian case, we approximate this by solving a linear system of equations $X_r \theta_r \approx d_r$ to minimize L1 (instead of L2) error. Here $X_r$ is the matrix of absolute-depth features. Following the Gaussian model, we also learn the Laplacian spread parameters in the denominator in the same way, except that instead of estimating the expected value of $(d_i - d_j)^2$, we estimate the expected value of $|d_i - d_j|$. Even though maximum
6 The absolute depth features $x_{ir}$ are non-negative; thus, the estimated $\sigma_{1r}^2$ is also non-negative.
likelihood parameter estimation for $\theta_r$ is intractable in the Laplacian model, given a new test-set image, MAP inference for the depths d is tractable. Specifically, $P(d|X; \theta, \lambda)$ is easily maximized in terms of d using linear programming.
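The standard reduction of this L1 objective to a linear program introduces auxiliary variables bounding each absolute value. A one-scale sketch; the per-patch means mu and the uniform spreads lam1, lam2 are simplifying assumptions of this illustration:

import numpy as np
from scipy.optimize import linprog

def map_depths_laplacian(mu, lam1, edges, lam2):
    """Minimize sum_i |d_i - mu_i|/lam1 + sum_(i,j) |d_i - d_j|/lam2.

    Variables are [d, t, s] with t_i >= |d_i - mu_i| and s_e >= |d_i - d_j|,
    which makes the objective linear and the constraints linear inequalities.
    """
    M, E = len(mu), len(edges)
    n = M + M + E
    c = np.concatenate([np.zeros(M), np.ones(M) / lam1, np.ones(E) / lam2])
    A, b = [], []
    def add_row(entries, rhs):
        row = np.zeros(n)
        for idx, val in entries:
            row[idx] = val
        A.append(row)
        b.append(rhs)
    for i in range(M):
        add_row([(i, 1.0), (M + i, -1.0)], mu[i])    #  d_i - t_i <=  mu_i
        add_row([(i, -1.0), (M + i, -1.0)], -mu[i])  # -d_i - t_i <= -mu_i
    for e, (i, j) in enumerate(edges):
        add_row([(i, 1.0), (j, -1.0), (2 * M + e, -1.0)], 0.0)
        add_row([(i, -1.0), (j, 1.0), (2 * M + e, -1.0)], 0.0)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * M + [(0, None)] * (M + E))
    return res.x[:M]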
Remark. We can also extend these models to combine Gaussian and Laplacian terms in
the exponent, for example by using an L2 norm term for absolute depth, and an L1 norm
term for the interaction terms. MAP inference remains tractable in this setting, and can be
solved using convex optimization as a QP (quadratic program).
5
Experiments
5.1 Data collection
We used a 3-D laser scanner to collect images and their corresponding depthmaps. The
scanner uses a SICK 1-D laser range finder mounted on a motor to get 2D scans. We collected a total of 425 image+depthmap pairs, with an image resolution of 1704x2272 and a
depthmap resolution of 86x107. In the experimental results reported here, 75% of the images/depthmaps were used for training, and the remaining 25% for hold-out testing. Due
to noise in the motor system, the depthmaps were not perfectly aligned with the images,
and had an alignment error of about 2 depth patches. Also, the depthmaps had a maximum
range of 81m (the maximum range of the laser scanner), and had minor additional errors
due to reflections and missing laser scans. Prior to running our learning algorithms, we
transformed all the depths to a log scale so as to emphasize multiplicative rather than additive errors in training. In our earlier experiments (not reported here), learning using linear
depth values directly gave poor results.
5.2 Results
We tested our model on real-world test-set images of forests (containing trees, bushes, etc.),
campus areas (buildings, people, and trees), and indoor places (such as corridors). The
algorithm was trained on a training set comprising images from all of these environments.
Table 1 shows the test-set results when using different feature combinations. We see that
using multiscale and column features significantly improves the algorithm?s performance.
Including the interaction terms further improved its performance, and the Laplacian model
performs better than the Gaussian one. Empirically, we also observed that the Laplacian
model does indeed give depthmaps with significantly sharper boundaries (as in our discussion in Section 4.1; also see Fig. 3). Table 1 shows the errors obtained by our algorithm
on a variety of forest, campus, and indoor images. The results on the test set show that the
algorithm estimates the depthmaps with a average error of 0.132 orders of magnitude. It
works well even in the varied set of environments as shown in Fig. 3 (last column). It also
appears to be very robust towards variations caused by shadows.
Informally, our algorithm appears to predict the relative depths of objects quite well (i.e.,
their relative distances to the camera), but seems to make more errors in absolute depths.
Some of the errors can be attributed to errors or limitations of the training set. For example,
the training set images and depthmaps are slightly misaligned, and therefore the edges in
the learned depthmap are not very sharp. Further, the maximum value of the depths in the
training set is 81m; therefore, far-away objects are all mapped to the one distance of 81m.
Our algorithm appears to incur the largest errors on images which contain very irregular
trees, in which most of the 3-D structure in the image is dominated by the shapes of the
leaves and branches. However, arguably even human-level performance would be poor on
these images.
6
Conclusions
We have presented a discriminatively trained MRF model for depth estimation from single
monocular images. Our model uses monocular cues at multiple spatial scales, and also
Figure 3: Results for a varied set of environments, showing original image (column 1),
ground truth depthmap (column 2), predicted depthmap by Gaussian model (column 3),
predicted depthmap by Laplacian model (column 4). (Best viewed in color)
Table 1: Effect of multiscale and column features on accuracy. The average absolute errors
(RMS errors gave similar results) are on a log scale (base 10). H1 and H2 represent summary statistics for k = 1, 2. S1 , S2 and S3 represent the 3 scales. C represents the column
features. Baseline is trained with only the bias term (no features).
FEATURE                                    ALL     FOREST  CAMPUS  INDOOR
BASELINE                                   .295    .283    .343    .228
GAUSSIAN (S1,S2,S3, H1,H2, no neighbors)   .162    .159    .166    .165
GAUSSIAN (S1, H1,H2)                       .171    .164    .189    .173
GAUSSIAN (S1,S2, H1,H2)                    .155    .151    .164    .157
GAUSSIAN (S1,S2,S3, H1,H2)                 .144    .144    .143    .144
GAUSSIAN (S1,S2,S3, C, H1)                 .139    .140    .141    .122
GAUSSIAN (S1,S2,S3, C, H1,H2)              .133    .135    .132    .124
LAPLACIAN                                  .132    .133    .142    .084
incorporates interaction terms that model relative depths, again at different scales. In addition to a Gaussian MRF model, we also presented a Laplacian MRF model in which
MAP inference can be done efficiently using linear programming. We demonstrated that
our algorithm gives good 3-D depth estimation performance on a variety of images.
Acknowledgments
We give warm thanks to Jamie Schulte, who designed the 3-D scanner, for help in collecting
the data used in this work. We also thank Larry Jackel for helpful discussions. This work
was supported by the DARPA LAGR program under contract number FA8650-04-C-7134.
References
[1] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int'l Journal of Computer Vision, 47:7–42, 2002.
[2] David A. Forsyth and Jean Ponce. Computer Vision: A Modern Approach. Prentice Hall, 2003.
[3] S. Das and N. Ahuja. Performance analysis of stereo, vergence, and focus as depth cues for active vision. IEEE Trans Pattern Analysis & Machine Intelligence, 17:1213–1219, 1995.
[4] J. Michels, A. Saxena, and A.Y. Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In ICML, 2005.
[5] T. Nagai, T. Naruse, M. Ikehara, and A. Kurematsu. HMM-based surface reconstruction from single images. In Proc IEEE Int'l Conf Image Processing, volume 2, 2002.
[6] G. Gini and A. Marchi. Indoor robot navigation with single camera vision. In PRIS, 2002.
[7] M. Shao, T. Simchony, and R. Chellappa. New algorithms from reconstruction of a 3-d depth map from one or more images. In Proc IEEE CVPR, 1988.
[8] J. Lafferty, A. McCallum, and F. Pereira. Discriminative fields for modeling spatial dependencies in natural images. In ICML, 2001.
[9] K. Murphy, A. Torralba, and W.T. Freeman. Using the forest to see the trees: A graphical model relating features, objects, and scenes. In NIPS 16, 2003.
[10] Xuming He, Richard S. Zemel, and Miguel A. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In Proc. CVPR, 2004.
[11] S. Kumar and M. Hebert. Discriminative fields for modeling spatial dependencies in natural images. In NIPS 16, 2003.
[12] J.M. Loomis. Looking down is looking up. Nature News and Views, 414:155–156, 2001.
[13] B. Wu, T.L. Ooi, and Z.J. He. Perceiving distance accurately by a directional process of integrating ground information. Letters to Nature, 428:73–77, 2004.
[14] I. Bülthoff, H. Bülthoff, and P. Sinha. Top-down influences on stereoscopic depth-perception. Nature Neuroscience, 1:254–257, 1998.
[15] E.R. Davies. Laws' texture energy in TEXTURE. In Machine Vision: Theory, Algorithms, Practicalities, 2nd Edition. Academic Press, San Diego, 1997.
[16] A.S. Willsky. Multiresolution Markov models for signal and image processing. IEEE, 2002.
| 2921 |@word seems:1 stronger:1 nd:1 norm:2 r:10 shading:1 initial:1 hoiem:1 ours:1 contextual:2 fn:3 distant:2 additive:1 tilted:1 partition:1 shape:2 motor:2 designed:1 ashutosh:1 grass:2 alone:1 cue:7 leaf:1 intelligence:1 mccallum:1 location:1 corridor:1 lagr:1 combine:1 mask:5 indeed:1 expected:5 behavior:1 themselves:1 frequently:1 freeman:1 window:1 begin:1 estimating:3 campus:2 lowest:1 sung:1 pseudo:1 sky:4 safely:1 saxena:3 collecting:2 ooi:1 control:1 appear:3 arguably:1 local:10 vertically:1 tends:1 might:2 chose:1 collect:2 challenging:1 suggests:1 misaligned:1 range:3 unique:1 camera:4 acknowledgment:1 yj:1 testing:1 practice:1 x3:1 xr:2 spot:1 area:1 significantly:3 davy:1 integrating:1 marchi:2 cannot:4 close:1 get:1 prentice:1 context:1 applying:1 influence:1 map:6 demonstrated:1 missing:1 maximizing:2 independently:1 convex:1 focused:2 resolution:4 unstructured:3 immediately:1 avoidance:1 variation:6 diego:1 heavily:1 exact:1 programming:3 us:7 observed:1 solved:1 capture:9 connected:1 news:1 autonomously:1 remote:1 environment:6 occluded:1 mine:1 personal:1 trained:6 depend:1 solving:2 incur:1 distinctive:1 textured:1 shao:1 easily:2 joint:1 darpa:1 laser:5 chellappa:1 gini:2 zemel:1 labeling:2 tell:1 quite:1 jean:1 stanford:4 larger:5 cvpr:2 statistic:2 jointly:1 itself:1 reconstruction:7 jamie:1 interaction:9 neighboring:8 aligned:1 multiresolution:1 empty:1 generating:1 object:15 help:4 andrew:1 miguel:1 ij:1 minor:1 eq:4 strong:2 p2:1 recovering:1 c:1 shadow:1 indicate:1 predicted:2 direction:1 filter:6 centered:1 human:2 larry:1 bin:1 require:1 wall:1 utrs:1 viewer:1 scanner:6 lying:1 hold:1 considered:1 ground:5 hall:1 exp:2 cb:1 predict:4 orest:1 driving:1 efros:1 torralba:1 smallest:1 estimation:7 proc:3 jackel:1 largest:2 successfully:1 gaussian:13 rather:3 cr:1 xir:1 focus:1 ponce:1 likelihood:5 contrast:1 baseline:2 sense:2 helpful:1 inference:4 mrfs:2 relation:3 transformed:1 comprising:1 overall:1 classification:1 orientation:2 exponent:4 spatial:7 constrained:1 fairly:2 smoothing:1 field:7 schulte:1 ng:3 represents:1 look:3 icml:2 richard:1 modern:1 oriented:2 recognize:1 individual:2 murphy:1 occlusion:3 attempt:1 detection:2 organization:1 highly:1 evaluation:1 alignment:1 navigation:1 light:1 accurate:2 edge:6 closer:1 tree:7 divide:1 sinha:1 column:10 modeling:3 obstacle:2 soft:1 earlier:1 uniform:1 comprised:1 reported:2 dependency:5 thanks:1 probabilistic:1 contract:1 squared:1 ambiguity:1 again:1 containing:1 dr:1 conf:1 chung:1 account:2 potential:2 includes:2 int:2 forsyth:1 caused:2 depends:3 performed:1 multiplicative:1 h1:7 closed:1 view:1 doing:1 portion:1 recover:1 relied:1 parallel:1 minimize:1 air:1 square:1 accuracy:2 convolutional:1 variance:4 who:1 efficiently:1 maximized:1 spaced:1 identify:1 directional:1 accurately:2 drive:1 detector:1 energy:5 frequency:1 di:8 attributed:1 knowledge:1 car:1 color:10 improves:1 segmentation:1 appears:5 scattering:1 higher:1 supervised:4 reflected:1 improved:1 done:3 though:1 strongly:1 just:1 binocular:1 lastly:2 hand:1 ei:1 multiscale:7 logistic:1 believe:1 building:7 effect:3 contain:2 hence:3 adjacent:3 ll:1 workhorse:1 performs:1 motion:1 reflection:1 l1:3 image:68 began:1 empirically:2 qp:1 volume:1 extend:2 discussed:1 tail:1 relating:1 he:2 unconstrained:1 similarly:1 had:4 dj:7 robot:2 surface:2 etc:4 add:1 base:1 sick:1 posterior:3 multivariate:1 depthmap:7 yi:2 herbert:1 captured:2 additional:3 floor:1 determine:2 signal:1 branch:1 multiple:7 full:1 infer:1 smooth:2 academic:1 offer:1 
finder:1 laplacian:13 mrf:10 basic:2 variant:1 regression:1 vision:9 metric:1 denominator:2 histogram:4 represent:4 sometimes:1 normalization:1 robotics:1 irregular:1 addition:3 interval:1 subject:1 tend:1 thing:1 incorporates:2 lafferty:1 variety:2 trainingset:1 fit:2 gave:2 perfectly:1 idea:1 yijs:2 six:2 motivated:1 nagai:2 heavier:1 rms:1 stereo:2 fa8650:1 nine:2 remark:1 generally:1 clear:1 informally:1 ang:1 s3:5 judging:1 stereoscopic:1 estimated:3 neuroscience:1 per:1 x107:1 blue:4 four:1 sum:2 everywhere:1 uncertainty:1 letter:1 place:2 wu:1 patch:53 asaxena:1 capturing:2 correspondence:1 fold:1 quadratic:2 constraint:2 scene:6 nearby:2 dominated:1 loomis:1 speed:1 extremely:1 kumar:2 department:1 combination:1 poor:2 smaller:1 across:1 slightly:1 ur:2 s1:7 outlier:2 taken:1 haze:4 monocular:14 equation:1 bus:1 remains:1 tractable:4 gaussians:1 apply:4 away:3 original:1 convolve:1 remaining:1 include:2 running:1 top:1 graphical:2 giving:1 practicality:1 rt:1 aussian:6 exhibit:1 gradient:7 distance:7 unable:1 mapped:1 thank:1 hmm:1 collected:2 water:1 willsky:1 modeled:2 relationship:2 insufficient:4 difficult:4 mostly:1 sharper:1 taxonomy:1 ycbcr:1 negative:3 reliably:1 perform:1 upper:1 vertical:1 observation:1 markov:3 immediate:3 communication:1 looking:2 frame:1 varied:2 sharp:3 intensity:3 atmospheric:1 inferred:1 david:1 pair:1 eature:1 learned:1 discontinuity:1 trans:1 address:1 able:2 nip:2 usually:1 pattern:1 perception:1 indoor:4 laplacians:2 program:2 including:3 natural:3 rely:1 warm:1 predicting:1 indicator:1 improve:1 numerous:1 text:1 prior:2 understanding:1 l2:3 relative:11 law:6 discriminatively:3 limitation:1 mounted:2 xuming:1 analogy:1 h2:6 row:3 summary:4 supported:1 last:2 hebert:2 bias:1 szeliski:1 neighbor:9 face:1 taking:1 absolute:14 boundary:1 depth:82 dimension:1 transition:1 world:1 made:1 collection:1 reinforcement:1 san:1 far:5 scharstein:1 approximate:2 hang:1 emphasize:1 keep:1 global:9 active:1 conclude:1 discriminative:3 xi:3 stereopsis:1 vergence:1 table:3 nature:3 channel:8 learn:4 robust:3 ca:1 forest:5 defocus:2 da:1 spread:2 dense:1 motivation:1 noise:2 s2:6 edition:1 repeated:1 fig:6 ahuja:1 vr:2 n:4 pereira:1 lie:2 outdoor:3 perpinan:1 third:1 down:2 showing:1 intractable:2 texture:18 magnitude:2 michels:2 likely:2 infinitely:1 visual:1 horizontally:1 contained:1 truth:3 extracted:1 conditional:2 viewed:3 towards:1 man:1 carreira:1 determined:1 specifically:2 except:2 perceiving:1 averaging:2 total:3 vrt:1 experimental:1 tiled:1 people:2 scan:2 bush:2 tested:1 correlated:1 |
2,117 | 2,922 | Learning Topology with the Generative Gaussian
Graph and the EM Algorithm
Michaël Aupetit
CEA - DASE
BP 12 - 91680
Bruyères-le-Châtel, France
[email protected]
Abstract
Given a set of points and a set of prototypes representing them, how to
create a graph of the prototypes whose topology accounts for that of the
points? This problem had not yet been explored in the framework of statistical learning theory. In this work, we propose a generative model
based on the Delaunay graph of the prototypes and the ExpectationMaximization algorithm to learn the parameters. This work is a first
step towards the construction of a topological model of a set of points
grounded on statistics.
1
Introduction
1.1
Topology what for?
Given a set of points in a high-dimensional euclidean space, we intend to extract the topology of the manifolds from which they are drawn. There are several reasons for this among
which: increasing our knowledge about this set of points by measuring its topological
features (connectedness, intrinsic dimension, Betti numbers (number of voids, holes, tunnels. . . )) in the context of exploratory data analysis [1], allowing to compare two sets of
points wrt their topological characteristics or to find clusters as connected components in
the context of pattern recognition [2], or finding shortest path along manifolds in the context
of robotics [3].
There are two families of approaches which deal with 'topology': on one hand, the 'topology preserving' approaches based on nonlinear projection of the data in lower dimensional spaces with a constrained topology to allow visualization [4, 5, 6, 7, 8]; on the other hand, the 'topology modelling' approaches based on the construction of a structure whose topology is not constrained a priori, so it is expected to better account for that of the data [9, 10, 11], at the expense of the visualisability. Much work has been done about the former problem, also called 'manifold learning', from Generative Topographic Mapping [4]
to Multi-Dimensional Scaling and its variants [5, 6], Principal Curves [7] and so on. In
all these approaches, the intrinsic dimension of the model is fixed a priori which eases the
visualization but arbitrarily forces the topology of the model. And when the dimension
is not fixed as in the mixture of Principal Component Analyzers [8], the connectedness is
lost. The latter problem we deal with had never been explored in the statistical learning
perspective. Its aim is not to project and visualize a high-dimensional set of points, but to
extract the topological information from it directly in the high-dimensional space, so that
the model must be freed as much as possible from any a priori topological constraint.
1.2
Learning topology: a state of the art
As we may learn a complicated function combining simple basis functions, we shall learn
a complicated manifold1 combining simple basis manifolds. A simplicial complex2 is such
a model based on the combination of simplices, each with its own dimension (a 1-simplex
is a line segment, a 2-simplex is a triangle. . . a k-simplex is the convex hull of a set of k + 1
points). In a simplicial complex, the simplices are exclusively connected by their vertices or
their faces. Such a structure is appealing because it is possible to extract from it topological
information like Betti numbers, connectedness and intrinsic dimension [10]. A particular
simplicial complex is the Delaunay complex, defined as the set of simplices whose Voronoï
cells3 of the vertices are adjacent, assuming general position for the vertices. The Delaunay
graph is made of vertices and edges of the Delaunay complex [12].
All the previous work about topology modelling is grounded on the result of Edelsbrunner and Shah [13], which proves that given a manifold $M \subset R^D$ and a set of $N_0$ vector prototypes $w \in (R^D)^{N_0}$ nearby M, there exists a simplicial subcomplex of the Delaunay complex of w which has the same topology as M under what we call the 'ES-conditions'.
In the present work, the manifold M is not known but through a finite set of M data points $v \in M^M$. Martinetz and Schulten proposed to build a graph of the prototypes with an algorithm called 'Competitive Hebbian Learning' (CHL) [11] to tackle this problem. Their approach has been extended to simplicial complexes by De Silva and Carlsson with the definition of 'weak witnesses' [10]. In both cases, the ES-conditions about M are weakened so they can be verified by a finite sample v of M, so that the graph or the simplicial complex built over w is proved to have the same topology as M if v is a sufficiently dense sampling of M.
The CHL consists in connecting two prototypes in w if they are the first and the second closest neighbors to a point of v (closeness wrt the Euclidean norm). Each point of v leads to an edge, and is called a 'weak witness' of the connected prototypes [10]. The topology representing graph obtained is a subgraph of the Delaunay graph. The region of $R^D$ in which any data point would connect the same prototypes is the 'region of influence' (ROI) of this edge (see Figure 2 d-f). This principle is extended to create k-simplices connecting k + 1 prototypes, which are part of the Delaunay simplicial complex of w [10].
Therefore, the model obtained is based on regions of influence: a simplex exists in the model if there is at least one datum in its ROI. Hence, the capacity of this model to correctly represent the topology of a set of points strongly depends on the shape and location of the ROI wrt the points, and on the presence of noise in the data. Moreover, as far as $N_0 > 2$, there cannot exist an isolated prototype allowing to represent an isolated bump in the data distribution, because any datum of this bump will have two closest prototypes to connect to each other. An aging process has been proposed by Martinetz and Schulten to filter out the noise, which works roughly such that edges with fewer data than a threshold in their ROI
are pruned from the graph. This looks like a filter based on the probability density of the
data distribution, but no statistical criterion is proposed to tune the parameters. Moreover
the area of the ROI may be intractable in high dimension and is not trivially related to the
1 For simplicity, we call 'manifold' what can be actually a set of manifolds connected or not to each other with possibly various intrinsic dimensions.
2 The terms 'simplex' or 'graph' denote both the abstract object and its geometrical realization.
3 Given a set of points w in $R^D$, $V_i = \{v \in R^D \mid (v - w_i)^2 \leq (v - w_j)^2, \forall j\}$ defines the Voronoï cell associated to $w_i \in w$.
corresponding line segment, so measuring the frequency over such a region is not relevant
to define a useful probability density. At last, the line segment associated to an edge of
the graph is not part of the model: data are not projected on it, data drawn from such a
line segment may not give rise to the corresponding edge, and the line segment may not
intersect at all its associated ROI. In other words, the model is not self-consistent, that is
the geometrical realization of the graph is not always a good model of its own topology
whatever the density of the sampling.
We proposed to define Voronoï cells of line segments as ROI for the edges and defined a
criterion to cut edges with a lower density of data projecting on their middle than on their
borders [9]. This solves some of the CHL limits, but one important problem common to both approaches still remains: they rely on the visual control of their quality, i.e. no criterion
allows to assess the quality of the model especially in dimension greater than 3.
1.3
Emerging topology from a statistical generative model
For all the above reasons, we propose another way for modelling topology. The idea is to construct a 'good' statistical generative model of the data taking the noise into account, and to assume that its topology is therefore a 'good' model of the topology of the manifold which generated the data. The only constraint we impose on this generative model is that its topology must be as 'flexible' as possible and must be 'extractible'. 'Flexible' to avoid at best any a priori constraint on the topology so as to allow the modelling of any one. 'Extractible' to get a 'white box' model from which the topological characteristics are tractable in terms of computation. So we propose to define a 'generative simplicial complex'. However, this work being preliminary, we expose here the simpler case of defining a 'generative graph' (a simplicial complex made only of vertices and edges) and tuning its parameters. This allows to demonstrate the feasibility of this approach and to foresee future difficulties when it is extended to simplicial complexes.
It works as follows. Given a set of prototypes located over the data distribution using
e.g. Vector Quantization [14], the Delaunay graph (DG) of the prototypes is constructed
[15]. Then, each edge and each vertex of the graph is the basis of a generative model so
that the graph generates a mixture of gaussian density functions. The maximization of the
likelihood of the data wrt the model, using Expectation-Maximization, allows to tune the
weights of this mixture and leads to the emergence of the expected topology representing
graph through the edges with non-negligible weights that remain after the optimization
process.
We first present the framework and the algorithm we use in section 2. Then we test it on
artificial data in section 3 before the discussion and conclusion in section 4.
2
A Generative Gaussian Graph to learn topology
2.1
The Generative Gaussian Graph
In this work, M is the support of the probability density function (pdf) p from which are drawn the data v. In fact, this is not the topology of M which is of interest, but the topology of the manifolds $M_{prin}$ called 'principal manifolds' of the distribution p (in reference to the definition of Tibshirani [7]), which can be viewed as the manifold M without the noise. We assume the data have been generated by some set of points and segments constituting the set of manifolds $M_{prin}$ which have been corrupted with additive spherical gaussian noise with mean 0 and unknown variance $\sigma_{noise}^2$. Then, we define a gaussian mixture model to account for the observed data, which is based on both gaussian kernels that we call 'gaussian-points', and what we call 'gaussian-segments', forming a 'Generative Gaussian Graph' (GGG).
The value at point $v_j \in v$ of a normalized gaussian-point centered on a prototype $w_i \in w$ with variance $\sigma^2$ is defined as: $g^0(v_j, w_i, \sigma) = (2\pi\sigma^2)^{-D/2} \exp\left(-\frac{(v_j - w_i)^2}{2\sigma^2}\right)$.
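In code this is immediate; a small Python sketch (evaluating a batch of points as an (N, D) array is an implementation choice of this illustration):

import numpy as np

def g0(V, w, sigma):
    """Normalized gaussian-point density at the points V (N, D), centered on
    the prototype w (D,)."""
    D = V.shape[-1]
    sq = np.sum((V - w) ** 2, axis=-1)
    return (2 * np.pi * sigma ** 2) ** (-D / 2) * np.exp(-sq / (2 * sigma ** 2))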
A normalized gaussian-segment is defined as the sum of an infinite number of gaussian-points evenly spread on a line segment. Thus, this is the integral of a gaussian-point along a line segment. The value at point $v_j$ of the gaussian-segment $[w_{a_i} w_{b_i}]$ associated to the ith edge $\{a_i, b_i\}$ in DG with variance $\sigma^2$ is:

$$g^1(v_j, \{w_{a_i}, w_{b_i}\}, \sigma) = \int_{w_{a_i}}^{w_{b_i}} \frac{\exp\left(-\frac{(v_j - w)^2}{2\sigma^2}\right)}{(2\pi\sigma^2)^{D/2} L_{a_i b_i}}\, dw = \frac{\exp\left(-\frac{(v_j - q_i^j)^2}{2\sigma^2}\right)}{(2\pi\sigma^2)^{\frac{D-1}{2}}} \cdot \frac{\mathrm{erf}\left(\frac{Q_{a_i b_i}^j}{\sigma\sqrt{2}}\right) - \mathrm{erf}\left(\frac{Q_{a_i b_i}^j - L_{a_i b_i}}{\sigma\sqrt{2}}\right)}{2 L_{a_i b_i}} \qquad (1)$$

where $L_{a_i b_i} = \|w_{b_i} - w_{a_i}\|$, $Q_{a_i b_i}^j = \frac{\langle v_j - w_{a_i}, w_{b_i} - w_{a_i} \rangle}{L_{a_i b_i}}$ and $q_i^j = w_{a_i} + (w_{b_i} - w_{a_i}) \frac{Q_{a_i b_i}^j}{L_{a_i b_i}}$ is the orthogonal projection of $v_j$ on the straight line passing through $w_{a_i}$ and $w_{b_i}$. In the case where $w_{a_i} = w_{b_i}$, we set $g^1(v_j, \{w_{a_i}, w_{b_i}\}, \sigma) = g^0(v_j, w_{a_i}, \sigma)$.
The left part of the dot product accounts for the gaussian noise orthogonal to the line segment, and the right part for the gaussian noise integrated along the line segment. The functions $g^0$ and $g^1$ are positive and we can prove that $\int_{R^D} g^0(v, w_i, \sigma)\, dv = 1$ and $\int_{R^D} g^1(v, \{w_a, w_b\}, \sigma)\, dv = 1$, so they are both probability density functions. A gaussian-point is associated to each prototype in w and a gaussian-segment to each edge in DG.
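A sketch of $g^1$ following Eq. (1), again evaluating a batch V of shape (N, D) as an implementation choice; it falls back to the gaussian-point when the segment degenerates:

import numpy as np
from scipy.special import erf

def g1(V, wa, wb, sigma):
    """Normalized gaussian-segment density of Eq. (1) at the points V (N, D)."""
    D = V.shape[-1]
    L = np.linalg.norm(wb - wa)
    if L == 0.0:  # degenerate segment: gaussian-point
        sq = np.sum((V - wa) ** 2, axis=-1)
        return (2 * np.pi * sigma ** 2) ** (-D / 2) * np.exp(-sq / (2 * sigma ** 2))
    u = (wb - wa) / L
    Q = (V - wa) @ u                     # signed coordinate along the segment
    q = wa + Q[:, None] * u              # orthogonal projections q_i^j
    perp_sq = np.sum((V - q) ** 2, axis=-1)
    gauss = np.exp(-perp_sq / (2 * sigma ** 2)) \
            / (2 * np.pi * sigma ** 2) ** ((D - 1) / 2)
    band = erf(Q / (sigma * np.sqrt(2))) - erf((Q - L) / (sigma * np.sqrt(2)))
    return gauss * band / (2 * L)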
The gaussian mixture is obtained by a weighting sum of the $N_0$ gaussian-points and $N_1$ gaussian-segments, such that the weights $\pi$ sum to 1 and are non-negative:
$$p(v_j \mid \pi, w, \sigma, DG) = \sum_{k=0}^{1} \sum_{i=1}^{N_k} \pi_i^k\, g^k(v_j, s_i^k, \sigma) \qquad (2)$$

with $\sum_{k=0}^{1} \sum_{i=1}^{N_k} \pi_i^k = 1$ and $\forall i, k,\ \pi_i^k \geq 0$, where $s_i^0 = w_i$ and $s_i^1 = \{w_{a_i}, w_{b_i}\}$ such that $\{a_i, b_i\}$ is the ith edge in DG. The weight $\pi_i^0$ (resp. $\pi_i^1$) is the probability that a datum v was drawn from the gaussian-point associated to $w_i$ (resp. the gaussian-segment associated to the ith edge of DG).
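Evaluating this mixture reuses the g0 and g1 sketches above:

import numpy as np

def mixture_density(V, pi0, pi1, W, edges, sigma):
    """Eq. (2): weighted sum of gaussian-points and gaussian-segments.

    V: (N, D) data; W: (N0, D) prototypes; edges: list of index pairs
    (a, b) from the Delaunay graph; pi0: (N0,) and pi1: (N1,) weights.
    """
    p = np.zeros(len(V))
    for i, w in enumerate(W):
        p += pi0[i] * g0(V, w, sigma)
    for e, (a, b) in enumerate(edges):
        p += pi1[e] * g1(V, W[a], W[b], sigma)
    return p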
2.2
Measure of quality
The function $p(v_j \mid \pi, w, \sigma, DG)$ is the probability density at $v_j$ given the parameters of the model. We measure the likelihood P of the data v wrt the parameters of the GGG model:
$$P = P(\pi, w, \sigma, DG) = \prod_{j=1}^{M} p(v_j \mid \pi, w, \sigma, DG) \qquad (3)$$
2.3
The Expectation-Maximization algorithm
In order to maximize the likelihood P, or equivalently to minimize the negative log-likelihood $L = -\log(P)$ wrt $\pi$ and $\sigma$, we use the Expectation-Maximization algorithm. We refer to [2] (pages 59-73) and [16] for further details. The minimization of the negative log-likelihood consists in $t_{max}$ iterative steps updating $\pi$ and $\sigma$ which ensure the decrease of L. The updating rules take into account the constraints about positivity or sum to unity of the parameters:
$$\pi_i^{k[new]} = \frac{1}{M} \sum_{j=1}^{M} P(k, i \mid v_j)$$

$$\sigma^{2[new]} = \frac{1}{DM} \sum_{j=1}^{M} \left[ \sum_{i=1}^{N_0} P(0, i \mid v_j)\,(v_j - w_i)^2 + \sum_{i=1}^{N_1} P(1, i \mid v_j)\, \frac{(2\pi\sigma^2)^{-D/2} \exp\left(-\frac{(v_j - q_i^j)^2}{2\sigma^2}\right) \left( I_1 \left[(v_j - q_i^j)^2 + \sigma^2\right] + I_2 \right)}{L_{a_i b_i}\, g^1(v_j, \{w_{a_i}, w_{b_i}\}, \sigma)} \right] \qquad (4)$$

where

$$I_1 = \sqrt{\frac{\pi}{2}}\,\sigma \left( \mathrm{erf}\left(\frac{Q_{a_i b_i}^j}{\sigma\sqrt{2}}\right) - \mathrm{erf}\left(\frac{Q_{a_i b_i}^j - L_{a_i b_i}}{\sigma\sqrt{2}}\right) \right), \quad I_2 = \sigma^2 \left( (Q_{a_i b_i}^j - L_{a_i b_i}) \exp\left(-\frac{(Q_{a_i b_i}^j - L_{a_i b_i})^2}{2\sigma^2}\right) - Q_{a_i b_i}^j \exp\left(-\frac{(Q_{a_i b_i}^j)^2}{2\sigma^2}\right) \right) \qquad (5)$$

and $P(k, i \mid v_j) = \frac{\pi_i^k\, g^k(v_j, s_i^k, \sigma)}{p(v_j \mid \pi, w, \sigma, DG)}$ is the posterior probability that the datum $v_j$ was generated by the component associated to (k, i).
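The E-step and the weight update of Eq. (4) in code, reusing the g0 and g1 sketches above (the sigma update is omitted here for brevity, a simplification of this sketch):

import numpy as np

def em_step_weights(V, pi0, pi1, W, edges, sigma):
    """One EM iteration for the mixture weights pi of Eq. (4)."""
    N0, N1 = len(W), len(edges)
    comp = np.zeros((len(V), N0 + N1))
    for i, w in enumerate(W):                      # weighted component densities
        comp[:, i] = pi0[i] * g0(V, w, sigma)
    for e, (a, b) in enumerate(edges):
        comp[:, N0 + e] = pi1[e] * g1(V, W[a], W[b], sigma)
    post = comp / comp.sum(axis=1, keepdims=True)  # posteriors P(k, i | v_j)
    new_pi = post.mean(axis=0)                     # Eq. (4): average over data
    return new_pi[:N0], new_pi[N0:]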
2.4
Emerging topology by maximizing the likelihood
Finally, to get the topology representing graph from the generative model, the core idea is to prune from the initial DG the edges for which the probability that they generated the data is negligible. The complete algorithm is the following:
1. Initialize the location of the prototypes w using vector quantization [14].
2. Construct the Delaunay graph DG of the prototypes.
3. Initialize the weights $\pi$ to $1/(N_0 + N_1)$ to give equiprobability to each vertex and edge.
4. Given w and DG, use updating rules (4) to find $\sigma^{2*}$ and $\pi^*$ maximizing the likelihood P.
5. Prune the edges $\{a_i, b_i\}$ of DG associated to the gaussian-segments with probability $\pi_i^{1*}$ where $\pi_i^{1*} \leq \epsilon$.
The topology representing graph emerges from the edges with probabilities $\pi^* > \epsilon$. It is the graph which best models the topology of the data in the sense of the maximum likelihood wrt $\pi$, $\sigma$, and the set of prototypes w and their Delaunay graph.
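Steps 3-5 in code, reusing em_step_weights above (keeping sigma fixed across iterations is a simplifying assumption of this sketch; the paper also updates it with Eq. (4)):

import numpy as np

def learn_topology(V, W, edges, sigma, t_max=100, eps=1e-3):
    """Learn the topology representing graph from data V, prototypes W and
    the Delaunay edges, by EM on the weights followed by pruning."""
    n = len(W) + len(edges)
    pi0 = np.full(len(W), 1.0 / n)      # step 3: equiprobable components
    pi1 = np.full(len(edges), 1.0 / n)
    for _ in range(t_max):              # step 4: EM iterations
        pi0, pi1 = em_step_weights(V, pi0, pi1, W, edges, sigma)
    kept = [e for e, p in zip(edges, pi1) if p > eps]  # step 5: pruning
    return kept, pi0, pi1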
3
Experiments
In these experiments, given a set of points and a set of prototypes located thanks to vector
quantization [14], we want to verify the relevance of the GGG to learn the topology in
various noise conditions. The principle of the GGG is shown in the Figure 1. In the Figure
2, we show the comparison of the GGG to a CHL for which we filter out edges which have
a number of hits lower than a threshold T . The data and prototypes are the same for both
algorithms. We set $T^*$ such that the graph obtained matches visually as close as possible the expected solution. We optimize $\pi$ and $\sigma$ using (4) for $t_{max} = 100$ steps and $\epsilon = 0.001$.
Conditions and conclusions of the experiments are given in the captions.
Figure 1: Principle of the Generative Gaussian Graph: (a) Data drawn from an oblique segment, a horizontal one and an isolated point with respective density {0.25; 0.5; 0.25}. The prototypes are located at the extreme points of the segments, and at the isolated point. They are connected with edges from the Delaunay graph. (b) The corresponding initial Generative Gaussian Graph. (c) The optimal GGG obtained after optimization of the likelihood according to $\pi$ and $\sigma$. (d) The edges of the optimal GGG associated to non-negligible probabilities model the topology of the data.
4
Discussion
We propose that the problem of learning the topology of a set of points can be posed
as a statistical learning problem: we assume that the topology of a statistical generative
model of a set of points is an estimator of the topology of the principal manifold of this
set. From this assumption, we define a topologically flexible statistical generative mixture
model that we call Generative Gaussian Graph from which we can extract the topology. The
final topology representing graph emerges from the edges with non-negligible probability.
We propose to use the Delaunay graph as an initial graph assuming it is rich enough to
contain as a subgraph a good topological model of the data. The use of the likelihood
criterion makes possible cross-validation to select the best generative model hence the best
topological model in terms of generalization capacities.
The GGG allows to avoid the limits of the CHL for modelling topology. In particular, it
allows to take into account the noise and to model isolated bumps. Moreover, the likelihood
of the data wrt the GGG is maximized during the learning, allowing to measure the quality
of the model even when no visualization is possible. For some particular data distributions where all the data lie on the Delaunay line segments, no maximum of the likelihood
exists. This case is not a problem because $\sigma = 0$ effectively defines a good solution (no noise in a data set drawn from a graph). If only some of the data lie exactly on the line segments, a maximum of the likelihood still exists because $\sigma^2$ defines the variance for all the generative gaussian points and segments at the same time, so it cannot vanish to 0. The computing time complexity of the GGG is $o(D(N_0 + N_1) M t_{max})$ plus the time $O(D N_0^3)$ [15] needed to build the Delaunay graph, which dominates the overall worst time complexity. The Competitive Hebbian Learning is in time $o(D N_0 M)$. As in general the CHL builds more edges than needed to model the topology, it would be interesting to use the Delaunay subgraph obtained with the CHL as a starting point for the GGG model.
The Generative Gaussian Graph can be viewed as a generalization of gaussian mixtures to
points and segments: a gaussian mixture is a GGG with no edge. GGG provides at the
same time an estimation of the data distribution density more accurate than the gaussian
mixture based on the same set of prototypes and the same noise isovariance hypothesis (because it adds gaussian-segments to the pool of gaussian-points), and intrinsically an explicit
model of the topology of the data set which provides most of the topological information
at once. In contrast, other generative models do not provide any insight about the topology of the data, except the Generative Topographic Map (GTM) [4], the revisited Principal
Manifolds [7] or the mixture of Probabilistics Principal Component Analysers (PPCA) [8].
However, in the two former cases, the intrinsic dimension of the model is fixed a priori and
[Figure 2: nine panels in three columns, one column per noise level $\sigma_{noise}$ = 0.05, 0.15, 0.2. Panels: (a) GGG: $\sigma^*$ = 0.06; (b) GGG: $\sigma^*$ = 0.17; (c) GGG: $\sigma^*$ = 0.21; (d)-(f) CHL: T = 0; (g) CHL: $T^*$ = 60; (h) CHL: $T^*$ = 65; (i) CHL: $T^*$ = 58. Plot axes omitted.]
Figure 2: Learning the topology of a data set: 600 data drawn from a spiral and an isolated point, corrupted with additive gaussian noise with mean 0 and variance $\sigma_{noise}^2$. Prototypes are located by vector quantization [14]. (a-c) The edges of the GGG with weights greater than $\epsilon$ allow to recover the topology of the principal manifolds, except for large noise variance (c) where a triangle was created at the center of the spiral. $\sigma^*$ over-estimates $\sigma_{noise}$ because the model is piecewise linear while the true manifolds are non-linear. (d-f) The CHL without threshold (T = 0) is not able to recover the true topology of the data for even small $\sigma_{noise}$. In particular, the isolated bump cannot be recovered. The grey cells correspond to ROI of the edges (darker cells contain more data). It shows these cells are not intuitively related to the edges they are associated to (e.g. they may have very tiny areas (e), and may partly (d) or never (f) contain the corresponding line segment). (g-i) The CHL with a threshold T allows to recover the topology of the data only for small noise variance (g) (notice $T_1 < T_2 \Rightarrow DG_{CHL}(T_2) \subseteq DG_{CHL}(T_1)$). Moreover, setting T requires visual control and is not associated to the optimum of any energy function, which prevents its use in higher dimensional spaces.
not learned from the data, while in the latter the local intrinsic dimension is learned but the
connectedness between the local models is not.
One obvious way to follow to extend this work is considering a simplicial complex in place
of the graph to get the full topological information extractible. Some other interesting
questions arise about the curse of the dimension, the selection of the number of prototypes and the threshold $\epsilon$, the theoretical grounding of the connection between the likelihood and some topological measure of accuracy, the possibility to devise a 'universal topology estimator', the way to deal with data sets with multi-scale structures or background noise...
This preliminary work is an attempt to bridge the gap between Statistical Learning Theory [17] and Computational Topology [18][19]. We wish it to cross-fertilize and to open
new perspectives in both fields.
References
[1] M. Aupetit and T. Catz. High-dimensional labeled data analysis with topology representing graphs. Neurocomputing, Elsevier, 63:139–169, 2005.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford Univ. Press, New York, 1995.
[3] M. Zeller, R. Sharma, and K. Schulten. Topology representing network for sensor-based robot motion planning. World Congress on Neural Networks, INNS Press, pages 100–103, 1996.
[4] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: the generative topographic mapping. Neural Computation, MIT Press, 10(1):215–234, 1998.
[5] V. de Silva and J. B. Tenenbaum. Global versus local methods for nonlinear dimensionality reduction. In S. Becker, S. Thrun, K. Obermayer (Eds) Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, 15:705–712, 2003.
[6] J. A. Lee, A. Lendasse, and M. Verleysen. Curvilinear distance analysis versus isomap. Europ. Symp. on Art. Neural Networks, Bruges (Belgium), d-side eds., pages 185–192, 2002.
[7] R. Tibshirani. Principal curves revisited. Statistics and Computing, (2):183–190, 1992.
[8] M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443–482, 1999.
[9] M. Aupetit. Robust topology representing networks. European Symp. on Artificial Neural Networks, Bruges (Belgium), d-side eds., pages 45–50, 2003.
[10] V. de Silva and G. Carlsson. Topological estimation using witness complexes. In M. Alexa and S. Rusinkiewicz (Eds) Eurographics Symposium on Point-Based Graphics, ETH, Zürich, Switzerland, June 2-4, 2004.
[11] T. M. Martinetz and K. J. Schulten. Topology representing networks. Neural Networks, Elsevier London, 7:507–522, 1994.
[12] A. Okabe, B. Boots, and K. Sugihara. Spatial tessellations: concepts and applications of Voronoï diagrams. John Wiley, Chichester, 1992.
[13] H. Edelsbrunner and N. R. Shah. Triangulating topological spaces. International Journal on Computational Geometry and Applications, 7:365–378, 1997.
[14] T. M. Martinetz, S. G. Berkovitch, and K. J. Schulten. 'Neural-gas' network for vector quantization and its application to time-series prediction. IEEE Trans. on NN, 4(4):558–569, 1993.
[15] E. Agrell. A method for examining vector quantizer structures. Proceedings of IEEE International Symposium on Information Theory, San Antonio, TX, page 394, 1993.
[16] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[17] V.N. Vapnik. Statistical Learning Theory. John Wiley, 1998.
[18] T. Dey, H. Edelsbrunner, and S. Guha. Computational topology. In B. Chazelle, J. Goodman and R. Pollack, editors, Advances in Discrete and Computational Geometry. American Math. Society, Princeton, NJ, 1999.
[19] V. Robins, J. Abernethy, N. Rooney, and E. Bradley. Topology and intelligent data analysis. IDA-03 (International Symposium on Intelligent Data Analysis), Berlin, 2003.
| 2922 |@word middle:1 norm:1 open:1 grey:1 reduction:1 initial:3 series:2 exclusively:1 bradley:1 recovered:1 chazelle:1 ida:1 yet:1 must:3 john:2 additive:2 shape:1 n0:5 generative:24 fewer:1 ith:3 core:1 oblique:1 provides:2 quantizer:1 revisited:2 location:2 math:1 simpler:1 along:3 constructed:1 symposium:3 ik:2 qij:2 prove:2 consists:2 symp:2 expected:3 roughly:1 planning:1 multi:2 spherical:1 wbi:8 curse:1 considering:1 increasing:1 dase:2 project:1 moreover:4 what:4 emerging:2 finding:1 nj:1 tackle:1 exactly:1 hit:1 whatever:1 control:2 before:1 negligible:3 positive:1 t1:2 local:3 zeller:1 aging:1 limit:2 congress:1 oxford:1 path:1 connectedness:4 tmax:3 plus:1 weakened:1 micha:1 bi:11 lost:1 area:2 intersect:1 universal:1 eth:1 projection:2 word:1 get:3 cannot:3 close:1 selection:1 context:3 influence:2 optimize:1 map:1 center:1 maximizing:2 williams:1 urich:1 starting:1 convex:1 simplicity:1 rule:2 estimator:2 insight:1 dw:1 exploratory:1 resp:2 construction:2 caption:1 hypothesis:1 recognition:2 located:4 updating:3 cut:1 labeled:1 observed:1 hv:1 worst:1 region:4 wj:1 connected:5 s1i:1 decrease:1 dempster:1 complexity:2 segment:26 ali:1 basis:3 triangle:2 various:2 gtm:2 tx:1 univ:1 london:1 artificial:2 analyser:2 abernethy:1 whose:3 posed:1 loglikelihood:1 statistic:2 erf:4 topographic:3 emergence:1 laird:1 final:1 bruges:2 inn:1 propose:5 product:1 fr:1 j2:1 relevant:1 combining:2 realization:2 subgraph:3 curvilinear:1 cluster:1 chl:15 optimum:1 object:1 expectationmaximization:1 solves:1 europ:1 switzerland:1 filter:3 hull:1 centered:1 generalization:2 preliminary:2 mm:1 sufficiently:1 roi:8 exp:6 visually:1 mapping:2 visualize:1 bump:4 belgium:2 estimation:2 okabe:1 expose:1 bridge:1 equiprobability:1 create:2 minimization:1 mit:2 sensor:1 gaussian:34 always:1 aim:1 avoid:2 pn:1 june:1 modelling:5 likelihood:14 contrast:1 sense:1 elsevier:2 el:1 i0:1 nn:1 integrated:1 france:1 i1:5 overall:1 among:1 flexible:3 priori:5 verleysen:1 constrained:2 art:2 initialize:2 spatial:1 field:1 construct:2 never:2 once:1 sampling:2 look:1 future:1 simplex:5 t2:2 piecewise:1 intelligent:2 dg:14 neurocomputing:1 geometry:2 n1:4 attempt:1 interest:1 possibility:1 chichester:1 mixture:11 extreme:1 accurate:1 edge:28 integral:1 respective:1 orthogonal:2 incomplete:1 euclidean:2 isolated:7 theoretical:1 pollack:1 measuring:2 tessellation:1 maximization:4 vertex:7 examining:1 guha:1 too:1 graphic:1 connect:2 corrupted:2 thanks:1 density:10 international:3 eas:1 lee:1 probabilistic:1 pool:1 alexa:1 connecting:2 eurographics:1 possibly:1 positivity:1 american:1 account:7 de:3 depends:1 vi:2 b2i:1 competitive:2 recover:3 complicated:2 ass:1 minimize:1 accuracy:1 variance:7 characteristic:2 maximized:1 simplicial:11 correspond:1 weak:2 straight:1 wai:11 ed:4 definition:2 energy:1 frequency:1 obvious:1 dm:1 associated:12 ppca:1 proved:1 intrinsically:1 knowledge:1 emerges:2 dimensionality:1 actually:1 higher:1 tipping:1 follow:1 done:1 box:1 strongly:1 foresee:1 dey:1 hand:2 horizontal:1 nonlinear:2 defines:3 dn0:1 quality:4 grounding:1 normalized:2 verify:1 contain:3 true:2 former:2 hence:2 concept:1 isomap:1 i2:2 deal:3 white:1 adjacent:1 during:1 self:1 criterion:4 pdf:1 complete:1 demonstrate:1 motion:1 silva:3 geometrical:2 common:1 extend:1 refer:1 cambridge:1 ai:10 rd:5 tuning:1 trivially:1 pm:2 analyzer:1 had:2 dot:1 robot:1 add:1 delaunay:15 edelsbrunner:3 closest:2 own:2 posterior:1 perspective:2 arbitrarily:1 devise:1 preserving:1 greater:2 impose:1 prune:2 sharma:1 shortest:1 
maximize:1 full:1 hebbian:2 match:1 cross:2 lai:4 feasibility:1 prediction:1 variant:1 expectation:3 grounded:2 represent:2 kernel:1 robotics:1 cell:5 background:1 want:1 void:1 diagram:1 goodman:1 martinetz:4 call:5 presence:1 enough:1 topology:54 idea:2 prototype:24 qj:4 becker:1 passing:1 york:1 tunnel:1 antonio:1 useful:1 tune:2 tenenbaum:1 exist:1 notice:1 correctly:1 tibshirani:2 discrete:1 shall:1 threshold:5 drawn:7 verified:1 freed:1 graph:38 sum:4 topologically:1 place:1 family:1 scaling:1 datum:4 topological:14 constraint:4 svens:1 bp:1 nearby:1 generates:1 pruned:1 according:1 combination:1 remain:1 em:2 unity:1 wi:9 appealing:1 projecting:1 dv:2 intuitively:1 visualization:3 remains:1 wrt:8 needed:2 tractable:1 shah:2 ensure:1 build:3 especially:1 society:2 intend:1 question:1 obermayer:1 distance:1 thrun:1 capacity:2 berlin:1 evenly:1 manifold:17 reason:2 assuming:2 eres:1 equivalently:1 expense:1 negative:3 rise:1 ski:1 unknown:1 allowing:3 boot:1 finite:2 gas:1 defining:1 extended:3 witness:3 connection:1 learned:2 trans:1 able:1 pattern:2 agrell:1 bbi:1 built:1 royal:1 bi2:1 difficulty:1 force:1 rely:1 cea:2 representing:10 created:1 extract:4 carlsson:2 interesting:2 versus:2 validation:1 consistent:1 rubin:1 principle:3 ggg:17 editor:1 tiny:1 last:1 side:2 allow:3 sugihara:1 neighbor:1 face:1 taking:1 curve:2 dimension:11 world:1 rich:1 made:2 projected:1 san:1 far:1 constituting:1 global:1 iterative:1 s0i:1 sk:1 betti:2 robin:1 learn:5 robust:1 rusinkiewicz:1 complex:13 european:1 vj:23 dense:1 spread:1 border:1 noise:22 arise:1 en:1 simplices:4 darker:1 wiley:2 position:1 schulten:5 explicit:1 wish:1 lie:2 vanish:1 weighting:1 bishop:3 explored:2 closeness:1 dominates:1 intractable:1 intrinsic:6 exists:4 quantization:5 vapnik:1 effectively:1 hole:1 nk:1 gap:1 forming:1 visual:2 prevents:1 ch:1 ma:1 viewed:2 towards:1 infinite:1 except:2 principal:9 called:4 triangulating:1 partly:1 e:2 la:3 select:1 support:1 latter:2 relevance:1 princeton:1 |
2,118 | 2,923 | Beyond Pair-Based STDP: a Phenomenological
Rule for Spike Triplet and Frequency Effects
Jean-Pascal Pfister and Wulfram Gerstner
School of Computer and Communication Sciences
and Brain-Mind Institute,
École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne
{jean-pascal.pfister, wulfram.gerstner}@epfl.ch
Abstract
While classical experiments on spike-timing dependent plasticity analyzed synaptic changes as a function of the timing of pairs of pre- and
postsynaptic spikes, more recent experiments also point to the effect of
spike triplets. Here we develop a mathematical framework that allows
us to characterize timing-based learning rules. Moreover, we identify a
candidate learning rule with five variables (and 5 free parameters) that
captures a variety of experimental data, including the dependence of potentiation and depression upon pre- and postsynaptic firing frequencies.
The relation to the Bienenstock-Cooper-Munro rule as well as to some
timing-based rules is discussed.
1 Introduction
Most experimental studies of Spike-Timing Dependent Plasticity (STDP) have focused on
the timing of spike pairs [1, 2, 3] and so do many theoretical models. The spike-pair
based models can be divided into two classes: either all pairs of spikes contribute in a
homogeneous fashion [4, 5, 6, 7, 8, 9, 10] (called "all-to-all" interaction in the following)
or only pairs of "neighboring" spikes [11, 12, 13] (called "nearest-spike" interaction in the
following); cf. [14, 15]. Apart from these phenomenological models, there are also models
that are somewhat closer to the biophysics of synaptic changes [16, 17, 18, 19].
Recent experiments have furthered our understanding of timing effects in plasticity and
added at least two different aspects: firstly, it has been shown that the mechanism of potentiation in STDP is different from that of depression [20] and secondly, it became clear
that not only the timing of pairs, but also of triplets of spikes contributes to the outcome of
plasticity experiments [21, 22].
In this paper, we introduce a learning rule that takes these two aspects partially into account
in a simple way. Depression is triggered by pairs of spikes with post-before-pre timing,
whereas potentiation is triggered by triplets of spikes consisting of 1 pre- and 2 postsynaptic
spikes. Moreover, in our model the pair-based depression includes an explicit dependence
upon the mean postsynaptic firing rate. We show that such a learning rule accounts for two
important stimulation paradigms:
P1 (Relative Spike Timing): Both the pre- and postsynaptic spike trains consist of a burst
of N spikes at regular intervals T, but the two spike trains are shifted by a time Δt = t_post − t_pre.
The total weight change is a function of the relative timing Δt (this gives the standard STDP function), but also a function of the firing frequency ρ = 1/T during the burst; cf. Fig. 1A (data from L5 pyramidal neurons in visual cortex).
P2 (Poisson Firing): The pre- and postsynaptic spike trains are generated by two independent Poisson processes with rates ρ_x and ρ_y respectively.
Protocol P2 has less experimental support but it helps to establish a relation to the Bienenstock-Cooper-Munro (BCM) model [23]. To see that relation, it is useful to plot the weight change as a function of the postsynaptic firing rate, i.e., Δw ∝ φ(ρ_y) (cf. Fig. 1B). Note that the function φ has only been measured indirectly in experiments [24, 25].
We emphasize that in the BCM model,
Δw = ρ_x φ(ρ_y, ρ̄_y)    (1)
the function φ depends not only on the current firing rate ρ_y, but also on the mean firing rate ρ̄_y averaged over the recent past, which has the effect that the threshold between depression and potentiation is not fixed but dynamic. More precisely, this threshold θ depends nonlinearly on the mean firing rate ρ̄_y:
θ = α ρ̄_y^p,   p > 1    (2)
with parameters α and p. Previous models of STDP have already discussed the relation
of STDP to the BCM rule [16, 12, 17, 26], but none of these seems to be completely
satisfactory as discussed in Section 4. We will also compare our results to the rule of [21]
which was together with the work of [16] amongst the first triplet rules to be proposed.
[Figure 1 here. Panel A: Δw [%] vs. ρ [Hz]; panel B: ⟨ẇ⟩ [ms⁻¹] vs. ρ_y [Hz].]
Figure 1: A. Weight change in an experiment on cortical synapses using pairing protocol (P1) (solid line: Δt = 10 ms, dot-dashed line Δt = −10 ms) as a function of the frequency ρ. Figure redrawn from [11]. B. Weight change in protocol P2 according to the BCM rule for θ = 20, 30, 40 Hz.
2 A Framework for STDP
Several learning rules in the modeling literature can be classified according to the two
criteria introduced above: (i) all-to-all interaction vs. nearest-spike interaction; (ii) pair-based vs. triplet-based rules. Point (ii) can be elaborated further in the context of an
expansion (pairs, triplets, quadruplets, ... of spikes) that we introduce now.
2.1 Volterra Expansion ("all-to-all")
For the sake of simplicity, we assume that weight changes occur at the moment of presynaptic spike arrival or at the moment of postsynaptic firing. The direction and amplitude of the weight change depends on the configuration of spikes in the presynaptic spike train X(t) = Σ_k δ(t − t_x^k) and the postsynaptic spike train Y(t) = Σ_k δ(t − t_y^k). With some arbitrary functionals F[X, Y] and G[X, Y], we write (see also [8])
ẇ(t) = X(t) F[X, Y] + Y(t) G[X, Y]    (3)
Clearly, there can be other neuronal variables that influence the synaptic dynamics. For
example, the weight change can depend on the current weight value w [8, 15, 10], the
Ca2+ concentration [17, 19], the depolarization [25, 27, 28], the mean postsynaptic firing
rate ρ̄_y(t) [23], and so on. Here, we will consider only the dependence upon the history of the pre- and postsynaptic firing times and the mean postsynaptic firing rate ρ̄_y. Note that even if ρ̄_y depends via a low-pass filter τ dρ̄_y/dt = −ρ̄_y + Y(t) on the past spike train Y of the postsynaptic neuron, the description of the problem will turn out to be simpler if the mean firing rate is considered as a separate variable. Therefore, let us write the instantaneous weight change as
ẇ(t) = X(t) F([X, Y], ρ̄_y(t)) + Y(t) G([X, Y], ρ̄_y(t))    (4)
The goal is now to determine the simplest functionals F and G that would be consistent
with the experimental protocols P 1 and P 2 introduced above. Since the functionals are
unknown, we perform a Volterra expansion of F and G in the hope that a small number of
low-order terms are sufficient to explain a large body of experimental data. The Volterra
expansion [29] of the functional G can be written as¹
G([X, Y]) = G_1^y + ∫_0^∞ G_2^{xy}(s) X(t−s) ds + ∫_0^∞ G_2^{yy}(s) Y(t−s) ds
  + ∫_0^∞ ∫_0^∞ G_3^{xxy}(s, s′) X(t−s) X(t−s′) ds ds′
  + ∫_0^∞ ∫_0^∞ G_3^{xyy}(s, s′) X(t−s) Y(t−s′) ds ds′
  + ∫_0^∞ ∫_0^∞ G_3^{yyy}(s, s′) Y(t−s) Y(t−s′) ds ds′ + …    (5)
Similarly, the expansion of F yields
Z ?
Z
F ([X, Y ]) = F1x +
F2xx (s)X(t ? s)ds +
0
?
Fxy
2 (s)Y (t ? s)ds + . . . (6)
0
Note that the upper index in these functions represents the type of interaction. For example, G_3^{xyy} (in bold face above) refers to a triplet interaction consisting of 1 pre- and 2 postsynaptic spikes. Note that the G_3^{xyy} term could correspond to a pre-post-post sequence as well as a post-pre-post sequence. Similarly the term F_2^{xy} picks up the changes caused by arrival
of a presynaptic spike after postsynaptic spike firing. Several learning rules with all-to-all
interaction can be classified in this framework, e.g. [5, 6, 7, 8, 9, 10].
2.2 Our Model
Not all terms in the expansion need to be non-zero. In fact, in the results section we will show that a learning rule with G_3^{xyy}(s, s′) ≠ 0 for all s, s′ > 0 and F_2^{xy}(s) ≠ 0 for s > 0,
and all other terms set to zero is sufficient to explain the results from protocols P1 and P2.
Thus, in our learning rule an isolated pair of spikes in configuration post-before-pre will
lead to depression. An isolated spike pair pre-before-post, on the other hand, would not be
sufficient to trigger potentiation, whereas a triplet pre-post-post or post-pre-post will do so
(see Fig. 2).
¹ For the sake of clarity we have omitted the dependence on ρ̄_y.
[Figure 2 here.]
Figure 2: A. Triplet interaction for LTP (time constants τ_+ and τ_y). B. Pair interaction for LTD (time constant τ_−).
To be specific, we consider
F_2^{xy}(s) = −A_−(ρ̄_y) e^{−s/τ_−}   and   G_3^{xyy}(s, s′) = A_+ e^{−s/τ_+} e^{−s′/τ_y}.    (7)
Such an exponential model can be implemented by a mechanistic update involving three variables (the dot denotes a temporal derivative):
ȧ = −a/τ_+ ;   if t = t_x^k then a → a + 1
ḃ = −b/τ_− ;   if t = t_y^k then b → b + 1    (8)
ċ = −c/τ_y ;   if t = t_y^k then c → c + 1
The weight update is then
ẇ(t) = −A_−(ρ̄_y) X(t) b(t) + A_+ Y(t) a(t) c(t).    (9)
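For concreteness, Eqs. (8)-(9) can be integrated with a simple event-driven loop. The following sketch is our own illustration rather than code from the paper: the Euler time grid and the amplitudes A_p and A_m (the latter standing in for a constant A_−(ρ̄_y)) are placeholder choices.

```python
def simulate_triplet_rule(x_spikes, y_spikes, T, dt=1e-4,
                          tau_p=16.8e-3, tau_m=33.7e-3, tau_y=40e-3,
                          A_p=6.5e-3, A_m=7.1e-3):
    """Integrate Eqs. (8)-(9): pre trace a, post traces b and c,
    pair-based depression and triplet-based potentiation.
    Spike times are in seconds; returns the accumulated weight change."""
    a = b = c = 0.0
    w = 0.0
    x_steps = set(int(round(t / dt)) for t in x_spikes)
    y_steps = set(int(round(t / dt)) for t in y_spikes)
    for k in range(int(T / dt)):
        # passive exponential decay of the three traces (Eq. 8)
        a -= dt * a / tau_p
        b -= dt * b / tau_m
        c -= dt * c / tau_y
        if k in x_steps:        # presynaptic spike
            w -= A_m * b        # pair term of Eq. (9): read post trace b
            a += 1.0            # then increment the pre trace
        if k in y_steps:        # postsynaptic spike
            w += A_p * a * c    # triplet term: read pre trace a and post trace c
            b += 1.0            # traces are read before being incremented,
            c += 1.0            # so only earlier spikes contribute (s, s' > 0)
    return w
```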
2.3 Nearest Spike Expansion (truncated model)
Following ideas of [11, 12, 13], the expansion can also be restricted to neighboring spikes
only. Let us denote by f_y(t) the firing time of the last postsynaptic spike before time t. Similarly, f_x(t′) denotes the timing of the last presynaptic spike preceding t′. With this
notation the Volterra expansion of the preceding section can be repeated in a form that only
nearest spikes play a role. A classification of the models [11, 12, 13] is hence possible.
We focus immediately on the truncated version of our model:
ẇ(t) = X(t) F_2^{xy}(t − f_y(t), ρ̄_y(t)) + Y(t) G_3^{xyy}(t − f_x(t), t − f_y(t))    (10)
The mechanistic model that generates the truncated version is similar to Eq. (8), except that under the appropriate update condition the variable goes to one, i.e. a → 1, b → 1 and c → 1. The weight update is identical to that of the all-to-all model, Eq. (9).
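A matching sketch of the truncated variant, under the same caveats and placeholder parameters as the all-to-all sketch above; the only change is that each spike resets its trace to one instead of incrementing it:

```python
def simulate_triplet_rule_nearest(x_spikes, y_spikes, T, dt=1e-4,
                                  tau_p=16.8e-3, tau_m=33.7e-3, tau_y=40e-3,
                                  A_p=6.5e-3, A_m=7.1e-3):
    """Nearest-spike (truncated) model of Eq. (10): a spike saturates its
    trace at 1, so only the most recent pre/post spike is remembered."""
    a = b = c = 0.0
    w = 0.0
    x_steps = set(int(round(t / dt)) for t in x_spikes)
    y_steps = set(int(round(t / dt)) for t in y_spikes)
    for k in range(int(T / dt)):
        a -= dt * a / tau_p
        b -= dt * b / tau_m
        c -= dt * c / tau_y
        if k in x_steps:
            w -= A_m * b
            a = 1.0             # reset-to-one replaces a += 1
        if k in y_steps:
            w += A_p * a * c
            b = 1.0
            c = 1.0
    return w
```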
3 Results
One advantage of our formulation is that we can derive explicit formulas for the total weight
changes induced by protocols P1 and P2.
3.1 All-to-all Interaction
If we use protocol P1 with a total of N pre- and postsynaptic spikes at frequency ρ shifted by a time Δt, then the total weight change Δw is, for our model with all-to-all interaction,
Δw = A_+ Σ_{k=0}^{N−1} Σ_{k′=1}^{N−1} (N − max(k, k′)) exp(−(k/ρ + Δt)/τ_+) exp(−k′/(τ_y ρ)) Θ_k(−Δt)
   − A_−(ρ̄_y) Σ_{k=0}^{N−1} (N − k) exp(−(k/ρ − Δt)/τ_−) Θ_k(Δt)    (11)
where Θ_k(Δt) = 1 − δ_{k0} Θ(Δt) with Θ the Heaviside step function. The results are plotted in Fig. 3 top-left for N = 60 spikes.
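Equation (11) is a finite double sum and can be evaluated directly; the sketch below is our own illustration with placeholder amplitudes A_p and A_m:

```python
from math import exp

def delta_w_pairing(rho, dt_shift, N=60, tau_p=16.8e-3, tau_m=33.7e-3,
                    tau_y=40e-3, A_p=6.5e-3, A_m=7.1e-3):
    """Total weight change of Eq. (11) for protocol P1: N pre/post spike
    pairs at frequency rho [Hz], shifted by dt_shift = t_post - t_pre [s]."""
    heav = lambda z: 1.0 if z > 0 else 0.0
    Theta = lambda k, z: 1.0 - (k == 0) * heav(z)   # Theta_k of Eq. (11)
    pot = sum((N - max(k, kp))
              * exp(-(k / rho + dt_shift) / tau_p)
              * exp(-kp / (tau_y * rho))
              * Theta(k, -dt_shift)
              for k in range(N) for kp in range(1, N))
    dep = sum((N - k) * exp(-(k / rho - dt_shift) / tau_m) * Theta(k, dt_shift)
              for k in range(N))
    return A_p * pot - A_m * dep
```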
[Figure 3 here: four panels, protocol P1 (left, Δw [%] vs. ρ [Hz]) and protocol P2 (right, ⟨ẇ⟩ [ms⁻¹] vs. ρ_y [Hz]), for all-to-all (top) and nearest-spike (bottom) interactions.]
Figure 3: Triplet learning rule. Summary of all results of protocol P1 (left) and P2 (right) for an all-to-all (top) and nearest-spike (bottom) interaction scheme. For the left column, the upper thick lines correspond to positive timing (Δt > 0) while the lower thin lines to negative timing. Dashed line: Δt = ±2 ms, solid line: Δt = ±10 ms and dot-dashed line Δt = ±30 ms. The error bars indicate the experimental data points of Fig. 1A. Right column: dashed line ρ̄_y = 8 Hz, solid line ρ̄_y = 10 Hz and dot-dashed line ρ̄_y = 12 Hz. Top: τ_y = 200 ms, bottom: τ_y = 40 ms.
The mean firing rate ρ̄_y reflects the firing activity during the recent past (i.e. before the start of the experiment) and is assumed as fixed during the experiment. The exact value does not matter. Overall, the frequency dependence of the changes Δw is very similar to that observed in experiments. If X and Y are independent Poisson processes, protocol P2 gives a total weight change that can be calculated using standard arguments [8]:
⟨ẇ⟩ = −A_−(ρ̄_y) ρ_x ρ_y τ_− + A_+ ρ_x ρ_y² τ_+ τ_y    (12)
As before, the mean firing rate ρ̄_y reflects the firing activity during the recent past and is assumed as fixed during the experiment. In order to implement a sliding threshold as in the BCM rule, we take A_−(ρ̄_y) = α ρ̄_y²/ρ_0², where we set ρ_0 = 10 Hz. This yields a frequency-dependent threshold θ(ρ̄_y) = α τ_− ρ̄_y²/(A_+ τ_+ τ_y ρ_0²). As can be seen in Fig. 3 top-right, our model exhibits all essential features of a BCM rule.
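With this choice of A_−(ρ̄_y), the drift of Eq. (12) and its zero crossing are one-liners; a quick numerical check (α and ρ_x are free placeholder values here):

```python
def mean_drift(rho_y, rho_bar_y, rho_x=10.0, alpha=1.0, rho_0=10.0,
               tau_p=16.8e-3, tau_m=33.7e-3, tau_y=40e-3, A_p=6.5e-3):
    """<w_dot> of Eq. (12) under independent Poisson firing (protocol P2),
    with the sliding amplitude A_-(rho_bar_y) = alpha*rho_bar_y**2/rho_0**2."""
    A_m = alpha * rho_bar_y ** 2 / rho_0 ** 2
    return -A_m * rho_x * rho_y * tau_m + A_p * rho_x * rho_y ** 2 * tau_p * tau_y

def threshold(rho_bar_y, alpha=1.0, rho_0=10.0,
              tau_p=16.8e-3, tau_m=33.7e-3, tau_y=40e-3, A_p=6.5e-3):
    """Sliding threshold theta(rho_bar_y) at which the drift changes sign."""
    return alpha * tau_m * rho_bar_y ** 2 / (A_p * tau_p * tau_y * rho_0 ** 2)
```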
3.2 Nearest Spike Interaction
We now apply protocols P1 and P2 to our truncated rule, i.e. restricted to the nearest-spike interaction; cf. Eq. (10), where the expressions of F_2^{xy} and G_3^{xyy} are taken from Eq. (7). The weight change Δw for protocol P1 can be calculated explicitly and is plotted in Fig. 3 bottom-left. For protocol P2 (see Fig. 3 bottom-right) we find
⟨ẇ⟩ = ρ_x ( −A_−(ρ̄_y) ρ_y/(ρ_y + ν_−) + A_+ ρ_y²/((ρ_x + ν_+)(ρ_y + ν_y)) )    (13)
where ν_y = τ_y^{−1}. If we assume that ρ_x ≪ ν_+, Eq. (13) is a BCM learning rule.
In summary, both versions of our learning rule (all-to-all or nearest-spike) yield a frequency dependence that is consistent with experimental results under protocol P1 and with the BCM rule tested under protocol P2. We note that our learning rule contains only two terms, i.e., a triplet term (1 pre and 2 post) for potentiation and a post-pre pair term for depression. The dynamics is formulated using five variables (a, b, c, ρ̄_y, w) and five parameters (τ_+, τ_−, τ_y, A_+, α). τ_+ = 16.8 ms and τ_− = 33.7 ms are taken from [14]. A_+ and α are chosen such that the weight changes for Δt = ±10 ms and ρ = 20 Hz fit the experimental data [11].
4 Discussion - Comparison with Other Rules
While we started out developing a general framework, we focused in the end on a simple
model with only five parameters - why, then, this model and not some other combination
of terms? To answer this question we apply our approach to a couple of other models, i.e.,
pair-based models (all-to-all or nearest spike), triplet-based models, and others.
4.1 STDP Models Based on Spike Pairs
Pair-based models with all-to-all interaction [4, 5, 6, 7, 8, 9, 10] yield under Poisson stimulation (protocol P2) a total weight change that is linear in presynaptic and postsynaptic
frequencies. Thus, as a function of postsynaptic frequency we always find a straight line
with a slope that depends on the integral of the STDP function [5, 7]. Thus pair-based
models with all-to-all interaction need to be excluded in view of BCM features of plasticity
[25, 24].
[Figure 4 here: four panels, protocols P1 (left) and P2 (right), for the pair rule with nearest-spike interaction (top) and the Froemke-Dan rule (bottom).]
Figure 4: Pair learning rule in a nearest-spike interaction scheme (top) and Froemke-Dan rule (bottom). For the left column, the higher thick lines correspond to positive timing (Δt > 0) while the lower thin lines to negative timing. Dashed line: Δt = ±2 ms, solid line: Δt = ±10 ms and dot-dashed line Δt = ±30 ms. Right column: dashed line ρ̄_y = 8 Hz, solid line ρ̄_y = 10 Hz and dot-dashed line ρ̄_y = 12 Hz. The parameters of the F-D model are taken from [21]. The dependence upon ρ̄_y has been added to the original F-D rule (A_− ∝ α ρ̄_y²/ρ_0²).
A pair-based model with nearest-spike interaction, however, can give a non-linear dependence upon the postsynaptic frequency under protocol P2 with a fixed threshold between depression and potentiation [12]. We can go beyond the results of [12] by adding a suitable dependence of the parameter A_− upon ρ̄_y, which yields a sliding threshold; cf. Fig. 4 top right.
But even a pair rule restricted to nearest-spike interaction is unable to account for the results
of protocol P1. An important feature of the experimental results with protocol P1 is that
potentiation only occurs above a minimal firing frequency of the postsynaptic neuron (cf.
Fig. 1A) whereas pair-based rules always exhibit potentiation with pre-before-post timing
even in the limit of low frequencies; cf. Fig. 4 top left. The intuitive reason is that at
low frequency the total weight change is proportional to the number of pre-post pairings
and this argument can be directly transformed into a mathematical proof (details omitted).
Thus, pair-based rules of potentiation (all-to-all or nearest spike) cannot account for results
of protocol P1 and must be excluded.
4.2 Comparison with Triplet-Based Learning Rules
The model of Senn et al. [16] can account well for the results under protocol P1. A classification of this rule within our framework reveals that the update algorithm generates pair terms of the form pre-post and post-pre, as well as triplet terms of the form pre-post-post and post-pre-pre. As explained in the previous paragraph, a pair term pre-post generates potentiation even at very low frequencies, which is not realistic. In order to avoid this effect
in their model, Senn et al. included additional threshold values which increased the number
of parameters in their model to 9 [16] while the number of variables is 5 as in our model.
Moreover, the mapping of the model of Senn et al. to the BCM rule is not ideal, since the
sliding threshold is different for each individual synapse [16].
An explicit triplet rule has been proposed by Froemke and Dan [21]. In our framework,
the rule can be classified as a combination of triplet terms for potentiation and depression.
Following the same line or argument as in the preceding sections we can calculate the
total weight change for protocols P1 and P2. The result is shown in Fig. 4 bottom. We
can clearly see that the pairing experiment P 1 yields a behavior opposite to the one found
experimentally and the BCM behavior is not at all reproduced in protocol P2.
4.3 Summary
We consider our model as a minimal model to account for results of protocol P1 and P2, but,
of course, several factors are not captured by the model. First, our model has no dependence
upon the current weight value, but, in principle, this could be included along the lines
of [10]. Second, the model has no explicit dependence upon the membrane potential or
calcium concentration, but the postsynaptic neuron enters only via its firing activity. Third,
and most importantly, there are other experimental paradigms that have to be taken care of.
In a recent series of experiments Bi and colleagues [22] have systematically studied the
effect of symmetric spike triplets (pre-post-pre or post-pre-post) and spike quadruplets
(e.g., pre-post-post-pre) in hippocampal cultures. While the model presented in this paper
is intended to model the synaptic dynamic for L5 pyramidal neurons in the visual cortex
[11], it is possible to consider a similar model for the hippocampus containing two extra
terms (a pair term for potentiation and and triplet term for depression).
References
[1] Markram, H., Lübke, J., Frotscher, M., and Sakmann, B. Science 275, 213–215 (1997).
[2] Zhang, L., Tao, H., Holt, C., Harris, W.A., and Poo, M.-M. Nature 395, 37–44 (1998).
[3] Bi, G. and Poo, M. Ann. Rev. Neurosci. 24, 139–166 (2001).
[4] Gerstner, W., Kempter, R., van Hemmen, J. L., and Wagner, H. Nature 383, 76–78 (1996).
[5] Kempter, R., Gerstner, W., and van Hemmen, J. L. Phys. Rev. E 59, 4498–4514 (1999).
[6] Roberts, P. J. Computational Neuroscience 7, 235–246 (1999).
[7] Song, S., Miller, K., and Abbott, L. Nature Neuroscience 3, 919–926 (2000).
[8] Kistler, W. M. and van Hemmen, J. L. Neural Comput. 12, 385–405 (2000).
[9] Rubin, J., Lee, D. D., and Sompolinsky, H. Physical Review Letters 86, 364–367 (2001).
[10] Gütig, R., Aharonov, R., Rotter, S., and Sompolinsky, H. J. Neuroscience 23, 3697–3714 (2003).
[11] Sjöström, P., Turrigiano, G., and Nelson, S. Neuron 32, 1149–1164 (2001).
[12] Izhikevich, E. and Desai, N. Neural Computation 15, 1511–1523 (2003).
[13] Burkitt, A. N., Meffin, M. H., and Grayden, D. Neural Computation 16, 885–940 (2004).
[14] Bi, G.-Q. Biological Cybernetics, 319–332 (2002).
[15] van Rossum, M. C. W., Bi, G. Q., and Turrigiano, G. G. J. Neuroscience 20, 8812–8821 (2000).
[16] Senn, W., Tsodyks, M., and Markram, H. Neural Computation 13, 35–67 (2001).
[17] Shouval, H. Z., Bear, M. F., and Cooper, L. N. Proc. Natl. Acad. Sci. USA 99, 10831–10836 (2002).
[18] Abarbanel, H., Huerta, R., and Rabinovich, M. Proc. Natl. Academy of Sci. USA 59, 10137–10143 (2002).
[19] Karmarkar, U., Najarian, M., and Buonomano, D. Biol. Cybernetics 87, 373–382 (2002).
[20] Sjöström, P., Turrigiano, G., and Nelson, S. Neuron 39, 641–654 (2003).
[21] Froemke, R. and Dan, Y. Nature 416, 433–438 (2002).
[22] Wang, H. X., Gerkin, R. C., Nauen, D. W., and Bi, G. Q. Nature Neuroscience 8, 187–193 (2005).
[23] Bienenstock, E., Cooper, L., and Munro, P. Journal of Neuroscience 2, 32–48 (1982). Reprinted in Anderson and Rosenfeld, 1990.
[24] Kirkwood, A., Rioult, M. G., and Bear, M. F. Nature 381, 526–528 (1996).
[25] Artola, A. and Singer, W. Trends Neurosci. 16(11), 480–487 (1993).
[26] Toyoizumi, T., Pfister, J.-P., Aihara, K., and Gerstner, W. In Advances in Neural Information Processing Systems 17, Saul, L. K., Weiss, Y., and Bottou, L., editors, 1409–1416. MIT Press, Cambridge, MA (2005).
[27] Fusi, S., Annunziato, M., Badoni, D., Salamon, A., and Amit, D.J. Neural Computation 12, 2227–2258 (2000).
[28] Toyoizumi, T., Pfister, J.-P., Aihara, K., and Gerstner, W. Proc. National Academy Sciences (USA) 102, 5239–5244 (2005).
[29] Volterra, V. Theory of Functionals and of Integral and Integro-Differential Equations. Dover, New York (1930).
presynaptic:5 fy:3 reason:1 index:1 robert:1 negative:2 sakmann:1 calcium:1 unknown:1 perform:1 upper:2 neuron:7 truncated:4 communication:1 arbitrary:1 introduced:2 pair:27 nonlinearly:1 bcm:11 tpre:1 beyond:2 bar:1 including:1 max:1 suitable:1 scheme:2 started:1 review:1 understanding:1 literature:1 relative:2 kempter:2 bear:2 proportional:1 sufficient:3 consistent:2 rubin:1 principle:1 editor:1 systematically:1 course:1 summary:3 last:2 free:1 ostr:2 institute:1 saul:1 face:1 markram:2 wagner:1 van:4 kirkwood:1 calculated:2 cortical:1 tpost:1 functionals:4 sj:2 gxy:1 emphasize:1 reveals:1 assumed:2 triplet:19 why:1 as1:1 nature:6 contributes:1 expansion:9 gerstner:6 bottou:1 froemke:3 protocol:23 neurosci:2 arrival:2 repeated:1 body:1 neuronal:1 fig:12 burkitt:1 hemmen:3 fashion:1 cooper:4 explicit:4 exponential:1 comput:1 candidate:1 third:1 formula:1 specific:1 consist:1 essential:1 adding:1 nauen:1 visual:2 partially:1 ch:2 harris:1 ma:1 goal:1 formulated:1 ann:1 wulfram:2 change:20 included:2 experimentally:1 except:1 called:2 pfister:4 total:8 pas:1 experimental:10 support:1 heaviside:1 karmarkar:1 tested:1 biol:1 |
2,119 | 2,924 | Robust design of biological experiments
Patrick Flaherty
EECS Department
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science and Statistics
University of California
Berkeley, CA 94720
[email protected]
Adam P. Arkin
Bioengineering Department,
LBL, Howard Hughes Medical Institute
University of California
Berkeley, CA 94720
[email protected]
Abstract
We address the problem of robust, computationally-efficient design of biological experiments. Classical optimal experiment design methods have
not been widely adopted in biological practice, in part because the resulting designs can be very brittle if the nominal parameter estimates for the
model are poor, and in part because of computational constraints. We
present a method for robust experiment design based on a semidefinite
programming relaxation. We present an application of this method to the
design of experiments for a complex calcium signal transduction pathway, where we have found that the parameter estimates obtained from
the robust design are better than those obtained from an "optimal" design.
1 Introduction
Statistical machine learning methods are making increasing inroads in the area of biological data analysis, particularly in the context of genome-scale data, where computational
efficiency is paramount. Learning methods are particularly valuable for their ability to
fuse multiple sources of information, aiding the biologist to interpret a phenomenon in its
appropriate cellular, genetic and evolutionary context. At least as important to the biologist, however, is to use the results of data analysis to aid in the design of further experiments. In this paper we take up this challenge: we show how recent developments in
computationally-efficient optimization can be brought to bear on the problem of the design of experiments for complex biological data. We present results for a specific model
of calcium signal transduction in which choices must be made among 17 kinds of RNAi
knockdown experiments.
There are three main objectives for experiment design: parameter estimation, hypothesis
testing and prediction. Our focus in this paper is parameter estimation, specifically in the
setting of nonlinear kinetic models [1]. Suppose in particular that we have a nonlinear
model y = f(x, θ) + ε, ε ∼ N(0, σ²), where x ∈ X represents the controllable conditions of the experiment (such as dose or temperature), y is the experimental measurement and θ ∈ R^p is the set of parameters to be estimated. We consider a finite menu of available experiments X = {x_1, …, x_m}. Our objective is to select the best set of N experiments
(with repeats) from the menu. Relaxing the problem to a continuous representation, we
solve for a distribution over the design points and then multiply the weights by N at the
end [2]. The experiment design is thus
ξ = ( x_1, …, x_m ; w_1, …, w_m ),   Σ_{i=1}^m w_i = 1,  w_i ≥ 0  ∀i,    (1)
and it is our goal to select values of w_i that satisfy an experimental design criterion.
2 Background
We adopt a standard least-squares framework for parameter estimation. In the nonlinear
setting this is done by making a Taylor series expansion of the model about an estimate
θ_0 [3]:
f(x, θ) ≈ f(x, θ_0) + V(θ − θ_0),    (2)
where V is the Jacobian matrix of the model; the i-th row of V is v_i^T = ∂f(x_i, θ)/∂θ |_{θ_0}.
The least-squares estimate of θ is θ̂ = θ_0 + (V^T W V)^{−1} V^T W (y − f(x, θ_0)), where W = diag(w). The covariance matrix for the parameter estimate is cov(θ̂ | ξ) = σ² (V^T W V)^{−1}, which is the inverse of the observed Fisher information matrix.
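As a generic illustration (not code from the paper), the linearized estimate and its covariance follow directly from a Jacobian routine:

```python
import numpy as np

def linearized_ls(theta0, f, jac, x, y, w, sigma2=1.0):
    """One linearized least-squares step around theta0 plus the covariance.
    f(x, theta): model predictions; jac(x, theta): Jacobian V (n x p);
    w: design weights on the experiments."""
    V = jac(x, theta0)                  # rows v_i^T = df(x_i, theta)/dtheta
    W = np.diag(w)
    M = V.T @ W @ V                     # observed Fisher information / sigma^2
    theta_hat = theta0 + np.linalg.solve(M, V.T @ W @ (y - f(x, theta0)))
    cov = sigma2 * np.linalg.inv(M)
    return theta_hat, cov
```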
The aim of optimal experiment design methods is to minimize the covariance matrix of the
parameter estimate [4, 5, 6]. There are two well-known difficulties that must be surmounted
in the case of nonlinear models [6]:
• The optimal design depends on an evaluation of the derivative of the model with
respect to the parameters at a particular parameter estimate. Given that our goal is
parameter estimation, this involves a certain circularity.
• Simple optimal design procedures tend to concentrate experimental weight on
only a few design points [7]. Such designs are overly optimistic about the appropriateness of the model, and provide little information about possible lack of
fit over a wider experimental range.
There have been three main responses to these problems: sequential experiment design [7],
Bayesian methods [8], and maximin approaches [9].
In the sequential approach, a working parameter estimate is first used to construct a tentative experiment design. Data are collected under that design and the parameter estimate is
updated. The procedure is iterated in stages. While heuristically reasonable, this approach
is often inapplicable in practice because of costs associated with experiment set-up time.
In the Bayesian approach exemplified by [8], a proper prior distribution is constructed for
the parameters to be estimated. The objective function is the KL divergence between the
prior distribution and the expected posterior distribution; this KL divergence is maximized
(thereby maximizing the amount of expected information in the experiment design). Sensitivity to priors is a serious concern, however, particularly in the biological setting in which
it can be quite difficult to choose priors for quantities such as bulk rates for a complex
process.
The maximin approach considers a bounded range for each parameter and finds the optimal
design for the worst case parameters in that range. The major difficulties with this approach
are computational, and its main applications have been to specialized problems [7].
The approach that we present here is closest in spirit to the maximin approach. We view
both of the problems discussed above as arguments for a robust design, one which is insensitive to the linearization point and to model error. We work within the framework of
E-optimal design (see below) and consider perturbations to the rank-one Fisher information
matrix for each design point. An optimization with respect to such perturbations yields a
robust semidefinite program [10, 11, 12].
3 Optimal Experiment Design
The three most common scalar measures of the size of the parameter covariance matrix in
optimal experiment design are:
• D-optimal design: determinant of the covariance matrix.
• A-optimal design: trace of the covariance matrix.
• E-optimal design: maximum eigenvalue of the covariance matrix.
We adopt the E-optimal design criterion, and formulate the design problem as follows:
P_0 :  p_0* = min_w λ_max[ ( Σ_{i=1}^m w_i v_i v_i^T )^{−1} ]   s.t.  Σ_{i=1}^m w_i = 1,  w_i ≥ 0 ∀i,    (3)
where λ_max[M] is the maximum eigenvalue of a matrix M. This problem can be recast as
the following semidefinite program [5]:
P_0 :  p_0* = max_{w,s} s   s.t.  Σ_{i=1}^m w_i v_i v_i^T ⪰ s I_p,   Σ_{i=1}^m w_i = 1,  w_i ≥ 0 ∀i,    (4)
which forms the basis of the robust extension that we develop in the following section.
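Problem (4) can be posed directly in an off-the-shelf SDP modeling layer. The sketch below uses cvxpy, our own choice of tool for illustration; the paper itself reports using SeDuMi (see the Discussion):

```python
import cvxpy as cp
import numpy as np

def e_optimal_design(V):
    """Solve (4): maximize s s.t. sum_i w_i v_i v_i^T >= s*I, w on the simplex.
    V is an (m x p) array whose rows are the Jacobian rows v_i^T."""
    m, p = V.shape
    w = cp.Variable(m, nonneg=True)
    s = cp.Variable()
    F = sum(w[i] * np.outer(V[i], V[i]) for i in range(m))
    prob = cp.Problem(cp.Maximize(s),
                      [F - s * np.eye(p) >> 0, cp.sum(w) == 1])
    prob.solve()
    return w.value, s.value
```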
4 Robust Experiment Design
The uncertain parameters appear in the experiment design optimization problem through
the Jacobian matrix, V . We consider additive unstructured perturbations on the Jacobian
or "data" in this problem. The uncertain observed Fisher information matrix is F(w, Δ) = Σ_{i=1}^m w_i (v_i v_i^T − Δ_i), where Δ_i is a p × p matrix for i = 1, …, m. We consider a spectral norm bound on the magnitude of the perturbations such that ‖blkdiag(Δ_1, …, Δ_m)‖ ≤ ρ. Incorporating the perturbations, the E-optimal experiment design problem with uncertainty based on (4) can be cast as the following minimax problem:
P_ρ :  p_ρ* = min_{w,s} max_{‖Δ‖≤ρ} −s   subject to   Σ_{i=1}^m w_i (v_i v_i^T − Δ_i) ⪰ s I_p,
       Δ = blkdiag(Δ_1, …, Δ_m),   Σ_{i=1}^m w_i = 1,  w_i ≥ 0 ∀i.    (5)
We will call equation (5) an E-robust experiment design.
To implement the program efficiently, we can recast the linear matrix inequality in (5) in a
linear fractional representation:
F(w, s, Δ) = F(w, s) + L Δ R(w) + R(w)^T Δ^T L^T ⪰ 0,    (6)
where
F(w, s) = Σ_{i=1}^m w_i v_i v_i^T − s I_p,   L = −(1/2)(1_m^T ⊗ I_p),   R(w) = (1/2)(w ⊗ I_p),   Δ = blkdiag(Δ_1, …, Δ_m).
Taking Δ_1 = ⋯ = Δ_m, a special case of the S-procedure [11] yields the following semidefinite program:
P_ρ :  p_ρ* = min_{w,s,τ} −s   subject to
[ Σ_{i=1}^m w_i v_i v_i^T − s I_p − (τ m/2) I_p     w^T ⊗ (ρ/2) I_p ]
[ w ⊗ (ρ/2) I_p                                      τ I_{mp}       ]  ⪰ 0,    (7)
Σ_{i=1}^m w_i = 1,  w_i ≥ 0 ∀i.
If ρ = 0 we recover (4). Using the Schur complement, the first constraint in (7) can be further simplified to
Σ_{i=1}^m w_i v_i v_i^T − ρ √m ‖w‖_2 I_p ⪰ s I_p,    (8)
which makes the regularization of the optimization problem (4) explicit. The uncertainty bound, ρ, serves as a weighting parameter for a Tikhonov regularization term.
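Since (8) simply tightens the nominal LMI by the term ρ√m‖w‖₂ I_p, the robust design is a small modification of the previous sketch (again our own illustration; rho is the uncertainty bound):

```python
import cvxpy as cp
import numpy as np

def e_robust_design(V, rho):
    """Solve the E-robust design using the simplified constraint (8)."""
    m, p = V.shape
    w = cp.Variable(m, nonneg=True)
    s = cp.Variable()
    t = cp.Variable()                       # epigraph variable for ||w||_2
    F = sum(w[i] * np.outer(V[i], V[i]) for i in range(m))
    constraints = [cp.norm(w, 2) <= t,
                   F - rho * np.sqrt(m) * t * np.eye(p) - s * np.eye(p) >> 0,
                   cp.sum(w) == 1]
    prob = cp.Problem(cp.Maximize(s), constraints)
    prob.solve()
    return w.value, s.value
```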
5 Results
We demonstrate the robust experiment design on two models of biological systems. The
first model is the Michaelis-Menten model of a simple enzyme reaction system. This
model, derived from mass-action kinetics, is a fundamental building block of many mechanistic models of biological systems. The second example is a model of a complex calcium
signal transduction pathway in macrophage immune cells. In this example we consider
RNAi knockdowns at a variety of ligand doses for the estimation of receptor level parameters.
5.1 Michaelis-Menten Reaction Model
The Michaelis-Menten model is a common approximation to an enzyme-substrate reaction [13]. The basic chemical reaction that leads to this model is
E + S ⇌ C → E + P,
with forward binding rate k_{+1}, unbinding rate k_{−1} and catalytic rate k_{+2}, where E is the enzyme concentration, S is the substrate concentration and P is the product concentration. We employ mass-action kinetics to develop a differential equation model for this reaction system [13]. The velocity of the reaction is defined to be the rate of product formation, V_0 = ∂P/∂t |_{t=0}. The initial velocity of the reaction is
V_0 ≈ θ_1 x / (θ_2 + x),    (9)
where
θ_1 = k_{+2} E_0,   θ_2 = (k_{−1} + k_{+2}) / k_{+1}.    (10)
We have taken the controllable factor, x, in this system to be the initial substrate concentration S_0. The parameter θ_1 is the saturating velocity and θ_2 is the initial substrate concentration at which product is formed at one-half the maximal velocity. In this example the true parameter values are θ_1 = 2 and θ_2 = 2. We consider six initial substrate concentrations as the menu of experiments, X = {1/8, 1, 2, 4, 8, 16}.
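The design programs above need only the Jacobian rows v_i, which for this model are available in closed form: ∂V_0/∂θ_1 = x/(θ_2 + x) and ∂V_0/∂θ_2 = −θ_1 x/(θ_2 + x)². A small sketch (our own code) building V for this menu:

```python
import numpy as np

def mm_jacobian(theta, x_menu):
    """Rows v_i^T: gradient of V0 = theta1*x/(theta2+x) at each design point."""
    t1, t2 = theta
    x = np.asarray(x_menu, dtype=float)
    return np.column_stack([x / (t2 + x),              # dV0/dtheta1
                            -t1 * x / (t2 + x) ** 2])  # dV0/dtheta2

V = mm_jacobian([2.0, 2.0], [0.125, 1, 2, 4, 8, 16])   # input to the SDPs above
```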
Figure 1 shows the robust experiment design weights as a function of the uncertainty parameter with the Jacobian computed at the true parameter values. When ρ is small, the experimental weight is concentrated on only two design points. As ρ → ρ_max the design converges to a uniform distribution over the entire menu of design points. In a sense, this uniform allocation of experimental energy is most robust to parameter uncertainty. Intermediate values of ρ yield an allocation of design points that reflects a tradeoff between robustness and nominal optimality.
[Figure 1 here: E-robust experiment design weight vs. ρ on a log scale (10⁻⁴ to 10⁰), one curve per design point x ∈ {0.125, 1, 2, 4, 8, 16}.]
Figure 1: Michaelis-Menten model experiment design weights as a function of ρ.
For moderate values of ρ we gain significantly in terms of robustness to errors in v_i v_i^T, at a moderate cost to the maximal value of the minimum eigenvalue of the parameter estimate covariance matrix. Figure 2 shows the efficiency of the experiment design as a function of ρ and the prior estimate θ_{0,2} used to compute the Jacobian matrix. The E-efficiency of a design is defined to be
efficiency ≜ λ_max[cov(θ̂ | ξ*_θ)] / λ_max[cov(θ̂ | ξ*_{θ_0})],    (11)
where ξ*_θ denotes the design computed with the Jacobian evaluated at θ. If the Jacobian is computed at the correct point in parameter space the optimal design achieves maximal efficiency. As the distance between θ_0 and θ grows, the efficiency of the optimal design decreases rapidly. If the estimate θ_{0,2} is eight instead of the true value, two, the efficiency of the optimal design at θ_0 is 36% of the optimal design at θ. However, at the cost of a decrease in efficiency for parameter estimates close to the true parameter value, we guarantee the efficiency is better for points further from the true parameters with a robust design. For example, for ρ = 0.001 the robust design is less efficient for the range 0 < θ_{0,2} < 7, but is more efficient for 7 < θ_{0,2} < 16.
5.2 Calcium Signal Transduction Model
When certain small molecule ligands such as the anaphylatoxin C5a are introduced into
the environment of an immune cell a complex chain of chemical reactions leads to the
[Figure 2 here: efficiency vs. the prior estimate θ_{0,2} (0 to 16), one curve per ρ ∈ {0, 10⁻³, 10⁻², 10⁻¹, 10}.]
Figure 2: Efficiency of robust designs as a function of ρ and perturbations in the prior parameter estimate θ_{0,2}.
transduction of the extracellular ligand concentration information and a transient increase
in the intracellular calcium concentration. This chain of reactions can be mathematically
modeled using the principles of mass-action kinetics and nonlinear ordinary differential
equations. We consider specifically the model presented in [14] which was developed for
the P2Y2 receptor, modifying the model for our data on the C5a receptor.
The menu of available experiments is indexed by one of two different cell lines in combination with different ligand doses. The cell lines are: wild-type and a GRK2 knockdown line.
GRK2 is a protein that represses signaling in the G-protein receptor complex. When its
concentration is decreased with interfering RNA the repression of the signal due to GRK2
is reduced. There are 17 experiments on the menu and we choose to do 100 experiments
allocated according to the experiment design. For each experiment we are able to measure the
transient calcium spike peak height using a fluorescent calcium dye. We are concerned
with estimating three C5A receptor parameters: K1 , kp , kdeg which are detailed in [14].
We have selected the initial parameter estimates based on a least-squares fit to a separate
data set of 67 experiments on a wild-type cell line with a ligand concentration of 250nM.
We have estimated, from experimental data, the mean and variance for all of the experiments in our menu. Observations are simulated from these data to obtain the least-squares
parameter estimate for the optimal, robust (ρ = 1.5 × 10⁻⁶) and uniform experiment designs.
Figure 3 shows the model fits with associated 95% confidence bands for the wild-type and
knockdown cell lines for the parameter estimates from the three experiment designs. A
separate validation data set is generated uniformly across the design menu. Compared to
the optimal design, the parameter estimates based on the robust design provide a better
fit across the whole dose range for both cell types as measured by mean-squared residual
error.
Note also that the measured response at high ligand concentration is better fit with parameters estimated from the robust design. Near 1 μM of C5a concentration the peak height is
predicted to decrease slightly in the wild-type cell line, but plateaus for the GRK2 knockdown cell line. This matches the biochemical understanding that GRK2 acts as a repressor
of signaling.
[Figure 3 here: six panels (Optimal / Robust / Uniform Design × WT / GRK), Peak Height (μM) vs. [C5A] (μM) on a log scale.]
Figure 3: Model predictions based on the least-squares parameter estimate using data observed from the optimal, robust and uniform designs. The predicted peak height curve (black line) based on the robust design data is shifted to the left compared to the peak height curve based on the optimal design data and matches the validation sample (shown as blue dots) more accurately.
6 Discussion
The methodology of optimal experiment design leads to efficient algorithms for the
construction of designs in general nonlinear situations [15]. However, these varianceminimizing designs fail to account for uncertainty in the nominal parameter estimate and
the model. We present a methodology, based on recent advances in semidefinite programming, that retains the advantages of the general purpose algorithm while explicitly incorporating uncertainty.
We demonstrated this robust experiment design method on two example systems. In the
Michaelis-Menten model, we showed that the E-optimal design is recovered for ρ = 0 and the uniform design is recovered as ρ → ρ_max. It was also shown that the robust design is
more efficient than the optimal for large perturbations of the nominal parameter estimate
away from the true parameter.
The second example, of a calcium signal transduction model, is a more realistic case of
the need for experiment design in high-throughput biological research. The model captures some of the important kinetics of the system, but is far from complete. We require
a reasonably accurate model to make further predictions about the system and drive a set
of experiments to estimate critical parameters of the model more accurately. The resulting
robust design spreads some experiments across the menu, but also concentrates on experiments that will help minimize the variance of the parameter estimates.
These robust experiment designs were obtained using SeDuMi 1.05 [16]. The design for the
calcium signal transduction model takes approximately one second on a 2GHz processor,
which is less time than required to compute the Jacobian matrix for the model.
Research in machine learning has led to significant advances in computationally-efficient
data analysis methods, allowing increasingly complex models to be fit to biological data.
Challenges in experimental design are the flip side of this coin?for complex models to
be useful in closing the loop in biological research it is essential to begin to focus on the
development of computationally-efficient experimental design methods.
Acknowledgments
We would like to thank Andy Packard for helpful discussions. We would also like to thank
Robert Rebres and William Seaman for the data used in the second example. PF and APA
would like to acknowledge support from the Howard Hughes Medical Institute and from
the Alliance for Cellular Signaling through the NIH Grant Number 5U54 GM62114-05.
MIJ would like to thank NIH R33 HG003070 for funding.
References
[1] I. Ford, D.M. Titterington, and C.P. Kitsos. Recent advances in nonlinear experiment design. Technometrics, 31(1):49–60, 1989.
[2] L. Vandenberghe, S. Boyd, and S.-P. Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19(2):499–533, 1998.
[3] G.A.F. Seber and C.J. Wild. Nonlinear Regression. Wiley-Interscience, Hoboken, NJ, 2003.
[4] A.C. Atkinson and A.N. Donev. Optimum Experimental Designs. Oxford University Press, 1992.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2003.
[6] G.E.P. Box, W.G. Hunter, and J.S. Hunter. Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. John Wiley and Sons, New York, 1978.
[7] S.D. Silvey. Optimal Design. Chapman and Hall, London, 1980.
[8] D.V. Lindley. On the measure of information provided by an experiment. The Annals of Mathematical Statistics, 27(4):986–1005, 1956.
[9] L. Pronzato and E. Walter. Robust experiment design via maximin optimization. Mathematical Biosciences, 89:161–176, 1988.
[10] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
[11] L. El Ghaoui, L. Oustry, and H. Lebret. Robust solutions to uncertain semidefinite programs. SIAM J. Optimization, 9(1):33–52, 1998.
[12] L. El Ghaoui and H. Lebret. Robust solutions to least squares problems with uncertain data. SIAM J. Matrix Anal. Appl., 18(4):1035–1064, 1997.
[13] L.A. Segel and M. Slemrod. The quasi-steady state assumption: A case study in perturbation. SIAM Review, 31(3):446–477, 1989.
[14] G. Lemon, W.G. Gibson, and M.R. Bennett. Metabotropic receptor activation, desensitization and sequestration: modelling calcium and inositol 1,4,5-trisphosphate dynamics following receptor activation. Journal of Theoretical Biology, 223(1):93–111, 2003.
[15] A.C. Atkinson. The usefulness of optimum experiment designs. JRSS B, 58(1):59–76, 1996.
[16] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625–653, 1999.
2,120 | 2,925 | Estimating the "wrong" Markov random field:
Benefits in the computation-limited setting
Martin J. Wainwright
Department of Statistics, and
Department of Electrical Engineering and Computer Science
UC Berkeley, Berkeley CA 94720
wainwrig@{stat,eecs}.berkeley.edu
Abstract
Consider the problem of joint parameter estimation and prediction in a Markov
random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g.,
smoothing, denoising, interpolation) on a new noisy observation. Working in the
computation-limited setting, we analyze a joint method in which the same convex
variational relaxation is used to construct an M-estimator for fitting parameters,
and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent
parameter estimator (i.e., an estimator that returns the "wrong" model even in
the infinite data limit) is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique.
En route to this result, we analyze the asymptotic properties of M-estimators
based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of variational methods. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially
outperforms a commonly used heuristic based on ordinary sum-product.¹
Keywords: Markov random fields; variational method; message-passing algorithms; sum-product;
belief propagation; parameter estimation; learning.
1 Introduction
Consider the problem of joint learning (parameter estimation) and prediction in a Markov
random field (MRF): in the learning phase, an initial collection of data is used to estimate parameters, and the fitted model is then used to perform prediction (e.g., smoothing,
interpolation, denoising) on a new noisy observation. Disregarding computational cost,
there exist optimal methods for solving this problem (Route A in Figure 1). For general
MRFs, however, optimal methods are computationally intractable; consequently, many researchers have examined various types of message-passing methods for learning and prediction problems, including belief propagation [3, 6, 7, 14], expectation propagation [5],
linear response [4], as well as reweighted message-passing algorithms [10, 13]. Accordingly, it is of considerable interest to understand and quantify the performance loss incurred
by using computationally tractable methods versus exact methods (i.e., Route B versus A in Figure 1).
¹ Work partially supported by Intel Corporation Equipment Grant 22978, an Alfred P. Sloan Foundation Fellowship, and NSF Grant DMS-0528488.
[Figure 1 here: block diagram contrasting Route A (data source → optimal parameter estimation θ* → optimal prediction on new observations y) with Route B (approximate parameter estimation θ̂ → approximate prediction); the prediction errors of the two routes are compared.]
Figure 1. Route A: computationally intractable combination of parameter estimation and prediction. Route B: computationally efficient combination of approximate parameter estimation and prediction.
It is now well known that many message-passing algorithms, including mean field, (generalized) belief propagation, expectation propagation and various convex relaxations, can be understood from a variational perspective; in particular, all of these message-passing algorithms are iterative methods solving relaxed forms of an exact variational principle [12]. This paper focuses on the analysis of variational methods based on convex relaxations, which includes a broad range of extant algorithms, among them the tree-reweighted sum-product algorithm [11], reweighted forms of generalized belief propagation [13], and semidefinite relaxations [12]. Moreover, it is straightforward to modify other message-passing methods (e.g., expectation propagation [5]) so as to "convexify" them. At a high level, the key idea
of this paper is the following: given that approximate methods can lead to errors at both
the estimation and prediction phases, it is natural to speculate that these sources of error
might be arranged to partially cancel one another. Our theoretical analysis confirms this
intuition: we show that with respect to end-to-end performance, it is in fact beneficial, even
in the infinite data limit, to learn the ?wrong? the model by using an inconsistent parameter
estimator.
More specifically, we show how any convex variational method can be used to define a surrogate likelihood function. We then investigate the asymptotic properties of parameter estimators based on maximizing such surrogate likelihoods, and establish that they are asymptotically normal but inconsistent in general. We then prove that any variational method that is based on a strongly concave entropy approximation is globally Lipschitz stable. Finally, focusing on prediction for a coupled mixture of Gaussians, we prove upper bounds on the increase in MSE of our computationally efficient method, relative to the unachievable Bayes optimum. We provide experimental results using the tree-reweighted (TRW) sum-product algorithm that confirm the stability of our methods, and demonstrate its superior performance relative to a heuristic method based on standard sum-product.
2 Background
We begin with necessary notation and background on multinomial Markov random fields, as well as variational representations and methods.
Markov random fields: Given an undirected graph G = (V, E) with N = |V| vertices, we associate to each vertex s \in V a discrete random variable X_s, taking values in \mathcal{X}_s = \{0, 1, \ldots, m-1\}. We assume that the vector X = \{X_s \mid s \in V\} has a distribution that is Markov with respect to the graph G, so that its distribution can be represented in the form

p(x; \theta) = \exp\Big\{ \sum_{s \in V} \theta_s(x_s) + \sum_{(s,t) \in E} \theta_{st}(x_s, x_t) - A(\theta) \Big\}   (1)

Here A(\theta) := \log \sum_{x \in \mathcal{X}^N} \exp\big\{ \sum_{s \in V} \theta_s(x_s) + \sum_{(s,t) \in E} \theta_{st}(x_s, x_t) \big\} is the cumulant generating function that normalizes the distribution, and \theta_s(\cdot) and \theta_{st}(\cdot, \cdot) are potential functions. In particular, we make use of the parameterization \theta_s(x_s) := \sum_{j \in \mathcal{X}_s} \theta_{s;j} I_j[x_s], where I_j[x_s] is an indicator function for the event \{x_s = j\}; the quantity \theta_{st} is defined analogously. Overall, the family of MRFs (1) is an exponential family with canonical parameter \theta \in \mathbb{R}^d. Note that the elements of the canonical parameters are associated with vertices \{\theta_{s;j}, s \in V, j \in \mathcal{X}_s\} and edges \{\theta_{st;jk}, (s,t) \in E, (j,k) \in \mathcal{X}_s \times \mathcal{X}_t\} of the underlying graph.
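As an illustrative aside (not part of the original paper), a minimal Python sketch of this setup is given below; it evaluates A(\theta) and the node marginals by exhaustive enumeration, which is feasible only for toy graphs, and all function and variable names are our own.

import itertools
import numpy as np

def log_potential(x, theta_s, theta_st, edges):
    # Unnormalized log-probability of a configuration x.
    val = sum(theta_s[s][x[s]] for s in range(len(x)))
    val += sum(theta_st[e][x[s], x[t]] for e, (s, t) in enumerate(edges))
    return val

def cumulant(theta_s, theta_st, edges, m=2):
    # A(theta) = log sum over all configurations x of exp(log_potential(x)).
    n = len(theta_s)
    scores = [log_potential(x, theta_s, theta_st, edges)
              for x in itertools.product(range(m), repeat=n)]
    return np.logaddexp.reduce(scores)

def node_marginals(theta_s, theta_st, edges, m=2):
    # mu_s(j) = P(X_s = j) under p(x; theta), by exhaustive enumeration.
    n = len(theta_s)
    A = cumulant(theta_s, theta_st, edges, m)
    mu = np.zeros((n, m))
    for x in itertools.product(range(m), repeat=n):
        p = np.exp(log_potential(x, theta_s, theta_st, edges) - A)
        for s in range(n):
            mu[s, x[s]] += p
    return mu

# Example: a single-edge graph on two binary variables.
edges = [(0, 1)]
theta_s = [np.zeros(2), np.zeros(2)]
theta_st = [np.array([[0.5, 0.0], [0.0, 0.5]])]
print(cumulant(theta_s, theta_st, edges))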
Variational representation: We now describe how the cumulant generating function can be represented as the solution of an optimization problem. The constraint set is given by MARG(G; \phi) := \{ \mu \in \mathbb{R}^d \mid \mu = \sum_{x \in \mathcal{X}^N} p(x) \phi(x) \text{ for some } p(\cdot) \}, consisting of all globally realizable singleton \mu_s(\cdot) and pairwise \mu_{st}(\cdot, \cdot) marginal distributions on the graph G. For any \mu \in MARG(G; \phi), we define A^*(\mu) = -\max_p H(p), where the maximum is taken over all distributions that have mean parameters \mu. With these definitions, it can be shown [12] that A has the variational representation

A(\theta) = \max_{\mu \in \mathrm{MARG}(G; \phi)} \big\{ \theta^T \mu - A^*(\mu) \big\}.   (2)
3 From convex surrogates to joint estimation/prediction
In general, solving the variational problem (2) is intractable for two reasons: (i) the constraint set MARG(G; \phi) is extremely difficult to characterize; and (ii) the dual function A^* lacks a closed-form representation. These challenges motivate approximations to A^* and MARG(G; \phi); the resulting relaxed optimization problem defines a convex surrogate to the cumulant generating function.
Convex surrogates: Let REL(G; \phi) be a compact and convex outer bound to the marginal polytope MARG(G; \phi), and let B^* be a strictly convex and twice continuously differentiable approximation to the dual function A^*. We use these approximations to define a convex surrogate B via the relaxed optimization problem

B(\theta) := \max_{\tau \in \mathrm{REL}(G; \phi)} \big\{ \theta^T \tau - B^*(\tau) \big\}.   (3)
The function B so defined has several desirable properties. First, since B is defined by the maximum of a collection of functions linear in \theta, it is convex [1]. Moreover, by the strict convexity of B^* and compactness of REL(G; \phi), the optimum is uniquely attained at some \tau(\theta). Finally, an application of Danskin's theorem [1] yields that B is differentiable, and that \nabla B(\theta) = \tau(\theta). Since \tau(\theta) has a natural interpretation as a pseudomarginal, this last property of B is analogous to the well-known cumulant generating property of A, namely \nabla A(\theta) = \mu(\theta).
One example of such a convex surrogate is the tree-reweighted Bethe free energy considered in our previous work [11]. For this surrogate, the relaxed constraint set REL(G; \phi) takes the form LOCAL(G; \phi) := \{ \tau \in \mathbb{R}^d_+ \mid \sum_{x_s} \tau_s(x_s) = 1, \; \sum_{x_t} \tau_{st}(x_s, x_t) = \tau_s(x_s) \}, whereas the entropy approximation B^* is of the "convexified" Bethe form

-B^*(\tau) = \sum_{s \in V} H_s(\tau_s) - \sum_{(s,t) \in E} \rho_{st} I_{st}(\tau_{st}).   (4)

Here H_s and I_{st} are the singleton entropy and edge-based mutual information, respectively, and the weights \rho_{st} are derived from the graph structure so as to ensure convexity (see [11] for more details). Analogous convex variational formulations underlie the reweighted generalized BP algorithm [13], as well as a log-determinant relaxation [12].
Approximate parameter estimation using surrogate likelihoods: Consider the problem of estimating the parameter \theta using i.i.d. samples \{x^1, \ldots, x^n\}. For an MRF of the form (1), the maximum likelihood estimate (MLE) is specified using the vector \hat{\mu} of empirical marginal distributions (singleton \hat{\mu}_s and pairwise \hat{\mu}_{st}). Since the likelihood is intractable to optimize (due to the cumulant generating function A), it is natural to use the convex surrogate B to define an alternative estimator obtained by maximizing the regularized surrogate likelihood:

\hat{\theta}^n := \arg\max_{\theta \in \mathbb{R}^d} \big\{ L_B(\theta; \hat{\mu}) - \lambda_n R(\theta) \big\} = \arg\max_{\theta \in \mathbb{R}^d} \big\{ \theta^T \hat{\mu} - B(\theta) - \lambda_n R(\theta) \big\}.   (5)

Here R : \mathbb{R}^d \to \mathbb{R}_+ is a regularization function (e.g., R(\theta) = \|\theta\|^2), whereas \lambda_n > 0 is a regularization coefficient. For the tree-reweighted Bethe surrogate, we have shown in previous work [10] that in the absence of regularization, the optimal parameter estimates \hat{\theta}^n have a very simple closed-form solution, specified in terms of the weights \rho_{st} and the empirical marginals \hat{\mu}. If a regularizing term is added, these estimates no longer have a closed-form solution, but the optimization problem (5) can still be solved efficiently by message-passing methods.
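As a sketch of what such an optimization might look like (our own illustration, not the authors' code), the following Python fragment performs gradient ascent on the regularized surrogate likelihood, assuming a black-box routine pseudomarginals(theta) that solves the relaxed problem (3), and using the identity \nabla B(\theta) = \tau(\theta):

import numpy as np

def fit_surrogate_mle(mu_hat, pseudomarginals, lam=0.01, step=0.1, iters=500):
    # Maximize theta^T mu_hat - B(theta) - lam * ||theta||^2 by gradient
    # ascent. Since grad B(theta) = tau(theta), the gradient of the
    # objective is mu_hat - tau(theta) - 2 * lam * theta.
    theta = np.zeros_like(mu_hat)
    for _ in range(iters):
        tau = pseudomarginals(theta)  # e.g., a TRW sum-product fixed point
        grad = mu_hat - tau - 2.0 * lam * theta
        theta = theta + step * grad
    return theta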
Joint estimation/prediction: Using such an estimator, we now consider the joint approach to estimation and prediction illustrated in Figure 2. Using an initial set of i.i.d. samples, we first use the surrogate likelihood (5) to construct a parameter estimate \hat{\theta}^n. Given a new noisy or incomplete observation y, we wish to perform near-optimal prediction or data fusion using the fitted model (e.g., for smoothing or interpolation of a noisy image). In order to do so, we first incorporate the new observation into the model, and then use the message-passing algorithm associated with the convex surrogate B in order to compute approximate pseudomarginals \tau. These pseudomarginals can then be used to construct a prediction \hat{z}(y; \tau), where the specifics of the prediction depend on the observation model. We provide a concrete illustration in Section 5 using a mixture-of-Gaussians observation model.
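A minimal sketch of the observation-incorporation step (our own illustration; the array shapes are assumptions):

def incorporate_observation(theta_hat, log_lik):
    # Fold a new observation y into the fitted model (equation (6)):
    # theta_tilde_s(x_s; y_s) = theta_hat_s(x_s) + log p(y_s | x_s).
    # theta_hat: node potentials, NumPy array of shape (n_nodes, m);
    # log_lik:   log p(y_s | x_s = j), NumPy array of shape (n_nodes, m).
    return theta_hat + log_lik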
4 Analysis
Asymptotics of estimator: We begin by considering the asymptotic behavior of the parameter estimator \hat{\theta}^n defined by the surrogate likelihood (5). Since this parameter estimator is a particular type of M-estimator, the following result follows from standard techniques [8]:
Proposition 1. For a general graph with cycles, \hat{\theta}^n converges in probability to some fixed \hat{\theta} \neq \theta^*; moreover, \sqrt{n}[\hat{\theta}^n - \hat{\theta}] is asymptotically normal.
A key property of the estimator is its inconsistency, i.e., the estimated model \hat{\theta} differs from the true model \theta^* even in the limit of large data. Despite this inconsistency, we will see that \hat{\theta}^n is useful for performing prediction.
Algorithmic stability: A desirable property of any algorithm, particularly one applied to statistical data, is that it exhibits an appropriate form of stability with respect to its inputs. Not all message-passing algorithms have such stability properties. For instance, the standard BP algorithm, although stable for relatively weakly coupled MRFs [3, 6], can be highly unstable due to phase transitions. Previous experimental work has shown that methods based on convex relaxations, including reweighted belief propagation [10], reweighted generalized BP [13], and log-determinant relaxations [12], appear to be very stable. Here we provide theoretical support for these empirical observations: in particular, we prove that, in sharp contrast to non-convex methods, any variational method based on a strongly concave entropy approximation is globally stable.

Generic algorithm for joint parameter estimation and prediction:
1. Estimate parameters \hat{\theta}^n from initial data x^1, \ldots, x^n by maximizing the surrogate likelihood L_B.
2. Given a new set of observations y, incorporate them into the model:

\tilde{\theta}_s(\,\cdot\,; y_s) = \hat{\theta}^n_s(\,\cdot\,) + \log p(y_s \mid \,\cdot\,).   (6)

3. Compute approximate marginals \tau by using the message-passing algorithm associated with the convex surrogate B. Use the approximate marginals to construct a prediction \hat{z}(y; \tau) of z based on the observation y and pseudomarginals \tau.
Figure 2. Algorithm for joint parameter estimation and prediction. Both the learning and prediction steps are approximate, but the key is that they are both based on the same underlying convex surrogate B. Such a construction yields a provably beneficial cancellation of the two sources of error (learning and prediction).
A function f : \mathbb{R}^n \to \mathbb{R} is strongly convex if there exists a constant c > 0 such that f(y) \geq f(x) + \nabla f(x)^T (y - x) + \frac{c}{2} \|y - x\|^2 for all x, y \in \mathbb{R}^n. For a twice continuously differentiable function, this condition is equivalent to the eigenspectrum of the Hessian \nabla^2 f(x) being uniformly bounded away from zero by c. With this definition, we have:
Proposition 2. Consider any variational method based on a strongly concave entropy approximation -B^*; moreover, for any parameter \theta \in \mathbb{R}^d, let \tau(\theta) denote the associated set of pseudomarginals. If the optimum is attained in the interior of the constraint set, then there exists a constant R < +\infty such that

\|\tau(\theta + \Delta) - \tau(\theta)\| \leq R \|\Delta\|   for all \theta, \Delta \in \mathbb{R}^d.

Proof. By our construction of the convex surrogate B, we have \tau(\theta) = \nabla B(\theta), so that the statement is equivalent to the assertion that the gradient \nabla B is a Lipschitz function. Applying the mean value theorem to \nabla B, we can write \nabla B(\theta + \Delta) - \nabla B(\theta) = \nabla^2 B(\theta + t\Delta)\,\Delta, where t \in [0, 1]. Consequently, in order to establish the Lipschitz condition, it suffices to show that the spectral norm of \nabla^2 B(\theta) is uniformly bounded above over all \theta \in \mathbb{R}^d. Differentiating the relation \nabla B(\theta) = \tau(\theta) yields \nabla^2 B(\theta) = \nabla \tau(\theta). Now standard sensitivity analysis results [1] yield that \nabla \tau(\theta) = [\nabla^2 B^*(\tau(\theta))]^{-1}. Finally, our assumption of strong convexity of B^* yields that the spectral norm of \nabla^2 B^*(\tau) is uniformly bounded away from zero, which yields the claim.
Many existing entropy approximations, including the convexified Bethe entropy (4), can be shown to be strongly concave [9].
5 Bounds on performance loss
We now turn to theoretical analysis of the joint method for parameter estimation and prediction illustrated in Figure 2. Note that given our setting of limited computation, the Bayes optimum is unattainable for two reasons: (a) it has knowledge of the exact parameter value \theta^*; and (b) the prediction step (7) involves computing exact marginal probabilities \mu. Therefore, our ultimate goal is to bound the performance loss of our method relative to the unachievable Bayes optimum. So as to obtain a concrete result, we focus on the special case of joint learning/prediction for a mixture of Gaussians; however, the ideas and techniques described here are more generally applicable.
Prediction for mixture of Gaussians: Suppose that the discrete random vector is a label vector for the components in a finite mixture of Gaussians: i.e., for each s \in V, the random variable Z_s is specified by p(Z_s = z_s \mid X_s = j; \theta^*) \sim N(\mu_j, \sigma_j^2), for j \in \{0, 1, \ldots, m-1\}. Such models are widely used in statistical signal and image processing [2]. Suppose that we observe a noise-corrupted version of Z_s, namely Y_s = \alpha Z_s + \sqrt{1 - \alpha^2}\, W_s, where W_s \sim N(0, 1) is additive Gaussian noise, and the parameter \alpha \in [0, 1] specifies the signal-to-noise ratio (SNR) of the observation model. (Here \alpha = 0 corresponds to pure noise, whereas \alpha = 1 corresponds to completely uncorrupted observations.)
With this set-up, it is straightforward to show that the optimal Bayes least squares estimator (BLSE) of Z takes the form

\hat{z}_s(y; \mu) := \sum_{j=0}^{m-1} \mu_s(j; \theta^*) \big[ \gamma_j(\alpha)(y_s - \alpha \mu_j) + \mu_j \big],   (7)

where \mu_s(j; \theta^*) is the exact marginal of the distribution p(y \mid x)\, p(x; \theta^*), and \gamma_j(\alpha) := \frac{\alpha \sigma_j^2}{\alpha^2 \sigma_j^2 + (1 - \alpha^2)} is the usual BLSE weighting for a Gaussian with variance \sigma_j^2. For this set-up, the approximate predictor \hat{z}_s(y; \tau) defined by our joint procedure in Figure 2 corresponds to replacing the exact marginals \mu with the pseudomarginals \tau_s(j; \tilde{\theta}) obtained by solving the variational problem with \tilde{\theta}.
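The following Python sketch (ours, not the authors') evaluates this predictor given a matrix of (pseudo)marginals; the centering of each component term at \alpha\mu_j follows the Gaussian observation model as reconstructed above:

import numpy as np

def blse_predict(y, marginals, mu, sigma2, alpha):
    # Bayes least-squares estimate of z_s from y_s, mixing per-component
    # Gaussian estimates with the (pseudo)marginals tau_s(j).
    # y: observations, shape (n_nodes,); marginals: shape (n_nodes, m);
    # mu, sigma2: length-m component means and variances.
    gamma = alpha * sigma2 / (alpha**2 * sigma2 + (1.0 - alpha**2))
    # Per-component posterior mean of z_s given y_s and x_s = j:
    comp = gamma[None, :] * (y[:, None] - alpha * mu[None, :]) + mu[None, :]
    return np.sum(marginals * comp, axis=1)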
Bounds on performance loss: We now turn to a comparison of the mean-squared error (MSE) of the Bayes optimal predictor \hat{z}(Y; \mu) to the MSE of the surrogate-based predictor \hat{z}(Y; \tau). More specifically, we provide an upper bound on the increase in MSE, where the bound is specified in terms of the coupling strength and the SNR parameter \alpha. Although results of this nature can be derived more generally, for simplicity we focus on the case of two mixture components (m = 2), and consider the asymptotic setting, in which the number of data samples n \to +\infty, so that the law of large numbers [8] ensures that the empirical marginals \hat{\mu}^n converge to the exact marginal distributions \mu^*. Consequently, the MLE converges to the true parameter value \theta^*, whereas Proposition 1 guarantees that our approximate parameter estimate \hat{\theta}^n converges to the fixed quantity \hat{\theta}. By construction, we have the relations \nabla B(\hat{\theta}) = \mu^* = \nabla A(\theta^*).
An important factor in our bound is the quantity

L(\theta^*; \hat{\theta}) := \sup_{\Delta \in \mathbb{R}^d} \lambda_{\max}\big( \nabla^2 A(\theta^* + \Delta) - \nabla^2 B(\hat{\theta} + \Delta) \big),   (8)

where \lambda_{\max} denotes the maximal singular value. Following the argument in the proof of Proposition 2, it can be seen that L(\theta^*; \hat{\theta}) is finite. Two additional quantities that play a role in our bound are the differences

\Delta\gamma(\alpha) := \gamma_1(\alpha) - \gamma_0(\alpha),   and   \Delta\nu(\alpha) := [1 - \gamma_1(\alpha)]\,\mu_1 - [1 - \gamma_0(\alpha)]\,\mu_0,

where \mu_0, \mu_1 are the means of the two Gaussian components. Finally, we define \Phi(Y; \alpha) \in \mathbb{R}^d with components \log \frac{p(Y_s \mid X_s = 1)}{p(Y_s \mid X_s = 0)} for s \in V, and zeroes otherwise. With this notation, we state the following result (see the technical report [9] for the proof):
Theorem 1. Let MSE(\tau) and MSE(\mu) denote the mean-squared prediction errors of the surrogate-based predictor \hat{z}(y; \tau) and the Bayes-optimal estimate \hat{z}(y; \mu), respectively. The MSE increase I(\alpha) := \frac{1}{N}\big[\mathrm{MSE}(\tau) - \mathrm{MSE}(\mu)\big] is upper bounded by

I(\alpha) \leq \mathbb{E}\Bigg\{ \big[\psi(\alpha)\,\Delta\gamma(\alpha)\big]^2 \sqrt{\frac{\sum_s Y_s^4}{N}} + \psi^2(\alpha)\,\Delta\nu^2(\alpha) + 2\,\psi^2(\alpha)\,|\Delta\gamma(\alpha)|\,|\Delta\nu(\alpha)| \sqrt{\frac{\sum_s Y_s^2}{N}} \Bigg\},

where \psi(\alpha) := \min\{1, L(\theta^*; \hat{\theta})\,\|\Phi(Y; \alpha)\| / N\}.
Figure 3. Surface plots of the percentage increase in MSE relative to the Bayes optimum for different methods, as a function of observation SNR and coupling strength. Top row: Gaussian mixture with components (\mu_0, \sigma_0^2) = (-1, 0.5) and (\mu_1, \sigma_1^2) = (1, 0.5). Bottom row: Gaussian mixture with components (\mu_0, \sigma_0^2) = (0, 1) and (\mu_1, \sigma_1^2) = (0, 9). Left column: independence model (IND). Center column: ordinary belief propagation (BP). Right column: tree-reweighted algorithm (TRW). Panels (a)-(f) correspond to the six method/mixture combinations.
It can be seen that I(\alpha) \to 0 as \alpha \to 0^+ and as \alpha \to 1^-, so that the surrogate-based method is asymptotically optimal for both low and high SNR. The behavior of the bound in the intermediate regime is controlled by the balance between these two terms.
Experimental results: In order to test our joint estimation/prediction procedure, we have applied it to coupled Gaussian mixture models on different graphs, coupling strengths, observation SNRs, and mixture distributions. Although our methods are more generally applicable, here we show representative results for m = 2 components, and two different mixture types. The first ensemble, constructed with mean and variance components (\mu_0, \sigma_0^2) = (0, 1) and (\mu_1, \sigma_1^2) = (0, 9), mimics heavy-tailed behavior. The second ensemble is bimodal, with components (\mu_0, \sigma_0^2) = (-1, 0.5) and (\mu_1, \sigma_1^2) = (1, 0.5). In both cases, each mixture component is equally weighted. Here we show results for a 2-D grid with N = 64 nodes. Since the mixture variables have m = 2 states, the coupling distribution can be written as p(x; \theta) \propto \exp\big\{ \sum_{s \in V} \theta_s x_s + \sum_{(s,t) \in E} \theta_{st} x_s x_t \big\}, where x \in \{-1, +1\}^N are spin variables indexing the mixture components. In all trials, we chose \theta_s = 0 for all nodes s \in V, which ensures uniform marginal distributions p(x_s; \theta) at each node. For each coupling strength \beta \in [0, 1], we chose edge parameters as \theta_{st} \sim U[0, \beta], and we varied the SNR parameter \alpha controlling the observation model in [0, 1].
for the mixture components: parameters are estimated ?s (xs ) = log ?
bs (xs ), and setting
coupling terms ?st (xs , xt ) equal to zero. The prediction step reduces to performing BLSE
at each node independently. (b) The standard belief propagation (BP) approach is based on
estimating parameters (see step (1) of Figure 2) using ?st = 1 for all edges (s, t), and using
BP to compute the pseudomarginals. (c) The tree-reweighted method (TRW) is based on
estimating parameters using the tree-reweighted surrogate [10] with weights ?st = 21 for all
edges (s, t), and using the TRW sum-product algorithm to compute the pseudomarginals.
Shown in Figure 3 are 2-D surface plots of the average percentage increase in MSE, taken
over 100 trials, as a function of the coupling strength ? ? [0, 1] and the observation SNR
parameter ? ? [0, 1] for the independence model (left column), BP approach (middle column) and TRW method (right column). For weak coupling (? ? 0), all three methods?
including the independence model?perform quite well, as should be expected given the
weak dependency. Although not clear in these plots, BP outperforms TRW for weak coupling; however, both methods lose than than 1% in this regime. As the coupling is increased, the BP method eventually deteriorates quite seriously; indeed, for large enough
coupling and low/intermediate SNR, its performance can be worse than the independence
model. Looking at alternative models (in which phase transitions are known), we have
found that this rapid degradation co-incides with the appearance of multiple fixed points.
In contrast, the behavior of the TRW method is extremely stable, consistent with our theory.
6 Conclusion
We have described and analyzed joint methods for parameter estimation and prediction/smoothing using variational methods that are based on convex surrogates to the cumulant generating function. Our results, both theoretical and experimental, confirm the intuition that in the computation-limited setting, in which errors arise from approximations made both during parameter estimation and subsequent prediction, it is provably beneficial to use an inconsistent parameter estimator. Our experimental results on the coupled mixture of Gaussians model confirm the theory: the tree-reweighted sum-product algorithm yields prediction results close to the Bayes optimum, and substantially outperforms an analogous but heuristic method based on standard belief propagation.
References
[1] D. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[2] M. Crouse, R. Nowak, and R. Baraniuk. Wavelet-based statistical signal processing using hidden Markov models. IEEE Trans. Signal Processing, 46:886-902, April 1998.
[3] A. Ihler, J. Fisher, and A. S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6:905-936, May 2005.
[4] M. A. R. Leisink and H. J. Kappen. Learning in higher order Boltzmann machines using linear response. Neural Networks, 13:329-335, 2000.
[5] T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, January 2001.
[6] S. Tatikonda and M. I. Jordan. Loopy belief propagation and Gibbs measures. In Proc. Uncertainty in Artificial Intelligence, volume 18, pages 493-500, August 2002.
[7] Y. W. Teh and M. Welling. On improving the efficiency of the iterative proportional fitting procedure. In Workshop on Artificial Intelligence and Statistics, 2003.
[8] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, Cambridge, UK, 1998.
[9] M. J. Wainwright. Joint estimation and prediction in Markov random fields: Benefits of inconsistency in the computation-limited regime. Technical Report 690, Department of Statistics, UC Berkeley, 2005.
[10] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudomoment matching. In Workshop on Artificial Intelligence and Statistics, January 2003.
[11] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Info. Theory, 51(7):2313-2335, July 2005.
[12] M. J. Wainwright and M. I. Jordan. A variational principle for graphical models. In New Directions in Statistical Signal Processing. MIT Press, Cambridge, MA, 2005.
[13] W. Wiegerinck. Approximations with reweighted generalized belief propagation. In Workshop on Artificial Intelligence and Statistics, January 2005.
[14] J. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Trans. Info. Theory, 51(7):2282-2312, July 2005.
2,121 | 2,926 | Multiple Instance Boosting for Object Detection
Paul Viola, John C. Platt, and Cha Zhang
Microsoft Research
1 Microsoft Way
Redmond, WA 98052
{viola,jplatt}@microsoft.com
Abstract
A good image object detection algorithm is accurate, fast, and does not
require exact locations of objects in a training set. We can create such
an object detector by taking the architecture of the Viola-Jones detector
cascade and training it with a new variant of boosting that we call MILBoost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the
feature selection criterion of MILBoost to optimize the performance of
the Viola-Jones cascade. Experiments show that the detection rate is up
to 1.6 times better using MILBoost. This increased detection rate shows
the advantage of simultaneously learning the locations and scales of the
objects in the training set along with the parameters of the classifier.
1
Introduction
When researchers use machine learning for object detection, they need to know the location and size of the objects, in order to generate positive examples for the classification
algorithm. It is often extremely tedious to generate large training sets of objects, because
it is not easy to specify exactly where the objects are. For example, given a ZIP code of
handwritten digits, which pixel is the location of a "5"? This sort of ambiguity leads to
training sets which themselves have high error rates, this limits the accuracy of any trained
classifier.
In this paper, we explicitly acknowledge that object recognition is innately a Multiple Instance Learning problem: we know that objects are located in regions of the image, but
we don't know exactly where. In MIL, training examples are not singletons. Instead, they come in "bags", where all of the examples in a bag share a label [4]. A positive bag means
that at least one example in the bag is positive, while a negative bag means that all examples
in the bag are negative. In MIL, learning must simultaneously learn which examples in the
positive bags are positive, along with the parameters of the classifier.
We have combined MIL with the Viola-Jones method of object detection, which uses Adaboost [11] to create a cascade of detectors. To do this, we created MILBoost, a new
method for folding MIL into the AnyBoost [9] framework. In addition, we show how an early stage in the detection cascade can be re-trained using information extracted from the final
MIL classifier.
We test this new form of MILBoost for detecting people in a teleconferencing application.
This is a much harder problem than face detection, since the participants do not look at the
camera (and sometimes away). The MIL framework is shown to produce classifiers with
much higher detection rates and fast computation times.
1.1 Structure of paper
We first review the previous work in two fields: previous related work in object detection
(Section 2.1) and in multiple instance learning (Section 2.2). We derive a new MIL variant
of boosting in Section 3, called MILBoost. MILBoost is used to train a detector in the
Viola-Jones framework in Section 4. We then adapt MILBoost to train an effective cascade
using a new criterion for selecting features in the early rounds of training (Section 5). The
paper concludes in Section 6 with experimental results on the problem of person detection
in a teleconferencing application. The MIL framework is shown to produce classifiers with
much higher detection rates and fast computation times.
2 Relationship to previous work
This paper lies at the intersection between the subfields of object detection and multiple instance learning. Therefore, we discuss the relationship with previous work in each subfield
separately.
2.1 Previous work in image object detection
The task of object detection in images is quite daunting. Amongst the challenges are 1)
creating a system with high accuracy and low false detection rate, 2) restricting the system
to consume a reasonable amount of CPU time, and 3) creating a large training set that has
low labeling error.
Perona et al. [3, 5] and Schmid [12] have proposed constellation models: spatial models of local image features. These models can be trained using unsegmented images in which the object can appear at any location. Learning uses EM-like algorithms to iteratively localize and refine discriminative image features. However, hitherto, the detection accuracy has not been as good as the best methods.
Viola and Jones [13] created a system that exhaustively scans pose space for generic objects. This system is accurate, because it is trained using AdaBoost [11]. It is also very
efficient, because it uses a cascade of detectors and very simple image features. However,
the AdaBoost algorithm requires exact positions of objects to learn.
The closest work to this paper is Nowlan and Platt [10], which built on the work of Keeler et al. [7] (see below). In the Nowlan paper, a convolutional neural network was trained to detect hands. The exact location and size of the hands is approximately truthed: the
neural network used MIL training to co-learn the object location and the parameters of the
classifier. The system is effective, but is not as fast as Viola and Jones, because the detector
is more complex and it does not use a cascade.
This paper builds on the accuracy and speed of Viola and Jones, by using the same architecture. We attempt to gain the flexibility of the constellation models. Instead of an
EM-like algorithm, we use MIL to create our system, which does not require iteration.
Unlike Nowlan and Platt, we maintain a cascade of detectors for maximum speed.
2.2 Previous work in Multiple Instance Learning
The idea for multiple instance learning was originally proposed in 1990 for handwritten digit recognition by Keeler et al. [7]. Keeler's approach was called Integrated Segmentation and Recognition (ISR). In that paper, the position of a digit in a ZIP code was considered completely unknown. ISR simultaneously learned the positions of the digits and the parameters of a convolutional neural network recognizer. More details on ISR are given below (Section 3.2).
Another relevant example of MIL is the Diverse Density approach of Maron [8]. Diverse
Density uses the Noisy OR generative model [6] to explain the bag labels. A gradientdescent algorithm is used to find the best point in input space that explains the positive
bags. We also utilize the Noisy OR generative model in a version of our algorithm, below
(Section 3.1).
Finally, a number of researchers have modified the boosting algorithm to perform MIL. For
example, Andrews and Hofmann [1] have proposed modifying the inner loop of boosting
to use linear programming. This is not practically applicable to the object detection task,
which can have millions of examples (pixels) and thousands of bags.
Another approach is due to Auer and Ortner [2], which enforces a constraint that weak classifiers must be either hyper-balls or hyper-rectangles in \mathbb{R}^N. This would exclude the fast features used by Viola and Jones.
A third approach is that of Xu and Frank [14], which uses a generative model that the
probability of a bag being positive is the mean of the probabilities that the examples are
positive. We believe that this rule is unsuited for object detection, because only a small
subset of the examples in the bag are ever positive.
3 MIL and Boosting
We will present two new variants of AdaBoost which attempt to solve the MIL problem. The derivation uses the AnyBoost framework of Mason et al., which views boosting as a gradient descent process [9]. The derivation builds on previously proposed MIL cost functions, namely ISR and Noisy OR. The Noisy OR derivation is simpler and a bit more intuitive.
3.1 Noisy-OR Boost
Recall that in boosting each example is classified by a linear combination of weak classifiers. In MILBoost, examples are not individually labeled. Instead, they reside in bags. Thus, an example is indexed with two indices: i, which indexes the bag, and j, which indexes the example within the bag. The score of the example is y_{ij} = C(x_{ij}), where C(x_{ij}) = \sum_t \lambda_t c_t(x_{ij}) is a weighted sum of weak classifiers. The probability that an example is positive is given by

p_{ij} = \frac{1}{1 + \exp(-y_{ij})},

the standard logistic function. The probability that the bag is positive is a "noisy OR", p_i = 1 - \prod_{j \in i} (1 - p_{ij}) [6] [8]. Under this model the likelihood assigned to a set of training bags is:

L(C) = \prod_i p_i^{t_i} (1 - p_i)^{(1 - t_i)},

where t_i \in \{0, 1\} is the label of bag i.
Following the AnyBoost approach, the weight on each example is given as the derivative of the cost function with respect to a change in the score of the example. The derivative of the log likelihood is:

\frac{\partial \log L(C)}{\partial y_{ij}} = w_{ij} = \frac{t_i - p_i}{p_i}\, p_{ij}.   (1)

Note that the weights here are signed. The interpretation is straightforward; the sign determines the example label. Each round of boosting is a search for a classifier which maximizes \sum_{ij} c(x_{ij})\, w_{ij}, where c(x_{ij}) is the score assigned to the example by the weak classifier (for a binary classifier c(x_{ij}) \in \{-1, +1\}). The parameter \lambda_t is determined using a line search to maximize \log L(C + \lambda_t c_t).
Examining the criterion (1), the weight on each example is the product of two quantities: the bag weight W_{bag} = \frac{t_i - p_i}{p_i} and the instance weight W_{instance} = p_{ij}. Observe that W_{bag} for a negative bag is always -1. Thus, the weight for a negative instance, p_{ij}, is the same that would result in a non-MIL AdaBoost framework (i.e., the negative examples are all equally negative). The weight on the positive instances is more complex. As learning proceeds and the probability of the bag approaches the target, the weight on the entire bag is reduced. Within the bag, the examples are assigned a weight which is higher for examples with higher scores. Intuitively, the algorithm selects a subset of examples to assign a higher positive weight, and these examples dominate subsequent learning.
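A minimal vectorized sketch of this weight computation (our own illustration; the small floor on p_i is a numerical safeguard, not part of the derivation):

import numpy as np

def noisy_or_weights(scores, bag_ids, labels):
    # scores: y_ij for every example; bag_ids: integer bag index per example;
    # labels: t_i in {0, 1} per bag. Returns the signed AnyBoost weights (1).
    p_ij = 1.0 / (1.0 + np.exp(-scores))
    n_bags = labels.shape[0]
    # p_i = 1 - prod_{j in i} (1 - p_ij), accumulated in log space.
    log_one_minus = np.zeros(n_bags)
    np.add.at(log_one_minus, bag_ids, np.log1p(-p_ij))
    p_i = 1.0 - np.exp(log_one_minus)
    w_bag = (labels - p_i) / np.maximum(p_i, 1e-12)
    return w_bag[bag_ids] * p_ij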
3.2 ISR Boost
The authors of the ISR paper may well have been aware of the Noisy OR criterion described above. They chose instead to derive a different, perhaps less probabilistic, criterion. They do this in part because the derivatives (and hence example weights) lead to a form of instance competition.
Define \phi_{ij} = \exp(y_{ij}), S_i = \sum_{j \in i} \phi_{ij}, and p_i = \frac{S_i}{1 + S_i}. Keeler et al. argue that \phi_{ij} can be interpreted as the likelihood that the object occurs at ij. The quantity S_i can be interpreted as a likelihood ratio that some (at least one) instance is positive, and finally p_i is the probability that some instance is positive. The example weights for the ISR framework are:

\frac{\partial \log L(C)}{\partial y_{ij}} = w_{ij} = (t_i - p_i) \frac{\phi_{ij}}{\sum_{j \in i} \phi_{ij}}.   (2)
Examining the ISR criterion reveals two key properties. The first is the form of the example weight, which is explicitly competitive. The examples in the bag compete for weight, since the weight is normalized by the sum of the \phi_{ij}'s. Though the experimental evidence is weak, this rule perhaps leads to a very localized representation, where a single example is labeled positive and the other examples are labeled negative. The second property is that the negative examples also compete for weight. This turns out to be troublesome in the detection framework, since there are many, many more negative examples than positive. How many negative bags should there be? In contrast, the Noisy OR criterion treats all negative examples as independent negative examples.
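The corresponding sketch for the ISR weights (again our own illustration, with the same input conventions as the noisy-OR sketch above):

import numpy as np

def isr_weights(scores, bag_ids, labels):
    # ISR weights (2): w_ij = (t_i - p_i) * phi_ij / sum_{j in i} phi_ij,
    # with phi_ij = exp(y_ij) and p_i = S_i / (1 + S_i).
    phi = np.exp(scores)
    n_bags = labels.shape[0]
    S = np.zeros(n_bags)
    np.add.at(S, bag_ids, phi)
    p_i = S / (1.0 + S)
    return ((labels - p_i) / np.maximum(S, 1e-12))[bag_ids] * phi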
4 Application of MIL Boost to Object Detection in Images
Each image is divided into a set of overlapping square windows that uniformly sample
the space of position and scale (typically there are between 10,000 and 100,000 windows
in a training image). Each window is used as an example for the purposes of training and
detection. Each training image is labeled to determine the position and scale of the object of
interest. For certain types of objects, such as frontal faces, it may be possible to accurately
determine the position and scale of the face. One possibility is to localize the eyes and then
to determine the single positive image window in which the eyes appear at a given relative
location and scale. Even for this type of object the effort in carefully labeling the images is
significant.
For many other types of objects (objects which may be visible from multiple poses, or
Figure 1: Two example images with people in a wide variety of poses. The algorithm will
attempt to detect all people in the images, including those that are looking away from the
camera.
Figure 2: Some of the subwindows in one positive bag.
are highly varied, or are flexible) the "correct" geometric normalization is unclear. It is not
clear how to normalize images of people in a conference room, who may be standing, sitting
upright, reclining, looking toward, or looking away from the camera. Similar questions
arise for most other image classes such as cars, trees, or fruit.
Experiments in this paper are performed on a set of images from a teleconferencing application. The images are acquired from a set of cameras near the center of the conference
room (see Figure 1). The practical challenge is to steer a synthetic virtual camera toward
the location of the speaker. The focus here is on person detection; determination of the
person who is speaking is beyond the scope of this paper.
In every training image each person is labeled by hand. The labeler is instructed to draw a
box around the head of the person. While this may seem like a reasonable geometric normalization, it ignores one critical issue, context. At the available resolution (approximately
1000x150 pixels) the head is often less than 10 pixels wide. At this resolution, even for
clear frontal faces, the best face detection algorithms frequently fail. There are simply too
few pixels on the face. The only way to detect the head is to include the surrounding image
context. It is difficult to determine the correct quantity of image context (Figure 2 shows
many possible normalizations).
If the body context is used to assist in detection, it is difficult to foresee the effect of body
pose. Some of the participants are facing right, others left, and still others are leaning
far forward/backward (while taking notes or reclining). The same context image is not be
appropriate for all situations.
Both of these issues can be addressed with the use of MIL. Each positive head is represented, during training, by a large number of related image windows (see Figure 2). The
MIL boosting algorithm is then used to simultaneously learn a detector and determine the
location and scale of the appropriate image context.
5 MIL Boosting a Detection Cascade
In their work on face detection Viola and Jones train a cascade of classifiers, each designed
to achieve high detection rates and modest false positive rates. During detection almost
all of the computation is performed by the early stages in the cascade, perhaps 90% in the
first 10 features. Training the initial stages of the cascade is the key to a fast and effective
classifier.
Training and evaluating a detector in a MIL framework has a direct impact on cascade
construction, both on the features selected and the appropriate thresholds.
The result of the MIL boost learning process is not only an example classifier, but also
a set of weights on the examples. Those examples in positive bags which are assigned
high weight have also high score. The final classifier labels these examples positive. The
remaining examples in the positive bags are assigned a low weight and have a low score.
The final classifier often classifies these examples as negative (as they should be).
Since boosting is a greedy process, the initial weak classifiers do not have any knowledge
of the subsequent classifiers. As a result, the first classifiers selected have no knowledge
of the final weights assigned to the examples. The key to efficient processing, is that the
initial classifiers have a low false negative rate on the examples determined to be positive
by the final MIL classifier.
This suggests a simple scheme for retraining the initial classifiers. Train a complete MIL
boosted classifier and set the detection threshold to achieve the desired false positive and
false negative rates. Retrain the initial weak classifier so that it has a zero false negative
rate on the examples labeled positive by the full classifier. This results in a significant
increase in the number of examples which can be pruned by this classifier. The process can
be repeated, so that the second classifier is trained to yield a zero false negative rate on the
remaining examples.
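A minimal sketch of the threshold-retraining step (ours; it assumes the weak classifier fires when its feature value is at or above the threshold, and that the inputs are NumPy arrays):

def retrain_first_threshold(feature_vals, keep_mask):
    # Re-set the first weak classifier's threshold so that no window labeled
    # positive by the full MIL classifier (keep_mask) is rejected, then
    # measure how many negatives still survive.
    thresh = feature_vals[keep_mask].min()  # zero false negatives on kept set
    passed = feature_vals >= thresh
    false_pos_rate = passed[~keep_mask].mean()
    return thresh, false_pos_rate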
6 Experimental Results
Experiments were performed using a set of 8 videos recorded in different conference rooms.
A collection of 1856 images were sampled from these videos. In all cases the detector was
trained on 7 video conferences and tested on the remaining video conference. There were
a total of 12364 visible people in these images. Each was labeled by drawing a rectangle
around the head of each person.
Learning is performed on a total of about 30 million subwindows in the 1856 images. In
addition to the monochrome images, two additional feature images are used. One measures
the difference from the running mean image (this is something like background subtraction)
and the other measures temporal variance over longer time scales. A set of 2654 rectangle
filters is used for training. In each round the optimal filter and threshold are selected. In
each experiment a total of 60 filters are learned.
Figure 3: ROC comparison between various boosting rules.
Figure 4: One example from the testing dataset and overlaid results.
We compared classical AdaBoost with two variants of MIL boost: ISR and Noisy-OR.
For the MIL algorithms there is one bag for each labeled head, containing those positive
windows which overlap that head. Additionally there is one negative bag for each image.
After training, performance is evaluated on held out conference video (see Figure 3).
During training a set of positive windows are generated for each labeled example. All
windows whose width is between 0.67 times and 1.5 times the head width and whose
center is within 0.5 times the head width of the center of the head are labeled positive.
An exception is made for AdaBoost, which has a tighter definition on positive examples
(width between 0.83 and 1.2 times the head width and center within 0.2 times the head
width) and produces better performance than the looser criterion. All windows which do
not overlap with any head are considered negative. For each algorithm one experiment
uses the ground truth obtained by hand (which has small yet unavoidable errors). A second
experiment corrupts this ground truth further, moving each head by a uniform random shift
such that there is non-zero overlap with the true position. Note that conventional AdaBoost
is much worse when trained using corrupted ground truth. Interestingly, AdaBoost is worse than NorBoost using the "correct" ground truth, even with a tight definition of positive
examples. We conjecture that this is due to unavoidable ambiguity in the training and
testing data.
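A sketch of the bag-construction rule described above (our own illustration; windows that overlap a head without meeting the positive criteria are treated as negatives here, which is a simplification of the rule as stated):

def build_bags(windows, heads):
    # windows: list of (cx, cy, w); heads: list of (cx, cy, w) ground truth.
    # A window joins a head's positive bag if its width is within
    # [0.67, 1.5] times the head width and its center lies within 0.5 head
    # widths of the head center; all other windows are treated as negatives.
    bags = {h: [] for h in range(len(heads))}
    negatives = []
    for i, (cx, cy, w) in enumerate(windows):
        matched = False
        for h, (hx, hy, hw) in enumerate(heads):
            if (0.67 * hw <= w <= 1.5 * hw and
                    (cx - hx) ** 2 + (cy - hy) ** 2 <= (0.5 * hw) ** 2):
                bags[h].append(i)
                matched = True
        if not matched:
            negatives.append(i)
    return bags, negatives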
Overall the MIL detection results are practically useful. A typical example of detection
results is shown in Figure 4. Results shown are for the noisy OR algorithm. In order to
simplify the display, significantly overlapping detection windows are averaged into a single
window.
The scheme for retraining the initial classifier was evaluated on the noisy OR strong classifier trained above. Training a conventional cascade requires finding a small set of weak
classifiers that can achieve zero false negative rate (or almost zero) and a low false positive
rate. Using the first weak classifier yields a false positive rate of 39.7%. Including the first
four weak classifiers yields a false positive rate of 21.4%. After retraining the first weak
classifier alone yields a false positive rate of 11.7%. This improved rejection rate has the
effect of reducing computation time of the cascade by roughly a factor of three.
7 Conclusions
This paper combines the truthing flexibility of multiple instance learning with the high
accuracy of the boosted object detector of Viola and Jones. This was done by introducing
a new variant of boosting, called MILBoost. MILBoost combines examples into bags,
using combination functions such as ISR or Noisy OR. Maximum likelihood on the output
of these bag combination functions fit within the AnyBoost framework, which generates
boosting weights for each example.
We apply MILBoost to Viola-Jones face detection, where the standard AdaBoost works
very well. NorBoost improves the detection rate over standard AdaBoost (tight positive)
by nearly 15% (at a 10% false positive rate). Using MILBoost for object detection allows
the detector to flexibly assign labels to the training set, which reduces label noise and
improves performance.
References
[1] S. Andrews and T. Hofmann. Multiple-instance learning via disjunctive programming boosting. In S. Thrun, L. K. Saul, and B. Schölkopf, editors, Proc. NIPS, volume 16. MIT Press, 2004.
[2] P. Auer and R. Ortner. A boosting approach to multiple instance learning. In Lecture Notes in Computer Science, volume 3201, pages 63-74, October 2004.
[3] M. C. Burl, T. K. Leung, and P. Perona. Face localization via shape statistics. In Proc. Int'l Workshop on Automatic Face and Gesture Recognition, pages 154-159, 1995.
[4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell., 89(1-2):31-71, 1997.
[5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proc. CVPR, volume 2, pages 264-271, 2003.
[6] D. Heckerman. A tractable inference algorithm for diagnosing multiple diseases. In Proc. UAI, pages 163-171, 1989.
[7] J. D. Keeler, D. E. Rumelhart, and W.-K. Leow. Integrated segmentation and recognition of hand-printed numerals. In NIPS-3: Proceedings of the 1990 conference on Advances in Neural Information Processing Systems 3, pages 557-563, San Francisco, CA, USA, 1990. Morgan Kaufmann Publishers Inc.
[8] O. Maron and T. Lozano-Perez. A framework for multiple-instance learning. In Proc. NIPS, volume 10, pages 570-576, 1998.
[9] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent in function space, 1999.
[10] S. J. Nowlan and J. C. Platt. A convolutional neural network hand tracker. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 901-908. The MIT Press, 1995.
[11] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proc. COLT, volume 11, pages 80-91, 1998.
[12] C. Schmid and R. Mohr. Local grayvalue invariants for image retrieval. IEEE Trans. PAMI, 19(5):530-535, 1997.
[13] P. Viola and M. Jones. Robust real-time object detection. Int'l. J. Computer Vision, 57(2):137-154, 2002.
[14] X. Xu and E. Frank. Logistic regression and boosting for labeled bags of instances. In Lecture Notes in Computer Science, volume 3056, pages 272-281, April 2004.
| 2,122 | 2,927 |
Scaling Laws in Natural Scenes and the Inference of 3D Shape
Brian Potetz
Department of Computer Science
Center for the Neural Basis of Cognition
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Tai Sing Lee
Department of Computer Science
Center for the Neural Basis of Cognition
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
This paper explores the statistical relationship between natural images
and their underlying range (depth) images. We look at how this relationship changes over scale, and how this information can be used to enhance
low resolution range data using a full resolution intensity image. Based
on our findings, we propose an extension to an existing technique known
as shape recipes [3], and the success of the two methods are compared
using images and laser scans of real scenes. Our extension is shown
to provide a two-fold improvement over the current method. Furthermore, we demonstrate that ideal linear shape-from-shading filters, when
learned from natural scenes, may derive even more strength from shadow
cues than from the traditional linear-Lambertian shading cues.
1 Introduction
The inference of depth information from single images is typically performed by devising models of image formation based on the physics of light interaction and then inverting
these models to solve for depth. Once inverted, these models are highly underconstrained,
requiring many assumptions such as Lambertian surface reflectance, smoothness of surfaces, uniform albedo, or lack of cast shadows. Little is known about the relative merits
of these assumptions in real scenes. A statistical understanding of the joint distribution of
real images and their underlying 3D structure would allow us to replace these assumptions
and simplifications with probabilistic priors based on real scenes. Furthermore, statistical
studies may uncover entirely new sources of information that are not obvious from physical models. Real scenes are affected by many regularities in the environment, such as
the natural geometry of objects, the arrangements of objects in space, natural distributions
of light, and regularities in the position of the observer. Few current shape inference algorithms make use of these trends. Despite the potential usefulness of statistical models
and the growing success of statistical methods in vision, few studies have been made into
the statistical relationship between images and range (depth) images. Those studies that
have examined this relationship in nature have uncovered meaningful and exploitable statistical trends in real scenes which may be useful for designing new algorithms in surface
inference, and also for understanding how humans perceive depth in real scenes [6, 4, 8].
In this paper, we explore some of the properties of the statistical relationship between
images and their underlying range (depth) images in real scenes, using images acquired by
laser scanner in natural environments. Specifically, we will examine the cross-covariance
between images and range images, and how this structure changes over scale. We then
illustrate how our statistical findings can be applied to inference problems by analyzing
and extending the shape recipe depth inference algorithm.
2 Shape recipes
We will motivate our statistical study with an application. Often, we may have a highresolution color image of a scene, but only a low spatial resolution range image (range
images record the 3D distance between the scene and the camera for each pixel). This often
happens if our range image was acquired by applying a stereo depth inference algorithm.
Stereo algorithms rely on smoothness constraints, either explicitly or implicitly, and so
the high-frequency components of the resulting range image are not reliable [1, 7]. Lowresolution range data may also be the output of a laser range scanner, if the range scanner is
inexpensive, or if the scan must be acquired quickly (range scanners typically acquire each
pixel sequentially, taking up to several minutes for a high-resolution scan).
It should be possible to improve our estimate of the high spatial frequencies of the range
image by using monocular cues from the high-resolution intensity (or color) image. Shape
recipes [3, 9] provide one way of doing this. The basic principle of shape recipes is that
a relationship between shape and light intensity could be learned from the low resolution
image pair, and then extrapolated and applied to the high resolution intensity image to
infer the high spatial frequencies of the range image. One advantage of this approach is
that hidden variables important to inference from monocular cues, such as illumination
direction and material reflectance properties, might be implicitly learned from the lowresolution range and intensity images. However, for this approach to work, we require
some model of how the relationship between shape and intensity changes over scale, which
we discuss below.
For shape recipes, both the high resolution intensity image and the low resolution range
image are decomposed into steerable wavelet filter pyramids, linearly breaking the image
down according to scale and orientation [2]. Linear regression is then used between the
highest frequency band of the available low-resolution range image and the corresponding
band of the intensity image, to learn a linear filter that best predicts the range band from
the image band. The hypothesis of the model is that this filter can then be used to predict high frequency range bands from the high frequency image bands. We describe the
implementation in more detail below.
Let i_{m,θ} and z_{m,θ} be steerable filter pyramid subbands of the intensity and range image
respectively, at spatial resolution m and orientation θ (both are integers). Number the band
levels so that m = 0 is the highest frequency subband of the intensity image, and m = n is
the highest available frequency subband of the low-resolution range image. Thus, higher
level numbers correspond to lower spatial frequencies. Shape recipes work by learning a
linear filter k_{n,θ} at level n by minimizing the sum-squared error Σ (z_{n,θ} − k_{n,θ} ∗ i_{n,θ})², where
∗ denotes convolution. Higher resolution subbands of the range image are inferred by:

    ẑ_{m,θ} = (1/c^(n−m)) (k_{n,θ} ∗ i_{m,θ})        (1)
where c = 2. The choice of c = 2 in the shape recipe model is motivated by the linear
Lambertian shading model [9]. We will discuss this choice of constant in section 3.
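To make the recipe step concrete, the following is a minimal numpy sketch (our own illustration, not code from [3]): it fits the level-n kernel by least squares on the coarse band pair, then reuses it one level up with the 1/c^(n−m) attenuation of equation 1. The steerable-pyramid band extraction is assumed to be provided elsewhere, and all function and variable names are ours.

    import numpy as np
    from scipy.ndimage import correlate

    def learn_recipe_kernel(z_band, i_band, ksize=11):
        """Least-squares fit of a ksize x ksize kernel k so that z_band ~= correlate(i_band, k)."""
        half = ksize // 2
        rows, targets = [], []
        H, W = i_band.shape
        for y in range(half, H - half):
            for x in range(half, W - half):
                rows.append(i_band[y - half:y + half + 1, x - half:x + half + 1].ravel())
                targets.append(z_band[y, x])
        k, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
        return k.reshape(ksize, ksize)

    def apply_recipe(k, i_band_high, level_gap, c=2.0):
        """Equation 1: infer a higher-frequency range band, attenuated by 1/c^(n-m)."""
        return correlate(i_band_high, k, mode="nearest") / (c ** level_gap)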
The underlying assumption of shape recipes is that the convolution kernel k_{m,θ} should be
roughly constant over the four highest resolution bands of the steerable filter pyramid. This
is based on the idea that shape recipe kernels should vary slowly over scale. In this section,
we show mathematically that this model is internally inconsistent. To do this, we first reexpress the shape recipe process in the Fourier domain. The operations of shape recipes
(pyramid decomposition, convolution, and image reconstruction) are all linear operations,
and so they can be combined into a single linear convolution. In other words, we can think
of shape recipes as inferring the high resolution range data zhigh via a single convolution
    Z_high(u, v) = I(u, v) · K_recipe(u, v)        (2)
where I is the Fourier transform of the intensity image i. (In general, we will use capital
letters to denote functions in the Fourier domain). Krecipe is a filter in the Fourier domain,
of the same size as the image, whose construction is discussed below. Note that Krecipe is
zero in the low frequency bands where Zlow is available. Once zhigh (the inverse Fourier
transform of Zhigh ) is estimated, it can be combined with the known low-resolution range
data simply by adding them together: zrecipe (x, y) = zlow (x, y) + zhigh (x, y).
For shorthand, we will write I(u, v)I*(u, v) as II(u, v) and Z(u, v)I*(u, v) as ZI(u, v).
II is also known as the power spectrum, and it is the Fourier transform of the autocorrelation of the intensity image. ZI is the Fourier transform of the cross-correlation between
the intensity and range images, and it has both real and imaginary parts. Let K = ZI/II.
Observe that I · K is a perfect reconstruction of the original high resolution range image
(as long as II(u, v) ≠ 0). Because we do not have the full-resolution range image, we
can only compute the low spatial frequencies of ZI(u, v). Let Klow = ZIlow /II, where
ZIlow is the Fourier transform of the cross-correlation between the low-resolution range
image, and a low-resolution version of the intensity image. Klow is zero in the high frequency bands. We can then think of Krecipe as an approximation of K = ZI/II formed
by extrapolating Klow into the higher spatial frequencies.
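As a concrete illustration of these Fourier-domain quantities, here is a small numpy sketch (our own, with illustrative names) that computes II, ZI, and K = ZI/II from a coregistered image pair; a careful implementation would add windowing and spectral smoothing, which we omit.

    import numpy as np

    def cross_spectra(i_img, z_img, eps=1e-12):
        I = np.fft.fft2(i_img - i_img.mean())
        Z = np.fft.fft2(z_img - z_img.mean())
        II = (I * np.conj(I)).real      # power spectrum (real, non-negative)
        ZI = Z * np.conj(I)             # cross-spectrum (complex)
        K = ZI / (II + eps)             # ideal linear shape-from-shading filter
        return II, ZI, K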
In the appendix, we show that shape recipes implicitly perform this extrapolation by learning the highest available frequency octave of Klow , and duplicating this octave into all
successive octaves of Krecipe , multiplied by a scale factor. However, there is a problem
with this approach. First, there is no reason to expect that features in the range/intensity
relationship should repeat once every octave. Figure 1a shows a plot of ZI from a scene in
our database of ground-truth range data (to be described in section 3). The fine structures
in real[K] do not duplicate themselves every octave. Second and more importantly, octave
duplication violates Freeman and Torralba's assumption that shape recipe kernels should
change slowly over scale, which we take to mean over all scales, not just over successive
octaves. Even if octave 2 of K is made identical to octave 1, it is mathematically impossible
for fractional octaves of K like 1.5 to also be identical unless ZI/II is completely smooth
and devoid of fine structure. The fine structures in K therefore cannot possibly generalize
over all scales.
In the next section, we use laser scans of real scenes to study the joint statistics of range
and intensity images in greater detail, and use our results to form a statistically-motivated
model of ZI. We believe that a greater understanding of the joint distribution of natural
images and their underlying 3D structure will have a broad impact on the development
of robust depth inference algorithms, and also on understanding human depth perception.
More immediately, our statistical observations lead to a more accurate way to extrapolate
Klow , which in turn results in a more accurate shape recipe method.
3 Scaling laws in natural scene statistics
To study the correlational structures between depth and intensity in natural scenes, we
have collected a database of coregistered intensity and high-resolution range images (corresponding pixels of the two images correspond to the same point in space). Scans were
collected using the Riegl LMS-Z360 laser range scanner with integrated color photosensor.
[Figure 1: (a) |real[ZI]|; (b) example B_K(θ) vs. degrees counter-clockwise from horizontal (vertical axis ×10⁻³; orientation 0° to 360°).]
Figure 1: a) A log-log polar plot of |real[ZI]| from a scene in our database. ZI contains
extensive fine structures that do not repeat at each octave. However, along all orientations,
the general form of |real[ZI]| is a power-law. |imag[ZI]| similarly obeys a power-law.
b) A plot of B_K(θ) for the scene in figure 2. real[B_K(θ)] is drawn in black and
imag[B_K(θ)] in grey. This plot is typical of most scenes in our database. As predicted
by equation 4, imag[B_K(θ)] reaches its minima at the illumination direction (in this case,
to the extreme left, almost 180°). Also typical is that real[B_K(θ)] is uniformly negative,
most likely caused by cast shadows in object concavities [6].
Scans were taken of a variety of rural and urban scenes. All images were taken outdoors,
under sunny conditions, while the scanner was level with ground. The shape recipe model
was intended for scenes with homogenous albedo and surface material. To test this algorithm in real scenes of this type, we selected 28 single-texture image sections from our
database. These textures include statue surfaces and faceted building exteriors, such as
archways and church facades (12 scenes), rocky terrain and rock piles (8), and leafy foliage (8). No logarithm or other transformation was applied to the intensity or range data
(measured in meters), as this would interfere with the Lambertian model that motivates the
shape recipe technique. Average size of these textures was 172,669 pixels per image.
We show a log-log polar plot of |real[ZI(r, θ)]| from one image in our database in figure
1a. As can be seen in the figure, this structure appears to closely follow a power law. We
claim that ZI can be reasonably modeled by B(θ)/r^α, where r is spatial frequency in polar
coordinates, and B(θ) is a parameter of the model (with both real and imaginary parts) that
depends only on polar angle θ. We test this claim by dividing the Fourier plane into four
45° octants (vertical, forward diagonal, horizontal, and backward diagonal), and measuring
the drop-off rate in each octant separately. For each octant, we average over the octant's
included orientations and fit the result to a power-law. The resulting values of α (averaged
over all 28 images) are listed in the table below:
orientation         II           real[ZI]     imag[ZI]     ZZ
horizontal          2.47 ±0.10   3.61 ±0.18   3.84 ±0.19   2.84 ±0.11
forward diagonal    2.61 ±0.11   3.67 ±0.17   3.95 ±0.17   2.92 ±0.11
vertical            2.76 ±0.11   3.62 ±0.15   3.61 ±0.24   2.89 ±0.11
backward diagonal   2.56 ±0.09   3.69 ±0.17   3.84 ±0.23   2.86 ±0.10
mean                2.60 ±0.10   3.65 ±0.14   3.87 ±0.16   2.88 ±0.10
For each octant, the correlation coefficient between the power-law fit and the actual spectrum ranged from 0.91 to 0.99, demonstrating that each octant is well-fit by a power-law
(Note that averaging over orientation smooths out some fine structures in each spectrum).
Furthermore, α varies little across orientations, showing that our model fits ZI closely.
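The per-octant fit just described can be sketched as follows (our own illustrative code; the exact masking and regression choices are assumptions, since the text does not specify them):

    import numpy as np

    def octant_exponent(S, theta_lo, theta_hi):
        """Fit |S(r, theta)| ~ r^(-alpha) over one 45-degree octant of the FFT plane."""
        H, W = S.shape
        v, u = np.meshgrid(np.fft.fftfreq(W), np.fft.fftfreq(H))
        r = np.hypot(u, v)
        theta = np.arctan2(u, v) % np.pi          # orientation, folded to [0, pi)
        mask = (r > 0) & (theta >= theta_lo) & (theta < theta_hi)
        logr = np.log(r[mask])
        logS = np.log(np.abs(S[mask]) + 1e-20)
        slope, _ = np.polyfit(logr, logS, 1)      # regress log-magnitude on log-frequency
        return -slope                             # alpha in the 1/r^alpha model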
The above findings predict that K = ZI/II also obeys a power-law. Subtracting α_II from
α_real[ZI] and α_imag[ZI], we find that real[K] drops off as 1/r^1.1 and imag[K] drops off as
1/r^1.2. Thus, we have that K(r, θ) ≈ B_K(θ)/r.
Now that we know that K can be fit (roughly) by a 1/r power-law, we can offer some
insight into why K tends to approximate this general form. The 1/r drop-off in the
imaginary part of K can be explained by the linear Lambertian model of shading, with
oblique lighting conditions. This argument was used by Freeman and Torralba [9] in
their theoretical motivation for choosing c = 2. The linear Lambertian model is obtained
by taking only the linear terms of the Taylor series of the Lambertian equation. Under
this model, if constant albedo is assumed, and no occlusion is present, then with lighting from above, i(x, y) = a ∂z/∂y, where a is some constant. In the Fourier domain,
I(u, v) = a·2πjv·Z(u, v), where j = √−1. Thus, we have that

    ZI(r, θ) = −j / (a·2π·r·sin θ) · II(r, θ)        (3)

    K(r, θ) = −j · (1/r) · 1/(a·2π·sin θ)            (4)
In other words, under this model, K obeys a 1/r power-law. This means that each octave
of K is half of the octave before it. Our empirical finding that the imaginary part of K
obeys a 1/r power-law confirms Freeman and Torralba's reasoning behind choosing c = 2
for shape recipes.
However, the linear Lambertian shading model predicts that only the imaginary part of ZI
should obey a power-law. In fact, according to equation 3, this model predicts that the real
part of ZI should be zero. Yet, in our database, the real part of ZI was typically stronger
than the imaginary part. The real part of ZI is the Fourier transform of the even-symmetric
part of the cross-correlation function, and it includes the direct correlation cov[i, z]. In
a previous study of the statistics of natural range images [6], we have found that darker
pixels in the image tend to be farther away, resulting in significantly negative cov[i, z].
We attributed this phenomenon to cast shadows in complex scenes: object interiors and
concavities are farther away than object exteriors, and these regions are the most likely
to be in shadow. This effect can be observed wherever shadows are found, such as the
crevices of figure 2a. However, the effect appears strongest in complex objects with many
shadows and concavities, like folds of cloth, or foliage. We found that the real part of ZI
is especially likely to be strongly negative in images of foliage. Such correlation between
depth and darkness has been predicted theoretically for diffuse lighting conditions, such as
cloudy days, when viewed from directly above [5]. The fact that all of our images were
taken under cloudless, sunny conditions and with oblique lighting from above suggests that
this cue may be more important than at first realized. Psychophysical experiments have
demonstrated that in the absence of all other cues, darker image regions appear farther,
suggesting that the human visual system makes use of this cue for depth inference (see [6]
for a review, also [10]). We believe that the 1/r drop-off rate observed in real[K] is due to
the fact that concavities with smaller apertures but equal depths tend to be darker. In other
words, for a given level of darkness, a smaller aperture corresponds to a more shallow hole.
4 Inference using power-law models
Armed with a better understanding of the statistics of real scenes, we are better prepared to
develop successful depth inference algorithms. We now know that fine details in ZI/II do
not generalize across scales, but that its coarse structure roughly follows a 1/r power-law.
We can exploit this statistical trend directly. We can simply fit our B_K(θ)/r power law to
ZI_low/II, and then use this estimate of K to reconstruct the high frequency range data.
Specifically, from the low-resolution range and intensity image, we compute low resolution
spectra of ZI and II. From the highest frequency octave of the low-resolution images, we
estimate B_II(θ) and B_ZI(θ). Any standard interpolation method will work to estimate
these functions. We chose a cos³(θ + kπ/4) basis function based on steerable filters [2].
[Figure 2 panels: a) Original Intensity Image; b) Low-Resolution Range Data; c) Power-law Shape Recipe; d) K_recipe; e) K_powerlaw.]
Figure 2: a) An example intensity image from our database. b) A Lambertian rendering of
the corresponding low resolution range image. c) Power-law method output. Shape recipe
reconstructions show a similar amount of texture, but tests show that texture generated by
the power-law method is more highly correlated with the true texture. d) The imaginary
parts of Krecipe and e) Kpowerlaw for the same scene. Dark regions are negative, light
regions are positive. The grey center region in each estimate of K corresponds to the low
spatial frequencies, where range data is not inferred because it is already known. Notice
that Krecipe oscillates over scale.
We now can estimate the high spatial frequencies of the range image, z. Define
Kpowerlaw (r, ?) = Fhigh (r) ? (BZI (?)/BII (?))/r
Zpowerlaw = Zlow + I ? Kpowerlaw
(5)
(6)
where Fhigh is the high-pass filter associated with the two highest resolution bands of the
steerable filter pyramid of the full-resolution image.
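Here is a sketch of equations 5 and 6 in numpy, assuming B_ZI(θ) and B_II(θ) have already been interpolated from the lowest available octave (e.g. with the cos³ basis above); the crude radial high-pass stand-in for F_high and all names are our own illustrative assumptions.

    import numpy as np

    def powerlaw_reconstruct(i_img, z_low, B_ZI, B_II, f_cut):
        """B_ZI and B_II are callables over orientation theta (B_ZI may be complex)."""
        H, W = i_img.shape
        v, u = np.meshgrid(np.fft.fftfreq(W), np.fft.fftfreq(H))
        r = np.hypot(u, v)
        theta = np.arctan2(u, v)
        F_high = (r >= f_cut).astype(float)   # crude stand-in for the pyramid high-pass
        with np.errstate(divide="ignore", invalid="ignore"):
            K = np.where(r > 0, F_high * (B_ZI(theta) / B_II(theta)) / r, 0.0)
        Z_high = np.fft.fft2(i_img) * K       # equation 5 applied to the intensity spectrum
        z_high = np.real(np.fft.ifft2(Z_high))
        return z_low + z_high                 # equation 6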
5 Empirical evaluation
In this section, we compare the performance of shape recipes with our new approach, using our ground-truth database of high-resolution range and intensity image pairs described
in section 3. For each range image in our database, a low-resolution (but still full-sized)
range image, zlow , was generated by setting to zero the top two steerable filter pyramid layers. Both algorithms accepted as input the low-resolution range image and high-resolution
intensity image, and the output was compared with the original high-resolution range image. The high resolution output corresponds to a 4-fold increase in spatial resolution (or a
16-fold increase in total size).
Although encouraging enhancements of stereo output were given by the authors, shape
recipes has not been evaluated with real, ground-truth high resolution range data. To maximize its performance, we implemented shape recipes using ridge regression, with the ridge
coefficient obtained using cross-validation. Linear kernels were learned (and the output
evaluated) over a region of the image at least 21 pixels from the image border.
For each high-resolution output, we measured the sum-squared error between the reconstruction (z_recipe or z_powerlaw) and the original range image (z). We compared this with
the sum-squared error of the low-resolution range image z_low to get the percent reduction
in sum-squared error: error reduction_recipe = (err_low − err_recipe)/err_low. This measure of error
reflects the performance of the method independently of the variance or absolute depth of
the range image. On average, shape recipe reconstructions had 1.3% less mean-squared
error than zlow . Shape recipes improved 21 of the 28 images. Our new approach had 2.2%
less mean-squared error than zlow , and improved 26 of the 28 images.
We cannot expect the error reduction values to be very high, partly because our images
are highly complex natural scenes, and also because some noise was present in both the
range and intensity images. Therefore, it is difficult to assess how much of the remaining
error could be recovered by a superior algorithm, and how much is simply due to sensor
noise. As a comparison, we generated an optimal linear reconstruction, zoptlin , by learning
11 × 11 shape recipe kernels for the two high resolution pyramid bands directly from
the ground-truth high resolution range image. This reconstruction provides a loose upper
bound on the degree of improvement possible by linear shape methods. We then measured
the percentage of linearly achievable improvement for each image: improvementrecipe =
errlow ?errrecipe
errlow ?erroptlin Shape recipes yielded an average improvement of 23%. Our approach
achieved an improvement of 44%, nearly a two-fold enhancement over shape recipes.
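For clarity, the two figures of merit used in this section can be computed as below (a trivial numpy sketch with our own naming; the reconstructions are assumed given):

    import numpy as np

    def evaluation_metrics(z_true, z_low, z_method, z_optlin):
        err = lambda z: np.sum((z - z_true) ** 2)
        err_low, err_m, err_opt = err(z_low), err(z_method), err(z_optlin)
        error_reduction = (err_low - err_m) / err_low
        improvement = (err_low - err_m) / (err_low - err_opt)
        return error_reduction, improvement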
6 The relative strengths of shading and shadow cues
Earlier we showed that Lambertian shading alone predicts that the real part of ZI in natural
scenes is empty of useful correlations between images and range images. Yet in our database, the real part of ZI, which we believe is related to shadow cues, was often stronger
than the imaginary component. Our depth-inference algorithm offers an opportunity to
compare the performance of shading cues versus shadow cues. We ran our algorithm again,
except that we set the real part of Kpowerlaw to zero. This yielded only a 12% improvement.
However, when we ran the algorithm after setting imag[K] to zero, 32% improvement was
achieved. Thus, 72% of the algorithm?s total improvement was due to shadow cues. When
the database is broken down into categories, the real part of ZI is responsible for 96% of
total improvement in foliage scenes, 76% in rocky terrain scenes, and 35% in urban scenes
(statue surfaces and building facades). As expected, the algorithm relies more heavily on
the real part of ZI in environments rich in cast shadows. These results show that shadow
cues are far more useful than was previously expected, and also that they can be exploited
more easily than was previously thought possible, using only simple linear relationships
that might easily be incorporated into linear shape-from-shading techniques. We feel that
these insights into natural scene statistics are the most important contributions of this paper.
7 Discussion
The power-law extension to shape recipes not only offers a substantial improvement in
performance, but it also greatly reduces the number of parameters that must be learned. The
original shape recipes required one 11×11 kernel, or 121 parameters, for each orientation of
the steerable filters. The new algorithm requires only two parameters for each orientation
(the real and the imaginary parts of B_K(θ)). This suggests that the new approach has
captured only those components of K that generalize across scales, disregarding all others.
While it is encouraging that the power-law algorithm is highly parsimonious, it also means
that fewer scene properties are encoded in the shape recipe kernels than was previously
hoped [3]. For example, complex properties of the material and surface reflectance cannot
be encoded. We believe that the B(θ) parameter of the power-law model can be determined
almost entirely by the direction of illumination and the prominence of cast shadows (see figure 1b). This suggests that the power-law algorithm of this paper would work equally well
for scenes with multiple materials. To capture more complex material properties, nonlinear
methods and probabilistic methods may achieve greater success. However, when designing
these more sophisticated methods, care must be taken to avoid the same pitfall encountered
by shape recipes: not all properties of a scene can be scale-invariant simultaneously.
8 Appendix

Shape recipes infer each high resolution band of the range using equation 1. Let λ = 2^(n−m).
If we take the Fourier transform of equation 1, we get

    Z_high · F_{m,θ} = (1/c^(n−m)) K_{n,θ}(u/λ, v/λ) · (I · F_{m,θ})        (7)

where F_{m,θ} is the Fourier transform of the steerable filter at level m and orientation θ, and
Z_high is the inferred high spatial frequency components of the range image. If we take the
steerable pyramid decomposition of Z_high and then transform it back, we get Z_high again,
and so:

    I · K_recipe = Z_high = Σ_{m<n, θ} Z_high · F_{m,θ} · F*_{m,θ}        (8)

                         = I · Σ_{m<n, θ} (1/c^(n−m)) K_{n,θ}(u/λ, v/λ) · F_{m,θ} · F*_{m,θ}        (9)

The steerable filters at each level are simply a dilation of the steerable filters of preceding
levels: F_{m,θ}(u, v) = F_{n,θ}(u/λ, v/λ). Thus, recalling that λ = 2^(n−m), we have

    K_recipe = Σ_{m<n, θ} (1/c^(n−m)) K_{n,θ}(u/λ, v/λ) · F_{n,θ}(u/λ, v/λ) · F*_{n,θ}(u/λ, v/λ)        (10)

The steerable filters F_{n,θ} are band-pass filters, and they are essentially zero outside of
octave n. Thus, each octave of K_recipe is identical to the octave before it, except reduced
by a constant scale factor c. In other words, shape recipes extrapolate K_low by copying the
highest available octave of K_low (or some estimation of it) into each successive octave. An
example of K_recipe can be seen in figure 2d.
This research was funded in part by NSF IIS-0413211, Penn Dept of Health-MPC 05-06-2.
Brian Potetz is supported by an NSF Graduate Research Fellowship.
References
[1] J. E. Cryer, P. S. Tsai and M. Shah, "Integration of shape from shading and stereo," Pattern
Recognition, 28(7):1033–1043, 1995.
[2] W. T. Freeman, E. H. Adelson, "The design and use of steerable filters," IEEE Transactions on
Pattern Analysis and Machine Intelligence, 13, 891–906, 1991.
[3] W. T. Freeman and A. Torralba, "Shape Recipes: Scene representations that refer to the image,"
Advances in Neural Information Processing Systems 15 (NIPS), MIT Press, 2003.
[4] C. Q. Howe and D. Purves, "Range image statistics can explain the anomalous perception of
length," Proc. Nat. Acad. Sci. U.S.A. 99, 13184–13188, 2002.
[5] M. S. Langer and S. W. Zucker, "Shape-from-shading on a cloudy day," J. Opt. Soc. Am. A 11,
467–478, 1994.
[6] B. Potetz, T. S. Lee, "Statistical correlations between two-dimensional images and three-dimensional structures in natural scenes," J. Opt. Soc. Amer. A, 20, 1292–1303, 2003.
[7] D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," IJCV 47(1/2/3):7–42, April–June 2002.
[8] A. Torralba, A. Oliva, "Depth estimation from image structure," IEEE Transactions on Pattern
Analysis and Machine Intelligence, 24(9):1226–1238, 2002.
[9] A. Torralba and W. T. Freeman, "Properties and applications of shape recipes," IEEE Computer
Society Conference on Computer Vision and Pattern Recognition, 2003.
| 2927 |@word version:1 achievable:1 stronger:2 grey:2 km:1 confirms:1 covariance:1 decomposition:2 prominence:1 shading:12 reduction:2 uncovered:1 contains:1 series:1 existing:1 imaginary:9 current:2 recovered:1 yet:2 must:3 fn:4 shape:47 extrapolating:1 plot:5 drop:5 v:1 alone:1 cue:13 selected:1 devising:1 half:1 fewer:1 intelligence:2 plane:1 oblique:2 farther:3 record:1 coarse:1 provides:1 successive:3 along:1 direct:1 lowresolution:2 shorthand:1 ijcv:1 autocorrelation:1 cnbc:1 theoretically:1 acquired:3 expected:2 faceted:1 themselves:1 examine:1 growing:1 roughly:3 freeman:6 decomposed:1 pitfall:1 little:2 actual:1 armed:1 encouraging:2 underlying:5 finding:4 transformation:1 duplicating:1 every:2 oscillates:1 internally:1 imag:7 appear:1 penn:1 positive:1 before:2 tends:1 acad:1 despite:1 analyzing:1 interpolation:1 might:2 black:1 chose:1 examined:1 suggests:3 range:57 statistically:1 obeys:4 averaged:1 graduate:1 camera:1 responsible:1 steerable:13 empirical:2 significantly:1 thought:1 word:4 get:3 cannot:3 interior:1 applying:1 impossible:1 darkness:2 demonstrated:1 center:3 rural:1 independently:1 resolution:41 immediately:1 perceive:1 insight:2 importantly:1 coordinate:1 feel:1 construction:1 heavily:1 designing:2 hypothesis:1 pa:2 trend:3 recognition:2 predicts:4 database:12 observed:2 capture:1 region:6 counter:1 highest:8 ran:2 substantial:1 environment:3 broken:1 motivate:1 basis:3 completely:1 easily:2 joint:3 laser:5 describe:1 formation:1 choosing:2 outside:1 whose:1 encoded:2 solve:1 reconstruct:1 statistic:6 cov:2 think:2 transform:9 advantage:1 rock:1 propose:1 reconstruction:7 interaction:1 subtracting:1 zm:1 facade:2 achieve:1 recipe:37 regularity:2 enhancement:2 extending:1 r1:2 empty:1 perfect:1 object:6 derive:1 illustrate:1 develop:1 measured:3 dividing:1 implemented:1 c:1 shadow:15 predicted:2 soc:2 direction:3 foliage:4 closely:2 filter:19 human:3 material:5 violates:1 require:1 opt:2 brian:2 im:2 mathematically:2 extension:3 scanner:6 ground:5 tyler:1 cognition:2 predict:2 lm:1 claim:2 vary:1 torralba:6 a2:3 albedo:3 polar:4 proc:1 estimation:2 reflects:1 mit:1 sensor:1 avoid:1 june:1 improvement:10 greatly:1 am:1 inference:13 cloth:1 typically:3 integrated:1 hidden:1 pixel:6 orientation:10 development:1 spatial:12 integration:1 homogenous:1 equal:1 once:3 zz:1 identical:3 broad:1 look:1 adelson:1 nearly:1 others:1 duplicate:1 few:2 simultaneously:1 geometry:1 intended:1 occlusion:1 recalling:1 highly:4 evaluation:2 extreme:1 light:4 behind:1 accurate:2 unless:1 taylor:1 logarithm:1 theoretical:1 earlier:1 measuring:1 zn:1 uniform:1 usefulness:1 successful:1 kn:5 varies:1 combined:2 devoid:1 explores:1 lee:2 physic:1 probabilistic:2 off:5 enhance:1 together:1 quickly:1 squared:6 again:2 slowly:2 possibly:1 suggesting:1 potential:1 includes:1 coefficient:2 explicitly:1 caused:1 depends:1 performed:1 observer:1 extrapolation:1 doing:1 purves:1 cos3:1 contribution:1 ass:1 formed:1 variance:1 correspond:2 generalize:3 lighting:4 explain:1 strongest:1 reach:1 inexpensive:1 frequency:20 bzi:2 obvious:1 mpc:1 associated:1 attributed:1 color:3 fractional:1 uncover:1 sophisticated:1 back:1 appears:2 higher:3 day:2 follow:1 improved:2 april:1 amer:1 evaluated:2 strongly:1 furthermore:3 just:1 correlation:8 horizontal:3 nonlinear:1 lack:1 interfere:1 outdoors:1 believe:4 building:2 effect:2 ranged:1 true:1 requiring:1 symmetric:1 sin:2 octave:19 highresolution:1 ridge:2 demonstrate:1 percent:1 reasoning:1 image:90 superior:1 physical:1 discussed:1 mellon:2 refer:1 
| 2,123 | 2,928 |
Dual-Tree Fast Gauss Transforms
Dongryeol Lee
Computer Science
Carnegie Mellon Univ.
[email protected]
Alexander Gray
Computer Science
Carnegie Mellon Univ.
[email protected]
Andrew Moore
Computer Science
Carnegie Mellon Univ.
[email protected]
Abstract
In previous work we presented an efficient approach to computing kernel summations which arise in many machine learning methods such as
kernel density estimation. This approach, dual-tree recursion with finitedifference approximation, generalized existing methods for similar problems arising in computational physics in two ways appropriate for statistical problems: toward distribution sensitivity and general dimension,
partly by avoiding series expansions. While this proved to be the fastest
practical method for multivariate kernel density estimation at the optimal
bandwidth, it is much less efficient at larger-than-optimal bandwidths.
In this work, we explore the extent to which the dual-tree approach can
be integrated with multipole-like Hermite expansions in order to achieve
reasonable efficiency across all bandwidth scales, though only for low dimensionalities. In the process, we derive and demonstrate the first truly
hierarchical fast Gauss transforms, effectively combining the best tools
from discrete algorithms and continuous approximation theory.
1 Fast Gaussian Summation
Kernel summations are fundamental in both statistics/learning and computational physics.
This paper will focus on the common form G(x_q) = Σ_{r=1}^{N_R} e^(−||x_q − x_r||²/(2h²)), i.e. where the kernel is the Gaussian kernel with scaling parameter, or bandwidth, h, there are N_R reference
points x_r, and we desire the sum for N_Q different query points x_q. Such kernel summations
appear in a wide array of statistical/learning methods [5], perhaps most obviously in kernel
density estimation [11], the most widely used distribution-free method for the fundamental
task of density estimation, which will be our main example. Understanding kernel summation algorithms from a recently developed unified perspective [5] begins with the picture of
Figure 1, then separately considers the discrete and continuous aspects.
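As a baseline for everything that follows, the naive O(N_Q · N_R) evaluation of G is the fully vectorized double sum below (a numpy sketch; names are ours):

    import numpy as np

    def naive_gauss_sum(queries, refs, h):
        """G(x_q) = sum_r exp(-||x_q - x_r||^2 / (2 h^2)), for every query row."""
        d2 = ((queries[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * h * h)).sum(axis=1)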
Discrete/geometric aspect. In terms of discrete algorithmic structure, the dual-tree framework of [5], in the context of kernel summation, generalizes all of the well-known algorithms. 1 It was applied to the problem of kernel density estimation in [7] using a simple
¹ These include the Barnes-Hut algorithm [2], the Fast Multipole Method [8], Appel's algorithm
[1], and the WSPD [4]: the dual-tree method is a node-node algorithm (considers query regions rather
than points), is fully recursive, can use distribution-sensitive data structures such as kd-trees, and is
bichromatic (can specialize for differing query and reference sets).
Figure 1: The basic idea is to approximate the kernel sum contribution of some subset of the reference points XR , lying in some compact region of space R with centroid xR , to a query point. In
more efficient schemes a query region is considered, i.e. the approximate contribution is made to an
entire subset of the query points XQ lying in some region of space Q, with centroid xQ .
finite-difference approximation, which is tantamount to a centroid approximation. Partially
by avoiding series expansions, which depend explicitly on the dimension, the result was
the fastest such algorithm for general dimension, when operating at the optimal bandwidth.
Unfortunately, when performing cross-validation to determine the (initially unknown) optimal bandwidth, both suboptimally small and large bandwidths must be evaluated. The
finite-difference-based dual-tree method tends to be efficient at or below the optimal bandwidth, and at very large bandwidths, but for intermediately-large bandwidths it suffers.
Continuous/approximation aspect. This motivates investigating a multipole-like series
approximation which is appropriate for the Gaussian kernel, as introduced by [9], which
can be shown to generalize the centroid approximation. We define the Hermite functions
h_n(t) by h_n(t) = e^(−t²) H_n(t), where the Hermite polynomials H_n(t) are defined by the
Rodrigues formula: H_n(t) = (−1)^n e^(t²) D^n e^(−t²), t ∈ ℝ. After scaling and shifting the argument t appropriately, then taking the product of univariate functions for each dimension,
we obtain the multivariate Hermite expansion

    G(x_q) = Σ_{r=1}^{N_R} e^(−||x_q − x_r||²/(2h²)) = Σ_{r=1}^{N_R} Σ_{α≥0} (1/α!) ((x_r − x_R)/√(2h²))^α h_α((x_q − x_R)/√(2h²))        (1)

where we've adopted the usual multi-index notation as in [9]. This can be re-written as

    G(x_q) = Σ_{r=1}^{N_R} e^(−||x_q − x_r||²/(2h²)) = Σ_{r=1}^{N_R} Σ_{β≥0} (1/β!) h_β((x_r − x_Q)/√(2h²)) ((x_q − x_Q)/√(2h²))^β        (2)

to express the sum as a Taylor (local) expansion about a nearby representative centroid x_Q
in the query region. We will be using both types of expansions simultaneously.
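To make the far-field expansion concrete, here is a one-dimensional numpy sketch of equation 1 (our own illustration; with D = 1 the multi-indices reduce to plain integers). The recurrence h_{n+1}(t) = 2t·h_n(t) − 2n·h_{n−1}(t) follows directly from the Hermite polynomial recurrence.

    import numpy as np
    from math import factorial

    def hermite_functions(t, p):
        """h_n(t) = exp(-t^2) H_n(t) for n = 0..p-1, via the standard recurrence."""
        h = np.empty((p, len(t)))
        h[0] = np.exp(-t * t)
        if p > 1:
            h[1] = 2.0 * t * h[0]
        for n in range(1, p - 1):
            h[n + 1] = 2.0 * t * h[n] - 2.0 * n * h[n - 1]
        return h

    def hermite_expansion_1d(queries, refs, h, x_R, p):
        s = np.sqrt(2.0 * h * h)
        t_ref = (refs - x_R) / s
        # Hermite moments A_alpha, alpha = 0..p-1 (depend only on the references)
        A = np.array([np.sum(t_ref ** a) / factorial(a) for a in range(p)])
        H = hermite_functions((queries - x_R) / s, p)
        return A @ H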
Since series approximations only hold locally, Greengard and Rokhlin [8] showed that it
is useful to think in terms of a set of three "translation operators" for converting between
expansions centered at different points, in order to create their celebrated hierarchical algorithm. This was done in the context of the Coulombic kernel, but the Gaussian kernel has
importantly different mathematical properties. The original Fast Gauss Transform (FGT)
[9] was based on a flat grid, and thus provided only one operator ("H2L" of the next sec-
Fast Gauss Transform (IFGT) [14] was based on a flat set of clusters and provided no operators with a rearranged series approximation, which intended to be more favorable in
higher dimensions but had an incorrect error bound. We will show the derivations of all
the translation operators and associated error bounds needed to obtain, for the first time, a
hierarchical algorithm for the Gaussian kernel.
2 Translation Operators and Error Bounds
The first operator converts a multipole expansion of a reference node to form a local expansion centered at the centroid of the query node, and is our main approximation workhorse.
Lemma 2.1. Hermite-to-local (H2L) translation operator for Gaussian kernel (as presented in Lemma 2.2 in [9, 10]): Given a reference node X_R, a query node X_Q, and the
Hermite expansion centered at a centroid x_R of X_R: G(x_q) = Σ_{α≥0} A_α h_α((x_q − x_R)/√(2h²)), the
Taylor expansion of the Hermite expansion at the centroid x_Q of the query node X_Q is
given by G(x_q) = Σ_{β≥0} B_β ((x_q − x_Q)/√(2h²))^β, where B_β = ((−1)^|β|/β!) Σ_{α≥0} A_α h_{α+β}((x_Q − x_R)/√(2h²)).
Proof. (sketch) The proof consists of replacing the Hermite function portion of the expansion with its Taylor series.
Note that we can rewrite G(x_q) = Σ_{α≥0} [Σ_{r=1}^{N_R} (1/α!) ((x_r − x_R)/√(2h²))^α] h_α((x_q − x_R)/√(2h²)) by interchanging
the summation order, such that the term in the brackets depends only on the reference
points, and can thus be computed independent of any query location; we will call such
terms Hermite moments. The next operator allows the efficient pre-computation of the
Hermite moments in the reference tree in a bottom-up fashion from its children.
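In one dimension, Lemma 2.1 reduces to the small routine below, which turns the Hermite moments A_α of a reference node into local coefficients B_β about a query centroid; this is our own sketch (hermite_functions is the helper from the sketch following equation 2), truncated to p terms in both indices.

    import numpy as np
    from math import factorial

    def h2l_translate(A, x_R, x_Q, h, p):
        """B_b = ((-1)^b / b!) * sum_a A_a * h_{a+b}((x_Q - x_R)/sqrt(2 h^2))."""
        s = np.sqrt(2.0 * h * h)
        hv = hermite_functions(np.array([(x_Q - x_R) / s]), 2 * p)[:, 0]  # orders up to 2p-2
        B = np.empty(p)
        for b in range(p):
            B[b] = ((-1) ** b / factorial(b)) * sum(A[a] * hv[a + b] for a in range(p))
        return B

    def eval_local(B, x_Q, h, xq):
        t = (xq - x_Q) / np.sqrt(2.0 * h * h)
        return sum(B[b] * t ** b for b in range(len(B)))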
Lemma 2.2. Hermite-to-Hermite (H2H) translation operator for Gaussian kernel:
Given the Hermite expansion centered at a centroid x_{R′} in a reference node X_{R′}:
G(x_q) = Σ_{α≥0} A′_α h_α((x_q − x_{R′})/√(2h²)), this same Hermite expansion shifted to a new location x_R of the parent node of X_{R′} is given by G(x_q) = Σ_{γ≥0} A_γ h_γ((x_q − x_R)/√(2h²)), where
A_γ = Σ_{0≤β≤γ} (1/(γ−β)!) A′_β ((x_{R′} − x_R)/√(2h²))^(γ−β).
Proof. We simply replace the Hermite function part of the expansion by a new Taylor
series, as follows:

    G(x_q) = Σ_{β≥0} A′_β h_β((x_q − x_{R′})/√(2h²))

           = Σ_{β≥0} A′_β Σ_{α≥0} (1/α!) ((x_R − x_{R′})/√(2h²))^α (−1)^|α| h_{α+β}((x_q − x_R)/√(2h²))

           = Σ_{β≥0} Σ_{α≥0} A′_β (1/α!) ((x_{R′} − x_R)/√(2h²))^α h_{α+β}((x_q − x_R)/√(2h²))

           = Σ_{γ≥0} [ Σ_{0≤β≤γ} (1/(γ−β)!) A′_β ((x_{R′} − x_R)/√(2h²))^(γ−β) ] h_γ((x_q − x_R)/√(2h²))

where γ = α + β.
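A matching one-dimensional sketch of Lemma 2.2 (our own code): shift a child node's Hermite moments to the parent centroid. Truncating the shifted series to the same order p is an implementation choice, not part of the lemma.

    import numpy as np
    from math import factorial

    def h2h_translate(A_child, x_child, x_parent, h):
        """A_g = sum_{b<=g} A'_b * d^(g-b) / (g-b)!, with d = (x_child - x_parent)/sqrt(2 h^2)."""
        p = len(A_child)
        d = (x_child - x_parent) / np.sqrt(2.0 * h * h)
        A = np.zeros(p)
        for g in range(p):
            A[g] = sum(A_child[b] * d ** (g - b) / factorial(g - b) for b in range(g + 1))
        return A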
X
A?? h?
?
The next operator acts as a "clean-up" routine in a hierarchical algorithm. Since we can
approximate at different scales in the query tree, we must somehow combine all the approximations at the end of the computation. By performing a breadth-first traversal of the
query tree, the L2L operator shifts a node's local expansion to the centroid of each child.
Lemma 2.3. Local-to-local (L2L) translation operator for Gaussian kernel:
Given a Taylor expansion centered at a centroid x_{Q′} of a query node X_{Q′}:
G(x_q) = Σ_{β≥0} B_β ((x_q − x_{Q′})/√(2h²))^β, the Taylor expansion obtained by shifting
this expansion to the new centroid x_Q of the child node X_Q is

    G(x_q) = Σ_{α≥0} [ Σ_{β≥α} (β!/(α!(β−α)!)) B_β ((x_Q − x_{Q′})/√(2h²))^(β−α) ] ((x_q − x_Q)/√(2h²))^α.

Proof. Applying the multinomial theorem to expand about the new center x_Q yields:

    G(x_q) = Σ_{β≥0} B_β ((x_q − x_{Q′})/√(2h²))^β
           = Σ_{β≥0} Σ_{α≤β} B_β (β!/(α!(β−α)!)) ((x_Q − x_{Q′})/√(2h²))^(β−α) ((x_q − x_Q)/√(2h²))^α,

whose summation order can be interchanged to achieve the result.
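And the corresponding one-dimensional sketch of Lemma 2.3 (again our own naming), re-centering a truncated local expansion at a child centroid:

    import numpy as np
    from math import factorial

    def l2l_translate(B_parent, x_parent, x_child, h):
        """C_a = sum_{b>=a} B_b * binom(b, a) * d^(b-a), d = (x_child - x_parent)/sqrt(2 h^2)."""
        p = len(B_parent)
        d = (x_child - x_parent) / np.sqrt(2.0 * h * h)
        C = np.zeros(p)
        for a in range(p):
            C[a] = sum(B_parent[b] * factorial(b) / (factorial(a) * factorial(b - a))
                       * d ** (b - a) for b in range(a, p))
        return C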
Because the Hermite and the Taylor expansions are truncated after taking p^D terms, we incur
an error in approximation. The original error bounds for the Gaussian kernel in [9, 10] were
wrong and corrections were shown in [3]. Here, we will present all three necessary error
bounds incurred in performing translation operators. We note that these error bounds place
limits on the size of the query node and the reference node.²

Lemma 2.4. Error Bound for Truncating an Hermite Expansion (as presented in [3]):
Suppose we are given an Hermite expansion of a reference node X_R about its centroid x_R:
G(x_q) = Σ_{α≥0} A_α h_α((x_q − x_R)/√(2h²)), where A_α = Σ_{r=1}^{N_R} (1/α!) ((x_r − x_R)/√(2h²))^α. For any
query point x_q, the error due to truncating the series after the first p^D terms is

    |ε_M(p)| ≤ (N_R/(1−r)^D) Σ_{k=0}^{D−1} C(D,k) (1 − r^p)^k (r^p/√(p!))^(D−k)

where C(D,k) denotes the binomial coefficient, and every x_r ∈ X_R satisfies ||x_r − x_R||_∞ < rh for r < 1.

Proof. (sketch) We expand the Hermite expansion as a product of one-dimensional Hermite functions, and utilize a bound on one-dimensional Hermite functions due to [13]:
(1/√(n!)) |h_n(x)| ≤ 2^(n/2) e^(−x²/2), for n ≥ 0, x ∈ ℝ.
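Operationally, this bound is used to select the smallest truncation order p that meets a tolerance, exactly as in the algorithm of section 3; here is a direct sketch (our own code, with an arbitrary cap on p):

    from math import comb, factorial, sqrt

    def smallest_hermite_order(N_R, D, r, tol, p_max=30):
        """Smallest p with the Lemma 2.4 error bound below tol, or None if the cap is hit."""
        assert 0 < r < 1
        for p in range(1, p_max + 1):
            bound = (N_R / (1.0 - r) ** D) * sum(
                comb(D, k) * (1.0 - r ** p) ** k * (r ** p / sqrt(factorial(p))) ** (D - k)
                for k in range(D))
            if bound < tol:
                return p
        return None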
Lemma 2.5. Error Bound for Truncating a Taylor Expansion Converted from an
Hermite Expansion of Infinite Order: Suppose we are given the following Taylor expansion about the centroid x_Q of a query node: G(x_q) = Σ_{β≥0} B_β ((x_q − x_Q)/√(2h²))^β, where
B_β = ((−1)^|β|/β!) Σ_{α≥0} A_α h_{α+β}((x_Q − x_R)/√(2h²)) and the A_α's are the coefficients of the
Hermite expansion centered at the reference node centroid x_R. Then, truncating the series
after p^D terms satisfies the error bound

    |ε_L(p)| ≤ (N_R/(1−r)^D) Σ_{k=0}^{D−1} C(D,k) (1 − r^p)^k (r^p/√(p!))^(D−k)

where ||x_q − x_Q||_∞ < rh for r < 1, ∀x_q ∈ X_Q.

² Strain [12] proposed the interesting idea of using Stirling's formula (for any non-negative integer
n: ((n+1)/e)^n ≤ n!) to lift the node size constraint; one might imagine that this could allow approximation of larger regions in a tree-based algorithm. Unfortunately, the error bounds developed in [12]
were also incorrect. We have derived the three necessary corrected error bounds based on the techniques in [3]. However, due to space, and because using these bounds actually degraded performance
slightly, we do not include those lemmas here.
Proof. Taylor expansion of the Hermite function yields

    e^(−||x_q − x_r||²/(2h²)) = Σ_{β≥0} ((−1)^|β|/β!) [ Σ_{α≥0} (1/α!) ((x_r − x_R)/√(2h²))^α h_{α+β}((x_Q − x_R)/√(2h²)) ] ((x_q − x_Q)/√(2h²))^β

                              = Σ_{β≥0} ((−1)^|β|/β!) [ Σ_{α≥0} (1/α!) (−1)^|α| ((x_R − x_r)/√(2h²))^α h_{α+β}((x_Q − x_R)/√(2h²)) ] ((x_q − x_Q)/√(2h²))^β

                              = Σ_{β≥0} ((−1)^|β|/β!) h_β((x_Q − x_r)/√(2h²)) ((x_q − x_Q)/√(2h²))^β

Now write e^(−||x_q − x_r||²/(2h²)) = Π_{i=1}^{D} (u_p(x_{q_i}, x_{r_i}, x_{Q_i}) + v_p(x_{q_i}, x_{r_i}, x_{Q_i})) for 1 ≤ i ≤ D, where

    u_p(x_{q_i}, x_{r_i}, x_{Q_i}) = Σ_{n_i=0}^{p−1} ((−1)^{n_i}/n_i!) h_{n_i}((x_{Q_i} − x_{r_i})/√(2h²)) ((x_{q_i} − x_{Q_i})/√(2h²))^{n_i}

    v_p(x_{q_i}, x_{r_i}, x_{Q_i}) = Σ_{n_i=p}^{∞} ((−1)^{n_i}/n_i!) h_{n_i}((x_{Q_i} − x_{r_i})/√(2h²)) ((x_{q_i} − x_{Q_i})/√(2h²))^{n_i}.

These univariate functions respectively satisfy u_p ≤ (1 − r^p)/(1 − r) and
v_p ≤ (1/√(p!)) · r^p/(1 − r), for 1 ≤ i ≤ D, achieving the multivariate bound.
Lemma 2.6. Error Bound for Truncating a Taylor Expansion Converted from an Already Truncated Hermite Expansion: A truncated Hermite expansion centered about
the centroid x_R of a reference node, G(x_q) = Σ_{α<p} A_α h_α((x_q − x_R)/√(2h²)), has the following
Taylor expansion about the centroid x_Q of a query node: G(x_q) = Σ_{β≥0} C_β ((x_q − x_Q)/√(2h²))^β,
where the coefficients C_β are given by C_β = ((−1)^|β|/β!) Σ_{α<p} A_α h_{α+β}((x_Q − x_R)/√(2h²)). Truncating the series after p^D terms satisfies the error bound

    |ε_L(p)| ≤ (N_R/(1−2r)^(2D)) Σ_{k=0}^{D−1} C(D,k) ((1 − (2r)^p)²)^k ( (2r)^p (2 − (2r)^p) / √(p!) )^(D−k)

for a query node X_Q for which ||x_q − x_Q||_∞ < rh, and
a reference node X_R for which ||x_r − x_R||_∞ < rh for r < 1/2, ∀x_q ∈ X_Q, ∀x_r ∈ X_R.
Proof. We define u_{p_i} = u_p(x_{q_i}, x_{r_i}, x_{Q_i}, x_{R_i}), v_{p_i} = v_p(x_{q_i}, x_{r_i}, x_{Q_i}, x_{R_i}), and
w_{p_i} = w_p(x_{q_i}, x_{r_i}, x_{Q_i}, x_{R_i}) for 1 ≤ i ≤ D:

    u_{p_i} = Σ_{n_i=0}^{p−1} ((−1)^{n_i}/n_i!) Σ_{n_j=0}^{p−1} (1/n_j!) (−1)^{n_j} ((x_{R_i} − x_{r_i})/√(2h²))^{n_j} h_{n_i+n_j}((x_{Q_i} − x_{R_i})/√(2h²)) ((x_{q_i} − x_{Q_i})/√(2h²))^{n_i}

    v_{p_i} = Σ_{n_i=0}^{p−1} ((−1)^{n_i}/n_i!) Σ_{n_j=p}^{∞} (1/n_j!) (−1)^{n_j} ((x_{R_i} − x_{r_i})/√(2h²))^{n_j} h_{n_i+n_j}((x_{Q_i} − x_{R_i})/√(2h²)) ((x_{q_i} − x_{Q_i})/√(2h²))^{n_i}

    w_{p_i} = Σ_{n_i=p}^{∞} ((−1)^{n_i}/n_i!) Σ_{n_j=0}^{∞} (1/n_j!) (−1)^{n_j} ((x_{R_i} − x_{r_i})/√(2h²))^{n_j} h_{n_i+n_j}((x_{Q_i} − x_{R_i})/√(2h²)) ((x_{q_i} − x_{Q_i})/√(2h²))^{n_i}

Note that e^(−||x_q − x_r||²/(2h²)) = Π_{i=1}^{D} (u_{p_i} + v_{p_i} + w_{p_i}) for 1 ≤ i ≤ D. Using the bound for
Hermite functions and the property of geometric series, we obtain the following upper
bounds:

    u_{p_i} ≤ Σ_{n_i=0}^{p−1} Σ_{n_j=0}^{p−1} (2r)^{n_i} (2r)^{n_j} = ((1 − (2r)^p)/(1 − 2r))²

    v_{p_i} ≤ (1/√(p!)) Σ_{n_i=0}^{p−1} Σ_{n_j=p}^{∞} (2r)^{n_i} (2r)^{n_j} = (1/√(p!)) ((1 − (2r)^p)/(1 − 2r)) ((2r)^p/(1 − 2r))

    w_{p_i} ≤ (1/√(p!)) Σ_{n_i=p}^{∞} Σ_{n_j=0}^{∞} (2r)^{n_i} (2r)^{n_j} = (1/√(p!)) (1/(1 − 2r)) ((2r)^p/(1 − 2r))

Therefore,

    | e^(−||x_q − x_r||²/(2h²)) − Π_{i=1}^{D} u_{p_i} | ≤ (1/(1 − 2r))^(2D) Σ_{k=0}^{D−1} C(D,k) ((1 − (2r)^p)²)^k ( (2r)^p (2 − (2r)^p) / √(p!) )^(D−k)

    | G(x_q) − Σ_{β<p} C_β ((x_q − x_Q)/√(2h²))^β | ≤ (N_R/(1 − 2r)^(2D)) Σ_{k=0}^{D−1} C(D,k) ((1 − (2r)^p)²)^k ( (2r)^p (2 − (2r)^p) / √(p!) )^(D−k)
3 Algorithm and Results
Algorithm.
The algorithm mainly consists of making the function call
DFGT(Q.root,R.root), i.e. calling the recursive function DFGT() with the root
nodes of the query tree and reference tree. After the DFGT() routine is completed, the
pre-order traversal of the query tree implied by the L2L operator is performed. Before the
DFGT() routine is called, the reference tree could be initialized with Hermite coefficients
stored in each node using the H2H translation operator, but instead we will compute
them as needed on the fly. It adaptively chooses among three possible methods for
approximating the summation contribution of the points in node R to the queries in node
Q, which are self-explanatory, based on crude operation count estimates. G_Q^min, a running
lower bound on the kernel sum G(x_q) for any x_q ∈ X_Q, is used to ensure locally that
the global relative error is ε or less. This automatic mechanism allows the user to specify
only an error tolerance ε rather than other tweak parameters. Upon approximation, the
upper and lower bounds on G for Q and all its children are updated; the latter can be
done in an O(1) delayed fashion as in [7]. The remainder of the routine implements the
characteristic four-way dual-tree recursion. We also tested a hybrid method (DFGTH)
which approximates if either of the DFD or DFGT approximation criteria are met.
Experimental results. We empirically studied the runtime 3 performance of five algorithms on five real-world datasets for kernel density estimation at every query point with a
range of bandwidths, from 3 orders of magnitude smaller than optimal to three orders larger
than optimal, according to the standard least-squares cross-validation score [11]. The naive
³ All times include all preprocessing costs including any data structure construction. Times are
measured in CPU seconds on a dual-processor AMD Opteron 242 machine with 8 Gb of main memory and 1 Mb of CPU cache. All the codes that we have written and obtained are written in C and
C++, and were compiled under -O6 -funroll-loops flags on Linux kernel 2.4.26.
algorithm computes the sum explicitly and thus exactly. We have limited all datasets to
50K points so that true relative error, i.e. |Ĝ(x_q) − G_true(x_q)|/G_true(x_q), can be evaluated, and set the tolerance at 1% relative error for all query points. When any method fails
to achieve the error tolerance in less time than twice that of the naive method, we give up.
Codes for the FGT [9] and for the IFGT [14] were obtained from the authors' websites.
Note that both of these methods require the user to tweak parameters, while the others are
automatic. 4 DFD refers to the depth-first dual-tree finite-difference method [7].
DFGT(Q, R)
  p_DH = p_DL = p_H2L = ∞
  if R.maxside < 2h, p_DH = the smallest p ≥ 1 such that
      (N_R/(1−r)^D) Σ_{k=0}^{D−1} C(D,k) (1 − r^p)^k (r^p/√(p!))^(D−k) < ε G_Q^min.
  if Q.maxside < 2h, p_DL = the smallest p ≥ 1 such that
      (N_R/(1−r)^D) Σ_{k=0}^{D−1} C(D,k) (1 − r^p)^k (r^p/√(p!))^(D−k) < ε G_Q^min.
  if max(Q.maxside, R.maxside) < h, p_H2L = the smallest p ≥ 1 such that
      (N_R/(1−2r)^(2D)) Σ_{k=0}^{D−1} C(D,k) ((1 − (2r)^p)²)^k ((2r)^p (2 − (2r)^p)/√(p!))^(D−k) < ε G_Q^min.
  c_DH = p_DH^D N_Q.  c_DL = p_DL^D N_R.  c_H2L = D p_H2L^(D+1).  c_Direct = D N_Q N_R.
  if no Hermite coefficient of order p_DH exists for X_R,
      Compute it. c_DH = c_DH + p_DH^D N_R.
  if no Hermite coefficient of order p_H2L exists for X_R,
      Compute it. c_H2L = c_H2L + p_H2L^D N_R.
  c = min(c_DH, c_DL, c_H2L, c_Direct).
  if c = c_DH < ∞, (Direct Hermite)
      Evaluate each x_q at the Hermite series of order p_DH centered about x_R of X_R
      using Equation 1.
  if c = c_DL < ∞, (Direct Local)
      Accumulate each x_r ∈ X_R as the Taylor series of order p_DL about the center
      x_Q of X_Q using Equation 2.
  if c = c_H2L < ∞, (Hermite-to-Local)
      Convert the Hermite series of order p_H2L centered about x_R of X_R to the Taylor
      series of the same order centered about x_Q of X_Q using Lemma 2.1.
  if c ≠ c_Direct,
      Update G^min and G^max in Q and all its children. return.
  if leaf(Q) and leaf(R),
      Perform the naive algorithm on every pair of points in Q and R.
  else
      DFGT(Q.left, R.left).  DFGT(Q.left, R.right).
      DFGT(Q.right, R.left). DFGT(Q.right, R.right).
⁴ For the FGT, note that the algorithm only ensures |Ĝ(x_q) − G_true(x_q)| ≤ τ. Therefore, we
first set τ = ε, halving τ until the error tolerance ε was met. For the IFGT, which has multiple
parameters that must be tweaked simultaneously, an automatic scheme was created, based on the
recommendations given in the paper and software documentation: For D = 2, use p = 8; for D = 3,
use p = 6; set ρ_x = 2.5; start with K = √N and double K until the error tolerance is met. When this
failed to meet the tolerance, we resorted to additional trial and error by hand. The costs of parameter
selection for these methods in both computer and human time is not included in the table.
sj2-50000-2 (astronomy: positions), D = 2, N = 50000, h* = 0.00139506

Algorithm \ scale   0.001       0.01        0.1         1           10          100         1000
Naive               301.696     301.696     301.696     301.696     301.696     301.696     301.696
FGT                 out of RAM  out of RAM  out of RAM  3.892312    2.01846     0.319538    0.183616
IFGT                > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   7.576783
DFD                 0.837724    1.087066    1.658592    6.018158    62.077669   151.590062  1.551019
DFGT                0.849935    1.11567     4.599235    72.435177   18.450387   2.777454    2.532401
DFGTH               0.846294    1.10654     1.683913    6.265131    5.063365    1.036626    0.68471

colors50k (astronomy: colors), D = 2, N = 50000, h* = 0.0016911

Algorithm \ scale   0.001       0.01        0.1         1           10          100         1000
Naive               301.696     301.696     301.696     301.696     301.696     301.696     301.696
FGT                 out of RAM  out of RAM  out of RAM  > 2×Naive   > 2×Naive   0.475281    0.114430
IFGT                > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   7.55986
DFD                 1.095838    1.469454    2.802112    30.294007   280.633106  81.373053   3.604753
DFGT                1.099828    1.983888    29.231309   285.719266  12.886239   5.336602    3.5638
DFGTH               1.081216    1.47692     2.855083    24.598749   7.142465    1.78648     0.627554

edsgc-radec-rnd (astronomy: angles), D = 2, N = 50000, h* = 0.00466204

Algorithm \ scale   0.001       0.01        0.1         1           10          100         1000
Naive               301.696     301.696     301.696     301.696     301.696     301.696     301.696
FGT                 out of RAM  out of RAM  out of RAM  2.859245    1.768738    0.210799    0.059664
IFGT                > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   7.585585
DFD                 0.812462    1.083528    1.682261    5.860172    63.849361   357.099354  0.743045
DFGT                0.84023     1.120015    4.346061    73.036687   21.652047   3.424304    1.977302
DFGTH               0.821672    1.104545    1.737799    6.037217    5.7398      1.883216    0.436596

mockgalaxy-D-1M-rnd (cosmology: positions), D = 3, N = 50000, h* = 0.000768201

Algorithm \ scale   0.001       0.01        0.1         1           10          100         1000
Naive               354.868751  354.868751  354.868751  354.868751  354.868751  354.868751  354.868751
FGT                 out of RAM  out of RAM  out of RAM  out of RAM  > 2×Naive   > 2×Naive   > 2×Naive
IFGT                > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive
DFD                 0.70054     0.701547    0.761524    0.843451    1.086608    42.022605   383.12048
DFGT                0.73007     0.733638    0.799711    0.999316    50.619588   125.059911  109.353701
DFGTH               0.724004    0.719951    0.789002    0.877564    1.265064    22.6106     87.488392

bio5-rnd (biology: drug activity), D = 5, N = 50000, h* = 0.000567161

Algorithm \ scale   0.001       0.01        0.1         1           10          100         1000
Naive               364.439228  364.439228  364.439228  364.439228  364.439228  364.439228  364.439228
FGT                 out of RAM  out of RAM  out of RAM  out of RAM  out of RAM  out of RAM  out of RAM
IFGT                > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive
DFD                 2.249868    2.4958865   4.70948     12.065697   94.345003   412.39142   107.675935
DFGT                > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive
DFGTH               > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive   > 2×Naive
Discussion. The experiments indicate that the DFGTH method is able to achieve reasonable performance across all bandwidth scales. Unfortunately none of the series
approximation-based methods do well on the 5-dimensional data, as expected, highlighting the main weakness of the approach presented. Pursuing corrections to the error bounds
necessary to use the intriguing series form of [14] may allow an increase in dimensionality.
References
[1] A. W. Appel. An Efficient Program for Many-Body Simulations. SIAM Journal on Scientific and Statistical Computing,
6(1):85–103, 1985.
[2] J. Barnes and P. Hut. A Hierarchical O(N log N) Force-Calculation Algorithm. Nature, 324, 1986.
[3] B. Baxter and G. Roussos. A new error estimate of the fast gauss transform. SIAM Journal on Scientific Computing,
24(1):257–259, 2002.
[4] P. Callahan and S. Kosaraju. A decomposition of multidimensional point sets with applications to k-nearest-neighbors and
n-body potential fields. Journal of the ACM, 62(1):67–90, January 1995.
[5] A. Gray and A. W. Moore. N-Body Problems in Statistical Learning. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors,
Advances in Neural Information Processing Systems 13 (December 2000). MIT Press, 2001.
[6] A. G. Gray. Bringing Tractability to Generalized N-Body Problems in Statistical and Scientific Computation. PhD thesis,
Carnegie Mellon University, 2003.
[7] A. G. Gray and A. W. Moore. Rapid Evaluation of Multiple Density Models. In Artificial Intelligence and Statistics 2003,
2003.
[8] L. Greengard and V. Rokhlin. A Fast Algorithm for Particle Simulations. Journal of Computational Physics, 73, 1987.
[9] L. Greengard and J. Strain. The fast gauss transform. SIAM Journal on Scientific and Statistical Computing, 12(1):79–94,
1991.
[10] L. Greengard and X. Sun. A new version of the fast gauss transform. Documenta Mathematica, Extra Volume ICM(III):575–584, 1998.
[11] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, 1986.
[12] J. Strain. The fast gauss transform with variable scales. SIAM Journal on Scientific and Statistical Computing, 12:1131–1139, 1991.
[13] O. Szász. On the relative extrema of the hermite orthogonal functions. J. Indian Math. Soc., 15:129–134, 1951.
[14] C. Yang, R. Duraiswami, N. A. Gumerov, and L. Davis. Improved fast gauss transform and efficient kernel density estimation. International Conference on Computer Vision, 2003.
Correcting sample selection bias in maximum
entropy density estimation
Miroslav Dudík, Robert E. Schapire
Princeton University
Department of Computer Science
35 Olden St, Princeton, NJ 08544
Steven J. Phillips
AT&T Labs ? Research
180 Park Ave, Florham Park, NJ 07932
[email protected]
{mdudik,schapire}@princeton.edu
Abstract
We study the problem of maximum entropy density estimation in the
presence of known sample selection bias. We propose three bias correction approaches. The first one takes advantage of unbiased sufficient
statistics which can be obtained from biased samples. The second one estimates the biased distribution and then factors the bias out. The third one
approximates the second by only using samples from the sampling distribution. We provide guarantees for the first two approaches and evaluate
the performance of all three approaches in synthetic experiments and on
real data from species habitat modeling, where maxent has been successfully applied and where sample selection bias is a significant problem.
1 Introduction
We study the problem of estimating a probability distribution, particularly in the context of
species habitat modeling. It is very common in distribution modeling to assume access to
independent samples from the distribution being estimated. In practice, this assumption is
violated for various reasons. For example, habitat modeling is typically based on known
occurrence locations derived from collections in natural history museums and herbariums
as well as biological surveys [1, 2, 3]. Here, the goal is to predict the species' distribution
as a function of climatic and other environmental variables. To achieve this in a statistically sound manner using current methods, it is necessary to assume that the sampling
distribution and species distributions are not correlated. In fact, however, most sampling is
done in locations that are easier to access, such as areas close to towns, roads, airports or
waterways [4]. Furthermore, the independence assumption may not hold since roads and
waterways are often correlated with topography and vegetation which influence species distributions. New unbiased sampling may be expensive, so much can be gained by using the
extensive existing biased data, especially since it is becoming freely available online [5].
Although the available data may have been collected in a biased manner, we usually
have some information available about the nature of the bias. For instance, in the case of
habitat modeling, some factors influencing the sampling distribution are well known, such
as distance from roads, towns, etc. In addition, a list of visited sites may be available and
viewed as a sample of the sampling distribution itself. If such a list is not available, the
set of sites where any species from a large group has been observed may be a reasonable
approximation of all visited locations.
In this paper, we study probability density estimation under sample selection bias. We
assume that the sampling distribution (or an approximation) is known during training, but
we require that unbiased models not use any knowledge of sample selection bias during
testing. This requirement is vital for habitat modeling where models are often applied to
a different region or under different climatic conditions. To our knowledge this is the first
work addressing sample selection bias in a statistically sound manner and in a setup suitable
for species habitat modeling from presence-only data.
We propose three approaches that incorporate sample selection bias in a common density estimation technique based on the principle of maximum entropy (maxent). Maxent with ℓ₁-regularization has been successfully used to model geographic distributions of species under the assumption that samples are unbiased [3]. We review ℓ₁-regularized
maxent with unbiased data in Section 2, and give details of the new approaches in Section 3.
Our three approaches make simple modifications to unbiased maxent and achieve analogous provable performance guarantees. The first approach uses a bias correction technique
similar to that of Zadrozny et al. [6, 7] to obtain unbiased confidence intervals from biased
samples as required by our version of maxent. We prove that, as in the unbiased case, this
produces models whose log loss approaches that of the best possible Gibbs distribution
(with increasing sample size).
In contrast, the second approach we propose first estimates the biased distribution and
then factors the bias out. When the target distribution is a Gibbs distribution, the solution
again approaches the log loss of the target distribution. When the target distribution is not
Gibbs, we demonstrate that the second approach need not produce the optimal Gibbs distribution (with respect to log loss) even in the limit of infinitely many samples. However,
we prove that it produces models that are almost as good as the best Gibbs distribution according to a certain Bregman divergence that depends on the selection bias. In addition, we
observe good empirical performance for moderate sample sizes. The third approach is an
approximation of the second approach which uses samples from the sampling distribution
instead of the distribution itself.
One of the challenges in studying methods for correcting sample selection bias is that
unbiased data sets, though not required during training, are needed as test sets to evaluate
performance. Unbiased data sets are difficult to obtain ? this is the very reason why
we study this problem! Thus, it is almost inevitable that synthetic data must be used. In
Section 4, we describe experiments evaluating performance of the three methods. We use
both fully synthetic data, as well as a biological dataset consisting of a biased training set
and an independently collected reasonably unbiased test set.
Related work. Sample selection bias also arises in econometrics where it stems from
factors such as attrition, nonresponse and self selection [8, 9, 10]. It has been extensively
studied in the context of linear regression after Heckman?s seminal paper [8] in which the
bias is first estimated and then a transform of the estimate is used as an additional regressor.
In the machine learning community, sample selection bias has been recently considered
for classification problems by Zadrozny [6]. Here the goal is to learn a decision rule from
a biased sample. The problem is closely related to cost-sensitive learning [11, 7] and the
same techniques such as resampling or differential weighting of samples apply.
However, the methods of the previous two approaches do not apply directly to density
estimation where the setup is ?unconditional?, i.e. there is no dependent variable, or, in the
classification terminology, we only have access to positive examples, and the cost function
(log loss) is unbounded. In addition, in the case of modeling species habitats, we face the
challenge of sample sizes that are very small (2?100) by machine learning standards.
2 Maxent setup
In this section, we describe the setup for unbiased maximum entropy density estimation
and review performance guarantees. We use a relaxed formulation which will yield an
?1 -regularization term in our objective function.
The goal is to estimate an unknown target distribution ? over a known sample space X
based on samples x1 , . . . , xm ? X . We assume that samples are independently distributed
according to ? and denote the empirical distribution by ?
? (x) = |{1 ? i ? m : xi =
x}|/m. The structure of the problem is specified by real valued functions fj : X ? R,
j = 1, . . . , n, called features and by a distribution q0 representing a default estimate. We
assume that features capture all the relevant information available for the problem at hand
and q0 is the distribution we would choose if we were given no samples. The distribution
q0 is most often assumed uniform.
For a limited number of samples, we expect that π̃ will be a poor estimate of π under any reasonable distance measure. However, empirical averages of features will not be too different from their expectations with respect to π. Let p[f] denote the expectation of a function f(x) when x is chosen randomly according to distribution p. We would like to find a distribution p which satisfies
|p[f_j] − π̃[f_j]| ≤ β_j for all 1 ≤ j ≤ n,   (1)
for some estimates β_j of deviations of empirical averages from their expectations. Usually
there will be infinitely many distributions satisfying these constraints. For the case when
the default distribution q0 is uniform, the maximum entropy principle tells us to choose
the distribution of maximum entropy satisfying these constraints. In general, we should
minimize the relative entropy from q0 . This corresponds to choosing the distribution that
satisfies the constraints (1) but imposes as little additional information as possible when
compared with q0 . Allowing for asymmetric constraints, we obtain the formulation
min_{p∈Δ} RE(p ‖ q₀)  subject to  ∀ 1 ≤ j ≤ n: a_j ≤ p[f_j] ≤ b_j.   (2)
Here, Δ ⊆ ℝ^X is the simplex of probability distributions and RE(p ‖ q) is the relative
entropy (or Kullback-Leibler divergence) from q to p, an information theoretic measure of
difference between the two distributions. It is non-negative, equal to zero only when the
two distributions are identical, and convex in its arguments.
Problem (2) is a convex program. Using Lagrange multipliers, we obtain that the solution takes the form
q_λ(x) = q₀(x) e^{λ·f(x)} / Z_λ   (3)
where Z_λ = Σ_x q₀(x) e^{λ·f(x)} is the normalization constant. Distributions q_λ of the form (3) will be referred to as q₀-Gibbs, or just Gibbs when no ambiguity arises.
Instead of solving (2) directly, we solve its dual:
min_{λ∈ℝⁿ}  log Z_λ − ½ Σ_j (b_j + a_j) λ_j + ½ Σ_j (b_j − a_j) |λ_j| .   (4)
We can choose from a range of general convex optimization techniques or use some of the algorithms in [12]. For the symmetric case when
[a_j, b_j] = [π̃[f_j] − β_j, π̃[f_j] + β_j],   (5)
the dual becomes
min_{λ∈ℝⁿ}  −π̃[log q_λ] + Σ_j β_j |λ_j| .   (6)
The first term is the empirical log loss (negative log likelihood), the second term is an ℓ₁-regularization. Small values of log loss mean a good fit to the data. This is balanced by
regularization forcing simpler models and hence preventing overfitting.
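As an illustration, a minimal sketch of evaluating a Gibbs distribution (3) and the regularized objective (6) on a finite sample space; the feature-matrix representation and names are our own, not from the original implementation:

import numpy as np

def gibbs(q0, F, lam):
    # q_lam(x) = q0(x) exp(lam . f(x)) / Z_lam, eq. (3); F is |X| x n
    w = q0 * np.exp(F @ lam)
    return w / w.sum()

def regularized_log_loss(lam, q0, F, sample_idx, beta):
    # -pi_tilde[log q_lam] + sum_j beta_j |lam_j|, eq. (6)
    q = gibbs(q0, F, lam)
    return -np.mean(np.log(q[sample_idx])) + np.dot(beta, np.abs(lam))

Minimizing regularized_log_loss over lam with any convex optimizer yields the regularized maxent solution.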
When all the primal constraints are satisfied by the target distribution π, then the solution q_λ̂ of the dual is guaranteed to be not much worse an approximation of π than the best Gibbs distribution q*. More precisely:
Theorem 1 (Performance guarantees, Theorem 1 of [12]). Assume that the distribution π satisfies the primal constraints (2). Let q_λ̂ be the solution of the dual (4). Then for an arbitrary Gibbs distribution q* = q_{λ*},
RE(π ‖ q_λ̂) ≤ RE(π ‖ q*) + Σ_j (b_j − a_j) |λ*_j|.
Input: finite domain X
       features f_1, ..., f_n where f_j: X → [0, 1]
       default estimate q₀
       regularization parameter λ > 0
       sampling distribution s
       samples x_1, ..., x_m ∈ X
Output: q_λ̂ approximating the target distribution π

Let σ₀ = λ/√m · min{ σ̃[1/s], (max 1/s − min 1/s)/2 }
    [c₀, d₀] = [π̃s[1/s] − σ₀, π̃s[1/s] + σ₀] ∩ [min 1/s, max 1/s]
For j = 1, ..., n:
    σ_j = λ/√m · min{ σ̃[f_j/s], (max f_j/s − min f_j/s)/2 }
    [c_j, d_j] = [π̃s[f_j/s] − σ_j, π̃s[f_j/s] + σ_j] ∩ [min f_j/s, max f_j/s]
    [a_j, b_j] = [c_j/d₀, d_j/c₀] ∩ [min f_j, max f_j]
Solve the dual (4)
Algorithm 1: DEBIAS AVERAGES.
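A sketch of the interval computation in Algorithm 1, under the simplifying assumption that the sample space is small enough to enumerate (function and variable names are ours):

import numpy as np

def debias_intervals(F, s, sample_idx, lam):
    # F: |X| x n feature matrix with values in [0,1]; s: sampling
    # distribution (strictly positive); sample_idx: the m biased samples.
    m = len(sample_idx)
    g = np.column_stack([1.0 / s] + [F[:, j] / s for j in range(F.shape[1])])
    emp = g[sample_idx]                         # samples of 1/s and f_j/s
    lo, hi = g.min(axis=0), g.max(axis=0)
    sigma = lam / np.sqrt(m) * np.minimum(emp.std(axis=0), (hi - lo) / 2)
    c = np.maximum(emp.mean(axis=0) - sigma, lo)
    d = np.minimum(emp.mean(axis=0) + sigma, hi)
    a = np.maximum(c[1:] / d[0], F.min(axis=0))
    b = np.minimum(d[1:] / c[0], F.max(axis=0))
    return a, b                                 # intervals for the dual (4)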
Table 1: Example 1. Comparison of distributions q* and q** minimizing RE(π ‖ q_λ) and RE(πs ‖ q_λs).

x   f(x)    π(x)   s(x)   πs(x)   q*(x)   q**s(x)   q**(x)
1   (0,0)   0.4    0.4    0.64    0.25    0.544     0.34
2   (0,1)   0.1    0.4    0.16    0.25    0.256     0.16
3   (1,0)   0.1    0.1    0.04    0.25    0.136     0.34
4   (1,1)   0.4    0.1    0.16    0.25    0.064     0.16
When features are bounded between 0 and 1, the symmetric box constraints (5) with β_j = O(√((log n)/m)) are satisfied with high probability by Hoeffding's inequality and the union bound. Then the relative entropy from q_λ̂ to π will not be worse than the relative entropy from any Gibbs distribution q* to π by more than O(‖λ*‖₁ √((log n)/m)).
In practice, we set
β_j = λ/√m · min{ σ̃[f_j], σ_max[f_j] }   (7)
where λ is a tuned constant, σ̃[f_j] is the sample deviation of f_j, and σ_max[f_j] is an upper bound on the standard deviation, such as (max_x f_j(x) − min_x f_j(x))/2. We refer to this algorithm for unbiased data as UNBIASED MAXENT.
3 Maxent with sample selection bias
In the biased case, the goal is to estimate the target distribution π, but samples do not come directly from π. For nonnegative functions p₁, p₂ defined on X, let p₁p₂ denote the distribution obtained by multiplying weights p₁(x) and p₂(x) at every point and renormalizing:
p₁p₂(x) = p₁(x) p₂(x) / Σ_{x′} p₁(x′) p₂(x′).
Samples x₁, ..., x_m come from the biased distribution πs, where s is the sampling distribution. This setup corresponds to the situation when an event occurs at the point x with probability π(x) while we perform an independent observation with probability s(x). The probability of observing an event at x, given that we observe an event, is then equal to πs(x). The empirical distribution of m samples drawn from πs will be denoted by π̃s. We assume that s is known (principal assumption, see introduction) and strictly positive (technical assumption).
Approach I: Debiasing Averages. In our first approach, we use the same algorithm as for the unbiased case but employ a different method to obtain the confidence intervals [a_j, b_j]. Since we do not have direct access to samples from π, we use a version of the Bias Correction Theorem of Zadrozny [6] to convert expectations with respect to πs to expectations with respect to π.
Theorem 2 (Bias Correction Theorem [6], Translation Theorem [7]). πs[f/s] / πs[1/s] = π[f].
Hence, it suffices to give confidence intervals for πs[f/s] and πs[1/s] to obtain confidence intervals for π[f].
Corollary 3. Assume that for some sample-derived bounds c_j, d_j, 0 ≤ j ≤ n, with high probability 0 < c₀ ≤ πs[1/s] ≤ d₀ and 0 ≤ c_j ≤ πs[f_j/s] ≤ d_j for all 1 ≤ j ≤ n. Then with at least the same probability c_j/d₀ ≤ π[f_j] ≤ d_j/c₀ for all 1 ≤ j ≤ n.
If s is bounded away from 0 then Chernoff bounds may be used to determine c_j, d_j. Corollary 3 and Theorem 1 then yield guarantees that this method's performance converges, with increasing sample sizes, to that of the "best" Gibbs distribution.
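The identity of Theorem 2 is easy to verify numerically; a sketch using the distributions of Table 1:

import numpy as np

pi = np.array([0.4, 0.1, 0.1, 0.4])               # target pi from Table 1
s  = np.array([0.4, 0.4, 0.1, 0.1])               # sampling distribution
f  = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])

pis = pi * s / (pi * s).sum()                     # biased distribution pi s
lhs = (pis[:, None] * f / s[:, None]).sum(0) / (pis / s).sum()
assert np.allclose(lhs, (pi[:, None] * f).sum(0)) # pis[f/s]/pis[1/s] = pi[f]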
In practice, confidence intervals [c_j, d_j] may be determined using expressions analogous to (5) and (7) for the random variables f_j/s, 1/s and the empirical distribution π̃s. After
first restricting the confidence intervals in a natural fashion, this yields Algorithm 1. Alternatively, we could use bootstrap or other types of estimates for the confidence intervals.
Approach II: Factoring Bias Out. The second algorithm does not approximate π directly, but uses maxent to estimate the distribution πs and then converts this estimate into an approximation of π. If the default estimate of π is q₀, then the default estimate of πs is q₀s. Applying unbiased maxent to the empirical distribution π̃s with the default q₀s, we obtain a q₀s-Gibbs distribution q₀s·e^{λ̂·f} approximating πs. We factor out s to obtain q₀e^{λ̂·f} as an estimate of π. This yields the algorithm FACTOR BIAS OUT.
This approach corresponds to ℓ₁-regularized maximum likelihood estimation of π by q₀-Gibbs distributions. When π itself is q₀-Gibbs then the distribution πs is q₀s-Gibbs. Performance guarantees for unbiased maxent imply that estimates of πs converge to πs as the number of samples increases. Now, if inf_x s(x) > 0 (which is the case for finite X) then estimates of π obtained by factoring out s converge to π as well.
When π is not q₀-Gibbs then πs is not q₀s-Gibbs either. We approximate π by a q₀-Gibbs distribution q_λ̂ which, with an increasing number of samples, minimizes RE(πs ‖ q_λs) rather than RE(π ‖ q_λ). Our next example shows that these two minimizers may be different.
Example 1. Consider the space X = {1, 2, 3, 4} with two features f₁, f₂. Features f₁, f₂, the target distribution π, the sampling distribution s and the biased distribution πs are given in Table 1. We use the uniform distribution as a default estimate. The minimizer of RE(π ‖ q_λ) is the unique uniform-Gibbs distribution q* such that q*[f] = π[f]. Similarly, the minimizer q**s of RE(πs ‖ q_λs) is the unique s-Gibbs distribution for which q**s[f] = πs[f]. Solving for these exactly, we find that q* and q** are as given in Table 1, and that these two distributions differ.
Even though FACTOR BIAS OUT does not minimize RE(π ‖ q_λ), we can show that it minimizes a different Bregman divergence. More precisely, it minimizes a Bregman divergence between certain projections of the two distributions. Bregman divergences generalize some common distance measures such as relative entropy or the squared Euclidean distance, and enjoy many of the same favorable properties. The Bregman divergence associated with a convex function F is defined as D_F(u ‖ v) = F(u) − F(v) − ∇F(v)·(u − v).
Proposition 4. Define F: ℝ₊^X → ℝ as F(u) = Σ_x s(x) u(x) log u(x). Then F is a convex function and for all p₁, p₂ ∈ Δ, RE(p₁s ‖ p₂s) = D_F(p₁′ ‖ p₂′), where p₁′(x) = p₁(x)/Σ_{x′} s(x′)p₁(x′) and p₂′(x) = p₂(x)/Σ_{x′} s(x′)p₂(x′) are projections of p₁, p₂ along lines tp, t ∈ ℝ, onto the hyperplane Σ_x s(x)p(x) = 1.
Approach III: Approximating FACTOR BIAS OUT. As mentioned in the introduction, knowing the sampling distribution s exactly is unrealistic. However, we often have access to samples from s. In this approach we assume that s is unknown but that, in addition to samples x₁, ..., x_m from πs, we are also given a separate set of samples x⁽¹⁾, x⁽²⁾, ..., x⁽ᴺ⁾ from s. We use the algorithm FACTOR BIAS OUT with the sampling distribution s replaced by the corresponding empirical distribution s̃.
To simplify the algorithm, we note that instead of using q₀s̃ as a default estimate for πs, it suffices to replace the sample space X by X′ = {x⁽¹⁾, x⁽²⁾, ..., x⁽ᴺ⁾} and use q₀ restricted to X′ as a default. The last step of factoring out s̃ is equivalent to using the λ̂ returned for space X′ on the entire space X.
[Figure 1: three panels, one per target distribution, showing learning curves for unbiased maxent, debias averages, factor bias out, and approximate factor bias out (with 1,000 and 10,000 samples from the sampling distribution); panel headers: target = π₁ with RE(π₁ ‖ u) = 4.5, target = π₂ with RE(π₂ ‖ u) = 5.0, target = π₃ with RE(π₃ ‖ u) = 3.3.]
Figure 1: Learning curves for synthetic experiments. We use u to denote the uniform distribution. For the sampling distribution s, RE(s ‖ u) = 0.8. Performance is measured in terms of relative entropy to the target distribution as a function of an increasing number of training samples. The number of samples is plotted on a log scale.
When the sampling distribution s is correlated with feature values, X′ might not cover all feature ranges. In that case, reprojecting on X may yield poor estimates outside of these ranges. We therefore do "clamping", restricting values f_j(x) to their ranges over X′ and capping values of the exponent λ̂·f(x) at its maximum over X′. The resulting algorithm is called APPROX FACTOR BIAS OUT.
4 Experiments
Conducting real data experiments to evaluate bias correction techniques is difficult, because bias is typically unknown and samples from unbiased distributions are not available.
Therefore, synthetic experiments are often a necessity for precise evaluation. Nevertheless, in addition to synthetic experiments, we were also able to conduct experiments with
real-world data for habitat modeling.
Synthetic experiments. In synthetic experiments, we generated three target uniform-Gibbs distributions π₁, π₂, π₃ over a domain X of size 10,000. These distributions were derived from 65 features indexed as f_i, 0 ≤ i ≤ 9 and f_ij, 0 ≤ i ≤ j ≤ 9. Values f_i(x) were chosen independently and uniformly in [0, 1], and we set f_ij(x) = f_i(x)f_j(x). Fixing these features, we generated weights for each distribution. Weights λ_i and λ_ii were generated jointly to capture a range of different behaviors for values of f_i in the range [0, 1].
Let U_S denote a random variable uniform over the set S. Each instance of U_S corresponds to a new independent variable. We set λ_ii = U_{−1,0,1}·U_[1,5] and λ_i to be λ_ii·U_[−3,1] if λ_ii ≠ 0, and U_{−1,1}·U_[2,10] otherwise. Weights λ_ij, i < j, were chosen to create correlations between f_i's that would be observable, but not strong enough to dominate λ_i's and λ_ii's. We set λ_ij = −0.5 or 0 or 0.5 with respective probabilities 0.05, 0.9 and 0.05. In maxent, we used a subset of features specifying target distributions and some irrelevant features. We used features f_i′, 0 ≤ i ≤ 9 and their squares f_ii′, where f_i′(x) = f_i(x) for 0 ≤ i ≤ 5 (relevant features) and f_i′(x) = U_[0,1] for 6 ≤ i ≤ 9 (irrelevant features). Once generated, we used the same set of features in all experiments. We generated a sampling distribution s correlated with target distributions. More specifically, s was a Gibbs distribution generated from features f_i^(s), 0 ≤ i ≤ 5 and their squares f_ii^(s), where f_i^(s)(x) = U_[0,1] for 0 ≤ i ≤ 1 and f_i^(s) = f_{i+2} for 2 ≤ i ≤ 5. We used weights λ_i^(s) = 0 and λ_ii^(s) = −1.
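A sketch of this weight-generation scheme (the random draws follow the description above; the seed and data structures are our own choices):

import numpy as np
rng = np.random.default_rng(0)

def draw_target_weights():
    single, pair = {}, {}
    for i in range(10):
        pair[i, i] = rng.choice([-1, 0, 1]) * rng.uniform(1, 5)
        if pair[i, i] != 0:
            single[i] = pair[i, i] * rng.uniform(-3, 1)
        else:
            single[i] = rng.choice([-1, 1]) * rng.uniform(2, 10)
    for i in range(10):
        for j in range(i + 1, 10):
            pair[i, j] = rng.choice([-0.5, 0, 0.5], p=[0.05, 0.9, 0.05])
    return single, pair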
For every target distribution, we evaluated the performance of UNBIASED MAXENT, DEBIAS AVERAGES, FACTOR BIAS OUT and APPROX FACTOR BIAS OUT with 1,000 and
10,000 samples from the sampling distribution. The performance was evaluated in terms
of relative entropy to the target distribution. We used training sets of sizes 10 to 1000. We
considered five randomly generated training sets and took the average performance over
these five sets for settings of λ from the range [0.05, 4.64]. We report results for the best λ,
chosen separately for each average. The rationale behind this approach is that we want to
Table 2: Results of real data experiments. Average performance of unbiased maxent and three bias
correction approaches over all species in six regions. The uniform distribution would receive the
log loss of 14.2 and AUC of 0.5. Results of bias correction approaches are italicized if they are
significantly worse and set in boldface if they are significantly better than those of the unbiased
maxent according to a paired t-test at the level of significance 5%.
                        average log loss                                average AUC
                      awt    can    nsw    nz     sa     swi      awt   can   nsw   nz    sa    swi
unbiased maxent       13.78  12.89  13.40  13.77  13.14  12.81    0.69  0.58  0.71  0.72  0.78  0.81
debias averages       13.92  13.10  13.88  14.31  14.10  13.59    0.67  0.64  0.65  0.67  0.68  0.78
factor bias out       13.90  13.13  14.06  14.20  13.66  13.46    0.71  0.69  0.72  0.72  0.78  0.83
apx. factor bias out  13.89  13.40  14.19  14.07  13.62  13.41    0.72  0.72  0.73  0.73  0.78  0.84
explore the potential performance of each method.
Figure 1 shows the results at the optimal λ as a function of an increasing number of samples. FACTOR BIAS OUT is always better than UNBIASED MAXENT. DEBIAS AVERAGES is worse than UNBIASED MAXENT for small sample sizes, but as the number of training samples increases, it soon outperforms UNBIASED MAXENT and eventually also outperforms FACTOR BIAS OUT. APPROX FACTOR BIAS OUT improves as the number of samples from the sampling distribution increases from 1,000 to 10,000, but both versions of APPROX FACTOR BIAS OUT perform worse than UNBIASED MAXENT for the distribution π₂.
Real data experiments. In this set of experiments, we evaluated maxent in the task of
estimating species habitats. The sample space is a geographic region divided into a grid
of cells and samples are known occurrence localities, i.e., cells where a given species was
observed. Every cell is described by a set of environmental variables, which may be categorical, such as vegetation type, or continuous, such as altitude or annual precipitation.
Features are real-valued functions derived from environmental variables. We used binary
indicator features for different values of categorical variables and binary threshold features
for continuous variables. The latter are equal to one when the value of a variable is greater
than a fixed threshold and zero otherwise.
Species sample locations and environmental variables were all produced and used as
part of the "Testing alternative methodologies for modeling species' ecological niches and predicting geographic distributions" Working Group at the National Center for Ecological
Analysis and Synthesis (NCEAS). The working group compared modeling methods across
a variety of species and regions. The training set contained presence-only data from unplanned surveys or incidental records, including those from museums and herbariums. The
test set contained presence-absence data from rigorously planned independent surveys.
We compared performance of our bias correction approaches with that of the unbiased
maxent which was among the top methods in the NCEAS comparison [13]. We used the full
dataset consisting of 226 species in 6 regions with 2–5822 training presences per species (233 on average) and 102–19120 test presences/absences. For more details see [13].
We treated training occurrence locations for all species in each region as sampling
distribution samples and used them directly in APPROX FACTOR BIAS OUT. In order to apply DEBIAS AVERAGES and FACTOR BIAS OUT, we estimated the sampling distribution
using unbiased maxent. Sampling distribution estimation is also the first step of [6]. In
contrast with that work, however, our experiments do not use the sampling distribution
estimate during evaluation and hence do not depend on its quality.
The resulting distributions were evaluated on test presences according to the log loss
and on test presences and absences according to the area under an ROC curve (AUC) [14].
AUC quantifies how well the predicted distribution ranks test presences above test absences. Its value is equal to the probability that a randomly chosen presence will be ranked
above a randomly chosen absence. The uniformly random prediction receives AUC of 0.5
while a perfect prediction receives AUC of 1.0.
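Equivalently, AUC can be computed directly from model scores; a minimal sketch in which ties count one half:

import numpy as np

def auc(presence_scores, absence_scores):
    p = np.asarray(presence_scores)[:, None]
    a = np.asarray(absence_scores)[None, :]
    return (p > a).mean() + 0.5 * (p == a).mean()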
In Table 2 we show performance of our three approaches compared with the unbiased
maxent. All three algorithms yield on average a worse log loss than the unbiased maxent.
This can perhaps be attributed to the imperfect estimate of the sampling distribution or to
the sampling distribution being zero over large portions of the sample space. In contrast,
when the performance is measured in terms of AUC, FACTOR BIAS OUT and APPROX FACTOR BIAS OUT yield on average the same or better AUC than UNBIASED MAXENT in all
six regions. Improvements in regions awt, can and swi are dramatic enough so that both of
these methods perform better than any method evaluated in [13].
5 Conclusions
We have proposed three approaches that incorporate information about sample selection
bias in maxent and demonstrated their utility in synthetic and real data experiments. Experiments also raise several questions that merit further research: DEBIAS AVERAGES has
the strongest performance guarantees, but it performs the worst in real data experiments
and catches up with other methods only for large sample sizes in synthetic experiments.
This may be due to poor estimates of unbiased confidence intervals and could be possibly
improved using a different estimation method. FACTOR BIAS OUT and APPROX FACTOR BIAS OUT improve over UNBIASED MAXENT in terms of AUC over real data, but are worse
in terms of log loss. This disagreement suggests that methods which aim to optimize AUC
directly could be more successful in species modeling, possibly incorporating some concepts from FACTOR BIAS OUT and APPROX FACTOR BIAS OUT. APPROX FACTOR BIAS OUT performs the best on real-world data, possibly due to the direct use of samples from the
sampling distribution rather than a sampling distribution estimate. However, this method
comes without performance guarantees and does not exploit the knowledge of the full sample space. Proving performance guarantees for APPROX FACTOR BIAS OUT remains open
for future research.
Acknowledgments
This material is based upon work supported by NSF under grant 0325463. Any opinions, findings, and conclusions
or recommendations expressed in this material are those of the authors and do not necessarily reflect the views
of NSF. The NCEAS data was kindly shared with us by the members of the "Testing alternative methodologies for modeling species' ecological niches and predicting geographic distributions" Working Group, which was
supported by the National Center for Ecological Analysis and Synthesis, a Center funded by NSF (grant DEB0072909), the University of California and the Santa Barbara campus.
References
[1] Jane Elith. Quantitative methods for modeling species habitat: Comparative performance and an application to Australian plants. In Scott Ferson and Mark Burgman, editors, Quantitative Methods for Conservation Biology, pages 39–58. Springer-Verlag, 2002.
[2] A. Guisan and N. E. Zimmerman. Predictive habitat distribution models in ecology. Ecological Modelling, 135:147–186, 2000.
[3] Steven J. Phillips, Miroslav Dudík, and Robert E. Schapire. A maximum entropy approach to species distribution modeling. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[4] S. Reddy and L. M. Dávalos. Geographical sampling bias and its implications for conservation priorities in Africa. Journal of Biogeography, 30:1719–1727, 2003.
[5] Barbara R. Stein and John Wieczorek. Mammals of the world: MaNIS as an example of data integration in a distributed network environment. Biodiversity Informatics, 1(1):14–22, 2004.
[6] Bianca Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[7] Bianca Zadrozny, John Langford, and Naoki Abe. Cost-sensitive learning by cost-proportionate example weighting. In Proceedings of the Third IEEE International Conference on Data Mining, 2003.
[8] James J. Heckman. Sample selection bias as a specification error. Econometrica, 47(1):153–161, 1979.
[9] Robert M. Groves. Survey Errors and Survey Costs. Wiley, 1989.
[10] Roderick J. Little and Donald B. Rubin. Statistical Analysis with Missing Data. Wiley, second edition, 2002.
[11] Charles Elkan. The foundations of cost-sensitive learning. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001.
[12] Miroslav Dudík, Steven J. Phillips, and Robert E. Schapire. Performance guarantees for regularized maximum entropy density estimation. In 17th Annual Conference on Learning Theory, 2004.
[13] J. Elith, C. Graham, and NCEAS working group. Comparing methodologies for modeling species' distributions from presence-only data. In preparation.
[14] J. A. Hanley and B. S. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29–36, 1982.
Handwritten Digit Recognition with a
Back-Propagation Network
Y. Le Cun, B. Boser, J. S. Denker, D. Henderson,
R. E. Howard, W. Hubbard, and L. D. Jackel
AT&T Bell Laboratories, Holmdel, N. J. 07733
ABSTRACT
We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was
required, but architecture of the network was highly constrained
and specifically designed for the task. The input of the network
consists of normalized images of isolated digits. The method has
a 1% error rate and about a 9% reject rate on zipcode digits provided
by the U.S. Postal Service.
1 INTRODUCTION
The main point of this paper is to show that large back-propagation (BP) networks can be applied to real image-recognition problems without a large, complex
preprocessing stage requiring detailed engineering. Unlike most previous work on
the subject (Denker et al., 1989), the learning network is directly fed with images,
rather than feature vectors, thus demonstrating the ability of BP networks to deal
with large amounts of low level information.
Previous work performed on simple digit images (Le Cun, 1989) showed that the
architecture of the network strongly influences the network's generalization ability.
Good generalization can only be obtained by designing a network architecture that
contains a certain amount of a priori knowledge about the problem. The basic design principle is to minimize the number of free parameters that must be determined
by the learning algorithm, without overly reducing the computational power of the
network. This principle increases the probability of correct generalization because
Handwritten Digit Recognition with a Back-Propagation Network
Figure 1: Examples of original zip codes from the testing set.
it results in a specialized network architecture that has a reduced entropy (Denker
et al., 1987; Patarnello and Carnevali, 1987; Tishby, Levin and Solla, 1989; Le Cun,
1989). On the other hand, some effort must be devoted to designing appropriate
constraints into the architecture.
2 ZIPCODE RECOGNITION
The handwritten digit-recognition application was chosen because it is a relatively
simple machine vision task: the input consists of black or white pixels, the digits
are usually well-separated from the background, and there are only ten output
categories. Yet the problem deals with objects in a real two-dimensional space and
the mapping from image space to category space has both considerable regularity
and considerable complexity. The problem has added attraction because it is of
great practical value.
The database used to train and test the network is a superset of the one used in
the work reported last year (Denker et al., 1989). We emphasize that the method
of solution reported here relies more heavily on automatic learning, and much less
on hand-designed preprocessing.
The database consists of 9298 segmented numerals digitized from handwritten zipcodes that appeared on real U.S. Mail passing through the Buffalo, N.Y. post office.
Examples of such images are shown in figure 1. The digits were written by many
different people, using a great variety of sizes, writing styles and instruments, with
widely varying levels of care. This was supplemented by a set of 3349 printed digits coming from 35 different fonts. The training set consisted of 7291 handwritten
digits plus 2549 printed digits. The remaining 2007 handwritten and 700 printed
digits were used as the test set. The printed fonts in the test set were different from
the printed fonts in the training set. One important feature of this database, which
Figure 2: Examples of normalized digits from the testing set.
is a common feature to all real-world databases, is that both the training set and
the testing set contain numerous examples that are ambiguous, unclassifiable, or
even misclassified.
3 PREPROCESSING
Acquisition, binarization, location of the zip code, and preliminary segmentation
were performed by Postal Service contractors (Wang and Srihari, 1988). Some of
these steps constitute very hard tasks in themselves. The segmentation (separating
each digit from its neighbors) would be a relatively simple task if we could assume
that a character is contiguous and is disconnected from its neighbors, but neither
of these assumptions holds in practice. Many ambiguous characters in the database
are the result of mis-segmentation (especially broken 5's) as can be seen on figure 2.
At this point, the size of a digit varies but is typically around 40 by 60 pixels. Since
the input of a back-propagation network is fixed size, it is necessary to normalize
the size of the characters. This was performed using a linear transformation to
make the characters fit in a 16 by 16 pixel image. This transformation preserves
the aspect ratio of the character, and is performed after extraneous marks in the
image have been removed. Because of the linear transformation, the resulting image
is not binary but has multiple gray levels, since a variable number of pixels in the
original image can fall into a given pixel in the target image. The gray levels of
each image are scaled and translated to fall within the range -1 to 1.
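A sketch of such a size normalization; the interpolation scheme is our assumption (the paper only specifies a linear transformation), with scipy's zoom standing in for it:

import numpy as np
from scipy.ndimage import zoom

def normalize_digit(img, out=16):
    # Fit the character in a 16x16 box, preserving aspect ratio; the
    # interpolation yields gray levels, then scale/translate to [-1, 1].
    scale = out / max(img.shape)
    small = zoom(img.astype(float), scale, order=1)[:out, :out]
    canvas = np.zeros((out, out))
    r = (out - small.shape[0]) // 2
    c = (out - small.shape[1]) // 2
    canvas[r:r + small.shape[0], c:c + small.shape[1]] = small
    return 2.0 * canvas - 1.0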
4 THE NETWORK
The remainder of the recognition is entirely performed by a multi-layer network. All
of the connections in the network are adaptive, although heavily constrained, and
are trained using back-propagation. This is in contrast with earlier work (Denker
et al., 1989) where the first few layers of connections were hand-chosen constants.
The input of the network is a 16 by 16 normalized image and the output is composed
of 10 units: one per class. When a pattern belonging to class i is presented, the
desired output is +1 for the ith output unit, and -1 for the other output units.
Figure 3: Input image (left), weight vector (center), and resulting feature map
(right). The feature map is obtained by scanning the input image with a single
neuron that has a local receptive field, as indicated. White represents -1, black
represents +1.
A fully connected network with enough discriminative power for the task would have
far too many parameters to be able to generalize correctly. Therefore a restricted
connection-scheme must be devised, guided by our prior knowledge about shape
recognition. There are well-known advantages to performing shape recognition by
detecting and combining local features. We have required our network to do this
by constraining the connections in the first few layers to be local. In addition, if
a feature detector is useful on one part of the image, it is likely to be useful on
other parts of the image as well. One reason for this is that the salient features of a
distorted character might be displaced slightly from their position in a typical character. One solution to this problem is to scan the input image with a single neuron
that has a local receptive field, and store the states of this neuron in corresponding
locations in a layer called a feature map (see figure 3). This operation is equivalent
to a convolution with a small size kernel, followed by a squashing function. The
process can be performed in parallel by implementing the feature map as a plane
of neurons whose weight vectors are constrained to be equal. That is, units in a
feature map are constrained to perform the same operation on different parts of the
image. An interesting side-effect of this weight sharing technique, already described
in (Rumelhart, Hinton and Williams, 1986), is to reduce the number of free parameters by a large amount, since a large number of units share the same weights. In
addition, a certain level of shift invariance is present in the system: shifting the
input will shift the result on the feature map, but will leave it unchanged otherwise.
In practice, it will be necessary to have multiple feature maps, extracting different
features from the same image.
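A minimal sketch of a single shared-weight feature map; tanh stands in for the squashing function, which the text leaves unspecified:

import numpy as np

def feature_map(img, kernel, bias):
    # Scan the input with one 5x5 neuron; every position reuses the
    # same weights, so the map is a convolution followed by squashing.
    k = kernel.shape[0]
    h, w = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.tanh((img[r:r + k, c:c + k] * kernel).sum() + bias)
    return out   # e.g. a 28x28 input and a 5x5 kernel give a 24x24 map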
[Table 1 is a 4 x 12 grid of X marks: rows 1-4 index the H2 feature maps, columns 1-12 index the H3 feature maps, and an X marks a connection; each H3 map takes input from one or two H2 maps, for 20 connections in total.]
Table 1: Connections between H2 and H3.
The idea of local, convolutional feature maps can be applied to subsequent hidden
layers as well, to extract features of increasing complexity and abstraction. Interestingly, higher level features require less precise coding of their location. Reduced
precision is actually advantageous, since a slight distortion or translation of the input will have reduced effect on the representation. Thus, each feature extraction in
our network is followed by an additional layer which performs a local averaging and
a subsampling, reducing the resolution of the feature map. This layer introduces
a certain level of invariance to distortions and translations. A functional module
of our network consists of a layer of shared-weight feature maps followed by an
averaging/subsampling layer. This is reminiscent of the Neocognitron architecture
(Fukushima and Miyake, 1982), with the notable difference that we use backprop
(rather than unsupervised learning) which we feel is more appropriate to this sort
of classification problem.
The network architecture, represented in figure 4, is a direct extension of the ones
described in (Le Cun, 1989; Le Cun et al., 1990a). The network has four hidden
layers respectively named HI, H2, H3, and H4. Layers HI and H3 are shared-weights
feature extractors, while H2 and H4 are averaging/subsampling layers.
Although the size of the active part of the input is 16 by 16, the actual input is a 28
by 28 plane to avoid problems when a kernel overlaps a boundary. HI is composed
of 4 groups of 576 units arranged as 4 independent 24 by 24 feature maps. These
four feature maps will be designated by HI.l, HI.2, HI.3 and HIA. Each unit in
a feature map takes its input from a 5 by 5 neighborhood on the input plane. As
described above, corresponding connections on each unit in a given feature map are
constrained to have the same weight. In other words, all of the 576 units in H1.1
uses the same set of 26 weights (including the bias). Of course, units in another
map (say HI.4) share another set of 26 weights.
Layer H2 is the averaging/subsampling layer. It is composed of 4 planes of size 12
by 12. Each unit in one of these planes takes inputs on 4 units on the corresponding
plane in HI. Receptive fields do not overlap. All the weights are constrained to be
equal, even within a single unit. Therefore, H2 performs a local averaging and a 2
to 1 sUbsampling of HI in each direction.
Layer H3 is composed of 12 feature maps. Each feature map contains 64 units
arranged in a 8 by 8 plane. As before, these feature maps will be designated as
H2.1, H2.2 ... H2.12. The connection scheme between H2 and H3 is quite similar
to the one between the input and HI, but slightly more complicated because H3
has multiple 2-D maps. Each unit receptive field is composed of one or two 5 by
Handwritten Digit Recognition with a Back.Propagation Network
Figure 4: Network Architecture with 5 layers of fully-adaptive connections.
401
402
Le Cun, Boser, Denker, Henderson, Howard, Hubbard and Jackel
5 neighborhoods centered around units that are at identical positions within each
H2 map. Of course, all units in a given map are constrained to have identical
weight vectors. The maps in H2 on which a map in H3 takes its inputs are chosen
according to a scheme described in Table 1. According to this scheme, the network
is composed of two almost independent modules. Layer H4 plays the same role as
layer H2; it is composed of 12 groups of 16 units arranged in 4 by 4 planes.
The output layer has 10 units and is fully connected to H4. In summary, the
network has 4635 units, 98442 connections, and 2578 independent parameters. This
architecture was derived using the Optimal Brain Damage technique (Le Cun et al.,
1990b) starting from a previous architecture (Le Cun et al., 1990a) that had 4 times
more free parameters.
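The parameter and connection counts can be reproduced from the layer descriptions, assuming eight of the twelve H3 maps take two 5 by 5 kernels and four take one (20 kernels, matching the 20 connections marked in Table 1), and counting each bias:

h1  = 4 * (5 * 5 + 1)         # one kernel + bias per map        = 104
h2  = 4 * 2                   # one tied weight + bias per map   = 8
h3  = 20 * 25 + 12            # 20 kernels + one bias per map    = 512
h4  = 12 * 2                  #                                  = 24
out = 10 * (12 * 4 * 4 + 1)   # fully connected to 192 H4 units  = 1930
assert h1 + h2 + h3 + h4 + out == 2578    # free parameters

conn = (4 * 576 * 26 + 4 * 144 * 5 + 8 * 64 * 51 + 4 * 64 * 26
        + 12 * 16 * 5 + out)
assert conn == 98442                      # connections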
5 RESULTS
After 30 training passes, the error rate on the training set (7291 handwritten plus 2549
printed digits) was 1.1% and the MSE was .017. On the whole test set (2007
handwritten plus 700 printed characters) the error rate was 3.4% and the MSE was
0.024. All the classification errors occurred on handwritten characters.
In a realistic application, the user is not so much interested in the raw error rate
as in the number of rejections necessary to reach a given level of accuracy. In our
case, we measured the percentage of test patterns that must be rejected in order
to get 1% error rate. Our rejection criterion was based on three conditions: the
activity level of the most-active output unit should be larger than a given threshold
t1, the activity level of the second most-active unit should be smaller than a given
threshold t2, and finally, the difference between the activity levels of these two units
should be larger than a given threshold td. The best percentage of rejections on
the complete test set was 5.7% for 1% error. On the handwritten set only, the
result was 9% rejections for 1% error. It should be emphasized that the rejection
thresholds were obtained using performance measures on the test set. About half
the substitution errors in the testing set were due to faulty segmentation, and an
additional quarter were due to erroneous assignment of the desired category. Some
of the remaining images were ambiguous even to humans, and in a few cases the
network misclassified the image for no discernible reason.
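The rejection rule itself amounts to three comparisons on the output activations; a minimal sketch (assumed Python; the threshold values t1, t2, and td are free parameters, not the ones used in the experiments):

    def reject(activations, t1, t2, td):
        # activations: the 10 output-unit activity levels for one pattern.
        # Accept only if the top unit is confident (> t1), the runner-up is
        # weak (< t2), and the margin between them is large (> td).
        ordered = sorted(activations, reverse=True)
        top, second = ordered[0], ordered[1]
        return not (top > t1 and second < t2 and top - second > td)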
Even though a second-order version of back-propagation was used, it is interesting
to note that the learning takes only 30 passes through the training set. We think
this can be attributed to the large amount of redundancy present in real data. A
complete training session (30 passes through the training set plus test) takes about
3 days on a SUN SPARCstation 1 using the SN2 connectionist simulator (Bottou
and Le Cun, 1989).
After successful training, the network was implemented on a commercial Digital
Signal Processor board containing an AT&T DSP-32C general purpose DSP chip
with a peak performance of 12.5 million multiply-add operations per second on 32 bit
floating point numbers. The DSP operates as a coprocessor in a PC connected to
a video camera. The PC performs the digitization, binarization and segmentation
of the image, while the DSP performs the size-normalization and the classification.
Figure 5: Atypical data. The network classifies these correctly, even though they
are quite unlike anything in the training set.
The overall throughput of the digit recognizer including image acquisition is 10 to
12 classifications per second and is limited mainly by the normalization step. On
normalized digits, the DSP performs more than 30 classifications per second.
6 CONCLUSION
Back-propagation learning was successfully applied to a large, real-world task. Our
results appear to be at the state of the art in handwritten digit recognition. The
network had many connections but relatively few free parameters. The network
architecture and the constraints on the weights were designed to incorporate geometric knowledge about the task into the system. Because of its architecture, the
network could be trained on a low-level representation of data that had minimal
preprocessing (as opposed to elaborate feature extraction). Because of the redundant nature of the data and because of the constraints imposed on the network, the
learning time was relatively short considering the size of the training set. Scaling
properties were far better than one would expect just from extrapolating results of
back-propagation on smaller, artificial problems. Preliminary results on alphanumeric characters show that the method can be directly extended to larger tasks.
The final network of connections and weights obtained by back-propagation learning was readily implementable on commercial digital signal processing hardware.
Throughput rates, from camera to classified image, of more than ten digits per
second were obtained.
Acknowledgments
We thank the US Postal Service and its contractors for providing us with the zipcode database. We thank Henry Baird for useful discussions and for providing the
printed-font database.
References
Bottou, L.-Y. and Le Cun, Y. (1989). SN2: A Simulator for Connectionist Models.
Neuristique SA, Paris, France.
Denker, J., Schwartz, D., Wittner, B., Solla, S. A., Howard, R., Jackel, L., and
Hopfield, J. (1987). Large Automatic Learning, Rule Extraction and Generalization. Complex Systems, 1:877-922.
Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural Network
Recognizer for Hand-Written Zip Code Digits. In Touretzky, D., editor, Neural Information Processing Systems, volume 1, pages 323-331, Denver, 1988.
Morgan Kaufmann.
Fukushima, K. and Miyake, S. (1982). Neocognitron: A new algorithm for pattern
recognition tolerant of deformations and shifts in position. Pattern Recognition,
15:455-469.
Le Cun, Y. (1989). Generalization and Network Design Strategies. In Pfeifer, R.,
Schreter, Z., Fogelman, F., and Steels, L., editors, Connectionism in Perspective, Zurich, Switzerland. Elsevier.
Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W.,
and Jackel, L. D. (1990a). Back-Propagation Applied to Handwritten Zipcode
Recognition. Neural Computation, 1(4).
Le Cun, Y., Denker, J. S., Solla, S., Howard, R. E., and Jackel, L. D. (1990b). Optimal Brain Damage. In Touretzky, D., editor, Neural Information Processing
Systems, volume 2, Denver, 1989. Morgan Kaufmann.
Patarnello, S. and Carnevali, P. (1987). Learning Networks of Neurons with Boolean
Logic. Europhysics Letters, 4(4):503-508.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal
representations by error propagation. In Parallel distributed processing: Explorations in the microstructure of cognition, volume I, pages 318-362. Bradford
Books, Cambridge, MA.
Tishby, N., Levin, E., and Solla, S. A. (1989). Consistent Inference of Probabilities
in Layered Networks: Predictions and Generalization. In Proceedings of the
International Joint Conference on Neural Networks, Washington DC.
Wang, C. H. and Srihari, S. N. (1988). A Framework for Object Recognition in a
Visually Complex Environment and its Application to Locating Address Blocks
on Mail Pieces. International Journal of Computer Vision, 2:125.
2,126 | 2,930 | An Analog Visual Pre-Processing Processor
Employing Cyclic Line Access in
Only-Nearest-Neighbor-Interconnects
Architecture
Yusuke Nakashita
Department of Frontier Informatics
School of Frontier Sciences
The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa-shi, Chiba
277-8561, Japan
[email protected]
Yoshio Mita
Department of Electrical Engineering
School of Engineering
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku,Tokyo
113-8656, Japan.
[email protected]
Tadashi Shibata
Department of Frontier Informatics
School of Frontier Sciences
The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa-shi, Chiba
277-8561, Japan
[email protected]
Abstract
An analog focal-plane processor having a 128 × 128 photodiode array has
been developed for directional edge filtering. It can perform 4 × 4-pixel
kernel convolution over the entire pixel array with only 256 steps of simple analog processing. A newly developed cyclic line access and row-parallel
processing scheme, in conjunction with the "only-nearest-neighbor interconnects" architecture, has enabled a very simple implementation. A
proof-of-concept chip was fabricated in a 0.35-µm 2-poly 3-metal CMOS
technology and the edge filtering at a rate of 200 frames/sec. has been
experimentally demonstrated.
1 Introduction
Directional edge detection in an input image is the most essential operation in early visual
processing [1, 2]. Such spatial filtering operations are carried out by taking the convolution between a block of pixels and a weight matrix, requiring a number of multiply-and-accumulate operations. Since the convolution operation must be repeated pixel-by-pixel
to scan the entire image, the computation is very expensive and software solutions are not
compatible to real-time applications. Therefore, the hardware implementation of focalplane parallel processing is highly demanded. However, there exists a hard problem which
we call the interconnects explosion as illustrated in Fig. 1.
Figure 1: (a) Interconnects from nearest neighbor (N.N.) and second N.N. pixels to a single
pixel at the center. (b) N.N. and second N.N. interconnects for pixels in the two rows, an
illustrative example of interconnects explosion.
In carrying out a filtering operation for one pixel, the luminance data must be gathered from
the nearest-neighbor and second nearest-neighbor pixels. The interconnects necessary for
this are illustrated in Fig. 1(a). If such wiring is formed for two rows of pixels, excessively
high density overlapping interconnects are required. If we extend this to an entire chip,
it is impossible to form the wiring even with the most advanced VLSI interconnects technology. Biology has solved the problem by real 3D-interconnects structures. Since only
two dimensional layouts are allowed with a limited number of stacks in VLSI technology,
the missing one dimension is crucial. We must overcome the difficulty by introducing new
architectures.
In order to achieve real-time performance in image filtering, a number of VLSI chips have
been developed in both digital [3, 4] and analog [5, 6, 7] technologies. A flash-convolution
processor [4] allows a single 5 × 5-pixel convolution operation in a single clock cycle by
introducing a subtle memory access scheme. However, for an N × M-pixel image, it takes
N × M clock cycles to complete the processing. In the line-parallel processing scheme employed in [7], both row-parallel and column-parallel processing scan the target image several times and the entire filtering finishes in O(N+M) steps. (A single step includes several
clock cycles to control the analog processing.)
The purpose of this work is to present an analog focal-plane CMOS image sensor chip
which carries out the directional edge filtering convolution for an N × M-pixel image only in
M (or N) steps. In order to achieve an efficient processing, two key technologies have been
introduced: the "only-nearest-neighbor interconnects" architecture and "cyclic line access and
row-parallel processing". The former was first developed in [8], and has enabled the convolution including second-nearest-neighbor luminance data only using nearest neighbor
interconnects, thus greatly reducing the interconnect complexity. However, the fill factor
was sacrificed due to the pixel-parallel organization. The problem has been resolved in
the present work by "cyclic line access and row-parallel processing". Namely, the processing elements are separated from the array of photodiodes and the "only-nearest-neighbor
interconnects" architecture was realized as a separate module of row-parallel processing
elements. The cyclic line access scheme first introduced in the present work has eliminated
the redundant data readout operations from the photodiode array and has established a very
efficient processing. As a result, it has become possible to complete the edge filtering
for a 128 × 128-pixel image in only 128 × 2 steps. A proof-of-concept chip was fabricated
in a 0.35-µm 2-poly 3-metal CMOS technology, and edge detection at a rate of 200
frames/sec. has been experimentally demonstrated.
[Figure legend: photodiode; processing element.]
Figure 2: Edge filtering in the "only-nearest-neighbor interconnects" architecture: (a) first
step; (b) second step; (c) all interconnects necessary for pixel parallel processing; (d) PD's
involved in the convolution.
(a)             (b)             (c)             (d)
 0 +1 +1  0      0 +1 +1  0      0 +1 +1  0      0 +1 -1  0
-1  0 +2 +1     +1 +2  0 -1     +1 +2 +2 +1     +1 +2 -2 -1
-1 -2  0 +1     +1  0 -2 -1     -1 -2 -2 -1     +1 +2 -2 -1
 0 -1 -1  0      0 -1 -1  0      0 -1 -1  0      0 +1 -1  0
Figure 3: Edge filtering kernels realized in the "only-nearest-neighbor interconnects" architecture: (a) +45 degrees; (b) -45 degrees; (c) horizontal; (d) vertical.
2 System Organization
The two key technologies employed in the present work are explained in the following.
2.1 "Only-Nearest-Neighbor Interconnects" Architecture
This architecture was first proposed in [8], and experimentally verified with small-scale test
circuits (7 × 7 processing elements without photodiodes). The key feature of the architecture
is that photodiodes (PD's) are placed at four corners of each processing element (PE), and
that the luminance data of each PD are shared by four PE's as shown in Fig. 2.
The edge filtering is carried out as explained below. First, as shown in Fig. 2 (a), pre-processing is carried out in each PE using the luminance data taken from four PD's located
at its corners. Then, the result is transferred to the center PE as shown in Fig. 2 (b) and necessary computation is carried out. This accomplishes the filtering processing for one half
of the entire pixels. Then the roles of pre-processing PE's and center PE's are interchanged
and the same procedure follows to complete the processing for the rest of the pixels. The
interconnects necessary for the entire parallel processing are shown in Fig. 2(c). In this manner, every PE can gather all data necessary for the processing from its nearest-neighbor and
second nearest-neighbor pixels without complicated crossover interconnects. The kernels
illustrated in Fig. 3 have all been realized in this architecture. The luminance data from 12
PD's enclosed in Fig. 2 (d) are utilized to detect the edge information at the center location.
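As a software reference for the filtering that the PE array performs in analog, the sketch below (assumed NumPy; a functional model, not a model of the circuits) applies the +45-degree kernel of Figure 3(a); its four zero corners mean exactly 12 photodiode values contribute to each output, as stated above.

    import numpy as np

    # +45-degree edge kernel of Figure 3(a); the zero corners leave 12
    # contributing photodiodes per output location.
    K45 = np.array([[ 0,  1,  1,  0],
                    [-1,  0,  2,  1],
                    [-1, -2,  0,  1],
                    [ 0, -1, -1,  0]], dtype=float)

    def edge_filter(image, kernel=K45):
        # 4x4 kernel convolution over the whole image.
        k = kernel.shape[0]
        h, w = image.shape
        out = np.empty((h - k + 1, w - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+k, j:j+k] * kernel)
        return out

    img = np.tri(131)                    # toy image with one diagonal edge
    print(edge_filter(img).shape)        # (128, 128)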
[Figure 4 shows: the 131 × 131 photodiode array with its address decoder; the row-parallel
processing module with four rows of 130 (128 + 2) processing elements, 128 parallel
outputs, cyclic connections, analog memories, and PE rows indexed by x mod 4; and the
photodiode read-out circuit with reset (RST), select, and bias transistors.]
Figure 4: Block diagram of the chip (a), and organization of the row-parallel processing
module (b). x in (b) represents the row number (1-131). (c) shows the read-out circuit of a
photodiode.
2.2 Cyclic Line Access and Row-Parallel Processing
A block diagram of the analog edge-filtering processor is given in Fig. 4 (a). It consists of
an array of 131 × 131 photodiodes (PD's) and a module for row-parallel processing placed
at the bottom of the PD array. Figure 4(b) illustrates the organization of the row processing
module, which is composed of four rows of 130 PE's and five rows of 131 analog memory
cells that temporarily store the luminance data read out from the PD array. It should be
noted that only three rows of PE's and four rows of PD's are sufficient to carry out a single-row processing as explained in reference to Fig. 2(d). However, one extra row of PE's and
one extra row of analog memories for PD data storage were included in the row-parallel
processing module. This is essential to carry out a seamless data read out from the PD array
and computation without analog data shift within the processing module.
[Figure 5 shows the five-row analog PD memory and the PE rows at four successive steps
(a)-(d) of the cyclic access.]
Figure 5: "Cyclic line access and row-parallel processing" scheme.
The chip yields
the kernel convolution results for one of the rows in the PD array as 128 parallel outputs.
Now, the operation of the row-parallel processing module is explained with reference to
Fig. 4 (b) and Fig. 5. In order to carry out the convolution for the data in Rows 1-4, the PD
data are temporarily stored in the analog memory array as shown in Fig. 5 (a). Important
to note is that the data from Row 1 are duplicated at the bottom. The convolution operation
proceeds using the upper four rows of data as explained in Fig. 5 (a). In the next step,
the data from Row 5 are overwritten to the sites of Row 1 data as shown in Fig. 5 (b).
The operation proceeds using the lower four rows of data and the second set of outputs
is produced. In the third step, the data from Row 6 are overwritten to the sites of Row 2
data (Fig. 5 (c)), and the convolution is taken using the data in the enclosure. Although a
part of the data (top two rows) are separated from the rest, the topology of the hardware
computation is identical to that explained in Fig. 5 (a). This is because the same set of data
is stored in both top and bottom PD memories and the top and bottom PE's are connected
by the "cyclic connection" as illustrated in Fig. 4 (b). By introducing such one extra row of PD
memories and one extra row of PE's with cyclic interconnections, row-parallel processing
can be seamlessly performed with only a single-row PD data set download at each step.
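In software terms, this access pattern is a five-row circular buffer with exactly one PD row downloaded per step; a minimal behavioural model (assumed Python; buffer indexing simplified relative to Fig. 5) is:

    import numpy as np

    def cyclic_row_parallel(pd_array, kernel):
        # Five-row analog memory modelled as a circular buffer: each step
        # overwrites the oldest buffer row with one new PD row, and one row
        # of 4x4 convolutions is computed from four logically adjacent rows.
        n_rows, n_cols = pd_array.shape           # e.g. 131 x 131
        buf = np.empty((5, n_cols))
        buf[0:4] = pd_array[0:4]                  # initial download: rows 1-4
        buf[4] = pd_array[0]                      # row 1 duplicated at bottom
        outputs = []
        for step in range(n_rows - 3):            # one output row per step
            rows = [(step + r) % 5 for r in range(4)]
            window = buf[rows]                    # the "cyclic connection"
            outputs.append([np.sum(window[:, c:c+4] * kernel)
                            for c in range(n_cols - 3)])
            buf[(step + 4) % 5] = pd_array[(step + 4) % n_rows]
        return np.array(outputs)                  # 128 x 128 for a 131 x 131 input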
3 Circuit Configurations
In this architecture, we need only two arithmetic operations, i.e., the sum of four inputs and
the subtraction.
Figure 6(a) shows the adder circuit using the multiple-input floating-gate source follower [9]. The substrate of M1 is connected to the source to avoid the body effect. The
transistor M2 operates as a current source for fast output voltage stabilization as well as to
achieve good linearity. Due to the charge redistribution in the floating gate, the average of
the four input voltages appears at the output as
Vout = (1/4)(Vin1 + Vin2 + Vin3 + Vin4) - VTH,
where VTH represents the threshold voltage of M1. Here, the four coupling capacitors
connected to the floating gate of M1 are identical and the capacitive coupling between
the floating gate and the ground was assumed to be 0 for simplicity. The electrical charge
in the floating gate is initialized periodically using the reset switch (RST). The coupling
capacitors themselves are also utilized as temporary memories for the PD data read out
from the PD array.
[Figure 6 shows the two circuits, with input coupling capacitors C1-C4, floating-gate
follower transistors M1-M3, storage capacitor C5, switches SW1-SW3, reset (RST), bias,
and reference (Vref) nodes.]
Figure 6: Adder circuit (a) and subtraction circuit (b) using floating-gate MOS technology.
Figure 6(b) shows the subtraction circuit, where the same source follower is used. When
SW1 and SW2 are turned on, and SW3 is turned off, the following voltage difference is
developed across the capacitor C5:
VC5 = (1/2)(Vin_p1 + Vin_p2) - VTH - Vref.
Then, SW1 and SW2 are turned off, and SW3 is turned on. As a result, the output voltage
becomes
Vout = (1/2)(Vin_m1 + Vin_m2) - VTH - VC5 = (1/2)(Vin_m1 + Vin_m2) - (1/2)(Vin_p1 + Vin_p2) + Vref,
where VTH represents the threshold voltage of M1, which cancels between the two phases.
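A quick numerical check of these idealized expressions (assumed Python; ideal capacitors, no parasitics or charge injection, and the threshold voltage is an illustrative value, not a measured device parameter):

    V_TH = 0.6   # illustrative follower threshold voltage

    def adder(v1, v2, v3, v4):
        # Four identical coupling capacitors: the floating gate settles at
        # the average of the inputs; the follower subtracts its threshold.
        return (v1 + v2 + v3 + v4) / 4.0 - V_TH

    def subtractor(vp1, vp2, vm1, vm2, v_ref=1.65):
        # Phase 1 (SW1, SW2 on): store the p-input average, shifted by V_TH
        # and referenced to v_ref, across C5.
        dv = (vp1 + vp2) / 2.0 - V_TH - v_ref
        # Phase 2 (SW3 on): the follower now outputs the m-input average
        # minus V_TH; subtracting the stored charge cancels V_TH.
        return (vm1 + vm2) / 2.0 - V_TH - dv

    print(adder(1.0, 2.0, 3.0, 2.0))        # 1.4
    print(subtractor(2.0, 2.0, 1.0, 1.0))   # 0.65 = 1.0 - 2.0 + 1.65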
4 Experimental Results
A proof-of-concept chip was designed and fabricated in a 0.35-µm 2-poly 3-metal CMOS
technology. Figure 7 shows the photomicrograph of the chip, and the chip specifications are
given in Table 1. Since the pitch of a single PE unit is larger than the pitch of the PD array,
130 PE units are laid out as two separate L-shaped blocks at the periphery of the PD array
as seen in the chip photomicrograph. Successful operation of the chip was experimentally
verified.
An example is shown in Fig. 8, where the experimental results for +45-degree edge filtering
are demonstrated. Since the thresholding circuitry was not implemented in the present
chip, only the convolution results are shown. 128 parallel outputs from the test chip were
multiplexed for observation using the external multiplexers mounted on a milled printed
circuit board. The vertical stripes observed in the result are due to the resistance variation
in the external interconnects poorly produced on the milled printed circuit board.
It was experimentally confirmed that the chip operates at 1000 frames/sec. However, the operation is limited by the integration time of the PD's, and typical motion images are processed at
about 200 frames/sec. The power dissipation in the PE's was 25 mW and that in the PD
array was 40 mW.
[Figure 7 shows the die, with the 131 × 131 PD array in the center and the PE blocks at the
periphery.]
Figure 7: Chip photomicrograph.
Table 1: Chip Specifications.
  Process Technology    0.35 µm CMOS, 2-Poly, 3-Metal
  Die Size              9.8 mm × 9.8 mm
  Voltage Supply        3.3 V
  Operating Frequency   50 MHz
  Power Dissipation     25 mW (PE Array)
  PE Operation          1000 frames/sec
  Typical Frame Rate    200 frames/sec (limited by PD integration time)
[Figure 8 shows the experimental setup (a) and the measured filter output (b).]
Figure 8: Experimental setup (a), and measurement results of +45-degree edge filtering
convolution (b).
5 Conclusions
An analog edge-filtering processor has been developed based on the two key technologies:
the "only-nearest-neighbor interconnects" architecture and "cyclic line access and row-parallel
processing". As a result, the convolution operation involving second-nearest-neighbor pixel
data for an N × M-pixel image can be performed in only M (or N) steps. The edge filtering
operation for 128 × 128-pixel images at 200 frames/sec. has been experimentally demonstrated.
The chip meets the requirement of low-power and real-time-response applications.
6 Acknowledgments
The VLSI chip in this study was fabricated in the chip fabrication program of VLSI Design
and Education Center (VDEC), the University of Tokyo in collaboration with Rohm Corporation and Toppan Printing Corporation. The work is partially supported by the Ministry
of Education, Science, Sports, and Culture under Grant-in-Aid for Scientific Research (No.
14205043).
References
[1] D. H. Hubel and T. N. Wiesel, "Receptive fields of single neurons in the cat's striate
cortex," Journal of Physiology, vol. 148, pp. 574-591, 1959.
[2] M. Yagi and T. Shibata, "An image representation algorithm compatible with neural-associative-processor-based hardware recognition systems," IEEE Trans. Neural Networks, vol. 14(5), pp. 1144-1161, 2003.
[3] J. C. Gealow and C. G. Sodini, "A pixel-parallel processor using logic pitch-matched
to dynamic memory," IEEE J. Solid-State Circuits, vol. 34, pp. 831-839, 1999.
[4] K. Ito, M. Ogawa and T. Shibata, "A variable-kernel flash-convolution image filtering
processor," Dig. Tech. Papers of Int. Solid-State Circuits Conf., pp. 470-471, 2003.
[5] L. D. McIlrath, "A CCD/CMOS focal plane array edge detection processor implementing the multiscale veto algorithm," IEEE J. Solid-State Circuits, vol. 31(9), pp.
1239-1247, 1996.
[6] R. Etienne-Cummings, Z. K. Kalayjian and D. Cai, "A programmable focal plane MIMD
image processor chip," IEEE J. Solid-State Circuits, vol. 36(1), pp. 64-73, 2001.
[7] T. Taguchi, M. Ogawa and T. Shibata, "An Analog Image Processing LSI Employing
Scanning Line Parallel Processing," Proc. 29th European Solid-State Circuits Conference (ESSCIRC 2003), pp. 65-68, 2003.
[8] Y. Nakashita, Y. Mita and T. Shibata, "An Analog Edge-Filtering Processor Employing
Only-Nearest-Neighbor Interconnects," Ext. Abstracts of the International Conference
on Solid State Devices and Materials (SSDM '04), pp. 356-357, 2004.
[9] T. Shibata and T. Ohmi, "A Functional MOS Transistor Featuring Gate-Level Weighted
Sum and Threshold Operations," IEEE Trans. Electron Devices, vol. 39(6), pp. 1444-1455, 1992.
2,127 | 2,931 | A Connectionist Model for Constructive
Modal Reasoning
Artur S. d'Avila Garcez
Department of Computing, City University London
London EC1V 0HB, UK
[email protected]
Luís C. Lamb
Institute of Informatics, Federal University of Rio Grande do Sul
Porto Alegre RS, 91501-970, Brazil
[email protected]
Dov M. Gabbay
Department of Computer Science, King?s College London
Strand, London, WC2R 2LS, UK
[email protected]
Abstract
We present a new connectionist model for constructive, intuitionistic
modal reasoning. We use ensembles of neural networks to represent intuitionistic modal theories, and show that for each intuitionistic modal
program there exists a corresponding neural network ensemble that computes the program. This provides a massively parallel model for intuitionistic modal reasoning, and sets the scene for integrated reasoning,
knowledge representation, and learning of intuitionistic theories in neural
networks, since the networks in the ensemble can be trained by examples
using standard neural learning algorithms.
1 Introduction
Automated reasoning and learning theory have been the subject of intensive investigation
since the early developments in computer science [14]. However, while (machine) learning has focused mainly on quantitative and connectionist approaches [16], the reasoning
component of intelligent systems has been developed mainly by formalisms of classical
and non-classical logics [7, 9]. More recently, the recognition of the need for systems that
integrate reasoning and learning into the same foundation, and the evolution of the fields of
cognitive and neural computation, has led to a number of proposals that attempt to integrate
reasoning and learning [1, 3, 12, 13, 15].
We claim that an effective integration of reasoning and learning can be obtained by neural-symbolic learning systems [3, 4]. Such systems concern the application of problem-specific
symbolic knowledge within the neurocomputing paradigm. By integrating logic and neural
networks, they may provide (i) a sound logical characterisation of a connectionist system,
(ii) a connectionist (parallel) implementation of a logic, or (iii) a hybrid learning system
bringing together advantages from connectionism and symbolic reasoning.
Intuitionistic logical systems have been advocated by many as providing adequate logical
foundations for computation (see [2] for a survey). We argue, therefore, that intuitionism
could also play an important part in neural computation. In this paper, we follow the research path outlined in [4, 5], and develop a computational model for integrated reasoning,
representation, and learning of intuitionistic modal knowledge. We concentrate on reasoning and knowledge representation issues, which set the scene for connectionist intuitionistic
learning, since effective knowledge representation should precede learning [15]. Still, we
base the representation on standard, simple neural network architectures, aiming at future
work on experimental learning within the model proposed here.
A key contribution of this paper is the proposal to shift the notion of logical implication
(and negation) in neural networks from the standard notion of implication as a partial function from input to output (and of negation as failure to activate a neuron), to an intuitionistic
notion which we will see can be implemented in neural networks if we make use of network
ensembles. We claim that the intuitionistic interpretation introduced here will make sense
for a number of problems in neural computation in the same way that intuitionistic logic is
more appropriate than classical logic in a number of computational settings. We will start
by illustrating the proposed computational model in an appropriate constructive reasoning,
distributed knowledge representation scenario, namely, the wise men puzzle [7]. Then, we
will show how ensembles of Connectionist Inductive Learning and Logic Programming
(C-ILP) networks [3] can compute intuitionistic modal knowledge. The networks are set
up by an Intuitionistic Modal Algorithm introduced in this paper. A proof that the algorithm
produces a neural network ensemble that computes a semantics of its associated intuitionistic modal theory is then given. Furthermore, the networks in the ensemble are kept simple
and in a modular structure, and may be trained from examples with the use of standard
learning algorithms such as backpropagation [11].
In Section 2, we present the basic concepts of intuitionistic reasoning used in the paper. In
Section 3, we motivate the proposed model using the wise men puzzle. In Section 4, we
introduce the Intuitionistic Modal Algorithm, which translates intuitionistic modal theories
into neural network ensembles, and prove that the ensemble computes a semantics of the
theory. Section 5 concludes the paper and discusses directions for future work.
2 Background
In this section, we present some basic concepts of artificial neural networks and intuitionistic programs used throughout the paper. We concentrate on ensembles of single hidden
layer feedforward networks, and on recurrent networks typically with feedback only from
the output to the input layer. Feedback is used with the sole purpose of denoting that the
output of a neuron should serve as the input of another neuron when we run the network,
i.e. the weight of any feedback connection is fixed at 1. We use bipolar semi-linear activation functions h(x) = 2/(1 + e^(-βx)) - 1 with inputs in {-1, 1}. Throughout, we will use 1 to
denote truth-value true, and -1 to denote truth-value false.
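For later reference, h and its inverse (which appears as h⁻¹(Amin) in the translation algorithm of Section 4) can be sketched as follows (assumed Python; β is the steepness parameter):

    import math

    def h(x, beta=1.0):
        # Bipolar semi-linear activation, mapping the reals onto (-1, 1).
        return 2.0 / (1.0 + math.exp(-beta * x)) - 1.0

    def h_inv(y, beta=1.0):
        # Inverse of h, used when computing the weight bounds below.
        return -math.log((1.0 - y) / (1.0 + y)) / beta

    assert abs(h(h_inv(0.5)) - 0.5) < 1e-12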
Intuitionistic logic was originally developed by Brouwer, and later by Heyting and Kolmogorov [2]. In intuitionistic logics, a statement that there exists a proof of a proposition
x is only made if there is a constructive method of the proof of x. One of the consequences
of Brouwer's ideas is the rejection of the law of the excluded middle, namely α ∨ ¬α, since
one cannot always state that there is a proof of α or of its negation, as accepted in classical logic and in (classical) mathematics. The development of these ideas and applications
in mathematics has led to developments in constructive mathematics and has influenced
several lines of research on logic and computing science [2].
An intuitionistic modal language L includes propositional letters (atoms) p, q, r..., the connectives ∧, ¬, an intuitionistic implication ⇒, the necessity (□) and possibility (◊) modal
operators, where an atom will be necessarily true in a possible world if it is true in every
world that is related to this possible world, while it will be possibly true if it is true in some
world related to this world. Formally, we interpret the language as follows, where formulas
are denoted by α, β, γ...
Definition 1 (Kripke Models for Intuitionistic Modal Logic) Let L be an intuitionistic
language. A model for L is a tuple M = ⟨Ω, R, v⟩ where Ω is a set of worlds, v is a
mapping that assigns to each ω ∈ Ω a subset of the atoms of L, and R is a reflexive,
transitive, binary relation over Ω, such that: (a) (M, ω) ⊨ p iff p ∈ v(ω) (for atom p);
(b) (M, ω) ⊨ ¬α iff for all ω′ such that R(ω, ω′), (M, ω′) ⊭ α; (c) (M, ω) ⊨ α ∧ β iff
(M, ω) ⊨ α and (M, ω) ⊨ β; (d) (M, ω) ⊨ α ⇒ β iff for all ω′ with R(ω, ω′) we have
(M, ω′) ⊨ β whenever we have (M, ω′) ⊨ α; (e) (M, ω) ⊨ □α iff for all ω′ ∈ Ω, if
R(ω, ω′) then (M, ω′) ⊨ α; (f) (M, ω) ⊨ ◊α iff there exists ω′ ∈ Ω such that R(ω, ω′)
and (M, ω′) ⊨ α.
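The clauses of Definition 1 translate directly into a recursive model checker. The sketch below is an assumed Python encoding (formulas as nested tuples, a hypothetical representation) and presumes R has already been closed under reflexivity and transitivity:

    def holds(model, w, formula):
        # model = (worlds, R, v): R a set of pairs, v maps worlds to atom sets.
        # Formulas: atom 'p', ('not', f), ('and', f, g), ('implies', f, g),
        # ('box', f), ('dia', f) -- clauses (a)-(f) of Definition 1.
        worlds, R, v = model
        succ = [u for u in worlds if (w, u) in R]
        if isinstance(formula, str):                  # (a) atom
            return formula in v[w]
        op = formula[0]
        if op == 'not':                               # (b)
            return all(not holds(model, u, formula[1]) for u in succ)
        if op == 'and':                               # (c)
            return holds(model, w, formula[1]) and holds(model, w, formula[2])
        if op == 'implies':                           # (d)
            return all(holds(model, u, formula[2])
                       for u in succ if holds(model, u, formula[1]))
        if op == 'box':                               # (e)
            return all(holds(model, u, formula[1]) for u in succ)
        if op == 'dia':                               # (f)
            return any(holds(model, u, formula[1]) for u in succ)
        raise ValueError(op)

    # toy model: two worlds with w1 R w2, and p true only at w2
    M = ({'w1', 'w2'},
         {('w1', 'w1'), ('w2', 'w2'), ('w1', 'w2')},
         {'w1': set(), 'w2': {'p'}})
    print(holds(M, 'w1', ('dia', 'p')))   # True
    print(holds(M, 'w1', ('not', 'p')))   # False: p holds at the related w2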
We now define labelled intuitionistic programs as sets of intuitionistic rules, where each
rule is labelled by the world at which it holds, similarly to Gabbay's Labelled Deductive
Systems [8].
Definition 2 (Labelled Intuitionistic Program) A Labelled Intuitionistic Program is a finite
set of rules C of the form ωi : A1, ..., An ⇒ A0 (where "," abbreviates "∧", as usual),
and a finite set of relations R between worlds ωi (1 ≤ i ≤ m) in C, where Ak (0 ≤ k ≤ n)
are atoms and ωi is a label representing a world in which the associated rule holds.
To deal with intuitionistic negation, we adopt the approach of [10], as follows. We rename
any negative literal ¬A as an atom A′ not present originally in the language. This form of
renaming allows our definition of labelled intuitionistic programs above to consider atoms
only. For example, given A1, ..., A′k, ..., An ⇒ A0, where A′k is a renaming of ¬Ak, an
interpretation that assigns true to A′k represents that ¬Ak is true; it does not represent that
Ak is false. Following Definition 1 (intuitionistic negation), A′ will be true in a world ωi if
and only if A does not hold in every world ωj such that R(ωi, ωj).
Finally, we extend labelled intuitionistic programs to include modalities.
Definition 3 (Labelled Intuitionistic Modal Program) A modal atom is of the form MA
where M ∈ {□, ◊} and A is an atom. A Labelled Intuitionistic Modal Program is a finite
set of rules C of the form ωi : MA1, ..., MAn ⇒ MA0, where MAk (0 ≤ k ≤ n) are
modal atoms and ωi is a label representing a world in which the associated rule holds, and
a finite set of (accessibility) relations R between worlds ωi (1 ≤ i ≤ m) in C.
3 Motivating Scenario
In this section, we consider an archetypal testbed for distributed knowledge representation,
namely, the wise men puzzle [7], and model it intuitionistically in a neural network ensemble. Our aim is to illustrate the combination of neural networks and intuitionistic modal
reasoning. The formalisation of our computational model will be given in Section 4.
A certain king wishes to test his three wise men. He arranges them in a circle so that they
can see and hear each other. They are all perceptive, truthful and intelligent, and this is
common knowledge in the group. It is also common knowledge among them that there are
three red hats and two white hats, and five hats in total. The king places a hat on the head
of each wise man in a way that they are not able to see the colour of their own hats, and
then asks each one whether they know the colour of the hats on their heads.
The puzzle illustrates a situation in which intuitionistic implication and intuitionistic negation occur. Knowledge evolves in time, with the current knowledge persisting in time. For
example, at the first round it is known that there are at most two white hats on the wise
men's heads. Then, if the wise men get to a second round, it becomes known that there is
at most one white hat on their heads.1 This new knowledge subsumes the previous knowledge, which in turn persists. This means that if A ⇒ B is true at a world t1 then A ⇒ B
will be true at a world t2 that is related to t1 (intuitionistic implication). Now, in any situation in which a wise man knows that his hat is red, this knowledge - constructed with
the use of sound reasoning processes - cannot be refuted. In other words, in this puzzle, if
¬A is true at world t1 then A cannot be true at a world t2 that is related to t1 (intuitionistic
negation).
We model the wise men puzzle by constructing the relative knowledge of each wise man
along time points. This allows us to explicitly represent the relativistic notion of knowledge, which is a principle of intuitionistic reasoning. For simplicity, we refer to wise man
1 (respectively, 2 and 3) as agent 1 (respectively, 2 and 3). The resulting model is a two-dimensional network ensemble (agents × time), containing three networks in each dimension. In addition to pi - denoting the fact that wise man i wears a red hat - to model each
agent's individual knowledge, we need to use a modality Kj, j ∈ {1, 2, 3}, which represents the relative notion of knowledge at each time point t1, t2, t3. Thus, Kj pi denotes the
fact that agent j knows that agent i wears a red hat. The K modality above corresponds to
the □ modality in intuitionistic modal reasoning, as customary in the logics of knowledge
[7], and as exemplified below.
First, we model the fact that each agent knows the colour of the others' hats. For example,
if wise man 3 wears a red hat (neuron p3 is active) then wise man 1 knows that wise man
3 wears a red hat (neuron Kp3 is active for wise man 1). We then need to model the
reasoning process of each wise man. In this example, let us consider the case in which
neurons p1 and p3 are active. For agent 1, we have the rule t1 : K1¬p2 ∧ K1¬p3 ⇒ K1p1,
which states that agent 1 can deduce that he is wearing a red hat if he knows that the other
agents are both wearing white hats. Analogous rules exist for agents 2 and 3. As before,
the implication is intuitionistic, so that it persists at t2 and t3 as depicted in Figure 1 for
wise man 1 (represented via hidden neuron h1 in each network). In addition, according to
the philosophy of intuitionistic negation, we may only conclude that agent 1 knows ¬p2, if
in every world envisaged by agent 1, p2 is not derived. This is illustrated with the use of
dotted lines in Figure 1, in which, e.g., if neuron Kp2 is not active at t3 then neuron K¬p2
will be active at t2. As a result, the network ensemble will never derive p2 (as one should
expect), and thus it will derive K1¬p2 and K3¬p2.2
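The round-by-round deduction that the ensemble implements can be checked symbolically; the sketch below (assumed Python; the hat assignment matches the example above, with p1 and p3 red) encodes the fact that a wise man can announce once the white hats he sees, together with the rounds already passed in silence, account for all the white hats possible:

    def wise_men(red):
        # red: dict agent -> True if that agent's hat is red.
        # Round 1: seeing two whites settles it (only two white hats exist).
        # Round 2: after a silent round, at most one white hat remains, so
        # seeing one white settles it; round 3 needs none.
        agents = [1, 2, 3]
        for rnd in (1, 2, 3):
            for a in agents:
                whites_seen = sum(not red[b] for b in agents if b != a)
                if whites_seen >= 3 - rnd:
                    return a, rnd
        return None

    print(wise_men({1: True, 2: False, 3: True}))   # (1, 2): agent 1, round 2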
1 This is because if there were two white hats on their heads, one of them would have known (and
have said), in the first round, that his hat was red, for he would have been seeing the other two with
white hats.
2 To complete the formalisation of the problem, the following rules should also hold at t2 (and at
t3): K1¬p2 ⇒ K1p1 and K1¬p3 ⇒ K1p1. Analogous rules exist for agents 2 and 3.
[Figure 1 shows three networks for wise man 1, at time points t3, t2, and t1, each with input
and output neurons Kp1, Kp2, Kp3, K¬p2, K¬p3 and hidden neurons h1-h5; solid arrows
copy the intuitionistic implication across time points, and dotted arrows implement
intuitionistic negation between networks.]
Figure 1: Wise men puzzle: Intuitionistic negation and implication.
4 Connectionist Intuitionistic Modal Reasoning
The wise men puzzle example of Section 3 shows that simple, single-hidden layer neural
networks can be combined in a modular structure where each network represents a possible
world in the Kripke structure of Definition 1. The way that the networks should then be
inter-connected can be defined by following a semantics for □ and ◊, and for ⇒ and ¬ from
intuitionistic logic. In this section, we see how exactly we construct a network ensemble
given an intuitionistic modal program. We introduce a translation algorithm, which takes
the program as input and produces the ensemble as output by setting the initial architecture,
set of weights, and thresholds of the networks according to a Kripke semantics for the
program. We then prove that the translation is correct, and thus that the network ensemble
can be used to compute the logical consequences of the program in parallel.
Before we present the algorithm, let us illustrate informally how □, ◊, ⇒, and ¬ are represented in the ensemble. We follow the key idea behind Connectionist Modal Logics (CML)
to represent Kripke models in neural networks [6]. Each possible world is represented by
a single hidden layer neural network. In each network, input and output neurons represent
atoms or modal atoms of the form A, ¬A, □A, or ◊A, while each hidden neuron encodes
a rule. For example, in Figure 1, hidden neuron h1 encodes a rule of the form A ∧ B ⇒ C.
Thresholds and weights must be such that the hidden layer computes a logical and of the
input layer, while the output layer computes a logical or of the hidden layer.3 Furthermore,
in each network, each output neuron is connected to its corresponding input neuron with a
weight fixed at 1.0 (as depicted in Figure 1 for K¬p2 and K¬p3), so that chains of the form
A ⇒ B and B ⇒ C can be represented and computed. This basically characterises C-ILP
networks [3]. Now, in CML, we allow for an ensemble of C-ILP networks, each network
representing knowledge in a (learnable) possible world. In addition, we allow for a number
of fixed feedforward and feedback connections to occur among different networks in the
ensemble, as shown in Figure 1. These are defined as follows: in the case of □, if neuron
□A is activated (true) in network (world) ωi then A must be activated in every network
ωj that is related to ωi (this is analogous to the situation in which we activate K1p3 and
K2p3 whenever p3 is active). Dually, if A is active in every ωj then □A must be activated
in ωi (this is done with the use of feedback connections and a hidden neuron that computes
a logical and, as detailed in the algorithm below). In the case of ◊, if ◊A is activated in
network ωi then A must be activated in at least one network ωj that is related to ωi (we do
this by choosing an arbitrary ωj to make A active). Dually, if A is activated in any ωj that is
related to ωi then ◊A must be activated in ωi (this is done with the use of a hidden neuron
that computes a logical or, also as detailed in the algorithm below). Now, in the case of ⇒,
according to the semantics of intuitionistic implication, ωi : A ⇒ B and R(ωi, ωj) imply
ωj : A ⇒ B. We implement this by copying the neural representation of A ⇒ B from
ωi to ωj, as done via h1 in Figure 1. Finally, in the case of ¬, we need to make sure that
¬A is activated in ωi if, for every ωj such that R(ωi, ωj), A is not active in ωj. This is implemented with the use of negative weights (to account for the fact that the non-activation
of a neuron needs to activate another neuron), as depicted in Figure 1 (dashed arrows), and
detailed in the algorithm below.
3 For example, if A ∧ B ⇒ D and C ⇒ D then a hidden neuron h1 is used to connect A and B
to D, and a hidden neuron h2 is used to connect C to D such that if h1 or h2 is activated then D is
activated.
We are now in a position to introduce the Intuitionistic Modal Algorithm. Let P =
{P1, ..., Pn} be a labelled intuitionistic modal program with rules of the form ωi :
MA1, ..., MAk ⇒ MA0, where each Aj (0 ≤ j ≤ k) is an atom and M ∈ {□, ◊},
1 ≤ i ≤ n. Let N = {N1, ..., Nn} be a neural network ensemble with each network Ni
corresponding to program Pi. Let q denote the number of rules occurring in P. Consider
that the atoms of Pi are numbered from 1 to νi such that the input and output layers of Ni
are vectors of length νi, where the j-th neuron represents the j-th atom of Pi. In addition,
let Amin denote the minimum activation for a neuron to be considered active (or true),
Amin ∈ (0, 1); for each rule rl in each program Pi, let kl denote the number of atoms in
the body of rule rl, and let µl denote the number of rules in Pi with the same consequent
as rl (including rl). Let MAXrl(kl, µl) denote the greater of kl and µl for rule rl, and
let MAXP(k1, ..., kq, µ1, ..., µq) denote the greatest of k1, ..., kq, µ1, ..., µq for program
P. Below, we use k as a shorthand for k1, ..., kq, and µ as a shorthand for µ1, ..., µq. The
equations in the algorithm come from the proof of Theorem 1, given in the sequel.
Intuitionistic Modal Algorithm
1. Rename each modal atom MAj by a new atom not occurring in P of the form A□j if M = □, or
A◊j if M = ◊;
2. For each rule rl of the form A1, ..., Ak ⇒ A0 in Pi (1 ≤ i ≤ n) such that R(ωi, ωj), do: add a
rule A1, ..., Ak ⇒ A0 to Pj (1 ≤ j ≤ n).
3. Calculate Amin > (MAXP(k, µ, n) - 1)/(MAXP(k, µ, n) + 1);
4. Calculate W ≥ (2/β)·(ln(1 + Amin) - ln(1 - Amin))/(MAXP(k, µ)·(Amin - 1) + Amin + 1);
5. For each rule rl of the form A1, ..., Ak ⇒ A0 (k ≥ 0) in Pi (1 ≤ i ≤ n), do:
(a) Add a neuron Nl to the hidden layer of neural network Ni associated with Pi; (b) Connect each
neuron Ai (1 ≤ i ≤ k) in the input layer of Ni to Nl and set the connection weight to W; (c)
Connect Nl to neuron A0 in the output layer of Ni and set the connection weight to W; (d) Set the
threshold θl of Nl to θl = ((1 + Amin)·(kl - 1)/2)·W; (e) Set the threshold θA0 of A0 in the
output layer of Ni to θA0 = ((1 + Amin)·(1 - µl)/2)·W. (f) For each atom of the form A′ in rl,
do:
(i) Add a hidden neuron NA′ to Ni; (ii) Set the step function s(x) as the activation function of
NA′;4 (iii) Set the threshold θA′ of NA′ such that n - (1 + Amin) < θA′ < n·Amin; (iv) For each
network Nj corresponding to program Pj (1 ≤ j ≤ n) in P such that R(ωi, ωj), do: Connect the
output neuron A of Nj to the hidden neuron NA′ of Ni and set the connection weight to -1; and
Connect the hidden neuron NA′ of Ni to the output neuron A′ of Ni and set the connection weight
to WI such that WI > h⁻¹(Amin) + µA′·W + θA′.
4 Any hidden neuron created to encode negation (such as h4 in Figure 1) shall have a non-linear
activation function s(x) = y, where y = 1 if x > 0, and y = 0 otherwise. Such neurons encode (meta-level) knowledge about negation, while the other hidden neurons encode (object-level)
knowledge about the problem domain. The former are not expected to be trained by examples and,
as a result, the use of the step function will simplify the algorithm. The latter are to be trained, and
therefore require a differentiable, semi-linear activation function.
6. For each output neuron A◊j in network Ni, do:
(a) Add a hidden neuron AMj and an output neuron Aj to an arbitrary network Nz such that
R(ωi, ωz); (b) Set the step function s(x) as the activation function of AMj, and set the semi-linear
function h(x) as the activation function of Aj; (c) Connect A◊j in Ni to AMj and set the connection
weight to 1; (d) Set the threshold θM of AMj such that -1 < θM < Amin; (e) Set the threshold θAj
of Aj in Nz such that θAj = ((1 + Amin)·(1 - µAj)/2)·W; (f) Connect AMj to Aj in Nz and set
the connection weight to WM > h⁻¹(Amin) + µAj·W + θAj.
7. For each output neuron A□j in network Ni, do:
(a) Add a hidden neuron AMj to each Nu (1 ≤ u ≤ n) such that R(ωi, ωu), and add an output
neuron Aj to Nu if Aj ∉ Nu; (b) Set the step function s(x) as the activation function of AMj, and
set the semi-linear function h(x) as the activation function of Aj; (c) Connect A□j in Ni to AMj and
set the connection weight to 1; (d) Set the threshold θM of AMj such that -1 < θM < Amin; (e) Set
the threshold θAj of Aj in each Nu such that θAj = ((1 + Amin)·(1 - µAj)/2)·W; (f) Connect
AMj to Aj in Nu and set the connection weight to WM > h⁻¹(Amin) + µAj·W + θAj.
8. For each output neuron Aj in network Nu such that R(ωi, ωu), do:
(a) Add a hidden neuron A∨j to Ni; (b) Set the step function s(x) as the activation function of A∨j;
(c) For each output neuron A◊j in Ni, do:
(i) Connect Aj in Nu to A∨j and set the connection weight to 1; (ii) Set the threshold θ∨ of A∨j such
that -n·Amin < θ∨ < Amin - (n - 1); (iii) Connect A∨j to A◊j in Ni and set the connection weight
to WM > h⁻¹(Amin) + µAj·W + θAj.
9. For each output neuron Aj in network Nu such that R(ωi, ωu), do:
(a) Add a hidden neuron A∧j to Ni; (b) Set the step function s(x) as the activation function of A∧j;
(c) For each output neuron A□j in Ni, do:
(i) Connect Aj in Nu to A∧j and set the connection weight to 1; (ii) Set the threshold θ∧ of A∧j such
that n - (1 + Amin) < θ∧ < n·Amin; (iii) Connect A∧j to A□j in Ni and set the connection weight
to WM > h⁻¹(Amin) + µAj·W + θAj.
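To make steps 3-5 concrete, the sketch below (assumed Python; a single rule A, B ⇒ C in one network, β = 1) computes Amin, W, and the thresholds, and checks that the hidden neuron behaves as a logical and and the output neuron as a logical or:

    import math

    def h(x, beta=1.0):
        return 2.0 / (1.0 + math.exp(-beta * x)) - 1.0

    # One rule r: A, B => C, so k = 2 body atoms, mu = 1 rule with head C,
    # and n = 1 network in the ensemble.
    k, mu, n = 2, 1, 1
    maxp = max(k, mu, n)
    a_min = (maxp - 1) / (maxp + 1) + 0.1            # step 3: above the bound
    w = 2.0 * (math.log(1 + a_min) - math.log(1 - a_min)) \
        / (maxp * (a_min - 1) + a_min + 1) + 0.1     # step 4: above the bound
    theta_hidden = (1 + a_min) * (k - 1) / 2 * w     # step 5(d)
    theta_out = (1 + a_min) * (1 - mu) / 2 * w       # step 5(e)

    def hidden(a, b):      # logical and of the body atoms
        return h(w * a + w * b - theta_hidden)

    def output(hid):       # logical or over the mu hidden neurons (mu = 1)
        return h(w * hid - theta_out)

    print(hidden(1.0, 1.0) > a_min)                  # True: A and B active
    print(hidden(1.0, -1.0) > a_min)                 # False: one atom inactive
    print(output(hidden(1.0, 1.0)) > a_min)          # True: C is derived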
Finally, we prove that N is equivalent to P.
Theorem 1 (Correctness of Intuitionistic Modal Algorithm) For any intuitionistic modal
program P there exists an ensemble of neural networks N such that N computes the intuitionistic modal semantics of P.
Proof The algorithm to build each individual network in the ensemble is that of C-ILP,
which we know is provably correct [3]. The algorithm to include modalities is that of
CML, which is also provably correct [6]. We need to consider when modalities and intuitionistic negation are to be encoded together. Consider an output neuron A0 with neurons
M (encoding modalities) and neurons n (encoding negation) among its predecessors in a
network's hidden layer. There are four cases to consider. (i) Both neurons M and neurons
n are not activated: since the activation function of neurons M and n is the step function,
their activation is zero, and thus this case reduces to C-ILP. (ii) Only neurons M are activated: from the algorithm above, A0 will also be activated (with minimum input potential
WM + δ, where δ ∈ ℝ). (iii) Only neurons n are activated: as before, A0 will also be
activated (now with minimum input potential WI + δ). (iv) Both neurons M and neurons
n are activated: the input potential of A0 is at least WM + WI + δ. Since WM > 0 and
WI > 0, and since the activation function of A0, h(x), is monotonically increasing, A0
will be activated whenever both M and n neurons are activated. This completes the proof.
5 Concluding Remarks
In this paper, we have presented a new model of computation that integrates neural networks and constructive, intuitionistic modal reasoning. We have defined labelled intuitionistic modal programs, and have presented an algorithm to translate the intuitionistic
theories into ensembles of C-ILP neural networks, and showed that the ensembles compute a semantics of the corresponding theories. As a result, each ensemble can be seen as a
new massively parallel model for the computation of intuitionistic modal logic. In addition,
since each network can be trained efficiently using, e.g., backpropagation, one can adapt the
network ensemble by training possible world representations from examples. Work along
these lines has been done in [4, 5], where learning experiments in possible worlds settings
were investigated. As future work, we shall consider learning experiments based on the
constructive model introduced in this paper. Extensions of this work also include the study
of how to represent other non-classical logics such as branching time temporal logics, and
conditional logics of normality, which are relevant for cognitive and neural computation.
Acknowledgments
Artur Garcez is partly supported by the Nuffield Foundation and The Royal Society. Luis Lamb is
partly supported by the Brazilian Research Council CNPq and by the CAPES and FAPERGS foundations.
References
[1] A. Browne and R. Sun. Connectionist inference models. Neural Networks, 14(10):1331-1355, 2001.
[2] D. Van Dalen. Intuitionistic logic. In D. M. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, volume 5. Kluwer, 2nd edition, 2002.
[3] A. S. d'Avila Garcez, K. Broda, and D. M. Gabbay. Neural-Symbolic Learning Systems: Foundations and Applications. Perspectives in Neural Computing. Springer-Verlag, 2002.
[4] A. S. d'Avila Garcez and L. C. Lamb. Reasoning about time and knowledge in neural-symbolic learning systems. In Advances in Neural Information Processing Systems 16, Proceedings of NIPS 2003, pages 921-928, Vancouver, Canada, 2004. MIT Press.
[5] A. S. d'Avila Garcez, L. C. Lamb, K. Broda, and D. M. Gabbay. Applying connectionist modal logics to distributed knowledge representation problems. International Journal on Artificial Intelligence Tools, 13(1):115-139, 2004.
[6] A. S. d'Avila Garcez, L. C. Lamb, and D. M. Gabbay. Connectionist modal logics. Theoretical Computer Science. Forthcoming.
[7] R. Fagin, J. Halpern, Y. Moses, and M. Vardi. Reasoning about Knowledge. MIT Press, 1995.
[8] D. M. Gabbay. Labelled Deductive Systems. Clarendon Press, Oxford, 1996.
[9] D. M. Gabbay, C. Hogger, and J. A. Robinson, editors. Handbook of Logic in Artificial Intelligence and Logic Programming, volume 1-5, Oxford, 1994-1999. Clarendon Press.
[10] M. Gelfond and V. Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing, 9:365-385, 1991.
[11] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
[12] L. Shastri. Advances in SHRUTI: a neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony. Applied Intelligence, 11:79-108, 1999.
[13] G. G. Towell and J. W. Shavlik. Knowledge-based artificial neural networks. Artificial Intelligence, 70(1):119-165, 1994.
[14] A. M. Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950.
[15] L. G. Valiant. Robust logics. Artificial Intelligence, 117:231-253, 2000.
[16] V. Vapnik. The nature of statistical learning theory. Springer-Verlag, 1995.
Graph Based Semi-Supervised Classification
Ashish Kapoor? , Yuan (Alan) Qi? , Hyungil Ahn? and Rosalind W. Picard?
?
MIT Media Laboratory, Cambridge, MA 02139
{kapoor, hiahn, picard}@media.mit.edu
?
MIT CSAIL, Cambridge, MA 02139
[email protected]
Abstract
There have been many graph-based approaches for semi-supervised classification. One problem is that of hyperparameter learning: performance
depends greatly on the hyperparameters of the similarity graph, transformation of the graph Laplacian and the noise model. We present a
Bayesian framework for learning hyperparameters for graph-based semisupervised classification. Given some labeled data, which can contain
inaccurate labels, we pose the semi-supervised classification as an inference problem over the unknown labels. Expectation Propagation is
used for approximate inference and the mean of the posterior is used for
classification. The hyperparameters are learned using EM for evidence
maximization. We also show that the posterior mean can be written in
terms of the kernel matrix, providing a Bayesian classifier to classify new
points. Tests on synthetic and real datasets show cases where there are
significant improvements in performance over the existing approaches.
1
Introduction
A lot of recent work on semi-supervised learning is based on regularization on graphs [5].
The basic idea is to first create a graph with the labeled and unlabeled data points as the
vertices and with the edge weights encoding the similarity between the data points. The aim
is then to obtain a labeling of the vertices that is both smooth over the graph and compatible
with the labeled data. The performance of most of these algorithms depends upon the edge
weights of the graph. Often the smoothness constraints on the labels are imposed using a
transformation of the graph Laplacian and the parameters of the transformation affect the
performance. Further, there might be other parameters in the model, such as parameters
to address label noise in the data. Finding a right set of parameters is a challenge, and
usually the method of choice is cross-validation, which can be prohibitively expensive for
real-world problems and problematic when we have few labeled data points.
Most of the methods ignore the problem of learning hyperparameters that determine the
similarity graph and there are only a few approaches that address this problem. Zhu et al.
[8] propose learning non-parametric transformation of the graph Laplacians using semidefinite programming. This approach assumes that the similarity graph is already provided;
thus, it does not address the learning of edge weights. Other approaches include label
entropy minimization [7] and evidence-maximization using the Laplace approximation [9].
This paper provides a new way to learn the kernel and hyperparameters for graph based
semi-supervised classification, while adhering to a Bayesian framework. The semisupervised classification is posed as a Bayesian inference. We use the evidence to simultaneously tune the hyperparameters that define the structure of the similarity graph,
the parameters that determine the transformation of the graph Laplacian, and any other
parameters of the model. Closest to our work is Zhu et al. [9], where they proposed a
Laplace approximation for learning the edge weights. We use Expectation Propagation
(EP), a technique for approximate Bayesian inference that provides better approximations
than Laplace. An additional contribution is a new EM algorithm to learn the hyperparameters for the edge weights, the parameters of the transformation of the graph spectrum.
More importantly, we explicitly model the level of label noise in the data, while [9] does
not do. We provide what may be the first comparison of hyperparameter learning with
cross-validation on state-of-the-art algorithms (LLGC [6] and harmonic fields [7]).
2 Bayesian Semi-Supervised Learning
We assume that we are given a set of data points X = {x1 , .., xn+m }, of which XL =
{x1 , .., xn } are labeled as tL = {t1 , .., tn } and XU = {xn+1 , .., xn+m } are unlabeled.
Throughout this paper we limit ourselves to two-way classification, thus $t \in \{-1, 1\}$. Our model assumes that the hard labels $t_i$ depend upon hidden soft labels $y_i$ for all i. Given
the dataset D = [{XL , tL }, XU ], the task of semi-supervised learning is then to infer the
posterior p(tU |D), where tU = [tn+1 , .., tn+m ]. The posterior can be written as:
$p(t_U|D) = \int_y p(t_U|y)\, p(y|D)$    (1)
In this paper, we propose to first approximate the posterior p(y|D) and then use (1) to
classify the unlabeled data. Using the Bayes rule we can write:
$p(y|D) = p(y|X, t_L) \propto p(y|X)\, p(t_L|y)$
The term, p(y|X) is the prior. It enforces a smoothness constraint and depends upon the
underlying data manifold. Similar to the spirit of graph regularization [5] we use similarity
graphs and their transformed Laplacian to induce priors on the soft labels y. The second
term, p(tL |y) is the likelihood that incorporates the information provided by the labels.
In this paper, p(y|D) is inferred using Expectation Propagation, a technique for approximate Bayesian inference [3]. In the following subsections first we describe the prior and
the likelihood in detail and then we show how evidence maximization can be used to learn
hyperparameters and other parameters in the model.
2.1 Priors and Regularization on Graphs
The prior plays a significant role in semi-supervised learning, especially when there is only
a small amount of labeled data. The prior imposes a smoothness constraint and should be
such that it gives higher probability to the labelings that respect the similarity of the graph.
The prior, p(y|X), is constructed by first forming an undirected graph over the data points.
The data points are the nodes of the graph and edge-weights between the nodes are based
on similarity. This similarity is usually captured using a kernel. Examples of kernels
include RBF, polynomial, etc. Given the data points and a kernel, we can construct an $(n + m) \times (n + m)$ kernel matrix K, where $K_{ij} = k(x_i, x_j)$ for all $i \in \{1, .., n + m\}$.
Let us consider the matrix $\tilde{K}$, which is the same as the matrix K except that the diagonals are set to zero. Further, if G is a diagonal matrix such that $G_{ii} = \sum_j \tilde{K}_{ij}$, then we can construct the combinatorial Laplacian ($\Delta = G - \tilde{K}$) or the normalized Laplacian ($\tilde{\Delta} = I - G^{-1/2}\tilde{K}G^{-1/2}$) of the graph. For brevity, in the text we use $\Delta$ as a notation for both Laplacians. Both Laplacians are symmetric and positive semidefinite. Consider the eigendecomposition of $\Delta$, where $\{v_i\}$ denote the eigenvectors and $\{\lambda_i\}$ the corresponding eigenvalues; thus, we can write $\Delta = \sum_{i=1}^{n+m} \lambda_i v_i v_i^T$. Usually, a transformation $r(\Delta) = \sum_{i=1}^{n+m} r(\lambda_i) v_i v_i^T$ that modifies the spectrum of $\Delta$ is used as a regularizer. Specifically, the smoothness imposed by this regularizer prefers soft labelings for which the norm $y^T r(\Delta)\, y$ is small. Equivalently, we can interpret this probabilistically as follows:
$p(y|X) \propto e^{-\frac{1}{2} y^T r(\Delta)\, y} = N(0, r(\Delta)^{-1})$    (2)
where $r(\Delta)^{-1}$ denotes the pseudo-inverse if the inverse does not exist. Equation (2) suggests that labelings with a small value of $y^T r(\Delta)\, y$ are more probable than others. Note that when $r(\Delta)$ is not invertible, the prior is improper. The fact that the prior can be written as a Gaussian is advantageous, as techniques for approximate inference can be easily applied. Also, different choices of transformation function lead to different semi-supervised learning algorithms. For example, the approach based on Gaussian fields and harmonic functions (Harmonic) [7] can be thought of as using the transformation $r(\Delta) = \Delta$ on the combinatorial Laplacian without any noise model. Similarly, the approach based on local and global consistency (LLGC) [6] can be thought of as using the same transformation but on the normalized Laplacian and a Gaussian likelihood. Therefore, it is easy to see that most of these algorithms can exploit the proposed evidence maximization framework. In the following we focus only on the parametric linear transformation $r(\Delta) = \Delta + \delta$. Note that this transformation removes zero eigenvalues from the spectrum of $\Delta$.
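As an illustration of the construction above, the following sketch builds $\tilde{K}$, G, and both Laplacians from an RBF similarity kernel, and forms the prior covariance for the linear transformation $r(\Delta) = \Delta + \delta$ (variable names and parameter values are ours, not the paper's).

import numpy as np

def graph_laplacians(X, sigma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))               # RBF kernel matrix
    K_tilde = K - np.diag(np.diag(K))                  # K with zeroed diagonal
    g = K_tilde.sum(axis=1)
    comb = np.diag(g) - K_tilde                        # combinatorial Laplacian
    D = np.diag(1.0 / np.sqrt(g))
    norm = np.eye(len(X)) - D @ K_tilde @ D            # normalized Laplacian
    return comb, norm

X = np.random.randn(20, 2)
Delta, Delta_norm = graph_laplacians(X, sigma=1.0)
delta = 1e-4
prior_cov = np.linalg.inv(Delta + delta * np.eye(len(X)))   # N(0, r(Delta)^{-1}) prior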
2.2 The Likelihood
Assuming conditional independence of the observed labels given the hidden soft labels, the likelihood $p(t_L|y)$ can be written as $p(t_L|y) = \prod_{i=1}^{n} p(t_i|y_i)$. The likelihood models the
probabilistic relation between the observed label ti and the hidden label yi . Many realworld datasets contain hand-labeled data and can often have labeling errors. While most
people tend to model label errors with a linear or quadratic slack in the likelihood, it has
been noted that such an approach does not address the cases where label errors are far from
the decision boundary [2]. The flipping likelihood can handle errors even when they are far
from the decision boundary and can be written as:
$p(t_i|y_i) = \epsilon\,(1 - \Phi(y_i \cdot t_i)) + (1 - \epsilon)\,\Phi(y_i \cdot t_i) = \epsilon + (1 - 2\epsilon)\,\Phi(y_i \cdot t_i)$    (3)

Here, $\Phi$ is the step function and $\epsilon$ is the labeling error rate; the model admits the possibility of labeling errors with probability $\epsilon$. This likelihood has been used earlier in the context
of Gaussian process classification [2][4]. The above described likelihood explicitly models
the labeling error rate; thus, the model should be more robust to the presence of label noise
in the data. The experiments in this paper use the flipping noise likelihood shown in (3).
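The flipping likelihood (3) is a one-liner in code, with the step function implemented as an indicator (the error rate below is an assumed value):

import numpy as np

def flipping_likelihood(t, y, eps):
    # p(t_i | y_i) = eps + (1 - 2*eps) * step(y_i * t_i)
    return eps + (1.0 - 2.0 * eps) * (y * t > 0).astype(float)

y = np.array([2.1, -0.3, 0.8])
t = np.array([1, 1, -1])
print(flipping_likelihood(t, y, eps=0.05))   # -> [0.95 0.05 0.05]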
2.3 Approximate Inference
In this paper, we use EP to obtain a Gaussian approximation of the posterior p(y|D).
Although, the prior derived in section 2.1 is a Gaussian distribution, the exact posterior
is not a Gaussian due to the form of the likelihood. We use EP to approximate the posterior
as a Gaussian and then equation (1) can be used to classify unlabeled data points. EP has
been previously used [3] to train a Bayes Point Machine, where EP starts with a Gaussian
prior over the classifiers and produces a Gaussian posterior. Our task is very similar and we
use the same algorithm. In our case, EP starts with the prior defined in (2) and incorporates
the likelihood to approximate the posterior $p(y|D) \approx N(\bar{y}, \Sigma_y)$.
2.4 Hyperparameter Learning
We use evidence maximization to learn the hyperparameters. Denote the parameters of
the kernel as $\Theta_K$ and the parameters of the transformation of the graph Laplacian as $\Theta_T$. Let $\Theta = \{\Theta_K, \Theta_T, \epsilon\}$, where $\epsilon$ is the noise hyperparameter. The goal is to solve $\hat{\Theta} = \arg\max_{\Theta} \log p(t_L|X, \Theta)$.
Non-linear optimization techniques, such as gradient descent or Expectation Maximization
(EM) can be used to optimize the evidence. When the parameter space is small then the
Matlab function fminbnd, based on golden section search and parabolic interpolation,
can be used. The main challenge is that the gradient of evidence is not easy to compute.
Previously, an EM algorithm for hyperparameter learning [2] has been derived for Gaussian Process classification. Using similar ideas we can derive an EM algorithm for semisupervised learning. In the E-step EP is used to infer the posterior q(y) over the soft labels.
The M-step consists of maximizing the lower bound:
$F = \int_y q(y)\, \log \frac{p(y|X, \Theta)\, p(t_L|y, \Theta)}{q(y)}$
$\;\; = -\int_y q(y)\log q(y) + \int_y q(y)\,\log N(y;\, 0,\, r(\Delta)^{-1}) + \sum_{i=1}^{n}\int_{y_i} q(y_i)\,\log\big(\epsilon + (1 - 2\epsilon)\,\Phi(y_i \cdot t_i)\big) \;\le\; \log p(t_L|X, \Theta)$
The EM procedure alternates between the E-step and the M-step until convergence.
• E-Step: Given the current parameters $\Theta^i$, approximate the posterior $q(y) \approx N(\bar{y}, \Sigma_y)$ by EP.
• M-Step: Update $\Theta^{i+1} = \arg\max_{\Theta} \int_y q(y)\, \log \frac{p(y|X, \Theta)\, p(t_L|y, \Theta)}{q(y)}$
In the M-step the maximization with respect to $\Theta$ cannot be computed in closed form, but can be solved using gradient descent. For maximizing the lower bound, we used a gradient-based projected BFGS method with the Armijo rule and a simple line search. When using the linear transformation $r(\Delta) = \Delta + \delta$ on the Laplacian $\Delta$, the prior $p(y|X, \Theta)$ can be written as $N(0, (\Delta + \delta I)^{-1})$. Define $Z = \Delta + \delta I$; then the gradients of the lower bound with respect to the parameters are as follows:
$\frac{\partial F}{\partial \Theta_K} = \frac{1}{2}\,\mathrm{tr}\Big(Z^{-1}\frac{\partial \Delta}{\partial \Theta_K}\Big) - \frac{1}{2}\,\bar{y}^T \frac{\partial \Delta}{\partial \Theta_K}\,\bar{y} - \frac{1}{2}\,\mathrm{tr}\Big(\frac{\partial \Delta}{\partial \Theta_K}\,\Sigma_y\Big)$

$\frac{\partial F}{\partial \delta} = \frac{1}{2}\,\mathrm{tr}(Z^{-1}) - \frac{1}{2}\,\bar{y}^T\bar{y} - \frac{1}{2}\,\mathrm{tr}(\Sigma_y)$

$\frac{\partial F}{\partial \epsilon} = \sum_{i=1}^{n} \frac{1 - 2\,\Phi(t_i \cdot \bar{y}_i)}{\epsilon + (1 - 2\epsilon)\,\Phi(t_i \cdot \bar{y}_i)}, \quad \text{where } \bar{y}_i = \int_y y_i\, q(y)$
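These gradients translate directly into code, given the EP moments $\bar{y}$ and $\Sigma_y$ from the E-step. The sketch below is a literal transcription; array names are ours, and it assumes the labeled points come first.

import numpy as np

def lower_bound_gradients(Delta, dDelta_dThetaK, delta, y_bar, Sigma_y, t, eps):
    Z_inv = np.linalg.inv(Delta + delta * np.eye(len(Delta)))
    dF_dThetaK = 0.5 * np.trace(Z_inv @ dDelta_dThetaK) \
               - 0.5 * y_bar @ dDelta_dThetaK @ y_bar \
               - 0.5 * np.trace(dDelta_dThetaK @ Sigma_y)
    dF_ddelta = 0.5 * np.trace(Z_inv) - 0.5 * y_bar @ y_bar - 0.5 * np.trace(Sigma_y)
    step = (t * y_bar[:len(t)] > 0).astype(float)   # Phi(t_i * y_bar_i) for labeled points
    dF_deps = np.sum((1.0 - 2.0 * step) / (eps + (1.0 - 2.0 * eps) * step))
    return dF_dThetaK, dF_ddelta, dF_deps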
It is easy to show that the provided approximation of the derivative $\frac{\partial F}{\partial \epsilon}$ equals zero when $\epsilon = \frac{k}{n}$, where k is the number of labeled data points differing in sign from their posterior means. The EM procedure described here is susceptible to local minima and in a few cases might be too slow to converge. Especially when the evidence curve is flat and the initial values are far from the optimum, we found that the EM algorithm took very small steps, thus taking a long time to converge.
Whenever we encountered this problem in the experiments, we used an approximate gradient search to find a good value of initial parameters for the EM algorithm. Essentially, since the gradients of the evidence are hard to compute, they can be approximated by the gradients of the lower bound and used in any gradient ascent procedure.
[Figure 1 appears here: six panels of curves. Top row: log evidence versus the kernel hyperparameter (σ) for odd vs even digits, the half-moon data, and PC vs MAC, each plotted for several labeled-set sizes N. Bottom row: recognition accuracy and log evidence versus σ for the moon data (N=10), odd vs even (N=25), and PC vs MAC (N=35).]
Figure 1: Evidence curves showing similar properties across different datasets (half-moon,
odd vs even and PC vs MAC). The top row figures (a), (b) and (c) show the evidence curves
for different amounts of labeled data per class. The bottom row figures (d), (e) and (f) show
the correlation between recognition accuracy on unlabeled points and the evidence.
2.5 Classifying New Points
Since we compute a posterior distribution over the soft labels of the labeled and unlabeled data points, classifying a new point is tricky. Note that from the parameterization lemma for Gaussian Processes [1] it follows that, given a prior distribution $p(y|X) \sim N(0, r(\Delta)^{-1})$, the mean of the posterior $p(y|D)$ is a linear combination of the columns of $r(\Delta)^{-1}$. That is:

$\bar{y} = r(\Delta)^{-1} a$, where $a \in \mathbb{R}^{(n+m) \times 1}$

Further, if the similarity matrix K is a valid kernel matrix¹ then we can write the mean directly in terms of a linear combination of the columns of K:

$\bar{y} = K K^{-1} r(\Delta)^{-1} a = K b$    (4)

Here, $b = [b_1, .., b_{n+m}]^T$ is a column vector equal to $K^{-1} r(\Delta)^{-1} a$. Thus, we have that $\bar{y}_i = \sum_{j=1}^{n+m} b_j\, K(x_i, x_j)$. This provides a natural extension of the framework to classify new points.
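In code, classifying a new point via (4) amounts to solving once for b and then evaluating the kernel expansion at the new input (a minimal sketch assuming K is invertible; names are ours):

import numpy as np

def expansion_weights(K, y_bar):
    # b = K^{-1} y_bar, so that y_bar = K b as in (4)
    return np.linalg.solve(K, y_bar)

def classify_new(b, k_new):
    # k_new[j] = K(x_new, x_j); the predicted label is the sign of the expansion
    return np.sign(k_new @ b)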
3 Experiments
We performed experiments to evaluate the three main contributions of this work: Bayesian
hyperparameter learning, classification of unseen data points, and robustness with respect
to noisy labels. For all the experiments we use the linear transformation r(?) = ? + ?
either on normalized Laplacian (EP-NL) or the combinatorial Laplacian (EP-CL). The experiments were performed on one synthetic (Figure 4(a)) and on three real-world datasets.
Two real-world datasets were the handwritten digits and the newsgroup data from [7]. We
evaluated the task of classifying odd vs even digits (15 labeled, 485 unlabeled and rest new
1
The matrix K is the adjacency matrix of the graph and, depending upon the similarity criterion, might not always be positive semi-definite. For example, discrete graphs induced using K-nearest
neighbors might result in K that is not positive semi-definite.
[Figure 2 appears here: three panels of log-evidence curves on the affect data for several labeled-set sizes N, plotted against (a) the noise parameter ε (K=3, δ=10^-4), (b) K in K-nearest neighbors (δ=10^-4, ε=0.05), and (c) the transformation parameter δ (K=3, ε=0.05).]
Figure 2: Evidence curves showing similar properties across different parameters of the
model. The figures (a), (b) and (c) show the evidence curves for different amount of labeled
data per class for the three different parameters in the model.
[Figure 3 appears here: bar plots with error bars of the error rates of EP-NL, EP-CL, LLGC, Harmonic, and 1-NN.]
Figure 3: Error rates for different algorithms on digits (first column, (a) and (c)) and the newsgroup dataset (second column, (b) and (d)). The figures in the top row, (a) and (b), show error rates on unlabeled points, and the bottom row figures, (c) and (d), on the new points. The results are averaged over 5 runs. Non-overlapping of error bars, the standard error scaled by 1.64, indicates 95% significance of the performance difference.
(unseen) points per class) and classifying PC vs MAC (5 labeled, 895 unlabeled and rest as
new (unseen) points per class). An RBF kernel was used for the handwritten digits, whereas the kernel $K(x_i, x_j) = \exp\big[-\sigma^{-1}\big(1 - \frac{x_i^T x_j}{|x_i|\,|x_j|}\big)\big]$ was used on a 10-NN graph to determine similarity. The third real-world dataset labels the level of interest (61 samples of high interest and
75 samples of low interest) of a child solving a puzzle on the computer. Each data point is
a 19 dimensional real vector summarizing 8 seconds of activity from the face, posture and
the puzzle. The labels in this database are suspected to be noisy because of human labeling.
All the experiments on this data used K-nearest neighbor to determine the kernel matrix.
Hyperparameter learning: Figure 1 (a), (b) and (c) plots log evidence versus kernel parameters that determine the similarity graphs for the different datasets with varying size of
the labeled set per class. The values of $\delta$ and $\epsilon$ were fixed to the values shown in the plots. Figure 2 (a), (b) and (c) plots the log evidence versus the noise parameter ($\epsilon$), the kernel parameter (k in k-NN) and the transformation parameter ($\delta$) for the affect dataset. First,
we see that the evidence curves generated with very little data are flat and as the number of
labeled data points increases we see the curves become peakier. When there is very little
labeled data, there is not much information available for the evidence maximization framework to prefer one parameter value over the other. With more labeled data, the evidence
curves become more informative. Figure 1 (d), (e) and (f) show the correlation between the
evidence curves and the recognition rate on the unlabeled data and reveal that the recognition over the unlabeled data points is highly correlated with the evidence. Note that both
of these effects are observed across all the datasets as well as all the different parameters,
justifying evidence maximization for hyperparameter learning.
[Figure 4 appears here: (a) a two-dimensional toy dataset with two points marked 'noisy label'; (b) and (c) the resulting classifications.]
Figure 4: Semi-supervised classification in presence of label noise. (a) Input data with label
noise. Classification (b) without flipping noise model and with (c) flipping noise model.
How good are the learnt parameters? We performed experiments on the handwritten digits and on the newsgroup data and compared with 1-NN, LLGC and Harmonic approach.
The kernel parameters for both LLGC and Harmonic were estimated using leave one out
cross validation2 . Note that both the approaches can be interpreted in terms of the new
proposed Bayesian framework (see sec 2.1). We performed experiments with both the normalized (EP-NL) and the combinatorial Laplacian (EP-CL) with the proposed framework
to classify the digits and the newsgroup data. The approximate gradient descent was first
used to find an initial value of the kernel parameter for the EM algorithm. All three parameters were learnt and the top row in figure 3 shows the average error obtained for 5
different runs on the unlabeled points. On the task of classifying odd vs even the error rate
for EP-NL was 14.46?4.4%, significantly outperforming the Harmonic (23.98?4.9%) and
1-NN (24.23?1.1%). Since the prior in EP-NL is determined using the normalized Laplacian and there is no label noise in the data, we expect EP-NL to at least work as well as
LLGC (16.02 ? 1.1%). Similarly for the newsgroup dataset EP-CL (9.28?0.7%) significantly beats LLGC (18.03?3.5%) and 1-NN (46.88?0.3%) and is better than Harmonic
(10.86?2.4%). Similar, results are obtained on new points as well. The unseen points were
classified using eq. (4) and the nearest neighbor rule was used for LLGC and Harmonic.
Handling label noise: Figure 4(a) shows a synthetic dataset with noisy labels. We performed semi-supervised classification both with and without the likelihood model given in
(3), and the EM algorithm was used to tune all the parameters, including the noise ($\epsilon$). Besides modifying the spectrum of the Laplacian, the transformation parameter $\delta$ can also be considered as latent noise and provides a quadratic slack for the noisy labels [2]. The results
are shown in figure 4 (b) and (c). The EM algorithm can correctly learn the noise parameter
resulting in a perfect classification. The classification without the flipping model, even with
the quadratic slack, cannot handle the noisy labels far from the decision boundary.
Is there label noise in the data? It was suspected that due to the manual labeling the
affect dataset might have some label noise. To confirm this and as a sanity check, we first
plotted evidence using all the available data. For all the semi-supervised methods in these
experiments, we use 3-NN to induce the adjacency graph. Figure 5(a) shows the plot for
the evidence against the noise parameter (). From the figure, we see that the evidence
peaks at = 0.05 suggesting that the dataset has around 5% of labeling noise. Figure
5(b) shows comparisons with other semi-supervised (LLGC and SVM with graph kernel)
and supervised methods (SVM with RBF kernel) for different sizes of the labeled dataset.
Each point in the graph is the average error on 20 random splits of the data, where the
error bars represent the standard error. EM was used to tune and ? in every run. We
used the same transformation r(?) = ? + ? on the graph kernel in the semi-supervised
SVM. The hyperparameters in both the SVMs (including ? for the semi-supervised case)
were estimated using leave one out. When the number of labeled points are small, both
2
Search space for $\sigma$ (odd vs even) was 100 to 400 with increments of 10, and for $\sigma$ (PC vs MAC) was 0.01 to 0.2 with increments of 0.1.
[Figure 5 appears here: (a) log evidence versus the noise parameter ε for the whole affect data (K=3, δ=10^-4), peaking near ε=0.05; (b) error on unlabeled data versus the number of labels per class for EP-NL, LLGC, SVM (RBF), and SVM (normalized Laplacian); (c) error rates on unlabeled points for the 1 vs 2 task, EP-NL (N=80) versus Harmonic (N=92).]
Figure 5: (a) Evidence vs noise parameter plotted using all the available data in the affect
dataset. The maximum at = 0.05 suggests that there is around 5% label noise in the
data. (b) Performance comparison of the proposed approach with LLGC, SVM using graph
kernel and the supervised SVM (RBF kernel) on the affect dataset which has label noise.
The error bars represent the standard error. (c) Comparison of the proposed EM method for
hyperparameter learning with the result reported in [7] using label entropy minimization.
The plotted error bars represent the standard deviation.
LLGC and EP-NL perform similarly beating both the SVMs, but as the size of the labeled
data increases we see a significant improvement of the proposed approach over the other
methods. One of the reasons is that when there are few labels, the probability of the labeled set of points containing a noisy label is low. As the size of the labeled set increases, the labeled data contains more noisy labels. And since LLGC has a Gaussian noise model, it cannot handle flipping noise well. As the number of labels increases, the evidence curve turns informative and EP-NL starts to learn the label noise correctly, outperforming the other methods. Both the SVMs show competitive performance with more labels but are still worse than EP-NL. Finally, we also test the method on the task of classifying '1' vs '2' in the handwritten digits dataset.
With 40 labeled examples per class (80 total labels and 1800 unlabeled), EP-NL obtained an average recognition accuracy of 99.72±0.04%, and figure 5(c) graphically shows the gain over the accuracy of 98.56±0.43% reported in [7], where the hyperparameters were learnt by minimizing label entropy with 92 labeled and 2108 unlabeled examples.
4 Conclusion
We presented and evaluated a Bayesian framework for learning hyperparameters for graphbased semi-supervised classification. The results indicate that evidence maximization
works well for learning hyperparameters, including the amount of label noise in the data.
References
[1] Csato, L. (2002) Gaussian processes-iterative sparse approximation. PhD Thesis, Aston Univ.
[2] Kim, H. & Ghahramani, Z. (2004) The EM-EP algorithm for Gaussian process classification.
ECML.
[3] Minka, T. P. (2001) Expectation propagation for approximate Bayesian inference. UAI.
[4] Opper, M. & Winther, O. (1999) Mean field methods for classification with Gaussian processes.
NIPS.
[5] Smola, A. & Kondor, R. (2003) Kernels and regularization on graphs. COLT.
[6] Zhou et al. (2004) Learning with local and global consistency. NIPS.
[7] Zhu, X., Ghahramani, Z. & Lafferty, J. (2003) Semi-supervised learning using Gaussian fields
and harmonic functions. ICML.
[8] Zhu, X., Kandola, J., Ghahramani, Z. & Lafferty, J. (2004) Nonparametric transforms of graph
kernels for semi-supervised learning. NIPS.
[9] Zhu, X., Lafferty, J. & Ghahramani, Z. (2003) Semi-supervised learning: From Gaussian fields to
Gaussian processes. CMU Tech Report:CMU-CS-03-175.
Non-Gaussian Latent Variable Models
J. A. Palmer, D. P. Wipf, K. Kreutz-Delgado, and B. D. Rao
Department of Electrical and Computer Engineering
University of California San Diego, La Jolla, CA 92093
{japalmer,dwipf,kreutz,brao}@ece.ucsd.edu
Abstract
We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We
establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods,
which has previously been demonstrated only for particular cases.
1 Introduction
Probabilistic methods have become well-established in the analysis of learning algorithms
over the past decade, drawing largely on classical Gaussian statistical theory [21, 2, 28].
More recently, variational Bayes and ensemble learning methods [22, 13] have been proposed. In addition to the evidence and VB methods, variational methods based on convex
bounding have been proposed for dealing with non-gaussian latent variables [18, 14]. We
concentrate here on the theory of the linear model, with direct application to ICA [14],
factor analysis [2], mixture models [13], kernel regression [30, 11, 32], and linearization
approaches to nonlinear models [15]. The methods can likely be applied in other contexts.
In Mackay?s evidence framework, ?hierarchical priors? are employed on the latent variables, using Gamma priors on the inverse variances, which has the effect of making the
marginal distribution of the latent variable prior the non-Gaussian Student?s t [30]. Based
on Mackay?s framework, Tipping proposed the Relevance Vector Machine (RVM) [30] for
estimation of sparse solutions in the kernel regression problem. A relationship between the
evidence framework and ensemble/VB methods has been noted in [22, 6] for the particular case of the RVM with t hyperprior. Figueiredo [11] proposed EM algorithms based
on hyperprior representations of the Laplacian and Jeffrey?s priors. In [14], Girolami employed the convex variational framework of [16] to derive a different type of variational
EM algorithm using a convex variational representation of the Laplacian prior. Wipf et al.
[32] demonstrated the equivalence between the variational approach of [16, 14] and the evidence based RVM for the case of t priors, and thus via [6], the equivalence of the convex
variational method and the ensemble/VB methods for the particular case of the t prior.
In this paper we consider these methods from a unifying viewpoint, deriving algorithms in
more general form and establishing a more general relationship among the methods than
has previously been shown. In ?2, we define the model and estimation problems we shall
be concerned with, and in ?3 we discuss criteria for variational representations. In ?4 we
consider the relationships among these methods.
2 The Bayesian linear model
Throughout we shall consider the following model,
y = Ax + ? ,
(1)
Q
where A ? Rm?n , x ? p(x) = i p(xi ), and ? ? N (0, ?? ), with x and ? independent.
The important thing to note for our purposes is that the xi are non-Gaussian.
We consider two types of variational representation of the non-Gaussian priors p(xi ), which
we shall call convex type and integral type. In the convex type of variational representation,
the density is represented as a supremum over Gaussian functions of varying scale,
p(x) = sup N (x; 0, ? ?1 ) ?(?) .
(2)
?>0
The essential property of ?concavity in x2 ? leading to this representation was used in [29,
17, 16, 18, 6] to represent the Logistic link function. A convex type representation of the
Laplace density was applied to learning overcomplete representations in [14].
In the integral type of representation, the density p(x) is represented as an integral over the
scale parameter of the density, with respect to some positive measure ?,
Z ?
p(x) =
N (x; 0, ? ?1 ) d?(?) .
(3)
0
Such representations with a general kernel are referred to as scale mixtures [19]. Gaussian
scale mixtures were discussed in the examples of Dempster, Laird, and Rubin?s original
EM paper [9], and treated more extensively in [10]. The integral representation has been
used, sometimes implicitly, for kernel-based estimation [30, 11] and ICA [20]. The distinction between MAP estimation of components and estimation of hyperparameters has been
discussed in [23] and [30] for the case of Gamma distributed inverse variance.
We shall be interested in variational EM algorithms for solving two basic problems, corresponding essentially to the two methods of handling hyperparameters discussed in [23]:
the MAP estimate of the latent variables
? = arg max p(x|y)
x
(4)
x
and the MAP estimate of the hyperparameters,
?? = arg max p(?|y) .
(5)
?
The following section discusses the criteria for and relationship between the two types of
variational representation. In ?4, we discuss algorithms for each problem based on the
two types of variational representations, and determine when these are equivalent. We also
discuss the approximation of the likelihood p(y; A) using the ensemble learning or VB
method, which approximates the posterior p(x, ?|y) by a factorial density q(x|y)q(?|y).
We show that the ensemble method is equivalent to the hyperparameter MAP method.
3 Variational representations of super-Gaussian densities
In this section we discuss the criteria for the convex and integral type representations.
3.1 Convex variational bounds
We wish to determine when a symmetric, unimodal density p(x) can be represented in the
form (2) for some function ?(?). Equivalently, when,
?
?
1
? log p(x) = ? sup log N x ; 0, ? ?1 ?(?) = inf 21 x2 ? ? log ? 2 ?(?)
?>0
?>0
?
for all x > 0. The last formula says that ? log p( x) is the concave conjugate of (the
1
closure of ?
the convex hull of) the function, log ? 2 ?(?) [27, ?12]. This is possible if and only
if ? log p( x) is closed, increasing and concave on (0, ?). Thus we have the following.
Theorem 1. A symmetric probability density p(x) ? exp(?g(x2 )) can be represented in
the convex variational form,
p(x) = sup N (x; 0, ? ?1 ) ?(?)
?>0
?
if and only if g(x) ? ? log p( x) is increasing and concave on (0, ?). In this case we can
use the function,
p
?
?
?(?) = 2?/? exp g ?(?/2) ,
where g ? is the concave conjugate of g.
Examples of densities satisfying this criterion include: (i) Generalized Gaussian ?
exp(?|x|? ), 0 < ? ? 2, (ii) Logistic ? 1/ cosh2 (x/2), (iii) Student?s t ?
(1 + x2 /?)?(?+1)/2 , ? > 0, and (iv) symmetric ?-stable densities (having characteristic
function exp(?|?|? ), 0 < ? ? 2).
The convex variational representation motivates the following definition.
?
Definition 1. A symmetric probability density p(x) is ?
strongly super-gaussian if p( x) is
log-convex on (0, ?), and strongly sub-gaussian if p( x) is log-concave on (0, ?).
An equivalent definition is given in [5, pp. 60-61], which defines p(x) = exp(?f (x))
to be sub-gaussian (super-gaussian) if f 0 (x)/x is increasing (decreasing) on (0, ?). This
condition is equivalent to f (x) = g(x2 ) with g concave, i.e. g 0 decreasing. The property of
being strongly sub- or super-gaussian is independent of scale.
3.2 Scale mixtures
We now wish to determine when a probability density p(x) can be represented in the form
(3) for some ?(?) non-decreasing on (0, ?). A fundamental result dealing with integral
representations was given by Bernstein and Widder (see [31]). It uses the following definition.
Definition 1. A function f (x) is completely monotonic on (a, b) if,
(?1)n f (n) (x) ? 0 ,
n = 0, 1, . . .
for every x ? (a, b).
That is, f (x) is completely monotonic if it is positive, decreasing, convex, and so on.
Bernstein?s theorem [31, Thm. 12b] states:
Theorem 2. A necessary and sufficient condition that p(x) should be completely monotonic
on (0, ?) is that,
Z
?
p(x) =
e?tx d?(t) ,
0
where ?(t) is non-decreasing on (0, ?).
Thus for p(x) to be a Gaussian scale mixture,
p(x) = e
?f (x)
=e
?g(x2 )
Z
=
0
?
1
2
e? 2 tx d?(t) ,
?
a necessary and sufficient condition is that p( x) = e?g(x) be completely monotonic for
0 < x < ?, and we have the following (see also [19, 1]),
Theorem
3. A function p(x) can be represented as a Gaussian scale mixture if and only if
?
p( x) is completely monotonic on (0, ?).
3.3 Relationship between convex and integral type representations
We now consider the relationship between the convex and integral types of variational
representation. Let p(x) = exp(?g(x2 )). We have seen that p(x) can be represented in the
form (2) if and only if g(x) is symmetric and concave on
? (0, ?). And we have seen that
p(x) can be represented in the form (3) if and only if p( x) = exp(?g(x)) is?completely
monotonic. We shall consider now
?whether or not complete monotonicity of p( x) implies
the concavity of g(x) = ? log p( x), that is whether representability in the integral form
implies representability in the convex form.
Complete monotonicity
of a function q(x) implies that q ? 0, q 0 ? 0, q 00 ? 0, etc. For
?
example, if p( x) is completely monotonic, then,
?
?
d2 ?
d2 ?g(x)
p(
e
= e?g(x) g 0 (x)2 ? g 00 (x) ? 0 .
x)
=
2
2
dx
dx
?
Thus if g 00 ? 0, then p( x) is convex, but the converse
? does not necessarily hold. That
is, concavity of g does not follow from convexity of p( x), as the latter only requires that
g 00 ? g 0 2 .
?
Concavity of g does follow however from the complete monotonicity of p( x). For example, we can use the following result [8, ?3.5.2].
R
Theorem 4. If the functions ft (x), t ? D, are convex, then D eft (x) dt is convex.
Thus completely monotonic functions, being scale mixtures of the log convex function
e?x by Theorem 2, are also log convex. We thus see that any function representable in the
integral variational form (3) is also representable in the convex variational form (2).
In fact, a stronger result holds. The following theorem [7, Thm. 4.1.5] establishes the
equivalence between q(x) and g 0 (x) = d/dx?log q(x) in terms of complete monotonicity.
Theorem 5. If g(x) > 0, then e?ug(x) is completely monotonic for every u > 0, if and
only if g 0 (x) is completely monotonic.
?
In particular, it holds that q(x) ? p( x) = exp(?g(x)) is convex only if g 00 (x) ? 0.
2
To summarize, let p(x) = e?g(x ) . If g is increasing and concave for x > 0, then p(x) admits the convex type of variational representation (2). If, in addition, the higher derivatives
satisfy g (3) (x) ? 0, g (4) (x) ? 0, g (5) (x) ? 0, etc., then p(x) also admits the Gaussian
scale mixture representation (3).
4 General equivalences among Variational methods
4.1 MAP estimation of components
Consider first the MAP estimate of the latent variables (4).
4.1.1 Component MAP ? Integral case
Following [10]1 , consider an EM algorithm to estimate x when the p(xi ) are independent
Gaussian scale mixtures as in (3). Differentiating inside the integral gives,
Z ?
Z ?
d
0
p(x|?)p(?)d? = ?
? xp(x, ?) d?
p (x) =
dx 0
0
Z ?
= ?xp(x)
?p(?|x) d? .
0
1
In [10], the xi in (1) are actually estimated as non-random parameters, with the noise ? being
non-gaussian, but the underlying theory is essentially the same.
Thus, with p(x) ? exp(?f (x)), we see that,
Z ?
f 0(xi )
p0 (xi )
=
.
(6)
E(?i |xi ) =
?i p(?i |xi ) d?i = ?
xi p(xi )
xi
0
The EM algorithm alternates setting ??i to the posterior mean, E(?i |xi ) = f 0 (xi )/xi , and
setting x to minimize,
? = 1 xTAT ??1Ax ? yT ??1Ax + 1 xT?x + const.,
(7)
? log p(y|x)p(x|?)
?
?
2
2
?1
k
0 k
k
k
k ?1
?
where ? = diag(?) . At iteration k, we put ? = f (x )/x , and ? = diag(? ) , and
i
i
i
xk+1 = ?k AT (A?k AT + ?? )?1 y .
4.1.2 Component MAP - Convex case
Again consider the MAP estimate of x. For strongly super-Gaussian priors $p(x_i)$, we have

$\arg\max_x\, p(x|y) = \arg\max_x\, p(y|x)\, p(x) = \arg\max_x\, \max_{\xi}\, p(y|x)\, p(x;\, \xi)\, \varphi(\xi)$
Now since

$-\log p(y|x)\, p(x;\, \xi)\, \varphi(\xi) = \tfrac{1}{2}x^T A^T \Sigma_\nu^{-1} A x - y^T \Sigma_\nu^{-1} A x + \sum_{i=1}^{n}\big(\tfrac{1}{2}x_i^2\, \xi_i - g^{*}(\xi_i/2)\big),$

the MAP estimate can be improved iteratively by alternately maximizing over x and $\xi$,
$\xi_i^k = 2\, g^{*\prime\,-1}\big((x_i^k)^2\big) = 2\, g'\big((x_i^k)^2\big) = \frac{f'(x_i^k)}{x_i^k},$    (8)
with x updated as in §4.1.1. We thus see that this algorithm is equivalent to the MAP algorithm derived in §4.1.1 for Gaussian scale mixtures. That is, for direct MAP estimation of the latent variable x, the EM Gaussian scale mixture method and the variational bounding method yield the same algorithm.
This algorithm has also been derived in the image restoration literature [12] as the "half-quadratic" algorithm, and it is the basis for the FOCUSS algorithms derived in [26, 25]. The regression algorithm given in [11] for the particular cases of Laplacian and Jeffreys priors is based on the theory in §4.1.1, and is in fact equivalent to the FOCUSS algorithm derived in [26].
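A minimal sketch of the common iteration of §§4.1.1-4.1.2 for the Laplacian prior $f(x) = |x|$, where $\xi_i = f'(x_i)/x_i = 1/|x_i|$ and hence $\Pi = \mathrm{diag}(\xi)^{-1} = \mathrm{diag}(|x|)$; this is exactly the FOCUSS-style reweighting (initialization, noise level, and dimensions below are our assumptions):

import numpy as np

def map_em_laplacian(A, y, sigma2=1e-3, iters=50):
    m, n = A.shape
    x = A.T @ np.linalg.solve(A @ A.T + sigma2 * np.eye(m), y)   # minimum-norm start
    for _ in range(iters):
        Pi = np.diag(np.abs(x) + 1e-12)        # Pi^k = diag(xi^k)^{-1} with xi_i = 1/|x_i|
        x = Pi @ A.T @ np.linalg.solve(A @ Pi @ A.T + sigma2 * np.eye(m), y)
    return x

A = np.random.randn(8, 20)
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = A @ x_true + 0.01 * np.random.randn(8)
print(np.round(map_em_laplacian(A, y), 2))     # concentrates mass on a few components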
4.2 MAP estimate of variational parameters
Now consider MAP estimation of the (random) variational hyperparameters $\xi$.
4.2.1 Hyperparameter MAP - Integral case
Consider an EM algorithm to find the MAP estimate of the hyperparameters $\xi$ in the integral representation (Gaussian scale mixture) case, where the latent variables x are hidden. For the complete likelihood, we have

$p(\xi, x|y) \propto p(y|x, \xi)\, p(x|\xi)\, p(\xi) = p(y|x)\, p(x|\xi)\, p(\xi).$
The function to be minimized over $\xi$ is then

$\big\langle -\log p(x|\xi)\, p(\xi) \big\rangle_x = \sum_i \Big( \tfrac{1}{2}\langle x_i^2 \rangle\, \xi_i - \log \sqrt{\xi_i}\; p(\xi_i) \Big) + \text{const.}$    (9)
If we define $h(\xi) \equiv \log \sqrt{\xi}\; p(\xi)$, and assume that this function is concave, then the optimal value of $\xi$ is given by

$\xi_i = h^{*\prime}\big(\tfrac{1}{2}\langle x_i^2 \rangle\big).$
This algorithm converges to a local maximum of $p(\xi|y)$, yielding $\hat{\xi}$, which then yields an estimate of x by taking $\hat{x} = E(x|y, \hat{\xi})$. Alternative algorithms result from using this method to find the MAP estimate of different functions of the scale random variable $\xi$.
4.2.2 Hyperparameter MAP - Convex case
In the convex representation, the $\xi$ parameters do not actually represent a probabilistic quantity, but rather arise as parameters in a variational inequality. Specifically, we write

$p(y) = \int p(y, x)\, dx = \int \max_{\xi}\, p(y|x)\, p(x|\xi)\, \varphi(\xi)\, dx$
$\;\;\ge\; \max_{\xi} \int p(y|x)\, p(x|\xi)\, \varphi(\xi)\, dx \;=\; \max_{\xi}\, N\big(y;\, 0,\, A\Pi A^T + \Sigma_\nu\big)\, \varphi(\xi),$

where $\Pi = \mathrm{diag}(\xi)^{-1}$.
Now we define the function

$\hat{p}(y;\, \xi) \equiv N\big(y;\, 0,\, A\Pi A^T + \Sigma_\nu\big)\, \varphi(\xi)$

and try to find $\hat{\xi} = \arg\max_{\xi}\, \hat{p}(y;\, \xi)$. We maximize $\hat{p}$ by EM, marginalizing over x,

$\hat{p}(y;\, \xi) = \int p(y|x)\, p(x|\xi)\, \varphi(\xi)\, dx.$
The algorithm is then equivalent to that in §4.1.2 except that the expectation is taken of $x^2$ in the E-step, and the diagonal weighting matrix becomes

$\xi_i = \frac{f'(\sigma_i)}{\sigma_i},$

where $\sigma_i = \sqrt{E(x_i^2|y;\, \xi_i)}$. Although $\hat{p}$ is not a true probability density function, the proof
of convergence for EM does not assume unit normalization. This theory is the basis for the
algorithm presented in [14] for the particular case of a Laplacian prior (where, in addition, A in the model (1) is updated according to the standard EM update).
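The hyperparameter iteration of this section differs from the component-MAP iteration only in replacing $x_i^2$ by its full posterior second moment. A sketch for the Laplacian prior, where $f'(\sigma)/\sigma = 1/\sigma$ (assumptions as in the previous sketch); by §4.3 below this is also the ensemble/VB update:

import numpy as np

def hyper_map_em(A, y, sigma2=1e-3, iters=50):
    m, n = A.shape
    xi = np.ones(n)
    for _ in range(iters):
        Pi = np.diag(1.0 / xi)
        S = A @ Pi @ A.T + sigma2 * np.eye(m)
        mu = Pi @ A.T @ np.linalg.solve(S, y)                # posterior mean
        Sigma = Pi - Pi @ A.T @ np.linalg.solve(S, A @ Pi)   # posterior covariance
        s = np.sqrt(np.maximum(mu ** 2 + np.diag(Sigma), 0)) # sigma_i = E(x_i^2|y)^{1/2}
        xi = 1.0 / (s + 1e-12)                               # f'(sigma_i)/sigma_i for f = |.|
    return mu, xi

# usage: mu, xi = hyper_map_em(A, y) with A, y as in the previous sketch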
4.3 Ensemble learning
In the ensemble learning approach (also Variational Bayes [4, 3, 6]) the idea is to find the
approximate separable posterior that minimizes the KL divergence from the true posterior,
using the following decomposition of the log likelihood:

$\log p(y) = \int q(z|y)\, \log \frac{p(z, y)}{q(z|y)}\, dz + D\big(q(z|y)\,\big\|\,p(z|y)\big) \equiv -F(q) + D(q\|p).$
The term F(q) is commonly called the variational free energy [29, 24]. Minimizing F
over q is equivalent to minimizing D over q. The posterior approximating distribution is
taken to be factorial,
$q(z|y) = q(x, \xi|y) = q(x|y)\, q(\xi|y).$
For fixed q(?|y), the free energy F is given by,
$F = -\int\!\!\int q(x|y)\, q(\xi|y)\, \log \frac{p(x, \xi|y)}{q(x|y)\, q(\xi|y)}\, d\xi\, dx = D\Big(q(x|y)\,\Big\|\, e^{\langle \log p(x, \xi|y)\rangle_{\xi}}\Big) + \text{const.},$    (10)
where $\langle\cdot\rangle_{\xi}$ denotes expectation with respect to $q(\xi|y)$, and the constant is the entropy $H(q(\xi|y))$. The minimum of the KL divergence in (10) is attained if and only if

$q(x|y) \propto \exp\big\langle \log p(x, \xi|y) \big\rangle_{\xi} \propto p(y|x)\, \exp\big\langle \log p(x|\xi) \big\rangle_{\xi}$

almost surely. An identical derivation yields the optimal

$q(\xi|y) \propto \exp\big\langle \log p(x, \xi|y) \big\rangle_{x} \propto p(\xi)\, \exp\big\langle \log p(x|\xi) \big\rangle_{x}$

when q(x|y) is fixed. The ensemble (or VB) algorithm consists of alternately updating the
parameters of these approximating marginal distributions.
In the linear model with Gaussian scale mixture latent variables, the complete likelihood is
again,
$p(y, x, \xi) = p(y|x)\, p(x|\xi)\, p(\xi).$
The optimal approximate posteriors are given by,
$q(x|y) = N(x;\, \mu_{x|y},\, \Sigma_{x|y}), \qquad q(\xi_i|y) = p\big(\xi_i \,\big|\, x_i = \langle x_i^2 \rangle^{1/2}\big),$

where, letting $\Pi = \mathrm{diag}(\langle\xi\rangle)^{-1}$, the posterior moments are given by

$\mu_{x|y} = \Pi A^T (A\Pi A^T + \Sigma_\nu)^{-1} y$
$\Sigma_{x|y} = (A^T \Sigma_\nu^{-1} A + \Pi^{-1})^{-1} = \Pi - \Pi A^T (A\Pi A^T + \Sigma_\nu)^{-1} A\, \Pi.$
The only relevant fact about $q(\xi|y)$ that we need is $\langle\xi\rangle$, for which we have, using (6),

$\langle \xi_i \rangle = \int \xi_i\, q(\xi_i|y)\, d\xi_i = \int \xi_i\, p\big(\xi_i \,\big|\, x_i = \langle x_i^2 \rangle^{1/2}\big)\, d\xi_i = \frac{f'(\sigma_i)}{\sigma_i},$

where $\sigma_i = \sqrt{E(x_i^2|y;\, \xi_i)}$. We thus see that the ensemble learning algorithm is equivalent
to the approximate hyperparameter MAP algorithm of ?4.2.2. Note also that this shows that
the VB methods can be applied to any Gaussian scale mixture density, using only the form
of the latent variable prior p(x), without needing the marginal hyperprior p(?) in closed
form. This is particularly important in the case of the Generalized Gaussian and Logistic
densities, whose scale parameter densities are ?-Stable and Kolmogorov [1] respectively.
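To make the equivalence concrete: iterating the moment updates above with <lambda_i> = f'(nu_i)/nu_i reproduces the fixed point of the §4.2.2 sketch. A minimal driver, reusing reweighted_em_step from the earlier Laplacian-prior sketch (the iteration count is an arbitrary illustrative choice):

def ensemble_learning(y, A, n_iter=50):
    """Alternate q(x|y) (via its Gaussian moments) and q(lambda|y) (via <lambda>)."""
    nu = np.ones(A.shape[1])
    for _ in range(n_iter):
        nu = reweighted_em_step(y, A, nu)   # same fixed point as Sec. 4.2.2
    return nu                                # many nu_i shrink toward 0 (sparsity)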
5 Conclusion
In this paper, we have discussed criteria for variational representations of non-Gaussian
latent variables, and derived general variational EM algorithms based on these representations. We have shown a general equivalence between the two representations in MAP
estimation taking hyperparameters as hidden, and we have shown the general equivalence
between the variational convex approximate MAP estimate of hyperparameters and the
ensemble learning or VB method.
References
[1] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. J. Roy. Statist. Soc. Ser. B, 36:99-102, 1974.
[2] H. Attias. Independent factor analysis. Neural Computation, 11:803-851, 1999.
[3] H. Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[4] M. J. Beal and Z. Ghahramani. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics 7, pages 453-464. University of Oxford Press, 2002.
[5] A. Benveniste, M. Métivier, and P. Priouret. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag, 1990.
[6] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In C. Boutilier and M. Goldszmidt, editors, Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 46-53. Morgan Kaufmann, 2000.
[7] S. Bochner. Harmonic Analysis and the Theory of Probability. University of California Press, Berkeley and Los Angeles, 1960.
[8] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin. Iteratively reweighted least squares for linear regression when errors are Normal/Independent distributed. In P. R. Krishnaiah, editor, Multivariate Analysis V, pages 35-57. North Holland Publishing Company, 1980.
[11] M. Figueiredo. Adaptive sparseness using Jeffreys prior. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[12] D. Geman and G. Reynolds. Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Analysis and Machine Intelligence, 14(3):367-383, 1992.
[13] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[14] M. Girolami. A variational method for learning sparse and overcomplete representations. Neural Computation, 13:2517-2532, 2001.
[15] A. Honkela and H. Valpola. Unsupervised variational Bayesian learning of nonlinear models. In Advances in Neural Information Processing Systems 17. MIT Press, 2005.
[16] T. S. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models. PhD thesis, Massachusetts Institute of Technology, 1997.
[17] T. S. Jaakkola and M. I. Jordan. A variational approach to Bayesian logistic regression models and their extensions. In Proceedings of the 1997 Conference on Artificial Intelligence and Statistics, 1997.
[18] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer Academic Publishers, 1998.
[19] J. Keilson and F. W. Steutel. Mixtures of distributions, moment inequalities, and measures of exponentiality and normality. The Annals of Probability, 2:112-130, 1974.
[20] H. Lappalainen. Ensemble learning for independent component analysis. In Proceedings of the First International Workshop on Independent Component Analysis, 1999.
[21] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415-447, 1992.
[22] D. J. C. MacKay. Ensemble learning and evidence maximization. Unpublished manuscript, 1995.
[23] D. J. C. MacKay. Comparison of approximate methods for handling hyperparameters. Neural Computation, 11(5):1035-1068, 1999.
[24] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer, 1998.
[25] B. D. Rao, K. Engan, S. F. Cotter, J. Palmer, and K. Kreutz-Delgado. Subset selection in noise based on diversity measure minimization. IEEE Trans. Signal Processing, 51(3), 2003.
[26] B. D. Rao and I. F. Gorodnitsky. Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans. Signal Processing, 45:600-616, 1997.
[27] R. T. Rockafellar. Convex Analysis. Princeton, 1970.
[28] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(5):305-345, 1999.
[29] L. K. Saul, T. S. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76, 1996.
[30] M. E. Tipping. Sparse Bayesian learning and the Relevance Vector Machine. Journal of Machine Learning Research, 1:211-244, 2001.
[31] D. V. Widder. The Laplace Transform. Princeton University Press, 1946.
[32] D. Wipf, J. Palmer, and B. Rao. Perspectives on sparse Bayesian learning. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, Cambridge, MA, 2003. MIT Press.
2,130 | 2,934 | Kernelized Infomax Clustering
Felix V. Agakov
Edinburgh University
Edinburgh EH1 2QL, U.K.
[email protected]
David Barber
IDIAP Research Institute
CH-1920 Martigny Switzerland
[email protected]
Abstract
We propose a simple information-theoretic approach to soft clustering based on maximizing the mutual information I(x, y) between
the unknown cluster labels y and the training patterns x with respect to parameters of specifically constrained encoding distributions. The constraints are chosen such that patterns are likely to
be clustered similarly if they lie close to specific unknown vectors
in the feature space. The method may be conveniently applied to
learning the optimal affinity matrix, which corresponds to learning parameters of the kernelized encoder. The procedure does not
require computations of eigenvalues of the Gram matrices, which
makes it potentially attractive for clustering large data sets.
1 Introduction

Let x ∈ R^{|x|} be a visible pattern, and y ∈ {y_1, ..., y_{|y|}} its discrete unknown cluster label. Rather than learning a density model of the observations, our goal here will be to learn a mapping x -> y from the observations to the latent codes (cluster labels) by optimizing a formal measure of coding efficiency. Good codes y should be in some way informative about the underlying high-dimensional source vectors x, so that the useful information contained in the sources is not lost. The fundamental measure in this context is the mutual information

    I(x, y) ≡ H(x) - H(x|y) ≡ H(y) - H(y|x),        (1)

which indicates the decrease in uncertainty about the pattern x due to the knowledge of the underlying cluster label y (e.g. Cover and Thomas (1991)). Here H(y) ≡ -<log p(y)>_{p(y)} and H(y|x) ≡ -<log p(y|x)>_{p(x,y)} are marginal and conditional entropies respectively, and the brackets <...>_p represent averages over p. In our case the encoder model is defined as

    p(x, y) ∝ Sum_{m=1}^{M} delta(x - x^(m)) p(y|x),        (2)

where {x^(m) | m = 1, ..., M} is a set of training patterns.

Our goal is to maximize (1) with respect to parameters of a constrained encoding distribution p(y|x). In contrast to most applications of the infomax principle (Linsker (1988)) in stochastic channels (e.g. Brunel and Nadal (1998); Fisher and Principe (1998); Torkkola and Campbell (2000)), optimization of the objective (1) is computationally tractable since the cardinality of the code space |y| (the number of clusters) will typically be low. Indeed, had the code space been high-dimensional, computation of I(x, y) would have required evaluation of the generally intractable entropy of the mixture H(y), and approximations would have needed to be considered (e.g. Barber and Agakov (2003); Agakov and Barber (2006)).
Maximization of the mutual information with respect to parameters of the encoder
model effectively defines a discriminative unsupervised optimization framework,
where the model is parameterized similarly to a conditionally trained classifier, but
where the cluster allocations are generally unknown. Training such models p(y|x)
by maximizing the likelihood p(x) would be meaningless, as the cluster variables
would marginalize out, which motivates also our information theoretic approach.
In this way we may extract soft cluster allocations directly from the training set,
with no additional information about class labels, relevance patterns, etc. required.
This is an important difference from other clustering techniques making a recourse
to information theory, which consider different channels and generally require additional information about relevance or irrelevance variables (cf Tishby et al. (1999);
Chechik and Tishby (2002); Dhillon and Guan (2003)).
Our infomax approach is in contrast with probabilistic methods based on likelihood
maximization. There the task of finding an optimal cluster allocation y for an observed pattern x may be viewed as an inference problem in generative models y -> x, where the probability of the data p(x) = Sum_y p(y) p(x|y) is defined as a mixture of
|y| processes. The key idea of fitting such models to data is to find a constrained
probability distribution p(x) which would be likely to generate the visible patterns
{x(1) , . . . , x(M ) } (this is commonly achieved by maximizing the marginal likelihood
for deterministic parameters of the constrained distribution). The unknown clusters
y corresponding to each pattern x may then be assigned according to the posterior
p(y|x) ∝ p(y)p(x|y). Such generative approaches are well-known but suffer from the
constraint that p(x|y) is a correctly normalised distribution in x. In high dimensions
|x| this restricts the class of generative distributions usually to (mixtures of) Gaussians whose mean is dependent (in a linear or non-linear way) on the latent cluster
y. Typically data will lie on low dimensional curved manifolds embedded in the high
dimensional x-space. If we are restricted to using mixtures of Gaussians to model
this curved manifold, typically a very large number of mixture components will be
required. No such restrictions apply in the infomax case so that the mappings p(y|x)
may be very complex, subject only to sensible clustering constraints.
2 Clustering in Nonlinear Encoder Models
Arguably, there are at least two requirements which a meaningful cluster allocation
procedure should satisfy. Firstly, clusters should be, in some sense, locally smooth.
For example, each pair of source vectors should have a high probability of being
assigned to the same cluster if the vectors satisfy specific geometric constraints.
Secondly, we may wish to avoid assigning unique cluster labels to outliers (or other
constrained regions in the data space), so that under-represented regions in the
data space are not over-represented in the code space. Note that degenerate cluster
allocations are generally suboptimal under the objective (1), as they would lead to
a reduction in the marginal entropy H(y). On the other hand, it is intuitive that
maximization of the mutual information I(x, y) favors hard assignments of cluster
labels to equiprobable data regions, as this would result in the growth in H(y) and
reduction in H(y|x).
2.1 Learning Optimal Parameters

Local smoothness and "softness" of the clusters may be enforced by imposing appropriate constraints on p(y|x). A simple choice of the encoder is

    p(y_j | x^(i)) ∝ exp{ -||x^(i) - w_j||^2 / s_j + b_j },        (3)

where the cluster centers w_j ∈ R^{|x|}, the dispersions s_j, and the biases b_j are the encoder parameters to be learned. Clearly, under the encoding distribution (3) patterns x lying close to specific centers w_j in the data space will tend to be clustered similarly. In principle, we could consider other choices of p(y|x); however (3) will prove to be particularly convenient for the kernelized extensions.

Learning the optimal cluster allocations corresponds to maximizing (1) with respect to the encoder parameters (3). The gradients are given by

    dI(x, y)/dw_j = (1/M) Sum_{m=1}^{M} p(y_j|x^(m)) [ (x^(m) - w_j) / s_j ] gamma_j^(m)        (4)

    dI(x, y)/ds_j = (1/M) Sum_{m=1}^{M} p(y_j|x^(m)) [ ||x^(m) - w_j||^2 / (2 s_j^2) ] gamma_j^(m).        (5)

Analogously, we get dI(x, y)/db_j = Sum_{m=1}^{M} p(y_j|x^(m)) gamma_j^(m) / M.

Expressions (4) and (5) have the form of the weighted EM updates for isotropic Gaussian mixtures, with the weighting coefficients gamma_j^(m) defined as

    gamma_j^(m) ≡ gamma_j(x^(m)) ≡ log[ p(y_j|x^(m)) / p~(y_j) ] - KL( p(y|x^(m)) || <p(y|x)>_{p~(x)} ),        (6)

where KL defines the Kullback-Leibler divergence (e.g. Cover and Thomas (1991)), and p~(x) ∝ Sum_m delta(x - x^(m)) is the empirical distribution. Clearly, if gamma_j^(m) is kept fixed for all m = 1, ..., M and j = 1, ..., |y|, the gradients (4) are identical to those obtained by maximizing the log-likelihood of a Gaussian mixture model (up to irrelevant constant pre-factors). Generally, however, the coefficients gamma_j^(m) will be functions of w_l, s_l, and b_l for all cluster labels l = 1, ..., |y|.

In practice, we may impose a simple construction ensuring that s_j > 0, for example by assuming that s_j = exp{s~_j} where s~_j ∈ R. For this case, we may re-express the gradients for the variances as dI(x, y)/ds~_j = s_j dI(x, y)/ds_j. Expressions (4) and (5) may then be used to perform gradient ascent on I(x, y) for w_j, s~_j, and b_j, where j = 1, ..., |y|. After training, the optimal cluster allocations may be assigned according to the encoding distribution p(y|x).
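The procedure is straightforward to implement. Below is a minimal, self-contained sketch, assuming plain fixed-step gradient ascent and random center initialization (illustrative choices, not the authors' exact setup); it follows the printed forms of (3)-(6), with constant factors absorbed into the step size.

import numpy as np

def softmax_rows(F):
    F = F - F.max(axis=1, keepdims=True)
    E = np.exp(F)
    return E / E.sum(axis=1, keepdims=True)

def infomax_cluster(X, n_clusters, n_iter=500, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    M, d = X.shape
    W = X[rng.choice(M, n_clusters, replace=False)].copy()   # centers w_j
    s_tilde = np.zeros(n_clusters)                            # s_j = exp(s~_j) > 0
    b = np.zeros(n_clusters)
    for _ in range(n_iter):
        s = np.exp(s_tilde)
        diff = X[:, None, :] - W[None, :, :]                  # M x K x d
        D2 = (diff ** 2).sum(-1)                              # ||x^(m) - w_j||^2
        P = softmax_rows(-D2 / s[None, :] + b[None, :])       # p(y_j|x^(m)), Eq. (3)
        py = P.mean(0)                                        # p~(y_j)
        logratio = np.log(P + 1e-12) - np.log(py + 1e-12)
        gamma = logratio - (P * logratio).sum(1, keepdims=True)   # Eq. (6)
        R = P * gamma                                         # p(y_j|x) * gamma_j
        W += lr * (R[:, :, None] * diff).sum(0) / (M * s[:, None])   # Eq. (4)
        s_tilde += lr * (R * D2).sum(0) / (M * 2 * s)         # s_j * Eq. (5)
        b += lr * R.sum(0) / M
    return P.argmax(axis=1)                                   # hard allocations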
2.2 Infomax Clustering with Kernelized Encoder Models

We now extend (3) by considering a kernelized parameterization of a nonlinear encoder. Let us assume that the source patterns x^(i), x^(j) have a high probability of being assigned to the same cluster if they lie close to a specific cluster center in some feature space. One choice of the encoder distribution for this case is

    p(y_j | x^(i)) ∝ exp{ -||phi(x^(i)) - w_j||^2 / s_j + b_j },        (7)

where phi(x^(i)) ∈ R^{|phi|} is the feature vector corresponding to the source pattern x^(i), and w_j ∈ R^{|phi|} is the (unknown) cluster center in the feature space. The feature space may be very high- or even infinite-dimensional.

Since each cluster center w_j ∈ R^{|phi|} lives in the same space as the projected sources phi(x^(i)), it is representable in the basis of the projections as

    w_j = Sum_{m=1}^{M} alpha_{mj} phi(x^(m)) + w_j^perp,        (8)

where w_j^perp ∈ R^{|phi|} is orthogonal to the span of phi(x^(1)), ..., phi(x^(M)), and {alpha_{mj}} is a set of coefficients (here j and m index |y| codes and M patterns respectively). Then we may transform the encoder distribution (7) to

    p(y_j | x^(m)) ∝ exp{ -( K_{mm} - 2 k^T(x^(m)) a_j + a_j^T K a_j + c_j ) / s_j }
                   ≡ exp{ -f_j(x^(m)) },        (9)

where k(x^(m)) corresponds to the m-th column (or row) of the Gram matrix K ≡ {K_{ij}} = {phi(x^(i))^T phi(x^(j))} ∈ R^{M x M}, a_j ∈ R^M is the j-th column of the matrix of the coefficients A ≡ {alpha_{mj}} ∈ R^{M x |y|}, and c_j ≡ (w_j^perp)^T w_j^perp - s_j b_j. Without loss of generality, we may assume that c = {c_j} ∈ R^{|y|} is a free unconstrained parameter. Additionally, we will ensure positivity of the dispersions s_j by considering a construction constraint s_j = exp{s~_j}, where s~_j ∈ R.

Learning Optimal Parameters

First we will assume that the Gram matrix K ∈ R^{M x M} is fixed and known (which effectively corresponds to considering a fixed affinity matrix, see e.g. Dhillon et al. (2004)). Objective (1) should be optimized with respect to the log-dispersions s~_j ≡ log(s_j), biases c_j, and coordinates A ∈ R^{M x |y|} in the space spanned by the feature vectors {phi(x^(i)) | i = 1, ..., M}. From (9) we get

    dI(x, y)/da_j = (1/s_j) < p(y_j|x) ( k(x) - K a_j ) gamma_j(x) >_{p~(x)} ∈ R^M,        (10)

    dI(x, y)/ds~_j = (1/(2 s_j)) < p(y_j|x) f_j(x) gamma_j(x) >_{p~(x)},        (11)

where p~(x) ∝ Sum_{m=1}^{M} delta(x - x^(m)) is the empirical distribution. Analogously, we obtain

    dI(x, y)/dc_j = < p(y_j|x) gamma_j(x) >_{p~(x)},        (12)

where the coefficients gamma_j(x) are given by (6). For a known Gram matrix K ∈ R^{M x M}, the gradients dI/da_j, dI/ds~_j, and dI/dc_j given by expressions (10)-(12) may be used in numerical optimization for the model parameters. Note that the matrix multiplication in (10) is performed once for each a_j, so that the complexity of computing the gradient is ~ O(M^2 |y|) per iteration. We also note that one could potentially optimize (1) by applying the iterative Arimoto-Blahut algorithm for maximizing the channel capacity (see e.g. Cover and Thomas (1991)). However, for any given constrained encoder it is generally difficult to derive closed-form updates for the parameters of p(y|x), which motivates a numerical optimization.
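As a sketch of these kernelized quantities, the following computes the encoder probabilities of (9) and the coefficient gradient of (10) directly from a fixed Gram matrix; softmax_rows and the gamma weights are as in the earlier sketch, and the overall scaling is again left to the learning rate.

def kernel_encoder(Kmat, A, c, s_tilde):
    """p(y_j|x^(m)) of Eq. (9) for a fixed Gram matrix Kmat (M x M)."""
    s = np.exp(s_tilde)
    quad = np.einsum('mj,mn,nj->j', A, Kmat, A)            # a_j^T K a_j
    F = (np.diag(Kmat)[:, None] - 2 * Kmat @ A
         + quad[None, :] + c[None, :]) / s[None, :]        # f_j(x^(m))
    return softmax_rows(-F), F

def grad_A(Kmat, P, gamma, A, s_tilde):
    """Eq. (10): one K-multiplication per column a_j, O(M^2 |y|) overall."""
    s = np.exp(s_tilde)
    R = P * gamma                                          # M x |y|
    return (Kmat @ R - (Kmat @ A) * R.sum(0)[None, :]) / (len(Kmat) * s[None, :])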
Learning Optimal Kernels

Since we presume that explicit computations in R^{|phi|} are expensive, we cannot compute the Gram matrix by trivially applying its definition K = {phi(x^(i))^T phi(x^(j))}. Instead, we may interpret scalar products in feature spaces as kernel functions

    phi(x^(i))^T phi(x^(j)) = K_theta(x^(i), x^(j); theta),   for all x^(i), x^(j) ∈ R^{|x|},        (13)

where K_theta : R^{|x|} x R^{|x|} -> R satisfies Mercer's kernel properties (e.g. Schölkopf and Smola (2002)). We may now apply our unsupervised framework to implicitly learn the optimal nonlinear features by optimizing I(x, y) with respect to the parameters theta of the kernel function K_theta. After some algebraic manipulations, we get

    dI(x, y)/dtheta = Sum_{m=1}^{M} KL( p(y|x^(m)) || p~(y) ) Sum_{k=1}^{|y|} p(y_k|x^(m)) df_k(x^(m))/dtheta
                    - Sum_{m=1}^{M} Sum_{j=1}^{|y|} p(y_j|x^(m)) log[ p(y_j|x^(m)) / p~(y_j) ] df_j(x^(m))/dtheta        (14)

where f_k(x^(m)) is given by (9). The computational complexity of computing the updates for theta is O(M |y|^2), where M is the number of training patterns and |y| is the number of clusters (which is assumed to be small). Note that in contrast to spectral methods (see e.g. Shi and Malik (2000), Ng et al. (2001)) neither the objective (1) nor its gradients require inversion of the Gram matrix K ∈ R^{M x M} or computations of its eigenvalue decomposition.

In the special case of the radial basis function (RBF) kernels

    K_beta(x^(i), x^(j)) = exp{ -beta ||x^(i) - x^(j)||^2 },        (15)

the gradients of the encoder potentials are simply given by

    df_j(x^(m))/dbeta = (1/s_j) ( a_j^T K~ a_j - 2 k~^T(x^(m)) a_j ),        (16)

where K~ ≡ {K~_{ij}} = { K(x^(i), x^(j)) (1 - delta(x^(i) - x^(j))) }, k~(x^(m)) is the m-th column of K~, and delta is the Kronecker delta. By substituting (16) into the general expression (14), we obtain the gradient of the mutual information with respect to the RBF kernel parameters.
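For completeness, a short sketch of the RBF pieces (15)-(16); K~ is K with its diagonal zeroed, following the definition above, and the result plugs into (14) as the df_j/dbeta factor (numpy as in the previous sketches).

def rbf_gram(X, beta):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * D2)                              # Eq. (15)

def df_dbeta(Kmat, A, s_tilde):
    s = np.exp(s_tilde)
    Ktil = Kmat * (1.0 - np.eye(len(Kmat)))                # K~: diagonal removed
    quad = np.einsum('mj,mn,nj->j', A, Ktil, A)            # a_j^T K~ a_j
    return (quad[None, :] - 2 * Ktil @ A) / s[None, :]     # Eq. (16), entry (m, j)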
3 Demonstrations

We have empirically compared our kernelized information-theoretic clustering approach with Gaussian mixture, k-means, feature-space k-means, non-kernelized information-theoretic clustering (see Section 2.1), and a multi-class spectral clustering method optimizing the normalized cuts. We illustrate the methods on datasets that are particularly easy to visualize. Figure 1 shows a typical application of the methods to the spiral data, where x_1(t) = t cos(t)/4, x_2(t) = t sin(t)/4 correspond to different coordinates of x ∈ R^{|x|}, |x| = 2, and t ∈ [0, 3.(3)pi]. The kernel parameter beta of the RBF-kernelized encoding distribution was initialized at beta_0 = 2.5 and learned according to (16). The initial settings of the coefficients A ∈ R^{M x |y|} in the feature space were sampled from N_{A_ij}(0, 0.1). The log-variances s~_1, ..., s~_{|y|} were initialized at zeros. The encoder parameters A and {s~_j | j = 1, ..., |y|} (along with the RBF kernel parameter beta) were optimized by applying the scaled conjugate gradients. We found that Gaussian mixtures trained by maximizing the likelihood usually resulted in highly stochastic cluster allocations; additionally, they led to a large variation in cluster sizes. The Gaussian mixtures were initialized using k-means; other choices usually led to worse performance. We also see that the k-means effectively breaks, as the similarly clustered points lie close to each other in R^2 (according to the L_2-norm), but the allocated clusters are not locally smooth in t. On the other hand, our method with the RBF-kernelized encoders typically led to locally smooth cluster allocations.
[Figure 1 here: three columns of plots over the spiral data. Top row, cluster allocations: "Gaussian mixture clustering for |y|=3", "K-means clustering for |y|=3", "KMI clustering, beta = 0.825 (beta_0 = 2.500), |y|=3". Bottom row, the responsibilities p(y_j|x^(m)) against the sorted training patterns for Gaussian mixture clustering, K-means clustering, and the kernelized encoders with beta = 0.825.]

Figure 1: Cluster allocations (top) and the corresponding responsibilities (bottom) p(y_j|x^(m)) for |x| = 2, |y| = 3, M = 70 (the patterns are sorted to indicate local smoothness in the phase parameter). Left: Gaussian mixtures; middle: K-means; right: information-maximization for the (RBF-)kernelized encoder (the learned parameter beta ~ 0.825). Light, medium, and dark-gray squares show the cluster colors corresponding to deterministic cluster allocations. The color intensity of each training point x^(m) is the average of the pure cluster intensities, weighted by the responsibilities p(y_j|x^(m)). Nearly indistinguishable dark colors of the Gaussian mixture clustering indicate soft cluster assignments.
Figure 2 shows typical results for spatially translated letters with |x| = 2, M =
150, and |y| = 2 (or |y| = 3), where we compare Gaussian mixture, feature-space
k-means, the spectral method of Ng et al. (2001), and our information-theoretic
clustering method. The initializations followed the same procedure as the previous
experiment. The results produced by our kernelized infomax method were generally
stable under different initializations, provided that beta_0 was not too large or too small. In contrast to Gaussian mixture, spectral, and feature-space k-means clustering, the clusters produced by kernelized infomax for the cases considered are arguably more anthropomorphically appealing. Note that feature-space k-means, as well as the spectral method, presume that the kernel matrix K ∈ R^{M x M} is fixed and known (in the latter case, the Gram matrix defines the edge weights of the graph). For illustration purposes, we show the results for the fixed Gram matrices with kernel parameters beta set to the initial values beta_0 = 1 or the learned values beta ~ 0.604 of the kernelized infomax method for |y| = 2. One may potentially improve the performance of these methods by running the algorithms several times (with different kernel parameters beta), and choosing the beta which results in the tightest clusters (Ng et al. (2001)). We were indeed able to apply the spectral method to obtain clusters for TA and T (for beta ~ 1.1). While being useful in some situations, the procedure generally requires multiple runs. In contrast, the kernelized infomax method typically resulted in meaningful cluster allocations (TT and A) after a single run of the algorithm (see Figure 2 (c)), with the results qualitatively consistent under a variety of initializations.
Additionally, we note that in situations when we used simpler encoder models (see
expression (3)) or did not adapt parameters of the kernel functions, the extracted
clusters were often more intuitive than those produced by rival methods, but inferior
[Figure 2 here: six panels (a)-(f) of cluster allocations on the translated-letters data. Panel titles: (a) "Gaussian mixture clustering for |y|=2"; (b) "Feature-space K-means, beta = 0.604"; (c) "KMI clustering, beta = 0.6035 (from beta_0 = 1), |y|=2"; (d) "Spectral clustering, beta ~ 0.604, |y|=2"; (e) "KMI clustering, beta_0 = 1.000, |y|=3, I = 1.03"; (f) "KMI clusters: beta ~ 0.579 (beta_0 = 1), |y|=3, I = 1.10".]

Figure 2: Learning cluster allocations for |y| = 2 and |y| = 3. Where appropriate, the stars show the cluster centers. (a) two-component Gaussian mixture trained by the EM algorithm; (b) feature-space k-means with beta = 1.0 and beta ~ 0.604 (the only pattern clustered differently (under identical initializations) is marked); (c) kernelized infomax clustering for |y| = 2 (the inverse variance beta of the RBF kernel varied from beta_0 = 1 (at the initialization) to beta ~ 0.604 after convergence); (d) spectral clustering for |y| = 2 and beta ~ 0.604; (e) kernelized infomax clustering for |y| = 3 with a fixed Gram matrix; (f) kernelized infomax clustering for |y| = 3 started at beta_0 = 1 and reaching beta ~ 0.579 after convergence.
to the ones produced by (7) with the optimal learned beta. Our results suggest that by learning kernel parameters we may often obtain higher values of the objective I(x, y), as well as more appealing cluster labelings (e.g. for the examples shown in Figure 2 (e), (f) we get I(x, y) ~ 1.03 and I(x, y) ~ 1.10 respectively). Undoubtedly, a careful
choice of the kernel function could potentially lead to an even better visualization
of the locally smooth, non-degenerate structure.
4 Discussion
The proposed information-theoretic clustering framework is fundamentally different from the generative latent variable clustering approaches. Instead of explicitly
parameterizing the data-generating process, we impose constraints on the encoder
distributions, transforming the clustering problem to learning optimal discrete encodings of the unlabeled data. Many possible parameterizations of such distributions may potentially be considered. Here we discussed one such choice, which
implicitly utilizes projections of the data to high-dimensional feature spaces.
Our method suggests a formal information-theoretic procedure for learning optimal
cluster allocations. One potential disadvantage of the method is a potentially large
number of local optima; however, our empirical results suggest that the method is
stable under different initializations, provided that the initial variances are sufficiently large. Moreover, the results suggest that in the cases considered the method
favorably compares with the common generative clustering techniques, k-means,
feature-space k-means, and the variants of the method which do not use nonlinearities or do not learn parameters of kernel functions.
A number of interesting interpretations of clustering approaches in feature spaces
are possible. Recently, it has been shown (Bach and Jordan (2003); Dhillon et al.
(2004)) that spectral clustering methods optimizing normalized cuts (Shi and Malik
(2000); Ng et al. (2001)) may be viewed as a form of weighted feature-space k-means,
for a specific fixed similarity matrix. We are currently relating our method to the
common spectral clustering approaches and a form of annealed weighted featurespace k-means. We stress, however, that our information-maximizing framework
suggests a principled way of learning optimal similarity matrices by adapting parameters of the kernel functions. Additionally, our method does not require computations of eigenvalues of the similarity matrix, which may be particularly beneficial for
large datasets. Finally, we expect that the proper information-theoretic interpretation of the encoder framework may facilitate extensions of the information-theoretic
clustering method to richer families of encoder distributions.
References
Agakov, F. V. and Barber, D. (2006). Auxiliary Variational Information Maximization
for Dimensionality Reduction. In Proceedings of the PASCAL Workshop on Subspace,
Latent Structure and Feature Selection Techniques. Springer. To appear.
Bach, F. R. and Jordan, M. I. (2003). Learning spectral clustering. In NIPS. MIT Press.
Barber, D. and Agakov, F. V. (2003). The IM Algorithm: A Variational Approach to
Information Maximization. In NIPS. MIT Press.
Brunel, N. and Nadal, J.-P. (1998). Mutual Information, Fisher Information and Population Coding. Neural Computation, 10:1731-1757.
Chechik, G. and Tishby, N. (2002). Extracting relevant structures with side information.
In NIPS, volume 15. MIT Press.
Cover, T. M. and Thomas, J. A. (1991). Elements of Information Theory. Wiley, NY.
Dhillon, I. S. and Guan, Y. (2003). Information Theoretic Clustering of Sparse CoOccurrence Data. In Proceedings of the 3rd IEEE International Conf. on Data Mining.
Dhillon, I. S., Guan, Y., and Kulis, B. (2004). Kernel k-means, Spectral Clustering and
Normalized Cuts. In KDD. ACM.
Fisher, J. W. and Principe, J. C. (1998). A methodology for information theoretic feature
extraction. In Proc. of the IEEE International Joint Conference on Neural Networks.
Linsker, R. (1988). Towards an Organizing Principle for a Layered Perceptual Network.
In Advances in Neural Information Processing Systems. American Institute of Physics.
Ng, A. Y., Jordan, M., and Weiss, Y. (2001). On spectral clustering: Analysis and an
algorithm. In NIPS, volume 14. MIT Press.
Schölkopf, B. and Smola, A. (2002). Learning with Kernels. MIT Press.
Shi, J. and Malik, J. (2000). Normalized Cuts and Image Segmentation. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 22(8):888-905.
Tishby, N., Pereira, F. C., and Bialek, W. (1999). The information bottleneck method. In
Proceedings of the 37-th Annual Allerton Conference on Communication, Control and
Computing. Kluwer Academic Publishers.
Torkkola, K. and Campbell, W. M. (2000). Mutual Information in Learning Feature
Transformations. In ICML. Morgan Kaufmann.
2,131 | 2,935 | Variational Bayesian Stochastic
Complexity of Mixture Models
Kazuho Watanabe?
Department of Computational Intelligence
and Systems Science
Tokyo Institute of Technology
Mail Box:R2-5, 4259 Nagatsuta,
Midori-ku, Yokohama, 226-8503, Japan
[email protected]
Sumio Watanabe
P& I Lab.
Tokyo Institute of Technology
[email protected]
Abstract
The Variational Bayesian framework has been widely used to approximate the Bayesian learning. In various applications, it has
provided computational tractability and good generalization performance. In this paper, we discuss the Variational Bayesian learning of the mixture of exponential families and provide some additional theoretical support by deriving the asymptotic form of
the stochastic complexity. The stochastic complexity, which corresponds to the minimum free energy and a lower bound of the
marginal likelihood, is a key quantity for model selection. It also
enables us to discuss the effect of hyperparameters and the accuracy of the Variational Bayesian approach as an approximation of
the true Bayesian learning.
1 Introduction
The Variational Bayesian (VB) framework has been widely used as an approximation of the Bayesian learning for models involving hidden (latent) variables such as
mixture models[2][4]. This framework provides computationally tractable posterior
distributions with only modest computational costs in contrast to Markov chain
Monte Carlo (MCMC) methods. In many applications, it has performed better
generalization compared to the maximum likelihood estimation.
In spite of its tractability and its wide range of applications, little has been done
to investigate the theoretical properties of the Variational Bayesian learning itself.
For example, questions like how accurately it approximates the true one remained
unanswered until quite recently. To address these issues, the stochastic complexity
in the Variational Bayesian learning of gaussian mixture models was clarified and
the accuracy of the Variational Bayesian learning was discussed[10].
* This work was supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for JSPS Fellows 4637 and for Scientific Research 15500130, 2005.
In this paper, we focus on the Variational Bayesian learning of more general mixture models, namely the mixtures of exponential families, which include mixtures of distributions such as gaussian, binomial and gamma. Mixture models are known to be non-regular statistical models due to the non-identifiability of parameters caused by their hidden variables[7]. In some recent studies, the Bayesian stochastic complexities of non-regular models have been clarified and it has been proven that they become smaller than those of regular models[12][13]. This indicates an advantage of the Bayesian learning when it is applied to non-regular models.

As our main results, the asymptotic upper and lower bounds are obtained for the stochastic complexity or the free energy in the Variational Bayesian learning of the mixture of exponential families. The stochastic complexity is an important quantity for model selection, and giving its asymptotic form also contributes to the following two issues. One is the accuracy of the Variational Bayesian learning as an approximation method, since the stochastic complexity shows the distance from the variational posterior distribution to the true Bayesian posterior distribution in the sense of Kullback information. Indeed, we give the asymptotic form of the stochastic complexity as F(n) ~ lambda log n where n is the sample size; by comparing the coefficient lambda with that of the true Bayesian learning, we discuss the accuracy of the VB approach. Another is the influence of the hyperparameter on the learning process. Since the Variational Bayesian algorithm is a procedure of minimizing the functional that finally gives the stochastic complexity, the derived bounds indicate how the hyperparameters influence the process of the learning. Our results have an implication for how to determine the hyperparameter values before the learning process.
We consider the case in which the true distribution is contained in the learner model.
Analyzing the stochastic complexity in this case is most valuable for comparing the
Variational Bayesian learning with the true Bayesian learning. This is because the
advantage of the Bayesian learning is typical in this case[12]. Furthermore, this
analysis is necessary and essential for addressing the model selection problem and
hypothesis testing.
The paper is organized as follows. In Section 2, we introduce the mixture of exponential family model. In Section 3, we describe the Bayesian learning. In Section
4, the Variational Bayesian framework is described and the variational posterior
distribution for the mixture of exponential family model is derived. In Section 5,
we present our main result. Discussion and conclusion follow in Section 6.
2 Mixture of Exponential Family

Denote by c(x|b) a density function of the input x ∈ R^N given an M-dimensional parameter vector b = (b^(1), b^(2), ..., b^(M))^T ∈ B where B is a subset of R^M. The general mixture model p(x|theta) with a parameter vector theta is defined by

    p(x|theta) = Sum_{k=1}^{K} a_k c(x|b_k),

where the integer K is the number of components and {a_k | a_k >= 0, Sum_{k=1}^{K} a_k = 1} is the set of mixing proportions. The model parameter theta is {a_k, b_k}_{k=1}^{K}.

A mixture model is called a mixture of exponential family (MEF) model or exponential family mixture model if the probability distribution c(x|b) for each component is given by the following form,

    c(x|b) = exp{ b . f(x) + f_0(x) - g(b) },        (1)

where b ∈ B is called the natural parameter, b . f(x) is its inner product with the vector f(x) = (f_1(x), ..., f_M(x))^T, and f_0(x) and g(b) are real-valued functions of the input x and the parameter b, respectively[3]. Suppose the functions f_1, ..., f_M and a constant function are linearly independent, which means the effective number of parameters in a single component distribution c(x|b) is M.

The conjugate prior distribution phi(theta) for the MEF model is given by the product of the following two distributions on a = {a_k}_{k=1}^{K} and b = {b_k}_{k=1}^{K},

    phi(a) = [ Gamma(K phi_0) / Gamma(phi_0)^K ] Prod_{k=1}^{K} a_k^{phi_0 - 1},        (2)

    phi(b) = Prod_{k=1}^{K} phi(b_k) = Prod_{k=1}^{K} exp{ xi_0 ( b_k . nu_0 - g(b_k) ) } / C(xi_0, nu_0),        (3)

where phi_0 > 0, nu_0 ∈ R^M and xi_0 > 0 are constants called hyperparameters and

    C(xi, nu) = Int exp{ xi ( nu . b - g(b) ) } db        (4)

is a function of xi ∈ R and nu ∈ R^M.

The mixture model can be rewritten as follows by using a hidden variable y = (y_1, ..., y_K) ∈ {(1, 0, ..., 0), (0, 1, ..., 0), ..., (0, 0, ..., 1)},

    p(x, y|theta) = Prod_{k=1}^{K} ( a_k c(x|b_k) )^{y_k}.

If and only if the datum x is generated from the k-th component, y_k = 1.
3 The Bayesian Learning

Suppose n training samples X^n = {x_1, ..., x_n} are independently and identically taken from the true distribution p_0(x). In the Bayesian learning of a model p(x|theta) whose parameter is theta, first, the prior distribution phi(theta) on the parameter theta is set. Then the posterior distribution p(theta|X^n) is computed from the given dataset and the prior by

    p(theta|X^n) = (1/Z(X^n)) exp( -n H_n(theta) ) phi(theta),        (5)

where H_n(theta) is the empirical Kullback information,

    H_n(theta) = (1/n) Sum_{i=1}^{n} log[ p_0(x_i) / p(x_i|theta) ],        (6)

and Z(X^n) is the normalization constant that is also known as the marginal likelihood or the evidence of the dataset X^n [6]. The Bayesian predictive distribution p(x|X^n) is given by averaging the model over the posterior distribution as follows,

    p(x|X^n) = Int p(x|theta) p(theta|X^n) dtheta.        (7)

The stochastic complexity F(X^n) is defined by

    F(X^n) = -log Z(X^n),        (8)

which is also called the free energy and is important in most data modelling problems. Practically, it is used as a criterion by which the model is selected and the hyperparameters in the prior are optimized[1][9].

Define the average stochastic complexity F(n) by

    F(n) = E_{X^n}[ F(X^n) ],        (9)

where E_{X^n}[.] denotes the expectation value over all sets of training samples. Recently, it was proved that F(n) has the following asymptotic form[12],

    F(n) = lambda log n - (m - 1) log log n + O(1),        (10)

where lambda and m are the rational number and the natural number respectively which are determined by the singularities of the set of true parameters. In regular statistical models, 2 lambda is equal to the number of parameters and m = 1, whereas in non-regular models such as mixture models, 2 lambda is not larger than the number of parameters and m >= 1. This means an advantage of the Bayesian learning.

However, in the Bayesian learning, one computes the stochastic complexity or the predictive distribution by integrating over the posterior distribution, which typically cannot be performed analytically. As an approximation, the VB framework was proposed[2][4].
4 The Variational Bayesian Learning

4.1 The Variational Bayesian Framework

In the VB framework, the Bayesian posterior p(Y^n, theta|X^n) of the hidden variables and the parameters is approximated by the variational posterior q(Y^n, theta|X^n), which factorizes as

    q(Y^n, theta|X^n) = Q(Y^n|X^n) r(theta|X^n),        (11)

where Q(Y^n|X^n) and r(theta|X^n) are posteriors on the hidden variables and the parameters respectively. The variational posterior q(Y^n, theta|X^n) is chosen to minimize the functional F~[q] defined by

    F~[q] = Sum_{Y^n} Int q(Y^n, theta|X^n) log[ q(Y^n, theta|X^n) p_0(X^n) / p(X^n, Y^n, theta) ] dtheta        (12)
          = F(X^n) + K( q(Y^n, theta|X^n) || p(Y^n, theta|X^n) ),        (13)

where K(q(Y^n, theta|X^n) || p(Y^n, theta|X^n)) is the Kullback information between the true Bayesian posterior p(Y^n, theta|X^n) and the variational posterior q(Y^n, theta|X^n)^1. This leads to the following theorem. The proof is well known[8].

Theorem 1 If the functional F~[q] is minimized under the constraint (11) then the variational posteriors, r(theta|X^n) and Q(Y^n|X^n), satisfy

    r(theta|X^n) = (1/C_r) phi(theta) exp <log p(X^n, Y^n|theta)>_{Q(Y^n|X^n)},        (14)

    Q(Y^n|X^n) = (1/C_Q) exp <log p(X^n, Y^n|theta)>_{r(theta|X^n)},        (15)

where C_r and C_Q are the normalization constants^2.

^1 K(q(x)||p(x)) denotes the Kullback information from a distribution q(x) to a distribution p(x), that is, K(q(x)||p(x)) = Int q(x) log[ q(x)/p(x) ] dx.

We define the stochastic complexity in the VB learning F~(X^n) by the minimum value of the functional F~[q], that is,

    F~(X^n) = min_{r,Q} F~[q],

which shows the accuracy of the VB approach as an approximation of the Bayesian learning. F~(X^n) is also used for model selection since it gives an upper bound of the true Bayesian stochastic complexity F(X^n).
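To make Theorem 1 concrete, here is a hedged sketch of the alternating updates (14)-(15) for one MEF instance, a mixture of unit-variance Gaussians, where b_k is the mean, f(x) = x and g(b) = b^2/2, so that the update formulas (16)-(19) of the next subsection take a simple closed form. The hyperparameters follow the conjugate prior (2)-(3); the initialization and iteration count are illustrative choices.

import numpy as np
from scipy.special import digamma

def vb_mef_gaussian(X, K, phi0=1.0, xi0=1.0, nu0=0.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    Y = rng.dirichlet(np.ones(K), size=n)            # Q(Y^n) responsibilities
    for _ in range(n_iter):
        nk = Y.sum(0)                                 # expected counts n_k
        fk = (Y * X[:, None]).sum(0) / np.maximum(nk, 1e-12)
        xik = nk + xi0                                # Eq. (17): xi_k
        nuk = (nk * fk + xi0 * nu0) / xik             # Eq. (17): nu_k
        Elog_a = digamma(nk + phi0) - digamma(n + K * phi0)   # <log a_k> under r(a)
        bk, var_bk = nuk, 1.0 / xik                   # r(b_k) = N(nu_k, 1/xi_k)
        # Q(Y^n): log rho_ik = <log a_k> + x_i <b_k> - <g(b_k)> up to a constant
        logrho = Elog_a[None, :] + X[:, None] * bk[None, :] \
                 - 0.5 * (bk ** 2 + var_bk)[None, :]
        logrho -= logrho.max(1, keepdims=True)
        Y = np.exp(logrho)
        Y /= Y.sum(1, keepdims=True)
    return Y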
4.2 Variational Posterior for MEF Model

In this subsection, we derive the variational posterior r(theta|X^n) for the MEF model based on (14) and then define the variational parameter for this model.

Using the complete data {X^n, Y^n} = {(x_1, y_1), ..., (x_n, y_n)}, we put

    y~_i^k = <y_i^k>_{Q(Y^n)},   n_k = Sum_{i=1}^{n} y~_i^k,   and   f_k = (1/n_k) Sum_{i=1}^{n} y~_i^k f(x_i),

where y_i^k = 1 if and only if the i-th datum x_i is from the k-th component. The variable n_k is the expected number of the data that are estimated to be from the k-th component. From (14) and the respective priors (2) and (3), the variational posterior r(theta) is obtained as the product of the following two distributions^3,

    r(a) = [ Gamma(n + K phi_0) / Prod_{k=1}^{K} Gamma(n_k + phi_0) ] Prod_{k=1}^{K} a_k^{n_k + phi_0 - 1},        (16)

    r(b) = Prod_{k=1}^{K} r(b_k) = Prod_{k=1}^{K} (1/C(xi_k, nu_k)) exp{ xi_k ( nu_k . b_k - g(b_k) ) },        (17)

where nu_k = (n_k f_k + xi_0 nu_0)/(n_k + xi_0) and xi_k = n_k + xi_0. Let

    a~_k = <a_k>_{r(a)} = (n_k + phi_0)/(n + K phi_0),        (18)

    b~_k = <b_k>_{r(b_k)} = (1/xi_k) dlog C(xi_k, nu_k)/dnu_k,        (19)

and define the variational parameter theta~ by theta~ = <theta>_{r(theta)} = {a~_k, b~_k}_{k=1}^{K}. Then it is noted that the variational posterior r(theta) and C_Q in (15) are parameterized by the variational parameter theta~. Therefore, we denote them as r(theta|theta~) and C_Q(theta~) henceforth.

We define the variational estimator theta~_vb by the variational parameter theta~ that attains the minimum value of the stochastic complexity F~(X^n). Then, putting (15) into (12), we obtain

    F~(X^n) = min_{theta~} { K(r(theta|theta~)||phi(theta)) - (log C_Q(theta~) + S(X^n)) }        (20)
            = K(r(theta|theta~_vb)||phi(theta)) - (log C_Q(theta~_vb) + S(X^n)),        (21)

where S(X^n) = -Sum_{i=1}^{n} log p_0(x_i).

Therefore, our aim is to evaluate the minimum value of (20) as a function of the variational parameter theta~.

^2 <.>_{p(x)} denotes the expectation over p(x).
^3 Hereafter, we omit the condition X^n of the variational posteriors, and abbreviate them to q(Y^n, theta), Q(Y^n) and r(theta).
5 Main Result

The average stochastic complexity F~(n) in the VB learning is defined by

    F~(n) = E_{X^n}[ F~(X^n) ].        (22)

We assume the following conditions.

(i) The true distribution p_0(x) is an MEF model p(x|theta_0) which has K_0 components and the parameter theta_0 = {a*_k, b*_k}_{k=1}^{K_0},

    p(x|theta_0) = Sum_{k=1}^{K_0} a*_k exp{ b*_k . f(x) + f_0(x) - g(b*_k) },

where b*_k ∈ R^M and b*_k != b*_j (k != j). And suppose that the model p(x|theta) has K components,

    p(x|theta) = Sum_{k=1}^{K} a_k exp{ b_k . f(x) + f_0(x) - g(b_k) },

and K >= K_0 holds.

(ii) The prior distribution of the parameters is phi(theta) = phi(a)phi(b) given by (2) and (3) with phi(b) bounded.

(iii) Regarding the distribution c(x|b) of each component, the Fisher information matrix I(b) = d^2 g(b)/db db^T satisfies 0 < |I(b)| < +infinity for arbitrary b ∈ B^4. The function nu . b - g(b) has a stationary point at b~ in the interior of B for each nu ∈ { dg(b)/db | b ∈ B }.

Under these conditions, we prove the following.

Theorem 2 (Main Result) Assume the conditions (i), (ii) and (iii). Then the average stochastic complexity F~(n) defined by (22) satisfies

    lambda log n + E_{X^n}[ n H_n(theta~_vb) ] + C_1 <= F~(n) <= lambda~ log n + C_2,        (23)

for an arbitrary natural number n, where C_1, C_2 are constants independent of n and

    lambda~ = (K - K_0) phi_0 + (M K_0 + K_0 - 1)/2,   lambda = (K - 1) phi_0 + M/2,   (phi_0 <= (M+1)/2),
    lambda~ = lambda = (M K + K - 1)/2,                                                (phi_0 > (M+1)/2).        (24)
This theorem shows the asymptotic form of the average stochastic complexity in the Variational Bayesian learning. The coefficients lambda and lambda~ of the leading terms are determined by K and K_0, the numbers of components of the learner and the true distribution, the number of parameters M of each component, and the hyperparameter phi_0 of the conjugate prior given by (2).

In this theorem, n H_n(theta~_vb) = -Sum_{i=1}^{n} log p(x_i|theta~_vb) - S(X^n), and -Sum_{i=1}^{n} log p(x_i|theta~_vb) is a training error which is computable during the learning. If the term E_{X^n}[n H_n(theta~_vb)] is a bounded function of n, then it immediately follows from this theorem that

    lambda log n + O(1) <= F~(n) <= lambda~ log n + O(1),

where O(1) is a bounded function of n. In certain cases, such as binomial mixtures and mixtures of von Mises distributions, it is actually a bounded function of n. In the case of gaussian mixtures, if B = R^N, it is conjectured that the minus likelihood ratio min_theta n H_n(theta), a lower bound of n H_n(theta~_vb), is at most of the order of log log n[5].

Since the dimension of the parameter theta is M K + K - 1, the average stochastic complexity of regular statistical models, which coincides with the Bayesian information criterion (BIC)[9], is given by lambda_BIC log n where lambda_BIC = (M K + K - 1)/2. Theorem 2 claims that the coefficient lambda~ of log n is smaller than lambda_BIC when phi_0 <= (M + 1)/2. This implies that the advantage of non-regular models in the Bayesian learning still remains in the VB learning.

^4 d^2 g(b)/db db^T denotes the matrix whose ij-th entry is d^2 g(b)/db^(i) db^(j), and |.| denotes the determinant of a matrix.
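A small helper makes the comparison in the preceding paragraph explicit; it implements the coefficients of (24) as reconstructed above alongside lambda_BIC = (M K + K - 1)/2, and is useful for checking how phi_0 and the redundancy K - K_0 shrink the leading term.

def vb_coefficients(M, K, K0, phi0):
    """Return (lambda_lower, lambda_bar_upper, lambda_BIC) from Theorem 2."""
    assert K >= K0 >= 1
    if phi0 <= (M + 1) / 2:
        lam_bar = (K - K0) * phi0 + (M * K0 + K0 - 1) / 2   # upper-bound coefficient
        lam = (K - 1) * phi0 + M / 2                        # lower-bound coefficient
    else:
        lam_bar = lam = (M * K + K - 1) / 2
    return lam, lam_bar, (M * K + K - 1) / 2

# Example: M=2, K=5, K0=2, phi0=1 gives lambda_bar = 3 + 2.5 = 5.5 < lambda_BIC = 7.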
(Outline of the proof of Theorem 2) From the condition (iii), calculating C(xi_k, nu_k) in (17) by the saddle point approximation, K(r(theta|theta~)||phi(theta)) in (20) is evaluated as follows^5,

    K(r(theta|theta~)||phi(theta)) = G(a~) - Sum_{k=1}^{K} log phi(b~_k) + O_p(1),        (25)

where the function G(a~) of a~ = {a~_k}_{k=1}^{K} is given by

    G(a~) = [(M K + K - 1)/2] log n + { M/2 - (phi_0 - 1/2) } Sum_{k=1}^{K} log a~_k.        (26)

Then log C_Q(theta~) in (20) is evaluated as follows.

    n H_n(theta~) + O_p(1) <= -(log C_Q(theta~) + S(X^n)) <= n H~_n(theta~) + O_p(1),        (27)

where

    H~_n(theta~) = (1/n) Sum_{i=1}^{n} log[ p(x_i|theta_0) / ( Sum_{k=1}^{K} a~_k c(x_i|b~_k) exp{ -C/(n_k + min{phi_0, xi_0}) } ) ],

and C is a constant. Thus, from (20), evaluating the right-hand sides of (25) and (27) at specific points near the true parameter theta_0, we obtain the upper bound in (23). The lower bound in (23) is obtained from (25) and (27) by Jensen's inequality and the constraint Sum_{k=1}^{K} a~_k = 1. (Q.E.D.)
6
Discussion and Conclusion
In this paper, we showed the upper and lower bounds of the stochastic complexity
for the mixture of exponential family models in the VB learning.
Firstly, we compare the stochastic complexity shown in Theorem 2 with the one
in the true Bayesian learning. For mixture models with M parameters in each
component, the following upper bound for the coefficient λ of F(n) in (10) is known
[13]:

λ ≤ (K + K₀ − 1)/2   (M = 1),
λ ≤ (K − K₀) + (M K₀ + K₀ − 1)/2   (M ≥ 2).   (28)

Under the conditions on the prior distribution for which the above bound was
derived, we can compare the stochastic complexities when φ₀ = 1. Putting φ₀ = 1
in (24), we have

λ̄ = K − K₀ + (M K₀ + K₀ − 1)/2.   (29)
⁵ O_p(1) denotes a random variable bounded in probability.
Since we obtain F(n) = λ̄ log n + O(1) under certain assumptions [11], let us compare
λ̄ of the VB learning to λ in (28) of the true Bayesian learning. When M = 1, that
is, when each component has one parameter, λ ≤ λ̄ holds since K₀ ≤ K. This means
that the more redundant components the model has, the more the VB learning
differs from the true Bayesian learning. In this case, 2λ̄ is equal to the number
of the parameters of the model. Hence the BIC [9] corresponds to λ̄ log n when
M = 1. If M ≥ 2, the upper bound of λ is equal to λ̄. This implies that the
variational posterior is close to the true Bayesian posterior when M ≥ 2. A more
precise discussion of the accuracy of the approximation can be done for models
for which tighter bounds or exact values of the coefficient λ in (10) are given [10].
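The M = 1 comparison can be verified mechanically; the sketch below is ours and simply checks the bound of (28) against (29) over a small grid.

```python
# Sketch: numerically check that the true-Bayes upper bound of eq. (28)
# does not exceed the VB coefficient lambda_bar of eq. (29) at phi0 = 1.
# Illustrative only.

def true_bayes_upper(K, K0, M):
    """Upper bound on the true-Bayes coefficient lambda, eq. (28)."""
    if M == 1:
        return (K + K0 - 1) / 2
    return (K - K0) + (M * K0 + K0 - 1) / 2

for K in range(2, 8):
    for K0 in range(1, K + 1):
        lam_bar = (K - K0) + (1 * K0 + K0 - 1) / 2   # eq. (29) with M = 1
        assert true_bayes_upper(K, K0, 1) <= lam_bar  # lambda <= lambda_bar
```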
Secondly, we point out that Theorem 2 shows how the hyperparameter φ₀ influences the process of the VB learning. The coefficient λ̄ in (24) indicates that only
when φ₀ ≤ (M + 1)/2 does the prior distribution (2) work to eliminate the redundant
components that the model has; otherwise it works to use all the components.
And lastly, let us give examples of how to use the theoretical bounds in (23). One
can examine experimentally whether the actual iterative algorithm converges to the
optimal variational posterior instead of local minima by comparing the stochastic
complexity with our theoretical result. The theoretical bounds would also enable us
to compare the accuracy of the VB learning with that of the Laplace approximation
or the MCMC method. As mentioned in Section 4, our result will be important for
developing effective model selection methods using F(X^n) in future work.
References
[1] H. Akaike, "Likelihood and Bayes procedure," Bayesian Statistics (J. M. Bernardo et al., eds.),
University Press, Valencia, Spain, pp. 143-166, 1980.
[2] H. Attias, "Inferring parameters and structure of latent variable models by variational
Bayes," Proc. of UAI, 1999.
[3] L. D. Brown, "Fundamentals of statistical exponential families," IMS Lecture Notes-Monograph Series, 1986.
[4] Z. Ghahramani, M. J. Beal, "Graphical models and variational methods," Advanced
Mean Field Methods, MIT Press, 2000.
[5] J. A. Hartigan, "A failure of likelihood asymptotics for normal mixtures," Proc. of the
Berkeley Conference in Honor of J. Neyman and J. Kiefer, Vol. 2, pp. 807-810, 1985.
[6] D. J. MacKay, "Bayesian interpolation," Neural Computation, 4(2), pp. 415-447, 1992.
[7] G. McLachlan, D. Peel, "Finite mixture models," Wiley, 2000.
[8] M. Sato, "Online model selection based on the variational Bayes," Neural Computation,
13(7), pp. 1649-1681, 2001.
[9] G. Schwarz, "Estimating the dimension of a model," Annals of Statistics, 6(2), pp. 461-464, 1978.
[10] K. Watanabe, S. Watanabe, "Lower bounds of stochastic complexities in variational
Bayes learning of Gaussian mixture models," Proc. of IEEE CIS04, pp. 99-104, 2004.
[11] K. Watanabe, S. Watanabe, "Stochastic complexity for mixture of exponential families
in variational Bayes," Proc. of ALT05, pp. 107-121, 2005.
[12] S. Watanabe, "Algebraic analysis for non-identifiable learning machines," Neural Computation, 13(4), pp. 899-933, 2001.
[13] K. Yamazaki, S. Watanabe, "Singularities in mixture models and upper bounds of
stochastic complexity," Neural Networks, 16, pp. 1029-1038, 2003.
2,132 | 2,936 | A Cortically-Plausible Inverse Problem
Solving Method Applied to Recognizing
Static and Kinematic 3D Objects
David W. Arathorn
Center for Computational Biology,
Montana State University
Bozeman, MT 59717
dwa@cns.montana.edu
General Intelligence Corporation
dwa@giclab.com
Abstract
Recent neurophysiological evidence suggests the ability to interpret
biological motion is facilitated by a neuronal "mirror system"
which maps visual inputs to the pre-motor cortex. If the common
architecture and circuitry of the cortices is taken to imply a
common computation across multiple perceptual and cognitive
modalities, this visual-motor interaction might be expected to have
a unified computational basis. Two essential tasks underlying such
visual-motor cooperation are shown here to be simply expressed
and directly solved as transformation-discovery inverse problems:
(a) discriminating and determining the pose of a primed 3D object
in a real-world scene, and (b) interpreting the 3D configuration of
an articulated kinematic object in an image. The recently developed
map-seeking method provides a mathematically tractable,
cortically-plausible solution to these and a variety of other inverse
problems which can be posed as the discovery of a composition of
transformations between two patterns. The method relies on an
ordering property of superpositions and on decomposition of the
transformation spaces inherent in the generating processes of the
problem.
1 Introduction
A variety of "brain tasks" can be tersely posed as transformation-discovery
problems. Vision is replete with such problems, as is limb control. The problem of
recognizing the 2D projection of a known 3D object is an inverse problem of
finding both the visual and pose transformations relating the image and the 3D
model of the object. When the object in the image may be one of many known
objects another step is added to the inverse problem, because there are multiple
candidates each of which must be mapped to the input image with possibly different
transformations. When the known object is not rigid, the determination of
articulations and/or morphings is added to the inverse problem. This includes the
general problem of recognition of biological articulation and motion, a task recently
attributed to a neuronal mirror-system linking visual and motor cortical areas [1].
Though the aggregate transformation space implicit in such problems is vast, a
recently developed method for exploring vast transformation spaces has allowed
some significant progress with a simple unified approach. The map-seeking method
[2,4] is a general purpose mathematical procedure for finding the decomposition of
the aggregate transformation between two patterns, even when that aggregate
transformation space is vast and no prior information is available to restrict
the search space. The problem of concurrently searching a large collection of
memories can be treated as a subset of the transformation problem and consequently
the same method can be applied to find the best transformation between an input
image and a collection of memories (numbering at least thousands in practice to
date) during a single convergence. In the last several years the map-seeking method
has been applied to a variety of practical problems, most of them related to vision, a
few related to kinematics, and some which do not correspond to usual categories of
"brain functions." The generality of the method is due to the fact that only the
mappings are specialized to the task. The mathematics of the search, whether
expressed in an algorithm or in a neuronal or electronic circuit, do not change.
From an evolutionary biological point of view this is a satisfying characteristic for a
model of cortical function because only the connectivity which implements the
mappings must be varied to specialize a cortex to a task. All the rest - organization
and dynamics - would remain the same across cortical areas.
[Figure 1. Data flow in map-seeking circuit.]
Cortical neuroanatomy offers emphatic hints about the characteristics of its solution
in the vast neuronal resources allocated to creating reciprocal top-down and bottom-up pathways. More specifically, recent evidence suggests this reciprocal pathway
architecture appears to be organized with reciprocal, co-centered fan outs in the
opposing directions [3], quite possibly implementing inverse mappings. The data
flow of map-seeking computations, seen in Figure 1, is architecturally compatible
with these features of cortical organization. Though not within the scope of this
discussion, it has been demonstrated [4] that the mathematical expression of the
map-seeking method, seen in equations 6-9 below, has an isomorphic
implementation in neuronal circuitry with reasonably realistic dendritic architecture
and dynamics (e.g. compatible with [5] ) and oscillatory dynamics.
2 The basis for tractable transformation-discovery
The related problems of recognition/interpretation of 2D images of static and
articulated kinematic 3D objects illustrate how cleanly significant vision problems
may be posed and solved as transformation-discovery inverse problems. The visual
and pose (in the sense of orientation) transformations, t^visual and t^pose, between a
given 3D model m1 and the extent of an input image containing a 2D projection
P(o1) of an object o1 mappable to m1 can be expressed

P(o1) = t^visual ∘ t^pose( m1 ),   t^visual ∈ T^visual,  t^pose ∈ T^pose.   (eq. 1)
If we now consider that the model m1 may be constructed by the one-to-many
mapping of a base vector or feature e, and that arbitrary other models mj may be
similarly constructed by different mappings, then the transformation t^formation
corresponding to the correct "memory" converts the memory database search
problem into another transformation-discovery problem with one more composed
transformation¹

P(o1) = t^visual ∘ t^pose ∘ t^formation_{m1}( e ),   t^formation_{m1} ∈ T^formation,
t^formation_{m1}( e ) = m1,  m1 ∈ M.   (eq. 2)
Finally, if we allow a morphable object to be "constructed" by a generative model,
whose various configurations or articulations may be generated by a composition of
transformations t^generative of some root or seed feature e, the problem of explicitly
recognizing the particular configuration of the morph becomes a transformation-discovery problem of the form

P(C(o)) = t^visual ∘ t^pose ∘ t^generative( e ),   t^generative ∈ T^generative.   (eq. 3)
These unifying formulations are only useful, however, if there is a tractable method
of solving for the various transformations. That is what the map-seeking method
provides. Abstractly the problem is the discovery of a composition of
transformations between two patterns. In general the transformations express the
generating process of the problem. Define a correspondence c between vectors r and
w through a composition of L transformations t¹_{j1}, t²_{j2}, …, t^L_{jL}, where t^m_{jm} ∈ { t^m_1, t^m_2, …, t^m_{nm} }:

c(j) = ( ∘_{l=1}^L t^l_{jl} ( r ) ) · w,   (eq. 4)

where the composition operator is defined

∘_{l=1}^L t^l_{jl} ( r ) = t^L_{jL}( t^{L−1}_{j(L−1)}( ⋯ t^1_{j1}( r ) ⋯ ) ).

¹ This illustrates that forming a superposition of memories is equivalent to forming
superpositions of transformations. The first is a more practical realization, as seen in
Figure 1. Though not demonstrated in this paper, the multi-memory architecture has
proved robust with 1000 or more memory patterns from real-world datasets.
Let C be an L-dimensional matrix of values of c(j) whose dimensions are n1, …, nL.
The problem, then, is to find

x = argmax_j c(j).   (eq. 5)

The indices x specify the sequence of transformations that gives the best correspondence
between vectors r and w. The problem is that C is too large a space to search for x
by conventional means. Instead, a continuous embedding of C permits a search
with resources proportional to the sum of the sizes of the dimensions of C instead of
their product.
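To make the cost of searching C directly concrete before introducing the superposition embedding, the following toy sketch enumerates eq. 5 exhaustively; it is ours, and the shift/flip transforms and 1-D patterns are invented for illustration.

```python
import itertools
import numpy as np

# Toy illustration of eqs. 4/5: exhaustive search over the product space of
# two transformation layers. The transforms here (shifts and a flip of a 1-D
# pattern) are invented; a real circuit would use translations, rotations,
# scalings, etc.

def shift(k):
    return lambda v: np.roll(v, k)

layer1 = [shift(k) for k in range(-3, 4)]          # n1 = 7 candidate shifts
layer2 = [lambda v: v, lambda v: v[::-1].copy()]   # n2 = 2: identity, flip

r = np.array([0., 1., 2., 1., 0., 0., 0., 0.])
w = np.roll(r[::-1], 2)                            # a transformed copy of r

best = max(itertools.product(range(len(layer1)), range(len(layer2))),
           key=lambda j: layer2[j[1]](layer1[j[0]](r)) @ w)
print(best)  # the index pair x of eq. 5; cost grows as n1 * n2 * ... * nL
```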
C is embedded in a superposition dot product space Q, defined

Q(G) = ( ∘_{m=1}^L ( Σ_{j=1}^{nm} g^m_j t^m_j ) ( r ) ) · w,   (eq. 6)

where G = [ g^m_{xm} ], m = 1…L, xm = 1…nm, nm is the number of t in layer m,
g^m_j ∈ [0,1], and t'^m_j is the adjoint of t^m_j.
In Q space, the solution to eq. 5 lies along a single axis in the set of axes
represented by each row of G. That is, g^m = ⟨0, …, u_{xm}, …, 0⟩, u_{xm} > 0, which
corresponds to the best fitting transformation t_{xm}, where xm is the mth index in x in
eq. 5. This state is reached from an initial state G = [1] by a process termed
superposition culling, in which the components of grad Q are used to compute a path
in steps Δg,

Δg^m_j ∝ ∂Q/∂g^m_j,   (eq. 7)
g^m := f( g^m + Δg^m ).   (eq. 8)

The function f preserves the maximal component and reduces the others: in neuronal
terms, lateral inhibition. The resulting path along the surface Q can be thought of as
a "high traverse" in contrast to the gradient ascent or descent usual in optimization
methods. The price for moving the problem into superposition dot product space is
that collusions of components of the superpositions can result in better matches for
incorrect mappings than for the mappings of the correct solution. If this occurs it is
almost always a temporary state early in the convergence. This is a consequence of
the ordering property of superpositions (OPS) [2,4], which, as applied here,
describes the characteristics of the surface Q. For example, let three
superpositions r = Σ_{i=1}^n u_i, s = Σ_{j=1}^m v_j and s' = Σ_{k=1}^m v'_k be formed from three sets of sparse
vectors u_i ∈ R, v_j ∈ S and v'_k ∈ S', where R ∩ S = ∅ and R ∩ S' = v_q. Then the
following relationship expresses the OPS: define

P_correct = p( r · s' > r · s ),   P_incorrect = p( r · s' ≤ r · s );

then P_correct > P_incorrect, or P_correct / (P_correct + P_incorrect) > 0.5,
and as n, m → 1, P_correct → 1.0.
Applied to eq. 8, this means that for superpositions composed of vectors which
satisfy the distribution properties of sparse, decorrelating encodings² (a biologically
plausible assumption [6]), the probability of the maximum components of grad Q
moving the solution in the correct direction is always greater than 0.5 and increases
toward 1.0 as G becomes sparser. In other words, the probability of the
occurrence of collusion decreases with the decrease in the number of contributing
components in the superposition(s), and/or the decrease in their gating coefficients.
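The OPS claim can be checked empirically; the sketch below is ours, with random sparse binary vectors standing in for the sparse, decorrelating encodings assumed above.

```python
import numpy as np

# Monte Carlo check of the ordering property of superpositions (OPS): a
# superposition r should, with probability > 0.5, match a superposition s'
# sharing one of its components better than an unrelated superposition s.

rng = np.random.default_rng(0)
D, density, n, m, trials = 1000, 0.02, 5, 5, 2000

def sparse_vec():
    return (rng.random(D) < density).astype(float)

wins = 0
for _ in range(trials):
    U = [sparse_vec() for _ in range(n)]
    r = sum(U)
    s = sum(sparse_vec() for _ in range(m))                    # R, S disjoint (w.h.p.)
    s_prime = U[0] + sum(sparse_vec() for _ in range(m - 1))   # shares v_q = U[0]
    wins += (r @ s_prime > r @ s)

print(wins / trials)  # empirical P_correct, expected > 0.5
```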
3 The map-seeking method and application
A map-seeking circuit (MSC) is composed of several transformation or mapping
layers between the input at one end and a memory layer at the other, as seen in
Figure 1. The compositional structure is evident in the simplicity of the equations
(eqs. 9-12 below) which define a circuit of any dimension. In a multi-layer circuit
of L layers plus memory with n_l mappings in layer l, the forward path signal for layer
m is computed

f^m = Σ_{j=1}^{nm} g^m_j t^m_j ( f^{m−1} )   for m = 1…L.   (eq. 9)
The backward path signal for layer m is computed

b^m = Σ_{j=1}^{nm} g^m_j t'^m_j ( b^{m+1} )   for m = 1…L,
b^{L+1} = Σ_{k=1}^{nw} g^{L+1}_k w_k or w   for m = L+1.   (eq. 10)
The mapping coefficients g are updated by the recurrence

g^m_i := K( g^m_i , t^m_i( f^{m−1} ) • b^{m+1} )   for m = 1…L, i = 1…nm,   (eq. 11)
g^{L+1}_k := K( g^{L+1}_k , f^L • w_k )   for k = 1…nw (optional),

where the match operator u • v = q, q is a scalar measure of goodness-of-match between
u and v, and may be non-linear. When • is a dot product, the second argument of K
is the same as ∂Q/∂g in eq. 7. The competition function K is a realization of the lateral
inhibition function f in eq. 8. It may optionally be applied to the memory layer, as
seen in eq. 11.
² A restricted case of the superposition ordering property using non-sparse representations
is exploited by HRR distributed memory. See [7] for an analysis which is also applicable
here.
K( g_i , q_i ) = max[ 0, g_i − k ( 1 − q_i / max_j q_j ) ].   (eq. 12)

Thresholds are normally applied to q and g, below which they are set to zero to
speed convergence. In the above, f⁰ is the input signal, t^m_j and t'^m_j are the jth forward and
backward mappings for the mth layer, w_k is the kth memory pattern, and z( ) is a nonlinearity applied to the response of each memory. g^m is the set of mapping coefficients g^m_j for the mth layer, each of which is associated with mapping t^m_j and
is modified over time by the competition function K( ).
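For concreteness, eqs. 9-12 can be collected into a single update step. The sketch below is ours, not the author's implementation: the explicit transform/adjoint lists, the plain dot-product match, and the constant k are illustrative choices.

```python
import numpy as np

# One map-seeking iteration per eqs. 9-12, for a circuit whose layers hold
# explicit transform/adjoint pairs operating on vectors.

def compete(g, q, k=0.1):
    """Eq. 12: lateral inhibition on one layer's coefficients."""
    q = np.asarray(q, float)
    return np.maximum(0.0, g - k * (1.0 - q / (q.max() + 1e-12)))

def msc_step(r, w, layers, adjoints, G):
    # Forward pass, eq. 9: f[m] is the input to layer m, with f[0] = r.
    f = [r]
    for m, layer in enumerate(layers):
        f.append(sum(g * t(f[-1]) for g, t in zip(G[m], layer)))
    # Backward pass, eq. 10, seeded here by the memory/target pattern w.
    b = [None] * len(layers) + [w]
    for m in reversed(range(len(layers))):
        b[m] = sum(g * t_adj(b[m + 1]) for g, t_adj in zip(G[m], adjoints[m]))
    # Coefficient update, eq. 11: match each mapping's output against b[m+1].
    for m, layer in enumerate(layers):
        q = [t(f[m]) @ b[m + 1] for t in layer]
        G[m] = compete(G[m], q)
    return G
```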
Recognizing 2D projections of 3D objects under real operating conditions
[Figure 2. Recognizing target among distractor vehicles. (a) M60 3D memory model; (b) source image, Fort Carson Data Set; (c) Gaussian-blurred input image; (d-f) isolation of target in layer 0 at iterations 1, 3, 12; (g) pose determination in final iteration, layer 4 backward, presented left-right mirrored to reflect the mirroring determined in layer 3. M-60 model courtesy Colorado State University.]
Real world problems of the form expressed in eq. 1 often present objects at
distances or in conditions which so limit the resolution that there are no alignable
features other than the shape of the object itself, which is sufficiently blurred as to
prevent generating reliable edges in a feed-forward manner (e.g. Fig. 2c). In the
map-seeking approach, however, the top-down (in biological parlance) inverse mappings of the 3D model are used to create a set of edge hypotheses on the
backward path out of layer 1 into layer 0. In layer 0 these hypotheses are used to
gate the input image. As convergence proceeds, the edge hypotheses are reduced to
a single edge hypothesis that best fits the grayscale input image. Figure 2 shows this
process applied to one of a set of deliberately blurred images from the Fort Carson
Imagery Data Set. The MSC used four layers of visual transformations: 14,400
translational, 31 rotational, 41 scaling, 481 3D projection. The MSC had no
difficulty distinguishing the location and orientation of the tank, despite distractors
and background clutter: in all tests in the dataset the target was correctly located. In
effect, once primed with a top-down expectation, attentional behavior is an
emergent property of applying the map-seeking method to vision [8].
Adapting generative models by transformation
"The direct-matching hypothesis of the interpretation of biological motion] holds
that we understand actions when we map the visual representation of the observed
action onto our motor representation of the same action." [1] This mapping,
attributed to a neuronal mirror-system for which there is gathering neurobiological
evidence (as reviewed in [1]), requires a mechanism for projecting between the
visual space and the constrained skeletal joint parameter (kinematic) space to
disambiguate the 2D projection of body structure. [4] Though this problem has been
solved to various degrees by other computational methods, a review of which is
beyond the scope of this discussion; to the author's knowledge none of these has
biological plausibility. The present purpose is to show how simply the problem can
be expressed as the generative-model interpretation problem introduced in eq. 3 and
solved by map-seeking circuits. An idealized example is the problem of interpreting
the shape of a featureless "snake" articulated into any configuration, as appears in
Fig. 3.
[Figure 3. Projection between visual and kinematic spaces with two map-seeking circuits. (a) input view, (b) top view, (c) projection of 3D occluding contours, (d,e) projections of the relationship of the occluding contours to the generating spine.]
The solution to this problem involves two coupled map-seeking circuits. The
kinematic circuit layers model the multiple degrees of freedom (here two angles,
variable length and optionally variable radius from spine to surface) of each of the
connected spine segments. The other circuit determines the visual transformations,
as seen in the earlier example. The surface of the articulated cylinder is mapped
from an axial spine. The points where that surface is tangent to the viewpoint
vectors define the occluding contours which, projected in 2D, become the object
silhouette. The problem is to find the articulations, segment lengths (and optionally
segment diameter) which account for the occluding contour matching the silhouette
in the input image. In the MSC solution, in the initial state all possible articulations of
the snake spine are superposed, and all the occluding contours from a range of
viewing angles are projected into 2D. The latter superposition serves as the
backward input to the visual-space map-seeking circuit. Since the snake surface is
determined by all of the layers of the kinematic circuit, these are projected in
parallel to form the backward (biologically top-down) 2D input to the visual
transformation-discovery circuit. A matching operation between the contributors to
the 2D occluding contour superposition and the forward transformations of the input
image modulates the gain of each mapping in the kinematic circuit via α^K_{i,m} in eqs.
13, 14 (modified from eq. 11). In eqs. 13, 14 the superscript K indicates the kinematic
circuit and V the visual circuit:

g^K_{i,m} := comp( g^K_{i,m} , α^K_{i,m} · t^K_{i,m}( f^K_{m−1} ) · b^K_{m+1} )   for m = 1…L^K, i = 1…n^K_m,   (eq. 13)

α^K_{i,m} = ( f^V_L ) · ( t_{3D→2D} ∘ t_{surface} ∘ t'^K_{i,m}( b^K_{m+1} ) ).   (eq. 14)
The process converges concurrently in both circuits to a solution, as seen in Figure
3. The match of the occluding contours and the input image, Figure 3a, is seen in
Figure 3b,c, and its three-dimensional structure is clarified in Figure 3d. Figure 3e
shows a view of the 3D structure as determined directly from the mapping
parameters defining the snake "spine" after convergence.
4 Conclusion
amenable to a pure transformation-discovery approach implemented by the mapseeking method. The recognition of static 3D models, as seen in Figure 2, and
other problems [9] solved by MSC have been well tested with real-world input.
Numerous variants of Figure 3 have demonstrated the applicability of MSC to
recognizing generative models of high dimensionality, and the principle has recently
been applied successfully to real-world domains. Consequently, the research to date
does suggest that a single cortical computational mechanism could span a
significant range of the brain's visual and kinematic computing.
References
[1] G. Rizzolatti, L. Fogassi, V. Gallese, "Neurophysiological mechanisms underlying the
understanding and imitation of action," Nature Reviews Neuroscience, 2, 2001, pp. 661-670.
[2] D. Arathorn, "Map-seeking: recognition under transformation using a superposition
ordering property," Electronics Letters, 37(3), 2001, pp. 164-165.
[3] A. Angelucci, B. Levitt, E. Walton, J. M. Hupe, J. Bullier, J. Lund, "Circuits for local and
global signal integration in primary visual cortex," Journal of Neuroscience, 22(19), 2002,
pp. 8633-8646.
[4] D. Arathorn, Map-Seeking Circuits in Visual Cognition, Palo Alto: Stanford University Press,
2002.
[5] A. Polsky, B. Mel, J. Schiller, "Computational subunits in thin dendrites of pyramidal
cells," Nature Neuroscience, 7(6), 2004, pp. 621-627.
[6] B. A. Olshausen, D. J. Field, "Emergence of simple-cell receptive field properties by
learning a sparse code for natural images," Nature, 381, 1996, pp. 607-609.
[7] T. Plate, Holographic Reduced Representation, Stanford, California: CSLI Publications,
2003.
[8] D. Arathorn, "Memory-driven visual attention: an emergent behavior of map-seeking
circuits," in Neurobiology of Attention, eds. L. Itti, G. Rees, J. Tsotsos, Academic/Elsevier,
2005.
[9] C. Vogel, D. Arathorn, A. Parker, and A. Roorda, "Retinal motion tracking in adaptive
optics scanning laser ophthalmoscopy," Proceedings of OSA Conference on Signal Recovery
and Synthesis, Charlotte, NC, June 2005.
2,133 | 2,937 | Inferring Motor Programs from Images of
Handwritten Digits
Geoffrey Hinton and Vinod Nair
Department of Computer Science, University of Toronto
10 King?s College Road, Toronto, M5S 3G5 Canada
{hinton,vnair}@cs.toronto.edu
Abstract
We describe a generative model for handwritten digits that uses two pairs
of opposing springs whose stiffnesses are controlled by a motor program.
We show how neural networks can be trained to infer the motor programs
required to accurately reconstruct the MNIST digits. The inferred motor
programs can be used directly for digit classification, but they can also be
used in other ways. By adding noise to the motor program inferred from
an MNIST image we can generate a large set of very different images of
the same class, thus enlarging the training set available to other methods.
We can also use the motor programs as additional, highly informative
outputs which reduce overfitting when training a feed-forward classifier.
1 Overview
The idea that patterns can be recognized by figuring out how they were generated has been
around for at least half a century [1, 2] and one of the first proposed applications was the
recognition of handwriting using a generative model that involved pairs of opposing springs
[3, 4]. The "analysis-by-synthesis" approach is attractive because the true generative model
should provide the most natural way to characterize a class of patterns. The handwritten 2's
in figure 1, for example, are very variable when viewed as pixels but they have very similar
motor programs. Despite its obvious merits, analysis-by-synthesis has had few successes,
partly because it is computationally expensive to invert non-linear generative models and
partly because the underlying parameters of the generative model are unknown for most
large data sets. For example, the only source of information about how the MNIST digits
were drawn is the images themselves.
We describe a simple generative model in which a pen is controlled by two pairs of opposing springs whose stiffnesses are specified by a motor program. If the sequence of
stiffnesses is specified correctly, the model can produce images which look very like the
MNIST digits. Using a separate network for each digit class, we show that backpropagation can be used to learn a "recognition" network that maps images to the motor programs
required to produce them. An interesting aspect of this learning is that the network creates
its own training data, so it does not require the training images to be labelled with motor
programs. Each recognition network starts with a single example of a motor program and
grows an "island of competence" around this example, progressively extending the region
over which it can map small changes in the image to the corresponding small changes in
the motor program (see figure 2).
Figure 1: An MNIST image of a 2 and the additional images that can be generated by inferring the motor program and then adding random noise to it. The pixels are very different,
but they are all clearly twos.
Fairly good digit recognition can be achieved by using the 10 recognition networks to find
10 motor programs for a test image and then scoring each motor program by its squared
error in reconstructing the image. The 10 scores are then fed into a softmax classifier.
Recognition can be improved by using PCA to model the distribution of motor trajectories
for each class and using the distance of a motor trajectory from the relevant PCA hyperplane
as an additional score.
Each recognition network is solving a difficult global search problem in which the correct
motor program must be found by a single, "open-loop" pass through the network. More
accurate recognition can be achieved by using this open-loop global search to initialize an
iterative, closed-loop local search which uses the error in the reconstructed image to revise the motor program. This requires reconstruction errors in pixel space to be mapped to
corrections in the space of spring stiffnesses. We cannot backpropagate errors through the
generative model because it is just a hand-coded computer program. So we learn "generative" networks, one per digit class, that emulate the generator. After learning, backpropagation through these generative networks is used to convert pixel reconstruction errors into
stiffness corrections.
Our final system gives 1.82% error on the MNIST test set which is similar to the 1.7%
achieved by a very different generative approach [5] but worse than the 1.53% produced
by the best backpropagation networks or the 1.4% produced by support vector machines
[6]. It is much worse than the 0.4% produced by convolutional neural networks that use
cleverly enhanced training sets [7]. Recognition of test images is quite slow because it uses
ten different recognition networks followed by iterative local search. There is, however, a
much more efficient way to make use of our ability to extract motor programs. They can
be treated as additional output labels when using backpropagation to train a single, multilayer, discriminative neural network. These additional labels act as a very informative
regularizer that reduces the error rate from 1.53% to 1.27% in a network with two hidden
layers of 500 units each. This is a new method of improving performance that can be used
in conjunction with other tricks such as preprocessing the images, enhancing the training
set or using convolutional neural nets [8, 7].
2 A simple generative model for drawing digits
The generative model uses two pairs of opposing springs at right angles. One end of each
spring is attached to a frictionless horizontal or vertical rail that is 39 pixels from the center
of the image. The other end is attached to a "pen" that has significant mass. The springs
themselves are weightless and have zero rest length. The pen starts at the equilibrium
position defined by the initial stiffnesses of the four springs. It then follows a trajectory
that is determined by the stiffness of each spring at each of the 16 subsequent time steps
in the motor program. The mass is large compared with the rate at which the stiffnesses
change, so the system is typically far from equilibrium as it follows the smooth trajectory.
On each time step, the momentum is multiplied by 0.9 to simulate viscosity. A coarse-grain
trajectory is computed by using one step of forward integration for each time step in the
motor program, so it contains 17 points. The code is at www.cs.toronto.edu/? hinton/code.
Figure 2: The training data for each class-specific recognition network is produced by
adding noise to motor programs that are inferred from MNIST images using the current
parameters of the recognition network. To initiate this process, the biases of the output
units are set by hand so that they represent a prototypical motor program for the class.
Given a coarse-grain trajectory, we need a way of assigning an intensity to each pixel. We
tried various methods until we hand-evolved one that was able to reproduce the MNIST images fairly accurately, but we suspect that many other methods would be just as good. For
each point on the coarse trajectory, we share two units of ink between the four closest
pixels using bilinear interpolation. We also use linear interpolation to add three fine-grain
trajectory points between every pair of coarse-grain points. These fine-grain points also
contribute ink to the pixels using bilinear interpolation, but the amount of ink they contribute is zero if they are less than one pixel apart and rises linearly to the same amount as
the coarse-grain points if they are more than two pixels apart. This generates a thin skeleton
with a fairly uniform ink density. To flesh out the skeleton, we use two "ink parameters",
a, b, to specify a 3 × 3 kernel of the form

b(1 + a) [ a/12, a/6, a/12 ; a/6, 1 − a, a/6 ; a/12, a/6, a/12 ],

which is convolved with the image four times. Finally, the pixel intensities are clipped to lie in
the interval [0,1]. The matlab code is at www.cs.toronto.edu/~hinton/code. The values of
2a and b/1.5 are additional, logistic outputs of the recognition networks¹.
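A sketch of this flesh-out step follows; it is ours, SciPy's convolve2d stands in for whatever convolution the original Matlab code used, and the parameter values are examples only.

```python
import numpy as np
from scipy.signal import convolve2d

# Flesh out a skeleton image with the 3x3 ink kernel described above,
# applied four times, then clip intensities to [0, 1].

def flesh_out(skeleton, a=0.4, b=0.6, n_passes=4):
    kernel = b * (1 + a) * np.array([[a/12, a/6, a/12],
                                     [a/6, 1 - a, a/6],
                                     [a/12, a/6, a/12]])
    img = skeleton
    for _ in range(n_passes):
        img = convolve2d(img, kernel, mode="same")
    return np.clip(img, 0.0, 1.0)
```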
3 Training the recognition networks
The obvious way to learn a recognition network is to use a training set in which the inputs
are images and the target outputs are the motor programs that were used to generate those
images. If we knew the distribution over motor programs for a given digit class, we could
easily produce such a set by running the generator. Unfortunately, the distribution over
motor programs is exactly what we want to learn from the data, so we need a way to train
¹ We can add all sorts of parameters to the hand-coded generative model and then get the recognition networks to learn to extract the appropriate values for each image. The global mass and viscosity
as well as the spacing of the rails that hold the springs can be learned. We can even implement affine-like transformations by attaching the four springs to endpoints whose eight coordinates are given by
the recognition networks. These extra parameters make the learning slower and, for the normalized
digits, they do not improve discrimination, probably because they help the wrong digit models as
much as the right one.
the recognition network without knowing this distribution in advance. Generating scribbles
from random motor programs will not work because the capacity of the network will be
wasted on irrelevant images that are far from the real data.
Figure 2 shows how a single, prototype motor program can be used to initialize a learning
process that creates its own training data. The prototype consists of a sequence of 4 ? 17
spring stiffnesses that are used to set the biases on 68 of the 70 logistic output units of
the recognition net. If the weights coming from the 400 hidden units are initially very
small, the recognition net will then output a motor program that is a close approximation
to the prototype, whatever the input image. Some random noise is then added to this motor
program and it is used to generate a training image. So initially, all of the generated training
images are very similar to the one produced by the prototype. The recognition net will
therefore devote its capacity to modeling the way in which small changes in these images
map to small changes in the motor program. Images in the MNIST training set that are
close to the prototype will then be given their correct motor programs. This will tend to
stretch the distribution of motor programs produced by the network along the directions that
correspond to the manifold on which the digits lie. As time goes by, the generated training
set will expand along the manifold for that digit class until all of the MNIST training images
of that class are well modelled by the recognition network.
It takes about 10 hours in matlab on a 3 GHz Xeon to train each recognition network.
We use minibatches of size 100, momentum of 0.9, and adaptive learning rates on each
connection that increase additively when the sign of the gradient agrees with the sign of
the previous weight change and decrease multiplicatively when the signs disagree [9]. The
net is generating its own training data, so the objective function is always changing which
makes it inadvisable to use optimization methods that go as far as they can in a carefully
chosen direction. Figures 3 and 4 show some examples of how well the recognition nets
perform after training. Nearly all models achieve an average squared pixel error of less
than 15 per image on their validation set (pixel intensities are between 0 and 1 with a
preponderance of extreme values). The inferred motor programs are clearly good enough
to capture the diverse handwriting styles in the data. They are not good enough, however, to
give classification performance comparable to the state-of-the-art on the MNIST database.
So we added a series of enhancements to the basic system to improve the classification
accuracy.
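The per-connection adaptive gains of [9] used in the training procedure above can be sketched as follows; this is our sketch, and the increment, decay, and clipping constants are illustrative rather than the values used here.

```python
import numpy as np

# Per-connection adaptive gains in the style of [9]: grow a gain additively
# when successive weight changes agree in sign, shrink it multiplicatively
# when they disagree.

def update(weights, gains, grad, prev_step, lr=0.01, momentum=0.9):
    step = momentum * prev_step - lr * gains * grad
    agree = np.sign(step) == np.sign(prev_step)
    gains = np.where(agree, gains + 0.05, gains * 0.95)
    gains = np.clip(gains, 0.1, 10.0)
    return weights + step, gains, step
```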
4 Enhancements to the basic system
Extra strokes in ones and sevens. One limitation of the basic system is that it draws digits
using only a single stroke (i.e. the trajectory is a single, unbroken curve). But when people
draw digits, they often add extra strokes to them. Two of the most common examples are
the dash at the bottom of ones, and the dash through the middle of sevens (see examples in
figure 5). About 2.2% of ones and 13% of sevens in the MNIST training set are dashed and
not modelling the dashes reduces classification accuracy significantly. We model dashed
ones and sevens by augmenting their basic motor programs with another motor program to
draw the dash. For example, a dashed seven is generated by first drawing an ordinary seven
using the motor program computed by the seven model, and then drawing the dash with a
motor program computed by a separate neural network that models only dashes.
Dashes in ones and sevens are modeled with two different networks. Their training proceeds the same way as with the other models, except now there are only 50 hidden units and
the training set contains only the dashed cases of the digit. (Separating these cases from
the rest of the MNIST training set is easy because they can be quickly spotted by looking
at the difference between the images and their reconstructions by the dashless digit model.)
The net takes the entire image of a digit as input, and computes the motor program for
just the dash. When reconstructing an unlabelled image as say, a seven, we compute both
Figure 3: Examples of validation set images reconstructed by their corresponding model.
In each case the original image is on the left and the reconstruction is on the right. Superimposed on the original image is the pen trajectory.
the dashed and dashless versions of seven and pick the one with the lower squared pixel
error to be that image's reconstruction as a seven. Figure 5 shows examples of images
reconstructed using the extra stroke.
Local search. When reconstructing an image in its own class, a digit model often produces
a sensible, overall approximation of the image. However, some of the finer details of the
reconstruction may be slightly wrong and need to be fixed up by an iterative local search
that adjusts the motor program to reduce the reconstruction error. We first approximate the
graphics model with a neural network that contains a single hidden layer of 500 logistic
units. We train one such generative network for each of the ten digits and for the dashed
version of ones and sevens (for a total of 12 nets). The motor programs used for training
are obtained by adding noise to the motor programs inferred from the training data by
the relevant, fully trained recognition network. The images produced from these motor
programs by the graphics model are used as the targets for the supervised learning of each
generative network. Given these targets, the weight updates are computed in the same way
as for the recognition networks.
Figure 4: To model 4's we use a single smooth trajectory, but turn off the ink for timesteps
9 and 10. For images in which the pen does not need to leave the paper, the recognition net
finds a trajectory in which points 8 and 11 are close together so that points 9 and 10 are not
needed. For 5's we leave the top until last and turn off the ink for timesteps 13 and 14.
Figure 5: Examples of dashed ones and sevens reconstructed using a second stroke. The
pen trajectory for the dash is shown in blue, superimposed on the original image.
Figure 6: An example of how local search improves the detailed registration of the trajectory found by the correct model. The squared pixel error falls from 33.8 initially to 15.2, 10.5 and 9.3 after 10, 20 and 30 iterations; after 30 iterations it is less than a third of its initial value.
Once the generative network is trained, we can use it to iteratively improve the initial motor
program computed by the recognition network for an image. The main steps in one iteration
are: 1) compute the error between the image and the reconstruction generated from the
current motor program by the graphics model; 2) backpropagate the reconstruction error
through the generative network to calculate its gradient with respect to the motor program;
3) compute a new motor program by taking a step along the direction of steepest descent
plus 0.5 times the previous step. Figure 6 shows an example of how local search improves
the reconstruction by the correct model. Local search is usually less effective at improving
the fits of the wrong models, so it eliminates about 20% of the classification errors on the
validation set.
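The three steps above amount to a short refinement loop. In the sketch below (ours), gen_net stands for the learned generative network that emulates the graphics model; its two methods are an assumed interface, not the actual code.

```python
import numpy as np

# Closed-loop refinement of a motor program (steps 1-3 above): steepest
# descent on reconstruction error with 0.5 momentum, as described in the text.

def refine(motor, image, gen_net, n_iters=30, lr=0.1, momentum=0.5):
    prev = np.zeros_like(motor)
    for _ in range(n_iters):
        recon = gen_net.forward(motor)            # step 1: reconstruct
        err = recon - image
        grad = gen_net.backprop_to_input(err)     # step 2: gradient w.r.t. motor program
        prev = -lr * grad + momentum * prev       # step 3: descent plus 0.5 * previous step
        motor = motor + prev
    return motor
```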
PCA model of the image residuals. The sum of squared pixel errors is not the best way
of comparing an image with its reconstruction, because it treats the residual pixel errors
as independent and zero-mean Gaussian distributed, which they are not. By modelling the
structure in the residual vectors, we can get a better estimate of the conditional probability
of the image given the motor program. For each digit class, we construct a PCA model of
the image residual vectors for the training images. Then, given a test image, we project
the image residual vector produced by each inferred motor program onto the relevant PCA
hyperplane and compute the squared distance between the residual and its projection. This
gives ten scores for the image that measure the quality of its reconstructions by the digit
models. We don't discard the old sum of squared pixel errors as they are still useful for
classifying most images correctly. Instead, all twenty scores are used as inputs to the
classifier, which decides how to combine both types of scores to achieve high classification
accuracy.
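Both this residual score and the trajectory prior described next reduce to a squared distance from a class-specific PCA hyperplane; the sketch below is ours, with illustrative dimensions and component count.

```python
import numpy as np

# Score a vector (an image residual, or a trajectory) by its squared distance
# from a class-specific PCA hyperplane fitted to training vectors.

class PCAScore:
    def __init__(self, vectors, n_components=20):
        self.mean = vectors.mean(axis=0)
        X = vectors - self.mean
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        self.basis = vt[:n_components]           # rows span the hyperplane

    def score(self, v):
        d = v - self.mean
        proj = self.basis.T @ (self.basis @ d)   # projection onto the hyperplane
        return float(((d - proj) ** 2).sum())    # squared distance from it
```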
PCA model of trajectories. Classifying an image by comparing its reconstruction errors
for the different digit models tacitly relies on the assumption that the incorrect models will
reconstruct the image poorly. Since the models have only been trained on images in their
Figure 7: Reconstruction of a two image by the two model (left box: squared error = 24.9, shape prior score = 31.5) and by the three model (right box: squared error = 15.0, shape prior score = 104.2), with the pen trajectory superimposed on the original image. The three model sharply bends the bottom of its trajectory to better explain the ink, but the trajectory prior for three penalizes it with a high score. The two model has a higher squared error, but a much lower prior score, which allows the classifier to correctly label the image.
own class, they often do reconstruct images from other classes poorly, but occasionally
they fit an image from another class well. For example, figure 7 shows how the three model
reconstructs a two image better than the two model by generating a highly contorted three.
This problem becomes even more pronounced with local search which sometimes contorts
the wrong model to fit the image really well. The solution is to learn a PCA model of the
trajectories that a digit model infers from images in its own class. Given a test image, the
trajectory computed by each digit model is scored by its squared distance from the relevant
PCA hyperplane. These 10 "prior" scores are then given to the classifier along with the
20 "likelihood" scores described above. The prior scores eliminate many classification
mistakes such as the one in figure 7.
5 Classification results
To classify a test image, we apply multinomial logistic regression to the 30 scores, i.e.
we use a neural network with no hidden units, 10 softmax output units and a cross-entropy
error. The net is trained by gradient descent using the scores for the validation set images.
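A plain gradient-descent version of this classifier is sketched below; the sketch is ours, and the learning rate and iteration count are illustrative.

```python
import numpy as np

# Softmax (multinomial logistic) classifier over the 30 per-image scores
# (20 likelihood-style scores plus 10 trajectory-prior scores).

def train_softmax(S, y, n_classes=10, lr=0.1, n_iters=500):
    """S: (n_images, 30) score matrix; y: integer class labels."""
    W = np.zeros((S.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(n_iters):
        logits = S @ W + b
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / len(S)              # gradient of the cross-entropy error
        W -= lr * (S.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b
```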
To illustrate the gain in classification accuracy achieved by the enhancements explained
above, table 1 gives the percent error on the validation set as each enhancement is added to
the system. Together, the enhancements almost halve the number of mistakes.
Enhancements    Validation set % error    Test set % error
None            4.43
1               3.84
1, 2            3.01
1, 2, 3         2.67
1, 2, 3, 4      2.28                      1.82

Table 1: The gain in classification accuracy on the validation set as the following enhancements are added: 1) extra stroke for dashed ones and sevens, 2) local search, 3) PCA model of image residual, and 4) PCA trajectory prior. To avoid using the test set for model selection, the performance on the official test set was only measured for the final system.
6 Discussion
After training a single neural network to output both the class label and the motor program
for all classes (as described in section 1) we tried ignoring the label output and classifying
the test images by using the cost, under 10 different PCA models, of the trajectory defined
by the inferred motor program. Each PCA model was fitted to the trajectories extracted
from the training images for a given class. This gave 1.80% errors which is as good as the
1.82% we got using the 10 separate recognition networks and local search. This is quite
surprising because the motor programs produced by the single network were simplified
to make them all have the same dimensionality and they produced significantly poorer
reconstructions. By only using the 10 digit-specific recognition nets to create the motor
programs for the training data, we get much faster recognition of test data because at test
time we can use a single recognition network for all classes. It also means we do not
need to trade-off prior scores against image residual scores because there is only one image
residual.
The ability to extract motor programs could also be used to enhance the training set. [7]
shows that error rates can be halved by using smooth vector distortion fields to create extra
training data. They argue that these fields simulate "uncontrolled oscillations of the hand
muscles dampened by inertia". Motor noise may be better modelled by adding noise to
an actual motor program as shown in figure 1. Notice that this produces a wide variety of
non-blurry images and it can also change the topology.
The techniques we have used for extracting motor programs from digit images may be applicable to speech. There are excellent generative models that can produce almost perfect
speech if they are given the right formant parameters [10]. Using one of these generative
models we may be able to train a large number of specialized recognition networks to extract formant parameters from speech without requiring labeled training data. Once this has
been done, labeled data would be available for training a single feed-forward network that
could recover accurate formant parameters which could be used for real-time recognition.
Acknowledgements We thank Steve Isard, David MacKay and Allan Jepson for helpful discussions. This research was funded by NSERC, CFI and OIT. GEH is a fellow of the Canadian Institute
for Advanced Research and holds a Canada Research Chair in machine learning.
References
[1] D. M. MacKay. Mindlike behaviour in artefacts. British Journal for Philosophy of Science,
2:105-121, 1951.
[2] M. Halle and K. Stevens. Speech recognition: A model and a program for research. IRE
Transactions on Information Theory, IT-8(2):155-159, 1962.
[3] M. Eden. Handwriting and pattern recognition. IRE Transactions on Information Theory,
IT-8(2):160-166, 1962.
[4] J. M. Hollerbach. An oscillation theory of handwriting. Biological Cybernetics, 39:139-156,
1981.
[5] G. Mayraz and G. E. Hinton. Recognizing hand-written digits using hierarchical products of
experts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:189-197, 2001.
[6] D. Decoste and B. Schoelkopf. Training invariant support vector machines. Machine Learning,
46:161-190, 2002.
[7] P. Y. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks
applied to visual document analysis. In International Conference on Document Analysis and
Recognition (ICDAR), IEEE Computer Society, Los Alamitos, pages 958-962, 2003.
[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[9] R. A. Jacobs. Increased rates of convergence through learning rate adaptation. Technical
Report UM-CS-1987-117, University of Massachusetts, Amherst, MA, 1987.
[10] W. Holmes, J. Holmes, and M. Judd. Extension of the bandwidth of the JSRU parallel-formant
synthesizer for high quality synthesis of male and female speech. In Proceedings of ICASSP 90
(1), pages 313-316, 1990.
Combining Graph Laplacians for
Semi-Supervised Learning
Andreas Argyriou,
Mark Herbster,
Massimiliano Pontil
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, England, UK
{a.argyriou, m.herbster, m.pontil}@cs.ucl.ac.uk
Abstract
A foundational problem in semi-supervised learning is the construction
of a graph underlying the data. We propose to use a method which optimally combines a number of differently constructed graphs. For each
of these graphs we associate a basic graph kernel. We then compute
an optimal combined kernel. This kernel solves an extended regularization problem which requires a joint minimization over both the data and
the set of graph kernels. We present encouraging results on different
OCR tasks where the optimal combined kernel is computed from graphs
constructed with a variety of distances functions and the ?k? in nearest
neighbors.
1
Introduction
Semi-supervised learning has received significant attention in machine learning in recent
years, see, for example, [2, 3, 4, 8, 9, 16, 17, 18] and references therein. The defining insight
of semi-supervised methods is that unlabeled data may be used to improve the performance
of learners in a supervised task. One of the key semi-supervised learning methods builds
on the assumption that the data is situated on a low dimensional manifold within the ambient space of the data and that this manifold can be approximated by a weighted discrete
graph whose vertices are identified with the empirical (labeled and unlabeled) data [3, 17].
Graph construction consists of two stages, first selection of a distance function and then
application of it to determine the graph's edges (or weights thereof). For example, in this
paper we consider distances between images based on the Euclidean distance, Euclidean
distance combined with image transformations, and the related tangent distance [6]; we
determine the edge set of the graph with k-nearest neighbors. Another common choice is
to weight edges by a decreasing function of the distance d, such as $e^{-\sigma d^2}$.
Although a surplus of unlabeled data may improve the quality of the empirical approximation of the manifold (via the graph), leading to improved performance, practical experience
with these methods indicates that their performance significantly depends on how the graph
is constructed. Hence, the model selection problem must consider both the selection of the
distance function and the parameters k or $\sigma$ used in the graph building process described
above. A diversity of methods have been proposed for graph construction; in this paper
we do not advocate selecting a single graph but, rather, we propose combining a number
of graphs. Our solution implements a method based on regularization which builds upon
the work in [1]. For a given dataset each combination of distance functions and edge set
specifications from the distance will lead to a specific graph. Each of these graphs may then
be associated with a kernel. We then apply regularization to select the best convex combination of these kernels; the minimizing function will trade off its fit to the data against
its norm. What is unique about this regularization is that the minimization is not over a
single kernel space but rather over a space corresponding to all convex combinations of
kernels. Thus all data (labeled vertices) may be conserved for training rather than reduced
by cross-validation which is not an appealing option when the number of labeled vertices
per class is very small.
Figure 3 in Section 4 illustrates our algorithm on a simple example. There, three different
distances for 400 images of the digits "six" and "nine" are depicted, namely, the Euclidean
distance, a distance invariant under small centered image rotations from [-10°, 10°], and
a distance invariant under rotations from [-180°, 180°]. Clearly, the last distance is problematic as sixes become similar to nines. The performance of our graph regularization
learning algorithm discussed in Section 2.2 with these distances is reported below each
plot; as expected, this performance is much lower in the case that the third distance is used.
The paper is constructed as follows. In Section 2 we discuss how regularization may be
applied to single graphs. First, we review regularization in the context of reproducing kernel Hilbert spaces (Section 2.1); then in Section 2.2 we specialize our discussion to Hilbert
spaces of functions defined over a graph. Here we review the (normalized) Laplacian of
the graph and a kernel which is the pseudoinverse of the graph Laplacian. In Section 3 we
detail our algorithm for learning an optimal convex combination of Laplacian kernels. Finally, in Section 4 we present experiments on the USPS dataset with our algorithm trained
over different classes of Laplacian kernels.
2
Background on graph regularization
In this section we review graph regularization [2, 9, 14] from the perspective of reproducing
kernel Hilbert spaces, see e.g. [12].
2.1
Reproducing kernel Hilbert spaces
Let $X$ be a set and $K : X \times X \to \mathbb{R}$ a kernel function. We say that $\mathcal{H}_K$ is a reproducing
kernel Hilbert space (RKHS) of functions $f : X \to \mathbb{R}$ if (i): for every $x \in X$, $K(x, \cdot) \in \mathcal{H}_K$,
and (ii): the reproducing kernel property $f(x) = \langle f, K(x, \cdot)\rangle_K$ holds for every $f \in \mathcal{H}_K$
and $x \in X$, where $\langle \cdot, \cdot \rangle_K$ is the inner product on $\mathcal{H}_K$. In particular, (ii) tells us that for
$x, t \in X$, $K(x, t) = \langle K(x, \cdot), K(t, \cdot)\rangle_K$, implying that the $p \times p$ matrix $(K(t_i, t_j) : i, j \in \mathbb{N}_p)$
is symmetric and positive semi-definite for any set of inputs $\{t_i : i \in \mathbb{N}_p\} \subseteq X$,
$p \in \mathbb{N}$, where we use the notation $\mathbb{N}_p := \{1, \dots, p\}$.
Regularization in an RKHS learns a function $f \in \mathcal{H}_K$ on the basis of available input/output
examples $\{(x_i, y_i) : i \in \mathbb{N}_\ell\}$ by solving the variational problem

    $E_\gamma(K) := \min\left\{ \sum_{i=1}^{\ell} V(y_i, f(x_i)) + \gamma \|f\|_K^2 : f \in \mathcal{H}_K \right\}$    (2.1)

where $V : \mathbb{R} \times \mathbb{R} \to [0, \infty)$ is a loss function and $\gamma$ a positive parameter. Moreover, if $f$
is a solution to problem (2.1) then it has the form

    $f(x) = \sum_{i=1}^{\ell} c_i K(x_i, x), \quad x \in X$    (2.2)

for some real vector of coefficients $c = (c_i : i \in \mathbb{N}_\ell)^\top$, see, for example, [12], where $\top$
denotes transposition. This vector can be found by replacing $f$ by the right hand side of
equation (2.2) in equation (2.1) and then optimizing with respect to $c$. However, in many
practical situations it is more convenient to compute $c$ by solving the dual problem to (2.1),
namely

    $-E_\gamma(K) := \min\left\{ \frac{1}{4\gamma} c^\top \tilde{K} c + \sum_{i=1}^{\ell} V^*(y_i, c_i) : c \in \mathbb{R}^{\ell} \right\}$    (2.3)

where $\tilde{K} = (K(x_i, x_j))_{i,j=1}^{\ell}$ and the function $V^* : \mathbb{R} \times \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ is the conjugate
of the loss function $V$, which is defined, for every $z, \zeta \in \mathbb{R}$, as $V^*(z, \zeta) := \sup\{\zeta\eta -
V(z, \eta) : \eta \in \mathbb{R}\}$; see, for example, [1] for a discussion. The choice of the loss function
$V$ leads to different learning methods, among which the most prominent are square loss
regularization and support vector machines, see, for example, [15].
2.2
Graph regularization
Let $G$ be an undirected graph with $m$ vertices and an $m \times m$ adjacency matrix $A$ such that
$A_{ij} = 1$ if there is an edge connecting vertices $i$ and $j$ and zero otherwise¹. The graph
Laplacian $L$ is the $m \times m$ matrix defined as $L := D - A$, where $D = \mathrm{diag}(d_i : i \in \mathbb{N}_m)$
and $d_i$ is the degree of vertex $i$, that is $d_i = \sum_{j=1}^{m} A_{ij}$.
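For concreteness, here is a minimal NumPy sketch of these quantities (function names are ours); it also includes the normalized Laplacian and the pseudoinverse kernel used later in this section:

```python
import numpy as np

def graph_laplacian(adjacency, normalized=False):
    """L = D - A for an undirected graph; optionally the normalized
    variant D^{-1/2} L D^{-1/2} used in the experiments."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    if normalized:
        # Guard against isolated vertices (degree zero).
        inv_sqrt = np.where(degrees > 0, 1.0 / np.sqrt(degrees), 0.0)
        laplacian = laplacian * np.outer(inv_sqrt, inv_sqrt)
    return laplacian

def laplacian_kernel(adjacency, normalized=False):
    """Graph kernel K = L^+, the Moore-Penrose pseudoinverse of L."""
    return np.linalg.pinv(graph_laplacian(adjacency, normalized))
```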
We identify the linear space of real-valued functions defined on the graph with $\mathbb{R}^m$ and
introduce on it the semi-inner product

    $\langle u, v \rangle := u^\top L v, \qquad u, v \in \mathbb{R}^m.$

The induced semi-norm is $\|v\| := \sqrt{\langle v, v \rangle}$, $v \in \mathbb{R}^m$. It is a semi-norm since $\|v\| = 0$ if
$v$ is a constant vector, as can be verified by noting that $\|v\|^2 = \frac{1}{2}\sum_{i,j=1}^{m} (v_i - v_j)^2 A_{ij}$.
We recall that $G$ has $r$ connected components if and only if $L$ has $r$ eigenvectors with
zero eigenvalues. Those eigenvectors are piece-wise constant on the connected components of the graph. In particular, $G$ is connected if and only if the constant vector is the
only eigenvector of $L$ with zero eigenvalue [5]. We let $\{\sigma_i, u_i\}_{i=1}^{m}$ be a system of eigenvalues/vectors of $L$, where the eigenvalues are non-decreasing in order, $\sigma_i = 0$ for $i \in \mathbb{N}_r$,
and define the linear subspace $\mathcal{H}(G)$ of $\mathbb{R}^m$ which is orthogonal to the eigenvectors with
zero eigenvalue, that is,

    $\mathcal{H}(G) := \{v : v^\top u_i = 0,\ i \in \mathbb{N}_r\}.$
Within this framework, we wish to learn a function $v \in \mathcal{H}(G)$ on the basis of a set of
labeled vertices. Without loss of generality we assume that the first $\ell \le m$ vertices are
labeled and let $y_1, \dots, y_\ell \in \{-1, 1\}$ be the corresponding labels. Following [2] we prescribe
a loss function $V$ and compute the function $v$ by solving the optimization problem

    $\min\left\{ \sum_{i=1}^{\ell} V(y_i, v_i) + \gamma \|v\|^2 : v \in \mathcal{H}(G) \right\}.$    (2.4)
We note that a similar approach is presented in [17] where v is (essentially) obtained as the
minimal norm interpolant in H(G) to the labeled vertices. The functional (2.4) balances the
error on the labeled points with a smoothness term measuring the complexity of v on the
graph. Note that this last term contains the information of both the labeled and unlabeled
vertices via the graph Laplacian.
1
The ideas we discuss below naturally extend to weighted graphs.
Method (2.4) is a special case of problem (2.1). Indeed, the restriction of the semi-norm $\|\cdot\|$
on $\mathcal{H}(G)$ is a norm. Moreover, the pseudoinverse of the Laplacian, $L^+$, is the reproducing
kernel of $\mathcal{H}(G)$, see, for example, [7] for a proof. This means that for every $v \in \mathcal{H}(G)$ and
$i \in \mathbb{N}_m$ there holds the reproducing kernel property $v_i = \langle L^+_i, v \rangle$, where $L^+_i$ is the $i$-th
column of $L^+$. Hence, by setting $X \equiv \mathbb{N}_m$, $f(i) = v_i$ and $K(i, j) = L^+_{ij}$, $i, j \in \mathbb{N}_m$, we
see that $\mathcal{H}_K \equiv \mathcal{H}(G)$. We note that the above analysis naturally extends to the case that $L$
is replaced by any positive semidefinite matrix. In particular, in our experiments below we
will use the normalized Laplacian matrix given by $D^{-1/2} L D^{-1/2}$.
Typically, problem (2.4) is solved by optimizing over $v = (v_i : i \in \mathbb{N}_m)$. In particular, for
square loss regularization [2] and minimal norm interpolation [17] this requires solving a
square linear system of $m$ and $m - \ell$ equations respectively. On the contrary, in this paper
we use the representer theorem to express $v$ as

    $v_i = \sum_{j=1}^{\ell} L^+_{ij} c_j, \qquad i \in \mathbb{N}_m.$

This approach is advantageous if $L^+$ can be computed off-line because, typically, $\ell \ll m$.
A further advantage of this approach is that multiple problems may be solved with the same
Laplacian kernel. The coefficients $c_i$ are obtained by solving problem (2.3) with $\tilde{K} =
(L^+_{ij})_{i,j=1}^{\ell}$. For example, for square loss regularization the computation of the parameter
vector $c = (c_i : i \in \mathbb{N}_\ell)$ involves solving a linear system of $\ell$ equations, namely

    $(\tilde{K} + \gamma I)c = y.$    (2.5)

3
Learning a convex combination of Laplacian kernels
We now describe our framework for learning with multiple graph Laplacians. We assume
that we are given $n$ graphs $G^{(q)}$, $q \in \mathbb{N}_n$, all having $m$ vertices, with corresponding
Laplacians $L^{(q)}$, kernels $K^{(q)} = (L^{(q)})^+$, Hilbert spaces $\mathcal{H}^{(q)} := \mathcal{H}(G^{(q)})$ and norms
$\|v\|_q^2 := v^\top L^{(q)} v$, $v \in \mathcal{H}^{(q)}$. We propose to learn an optimal convex combination of
graph kernels, that is, we solve the optimization problem

    $\rho = \min\left\{ \sum_{i=1}^{\ell} V(y_i, v_i) + \gamma \|v\|^2_{K(\lambda)} : \lambda \in \Lambda,\ v \in \mathcal{H}_{K(\lambda)} \right\}$    (3.1)

where we have defined the set $\Lambda := \{\lambda \in \mathbb{R}^n : \lambda_q \ge 0,\ \sum_{q=1}^{n} \lambda_q = 1\}$ and, for each
$\lambda \in \Lambda$, the kernel $K(\lambda) := \sum_{q=1}^{n} \lambda_q K^{(q)}$. The above problem is motivated by observing
that

    $\rho \le \min\left\{ E_\gamma(K^{(q)}) : q \in \mathbb{N}_n \right\}.$

Hence an optimal convex combination of kernels attains an objective value no larger than that of
any individual kernel, motivating the expectation of improved performance. Furthermore,
large values of the components of the minimizing $\lambda$ identify the most relevant kernels.
Problem (3.1) is a special case of the problem of jointly minimizing functional (2.1) over
$v \in \mathcal{H}_K$ and $K \in \mathrm{co}(\mathcal{K})$, the convex hull of kernels in a prescribed set $\mathcal{K}$. This problem is discussed in detail in [1, 12], see also [10, 11] where the case that $\mathcal{K}$ is finite is
considered. Practical experience with this method [1, 10, 11] indicates that it can enhance
the performance of the learning algorithm and, moreover, it is computationally efficient to
solve. When solving problem (3.1) it is important to require that the kernels $K^{(q)}$ satisfy
a normalization condition, such as that they all have the same trace or the same Frobenius
norm, see [10] for a discussion.
Initialization: Choose $K^{(1)} \in \mathrm{co}\{K^{(q)} : q \in \mathbb{N}_n\}$.
For t = 1 to T:
    1. compute $c^{(t)}$ to be the solution of problem (2.3) with $K = K^{(t)}$;
    2. find $q \in \mathbb{N}_n$ such that $(c^{(t)}, K^{(q)} c^{(t)}) > (c^{(t)}, K^{(t)} c^{(t)})$. If no such $q$ exists, terminate;
    3. compute $\hat{p} = \operatorname{argmin}\{E_\gamma(p K^{(q)} + (1-p) K^{(t)}) : p \in (0, 1]\}$;
    4. set $K^{(t+1)} = \hat{p} K^{(q)} + (1 - \hat{p}) K^{(t)}$.

Figure 1: Algorithm to compute an optimal convex combination of kernels in the set
$\mathrm{co}\{K^{(q)} : q \in \mathbb{N}_n\}$.
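Below is a hedged NumPy sketch of this algorithm, specialized to the square loss so that step 1 reduces to solving equation (2.5); the exact line search of step 3 is replaced by a grid search, step 2 picks the maximizing q, and all names are ours:

```python
import numpy as np

def solve_square_loss(K, y, gamma):
    """Square-loss coefficients: solve (K + gamma*I) c = y, equation (2.5)."""
    return np.linalg.solve(K + gamma * np.eye(len(y)), y)

def objective(K, y, gamma):
    """Square-loss value of problem (2.1) at the optimal coefficients."""
    c = solve_square_loss(K, y, gamma)
    v = K @ c  # fitted values on the labeled vertices
    return np.sum((y - v) ** 2) + gamma * c @ v

def combine_kernels(kernels, y, gamma, T=100):
    """Greedy convex combination of the labeled submatrices in `kernels`."""
    grid = np.linspace(0.01, 1.0, 100)
    weights = np.full(len(kernels), 1.0 / len(kernels))  # start from the average
    K = sum(w * Kq for w, Kq in zip(weights, kernels))
    for _ in range(T):
        c = solve_square_loss(K, y, gamma)
        scores = np.array([c @ Kq @ c for Kq in kernels])
        q = int(np.argmax(scores))
        if scores[q] <= c @ K @ c:  # step 2: no such q exists, terminate
            break
        values = [objective(p * kernels[q] + (1 - p) * K, y, gamma) for p in grid]
        p = grid[int(np.argmin(values))]   # step 3 (approximate line search)
        K = p * kernels[q] + (1 - p) * K   # step 4
        weights *= (1 - p)
        weights[q] += p
    return weights, K
```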
Using the dual problem formulation discussed above (see equation (2.3)) in the inner minimum in (3.1) we can rewrite this problem as

    $-\rho = \max\left\{ \min\left\{ \frac{1}{4\gamma} c^\top \tilde{K}(\lambda) c + \sum_{i=1}^{\ell} V^*(y_i, c_i) : c \in \mathbb{R}^{\ell} \right\} : \lambda \in \Lambda \right\}.$    (3.2)

The variational problem (3.2) expresses the optimal convex combination of the kernels as
the solution to a saddle point problem. This problem is simpler to solve than the original
problem (3.1) since its objective function is linear in $\lambda$, see [1] for a discussion. Several
algorithms can be used for computing a saddle point $(\hat{c}, \hat{\lambda}) \in \mathbb{R}^{\ell} \times \Lambda$. Here we adapt
an algorithm from [1] which alternately optimizes over $c$ and $\lambda$. For reproducibility of the
algorithm, it is reported in Figure 1. Note that once $\hat{\lambda}$ is computed, $\hat{c}$ is given by a minimizer
of problem (2.3) for $K = K(\hat{\lambda})$. In particular, for square loss regularization this requires
solving equation (2.5) with $\tilde{K} = (K_{ij}(\hat{\lambda}) : i, j \in \mathbb{N}_\ell)$.
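Continuing the sketch above (same hypothetical helpers), predictions on the unlabeled vertices would then follow from the representer expression of Section 2.2:

```python
# Hypothetical usage: K_full is a list of m x m kernels (L^(q))^+,
# and the first ell vertices carry labels y in {-1, +1}.
labeled = slice(0, ell)
sub_kernels = [Kq[labeled, labeled] for Kq in K_full]
weights, K_ll = combine_kernels(sub_kernels, y, gamma=1e-5)

K_combined = sum(w * Kq for w, Kq in zip(weights, K_full))
c = solve_square_loss(K_ll, y, gamma=1e-5)
v = K_combined[:, labeled] @ c     # v_i = sum_j K_ij(lambda) c_j, for all i
predictions = np.sign(v[ell:])     # labels for the unlabeled vertices
```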
4
Experiments
In this section we present our experiments on optical character recognition. We observed
the following. First, the optimal convex combination of kernels computed by our algorithm
is competitive with the best base kernels. Second, by observing the "weights" of the convex
combination we can distinguish the strong from the weak candidate kernels. We proceed
by discussing the details of the experimental design interleaved with our results.
We used the USPS dataset² of 16 × 16 images of handwritten digits with pixel values ranging between -1 and 1. We present the results for 5 pairwise classification tasks of varying
difficulty and for odd vs. even digit classification. For pairwise classification, the training
set consisted of the first 200 images for each digit in the USPS training set and the number
of labeled points was chosen to be 4, 8 or 12 (with equal numbers for each digit). For odd
vs. even digit classification, the training set consisted of the first 80 images per digit in the
USPS training set and the number of labeled points was 10, 20 or 30, with equal numbers
for each digit. Performance was averaged over 30 random selections, each with the same
number of labeled points.
In each experiment, we constructed n = 30 graphs $G^{(q)}$ ($q \in \mathbb{N}_n$) by combining k-nearest
neighbors ($k \in \mathbb{N}_{10}$) with three different distances. Then, n corresponding Laplacians
were computed together with their associated kernels. We chose as the loss function $V$
the square loss. Since kernels obtained from different types of graphs can vary widely, it
was necessary to renormalize them. Hence, we chose to normalize each kernel during the
training process by the Frobenius norm of its submatrix corresponding to the labeled data.
We also observed that similar results were obtained when normalizing with the trace of
this submatrix. The regularization parameter was set to $10^{-5}$ in all algorithms. For convex
minimization, as the starting kernel in the algorithm in Figure 1 we always used the average
of the n kernels, and we set the maximum number of iterations to T = 100.
² Available at: http://www-stat-class.stanford.edu/~tibs/ElemStatLearn/data.html
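As a sketch of the graph-construction step, assuming a precomputed pairwise distance matrix (the symmetrization rule below, keeping an edge when either endpoint is among the other's k nearest neighbors, is our assumption; the paper does not specify it):

```python
import numpy as np

def knn_adjacency(distances, k):
    """Unweighted k-nearest-neighbor adjacency from an m x m distance matrix."""
    m = distances.shape[0]
    adjacency = np.zeros((m, m))
    for i in range(m):
        order = np.argsort(distances[i])
        neighbors = [j for j in order if j != i][:k]
        adjacency[i, neighbors] = 1
    return np.maximum(adjacency, adjacency.T)  # symmetrize
```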
Task \ Labels     Euclidean (10 kernels)   Transf. (10 kernels)   Tangent dist. (10 kernels)   All (30 kernels)
                  1%     2%     3%         1%     2%     3%       1%     2%     3%             1%     2%     3%
1 vs. 7           1.55   1.53   1.50       1.45   1.45   1.38     1.01   1.00   1.00           1.28   1.24   1.20
  (std. dev.)     0.08   0.05   0.15       0.10   0.11   0.12     0.00   0.09   0.11           0.28   0.27   0.22
2 vs. 3           3.08   3.34   3.38       0.80   0.85   0.82     0.73   0.19   0.03           0.79   0.25   0.10
  (std. dev.)     0.85   1.21   1.29       0.40   0.38   0.32     0.93   0.51   0.09           0.93   0.61   0.21
2 vs. 7           4.46   4.04   3.56       3.27   2.92   2.96     2.95   2.30   2.14           3.51   2.54   2.41
  (std. dev.)     1.17   1.21   0.82       1.16   1.26   1.08     1.79   0.76   0.53           1.92   0.97   0.89
3 vs. 8           7.33   7.30   7.03       6.98   6.87   6.50     4.43   4.22   3.96           4.80   4.32   4.20
  (std. dev.)     1.67   1.49   1.43       1.57   1.77   1.78     1.21   1.36   1.25           1.57   1.46   1.53
4 vs. 7           2.90   2.64   2.25       1.81   1.82   1.69     0.88   0.90   0.90           1.04   1.14   1.13
  (std. dev.)     0.77   0.78   0.77       0.26   0.42   0.45     0.17   0.20   0.20           0.37   0.42   0.39

Task \ Labels     10     20     30         10     20     30       10     20     30             10     20     30
Odd vs. Even      18.6   15.5   13.4       15.7   11.7   8.52     14.66  10.50  8.38           17.07  10.98  8.74
  (std. dev.)     3.98   2.40   2.67       4.40   3.14   1.32     4.37   2.30   1.90           4.38   2.61   2.39

Table 1: Misclassification error percentage (top) and standard deviation (bottom) for the
best convex combination of kernels on different handwritten digit recognition tasks, using
different distances. See text for description.
Table 1 shows the results obtained using three distances combined with k-NN ($k \in \mathbb{N}_{10}$).
The first distance is the Euclidean distance between images. The second method is
transformation, where the distance between two images is given by the smallest Euclidean
distance between any pair of transformed images as determined by applying a number of
affine transformations and a thickness transformation³; see [6] for more information. The
third distance is tangent distance, as described in [6], which is a first-order approximation
to the above transformations. For the first three columns in the table the Euclidean distance
was used, for columns 4-6 the image transformation distance was used, for columns 7-9
the tangent distance was used. Finally, in the last three columns all three methods were
jointly compared. As the results indicate, when combining different types of kernels, the
algorithm tends to select the most effective ones (in this case the tangent distance kernels
and to a lesser degree the transformation distance kernels, which did not work very well
because of the Matlab optimization routine we used). We also noted that within each of
the methods the performance of the convex combination is comparable to that of the best
kernels. Figure 2 reports the weight of each individual kernel learned by our algorithm
when 2% labels are used in the pairwise tasks and 20 labels are used for odd vs. even.
With the exception of the easy 1 vs. 7 task, the large weights are associated with the
graphs/kernels built with the tangent distance.
The effectiveness of our algorithm in selecting the good graphs/kernels is better demonstrated in Figure 3, where the Euclidean and the transformation kernels are combined with
a "low-quality" kernel. This "low-quality" kernel is induced by considering distances invariant over rotation in the range [-180°, 180°], so that the image of a 6 can easily have
a small distance from an image of a 9; that is, if $x$ and $t$ are two images and $T_\theta(x)$ is the
image obtained by rotating $x$ by $\theta$ degrees, we set

    $d(x, t) = \min\{\|T_\theta(x) - T_{\theta'}(t)\| : \theta, \theta' \in [-180°, 180°]\}.$

³ This distance was approximated using Matlab's constrained minimization function.
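A brute-force sketch of this rotation-invariant distance, searching the two angles on a coarse grid (an assumption on our part; the paper used a constrained optimizer for its transformation distances instead):

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_invariant_distance(x, t, step=10):
    """min over theta, theta' in [-180, 180] of ||T_theta(x) - T_theta'(t)||,
    with both angles restricted to a grid of spacing `step` degrees."""
    angles = np.arange(-180, 181, step)
    rotated_t = [rotate(t, a, reshape=False, mode="nearest") for a in angles]
    best = np.inf
    for a in angles:
        xr = rotate(x, a, reshape=False, mode="nearest")
        for tr in rotated_t:
            best = min(best, float(np.linalg.norm(xr - tr)))
    return best
```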
The figure shows the distance matrix on the set of labeled and unlabeled data for the Euclidean, transformation and "low-quality" distance respectively. The best error among 15
different values of k within each method, the error of the learned convex combination and
the total learned weights for each method are shown below each plot. It is clear that the
solution of the algorithm is dominated by the good kernels and is not influenced by the
ones with low performance. As a result, the error of the convex combination is comparable
to that of the Euclidean and transformation methods. The final experiment (see Figure 4)
demonstrates that unlabeled data improves the performance of our method.
[Figure 2: six panels of bar plots showing the learned weight of each of the 30 kernels for the tasks 1 vs. 7, 2 vs. 3, 2 vs. 7, 3 vs. 8, 4 vs. 7 and odd vs. even.]
Figure 2: Kernel weights for Euclidean (first 10), Transformation (middle 10) and Tangent
(last 10). See text for more information.
[Figure 3: three similarity-matrix panels over the 400 images, one per distance.
Euclidean: error = 0.24%, $\sum_{i=1}^{15} \lambda_i = 0.553$.
Transformation: error = 0.24%, $\sum_{i=16}^{30} \lambda_i = 0.406$.
Low-quality distance: error = 17.47%, $\sum_{i=31}^{45} \lambda_i = 0.041$.
Convex combination error = 0.26%.]
Figure 3: Similarity matrices and corresponding learned coefficients of the convex combination for the 6 vs. 9 task. See text for description.
5
Conclusion
We have presented a method for computing an optimal kernel within the framework of regularization over graphs. The method consists of solving a minimax problem, which can be efficiently
solved by using an algorithm from [1]. When tested on optical character recognition tasks,
the method exhibits competitive performance and is able to select good graph structures.
Future work will focus on out-of-sample extensions of this algorithm and on continuous
optimization versions of it. In particular, we may consider a continuous family of graphs
each corresponding to a different weight matrix and study graph kernel combinations over
this class.
[Figure 4: two panels of misclassification error vs. number of training points (0 to 2000) for the Euclidean, transformation and tangent distances.]
Figure 4: Misclassification error vs. number of training points for odd vs. even classification. The number of labeled points is 10 on the left and 20 on the right.
References
[1] A. Argyriou, C. A. Micchelli and M. Pontil. Learning convex combinations of continuously
parameterized basic kernels. Proc. 18th Conf. on Learning Theory (COLT), 2005.
[2] M. Belkin, I. Matveeva and P. Niyogi. Regularization and semi-supervised learning on large
graphs. Proc. of 17th Conf. on Learning Theory (COLT), 2004.
[3] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Mach. Learn.,
56:209-239, 2004.
[4] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts.
Proc. of 18th International Conf. on Machine Learning, 2001.
[5] F. R. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics, Vol. 92,
1997.
[6] T. Hastie and P. Simard. Models and metrics for handwritten character recognition. Statistical
Science, 13(1):54-65, 1998.
[7] M. Herbster, M. Pontil, L. Wainer. Online learning over graphs. Proc. 22nd Int. Conf. on Machine
Learning, 2005.
[8] T. Joachims. Transductive learning via spectral graph partitioning. Proc. of the Int. Conf. on
Machine Learning (ICML), 2003.
[9] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. Proc.
19th Int. Conf. on Machine Learning, 2002.
[10] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, M. I. Jordan. Learning the kernel
matrix with semidefinite programming. J. Machine Learning Research, 5:27-72, 2004.
[11] Y. Lin and H. H. Zhang. Component selection and smoothing in smoothing spline analysis of
variance models (COSSO). Institute of Statistics Mimeo Series 2556, NCSU, January 2003.
[12] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. J. Machine
Learning Research, 6:1099-1125, 2005.
[13] C. S. Ong, A. J. Smola, and R. C. Williamson. Hyperkernels. Advances in Neural Information
Processing Systems, 15, S. Becker et al. (Eds.), MIT Press, Cambridge, MA, 2003.
[14] A. J. Smola and R. I. Kondor. Kernels and regularization on graphs. Proc. of 16th Conf. on
Learning Theory (COLT), 2003.
[15] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[16] D. Zhou, O. Bousquet, T. N. Lal, J. Weston and B. Scholkopf. Learning with local and global
consistency. Advances in Neural Information Processing Systems, 16, S. Thrun et al. (Eds.),
MIT Press, Cambridge, MA, 2004.
[17] X. Zhu, Z. Ghahramani and J. Lafferty. Semi-supervised learning using Gaussian fields and
harmonic functions. Proc. 20th Int. Conf. on Machine Learning, 2003.
[18] X. Zhu, J. Kandola, Z. Ghahramani, J. Lafferty. Nonparametric transforms of graph kernels for
semi-supervised learning. Advances in Neural Information Processing Systems, 17, L. K. Saul
et al. (Eds.), MIT Press, Cambridge, MA, 2005.
Learning Cue-Invariant Visual Responses
Jarmo Hurri
HIIT Basic Research Unit, University of Helsinki
P.O.Box 68, FIN-00014 University of Helsinki, Finland
Abstract
Multiple visual cues are used by the visual system to analyze a scene;
achromatic cues include luminance, texture, contrast and motion. Single-cell recordings have shown that the mammalian visual cortex contains
neurons that respond similarly to scene structure (e.g., orientation of
a boundary), regardless of the cue type conveying this information.
This paper shows that cue-invariant response properties of simple- and
complex-type cells can be learned from natural image data in an unsupervised manner. In order to do this, we also extend a previous conceptual
model of cue invariance so that it can be applied to model simple- and
complex-cell responses. Our results relate cue-invariant response properties to natural image statistics, thereby showing how the statistical modeling approach can be used to model processing beyond the elemental
response properties of visual neurons. This work also demonstrates how to
learn, from natural image data, more sophisticated feature detectors than
those based on changes in mean luminance, thereby paving the way for
new data-driven approaches to image processing and computer vision.
1
Introduction
When segmenting a visual scene, the brain utilizes a variety of visual cues. Spatiotemporal variations in the mean luminance level, which are also called first-order cues, are
computationally the simplest of these; the name "first-order" comes from the idea that a
single linear filtering operation can detect these cues. Other types of visual cues include
contrast, texture and motion; in general, cues related to variations in characteristics other
than mean luminance are called higher-order (also called non-Fourier) cues; the analysis of
these is thought to involve more than one level of processing/filtering. Single-cell recordings have shown that the mammalian visual cortex contains neurons that are selective to
both first- and higher-order cues. For example, a neuron may exhibit similar selectivity
to the orientation of a boundary, regardless of whether the boundary is a result of spatial
changes in mean luminance or contrast [1]. Monkey cortical areas V1 and V2, and cat cortical areas 17 and 18, contain both simple- (orientation-, frequency- and phase-selective)
and complex-type (orientation- and frequency-selective, phase-invariant) cells that exhibit
such cue-invariant response properties [2, 1, 3, 4, 5]. Previous research has been unable to
pinpoint the connectivity that gives rise to cue-invariant responses.
Recent computational modeling of the visual system has produced fundamental results
relating stimulus statistics to first-order response properties of simple and complex cells
(see, e.g., [6, 7, 8, 9]). The contribution of this paper is to introduce a similar, natural image
[Figure 1 schematic: (A) the two-stream model, with a linear stream and a nonlinear stream (first-stage filters, rectification, second-stage filters, integration); (B) our model, with first-order simple and complex cells, a feedback path, and cue-invariant simple- and complex-cell levels. Panel letters a-h refer to the filters in Figure 2.]
Figure 1: (A) The two-stream model [1], with a linear stream (on the right) and a nonlinear
stream (on the left). The linear stream responds to first-order cues, while the nonlinear
stream responds to higher-order cues. In the nonlinear stream, the stimulus (image) is
first filtered with multiple high-frequency filters, whose outputs are transformed nonlinearly (rectified), and subsequently used as inputs for a second-stage filter. Cue-invariant
responses are obtained when the outputs of these two streams are integrated. (B) Our
model of cue-invariant responses. The model consists of simple cells, complex cells and
a feedback path leading from a population of high-frequency first-order complex cells to
low-frequency cue-invariant simple cells. In a cue-invariant simple cell, the feedback is
filtered with a filter that has similar spatial characteristics as the feedforward filter of the
cell. The output of a cue-invariant simple cell is given by the sum of the linearly filtered
input and the filtered feedback. Note that while our model results in cue-invariant response
properties, it is not a model of cue integration, because in the sum the two paths can cancel out. However, this simplification does not affect our results, that is, learning, since
the summed output is not used in learning (see Section 3), or measurements, which excite
only one of the paths significantly and do not consider integration effects (see Figures 3
and 4). In this instance of the model, the high-frequency cells prefer horizontal stimuli,
while the low-frequency cue-invariant cells prefer vertical stimuli; in other instances, this
relationship can be different. For actual filters used in an implementation of this model, see
Figure 2. Lowercase letters a-h refer to the corresponding subfigures in Figure 2.
statistics-based framework for cue-invariant responses of both simple and complex cells.
In order to achieve this, we also extend the two-stream model of cue-invariant responses
(Figure 1A) to account for cue-invariant responses at both simple- and complex-cell levels.
The rest of this paper is organized as follows. In Section 2 we describe our version of the
two-stream model of cue-invariant responses, which is based on feedback from complex
cells to simple cells. In Section 3 we formulate an unsupervised learning rule for learning
these feedback connections. We apply our learning rule to natural image data, and show
that this results in the emergence of connections that give rise to cue-invariant responses at
both simple- and complex-cell levels. We end this paper with conclusions in Section 4.
2
A model of cue-invariant responses
The most prominent model of cue-invariant responses introduced in previous research is
the two-stream model (see, e.g., [1]), depicted in Figure 1A. In this research we have extended this model so that it can be applied directly to model the cue-invariant responses of
simple and complex cells. Our model, shown in Figure 1B, employs standard linear-filter
Figure 2: The filters used in an implementation of our model. The reader is referred to
Figure 1B for the correspondence between subfigures (a)-(h) and the schematic model of
Figure 1B. (a) The feedforward filter (Gabor function [10]) of a high-frequency first-order
simple cell; the filter has size 19 × 19 pixels, which is the size of the image data in our
experiments. (b) The feedforward filter of another first-order simple cell. This feedforward
filter is otherwise similar to the one in (a), except that there is a phase difference of π/2
between the two; together, the feedforward filters in (a) and (b) are used to implement an
energy model of a complex cell. (c) A lattice of size 7 × 7 of high-frequency filters of the
type shown in (a); these filters are otherwise identical, except that their spatial locations
vary. (d) A lattice of filters of the type shown in (b). Together, the lattices shown in (c)
and (d) are used to implement a 7 × 7 lattice of energy-model complex cells with different
spatial positions; the output of this lattice is the feedback relayed to the low-frequency cue-invariant cells. (e,f) Feedforward filters of low-frequency simple cells. (g) A feedback
filter of size 7 × 7 for the simple cell whose feedforward filter is shown in (e); in order to
avoid confusion between feedforward filters and feedback filters, the latter are visualized
as lattices of slightly rounded rectangles. (h) A feedback filter for the simple cell whose
feedforward filter is shown in (f). The feedback filters in (g) and (h) have been obtained by
applying the learning algorithm introduced in this paper (see Section 3 for details).
models of simple cells and energy models of complex cells [10], and a feedback path from
the complex-cell level to the simple-cell level. This feedback path introduces a second,
nonlinear input stream to cue-invariant cells, and gives rise to cue-invariant responses in
these cells. To avoid confusion between the two types of filters (one type operating on
the input image and the other on the feedback) we will use the term "feedforward filter"
for the former and the term "feedback filter" for the latter. Figure 2 shows the feedforward
and feedback filters of a concrete instance (implementation) of our model. Gabor functions
[10] are used to model simple-cell feedforward filters.
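To make these two ingredients concrete, here is a minimal NumPy sketch of a Gabor feedforward filter and of the energy model built from a quadrature pair; the parameterization is illustrative and not the exact one used in the experiments:

```python
import numpy as np

def gabor(size, freq, theta, phase, sigma):
    """A size x size Gabor filter (odd `size` assumed): a sinusoid of spatial
    frequency `freq` and orientation `theta`, windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def complex_cell_response(image, freq, theta, sigma, size=19):
    """Energy model: squared responses of a quadrature pair of simple cells
    (two Gabor filters with a pi/2 phase difference), summed."""
    even = np.sum(image * gabor(size, freq, theta, 0.0, sigma))
    odd = np.sum(image * gabor(size, freq, theta, np.pi / 2, sigma))
    return even ** 2 + odd ** 2
```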
Figure 3 illustrates the design of higher-order gratings, and shows how the complex-cell
lattice of the model transforms higher-order cues into feedback activity patterns that resemble corresponding first-order cues. A quantitative evaluation of the model is given
in Figure 4. These measurements show that our model possesses the fundamental cue-invariant response properties: in our model, a cue-invariant neuron has similar selectivity
to the orientation, frequency and phase of a grating stimulus, regardless of cue type (see
figure caption for details). We now proceed to show how the feedback filters of our model
(Figures 2g and h) can be learned from natural image data.
3
Learning feedback connections in an unsupervised manner
3.1
The objective function and the learning algorithm
In this section we introduce an unsupervised algorithm for learning feedback connection
weights from complex cells to simple cells. When this learning algorithm is applied to
natural image data, the resulting feedback filters are those shown in Figures 2g and h; as
[Figure 3 layout: one row per cue type (luminance, texture, contrast); columns show the sinusoidal constituents of the stimulus, the stimulus itself (luminance: B = A; texture: G = DE + (1 - D)F; contrast: K = IJ), and the resulting feedback activity (C, H, L).]
Figure 3: The design of grating stimuli with different cues, and the feedback activity for
these gratings. Design of grating stimuli: Each row illustrates how, for a particular cue, a
grating stimulus is composed of sinusoidal constituents; the equation of each stimulus (B,
G, K) as a function of the constituents is shown under the stimulus. Note that the orientation, frequency and phase of each grating is determined by the first sinusoidal constituent
(A, D, I); here these parameters are the same for all stimuli. Here (E) and (F) are two
different textures, and (I) is called the envelope and (J) the carrier of a contrast-defined
stimulus. Feedback activity: The rightmost column shows the feedback activity ? that is,
response of the complex-cell lattice (see Figures 2c and d) ? for the three types of stimuli.
(C) There is no response to the luminance stimuli, since the orientation and frequency of
the stimulus are different from those of the high-frequency feedforward filters. (H, L) For
other cue types, the lattice detects the locations of energy of the vertical high-frequency
constituent (E, J), thereby resulting in feedback activity that has a spatial pattern similar to
a corresponding luminance pattern (A). Thus, the complex-cell lattice transforms higher-order cues into activity patterns that resemble first-order cues, and these can subsequently
produce a strong response in a feedback filter (compare (H) and (L) with the feedback filter
in Figure 2g). For a quantitative evaluation of the model with these stimuli, see Figure 4.
was shown in Figure 4, these feedback filters give rise to cue-invariant response properties.
The intuitive idea behind the algorithm is the following: in natural images, higher-order cues tend to coincide with first-order cues. For example, when two different textures
are adjacent, there is often also a luminance border between them; two examples of this
phenomenon are shown in Figure 5. Therefore, cue-invariant response properties could
be a result of learning in which large responses in the feedforward channel (first-order
responses) have become associated with large responses in the feedback channel (higher-order responses). Previous research has demonstrated the importance of such energy dependencies in modeling the visual system (see, e.g., [11, 9, 12, 13, 14]).
To turn this idea into equations, let us introduce some notation. Let the vector $c(n) =
[c_1(n)\ c_2(n)\ \cdots\ c_K(n)]^\top$ denote the responses of a set of $K$ first-order high-frequency
complex cells for the input image with index $n$. In our case the number of these complex
cells is $K = 7 \times 7 = 49$ (see Figures 2c and d), so the dimension of this vector is 49.
This vectorization can be done in a standard manner [15] by scanning values from the 2D
lattice column-wise into a vector; when the learned feedback filter is visualized, the filter
is "unvectorized" with a reverse procedure. Let $s(n)$ denote the response of a single low-
[Figure 4 panels A-M: tuning curves (response vs. cue orientation, cue frequency and cue phase, and vs. carrier orientation and carrier frequency) for a cue-invariant simple cell and a cue-invariant complex cell of our model (with feedback), and for a standard simple cell (without feedback).]
Figure 4: Our model fulfills the fundamental properties of cue-invariant responses. The
plots show tuning curves for a cue-invariant simple cell (corresponding to the filters of
Figures 2e and g) and complex cell of our new model (two leftmost columns), and a
standard simple-cell model without feedback processing (rightmost column). Solid lines
show responses to luminance-defined gratings (Figure 3B), dotted lines show responses to
texture-defined gratings (Figure 3G), and dashed lines show responses to contrast-defined
gratings (Figure 3K). (A-I) In our model, a neuron has similar selectivity to the orientation, frequency and phase of a grating stimulus, regardless of cue type; in contrast, a
standard simple-cell model, without the feedback path, is only selective to the parameters
of a luminance-defined grating. The preferred frequency is lower for higher-order gratings
than for first-order gratings; similar observations have been made in single-cell recordings
[4]. (J-M) In our model, the neurons are also selective to the orientation and frequency
of the carrier (Figure 3J) of a contrast-defined grating (Figure 3K), thus conforming with
single-cell recordings [1]. Note that these measurements were made with the feedback
filters learned by our unsupervised algorithm (see Section 3); thus, these measurements
confirm that learning results in cue-invariant response properties.
Figure 5: Two examples of coinciding first- and higher-order
boundary cues. Image in (A)
contains a near-vertical luminance boundary across the image; the boundary in (B) is near-horizontal. In both (A) and (B),
texture is different on different
sides of the luminance border.
(For image source, see [8].)
Figure 6: (A-D, F-I) Feedback filters (top row)
learned from natural image data by using our unsupervised learning algorithm; the bottom row
shows the corresponding feedforward filters. For
a quantitative evaluation of the cue-invariant response properties resulting from the learned filters (A) and (B), see Figure 4. (E, J) The result
of a control experiment, in which Gaussian white
noise was used as input data; (J) shows the feedforward filter used in this control experiment.
frequency simple cell for the input image with index n. In our learning algorithm all the
feedforward filters are fixed and only a feedback filter is learned; this means that c(n) and
s(n) can be computed for all n (all images) prior to applying the learning algorithm.
Let us denote the $K$-dimensional feedback filter with $w$; this filter is learned by our algorithm. Let $b(n) = w^\top c(n)$, that is, $b(n)$ is the signal obtained when the feedback activity
from the complex-cell lattice is filtered with the feedback filter; the overall activity of a cue-invariant simple cell is then $s(n) + b(n)$. Our objective function measures the correlation
of energies of the feedforward response $s(n)$ and the feedback response $b(n)$:

    $f(w) = E\{s^2(n)\, b^2(n)\} = w^\top E\{s^2(n)\, c(n) c(n)^\top\}\, w = w^\top M w,$    (1)

where $M = E\{s^2(n)\, c(n) c(n)^\top\}$ is a positive-semidefinite matrix that can be computed
from samples prior to learning. To keep the output of the feedback filter $b(n)$ bounded, we
enforce a unit energy constraint on $b(n)$, leading to the constraint

    $h(w) = E\{b^2(n)\} = w^\top E\{c(n) c(n)^\top\}\, w = w^\top C w = 1,$    (2)

where $C = E\{c(n) c(n)^\top\}$ is also positive-semidefinite and can be computed prior to
learning. The problem of maximizing objective (1) under constraint (2) is a well-known
quadratic optimization problem with a norm constraint, the solution of which is given by
an eigenvalue-eigenvector problem (see below). However, in order to handle the case where
$C$ is not invertible (which will be the case below in our experiments) and to attenuate
the noise in the data, we first use a technique called dimensionality reduction (see, e.g.,
[15]). Let $C = E D E^\top$ be the eigenvalue decomposition of $C$; in the decomposition, the
eigenvectors corresponding to the $r$ smallest eigenvalues (subspaces with smallest energy;
the exact value for $r$ is given in Section 3.2) have been dropped out, so $E$ is a $K \times (K - r)$
matrix of $K - r$ eigenvectors and $D$ is a $(K - r) \times (K - r)$ diagonal matrix containing
the largest eigenvalues. Now let $v = D^{1/2} E^\top w$. A one-to-one correspondence between
$v$ and $w$ can be formed by using the pseudoinverse solution $w = E D^{-1/2} v$. Now let
$z(n) = D^{-1/2} E^\top c(n)$. Using these definitions of $v$ and $z(n)$, it is straightforward to
show that the objective and constraint become $f(v) = v^\top E\{s^2(n) z(n) z(n)^\top\} v$ and
$h(v) = \|v\|^2 = 1$. The global maximum $v_{\mathrm{opt}}$ is the eigenvector of $E\{s^2(n) z(n) z(n)^\top\}$
that corresponds to the largest eigenvalue.
In practice, learning from sampled data $s(n)$ and $c(n)$ proceeds as follows. First the eigenvalue decomposition of $C$ is computed. Then the transformed data set $z(n)$ is computed,
and $v_{\mathrm{opt}}$ is calculated from the eigenvalue-eigenvector problem. Finally, the optimal filter
$w_{\mathrm{opt}}$ is obtained from the pseudoinverse relationship. In learning from sampled data, all
expectations are replaced with sample averages.
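A compact NumPy sketch of this procedure, assuming the responses have already been computed and stored row-wise (s of shape (N,), c of shape (N, K); names are ours):

```python
import numpy as np

def learn_feedback_filter(s, c, r):
    """Maximize f(w) = E[s^2 b^2] subject to E[b^2] = 1, with b = w^T c."""
    N = len(s)
    C = (c.T @ c) / N                       # sample estimate of E[c c^T]
    eigvals, eigvecs = np.linalg.eigh(C)    # ascending eigenvalues
    E, D = eigvecs[:, r:], eigvals[r:]      # drop the r smallest directions
    z = (c @ E) / np.sqrt(D)                # z(n) = D^{-1/2} E^T c(n)
    M = (z * (s ** 2)[:, None]).T @ z / N   # sample estimate of E[s^2 z z^T]
    _, vecs = np.linalg.eigh(M)
    v_opt = vecs[:, -1]                     # eigenvector of largest eigenvalue
    return E @ (v_opt / np.sqrt(D))         # w_opt = E D^{-1/2} v_opt
```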
3.2
Experiments
The algorithm described above was applied to natural image data, which was sampled from
a set of over 4,000 natural images [8]. The size of the sampled image patches was 19 × 19
pixels, and the number of samples was 250,000. The local mean (DC component) was
removed from each image sample.
Simple-cell feedforward responses s(n) were computed using the filter shown in Figure 2e,
and the set of high-frequency complex-cell lattice activities c(n) was computed using the
filters shown in Figures 2c and d. A form of contrast gain control [16], which can be used
to compensate for the large variation in contrast in natural images, was also applied to the
natural image data: prior to filtering a natural image sample with a feedforward filter, the
energy of the image was normalized inside the Gaussian modulation window of the Gabor
function [10] of the feedforward filter. This preprocessing tends to weaken contrast borders, implying that in our experiments, learning higher-order responses is mostly based on
texture boundaries that coincide with luminance boundaries. It should be noted, however,
that in spite of this preprocessing step, the resulting feedback filters produce cue-invariant
responses to both texture- and contrast-defined cues (see Figure 4). In order to make the
components of c(n) have zero mean, and focus on the structure of feedback activity patterns instead of overall constant activation, the local mean (DC component) was removed
from each c(n). To attenuate the noise in the data, the dimensionality of c(n) was reduced
to 16 (see Section 3.1); this retains 85% of original signal energy.
The algorithm described in Section 3.1 was then applied to this data. The resulting feedback filter is shown in Figure 6A (see also Figure 2g). Data sampling, preprocessing and
the learning algorithm were then repeated, but this time using the feedforward filter shown
in Figure 2f; the feedback filter obtained from this run is shown in Figure 6B (see also
Figure 2h). The measurements in Figure 4 show that these feedback filters result in cueinvariant response properties at both simple- and complex-cell levels. Thus, our unsupervised algorithm learns cue-invariant response properties from natural image data. The
results shown in Figures 6C and D were obtained with feedforward filters whose orientation was different from vertical, demonstrating that the observed phenomenon applies to
other orientations also (in these experiments, the orientation of the high-frequency filters
was orthogonal to that of the low-frequency feedforward filter).
To make sure that the results shown in Figures 6A-D are not a side effect of the preprocessing or the structure of our model, but truly reflect the statistical properties of natural image
data, we ran a control experiment by repeating our first experiment, but using Gaussian
white noise as input data (instead of natural image data). All other steps, including preprocessing and dimensionality reduction, were the same as in the original experiment. The
result is shown in Figure 6E; as can be seen, the resulting filter lacks any spatial structure.
This verifies that our original results do reflect the statistics of natural image data.
4 Conclusions
This paper has shown that cue-invariant response properties can be learned from natural
image data in an unsupervised manner. The results were based on a model in which there is
a feedback path from complex cells to simple cells, and an unsupervised algorithm which
maximizes the correlation of the energies of the feedforward and filtered feedback signals.
The intuitive idea behind the algorithm is that in natural visual stimuli, higher-order cues
tend to coincide with first-order cues. Simulations were performed to validate that the
learned feedback filters give rise to cue-invariant response properties.
Our results are important for three reasons. First, for the first time it has been shown that
cue-invariant response properties of simple and complex cells emerge from the statistical
properties of natural images. Second, our results suggest that cue invariance can result from
feedback from complex cells to simple cells; no feedback from higher cortical areas would
thus be needed. Third, our research demonstrates how higher-order feature detectors can
be learned from natural data in an unsupervised manner; this is an important step towards
general-purpose data-driven approaches to image processing and computer vision.
Acknowledgments
The author thanks Aapo Hyvärinen and Patrik Hoyer for their valuable comments. This
research was supported by the Academy of Finland (project #205742).
References
[1] I. Mareschal and C. Baker, Jr. A cortical locus for the processing of contrast-defined contours. Nature Neuroscience 1(2):150-154, 1998.
[2] Y.-X. Zhou and C. Baker, Jr. A processing stream in mammalian visual cortex neurons for non-Fourier responses. Science 261(5117):98-101, 1993.
[3] A. G. Leventhal, Y. Wang, M. T. Schmolesky, and Y. Zhou. Neural correlates of boundary perception. Visual Neuroscience 15(6):1107-1118, 1998.
[4] I. Mareschal and C. Baker, Jr. Temporal and spatial response to second-order stimuli in cat area 18. Journal of Neurophysiology 80(6):2811-2823, 1998.
[5] J. A. Bourne, R. Tweedale, and M. G. P. Rosa. Physiological responses of New World monkey V1 neurons to stimuli defined by coherent motion. Cerebral Cortex 12(11):1132-1145, 2002.
[6] B. A. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381(6583):607-609, 1996.
[7] A. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research 37(23):3327-3338, 1997.
[8] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B 265(1394):359-366, 1998.
[9] A. Hyvärinen and P. O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research 41(18):2413-2423, 2001.
[10] P. Dayan and L. F. Abbott. Theoretical Neuroscience. The MIT Press, 2001.
[11] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience 4(8):819-825, 2001.
[12] J. Hurri and A. Hyvärinen. Simple-cell-like receptive fields maximize temporal coherence in natural video. Neural Computation 15(3):663-691, 2003.
[13] J. Hurri and A. Hyvärinen. Temporal and spatiotemporal coherence in simple-cell responses: a generative model of natural image sequences. Network: Computation in Neural Systems 14(3):527-551, 2003.
[14] Y. Karklin and M. S. Lewicki. Higher-order structure of natural images. Network: Computation in Neural Systems 14(3):483-499, 2003.
[15] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[16] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience 9(2):181-197, 1992.
2,136 | 294 | Generalization Properties of Radial Basis Functions
Christopher G. Atkeson
Sherif M. Botros
Brain and Cognitive Sciences Department
and the Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
We examine the ability of radial basis functions (RBFs) to generalize. We
compare the performance of several types of RBFs. We use the inverse dynamics of an idealized two-joint arm as a test case. We find that without
a proper choice of a norm for the inputs, RBFs have poor generalization
properties. A simple global scaling of the input variables greatly improves
performance. We suggest some efficient methods to approximate this distance metric.
1 INTRODUCTION
The Radial Basis Functions (RBF) approach to approximating functions consists of
modeling an input-output mapping as a linear combination of radially symmetric
functions (Powell, 1987; Poggio and Girosi, 1990; Broomhead and Lowe, 1988;
Moody and Darken, 1989). The RBF approach has some properties which make
it attractive as a function interpolation and approximation tool. The coefficients
that multiply the different basis functions can be found with a linear regression.
Many RBFs are derived from regularization principles which optimize a criterion
combining fitting error and the smoothness of the approximated function. However,
the optimality criteria may not always be appropriate, especially when the input
variables have different measurement units and affect the output differently. A
natural extension to RBFs is to vary the distance metric (equivalent to performing
a linear transformation on the input variables). This can be viewed as changing
the cost function to be optimized (Poggio and Girosi, 1990). We first use an exact
interpolation approach with RBFs centered at the data in the training set. We
then explore the effect of optimizing the distance metric for Gaussian RBFs using a
smaller number of functions than data in the training set. We also suggest and test
different methods to approximate this metric for the case of Gaussian RBFs that
work well for the two joint arm example that we examined. We refer the reader to
several other studies addressing the generalization performance of RBFs (Franke,
1982; Casdagli, 1989; Renals and Rohwer, 1989).
2 EXACT INTERPOLATION APPROACH
In the exact interpolation model the number of RBFs is equal to the number of
experiences. The centers of the RBFs are chosen to be at the location of the
experiences. We used an idealized horizontal planar two joint arm model with no
friction and no noise (perfect measurements) to test the performance of RBFs:
τ1 = θ̈1 (I1 + I2 + 2 m2 c_x2 l1 cos θ2 − 2 m2 c_y2 l1 sin θ2)
   + θ̈2 (I2 + m2 c_x2 l1 cos θ2 − m2 c_y2 l1 sin θ2)
   − 2 l1 θ̇1 θ̇2 (m2 c_x2 sin θ2 + m2 c_y2 cos θ2)
   − l1 θ̇2^2 (m2 c_x2 sin θ2 + m2 c_y2 cos θ2)

τ2 = θ̈1 (m2 c_x2 l1 cos θ2 − m2 c_y2 l1 sin θ2 + I2) + θ̈2 I2
   + l1 θ̇1^2 (m2 c_x2 sin θ2 + m2 c_y2 cos θ2)     (1)

where θi, θ̇i, and θ̈i are the angular position, velocity and acceleration of joint i, τi is
the torque at joint i, and Ii, mi, li, c_xi and c_yi are respectively the moment of inertia,
mass, length and the x and y components of the center of mass location of link
i. The input vector is (θ1, θ2, θ̇1, θ̇2, θ̈1, θ̈2). The training and test sets are formed
of one thousand random experiences each; uniformly distributed across the space
of the inputs. The different inputs were selected from the following ranges: [-4, 4]
for the joint angles, [-20, 20] for the joint angular velocities and [-100, 100] for the
joint angular accelerations. For the exact interpolation case, we scaled the input
variables such that the input space is limited to the six dimensional hypercube
[-1,1]6. This improved the results we obtained. The torque to be estimated at
each joint is modeled by the following equation:
τ̂_k(x_i) = Σ_{j=1}^{n} c_kj φ(‖x_i − x_j‖) + Σ_{j=1}^{p} μ_kj P_j(x_i)     (2)
where τ̂_k, k = 1, 2, is the estimated torque at the kth joint, n is the number of
experiences/RBFs, x_i is the i-th input vector, ‖·‖ is the Euclidean norm and
P_j(·), j = 1, ..., p, span the space of polynomials of order m. The polynomial terms
are not always added and it was found that adding the polynomial terms by themselves does not improve the performance significantly, which is in agreement with
the conclusion made by Franke (Franke, 1982). When a polynomial is present in
the equation, we add the following extra constraints (Powell, 1987):
Σ_{j=1}^{n} c_kj P_i(x_j) = 0,     i = 1, ..., p     (3)
Figure 1: Normalized errors on the test set for the different RBFs (Gaussians, HIMQ, HMQ, LS, CS, and TPS) using exact interpolation, plotted against c^2, the width parameter when relevant.
To find the coefficients c_kj and μ_kj, we have to invert a square matrix which
is nonsingular for distinct inputs for the basis functions we considered (Micchelli,
1986). We used the training set to find the parameters c_kj, j = 1, ..., n, and when
relevant μ_kj, j = 1, ..., p, for the following RBFs:
• Gaussians: φ(r) = exp(−r^2 / c^2)
• Hardy Multiquadrics [HMQ]: φ(r) = √(r^2 + c^2)
• Hardy Inverse Multiquadrics [HIMQ]: φ(r) = 1 / √(r^2 + c^2)
• Thin Plate Splines [TPS]: φ(r) = r^2 log r
• Cubic Splines [CS]: φ(r) = r^3
• Linear Splines [LS]: φ(r) = r

where r = ‖x_i − x_j‖. For the last three RBFs, we added polynomials of different
orders, subject to the constraints in equation 3 above. Since the number of independent parameters is equal to the number of points in the training set, we can
train the system so that it exactly reproduces the training set. We then tested its
performance on the test set. The error was measured by equation 4 below:
E = [ Σ_{i=1}^{N} Σ_{k=1}^{2} (τ̂_ki − τ_ki)^2 ] / [ Σ_{i=1}^{N} Σ_{k=1}^{2} τ_ki^2 ]     (4)
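For the Gaussian basis without the optional polynomial terms, the exact-interpolation fit reduces to a single linear solve. The sketch below is our own illustration under that simplification (names and the SciPy distance helper are assumptions, not the authors' code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def fit_exact_gaussian_rbf(X, T, c2):
    # X: (n, d) scaled inputs; T: (n, 2) joint torques; c2: squared width.
    Phi = np.exp(-cdist(X, X, 'sqeuclidean') / c2)   # phi(r) = exp(-r^2 / c^2)
    return np.linalg.solve(Phi, T)                   # one coefficient column per joint

def rbf_predict(Xq, X, coeffs, c2):
    return np.exp(-cdist(Xq, X, 'sqeuclidean') / c2) @ coeffs

def normalized_error(T_true, T_hat):
    # Equation 4: squared prediction error normalized by the torque energy.
    return np.sum((T_hat - T_true) ** 2) / np.sum(T_true ** 2)
```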
The normalized errors obtained on the test set for the different RBFs are shown
in figure 1. The results for LS and CS shown in this figure are obtained after
the addition of a first order polynomial to the RBFs. We also tried adding a
third order polynomial for TPS. As shown in this figure, the normalized error was
more sensitive to the width parameter (i.e. c2 ) for the Gaussian RBFs than for
Hardy multiquadrics and inverse multiquadrics. This is in agreement with Franke's
observation (Franke, 1982). The best normalized error for any RBF that we tested
was 0.338 for HMQ with a value of c2 = 4 . Also, contrary to our expectations and
to results reported by others (Franke, 1982), the TPS with a third order polynomial
had a normalized error of 0.5003. This error value did not change significantly when
only lower order polynomials are added to the (r 2 log r) RBFs. Using Generalized
Cross Validation (Bates et ai., 1987) to optimize the tradeoff between smoothness
and fitting the data, we got similar normalized error for TPS.
3 GENERALIZED RBF
The RBF approach has been generalized (Poggio and Girosi, 1990) to have adjustable center locations, fewer centers than data, and to use a different distance
metric. Instead of using a Euclidean norm, we can use a general weighted norm:
‖x_i − x_j‖_W^2 = (x_i − x_j)^T W^T W (x_i − x_j)     (5)
where W is a square matrix. This approach is also referred to as Hyper Basis
Functions (Poggio and Girosi, 1990). The problem of finding the weight matrix
and the location of the centers is nonlinear. We simplified the problem by only
considering a diagonal matrix W and fixing the locations of the centers of the RBFs.
The center locations were chosen randomly and were uniformly distributed over the
input space. We tested three different methods to find the different parameters for
Gaussian RBFs that we will describe in the next three subsections.
3.1 NONLINEAR OPTIMIZATION

We used a Levenberg-Marquardt nonlinear optimization routine to find the coefficients of the RBFs {c_kj} and the diagonal scaling matrix W that minimized the
sum of the squares of the errors in estimating the training set. We were able to
find a set of parameters that reduced the normalized error to less than 0.01 in both
the training and the test sets using 500 Gaussian RBFs randomly and uniformly
spaced over the input space. One disadvantage we found with this method is the
possible convergence to local minima and the long time it takes to converge using
general purpose optimization programs. The diagonal elements of the matrix W
are shown in the L-M columns of Table 1. As expected, θ1 has a very small scale
for both joints compared to θ2, since θ1 does not affect the output of either joint
in the horizontal model described by equation 1. Also the scaling of θ2 is much
larger than the scaling of the other variables. This suggests that the scaling could
be dependent on both the range of the input variables as well as the sensitivity of
the output to the different input variables. We found empirically that a formula
of the form of equation 6 approximates reasonably well the scaling weights found
using nonlinear optimization.
w_ii = ( |∂f/∂x_i| / ‖∇f‖ ) · k / √( E{(x_i − t_i)^2} )     (6)

where |∂f/∂x_i| / ‖∇f‖ is the normalized average absolute value of the gradient of the
correct model of the function to be approximated, and the term k / √(E{(x_i − t_i)^2})
(with t_i ranging over the RBF center coordinates in dimension i) normalizes the density
of the input variables in each direction by taking into account the expected distances
from the RBF centers to the data. The constant k in this equation is inversely
proportional to the width of the Gaussian used in the RBF. For the inverse dynamics
problem we tested, using 500 Gaussian functions randomly and uniformly distributed
over the entire input space, a k between 1 and 2 was found to be good and results in
scaling parameters which approximate those obtained by optimization. The scaling
weights obtained using equation 6, based on knowledge of the functions to be
approximated, are shown in the TRUE FUNC. columns of Table 1. Using these weight
values the error on the test set was about 0.0001.

Table 1: Scaling Weights Obtained Using Different Methods.

                L-M ALG.                  TRUE FUNC.                GRAD. APPROX.
W               Joint 1      Joint 2      Joint 1      Joint 2      Joint 1      Joint 2
W11 (θ1)        0.000021     5.48237e-06  0.000000     0.000000     0.047010     0.005450
W22 (θ2)        0.382014     0.443273     0.456861     0.456449     0.400615     0.409277
W33 (θ̇1)        0.004177     0.0871921    0.005531     0.010150     0.009898     0.038288
W44 (θ̇2)        0.004611     0.000120948  0.007490     0.000000     0.028477     0.008948
W55 (θ̈1)        0.000433     0.00134168   0.000271     0.000110     0.006365     0.002166
W66 (θ̈2)        0.000284     0.000955884  0.000059     0.000116     0.000556     0.001705
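As a concrete reading of equation 6, the TRUE FUNC. weights can be reproduced from a known model of the function. The sketch below is our illustration, approximating the average partials by central finite differences (all names, and the choice k = 1.5, are assumptions):

```python
import numpy as np

def eq6_weights(f, X, centers, k=1.5, eps=1e-5):
    # Equation 6 scaling weights for a known scalar function f(x).
    n, d = X.shape
    grads = np.empty((n, d))
    for l in range(d):                       # average |partial| per dimension
        step = np.zeros(d); step[l] = eps
        grads[:, l] = [(f(x + step) - f(x - step)) / (2 * eps) for x in X]
    g = np.abs(grads).mean(axis=0)
    # RMS distance from the data to the RBF centers, per dimension
    rms = np.sqrt(((X[:, None, :] - centers[None, :, :]) ** 2).mean(axis=(0, 1)))
    return g / np.linalg.norm(g) * k / rms
```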
3.2 AVERAGE GRADIENT APPROXIMATION
In the previous section we showed that the scaling weights could be approximated
using the derivatives of the function to be approximated in the different directions. If
we can approximate these derivatives, we can then approximate the scaling weights
using equation 6. A change in the output Δy can be approximated by a first order
Taylor series expansion as shown in equation 7 below:

Δy ≈ Σ_{i=1}^{n} (∂f/∂x_i) Δx_i     (7)

We first scaled the input variables so that they have the same range, then selected
all pairs of points from the training set that are below a prespecified distance (since
equation 7 is only valid for nearby points), and then computed Δx and Δy for
each pair. We used least squares regression to estimate the values of ∂f/∂x_i. Using
the estimated derivatives and equation 6, we got the scaling weights shown in the
last two columns of table 1. Note the similarity between these weights and the ones
obtained using the nonlinear optimization or the derivatives of the true function.
The normalized error in this case was found to be 0.012 for the training set and
0.033 for the test set. One advantage of this method is that it is much faster than
the nonlinear optimization method. However, it is less accurate.
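A minimal sketch of this pairing-and-regression step (our illustration; the distance cutoff and names are assumptions):

```python
import numpy as np

def estimate_average_gradient(X, y, max_dist):
    # Equation 7: fit dy ~ g . dx by least squares over all close-by pairs.
    dX, dy = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = X[j] - X[i]
            if np.linalg.norm(d) < max_dist:
                dX.append(d)
                dy.append(y[j] - y[i])
    g, *_ = np.linalg.lstsq(np.asarray(dX), np.asarray(dy), rcond=None)
    return g          # averaged partial derivatives, one per input dimension
```

The estimated gradient can then be fed into an equation 6 style weighting, e.g. the eq6_weights sketch above with the finite-difference gradient replaced by this estimate.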
Figure 2: Normalized error vs. the number of iterations using the recursive method, with and without initial scaling.
3.3 RECURSIVE METHOD
Another possible method to approximate the RMS values of the derivatives is to
first approximate the function using RBFs of "reasonable" widths and then use this
first approximation to compute the derivatives which are then used to modify the
Gaussian widths in the different directions. The procedure is then repeated. We
used 100 RBFs with Gaussian units randomly and uniformly distributed to find the
coefficients of the RBFs. We explored two different scalings of the input data. In the
first case we used the raw data without scaling, and in the second case the different
input variables were scaled so that they have the same range from [-1, 1]. The width
of the Gaussians used as specified by the variance c2 was equal to 200 in the first
case, and 2 in the second case. We then used the estimated values of the derivatives
to change the width of the Gaussians in the different directions and iterated the
procedure. The normalized error is plotted versus the number of iterations for both
cases in figure 2. As shown in this figure, the test set error dropped to around 0.001
in about only 4 iterations. This technique is also much faster than the nonlinear
optimization approach. Also it can be easily made local, which is desirable if the
dependence of the function to be approximated on the input variables changes from
one region of the input space to the other. One disadvantage of this approach is
that it is not guaranteed to converge especially if the initial approximation of the
function is very bad.
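In outline, the iteration alternates a fit with a rescaling of the input dimensions. The sketch below is a hedged illustration of one way to realize the loop; it assumes a fit_rbf(X, y) helper returning a predictor, and the rescaling rule shown is our guess at an equation 6 style update rather than the authors' exact specification:

```python
import numpy as np

def recursive_widths(X, y, fit_rbf, n_iters=4, eps=1e-3):
    # fit_rbf(X, y) is assumed to return a prediction function for new inputs.
    d = X.shape[1]
    w = np.ones(d)                           # start with an isotropic scaling
    for _ in range(n_iters):
        predict = fit_rbf(X * w, y)          # fit in the current scaled coordinates
        g = np.empty(d)
        for l in range(d):                   # RMS finite-difference derivative
            step = np.zeros(d); step[l] = eps
            g[l] = np.sqrt(np.mean(
                ((predict(X * w + step) - predict(X * w)) / eps) ** 2))
        w *= g / np.linalg.norm(g)           # stretch dimensions with steep derivatives
    return w
```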
4 CONCLUSION
In this paper we tested the ability of RBFs to generalize from the training set.
We found that the choice of the distance metric used may be crucial for good
generalization. For the problem we tested, a bad choice of a distance metric resulted
in very poor generalization. However, the performance of Gaussian RBFs improved
significantly if we optimized the distance metric. We also tested some empirical
methods for efficiently estimating this metric that worked well in our test problem.
Additional work has to be done to identify the conditions under which the techniques
we presented here may or may not work. Although a simple global scaling of the
input variables worked well in our test example, it may not work in general. One
problem that we found when we optimized the distance metric is that the values
of the coefficients c_j become very large, even if we imposed a penalty on their
values. The reason for this, we think, is that the estimation problem was close to
singular. Choice of the training set and optimizing the centers of the RBFs may
solve this problem. The recursive method we described could probably be modified
to approximate a complete linear coordinate transformation and local scaling.
Acknowledgments
Support was provided under Office of Naval Research contract N00014-88-K-0321
and under Air Force Office of Scientific Research grant AFOSR-89-0500. Support for
CGA was provided by a National Science Foundation Engineering Initiation Award
and Presidential Young Investigator Award, an Alfred P. Sloan Research Fellowship,
the W. M. Keck Foundation Assistant Professorship in Biomedical Engineering, and
a Whitaker Health Sciences Fund MIT Faculty Research Grant.
References
D. M. Bates, M. J. Lindstorm, G. Wahba and B. S. Yandel (1987) "GCVPACK
- Routines for generalized cross validation". Commun. Statist.-Simulat. 16 (1):
263-297.
D. S. Broomhead and D. Lowe (1988) "Multivariable functional interpolation and
adaptive networks". Complex Systems 2:321-323.
M. Casdagli (1989) "Nonlinear prediction of chaotic time series". Physica D 35:
335-356.
R. Franke (1982) "Scattered data interpolation: Tests of some methods". Math. Comp. 38(5):181-200.
C. A. Micchelli (1986) "Interpolation of scattered data: distance matrices and conditionally positive definite functions". Constr. Approx. 2:11-22.
J. Moody and C. Darken (1989) "Fast learning in networks of locally tuned processing units". Neural Computation 1(2):281-294.
T. Poggio and F. Girosi (1990) "Networks for approximation and learning". Proceedings of the IEEE 78(9):1481-1497.
M. J. D. Powell (1987) "Radial basis functions for multivariable interpolation: A review". In J. C. Mason and M. G. Cox (ed.), Algorithms for Approximation, 143-167. Clarendon Press, Oxford.
S. Renals and R. Rohwer (1989) "Phoneme classification experiments using radial
basis functions". In Proceedings of the International Joint Conference on Neural
Networks, 1-462 - 1-467, Washington, D.C., IEEE TAB Neural Network Committee.
2,137 | 2,940 | Active Learning For Identifying Function Threshold Boundaries
Brent Bryan
Center for Automated Learning and Discovery
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Robert C. Nichol
Institute of Cosmology and Gravitation
University of Portsmouth
Portsmouth, PO1 2EG, UK
[email protected]
Christopher R. Genovese
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Jeff Schneider
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Christopher J. Miller
Observatorio Cerro Tololo
Observatorio de AURA en Chile
La Serena, Chile
[email protected]
Larry Wasserman
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We present an efficient algorithm to actively select queries for learning
the boundaries separating a function domain into regions where the function is above and below a given threshold. We develop experiment selection methods based on entropy, misclassification rate, variance, and their
combinations, and show how they perform on a number of data sets. We
then show how these algorithms are used to determine simultaneously
valid 1 − α confidence intervals for seven cosmological parameters. Experimentation shows that the algorithm reduces the computation necessary for the parameter estimation problem by an order of magnitude.
1 Introduction
In many scientific and engineering problems where one is modeling some function over an
experimental space, one is not necessarily interested in the precise value of the function
over an entire region. Rather, one is curious about determining the set of points for which
the function exceeds some particular value. Applications include determining the functional range of wireless networks [1], factory optimization analysis, and gaging the extent
of environmental regions in geostatistics. In this paper, we use this idea to compute confidence intervals for a set of cosmological parameters that affect the shape of the temperature
power spectrum of the Cosmic Microwave Background (CMB).
In one dimension, the threshold discovery problem is a root-finding problem where no
hints as to the location or number of solutions are given; several methods exist which can
be used to solve this problem (e.g. bisection, Newton-Raphson). However, one dimensional
algorithms cannot be easily extended to the multivariate case. In particular, the ideas of root
bracketing and function transversal are not well defined [2]; given a particular bracket of a
continuous surface, there will be an infinite number of solutions to the equation f(x) − t = 0, since the solution in multiple dimensions is a set of surfaces, rather than a set of points.
Numerous active learning papers deal with similar problems in multiple dimensions. For
instance, [1] presents a method for picking experiments to determine the localities of local
extrema when the input space is discrete. Others have used a variety of techniques to reduce
the uncertainty over the problem's entire domain to map out the function (e.g. [3] and [4]),
or locate the optimal value (e.g. [5]).
We are interested in locating the subset of the input space wherein the function is above
a given threshold. Algorithms that merely find a local optimum and search around it will
not work in general, as there may be multiple disjoint regions above the threshold. While
techniques that map out the entire surface of the underlying function will correctly identify
those regions which are above a given threshold, we assert that methods can be developed
that are more efficient at localizing a particular contour of the function. Intuitively, points
on the function that are located far from the boundary are less interesting, regardless of
their variance. In this paper, we make the following contributions to the literature:
• We present a method for choosing experiments that is more efficient than global variance minimization, as well as other heuristics, when one is solely interested in localizing a function contour.
• We show that this heuristic can be used in continuous valued input spaces, without defining a priori a set of possible experiments (e.g. imposing a grid).
• We use our function threshold detection method to determine 1 − α simultaneously valid confidence intervals of CMB parameters, making no assumptions about the model being fit and few assumptions about the data in general.
2 Algorithm
We begin by formalizing the problem. Assume that we are given a bounded sample space
S ⊂ R^n and a scoring function f : S → R, but possibly no data points ({s, f(s)}, s ∈ S).
Given a threshold t, we want to find the set of points S* where f is equal to or above the
threshold: S* = {s | s ∈ S, f(s) ≥ t}. If f is invertible, then the solution is trivial. However,
it is often the case that f is not trivially invertible, such as the CMB model mentioned in
§1. In these cases, we can discover S* by modeling f given some experiments. Thus, we
wish to know how to choose experiments that help us determine S* efficiently.
We assume that the cost to compute f (s) given s is significant. Thus, care should be taken
when choosing the next experiment, as picking optimum points may reduce the runtime
of the algorithm by orders of magnitude. Therefore, it is preferable to analyze current
knowledge about the underlying function and select experiments which quickly refine the
estimate of the function around the threshold of interest. There are several methods one
could use to create a model of the data, notably some form of parametric regression. However, we chose to approximate the unknown boundary as a Gaussian Process (GP), as many
forms of regression (e.g. linear) necessarily smooth the data, ignoring subtle features of
the function that may become pronounced with more data. In particular, we use ordinary
kriging, a form of GP, which assumes that the semivariogram K(·, ·) is a linear function
of the distance between samples [6]; this estimation procedure assumes that the sampled
data are normal with mean equal to the true function and variance given by the sampling
noise. The expected value of K(s_i, s_j) for s_i, s_j ∈ S can be written as
E[K(s_i, s_j)] = (k/2) [ Σ_{l=1}^{n} β_l (s_il − s_jl)^2 ]^(1/2) + c

where k is a constant, known as the kriging parameter, which is an estimated limit
on the first derivative of the function, β_l is a scaling factor for each dimension, and c is the
variance (e.g. experimental noise) of the sampled points. Since the joint distribution of a
finite set of sampled points for GPs is Gaussian, the predicted distribution of a query point
sq given a known set A is normal with mean and variance given by
μ_sq = μ_A + Σ_Aq^T Σ_AA^(−1) (y_A − μ_A)     (1)

σ^2_sq = Σ_Aq^T Σ_AA^(−1) Σ_Aq     (2)

where Σ_Aq denotes the column vector with the ith entry equal to K(s_i, s_q), Σ_AA denotes
the semivariance matrix between the elements of A (the ij element of Σ_AA is K(s_i, s_j)),
y_A denotes the column vector with the ith entry equal to f(s_i), the true value of the function
for each point in A, and μ_A is the mean of the y_A's.
As given, prediction with a GP requires O(n^3) time, as an n × n linear system of equations
must be solved. However, for many GPs, and ordinary kriging in particular, the
correlation between two points decreases as a function of distance. Thus, the full GP
model can be approximated well by a local GP, where only the k nearest neighbors of
the query point are used to compute the prediction value; this reduces the computation
time to O(k^3 log(n)) per prediction, since O(log(n)) time is required to find the k-nearest
neighbors using spatial indexing structures such as balanced kd-trees.
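Read literally, Equations 1 and 2 with the k-nearest-neighbor approximation amount to the following sketch (our illustration, not the authors' code; the semivariance call implements the linear form above, and all names and defaults are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def semivariance(a, b, k=1.0, beta=None, c=1e-6):
    # Linear (ordinary-kriging) semivariogram between two point sets.
    beta = np.ones(a.shape[-1]) if beta is None else beta
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2 * beta).sum(-1)
    return 0.5 * k * np.sqrt(d2) + c

def local_gp_predict(sq, A, yA, n_neighbors=20):
    # Mean and variance at query sq using only its nearest neighbors (Eqs. 1-2).
    tree = cKDTree(A)
    _, idx = tree.query(sq, k=n_neighbors)
    An, yn = A[idx], yA[idx]
    Sigma_AA = semivariance(An, An)
    Sigma_Aq = semivariance(An, sq[None, :])[:, 0]
    mu_A = yn.mean()
    sol = np.linalg.solve(Sigma_AA, np.column_stack([yn - mu_A, Sigma_Aq]))
    mu = mu_A + Sigma_Aq @ sol[:, 0]
    var = Sigma_Aq @ sol[:, 1]
    return mu, var
```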
Since we have assumed that experimentation is expensive, it would be ideal to iteratively
analyze the entire input space and pick the next experiment in such a manner that minimized the total number of experiments necessary. If the size of the parameter space (|S|)
is finite, such an approach may be feasible. However, if |S| is large or infinite, testing all
points may be impractical. Instead of imposing some arbitrary structure on the possible experimental points (such as using a grid), our algorithm chooses candidate points uniformly
at random from the input space, and then selects the candidate point with the highest score
according to the metrics given in §2.1. This allows the input space to be fully explored (in
expectation), and ensures that interesting regions of space that would have fallen between
successive grid points are not missed; in §4 we show how imposing a grid upon the input
space results in just such a situation. While the algorithm is unable to consider the entire space for each sampling iteration, over multiple iterations it does consider most of the
space, resulting in the function boundaries being quickly localized, as can be seen in §3.
2.1 Choosing experiments from among candidates
Given a set of random input points, the algorithm evaluates each one and chooses the point
with the highest score as the location for the next experiment. Below is the list of evaluation
methods we considered.
Random: One of the candidate points is chosen uniformly at random. This method serves
as a baseline for comparison.
Probability of incorrect classification: Since we are trying to map the boundary between
points above and below a threshold, we consider choosing the point from our random sample which has the largest probability of being misclassified by our model. Using the distribution defined by Equations 1 and 2, the probability, p, that the point is above the given
threshold can be computed. The point is predicted to be above the threshold if p > 0.5 and
thus the expected misclassification probability is min(p, 1 − p).
Entropy: Instead of misclassification probability we can consider entropy: −p log2(p) −
(1 − p) log2(1 − p). Entropy is a monotonic function of the misclassification rate so these
two will not choose different experiments. They are listed separately because they have
different effects when mixed with other evaluations. Both entropy and misclassification
will choose points near the boundary. Unfortunately, they have the drawback that once
they find a point near the boundary they continue to choose points near that location and
will not explore the rest of the parameter space.
Variance: Both entropy and probability of incorrect classification suffer from a lack of
incentive to explore the space. To rectify this problem, we consider the variance of each
query point (given by Equation 2) as an evaluation metric. This metric is common in active
learning methods whose goal is to map out an entire function. Since variance is related
to the distance to nearest neighbors, this strategy chooses points that are far from areas
currently searched, and hence will not get stuck at one boundary point. However, it is well
known that such approaches tend to spend a large portion of their time on the edges of the
parameter space and ultimately cover the space exhaustively [7].
Information gain: Information gain is a common myopic metric used in active learning.
Information gain at the query point is the same as entropy in our case because all run
experiments are assumed to have the same variance. Computing a full measure of information gain over the whole state space would provide an optimal 1-step experiment choice.
In some discrete or linear problems this can be done, but it is intractable for continuous
non-linear spaces. We believe the good performance of the evaluation metrics proposed
below stems from their being heuristic proxies for global information gain or reduction in
misclassification error.
Products of metrics: One way to rectify the problems of point policies that focus solely
on points near the boundary or points with large variance regardless of their relevance to
refining the predictive model, is to combine the two measures. Intuitively, doing this can
mimic the idea of information gain; the entropy of a query point measures the classification
uncertainty, while the variance is a good estimator of how much impact a new observation
would have in this region, and thus by what fraction the uncertainty would be reduced. [1]
proposed scoring points based upon the product of their entropy and variance to identify
the presence of local maxima and minima, a problem closely related to boundary detection. We shall also consider scoring points based upon the product of their probability of
incorrect classification and variance. Note that while entropy and probability of incorrect
classification are monotonically related, entropy times variance and probability of incorrect
classification times variance are not.
Straddle: Using the same intuition as for products of heuristics, we define the straddle
heuristic as straddle(s_q) = 1.96 σ̂_sq − |f̂(s_q) − t|. The straddle algorithm scores points highest
that are both unknown and near the boundary. As such, the straddle algorithm prefers points
near the threshold, but far from previous examples. The straddle score for a point may be
negative, which indicates that the model currently estimates the probability that the point
is on a boundary is less than five percent. Since the straddle heuristic relies on the variance
estimate, it is also subject to oversampling edge positions.
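Given the predictive mean and standard deviation at a set of random candidates, each score above is a one-liner. The following sketch of the selection step is our own illustration (names are assumptions, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def select_experiment(candidates, mu, sigma, t, rule='straddle'):
    # mu, sigma: GP predictive mean and std. dev. at each candidate point.
    p = norm.cdf((mu - t) / sigma)                 # P(f(s) >= t) under the GP
    p = np.clip(p, 1e-12, 1 - 1e-12)               # guard the log terms
    misclass = np.minimum(p, 1.0 - p)
    entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    scores = {
        'misclass':     misclass,
        'entropy':      entropy,
        'variance':     sigma ** 2,
        'entropy_var':  entropy * sigma ** 2,
        'misclass_std': misclass * sigma,
        'straddle':     1.96 * sigma - np.abs(mu - t),
    }[rule]
    return candidates[np.argmax(scores)]           # next experiment to run
```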
3 Experiments
We now assess the accuracy with which our model reproduces a known function for the
point policies just described. This is done by computing the fraction of test points in which
the predictive model agrees with the true function about which side of the threshold the
test points are on after some fixed number of experiments. This process is repeated several
times to account for variations due to the random sampling of the input space.
The first model we consider is a 2D sinusoidal function given by
f(x, y) = sin(10x) + cos(4y) − cos(3xy),     x ∈ [0, 1],  y ∈ [0, 2],
with a boundary threshold of t = 0. This function and threshold were examined for the
following reasons: 1) the target threshold winds through the plot giving ample length to
test the accuracy of the approximating model, 2) the boundary is discontinuous with several
small pieces, 3) there is an ambiguous region (around (0.9, 1)) where the true function is
approximately equal to the threshold and the gradient is small, and 4) there are areas in
the domain where the function is far from the threshold and hence we can ensure that the
algorithm is not oversampling in these regions.

Figure 1: Predicted function boundary (solid), true function boundary (dashed), and experiments (dots) for the 2D sinusoid function after A) 50 experiments and B) 100 experiments using the straddle heuristic, and C) 100 experiments using the variance heuristic.

Table 1: Number of experiments required to obtain 99% classification accuracy for the 2D models and 95% classification accuracy for the 4D model for various heuristics. Heuristics requiring more than 10,000 experiments to converge are labeled "did not converge".

                     2D Sin. (1K Cand.)   2D Sin. (31 Cand.)   2D DeBoor           4D Sinusoid
Random               617 ± 158            617 ± 158            7727 ± 987          6254 ± 364
Entropy              did not converge     did not converge     did not converge    6121 ± 1740
Variance             207 ± 7              229 ± 9              4306 ± 573          2320 ± 57
Entropy × Var        117 ± 5              138 ± 6              1621 ± 201          1210 ± 43
Prob. Incor. × Std   113 ± 11             129 ± 14             740 ± 117           1362 ± 89
Straddle             106 ± 5              123 ± 6              963 ± 136           1265 ± 94
Table 1 shows the number of experiments necessary to reach a 99% and 95% accuracy
for the 2D and 4D models, respectively. Note that picking points solely on entropy does
not converge in many cases, while both the straddle algorithm and probability incorrect
times standard deviation heuristic result in approximations that are significantly better than
random and variance heuristics. Figures 1A-C confirm that the straddle heuristic is aiding
in boundary prediction. Note that most of the 50 experiments sampled between Figures 1A
and 1B are chosen near the boundary. The 100 experiments chosen to minimize the variance
result in an even distribution over the input space and a worse boundary approximation, as
seen in Figure 1C. These results indicate that the algorithm is correctly modeling the test
function and choosing experiments that pinpoint the location of the boundary.
From Equations 1 and 2, it is clear that the algorithm does not depend on data dimensionality directly. To ensure that heuristics are not exploiting some feature of the 2D input
space, we consider the 4D sinusoidal function
f(x) = sin(10x1) + cos(4x2) − cos(3 x1 x2) + cos(2x3) + cos(3x4) − sin(5 x3 x4)

where x ∈ [(0, 0, 1, 0), (1, 2, 2, 2)] and t = 0. Comparison of the 2D and 4D results in Table 1 reveals that the relative performance of the heuristics remains unchanged, indicating
that the best heuristic for picking experiments is independent of the problem dimension.
To show that the decrease in the number of candidate points relative to the input parameter
space that occurs with higher dimensional problems is not an issue, we reconsider the 2D
sinusoidal problem. Now, we use only 31 candidate points instead of 1000 to simulate the
point density difference between 4D and 2D. Results shown in Table 1 indicate that reducing the number of candidate points does not drastically alter the realized performance.
Additional experiments were performed on a discontinuous 2D function (the DeBoor function given in [1]) with similar results, as can be seen in Table 1.
4 Statistical analysis of cosmological parameters
Let us now look at a concrete application of this work: a statistical analysis of cosmological parameters that affect the formation and evolution of our universe. One key prediction
of the Big Bang model for the origin of our universe is the presence of a 2.73K cosmic
microwave background radiation (CMB). Recently, the Wilkinson Microwave Anisotropy
Probe (WMAP) has completed a detailed survey of this radiation, exhibiting small
CMB temperature fluctuations over the sky [8]. It is believed that the size and spatial proximity of these temperature fluctuations depict the types and rates of particle interactions
in the early universe and consequently characterize the formation of large scale structure
(galaxies, clusters, walls and voids) in the current observable universe. It is conjectured
that this radiation permeated through the universe unchanged since its formation 15 billion
years ago. Therefore, the sizes and angular separations of these CMB fluctuations give a
unique picture of the universe immediately after the Big Bang and have a large implication
on our understanding of primordial cosmology.
An important summary of the temperature fluctuations is the CMB power spectrum shown
in Figure 2, which gives the temperature variance of the CMB as a function of spatial
frequency (or multi-pole moment). It is well known that the shape of this curve is affected
by at least seven cosmological parameters: optical depth (τ), dark energy mass fraction
(Ω_DE), total mass fraction (Ω_M), baryon density (Ω_B), dark matter density (Ω_DM), neutrino
fraction (f_ν), and spectral index (n_s). For instance, the height of the first peak is determined
by the total energy density of the universe, while the third peak is related to the amount of
dark matter. Thus, by fitting models of the CMB power spectrum for given values of the
seven parameters, we can determine how the parameters influence the shape of the model
spectrum. By examining those models that fit the data, we can then establish the ranges of
the parameters that result in models which fit the data.
Previous work characterizing confidence intervals for cosmological parameters either used
marginalization over the other parameters, or made assumptions about the values of the
parameters and/or the shape of the CMB power spectrum. However, [9] notes that "CMB
data have now become so sensitive that the key issue in cosmological parameter determination is not always the accuracy with which the CMB power spectrum features can be
measured, but often what prior information is used or assumed." In this analysis, we make
no assumptions about the ranges or values of the parameters, and assume only that the data
are normally distributed around the unknown CMB spectrum with covariance known up
to a constant multiple. Using the method of [10], we create a non-parametric confidence
ball (under a weighted squared-error loss) for the unknown spectrum that is centered on a
nonparametric estimate with a radius for each specified confidence level derived from the
asymptotic distribution of a pivot statistic¹. For any candidate spectrum, membership in the
confidence ball can be determined by comparing the ball's radius to the variance weighted
sum of squares deviation between the candidate function and the center of the ball.
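Operationally the membership test is a single variance-weighted sum of squares; a minimal sketch (ours, with assumed names):

```python
import numpy as np

def in_confidence_ball(candidate, center, weights, radius):
    # Accept a candidate spectrum if its variance-weighted squared deviation
    # from the ball's center is within the radius for the chosen confidence level.
    return np.sum(weights * (candidate - center) ** 2) <= radius
```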
One advantage of this method is that it gives us simultaneously valid confidence intervals
on all seven of our input parameters; this is not true for 1 − α confidence intervals derived
from a collection of χ² distributions where the confidence intervals often have substantially
lower coverage [11]. However, there is no way to invert the modeling process to determine
parameter ranges given a fixed sum of squared error. Thus, we use the algorithm detailed
¹ See Appendix 3 in [10] for the derivation of this radius.
Figure 2: WMAP data, overlaid with regressed model (solid) and an example of a model CMB spectrum that barely fits at the 95% confidence level (dashed; parameter values are Ω_DM = 0.1 and Ω_B = 0.028). The plot shows temperature variance versus multipole moment.

Figure 3: 95% confidence bounds for Ω_B as a function of Ω_DM. Gray dots denote models which are rejected at a 95% confidence level, while the black dots denote those that are not.
in §2 to map out the confidence surface as a function of the input parameters; that is, we
use the algorithm to pick a location in the seven dimensional parameter space to perform
an experiment, and then run CMBFast [12] to create a simulated power spectrum given this
set of input parameters. We can then compute the sum of squares of error for this spectrum
(relative to the regressed model) and easily tell if the 7D input point is inside the confidence
ball. In practice, we model the sum of squared error, not the confidence level of the model.
This creates a more linear output space, as the confidence level for most of the models is
zero, and thus it is impossible to distinguish between poor and terrible model fits.
Due to previous efforts on this project, we were able to estimate the semivariogram of the
GP from several hundred thousand random points already run through CMBFast. For this
work, we chose the β_l's such that the partials in each dimension were approximately unity,
resulting in k ≈ 1; c was set to a small constant to account for instabilities in the simulator.
These points also gave a starting point for our algorithm². Subsequently, we have run
several hundred thousand more CMBFast models. We find that it takes 20 seconds to pick
an experiment from among a set of 2,000 random candidates. CMBFast then takes roughly
3 minutes to compute the CMB spectrum given our chosen point in parameter space.
In Figure 3, we show a plot of baryon density (Ω_B) versus the dark matter density (Ω_DM) of
the universe over all values of the other five parameters (τ, Ω_DE, Ω_M, f_ν, n_s). Experiments
that are within a 95% confidence ball given the CMB data are plotted in black, while
those that are rejected at the 95% level are gray. Note how there are areas that remain
unsampled, while the boundary regions (transitions between gray and black points) are
heavily sampled, indicating that our algorithm is choosing reasonable points. Moreover,
the results of Figure 3 agree well with results in the literature (derived using parametric
models and Bayesian analysis), as well as with predictions favored by nucleosynthesis [9].
While hard to distinguish in Figure 3, the bottom left group of points above the 95% confidence boundary splits into two separate peaks in parameter space. The one to the left is the
concordance model, while the second peak (the one to the right) is not believed to represent
the correct values of the parameters (due to constraints from other data). The existence of
high probability points in this region of the parameter space has been suggested before,
but computational limitations have prevented much characterization of it. Moreover, the
third peak, near the top right corner of Figure 3, was basically ignored by previous grid
based approaches. Comparison of the number of experiments performed by our straddle
² While initial values are not required (as we have seen in §3), it is possible to incorporate this background knowledge into the model to help the algorithm converge more quickly.
Table 2: Number of points found in the three peaks for the grid based approach of [9] and
our straddle algorithm.
                     Peak Center           # Points in Effective Radius
                     Ω_DM      Ω_B         Grid         Straddle
Concordance Model    0.116     0.024       2118         16055
Peak 2               0.165     0.023       2825         9634
Peak 3               0.665     0.122       0            5488
Total Points                               5613300      603384
algorithm with the grid based approach used by [9] is shown in Table 2. Even with only
10% of the experiments used in the grid approach, we sampled the concordance peak 8
times more frequently, and the second peak 3.4 times more frequently than the grid based
approach. Moreover, it appears that the grid completely missed the third peak, while our
method sampled it over 5000 times. These results dramatically illustrate the power of our
adaptive method, and show how it does not suffer from assumptions made by grid-based
approaches. We are following up on the scientific ramifications of these results in a separate
astrophysics paper.
5 Conclusions
We have developed an algorithm for locating a specified contour of a function while minimizing the number of queries necessary. We described several different methods for picking the next experimental point from a group of candidates and showed how they perform on synthetic test functions. Our experiments indicate that the straddle algorithm outperforms previously published methods, and even handles functions with large discontinuities. Moreover, the algorithm is shown to work on multi-dimensional data, correctly classifying the
boundary at a 99% level with half the points required by variance-minimizing methods.
We have then applied this algorithm to a seven-dimensional statistical analysis of cosmological parameters affecting the Cosmic Microwave Background. With only a few hundred
thousand simulations we are able to accurately describe the interdependence of the cosmological parameters, leading to a better understanding of fundamental physical properties.
References
[1] N. Ramakrishnan, C. Bailey-Kellogg, S. Tadepalli, and V. N. Pandey. Gaussian processes for active data mining of spatial aggregates. In Proceedings of the SIAM International Conference on Data Mining, 2005.
[2] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, 2nd edition, 1992.
[3] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 705–712. The MIT Press, 1995.
[4] Simon Tong and Daphne Koller. Active learning for parameter estimation in Bayesian networks. In NIPS, pages 647–653, 2000.
[5] A. Moore and J. Schneider. Memory-based stochastic optimization. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Neural Information Processing Systems 8, volume 8, pages 1066–1072. MIT Press, 1996.
[6] Noel A. C. Cressie. Statistics for Spatial Data. Wiley, New York, 1991.
[7] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
[8] C. L. Bennett et al. First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results. Astrophysical Journal Supplement Series, 148:1–27, September 2003.
[9] M. Tegmark, M. Zaldarriaga, and A. J. Hamilton. Towards a refined cosmic concordance model: Joint 11-parameter constraints from the cosmic microwave background and large-scale structure. Physical Review D, 63(4), February 2001.
[10] C. Genovese, C. J. Miller, R. C. Nichol, M. Arjunwadkar, and L. Wasserman. Nonparametric inference for the cosmic microwave background. Statistical Science, 19(2):308–321, 2004.
[11] C. J. Miller, R. C. Nichol, C. Genovese, and L. Wasserman. A non-parametric analysis of the CMB power spectrum. Bulletin of the American Astronomical Society, 33:1358, December 2001.
[12] U. Seljak and M. Zaldarriaga. A Line-of-Sight Integration Approach to Cosmic Microwave Background Anisotropies. Astrophysical Journal, 469:437+, October 1996.
2,138 | 2,941 | Interpolating Between Types and Tokens
by Estimating Power-Law Generators∗
Sharon Goldwater
Thomas L. Griffiths
Mark Johnson
Department of Cognitive and Linguistic Sciences
Brown University, Providence RI 02912, USA
{sharon goldwater,tom griffiths,mark johnson}@brown.edu
Abstract
Standard statistical models of language fail to capture one of the most
striking properties of natural languages: the power-law distribution in
the frequencies of word tokens. We present a framework for developing
statistical models that generically produce power-laws, augmenting standard generative models with an adaptor that produces the appropriate
pattern of token frequencies. We show that taking a particular stochastic
process, the Pitman-Yor process, as an adaptor justifies the appearance
of type frequencies in formal analyses of natural language, and improves
the performance of a model for unsupervised learning of morphology.
1 Introduction
In general it is important for models used in unsupervised learning to be able to describe
the gross statistical properties of the data they are intended to learn from, otherwise these
properties may distort inferences about the parameters of the model. One of the most striking statistical properties of natural languages is that the distribution of word frequencies is
closely approximated by a power-law. That is, the probability that a word w will occur with
frequency $n_w$ in a sufficiently large corpus is proportional to $n_w^{-g}$. This observation, which
is usually attributed to Zipf [1] but enjoys a long and detailed history [2], stimulated intense
research in the 1950s (e.g., [3]) but has largely been ignored in modern computational linguistics. By developing models that generically exhibit power-laws, it may be possible to
improve methods for unsupervised learning of linguistic structure.
In this paper, we introduce a framework for developing generative models for language
that produce power-law distributions. Our framework is based upon the idea of specifying
language models in terms of two components: a generator, an underlying generative model
for words which need not (and usually does not) produce a power-law distribution, and an
adaptor, which transforms the stream of words produced by the generator into one whose
frequencies obey a power law distribution. This framework is extremely general: any generative model for language can be used as a generator, with the power-law distribution
being produced as the result of making an appropriate choice for the adaptor.
In our framework, estimation of the parameters of the generator will be affected by assumptions about the form of the adaptor. We show that use of a particular adaptor, the PitmanYor process [4, 5, 6], sheds light on a tension exhibited by formal approaches to natural
language: whether explanations should be based upon the types of words that languages
∗ This work was partially supported by NSF awards IGERT 9870676 and ITR 0085940 and NIMH
award 1R0-IMH60922-01A2.
exhibit, or the frequencies with which tokens of those words occur. One place where this
tension manifests is in accounts of morphology, where formal linguists develop accounts of
why particular words appear in the lexicon (e.g., [7]), while computational linguists focus
on statistical models of the frequencies of tokens of those words (e.g., [8]). The tension
between types and tokens also appears within computational linguistics. For example, one
of the most successful forms of smoothing used in statistical language models, Kneser-Ney
smoothing, explicitly interpolates between type and token frequencies [9, 10, 11].
The plan of the paper is as follows. Section 2 discusses stochastic processes that can produce power-law distributions, including the Pitman-Yor process. Section 3 specifies a two-stage language model that uses the Pitman-Yor process as an adaptor, and examines some
properties of this model: Section 3.1 shows that estimation based on type and token frequencies are special cases of this two-stage language model, and Section 3.2 uses these
results to provide a novel justification for the use of Kneser-Ney smoothing. Section 4
describes a model for unsupervised learning of the morphological structure of words that
uses our framework, and demonstrates that its performance improves as we move from
estimation based upon tokens to types. Section 5 concludes the paper.
2 Producing power-law distributions
Assume we want to generate a sequence of N outcomes, z = {z1, . . . , zN}, with each
outcome zi being drawn from a set of (possibly unbounded) size Z. Many of the stochastic
processes that produce power-laws are based upon the principle of preferential attachment,
where the probability that the ith outcome, zi, takes on a particular value k depends upon
the frequency of k in z−i = {z1, . . . , zi−1} [2]. For example, one of the earliest and most
widely used preferential attachment schemes [3] chooses zi according to the distribution
$$P(z_i = k \mid \mathbf{z}_{-i}) = a\,\frac{1}{Z} + (1-a)\,\frac{n_k^{(\mathbf{z}_{-i})}}{i-1} \qquad (1)$$

where $n_k^{(\mathbf{z}_{-i})}$ is the number of times k occurs in z−i. This "rich-get-richer" process means
that a few outcomes appear with very high frequency in z, the key attribute of a power-law
distribution. In this case, the power-law has parameter g = 1/(1 − a).
One problem with these classical models is that they assume a fixed ordering on the outcomes z. While this may be appropriate for some settings, the assumption of a temporal
ordering restricts the contexts in which such models can be applied. In particular, it is
much more restrictive than the assumption of independent sampling that underlies most
statistical language models. Consequently, we will focus on a different preferential attachment scheme, based upon the two-parameter species sampling model [4, 5] known as the
Pitman-Yor process [6]. Under this scheme outcomes follow a power-law distribution, but
remain exchangeable: the probability of a set of outcomes is not affected by their ordering.
The Pitman-Yor process can be viewed as a generalization of the Chinese restaurant process
[6]. Assume that N customers enter a restaurant with infinitely many tables, each with
infinite seating capacity. Let zi denote the table chosen by the ith customer. The first
customer sits at the first table, z1 = 1. The ith customer chooses table k with probability
$$P(z_i = k \mid \mathbf{z}_{-i}) = \begin{cases} \dfrac{n_k^{(\mathbf{z}_{-i})} - a}{i-1+b} & k \le K(\mathbf{z}_{-i}) \\[6pt] \dfrac{K(\mathbf{z}_{-i})\,a + b}{i-1+b} & k = K(\mathbf{z}_{-i}) + 1 \end{cases} \qquad (2)$$
where a and b are the two parameters of the process and $K(\mathbf{z}_{-i})$ is the number of tables
that are currently occupied.
The Pitman-Yor process satisfies our need for a process that produces power-laws while
retaining exchangeability. Equation 2 is clearly a preferential attachment scheme. When
[Figure 1 appears here: two panels, (a) and (b), each a small directed graphical model split by dotted lines into its generator and adaptor components.]
Figure 1: Graphical models showing dependencies among variables in (a) the simple two-stage model, and (b) the morphology model. Shading of the node containing w reflects the
fact that this variable is observed. Dotted lines delimit the generator and adaptor.
a = 0 and b > 0, it reduces to the standard Chinese restaurant process [12, 4] used in
Dirichlet process mixture models [13]. When 0 < a < 1, the number of people seated at
each table follows a power-law distribution with g = 1 + a [5]. It is straightforward to
show that the customers are exchangeable: the probability of a partition of customers into
sets seated at different tables is unaffected by the order in which the customers were seated.
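The seating rule in Equation 2 is easy to simulate directly. The following sketch, in which all names are our own, draws a seating arrangement for N customers from the Pitman-Yor process, assuming 0 ≤ a < 1 and b > −a:

```python
import numpy as np

def sample_pitman_yor_tables(n_customers, a, b, rng=None):
    """Draw table assignments z_1..z_N from the Pitman-Yor process (Eq. 2)."""
    rng = rng or np.random.default_rng()
    counts, z = [], []                       # counts[k] = customers at table k
    for i in range(n_customers):             # customer i+1 in 1-based notation
        if i == 0:
            counts.append(1)                 # the first customer opens table 1
            z.append(0)
            continue
        k = len(counts)
        probs = np.array([c - a for c in counts] + [k * a + b]) / (i + b)
        table = rng.choice(k + 1, p=probs)
        if table == k:
            counts.append(1)                 # open a new table
        else:
            counts[table] += 1
        z.append(table)
    return z, counts
```

With 0 < a < 1, the occupancies in `counts` follow a power-law distribution with g = 1 + a, which can be checked empirically by plotting their frequencies on log-log axes.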
3 A two-stage language model
We can use the Pitman-Yor process as the foundation for a language model that generically produces power-law distributions. We will define a two-stage model by extending the
restaurant metaphor introduced above. Imagine that each table k is labelled with a word ℓk
from a vocabulary of (possibly unbounded) size W. The first stage is to generate these labels, sampling ℓk from a generative model for words that we will refer to as the generator.
For example, we could choose to draw the labels from a multinomial distribution θ. The
second stage is to generate the actual sequence of words itself. This is done by allowing a
sequence of customers to enter the restaurant. Each customer chooses a table, producing a
seating arrangement, z, and says the word used to label that the table, producing a sequence
of words, w. The process by which customers choose tables, which we will refer to as the
adaptor, defines a probability distribution over the sequence of words w produced by the
customers, determining the frequency with which tokens of the different types occur. The
statistical dependencies among the variables in one such model are shown in Figure 1 (a).
Given the discussion in the previous section, the Pitman-Yor process is a natural choice
for an adaptor. The result is technically a Pitman-Yor mixture model, with zi indicating
the "class" responsible for generating the ith word, and ℓk determining the multinomial
distribution over words associated with class k, with P(wi = w | zi = k, ℓk) = 1 if
ℓk = w, and 0 otherwise. Under this model the probability that the ith customer produces
word w given previously produced words w−i and current seating arrangement z−i is
word w given previously produced words w?i and current seating arrangement z?i is
P (wi = w | w?i , z?i , ?) =
XX
k
P (wi = w | zi = k, ?k )P (?k | w?i , z?i , ?)P (zi = k | z?i )
?k
K(z?i )
=
X n(z?i ) ? a
K(z?i )a + b
k
I(?k = w) +
?w
i?1+b
i?1+b
(3)
k=1
where I(·) is an indicator function, being 1 when its argument is true and 0 otherwise. If
θ is uniform over all W words, then the distribution over w reduces to the Pitman-Yor
process as W → ∞. Otherwise, multiple tables can receive the same label, increasing the
frequency of the corresponding word and producing a distribution with g < 1 + a. Again,
it is straightforward to show that words are exchangeable under this distribution.
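The two-stage construction can be made concrete by combining the Pitman-Yor sampler sketched above with a multinomial generator: each new table draws a label from θ, and every customer at that table repeats the label. A self-contained sketch, with illustrative names and a uniform θ in the usage line:

```python
import numpy as np

def two_stage_sequence(n_words, theta, a, b, rng=None):
    """Generate words: Pitman-Yor adaptor over tables, generator theta for labels."""
    rng = rng or np.random.default_rng()
    counts, labels, words = [], [], []
    for i in range(n_words):
        if i == 0:
            probs = np.array([1.0])          # first customer must open a table
        else:
            k = len(counts)
            probs = np.array([c - a for c in counts] + [k * a + b]) / (i + b)
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):             # new table: draw its label from theta
            labels.append(int(rng.choice(len(theta), p=theta)))
            counts.append(1)
        else:
            counts[table] += 1
        words.append(labels[table])
    return words

words = two_stage_sequence(1000, np.ones(50) / 50, a=0.8, b=0.0)
```

Even though the generator here is uniform, the resulting token frequencies are heavy-tailed, since the adaptor alone supplies the power-law behavior.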
3.1 Types and tokens
The use of the Pitman-Yor process as an adaptor provides a justification for the role of word
types in formal analyses of natural language. This can be seen by considering the question
of how to estimate the parameters of the multinomial distribution used as a generator, θ.¹
In general, the parameters of generators can be estimated using Markov chain Monte Carlo
methods, as we demonstrate in Section 4. In this section, we will show that estimation
schemes based upon type and token frequencies are special cases of our language model,
corresponding to the extreme values of the parameter a. Values of a between these extremes
identify estimation methods that interpolate between types and tokens.
Taking a multinomial distribution with parameters θ as a generator and the Pitman-Yor
process as an adaptor, the probability of a sequence of words w given θ is
$$P(\mathbf{w} \mid \theta) = \sum_{\mathbf{z}, \boldsymbol{\ell}} P(\mathbf{w}, \mathbf{z}, \boldsymbol{\ell} \mid \theta) = \sum_{\mathbf{z}, \boldsymbol{\ell}} \frac{\Gamma(b)}{\Gamma(N+b)} \prod_{k=1}^{K(\mathbf{z})} \left( \theta_{\ell_k}\, ((k-1)a + b)\, \frac{\Gamma(n_k^{(\mathbf{z})} - a)}{\Gamma(1-a)} \right)$$

where in the last sum z and ℓ are constrained such that $\ell_{z_i} = w_i$ for all i. In the case where
b = 0, this simplifies to

$$P(\mathbf{w} \mid \theta) = \sum_{\mathbf{z}, \boldsymbol{\ell}} \frac{\Gamma(K(\mathbf{z}))}{\Gamma(N)}\, a^{K(\mathbf{z})-1} \left( \prod_{k=1}^{K(\mathbf{z})} \theta_{\ell_k} \right) \left( \prod_{k=1}^{K(\mathbf{z})} \frac{\Gamma(n_k^{(\mathbf{z})} - a)}{\Gamma(1-a)} \right) \qquad (4)$$
The distribution P(w | θ) determines how the data w influence estimates of θ, so we will
consider how P(w | θ) changes under different limits of a.
In the limit as a approaches 1, estimation of θ is based upon word tokens. When a → 1,
$\Gamma(n_k^{(\mathbf{z})} - a)/\Gamma(1-a)$ is 1 for $n_k^{(\mathbf{z})} = 1$ but approaches 0 for $n_k^{(\mathbf{z})} > 1$. Consequently, all terms in the
sum over (z, ℓ) go to zero, except that in which every word token has its own table. In this
case, K(z) = N and ℓk = wk. It follows that $\lim_{a \to 1} P(\mathbf{w} \mid \theta) = \prod_{k=1}^{N} \theta_{w_k}$. Any form of
estimation using P(w | θ) will thus be based upon the frequencies of word tokens in w.
In the limit as a approaches 0, estimation of θ is based upon word types. The appearance
of $a^{K(\mathbf{z})-1}$ in Equation 4 means that as a → 0, the sum over z is dominated by the seating
arrangement that minimizes the total number of tables. Under the constraint that $\ell_{z_i} = w_i$
for all i, this minimal configuration is the one in which every word type receives a single
table. Consequently, $\lim_{a \to 0} P(\mathbf{w} \mid \theta)$ is dominated by a term in which there is a single
instance of θw for each word w that appears in w.² Any form of estimation using P(w | θ)
will thus be based upon a single instance of each word type in w.
3.2 Predictions and smoothing
In addition to providing a justification for the role of types in formal analyses of language
in general, use of the Pitman-Yor process as an adaptor can be used to explain the assumptions behind a specific scheme for combining token and type frequencies: Kneser-Ney
smoothing. Smoothing methods are schemes for regularizing empirical estimates of the
probabilities of words, with the goal of improving the predictive performance of language
models. The Kneser-Ney smoother estimates the probability of a word by combining type
and token frequencies, and has proven particularly effective for n-gram models [9, 10, 11].
¹ Under the interpretation of this model as a Pitman-Yor process mixture model, this is analogous
to estimating the base measure G0 in a Dirichlet process mixture model (e.g. [13]).
² Despite the fact that P(w | θ) approaches 0 in this limit, $a^{K(\mathbf{z})-1}$ will be constant across all
choices of ℓ. Consequently, estimation schemes that depend only on the non-constant terms in
P(w | θ), such as maximum-likelihood or Bayesian inference, will remain well defined.
To use an n-gram language model, we need to estimate the probability distribution over
words given their history, i.e. the n preceding words. Assume we are given a vector of N
words w that all share a common history, and want to predict the next word, wN +1 , that will
occur with that history. Assume that we also have vectors of words from H other histories,
w(1) , . . . , w(H) . The interpolated Kneser-Ney smoother [11] makes the prediction
$$P(w_{N+1} = w \mid \mathbf{w}) = \frac{n_w^{(\mathbf{w})} - I(n_w^{(\mathbf{w})} > D)\,D}{N} + \frac{\sum_w I(n_w^{(\mathbf{w})} > D)\,D}{N} \cdot \frac{\sum_h I(w \in \mathbf{w}^{(h)})}{\sum_w \sum_h I(w \in \mathbf{w}^{(h)})} \qquad (5)$$

where we have suppressed the dependence on w(1), . . . , w(H), D is a "discount factor"
specified as a parameter of the model, and the sum over h includes w.
We can define a two-stage model appropriate for this setting by assuming that the sets of
words for all histories are produced by the same adaptor and generator. Under this model,
the probability of word wN+1 given w and θ is

$$P(w_{N+1} = w \mid \mathbf{w}, \theta) = \sum_{\mathbf{z}} P(w_{N+1} = w \mid \mathbf{w}, \mathbf{z}, \theta)\, P(\mathbf{z} \mid \mathbf{w}, \theta)$$
where P(wN+1 = w | w, z, θ) is given by Equation 3. Assuming b = 0, this becomes

$$P(w_{N+1} = w \mid \mathbf{w}, \theta) = \frac{n_w - E_{\mathbf{z}}[K_w(\mathbf{z})]\,a}{N} + \frac{\sum_w E_{\mathbf{z}}[K_w(\mathbf{z})]\,a}{N}\,\theta_w \qquad (6)$$

where $E_{\mathbf{z}}[K_w(\mathbf{z})] = \sum_{\mathbf{z}} K_w(\mathbf{z})\,P(\mathbf{z} \mid \mathbf{w}, \theta)$, and $K_w(\mathbf{z})$ is the number of tables with label
w under the seating assignment z. The other histories enter into this expression via θ.
Since the words associated with each history are assumed to be produced from a single set
of parameters θ, the maximum-likelihood estimate of θw will approach

$$\theta_w = \frac{\sum_h I(w \in \mathbf{w}^{(h)})}{\sum_w \sum_h I(w \in \mathbf{w}^{(h)})}$$
as a approaches 0, since only a single instance of each word type in each context will
contribute to the estimate of θ. Substituting this value of θw into Equation 6 reveals the
correspondence to the Kneser-Ney smoother (Equation 5). The only difference is that the
constant discount factor D is replaced by $a\,E_{\mathbf{z}}[K_w(\mathbf{z})]$, which will increase slowly as nw
increases. This difference might actually lead to an improved smoother: the Kneser-Ney
smoother seems to produce better performance when D increases as a function of nw [11].
4 Types and tokens in modeling morphology
Our attempt to develop statistical models of language that generically produce power-law
distributions was motivated by the possibility that models that account for this statistical
regularity might be able to learn linguistic information better than those that do not. Our
two-stage language modeling framework allows us to create exactly these sorts of models, with the generator producing individual lexical items, and the adaptor producing the
power-law distribution over words. In this section, we show that taking a generative model
for morphology as the generator and varying the parameters of the adaptor results in an
improvement in unsupervised learning of the morphological structure of English.
4.1 A generative model for morphology
Many languages contain words built up of smaller units of meaning, or morphemes. These
units can contain lexical information (as stems) or grammatical information (as affixes).
For example, the English word walked can be parsed into the stem walk and the past-tense
suffix ed. Knowledge of morphological structure enables language learners to understand
and produce novel wordforms, and facilitates tasks such as stemming (e.g., [14]).
As a basic model of morphology, we assume that each word consists of a single stem
and suffix, and belongs to some inflectional class. Each class is associated with a stem
distribution and a suffix distribution. We assume that stems and suffixes are independent
given the class, so we have
$$P(\ell_k = w) = \sum_{c,t,f} I(w = t.f)\, P(c_k = c)\, P(t_k = t \mid c_k = c)\, P(f_k = f \mid c_k = c) \qquad (7)$$
where ck, tk, and fk are the class, stem, and suffix associated with ℓk, and t.f indicates
the concatenation of t and f . In other words, we generate a label by first drawing a class,
then drawing a stem and a suffix conditioned on the class. Each of these draws is from a
multinomial distribution, and we will assume that these multinomials are in turn generated
from symmetric Dirichlet priors, with parameters κ, τ, and φ respectively. The resulting
generative model can be used as the generator in a two-stage language model, providing a
more structured replacement for the multinomial distribution, θ. As before, we will use the
Pitman-Yor process as an adaptor, setting b = 0. Figure 1 (b) illustrates the dependencies
between the variables in this model.
Our morphology model is similar to that used by Goldsmith in his unsupervised morphological learning system [8], with two important differences. First, Goldsmith?s model is
recursive, i.e. a word stem can be further split into a smaller stem plus suffix. Second,
Goldsmith?s model assumes that all occurrences of each word type have the same analysis,
whereas our model allows different tokens of the same type to have different analyses.
4.2 Inference by Gibbs sampling
Our goal in defining this morphology model is to be able to automatically infer the morphological structure of a language. This can be done using Gibbs sampling, a standard Markov
chain Monte Carlo (MCMC) method [15]. In MCMC, variables in the model are repeatedly
sampled, with each sample conditioned on the current values of all other variables in the
model. This process defines a Markov chain whose stationary distribution is the posterior
distribution over model variables given the input data.
Rather than sampling all the variables in our two-stage model simultaneously, our Gibbs
sampler alternates between sampling the variables in the generator and those in the adaptor.
Fixing the assignment of words to tables, we sample ck , tk , and fk for each table from
$$P(c_k = c, t_k = t, f_k = f \mid \mathbf{c}_{-k}, \mathbf{t}_{-k}, \mathbf{f}_{-k}, \boldsymbol{\ell})$$
$$\propto I(\ell_k = t_k.f_k)\; P(c_k = c \mid \mathbf{c}_{-k})\; P(t_k = t \mid \mathbf{t}_{-k}, c)\; P(f_k = f \mid \mathbf{f}_{-k}, c)$$
$$= I(\ell_k = t_k.f_k)\; \frac{n_c + \kappa}{K(\mathbf{z}) - 1 + \kappa C} \cdot \frac{n_{c,t} + \tau}{n_c + \tau T} \cdot \frac{n_{c,f} + \phi}{n_c + \phi F} \qquad (8)$$
where nc is the number of other labels assigned to class c, nc,t and nc,f are the number of
other labels in class c with stem t and suffix f, respectively, and C, T, and F are the total
number of possible classes, stems, and suffixes, which are fixed. We use the notation c−k
here to indicate all members of c except for ck. Equation 8 is obtained by integrating over
the multinomial distributions specified in Equation 7, exploiting the conjugacy between
multinomial and Dirichlet distributions.
Fixing the morphological analysis (c, t, f), we sample the table zi for each word token from
$$P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}, \mathbf{c}, \mathbf{t}, \mathbf{f}) \propto \begin{cases} I(\ell_k = w_i)\,(n_k^{(\mathbf{z}_{-i})} - a) & n_k^{(\mathbf{z}_{-i})} > 0 \\[4pt] P(\ell_k = w_i)\,(K(\mathbf{z}_{-i})\,a + b) & n_k^{(\mathbf{z}_{-i})} = 0 \end{cases} \qquad (9)$$
where P(ℓk = wi) is found using Equation 7, with P(c), P(t), and P(f) replaced with
the corresponding conditional distributions from Equation 8.
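Concretely, resampling a single token's table according to Equation 9 amounts to weighting each occupied table whose label matches the word, plus one candidate new table weighted by the label probability; a sketch with illustrative names:

```python
import numpy as np

def sample_table(word, table_labels, table_counts, label_prob, a, b, rng):
    """One Gibbs update for z_i per Eq. 9.

    table_labels[k] and table_counts[k] describe table k with the current
    token removed; label_prob(w) evaluates Eq. 7 using the conditional
    distributions of Eq. 8.
    """
    K = len(table_counts)
    weights = np.zeros(K + 1)
    for k in range(K):
        if table_labels[k] == word:
            weights[k] = table_counts[k] - a
    weights[K] = label_prob(word) * (K * a + b)   # seat at a brand-new table
    return int(rng.choice(K + 1, p=weights / weights.sum()))
```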
[Figure 2 appears here: panel (a) plots, for each value of a and for the true distribution, the proportion of word types (left) and word tokens (right) carrying each suffix (NULL, e, ed, d, ing, s, es, n, en, other); panel (b) shows type- and token-level confusion matrices between true and found suffixes for a = 0.]
Figure 2: (a) Results for the morphology model, varying a. (b) Confusion matrices for the
morphology model with a = 0. The area of a square at location (i, j) is proportional to the
number of word types (top) or tokens (bottom) with true suffix i and found suffix j.
4.3 Experiments
We applied our model to a data set consisting of all the verbs in the training section of
the Penn Wall Street Journal treebank (137,997 tokens belonging to 7,761 types). This
simple test case using only a single part of speech makes our results easy to analyze. We
determined the true suffix of each word using simple heuristics based on the part-of-speech
tag and spelling of the word.³ We then ran a Gibbs sampler using 6 classes, and compared
the results of our learning algorithm to the true suffixes found in the corpus.
As noted above, the Gibbs sampler does not converge to a single analysis of the data, but
rather to a distribution over analyses. For evaluation, we used a single sample taken after
1000 iterations. Figure 2 (a) shows the distribution of suffixes found by the model for
various values of a, as well as the true distribution. We analyzed the results in two ways:
by counting each suffix once for each word type it was associated with, and by counting
once for each word token (thus giving more weight to the results for frequent words).
The most salient aspect of our results is that, regardless of whether we evaluate on types or
tokens, it is clear that low values of a are far more effective for learning morphology than
higher values. With higher values of a, the system has too strong a preference for empty
suffixes. This observation seems to support the linguists? view of type-based generalization.
It is also worth explaining why our morphological learner finds so many e and es suffixes.
This problem is common to other morphological learning systems with similar models (e.g.
[8]) and is due to the spelling rule in English that deletes stem-final e before certain suffixes.
Since the system has no knowledge of spelling rules, it tends to hypothesize analyses such
as {stat.e, stat.ing, stat.ed, stat.es}, where the e and es suffixes take the place of NULL
and s. This effect can be seen clearly in the confusion matrices shown in Figure 2 (b). The
remaining errors seen in the confusion matrices are those where the system hypothesized an
empty suffix when in fact a non-empty suffix was present. Analysis of our results showed
that these cases were mostly words where no other form with the same stem was present in
³ The part-of-speech tags distinguish between past tense, past participle, progressive, 3rd person
present singular, and infinitive/unmarked verbs, and therefore roughly correlate with actual suffixes.
the corpus. There was therefore no reason for the system to prefer a non-empty suffix.
5 Conclusion
We have shown that statistical language models that exhibit one of the most striking properties of natural languages, power-law distributions, can be defined by breaking the process of generating words into two stages, with a generator producing a set of words, and an
adaptor determining their frequencies. Our morphology model and the Pitman-Yor process
are particular choices for a generator and an adaptor. These choices produce empirical and
theoretical results that justify the role of word types in formal analyses of natural language.
However, the greatest strength of this framework lies in its generality: we anticipate that
other choices of generators and adaptors will yield similarly interesting results.
References
[1] G. Zipf. Selective Studies and the Principle of Relative Frequency in Language. Harvard University Press, Cambridge, MA, 1932.
[2] M. Mitzenmacher. A brief history of generative models for power law and lognormal distributions. Internet Mathematics, 1(2):226–251, 2003.
[3] H. A. Simon. On a class of skew distribution functions. Biometrika, 42(3/4):425–440, 1955.
[4] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145–158, 1995.
[5] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997.
[6] H. Ishwaran and L. F. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211–1235, 2003.
[7] J. B. Pierrehumbert. Probabilistic phonology: discrimination and robustness. In R. Bod, J. Hay, and S. Jannedy, editors, Probabilistic Linguistics. MIT Press, Cambridge, MA, 2003.
[8] J. Goldsmith. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27:153–198, 2001.
[9] H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependences in stochastic language modeling. Computer, Speech, and Language, 8:1–38, 1994.
[10] R. Kneser and H. Ney. Improved backing-off for n-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1995.
[11] S. F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University, 1998.
[12] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII, 1983, pages 1–198. Springer, Berlin, 1985.
[13] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[14] L. Larkey, L. Ballesteros, and M. Connell. Improving stemming for Arabic information retrieval: Light stemming and co-occurrence analysis. In Proceedings of the 25th International Conference on Research and Development in Information Retrieval (SIGIR), 2002.
[15] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. Markov Chain Monte Carlo in Practice. Chapman and Hall, Suffolk, 1996.
2,139 | 2,942 | Diffusion Maps, Spectral Clustering and
Eigenfunctions of Fokker-Planck Operators
Boaz Nadler∗  Stéphane Lafon  Ronald R. Coifman
Department of Mathematics, Yale University, New Haven, CT 06520.
{boaz.nadler,stephane.lafon,ronald.coifman}@yale.edu
Ioannis G. Kevrekidis
Department of Chemical Engineering and Program in Applied Mathematics
Princeton University, Princeton, NJ 08544
[email protected]
Abstract
This paper presents a diffusion based probabilistic interpretation of
spectral clustering and dimensionality reduction algorithms that use the
eigenvectors of the normalized graph Laplacian. Given the pairwise adjacency matrix of all points, we define a diffusion distance between any two
data points and show that the low dimensional representation of the data
by the first few eigenvectors of the corresponding Markov matrix is optimal under a certain mean squared error criterion. Furthermore, assuming
that data points are random samples from a density p(x) = e^{−U(x)} we
identify these eigenvectors as discrete approximations of eigenfunctions
of a Fokker-Planck operator in a potential 2U (x) with reflecting boundary conditions. Finally, applying known results regarding the eigenvalues
and eigenfunctions of the continuous Fokker-Planck operator, we provide
a mathematical justification for the success of spectral clustering and dimensional reduction algorithms based on these first few eigenvectors.
This analysis elucidates, in terms of the characteristics of diffusion processes, many empirical findings regarding spectral clustering algorithms.
Keywords: Algorithms and architectures, learning theory.
1 Introduction
Clustering and low dimensional representation of high dimensional data are important
problems in many diverse fields. In recent years various spectral methods to perform these
tasks, based on the eigenvectors of adjacency matrices of graphs on the data have been
developed, see for example [1]-[10] and references therein. In the simplest version, known
as the normalized graph Laplacian, given n data points $\{x_i\}_{i=1}^{n}$ where each $x_i \in \mathbb{R}^p$, we
define a pairwise similarity matrix between points, for example using a Gaussian kernel
∗ Corresponding author. Currently at Weizmann Institute of Science, Rehovot, Israel.
http://www.wisdom.weizmann.ac.il/~nadler
with width ε,

$$L_{i,j} = k(x_i, x_j) = \exp\left( -\frac{\|x_i - x_j\|^2}{2\varepsilon} \right) \qquad (1)$$

and a diagonal normalization matrix $D_{i,i} = \sum_j L_{i,j}$. Many works propose to use the
first few eigenvectors of the normalized eigenvalue problem $L\phi = \lambda D\phi$, or equivalently
of the matrix $M = D^{-1}L$, either as a low dimensional representation of data or as good
coordinates for clustering purposes. Although eq. (1) is based on a Gaussian kernel, other
kernels are possible. While for actual datasets the choice of a kernel k(xi, xj) is crucial, it
does not qualitatively change our asymptotic analysis [11].
The use of the first few eigenvectors of M as good coordinates is typically justified with
heuristic arguments or as a relaxation of a discrete clustering problem [3]. In [4, 5] Belkin
and Niyogi showed that when data is uniformly sampled from a low dimensional manifold
of Rp the first few eigenvectors of M are discrete approximations of the eigenfunctions of
the Laplace-Beltrami operator on the manifold, thus providing a mathematical justification
for their use in this case. A different theoretical analysis of the eigenvectors of the matrix
M , based on the fact that M is a stochastic matrix representing a random walk on the graph
was described by Meilă and Shi [12], who considered the case of piecewise constant eigenvectors for specific lumpable matrix structures. Additional notable works that considered
the random walk aspects of spectral clustering are [8, 13], where the authors suggest clustering based on the average commute time between points, and [14] which considered the
relaxation process of this random walk.
In this paper we provide a unified probabilistic framework which combines these results
and extends them in two different directions. First, in section 2 we define a distance function between any two points based on the random walk on the graph, which we naturally
denote the diffusion distance. We then show that the low dimensional description of the
data by the first few eigenvectors, denoted as the diffusion map, is optimal under a mean
squared error criterion based on this distance. In section 3 we consider a statistical model,
in which data points are iid random samples from a probability density p(x) in a smooth
bounded domain Ω ⊂ R^p and analyze the asymptotics of the eigenvectors as the number of
data points tends to infinity. This analysis shows that the eigenvectors of the finite matrix
M are discrete approximations of the eigenfunctions of a Fokker-Planck (FP) operator with
reflecting boundary conditions. This observation, coupled with known results regarding the
eigenvalues and eigenfunctions of the FP operator provide new insights into the properties
of these eigenvectors and on the performance of spectral clustering algorithms, as described
in section 4.
2 Diffusion Distances and Diffusion Maps
The starting point of our analysis, as also noted in other works, is the observation that the
matrix M is adjoint to a symmetric matrix

$$M_s = D^{1/2} M D^{-1/2}. \qquad (2)$$

Thus, M and Ms share the same eigenvalues. Moreover, since Ms is symmetric it is diagonalizable and has a set of n real eigenvalues $\{\lambda_j\}_{j=0}^{n-1}$ whose corresponding eigenvectors
$\{v_j\}$ form an orthonormal basis of $\mathbb{R}^n$. The left and right eigenvectors of M, denoted φj
and ψj, are related to those of Ms according to

$$\phi_j = v_j D^{1/2}, \qquad \psi_j = v_j D^{-1/2} \qquad (3)$$

Since the eigenvectors vj are orthonormal under the standard dot product in $\mathbb{R}^n$, it follows
that the vectors φj and ψk are bi-orthonormal

$$\langle \phi_i, \psi_j \rangle = \delta_{i,j} \qquad (4)$$
where ⟨u, v⟩ is the standard dot product between two vectors in $\mathbb{R}^n$. We now utilize the
fact that by construction M is a stochastic matrix with all row sums equal to one, and can
thus be interpreted as defining a random walk on the graph. Under this view, Mi,j denotes
the transition probability from the point xi to the point xj in one time step. Furthermore,
based on the similarity of the Gaussian kernel (1) to the fundamental solution of the heat
equation, we define our time step as Δt = ε. Therefore,

$$\Pr\{x(t+\varepsilon) = x_j \mid x(t) = x_i\} = M_{i,j} \qquad (5)$$
Note that ε has therefore a dual interpretation in this framework. The first is that ε is the
(squared) radius of the neighborhood used to infer local geometric and density information
for the construction of the adjacency matrix, while the second is that ε is the discrete time
step at which the random walk jumps from point to point.
We denote by p(t, y|x) the probability distribution of a random walk landing at location
y at time t, given a starting location x at time t = 0. For t = kε, p(t, y|xi) = e_i M^k,
where e_i is a row vector of zeros with a single one at the i-th coordinate. For ε large
enough, all points in the graph are connected so that M has a unique eigenvalue equal
to 1. The other eigenvalues form a non-increasing sequence of non-negative numbers:
λ0 = 1 > λ1 ≥ λ2 ≥ . . . ≥ λn−1 ≥ 0. Then, regardless of the initial starting point x,

$$\lim_{t \to \infty} p(t, y \mid x) = \phi_0(y) \qquad (6)$$
where φ0 is the left eigenvector of M with eigenvalue λ0 = 1, explicitly given by

$$\phi_0(x_i) = \frac{D_{i,i}}{\sum_j D_{j,j}} \qquad (7)$$

This eigenvector also has a dual interpretation. The first is that φ0 is the stationary probability distribution on the graph, while the second is that φ0(x) is a density estimate at the
point x. Note that for a general shift invariant kernel K(x − y) and for the Gaussian kernel
in particular, φ0 is simply the well known Parzen window density estimator.
For any finite time t, we decompose the probability distribution in the eigenbasis {φj}:

$$p(t, y \mid x) = \phi_0(y) + \sum_{j \ge 1} a_j(x)\, \lambda_j^t\, \phi_j(y) \qquad (8)$$

where the coefficients aj depend on the initial location x. Using the bi-orthonormality
condition (4) gives aj(x) = ψj(x), with a0(x) = ψ0(x) = 1 already implicit in (8).
Given the definition of the random walk on the graph it is only natural to quantify the similarity between any two points according to the evolution of their probability distributions.
Specifically, we consider the following distance measure at time t,
$$D_t^2(x_0, x_1) = \| p(t, y \mid x_0) - p(t, y \mid x_1) \|_w^2 = \sum_y \left( p(t, y \mid x_0) - p(t, y \mid x_1) \right)^2 w(y) \qquad (9)$$

with the specific choice w(y) = 1/φ0(y) for the weight function, which takes into account
the (empirical) local density of the points.
Since this distance depends on the random walk on the graph, we quite naturally denote it
as the diffusion distance at time t. We also denote the mapping between the original space
and the first k eigenvectors as the diffusion map
$$\Psi_t(x) = \left( \lambda_1^t \psi_1(x),\; \lambda_2^t \psi_2(x),\; \ldots,\; \lambda_k^t \psi_k(x) \right) \qquad (10)$$
The following theorem relates the diffusion distance and the diffusion map.
Theorem: The diffusion distance (9) is equal to Euclidean distance in the diffusion map
space with all (n − 1) eigenvectors.

$$D_t^2(x_0, x_1) = \sum_{j \ge 1} \lambda_j^{2t} \left( \psi_j(x_0) - \psi_j(x_1) \right)^2 = \| \Psi_t(x_0) - \Psi_t(x_1) \|^2 \qquad (11)$$
Proof: Combining (8) and (9) gives
$$D_t^2(x_0, x_1) = \sum_y \Bigl( \sum_j \lambda_j^t \left( \psi_j(x_0) - \psi_j(x_1) \right) \phi_j(y) \Bigr)^2 \frac{1}{\phi_0(y)} \qquad (12)$$

Expanding the brackets, exchanging the order of summation and using relations (3) and (4)
between φj and ψj yields the required result. Note that the weight factor 1/φ0 is essential
for the theorem to hold.
This theorem provides a justification for using Euclidean distance in the diffusion map
space for spectral clustering purposes. Therefore, geometry in diffusion space is meaningful and can be interpreted in terms of the Markov chain. In particular, as shown in [18],
quantizing this diffusion space is equivalent to lumping the random walk. Moreover, since
in many practical applications the spectrum of the matrix M has a spectral gap with only
a few eigenvalues close to one and all additional eigenvalues much smaller than one, the
diffusion distance at a large enough time t can be well approximated by only the first few
k eigenvectors ψ1(x), . . . , ψk(x), with a negligible error of the order of $O((\lambda_{k+1}/\lambda_k)^t)$.
This observation provides a theoretical justification for dimensional reduction with these
eigenvectors. In addition, the following theorem shows that this k-dimensional approximation is optimal under a certain mean squared error criterion.
Theorem: Out of all k-dimensional approximations of the form
$$\hat{p}(t, y \mid x) = \phi_0(y) + \sum_{j=1}^{k} a_j(t, x)\, w_j(y)$$

for the probability distribution at time t, the one that minimizes the mean squared error

$$E_x \left\{ \| p(t, y \mid x) - \hat{p}(t, y \mid x) \|_w^2 \right\}$$

where averaging over initial points x is with respect to the stationary density φ0(x), is
given by wj(y) = φj(y) and aj(t, x) = $\lambda_j^t \psi_j(x)$. Therefore, the optimal k-dimensional
approximation is given by the truncated sum

$$\hat{p}(t, y \mid x) = \phi_0(y) + \sum_{j=1}^{k} \lambda_j^t\, \psi_j(x)\, \phi_j(y) \qquad (13)$$
Proof: The proof is a consequence of a weighted principal component analysis applied to
the matrix M , taking into account the biorthogonality of the left and right eigenvectors.
We note that the first few eigenvectors are also optimal under other criteria, for example for
data sampled from a manifold as in [4], or for multiclass spectral clustering [15].
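As a numerical illustration of the construction above, the sketch below builds the Markov matrix from a Gaussian kernel, diagonalizes it through the symmetric matrix Ms of Equation 2, and returns the diffusion map coordinates of Equation 10; the kernel width eps, number of coordinates k, and time t are illustrative parameters:

```python
import numpy as np

def diffusion_map(X, eps, k, t=1):
    """Diffusion map of Eq. 10 for the rows of the n x p data matrix X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    L = np.exp(-sq / (2 * eps))                 # Gaussian kernel, Eq. 1
    d = L.sum(axis=1)
    Ms = L / np.sqrt(np.outer(d, d))            # Ms = D^{-1/2} L D^{-1/2}
    lam, v = np.linalg.eigh(Ms)                 # symmetric, so eigh applies
    order = np.argsort(lam)[::-1]
    lam, v = lam[order], v[:, order]
    psi = v / v[:, [0]]                         # right eigenvectors of M; psi_0 = 1
    return (lam[1:k + 1] ** t) * psi[:, 1:k + 1]
```

Euclidean distances between rows of the returned array approximate the diffusion distance of Equation 9, with an error controlled by (λ_{k+1}/λ_k)^t.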
3 The Asymptotics of the Diffusion Map
The analysis of the previous section provides a mathematical explanation for the success of
the diffusion maps for dimensionality reduction and spectral clustering. However, it does
not provide any information regarding the structure of the computed eigenvectors.
To this end, and similar to the framework of [16], we introduce a statistical model and
assume that the data points {xi} are i.i.d. random samples from a probability density p(x)
confined to a compact connected subset Ω ⊂ R^p with smooth boundary ∂Ω. Following
the statistical physics notation, we write the density in Boltzmann form, p(x) = e^{−U(x)},
where U(x) is the (dimensionless) potential or energy of the configuration x.
As shown in [11], in the limit n → ∞ the random walk on the discrete graph converges
to a random walk on the continuous space Ω. Then, it is possible to define forward and
backward operators Tf and Tb as follows,
$$T_f[\phi](x) = \int_\Omega M(x \mid y)\, \phi(y)\, p(y)\, dy, \qquad T_b[\phi](x) = \int_\Omega M(y \mid x)\, \phi(y)\, p(y)\, dy \qquad (14)$$

where $M(x \mid y) = \exp(-\|x - y\|^2 / 2\varepsilon) / D(y)$ is the transition probability from y to x in
time ε, and $D(y) = \int_\Omega \exp(-\|x - y\|^2 / 2\varepsilon)\, p(x)\, dx$.
The two operators Tf and Tb have probabilistic interpretations. If φ(x) is a probability
distribution on the graph at time t = 0, then Tf[φ] is the probability distribution at time
t = ε. Similarly, Tb[φ](x) is the mean of the function φ at time t = ε, for a random walk
that started at location x at time t = 0. The operators Tf and Tb are thus the continuous
analogues of the left and right multiplication by the finite matrix M.
We now take this analysis one step further and consider the limit ε → 0. This is possible,
since when n = ∞ each data point contains an infinite number of nearby neighbors. In
this limit, since ε also has the interpretation of a time step, the random walk converges to a
diffusion process, whose probability density evolves continuously in time, according to

$$\frac{\partial p(x,t)}{\partial t} = \lim_{\varepsilon \to 0} \frac{p(x, t+\varepsilon) - p(x,t)}{\varepsilon} = \lim_{\varepsilon \to 0} \frac{T_f - I}{\varepsilon}\, p(x,t) \qquad (15)$$
in which case it is customary to study the infinitesimal generators (propagators)
Tf ? I
Tb ? I
Hf = lim
,
Hb = lim
(16)
??0
??0
?
?
Clearly, the eigenfunctions of Tf and Tb converge to those of Hf and Hb , respectively.
As shown in [11], the backward generator is given by the following Fokker-Planck operator
Hb ? = ?? ? 2?? ? ?U
(17)
which corresponds to a diffusion process in a potential field of 2U (x)
?
?
?
(18)
x(t)
= ??(2U ) + 2Dw(t)
where w(t) is standard Brownian motion in p dimensions and D is the diffusion coefficient,
equal to one in equation (17). The Langevin equation (18) is a common model to describe
stochastic dynamical systems in physics, chemistry and biology [19, 20]. As such, its
characteristics as well as those of the corresponding FP equation have been extensively
studied, see [19]-[22] and many others. The term ?? ? ?U in (17) is interpreted as a drift
term towards low energy (high-density) regions, and as discussed in the next section, may
play a crucial part in the definition of clusters.
Note that when data is uniformly sampled from ?, ?U = 0 so the drift term vanishes and
we recover the Laplace-Beltrami operator on ?. The connection between the discrete matrix M and the (weighted) Laplace-Beltrami or Fokker-Planck operator, as well as rigorous
convergence proofs of the eigenvalues and eigenvectors of M to those of the integral operator Tb or infinitesimal generator Hb were considered in many recent works [4, 23, 17, 9, 24].
However, it seems that the important issue of boundary conditions was not considered.
Since (17) is defined in the bounded domain ?, the eigenvalues and eigenfunctions of Hb
depend on the boundary conditions imposed on ??. As shown in [9], in the limit ? ? 0,
the random walk satisfies reflecting boundary conditions on ??, which translate into
??(x)
(19)
=0
?n ??
Table 1: Random Walks and Diffusion Processes
Case
Operator
Stochastic Process
?>0
finite n ? n
R.W. discrete in space
n<?
matrix M
discrete in time
?>0
operators
R.W. in continuous space
n??
T f , Tb
discrete in time
??0
infinitesimal
diffusion process
n = ? generator Hf continuous in time & space
where n is a unit normal vector at the point x ? ??.
To conclude, the left and right eigenvectors of the finite matrix M can be viewed as discrete
approximations to those of the operators Tf and Tb , which in turn can be viewed as approximations to those of Hf and Hb . Therefore, if there are enough data points for accurate
statistical sampling, the structure and characteristics of the eigenvalues and eigenfunctions
of Hb are similar to the corresponding eigenvalues and discrete eigenvectors of M . For
convenience, the three different stochastic processes are shown in table 1.
4
Fokker-Planck eigenfunctions and spectral clustering
According to (16), if ?? is an eigenvalue of the matrix M or of the integral operator Tb based
on a kernel with parameter ?, then the corresponding eigenvalue of Hb is ? ? (?? ? 1)/?.
Therefore the largest eigenvalues of M correspond to the smallest eigenvalues of Hb . These
eigenvalues and their corresponding eigenfunctions have been extensively studied in the
literature under various settings. In general, the eigenvalues and eigenfunctions depend
both on the geometry of the domain ? and on the profile of the potential U (x). For clarity
and due to lack of space we briefly analyze here two extreme cases. In the first case ? = Rp
so geometry plays no role, while in the second U (x) = const so density plays no role.
Yet we show that in both cases there can still be well defined clusters, with the unifying
probabilistic concept being that the mean exit time from one cluster to another is much
larger than the characteristic equilibration time inside each cluster.
Case I: Consider diffusion in a smooth potential U (x) in ?R= Rp , where U has a few local
minima, and U (x) ? ? as kxk ? ? fast enough so that e?U dx = 1 < ?. Each such
local minimum thus defines a metastable state, with transitions between metastable states
being relatively rare events, depending on the barrier heights separating them. As shown
in [21, 22] (and in many other works) there is an intimate connection between the smallest
eigenvalues of Hb and mean exit times out of these metastable states. Specifically, in the
asymptotic limit of small noise D 1, exit times are exponentially distributed and the first
non-trivial eigenvalue (after ?0 = 0) is given by ?1 = 1/?
? where ?? is the mean exit time to
overcome the highest potential barrier on the way to the deepest potential well. For the case
of two potential wells, for example, the corresponding eigenfunction is roughly constant
in each well with a sharp transition near the saddle point between the wells. In general,
in the case of k local minima there are asymptotically only k eigenvalues very close to
zero. Apart from ?0 = 0, each of the other k ? 1 eigenvalues corresponds to the mean
exit time from one of the wells into the deepest one, with the corresponding eigenfunctions
being almost constant in each well. Therefore, for a finite dataset the presence of only k
eigenvalues close to 1 with a spectral gap, e.g. a large difference between ?k and ?k+1
is indicative of k well defined global clusters. In figure 1 (left) an example of this case is
shown, where p(x) is the sum of two well separated Gaussian clouds leading to a double
well potential. Indeed there are only two eigenvalues close or equal to 1 with a distinct
spectral gap and the first eigenfunction being almost piecewise constant in each well.
3 Gaussians
2 Gaussians
Uniform density
1
1
0
1
0
?1
?1
?2
?1
0
5
0
?2?1 0 1
1
0
?1
?5
1
?1
0
1
2
4
6
?1
0
1
1
0.8
0.8
0.6
0.9
0.4
0.6
2
4
6
0.2
2
4
0.8
6
5
1
0
0
1
0
?1
?2
?1
0
1
?5
?5
?1
0
5
Figure 1: Diffusion map results on different datasets. Top - the datasets. Middle - the
eigenvalues. Bottom - the first eigenvector vs. x1 or the first and second eigenvectors for
the case of three Gaussians.
In stochastic dynamical systems a spectral gap corresponds to a separation of time scales
between long transition times from one well or metastable state to another as compared
to short equilibration times inside each well. Therefore, clustering and identification of
metastable states are very similar tasks, and not surprisingly algorithms similar to the normalized graph Laplacian have been independently developed in the literature [25].
The above mentioned results are asymptotic in the small noise limit. In practical datasets,
there can be clusters of different scales, where a global analysis with a single ? is not suitable. As an example consider the second dataset in figure 1, with three clusters. While
the first eigenvector distinguishes between the large cluster and the two smaller ones, the
second eigenvector captures the equilibration inside the large cluster instead of further distinguishing the two small clusters. While a theoretical explanation is beyond the scope of
this paper, a possible solution is to choose a location dependent ?, as proposed in [26].
Case II: Consider a uniform density in a region ? ? R3 composed of two large containers connected by a narrow circular tube, as in the top right frame in figure 1. In this
case U (x) = const, so the second term in (17) vanishes. As shown in [27], the second
eigenvalue of the FP operator is extremely small, of the order of a/V where a is the radius
of the connecting tube and V is the volume of the containers, thus showing an interesting
connection to the Cheeger constant on graphs. The corresponding eigenfunction is almost
piecewise constant in each container with a sharp transition in the connecting tube. Even
though in this case the density is uniform, there still is a spectral gap with two well defined
clusters (the two containers), defined entirely by the geometry of ?. An example of such a
case and the results of the diffusion map are shown in figure 1 (right).
In summary the eigenfunctions and eigenvalues of the FP operator, and thus of the corresponding finite Markov matrix, depend on both geometry and density. The diffusion
distance and its close relation to mean exit times between different clusters is the quantity
that incorporates these two features. This provides novel insight into spectral clustering
algorithms, as well as a theoretical justification for the algorithm in [13], which defines
clusters according to mean travel times between points on the graph. A similar analysis
could also be applied to semi-supervised learning based on spectral methods [28]. Finally,
these eigenvectors may be used to design better search and data collection protocols [29].
Acknowledgments: The authors thank Mikhail Belkin and Partha Niyogi for interesting
discussions. This work was partially supported by DARPA through AFOSR.
References
[1] B. Sch?olkopf, A. Smola and K.R. M?uller. Nonlinear component analysis as a kernel eigenvalue
problem, Neural Computation 10, 1998.
[2] Y. Weiss. Segmentation using eigenvectors: a unifying view. ICCV 1999.
[3] J. Shi and J. Malik. Normalized cuts and image segmentation, PAMI, Vol. 22, 2000.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and
clustering, NIPS Vol. 14, 2002.
[5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation, Neural Computation 15:1373-1396, 2003.
[6] A.Y. Ng, M. Jordan and Y. Weiss. On spectral clustering, analysis and an algorithm, NIPS Vol.
14, 2002.
[7] X. Zhu, Z. Ghahramani, J. Lafferty, Semi-supervised learning using Gaussian fields and harmonic functions, Proceedings of the 20th international conference on machine learning, 2003.
[8] M. Saerens, F. Fouss, L. Yen and P. Dupont, The principal component analysis of a graph and
its relationships to spectral clustering. ECML 2004.
[9] R.R. Coifman, S. Lafon, Diffusion Maps, to appear in Appl. Comp. Harm. Anal.
[10] R.R. Coifman & al., Geometric diffusion as a tool for harmonic analysis and structure definition of data, parts I and II, Proc. Nat. Acad. Sci., 102(21):7426-37 (2005).
[11] B. Nadler, S. Lafon, R.R. Coifman, I. G. Kevrekidis, Diffusion maps, spectral clustering,
and the reaction coordinates of dynamical systems, to appear in Appl. Comp. Harm. Anal.,
available at http://arxiv.org/abs/math.NA/0503445.
[12] M. Meila, J. Shi. A random walks view of spectral segmentation, AI and Statistics, 2001.
[13] L. Yen L., Vanvyve D., Wouters F., Fouss F., Verleysen M. and Saerens M. , Clustering using
a random-walk based distance measure. ESANN 2005, pp 317-324.
[14] N. Tishby, N. Slonim, Data Clustering by Markovian Relaxation and the information bottleneck method, NIPS, 2000.
[15] S. Yu and J. Shi. Multiclass spectral clustering. ICCV 2003.
[16] Y. Bengio et. al, Learning eigenfunctions links spectral embedding and kernel PCA, Neural
Computation, 16:2197-2219 (2004).
[17] U. von Luxburg, O. Bousquet, M. Belkin, On the convergence of spectral clustering on random
samples: the normalized case, NIPS, 2004.
[18] S. Lafon, A.B. Lee, Diffusion maps: A unified framework for dimension reduction, data partitioning and graph subsampling, submitted.
[19] C.W. Gardiner, Handbook of stochastic methods, third edition, Springer NY, 2004.
[20] H. Risken, The Fokker Planck equation, 2nd edition, Springer NY, 1999.
[21] B.J. Matkowsky and Z. Schuss, Eigenvalues of the Fokker-Planck operator and the approach
to equilibrium for diffusions in potential fields, SIAM J. App. Math. 40(2):242-254 (1981).
[22] M. Eckhoff, Precise asymptotics of small eigenvalues of reversible diffusions in the metastable
regime, Annals of Prob. 33:244-299, 2005.
[23] M. Belkin and P. Niyogi, Towards a theoeretical foundation for Laplacian-based manifold
methods, COLT 2005 (to appear).
[24] M. Hein, J. Audibert, U. von Luxburg, From graphs to manifolds - weak and strong pointwise
consistency of graph Laplacians, COLT 2005 (to appear).
[25] W. Huisinga, C. Best, R. Roitzsch, C. Sch?utte, F. Cordes, From simulation data to conformational ensembles, structure and dynamics based methods, J. Comp. Chem. 20:1760-74, 1999.
[26] L. Zelnik-Manor, P. Perona, Self-Tuning spectral clustering, NIPS, 2004.
[27] A. Singer, Z. Schuss, D. Holcman and R.S. Eisenberg, narrow escape, part I, submitted.
[28] D. Zhou & al., Learning with local and global consistency, NIPS Vol. 16, 2004.
[29] I.G. Kevrekidis, C.W. Gear, G. Hummer, Equation-free: The computer-aided analysis of complex multiscale systems, Aiche J. 50:1346-1355, 2004.
| 2942 |@word briefly:1 version:1 middle:1 seems:1 nd:1 hu:1 zelnik:1 simulation:1 commute:1 reduction:6 initial:3 configuration:1 contains:1 reaction:1 wouters:1 yet:1 dx:2 ronald:2 dupont:1 v:1 stationary:2 indicative:1 gear:1 short:1 provides:4 math:2 location:5 org:1 height:1 mathematical:3 combine:1 inside:3 introduce:1 coifman:5 x0:8 pairwise:2 indeed:1 roughly:1 actual:1 window:1 increasing:1 bounded:2 moreover:2 notation:1 kevrekidis:3 israel:1 interpreted:3 minimizes:1 eigenvector:5 developed:2 unified:2 finding:1 nj:1 k2:1 partitioning:1 unit:1 appear:4 planck:9 t1:1 negligible:1 engineering:1 local:6 tends:1 limit:6 consequence:1 acad:1 slonim:1 meil:1 pami:1 therein:1 studied:2 appl:2 bi:2 weizmann:2 unique:1 practical:2 acknowledgment:1 asymptotics:3 empirical:2 suggest:1 convenience:1 close:5 operator:20 applying:1 dimensionless:1 www:1 landing:1 map:14 equivalent:1 shi:4 imposed:1 regardless:1 starting:3 independently:1 lumping:1 equilibration:3 insight:2 estimator:1 orthonormal:3 dw:1 embedding:2 coordinate:4 justification:5 laplace:3 diagonalizable:1 annals:1 construction:2 play:3 elucidates:1 distinguishing:1 approximated:1 cut:1 bottom:1 role:2 cloud:1 capture:1 wj:2 region:2 connected:3 highest:1 mentioned:1 cheeger:1 vanishes:2 dt2:3 dynamic:1 depend:4 exit:6 basis:1 darpa:1 various:2 separated:1 heat:1 fast:1 describe:1 distinct:1 kp:2 neighborhood:1 whose:2 heuristic:1 quite:1 larger:1 niyogi:5 statistic:1 sequence:1 eigenvalue:32 quantizing:1 propose:1 product:2 combining:1 translate:1 adjoint:1 description:1 olkopf:1 eigenbasis:1 convergence:2 cluster:13 double:1 converges:2 tk:1 depending:1 ac:1 keywords:1 eq:1 strong:1 esann:1 quantify:1 direction:1 beltrami:3 radius:2 fouss:2 stephane:1 stochastic:7 adjacency:3 decompose:1 summation:1 hold:1 considered:5 normal:1 exp:3 nadler:4 mapping:1 scope:1 equilibrium:1 smallest:2 purpose:2 proc:1 travel:1 currently:1 largest:1 tf:9 tool:1 weighted:2 uller:1 clearly:1 gaussian:6 manor:1 zhou:1 rigorous:1 dependent:1 typically:1 a0:1 perona:1 relation:2 issue:1 dual:2 colt:2 denoted:2 verleysen:1 field:4 equal:5 ng:1 sampling:1 biology:1 yu:1 ephane:1 t2:1 piecewise:3 haven:1 few:10 belkin:6 others:1 distinguishes:1 escape:1 composed:1 geometry:5 ab:1 circular:1 bracket:1 extreme:1 tj:4 chain:1 accurate:1 integral:2 euclidean:2 walk:18 hein:1 theoretical:4 markovian:1 exchanging:1 subset:1 rare:1 uniform:3 eigenmaps:2 tishby:1 kxi:1 st:1 density:16 fundamental:1 international:1 siam:1 probabilistic:4 physic:2 lee:1 parzen:1 continuously:1 connecting:2 na:1 squared:5 von:2 tube:3 choose:1 leading:1 li:2 account:2 potential:10 chemistry:1 ioannis:1 coefficient:2 notable:1 explicitly:1 audibert:1 vi:1 depends:1 view:3 analyze:2 hf:4 recover:1 yen:2 partha:1 il:1 ni:1 characteristic:4 who:1 ensemble:1 yield:1 identify:1 wisdom:1 correspond:1 weak:1 identification:1 iid:1 comp:3 app:1 submitted:2 definition:3 infinitesimal:3 energy:2 pp:1 naturally:2 proof:4 di:2 mi:2 sampled:3 dataset:2 lim:5 dimensionality:3 segmentation:3 reflecting:3 supervised:2 wei:2 though:1 furthermore:2 implicit:1 smola:1 ei:2 nonlinear:1 reversible:1 lack:1 multiscale:1 defines:2 aj:5 normalized:6 concept:1 orthonormality:1 evolution:1 chemical:1 symmetric:2 width:1 self:1 noted:1 criterion:4 m:4 motion:1 saerens:2 image:1 harmonic:2 novel:1 common:1 exponentially:1 volume:1 discussed:1 interpretation:5 ai:1 tuning:1 meila:1 consistency:2 mathematics:2 similarly:1 hummer:1 dj:1 dot:2 similarity:3 yk2:2 brownian:1 aiche:1 recent:2 showed:1 apart:1 
certain:2 success:2 minimum:3 additional:2 converge:1 ii:2 relates:1 semi:2 infer:1 smooth:3 long:1 laplacian:6 arxiv:1 kernel:10 normalization:1 confined:1 justified:1 addition:1 crucial:2 sch:2 container:4 eigenfunctions:15 incorporates:1 lafferty:1 jordan:1 near:1 presence:1 bengio:1 enough:4 hb:10 xj:5 architecture:1 regarding:4 multiclass:2 shift:1 bottleneck:1 pca:1 conformational:1 eigenvectors:30 extensively:2 simplest:1 http:2 diverse:1 rehovot:1 discrete:12 write:1 vol:4 clarity:1 eckhoff:1 diffusion:35 utilize:1 backward:2 graph:19 relaxation:3 asymptotically:1 year:1 sum:3 luxburg:2 prob:1 extends:1 almost:3 separation:1 dy:2 entirely:1 ct:1 risken:1 yale:2 gardiner:1 infinity:1 nearby:1 bousquet:1 aspect:1 argument:1 extremely:1 relatively:1 department:2 according:5 metastable:6 smaller:2 evolves:1 invariant:1 pr:1 iccv:2 equation:6 turn:1 r3:1 singer:1 end:1 available:1 gaussians:3 spectral:27 rp:6 customary:1 original:1 denotes:1 clustering:25 top:2 subsampling:1 unifying:2 const:2 ghahramani:1 malik:1 already:1 quantity:1 diagonal:1 distance:15 thank:1 link:1 separating:1 sci:1 manifold:5 trivial:1 assuming:1 pointwise:1 relationship:1 providing:1 equivalently:1 negative:1 design:1 anal:2 boltzmann:1 perform:1 observation:3 markov:3 datasets:4 finite:7 propagator:1 ecml:1 truncated:1 defining:1 langevin:1 precise:1 frame:1 rn:2 sharp:2 drift:2 required:1 connection:3 narrow:2 nip:6 eigenfunction:3 beyond:1 dynamical:3 fp:5 regime:1 laplacians:1 program:1 tb:11 explanation:2 analogue:1 event:1 suitable:1 natural:1 zhu:1 representing:1 started:1 coupled:1 geometric:2 literature:2 deepest:2 multiplication:1 asymptotic:3 afosr:1 eisenberg:1 interesting:2 generator:4 foundation:1 share:1 row:2 summary:1 surprisingly:1 supported:1 free:1 institute:1 neighbor:1 taking:1 barrier:2 mikhail:1 distributed:1 boundary:6 dimension:2 overcome:1 transition:6 lafon:5 author:3 qualitatively:1 jump:1 forward:1 collection:1 compact:1 boaz:2 global:3 handbook:1 harm:2 conclude:1 xi:9 spectrum:1 continuous:5 search:1 table:2 expanding:1 complex:1 domain:3 protocol:1 noise:2 profile:1 edition:2 x1:9 ny:2 intimate:1 third:1 yannis:1 theorem:6 specific:2 showing:1 essential:1 nat:1 kx:2 gap:5 simply:1 saddle:1 kxk:1 partially:1 springer:2 fokker:9 corresponds:3 satisfies:1 viewed:2 towards:2 change:1 aided:1 specifically:2 infinite:1 uniformly:2 averaging:1 principal:2 meaningful:1 chem:1 princeton:3 d1:2 ex:1 |
2,140 | 2,943 | Coarse sample complexity bounds for active
learning
Sanjoy Dasgupta
UC San Diego
[email protected]
Abstract
We characterize the sample complexity of active learning problems in
terms of a parameter which takes into account the distribution over the
input space, the specific target hypothesis, and the desired accuracy.
1 Introduction
The goal of active learning is to learn a classifier in a setting where data comes unlabeled,
and any labels must be explicitly requested and paid for. The hope is that an accurate
classifier can be found by buying just a few labels.
So far the most encouraging theoretical results in this field are [7, 6], which show that
if the hypothesis class is that of homogeneous (i.e. through the origin) linear separators,
and the data is distributed uniformly over the unit sphere in Rd , and the labels correspond
perfectly to one of the hypotheses (i.e. the separable case) then at most O(d log d/?) labels
are needed to learn a classifier with error less than ?. This is exponentially smaller than the
usual ?(d/?) sample complexity of learning linear classifiers in a supervised setting.
However, generalizing this result is non-trivial. For instance, if the hypothesis class is
expanded to include non-homogeneous linear separators, then even in just two dimensions,
under the same benign input distribution, we will see that there are some target hypotheses
for which active learning does not help much, for which ?(1/?) labels are needed. In
fact, in this example the label complexity of active learning depends heavily on the specific
target hypothesis, and ranges from O(log 1/?) to ?(1/?).
In this paper, we consider arbitrary hypothesis classes H of VC dimension d < ?, and
learning problems which are separable. We characterize the sample complexity of active
learning in terms of a parameter which takes into account: (1) the distribution P over the
input space X ; (2) the specific target hypothesis h? ? H; and (3) the desired accuracy ?.
Specifically, we notice that distribution P induces a natural topology on H, and we define
a splitting index ? which captures the relevant local geometry of H in the vicinity of h? , at
scale ?. We show that this quantity fairly tightly describes the sample complexity of active
learning: any active learning scheme requires ?(1/?) labels and there is a generic active
?
learner which always uses at most O(d/?)
labels1 .
This ? is always at least ?; if it is ? we just get the usual sample complexity of supervised
1
? notation hides factors polylogarithmic in d, 1/?, 1/?, and 1/? .
The O(?)
learning. But sometimes ? is a constant, and in such instances active learning gives an
exponential improvement in the number of labels needed.
We look at various hypothesis classes and derive splitting indices for target hypotheses
at different levels of accuracy. For homogeneous linear separators and the uniform input
distribution, we easily find ? to be a constant ? perhaps the most direct proof yet of the
efficacy of active learning in this case. Most proofs have been omitted for want of space;
the full details, along with more examples, can be found at [5].
2 Sample complexity bounds
2.1 Motivating examples
Linear separators in R1
Our first example is taken from [3, 4]. Suppose the data lie on the real line, and the classifiers are simple thresholding functions, H = {hw : w ? R}:
hw (x) =
1 if x ? w
0 if x < w
?
?
?
?
?
??
+
+
+ +
w
VC theory tells us that if the underlying distribution P is separable (can be classified perfectly by some hypothesis in H), then in order to achieve an error rate less than ?, it is
enough to draw m = O(1/?) random labeled examples from P, and to return any classifier
consistent with them. But suppose we instead draw m unlabeled samples from P. If we
lay these points down on the line, their hidden labels are a sequence of 0?s followed by a
sequence of 1?s, and the goal is to discover the point w at which the transition occurs. This
can be done with a binary search which asks for just log m = O(log 1/?) labels. Thus, in
this case active learning gives an exponential improvement in the number of labels needed.
Can we always achieve a label complexity proportional to log 1/? rather than 1/?? A
natural next step is to consider linear separators in two dimensions.
Linear separators in R2
Let H be the hypothesis class of linear separators in R2 , and suppose the input distribution
P is some density supported on the perimeter of the unit circle. It turns out that the positive
results of the one-dimensional case do not generalize: there are some target hypotheses in
H for which ?(1/?) labels are needed to find a classifier with error rate less than ?, no
matter what active learning scheme is used.
To see this, consider the following possible target hypotheses (Figure 1, left): h0 , for which
all points are positive; and hi (1 ? i ? 1/?), for which all points are positive except for a
small slice Bi of probability mass ?.
The slices Bi are explicitly chosen to be disjoint, with the result that ?(1/?) labels are
needed to distinguish between these hypotheses. For instance, suppose nature chooses a
target hypothesis at random from among the hi , 1 ? i ? 1/?. Then, to identify this target
with probability at least 1/2, it is necessary to query points in at least (about) half the Bi ?s.
Thus for these particular target hypotheses, active learning offers no improvement in sample complexity. What about other target hypotheses in H, for instance those in which the
positive and negative regions are most evenly balanced? Consider the following active
learning scheme:
h3
B2
x3
h2
P
B1
P?
h1
origin
h0
Figure 1: Left: The data lie on the circumference of a circle. Each Bi is an arc of probability
mass ?. Right: The same distribution P, lifted to 3-d, and with trace amounts of another
distribution P? mixed in.
1. Draw a pool of O(1/?) unlabeled points.
2. From this pool, choose query points at random until at least one positive and one
negative point have been found. (If all points have been queried, then halt.)
3. Apply binary search to find the two boundaries between positive and negative on
the perimeter of the circle.
For any h ? H, define i(h) = min{positive mass of h, negative mass of h}. It is not
hard to see that when the target hypothesis is h, step (2) asks for O(1/i(h)) labels (with
probability at least 9/10, say) and step (3) asks for O(log 1/?) labels.
Thus even within this simple hypothesis class, the label complexity of active learning can
run anywhere from O(log 1/?) to ?(1/?), depending on the specific target hypothesis.
Linear separators in R3
In our two previous examples, the amount of unlabeled data needed was O(1/?), exactly
the usual sample complexity of supervised learning. We next turn to a case in which it is
helpful to have significantly more unlabeled data than this.
Consider the distribution of the previous 2-d example: for concreteness, fix P to be uniform
over the unit circle in R2 . Now lift it into three dimensions by adding to each point x =
(x1 , x2 ) a third coordinate x3 = 1. Let H consist of homogeneous linear separators in R3 .
Clearly the bad cases of the previous example persist.
Suppose, now, that a trace amount ? of a second distribution P? is mixed in with P (Figure 1,
right), where P? is uniform on the circle {x21 +x22 = 1, x3 = 0}. The ?bad? linear separators
in H cut off just a small portion of P but nonetheless divide P? perfectly in half. This permits
a three-stage algorithm: (1) using binary search on points from P? , approximately identify
the two places at which the target hypothesis h? cuts P? ; (2) use this to identify a positive
and negative point of P (look at the midpoints of the positive and negative intervals in P? );
(3) do binary search on points from P. Steps (1) and (3) each use just O(log 1/?) labels.
This O(log 1/?) label complexity is made possible by the presence of P? and is only achievable if the amount of unlabeled data is ?(1/? ), which could potentially be enormous. With
less unlabeled data, the usual ?(1/?) label complexity applies.
x
x
S
Hx+
Hx?
Figure 2: (a) x is a cut through H; (b) splitting edges.
2.2 Basic definitions
The sample complexity of supervised learning is commonly expressed as a function of
the error rate ? and the underlying distribution P. For active learning, the previous three
examples demonstrate that it is also important to take into account the target hypothesis
and the amount of unlabeled data. The main goal of this paper is to present one particular
formalism by which this can be accomplished.
Let X be an instance space with underlying distribution P. Let H be the hypothesis class,
a set of functions from X to {0, 1} whose VC dimension is d < ?.
We are operating in a non-Bayesian setting, so we are not given a measure (prior) on the
space H. In the absence of a measure, there is no natural notion of the ?volume? of the
current version space. However, the distribution P does induce a natural distance function
on H, a pseudometric:
d(h, h? ) = P{x : h(x) 6= h? (x)}.
We can likewise define the notion of neighborhood: B(h, r) = {h? ? H : d(h, h? ) ? r}.
We will be dealing with a separable learning scenario, in which all labels correspond perfectly to some concept h? ? H, and the goal is to find h ? H such that d(h? , h) ? ?. To
do this, it is sufficient to whittle down the version space to the point where it has diameter
at most ?, and to then return any of the remaining hypotheses. Likewise, if the diameter of
the current version space is more than ? then any hypothesis chosen from it will have error
more than ?/2 with respect to the worst-case target. Thus, in a non-Bayesian setting, active
learning is about reducing the diameter of the version space.
If our current version space is S ? H, how can we quantify the amount by which a point
x ? X reduces its diameter? Let Hx+ denote the classifiers that assign x a value of 1,
Hx+ = {h ? H : h(x) = 1}, and let Hx? be the remainder, which assign it a value of 0.
We can think of x as a cut through hypothesis space; see Figure 2(a). In this example, x is
clearly helpful, but it doesn?t reduce the diameter of S. And we cannot say that it reduces
the average distance between hypotheses, since again there is no measure on H. What x
seems to be doing is to reduce the diameter in a certain ?direction?. Is there some notion in
arbitrary metric spaces which captures this intuition?
Consider any finite Q ? H ? H. We will think of an element (h, h? ) ? Q as an edge
between vertices h and h? . For us, each such edge will represent a pair of hypotheses
which need to be distinguished from one another: that is, they are relatively far apart, so
there is no way to achieve our target accuracy if both of them remain in the version space.
We would hope that for any finite set of edges Q, there are queries that will remove a
substantial fraction of them.
To this end, a point x ? X is said to ?-split Q if its label is guaranteed to reduce the number
of edges by a fraction ? > 0, that is, if:
max{|Q ? (Hx+ ? Hx+ )|, |Q ? (Hx? ? Hx? )|} ? (1 ? ?)|Q|.
For instance, in Figure 2(b), the edges are 3/5-split by x.
If our target accuracy is ?, we only really care about edges of length more than ?. So define
Q? = {(h, h? ) ? Q : d(h, h? ) > ?}.
Finally, we say that a subset of hypotheses S ? H is (?, ?, ? )-splittable if for all finite
edge-sets Q ? S ? S,
P{x : x ?-splits Q? } ? ?.
Paraphrasing, at least a ? fraction of the distribution P is useful for splitting S.2 This ?
gives a sense of how many unlabeled samples are needed. If ? is miniscule, then there are
good points to query, but these will emerge only in an enormous pool of unlabeled data. It
will soon transpire that the parameters ?, ? play roughly the following roles:
# labels needed ? 1/?, # of unlabeled points needed ? 1/?
A first step towards understanding them is to establish a trivial lower bound on ?.
Lemma 1 Pick any 0 < ?, ? < 1, and any set S. Then S is ((1 ? ?)?, ?, ??)-splittable.
Proof. Pick any finite edge-set Q ? S ? S. Let Z denote the number of edges of Q? cut
by a point x chosen at random from P. Since the edges have length at least ?, this x has at
least an ? chance of cutting any of them, whereby EZ ? ?|Q? |. Now,
?|Q? | ? EZ ? P(Z ? (1 ? ?)?|Q? |) ? |Q? | + (1 ? ?)?|Q? |,
which after rearrangement becomes P(Z ? (1 ? ?)?|Q? |) ? ??, as claimed.
Thus, ? is always ?(?); but of course, we hope for a much larger value. We will now see
that the splitting index roughly characterizes the sample complexity of active learning.
2.3 Lower bound
We start by showing that if some region of the hypothesis space has a low splitting index,
then it must contain hypotheses which are not conducive to active learning.
Theorem 2 Fix a hypothesis space H and distribution P. Suppose that for some ?, ? < 1
and ? < 1/2, S ? H is not (?, ?, ? )-splittable. Then any active learner which achieves
an accuracy of ? on all target hypotheses in S, with confidence > 3/4 (over the random
sampling of data), either needs ? 1/? unlabeled samples or ? 1/? labels.
Proof. Let Q? be the set of edges of length > ? which defies splittability, with vertices
V = {h : (h, h? ) ? Q? for some h? ? H}. We?ll show that in order to distinguish between
hypotheses in V, either 1/? unlabeled samples or 1/? queries are needed.
So pick less than 1/? unlabeled samples. With probability at least (1 ? ? )1/? ? 1/4,
none of these points ?-splits Q? ; put differently, each of these potential queries has a bad
outcome (+ or ?) in which at most ?|Q? | edges are eliminated. In this case there must be
a target hypothesis in V for which at least 1/? labels are required.
In our examples, we will apply this lower bound through the following simple corollary.
2
Whenever an edge of length l ? ? can be constructed in S, then by taking Q to consist solely of
this edge, we see that ? ? l. Thus we typically expect ? to be at most about ?, although of course it
might be a good deal smaller than this.
Let S0 be an ?0 -cover of H
for t = 1, 2, . . . , T = lg 2/?:
St = split(St?1 , 1/2t)
return any h ? ST
function split(S, ?)
Let Q0 = {(h, h? ) ? S ? S : d(h, h? ) > ?}
Repeat for t = 0, 1, 2, . . .:
Draw m unlabeled points xt1 , . . . , xtm
Query the xti which maximally splits Qt
Let Qt+1 be the remaining edges
until Qt+1 = ?
return remaining hypotheses in S
Figure 3: A generic active learner.
Corollary 3 Suppose that in some neighborhood B(h0 , ?), there are hypotheses
h1 , . . . , hN such that: (1) d(h0 , hi ) > ? for all i; and (2) the ?disagree sets? {x : h0 (x) 6=
hi (x)} are disjoint for different i.
Then for any ? and any ? > 1/N , the set B(h0 , ?) is not (?, ?, ? )-splittable . Any active
learning scheme which achieves an accuracy of ? on all of B(h0 , ?) must use at least N
labels for some of the target hypotheses, no matter how much unlabeled data is available.
In this case, the distance metric on h0 , h1 , . . . , hN can accurately be depicted as a star with
h0 at the center and with spokes leading to each hi . Each query only cuts off one spoke, so
N queries are needed.
2.4 Upper bound
We now show a loosely matching upper bound on sample complexity, via an algorithm
(Figure 3) which repeatedly halves the diameter of the remaining version space. For some
?0 less than half the target error rate ?, it starts with an ?0 -cover of H: a set of hypotheses
S0 ? H such that any h ? H is within distance ?0 of S0 . It is well-known that it is possible
to find such an S0 of size ? 2(2e/?0 ln 2e/?0 )d [9](Theorem 5). The ?0 -cover serves as a
surrogate for the hypothesis class ? for instance, the final hypothesis is chosen from it.
The algorithm is hopelessly intractable and is meant only to demonstrate the following
upper bound.
Theorem 4 Let the target hypothesis be some h? ? H. Pick any target accuracy ? > 0
and confidence level ? > 0. Suppose B(h? , 4?) is (?, ?, ? )-splittable for all ? ? ?/2.
Then there is an appropriate choice of ?0 and m for which, with probability at least 1 ? ?,
?
?
the algorithm will draw O((1/?)
+ (d/?? )) unlabeled points, make O(d/?)
queries, and
return a hypothesis with error at most ?.
This theorem makes it possible to derive label complexity bounds which are fine-tuned to
the specific target hypothesis. At the same time, it is extremely loose in that no attempt has
been made to optimize logarithmic factors.
3 Examples
3.1 Simple boundaries on the line
Returning to our first example, let X = R and H = {hw : w ? R}, where each hw is a
threshold function hw (x) = 1(x ? w). Suppose P is the underlying distribution on X ; for
simplicity we?ll assume it?s a density, although the discussion can easily be generalized.
The distance measure P induces on H is
d(hw , hw? ) = P{x : hw (x) 6= hw? (x)} = P{x : w ? x < w? } = P[w, w? )
(assuming w? ? w). Pick any accuracy ? > 0 and consider any finite set of edges Q =
{(hwi , hwi? ) : i = 1, . . . , n}, where without loss of generality the wi are in nondecreasing
order, and where each edge has length greater than ?: P[wi , wi? ) > ?. Pick w so that
P[wn/2 , w) = ?. It is easy to see that any x ? [wn/2 , w) must eliminate at least half the
edges in Q. Therefore, H is (? = 1/2, ?, ?)-splittable for any ? > 0.
This echoes the simple fact that active-learning H is just a binary search.
3.2 Intervals on the line
The next case we consider is almost identical to our earlier example of 2-d linear separators
(and the results carry over to that example, within constant factors). The hypotheses correspond to intervals on the real line: X = R and H = {ha,b : a, b ? R}, where ha,b (x) =
1(a ? x ? b). Once again assume P is a density. The distance measure it induces is
d(ha,b , ha? ,b? ) = P{x : x ? [a, b] ? [a? , b? ], x 6? [a, b] ? [a? , b? ]} = P([a, b]?[a? , b? ]),
where S?T denotes symmetric difference (S ? T ) \ (S ? T ).
Even in this very simple class, some hypotheses are much easier to active-learn than others.
Hypotheses not amenable to active-learning. Divide the real line into 1/? disjoint intervals,
each with probability mass ?, and let {hi : i = 1, ..., 1/?} denote the hypotheses taking
value 1 on the corresponding intervals. Let h0 be the everywhere-zero concept. Then these
hi satisfy the conditions of Corollary 3; their star-shaped configuration forces a ?-value of
?, and active learning doesn?t help at all in choosing amongst them.
Hypotheses amenable to active learning. The bad hypotheses are the ones whose intervals
have small probability mass. We?ll now see that larger concepts are not so bad; in particular,
for any h whose interval has mass > 4?, B(h, 4?) is (? = ?(1), ?, ?(?))-splittable.
Pick any ? > 0 and any ha,b such that P[a, b] = r > 4?. Consider a set of edges Q whose
endpoints are in B(ha,b , 4?) and which all have length > ?. In the figure below, all lengths
denote probability masses. Any concept in B(ha,b , 4?) (more precisely, its interval) must
lie within the outer box and must contain the inner box (this inner box might be empty).
r
a
4?
b
4?
4?
4?
Any edge (ha? ,b? , ha?? ,b?? ) ? Q has length > ?, so [a? , b? ]?[a?? , b?? ] (either a single interval
or a union of two intervals) has total length > ? and lies between the inner and outer boxes.
Now pick x at random from the distribution P restricted to the space between the two
boxes. This space has mass at most 16? and at least 4?, of which at least ? is occupied by
[a? , b? ]?[a?? , b?? ]. Therefore x separates ha? ,b? from ha?? ,b?? with probability ? 1/16.
Now let?s look at all of Q. The expected number of edges split by our x is at least |Q|/16,
and therefore the probability that more than |Q|/32 edges are split is at least 1/32. So
P{x : x (1/32)-splits Q} ? 4?/32 = ?/8.
To summarize, for any hypothesis ha,b , let i(ha,b ) = P[a, b] denote the probability mass of
its interval. Then for any h ? H and any ? < i(h)/4, the set B(h, 4?) is (1/32, ?, ?/8)splittable. In short, once the version space is whittled down to B(h, i(h)/4), efficient active
learning is possible. And the initial phase of getting to B(h, i(h)/4) can be managed by
?
random sampling, using O(1/i(h))
labels: not too bad when i(h) is large.
3.3 Linear separators under the uniform distribution
The most encouraging positive result for active learning to date has been for learning homogeneous (through the origin) linear separators with data drawn uniformly from the surface
of the unit sphere in Rd . The splitting indices for this case [5] bring this out immediately:
?
?
Theorem 5 For any h ? H, any ? ? 1/(32? 2 d), B(h, 4?) is ( 81 , ?, ?(?/ d))-splittable.
4 Related work and open problems
There has been a lot of work on a related model in which the points to be queried are
synthetically constructed, rather than chosen from unlabeled data [1]. The expanded role
of P in our model makes it substantially different, although a few intuitions do carry over
? for instance, Corollary 3 generalizes the notion of teaching dimension[8].
We have already discussed [7, 4, 6]. One other technique which seems useful for active
learning is to look at the unlabeled data and then place bets on certain target hypotheses,
for instance the ones with large margin. This insight ? nicely formulated in [2, 10] ? is not
specific to active learning and is orthogonal to the search issues considered in this paper.
In all the positive examples in this paper, a random data point which intersects the version
space has a good chance of ?(1)-splitting it. This permits a naive active learning strategy,
also suggested in [3]: just pick a random point whose label you are not yet sure of.
On what kinds of problems will this work, and what are prototypical cases where more
intelligent querying is needed?
Acknowledgements. I?m grateful to Yoav Freund for introducing me to this field; to Peter
Bartlett, John Langford, Adam Kalai and Claire Monteleoni for helpful discussions; and to
the anonymous NIPS reviewers for their detailed and perceptive comments.
References
[1] D. Angluin. Queries revisited. ALT, 2001.
[2] M.-F. Balcan and A. Blum. A PAC-style model for learning from labeled and unlabeled data.
Eighteenth Annual Conference on Learning Theory, 2005.
[3] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201?221, 1994.
[4] S. Dasgupta. Analysis of a greedy active learning strategy. NIPS, 2004.
[5] S. Dasgupta. Full version of this paper at www.cs.ucsd.edu/?dasgupta/papers/sample.ps.
[6] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Eighteenth Annual Conference on Learning Theory, 2005.
[7] Y. Freund, S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee
algorithm. Machine Learning Journal, 28:133?168, 1997.
[8] S. Goldman and M. Kearns. On the complexity of teaching. Journal of Computer and System
Sciences, 50(1):20?31, 1995.
[9] D. Haussler. Decision-theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78?150, 1992.
[10] J. Shawe-Taylor, P. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over
data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926?1940, 1998.
| 2943 |@word version:10 achievable:1 seems:2 open:1 paid:1 asks:3 pick:9 whittled:1 carry:2 initial:1 configuration:1 efficacy:1 tuned:1 current:3 yet:2 must:7 john:1 benign:1 remove:1 atlas:1 half:5 greedy:1 short:1 coarse:1 revisited:1 along:1 constructed:2 direct:1 expected:1 roughly:2 buying:1 goldman:1 encouraging:2 xti:1 becomes:1 discover:1 notation:1 underlying:4 mass:10 what:5 kind:1 substantially:1 exactly:1 returning:1 classifier:8 unit:4 positive:11 local:1 solely:1 approximately:1 might:2 range:1 bi:4 union:1 x3:3 significantly:1 matching:1 confidence:2 induce:1 get:1 cannot:1 unlabeled:20 put:1 risk:1 www:1 optimize:1 reviewer:1 circumference:1 center:1 eighteenth:2 simplicity:1 splitting:8 immediately:1 insight:1 haussler:1 notion:4 coordinate:1 hierarchy:1 diego:1 target:26 heavily:1 suppose:9 play:1 homogeneous:5 us:1 shamir:1 hypothesis:53 origin:3 element:1 lay:1 cut:6 persist:1 labeled:2 role:2 capture:2 worst:1 region:2 balanced:1 intuition:2 substantial:1 complexity:19 seung:1 grateful:1 learner:3 easily:2 differently:1 various:1 intersects:1 query:12 tell:1 lift:1 neighborhood:2 h0:10 outcome:1 choosing:1 whose:5 larger:2 say:3 think:2 nondecreasing:1 echo:1 final:1 sequence:2 net:1 remainder:1 relevant:1 date:1 achieve:3 getting:1 empty:1 p:1 r1:1 adam:1 help:2 derive:2 depending:1 qt:3 h3:1 c:2 come:1 quantify:1 direction:1 vc:3 hx:9 assign:2 fix:2 generalization:2 really:1 anonymous:1 considered:1 achieves:2 omitted:1 label:29 hope:3 minimization:1 clearly:2 always:4 rather:2 occupied:1 kalai:2 lifted:1 bet:1 corollary:4 improvement:3 sense:1 helpful:3 dependent:1 xtm:1 typically:1 eliminate:1 hidden:1 selective:1 issue:1 among:1 fairly:1 uc:1 field:2 once:2 shaped:1 nicely:1 sampling:3 eliminated:1 identical:1 look:4 others:1 intelligent:1 few:2 tightly:1 geometry:1 phase:1 attempt:1 rearrangement:1 hwi:2 x22:1 perimeter:2 amenable:2 accurate:1 edge:23 necessary:1 orthogonal:1 divide:2 loosely:1 taylor:1 desired:2 circle:5 theoretical:1 instance:9 formalism:1 earlier:1 cover:3 yoav:1 introducing:1 vertex:2 subset:1 uniform:4 too:1 tishby:1 characterize:2 motivating:1 chooses:1 st:3 density:3 off:2 pool:3 again:2 choose:1 hn:2 leading:1 return:5 style:1 account:3 potential:1 star:2 b2:1 whittle:1 matter:2 satisfy:1 explicitly:2 depends:1 h1:3 lot:1 doing:1 characterizes:1 portion:1 start:2 accuracy:9 splittable:9 likewise:2 correspond:3 identify:3 generalize:1 bayesian:2 accurately:1 none:1 classified:1 monteleoni:2 whenever:1 definition:1 nonetheless:1 proof:4 supervised:4 maximally:1 done:1 box:5 generality:1 just:8 anywhere:1 stage:1 until:2 langford:1 cohn:1 paraphrasing:1 perhaps:1 concept:4 contain:2 managed:1 vicinity:1 q0:1 symmetric:1 deal:1 ll:3 whereby:1 generalized:1 theoretic:1 demonstrate:2 bring:1 balcan:1 endpoint:1 exponentially:1 volume:1 discussed:1 queried:2 rd:2 teaching:2 shawe:1 operating:1 surface:1 hide:1 apart:1 scenario:1 claimed:1 certain:2 binary:5 accomplished:1 greater:1 care:1 full:2 reduces:2 conducive:1 offer:1 sphere:2 halt:1 basic:1 metric:2 sometimes:1 represent:1 want:1 fine:1 interval:11 sure:1 comment:1 structural:1 presence:1 synthetically:1 split:10 enough:1 wn:2 easy:1 perfectly:4 topology:1 reduce:3 inner:3 bartlett:2 peter:1 repeatedly:1 useful:2 detailed:1 amount:6 induces:3 diameter:7 angluin:1 notice:1 disjoint:3 dasgupta:6 threshold:1 enormous:2 blum:1 drawn:1 spoke:2 concreteness:1 fraction:3 run:1 everywhere:1 you:1 place:2 almost:1 draw:5 decision:1 bound:9 hi:7 followed:1 distinguish:2 guaranteed:1 annual:2 
precisely:1 x2:1 min:1 extremely:1 pseudometric:1 separable:4 expanded:2 relatively:1 smaller:2 describes:1 remain:1 wi:3 restricted:1 taken:1 ln:1 labels1:1 turn:2 r3:2 loose:1 needed:13 committee:1 end:1 serf:1 available:1 generalizes:1 permit:2 apply:2 generic:2 appropriate:1 distinguished:1 denotes:1 remaining:4 include:1 x21:1 establish:1 already:1 quantity:1 occurs:1 strategy:2 usual:4 surrogate:1 said:1 amongst:1 distance:6 separate:1 outer:2 evenly:1 me:1 trivial:2 assuming:1 length:9 index:5 lg:1 potentially:1 trace:2 negative:6 disagree:1 upper:3 ladner:1 arc:1 finite:5 ucsd:2 arbitrary:2 pair:1 required:1 polylogarithmic:1 nip:2 suggested:1 below:1 summarize:1 max:1 natural:4 force:1 scheme:4 naive:1 prior:1 understanding:1 acknowledgement:1 freund:2 loss:1 expect:1 mixed:2 prototypical:1 proportional:1 querying:1 h2:1 sufficient:1 consistent:1 s0:4 thresholding:1 claire:1 course:2 supported:1 repeat:1 soon:1 perceptron:1 taking:2 emerge:1 midpoint:1 distributed:1 slice:2 boundary:2 dimension:6 transition:1 doesn:2 made:2 commonly:1 san:1 far:2 transaction:1 cutting:1 dealing:1 active:36 b1:1 xt1:1 search:6 learn:3 nature:1 improving:1 requested:1 williamson:1 separator:13 anthony:1 main:1 x1:1 exponential:2 lie:4 third:1 hw:9 down:3 theorem:5 bad:6 specific:6 showing:1 pac:2 r2:3 alt:1 consist:2 intractable:1 adding:1 margin:1 easier:1 generalizing:1 depicted:1 logarithmic:1 ez:2 expressed:1 hopelessly:1 applies:1 chance:2 goal:4 formulated:1 towards:1 absence:1 hard:1 specifically:1 except:1 uniformly:2 reducing:1 lemma:1 kearns:1 total:1 sanjoy:1 perceptive:1 meant:1 |
2,141 | 2,944 | Active Learning for Misspecified Models
Masashi Sugiyama
Department of Computer Science, Tokyo Institute of Technology
2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan
[email protected]
Abstract
Active learning is the problem in supervised learning to design the locations of training input points so that the generalization error is minimized.
Existing active learning methods often assume that the model used for
learning is correctly specified, i.e., the learning target function can be expressed by the model at hand. In many practical situations, however, this
assumption may not be fulfilled. In this paper, we first show that the existing active learning method can be theoretically justified under slightly
weaker condition: the model does not have to be correctly specified, but
slightly misspecified models are also allowed. However, it turns out that
the weakened condition is still restrictive in practice. To cope with this
problem, we propose an alternative active learning method which can be
theoretically justified for a wider class of misspecified models. Thus,
the proposed method has a broader range of applications than the existing method. Numerical studies show that the proposed active learning
method is robust against the misspecification of models and is thus reliable.
1
Introduction and Problem Formulation
Let us discuss the regression problem of learning a real-valued function
Rd from training examples
f (x) defined on
f(x ; y ) j y = f (x ) + g =1 ;
i
i
i
i
n
i
i
where fign
are i.i.d. noise with mean zero and unknown variance 2. We use the foli=1
lowing linear regression model for learning.
fb(x) =
p
X
' (x);
i
=1
i
i
()
where f'i x gpi=1 are fixed linearly independent functions and
are parameters to be learned.
( )
= (1; 2; : : :; )>
p
We evaluate the goodness of the learned function fb x by the expected squared test error
over test input points and noise (i.e., the generalization error). When the test input points
are drawn independently from a distribution with density pt x , the generalization error is
expressed as
Z
( )
G = E
2
fb(x) f (x ) p (x)dx;
t
where E denotes the expectation over the noise fi gn
i=1 . In the following, we suppose that
pt x is known1.
()
In a standard setting of regression, the training input points are provided from the environment, i.e., fxi gn
i=1 independently follow the distribution with density pt x . On the other
hand, in some cases, the training input points can be designed by users. In such cases,
it is expected that the accuracy of the learning result can be improved if the training input
points are chosen appropriately, e.g., by densely locating training input points in the regions
of high uncertainty.
( )
Active learning?also referred to as experimental design?is the problem of optimizing the
location of training input points so that the generalization error is minimized. In active
learning research, it is often assumed that the regression model is correctly specified [2,
1, 3], i.e., the learning target function f x can be expressed by the model. In practice,
however, this assumption is often violated.
( )
In this paper, we first show that the existing active learning method can still be theoretically justified when the model is approximately correct in a strong sense. Then we propose
an alternative active learning method which can also be theoretically justified for approximately correct models, but the condition on the approximate correctness of the models is
weaker than that for the existing method. Thus, the proposed method has a wider range of
applications.
are independently drawn
In the following, we suppose that the training input points fxi gn
i=1
from a user-defined distribution with density px x , and discuss the problem of finding the
optimal density function.
()
2
Existing Active Learning Method
The generalization error G defined by Eq.(1) can be decomposed as
G = B + V;
where B is the (squared) bias term and V is the variance term given by
B=
Z
E fb(x)
2
f (x ) p (x)dx
t
= E
V
and
Z
2
fb(x) E fb(x) p (x)dx:
t
A standard way to learn the parameters in the regression model (1) is the ordinary leastsquares learning, i.e., parameter vector is determined as follows.
b
OLS
= argmin
b OLS is given by
It is known that
b
OLS
where
L
OLS
= (X >X ) 1X >;
X
i;j
"
n
X
fb(x ) y
2
i
=1
i
#
:
i
=L
OLS
= ' (x );
j
i
y;
and
y = (y1 ; y2 ; : : :; y )> :
n
Let GOLS , BOLS and VOLS be G, B and V for the learned function obtained by the
ordinary least-squares learning, respectively. Then the following proposition holds.
1
In some application domains such as web page analysis or bioinformatics, a large number of
unlabeled samples?input points without output values independently drawn from the distribution
with density pt (x)?are easily gathered. In such cases, a reasonably good estimate of pt (x) may
be obtained by some standard density estimation method. Therefore, the assumption that pt (x) is
known may not be so restrictive.
Proposition 1 ([2, 1, 3]) Suppose that the model is correctly specified, i.e., the learning
target function f x is expressed as
()
f (x ) =
p
X
=1
' (x):
i
i
i
Then BOLS and VOLS are expressed as
B
OLS
=0
= 2 J
V
and
OLS
where
J
OLS
= tr(UL
OLS
L> )
OLS
=
U
and
i;j
Z
OLS
;
' (x)' (x)p (x)dx:
i
j
t
Therefore, for the correctly specified model (1), the generalization error GOLS is expressed
as
G
OLS
= 2 J
OLS
:
Based on this expression, the existing active learning method determines the location of
training input points fxi gn
(or the training input density px x ) so that JOLS is minii=1
mized [2, 1, 3].
( )
3
Analysis of Existing Method under Misspecification of Models
In this section, we investigate the validity of the existing active learning method for misspecified models.
( )
Suppose the model does not exactly include the learning target function f x , but it approximately includes it, i.e., for a scalar ? such that j? j is small, f x is expressed as
( )
f (x) = g(x) + ?r(x);
where g(x) is the orthogonal projection of f (x) onto the span
residual r(x) is orthogonal to f' (x)g =1 :
p
i
g(x) =
p
X
=1
and
i
i
f' (x)g =1 and the
i
p
i
i
Z
' (x)
of
r(x)' (x)p (x)dx = 0
i
t
for i
= 1; 2; : : :; p:
i
In this case, the bias term B is expressed as
B=
Z
(x)
E fb
2
g(x) p (x)dx + C;
t
where
C=
Z
(g (x )
f (x))2 p (x)dx:
Since C is constant which does not depend on the training input density px
C in the following discussion.
Then we have the following lemma2 .
Lemma 2 For the approximately correct model (3), we have
B
C = ? 2 hUL
V
= 2 J
OLS
OLS
where
z ; L z i = O (? 2 );
= O (n 1);
OLS
OLS
r
OLS
p
z = (r(x1 ); r(x2 ); : : :; r(x ))> :
r
2
r
Proofs of lemmas are provided in an extended version [6].
n
t
(x), we subtract
Note that the asymptotic order in Eq.(1) is in probability since VOLS is a random variable
that includes fxi gn
. The above lemma implies that
i=1
C = 2 J
G
OLS
OLS
+ o (n 1)
if ?
p
= o (n ):
1
2
p
=
Therefore, the existing active learning method of minimizing JOLS is still justified if ?
op n 21 . However, when ? 6 op n 12 , the existing method may not work well because
the bias term BOLS
C is not smaller than the variance term VOLS , so it can not be
neglected.
(
4
)
= (
)
New Active Learning Method
In this section, we propose a new active learning method based on the weighted leastsquares learning.
4.1 Weighted Least-Squares Learning
b OLS is an unbiased estimator of . However, for
When the model is correctly specified,
b OLS is generally biased even asymptotically if ?
misspecified models,
Op .
= (1)
b OLS is actually caused by the covariate shift [5]?the training input density
The bias of
px x is different from the test input density pt x . For correctly specified models, influence of the covariate shift can be ignored, as the existing active learning method does.
However, for misspecified models, we should explicitly cope with the covariate shift.
()
( )
Under the covariate shift, it is known that the following weighted least-squares learning is
[5].
asymptotically unbiased even if ? Op
= "(1)
b
W LS
#
2
p (x ) b
f (x ) y
:
p (x )
=1
n
X
= argmin
t
i
x
i
i
i
i
b W LS would be intuitively understood by the following idenAsymptotic unbiasedness of
tity, which is similar in spirit to importance sampling:
Z
(x)
fb
2
f (x) p (x)dx =
Z
fb(x) f (x )
t
()
2
p (x)
p (x)dx:
p (x)
t
x
x
In the following, we assume that px x is strictly positive for all x. Let D be the diagonal
matrix with the i-th diagonal element
D
i;i
= pp ((xx )) :
t
i
x
i
b W LS is given by
Then it can be confirmed that
b
W LS
=L
W LS
y;
where
L
= (X >DX ) 1X >D:
W LS
4.2 Active Learning Based on Weighted Least-Squares Learning
Let GW LS , BW LS and VW LS be G, B and V for the learned function obtained by the
above weighted least-squares learning, respectively. Then we have the following lemma.
Lemma 3 For the approximately correct model (3), we have
B
C = ? 2 hUL
V
= 2 J
W LS
W LS
where
W LS
J
W LS
z ; L z i = O (? 2 n 1 );
= O (n 1);
W LS
r
W LS
r
p
= tr(UL
W LS
L> ):
W LS
p
This lemma implies that
G
C = 2 J
+ o (n 1)
= (1)
if ? op :
Based on this expression, we propose determining the training input density px
JW LS is minimized.
W LS
W LS
p
(x) so that
= (1)
The use of the proposed criterion JW LS can be theoretically justified when ?
op ,
1
while the existing criterion JOLS requires ? op n 2 . Therefore, the proposed method
has a wider range of applications. The effect of this extension is experimentally investigated
in the next section.
= (
5
)
Numerical Examples
We evaluate the usefulness of the proposed active learning method through experiments.

Toy Data Set: We first illustrate how the proposed method works under a controlled setting.

Let d = 1 and the learning target function f(x) be $f(x) = 1 + x + x^2 + \delta x^3$. Let n = 100 and $\{\epsilon_i\}_{i=1}^{100}$ be i.i.d. Gaussian noise with mean zero and standard deviation 0.3. Let $p_t(x)$ be the Gaussian density with mean 0.2 and standard deviation 0.4, which is assumed to be known here. Let p = 3 and the basis functions be $\varphi_i(x) = x^{i-1}$ for i = 1, 2, 3. Let us consider the following three cases, δ = 0, 0.04, 0.5, where each case corresponds to "correctly specified", "approximately correct", and "misspecified" (see Figure 1). We choose the training input density $p_x(x)$ from the Gaussian density with mean 0.2 and standard deviation 0.4c, where

$$c = 0.8, 0.9, 1.0, \ldots, 2.5.$$

We compare the accuracy of the following three methods:

(A) Proposed active learning criterion + WLS learning: The training input density is determined so that $J_{WLS}$ is minimized. Following the determined input density, training input points $\{x_i\}_{i=1}^{100}$ are created and corresponding output values $\{y_i\}_{i=1}^{100}$ are observed. Then WLS learning is used for estimating the parameters.

(B) Existing active learning criterion + OLS learning [2, 1, 3]: The training input density is determined so that $J_{OLS}$ is minimized. OLS learning is used for estimating the parameters.

(C) Passive learning + OLS learning: The test input density $p_t(x)$ is used as the training input density. OLS learning is used for estimating the parameters.

First, we evaluate the accuracy of $J_{WLS}$ and $J_{OLS}$ as approximations of $G_{WLS}$ and $G_{OLS}$. The means and standard deviations of $G_{WLS}$, $J_{WLS}$, $G_{OLS}$, and $J_{OLS}$ over 100 runs are depicted as functions of c in Figure 2. These graphs show that when δ = 0 ("correctly specified"), both $J_{WLS}$ and $J_{OLS}$ give accurate estimates of $G_{WLS}$ and $G_{OLS}$. When δ = 0.04 ("approximately correct"), $J_{WLS}$ again works well, while $J_{OLS}$ tends to be negatively biased for large c. This result is surprising since, as illustrated in Figure 1, the learning target functions with δ = 0 and δ = 0.04 are visually quite similar. Therefore, it intuitively seems that the result of δ = 0.04 is not much different from that of δ = 0. However, the simulation result shows that this slight difference makes $J_{OLS}$ unreliable. When δ = 0.5 ("misspecified"), $J_{WLS}$ is still reasonably accurate, while $J_{OLS}$ is heavily biased.

These results show that as an approximation of the generalization error, $J_{WLS}$ is more robust against the misspecification of models than $J_{OLS}$, which is in good agreement with the theoretical analyses given in Section 3 and Section 4.
Figure 1: Learning target function f(x) (for δ = 0, 0.04, 0.5) and input density functions $p_t(x)$ and $p_x(x)$. [Plots omitted.]

Table 1: The means and standard deviations of the generalization error for the Toy data set. The best method and comparable ones by the t-test at the significance level 5% are described with boldface. The value of method (B) for δ = 0.5 is extremely large but it is not a typo. All values in the table are multiplied by 10^3.

          δ = 0          δ = 0.04       δ = 0.5
   (A)    2.02 ± 0.07    1.34 ± 0.04    5.94 ± 0.80
   (B)    1.99 ± 0.07    3.27 ± 1.23    303 ± 197
   (C)    2.60 ± 0.44    2.62 ± 0.43    6.87 ± 1.15

Figure 2: The means and error bars of $G_{WLS}$, $J_{WLS}$, $G_{OLS}$, and $J_{OLS}$ over 100 runs as functions of c, for the "correctly specified" (δ = 0), "approximately correct" (δ = 0.04), and "misspecified" (δ = 0.5) cases. [Plots omitted.]
In Table 1, the mean and standard deviation of the generalization error obtained by each method is described. When δ = 0, the existing method (B) works better than the proposed method (A). Actually, in this case, training input densities that approximately minimize $G_{WLS}$ and $G_{OLS}$ were found by $J_{WLS}$ and $J_{OLS}$. Therefore, the difference of the errors is caused by the difference of WLS and OLS: WLS generally has larger variance than OLS. Since bias is zero for both WLS and OLS if δ = 0, OLS would be more accurate than WLS. Although the proposed method (A) is outperformed by the existing method (B), it still works better than the passive learning scheme (C). When δ = 0.04 and δ = 0.5, the proposed method (A) gives significantly smaller errors than other methods.

Overall, we found that for all three cases, the proposed method (A) works reasonably well and outperforms the passive learning scheme (C). On the other hand, the existing method (B) works excellently in the correctly specified case, although it tends to perform poorly once the correctness of the model is violated. Therefore, the proposed method (A) is found to be robust against the misspecification of models and thus it is reliable.
Table 2: The means and standard deviations of the test error for DELVE data sets. All values in the table are multiplied by 10^3.

          Bank-8fm       Bank-8fh       Bank-8nm        Bank-8nh
   (A)    0.31 ± 0.04    2.10 ± 0.05    24.66 ± 1.20    37.98 ± 1.11
   (B)    0.44 ± 0.07    2.21 ± 0.09    27.67 ± 1.50    39.71 ± 1.38
   (C)    0.35 ± 0.04    2.20 ± 0.06    26.34 ± 1.35    39.84 ± 1.35

          Kin-8fm        Kin-8fh        Kin-8nm         Kin-8nh
   (A)    1.59 ± 0.07    5.90 ± 0.16    3.68 ± 0.09     0.72 ± 0.04
   (B)    1.49 ± 0.06    5.63 ± 0.13    3.60 ± 0.09     0.85 ± 0.06
   (C)    1.70 ± 0.08    6.27 ± 0.24    3.89 ± 0.14     0.81 ± 0.06

Figure 3: Mean relative performance of (A) and (B) compared with (C). For each run, the test errors of (A) and (B) are normalized by the test error of (C), and then the values are averaged over 100 runs. Note that the error bars were reasonably small so they were omitted. [Plot omitted.]
Realistic Data Set: Here we use eight practical data sets provided by DELVE [4]: Bank-8fm, Bank-8fh, Bank-8nm, Bank-8nh, Kin-8fm, Kin-8fh, Kin-8nm, and Kin-8nh. Each data set includes 8192 samples, consisting of 8-dimensional input and 1-dimensional output values. For convenience, every attribute is normalized into [0, 1].

Suppose we are given all 8192 input points (i.e., unlabeled samples). Note that output values are unknown. From the pool of unlabeled samples, we choose n = 1000 input points $\{x_i\}_{i=1}^{1000}$ for training and observe the corresponding output values $\{y_i\}_{i=1}^{1000}$. The task is to predict the output values of all unlabeled samples.

In this experiment, the test input density $p_t(x)$ is unknown. So we estimate it using the independent Gaussian density

$$p_t(x) = (2\pi \hat{\sigma}_{MLE}^2)^{-d/2} \exp\left( -\|x - \hat{\mu}_{MLE}\|^2 / (2\hat{\sigma}_{MLE}^2) \right),$$

where $\hat{\mu}_{MLE}$ and $\hat{\sigma}_{MLE}$ are the maximum likelihood estimates of the mean and standard deviation obtained from all unlabeled samples. Let p = 50 and the basis functions be

$$\varphi_i(x) = \exp\left( -\|x - t_i\|^2 / 2 \right) \quad \text{for } i = 1, 2, \ldots, 50,$$

where $\{t_i\}_{i=1}^{50}$ are template points randomly chosen from the pool of unlabeled samples.

We select the training input density $p_x(x)$ from the independent Gaussian density with mean $\hat{\mu}_{MLE}$ and standard deviation $c\,\hat{\sigma}_{MLE}$, where

$$c = 0.7, 0.75, 0.8, \ldots, 2.4.$$

In this simulation, we cannot create the training input points in an arbitrary location because we only have 8192 samples. Therefore, we first create temporary input points following the determined training input density, and then choose the input points from the pool of unlabeled samples that are closest to the temporary input points. For each data set, we repeat this simulation 100 times, by changing the template points $\{t_i\}_{i=1}^{50}$ in each run.
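A minimal sketch of this pool-based resampling step is given below (our illustration; the text does not specify whether duplicate picks are allowed, so we disallow them here):

```python
import numpy as np

def sample_from_pool(pool, mu, sigma, n, rng):
    """Draw n temporary points from N(mu, diag(sigma^2)), then return the
    nearest distinct points in the unlabeled pool (pool: shape [N, d])."""
    temp = rng.normal(mu, sigma, size=(n, pool.shape[1]))
    chosen = []
    available = np.ones(len(pool), dtype=bool)
    for t in temp:
        dist = np.linalg.norm(pool - t, axis=1)
        dist[~available] = np.inf          # assumption: no duplicate picks
        idx = int(np.argmin(dist))
        available[idx] = False
        chosen.append(idx)
    return pool[chosen]
```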
The means and standard deviations of the test error over 100 runs are described in Table 2. The proposed method (A) outperforms the existing method (B) for five data sets, while it is outperformed by (B) for the other three data sets. We conjecture that the model used for learning is almost correct in these three data sets. This result implies that the proposed method (A) is slightly better than the existing method (B).

Figure 3 depicts the relative performance of the proposed method (A) and the existing method (B) compared with the passive learning scheme (C). This shows that (A) outperforms (C) for all eight data sets, while (B) is comparable or is outperformed by (C) for five data sets. Therefore, the proposed method (A) is overall shown to work better than other schemes.
6 Conclusions
We argued that active learning is essentially the situation under the covariate shift: the training input density is different from the test input density. When the model used for learning is correctly specified, the covariate shift does not matter. However, for misspecified models, we have to explicitly cope with the covariate shift. In this paper, we proposed a new active learning method based on the weighted least-squares learning.
The numerical study showed that the existing method works better than the proposed method if the model is correctly specified. However, the existing method tends to perform
poorly once the correctness of the model is violated. On the other hand, the proposed
method overall worked reasonably well and it consistently outperformed the passive learning scheme. Therefore, the proposed method would be robust against the misspecification
of models and thus it is reliable.
The proposed method can be theoretically justified if the model is approximately correct
in a weak sense. However, it is no longer valid for totally misspecified models. A natural
future direction would therefore be to devise an active learning method which has a theoretical guarantee even for totally misspecified models. It is also important to notice that when the
model is totally misspecified, even learning with optimal training input points would not
be successful anyway. In such cases, it is of course important to carry out model selection.
In active learning research (including the present paper), however, the location of training input points is designed for a single model at hand. That is, the model should have
been chosen before performing active learning. Devising a method for simultaneously optimizing models and the location of training input points would be a more important and
promising future direction.
Acknowledgments: The author would like to thank MEXT (Grant-in-Aid for Young Scientists 17700142) for partial financial support.
References
[1] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129-145, 1996.
[2] V. V. Fedorov. Theory of Optimal Experiments. Academic Press, New York, 1972.
[3] K. Fukumizu. Statistical active learning in multilayer perceptrons. IEEE Transactions on Neural Networks, 11(1):17-26, 2000.
[4] C. E. Rasmussen, R. M. Neal, G. E. Hinton, D. van Camp, M. Revow, Z. Ghahramani, R. Kustra, and R. Tibshirani. The DELVE manual, 1996.
[5] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
[6] M. Sugiyama. Active learning for misspecified models. Technical report, Department of Computer Science, Tokyo Institute of Technology, 2005.
Estimation of Intrinsic Dimensionality Using
High-Rate Vector Quantization
Maxim Raginsky and Svetlana Lazebnik
Beckman Institute, University of Illinois
405 N Mathews Ave, Urbana, IL 61801
{maxim,slazebni}@uiuc.edu
Abstract
We introduce a technique for dimensionality estimation based on the notion of quantization dimension, which connects the asymptotic optimal
quantization error for a probability distribution on a manifold to its intrinsic dimension. The definition of quantization dimension yields a family
of estimation algorithms, whose limiting case is equivalent to a recent
method based on packing numbers. Using the formalism of high-rate
vector quantization, we address issues of statistical consistency and analyze the behavior of our scheme in the presence of noise.
1. Introduction
The goal of nonlinear dimensionality reduction (NLDR) [1, 2, 3] is to find low-dimensional
manifold descriptions of high-dimensional data. Most NLDR schemes require a good estimate of the intrinsic dimensionality of the data to be available in advance. A number
of existing methods for estimating the intrinsic dimension (e.g., [3, 4, 5]) rely on the fact that, for data uniformly distributed on a d-dimensional compact smooth submanifold of $\mathbb{R}^D$, the probability of a small ball of radius ε around any point on the manifold is $\Theta(\epsilon^d)$. In this paper, we connect this argument with the notion of quantization dimension [6, 7], which relates the intrinsic dimension of a manifold (a topological property) to the asymptotic optimal quantization error for distributions on the manifold (an operational property). Quantization dimension was originally introduced as a theoretical tool for studying "nonstandard" signals, such as singular distributions [6] or fractals [7]. However, to the best of our knowledge, it has not been previously used for dimension estimation in manifold learning. The definition of quantization dimension leads to a family of dimensionality estimation algorithms, parametrized by the distortion exponent $r \in [1, \infty)$, yielding in the limit of $r = \infty$ a scheme equivalent to Kégl's recent technique based on packing numbers [4].
To date, many theoretical aspects of intrinsic dimensionality estimation remain poorly understood. For instance, while the estimator bias and variance are assessed either heuristically [4] or exactly [5], scant attention is paid to robustness of each particular scheme
against noise. Moreover, existing schemes do not fully utilize the potential for statistical
consistency afforded by ergodicity of i.i.d. data: they compute the dimensionality estimate
from a fixed training sequence (typically, the entire dataset of interest), whereas we show
that an independent test sequence is necessary to avoid overfitting. In addition, using the
framework of high-rate vector quantization allows us to analyze the performance of our
scheme in the presence of noise.
2. Quantization-based estimation of intrinsic dimension
Let us begin by introducing the definitions and notation used in the rest of the paper. A D-dimensional k-point vector quantizer [6] is a measurable map $Q_k : \mathbb{R}^D \to C$, where $C = \{y_1, \ldots, y_k\} \subset \mathbb{R}^D$ is called the codebook and the $y_i$'s are called the codevectors. The number $\log_2 k$ is called the rate of the quantizer, in bits per vector. The sets $R_i = \{x \in \mathbb{R}^D : Q_k(x) = y_i\}$, $1 \le i \le k$, are called the quantizer cells (or partition regions). The quantizer performance on a random vector X distributed according to a probability distribution μ (denoted $X \sim \mu$) is measured by the average r-th-power distortion $\delta_r(Q_k|\mu) = E_\mu \|X - Q_k(X)\|^r$, $r \in [1, \infty)$, where $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^D$. In the sequel, we will often find it more convenient to work with the quantizer error $e_r(Q_k|\mu) = \delta_r(Q_k|\mu)^{1/r}$. Let $\mathcal{Q}_k$ denote the set of all D-dimensional k-point quantizers. Then the performance achieved by an optimal k-point quantizer on X is $\delta_r^*(k|\mu) = \inf_{Q_k \in \mathcal{Q}_k} \delta_r(Q_k|\mu)$ or, equivalently, $e_r^*(k|\mu) = \delta_r^*(k|\mu)^{1/r}$.
2.1. Quantization dimension
The dimensionality estimation method presented in this paper exploits the connection between the intrinsic dimension d of a smooth compact manifold $M \subset \mathbb{R}^D$ (from now on, simply referred to as "manifold") and the asymptotic optimal quantization error for a regular probability distribution¹ on M. When the quantizer rate is high, the partition cells can be well approximated by D-dimensional balls around the codevectors. Then the regularity of μ ensures that the probability of such a ball of radius ε is $\Theta(\epsilon^d)$, and it can be shown [7, 6] that $e_r^*(k|\mu) = \Theta(k^{-1/d})$. This is referred to as the high-rate (or high-resolution) approximation, and motivates the definition of quantization dimension of order r:

$$d_r(\mu) = -\lim_{k \to \infty} \frac{\log k}{\log e_r^*(k|\mu)}.$$

The theory of high-rate quantization confirms that, for a regular μ supported on the manifold M, $d_r(\mu)$ exists for all $1 \le r \le \infty$ and equals the intrinsic dimension of M [7, 6]. (The $r = \infty$ limit will be treated in Sec. 2.2.)
This definition immediately suggests an empirical procedure for estimating the intrinsic dimension of a manifold from a set of samples. Let $X^n = (X_1, \ldots, X_n)$ be n i.i.d. samples from an unknown regular distribution μ on the manifold. We also fix some $r \in [1, \infty)$. Briefly, we select a range $k_1 \le k \le k_2$ of codebook sizes for which the high-rate approximation holds (see Sec. 3 for implementation details), and design a sequence of quantizers $\{\hat{Q}_k\}_{k=k_1}^{k_2}$ that give us good approximations $\tilde{e}_r(k|\mu)$ to the optimal error $e_r^*(k|\mu)$ over the chosen range of k. Then an estimate of the intrinsic dimension is obtained by plotting $\log k$ vs. $-\log \tilde{e}_r(k|\mu)$ and measuring the slope of the plot over the chosen range of k (because the high-rate approximation holds, the plot is linear).
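For concreteness, here is a minimal Python sketch of this slope-based estimator for the r = 2 case, using k-means as the quantizer design step (our code; the function names and the use of scikit-learn are our own choices, not the paper's):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantizer_test_error(train, test, k, r=2):
    """Fit a k-point quantizer on train, return its r-th order error on test."""
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(train)
    codebook = km.cluster_centers_
    d = np.linalg.norm(test[:, None, :] - codebook[None, :, :], axis=2).min(axis=1)
    return (np.mean(d ** r)) ** (1.0 / r)

def estimate_dimension(train, test, ks, r=2):
    """Slope of log k vs. -log e_r(k) over the chosen range of codebook sizes."""
    log_k = np.log(list(ks))
    neg_log_e = [-np.log(quantizer_test_error(train, test, k, r)) for k in ks]
    slope, _ = np.polyfit(neg_log_e, log_k, 1)
    return slope
```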
This method hinges on estimating reliably the optimal errors $e_r^*(k|\mu)$. Let us explain how this can be achieved. The ideal quantizer for each k should minimize the training error

$$e_r(Q_k|\mu_{\mathrm{train}}) = \left( \frac{1}{n} \sum_{i=1}^{n} \| X_i - Q_k(X_i) \|^r \right)^{1/r},$$
¹A probability distribution μ on $\mathbb{R}^D$ is regular of dimension d [6] if it has compact support and if there exist constants $c, \epsilon_0 > 0$, such that $c^{-1}\epsilon^d \le \mu(B(a, \epsilon)) \le c\,\epsilon^d$ for all $a \in \mathrm{supp}(\mu)$ and all $\epsilon \in (0, \epsilon_0)$, where $B(a, \epsilon)$ is the open ball of radius ε centered at a. If $M \subset \mathbb{R}^D$ is a d-dimensional smooth compact manifold, then any μ with $M = \mathrm{supp}(\mu)$ that possesses a smooth, strictly positive density w.r.t. the normalized surface measure on M is regular of dimension d.
where $\mu_{\mathrm{train}}$ is the corresponding empirical distribution. However, finding this empirically optimal quantizer is, in general, an intractable problem, so in practice we merely strive to produce a quantizer $\hat{Q}_k$ whose error $e_r(\hat{Q}_k|\mu_{\mathrm{train}})$ is a good approximation to the minimal empirical error $e_r^*(k|\mu_{\mathrm{train}}) = \inf_{Q_k \in \mathcal{Q}_k} e_r(Q_k|\mu_{\mathrm{train}})$ (the issue of quantizer design is discussed in Sec. 3). However, while minimizing the training error is necessary for obtaining a statistically consistent approximation to an optimal quantizer for μ, the training error itself is an optimistically biased estimate of $e_r^*(k|\mu)$ [8]: intuitively, this is due to the fact that an empirically designed quantizer overfits the training set. A less biased estimate is given by the performance of $\hat{Q}_k$ on a test sequence independent from the training set. Let $Z^m = (Z_1, \ldots, Z_m)$ be m i.i.d. samples from μ, independent from $X^n$. Provided m is sufficiently large, the law of large numbers guarantees that the empirical average

$$e_r(\hat{Q}_k|\mu_{\mathrm{test}}) = \left( \frac{1}{m} \sum_{i=1}^{m} \| Z_i - \hat{Q}_k(Z_i) \|^r \right)^{1/r}$$

will be a good estimate of the test error $e_r(\hat{Q}_k|\mu)$. Using learning-theoretic formalism [8], one can show that the test error of an empirically optimal quantizer is a strongly consistent estimate of $e_r^*(k|\mu)$, i.e., it converges almost surely to $e_r^*(k|\mu)$ as $n \to \infty$. Thus, we take $\tilde{e}_r(k|\mu) = e_r(\hat{Q}_k|\mu_{\mathrm{test}})$. In practice, therefore, the proposed scheme is statistically consistent to the extent that $\hat{Q}_k$ is close to the optimum.
2.2. The r = ∞ limit and packing numbers
If the support of μ is compact (which is the case with all probability distributions considered in this paper), then the limit $e_\infty(Q_k|\mu) = \lim_{r \to \infty} e_r(Q_k|\mu)$ exists and gives the "worst-case" quantization error of X by $Q_k$:

$$e_\infty(Q_k|\mu) = \max_{x \in \mathrm{supp}(\mu)} \| x - Q_k(x) \|.$$

The optimum $e_\infty^*(k|\mu) = \inf_{Q_k \in \mathcal{Q}_k} e_\infty(Q_k|\mu)$ has an interesting interpretation as the smallest covering radius of the most parsimonious covering of $\mathrm{supp}(\mu)$ by k or fewer balls of equal radii [6]. Let us describe how the $r = \infty$ case is equivalent to dimensionality estimation using packing numbers [4]. The covering number $N_M(\epsilon)$ of a manifold $M \subset \mathbb{R}^D$ is defined as the size of the smallest covering of M by balls of radius $\epsilon > 0$, while the packing number $P_M(\epsilon)$ is the cardinality of the maximal set $S \subset M$ with $\|x - y\| \ge \epsilon$ for all distinct $x, y \in S$. If d is the dimension of M, then $N_M(\epsilon) = \Theta(\epsilon^{-d})$ for small enough ε, leading to the definition of the capacity dimension: $d_{\mathrm{cap}}(M) = -\lim_{\epsilon \to 0} \frac{\log N_M(\epsilon)}{\log \epsilon}$. If this limit exists, then it equals the intrinsic dimension of M. Alternatively, Kégl [4] suggests using the easily proved inequality $N_M(\epsilon) \le P_M(\epsilon) \le N_M(\epsilon/2)$ to express the capacity dimension in terms of packing numbers as $d_{\mathrm{cap}}(M) = -\lim_{\epsilon \to 0} \frac{\log P_M(\epsilon)}{\log \epsilon}$.
Now, a simple geometric argument shows that, for any μ supported on M, $P_M(e_\infty^*(k|\mu)) > k$ [6]. On the other hand, $N_M(e_\infty^*(k|\mu)) \le k$, which implies that $P_M(2 e_\infty^*(k|\mu)) \le k$. Let $\{\epsilon_k\}$ be a sequence of positive reals converging to zero, such that $\epsilon_k = e_\infty^*(k|\mu)$. Let $k_0$ be such that $\log \epsilon_k < 0$ for all $k \ge k_0$. Then it is not hard to show that

$$-\frac{\log P_M(2\epsilon_k)}{\log 2\epsilon_k - 1} \le -\frac{\log k}{\log e_\infty^*(k|\mu)} < -\frac{\log P_M(\epsilon_k)}{\log \epsilon_k}, \qquad k \ge k_0.$$

In other words, there exists a decreasing sequence $\{\epsilon_k\}$, such that for sufficiently large values of k (i.e., in the high-rate regime) the ratio $-\log k / \log e_\infty^*(k|\mu)$ can be approximated increasingly finely both from below and from above by quantities involving the packing numbers $P_M(\epsilon_k)$ and $P_M(2\epsilon_k)$ and converging to the common value $d_{\mathrm{cap}}(M)$. This demonstrates that the $r = \infty$ case of our scheme is numerically equivalent to Kégl's method based on packing numbers.
For a finite training set, the $r = \infty$ case requires us to find an empirically optimal k-point quantizer w.r.t. the worst-case $\ell_2$ error, a task that is much more computationally complex than for the r = 2 case (see Sec. 3 for details). In addition to computational efficiency, other important practical considerations include sensitivity to sampling density and noise. In theory, this worst-case quantizer is completely insensitive to variations in sampling density, since the optimal error $e_\infty^*(k|\mu)$ is the same for all μ with the same support. However, this advantage is offset in practice by the increased sensitivity of the $r = \infty$ scheme to noise, as explained next.
2.3. Estimation with noisy data
Random noise transforms "clean" data distributed according to μ into "noisy" data distributed according to some other distribution ν. This will cause the empirically designed quantizer to be matched to the noisy distribution ν, whereas our aim is to estimate optimal quantizer performance on the original clean data. To do this, we make use of the r-th-order Wasserstein distance [6] between μ and ν, defined as $\bar{\rho}_r(\mu, \nu) = \inf_{X \sim \mu, Y \sim \nu} (E\|X - Y\|^r)^{1/r}$, $r \in [1, \infty)$, where the infimum is taken over all pairs (X, Y) of jointly distributed random variables with the respective marginals μ and ν. It is a natural measure of quantizer mismatch, i.e., the difference in performance that results from using a quantizer matched to ν on data distributed according to μ [9]. Let $\nu_n$ denote the empirical distribution of n i.i.d. samples of ν. It is possible to show (details omitted for lack of space) that for an empirically optimal k-point quantizer $Q_{k,r}^*$ trained on n samples of ν, $|e_r(Q_{k,r}^*|\mu) - e_r^*(k|\mu)| \le 2\bar{\rho}_r(\nu_n, \nu) + \bar{\rho}_r(\mu, \nu)$. Moreover, $\nu_n$ converges to ν in the Wasserstein sense [6]: $\lim_{n \to \infty} \bar{\rho}_r(\nu_n, \nu) = 0$. Thus, provided the training set is sufficiently large, the distortion estimation error is controlled by $\bar{\rho}_r(\mu, \nu)$.

Consider the case of isotropic additive Gaussian noise. Let W be a D-dimensional zero-mean Gaussian with covariance matrix $K = \sigma^2 I_D$, where $I_D$ is the $D \times D$ identity matrix. The noisy data are described by the random variable $X + W = Y \sim \nu$, and

$$\bar{\rho}_r(\mu, \nu) \le \sqrt{2}\,\sigma \left( \frac{\Gamma((r + D)/2)}{\Gamma(D/2)} \right)^{1/r},$$

where Γ is the gamma function. In particular, $\bar{\rho}_2(\mu, \nu) \le \sigma\sqrt{D}$. The magnitude of the bound, and hence the worst-case sensitivity of the estimation procedure to noise, is controlled by the noise variance, by the extrinsic dimension, and by the distortion exponent. The factor involving the gamma functions grows without bound both as $D \to \infty$ and as $r \to \infty$, which suggests that the susceptibility of our algorithm to noise increases with the extrinsic dimension of the data and with the distortion exponent.
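The bound is easy to evaluate numerically; the snippet below (ours, using SciPy's log-gamma function to avoid overflow) computes it and confirms that it reduces to $\sigma\sqrt{D}$ at r = 2:

```python
import numpy as np
from scipy.special import gammaln

def wasserstein_gaussian_bound(sigma, D, r):
    """Upper bound on the r-th order Wasserstein distance between clean data
    and data corrupted by N(0, sigma^2 I_D) noise."""
    log_ratio = gammaln((r + D) / 2.0) - gammaln(D / 2.0)
    return np.sqrt(2.0) * sigma * np.exp(log_ratio / r)

for D in (3, 8, 784):
    b2 = wasserstein_gaussian_bound(0.4, D, r=2)
    assert np.isclose(b2, 0.4 * np.sqrt(D))   # sanity check: the r = 2 case
    print(D, b2, wasserstein_gaussian_bound(0.4, D, r=10))
```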
3. Experimental results
We have evaluated our quantization-based scheme for two choices of the distortion exponent, r = 2 and r = ∞. For r = 2, we used the k-means algorithm to design the quantizers.
For r = ∞, we have implemented a Lloyd-type algorithm, which alternates two steps: (1)
the minimum-distortion encoder, where each sample Xi is mapped to its nearest neighbor
in the current codebook, and (2) the centroid decoder, where the center of each region is
recomputed as the center of the minimum enclosing ball of the samples assigned to that
region. It is clear that the decoder step locally minimizes the worst-case error (the largest
distance of any sample from the center). Using a simple randomized algorithm, the minimum enclosing ball can be found in O((D + 1)!(D + 1)N ) time, where N is the number
of samples in the region [10]. Because of this dependence on D, the running time of the
Lloyd algorithm becomes prohibitive in high dimensions, and even for D < 10 it is an
Figure 1: Training and test error vs. codebook size on the swiss roll (Figure 2 (a)). Dashed line: r = 2 (k-means), dash-dot: r = ∞ (Lloyd-type), solid: r = ∞ (greedy). [Plots omitted.]
Figure 2: (a) The swiss roll (20,000 samples). (b) Plot of rate vs. negative log of the quantizer error (log-log curves), together with parametric curves fitted using linear least squares (see text). (c) Slope (dimension) estimates: 1.88 (training) and 2.04 (test). (d) Toroidal spiral (20,000 samples). (e) Log-log curves, exhibiting two distinct linear parts. (f) Dimension estimates: 1.04 (training), 2.02 (test) in the low-rate region, 0.79 (training), 1.11 (test) in the high-rate region. [Plots omitted.]
order of magnitude slower than k-means. Thus, we were compelled to also implement a greedy algorithm reminiscent of Kégl's algorithm for estimating the packing number [4]: supposing that k − 1 codevectors have been selected, the k-th one is chosen to be the sample point with the largest distance from the nearest codevector. Because this is the point that gives the worst-case error for codebook size k − 1, adding it to the codebook lowers the
error. We generate several codebooks, initialized with different random samples, and then
choose the one with the smallest error. For the experiment shown in Figure 3, the training
error curves produced by this greedy algorithm were on average 21% higher than those of
the Lloyd algorithm, but the test curves were only 8% higher. In many cases, the two test
curves are visually almost coincident (Figure 1). Therefore, in the sequel, we report only
the results for the greedy algorithm for the r = ∞ case.
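A sketch of this greedy farthest-point construction (our code, not the authors' implementation) is:

```python
import numpy as np

def greedy_codebook(samples, k, rng):
    """Greedy worst-case quantizer design: start from a random sample, then
    repeatedly add the sample farthest from the current codebook."""
    idx = [int(rng.integers(len(samples)))]
    dist = np.linalg.norm(samples - samples[idx[0]], axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(dist))            # point with the worst-case error
        idx.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(samples - samples[nxt], axis=1))
    return samples[idx]

def worst_case_error(samples, codebook):
    d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).max()
```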
Our first synthetic dataset (Fig. 2 (a)) is the 2D "swiss roll" embedded in $\mathbb{R}^3$ [2]. We split
the samples into 4 equal parts and use each part in turn for training and the rest for testing.
This cross-validation setup produces four sets of error curves, which we average to obtain
an improved estimate. We sample quantizer rates in increments of 0.1 bits. The lowest rate
is 5 bits, and the highest rate is chosen as log(n/2), where n is the size of the training set.
The high-rate approximation suggests the asymptotic form $\Theta(k^{-1/d})$ for the quantizer error as a function of codebook size k. To validate this approximation, we use linear least squares to fit curves of the form $a + b\,k^{-1/2}$ to the r = 2 training and test distortion curves for the swiss roll. The fitting procedure yields estimates of $-0.22 + 29.70\,k^{-1/2}$ and $0.10 + 28.41\,k^{-1/2}$ for the training and test curves, respectively. These estimates fit the observed data well, as shown in Fig. 2(b), a plot of rate vs. the negative logarithm of the training and test error ("log-log curves" in the following). Note that the additive constant for the training error is negative, reflecting the fact that the training error of the empirical quantizer is identically zero when n = k (each sample becomes a codevector). On the other hand, the test error has a positive additive constant as a consequence of quantizer suboptimality. Significantly, the fit deteriorates as $n/k \to 1$, as the average number of training samples per quantizer cell becomes too small to sustain the exponentially slow decay required for the high-rate approximation.
Fig. 2(c) shows the slopes of the training and test log-log curves, obtained by fitting a line to
each successive set of 10 points. These slopes are, in effect, rate-dependent dimensionality
estimates for the dataset. Note that the training slope is always below the test slope; this is
a consequence of the "optimism" of the training error and the "pessimism" of the test error (as reflected in the additive constants of the parametric fits). The shapes of the two slope curves are typical of many "well-behaved" datasets. At low rates, both the training and the
test slopes are close to the extrinsic dimension, reflecting the global geometry of the dataset.
As rate increases, the local manifold structure is revealed, and the slope yields its intrinsic
dimension. However, as $n/k \to 1$, the quantizer begins to "see" isolated samples instead
of the manifold structure. Thus, the training slope begins to fall to zero, and the test slope
rises, reflecting the failure of the quantizer to generalize to the test set. For most datasets
in our experiments, a good intrinsic dimensionality estimate is given by the first minimum
of the test slope where the line-fitting residual is sufficiently low (marked by a diamond in
Fig. 2(c)). For completeness, we also report the slope of the training curve at the same rate
(note that the training curve may not have local minima because of its tendency to fall as
the rate increases). Interestingly, some datasets yield several well-defined dimensionality
estimates at different rates. Fig. 2(d) shows a toroidal spiral embedded in $\mathbb{R}^3$, which at larger scales "looks" like a torus, while at smaller scales the 1D curve structure becomes
more apparent. Accordingly, the log-log plot of the test error (Fig. 2(e)) has two distinct
linear parts, yielding dimension estimates of 2.02 and 1.11, respectively (Fig. 2(f)).
Recall from Sec. 2.1 that the high-rate approximation for regular probability distributions
is based on the assumption that the intersection of each quantizer cell with the manifold is a
d-dimensional neighborhood of that manifold. Because we compute our dimensionality estimate at a rate for which this approximation is valid, we know that the empirically optimal
quantizer at this rate partitions the data into clusters that are locally d-dimensional. Thus,
our dimensionality estimation procedure is also useful for finding a clustering of the data
that respects the intrinsic neighborhood structure of the manifold from which it is sampled.
As an example, for the toroidal spiral of Fig. 2(d), we obtain two distinct dimensionality
estimates of 2 and 1 at rates 6.6 and 9.4, respectively (Fig. 2(f)). Accordingly, quantizing
the spiral at the lower (resp. higher) rate yields clusters that are locally two-dimensional
(resp. one-dimensional).
To ascertain the effect of noise and extrinsic dimension on our method, we have embedded
the swiss roll in dimensions 4 to 8 by zero-padding the coordinates and applying a random
orthogonal matrix, and added isotropic zero-mean Gaussian noise in the high-dimensional
space, with σ = 0.2, 0.4, ..., 1. First, we have verified that the r = 2 estimator behaves in agreement with the Wasserstein bound from Sec. 2.3. The top part of Fig. 3(a) shows the maximum differences between the noisy and the noiseless test error curves for each combination of D and σ, and the bottom part shows the corresponding values of the Wasserstein bound $\sigma\sqrt{D}$ for comparison. For each value of σ, the test error of the empirically designed quantizer differs from the noiseless case by $O(\sqrt{D})$, while, for a fixed D, the difference
Figure 3: (a) Top: empirically observed differences between noisy and noiseless test curves; bottom: theoretically derived bound $\sigma\sqrt{D}$. (b) Height plot of dimension estimates for the r = 2 algorithm as a function of D and σ. Top: training estimates, bottom: test estimates. (c) Dimension estimates for r = ∞. Top: training, bottom: test. Note that the training estimates are consistently lower than the test estimates: the average difference is 0.17 (resp. 0.28) for the r = 2 (resp. r = ∞) case. [Plots omitted.]
of the noisy and noiseless test errors grows as O(σ). As predicted by the bound, the additive constant in the parametric form of the test error increases with σ, resulting in larger slopes of the log-log curve and therefore higher dimension estimates. This is reflected in Figs. 3(b) and (c), which show training and test dimensionality estimates for r = 2 and r = ∞, respectively. The r = ∞ estimates are much less stable than those for r = 2 because the r = ∞ (worst-case) error is controlled by outliers and often stays constant over a range of rates. The piecewise-constant shape of the test error curves (see Fig. 1) results in log-log plots with unstable slopes.
Table 1 shows a comparative evaluation on the MNIST handwritten digits database² and a face video.³ The MNIST database contains 70,000 images at resolution 28 × 28 (D = 784), and the face video has 1965 frames at resolution 28 × 20 (D = 560). For each of the resulting 11 datasets (taking each digit separately), we used half the samples for training and half for testing. The first row of the table shows dimension estimates obtained using a baseline regression method [3]: for each sample point, a local estimate is given by the first local minimum of the curve $d\log \ell / d\log \rho(\ell)$, where $\rho(\ell)$ is the distance from the point to its $\ell$-th nearest neighbor, and a global estimate is then obtained by averaging the local estimates. The rest of the table shows the estimates obtained from the training and test curves of the r = 2 quantizer and the (greedy) r = ∞ quantizer. Comparative examination of the results shows that the r = ∞ estimates tend to be fairly low, which is consistent with the experimental findings of Kégl [4]. By contrast, the r = 2 estimates seem to be most resistant to negative bias. The relatively high values of the dimension estimates reflect the many degrees of freedom found in handwritten digits, including different scale, slant and thickness of the strokes, as well as the presence of topological features (i.e., loops in 2's or extra horizontal bars in 7's). The lowest dimensionality is found for 1's, while the highest is found for 8's, reflecting the relative complexities of different digits. For the face dataset, the different dimensionality estimates range from 4.25 to 8.30. This dataset certainly contains enough degrees of freedom to justify such high estimates, including changes in pose
² http://yann.lecun.com/exdb/mnist/
³ http://www.cs.toronto.edu/~roweis/data.html, B. Frey and S. Roweis.
Table 1: Performance on the MNIST dataset and on the Frey faces dataset.

               Handwritten digits (MNIST data set), by digit 0-9                        Faces
               0      1      2      3      4      5      6      7      8      9
# samples      6903   7877   6990   7141   6824   6313   6876   7293   6825   6958   1965
Regression     11.14  7.86   12.79  13.39  11.98  13.05  11.19  10.42  13.79  11.26  5.63
r = 2 train    12.39  6.51   16.04  15.38  13.22  14.63  12.05  12.32  19.80  13.44  5.70
r = 2 test     15.47  7.11   20.89  19.78  16.79  19.80  16.02  16.02  20.07  17.46  8.30
r = ∞ train    10.33  8.19   10.15  12.63  9.87   8.49   9.85   8.10   10.88  7.40   4.25
r = ∞ test     9.02   6.61   13.98  12.21  7.26   10.46  9.08   9.92   14.03  9.59   6.39
and facial expression, as well as camera jitter.4 Finally, for both the digits and the faces,
significant noise in the dataset additionally inflated the estimates.
4. Discussion
We have demonstrated an approach to intrinsic dimensionality estimation based on high-rate vector quantization. A crucial distinguishing feature of our method is the use of an
independent test sequence to ensure statistical consistency and avoid underestimating the
dimension. Many existing methods are well-known to exhibit a negative bias in high dimensions [4, 5]. This can have serious implications in practice, as it may result in low-dimensional representations that lose essential features of the data. Our results raise the
possibility that this negative bias may be indicative of overfitting. In the future we plan to
integrate our proposed method into a unified package of quantization-based algorithms for
estimating the intrinsic dimension of the data, obtaining its dimension-reduced manifold
representation, and compressing the low-dimensional data [11].
Acknowledgments
Maxim Raginsky was supported by the Beckman Institute Postdoctoral Fellowship. Svetlana Lazebnik was partially supported by the National Science Foundation grants IIS-0308087 and IIS-0535152.
References
[1] S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, December 2000.
[2] J.B. Tenenbaum, V. de Silva, and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, December 2000.
[3] M. Brand. Charting a manifold. In NIPS 15, pages 977-984, Cambridge, MA, 2003. MIT Press.
[4] B. Kégl. Intrinsic dimension estimation using packing numbers. In NIPS 15, volume 15, Cambridge, MA, 2003. MIT Press.
[5] E. Levina and P.J. Bickel. Maximum likelihood estimation of intrinsic dimension. In NIPS 17, Cambridge, MA, 2005. MIT Press.
[6] S. Graf and H. Luschgy. Foundations of Quantization for Probability Distributions. Springer-Verlag, Berlin, 2000.
[7] P.L. Zador. Asymptotic quantization error of continuous signals and the quantization dimension. IEEE Trans. Inform. Theory, IT-28:139-149, March 1982.
[8] T. Linder. Learning-theoretic methods in vector quantization. In L. Györfi, editor, Principles of Nonparametric Learning. Springer-Verlag, New York, 2001.
[9] R.M. Gray and L.D. Davisson. Quantizer mismatch. IEEE Trans. Commun., 23:439-443, 1975.
[10] E. Welzl. Smallest enclosing disks (balls and ellipsoids). In New Results and New Trends in Computer Science, volume 555 of LNCS, pages 359-370. Springer, 1991.
[11] M. Raginsky. A complexity-regularized quantization approach to nonlinear dimensionality reduction. Proc. 2005 IEEE Int. Symp. Inform. Theory, pages 352-356.
⁴ Interestingly, Brand [3] reports an intrinsic dimension estimate of 3 for this data set. However, he used only a 500-frame subsequence and introduced additional mirror symmetry.
Prediction and Change Detection
Mark Steyvers
[email protected]
University of California, Irvine
Irvine, CA 92697
Scott Brown
[email protected]
University of California, Irvine
Irvine, CA 92697
Abstract
We measure the ability of human observers to predict the next datum
in a sequence that is generated by a simple statistical process
undergoing change at random points in time. Accurate performance
in this task requires the identification of changepoints. We assess
individual differences between observers both empirically, and
using two kinds of models: a Bayesian approach for change detection
and a family of cognitively plausible fast and frugal models. Some
individuals detect too many changes and hence perform
sub-optimally due to excess variability. Other individuals do not
detect enough changes, and perform sub-optimally because they fail
to notice short-term temporal trends.
1 Introduction
Decision-making often requires a rapid response to change. For example, stock
analysts need to quickly detect changes in the market in order to adjust investment
strategies. Coaches need to track changes in a player?s performance in order to adjust
strategy. When tracking changes, there are costs involved when either more or less
changes are observed than actually occurred. For example, when using an overly
conservative change detection criterion, a stock analyst might miss important
short-term trends and interpret them as random fluctuations instead. On the other
hand, a change may also be detected too readily. For example, in basketball, a player
who makes a series of consecutive baskets is often identified as a "hot hand" player
whose underlying ability is perceived to have suddenly increased [1,2]. This might
lead to sub-optimal passing strategies, based on random fluctuations.
We are interested in explaining individual differences in a sequential prediction task.
Observers are shown stimuli generated from a simple statistical process with the task
of predicting the next datum in the sequence. The latent parameters of the statistical
process change discretely at random points in time. Performance in this task depends
on the accurate detection of those changepoints, as well as inference about future
outcomes based on the outcomes that followed the most recent inferred changepoint.
There is much prior research in statistics on the problem of identifying changepoints
[3,4,5]. In this paper, we adopt a Bayesian approach to the changepoint identification
problem and develop a simple inference procedure to predict the next datum in a
sequence. The Bayesian model serves as an ideal observer model and is useful to
characterize the ways in which individuals deviate from optimality.
The plan of the paper is as follows. We first introduce the sequential prediction task
and discuss a Bayesian analysis of this prediction problem. We then discuss the results
from a few individuals in this prediction task and show how the Bayesian approach
can capture individual differences with a single "twitchiness" parameter that
describes how readily changes are perceived in random sequences. We will show that
some individuals are too twitchy: their performance is too variable because they base
their predictions on too little of the recent data. Other individuals are not twitchy
enough, and they fail to capture fast changes in the data. We also show how behavior
can be explained with a set of fast and frugal models [6]. These are cognitively
realistic models that operate under plausible computational constraints.
2 A prediction task with multiple change points
In the prediction task, stimuli are presented sequentially and the task is to predict the
next stimulus in the sequence. After t trials, the observer has been presented with
stimuli y1, y2, ..., yt and the task is to make a prediction about yt+1. After the prediction
is made, the actual outcome yt+1 is revealed and the next trial proceeds to the
prediction of yt+2. This procedure starts with y1 and is repeated for T trials.
The observations $y_t$ are D-dimensional vectors with elements sampled from binomial distributions. The parameters of those distributions change discretely at random points in time such that the mean increases or decreases after a change point. This generates a sequence of observation vectors, $y_1, y_2, \ldots, y_T$, where each $y_t = \{y_{t,1}, \ldots, y_{t,D}\}$. Each of the $y_{t,d}$ is sampled from a binomial distribution $\mathrm{Bin}(\theta_{t,d}, K)$, so $0 \le y_{t,d} \le K$. The parameter vector $\theta_t = \{\theta_{t,1}, \ldots, \theta_{t,D}\}$ changes depending on the locations of the changepoints. At each time step, $x_t$ is a binary indicator for the occurrence of a changepoint occurring at time t+1. The parameter α determines the probability of a change occurring in the sequence. The generative model is specified by the following algorithm:

1. For d = 1..D, sample $\theta_{1,d}$ from a Uniform(0,1) distribution
2. For t = 2..T,
   (a) Sample $x_{t-1}$ from a Bernoulli(α) distribution
   (b) If $x_{t-1} = 0$, then $\theta_t = \theta_{t-1}$; else for d = 1..D sample $\theta_{t,d}$ from a Uniform(0,1) distribution
   (c) For d = 1..D, sample $y_{t,d}$ from a $\mathrm{Bin}(\theta_{t,d}, K)$ distribution

Table 1 shows some data generated from the changepoint model with T=20, α=.1, and D=1. In the prediction task, y will be observed, but x and θ are not.
Table 1: Example data
t    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20
x    0    0    0    1    0    0    1    0    0    0    0    0    1    0    1    0    0    0    0    0
θ  .68  .68  .68  .68  .48  .48  .48  .74  .74  .74  .74  .74  .74  .19  .19  .87  .87  .87  .87  .87
y    9    7    8    7    4    4    4    9    8    3    6    7    8    2    1    8    9    9    8    8
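The generative algorithm above translates directly into code; the following Python sketch (ours; names are illustrative) reproduces data of the kind shown in Table 1:

```python
import numpy as np

def generate_sequence(T, D, K, alpha, rng):
    """Sample (x, theta, y) from the changepoint model described above."""
    theta = np.empty((T, D))
    y = np.empty((T, D), dtype=int)
    x = np.zeros(T, dtype=int)            # x[t-1] = 1 marks a change at step t
    theta[0] = rng.uniform(0, 1, size=D)
    y[0] = rng.binomial(K, theta[0])
    for t in range(1, T):
        x[t - 1] = rng.binomial(1, alpha)
        theta[t] = rng.uniform(0, 1, size=D) if x[t - 1] else theta[t - 1]
        y[t] = rng.binomial(K, theta[t])
    return x, theta, y

x, theta, y = generate_sequence(T=20, D=1, K=10, alpha=0.1,
                                rng=np.random.default_rng(1))
```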
3 A Bayesian prediction model
In both our Bayesian and fast-and-frugal analyses, the prediction task is decomposed
into two inference procedures. First, the changepoint locations are identified. This is
followed by predictive inference for the next outcome based on the most recent
changepoint locations. Several Bayesian approaches have been developed for
changepoint problems involving single or multiple changepoints [3,5]. We apply a
Markov Chain Monte Carlo (MCMC) analysis to approximate the joint posterior
distribution over changepoint assignments x while integrating out θ. Gibbs sampling
will be used to sample from this posterior marginal distribution. The samples can then
be used to predict the next outcome in the sequence.
3.1 Inference for changepoint assignments

To apply Gibbs sampling, we evaluate the conditional probability of assigning a changepoint at time i, given all other changepoint assignments and the current α value. By integrating out θ, the conditional probability is

$$P(x_i \mid x_{-i}, y, \alpha) = \int_{\theta} P(x_i, \theta \mid x_{-i}, y, \alpha)\, d\theta, \qquad (1)$$
where $x_{-i}$ represents all switch point assignments except $x_i$. This can be simplified by considering the location of the most recent changepoint preceding and following time i and the outcomes occurring between these locations. Let $n_i^L$ be the number of time steps from the last changepoint up to and including the current time step i such that $x_{i - n_i^L} = 1$ and $x_{i - n_i^L + j} = 0$ for $0 < j < n_i^L$. Similarly, let $n_i^R$ be the number of time steps that follow time step i up to the next changepoint such that $x_{i + n_i^R} = 1$ and $x_{i + n_i^R - j} = 0$ for $0 < j < n_i^R$. Let $y_i^L = \sum_{i - n_i^L < k \le i} y_k$ and $y_i^R = \sum_{i < k \le i + n_i^R} y_k$. The update equation for the changepoint assignment can then be simplified to
    P(xi = m | x−i) ∝
      (1 − π) ∏_{j=1}^{D} Γ(1 + y_{i,j}^L + y_{i,j}^R) Γ(1 + Kn_i^L + Kn_i^R − y_{i,j}^L − y_{i,j}^R) / Γ(2 + Kn_i^L + Kn_i^R)    for m = 0
      π ∏_{j=1}^{D} [Γ(1 + y_{i,j}^L) Γ(1 + Kn_i^L − y_{i,j}^L) / Γ(2 + Kn_i^L)] · [Γ(1 + y_{i,j}^R) Γ(1 + Kn_i^R − y_{i,j}^R) / Γ(2 + Kn_i^R)]    for m = 1    (2)
We initialize the Gibbs sampler by sampling each xt from a Bernoulli(π) distribution.
All changepoint assignments are then updated sequentially by the Gibbs sampling
equation above. The sampler is run for M iterations after which one set of changepoint
assignments is saved. The Gibbs sampler is then restarted multiple times until S
samples have been collected.
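A sketch of one Gibbs update implementing Eq. (2) follows; the Beta-binomial marginals use log-Gamma functions, binomial coefficients are dropped since they cancel between the two cases, and the run-length bookkeeping (with sequence boundaries treated as changepoints) is our own assumption:

    import numpy as np
    from math import lgamma

    def log_marginal(y_sum, n_steps, K):
        # log[ Gamma(1 + y) Gamma(1 + K*n - y) / Gamma(2 + K*n) ] as in Eq. (2);
        # binomial coefficients are omitted since they cancel between m=0 and m=1
        return lgamma(1 + y_sum) + lgamma(1 + K * n_steps - y_sum) - lgamma(2 + K * n_steps)

    def gibbs_update(i, x, y, K, pi, rng):
        """Resample x[i] given all other changepoint assignments; y has shape (T, D)."""
        nL = 1                                  # run length back to the previous changepoint
        while i - nL >= 0 and x[i - nL] == 0:
            nL += 1
        nR = 1                                  # run length forward to the next changepoint
        while i + nR < len(x) and x[i + nR] == 0:
            nR += 1
        yL = y[max(i - nL + 1, 0): i + 1].sum(axis=0)   # outcomes in (i - nL, i]
        yR = y[i + 1: i + nR + 1].sum(axis=0)           # outcomes in (i, i + nR]
        log_p0 = np.log(1 - pi) + sum(log_marginal(a + b, nL + nR, K)
                                      for a, b in zip(yL, yR))
        log_p1 = np.log(pi) + sum(log_marginal(a, nL, K) + log_marginal(b, nR, K)
                                  for a, b in zip(yL, yR))
        c = max(log_p0, log_p1)                # normalize in log space for stability
        p1 = np.exp(log_p1 - c) / (np.exp(log_p0 - c) + np.exp(log_p1 - c))
        x[i] = int(rng.uniform() < p1)
        return x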
Although we could have included an update equation for π, in this analysis we treat π
as a known constant. This will be useful when characterizing the differences between
human observers in terms of differences in π.
3.2 Predictive inference
The next latent parameter value θt+1 and outcome yt+1 can be predicted on the basis of
observed outcomes that occurred after the last inferred changepoint:

    θ_{t+1,j} = (1/(t − t*)) Σ_{i=t*+1}^{t} y_{i,j} / K,    y_{t+1,j} = round(θ_{t+1,j} K)        (3)
where t* is the location of the most recent change point. By considering multiple
Gibbs samples, we get a distribution over outcomes yt+1. We base the model
predictions on the mean of this distribution.
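Eq. (3) for a single Gibbs sample can be sketched as follows (the zero-based indexing convention is ours; averaging the result over the S samples gives the predictive distribution):

    import numpy as np

    def predict_next(y_hist, x_sample, K):
        """theta_{t+1} and y_{t+1} from the outcomes after the last inferred changepoint."""
        t = len(y_hist)
        changes = np.flatnonzero(x_sample[:t])            # inferred changepoint indicators
        t_star = changes[-1] + 1 if changes.size else 0   # first trial of the current segment
        theta_next = y_hist[t_star:].mean(axis=0) / K     # Eq. (3)
        return theta_next, np.round(theta_next * K).astype(int)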
3.3 Illustration of model performance
Figure 1 illustrates the performance of the model on a one-dimensional sequence
(D=1) generated from the changepoint model with T=160, π=0.05, and K=10. The
Gibbs sampler was run for M=30 iterations and S=200 samples were collected. The
top panel shows the actual changepoints (triangles) and the distribution of
changepoint assignments averaged over samples. The bottom panel shows the
observed data y (thin lines) as well as the θ values in the generative model (rescaled
between 0 and 10).
At locations with large changes between observations, the marginal changepoint
probability is quite high. At other locations, the true change in the mean is very small,
and the model is less likely to put in a changepoint. The lower right panel shows the
distribution over predicted θt+1 values.
[Figure 1 appears here: top panel, inferred changepoint probabilities for xt (0 to 1) with the
actual changepoints marked; bottom panel, observed yt (0 to 10) over trials 20–160; right panel,
the distribution over predicted θt+1.]
Figure 1. Results of model simulation.
4 Prediction experiment
We tested performance of 9 human observers in the prediction task. The observers
included the authors, a visitor, and one student who were aware of the statistical
nature of the task as well as na?ve students. The observers were seated in front of an
LCD touch screen displaying a two-dimensional grid of 11 x 11 buttons. The
changepoint model was used to generate a sequence of T=1500 stimuli for two
binomial variables y1 and y2 (D=2, K=10). The change probability π was set to 0.1.
The two variables y1 and y2 specified the two-dimensional button location. The same
sequence was used for all observers.
On each trial, the observer touched a button on the grid displayed on the touch screen.
Following each button press, the button corresponding to the next {y1,y2} outcome in
the sequence was highlighted. Observers were instructed to press the button that best
predicted the next location of the highlighted button. The 1500 trials were divided into
three blocks of 500 trials. Breaks were allowed between blocks. The whole
experiment lasted between 15 and 30 minutes. Figure 2 shows the first 50 trials from
the third block of the experiment. The top and bottom panels show the actual
outcomes for the y1 and y2 button grid coordinates as well as the predictions for two
observers (SB and MY). The figure shows that at trial 15, the y1 and y2 coordinates
show a large shift, followed by an immediate shift in observer MY's predictions (on
trial 16). Observer SB waits until trial 17 to make a shift.
[Figure 2 appears here: y1 (top) and y2 (bottom) button coordinates (0 to 10) over trials 0–50,
showing outcomes, SB predictions and MY predictions.]
Figure 2. Trial by trial predictions from two observers.
4.1 Task error
We assessed prediction performance by comparing the prediction with the actual
outcome in the sequence. Task error was measured by normalized city-block distance

    task error = (1/(T − 1)) Σ_{t=2}^{T} ( |y_{t,1} − y_{t,1}^O| + |y_{t,2} − y_{t,2}^O| )        (4)
where y^O represents the observer's prediction. Note that the very first trial is excluded
from this calculation. Even though more suitable probabilistic measures for prediction
error could have been adopted, we wanted to allow comparison of observers'
performance with both probabilistic and non-probabilistic models. Task error ranged
from 2.8 (for participant MY) to 3.3 (for ML). We also assessed the performance of
five models; their task errors ranged from 2.78 to 3.20. The Bayesian models
(Section 3) had the lowest task errors, just below 2.8. This fits with our definition of
the Bayesian models as "ideal observer" models: their task error is lower than any
other model's and any human observer's task error. The fast and frugal models
(Section 5) had task errors ranging from 2.85 to 3.20.
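Both this task error and the model error of Section 5 are the same normalized city-block distance between two (T, 2) sequences; a small helper, with names of our own:

    import numpy as np

    def city_block_error(a, b):
        """Mean L1 distance between two (T, 2) sequences, excluding the first trial."""
        return np.abs(a[1:] - b[1:]).sum(axis=1).mean()  # = (1/(T-1)) * sum over t=2..T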
5 Modeling Results
We will refer to the models with the following letter codes: B=Bayesian Model,
LB=limited Bayesian model, FF1..3=fast and frugal models 1..3. We assessed model
fit by comparing the model's prediction against the human observers' predictions,
again using a normalized city-block distance
    model error = (1/(T − 1)) Σ_{t=2}^{T} ( |y_{t,1}^M − y_{t,1}^O| + |y_{t,2}^M − y_{t,2}^O| )        (5)
where y^M represents the model's prediction. The model error for each individual
observer is shown in Figure 3. It is important to note that because each model is
associated with a set of free parameters, the parameters optimized for task error and
model error are different. For Figure 3, the parameters were optimized to minimize
Equation (5) for each individual observer, showing the extent to which these models
can capture the performance of individual observers, not necessarily providing the
best task performance.
[Figure 3 appears here: model error (0 to 2) of models B, LB, FF1, FF2 and FF3 for each of the
nine observers PH, NP, DN, SB, ML, MY, MS, MM and EJ, with bootstrapped error bars.]
Figure 3. Model error for each individual observer.¹
5.1 Bayesian prediction models
At each trial t, the model was provided with the sequence of all previous outcomes.
The Gibbs sampling and inference procedures from Eq. (2) and (3) were applied with
M=30 iterations and S=200 samples. The change probability π was a free parameter. In
the full Bayesian model, the whole sequence of observations up to the current trial is
available for prediction, leading to a memory requirement of up to T=1500 trials, a
psychologically unreasonable assumption. We therefore also simulated a limited
Bayesian model (LB) where the observed sequence was truncated to the last 10
outcomes. The LB model showed almost no decrement in task performance compared
to the full Bayesian model. Figure 3 also shows that it fit human data quite well.
5.2 Individual Differences
The right-hand panel of Figure 4 plots each observer's task error as a function of the
mean city-block distance between their subsequent button presses. This shows a clear
U-shaped function. Observers with very variable predictions (e.g., ML and DN) had
large average changes between successive button pushes, and also had large task
error: these observers were too "twitchy". Observers with very small average button
changes (e.g., SB and NP) were not twitchy enough, and also had large task error.
Observers in the middle had the lowest task error (e.g., MS and MY). The left-hand
panel of Figure 4 shows the same data, but with the x-axis based on the Bayesian
model fits. Instead of using mean button change distance to index twitchiness (as in
the right-hand panel), the left-hand panel uses the estimated π parameters from the
Bayesian model. A similar U-shaped pattern is observed: individuals with too large or
too small π estimates have large task errors.

¹Error bars indicate bootstrapped 95% confidence intervals.
[Figure 4 appears here: two panels of task error (2.8 to 3.3) per observer, against estimated π
(left panel, log scale from 10^-4 to 10^0) and against mean button change (right panel, 0.5 to 3).]
Figure 4. Task error vs. "twitchiness". Left-hand panel indexes twitchiness using
estimated π parameters from Bayesian model fits. Right-hand panel uses mean
distance between successive predictions.
5.3 Fast-and-Frugal (FF) prediction models
These models perform the prediction task using simple heuristics that are cognitively
plausible. The FF models keep a short memory of previous stimulus values and make
predictions using the same two-step process as the Bayesian model. First, a decision is
made as to whether the latent parameter θ has changed. Second, remembered stimulus
values that occurred after the most recently detected changepoint are used to generate
the next prediction.
A simple heuristic is used to detect changepoints: If the distance between the most
recent observation and prediction is greater than some threshold amount, a change is
inferred. We defined the distance between a prediction (p) and an observation (y) as
the difference between the log-likelihoods of y assuming θ=p and θ=y. Thus, if fB(·|θ,
K) is the binomial density with parameters θ and K, the distance between observation
y and prediction p is defined as d(y,p) = log(fB(y|y,K)) − log(fB(y|p,K)). A changepoint on
time step t+1 is inferred whenever d(yt,pt)>C. The parameter C governs the
twitchiness of the model predictions. If C is large, only very dramatic changepoints
will be detected, and the model will be too conservative. If C is small, the model will
be too twitchy, and will detect changepoints on the basis of small random fluctuations.
Predictions are based on the most recent M observations, which are kept in memory,
unless a changepoint has been detected in which case only those observations
occurring after the changepoint are used for prediction. The prediction for time step
t+1 is simply the mean of these observations, say p. Human observers were reticent to
make predictions very close to the boundaries. This was modeled by allowing the FF
model to change its prediction for the next time step, yt+1, towards the mean prediction
(0.5). This change reflects a two-way bet. If the probability of a change occurring is
π, the best guess will be 0.5 if that change occurs, or the mean p if the change does not
occur. Thus, the prediction made is actually yt+1 = π/2 + (1−π)p. Note that we do not
allow perfect knowledge of the probability of a changepoint, π. Instead, an estimated
value of π is used, based on the number of changepoints detected in the data series up
to time t.
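A sketch of the complete FF model's two steps; we assume the two-way bet operates on the θ = y/K scale before mapping back to button coordinates, and all names besides M and C are ours:

    import numpy as np

    def ff_step(y_hist, detected, K, M=10):
        """One FF1 prediction; y_hist is a 1-D array of past outcomes, detected holds
        the indices of previously detected changepoints."""
        start = max(len(y_hist) - M, detected[-1] + 1 if detected else 0)
        p = np.mean(y_hist[start:]) / K                 # mean of remembered outcomes
        pi_hat = len(detected) / max(len(y_hist), 1)    # estimated change probability
        return (0.5 * pi_hat + (1 - pi_hat) * p) * K    # two-way bet, back on 0..K scale

    def change_detected(y_new, pred, K, C=2.0):
        """d(y, p) > C, with d the log-likelihood difference; binomial coeffs cancel."""
        eps = 1e-9
        q = np.clip(y_new / K, eps, 1 - eps)
        p = np.clip(pred / K, eps, 1 - eps)
        d = y_new * np.log(q / p) + (K - y_new) * np.log((1 - q) / (1 - p))
        return d > C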
The FF model nests two simpler FF models that are psychologically interesting. If the
twitchiness threshold parameter C becomes arbitrarily large, the model never detects a
change and instead becomes a continuous running average model. Predictions from
this model are simply a boxcar smooth of the data. Alternatively, if we assume no
memory, the model must base each prediction on only the previous stimulus (i.e.,
M=1). Above, in Figure 3, we labeled the complete FF model as FF1, the boxcar
model as FF2 and the memoryless model FF3.
Figure 3 showed that the complete FF model (FF1) fit the data from all observers
significantly better than either the boxcar model (FF2) or the memoryless model
(FF3). Exceptions were observers PH, DN and ML, for whom all three FF models fit
equally well. This result suggests that our observers were (mostly) doing more than
just keeping a running average of the data, or using only the most recent observation.
The FF1 model fit the data about as well as the Bayesian models for all observers
except MY and MS. Note that, in general, the FF1 and Bayesian model fits are very
good: the average city block distance between the human data and the model
prediction is around 0.75 (out of 10) buttons on both the x- and y-axes.
6 Conclusion
We used an online prediction task to study changepoint detection. Human observers
had to predict the next observation in stochastic sequences containing random
changepoints. We showed that some observers are too "twitchy": they perform
poorly on the prediction task because they see changes where only random fluctuation
exists. Other observers are not twitchy enough, and they perform poorly because they
fail to see small changes. We developed a Bayesian changepoint detection model that
performed the task optimally, and also provided a good fit to human data when
sub-optimal parameter settings were used. Finally, we developed a fast-and-frugal
model that showed how participants may be able to perform well at the task using
minimal information and simple decision heuristics.
Acknowledgments
We thank Eric-Jan Wagenmakers and Mike Yi for useful discussions related to this
work. This work was supported in part by a grant from the US Air Force Office of
Scientific Research (AFOSR grant number FA9550-04-1-0317).
References
[1] Gilovich, T., Vallone, R. and Tversky, A. (1985). The hot hand in basketball: on the
misperception of random sequences. Cognitive Psychology, 17, 295-314.
[2] Albright, S.C. (1993a). A statistical analysis of hitting streaks in baseball. Journal of the
American Statistical Association, 88, 1175-1183.
[3] Stephens, D.A. (1994). Bayesian retrospective multiple changepoint identification. Applied
Statistics 43(1), 159-178.
[4] Carlin, B.P., Gelfand, A.E., & Smith, A.F.M. (1992). Hierarchical Bayesian analysis of
changepoint problems. Applied Statistics 41(2), 389-405.
[5] Green, P.J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian
model determination. Biometrika 82(4), 711-732.
[6] Gigerenzer, G., & Goldstein, D.G. (1996). Reasoning the fast and frugal way: Models of
bounded rationality. Psychological Review, 103, 650-669.
Metric Learning by Collapsing Classes
Amir Globerson
School of Computer Science and Engineering,
Interdisciplinary Center for Neural Computation
The Hebrew University, Jerusalem 91904, Israel
[email protected]
Sam Roweis
Machine Learning Group
Department of Computer Science
University of Toronto, Canada
[email protected]
Abstract
We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the
simple geometric intuition that a good metric is one under which points
in the same class are simultaneously near each other and far from points
in the other classes. We construct a convex optimization problem whose
solution generates such a metric by trying to collapse all examples in the
same class to a single point and push examples in other classes infinitely
far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on
a variety of problems. We also discuss how the learned metric may be
used to obtain a compact low dimensional feature representation of the
original input space, allowing more efficient classification with very little
reduction in performance.
1 Supervised Learning of Metrics
The problem of learning a distance measure (metric) over an input space is of fundamental
importance in machine learning [10, 9], both supervised and unsupervised. When such
measures are learned directly from the available data, they can be used to improve learning algorithms which rely on distance computations such as nearest neighbour classification [5], supervised kernel machines (such as GPs or SVMs) and even unsupervised
clustering algorithms [10]. Good similarity measures may also provide insight into the
underlying structure of data (e.g. inter-protein distances), and may aid in building better data visualizations via embedding. In fact, there is a close link between distance
learning and feature extraction since whenever we construct a feature f(x) for an input
space X, we can measure distances between x1, x2 ∈ X using a simple distance function (e.g. Euclidean) d[f(x1), f(x2)] in feature space. Thus by fixing d, any feature
extraction algorithm may be considered a metric learning method. Perhaps the simplest
illustration of this approach is when f(x) is a linear projection of x ∈ R^r so that
f(x) = Wx. The Euclidean distance between f(x1) and f(x2) is then the Mahalanobis
distance ||f(x1) − f(x2)||² = (x1 − x2)^T A(x1 − x2), where A = W^T W is a positive
semidefinite matrix. Much of the recent work on metric learning has indeed focused on
learning Mahalanobis distances, i.e. learning the matrix A. This is also the goal of the
current work.
A common approach to learning metrics is to assume some knowledge in the form of equivalence
relations, i.e. which points should be close and which should be far (without specifying their exact distances). In the classification setting there is a natural equivalence relation, namely whether two points are in the same class or not. One of the classical statistical
methods which uses this idea for the Mahalanobis distance is Fisher's Linear Discriminant
Analysis (see e.g. [6]). Other more recent methods are [10, 9, 5] which seek to minimize
various separation criteria between the classes under the new metric.
In this work, we present a novel approach to learning such a metric. Our approach, the
Maximally Collapsing Metric Learning algorithm (MCML), relies on the simple geometric
intuition that if all points in the same class could be mapped into a single location in feature
space and all points in other classes mapped to other locations, this would result in an ideal
approximation of our equivalence relation. Our algorithm approximates this scenario via a
stochastic selection rule, as in Neighborhood Component Analysis (NCA) [5]. However,
unlike NCA, the optimization problem is convex and thus our method is completely specified by our objective function. Different initialization and optimization techniques may
affect the speed of obtaining the solution but the final solution itself is unique. We also
show that our method approximates the local covariance structure of the data, as opposed
to Linear Discriminant Analysis methods which use only global covariance structure.
2 The Approach of Collapsing Classes
Given a set of n labeled examples (xi, yi), where xi ∈ R^r and yi ∈ {1, …, k}, we seek a
similarity measure between two points in X space. We focus on Mahalanobis form metrics

    d(xi, xj | A) = d^A_ij = (xi − xj)^T A(xi − xj),                            (1)

where A is a positive semidefinite (PSD) matrix.
Intuitively, what we want from a good metric is that it makes elements of X in the same
class look close whereas those in different classes appear far. Our approach starts with
the ideal case when this is true in the most optimistic sense: same class points are at zero
distance, and different class points are infinitely far. Alternatively this can be viewed as
mapping x via a linear projection Wx (A = W^T W), such that all points in the same
class are mapped into the same point. This intuition is related to the analysis of spectral
clustering [8], where the ideal case analysis of the algorithm results in all same cluster
points being mapped to a single point.
To learn a metric which approximates the ideal geometric setup described above, we introduce, for each training point, a conditional distribution over other points (as in [5]).
Specifically, for each xi we define a conditional distribution over points j ≠ i such that

    pA(j|i) = (1/Zi) e^{−d^A_ij} = e^{−d^A_ij} / Σ_{k≠i} e^{−d^A_ik},    j ≠ i.        (2)
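For reference, pA(j|i) can be computed for all pairs at once; a minimal numpy sketch (function name is ours):

    import numpy as np

    def neighbor_probs(X, A):
        """Rows of P are p_A(j|i) from Eq. (2); X is (n, r), A is (r, r) PSD."""
        diff = X[:, None, :] - X[None, :, :]              # pairwise differences x_i - x_j
        d = np.einsum('ijr,rs,ijs->ij', diff, A, diff)    # Mahalanobis distances d^A_ij
        np.fill_diagonal(d, np.inf)                       # exclude j = i
        logits = -d - (-d).max(axis=1, keepdims=True)     # stabilize the softmax
        P = np.exp(logits)
        return P / P.sum(axis=1, keepdims=True)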
If all points in the same class were mapped to a single point and infinitely far from points
in different classes, we would have the ideal "bi-level" distribution:

    p0(j|i) ∝ { 1   yi = yj
              { 0   yi ≠ yj.                                                    (3)

Furthermore, under very mild conditions, any set of points which achieves the above distribution must have the desired geometry. In particular, assume there are at least r̂ + 2 points
in each class, where r̂ = rank[A] (note that r̂ ≤ r). Then pA(j|i) = p0(j|i) (∀i, j) implies
that under A all points in the same class will be mapped to a single point, infinitely far from
other class points.¹
¹Proof sketch: The infinite separation between points of different classes follows simply from
Thus it is natural to seek a matrix A such that pA(j|i) is as close as possible to p0(j|i).
Since we are trying to match distributions, we minimize the KL divergence KL[p0|p]:

    min_A  Σi KL[p0(j|i) | pA(j|i)]    s.t.  A ∈ PSD                           (4)
The crucial property of this optimization problem is that it is convex in the matrix A. To see
this, first note that any convex linear combination of feasible solutions A = αA0 + (1 − α)A1 s.t. 0 ≤ α ≤ 1 is still a feasible solution, since the set of PSD matrices is convex.
Next, we can show that f(A) never has a greater cost than the corresponding combination of the costs at the endpoints. To do
this, we rewrite the objective function f(A) = Σi KL[p0(j|i) | p(j|i)] in the form²:

    f(A) = −Σ_{i,j: yj=yi} log p(j|i) = Σ_{i,j: yj=yi} d^A_ij + Σi log Zi

where we assumed for simplicity that classes are equi-probable, yielding a multiplicative
constant. To see why f(A) is convex, first note that d^A_ij = (xi − xj)^T A(xi − xj) is linear
in A, and thus convex. The function log Zi is a log Σ exp function of affine functions of
A and is therefore also convex (see [4], page 74).
2.1 Convex Duality
Since our optimization problem is convex, it has an equivalent convex dual. Specifically,
the convex dual of Eq. (4) is the following entropy maximization problem:
    max_{p(j|i)}  Σi H[p(j|i)]    s.t.  Σi E_{p(j|i)}[vji vji^T] = Σi E_{p0(j|i)}[vji vji^T]        (5)

where vji = xj − xi, H[·] is the entropy function, and we require Σj p(j|i) = 1 ∀i.
To prove this duality we start with the proposed dual and obtain the original problem in
Equation 4 as its dual. Write the Lagrangian for the above problem (where Λ is PSD)³

    L(p, Λ, β) = Σi (−H[p(j|i)]) − Tr( Λ ( Σi E_{p0(j|i)}[vji vji^T] − Σi E_{p(j|i)}[vji vji^T] ) )
                 − Σi βi ( Σj p(j|i) − 1 )

The dual function is defined as g(Λ, β) = min_p L(p, Λ, β). To derive it, we first solve for
the minimizing p by setting the derivative of L(p, Λ, β) w.r.t. p(j|i) equal to zero:

    0 = 1 + log p(j|i) + Tr(Λ vji vji^T) − βi   ⟹   p(j|i) = e^{βi−1} e^{−Tr(Λ vji vji^T)}

Plugging this solution into L(p, Λ, β) we get g(Λ, β) = −Tr( Λ Σi E_{p0}[vji vji^T] ) − Σ_{i,j} p(j|i).
The dual problem is to maximize g(Λ, β). We can do this analytically w.r.t. βi, yielding
e^{1−βi} = Σj e^{−Tr(Λ vji vji^T)}. Now note that Tr(Λ vji vji^T) = vji^T Λ vji = d^Λ_ji, so we can write

    g(Λ) = − Σ_{i,j: yi=yj} d^Λ_ji − Σi log Σj e^{−d^Λ_ji}

which is minus our original target function. Since g(Λ) should be maximized, we have the
desired duality result (identifying Λ with A).
pA(j|i) = p0(j|i) = 0 when yj ≠ yi. For a given point xi, all the points j in its class satisfy p(j|i) > 0.
Due to the structure of p(j|i) in Equation 2, and because it is obeyed for all points in xi's class, this
implies that all the points in that class are equidistant from each other. However, it is easy to show
that the maximum number of different equidistant points (also known as the equilateral dimension
[1]) in r̂ dimensions is r̂ + 1. Since by assumption we have at least r̂ + 2 points in the class of xi,
and A maps points into R^r̂, it follows that all points are identical.
²Up to an additive constant Σi H[p0(j|i)].
³We consider the equivalent problem of minimizing minus entropy.
2.1.1 Relation to covariance based and embedding methods
The convex dual derived above reveals an interesting relation to covariance based learning
methods. The sufficient statistics used by the algorithm are a set of n "spread" matrices.
Each matrix is of the form E_{p0(j|i)}[vji vji^T]. The algorithm tries to find a maximum entropy
distribution which matches these matrices when averaged over the sample.
This should be contrasted with the covariance matrices used in metric learning such as
Fisher's Discriminant Analysis. The latter uses the within and between class covariance
matrices. The within covariance matrix is similar to the covariance matrix used here, but is
calculated with respect to the class means, whereas here it is calculated separately for every
point, and is centered on this point. This highlights the fact that MCML is not based on
Gaussian assumptions where it is indeed sufficient to calculate a single class covariance.
Our method can also be thought of as a supervised version of the Stochastic Neighbour
Embedding algorithm [7] in which the "target" distribution is p0 (determined by the class
labels) and the embedding points are not completely free but are instead constrained to be
of the form Wxi.
2.2 Optimizing the Convex Objective
Since the optimization problem in Equation 4 is convex, it is guaranteed to have only a
single minimum which is the globally optimal solution⁴. It can be optimized using any
appropriate numerical convex optimization machinery; all methods will yield the same solution although some may be faster than others. One standard approach is to use interior
point Newton methods. However, these algorithms require the Hessian to be calculated,
which would require O(d⁴) resources, and could be prohibitive in our case. Instead, we
have experimented with using a first order gradient method, specifically the projected gradient approach as in [10]. At each iteration we take a small step in the direction of the
negative gradient of the objective function⁵, followed by a projection back onto the PSD
cone. This projection is performed simply by taking the eigen-decomposition of A and
removing the components with negative eigenvalues. The algorithm is summarized below:
Input:    Set of labeled data points (xi, yi), i = 1 … n
Output:   PSD metric which optimally collapses classes.
Initialization: Initialize A0 to some PSD matrix
          (randomly or using some initialization heuristic).
Iterate:
  Set A_{t+1} = A_t − ε∇f(A_t) where
      ∇f(A) = Σ_{ij} (p0(j|i) − p(j|i)) (xj − xi)(xj − xi)^T
  Calculate the eigen-decomposition of A_{t+1} = Σk λk uk uk^T,
      then set A_{t+1} = Σk max(λk, 0) uk uk^T
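A compact rendering of this loop, reusing the neighbor_probs helper sketched in Section 2; we use a fixed step size instead of the Armijo rule of footnote 5, and assume every class contains at least two points:

    import numpy as np

    def mcml_fit(X, y, n_iter=200, step=1e-3):
        """Projected gradient descent on f(A) with projection onto the PSD cone."""
        n, r = X.shape
        P0 = (y[:, None] == y[None, :]).astype(float)
        np.fill_diagonal(P0, 0.0)
        P0 /= P0.sum(axis=1, keepdims=True)          # the bi-level p_0(j|i) of Eq. (3)
        A = np.eye(r)
        diff = X[:, None, :] - X[None, :, :]
        for _ in range(n_iter):
            W = P0 - neighbor_probs(X, A)            # p_0(j|i) - p(j|i)
            grad = np.einsum('ij,ijr,ijs->rs', W, diff, diff)
            A = A - step * grad
            lam, U = np.linalg.eigh(A)               # eigen-decompose ...
            A = (U * np.clip(lam, 0, None)) @ U.T    # ... and drop negative eigenvalues
        return A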
Of course in principle it is possible to optimize over the dual instead of the primal but in
our case, if the training data consists of n points in r-dimensional space then the primal has
only O(r²/2) variables while the dual has O(n²), so it will almost always be more efficient
to operate on the primal A directly. One exception to this case may be the kernel version
(Section 4) where the primal is also of size O(n²).
⁴When the data can be exactly collapsed into single class points, there will be multiple solutions
at infinity. However, this is very unlikely to happen in real data.
⁵In the experiments, we used an Armijo-like step size rule, as described in [3].
3 Low Dimensional Projections for Feature Extraction
The Mahalanobis distance under a metric A can be interpreted as a linear projection of the
original inputs by the square root of A, followed by Euclidean distance in the projected
space. Matrices A which have less than full rank correspond to Mahalanobis distances
based on low dimensional projections. Such metrics and the induced distances can be
advantageous for several reasons [5]. First, low dimensional projections can substantially
reduce the storage and computational requirements of a supervised method since only the
projections of the training points must be stored and the manipulations at test time all occur
in the lower dimensional feature space. Second, low dimensional projections re-represent
the inputs, allowing for a supervised embedding or visualization of the original data.
If we consider matrices A with rank at most q, we can always represent them in the form
A = W^T W for some projection matrix W of size q × r. This corresponds to projecting
the original data into a q-dimensional space specified by the rows of W. However, rank
constraints on a matrix are not convex [4], and hence the rank constrained problem is not
convex and is likely to have local minima which make the optimization difficult and illdefined since it becomes sensitive to initial conditions and choice of optimization method.
Luckily, there is an alternative approach to obtaining low dimensional projections, which
does specify a unique solution by sequentially solving two globally tractable problems.
This is the approach we follow here. First we solve for a (potentially) full rank metric A using the convex program outlined above, and then obtain a low rank projection from it via spectral decomposition. This is done by diagonalizing A into the form
A = Σ_{i=1}^{r} λi vi vi^T, where λ1 ≥ λ2 ≥ … ≥ λr are the eigenvalues of A and vi are the corresponding eigenvectors. To obtain a low rank projection we constrain the sum above to
include only the q terms corresponding to the q largest eigenvalues: Aq = Σ_{i=1}^{q} λi vi vi^T.
The resulting projection is uniquely defined (up to an irrelevant unitary transformation) as
W = diag(√λ1, …, √λq) [v1^T; …; vq^T].
In general, the projection returned by this approach is not guaranteed to be the same as the
projection corresponding to minimizing our objective function subject to a rank constraint
on A unless the optimal metric A is of rank less than or equal to q . However, as we show
in the experimental results, it is often the case that for practical problems the optimal A has
an eigen-spectrum which is rapidly decaying, so that many of its eigenvalues are indeed
very small, suggesting the low rank solution will be close to optimal.
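The two-stage recipe then reduces to a few lines (a sketch under the notation above):

    import numpy as np

    def low_rank_projection(A, q):
        """W = diag(sqrt(lam_1..q)) [v_1^T; ...; v_q^T] from the spectral decomposition."""
        lam, V = np.linalg.eigh(A)                    # ascending eigenvalues
        top = np.argsort(lam)[::-1][:q]               # indices of the q largest
        return np.sqrt(np.clip(lam[top], 0, None))[:, None] * V[:, top].T

The q-dimensional features are then X @ W.T, and W.T @ W recovers Aq.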
4 Learning Metrics with Kernels
It is interesting to consider the case where xi are mapped into a high dimensional feature
space φ(xi) and a Mahalanobis distance is sought in this space. We focus on the case
where dot products in the feature space may be expressed via a kernel function, such that
φ(xi)·φ(xj) = k(xi, xj) for some kernel k. We now show how our method can be changed
to accommodate this setting, so that optimization depends only on dot products.
Consider the regularized target function:

    fReg(A) = Σi KL[p0(j|i) | p(j|i)] + λ Tr(A),                              (6)
where the regularizing factor is equivalent to the Frobenius norm of the projection matrix
W since Tr(A) = ||W||². Deriving w.r.t. W we obtain W = UX, where U is some matrix
which specifies W as a linear combination of sample points, and the ith row of the matrix
X is xi. Thus A is given by A = X^T U^T UX. Defining the PSD matrix Â = U^T U, we can
recast our optimization as looking for a PSD matrix Â, where the Mahalanobis distance
is (xi − xj)^T X^T Â X (xi − xj) = (ki − kj)^T Â (ki − kj), where we define ki = X xi.
This is exactly our original distance, with xi replaced by ki, which depends only on dot
products in X space. The regularization term also depends solely on the dot products since
Tr(A) = Tr(X^T Â X) = Tr(X X^T Â) = Tr(K Â), where K is the kernel matrix given
by K = X X^T. Note that the trace is a linear function of Â, keeping the problem convex.
Thus, as long as dot products can be represented via kernels, the optimization can be carried
out without explicitly using the high dimensional space.
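Since ki = Xxi is just the i-th column of the kernel matrix, the kernelized distance needs only kernel evaluations; a small helper (ours):

    import numpy as np

    def kernel_distance(Kmat, A_hat, i, j):
        """(k_i - k_j)^T A_hat (k_i - k_j), with k_i the i-th column of the kernel matrix."""
        v = Kmat[:, i] - Kmat[:, j]
        return float(v @ A_hat @ v)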
To obtain a low dimensional solution, we follow the approach in Section 3: obtain a decomposition A = V^T D V⁶, and take the projection matrix to be the first q rows of D^{0.5} V.
As a first step, we calculate a matrix B such that Â = B^T B, and thus A = X^T B^T B X.
Since A is a correlation matrix for the rows of BX, it can be shown (as in Kernel PCA) that
its (left) eigenvectors are linear combinations of the rows of BX. Denoting by V = Γ B X
the eigenvector matrix, we obtain, after some algebra, that Γ B K B^T Γ^T = D. We conclude
that the rows of Γ are eigenvectors of the matrix B K B^T. Denote by Γ̂ the matrix whose rows are
orthonormal eigenvectors of B K B^T. Then V can be shown to be orthonormal if we set
V = D^{−0.5} Γ̂ B X. The final projection will then be D^{0.5} V xi = Γ̂ B ki. Low dimensional
projections will be obtained by keeping only the first q components of this projection.
5 Experimental Results
We compared our method to several metric learning algorithms on a supervised classification task. Training data was first used to learn a metric over the input space. Then this
metric was used in a 1-nearest-neighbor algorithm to classify a test set. The datasets we investigated were taken from the UCI repository and have been used previously in evaluating
supervised methods for metric learning [10, 5]. To these we added the USPS handwritten
digits (downsampled to 8x8 pixels) and the YALE faces [2] (downsampled to 31x22).
The algorithms used in the comparative evaluation were
Fisher's Linear Discriminant Analysis (LDA), which projects on the eigenvectors
of S_W^{−1} S_B where S_W, S_B are the within and between class covariance matrices.
The method of Xing et al. [10], which minimizes the mean within class distance,
while keeping the mean between class distance larger than one.
Principal Component Analysis (PCA). There are several possibilities for scaling
the PCA projections. We tested several, and report results of the empirically superior one (PCAW), which scales the projection components so that the covariance
matrix after projection is the identity. PCAW often performs poorly on high dimensions, but globally outperforms all other variants.
We also evaluated the kernel version of MCML with an RBF kernel (denoted by KMCML)⁷. Since all methods allow projections to lower dimensions we compared performance for different projection dimensions⁸.
The out-of-sample performance results (based on 40 random splits of the data, taking 70%
for training and 30% for testing⁹) are shown in Figure 1. It can be seen that when used in a
simple nearest-neighbour classifier, the metric learned by MCML almost always performs
as well as, or significantly better than those learned by all other methods, across most
dimensions. Furthermore, the kernel version of MCML outperforms the linear one on most
datasets.
⁶Where V is orthonormal, and the eigenvalues in D are sorted in decreasing order.
⁷The regularization parameter and the width of the RBF kernel were chosen using 5 fold cross-validation. KMCML was only evaluated for datasets with less than 1000 training points.
⁸To obtain low dimensional mappings we used the approach outlined in Section 3.
⁹Except for the larger datasets where 1000 random samples were used for training.
[Figure 1 appears here: error rate vs. projection dimension on Wine, Balance, Ion, Soybean-small,
Protein, Spam, Yale7, Housing and Digits, for MCML, PCAW, LDA, XING and KMCML.]
Figure 1: Classification error rate on several UCI datasets, USPS digits and YALE faces, for
different projection dimensions. Algorithms are our Maximally Collapsing Metric Learning (MCML), Xing et al. [10], PCA with whitening transformation (PCAW) and Fisher's
Discriminant Analysis (LDA). Standard errors of the means shown on curves. No results
given for XING on YALE and KMCML on Digits and Spam due to the data size.
5.1 Comparison to non convex procedures
The methods in the previous comparison are all well defined, in the sense that they are not
susceptible to local minima in the optimization. They also have the added advantage of obtaining projections to all dimensions using one optimization run. Below, we also compare
the MCML results to the results of two non-convex procedures. The first is the Non Convex
variant of MCML (NMCML): the objective function of MCML can be optimized w.r.t. the
projection matrix W, where A = W^T W. Although this is no longer a convex problem, it
is not constrained and is thus easier to optimize. The second non convex method is Neighbourhood Components Analysis (NCA) [5], which attempts to directly minimize the error
incurred by a nearest neighbor classifier.
For both methods we optimized the matrix W by restarting the optimization separately
for each size of W . Minimization was performed using a conjugate gradient algorithm,
initialized by LDA or randomly. Figure 2 shows results on a subset of the UCI datasets.
It can be seen that the performance of NMCML is similar to that of MCML, although it
is less stable, possibly due to local minima, and both methods usually outperform NCA.
The inset in each figure shows the spectrum of the MCML matrix A, revealing that it often
drops quickly after a few dimensions. This illustrates the effectiveness of our two stage
optimization procedure, and suggests its low dimensional solutions are close to optimal.
6 Discussion and Extensions
We have presented an algorithm for learning maximally collapsing metrics (MCML), based
on the intuition of collapsing classes into single points. MCML assumes that each class
[Figure 2 appears here: classification error vs. projection dimension on Wine, Balance, Soybean,
Protein, Ion and Housing, for MCML, NMCML and NCA, with MCML eigen-spectra insets.]
Figure 2: Classification error for non convex procedures, and the MCML method.
Eigen-spectra for the MCML solution are shown in the inset.
may be collapsed to a single point, at least approximately, and thus is only suitable for unimodal class distributions (or for simply connected sets if kernelization is used). However,
if points belonging to a single class appear in several disconnected clusters in input (or
feature) space, it is unlikely that MCML could collapse the class into a single point. It is
possible that using a mixture of distributions, an EM-like algorithm can be constructed to
accommodate this scenario.
The method can also be used to learn low dimensional projections of the input space. We
showed that it performs well, even across a range of projection dimensions, and consistently
outperforms existing methods. Finally, we have shown how the method can be extended
to projections in high dimensional feature spaces using the kernel trick. The resulting
nonlinear method was shown to improve classification results over the linear version.
References
[1] N. Alon and P. Pudlak. Equilateral sets in l_p^n. Geom. Funct. Anal., 13(3), 2003.
[2] P. N. Belhumeur, J. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces:
Recognition using class specific linear projection. In ECCV (1), 1996.
[3] D.P. Bertsekas. On the Goldstein-Levitin-Polyak gradient projection method. IEEE
Transaction on Automatic Control, 21(2):174?184, 1976.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2004.
[5] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems (NIPS), 2004.
[6] T. Hastie, R. Tibshirani, and J.H. Friedman. The elements of statistical learning: data
mining, inference, and prediction. New York: Springer-Verlag, 2001.
[7] G. Hinton and S. Roweis. Stochastic neighbor embedding. In Advances in Neural
Information Processing Systems (NIPS), 2002.
[8] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm.
In Advances in Neural Information Processing Systems (NIPS), 2001.
[9] N. Shental, T. Hertz, D. Weinshall, and M. Pavel. Adjustment learning and relevant
component analysis. In Proc. of ECCV, 2002.
[10] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning, with application
to clustering with side-information. In Advances in Neural Information Processing
Systems (NIPS), 2004.
Soft Clustering on Graphs
Kai Yu¹, Shipeng Yu², Volker Tresp¹
¹Siemens AG, Corporate Technology
²Institute for Computer Science, University of Munich
[email protected], [email protected]
[email protected]
Abstract
We propose a simple clustering framework on graphs encoding pairwise
data similarities. Unlike usual similarity-based methods, the approach
softly assigns data to clusters in a probabilistic way. More importantly,
a hierarchical clustering is naturally derived in this framework to gradually merge lower-level clusters into higher-level ones. A random walk
analysis indicates that the algorithm exposes clustering structures in various resolutions, i.e., a higher level statistically models a longer-term diffusion on graphs and thus discovers a more global clustering structure.
Finally we provide very encouraging experimental results.
1 Introduction
Clustering has been widely applied in data analysis to group similar objects. Many algorithms are either similarity-based or model-based. In general, the former (e.g., normalized
cut [5]) requires no assumption on data densities but simply a similarity function, and
usually partitions data exclusively into clusters. In contrast, model-based methods apply
mixture models to fit data distributions and assign data to clusters (i.e. mixture components)
probabilistically. This soft clustering is often desired, as it encodes uncertainties on datato-cluster assignments. However, their density assumptions can sometimes be restrictive,
e.g. clusters have to be Gaussian-like in Gaussian mixture models (GMMs).
In contrast to flat clustering, hierarchical clustering makes intuitive senses by forming a
tree of clusters. Despite of its wide applications, the technique is usually achieved by
heuristics (e.g., single link) and lacks theoretical backup. Only a few principled algorithms
exist so far, where a Gaussian or a sphere-shape assumption is often made [3, 1, 2].
This paper suggests a novel graph-factorization clustering (GFC) framework that employs
data?s affinities and meanwhile partitions data probabilistically. A hierarchical clustering
algorithm (HGFC) is further derived by merging lower-level clusters into higher-level ones.
Analysis based on graph random walks suggests that our clustering method models data
affinities as empirical transitions generated by a mixture of latent factors. This view significantly differs from conventional model-based clustering since here the mixture model
is not directly for data objects but for their relations. Clusters with arbitrary shapes can be
modeled by our method since only pairwise similarities are considered. Interestingly, we
prove that the higher-level clusters are associated with longer-term diffusive transitions on
the graph, amounting to smoother and more global similarity functions on the data mani-
fold. Therefore, the cluster hierarchy exposes the observed affinity structure gradually in
different resolutions, which is somehow similar to the wavelet method that analyzes signals in different bandwidths. To the best of our knowledge, this property has never been
considered by other agglomerative hierarchical clustering algorithms (e.g., see [3]).
The paper is organized as follows. In the following section we describe a clustering algorithm based on similarity graphs. In Sec. 3 we generalize the algorithm to hierarchical
clustering, followed by a discussion from the random walk point of view in Sec. 4. Finally
we present the experimental results in Sec. 5 and conclude the paper in Sec. 6.
2 Graph-factorization clustering (GFC)
Data similarity relations can be conveniently encoded by a graph, where vertices denote
data objects and adjacency weights represent data similarities. This section introduces
graph factorization clustering, which is a probabilistic partition of graph vertices. Formally, let G(V, E) be a weighted undirected graph with vertices V = {vi}_{i=1}^{n} and edges
E ⊆ {(vi, vj)}. Let W = {wij} be the adjacency matrix, where wij = wji, wij > 0
if (vi, vj) ∈ E and wij = 0 otherwise. For instance, wij can be computed by the RBF
similarity function based on the features of objects i and j, or by a binary indicator (0 or 1)
of the k-nearest neighbor affinity.
2.1 Bipartite graphs
Before presenting the main idea, it is necessary to introduce bipartite graphs. Let
K(V, U, F) be the bipartite graph (e.g., Fig. 1(b)), where V = {vi}_{i=1}^{n} and U =
{up}_{p=1}^{m} are the two disjoint vertex sets and F contains all the edges connecting V and
U. Let B = {bip} denote the n × m adjacency matrix with bip ≥ 0 being the weight for
edge [vi, up]. The bipartite graph K induces a similarity between vi and vj [6]

    wij = Σ_{p=1}^{m} (bip bjp / λp) = (B Λ^{−1} B^T)_{ij},    Λ = diag(λ1, …, λm)        (1)
from the perspective of Markov random walks on graphs. wij is essentially a quantity
proportional to the stationary probability of direct transitions between vP
i and vj , denoted
by p(vi , vj ). Without loss of generality, we normalize W to ensure ij wij = 1 and
wij = p(vi , vj ). For a bipartite graph K(V, U, F), there is no direct links between vertices
in V, and all the paths from vi to vj must go through vertices in U. This indicates
p(vi , vj ) = p(vi )p(vj |vi ) = di
X
p(up |vi )p(vj |up ) =
p
X p(vi , up )p(up , vj )
p
?p
,
where p(vj |vi ) is the conditional transition probability from vi to vj , and di = p(vi ) the
degree of vi . This directly leads to Eq. (1) with bip = p(vi , up ).
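In matrix form the induced similarity of Eq. (1) is a one-liner; a minimal sketch (ours):

    import numpy as np

    def induced_similarity(B):
        """W = B Lambda^{-1} B^T of Eq. (1); B is the (n, m) bipartite adjacency matrix."""
        lam = B.sum(axis=0)        # lambda_p: degrees of the cluster vertices u_p
        return (B / lam) @ B.T     # divides column p of B by lambda_p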
2.2 Graph factorization by bipartite graph construction
For a bipartite graph K, p(up |vi ) = bip /di tells the conditional probability of transitions
from vi to up . If the size of U is smaller than that of V, namely m < n, then p(up |vi )
indicates how likely data point i belongs to vertex p. This property suggests that one can
construct a bipartite graph K(V, U, F) to approximate a given G(V, E), and then obtain
a soft clustering structure, where U corresponds to clusters (see Fig. 1(a), (b)).
[Figure 1 appears here with panels (a), (b) and (c).]
Figure 1: (a) The original graph representing data affinities; (b) The bipartite graph representing data-to-cluster relations; (c) The induced cluster affinities.
Eq. (1) suggests that this approximation can be done by minimizing ℓ(W, B Λ^{−1} B^T),
given a distance ℓ(·, ·) between two adjacency matrices. To make the problem easy to
solve, we remove the coupling between B and Λ via H = B Λ^{−1} and then have

    min_{H,Λ} ℓ(W, H Λ H^T),   s.t.  Σ_{i=1}^{n} hip = 1,  H ∈ R_+^{n×m},  Λ ∈ D_+^{m×m},      (2)

where D_+^{m×m} denotes the set of m × m diagonal matrices with positive diagonal entries.
This problem is a symmetric variant of non-negative matrix factorization [4]. In this paper we focus on the divergence distance between matrices. The following theorem suggests an alternating optimization approach to find a local minimum:

Theorem 2.1. For the divergence distance $\ell(X, Y) = \sum_{ij} (x_{ij} \log \frac{x_{ij}}{y_{ij}} - x_{ij} + y_{ij})$, the cost function in Eq. (2) is non-increasing under the update rule ($\tilde{\cdot}$ denotes updated quantities)

$$\tilde{h}_{ip} \propto h_{ip} \sum_j \frac{w_{ij}}{(H\Lambda H^\top)_{ij}} \lambda_p h_{jp}, \quad \text{normalized s.t.} \;\; \sum_i \tilde{h}_{ip} = 1; \qquad (3)$$

$$\tilde{\lambda}_p \propto \lambda_p \sum_{ij} \frac{w_{ij}}{(H\Lambda H^\top)_{ij}} h_{ip} h_{jp}, \quad \text{normalized s.t.} \;\; \sum_p \tilde{\lambda}_p = \sum_{ij} w_{ij}. \qquad (4)$$

The distance is invariant under the updates if and only if $H$ and $\Lambda$ are at a stationary point.
See the Appendix for all the proofs in this paper. Similar to a GMM, $p(u_p \mid v_i) = b_{ip} / \sum_q b_{iq}$ is the soft probabilistic assignment of vertex $v_i$ to cluster $u_p$. The method can be seen as a counterpart of mixture models on graphs. The time complexity is $O(m^2 N)$ with $N$ being the number of nonzero entries in $W$. This can be very efficient if $W$ is sparse (e.g., for a k-nearest-neighbor graph the complexity $O(m^2 nk)$ scales linearly with the sample size $n$).
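To make the alternating optimization concrete, the following NumPy sketch implements the multiplicative updates of Eqs. (3) and (4). The initialization, iteration count, and naming are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def gfc_updates(W, m, n_iter=200, eps=1e-12, seed=0):
    """Factorize a symmetric nonnegative W as H diag(lam) H^T (Eq. 2)
    using the multiplicative updates of Eqs. (3)-(4)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    H = rng.random((n, m)) + eps
    H /= H.sum(axis=0, keepdims=True)              # columns sum to 1
    lam = np.full(m, W.sum() / m)                  # sum_p lam_p = sum_ij w_ij
    for _ in range(n_iter):
        R = W / np.maximum((H * lam) @ H.T, eps)   # w_ij / (H Lam H^T)_ij
        H = H * (R @ (H * lam))                    # Eq. (3), before normalization
        H /= np.maximum(H.sum(axis=0, keepdims=True), eps)
        lam = lam * np.einsum('ij,ip,jp->p', R, H, H)   # Eq. (4)
        lam *= W.sum() / max(lam.sum(), eps)
    return H, lam
```

The soft assignment is then recovered as the row-normalized $B = H\Lambda$, exactly as in the GMM analogy above.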
3 Hierarchical graph-factorization clustering (HGFC)
As a nice property of the proposed graph factorization, a natural affinity between two clusters $u_p$ and $u_q$ can be computed as

$$p(u_p, u_q) = \sum_{i=1}^{n} \frac{b_{ip} b_{iq}}{d_i} = \left(B^\top D^{-1} B\right)_{pq}, \qquad D = \operatorname{diag}(d_1, \ldots, d_n). \qquad (5)$$

This is similar to Eq. (1), but derived from the other direction of two-hop transitions, $U \to V \to U$. Note that the similarity between clusters $p$ and $q$ takes into account a weighted average of contributions from all the data (see Fig. 1(c)).
Let $G_0(V_0, E_0)$ be the initial graph describing the similarities of all $m_0 = n$ data points, with adjacency matrix $W_0$. Based on $G_0$ we can build a bipartite graph $K_1(V_0, V_1, F_1)$ with $m_1 < m_0$ vertices in $V_1$. A hierarchical clustering method can be motivated from the observation that the cluster similarity in Eq. (5) suggests a new adjacency matrix $W_1$ for a graph $G_1(V_1, E_1)$, where $V_1$ is formed by clusters and $E_1$ contains edges connecting these clusters. Then we can group those clusters by constructing another bipartite graph $K_2(V_1, V_2, F_2)$ with $m_2 < m_1$ vertices in $V_2$, such that $W_1$ is again factorized as in Eq. (2), and a new graph $G_2(V_2, E_2)$ can be built. In principle we can repeat this procedure until we get only one cluster. Algorithm 1 summarizes this procedure.
Algorithm 1 Hierarchical Graph-Factorization Clustering (HGFC)
Require: n data objects and a similarity measure
1: build the similarity graph $G_0(V_0, E_0)$ with adjacency matrix $W_0$, and let $m_0 = n$
2: for $l = 1, 2, \ldots$ do
3:    choose $m_l < m_{l-1}$
4:    factorize $G_{l-1}$ to obtain $K_l(V_{l-1}, V_l, F_l)$ with the adjacency matrix $B_l$
5:    build a graph $G_l(V_l, E_l)$ with the adjacency matrix $W_l = B_l^\top D_l^{-1} B_l$, where $D_l$'s diagonal entries are obtained by summation over $B_l$'s columns
6: end for
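A compact sketch of the loop in Algorithm 1, reusing the gfc_updates routine sketched earlier (again illustrative code of our own; the per-level cluster counts are supplied by the caller):

```python
def hgfc(W0, level_sizes, n_iter=200):
    """Algorithm 1: repeatedly factorize the current graph and build the
    next, coarser cluster graph via W_l = B_l^T D_l^{-1} B_l (line 5)."""
    W, Bs = W0, []
    for m in level_sizes:                  # m_1 > m_2 > ... as in line 3
        H, lam = gfc_updates(W, m, n_iter)
        B = H * lam                        # b_ip = h_ip * lam_p
        d = B.sum(axis=1)                  # degrees of the finer-level vertices
        W = B.T @ (B / d[:, None])         # adjacency of the new cluster graph
        Bs.append(B)
    return Bs
```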
The algorithm ends up with a hierarchical clustering structure. For level $l$, we can assign data to the obtained $m_l$ clusters via a propagation from the bottom level of clusters. Based on the chain rule of Markov random walks, the soft (i.e., probabilistic) assignment of $v_i \in V_0$ to cluster $v_p^{(l)} \in V_l$ is given by

$$p\left(v_p^{(l)} \mid v_i\right) = \sum_{v^{(l-1)} \in V_{l-1}} \cdots \sum_{v^{(1)} \in V_1} p\left(v_p^{(l)} \mid v^{(l-1)}\right) \cdots p\left(v^{(1)} \mid v_i\right) = \left(D_1^{-1} \tilde{B}_l\right)_{ip}, \qquad (6)$$

where $\tilde{B}_l = B_1 D_2^{-1} B_2 D_3^{-1} B_3 \cdots D_l^{-1} B_l$. One can interpret this by deriving an equivalent bipartite graph $\tilde{K}_l(V_0, V_l, \tilde{F}_l)$, treating $\tilde{B}_l$ as the equivalent adjacency matrix attached to the equivalent edges $\tilde{F}_l$ connecting the data $V_0$ and the clusters $V_l$.
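The propagation in Eq. (6) amounts to chaining row-normalized bipartite matrices, as the following sketch of ours shows (the row sums of each $B_l$ equal the corresponding vertex degrees when the factorization at that level is exact):

```python
def soft_assignments(Bs):
    """Soft assignment of the original data to every level's clusters (Eq. 6)."""
    out, P = [], None
    for B in Bs:
        step = B / B.sum(axis=1, keepdims=True)   # p(cluster | vertex)
        P = step if P is None else P @ step       # chain rule of the walk
        out.append(P)                             # row i: p( . | v_i) at this level
    return out
```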
4 Analysis of the proposed algorithms
4.1 Flat clustering: statistical modeling of single-hop transitions
In this section we provide some insights into the suggested clustering algorithm, mainly from the perspective of random walks on graphs. Suppose that, from a stationary stage of random walks on G(V, E), one observes $c_{ij}$ single-hop transitions between $v_i$ and $v_j$ in a unit time frame. The intuition of the graph-based view of similarities is that if two data points are similar or related, transitions between them are likely to happen. Thus we connect the observed similarities to the frequency of transitions via $w_{ij} \propto c_{ij}$. If the observed transitions are i.i.d. sampled from a true distribution $p(v_i, v_j) = (H\Lambda H^\top)_{ij}$ behind which a bipartite graph lies, then the log likelihood with respect to the observed transitions is

$$\mathcal{L}(H, \Lambda) = \log \prod_{ij} p(v_i, v_j)^{c_{ij}} \propto \sum_{ij} w_{ij} \log (H \Lambda H^\top)_{ij}. \qquad (7)$$
Then we have the following conclusion.

Proposition 4.1. For a weighted undirected graph G(V, E) and the log likelihood defined in Eq. (7), the following results hold: (i) minimizing the divergence distance $\ell(W, H\Lambda H^\top)$ is equivalent to maximizing the log likelihood $\mathcal{L}(H, \Lambda)$; (ii) updates Eq. (3) and Eq. (4) correspond to a standard EM algorithm for maximizing $\mathcal{L}(H, \Lambda)$.
Figure 2: The similarities of vertices to a fixed vertex (marked in the left panel) on a 6-nearest-neighbor graph, induced respectively by clustering level l = 2 (the middle panel) and l = 6 (the right panel). A darker color means a higher similarity.
4.2 Hierarchical clustering: statistical modeling of multi-hop transitions
The adjacency matrix $W_0$ of $G_0(V_0, E_0)$ only models one-hop transitions that follow direct links from vertices to their neighbors. However, the random walk is a process of diffusion on the graph. Within a relatively long period, a walker starting from a vertex has the chance to reach faraway vertices through multi-hop transitions. Obviously, multi-hop transitions induce a slowly decaying similarity function on the graph. Based on the chain rule of the Markov process, the equivalent adjacency matrix for t-hop transitions is

$$A_t = W_0 \left(D_0^{-1} W_0\right)^{t-1} = A_{t-1} D_0^{-1} W_0. \qquad (8)$$
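Eq. (8) can be evaluated with a simple recursion; a NumPy sketch (ours) is:

```python
import numpy as np

def multi_hop_adjacency(W0, t):
    """Equivalent adjacency matrix A_t for t-hop transitions (Eq. 8)."""
    d = W0.sum(axis=1)                 # vertex degrees, diagonal of D_0
    A = W0.copy()
    for _ in range(t - 1):
        A = (A / d[None, :]) @ W0      # A_{s+1} = A_s D_0^{-1} W_0
    return A
```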
Generally speaking, a slowly decaying similarity function on the similarity graph captures the global affinity structure of data manifolds, while a rapidly decaying similarity function only captures the local affinity structure. The following proposition states that in the suggested HGFC, a higher-level clustering implicitly employs a more global similarity measure induced by multi-hop Markov random walks:

Proposition 4.2. For a given hierarchical clustering structure that starts from a bottom graph $G_0(V_0, E_0)$ and goes up to a higher level $G_k(V_k, E_k)$, the vertices $V_l$ at level $0 < l \le k$ induce an equivalent adjacency matrix of $V_0$, which is $A_t$ with $t = 2^{l-1}$ as defined in Eq. (8).
Therefore the presented hierarchical clustering algorithm HGFC applies different sizes of time windows to examine the random walks, and derives similarity measures at different scales to expose the local and global clustering structures of data manifolds. Fig. 2 illustrates the similarities of vertices to a fixed vertex at clustering levels l = 2 and l = 6, which correspond to time periods t = 2 and t = 32. It can be seen that for the short period t = 2 the similarity is very local and helps to uncover low-level clusters, while for the longer period t = 32 the similarity function is rather global.
5 Empirical study
We apply HGFC to USPS handwritten digits and Newsgroup text data. For the USPS data we use the images of digits 1, 2, 3 and 4, with respectively 1269, 929, 824 and 852 images per class. Each image is represented as a 256-dimensional vector. The text data contain in total 3970 documents covering 4 categories: autos, motorcycles, baseball, and hockey. Each document is represented by an 8014-dimensional TF-IDF feature vector. Our method employs a 10-nearest-neighbor graph, with RBF similarity for USPS and cosine similarity for Newsgroup. We perform 4-level HGFC, and set the cluster numbers, from bottom to top, to 100, 20, 10 and 4 for both data sets.
We compare HGFC with two popular agglomerative hierarchical clustering algorithms, single link and complete link (e.g., [3]). Both methods merge the two closest clusters at each step.
Figure 3: Visualization of HGFC for the USPS data set. Left: mean images of the top 3 clustering levels, along with a Hinton graph representing the soft (probabilistic) assignments of 10 randomly chosen digits (shown on the left) to the top 3rd-level clusters; Middle: a Hinton graph showing the soft cluster assignments from the top 3rd level to the top 2nd level; Right: a Hinton graph showing the soft assignments from the top 2nd level to the top 1st level.
Figure 4: Comparison of clustering methods on USPS (left) and Newsgroup (right), evaluated by normalized mutual information (NMI). Higher values indicate better quality.
Single link defines the cluster distance to be the smallest point-wise distance between two
clusters, while complete link uses the largest one. A third method compared is normalized
cut [5], which partitions data into two clusters. We apply the algorithm recursively to produce a top-down hierarchy of 2, 4, 8, 16, 32 and 64 clusters. We also compare with the
k-means algorithm, k = 4, 10, 20 and 100.
Before showing the comparison, we visualize part of the clustering results for the USPS data in Fig. 3. At the top of the left figure, we show the top three levels of the hierarchy with respectively 4, 10 and 20 clusters, where each cluster is represented by its mean image, i.e., an average over all the images weighted by their posterior probabilities of belonging to that cluster. Then 10 randomly sampled digits with their soft cluster assignments to the top 3rd-level clusters are illustrated with a Hinton graph. The middle and right figures in Fig. 3 show the assignments between clusters across the hierarchy. The clear diagonal block structure in all the Hinton graphs indicates a very meaningful cluster hierarchy.
Table 1: Confusion matrices of clustering results, 4 clusters, USPS data (true classes '1'-'4'), for normalized cut, HGFC, and k-means. In each confusion matrix, rows correspond to the true classes and columns to the found clusters. [The individual cell values of the three 4x4 matrices were scrambled during text extraction and cannot be reliably realigned.]
Table 2: Confusion matrices of clustering results, 4 clusters, Newsgroup data (true classes autos, motorcycles, baseball, hockey), for normalized cut, HGFC, and k-means. In each confusion matrix, rows correspond to the true classes and columns to the found clusters. [The individual cell values of the three 4x4 matrices were scrambled during text extraction and cannot be reliably realigned.]
We compare the clustering methods by evaluating the normalized mutual information (NMI) in Fig. 4. It is defined as the mutual information between clusters and true classes, normalized by the maximum of the marginal entropies. Moreover, in order to assess the clustering quality more directly, we also show the confusion matrices in Table 1 and Table 2 for the case of producing 4 clusters. We omit the confusion matrices of single link and complete link from the tables, to save space and also because of their clearly poor performance compared with the others.
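For reference, NMI as defined here can be computed directly from the class-cluster contingency table; a short sketch of our own (using natural-log entropies) follows:

```python
import numpy as np

def nmi(y_true, y_pred):
    """Mutual information between true classes and found clusters,
    normalized by the maximum of the two marginal entropies."""
    _, ci = np.unique(y_true, return_inverse=True)
    _, cj = np.unique(y_pred, return_inverse=True)
    C = np.zeros((ci.max() + 1, cj.max() + 1))
    np.add.at(C, (ci, cj), 1.0)                  # contingency counts
    P = C / C.sum()
    pi, pj = P.sum(axis=1), P.sum(axis=0)
    nz = P > 0
    mi = np.sum(P[nz] * np.log(P[nz] / np.outer(pi, pj)[nz]))
    ent = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return mi / max(ent(pi), ent(pj))
```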
The results show that single link performs poorly, as it greedily merges nearby data and tends to form one big cluster plus some outliers. Complete link is more balanced but also unsatisfactory. For the Newsgroup data it even gets stuck at the 3601st merge because all the similarities between the remaining clusters are 0. Top-down hierarchical normalized cut obtains reasonable results, but sometimes cannot split one big cluster (see the tables). The confusion matrices indicate that k-means does well for digit images but relatively worse for high-dimensional textual data. In contrast, Fig. 4 shows that HGFC gives significantly higher NMI values than its competitors on both tasks. It also produces confusion matrices with clear diagonal structures (see Tables 1 and 2), which indicates a very good clustering quality.
6 Conclusion and Future Work
In this paper we have proposed a probabilistic graph partition method for clustering data objects based on their pairwise similarities. A novel hierarchical clustering algorithm, HGFC, has been derived, in which a higher level corresponds to a statistical model of random-walk transitions over a longer period, giving rise to a more global clustering structure. Experiments show very encouraging results.
In this paper we have empirically specified the number of clusters at each level. In the near future we plan to investigate effective methods to determine it automatically. Another direction is hierarchical clustering on directed graphs, as well as its applications in web mining.
Appendix
Proof of Theorem 2.1. We first notice that $\sum_p \lambda_p = \sum_{ij} w_{ij}$ under the constraints $\sum_i h_{ip} = 1$. Therefore we can normalize $W$ by $\sum_{ij} w_{ij}$ and, after convergence, multiply all $\lambda_p$ by this quantity to get the solution. Under this assumption we are maximizing $\mathcal{L}(H, \Lambda) = \sum_{ij} w_{ij} \log(H \Lambda H^\top)_{ij}$ with an extra constraint $\sum_p \lambda_p = 1$. We first fix $\lambda_p$ and show that update Eq. (3) will not decrease $\mathcal{L}(H) \equiv \mathcal{L}(H, \Lambda)$. We prove this by constructing an auxiliary function $f(H, H')$ such that $f(H, H') \le \mathcal{L}(H)$ and $f(H, H) = \mathcal{L}(H)$. Then we know the update $H^{t+1} = \arg\max_H f(H, H^t)$ will not decrease $\mathcal{L}(H)$, since $\mathcal{L}(H^{t+1}) \ge f(H^{t+1}, H^t) \ge f(H^t, H^t) = \mathcal{L}(H^t)$. Define

$$f(H, H') = \sum_{ij} w_{ij} \sum_p \frac{h'_{ip} \lambda_p h'_{jp}}{\sum_l h'_{il} \lambda_l h'_{jl}} \left( \log h_{ip} \lambda_p h_{jp} - \log \frac{h'_{ip} \lambda_p h'_{jp}}{\sum_l h'_{il} \lambda_l h'_{jl}} \right).$$

$f(H, H) = \mathcal{L}(H)$ can be easily verified, and $f(H, H') \le \mathcal{L}(H)$ also follows if we use the concavity of the log function. It is then straightforward to verify Eq. (3) by setting the derivative of $f$ with respect to $h_{ip}$ to zero. The normalization is due to the constraints and can be formally derived from this procedure with a Lagrange formalism. Similarly we can define an auxiliary function for $\Lambda$ with $H$ fixed, and verify Eq. (4).
Proof of Proposition 4.1. (i) follows directly from the proof of Theorem 2.1. To prove (ii) we take $u_p$ as the missing data and follow the standard way of deriving the EM algorithm. In the E-step we estimate the a posteriori probability of taking $u_p$ for the pair $(v_i, v_j)$ using Bayes' rule: $\hat{p}(u_p \mid v_i, v_j) \propto p(v_i \mid u_p)\, p(v_j \mid u_p)\, p(u_p)$. Then in the M-step we maximize the "complete" data likelihood $\hat{\mathcal{L}}(G) = \sum_{ij} w_{ij} \sum_p \hat{p}(u_p \mid v_i, v_j) \log p(v_i \mid u_p)\, p(v_j \mid u_p)\, p(u_p)$ with respect to the model parameters $h_{ip} = p(v_i \mid u_p)$ and $\lambda_p = p(u_p)$, with constraints $\sum_i h_{ip} = 1$ and $\sum_p \lambda_p = 1$. By setting the corresponding derivatives to zero we obtain $h_{ip} \propto \sum_j w_{ij}\, \hat{p}(u_p \mid v_i, v_j)$ and $\lambda_p \propto \sum_{ij} w_{ij}\, \hat{p}(u_p \mid v_i, v_j)$. It is easy to check that these are equivalent to updates Eq. (3) and Eq. (4), respectively.
Proof of Proposition 4.2. We give a brief proof. Suppose that at level $l$ the data-cluster relationship is described by $\tilde{K}_l(V_0, V_l, \tilde{F}_l)$ (see Eq. (6)) with adjacency matrix $\tilde{B}_l$, degrees $D_0$ for $V_0$, and degrees $\Lambda_l$ for $V_l$. In this case the induced adjacency matrix of $V_0$ is $\tilde{W}_l = \tilde{B}_l \Lambda_l^{-1} \tilde{B}_l^\top$, and the adjacency matrix of $V_l$ is $W_l = \tilde{B}_l^\top D_0^{-1} \tilde{B}_l$. Let $K_{l+1}(V_l, V_{l+1}, F_{l+1})$ be the bipartite graph connecting $V_l$ and $V_{l+1}$, with adjacency $B_{l+1}$ and degrees $\Lambda_{l+1}$ for $V_{l+1}$. Then the adjacency matrix of $V_0$ induced by level $l+1$ is

$$\tilde{W}_{l+1} = \tilde{B}_l \Lambda_l^{-1} B_{l+1} \Lambda_{l+1}^{-1} B_{l+1}^\top \Lambda_l^{-1} \tilde{B}_l^\top = \tilde{W}_l D_0^{-1} \tilde{W}_l,$$

where the relations $B_{l+1} \Lambda_{l+1}^{-1} B_{l+1}^\top = \tilde{B}_l^\top D_0^{-1} \tilde{B}_l$ and $\tilde{W}_l = \tilde{B}_l \Lambda_l^{-1} \tilde{B}_l^\top$ are applied. Given the initial condition from the bottom level, $\tilde{W}_1 = W_0$, it is not difficult to obtain $\tilde{W}_l = A_t$ with $t = 2^{l-1}$.
References
[1] J. Goldberger and S. Roweis. Hierarchical clustering of a mixture model. In L.K. Saul, Y. Weiss, and L. Bottou, editors, Neural Information Processing Systems 17 (NIPS*04), pages 505-512, 2005.
[2] K.A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In Proceedings of the 22nd International Conference on Machine Learning, pages 297-304, 2005.
[3] S.D. Kamvar, D. Klein, and C.D. Manning. Interpreting and extending classical agglomerative clustering algorithms using a model-based approach. In Proceedings of the 19th International Conference on Machine Learning, pages 283-290, 2002.
[4] Daniel D. Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In T.K. Leen, T.G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13 (NIPS*00), pages 556-562, 2001.
[5] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[6] D. Zhou, B. Schölkopf, and T. Hofmann. Semi-supervised learning on directed graphs. In L.K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17 (NIPS*04), pages 1633-1640, 2005.
| 2948 |@word middle:3 nd:3 gfc:2 recursively:1 initial:2 contains:2 exclusively:1 daniel:1 document:2 interestingly:1 com:2 goldberger:1 must:1 partition:5 happen:1 hofmann:1 shape:2 motor:1 remove:1 treating:1 drop:1 update:6 stationary:3 intelligence:1 short:1 multihop:1 dn:1 along:1 direct:3 prove:3 yu1:1 introduce:1 pairwise:3 examine:1 multi:3 automatically:1 encouraging:2 window:1 increasing:1 totally:2 moreover:1 panel:3 factorized:1 ag:1 alent:1 k2:1 jianbo:1 producing:1 before:2 positive:1 local:4 tends:1 despite:1 encoding:1 path:1 merge:3 suggests:6 factorization:9 statistically:1 directed:2 block:1 differs:1 digit:5 procedure:2 empirical:2 significantly:2 induce:1 get:3 cannot:1 conventional:1 equivalent:6 missing:1 maximizing:3 shi:1 go:1 straightforward:1 starting:1 resolution:2 assigns:1 m2:3 rule:4 insight:1 importantly:1 deriving:1 updated:1 hierarchy:5 construction:1 suppose:2 us:1 cut:6 observed:4 bottom:4 capture:1 decrease:2 observes:1 principled:1 intuition:1 balanced:1 complexity:2 seung:1 baseball:2 bipartite:14 f2:1 usps:7 easily:1 various:1 represented:3 describe:1 effective:1 tell:2 heuristic:1 kai:2 widely:1 encoded:1 solve:1 otherwise:1 g1:1 ip:4 obviously:1 propose:1 motorcycle:1 rapidly:1 poorly:1 roweis:1 intuitive:1 normalize:4 olkopf:1 amounting:1 convergence:1 cluster:53 extending:1 produce:2 object:6 help:1 coupling:1 illustrate:1 derive:1 nearest:4 ij:20 eq:17 auxiliary:2 indicate:1 direction:1 adjacency:18 require:1 assign:2 f1:1 fix:1 proposition:5 tfidf:1 summation:1 yij:2 hold:1 considered:2 visualize:1 m0:3 smallest:1 expose:3 largest:1 wl:5 weighted:4 clearly:1 gaussian:3 rather:1 pn:1 zhou:1 volker:2 probabilistically:2 derived:5 focus:1 vk:1 unsatisfactory:1 indicates:6 mainly:1 likelihood:4 check:1 contrast:3 greedily:1 posteriori:1 el:1 softly:1 vl:16 relation:4 wij:21 arg:1 denoted:1 plan:1 mutual:3 marginal:1 construct:1 never:1 saving:1 hop:8 yu:1 future:2 others:1 few:1 employ:3 randomly:2 divergence:3 investigate:1 mining:1 multiply:1 introduces:1 mixture:7 sens:1 behind:1 chain:2 edge:5 necessary:1 tree:1 walk:11 desired:1 e0:4 theoretical:1 instance:1 hip:9 soft:9 column:3 modeling:2 formalism:1 assignment:8 cost:1 vertex:19 entry:3 connect:1 st:1 density:2 international:2 probabilistic:6 lee:1 connecting:4 w1:3 again:1 choose:1 slowly:2 worse:1 ek:1 derivative:2 account:1 de:1 sec:4 b2:1 jitendra:1 caused:1 vi:37 view:3 start:1 decaying:3 bayes:1 contribution:1 ass:1 formed:1 ni:2 il:2 correspond:5 vp:4 generalize:1 handwritten:1 bayesian:1 informatik:1 fore:1 reach:1 sebastian:1 competitor:1 frequency:1 pp:1 dm:2 naturally:1 associated:1 di:4 proof:6 e2:1 sampled:2 popular:1 knowledge:1 color:1 organized:1 segmentation:1 uncover:1 higher:10 supervised:1 follow:2 wei:2 leen:1 done:1 evaluated:1 generality:1 stage:1 until:1 web:1 lack:1 propagation:1 somehow:1 bjp:1 defines:1 quality:3 b3:1 dietterich:1 normalized:9 true:4 verify:2 counterpart:1 former:1 mani:1 contain:1 alternating:1 symmetric:1 nonzero:1 illustrated:1 covering:1 cosine:1 presenting:1 complete:5 confusion:8 performs:1 interpreting:1 image:8 wise:1 discovers:1 novel:2 empirically:1 attached:1 jp:2 jl:2 m1:2 interpret:2 rd:3 similarly:1 pq:1 similarity:32 longer:5 v0:13 maxh:1 closest:1 posterior:1 perspective:2 belongs:1 corp:1 binary:1 wji:1 seen:2 analyzes:1 minimum:1 employed:1 determine:1 maximize:1 period:5 signal:1 ii:2 smoother:1 semi:1 corporate:1 d0:4 sphere:1 e1:2 variant:1 muenchen:1 essentially:1 sometimes:2 represent:1 normalization:1 achieved:1 
diffusive:1 kamvar:1 walker:1 sch:1 extra:1 unlike:1 induced:4 yu2:1 db:1 undirected:2 gmms:1 unitary:1 near:1 split:1 easy:2 fit:1 bandwidth:1 idea:1 motivated:1 speaking:1 generally:1 clear:2 induces:2 category:1 exist:1 xij:3 notice:1 disjoint:1 per:1 klein:1 biq:2 group:2 gmm:1 diffusion:2 ht:6 v1:7 graph:56 uncertainty:1 reasonable:1 appendix:2 summarizes:1 fl:2 followed:1 fold:1 constraint:4 flat:2 encodes:1 nearby:1 min:1 relatively:2 munich:1 poor:1 manning:1 belonging:1 smaller:1 nmi:3 em:2 across:1 outlier:1 gradually:2 invariant:1 visualization:1 describing:1 bip:8 know:1 end:2 apply:3 hierarchical:18 v2:3 uq:2 original:1 denotes:2 clustering:48 ensure:1 top:12 responding:1 giving:1 restrictive:1 k1:1 build:3 ghahramani:1 classical:1 bl:15 malik:1 g0:5 quantity:3 usual:1 diagonal:5 affinity:9 distance:7 link:11 w0:7 manifold:2 agglomerative:3 modeled:1 relationship:1 minimizing:2 difficult:1 gk:1 negative:2 rise:1 perform:1 observation:1 markov:4 hinton:5 frame:1 rn:1 arbitrary:1 namely:1 pair:1 kl:2 specified:1 merges:1 textual:1 nip:3 suggested:2 usually:2 pattern:1 built:1 natural:1 indicator:1 representing:3 technology:1 brief:1 auto:2 tresp:2 text:2 nice:1 heller:1 loss:1 proportional:1 degree:5 editor:3 row:2 repeat:1 gl:2 therep:1 institute:1 wide:1 neighbor:5 taking:1 saul:2 sparse:1 dimension:3 transition:18 evaluating:1 concavity:1 stuck:1 made:1 far:1 transaction:1 approximate:1 obtains:1 uni:1 implicitly:1 ml:3 global:7 b1:1 conclude:1 factorize:1 latent:1 hockey:2 table:7 bottou:2 shipeng:1 meanwhile:1 constructing:2 vj:23 diag:2 main:1 linearly:1 backup:1 big:2 fied:1 fig:8 darker:1 third:1 wavelet:1 theorem:4 down:2 showing:3 dl:2 derives:1 merging:1 illustrates:1 nk:1 entropy:1 simply:1 likely:2 forming:1 faraway:1 conveniently:1 lagrange:1 g2:1 applies:1 corresponds:3 iph:1 chance:1 conditional:2 marked:1 ljp:1 rbf:2 principal:1 experimental:2 siemens:3 meaningful:1 newsgroup:5 formally:2 d1:1 |
2,146 | 2,949 | A Computational Model of Eye Movements during Object Class Detection
Wei Zhang, Hyejin Yang, Dimitris Samaras, Gregory J. Zelinsky
Dept. of Computer Science and Dept. of Psychology
State University of New York at Stony Brook, Stony Brook, NY 11794
{wzhang,samaras}@cs.sunysb.edu
[email protected]
[email protected]
Abstract
We present a computational model of human eye movements in an object class detection task. The model combines state-of-the-art computer
vision object class detection methods (SIFT features trained using AdaBoost) with a biologically plausible model of human eye movement to
produce a sequence of simulated fixations, culminating with the acquisition of a target. We validated the model by comparing its behavior to
the behavior of human observers performing the identical object class
detection task (looking for a teddy bear among visually complex nontarget objects). We found considerable agreement between the model
and human data in multiple eye movement measures, including number
of fixations, cumulative probability of fixating the target, and scanpath
distance.
1. Introduction
Object detection is one of our most common visual operations. Whether we are driving [1], making a cup of tea [2], or looking for a tool on a workbench [3], hundreds of times each day our visual system is asked to detect, localize, or acquire, through movements of gaze, objects and patterns in the world.
In the human behavioral literature, this topic has been extensively studied in the context of visual search. In a typical search task, observers are asked to indicate, usually by button press, whether a specific target is present or absent in a visual display (see [4] for a review). A primary manipulation in these studies is the number of non-target objects also appearing in the scene. A bedrock finding in this literature is that, for targets that cannot be defined by a single visual feature, target detection times increase linearly with the number of non-targets, a form of clutter or "set size" effect. Moreover, the slope of the function relating detection speed to set size is steeper (by roughly a factor of two) when the target is absent from the scene compared to when it is present. Search theorists have interpreted these findings as evidence for visual attention moving serially from one object to the next, with the human detection operation typically limited to those objects fixated by this "spotlight" of attention [5].
Object class detection has also been extensively studied in the computer vision community,
with faces and cars being the two most well researched object classes [6, 7, 8, 9]. The
related but simpler task of object class recognition (target recognition without localization)
has also been the focus of exciting recent work [10, 11, 12]. Both tasks use supervised
learning methods to extract visual features. Scenes are typically realistic and highly cluttered, with object appearance varying greatly due to illumination, view, and scale changes.
The task addressed in this paper falls between the class detection and recognition problems.
Like object class detection, we will be detecting and localizing class-defined targets; unlike
object class detection the test images will be composed of at most 20 objects appearing on
a simple background.
Both the behavioral and computer vision literatures have strengths and weaknesses when it
comes to understanding human object class detection. The behavioral literature has accumulated a great deal of knowledge regarding the conditions affecting object detection [4],
but this psychology-based literature has been dominated by the use of simple visual patterns and models that cannot be easily generalized to fully realistic scenes (see [13, 14] for
notable exceptions). Moreover, this literature has focused almost entirely on object-specific
detection, cases in which the observer knows precisely how the target will appear in the test
display (see [15] for a discussion of target non-specific search using featurally complex objects). Conversely, the computer vision literature is rich with models and methods allowing
for the featural representation of object classes and the detection of these classes in visually
cluttered real-world scenes, but none of these methods have been validated as models of
human object class detection by comparison to actual behavioral data.
The current study draws upon the strengths of both of these literatures to produce the first
joint behavioral-computational study of human object class detection. First, we use an
eyetracker to quantify human behavior in terms of the number of fixations made during
an object class detection task. Then we introduce a computational model that not only
performs the detection task at a level comparable to that of the human observers, but also
generates a sequence of simulated eye movements similar in pattern to those made by
humans performing the identical detection task.
2. Experimental methods
An effort was made to keep the human and model experiments methodologically similar.
Both experiments used training, validation (practice trials in the human experiment), and
testing phases, and identical images were presented to the model and human subjects in
all three of these phases. The target class consisted of 378 teddy bears scanned from [16].
Nontargets consisted of 2,975 objects selected from the Hemera Photo Objects Collection.
Samples of the bear and nontarget objects are shown in Figure 1. All objects were normalized to have a bounding box area of 8,000 pixels, but were highly variable in appearance.
Figure 1: Representative teddy bears (left) and nontarget objects (right).
The training set consisted of 180 bears and 500 nontargets, all randomly selected. In the
case of the human experiment, each of these objects was shown centered on a white background and displayed for 1 second. The testing set consisted of 180 new bears and nontargets. No objects were repeated between training and testing, and no objects were repeated
within either of the training or testing phases. Test images depicted 6, 13, or 20 color objects randomly positioned on a white background. A single bear was present in half (90) of
these displays. Human subjects were instructed to indicate, by pressing a button, whether
a teddy bear appeared among the displayed objects. Target presence and set size were
randomly interleaved over trials. Each test trial in the human experiment began with the
subject fixating gaze at the center of the display, and eye position was monitored throughout each trial using an eyetracker. Eight students from Stony Brook University participated
in the experiment.
3. Model of eye movements during object class detection
Figure 2: The flow of processing through our model.
Building on a framework described in [17, 14, 18], our model can be broadly divided into
three stages (Figure 2): (1) creating a target map based on a retinally-transformed version
of the input image, (2) recognizing the target using thresholds placed on the target map, and
(3) the operations required in the generation of eye movements. The following sub-sections
describe each of the Figure 2 steps in greater detail.
3.1. Retina transform
With each change in gaze position (set initially to the center of the image), our model
transforms the input image so as to reflect the acuity limitations imposed by the human
retina. We used the method described in [19, 20], which was shown to provide a close
approximation to human acuity limitations, to implement this dynamic retina transform.
3.2. Create target map
Each point on the target map ranges in value between 0 and 1 and indicates the likelihood
that a target is located at that point. To create the target map, we first compute interest
points on the retinally-transformed image (see section 3.2.2), then compare the features
surrounding these points to features of the target object class extracted during training.
Two types of discriminative features were used in this study: color features and texture
features.
3.2.1. Color features
Color has long been used as a feature for instance-level object recognition [21]. In our study we
explore the potential use of color as a discriminative feature for an object class. Specifically,
we used a normalized color histogram of pixel hues in HSV space. Because backgrounds in
our images were white and therefore uninformative, we set thresholds on the saturation and
brightness channels to remove these points. The hue channel was evenly divided into 11
bins and each pixel?s hue value was assigned to one of these bins using binary interpolation.
Values within each bin were weighted by 1 ? d, where d is the normalized unit distance to
the center of the bin. The final color histogram was normalized to be a unit vector.
Given a test image, It , and its color feature, Ht , we compute the distances between Ht
and the color features of the training set {Hi , i = 1, ..., N }. The test image is labeled
as: l(It ) = l(Iarg min1?i?N ?2 (Ht ,Hi ) ), and the distance metric used was: ?2 (Ht , Hi ) =
PK [Ht (k)?Hi (k)]2
k=1 Ht (k)+Hi (k) , where K is the number of bins.
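A sketch of this color feature and distance in NumPy follows. The saturation/brightness thresholds are placeholders (the paper does not state their values), and we treat the hue axis as circular when interpolating, which is an assumption on our part:

```python
import numpy as np

def hue_histogram(hsv, n_bins=11, s_thresh=0.1, v_thresh=0.1):
    """11-bin interpolated hue histogram; near-white pixels are removed by
    thresholding saturation and brightness. hsv: (N, 3), hue in [0, 1)."""
    h = hsv[(hsv[:, 1] > s_thresh) & (hsv[:, 2] > v_thresh), 0]
    pos = h * n_bins
    lo = np.floor(pos).astype(int) % n_bins
    hi = (lo + 1) % n_bins
    d = pos - np.floor(pos)              # distance toward the next bin center
    hist = np.zeros(n_bins)
    np.add.at(hist, lo, 1.0 - d)         # weight 1 - d, as in the text
    np.add.at(hist, hi, d)
    return hist / np.linalg.norm(hist)   # unit vector

def chi2(h1, h2, eps=1e-12):
    """Chi-squared distance between two histograms."""
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```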
3.2.2. Texture features
Local texture features were extracted on the gray level images during both training and
testing. To do this, we first used a Difference-of-Gaussion (DoG) operator to detect interest
points in the image, then used a Scale Invariant Feature Transform (SIFT) descriptor to represent features at each of the interest point locations. SIFT features consist of a histogram
representation of the gradient orientation and magnitude information within a small image
patch surrounding a point [22].
AdaBoost is a feature selection method which produces a very accurate prediction rule by
combining relatively inaccurate rules-of-thumb [23]. Following the method described in
[11, 12], we used AdaBoost during training to select a small set of SIFT features from
among all the SIFT features computed for each sample in the training set. Specifically,
each training image was represented by a set of SIFT features $\{F_{i,j}, j = 1, \ldots, n_i\}$, where $n_i$ is the number of SIFT features in sample $I_i$. To select features from this set, AdaBoost first initialized the weights of the training samples to $w_i = \frac{1}{2N_p}$ or $\frac{1}{2N_n}$, where $N_p$ and $N_n$ are the number of positive and negative samples, respectively. For each round of AdaBoost, we then selected one feature as a weak classifier and updated the weights of the training samples. Details regarding the algorithm used for each round of boosting can be found in [12]. Eventually, $T$ features were chosen having the best ability to discriminate the target object class from the nontargets. Each of these selected features forms a weak classifier $h_k$ consisting of three components: a feature vector $f_k$, a distance threshold $\theta_k$, and an output label $u_k$. Only the features from the positive training samples are used as weak classifiers. For each feature vector $F$, we compute its distance to training sample $i$, defined as $d_i = \min_{1 \le j \le n_i} D(F_{i,j}, F)$, and then apply the classification rule:

$$h(f, \theta) = \begin{cases} 1, & d < \theta \\ 0, & d \ge \theta \end{cases} \qquad (1)$$
After the desired number of weak classifiers has been found, the final strong classifier can be defined as:

$$H = \sum_{t=1}^{T} \alpha_t h_t \qquad (2)$$

where $\alpha_t = \log(1/\beta_t)$. Here $\beta_t = \sqrt{\frac{1 - \epsilon_t}{\epsilon_t}}$ and the classification error is $\epsilon_t = \sum_k |u_k - l_k|$.
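Putting Eqs. (1) and (2) together, classifying a test image reduces to the following sketch (our own code; we assume Euclidean distance for $D(\cdot,\cdot)$, which the text leaves unspecified):

```python
import numpy as np

def weak_response(sift_set, f, theta):
    """Eq. (1): fires if the best-matching SIFT feature in the image is
    closer than theta to the selected feature f."""
    d = min(np.linalg.norm(F - f) for F in sift_set)
    return 1 if d < theta else 0

def strong_response(sift_set, weak_classifiers):
    """Eq. (2): alpha-weighted vote over the T weak classifiers chosen by
    AdaBoost; weak_classifiers holds (f, theta, alpha) triples."""
    return sum(a * weak_response(sift_set, f, th)
               for f, th, a in weak_classifiers)
```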
3.2.3. Validation
A validation set, consisting of the practice trials viewed by the human observers, was used
to set parameters in the model. Because our model used two types of features, each having
different classifiers with different outputs, some weight for combining these classifiers was
needed. The validation set was used to set this weighting.
The output of the color classifier, normalized to unit length, was based on the distance $\chi^2_{\min} = \min_{1 \le i \le N} \chi^2(H_t, H_i)$ and defined as:

$$C_{color} = \begin{cases} 0, & l(I_t) = 0 \\ f(\chi^2_{\min}), & l(I_t) = 1 \end{cases} \qquad (3)$$

where $f(\chi^2_{\min})$ is a function monotonically decreasing with respect to $\chi^2_{\min}$. The strong local texture classifier $C_{texture}$ (Equation 2) also had normalized unit output.
The weights of the two classifiers were determined from their classification errors on the validation set:

$$W_{color} = \frac{\epsilon_t}{\epsilon_c + \epsilon_t}, \qquad W_{texture} = \frac{\epsilon_c}{\epsilon_c + \epsilon_t}, \qquad (4)$$

where $\epsilon_c$ and $\epsilon_t$ are the errors of the color and texture classifiers, respectively. The final combined output was used to generate the values in the target map and, ultimately, to guide the model's simulated eye movements.
3.3. Recognition
We define the highest-valued point on the target map as the hotspot. Recognition is accomplished by comparing the hotspot to two thresholds, also set through validation. If the
hotspot value exceeds the high target-present threshold, then the object will be recognized
as an instance of the target class. If the hotspot value falls below the target-absent threshold,
then the object will be classified as not belonging to the target class. Through validation,
the target-present threshold was set to yield a low false positive rate and the target-absent
threshold was set to yield a high true positive rate. Moreover, target-present judgments
were permitted only if the hotspot was fixated by the simulated fovea. This constraint was
introduced so as to avoid extremely high false positive rates stemming from the creation of
false targets in the blurred periphery of the retina-transformed image.
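The resulting decision logic can be summarized as below; the numeric thresholds are placeholders standing in for the values actually set on the validation set:

```python
def detect(hotspot_value, fovea_on_hotspot, t_present=0.9, t_absent=0.2):
    """Recognition stage: 'present' only when the fovea is on the hotspot
    and the high threshold is exceeded; 'absent' below the low threshold;
    otherwise None, meaning processing passes to the eye movement stage."""
    if fovea_on_hotspot and hotspot_value > t_present:
        return 'present'
    if hotspot_value < t_absent:
        return 'absent'
    return None
```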
3.4. Eye movement
If neither the target-present nor the target-absent thresholds are satisfied, processing passes
to the eye movement stage of our model. If the simulated fovea is not on the hotspot, the
model will make an eye movement to move gaze steadily toward the hotspot location. Fixation in our model is defined as the centroid of activity on the target map, a computation
consistent with a neuronal population code. Eye movements are made by thresholding this
map over time, pruning off values that offer the least evidence for the target. Eventually,
this thresholding operation will cause the centroid of the target map to pass an eye movement threshold, resulting in a gaze shift to the new centroid location. See [18] for details
regarding the eye movement generation process. If the simulated fovea does acquire the
hotspot and the target-present threshold is still not met, the model will assume that a nontarget was fixated and this object will be "zapped". Zapping consists of applying a negative
Gaussian filter to the hotspot location, thereby preventing attention and gaze from returning to this object (see [24] for a previous computational implementation of a conceptually
related operation).
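A rough sketch of the fixation-selection and zapping operations described above (all parameter values here are illustrative; see [18] and [24] for the original formulations):

```python
import numpy as np

def next_fixation(target_map, move_thresh, prune_step=0.05):
    """Fixation = activity centroid of the target map; the map is pruned of
    its weakest evidence until the centroid moves past an eye movement
    threshold, triggering a gaze shift to the new centroid."""
    m = target_map.copy()
    ys, xs = np.indices(m.shape)
    start = np.array([np.average(ys, weights=m), np.average(xs, weights=m)])
    while m.sum() > 0:
        m = np.clip(m - prune_step, 0.0, None)    # prune weakest evidence
        if m.sum() == 0:
            break
        c = np.array([np.average(ys, weights=m), np.average(xs, weights=m)])
        if np.linalg.norm(c - start) > move_thresh:
            return c
    return start

def zap(target_map, loc, sigma=15.0, depth=1.0):
    """Apply a negative Gaussian at a rejected hotspot so attention and
    gaze cannot return to that object."""
    ys, xs = np.indices(target_map.shape)
    g = depth * np.exp(-((ys - loc[0])**2 + (xs - loc[1])**2) / (2 * sigma**2))
    return np.clip(target_map - g, 0.0, None)
```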
4. Experimental results
Model and human behavior were compared on a variety of measures, including error rates,
number of fixations, cumulative probability of fixating the target, and scanpath ratio (a
measure of how directly gaze moved to the target). For each measure, the model and
human data were in reasonable agreement.
Table 1: Error rates for model and human subjects.

         Total trials   Misses (freq. / rate)   False positives (freq. / rate)
Human    1440           46 / 3.2%               14 / 1.0%
Model    180            7 / 3.9%                4 / 2.2%
Table 1 shows the error rates for the human subjects and the model, grouped by misses and
false positives. Note that the data from all eight of the human subjects are shown, resulting
in the greater number of total trials. There are two key patterns. First, despite the very
high level of accuracy exhibited by the human subjects in this task, our model was able to
Table 2: Average number of fixations by model and human.

         Target-present                Target-absent
Case     p6     p13    p20    slope    a6     a13    a20     slope
Human    3.38   3.74   4.88   0.11     4.89   7.23   9.39    0.32
Model    2.86   3.69   5.68   0.20     3.97   8.30   10.47   0.46
achieve comparable levels of accuracy. Second, and consistent with the behavioral search
literature, miss rates were larger than false positive rates for both the humans and model.
To the extent that our model offers an accurate account of human object detection behavior,
it should be able to predict the average number of fixations made by human subjects in the
detection task. As indicated in Table 2, this indeed is the case. Data are grouped by targetpresent (p), target-absent (a), and the number of objects in the scene (6, 13, 20). In all
conditions, the model and human subjects made comparable numbers of fixations. Also
consistent with the behavioral literature, the average number of fixations made by human
subjects in our task increased with the number of objects in the scenes, and the rate of this
increase was greater in the target-absent data compared to the target-present data. Both of
these patterns are also present in the model data. The fact that our model is able to capture
an interaction between set size and target presence in terms of the number of fixations
needed for detection lends support for our method.
Figure 3: Cumulative probability of target fixation by model and human.
Figure 3 shows the number of fixation data in more detail. Plotted are the cumulative probabilities of fixating the target as a function of the number of objects fixated during the search
task. When the scene contained only 6 or 13 objects, the model and the humans fixated
roughly the same number of nontargets before finally shifting gaze to the target. When the
scene was more cluttered (20 objects), the model fixated an average of 1 additional nontarget relative to the human subjects, a difference likely indicating a liberal bias in our human
subjects under these search conditions. Overall, these analyses suggest that our model was
not only making the same number of fixations as humans, but it was also fixating the same
number of nontargets during search as our human subjects.
Table 3: Comparison of model and human scanpath distance.

          #Objects:   6      13     20
Human                 1.62   2.20   2.80
Model                 1.93   3.09   6.10
MODEL                 1.93   2.80   3.43
Human gaze does not jump randomly from one item to another during search, but instead moves in a more orderly way toward the target. The ultimate test of our model would be to reproduce this orderly movement of gaze. As a first approximation, we quantify this behavior in terms of a scanpath distance. Scanpath distance is defined as the ratio of the total scanpath length (i.e., the summed distance traveled by the eye) to the distance between the target and the center of the image (i.e., the minimum distance that the eye would need to travel to fixate the target). As indicated in Table 3, the model and human data are in close agreement in the 6- and 13-object scenes, but not in the 20-object scenes. Upon closer inspection of the data, we found several cases in which the model made multiple fixations between two nontarget objects, a very unnatural behavior arising from too small a setting for our Gaussian "zap" window. When these 6 trials were removed, the model data (MODEL in Table 3) and the human data were in closer agreement.
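The measure is straightforward to compute from a fixation sequence; a sketch with our own naming:

```python
import numpy as np

def scanpath_ratio(fixations, target_xy, start_xy):
    """Total gaze path length divided by the straight-line distance from
    the initial (center) fixation to the target."""
    pts = np.vstack([np.asarray(start_xy)[None, :], np.asarray(fixations)])
    traveled = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return traveled / np.linalg.norm(np.asarray(target_xy) - np.asarray(start_xy))
```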
Figure 4: Representative scanpaths. Model data are shown in thick red lines, human data
are shown in thin green lines.
Figure 4 shows representative scanpaths from the model and one human subject for two
search scenes. Although the scanpaths do not align perfectly, there is a qualitative agreement between the human and model in the path followed by gaze to the target.
5. Conclusion
Search tasks do not always come with specific targets. Very often, we need to search for
dogs, or chairs, or pens, without any clear idea of the visual features comprising these
objects. Despite the prevalence of these tasks, the problem of object class detection has attracted surprisingly little research within the behavioral community [15], and has been applied to a relatively narrow range of objects within the computer vision literature [6, 7, 8, 9].
The current work adds to our understanding of this important topic in two key respects.
First, we provide a detailed eye movement analysis of human behavior in an object class
detection task. Second, we incorporate state-of-the-art computer vision object detection
methods into a biologically plausible model of eye movement control, then validate this
model by comparing its behavior to the behavior of our human observers. Computational
models capable of describing human eye movement behavior are extremely rare [25]; the
fact that the current model was able to do so for multiple eye movement measures lends
strength to our approach. Moreover, our model was able to detect targets nearly as well
as the human observers while maintaining a low false positive rate, a difficult standard to
achieve in a generic detection model. Such agreement between human and model suggests that simple color and texture features may be used to guide human attention and eye
movement in an object class detection task.
Future computational work will explore the generality of our object class detection method
to tasks with visually complex backgrounds, and future human work will attempt to use
neuroimaging techniques to localize object class representations in the brain.
Acknowledgments
This work was supported by grants from the NIMH (R01-MH63748) and ARO (DAAD19-03-1-0039) to G.J.Z.
References
[1] M. F. Land and D. N. Lee. Where we look when we steer. Nature, 369(6483):742-744, 1994.
[2] M. F. Land and M. Hayhoe. In what ways do eye movements contribute to everyday activities? Vision Research, 41(25-36):3559-3565, 2001.
[3] G. Zelinsky, R. Rao, M. Hayhoe, and D. Ballard. Eye movements reveal the spatio-temporal dynamics of visual search. Psychological Science, 8:448-453, 1997.
[4] J. Wolfe. Visual search. In H. Pashler (Ed.), Attention, pages 13-71. London: University College London Press, 1997.
[5] E. Weichselgartner and G. Sperling. Dynamics of automatic and controlled visual attention. Science, 238(4828):778-780, 1987.
[6] H. Schneiderman and T. Kanade. A statistical method for 3D object detection applied to faces and cars. In CVPR, volume I, pages 746-751, 2000.
[7] P. Viola and M. J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, volume I, pages 511-518, 2001.
[8] S. Agarwal and D. Roth. Learning a sparse representation for object detection. In ECCV, volume IV, page 113, 2002.
[9] Wolf Kienzle, Gökhan H. Bakır, Matthias O. Franz, and Bernhard Schölkopf. Face detection - efficient and rank deficient. In NIPS, 2004.
[10] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR03, volume II, pages 264-271, 2003.
[11] A. Opelt, M. Fussenegger, A. Pinz, and P. Auer. Weak hypotheses and boosting for generic object detection and recognition. In ECCV04, volume II, pages 71-84, 2004.
[12] W. Zhang, B. Yu, G. Zelinsky, and D. Samaras. Object class recognition using multiple layer boosting with multiple features. In CVPR, 2005.
[13] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40:1489-1506, 2000.
[14] R. Rao, G. Zelinsky, M. Hayhoe, and D. Ballard. Eye movements in iconic visual search. Vision Research, 42:1447-1463, 2002.
[15] D. T. Levin, Y. Takarae, A. G. Miner, and F. Keil. Efficient visual search by category: Specifying the features that mark the difference between artifacts and animals in preattentive vision. Perception and Psychophysics, 63(4):676-697, 2001.
[16] P. Cockrill. The teddy bear encyclopedia. New York: DK Publishing, Inc., 2001.
[17] R. Rao, G. Zelinsky, M. Hayhoe, and D. Ballard. Modeling saccadic targeting in visual search. In NIPS, 1995.
[18] G. Zelinsky. Specifying the components of attention in a visual search task. In L. Itti, G. Rees, and J. Tsotsos (Eds.), Neurobiology of Attention, pages 395-400. Elsevier, 2005.
[19] W. S. Geisler and J. S. Perry. A real-time foveated multi-resolution system for low-bandwidth video communications. In Human Vision and Electronic Imaging, SPIE Proceedings, volume 3299, pages 294-305, 1998.
[20] J. S. Perry and W. S. Geisler. Gaze-contingent real-time simulation of arbitrary visual fields. In SPIE, 2002.
[21] M. J. Swain and D. H. Ballard. Color indexing. IJCV, 7(1):11-32, November 1991.
[22] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, November 2004.
[23] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[24] K. Yamada and G. Cottrell. A model of scan paths applied to face recognition. In Seventeenth Annual Cognitive Science Conference, pages 55-60, 1995.
[25] C. M. Privitera and L. W. Stark. Algorithms for defining visual regions-of-interest: comparison with eye fixations. PAMI, 22:970-982, 2000.
| 2949 |@word trial:8 version:1 gaussion:1 simulation:1 methodologically:1 brightness:1 thereby:1 current:3 comparing:3 stony:3 attracted:1 stemming:1 cottrell:1 realistic:2 zap:1 remove:1 half:1 selected:4 item:1 inspection:1 yamada:1 stonybrook:1 detecting:1 boosting:4 hsv:1 location:4 contribute:1 liberal:1 simpler:1 zhang:2 zapping:1 consists:1 fixation:15 qualitative:1 combine:1 ijcv:2 behavioral:8 introduce:1 indeed:1 rapid:1 roughly:2 behavior:11 nor:1 multi:1 brain:1 decreasing:1 researched:1 actual:1 little:1 window:1 moreover:4 what:1 interpreted:1 finding:2 temporal:1 returning:1 classifier:10 uk:2 control:1 unit:4 grant:1 appear:1 positive:9 before:1 local:2 nontargets:6 despite:2 path:2 interpolation:1 pami:1 studied:2 specifying:2 conversely:1 suggests:1 limited:1 range:2 seventeenth:1 acknowledgment:1 testing:5 practice:2 implement:1 prevalence:1 area:1 cascade:1 suggest:1 get:1 cannot:2 close:2 selection:1 operator:1 targeting:1 context:1 applying:1 pashler:1 map:10 imposed:1 center:4 roth:1 attention:9 cluttered:3 focused:1 resolution:1 rule:3 population:1 updated:1 target:55 a13:1 hypothesis:1 agreement:6 wolfe:1 recognition:10 located:1 labeled:1 min1:3 capture:1 region:1 movement:24 highest:1 removed:1 nimh:1 asked:2 pinz:1 fussenegger:1 dynamic:3 ultimately:1 trained:1 creation:1 samara:3 localization:1 upon:2 distinctive:1 easily:1 joint:1 represented:1 chapter:1 surrounding:2 describe:1 london:2 larger:1 plausible:2 valued:1 cvpr:3 ability:1 transform:3 scaleinvariant:1 final:3 sequence:2 pressing:1 matthias:1 nontarget:6 aro:1 interaction:1 combining:2 achieve:2 moved:1 validate:1 everyday:1 olkopf:1 produce:3 object:68 strong:2 c:1 culminating:1 indicate:2 come:2 quantify:2 met:1 bak:1 thick:1 filter:1 centered:1 human:57 bin:5 generalization:1 koch:1 ic:1 visually:3 great:1 predict:1 driving:1 travel:1 overt:1 label:1 grouped:2 create:2 tool:1 weighted:1 hotspot:9 gaussian:2 always:1 avoid:1 boosted:1 varying:1 validated:2 focus:1 acuity:2 iconic:1 rank:1 indicates:1 likelihood:1 greatly:1 hk:1 centroid:3 detect:3 elsevier:1 accumulated:1 inaccurate:1 typically:2 nn:1 initially:1 perona:1 transformed:3 reproduce:1 comprising:1 pixel:3 overall:1 among:3 orientation:1 classification:3 animal:1 art:2 summed:1 psychophysics:1 field:1 having:2 identical:3 look:1 jones:1 nearly:1 thin:1 unsupervised:1 yu:1 future:2 np:1 retina:4 randomly:4 composed:1 phase:3 consisting:2 retinally:2 n1:1 attempt:1 detection:36 interest:4 highly:2 weakness:1 accurate:2 closer:2 capable:1 privitera:1 iv:1 initialized:1 desired:1 plotted:1 p13:1 a20:1 psychological:1 instance:2 increased:1 modeling:1 steer:1 rao:3 localizing:1 a6:1 rare:1 hundred:1 swain:1 recognizing:1 levin:1 too:1 gregory:2 combined:1 rees:1 geisler:2 lee:1 off:1 gaze:12 sunysb:2 reflect:1 satisfied:1 zelinsky:7 cognitive:1 creating:1 itti:2 stark:1 fixating:5 potential:1 account:1 student:1 blurred:1 inc:1 notable:1 view:1 observer:7 lowe:1 steeper:1 red:1 slope:3 ni:3 accuracy:2 descriptor:1 yield:2 judgment:1 saliency:1 conceptually:1 weak:5 thumb:1 none:1 classified:1 ed:2 acquisition:1 frequency:2 steadily:1 fixate:1 di:1 monitored:1 spie:2 knowledge:1 car:2 color:12 positioned:1 auer:1 day:1 supervised:1 adaboost:5 permitted:1 wei:1 zisserman:1 box:1 generality:1 stage:2 p6:1 proceddings:1 perry:2 artifact:1 indicated:2 reveal:1 gray:1 building:1 effect:1 consisted:4 normalized:6 true:1 assigned:1 deal:1 white:3 round:2 during:9 generalized:1 theoretic:1 performs:1 covert:1 image:17 fi:2 began:1 common:1 volume:6 
relating:1 spotlight:1 cup:1 theorist:1 automatic:1 fk:1 had:1 p20:1 moving:1 f0:1 align:1 add:1 recent:1 eyetracker:2 manipulation:1 periphery:1 binary:1 accomplished:1 minimum:1 greater:3 additional:1 contingent:1 recognized:1 monotonically:1 ii:3 multiple:5 keypoints:1 exceeds:1 offer:2 long:1 okhan:1 divided:2 controlled:1 prediction:1 vision:11 metric:1 histogram:3 represent:1 agarwal:1 background:5 affecting:1 participated:1 uninformative:1 addressed:1 scanpaths:3 sch:1 unlike:1 exhibited:1 pass:1 subject:14 deficient:1 flow:1 yang:1 presence:2 variety:1 psychology:2 perfectly:1 bandwidth:1 regarding:3 idea:1 absent:8 shift:2 whether:3 ultimate:1 unnatural:1 effort:1 york:2 cause:1 scanpath:6 clear:1 detailed:1 transforms:1 clutter:1 hue:3 extensively:2 encyclopedia:1 category:1 generate:1 schapire:1 arising:1 broadly:1 tea:1 key:2 threshold:11 localize:2 neither:1 ht:7 imaging:1 button:2 schneiderman:1 almost:1 throughout:1 reasonable:1 electronic:1 patch:1 draw:1 decision:1 comparable:3 entirely:1 interleaved:1 hi:5 layer:1 followed:1 display:4 annual:1 activity:2 strength:3 scanned:1 precisely:1 constraint:1 scene:12 dominated:1 generates:1 speed:1 min:4 extremely:2 chair:1 performing:2 relatively:2 belonging:1 wi:1 biologically:2 making:2 invariant:2 indexing:1 equation:1 describing:1 eventually:2 sperling:1 mechanism:1 needed:2 know:1 photo:1 operation:5 eight:2 apply:1 generic:2 appearing:2 publishing:1 maintaining:1 workbench:1 r01:1 move:2 primary:1 saccadic:1 gradient:1 lends:2 fovea:3 distance:13 simulated:6 evenly:1 topic:2 extent:1 toward:2 length:2 code:1 ratio:2 acquire:2 difficult:1 neuroimaging:1 negative:2 implementation:1 allowing:1 keil:1 teddy:5 displayed:2 november:2 viola:1 neurobiology:1 looking:2 communication:1 defining:1 arbitrary:1 community:2 introduced:1 dog:2 required:1 narrow:1 nip:2 brook:3 able:5 hayhoe:4 usually:1 pattern:5 dimitris:1 below:1 perception:1 appeared:1 saturation:1 including:2 green:1 video:1 shifting:1 serially:1 eye:26 lk:1 extract:1 featural:1 traveled:1 review:1 literature:11 understanding:2 relative:1 freund:1 fully:1 bear:9 generation:2 limitation:2 validation:7 consistent:3 exciting:1 thresholding:2 land:2 eccv:1 placed:1 surprisingly:1 supported:1 guide:2 bias:1 opelt:1 fall:2 face:4 sparse:1 world:2 cumulative:4 rich:1 preventing:1 instructed:1 made:8 collection:1 jump:1 franz:1 miner:1 pruning:1 bernhard:1 keep:1 orderly:2 fixated:6 spatio:1 discriminative:2 fergus:1 search:19 pen:1 table:6 kanade:1 channel:2 nature:1 ballard:4 complex:3 pk:1 linearly:1 bounding:1 repeated:2 neuronal:1 representative:3 ny:1 sub:1 position:2 weighting:1 specific:4 sift:7 dk:1 evidence:2 consist:1 false:7 texture:5 magnitude:1 illumination:1 foveated:1 depicted:1 appearance:2 explore:2 likely:1 visual:19 contained:1 wolf:1 extracted:2 viewed:1 considerable:1 change:2 typical:1 specifically:2 determined:1 miss:3 kienzle:1 total:3 discriminate:1 pas:1 experimental:2 preattentive:1 exception:1 select:2 indicating:1 college:1 support:1 mark:1 scan:1 incorporate:1 dept:2 |
2,147 | 295 | Spoken Letter Recognition
Mark Fanty & Ronald Cole
Dept. of Computer Science and Engineering
Oregon Graduate Institute
Beaverton, OR 97006
Abstract
Through the use of neural network classifiers and careful feature selection,
we have achieved high-accuracy speaker-independent spoken letter recognition. For isolated letters, a broad-category segmentation is performed.
Location of segment boundaries allows us to measure features at specific
locations in the signal such as vowel onset, where important information
resides. Letter classification is performed with a feed-forward neural network. Recognition accuracy on a test set of 30 speakers was 96%. Neural network classifiers are also used for pitch tracking and broad-category
segmentation of letter strings. Our research has been extended to recognition of names spelled with pauses between the letters. When searching
a database of 50,000 names, we achieved 95% first choice name retrieval.
Work has begun on a continuous letter classifier which does frame-by-frame
phonetic classification of spoken letters.
1 INTRODUCTION
Although spoken letter recognition may seem like a modest goal because of the
small vocabulary size, it is a most difficult task. Many letter pairs, such as M-N
and B-D, differ by a single articulatory feature. Recent advances in classification
technology have enabled us to achieve new levels of accuracy on this task [Cole
et al., 1990; Cole and Fanty, 1990; Fanty and Cole, 1990]. The EAR (English
Alphabet Recognition) system developed in our laboratory recognizes letters of the
English alphabet, spoken in isolation by any speaker, at 96% accuracy. We achieve
this level of accuracy by training neural network classifiers with empirically derived
features: features selected on the basis of speech knowledge, and refined through
experimentation. This process results in significantly better performance than just
using "raw" data such as spectral coefficients.
We have extended our research to retrieval of names from spellings with brief pauses
between the letters, and to continuous spellings. This paper provides an overview of
these systems with an emphasis on our use of neural network classifiers for several
separate components. In all cases, we use feedforward networks, with full connectivity between adjacent layers. The networks are trained using back propagation
with conjugate gradient descent.
2 ISOLATED LETTER RECOGNITION
2.1 SYSTEM OVERVIEW
Data capture is performed using a Sennheiser HMD 224 noise-canceling microphone, lowpass filtered at 7.6 kHz and sampled at 16 kHz.
Signal processing routines produce the following representations every 3 msecs: (a) zero crossing rate: the number of zero crossings of the waveform in a 10 msec window; (b) amplitude: the peak-to-peak amplitude (largest positive value minus largest negative value) in a 10 msec window in the waveform; (c) filtered amplitude: the peak-to-peak amplitude in a 10 msec window in the waveform lowpass filtered at 700 Hz; (d) DFT: a 256 point FFT (128 real numbers) computed on a 10 msec Hanning window; and (e) spectral difference: the squared difference of the averaged spectra in adjacent 24 msec intervals.
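For illustration, three of these representations can be computed as follows (a sketch of our own, not the original code; the filtered-amplitude and spectral-difference measures follow the same windowing pattern):

```python
import numpy as np

def frame_features(x, fs=16000, hop_ms=3, win_ms=10):
    """Per-frame zero-crossing count, peak-to-peak amplitude, and
    256-point Hanning-windowed DFT magnitudes (128 bins), every 3 msec."""
    hop, win = int(fs * hop_ms / 1000), int(fs * win_ms / 1000)
    zc, ptp, spec = [], [], []
    for start in range(0, len(x) - win, hop):
        w = x[start:start + win]
        zc.append(int(np.sum(np.signbit(w[:-1]) != np.signbit(w[1:]))))
        ptp.append(float(w.max() - w.min()))
        spec.append(np.abs(np.fft.rfft(w * np.hanning(win), n=256))[:128])
    return np.array(zc), np.array(ptp), np.array(spec)
```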
Pitch tracking is performed with a neural network which locates peaks in the filtered (0-700 Hz) waveform that begin pitch periods, as described in section 2.2.
Broad-category segmentation divides the utterance into contiguous intervals and assigns one of four broad category labels to each interval: CLOS (closure or background noise), SON (sonorant interval), FRIC (fricative) and STOP. The segmenter, modified from [April 1988], uses cooperating knowledge sources which apply rules to the signal representations, most notably ptp0-700, pitch and zc0-8000.
Feature measurement is performed on selected locations in the utterance, based
upon the broad-category boundaries. A total of 617 inputs are used by the classifier.
Letter classification is performed by a network with 52 hidden units and 26
output units, one per letter.
2.2 NEURAL NETWORK PITCH TRACKER
Pitch tracking is achieved through a network which classifies each peak in the waveform as to whether it begins a pitch period [Barnard et al., 1991]. The waveform
is lowpass filtered at 700 Hz and each positive peak is classified using information
about it and the preceding and following four peaks. For each of the nine peaks,
the following information is provided: (1) the amplitude, (2) the time difference between the peak and the candidate peak, (3) a measure of the similarity of the peak
and the candidate peak (point-by-point correlation), (4) the width of the peak, and
(5) the negative amplitude or most negative value preceding the peak. The network
was trained on the TIMIT database, and agrees with expert labelers about 98% of
the time. It performs well on our data without retraining.
2.3 NEURAL NETWORK LETTER CLASSIFIER
Each letter (except W) has a single SON segment (e.g. the /iy/ in T, the whole
letter M). This segment always exists, and provides the temporal anchor for most
of the feature measurements. The previous consonant is the STOP or FRIC (e.g. B
or C) before the SON. If there is no STOP or FRIC (e.g. E), the 200 msec interval
before the SON is treated as a single segment for feature extraction. After dozens
of experiments, we arrived at the following feature set:
? DFT coefficients from the consonant preceding the SON. The consonant is divided into thirds temporally; from each third, 32 averaged values are extracted
linearly from 0 to 8kHz. All DFT inputs are normalized locally so that the
largest value from a given time slice becomes 1.0 and the smallest becomes 0.0.
(96 values)
? DFT coefficients from the SON. From each seventh of the SON, 32 averaged
values are extracted linearly from 0 to 4kHz. (224 values)
? DFT coefficients following the SON. At the point of maximum zero-crossing
rate in the 200 msec after the SON, 32 values are extracted linearly from 0 to
8kHz. (32 values)
? DFT coefficients from the second and fifth frame of the SON: 32 values from
each frame extracted linearly from 0 to 4kHz. These are not averaged over
time, and will reflect formant movement at the SON onset. (64 values)
? DFT coefficients from the location in the center of the SON with the largest
spectral difference, linear from 0 to 4kHz. This samples the formant locations
at the vowel-nasal boundary in case the letter is M or N. (32 values)
? Zero-crossing rate in 11 18-msec segments (198 msec) before the SON, in 11
equal-length segments during the SON and in 11 18-msec segments after the
SON. This provides an absolute time scale before and after the SON which
could help overcome segmentation errors. (33 values)
? Amplitude from before, during and after the SON represented the same way
as zero-crossing. (33 values)
? Filtered amplitude represented the same way as amplitude. (33 values)
? Spectral difference represented like zero-crossing and amplitude except the
maximum value for each segment is used instead of the average, to avoid
smoothing the peaks which occur at boundaries. (33 values)
? Inside the SON, the spectral center of mass from 0 to 1000 Hz, measured in 10
equal segments. (10 values)
? Inside the SON, the spectral center of mass from 1500 to 3500 Hz, measured
in 10 equal segments. (10 values)
? Median pitch, the median distance between pitch peaks in the center of the
SON. (1 value)
? Duration of the SON. (1 value)
? Duration of the consonant before the SON. (1 value)
? High-resolution representation of the amplitude at the SON onset: five values
from 12 msec before the onset to 30 msec after the onset. (5 values)
? Abruptness of onset of the consonant before the SON, measured as the largest
two-frame jump in amplitude in the 30 msec around the beginning of the consonant. (1 value)
? The label of the segment before the SON: CLOS, FRIC or STOP. (3 values)
? The largest spectral difference value from 100 msec before the SON onset to 21
msec after, normalized to accentuate the difference between B and V. (1 value)
? The number of consistent pitch peaks in the previous consonant. (1 value)
? The number of consistent pitch peaks before the previous consonant. (1 value)
? The presence of the segment sequence CLOS FRIC after the SON (an indicator
of X or H). (1 binary value)
All inputs to our network were normalized: mapped to the interval [0.0, 1.0]. We
attempted to normalize so that the entire range was well utilized. In some instances,
the normalization was keyed to particular distinctions. For example, the center of
mass in the spectrum from 0 to 1000 Hz was normalized so that E was low and A
was high. Other vowels, such as O, would have values "off the scale" and would map
to 1.0, but the feature was added specifically for E/A distinctions.
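A minimal sketch of this kind of keyed normalization; the bounds lo and hi are hypothetical per-feature values chosen to spread the relevant distinction across [0.0, 1.0]:

```python
def normalize(x, lo, hi):
    """Map a raw feature value to [0.0, 1.0], saturating outside [lo, hi].
    lo and hi are illustrative per-feature bounds, e.g. chosen so that E
    maps near 0.0 and A near 1.0 for the 0-1000 Hz center of mass; other
    vowels can go 'off the scale' and clip to 1.0."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)
```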
2.4 PERFORMANCE
During feature development, two utterances of each letter from 60 speakers were
used for training and 60 additional speakers served as the test set. For the final
performance evaluation, these 120 speakers were combined to form a large training
set. The final test set consists of 30 new speakers. The network correctly classified
95.9% of the letters.
The E-set {B,C,D,E,G,P,T,V,Z} and M-N are the most difficult letters to classify.
We trained a separate network for just the M vs. N distinction and another for just
the letters in the E-set [Fanty and Cole, 1990]. Using these networks as a second
pass when the first network has a response in the E-set or in {M, N}, the performance
rose slightly to 96%.
As mentioned above, all feature development was performed by training on half the
training speakers and testing on the other half. The development set performance
was 93.5% when using all the features. With only the 448 DFT values (not spectral
difference or center of mass) the performance was 87%. Using all the features except
DFT values (but including spectral difference and center of mass), the performance
was 83%.
3 NAME RETRIEVAL FROM SPELLINGS
3.1 SYSTEM OVERVIEW
Our isolated letter recognizer was expanded to recognize letters spoken with pauses
by (1) Training a neural network to do broad-category segmentation of spelled
strings (described in section 3.2); (2) Retraining the letter recognizer using letters
extracted from spelled strings; (3) Devising an algorithm to divide an utterance
into individual letters based on the broad category segmentation; and (4) Efficiently
searching a large list of names to find the best match.
The letter classification network uses the same features as the isolated letter network. Feature measurements were based on segment boundaries provided by the
neural network segmenter. The letter classification network was trained on isolated
letters from 120 speakers plus letters from spelled strings from 60 additional speakers. The letter recognition performance on our cross-validation set of 8 speakers
was 97%; on a preliminary test set of 10 additional speakers it was 95.5%. The
letter recognition performance on our final test set was lower, as reported below.
The rules for letter segmentation are simplified by the structure of the English alphabet. All letters (except W; see below) have a single syllable, which corresponds
to a single SON segment in the broad-category segmentation. In the usual case,
letter boundaries are placed at the last CLOS or GLOT between SONs. A full
description of the rules used can be found in [Cole et al., 1991].
The output of the classifier is a score between 0.0 and 1.0 for each letter. These
scores are treated as probabilities and the most likely name is retrieved from the
database. The names are stored in a tree structure. The number of nodes near the
root of the tree is small, so the search is fast. As the search approaches the leaves,
the number of nodes grows rapidly, but it is possible to prune low-scoring paths.
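The tree search could be sketched as a beam-pruned traversal of a letter trie. The data structures and the fixed-length assumption below (one classified letter per position, no insertions or deletions) are our simplifications for illustration, not the system's actual implementation:

```python
import heapq
import math

def best_names(trie_root, letter_scores, beam=200, top=10):
    """Beam-pruned search of a name trie.
    trie_root: nested dicts mapping letters to children, with '$' marking
    the end of a stored name. letter_scores[i][c]: classifier score for
    letter c at spelled position i, treated as a probability."""
    frontier = [(0.0, '', trie_root)]      # (negative log prob, prefix, node)
    results = []
    while frontier:
        expanded = []
        for cost, prefix, node in frontier:
            for letter, child in node.items():
                if letter == '$':
                    if len(prefix) == len(letter_scores):
                        results.append((cost, prefix))
                    continue
                pos = len(prefix)
                if pos < len(letter_scores):
                    p = max(letter_scores[pos].get(letter, 0.0), 1e-9)
                    expanded.append((cost - math.log(p), prefix + letter, child))
        frontier = heapq.nsmallest(beam, expanded)  # prune low-scoring paths
    return [name for _, name in sorted(results)[:top]]
```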
3.2 NEURAL NETWORK BROAD-CATEGORY SEGMENTATION
The rule-based segmenter developed for isolated letters was too finely tuned to work
well on letter strings. Rather than re-tune the rules, we decided to train a network
to do broad category segmentation. At the same time, we added the category
GLOT for glottalization, a slowing down of the vocal cords which often occurs at
vowel-vowel boundaries.
The rule-based segmenter searched for boundaries. The neural network segmenter
works in a different way [Gopalakrishnan, August 1990]. It classifies each 3 msec
frame as being in a SON, CLOS, STOP, FRIC or GLOT. A five-point median
smoothing is applied to the outputs, and the classification of the frame is taken to
be the largest output. Some simple rules are applied to delete impossible segments
such as 12 msec SONs.
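A minimal sketch of the smoothing and segment-extraction step follows. The five-point median filter follows the text, while the minimum-length rule and the choice to simply drop too-short segments are our simplifications:

```python
import numpy as np

LABELS = ['SON', 'CLOS', 'STOP', 'FRIC', 'GLOT']

def medfilt5(x):
    """Five-point median smoothing with edge padding."""
    pad = np.pad(x, 2, mode='edge')
    return np.array([np.median(pad[i:i + 5]) for i in range(len(x))])

def broad_category_segments(net_outputs, min_frames=5):
    """net_outputs: (n_frames, 5) network scores, one row per 3-msec frame.
    Median-smooth each class track, take the per-frame argmax, then drop
    segments shorter than min_frames (a 12-msec SON is only 4 frames)."""
    smooth = np.column_stack([medfilt5(net_outputs[:, k])
                              for k in range(net_outputs.shape[1])])
    labels = smooth.argmax(axis=1)
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            if t - start >= min_frames:            # delete impossible segments
                segments.append((LABELS[labels[start]], start, t - start))
            start = t
    return segments
```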
The features found to produce the best performance are:
? 64 DFT coefficients linear from 0 to 8kHz at the frame to be classified.
? Spectral difference of adjacent 24 msec segments. These values are given for
every frame in the 30 msec surrounding the frame to be classified, and for every
5 frames beyond that to 150 msecs before and after the frame to be classified.
All subsequent features are sampled in the same manner.
? Spectral difference from 0 to 700 Hz in adjacent 24 msec segments.
? Amplitude of the waveform.
? Amplitude of the waveform lowpass filtered at 700 Hz. The window used to
measure the amplitude is just larger than the median pitch. In normal voicing,
there is always at least one pitch peak inside the window and the output is
smooth. During glottalization, the pitch peaks are more widely spaced. For
some frames, the window used to measure amplitude contains no pitch peaks
and the amplitude is sharply lower. Uneveness in this measure is thus an
indication of glottalization.
? Zero crossing rate.
? A binary indicator of consistent pitch.
? The center of mass in the DFT coefficients between 0 and 1000 Hz.
A train-on-errors procedure was found to be very helpful. The segmenter resulting
from training on the initial data set was used to classify new data. Frames for which
it disagreed with hand-labeling were added to the initial data set and the network
was retrained. This process was repeated several times.
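In outline, the train-on-errors loop looks like the following sketch, where the fit/predict interface is our own illustration rather than the authors' code:

```python
def train_on_errors(net, train_set, new_data, hand_labels, rounds=3):
    """Sketch of the train-on-errors loop: after each round of training,
    classify new data and add only the frames where the network disagrees
    with the hand labels, then retrain."""
    for _ in range(rounds):
        net.fit(train_set)
        errors = [(x, y) for x, y in zip(new_data, hand_labels)
                  if net.predict(x) != y]
        train_set = train_set + errors
    return net
```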
3.3 SYSTEM PERFORMANCE
The system was evaluated on 1020 names provided by 34 speakers who were not
used to train the system. Each subject spelled 30 names drawn randomly from the
database of 50,000 surnames. The speaker was instructed to pause briefly between
letters, but was not given any feedback during the session.
The list of 50,000 names provides a grammar of possible strings with perplexity
4. Using this grammar, the correct name was found 95.3% of the time. Of the 48
names not correctly retrieved, all but 6 of these were in the top 3 choices, and all
but 2 were in the top 10. The letter recognition accuracy was 98.8% (total words
minus substitutions, deletions, and insertions, using a dynamic programming
match). Examination of these name-retrieval errors revealed that about 50% were
caused by misclassification of a letter, and about 50% were caused by bad letter
segmentation. (Sixty percent of the segmentation errors were caused by GLOT
insertions; forty percent were due to the speaker failing to pause.)
Without a grammar, the correct name is found only 53.9% of the time; almost
half the inputs had at least one segmentation or classification error. The letter
recognition accuracy was 89.1% using a dynamic programming match. Ignoring
segmentation errors, 93% of the letters were correctly classified.
4 PHONEME RECOGNITION IN CONNECTED LETTERS
We have begun work on a continuous letter recognizer, which does not require pauses
between the letters. The current system has two parts: a phonetic classifier which
categorizes each frame as one of 30 phonemes (those phonemes found in letters plus
glottalization) [Janssen et al., 1989], and a Viterbi search to find the sequence of
letters which best matches the frame-by-frame phoneme scores.
The phonetic classifier is given 160 DFT coefficients: 40 for the frame to be classified,
and 40 each for the immediate context (+/- 12 msec), the near context (+/- 45 msec), and
the far context (+/- 78 msec). In addition, 87 features are added for the waveform
amplitude, zero-crossing rate and spectral difference measure in a 183 msec window
centered on the frame to be classified. It was trained on 9833 frames from 130
speakers spelling naturally. It was tested on 72 new speakers and achieved 76%
frame accuracy with the instances of each phoneme category equally balanced.
When we feed the outputs of this network into a second network in addition to the
DFT and other features, performance rose to 81%.
Simple letter models are used in a Viterbi search and enforce order and duration
constraints for the phonemes. More work is required on coarticulation modeling,
among other things. We are especially anxious to use carefully chosen features as
with our isolated letter recognizer.
Acknowledgements
This research was supported by Apple Computer, NSF, and a grant from DARPA to
the Computer Science & Engineering Department of the Oregon Graduate Institute.
We thank Vince Weatherill for his help in collecting and labeling data.
References
[Barnard et al., 1991] E. Barnard, R. A. Cole, M. Vea, and F. Alleva. Pitch detection with a neural net classifier. IEEE Transactions on Acoustics, Speech and
Signal Processing, 1991. To appear.
[Cole and Fanty, 1990] R. A. Cole and M. Fanty. Spoken letter recognition. In Proceedings of the DARPA Workshop on Speech and Natural Language Processing,
June 1990. Hidden Valley, PA.
[Cole and Hou, April 1988] R. A. Cole and L. Hou. Segmentation and broad classification of continuous speech. In Proceedings IEEE International Conference on
Acoustics, Speech, and Signal Processing, April, 1988.
[Cole et al., 1990] R. A. Cole, M. Fanty, Y. Muthusamy, and M. Gopalakrishnan.
Speaker-independent recognition of spoken english letters. In Proceedings of the
International Joint Conference on Neural Networks, June 1990. San Diego, CA.
[Cole et al., 1991] R. A. Cole, M. Fanty, M. Gopalakrishnan, and R. Janssen.
Speaker-independent name retrieval from spellings using a database of 50,000
names. In Proceedings IEEE International Conference on Acoustics, Speech, and
Signal Processing, 1991. Toronto, Canada.
[Fanty and Cole, 1990] M. Fanty and R. A. Cole. Speaker-independent english alphabet recognition: Experiments with the e-set. In Proceedings of the International Conference on Spoken Language Processing, November 1990. Kobe, Japan.
[Gopalakrishnan, August 1990] M. Gopalakrishnan. Segmenting speech into broad
phonetic categories using neural networks. Master's thesis, Oregon Graduate
Institute / Dept. of Computer Science, August, 1990.
[Janssen et al., 1989] R. D. T. Janssen, M. Fanty, and R. A. Cole. Speaker-independent
phonetic classification of the english alphabet. Submitted to Proceedings of the
International Joint Conference on Neural Networks, 1991.
An exploration-exploitation model based on norepinepherine and dopamine activity
Samuel M. McClure* , Mark S. Gilzenrat, and Jonathan D. Cohen
Center for the Study of Brain, Mind, and Behavior
Princeton University
Princeton, NJ 08544
[email protected]; [email protected]; [email protected]
Abstract
We propose a model by which dopamine (DA) and norepinepherine
(NE) combine to alternate behavior between relatively exploratory
and exploitative modes. The model is developed for a target
detection task for which there is extant single neuron recording
data available from locus coeruleus (LC) NE neurons. An
exploration-exploitation trade-off is elicited by regularly switching
which of the two stimuli are rewarded. DA functions within the
model to change synaptic weights according to a reinforcement
learning algorithm. Exploration is mediated by the state of LC
firing, with higher tonic and lower phasic activity producing
greater response variability. The opposite state of LC function,
with lower baseline firing rate and greater phasic responses, favors
exploitative behavior. Changes in LC firing mode result from
combined measures of response conflict and reward rate, where
response conflict is monitored using models of anterior cingulate
cortex (ACC). Increased long-term response conflict and decreased
reward rate, which occurs following reward contingency switch,
favors the higher tonic state of LC function and NE release. This
increases exploration, and facilitates discovery of the new target.
1 Introduction
A central problem in reinforcement learning is determining how to adaptively move
between exploitative and exploratory behaviors in changing environments. We
propose a set of neurophysiologic mechanisms whose interaction may mediate this
behavioral shift. Empirical work on the midbrain dopamine (DA) system has
suggested that this system is particularly well suited for guiding exploitative
behaviors. This hypothesis has been reified by a number of studies showing that a
temporal difference (TD) learning algorithm accounts for activity in these neurons
in a wide variety of behavioral tasks [1,2]. DA release is believed to encode a
reward prediction error signal that acts to change synaptic weights relevant for
producing behaviors [3]. Through learning, this allows neural pathways to predict
future expected reward through the relative strength of their synaptic connections
[1]. Decision-making procedures based on these value estimates are necessarily
greedy. Including reward bonuses for exploratory choices supports non-greedy
actions [4] and accounts for additional data derived from DA neurons [5]. We show
that combining a DA learning algorithm with models of response conflict detection
[6] and NE function [7] produces an effective annealing procedure for alternating
between exploration and exploitation.
NE neurons within the LC alternate between two firing modes [8]. In the first mode,
known as the phasic mode, NE neurons fire at a low baseline rate but have relatively
robust phasic responses to behaviorally salient stimuli. The second mode, called the
tonic mode, is associated with a higher baseline firing and absent or attenuated
phasic responses. The effects of NE on efferent areas are modulatory in nature, and
are well captured as a change in the gain of efferent inputs so that neuronal
responses are potentiated in the presence of NE [9]. Thus, in phasic mode, the LC
provides transient facilitation in processing, time-locked to the presence of
behaviorally salient information in motor or decision areas. Conversely, in tonic
mode, higher overall LC discharge rate increases gain generally and hence increases
the probability of arbitrary responding. Consistent with this account, for periods
when NE neurons are in the phasic mode, monkey performance is nearly perfect.
However, when NE neurons are in the tonic mode, performance is more erratic, with
increased response times and error rate [8]. These findings have led to a recent
characterization of the LC as a dynamic temporal filter, adjusting the system's
relative responsivity to salient and irrelevant information [8]. In this way, the LC is
ideally positioned to mediate the shift between exploitative and exploratory
behavior.
The parameters that underlie changes in LC firing mode remain largely unexplored.
Based on data from a target detection task by Aston-Jones and colleagues [10], we
propose that LC firing mode is determined in part by measures of response conflict
and reward rate as calculated by the ACC and OFC, respectively [8]. Together, the
ACC and OFC are the principle sources of cortical input to the LC [8]. Activity in
the ACC is known, largely through human neuroimaging experiments, to change in
accord with response conflict [6]. In brief, relatively equal activity in competing
behavioral responses (reflecting uncertainty) produces high conflict. Low conflict
results when one behavioral response predominates. We propose that increased
long-term response conflict biases the LC towards a tonic firing mode. Increased
conflict necessarily follows changes in reward contingency. As the previously
rewarded target no longer produces reward, there will be a relative increase in
response ambiguity and hence conflict. This relationship between conflict and LC
firing is analogous to other modeling work [11], which proposes that increased tonic
firing reflects increased environmental uncertainty.
As a final component to our model, we hypothesize that the OFC maintains an
ongoing estimate in reward rate, and that this estimate of reward rate also influences
LC firing mode. As reward rate increases, we assume that the OFC tends to bias the
LC in favor of phasic firing to target stimuli.
We have aimed to fix model parameters based on previous work using simpler
networks. We use parameters derived primarily from a previous model of the LC by
Gilzenrat and colleagues [7]. Integration of response conflict by the ACC and its
influence on LC firing was borrowed from unpublished work by Gilzenrat and
colleagues in which they fit human behavioral data in a diminishing utilities task.
Given this approach, we interpret our observed improvement in model performance
with combined NE and DA function as validation of a mechanism for automatically
switching between exploitative and exploratory action selection.
2 Go-No-Go Task and Core Model
We have modeled an experiment in which monkeys performed a target detection
task [10]. In the task, monkeys were shown either a vertical bar or a horizontal bar
and were required to make or omit a motor response appropriately. Initially, the
vertical bar was the target stimulus and correctly responding was rewarded with a
squirt of fruit juice (r=1 in the model). Responding to the non-target horizontal
stimulus resulted in time out punishment (r=-.1; Figure 1A). No responses to either
the target or non-target gave zero reward.
After the monkeys had fully acquired the task, the experimenters periodically
switched the reward contingency such that the previously rewarded stimulus (target)
became the distractor, and vice versa. Following such reversals, LC neurons were
observed to change from emitting phasic bursts of firing to the target, to tonic firing
following the switch, and slowly back to phasic firing for the new target as the new
response criteria was obtained [10].
Figure 1: Task and model design. (A) Responses were required for targets in order
to obtain reward. Responses to distractors resulted in a minor punishment. No
responses gave zero reward. (B) In the model, vertical and horizontal bar inputs (I_1 and I_2) fed to integrator neurons (X_1 and X_2) which then drove response units (Y_1 and Y_2). Responses were made if Y_1 or Y_2 crossed a threshold while input units were active.
We have previously modeled this task [7,12] with a three-layer connectionist
network in which two input units, I_1 and I_2, corresponding to the vertical and horizontal bars, drive two mutually inhibitory integrator units, X_1 and X_2. The integrator units subsequently feed two response units, Y_1 and Y_2 (Figure 1B). Responses are made whenever output from Y_1 or Y_2 crosses a threshold level of activity, \theta. Relatively weak cross connections from each input unit to the opposite integrator unit (I_1 to X_2 and I_2 to X_1) are intended to model stimulus similarity.
Both the integrator and response units were modeled as noisy, leaky accumulators:
    \dot{X}_i = -X_i + w_{X_i I_i} I_i + w_{X_i I_j} I_j - w_{X_i X_j} f(X_j) + \eta_i    (1)

    \dot{Y}_i = -Y_i + w_{Y_i X_i} f(X_i) - w_{Y_i Y_j} f(Y_j) + \eta_i.    (2)

The \eta_i terms represent stochastic noise variables. The response function for each unit is sigmoid with gain, g_t, determined by current LC activity (Eq. 9, below):

    f(X) = (1 + e^{-g_t (X - b)})^{-1}.    (3)
Response units, Y, were given a positive bias, b, and integrator units were unbiased.
All weight values, biases, and variance of noise are as reported in [7].
Integration was done with an Euler method at time steps of 0.02. Simulation of stimulus presentations involved setting one of the input units to a value of 1.0 for 20 units of model time. Activation of I_1 and I_2 were alternated and 20 units of model time were allowed between presentations for the integrator and response units to relax to baseline levels of activity. Input 1 was initially set to be the target and input 2 the distractor. After 50 presentations of I_1 and I_2 the reward contingencies were switched; the model was run through 6 such blocks and reversals. The response during each stimulus presentation was determined by which of the two response units first crossed a threshold of output activity (i.e. f(Y_1) > \theta), or was a no response if neither unit crossed threshold.
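For concreteness, one stimulus presentation under Eqs. 1-3 can be sketched as below; all weight, bias, threshold, gain, and noise values here are placeholders rather than the actual values taken from [7]:

```python
import numpy as np

def f(x, g, b=0.0):
    """Sigmoid response function of Eq. 3."""
    return 1.0 / (1.0 + np.exp(-g * (x - b)))

def run_trial(target=0, T=20.0, dt=0.02, theta=0.6, g=1.0, b=0.5,
              w_in=1.0, w_cross=0.3, w_inh=1.0, w_xy=1.0,
              noise=0.05, seed=0):
    """One stimulus presentation under Eqs. 1-3, Euler-integrated at dt.
    All parameter values are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    I = np.zeros(2); I[target] = 1.0       # active input unit
    X = np.zeros(2); Y = np.zeros(2)
    for _ in range(int(T / dt)):
        fX = f(X, g)                       # integrator units (unbiased)
        fY = f(Y, g, b)                    # response units (bias b, Eq. 3)
        dX = -X + w_in * I + w_cross * I[::-1] - w_inh * fX[::-1]
        dY = -Y + w_xy * fX - w_inh * fY[::-1]
        X = X + dt * dX + noise * np.sqrt(dt) * rng.normal(size=2)
        Y = Y + dt * dY + noise * np.sqrt(dt) * rng.normal(size=2)
        if (f(Y, g, b) >= theta).any():    # a response unit crossed threshold
            return int(np.argmax(Y))
    return None                            # no response this trial
```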
3 Performance of model with DA-mediated learning
In order to obtain a benchmark level of performance to compare against, we first
determined how learning progresses with DA-mediated reinforcement learning
alone. A reward unit, r, was included that had activity 0 except at the end of each
stimulus presentation when its activity was set equal to the obtained reward
outcome. Inhibitory inputs from the response units served as measures of expected
reward. At the end of every trial, the DA unit, \delta, obtained a value given by

    \delta(t) = r(t) - w_{\delta Y_1} Z(Y_1(t)) - w_{\delta Y_2} Z(Y_2(t))    (4)

where Z(Y) is a threshold function that is 1 if f(Y) \geq \theta and is 0 otherwise.
The output of dopamine neurons was used to update the weights along the pathway that leads to the response. Thus, at the end of every stimulus presentation, the weights between response units and DA neurons were updated according to

    w_{\delta Y_i}(t + 1) = w_{\delta Y_i}(t) + \lambda \delta(t) Z(Y_i)    (5)
where the learning rate, \lambda, was set to 0.3 for all simulations. This learning rule allowed the weights to converge to the expected reward for selecting each of the two actions. Weights between integrator and response units were updated using the same rule as in Eq. 5, except the weights were restricted to a minimum value of 0.8. When
the weight values were allowed to decrease below 0.8, sufficient activity never
accumulated in the response units to allow discovery of new reward contingencies.
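The trial-end update of Eqs. 4-5 can be sketched as follows; variable names are ours, and the floor at 0.8 is applied only to the integrator-to-response weights, as in the text:

```python
def da_update(w_dY, w_XY, Z, r, lam=0.3, w_min=0.8):
    """Trial-end update of Eqs. 4-5. Z = (Z(Y_1), Z(Y_2)) are the 0/1
    threshold outputs; w_dY are the reward predictors; w_XY are the
    integrator-to-response weights, which alone are floored at w_min."""
    delta = r - w_dY[0] * Z[0] - w_dY[1] * Z[1]              # Eq. 4
    w_dY = [w_dY[i] + lam * delta * Z[i] for i in range(2)]  # Eq. 5
    w_XY = [max(w_XY[i] + lam * delta * Z[i], w_min) for i in range(2)]
    return delta, w_dY, w_XY
```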
As the model learned, the weights along the target pathway obtained a maximum
value while those along the distractor pathway obtained a minimum value. After
reversals, the model initially adapted by reducing the weights along the pathway
associated with the previous target. The only way the model was able to obtain the
new target was by noise pushing the new target response unit above threshold.
Because of this, the performance of the model was greatly dependent on the value of
the threshold used in the simulation (Figure 2B). When the threshold was low
relative to noise, the model was able to quickly adapt to reversals. However, this
also resulted in a high rate of responding to non-target stimuli even after learning. In
order to reduce responding to the distractor, the threshold had to be raised, which
also increased the time required to adapt following reward reversals.
The network was initialized with equal preference for responding to input 1 or 2,
and generally acquired the initial target faster than after reversals (see Figure 2B).
Because of this, all subsequent analyses ignore this first learning period. For each
value of threshold studied, we ran the model 100 times. Plots shown in Figures 2
and 3 show the probability that the model responded, when each input was
activated, as a function of trial number (i.e. P(f(Y_i) \geq \theta | I_i = 1)).
Figure 2: Model performance with DA alone. (A) DA neurons, \delta, modulated weights
from integrator to response units in order to modulate the probability of responding
to each input. (B) The model successfully increases and decreases responding to
inputs 1 and 2 as reward contingencies reverse. However, the model is unable to
simultaneously obtain the new response quickly and maintain a low error rate once
the response is learned. When threshold is relatively low (left plot), the model
adapts quickly but makes frequent responses to the distractor. At higher threshold,
responses are correctly omitted to the distractor, but the model acquires the new
response slowly.
4 Improvement with NE-mediated annealing
We used the FitzHugh-Nagumo set of differential equations to model LC activity.
(These equations are generally used to model individual neurons, but we use them to
model the activity in the nucleus as a whole.) Previous work has shown that these
equations, with simple modifications, capture the fundamental aspects of tonic and
phasic mode activity in the LC [7]. The FitzHugh-Nagumo equations involve two
interacting variables v and u, where v is an activity term and u is an inhibitory
dampening term. The output of the LC is given by the value of u, which
conveniently captures the fact that the LC is self-inhibitory and that the postsynaptic effect of NE release is somewhat delayed [7].
The model included two inputs to the LC from the integrator units (X_1 and X_2) with modifiable weights. The state of the LC is then given by

    \tau_v \dot{v} = v(\alpha - v)(v - 1) - u + w_{vX_1} f(X_1) + w_{vX_2} f(X_2)    (6)

    \tau_u \dot{u} = h(v) - u    (7)

where the function h is defined by

    h(v) = Cv + (1 - C)d    (8)
and governs the firing mode of the LC. In order to change firing mode, h can be
modified so that the dynamics of u depend entirely on the state of the LC or so that
the dynamics are independent of state. This alternation is governed by the parameter
C. When C is equal to 1.0, the model is appropriately dampened and can burst
sharply and return to a relatively low baseline level of activity (phasic mode). When
C is small, the LC receives a fixed level of inhibition, which simultaneously reduces
bursting activity and increases baseline activity (tonic mode) [7].
The primary function of the LC in the model is to modify the gain, g, of the
response function of the integrator and response units as in equation 3. We let gain
be a linear function of u with base value G and dependency on u given by k:

    g_t = G + k u_t.    (9)
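One Euler step of the LC dynamics (Eqs. 6-9) might look like the sketch below; the time constants, cubic parameter alpha, constant d, input weights, and gain constants are placeholders for the values the paper takes from Gilzenrat et al. [7]:

```python
def lc_step(v, u, fX1, fX2, C, dt=0.02, tau_v=1.0, tau_u=5.0,
            alpha=0.5, d=0.5, w_v1=0.3, w_v2=0.3, G=0.5, k=3.0):
    """One Euler step of Eqs. 6-9; all parameter values are illustrative."""
    h = C * v + (1.0 - C) * d                       # Eq. 8: firing-mode control
    dv = (v * (alpha - v) * (v - 1.0) - u + w_v1 * fX1 + w_v2 * fX2) / tau_v
    du = (h - u) / tau_u                            # Eq. 7
    v, u = v + dt * dv, u + dt * du
    gain = G + k * u                                # Eq. 9: NE sets the gain
    return v, u, gain
```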
The value of C was updated after every trial by measures of response conflict and
reward rate. Response conflict was calculated as a normalized measure of the energy
in the response units during the trial. For convenience, define \mathbf{Y}_1 to be a vector of the activity in unit Y_1 at each point of time during a trial, f(Y_1(t)). Let \mathbf{Y}_2 be defined similarly. The conflict during the trial is

    K = \frac{\mathbf{Y}_1 \cdot \mathbf{Y}_2}{\|\mathbf{Y}_1\| \|\mathbf{Y}_2\|}    (10)

which correctly measures energy since Y_1 and Y_2 are connected with weight -1. This normalization procedure was necessary to account for changes in the magnitude of Y_1 and Y_2 activity due to learning.
Based on previous work [8], we let conflict modify C separately based on a short-term, K_S, and long-term, K_L, measure. The variable K_S was updated at the end of every Tth trial according to

    K_S(T + 1) = (1 - \lambda_S) K_S(T) + \lambda_S K(T)    (11)

where \lambda_S was 0.2 and K_S(T+1) was used to calculate the value of C used for the (T+1)th trial. K_L was updated with the same rule as K_S except \lambda_L was 0.05. We let short- and long-term conflict have opposing effects on the firing mode of the LC.
This was developed previously to capture human behavior in a diminishing utilities
task. When short-term conflict increases, the LC is biased towards phasic firing
(increased C). This allows the model to recover from occasional errors. However,
when long-term conflict increases this is taken to indicate that the current decision
strategy is not working. Therefore, increased long-term conflict biases the LC to the
tonic mode so as to increase response volatility.
Figure 3: Model performance with DA and NE. (A) The full model includes a
conflict detection unit, K, and a reward rate measure, R, which combine to modify
activity in the LC. The LC modifies the gain in the integrator and response units. (B)
The benefit of including the LC in the model is insignificant when the response
threshold is regularly crossed by noise alone, and hence when the error rate is high.
(C) However, when the threshold is greater and error rate lower, NE dramatically
improves the rate at which the new reward contingencies are learned after reversal.
Reward rate, R, was updated at the end of every trial according to

    R(T + 1) = (1 - \lambda_R) R(T) + \lambda_R r    (12)

where r is the reward earned on the Tth trial. Increased reward rate was assumed to bias the LC to phasic firing.

Reward rate, short-term conflict, and long-term conflict updated C according to

    C = \sigma(K_S)(1 - \sigma(K_L)) \sigma(R)    (13)

where each \sigma is a sigmoid function with a gain of 6.0 and no bias, as determined by fitting to behavior with previous models.
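Putting Eqs. 10-13 together, the per-trial update of C can be sketched as follows; lam_R is a placeholder, since the text fixes only the two conflict rates:

```python
import numpy as np

def sigma(x, gain=6.0):
    """Unbiased sigmoid used in Eq. 13."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def update_lc_mode(Y1, Y2, r, K_S, K_L, R,
                   lam_S=0.2, lam_L=0.05, lam_R=0.05):
    """Per-trial update of C from Eqs. 10-13. Y1, Y2 are the vectors of
    response-unit activity over the trial."""
    Y1, Y2 = np.asarray(Y1), np.asarray(Y2)
    K = Y1.dot(Y2) / (np.linalg.norm(Y1) * np.linalg.norm(Y2))   # Eq. 10
    K_S = (1 - lam_S) * K_S + lam_S * K                          # Eq. 11
    K_L = (1 - lam_L) * K_L + lam_L * K
    R = (1 - lam_R) * R + lam_R * r                              # Eq. 12
    C = sigma(K_S) * (1 - sigma(K_L)) * sigma(R)                 # Eq. 13
    return C, K_S, K_L, R
```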
As with the model with DA alone, the effect of NE depended significantly on the
value of the threshold \theta. When \theta was small, the improvement afforded by the LC
was negligible (Figure 3B). However, when the threshold was significantly greater
than noise, the improvement was substantial (Figure 3C).
Monkeys were able to perform this task with accuracy greater than 90% and
simultaneously were able to adapt to reversals within 50 trials [10]. While it is
impossible to compare the output of our model with monkey behavior, we can make
the qualitative assertion that, as with monkeys, our NE-based annealing model
allows for high accuracy (and high threshold) decision-making while preserving
adaptability to changes in reward contingencies. In order to better demonstrate this
improvement, we fit single exponential curves to the plots of probability of
accurately responding to the new target by trial number (as in Figure 3B,C). Shown
in Figure 4 is the time constant for these exponential fits, which we term the
discovery time constant, for different values of the threshold. As can be seen, the
model with NE-mediated annealing maintains a relatively fast discovery time even
as the threshold becomes relatively large.
Figure 4: Summary of model performance with and without NE.
5 Discussion
We have demonstrated that a model incorporating behavioral and learning effects
previously ascribed to DA and NE produces an adaptive mechanism for switching
between exploratory and exploitative decision-making. Our model uses measures of
response conflict and reward rate to modify LC firing mode, and hence to change
network dynamics in favor of more or less volatile behavior. In essence, combining
previous models of DA and NE function produces a performance-based auto-annealing algorithm.
There are several limitations to this model that can be remedied by greater
sophistication in the learning algorithm. The primary limitation is that the model
varies between more or less volatile action selection only over the range of reward
relevant to our studied task. Model parameters could be altered on a task-by-task
basis to correct this; however, a more general scheme may be accomplished with a
mean reward learning algorithm [13]. It has previously been argued that DA neurons
may actually emit an average reward TD error [14]. This change may require
allowing both short- and long-term reward rate control the LC firing mode (Eq. 13).
Another limitation of this model is that, while exploration is increased as performance measures wane, exploration is not managed intelligently. This does not significantly affect the performance of our model since there are only two available actions. As the number of alternatives increases, rapid learning may require something akin to reward bonuses [4,5].
Understanding the interplay between DA and NE function in learning and decision-making is also relevant for understanding disease. Numerous psychiatric disorders
are known to involve dysregulation of NE and DA release. Furthermore, hallmark
features of ADHD and schizophrenia include cognitive disorders in which behavior
appears either too volatile (ADHD) or too inflexible (schizophrenia) [15,16].
Improved models of DA-NE interplay during learning and decision-making, coupled
with empirical data, may simultaneously improve knowledge of how the brain
handles the exploration-exploitation dilemma and how this goes awry in disease.
Acknowledgments
This work was supported by NIH grants P50 MH62196 and MH065214.
References
[1] Montague, P.R., Dayan, P., Sejnowski, T.J. (1996) A framework for mesencephalic
dopamine systems based on predictive Hebbian learning. J. Neurosci. 16: 1936-1947.
[2] Schultz, W. Dayan, P. & Montague, P.R. (1997) A neural substrate for prediction and
reward. Science 275: 1593-1599.
[3] Reynolds, J.N., Hyland, B.I., Wickens, J.R. (2001) A cellular mechanism of reward-related learning. Nature 413: 67-70.
[4] Sutton, R.S. (1990) Integrated architectures for learning, planning, and reacting based on
approximated dynamic programming. Mach. Learn., Proc. 7 th International Conf. 216-224.
[5] Kakade, S., Dayan, P. (2002) Dopamine: generalization and bonuses. Neural Networks
15: 549-559.
[6] Botvinick, M.M., Braver, T.S., Barch, D.M., Carter, C.S., Cohen, J.D. (2001) Conflict
monitoring and cognitive control. Psychol. Rev. 108: 624-652.
[7] Gilzenrat, M.S., Holmes, B.D., Rajkowski, J., Aston-Jones, G., Cohen, J.D. (2002)
Simplified dynamics in a model of noradrenergic modulation of cognitive performance.
Neural Networks 15: 647-663.
[8] Aston-Jones, G., Cohen, J.D. (2005) An integrative theory of locus coeruleus-norepinepherine function. Ann. Rev. Neurosci. 28: 403-450.
[9] Servan-Schreiber, D., Printz, H., Cohen, J.D. (1990) A network model of catecholamine
effects: gain, signal-to-noise ratio and behavior. Science 249: 892-895.
[10] Aston-Jones, G., Rajkowski, J., Kubiak, P. (1997) Conditioned responses of monkey
locus coeruleus neurons anticipate acquisition of discriminative behavior in a vigilance task.
Neuroscience 80: 697-715.
[11] Yu, A., Dayan, P. (2005) Uncertainty, neuromodulation and attention. Neuron 46: 681-692.
[12] Usher, M., Cohen, J.D., Rajkowski, J., Aston-Jones, G. (1999) The role of the locus
coeruleus in the regulation of cognitive performance. Science 283: 549-554.
[13] Schwartz, A. (1993) A reinforcement learning method for maximizing undiscounted
rewards. In: Proc. 10th International Conf. Mach. Learn. (pp. 298-305). San Mateo, CA:
Morgan Kaufmann.
[14] Daw, N.D., Touretzky, D.S. (2002) Long-term reward prediction in TD models of the
dopamine system. Neural Computation 14: 2567-2583.
[15] Goldberg, T.E., Weinberger, D.R., Berman, K.F., Pliskin, N.H., Podd, M.H. (1987)
Further evidence for dementia of the prefrontal type in schizophrenia? A controlled study
teaching the Wisconsin Card Sorting Test. Arch. Gen. Psychiatry 44: 1008-1014.
[16] Barkley, R.A. (1997) Behavioural inhibition, sustained attention, and executive
functions: constructing a unified theory of AD/HD. Psychol. Bull. 121: 65-94.
On Local Rewards and Scaling Distributed
Reinforcement Learning
J. Andrew Bagnell
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
[email protected]
[email protected]
Abstract
We consider the scaling of the number of examples necessary to achieve
good performance in distributed, cooperative, multi-agent reinforcement
learning, as a function of the number of agents n. We prove a worst-case lower bound showing that algorithms that rely solely on a global
reward signal to learn policies confront a fundamental limit: They require a number of real-world examples that scales roughly linearly in the
number of agents. For settings of interest with a very large number of
agents, this is impractical. We demonstrate, however, that there is a class
of algorithms that, by taking advantage of local reward signals in large
distributed Markov Decision Processes, are able to ensure good performance with a number of samples that scales as O(log n). This makes
them applicable even in settings with a very large number of agents n.
1 Introduction
Recently there has been great interest in distributed reinforcement learning problems where
a collection of agents with independent action choices attempts to optimize a joint performance metric. Imagine, for instance, a traffic engineering application where each traffic
signal may independently decide when to switch colors, and performance is measured by
aggregating the throughput at all traffic stops. Problems with such factorizations where the
global reward decomposes into a sum of local rewards are common and have been studied
in the RL literature [10].
The most straightforward and common approach to solving these problems is to apply one
of the many well-studied single agent algorithms to the global reward signal. Effectively,
this treats the multi-agent problem as a single agent problem with a very large action space.
Peshkin et al. [9] establish that policy gradient learning factorizes into independent policy
gradient learning problems for each agent using the global reward signal. Chang et al. [3]
use global reward signals to estimate effective local rewards for each agent. Guestrin et
al. [5] consider coordinating agent actions using the global reward. We argue from an
information theoretic perspective that such algorithms are fundamentally limited in their
scalability. In particular, we show in Section 3 that as a function of the number of agents
?
n, such algorithms will need to see1 ?(n)
trajectories in the worst case to achieve good
performance.
We suggest an alternate line of inquiry, pursued as well by other researchers (including
1
? notation omits logarithmic terms, similar to how big-? notation drops constant values.
Big-?
notably [10]), of developing algorithms that capitalize on the availability of local reward
signals to improve performance. Our results show that such local information can dramatically reduce the number of examples necessary for learning to O(log n). One approach
that the results suggest to solving such distributed problems is to estimate model parameters
from all local information available, and then to solve the resulting model offline. Although
this clearly still carries a high computational burden, it is much preferable to requiring a
large amount of real-world experience. Further, useful approximate multiple agent Markov
Decision Process (MDP) solvers that take advantage of local reward structure have been
developed [4].
2 Preliminaries
We consider distributed reinforcement learning problems, modeled as MDPs, in which
there are n (cooperative) agents, each of which can directly influence only a small number
of its neighbors. More formally, let there be n agents, each with a finite state space S of
size |S| states and a finite action space A of size |A|. The joint state space of all the agents
is therefore S^n, and the joint action space A^n. If s_t \in S^n is the joint state of the agents at time t, we will use s_t^{(i)} to denote the state of agent i. Similarly, let a_t^{(i)} denote the action of agent i.
For each agent i \in \{1, \ldots, n\}, we let neigh(i) \subseteq \{1, \ldots, n\} denote the subset of agents that i's state directly influences. For notational convenience, we assume that if i \in neigh(j), then j \in neigh(i), and that i \in neigh(i). Thus, the agents can be viewed
as living on the vertices of a graph, where agents have a direct influence on each other?s
state only if they are connected by an edge. This is similar to the graphical games formalism of [7], and is also similar to the Dynamic Bayes Net (DBN)-MDP formalisms of [6]
and [2]. (Figure 1 depicts a DBN and an agent influence graph.) DBN formalisms allow
the more refined notion of directionality in the influence between neighbors.
More formally, each agent i is associated with a CPT (conditional probability table) P_i(s_{t+1}^{(i)} \mid s_t^{(\mathrm{neigh}(i))}, a_t^{(i)}), where s_t^{(\mathrm{neigh}(i))} denotes the state of agent i's neighbors at time t. Given the joint action a of the agents, the joint state evolves according to

    p(s_{t+1} \mid s_t, a_t) = \prod_{i=1}^{n} p(s_{t+1}^{(i)} \mid s_t^{(\mathrm{neigh}(i))}, a_t^{(i)}).    (1)
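As an illustration, sampling the joint next state from this factored model only requires each agent's local CPT; the dictionary-based representation below is our own, not part of the paper:

```python
import numpy as np

def sample_next_state(s, a, neigh, cpts, rng):
    """Sample s_{t+1} from the factored model of Eq. (1).
    neigh[i]: list of agent i's neighbors. cpts[i]: dict mapping
    (neighbor-state tuple, a_i) to a probability vector over agent i's
    next local state."""
    s_next = []
    for i in range(len(s)):
        key = (tuple(s[j] for j in neigh[i]), a[i])
        dist = cpts[i][key]                 # P_i(s'_i | s^(neigh(i)), a_i)
        s_next.append(int(rng.choice(len(dist), p=dist)))
    return tuple(s_next)
```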
For simplicity, we have assumed that agent i's state is directly influenced by the states of neigh(i) but not their actions; the generalization offers no difficulties. The initial state s_1
is distributed according to some initial-state distribution D.
A policy is a map \pi : S^n \to A^n. Writing \pi out explicitly as a vector-valued function, we have \pi(s) = (\pi_1(s), \ldots, \pi_n(s)), where \pi_i(s) : S^n \to A is the local policy of agent i. For some applications, we may wish to consider only policies in which agent i chooses its local action as a function of only its local state s^{(i)} (and possibly its neighbors); in this case, \pi_i can be restricted to depend only on s^{(i)}.
Each agent has a local reward function R_i(s^{(i)}, a^{(i)}), which takes values in the unit interval [0, 1]. The total payoff in the MDP at each step is R(s, a) = (1/n) \sum_{i=1}^{n} R_i(s^{(i)}, a^{(i)}).
We call this R(s, a) the global reward function, since it reflects the total reward received
by the joint set of agents. We will consider the finite-horizon setting, in which the MDP
terminates after T steps. Thus, the utility of a policy \pi in an MDP M is

    U(\pi) = U_M(\pi) = E_{s_1 \sim D}[V^{\pi}(s_1)] = E\left[ \frac{1}{n} \sum_{t=1}^{T} \sum_{i=1}^{n} R_i(s_t^{(i)}, a_t^{(i)}) \mid \pi \right].
In the reinforcement learning setting, the dynamics (CPTs) and rewards of the problem are unknown, and a learning algorithm has to take actions in the MDP and use the resulting observations of state transitions and rewards to learn a good policy. Each "trial" taken by a reinforcement learning algorithm shall consist of a T-step sequence in the MDP.
Figure 1: (Left) A DBN description of a multi-agent MDP. Each row of (round) nodes in the DBN
corresponds to one agent. (Right) A graphical depiction of the influence effects in a multi-agent
MDP. A connection between nodes in the graph implies arrows connecting the nodes in the DBN.
Our goal is to characterize the scaling of the sample complexity for various reinforcement
learning approaches (i.e., how many trials they require in order to learn a near-optimal
policy) for large numbers of agents n. Thus, in our bounds below, no serious attempt has
been made to make our bounds tight in variables other than n.
3 Global rewards hardness result
Below we show that if an RL algorithm uses only the global reward signal, then there exists a very simple MDP (one with horizon T = 1, only one state with trivial dynamics, and two actions per agent) on which the learning algorithm will require Ω̃(n)¹ trials to learn a good policy. Thus, such algorithms do not scale well to large numbers of agents. For example, consider learning in the traffic signal problem described in the introduction with n = 100,000 traffic lights. Such an algorithm may then require on the order of 100,000 days of experience (trials) to learn. In contrast, in Section 4, we show that if a reinforcement learning algorithm is given access to the local rewards, it can be possible to learn in such problems with an exponentially smaller O(log n) sample complexity.
Theorem 3.1: Let any 0 < ε < 0.05 be fixed. Let any reinforcement learning algorithm L be given that only uses the global reward signal R(s), and does not use the local rewards R_i(s^{(i)}) to learn (other than through their sum). Then there exists an MDP with time horizon T = 1, so that:

1. The MDP is very "simple" in that it has only one state (|S| = 1, |S^n| = 1); trivial state transition probabilities (since T = 1); two actions per agent (|A| = 2); and deterministic binary (0/1)-valued local reward functions.

2. In order for L to output a policy π̂ that is near-optimal, satisfying² U(π̂) ≥ max_π U(π) − ε, it is necessary that the number of trials m be at least

    m ≥ (0.32 n + log(1/4)) / log(n + 1) = Ω̃(n).
Proof. For simplicity, we first assume that L is a deterministic learning algorithm, so that in each of the m trials, its choice of action is some deterministic function of the outcomes of the earlier trials. Thus, in each of the m trials, L chooses a vector of actions a ∈ A^n, and receives the global reward signal R(s, a) = (1/n) Σ_{i=1}^n R(s^{(i)}, a^{(i)}). In our MDP, each local reward R(s^{(i)}, a^{(i)}) will take values only 0 and 1. Thus, R(s, a) can take only n + 1 different values (namely, 0/n, 1/n, . . . , n/n). Since T = 1, the algorithm receives only one such reward value in each trial.

Let r_1, . . . , r_m be the m global reward signals received by L in the m trials. Since L is deterministic, its output policy π̂ will be chosen as some deterministic function of these rewards r_1, . . . , r_m. But the vector (r_1, . . . , r_m) can take on only (n + 1)^m different values (since each r_t can take only n + 1 different values), and thus π̂ itself can also take only at most (n + 1)^m different values. Let Π_m denote this set of possible values for π̂ (|Π_m| ≤ (n + 1)^m).

² For randomized algorithms we consider instead the expectation of U(π̂) under the algorithm's randomization.
Call each local agent's two actions a_1, a_2. We will generate an MDP with randomly chosen parameters. Specifically, each local reward function R_i(s^{(i)}, a^{(i)}) is randomly chosen, with equal probability, to either give reward 1 for action a_1 and reward 0 for action a_2, or vice versa. Thus, each local agent has one "right" action that gives reward 1, but the algorithm has to learn which of the two actions this is. Further, by choosing the right actions, the optimal policy π* attains U(π*) = 1.
Pn
Fix any policy ?. Then UM (?) = n1 i=1 R(s(i) , ?(s(i) )) is the mean of n independent
Bernoulli(0.5) random variables (since the rewards are chosen randomly), and has expected
value 0.5. Thus, by the Hoeffding inequality, P (UM (?) ? 1?2?) ? exp(?2(0.5?2?)2 n).
Thus, taking a union bound over all policies ? ? ?M , we have
P (?? ? ?M s.t. UM (?) ? 1 ? 2?) ? |?M | exp(?2(0.5 ? 2?)2 n)
(2)
? (n + 1)m exp(?2(0.5 ? 2?)2 n)
(3)
Here, the probability is over the random MDP M . But since L outputs a policy in ?M , the
chance of L outputting a policy ?
? with UM (?
? ) ? 1 ? 2? is bounded by the chance that
there exists such a policy in ?M . Thus,
P (UM (?
? ) ? 1 ? 2?) ? (n + 1)m exp(?2(0.5 ? 2?)2 n).
(4)
By setting the right hand side to 1/4 and solving for m, we see that so long as

    m < (0.32 n + log(1/4)) / log(n + 1) ≤ (2(0.5 − 2ε)² n + log(1/4)) / log(n + 1),    (5)

we have that P(U_M(π̂) ≥ 1 − 2ε) < 1/4. (The second inequality above follows by taking ε < 0.05, ensuring that no policy will be within 0.1 of optimal.) Thus, under this condition, by the standard probabilistic method argument [1], there must be at least one such MDP under which L fails to find an ε-optimal policy.
For randomized algorithms L, we can define for each string of input random numbers ρ to the algorithm a deterministic algorithm L_ρ. Given m samples as above, the expected performance of algorithm L_ρ over the distribution of MDPs satisfies

    E_{p(M)}[L_ρ] ≤ Pr(U_M(L_ρ) ≥ 1 − 2ε) · 1 + (1 − Pr(U_M(L_ρ) ≥ 1 − 2ε))(1 − 2ε)
                 < 1/4 + (3/4)(1 − 2ε) < 1 − ε.

Since

    E_{p(M)} E_{p(ρ)}[U_M(L_ρ)] = E_{p(ρ)} E_{p(M)}[U_M(L_ρ)] < E_{p(ρ)}[1 − ε],

it follows again from the probabilistic method that there must be at least one MDP for which L has expected performance less than 1 − ε.
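As a quick numeric illustration of Theorem 3.1, the sketch below evaluates the lower bound (0.32 n + log(1/4)) / log(n + 1) for several population sizes; at n = 100,000 it comes to roughly 2.8 × 10³ trials, the log(n + 1) denominator being exactly what the Ω̃ notation hides. This uses only the formula itself, with no further assumptions.

```python
import numpy as np

def trial_lower_bound(n):
    """m >= (0.32 n + log(1/4)) / log(n + 1), the bound from Theorem 3.1."""
    return (0.32 * n + np.log(0.25)) / np.log(n + 1.0)

for n in (10, 100, 1_000, 100_000):
    print(f"n = {n:>7}: m >= {trial_lower_bound(n):10,.0f} trials")
```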
4 Learning with local rewards

Assuming the existence of a good exploration policy, we now show a positive result: if our learning algorithm has access to the local rewards, then it is possible to learn a near-optimal policy after a number of trials that grows only logarithmically in the number of agents n. In this section, we will assume that the neighborhood structure (encoded by neigh(i)) is known, but that the CPT parameters of the dynamics and the reward functions are unknown. We also assume that the size of the largest neighborhood is bounded: max_i |neigh(i)| = B.
Definition. A policy π_explore is a (σ, α)-exploration policy if, given any i, any configuration of states s^{(neigh(i))} ∈ S^{|neigh(i)|}, and any action a^{(i)} ∈ A, on a trial of length T the policy π_explore has at least a probability α σ^B of executing action a^{(i)} while i and its neighbors are in state s^{(neigh(i))}.

Proposition 4.1: Suppose the MDP's initial state distribution is random, so that the state s^{(i)} of each agent i is chosen independently from some distribution D_i. Further, assume that D_i assigns probability at least σ > 0 to each possible state value s ∈ S. Then the "random" policy π (that on each time-step chooses each agent's action uniformly at random over A) is a (σ, 1/|A|)-exploration policy.

Proof. For any agent i, the initial state of s^{(neigh(i))} has at least a σ^B chance of being any particular vector of values, and the random action policy has a 1/|A| chance of taking any particular action from this state.
In general, it is a fairly strong assumption to assume that we have an exploration policy. However, this assumption serves to decouple the problem of exploration from the "sample complexity" question of how much data we need from the MDP. Specifically, it guarantees that we visit each local configuration sufficiently often to have a reasonable amount of data to estimate each CPT.³
In the envisioned procedure, we will execute an exploration policy for m trials, and then use the resulting data we collect to obtain the maximum-likelihood estimates for the CPT entries and the rewards. We call the resulting estimates p̂(s_{t+1}^{(i)} | s_t^{(neigh(i))}, a_t^{(i)}) and R̂(s^{(i)}, a^{(i)}).⁴ The following simple lemma shows that, with a number of trials that grows only logarithmically in n, this procedure will give us good estimates for all CPTs and local rewards.
Lemma 4.2: Let any ε₀ > 0, δ > 0 be fixed. Suppose |neigh(i)| ≤ B for all i, and let a (σ, α)-exploration policy be executed for m trials. Then in order to guarantee that, with probability at least 1 − δ, the CPT and reward estimates are ε₀-accurate:

    |p̂(s_{t+1}^{(i)} | s_t^{(neigh(i))}, a_t^{(i)}) − p(s_{t+1}^{(i)} | s_t^{(neigh(i))}, a_t^{(i)})| ≤ ε₀   for all i, s_{t+1}^{(i)}, s_t^{(neigh(i))}, a_t^{(i)}
    |R̂(s^{(i)}, a^{(i)}) − R(s^{(i)}, a^{(i)})| ≤ ε₀   for all i, s^{(i)}, a^{(i)},    (6)

it suffices that the number of trials be

    m = O((log n) · poly(1/ε₀, 1/δ, |S|, |A|, 1/(α σ^B), B, T)).
Proof (Sketch). Given c examples to estimate a particular CPT entry (or a reward table entry), the probability that this estimate differs from the true value by more than ε₀ can be controlled by the Hoeffding bound:

    P(|p̂(s_{t+1}^{(i)} | s_t^{(neigh(i))}, a_t^{(i)}) − p(s_{t+1}^{(i)} | s_t^{(neigh(i))}, a_t^{(i)})| ≥ ε₀) ≤ 2 exp(−2 ε₀² c).

Each CPT has at most |A||S|^{B+1} entries and there are n such tables. There are also n|S||A| possible local reward values. Taking a union bound over them, setting our probability of incorrectly estimating any CPTs or rewards to δ/2, and solving for c gives c ≥ (2/ε₀²) log(4 n |A| |S|^{B+1} / δ). For each agent i we see each local configuration of states and actions (s^{(neigh(i))}, a^{(i)}) with probability ≥ σ^B α. For m trajectories the expected number of samples we see for each CPT entry is at least m σ^B α. Call S_m^{(s^{(neigh(i))}, a^{(i)})} the number of samples we have seen of a configuration (s^{(neigh(i))}, a^{(i)}) in m trajectories. Note then that

    P(S_m^{(s^{(neigh(i))}, a^{(i)})} ≤ c) ≤ P(S_m^{(s^{(neigh(i))}, a^{(i)})} − E[S_m^{(s^{(neigh(i))}, a^{(i)})}] ≤ c − m σ^B α),

and another application of Hoeffding's bound ensures that

    P(S_m^{(s^{(neigh(i))}, a^{(i)})} − E[S_m^{(s^{(neigh(i))}, a^{(i)})}] ≤ c − m σ^B α) ≤ exp(−2 (c − m σ^B α)² / (m T²)).

Applying again the union bound to ensure that the probability of failure here is ≤ δ/2 and solving for m gives the result.

³ Further, it is possible to show a stronger version of our result than that stated below, showing that a random action policy can always be used as our exploration policy, to obtain a sample complexity bound with the same logarithmic dependence on n (but significantly worse dependencies on T and B). This result uses ideas from the random trajectory method of [8], with the key observation that local configurations that are not visited reasonably frequently by the random exploration policy will not be visited frequently by any policy, and thus inaccuracies in our estimates of their CPT entries will not significantly affect the result.

⁴ We let p̂(s_{t+1}^{(i)} | s_t^{(neigh(i))}, a_t^{(i)}) be the uniform distribution if (s_t^{(neigh(i))}, a_t^{(i)}) was never observed in the training data, and similarly let R̂(s^{(i)}, a^{(i)}) = 0 if R(s^{(i)}, a^{(i)}) was never observed.
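Below is a minimal sketch of the estimation step analyzed in Lemma 4.2: run the random exploration policy for m trials, count local transitions and rewards, and take empirical frequencies, with the uniform-distribution and zero-reward fallbacks of footnote 4 for configurations never visited. The toy ring environment and ground-truth tables exist only to generate data and are assumptions of the sketch.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n, S, A, T, m = 4, 2, 2, 5, 500          # agents, states, actions, horizon, trials
neigh = {i: [(i - 1) % n, i, (i + 1) % n] for i in range(n)}
enc = lambda s, i: int("".join(str(s[j]) for j in neigh[i]), S)

# Ground truth, used here only to generate trajectories (an assumption of the sketch).
true_cpt = {i: rng.dirichlet(np.ones(S), size=(S ** 3, A)) for i in range(n)}
true_rew = {i: rng.random((S, A)) for i in range(n)}

counts = defaultdict(lambda: np.zeros(S))     # (i, local config, a_i) -> next-state counts
rsum, rcnt = defaultdict(float), defaultdict(int)

for _ in range(m):
    s = rng.integers(S, size=n)
    for _ in range(T):
        a = rng.integers(A, size=n)           # the "random" (sigma, 1/|A|)-exploration policy
        s_next = np.array([rng.choice(S, p=true_cpt[i][enc(s, i), a[i]]) for i in range(n)])
        for i in range(n):
            counts[(i, enc(s, i), a[i])][s_next[i]] += 1
            rsum[(i, s[i], a[i])] += true_rew[i][s[i], a[i]]
            rcnt[(i, s[i], a[i])] += 1
        s = s_next

def p_hat(i, cfg, ai):
    c = counts[(i, cfg, ai)]
    return c / c.sum() if c.sum() > 0 else np.full(S, 1.0 / S)   # uniform if never observed

def r_hat(i, si, ai):
    k = (i, si, ai)
    return rsum[k] / rcnt[k] if rcnt[k] else 0.0                 # zero if never observed

print(p_hat(0, 0, 0), r_hat(0, 0, 0))
```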
Definition. Define the radius of influence r(t) after t steps to be the maximum number of nodes that are within t steps, in the neighborhood graph, of any single node.

Viewed differently, r(t) upper bounds the number of nodes in the t-th timeslice of the DBN (as in Figure 1) which are descendants of any single node in the 1st timeslice. In a DBN as shown in Figure 1, we have r(t) = O(t). If the neighborhood graph is a 2-d lattice in which each node has at most 4 neighbors, then r(t) = O(t²). More generally, we might expect to have r(t) = O(t²) for "most" planar neighborhood graphs. Note that, even in the worst case, by our assumption of each node having B neighbors, we still have the bound r(t) ≤ B^t, which is a bound independent of the number of agents n.
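The radius of influence is straightforward to compute exactly with a breadth-first search over the neighborhood graph. The sketch below does so for a ring, where r(t) = 2t + 1 = O(t), matching the 1-d case in the discussion above; the choice of graph is an illustrative assumption.

```python
from collections import deque

def radius_of_influence(adj, t):
    """r(t): the maximum number of nodes within t hops of any single node."""
    best = 0
    for src in adj:
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == t:
                continue
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    frontier.append((v, d + 1))
        best = max(best, len(seen))
    return best

n = 12
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # a 1-d lattice
print([radius_of_influence(ring, t) for t in range(4)])    # [1, 3, 5, 7] -> O(t)
```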
Theorem 4.3: Let any ε > 0, δ > 0 be fixed. Suppose |neigh(i)| ≤ B for all i, and let a (σ, α)-exploration policy be executed for m trials in the MDP M. Let M̂ be the maximum likelihood MDP, estimated from the data from these m trials. Let Π be a policy class, and let

    π̂ = arg max_{π ∈ Π} U_{M̂}(π)

be the best policy in the class, as evaluated on M̂. Then to ensure that, with probability 1 − δ, we have that π̂ is near-optimal within ε, i.e., that

    U_M(π̂) ≥ max_{π ∈ Π} U_M(π) − ε,

it suffices that the number of trials be

    m = O((log n) · poly(1/ε, 1/δ, |S|, |A|, 1/(α σ^B), B, T, r(T))).
Proof. Our approach is essentially constructive: we show that, for any policy, finite-horizon value-iteration using approximate CPTs and rewards in its backups will correctly estimate the true value function for that policy within ε/2. For simplicity, we assume that the initial state distribution is known (and thus the same in M̂ and M); the generalization offers no difficulties. By Lemma 4.2, with m samples we can know both CPTs and rewards, with the probability required, within any required ε₀.

Note also that for any MDP with the given DBN or neighborhood graph structure (including both M and M̂), the value function for every policy π and at each time-step has a property of bounded variation:

    |V_π^t(s^{(1)}, . . . , s^{(n)}) − V_π^t(s^{(1)}, . . . , s^{(i−1)}, s_changed^{(i)}, s^{(i+1)}, . . . , s^{(n)})| ≤ r(T) T / n.

This follows since a change in state can affect at most r(T) agents' states, so the resulting change in utility must be bounded by r(T) T / n.
To compute a bound on the error in our estimate of overall utility, we compute a bound on the error induced by a one-step Bellman backup, ||B V̂ − B̂ V̂||_∞. This quantity can be bounded in turn by considering the sequence of partially correct backup operators B̂_0, . . . , B̂_n, where B̂_i is defined as the Bellman operator for policy π using the exact transitions and rewards for agents 1, 2, . . . , i, and the estimated rewards/transitions for agents i + 1, . . . , n. From this definition it is immediate that the total error is equivalent to the telescoping sum:

    ||B V̂ − B̂ V̂||_∞ = ||B̂_0 V̂ − B̂_1 V̂ + B̂_1 V̂ − . . . + B̂_{n−1} V̂ − B̂_n V̂||_∞    (7)

That sum is upper-bounded by the sum of term-by-term errors Σ_{i=0}^{n−1} ||B̂_i V̂ − B̂_{i+1} V̂||_∞. We can show that each of the terms in the sum is less than ε₀ r(T)(T + 1)/n, since the Bellman operators B̂_i, B̂_{i+1} differ in the immediate reward contribution of agent i + 1 by ≤ ε₀, and differ in computing the expected value of the future value by

    E_{∏_{j=1}^{i+1} p(s_{t+1}^j | s_t, π) ∏_{j=i+2}^{n} p̂(s_{t+1}^j | s_t, π)} [ Σ_{s^{i+1}} Δp(s_{t+1}^{i+1} | s_t, π) V̂_{t+1}(s) ],

with Δp(s_{t+1}^{i+1} | s_t, π) ≤ ε₀ the difference in the CPTs between B̂_i and B̂_{i+1}. By the bounded variation argument this total is then less than ε₀ r(T) T |S| / n. It follows then that Σ_i ||B̂_i V̂ − B̂_{i+1} V̂||_∞ ≤ ε₀ r(T)(T + 1)|S|. We now appeal to finite-horizon bounds on the error induced by Bellman backups [11] to show that ||V̂ − V||_∞ ≤ T ||B V̂ − B̂ V̂||_∞ ≤ T(T + 1) ε₀ r(T) |S|. Taking the expectation of V̂ with respect to the initial state distribution D, and setting m according to Lemma 4.2 with ε₀ = ε / (2|S| r(T) T(T + 1)), completes the proof.

Figure 2: (Left) Scaling of performance as a function of the number of trajectories seen, for the global-reward and local-reward algorithms (200 agents, 20% noise in observed rewards; performance vs. number of training examples). (Right) Scaling of the number of samples necessary to achieve near-optimal reward as a function of the number of agents.
5 Demonstration
We first present an experimental domain that hews closely to the theory in Section (3) above, to demonstrate the importance of local rewards. In our simple problem there are n = 400 independent agents, each of which chooses an action in {0, 1}. Each agent has a "correct" action that earns it reward R_i = 1 with probability 0.8, and reward 0 with probability 0.2. Equally, if the agent chooses the wrong action, it earns reward R_i = 1 with probability 0.2.
We compare two methods on this problem. Our first, global algorithm uses only the global rewards R, uses these to build a model of the local rewards, and finally solves the resulting estimated MDP exactly. The local reward functions are learnt by a least-squares procedure with basis functions for each agent. The second algorithm also learns a local reward function, but does so taking advantage of the local rewards it observes, as opposed to only the global signal. Figure (2) demonstrates the advantage of learning using the local reward signals.⁵ On the right in Figure (2), we compute the time required to achieve 1/4 of optimal reward for each algorithm, as a function of the number of agents.
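A compact re-creation of this experiment is sketched below (with a smaller n than the paper's 400, for speed). The global learner regresses per-agent rewards from the scalar average alone, with one indicator feature per (agent, action) pair; this feature choice is our guess at the "basis functions for each agent" and should be read as an assumption rather than the authors' exact setup. The local learner simply averages each agent's own observed rewards.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 300
correct = rng.integers(2, size=n)                 # each agent's hidden "right" action

def local_rewards(a):
    p = np.where(a == correct, 0.8, 0.2)          # P(R_i = 1) given the chosen action
    return (rng.random(n) < p).astype(float)

A = rng.integers(2, size=(trials, n))             # uniformly random exploration
R = np.array([local_rewards(a) for a in A])       # trials x n matrix of local rewards
G = R.mean(axis=1)                                # the scalar global reward signal

# Features: one indicator per (agent, action) pair, 2n columns in total.
X = np.zeros((trials, 2 * n))
X[np.arange(trials)[:, None], 2 * np.arange(n) + A] = 1.0

w_glob, *_ = np.linalg.lstsq(X, G, rcond=None)    # global learner: regress on G only
w_loc = np.array([[R[A[:, i] == a, i].mean()      # local learner: per-agent averages
                   for a in (0, 1)] for i in range(n)])

pick_glob = w_glob.reshape(n, 2).argmax(axis=1)
pick_loc = w_loc.argmax(axis=1)
print("global learner accuracy:", (pick_glob == correct).mean())
print("local  learner accuracy:", (pick_loc == correct).mean())
```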
In our next example, we consider a simple variant of the multi-agent SysAdmin problem [4].⁶ Again, we consider two algorithms: a global REINFORCE [9] learner, and a REINFORCE algorithm run using only local rewards, even though the local REINFORCE algorithm run in this way is not guaranteed to converge to the globally optimal (cooperative) solution. We note that the local algorithm learns much more quickly than using the global reward (Figure 3). The learning speed we observed for the global algorithm correlates well with the observations in [5] that the number of samples needed scales roughly linearly in the number of agents. The local algorithm continued to require essentially the same number of examples for all sizes used (up to over 100 agents) in our experiments.

⁵ A gradient-based model-free approach using the global reward signal was also tried, but its performance was significantly poorer than that of the two algorithms depicted in Figure (2, left).

⁶ In SysAdmin there is a network of computers that fail randomly. A computer is more likely to fail if a neighboring computer (arranged in a ring topology) fails. The goal is to reboot machines in such a fashion as to maximize the number of running computers.
Figure 3: REINFORCE applied to the multi-agent SysAdmin problem. "Local" refers to REINFORCE applied using only neighborhood (local) rewards, while "Global" refers to standard REINFORCE (applied to the global reward signal). (Left) Averaged reward performance as a function of the number of iterations for 10 agents. (Right) The performance for 20 agents.
References

[1] N. Alon and J. Spencer. The Probabilistic Method. Wiley, 2000.
[2] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 1999.
[3] Y. Chang, T. Ho, and L. Kaelbling. All learning is local: Multi-agent learning in global reward games. In Advances in NIPS 14, 2004.
[4] C. Guestrin, D. Koller, and R. Parr. Multi-agent planning with factored MDPs. In NIPS-14, 2002.
[5] C. Guestrin, M. Lagoudakis, and R. Parr. Coordinated reinforcement learning. In ICML, 2002.
[6] M. Kearns and D. Koller. Efficient reinforcement learning in factored MDPs. In IJCAI 16, 1999.
[7] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In UAI, 2001.
[8] M. Kearns, Y. Mansour, and A. Ng. Approximate planning in large POMDPs via reusable trajectories. (Extended version of paper in NIPS 12), 1999.
[9] L. Peshkin, K.-E. Kim, N. Meuleau, and L. Kaelbling. Learning to cooperate via policy search. In UAI 16, 2000.
[10] J. Schneider, W. Wong, A. Moore, and M. Riedmiller. Distributed value functions. In ICML, 1999.
[11] R. Williams and L. Baird. Tight performance bounds on greedy policies based on imperfect value functions. Technical report, Northeastern University, 1993.
2,150 | 2,952 | A Probabilistic Approach for Optimizing
Spectral Clustering
?
Rong Jin? , Chris Ding? , Feng Kang?
Lawrence Berkeley National Laboratory, Berkeley, CA 94720
?
Michigan State University, East Lansing , MI 48824
Abstract
Spectral clustering enjoys its success in both data clustering and semisupervised learning. But, most spectral clustering algorithms cannot
handle multi-class clustering problems directly. Additional strategies are
needed to extend spectral clustering algorithms to multi-class clustering problems. Furthermore, most spectral clustering algorithms employ
hard cluster membership, which is likely to be trapped by the local optimum. In this paper, we present a new spectral clustering algorithm,
named ?Soft Cut?. It improves the normalized cut algorithm by introducing soft membership, and can be efficiently computed using a bound
optimization algorithm. Our experiments with a variety of datasets have
shown the promising performance of the proposed clustering algorithm.
1
Introduction
Data clustering has been an active research area with a long history. Well-known clustering methods include the K-means methods (Hartigan & Wong., 1994), Gaussian Mixture
Model (Redner & Walker, 1984), Probabilistic Latent Semantic Indexing (PLSI) (Hofmann,
1999), and Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Recently, spectral clustering methods (Shi & Malik, 2000; Ng et al., 2001; Zha et al., 2002; Ding et al., 2001; Bach
& Jordan, 2004)have attracted more and more attention given their promising performance
in data clustering and simplicity in implementation. They treat the data clustering problem
as a graph partitioning problem. In its simplest form, a minimum cut algorithm is used to
minimize the weights (or similarities) assigned to the removed edges. To avoid unbalanced
clustering results, different objectives have been proposed, including the ratio cut (Hagen
& Kahng, 1991), normalized cut (Shi & Malik, 2000) and min-max cut (Ding et al., 2001).
To reduce the computational complexity, most spectral clustering algorithms use the relaxation approach, which maps discrete cluster memberships into continuous real numbers.
As a result, it is difficult to directly apply current spectral clustering algorithms to multiclass clustering problems. Various strategies (Shi & Malik, 2000; Ng et al., 2001; Yu &
Shi, 2003) have been used to extend spectral clustering algorithms to multi-class clustering
problems. One common approach is to first construct a low-dimension space for data representation using the smallest eigenvectors of a graph Laplacian that is constructed based on
the pair wise similarity of data. Then, a standard clustering algorithm, such as the K-means
method, is applied to cluster data points in the low-dimension space.
One problem with the above approach is how to determine the appropriate number of eigenvectors. Too small a number of eigenvectors will lead to an insufficient representation of the data, while too large a number of eigenvectors will bring a significant amount of noise into the data representation. Both cases will degrade the quality of clustering. Although it has been shown in (Ng et al., 2001) that the number of required eigenvectors is generally equal to the number of clusters, the analysis is valid only when data points of different clusters are well separated. As will be shown later, when data points are not well separated, the optimal number of eigenvectors can be different from the number of clusters. Another problem with the existing spectral clustering algorithms is that they are based on binary cluster membership and therefore are unable to express the uncertainty in data clustering. Compared to hard cluster membership, probabilistic membership is advantageous in that it is less likely to be trapped by local minima. One example is the Bayesian clustering method (Redner & Walker, 1984), which is usually more robust than the K-means method because of its soft cluster memberships. It is also advantageous to use probabilistic memberships when the cluster memberships are intermediate results that will be used by other processes, for example selective sampling in active learning (Jin & Si, 2004).
In this paper, we present a new spectral clustering algorithm, named "Soft Cut", that explicitly addresses the above two problems. It extends the normalized cut algorithm by introducing probabilistic membership of data points. By encoding membership of multiple clusters into a set of probabilities, the proposed clustering algorithm can be applied directly to multi-class clustering problems. Our empirical studies with a variety of datasets have shown that the soft cut algorithm can substantially outperform the normalized cut algorithm for multi-class clustering.

The rest of the paper is arranged as follows. Section 2 presents the related work. Section 3 describes the soft cut algorithm. Section 4 discusses the experimental results. Section 5 concludes this study with future work.
Related Work
The key idea of spectral clustering is to convert a clustering problem into a graph partitioning problem.
Let n be the number of data points to be clustered. Let W = [wi,j ]n?n be the weight
matrix where each wi,j is the similarity between two data points. For the convenience of
discussion, wi,i = 0 for all data points. Then, a clustering problem can be formulated into
the minimum cut problem, i.e.,
q?
=
arg
min
q?{?1,1}n
n
X
wi,j (qi ? qj )2 = qT Lq
(1)
i,j=1
where q = (q1 , q2 , ..., qn ) is a vector for binary memberships and each qi can be either ?1
or 1. L is the Laplacian matrix. It is defined asP
L = D ? W, where D = [di,i ]n?n is
n
a diagonal matrix with each element di,i = ?i,j j=1 wi,j . Directly solving the problem
in (1) requires combinatorial optimization, which is computationally expensive. Usually, a
n
relaxation approach (Chung, 1997)
Pn is 2used to replace the vector q ? {?1, 1} with a vector
n
q
? ? R under the constraint i=1 q?i = n. As a result of the relaxation, the approximate
solution to (1) is the second smallest eigenvector of Laplacian L.
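To make the relaxation concrete, the sketch below builds L = D − W for a toy similarity matrix and rounds the second smallest eigenvector of L by sign; the two-blob data and the zero threshold are illustrative assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated 2-d blobs as toy data (an assumption of this sketch).
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])

# Gaussian similarities with zero diagonal, as in the text (w_ii = 0).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W          # unnormalized Laplacian L = D - W
vals, vecs = np.linalg.eigh(L)          # eigenvalues returned in ascending order
fiedler = vecs[:, 1]                    # second smallest eigenvector (relaxed q)
labels = (fiedler > 0).astype(int)      # round the relaxed solution by sign
print(labels)
```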
One problem with the minimum cut approach is that it does not take into account the size of clusters, which can lead to clusters of unbalanced sizes. To resolve this problem, several different criteria have been proposed, including the ratio cut (Hagen & Kahng, 1991), normalized cut (Shi & Malik, 2000) and min-max cut (Ding et al., 2001). For example, in the normalized cut algorithm, the following objective is used:

    J_n(q) = C_{+,−}(q)/D_+(q) + C_{+,−}(q)/D_−(q)    (2)

where C_{+,−}(q) = Σ_{i,j=1}^n w_{i,j} δ(q_i, +) δ(q_j, −) and D_± = Σ_{i=1}^n δ(q_i, ±) Σ_{j=1}^n w_{i,j}. In the above objective, the cluster sizes D_± are used as denominators to avoid clusters of too small a size. Similar to the minimum cut approach, a relaxation approach is used to convert the problem in (2) into an eigenvector problem. For multi-class clustering, we can extend the objective in (2) into the following form:

    J_mc^{norm}(q) = Σ_{z=1}^K Σ_{z'≠z} C_{z,z'}(q) / D_z(q)    (3)

where K is the number of clusters, the vector q ∈ {1, 2, . . . , K}^n, C_{z,z'} = Σ_{i,j=1}^n δ(q_i, z) δ(q_j, z') w_{i,j}, and D_z = Σ_{i=1}^n Σ_{j=1}^n δ(q_i, z) w_{i,j}. However, efficiently finding the solution that minimizes (3) is rather difficult. In particular, a simple relaxation method cannot be applied directly here. In the past, several heuristic approaches (Shi & Malik, 2000; Ng et al., 2001; Yu & Shi, 2003) have been proposed for finding approximate solutions to (3). One common strategy is to first obtain the K smallest eigenvectors (excluding the one with zero eigenvalue) of the Laplacian L, and project the data points onto the low-dimensional space spanned by these K eigenvectors. Then, a standard clustering algorithm, such as the K-means method, is applied to cluster the data points in this low-dimensional space. In contrast to these approaches, the proposed spectral clustering algorithm deals with the multi-class clustering problem directly. It estimates the probabilities for each data point to be in different clusters simultaneously. Through the probabilistic cluster memberships, the proposed algorithm will be less likely to be trapped by local minima, and therefore will be more robust than the existing spectral clustering algorithms.
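The following sketch implements the common strategy just described, in the style of Ng et al. (2001): embed the data with the K smallest eigenvectors of a normalized Laplacian, row-normalize, and run K-means on the rows. The normalization details and the toy data are assumptions of the sketch; scikit-learn's KMeans is used for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_embed_cluster(W, K, seed=0):
    """K smallest eigenvectors of the normalized Laplacian, then K-means on the rows."""
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_isqrt @ W @ D_isqrt      # normalized Laplacian
    _, vecs = np.linalg.eigh(L_sym)                     # ascending eigenvalues
    U = vecs[:, :K]                                     # n x K spectral embedding
    U /= np.linalg.norm(U, axis=1, keepdims=True)       # row-normalize (Ng et al. style)
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(U)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(c, 0.3, (15, 2)) for c in (0.0, 3.0, 6.0)])
W = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
np.fill_diagonal(W, 0.0)
print(spectral_embed_cluster(W, K=3))
```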
3 Spectral Clustering with Soft Membership
In this section, we describe a new spectral clustering algorithm, named "Soft Cut", which extends the normalized cut algorithm by introducing probabilistic cluster membership. In the following, we will present a formal description of the soft cut algorithm, followed by the procedure that efficiently optimizes the related optimization problem.

3.1 Algorithm Description
First, notice that D_z in (3) can be expanded as D_z = Σ_{z'=1}^K C_{z,z'}. Thus, the objective function for multi-class clustering in (3) can be rewritten as:

    J_mc^{norm}(q) = Σ_{z=1}^K Σ_{z'≠z} C_{z,z'}(q)/D_z(q) = K − Σ_{z=1}^K C_{z,z}(q)/D_z(q)    (4)

Let J'_mc = Σ_{z=1}^K C_{z,z}(q)/D_z(q). Thus, instead of minimizing J_mc^{norm}, we can maximize J'_mc.
To extend the above objective function to a probabilistic framework, we introduce the probabilistic cluster membership. Let q_{z,i} denote the probability for the i-th data point to be in the z-th cluster. Let the matrix Q = [q_{z,i}]_{K×n} include all probabilities q_{z,i}. Using the probabilistic notations, we can rewrite C_{z,z'} and D_z as follows:

    C_{z,z'}(Q) = Σ_{i,j=1}^n q_{z,i} q_{z',j} w_{i,j},    D_z(Q) = Σ_{i,j=1}^n q_{z,i} w_{i,j}    (5)
Substituting the probabilistic expressions for C_{z,z'} and D_z into J'_mc, we have the following optimization problem for probabilistic spectral clustering:

    Q* = arg max_{Q ∈ R^{K×n}} J_prob(Q) = arg max_{Q ∈ R^{K×n}} Σ_{z=1}^K [ Σ_{i,j=1}^n q_{z,i} q_{z,j} w_{i,j} ] / [ Σ_{i,j=1}^n q_{z,i} w_{i,j} ]

    s.t. ∀ i ∈ [1..n], z ∈ [1..K]: q_{z,i} ≥ 0,  Σ_{z=1}^K q_{z,i} = 1    (6)
3.2 Optimization Procedure

In this subsection, we present a bound optimization algorithm (Salakhutdinov & Roweis, 2003) for efficiently finding the solution to (6). It maximizes the objective function in (6) iteratively. In each iteration, a concave lower bound is first constructed for the objective function based on the solution obtained from the previous iteration. Then, a new solution for the current iteration is obtained by maximizing the lower bound. The same procedure is repeated until the solution converges to a local maximum.

Let Q' = [q'_{z,i}]_{K×n} be the probabilities obtained in the previous iteration, and Q = [q_{z,i}]_{K×n} be the probabilities for the current iteration. Define

    Δ(Q, Q') = log ( J_prob(Q) / J_prob(Q') )
which is the logarithm of the ratio of the objective functions between two consecutive iterations. Using the concavity of the logarithm function, i.e., log(Σ_i p_i q_i) ≥ Σ_i p_i log(q_i) for a pdf {p_i}, we have Δ(Q, Q') lower bounded by the following expression:

    Δ(Q, Q') = log( Σ_{z=1}^K C_{z,z}(Q)/D_z(Q) ) − log( Σ_{z=1}^K C_{z,z}(Q')/D_z(Q') )
             ≥ Σ_{z=1}^K t_z [ log( C_{z,z}(Q)/C_{z,z}(Q') ) − log( D_z(Q)/D_z(Q') ) ]    (7)

where t_z is defined as:

    t_z = [ C_{z,z}(Q')/D_z(Q') ] / [ Σ_{z'=1}^K C_{z',z'}(Q')/D_{z'}(Q') ]    (8)
Now, the first term within the big bracket in (7), i.e., log( C_{z,z}(Q)/C_{z,z}(Q') ), can be further relaxed as:

    log( C_{z,z}(Q)/C_{z,z}(Q') ) = log( Σ_{i,j=1}^n [ q'_{z,i} q'_{z,j} w_{i,j} / C_{z,z}(Q') ] · [ q_{z,i} q_{z,j} / (q'_{z,i} q'_{z,j}) ] )
                                  ≥ 2 Σ_{i=1}^n ( Σ_{j=1}^n s_z^{i,j} ) log(q_{z,i}) − Σ_{i,j=1}^n s_z^{i,j} log(q'_{z,i} q'_{z,j})    (9)

where s_z^{i,j} is defined as:

    s_z^{i,j} = q'_{z,i} q'_{z,j} w_{i,j} / C_{z,z}(Q')    (10)
Meanwhile, using the inequality log x ≤ x − 1, we have log( D_z(Q)/D_z(Q') ) upper bounded by the following expression:

    log( D_z(Q)/D_z(Q') ) ≤ D_z(Q)/D_z(Q') − 1 = Σ_{i=1}^n q_{z,i} ( Σ_{j=1}^n w_{i,j}/D_z(Q') ) − 1    (11)
Putting together (7), (9), and (11), we have a concave lower bound for the objective function in (6), i.e.,

    log J_prob(Q) ≥ log J_prob(Q') + Δ_0(Q') + 2 Σ_{z=1}^K Σ_{i,j=1}^n t_z s_z^{i,j} log q_{z,i} − Σ_{z=1}^K Σ_{i,j=1}^n t_z q_{z,i} w_{i,j} / D_z(Q')    (12)

where Δ_0(Q') is defined as:

    Δ_0(Q') = −Σ_{z=1}^K t_z ( Σ_{i,j=1}^n s_z^{i,j} log(q'_{z,i} q'_{z,j}) − 1 )
The optimal solution that maximizes the lower bound in (12) can be computed by setting its derivative to zero, which leads to the following solution:

    q_{z,i} = [ 2 t_z Σ_{j=1}^n s_z^{i,j} ] / [ t_z Σ_{j=1}^n w_{i,j}/D_z(Q') + λ_i ]    (13)

where λ_i is a Lagrange multiplier that ensures Σ_{z=1}^K q_{z,i} = 1. It can be acquired by maximizing the following objective function:

    l(λ_i) = −λ_i + 2 Σ_{z=1}^K ( Σ_{j=1}^n s_z^{i,j} ) log( t_z Σ_{j=1}^n w_{i,j}/D_z(Q') + λ_i )    (14)

Since the above objective function is concave, we can apply a standard numerical procedure, such as Newton's method, to efficiently find the value of λ_i.
4 Experiment
In this section, we focus on examining the effectiveness of the proposed soft cut algorithm
for multi-class clustering. In particular, we will address the following two research questions:
1. How effective is the proposed algorithm for data clustering? We compare the
proposed soft cut algorithm to the normalized cut algorithm with various numbers
of eigenvectors.
2. How robust is the proposed algorithm for data clustering? We evaluate the robustness of clustering algorithms by examining their variance across multiple trials.
4.1 Experiment Design
Datasets. In order to extensively examine the effectiveness of the proposed soft cut algorithm, a variety of datasets are used in this experiment. They are:

• Text documents that are extracted from the 20 newsgroups corpus to form two five-class datasets, named "M5" and "L5". Each class contains 100 documents, and there are 500 documents in total.
Table 1: Datasets Description

Dataset    Description               #Class   #Instance   #Features
M5         Text documents            5        500         1000
L5         Text documents            5        500         1000
Pendigit   Pen-based handwriting     10       2000        16
Ribosome   Ribosome rDNA sequences   8        1907        27617
• Pendigit, which comes from the UCI data repository. It contains 2000 examples that belong to 10 different classes.

• Ribosomal sequences from the RDP project (http://rdp.cme.msu.edu/index.jsp). This dataset contains annotated rRNA sequences of the ribosome for 2000 different bacteria that belong to 10 different phyla (i.e., classes).

Table 1 provides the detailed information regarding each dataset.
Evaluation metrics. To evaluate the performance of different clustering algorithms, two different metrics are used:

• Clustering accuracy. For the datasets that have no more than five classes, clustering accuracy is used as the evaluation metric. To compute clustering accuracy, each automatically generated cluster is first aligned with a true class. The classification accuracy based on the alignment is then computed, and the clustering accuracy is defined as the maximum classification accuracy among all possible alignments.

• Normalized mutual information. For the datasets that have more than five classes, due to the expensive computation involved in finding the optimal alignment, we use the normalized mutual information (Banerjee et al., 2003) as the alternative evaluation metric. If T_u and T_l denote the cluster labels and true class labels assigned to data points, the normalized mutual information "nmi" is defined as

    nmi = 2 I(T_u, T_l) / (H(T_u) + H(T_l))

where I(T_u, T_l) stands for the mutual information between the clustering labels T_u and the true class labels T_l, and H(T_u) and H(T_l) are the entropy functions for T_u and T_l, respectively.

Each experiment was run 10 times with different initializations of the parameters. The averaged results, together with their variance, are used as the final evaluation metric.
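Both metrics are easy to compute; the sketch below finds the optimal cluster-to-class alignment with the Hungarian algorithm (scipy's linear_sum_assignment) and evaluates nmi directly from the contingency table. The tiny label vectors are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def contingency(y_true, y_pred):
    M = np.zeros((y_true.max() + 1, y_pred.max() + 1))
    for t, p in zip(y_true, y_pred):
        M[t, p] += 1
    return M

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over all cluster-to-class alignments (Hungarian algorithm)."""
    M = contingency(y_true, y_pred)
    rows, cols = linear_sum_assignment(-M)        # maximize matched counts
    return M[rows, cols].sum() / len(y_true)

def nmi(y_true, y_pred):
    """2 I(Tu, Tl) / (H(Tu) + H(Tl)), as defined in the text."""
    M = contingency(y_true, y_pred) / len(y_true)
    pt, pp = M.sum(1), M.sum(0)
    nz = M > 0
    I = (M[nz] * np.log(M[nz] / np.outer(pt, pp)[nz])).sum()
    H = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()
    return 2 * I / (H(pt) + H(pp))

y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
yhat = np.array([1, 1, 0, 0, 0, 0, 2, 2, 2])
print(clustering_accuracy(y, yhat), nmi(y, yhat))
```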
Implementation. We follow the paper (Ng et al., 2001) for implementing the normalized cut algorithm. A cosine similarity is used to measure the affinity between any two data points. Both the EM algorithm and the K-means method are used to cluster the data points that are projected into the low-dimensional space spanned by the smallest eigenvectors of a graph Laplacian.
4.2 Experiment (I): Effectiveness of the Soft Cut Algorithm
The clustering results of both the soft cut algorithm and the normalized cut algorithm are summarized in Table 2. In addition to the K-means algorithm, we also apply the EM clustering algorithm to the normalized cut algorithm. In this experiment, the number of eigenvectors used for the normalized cut algorithms is equal to the number of clusters.
Table 2: Clustering results for different clustering methods. Clustering accuracy is used for the datasets "L5" and "M5" as the evaluation metric, and normalized mutual information is used for "Pendigit" and "Ribosome".

           Soft Cut      Normalized Cut (K-means)   Normalized Cut (EM)
M5         89.2 ± 1.3    83.2 ± 8.8                 62.4 ± 5.6
L5         69.2 ± 2.7    64.2 ± 4.9                 45.1 ± 4.8
Pendigit   56.3 ± 3.8    46.0 ± 6.4                 52.8 ± 2.0
Ribosome   69.7 ± 2.9    62.2 ± 9.1                 63.2 ± 3.8
Table 3: Clustering accuracy for the normalized cut with embedding in the eigenspace of K eigenvectors. K-means is used.

#Eigenvectors   M5           L5           Pendigit     Ribosome
K               83.2 ± 8.8   64.1 ± 4.9   46.0 ± 6.4   62.2 ± 9.1
K + 1           77.6 ± 8.6   69.6 ± 6.7   43.3 ± 9.1   65.9 ± 5.8
K + 2           79.7 ± 8.5   64.1 ± 5.7   41.6 ± 9.3   63.4 ± 4.8
K + 3           80.2 ± 6.6   61.4 ± 5.8   42.9 ± 9.6   67.2 ± 7.6
K + 4           74.9 ± 9.2   59.1 ± 4.7   47.5 ± 3.7   60.7 ± 8.4
K + 5           70.5 ± 5.7   66.1 ± 4.7   39.2 ± 9.3   63.9 ± 8.2
K + 6           75.5 ± 8.6   61.9 ± 4.7   43.4 ± 8.3   63.5 ± 10.4
K + 7           75.8 ± 7.5   59.7 ± 5.6   46.8 ± 7.3   56.6 ± 10.7
K + 8           73.5 ± 6.6   61.2 ± 4.7   49.8 ± 8.9   54.3 ± 7.2
First, comparing to both normalized cut algorithms, we see that the proposed clustering algorithm substantially outperforms the normalized cut algorithms on all datasets. Second, comparing to the normalized cut algorithm using the K-means method, we see that the soft cut algorithm has smaller variance in its clustering results. This can be explained by the fact that the K-means algorithm uses binary cluster membership and is therefore likely to be trapped by local optima. As indicated in Table 2, if we replace the K-means algorithm with the EM algorithm in the normalized cut algorithm, the variance in the clustering results is generally reduced, but at the price of a degradation in clustering performance. Based on the above observations, we conclude that the soft cut algorithm appears to be effective and robust for multi-class clustering.
4.3 Experiment (II): Normalized Cut Using Different Numbers of Eigenvectors
One potential reason why the normalized cut algorithm performs worse than the proposed algorithm is that the number of clusters may not be the optimal number of eigenvectors. To examine this issue, we test the normalized cut algorithm with different numbers of eigenvectors. The K-means method is used for clustering the eigenvectors. The results of the normalized cut algorithm using different numbers of eigenvectors are summarized in Table 3. The best performance is highlighted in bold font.
First, we clearly see that the best clustering results do not necessarily occur when the number of eigenvectors is exactly equal to the number of clusters. In fact, for three out of four cases, the best performance is achieved when the number of eigenvectors is larger than the number of clusters. This result indicates that the choice of the number of eigenvectors can have a significant impact on the performance of clustering. Second, comparing the results in Table 3 to the results in Table 2, we see that the soft cut algorithm is still able to outperform the normalized cut algorithm even with the optimal number of eigenvectors. In general, since spectral clustering was originally designed for binary-class classification, it requires an extra step when it is extended to multi-class clustering problems. Hence, the resulting solutions are usually suboptimal. In contrast, the soft cut algorithm directly targets multi-class clustering problems, and thus is able to achieve better performance than the normalized cut algorithm.
5 Conclusion

In this paper, we proposed a novel probabilistic algorithm for spectral clustering, called the "soft cut" algorithm. It introduces probabilistic membership into the normalized cut algorithm and directly targets multi-class clustering problems. Our empirical studies with a number of datasets have shown that the proposed algorithm outperforms the normalized cut algorithm considerably. In the future, we plan to extend this work to other applications such as image segmentation.
References

Bach, F. R., & Jordan, M. I. (2004). Learning spectral clustering. Advances in Neural Information Processing Systems 16.

Banerjee, A., Dhillon, I., Ghosh, J., & Sra, S. (2003). Generative model-based clustering of directional data. Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2003).

Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. J. Mach. Learn. Res., 3, 993-1022.

Chung, F. (1997). Spectral graph theory. Amer. Math. Society.

Ding, C., He, X., Zha, H., Gu, M., & Simon, H. (2001). A min-max cut algorithm for graph partitioning and data clustering. Proc. IEEE Int'l Conf. Data Mining.

Hagen, L., & Kahng, A. (1991). Fast spectral methods for ratio cut partitioning and clustering. Proceedings of the IEEE International Conference on Computer Aided Design (pp. 10-13).

Hartigan, J., & Wong, M. (1994). A K-means clustering algorithm. Appl. Statist., 28, 100-108.

Hofmann, T. (1999). Probabilistic latent semantic indexing. Proceedings of the 22nd Annual ACM Conference on Research and Development in Information Retrieval (pp. 50-57). Berkeley, California.

Jin, R., & Si, L. (2004). A Bayesian approach toward active learning for collaborative filtering. Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (pp. 278-285). Banff, Canada: AUAI Press.

Ng, A., Jordan, M., & Weiss, Y. (2001). On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems 14.

Redner, R. A., & Walker, H. F. (1984). Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26, 195-239.

Salakhutdinov, R., & Roweis, S. T. (2003). Adaptive overrelaxed bound optimization methods. Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003) (pp. 664-671).

Shi, J., & Malik, J. (2000). Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 888-905.

Yu, S. X., & Shi, J. (2003). Multiclass spectral clustering. Proceedings of the Ninth IEEE International Conference on Computer Vision. Nice, France.

Zha, H., He, X., Ding, C., Gu, M., & Simon, H. (2002). Spectral relaxation for k-means clustering. Advances in Neural Information Processing Systems 14.
2,151 | 2,953 | Preconditioner Approximations for
Probabilistic Graphical Models
Pradeep Ravikumar John Lafferty
School of Computer Science
Carnegie Mellon University
Abstract
We present a family of approximation techniques for probabilistic graphical models, based on the use of graphical preconditioners developed in
the scientific computing literature. Our framework yields rigorous upper
and lower bounds on event probabilities and the log partition function
of undirected graphical models, using non-iterative procedures that have
low time complexity. As in mean field approaches, the approximations
are built upon tractable subgraphs; however, we recast the problem of optimizing the tractable distribution parameters and approximate inference
in terms of the well-studied linear systems problem of obtaining a good
matrix preconditioner. Experiments are presented that compare the new
approximation schemes to variational methods.
1
Introduction
Approximate inference techniques are enabling sophisticated new probabilistic models to
be developed and applied to a range of practical problems. One of the primary uses of
approximate inference is to estimate the partition function and event probabilities for undirected graphical models, which are natural tools in many domains, from image processing
to social network modeling. A central challenge is to improve the accuracy of existing approximation methods, and to derive rigorous rather than heuristic bounds on probabilities in
such graphical models. In this paper, we present a simple new approach to the approximate
inference problem, based upon non-iterative procedures that have low time complexity. We
follow the variational mean field intuition of focusing on tractable subgraphs, however we
recast the problem of optimizing the tractable distribution parameters as a generalized linear system problem. In this way, the task of deriving a tractable distribution conveniently
reduces to the well-studied problem of obtaining a good preconditioner for a matrix (Boman and Hendrickson, 2003). This framework has the added advantage that tighter bounds
can be obtained by reducing the sparsity of the preconditioners, at the expense of increasing
the time complexity for computing the approximation.
In the following section we establish some notation and background. In Section 3, we
outline the basic idea of our proposed framework, and explain how to use preconditioners
for deriving tractable approximate distributions. In Sections 3.1 and 4, we then describe
the underlying theory, which we call the generalized support theory for graphical models.
In Section 5 we present experiments that compare the new approximation schemes to some
of the standard variational and optimization based methods.
2
Notation and Background
Consider a graph G = (V, E), where V denotes the set of nodes and E denotes the set
of edges. Let Xi be a random variable associated with node i, for i ? V , yielding a
random vector X = {X1 , . . . , Xn }. Let ? = {?? , ? ? I} denote the set of potential
functions or sufficient statistics, for a set I of cliques in G. Associated with ? is a vector of
parameters ? = {?? , ? ? I}. With this notation, the exponential family of distributions of
X, associated with ? and G, is given by
!
X
p(x; ?) = exp
?? ?? ? ?(?) .
(1)
?
For traditional reasons through connections with statistical physics, Z = exp ?(?) is called
the partition function. As discussed in (Yedidia et al., 2001), at the expense in increasing
the state space one can assume without loss of generality that the graphical model is a
pairwise Markov random field, i.e., the set of cliques I is the set of edges {(s, t) ? E}.
We shall assume a pairwise random field, and thus can express the potential function and
parameter vectors in more compact form as matrices:
    Θ := [θ_{st}]_{n×n},    φ(x) := [φ_{st}(x_s, x_t)]_{n×n},    (2)

where Θ collects the parameters θ_{11}, . . . , θ_{nn} and φ(x) collects the sufficient statistics φ_{11}(x_1, x_1), . . . , φ_{nn}(x_n, x_n).
In the following we will denote the trace of the product of two matrices A and B by the inner product ⟨⟨A, B⟩⟩. Assuming that each X_i is finite-valued, the partition function Z(Θ) is then given by Z(Θ) = Σ_x exp ⟨⟨Θ, φ(x)⟩⟩. The computation of Z(Θ) has a complexity exponential in the tree-width of the graph G and hence is intractable for large graphs. Our goal is to obtain rigorous upper and lower bounds for this partition function, which can then be used to obtain rigorous upper and lower bounds for general event probabilities; this is discussed further in (Ravikumar and Lafferty, 2004).
2.1 Preconditioners in Linear Systems
Consider a linear system, Ax = c, where the variable x is n-dimensional, and A is an
n × n matrix with m non-zero entries. Solving for x via direct methods such as Gaussian
elimination has a computational complexity O(n³), which is impractical for large values
of n. Multiplying both sides of the linear system by the inverse of an invertible matrix
B, we get an equivalent 'preconditioned' system, B⁻¹Ax = B⁻¹c. If B is similar to A,
B⁻¹A is in turn similar to I, the identity matrix, making the preconditioned system easier
to solve. Such an approximating matrix B is called a preconditioner.
The computational complexity of preconditioned conjugate gradient is given by

T(A) = √κ(A, B) · ( m + T(B) ) · log(1/ε),   (3)

where T(A) is the time required for an ε-approximate solution; κ(A, B) is the condition
number of A and B, which intuitively corresponds to the quality of the approximation B;
and T(B) is the time required to solve By = c.
Recent developments in the theory of preconditioners are in part based on support graph
theory, where the linear system matrix is viewed as the Laplacian of a graph, and graph-based techniques can be used to obtain good approximations. While these methods require diagonally dominant matrices (A_ii ≥ Σ_{j≠i} |A_ij|), they yield 'ultra-sparse' (tree
plus a constant number of edges) preconditioners with a low condition number. In our
experiments, we use two elementary tree-based preconditioners in this family, Vaidya's
Spanning Tree preconditioner (Vaidya, 1990), and Gremban-Miller's Support Tree preconditioner (Gremban, 1996).
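To make the role of the preconditioner concrete, here is a minimal sketch using SciPy's conjugate gradient with a simple Jacobi (diagonal) preconditioner on a diagonally dominant system; the matrix is our own toy example, and the Jacobi preconditioner merely stands in for the tree-based preconditioners discussed here:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# A diagonally dominant, Laplacian-like system matrix (illustrative choice)
A = sp.diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
c = np.ones(n)

# Jacobi preconditioner: B = diag(A); B^{-1} applied via a LinearOperator
M = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())

x, info = spla.cg(A, c, M=M)
assert info == 0  # converged
print(np.linalg.norm(A @ x - c))  # residual of the epsilon-approximate solution
```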
3
Graphical Model Preconditioners
Our proposed framework follows the generalized mean field intuition of looking at sparse
graph approximations of the original graph, but solving a different optimization problem.
We begin by outlining the basic idea, and then develop the underlying theory.
Consider the graphical model with graph G, potential-function matrix φ(x), and parameter
matrix θ. For purposes of intuition, think of the graphical model 'energy' ⟨⟨θ, φ(x)⟩⟩ as
the matrix norm xᵀθx. We would like to obtain a sparse approximation B for θ. If B
approximates θ well, then the condition number κ is small:

κ(θ, B) = max_x ( xᵀθx / xᵀBx ) / min_x ( xᵀθx / xᵀBx ) = λ_max(θ, B) / λ_min(θ, B).   (4)
This suggests the following procedure for approximate inference. First, choose a matrix B
that minimizes the condition number with θ (rather than KL divergence as in mean-field).
Then, scale B appropriately, as detailed in the following sections. Finally, use the scaled
matrix B as the parameter matrix for approximate inference. Note that if B corresponds to
a tree, approximate inference has linear time complexity.
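A toy rendering of this procedure (an illustration under the assumptions of binary states and ⟨⟨θ, φ(x)⟩⟩ = xᵀθx, with a hand-picked chain-structured B rather than an optimized preconditioner; the scaling step follows Lemma 3.5 and Propositions 3.3 and 3.4 below) is:

```python
import itertools
import numpy as np

def generalized_eigs(A, B, states=(0, 1)):
    """lambda~_min and lambda~_max over all states x with <<B, phi(x)>> != 0."""
    n = A.shape[0]
    ratios = []
    for x in itertools.product(states, repeat=n):
        x = np.array(x, dtype=float)
        b = x @ B @ x
        if b != 0:
            ratios.append((x @ A @ x) / b)
    return min(ratios), max(ratios)

rng = np.random.default_rng(1)
theta = rng.uniform(-1, 0, size=(5, 5))
theta = np.triu(theta, 1) + np.triu(theta, 1).T      # symmetric, zero diagonal
B = (np.eye(5) + np.diag(0.3 * np.ones(4), 1)
     + np.diag(0.3 * np.ones(4), -1))                # a chain (tree) approximation
lmin, lmax = generalized_eigs(theta, B)
theta_U = lmax * B   # upper bound parameter matrix
theta_L = lmin * B   # lower bound parameter matrix
```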
3.1
Generalized Eigenvalue Bounds
Given a graphical model with graph G, potential-function matrix φ(x), and parameter
matrix θ, our goal is to obtain parameter matrices θ_U and θ_L, corresponding to sparse
graph approximations of G, such that

Z(θ_L) ≤ Z(θ) ≤ Z(θ_U).   (5)

That is, the partition functions of the sparse graph parameter matrices θ_U and θ_L are upper
and lower bounds, respectively, of the partition function of the original graph. However,
we will instead focus on a seemingly much stronger condition; in particular, we will look
for θ_L and θ_U that satisfy

⟨⟨θ_L, φ(x)⟩⟩ ≤ ⟨⟨θ, φ(x)⟩⟩ ≤ ⟨⟨θ_U, φ(x)⟩⟩   (6)

for all x. By monotonicity of exp, this stronger condition implies condition (5) on the
partition function, by summing over the values of X. However, this stronger condition will
give us greater flexibility, and rigorous bounds for general event probabilities since then

exp⟨⟨θ_L, φ(x)⟩⟩ / Z(θ_U) ≤ p(x; θ) ≤ exp⟨⟨θ_U, φ(x)⟩⟩ / Z(θ_L).   (7)
In contrast, while variational methods give bounds on the log partition function, the derived
bounds on general event probabilities via the variational parameters are only heuristic.
Let S be a set of sparse graphs; for example, S may be the set of all trees. Focusing on the
upper bound, we for now would like to obtain a graph G′ ∈ S with parameter matrix B,
which approximates G, and whose partition function upper bounds the partition function
of the original graph. Following (6), we require

⟨⟨θ, φ(x)⟩⟩ ≤ ⟨⟨B, φ(x)⟩⟩, such that G(B) ∈ S,   (8)

where G(B) denotes the graph corresponding to the parameter matrix B. Now, we would
like the distribution corresponding to B to be as close as possible to the distribution corresponding to θ; that is, ⟨⟨B, φ(x)⟩⟩ should not only upper bound ⟨⟨θ, φ(x)⟩⟩ but should be
close to it. The distance measure we use for this is the minimax distance. In other words,
while the upper bound requires that

⟨⟨θ, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ ≤ 1,   (9)

we would like

min_x ⟨⟨θ, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩   (10)

to be as high as possible. Expressing these desiderata in the form of an optimization problem, we have

B* = arg max_{B: G(B)∈S} min_x ⟨⟨θ, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩, such that ⟨⟨θ, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ ≤ 1.
Before solving this problem, we first make some definitions, which are generalized versions
of standard concepts in linear systems theory.
Definition 3.1. For a pairwise Markov random field with potential function matrix φ(x),
the generalized eigenvalues of a pair of parameter matrices (A, B) are defined as

λ̃_max(A, B) = max_{x: ⟨⟨B,φ(x)⟩⟩≠0} ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩   (11)

λ̃_min(A, B) = min_{x: ⟨⟨B,φ(x)⟩⟩≠0} ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩.   (12)
Note that

λ̃_max(A, βB) = max_{x: ⟨⟨βB,φ(x)⟩⟩≠0} ⟨⟨A, φ(x)⟩⟩ / ⟨⟨βB, φ(x)⟩⟩   (13)

 = (1/β) max_{x: ⟨⟨B,φ(x)⟩⟩≠0} ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ = β⁻¹ λ̃_max(A, B).   (14)
We state the basic properties of the generalized eigenvalues in the following lemma.
Lemma 3.2. The generalized eigenvalues satisfy

λ̃_min(A, B) ≤ ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ ≤ λ̃_max(A, B)   (15)

λ̃_max(A, βB) = β⁻¹ λ̃_max(A, B)   (16)

λ̃_min(A, βB) = β⁻¹ λ̃_min(A, B)   (17)

λ̃_min(A, B) = 1 / λ̃_max(B, A).   (18)
In the following, we will use A to generically denote the parameter matrix θ of the model.
We can now rewrite the optimization problem for the upper bound, stated above, as

(Problem Π_1)   max_{B: G(B)∈S} λ̃_min(A, B), such that λ̃_max(A, B) ≤ 1.   (19)
We shall express the optimal solution of Problem Π_1 in terms of the optimal solution of a
companion problem. Towards that end, consider the optimization problem

(Problem Π_2)   min_{C: G(C)∈S} λ̃_max(A, C) / λ̃_min(A, C).   (20)

The following proposition shows the sense in which these problems are equivalent.
Proposition 3.3. If Ĉ attains the optimum in Problem Π_2, then C̃ = λ̃_max(A, Ĉ) Ĉ attains
the optimum of Problem Π_1.
Proof. For any feasible solution B of Problem Π_1, we have

λ̃_min(A, B) ≤ λ̃_min(A, B) / λ̃_max(A, B)   (since λ̃_max(A, B) ≤ 1)   (21)

 ≤ λ̃_min(A, Ĉ) / λ̃_max(A, Ĉ)   (since Ĉ is the optimum of Problem Π_2)   (22)

 = λ̃_min( A, λ̃_max(A, Ĉ) Ĉ )   (from Lemma 3.2)   (23)

 = λ̃_min(A, C̃).   (24)

Thus, C̃ upper bounds all feasible solutions in Problem Π_1. However, it itself is a feasible
solution, since

λ̃_max(A, C̃) = λ̃_max( A, λ̃_max(A, Ĉ) Ĉ ) = λ̃_max(A, Ĉ) / λ̃_max(A, Ĉ) = 1   (25)

from Lemma 3.2. Thus, C̃ attains the maximum in the upper bound Problem Π_1.
The analysis for obtaining an upper bound parameter matrix B for a given parameter matrix
A carries over for the lower bound; we need to replace a maximin problem with a minimax
problem. For the lower bound, we want a matrix B such that

B* = arg min_{B: G(B)∈S} max_{x: ⟨⟨B,φ(x)⟩⟩≠0} ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩, such that ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ ≥ 1.   (26)

This leads to the following lower bound optimization problem.

(Problem Π_3)   min_{B: G(B)∈S} λ̃_max(A, B), such that λ̃_min(A, B) ≥ 1.   (27)

The proof of the following statement closely parallels the proof of Proposition 3.3.

Proposition 3.4. If Ĉ attains the optimum in Problem Π_2, then C̄ = λ̃_min(A, Ĉ) Ĉ attains
the optimum of the lower bound Problem Π_3.
Finally, we state the following basic lemma, whose proof is easily verified.
Lemma 3.5. For any pair of parameter matrices (A, B), we have

⟨⟨ λ̃_min(A, B) B, φ(x) ⟩⟩ ≤ ⟨⟨ A, φ(x) ⟩⟩ ≤ ⟨⟨ λ̃_max(A, B) B, φ(x) ⟩⟩.   (28)

3.2
Main Procedure
We now have in place the machinery necessary to describe the procedure for solving the
main problem in equation (6), to obtain upper and lower bound matrices for a graphical
model. Lemma 3.5 shows how to obtain upper and lower bound parameter matrices with
respect to any matrix B, given a parameter matrix A, by solving a generalized eigenvalue
problem. Propositions 3.3 and 3.4 tell us, in principle, how to obtain the optimal such
upper and lower bound matrices. We thus have the following procedure. First, obtain a
parameter matrix C such that G(C) ∈ S, which minimizes λ̃_max(θ, C)/λ̃_min(θ, C). Then
λ̃_max(θ, C) C gives the optimal upper bound parameter matrix and λ̃_min(θ, C) C gives the
optimal lower bound parameter matrix. However, as things stand, this recipe appears to
be even more challenging to work with than the generalized mean field procedures. The
difficulty lies in obtaining the matrix C. In the following section we offer a series of
relaxations that help to simplify this task.
4
Generalized Support Theory for Graphical Models
In what follows, we begin by assuming that the potential function matrix is positive semidefinite, φ(x) ⪰ 0, and later extend our results to general φ.
Definition 4.1. For a pairwise MRF with potential function matrix φ(x) ⪰ 0, the generalized support number of a pair of parameter matrices (A, B), where B ⪰ 0, is

σ̃(A, B) = min{ τ ∈ ℝ | ⟨⟨τB, φ(x)⟩⟩ ≥ ⟨⟨A, φ(x)⟩⟩ for all x }.   (29)

The generalized support number can be thought of as the 'number of copies' τ of B required to 'support' A so that ⟨⟨τB − A, φ(x)⟩⟩ ≥ 0. The usefulness of this definition is
demonstrated by the following result.

Proposition 4.2. If B ⪰ 0 then λ̃_max(A, B) ≤ σ̃(A, B).
Proof. From the definition of the generalized support number for a graphical model,
we have that ⟨⟨ σ̃(A, B) B − A, φ(x) ⟩⟩ ≥ 0. Now, since we assume that φ(x) ⪰ 0, if
also B ⪰ 0 then ⟨⟨B, φ(x)⟩⟩ ≥ 0. Therefore, it follows that ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ ≤ σ̃(A, B), and
thus

λ̃_max(A, B) = max_x ⟨⟨A, φ(x)⟩⟩ / ⟨⟨B, φ(x)⟩⟩ ≤ σ̃(A, B),   (30)

giving the statement of the proposition.
This leads to our first relaxation of the generalized eigenvalue bound for a model. From
Lemma 3.2 and Proposition 4.2 we see that

λ̃_max(A, B) / λ̃_min(A, B) = λ̃_max(A, B) λ̃_max(B, A) ≤ σ̃(A, B) σ̃(B, A).   (31)

Thus, this result suggests that to approximate the graphical model (θ, φ) we can search for
a parameter matrix B*, with corresponding simple graph G(B*) ∈ S, such that

B* = arg min_B σ̃(θ, B) σ̃(B, θ).   (32)
While this relaxation may lead to effective bounds, we will now go further, to derive an
additional relaxation that relates our generalized graphical model support number to the
'classical' support number.
Proposition 4.3. For a potential function matrix φ(x) ⪰ 0, σ̃(A, B) ≤ σ(A, B), where
σ(A, B) = min{ τ | (τB − A) ⪰ 0 }.

Proof. Since σ(A, B)B − A ⪰ 0 by definition and φ(x) ⪰ 0 by assumption, we have
that ⟨⟨ σ(A, B)B − A, φ(x) ⟩⟩ ≥ 0. Therefore, σ̃(A, B) ≤ σ(A, B) from the definition of the
generalized support number.
The above result reduces the problem of approximating a graphical model to the problem
of minimizing classical support numbers, the latter problem being well studied in the scientific computing literature (Boman and Hendrickson, 2003; Bern et al., 2001), where the
expression σ(A, C) σ(C, A) is called the condition number, and a matrix that minimizes
it within a simple family of graphs is called a preconditioner. We can thus plug in any
algorithm for finding a sparse preconditioner for θ, carrying out the optimization

B* = arg min_B σ(θ, B) σ(B, θ)   (33)

and then use that matrix B* in our basic procedure.
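For symmetric positive definite matrices, the classical condition number σ(A, B) σ(B, A) can be read off the ordinary generalized eigenvalues; the sketch below is our illustration only, with a crude diagonal approximation standing in for an actual Vaidya preconditioner:

```python
import numpy as np
from scipy.linalg import eigh

def condition_number(A, B):
    """sigma(A,B) * sigma(B,A) via generalized eigenvalues of A v = w B v,
    assuming A and B are symmetric positive definite."""
    w = eigh(A, B, eigvals_only=True)
    return w.max() / w.min()

n = 6
L = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
A = L + 0.5 * np.eye(n)          # path-graph Laplacian made positive definite
B = np.diag(np.diag(A))          # crude "sparse" approximation of A
print(condition_number(A, B))
```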
One example is Vaidya's preconditioner (Vaidya, 1990), which is essentially the maximum
spanning tree of the graph. Another is the support tree of Gremban (1996), which introduces Steiner nodes, in this case auxiliary nodes introduced via a recursive partitioning
of the graph. We present experiments with these basic preconditioners in the following
section.
Before turning to the experiments, we comment that our generalized support number analysis assumed that the potential function matrix φ(x) was positive semi-definite. The case
when it is not can be handled as follows. We first add a large positive diagonal matrix D
so that φ̃(x) = φ(x) + D ⪰ 0. Then, for a given parameter matrix θ, we use the above
machinery to get an upper bound parameter matrix B such that

⟨⟨A, φ(x) + D⟩⟩ ≤ ⟨⟨B, φ(x) + D⟩⟩ ⟹ ⟨⟨A, φ(x)⟩⟩ ≤ ⟨⟨B, φ(x)⟩⟩ + ⟨⟨B − A, D⟩⟩.   (34)

Exponentiating and summing both sides over x, we then get the required upper bound for
the parameter matrix A; the same can be done for the lower bound.
5
Experiments
As the previous sections detailed, the preconditioner based bounds are in principle quite
easy to compute: we compute a sparse preconditioner for the parameter matrix (typically O(n) to O(n³)) and use the preconditioner as the parameter matrix for the bound
computation (which is linear if the preconditioner matrix corresponds to a tree). This
yields a simple, non-iterative deterministic procedure as compared to the more complex
propagation-based or iterative update procedures. In this section we evaluate these bounds
on small graphical models for which exact answers can be readily computed, and compare
the bounds to variational approximations.
We show simulation results averaged over a randomly generated set of graphical models.
The graphs used were 2D grid graphs, and the edge potentials were selected according to a
uniform distribution Uniform(−2d_coup, 0) for various coupling strengths d_coup. We report
the relative error, (bound − log-partition-function)/log-partition-function.
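For reference, a sketch of how such a random grid model might be generated (grid size, seed and the dense θ representation are our choices, not stated in the paper) is:

```python
import numpy as np

def random_grid_model(side, d_coup, seed=0):
    """theta for a side x side grid with Uniform(-2*d_coup, 0) edge couplings."""
    rng = np.random.default_rng(seed)
    n = side * side
    theta = np.zeros((n, n))
    def idx(r, c):
        return r * side + c
    for r in range(side):
        for c in range(side):
            for dr, dc in ((0, 1), (1, 0)):      # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < side and cc < side:
                    w = rng.uniform(-2 * d_coup, 0)
                    theta[idx(r, c), idx(rr, cc)] = w
                    theta[idx(rr, cc), idx(r, c)] = w
    return theta

theta = random_grid_model(side=4, d_coup=1.0)
# relative error = (bound - log Z) / log Z, with log Z exact on small grids
```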
As a baseline, we use the mean field and structured mean field methods for the lower bound,
and the Wainwright et al. (2003) tree-reweighted belief propagation approximation for the
upper bound. For the preconditioner based bounds, we use two very simple preconditioners: (a) Vaidya's maximum spanning tree preconditioner (Vaidya, 1990), which assumes the
input parameter matrix to be a Laplacian, and (b) Gremban's (1996) support tree preconditioner, which also gives a sparse parameter matrix corresponding to a tree, with Steiner
(auxiliary) nodes. To compute bounds over these larger graphs with Steiner nodes we average an internal node over its children; this is the technique used with such preconditioners
for solving linear systems. We note that these preconditioners are quite basic, and the use
of better preconditioners (yielding a better condition number) has the potential to achieve
much better bounds, as shown in Propositions 3.3 and 3.4. We also reiterate that while our
approach can be used to derive bounds on event probabilities, the variational methods yield
bounds only for the partition function, and only apply heuristically to estimating simple
event probabilities such as marginals.
As the plots in Figure 1 show, even for the simple preconditioners used, the new bounds
are quite close to the actual values, outperforming the mean field method and giving comparable results to the tree-reweighted belief propagation method. The spanning tree preconditioner provides a good lower bound, while the support tree preconditioner provides a
good upper bound, however not as tight as the bound obtained using tree-reweighted belief propagation. Although we cannot compute the exact solution for large graphs, we can
Figure 1: Comparison of lower bounds (top left), and upper bounds (top right) for small grid graphs, and lower bounds for grid graphs of increasing size (left). The top panels plot average relative error against coupling strength, comparing the spanning tree preconditioner with structured mean field and mean field (lower bounds), and the support tree preconditioner with tree-reweighted BP (upper bounds); the remaining panel plots the lower bound on the partition function against the number of nodes in the graph.
compare bounds. The bottom plot of Figure 1 compares lower bounds for graphs with up
to 900 nodes; a larger bound is necessarily tighter, and the preconditioner bounds are seen
to outperform mean field.
Acknowledgments
We thank Gary Miller for helpful discussions. Research supported in part by NSF grants
IIS-0312814 and IIS-0427206.
References
M. Bern, J. R. Gilbert, B. Hendrickson, N. Nguyen, and S. Toledo. Support-graph preconditioners.
Submitted to SIAM J. Matrix Anal. Appl., 2001.
E. G. Boman and B. Hendrickson. Support theory for preconditioning. SIAM Journal on Matrix
Analysis and Applications, 25, 2003.
K. Gremban. Combinatorial preconditioners for sparse, symmetric, diagonally dominant linear systems. Ph.D. Thesis, Carnegie Mellon University, 1996.
P. Ravikumar and J. Lafferty. Variational Chernoff bounds for graphical models. Proceedings of
Uncertainty in Artificial Intelligence (UAI), 2004.
P. M. Vaidya. Solving linear equations with symmetric diagonally dominant matrices by constructing
good preconditioners. 1990. Unpublished manuscript, UIUC.
M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree-reweighted belief propagation and approximate ML estimation by pseudo-moment matching. 9th Workshop on Artificial Intelligence and
Statistics, 2003.
J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations.
IJCAI 2001 Distinguished Lecture track, 2001.
| 2953 |@word version:1 norm:1 stronger:3 heuristically:1 simulation:1 carry:1 moment:1 series:1 existing:1 steiner:3 readily:1 john:1 partition:15 plot:2 update:1 intelligence:2 selected:1 provides:2 node:9 direct:1 pairwise:4 uiuc:1 freeman:1 actual:1 increasing:3 begin:2 estimating:1 notation:3 underlying:2 what:1 minimizes:3 developed:2 finding:1 impractical:1 pseudo:1 scaled:1 partitioning:1 grant:1 before:2 positive:3 plus:1 studied:3 suggests:2 challenging:1 appl:1 range:1 averaged:1 practical:1 acknowledgment:1 recursive:1 definite:1 procedure:10 thought:1 matching:1 word:1 get:3 cannot:1 close:3 gilbert:1 equivalent:2 deterministic:1 demonstrated:1 go:1 subgraphs:2 deriving:2 exact:2 us:1 bottom:1 intuition:3 complexity:7 carrying:1 solving:7 rewrite:1 tight:1 upon:2 preconditioning:1 easily:1 aii:1 various:1 describe:2 effective:1 artificial:2 tell:1 whose:2 heuristic:2 quite:3 valued:1 solve:2 larger:2 gremban:5 statistic:2 think:1 itself:1 seemingly:1 advantage:1 eigenvalue:6 product:2 flexibility:1 achieve:1 recipe:1 ijcai:1 optimum:5 help:1 derive:3 develop:1 coupling:3 school:1 auxiliary:2 implies:1 closely:1 elimination:1 dii:3 require:2 generalization:1 ultra:1 tighter:2 elementary:1 proposition:10 exp:6 purpose:1 estimation:1 combinatorial:1 tool:1 gaussian:1 rather:2 jaakkola:1 dcoup:2 ax:2 focus:1 derived:1 contrast:1 rigorous:5 attains:5 baseline:1 sense:1 helpful:1 inference:7 nn:2 typically:1 arg:3 development:1 field:16 chernoff:1 look:1 report:1 simplify:1 randomly:1 divergence:1 n1:2 generically:1 introduces:1 pradeep:1 yielding:2 semidefinite:1 edge:4 necessary:1 machinery:2 tree:25 modeling:1 entry:1 uniform:2 usefulness:1 answer:1 siam:2 probabilistic:3 physic:1 invertible:1 thesis:1 central:1 choose:1 bx:2 potential:11 satisfy:2 reiterate:1 later:1 graphbased:1 parallel:1 accuracy:1 miller:2 yield:4 multiplying:1 j6:1 submitted:1 explain:1 definition:7 energy:1 associated:3 vaidya:7 proof:6 sophisticated:1 focusing:2 appears:1 manuscript:1 follow:1 wei:1 done:1 generality:1 preconditioner:24 propagation:6 quality:1 scientific:2 concept:1 hence:1 symmetric:2 reweighted:4 width:1 generalized:18 outline:1 image:1 variational:8 discussed:2 extend:1 approximates:2 marginals:1 mellon:2 expressing:1 grid:3 add:1 dominant:3 recent:1 optimizing:2 outperforming:1 seen:1 greater:1 additional:1 ii:47 relates:1 semi:1 reduces:2 plug:1 offer:1 ravikumar:3 laplacian:2 desideratum:1 basic:7 mrf:1 essentially:1 background:2 want:1 appropriately:1 comment:1 undirected:2 thing:1 lafferty:3 call:1 easy:1 inner:1 idea:2 expression:1 handled:1 detailed:2 ph:1 outperform:1 nsf:1 track:1 carnegie:2 shall:2 express:2 verified:1 graph:29 relaxation:4 inverse:1 uncertainty:1 place:1 family:4 comparable:1 bound:57 strength:3 bp:1 n3:2 min:30 preconditioners:17 structured:3 according:1 conjugate:1 making:1 intuitively:1 boman:3 equation:3 turn:1 hh:18 tractable:6 end:1 yedidia:2 apply:1 bii:1 distinguished:1 original:3 denotes:3 assumes:1 top:2 graphical:21 giving:2 establish:1 approximating:2 classical:2 added:1 primary:1 traditional:1 diagonal:1 gradient:1 distance:2 thank:1 reason:1 spanning:6 preconditioned:3 willsky:1 assuming:2 minimizing:1 statement:2 expense:2 trace:1 anal:1 upper:21 markov:2 enabling:1 finite:1 looking:1 introduced:1 pair:3 required:4 kl:1 unpublished:1 connection:1 maximin:1 toledo:1 sparsity:1 challenge:1 recast:2 built:1 max:37 belief:5 wainwright:2 event:7 difficulty:1 natural:1 turning:1 minimax:2 scheme:2 improve:1 literature:2 understanding:1 relative:3 
loss:1 lecture:1 outlining:1 sufficient:1 principle:2 diagonally:3 supported:1 copy:1 bern:2 aij:1 side:2 sparse:10 hendrickson:4 xn:5 stand:1 exponentiating:1 nguyen:1 social:1 approximate:11 compact:1 clique:2 monotonicity:1 ml:1 uai:1 summing:2 assumed:1 xi:2 search:1 iterative:4 obtaining:4 complex:1 necessarily:1 constructing:1 domain:1 main:2 child:1 x1:5 hhb:22 exponential:2 lie:1 companion:1 hha:14 intractable:1 workshop:1 easier:1 conveniently:1 corresponds:3 gary:1 goal:2 identity:1 viewed:1 towards:1 replace:1 feasible:3 reducing:1 lemma:8 called:4 ii6:5 internal:1 support:21 latter:1 evaluate:1 |
2,152 | 2,954 | Message passing for task redistribution on
sparse graphs
K. Y. Michael Wong
Hong Kong U. of Science & Technology
Clear Water Bay, Hong Kong, China
[email protected]
David Saad
NCRG, Aston University
Birmingham B4 7ET, UK
[email protected]
Zhuo Gao
Hong Kong U. of Science & Technology, Clear Water Bay, Hong Kong, China
Permanent address: Dept. of Physics, Beijing Normal Univ., Beijing 100875, China
[email protected]
Abstract
The problem of resource allocation in sparse graphs with real variables
is studied using methods of statistical physics. An efficient distributed
algorithm is devised on the basis of insight gained from the analysis and
is examined using numerical simulations, showing excellent performance
and full agreement with the theoretical results.
1 Introduction
Optimal resource allocation is a well known problem in the area of distributed computing [1, 2] to which significant effort has been dedicated within the computer science community. The problem itself is quite general and is applicable to other areas as well where a
large number of nodes are required to balance loads/resources and redistribute tasks, such
as reducing internet traffic congestion [3]. The problem has many flavors and usually refers,
in the computer science literature, to finding practical heuristic solutions to the distribution
of computational load between computers connected in a predetermined manner.
The problem we are addressing here is more generic and is represented by nodes of some
computational power that should carry out tasks. Both computational powers and tasks will
be chosen at random from some arbitrary distribution. The nodes are located on a randomly
chosen sparse graph of some given connectivity. The goal is to migrate tasks on the graph
such that demands will be satisfied while minimizing the migration of (sub-)tasks. An
important aspect of the desired algorithmic solution is that decisions on messages to be
passed are carried out locally; this enables an efficient implementation of the algorithm in
large non-centralized distributed networks. We focus here on the satisfiable case where the
total computing power is greater than the demand, and where the number of nodes involved
is very large. The unsatisfiable case can be addressed using similar techniques.
We analyze the problem using the Bethe approximation of statistical mechanics in Section 2, and alternatively a new variant of the replica method [4, 5] in Section 3. We then
present numerical results in Section 4, and derive a new message passing distributed algo-
rithm on the basis of the analysis (in Section 5). We conclude the paper with a summary
and a brief discussion on future work.
2 The statistical physics framework: Bethe approximation
We consider a typical resource allocation task on a sparse graph of N nodes, labelled
i = 1, …, N. Each node i is randomly connected to c other nodes (although we focus here on graphs of fixed connectivity, one can easily accommodate any connectivity profile within the same framework; the algorithms presented later are completely general), and has a capacity Λ_i
randomly drawn from a distribution ρ(Λ_i). The objective is to migrate tasks between nodes
such that each node will be capable of carrying out its tasks. The current y_ij ≡ −y_ji drawn
from node j to i is aimed at satisfying the constraint

Σ_j A_ij y_ij + Λ_i ≥ 0,   (1)

representing the 'revised' assignment for node i, where A_ij = 1/0 for connected/unconnected node pairs i and j, respectively. To illustrate the statistical mechanics
approach to resource allocation, we consider the load balancing task of minimizing the energy function (cost) E = Σ_(ij) A_ij φ(y_ij), where the summation (ij) runs over all pairs
of nodes, subject to the constraints (1); φ(y) is a general function of the current y. For
load balancing tasks, φ(y) is typically a convex function, which will be assumed in our
study. The analysis of the graph is done by introducing the free energy F = −T ln Z_y for
a temperature T ≡ β⁻¹, where Z_y is the partition function
Z_y = ∫ ∏_(ij) dy_ij ∏_i Θ( Σ_j A_ij y_ij + Λ_i ) exp[ −β Σ_(ij) A_ij φ(y_ij) ].   (2)

The function Θ returns 1 for a non-negative argument and 0 otherwise.
When the connectivity c is low, the probability of finding a loop of finite length on the
graph is low, and the Bethe approximation well describes the local environment of a node.
In the approximation, a node is connected to c branches in a tree structure, and the correlations among the branches of the tree are neglected. In each branch, nodes are arranged
in generations. A node is connected to an ancestor node of the previous generation, and
another c − 1 descendent nodes of the next generation.
Consider a vertex V(T) of capacity Λ_V(T), and a current y is drawn from the vertex.
One can write an expression for the free energy F(y|T) as a function of the free energies
F(y_k|T_k) of its descendants, which branch out from this vertex:

F(y|T) = −T ln { ∏_{k=1}^{c−1} ∫ dy_k Θ( Σ_{k=1}^{c−1} y_k − y + Λ_V(T) ) exp[ −β Σ_{k=1}^{c−1} ( F(y_k|T_k) + φ(y_k) ) ] },   (3)

where T_k represents the tree terminated at the kth descendent of the vertex. The free
energy can be considered as the sum of two parts, F(y|T) = N_T F_av + F_V(y|T), where N_T
is the number of nodes in the tree T, F_av is the average free energy per node, and F_V(y|T)
is referred to as the vertex free energy (this term is marginalized over all inputs to the current vertex, leaving the difference in chemical potential y as its sole argument, hence the terminology used). Note that when a vertex is added to a tree, there is a
change in the free energy due to the added vertex. Since the number of nodes increases by
1, the vertex free energy is obtained by subtracting the free energy change by the average
free energy. This allows us to obtain the recursion relation
F_V(y|T) = −T ln { ∏_{k=1}^{c−1} ∫ dy_k Θ( Σ_{k=1}^{c−1} y_k − y + Λ_V(T) ) exp[ −β Σ_{k=1}^{c−1} ( F_V(y_k|T_k) + φ(y_k) ) ] } − F_av,   (4)

and the average free energy per node is given by

F_av = −T ⟨ ln { ∏_{k=1}^{c} ∫ dy_k Θ( Σ_{k=1}^{c} y_k + Λ_V ) exp[ −β Σ_{k=1}^{c} ( F_V(y_k|T_k) + φ(y_k) ) ] } ⟩_Λ,   (5)

where Λ_V is the capacity of the vertex V fed by c trees T_1, …, T_c, and ⟨…⟩_Λ represents
the average over the distribution ρ(Λ). In the zero temperature limit, Eq. (4) reduces to

F_V(y|T) = min_{ { y_k | Σ_{k=1}^{c−1} y_k − y + Λ_V(T) ≥ 0 } } [ Σ_{k=1}^{c−1} ( F_V(y_k|T_k) + φ(y_k) ) ] − F_av.   (6)
The current distribution and the average free energy per link can be derived by integrating
the current y′ in a link from one vertex to another, fed by the trees T_1 and T_2, respectively;
the obtained expressions are P(y) = ⟨δ(y − y′)⟩_⋆ and ⟨E⟩ = ⟨φ(y′)⟩_⋆, where

⟨•⟩_⋆ = ∫ dy′ exp[ −β( F_V(y′|T_1) + F_V(−y′|T_2) + φ(y′) ) ] (•) / ∫ dy′ exp[ −β( F_V(y′|T_1) + F_V(−y′|T_2) + φ(y′) ) ].   (7)
3 The statistical physics framework: replica method
In this section, we sketch the analysis of the problem using the replica method, as an alternative to the Bethe approximation. The derivation is rather involved; details will be provided
elsewhere. To facilitate derivations, we focus on the quadratic cost function φ(y) = y²/2.
The results confirm the validity of the Bethe approximation on sparse graphs.
An alternative formulation of the original optimization problem is to consider its
dual. Introducing Lagrange multipliers, the function to be minimized becomes
L = Σ_(ij) A_ij y_ij²/2 + Σ_i μ_i ( Σ_j A_ij y_ij + Λ_i ). Optimizing L with respect to y_ij, one obtains y_ij = μ_j − μ_i, where μ_i is referred to as the chemical potential of node i, and the
current is driven by the potential difference.
Although the analysis has also been carried out in the space of currents, we focus here on
the optimization problem in the space of the chemical potentials. Since the energy function
is invariant under the addition of an arbitrary global constant to the chemical potentials of
all nodes, we introduce an extra regularization term ε Σ_i μ_i²/2 to break the translational
symmetry, where ε → 0. To study the characteristics of the problem one calculates the
averaged free energy per node F_av = −T ⟨ln Z⟩_{A,Λ} / N, where Z is the partition function

Z = ∏_i [ ∫ dμ_i Θ( Σ_j A_ij (μ_j − μ_i) + Λ_i ) ] exp{ −β [ Σ_(ij) A_ij (μ_j − μ_i)²/2 + ε Σ_i μ_i²/2 ] }.
The calculation follows the main steps of a replica-based calculation in diluted systems [6],
using the identity ln Z = lim_{n→0} [Zⁿ − 1]/n. The replicated partition function [5] is averaged over all network configurations with the given connectivity and capacity distribution ρ(Λ_i).
We consider the case of intensive connectivity c ∼ O(1) ≪ N. Extending the analysis of [6]
and averaging over all connectivity matrices, one finds
hZn i =
(
X^
exp N 2
exp
i^ ( +
)
"
X
r;s
Qr;s Qr;s + ln
Z
d()
2 (
+ )
2
Y Z
#
d
)
Z1
Z d^ !
d
2
X
;
(8)
= Pr;s Q^ r;s Q ( i^ )r s + Pr;s 2 QQrr;s!s ! Q r ( i^ )s . The
order parameters Q_{r,s} and Q̂_{r,s} are labelled by the somewhat unusual indices r and s,
representing the n-component integer vectors (r_1, …, r_n) and (s_1, …, s_n) respectively. This
is a result of the specific interaction considered, which entangles nodes of different indices.
The order parameters Q_{r,s} and Q̂_{r,s} are given by the extremum condition of Eq. (8), i.e., via
a set of saddle point equations w.r.t. the order parameters. Assuming replica symmetry, the
saddle point equations yield a recursion relation for a two-component function R, which is
related to the order parameters via the generating function
P_s(z) = Σ_r Q_{r,s} ∏_α (z^α)^{r_α} / r_α! = ⟨ ∏_α ∫ dμ^α R(z^α, μ^α|T) e^{−βε(μ^α)²/2} (μ^α)^{s_α} ⟩_Λ.   (9)
In Eq. (9), T represents the tree terminated at the vertex node with chemical potential
μ, providing input to the ancestor node with chemical potential z, and ⟨…⟩_Λ represents
the average over the distribution ρ(Λ). The resultant recursion relation for R(z, μ|T) is
independent of the replica indices, and is given by

R(z, μ|T) = (1/D) ∏_{k=1}^{c−1} [ ∫ dμ_k R(μ, μ_k|T_k) ] Θ( Σ_{k=1}^{c−1} (μ_k − μ) − (μ − z) + Λ_V(T) ) exp{ −(β/2) [ Σ_{k=1}^{c−1} (μ_k − μ)² + (μ − z)² ] },   (10)
where the vertex node has a capacity Λ_V(T); D is a constant. R(z, μ|T) is expressed in
terms of c − 1 functions R(μ, μ_k|T_k) (k = 1, …, c − 1), integrated over μ_k. This algebraic
structure is typical of the Bethe lattice tree-like representation of networks of connectivity
c, where a node obtains input from its c − 1 descendent nodes of the next generation, and
T_k represents the tree terminated at the kth descendent.
Except for the regularization factor exp(−βεμ²/2), R turns out to be a function of
y ≡ μ − z, which is interpreted as the current drawn from a node with chemical potential μ by its ancestor with chemical potential z. One can then express the function R
as the product of a vertex partition function Z_V and a normalization factor W, that is,
R(z, μ|T) = W(μ) Z_V(y|T). In the limit n → 0, the dependences on μ and y are separable, providing a recursion relation for Z_V(y|T). This gives rise to the vertex free energy
F_V(y|T) = −T ln Z_V(y|T) when a current y is drawn from the vertex of a tree T. The recursive equation and the average free energy expression agree with the results in the Bethe
approximation. These iterative equations can be directly linked to those obtained from a
principled Bayesian approximation, where the logarithms of the messages passed between
nodes are proportional to the vertex free energies.
4 Numerical solution
The solution of Eq. (6) is obtained numerically. Since the vertex free energy of a node depends on its own capacity and the disordered configuration of its descendants, we generate
1000 nodes at each iteration of Eq. (6), with capacities randomly drawn from the distribution (), each being fed by
1 nodes randomly drawn from the previous iteration.
We have discretized the vertex free energies FV (y jT) function into a vector, whose ith
component is the value of the function corresponding to the current yi . To speed up the
optimization search at each node, we first find the vertex saturation current drawn from
a node such that: (a) the capacity of the node is just used up; (b) the current drawn by
each of its descendant nodes is just enough to saturate its own capacity constraint. When
these conditions are satisfied, we can separately optimize the current drawn by each descendant node, and the vertex saturation current is equal to the node capacity subtracted by
the current drawn by its descendants. The optimal solution can be found using an exhaustive search, by varying the component currents in small discrete steps. This approach is
particularly convenient for
= 3, where the search is confined to a single parameter.
To compute the average energy, we randomly draw 2 nodes, compute the optimal current
flowing between them, and repeat the sampling to obtain the average. Figure 1(a) shows
the results as a function of iteration step t, for a Gaussian capacity distribution () with
variance 1 and average hi. Each iteration corresponds to adding one extra generation to
the tree structure, such that the iterative process corresponds to approximating the network
by an increasingly extensive tree. We observe that after an initial rise with iteration steps,
the average energies converges to steady-state values, at a rate which increases with the
average capacity.
To study the convergence rate of the iterations, we fit the average energy at iteration step
t using hE (t) E (1)i exp(
t) in the asymptotic regime. As shown in the inset of
Fig. 1(a), the relaxation rate
increases with the average capacity. It is interesting to note
that a cusp exists at the average capacity of about 0.45. Below that value, convergence
of the iteration is slow, since the average energy curve starts to develop a plateau before
the final convergence. On the other hand, the plateau disappears and the convergence is
fast above the cusp. The slowdown of convergence below the cusp is probably due to the
appearance of increasingly large clusters of nonzero currents on the network, since clusters of nodes with negative capacities become increasingly extensive, and need to draw
currents from increasingly extensive regions of nodes with excess capacities to satisfy the
demand. Figure 1(b) illustrates the current distribution for various average capacities. The
distribution P (y ) consists of a delta function component at y = 0 and a continuous component whose breadth decreases with average capacity. The fraction of links with zero
currents increases with the average capacity. Hence at a low average capacity, links with
nonzero currents form a percolating cluster, whereas at a high average capacity, it breaks
into isolated clusters.
5 Distributed algorithms
The local nature of the recursion relation Eq. (6) points to the possibility that the network
optimization can be solved by message passing approaches, which have been successful
in problems such as error-correcting codes [8] and probabilistic inference [9]. The major
advantage of message passing is its potential to solve a global optimization problem via
local updates, thereby reducing the computational complexity. For example, the computational complexity of quadratic programming for the load balancing task typically scales
as N 3 , whereas capitalizing on the network topology underlying the connectivity of the
variables, message passing scales as N . An even more important advantage, relevant to
1
10
2
(a)
(b)
0
P(y=0)
?
10
?1
10
1.5
0
10
0.4
?2
0.2
0.4 0.6
<?>
0.8
P(y)
0
<E>
10
1
0
0
0.5
<?>
?1
10
0.5
0.1
0.8
?2
0
10
20
t
30
0
40
0.15
0
0.5
1
y
1.5
0.5
<?>
1
?1
?
?0.5
2
2
(c)
c=3
0.1
(c?2)<E>
(d)
0.1
c=3
P(?=0)
10
1.5
0.5
0
c=4
0.05
0
P(?)
<E>
c=5
0.5
<?>
0
1
0
c=5
0.5
0.1
0.8
0
0
0.2
0.4
0.6
<?>
0.8
1
0
?2
?1.5
0
Figure 1: Results for system size N = 1000 and φ(y) = y²/2. (a) ⟨E⟩ obtained by iterating
Eq. (6) as a function of t for ⟨Λ⟩ = 0.1, 0.2, 0.4, 0.6, 0.8 (top to bottom) and c = 3. Dashed
line: the asymptotic ⟨E⟩ for ⟨Λ⟩ = 0.1. Inset: γ as a function of ⟨Λ⟩. (b) The distribution
P(y) obtained by iterating Eq. (6) to steady states for the same parameters and average
capacities as in (a), from right to left. Inset: P(y = 0) as a function of ⟨Λ⟩. Symbols:
c = 3, c = 4 and c = 5, each pair obtained from Eqs. (11) and (14) respectively. Line:
erf(⟨Λ⟩/√2). (c) ⟨E⟩ as a function of ⟨Λ⟩ for c = 3, 4, 5. Symbols: results of Eq. (6),
Eq. (11), and Eq. (14). Inset: ⟨E⟩ multiplied by (c − 2) as a function of ⟨Λ⟩ for the same
conditions. (d) The distribution P(μ) obtained by iterating Eq. (14) to steady states for the
same parameters and average capacities as in (b), from left to right. Inset: P(μ = 0) as a
function of ⟨Λ⟩. Symbols: same as (b).
practical implementation, is its distributive nature; it does not require a global optimizer,
and is particularly suitable for distributive control in evolving networks.
However, in contrast to other message passing algorithms which pass conditional probability estimates of discrete variables to neighboring nodes, the messages in the present context
are more complex, since they are functions F_V(y|T) of the current y. We simplify the message to 2 parameters, namely, the first and second derivatives of the vertex free energies.
For the quadratic load balancing task, it can be shown that a self-consistent solution of the
recursion relation, Eq. (6), consists of vertex free energies which are piecewise quadratic
with continuous slopes. This makes the 2-parameter message a very precise approximation.
Let (A_ij, B_ij) ≡ ( ∂F_V(y_ij|T_j)/∂y_ij , ∂²F_V(y_ij|T_j)/∂y_ij² ) be the message passed from
node j to i; using Eq. (6), the recursion relation of the messages becomes

A_ij ← −μ_ij,   B_ij ← Θ(−μ_ij) [ Σ_{k≠i} A_jk (φ″_jk + B_jk)⁻¹ ]⁻¹,   (11)

where

μ_ij = min[ ( Σ_{k≠i} A_jk [ y_jk − (φ′_jk + A_jk)(φ″_jk + B_jk)⁻¹ ] + Λ_j − y_ij ) / ( Σ_{k≠i} A_jk (φ″_jk + B_jk)⁻¹ ), 0 ],   (12)

with φ′_jk and φ″_jk representing the first and second derivatives of φ(y) at y = y_jk respectively. The forward passing of the message from node j to i is then followed by a backward
message from node j to k for updating the currents y_jk according to

y_jk ← y_jk − ( φ′_jk + A_jk + μ_ij ) / ( φ″_jk + B_jk ).   (13)

We simulate networks with c = 3, φ(y) = y²/2 and compute their average energies. The
network configurations are generated randomly, with loops of lengths 3 or less excluded.
Updates are performed with random sequential choices of the nodes. As shown in Fig. 1(c),
the simulation results of the message passing algorithm have an excellent agreement with
those obtained by the recursion relation Eq.(6).
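A sketch of these updates for the quadratic cost (so φ′ = y and φ″ = 1), on a random sparse graph standing in for the c = 3 ensemble, with our own initialization of currents and messages, is given below:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N = 60
# Random sparse symmetric graph; a stand-in for the fixed-connectivity ensemble.
edges = [(i, j) for i, j in combinations(range(N), 2) if rng.random() < 3.0 / N]
nbr = {i: [] for i in range(N)}
for i, j in edges:
    nbr[i].append(j)
    nbr[j].append(i)

cap = rng.normal(0.4, 1.0, size=N)                   # capacities Lambda_i
y = {(i, j): 0.0 for i in range(N) for j in nbr[i]}  # currents y_ij = -y_ji
A = dict.fromkeys(y, 0.0)                            # first-derivative messages
B = dict.fromkeys(y, 1.0)                            # second-derivative messages

def sweep():
    for j in rng.permutation(N):                     # random sequential updates
        for i in nbr[j]:
            ks = [k for k in nbr[j] if k != i]
            if not ks:
                continue
            s = sum(1.0 / (1.0 + B[(j, k)]) for k in ks)          # phi'' = 1
            num = sum(y[(j, k)] - (y[(j, k)] + A[(j, k)]) / (1.0 + B[(j, k)])
                      for k in ks) + cap[j] - y[(i, j)]
            mu = min(num / s, 0.0)                                # Eq. (12)
            A[(i, j)] = -mu                                       # Eq. (11)
            B[(i, j)] = 1.0 / s if mu < 0 else 0.0
            for k in ks:                                          # Eq. (13)
                y[(j, k)] -= (y[(j, k)] + A[(j, k)] + mu) / (1.0 + B[(j, k)])
                y[(k, j)] = -y[(j, k)]

for _ in range(100):
    sweep()
print("average energy per node:",
      sum(0.5 * v * v for v in y.values()) / 2 / N)  # each link stored twice
```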
For the quadratic load balancing task considered here, an independent exact optimization
is available for comparison. The Kuhn-Tucker conditions for the optimal solution yield

μ_i = min[ c⁻¹ ( Σ_j A_ij μ_j + Λ_i ), 0 ].   (14)
It also provides a local iterative method for the optimization problem. As shown in
Fig. 1(c), both the recursion relation Eq.(6) and the message passing algorithm Eq.(11)
yield excellent agreement with the iteration of chemical potentials Eq.(14).
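A direct rendering of this local iteration (our sketch; node degrees replace the fixed connectivity c so that irregular random graphs are also handled, which is our generalization) is:

```python
import numpy as np

rng = np.random.default_rng(2)

def iterate_mu(adj, cap, iters=300):
    """Random sequential iteration of Eq. (14):
    mu_i <- min[(sum_j A_ij mu_j + Lambda_i) / c_i, 0], with c_i the degree."""
    deg = np.maximum(adj.sum(axis=1), 1.0)
    mu = np.zeros(adj.shape[0])
    for _ in range(iters):
        for i in rng.permutation(adj.shape[0]):
            mu[i] = min((adj[i] @ mu + cap[i]) / deg[i], 0.0)
    return mu

N = 100
adj = (rng.random((N, N)) < 3.0 / N).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                          # symmetric, no self-loops
cap = rng.normal(0.4, 1.0, size=N)
mu = iterate_mu(adj, cap)
diff = mu[None, :] - mu[:, None]           # currents y_ij = mu_j - mu_i
print("energy per node:", 0.25 * (adj * diff**2).sum() / N)  # phi = y^2/2
```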
Both Eqs. (11) and (14) allow us to study the distribution P(μ) of the chemical potentials
μ. As shown in Fig. 1(d), P(μ) consists of a delta function and a continuous component.
Nodes with zero chemical potentials correspond to those with unsaturated capacity constraints. The fraction of unsaturated nodes increases with the average capacity, as shown in
the inset of Fig. 1(d). Hence at a low average capacity, saturated nodes form a percolating
cluster, whereas at a high average capacity, it breaks into isolated clusters. It is interesting
to note that at the average capacity of 0.45, below which a plateau starts to develop in the
relaxation rate of the recursion relation Eq. (6), the fraction of unsaturated nodes is about
0.53, close to the percolation threshold of 0.5 for c = 3.
Besides the case of c = 3, Fig. 1(c) also shows the simulation results of the average energy
for c = 4, 5, using both Eqs. (11) and (14). We see that the average energy decreases
when the connectivity increases. This is because the increase in links connecting a node
provides more freedom to allocate resources. When the average capacity is 0.2 or above,
an exponential fit ⟨E⟩ ∼ exp(−k⟨Λ⟩) is applicable, where k lies in the range 2.5 to 2.7.
Remarkably, multiplying by a factor of (c − 2), we find that the 3 curves collapse in this
regime of average capacity, showing that the average energy scales as (c − 2)⁻¹ in this
regime, as shown in the inset of Fig. 1(c).
Further properties of the optimized networks have been studied by simulations, and will
be presented elsewhere. Here we merely summarize the main results. (a) When the average capacity drops below 0.1, the energy rises above the exponential fit applicable to
the average capacity above 0.2. (b) The fraction of links with zero currents increases with
the average capacity, and is rather insensitive to the connectivity. Remarkably, except for
very small average capacities, the function erf(⟨Λ⟩/√2) has a very good fit with the data.
Indeed, in the limit of large ⟨Λ⟩, this function approaches the fraction of links with both
vertices unsaturated, that is, [∫₀^∞ dΛ ρ(Λ)]². (c) The fraction of unsaturated nodes increases
with the average capacity, and is rather insensitive to the connectivity. In the limit of large
average capacities, it approaches the upper bound of ∫₀^∞ dΛ ρ(Λ), which is the probability
that the capacity of a node is non-negative. (d) The convergence time of Eq. (11) can be
measured by the time for the r.m.s. of the changes in the chemical potentials to fall below
a threshold. Similarly, the convergence time of Eq. (14) can be measured by the time for
the r.m.s. of the sums of the currents in both message directions of a link to fall below a
threshold. When the average capacity is 0.2 or above, we find a power-law dependence
on the average capacity, the exponent ranging from −1 for c = 3 to −0.8 for c = 5 for
Eq. (14), and being about −0.5 for c = 3, 4, 5 for Eq. (11). When the average capacity
decreases further, the convergence time deviates above the power laws.
6 Summary
We have studied a prototype problem of resource allocation on sparsely connected networks
using the replica method, resulting in recursion relations interpretable using the Bethe approximation. The resultant recursion relation leads to a message passing algorithm for
optimizing the average energy, which significantly reduces the computational complexity
of the global optimization task and is suitable for online distributive control. The suggested
2-parameter approximation produces results with excellent agreement with the original recursion relation. For the simple but illustrative example in this letter, we have considered a
quadratic cost function, resulting in an exact algorithm based on local iterations of chemical potentials, and the message passing algorithm shows remarkable agreement with the
exact result. The suggested simple message passing algorithm can be generalized to more
realistic cases of nonlinear cost functions and additional constraints on the capacities of
nodes and links. This constitutes a rich area for further investigations with many potential
applications.
Acknowledgments
This work is partially supported by research grants HKUST6062/02P and DAG04/05.SC25
of the Research Grant Council of Hong Kong and by EVERGROW, IP No. 1935 in the FET,
EU FP6 and STIPCO EU FP5 contract HPRN-CT-2002-00319.
References
[1] Peterson L. and Davie B.S., Computer Networks: A Systems Approach, Academic Press, San
Diego CA (2000)
[2] Ho Y.C., Servi L. and Suri R. Large Scale Systems 1 (1980) 51
[3] Shenker S., Clark D., Estrin D. and Herzog S. ACM Computer Comm. Review 26 (1996) 19
[4] Nishimori H. Statistical Physics of Spin Glasses and Information Processing, OUP UK (2001)
[5] Mézard M., Parisi G. and Virasoro M., Spin Glass Theory and Beyond, World Scientific, Singapore (1987)
[6] Wong K.Y.M. and Sherrington D. J. Phys. A20(1987) L793
[7] Sherrington D. and Kirkpatrick S. Phys. Rev. Lett.35 (1975) 1792
[8] Opper M. and Saad D. Advanced Mean Field Methods, MIT press (2001)
[9] MacKay D.J.C., Information Theory, Inference and Learning Algorithms, CUP UK(2003)
| 2954 |@word kong:5 simulation:4 thereby:1 accommodate:1 carry:1 initial:1 configuration:3 current:27 nt:2 ust:1 numerical:3 realistic:1 partition:4 predetermined:1 enables:1 drop:1 interpretable:1 update:2 congestion:1 ith:1 provides:2 node:58 become:2 descendant:5 consists:3 introduce:1 manner:1 indeed:1 mechanic:2 discretized:1 becomes:1 provided:1 underlying:1 interpreted:1 finding:2 extremum:1 uk:4 control:2 grant:2 t1:2 before:1 local:5 limit:4 china:3 studied:3 examined:1 collapse:1 range:1 averaged:2 bjk:3 practical:2 acknowledgment:1 recursive:1 area:3 evolving:1 significantly:1 convenient:1 integrating:1 refers:1 fav:6 close:1 context:1 wong:2 optimize:1 convex:1 correcting:1 insight:1 diego:1 exact:3 programming:1 agreement:5 satisfying:1 particularly:2 located:1 jk:8 updating:1 sparsely:1 bottom:1 solved:1 region:1 connected:6 eu:2 decrease:3 yk:13 principled:1 environment:1 comm:1 complexity:3 jtk:7 neglected:1 ezard:1 carrying:1 algo:1 basis:2 completely:1 easily:1 represented:1 various:1 derivation:2 univ:1 fast:1 exhaustive:1 quite:1 heuristic:1 whose:2 solve:1 otherwise:1 erf:2 itself:1 final:1 online:1 ip:1 advantage:2 parisi:1 subtracting:1 interaction:1 product:1 neighboring:1 relevant:1 loop:2 qr:5 convergence:8 cluster:6 p:1 extending:1 r1:1 produce:1 generating:1 converges:1 tk:2 diluted:1 derive:1 illustrate:1 ac:1 develop:2 measured:2 ij:10 sole:1 eq:26 direction:1 disordered:1 cusp:3 redistribution:1 require:1 investigation:1 summation:1 yij:13 considered:4 normal:1 exp:13 algorithmic:1 major:1 optimizer:1 birmingham:1 applicable:3 percolation:1 council:1 agrees:1 mit:1 gaussian:1 rather:2 varying:1 derived:1 focus:4 jk1:1 hk:1 contrast:1 glass:2 inference:2 typically:2 integrated:1 relation:13 ancestor:3 translational:1 among:1 dual:1 k6:2 exponent:1 mackay:1 equal:1 field:1 sampling:1 represents:5 qrr:1 constitutes:1 future:1 minimized:1 t2:1 simplify:1 piecewise:1 randomly:7 freedom:1 centralized:1 message:21 a5:2 possibility:1 saturated:1 kirkpatrick:1 redistribute:1 capable:1 tree:13 logarithm:1 desired:1 isolated:2 theoretical:1 a20:1 virasoro:1 assignment:1 lattice:1 cost:4 introducing:2 addressing:1 vertex:24 successful:1 migration:1 probabilistic:1 physic:5 contract:1 michael:1 connecting:1 connectivity:12 satisfied:2 derivative:2 return:1 potential:16 permanent:1 descendent:4 satisfy:1 depends:1 later:1 break:3 performed:1 jtj:2 analyze:1 traffic:1 linked:1 start:2 satisfiable:1 slope:1 spin:2 variance:1 characteristic:1 yield:3 correspond:1 bayesian:1 zy:3 jt1:2 bnu:1 multiplying:1 plateau:3 phys:2 energy:31 involved:2 tucker:1 resultant:2 flowing:1 unsaturated:5 arranged:1 done:1 formulation:1 just:2 correlation:1 sketch:1 hand:1 nonlinear:1 scientific:1 facilitate:1 validity:1 multiplier:1 hence:3 regularization:2 chemical:13 excluded:1 nonzero:2 self:1 illustrative:1 steady:3 fet:1 hong:5 generalized:1 sherrington:2 dedicated:1 temperature:2 ranging:1 suri:1 tively:1 b4:1 insensitive:2 ncrg:1 shenker:1 he:7 numerically:1 significant:1 cup:1 similarly:1 own:2 optimizing:2 driven:1 jt2:2 yi:1 greater:1 somewhat:1 additional:1 dashed:1 branch:4 full:1 reduces:2 academic:1 calculation:2 devised:1 unsatisfiable:1 calculates:1 variant:1 iteration:10 normalization:1 confined:1 addition:1 whereas:3 separately:1 remarkably:2 addressed:1 leaving:1 limn:1 hprn:1 saad:3 extra:2 probably:1 subject:1 hz:1 integer:1 enough:1 fit:4 topology:1 cn:1 prototype:1 intensive:1 expression:3 allocate:1 passed:3 effort:1 algebraic:1 passing:12 migrate:2 iterating:3 clear:2 
aimed:1 locally:1 generate:1 singapore:1 delta:2 per:4 write:1 discrete:2 express:1 zv:4 terminology:1 threshold:3 drawn:11 breadth:1 l793:1 replica:7 backward:1 graph:9 relaxation:2 merely:1 fraction:6 sum:2 beijing:2 fp6:1 run:1 letter:1 draw:2 decision:1 dy:1 bound:1 internet:1 ct:1 followed:1 quadratic:6 constraint:5 aspect:1 speed:1 argument:2 min:3 simulate:1 separable:1 oup:1 according:1 describes:1 increasingly:4 rev:1 s1:1 invariant:1 pr:2 ln:7 resource:7 equation:4 turn:1 fed:3 unusual:1 capitalizing:1 available:1 multiplied:1 observe:1 generic:1 subtracted:1 alternative:2 ho:1 original:2 top:1 marginalized:1 yz:1 approximating:1 objective:1 added:2 dependence:2 kth:1 link:9 capacity:41 distributive:3 water:2 assuming:1 length:2 code:1 index:3 besides:1 providing:2 balance:1 minimizing:2 yjk:5 negative:3 rise:3 implementation:2 upper:1 revised:1 finite:1 unconnected:1 precise:1 y1:1 rn:1 arbitrary:2 hln:1 community:1 david:1 pair:3 required:1 namely:1 extensive:3 z1:1 optimized:1 fv:16 herzog:1 address:1 zhuo:1 suggested:2 beyond:1 usually:1 below:6 regime:3 summarize:1 saturation:2 power:5 ia:1 suitable:2 recursion:13 advanced:1 representing:3 technology:2 aston:2 brief:1 disappears:1 carried:2 dyk:3 sn:1 deviate:1 review:1 literature:1 nishimori:1 asymptotic:2 law:2 generation:5 interesting:2 allocation:5 proportional:1 remarkable:1 clark:1 consistent:1 balancing:5 elsewhere:2 summary:2 uhn:1 repeat:1 slowdown:1 free:20 supported:1 aij:12 allow:1 fall:2 peterson:1 sparse:5 distributed:5 curve:2 lett:1 opper:1 world:1 rich:1 forward:1 replicated:1 san:1 excess:1 obtains:2 rthe:1 confirm:1 global:4 conclude:1 assumed:1 alternatively:1 yji:1 search:3 iterative:3 continuous:3 bay:2 bethe:8 nature:2 ca:1 symmetry:2 excellent:4 complex:1 main:2 terminated:3 rh:1 profile:1 fig:7 referred:2 evergrow:1 rithm:1 slow:1 sub:1 exponential:2 lie:1 bij:2 saturate:1 load:7 specific:1 phkywong:1 showing:2 jt:18 inset:7 symbol:3 exists:1 adding:1 sequential:1 gained:1 illustrates:1 demand:3 flavor:1 saddle:2 appearance:1 gao:1 lagrange:1 expressed:1 partially:1 corresponds:2 acm:1 conditional:1 goal:1 identity:1 labelled:2 ajk:4 change:3 typical:2 except:2 reducing:2 averaging:1 total:1 pas:1 dept:1 |
2,153 | 2,955 | CMOL CrossNets: Possible Neuromorphic
Nanoelectronic Circuits
Jung Hoon Lee
Xiaolong Ma
Konstantin K. Likharev
Stony Brook University
Stony Brook, NY 11794-3800
[email protected]
Abstract
Hybrid 'CMOL' integrated circuits, combining a CMOS subsystem
with nanowire crossbars and simple two-terminal nanodevices,
promise to extend the exponential Moore-Law development of
microelectronics into the sub-10-nm range. We are developing
neuromorphic network ('CrossNet') architectures for this future
technology, in which neural cell bodies are implemented in CMOS,
nanowires are used as axons and dendrites, while nanodevices
(bistable latching switches) are used as elementary synapses. We
have shown how CrossNets may be trained to perform pattern
recovery and classification despite the limitations imposed by the
CMOL hardware. Preliminary estimates have shown that CMOL
CrossNets may be extremely dense (~10⁷ cells per cm²) and operate
approximately a million times faster than biological neural networks,
at manageable power consumption. In conclusion, we discuss in
brief possible short-term and long-term applications of the emerging
technology.
1
Introduction: CMOL Circuits
Recent results [1, 2] indicate that the current VLSI paradigm based on CMOS
technology can hardly be extended beyond the 10-nm frontier: in this range the
sensitivity of parameters (most importantly, the gate voltage threshold) of silicon
field-effect transistors to inevitable fabrication spreads grows exponentially. This
sensitivity will probably send fabrication facility costs skyrocketing, and may
lead to the end of Moore's Law some time during the next decade.
There is a growing consensus that the impending Moore's Law crisis may be
preempted by a radical paradigm shift from the purely CMOS technology to hybrid
CMOS/nanodevice circuits, e.g., those of the 'CMOL' variety (Fig. 1). Such circuits (see,
e.g., Ref. 3 for their recent review) would combine a level of advanced CMOS devices
fabricated by lithographic patterning, and a two-layer nanowire crossbar formed,
e.g., by nanoimprint, with nanowires connected by simple, similar, two-terminal
nanodevices at each crosspoint. For such devices, molecular single-electron latching
switches [4] are presently the leading candidates, in particular because they may be
fabricated using the self-assembled monolayer (SAM) technique which already gave
reproducible results for simpler molecular devices [5].
Fig. 1. CMOL circuit: (a) schematic side view, and (b) top-view zoom-in on several
adjacent interface pins. (For clarity, only two adjacent nanodevices are shown.)
In order to overcome the CMOS/nanodevice interface problems pertinent to earlier
proposals of hybrid circuits [6], in CMOL the interface is provided by pins that are
distributed all over the circuit area, on the top of the CMOS stack. This allows the use of
advanced techniques of nanowire patterning (like nanoimprint) which do not have
nanoscale accuracy of layer alignment [3]. The vital feature of this interface is the tilt,
by angle α = arcsin(Fnano/βFCMOS), of the nanowire crossbar relative to the square
arrays of interface pins (Fig. 1b). Here Fnano is the nanowiring half-pitch, FCMOS is the
half-pitch of the CMOS subsystem, and β is a dimensionless factor larger than 1 that
depends on the CMOS cell complexity. Figure 1b shows that this tilt allows the CMOS
subsystem to address each nanodevice even if Fnano ≪ βFCMOS.
By now, it has been shown that CMOL circuits can combine high performance with
high defect tolerance (which is necessary for any circuit using nanodevices) for
several digital applications. In particular, CMOL circuits with defect rates below a
few percent would enable terabit-scale memories [7], while the performance of
FPGA-like CMOL circuits may be several hundred times above that of
purely CMOS FPGA (implemented with the same FCMOS), at acceptable power
dissipation and defect tolerance above 20% [8].
In addition, the very structure of CMOL circuits makes them uniquely suitable for the
implementation of more complex, mixed-signal information processing systems,
including ultradense and ultrafast neuromorphic networks. The objective of this paper
is to describe in brief the current status of our work on the development of so-called
Distributed Crossbar Networks ('CrossNets') that could provide high performance
despite the limitations imposed by CMOL hardware. A more detailed description of
our earlier results may be found in Ref. 9.
2
Synapses
The central device of CrossNet is a two-terminal latching switch [3, 4] (Fig. 2a) which is a
combination of two single-electron devices, a transistor and a trap [3]. The device may be
naturally implemented as a single organic molecule (Fig. 2b). Qualitatively, the device
operates as follows: if voltage V = Vj − Vk applied between the external electrodes (in
CMOL, nanowires) is low, the trap island has no net electric charge, and the single-electron
transistor is closed. If voltage V approaches certain threshold value V+ > 0, an additional
electron is inserted into the trap island, and its field lifts the Coulomb blockade of the
single-electron transistor, thus connecting the nanowires. The switch state may be reset
(e.g., wires disconnected) by applying a lower voltage V < V- < V+.
Due to the random character of single-electron tunneling [2], the quantitative description of
the switch is by necessity probabilistic: actually, V determines only the rates Γ↑↓ of device
switching between its ON and OFF states. The rates, in turn, determine the dynamics of
probability p to have the transistor opened (i.e. wires connected):

dp/dt = Γ↑(1 − p) − Γ↓ p.   (1)
The theory of single-electron tunneling [2] shows that, in a good approximation, the rates
may be presented as
??? = ?0 exp{?e(V - S)/kBT} ,
(2)
Fig. 2. (a) Schematics and (b) possible molecular implementation of the two-terminal single-electron latching switch (a single-electron trap and transistor, with OPE wires, diimide acceptor groups, and hexyl clipping groups).
where Γ0 and S are constants depending on the physical parameters of the latching switches. Note that despite the random character of switching, the strong nonlinearity of Eq. (2) allows the degree of the device "fuzziness" to be limited.
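A minimal numerical illustration of Eqs. (1) and (2) is sketched below (Python; Γ0, S, and the applied voltages are arbitrary example values, not measured device parameters):

```python
import math

def rates(V, gamma0=1.0, S=0.5, kT_over_e=0.026):
    """Switching rates of Eq. (2): Gamma_up/down = Gamma0 * exp(+/- e(V - S)/kBT)."""
    g_up = gamma0 * math.exp((V - S) / kT_over_e)
    g_down = gamma0 * math.exp(-(V - S) / kT_over_e)
    return g_up, g_down

def simulate_p(V, p0=0.0, dt=1e-3, steps=5000):
    """Euler integration of Eq. (1): dp/dt = G_up * (1 - p) - G_down * p."""
    g_up, g_down = rates(V)
    p = p0
    for _ in range(steps):
        p += dt * (g_up * (1.0 - p) - g_down * p)
    return p

# Above threshold (V > S) the switch turns ON with probability close to 1;
# below threshold it stays OFF: the strong nonlinearity limits the "fuzziness".
print(simulate_p(V=0.6), simulate_p(V=0.4))
```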
3 CrossNets
Figure 3a shows the generic structure of a CrossNet. CMOS-implemented somatic cells (within the firing-rate model, just nonlinear differential amplifiers; see Fig. 3b,c) apply their output voltages to "axonic" nanowires. If the latching switch, working as an elementary synapse, at the crosspoint of an axonic wire with a perpendicular "dendritic" wire is open, some current flows into the latter wire, charging it. Since such currents are injected into each dendritic wire through several (many) open synapses, their addition provides a natural passive analog summation of signals from the corresponding somas, typical of all neural networks. Examining Fig. 3a, please note the open-circuit terminations of axonic and dendritic lines at the borders of the somatic cells; due to these terminations, the somas do not communicate directly (but only via synapses).
The network shown in Fig. 3 is evidently feedforward; recurrent networks are obtained in the evident way by doubling the number of synapses and nanowires per somatic cell (Fig. 3c). Moreover, using a dual-rail (bipolar) representation of the signal, and hence doubling the number of nanowires and elementary synapses once again, one gets a CrossNet with somas coupled by compact 4-switch groups [9]. Using Eqs. (1) and (2), it is straightforward to show that the average synaptic weight wjk of the group obeys the "quasi-Hebbian" rule:
dwjk/dt = −4Γ0 sinh(γS) sinh(γVj) sinh(γVk).    (3)
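As a toy illustration of Eq. (3) (Python; γ, S, and the voltage values below are assumptions made only for this example), note how the sign of the weight change tracks the correlation of the pre- and post-synaptic voltages:

```python
import math

def dw_dt(Vj, Vk, S=0.1, gamma=10.0, gamma0=1.0):
    """Quasi-Hebbian rule of Eq. (3) for the averaged 4-switch synapse."""
    return -4.0 * gamma0 * math.sinh(gamma * S) \
           * math.sinh(gamma * Vj) * math.sinh(gamma * Vk)

# Correlated activity (Vj, Vk of the same sign) drives the weight one way,
# anti-correlated activity the other way; S sets the overall sign and rate.
for Vj, Vk in [(0.2, 0.2), (0.2, -0.2)]:
    print(Vj, Vk, dw_dt(Vj, Vk))
```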
Fig. 3. (a) Generic structure of the simplest (feedforward, non-Hebbian) CrossNet. Red lines show "axonic", and blue lines "dendritic" nanowires. Gray squares are interfaces between nanowires and CMOS-based somas (b, c). Signs show the dendrite input polarities. Green circles denote molecular latching switches forming elementary synapses. Bold red and blue points are open-circuit terminations of the nanowires, which do not allow somas to interact in bypass of the synapses.
In the simplest cases (e.g., quasi-Hopfield networks with finite connectivity), the tri-level synaptic weights of the generic CrossNets are quite satisfactory, leading to just a very modest (~30%) network capacity loss. However, some applications (in particular, pattern classification) may require a larger number of weight quantization levels L (e.g., L ≈ 30 for a 1% fidelity [9]). This may be achieved by using compact square arrays (e.g., 4×4) of latching switches (Fig. 4).
Various species of CrossNets [9] differ also by the way the somatic cells are
distributed around the synaptic field. Figure 5 shows feedforward versions of two
CrossNet types most explored so far: the so-called FlossBar and InBar. The former
network is more natural for the implementation of multilayered perceptrons (MLP),
while the latter system is preferable for recurrent network implementations and also
allows a simpler CMOS design of somatic cells.
The most important advantage of CrossNets over previously suggested hardware neural networks is that they achieve enormous density combined with large cell connectivity M >> 1 in quasi-2D electronic circuits.
4 CrossNet training
CrossNet training faces several hardware-imposed challenges:
(i) The synaptic weight contribution provided by the elementary latching switch is binary, so that for most applications the multi-switch synapses (Fig. 4) are necessary.
(ii) The only way to adjust any particular synaptic weight is to turn ON or OFF the corresponding latching switch(es). This is only possible by applying a certain voltage V = Vj − Vk between the two corresponding nanowires. During this procedure, other nanodevices attached to the same wires should not be disturbed.
(iii) As stated above, synapse state switching is a statistical process, so that the degree of its "fuzziness" should be carefully controlled.
Fig. 4. Composite synapse for providing L = 2n² + 1 discrete levels of the weight in (a) operation and (b) weight adjustment modes. The dark-gray rectangles are resistive metallic strips at soma/nanowire interfaces.
Fig. 5. Two main CrossNet species: (a) FlossBar and (b) InBar, in the generic (feedforward, non-Hebbian, ternary-weight) case for the connectivity parameter M = 9. Only the nanowires and nanodevices coupling one cell (indicated with red dashed lines) to M post-synaptic cells (blue dashed lines) are shown; actually all the cells are similarly coupled.
We have shown that these challenges may be met using (at least) the following
training methods [9]:
(i) Synaptic weight import. This procedure starts with the training of a homomorphic "precursor" artificial neural network with continuous synaptic weights wjk, implemented in software, using one of the established methods (e.g., error backpropagation). Then the synaptic weights wjk are transferred to the CrossNet, with some "clipping" (rounding) due to the binary nature of elementary synaptic weights. To accomplish the transfer, pairs of somatic cells are sequentially selected via CMOS-level wiring. Using the flexibility of CMOS circuitry, these cells are reconfigured to apply external voltages ±VW to the axonic and dendritic nanowires leading to a particular synapse, while all other nanowires are grounded. The voltage level VW is selected so that it does not switch the synapses attached to only one of the selected nanowires, while the voltage 2VW applied to the synapse at the crosspoint of the selected wires is sufficient for its reliable switching. (In the composite synapses with quasi-continuous weights (Fig. 4), only a part of the corresponding switches is turned ON or OFF.)
(ii) Error backpropagation. The synaptic weight import procedure is straightforward when wjk may be simply calculated, e.g., for Hopfield-type networks. However, for very large CrossNets used, e.g., as pattern classifiers, the precursor network training may take an impracticably long time. In this case the direct training of a CrossNet may become necessary. We have developed two methods of such training, both based on "Hebbian" synapses consisting of 4 elementary synapses (latching switches) whose average weight dynamics obeys Eq. (3). This quasi-Hebbian rule may be used to implement the backpropagation algorithm either using periodic time-multiplexing [9] or in a continuous fashion, using the simultaneous propagation of signals and errors along the same dual-rail channels.
As a result, we may presently state that CrossNets can be taught to perform virtually all major functions demonstrated earlier with conventional neural networks, including corrupted-pattern restoration in the recurrent quasi-Hopfield mode and pattern classification in the feedforward MLP mode [11].
5 CrossNet performance estimates
The significance of this result may only be appreciated in the context of the unparalleled physical parameters of CMOL CrossNets. The only fundamental limitation on the half-pitch Fnano (Fig. 1) comes from quantum-mechanical tunneling between nanowires. If the wires are separated by vacuum, the corresponding specific leakage conductance becomes uncomfortably large (~10⁻¹² Ω⁻¹m⁻¹) only at Fnano = 1.5 nm; however, since realistic insulation materials (SiO2, etc.) provide somewhat lower tunnel barriers, let us use a more conservative value Fnano = 3 nm. Note that this value corresponds to 10¹² elementary synapses per cm², so that for 4M = 10⁴ and n = 4 the areal density of neural cells is close to 2×10⁷ cm⁻². Both numbers are higher than those for the human cerebral cortex, despite the fact that the quasi-2D CMOL circuits have to compete with the quasi-3D cerebral cortex.
With the typical specific capacitance of 3×10⁻¹⁰ F/m = 0.3 aF/nm, this gives a nanowire capacitance C0 ≈ 1 aF per working elementary synapse, because the corresponding segment has length 4Fnano. The CrossNet operation speed is determined mostly by the time constant τ0 of dendrite nanowire capacitance recharging through the resistances of open nanodevices. Since both the relevant conductance and capacitance increase similarly with M and n, τ0 ≈ R0C0.
The possibilities for reducing R0, and hence τ0, are limited mostly by the acceptable power dissipation per unit area, which is close to V0²/(2Fnano)²R0. For room-temperature operation, the voltage scale V0 ≈ Vt should be of the order of at least 30 kBT/e ≈ 1 V to avoid thermally-induced errors [9]. With our number for Fnano, and a relatively high but acceptable power consumption of 100 W/cm², we get R0 ≈ 10¹⁰ Ω (which is a very realistic value for single-molecule single-electron devices like the one shown in Fig. 3). With this number, τ0 is as small as ~10 ns. This means that the CrossNet speed may be approximately six orders of magnitude (!) higher than that of biological neural networks. Even scaling R0 up by a factor of 100, to bring the power consumption to a more comfortable level of 1 W/cm², would still leave us at least a four-orders-of-magnitude speed advantage.
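These estimates are easy to reproduce in a few lines (Python; the inputs are the values quoted in the text, and the lumped treatment is intended only for order-of-magnitude checks):

```python
# Inputs are the estimates quoted in the text.
F_nano = 3e-9                        # nanowire half-pitch [m]
area_per_synapse = (2 * F_nano) ** 2
print(1e-4 / area_per_synapse)       # ~2.8e12 synapses per cm^2 (order 10^12)

C0 = 1e-18                           # ~1 aF per working elementary synapse
R0 = 1e10                            # open-device resistance [Ohm]
tau0 = R0 * C0
print(tau0)                          # 1e-08 s = 10 ns
print(10e-3 / tau0)                  # vs a ~10 ms biological scale: ~1e6 (six orders)
```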
6 Discussion: Possible applications
These estimates make us believe that CMOL CrossNet chips may revolutionize neuromorphic network applications. Let us start with the example of relatively small (1-cm²-scale) chips used for recognition of a face in a crowd [11]. The most difficult feature of such recognition is the search for the face location, i.e., the optimal placement of a face on the image relative to the panel providing input for the processing network. The enormous density and speed of CMOL hardware give the possibility to time-and-space multiplex this task (Fig. 6). In this approach, the full image (say, formed by CMOS photodetectors on the same chip) is divided into P rectangular panels of h×w pixels, corresponding to the expected size and approximate shape of a single face. A CMOS-implemented communication channel passes input data from each panel to the corresponding CMOL neural network, providing its shift in time, say using the TV scanning pattern (red line in Fig. 6). The standard methods of image classification require the network to have just a few hidden layers, so that the time interval Δt necessary for each mapping position may be so short that the total pattern recognition time T = hwΔt may be acceptable even for online face recognition.
Fig. 6. Scan mapping of the input image on CMOL CrossNet inputs. Red lines show the possible time sequence of image pixels sent to a certain input of the network processing the image from the upper-left panel of the pattern.
Indeed, let us consider a 4-megapixel image partitioned into 4K 32×32-pixel panels (h = w = 32). This panel will require an MLP net with several (say, four) layers with 1K cells each in order to compare the panel image with ~10³ stored faces. With the feasible 4-nm nanowire half-pitch, and 65-level synapses (sufficient for better than 99% fidelity [9]), each interlayer crossbar would require a chip area of about (4K × 64 nm)² = 64×64 μm², fitting 4×4K of them on a ~0.6 cm² chip. (The CMOS somatic-layer and communication-system overheads are negligible.) With an acceptable power consumption of the order of 10 W/cm², the input-to-output signal propagation in such a network will take only about 50 ns, so that Δt may be of the order of 100 ns and the total time T = hwΔt of processing one frame of the order of 100 microseconds, much shorter than the typical TV frame time of ~10 milliseconds. The remaining two-orders-of-magnitude time gap may be used, for example, for double-checking the results via stopping the scan mapping (Fig. 6) at the most promising position. (For this, a simple feedback from the recognition output to the mapping communication system is necessary.)
It is instructive to compare the estimated CMOL chip speed with that of the implementation of a similar parallel network ensemble on a CMOS signal processor (say, also combined on the same chip with an array of CMOS photodetectors). Even assuming an extremely high performance of 30 billion additions/multiplications per second, we would need ~4×4K×1K×(4K)²/(30×10⁹) ≈ 10⁴ seconds ~ 3 hours per frame, evidently incompatible with online image stream processing.
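Both throughput figures can be checked numerically (Python; the operation count follows the rough formula above):

```python
h = w = 32
dt = 100e-9                    # per-position network propagation budget [s]
T_frame = h * w * dt
print(T_frame)                 # ~1.0e-4 s = 100 microseconds per frame

# CMOS signal-processor comparison: 4 layers x 4K panels, 1K cells per layer,
# (4K)^2 multiply-adds per layer pair, at 30e9 operations per second.
ops = 4 * 4096 * 1024 * 4096**2
print(ops / 30e9)              # ~9.4e3 s, i.e. the quoted ~10^4 s ~ 3 hours
```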
Let us finish with a brief (and much more speculative) discussion of the possible long-term prospects of CMOL CrossNets. Eventually, large-scale (~30×30 cm²) CMOL circuits may become available. According to the estimates given in the previous section, the integration scale of such a system (in terms of both neural cells and synapses) will be comparable with that of the human cerebral cortex. Equipped with a set of broadband sensor/actuator interfaces, such a (necessarily hierarchical) system may be capable, after a period of initial supervised training, of further self-training in the process of interaction with the environment, with a speed several orders of magnitude higher than that of its biological prototypes. Needless to say, the successful development of such self-developing systems would have a major impact not only on all information technologies, but also on society as a whole.
Acknowledgments
This work has been supported in part by the AFOSR, MARCO (via FENA Center), and NSF. Valuable contributions made by Simon Fölling, Özgür Türel and Ibrahim Muckra, as well as useful discussions with P. Adams, J. Barhen, D. Hammerstrom, V. Protopopescu, T. Sejnowski, and D. Strukov are gratefully acknowledged.
References
[1] Frank, D. J. et al. (2001) Device scaling limits of Si MOSFETs and their application dependencies. Proc. IEEE 89(3): 259-288.
[2] Likharev, K. K. (2003) Electronics below 10 nm, in J. Greer et al. (eds.), Nano and Giga Challenges in Microelectronics, pp. 27-68. Amsterdam: Elsevier.
[3] Likharev, K. K. and Strukov, D. B. (2005) CMOL: Devices, circuits, and architectures, in G. Cuniberti et al. (eds.), Introducing Molecular Electronics, Ch. 16. Springer, Berlin.
[4] Fölling, S., Türel, Ö. & Likharev, K. K. (2001) Single-electron latching switches as nanoscale synapses, in Proc. of the 2001 Int. Joint Conf. on Neural Networks, pp. 216-221. Mount Royal, NJ: Int. Neural Network Society.
[5] Wang, W. et al. (2003) Mechanism of electron conduction in self-assembled alkanethiol monolayer devices. Phys. Rev. B 68(3): 035416 1-8.
[6] Stan, M. et al. (2003) Molecular electronics: From devices and interconnect to circuits and architecture. Proc. IEEE 91(11): 1940-1957.
[7] Strukov, D. B. & Likharev, K. K. (2005) Prospects for terabit-scale nanoelectronic memories. Nanotechnology 16(1): 137-148.
[8] Strukov, D. B. & Likharev, K. K. (2005) CMOL FPGA: A reconfigurable architecture for hybrid digital circuits with two-terminal nanodevices. Nanotechnology 16(6): 888-900.
[9] Türel, Ö. et al. (2004) Neuromorphic architectures for nanoelectronic circuits. Int. J. of Circuit Theory and Appl. 32(5): 277-302.
[10] See, e.g., Hertz, J. et al. (1991) Introduction to the Theory of Neural Computation. Cambridge, MA: Perseus.
[11] Lee, J. H. & Likharev, K. K. (2005) CrossNets as pattern classifiers. Lecture Notes in Computer Science 3575: 434-441.
Goal-Based Imitation as Probabilistic Inference
over Graphical Models
Deepak Verma
Dept. of CSE, Univ. of Washington,
Seattle, WA 98195-2350
[email protected]
Rajesh P. N. Rao
Dept. of CSE, Univ. of Washington,
Seattle, WA 98195-2350
[email protected]
Abstract
Humans are extremely adept at learning new skills by imitating the actions of others. A progression of imitative abilities has been observed
in children, ranging from imitation of simple body movements to goal-based imitation based on inferring intent. In this paper, we show that the
problem of goal-based imitation can be formulated as one of inferring
goals and selecting actions using a learned probabilistic graphical model
of the environment. We first describe algorithms for planning actions to
achieve a goal state using probabilistic inference. We then describe how
planning can be used to bootstrap the learning of goal-dependent policies by utilizing feedback from the environment. The resulting graphical
model is then shown to be powerful enough to allow goal-based imitation. Using a simple maze navigation task, we illustrate how an agent
can infer the goals of an observed teacher and imitate the teacher even
when the goals are uncertain and the demonstration is incomplete.
1 Introduction
One of the most powerful mechanisms of learning in humans is learning by watching. Imitation provides a fast, efficient way of acquiring new skills without the need for extensive
and potentially dangerous experimentation. Research over the past decade has shown that
even newborns can imitate simple body movements (such as facial actions) [1]. While the
neural mechanisms underlying imitation remain unclear, recent research has revealed the existence of "mirror neurons" in the primate brain, which fire both when a monkey watches
existence of ?mirror neurons? in the primate brain which fire both when a monkey watches
an action or when it performs the same action [2].
The most sophisticated forms of imitation are those that require an ability to infer the
underlying goals and intentions of a teacher. In this case, the imitating agent attributes not
only visible behaviors to others, but also utilizes the idea that others have internal mental
states that underlie, predict, and generate these visible behaviors. For example, infants
that are about 18 months old can readily imitate actions on objects, e.g., pulling apart a
dumbbell shaped object (Fig. 1a). More interestingly, they can imitate this action even
when the adult actor accidentally under- or overshot his target, or the hands slipped several
times, leaving the goal-state unachieved (Fig. 1b)[3]. They were thus presumably able to
infer the actor?s goal, which remained unfulfilled, and imitate not the observed action but
the intended one.
In this paper, we propose a model for intent inference and goal-based imitation that utilizes
probabilistic inference over graphical models. We first describe how the basic problems
of planning an action sequence and learning policies (state to action mappings) can be
solved through probabilistic inference. We then illustrate the applicability of the learned
graphical model to the problems of goal inference and imitation. Goal inference is achieved
by utilizing one?s own learned model as a substitute for the teacher?s. Imitation is achieved
by using one?s learned policies to reach an inferred goal state. Examples based on the
classic maze navigation domain are provided throughout to help illustrate the behavior
of the model. Our results suggest that graphical models provide a powerful platform for
modeling and implementing goal-based imitation.
(a)
(b)
Figure 1: Example of Goal-Based Imitation by Infants: (a) Infants as young as 14 months old can
imitate actions on objects as seen on TV (from [4]). (b) Human actor demonstrating an unsuccessful
act. Infants were subsequently able to correctly infer the intent of the actor and successfully complete
the act (from [3]).
2 Graphical Models
We first describe how graphical models can be used to plan action sequences and learn
goal-based policies, which can subsequently be used for goal inference and imitation. Let
?S be the set of states in the environment, ?A the set of all possible actions available to
the agent, and ?G the set of possible goals. We assume all three sets are finite. Each goal g
represents a target state Goalg ? ?S . At time t the agent is in state st and executes action
at . gt represents the current goal that the agent is trying to reach at time t. Executing the
action at changes the agent?s state in a stochastic manner given by the transition probability
P (st+1 | st , at ), which is assumed to be independent of t i.e., P (st+1 = s0 | st = s, at =
a) = ?s0 sa .
Starting from an initial state s1 = s and a desired goal state g, planning involves computing
a series of actions a1:T to reach the goal state, where T represents the maximum number
of time steps allowed (the ?episode length?). Note that we do not require T to be exactly
equal to the shortest path to the goal, just as an upper bound on the shortest path length. We
use a, s, g to represent a specific value for action, state, and goal respectively. Also, when
obvious from the context, we use s for st = s, a for at = a and g for gt = g.
In the case where the state st is fully observed, we obtain the graphical model in Fig. 2a,
which is also used in Markov Decision Process (MDP) [5] (but with a reward function).
The agent needs to compute a stochastic policy ?
? t (a | s, g) that maximizes the probability
P (sT +1 = Goalg | st = s, gt = g). For a large time horizon (T 1), the policy is
independent of t i.e. ?
?t (a | s, g) =?
? (a | s, g) (a stationary policy). A more realistic
scenario is where the state st is hidden but some aspects of it are visible. Given the current
state st = s, an observation o is produced with the probability P (ot = o | st = s) =
Figure 2: Graphical Models: (a) The standard MDP graphical model: the dependencies between the nodes from time step t to t + 1 are represented by the transition probabilities, and the dependency between actions and states is encoded by the policy. (b) The graphical model used in this paper (note the addition of the goal, observation, and "reached" nodes). See text for more details.
θso. In this paper, we assume the observations are discrete and drawn from the set ΩO, although the approach can be easily generalized to the case of continuous observations (as in HMMs, for example). We additionally include a goal variable gt and a "reached" variable rt, resulting in the graphical model in Fig. 2b (this model is similar to the one used in partially observable MDPs (POMDPs), but those lack the goal/reached variables). The goal variable gt represents the current goal the agent is trying to reach, while the variable rt is a boolean variable that assumes the value 1 whenever the current state equals the current goal state and 0 otherwise. We use rt to help infer the shortest path to the goal state (given an upper bound T on the path length); this is done by constraining the actions that can be selected once the goal state is reached (see next section). Note that rt can also be used to model the switching of goal states (once a goal is reached) and to implement hierarchical extensions of the present model. The current action at now depends not only on the current state but also on the current goal gt, and on whether we have reached the goal (as indicated by rt).
The Maze Domain: To illustrate the proposed approach, we use the standard stochastic maze domain that has traditionally been used in the MDP and reinforcement learning literature [6, 7]. Figure 3 shows the 7×7 maze used in the experiments. Solid squares denote walls. There are five possible actions: up, down, left, right, and stayput. Each action takes the agent into the intended cell with high probability. This probability is governed by the noise parameter ε, which is the probability that the agent will end up in one of the adjoining (non-wall) squares or remain in the same square. For example, for the maze in Fig. 3, P([3,5] | [4,5], left) = ε while P([4,4] | [4,5], left) = 1 − 3ε (we use [i,j] to denote the cell in the ith row and jth column from the top left corner).
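To make the transition notation concrete, the following sketch builds such a model in Python. The 3×3 open grid and the exact slip semantics are simplifications chosen for illustration, not the paper's 7×7 maze; for interior cells they reproduce the 1 − 3ε / ε pattern above:

```python
import numpy as np

moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1),
         "stayput": (0, 0)}

def transition_model(free_cells, eps):
    """tau[s_next, s, a] = P(s_{t+1} = s_next | s_t = s, a_t = a) on a grid maze."""
    idx = {c: i for i, c in enumerate(free_cells)}
    n, acts = len(free_cells), list(moves)
    tau = np.zeros((n, n, len(acts)))
    for s, (r, c) in enumerate(free_cells):
        nbrs = [idx[(r + dr, c + dc)] for dr, dc in moves.values()
                if (r + dr, c + dc) in idx and (dr, dc) != (0, 0)]
        for a, act in enumerate(acts):
            dr, dc = moves[act]
            target = idx.get((r + dr, c + dc), s)   # walls bounce the agent back
            for nb in nbrs:                          # slip into a free neighbor
                tau[nb, s, a] += eps
            tau[target, s, a] += 1.0 - eps * len(nbrs)
    return tau, idx

cells = [(r, c) for r in range(3) for c in range(3)]  # toy 3x3 open maze
tau, idx = transition_model(cells, eps=0.05)
print(tau[:, idx[(1, 1)], 3])  # distribution after "right" from the center
```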
3 Planning and Learning Policies
3.1 Planning using Probabilistic Inference
To simplify the exposition, we first assume full observability (θso = δ(s, o)). We also assume that the environment model τ is known (the problem of learning τ is addressed later). The problem of planning can then be stated as follows: given a goal state g, an initial state s, and a number of time steps T, what is the sequence of actions â1:T that maximizes the probability of reaching the goal state? We compute these actions using the most probable explanation (MPE) method, a standard routine in graphical model packages (see [7] for an alternate approach). When MPE is applied to the graphical model in Fig. 2b, we obtain:

â1:T, ŝ2:T, ĝ1:T, r̂1:T = argmax P(a1:T, s2:T, g1:T, r1:T | s1 = s, sT+1 = Goalg)    (1)
When using the MPE method, the "reached" variable rt can be used to compute the shortest path to the goal. For P(a | g, s, r), we set the prior for the stayput action to be very high when rt = 1 and uniform otherwise. This breaks the isomorphism of the MPE action sequences with respect to the stayput action; i.e., for s1 = [4,6], goal = [4,7], and T = 2, the probability of (right, stayput) becomes much higher than that of (stayput, right) (otherwise, they would have the same posterior probability). Thus, the stayput action is discouraged unless the agent has reached the goal. This technique is quite general, in the sense that we can always augment ΩA with a no-op action and use this technique based on rt to push the no-op actions to the end of a T-length action sequence, for a pre-chosen upper bound T.
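The MPE computation of Eq. (1) can be approximated with a Viterbi-style max-product pass over the state trellis. The sketch below (Python, reusing tau and idx from the previous listing) is a simplified stand-in for the MPE routine of a full graphical-model package; the stay_prior argument implements the high stayput prior described above:

```python
import numpy as np

def mpe_plan(tau, s0, goal, T, stay_idx=4, stay_prior=100.0):
    """Most probable action sequence from s0 that ends in `goal` after T steps."""
    n, _, n_a = tau.shape
    delta = np.full((T + 1, n), -np.inf)   # best log-prob of reaching each state
    delta[0, s0] = 0.0
    back = np.zeros((T + 1, n, 2), dtype=int)
    for t in range(T):
        for s in range(n):
            if delta[t, s] == -np.inf:
                continue
            for a in range(n_a):
                # unnormalized prior: strongly prefer stayput once at the goal
                prior = stay_prior if (s == goal and a == stay_idx) else 1.0
                for s2 in range(n):
                    v = delta[t, s] + np.log(prior * tau[s2, s, a] + 1e-300)
                    if v > delta[t + 1, s2]:
                        delta[t + 1, s2] = v
                        back[t + 1, s2] = (s, a)
    plan, s = [], goal                      # backtrack from the goal at time T
    for t in range(T, 0, -1):
        s, a = back[t, s]
        plan.append(int(a))
    return plan[::-1]

print(mpe_plan(tau, s0=idx[(0, 0)], goal=idx[(2, 2)], T=6))
```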
Figure 3: Planning and Policy Learning: (a) shows three example plans (action sequences) computed using the MPE method. The plans are shown as colored lines capturing the direction of actions. The numbers (0.690, 0.573, 0.421) denote the probability of success of each plan. The longer plans have lower probability of success, as expected.
3.2 Policy Learning using Planning
Executing a plan in a noisy environment may not always result in the goal state being reached. However, in the instances where a goal state is indeed reached, the executed action sequence can be used to bootstrap the learning of an optimal policy π̂(a | s, g), which represents the probability of action a in state s when the goal state to be reached is g. We define optimality in terms of reaching the goal using the shortest path. Note that the optimal policy may differ from the prior P(a | s, g), which counts all actions executed in state s for goal g, regardless of whether the plan was successful.
MDP Policy Learning: Algorithm 1 shows a planning-based method for learning policies for an MDP (both τ and π̂ are assumed unknown and initialized to a prior distribution, e.g., uniform). The agent selects a random start state and a goal state (according to P(g1)), infers the MPE plan â1:T using the current τ, executes it, and updates the frequency counts for τs′sa based on the observed st and st+1 for each at. The policy π̂(a | s, g) is only updated (by updating the action frequencies) if the goal g was reached. To learn an accurate τ, the algorithm is initially biased towards exploration of the state space based on the parameter ε (the "exploration probability"). ε decreases by a decay factor η (0 < η < 1) with each iteration, so that the algorithm transitions to an "exploitation" phase, favoring the execution of the MPE plan, once the transition model is well learned.
POMDP Policy Learning: In the case of partial observability, Algorithm 1 is modified to compute the plan â1:T based on the observation o1 = o as evidence instead of s1 = s in Eq. 1. The plan is executed to record observations o2:T+1, which are then used to compute the MPE estimate for the hidden states: ŝ1:T+1, ĝ1:T, r̂1:T+1 = argmax P(s1:T+1, g1:T, r1:T+1 | o1:T+1, â1:T, gT+1 = g). The MPE estimate ŝ1:T+1 is then used in place of the unobserved states s1:T+1 to update π̂ and τ.
Results: Figure 4a shows the error in the learned transition model and policy as a function of the number of iterations of the algorithm. Error in τs′sa was defined as the squared sum of differences between the learned and true transition parameters. Error in the learned policy was defined as the number of disagreements between the optimal deterministic policy for each goal (computed via policy iteration) and argmaxa π̂(a | s, g), summed over all goals.
Algorithm 1 Policy learning in an unknown environment
1: Initialize transition model τs′sa, policy π̂(a | s, g), ε, and numTrials.
2: for iter = 1 to numTrials do
3:   Choose a random start location s1 based on the prior P(s1).
4:   Pick a goal g according to the prior P(g1).
5:   With probability ε:
6:     a1:T = random action sequence.
7:   Otherwise:
8:     Compute the MPE plan as in Eq. 1 using the current τs′sa. Set a1:T = â1:T.
9:   Execute a1:T and record the observed states s2:T+1.
10:  Update τs′sa based on a1:T and s1:T+1.
11:  If the plan was successful, update the policy π̂(a | s, g) using a1:T and s1:T+1.
12:  ε = ε × η
13: end for
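For concreteness, the loop can be written as follows (Python; this is a simplified analogue of Algorithm 1, not the authors' implementation, and it reuses tau, idx, and mpe_plan from the earlier sketches; the trial count and decay schedule are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next(tau, s, a):
    """Sample s_{t+1} from the true environment dynamics."""
    return int(rng.choice(tau.shape[0], p=tau[:, s, a]))

def learn(tau_true, goals, T=6, trials=400, eps=1.0, decay=0.99):
    n, _, n_a = tau_true.shape
    trans_counts = np.ones((n, n, n_a))             # uniform Dirichlet-style prior
    policy_counts = np.ones((len(goals), n, n_a))
    for _ in range(trials):
        s, g = int(rng.integers(n)), int(rng.integers(len(goals)))
        tau_hat = trans_counts / trans_counts.sum(axis=0, keepdims=True)
        if rng.random() < eps:                      # explore
            plan = [int(a) for a in rng.integers(n_a, size=T)]
        else:                                       # exploit: MPE plan (Eq. 1)
            plan = mpe_plan(tau_hat, s, goals[g], T)
        states = [s]
        for a in plan:
            states.append(sample_next(tau_true, states[-1], a))
        for s0, a, s1 in zip(states, plan, states[1:]):
            trans_counts[s1, s0, a] += 1            # update transition counts
        if states[-1] == goals[g]:                  # success: update the policy
            for s0, a in zip(states, plan):
                policy_counts[g, s0, a] += 1
        eps *= decay
    pi_hat = policy_counts / policy_counts.sum(axis=2, keepdims=True)
    return tau_hat, pi_hat

goals = [idx[(2, 2)], idx[(0, 2)]]
tau_hat, pi_hat = learn(tau, goals)
```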
Both errors decrease to zero with an increasing number of iterations. The policy error
decreases only after the transition model error becomes significantly small because without
an accurate estimate of ? , the MPE plan is typically incorrect and the agent rarely reaches
the goal state, resulting in little or no learning of the policy. Fig. 4b shows the maximum probability action argmaxa π̂(a | s, g) learned for each state (maze location) for one of the goals. It is clear that the optimal action has been learned by the algorithm for all locations
to reach the given goal state. The results for the POMDP case are shown in Fig. 4c and
d. The policy error decreases but does not reach zero because of perceptual ambiguity at
certain locations such as corners, where two (or more) actions may have roughly equal
probability (see Fig. 4d). The optimal strategy in these ambiguous states is to sample from
these actions.
Figure 4: Learning Policies for an MDP and a POMDP: (a) shows the error in the transition model P(st+1 | st, at) and in the policy, with respect to the true transition model and the optimal policy, for the maze MDP. (b) The optimal policy learned for one of the 3 goals. (c) and (d) show the corresponding results for the POMDP case (the transition model was assumed to be known). The long arrows represent the maximum probability action, while the short arrows show all the high-probability actions when there is no clear winner.
4 Inferring Intent and Goal-Based Imitation
Consider a task where the agent gets observations o1:t from observing a teacher and seeks to imitate the teacher. We use P(ot = o | st = s) = θso in Fig. 2b (for the examples here, θso was the same as in the previous section). Also, for P(a | s, g, rt = 0), we use the policy π̂(a | s, g) learned as in the previous section. The goal of the agent is to infer the intention of the teacher given a (possibly incomplete) demonstration and to reach the intended goal using its own policy (which could be different from the teacher's optimal policy). Using the graphical model formulation, the problem of goal inference reduces to finding the marginal P(gT | o1:t), which can be computed efficiently using standard techniques such as belief propagation. Imitation is accomplished by choosing the goal with the highest probability and executing actions to reach that goal.
Fig. 5a shows the results of goal inference for the set of noisy teacher observations in
Fig. 5b. The three goal locations are indicated by red, blue, and green squares respectively.
Note that the inferred goal probabilities correctly reflect the putative goal(s) of the teacher
at each point in the teacher trajectory. In addition, even though the teacher demonstration is
incomplete, the imitator can perform goal-based imitation by inferring the teacher's most
likely goal as shown in Fig. 5c. This mimics the results reported by [3] on the intent
inference by infants.
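When observations are (nearly) fully observed, the goal marginal can also be obtained by simple forward filtering. The following self-contained toy (Python; the chain world and the goal-conditioned policies are invented for illustration and are not the maze of Fig. 5) shows the computation:

```python
import numpy as np

def goal_posterior(obs, tau, pi):
    """P(g | o_{1:t}) by forward filtering, assuming o_t = s_t."""
    n_g = pi.shape[0]
    post = np.full(n_g, 1.0 / n_g)
    for s, s_next in zip(obs, obs[1:]):
        # marginalize the teacher's (unobserved) action under each goal's policy
        lik = np.array([
            sum(pi[g, s, a] * tau[s_next, s, a] for a in range(tau.shape[2]))
            for g in range(n_g)])
        post *= lik
        post /= post.sum()
    return post

# Toy 3-state chain, actions {left, right}; goal 0 prefers left, goal 1 right.
tau = np.zeros((3, 3, 2))
for s in range(3):
    tau[max(s - 1, 0), s, 0] = 1.0   # "left" moves down the chain
    tau[min(s + 1, 2), s, 1] = 1.0   # "right" moves up the chain
pi = np.array([[[0.9, 0.1]] * 3,     # goal 0: mostly "left"
               [[0.1, 0.9]] * 3])    # goal 1: mostly "right"
print(goal_posterior([1, 2, 2], tau, pi))  # rightward moves favor goal 1
```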
Figure 5: Goal Inference and Goal-Based Imitation: (a) the goal probabilities P(gT | o1:t) inferred at each time step from the teacher observations, for Goal 1 [4,7], Goal 2 [1,2], and Goal 3 [2,7]. (b) The teacher observations, which are noisy and include a detour while en route to the red goal; the teacher demonstration is incomplete and stops short of the red goal. (c) The imitator infers the most likely goal using (a) and performs goal-based imitation while avoiding the detour. (The numbers t in a cell in (b) and (c) represent ot and st respectively.)
5 Online Imitation with Uncertain Goals
Now consider a task where the goal is to imitate a teacher online (i.e., simultaneously
with the teacher). The teacher observations are assumed to be corrupted by noise and
may include significant periods of occlusion where no data is available. The graphical
model framework provides an elegant solution to the problem of planning and selecting
actions when observations are missing and only a probability distribution over goals is
available. The best current action can be picked using the marginal P(at | o1:t), which can be computed efficiently for the graphical model in Fig. 2b. This marginal is equal to Σi P(at | gi, o1:t) P(gi | o1:t), i.e., the policy for each goal weighted by the likelihood of that goal given past teacher observations, which corresponds to our intuition of how actions should be picked when goals are uncertain.
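Given the filtered goal posterior, this goal-weighted action marginal is a single matrix-vector product. Continuing the toy quantities (pi) from the previous sketch:

```python
import numpy as np

def action_marginal(post, pi, s):
    """P(a_t | o_{1:t}) = sum_i P(a_t | g_i, s_t) P(g_i | o_{1:t})."""
    return post @ pi[:, s, :]

post = np.array([0.2, 0.8])           # e.g. goal posterior after filtering
print(action_marginal(post, pi, s=1)) # -> [0.26, 0.74]: hedges toward goal 1
```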
Fig. 6a shows the inferred distribution over goal states as the teacher follows a trajectory
given by the noisy observations in Fig. 6b. Initially, all goals are nearly equally likely (with
a slight bias for the nearest goal). Although the goal is uncertain and certain portions of
the teacher trajectory are occluded¹, the agent is still able to make progress towards regions most likely to contain any probable goal states, and it is able to catch up with the teacher when observations become available again (Fig. 6c).
¹We simulated occlusion using a special observation symbol which carried no information about the current state, i.e., P(occluded | s) = ε for all s (ε ≪ 1).
Figure 6: Online Imitation with Uncertain Goals: (a) the goal probabilities P(gT | o1:t) inferred by the agent at each time step, for Goal 1 [4,7], Goal 2 [1,2], and Goal 3 [2,7], given the noisy teacher trajectory in (b). (b) Observations of the teacher; missing numbers indicate times at which the teacher was occluded. (c) The agent is able to follow the teacher trajectory even when the teacher is occluded, based on the evolving goal distribution in (a).
6 Conclusions
We have proposed a new model for intent inference and goal-based imitation based on probabilistic inference in graphical models. The model assumes an initial learning phase in which the agent explores the environment (cf. body babbling in infants [8]) and learns a graphical model capturing the sensory consequences of motor actions. The learned model is then used for planning action sequences to goal states and for learning policies. The resulting graphical model then serves as a platform for intent inference and goal-based imitation.
Our model builds on the proposals of several previous researchers. It extends the approach
of [7] from planning in a traditional state-action Markov model to a full-fledged graphical
model involving states, actions, and goals with edges for capturing conditional distributions
denoting policies. The indicator variable rt used in our approach is similar to the ones used
in some hierarchical graphical models [9, 10, 11]. However, these papers do not address
the issue of action selection or imitation. Several models of imitation have previously been
proposed [12, 13, 14, 15, 16, 17]; these models are typically not probabilistic and have
focused on trajectory following rather than intent inference and goal-based imitation.
An important issue yet to be resolved is the scalability of the proposed approach. The
Bayesian model requires both a learned environment model as well as a learned policy. In
the case of the maze example, these were learned using a relatively small number of trials
due to small size of the state space. A more realistic scenario involving, for example, a
human or a humanoid robot would presumably require an extremely large number of trials
during learning due to the large number of degrees-of-freedom available; fortunately, the
problem may be alleviated in two ways: first, only a small portion of the state space may be
physically realizable due to constraints imposed by the body or environment; second, the
agent could selectively refine its models during imitative sessions. Hierarchical state space
models may also help in this regard.
The probabilistic model we have proposed also opens up the possibility of applying
Bayesian methodologies such as manipulation of prior probabilities of task alternatives to
obtain a deeper understanding of goal inference and imitation in humans. For example, one
could explore the effects of biasing a human subject towards particular classes of actions
(e.g., through repetition) under particular sets of conditions. One could also manipulate the
learned environment model used by subjects with the help of virtual reality environments.
Such manipulations have yielded valuable information regarding the type of priors and internal models that the adult human brain uses in perception (see, e.g., [18]) and in motor
learning [19]. We believe that the application of Bayesian techniques to imitation could
shed new light on the problem of how infants acquire internal models of the people and
objects they encounter in the world.
References
[1] A. N. Meltzoff and M. K. Moore. Newborn infants imitate adult facial gestures. Child Development, 54:702-709, 1983.
[2] G. Rizzolatti, L. Fadiga, L. Fogassi, and V. Gallese. From mirror neurons to imitation, facts, and speculations. In A. N. Meltzoff and W. Prinz (Eds.), The imitative mind: Development, evolution, and brain bases, pages 247-266, 2002.
[3] A. N. Meltzoff. Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology, 31:838-850, 1995.
[4] A. N. Meltzoff. Imitation of televised models by infants. Child Development, 59:1221-1229, 1988.
[5] C. Boutilier, T. Dean, and S. Hanks. Decision-theoretic planning: Structural assumptions and computational leverage. Journal of AI Research, 11:1-94, 1999.
[6] R. S. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[7] H. Attias. Planning by probabilistic inference. In Proceedings of the 9th Int. Workshop on AI and Statistics, 2003.
[8] R. P. N. Rao, A. P. Shon, and A. N. Meltzoff. A Bayesian model of imitation in infants and robots. In Imitation and Social Learning in Robots, Humans, and Animals. Cambridge University Press, 2004.
[9] G. Theocharous, K. Murphy, and L. P. Kaelbling. Representing hierarchical POMDPs as DBNs for multi-scale robot localization. ICRA, 2004.
[10] S. Fine, Y. Singer, and N. Tishby. The hierarchical hidden Markov model: Analysis and applications. Machine Learning, 32(1):41-62, 1998.
[11] H. Bui, D. Phung, and S. Venkatesh. Hierarchical hidden Markov models with general state hierarchy. In AAAI 2004, 2004.
[12] G. Hayes and J. Demiris. A robot controller using learning by imitation. In Proceedings of the 2nd International Symposium on Intelligent Robotic Systems, Grenoble, France, pages 198-204, 1994.
[13] M. J. Mataric and M. Pomplun. Fixation behavior in observation and imitation of human movement. Cognitive Brain Research, 7:191-202, 1998.
[14] S. Schaal. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3:233-242, 1999.
[15] A. Billard and K. Dautenhahn. Experiments in social robotics: grounding and use of communication in robotic agents. Adaptive Behavior, 7:3-4, 2000.
[16] C. Breazeal and B. Scassellati. Challenges in building robots that imitate people. In K. Dautenhahn and C. L. Nehaniv (Eds.), Imitation in animals and artifacts, pages 363-390, 2002.
[17] K. Dautenhahn and C. Nehaniv. Imitation in Animals and Artifacts. Cambridge, MA: MIT Press, 2002.
[18] R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki (Eds.). Probabilistic Models of the Brain: Perception and Neural Function. Cambridge, MA: MIT Press, 2002.
[19] K. P. Körding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244-247, 2004.
Learning Dense 3D Correspondence
Florian Steinke∗, Bernhard Schölkopf∗, Volker Blanz+
∗ Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
{steinke, bs}@tuebingen.mpg.de
+ Universität Siegen, 57068 Siegen, Germany
[email protected]
Abstract
Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are
difficult to capture in an a priori criterion. While previous work has used a priori
criteria which in some cases led to very good results, the present paper explores
whether it is possible to learn a combination of features that, for a given training
set of aligned human heads, characterizes the notion of correct correspondence.
By optimizing this criterion, we are then able to compute correspondence and
morphs for novel heads.
1 Introduction
Establishing 3D correspondence between surfaces such as human faces is a crucial element of class-specific representations of objects in computer vision and graphics. On faces, for example, corresponding points may be the tips of the noses in 3D scans of different individuals. Dense correspondence is a mapping or "warp" from all points of a surface onto another surface (in some cases,
including the present work, extending from the surface to the embedding space). Once this mapping
is established, it is straightforward, for instance, to compute morphs between objects. More importantly, if correspondence mappings between a class of objects and a reference object have been
established, we can represent each object by its mapping, leading to a linear representation that is
able to also describe new objects of similar shape and texture (for further details, see [1]).
The practical relevance of surface correspondence has been increasing over the last years. In computer graphics, applications involve morphing, shape modeling and animation. In computer vision,
an increasing number of algorithms for face and object recognition based on 2D images or 3D scans,
as well as shape retrieval in databases and 3D surface reconstruction from images, rely on shape representations that are built upon dense surface correspondence.
Unlike existing algorithms that define some ad-hoc criteria for identifying corresponding points
on two objects, we treat correspondence as a machine learning problem and propose a data-driven
approach that learns the relevant criteria from a dataset of given object correspondences.
In stereo vision and optical flow [2, 3], a correspondence is correct if and only if it maps a point in
one scene to a point in another scene which stems from the same physical point. In contrast, correspondence between different objects is not a well-defined problem. When two faces are compared,
only some anatomically unique features such as the corners of the eyes are clearly corresponding,
while it may be difficult to define how smooth regions, such as the cheeks and the forehead, are
supposed to be mapped onto each other. On a more fundamental level, however, even the problem
of matching the eyes is difficult to cast in a formal way, and in fact this matching involves many
of the basic problems of computer vision and feature detection. In a given application, the desired
correspondence can be dependent on anatomical facts, measures of shape similarity, or the overall
layout of features on the surface. However, it may also depend on the properties of human perception, on functional or semantic issues, on the context within a given object class or even on social
convention. Due to the problematic and challenging nature of the correspondence problem, our correspondence learning algorithm may be a more appropriate approach than existing techniques, as it
is often easier to provide a set of examples of the desired correspondences than a formal criterion
for correct correspondence.
In a nutshell, the main idea of our approach is as follows. Given two objects O1 and O2 , we are
seeking a correspondence mapping τ such that certain properties of x (relative to O1) are preserved
in τ(x) (relative to O2) – they are invariant. These properties depend on the object class and as
explained above, we cannot hope to characterize them comprehensively a priori. However, if we are
given examples of correct and incorrect correspondences, we can attempt to learn properties which
are invariant for correct correspondences, while for incorrect correspondences, they are not. We
shall do this by providing a dictionary of potential properties (such as geometric features, or texture
properties) and approximating a "true" property characterizing correspondence as an expansion in
that dictionary. We will call this property warp-invariant feature and show that its computation can
be cast as a problem of oriented PCA.
The remainder of the paper is structured as follows: in Section 2 we review some related work,
whereas in Section 3 we set up our general framework for computing correspondence fields. Following this, we explain in Section 4 how to learn the characteristic properties for correspondence
and continue to explain two new feature functions in Section 5. We give implementation details and
experimental results in Section 6 and conclude in Section 7.
2
Related Work
The problem of establishing dense correspondence has been addressed in the domain of 2D images,
on surfaces embedded in 3D space, and on volumetric data. In the image domain, correspondence
from optical flow [2, 3] has been used to describe the transformations of faces with pose changes
and facial expressions [4], and to describe the differences in the shapes of individual faces [5].
An algorithm for computing correspondence on parameterized 3D surfaces has been introduced for
creating a class-specific representation of human faces [1] and bodies [6]. [7] propose a method
that is designed to align three dimensional medical images using a mutual information criterion.
Another interesting approach is [8]: they formulate the problem in a probabilistic setup and then
apply standard graphical model inference algorithms to compute the correspondence. Their mesh
based method uses a smoothness functional and features based on spin images. See the review [9]
for an overview of a wide range of additional correspondence algorithms.
Algorithms that are applied to 3D faces typically rely on surface parameterizations, such as cylindrical coordinates, and then compute optical flow on the texture map as well as the depth image
[1]. This algorithm yields plausible results, to which we will compare our method. However, the
approach cannot be applied unless a parameterization is possible and the distortions are low on all
elements of the object class. Even for faces this is a problem, for example around the ears, which
makes a more general real 3D approach preferable. One such algorithm is presented in [10]: here,
the surfaces are embedded into the surrounding space and a 3D volume deformation is computed.
The use of the signed distance function as a guiding feature ensures correct surface to surface mappings. We build on this approach that is more closely presented in Section 3.
A common local geometric feature is surface curvature. Though implicit surface representations
allow the extraction of such features [11], these differential geometric properties are inherently instable with respect to noise. [12] propose a related 3D geometric feature based on integrals and thus
more stable to compute. We present a slightly modified version thereof which allows for a much
easier computation of this feature from a signed distance function represented as a kernel expansion
in comparison to a complete space voxelisation step required in [12].
3
General Framework For Computing Correspondence
In order to formalize our understanding of correspondence, let us assume that all the objects O of class O are embedded in X ⊂ R³. Given a reference object O_r and a target O_t, the goal of computing a correspondence can then be expressed as determining the deformation function τ : X → X which maps each point x ∈ X on O_r to its corresponding point τ(x) on O_t.
We further assume that we can construct a dictionary of so-called feature functions f_i : X → R, i = 1, …, n, capturing certain characteristic properties of the objects. [10] propose to use the signed distance function, which maps each point x ∈ X to the distance to the object's surface – with positive sign outside the shape and negative sign inside. They also use the first derivative of the signed distance function, which can be interpreted as the surface normal. In Section 5 we will propose two additional features which are characteristic for 3D shapes, namely a curvature related feature and surface texture.
We assume that the warp-invariant feature can be represented or at least approximated by an expansion in this dictionary. Let α : X → R^n be a weighting function describing the relative importance of the different elements of the dictionary at a given location in X. We then express the warp-invariant feature as

f_α : X → R,   f_α(x) = Σ_{i=1}^n α_i(x) f_i(x)

with feature functions f_i that are object specific; for the target object there is a slight modification in that the space-variant weighting α(x) needs to refer to the coordinates of the reference object if we want to avoid comparing apples and oranges. We thus use f_α^t(x) = Σ_{i=1}^n α_i(τ^{-1}(x)) f_i^t(x), where we never have to evaluate τ^{-1} since we will only require f_α^t(τ(x)) below.
To determine a mapping τ which will establish correct correspondences between x and τ(x), we minimize the functional

C_reg ‖τ‖_H² + ∫_X ( f_α^r(x) − f_α^t(τ(x)) )² dμ(x)   (1)
The first term expresses a prior belief in a smooth deformation. This is important in regions where
the objects are not sufficiently characteristic to specify a good correspondence. As we will use a Support Vector framework to represent τ, smoothness can readily be expressed as the RKHS norm ‖τ‖_H of the non-parametric part of the deformation function τ (see Section 6). The second term
measures the local similarity of the warp-invariant feature function extracted on the reference object
f r and on the target object f t and integrates it over the volume of interest.
This formulation is a modification of [10] where two feature functions were chosen a priori (the
signed distance and its derivative) and used instead of f? . The motivation for this is that for a correct
morph, these functions should be reasonably invariant. In contrast, the present approach starts from
the notion of invariance and estimates a location-dependent linear combination of feature functions
with a maximal degree of invariance for correct correspondences (cf. next section). We consider
location-dependent linear combinations since one cannot expect that all the feature functions that
define correspondence are equally important for all points of an object. For example color may be
more characteristic around the lips or the eyes than on the forehead.
This comes at the cost, however, of increasing the number of free parameters, leading to potential
difficulties when performing model selection. As discussed above, it is unclear how to characterize
and evaluate correspondence in a principled way. The authors of [10] propose a strategy based on
a two-way morph: they first compute a deformation from the reference object to the target, and afterwards vice versa. A necessary condition for a correct morph is then that the concatenation of the
two deformations yields a mapping close to the identity.1 Although this method can provide a partial
quality criterion even when no ground truth is available, all model selection approaches based on
such a criterion need to minimize (1) many times and the computation of a gradient with respect
to the parameters is usually not possible. As the minimization is typically non-convex and rather
expensive, the number of free parameters that can be optimized is small. For locally varying parameters as proposed here such an approach is not practical. We thus propose to learn the parameters
from examples using an invariance criterion proposed in the next section.
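For illustration, the two-way check can be sketched in a few lines of Python (a minimal sketch under our own naming; tau_fw and tau_bw stand for the two estimated deformations):

import numpy as np

def two_way_consistency(tau_fw, tau_bw, sample_points):
    # Necessary condition for a correct morph: the concatenation of the
    # forward and backward deformations should be close to the identity.
    errors = [np.linalg.norm(tau_bw(tau_fw(x)) - x) for x in sample_points]
    return float(np.mean(errors))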
4
Learning the optimal feature function
We assume that a database of D objects that are already in correspondence is available. This could
for example be achieved by manually picking many corresponding point pairs and training a regression to map all the points onto each other, or by (semi-)automatic methods optimized for the given
object class (e.g., [1]). We can then determine the optimal approximation of the warp-invariant
1
It is not a sufficient condition, since the concatenation of, say, two identity mappings will also yield the
identity.
feature function (as defined in the introduction) that characterizes correspondence using the basic
features in our dictionary. The warp-invariant feature function should be such that it varies little or
not at all for corresponding points, but its value should not be preserved (and have large variance)
for random non-matching points. To approximate it, we propose to maximize the ratio of these
variances over all weighting functions α. Thus for each point x ∈ X, we maximize

E_{d,z_d} [ ( f_α^r(x) − f_α^d(z_d) )² ]  /  E_d [ ( f_α^r(x) − f_α^d(φ_d(x)) )² ]   (2)

Here, f_α^r, f_α^d are the warp-invariant feature functions evaluated on the reference object and the d-th database object respectively. φ_d(x) is the point matching x on the d-th database object and z_d is a random point sampled from it. We take the expectations over all objects in our database, as well as non-corresponding points randomly sampled from the objects.

Because of the linear dependence of f_α on α one can rewrite the problem as the maximization of

α(x)^T C_z(x) α(x)  /  α(x)^T C_φ(x) α(x)   (3)

with the empirical covariances

[C_φ(x)]_{i,j} = Σ_{d=1}^D ( f_i^r(x) − f_i^d(φ_d(x)) ) ( f_j^r(x) − f_j^d(φ_d(x)) )^T ,   (4)

[C_z(x)]_{i,j} = Σ_{d=1}^D Σ_{k=1}^N ( f_i^r(x) − f_i^d(z_{d,k}) ) ( f_j^r(x) − f_j^d(z_{d,k}) )^T ,   (5)
where we have drawn N random sample points from each object in the database.
This problem is known as oriented PCA [13], and the maximizing vector α(x) can be determined by solving the generalized eigenvalue problem C_φ(x) v(x) = λ(x) C_z(x) v(x). If v(x) is the normalized eigenvector corresponding to the maximal eigenvalue λ(x), we obtain the optimal weight vector α̂(x) = λ̂(x) v(x) using the scale factor λ̂(x) = ( v(x)^T C_φ(x) v(x) )^{−1/2}.

Note that by using this scale factor λ̂(x),
the contribution of the feature function f_α in the objective (1) will vary locally compared to the regularizer: as τ(x) is somewhat arbitrary during the optimization of (1), the average local contribution will then approximately equal E_{d,z_d} [ ( f_α^r(x) − f_α^d(z_d) )² ] = λ(x). This implies that if locally there exists a characteristic combination of features – λ(x) is high – it will have a big influence in (1). If not, the smoothness term ‖τ‖_H gets relatively more weight
implying that the local correspondence is mostly determined through more global contributions.
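To make the estimation step concrete, the following Python sketch computes α(x) for a single point from precomputed feature values. The function names are ours, and the small ridge term is our own addition for numerical stability, not part of the method:

import numpy as np
from scipy.linalg import eigh

def learn_weights(f_ref, f_corr, f_rand, ridge=1e-9):
    # f_ref:  (n,)      basic feature values on the reference object at x
    # f_corr: (D, n)    feature values at the corresponding points phi_d(x)
    # f_rand: (D*N, n)  feature values at the random points z_{d,k}
    d_corr = f_ref - f_corr                  # small variance desired, Eq. (4)
    d_rand = f_ref - f_rand                  # large variance desired, Eq. (5)
    C_phi = d_corr.T @ d_corr + ridge * np.eye(f_ref.size)
    C_z = d_rand.T @ d_rand + ridge * np.eye(f_ref.size)
    # generalized eigenproblem C_phi v = lambda C_z v; eigh returns the
    # eigenvalues in ascending order, so the last column is the maximal one
    lams, V = eigh(C_phi, C_z)
    v = V[:, -1]
    scale = (v @ C_phi @ v) ** -0.5          # hat-lambda(x)
    return scale * v                         # alpha(x)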
Note, moreover that while we have described the above for the leading eigenvector only, nothing
prevents us from computing several eigenvectors and stacking up the resulting warp-invariant feature
functions f_{α_1}, f_{α_2}, …, f_{α_m} into a vector-valued warp-invariant feature function f_α : X → R^m, which then is plugged into the optimization problem (1) using the two-norm to measure deviations instead
of the squared distance.
5
Basic Feature Functions
In our dictionary of basic feature functions we included the signed distance function and its derivative. We added a curvature related feature, the "signed balls", and surface texture intensity.
5.1
Signed Balls
Imagine a point x on a flat piece of a surface. Take a ball B_R(x) with radius R centered at that point and compute the average of the signed distance function s : X → R over the ball's volume:

I_s(x) = ( 1 / V_{B_R(x)} ) ∫_{B_R(x)} s(x') dx' − s(x)   (6)
If the surface around x is flat on the scale of the ball, we obtain zero. At points where the surface is
bent outwards this value is positive, at concave points it is negative. The normalization to the value
Figure 1: The two figures on the left show the color-coded values of the "signed balls" feature at different radii R (panels: B 4mm, B 28mm). Depending on R, the feature is sensitive to small-scale structures or large-scale structures only. Convex parts of the surface are assigned positive values (blue), concave parts negative (red). The three figures on the right show how the surface feature function that was trained with texture intensity extends off the surface (for clarity visualized in false colors) and becomes smoother. In the figure, the function is mapped on surfaces that are offset by 0, 5 and 15 mm (panels: C 0mm, C 5mm, C 15mm).
of the signed distance function at the center of the ball allows us to compute this feature function also
for off-surface points, where the interpretation with respect to the other iso-surfaces does not change.
Due to the integration, this feature is stable with respect to surface noise, while mean curvature in
differential geometry may be affected significantly. Moreover, the integration involves a scale of the
feature.
We propose to represent the implicit surface function as in [10] where a compactly supported kernel
expansion is trained to approximate the signed distance. In this case the integral and the kernel summation can be interchanged, so we only need to evaluate terms of the form ∫_{B_R(x)} k(x_i, x') dx'
and then add them in the same way as the signed distance function is computed. The value of
this basic integral only depends on the distance between the kernel center xi and the test point x.
It is compactly supported if the kernel k is. Therefore, we propose to pre-compute these values
numerically for different distances and store them in a small lookup table. For the final expansion
summation we can then just interpolate the two closest values. We obtained good interpolation
results with about ten to twenty distance values.
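A sketch of this table-based evaluation, assuming a radial kernel k(x_i, x') = k(‖x_i − x'‖) evaluated elementwise on arrays (the Monte Carlo precomputation and all names are illustrative):

import numpy as np

def ball_kernel_table(k, R, support, n_bins=20, n_mc=100000, seed=0):
    # g(t) = integral of k over a ball of radius R whose center lies at
    # distance t from the kernel center (Monte Carlo estimate per bin)
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n_mc, 3))
    pts *= (R * rng.random(n_mc) ** (1 / 3) /
            np.linalg.norm(pts, axis=1))[:, None]   # uniform in the ball
    vol = 4.0 / 3.0 * np.pi * R ** 3
    ts = np.linspace(0.0, support + R, n_bins)
    g = [vol * k(np.linalg.norm(pts + np.array([t, 0.0, 0.0]), axis=1)).mean()
         for t in ts]
    return ts, np.asarray(g)

def signed_balls(x, centers, coeffs, s_x, ts, g, R):
    # Eq. (6): average of the kernel expansion over B_R(x), minus s(x)
    d = np.linalg.norm(centers - x, axis=1)
    return coeffs @ np.interp(d, ts, g) / (4.0 / 3.0 * np.pi * R ** 3) - s_x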
For the case where the surface looks locally like a sphere it is easy to show that in the limit of small balls the value of the "signed balls" feature function is related to the differential geometric mean curvature H by I_s(x) = (3π/20) H R² + O(R³).
5.2
Surface properties – Texture
The volume deformation approach presented in Section 3 requires the use of feature functions defined on the whole domain X. In order to include information f|_∂Ω which is only given on the surface ∂Ω of the object whose interior volume is Ω, e.g. the texture intensity, we propose to extend the surface feature f|_∂Ω into a differentiable feature function f : X → R such that f → f|_∂Ω as we get closer to the surface. At larger distances from the surface, f should be smoother and tend towards
the mean feature value. This is a desirable property during the optimization of (1) as it helps to avoid
local minima. Finally, the feature function f and its gradient should be efficient to evaluate.
We propose to use a multi-scale compactly supported kernel regression to determine f : at each scale,
from coarse to fine, we select approximately equally spaced points on the surface at a distance related
to the kernel width of that scale. Then we compute the feature value at these points averaged over a
sphere of radius of the corresponding kernel support. With standard quadratic SVR regression we fit
the remainder of what was achieved on larger scales to the training values. Due to the sub-sampling
the kernel regressions do not contain too many kernel centers and the compact support of the kernel
ensures sparse kernel matrices. Thus, efficient regression and evaluation is guaranteed. Because all
kernel centers lie on the surface and reach to different extents into the volume X depending on the
kernel size of their scale, we can model small-scale variations on the surface and close to it, whereas
the regression function varies only on a larger scale further away from the surface.
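The coarse-to-fine fit can be sketched as follows, with kernel ridge regression and a Gaussian kernel standing in for the quadratic SVR with compactly supported kernels used here (the subsampled center indices are assumed given):

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_multiscale_feature(points, values, widths, center_idx, ridge=1e-3):
    # widths: kernel widths from coarse to fine; center_idx[s]: indices of
    # the subsampled surface points used as centers on scale s
    models, residual = [], values.astype(float).copy()
    for w, idx in zip(widths, center_idx):
        m = KernelRidge(alpha=ridge, kernel="rbf", gamma=1.0 / (2.0 * w ** 2))
        m.fit(points[idx], residual[idx])    # fit what coarser scales missed
        residual -= m.predict(points)        # pass the remainder downwards
        models.append(m)
    return models

def eval_feature(models, x):
    x = np.atleast_2d(x)
    return sum(m.predict(x) for m in models)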
6
Experiments
Implementation. In order to optimize (1) we followed the approach of [10]: we represent the deformation τ as a multi-scale compactly supported kernel expansion, i.e., the j-th component,
Figure 2: Locations that are marked yellow show an above-threshold relative contribution (see text) of a given feature in the warp-invariant feature function (panels: empty, C, B, N horiz., N vert., N depth.). C is the surface intensity feature, B the signed balls feature (R = 6mm), N the surface normals in different directions. Note that points where color has a large contribution (yellow points in C) are clustered around regions with characteristic color information, such as the eyes or the mouth.
j = 1, 2, 3, of τ is τ^j(x) = x_j + Σ_{s=1}^S Σ_{i=1}^{N_s} α_{i,s}^j k(x, x_{i,s}), with the compactly supported kernel function k : X × X → R. The regularizer then is ‖τ‖_H² := Σ_{s=1}^S Σ_{j=1}^3 Σ_{i,l=1}^{N_s} α_{i,s}^j α_{l,s}^j k(x_{l,s}, x_{i,s}). We approximate the integral in (1) by sampling N_s kernel centers x_{i,s} on each scale s = 1, …, S according to the measure μ(x) and minimize the resulting non-linear optimization problem in the coefficients α_{i,s}^j for each scale from coarse to fine using a second-order Newton-like method [14].
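For illustration, evaluating the deformation and a sampled version of objective (1) might look as follows; this is a sketch under our own naming, with f_ref and f_tgt standing for the warp-invariant feature functions and the Newton optimization itself omitted:

import numpy as np

def warp(x, centers, alphas, kernels):
    # tau(x) = x + sum_s sum_i alpha_{i,s} k_s(x, x_{i,s});
    # centers[s]: (N_s, 3), alphas[s]: (N_s, 3), kernels[s]: radial kernel
    y = np.array(x, dtype=float)
    for C, A, k in zip(centers, alphas, kernels):
        y = y + k(np.linalg.norm(C - x, axis=1)) @ A
    return y

def objective(samples, f_ref, f_tgt, centers, alphas, kernels, c_reg):
    # ||tau||_H^2 = sum_s sum_{i,l} k_s(x_l, x_i) <alpha_i, alpha_l>
    reg = sum((k(np.linalg.norm(C[:, None] - C[None, :], axis=-1))
               * (A @ A.T)).sum()
              for C, A, k in zip(centers, alphas, kernels))
    data = np.mean([np.sum((f_ref(x)
                            - f_tgt(warp(x, centers, alphas, kernels))) ** 2)
                    for x in samples])
    return c_reg * reg + data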
As a test object class we used 3D heads with known correspondence [1]. 100 heads were used for
the training object database and 10 to test our correspondence algorithm. As a reference head we
used the mean head of the database. The faces are all in correspondence, so we can just linearly
average the vertex positions and the texture images. However, the correspondence of the objects in
the database is only defined on the surface. In order to extend it to the off-surface points xi,s , we
generated these locations by first sampling points from the surface and then displacing them along
their surface normals. This implied that we were able to identify the corresponding points also on
other heads.
For each kernel center x_{i,s}, we learned the weighting vector α(x_{i,s}) as described in Section 4. In
one run through the database we computed for each head the values of all proposed basic feature
functions for all locations, corresponding to kernel centers on the reference head, as well as for
100 randomly sampled points z. The points z should be typical for possible target locations τ(x_{i,s})
during the optimization of (1). Thus, we sampled points up to distances to the surface proportional to
the kernel widths used for the deformation τ. We then estimated the empirical covariance matrices
for each kernel center yielding the weight vectors via a small eigenvalue decomposition of size
n × n, where n is the number of used basic features. The parameters C_reg – one for each scale –
were determined by optimizing computed deformation fields from the reference head to some of the
training database heads. We minimized the mismatch to the correspondence given in the database.
Feature functions. In Figure 1, our new feature functions are visualized on an example head. Each
feature extracts specific plausible information, and the surface color can be extended off the surface.
Learned weights. In Figure 2, we have marked those points on the surface where a given feature
has a high relative contribution in the warp-invariant feature function. As a measure of contribution
we took the component of the weight vector α(x_{i,s}) that corresponds to the feature of interest and
multiplied it with the standard deviation of this feature over all heads and all positions. Note that the
weight vector is not invariant to rescaling the basic feature functions, unlike the proposed measure.
Finally, we normalized the contributions of all features at a given point x_{i,s} to sum to one, yielding
the relative contribution. In the table below the relative contribution of each feature is listed.
                            S      N horiz.  N vert.  N depth.  C      B 8%   B 3%
average rel. contribution   0.832  0.092     0.023    0.038     0.008  0.006  0.003
max rel. contribution       0.997  0.701     0.429    0.446     0.394  0.272  0.333
Here and below, S is signed distance, N surface normals, C the proposed surface feature function
trained with the intensity values on the faces, and B is the "signed balls" feature with radii given by
the percentage numbers scaled to the diameter of the head.
The signed distance function is the best preserved feature (e.g. all surface points take the value
zero up to small approximation errors). The resulting large weight of this feature is plausible as a
surface-to-surface mapping is a necessary condition for a morph. However, combined with Figure 2
Figure 3: The average head of the database – the reference – is deformed to match four of the target heads of the test set (panels, left to right: Reference, Deformed, Target, Deformed, Target). Correct correspondence deforms the shape of the reference head to the target face with the texture of the mean face well aligned to the shape details.
we can see that the method assigns plausible non-zero values also to other features where these can
be assumed to be most characteristic for a good correspondence.
Correspondence. We applied our correspondence algorithm to compute the correspondence to the
test set of 10 heads. Some example deformations are shown in Figure 3 for a dictionary consisting
of S, N (hor, vert, depth), C, B (radii 3% and 8%). Numerical evaluation of the morphs is difficult.
We compare our method with the results of the correspondence algorithm of [1] on points that are
uniformly drawn from the surface (first column) and for 24 selected marker points (second column).
These markers were placed at locations around the eyes or the mouth where correspondence can
be assumed to be better defined than for example on the forehead. Still, the error made by humans
when picking these positions has turned out to be around 1–2 mm. The table below shows mean
results in mm for different settings.
                                               uniform  markers  error signed distance
(a) all weights equal                          5.97     4.49     1.49
(b) our method (independent of x)              3.74     1.48     0.05
(c) our method (1 eigenvector)                 3.74     1.34     0.04
(d) our method (2 eigenvectors)                3.62     1.19     0.04
(e) our method (4 eigenvectors)                3.56     1.11     0.04
(f) our method (6 eigenvectors)                3.55     1.10     0.04
(g) our method (1 eigenvector, without B, C)   3.76     1.42     0.04
If all weights are equal independent of location or feature (a), the result is not acceptable. A careful weighting of each feature separately, but independent of location (b) – as could potentially be achieved by [10] – improves the quality of the correspondence. To obtain these weights we averaged the covariance matrices C_z(x), C_φ(x) over all points and applied the proposed algorithm in
Section 4, but independent of x. However, a locally adapted weighting (c) outperforms the above
methods and using more than one eigenvector (d-f) further enhances the correspondence. Note that
although the results are not identical to [1], our algorithm's accuracy is consistent with the human labeling on the scale of the latter's presumed accuracy (1–2 mm). For uniformly sampled points, the differences are slightly larger (4 mm), but we need to bear in mind that that algorithm's results cannot
be considered ground truth. Experiment (g), which is identical to (c) but with the color and signed balls features omitted, demonstrates the usefulness of these additional basic feature functions.
Computation times ranged between 5min and one hour and depended significantly on the number
of scales used (here 4), the number of kernel centers generated, and the number of basic features
included in the dictionary. For large radii R the signed balls feature becomes quite expensive to
compute, since many summands of the signed distance function expansion have to be accumulated.
Our method to select the important features for each point in advance, i.e. before the optimization is
Figure 4: A morph between a human head and the head of the character Gollum (available from www.turbosquid.com); panels, left to right: Reference, 25%, 50%, 75%, Target. As Gollum's head falls out of our object class (human heads), we assisted the training procedure with 28 manually placed markers.
started, would allow for a potentially high speed-up: At locations where a certain feature has a very
low weight, we could just omit it in the evaluation of the cost function (1).
7
Conclusion
We have proposed a new approach to the challenging problem of defining criteria that characterize
a valid correspondence between 3D objects of a given class. Our method learns an appropriate
criterion from examples of correct correspondences. The approach thus applies machine learning
to computer graphics at the early level of feature construction. The learning technique has been
implemented efficiently in a correspondence algorithm for textured surfaces.
In the future, we plan to test our method with other object classes. Even though we have concentrated
in our experiments on 3D surface data, the method may be applicable also in other fields such as
to align CT or MR scans in medical imaging. It would also be intriguing to explore the question
whether our paradigm of learning the features characterizing correspondences might reflect some of
the cognitive processes that are involved when humans learn about similarities within object classes.
References
[1] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH'99 Conference Proceedings, pages 187–194, Los Angeles, 1999. ACM Press.
[2] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In IJCAI81, pages 674–679, 1981.
[3] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artif. Intell., 17(1-3):185–203, 1981.
[4] D. Beymer and T. Poggio. Image representations for visual learning. Science, 272:1905–1909, 1996.
[5] T. Vetter and T. Poggio. Linear object classes and image synthesis from a single example image. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7):733–742, 1997.
[6] B. Allen, B. Curless, and Z. Popovic. The space of human body shapes: reconstruction and parameterization from range scans. In Proc. SIGGRAPH, pages 612–619, 2002.
[7] D. Rueckert and A. F. Frangi. Automatic construction of 3-D statistical deformation models of the brain using nonrigid registration. IEEE Trans. on Medical Imaging, 22(8):1014–1025, 2003.
[8] D. Anguelov, P. Srinivasan, H.-C. Pang, D. Koller, S. Thrun, and J. Davis. The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In Neural Information Processing Systems 17, pages 33–40. MIT Press, 2005.
[9] M. Alexa. Recent advances in mesh morphing. Computer Graphics Forum, 21(2):173–196, 2002.
[10] B. Schölkopf, F. Steinke, and V. Blanz. Object correspondence as a machine learning problem. In Proceedings of the 22nd International Conference on Machine Learning (ICML 05), July 2005.
[11] J.-P. Thirion and A. Gourdon. Computing the differential characteristics of isointensity surfaces. Journal of Computer Vision and Image Understanding, 61(2):190–202, March 1995.
[12] N. Gelfand, N. J. Mitra, L. J. Guibas, and H. Pottmann. Robust global registration. In Proc. Eurographics Symposium on Geometry Processing, pages 197–206, 2005.
[13] K. I. Diamantaras and S. Y. Kung. Principal component neural networks: theory and applications. John Wiley & Sons, Inc., 1996.
[14] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Math. Program., 45(3):503–528, 1989.
2,156 | 2,958 | Effects of Stress and Genotype on Meta-parameter
Dynamics in Reinforcement Learning
Gediminas Lukšys1,2
[email protected]
Denis Sheynikhovich1
[email protected]
Jérémie Knüsel1
[email protected]
Carmen Sandi2
[email protected]
Wulfram Gerstner1
[email protected]
1 Laboratory of Computational Neuroscience
2 Laboratory of Behavioral Genetics
École Polytechnique Fédérale de Lausanne
CH-1015, Switzerland
Abstract
Stress and genetic background regulate different aspects of behavioral learning
through the action of stress hormones and neuromodulators. In reinforcement
learning (RL) models, meta-parameters such as learning rate, future reward discount factor, and exploitation-exploration factor, control learning dynamics and
performance. They are hypothesized to be related to neuromodulatory levels in
the brain. We found that many aspects of animal learning and performance can be
described by simple RL models using dynamic control of the meta-parameters. To
study the effects of stress and genotype, we carried out 5-hole-box light conditioning and Morris water maze experiments with C57BL/6 and DBA/2 mouse strains.
The animals were exposed to different kinds of stress to evaluate its effects on
immediate performance as well as on long-term memory. Then, we used RL models to simulate their behavior. For each experimental session, we estimated a set
of model meta-parameters that produced the best fit between the model and the
animal performance. The dynamics of several estimated meta-parameters were
qualitatively similar for the two simulated experiments, and with statistically significant differences between different genetic strains and stress conditions.
1
Introduction
Animals choose their actions based on reward expectation and motivational drives. Different aspects
of learning are known to be influenced by acute stress [1, 2, 3] and genetic background [4, 5]. Stress
effects on learning depend on the stress type (e.g. task-specific or unspecific) and intensity, as well
as on the learning paradigm (e.g. spatial/episodic vs. procedural learning) [3]. It is known that
stress can affect short- and long-term memory by modulating plasticity through stress hormones
and neuromodulators [1, 2, 3, 6]. However, there is no integrative model that would accurately
predict and explain differential effects of acute stress. Although stress factors can be described in
quantitative measures, their effects on learning, memory, and performance are strongly influenced
by how an animal perceives it. The subjective experience can be influenced by emotional memories
as well as by behavioral genetic traits such as anxiety, impulsivity, and novelty reactivity [4, 5, 7].
In the present study, behavioral experiments conducted on two different genetic strains of mice
and under different stress conditions were combined with a modeling approach. In our models,
behavioral performance as a function of time was described in the framework of temporal difference
reinforcement learning (TDRL).
In TDRL models [8] a modeled animal, termed agent, can occupy various states and undertake
actions in order to acquire rewards. The expected values of cumulative future reward (Q-values) are
learned by observing immediate rewards delivered under different state-action combinations. Their
update is controlled by certain meta-parameters such as learning rate, future reward discount factor,
and memory decay/interference factor. The Q-values (together with the exploitation/exploration
factor) determine what actions are more likely to be chosen when the animal is at a certain state,
ie they represent the goal-oriented behavioral strategy learned by the agent. The activity of certain
neuromodulators in the brain are thought to be associated with the role the meta-parameters play
in the TDRL models. Besides dopamine (DA), whose levels are known to be related to the TD
reward prediction error [9], serotonin (5-HT), noradrenaline (NA), and acetylcholine (ACh) were
discussed in relation to TDRL meta-parameters [10]. Thus, the knowledge of the characteristic
meta-parameter dynamics can give an insight into the putative neuromodulatory activities in the
brain. Dynamic parameter estimation approaches, recently applied to behavioral data in the context
of TDRL [11], could be used for this purpose.
In our study, we carried out 5-hole-box light conditioning and Morris water maze experiments with
C57BL/6 and DBA/2 inbred mouse strains (referred to as C57 and DBA from now on), renown for
their differences in anxiety, impulsivity, and spatial learning [4, 5, 12]. We exposed subgroups of
animals to different kinds of stress (such as motivational stress or task-specific uncertainty) in order
to evaluate its effects on immediate performance, and also tested their long-term memory after a
break of 4-7 weeks. Then, we used TDRL models to describe the mouse behavior and established
a number of performance measures that are relevant to task learning and memory (such as mean
response times and latencies to platform) in order to compare the outcome of the model with the animal performance. Finally, for each experimental session we ran an optimization procedure to find
a set of the meta-parameters, best fitting to the experimental data as quantified by the performance
measures. This approach made it possible to relate the effects of stress and genotype to differences
in the meta-parameter values, allowing us to make specific inferences about learning dynamics (generalized over two different experimental paradigms) and their neurobiological correlates.
2
Reinforcement learning model of animal behavior
In the TDRL framework [8] animal behavior is modelled as a sequence of actions. After an action is
performed, the animal is in a new state where it can again choose from a set of possible actions. In
certain states the animal is rewarded, and the goal of learning is to choose actions so as to maximize
the expected future reward, or Q-value, formally defined as
Q(s_t, a_t) = E[ Σ_{k=0}^∞ γ^k r_{t+k+1} | s_t, a_t ] ,   (1)

where (s_t, a_t) is the state-action pair, r_t is a reward received at time step t and 0 < γ < 1 is the future reward discount factor which controls to what extent the future rewards are taken into account. As soon as state s_{t+1} is reached and a new action is selected, the estimate of the previous state's value Q(s_t, a_t) is updated based on the reward prediction error δ_t [8]:

δ_t = r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ,   (2)

Q(s_t, a_t) ← Q(s_t, a_t) + α δ_t ,   (3)

where α is the learning rate. The action selection at each state is controlled by the exploitation factor β such that actions with high Q-values are chosen more often if β is high, whereas random actions are chosen most of the time if β is close to zero. Meta-parameters α, γ and β are the free parameters of the model.
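A minimal sketch of the update in Eqs. (2)-(3), with Q indexed by state-action pairs (variable names are illustrative):

def td_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    # reward prediction error, Eq. (2), and Q-value update, Eq. (3)
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * delta
    return delta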
3
5-hole-box experiment and modeling
Experimental subjects were male mice (24 of the C57 strain, and 24 of the DBA strain), 2.5-month
old at the beginning of the experiment, and food deprived to 85-90% of the initial weight. During an
experimental session, each animal was placed into the 5-hole-box (5HB) (Figure 1a). The animals
had to learn to make a nose poke into any of the holes upon the onset of lights and not to make it
in the absence of light. After the response to light, the animals received a reward in form of a food
pellet. Once a poke was initiated (see starting a poke in Figure 1b), the mouse had to stay in the
hole at least for a short time (0.3-0.5 sec) in order to find the delivered reward (continuing a poke).
Trial ended (lights turned off) as soon as the nose poke was finished. If the mouse did not find the
reward, the reward remained in the box and the animal could find it during the next poke in the same
box. The inter-trial interval (ITI) between subsequent trials was 15 sec. However, a new trial could
only start when during the last 3 sec before it there were no wrong (ITI) pokes, so as to penalize
spontaneous poking. The total session time was 10 min. Hence, the number of trials depended on
how fast animals responded to light and how often they made ITI pokes.
[Figure 1 graphics: a. box with holes B.1–B.5; b. state-action chart over {ITI, trial} × {staying outside, starting a poke, continuing a poke}, with reward transitions and a trial starting after the 15 sec ITI.]
Figure 1: a. Scheme of the 5HB experiment. Open circles are the holes where the food is delivered,
filled circles are the lights. All 5 holes were treated as equivalent during the experiment. b. 5HB
state-action chart. Rectangles are states, arrows are actions.
After 2 days of habituation, during which the mice learned that food could be delivered in the
holes, they underwent 8 consecutive days of training. During days 5-7 subsets of the animals were
exposed to different stress conditions: motivational stress (MS, food deprivation to 85-87% of the
initial weight vs. 88-90% in controls) and uncertainty in the reward delivery (US, in 50% of correct
responses they received either none or 2 food pellets). Mice of each strain were divided into 4 stress
groups: controls, MS, US, and MS+US. After a break of 26 days the long-term memory of the
mice was tested by retraining them for another 8 days. During days 5-8 of the retraining, we again
evaluated the impact of stress factors by exposing half of the mice to extrinsic stress (ES, 30 min on
an elevated platform right before the 5HB experiment).
To model the mouse behavior we used a discrete-state TDRL model with 6 states: [ITI, trial] × [staying outside, starting a poke, continuing a poke], and 2 actions: move (in or out), and stay (see
Figure 1b). Actions were chosen according to the soft-max method [8]:
p(a|s) = exp(β Q(s, a)) / Σ_k exp(β Q(s, a_k)) ,   (4)

where k runs over all actions and β is the exploitation factor. Initial Q-values were equal to zero.
Since the time spent outside the holes was comparatively long and included multiple (task irrelevant)
actions, state/action pair staying outside/stay was given much more weight in the above formula.
The time step (0.43 sec) was constant throughout the experiment and was chosen to fit the animal
performance in the beginning of the experiment. Finally, to account for the memory decay after each
day all Q(s, a) values were updated as follows:
Q(s, a) ← Q(s, a) · (1 − λ) + ⟨Q(s, a)⟩_{s,a} · λ ,   (5)

where λ is a memory decay/interference factor, and ⟨Q(s, a)⟩_{s,a} is the average over Q-values for all states and all actions at the end of the day.
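In code, the action selection of Eq. (4) and the end-of-day decay of Eq. (5) could be sketched as follows (Q stored as an array of Q-values; names are illustrative):

import numpy as np

def softmax_action(Q_s, beta, rng):
    # Eq. (4): choose among actions with probabilities prop. to exp(beta*Q)
    p = np.exp(beta * (Q_s - Q_s.max()))   # subtract max for stability
    return rng.choice(len(Q_s), p=p / p.sum())

def end_of_day_decay(Q, lam):
    # Eq. (5): pull every Q-value towards the grand mean
    return Q * (1.0 - lam) + Q.mean() * lam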
All performance measures (PMs) used in the 5HB paradigm (number of trials, number of ITI pokes, mean response time, mean poke length, TimePref¹ and LengthPref²) were evaluated over the entire session (10 min, 1400 time steps), during which different states³ could be visited multiple
¹ TimePref = (average time between adjacent ITI pokes) / (average response time)
² LengthPref = (average response length) / (average ITI poke length)
³ including the pseudo-states, corresponding to time steps within the 15 sec ITI
times. As opposed to an online "SARSA"-type update of Q-values, we work with state occupancy probabilities p(s_t) and update Q-values with the following reward prediction error:

δ_t = E[r_t] − Q(a_t, s_t) + γ Σ_{∀ a_{t+1}, s_{t+1}} Q(a_{t+1}, s_{t+1}) · p(a_{t+1}, s_{t+1} | a_t, s_t) .   (6)
4
Morris water maze experiment and modeling
The same mice as in the 5HB (4.5-month old at the beginning of the experiment) were tested in a
variant of the Morris water maze (WM) task [13]. Starting from one of 4 starting positions in the
circular pool filled with an opaque liquid they had to learn the location of a hidden escape platform
using stable extra-maze cues (Fig. 2a). Animals were initially trained for 4 days with 4 sessions a
day (to avoid confusion with the 5HB, we consider each WM session as consisting of only one trial; trial length was limited to 60 s, and the inter-session interval was 25 min). Half of the mice had to swim in cold water of 19 °C (motivational stress, MS), while the rest were learning at 26 °C (control). After a 7-week break, 3-day long memory testing was done at 22–23 °C for all animals. Finally,
after another 2 weeks, the mice performed the task for 5 more days: half of them did a version with
uncertainty stress (US), where the platform location was randomly varying between the old position
and its rotationally opposite; the other half did the same task as before.
Behavior was quantified using the following 4 PMs: time to reach the goal (escape latency), time
spent in the target platform quadrant, the opposite platform quadrant, and in the wall region (Fig. 2a).
[Figure 2 graphics: a. pool schematic with regions 1–3, four starting positions, target and opposite platform; b. network schematic: place cells (PC) project through weights w_ij to action cells (AC).]
Figure 2: WM experiment and model. a. Experimental setup. 1 ? target platform quadrant, 2 ?
opposite platform quadrant, 3 ? wall region. Small filled circles mark 4 starting positions, large
filled circle marks the target platform, open circle marks the opposite platform (used only in the US
condition), pool ? = 1.4m. b. Activities of place cells (PC) encode position of the animal in the
WM, activities of action cells encode direction of the next movement.
A TDRL paradigm (1)-(3) in continuous state and action spaces has been used to model the mouse
behavior in the WM [14, 15]. The position of the animal is represented as a population activity of
N_pc = 211 "place cells" (PC) whose preferred locations are distributed uniformly over the area of a modelled circular arena (Fig. 2b). Activity of place cell j is modelled by a Gaussian centered at the preferred location p_j of the cell:

r_j^{pc} = exp( −‖p − p_j‖² / 2σ_pc² ) ,   (7)

where p is the current position of the modelled animal and σ_pc = 0.25 defines the width of the spatial receptive field relative to the pool radius. Place cells project to the population of N_ac = 36 "action cells" (AC) via feed-forward all-to-all connections with modifiable weights. Each action cell
is associated with angle φ_i, all φ_i being distributed uniformly in [0, 2π]. Thus, an activity profile on the level of place cells (i.e. state s_t) causes a different activity profile on the level of the action cells depending on the value of the weight vector. The activity of action cell i is considered as the value of the action (defined as a movement in direction φ_i ⁴):

Q(s_t, a_t) = r_i^{ac} = Σ_j w_ij r_j^{pc} .   (8)
⁴ A constant step length was chosen to fit the average speed of the animals during the experiment.
The action selection follows an ε-greedy policy, where the optimal action a* is chosen with probability β = 1 − ε and a random action with probability 1 − β. Action a* is defined as movement in the direction of the center of mass φ* of the AC population⁵. The Q-value corresponding to an action with continuous angle φ is calculated as linear interpolation between activities of the two closest action cells. During learning the PC→AC connection weights are updated on each time step in such a way as to decrease the reward prediction error δ_t (3):

Δw_ij = α δ_t r_i^{ac} r_j^{pc} .   (9)
The Hebbian-like form of the update rule (9) is due to the fact that we use distributed representations
for states and actions, i.e. there is no single state/action pair responsible for the last movement.
To simulate one experimental session it is necessary to (i) initialize the weight matrix {w_ij}, (ii) choose meta-parameter values and starting position p_0, (iii) compute (7)-(8) and perform corresponding movements until ‖p − p_pl‖ < R_pl, at which point reward r = 15 is delivered (R_pl is the platform radius). Wall hits result in a small negative reward (r_wall = −3).
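A sketch of the main model computations, Eqs. (7)-(9), under our own naming (2-D positions; the TD error δ is assumed to be computed by the surrounding simulation loop):

import numpy as np

def wm_activities(p, W, centers, sigma_pc):
    # Eq. (7): place-cell activities; Eq. (8): action-cell values
    r_pc = np.exp(-np.sum((centers - p) ** 2, axis=1) / (2.0 * sigma_pc ** 2))
    return r_pc, W @ r_pc

def choose_direction(r_ac, phis, eps, rng):
    # epsilon-greedy: the greedy action is the center of mass of the
    # action-cell population (see footnote 5)
    if rng.random() < eps:
        return rng.uniform(0.0, 2.0 * np.pi)
    return np.arctan2(r_ac @ np.sin(phis), r_ac @ np.cos(phis)) % (2.0 * np.pi)

def update_weights(W, delta, r_ac, r_pc, alpha):
    # Eq. (9), taken literally: Delta w_ij = alpha * delta * r_i^ac * r_j^pc
    W += alpha * delta * np.outer(r_ac, r_pc)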
For each session and each set of the meta-parameters, 48 different sets of random initial weights wij
(corresponding to individual mice) were used to run the model, with 50 simulations started out of
each set. Final values of the PMs were averaged over all repetitions for each subgroup of mice.
To account for the loss of memory, after each day all weights were updated as follows:
w_ij^{new} = w_ij^{old} · (1 − λ) + w_ij^{initial} · λ ,   (10)

where λ is the memory decay factor, w_ij^{old} is the weight value at the end of the day, and w_ij^{initial} is the initial weight value before any learning took place.
5
Goodness-of-fit function and optimization procedure
To compare the model with the experiment we used the following goodness-of-fit function [16]:
χ² = Σ_{k=1}^{N_PM} ( PM_k^{exp} − PM_k^{mod}(α, γ, β, λ) )² / (σ_k^{exp})² ,   (11)

where PM_k^{exp} and PM_k^{mod} are the PMs calculated for the animals and the model, respectively, and N_PM is the number of the PMs. PM_k^{mod}(α, γ, β, λ) are calculated after simulation of one session with fixed values of the meta-parameters. PM_k^{exp} were calculated either for each animal (5HB),
or for each subgroup (WM). Using stochastic gradient ascent, we minimized (11) with respect to α, γ, β for each session separately by systematically varying the meta-parameters in the following ranges: for WM, α ∈ [10⁻⁵, 5·10⁻²] and γ, β ∈ [0.01, 0.99], and for 5HB, α, γ ∈ [0.03, 0.99] and β ∈ [0.3, 9.9]. Decay factor λ ∈ [0.01, 0.99] was estimated only for the first session after the break, otherwise constant values of λ = 0.03 (5HB) and λ = 0.2 (WM) were used.
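For illustration, Eq. (11) and a simple bounded random-search loop over the meta-parameters can be sketched as follows; the hill-climbing step is a stand-in for the stochastic gradient ascent used here, and all names and step sizes are ours:

import numpy as np

def chi2(pm_exp, pm_mod, sigma_exp):
    # Eq. (11)
    return float(np.sum(((pm_exp - pm_mod) / sigma_exp) ** 2))

def fit_metaparams(simulate, pm_exp, sigma_exp, bounds, n_iter=500, seed=0):
    # simulate: maps a meta-parameter vector (alpha, gamma, beta) to model PMs
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    theta = lo + rng.random(lo.size) * (hi - lo)
    best = chi2(pm_exp, simulate(theta), sigma_exp)
    for _ in range(n_iter):
        cand = np.clip(theta + 0.05 * (hi - lo) * rng.normal(size=lo.size),
                       lo, hi)
        c = chi2(pm_exp, simulate(cand), sigma_exp)
        if c < best:
            theta, best = cand, c
    return theta, best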
Several control procedures were performed to ensure that the meta-parameter optimization was statistically efficient and self-consistent. To evaluate how well the model fits the experimental data we
used a χ²-test with ν = N_PM − 3 degrees of freedom (since most of the time we had only 3 free meta-parameters). The P(χ², ν) value, defined as the probability that a realization of a chi-square-distributed random variable would exceed χ² by chance, was calculated for each session separately. Generally, values of P(χ², ν) > 0.01 correspond to a fairly good model [16]. To check reliability of the estimated meta-parameters we used the same optimization procedure with PM_k^{exp} artificially generated by the model itself. In a self-consistent model such a procedure is expected to find meta-parameter values similar to those with which the PMs were generated. Finally, to see how well the model generalizes to previously unseen data, we used half of the available experimental data for optimization and tested the estimated parameters on the other half. Then we evaluated χ² and P(χ², ν) values for the testing as well as the training data.
6
Results
The meta-parameter estimation procedure was performed for the models of both experiments using
stochastic gradient ascent in the χ² goodness-of-fit. For the 5HB, meta-parameters were estimated for
⁵ i.e. φ* = arctan( Σ_i r_i^{ac} sin(2πi/N_ac) / Σ_i r_i^{ac} cos(2πi/N_ac) )
[Figure 3 plots: panel a shows platform quadrant time % (WM, top) and mean response time [s] (5HB, bottom) over days, model vs. experimental data; panel b shows true vs. estimated values of learning rate, discount rate, and exploitation factor.]
Figure 3: a. Example of PM evolution with learning in the WM (platform quadrant time, top) and in the 5HB (mean response time, bottom). b. Self-consistency check: true (open circles) and estimated (filled circles) meta-parameter values for the 24 random sets in the 5HB.
each animal and each experimental day. Further (sub)group values were calculated by averaging
the individual estimations. For the WM, meta-parameters were estimated for each subgroup and
each experimental session. Learning dynamics in both experiments are illustrated in Figure 3a for
2 representative PMs, where average performances for all mice and the corresponding models (with
estimated meta-parameters) are shown.
The results of both meta-parameter estimation procedures indicated a reasonably good fit between
the model and animal performance. Evaluating the testing data, the condition P(χ², ν) > 0.01 was satisfied for 92.5% of 5HB estimated parameter sets, and for 98.4% in the WM. The mean χ² values for the testing data were ⟨χ²⟩ = 1.59 in the WM (P(χ², 1) = 0.21) and ⟨χ²⟩ = 5.27 in the 5HB (P(χ², 3) = 0.15). There was slight over-fitting only in the WM estimation.
To evaluate the quality of the estimated optima and sensitivities to different meta-parameters, we
calculated eigenvalues of the Hessian of 1/χ² around each of the estimated points. 98.4% of all eigenvalues were negative, and most of the corresponding eigenvectors were aligned with the directions of α, γ, and β, indicating that there were no significant correlations in parameter estimation. Furthermore, the absolute eigenvalues were highest in the directions of γ and β, thus the error surface is steep along these meta-parameters. To test the reliability of estimated meta-parameters, the self-consistency check was performed using a number of random meta-parameter sets. The mean absolute errors (distances between real and estimated parameter values) were quite small for exploitation factors (β) – approximately 6% of the total range, but higher for the reward discount factors (γ) and for the learning rates (α) – 10–29% of the total range (Figure 3b). This indicates that estimated β values should be considered more reliable than those of α and γ.
6.1
Meta-parameter dynamics
During the course of learning, exploitation factors (β) (Figure 4a,b) showed a progressive increase (regression p ≪ 0.001 for both the 5HB and the WM), reaching the peak at the end of each learning block. They were consistently higher for the C57 mice than for the DBA mice (2-way ANOVA with replications, p ≪ 0.001 for both experiments), indicating that the DBA mice were exploring the
environment more actively, and/or were not able to focus their attention well on the specific task.
Finally, C57 mouse groups, exposed to motivational stress in the WM and to extrinsic stress in the
5HB, had elevated exploitation factors (ANOVA p < 0.01 for both experiments), however there was
no effect for the DBA mice.
The estimated learning rates (α) did not show any obvious changes or trends with learning for
either 5HB or WM. There were no differences between the 2 genetic strains (nor among the stress
conditions) with one exception: for the first several days of the training, C57 learning rates were
[Figure 4 appears here. Panels (a,b): exploitation factor β vs. day, for C57BL/6 and DBA/2 mice. Panels (c,d): future reward deference factor γ, for fixed vs. variable platform (WM) and control vs. uncertainty (5HB). Panel (e): memory decay/interference factor for C57 and DBA mice, control vs. previously exposed to US.]
Figure 4: a,b. Estimated exploitation factors β for 5HB (a, break is between days 8 & 9) and WM (b, breaks between days 4 & 5 and between 7 & 8). c,d. Estimated future reward deference factors for the variable platform trials in the WM (c) and for the uncertainty trials in the 5HB (d). e. Estimated memory decay / interference factors for the first day after the break in the 5HB.
significantly higher (ANOVA p < 0.01 in both experiments), indicating that C57 mice could learn a
novel task more quickly.
Under uncertainty (in reward delivery for the 5HB, and in the target platform location for the WM)
future reward discount factors (γ) were significantly elevated (ANOVA p < 0.02, Figure 4c,d). In the 5HB, memory decay factors (λ), estimated for the first day after the break, were significantly
higher (p < 0.01, unpaired t-test) for animals, previously exposed to uncertainty (Figure 4e). This
suggests that uncertainty makes animals consider rewards further into the future, and it seems to
impair memory consolidation.
7 Discussion
In this paper we showed that various behavioral outcomes (caused by genetic traits and/or stress
factors) could be predicted by our TDRL models for 2 different tasks. This provides hypotheses
concerning the neuromodulatory mechanisms, which we plan to test using pharmacological manipulations (typically, injections of agonists or antagonists of relevant neurotransmitter systems).
Results for the exploitation factors suggest that with learning (and decreasing reward prediction
errors) the acquired knowledge is used more for choosing actions. This might also be related to
decreased subjective stress and higher stressor controllability. The difference between C57 and DBA
strains shows two things. Firstly, the anxious DBA mice cannot exploit their knowledge as well as
C57 can. Secondly, in response to motivational or extrinsic stress C57 mice are the only ones that
increase their exploitation. This may be related to an inverse-U-shaped effect of the noradrenergic
influences on focused attention and performance accuracy [17]. Animals with low anxiety (C57)
might be on the left side of the curve, and additional stress might lead them to optimal performance,
while those with high anxiety might already be on the right side, leading to possibly impaired performance.
Our results may also suggest that the widely proclaimed deficiency of DBA mice in spatial learning
(as compared to C57) [4, 12] might be primarily due to differential attentional capabilities.
The increased future reward discount factors under uncertainty indicate a reasonable adaptive response: animals should not concentrate their learning on immediate events when task-reward relations become ambiguous. Uncertainty in behaviorally relevant outcomes under stress causes a decrease in subjective stressor controllability, which is known to be related to elevated serotonin levels
[18]. Higher memory decay / interference factors for the animals previously exposed to uncertainty
could be due to partially impaired memory consolidation and/or due to stronger competition between
different strategies and perceptions of the uncertain task.
Although estimated meta-parameter values can be easily compared between certain experimental
conditions, it is difficult to study in this way the interactions between different genetic and environmental factors or extrapolate beyond the limits of available conditions. One could overcome this
disadvantage by developing a black-box parameter model that would help us to evaluate in a flexible
way the contributions of specific factors (motivation, uncertainty, genotype) to meta-parameter dynamics, as well as their relationship with dynamics of TD errors (δ_t) during the process of learning.
Acknowledgments
This work was partially supported by a grant from the Swiss National Science Foundation to C.S.
(3100A0-108102).
References
[1] J. J. Kim and D. M. Diamond. The stressed hippocampus, synaptic plasticity and lost memories. Nat Rev Neurosci., 3(6):453-62, Jun 2002.
[2] C. Sandi, M. Loscertales, and C. Guaza. Experience-dependent facilitating effect of corticosterone on spatial memory formation in the water maze. Eur J Neurosci., 9(4):637-42, Apr 1997.
[3] M. Joels, Z. Pu, O. Wiegert, M. S. Oitzl, and H. J. Krugers. Learning under stress: how does it work? Trends Cogn Sci., 10(4):152-8, Epub 2006 Mar 2, Apr 2006.
[4] J. M. Wehner, R. A. Radcliffe, and B. J. Bowers. Quantitative genetics and mouse behavior. Annu Rev Neurosci., 24:845-67, 2001.
[5] A. Holmes, C. C. Wrenn, A. P. Harris, K. E. Thayer, and J. N. Crawley. Behavioral profiles of inbred strains on novel olfactory, spatial and emotional tests for reference memory in mice. Genes Brain Behav., 1(1):55-69, Jan 2002.
[6] J. L. McGaugh. The amygdala modulates the consolidation of memories of emotionally arousing experiences. Annu Rev Neurosci., 27:1-28, 2004.
[7] M. J. Kreek, D. A. Nielsen, E. R. Butelman, and K. S. LaForge. Genetic influences on impulsivity, risk taking, stress responsivity and vulnerability to drug abuse and addiction. Nat Neurosci., 8:1450-7, 2005.
[8] R. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[9] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593-9, Mar 14 1997.
[10] K. Doya. Metalearning and neuromodulation. Neural Netw, 15(4-6):495-506, Jun-Jul 2002.
[11] K. Samejima, K. Doya, Y. Ueda, and M. Kimura. Estimating internal variables and parameters of a learning agent by a particle filter. In Advances in Neural Information Processing Systems 16. 2004.
[12] C. Rossi-Arnaud and M. Ammassari-Teule. What do comparative studies of inbred mice add to current investigations on the neural basis of spatial behaviors? Exp Brain Res., 123(1-2):36-44, Nov 1998.
[13] R. G. M. Morris. Spatial localization does not require the presence of local cues. Learning and Motivation, 12:239-260, 1981.
[14] D. J. Foster, R. G. M. Morris, and P. Dayan. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus, 10(1):1-16, 2000.
[15] T. Strösslin, D. Sheynikhovich, R. Chavarriaga, and W. Gerstner. Modelling robust self-localisation and navigation using hippocampal place cells. Neural Networks, 18(9):1125-1140, 2005.
[16] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992.
[17] G. Aston-Jones, J. Rajkowski, and J. Cohen. Locus coeruleus and regulation of behavioral flexibility and attention. Prog Brain Res., 126:165-82, 2000.
[18] J. Amat, M. V. Baratta, E. Paul, S. T. Bland, L. R. Watkins, and S. F. Maier. Medial prefrontal cortex determines how stressor controllability affects behavior and dorsal raphe nucleus. Nat Neurosci., 8(3):365-71, Epub 2005 Feb 6, Mar 2005.
2,157 | 2,959 | PAC-Bayes Bounds for the Risk of the Majority Vote
and the Variance of the Gibbs Classifier
Alexandre Lacasse, François Laviolette and Mario Marchand
Département IFT-GLO
Université Laval
Québec, Canada
[email protected]
Pascal Germain
Département IFT-GLO
Université Laval, Québec, Canada
[email protected]
Nicolas Usunier
Laboratoire d'informatique de Paris 6
Université Pierre et Marie Curie, Paris, France
[email protected]
Abstract
We propose new PAC-Bayes bounds for the risk of the weighted majority vote that
depend on the mean and variance of the error of its associated Gibbs classifier. We
show that these bounds can be smaller than the risk of the Gibbs classifier and can
be arbitrarily close to zero even if the risk of the Gibbs classifier is close to 1/2.
Moreover, we show that these bounds can be uniformly estimated on the training
data for all possible posteriors Q. Moreover, they can be improved by using a
large sample of unlabelled data.
1 Introduction
The PAC-Bayes approach, initiated by [1], aims at providing PAC guarantees to "Bayesian-like"
learning algorithms. Within this approach, we consider a prior¹ distribution P over a space of
classifiers that characterizes our prior belief about good classifiers (before the observation of the
data) and a posterior distribution Q (over the same space of classifiers) that takes into account the
additional information provided by the training data. A remarkable result, known as the ?PACBayes theorem?, provides a risk bound for the Q-weigthed majority-vote by bounding the risk of an
associated stochastic classifier called the Gibbs classifier. Bounds previously existed which showed
that you can de-randomize back to the Majority Vote classifier, but these come at a cost of worse
risk. Naively, one would expect that the de-randomized classifier would perform better. Indeed, it is
well-known that voting can dramatically improve performance when the "community" of classifiers
tend to compensate the individual errors. The actual PAC-Bayes framework is currently unable to
evaluate whether or not this compensation occurs. Consequently, this framework can not currently
help in producing highly accurate voted combinations of classifiers.
In this paper, we present new PAC-Bayes bounds on the risk of the Majority Vote classifier based
on the estimation of the mean and variance of the errors of the associated Gibbs classifier. These
bounds allow to prove that a sufficient condition to provide an accurate combination is (1) that the
error of the Gibbs classifier is less than half and (2) the mean pairwise covariance of the errors of
the classifiers appearing in the vote is small. In general, the bound allows to detect when the voted
combination provably outperforms its associated Gibbs classifier.
¹ Priors have been used for many years in statistics. The priors in this paper have only indirect links with the
Bayesian priors. We will nevertheless use this language, since it comes from previous work.
2 Basic Definitions
We consider binary classification problems where the input space X consists of an arbitrary subset of R^n and the output space Y ≝ {−1, +1}. An example z = (x, y) is an input-output pair where x ∈ X and y ∈ Y. Throughout the paper, we adopt the PAC setting where each example z is drawn according to a fixed, but unknown, probability distribution D on X × Y.
We consider learning algorithms that work in a fixed hypothesis space H of binary classifiers (defined without reference to the training data). The risk R(h) of any classifier h : X → Y is defined as the probability that h misclassifies an example drawn according to D:

    R(h) ≝ Pr_{(x,y)∼D}( h(x) ≠ y ) = E_{(x,y)∼D} I(h(x) ≠ y),

where I(a) = 1 if predicate a is true and 0 otherwise.
Given a training set S, m will always represent its number of examples. Moreover, if S = ⟨z₁, …, z_m⟩, the empirical risk R_S(h) on S, of any classifier h, is defined according to:

    R_S(h) ≝ (1/m) Σ_{i=1}^{m} I(h(xᵢ) ≠ yᵢ).
After observing the training set S, the task of the learner is to choose a posterior distribution Q over H such that the Q-weighted Majority Vote classifier, B_Q, will have the smallest possible risk. On any input example x, the output B_Q(x) of the Majority Vote classifier B_Q (also called the Bayes classifier) is given by:

    B_Q(x) ≝ sgn( E_{h∼Q} h(x) ),

where sgn(s) = +1 if real number s > 0 and sgn(s) = −1 otherwise. The output of the deterministic Majority Vote classifier B_Q is thus closely related to the output of a stochastic classifier called the Gibbs classifier. To classify an input example x, the Gibbs classifier G_Q chooses randomly a (deterministic) classifier h according to Q to classify x. The true risk R(G_Q) and the empirical risk R_S(G_Q) of the Gibbs classifier are thus given by:

    R(G_Q) ≝ E_{h∼Q} R(h) = E_{h∼Q} E_{(x,y)∼D} I(h(x) ≠ y)                        (1)
    R_S(G_Q) ≝ E_{h∼Q} R_S(h) = E_{h∼Q} (1/m) Σ_{i=1}^{m} I(h(xᵢ) ≠ yᵢ).          (2)
The PAC-Bayes theorem gives a tight risk bound for the Gibbs classifier G_Q that depends on how far the chosen posterior Q is from a prior P that must be chosen before observing the data. The PAC-Bayes theorem was first proposed by [2]. The bound presented here can be found in [3].

Theorem 1 (PAC-Bayes Theorem) For any prior distribution P over H, and any δ ∈ ]0, 1], we have

    Pr_{S∼D^m}( ∀ Q over H :  kl(R_S(G_Q) ‖ R(G_Q)) ≤ (1/m) [ KL(Q‖P) + ln((m+1)/δ) ] )  ≥  1 − δ,

where KL(Q‖P) is the Kullback-Leibler divergence between Q and P:

    KL(Q‖P) ≝ E_{h∼Q} ln( Q(h) / P(h) ),

and where kl(q‖p) is the Kullback-Leibler divergence between the Bernoulli distributions with probability of success q and probability of success p:  kl(q‖p) ≝ q ln(q/p) + (1 − q) ln((1−q)/(1−p)).
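In practice the theorem is applied by numerically inverting the kl term: given the empirical Gibbs risk, one searches for the largest true risk still inside the kl ball. A minimal sketch of this standard computation (ours, not from the paper):

import math

def kl_bernoulli(q, p):
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_risk_bound(emp_gibbs_risk, kl_qp, m, delta):
    # Largest r >= emp_gibbs_risk with kl(emp_gibbs_risk || r) <= eps, found by
    # bisection; kl is increasing in r on that side, so bisection is valid.
    eps = (kl_qp + math.log((m + 1) / delta)) / m
    lo, hi = emp_gibbs_risk, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_gibbs_risk, mid) <= eps:
            lo = mid
        else:
            hi = mid
    return lo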
This theorem has recently been generalized by [4] to the sample-compression setting. In this paper,
however, we restrict ourselves to the more common case where the set H of classifiers is defined
without reference to the training data.
A bound given for the risk of Gibbs classifiers can straightforwardly be turned into a bound for the
risk of Majority Vote classifiers. Indeed, whenever BQ misclassifies x, at least half of the classifiers
(under measure Q), misclassifies x. It follows that the error rate of GQ is at least half of the error
rate of B_Q. Hence R(B_Q) ≤ 2R(G_Q). A method to decrease the R(B_Q)/R(G_Q) ratio to 1 + ε (for some small positive ε) has been proposed by [5] for large-margin classifiers. For a suitably chosen
prior and posterior, [5] have also shown that RS (GQ ) is small when the corresponding Majority Vote
classifier BQ achieves a large separating margin on the training data. Consequently, the PAC-Bayes
theorem yields a tight risk bound for large margin classifiers.
Even if we can imagine situations where R(BQ ) > R(GQ ), they have been rarely encountered in
practice. In fact, situations where R(BQ ) is much smaller than R(GQ ) seem to occur much more
often. For example, consider the extreme case where the true label y of x is 1 iff E_{h∼Q} h(x) > 1/2. In this case R(B_Q) = 0 whereas R(G_Q) can be as high as 1/2 − ε for some arbitrarily small ε. The
situations where R(BQ ) is much smaller than R(GQ ) are not captured by the PAC-Bayes theorem.
In the next section, we provide a bound on R(BQ ) that depends on R(GQ ) and other properties
that can be estimated from the training data. This bound can be arbitrary close to 0 even for a large
R(GQ ) as long as R(GQ ) < 1/2 and as long as we have a sufficiently large population of classifiers
for which their errors are sufficiently ?uncorrelated?.
3 A Bound on R(B_Q) that Can Be Much Smaller than R(G_Q)
All of our relations between R(B_Q) and R(G_Q) arise by considering the Q-weight W_Q(x, y) of classifiers making errors on example (x, y):

    W_Q(x, y) ≝ E_{h∼Q} I(h(x) ≠ y).                                              (3)
Clearly, we have:

    Pr_{(x,y)∼D}( W_Q(x, y) > 1/2 ) ≤ R(B_Q) ≤ Pr_{(x,y)∼D}( W_Q(x, y) ≥ 1/2 ).   (4)

Hence, Pr_{(x,y)∼D}( W_Q(x, y) ≥ 1/2 ) gives a very tight upper bound on R(B_Q). Moreover,
    E_{(x,y)∼D} W_Q(x, y) = E_{(x,y)∼D} E_{h∼Q} I(h(x) ≠ y) = R(G_Q)              (5)

and

    Var_{(x,y)∼D}(W_Q) = E_{(x,y)∼D}(W_Q²) − ( E_{(x,y)∼D} W_Q )²
                       = E_{(x,y)∼D} [ E_{h₁∼Q} I(h₁(x) ≠ y) · E_{h₂∼Q} I(h₂(x) ≠ y) ] − R²(G_Q)
                       = E_{h₁∼Q} E_{h₂∼Q} [ E_{(x,y)∼D} I(h₁(x) ≠ y) I(h₂(x) ≠ y) − R(h₁)R(h₂) ]
                       ≝ E_{h₁∼Q} E_{h₂∼Q} cov_err(h₁, h₂),                        (6)

where cov_err(h₁, h₂) denotes the covariance of the errors of h₁ and h₂ on examples drawn by D.
The next theorem is therefore a direct consequence of the one-sided Chebychev (or Cantelli-Chebychev) inequality [6]:  Pr( W_Q ≥ a + E(W_Q) ) ≤ Var(W_Q) / (Var(W_Q) + a²)  for any a > 0.
Theorem 2 For any distribution Q over a class of classifiers, if R(G_Q) ≤ 1/2 then we have

    R(B_Q) ≤ Var_{(x,y)∼D}(W_Q) / [ Var_{(x,y)∼D}(W_Q) + (1/2 − R(G_Q))² ]
           = Var_{(x,y)∼D}(1 − 2W_Q) / E_{(x,y)∼D}[(1 − 2W_Q)²]  ≝  C_Q.
We will always use here the first form of C_Q. However, note that 1 − 2W_Q = Σ_{h∈H} Q(h) y h(x) is just the margin of the Q-convex combination realized on (x, y). Hence, the second form of C_Q is simply the variance of the margin divided by its second moment!
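On a finite sample, every quantity entering Theorem 2 is directly computable. A small sketch with hypothetical arrays (rows of votes are examples, columns are the +/-1 outputs of the classifiers, q holds the posterior weights, y the +/-1 labels):

import numpy as np

def theorem2_quantities(votes, q, y):
    w = ((votes != y[:, None]) * q).sum(axis=1)      # W_Q(x, y) per example
    gibbs_risk = w.mean()                            # estimate of R(G_Q)
    var_w = w.var()                                  # estimate of Var(W_Q)
    c_q = var_w / (var_w + (0.5 - gibbs_risk) ** 2)  # C_Q, valid when R(G_Q) <= 1/2
    bayes_risk = (w >= 0.5).mean()                   # majority-vote error (ties count)
    return gibbs_risk, var_w, c_q, bayes_risk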
The looser two-sided Chebychev inequality was used in [7] to bound the risk of random forests. However, the one-sided bound C_Q is much tighter. For example, the two-sided bound in [7] diverges when R(G_Q) → 1/2, but C_Q ≤ 1 whenever R(G_Q) ≤ 1/2. In fact, as explained in [8], the one-sided Chebychev bound is the tightest possible upper bound for any random variable which is based only on its expectation and variance.
The next result shows that, when the number of voters tends to infinity (and the weight of each voter tends to zero), the variance of W_Q will tend to 0 provided that the average of the covariance of the risks of all pairs of distinct voters is ≤ 0. In particular, the variance will always tend to 0 if the risks of the voters are pairwise independent.
Proposition 3 For any countable class H of classifiers and any distribution Q over H, we have

    Var_{(x,y)∼D}(W_Q) ≤ (1/4) Σ_{h∈H} Q²(h) + Σ_{h₁∈H} Σ_{h₂∈H : h₂≠h₁} Q(h₁) Q(h₂) cov_err(h₁, h₂).
The proof is straightforward and is left to the reader. The key observation that comes out of this result is that Σ_{h∈H} Q²(h) is usually much smaller than one. Consider, for example, the case where Q is uniform on H with |H| = n. Then q ≝ Σ_{h∈H} Q²(h) = 1/n. Moreover, if cov_err(h₁, h₂) ≤ 0 for each pair of distinct classifiers in H, then Var(W_Q) ≤ 1/(4n). Hence, in these cases, we have that C_Q ≤ O(1/n) whenever 1/2 − R(G_Q) is larger than some positive constant independent of n. Thus, even when R(G_Q) is large, we see that R(B_Q) can be arbitrarily close to 0 as we increase the number of classifiers having non-positive pairwise covariance of their risk.
To further motivate the use of CQ , we have investigated, on several UCI binary classification data
sets, how R(GQ ), Var(WQ ) and CQ are respectively related to R(BQ ). The results of Figure 1 have
been obtained with the Adaboost [9] algorithm used with "decision stumps" as weak learners. Each
data set was split in two halves: one used for training and the other for testing. In the chart relating
R(GQ ) and R(BQ ), we see that we almost always have R(BQ ) < R(GQ ). There is, however, no
clear correlation between R(BQ ) and R(GQ ). We also see no clear correlation between R(BQ ) and
Var(WQ ) in the second chart. In contrast, the chart of CQ vs R(BQ ) shows a strong correlation.
Indeed, it is almost a linear relation!
[Figure 1 appears here: three scatter plots over the UCI data sets (breast-cancer, breast-w, credit-g, hepatitis, ionosphere, kr-vs-kp, labor, mushroom, sick, sonar, vote), plotting R(G_Q), Var(W_Q), and C_Q on the test set against R(B_Q) on the test set.]
Figure 1: Relation, on various data sets, between R(B_Q) and R(G_Q), Var(W_Q), and C_Q.
4 New PAC-Bayes Theorems
A uniform estimate of C_Q can be obtained if we have uniform upper bounds on R(G_Q) and on the variance of W_Q. While the original PAC-Bayes theorem provides an upper bound on R(G_Q) that holds uniformly for all posteriors Q, obtaining such bounds for the variance of a random variable is still an issue. To achieve this goal, we will have to generalize the PAC-Bayes theorem for expectations over pairs of classifiers since E(W_Q²) is fundamentally such an expectation.
Definition 4 For any probability distribution Q over H, we define the expected joint error (e_Q), the expected joint success (s_Q), and the expected disagreement (d_Q) as

    e_Q ≝ E_{h₁∼Q} E_{h₂∼Q} E_{(x,y)∼D} I(h₁(x) ≠ y) I(h₂(x) ≠ y)
    s_Q ≝ E_{h₁∼Q} E_{h₂∼Q} E_{(x,y)∼D} I(h₁(x) = y) I(h₂(x) = y)
    d_Q ≝ E_{h₁∼Q} E_{h₂∼Q} E_{(x,y)∼D} I(h₁(x) ≠ h₂(x)).
The empirical estimates, over a training set S = ⟨z₁, …, z_m⟩, of these expectations are defined as usual, i.e.,  ê_Q ≝ E_{h₁∼Q} E_{h₂∼Q} [ (1/m) Σ_{i=1}^{m} I(h₁(xᵢ) ≠ yᵢ) I(h₂(xᵢ) ≠ yᵢ) ],  etc.
It is easy to see that

    e_Q = E_{(x,y)∼D} W_Q²,     s_Q = E_{(x,y)∼D} (1 − W_Q)²,     d_Q = E_{(x,y)∼D} 2W_Q(1 − W_Q).
Thus, we have

    e_Q + s_Q + d_Q = 1   and   2e_Q + d_Q = 2R(G_Q).                             (7)

This implies

    R(G_Q) = e_Q + (1/2) d_Q = (1/2)(1 + e_Q − s_Q)                               (8)
    Var(W_Q) = e_Q − (R(G_Q))² = e_Q − (e_Q + (1/2) d_Q)² = e_Q − (1/4)(1 + e_Q − s_Q)².   (9)

Moreover, in that new setting, the denominator of C_Q can elegantly be rewritten as

    Var(W_Q) + (1/2 − R(G_Q))² = 1/4 − d_Q/2.                                     (10)
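These identities are easy to verify numerically; the following sanity check (an illustration of ours) passes for any sample of W_Q values in [0, 1]:

import numpy as np

w = np.random.default_rng(0).uniform(0, 1, size=10000)  # stand-in W_Q values
e_q = np.mean(w ** 2)
s_q = np.mean((1 - w) ** 2)
d_q = np.mean(2 * w * (1 - w))
r_gibbs = np.mean(w)

assert np.isclose(e_q + s_q + d_q, 1.0)                              # Eq. (7)
assert np.isclose(r_gibbs, e_q + 0.5 * d_q)                          # Eq. (8)
assert np.isclose(np.var(w), e_q - (e_q + 0.5 * d_q) ** 2)           # Eq. (9)
assert np.isclose(np.var(w) + (0.5 - r_gibbs) ** 2, 0.25 - d_q / 2)  # Eq. (10)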
The next theorem can be used to bound separately either e_Q, s_Q or d_Q.

Theorem 5 For any prior distribution P over H, and any δ ∈ ]0, 1], we have:

    Pr_{S∼D^m}( ∀ Q over H :  kl(α̂_Q ‖ α_Q) ≤ (1/m) [ 2·KL(Q‖P) + ln((m+1)/δ) ] )  ≥  1 − δ

where α_Q can be either e_Q, s_Q or d_Q.
In contrast with Theorem 5, the next theorem will enable us to bound directly Var(W_Q), by bounding any pair of expectations among e_Q, s_Q and d_Q.

Theorem 6 For any prior distribution P over H, and any δ ∈ ]0, 1], we have:

    Pr_{S∼D^m}( ∀ Q over H :  kl(α̂_Q, β̂_Q ‖ α_Q, β_Q) ≤ (1/m) [ 2·KL(Q‖P) + ln((m+1)(m+2)/(2δ)) ] )  ≥  1 − δ

where α_Q and β_Q can be any two distinct choices among e_Q, s_Q and d_Q, and where

    kl(q₁, q₂ ‖ p₁, p₂) ≝ q₁ ln(q₁/p₁) + q₂ ln(q₂/p₂) + (1 − q₁ − q₂) ln((1 − q₁ − q₂)/(1 − p₁ − p₂))

is the Kullback-Leibler divergence between the distributions of two trivalent random variables Y_q and Y_p with P(Y_q = a) = q₁, P(Y_q = b) = q₂ and P(Y_q = c) = 1 − q₁ − q₂ (and similarly for Y_p).
The proof of Theorem 5 can be seen as a special case of Theorem 1. The proof of Theorem 6 essentially follows the proof of Theorem 1 given in [4]; except that it is based on a trinomial distribution instead of a binomial one².
² For the proofs of these theorems, see a long version of the paper at http://www.ift.ulaval.ca/~laviolette/Publications/publications.html.
5 PAC-Bayes Bounds for Var(W_Q) and R(B_Q)
From the two theorems of the preceding section, one can easily derive several PAC-Bayes bounds of the variance of W_Q and therefore of the majority vote. C_Q, however, is a quotient: an upper bound on C_Q will degrade rapidly if the bounds on the numerator and the denominator are not tight, especially for majority votes obtained by boosting algorithms where both the numerator and the denominator tend to be small. For this reason, we will derive more than one PAC-Bayes bound for the majority vote, and compare their accuracy. First, we need the following notations that are related to Theorems 1, 5 and 6. Given any prior distribution P over H,

    R^δ_{Q,S} ≝ { r : kl(R_S(G_Q) ‖ r) ≤ (1/m) [ KL(Q‖P) + ln((m+1)/δ) ] },
    E^δ_{Q,S} ≝ { e : kl(ê_Q ‖ e) ≤ (1/m) [ 2·KL(Q‖P) + ln((m+1)/δ) ] },
    D^δ_{Q,S} ≝ { d : kl(d̂_Q ‖ d) ≤ (1/m) [ 2·KL(Q‖P) + ln((m+1)/δ) ] },
    A^δ_{Q,S} ≝ { (e, s) : kl(ê_Q, ŝ_Q ‖ e, s) ≤ (1/m) [ 2·KL(Q‖P) + ln((m+1)(m+2)/(2δ)) ] }.
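Evaluating these sets reduces to computing suprema and infima of one-dimensional kl balls, which can be done by bisection since kl(q̂ ‖ ·) decreases below q̂ and increases above it. A minimal sketch (ours; kl_bernoulli is as in the sketch after Theorem 1):

import math

def kl_bernoulli(q, p):
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_ball_bound(q_hat, eps, upper=True):
    lo, hi = (q_hat, 1.0) if upper else (0.0, q_hat)
    for _ in range(60):
        mid = (lo + hi) / 2
        inside = kl_bernoulli(q_hat, mid) <= eps
        if upper:
            lo, hi = (mid, hi) if inside else (lo, mid)
        else:
            lo, hi = (lo, mid) if inside else (mid, hi)
    return lo if upper else hi

# e.g. sup E^delta_{Q,S} = kl_ball_bound(e_hat, (2*kl_qp + math.log((m+1)/delta)) / m)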
Since v/(v+a) = 1/(1+a/v), it follows from Theorem 2 that an upper bound of both Var(W_Q) and R(G_Q) will give an upper bound on C_Q, and hence on R(B_Q). A first bound can be obtained, from Equation 9, by suitably applying Theorem 5 (with α_Q = e_Q) and Theorem 1.
PAC-Bound 1 For any prior distribution P over H, and any δ ∈ ]0, 1], we have

    Pr_{S∼D^m}( ∀ Q over H :  Var_{(x,y)∼D}(W_Q) ≤ sup E^{δ/2}_{Q,S} − ( inf R^{δ/2}_{Q,S} )² )  ≥  1 − δ,

    Pr_{S∼D^m}( ∀ Q over H :  R(B_Q) ≤ [ sup E^{δ/2}_{Q,S} − ( inf R^{δ/2}_{Q,S} )² ] / [ sup E^{δ/2}_{Q,S} − ( inf R^{δ/2}_{Q,S} )² + ( 1/2 − sup R^{δ/2}_{Q,S} )² ] )  ≥  1 − δ.
Since Bound 1 necessitates two PAC approximations to calculate the variance, it would be better
if we could obtain directly an upper bound for Var(WQ ). The following result, which is a direct
consequence of Theorem 6 and Equation 9, shows how it can be done.
PAC-Bound 2 For any prior distribution P over H, and any δ ∈ ]0, 1], we have

    Pr_{S∼D^m}( ∀ Q over H :  Var_{(x,y)∼D}(W_Q) ≤ sup_{(e,s)∈A^δ_{Q,S}} [ e − (1/4)(1 + e − s)² ] )  ≥  1 − δ,

    Pr_{S∼D^m}( ∀ Q over H :  R(B_Q) ≤ sup_{(e,s)∈A^{δ/2}_{Q,S}} [ e − (1/4)(1 + e − s)² ] / ( sup_{(e,s)∈A^{δ/2}_{Q,S}} [ e − (1/4)(1 + e − s)² ] + ( 1/2 − sup R^{δ/2}_{Q,S} )² ) )  ≥  1 − δ.
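The two-dimensional sup in Bound 2 can be approximated by brute force over the (e, s) simplex; a sketch of ours (the grid resolution is an arbitrary choice):

import numpy as np

def kl_trinomial(q1, q2, p1, p2, tiny=1e-12):
    qs = np.clip(np.array([q1, q2, 1 - q1 - q2]), tiny, 1.0)
    ps = np.clip(np.array([p1, p2, 1 - p1 - p2]), tiny, 1.0)
    return float(np.sum(qs * np.log(qs / ps)))

def bound2_variance_sup(e_hat, s_hat, eps_ball, n=300):
    best = -np.inf
    for e in np.linspace(0.0, 1.0, n):
        for s in np.linspace(0.0, 1.0, n):
            if e + s > 1.0:
                continue
            if kl_trinomial(e_hat, s_hat, e, s) <= eps_ball:
                best = max(best, e - 0.25 * (1 + e - s) ** 2)
    return best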
As illustrated in Figure 2, Bound 2 is generally tighter than Bound 1. This gain is principally due
to the fact that the values of e and s, that are used to bound the variance, are tied together inside the
kl() and have to trade off their values (e "tries to be" as large as possible and s as small as possible).
Because of this tradeoff, e is generally not an upper bound of eQ , and s not a lower bound of sQ .
In the semi-supervised framework, we can achieve better results, because the labels of the examples
do not affect the value of dQ (see Definition 4). Hence, in presence of a large amount of unlabelled
data, one can use Theorem 5 to obtain very accurate upper and lower bounds of dQ . This combined
with an upper bound of eQ , still computed via Theorem 5 but on the labelled data, gives rise to the
following semi-supervised upper bound³ of Var(W_Q). The bound on R(B_Q) then follows from
Theorem 2 and Equation 10.
PAC-Bound 3 (semi-supervised bound) For any prior distribution P over H, and any δ ∈ ]0, 1]:

    Pr_{S∼D^m, S′∼D^{m′}_{unlabelled}}( ∀ Q over H :  Var_{(x,y)∼D}(W_Q) ≤ sup E^δ_{Q,S} − ( sup E^δ_{Q,S} + (1/2) inf D^δ_{Q,S′} )² )  ≥  1 − δ

    Pr_{S∼D^m, S′∼D^{m′}_{unlabelled}}( ∀ Q over H :  R(B_Q) ≤ [ sup E^δ_{Q,S} − ( sup E^δ_{Q,S} + (1/2) inf D^δ_{Q,S′} )² ] / [ 1/4 − (1/2) sup D^δ_{Q,S′} ] )  ≥  1 − δ
We see, on the left part of Figure 2, that Bound 2 on Var(WQ ) is much tighter than Bound 1. We
can also see that, by using unlabeled data⁴ to estimate d_Q, Bound 3 provides another significant
improvement. These numerical results were obtained by using Adaboost [9] with decision stumps
on the Mushroom UCI data set (which contains 8124 examples). This data set was randomly split
into two halves: one for training and one for testing.
[Figure 2 appears here: two panels, each plotted against the number of boosting rounds T (1 to 41). Left panel: Var(W_Q) on the test set together with PAC-Bounds 1, 2 and 3 on Var(W_Q). Right panel: C_Q and R(B_Q) on the test set together with PAC-Bounds 1, 2 and 3 on R(B_Q) and the bound 2R(G_Q) obtained using Theorem 1.]
Figure 2: Bounds on Var(W_Q) (left) and bounds on R(B_Q) (right).
As illustrated by Figure 2, Bound 2 and Bound 3 are (resp. for the supervised and semi-supervised
frameworks) very tight upper bounds of the variance. Unfortunately they do not lead to tight upper
bounds of R(BQ ). Indeed, one can see in Figure 2 that after T = 8, all the bounds are degrading
even if the true value of CQ (on which they are based) continues to decrease. This drawback is due
to the fact that, when the value of dQ tends to 1/2, the denominator of CQ tends to 0. Hence, if dQ
is close to 1/2, Var(WQ ) must be small as well. Thus, any slack in the bound of Var(WQ ) has a
multiplicative effect on each of the three proposed PAC-bounds of R(BQ ). Unfortunately, boosting
algorithms tend to construct majority votes with an expected disagreement d_Q just slightly under
1/2. Based on the next proposition, we will show that this drawback is, in a sense, unavoidable.
Proposition 7 (Inapproachability result) Let Q be any distribution over a class of classifiers, and let B < 1 be any upper bound of C_Q which holds with confidence 1 − δ. If R(G_Q) < 1/2 then

    1/2 − √( (1/4 − d_Q/2)(1 − B) )

is an upper bound of R(G_Q) which holds with confidence 1 − δ.
³ It follows, from an easy calculation, that a lower bound of d_Q, together with an upper bound of e_Q, gives rise to an upper bound of e_Q − (e_Q + (1/2) d_Q)². By Equation 9, we then obtain an upper bound of Var(W_Q).
⁴ The UCI database (used here) does not have any unlabeled examples. To simulate the extreme case where we have an infinite amount of unlabeled data, we simply used the empirical value of d_Q computed on the testing set.
For the data set used in Figure 2, Proposition 7, together with Bound 3 on R(BQ ) (viewed as
a bound on C_Q), gives a PAC-bound on R(G_Q) which is just slightly lower (≈ 0.5%) than the
classical PAC-Bayes bound on R(GQ ) given by Theorem 1. Since any bound better than Bound 3
for CQ will continue to improve the bound on R(GQ ), it seems unlikely that such a better bound
exists. Moreover, this drawback should occur for any bound on the majority vote that only considers
Gibbs risk and the variance of W_Q because, as already explained, C_Q is the tightest possible bound
of R(BQ ) that is based only on E(WQ ) and Var(WQ ). Hence, to improve our results in the situation
where d_Q is close to 1/2, one will have to consider higher moments. However, it is not clear that
this will lead to a better bound of R(BQ ) because, even if Theorem 5 generalizes to higher moments,
its tightness is then degrading. Indeed, for the k-th moment, the factor 2 that multiplies KL(Q‖P)
in Theorem 5 grows to k. However, it might be possible to overcome this degradation by using a
generalization of Theorem 6 as we have done in this paper to obtain our tightest supervised bound for
the variance (Bound 2). Indeed, if we evaluate the tightness of that bound on the variance (w.r.t. its
value on the test set), and compare it with the tightness of the bound on R(GQ ) given by Theorem 1,
we find that both accuracies are at about 3%. This is to be contrasted with the tightness of Bound 1
and seems to indicate that we have prevented degradation even if the variance deals with both the
first and the second moments of W_Q, whereas the Gibbs risk deals only with the first moment.
6 Conclusion
We have derived a risk bound for the weighted majority vote that depends on the mean and variance
of the error of its associated Gibbs classifier (Theorem 2). The proposed bound is based on the one-sided Chebychev inequality, which is the tightest inequality for any real-valued random variables
given only the expectation and the variance. As shown on Figures 1, this bound seems to have a
strong predictive power on the risk of the majority vote.
We have also shown that the original PAC-Bayes Theorem, together with new ones, can be used to
obtain high confidence estimates of this new risk bound that hold uniformly for all posterior distributions. Moreover, the new PAC-Bayes theorems give rise to the first uniform bounds on the variance
of the Gibbs risk (more precisely, the variance of the associated random variable W_Q). Even if there
are arguments showing that bounds of higher moments of WQ should be looser, we have empirically
found that one of the proposed bounds (Bound 2) does not show any sign of degradation in comparison with the classical PAC-Bayes bound on R(GQ ) (which is the first moment). Surprisingly, there
is an improvement for Bound 3 in the semi-supervised framework. This also opens up the possibility
that the generalization of Theorem 2 to higher moment be applicable to real data. Such generalizations might overcome the main drawback of our approach, namely, the fact that the PAC-bounds,
based on Theorem 2, degrade when the expected disagreement (dQ ) is close to 1/2.
Acknowledgments: Work supported by NSERC Discovery grants 262067 and 0122405.
References
[1] David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999.
[2] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5-21, 2003.
[3] David McAllester. Simplified PAC-Bayesian margin bounds. Proceedings of the 16th Annual Conference on Learning Theory, Lecture Notes in Artificial Intelligence, 2777:203-215, 2003.
[4] François Laviolette and Mario Marchand. PAC-Bayes risk bounds for sample-compressed Gibbs classifiers. Proc. of the 22nd International Conference on Machine Learning (ICML 2005), pages 481-488, 2005.
[5] John Langford and John Shawe-Taylor. PAC-Bayes & margins. In S. Becker, S. Thrun and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 423-430. MIT Press, Cambridge, MA, 2003.
[6] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer Verlag, New York, NY, 1996.
[7] Leo Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
[8] Dimitris Bertsimas and Ioana Popescu. Optimal inequalities in probability theory: A convex optimization approach. SIAM J. on Optimization, 15(3):780-804, 2005.
[9] Robert E. Schapire and Yoram Singer. Improved boosting using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.
2,158 | 296 | How Receptive Field Parameters Affect Neural
Learning
Bartlett W. Mel
CNS Program
Caltech, 216-76
Pasadena, CA 91125
Stephen M. Omohundro
ICSI
1947 Center St., Suite 600
Berkeley, CA 94704
Abstract
We identify the three principle factors affecting the performance of learning by networks with localized units: unit noise, sample density, and the
structure of the target function. We then analyze the effect of unit receptive field parameters on these factors and use this analysis to propose a
new learning algorithm which dynamically alters receptive field properties
during learning.
1 LEARNING WITH LOCALIZED RECEPTIVE FIELDS
Locally-tuned representations are common in both biological and artificial neural
networks. Several workers have analyzed the effect of receptive field size, shape, and
overlap on representation accuracy: (Baldi, 1988), (Ballard, 1987), and (Hinton,
1986). This paper investigates the additional interactions introduced by the task of
function learning. Previous studies which have considered learning have for the most
part restricted attention to the use of the input probability distribution to determine
receptive field layout (Kohonen, 1984) and (Moody and Darken, 1989). We will see
that the structure of the function being learned may also be advantageously taken
into account.
Function learning using radial basis functions (RBF's) is currently a popular technique (Broomhead and Lowe, 1988) and serves as an adequate framework for our
discussion. Because we are interested in constraints on biological systems, we must
explicitly consider the effects of unit noise. The goal is to choose the layout of
receptive fields so as to minimize average performance error.
Let y = f(x) be the function the network is attempting to learn from example
(x, y) pairs. The network consists of N units whose locally-tuned receptive fields
are distributed across the input space. The activity of the ith unit is the sum of a
radial basis function φᵢ(x) and a mean-zero noise process ηᵢ(x). A typical form for φᵢ is an n-dimensional Gaussian parametrized by its center xᵢ and width σᵢ:

    φᵢ(x) = exp( −‖x − xᵢ‖² / (2σᵢ²) ).                                           (1)
The function f(x) is approximated as a weighted sum of the output of N of these units:

    F(x) = Σ_{i=1}^{N} wᵢ [ φᵢ(x) + ηᵢ(x) ].                                      (2)
The weights wᵢ are trained using the LMS (least mean square) rule, which attempts
to minimize the mean squared distance between f and F over the set of training
patterns p for the current layout of receptive fields. In the next section we address
the additional considerations that arise when the receptive field centers and sizes
are allowed to vary in addition to the weights.
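A minimal one-dimensional sketch (with hypothetical parameter values) of the noisy network of equations (1)-(2) trained with this LMS rule:

import numpy as np

rng = np.random.default_rng(0)

def lms_train(x, y, centers, sigma, noise_std=0.05, lr=0.1, epochs=50):
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            phi = np.exp(-(xi - centers) ** 2 / (2 * sigma ** 2))    # Eq. (1)
            units = phi + noise_std * rng.standard_normal(len(centers))
            err = yi - units @ w                                     # F(x), Eq. (2)
            w += lr * err * units                                    # LMS update
    return w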
2 TWO KINDS OF ERROR
To understand the effect of receptive field properties on performance we must distinguish two basic sources of error. The first we call estimation error and is due to
the intrinsic unit noise. The other we call approximation error and arises from the
inability of the unit activity functions to represent the target function.
2.1 ESTIMATION ERROR
The estimation error can be characterized by the variance in F(x) | x. Because
of the intrinsic unit noise, repeated stimulation of a network with the same input
vector x₀ will generate a distribution of outputs F(x₀). If this variance is large,
it can be a significant contribution to the MSE (fig. 1). Consideration of noisy
units is most relevant to biological networks and analog hardware implementations
of artificial units. Averaging is a powerful statistical technique for reducing the
variance of a distribution. In the current context, averaging corresponds to receptive
field overlap. In general, the more overlap the better the noise reduction in F(x)
(though see section 2.2). The overlap of units at x₀ can be increased by either
increasing the density of receptive field centers there, or broadening the receptive
fields of units in the neighborhood.
From equation 2, F(x) may be rewritten

    F(x) = Σ_{i=1}^{N} φᵢ(x)wᵢ + e(x),                                            (3)

where the summation term is the noise-free LMS approximation to f(x), and the second term

    e(x) = Σ_{i=1}^{N} ηᵢ(x)wᵢ                                                    (4)
[Figure 1 appears here: a joint-density sketch over x and y, with panels (a)-(f) marked: (a) low input density, (b) high input density, (c) low estimation error, (d) high estimation error, (e) low approximation error, (f) high approximation error; regions are labeled a,c,e and b,c,f.]
Figure 1: A. Estimation error arises from the variance of F(x) | x. B. Approximation error is the deviation of the mean from the desired response, (f(x) − ⟨F(x)⟩)².
is the estimation error. Since e(x) has mean zero for all x, its variance is

    Var[e(x)] = E[e²(x)] = E[ ( Σ_{i=1}^{N} ηᵢ(x)wᵢ )² ].                         (5)
If each unit has the same noise profile, this reduces to

    Var[e] = Var[η] Σ_{i=1}^{N} wᵢ².                                              (6)
The dependence of estimation error on the size of weights explains why increasing
the density of receptive fields in the input space reduces noise in the output of the
learning network. Though the number of units, and hence weights, that contribute
to the output is increased in this manipulation, the estimation error is proportional
to the sum of squared weights (6). The benefit achieved by making weights smaller
outruns the cost of increasing their number. For example, each receptive field
with weight wᵢ may be replaced by two copies of itself with weight wᵢ/2 and leave F(x) unchanged. The new sum of squared weights, Σ_{i=1}^{N} 2(wᵢ/2)², and hence the estimation error, is reduced by a factor of two, however.
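This argument is easy to check numerically under the additive-noise model of equation (6); a small illustration of ours:

import numpy as np

var_eta = 0.01                                 # hypothetical unit noise variance
w = np.array([0.8, -0.3, 1.2])
var_before = var_eta * np.sum(w ** 2)          # estimation error, Eq. (6)
w_split = np.repeat(w / 2, 2)                  # each field -> two copies at w/2
var_after = var_eta * np.sum(w_split ** 2)
assert np.isclose(var_after, var_before / 2)   # reduced by a factor of two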
A second strategy that may lead to a reduction in the size of weights involves
broadening receptive fields (see section 2.2 for conditions). In general, broadening
receptive fields increases the unweighted output of the network Σ_{i=1}^{N} φᵢ(x), implying that the weights wᵢ must be correspondingly reduced in order that ‖F(x)‖ remain approximately constant.
These observations suggest that the effects of noise are best mitigated by allocating receptive field resources in regions of the input space where units are heavily
weighted. It is interesting to note that under the assumption of additive noise,
the functional form φ of the receptive fields themselves has no direct effect on the
estimation error in F(x). The response profiles may, however, indirectly affect estimation error via the weight vector, since LMS weights on receptive fields of different
functional forms will generally be different.
2.2 APPROXIMATION ERROR
The second fundamental type of error, which we call approximation error, persists
even for noise-free input units, and is due to error in the "fit" of the approximating
function F to the target function f (fig. 1). Two aspects of approximation error
are distinguished in the following sections.
2.2.1 MISMATCH OF FUNCTIONAL FORM
First, there may be mismatch between the specific functional form of the basis
functions and that of the target function. For example, errors naturally arise when
linear RBF's are used to approximate nonlinear target functions, since curves cannot
be perfectly fit with straight lines. However, these errors may be made vanishingly
small by increasing the density of receptive fields. For example, if linear receptive
fields are trained to best fit a curved region of f(x) with second derivative c, then
the mean squared error, J~~~2(~x2 - a)2 has a value O(c 2d5). This type of error
falls off as the 5th power of d, where d is the spacing of the receptive fields. In a
similar result, (Baldi and lIeilegenberg, 1988) show that approximations to both
linear and quadratic functions improve exponentially fast with increasing density of
Gaussian receptive fields.
2.2.2 MISMATCH OF SPATIAL SCALE
A more general source of error in fitting target functions occurs when receptive
fields are either too broad or too widely spaced relative to the fine spatial structure
of f. Both of these factors can act to locally limit the high frequency content of the
approximation F, which may give rise to severe approximation errors.
The Nyquist (and Shannon) result on signal sampling says that the highest frequency which may be recovered from a sampled signal is half the sampling frequency. If the receptive field density is not high enough then this kind of result
shows that high frequency fine structure in the function being approximated will
be lost.
When the unit receptive fields are excessively wide, they can also wash out the high
frequency fine structure of the function. One can think of F as a "blurred" version
of the weight vector, which in turn is a sampled version of f. The blurring
is greater for wide receptive fields. The density and width should be chosen to
match their frequency transfer characteristics and best approximate the function.
For one-dimensional Gaussian receptive fields of width σ, we choose the receptive field spacing d to be

    d = (π/2) σ.                                                                  (7)
A density that satisfies this type of condition will be referred to in the next section
as a "frequency-matched" density.
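A small helper illustrating rule (7): the spacing, and hence the number of units needed to tile an interval at a given width, follows directly from sigma (the function name and interface are ours):

import math

def frequency_matched_centers(lo, hi, sigma):
    d = (math.pi / 2.0) * sigma                  # spacing from Eq. (7)
    n = max(1, math.ceil((hi - lo) / d))
    step = (hi - lo) / n
    return [lo + (i + 0.5) * step for i in range(n)]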
3 A RECEPTIVE FIELD DESIGN STRATEGY
In this section we describe an adaptive learning strategy based on the results above.
Figure 2 shows the results of an experimental implementation of this procedure.
It is possible to empirically measure the magnitude of the two sources of error
analyzed above. Since we wish to minimize the expected performance error for the
network as a whole, we weight our measurements of each type of error at each x
by the input probability p(x). Errors in high density regions count more. Small
magnitude errors may be important in high probability regions while even large
errors may be neglected in low probability regions. The learning algorithm adjusts
the layout of receptive fields to adjust to each form of error in turn. The steps
involved follow.
1. Uniformly distribute broad receptive fields at frequency-matched density
throughout regions of the input space that contain data. (In our 1-d example, data, and hence receptive fields, are present across the entire domain.)
2. Train the network weights to an LMS solution with fixed receptive fields. Using
the trained network, accrue approximation errors across the input space.
3. Where the approximation error exceeds a threshold T anywhere within a unit's
receptive field, split the receptive field into two subfields that are as small as
possible while still locally maintaining frequency-matched density. (This depends on receptive field profile). Repeat steps 2 and 3 until the approximation
error is under threshold across the entire input space. We now have a layout where
receptive field width and density are locally matched to the spatial frequency
content of the target function, and approximation error is small and uniform
across the input space. Note that since errors accrue according to p(x), we
have preferentially allocated resources (through splitting) in regions with both
high error and high input probability.
4. Using the current network, measure and accrue estimation errors across the
input space.
5. Where the estimation error exceeds T anywhere within a unit's receptive field,
replace the receptive field by two of the same size, adding a small random
pertubation to each center. Repeat from 4 until estimation error is below
threshold across entire input space. We now have a layout where receptive
field density is highest where the effects of noise were most severe, such that
estimation error is now small and uniform across the input space. Once again,
we have preferentially allocated resources in regions with both high error and
high input probability.
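The following sketch implements steps 1-3 of this loop for a 1-d problem, assuming Gaussian units and batch least squares as a stand-in for iterative LMS; all names, constants and the toy target are our own illustrative choices, and the noise-driven doubling of steps 4-5 would reuse the same loop with the estimation error accrued in place of the approximation error.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(x, c, s):
    return np.exp(-0.5 * ((x[:, None] - c[None, :]) / s[None, :]) ** 2)

def fit(x, y, c, s):
    # Step 2: batch least squares on the unit activations (stand-in for LMS).
    Phi = gauss(x, c, s)
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return Phi @ w

# Toy target with fine spatial structure near the center of the domain.
x = np.sort(rng.uniform(0.0, 1.0, 500))
f = np.sin(2 * np.pi * x) + 0.4 * np.sin(40 * np.pi * x) * (np.abs(x - 0.5) < 0.15)
y = f + 0.05 * rng.normal(size=x.size)

# Step 1: broad units at frequency-matched spacing d = (pi / 2) * sigma.
sigma = 0.08
c = np.arange(0.0, 1.0 + 1e-9, np.pi * sigma / 2.0)
s = np.full(c.size, sigma)

T = 0.01  # error threshold on the mean squared residual within a field
for _ in range(6):  # Step 3: split over-burdened fields, retrain, repeat.
    err2 = (fit(x, y, c, s) - y) ** 2  # errors accrue where data lie (p(x)-weighted)
    new_c, new_s = [], []
    for ci, si in zip(c, s):
        in_rf = np.abs(x - ci) < 2 * si
        if in_rf.any() and err2[in_rf].mean() > T:
            # Split into two half-width subfields, keeping d = (pi/2) * sigma locally.
            new_c += [ci - np.pi * si / 8, ci + np.pi * si / 8]
            new_s += [si / 2, si / 2]
        else:
            new_c.append(ci)
            new_s.append(si)
    if len(new_c) == len(c):
        break
    c, s = np.array(new_c), np.array(new_s)

print(len(c), "units, final MSE:", float(((fit(x, y, c, s) - y) ** 2).mean()))
```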
Figure 2 illustrates this process for a noisy, one-dimensional learning problem. Each frame shows the estimation error, the approximation error, the target function and network output, and the unit response functions.

Figure 2: Results of an adaptive strategy for choosing receptive field size and density. See text for details.

In the top frame 24 units
with broad, noisy receptive fields have been LMS-trained to fit the target function.
Estimation error is visible across the entire domain, though it is concentrated in
the small region just to the right of center where the input probability is peaked.
Approximation error is concentrated in the central region which contains high spatial frequencies, with minor secondary peaks in other regions, including the region
of high input probability.
In the second frame, the receptive field width was uniformly decreased and density was uniformly increased to the point where MSE fell below T; 384 units were
required. In the third frame, the adaptive strategy presented above was used to
allocate units and choose widths. Fewer than half as many units (173) were needed
in this example to achieve the same MSE as in the second frame. In higher dimensions, and with sparser data, this kind of recursive splitting and doubling strategy
should be even more important.
4
CONCLUSIONS
In this paper we have shown how receptive field size, shape, density, and noise characteristics interact with the frequency content of target functions and input probability density to contribute to both estimation and approximation errors during
supervised function learning. Based on these interrelationships, a simple, adaptive,
error-driven strategy for laying out receptive fields was demonstrated that makes
efficient use of unit resources in the attempt to minimize mean squared performance
error.
An improved understanding of the role of receptive field structure in learning may
in the future help in the interpretation of patterns of coarse-coding seen in many
biological sensory and motor systems.
References
Baldi, P. & Heiligenberg, W. How sensory maps could enhance resolution through
ordered arrangements of broadly tuned receptors. Biol. Cybern., 1988, 59, 313-318.
Ballard, D.H. Interpolation coding: a representation for numbers in neural models.
Biol. Cybern., 1987, 57, 389-402.
Broomhead, D.S. & Lowe, D. Multivariable functional interpolation and adaptive
networks. Complex Systems, 1988, 2, 321-355.
Hinton, G.E. (1986) Distributed representations. In Parallel distributed processing: explorations in the microstructure of cognition, vol. 1, D.E. Rumelhart, J .L.
McClelland, (Eds.), Bradford, Cambridge.
Kohonen, T. Self organization and associative memory. Springer-Verlag: Berlin,
1984.
MacKay, D. Hyperacuity and coarse-coding. In preparation.
Moody, J. & Darken, C. Fast learning in networks of locally-tuned processing units.
Neural Computation, 1989, 1, 281-294.
2,159 | 2,960 | Balanced Graph Matching
Timothee Cour, Praveen Srinivasan and Jianbo Shi
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
{timothee,psrin,jshi}@seas.upenn.edu
Abstract
Graph matching is a fundamental problem in Computer Vision and Machine
Learning. We present two contributions. First, we give a new spectral relaxation
technique for approximate solutions to matching problems, that naturally incorporates one-to-one or one-to-many constraints within the relaxation scheme. The
second is a normalization procedure for existing graph matching scoring functions
that can dramatically improve the matching accuracy. It is based on a reinterpretation of the graph matching compatibility matrix as a bipartite graph on edges for
which we seek a bistochastic normalization. We evaluate our two contributions on
a comprehensive test set of random graph matching problems, as well as on image
correspondence problem. Our normalization procedure can be used to improve
the performance of many existing graph matching algorithms, including spectral
matching, graduated assignment and semidefinite programming.
1
Introduction
Many problems of interest in Computer Vision and Machine Learning can be formulated as a problem of correspondence: finding a mapping between one set of points and another set of points.
Because these point sets can have important internal structure, they are often considered not simply
as point sets, but as two separate graphs. As a result, the correspondence problem is commonly referred to as graph matching. In this setting, graph nodes represent feature points extracted from each
instance (e.g. a test image and a template image) and graph edges represent relationships between
feature points. The problem of graph matching is to find a mapping between the two node sets that
preserves as much as possible the relationships between nodes.
Because of its combinatorial nature, graph matching is either solved exactly in a very restricted setting (bipartite matching, for example with the Hungarian method) or approximately. Most of the recent literature on graph matching has followed this second path, developing approximate relaxations
to the graph matching problem. In this paper, we make two contributions. The first contribution is
a spectral relaxation for the graph matching problem that incorporates one-to-one or one-to-many
mapping constraints, represented as affine constraints. A new mathematical tool is developed for that
respect, Affinely Constrained Rayleigh Quotients. Our method achieves comparable performance to
state of the art algorithms, while offering much better scalability. Our second contribution relates to
the graph matching scoring function itself, which we argue, is prone to systematic confusion errors.
We show how a proper bistochastic normalization of the graph matching compatibility matrix is able
to considerably reduce those errors and improve the overall matching performance. This improvement is demonstrated both for our spectral relaxation algorithm, and for three state of the art graph
matching algorithms: spectral matching, graduated assignment and semidefinite programming.
2
Problem formulation
Attributed Graph We define an attributed graph [1] as a graph G = (V, E, A) where each edge e = ij ∈ E is assigned an attribute A_e, which could be a real number or a vector in the case of multiple attributes. We represent vertex attributes as special edge attributes, i.e. A_{ii} for a vertex i. For
example, the nodes could represent feature points with attributes for spatial location/orientation and
image feature descriptors, while edge attributes could represent spatial relationships between two
nodes such as relative position/orientation.
Graph Matching Cost Let G = (V, E, A), G' = (V', E', A') be two attributed graphs. We want
to find a mapping between V and V' that best preserves the attributes between edges e = ij ∈ E
and e' = i'j' ∈ E'. Equivalently, we seek a set of correspondences, or matches M = {ii'}, so as to
maximize the graph matching score, defined as:

ε_GM(M) = Σ_{ii'∈M, jj'∈M} f(A_{ij}, A'_{i'j'}) = Σ_{e~e'} f(A_e, A'_{e'}),    (1)

with the shorthand notation e ~ e' iff ii' ∈ M, jj' ∈ M. The function f(·, ·) measures the similarity
between edge attributes. As a special case, f(A_{ii}, A'_{i'i'}) is simply the score associated with the
match ii'. In the rest of the paper, we let n = |V|, m = |E|, and likewise for n', m'.
Formulation as Integer Quadratic Program We explain here how to rewrite (1) in a more manageable form. Let us represent M as a binary vector x ∈ {0, 1}^{nn'}: x_{ii'} = 1 iff ii' ∈ M. For most
problems, one requires the matching to have a special structure, such as one-to-one or one-to-many:
this is the mapping constraint. For one-to-one matching, this is Σ_{i'} x_{ii'} = 1 and Σ_i x_{ii'} = 1 (with
x binary), and M is a permutation matrix. In general, this is an affine inequality constraint of the
form Cx ≤ b. With those notations, (1) takes the form of an Integer Quadratic Program (IQP):

max ε(x) = x^T W x   s.t.   Cx ≤ b,   x ∈ {0, 1}^{nn'}    (2)

W is an nn' × nn' compatibility matrix with W_{ii',jj'} = f(A_{ij}, A'_{i'j'}). In general such an IQP is NP-hard, and approximate solutions are needed.
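As a concrete illustration of how W is assembled from the two attribute sets, here is a short sketch (our own helper, not the authors' code), using the Gaussian similarity f(a, a') = exp(-|a - a'|^2) that appears in the experiments of Section 6; a zero attribute stands in for "no edge":

```python
import numpy as np

def compatibility_matrix(A, Ap):
    # A is n x n, Ap is n' x n'; match hypothesis ii' is flattened as i * n' + i'.
    n, npr = A.shape[0], Ap.shape[0]
    W = np.zeros((n * npr, n * npr))
    for i in range(n):
        for j in range(n):
            if A[i, j] == 0:             # no edge ij in the first graph
                continue
            for ip in range(npr):
                for jp in range(npr):
                    if Ap[ip, jp] == 0:  # no edge i'j' in the second graph
                        continue
                    W[i * npr + ip, j * npr + jp] = np.exp(-(A[i, j] - Ap[ip, jp]) ** 2)
    return W
```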
Graph Matching Relaxations Continuous relaxations of the IQP (2) are among the most successful methods for non-bipartite graph matching, and so we focus on them. We review three state of the
art matching algorithms: semidefinite programming (SDP) [2, 3], graduated assignment (GA) [4],
and spectral matching (SM) [5]. We also introduce a new method, Spectral Matching with Affine
Constraints (SMAC), that provides a tighter relaxation than SM (and more accurate results in our experiments) while still retaining the speed and scalability benefits of spectral methods, which we also
quantify in our evaluations. All of these methods relax the original IQP into a continuous program
(removing the x ∈ {0, 1} constraint), so we omit this step in the derivations below.
SDP Relaxation In [2], the authors rewrite the objective as a matrix inner product: x^T W x = ⟨X, W_eq⟩, where X = [1; x][1; x]^T is an (nn' + 1) × (nn' + 1) rank-one matrix and

W_eq = [ 0      d^T/2
         d/2    W - D ],

where d = diag(W) and D is a diagonal matrix of diagonal d. The non-convex rank-one constraint is further relaxed by only requiring X to be positive semi-definite. Finally the relaxation is: max ⟨X, W_eq⟩ s.t. ⟨X, C_eq^(i)⟩ ≤ b_eq^(i), X ⪰ 0, for suitable C_eq^(i), b_eq^(i). The relaxation squares the problem size, which, as we will see, prevents SDP from scaling to large problems.
Graduated Assignment GA [4] relaxes the IQP into a non-convex quadratic program (QP) by removing the constraint x ∈ {0, 1}. It then solves a sequence of convex approximations, each time by
maximizing a Taylor expansion of the QP around the previous approximate solution. The accuracy
of the approximation is controlled by a continuation parameter, annealed after each iteration.
Spectral Matching (SM) In [5], the authors drop the constraint Cx ≤ b during relaxation and only incorporate it during the discretization step. The resulting program, max x^T W x s.t. ||x|| = 1, which is the same as max x^T W x / x^T x, can be solved by computing the leading eigenvector x of W. It verifies x ≥ 0 when W is nonnegative, by the Perron-Frobenius theorem.
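A compact sketch of SM for one-to-one matching follows (a plain dense-eigensolver version; the paper's experiments use MATLAB's Lanczos solver, and the greedy loop below is only our summary of the discretization of [5]):

```python
import numpy as np

def spectral_matching(W, n, npr):
    # Leading eigenvector of the symmetric, nonnegative W; Perron-Frobenius
    # lets us take it elementwise nonnegative.
    _, vecs = np.linalg.eigh(W)
    X = np.abs(vecs[:, -1]).reshape(n, npr)
    # Greedy discretization: repeatedly accept the largest entry and zero out
    # its row and column to enforce the one-to-one constraint.
    match = {}
    while X.max() > 0:
        i, ip = np.unravel_index(np.argmax(X), X.shape)
        match[i] = ip
        X[i, :] = 0
        X[:, ip] = 0
    return match
```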
3
Spectral Matching with Affine Constraint (SMAC)
We present here our first contribution, SMAC. Our method is closely related to the spectral matching formulation of [5], but we are able to impose affine constraints Cx = b on the relaxed solution.
We demonstrate later that the ability to maintain this constraint, coupled with scalability and speed
of spectral methods, results in a very effective solution to graph matching. We solve the following:
max x^T W x / x^T x   s.t.   Cx = b    (3)

Note, for one-to-one matching the objective coincides with the IQP for binary x since x^T x = n.
Computational Solution We can formulate (3) as maximization of a Rayleigh quotient under
affine constraint. While the case of linear constraints has been addressed previously[6], imposing
affine constraints is novel. We fully address this class of problem in the supplementary material1
and give a brief summary here. The solution to (3) is given by the leading eigenpair of
P_C W P_C x = λ x,    (4)

where x is scaled so that Cx = b exactly. We introduced P_C = I_{nn'} - C_eq^T (C_eq C_eq^T)^{-1} C_eq and C_eq = [I_{k-1}, 0] (C - (1/b_k) b C_k), where C_k, b_k denote the last row of C, b and k = # constraints.
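For small problems the relaxed SMAC solution can be computed by forming P_C explicitly, as in the sketch below (our own illustrative version for one-to-one constraints with n = n'; the paper's implementation instead applies P_C in O(nn') via Sherman-Morrison):

```python
import numpy as np

def smac_relaxed(W, n):
    # Affine constraints Cx = b: all row sums and all column sums equal one.
    C = np.zeros((2 * n, n * n))
    for i in range(n):
        C[i, i * n:(i + 1) * n] = 1.0    # sum over i' of x_{ii'} = 1
        C[n + i, i::n] = 1.0             # sum over i  of x_{ii'} = 1
    b = np.ones(2 * n)
    # C_eq = [I_{k-1}, 0] (C - (1/b_k) b C_k), with (C_k, b_k) the last row.
    Ceq = (C - np.outer(b, C[-1]) / b[-1])[:-1]
    P = np.eye(n * n) - Ceq.T @ np.linalg.pinv(Ceq @ Ceq.T) @ Ceq
    _, vecs = np.linalg.eigh(P @ W @ P)
    x = vecs[:, -1]
    x *= b[-1] / (C[-1] @ x)             # rescale so that Cx = b exactly
    return x.reshape(n, n)
```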
Discretization We show here how to tighten our approximation during the discretization step in
the case of one-to-one matching (we can fall back to this case by introducing dummy nodes). Let
us assume for a moment that n = n'. It is a well-known result that for any n × n matrix X, X is a permutation matrix iff X1 = X^T 1 = 1, X is orthogonal, and X ≥ 0 elementwise. We show here we can obtain a tighter relaxation by incorporating the first 2 (out of 3) constraints as a postprocessing before the final discretization. We carry on the following steps even when n ≠ n': 1) reshape the solution x of (3) into an n × n' matrix X, 2) compute the best orthogonal approximation X_orth of X. It can be computed using the SVD decomposition X = UΣV^T, similarly to [7]: X_orth = arg min {||X - Q|| : Q ∈ O(n, n')} = UV^T, where O(n, n') denotes the orthogonal matrices of R^{n×n'}, and 3) discretize X_orth like the other methods, as explained in the results section.
The following proposition shows Xorth is orthogonal and satisfies the affine constraint, as promised.
Proposition 3.1 (Xorth satisfies the affine constraint) If u is left and right eigenvector of a matrix
Y, then u is left and right eigenvector of Y_orth. Corollary: when n = n', X_orth 1 = X_orth^T 1 = 1.
Proof: see supplementary materials. Note that in general, X and Xorth do not have the same
eigenvectors, here we are lucky because of the particular constraint induced by C, b.
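A sketch of this post-processing is given below; note we substitute the Hungarian method for the final greedy discretization, which is our design choice rather than the paper's:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def discretize(x, n):
    X = x.reshape(n, n)
    U, _, Vt = np.linalg.svd(X)
    Xorth = U @ Vt                            # best orthogonal approximation U V^T
    row, col = linear_sum_assignment(-Xorth)  # maximize the assignment score
    P = np.zeros((n, n))
    P[row, col] = 1.0
    return P
```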
Computational Cost The cost of this algorithm is dominated by the computation of the leading
eigenvector of (4), which is function of two terms: 1) number of matrix-vector operations required
in an eigensolver (which we can fix, as convergence is fast in practice), and 2) cost per matrix-vector
operation. P_C is a full matrix, even when C is sparse, but we showed the operation y := P_C x can be computed in O(nn') using the Sherman-Morrison formula (for one-to-one matching). Finally, the total complexity is proportional to the number of non-zero elements in W. If we assume a full matching, this is O(mm'), which is linear in the problem description length.
4
How Robust is the Matching?
We ran extensive graph matching experiments on both real image graphs and synthetic graphs with
the algorithms presented above. We noticed a clear trend: the algorithms get confused when there
is ambiguity in the compatibility matrix. Figure 1 shows a typical example of what happens. We
extracted a set of feature points (indexed by i and i') in two airplane images, and for each edge e = ij in the first graph, we plotted the most similar edges e' = i'j' in the second graph. As we can
see, the first edge plotted has many correspondences everywhere in the image and is therefore uninformative. The second edge on the other hand has correspondences with roughly only 5 locations, it
is informative, and yet its contribution is outweighed by the first edge. The compatibility matrix is
unbalanced. We illustrate next what happens with a synthetic example.
¹ http://www.seas.upenn.edu/~timothee/
Figure 1: Representative cliques for graph matching. Blue arrows indicate edges with high similarity, showing
2 groups: cliques of type 1 (pairing roughly horizontal edges in the 2 images) are uninformative, cliques of type
2 (pairing vertical edges) are distinctive.
Figure 2: Left: edges 12 and 13 are uninformative and make spurious connections of strength ε to all edges in the second graph. Edge 23 is informative and makes a single connection to the second graph, 2'3'. Middle: corresponding compatibility matrices W (top: before normalization, bottom: after normalization). Right: margin as a function of ε (difference between correct matching score and best runner-up score).
Synthetic noise model example Let us look at a synthetic example to illustrate this concept, on
which the IQP can be solved by brute-force. Figure 2 shows two isomorphic graphs with 3 nodes. In
our simple noise model, edges 12 and 13 are uninformative and make connections to every edge in the second graph, with strength ε (our noise parameter). The informative edge 23 on the other hand only connects to 2'3'. We displayed W_{ii',jj'} to visualize the connections. When the noise is small enough, the optimal matching is the desired permutation p* = {11', 22', 33'}, with an initial score of 8 for ε = 0. We computed the score of the second best permutation as a function of ε (see plot of margin), and showed that for ε greater than ε_0 ≈ 1.6, p* is no longer optimal. W is unbalanced,
with some edges making spurious connections, overwhelming the influence of other edges with few
connections. This problem is not incidental. In fact we argue this is the main source of confusion
for graph matching. The next section introduces a normalization algorithm to address this problem.
Figure 3: Left: matching compatibility matrix W and edge similarity matrix S. The shaded areas in each matrix correspond to the same entries. Right: graphical representation of S, W as a clique potential on i, i', j, j'.
5
How to balance the Compatibility Matrix
As we saw in the previous section, a main source of confusion for graph matching algorithms is the
unbalance in the compatibility matrix. This confusion occurs when an edge e ∈ E has many good potential matches e' ∈ E'. Such an edge is not discriminative and its influence should be decreased.
On the other hand, an edge with small number of good matches will help disambiguate the optimal
matching. Its influence should be increased. The following presents our second contribution,
bistochastic normalization.
5.1
Dual Representation: Matching Compatibility Matrix W vs. Edge Similarity Matrix S
The similarity function f(·, ·) can be interpreted in two ways: either as a similarity between edges ij ∈ E and i'j' ∈ E', or as a compatibility between match hypotheses ii' ∈ M and jj' ∈ M. We define the similarity matrix S of size m × m' as S_{ij,i'j'} = f(A_{ij}, A'_{i'j'}), and (as before) the compatibility matrix W of size nn' × nn' as W_{ii',jj'} = f(A_{ij}, A'_{i'j'}), see Figure 3. Each vertex i in the first graph should ideally match to a small number of vertices i' in the second graph. Similarly, each edge e = ij ∈ E should also match to a small number of edges e' = i'j' ∈ E'. Although this
constraint would be very hard to enforce, we approach this behavior by normalizing the influence
of each edge. This corresponds to having each row and column in S (not W !) sum to one, in other
words, S should be bistochastic.
5.2
Bistochastic Normalization of Edge Similarity Matrix S
Recall we are given a compatibility matrix W. Can we enforce its dual representation S to be bistochastic? One problem is that, even though W is square (of size nn' × nn'), S could be rectangular (of size m × m'), in which case its rows and columns cannot both sum to 1. We define an m × m' matrix B to be Rectangular Bistochastic if it satisfies: B 1_{m'} = 1_m and B^T 1_m = (m/m') 1_{m'}. We can formulate the normalization as solving the following balancing problem:

Find (D, D') diagonal matrices of order m, m'   s.t.   D S D' is rectangular bistochastic    (5)
We propose the following algorithm to solve (5), and then show its correctness.

1. Input: compatibility matrix W, of size nn' × nn'
2. Convert W to S: S_{ij,i'j'} = W_{ii',jj'}
3. Repeat until convergence:
   (a) normalize the rows of S: S^{t+1}_{ij,i'j'} := S^t_{ij,i'j'} / Σ_{k'l'} S^t_{ij,k'l'}
   (b) normalize the columns of S: S^{t+2}_{ij,i'j'} := S^{t+1}_{ij,i'j'} / Σ_{kl} S^{t+1}_{kl,i'j'}
4. Convert S back to W, output W
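A small vectorized sketch of this procedure, operating directly on W, follows (our own version: S here carries one row per vertex pair, with zero rows for non-edges guarded by a small constant, and both row and column sums are normalized to one, which matches the balanced solution up to the (m/m') scale factor):

```python
import numpy as np

def normalize_compatibility(W, n, npr, n_iter=50):
    # View W[i*n'+i', j*n'+j'] as S[(i,j), (i',j')] and run Sinkhorn iterations.
    S = W.reshape(n, npr, n, npr).transpose(0, 2, 1, 3).reshape(n * n, npr * npr)
    for _ in range(n_iter):
        S = S / (S.sum(axis=1, keepdims=True) + 1e-12)  # (a) row normalization
        S = S / (S.sum(axis=0, keepdims=True) + 1e-12)  # (b) column normalization
    # Convert S back to the compatibility layout.
    return S.reshape(n, n, npr, npr).transpose(0, 2, 1, 3).reshape(n * npr, n * npr)
```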
Proposition 5.1 (Existence and Uniqueness of (D, D')) Under the condition S > 0 elementwise, Problem (5) has a unique solution (D, D'), up to a scale factor. D and D' can be found by iteratively normalizing the rows and columns of S.

Proof Let S̄ = S ⊗ 1_{m'×m}, which is square. Since S̄ > 0 elementwise, we can apply an existing version of (5.1) for square matrices [8]. We conclude the proof by noticing that normalizing rows and columns of S̄ preserves the Kronecker structure: D̄ S̄ D̄' = (D ⊗ 1_{m'×m'})(S ⊗ 1_{m'×m})(D' ⊗ 1_{m×m}) = mm' DSD' ⊗ 1_{m'×m}, and so (m'D, mD') is a solution for S iff (D̄, D̄') is a solution for S̄.
We illustrate in Figure 2 the improvement of normalization on our previous synthetic example of
noise model. Spurious correspondences are suppressed and informative correspondences such as W_{23,2'3'} are enhanced, which makes the correct correspondence clearer. The plot on the right shows
that normalization makes the matching robust to arbitrarily large noise in this model, while unnormalized correspondences will eventually result in incorrect matchings.
6
Experimental Results
Discretization and Implementation Details Because all of the methods described are continuous
relaxations, a post-processing step is needed to discretize the continuous solution while satisfying
the desired constraints. Given an initial solution estimate, GA finds a near-discrete local minimum
of the IQP by solving a series of Taylor approximations. We can therefore use GA as follows: 1)
initialize GA with the relaxed solution of each algorithm, and 2) discretize the output of GA with
a simple greedy procedure described in [5]. Software: For SDP, we used the popular SeDuMi [9]
optimization package. Spectral matching and SMAC were implemented using the standard Lanczos
eigensolver available with MATLAB, and we implemented an optimized version of GA in C++.
6.1
One-to-one Attributed Graph Matching on Random Graphs
Following [4], we performed a comprehensive evaluation of the 4 algorithms on random one-to-one
graph matching problems. For each matching problem, we constructed a graph G with n = 20 nodes and m random edges (m = 10% n² in a first series of experiments). Each edge ij ∈ E was assigned a random attribute A_{ij} distributed uniformly in [0, 1]. We then created a perturbed graph G' by adding noise on the edge attributes: A'_{i'j'} = A_{p(i')p(j')} + noise, where p is a random permutation; the noise was distributed uniformly in [0, σ], σ varying from 0 to 6. The compatibility matrix W was computed from graphs G, G' as follows: W_{ii',jj'} = exp(-|A_{ij} - A'_{i'j'}|²), for all ij ∈ E, i'j' ∈ E'.
For each noise level we generated 100 different matching problems and computed the average error
rate by comparing the discretized matching to the ground truth permutation.
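For reference, one trial of this benchmark can be generated as in the following sketch (our reading of the protocol; making the added noise symmetric is our assumption):

```python
import numpy as np

def random_matching_problem(n=20, density=0.10, noise=2.0, seed=0):
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    mask = np.triu(rng.random((n, n)) < density, k=1)
    A[mask] = rng.random(int(mask.sum()))
    A = A + A.T                              # undirected attributes in [0, 1]
    p = rng.permutation(n)                   # ground-truth permutation
    Ap = A[np.ix_(p, p)].copy()
    N = np.triu(rng.uniform(0.0, noise, (n, n)), k=1)
    N = N + N.T
    Ap[Ap > 0] += N[Ap > 0]                  # uniform noise on existing edges
    W = np.zeros((n * n, n * n))
    for i, j in zip(*np.nonzero(A)):
        for ip, jp in zip(*np.nonzero(Ap)):
            W[i * n + ip, j * n + jp] = np.exp(-(A[i, j] - Ap[ip, jp]) ** 2)
    return W, p
```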
Effect of Normalization on each method We computed the average error rates with and without
normalization of the compatibility matrix W , for each method: SDP, GA, SM and SMAC, see Figure
4. We can see dramatic improvement due to normalization, regardless of the relaxation method used.
At higher noise levels, all methods had a 2 to 3-fold decrease in error rate.
Comparison Across Methods We plotted the performance of all 4 methods using normalized
compatibility matrices in Figure 5 (left), again, with 100 trials per noise level. We can see that SDP
and SMAC give comparable performance, while GA and especially SM do worse. These results
validate SMAC with normalization as a state-of-the-art relaxation method for graph matching.
Influence of edge density and graph size We experimented with varying edge density: noise σ = 2, n = 20, edge density varying from 10% to 100% by increments of 10% with 20 trials per
increment. For SMAC, the normalization resulted in an average absolute error reduction of 60%,
and for all density levels the reduction was at least 40%. For SDP, the respective figures were 31%,
20%. We also did the same experiments, but with fixed edge density and varying graph sizes, from
10 to 100 nodes. For SMAC, normalization resulted in an average absolute error reduction of 52%;
for all graph sizes the reduction was at least 40%.
Scalability and Speed In addition to accuracy, scalability and speed of the methods are also important considerations. Matching problems arising from images and other sensor data (e.g., range scans) may have hundreds of nodes in each graph.

Figure 4: Comparison of matching performance with normalized and unnormalized compatibility matrices. Axes are error rate vs. noise level. In all cases (GA, SDP, SM, SMAC), the error rate decreases substantially.

As mentioned previously, the SDP relaxation
squares the problem size (in addition to requiring expensive solvers), greatly impacting its speed
and scalability. Figure 5 (middle and right) demonstrates this. For a set of random one-to-one
matching problems of varying size n (horizontal axis), we averaged the time for computing the relaxed solution of all four methods (10 trials for each n). We can see that SDP scales quite poorly
(almost 30 minutes for n = 30). In addition, on a machine with 2GB of RAM, SDP typically ran out
of memory for n = 60. By contrast, SMAC and SM scale easily to much larger problems (n = 200).
6.2
Image correspondence
We also tested the effect of normalization on a simple but instructive image correspondence task. In
each of two images to match, we formed a multiple attribute graph by sub-sampling n = 100 canny
edge points as graph nodes. Each pair of feature points e = ij within 30 pixels was assigned two attributes: angle θ_e = ∠(ij) and distance d(e) = ||ij||. S was computed as follows: S(e, e') = 1 iff cos(θ_{e'} - θ_e) > cos(π/8) and |d(e) - d(e')| / min(d(e), d(e')) < 0.5. By using simple geometric attributes, we
emphasized the effect of normalization on the energy function, rather than feature design.
Figure 6 shows an image correspondence example between the two airplane images of Figure 1.
We display the result of SMAC with and without normalization. Correspondence is represented
by similarly colored dots. Clearly, normalization improved the correspondence result. Without
normalization, large systematic errors are made, such as mapping the bottom of one plane to the top
of the other. With normalization these errors are largely eliminated.
Let us return to Figure 1 to see the effect of normalization on S(e, e'). As we saw, there are roughly
2 types of connections: 1) horizontal edges (uninformative) and 2) vertical edges (discriminative).
Normalization exploits this disparity to enhance the latter edges: before normalization, each connection contributed up to 1.0 to the overall matching score. After normalization, connections of type
2 contributed up to 0.64 to the overall matching score, versus 0.08 for connections of type 1, which
is 8 times more. We can view normalization as imposing an upper bound on the contribution of each
connection: the upper bound is smaller for spurious matches, and higher for discriminative matches.
7
Conclusion
While recent literature mostly focuses on improving relaxation methods for graph matching problems, we contribute both an improved relaxation algorithm, SMAC, and a method for improving
the energy function itself, graph balancing with bistochastic normalization. In our experiments,
SMAC outperformed GA and SM, with similar accuracy to SDP, while scaling much better than
SDP. We motivate the normalization with an intuitive example, showing it improves noise tolerance
by enhancing informative matches and de-enhancing uninformative matches. The experiments we
performed on random one-to-one matchings show that normalization dramatically improves both
our relaxation method SMAC, and the three algorithms mentioned. We also demonstrated the value
of normalization for establishing one-to-one correspondences between image pairs. Normalization
imposes an upper bound on the score contribution of each edge in proportion to its saliency.
Figure 5: Left: comparison of different methods with normalized compatibility matrices. Axes: vertical is
error rate averaged over 100 trials; horizontal is noise level. SMAC achieves comparable performance to SDP.
Middle, right: computation times of graph matching methods (left: log-scale, right: linear scale).
Figure 6: Image correspondence via SMAC with and without normalization; like colors indicate matches.
References
[1] Marcello Pelillo. A unifying framework for relational structure matching. icpr, 02:1316, 1998.
[2] Christian Schellewald and Christoph Schnörr. Probabilistic subgraph matching based on convex relaxation.
In Energy Minimization Methods in Computer Vision and Pattern Recognition, 2005.
[3] P.H.S. Torr. Solving markov random fields using semi definite programming. In Artificial Intelligence and
Statistics, 2003.
[4] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. In IEEE Transactions
on Pattern Analysis and Machine Intelligence, volume 18, 1996.
[5] Marius Leordeanu and Martial Hebert. A spectral technique for correspondence problems using pairwise
constraints. In International Conference on Computer Vision, October 2005.
[6] Stella X. Yu and Jianbo Shi. Grouping with bias. In Advances in Neural Information Processing Systems,
2001.
[7] G.L. Scott and H.C. Longuet-Higgins. An algorithm for associating the features of two images. In Proceedings of the Royal Society of London B, 1991.
[8] Paul Knopp and Richard Sinkhorn. Concerning nonnegative matrices and doubly stochastic matrices.
Pacific J. Math., 2:343-348, 1967.
[9] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization
Methods and Software, 11-12:625-653, 1999. Special issue on Interior Point Methods.
2,160 | 2,961 | Active learning for misspecified
generalized linear models
Francis R. Bach
Centre de Morphologie Mathématique
Ecole des Mines de Paris
Fontainebleau, France
[email protected]
Abstract
Active learning refers to algorithmic frameworks aimed at selecting training data
points in order to reduce the number of required training data points and/or improve the generalization performance of a learning method. In this paper, we
present an asymptotic analysis of active learning for generalized linear models.
Our analysis holds under the common practical situation of model misspecification, and is based on realistic assumptions regarding the nature of the sampling
distributions, which are usually neither independent nor identical. We derive unbiased estimators of generalization performance, as well as estimators of expected
reduction in generalization error after adding a new training data point, that allow
us to optimize its sampling distribution through a convex optimization problem.
Our analysis naturally leads to an algorithm for sequential active learning which is
applicable for all tasks supported by generalized linear models (e.g., binary classification, multi-class classification, regression) and can be applied in non-linear
settings through the use of Mercer kernels.
1
Introduction
The goal of active learning is to select training data points so that the number of required training
data points for a given performance is smaller than the number which is required when randomly
sampling those points. Active learning has emerged as a dynamic field of research in machine learning and statistics [1], from early works in optimal experimental design [2, 3], to recent theoretical
results [4] and applications, in text retrieval [5], image retrieval [6] or bioinformatics [7].
Despite the numerous successful applications of active learning to reduce the number of required
training data points, many authors have also reported cases where widely applied active learning
heuristic schemes such as maximum uncertainty sampling perform worse than random selection [8,
9], casting doubt into the practical applicability of active learning: why would a practitioner use an
active learning strategy that is not ensuring, unless the data satisfy possibly unrealistic and usually
non verifiable assumptions, that it performs better than random? The objectives of this paper are
(1) to provide a theoretical analysis of active learning with realistic assumptions and (2) to derive a
principled algorithm for active learning with guaranteed consistency.
In this paper, we consider generalized linear models [10], which provide flexible and widely used
tools for many supervised learning tasks (Section 2). Our analysis is based on asymptotic arguments,
and follows previous asymptotic analysis of active learning [11, 12, 9, 13]; however, as shown in
Section 4, we do not rely on correct model specification and assume that the data are not identically
distributed and may not be independent. As shown in Section 5, our theoretical results naturally
lead to convex optimization problems for selecting training data point in a sequential design. In
Section 6, we present simulations on synthetic data, illustrating our algorithms and comparing them
favorably to usual active learning schemes.
2
Generalized linear models
Given data x ∈ R^d and targets y in a set Y, we consider the problem of modeling the conditional probability p(y|x) through a generalized linear model (GLIM) [10]. We assume that we are given an exponential family adapted to our prediction task, of the form p(y|η) = exp(η^T T(y) - ψ(η)), where T(y) is a k-dimensional vector of sufficient statistics, η ∈ R^k is a vector of natural parameters and ψ(η) is the convex log-partition function. We then consider the generalized linear model defined as p(y|x, θ) = exp(tr(θ^T x T(y)^T) - ψ(θ^T x)), where θ ∈ Θ ⊂ R^{d×k}. The framework of GLIMs is
general enough to accommodate many supervised learning tasks [10], in particular:
• Binary classification: the Bernoulli distribution leads to logistic regression, with Y = {0, 1}, T(y) = y and ψ(η) = log(1 + e^η).
• k-class classification: the multinomial distribution leads to softmax regression, with Y = {y ∈ {0, 1}^k, Σ_{i=1}^k y_i = 1}, T(y) = y and ψ(η) = log(Σ_{i=1}^k e^{η_i}).
• Regression: the normal distribution leads to Y = R, T(y) = (y, -½y²)^T ∈ R², and ψ(η_1, η_2) = -½ log η_2 + ½ log 2π + η_1²/(2η_2). When both η_1 and η_2 depend linearly on x, we have a heteroscedastic model, while if η_2 is constant for all x, we obtain homoscedastic regression (constant noise variance).
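For concreteness, a sketch of these building blocks for the Bernoulli case (our own illustrative code, with T(y) = y left implicit):

```python
import numpy as np

def psi(eta):                 # log-partition psi(eta) = log(1 + e^eta)
    return np.logaddexp(0.0, eta)

def loss(theta, x, y):        # l(y, x, theta) = -eta * T(y) + psi(eta), eta = theta^T x
    eta = x @ theta
    return -eta * y + psi(eta)

def grad(theta, x, y):        # x * (E_{p(y|x,theta)} T(y) - T(y)): the GLIM gradient
    eta = x @ theta
    return x * (1.0 / (1.0 + np.exp(-eta)) - y)
```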
Maximum likelihood estimation We assume that we are given independent and identically distributed (i.i.d.) data sampled from the distribution p_0(x, y) = p_0(x)p_0(y|x). The maximum likelihood population estimator θ_0 is defined as the minimizer of the expectation under p_0 of the negative log-likelihood ℓ(y, x, θ) = -tr(θ^T x T(y)^T) + ψ(θ^T x). The function ℓ(y, x, θ) is convex in θ and by taking derivatives and using the classical relationship between the derivative of the log-partition and the expected sufficient statistics [10], the population maximum likelihood estimate is defined by:

E_{p_0(x,y)} ∇ℓ(y, x, θ_0) = E_{p_0(x)} [x (E_{p(y|x,θ_0)} T(y) - E_{p_0(y|x)} T(y))^T] = 0    (1)

Given i.i.d. data (x_i, y_i), i = 1, ..., n, we use the penalized maximum likelihood estimator, which minimizes Σ_{i=1}^n ℓ(y_i, x_i, θ) + (λ/2) tr θ^T θ. The minimization is performed by Newton's method [14].
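A minimal sketch of this Newton solver for the Bernoulli GLIM with per-point weights follows; the stopping rule and iteration cap are our choices:

```python
import numpy as np

def weighted_penalized_mle(X, y, w, lam, n_iter=25):
    # X is (n, d), y in {0, 1}, w the per-point weights, lam the penalty lambda.
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        eta = X @ theta
        mu = 1.0 / (1.0 + np.exp(-eta))             # model mean of T(y)
        grad = X.T @ (w * (mu - y)) + lam * theta   # gradient of Eq. (2)
        H = (X * (w * mu * (1.0 - mu))[:, None]).T @ X + lam * np.eye(d)
        step = np.linalg.solve(H, grad)
        theta -= step
        if np.linalg.norm(step) < 1e-10:
            break
    return theta
```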
Model specification A GLIM is said to be well-specified if there exists a θ ∈ R^{d×k} such that for all x ∈ R^d, E_{p(y|x,θ)} T(y) = E_{p_0(y|x)} T(y). A sufficient condition for correct specification is that there exists θ ∈ R^{d×k} such that for all x ∈ R^d, y ∈ Y, p(y|x, θ) = p_0(y|x). This condition is
necessary for the Bernoulli and multinomial exponential family, but not for example for the normal
distribution. In practice, the model is often misspecified and it is thus of importance to consider
potential misspecification while deriving asymptotic expansions.
Kernels The theoretical results of this paper mainly focus on generalized linear models; however,
they can be readily generalized to non-linear settings by using Mercer kernels [15], for example
leading to kernel logistic regression or kernel ridge regression. When the data are given by a kernel
matrix, we can use the incomplete Cholesky decomposition [16] to find an approximate basis of the
feature space on which the usual linear methods can be applied. Note that our asymptotic results do
not hold when the number of parameters may grow with the data (which is the case for kernels such
as the Gaussian kernel). However, our dimensionality reduction procedure uses a non-parametric
method on the entire (usually large) training dataset and we then consider a finite dimensional problem on a much smaller sample. If the whole training dataset is large enough, then the dimension
reduction procedure may be considered deterministic and our criteria may apply.
3
Active learning set-up
We consider the following "pool-based" active learning scenario: we have a large set of i.i.d. data points x_i ∈ R^d, i = 1, ..., m, sampled from p_0(x). The goal of active learning is to select the points
to label, i.e., the points for which the corresponding yi will be observed. We assume that given
xi , i = 1, . . . , n, the targets yi , i = 1, . . . , n are independent and sampled from the corresponding
conditional distribution p0 (yi |xi ). This active learning set-up is well studied and appears naturally
in many applications where the input distribution p0 (x) is only known through i.i.d. samples [5, 17].
For alternative scenarios, where the density p_0(x) is known, see e.g. [18, 19, 20].
More precisely, we assume that the points x_i are selected sequentially, and we denote by q_i(x_i|x_1, ..., x_{i-1}) the sampling distribution of x_i given the previously observed points. In situations where the data are not sampled from the testing distribution, it has proved advantageous to consider likelihood weighting techniques [13, 19], and we thus consider weights w_i = w_i(x_i|x_1, ..., x_{i-1}). We let θ̂_n denote the weighted penalized ML estimator, defined as the minimum with respect to θ of

Σ_{i=1}^n w_i ℓ(y_i, x_i, θ) + (λ/2) tr θ^T θ.    (2)
In this paper, we work with two different assumptions regarding the sequential sampling distributions: (1) the variables x_i are independent, i.e., q_i(x_i|x_1, ..., x_{i-1}) = q_i(x_i); (2) the variable x_i depends on x_1, ..., x_{i-1} only through the current empirical ML estimator θ̂_i, i.e., q_i(x_i|x_1, ..., x_{i-1}) = q(x_i|θ̂_i), where q(x_i|θ) is a pre-specified sampling distribution. The first
assumption is not realistic, but readily leads to asymptotic expansions. The second assumption is
more realistic, as most of the heuristic schemes for sequential active learning satisfy this assumption.
It turns out that under certain assumptions, the asymptotic expansions of the expected generalization
performance for both sets of assumptions are identical.
4
Asymptotic expansions
In this section, we derive the asymptotic expansions that will lead to active learning algorithms in
Section 5. Throughout this section, we assume that p0 (x) has a compact support K and has a twice
differentiable density with respect to the Lebesgue measure, and that all sampling distributions have
a compact support included in the one of p0 (x) and have twice differentiable densities.
We first make the assumption that the variables x_i are independent, i.e., we have sampling distributions q_i(x_i) and weights w_i(x_i), both measurable, and such that w_i(x_i) > 0 for all x_i ∈ K. In
Section 4.4, we extend some of our results to the dependent case.
4.1
Bias and variance of ML estimator
The following proposition is a simple extension, to non-identically distributed observations, of classical results on maximum likelihood for misspecified generalized linear models [21, 13]. We let E_D and var_D denote the expectation and variance with respect to the data D = {(x_i, y_i), i = 1, ..., n}.
Proposition 1 We let θ_n denote the minimizer of Σ_{i=1}^n E_{q_i(x_i)p_0(y_i|x_i)} w_i(x_i) ℓ(y_i, x_i, θ). If (a) the weight functions w_n and the sampling densities q_n are pointwise strictly positive and such that w_n(x)q_n(x) converges in the L∞-norm, and (b) E_{q_n(x)} w_n²(x) is bounded, then θ̂_n - θ_n converges to zero in probability and we have

E_D θ̂_n = θ_n + O(n^{-1})   and   var_D θ̂_n = (1/n) J_n^{-1} I_n J_n^{-1} + O(n^{-2})    (3)

where J_n = (1/n) Σ_{i=1}^n E_{q_i(x)} w_i(x) ∇²ℓ(x, θ_n) can be consistently estimated by Ĵ_n = (1/n) Σ_{i=1}^n w_i h_i, and I_n = (1/n) Σ_{i=1}^n E_{q_i(x)p_0(y|x)} w_i(x)² ∇ℓ(y, x, θ_n) ∇ℓ(y, x, θ_n)^T can be consistently estimated by Î_n = (1/n) Σ_{i=1}^n w_i² g_i g_i^T, where g_i = ∇ℓ(y_i, x_i, θ̂_n) and h_i = ∇²ℓ(x_i, θ̂_n).
I?n = n1 i=1 wi2 gi gi> , where gi = ?`(yi , xi , ??n ) and hi = ?2 `(xi , ??n ).
From Proposition 1, it is worth noting that in general ?n will not converge to the population maximum likelihood estimate ?0 , i.e., using a different sampling distribution than p0 (x) may introduce
a non asymptotically vanishing bias in estimating ?0 . Thus, active learning requires to ensure that
(a) our estimators have a low bias and variance in estimating ?n , and (b) that ?n does actually converge to ?0 . This double objective is taken care of by our estimates of generalization performance in
Propositions 2 and 3.
There are two situations, however, where ?n is equal to ?0 . First, if the model is well specified,
then whatever the sampling distributions are, ?n is the population ML estimate (which is a simple
consequence of the fact that Ep(y|x,?
x, implies that, for all q(x),
0 ) T (y) = Ep0 (y|x) T (y), for all>
Eq(x)p0 (y|x) ?`(y, x, ?) = Eq(x) x(Ep(y|x,?0 ) T (y) ? Ep0 (y|x) T (y))
= 0).
Second, When wn (x) = p0 (x)/qn (x), then ?n is also equal to ?0 , and we refer to this weighting
scheme as the unbiased reweighting scheme, which was used by [19] in the context of active learning. We refer to the weights wnu = p0 (xn )/qn (xn ) as the importance weights. Note however, that
restricting ourselves to such unbiased estimators, as done in [19] might not be optimal because they
may lead to higher variance [13], in particular due to the potential high variance of the importance
weights (see simulations in Section 6).
4.2
Expected generalization performance
We let Lu (?) = Ep0 (x)p0 (y|x) `(y, x, ?) denote the generalization performance1 of the parameter ?.
We now provide an unbiased estimator of the expected generalization error of ??n , which generalized
the Akaike information criterion [22] (for a proof, see [23]):
Proposition 2 In addition to the assumptions of Proposition 1, we
2
Eqn (x) (p0 (x)/qn (x)) is bounded. Let
b = 1 Pn wu `(yi , xi , ??n ) + 1 1 Pn wu wi g > (J?n )?1 gi ,
G
i
i=1 i
i=1 i
n
n n
assume
that
(4)
b is an asymptotically unbiased estimator of ED Lu (??n ), i.e., ED G
b=
where wiu = p0 (xi )/qi (xi ). G
u ?
?2
ED L (?n ) + O(n ).
b is a sum of two terms: the second term corresponds to a variance term and will
The criterion G
converge to zero in probability at rate O(n?1 ); the first term, however, which corresponds to a
selection bias induced by a specific choice of sampling distributions, will not always converge to
the minimum possible value Lu (?0 ). Thus, in order to ensure that our active learning method are
consistent, we have to ensure that this first term is going to its minimum value. One simple way to
b is smaller than the estimate for
achieve this is to always optimize our weights so that the estimate G
the unbiased reweighting scheme (see Section 5).
4.3
Expected performance gain
We now look at the following situation: we are given the first n data points (xi , yi ) and the current estimate ??n , the gradients gi = ?`(yi , xi , ??n ), the Hessians hi = ?2 `(xi , ??n ) and the third
derivatives Ti = ?3 `(xi , ??n ), we consider the following criterion, which depends on the sampling
distributions and weights of the (n + 1)-th point:
b n+1 , wn+1 |?, ?) = 13 Pn ?i wu wn+1 (xi ) qn+1 (xi ) + Pn ?i wu wn+1 (xi )2 qn+1 (xi ) (5)
H(q
n
where ?i
=
i
i=1
p0 (xi )
p0 (xi )
> ?
u >
u > ?
?(n + 1)n?
gi Jn A ? wi wi g?i hi g?i + wi g?i Jn g?i ? 2?
gi> B
?wi g?i> J?nu g?i + Ti [?
gi , C] ? 2wi g?i> hi A + Ti [A, g?i , g?i ]
i=1
i
(6)
1 > ?u
?i =
g? J g?i + A> hi g?i
(7)
2 i n
P
P
P
n
n
n
with g?i = J?n?1 gi , A = J?n?1 n1 i=1 wiu gi , B = i=1 wiu wi hi g?i , C = i=1 wi wiu g?i g?i> , J?nu =
P
n
1
u
i=1 wi hi .
n
b n+1 , wn+1 |?, ?) is an estimate of the expected perforThe following proposition shows that H(q
mance gain of choosing a point xn+1 according to distribution qn+1 and weight wn+1 (and marginalizing over yn+1 ) and may be used as an objective function for learning the distributions qn+1 , wn+1
(for a proof, see [23]). In Section 5, we show that if the distributions and weights are properly
parameterized, this leads to a convex optimization problem.
Proposition 3 We assume that E_{q_n(x)} w_n²(x) and E_{q_n(x)} (p_0(x)/q_n(x))² are bounded. We let θ̂_n denote the weighted ML estimator obtained from the first n points, and θ̂_{n+1} the one-step estimator obtained from the first n+1 points, i.e., θ̂_{n+1} is obtained by one Newton step from θ̂_n [24]; then the criterion defined in Eq. (5) is such that E_D Ĥ(q_{n+1}, w_{n+1}) = E_D L^u(θ̂_n) - E_D L^u(θ̂_{n+1}) + O(n^{-3}), where E_D denotes the expectation with respect to the first n+1 data points and their labels. Moreover, for n large enough, all values of β_i are positive.
Moreover, for n large enough, all values of ?i are positive.
¹In this paper, we use the negative log-likelihood as a measure of performance, which allows simple asymptotic expansions; the focus of the paper is the differences between testing and training sampling distributions. The study of potentially different costs for testing and training is beyond the scope of this paper.
Note that many of the terms in Eq. (6) and Eq. (7) are dedicated to weighting schemes for the first n points other than the unbiased reweighting scheme. For the unbiased reweighting scheme, where $w_i = w_i^u$ for $i = 1, \dots, n$, we have $A = 0$ and the equations simplify.
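For concreteness, the one-step estimator of Proposition 3 amounts to a single Newton step on the weighted empirical risk. A minimal sketch follows; it assumes the gradients and Hessians of the loss are evaluated at the current $\hat\theta_n$, and the array names are ours, not the paper's.

import numpy as np

def one_step_newton(theta, grads, hessians, weights):
    """One Newton step for the weighted ML estimator theta_{n+1}.

    theta   : (d,) current estimate theta_n
    grads   : (n+1, d) per-point loss gradients at theta
    hessians: (n+1, d, d) per-point loss Hessians at theta
    weights : (n+1,) weights w_i
    """
    n = len(weights)
    g = grads.T @ weights / n                       # weighted mean gradient
    H = np.einsum('i,ijk->jk', weights, hessians) / n
    return theta - np.linalg.solve(H, g)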
4.4 Dependent observations
In this section, we show that under a certain form of weak dependence between the data points $x_i$, $i = 1, \dots, n$, the results presented in Propositions 1 and 2 still hold. For simplicity and brevity, we restrict ourselves to the unbiased reweighting scheme, i.e., $w_n(x_n|x_1,\dots,x_{n-1}) = p_0(x_n)/q_n(x_n|x_1,\dots,x_{n-1})$ for all n, and we assume that those weights are uniformly bounded away from zero and infinity. In addition, we only prove our result in the well-specified case, which leads to a simpler argument for the consistency of the estimator.
Many sequential active learning schemes select a training data point with a distribution or criterion
that depends on the estimate so far (see Section 6 for details). We thus assume that the sampling
distribution $q_n$ is of the form $q(x_n|\hat\theta_n)$, where $q(x|\theta)$ is a fixed set of smooth parameterized densities.
Proposition 4 (for a proof, see [23]) Let

$$\hat G = \frac{1}{n}\sum_{i=1}^n w_i\,\ell(y_i, x_i, \hat\theta_n) + \frac{1}{n^2}\sum_{i=1}^n w_i^2\, g_i^\top \bar J_n^{-1} g_i, \qquad (8)$$

where $w_i = w_i^u = p_0(x_i)/q(x_i|\hat\theta_i)$. Then $\hat G$ is an asymptotically unbiased estimator of $E_D L^u(\hat\theta_n)$, i.e., $E_D \hat G = E_D L^u(\hat\theta_n) + O(\log(n)\, n^{-2})$.
The estimator is the same as in Proposition 2. The effect of the dependence is asymptotically negligible and only impacts the result with the presence of an additional log(n) term. In the algorithms
presented in Section 5, the distribution qn is obtained as the solution of a convex optimization problem, and thus the previous theorem does not readily apply. However, when n gets large, qn depends
on the previous data points only through the first two derivatives of the objective function of the
convex problem, which are empirical averages of certain functions of all currently observed data
points; we are currently working out a generalization of Proposition 4 that allows the dependence
on certain empirical moments and potential misspecification.
5 Algorithms
In Section 4, we have derived a criterion $\hat H$ in Eq. (5) that enables us to optimize the sampling density of the $(n+1)$-th point, and an estimate $\hat G$, in Eq. (4) and Eq. (8), of the generalization error. Our algorithms are composed of the following three ingredients:
1. Those criteria assume that the variance of the importance weights $w_n^u = p_0(x_n)/q_n(x_n)$ is controlled. In order to make sure that those results apply, our algorithms will ensure that this condition is met.
2. The sampling density $q_{n+1}$ will be obtained by minimizing $\hat H(q_{n+1}, w_{n+1}|\alpha,\beta)$ for a certain parameterization of $q_{n+1}$ and $w_{n+1}$. It turns out that those minimization problems are convex, and can thus be efficiently solved, without local minima.
3. Once a new sample has been selected, and its label observed, Proposition 4 is used in a way similar to [13], in order to search for the best mixture between the current weights $(w_i)$ and the importance weights $(w_i^u)$: we look at weights of the form $w_i^\alpha (w_i^u)^{1-\alpha}$ and perform a grid search on $\alpha$ to find the $\alpha$ for which $\hat G$ in Eq. (4) is minimum.
The main interest of the first and third points is that we obtain a final estimator of $\theta_0$ which is at least provably consistent: indeed, although our criteria are obtained from an assumption of independence, the generalization performance result also holds for "weakly" dependent observations and thus ensures the consistency of our approach. Thus, as opposed to most previous active learning heuristics, our estimator will always converge (in probability) to the ML estimator. In Section 6, we show empirically that usual heuristic schemes do not share this property.
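The third ingredient is easy to state in code. A minimal sketch follows; the exponent is written a to avoid clashing with the scores $\alpha_i$, and the callable g_hat, evaluating $\hat G$ of Eq. (4) for a given weight vector, is an assumed helper, not something defined in the paper.

import numpy as np

def best_weight_mixture(w, w_u, g_hat, grid=np.linspace(0.0, 1.0, 21)):
    """Grid search over the mixtures w_i^a (w_i^u)^(1-a) of the current
    and importance weights, keeping the one minimizing g_hat (Eq. (4))."""
    best = min(grid, key=lambda a: g_hat(w**a * w_u**(1.0 - a)))
    return best, w**best * w_u**(1.0 - best)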
Convex optimization problem We assume that we have a fixed set of candidate distributions $s_k(x)$ of the form $s_k(x) = p_0(x) r_k(x)$. Note that the multiplicative form of our candidate distributions allows efficient sampling from a pool of samples of $p_0$. We look at distributions $q_{n+1}(x)$ with mixture density of the form $s(x|\eta) = \sum_k \eta_k s_k(x) = p_0(x) r(x)$, where the weights $\eta$ are non-negative and sum to one. The criterion $\hat H(q_{n+1}, w_{n+1}|\alpha,\beta)$ in Eq. (5) is thus a function $H(\eta|\alpha,\beta)$ of $\eta$. We consider two weighting schemes: (a) one with all weights equal to one (unit weighting scheme), which leads to $H_0(\eta|\alpha,\beta)$, and (b) the unbiased reweighting scheme, where $w_{n+1}(x) = p_0(x)/q_{n+1}(x)$, which leads to $H_1(\eta|\alpha,\beta)$. We have
$$H_0(\eta|\alpha,\beta) = \frac{1}{n^3}\sum_k \eta_k \left(\sum_{i=1}^n (\alpha_i + \beta_i)\, w_i^u\, s_k(x_i)\right), \qquad (9)$$

$$H_1(\eta|\alpha,\beta) = \frac{1}{n^3}\left(\sum_{i=1}^n \alpha_i w_i^u + \sum_{i=1}^n \frac{\beta_i w_i^u}{\sum_k \eta_k s_k(x_i)}\right). \qquad (10)$$
The function H0 (?) is linear in ?, while the function H1 (?) is the sum of a constant and positive
inverse functions, and is thus convex [14].
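Both objectives are straightforward to evaluate. A minimal sketch follows, assuming a precomputed (K, n) matrix s with s[k, i] = $s_k(x_i)$ and score vectors alpha, beta, w_u as defined above; the function names are ours.

import numpy as np

def H0(eta, alpha, beta, w_u, s):
    """Unit weighting scheme, Eq. (9); linear in eta."""
    n = s.shape[1]
    return eta @ (s @ ((alpha + beta) * w_u)) / n**3

def H1(eta, alpha, beta, w_u, s):
    """Unbiased reweighting scheme, Eq. (10); convex in eta."""
    n = s.shape[1]
    mix = eta @ s                      # sum_k eta_k s_k(x_i), for each i
    return (np.sum(alpha * w_u) + np.sum(beta * w_u / mix)) / n**3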
Unless natural candidate distributions $s_k(x)$ can be defined for the active learning problem, we use the set of distributions obtained as follows: we perform K-means clustering with a large number p of clusters (e.g., 100 or 200), and then consider functions $r_k(x)$ of the form $r_k(x) = \frac{1}{Z_k} e^{-\lambda_k \|x - \mu_k\|^2}$, where $\lambda_k$ is one element of a finite given set of parameters, and $\mu_k$ is one of the p centroids $y_1, \dots, y_p$ obtained from K-means. We let $\tilde w_i$ denote the number of data points assigned to the centroid $y_i$. We normalize by $Z_k = \sum_{i=1}^p \tilde w_i\, e^{-\lambda_k \|y_i - \mu_k\|^2} / \sum_{i=1}^p \tilde w_i$. We thus obtain O(p) candidate distributions $r_k(x)$, which, if p is large enough, provide a flexible yet tractable set of mixture distributions.
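A sketch of this construction (assuming scikit-learn for the K-means step; the candidate set sizes below are illustrative, not the paper's):

import numpy as np
from sklearn.cluster import KMeans

def candidate_functions(pool, p=100, lambdas=(0.1, 1.0, 10.0)):
    """Build O(p) candidate functions r_k(x) from K-means centroids.

    pool: (m, d) array of unlabelled samples from p0."""
    km = KMeans(n_clusters=p).fit(pool)
    counts = np.bincount(km.labels_, minlength=p).astype(float)  # w~_i
    funcs = []
    for mu in km.cluster_centers_:
        for lam in lambdas:
            d2 = np.sum((km.cluster_centers_ - mu) ** 2, axis=1)
            Z = counts @ np.exp(-lam * d2) / counts.sum()        # Z_k
            funcs.append(lambda x, mu=mu, lam=lam, Z=Z:
                         np.exp(-lam * np.sum((x - mu) ** 2, axis=-1)) / Z)
    return funcs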
One additional element is the constraint on the variance of the importance weights. The variance of $w_{n+1}^u$ can be estimated as

$$\operatorname{var} w_{n+1}^u = \frac{\sum_{i=1}^m \tilde w_i / r(x_i)}{\sum_{i=1}^m \tilde w_i} - 1 = \frac{\sum_{i=1}^m \tilde w_i \big/ \big(\sum_k \eta_k r_k(x_i)\big)}{\sum_{i=1}^m \tilde w_i} - 1 = V(\eta),$$

which is convex in $\eta$. Thus constraining the variance of the new weights leads to a convex optimization problem, with convex objective and convex constraints, which can be solved efficiently by the log-barrier method [14], with cubic complexity in the number of candidate distributions.
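A minimal sketch of the resulting problem; for illustration it uses a generic SLSQP solver from SciPy rather than the log-barrier method the paper relies on, and the inputs mirror the notation above.

import numpy as np
from scipy.optimize import minimize

def optimize_eta(alpha, beta, w_u, s, r_pool, w_tilde, var_max=10.0):
    """Minimize H1(eta|alpha,beta) over the simplex, subject to a cap
    on the estimated variance V(eta) of the new importance weights.

    s      : (K, n) values s_k(x_i) at the labelled points
    r_pool : (K, m) values r_k at the pool points (or centroids)
    w_tilde: (m,) pool weights"""
    K, n = s.shape

    def h1(eta):
        return (np.sum(alpha * w_u) + np.sum(beta * w_u / (eta @ s))) / n**3

    def variance(eta):
        return w_tilde @ (1.0 / (eta @ r_pool)) / w_tilde.sum() - 1.0

    cons = [{'type': 'eq', 'fun': lambda eta: eta.sum() - 1.0},
            {'type': 'ineq', 'fun': lambda eta: var_max - variance(eta)}]
    eta0 = np.full(K, 1.0 / K)
    res = minimize(h1, eta0, bounds=[(0.0, 1.0)] * K,
                   constraints=cons, method='SLSQP')
    return res.x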
Algorithms We have three versions of our algorithm: one with unit weights (referred to as "no weight"), which optimizes $H_0(\eta|\alpha,\beta)$ at each iteration; one with the unbiased reweighting scheme, which optimizes $H_1(\eta|\alpha,\beta)$ (referred to as "unbiased"); and one which does both and chooses the best one, as measured by $\hat H$ (referred to as "full"). In the initialization phase, K-means is run to generate candidate distributions that will be used throughout the sampling of new points. Then, in order to select the new training data point $x_{n+1}$, the scores $\alpha$ and $\beta$ are computed from Eq. (6) and Eq. (7); the appropriate cost function, $H_0(\eta|\alpha,\beta)$, $H_1(\eta|\alpha,\beta)$ (or both), is minimized; and once $\eta$ is obtained, we sample $x_{n+1}$ from the corresponding distribution and compute the weights $w_{n+1}$ and $w_{n+1}^u$. As described earlier, we then find $\alpha$ such that $\hat G\big((w_i^\alpha (w_i^u)^{1-\alpha})_i\big)$ in Eq. (4) is minimized, and update the weights accordingly.
Regularization parameter In the active learning set-up, the number of samples used for learning
varies a lot. It is thus not possible to use a constant regularization parameter. We thus learn it by
cross-validation every 10 new samples.
6 Simulation experiments
In this section, we present simulation experiments on synthetic examples (sampled from Gaussian
mixtures in two dimensions), for the task of binary and 3-class classification. We compare our algorithms to the following three active learning frameworks. In the maximum uncertainty framework
(referred to as "maxunc"), the next training data point is selected such that the entropy of $p(y|x, \hat\theta_n)$ is maximal [17]. In the maximum variance reduction framework [25, 9] (referred to as "varred"), the next point is selected so that the variance of the resulting estimator has the lowest determinant, which is equivalent to finding x such that $\operatorname{tr}\,\Sigma(x, \hat\theta_n)\,\bar J_n^{-1}$ is minimum. Note that this criterion has theoretical justification under correct model specification. In the minimum prediction error framework (referred to as "minpred"), the next point is selected so that it reduces the most the expected log-loss, with the current model as an estimate of the unknown conditional probability $p_0(y|x)$ [5, 8].
Sampling densities In Figure 1, we look at the limit selected sampling densities, i.e., we assume that a large number of points has been sampled, and we look at the criterion $\hat H$ in Eq. (5). We show the density obtained from the unbiased reweighting scheme (middle of Figure 1), as well as
Figure 1: Proposal distributions: (Left) density $p_0(x)$ with the two different classes (red and blue); (Middle) best density with unbiased reweighting; (Right) function $\varphi(x)$ such that $\hat H(q_{n+1}(x), 1) = \int \varphi(x)\, q_{n+1}(x)\, dx$ (see text for details).
Figure 2: Error rates vs. number of samples averaged over 10 replications, sampled from the same distribution as in Figure 1: (Left) random sampling and active learning "full", with standard deviations; (Middle) comparison of the two schemes "unbiased" and "no weight"; (Right) comparison with other methods.
the function $\varphi(x)$ (right of Figure 1) such that, for the unit weighting scheme, $\hat H(q_{n+1}(x), 1) = \int \varphi(x)\, q_{n+1}(x)\, dx$. In this framework, minimizing the cost without any constraint leads to a Dirac at the maximum of $\varphi(x)$, while minimizing with a constraint on the variance of the corresponding importance weights will select points with high values of $\varphi(x)$. We also show the line $\theta_0^\top x = 0$. From Figure 1, we see that (a) the unit weighting scheme tends to be more selective (i.e., finer grain) than the unbiased scheme, and (b) the modes of the optimal densities are close to the maximum uncertainty hyperplane, but some parts of this hyperplane in fact lead to negative cost gains (e.g., the part of the hyperplane crossing the central blob), hinting at the potentially bad behavior of the maximum uncertainty framework.
Comparison with other algorithms In Figure 2 and Figure 3, we compare the performance of our active learning algorithms. In the left of Figure 2, we see that our active learning framework not only performs better on average but also leads to smaller variance. In the middle of Figure 2, we compare the two schemes "no weight" and "unbiased", showing the superiority of the unit weighting scheme and the significance of our asymptotic results in Propositions 2 and 3, which extend the unbiased framework of [13]. In the right of Figure 2 and in Figure 3, we compare with the other usual heuristic schemes: our "full" algorithm outperforms the other schemes; moreover, in those experiments, the other schemes perform worse than random sampling and converge to the wrong estimator, a bad situation that our algorithms provably avoid.
7 Conclusion
We have presented a theoretical asymptotic analysis of active learning for generalized linear models,
under realistic sampling assumptions. From this analysis, we obtain convex criteria which can be
optimized to provide algorithms for online optimization of the sampling distributions. This work
naturally leads to several extensions. First, our framework is not limited to generalized linear models, but can be readily extended to any convex differentiable M-estimators [24]. Second, it seems
advantageous to combine our active learning analysis with semi-supervised learning frameworks, in
particular ones based on data-dependent regularization [26]. Finally, we are currently investigating
applications to large scale image retrieval tasks, where unlabelled data are abundant but labelled data
are scarce.
Figure 3: Error rates vs. number of samples averaged over 10 replications for 3 classes: (left) data,
(right) comparisons of methods.
References
[1] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. J. Art. Intel. Res., 4:129–145, 1996.
[2] V. V. Fedorov. Theory of Optimal Experiments. Academic Press, 1972.
[3] P. Chaudhuri and P. A. Mykland. On efficient designing of nonlinear experiments. Stat. Sin., 5:421–440, 1995.
[4] S. Dasgupta. Coarse sample complexity bounds for active learning. In Adv. NIPS 18, 2006.
[5] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proc. ICML, 2001.
[6] S. Tong and E. Chang. Support vector machine active learning for image retrieval. In Proc. ACM Multimedia, 2001.
[7] M. Warmuth, G. Rätsch, M. Mathieson, J. Liao, and C. Lemmen. Active learning in the drug discovery process. In Adv. NIPS 14, 2002.
[8] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. In Proc. ICML, 2003.
[9] A. I. Schein. Active Learning for Logistic Regression. Ph.D. dissertation, U. Penn., CIS Dept., 2005.
[10] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[11] T. Zhang and F. J. Oles. A probability analysis on the value of unlabeled data for classification problems. In Proc. ICML, 2000.
[12] O. Chapelle. Active learning for Parzen window classifier. In Proc. AISTATS, 2005.
[13] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plan. Inf., 90:227–244, 2000.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
[15] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[16] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res., 2:243–264, 2001.
[17] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In Proc. ICML, 2000.
[18] K. Fukumizu. Active learning in multilayer perceptrons. In Adv. NIPS 8, 1996.
[19] T. Kanamori and H. Shimodaira. Active learning algorithm using the maximum weighted log-likelihood estimator. J. Stat. Plan. Inf., 116:149–162, 2003.
[20] T. Kanamori. Statistical asymptotic theory of active learning. Ann. Inst. Stat. Math., 54(3):459–475, 2002.
[21] H. White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1–26, 1982.
[22] H. Akaike. A new look at statistical model identification. IEEE Trans. Aut. Cont., 19:716–722, 1974.
[23] F. R. Bach. Active learning for misspecified generalized linear models. Technical Report N15/06/MM, Ecole des Mines de Paris, 2006.
[24] A. W. Van der Vaart. Asymptotic Statistics. Cambridge Univ. Press, 1998.
[25] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
[26] Y. Bengio and Y. Grandvalet. Semi-supervised learning by entropy minimization. In Adv. NIPS 17, 2005.
2,161 | 2,962 | Non-rigid point set registration: Coherent Point Drift
Andriy Myronenko, Xubo Song, Miguel Á. Carreira-Perpiñán
Department of Computer Science and Electrical Engineering
OGI School of Science and Engineering
Oregon Health and Science University
Beaverton, OR, USA, 97006
{myron, xubosong, miguel}@csee.ogi.edu
Abstract
We introduce Coherent Point Drift (CPD), a novel probabilistic method for nonrigid registration of point sets. The registration is treated as a Maximum Likelihood (ML) estimation problem with motion coherence constraint over the velocity field such that one point set moves coherently to align with the second set.
We formulate the motion coherence constraint and derive a solution of regularized
ML estimation through the variational approach, which leads to an elegant kernel
form. We also derive the EM algorithm for the penalized ML optimization with
deterministic annealing. The CPD method simultaneously finds both the non-rigid
transformation and the correspondence between two point sets without making
any prior assumption of the transformation model except that of motion coherence. This method can estimate complex non-linear non-rigid transformations,
and is shown to be accurate on 2D and 3D examples and robust in the presence of
outliers and missing points.
1 Introduction
Registration of point sets is an important issue for many computer vision applications such as robot navigation, image guided surgery, motion tracking, and face recognition. In fact, it is the key
component in tasks such as object alignment, stereo matching, point set correspondence, image
segmentation and shape/pattern matching. The registration problem is to find meaningful correspondence between two point sets and to recover the underlying transformation that maps one point
set to the second. The "points" in the point set are features, most often the locations of interest points extracted from an image. Other common geometrical features include line segments, implicit and parametric curves and surfaces. Any geometrical feature can be represented as a point set; in this sense, point locations are the most general of all features.
Registration techniques can be rigid or non-rigid depending on the underlying transformation model.
The key characteristic of a rigid transformation is that all distances are preserved. The simplest nonrigid transformation is affine, which also allows anisotropic scaling and skews. Effective algorithms
exist for rigid and affine registration. However, the need for more general non-rigid registration
occurs in many tasks, where complex non-linear transformation models are required. Non-linear
non-rigid registration remains a challenge in computer vision.
Many algorithms exist for point sets registration. A direct way of associating points of two arbitrary
patterns is proposed in [1]. The algorithm exploits properties of singular value decomposition and
works well with translation, shearing and scaling deformations. However, for a non-rigid transformation, the method performs poorly. Another popular method for point sets registration is the
Iterative Closest Point (ICP) algorithm [2], which iteratively assigns correspondence and finds the
least squares transformation (usually rigid) relating these point sets. The algorithm then redetermines the closest point set and continues until it reaches a local minimum. Many variants of ICP
have been proposed that affect all phases of the algorithm from the selection and matching of points
to the minimization strategy [3]. Nonetheless ICP requires that the initial pose of the two point sets
be adequately close, which is not always possible, especially when transformation is non-rigid [3].
Several non-rigid registration methods are introduced [4, 5]. The Robust Point Matching (RPM)
method [4] allows global to local search and soft assignment of correspondences between two point
sets. In [5] it is further shown that the RPM algorithm is similar to Expectation Maximization
(EM) algorithms for the mixture models, where one point set represents data points and the other
represents centroids of mixture models. In both papers, the non-rigid transform is parameterized by
Thin Plate Spline (TPS) [6], leading to the TPS-RPM algorithm [4]. According to regularization
theory, the TPS parametrization is a solution of the interpolation problem in 2D that penalizes the
second order derivatives of the transformation. In 3D the solution is not differentiable at point
locations. In four or higher dimensions the generalization collapses completely [7]. The M-step in
the EM algorithm in [5] is approximated for simplification. As a result, the approach is not truly
probabilistic and does not lead, in general, to the true Maximum Likelihood solution.
A correlation-based approach to point set registration is proposed in [8]. Two data sets are represented as probability densities, estimated using kernel density estimation. The registration is considered as the alignment between the two distributions that minimizes a similarity function defined by
L2 norm. This approach is further extended in [9], where both densities are represented as Gaussian
Mixture Models (GMM). Once again thin-plate spline is used to parameterize the smooth non-linear
underlying transformation.
In this paper we introduce a probabilistic method for point set registration that we call the Coherent
Point Drift (CPD) method. Similar to [5], given two point sets, we fit a GMM to the first point set,
whose Gaussian centroids are initialized from the points in the second set. However, unlike [4, 5, 9]
which assumes a thin-plate spline transformation, we do not make any explicit assumption of the
transformation model. Instead, we consider the process of adapting the Gaussian centroids from
their initial positions to their final positions as a temporal motion process, and impose a motion
coherence constraint over the velocity field. Velocity coherence is a particular way of imposing
smoothness on the underlying transformation. The concept of motion coherence was proposed in
the Motion Coherence Theory [10]. The intuition is that points close to one another tend to move
coherently. This motion coherence constraint penalizes derivatives of all orders of the underlying
velocity field (thin-plate spline only penalizes the second order derivative). Examples of velocity
fields with different levels of motion coherence for different point correspondence are illustrated in
Fig. 1.
Figure 1: (a) Two given point sets. (b) A coherent velocity field. (c, d) Velocity fields that are less
coherent for the given correspondences.
We derive a solution for the velocity field through a variational approach by maximizing the likelihood of GMM penalized by motion coherence. We show that the final transformation has an elegant
kernel form. We also derive an EM algorithm for the penalized ML optimization with deterministic
annealing. Once we have the final positions of the GMM centroids, the correspondence between
the two point sets can be easily inferred through the posterior probability of the Gaussian mixture
components given the first point set. Our method is a true probabilistic approach and is shown to
be accurate and robust in the presence of outliers and missing points, and is effective for estimation
of complex non-linear non-rigid transformations. The rest of the paper is organized as follows. In
Section 2 we formulate the problem and derive the CPD algorithm. In Section 3 we present the
results of CPD algorithm and compare its performance with that of RPM [4] and ICP [2]. In Section
4 we summarize the properties of CPD and discuss the results.
2 Method
Assume two point sets are given, where the template point set $Y = (y_1, \dots, y_M)^\top$ (expressed as an $M \times D$ matrix) should be aligned with the reference point set $X = (x_1, \dots, x_N)^\top$ (expressed as an $N \times D$ matrix), and D is the dimension of the points. We consider the points in Y as the centroids
of a Gaussian Mixture Model, and fit it to the data points X by maximizing the likelihood function.
We denote by $Y_0$ the initial centroid positions and define a continuous velocity function v for the template point set such that the current position of the centroids is $Y = v(Y_0) + Y_0$.
Consider a Gaussian-mixture density $p(x) = \sum_{m=1}^M \frac{1}{M}\, p(x|m)$ with $x|m \sim \mathcal N(y_m, \sigma^2 I_D)$, where Y represents the D-dimensional centroids of equally-weighted Gaussians with equal isotropic covariance matrices, and X represents the data points. In order to enforce a smooth motion constraint, we define the prior $p(Y|\lambda) \propto \exp\!\big(-\frac{\lambda}{2}\, \phi(Y)\big)$, where $\lambda$ is a weighting constant and $\phi(Y)$ is a function that regularizes the motion to be smooth. Using Bayes' theorem, we want to find the parameters Y by maximizing the posterior probability, or equivalently by minimizing the following energy function:
$$E(Y) = -\sum_{n=1}^N \log \sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2} + \frac{\lambda}{2}\, \phi(Y) \qquad (1)$$
We make the i.i.d. data assumption and ignore terms independent of Y. Equation 1 has a similar
form to that of Generalized Elastic Net (GEN) [11], which has shown good performance in nonrigid image registration [12]; note that there we directly penalized Y, while here we penalize the
transformation v. The $\phi$ function represents our prior knowledge about the motion, which should be smooth. Specifically, we want the velocity field v generated by the template point set displacement to be smooth. According to [13], smoothness is a measure of the "oscillatory" behavior of a function. Within the class of differentiable functions, one function is said to be smoother than another if it oscillates less; in other words, if it has less energy at high frequency. The high frequency content of a function can be measured by first high-pass filtering the function, and then measuring the resulting power. This can be represented as $\phi(v) = \int_{\mathbb R^d} |\tilde v(s)|^2 / \tilde G(s)\, ds$, where $\tilde v$ indicates the Fourier transform of the velocity and $\tilde G$ is some positive function that approaches zero as $\|s\| \to \infty$. Here $\tilde G$ represents a symmetric low-pass filter, so that its Fourier transform G is real and symmetric.
Following this formulation, we rewrite the energy function as:
$$E(\tilde v) = -\sum_{n=1}^N \log \sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2} + \frac{\lambda}{2} \int_{\mathbb R^d} \frac{|\tilde v(s)|^2}{\tilde G(s)}\, ds \qquad (2)$$
It can be shown using a variational approach (see Appendix A for a sketch of the proof) that the
function which minimizes the energy function in Eq. 2 has the form of the radial basis function:
$$v(z) = \sum_{m=1}^M w_m\, G(z - y_{0m}) \qquad (3)$$
We choose a Gaussian kernel form for G (note it is not related to the Gaussian form of the distribution chosen for the mixture model). There are several motivations for such a Gaussian choice:
First, it satisfies the required properties (symmetric, positive definite, and $\tilde G$ approaches zero as $\|s\| \to \infty$). Second, a Gaussian low-pass filter has the property of having the Gaussian form in both frequency and time domain, without oscillations. By choosing an appropriately sized Gaussian filter we have the flexibility to control the range of filtered frequencies and thus the amount of spatial smoothness. Third, the choice of the Gaussian makes our regularization term equivalent to the one in Motion Coherence Theory (MCT) [10]. The regularization term $\int_{\mathbb R^d} |\tilde v(s)|^2/\tilde G(s)\, ds$, with a Gaussian function for $\tilde G$, is equivalent to the sum of weighted squares of all order derivatives of the velocity field, $\int_{\mathbb R^d} \sum_{m=1}^\infty \frac{\beta^{2m}}{m!\, 2^m} (D^m v)^2$ [10, 13], where D is a derivative operator so that $D^{2m} v = \nabla^{2m} v$ and $D^{2m+1} v = \nabla(\nabla^{2m} v)$. The equivalence of the regularization term with that of the Motion Coherence Theory implies that we are imposing motion coherence among the points, and thus we call our method the Coherent Point Drift (CPD) method. A detailed discussion of MCT can be found in [10]. Substituting the solution obtained in Eq. 3 back into Eq. 2, we obtain
CPD algorithm:
• Initialize parameters λ, β, σ
• Construct the matrix G, initialize Y = Y0
• Deterministic annealing:
  • EM optimization, until convergence:
    • E-step: compute P
    • M-step: solve for W from Eq. 7
    • Update Y = Y0 + GW
  • Anneal σ = ασ
• Compute the velocity field: v(z) = G(z, ·)W
Figure 2: Pseudo-code of CPD algorithm.
$$E(\mathbf W) = -\sum_{n=1}^N \log \sum_{m=1}^M \exp\!\left( -\frac12 \left\| \frac{x_n - y_{0m} - \sum_{k=1}^M w_k\, G(y_{0k} - y_{0m})}{\sigma} \right\|^2 \right) + \frac{\lambda}{2}\, \operatorname{tr}\!\left( \mathbf W^\top \mathbf G \mathbf W \right) \qquad (4)$$

where $\mathbf G_{M \times M}$ is a square symmetric Gram matrix with elements $g_{ij} = e^{-\frac12 \left\| \frac{y_{0i} - y_{0j}}{\beta} \right\|^2}$ and $\mathbf W_{M \times D} = (w_1, \dots, w_M)^\top$ is a matrix of the Gaussian kernel weights in Eq. 3.
Optimization. Following the EM algorithm derivation for clustering using Gaussian Mixture Model
[14], we can find the upper bound of the function in Eq. 4 as (E-step):
$$Q(\mathbf W) = \sum_{n=1}^N \sum_{m=1}^M P^{\mathrm{old}}(m|x_n)\, \frac{\| x_n - y_{0m} - G(m, \cdot)\, \mathbf W \|^2}{2\sigma^2} + \frac{\lambda}{2}\, \operatorname{tr}\!\left( \mathbf W^\top \mathbf G \mathbf W \right) \qquad (5)$$
where $P^{\mathrm{old}}$ denotes the posterior probabilities calculated using the previous parameter values, and $G(m, \cdot)$ denotes the m-th row of $\mathbf G$. Minimizing the upper bound Q will lead to a decrease in the value of the energy function E in Eq. 4, unless it is already at a local minimum. Taking the derivative of Eq. 5 with respect to $\mathbf W$, and rewriting the equation in matrix form, we obtain (M-step)

$$\frac{\partial Q}{\partial \mathbf W} = \frac{1}{\sigma^2}\, \mathbf G \big( \operatorname{diag}(\mathbf P \mathbf 1)(\mathbf Y_0 + \mathbf G \mathbf W) - \mathbf P \mathbf X \big) + \lambda\, \mathbf G \mathbf W = 0 \qquad (6)$$

where $\mathbf P$ is the matrix of posterior probabilities with

$$p_{mn} = \exp\!\left( -\frac{1}{2\sigma^2} \| y_m^{\mathrm{old}} - x_n \|^2 \right) \Big/ \sum_{m=1}^M \exp\!\left( -\frac{1}{2\sigma^2} \| y_m^{\mathrm{old}} - x_n \|^2 \right).$$

The $\operatorname{diag}(\cdot)$ notation indicates a diagonal matrix and $\mathbf 1$ is a column vector of all ones. Multiplying Eq. 6 by $\sigma^2 \mathbf G^{-1}$ (which exists for a Gaussian kernel), we obtain a linear system of equations:

$$\big( \operatorname{diag}(\mathbf P \mathbf 1)\, \mathbf G + \lambda \sigma^2 \mathbf I \big)\, \mathbf W = \mathbf P \mathbf X - \operatorname{diag}(\mathbf P \mathbf 1)\, \mathbf Y_0 \qquad (7)$$
Solving the system for W is the M-step of the EM algorithm. The E-step requires computation of the posterior probability matrix P. The EM algorithm is guaranteed to converge to a local optimum from almost any starting point. Eq. 7 can also be obtained directly by taking the derivative of Eq. 4 with respect to W and equating it to zero. This results in a system of nonlinear equations that can be iteratively solved using a fixed-point update, which is exactly the EM algorithm shown above. The computational complexity of each EM iteration is dominated by the linear system of Eq. 7, which takes $O(M^3)$. If using a truncated Gaussian kernel and/or linear conjugate gradients, this can be reduced to $O(M^2)$.
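One full EM iteration can be written compactly. The following is a minimal sketch directly transcribing the E-step (including the uniform-outlier term introduced in the next paragraph, with support parameter a) and the linear system of Eq. 7; the array names are ours, not the paper's.

import numpy as np

def cpd_em_step(X, Y0, W, G, sigma2, lam, a):
    """One CPD EM iteration: E-step builds P, M-step solves Eq. 7.

    X: (N, D) reference points; Y0: (M, D) initial template points;
    W: (M, D) current kernel weights; G: (M, M) Gram matrix;
    sigma2: current sigma^2; lam: lambda; a: uniform-pdf support."""
    M, D = Y0.shape
    Y = Y0 + G @ W                                             # current centroids
    d2 = np.sum((Y[:, None, :] - X[None, :, :]) ** 2, axis=2)  # (M, N) distances
    num = np.exp(-d2 / (2.0 * sigma2))
    denom = num.sum(axis=0) + (2.0 * np.pi * sigma2) ** (D / 2.0) / a
    P = num / denom                                            # posteriors p_mn
    d = P.sum(axis=1)                                          # the vector P 1
    A = d[:, None] * G + lam * sigma2 * np.eye(M)              # diag(P1) G + lam s^2 I
    B = P @ X - d[:, None] * Y0
    return np.linalg.solve(A, B), P                            # new W, and P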
Robustness to Noise. The use of a probabilistic assignment of correspondences between point sets is
innately more robust than the binary assignment used in ICP. However, the GMM requires that each
data point be explained by the model. In order to account for outliers, we add an additional uniform
pdf component to the mixture model. This new component changes the posterior probability matrix P in Eq. 7, which is now defined as

$$p_{mn} = \exp\!\left( -\frac{1}{2\sigma^2} \| y_m^{\mathrm{old}} - x_n \|^2 \right) \Big/ \left( \frac{(2\pi\sigma^2)^{D/2}}{a} + \sum_{m=1}^M \exp\!\left( -\frac{1}{2\sigma^2} \| y_m^{\mathrm{old}} - x_n \|^2 \right) \right),$$

where a defines the support of the uniform pdf. The use of the uniform distribution greatly improves the robustness to noise.
Free Parameters. There are three free parameters in the method: λ, β and σ. Parameter λ represents the trade-off between data fitting and smoothness regularization. Parameter β reflects the strength of interaction between points. Small values of β produce locally smooth transformations, while large values of β correspond to nearly pure translations. The value of σ serves as a capture range for each Gaussian mixture component. A smaller σ indicates a smaller and more localized capture range for each Gaussian component in the mixture model. We use deterministic annealing for σ, starting with a large value and gradually reducing it according to σ = ασ, where α is the annealing rate (normally in [0.92, 0.98]), so that the annealing process is slow enough for the algorithm to be robust. The gradual reduction of σ leads to a coarse-to-fine match strategy. We summarize the CPD algorithm in Fig. 2.
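A minimal annealing driver wrapping the EM step sketched above; the convergence tests are simplified to fixed iteration counts, which is our assumption, not the paper's stopping rule.

import numpy as np

def cpd_register(X, Y0, G, sigma0=3.0, anneal=0.97, lam=1.0, a=1.0,
                 n_outer=150, n_em=30):
    """Deterministic-annealing loop of Fig. 2 around cpd_em_step."""
    M, D = Y0.shape
    W = np.zeros((M, D))
    sigma2 = sigma0 ** 2
    for _ in range(n_outer):
        for _ in range(n_em):                 # EM, approximating "until convergence"
            W, P = cpd_em_step(X, Y0, W, G, sigma2, lam, a)
        sigma2 *= anneal ** 2                 # sigma <- alpha * sigma
    return Y0 + G @ W, W                      # aligned template and weights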
3 Experimental Results
We show the performance of CPD on artificial data with non-rigid deformations. The algorithm is
implemented in Matlab, and tested on a Pentium 4 CPU at 3 GHz with 4 GB RAM. The code is available at www.csee.ogi.edu/~myron/matlab/cpd. The initial values of λ and β are set to 1.0 in all experiments. The starting value of σ is 3.0, gradually annealed with α = 0.97. The stopping condition for the iterative process is either when the current change in parameters drops below a threshold of $10^{-6}$ or when the number of iterations reaches the maximum of 150.
Figure 3: Registration results for the CPD, RPM and ICP algorithms from top to bottom. The first column shows template (?) and reference (+) point sets. The second column shows the registered position of the template set superimposed over the reference set. The third column represents the recovered underlying deformation. The last column shows the link between initial and final template point positions (only every second point's displacement is shown).
On average the algorithm converges in a few seconds and requires around 80 iterations. All point sets
are preprocessed to have zero mean and unit variance (which normalizes translation and scaling).
We compare our method on non-rigid point registration with RPM and ICP. The RPM and ICP
implementations and the 2D point sets used for comparison are taken from the TPS-RPM Matlab
package [4].
For the first experiment (Fig. 3) we use two clean point sets. Both CPD and RPM algorithms produce
accurate results for non-rigid registration. The ICP algorithm is unable to escape a local minimum.
We show the velocity field through the deformation of a regular grid. The deformation field for RPM
corresponds to parameterized TPS transformation, while that for CPD represents a motion coherent
non-linear deformation. For the second experiment (Fig. 4) we make the registration problem more
challenging. The fish head in the reference point set is removed, and random noise is added. In
the template point set the tail is removed. The CPD algorithm shows robustness even in the area of
missing points and corrupted data. RPM incorrectly wraps points to the middle of the figure. We
have also tried different values of smoothness parameters for RPM without much success, and we
only show the best result. ICP also shows poor performance and is stuck in a local minimum.
For the 3D experiment (Fig. 5) we show the performance of CPD on 3D faces. The face surface is
defined by the set of control points. We artificially deform the control point positions non-rigidly
and use it as a template point set. The original control point positions are used as a reference point
set. CPD is effective and accurate for this 3D non-rigid registration problem.
Figure 4 (rows from top to bottom: CPD, RPM and ICP algorithms): The reference point set is corrupted to make the registration task more challenging. Noise is added and the fish head is removed in the reference point set. The tail is also removed in the template point set. The first column shows template (?) and reference (+) point sets. The second column shows the registered position of the template set superimposed over the reference set. The third column represents the recovered underlying deformation. The last column shows the link between the initial and final template point positions.
4 Discussion and Conclusion
We introduce Coherent Point Drift, a new probabilistic method for non-rigid registration of two point
sets. The registration is considered as a Maximum Likelihood estimation problem, where one point
set represents centroids of a GMM and the other represents the data. We regularize the velocity
field over the points domain to enforce coherent motion and define the mathematical formulation
of this constraint. We derive the solution for the penalized ML estimation through the variational
approach, and show that the final transformation has an elegant kernel form. We also derive the
EM optimization algorithm with deterministic annealing. The estimated velocity field represents the
underlying non-rigid transformation. Once we have the final positions of the GMM centroids, the
correspondence between the two point sets can be easily inferred through the posterior probability of
Figure 5: The results of CPD non-rigid registration on 3D point sets. (a, d) The reference face and
its control point set. (b, e) The template face and its control point set. (c, f) Result obtained by
registering the template point set onto the reference point set using CPD.
the GMM components given the data. The computational complexity of CPD is $O(M^3)$, where M is the number of points in the template point set. It is worth mentioning that the components in the point
vector are not limited to spatial coordinates. They can also represent the geometrical characteristic
of an object (e.g., curvature, moments), or the features extracted from the intensity image (e.g.,
color, gradient). We compare the performance of the CPD algorithm on 2D and 3D data against ICP
and RPM algorithms, and show how CPD outperforms both methods in the presence of noise and
outliers. It should be noted that CPD does not work well for large in-plane rotation. Typically such
transformation can be first compensated by other well known global registration techniques before
CPD algorithm is carried out. The CPD method is most effective when estimating smooth non-rigid
transformations.
Appendix A
$$E = -\sum_{n=1}^N \log \sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2} + \frac{\lambda}{2} \int_{\mathbb R^d} \frac{|\tilde v(s)|^2}{\tilde G(s)}\, ds \qquad (8)$$
Consider the function in Eq. 8, where $y_m = y_{0m} + v(y_{0m})$ and $y_{0m}$ is the initial position of the point $y_m$. Here v is a continuous velocity function and $v(y_{0m}) = \int_{\mathbb R^d} \tilde v(s)\, e^{2\pi i \langle y_{0m}, s\rangle}\, ds$ in terms of its Fourier transform $\tilde v$. The following derivation follows [13]. Substituting v into Eq. 8 we obtain:
$$E(\tilde v) = -\sum_{n=1}^N \log \sum_{m=1}^M \exp\!\left( -\frac12 \left\| \frac{x_n - y_{0m} - \int_{\mathbb R^d} \tilde v(s)\, e^{2\pi i \langle y_{0m}, s\rangle}\, ds}{\sigma} \right\|^2 \right) + \frac{\lambda}{2} \int_{\mathbb R^d} \frac{|\tilde v(s)|^2}{\tilde G(s)}\, ds \qquad (9)$$
In order to find the minimum of this functional we take its functional derivative with respect to $\tilde v$, so that $\frac{\delta E(\tilde v)}{\delta \tilde v(t)} = 0$, $\forall t \in \mathbb R^d$:
$$\frac{\delta E(\tilde v)}{\delta \tilde v(t)} = -\sum_{n=1}^N \frac{\sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2}\, \frac{1}{\sigma^2}(x_n - y_m)\, \frac{\delta}{\delta \tilde v(t)} \int_{\mathbb R^d} \tilde v(s)\, e^{2\pi i \langle y_{0m}, s\rangle}\, ds}{\sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2}} + \frac{\lambda}{2} \int_{\mathbb R^d} \frac{\delta |\tilde v(s)|^2 / \delta \tilde v(t)}{\tilde G(s)}\, ds$$

$$= -\sum_{n=1}^N \frac{\sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2}\, \frac{1}{\sigma^2}(x_n - y_m)\, e^{2\pi i \langle y_{0m}, t\rangle}}{\sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2}} + \lambda\, \frac{\tilde v(-t)}{\tilde G(t)} = 0$$
We now define the coefficients

$$a_{mn} = \frac{\frac{1}{\sigma^2}(x_n - y_m)\, e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2}}{\sum_{m=1}^M e^{-\frac12 \left\| \frac{x_n - y_m}{\sigma} \right\|^2}},$$

and rewrite the functional derivative as:
$$-\sum_{m=1}^M \left( \sum_{n=1}^N a_{mn} \right) e^{2\pi i \langle y_{0m}, t\rangle} + \lambda\, \frac{\tilde v(-t)}{\tilde G(t)} = 0 \qquad (10)$$

Denoting the new coefficients $w_m = \frac{1}{\lambda} \sum_{n=1}^N a_{mn}$, and changing t to $-t$, we multiply by $\tilde G(t)$ on both sides of this equation, which results in:
$$\tilde v(t) = \tilde G(-t) \sum_{m=1}^M w_m\, e^{-2\pi i \langle y_{0m}, t\rangle} \qquad (11)$$
Assuming that $\tilde G$ is symmetric (so that its Fourier transform is real), and taking the inverse Fourier transform of the last equation, we obtain:
$$v(z) = G(z) \star \sum_{m=1}^M w_m\, \delta(z - y_{0m}) = \sum_{m=1}^M w_m\, G(z - y_{0m}) \qquad (12)$$
Since the $w_m$ depend on v through $a_{mn}$ and $y_m$, the $w_m$ that solve Eq. 12 must satisfy a self-consistency equation equivalent to Eq. 7. A specific form of the regularizer $\tilde G$ results in a specific basis function G.
Acknowledgment
This work is partially supported by NIH grant NEI R01 EY013093, NSF grant IIS-0313350 (awarded to X. Song) and NSF CAREER award IIS-0546857 (awarded to Miguel Á. Carreira-Perpiñán).
References
[1] G. L. Scott and H. C. Longuet-Higgins. An algorithm for associating the features of two images. Royal Society London Proc., B-244:21–26, 1991.
[2] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell., 14(2):239–256, 1992.
[3] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. Third International Conference on 3D Digital Imaging and Modeling, page 145, 2001.
[4] H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. CVPR, 2:44–51, 2000.
[5] H. Chui and A. Rangarajan. A feature registration framework using mixture models. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA), pages 190–197, 2000.
[6] F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567–585, 1989.
[7] R. Sibson and G. Stone. Computation of thin-plate splines. SIAM J. Sci. Stat. Comput., 12(6):1304–1313, 1991.
[8] Y. Tsin and T. Kanade. A correlation-based approach to robust point set registration. ECCV, 3:558–569, 2004.
[9] B. Jian and B. C. Vemuri. A robust algorithm for point set registration using mixture of Gaussians. ICCV, pages 1246–1251, 2005.
[10] A. L. Yuille and N. M. Grzywacz. The motion coherence theory. Int. J. Computer Vision, 3:344–353, 1988.
[11] M. Á. Carreira-Perpiñán, P. Dayan, and G. J. Goodhill. Differential priors for elastic nets. In Proc. of the 6th Int. Conf. Intelligent Data Engineering and Automated Learning (IDEAL'05), pages 335–342, 2005.
[12] A. Myronenko, X. Song, and M. Á. Carreira-Perpiñán. Non-parametric image registration using generalized elastic nets. Int. Workshop on Math. Foundations of Comp. Anatomy: Geom. and Stat. Methods in Non-Linear Image Registration, MICCAI, pages 156–163, 2006.
[13] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995.
| 2962 |@word middle:1 norm:1 gradual:1 perpin:1 tried:1 decomposition:2 covariance:1 tr:2 moment:1 initial:7 denoting:1 outperforms:1 current:2 recovered:2 must:1 girosi:1 shape:2 drop:1 update:2 plane:1 isotropic:1 parametrization:1 filtered:1 coarse:1 math:1 location:3 mathematical:2 registering:1 direct:1 differential:1 fitting:1 introduce:2 shearing:1 behavior:1 p1:3 cpu:1 estimating:1 underlying:8 notation:1 minimizes:2 finding:1 transformation:26 temporal:1 pseudo:1 every:1 oscillates:1 exactly:1 k2:3 control:6 normally:1 unit:1 grant:2 positive:2 before:1 engineering:3 local:6 mach:2 id:1 rigidly:1 oxford:1 interpolation:1 pmn:2 equating:1 equivalence:1 challenging:2 mentioning:1 collapse:1 limited:1 range:3 acknowledgment:1 definite:1 displacement:2 area:1 adapting:1 matching:5 word:1 radial:1 regular:1 onto:1 close:2 selection:1 operator:1 www:1 equivalent:3 deterministic:5 map:1 missing:3 maximizing:3 annealed:1 compensated:1 starting:3 formulate:2 assigns:1 pure:1 higgins:1 regularize:1 coordinate:1 grzywacz:1 gm:1 velocity:17 element:1 recognition:2 approximated:1 continues:1 bottom:1 electrical:1 solved:1 parameterize:1 capture:2 decrease:1 trade:1 removed:4 intuition:1 complexity:2 depend:1 rewrite:2 segment:1 solving:1 yuille:1 completely:1 basis:2 easily:2 represented:4 regularizer:1 derivation:2 effective:4 london:1 artificial:1 choosing:1 whose:1 solve:2 cvpr:1 besl:1 skews:1 transform:6 final:7 differentiable:2 net:3 interaction:1 aligned:1 y0k:1 gen:1 poorly:1 flexibility:1 convergence:1 optimum:1 rangarajan:2 produce:2 converges:1 object:2 derive:7 depending:1 stat:2 pose:1 miguel:3 measured:1 school:1 eq:17 implemented:1 implies:1 guided:1 anatomy:1 filter:3 generalization:1 around:1 considered:2 exp:1 substituting:2 estimation:6 proc:2 weighted:2 reflects:1 minimization:1 always:1 gaussian:20 pn:1 superimposed:2 likelihood:5 indicates:3 greatly:1 xubo:1 centroid:10 sense:1 posteriori:1 dayan:1 rigid:24 stopping:1 typically:1 mth:1 issue:1 among:1 myronenko:2 spatial:2 initialize:2 field:14 once:3 equal:1 having:1 construct:1 represents:13 jones:1 nearly:1 thin:6 cpd:28 spline:6 escape:1 few:1 intelligent:1 simultaneously:1 intell:2 phase:1 interest:1 multiply:1 alignment:2 navigation:1 mixture:13 truly:1 accurate:4 poggio:1 unless:1 old:5 penalizes:3 initialized:1 deformation:8 column:9 soft:1 modeling:1 measuring:1 assignment:3 maximization:1 mckay:1 uniform:3 corrupted:2 density:4 international:1 siam:1 probabilistic:6 off:1 icp:14 ym:17 na:3 w1:1 again:1 choose:1 conf:1 derivative:8 leading:1 account:1 deform:1 wk:2 coefficient:2 int:3 oregon:1 satisfy:1 innately:1 wm:9 recover:1 bayes:1 square:3 variance:1 characteristic:2 correspond:1 multiplying:1 worth:1 comp:2 oscillatory:1 reach:2 against:1 nonetheless:1 energy:5 frequency:4 e2:4 proof:1 popular:1 knowledge:1 color:1 improves:1 segmentation:1 organized:1 back:1 higher:1 formulation:2 implicit:1 biomedical:1 miccai:1 until:2 correlation:2 d:9 sketch:1 nonlinear:1 defines:1 usa:1 concept:1 true:2 adequately:1 regularization:6 kxn:1 symmetric:5 iteratively:2 illustrated:1 gw:5 ogi:3 amn:5 self:1 noted:1 generalized:1 nonrigid:3 plate:6 pdf:2 tps:5 stone:1 performs:1 motion:20 geometrical:3 image:9 variational:4 novel:1 nih:1 common:1 rotation:1 functional:3 anisotropic:1 tail:2 relating:1 imposing:2 smoothness:5 rd:9 consistency:1 grid:1 pm:8 robot:1 similarity:1 surface:2 align:1 add:1 curvature:1 closest:2 posterior:6 awarded:2 binary:1 success:1 minimum:5 additional:1 impose:1 converge:1 ii:2 smoother:1 
smooth:7 match:1 equally:1 award:1 variant:2 vision:3 expectation:1 iteration:3 kernel:8 represent:1 penalize:1 preserved:1 want:2 fine:1 annealing:7 singular:1 jian:1 appropriately:1 rest:1 unlike:1 tend:1 elegant:3 call:2 presence:3 ideal:1 yold:1 enough:1 automated:1 affect:1 fit:2 architecture:1 associating:2 andriy:1 gb:1 song:3 stereo:1 matlab:3 detailed:1 amount:1 locally:1 mmbia:1 deriva:1 simplest:1 reduced:1 exist:2 nsf:2 fish:2 estimated:2 key:2 four:1 sibson:1 threshold:1 changing:1 preprocessed:1 gmm:8 rewriting:1 clean:1 registration:32 ram:1 imaging:1 sum:1 package:1 parameterized:2 inverse:1 almost:1 oscillation:1 coherence:14 appendix:2 scaling:3 rpm:15 bound:2 guaranteed:1 simplification:1 correspondence:10 strength:1 constraint:6 dominated:1 chui:2 fourier:5 px:2 department:1 according:3 poor:1 conjugate:1 smaller:2 em:11 y0:7 making:1 outlier:4 explained:1 gradually:2 iccv:1 bookstein:1 taken:1 equation:8 remains:1 discus:1 serf:1 available:1 gaussians:2 myron:2 enforce:2 robustness:2 original:1 assumes:1 clustering:1 include:1 denotes:2 top:1 beaverton:1 exploit:1 especially:1 society:1 r01:1 surgery:1 move:2 already:1 coherently:2 occurs:1 added:2 parametric:2 strategy:2 diagonal:1 said:1 gradient:2 wrap:1 distance:1 link:2 unable:1 sci:1 assuming:1 code:2 minimizing:2 equivalently:1 ized:1 implementation:1 anal:2 upper:2 truncated:1 incorrectly:1 regularizes:1 extended:1 head:2 arbitrary:1 nei:1 drift:5 intensity:1 inferred:2 introduced:1 tive:1 required:2 coherent:9 registered:2 trans:2 usually:1 pattern:5 below:1 scott:1 goodhill:1 challenge:1 summarize:2 geom:1 royal:1 power:1 treated:1 regularized:1 carried:1 health:1 prior:4 l2:1 ksk:2 filtering:1 localized:1 digital:1 foundation:1 affine:2 translation:3 row:1 normalizes:1 eccv:1 penalized:5 supported:1 last:3 free:2 side:1 warp:1 template:15 face:5 taking:2 ghz:1 curve:1 dimension:2 xn:20 gram:1 calculated:1 stuck:1 levoy:1 ignore:1 ml:5 global:2 search:1 iterative:2 continuous:2 kanade:1 robust:7 longuet:1 elastic:3 career:1 rusinkiewicz:1 complex:3 anneal:1 artificially:1 domain:2 diag:4 motivation:1 noise:5 x1:1 fig:5 slow:1 position:13 explicit:1 csee:2 comput:1 weighting:1 third:4 theorem:1 perpi:3 specific:2 bishop:1 exists:1 workshop:2 xubosong:1 mct:2 expressed:2 tracking:1 partially:1 corresponds:1 satisfies:1 extracted:2 sized:1 content:1 change:2 vemuri:1 carreira:4 specifically:1 except:1 reducing:2 wt:2 principal:1 gij:1 pas:3 experimental:1 meaningful:1 support:1 tested:1 |
2,162 | 2,963 | AdaBoost is Consistent
Peter L. Bartlett
Department of Statistics and Computer Science Division
University of California, Berkeley
Mikhail Traskin
Department of Statistics
University of California, Berkeley
[email protected]
[email protected]
Abstract
The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that, provided AdaBoost is stopped after $n^\nu$ iterations, for sample size n and $\nu < 1$, the sequence of risks of the classifiers it produces approaches the Bayes risk if the Bayes risk $L^\star > 0$.
1 Introduction
Boosting algorithms are an important recent development in classification. These algorithms belong
to a group of voting methods, for example [1, 2, 3], that produce a classifier as a linear combination
of base or weak classifiers. While empirical studies show that boosting is one of the best off-the-shelf classification algorithms (see [3]), theoretical results don't give a complete explanation of their effectiveness.
Breiman [4] showed that under some assumptions on the underlying distribution, "population boosting" converges to the Bayes risk as the number of iterations goes to infinity. Since the population version assumes infinite sample size, this does not imply a similar result for AdaBoost, especially given the results of Jiang [5] showing that there are examples where AdaBoost has prediction error that is asymptotically suboptimal at $t = \infty$ (t is the number of iterations).
given results of Jiang [5], that there are examples when AdaBoost has prediction error asymptotically suboptimal at t = ? (t is the number of iterations).
Several authors have shown that modified versions of AdaBoost are consistent. These modifications
include restricting the l1 -norm of the combined classifier [6, 7] and restricting the step size of the algorithm [8]. Jiang [9] analyses the unmodified boosting algorithm and proves a process consistency
property, under certain assumptions. Process consistency means that there exists a sequence (tn )
such that if AdaBoost with sample size n is stopped after tn iterations, its risk approaches the Bayes
risk. However Jiang also imposes strong conditions on the underlying distribution: the distribution
of X (the predictor) has to be absolutely continuous with respect to Lebesgue measure and the funcP(Y =1|X)
tion FB (X) = 12 ln P(Y
=?1|X) has to be continuous on X . Also Jiang?s proof is not constructive
and does not give any hint on when the algorithm should be stopped. Bickel, Ritov and Zakai in
[10] prove a consistency result for AdaBoost, under the assumption that the probability distribution
is such that the steps taken by the algorithm are not too large. We would like to obtain a simple
stopping rule that guarantees consistency and doesn?t require any modification to the algorithm.
This paper provides a constructive answer to all of the mentioned issues:
1. We consider AdaBoost (not a modification).
2. We provide a simple stopping rule: the number of iterations t is a fixed function of the
sample size n.
3. We assume only that the class of base classifiers has finite VC-dimension, and that the span
of this class is sufficiently rich. Both assumptions are clearly necessary.
2 Setup and notation
Here we describe the AdaBoost procedure formulated as a coordinate descent algorithm and introduce definitions and notation. We consider a binary classification problem. We are given X ,
the measurable (feature) space, and $\mathcal Y = \{-1, 1\}$, the set of (binary) labels. We are given a sample $S_n = \{(X_i, Y_i)\}_{i=1}^n$ of i.i.d. observations distributed as the random variable $(X, Y) \sim \mathcal P$, where $\mathcal P$ is an unknown distribution. Our goal is to construct a classifier $g_n : \mathcal X \to \mathcal Y$ based on this sample. The quality of the classifier $g_n$ is given by the misclassification probability

$$L(g_n) = \mathbb P(g_n(X) \neq Y \,|\, S_n).$$

Of course we want this probability to be as small as possible and close to the Bayes risk

$$L^\star = \inf_g L(g) = \mathbb E\big( \min\{\eta(X),\, 1 - \eta(X)\} \big),$$

where the infimum is taken over all possible (measurable) classifiers and $\eta(\cdot)$ is the conditional probability $\eta(x) = \mathbb P(Y = 1 | X = x)$. The infimum above is achieved by the Bayes classifier $g^\star(x) = g(2\eta(x) - 1)$, where

$$g(x) = \begin{cases} 1, & x > 0, \\ -1, & x \le 0. \end{cases}$$
We are going to produce a classifier as a linear combination of base classifiers in H = { h | h : X → Y }. We shall assume that the class H has a finite VC (Vapnik-Chervonenkis) dimension, d_VC(H) = max{ |S| : S ⊆ X, |H_{|S}| = 2^{|S|} }. Define
R_n(f) = (1/n) Σ_{i=1}^n exp(−Y_i f(X_i))   and   R(f) = E exp(−Y f(X)).
Then the boosting procedure can be described as follows.
1. Set f_0 ≡ 0, choose the number of iterations t.
2. For k = 1, . . . , t set
f_k = f_{k−1} + α_{k−1} h_{k−1},
where the following holds:
R_n(f_k) = inf_{h∈H, α∈R} R_n(f_{k−1} + αh).
We call α_i the step size of the algorithm at step i.
3. Output g ∘ f_t as the final classifier.
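For concreteness, the following minimal sketch implements this coordinate-descent view of AdaBoost with decision stumps as the base class H; the stump learner, the clipping constants and the exponent alpha_exp in the stopping rule t_n = n^α are illustrative choices, not prescribed by the paper.

import numpy as np

def stump_predict(X, j, theta, s):
    # h(x) = s * sign(x_j - theta), with ties broken towards +1
    return s * np.sign(X[:, j] - theta + 1e-12)

def best_stump(X, y, w):
    # exhaustive search over features/thresholds for the minimum weighted error
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for theta in np.unique(X[:, j]):
            for s in (+1, -1):
                err = w @ (stump_predict(X, j, theta, s) != y)
                if err < best_err:
                    best_err, best = err, (j, theta, s)
    return best, best_err / w.sum()

def adaboost(X, y, alpha_exp=0.8):
    # AdaBoost stopped after t_n = floor(n**alpha_exp) rounds (alpha_exp < 1)
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(int(n ** alpha_exp)):
        (j, theta, s), eps = best_stump(X, y, w)
        eps = np.clip(eps, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)   # exact line search for the exp loss
        ensemble.append((alpha, j, theta, s))
        w *= np.exp(-alpha * y * stump_predict(X, j, theta, s))
    return lambda Xq: np.sign(sum(a * stump_predict(Xq, j, th, s)
                                  for a, j, th, s in ensemble))

For ±1-valued base classifiers, the line search inf_α R_n(f_{k−1} + αh) has the closed form α = (1/2) ln((1 − ε)/ε), where ε is the weighted error of h; this is what the sketch uses.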
We shall also use the convex hull of H scaled by λ ≥ 0,
F_λ = { f : f = Σ_{i=1}^n λ_i h_i,  n ∈ N ∪ {0},  λ_i ≥ 0,  Σ_{i=1}^n λ_i = λ,  h_i ∈ H },
as well as the set of k-combinations, k ∈ N, of functions in H,
F^k = { f : f = Σ_{i=1}^k λ_i h_i,  λ_i ∈ R,  h_i ∈ H }.
We shall also need to define the ℓ1-norm: for any f ∈ F,
||f||_1 = inf{ Σ_i |λ_i| : f = Σ_i λ_i h_i, h_i ∈ H }.
Define the squashing function π_l(·) to be
π_l(x) = l if x > l;  x if x ∈ [−l, l];  −l if x < −l.   (1)
Then the set of truncated functions is
π_l ∘ F = { f̄ : f̄ = π_l(f), f ∈ F }.
The set of classifiers based on class F is denoted by
g ∘ F = { f̄ : f̄ = g(f), f ∈ F }.
Define the derivative of an arbitrary function Q(·) in the direction of h as
Q′(f; h) = ∂Q(f + αh)/∂α, evaluated at α = 0.
The second derivative Q″(f; h) is defined similarly.
3
Consistency of boosting procedure
We shall need the following assumption.
Assumption 1 Let the distribution P and the class H be such that
lim_{λ→∞} inf_{f∈F_λ} R(f) = R*,
where R* = inf R(f), the infimum being over all measurable functions.
For many classes H, the above assumption is satisfied for all possible distributions P. See [6,
Lemma 1] for sufficient conditions for Assumption 1. As an example of such a class, we can take
a class of indicators of all rectangles or indicators of half-spaces defined by hyperplanes or binary
trees with the number of terminal nodes equal to d+1 (we consider trees with terminal nodes formed
by successive univariate splits), where d is the dimensionality of X (see [4]).
We begin with a simple lemma (see [1, Theorem 8] or [11, Theorem 6.1]):
Lemma 1 For any t ∈ N, if d_VC(H) ≥ 2, the following holds:
d_P(F^t) ≤ 2(t + 1)(d_VC(H) + 1) log₂[2(t + 1)/ln 2],
where d_P(F^t) is the pseudodimension of the class F^t.
The proof of AdaBoost consistency is based on the following result, which builds on the result by
Koltchinskii and Panchenko [12] and resembles [6, Lemma 2].
Lemma 2 For a continuous function φ define the Lipschitz constant
L_{φ,λ} = inf{ L > 0 : |φ(x) − φ(y)| ≤ L|x − y| for all −λ ≤ x, y ≤ λ }
and the maximum absolute value of φ(·) when the argument is in [−λ, λ],
M_{φ,λ} = max_{x∈[−λ,λ]} φ(x).
Then for the functions
R_φ(f) = E φ(Y f(X))   and   R_{φ,n}(f) = (1/n) Σ_{i=1}^n φ(Y_i f(X_i)),
with V = d_VC(H), c = 24 ∫_0^1 √( ln(8e/ε²) ) dε, and any n, λ > 0 and t > 0,
E sup_{f ∈ π_λ∘F^t} |R_φ(f) − R_{φ,n}(f)| ≤ cλ L_{φ,λ} √( (V + 1)(t + 1) log₂[2(t + 1)/ln 2] / n )   (2)
and
E sup_{f ∈ F_λ} |R_φ(f) − R_{φ,n}(f)| ≤ 4λ L_{φ,λ} √( 2V ln(4n + 2) / n ).   (3)
Also, for any δ > 0, with probability at least 1 − δ,
sup_{f ∈ π_λ∘F^t} |R_φ(f) − R_{φ,n}(f)| ≤ cλ L_{φ,λ} √( (V + 1)(t + 1) log₂[2(t + 1)/ln 2] / n ) + M_{φ,λ} √( ln(1/δ) / (2n) )   (4)
and
sup_{f ∈ F_λ} |R_φ(f) − R_{φ,n}(f)| ≤ 4λ L_{φ,λ} √( 2V ln(4n + 2) / n ) + M_{φ,λ} √( ln(1/δ) / (2n) ).   (5)
Proof. Equations (3) and (5) constitute [6, Lemma 2]. The proof of equations (2) and (4) is similar. We begin with symmetrization to get
E sup_{f ∈ π_λ∘F^t} |R_φ(f) − R_{φ,n}(f)| ≤ 2 E sup_{f ∈ π_λ∘F^t} | (1/n) Σ_{i=1}^n σ_i ( φ(Y_i f(X_i)) − φ(0) ) |,
where the σ_i are i.i.d. with P(σ_i = 1) = P(σ_i = −1) = 1/2. Then we use the "contraction principle" (see [13, Theorem 4.12, pp. 112-113]) with the function ψ(x) = (φ(x) − φ(0))/L_{φ,λ} to get
E sup_{f ∈ π_λ∘F^t} |R_φ(f) − R_{φ,n}(f)| ≤ 4 L_{φ,λ} E sup_{f ∈ π_λ∘F^t} | (1/n) Σ_{i=1}^n σ_i Y_i f(X_i) | = 4 L_{φ,λ} E sup_{f ∈ π_λ∘F^t} | (1/n) Σ_{i=1}^n σ_i f(X_i) |.
Next we proceed and bound the supremum. Notice that the functions in π_λ∘F^t are bounded, clipped to absolute value λ; therefore we can rescale π_λ∘F^t by (2λ)^{−1} and get
E sup_{f ∈ π_λ∘F^t} | (1/n) Σ_i σ_i f(X_i) | = 2λ E sup_{f ∈ (2λ)^{−1}·π_λ∘F^t} | (1/n) Σ_i σ_i f(X_i) |.
Next, we use Dudley's entropy integral [14] to bound the right-hand side above:
E sup_{f ∈ (2λ)^{−1}·π_λ∘F^t} | (1/n) Σ_i σ_i f(X_i) | ≤ (12/√n) ∫_0^∞ √( ln N(ε, (2λ)^{−1}·π_λ∘F^t, L₂(P_n)) ) dε.
Since for ε > 1 the covering number N is 1, the upper integration limit can be taken to be 1, and we can use Pollard's bound [15] for F ⊆ [0, 1]^X,
N(ε, F, L₂(P)) ≤ 2 (4e/ε²)^{d_P(F)},
where d_P(F) is the pseudodimension, and obtain, for c̃ = 12 ∫_0^1 √( ln(8e/ε²) ) dε,
E sup_{f ∈ (2λ)^{−1}·π_λ∘F^t} | (1/n) Σ_i σ_i f(X_i) | ≤ c̃ √( d_P( (2λ)^{−1}·π_λ∘F^t ) / n );
notice also that the constant c̃ does not depend on F^t or λ. Next, since (2λ)^{−1}·π_λ is a non-decreasing transform, we use the inequality d_P( (2λ)^{−1}·π_λ∘F^t ) ≤ d_P(F^t) (e.g., [11, Theorem 11.3]):
E sup_{f ∈ (2λ)^{−1}·π_λ∘F^t} | (1/n) Σ_i σ_i f(X_i) | ≤ c̃ √( d_P(F^t) / n ).
And then, since Lemma 1 gives an upper bound on the pseudodimension of the class F^t, we have
E sup_{f ∈ π_λ∘F^t} | (1/n) Σ_i σ_i f(X_i) | ≤ cλ √( (V + 1)(t + 1) log₂[2(t + 1)/ln 2] / n ),
with the constant c above being independent of H, t and λ. To prove the second statement we use McDiarmid's bounded difference inequality [16, Theorem 9.2, p. 136], since for each i
sup_{(x_j,y_j)_{j=1}^n, (x′_i,y′_i)} | sup_{f ∈ π_λ∘F^t} |R_φ(f) − R_{φ,n}(f)| − sup_{f ∈ π_λ∘F^t} |R_φ(f) − R′_{φ,n}(f)| | ≤ M_{φ,λ}/n,
where R′_{φ,n}(f) is obtained from R_{φ,n}(f) by changing the pair (x_i, y_i) to (x′_i, y′_i). This completes the proof of the lemma.
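As a purely numerical illustration (not part of the paper's argument), the uniform deviation bound (4) can be evaluated for the exponential loss φ(x) = e^{−x}, for which L_{φ,λ} = M_{φ,λ} = e^λ on [−λ, λ]:

import numpy as np

def rhs_of_ineq_4(n, t, V, lam, delta):
    # crude numerical estimate of c = 24 * integral_0^1 sqrt(ln(8e/eps^2)) d(eps)
    eps = np.linspace(1e-6, 1.0, 100001)
    c = 24 * np.mean(np.sqrt(np.log(8 * np.e / eps**2)))
    L = M = np.exp(lam)                 # Lipschitz const. and max of exp(-x) on [-lam, lam]
    cap = (V + 1) * (t + 1) * np.log2(2 * (t + 1) / np.log(2))
    return c * lam * L * np.sqrt(cap / n) + M * np.sqrt(np.log(1 / delta) / (2 * n))

Plugging in, say, t = n**0.5 and lam = 0.1 * ln(n) shows the bound vanishing as n grows, which is exactly the regime 2κ + α < 1 exploited below.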
Lemma 2, unlike [6, Lemma 2], allows us to choose the number of steps t, which describes the complexity of the linear combination of base functions, in addition to the parameter λ, which governs the size of the deviations of the functions in F; this is essential for the proof of the consistency. It is easy to see that for AdaBoost (i.e., φ(x) = e^{−x}) we have to choose λ = κ ln n and t = n^α with κ > 0, α > 0 and 2κ + α < 1. So far we have dealt with the statistical properties of the function we are minimizing; now we turn to the algorithmic part. We need the following simple consequence of the proof of [10, Theorem 1].
Theorem 1 Let the function Q(f) be convex in f. Let Q* = lim_{λ→∞} inf_{f∈F_λ} Q(f). Assume that there exist c₁, c₂ such that Q* < c₁ < c₂ < ∞ and
0 < inf{ Q″(f; h) : c₁ < Q(f) < c₂, h ∈ H } ≤ sup{ Q″(f; h) : Q(f) < c₂, h ∈ H } < ∞.
Then for any reference function f̄ and the sequence of functions f_m produced by the boosting algorithm, the following bound holds for all m such that Q(f_m) > Q(f̄):
Q(f_m) ≤ Q(f̄) + √( 8B³ Q(f_0)(Q(f_0) − Q(f̄)) / γ³ ) [ ln( (ℓ_0² + c₃(m + 1)) / ℓ_0² ) ]^{−1/2},   (6)
where ℓ_k = ||f̄ − f_k||_1, c₃ = 2Q(f_0)/γ, γ = inf{ Q″(f; h) : Q(f̄) < Q(f) < Q(f_0), h ∈ H }, and B = sup{ Q″(f; h) : Q(f) < Q(f_0), h ∈ H }.
Proof. The statement of the theorem is a version of the result implicit in the proof of [10, Theorem 1]. If for some m we have Q(f_m) ≤ Q(f̄), then the theorem is trivially true for all m′ ≥ m. Therefore, we are going to consider only the case when Q(f_{m+1}) > Q(f̄). By convexity of Q(·),
|Q′(f_m; f_m − f̄)| ≥ Q(f_m) − Q(f̄) = ε_m.   (7)
Let f_m − f̄ = Σ_i λ̄_i h̄_i, where the λ̄_i and h̄_i correspond to the best representation (the one with the smallest ℓ1-norm). Then from (7) and the linearity of the derivative we have
ε_m ≤ | Σ_i λ̄_i Q′(f_m; h̄_i) | ≤ sup_{h∈H} |Q′(f_m; h)| Σ_i |λ̄_i|,
therefore
sup_{h∈H} |Q′(f_m; h)| ≥ ε_m / ||f_m − f̄||_1.   (8)
Next,
Q(f_m + αh_m) = Q(f_m) + α Q′(f_m; h_m) + (α²/2) Q″(f̃_m; h_m),
where f̃_m = f_m + α̃_m h_m for some α̃_m ∈ [0, α_m], and since by assumption f̃_m is on the path from f_m to f_{m+1}, we have the following bounds:
Q(f̄) < Q(f_{m+1}) ≤ Q(f̃_m) ≤ Q(f_m) ≤ Q(f_0);
then, by the assumption of the theorem, for a γ that depends on Q(f̄), we have
Q(f_{m+1}) ≥ Q(f_m) + inf_{α∈R} ( α Q′(f_m; h_m) + (α²/2) γ ) = Q(f_m) − |Q′(f_m; h_m)|² / (2γ).   (9)
On the other hand,
Q(f_m + α_m h_m) = inf_{h∈H, α∈R} Q(f_m + αh) ≤ inf_{h∈H, α∈R} ( Q(f_m) + α Q′(f_m; h) + (α²/2) B ) = Q(f_m) − sup_{h∈H} |Q′(f_m; h)|² / (2B).   (10)
Therefore, combining (9) and (10), we get
|Q′(f_m; h_m)| ≥ sup_{h∈H} |Q′(f_m; h)| √(γ/B).   (11)
Another Taylor expansion, this time around f_{m+1}, gives us
Q(f_m) = Q(f_{m+1}) + (α_m²/2) Q″(f̂_m; h_m),   (12)
where f̂_m is some (other) function on the path from f_m to f_{m+1}. Therefore, if |α_m| < |Q′(f_m; h_m)| / B, then
Q(f_m) − Q(f_{m+1}) < |Q′(f_m; h_m)|² / (2B),
but by (10)
Q(f_m) − Q(f_{m+1}) ≥ sup_{h∈H} |Q′(f_m; h)|² / (2B) ≥ |Q′(f_m; h_m)|² / (2B),
therefore we conclude, by combining (11) and (8), that
|α_m| ≥ |Q′(f_m; h_m)| / B ≥ √γ sup_{h∈H} |Q′(f_m; h)| / B^{3/2} ≥ √γ ε_m / (ℓ_m B^{3/2}).   (13)
Using (12) we have
Σ_{i=0}^m α_i² ≤ (2/γ) Σ_{i=0}^m ( Q(f_i) − Q(f_{i+1}) ) ≤ (2/γ) ( Q(f_0) − Q(f̄) ).   (14)
Recall that
ℓ_m = ||f_m − f̄||_1 ≤ ||f_{m−1} − f̄||_1 + |α_{m−1}| ≤ ||f_0 − f̄||_1 + Σ_{i=0}^{m−1} |α_i| ≤ ||f_0 − f̄||_1 + √m ( Σ_{i=0}^{m−1} α_i² )^{1/2};
therefore, combining with (14) and (13), and since the sequence ε_i is decreasing,
(2/γ)(Q(f_0) − Q(f̄)) ≥ Σ_{i=0}^m α_i² ≥ (γ/B³) Σ_{i=0}^m ε_i² / ℓ_i² ≥ (γ ε_m² / B³) Σ_{i=0}^m 1 / ( ℓ_0 + ( i Σ_{j=0}^{i−1} α_j² )^{1/2} )² ≥ (γ ε_m² / B³) Σ_{i=0}^m 1 / ( ℓ_0 + ( (2Q(f_0)/γ) i )^{1/2} )² ≥ (γ ε_m² / (2B³)) Σ_{i=0}^m 1 / ( ℓ_0² + (2Q(f_0)/γ) i ).
Since
Σ_{i=0}^m 1/(a + bi) ≥ ∫_0^{m+1} dx/(a + bx) = (1/b) ln( (a + b(m + 1)) / a ),
we obtain
(2/γ)(Q(f_0) − Q(f̄)) ≥ ( γ² ε_m² / (4B³ Q(f_0)) ) ln( ( ℓ_0² + (2Q(f_0)/γ)(m + 1) ) / ℓ_0² ).
Therefore
ε_m ≤ √( 8B³ Q(f_0)(Q(f_0) − Q(f̄)) / γ³ ) [ ln( ( ℓ_0² + (2Q(f_0)/γ)(m + 1) ) / ℓ_0² ) ]^{−1/2},
and this completes the proof.
The theorem above allows us to get an upper bound on the difference between the φ-risk of the function output by AdaBoost and the φ-risk of an appropriate reference function.
Theorem 2 Assume R* > 0. Let t_n = n^α be the number of steps we run AdaBoost, and let λ_n = κ ln n, with κ > 0, α > 0 and α + 2κ < 1. Let f̄_n be a minimizer of the function R_n(·) within F_{λ_n}. Then for n large enough, with high probability, the following holds:
R_n(f_{t_n}) ≤ R_n(f̄_n) + ( 8/(R*)^{3/2} ) [ ln( ( λ_n² + (4/R*) t_n ) / λ_n² ) ]^{−1/2}.
Proof. This theorem follows directly from Theorem 1. Because in AdaBoost
R_n″(f; h) = (1/n) Σ_{i=1}^n (−Y_i h(X_i))² exp(−Y_i f(X_i)) = (1/n) Σ_{i=1}^n exp(−Y_i f(X_i)) = R_n(f),
all the conditions in Theorem 1 are satisfied (with Q(f) replaced by R_n(f)), and in Equation (6) we have B = R_n(f_0) = 1, γ ≥ R_n(f̄_n), and ||f_0 − f̄_n||_1 ≤ λ_n. Since for t such that R_n(f_t) ≤ R_n(f̄_n) the theorem is trivially true, we only have to notice that Lemma 2 guarantees that with probability at least 1 − δ
|R(f̄_n) − R_n(f̄_n)| ≤ 4λ_n L_{φ,λ_n} √( 2V ln(4n + 2) / n ) + M_{φ,λ_n} √( ln(1/δ) / (2n) ).
Thus for n such that the r.h.s. of the above expression is less than R*/2 we have γ ≥ R_n(f̄_n) ≥ R*/2, and the result follows immediately from Equation (6) if we use the fact that R_n(f̄_n) > 0.
Then, having all the ingredients at hand, we can formulate the main result of the paper.
Theorem 3 Assume V = d_VC(H) < ∞, L* > 0,
lim_{λ→∞} inf_{f∈F_λ} R(f) = R*,
t_n → ∞, and t_n = O(n^α) for α < 1. Then AdaBoost stopped at step t_n returns a sequence of classifiers almost surely satisfying L(g(f_{t_n})) → L*.
Proof. For the exponential loss function, L* > 0 implies R* > 0. Let λ_n = κ ln n, κ > 0, 2κ + α < 1. Also, let f̄ be a minimizer of R and f̄_n be a minimizer of R_n within F_{λ_n}. Then we have
R(π_{λ_n}(f_{t_n})) ≤ R_n(π_{λ_n}(f_{t_n})) + ε₁   by Lemma 2   (15)
≤ R_n(f_{t_n}) + ε₁ + φ(λ_n)   since φ(π_λ(x)) ≤ φ(x) + φ(λ)   (16)
≤ R_n(f̄_n) + ε₁ + φ(λ_n) + ε₂   by Theorem 2   (17)
≤ R(f̄) + ε₁ + φ(λ_n) + ε₂ + ε₃   by Lemma 2.
Inequalities (15) and (17) hold with probability at least 1 − δ_n, while inequality (16) is true for sufficiently large n when (17) holds. The ε's above are
ε₁ = c κ n^κ ln n √( (V + 1)(n^α + 1) log₂[2(n^α + 1)/ln 2] / n ) + n^κ √( ln(1/δ_n) / (2n) ),
ε₂ = ( 8/(R*)^{3/2} ) [ ln( ( (κ ln n)² + (4/R*) n^α ) / (κ ln n)² ) ]^{−1/2},
ε₃ = 4 κ n^κ ln n √( 2V ln(4n + 2) / n ) + n^κ √( ln(1/δ_n) / (2n) ),
and φ(λ_n) = n^{−κ}. Therefore, by the choice of κ and α and an appropriate choice of δ_n, for example δ_n = n^{−2}, we have ε₁ → 0, ε₂ → 0, ε₃ → 0 and φ(λ_n) → 0. Also, R(f̄) → R* by Assumption 1. Now we appeal to the Borel-Cantelli lemma and arrive at R(π_{λ_n}(f_{t_n})) → R* a.s. Eventually we can use [17, Theorem 3] to conclude that
L(g(π_{λ_n}(f_{t_n}))) → L*   a.s.
But for λ_n > 0 we have g(π_{λ_n}(f_{t_n})) = g(f_{t_n}), therefore
L(g(f_{t_n})) → L*   a.s.
Hence AdaBoost is consistent if stopped after n^α steps.
4
Discussion
We showed that AdaBoost is consistent if stopped sufficiently early, after t_n iterations, for t_n = n^α with α < 1, given that the Bayes risk L* > 0. It is unclear whether this number can be increased. Results by Jiang [5] imply that for some X and function class H the AdaBoost algorithm will achieve zero training error after t_n steps, where n²/t_n = o(1). We don't know what happens in between O(n^{1−ε}) and O(n² ln n). Lessening this gap is a subject of further research.
We analyzed only AdaBoost, the boosting algorithm that uses the loss function φ(x) = e^{−x}. Since the proof of Theorem 2 relies on the properties of the exponential loss, we cannot make a similar conclusion for other versions of boosting, e.g., logit boosting with φ(x) = ln(1 + e^{−x}): in this case the assumption on the second derivative holds with R_n″(f; h) ≥ R_n(f)/n, and though the resulting inequality is trivial, the factor 1/n precludes us from finding any useful bound. It is a subject of future work to find an analog of Theorem 2 that will handle the logit loss.
Acknowledgments
We gratefully acknowledge the support of NSF under award DMS-0434383.
References
[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[2] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
[3] Leo Breiman. Arcing classifiers (with discussion). The Annals of Statistics, 26(3):801-849, 1998. (Was Department of Statistics, U.C. Berkeley Technical Report 460, 1996).
[4] Leo Breiman. Some infinite theory for predictor ensembles. Technical Report 579, Department of Statistics, University of California, Berkeley, 2000.
[5] Wenxin Jiang. On weak base hypotheses and their implications for boosting regression and classification. The Annals of Statistics, 30:51-73, 2002.
[6] Gábor Lugosi and Nicolas Vayatis. On the Bayes-risk consistency of regularized boosting methods. The Annals of Statistics, 32(1):30-55, 2004.
[7] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56-85, 2004.
[8] Tong Zhang and Bin Yu. Boosting with early stopping: convergence and consistency. The Annals of Statistics, 33:1538-1579, 2005.
[9] Wenxin Jiang. Process consistency for AdaBoost. The Annals of Statistics, 32(1):13-29, 2004.
[10] P. J. Bickel, Y. Ritov, and A. Zakai. Some theory for generalized boosting algorithms. Journal of Machine Learning Research, 7:705-732, May 2006.
[11] Martin Anthony and Peter Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[12] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. The Annals of Statistics, 30:1-50, 2002.
[13] Michel Ledoux and Michel Talagrand. Probability in Banach Spaces. Springer-Verlag, New York, 1991.
[14] Richard M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, Cambridge, MA, 1999.
[15] David Pollard. Empirical Processes: Theory and Applications. IMS, 1990.
[16] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[17] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
A Switched Gaussian Process for Estimating
Disparity and Segmentation in Binocular Stereo
Oliver Williams
Microsoft Research Ltd.
Cambridge, UK
[email protected]
Abstract
This paper describes a Gaussian process framework for inferring pixel-wise
disparity and bi-layer segmentation of a scene given a stereo pair of images.
The Gaussian process covariance is parameterized by a foreground-background-occlusion segmentation label to model both smooth regions and discontinuities.
As such, we call our model a switched Gaussian process. We propose a greedy incremental algorithm for adding observations from the data and assigning segmentation labels. Two observation schedules are proposed: the first treats scanlines as
independent, the second uses an active learning criterion to select a sparse subset
of points to measure. We show that this probabilistic framework has comparable
performance to the state-of-the-art.
1
Introduction
Given two views of the same scene, this paper addresses the dual objectives of inferring depth
and segmentation in scenes with perceptually distinct foreground and background layers. We do
this in a probabilistic framework using a Gaussian process prior to model the geometry of typical
scenes of this type. Our approach has two properties of interest to practitioners: firstly, it can be
employed incrementally, which is useful for circumstances in which the time allowed for processing is constrained or variable; secondly, it is probabilistic, enabling fusion with other sources of scene
information.
Segmentation and depth estimation are well-studied areas (e.g., [1] and [2, 3, 4]). However, the inspiration for the work in this paper is [5], in which both segmentation and depth are estimated in a unified framework based around graph cuts. In [5] the target application was video conferencing; however, such an algorithm is also applicable to areas such as robotics and augmented reality.
Gaussian process regression has previously been used in connection with stereo images in [6] to
learn the non-linear mapping between matched left-right image points and scene points as an alternative to photogrammetric camera calibration [7]. In this paper we use a Gaussian process to help
discover the initially unknown left-right matches in a complex scene: a camera calibration procedure
might then be used to determine actual 3D scene geometry.
The paper is organized as follows: Sec. 2 describes our Gaussian process framework for inferring
depth (disparity) and segmentation from stereo measurements. Sec. 3 proposes and demonstrates
two observation schedules: the first operates along image scanlines independently, the second treats
the whole image jointly, and makes a sparse set of stereo observations at locations selected by an
active learning criterion [8]; we also show how colour information may be fused with predictions by
the switched GP, the results of which are comparable to those of [5]. Sec. 4 concludes the paper.
Figure 1: Anatomy of a disparity map. This schematic shows some of the important features in
short baseline binocular stereo for a horizontal strip of pixels. Transitions between foreground and background at the right edge of a foreground object will induce a discontinuity from high to low disparity. Background-to-foreground transitions at the left edge of the foreground induce an occlusion
region in which scene points visible in the left image are not visible in the right. We use the data
from [5] which are available on their web site: http://research.microsoft.com/vision/cambridge/i2i
2
Single frame disparity estimation
This framework is intended for use with short baseline stereo, in which the two images are taken
slightly to the left and the right of a midpoint (see Fig. 1). This means that most features visible in
one image are visible in the other, albeit at a different location: for a given point x in the left image L(x), our aim is therefore to infer the location of the same scene point in the right image R(x′). We assume that both L and R have been rectified [7] such that all corresponding points have the same vertical coordinate; hence if x = [x y]^T then x′ = [x − d(x) y]^T, where d(x) is called the disparity map for points x in the left image.
Because objects typically have smooth variations in depth, d(x) is generally smooth. However, there
are two important exceptions to this and, because they occur at the boundaries between an object
and the background, it is essential that they be modelled correctly (see also Fig. 1):
Discontinuity Discontinuities occur where one pixel belongs to the foreground and its neighbour
belongs to the background.
Occlusion At background-to-foreground transitions (travelling horizontally from left to right), there
will be a region of pixels in the left image that are not visible in the right since they are occluded by the foreground [3]. Such locations correspond to scene points in the background
layer; however, their disparity is undefined.
The next subsection describes a prior for disparity that attempts to capture these characteristics by
modelling the bi-layer segmentation.
2.1
A Gaussian process prior for disparity
We model the prior distribution of a disparity map to be a Gaussian process (GP) [9]. GPs are defined by a mean function f(·) and a covariance function c(·,·), which in turn define the joint distribution of disparities at a set of points {x_1, . . . , x_n} as a multivariate Gaussian
P( d(x_1), . . . , d(x_n) | f, c ) = Normal(f, C),   (1)
where f_i = f(x_i) and C_ij = c(x_i, x_j).
In order to specify a mean and covariance function that give typical disparity maps a high probability, we introduce a latent segmentation variable s(x) ∈ {F, B, O} for each point in the left image.
This encodes whether a point belongs to the foreground (F), background (B) or is occluded (O) and
makes it possible to model the fact that disparities in the background/foreground are smooth (spatially correlated) within their layers and are independent across layers. For a given segmentation,
the covariance function is
c(x_i, x_j; s) = D exp(−ν ||x_i − x_j||²)   if s(x_i) = s(x_j) ≠ O;
c(x_i, x_j; s) = D δ(x_i − x_j)   if s(x_i) = s(x_j) = O;
c(x_i, x_j; s) = 0   if s(x_i) ≠ s(x_j),   (2)
where D is the maximum disparity in the scene and δ is the Dirac delta function. The covariance of two points will be zero (i.e., the disparities are independent) unless they share the same segmentation label. Disparity is undefined within occlusion regions, so these points are treated as independent with high variance to capture the noisy observations that occur here; pixels with other labels have disparities whose covariance falls off with distance, engendering smoothness in the disparity map. The parameter ν controls the smoothness and is set to ν = 0.01 for all of the experiments shown in this paper (the points x are measured in pixel units). It will be convenient in what follows to define the covariance for sets of points such that c(X, X′; s) = C(s) ∈ R^{n×n′}, where the element C_ij is the covariance of the ith element of X and the jth element of X′. The prior mean is also defined according to segmentation, to reflect the fact that the foreground is at greater disparity (nearer the camera) than the background:
f(x; s) = 0.2D if s(x) = B;  0.8D if s(x) = F;  0.5D if s(x) = O.   (3)
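As a minimal sketch of this switched prior, with the Dirac term realized as a diagonal entry and the integer label encoding an implementation choice not taken from the paper:

import numpy as np

F, B, O = 0, 1, 2

def switched_cov(xi, xj, si, sj, D, nu=0.01):
    # covariance function (2) between two pixel coordinates with labels si, sj
    if si != sj:
        return 0.0
    if si == O:
        return D if np.allclose(xi, xj) else 0.0
    return D * np.exp(-nu * np.sum((np.asarray(xi) - np.asarray(xj)) ** 2))

def switched_mean(s, D):
    # prior mean function (3)
    return {B: 0.2 * D, F: 0.8 * D, O: 0.5 * D}[s]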
Because of the independence induced by the discrete labels s(x), we call this prior model a switched
Gaussian process. Switching between Gaussian processes for different parts of the input space
has been discussed previously by [10] in which switching was demonstrated for a 1D regression
problem.
2.2
Stereo measurement process
A proposed disparity d(x) is compared to the data via the normalized sum of squared differences (NSSD) matching cost over a region Ω (here a 5 × 5 pixel patch centred at the origin), using the normalized intensity L̃(x) = L(x) − (1/25) Σ_{a∈Ω} L(x + a) (likewise for R̃(x)):
m(x, d) = [ Σ_{a∈Ω} ( L̃(x + a) − R̃(x + a − d) )² ] / [ 2 Σ_{a∈Ω} ( L̃²(x + a) + R̃²(x + a − d) ) ].   (4)
This cost has been shown in practice to be effective for disparity estimation [11].
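As a minimal sketch (the array layout, patch radius and absence of bounds checking are assumptions, not from the paper), the cost (4) can be computed as:

import numpy as np

def nssd(L, R, x, y, d, r=2):
    # matching cost m((x, y), d) over a (2r+1) x (2r+1) patch of rectified images
    pL = L[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    pR = R[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
    pL -= pL.mean()                    # normalized intensities L~ and R~
    pR -= pR.mean()
    return np.sum((pL - pR) ** 2) / (2.0 * np.sum(pL ** 2 + pR ** 2))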
To incorporate this information with the GP prior it must be expressed probabilistically. We follow the approach of [12], in which a parabola is fitted around the disparity with minimum score, m(x, d) ≈ ad² + bd + c. Interpreting this as the negative logarithm of a Gaussian distribution gives
d(x) = μ(x) + ε,  where  ε ~ Normal(0, v(x)),
with μ(x) = −b/(2a) and v(x) = 1/(2a)   (5)
being the observation mean and variance.
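A sketch of this observation model, assuming the matching cost has already been evaluated on a grid of candidate disparities (the three-point fit is an illustrative choice, and a convex fit, a > 0, is assumed near the minimum):

import numpy as np

def disparity_observation(costs, d_grid):
    # fit m(d) ~ a d^2 + b d + c around the minimizing disparity; return (mu, v) of (5)
    i = int(np.argmin(costs))
    i = min(max(i, 1), len(costs) - 2)      # keep a full 3-point neighbourhood
    a, b, c = np.polyfit(d_grid[i - 1:i + 2], costs[i - 1:i + 2], 2)
    return -b / (2 * a), 1 / (2 * a)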
Given a segmentation and a set of noisy measurements at locations X = {x_1, . . . , x_n}, the GP can be used to predict the disparity at a new point: P(d(x) | X) is a Gaussian distribution Normal(μ̂(x), v̂(x)) with [9]
μ̂(x; s) = μ^T C̃(s)^{−1} c(X, x; s)   and   v̂(x; s) = c(x, x; s) − c(X, x; s)^T C̃(s)^{−1} c(X, x; s),   (6)
where C̃(s) = c(X, X; s) + diag( v(x_1), . . . , v(x_n) ) and μ = [μ(x_1), . . . , μ(x_n)]^T.
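A direct transcription of (6) for one query point; the helper cov(A, B), which builds the matrix of c(·,·; s) values for a fixed segmentation, is an assumed interface:

import numpy as np

def gp_predict(x, X, mu, v, cov):
    # predictive mean and variance of equation (6)
    C = cov(X, X) + np.diag(v)          # C~(s)
    k = cov(X, [x])[:, 0]               # c(X, x; s)
    w = np.linalg.solve(C, k)           # C~(s)^{-1} c(X, x; s)
    return mu @ w, cov([x], [x])[0, 0] - k @ w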
2.3
Segmentation likelihood
The previous discussion has assumed that the segmentation is known, yet this will rarely be the case
in practice: s must therefore be inferred from the data together with the disparity. For a given set of
observations, the probability that they are a sample from the GP prior is given by
E(X) = log P(μ | s, v) = −(1/2) (μ − f(s))^T C̃(s)^{−1} (μ − f(s)) − (1/2) log det C̃(s) − (n/2) log 2π.   (7)
This is the evidence for the parameters of the prior model and constitutes a data likelihood for the
segmentation. The next section describes an algorithm that uses this quantity to infer a segmentation
whilst incorporating observations.
3
Incremental incorporation of measurements and model selection
We propose an incremental and greedy algorithm for finding a segmentation. Measurements are incorporated one at a time, and the evidence of adding the ith observation to each of the three segmentation layers is computed based on the preceding i − 1 observations and their labels. The ith point is labelled according to whichever gave the greatest evidence. The first i − 1 observation points X_{i−1} = {x_1, . . . , x_{i−1}} are partitioned according to their labelling into the mutually independent sets X_F, X_B and X_O. Since the three segmentation layers are independent, some of the cost of computing and storing the large matrix C̃^{−1} is avoided by constructing F̃^{−1} and B̃^{−1} instead, where F̃ = c(X_F, X_F) and B̃ = c(X_B, X_B). Observations assigned to the occlusion layer are independent of all other points and contain no useful information. There is therefore no need to keep a covariance matrix for these.
As shown in [13], the GP framework easily facilitates incremental incorporation of observations by repeatedly updating the matrix inverse required in the prediction equations (6). For example, to add the ith example to the foreground (the process is identical for the background layer), compute
F̃_i^{−1} = [ F̃_{i−1}, c(X_F, x_i) ; c(X_F, x_i)^T, c(x_i, x_i) + v(x_i) ]^{−1} = [ F̃_{i−1}^{−1} + q_F q_F^T / r_F, q_F ; q_F^T, r_F ],   (8)
where
r_F^{−1} = c(x_i, x_i) + v(x_i) − c(X_F, x_i)^T F̃_{i−1}^{−1} c(X_F, x_i),
q_F = −r_F F̃_{i−1}^{−1} c(X_F, x_i).   (9)
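A sketch of this update in code; the layout of the returned block matrix is the only design choice:

import numpy as np

def add_observation(F_inv, k_new, k_self, v_new):
    # grow the inverse kernel matrix by one observation, equations (8)-(9);
    # F_inv is the current inverse, k_new = c(X_F, x_i), k_self = c(x_i, x_i)
    r = 1.0 / (k_self + v_new - k_new @ F_inv @ k_new)
    q = -r * (F_inv @ k_new)
    return np.block([[F_inv + np.outer(q, q) / r, q[:, None]],
                     [q[None, :], np.array([[r]])]])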
Similarly, there is an incremental form for computing the evidence of a particular segmentation as E(X_i | s(x_i) = j) = E(X_{i−1}) + ΔE_j(x_i), where
ΔE_j(x_i) = (1/2) log r_j − (1/2) [ (μ(X_j)^T q_j)² / r_j + 2μ(x_i) q_j^T μ(X_j) + r_j μ(x_i)² ] − (1/2) log 2π.   (10)
By computing ΔE_j for the three possible segmentations, a new point can be greedily labelled as that which gives the greatest increase in evidence.
Algorithm 1 gives pseudo-code for the incremental incorporation of a measurement and greedy labelling. As with Gaussian process regression in general, this algorithm scales as O(n²) for storage and O(n³) for time, and it is therefore impractical to make an observation at every pixel for images of useful size. We propose two mechanisms to overcome this:
1. Factorize the image into several sub-images and treat each one independently. The next
subsection demonstrates this when each scanline (row of pixels) is handled independently.
2. Only make measurements at a sparse subset of locations. Subsection 3.2 describes an active
learning approach for identifying optimally informative observation points.
3.1
Independent scanline observation schedule
By handling the image pixels one row at a time, the problem becomes one-dimensional. Points
are processed in order from right to left: for each point the disparity is measured as described in
Algorithm 1 Add and label measurement at x_i
input F̃^{−1}, B̃^{−1}, X_F, X_B and X_O
Compute matrix building blocks r_{j∈{F,B}} and q_{j∈{F,B}} from (9)
Compute the change in evidence for adding to each layer, ΔE_{j∈{F,B,O}}, from (10)
Label point s(x_i) = arg max_{j∈{F,B,O}} ΔE_j(x_i)
Add point to set: X_{s(x_i)} ← X_{s(x_i)} ∪ {x_i}
if s(x_i) ∈ F ∪ B then
  Update matrix F̃^{−1} or B̃^{−1} as in (8)
end if
i = i + 1
return F̃^{−1}, B̃^{−1}, X_F, X_B and X_O
Figure 2: Scanline predictions. Disparity and segmentation maps inferred by treating each scanline independently. (a) 320 × 240 pixel left input images. (b) Mean predicted disparity map μ̂(x). (c) Inferred segmentation s(x) with F = white, B = grey (orange) and O = black.
Sec. 2.2 and incorporated/labelled according to Algorithm 1. In this setting there are constraints on which labels may be neighbours along a scanline. Fig. 1 shows the segmentation for a typical image, from which it can be seen that, moving horizontally from right to left, the only "legal" transitions in segmentation are B → F, F → O and O → B. Algorithm 1 is therefore modified to consider legal segmentations only.
Fig. 2 shows some results of this approach. Both the disparities and segmentation are, subjectively, accurate; however, there are a number of "streaky" artifacts caused by the fact that there is no vertical sharing of information. There are also a number of artifacts where an incorrect segmentation label has been assigned; in many cases this is where a point in the foreground or background has been labelled as occluded because there is no texture in that part of an image, and measurements made for such points have a high variance. The occlusion class could therefore be more accurately described as a general outlier category.
3.2
Active selection of sparse measurement locations
As shown above, our GP model scales badly with the number of observations. The previous subsection used measurements at all locations by treating each scanline as independent; however, a shortcoming of this approach is that no information is propagated vertically, introducing streaky artifacts and reducing the model's ability to reason about occlusions and discontinuities.
Rather than introduce artificial independencies, the observation schedule in this section copes with
the O(n³) scaling by making measurements at only a sparse set of locations. Obvious ways of implementing this include choosing n locations either at random or in a grid pattern; however, these
fail to exploit information that can be readily obtained from both the image data and the current
predictions made by the model. Hence, we propose an active approach, similar to that in [14]: given
the first i − 1 observations, observe the point which maximally reduces the entropy of the GP [8]:
ΔH(x) = H( P(d | X_{i−1}) ) − H( P(d | X_{i−1} ∪ x) ) = −(1/2) log det Σ + (1/2) log det Σ′ + const.,   (11)
where Σ and Σ′ are the posterior covariances of the GP over all points in the image before and after making an observation at x. To compute the entire posterior for each observation would be prohibitively expensive; instead we approximate it by the product of the marginal distributions at each
Figure 3: Predictions after sparse active observation schedule. This figure shows the predictions made by the GP model with observations at 1000 image locations for the images used in Fig. 2. (a) Mean predicted disparity μ̂(x); (b) Predictive uncertainty v̂(x); (c) Inferred segmentation.
point (i.e., ignoring off-diagonal elements in Σ), which gives ΔH(x) ≈ (1/2)( log v̂(x) − log v(x) ), where v̂(x) is the predicted variance from (6) and v(x) is the measurement variance. Since the logarithm is monotonic, an equivalent utility function is used:
U(x | X_{i−1}) = v̂(x) / v(x).   (12)
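A minimal sketch of the resulting selection step, assuming a per-pixel array layout:

import numpy as np

def next_observation(v_hat, v_obs, observed):
    # utility (12): ratio of predictive to measurement variance, skipping
    # pixels that have already been observed
    utility = np.where(observed, -np.inf, v_hat / v_obs)
    return int(np.argmax(utility))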
Here the numerator drives the system to make observations at points with greatest predictive uncertainty. However, this is balanced by the denominator to avoid making observations at points where
there is no information to be obtained from the data (e.g., the textureless regions in Fig. 2). To initialize the active algorithm, 64 initial observations are made in an evenly spaced grid over the image.
Following this, points are selected using the utility function (12) and incorporated into the GP model
using Algorithm 1.
Predicting disparity in the scanline factorization was straightforward because a segmentation label
had been assigned to every pixel. With sparse measurements, only the observation points have been
labelled, and to predict disparity at an arbitrary location a segmentation label must also be inferred. Our simple strategy for this is to label a point according to whichever gives the least predictive variance,
i.e.:
s(x) = arg min_{j∈{F,B,O}} v̂(x; s(x) = j).   (13)
Fig. 3 shows the results of using this active observation schedule with n = 1000 for the images of
Fig. 2. As expected, by restoring 2D spatial coherence the results are smoother and have none of the
streaky artifacts induced by the scanline factorization. Despite observing only 1.3% of the points
used by the scanline factorization, the active algorithm has still managed to capture the important
features in the scenes. Fig. 4a shows the locations of the n observation points; the observations are
clustered around the boundary of the foreground object in an attempt to minimize the uncertainty
at discontinuities/occlusions; the algorithm is dedicating its computational resources to the parts of
the image which are most interesting, important and informative. Fig. 4b demonstrates further the
benefits of selecting the points in an active framework compared to taking points at random.
Figure 4: Advantage of active point selection. (a) The inferred segmentation from Fig. 3 with spots (blue) corresponding to observation locations selected by the active criterion. (b) Percentage of mislabelled points versus the number of observations n (0-2000), for the active and random schedules. This plot compares the accuracy of the segmentation against the number of sparse observations when the observation locations are chosen at random and using our active schedule. Accuracy is measured as the percentage of mislabelled pixels compared to a hand-labelled ground truth segmentation. The active strategy achieves better accuracy with fewer observations.
Figure 5: Improved segmentation by fusion with colour. (a) Pixel-wise energy term V(x) combining segmentation predictions from both the switched GP posterior and a colour model; (b) Segmentation returned by the Viterbi algorithm. This contains 0.5% labelling errors by area. (c) Inferred foreground image pixels.
3.3
Adding colour information
The best segmentation accuracies using stereo information alone are around 1% labelling errors (with n ≈ 1000). In [5], superior segmentation results are achieved by incorporating colour information. We do the same here by computing a foreground "energy" V(x) at each location, based on the variances predicted by the foreground/background layers and a known colour distribution P(F | L_c(x)), where L_c(x) is the RGB colour of the left image at x:
V(x) = log v̂(x; s(x) = B) − log v̂(x; s(x) = F) − log P(F | L_c(x)).   (14)
We represent the colour distribution using a 10 × 10 × 10 bin histogram in red-green-blue colour space. Fig. 5a shows this energy for the first image in Fig. 2. As in [5], we treat each scanline as a binary HMM and use the Viterbi algorithm to find a segmentation. A result of this is shown in Fig. 5c, which contains 0.58% erroneous labels. This is comparable to the errors in [5], which are around 0.25% for this image. We suspect that our result can be improved with a more sophisticated colour model.
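A minimal binary Viterbi sketch for one scanline; treating large V as favouring the foreground state, and the transition penalty switch_cost, are assumptions not specified in the text:

import numpy as np

def viterbi_scanline(V, switch_cost=2.0):
    # states: 0 = background (unary energy 0), 1 = foreground (unary energy -V[x])
    T = len(V)
    unary = np.stack([np.zeros(T), -np.asarray(V, float)], axis=1)
    cost = unary[0].copy()
    back = np.zeros((T, 2), dtype=int)
    pair = switch_cost * (np.arange(2)[:, None] != np.arange(2)[None, :])
    for t in range(1, T):
        trans = cost[:, None] + pair           # trans[i, j]: arrive in j from i
        back[t] = np.argmin(trans, axis=0)
        cost = trans[back[t], np.arange(2)] + unary[t]
    labels = np.empty(T, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels                              # 1 marks foreground pixels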
4
Discussion
We have proposed a Gaussian process model for disparity, switched by a latent segmentation variable. We call this a switched Gaussian process and have proposed an incremental greedy algorithm
for fitting this model to data and inferring a segmentation. We have demonstrated that by using
a sparse model with points selected according to an active learning criterion, an accuracy can be
achieved that is comparable to the state of the art [5].
We believe there are four key strengths to this probabilistic framework:
Flexibility The incremental nature of the algorithm makes it possible to set the number of observations n according to time or quality constraints.
Extensibility This method is probabilistic so fusion with other sources of information is possible
(e.g., laser range scanner on a robot).
Efficiency For small n, this approach is very fast (about 30 ms per pair of images for n = 200 on a 3 GHz PC). However, higher quality results require n > 1000 observations, which reduces the execution speed to a few seconds per image.
Accuracy We have shown that (for large n) this technique achieves an accuracy comparable to the
state of the art.
Future work will investigate the use of approximate techniques to overcome the O(n³) scaling problem [15]. The framework described in this paper can operate in real time for low n; however, any technique that combats the scaling will allow higher accuracy for the same execution time. Also,
improving the approximation to the likelihood in (5), e.g., by expectation propagation [16], may
increase accuracy.
References
[1] D. Comaniciu and P. Meer. Robust analysis of feature spaces: color image segmentation. In Proc. Conf. Computer Vision and Pattern Recognition, pages 750-755, 1997.
[2] Y. Ohta and T. Kanade. Stereo by intra- and inter-scanline search using dynamic programming. IEEE Trans. on Pattern Analysis and Machine Intelligence, 7(2):139-154, 1985.
[3] D. Geiger, B. Ladendorf, and A. Yuille. Occlusions and binocular stereo. Int. J. Computer Vision, 14:211-226, 1995.
[4] V. Kolmogorov and R. Zabih. Computing visual correspondence with occlusions using graph cuts. In Proc. Int. Conf. Computer Vision, 2001.
[5] V. Kolmogorov, A. Criminisi, A. Blake, G. Cross, and C. Rother. Bi-layer segmentation of binocular stereo video. In Proc. Conf. Computer Vision and Pattern Recognition, 2005.
[6] F. Sinz, J. Quiñonero-Candela, G.H. Bakir, C.E. Rasmussen, and M.O. Franz. Learning depth from stereo. In Pattern Recognition, Proc. 26th DAGM Symposium, pages 245-252, 2004.
[7] R. Hartley and A. Zisserman. Multiple View Geometry. Cambridge University Press, 2000.
[8] D.J.C. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):589-603, 1992.
[9] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[10] A. Storkey. Gaussian processes for switching regimes. In Proc. ICANN, 1998.
[11] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Computer Vision, 47(1):7-42, 2002.
[12] L. Matthies, R. Szeliski, and T. Kanade. Incremental estimation of dense depth maps from image sequences. In Proc. Conf. Computer Vision and Pattern Recognition, 1988.
[13] M. Gibbs and D.J.C. MacKay. Efficient implementation of Gaussian processes. Technical report, University of Cambridge, 1997.
[14] M. Seeger, C.K.I. Williams, and N. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Proc. AI-STATS, 2003.
[15] J. Quiñonero-Candela and C.E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. J. Machine Learning Research, 6:1939-1959, 2005.
[16] T.P. Minka. Expectation propagation for approximate Bayesian inference. In Proc. UAI, pages 362-369, 2001.
Hidden Markov Dirichlet Process: Modeling Genetic
Recombination in Open Ancestral Space
Eric P. Xing
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Kyung-Ah Sohn
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We present a new statistical framework called hidden Markov Dirichlet process
(HMDP) to jointly model the genetic recombinations among a possibly infinite number of founders and the coalescence-with-mutation events in the resulting genealogies. The HMDP posits that a haplotype of genetic markers is generated by
a sequence of recombination events that select an ancestor for each locus from
an unbounded set of founders according to a 1st-order Markov transition process.
Conjoining this process with a mutation model, our method accommodates both
between-lineage recombination and within-lineage sequence variations, and leads
to a compact and natural interpretation of the population structure and inheritance
process underlying haplotype data. We have developed an efficient sampling algorithm for HMDP based on a two-level nested Pólya urn scheme. On both simulated
and real SNP haplotype data, our method performs competitively or significantly
better than extant methods in uncovering the recombination hotspots along chromosomal loci; and in addition it also infers the ancestral genetic patterns and offers
a highly accurate map of ancestral compositions of modern populations.
1
Introduction
Recombinations between ancestral chromosomes during meiosis play a key role in shaping the patterns of linkage disequilibrium (LD), the non-random association of alleles at different loci, in a population. When a recombination occurs between two loci, it tends to decouple the alleles carried at those loci in its descendants and thus reduce LD; uneven occurrence of recombination events along chromosomal regions during genetic history can lead to "block structures" in molecular genetic polymorphisms, such that within each block only a low level of diversity is present
in a population. The problem of inferring chromosomal recombination hotspots is essential for
understanding the origin and characteristics of genome variations; several combinatorial and statistical approaches have been developed for uncovering optimum block boundaries from single
nucleotide polymorphism (SNP) haplotypes [Daly et al., 2001; Anderson and Novembre, 2003;
Patil et al., 2001; Zhang et al., 2002], and these advances have important applications in genetic
analysis of disease propensities and other complex traits. The deluge of SNP data also fuels the
long-standing interest of analyzing patterns of genetic variations to reconstruct the evolutionary
history and ancestral structures of human populations, using, for example, variants of admixture
models on genetic polymorphisms [Rosenberg et al., 2002]. This progress notwithstanding, the statistical methodologies developed so far mostly deal with LD analysis and ancestral inference separately, using specialized models that do not capture the close statistical and genetic relationships of these two problems. Moreover, most of these approaches ignore the inherent uncertainty in the genetic complexity (e.g., the number of genetic founders of a population) of the data and
rely on inflexible models built on a pre-fixed, closed genetic space. Recently, Xing et al. [2004;
2006] have developed a nonparametric Bayesian framework for modeling genetic polymorphisms
based on the Dirichlet process mixtures and extensions, which attempts to allow more flexible control over the number of genetic founders than has been provided by the statistical methods proposed
thus far. In this paper, we build on this approach and present a unified framework to model the complex genetic inheritance process that allows recombinations among a possibly infinite set of founding alleles
and coalescence-with-mutation events in the resulting genealogies.
[Figure 1 here: founder haplotypes A1, A2, A3, . . . , AK over loci 1-5, . . . , passed through an inheritance of unknown generations to modern haplotypes Hi.]
Figure 1: An illustration of a hidden Markov Dirichlet process for haplotype recombination and
inheritance.
We assume that individual chromosomes in a modern population originated from an unknown number of ancestral haplotypes via biased random recombinations and mutations (Fig 1). The recombinations between the ancestors follow a state-transition process we refer to as a hidden Markov
Dirichlet process (originated from the infinite HMM by Beal et al. [2001]), which travels in an open
ancestor space, with nonstationary recombination rates depending on the genetic distances between
SNP loci. Our model draws inspiration from the HMM proposed in [Greenspan and Geiger, 2003],
but we employ a two-level Pólya urn scheme akin to the hierarchical DP [Teh et al., 2004] to accommodate an open ancestor space, and allow full posterior inference of the recombination sites,
mutation rates, haplotype origin, ancestor patterns, etc., conditioning on phased SNP data, rather
than estimating them using information theoretic or maximum likelihood principles. On both simulated and real genetic data, our model and algorithm show competitive or superior performance on
a number of genetic inference tasks over the state-of-the-art parametric methods.
2
Hidden Markov Dirichlet Process for Recombination
Sequentially choosing recombination targets from a set of ancestral chromosomes can be modeled
as a hidden Markov process [Niu et al., 2002; Greenspan and Geiger, 2003], in which the hidden
states correspond to the index of the candidate chromosomes, the transition probabilities correspond
to the recombination rates between the recombining chromosome pairs, and the emission model
corresponds to a mutation process that passes the chosen chromosome region in the ancestors to the descendants. When the number of ancestral chromosomes is not known, it is natural to consider an
HMM whose state space is countably infinite [Beal et al., 2001; Teh et al., 2004]. In this section,
we describe such an infinite HMM formalism, which we would like to call hidden Markov Dirichlet
process, for modeling recombination in an open ancestral space.
2.1 Dirichlet Process mixtures
For self-containedness, we begin with a brief recap of the basic Dirichlet process mixture model
proposed in Xing et al. [2004] for haplotype inheritance without recombination. A haplotype
refers to the joint allele configuration of a contiguous list of SNPs located on a single chromosome
(Fig 1). Under a well-known genetic model known as coalescence-with-mutation (but without
recombination), one can treat a haplotype from a modern individual as a descendant of an unknown
ancestor haplotype (i.e., a founder) via random mutations that alter the allelic states of some
SNPs. It can be shown that such a coalescent process in an infinite population leads to a partition
of the population that can be succinctly captured by the following Pólya urn scheme. Consider an urn that at the outset contains a ball of a single color. At each step we either draw a ball from the urn and replace it with two balls of the same color, or we are given a ball of a new color which we place in the urn. One can see that such a scheme leads to a partition of the balls according to their color. Letting a parameter α define the probabilities of the two types of draws, and viewing each (distinct) color as a sample from Q0, and each ball as a sample from Q, Blackwell and MacQueen [1973] showed that this Pólya urn model yields samples whose distributions are those of the marginal probabilities under the Dirichlet process. One can associate a mixture
component with colors in the P?olya urn model, and thereby define a ?clustering? of the data. The
resulting model is known as a DP mixture. Note that a DP mixture requires no prior specification of the number of components. Back to haplotype modeling, following Xing et al. [2004;
2006], let H_i = [H_{i,1}, . . . , H_{i,T}] denote a haplotype over T SNPs from chromosome i¹; let A_k = [A_{k,1}, . . . , A_{k,T}] denote an ancestor haplotype (indexed by k) and θ_k denote the mutation rate of ancestor k; and let C_i denote an inheritance variable that specifies the ancestor of haplotype H_i. As described in Xing et al. [2006], under a DP mixture, we have the following Pólya urn
¹We ignore the parental origin index of haplotypes as used in Xing et al. [2004], and assume that the paternal and maternal haplotypes of each individual are given unambiguously (i.e., phased, as known in genetics), as is the case in many LD and haplotype-block analyses. But it is noteworthy that our model can generalize straightforwardly to unphased genotype data by incorporating a simple genotype model as in Xing et al. [2004].
scheme for sampling modern haplotypes:

• Draw the first haplotype:
    a_1 | DP(τ, Q_0) ∼ Q_0(·),   sample the 1st founder;
    h_1 ∼ P_h(·|a_1, θ_1),       sample the 1st haplotype from an inheritance model defined on the 1st founder.

• For subsequent haplotypes:
  - Sample the founder indicator for the ith haplotype:
        p(c_i = c_j for some j < i | c_1, . . . , c_{i-1}) = n_{c_j} / (i - 1 + τ),
        p(c_i ≠ c_j for all j < i | c_1, . . . , c_{i-1}) = τ / (i - 1 + τ),
    where n_{c_i} is the occupancy number of class c_i (the number of previous samples belonging to class c_i).
  - Sample the founder of haplotype i (indexed by c_i):
        θ_{c_i} | DP(τ, Q_0) = {a_{c_j}, θ_{c_j}}   if c_i = c_j for some j < i (i.e., c_i refers to an inherited founder);
        θ_{c_i} | DP(τ, Q_0) ∼ Q_0(a, θ)            if c_i ≠ c_j for all j < i (i.e., c_i refers to a new founder).
  - Sample the haplotype according to its founder:
        h_i | c_i ∼ P_h(·|a_{c_i}, θ_{c_i}).
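As a concrete illustration of these urn dynamics, the following minimal Python sketch (ours, not from the paper; the function name and integer founder labels are our own conventions) draws the founder indicators c_1, . . . , c_n under the Pólya urn with concentration τ:

import random

def sample_founder_indicators(n, tau, rng=random.Random(0)):
    # Draw c_1..c_n from the Polya urn scheme of a DP mixture.
    counts = {}   # n_c: occupancy number of each founder class c
    c = []
    for i in range(1, n + 1):
        r = rng.random() * (i - 1 + tau)
        acc, chosen = 0.0, None
        for label, n_c in counts.items():
            acc += n_c            # p(c_i = c_j) = n_{c_j} / (i - 1 + tau)
            if r < acc:
                chosen = label
                break
        if chosen is None:        # p(new founder) = tau / (i - 1 + tau)
            chosen = len(counts)  # instantiate a new founder label
        counts[chosen] = counts.get(chosen, 0) + 1
        c.append(chosen)
    return c

print(sample_founder_indicators(10, tau=1.0))

Each new founder label would then be paired with a draw {a, θ} ∼ Q_0, and each haplotype with a draw from the inheritance model P_h.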
Notice that the above generative process assumes that each modern haplotype originates from a single ancestor; this is only plausible for haplotypes spanning a short region on a chromosome. Now
we consider long haplotypes possibly bearing multiple ancestors due to recombinations between an
unknown number of founders.
2.2 Hidden Markov Dirichlet Process (HMDP)
In a standard HMM, state-transitions across a discrete time- or space-interval take place in a fixed-dimensional state space; thus it can be fully parameterized by, say, a K-dimensional initial-state probability vector and a K × K state-transition probability matrix. As first proposed in Beal et
al. [2001], and later discussed in Teh et al. [2004], one can ?open? the state space of an HMM
by treating the now infinite number of discrete states of the HMM as the support of a DP, and the
transition probabilities to these states from some source as the masses associated with these states.
In particular, for each source state, the possible transitions to the target states need to be modeled
by a unique DP. Since all possible source states and target states are taken from the same infinite
state space, overall we need an open set of DPs with different mass distributions on the SAME
support (to capture the fact that different source states can have different transition probabilities to
any target state). In the sequel, we describe such a nonparametric Bayesian HMM using an intuitive
hierarchical Pólya urn construction. We call this model a hidden Markov Dirichlet process.
In an HMDP, both the columns and rows of the transition matrix are infinite dimensional. To construct such a stochastic matrix, we will exploit the fact that in practice only a finite number of states (although we don't know what they are) will be visited by each source state, and we only need to keep track of these states. The following sampling scheme, based on a hierarchical Pólya urn scheme, captures this spirit and yields a constructive definition of the HMDP.
We set up a single ?stock? urn at the top level, which contains balls of colors that are represented
by at least one ball in one or multiple urns at the bottom level. At the bottom level, we have a set
of distinct urns which are used to define the initial and transition probabilities of the HMDP model (and are therefore referred to as HMM-urns). Specifically, one of the HMM urns, u_0, is set aside to hold
colored balls to be drawn at the onset of the HMM state-transition sequence. Each of the remaining
HMM urns is painted with a color represented by at least one ball in the stock urn, and is used
to hold balls to be drawn during the execution of a Markov chain of state-transitions. Now let?s
suppose that at time t the stock urn contains n balls of K distinct colors indexed by an integer set
C = {1, 2, . . . , K}; the number of balls of color k in this urn is denoted by nk , k ? C. P
For urn u0
and urns u1 , . . . , uK , let mj,k denote the number of balls of color k in urn uj , and mj = k?C mj,k
denote the total number of balls in urn uj . Suppose that at time t ? 1, we had drawn a ball with
color k 0 . Then at time t, we either draw a ball randomly from urn uk0 , and place back two balls both
of that color; or with probability mj?+? we turn to the top level. From the stock urn, we can either
draw a ball randomly and put back two balls of that color to the stock urn and one to uk0 , or obtain
?
a ball of a new color K + 1 with probability n+?
and put back a ball of this color to both the stock
0
urn and urn uk of the lower level. Essentially, we have a master DP (the stock urn) that serves as
a base measure for an infinite number of child DPs (HMM-urns). As pointed out in Teh et al. [2004],
this model can be viewed as an instance of the hierarchical Dirichlet process mixture model.
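A minimal sketch of one state-transition draw under this two-level scheme is given below (our illustration, assuming integer color labels and dictionary-based ball counts; the function names are hypothetical):

import random

def weighted_choice(counts, rng):
    # Pick a key with probability proportional to its count.
    r = rng.random() * sum(counts.values())
    acc = 0.0
    for k, v in counts.items():
        acc += v
        if r < acc:
            return k
    return k  # guard against floating-point round-off

def hmdp_next_state(prev, n, m, tau, gamma, rng=random.Random(0)):
    # n: stock-urn counts {color: balls}; m[j]: counts in HMM urn j.
    urn = m.setdefault(prev, {})
    m_total = sum(urn.values())
    if rng.random() * (m_total + tau) < m_total:
        k = weighted_choice(urn, rng)       # draw inside urn u_prev
    else:                                   # escalate to the stock urn
        n_total = sum(n.values())
        if rng.random() * (n_total + gamma) < n_total:
            k = weighted_choice(n, rng)     # existing color
        else:
            k = max(n, default=-1) + 1      # brand-new color K+1
        n[k] = n.get(k, 0) + 1              # one ball back to the stock urn
    urn[k] = urn.get(k, 0) + 1              # one ball back to u_prev
    return k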
As discussed in Xing et al. [2006], associating each color k with an ancestor configuration φ_k = {a_k, θ_k} whose values are drawn from the base measure F ≜ Beta(θ)p(a), and conditioning on the Dirichlet process underlying the stock urn, the samples in the jth bottom-level urn are also distributed as marginals under a Dirichlet measure:
    φ_{m_j} | φ_{−m_j} ∼ Σ_{k=1}^{K} [ (m_{j,k} + τ n_k/(n − 1 + γ)) / (m_j − 1 + τ) ] δ_{φ_k}(φ_{m_j}) + [ τ/(m_j − 1 + τ) ][ γ/(n − 1 + γ) ] F(φ_{m_j})
                       = Σ_{k=1}^{K} π_{j,k} δ_{φ_k}(φ_{m_j}) + π_{j,K+1} F(φ_{m_j}),        (1)

where π_{j,k} ≜ (m_{j,k} + τ n_k/(n − 1 + γ)) / (m_j − 1 + τ) and π_{j,K+1} ≜ [ τ/(m_j − 1 + τ) ][ γ/(n − 1 + γ) ]. Let π_j ≜ [π_{j,1}, π_{j,2}, . . .]; now we have an
infinite-dimensional Bayesian HMM that, given F, τ, γ, and all initial states and transitions sampled so far, follows an initial-state distribution parameterized by π_0, and a transition matrix Π whose rows are defined by {π_j : j > 0}. As in Xing et al. [2006], we also introduce vague inverse Gamma priors for the concentration parameters τ and γ.
2.3 HMDP Model for Recombination and Inheritance
Now we describe a stochastic model, based on an HMDP, for generating individual haplotypes
in a modern population from a hypothetical pool of ancestral haplotypes via recombination and
mutations (i.e., random mating with neutral selection). For each modern chromosome i, let Ci =
[Ci,1 , . . . , Ci,T ] denote the sequence of inheritance variables specifying the index of the ancestral
chromosome at each SNP locus. When no recombination takes place during the inheritance process
that produces haplotype H_i (say, from ancestor k), then C_{i,t} = k for all t. When a recombination occurs, say, between loci t and t + 1, we have C_{i,t} ≠ C_{i,t+1}. We can introduce a Poisson point process to control the duration of non-recombinant inheritance. That is, given that C_{i,t} = k, then with probability e^{−dr} + (1 − e^{−dr})π_{kk}, where d is the physical distance between the two loci, r reflects the rate of recombination per unit distance, and π_{kk} is the self-transition probability of ancestor k defined by the HMDP, we have C_{i,t+1} = C_{i,t}; otherwise, the source state (i.e., ancestor chromosome k) pairs with a target state (e.g., ancestor chromosome k') between loci t and t + 1, with probability (1 − e^{−dr})π_{kk'}.
Hence, each haplotype H_i is a mosaic of segments of multiple ancestral chromosomes from the ancestral pool {A_{k,·}}_{k=1}^∞. Essentially, the model we described so far is a time-inhomogeneous infinite HMM. When the physical distance information between loci is not available, we can simply set r to be infinity so that we are back to a standard stationary HMDP model.
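Assuming the transition masses π_k of Eq. (1) are available as a dictionary, a single locus-to-locus draw of the inheritance variable could be sketched as follows (our illustration; all names are hypothetical):

import math, random

def next_inheritance(k, pi_k, d, r, rng=random.Random(0)):
    # pi_k maps ancestor indices to the HMDP transition masses for source k.
    lam = 1.0 - math.exp(-d * r)   # total recombination probability
    if rng.random() < (1.0 - lam) + lam * pi_k.get(k, 0.0):
        return k                   # C_{i,t+1} = C_{i,t}
    # recombination to a target k' != k, chosen proportionally to pi_{kk'}
    others = {kk: p for kk, p in pi_k.items() if kk != k}
    u = rng.random() * sum(others.values())
    acc = 0.0
    for kk, p in others.items():
        acc += p
        if u < acc:
            return kk
    return k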
The emission process of the HMDP corresponds to an inheritance model from an ancestor to the matching descendent. For simplicity, we adopt the single-locus mutation model in Xing et al. [2004]:

    p(h_t | a_t, θ) = θ^{I(h_t = a_t)} [ (1 − θ)/(|B| − 1) ]^{I(h_t ≠ a_t)},        (2)

where h_t and a_t denote the alleles at locus t of an individual haplotype and its corresponding ancestor, respectively; θ indicates the ancestor-specific mutation rate; and |B| denotes the number of possible alleles. As discussed in Liu et al. [2001], this model corresponds to a star genealogy resulting from infrequent mutations over a shared ancestor, and is widely used in statistical genetics as an approximation to a full coalescent genealogy. Following Xing et al. [2004], we assume that the mutation rate θ admits a Beta prior; the marginal conditional likelihood of a haplotype given its matching ancestor can then be computed by integrating out θ under Bayes' rule.
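For concreteness, here is a sketch (ours) of the emission probability of Eq. (2) and of the Beta-collapsed marginal log-likelihood for a segment with l matching and l' mismatching loci, assuming |B| > 1:

import math

def emission_prob(h, a, theta, B):
    # Eq. (2): single-locus mutation model.
    return theta if h == a else (1.0 - theta) / (B - 1)

def log_marginal_likelihood(l_match, l_mismatch, alpha_h, beta_h, B):
    # log p(segment | ancestor), with theta ~ Beta(alpha_h, beta_h) integrated out.
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return (log_beta(alpha_h + l_match, beta_h + l_mismatch)
            - log_beta(alpha_h, beta_h)
            - l_mismatch * math.log(B - 1))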
3 Posterior Inference
Now we proceed to describe a Gibbs sampling algorithm for posterior inference under HMDP. The
variables of interest include {Ci,t }, the inheritance variables specifying the origins of SNP alleles of
all loci on each haplotype; and {Ak,t }, the founding alleles at all loci of each ancestral haplotype.
The Gibbs sampler alternates between two sampling stages. First it samples the inheritance variables
{ci,t }, conditioning on all given individual haplotypes h = {h1 , . . . , h2N }, and the most recently
sampled configuration of the ancestor pool a = {a_1, . . . , a_K}; then given h and current values of the c_{i,t}'s, it samples every ancestor a_k.
To improve the mixing rate, we sample the inheritance variables one block at a time. That is, every time we sample ℓ consecutive states c_{t+1}, . . . , c_{t+ℓ} starting at a randomly chosen locus t + 1 along a haplotype. (For simplicity we omit the haplotype index i here and in the forthcoming expositions when it is clear from context that the statements or formulas apply to all individual haplotypes.) Let c_− denote the set of previously sampled inheritance variables. Let n denote the totality of occupancy records of the top-level DP (i.e., the "stock urn"), n ≜ {n} ∪ {n_k : ∀k}; and let m denote the totality of the occupancy records of the lower-level DPs (i.e., the urns corresponding to the recombination choices by each ancestor), m ≜ {m_k : ∀k} ∪ {m_{k,k'} : ∀k, k'}. And let l_k denote the sufficient statistics associated with all haplotype instances originated from ancestor k. The predictive distribution of an ℓ-block of inheritance variables can be written as:
    p(c_{t+1:t+ℓ} | c_−, h, a) ∝ p(c_{t+1:t+ℓ} | c_t, c_{t+ℓ+1}, m, n) · p(h_{t+1:t+ℓ} | a_{c_{t+1},t+1}, . . . , a_{c_{t+ℓ},t+ℓ})
                              ∝ Π_{j=t}^{t+ℓ} p(c_{j+1} | c_j, m, n) · Π_{j=t+1}^{t+ℓ} p(h_j | a_{c_j,j}, l_{c_j}).        (3)
This expression is simply Bayes' theorem, with p(h_{t+1:t+ℓ} | a_{c_{t+1},t+1}, . . . , a_{c_{t+ℓ},t+ℓ}) playing the role of the likelihood and p(c_{t+1:t+ℓ} | c_t, c_{t+ℓ+1}, m, n) playing the role of the prior. One should be careful that the sufficient statistics n, m and l employed here should exclude the contributions of samples associated with the ℓ-block to be sampled. Note that naively, the sampling space of an inheritance block of length ℓ is |A|^ℓ, where |A| represents the cardinality of the ancestor pool. However, if we assume that the recombination rate is low and the block length is not too big, then the probability of having two or more recombination events within an ℓ-block is very small and thus can be ignored. This approximation reduces the sampling space of the ℓ-block to O(|A|ℓ), i.e., |A| possible recombination targets times ℓ possible recombination locations. Accordingly, Eq. (3) reduces to:
    p(c_{t+1:t+ℓ} | c_−, h, a) ∝ p(c_{t'} | c_{t'−1} = c_t, m, n) · p(c_{t+ℓ+1} | c_{t+ℓ} = c_{t'}, m, n) · Π_{j=t'}^{t+ℓ} p(h_j | a_{c_{t'},j}, l_{c_{t'}})

for some t' ∈ [t + 1, t + ℓ]. Recall that in an HMDP model for recombination, given that the total recombination probability between two loci d units apart is λ ≜ 1 − e^{−dr} ≈ dr (assuming d and r are both very small), the transition probability from state k to state k' is:
    p(c_{t'} = k' | c_{t'−1} = k, m, n, r, d) =
        λ π_{k,k'} + (1 − λ) δ(k, k')   for k' ∈ {1, . . . , K}, i.e., transition to an existing ancestor;        (4)
        λ π_{k,K+1}                     for k' = K + 1, i.e., transition to a new ancestor,
where π_k represents the transition probability vector for ancestor k under the HMDP, as defined in Eq. (1). Note that when a new ancestor a_{K+1} is instantiated, we need to immediately instantiate a new DP under F to model the transition probabilities from this ancestor to all instantiated ancestors (including itself). Since the occupancy record of this DP, m_{K+1} := {m_{K+1}} ∪ {m_{K+1,k} : k = 1, . . . , K + 1}, is not yet defined at the onset, with probability 1 we turn to the top-level DP when departing from state K + 1 for the first time. Specifically, we define p(·|c_{t'} = K + 1) according to the occupancy record of ancestors in the stock urn. For example, at the distal border of the ℓ-block, since c_{t+ℓ+1} always indexes a previously inherited ancestor (and therefore must be present in the
stock-urn), we have:
    p(c_{t+ℓ+1} | c_{t+ℓ} = K + 1, m, n) = λ · n_{c_{t+ℓ+1}} / (n + γ).        (5)
Now we can substitute the relevant terms in Eq. (3) with Eqs. (4) and (5). The marginal likelihood term in Eq. (3) can be readily computed based on Eq. (2), by integrating out the mutation rate θ under a Beta prior (and also the ancestor a under a uniform prior if c_{t'} refers to an ancestor to be newly instantiated) [Xing et al., 2004]. Putting everything together, we have the proposal distribution for a block of inheritance variables. Upon sampling every c_t, we update the sufficient statistics n, m and {l_k} as follows. First, before drawing the sample, we erase the contribution of c_t to these sufficient statistics. In particular, if an ancestor then has no occupancy in either the stock or the HMM urns, we remove it from our repository. Then, after drawing a new c_t, we increment the relevant counts accordingly. In particular, if c_t = K + 1 (i.e., a new ancestor is to be drawn), we update n = n + 1, set n_{K+1} = 1, m_{c_t} = m_{c_t} + 1, m_{c_t,K+1} = 1, and set up a new (empty) HMM urn with color K + 1 (i.e., instantiating m_{K+1} with all elements equal to zero).
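Putting the pieces of Eqs. (3)-(5) together, a sketch of the O(|A|ℓ) proposal enumeration and candidate draw is given below (our illustration; `trans_logp` stands for Eqs. (4)-(5), `emit_logp` for the collapsed emission term, and we also score the prefix loci before t' under the incumbent ancestor c_t):

import math

def block_proposal_scores(t, ell, c_t, c_after, trans_logp, emit_logp, ancestors):
    # Candidates (t0, k): one recombination at locus t0 switching to ancestor k.
    scores = {}
    for t0 in range(t + 1, t + ell + 1):
        for k in ancestors:
            s = trans_logp(c_t, k) + trans_logp(k, c_after)
            s += sum(emit_logp(j, c_t) for j in range(t + 1, t0))
            s += sum(emit_logp(j, k) for j in range(t0, t + ell + 1))
            scores[(t0, k)] = s
    return scores

def sample_candidate(scores, rng):
    m = max(scores.values())
    weights = {key: math.exp(v - m) for key, v in scores.items()}
    u = rng.random() * sum(weights.values())
    acc = 0.0
    for key, w in weights.items():
        acc += w
        if u < acc:
            return key
    return key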
Now we move on to sample the founders {a_{k,t}}, following the same proposal given in Xing et al. [2006], which is adapted below for completeness:
    p(a_{k,t} | c, h) ∝ Π_{i,t: c_{i,t}=k} p(h_{i,t} | a_{k,t}) = [ Γ(α_h + l_{k,t}) Γ(β_h + l'_{k,t}) ] / [ Γ(α_h + β_h + l_{k,t} + l'_{k,t}) (|B| − 1)^{l'_{k,t}} ] · R(α_h, β_h),        (6)
where l_{k,t} is the number of allelic instances originating from ancestor k at locus t that are identical to the ancestor, when the ancestor has the pattern a_{k,t}; and l'_{k,t} = Σ_i I(c_{i,t} = k | a_{k,t}) − l_{k,t} represents the complement. If k is not represented previously, we can just set l_{k,t} and l'_{k,t} both to zero. Note that
when sampling a new ancestor, we can only condition on a small segment of an individual haplotype.
To instantiate a complete ancestor, after sampling the alleles in the ancestor corresponding to the
segment according to Eq. (6), we first fill in the rest of the loci with random alleles. When another
segment of an individual haplotype needs a new ancestor, we do not naively create a new full-length ancestor; rather, we use the empty slots (those with random alleles) of one of the previously
instantiated ancestors, if any, so that the number of ancestors does not grow unnecessarily.
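A sketch (ours) of the collapsed founder update of Eq. (6), assuming a Beta(α_h, β_h) mutation prior and a finite allele alphabet:

import math

def founder_allele_logpost(alphabet, descendant_alleles, alpha_h, beta_h):
    # Unnormalized log posterior over a_{k,t}; descendant_alleles are the
    # h_{i,t} with c_{i,t} = k.
    B = len(alphabet)
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    logp = {}
    for a in alphabet:
        l = sum(1 for h in descendant_alleles if h == a)   # l_{k,t}
        lp = len(descendant_alleles) - l                   # l'_{k,t}
        logp[a] = (log_beta(alpha_h + l, beta_h + lp)
                   - log_beta(alpha_h, beta_h)
                   - lp * math.log(B - 1))
    return logp

print(founder_allele_logpost(["A", "G"], ["A", "A", "G"], 1.0, 1.0))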
4 Experiments
We applied the HMDP model to both simulated and real haplotype data. Our analyses focus on
the following three popular problems in statistical genetics: (1) Ancestral Inference: estimating the number of founders in a population and reconstructing the ancestor haplotypes; (2) LD-block Analysis: inferring the recombination sites in each individual haplotype and uncovering population-level recombination hotspots in the chromosome region; (3) Population Structural Analysis: mapping the genetic origins of all loci of each individual haplotype in a population.
Figure 2: Analysis of simulated haplotype populations. (a) A comparison of ancestor reconstruction errors for the five ancestors (indexed
along the x-axis). The vertical lines show ±1 standard deviation over 30
populations. (b) A plot of the empirical recombination rates along 100
SNP loci in one of the 30 populations. The dotted lines show the prespecified recombination hotspots. (c) The true (panel 1) and estimated
(panel 2 for HMDP, and panel 3-5 for 3 HMMs) population maps of
ancestral compositions in a simulated population. Figures were generated using the software distruct from Rosenberg et al [2002].
4.1 Analyzing simulated haplotype populations
To simulate a population of individual haplotypes, we started with a fixed number, K_s (unknown to the HMDP model), of randomly generated ancestor haplotypes, on each of which a set of recombination hotspots is (randomly) pre-specified. Then we applied a hand-specified recombination process, defined by a K_s-dimensional HMM, to the ancestor haplotypes to generate N_s individual haplotypes, via sequentially recombining segments of different ancestors according to
the simulated HMM states at each locus, and mutating certain ancestor SNP alleles according to
the emission model. At the hotspots, we defined the recombination rate to be 0.05, otherwise it
is 0.00001. Each individual was forced to have at least one recombination. Overall, 30 datasets
each containing 100 individuals (i.e., 200 haplotypes) with 100 SNPs were generated from K_s = 5 ancestor haplotypes. As baseline models, we also implemented 3 standard fixed-dimensional HMMs, with 3, 5 (the true number of ancestors for the simulated data) and 10 hidden states, respectively.
Ancestral Inference Using HMDP, we successfully recovered the correct number (i.e., K = 5)
of ancestors in 21 out of 30 simulated populations; for the remaining 9 populations, we inferred 6
ancestors. From samples of the ancestor states {a_{k,t}}, we reconstructed the ancestral haplotypes under the HMDP model. For comparison, we also inferred the ancestors under the 3 standard HMMs using an EM algorithm. We define the ancestor reconstruction error ε_a for each ancestor to be the ratio of incorrectly recovered loci over all the chromosomal sites. The average ε_a over 30 simulated populations under the 4 different models is shown in Fig 2a. In particular, the average reconstruction errors of HMDP for each of the five ancestors are 0.026, 0.078, 0.116, 0.168, and 0.335,
respectively. There is a good correlation between the reconstruction quality and the population
frequency of each ancestor. Specifically, the average (over all simulated populations) fraction of SNP loci originated from each ancestor among all loci in the population is 0.472, 0.258, 0.167, 0.068 and 0.034, respectively. As one would expect, the higher the population frequency of an ancestor, the better its reconstruction accuracy. Interestingly, under the fixed-dimensional HMM, even
when we use the correct number of ancestor states, i.e., K = 5, the reconstruction error is still
very high (Fig 2), typically 2.5 times or higher than the error of HMDP. We conjecture that this is
because the non-parametric Bayesian treatment of the transition rates and ancestor configurations
under the HMDP model leads to a desirable adaptive smoothing effect and also less constraints
on the model parameters, which allows them to be more accurately estimated. Under a parametric setting, by contrast, parameter estimation can easily become sub-optimal due to a lack of appropriate smoothing or prior constraints, or a deficiency of the learning algorithm (e.g., the local optimality of EM).
Figure 3: Analysis of the Daly data. (a) A plot of λ_e estimated via HMDP, and the haplotype block boundaries according to HMDP (black solid line), HMM [Daly et al., 2001] (red dotted line), and MDL [Anderson and Novembre, 2003] (blue dashed line). (b) IT scores for haplotype blocks from each method.
LD-block Analysis From samples of the inheritance variables {c_{i,t}} under HMDP, we can infer the recombination status of each locus of each haplotype. We define the empirical recombination rate λ_e at each locus to be the ratio of individuals who had recombinations at that locus over the total number of haploids in the population. Fig 2b shows a plot of λ_e in one of the 30 simulated populations. We can identify the recombination hotspots directly from such a plot based on an empirical threshold λ_t (i.e., λ_t = 0.05). For comparison, we also give the true recombination hotspots (depicted as dotted vertical lines) chosen in the ancestors for simulating the recombinant population. The inferred hotspots (i.e., the λ_e peaks) show reasonable agreement with the reference.
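Computing λ_e and calling hotspots from posterior samples of {c_{i,t}} is a simple counting exercise; a sketch (ours, with hypothetical names) is:

def empirical_recombination_rates(c):
    # c[i][t]: sampled inheritance variable of haplotype i at locus t.
    n_hap, T = len(c), len(c[0])
    rates = [0.0] * T
    for t in range(1, T):
        flips = sum(1 for i in range(n_hap) if c[i][t] != c[i][t - 1])
        rates[t] = flips / n_hap
    return rates

def call_hotspots(rates, threshold=0.05):
    return [t for t, rate in enumerate(rates) if rate >= threshold]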
Population Structural Analysis Finally, from samples of the inheritance variables {c_{i,t}}, we can also uncover the genetic origins of all loci of each individual haplotype in a population. For each individual, we define an empirical ancestor composition vector η_e, which records the fractions of every ancestor among all the c_{i,t}'s of that individual. Fig 2c displays a population map constructed from the η_e's of all individuals. In the population map, each individual is represented by a thin vertical line which is partitioned into colored segments in proportion to the ancestral fractions recorded by η_e. Five population maps, corresponding to (1) true ancestor compositions, (2) ancestor compositions inferred by HMDP, and (3-5) ancestor compositions inferred by HMMs with 3, 5, 10 states, respectively, are shown in Fig 2c. To assess the accuracy of our estimation, we calculated the distance between the true ancestor compositions and the estimated ones as the mean squared distance between the true and the estimated η_e over all individuals in a population, and then over all 30 simulated
map is 0.190, whereas the distance between HMM-map and true map is 0.319, significantly worse
than that of HMDP even though the HMM is set to have the true number of ancestral states (i.e.,
K = 5). Because of dimensionality incompatibility and apparent dissimilarity to the true map for
other HMMs (i.e., K = 3 and 10), we forgo the above quantitative comparison for these two cases.
4.2 Analyzing two real haplotype datasets
We applied HMDP to two real haplotype datasets, the single-population Daly data [Daly et al.,
2001], and the two-population (CEPH: Utah residents with northern/western European ancestry;
and YRI: Yoruba in Ibadan and Nigeria) HapMap data [Thorisson et al., 2005]. These data consist
of trios of genotypes, so most of the true haplotypes can be directly inferred from the genotype data.
We first analyzed the 256 individuals from the Daly data. We compared the recovered recombination
hotspots with those reported in Daly et al. [2001] (which is based on an HMM employing different
number of states at different chromosome segments) and in Anderson and Novembre [2003] (which
is based on a minimal description length (MDL) principle). Fig. 3a shows the plot of empirical
recombination rates estimated under HMDP, side-by-side with the reported recombination hotspots.
There is no ground truth to judge which one is correct; hence we computed information-theoretic
(IT) scores based on the estimated within-block haplotype frequencies and the between-block transition probabilities under each model for a comparison. The left panel of Fig 3b shows the total
pairwise mutual information between adjacent haplotype blocks segmented by the recombination
hotspots uncovered by the three methods. The right panel shows the average entropies of haplotypes
within each block. The number above each bar denotes the total number of blocks. The pairwise
mutual information score of the HMDP block structure is similar to that of the Daly structure, but
smaller than that of MDL. Similar tendencies are observed for average entropies. Note that the Daly
and the MDL methods allow the number of haplotype founders to vary across blocks to get the most
compact local ancestor constructions. Thus their reported scores might be an underestimate of the
true global score because certain segments of an ancestor haplotype that are not or rarely inherited
are not counted in the score. Thus the low IT scores achieved by HMDP suggest that HMDP can
effectively avoid inferring spurious global and local ancestor patterns. This is confirmed by the population map shown in Fig 4a, which shows that HMDP recovered 6 ancestors and among them the 3
dominant ancestors account for 98% of all the modern haplotypes in the population.
Figure 4: The estimated population maps: (a) Daly data. (b) HapMap data.
The HapMap data contains 60 individuals from CEPH and 60 from YRI. We applied HMDP to
the union of the populations, with a random individual order. The two-population structure is
clearly retrieved from the population map constructed from the population composition vectors ?e
for every individual. As seen in Fig. 4b, the left half of the map clearly represents the CEPH
population and the right half the YRI population. We found that the two dominant haplotypes
covered over 85% of the CEPH population (and the overall breakup among all four ancestors is 0.5618, 0.3036, 0.0827, 0.0518). On the other hand, the frequencies of each ancestor in the YRI population are 0.2141, 0.1784, 0.3209, 0.1622, 0.1215 and 0.0029, showing that the YRI population is much
5 Conclusion
We have proposed a new Bayesian approach for jointly modeling genetic recombination among a possibly infinite number of founding alleles and coalescence-with-mutation events in the resulting genealogies. By incorporating a hierarchical DP prior on the stochastic matrix underlying an HMM, which facilitates a well-defined transition process over an infinite ancestor space, our proposed method can efficiently infer a number of important genetic variables, such as recombination hotspots, mutation rates, haplotype origins, and ancestor patterns, jointly under a unified statistical framework.
Empirically, on both simulated and real data, our approach compares favorably to its parametric counterpart, a fixed-dimensional HMM (even when the number of its hidden states, i.e., the ancestors, is correctly specified), and a few other specialized methods, on ancestral inference, haplotype-block uncovering and population structural analysis. We are interested in further investigating the behavior of an alternative scheme based on reverse-jump MCMC over Bayesian HMMs with different latent states in comparison with HMDP; and we intend to apply our methods to genome-scale
LD and demographic analysis using the full HapMap data. While our current model employs only
phased haplotype data, it is straightforward to generalize it to unphased genotype data as provided
by the HapMap project. HMDP can also be easily adapted to many engineering and information
retrieval contexts such as object and theme tracking in open space. Due to space limit, we left out
some details of the algorithms and more results of our experiments, which are available in the full
version of this paper [Xing and Sohn, 2006].
References
[Anderson and Novembre, 2003] E. C. Anderson and J. Novembre. Finding haplotype block boundaries by using the minimum-description-length principle. Am J Hum Genet, 73:336–354, 2003.
[Beal et al., 2001] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems 13, 2001.
[Blackwell and MacQueen, 1973] D. Blackwell and J. B. MacQueen. Ferguson distributions via Pólya urn schemes. Annals of Statistics, 1:353–355, 1973.
[Daly et al., 2001] M. J. Daly, J. D. Rioux, S. F. Schaffner, T. J. Hudson, and E. S. Lander. High-resolution haplotype structure in the human genome. Nature Genetics, 29(2):229–232, 2001.
[Greenspan and Geiger, 2003] D. Greenspan and D. Geiger. Model-based inference of haplotype block variation. In Proceedings of RECOMB 2003, 2003.
[Liu et al., 2001] J. S. Liu, C. Sabatti, J. Teng, B. J. B. Keats, and N. Risch. Bayesian analysis of haplotypes for linkage disequilibrium mapping. Genome Res., 11:1716–1724, 2001.
[Niu et al., 2002] T. Niu, S. Qin, X. Xu, and J. Liu. Bayesian haplotype inference for multiple linked single nucleotide polymorphisms. American Journal of Human Genetics, 70:157–169, 2002.
[Patil et al., 2001] N. Patil, A. J. Berno, D. A. Hinds, et al. Blocks of limited haplotype diversity revealed by high-resolution scanning of human chromosome 21. Science, 294:1719–1723, 2001.
[Rosenberg et al., 2002] N. A. Rosenberg, J. K. Pritchard, J. L. Weber, H. M. Cann, K. K. Kidd, L. A. Zhivotovsky, and M. W. Feldman. Genetic structure of human populations. Science, 298:2381–2385, 2002.
[Teh et al., 2004] Y. Teh, M. I. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Technical Report 653, Department of Statistics, University of California, Berkeley, 2004.
[Thorisson et al., 2005] G. A. Thorisson, A. V. Smith, L. Krishnan, and L. D. Stein. The International HapMap Project web site. Genome Research, 15:1591–1593, 2005.
[Xing et al., 2004] E. P. Xing, R. Sharan, and M. I. Jordan. Bayesian haplotype inference via the Dirichlet process. In Proceedings of the 21st International Conference on Machine Learning, 2004.
[Xing et al., 2006] E. P. Xing, K.-A. Sohn, M. I. Jordan, and Y. W. Teh. Bayesian multi-population haplotype inference via a hierarchical Dirichlet process mixture. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[Xing and Sohn, 2006] E. P. Xing and K.-A. Sohn. Hidden Markov Dirichlet process: Modeling genetic recombination in open ancestral space. Bayesian Analysis, to appear, 2007.
[Zhang et al., 2002] K. Zhang, M. Deng, T. Chen, M. Waterman, and F. Sun. A dynamic programming algorithm for haplotype block partitioning. Proc. Natl. Acad. Sci. USA, 99(11):7335–7339, 2002.
2,165 | 2,966 | Comparative Gene Prediction using Conditional
Random Fields
Jade P. Vinson*†
[email protected]
David DeCaprio*
[email protected]
Stacey Luoma
[email protected]
Matthew D. Pearson
[email protected]
James E. Galagan
[email protected]
The Broad Institute of MIT and Harvard
Cambridge, MA 02142
Abstract
Computational gene prediction using generative models has reached a plateau,
with several groups converging to a generalized hidden Markov model (GHMM)
incorporating phylogenetic models of nucleotide sequence evolution. Further improvements in gene calling accuracy are likely to come through new methods that
incorporate additional data, both comparative and species specific. Conditional
Random Fields (CRFs), which directly model the conditional probability P (y|x)
of a vector of hidden states conditioned on a set of observations, provide a unified framework for combining probabilistic and non-probabilistic information and
have been shown to outperform HMMs on sequence labeling tasks in natural language processing.
We describe the use of CRFs for comparative gene prediction. We implement a
model that encapsulates both a phylogenetic-GHMM (our baseline comparative
model) and additional non-probabilistic features. We tested our model on the
genome sequence of the fungal human pathogen Cryptococcus neoformans. Our
baseline comparative model displays accuracy comparable to the best available
gene prediction tool for this organism. Moreover, we show that discriminative
training and the incorporation of non-probabilistic evidence significantly improve
performance.
Our software implementation, Conrad, is freely available with an open source
license at http://www.broad.mit.edu/annotation/conrad/.
1 Introduction
Gene prediction is the task of labeling nucleotide sequences to identify the location and components
of genes (Figure 1). The accurate automated prediction of genes is essential to both downstream
bioinformatic analyses and the interpretation of biological experiments. Currently, the most accurate approach to computational gene prediction is generative modeling. In this approach, one
models the joint probability of the hidden gene structure y and the observed nucleotide sequence x.
The model parameters ? are chosen to maximize the joint probability of the training data. Given a
new set of observations x, one predicts genes by selecting the path of hidden labels y that maximizes
P r? (y, x). Several independent groups have converged on the same generative model: a phylogenetic generalized hidden Markov model with explicit state durations (phylo-GHMM) [1, 2, 3, 4].
* These authors contributed equally
† Current email: [email protected]
Figure 1: The processes of RNA transcription, intron splicing, and protein translation (panel A)
and a state diagram for gene structure (panel B). The mirror-symmetry reflects the fact that DNA is
double-stranded and genes occur on both strands. The 3-periodicity in the state diagram corresponds
to the translation of nucleotide triplets into amino acids.
Further improvements in accuracy are likely to come from the incorporation of additional biological
signals or new types of experimental data. However, incorporating each signal requires a handcrafted
modification which increases the complexity of the generative model: a new theoretical approach is
needed.
One approach to combining multiple sources of information for gene prediction, conditional maximum likelihood, was proposed in 1994 by Stormo and Haussler [5] and later implemented in the
program GAZE by Howe et al. [6, 7]. In this approach, one defines a Boltzmann distribution where
the probability of each hidden sequence is proportional to the exponential of a weighted sum of
different types of evidence. One then trains the weights to maximize the conditional probability
P rw (y|x) of the hidden sequence given the observations in the training data.
A related approach is the use of conditional random fields (CRFs), recently introduced in the context
of natural language processing [8]. Like the earlier work in gene prediction, CRFs assign a probability to each hidden sequence that is proportional to the exponential of a weighted sum, and the
weights are trained to maximize the conditional probability of the training data. The global convergence guarantee for training weights (Section 2.1 and [8]) is one of the strengths of this approach,
but was not noticed in the earlier work on gene prediction. In addition, CRFs were presented in a
more abstract framework and have since been applied in several domains.
Here, we apply chain-structured CRFs to gene prediction. We introduce a novel strategy for feature
selection, allowing us to directly incorporate the best existing generative models with additional
sources of evidence in the same theoretical framework. First, we use probabilistic features based
on generative models whenever well-developed models are available. In this way we instantiate
a phylo-GHMM as a variant of a CRF. Second, we add non-probabilistic features for information
that is not easily modeled generatively, such as alignments of expressed sequence tags (ESTs). We
developed Conrad, a gene predictor and highly optimized CRF engine. Conrad is freely available
with an open source license at http://www.broad.mit.edu/annotation/conrad/.
We applied Conrad to predict genes in the fungal human pathogen Cryptococcus neoformans. Our
baseline comparative model is as accurate as Twinscan [9, 10], the most accurate gene predictor
trained for C. neoformans. Training the weights of our model discriminatively further improves prediction accuracy, indicating that discriminatively trained models can outperform generatively trained
models on the same data. The addition of non-probabilistic features further improves prediction accuracy.
Figure 2: Graphical models for a first-order chain-structured conditional random field (panel A) and
a first-order hidden Markov model (panel B). The variables Yi are hidden states and the variables
Xi are observations. The unshaded node is not generated by the model.
2 Conditional Random Fields
Conditional random fields are a framework for expressing the conditional probability Pr(y|x) of a sequence of hidden states y = (y_1, y_2, . . . , y_n) given observations x [8]. The conditional probabilities assigned by a CRF are proportional to a weighted exponential sum of feature functions:
    Pr(y|x) = (1/Z_λ(x)) · exp( Σ_{i=1}^{n} Σ_{j∈J} λ_j f_j(y_{i−1}, y_i, x, i) ),        (1)

where Z_λ(x) is the normalizing constant or partition function, y_0 = start, and J is the collection of features. The conditional probabilities can be viewed as a Boltzmann distribution where the pairwise energy between two positions i − 1 and i is a weighted sum of the feature functions f_j. See Figure 2.
The feature functions f_j(y_{i−1}, y, x, i) can be any real-valued functions defined for all possible hidden states y_{i−1} and y, observations x, and positions i. For example, the value of the feature function at position i might depend on the value of the observations x at a distant position, allowing one to capture long-range interactions with a CRF. Varying the weights λ of a CRF, we obtain a family of conditional probabilities Pr_λ(y|x).

An alternative viewpoint comes from reversing the order of summation in Equation 1 and expressing the conditional probability using feature sums F_j that depend on the entire hidden sequence y:
    Pr(y|x) = (1/Z_λ(x)) · exp( Σ_{j∈J} λ_j F_j(y, x) ),   where   F_j(y, x) = Σ_{i=1}^{n} f_j(y_{i−1}, y_i, x, i).        (2)
Some of the theoretical properties of CRFs, such as global convergence of weight training, can be derived using only the feature sums F_j. These theoretical derivations also apply to generalizations of CRFs, such as semi-Markov CRFs [11], in which one modifies the formula expressing the feature sums F_j in terms of the feature functions f_j.
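As an illustration of Eqs. (1)-(2), the sketch below (ours) scores a labeling by its weighted feature sums and normalizes by brute-force enumeration; this is for tiny examples only, since in practice the partition function is computed by dynamic programming:

import itertools, math

def log_score(y, x, features, weights):
    # Weighted feature-sum score: sum_j lambda_j F_j(y, x).
    prev, s = "start", 0.0
    for i, cur in enumerate(y, start=1):
        s += sum(w * f(prev, cur, x, i) for f, w in zip(features, weights))
        prev = cur
    return s

def conditional_prob(y, x, features, weights, labels):
    logZ = math.log(sum(math.exp(log_score(list(yy), x, features, weights))
                        for yy in itertools.product(labels, repeat=len(y))))
    return math.exp(log_score(y, x, features, weights) - logZ)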
2.1 Inference and Training
Given a CRF and observations x, the inference problem is to determine the sequence of hidden states with the highest conditional probability, y_max = argmax_y Pr(y|x). For a linear-chain CRF, each feature function f_j depends only on pairs of adjacent hidden states, and there is an efficient Viterbi algorithm for solving the inference problem.
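A minimal sketch of that Viterbi recursion for a linear-chain CRF (ours, not Conrad's implementation):

def viterbi(x, n, labels, features, weights):
    # Highest-scoring label sequence of length n under the CRF score.
    def pot(prev, cur, i):   # pairwise log-potential at position i
        return sum(w * f(prev, cur, x, i) for f, w in zip(features, weights))
    best = {y: pot("start", y, 1) for y in labels}
    back = []
    for i in range(2, n + 1):
        new, ptr = {}, {}
        for cur in labels:
            p = max(labels, key=lambda z: best[z] + pot(z, cur, i))
            new[cur], ptr[cur] = best[p] + pot(p, cur, i), p
        best = new
        back.append(ptr)
    y = [max(best, key=best.get)]
    for ptr in reversed(back):
        y.append(ptr[y[-1]])
    return list(reversed(y))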
Given training data (~y , x), a CRF is trained in two steps. In the first step, free parameters associated
with individual feature functions are fixed using the training data. The training methods can be
specific to each feature.
In the second step, the weights λ are selected to maximize the conditional log-likelihood:

    λ_max = argmax_λ log( Pr_λ(y|x) ).

The log-likelihood is a concave function of λ (its Hessian is the negative covariance matrix of the random variables F_j relative to the Boltzmann distribution). Thus, various iterative methods, such as a gradient-based function optimizer [12], are guaranteed to converge to a global maximum. Using the weights obtained by training, the resulting probability distribution Pr_λ(·|x) is the maximum entropy distribution subject to the constraints that the expected value of each feature sum F_j is equal to its value in the training data.
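The gradient of this objective is the difference between the empirical and the model-expected feature sums; the sketch below (ours) computes it by brute-force enumeration for tiny sequences, whereas a real implementation such as Conrad's would use dynamic-programming recursions inside a gradient solver like LBFGS:

import itertools, math

def cll_gradient(y_true, x, features, weights, labels):
    # d/dlambda_j log Pr(y_true|x) = F_j(y_true, x) - E[F_j(y, x)].
    def F(y):
        prev, sums = "start", [0.0] * len(features)
        for i, cur in enumerate(y, start=1):
            for j, f in enumerate(features):
                sums[j] += f(prev, cur, x, i)
            prev = cur
        return sums
    seqs = [list(t) for t in itertools.product(labels, repeat=len(y_true))]
    Fs = [F(yy) for yy in seqs]
    logps = [sum(w * fj for w, fj in zip(weights, fs)) for fs in Fs]
    m = max(logps)
    ps = [math.exp(lp - m) for lp in logps]
    Z = sum(ps)
    expF = [sum(p * fs[j] for p, fs in zip(ps, Fs)) / Z
            for j in range(len(features))]
    return [ft - ef for ft, ef in zip(F(y_true), expF)]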
One can also train the weights of the CRF to maximize an alternative objective function. For example, one can maximize the expected value G_AOF(λ) = E_{Pr_λ}[S(y, y', x)] of the similarity between the actual hidden sequence y' and a random hidden sequence y selected according to Equation 1. This objective function can be optimized using standard gradient-based function optimizers. The gradient is

    ∂G_AOF(λ)/∂λ_j = Cov_{Pr_λ}( S(y, y', x), F_j(y, x) ).

Global convergence is not guaranteed because the objective function is not necessarily concave. When the similarity function S can be defined in terms of a purely local comparison between the actual hidden sequence and a random hidden sequence, as in S(y, y', x) = Σ_{i=1}^{n} s(y_{i−1}, y_i, y'_{i−1}, y'_i, x, i), the gradient can be computed efficiently using dynamic programming; this is the level of generality we implemented in Conrad. In this paper we consider the simplest possible alternate objective function, where the local similarity function is 1 at position i if the hidden sequence is correct and 0 otherwise. In this case the alternate objective function is just the expected number of correctly predicted positions.
2.2 Expressing an HMM as a CRF
Any conditional probability Pr(y|x) that can be implicitly expressed using an HMM [13] can also be expressed using a CRF. Indeed, the HMM and its corresponding CRF form a generative-discriminative pair [14]. For example, a first-order HMM with transition matrix T, emission matrix B, and initial hidden state probabilities π assigns the joint probability

    Pr(y, x) = π_{y_1} · Π_{i=1}^{length−1} T_{y_i,y_{i+1}} · Π_{i=1}^{length} B_{y_i,x_i}.
Given an observation sequence x, the conditional probabilities implied by this HMM can be expressed as a CRF by defining the following three features and setting all weights to 1.0:

    f_π(z, w, x, i) = log(π_w) if z = start and i = 1, and 0 otherwise;
    f_T(z, w, x, i) = log(T_{z,w}) if i > 1, and 0 otherwise;
    f_B(z, w, x, i) = log(B_{w,x_i}).
Hidden Markov models can be extended in various directions. One of the most important extensions
for gene prediction is to explicitly model state durations: for many species the lengths of some components are tightly constrained, such as introns in fungi. The extensions of HMMs to generalized
HMMs (GHMMs) and CRFs to semi-Markov CRFs [11] are straightforward but omitted for clarity.
3 Our Model
The core issue in designing a CRF is the selection of feature functions. The approach usually
taken in natural language processing is to define thousands or millions of features, each of which
are indicator functions: 0 most of the time and 1 in specific circumstances. However, for gene
prediction there are well-developed probabilistic models that can serve as a starting point in the
design of a CRF. We propose a new approach to CRF feature selection with the following guiding
principle: use probabilistic models for feature functions when possible and add non-probabistic
features only where necessary. The CRF training algorithm determines the relative contributions of
these features through discriminative training, without having to assume independence between the
features or explicitly model dependencies.
Our approach to gene prediction is implemented as Conrad, a highly configurable Java executable.
The CRF engine for Conrad uses LBFGS as the gradient solver for training [12, 15, 16] and is
highly optimized for speed and memory usage. Because the CRF engine is a separate module with
a well-defined interface for feature functions, it can also be used for applications other than gene
prediction.
3.1 The baseline comparative model: a phylogenetic GHMM
Phylogenetic generalized hidden Markov models are now the standard approach to gene prediction
using generative models [1, 2, 3, 4], and capture many of the signals for resolving gene structure (e.g.
splice models or phylogenetic models of nucleotide sequence evolution). We define probabilistic
features that, when taken together with weights 1.0, reproduce the phylo-GHMM that we refer to as
our baseline comparative model.
Our baseline comparative model is based on the state diagram of Figure 1, enforces the basic gene
constraints (e.g. open reading frames and GT-AG splice junctions), explicitly models intron length
using a mixture model, and uses a set of multiply aligned genomes (including the reference genome
to be annotated) as observations. The model comprises 29 feature functions of which the following
are representative:

    f_1 = δ(y_{i−1} = exon2 & y_i = intron2) · log( Pr(x_{i−3} . . . x_{i+5}) ),
          using a splice donor model trained by maximum likelihood;
    f_2 = δ(y_i = exon3) · log( Pr(multiple alignment column | reference nucleotide) ),
          using a phylogenetic evolutionary model trained by ML.

3.2 Non-probabilistic features
For many signals useful in resolving gene structure (e.g. protein homology, ESTs, CPG islands, or
chromatin methylation), a probabilistic model is elusive or is difficult to incorporate in the existing
framework. To explore the addition of non-probablistic evidence, we introduce two groups of feature
functions, both using 0-1 indicator functions. The first group of feature functions is based on the
alignment of expressed sequence tags (ESTs) to the reference genome (ESTs are the experimentally
determined sequences of randomly sampled mRNA; see Figure 1):
    f_{EST,1} = δ(y_i = exon & an EST aligns at position i),
    f_{EST,2} = δ(y_i = intron & position i is in the gap of an EST alignment).
The second group of feature functions is based on the presence of gaps in the multiple alignment,
indicative of insertions or deletions (indels) in the evolutionary history of one of the aligned species.
Indels are known to be relevant to gene prediction: evolution preserves the functions of most genes
and an indel whose length is not a multiple of three would disrupt the translation of a protein. Thus, indels
not a multiple of three provide evidence against a position being part of an exon. We introduce the
features
    f_{GAP,1} = δ(y_i = exon & an indel of length 0 mod 3 has a boundary at position i),
    f_{GAP,2} = δ(y_i = exon & an indel of length 1 or 2 mod 3 has a boundary at position i),
plus the four analogous features for introns and intergenic regions.
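A sketch of such indicator features, assuming the EST alignments and alignment-gap boundaries have been precomputed into per-position boolean arrays (the arrays and function names are our own conventions, not Conrad's API):

def make_est_features(est_aligned, est_gap):
    def f_est1(z, w, x, i):
        return 1.0 if (w == "exon" and est_aligned[i - 1]) else 0.0
    def f_est2(z, w, x, i):
        return 1.0 if (w == "intron" and est_gap[i - 1]) else 0.0
    return [f_est1, f_est2]

def make_gap_feature(gap_boundary, label):
    # gap_boundary[i]: True where an indel of the relevant length class
    # (mod 3) has a boundary at position i; label: the state it fires on.
    def f_gap(z, w, x, i):
        return 1.0 if (w == label and gap_boundary[i - 1]) else 0.0
    return f_gap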
For both classes of evidence, no satisfying probabilistic models exist. For example, the most systematic attempt at incorporating multiple alignment gaps in a generative model is [17], but this model
only represents the case of phylogenetically simple, non-overlapping gaps.
4 Results
We evaluated our model using the genome of fungal human pathogen Cryptococcus neoformans
strain JEC21 [18]. C. neoformans is an ideal test case due to the availability of genomes for four
closely related strains for use as comparative data and a high-quality manual annotation with deep
EST sequencing.
To determine an absolute benchmark, we compared our baseline comparative model to Twinscan [9,
10], the most accurate comparative gene predictor trained for C. neoformans. Because Twinscan
was an input to the manual annotation, we evaluated the accuracy of both predictors by comparing
to the alignments of ESTs (which are independent of both predictors) along an entire chromosome
(chr 9). At the locations containing both an EST and a gene prediction, the accuracy of our model is
comparable to (or better than) that of Twinscan. See Table 1.
Table 1: Comparing the prediction accuracy of our baseline comparative model with that of Twinscan. Accuracy statistics are collected at loci where an EST overlaps with a gene prediction.
    Metric                       Baseline Comparative Model    Twinscan
    Nucleotide sensitivity (%)   99.71                         98.35
    Nucleotide specificity (%)   99.26                         99.56
    Splice sensitivity (%)       94.51                         93.93
    Splice specificity (%)       95.80                         93.20
Figure 3: Gene prediction accuracy increases with additional features and with the training of feature
weights. All models were trained with the alternate objective function (see text), with the exception
of models labeled ?weights fixed?. For the latter, feature weights were fixed at 1.0. Performance on
training data (dashed line), performance on testing data (solid lines). Each data point above is the
average of 10 cross-validation replicates.
We next measured the relative effects of different sets of features and methods for training the
feature weights. First, we created a set of 1190 trusted genes by selecting those genes which had
EST support along their entire length. We then performed 10 cross-validation replicates for several
combinations of a set of features, a method for training weights, and a training set size (50, 100,
200, 400, or 800 genes). For each set of replicates, we record the average nucleotide accuracy. See
Figure 3. As expected, the testing accuracy increases with larger training sets, while the training
accuracy decreases. Note that for these experiments, we do not explicitly model intron length.
4.1 Adding features improves accuracy
The effect of including additional features is shown in Figure 3. As can be seen in each case, model
accuracy improves as new evidence is added. For a 400 gene training set, adding the EST features
increases the accuracy of the baseline single species model from 89.0% to 91.7%. Adding the gap
features increases the accuracy of the baseline comparative model from 93.6% to 95.4%. Finally,
adding both types of evidence together increases accuracy more than either addition in isolation:
adding EST and gap features to the baseline comparative model increases accuracy from 93.6% to
97.0%. Ongoing work is focused on including many additional lines of evidence.
4.2 Training using an alternate objective function
The standard training of weights for CRFs seeks to maximize the conditional log probability of the
training data. However, this approach has limitations: one would like to use an objective function
that is closely related to evaluation criteria relevant to the problem domain. Previous work in natural
language processing found no accuracy benefit to changing the objective function [19]. However,
relative to the usual training to maximize conditional log-likelihood, we observed about 2% greater
nucleotide accuracy in testing data using models trained to maximize an alternative objective function (the expected nucleotide accuracy of a random sequence on training data). See Section 2.1.
The results shown in Figure 3 are all using this alternate objective function. For example, for a 400
gene training set, training the weights increases the accuracy of the baseline single species model
from 87.2% to 89% and the baseline comparative model from 90.9% to 93.6%.
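For reference, the two training criteria can be written schematically as follows; this paraphrases the standard conditional log-likelihood and the alternative expected-accuracy criterion sketched above (the exact form is defined in the authors' Section 2.1, which is not reproduced here):

$$\mathcal{L}_{\mathrm{CL}}(\lambda) = \sum_d \log p_\lambda\big(\mathbf{y}^{(d)} \mid \mathbf{x}^{(d)}\big), \qquad \mathcal{L}_{\mathrm{ACC}}(\lambda) = \sum_d \frac{1}{|\mathbf{y}^{(d)}|} \sum_i p_\lambda\big(y_i = y_i^{(d)} \mid \mathbf{x}^{(d)}\big).$$

The second objective rewards the posterior marginal probability of the correct label at each nucleotide, i.e., the expected per-nucleotide accuracy of a labeling sampled from the model.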
5 Concluding Remarks
CRFs are a promising framework for gene prediction. CRFs offer several advantages relative to
standard HMM-based gene prediction methods including the ability to capture long-range dependencies and to incorporate heterogeneous data within a single framework. We have implemented
a semi-Markov CRF by explicitly expressing a phylogenetic GHMM within a CRF framework and
extending this baseline with non-probabilisitic evidence. When used to predict genes in the fungal human pathogen C. neoformans, our model displays accuracy comparable to the best existing
gene prediction tools. Moreover, we show that incorporation of non-probabilistic evidence improves
performance.
The key issue in designing CRFs is the selection of feature functions, and our approach differs from
previous applications. We adopt the following guiding principle: we use probabilistic models as
features where possible and incorporate non-probabilistic features only when necessary. In contrast,
in natural language processing features are typically indicator functions. Our approach also differs
from an initial study of using CRFs for gene prediction [20], which does not use a probabilistic
model as the baseline.
CRFs offer a solution to an important problem in gene prediction: how to combine probabilistic
models of nucleotide sequences with additional evidence from diverse sources. Prior research in
this direction has focused on handcrafted heuristics for a particular type of feature [21], a
mixture-of-experts approach applied at each nucleotide position [22], or decision trees [23]. CRFs
offer an alternative approach in which probabilistic features and non-probabilistic evidence are both
incorporated in the same framework.
CRFs are applicable to other discriminative problems in bioinformatics. For example, CRFs can be
used to train optimal parameters for protein sequence alignment [24]. In all these examples, as with
gene predictions, CRFs provide the ability to incorporate supplementary evidence not captured in
current generative models.
Acknowledgement
This work has been supported by NSF grant number MCB-0450812. We thank Nick Patterson for
frequent discussions on generative probabilistic modeling. We thank Richard Durbin for recognizing
the connection to the earlier work by Stormo and Haussler. We thank the anonymous reviewers for
indicating which aspects of our work warranted more or less detail relative to the initial submission.
References
[1] Adam Siepel and David Haussler. Combining phylogenetic and hidden Markov models in biosequence analysis. J Comput Biol, 11(2-3):413–428, 2004.
[2] Jon D McAuliffe, Lior Pachter, and Michael I Jordan. Multiple-sequence functional annotation and the generalized hidden Markov phylogeny. Bioinformatics, 20(12):1850–1860, Aug 2004.
[3] Jakob Skou Pedersen and Jotun Hein. Gene finding with a hidden Markov model of genome structure and evolution. Bioinformatics, 19(2):219–227, Jan 2003.
[4] Randall H Brown, Samuel S Gross, and Michael R Brent. Begin at the beginning: predicting genes with 5′ UTRs. Genome Res, 15(5):742–747, May 2005.
[5] G. D. Stormo and D. Haussler. Optimally parsing a sequence into different classes based on multiple types of information. In Proc. of Second Int. Conf. on Intelligent Systems for Molecular Biology, pages 369–375, Menlo Park, CA, 1994. AAAI/MIT Press.
[6] Kevin L Howe, Tom Chothia, and Richard Durbin. GAZE: a generic framework for the integration of gene-prediction data by dynamic programming. Genome Res, 12(9):1418–1427, Sep 2002.
[7] Kevin L. Howe. Gene prediction using a configurable system for the integration of data by dynamic programming. PhD thesis, University of Cambridge, 2003.
[8] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[9] Aaron E Tenney, Randall H Brown, Charles Vaske, Jennifer K Lodge, Tamara L Doering, and Michael R Brent. Gene prediction and verification in a compact genome with numerous small introns. Genome Res, 14(11):2330–2335, Nov 2004.
[10] I Korf, P Flicek, D Duan, and M R Brent. Integrating genomic homology into gene structure prediction. Bioinformatics, 17 Suppl 1:140–148, 2001.
[11] S. Sarawagi and W. Cohen. Semi-Markov conditional random fields for information extraction. Proceedings of ICML, 2004.
[12] Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ci You Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(6):1190–1208, 1995.
[13] Lawrence Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. In Alex Waibel and Kai-Fu Lee, editors, Readings in Speech Recognition, pages 267–296. Morgan Kaufmann, San Mateo, 1990.
[14] Charles Sutton and Andrew McCallum. An introduction to conditional random fields for relational learning. In Lise Getoor and Ben Taskar, editors, Statistical Relational Learning. To appear.
[15] Hanna Wallach. Efficient training of conditional random fields. Master's thesis, University of Edinburgh, 2002.
[16] F. Sha and F. Pereira. Shallow parsing with conditional random fields. Technical Report CIS TR MS-CIS-02-35, University of Pennsylvania, 2003.
[17] Adam Siepel and David Haussler. Computational identification of evolutionarily conserved exons. In Proceedings of the 8th Annual International Conference, RECOMB 2004. ACM, 2004.
[18] Brendan J Loftus, Eula Fung, et al. The genome of the basidiomycetous yeast and human pathogen Cryptococcus neoformans. Science, 307(5713):1321–1324, Feb 2005.
[19] Yasemin Altun, Mark Johnson, and Thomas Hofmann. Investigating loss functions and optimization methods for discriminative learning of label sequences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing.
[20] Aron Culotta, David Kulp, and Andrew McCallum. Gene prediction with conditional random fields. Technical Report UM-CS-2005-028, University of Massachusetts, Amherst, April 2005.
[21] R F Yeh, L P Lim, and C B Burge. Computational inference of homologous gene structures in the human genome. Genome Res, 11(5):803–816, May 2001.
[22] Brona Brejova, Daniel G Brown, Ming Li, and Tomas Vinar. ExonHunter: a comprehensive approach to gene finding. Bioinformatics, 21 Suppl 1:i57–i65, Jun 2005.
[23] Jonathan E Allen and Steven L Salzberg. JIGSAW: integration of multiple sources of evidence for gene prediction. Bioinformatics, 21(18):3596–3603, Sep 2005.
[24] Chuong B. Do, Samuel S. Gross, and Serafim Batzoglou. CONTRAlign: discriminative training for protein sequence alignment. In Alberto Apostolico, Concettina Guerra, Sorin Istrail, Pavel A. Pevzner, and Michael S. Waterman, editors, RECOMB, volume 3909 of Lecture Notes in Computer Science, pages 160–174. Springer, 2006.
2,166 | 2,967 | Large Margin Multi-channel Analog-to-Digital
Conversion with Applications to Neural Prosthesis
Amit Gore and Shantanu Chakrabartty
Department of Electrical and Computer Engineering
Michigan State University
East Lansing, MI 48823
{goreamit,shantanu}@egr.msu.edu
Abstract
A key challenge in designing analog-to-digital converters for cortically implanted
prosthesis is to sense and process high-dimensional neural signals recorded by
the micro-electrode arrays. In this paper, we describe a novel architecture for
analog-to-digital (A/D) conversion that combines ΣΔ conversion with spatial
de-correlation within a single module. The architecture called multiple-input
multiple-output (MIMO) ΣΔ is based on a min-max gradient descent optimization of a regularized linear cost function that naturally lends itself to an A/D formulation. Using an online formulation, the architecture can adapt to slow variations in cross-channel correlations, observed due to relative motion of the micro-electrodes with respect to the signal sources. Experimental results with real
recorded multi-channel neural data demonstrate the effectiveness of the proposed
algorithm in alleviating cross-channel redundancy across electrodes and performing data-compression directly at the A/D converter.
1 Introduction
Design of cortically implanted neural prosthetic sensors (CINPS) is an active area of research in
the rapidly emerging field of brain machine interfaces (BMI) [1, 2]. The core technology used in
these sensors is the micro-electrode array (MEA), which facilitates real-time recording from thousands
of neurons simultaneously. These recordings are then actively processed at the sensor (shown in Figure 1) and transmitted to an off-scalp neural processor which controls the movement of a prosthetic
limb [1]. A key challenge in designing implanted integrated circuits (IC) for CINPS is to efficiently
process high-dimensional signals generated at the interface of micro-electrode arrays [3, 4]. Sensor arrays consisting of more than 1000 recording elements are common [5, 6] which significantly
increase the transmission rate at the sensor. A simple strategy of recording, parallel data conversion, and transmission of the recorded neural signals (at a sampling rate of 10 kHz) can easily exceed
the power dissipation limit of 80 mW/cm² determined by local heating of biological tissue [7]. In
addition to increased power dissipation, a high transmission rate also adversely affects the real-time
control of neural prosthesis [3].
One of the solutions that have been proposed by several researchers is to perform compression of
the neural signals directly at the sensor, to reduce its wireless transmission rate and hence its power
dissipation [8, 4]. In this paper we present an approach where de-correlation or redundancy elimination is performed directly at the analog-to-digital converter. It has been shown that neural cross-talk and
common-mode effects introduce unwanted redundancy at the output of the electrode array [4]. As
a result, neural signals typically occupy only a small sub-space within the high-dimensional space
spanned by the micro-electrode signals. An optimal strategy for designing a multi-channel analog-to-digital converter is to identify and operate within the sub-space spanned by the neural signals
and in the process eliminate cross-channel redundancy. To achieve this goal, in this paper we
Figure 1: Functional architecture of a cortically implanted neural prosthesis illustrating the interface of the data converter to the micro-electrode arrays and signal processing modules
propose to use large margin principles [10], which have been highly successful in high-dimensional
information processing [11, 10]. Our approach will be to formalize a cost function consisting of the L1 norm of the internal state vector, whose gradient updates naturally lend themselves to a digital time-series expansion. Within this framework the correlation distance between the channels will be minimized, which amounts to searching for signal spaces that are maximally separated from each other.
The architecture called the multiple-input multiple-output (MIMO) ΣΔ converter is the first reported
data conversion technique to embed large margin principles. The approach, however, is generic and
can be extended to designing higher-order ADCs. To illustrate the concept of MIMO A/D conversion,
the paper is organized as follows: section 2 introduces a regularization framework for the proposed
MIMO data converter and introduces the min-max gradient descent approach. Section 3 applies the
technique to simulated and recorded neural data. Section 4 concludes with final remarks and future
directions.
2 Regularization Framework and Generalized ΣΔ Converters
In this section we introduce an optimization framework for deriving MIMO ΣΔ converters. For the sake of simplicity we will first assume that the input to the converter is an $M$-dimensional vector $\mathbf{x} \in \mathbb{R}^M$ where each dimension represents a single channel in the multi-electrode array. It is also assumed that the vector $\mathbf{x}$ is stationary with respect to discrete time instances $n$. The validity and limitation of this assumption are explained briefly at the end of this section. Also denote a linear transformation matrix $A \in \mathbb{R}^{M \times M}$ and a regression weight vector $\mathbf{w} \in \mathbb{R}^M$. Consider the following optimization problem
$$\min_{\mathbf{w}} f(\mathbf{w}, A) \qquad (1)$$
where
$$f(\mathbf{w}, A) = |\mathbf{w}|^T \mathbf{1} - \mathbf{w}^T A \mathbf{x} \qquad (2)$$
and $\mathbf{1}$ represents a column vector whose elements are unity. The cost function in equation (2) consists of two factors: the first factor is an L1 regularizer which constrains the norm of the vector $\mathbf{w}$, and the second factor maximizes the correlation between the vector $\mathbf{w}$ and the input vector $\mathbf{x}$ transformed using a linear projection denoted by the matrix $A$. The choice of the L1 norm and the form of the cost function in equation (2) will become clear when we present the corresponding gradient update rule. To ensure that the optimization problem in equation (1) is well defined, the norm of the input vector will be assumed to be bounded, $\|\mathbf{x}\|_\infty \le 1$.
Under this boundedness condition, the closed-form solution to the optimization problem in equation (1) can be found to be $\mathbf{w}^* = \mathbf{0}$. From the perspective of A/D conversion, we will show that the iterative steps leading toward the solution of the optimization problem in equation (1) are more important than the final solution itself. Given an initial estimate of the state vector $\mathbf{w}[0]$, the online gradient descent step for
Figure 2: Architecture of the proposed first-order MIMO ΣΔ converter.
minimizing (1) at iteration $n$ is given by
$$\mathbf{w}[n] = \mathbf{w}[n-1] - \eta \frac{\partial f}{\partial \mathbf{w}} \qquad (3)$$
where $\eta > 0$ is defined as the learning rate. The choice of the L1 norm in the optimization function in equation (1) ensures that for $\eta > 0$ the iteration (3) exhibits oscillatory behavior around the solution $\mathbf{w}^*$. Combining equation (3) with equation (2), and using $\partial(|\mathbf{w}|^T\mathbf{1})/\partial\mathbf{w} = \mathrm{sgn}(\mathbf{w})$, the following recursion is obtained:
$$\mathbf{w}[n] = \mathbf{w}[n-1] + \eta\,(A\mathbf{x} - \mathbf{d}[n]) \qquad (4)$$
where
$$\mathbf{d}[n] = \mathrm{sgn}(\mathbf{w}[n-1]) \qquad (5)$$
and $\mathrm{sgn}(\mathbf{u})$ denotes an element-wise signum operation such that $\mathbf{d}[n] \in \{+1,-1\}^M$ represents a digital time-series. The iterations in (3) represent the recursion steps of $M$ first-order ΣΔ converters [9] coupled together by the linear transform $A$. If we assume that the norm of the matrix is bounded, $\|A\|_\infty \le 1$, it can be shown that $\|\mathbf{w}\|_\infty < 1 + \eta$. Following $N$ update steps, the recursion given by equation (4) yields
$$A\mathbf{x} - \frac{1}{N}\sum_{n=1}^{N} \mathbf{d}[n] = \frac{1}{\eta N}\big(\mathbf{w}[N] - \mathbf{w}[0]\big) \qquad (6)$$
which, using the bounded property of $\mathbf{w}$, asymptotically leads to
$$\frac{1}{N}\sum_{n=1}^{N} \mathbf{d}[n] \longrightarrow A\mathbf{x} \qquad (7)$$
as $N \to \infty$.
Therefore, consistent with the theory of ΣΔ conversion [9], the moving average of the vector digital sequence $\mathbf{d}[n]$ converges to the transformed input vector $A\mathbf{x}$ as the number of update steps $N$ increases. It can also be shown that $N$ update steps yield a digital representation that is $\log_2(N)$ bits accurate.
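As a sanity check of equations (4)-(7), a minimal simulation is sketched below. It is not the authors' hardware implementation; it simply iterates the coupled first-order ΣΔ recursion for a constant input vector and verifies that the running mean of the digital stream approaches $A\mathbf{x}$. The learning rate and sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
x = rng.uniform(-0.5, 0.5, size=M)       # bounded input, ||x||_inf <= 1
A = np.tril(rng.uniform(-0.2, 0.2, (M, M)), k=-1) + np.eye(M)
eta = 1.0                                 # learning rate of eq. (4)

w = np.zeros(M)                           # internal state vector, w[0] = 0
d_sum = np.zeros(M)
N = 2 ** 12                               # number of update steps
for n in range(N):
    d = np.sign(w)                        # eq. (5)
    d[d == 0] = 1.0                       # break ties so d is in {-1, +1}^M
    w += eta * (A @ x - d)                # eq. (4)
    d_sum += d

print("A @ x        :", A @ x)
print("mean of d[n] :", d_sum / N)        # eq. (7): approaches A @ x
print("max |error|  :", np.abs(d_sum / N - A @ x).max())
```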
2.1 Online adaptation and compression
The next step is to determine the form of the matrix $A$, which parameterizes the family of linear transformations spanning the signal space. The aim of optimizing $A$ is to find a multi-channel signal configuration in which the channels are maximally separated from each other. For this purpose we denote one channel as a reference relative to which all distances/correlations will be measured. This is unlike independent component analysis (ICA) based approaches [12], where the objective is to search for a maximally independent signal space including the reference channel. Even though several forms of the matrix $A = [a_{ij}]$ can be chosen, for reasons that will be discussed later in this paper the matrix $A$ is chosen to be lower triangular with $a_{ij} = 0$ for $i < j$ and $a_{ij} = 1$ for $i = j$. The choice of a lower triangular matrix ensures that $A$ is always invertible. It also implies that the first channel is unaffected by the proposed transform $A$ and will be the reference channel. The problem of compression or redundancy elimination is therefore to optimize the cross-elements
$a_{ij},\, i \neq j$, such that the cross-correlation terms in the optimization function given by equation (1) are minimized. This can be written as a min-max optimization criterion where an inner optimization performs analog-to-digital conversion, whereas the outer loop adapts the linear transform matrix $A$ so as to maximize the margin of separation between the respective signal spaces:
$$\max_{a_{ij},\, i \neq j} \Big( \min_{\mathbf{w}} f(\mathbf{w}, A) \Big) \qquad (8)$$
In conjunction with the gradient descent steps in equation (4), the update rule for the elements of $A$ follows a gradient ascent step given by
$$a_{ij}[n] = a_{ij}[n-1] - \rho\, u_i[n]\, x_j \quad \forall i > j \qquad (9)$$
where $\rho$ is a learning rate parameter. The update rule in equation (9) can be made amenable to hardware implementation by considering only the sign of the regression vector $\mathbf{w}[n]$ and the input vector $\mathbf{x}$:
$$a_{ij}[n] = a_{ij}[n-1] - \rho\, d_i[n]\, \mathrm{sign}(x_j) \quad \forall i > j. \qquad (10)$$
The update rule in equation (10) bears a strong resemblance to online update rules used in independent component analysis (ICA) [12, 13]. The difference with the proposed technique, however, is the integrated data conversion coupled with spatial decorrelation/compression. The output of the MIMO ΣΔ converter is a digital stream whose pulse density is proportional to the transformed input data vector:
$$\frac{1}{N}\sum_{n=1}^{N} \mathbf{d}[n] \longrightarrow A[n]\,\mathbf{x} \qquad (11)$$
By construction the MIMO converter produces a digital stream whose pulse-density contains only
non-redundant information. To achieve compression some of the digital channels can be discarded
(based on their relative energy criterion) and can also be shut down to conserve power. The original signal can be reconstructed from the compressed digital stream by applying an inverse transformation $A^{-1}$:
$$\hat{\mathbf{x}} = A[n]^{-1}\Big(\frac{1}{N}\sum_{n=1}^{N} \mathbf{d}[n]\Big). \qquad (12)$$
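Continuing the earlier simulation, the adaptation of the lower-triangular matrix $A$ (eq. 10) and the reconstruction (eq. 12) can be sketched in software. This is an illustrative model, not the analog VLSI realization: the learning rate, the mixing used to create correlated channels, and the clipping of $A$ (added to stay inside the ΣΔ stability region) are all choices made here, and for simplicity the final $A$ is used for the whole record, mirroring the slowly varying $A$ assumed above:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 2 ** 14
t = np.arange(N)
# Two slow sources mixed into M correlated channels, kept well inside (-1, 1).
S = np.vstack([np.sin(2 * np.pi * 0.002 * t), np.sin(2 * np.pi * 0.004 * t)])
X = 0.25 * rng.uniform(-1, 1, (M, 2)) @ S

A = np.eye(M)                           # lower triangular, unit diagonal
w = np.zeros(M)
eta, rho = 1.0, 1e-4
D = np.empty((M, N))
for n in range(N):
    d = np.sign(w); d[d == 0] = 1.0     # eq. (5)
    w += eta * (A @ X[:, n] - d)        # eq. (4)
    for i in range(1, M):               # eq. (10): strictly lower entries only
        A[i, :i] -= rho * d[i] * np.sign(X[:i, n])
        A[i, :i] = np.clip(A[i, :i], -0.2, 0.2)  # keep ||A|| bounded (stability)
    D[:, n] = d

# eq. (12): moving-average the digital streams, then invert A.
win = 64
avg = np.vstack([np.convolve(D[i], np.ones(win) / win, mode="same")
                 for i in range(M)])
x_hat = np.linalg.solve(A, avg)
print("per-channel reconstruction MSE:",
      np.mean((x_hat[:, win:-win] - X[:, win:-win]) ** 2, axis=1))
```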
An advantage of using a lower triangular form for the linear transformation matrix $A$, with its diagonal elements equal to unity, is that its inverse is always well-defined. Thus signal reconstruction using the output of the analog-to-digital converter is also always well defined. Since the transformation matrix $A$ is continually being updated, the information related to the linear transform also needs to be periodically transmitted to ensure faithful reconstruction at the external prosthetic controller. However, as with many naturally occurring signals, the underlying statistics of the multi-dimensional signal change more slowly than the signal itself. Therefore the transmission of the matrix $A$ needs to be performed at a relatively slower rate than the transmission of the compressed neural signals.
Similar to conventional ΣΔ conversion [9], the framework for MIMO ΣΔ can be extended to time-varying input vectors under a high-oversampling assumption [9]. For a MIMO A/D converter, the oversampling ratio (OSR) is defined by the ratio of the update frequency $f_s$ to the maximum Nyquist rate amongst all elements of the input vector $\mathbf{x}[n]$. The resolution of the MIMO ΣΔ is also determined by the OSR as $\log_2(\mathrm{OSR})$, and during the oversampling period the input signal vector can be assumed to be approximately stationary. For time-varying input vectors
Figure 3: Functional verification of the MIMO ΣΔ converter on artificially generated multi-channel data: (a) data presented to the MIMO ΣΔ converter; (b) analog representation of the digital output produced by the MIMO converter.
$\mathbf{x}[n] = \{x_j[n]\},\ j = 1, \ldots, M$, the matrix update equation (10) can be generalized after $N$ steps as
$$\frac{1}{N}\, a_{ij}[N] = -\frac{1}{N} \sum_{n=1}^{N} d_i[n]\, \mathrm{sgn}(x_j[n]) \quad \forall i > j. \qquad (13)$$
Thus, if the norm of the matrix $A$ is bounded, then asymptotically as $N \to \infty$ equation (13) implies that the cross-channel correlation between the digital output and the sign of the input signal approaches zero. This is similar to formulations in ICA where higher-order de-correlation is achieved using non-linear functions of random variables [12].
The architecture for the MIMO ΣΔ converter illustrating recursions (4) and (11) is shown in Figure 2. As shown in Figure 2, the regression vector $\mathbf{w}[n]$ within the framework of MIMO ΣΔ represents the output of the ΣΔ integrator. All the adaptation and linear transformation steps can be implemented in analog VLSI, with the adaptation steps implemented using either multiplying digital-to-analog converters or floating-gate synapses. Even though any channel can be chosen as
a reference channel, our experiments indicate that the channel with maximum cross-correlation and
maximum signal power serves as the best choice.
Figure 4: Reconstruction performance in terms of mean square error computed using artificial data
for different OSR
3 Results
The functionality of the proposed MIMO sigma-delta converter was verified using artificially generated data and real multi-channel recorded neural data. The first set of experiments used artificially generated 8-channel data. Figure 3(a) illustrates the multi-channel data, where each channel was obtained by random linear mixing of two sinusoids with frequencies of 20 Hz and 40 Hz. The multi-channel data was presented to a MIMO sigma-delta converter implemented in software. The equivalent analog representation of the pulse-density encoded digital stream was obtained using a moving-window averaging technique with window size equal to the oversampling ratio (OSR). The resultant analog representation of the ADC output is shown in Figure 3(b). It can be seen in the
figure that after the initial adaptation steps the output corresponding to the first two channels converges to the fundamental sinusoids, whereas the rest of the digital streams converge to an equivalent zero output. This simple experiment demonstrates the functionality of the MIMO sigma-delta converter in eliminating cross-channel redundancy. The first two digital streams were used to reconstruct the original recording using equation (12). Figure 4 shows the reconstruction error averaged over a time window of 2048 samples, showing that the error indeed converges to zero as the MIMO converter adapts. Figure 4 also shows the error curves for different OSRs. It can be seen that even though a better reconstruction error can be achieved by using a higher OSR, the adaptation procedure compensates for errors introduced due to low resolution. In fact, the reconstruction performance is optimal for intermediate OSRs.
Figure 5: Functional verification of the MIMO sigma-delta converter for multi-channel neural data:
(a) Original multichannel data (b) analog representation of digital output produced by the converter
The multi-channel experiments were repeated with eight-channel neural data recorded from the dorsal cochlear nucleus of adult guinea pigs. The data was recorded at a sampling rate of 20 kHz and at a resolution of 16 bits. Figure 5(a) shows a clip of the multi-channel recording for a duration of 0.5 seconds. It can be seen from the highlighted portion of Figure 5(a) that the data exhibits a high degree of cross-channel correlation. As in the first set of experiments, the MIMO converter eliminates spatial redundancy between channels, as shown by the analog representation of the reconstructed output
Figure 6: Reconstruction performance in terms of mean square error computed using neural data for
different OSR
in Figure 5(b). An interesting observation in this experiment is that even though the statistics of the input signals vary in time, as shown in Figure 5(a) and (b), the transformation matrix $A$ remains relatively stationary for the duration of the conversion, as illustrated by the reconstruction error graph in Figure 6. This validates the principle of operation of MIMO conversion, whereby the multi-channel neural recordings lie on a low-dimensional manifold whose parameters are relatively stationary with respect to the signal statistics.
Figure 7: Demonstration of common-mode rejection performed by MIMO ΣΔ: (a) original multi-channel signal at the input of the converter; (b) analog representation of the converter output; (c) a magnified clip of the output produced by the converter illustrating preservation of neural information.
The last set of experiments demonstrates the ability of the proposed MIMO converter to reject common-mode disturbances across all the channels. Rejection of common-mode signals is one of the most important requirements for processing neural signals, whose amplitudes range from 50 µV to 500 µV, whereas the common-mode interference resulting from EMG or electrical coupling can be as high as 10 mV [14]. Therefore most micro-electrode arrays use bio-potential amplifiers for enhancing the signal-to-noise ratio and common-mode rejection. For this set of experiments, the recorded neural data obtained from the previous experiment was contaminated by an additive 60 Hz sinusoidal interference of amplitude 1 mV. The results are shown in Figure 7, illustrating that the reference channel absorbs all the common-mode disturbance whereas the neural information is preserved in the other channels. In fact, it can be shown theoretically that the common-mode rejection ratio for the proposed MIMO ADC depends only on the OSR and is given by $20 \log_{10} \mathrm{OSR}$.
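As a quick numeric check of this relation (assuming the usual decibel reading of the $20\log_{10}$ formula):

```python
import numpy as np

# CMRR = 20 * log10(OSR); values printed in dB (an assumption about units).
for osr in (64, 256, 1024):
    print(f"OSR = {osr:4d}  ->  CMRR ~ {20 * np.log10(osr):5.1f} dB")
```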
4 Conclusion
In this paper we presented a novel MIMO analog-to-digital conversion algorithm with application to multi-channel neural prosthesis. The roots of the algorithm lie within the framework of large margin principles, where the data converter maximizes the relative distance between the signal spaces corresponding to different channels. Experimental results with real multi-channel neural data demonstrate the effectiveness of the proposed method in eliminating cross-channel redundancy and hence reducing the data throughput and power dissipation requirements of a multi-channel biotelemetry sensor. Several open questions need to be addressed as a continuation of this research, including the extension of the algorithm to second-order ΣΔ architectures, the embedding of kernels into the ADC formulation, and the reformulation of the update rule to perform ICA directly on the ADC.
Acknowledgments
This work is supported by a grant from the National Institutes of Health (R21NS047516-01A2). The
authors would also like to thank Prof. Karim Oweiss for providing multi-channel neural data for the
MIMO ADC experiments.
References
[1] P. R. Kennedy, R. A. Bakay, M. M. Moore, K. Adams, and J. Goldwaithe. Direct control of a computer from the human central nervous system. IEEE Trans Rehabil Eng, 8:198–202, 2000.
[2] J. Carmena, M. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. Dimitrov, P. Patil, C. S. Henriquez, and M. A. Nicolelis. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol., vol. 1, no. 2, pp. 193–208, Nov. 2003.
[3] G. Santhanam, S. I. Ryu, B. M. Yu, and K. V. Shenoy. High information transmission rates in a neural prosthetic system. In Soc. Neurosci., 2004, Program 263.2.
[4] K. Oweiss, D. Anderson, and M. Papaefthymiou. Optimizing signal coding in neural interface system-on-a-chip modules. IEEE Conf. on EMBS, pp. 2016–2019, Sept. 2003.
[5] K. Wise et al. Wireless implantable microsystems: high-density electronic interfaces to the nervous system. Proc. of the IEEE, vol. 92, no. 1, pp. 76–97, Jan. 2004.
[6] E. M. Maynard, C. T. Nordhausen, and R. A. Normann. The Utah intracortical electrode array: a recording structure for potential brain computer interfaces. Electroencephalogr Clin Neurophysiol, 102:228–239, 1997.
[7] T. M. Seese, H. Harasaki, G. M. Saidel, and C. R. Davies. Characterization of tissue morphology, angiogenesis, and temperature in adaptive response of muscle tissue to chronic heating. Lab Investigation, vol. 78, no. 12, pp. 1553–1562, Dec. 1998.
[8] R. R. Harrison. A low-power integrated circuit for adaptive detection of action potentials in noisy signals. In Proc. 25th Ann. Conf. IEEE EMBS, Cancun, Mexico, Sep. 2003, pp. 3325–3328.
[9] J. C. Candy and G. C. Temes. Oversampled methods for A/D and D/A conversion. In Oversampled Delta-Sigma Data Converters, pp. 1–29. Piscataway, NJ: IEEE Press, 1992.
[10] V. Vapnik. The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.
[11] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, vol. 7, pp. 219–269, 1996.
[12] A. Hyvärinen. Survey on independent component analysis. Neural Computing Surveys, 2:94–128, 1999.
[13] A. Celik, M. Stanacevic, and G. Cauwenberghs. Gradient flow independent component analysis in micropower VLSI. Adv. Neural Information Processing Systems (NIPS 2005), vol. 18. Cambridge: MIT Press, 2006.
[14] P. Mohseni and K. Najafi. A fully integrated neural recording amplifier with DC input stabilization. IEEE Transactions on Biomedical Engineering, vol. 51, no. 5, May 2004.
2,167 | 2,968 | Automated Hierarchy Discovery for Planning in
Partially Observable Environments
Laurent Charlin & Pascal Poupart
David R. Cheriton School of Computer Science
Faculty of Mathematics
University of Waterloo
Waterloo, Ontario
{lcharlin,ppoupart}@cs.uwaterloo.ca
Romy Shioda
Dept of Combinatorics and Optimization
Faculty of Mathematics
University of Waterloo
Waterloo, Ontario
[email protected]
Abstract
Planning in partially observable domains is a notoriously difficult problem. However, in many real-world scenarios, planning can be simplified by decomposing the
task into a hierarchy of smaller planning problems. Several approaches have been
proposed to optimize a policy that decomposes according to a hierarchy specified
a priori. In this paper, we investigate the problem of automatically discovering
the hierarchy. More precisely, we frame the optimization of a hierarchical policy
as a non-convex optimization problem that can be solved with general non-linear
solvers, a mixed-integer non-linear approximation or a form of bounded hierarchical policy iteration. By encoding the hierarchical structure as variables of the
optimization problem, we can automatically discover a hierarchy. Our method is
flexible enough to allow any parts of the hierarchy to be specified based on prior
knowledge while letting the optimization discover the unknown parts. It can also
discover hierarchical policies, including recursive policies, that are more compact
(potentially infinitely fewer parameters) and often easier to understand given the
decomposition induced by the hierarchy.
1 Introduction
Planning in partially observable domains is a notoriously difficult problem. However, in many realworld scenarios, planning can be simplified by decomposing the task into a hierarchy of smaller
planning problems. Such decompositions can be exploited in planning to temporally abstract subpolicies into macro actions (a.k.a. options). Pineau et al. [17], Theocharous et al. [22], and Hansen
and Zhou [10] proposed various algorithms that speed up planning in partially observable domains
by exploiting the decompositions induced by a hierarchy. However these approaches assume that a
policy hierarchy is specified by the user, so an important question arises: how can we automate the
discovery of a policy hierarchy? In fully observable domains, there exists a large body of work on
hierarchical Markov decision processes and reinforcement learning [6, 21, 7, 15] and several hierarchy discovery techniques have been proposed [23, 13, 11, 20]. However those techniques rely on
the assumption that states are fully observable to detect abstractions and subgoals, which prevents
their use in partially observable domains.
We propose to frame hierarchy and policy discovery as an optimization problem with variables
corresponding to the hierarchy and policy parameters. We present an approach that searches in the
space of hierarchical controllers [10] for a good hierarchical policy. The search leads to a difficult
non-convex optimization problem that we tackle using three approaches: generic non-linear solvers,
a mixed-integer non-linear programming approximation or an alternating optimization technique
that can be thought as a form of hierarchical bounded policy iteration. We also generalize Hansen
and Zhou?s hierarchical controllers [10] to allow recursive controllers. These are controllers that
may recursively call themselves, with the ability of representing policies with a finite number of
parameters that would otherwise require infinitely many parameters. Recursive policies are likely
to arise in language processing tasks such as dialogue management and text generation due to the
recursive nature of language models.
2 Finite State Controllers
We first review partially observable Markov decision processes (POMDPs) (Sect. 2.1), which is the
framework used throughout the paper for planning in partially observable domains. Then we review
how to represent POMDP policies as finite state controllers (Sect. 2.2) as well as some algorithms
to optimize controllers of a fixed size (Sect. 2.3).
2.1 POMDPs
POMDPs have emerged as a popular framework for planning in partially observable domains [12].
A POMDP is formally defined by a tuple $(S, O, A, T, Z, R, \gamma)$ where $S$ is the set of states, $O$ is the set of observations, $A$ is the set of actions, $T(s', s, a) = \Pr(s'|s,a)$ is the transition function, $Z(o, s', a) = \Pr(o|s',a)$ is the observation function, $R(s,a) = r$ is the reward function and $\gamma \in [0, 1)$ is the discount factor. It will be useful to view $\gamma$ as a termination probability. This will allow us to absorb $\gamma$ into the transition probabilities by defining discounted transition probabilities: $\Pr_\gamma(s'|s,a) = \Pr(s'|s,a)\,\gamma$. Given a POMDP, the goal is to find a course of action that maximizes expected total rewards. To select actions, the system can only use the information available in the past actions and observations. Thus we define a policy $\pi$ as a mapping from histories of past actions and observations to actions. Since histories may become arbitrarily long, we can alternatively define policies as mappings from beliefs to actions (i.e., $\pi(b) = a$). A belief $b(s) = \Pr(s)$ is a probability distribution over states, taking into account the information provided by past actions and observations. Given a belief $b$, after executing $a$ and receiving $o$, we can compute an updated belief $b^{a,o}$ using Bayes' theorem: $b^{a,o}(s') = k \sum_s b(s) \Pr(s'|s,a) \Pr(o|s',a)$. Here $k$ is a normalization constant. The value $V^\pi$ of policy $\pi$ when starting with belief $b$ is measured by the expected sum of the future rewards: $V^\pi(b) = \sum_t R(b_t, \pi(b_t))$, where $R(b,a) = \sum_s b(s) R(s,a)$. An optimal policy $\pi^*$ is a policy with the highest value $V^{\pi^*}$ for all beliefs (i.e., $V^{\pi^*}(b) \ge V^\pi(b)$ for all $b, \pi$). The optimal value function also satisfies Bellman's equation: $V^*(b) = \max_a \big( R(b,a) + \gamma \sum_o \Pr(o|b,a)\, V^*(b^{a,o}) \big)$, where $\Pr(o|b,a) = \sum_{s,s'} b(s) \Pr(s'|s,a) \Pr(o|s',a)$.
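A direct transcription of the belief update into code may help fix the notation; the array conventions (`T[s, a, s2]`, `Z[o, s2, a]`) are choices made here for illustration, not from the paper:

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """b^{a,o}(s') = k * Pr(o|s',a) * sum_s b(s) Pr(s'|s,a)."""
    b_new = Z[o, :, a] * (b @ T[:, a, :])
    return b_new / b_new.sum()           # k is the normalization constant

# Tiny two-state, one-action, two-observation example (numbers made up).
T = np.zeros((2, 1, 2)); T[:, 0, :] = [[0.9, 0.1], [0.2, 0.8]]
Z = np.zeros((2, 2, 1)); Z[:, :, 0] = [[0.7, 0.4], [0.3, 0.6]]
print(belief_update(np.array([0.5, 0.5]), a=0, o=0, T=T, Z=Z))
```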
2.2 Policy Representation
A convenient representation for an important class of policies is that of finite state controllers [9].
A finite state controller consists of a finite state automaton $(N, E)$ with a set $N$ of nodes and a set $E$ of directed edges. Each node $n$ has one outgoing edge per observation. A controller encodes a policy $\pi = (\psi, \eta)$ by mapping each node to an action (i.e., $\psi(n) = a$) and each edge (referred to by its observation label $o$ and its parent node $n$) to a successor node (i.e., $\eta(n, o) = n'$). At runtime, the policy encoded by a controller is executed by doing the action $a_t = \psi(n_t)$ associated with the node $n_t$ traversed at time step $t$ and following the edge labelled with observation $o_t$ to reach the next node $n_{t+1} = \eta(n_t, o_t)$.
Stochastic controllers [18] can also be used to represent stochastic policies by redefining $\psi$ and $\eta$ as distributions over actions and successor nodes. More precisely, let $\Pr_\psi(a|n)$ be the distribution from which an action $a$ is sampled in node $n$, and let $\Pr_\eta(n'|n,a,o)$ be the distribution from which the successor node $n'$ is sampled after executing $a$ and receiving $o$ in node $n$. The value of a controller is computed by solving the following system of linear equations:
is computed by solving the following system of linear equations:
X
X
Vn? (s) =
Pr? (a|n)[R(s, a) +
Pr? (s0 |s, a) Pr(o|s0 , a) Pr? (n0 |n, a, o)Vn?0 (s0 )] ?n, s (1)
a
s0 ,o,n0
While there always exists an optimal policy representable by a deterministic controller, this controller may have a very large (possibly infinite) number of nodes. Given time and memory constraints, it is common practice to search for the best controller with a bounded number of nodes [18].
However, when the number of nodes is fixed, the best controller is not necessarily deterministic. This
explains why searching in the space of stochastic controllers may be advantageous.
Table 1: Quadratically constrained optimization program for bounded stochastic controllers [1].
X
max
x,y
s.t.
s
bo (s) Vno (s)
| {z }
Vn (s) =
| {z }
y
y
Xh
a,n0
Pr(a, n0 |n, ok ) R(s, a) +
|
{z
}
x
X
s0 ,o
Pr? (s0 |s, a) Pr(o|s0 , a) Pr(n0 , a|n, o) Vn0 (s0 )
|
{z
} | {z }
X
Pr(n0 , a|n, o) ? 0 ?n0 , a, n, o
Pr(n0 , a|n, o) = 1
|
{z
}
|
{z
}
n0 ,a
x
x
X
X
Pr(n0 , a|n, ok ) ?a, n, o
Pr(n0 , a|n, o) =
|
{z
}
|
{z
}
0
0
n
2.3
x
n
x
i
?n, s
y
?n, o
x
Optimization of Stochastic Controllers
The optimization of a stochastic controller with a fixed number of nodes can be formulated as a
quadratically constrained optimization problem (QCOP) [1]. The idea is to maximize $V^\pi$ by varying the controller parameters $\Pr_\psi$ and $\Pr_\eta$. Table 1 describes the optimization problem with $V_n(s)$ and the joint distribution $\Pr(n',a|n,o) = \Pr_\psi(a|n)\,\Pr_\eta(n'|n,a,o)$ as variables. The first set of constraints corresponds to those of Eq. 1, while the remaining constraints ensure that $\Pr(n',a|n,o)$ is a proper distribution and that $\sum_{n'} \Pr(n',a|n,o) = \Pr_\psi(a|n)$ for all $o$. This optimization program is
non-convex due to the first set of constraints. Hence, existing techniques can at best guarantee
convergence to a local optimum. Several techniques have been tried including gradient ascent [14],
stochastic local search [3], bounded policy iteration (BPI) [18] and a general non-linear solver called
SNOPT (based on sequential quadratic programming) [1, 8]. Empirically, biased-BPI (version of
BPI that biases its search to the belief region reachable from a given initial belief state) and SNOPT
have been shown to outperform the other approaches on some benchmark problems [19, 1]. We
quickly review BPI since it will be extended in Section 3.2 to optimize hierarchical controllers. BPI
alternates between policy evaluation and policy improvement. Given a policy with fixed parameters $\Pr(a,n'|n,o)$, policy evaluation solves the linear system in Eq. 1 to find $V_n(s)$ for all $n, s$. Policy improvement can be viewed as a linear simplification of the program in Table 1 achieved by fixing $V_{n'}(s')$ in the right hand side of the first set of constraints. Policy improvement is achieved by optimizing the controller parameters $\Pr(n',a|n,o)$ and the value $V_n(s)$ on the left hand side.¹

¹ Note however that this optimization may decrease the value of some nodes, so [18] add an additional constraint to ensure monotonic improvement by forcing $V_n(s)$ on the left hand side to be at least as high as $V_n(s)$ on the right hand side.
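To see the QCOP of Table 1 end-to-end on a toy instance, the following sketch hands it to a generic nonlinear solver. The paper's experiments rely on SNOPT; here scipy's SLSQP serves as a stand-in (an assumption, not the authors' setup), and as with any local method only a locally optimal controller is guaranteed. All POMDP data below are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
Nn, S, A, O, gamma = 2, 2, 2, 2, 0.9
T = rng.dirichlet(np.ones(S), size=(S, A))                     # T[s, a, t]
Z = rng.dirichlet(np.ones(O), size=(S, A)).transpose(2, 0, 1)  # Z[o, t, a]
R = rng.uniform(0, 1, (S, A))
b0 = np.full(S, 1.0 / S)                                       # initial belief
n0 = 0                                                         # start node
nx, ny = Nn * O * Nn * A, Nn * S

def unpack(z):   # x = Pr(n', a | n, o) laid out [n, o, n', a]; y = V_n(s)
    return z[:nx].reshape(Nn, O, Nn, A), z[nx:].reshape(Nn, S)

def neg_value(z):                       # objective: max sum_s b0(s) V_{n0}(s)
    return -(b0 @ unpack(z)[1][n0])

def bellman(z):                         # first constraint block of Table 1
    x, y = unpack(z)
    res = np.empty((Nn, S))
    for n in range(Nn):
        for s in range(S):
            imm = np.einsum("ma,a->", x[n, 0], R[s])           # o_k = obs. 0
            fut = gamma * np.einsum("at,ota,oma,mt->", T[s], Z, x[n], y)
            res[n, s] = y[n, s] - imm - fut
    return res.ravel()

def simplex(z):                         # sum_{n',a} x(n',a|n,o) = 1
    return unpack(z)[0].sum(axis=(2, 3)).ravel() - 1.0

def consistent(z):                      # sum_{n'} x(n',a|n,o) independent of o
    pa = unpack(z)[0].sum(axis=2)
    return (pa[:, 1:, :] - pa[:, :1, :]).ravel()

z0 = np.concatenate([np.full(nx, 1.0 / (Nn * A)), np.zeros(ny)])
sol = minimize(neg_value, z0, method="SLSQP",
               bounds=[(0, 1)] * nx + [(None, None)] * ny,
               constraints=[{"type": "eq", "fun": f}
                            for f in (bellman, simplex, consistent)],
               options={"maxiter": 300})
print("locally optimal controller value at b0:", -sol.fun)
```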
3 Hierarchical controllers
Hansen and Zhou [10] recently proposed hierarchical finite-state controllers as a simple and intuitive way of encoding hierarchical policies. A hierarchical controller consists of a set of nodes and
edges as in a flat controller, however some nodes may be abstract, corresponding to sub-controllers
themselves. As with flat controllers, concrete nodes are parameterized with an action mapping $\psi$ and edges outgoing from concrete nodes are parameterized by a successor node mapping $\eta$. In contrast,
abstract nodes are parameterized by a child node mapping indicating in which child node the subcontroller should start. Hansen and Zhou consider two schemes for the edges outgoing abstract
nodes: either there is a single outgoing edge labelled with a null observation or there is one edge
per terminal node of the subcontroller labelled with an abstract observation identifying the node in
which the subcontroller terminated.
Subcontrollers encode full POMDP policies with the addition of a termination condition. In fully
observable domains, it is customary to stop the subcontroller once a goal state (from a predefined
set of terminal states) is reached. This strategy cannot work in partially observable domains, so
Hansen and Zhou propose to terminate a subcontroller when an end node (from a predefined set of
terminal nodes) is reached. Since the decision to reach a terminal node is made according to the
successor node mapping $\eta$, the timing for returning control is implicitly optimized. Hansen and
Zhou propose to use |A| terminal nodes, each mapped to a different action. Terminal nodes do not
have any outgoing edges nor any action mapping since they already have an action assigned.
The hierarchy of the controller is assumed to be finite and specified by the programmer. Subcontrollers are optimized in isolation in a bottom up fashion. Subcontrollers at the bottom level are
made up only of concrete nodes and therefore can be optimized as usual using any controller optimization technique. Controllers at other levels may contain abstract nodes for which we have to
define the reward function and the transition probabilities. Recall that abstract nodes are not mapped
to concrete actions, but rather to children nodes. Hence, the immediate reward of an abstract node $\bar{n}$ corresponds to the value $V_{\psi(\bar{n})}(s)$ of its child node $\psi(\bar{n})$. Similarly, the probability of reaching state $s'$ after executing the subcontroller of an abstract node $\bar{n}$ corresponds to the probability $\Pr(s_{end}|s, \psi(\bar{n}))$ of terminating the subcontroller in $s_{end}$ when starting in $s$ at child node $\psi(\bar{n})$.
This transition probability can be computed by solving the following linear system:
$$\Pr(s_{end} \mid s, n) = \begin{cases} 1 & \text{when } n \text{ is a terminal node and } s = s_{end} \\ 0 & \text{when } n \text{ is a terminal node and } s \neq s_{end} \\ \displaystyle\sum_{o,s'} \Pr(s' \mid s, \psi(n))\, \Pr(o \mid s', \psi(n))\, \Pr(s_{end} \mid s', \eta(n,o)) & \text{otherwise} \end{cases} \qquad (2)$$
Subcontrollers with abstract actions correspond to partially observable semi-Markov decision processes (POSMDPs) since the duration of each abstract action may vary. The duration of an action is
important to determine the amount by which future rewards should be discounted. Hansen and Zhou
propose to use the mean duration to determine the amount of discounting, however this approach
does not work. In particular, abstract actions with non-zero probability of never terminating have an
infinite mean duration. Instead, we propose to absorb the discount factor into the transition distribution (i.e., Pr_γ(s'|s,a) = γ Pr(s'|s,a)). This avoids all issues related to discounting and allows us to
solve POSMDPs with the same algorithms as POMDPs. Hence, given the abstract reward function
R(s, ψ(n̄)) = V_{ψ(n̄)}(s) and the abstract transition function Pr_γ(s' | s, ψ(n̄)) obtained by solving the
linear system in Eq. 2, we have a POSMDP which can be optimized using any POMDP optimization
technique (as long as the discount factor is absorbed into the transition function).
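As a quick illustration of the discount-absorption trick (the array shapes here are our own assumption, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 4, 2, 0.95
P = rng.dirichlet(np.ones(S), size=(S, A))    # P[s, a, s'] = Pr(s' | s, a)

# Pr_gamma(s'|s,a) = gamma * Pr(s'|s,a): rows now sum to gamma < 1, and the
# missing probability mass acts as implicit termination, so undiscounted
# expected total reward under P_gamma equals discounted expected reward under P.
P_gamma = gamma * P
assert np.allclose(P_gamma.sum(axis=-1), gamma)
```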
Hansen's hierarchical controllers have two limitations: the hierarchy must have a finite number of
levels and it must be specified by hand. In the next section we describe recursive controllers which
may have infinitely many levels. We also describe an algorithm to discover a suitable hierarchy by
simultaneously optimizing the controller parameters and hierarchy.
3.1
Recursive Controllers
In some domains, policies are naturally recursive in the sense that they decompose into subpolicies
that may call themselves. This is often the case in language processing tasks since language models
such as probabilistic context-free grammars are composed of recursive rules. Recent work in dialogue management uses POMDPs to make high level discourse decisions [24]. Assuming POMDP
dialogue management eventually handles decisions at the sentence level, recursive policies will naturally arise. Similarly, language generation with POMDPs would naturally lead to recursive policies
that reflect the recursive nature of language models.
We now propose several modifications to Hansen and Zhou's hierarchical controllers that simplify
things while allowing recursive controllers. First, the subcontrollers of abstract nodes may be composed of any node (including the parent node itself) and transitions can be made to any node anywhere (whether concrete or abstract). This allows recursive controllers and smaller controllers since
nodes may be shared across levels. Second, we use a single terminal node that has no action nor any
outer edge. It is a virtual node simply used to signal the termination of a subcontroller. Third, while
abstract nodes lead to the execution of a subcontroller, they are also associated with an action. This
action is executed upon termination of the subcontroller. Hence, the actions that were associated
with the terminal nodes in Hansen and Zhou's proposal are associated with the abstract nodes in
our proposal. This allows a uniform parameterization of actions for all nodes while reducing the
number of terminal nodes to 1. Fourth, the outer edges of abstract nodes are labelled with regular
observations since an observation will be made following the execution of the action of an abstract
node. Finally, to circumvent all issues related to discounting, we absorb the discount factor into the
transition probabilities (i.e., Pr_γ(s'|s,a) = γ Pr(s'|s,a)).
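A minimal sketch of one possible data structure for the recursive controllers just described; the field names and the tiny example controller are our own illustration, not from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    action: Optional[int] = None        # every concrete/abstract node carries an action;
                                        # None is reserved for the single virtual terminal node
    child: Optional[str] = None         # abstract nodes name the child node where their
                                        # subcontroller starts; None marks a concrete node
    succ: Dict[int, str] = field(default_factory=dict)  # observation -> next node (any node, anywhere)

controller = {
    "root": Node(action=0, child="work", succ={0: "root", 1: "work"}),  # abstract; may recurse via succ
    "work": Node(action=1, succ={0: "work", 1: "end"}),                 # concrete
    "end":  Node(),                                                     # virtual terminal: no action, no edges
}
```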
[Figure 1: two diagrams, (a) and (b), of controller transitions; circles are state-node pairs (s, n),
with edges labelled by the probabilities Pr(n', a | n̄, o), Pr(n_beg | n̄), and oc(s_end, n_end | s, n_beg).]
Figure 1: The figures represent controllers and transitions as written in Equations 5 and 6b. Alongside the directed edges we've indicated the equivalent part of the equations which they correspond to.
3.2
Hierarchy and Policy Optimization
We formulate the search for a good stochastic recursive controller, including the automated hierarchy
discovery, as an optimization problem (see Table 2). The global maximum of this optimization
problem corresponds to the optimal policy (and hierarchy) for a fixed set N of concrete nodes n
and a fixed set N̄ of abstract nodes n̄. The variables consist of the value function V_n(s), the policy
parameters Pr(n', a | n, o), the (stochastic) child node mapping Pr(n' | n̄) for each abstract node n̄, and
the occupancy frequency oc(n, s | n_0, s_0) of each (n, s)-pair when starting in (n_0, s_0). The objective
(Eq. 3) is the expected value Σ_s b_0(s) V_{n_0}(s) of starting the controller in node n_0 with initial belief
b_0. The constraints in Equations 4 and 5 respectively indicate the expected value of concrete and
abstract nodes. The expected value of an abstract node corresponds to the sum of three terms: the
expected value V_{n_beg}(s) of its subcontroller given by its child node n_beg, the reward R(s_end, a_n̄)
immediately after the termination of the subcontroller, and the future rewards V_{n'}(s'). Figure 1a
illustrates graphically the relationship between the variables in Equation 5. Circles are state-node
pairs labelled by their expected value. Edges indicate single transitions (solid line), sequences of
transitions (dashed line) or the beginning/termination of a subcontroller (bold/dotted line). Edges
are labelled with the corresponding transition probability variables.
Note that the reward R(s_end, a_n̄) depends on the state s_end in which the subcontroller terminates.
Hence we need to compute the probability that the last state visited in the subcontroller is s_end.
This probability is given by the occupancy frequency oc(s_end, n_end | s, n_beg), which is recursively
defined in Eq. 6 in terms of a preceding state-node pair which may be concrete (6a) or abstract (6b).
Figure 1b illustrates graphically the relationship between the variables in Eq. 6b. Eq. 7 prevents
infinite loops (without any action execution) in the child node mappings. The label function refers
to the labelling of all abstract nodes, which induces an ordering on the abstract nodes. Only the
nodes labelled with numbers larger than the label of an abstract node can be children of that abstract
node. This constraint ensures that chains of child node mappings have a finite length, eventually
reaching a concrete node where an action is executed. Constraints, like the ones in Table 1, are also
needed to guarantee that the policy parameters and the child node mappings are proper distributions.
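For intuition, here is a minimal sketch (our own construction, not from the paper) that evaluates the concrete-node value constraints (Eq. 4, shown in Table 2 below) for a fixed stochastic policy on a toy flat controller with no abstract nodes. The dimensions and random policy are assumptions; we simply pick o_k = 0 for the reward term, and the discount is already absorbed into the transition model.

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, O, N, gamma = 3, 2, 2, 2, 0.95
P = gamma * rng.dirichlet(np.ones(S), size=(S, A))        # Pr_gamma(s' | s, a)
Z = rng.dirichlet(np.ones(O), size=(S, A))                # Pr(o | s', a)
R = rng.normal(size=(S, A))                               # R(s, a)
pol = rng.dirichlet(np.ones(N * A), size=(N, O)).reshape(N, O, N, A)   # Pr(n', a | n, o)

V = np.zeros((N, S))
for _ in range(300):                                      # successive approximation of Eq. 4
    newV = np.zeros_like(V)
    for n in range(N):
        for s in range(S):
            v = 0.0
            for a in range(A):
                for n2 in range(N):
                    v += pol[n, 0, n2, a] * R[s, a]       # o_k := 0 in the reward term
                    for sp in range(S):
                        for o in range(O):
                            v += P[s, a, sp] * Z[sp, a, o] * pol[n, o, n2, a] * V[n2, sp]
            newV[n, s] = v
    V = newV
print(V)                                                  # V[n, s] approximates V_n(s)
```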
3.3
Algorithms
Since the problem in Table 2 has non-convex (quartic) constraints in Eq. 5 and 6, it is difficult to
solve. We consider three approaches inspired from the techniques for non-hierarchical controllers:
Non-convex optimization: Use a general non-linear solver, such as SNOPT, to directly tackle the
optimization problem in Table 2. This is the most convenient approach, however a globally optimal
solution may not be found due to the non-convex nature of the problem.
Mixed-Integer Non-Linear Programming (MINLP): We restrict Pr(n', a | n, o) and Pr(n_beg | n̄)
to be binary (i.e., in {0, 1}). Since the optimal controller is often near deterministic in practice, this
restriction tends to have a negligible effect on the value of the optimal controller. The problem is
still non-convex but can be tackled with a mixed-integer non-linear solver such as MINLP BB.²
Bounded Hierarchical Policy Iteration (BHPI): We alternate between (i) solving a simplified version of the optimization where some variables are fixed and (ii) updating the values of the fixed variables. More precisely, we fix V_{n'}(s') in Eq. 5 and oc(s, n̄ | s_0, n_0) in Eq. 6. As a result, Eq. 5 and 6
are now cubic, involving products of variables that include a single continuous variable.

² http://www-unix.mcs.anl.gov/~leyffer/solvers.html
Table 2: Non-convex quartically constrained optimization problem for hierarchy and policy discovery
in bounded stochastic recursive controllers. (The variable groups are w = occupancy frequencies oc,
x = policy parameters Pr(n', a | n, o), y = value functions V, and z = child node mappings
Pr(n_beg | n̄); in the original table each factor is underbraced with its group label.)

    max_{w,x,y,z}   Σ_{s∈S} b_0(s) V_{n_0}(s)                                              (3)

    s.t.  V_n(s) = Σ_{a,n'} [ Pr(n', a | n, o_k) R(s, a)
                     + Σ_{s',o} Pr_γ(s' | s, a) Pr(o | s', a) Pr(n', a | n, o) V_{n'}(s') ]
                                                                          ∀ s, n          (4)

          V_n̄(s) = Σ_{n_beg} Pr(n_beg | n̄) [ V_{n_beg}(s)
                     + Σ_{s_end,a,n'} oc(s_end, n_end | s, n_beg) [ Pr(n', a | n̄, o_k) R(s_end, a)
                     + Σ_{s',o} Pr_γ(s' | s_end, a) Pr(o | s', a) Pr(n', a | n̄, o) V_{n'}(s') ] ]
                                                                          ∀ s, n̄          (5)

          oc(s', n' | s_0, n_0) = δ(s', n', s_0, n_0)
                     + Σ_{s,o,a} [ Σ_{n concrete} oc(s, n | s_0, n_0) Pr_γ(s' | s, a) Pr(o | s', a) Pr(n', a | n, o)   (6a)
                     + Σ_{s_end,n_beg,n̄ abstract} oc(s, n̄ | s_0, n_0) Pr_γ(s' | s_end, a) Pr(o | s', a)
                           oc(s_end, n_end | s, n_beg) Pr(n', a | n̄, o) Pr(n_beg | n̄) ]                               (6b)
                                                                          ∀ s', s_0, n', n_0

          Pr(n̄' | n̄) = 0  if  label(n̄') ≤ label(n̄),   ∀ n̄, n̄'                             (7)
This permits the use of disjunctive programming [2] to linearize the constraints without any approximation.
The idea is to replace any product BX (where B is binary and X is continuous) by a new continuous
variable Y constrained by lb_X B ≤ Y ≤ ub_X B and X + (B − 1) ub_X ≤ Y ≤ X + (B − 1) lb_X,
where lb_X and ub_X are lower and upper bounds on X. One can verify that those additional linear
constraints force Y to be equal to BX. After applying disjunctive programming, we solve the resulting mixed-integer linear program (MILP) and update V_{n'}(s') and oc(s, n̄ | s_0, n_0) based on the new
values for V_n(s) and oc(s', n' | s_0, n_0). We repeat the process until convergence or until a pre-defined
time limit is reached. Although convergence cannot be guaranteed, in practice we have found BHPI
to be monotonically increasing. Note that fixing V_{n'}(s') and oc(s, n̄ | s_0, n_0) while varying the policy
parameters is reminiscent of policy iteration, hence the name bounded hierarchical policy iteration.
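A tiny sketch of the linearization rule just described, emitting the four inequalities as strings rather than feeding a real MILP solver; the variable names are illustrative.

```python
def linearize_product(B, X, Y, lbX, ubX):
    """Linear constraints forcing Y == B * X when B is binary and lbX <= X <= ubX."""
    return [
        f"{lbX} * {B} <= {Y}",
        f"{Y} <= {ubX} * {B}",
        f"{X} + ({B} - 1) * {ubX} <= {Y}",
        f"{Y} <= {X} + ({B} - 1) * {lbX}",
    ]

# With B = 1 the last two constraints pin Y = X; with B = 0 the first two pin Y = 0.
for c in linearize_product("b", "x", "y", 0.0, 10.0):
    print(c)
```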
3.4
Discussion
Discovering a hierarchy offers many advantages over previous methods that assume the hierarchy is
already known. In situations where the user is unable to specify the hierarchy, our approach provides
a principled way of discovering it. In situations where the user has a hierarchy in mind, it may be
possible to find a better one. Note however that discovering the hierarchy while optimizing the
policy is a much more difficult problem than simply optimizing the policy parameters. Additional
variables (e.g., Pr(n', a | n, o) and oc(s, n | s_0, n_0)) must be optimized and the degree of non-linearity
increases. Our approach can also be used when the hierarchy and the policy are partly known. It is
fairly easy to set the variables that are known or to reduce their range by specifying upper and lower
bounds. This also has the benefit of simplifying the optimization problem.
It is also interesting to note that hierarchical policies may be encoded with exponentially fewer
nodes in a hierarchical controller than a flat controller. Intuitively, when a subcontroller is called by
k abstract nodes, this subcontroller is shared by all its abstract parents. An equivalent flat controller
would have to use k separate copies of the subcontroller. If a hierarchical controller has l levels
with subcontrollers shared by k parents in each level, then the equivalent flat controller will need
O(k^l) copies. By allowing recursive controllers, policies may be represented even more compactly.
Recursive controllers allow abstract nodes to call subcontrollers that may contain themselves. An
Table 3: Experiment results

    Problem    S   A  O  V*    Num. of Nodes   SNOPT            BHPI              MINLP BB
                                               Time     V       Time      V       Time   V
    Paint      4   4  2  3.3   4(3/1)          2s       0.48    13s       3.29    <1s    3.29
    Shuttle    8   3  5  32.7  4(3/1)          2s       31.87   85s       18.92   4s     18.92
    Shuttle    8   3  5  32.7  6(4/2)          6s       31.87   7459s     27.93   221s   27.68
    Shuttle    8   3  5  32.7  7(4/3)          26s      31.87   10076s    31.87   N/A    –
    Shuttle    8   3  5  32.7  9(5/4)          1449s    30.27   10518s    3.73    N/A    –
    4x4 Maze   16  4  2  3.7   3(2/1)          3s       3.15    397s      3.21    30s    3.73
equivalent non-hierarchical controller would have to unroll the recursion by creating a separate copy
of the subcontroller each time it is called. Since recursive controllers essentially call themselves infinitely many times, they can represent infinitely large non-recursive controllers with finitely many
nodes. As a comparison, recursive controllers are to non-recursive hierarchical controllers what
context-free grammars are to regular expressions. Since the leading approaches for controller optimization fix the number of nodes [18, 1], one may be able to find a much better policy by considering
hierarchical recursive controllers. In addition, hierarchical controllers may be easier to understand
and interpret than flat controllers given their natural decomposition into subcontrollers and their
possibly smaller size.
4
Experiments
We report on some preliminary experiments with three toy problems (paint, shuttle and maze) from
the POMDP repository.³ We used the SNOPT package to directly solve the non-convex optimization
problem in Table 2 and bounded hierarchical policy iteration (BHPI) to solve it iteratively. Table 3
reports the running time and the value of the hierarchical policies found.⁴ For comparison purposes,
the optimal value of each problem (copied from [4]) is reported in the column labelled by V*.
We optimized hierarchical controllers of two levels with a fixed number of nodes reported in the
column labelled "Num. of Nodes". The numbers in parentheses indicate the number of nodes at
the top level (left) and at the bottom level (right).⁵ In general, SNOPT finds the optimal solution
with minimal computational time. In contrast, BHPI is less robust and takes up to several orders of
magnitude longer. MINLP BB returns good solutions for the smaller problems but is unable to find
feasible solutions to the larger ones. We also looked at the hierarchy discovered for each problem
and verified that it made sense. In particular, the hierarchy discovered for the paint problem matches
the one hand coded by Pineau in her PhD thesis [16]. Given the relatively small size of the test
problems, these experiments should be viewed as a proof of concept that demonstrate the feasibility
of our approach. More extensive experiments with larger problems will be necessary to demonstrate
the scalability of our approach.
5
Conclusion & Future Work
This paper proposes the first approach for hierarchy discovery in partially observable planning problems. We model the search for a good hierarchical policy as a non-convex optimization problem
with variables corresponding to the hierarchy and policy parameters. We propose to tackle the optimization problem using non-linear solvers such as SNOPT or by reformulating the problem as
an approximate MINLP or as a sequence of MILPs that can be thought of as a form of hierarchical
bounded policy iteration. Preliminary experiments demonstrate the feasibility of our approach, however further research is necessary to improve scalability. The approach can also be used in situations
where a user would like to improve or learn part of the hierarchy. Many variables can then be set (or
restricted to a smaller range) which simplifies the optimization problem and improves scalability.
We also generalize Hansen and Zhou's hierarchical controllers to recursive controllers. Recursive
controllers can encode policies with finitely many nodes that would otherwise require infinitely large
³ http://pomdp.org/pomdp/examples/index.shtml
⁴ N/A refers to a trial when the solver was unable to return a feasible solution to the problem.
⁵ Since the problems are simple, the number of levels was restricted to two, though our approach permits any
number of levels and does not require the number of levels nor the number of nodes per level to be specified.
non-recursive controllers. Further details about recursive controllers and our other contributions can
be found in [5]. We plan to further investigate the use of recursive controllers in dialogue management and text generation where recursive policies are expected to naturally capture the recursive
nature of language models.
Acknowledgements: this research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canada Foundation for Innovation (CFI) and the Ontario
Innovation Trust (OIT).
References
[1] C. Amato, D. Bernstein, and S. Zilberstein. Solving POMDPs using quadratically constrained linear programs. To appear in International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[2] E. Balas. Disjunctive programming. Annals of Discrete Mathematics, 5:3–51, 1979.
[3] D. Braziunas and C. Boutilier. Stochastic local search for POMDP controllers. In AAAI, pages 690–696, 2004.
[4] A. Cassandra. Exact and approximate algorithms for partially observable Markov decision processes. PhD thesis, Brown University, Dept. of Computer Science, 1998.
[5] L. Charlin. Automated hierarchy discovery for planning in partially observable domains. Master's thesis, University of Waterloo, 2006.
[6] T. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR, 13:227–303, 2000.
[7] M. Ghavamzadeh and S. Mahadevan. Hierarchical policy gradient algorithms. In T. Fawcett and N. Mishra, editors, ICML, pages 226–233. AAAI Press, 2003.
[8] P. Gill, W. Murray, and M. Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Review, 47(1):99–131, 2005.
[9] E. Hansen. An improved policy iteration algorithm for partially observable MDPs. In NIPS, 1998.
[10] E. Hansen and R. Zhou. Synthesis of hierarchical finite-state controllers for POMDPs. In E. Giunchiglia, N. Muscettola, and D. Nau, editors, ICAPS, pages 113–122. AAAI, 2003.
[11] B. Hengst. Discovering hierarchy in reinforcement learning with HEXQ. In ICML, pages 243–250, 2002.
[12] L. Kaelbling, M. Littman, and A. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99–134, 1998.
[13] A. McGovern and A. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In ICML, pages 361–368, 2001.
[14] N. Meuleau, L. Peshkin, K.-E. Kim, and L. Kaelbling. Learning finite-state controllers for partially observable environments. In UAI, pages 427–436, 1999.
[15] R. Parr. Hierarchical Control and Learning for Markov Decision Processes. PhD thesis, University of California at Berkeley, 1998.
[16] J. Pineau. Tractable Planning Under Uncertainty: Exploiting Structure. PhD thesis, Robotics Institute, Carnegie Mellon University, 2004.
[17] J. Pineau, G. Gordon, and S. Thrun. Policy-contingent abstraction for robust robot control. In UAI, pages 477–484, 2003.
[18] P. Poupart and C. Boutilier. Bounded finite state controllers. In NIPS, 2003.
[19] P. Poupart. Exploiting Structure to Efficiently Solve Large Scale Partially Observable Markov Decision Processes. PhD thesis, University of Toronto, 2005.
[20] M. Ryan. Using abstract models of behaviours to automatically generate reinforcement learning hierarchies. In ICML, pages 522–529, 2002.
[21] R. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.
[22] G. Theocharous, S. Mahadevan, and L. Kaelbling. Spatial and temporal abstractions in POMDPs applied to robot navigation. Technical Report MIT-CSAIL-TR-2005-058, Computer Science and Artificial Intelligence Laboratory, MIT, 2005.
[23] S. Thrun and A. Schwartz. Finding structure in reinforcement learning. In NIPS, pages 385–392, 1994.
[24] J. Williams and S. Young. Scaling POMDPs for dialogue management with composite summary point-based value iteration (CSPBVI). In AAAI Workshop on Statistical and Empirical Methods in Spoken Dialogue Systems, 2006.
David Cohn
Google Inc.,
1600 Amphitheatre Parkway
Mountain View, CA 94043
[email protected]
Deepak Verma
Dept. of CSE, Univ. of Washington,
Seattle WA- 98195-2350
[email protected]
Karl Pfleger
Google Inc.,
1600 Amphitheatre Parkway
Mountain View, CA 94043
[email protected]
Abstract
Clustering, or factoring of a document collection attempts to ?explain? each observed document in terms of one or a small number of inferred prototypes. Prior
work demonstrated that when links exist between documents in the corpus (as is
the case with a collection of web pages or scientific papers), building a joint model
of document contents and connections produces a better model than that built from
contents or connections alone.
Many problems arise when trying to apply these joint models to corpora at the
scale of the World Wide Web, however; one of these is that the sheer overhead
of representing a feature space on the order of billions of dimensions becomes
impractical.
We address this problem with a simple representational shift inspired by probabilistic relational models: instead of representing document linkage in terms of
the identities of linking documents, we represent it by the explicit and inferred attributes of the linking documents. Several surprising results come with this shift:
in addition to being computationally more tractable, the new model produces factors that more cleanly decompose the document collection. We discuss several
variations on this model and show how some can be seen as exact generalizations
of the PageRank algorithm.
1
Introduction
There is a long and successful history of decomposing collections of documents into factors or
clusters to identify "similar" documents and principal themes. Collections have been factored on
the basis of their textual contents [1, 2, 3], the connections between the documents [4, 5, 6], or both
together [7].
A factored corpus model is usually composed of a small number of "prototype" documents along
with a set of mixing coefficients (one for each document in the corpus). Each prototype corresponds
to an abstract document whose features are, in some mathematical sense, "typical" of some subset of the corpus documents. The mixing coefficients for a document d indicate how the model's
prototypes can best be combined to approximate d.
Many useful applications arise from factored models:
- Model prototypes may be used as "topics" or cluster centers in spectral clustering [8], serving as "typical" documents for a class or cluster.
- Given a topic, factored models of link corpora allow identifying authoritative documents
on that topic [4, 5, 6].
- By exploiting correlations and "projecting out" uninformative terms, the space of a factored model's mixing coefficients can provide a measure of semantic similarity between
documents, regardless of the overlap in their actual terms [1].
The remainder of this paper is organized as follows: Below, we first review the vector space model,
formalize the factoring problem, and describe how factoring is applied to linked document collections. In Section 2 we point out limitations of current approaches and introduce Attribute Factoring
(AF) to address them. In the following two sections, we identify limitations of AF and describe
Recursive Attribute Factoring and several other variations to overcome them, before summarizing
our conclusions in Section 5.
The Vector Space Model: The vector space model is a convention for representing a document
corpus (ordinarily sets of strings of arbitrary length) as a matrix, in which each document is represented as a column vector.
Let the number of documents in the corpus be N and the size of vocabulary M . Then T denotes
the M × N term-document matrix such that column j represents document d_j, and T_ij indicates
the number of times term t_i appears in document d_j. Geometrically, the columns of T can also be
viewed as points in an M-dimensional space, where each dimension i indexes the number of times
term t_i appears in the corresponding document.
A link-based corpus may also be represented as a vector space, defining an N × N matrix L where
L_ij = 1 if there is a link from document i to j and 0 otherwise. It is sometimes preferable to work
with P, a normalized version of L in which P_ij = L_ij / Σ_{i'} L_{i'j}; that is, each document's inlinks
sum to 1.
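As a concrete sketch of these definitions (the toy corpus and links below are invented for illustration):

```python
import numpy as np

docs = [["apple", "pie"], ["apple", "apple", "tart"], ["tart", "crust"]]
vocab = sorted({t for d in docs for t in d})
T = np.zeros((len(vocab), len(docs)))            # M x N term-document counts
for j, doc in enumerate(docs):
    for t in doc:
        T[vocab.index(t), j] += 1

L = np.array([[0., 1., 0.],                      # L[i, j] = 1 iff document i links to document j
              [0., 0., 1.],
              [1., 1., 0.]])
P = L / np.maximum(L.sum(axis=0, keepdims=True), 1)   # P_ij = L_ij / sum_i' L_i'j
```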
Factoring: Let A represent a matrix to be factored (usually T or
T augmented with some other matrix) into K factors. Factoring decomposes A into two matrices U and V (each of rank K) such that
A ≈ U V.¹ In the geometric interpretation, columns of U contain
the K prototypes, while columns of V indicate what mixture of prototypes best approximates the columns in the original matrix.
The definition of what constitutes a "best approximation" leads to
the many different factoring algorithms in use today. Latent Semantic Analysis [1] minimizes the sum squared reconstruction error of
A , PLSA [2] maximizes the log-likelihood that a generative model
using U as prototypes would produce the observed A , and NonNegative Matrix Factorization [3] adds constraints that all components of U and V must be greater than or equal to zero.
Figure 1: Factoring decomposes matrix A into matrices U and V
For the purposes of this paper, however, we are agnostic as to the factorization method used; our
main concern is how A, the document matrix to be factored, is generated.
1.1
Factoring Text and Link Corpora
When factoring a text corpus (e.g. via LSA [1], PLSA [2], NMF [3] or some other technique), we
directly factor the matrix T. Columns of the resulting M × K matrix U are often interpreted as the
K "principal topics" of the corpus, while columns of the K × N matrix V are "topic memberships"
of the corpus documents.
¹ In general, A ≈ f(U, V), where f can be any function that takes in the weights for a document and the
document prototypes to generate the original vector.
When factoring a link corpus (e.g. via ACA [4] or PHITS [6]), we factor L or the normalized
link matrix P. Columns of the resulting N × K matrix U are often interpreted as the K "citation
communities" of the corpus, and columns of the K × N matrix V indicate to what extent each
document belongs to the corresponding community. Additionally, U_ij, the degree of citation that
community j accords to document d_i, can be interpreted as the "authority" of d_i in that community.
1.2
Factoring Text and Links Together
Many interesting corpora, such as scientific literature and the World Wide Web, contain both text
content and links. Prior work [7] has demonstrated that building a single factored model of the joint
term-link matrix produces a better model than that produced by using text or links alone.
The naive way to produce such a joint model is to append L or P below T, and factor the joint
matrix:

    [ T ]   [ U_T ]
    [   ] ≈ [     ] V.        (1)
    [ L ]   [ U_L ]
When factored, the resulting U matrix can be seen as having
two components, representing the two distinct types of information in [T; L]. Column i of U_T indicates the expected
term distribution of factor i, while the corresponding column
of U_L indicates the distribution of documents that typically
link to documents represented by that factor.
In practice, L should be scaled by some factor α to control
the relative importance of the two types of information, but
empirical evidence [7] suggests that performance is somewhat
insensitive to its exact value. For clarity, we omit reference to
α in the equations below.
Figure 2: The naive joint model concatenates term and link matrices
2
Beyond the Naive Joint Model
Joint models provide a systematic way of incorporating information from both the terms and link
structure present in a corpus. But the naive approach described above does not scale up to web-sized
corpora, which may have millions of terms and tens of billions of documents. The matrix resulting
from a naive representation of a web-scale problem would have N + M features with N ≈ 10^10
and M ≈ 10^6. Simply representing this matrix (let alone factoring it) is impractical on a modern
workstation.
Work on Probabilistic Relational Models (PRMs) [9] suggests another approach. The terms in a
document are explicit attributes; links to the document provide additional attributes, represented
(in the naive case) as the identities of the inlinking documents. In a PRM however, entities are
represented by their attributes, rather than their identities. By taking a similar tack, we arrive at
Attribute Factoring, the approach of representing link information in terms of the attributes of the
inlinking documents rather than by their explicit identities.
2.1
Attribute Factoring
Each document dj , along with an attribute for each term, has an attribute for each other document di
in the corpus, signifying the presence (or absence) of a link from d_i to d_j. When N ≈ 10^10, keeping
each document identity as a separate attribute is prohibitive. To create a more economical representation, we propose replacing the link attributes by a smaller set of attributes that "summarize" the
information from link matrix L, possibly in combination with the term matrix T.
The most obvious attributes of a document are what terms it contains. Therefore, one simple way
to represent the "attributes" of a document's inlinks is to aggregate the terms in the documents that
link to it. There are many possible ways to aggregate these terms, including Dirichlet and more
sophisticated models. For computational and representational simplicity in this paper, however, we
replace inlink identities with a sum of the terms in the inlinking documents. In matrix notation, this
is just

    [  T  ]   [  U_T   ]
    [     ] ≈ [        ] V.        (2)
    [ T·L ]   [ U_T·L  ]
Colloquially, we can look at this representation as saying that a document has "some distribution of terms" (T) and is linked to by documents that have "some other term distribution" (T·L).
By substituting the aggregated attributes of the inlinks for their
identities, we can reduce the size of the representation down from
(M + N) × N to a much more manageable 2M × N. What is surprising is that, on the domains tested, this more compact representation
actually improves factoring performance.
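A minimal sketch of Eq. 2's representation on random toy data. NMF is used here as a stand-in factorizer (the paper's experiments used PLSA), and the scaling of the link block is omitted, as in the equations above.

```python
import numpy as np
from sklearn.decomposition import NMF            # stand-in for the factoring method

rng = np.random.default_rng(0)
M, N, K = 50, 30, 4
T = rng.poisson(0.3, size=(M, N)).astype(float)  # toy term-document counts
L = (rng.random((N, N)) < 0.1).astype(float)     # toy link matrix
np.fill_diagonal(L, 0)

A = np.vstack([T, T @ L])                        # column j of T@L sums the terms of j's inlinkers
nmf = NMF(n_components=K, init="random", random_state=0, max_iter=500)
U = nmf.fit_transform(A)                         # (2M x K): U_T stacked on U_{T.L}
V = nmf.components_                              # (K x N): mixing coefficients per document
```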
Figure 3: Representation for Attribute Factoring
2.2 Attribute Factoring Experiments
We tested Attribute Factoring on two publicly available corpora of interlinked text documents.
The Cora dataset [10] consists of abstracts and
references of approximately 34,000 computer science research papers; of these we used
the approximately 2000 papers categorized into
the seven subfields of machine learning. The
WebKB dataset [11] consists of approximately
6000 web pages from computer science departments, classified by school and category (student, course, faculty, etc.).
For both datasets, we factored the content-only,
naive joint, and AF joint representations using
PLSA [2]. We varied K, the number of computed factors, from 2 to 16, and performed 10
factoring runs for each value of K tested. The
factored models were evaluated by clustering
each document to its dominant factor and measuring cluster precision: the fraction of documents in a cluster sharing the majority label.
Figure 4: Attribute Factoring outperforms the content-only and naive joint representations
Figure 4 illustrates a typical result: adding explicit link information improves cluster precision, but
abstracting the link information with Attribute Factoring improves it even more.
3
Beyond Simple Attribute Factoring
Attribute Factoring reduces the number of attributes from
N +M to 2M , allowing existing factoring techniques to scale
to web-sized corpora. This reduction in number of attributes
however, comes at a cost. Since the identity of the document
itself is replaced by its attributes, it is possible for unscrupulous authors (spammers) to "pose" as a legitimate page with
high PageRank.
Consider the example shown in Figure 5, showing two subgraphs present in the web. On the right is a legitimate page
like the Yahoo! homepage, linked to by many pages, and linking to page RYL (Real Yahoo Link). A link from the Yahoo! homepage to RYL imparts a lot of authority and hence
is highly desired by spammers. Failing that, a spammer might
try to create a counterfeit copy of the Yahoo! homepage, boost
its PageRank by means of a "link farm", and create a link from
it to his page FYL (Fake Yahoo Link).
Figure 5: Attribute Factoring can be "spammed" by mirroring one level back
Without link information, our factoring can not distinguish the counterfeit homepage from the real
one. Using AF or the naive joint model allows us to distinguish them based on the distribution
of documents that link to each. But with AF, that real/counterfeit distinction is not propagated to
documents that they point to. All that AF tells us is that RYL and FYL are pointed to by pages that
look a lot like the Yahoo! homepage.
3.1
Recursive Attribute Factoring
Spamming AF was simple because it only looks one link behind. That is, attributes for a document
are either explicit terms in that document or explicit terms in documents linking to the current
document. This let us infer that the fake Yahoo! homepage was counterfeit, but provided no way to
propagate this inference on to later pages.
The AF representation introduced in the previous section can
be easily fooled. It makes inferences about a document based
on explicit attributes propagated from the documents linking
to it, but this inference only propagates one level. For example
it lets us infer that the fake Yahoo! homepage was counterfeit,
but provides no way to propagate this inference on to later
pages. This suggests that we need to propagate not only
explicit attributes of a document (its component terms), but
its inferred attributes as well.
A ready source of inferred attributes comes from the factoring
process itself. Recall that when factoring T ≈ U·V, if we
interpret the columns of U as factors or prototypes, then each
column of V can be interpreted as the inferred factor memberships of its corresponding document. Therefore, we can propagate the inferred attributes of inlinking documents by aggregating the columns of V they correspond to (Figure 6). Numerically, this replaces T (the explicit document attributes) in
the bottom half of the left matrix with V (the inferred document attributes):

    [  T  ]   [  U_T   ]
    [     ] ≈ [        ] V.        (3)
    [ V·L ]   [ U_V·L  ]

Figure 6: Recursive Attribute Factoring aggregates the inferred attributes (columns of V) of inlinking documents
There are some worrying aspects of this representation: the document representation is no longer
statically defined, and the equation itself is recursive. In practice, there is a simple iterative procedure
for solving the equation (See Algorithm 1), but it is computationally expensive, and carries no
convergence guarantees. The "inferred" attributes (IA) are set initially to random values, which are
then updated until they converge. Note that we need to use the normalized version of L, namely P.²
Algorithm 1 Recursive Attribute Factoring
1: Initialize IA⁰ with random entries.
2: while not converged do
3:    Factor Aᵗ = [T ; IAᵗ] ≈ [U_T ; U_IA] V
4:    Update IAᵗ⁺¹ = V · P
5: end while
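A runnable sketch of Algorithm 1 on random toy data; as before, NMF stands in for the factorizer, and a fixed iteration cap replaces the convergence test.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
M, N, K = 40, 25, 3
T = rng.poisson(0.3, size=(M, N)).astype(float)
L = (rng.random((N, N)) < 0.15).astype(float); np.fill_diagonal(L, 0)
P = L / np.maximum(L.sum(axis=0, keepdims=True), 1)   # normalized links: step 4 needs P, not L

IA = rng.random((K, N))                               # step 1: random inferred attributes
for _ in range(20):                                   # step 2: fixed cap in place of a convergence test
    A = np.vstack([T, IA])                            # step 3: factor [T; IA]
    nmf = NMF(n_components=K, init="random", random_state=0, max_iter=300)
    U, V = nmf.fit_transform(A), nmf.components_
    IA = V @ P                                        # step 4: aggregate inferred attrs of inlinkers
```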
3.2
Recursive Attribute Factoring Experiments
To evaluate RAF, we used the same data sets and procedures as in Section 2.2, with results plotted
in Figure 7. It is perhaps not surprising that RAF by itself does not perform as well as AF on
² We use L and P interchangeably to represent contributions from inlinking documents, distinguishing them
only in the case of "recursive" equations, where it is important to normalize L to facilitate convergence.
the domains tested³; when available, explicit information is arguably more powerful than inferred
information.
It's important to realize, however, that AF and RAF are in no way exclusive of each other; when we
combine the two and propagate both explicit and implicit attributes, our performance is (satisfyingly)
better than with either alone (top lines in Figures 7(a) and (b)).
(a) Cora
(b) WebKB
Figure 7: RAF and AF+RAF results on Cora and WebKB datasets
4
Discussion: Other Forms of Attribute Factoring
Both Attribute Factoring and Recursive Attribute Factoring involve augmenting the term matrix
with a matrix (call it IA ) containing attributes of the inlinking documents, and then factoring the
augmented matrix:

        [ T  ]   [ U_T  ]
    A = [    ] ≈ [      ] V.        (4)
        [ IA ]   [ U_IA ]
The traditional joint model set IA = L; in Attribute Factoring we set IA = T·L and in Recursive
Attribute Factoring IA = V·P. In general though, we can set IA to be any matrix that aggregates
attributes of a document's inlinks.⁴ For AF we can replace the N-dimensional inlink vector with an
M-dimensional inferred vector d'_i such that d'_i = Σ_{j: L_ji = 1} w_j d_j, and then IA would be the matrix
with inferred attributes for each document, i.e., the i-th column of IA is d'_i. Different choices for w_j lead
to different weightings when aggregating attributes from the incoming documents; some variations
are summarized in Table 1.
    Summed function                              w_i        IA
    Attribute Factoring                          1          T·L
    Outdegree-normalized Attribute Factoring    P_ji        T·P
    PageRank-weighted Attribute Factoring       P_j         T·diag(P)·L
    PageRank- and outdegree-normalized          P_j P_ji    T·diag(P)·P

Table 1: Variations on attribute weighting for Attribute Factoring. (P_j is the PageRank of document j; diag(P) is the diagonal matrix of PageRank values.)
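The four IA variants of Table 1, written out on toy matrices. The PageRank vector pr is computed here with a simple damped power iteration over the outdegree-normalized links, a detail the table leaves open; treat it as one plausible choice rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 40, 25
T = rng.poisson(0.3, size=(M, N)).astype(float)
L = (rng.random((N, N)) < 0.15).astype(float); np.fill_diagonal(L, 0)
P = L / np.maximum(L.sum(axis=0, keepdims=True), 1)    # inlink-normalized, as in Section 1

Prow = L / np.maximum(L.sum(axis=1, keepdims=True), 1) # outdegree-normalized, for PageRank
pr = np.full(N, 1.0 / N)
for _ in range(100):                                   # rough PageRank with damping 0.85
    pr = 0.15 / N + 0.85 * (Prow.T @ pr)

IA_af   = T @ L                     # Attribute Factoring
IA_out  = T @ P                     # outdegree-normalized AF
IA_pr   = T @ np.diag(pr) @ L       # PageRank-weighted AF
IA_both = T @ np.diag(pr) @ P       # PageRank- and outdegree-normalized
```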
³ It is somewhat surprising (and disappointing) that RAF performs worse than the content-only model, but
other work [7] has posited situations when this may be expected.
⁴ This approach can, of course, be extended to also include attributes of the outlinked documents, but bibliometric analysis has historically found that inlinks are more informative about the nature of a document than
outlinks (echoing the Hollywood adage that "It's not who you know that matters - it's who knows you").
Extended Attribute Factoring: Recursive Attribute Factoring was originally motivated by the
"Fake Yahoo!" problem described in Section 3. While useful in conjunction with ordinary Attribute
Factoring, its recursive nature and lack of convergence guarantees are troubling. One way to simulate the desired effect of RAF in a closed form is to explicitly model the inlink attributes more than
just one level.⁵ For example, ordinary AF looks back one level at the (explicit) attributes of inlinking documents; we can extend this lookback to two levels by setting
IA = [T·L ; T·L·L]. The IA matrix would have 2M features (M attributes for inlinking
documents and another M for attributes of documents that linked to the inlinking documents). Still,
it would be possible, albeit difficult, for a determined spammer to fool this Extended Attribute Factoring (EAF) by mimicking two levels of the web's linkage. This can be combatted by adding a third
level to the model (IA = [T·L ; T·L² ; T·L³]), which increases the model complexity by
only a linear factor, but (due to the web's high branching) vastly increases the number of pages a
spammer would need to duplicate. It should be pointed out that these extended attributes rapidly
converge to the stationary distribution of terms on the web: T·L^∞ = T·eig(L), equivalent to
weighting inlinking attributes by a version of PageRank that omits random restarts. (As in Algorithm
1, P needs to be used instead of L to achieve convergence.)
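A small helper sketching the multi-level lookback; per the note above, the normalized P is used in place of L so that higher powers stay bounded. The function and its name are our own illustration.

```python
import numpy as np

def extended_attributes(T, P, levels=3):
    """Stack T@P, T@P@P, ..., one block per lookback level (EAF)."""
    blocks, X = [], T
    for _ in range(levels):
        X = X @ P                 # one more hop back through the link graph
        blocks.append(X)
    return np.vstack(blocks)      # (levels*M) x N matrix of inlink attributes

T = np.ones((4, 3)); P = np.full((3, 3), 1 / 3)
print(extended_attributes(T, P, levels=2).shape)   # (8, 3)
```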
Another PageRank Connection: While the vanilla RAF(+AF) gives good results, one can imagine many variations with interesting properties; one of them in particular is worth mentioning. A
smoothed version of the recursive equation can be written as

    [     T      ]   [  U_T   ]
    [            ] ≈ [        ] V.        (5)
    [ ε + α·V·P  ]   [ U_V·L  ]
This is the same basic equation as the RAF but multiplied by a damping factor α. This smoothed
RAF gives further insight into the workings of RAF itself once we look at a simpler version of it.
Starting with the original equation, let us first remove the explicit attributes. This reduces the equation
to ε + α·V·P ≈ U_V·L·V. For the case where U_V·L has a single dimension, the above
equation further simplifies to ε + α·V·P ≈ u·V.
For some constrained values of ε and α, we get ε + (1 − ε)·V·P ≈ V, which is just the equation
for PageRank [12]. This means that, in the absence of T's term data, the inferred attributes V
produced by smoothed RAF represent a sort of generalized, multi-dimensional PageRank, where
each dimension corresponds to authority on one of the inferred topics of the corpus.⁶ With the
terms of T added, the intuition is that V and the inferred attributes IA = V·P converge to a
trade-off between the generalized PageRank of the link structure and factor values for T in terms of the
prototypes U_T capturing term information.
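A quick numerical check of the reduction: with one inferred dimension and the constants chosen as above, the smoothed update is exactly a restart-style PageRank iteration. The toy link matrix and constants are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
N, eps = 8, 0.15
L = (rng.random((N, N)) < 0.3).astype(float); np.fill_diagonal(L, 0)
P = L / np.maximum(L.sum(axis=0, keepdims=True), 1)

v = np.full(N, 1.0 / N)              # the single-dimensional V, as a row vector
for _ in range(200):
    v = eps + (1 - eps) * (v @ P)    # eps + (1 - eps) * V * P ~= V at the fixed point
print(v)
```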
5
Summary
We have described a representational methodology for factoring web-scale corpora, incorporating
both content and link information. The main idea is to represent link information with attributes of
the inlinking documents rather than their explicit identities. Preliminary results on a small dataset
demonstrate that the technique not only makes the computation more tractable but also significantly
improves the quality of the resulting factors.
We believe that we have only scratched the surface of this approach; many issues remain to be
addressed, and undoubtedly many more remain to be discovered. We have no principled basis for
weighting the different kinds of attributes in AF and EAF; while RAF seems to converge reliably
in practice, we have no theoretical guarantees that it will always do so. Finally, in spite of our
motivating example being the ability to factor very large corpora, we have only tested our algorithms
on small "academic" data sets; applying the AF, RAF and EAF to a web-scale corpus remains as the
real (and as yet untried) criterion for success.
⁵ Many thanks to Daniel D. Lee for this insight.
⁶ This is related to, but distinct from, the generalization of PageRank described by Richardson and Domingos
[13], which is computed as a scalar quantity over each of the (manually-specified) lexical topics of the corpus.
References
[1] Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
[2] Thomas Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI'99, Stockholm, 1999.
[3] Daniel D. Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems 12, pages 556–562. MIT Press, 2000.
[4] H.D. White and B.C. Griffith. Author cocitation: A literature measure of intellectual structure. Journal of the American Society for Information Science, 1981.
[5] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632, 1999.
[6] David Cohn and Huan Chang. Learning to probabilistically identify authoritative documents. In Proc. 17th International Conf. on Machine Learning, pages 167–174. Morgan Kaufmann, San Francisco, CA, 2000.
[7] David Cohn and Thomas Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In Neural Information Processing Systems 13, 2001.
[8] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2002.
[9] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI-99), pages 1300–1309, Stockholm, Sweden, 1999. Morgan Kaufmann.
[10] Andrew K. McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163, 2000.
[11] T. Mitchell et al. The World Wide Knowledge Base Project (available at http://cs.cmu.edu/~WebKB). 1998.
[12] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1–7):107–117, 1998.
[13] Mathew Richardson and Pedro Domingos. The Intelligent Surfer: Probabilistic combination of link and content information in PageRank. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.
Implications for Attention, Competition and Categorization
Mark A. Gluck
Center for Molecular &
Behavioral Neuroscience
Rutgers University
Newark, NJ 07102

Stephen Jose Hanson*
Learning and Knowledge
Acquisition Group
Siemens Corporate Research
Princeton, NJ 08540
Abstract
Spherical Units can be used to construct dynamic reconfigurable
consequential regions, the geometric bases for Shepard's (1987) theory of
stimulus generalization in animals and humans. We derive from Shepard's
(1987) generalization theory a particular multi-layer network with dynamic
(centers and radii) spherical regions which possesses a specific mass function
(Cauchy). This learning model generalizes the configural-cue network model
(Gluck & Bower 1988): (1) configural cues can be learned and do not require
pre-wiring the power-set of cues, (2) Consequential regions are continuous
rather than discrete, and (3) competition amongst receptive fields is shown
to be increased by the global extent of a particular mass function (Cauchy).
We compare other common mass functions (Gaussian, used in the models of
Moody & Darken, 1989, and Kruschke, 1990) as well as standard backpropagation
networks with hyperplane/logistic hidden units, showing that neither fares as
well as models of human generalization and learning.
1 The Generalization Problem
Given a favorable or unfavorable consequence, what should an organism assume about
the contingent stimuli? If a moving shadow overhead appears prior to a hawk attack
what should an organism assume about other moving shadows, their shapes and
positions? If a dense food patch is occasioned by a particular density of certain kinds of
shrubbery what should the organism assume about other shurbbery, vegetation or its
spatial density? In an pattern recognition context, given a character of a certain shape,
orientation, noise level etc.. has been recognized correctly what should the system
assume about other shapes, orientations, noise levels it has yet to encounter?
* Also a member of Cognitive Science Laboratory, Princeton University, Princeton, NJ 08544
Many "generalization" theories assume stimulus similarity represents a "failure to
discriminate", rather than a cognitive decision about what to assume is consequential
about the stimulus event. In this paper we implement a generalization theory with
multilayer architecture and localized kernel functions (cf. Cooper, 1962; Albus 1975;
Kanerva, 1984; Hanson & Burr, 1987,1990; Niranjan & Fallside, 1988; Moody &
Darken, 1989; Nowlan, 1990; Krushke, 1990) in which the learning system constructs
hypotheses about novel stimulus events.
2 Shepard's (1987) Generalization Theory
Considerable empirical evidence indicates that when stimuli are represented within a
multi-dimensional psychological space, similarity, as measured by stimulus
generalization, drops off in an approximate exponential decay fashion with psychological
distance (Shepard, 1957, 1987). In comparison to a linear function, a similarity-distance
relationship with upwards concave curvature, such as an exponential-decay curve,
exaggerates the similarity of items which are nearby in psychological space and
minimizes the impact of items which are further away.
Recently, Roger Shepard (1987) has proposed a "Universal Law of Generalization" for
stimulus generalization which derives this exponential decay similarity-distance function
as a "rational" strategy given minimal information about the stimulus domain (see also
Shepard & Kannappan, this volume). To derive the exponential-decay similarity-distance rule, Shepard (1987) begins by assuming that stimuli can be placed within a
psychological space such that the response learned to any one stimulus will generalize to
another according to an invariant monotonic function of the distance between them. If a
stimulus, 0, is known to have an important consequence, what is the probability that a
novel test stimulus, X, will lead to the same consequence? Shepard shows, through
arguments based on probabilistic reasoning that regardless of the a priori expectations
for regions of different sizes, this expectation will almost always yield an approximate
exponentially decaying gradient away from a central memory point. In particular, very
simple geometric constraints can lead to the exponential generalization gradient.
Shepard (1987) assumes (1) that the consequential region overlaps the consequential
stimulus event. and (2) bounded center symmetric consequential regions of unknown
shape and size In the I-dimensional case it can be shown that g(x) is robust over a wide
variety of assumptions for the distribution of pes); although for pes) exactly the Erlangian
or discrete Gamma, g(x) is exactly Exponential.
We now investigate possible ways to implement a model which can learn consequential
regions and appropriate generalization behavior (cf. Shepard, 1990).
3 Gluck & Bower's Configural-cue Network Model
The first point of contact is the discrete model due to Gluck and Bower: the configural-cue network model (Gluck & Bower, 1988). The network model adapts its weights
(associations) according to Rescorla and Wagner's (1972) model of classical
conditioning, which is a special case of Widrow & Hoff's (1960) Least-Mean-Squares
(LMS) algorithm for training one-layer networks. Presentation of a stimulus pattern is
represented by activating nodes on the input layer which correspond to the pattern's
elementary features and pair-wise conjunctions of features.
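As a minimal illustration (our sketch, not the authors' code; the learning rate and the toy configural encoding are assumptions made for the example), the Rescorla-Wagner/LMS update and the configural-cue input encoding can be written as:

    import numpy as np

    def lms_update(w, x, target, lr=0.05):
        # One Rescorla-Wagner / Widrow-Hoff (LMS) step: move the weights
        # along the prediction error, gated by the active input features.
        error = target - w @ x
        return w + lr * error * x

    def configural_encoding(features):
        # Elementary features plus all pairwise conjunctions, as in the
        # configural-cue input layer.
        f = np.asarray(features, dtype=float)
        pairs = np.outer(f, f)[np.triu_indices(len(f), k=1)]
        return np.concatenate([f, pairs])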
The configural-cue network model implicitly embodies an exponential generalization
(similarity) gradient (Gluck, 1991) as an emergent property of its stimulus
representation scheme. This equivalence can be seen by computing how the number of
overlapping active input nodes (similarity) changes as a function of the number of
overlapping component cues (distance). If a stimulus pattern is associated with some
outcome, the configural-cue model will generalize this association to other stimulus
patterns in proportion to the number of common input nodes they both activate.
Although the configural cue model has been successful with various categorization data,
there are several limitations of the configural cue model: (1) it is discrete and cannot deal
adequately with continuous stimuli; (2) it possesses a non-adaptable internal
representation; and (3) it can involve pre-wiring the power set of possible cues.
Nonetheless, there are several properties that make the Configural Cue model successful
and that are important to retain for generalizations of this model: (a) the competitive stimulus
properties deriving from the delta rule, and (b) the exponential stimulus generalization
property deriving from the successive combinations of higher-order features encoded by
hidden units.
4 A Continuous Version of Shepard's Theory
We derive in this section a new model which generalizes the configural cue model and
derives directly from Shepard's generalization theory. In Figure 1 is shown a one-dimensional depiction of the present theory. Similar to Shepard, we assume there is a
consequential
Figure 1: Hypothesis Distributions based on Consequential Region
region associated with a significant stimulus event, O. Also similar to Shepard we
assume the learning system knows that the significant stimulus event is contained in the
consequential region, but does not know the size or location of the consquential region.
In absence of this information the learning system constructs hypothesis distributions
which may or may not be contained in the consequential region but at least overlap
the significant stimulus event with some finite probability measure. In some hypothesis
distributions the significant stimulus event is "typical" in the consequential region, in
other hypothesis distributions the significant stimulus event is "rare". Consequently, the
present model differs from Shepard's approach in that the learning system uses the
consequential region to project into a continuous hypothesis space in order to construct
the conditional probability of the novel stimulus, X, given the significant stimulus event
O.
Given no further information on the location and size of the consequential region the
learning system averages over all possible locations (equally weighted) and all possible
(equally weighted) variances over the known stimulus dimension:
    g(x) = ∫∫ p(s) p(c) H(x, s, c) dc ds                                    (1)
In order to derive particular gradients we must assume particular forms for the hypothesis
distribution, H(x,s,c). Although we have investigated many different hypothesis
distributions and weighting functions (p(c), p(s)), we only have space here to report on
two bounding cases, one with very "light tails", the Gaussian, and one with very "heavy
tails", the Cauchy (see Figure 2). These two distributions are extremes and provide a
test of the robustness of the generalization gradient. At the same time they represent
different commitments to the amount of overlap of hidden unit receptive fields and the
consequent amount of stimulus competition during learning.
'"
~
0
a=JfIl
I~
0
'"
0
0
0
...
Figure 2: Gaussian compared to the Cauchy: Note heavier Cauchy tail
Equation (1) was numerically integrated (using Mathematica) over a large range of
variances and a large range of locations using uniform densities representing the
weighting functions and both Gaussian and Cauchy distributions representing the
hypothesis distributions. Shown in Figure 3 are the results of the integrations for both the
Cauchy and Gaussian distributions. The resultant gradients are shown by open circles
(Cauchy) or stars (Gaussian) while the solid lines show the best fitting exponential
gradient. We note that they approximate the derived gradients rather closely in spite of
the fact the underlying forms are quite complex, for example the curve shown for the
Cauchy integration is actually:
    -5 Arctan(x - c2) + 0.01[Arctan(100(x - c1))] + 5 Arctan(x + c1)
    - 0.01[Arctan(100(x + c2))]
    - 1/2((c2 + x) log(1 - s1 x + x^2) + (c1 - x) log(s2 - s1 x + x^2))
    - 1/2((c1 - x) log(1 + s1 x + x^2) + (c2 + x) log(s2 + s1 x + x^2))      (2)
Consequently we confirm Shepard's original observation, for a continuous version^1 of his
theory, that the exponential gradient is a robust consequence of a minimum-information set
of assumptions about generalization to novel stimuli.
Figure 3: Generalization Gradients Compared to Exponential (Solid Lines)
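The integration behind Figure 3 is easy to reproduce numerically; the sketch below is our illustration (not the original Mathematica code), and the uniform ranges chosen for locations and scales are assumptions made for the example.

    import numpy as np

    def gaussian_pdf(x, c, s):
        return np.exp(-0.5 * ((x - c) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    def cauchy_pdf(x, c, s):
        return 1.0 / (np.pi * s * (1.0 + ((x - c) / s) ** 2))

    def generalization_gradient(x, hypothesis_pdf, c_range=(-5.0, 5.0),
                                s_range=(0.5, 10.0), n=200):
        # g(x) = double integral of p(s) p(c) H(x; c, s), Equation (1);
        # uniform weighting densities reduce the integral to a plain average.
        C, S = np.meshgrid(np.linspace(*c_range, n), np.linspace(*s_range, n))
        return hypothesis_pdf(x, C, S).mean()

    xs = np.linspace(0.0, 50.0, 60)
    g_cauchy = [generalization_gradient(x, cauchy_pdf) for x in xs]
    g_gauss = [generalization_gradient(x, gaussian_pdf) for x in xs]
    # Both gradients decay approximately exponentially with distance from
    # the memory point, as the fitted curves in Figure 3 show.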
4.1 Cauchy vs Gaussian
As pointed out before, the Cauchy has heavier tails than the Gaussian and thus provides
more global support in the feature space. This leads to two main differences in the
hypothesis distributions:
(1) Global vs local support: unlike back-propagation with hyperplanes, Cauchy units can be
local in the feature space and, unlike the Gaussian, can have more global effect.
(2) Competition, not dimensional scaling: dimensional "attention" in the configural-cue (CC) and Cauchy
multilayer network models is based on competition and effective allocation of resources
during learning rather than dimensional contraction or expansion.
^1 N-dimensional versions: we generalize the above continuous 1-d model to an N-dimensional model by
assuming that a network of Cauchy units can be used to construct a set of consequential regions, each
possibly composed of several Cauchy receptive fields. Consequently, dimensions can be differentially
weighted: subsets of Cauchy units acting in concert could produce metrics like L-1 norms in separable
(e.g., shape, size of arbitrary forms) dimension cases, while equally weighting dimensions produces metrics
like L-2 norms in integral (e.g., lightness, hue in color) dimension cases.
Since the stimulus generalization properties of both hypothesis distributions are
indistinguishable (both close to exponential), it is important to compare categorization
results based on a multilayer gradient descent model using both the Cauchy and Gaussian
as hidden node functions.
5 Comparisons with Human Categorization Performance
We consider in this section two experiments from the human learning literature which
constrain categorization results. The model was a multilayer network using standard
gradient descent in the radius, location and second layer weights of either Cauchy or
Gaussian functions in hidden units.
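For concreteness, a minimal forward pass for such a network is sketched below (our reconstruction, not the original code); centers, radii and output weights would all be adapted by gradient descent on the classification error.

    import numpy as np

    def cauchy_activations(X, centers, radii):
        # Heavy-tailed units: appreciable activation far from the center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return 1.0 / (1.0 + d2 / radii ** 2)

    def gaussian_activations(X, centers, radii):
        # Light-tailed units: activation is essentially local.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * radii ** 2))

    def forward(X, centers, radii, W, activations=cauchy_activations):
        # Hidden layer of spherical units followed by a linear output layer.
        return activations(X, centers, radii) @ W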
5.1 Shepard, Hovland and Jenkins (1961)
In order to investigate adults' ability to learn simple classifications, SH&J used eight 3-dimensional stimuli (corners of the cube) representing separable stimuli like shape, color
or size. Of the 70 possible 4-exemplar dichotomies there are only six unique 4-exemplar
dichotomies which ignore the specific stimulus dimension.
Figure 4: Classification Learning Rate for Gaussian and Cauchy on SHJ stimuli
These dichotomies involve both linearly separable and nonlinearly separable
classifications as well as selective dependence on a specific dimension or dimensions.
For both measures, trials to learn and the number of errors made during learning, the
order of difficulty was (easiest) I < II < III < IV < V < VI (hardest).
In Figure 4, both the Cauchy model and the Gaussian model were compared using the SHJ
stimuli. Note that the Gaussian model misorders the 6 classification tasks:
I < IV < III < II < V < VI, while the Cauchy model conforms with the human performance.
5.2 Medin and Schwanenflugel (1981)
Data suitable to illustrate the implications of this non-linear stimulus generalization
gradient for classification learning are provided by Medin and Schwanenflugel (1981).
They contrasted the performance of groups of subjects learning pairs of classification tasks,
one of which was linearly separable and one of which was not. One of the classifications
is linearly separable (LS) and the other is not (NLS).
Figure 5: Subjects (a), Cauchy (b), Gaussian (c) and Backprop (d) learning performance on the M&S stimuli.
An important difference between the tasks lies in how the between-category and within-category distances are distributed. The linearly separable task is composed of many
"close" (Hamming distance=l) and some "far" (Hamming distance=3) relations, while
the non-separable task has a broader distribution of "close", "medium", and "far"
between-category distances. These unequal distributions have important implications for
models which use a non-linear mapping from distance to similarity. Medin and
Schwanenflugel reported reliable and complete results with a four-dimensional task that
embodied the same controls for linear separability and inter-exemplar similarities. To
evaluate the relative difficulty of the two tasks, Medin & Schwanenflugel compared the
average learning curves of subjects trained on these stimuli. Subjects found the linearly
separable task (LS) more difficult than the non-linearly separable task (NLS), as
indicated by the reduced percentage of errors for the NLS task at all points during
training (see Figure 5, Subjects, top left). In Figure 5 are shown 10 runs of the Cauchy
model (top right); note that it, similar to the human performance, had more difficulty with
the LS than the NLS task. Below this frame are the results for the Gaussian
model (bottom left), which does show a slight advantage of learning the NLS task over
the LS task. In the final frame (bottom right) of this series, standard backprop
actually reverses the speed of learning of each task relative to human performance.
6 Conclusions
A continuous version of Shepard's (1987) generalization theory was derived, providing
for a specific Mass/Activation function (Cauchy) and receptive field distribution. The
Cauchy activation function is shown to account for a range of human learning
performance while another Mass/Activation function (Gaussian) does not. The present
model also generalizes the Configural Cue model to continuous, dynamic, internal
representation.
Attention-like effects are obtained through competition of Cauchy units for a fixed
resource rather than dimensional "shrinking" or "expansion", as in an explicit rescaling of
each axis.
Cauchy units are a compromise, providing more global support in approximation than
Gaussian units and more local support than the hyperplane/logistic units in
backpropagation models.
References
Albus, J. S. (1975) A new approach to manipulator control: The cerebellar model
articulation controller (CMAC), American Society of Engineers, Transactions G
(Journal of Dynamic Systems, Measurement and Control) 97(3):220-27.
Cooper, P. (1962) The hypersphere in pattern recognition. Information and Control, 5,
324-346.
M. A. Gluck (1991) Stimulus generalization and representation in adaptive network
models of category learning. Psychological Science, 2, 1, 1-6.
M. A. Gluck & G. H. Bower, (1988), Evaluating an adaptive network model of human
learning. Journal of Memory and Language, 27, 166-195.
Hanson, S. J. and Burr, D. J. (1987) Knowledge Representation in Connectionist
Networks, Bellcore Technical Report.
Hanson, S. J. and Burr, D. J. (1990) What Connectionist models learn: Learning and
Representation in Neural Networks. Behavioral and Brain Sciences.
Kanerva, P. (1984) Self propagating search: A unified theory of memory; Ph.D. Thesis,
Stanford University.
Kruschke, J. (1990) A connectionist model of category learning, Ph.D. Thesis, UC
Berkeley.
Medin, D. L., & Schwanenflugel, P. J. (1981) Linear separability in classification
learning. Journal of Experimental Psychology: Human Learning and Memory, 7,
355-368.
Moody, J. and Darken, C. (1989) Fast learning in networks of locally-tuned processing
units, Neural Computation, 1,2,281-294.
Niranjan M. & Fallside, F. (1988) Neural networks and radial basis functions in
classifying static speech patterns, Technical Report, CUED/FINFENG TR22,
Cambridge University.
Nowlan, S. (1990) Max Likelihood Competition in RBF Networks. Technical Report
CRG-TR-90-2, University of Toronto.
R. A. Rescorla & A. R. Wagner (1972) A theory of Pavlovian conditioning: Variations in
the effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F.
Prokasy (Eds.), Classical conditioning II: Current research and theory, 64-99,
Appleton-Century-Crofts: New York.
R. N. Shepard (1958), Stimulus and response generalization: Deduction of the
generalization gradient from a trace model, Psychological Review 65, 242-256
Shepard, R. N. (1987) Toward a Universal Law of Generalization for Psychological
Science. Science, 237.
R. N. Shepard, C. I. Hovland & H. M. Jenkins (1961), Learning and memorization of
classifications, Psychological Monographs, 75, 1-42
B. Widrow & M. E. Hoff (1960) Adaptive switching circuits, Institute of Radio
Engineers, Western Electronic Show and Convention, Convention Record, 4, 96-104.
2,170 | 2,970 | Sparse Kernel Orthonormalized PLS for feature
extraction in large data sets
Jerónimo Arenas-García, Kaare Brandt Petersen and Lars Kai Hansen
Informatics and Mathematical Modelling
Technical University of Denmark
DK-2800 Kongens Lyngby, Denmark
{jag,kbp,lkh}@imm.dtu.dk
Abstract
In this paper we are presenting a novel multivariate analysis method. Our scheme
is based on a novel kernel orthonormalized partial least squares (PLS) variant for
feature extraction, imposing sparsity constraints on the solution to improve scalability. The algorithm is tested on a benchmark of UCI data sets, and on the analysis
of integrated short-time music features for genre prediction. The upshot is that
the method has strong expressive power even with rather few features, is clearly
outperforming the ordinary kernel PLS, and therefore is an appealing method for
feature extraction of labelled data.
1 Introduction
Partial Least Squares (PLS) is, in its general form, a family of techniques for analyzing relations
between data sets by latent variables. It is a basic assumption that the information is overrepresented
in the data sets, and that these therefore can be reduced in dimensionality by the latent variables.
Exactly how these are found and how the data is projected varies within the approach, but they are
often maximizing the covariance of two projected expressions. One of the appealing properties of
PLS, which has made it popular, is that it can handle data sets with more dimensions than samples
and massive collinearity between the variables.
The basic PLS algorithm considers two data sets X and Y, where samples are arranged in rows, and
consists of finding latent variables which account for the covariance X^T Y between the data sets.
This is done either as an iterative procedure or as an eigenvalue problem. Given the latent variables,
the data sets X and Y are then transformed in a process which subtracts the information contained
in the latent variables. This process, which is often referred to as deflation, can be done in a number
of ways and these different approaches are defining the many variants of PLS.
Among the many variants of PLS, the one that has become particularly popular is the algorithm
presented in [17] and studied in further detail in [3]. The algorithm described in these will in this
paper be referred to as PLS2, and is based on the following two assumptions: First, that the latent
variables of X are good predictors of Y and, second, that there is a linear relation between the latent
variables of X and of Y. This linear relation is implying a certain deflation scheme, where the latent
variable of X is used to deflate also the Y data set. Several other variants of PLS exist such as "PLS
Mode A" [16], Orthonormalized PLS [18] and PLS-SB [11]; see [1] for a discussion of the early
history of PLS, [15] for a more recent and technical description, and [9] and for a very well-written
contemporary overview.
No matter how refined the various early developments of PLS become, they are still linear projections. Therefore, in the cases where the variables of the input and output spaces are not linearly
related, such data are still poorly handled. To counter this, different non-linear versions of PLS have been developed, and these can be categorized into two fundamentally different
approaches: 1) The modified PLS2 variants in which the linear relation between the latent variables
is substituted by a non-linear relation; and 2) the kernel variants in which the PLS algorithm has
been reformulated to fit a kernel approach. In the second approach, the input data is mapped by
a non-linear function into a high-dimensional space in which ordinary linear PLS is performed on
the transformed data. A central property of this kernel approach is, as always, the exploitation of
the kernel trick, i.e., that only the inner products in the transformed space are necessary and not the
explicit non-linear mapping. It was Rosipal and Trejo who first presented a non-linear kernel variant
of PLS in [7]. In that paper, the kernel matrix and the Y matrix are deflated in the same way, and
the PLS variant is thus more in line with the PLS2 variant than with the traditional algorithm from
1975 (PLS Mode A). The non-linear kernel PLS by Rosipal and Trejo is in this paper referred to as
simply KPLS2, although many details could advocate a more detailed nomenclature.
The appealing property of kernel algorithms in general is that one can obtain the flexibility of nonlinear expressions while still solving only linear equations. The downside is that for a data set of l
samples, the kernel matrices to be handled are l × l, which, even for a moderate number of samples,
quickly becomes a problem with respect to both memory and computing time. This problem is
present not only in the training phase, but also when predicting the output given some large training
data set: evaluating thousands of kernels for every new input vector is, in most applications, not
acceptable. Furthermore, there is, for these so-called dense solutions in multivariate analysis, also
the problem of overfitting. To counter the impractical dense solutions in kernel PLS, a few solutions
have been proposed: in [2], the feature mapping is directly approximated following the Nyström
method, and in [6] the underlying cost function is modified to impose sparsity.
In this paper, we introduce a novel kernel PLS variant called Reduced Kernel Orthonormalized
Partial Least Squares (rKOPLS) for large scale feature extraction. It consists of two parts: A novel
orthonormalized variant of kernel PLS called KOPLS, and a sparse approximation for large scale
data sets. Compared to related approaches like [8], the KOPLS is transforming only the input data,
and is keeping them orthonormal at two stages: the images in feature space and the projections in
feature space. The sparse approximation is along the lines of [4], that is, we are representing the
reduced kernel matrix as an outer product of a reduced and a full feature mapping, and thus keeping
more information than changing the cost function or doing simple subsampling.
Since rKOPLS is specifically designed to handle large data sets, our experimental work will focus on
such data sets, paying extra attention to the prediction of music genre, an application that typically
involves large amount of high dimensional data. The abilities of our algorithm to discover non-linear
relations between input and output data will be illustrated, as will be the relevance of the derived
features compared to those provided by an existing kernel PLS method.
The paper is structured as follows: In Section 2, the novel kernel orthonormalized PLS variant is
introduced, and in Section 3 the sparse approximation is presented. Section 4 shows numerical
results on UCI benchmark data sets, and on the above mentioned music application. In the last
section, the main results are summarized and discussed.
2 Kernel Orthonormalized Partial Least Squares
Consider that we are given a set of pairs {φ(x_i), y_i}, i = 1, . . . , l, with x_i ∈ R^N, y_i ∈ R^M, and φ(x) : R^N → F
a function that maps the input data into some Reproducing Kernel Hilbert Space (RKHS), usually
referred to as feature space, of very large or even infinite dimension. Let us also introduce the
matrices Φ = [φ(x_1), . . . , φ(x_l)]^T and Y = [y_1, . . . , y_l]^T, and denote by

    Φ' = ΦU   and   Y' = YV
two matrices, each one containing np projections of the original input and output data, U and V
being the projection matrices of sizes dim(F) × n_p and M × n_p, respectively. The objective of
(kernel) Multivariate Analysis (MVA) algorithms is to search for projection matrices such that the
projected input and output data are maximally aligned. For instance, Kernel Canonical Correlation
Analysis (KCCA) finds the projections that maximize the correlation between the projected data,
while Kernel Partial Least Squares (KPLS) provides the directions for maximum covariance:
    KPLS:    maximize:   Tr{U^T Φ̃^T Ỹ V}
             subject to: U^T U = V^T V = I                                  (1)

where Φ̃ and Ỹ are centered versions of Φ and Y, respectively, I is the identity matrix of size n_p,
and the T superscript denotes matrix or vector transposition. In this paper, we propose a kernel
extension of a different MVA method, namely, the Orthonormalized Partial Least Squares [18]. Our
proposed kernel variant, called KOPLS, can be stated in the kernel framework as
?TY
?Y
? T ?U}
?
maximize: Tr{UT ?
(2)
T ?T ?
subject to: U ? ?U = I
Note that, unlike KCCA or KPLS, KOPLS only extracts projections of the input data. It is known
that Orthonormalized PLS is optimal for performing linear regression on the input data when a
bottleneck is imposed for data dimensionality reduction [10]. Similarly, KOPLS provides optimal
projections for linear multi-regression in feature space. In other words, the solution to (2) also
minimizes the sum of squares of the residuals of the approximation of the label matrix:
    ||Ỹ − Φ̃' B̂||²_F ,    B̂ = (Φ̃'^T Φ̃')^{-1} Φ̃'^T Ỹ                       (3)

where || · ||_F denotes the Frobenius norm of a matrix and B̂ is the optimal regression matrix. Similarly to other MVA methods, KOPLS is not only useful for multi-regression problems, but it can
also be used as a very powerful kernel feature extractor in supervised problems, including also the
multi-label case, when Y is used to encode class membership information. The optimality condition suggests that the features obtained by KOPLS will be more relevant than those provided by
other MVA methods, in the sense that they will allow similar or better accuracy rates using fewer
projections, a conjecture that we will investigate in the experiments section of the paper.
Coming back to the KOPLS optimization problem, when projecting data into an infinite dimensional
space, we need to use the Representer Theorem that states that each of the projection vectors in U
can be expressed as a linear combination of the training data. Then, introducing U = Φ̃^T A into (2), where A = [α_1, . . . , α_{n_p}] and α_i is an l-length column vector containing the coefficients for the ith projection vector, the maximization problem can be reformulated as:
    maximize:   Tr{A^T K_x K_y K_x A}
    subject to: A^T K_x K_x A = I                                           (4)
where we have defined the centered kernel matrices K_x = Φ̃ Φ̃^T and K_y = Ỹ Ỹ^T, such that only inner products in F are involved^1. Applying ordinary linear algebra to (4), it can be shown that the
columns of A are given by the solutions to the following generalized eigenvalue problem:
    K_x K_y K_x α = λ K_x K_x α                                             (5)
There are a number of ways to solve the above problem. We propose a procedure consisting of
iteratively calculating the best projection vector, and then deflating the involved matrices. In short,
the optimization procedure at step i consists of the following two differentiated stages:
1. Find the largest generalized eigenvalue of (5), and its corresponding generalized eigenvector: {λ_i, α_i}. Normalize α_i so that the condition α_i^T K_x K_x α_i = 1 is satisfied.
2. Deflate the l × l matrix K_x K_y K_x according to:

       K_x K_y K_x ← K_x K_y K_x − λ_i K_x K_x α_i α_i^T K_x K_x
The motivation for this deflation strategy can be found in [13], in the discussion of generalized eigenvalue problems. Some intuition can be obtained if we observe its equivalence
with
       K_y ← K_y − λ_i K_x α_i α_i^T K_x
which accounts for removing from the label matrix Y the best approximation based on
the projections computed at step i, i.e., K_x α_i. It can be shown that this deflation scheme
decreases by 1 the rank of K_x K_y K_x at each step. Since the rank of the original matrix K_y
is at most rank(Y), this is the maximum number of projections that can be derived when
using KOPLS.
This iterative algorithm, which is very similar in nature to the iterative algorithms used for other
MVA approaches, has the advantage that, at every iteration, the achieved solution is optimal with
respect to the current number of projections.
^1 Centering of data in feature space can easily be done from the original kernel matrix. Details on this
process are given in most text books describing kernel methods, e.g. [13, 12].
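A compact sketch of the two-stage procedure is given below (our illustration of Equations (2)-(5); the small ridge term added to K_x K_x is an assumption introduced for numerical stability, since that matrix is typically rank deficient):

    import numpy as np
    from scipy.linalg import eigh

    def kopls(Kx, Ky, n_proj, eps=1e-8):
        # Iteratively solve Kx Ky Kx a = lam Kx Kx a (Eq. 5) with deflation.
        left = Kx @ Ky @ Kx                       # deflated at every step
        right = Kx @ Kx + eps * np.eye(len(Kx))   # regularized for stability
        A, lams = [], []
        for _ in range(n_proj):
            w, V = eigh(left, right)              # generalized eigenproblem
            lam, a = w[-1], V[:, -1]              # largest eigenpair
            a = a / np.sqrt(a @ right @ a)        # normalize: a^T Kx Kx a = 1
            left = left - lam * np.outer(right @ a, right @ a)
            A.append(a)
            lams.append(lam)
        return np.column_stack(A), np.array(lams)

New data is then projected by multiplying its (centered) kernel matrix against the training points with the returned coefficient matrix A.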
3 Compact approximation of the KOPLS solution
The kernel formulation of the OPLS algorithm we have just presented suffers some drawbacks. In
particular, as most other kernel methods, KOPLS requires the computation and storage of a kernel
matrix of size l × l, which limits the maximum size of the datasets where the algorithm can be
applied. In addition to this, algebraic procedures to solve the generalized eigenvalue problem (5)
normally require the inversion of the matrix K_x K_x, which is usually rank deficient. Finally, the matrix
A will in general be dense rather than sparse, a fact which implies that when new data needs to be
projected, it will be necessary to compute the kernels between the new data and all the samples in
the training data set.
Although it is possible to think of different solutions for each of the above issues, our proposal here
is to impose sparsity in the projection vectors representation, i.e., we will use the approximation
U = Φ_R^T B, where Φ_R is a subset of the training data containing only R patterns (R < l) and
B = [β_1, . . . , β_{n_p}] contains the parameters of the compact model. Although more sophisticated
strategies can be followed in order to select the training data to be incorporated into the basis Φ_R,
we will rely on random selection, very much in the line of the sparse greedy approximation proposed
in [4] to reduce the computational burden of Support Vector Machines (SVMs).
Replacing U in (2) by its approximation, we get an alternative maximization problem that constitutes
the basis for a KOPLS algorithm with reduced complexity (rKOPLS):
    rKOPLS:  maximize:   Tr{B^T K_R K_y K_R^T B}
             subject to: B^T K_R K_R^T B = I                                (6)
where we have defined K_R = Φ_R Φ̃^T, which is a reduced kernel matrix of size R × l. Note that, to
keep the algorithm as simple as possible, we decided not to center the patterns in the basis Φ_R. Our
simulation results suggest that centering Φ_R does not result in improved performance. Similarly
to the standard KOPLS algorithm, the projections for the rKOPLS algorithm can be obtained by
solving
    K_R K_y K_R^T β = λ K_R K_R^T β                                         (7)
The iterative two-stage procedure described at the end of the previous section can still be used by
simple replacement of the following matrices and variables:
    KOPLS           rKOPLS
    α_i             β_i
    K_x K_x         K_R K_R^T
    K_x K_y K_x     K_R K_y K_R^T
To conclude the presentation of the rKOPLS algorithm, let us summarize some of its more relevant
properties, and how it solves the different limitations of the standard KOPLS formulation:
• Unlike KOPLS, the solution provided by rKOPLS is enforced to be sparse, so that new
data is projected with only R kernel evaluations per pattern (in contrast to l evaluations for
KOPLS). This is a very desirable property, especially when dealing with large data sets.
• Training rKOPLS projections only requires the computation of a reduced kernel matrix K_R
of size R × l. Nevertheless, note that the approach we have followed is very different to
subsampling, since rKOPLS is still using all training data in the MVA objective function.
• The rKOPLS algorithm only needs the matrices K_R K_R^T and K_R K_y K_R^T. It is easy to show
that both matrices can be calculated without explicitly computing K_R (see the sketch after
this list), so that memory requirements go down to O(R²) and O(RM), respectively. Again,
this is a very convenient property when dealing with large scale problems.
• The parameter R acts as a sort of regularizer, making K_R K_R^T full rank.
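The sketch below illustrates this memory argument (our code, with an arbitrary chunk size; it assumes the label matrix Y has already been centered, so that K_y = Y Y^T):

    import numpy as np

    def rkopls_matrices(kernel, X_basis, X_train, Y_train, chunk=1000):
        # Accumulate K_R K_R^T (R x R) and K_R Y (R x M) over blocks of the
        # training data, so the full R x l matrix K_R is never stored.
        R = len(X_basis)
        KRKRt = np.zeros((R, R))
        KRY = np.zeros((R, Y_train.shape[1]))
        for start in range(0, len(X_train), chunk):
            Kc = kernel(X_basis, X_train[start:start + chunk])  # R x chunk
            KRKRt += Kc @ Kc.T
            KRY += Kc @ Y_train[start:start + chunk]
        # With K_y = Y Y^T, we have K_R K_y K_R^T = (K_R Y)(K_R Y)^T.
        return KRKRt, KRY @ KRY.T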
Table 1 compares the complexity of KOPLS and rKOPLS, as well as that of the KPLS2 algorithm.
Note that KPLS2 does not admit a compact formulation as the one we have used for the new method,
since the full kernel matrix is still needed for the deflation step. The main inconvenience of rKOPLS
in relation to KPLS2 is that it requires the inversion of a matrix of size R × R. However, this normally
                            KOPLS                   rKOPLS                      KPLS2
    Number of nodes         l                       R                           l
    Size of kernel matrix   l × l                   R × l                       l × l
    Storage requirements    O(l²)                   O(R²)                       O(l²)
    Maximum n_p             ≤ min{r(Φ), r(Y)}       ≤ min{R, r(Φ), r(Y)}        ≤ r(Φ)
Table 1: Summary of the most relevant characteristics of the proposed KOPLS and rKOPLS algorithms. Complexity for KPLS2 is also included for comparison purposes. We denote the rank of a
matrix with r(·).
    Dataset         # Train/Test      # Classes   dim   ν-SVM (linear) (%)
    vehicle         500 / 346         4           18    66.18
    segmentation    1310 / 1000       7           18    91.7
    optdigits       3823 / 1797       10          64    96.33
    satellite       4435 / 2000       6           36    83.25
    pendigits       7494 / 3498       10          16    94.77
    letter          10000 / 10000     26          16    79.81

Table 2: UCI benchmark datasets. Accuracy rates for a linear ν-SVM are also provided.
pays off in terms of reduction of computational time and storage requirements. In addition to this,
our extensive simulation work shows that the projections provided by rKOPLS are generally more
relevant than those of KPLS2.
4
Experiments
In this section, we will illustrate the ability of rKOPLS to discover relevant projections of the data.
To do this, we compare the discriminative power of the features extracted by rKOPLS and KPLS2 in
several multi-class classification problems. In particular, we include experiments on a benchmark of
problems taken from the repository at the University of California Irvine (UCI)^2, and on a musical
genre classification problem. This latter task is a good example of an application where rKOPLS can
be especially useful, given the fact that the extraction of features from the raw audio data normally
results in very large data sets of high dimensional data.
4.1
UCI Benchmark Data Sets
We start by analyzing the performance of our method in six standard UCI multi-class classification
problems. Table 2 summarizes the main properties of the problems that constitute our benchmark.
The last four problems can be considered large problems for MVA algorithms, which are in general
not sparse and require the computation of the kernels between any two points in the training set.
Our first set of experiments consists of comparing the discriminative performance of the features
calculated by rKOPLS and KPLS2. For classification, we use one of the simplest possible models:
we compute the pseudoinverse of the projected training data to calculate B̂ (see Eq. (3)), and then
classify according to Φ̃' B̂ using a "winner-takes-all" (w.t.a.) activation function. For the kernel
MVA algorithms we used a Gaussian kernel
    k(x_i, x_j) = exp(−||x_i − x_j||² / 2σ²)

using 10-fold cross-validation (10-CV) on the training set to estimate σ. To obtain some reference
accuracy rates, we also trained a ν-SVM with Gaussian kernel, using the LIBSVM implementation^3,
and 10-CV was carried out for both the kernel width and ν.
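Putting the pieces together, the classification protocol can be sketched as follows (our illustration; it assumes a trained rKOPLS coefficient matrix B and one-hot encoded labels):

    import numpy as np

    def project(kernel, X, X_basis, B):
        # rKOPLS features: only R kernel evaluations per pattern.
        return kernel(X, X_basis) @ B               # (n x R) @ (R x n_p)

    def fit_linear_map(Z_train, Y_onehot):
        # Least-squares regression of the labels on the projections, Eq. (3).
        return np.linalg.pinv(Z_train) @ Y_onehot

    def predict(Z, B_hat):
        # "Winner-takes-all": pick the class with the largest output.
        return np.argmax(Z @ B_hat, axis=1)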
Accuracy rates for rKOPLS and different values of R are displayed in the first rows and first
columns of Table 3. Comparing these results with SVM (under the rbf-SVM column), we can
^2 http://www.ics.uci.edu/~mlearn/MLRepository.html
^3 Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
    Pseudoinverse + w.t.a.:
                   rKOPLS                                   KPLS2
                   R=250       R=500       R=1000          l0=√(250·l)  l0=√(500·l)  l0=√(1000·l)
    vehicle        80.4 ± 1.2  79.9        —               81.3 ± 1.3   80.5         —
    segmentation   95.7 ± 0.4  95.5 ± 0.3  —               93.9 ± 0.5   94.2 ± 0.5   —
    optdigits      97.4 ± 0.2  97.7 ± 0.1  98.2 ± 0.2      96.5 ± 0.3   97 ± 0.3     97 ± 0.2
    satellite      89.8 ± 0.2  90.6 ± 0.3  91 ± 0.2        89.7 ± 0.4   90.3 ± 0.6   91.1 ± 0.3
    pendigits      97.6 ± 0.1  98.2 ± 0.1  98.1 ± 0.2      97.4 ± 0.2   97.6 ± 0.1   97.7 ± 0.2
    letter         84.8 ± 0.3  90 ± 0.2    92.9 ± 0.4      84 ± 0.6     86 ± 0.6     86.2 ± 0.4

    Linear ν-SVM classifier:
                   rKOPLS                                   KPLS2
                   R=250       R=500       R=1000          l0=√(250·l)  l0=√(500·l)  l0=√(1000·l)  l0=l   rbf-SVM
    vehicle        81.2 ± 1    80.3        —               81.2 ± 1.1   80.6         —             80.5   83
    segmentation   95.1 ± 2    95.4 ± 0.4  —               95.6 ± 0.5   94.8 ± 0.3   —             95.1   95.2
    optdigits      97.3 ± 0.2  97.6 ± 0.1  98.2 ± 0.2      96.4 ± 0.2   96.9 ± 0.2   96.9 ± 0.3    97.6   97.2
    satellite      89.6 ± 0.6  90.5 ± 0.4  91 ± 0.2        89.7 ± 0.5   90.4 ± 0.6   90.8 ± 0.5    91.8   91.9
    pendigits      97.6 ± 0.2  98.2 ± 0.1  98.1 ± 0.2      96.9 ± 0.1   97.1 ± 0.2   97.3 ± 0.2    96.9   98.1
    letter         88.8 ± 1.5  92.1 ± 0.2  93.9 ± 0.3      85.8 ± 0.5   85.9 ± 1.1   87.7 ± 1.2    —      96.2
Table 3: Classification performance on the benchmark of UCI datasets. Accuracy rates (%) and standard deviation of the estimation are given for 10 different runs of rKOPLS and KPLS2, both when
using the pseudoinverse of the projected data together with the "winner-takes-all" activation function (first rows), and when using a ν-SVM linear classifier (last rows). The results achieved by an
SVM with Gaussian kernel are also provided in the bottom right corner.
conclude that the rKOPLS approach is very close in performance to, or better than, SVM in four out
of the six problems. A clearly worse performance is observed in the smallest data set (vehicle) due
to overfitting. For letter, we can see that, even for R = 1000, accuracy rates are far from those of
SVM. The reason for this is that SVM is using 6226 support vectors, so that a very dense architecture
seems to be necessary for this particular problem.
To make a fair comparison with the KPLS2 method, the training dataset was subsampled,
selecting at random l0 samples, with l0 being the first integer larger than or equal to √(R · l). In this way,
both rKOPLS and KPLS2 need the same number of kernel evaluations. Note that, even in this
case, KPLS2 results in an architecture with l0 nodes (l0 > R), so that projections of data are more
expensive than for the respective rKOPLS. In any case, we must point out that subsampling was only
considered for training the projections, but all training data was used to compute the pseudoinverse
of the projected training data. Results without subsampling are also provided in Table 3 under the
l0 = l column except for the letter data set which we were unable to process due to massive memory
problems.
As a first comment, we have to point out that all the results for KPLS2 were obtained using 100
projections, which were necessary to guarantee the convergence of the method. In contrast to this,
the maximum number of projections that the rKOPLS can provide equals the rank of the label matrix,
i.e., the number of classes of each problem minus 1. In spite of using a much smaller number of
projections, our algorithm performed significantly better than KPLS2 with subsampling in four out
of the five largest problems.
As a final set of experiments, we have replaced the classification step by a linear ν-SVM. The results,
which are displayed in the bottom part of Table 3, are in general similar to those obtained with the
pseudoinverse approach, both for rKOPLS and KPLS2. However, we can see that the linear SVM is
able to better exploit the projections provided by the MVA methods in vehicle and letter, precisely
the two problems where previous results were less satisfactory.
Based on the above set of experiments, we can conclude that rKOPLS provides more discriminative
features than KPLS2. In addition to this, these projections are more "informative", in the sense that
we can obtain a better recognition accuracy using a smaller number of projections. An additional
advantage of rKOPLS in relation to KPLS2 is that it provides architectures with fewer nodes.
4.2 Feature Extraction for Music Genre Classification
In this subsection we consider the problem of predicting the genre of a song using the audio data
only, a task which since the seminal paper [14] has been the subject of much interest. The data set we
[Figure 1: panel (a) plots accuracy rates against R (rKOPLS) and l0 (KPLS2), including a random-guess baseline; panel (b) plots accuracy rates against the number of projections. Curves are shown for both AR-frame-level and song-level classification with KPLS2 and rKOPLS.]
Figure 1: Genre classification performance of KPLS2 and rKOPLS.
analyze has been previously investigated in [5], and consists of 1317 snippets each of 30 seconds
distributed evenly among 11 music genres: alternative, country, easy listening, electronica, jazz,
latin, pop&dance, rap&hip-hop, r&b, reggae and rock. The music snippets are MP3 (MPEG-1 Layer 3) encoded music with a bitrate of 128 kbps or higher, downsampled to 22050 Hz, and they
are processed following the method in [5]: MFCC features are extracted from overlapping frames
of the song, using a window size of 20 ms. Then, to capture temporal correlation, a Multivariate
Autoregressive (AR) model is fitted for every 1.2 seconds of the song, and finally the parameters
of the AR model are stacked into a 135-dimensional feature vector for every such frame.
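A sketch of this feature pipeline is given below (our illustration; the librosa calls, the number of MFCCs and the plain least-squares AR fit are assumptions, since the exact recipe, including the terms that make up the 135 dimensions, follows [5]):

    import numpy as np
    import librosa

    def var_coefficients(X, order=3):
        # Least-squares fit of a multivariate AR model X_t = sum_k A_k X_{t-k}.
        T, d = X.shape
        targets = X[order:]
        lagged = np.hstack([X[order - k:T - k] for k in range(1, order + 1)])
        A, *_ = np.linalg.lstsq(lagged, targets, rcond=None)
        return A                                     # (d * order) x d matrix

    def song_features(path, sr=22050, n_mfcc=6, order=3):
        y, _ = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # frames x d
        block = int(1.2 * sr / 512)                  # ~1.2 s of 512-sample hops
        feats = [var_coefficients(mfcc[s:s + block], order).ravel()
                 for s in range(0, len(mfcc) - block, block)]
        return np.array(feats)                       # one AR vector per frame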
For training and testing the system we have split the data set into two subsets with 817 and 500 songs,
respectively. After processing the audio data, we have 57388 and 36556 135-dimensional vectors
in the training and test partitions, an amount which for most kernel MVA methods is prohibitively
large. For the rKOPLS, however, the compact representation is enabling usage of the entire training
data.
In Figure 1 the results are shown. Note that, in this case, comparisons between rKOPLS and KPLS2
are for a fixed architecture complexity (R = l0 ), since the most significant computational burden
for the training of the system is in the projection of the data. Since every song consists of about
seventy AR vectors, we can measure the classification accuracy in two different ways: 1) On the
level of individual AR vectors or 2) by majority voting among the AR vectors of a given song. The
results shown in Figure 1 are very clear: Compared to KPLS2, the rKOPLS is not only consistently
performing better as seen in Figure 1(a), but is also doing so with much fewer projections. The strong
results are very pronounced in Figure 1(b) where, for R = 750, rKOPLS is outperforming ordinary
KPLS, and is doing so with only ten projections compared to fifty projections of the KPLS2. This
demonstrates that the features extracted by rKOPLS hold much more information relevant to the
genre classification task than KPLS2.
5 Conclusions
In this paper we have presented a novel kernel PLS algorithm, which we call reduced kernel orthonormalized PLS (rKOPLS). Compared to similar approaches, rKOPLS makes the data in feature
space orthonormal and imposes sparsity on the solution to ensure competitive performance on
large data sets.
Our method has been tested on a benchmark of UCI data sets, and we have found that the results
were competitive in comparison to those of rbf-SVM, and superior to those of the ordinary KPLS2
method. Furthermore, when applied to a music genre classification task, rKOPLS performed very
well even with only a few features, while also keeping the complexity of the algorithm under control.
Because of the nature of music data, in which both the number of dimensions and samples are very
large, we believe that feature extraction methods such as rKOPLS can become crucial to music
information retrieval tasks, and hope that other researchers in the community will be able to benefit
from our results.
Acknowledgments
This work was partly supported by the Danish Technical Research Council, through the framework
project ?Intelligent Sound?, www.intelligentsound.org (STVF No. 26-04-0092), and by the Spanish
Ministry of Education and Science with a Postdoctoral Felowship to the first author.
References
[1] Paul Geladi. Notes on the history and nature of partial least squares (PLS) modelling. Journal
of Chemometrics, 2:231?246, 1988.
[2] L. Hoegaerts, J. A. K. Suykens, J. Vandewalle, and B. De Moor. Primal space sparse kernel partial least squares regression for large problems. In Proceedings of International Joint
Conference on Neural Networks (IJCNN), 2004.
[3] Agnar Hoskuldsson. PLS regression methods. Journal of Chemometrics, 2:211?228, 1988.
[4] Yuh-Jye Lee and O. L. Mangasarian. RSVM: reduced support vector machines. In Data
Mining Institute Technical Report 00-07, July 2000. CD Proceedings of the SIAM International
Conference on Data Mining, Chicago, April 5-7, 2001,, 2001.
[5] Anders Meng, Peter Ahrendt, Jan Larsen, and Lars Kai Hansen. Temporal feature integration
for music genre classification. IEEE Trans. Audio, Speech & Language Process., to appear.
[6] Michinari Momma and Kristin Bennett. Sparse kernel partial least squares regression. In
Proceedings of Conference on learning theory (COLT), 2003.
[7] Roman Rosipal and Leonard J. Trejo. Kernel partial least squares regression in reproducing
kernel Hilbert space. Journal of Machine Learning Research, 2:97-123, 2001.
[8] Roman Rosipal, Leonard J. Trejo, and Bryan Matthews. Kernel pls-svc for linear and nonlinear
classification. In Proceedings of the International Conference on Machine Learning (ICML), 2003.
[9] R. Rosipal and N. Kramer. Overview and recent advances in partial least squares. In Subspace,
Latent Structure and Feature Selection Techniques, 2006.
[10] Sam Roweis and Carlos Brody. Linear heteroencoders. Technical report, Gatsby Computational Neuroscience Unit, 1999.
[11] Paul D. Sampson, Ann P. Streissguth, Helen M. Barr, and Fred L. Bookstein. Neurobehavioral effects of prenatal alcohol: Part II. Partial Least Squares analysis. Neurotoxicology and
teratology, 11:477-491, 1989.
[12] Bernhard Schoelkopf and Alexander Smola. Learning with kernels. MIT Press, 2002.
[13] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge
University Press, 2004.
[14] George Tzanetakis and Perry Cook. Music genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5):293-302, July 2002.
[15] Jacob A. Wegelin. A survey of partial least squares (PLS) methods, with emphasis on the
two-block case. Technical report, University of Washington, 2000.
[16] Herman Wold. Path models with latent variables: the NIPALS approach. In Quatitative sociology: International perspectives on mathematical and statistical Model Building, pages
307-357. Academic Press, 1975.
[17] S. Wold, C. Albano, W. J. Dunn, U. Edlund, K. Esbensen, P. Geladi, S. Hellberg, E. Johansson, W. Lindberg, and M. Sjostrom. Chemometrics, Mathematics and Statistics in Chemistry,
chapter Multivariate Data Analysis in Chemistry, page 17. Reidel Publishing Company, 1984.
[18] K. Worsley, J. Poline, K. Friston, and A. Evans. Characterizing the response of pet and fMRI
data using multivariate linear models (MLM). NeuroImage, 6:305-319, 1998.
Learning to Rank with Nonsmooth Cost Functions
Christopher J.C. Burges
Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA
cburges@microsoft.com
Robert Ragno
Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA
robert.ragno@microsoft.com
Quoc Viet Le
Statistical Machine Learning Program, NICTA, ACT 2601, Australia
quoc.le@anu.edu.au
Abstract
The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted
order of the documents returned for a given query. Thus, the derivatives of the
cost with respect to the model parameters are either zero, or are undefined. In
this paper, we propose a class of simple, flexible algorithms, called LambdaRank,
which avoids these difficulties by working with implicit cost functions. We describe LambdaRank using neural network models, although the idea applies to
any differentiable function class. We give necessary and sufficient conditions for
the resulting implicit cost function to be convex, and we show that the general
method has a simple mechanical interpretation. We demonstrate significantly improved accuracy, over a state-of-the-art ranking algorithm, on several datasets. We
also show that LambdaRank provides a method for significantly speeding up the
training phase of that ranking algorithm. Although this paper is directed towards
ranking, the proposed method can be extended to any non-smooth, multivariate cost function.
1 Introduction
In many inference tasks, the cost function¹ used to assess the final quality of the system is not the one
used during training. For example for classification tasks, an error rate for a binary SVM classifier
might be reported, although the cost function used to train the SVM only very loosely models the
number of errors on the training set, and similarly neural net training uses smooth costs, such as
MSE or cross entropy. Thus often in machine learning tasks, there are actually two cost functions:
the desired cost, and the one used in the optimization process. For brevity we will call the former the
"target" cost, and the latter the "optimization" cost. The optimization cost plays two roles: it is chosen
to make the optimization task tractable (smooth, convex etc.), and it should approximate the desired
cost well. This mismatch between target and optimization costs is not limited to classification tasks,
and is particularly acute for information retrieval. For example, [10] list nine target quality measures
that are commonly used in information retrieval, all of which depend only on the sorted order of the
documents² and their labeled relevance. The target costs are usually averaged over a large number
of queries to arrive at a single cost that can be used to assess the algorithm. These target costs
present severe challenges to machine learning: they are either flat (have zero gradient with respect
to the model scores), or are discontinuous, everywhere. It is very likely that a significant mismatch
between the target and optimizations costs will have a substantial adverse impact on the accuracy of
the algorithm.
1. Throughout this paper, we will use the terms "cost function" and "quality measure" interchangeably, with the understanding that the cost function is some monotonically decreasing function of the corresponding quality measure.
2. For concreteness we will use the term "documents" for the items returned for a given query, although the returned items can be more general (e.g. multimedia items).
In this paper, we propose one method for attacking this problem. Perhaps the first approach that
comes to mind would be to design smoothed versions of the cost function, but the inherent "sort"
makes this very challenging. Our method bypasses the problems introduced by the sort, by defining
a virtual gradient on each item after the sort. The method is simple and very general: it can be used
for any target cost function. However, in this paper we restrict ourselves to the information retrieval
domain. We show that the method gives significant benefits (for both training speed, and accuracy)
for applications of commercial interest.
Notation: for the search problem, we denote the score of the ranking function by $s_{ij}$, where $i = 1, \ldots, N_Q$ indexes the query, and $j = 1, \ldots, n_i$ indexes the documents returned for that query. The general cost function is denoted $C(\{s_{ij}\}, \{l_{ij}\})$, where the curly braces denote sets of cardinality $n_i$, and where $l_{ij}$ is the label of the $j$-th document returned for the $i$-th query, where $j$ indexes the documents sorted by score. We will drop the query index $i$ when the meaning is clear. Ranked lists
are indexed from the top, which is convenient when list length varies, and to conform with the notion
that high rank means closer to the top of the list, we will take "higher rank" to mean "lower rank index". Terminology: for neural networks, we will use "fprop" and "backprop" as abbreviations for a forward pass, and for a weight-updating backward pass, respectively. Throughout this paper we also use the term "smooth" to denote $C^1$ (i.e. with first derivatives everywhere defined).
2 Common Quality Measures Used in Information Retrieval
We list some commonly used quality measures for information retrieval tasks: see [10] and references therein for details. We distinguish between binary and multilevel measures: for binary measures, we assume labels in {0, 1}, with 1 meaning relevant and 0 meaning not. Average Precision is
a binary measure where for each relevant document, the precision is computed at its position in the
ordered list, and these precisions are then averaged over all relevant documents. The corresponding
quantity averaged over queries is called "Mean Average Precision". Mean Reciprocal Rank (MRR) is also a binary measure: if $r_i$ is the rank of the highest ranking relevant document for the $i$-th query, then the MRR is just the reciprocal rank, averaged over queries: $\mathrm{MRR} = \frac{1}{N_Q} \sum_{i=1}^{N_Q} 1/r_i$. MRR was
used, for example, in TREC evaluations of Question Answering systems, before 2002 [14]. Winner
Takes All (WTA) is a binary measure for which, if the top ranked document for a given query is relevant, the WTA cost is zero, otherwise it is one. WTA is used, for example, in TREC evaluations of
Question Answering systems, after 2002 [14]. Pair-wise Correct is a multilevel measure that counts
the number of pairs that are in the correct order, as a fraction of the maximum possible number of
such pairs, for a given query. In fact for binary classification tasks, the pair-wise correct is the same
as the AUC, which has led to work exploring optimizing the AUC using ranking algorithms [15, 3].
bpref biases the pairwise correct to the top part of the ranking by choosing a subset of documents
from which to compute the pairs [1, 10]. The Normalized Discounted Cumulative Gain (NDCG)
is a cumulative, multilevel measure of ranking quality that is usually truncated at a particular rank
level [6]. For a given query $Q_i$ the NDCG is computed as
$$N_i \equiv \mathcal{N}_i \sum_{j=1}^{L} \left(2^{r(j)} - 1\right)/\log(1 + j) \qquad (1)$$
where $r(j)$ is the relevance level of the $j$-th document, and where the normalization constant $\mathcal{N}_i$ is
chosen so that a perfect ordering would result in $N_i = 1$. Here $L$ is the ranking truncation level at which the NDCG is computed. The $N_i$ are then averaged over the query set. NDCG is particularly
well suited to Web search applications because it is multilevel and because the truncation level can
be chosen to reflect how many documents are shown to the user. For this reason we will use the
NDCG measure in this paper.
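To make these measures concrete, the following sketch computes truncated NDCG as in Eq. (1) and MRR. It is an illustration only: the use of natural logarithms and the convention of reciprocal rank 0 for queries with no relevant document are our assumptions, not details fixed by the text.

```python
import math

def ndcg_at_L(relevances, L):
    """NDCG truncated at rank L, following Eq. (1): gain 2^r - 1, discount 1/log(1+j)."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log(1 + j)
                   for j, r in enumerate(rels[:L], start=1))
    ideal = dcg(sorted(relevances, reverse=True))  # perfect ordering gives the normalizer
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def mean_reciprocal_rank(ranked_labels_per_query):
    """MRR over queries; each entry is a list of binary labels in ranked order."""
    rr = []
    for labels in ranked_labels_per_query:
        r = next((j for j, y in enumerate(labels, start=1) if y == 1), None)
        rr.append(1.0 / r if r is not None else 0.0)
    return sum(rr) / len(rr)

# Example: relevance levels of the ten top-ranked documents for one query.
print(ndcg_at_L([3, 2, 3, 0, 1, 2, 0, 0, 1, 0], L=10))
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))
```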
3 Previous Work
The ranking task is the task of finding a sort on a set, and as such is related to the task of learning
structured outputs. Our approach is very different, however, from recent work on structured outputs,
such as the large margin methods of [12, 13]. There, structures are also mapped to the reals (through
choice of a suitable inner product), but the best output is found by estimating the argmax over all
possible outputs. The ranking problem also maps outputs (documents) to the reals, but solves a
much simpler problem in that the number of documents to be sorted is tractable. Our focus is on
a very different aspect of the problem, namely, finding ways to directly optimize the cost that the
user ultimately cares about. As in [7], we handle cost functions that are multivariate, in the sense
that the number of documents returned for a given query can itself vary, but the key challenge we
address in this paper is how to work with costs that are everywhere either flat or non-differentiable.
However, we emphasize that the method also handles the case of multivariate costs that cannot be
represented as a sum of terms, each depending on the output for a single feature vector and its label.
We call such functions irreducible (such costs are also considered by [7]). Most cost functions used
in machine learning are instead reducible (for example, MSE, cross entropy, log likelihood, and
the costs commonly used in kernel methods). The ranking problem itself has attracted increasing
attention recently (see for example [4, 2, 8]), and in this paper we will use the RankNet algorithm of
[2] as a baseline, since it is both easy to implement and performs well on large retrieval tasks.
4 LambdaRank
One approach to working with a nonsmooth target cost function would be to search for an optimization function which is a good approximation to the target cost, but which is also smooth. However,
the sort required by information retrieval cost functions makes this problematic. Even if the target
cost depends on only the top few ranked positions after sorting, the sort itself depends on all documents returned for the query, and that set can be very large; and since the target costs depend on only
the rank order and the labels, the target cost functions are either flat or discontinuous in the scores
of all the returned documents. We therefore consider a different approach. We illustrate the idea
with an example which also demonstrates the perils introduced by a target / optimization cost mismatch. Let the target cost be WTA and let the chosen optimization cost be a smooth approximation
to pairwise error. Suppose that a ranking algorithm A is being trained, and that at some iteration,
for a query for which there are only two relevant documents $D_1$ and $D_2$, $A$ gives $D_1$ rank one and $D_2$ rank $n$. Then on this query, $A$ has WTA cost zero, but a pairwise error cost of $n - 2$. If the parameters of $A$ are adjusted so that $D_1$ has rank two, and $D_2$ rank three, then the WTA error is now maximized, but the number of pairwise errors has been reduced by $n - 4$. Now suppose that at the next iteration, $D_1$ is at rank two, and $D_2$ at rank $n - 1$. The change in $D_1$'s score that is required to move it to top position is clearly less (possibly much less) than the change in $D_2$'s score required to move it to top position. Roughly speaking, we would prefer $A$ to spend a little capacity moving $D_1$ up by one position, than have it spend a lot of capacity moving $D_2$ up by $n - 1$ positions. If $j_1$ and $j_2$ are the rank indices of $D_1$, $D_2$ respectively, then instead of pairwise error, we would prefer
an optimization cost C that has the property that
$$\left|\frac{\partial C}{\partial s_{j_1}}\right| \gg \left|\frac{\partial C}{\partial s_{j_2}}\right| \qquad (2)$$
whenever $j_2 \gg j_1$. This illustrates the two key intuitions behind LambdaRank: first, it is usually
much easier to specify rules determining how we would like the rank order of documents to change,
after sorting them by score for a given query, than to construct a general, smooth optimization cost
that has the desired properties for all orderings. By only having to specify rules for a given ordering,
we are defining the gradients of an implicit cost function C only at the particular points in which
we are interested. Second, the rules can encode our intuition of the limited capacity of the learning
algorithm, as illustrated by Eq. (2). Let us write the gradient of C with respect to the score of the
document at rank position $j$, for the $i$-th query, as
$$\frac{\partial C}{\partial s_j} = -\lambda_j(s_1, l_1, \ldots, s_{n_i}, l_{n_i}) \qquad (3)$$
The sign is chosen so that positive $\lambda_j$ means that the document must move up the ranked list to reduce the cost. Thus, in this framework choosing an implicit cost function amounts to choosing suitable $\lambda_j$, which themselves are specified by rules that can depend on the ranked order (and scores) of all the documents. We will call these choices the $\lambda$ functions. At this point two questions naturally
arise: first, given a choice for the $\lambda$ functions, when does there exist a function $C$ for which Eq. (3)
holds; and second, given that it exists, when is C convex? We have the following result from
multilinear algebra (see e.g. [11]):
Theorem (Poincaré Lemma): If $S \subseteq \mathbb{R}^n$ is an open set that is star-shaped with respect to the origin,
then every closed form on S is exact.
Note that since every exact form is closed, it follows that on an open set that is star-shaped with
respect to the origin, a form is closed if and only if it is exact. Now for a given query $Q_i$ and corresponding set of returned documents $D_{ij}$, the $n_i$ $\lambda$'s are functions of the scores $s_{ij}$, parameterized by the (fixed) labels $l_{ij}$. Let $dx_j$ be a basis of 1-forms on $\mathbb{R}^n$ and define the 1-form
$$\lambda \equiv \sum_j \lambda_j \, dx_j \qquad (4)$$
Then assuming that the scores are defined over $\mathbb{R}^n$, the conditions for the theorem are satisfied and $\lambda = dC$ for some function $C$ if and only if $d\lambda = 0$ everywhere. Using classical notation, this amounts to requiring that
$$\frac{\partial \lambda_j}{\partial s_k} = \frac{\partial \lambda_k}{\partial s_j} \qquad \forall j, k \in \{1, \ldots, n_i\} \qquad (5)$$
This provides a simple test on the $\lambda$'s to determine if there exists a cost function for which they are the derivatives: the Jacobian (that is, the matrix $J_{jk} \equiv \partial \lambda_j / \partial s_k$) must be symmetric. Furthermore, given that such a cost function $C$ does exist, then since its Hessian is just the above Jacobian, the condition that $C$ be convex is that the Jacobian be positive semidefinite everywhere. Under these constraints, the Jacobian looks rather like a kernel matrix, except that while an entry of a kernel matrix depends on two elements of a vector space, an entry of the Jacobian can depend on all of the scores $s_j$. Note that for constant $\lambda$'s, the above two conditions are trivially satisfied, and that for other choices that give rise to a symmetric $J$, positive definiteness can be imposed by adding diagonal regularization terms of the form $\lambda_j \mapsto \lambda_j + \alpha_j s_j$, $\alpha_j > 0$.
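The test of Eq. (5) is easy to run numerically: build the Jacobian of a candidate $\lambda$ function by finite differences and inspect its symmetry and eigenvalues. The sketch below does this for a hypothetical spring-like $\lambda$ of our own choosing, not one of the paper's $\lambda$ functions.

```python
import numpy as np

def lambdas(s):
    # A hypothetical toy lambda: pull toward / push away from the mean score.
    # Its forces sum to zero across the list, as the physical analogy suggests.
    return s - s.mean()

def jacobian(lmbda, s, eps=1e-6):
    """Finite-difference Jacobian J[j, k] = d lambda_j / d s_k."""
    n = len(s)
    J = np.zeros((n, n))
    for k in range(n):
        sp, sm = s.copy(), s.copy()
        sp[k] += eps
        sm[k] -= eps
        J[:, k] = (lmbda(sp) - lmbda(sm)) / (2 * eps)
    return J

s = np.array([2.0, 0.5, -1.0, 0.3])
J = jacobian(lambdas, s)
print("symmetric:", np.allclose(J, J.T))      # Eq. (5): an implicit C exists
print("eigenvalues:", np.linalg.eigvalsh(J))  # here 0 and 1, so J is PSD and C convex
```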
LambdaRank has a clear physical analogy. Think of the documents returned for a given query as
point masses. $\lambda_j$ then corresponds to a force on the point mass $D_j$. If the conditions of Eq. (5) are met, then the forces in the model are conservative, that is, they may be viewed as arising from a potential energy function, which in our case is the implicit cost function $C$. For example, if the $\lambda$'s are linear in the outputs $s$, then this corresponds to a spring model, with springs that are either compressed or extended. The requirement that the Jacobian is positive semidefinite amounts to the requirement that the system of springs have a unique global minimum of the potential energy, which can be found from any initial conditions by gradient descent (this is not true in general, for arbitrary systems of springs). The physical analogy provides useful guidance in choosing $\lambda$ functions. For example, for a given query, the forces ($\lambda$'s) should sum to zero, since otherwise the overall system (mean score) will accelerate either up or down. Similarly if a contribution to a document $A$'s $\lambda$ is computed based on its position with respect to document $B$, then $B$'s $\lambda$ should be incremented by an equal and opposite amount, to prevent the pair itself from accelerating (Newton's third law, [9]).
Finally, we emphasize that LambdaRank is a very simple method. It requires only that one provide
rules for the derivatives of the implicit cost for any given sorted order of the documents, and as we
will show, such rules are easy to come up with.
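As an illustration of how little machinery is required, here is a minimal sketch of one LambdaRank update for a linear scoring model. The linear model, learning rate, and the placeholder $\lambda$ rule are our assumptions for concreteness; any rule mapping a sorted list of scores and labels to per-document $\lambda$'s can be plugged in.

```python
import numpy as np

def lambdarank_step(w, X, labels, lam_rule, lr=1e-3):
    """One batch update for a linear scorer s = X @ w on a single query.

    lam_rule takes scores and labels *after sorting by score* and returns one
    lambda per document in that sorted order; positive lambda means the
    document should move up the list.
    """
    scores = X @ w
    order = np.argsort(-scores)              # sort documents by descending score
    lam = lam_rule(scores[order], labels[order])
    # dC/dw = -sum_j lambda_j * ds_j/dw, and ds_j/dw = x_j for a linear model.
    grad = -(lam[:, None] * X[order]).sum(axis=0)
    return w - lr * grad

def toy_rule(sorted_scores, sorted_labels):
    # Placeholder rule: push relevant documents up, others down, zero net force.
    f = np.where(sorted_labels > 0, 1.0, -1.0)
    return f - f.mean()
```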
5 A Speedup for RankNet Learning
RankNet [2] uses a neural net as its function class. Feature vectors are computed for each
query/document pair. RankNet is trained on those pairs of feature vectors, for a given query, for
which the corresponding documents have different labels. At runtime, single feature vectors are
fpropped through the net, and the documents are ordered by the resulting scores. The RankNet cost
consists of a sigmoid (to map the outputs to [0, 1]) followed by a pair-based cross entropy cost, and
takes the form given in Eq. (8) below. Training times for RankNet thus scale quadratically with the
mean number of pairs per query, and linearly with the number of queries.
The ideas proposed in Section 4 suggest a simple method for significantly speeding up RankNet
training, making it also approximately linear in the number of labeled documents per query, rather
than in the number of pairs per query. This is a very significant benefit for large training sets. In fact
the method works for any ranking method that uses gradient descent and for which the cost depends
on pairs of items for each query. Most neural net training, RankNet included, uses a stochastic
gradient update, which is known to give faster convergence. However here we will use batch learning
per query (that is, the weights are updated for each query). We present the idea for a general ranking
function $f : \mathbb{R}^n \mapsto \mathbb{R}$ with optimization cost $C : \mathbb{R}^2 \mapsto \mathbb{R}$. It is important to note that adopting
batch training alone does not give a speedup: to compute the cost and its gradients we would still
need to fprop each pair. Consider a single query for which n documents have been returned. Let the
output scores of the ranker be $s_j$, $j = 1, \ldots, n$, the model parameters be $w_k \in \mathbb{R}$, and let the set of pairs of document indices used for training be $\mathcal{P}$. The total cost is $C_T \equiv \sum_{\{i,j\} \in \mathcal{P}} C(s_i, s_j)$ and its derivative with respect to $w_k$ is
$$\frac{\partial C_T}{\partial w_k} = \sum_{\{i,j\} \in \mathcal{P}} \left( \frac{\partial C(s_i, s_j)}{\partial s_i} \frac{\partial s_i}{\partial w_k} + \frac{\partial C(s_i, s_j)}{\partial s_j} \frac{\partial s_j}{\partial w_k} \right) \qquad (6)$$
It is convenient to refactor the sum: let $\mathcal{P}_i$ be the set of indices $j$ for which $\{i, j\}$ is a valid pair, and let $\mathcal{D}$ be the set of document indices. Then we can write the first term as
$$\frac{\partial C_T}{\partial w_k} = \sum_{i \in \mathcal{D}} \frac{\partial s_i}{\partial w_k} \sum_{j \in \mathcal{P}_i} \frac{\partial C(s_i, s_j)}{\partial s_i} \qquad (7)$$
and similarly for the second. The algorithm is as follows: instead of backpropping each pair, first $n$ fprops are performed to compute the $s_i$ (and for the general LambdaRank algorithm, this would also be where the sort on the scores is performed); then for each $i = 1, \ldots, n$ the $\lambda_i \equiv \sum_{j \in \mathcal{P}_i} \frac{\partial C(s_i, s_j)}{\partial s_i}$ are computed; then to compute the gradients $\frac{\partial s_i}{\partial w_k}$, $n$ fprops are performed, and finally the $n$ backprops are done. The key point is that although the overall computation still has an $n^2$ dependence arising from the second sum in (7), computing the terms $\frac{\partial C(s_i, s_j)}{\partial s_i} = \frac{-1}{1 + e^{s_i - s_j}}$ is far cheaper than the computation required to perform the $2n$ fprops and $n$ backprops. Thus we have effectively replaced an $O(n^2)$ algorithm with an $O(n)$ one.³
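The factorization of Eq. (7) can be checked directly in code. The sketch below computes the weight gradient two ways for the RankNet pair cost of Eq. (8) and a linear scorer (our simplifying assumption): once with a term per pair, and once by first accumulating a per-document $\lambda_i$ and then making a single pass over documents. Agreement of the two results is what licenses the speedup.

```python
import numpy as np

def dC_dsi(si, sj):
    # Derivative of the RankNet pair cost (Eq. (8)) with respect to s_i.
    return -1.0 / (1.0 + np.exp(si - sj))

def grad_pairwise(X, scores, pairs):
    g = np.zeros(X.shape[1])
    for i, j in pairs:                       # expensive per-pair gradient work
        d = dC_dsi(scores[i], scores[j])
        g += d * X[i] - d * X[j]             # dC/ds_j = -dC/ds_i for this cost
    return g

def grad_factored(X, scores, pairs):
    lam = np.zeros(len(scores))
    for i, j in pairs:                       # cheap scalar work per pair
        d = dC_dsi(scores[i], scores[j])
        lam[i] += d
        lam[j] -= d
    return lam @ X                           # one O(n) pass over documents

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
scores = X @ rng.normal(size=4)
pairs = [(i, j) for i in range(6) for j in range(6) if i < j]
print(np.allclose(grad_pairwise(X, scores, pairs), grad_factored(X, scores, pairs)))
```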
6 Experiments
We performed experiments to (1) demonstrate the training speedup for RankNet, and (2) assess
whether LambdaRank improves the NDCG test performance. For the latter, we used RankNet as a
baseline. Even though the RankNet optimization cost is not NDCG, RankNet is still very effective
at optimizing NDCG, using the method proposed in [2]: after each epoch, compute the NDCG
on a validation set, and after training, choose the net for which the validation NDCG is highest.
Rather than attempt to derive from first principles the optimal $\lambda$ function for the NDCG target cost (and for a given dataset), which is beyond the scope of this paper, we wrote several plausible $\lambda$-functions and tested them on the Web search data. We then picked the single $\lambda$ function that gave the best results on that particular validation set, and then used that $\lambda$ function for all of our experiments;
this is described below.
6.1 RankNet Speedup Results
Here the training scheme is exactly LambdaRank training, but with the RankNet gradients, and with
no sort: we call the corresponding $\lambda$ function $G$. We will refer to the original RankNet training as
V1 and LambdaRank speedup as V2. We compared V1 and V2 in two sets of experiments. In the
first we used 1000 queries taken from the Web data described below, and in the second we varied
the number of documents for a given query, using the artificial data described below. Experiments
were run on a 2.2GHz 32 bit Opteron machine. We compared V1 to V2 for 1 layer and 2 layer
(with 10 hidden nodes) nets. V1 was also run using batch update per query, to clearly show the gain
(the convergence as a function of epoch was found to be similar for batch and non-batch updates;
furthermore running time for batch and non-batch is almost identical). For the single layer net, on
the Web data, LambdaRank with G was measured to be 5.1 times faster, and for two layer, 8.0 times
faster: the left panel of Figure 1 shows the results (where max validation NDCG is plotted). Each
point on the graph is one epoch. Results for the two layer nets were similar. The right panel shows
a log log plot of training time versus number of documents, as the number of documents per query
3. Two further speedups are possible, and are not explored here: first, only the first $n$ fprops need be performed if the node activations are stored, since those stored activations could then be used during the $n$ backprops; second, the $e^{s_i}$ could be precomputed before the pairwise sum is done.
varies from 4,000 to 512,000 in the artificial set. Fitting the curves using linear regression gives
the slopes of V1 and V2 to be 1.943 and 1.185 respectively. Thus V1 is close to quadratic (but
not exactly, due to the fact that only a subset of pairs is used, namely, those with documents whose
labels differ), and V2 is close to linear, as expected.
[Figure 1 appears here. Left panel: NDCG versus training time in seconds, comparing RankNet training with the LambdaRank speedup. Right panel: log seconds per epoch versus log number of documents for the two methods.]
Figure 1: Speeding up RankNet training. Left: linear nets. Right: two layer nets.
6.2 λ-function Chosen for Ranking Experiments
To implement LambdaRank training, we must first choose the $\lambda$ function (Eq. (3)), and then substitute it in Eq. (5). Using the physical analogy, specifying a $\lambda$ function amounts to specifying rules for the "force" on a document given its neighbors in the ranked list. We tried two kinds of $\lambda$ function: those where a document's $\lambda$ gets a contribution from all pairs with different labels (for a given query), and those where its $\lambda$ depends only on its nearest neighbors in the sorted list. All $\lambda$ functions were designed with the NDCG cost function in mind, and most had a margin built in (that is, a force is exerted between two documents even if they are in the correct order, until their difference in scores exceeds that margin). We investigated step potentials, where the step sizes are proportional to the NDCG gain found by swapping the pair; spring models; models that estimated the NDCG gradient using finite differences; and models where the cost was estimated as the gradient of a smooth, pairwise cost, also scaled by the NDCG gain from swapping the two documents. We tried ten different $\lambda$
functions in all. Due to space limitations we will not give results on all these functions here: instead
we will use the one that worked best on the Web validation data for all experiments. This function
used the RankNet cost, scaled by the NDCG gain found by swapping the two documents in question. The RankNet cost combines a sigmoid output and the cross entropy cost, and is similar to the
negative binomial log-likelihood cost [5], except that it is based on pairs of items: if document i is
to be ranked higher than document $j$, then the RankNet cost is [2]:
$$C_{i,j}^{R} = s_j - s_i + \log\left(1 + e^{s_i - s_j}\right) \qquad (8)$$
and if the corresponding document ranks are $r_i$ and $r_j$, then taking derivatives of Eq. (8) and combining with Eq. (1) gives
$$\lambda = \mathcal{N} \left( \frac{1}{1 + e^{s_i - s_j}} \right) \left( 2^{l_i} - 2^{l_j} \right) \left( \frac{1}{\log(1 + i)} - \frac{1}{\log(1 + j)} \right) \qquad (9)$$
where $\mathcal{N}$ is the reciprocal max DCG for the query. Thus for each pair, after the sort, we increment each document's force by $\pm\lambda$, where the more relevant document gets the positive increment.
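A sketch of this chosen $\lambda$ function follows: for each pair of documents with different labels in the sorted list, compute the increment of Eq. (9) and add it with opposite signs to the two documents. The natural logarithm, the handling of the max-DCG normalization, and the sign bookkeeping are our reading of the text rather than details it pins down.

```python
import math

def lambdas_eq9(scores, labels):
    """Per-document lambdas for one query; documents assumed sorted by score."""
    n = len(scores)
    max_dcg = sum((2 ** r - 1) / math.log(1 + j)
                  for j, r in enumerate(sorted(labels, reverse=True), start=1))
    N = 1.0 / max_dcg if max_dcg > 0 else 0.0   # reciprocal max DCG for the query
    lam = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                continue
            # Ranks are 1-based in Eq. (9), so position i maps to log(1 + (i+1)).
            delta = N * (2 ** labels[i] - 2 ** labels[j]) \
                      * (1 / math.log(2 + i) - 1 / math.log(2 + j)) \
                      / (1 + math.exp(scores[i] - scores[j]))
            sgn = 1.0 if labels[i] > labels[j] else -1.0
            lam[i] += sgn * abs(delta)   # the more relevant document moves up
            lam[j] -= sgn * abs(delta)
    return lam
```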
6.3 Ranking for Search Experiments
We performed experiments on three datasets: artificial, web search, and intranet search data. The
data are labeled from 0 to M , in order of increasing relevance: the Web search and artificial data
have M = 4, and the intranet search data, M = 3. The corresponding NDCG gains (the numerators
in Eq. (1)) were therefore 0, 3, 7, 15 and 31. In all graphs, 95% confidence intervals are shown.
In all experiments, we varied the learning rate from as low as 1e-7 to as high as 1e-2, and for each
experiment we picked that rate that gave the best validation results. For all training, the learning
rate was reduced be a factor of 0.8 if the training cost (Eq. (8), for RankNet, and the NDCG at
truncation level 10, for LambdaRank) increased over the value for the previous epoch. Training was
done for 300 epochs for the artificial and Web search data, and for 200 epochs for the intranet data,
and training was restarted (with random weights) if the cost did not reduce for 50 iterations.
6.3.1 Artificial Data
We used artificial data to remove any variance stemming from the quality of the features or of the
labeling. We followed the prescription given in [2] for generating random cubic polynomial data.
However, here we use five levels of relevance instead of six, a label distribution corresponding to
real datasets, and more data, all to more realistically approximate a Web search application. We
used 50 dimensional data, 50 documents per query, and 10K/5K/10K queries for train/valid/test
respectively. We report the NDCG results in Figure 2 for ten NDCG truncation levels. In this clean
dataset, LambdaRank clearly outperforms RankNet. Note that the gap increases at higher relevance
levels, as one might expect due to the more direct optimization of NDCG.
[Figure 2 appears here. Left panel: NDCG versus truncation level (1-10) on the cubic polynomial data, for LambdaRank and RankNet with two-layer and linear nets. Right panel: NDCG versus truncation level for the linear nets on the intranet search data.]
Figure 2: Left: Cubic polynomial data. Right: Intranet search data.
6.3.2 Intranet Search Data
This data has dimension 87, and only 400 queries in all were available. The average number
of documents per query is 59.4. We used 5 fold cross validation, with 2+2+1 splits between
train/validation/test sets. We found that it was important for such a small dataset to use a relatively large validation set to reduce variance. The results for the linear nets are shown in Figure 2:
although LambdaRank gave uniformly better mean NDCGs, the overlapping error bars indicate that
on this set, LambdaRank does not give statistically significantly better results than RankNet at 95%
confidence. For the two layer nets the NDCG means are even closer. This is an example of a case
where larger datasets are needed to see the difference between two algorithms (although it's possible
that more powerful statistical tests would find a difference here also).
6.4 Web Search Data
This data is from a commercial search engine and has 367 dimensions, with on average 26.1 documents per query. The data was created by shuffling a larger dataset and then dividing into train,
validation and test sets of size 10K/5K/10K queries, respectively. In Figure 3, we report the NDCG
scores on the dataset at truncation levels from 1 to 10. We show separate plots to clearly show the
differences: in fact, the linear LambdaRank results lie on top of the two layer RankNet results, for
the larger truncation values.
7 Conclusions
We have demonstrated a simple and effective method for learning non-smooth target costs. LambdaRank is a general approach: in particular, it can be used to implement RankNet training, and it
[Figure 3 appears here. Left panel: NDCG versus truncation level (1-10) for the linear LambdaRank and RankNet nets on the Web search data. Right panel: the same comparison for the two-layer nets.]
Figure 3: NDCG for RankNet and LambdaRank. Left: linear nets. Right: two layer nets
furnishes a significant training speedup there. We studied LambdaRank in the context of the NDCG
target cost for neural network models, but the same ideas apply to any non-smooth target cost, and
to any differentiable function class. It would be interesting to investigate using the same method
starting with other classifiers such as boosted trees.
Acknowledgments
We thank M. Taylor, J. Platt, A. Laucius, P. Simard and D. Meyerzon for useful discussions and for
providing data.
References
[1] C. Buckley and E. Voorhees. Evaluating evaluation measure stability. In SIGIR, pages 33-40, 2000.
[2] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to
Rank using Gradient Descent. In ICML 22, Bonn, Germany, 2005.
[3] C. Cortes and M. Mohri. Confidence Intervals for the Area Under the ROC Curve. In NIPS 18. MIT
Press, 2005.
[4] Y. Freund, R. Iyer, R.E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The
Annals of Statistics, 28(2):337-374, 2000.
[6] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents. In SIGIR
23. ACM, 2000.
[7] T. Joachims. A support vector method for multivariate performance measures. In ICML 22, 2005.
[8] I. Matveeva, C. Burges, T. Burkard, A. Lauscius, and L. Wong. High accuracy retrieval with multiple
nested rankers. In SIGIR, 2006.
[9] I. Newton. Philosophiae Naturalis Principia Mathematica. The Royal Society, 1687.
[10] S. Robertson and H. Zaragoza. On rank-based effectiveness measures and optimisation. Technical Report
MSR-TR-2006-61, Microsoft Research, 2006.
[11] M. Spivak. Calculus on Manifolds. Addison-Wesley, 1965.
[12] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large
margin approach. In ICML 22, Bonn, Germany, 2005.
[13] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML 24, 2004.
[14] E.M. Voorhees. Overview of the TREC 2001/2002 Question Answering Track. In TREC, 2001,2002.
[15] L. Yan, R. Dodlier, M.C. Mozer, and R. Wolniewicz. Optimizing Classifier Performance via an Approximation to the Wilcoxon-Mann-Whitney Statistic. In ICML 20, 2003.
Learning from Multiple Sources
Koby Crammer, Michael Kearns, Jennifer Wortman
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
Abstract
We consider the problem of learning accurate models from multiple sources of
"nearby" data. Given distinct samples from multiple data sources and estimates
of the dissimilarities between these sources, we provide a general theory of which
samples should be used to learn models for each source. This theory is applicable
in a broad decision-theoretic learning framework, and yields results for classification and regression generally, and for density estimation within the exponential
family. A key component of our approach is the development of approximate
triangle inequalities for expected loss, which may be of independent interest.
1 Introduction
We introduce and analyze a theoretical model for the problem of learning from multiple sources of
"nearby" data. As a hypothetical example of where such problems might arise, consider the following scenario: For each web user in a large population, we wish to learn a classifier for what sites that user is likely to find "interesting." Assuming we have at least a small amount of labeled data for each user (as might be obtained either through direct feedback, or via indirect means such as clickthroughs following a search), one approach would be to apply standard learning algorithms to each user's data in isolation. However, if there are natural and accessible measures of similarity between the interests of pairs of users (as might be obtained through their mutual labelings of common web sites), an appealing alternative is to aggregate the data of "nearby" users when learning a classifier
for each particular user. This alternative is intuitively subject to a trade-off between the increased
sample size and how different the aggregated users are.
We treat this problem in some generality and provide a bound addressing the aforementioned trade-off. In our model there are $K$ unknown data sources, with source $i$ generating a distinct sample $S_i$ of $n_i$ observations. We assume we are given only the samples $S_i$, and a disparity¹ matrix $D$ whose entry $D(i, j)$ bounds the difference between source $i$ and source $j$. Given these inputs, we wish to decide which subset of the samples $S_j$ will result in the best model for each source $i$. Our framework includes settings in which the sources produce data for classification, regression, and density
estimation (and more generally any additive-loss learning problem obeying certain conditions).
Our main result is a general theorem establishing a bound on the expected loss incurred by using all
data sources within a given disparity of the target source. Optimization of this bound then yields a
recommended subset of the data to be used in learning a model of each source. Our bound clearly
expresses a trade-off between three quantities: the sample size used (which increases as we include
data from more distant models), a weighted average of the disparities of the sources whose data is
used, and a model complexity term. It can be applied to any learning setting in which the underlying
loss function obeys an approximate triangle inequality, and in which the class of hypothesis models under consideration obeys uniform convergence of empirical estimates of loss to expectations.
1. We avoid using the term distance since our results include settings in which the underlying loss measures may not be formal distances.
For classification problems, the standard triangle inequality holds. For regression we prove a 2-approximation to the triangle inequality, and for density estimation for members of the exponential
family, we apply Bregman divergence techniques to provide approximate triangle inequalities. We
believe these approximations may find independent applications within machine learning. Uniform
convergence bounds for the settings we consider may be obtained via standard data-independent
model complexity measures such as VC dimension and pseudo-dimension, or via more recent datadependent approaches such as Rademacher complexity.
The research described here grew out of an earlier paper by the same authors [1] which examined
the considerably more limited problem of learning a model when all data sources are corrupted
versions of a single, fixed source, for instance when each data source provides noisy samples of a
fixed binary function, but with varying levels of noise. In the current work, each source may be
entirely unrelated to all others except as constrained by the bounds on disparities, requiring us to
develop new techniques. Wu and Dietterich studied similar problems experimentally in the context
of SVMs [2]. The framework examined here can also be viewed as a type of transfer learning [3, 4].
In Section 2 we introduce a decision-theoretic framework for probabilistic learning that includes
classification, regression, density estimation and many other settings as special cases, and then give
our multiple source generalization of this model. In Section 3 we provide our main result, which is
a general bound on the expected loss incurred by using all data within a given disparity of a target
source. Section 4 then applies this bound to a variety of specific learning problems. In Section 5 we
briefly examine data-dependent applications of our general theory using Rademacher complexity.
2 Learning models
Before detailing our multiple-source learning model, we first introduce a standard decision-theoretic
learning framework in which our goal is to find a model minimizing a generalized notion of empirical
loss [5]. Let the hypothesis class H be a set of models (which might be classifiers, real-valued
functions, densities, etc.), and let f be the target model, which may or may not lie in the class
H. Let z be a (generalized) data point or observation. For instance, in (noise-free) classification
and regression, $z$ will consist of a pair $\langle x, y \rangle$ where $y = f(x)$. In density estimation, $z$ is the observed value $x$. We assume that the target model $f$ induces some underlying distribution $P_f$ over observations $z$. In the case of classification or regression, $P_f$ is induced by drawing the inputs $x$ according to some underlying distribution $P$, and then setting $y = f(x)$ (possibly corrupted by noise). In the case of density estimation $f$ simply defines a distribution $P_f$ over observations $x$.
Each setting we consider has an associated loss function $L(h, z)$. For example, in classification we typically consider the 0/1 loss: $L(h, \langle x, y \rangle) = 0$ if $h(x) = y$, and 1 otherwise. In regression we might consider the squared loss function $L(h, \langle x, y \rangle) = (y - h(x))^2$. In density estimation we might consider the log loss $L(h, x) = \log(1/h(x))$. In each case, we are interested in the expected loss of a model $g_2$ on target $g_1$: $e(g_1, g_2) = \mathbf{E}_{z \sim P_{g_1}}[L(g_2, z)]$. Expected loss is not necessarily symmetric.
In our multiple source model, we are presented with $K$ distinct samples or piles of data $S_1, \ldots, S_K$, and a symmetric $K \times K$ matrix $D$. Each pile $S_i$ contains $n_i$ observations that are generated from a fixed and unknown model $f_i$, and $D$ satisfies $e(f_i, f_j), e(f_j, f_i) \le D(i, j)$.² Our goal is to decide which piles $S_j$ to use in order to learn the best approximation (in terms of expected loss) to each $f_i$.
While we are interested in accomplishing this goal for each $f_i$, it suffices and is convenient to examine the problem from the perspective of a fixed $f_i$. Thus without loss of generality let us suppose that we are given piles $S_1, \ldots, S_K$ of size $n_1, \ldots, n_K$ from models $f_1, \ldots, f_K$ such that $\varepsilon_1 \equiv D(1, 1) \le \varepsilon_2 \equiv D(1, 2) \le \cdots \le \varepsilon_K \equiv D(1, K)$, and our goal is to learn $f_1$. Here we have simply taken the problem in the preceding paragraph, focused on the problem for $f_1$, and reordered the other models according to their proximity to $f_1$. To highlight the distinguished role of the target $f_1$ we shall denote it $f$. We denote the observations in $S_j$ by $z_1^j, \ldots, z_{n_j}^j$. In all cases we will
analyze, for any $k \le K$, the hypothesis $\hat{h}_k$ minimizing the empirical loss $\hat{e}_k(h)$ on the first $k$ piles $S_1, \ldots, S_k$, i.e.
$$\hat{h}_k = \operatorname*{argmin}_{h \in H} \hat{e}_k(h) = \operatorname*{argmin}_{h \in H} \frac{1}{n_{1:k}} \sum_{j=1}^{k} \sum_{i=1}^{n_j} L(h, z_i^j)$$
where $n_{1:k} = n_1 + \cdots + n_k$. We also denote the expected error of function $h$ with respect to the first $k$ piles of data as
$$e_k(h) = \mathbf{E}[\hat{e}_k(h)] = \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} e(f_i, h).$$
2. While it may seem restrictive to assume that $D$ is given, notice that $D(i, j)$ can often be estimated from data, for example in a classification setting in which common instances labeled by both $f_i$ and $f_j$ are available.
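The estimator $\hat{h}_k$ is simply empirical risk minimization over the union of the first $k$ piles. A minimal sketch, assuming for simplicity a finite hypothesis set (the paper allows general $H$):

```python
def h_hat(piles, hypotheses, loss):
    """Return the hypothesis minimizing empirical loss over the pooled piles.

    piles: list of lists of observations z; hypotheses: finite iterable of models;
    loss: a function loss(h, z) implementing L.
    """
    pooled = [z for pile in piles for z in pile]
    return min(hypotheses,
               key=lambda h: sum(loss(h, z) for z in pooled) / len(pooled))
```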
3 General theory
In this section we provide the first of our main results: a general bound on the expected loss of the
model minimizing the empirical loss on the nearest k piles. Optimization of this bound leads to a
recommended number of piles to incorporate when learning $f = f_1$. The key ingredients needed to
apply this bound are an approximate triangle inequality and a uniform convergence bound, which
we define below. In the subsequent sections we demonstrate that these ingredients can indeed be
provided for a variety of natural learning problems.
Definition 1 For $\alpha \ge 1$, we say that the $\alpha$-triangle inequality holds for a class of models $F$ and expected loss function $e$ if for all $g_1, g_2, g_3 \in F$ we have
$$e(g_1, g_2) \le \alpha \left( e(g_1, g_3) + e(g_3, g_2) \right).$$
The parameter $\alpha \ge 1$ is a constant that depends on $F$ and $e$.
The choice $\alpha = 1$ yields the standard triangle inequality. We note that the restriction to models in the class $F$ may in some cases be quite weak (for instance, when $F$ is all possible classifiers or real-valued functions with bounded range) or stronger, as in densities from the exponential family. Our results will require only that the unknown source models $f_1, \ldots, f_K$ lie in $F$, even when our hypothesis models are chosen from some possibly much more restricted class $H \subseteq F$. For now we
simply leave F as a parameter of the definition.
Definition 2 A uniform convergence bound for a hypothesis space $H$ and loss function $L$ is a bound that states that for any $0 < \delta < 1$, with probability at least $1 - \delta$, for any $h \in H$
$$|\hat{e}(h) - e(h)| \le \beta(n, \delta)$$
where $\hat{e}(h) = \frac{1}{n} \sum_{i=1}^{n} L(h, z_i)$ for $n$ observations $z_1, \ldots, z_n$ generated independently according to distributions $P_1, \ldots, P_n$, and $e(h) = \mathbf{E}[\hat{e}(h)]$ where the expectation is taken over $z_1, \ldots, z_n$. $\beta$ is a function of the number of observations $n$ and the confidence $\delta$, and depends on $H$ and $L$.
This definition simply asserts that for every model in $H$, its empirical loss on a sample of size $n$ and the expectation of this loss will be "close." In general the function $\beta$ will incorporate standard measures of the complexity of $H$, and will be a decreasing function of the sample size $n$, as in the classical $O(\sqrt{d/n})$ bounds of VC theory. Our bounds will be derived from the rich literature on uniform convergence. The only twist to our setting is the fact that the observations are no
longer necessarily identically distributed, since they are generated from multiple sources. However,
generalizing the standard uniform convergence results to this setting is straightforward.
We are now ready to present our general bound.
Theorem 1 Let $e$ be the expected loss function for loss $L$, and let $F$ be a class of models for which the $\alpha$-triangle inequality holds with respect to $e$. Let $H \subseteq F$ be a class of hypothesis models for which there is a uniform convergence bound $\beta$ for $L$. Let $K \in \mathbb{N}$, $f = f_1, f_2, \ldots, f_K \in F$, $\{\varepsilon_i\}_{i=1}^K$, $\{n_i\}_{i=1}^K$, and $\hat{h}_k$ be as defined above. For any $\delta$ such that $0 < \delta < 1$, with probability at least $1 - \delta$, for any $k \in \{1, \ldots, K\}$
$$e(f, \hat{h}_k) \le (\alpha + \alpha^2) \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \varepsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K) + \alpha^2 \min_{h \in H} \{e(f, h)\}$$
Before providing the proof, let us examine the bound of Theorem 1, which expresses a natural and
intuitive trade-off. The first term in the bound is a weighted sum of the disparities of the $k \le K$ models whose data is used with respect to the target model $f = f_1$. We expect this term to increase as we increase $k$ to include more distant piles. The second term is determined by the uniform convergence bound. We expect this term to decrease with added piles due to the increased sample size. The final term is what is typically called the approximation error: the residual loss that we incur simply by limiting our hypothesis model to fall in the restricted class $H$. All three terms are influenced by the strength of the approximate triangle inequality that we have, as quantified by $\alpha$.
The bounds given in Theorem 1 can be loose, but provide an upper bound necessary for optimization and suggest a natural choice for the number of piles $k^*$ to use to estimate the target $f$:
$$k^* = \operatorname*{argmin}_{k} \left( (\alpha + \alpha^2) \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \varepsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K) \right).$$
Theorem 1 and this optimization make the implicit assumption that the best subset of piles to use will be a prefix of the piles, that is, that we should not "skip" a nearby pile in favor of more distant
ones. This assumption will generally be true for typical data-independent uniform convergence such
as VC dimension bounds, and true on average for data-dependent bounds, where we expect uniform
convergence bounds to improve with increased sample size. We now give the proof of Theorem 1.
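A sketch of this optimization: given the sorted disparities $\varepsilon_i$, the pile sizes $n_i$, and a plug-in uniform convergence rate $\beta(n, \delta)$, scan the prefixes and keep the $k$ minimizing the first two terms of the bound. The VC-style $\beta$ below mirrors the form appearing in Theorem 2 later in the paper and is our choice of plug-in, not something the theorem mandates.

```python
import math

def choose_k(eps, n, alpha, beta, delta):
    """eps: disparities sorted ascending; n: pile sizes; beta(m, conf) -> float."""
    K = len(eps)
    best_k, best_val = 1, float("inf")
    n_cum, weighted = 0, 0.0
    for k in range(1, K + 1):
        n_cum += n[k - 1]
        weighted += n[k - 1] * eps[k - 1]      # running sum of n_i * eps_i
        val = (alpha + alpha ** 2) * weighted / n_cum \
              + 2 * alpha * beta(n_cum, delta / (2 * K))
        if val < best_val:
            best_k, best_val = k, val
    return best_k

def vc_beta(d):
    # With conf = delta/2K, log(8/conf) = log(16K/delta), matching Theorem 2.
    return lambda m, conf: math.sqrt((d * math.log(2 * math.e * m / d)
                                      + math.log(8 / conf)) / (8 * m))

eps = [0.0, 0.02, 0.05, 0.20, 0.45]   # hypothetical disparities
n = [40, 60, 30, 200, 500]            # hypothetical pile sizes
print(choose_k(eps, n, alpha=1.0, beta=vc_beta(d=10), delta=0.05))
```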
Proof: (Theorem 1) By Definition 1, for any $h \in H$, any $k \in \{1, \ldots, K\}$, and any $i \in \{1, \ldots, k\}$,
$$\frac{n_i}{n_{1:k}} e(f, h) \le \frac{n_i}{n_{1:k}} \left( \alpha e(f, f_i) + \alpha e(f_i, h) \right)$$
Summing over all $i \in \{1, \ldots, k\}$, we find
$$e(f, h) \le \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \left( \alpha e(f, f_i) + \alpha e(f_i, h) \right) = \alpha \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} e(f, f_i) + \alpha \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} e(f_i, h) \le \alpha \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \varepsilon_i + \alpha e_k(h)$$
In the first line above we have used the $\alpha$-triangle inequality to deliberately introduce a weighted summation involving the $f_i$. In the second line, we have broken up the summation. Notice that the first summation is a weighted average of the expected loss of each $f_i$, while the second summation is the expected loss of $h$ on the data. Using the uniform convergence bound, we may assert that with high probability $e_k(h) \le \hat{e}_k(h) + \beta(n_{1:k}, \delta/2K)$, and with high probability
$$\hat{e}_k(\hat{h}_k) = \min_{h \in H} \{\hat{e}_k(h)\} \le \min_{h \in H} \left\{ \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} e(f_i, h) + \beta(n_{1:k}, \delta/2K) \right\}$$
Putting these pieces together, we find that with high probability
$$e(f, \hat{h}_k) \le \alpha \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \varepsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K) + \alpha \min_{h \in H} \left\{ \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} e(f_i, h) \right\}$$
$$\le \alpha \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \varepsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K) + \alpha \min_{h \in H} \left\{ \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \alpha e(f_i, f) + \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \alpha e(f, h) \right\}$$
$$= (\alpha + \alpha^2) \sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \varepsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K) + \alpha^2 \min_{h \in H} \{e(f, h)\}$$
1
0.9
0.8
MAX DATA
140
0.7
120
100
sample size
0.6
0.5
80
60
40
0.4
20
0.3
0
1
0.2
0.8
0.6
0.1
0.4
0
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.2
1
0
Figure 1: Visual demonstration of Theorem 2. In this problem there are K = 100 classifiers, each defined by
2 parameters represented by a point fi in the unit square, such that the expected disagreement rate between two
such classifiers equals the L1 distance between their parameters. (It is easy to create simple input distributions
and classifiers that generate exactly this geometry.) We chose the 100 parameter vectors fi uniformly at random
from the unit square (the circles in the left panel). To generate varying pile sizes, we let ni decrease with the
distance of fi from a chosen ?central? point at (0.75, 0.75) (marked ?MAX DATA? in the left panel); the
resulting pile sizes for each model are shown in the bar plot in the right panel, where the origin (0, 0) is in the
near corner, (1, 1) in the far corner, and the pile sizes clearly peak near (0.75, 0.75). Given these fi , ni and
the pairwise distances, the undirected graph on the left includes an edge between fi and fj if and only if the
data from fj is used to learn fi and/or the converse when Theorem 2 is used to optimize the distance of the
data used. The graph simultaneously displays the geometry implicit in Theorem 2 as well as its adaptivity to
local circumstances. Near the central point the graph is quite sparse and the edges quite short, corresponding
to the fact that for such models we have enough direct data that it is not advantageous to include data from
distant models. Far from the central point the graph becomes dense and the edges long, as we are required to
aggregate a larger neighborhood to learn the optimal model. In addition, decisions are affected locally by how
many models are "nearby" a given model.
4 Applications to standard learning settings
In this section we demonstrate the applicability of the general theory given by Theorem 1 to several
standard learning settings. We begin with the most straightforward application, classification.
4.1 Binary classification
In binary classification, we assume that our target model is a fixed, unknown and arbitrary function
f from some input set X to {0, 1}, and that there is a fixed and unknown distribution P over X.
Note that the distribution P over inputs does not depend on the target function f. The observations are
of the form z = ⟨x, y⟩ where y ∈ {0, 1}. The loss function L(h, ⟨x, y⟩) is defined as 0 if y = h(x)
and 1 otherwise, and the corresponding expected loss is e(g₁, g₂) = E_{⟨x,y⟩∼P_{g₁}}[L(g₂, ⟨x, y⟩)] =
Pr_{x∼P}[g₁(x) ≠ g₂(x)]. For 0/1 loss it is well-known and easy to see that the (standard) 1-triangle
inequality holds, and classical VC theory [6] provides us with uniform convergence. The conditions
of Theorem 1 are thus easily satisfied, yielding the following.
Theorem 2 Let F be the set of all functions from an input set X into {0,1} and let d be the VC
dimension of H ⊆ F. Let e be the expected 0/1 loss. Let K ∈ ℕ, f = f₁, f₂, …, f_K ∈ F,
{ε_i}_{i=1}^K, {n_i}_{i=1}^K, and ĥ_k be as defined above in the multi-source learning model. For any δ such
that 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, …, K}

$$e(f,\hat h_k) \;\le\; 2\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\,\epsilon_i \;+\; \min_{h\in H}\{e(f,h)\} \;+\; 2\sqrt{\frac{d\log(2e\,n_{1:k}/d) + \log(16K/\delta)}{8\,n_{1:k}}}$$
In Figure 1 we provide a visual demonstration of the behavior of Theorem 1 applied to a simple
classification problem.
4.2 Regression
We now turn to regression with squared loss. Here our target model f is any function from an input
class X into some bounded subset of R. (Frequently we will have X ⊆ R^d, but this is not required.)
We again assume a fixed but unknown distribution P (that does not depend on f) on the inputs. Our
observations are of the form z = ⟨x, y⟩. Our loss function is L(h, ⟨x, y⟩) = (y − h(x))², and the
expected loss is thus e(g₁, g₂) = E_{⟨x,y⟩∼P_{g₁}}[L(g₂, ⟨x, y⟩)] = E_{x∼P}[(g₁(x) − g₂(x))²].
For regression it is known that the standard 1-triangle inequality does not hold. However, a 2-triangle
inequality does hold and is stated in the following lemma. The proof is given in Appendix A.³
Lemma 1 Given any three functions g₁, g₂, g₃ : X → R, a fixed and unknown distribution P on
the inputs X, and the expected loss e(g₁, g₂) = E_{x∼P}[(g₁(x) − g₂(x))²],

$$e(g_1, g_2) \;\le\; 2\,(e(g_1, g_3) + e(g_3, g_2)).$$
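Lemma 1 is easy to sanity-check numerically. The snippet below is our own check, using a random finite input space; it verifies the 2-triangle inequality on a random triple of functions.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000
P = rng.dirichlet(np.ones(m))           # a random distribution on X
g1, g2, g3 = rng.normal(size=(3, m))    # three real-valued functions

def e(a, b):
    # expected squared loss E_{x~P} (a(x) - b(x))^2
    return float(P @ (a - b) ** 2)

lhs = e(g1, g2)
rhs = 2 * (e(g1, g3) + e(g3, g2))
print(lhs <= rhs + 1e-12, lhs, rhs)
```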
The other required ingredient is a uniform convergence bound for regression with squared loss.
There is a rich literature on such bounds and their corresponding complexity measures for the model
class H, including the fat-shattering generalization of VC dimension [7], ε-nets and entropy [6] and
the combinatorial and pseudo-dimension approaches beautifully surveyed in [5]. For concreteness
here we adopt the latter approach, since it serves well in the following section on density estimation.
While a detailed exposition of the pseudo-dimension dim(H) of a class H of real-valued functions
exceeds both our space limitations and scope, it suffices to say that it generalizes the VC dimension
for binary functions and plays a similar role in uniform convergence bounds. More precisely, in the
same way that the VC dimension measures the largest set of points on which a set of classifiers can
exhibit "arbitrary" behavior (by achieving all possible labelings of the points), dim(H) measures
the largest set of points on which the output values induced by H are "full" or "space-filling."
(Technically we ask whether {⟨h(x₁), …, h(x_d)⟩ : h ∈ H} intersects all orthants of R^d with
respect to some chosen origin.) Ignoring constant and logarithmic factors, uniform convergence
bounds can be derived in which the complexity penalty is √(dim(H)/n). As with the VC dimension,
dim(H) is ordinarily closely related to the number of free parameters defining H. Thus for linear
functions in R^d it is O(d) and for neural networks with W weights it is O(W), and so on.
Careful application of pseudo-dimension results from [5] along with Lemma 1 and Theorem 1 yields
the following. A sketch of the proof appears in Appendix A.
Theorem 3 Let F be the set of functions from X into [−B, B] and let d be the pseudo-dimension of
H ⊆ F under squared loss. Let e be the expected squared loss. Let K ∈ ℕ, f = f₁, f₂, …, f_K ∈ F,
{ε_i}_{i=1}^K, {n_i}_{i=1}^K, and ĥ_k be as defined in the multi-source learning model. Assume that n₁ ≥
d/16e. For any δ such that 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, …, K}

$$e(f,\hat h_k) \;\le\; 6\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\,\epsilon_i \;+\; 4\min_{h\in H}\{e(f,h)\} \;+\; 128B^2\left(\sqrt{\frac{d}{n_{1:k}}\ln\!\left(\frac{16e\cdot 2n_{1:k}}{d}\right)} + \sqrt{\frac{\ln(16K/\delta)}{n_{1:k}}}\right)$$
4.3 Density estimation
We turn to the more complex application to density estimation. Here our models are no longer functions, but densities P. The loss function for an observation x is the log loss L(P, x) = log(1/P(x)).
The expected loss is then e(P₁, P₂) = E_{x∼P₁}[L(P₂, x)] = E_{x∼P₁}[log(1/P₂(x))].
As we are not aware of an α-triangle inequality that holds simultaneously for all density functions, we provide general mathematical tools to derive specialized α-triangle inequalities for specific
classes of distributions. We focus on the exponential family of distributions, which is quite general
and has nice properties which allow us to derive the necessary machinery to apply Theorem 1. We
start by defining the exponential family and explaining some of its properties. We proceed by deriving an α-triangle inequality for Kullback-Leibler divergence in exponential families that implies
an α-triangle inequality for our expected loss function. This inequality and a uniform convergence
bound based on pseudo-dimension yield a general method for deriving error bounds in the multiple
source setting which we illustrate using the example of multinomial distributions.

³A version of this paper with the appendix included can be found on the authors' websites.
Let x ∈ X be a random variable, in either a continuous space (e.g. X ⊆ R^d) or a discrete space
(e.g. X ⊆ Z^d). We define the exponential family of distributions in terms of the following components. First, we have a vector function of the sufficient statistics needed to compute the distribution,
denoted φ : R^d → R^{d̂}. Associated with φ is a vector of expectation parameters θ ∈ R^{d̂} which parameterizes a particular distribution. Next we have a convex vector function F : R^{d̂} → R (defined
below) which is unique for each family of exponential distributions, and a normalization function
P₀(x). Using this notation we define a probability distribution (in the expectation parameters) to be

$$P_F(x \mid \theta) = e^{\nabla F(\theta)\cdot(\phi(x)-\theta) + F(\theta)}\,P_0(x). \qquad (1)$$
For all distributions we consider it will hold that E_{x∼P_F(·|θ)}[φ(x)] = θ. Using this fact and the linearity of expectation, we can derive the Kullback-Leibler (KL) divergence between two distributions
of the same family (which use the same functions F and φ) and obtain

$$\mathrm{KL}\left(P_F(x \mid \theta_1)\,\|\,P_F(x \mid \theta_2)\right) = F(\theta_1) - \left[F(\theta_2) + \nabla F(\theta_2)\cdot(\theta_1 - \theta_2)\right]. \qquad (2)$$
We define the quantity on the right to be the Bregman divergence between the two (parameter) vectors θ₁ and θ₂, denoted B_F(θ₁ ‖ θ₂). The Bregman divergence measures the difference between
F and its first-order Taylor expansion about θ₂ evaluated at θ₁. Eq. (2) states that the KL divergence
between two members of the exponential family is equal to the Bregman divergence between the two
corresponding expectation parameters. We refer the reader to [8] for more details about Bregman
divergences and to [9] for more information about exponential families.
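For a concrete instance, take the Bernoulli family in its mean parameter: F is the negative entropy, and the Bregman divergence of Eq. (2) reproduces the familiar binary KL divergence. The check below is our own; the formulas follow directly from the definitions above.

```python
import numpy as np

def F(theta):                 # negative entropy of Bernoulli(theta)
    return theta * np.log(theta) + (1 - theta) * np.log(1 - theta)

def gradF(theta):
    return np.log(theta / (1 - theta))

def bregman(t1, t2):          # B_F(t1 || t2), right-hand side of Eq. (2)
    return F(t1) - (F(t2) + gradF(t2) * (t1 - t2))

def kl(p, q):                 # KL(Bernoulli(p) || Bernoulli(q))
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

print(bregman(0.3, 0.7), kl(0.3, 0.7))  # equal up to float rounding
```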
We will use the above relation between the KL divergence for exponential families and Bregman
divergences to derive a triangle inequality as required by our theory. The following lemma shows
that if we can provide a triangle inequality for the KL function, we can do so for expected log loss.
Lemma 2 Let e be the expected log loss, i.e. e(P₁, P₂) = E_{x∼P₁}[log(1/P₂(x))]. For any three
probability distributions P₁, P₂, and P₃, if KL(P₁ ‖ P₂) ≤ α(KL(P₁ ‖ P₃) + KL(P₃ ‖ P₂)) for
some α ≥ 1 then e(P₁, P₂) ≤ α(e(P₁, P₃) + e(P₃, P₂)).
The proof is given in Appendix B. The next lemma gives an approximate triangle inequality for the
KL divergence. We assume that there exists a closed set P = {θ} which contains all the parameter
vectors. The proof (again see Appendix B) uses Taylor's Theorem to derive upper and lower bounds
on the Bregman divergence and then uses Eq. (2) to relate these bounds to the KL divergence.
Lemma 3 Let P₁, P₂, and P₃ be distributions from an exponential family with parameters θ and
function F. Then

$$\mathrm{KL}(P_1 \| P_2) \;\le\; \alpha\left(\mathrm{KL}(P_1 \| P_3) + \mathrm{KL}(P_3 \| P_2)\right)$$

where α = 2 sup_{θ∈P} λ₁(H(F(θ))) / inf_{θ∈P} λ_{d̂}(H(F(θ))). Here λ₁(·) and λ_{d̂}(·) are the highest
and lowest eigenvalues of a given matrix, and H(·) is the Hessian matrix.
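In one dimension the Hessian is a scalar, so the constant of Lemma 3 reduces to a ratio of extreme values of F''. The sketch below computes α for the Bernoulli family on a restricted parameter set; the grid-based sup/inf is our own simplification.

```python
import numpy as np

def alpha_bernoulli(lo, hi, grid=10001):
    """Lemma 3 constant for Bernoulli mean parameters in [lo, hi].

    Here F''(theta) = 1 / (theta * (1 - theta)), so the sup/inf of the
    Hessian eigenvalues are just extremes of this scalar on the set.
    """
    theta = np.linspace(lo, hi, grid)
    hess = 1.0 / (theta * (1.0 - theta))
    return 2.0 * hess.max() / hess.min()

print(alpha_bernoulli(0.1, 0.9))  # 2 * 11.11 / 4 ~= 5.56
```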
The following theorem, which states bounds for multinomial distributions in the multi-source setting, is provided to illustrate the type of results that can be obtained using the machinery described in
this section. More details on the application to the multinomial distribution are given in Appendix B.
Theorem 4 Let F ⊇ H be the set of multinomial distributions over N values with the probability
of each value bounded from below by λ for some λ > 0, and let α = 2/λ. Let d be the pseudo-dimension of H under log loss, and let e be the expected log loss. Let K ∈ ℕ, f = f₁, f₂, …, f_K ∈ F,
{ε_i}_{i=1}^K,⁴ {n_i}_{i=1}^K, and ĥ_k be as defined above in the multi-source learning model. Assume that
n₁ ≥ d/16e. For any 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, …, K},

$$e(f,\hat h_k) \;\le\; (\alpha+\alpha^2)\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\,\epsilon_i \;+\; \alpha^2\min_{h\in H}\{e(f,h)\} \;+\; 128\,\alpha\log^2\!\left(\frac{2}{\lambda}\right)\left(\sqrt{\frac{d}{n_{1:k}}\ln\!\left(\frac{16e\cdot 2n_{1:k}}{d}\right)} + \sqrt{\frac{\ln(16K/\delta)}{n_{1:k}}}\right)$$

⁴Here we can actually make the weaker assumption that the ε_i bound the KL divergences rather than the
expected log loss, which avoids our needing upper bounds on the entropy of each source distribution.
5 Data-dependent bounds
Given the interest in data-dependent convergence methods (such as maximum margin, PAC-Bayes,
and others) in recent years, it is natural to ask how our multi-source theory can exploit these modern
bounds. We examine one specific case for classification here using Rademacher complexity [10, 11];
analogs can be derived in a similar manner for other learning problems.
If H is a class of functions mapping from a set X to R, we define the empirical Rademacher complexity of H on a fixed set of observations x₁, …, xₙ as

$$\hat R_n(H) = E\left[\sup_{h\in H}\frac{2}{n}\sum_{i=1}^{n}\sigma_i\,h(x_i)\;\middle|\;x_1,\ldots,x_n\right]$$

where the expectation is taken over independent uniform {±1}-valued random variables
σ₁, …, σₙ. The Rademacher complexity for n observations is then defined as R_n(H) = E[\hat R_n(H)] where the
expectation is over x₁, …, xₙ.
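For a finite hypothesis class the expectation over σ can simply be estimated by Monte Carlo. The sketch below is our own illustration of the definition; H is represented by its value matrix on the fixed sample.

```python
import numpy as np

def empirical_rademacher(H_vals, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity.

    H_vals has shape (|H|, n): row h holds (h(x_1), ..., h(x_n))
    on the fixed observations x_1, ..., x_n.
    """
    rng = np.random.default_rng(seed)
    _, n = H_vals.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)
        total += np.max(H_vals @ sigma)   # sup_h sum_i sigma_i h(x_i)
    return (2.0 / n) * (total / n_trials)

rng = np.random.default_rng(1)
H_vals = rng.choice([-1.0, 1.0], size=(50, 100))  # 50 random sign functions
print(empirical_rademacher(H_vals))
```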
We can apply Rademacher-based convergence bounds to obtain a data-dependent multi-source
bound for classification. A proof sketch using techniques and theorems of [10] is in Appendix C.
Theorem 5 Let F be the set of all functions from an input set X into {−1, 1} and let R̂_{n_{1:k}} be the
empirical Rademacher complexity of H ⊆ F on the first k piles of data. Let e be the expected 0/1
loss. Let K ∈ ℕ, f = f₁, f₂, …, f_K ∈ F, {ε_i}_{i=1}^K, {n_i}_{i=1}^K, and ĥ_k be as defined in the multi-source learning model. Assume that n₁ ≥ d/16e. For any δ such that 0 < δ < 1, with probability
at least 1 − δ, for any k ∈ {1, …, K}

$$e(f,\hat h_k) \;\le\; 2\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\,\epsilon_i \;+\; \min_{h\in H}\{e(f,h)\} \;+\; \hat R_{n_{1:k}}(H) \;+\; 4\sqrt{\frac{2\ln(4K/\delta)}{n_{1:k}}}$$
While the use of data-dependent complexity measures can be expected to yield more accurate bounds
and thus better decisions about the number k* of piles to use, it is not without its costs in comparison
to the more standard data-independent approaches. In particular, in principle the optimization of
the bound of Theorem 5 to choose k* may actually involve running the learning algorithm on all
possible prefixes of the piles, since we cannot know the data-dependent complexity term for each
prefix without doing so. In contrast, the data-independent bounds can be computed and optimized
for k* without examining the data at all, and the learning performed only once on the first k* piles.
References
[1] K. Crammer, M. Kearns, and J. Wortman. Learning from data of variable quality. In NIPS 18, 2006.
[2] P. Wu and T. Dietterich. Improving SVM accuracy by training on auxiliary data sources. In ICML, 2004.
[3] J. Baxter. Learning internal representations. In COLT, 1995.
[4] S. Ben-David. Exploiting task relatedness for multiple task learning. In COLT, 2003.
[5] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 1992.
[6] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[7] M. Kearns and R. Schapire. Efficient distribution-free learning of probabilistic concepts. JCSS, 1994.
[8] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, New York, NY, USA, 1997.
[9] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical Report 649, Department of Statistics, University of California, Berkeley, 2003.
[10] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 2002.
[11] V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Trans. Info. Theory, 2001.
| 2972 |@word briefly:1 version:2 stronger:1 advantageous:1 bf:1 p0:2 contains:2 disparity:5 zij:1 prefix:3 current:1 si:3 distant:4 subsequent:1 additive:1 plot:1 website:1 short:1 provides:2 mathematical:1 along:1 direct:2 prove:1 paragraph:1 manner:1 introduce:4 pairwise:1 indeed:1 expected:26 behavior:2 p1:14 examine:4 frequently:1 multi:6 decreasing:1 pf:7 becomes:1 provided:2 begin:1 underlying:4 unrelated:1 bounded:3 panel:3 notation:1 linearity:1 what:2 lowest:1 argmin:3 nj:1 pseudo:6 assert:1 every:1 hypothetical:1 berkeley:1 xd:1 exactly:1 fat:1 classifier:8 unit:2 converse:1 before:2 local:1 treat:1 oxford:1 establishing:1 might:6 chose:1 koltchinskii:1 studied:1 examined:2 quantified:1 limited:1 range:1 en1:1 obeys:2 unique:1 empirical:7 convenient:1 confidence:1 suggest:1 cannot:1 close:1 risk:2 context:1 restriction:1 optimize:1 straightforward:2 independently:1 convex:1 focused:1 haussler:1 deriving:2 population:1 notion:1 limiting:1 target:11 suppose:1 play:1 user:8 us:2 hypothesis:7 origin:2 pa:2 labeled:2 observed:1 role:2 jcss:1 trade:3 decrease:2 highest:1 broken:1 complexity:14 depend:2 incur:1 reordered:1 technically:1 f2:5 triangle:21 easily:1 indirect:1 represented:1 intersects:1 distinct:3 aggregate:2 neighborhood:1 whose:3 quite:4 larger:1 valued:4 say:2 drawing:1 otherwise:2 favor:1 statistic:2 g1:15 noisy:1 final:1 eigenvalue:1 net:2 intuitive:1 asserts:1 exploiting:1 convergence:18 rademacher:9 produce:1 generating:1 leave:1 ben:1 derive:5 develop:1 illustrate:2 nearest:1 eq:2 p2:13 auxiliary:1 skip:1 implies:1 closely:1 vc:9 require:1 hx:9 suffices:2 generalization:3 f1:13 summation:4 hold:8 proximity:1 scope:1 mapping:1 adopt:1 estimation:10 applicable:1 combinatorial:1 largest:2 create:1 tool:1 weighted:4 minimization:1 clearly:2 gaussian:1 rather:1 avoid:1 pn:2 varying:2 derived:3 focus:1 hk:7 orthants:1 contrast:1 dim:4 censor:1 inference:1 dependent:7 znj:1 typically:2 relation:1 labelings:2 interested:2 classification:14 aforementioned:1 multisource:1 denoted:2 colt:2 development:1 constrained:1 special:1 mutual:1 equal:2 aware:1 once:1 shattering:1 broad:1 koby:1 icml:1 filling:1 others:2 report:1 modern:1 simultaneously:2 divergence:14 geometry:2 n1:37 interest:3 yielding:1 accurate:2 bregman:7 edge:3 necessary:2 machinery:2 taylor:2 detailing:1 circle:1 theoretical:1 increased:3 instance:4 earlier:1 zn:2 applicability:1 cost:1 addressing:1 entry:1 subset:4 uniform:17 wortman:2 examining:1 corrupted:2 considerably:1 density:14 peak:1 accessible:1 probabilistic:2 off:3 michael:1 together:1 squared:5 central:3 satisfied:1 again:2 choose:1 possibly:2 corner:2 ek:5 includes:3 depends:2 piece:1 performed:1 closed:1 analyze:2 sup:2 doing:1 start:1 bayes:1 parallel:1 square:2 ni:25 accuracy:1 accomplishing:1 yield:6 weak:1 liebler:2 influenced:1 definition:5 associated:2 proof:8 ask:2 actually:2 appears:1 evaluated:1 generality:2 implicit:2 sketch:2 web:2 defines:1 quality:1 believe:1 usa:1 dietterich:2 pseudodimension:1 requiring:1 true:2 concept:1 deliberately:1 symmetric:2 generalized:2 theoretic:4 demonstrate:2 l1:1 fj:5 variational:1 consideration:1 fi:25 common:2 specialized:1 multinomial:4 twist:1 analog:1 refer:1 rd:8 fk:7 similarity:1 longer:2 etc:1 recent:2 perspective:1 inf:1 scenario:1 certain:1 inequality:22 binary:4 yi:11 preceding:1 aggregated:1 recommended:2 multiple:10 full:1 needing:1 exceeds:1 technical:1 long:1 involving:1 regression:11 circumstance:1 expectation:9 normalization:1 addition:1 source:35 subject:1 induced:2 undirected:1 
member:2 seem:1 jordan:1 structural:2 near:3 identically:1 easy:2 enough:1 variety:2 baxter:1 isolation:1 zi:1 pennsylvania:1 zenios:1 tradeoff:1 whether:1 bartlett:1 penalty:2 proceed:1 hessian:1 york:1 generally:3 detailed:1 involve:1 amount:1 locally:1 induces:1 svms:1 generate:2 schapire:1 notice:2 estimated:1 zd:1 discrete:1 shall:1 affected:1 express:2 key:2 putting:1 achieving:1 graph:4 concreteness:1 sum:1 year:1 family:14 reader:1 decide:2 wu:2 p3:8 decision:6 appendix:7 entirely:1 bound:47 display:1 ehx:2 strength:1 precisely:1 nearby:5 min:10 department:2 according:3 pg1:3 appealing:1 g3:6 s1:3 intuitively:1 restricted:2 taken:3 ln:5 jennifer:1 turn:2 loose:1 hh:1 needed:2 know:1 serf:1 available:1 generalizes:1 apply:5 disagreement:1 distinguished:1 alternative:2 running:1 include:4 graphical:1 log2:1 exploit:1 restrictive:1 classical:2 added:1 quantity:2 exhibit:1 beautifully:1 distance:6 assuming:1 providing:1 minimizing:3 demonstration:2 relate:1 info:1 stated:1 ordinarily:1 unknown:7 upper:3 observation:14 defining:2 grew:1 rn:3 arbitrary:2 david:1 pair:2 required:4 kl:14 z1:2 optimized:1 california:1 nip:1 trans:1 bar:1 below:3 max:2 including:1 wainwright:1 natural:5 residual:1 clickthroughs:1 improve:1 ready:1 philadelphia:1 nice:1 literature:2 loss:49 expect:3 highlight:1 adaptivity:1 interesting:1 limitation:1 ingredient:3 incurred:2 sufficient:1 principle:1 pile:21 free:3 formal:1 allow:1 weaker:1 fall:1 explaining:1 sparse:1 distributed:1 feedback:1 dimension:13 xn:3 avoids:1 rich:2 author:2 far:2 sj:3 approximate:6 relatedness:1 kullback:2 summing:1 xi:1 search:1 continuous:1 sk:3 learn:6 transfer:1 ignoring:1 improving:1 expansion:1 necessarily:2 complex:1 main:3 dense:1 noise:3 arise:1 prx:1 x1:4 site:2 ny:1 wiley:1 surveyed:1 wish:2 obeying:1 exponential:13 lie:2 theorem:23 specific:3 pac:2 svm:1 consist:1 exists:1 mendelson:1 vapnik:1 dissimilarity:1 margin:1 nk:2 entropy:2 generalizing:1 logarithmic:1 simply:5 likely:1 ez:1 visual:2 datadependent:1 g2:16 applies:1 satisfies:1 viewed:1 goal:4 marked:1 exposition:1 careful:1 experimentally:1 included:1 determined:1 except:1 typical:1 uniformly:1 kearns:3 lemma:7 called:1 internal:1 latter:1 crammer:2 incorporate:2 ex:6 |
2,173 | 2,973 | Fast Computation of Graph Kernels
S.V. N. Vishwanathan
[email protected]
Statistical Machine Learning, National ICT Australia,
Locked Bag 8001, Canberra ACT 2601, Australia
Research School of Information Sciences & Engineering
Australian National University, Canberra ACT 0200, Australia
Karsten M. Borgwardt
[email protected]
Institute for Computer Science, Ludwig-Maximilians-University Munich
Oettingenstr. 67, 80538 Munich, Germany
Nicol N. Schraudolph
[email protected]
Statistical Machine Learning, National ICT Australia
Locked Bag 8001, Canberra ACT 2601, Australia
Research School of Information Sciences & Engineering
Australian National University, Canberra ACT 0200, Australia
Abstract
Using extensions of linear algebra concepts to Reproducing Kernel Hilbert Spaces
(RKHS), we define a unifying framework for random walk kernels on graphs. Reduction to a Sylvester equation allows us to compute many of these kernels in
O(n³) worst-case time. This includes kernels whose previous worst-case time
complexity was O(n⁶), such as the geometric kernels of Gärtner et al. [1] and
the marginal graph kernels of Kashima et al. [2]. Our algebra in RKHS allows us
to exploit sparsity in directed and undirected graphs more effectively than previous methods, yielding sub-cubic computational complexity when combined with
conjugate gradient solvers or fixed-point iterations. Experiments on graphs from
bioinformatics and other application domains show that our algorithms are often
more than 1000 times faster than existing approaches.
1 Introduction
Machine learning in domains such as bioinformatics, drug discovery, and web data mining involves
the study of relationships between objects. Graphs are natural data structures to model such relations, with nodes representing objects and edges the relationships between them. In this context, one
often encounters the question: How similar are two graphs?
Simple ways of comparing graphs which are based on pairwise comparison of nodes or edges, are
possible in quadratic time, yet may neglect information represented by the structure of the graph.
Graph kernels, as originally proposed by Gärtner et al. [1], Kashima et al. [2], Borgwardt et al. [3],
take the structure of the graph into account. They work by counting the number of common random
walks between two graphs. Even though the number of common random walks could potentially be
exponential, polynomial time algorithms exist for computing these kernels. Unfortunately for the
practitioner, these kernels are still prohibitively expensive since their computation scales as O(n⁶),
where n is the number of vertices in the input graphs. This severely limits their applicability to
large-scale problems, as commonly found in areas such as bioinformatics.
In this paper, we extend common concepts from linear algebra to Reproducing Kernel Hilbert Spaces
(RKHS), and use these extensions to define a unifying framework for random walk kernels. We show
that computing many random walk graph kernels including those of Gärtner et al. [1] and Kashima
et al. [2] can be reduced to the problem of solving a large linear system, which can then be solved
efficiently by a variety of methods which exploit the structure of the problem.
2 Extending Linear Algebra to RKHS
Let φ : X → H denote the feature map from an input space X to the RKHS H associated with the
kernel κ(x, x′) = ⟨φ(x), φ(x′)⟩_H. Given an n by m matrix X ∈ X^{n×m} of elements X_{ij} ∈ X, we
extend φ to matrix arguments by defining Φ : X^{n×m} → H^{n×m} via [Φ(X)]_{ij} := φ(X_{ij}). We can
now borrow concepts from tensor calculus to extend certain linear algebra operations to H:
Definition 1 Let A ∈ X^{n×m}, B ∈ X^{m×p}, and C ∈ R^{m×p}. The matrix products Φ(A)Φ(B) ∈ R^{n×p}
and Φ(A)C ∈ H^{n×p} are

$$[\Phi(A)\Phi(B)]_{ik} := \sum_j \langle\phi(A_{ij}),\,\phi(B_{jk})\rangle_{H} \quad\text{and}\quad [\Phi(A)\,C]_{ik} := \sum_j \phi(A_{ij})\,C_{jk}.$$
Given A ∈ R^{n×m} and B ∈ R^{p×q} the Kronecker product A ⊗ B ∈ R^{np×mq} and vec operator are
defined as

$$A \otimes B := \begin{bmatrix} A_{11}B & A_{12}B & \ldots & A_{1m}B \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1}B & A_{n2}B & \ldots & A_{nm}B \end{bmatrix}, \qquad \operatorname{vec}(A) := \begin{bmatrix} A_{*1} \\ \vdots \\ A_{*m} \end{bmatrix}, \qquad (1)$$

where A_{*j} denotes the j-th column of A. They are linked by the well-known property:

$$\operatorname{vec}(ABC) = (C^\top \otimes A)\operatorname{vec}(B). \qquad (2)$$
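Property (2) is easy to confirm numerically; note that vec stacks columns, which in NumPy corresponds to Fortran-order reshaping. This check is our own, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
C = rng.normal(size=(5, 2))

def vec(M):
    return M.reshape(-1, order="F")   # column-stacking vec operator

print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))  # True
```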
Definition 2 Let A ∈ X^{n×m} and B ∈ X^{p×q}. The Kronecker product Φ(A) ⊗ Φ(B) ∈ R^{np×mq} is

$$[\Phi(A)\otimes\Phi(B)]_{ip+k,\;jq+l} := \langle\phi(A_{ij}),\,\phi(B_{kl})\rangle_{H}. \qquad (3)$$
It is easily shown that the above extensions to RKHS obey an analogue of (2):
Lemma 1 If A ∈ X^{n×m}, B ∈ R^{m×p}, and C ∈ X^{p×q}, then

$$\operatorname{vec}(\Phi(A)\,B\,\Phi(C)) = (\Phi(C)^\top \otimes \Phi(A))\operatorname{vec}(B). \qquad (4)$$
If p = q = n = m, direct computation of the right hand side of (4) requires O(n⁴) kernel evaluations. For an arbitrary kernel the left hand side also requires a similar effort. But, if the RKHS H is
isomorphic to R^r, in other words the feature map φ(·) ∈ R^r, the left hand side of (4) is easily computed in O(n³r) operations. Our efficient computation schemes described in Section 4 will exploit
this observation.
3 Random Walk Kernels
Random walk kernels on graphs are based on a simple idea: Given a pair of graphs perform a random
walk on both of them and count the number of matching walks [1, 2, 3]. These kernels mainly differ
in the way the similarity between random walks is computed. For instance, Gärtner et al. [1] count
the number of nodes in the random walk which have the same label. They also include a decay factor
to ensure convergence. Kashima et al. [2], and Borgwardt et al. [3] on the other hand, use a kernel
defined on nodes and edges in order to compute similarity between random walks, and define an
initial probability distribution over nodes in order to ensure convergence. In this section we present
a unifying framework which includes the above mentioned kernels as special cases.
3.1 Notation
We use ei to denote the i-th standard basis (i.e., a vector of all zeros with the i-th entry set to one), e
to denote a vector with all entries set to one, 0 to denote the vector of all zeros, and I to denote the
identity matrix. When it is clear from context we will not mention the dimensions of these vectors
and matrices.
A graph G ∈ G consists of an ordered and finite set of n vertices V denoted by {v₁, v₂, …, vₙ}, and
a finite set of edges E ⊆ V × V. A vertex v_i is said to be a neighbor of another vertex v_j if they are
connected by an edge. G is said to be undirected if (v_i, v_j) ∈ E ⟺ (v_j, v_i) ∈ E for all edges.
The unnormalized adjacency matrix of G is an n×n real matrix P with P_{ij} = 1 if (v_i, v_j) ∈ E, and
0 otherwise. If G is weighted then P can contain non-negative entries other than zeros and ones,
i.e., P_{ij} ∈ (0, ∞) if (v_i, v_j) ∈ E and zero otherwise.

Let D be an n×n diagonal matrix with entries D_{ii} = Σ_j P_{ij}. The matrix A := P D⁻¹ is then
called the normalized adjacency matrix, or simply adjacency matrix. A walk w on G is a sequence
of indices w₁, w₂, …, w_{t+1} where (v_{w_i}, v_{w_{i+1}}) ∈ E for all 1 ≤ i ≤ t. The length of a walk is equal
to the number of edges encountered during the walk (here: t). A graph is said to be connected if any
two pairs of vertices can be connected by a walk; here we always work with connected graphs. A
random walk is a walk where P(w_{i+1} | w₁, …, w_i) = A_{w_i, w_{i+1}}, i.e., the probability at w_i of picking
w_{i+1} next is directly proportional to the weight of the edge (v_{w_i}, v_{w_{i+1}}). The t-th power of the
transition matrix A describes the probability of t-length walks. In other words, [A^t]_{ij} denotes the
probability of a transition from vertex v_i to vertex v_j via a walk of length t. We use this intuition to
define random walk kernels on graphs.
Let X be a set of labels which includes the special label ε. Every edge-labeled graph G is associated
with a label matrix L ∈ X^{n×n}, such that L_{ij} = ε iff (v_i, v_j) ∉ E; in other words only those edges
which are present in the graph get a non-ε label. Let H be the RKHS endowed with the kernel
κ : X × X → R, and let φ : X → H denote the corresponding feature map which maps ε to the
zero element of H. We use Φ(L) to denote the feature matrix of G. For ease of exposition we do
not consider labels on vertices here, though our results hold for that case as well. Henceforth we use
the term labeled graph to denote an edge-labeled graph.
3.2 Product Graphs
Given two graphs G(V, E) and G′(V′, E′), the product graph G×(V×, E×) is a graph with nn′
vertices, each representing a pair of vertices from G and G′, respectively. An edge exists in E× iff
the corresponding vertices are adjacent in both G and G′. Thus

$$V_\times = \{(v_i, v'_{i'}) : v_i \in V \,\wedge\, v'_{i'} \in V'\}, \qquad (5)$$
$$E_\times = \{((v_i, v'_{i'}),\,(v_j, v'_{j'})) : (v_i, v_j) \in E \,\wedge\, (v'_{i'}, v'_{j'}) \in E'\}. \qquad (6)$$

If A and A′ are the adjacency matrices of G and G′, respectively, the adjacency matrix of the product
graph G× is A× = A ⊗ A′. An edge exists in the product graph iff an edge exists in both G and
G′, therefore performing a simultaneous random walk on G and G′ is equivalent to performing a
random walk on the product graph [4].
Let p and p′ denote initial probability distributions over vertices of G and G′. Then the initial
probability distribution p× of the product graph is p× := p ⊗ p′. Likewise, if q and q′ denote
stopping probabilities (i.e., the probability that a random walk ends at a given vertex), the stopping
probability q× of the product graph is q× := q ⊗ q′.

If G and G′ are edge-labeled, we can associate a weight matrix W× ∈ R^{nn′×nn′} with G×, using
our Kronecker product in RKHS (Definition 2): W× = Φ(L) ⊗ Φ(L′). As a consequence of the
definition of Φ(L) and Φ(L′), the entries of W× are non-zero only if the corresponding edge exists
in the product graph. The weight matrix is closely related to the adjacency matrix: assume that
H = R endowed with the usual dot product, and φ(L_{ij}) = 1 if (v_i, v_j) ∈ E or zero otherwise. Then
Φ(L) = A and Φ(L′) = A′, and consequently W× = A×, i.e., the weight matrix is identical to the
adjacency matrix of the product graph.
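These product-graph quantities are one-liners with a Kronecker product routine. The toy graphs below are our own example.

```python
import numpy as np

A1 = np.array([[0., 1.], [1., 0.]])                        # graph G, n = 2
A2 = np.array([[0., .5, .5], [1., 0., 0.], [1., 0., 0.]])  # graph G', n' = 3
p1, q1 = np.full(2, 1 / 2), np.full(2, 1 / 2)
p2, q2 = np.full(3, 1 / 3), np.full(3, 1 / 3)

A_x = np.kron(A1, A2)   # adjacency matrix of the product graph (6 x 6)
p_x = np.kron(p1, p2)   # initial distribution on the product graph
q_x = np.kron(q1, q2)   # stopping distribution
print(A_x.shape, p_x.sum())   # (6, 6) 1.0
```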
To extend the above discussion, assume that H = R^d endowed with the usual dot product, and that
there are d distinct edge labels {1, 2, …, d}. For each edge (v_i, v_j) ∈ E we have φ(L_{ij}) = e_l if
the edge (v_i, v_j) is labeled l. All other entries of Φ(L) are set to 0. κ is therefore a delta kernel, i.e.,
its value between any two edges is one iff the labels on the edges match, and zero otherwise. The
weight matrix W× has a non-zero entry iff an edge exists in the product graph and the corresponding
edges in G and G′ have the same label. Let A^{(l)} denote the adjacency matrix of the graph filtered by
the label l, i.e., A^{(l)}_{ij} = A_{ij} if L_{ij} = l and zero otherwise. Some simple algebra (omitted for the sake
of brevity) shows that the weight matrix of the product graph can be written as

$$W_\times = \sum_{l=1}^{d} A^{(l)} \otimes A'^{(l)}. \qquad (7)$$

3.3 Kernel Definition
Performing a random walk on the product graph G× is equivalent to performing a simultaneous
random walk on the graphs G and G′ [4]. Therefore, the (in + j, i′n′ + j′)-th entry of A×^k represents
the probability of simultaneous k length random walks on G (starting from vertex v_i and ending in
vertex v_j) and G′ (starting from vertex v′_{i′} and ending in vertex v′_{j′}). The entries of W× represent
similarity between edges. The (in + j, i′n′ + j′)-th entry of W×^k represents the similarity between
simultaneous k length random walks on G and G′ measured via the kernel function κ.

Given the weight matrix W×, initial and stopping probability distributions p× and q×, and an appropriately chosen discrete measure μ, we can define a random walk kernel on G and G′ as

$$k(G, G') := \sum_{k=0}^{\infty} \mu(k)\, q_\times^\top W_\times^k\, p_\times. \qquad (8)$$
In order to show that (8) is a valid Mercer kernel we need the following technical lemma.
Lemma 2 ∀ k ∈ ℕ₀: W_×^k p_× = vec[Φ(L′)^k p′ (Φ(L)^k p)^⊤].
Proof By induction over k. Base case: k = 0. Since Φ(L′)⁰ = Φ(L)⁰ = I, using (2) we can write

$$W_\times^0 p_\times = p_\times = (p \otimes p')\operatorname{vec}(1) = \operatorname{vec}(p'\,1\,p^\top) = \operatorname{vec}[\Phi(L')^0 p' (\Phi(L)^0 p)^\top].$$

Induction from k to k + 1: Using Lemma 1 we obtain

$$W_\times^{k+1} p_\times = W_\times W_\times^k p_\times = (\Phi(L)\otimes\Phi(L'))\operatorname{vec}[\Phi(L')^k p' (\Phi(L)^k p)^\top] = \operatorname{vec}[\Phi(L')\Phi(L')^k p' (\Phi(L)^k p)^\top \Phi(L)^\top] = \operatorname{vec}[\Phi(L')^{k+1} p' (\Phi(L)^{k+1} p)^\top].$$
Lemma 3 If the measure μ(k) is such that (8) converges, then it defines a valid Mercer kernel.
Proof Using Lemmas 1 and 2 we can write

$$q_\times^\top W_\times^k p_\times = (q \otimes q')^\top \operatorname{vec}[\Phi(L')^k p' (\Phi(L)^k p)^\top] = \operatorname{vec}[q'^\top \Phi(L')^k p' (\Phi(L)^k p)^\top q] = \underbrace{(q^\top \Phi(L)^k p)^\top}_{\rho_k(G)^\top}\;\underbrace{(q'^\top \Phi(L')^k p')}_{\rho_k(G')}.$$

Each individual term of (8) equals ρ_k(G)^⊤ ρ_k(G′) for some function ρ_k, and is therefore a valid
kernel. The lemma follows since a convex combination of kernels is itself a valid kernel.
3.4 Special Cases
A popular choice to ensure convergence of (8) is to assume μ(k) = λ^k for some λ > 0. If λ is
sufficiently small¹ then (8) is well defined, and we can write

$$k(G, G') = \sum_k \lambda^k\, q_\times^\top W_\times^k p_\times = q_\times^\top (I - \lambda W_\times)^{-1} p_\times. \qquad (9)$$
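Equation (9) can be evaluated directly with a dense linear solve, which is the O(n⁶) baseline the rest of this section improves upon. The sketch below is our own small-scale reference implementation.

```python
import numpy as np

def rw_kernel_direct(W_x, p_x, q_x, lam):
    # k(G, G') = q_x^T (I - lam W_x)^{-1} p_x via a dense solve;
    # cost is cubic in nn', i.e. O(n^6), so only usable on toy graphs.
    I = np.eye(W_x.shape[0])
    return float(q_x @ np.linalg.solve(I - lam * W_x, p_x))

W_x = np.kron(np.array([[0., 1.], [1., 0.]]),
              np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]]))
p_x = q_x = np.full(6, 1 / 6)
print(rw_kernel_direct(W_x, p_x, q_x, lam=0.2))
```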
Kashima et al. [2] use marginalization and probabilities of random walks to define kernels on graphs.
Given transition probability matrices P and P′ associated with graphs G and G′ respectively, their
kernel can be written as (see Eq. 1.19, [2])

$$k(G, G') = q_\times^\top (I - T_\times)^{-1} p_\times, \qquad (10)$$

¹The values of λ which ensure convergence depend on the spectrum of W×.
where T× := (vec(P) vec(P′)^⊤) ⊙ (Φ(L) ⊗ Φ(L′)), using ⊙ to denote element-wise (Hadamard)
multiplication. The edge kernel κ̂(L_{ij}, L′_{i′j′}) := P_{ij} P′_{i′j′} κ(L_{ij}, L′_{i′j′}) with λ = 1 recovers (9).
Gärtner et al. [1] use the adjacency matrix of the product graph to define the so-called geometric
kernel

$$k(G, G') = \sum_{i=1}^{nn'}\sum_{j=1}^{nn'}\sum_{k=0}^{\infty} \lambda^k\,[A_\times^k]_{ij}. \qquad (11)$$
To recover their kernel in our framework, assume a uniform distribution over the vertices of G and
G′, i.e., set p = q = 1/n and p′ = q′ = 1/n′. The initial as well as final probability distribution
over vertices of G× is given by p× = q× = e/(nn′). Setting Φ(L) := A, and hence Φ(L′) = A′
and W× = A×, we can rewrite (8) to obtain

$$k(G, G') = \sum_{k=0}^{\infty} \lambda^k\, q_\times^\top A_\times^k\, p_\times = \frac{1}{n^2 n'^2}\sum_{k=0}^{\infty}\sum_{i=1}^{nn'}\sum_{j=1}^{nn'} \lambda^k\,[A_\times^k]_{ij},$$

which recovers (11) to within a constant factor.
4 Efficient Computation
In this section we show that iterative methods, including those based on Sylvester equations, conjugate gradients, and fixed-point iterations, can be used to greatly speed up the computation of (9).
4.1 Sylvester Equation Methods
Consider the following equation, commonly known as the Sylvester or Lyapunov equation:

$$X = SXT + X_0. \qquad (12)$$

Here, S, T, X₀ ∈ R^{n×n} are given and we need to solve for X ∈ R^{n×n}. These equations can be
readily solved in O(n³) time with freely available code [5], e.g. Matlab's dlyap method. The
generalized Sylvester equation

$$X = \sum_{i=1}^{d} S_i X T_i + X_0 \qquad (13)$$

can also be solved efficiently, albeit at a slightly higher computational cost of O(dn³).
We now show that if the weight matrix W× can be written as (7) then the problem of computing the
graph kernel (9) can be reduced to the problem of solving the following Sylvester equation:

$$X = \lambda\sum_i A'^{(i)} X A^{(i)\top} + X_0, \qquad (14)$$

where vec(X₀) = p×. We begin by flattening the above equation:

$$\operatorname{vec}(X) = \lambda\sum_i \operatorname{vec}(A'^{(i)} X A^{(i)\top}) + p_\times. \qquad (15)$$

Using Lemma 1 we can rewrite (15) as

$$\Big(I - \lambda\sum_i A^{(i)} \otimes A'^{(i)}\Big)\operatorname{vec}(X) = p_\times, \qquad (16)$$

use (7), and solve for vec(X):

$$\operatorname{vec}(X) = (I - \lambda W_\times)^{-1} p_\times. \qquad (17)$$

Multiplying both sides by q_×^⊤ yields

$$q_\times^\top \operatorname{vec}(X) = q_\times^\top (I - \lambda W_\times)^{-1} p_\times. \qquad (18)$$
The right-hand side of (18) is the graph kernel (9). Given the solution X of the Sylvester equation
(14), the graph kernel can be obtained as q_×^⊤ vec(X) in O(n²) time. Since solving the generalized
Sylvester equation takes O(dn³) time, computing the graph kernel in this fashion is significantly
faster than the O(n⁶) time required by the direct approach.
Where the number of labels d is large, the computational cost may be reduced further by computing
matrices S and T such that W× ≈ S ⊗ T. We then simply solve the simple Sylvester equation
(12) involving these matrices. Finding the nearest Kronecker product approximating a matrix such
as W× is a well-studied problem in numerical linear algebra and efficient algorithms which exploit
sparsity of W× are readily available [6].
4.2 Conjugate Gradient Methods
Given a matrix M and a vector b, conjugate gradient (CG) methods solve the system of equations
M x = b efficiently [7]. While they are designed for symmetric positive semi-definite matrices,
CG solvers can also be used to solve other linear systems efficiently. They are particularly efficient
if the matrix is rank deficient, or has a small effective rank, i.e., number of distinct eigenvalues.
Furthermore, if computing matrix-vector products is cheap ? because M is sparse, for instance ?
the CG solver can be sped up significantly [7]. Specifically, if computing M v for an arbitrary vector
v requires O(k) time, and the effective rank of the matrix is m, then a CG solver requires only
O(mk) time to solve M x = b.
The graph kernel (9) can be computed by a two-step procedure: first we solve the linear system

$$(I - \lambda W_\times)\, x = p_\times, \qquad (19)$$

for x, then we compute q_×^⊤ x. We now focus on efficient ways to solve (19) with a CG solver. Recall
that if G and G′ contain n vertices each then W× is an n²×n² matrix. Directly computing the
matrix-vector product W× r requires O(n⁴) time. Key to our speed-ups is the ability to exploit
Lemma 1 to compute this matrix-vector product more efficiently: recall that W× = Φ(L) ⊗ Φ(L′).
Letting r = vec(R), we can use Lemma 1 to write

$$W_\times r = (\Phi(L) \otimes \Phi(L'))\operatorname{vec}(R) = \operatorname{vec}(\Phi(L')\, R\, \Phi(L)^\top). \qquad (20)$$
If φ(·) ∈ R^r for some r, then the above matrix-vector product can be computed in O(n³r) time. If
Φ(L) and Φ(L′) are sparse, however, then Φ(L′)R Φ(L)^⊤ can be computed yet more efficiently: if
there are O(n) non-ε entries in Φ(L) and Φ(L′), then computing (20) requires only O(n²) time.
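Putting (19) and (20) together gives a matrix-free CG solver: the operator below never forms W×, and each matvec costs one triple product per label. This is our own sketch, written for undirected graphs so that the system is symmetric; a nonsymmetric W× would call for GMRES or BiCGSTAB instead.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def rw_kernel_cg(As, A2s, p_x, q_x, lam):
    """q_x^T (I - lam W_x)^{-1} p_x with W_x = sum_l kron(As[l], A2s[l]).

    Each matvec uses (A (x) A') vec(R) = vec(A' R A^T) at O(n^3)
    per label, instead of forming the (nn' x nn') matrix.
    """
    n, n2 = As[0].shape[0], A2s[0].shape[0]

    def matvec(x):
        R = x.reshape(n2, n, order="F")     # undo column-stacking vec
        Wx = np.zeros_like(R)
        for Al, A2l in zip(As, A2s):
            Wx += A2l @ R @ Al.T
        return x - lam * Wx.reshape(-1, order="F")

    op = LinearOperator((n * n2, n * n2), matvec=matvec)
    x, info = cg(op, p_x)
    assert info == 0
    return float(q_x @ x)

A1 = [np.array([[0., 1.], [1., 0.]])]                       # one edge label
A2 = [np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])]
p_x = q_x = np.full(6, 1 / 6)
print(rw_kernel_cg(A1, A2, p_x, q_x, lam=0.2))
```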
4.3 Fixed-Point Iterations

Fixed-point methods begin by rewriting (19) as

$$x = p_\times + \lambda W_\times x. \qquad (21)$$
Now, solving for x is equivalent to finding a fixed point of the above iteration [7]. Letting x_t denote
the value of x at iteration t, we set x₀ := p×, then compute

$$x_{t+1} = p_\times + \lambda W_\times x_t \qquad (22)$$

repeatedly until ‖x_{t+1} − x_t‖ < ε, where ‖·‖ denotes the Euclidean norm and ε some pre-defined
tolerance. This is guaranteed to converge if all eigenvalues of λW× lie inside the unit disk; this can
be ensured by setting λ < 1/ξ_max, where ξ_max is the largest-magnitude eigenvalue of W×.
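The iteration itself is a few lines. A dense W× is used below purely for clarity; in practice the matvec would reuse the Kronecker trick of Section 4.2. Our own sketch:

```python
import numpy as np

def rw_kernel_fp(W_x, p_x, q_x, lam, tol=1e-6, max_iter=10_000):
    x = p_x.copy()
    for _ in range(max_iter):
        x_new = p_x + lam * (W_x @ x)            # Eq. (22)
        if np.linalg.norm(x_new - x) < tol:
            return float(q_x @ x_new)
        x = x_new
    raise RuntimeError("not converged; choose a smaller lam")

W_x = np.kron(np.array([[0., 1.], [1., 0.]]),
              np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]]))
p_x = q_x = np.full(6, 1 / 6)
print(rw_kernel_fp(W_x, p_x, q_x, lam=0.2))
```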
The above is closely related to the power method used to compute the largest eigenvalue of a matrix
[8]; efficient preconditioners can also be used to speed up convergence [8]. Since each iteration
of (22) involves computation of the matrix-vector product W× x_t, all speed-ups for computing the
matrix-vector product discussed in Section 4.2 are applicable here. In particular, we exploit the fact
that W× is a sum of Kronecker products to reduce the worst-case time complexity to O(n³) in our
experiments, in contrast to Kashima et al. [2] who computed the matrix-vector product explicitly.
5 Experiments
To assess the practical impact of our algorithmic improvements, we compared our techniques from
Section 4 with G?artner et al.?s [1] direct approach as a baseline. All code was written in MATLAB
Release 14, and experiments run on a 2.6 GHz Intel Pentium 4 PC with 2 GB of main memory
running Suse Linux. The Matlab function dlyap was used to solve the Sylvester equation.
By default, we used a value of λ = 0.001, and set the tolerance for both CG solver and fixed-point
iteration to 10⁻⁶ for all our experiments. We used Lemma 1 to speed up matrix-vector multiplication
for both CG and fixed-point methods (cf. Section 4.2). Since all our methods are exact and produce
the same kernel values (to numerical precision), we only report their runtimes below.
We tested the practical feasibility of the presented techniques on four real-world datasets whose
size mandates fast graph kernel computation; two datasets of molecular compounds (MUTAG and
PTC), and two datasets with hundreds of graphs describing protein tertiary structure (Protein and
Enzyme). Graph kernels provide useful measures of similarity for all these graphs; please refer to
the addendum for more details on these datasets, and applications for graph kernels on them.
Figure 1: Time (in seconds on a log-scale) to compute 100×100 kernel matrix for unlabeled (left)
resp. labelled (right) graphs from several datasets. Compare the conventional direct method (black)
to our fast Sylvester equation, conjugate gradient (CG), and fixed-point iteration (FP) approaches.
5.1 Unlabeled Graphs
In a first series of experiments, we compared graph topology only on our 4 datasets, i.e., without
considering node and edge labels. We report the time taken to compute the full graph kernel matrix
for various sizes (number of graphs) in Table 1 and show the results for computing a 100×100
sub-matrix in Figure 1 (left).
On unlabeled graphs, conjugate gradient and fixed-point iteration (sped up via our Lemma 1) are
consistently about two orders of magnitude faster than the conventional direct method. The Sylvester
approach is very competitive on smaller graphs (outperforming CG on MUTAG) but slows down
with increasing number of nodes per graph; this is because we were unable to incorporate Lemma 1
into Matlab's black-box dlyap solver. Even so, the Sylvester approach still greatly outperforms
the direct method.
5.2 Labeled Graphs
In a second series of experiments, we compared graphs with node and edge labels. On our two
protein datasets we employed a linear kernel to measure similarity between edge labels representing
distances (in ångströms) between secondary structure elements. On our two chemical datasets we
used a delta kernel to compare edge labels reflecting types of bonds in molecules. We report results
in Table 2 and Figure 1 (right).
On labeled graphs, our three methods outperform the direct approach by about a factor of 1000
when using the linear kernel. In the experiments with the delta kernel, conjugate gradient and fixed-point iteration are still at least two orders of magnitude faster. Since we did not have access to a
generalized Sylvester equation (13) solver, we had to use a Kronecker product approximation [6]
which dramatically slowed down the Sylvester equation approach.
Table 1: Time to compute kernel matrix for given number of unlabeled graphs from various datasets.
dataset        MUTAG            PTC              Enzyme           Protein
nodes/graph    17.7             26.7             32.6             38.6
edges/node     2.2              1.9              3.8              3.7
#graphs        100     230      100     417      100     600      100     1128
Direct         18'09"  104'31"  142'53" 41h*     31h*    46.5d*   36d*    12.5y*
Sylvester      25.9"   2'16"    73.8"   19'30"   48.3"   36'43"   69'15"  6.1d*
Conjugate      42.1"   4'04"    58.4"   19'27"   44.6"   34'58"   55.3"   97'13"
Fixed-Point    12.3"   1'09"    32.4"   5'59"    13.6"   15'23"   31.1"   40'58"

*: Extrapolated; run did not finish in time available.
Table 2: Time to compute kernel matrix for given number of labeled graphs from various datasets.
kernel         delta                             linear
dataset        MUTAG            PTC              Enzyme           Protein
#graphs        100     230      100     417      100     600      100     1128
Direct         7.2h    1.6d*    1.4d*   25d*     2.4d*   86d*     5.3d*   18y*
Sylvester      3.9d*   21d*     2.7d*   46d*     89.8"   53'55"   25'24"  2.3d*
Conjugate      2'35"   13'46"   3'20"   53'31"   124.4"  71'28"   3'01"   4.1h
Fixed-Point    1'05"   6'09"    1'31"   26'52"   50.1"   35'24"   1'47"   1.9h

*: Extrapolated; run did not finish in time available.
6 Outlook and Discussion
We have shown that computing random walk graph kernels is essentially equivalent to solving a large
linear system. We have extended a well-known identity for Kronecker products which allows us to
exploit the structure inherent in this problem. From this we have derived three efficient techniques
to solve the linear system, employing either Sylvester equations, conjugate gradients, or fixed-point
iterations. Experiments on real-world datasets have shown our methods to be scalable and fast, in
some instances outperforming the conventional approach by more than three orders of magnitude.
Even though the Sylvester equation method has a worst-case complexity of O(n³), the conjugate
gradient and fixed-point methods tend to be faster on all our datasets. This is because computing
matrix-vector products via Lemma 1 is quite efficient when the graphs are sparse, so that the feature
matrices Φ(L) and Φ(L′) contain only O(n) non-ε entries. Matlab's black-box dlyap solver is
unable to exploit this sparsity; we are working on more capable alternatives. An efficient generalized
Sylvester solver requires extensive use of tensor calculus and is part of ongoing work.
As more and more graph-structured data becomes available in areas such as biology, web data mining, etc., graph classification will gain importance over coming years. Hence there is a pressing
need to speed up the computation of similarity metrics on graphs. We have shown that sparsity, low
effective rank, and Kronecker product structure can be exploited to greatly reduce the computational
cost of graph kernels; taking advantage of other forms of structure in W? remains a challenge. Now
that the computation of random walk graph kernels is viable for practical problem sizes, it will open
the doors for their application in hitherto unexplored domains. The algorithmic challenge now is
how to integrate higher-order structures, such as spanning trees, in graph comparisons.
Acknowledgments
National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia's Ability and the
ICT Center of Excellence program. This work is supported by the IST Program of the European Community,
under the Pascal Network of Excellence, IST-2002-506778, and by the German Ministry for Education, Science, Research and Technology (BMBF) under grant no. 031U112F within the BFAM (Bioinformatics for the
Functional Analysis of Mammalian Genomes) project, part of the German Genome Analysis Network (NGFN).
References
[1] T. Gärtner, P. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In B. Schölkopf and M. K. Warmuth, editors, Proc. Annual Conf. Comput. Learning Theory. Springer, 2003.
[2] H. Kashima, K. Tsuda, and A. Inokuchi. Kernels on graphs. In K. Tsuda, B. Schölkopf, and J. Vert, editors, Kernels and Bioinformatics, Cambridge, MA, 2004. MIT Press.
[3] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H. P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(Suppl 1):i47–i56, 2005.
[4] F. Harary. Graph Theory. Addison-Wesley, Reading, MA, 1969.
[5] J. D. Gardiner, A. L. Laub, J. J. Amato, and C. B. Moler. Solution of the Sylvester matrix equation AXB^T + CXD^T = E. ACM Transactions on Mathematical Software, 18(2):223–231, 1992.
[6] C. F. Van Loan. The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics, 123:85–100, 2000.
[7] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research, 1999.
[8] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, 3rd edition, 1996.
| 2973 |@word polynomial:1 norm:1 flach:1 disk:1 open:1 calculus:2 p0:13 mention:1 outlook:1 reduction:1 initial:5 series:3 rkhs:9 outperforms:1 existing:1 com:2 comparing:1 si:1 yet:2 written:4 readily:2 john:1 numerical:3 cheap:1 designed:1 n0:5 warmuth:1 tertiary:1 filtered:1 node:10 mathematical:1 dn:1 direct:9 ik:2 viable:1 laub:1 consists:1 artner:7 inside:1 excellence:2 x0:8 pairwise:1 hardness:1 karsten:1 xti:1 solver:10 considering:1 increasing:1 begin:2 becomes:1 notation:1 project:1 hitherto:1 finding:2 a1m:1 every:1 unexplored:1 act:4 prohibitively:1 rm:2 ensured:1 unit:1 grant:1 positive:1 engineering:2 limit:1 severely:1 consequence:1 ak:2 black:3 awi:1 au:2 studied:1 ease:1 locked:2 directed:1 bjk:1 practical:3 acknowledgment:1 definite:1 sxt:1 procedure:1 area:2 rnn:1 drug:1 significantly:2 vert:1 matching:1 ups:2 word:3 pre:1 protein:6 get:1 unlabeled:4 operator:1 context:2 equivalent:4 map:4 conventional:3 center:1 starting:2 convex:1 borrow:1 mq:2 nn0:2 resp:1 exact:1 associate:1 element:4 expensive:1 particularly:1 mammalian:1 labeled:8 solved:3 worst:4 connected:4 mentioned:1 intuition:1 complexity:4 ong:1 solving:5 rewrite:2 algebra:7 exit:1 basis:1 easily:2 represented:1 various:3 distinct:2 fast:4 effective:3 whose:2 quite:1 solve:10 otherwise:5 ability:2 itself:1 ip:1 final:1 sequence:1 rr:3 eigenvalue:4 pressing:1 advantage:1 product:33 coming:1 hadamard:1 iff:5 ludwig:1 olkopf:2 convergence:5 extending:1 produce:1 a11:1 converges:1 object:2 measured:1 nearest:1 ij:4 school:2 eq:1 involves:2 australian:4 differ:1 lyapunov:1 closely:2 australia:8 a12:1 dii:1 adjacency:9 education:1 government:1 extension:3 hold:1 sufficiently:1 wright:1 algorithmic:2 omitted:1 proc:1 applicable:1 bag:2 label:15 bond:1 council:1 largest:2 weighted:1 mit:1 always:1 i56:1 l0:23 focus:1 release:1 derived:1 improvement:1 consistently:1 rank:4 amato:1 mainly:1 greatly:3 contrast:1 pentium:1 cg:9 baseline:1 stopping:3 el:1 nn:1 i0:2 a0:3 relation:1 jq:1 germany:1 backing:1 classification:1 pascal:1 denoted:1 art:1 special:3 marginal:1 equal:2 runtimes:1 identical:1 represents:2 biology:1 report:3 inherent:1 national:5 individual:1 mining:2 evaluation:1 golub:1 yielding:1 pc:1 edge:30 capable:1 tree:1 euclidean:1 walk:32 tsuda:2 mk:1 instance:3 column:1 applicability:1 cost:3 vertex:20 entry:12 uniform:1 hundred:1 combined:1 borgwardt:5 picking:1 hopkins:1 w1:2 linux:1 hn:2 henceforth:1 conf:1 cxd:1 account:1 de:1 includes:3 suse:1 explicitly:1 vi:15 depends:1 linked:1 competitive:1 recover:1 ass:1 who:1 efficiently:6 likewise:1 yield:1 multiplying:1 simultaneous:4 definition:5 associated:3 proof:2 recovers:2 gain:1 dataset:2 popular:1 recall:2 ubiquitous:1 hilbert:2 reflecting:1 wesley:1 originally:1 higher:2 xxx:1 though:3 box:2 furthermore:1 xa:1 smola:1 until:1 lmu:1 hand:5 working:1 web:2 ei:1 mutag:4 defines:1 vj0:3 concept:3 contain:3 normalized:1 hence:2 chemical:1 symmetric:1 adjacent:1 during:1 anm:1 please:1 unnormalized:1 generalized:4 wise:1 common:3 sped:2 functional:1 extend:4 discussed:1 refer:1 cambridge:1 vec:28 rd:2 l0i:1 mathematics:1 had:1 dot:2 funded:1 access:1 similarity:7 an2:1 etc:1 base:1 enzyme:3 compound:1 certain:1 outperforming:2 moler:1 exploited:1 ministry:1 employed:1 freely:1 converge:1 semi:1 full:1 technical:1 faster:5 match:1 schraudolph:2 molecular:1 addendum:1 feasibility:1 impact:1 prediction:1 involving:1 scalable:1 sylvester:21 essentially:1 metric:1 iteration:11 kernel:62 represent:1 suppl:1 baltimore:1 mandate:1 appropriately:1 w2:1 sch:2 tend:1 
deficient:1 db:1 undirected:2 practitioner:1 counting:1 door:1 variety:1 marginalization:1 finish:2 ifi:1 topology:1 reduce:2 idea:1 svn:1 gb:1 effort:1 repeatedly:1 matlab:5 dramatically:1 useful:1 clear:1 nic:1 reduced:3 outperform:1 exist:1 xij:2 delta:4 per:1 discrete:1 write:4 ist:2 key:1 four:1 ptc:3 rewriting:1 nocedal:1 v1:1 graph:81 sum:1 year:1 run:3 vn:1 guaranteed:1 dn3:1 quadratic:1 encountered:1 annual:1 gardiner:1 vishwanathan:3 kronecker:9 n3:6 software:1 sake:1 speed:6 argument:1 preconditioners:1 performing:4 structured:1 munich:2 department:1 combination:1 rnp:2 conjugate:11 describes:1 slightly:1 smaller:1 maximilians:1 wi:5 harary:1 n4:2 slowed:1 taken:1 equation:20 remains:1 describing:1 count:2 german:2 addison:1 letting:2 end:1 available:5 operation:3 endowed:3 obey:1 v2:1 kashima:7 alternative:2 encounter:1 rp:1 denotes:3 running:1 include:1 ensure:4 cf:1 unifying:3 neglect:1 exploit:8 approximating:1 tensor:2 g0:20 question:1 usual:2 diagonal:1 md:1 said:3 gradient:9 distance:1 unable:2 spanning:1 nicta:2 induction:2 length:5 code:2 index:1 relationship:2 unfortunately:1 potentially:1 negative:1 slows:1 perform:1 observation:1 datasets:12 finite:2 i47:1 defining:1 extended:1 communication:1 rn:2 reproducing:2 arbitrary:2 community:1 pair:3 required:1 extensive:1 kriegel:1 below:1 fp:1 sparsity:4 challenge:2 reading:1 program:2 including:2 max:2 memory:1 analogue:1 power:2 natural:1 representing:3 scheme:1 technology:2 n6:3 lij:6 ict:4 geometric:2 discovery:1 nicol:1 multiplication:2 oms:1 proportional:1 integrate:1 pij:4 mercer:2 editor:2 extrapolated:2 supported:1 aij:5 side:5 allow:1 institute:1 neighbor:1 vwi:4 taking:1 sparse:3 tolerance:2 ghz:1 van:1 dimension:1 default:1 transition:3 ending:2 valid:4 world:2 genome:2 commonly:2 fixedpoint:1 employing:1 transaction:1 spectrum:1 iterative:1 table:4 an1:1 molecule:1 european:1 domain:3 vj:13 flattening:1 did:3 main:1 edition:1 n2:4 inokuchi:1 canberra:4 intel:1 cubic:1 fashion:1 bmbf:1 precision:1 sub:2 exponential:1 comput:1 lie:1 down:2 wrobel:1 xt:6 decay:1 cjk:1 exists:4 ih:3 albeit:1 effectively:1 importance:1 magnitude:4 simply:2 ordered:1 springer:2 abc:1 ma:2 acm:1 identity:2 consequently:1 exposition:1 bkl:1 labelled:1 axb:1 loan:2 specifically:1 wt:1 lemma:14 called:2 isomorphic:1 secondary:1 brevity:1 bioinformatics:6 ongoing:1 incorporate:1 tested:1 |
2,174 | 2,974 | Single Channel Speech Separation
Using Factorial Dynamics
John R. Hershey
Trausti Kristjansson
Steven Rennie
Peder A. Olsen
IBM Thomas J. Watson Research Center
Yorktown Heights, NY 10598
Abstract
Human listeners have the extraordinary ability to hear and recognize speech even
when more than one person is talking. Their machine counterparts have historically been unable to compete with this ability, until now. We present a model-based system that performs on par with humans in the task of separating speech
of two talkers from a single-channel recording. Remarkably, the system surpasses
human recognition performance in many conditions. The models of speech use
temporal dynamics to help infer the source speech signals, given mixed speech
signals. The estimated source signals are then recognized using a conventional
speech recognition system. We demonstrate that the system achieves its best performance when the model of temporal dynamics closely captures the grammatical
constraints of the task.
One of the hallmarks of human perception is our ability to solve the auditory cocktail party problem:
we can direct our attention to a given speaker in the presence of interfering speech, and understand
what was said remarkably well. Until now the same could not be said for automatic speech recognition systems. However, we have recently introduced a system which in many conditions performs
this task better than humans [1][2]. The model addresses the Pascal Speech Separation Challenge
task [3], and outperforms all other published results by more than 10% word error rate (WER). In
this model, dynamics are modeled using a layered combination of one or two Markov chains: one
for long-term dependencies and another for short-term dependencies. The combination of the two
speakers was handled via an iterative Laplace approximation method known as Algonquin [4]. Here
we describe experiments that show better performance on the same task with a simpler version of
the model.
The task we address is provided by the PASCAL Speech Separation Challenge [3], which provides
standard training, development, and test data sets of single-channel speech mixtures following an
arbitrary but simple grammar. In addition, the challenge organizers have conducted human-listening
experiments to provide an interesting baseline for comparison of computational techniques.
The overall system we developed is composed of the three components: a speaker identification
and gain estimation component, a signal separation component, and a speech recognition system.
In this paper we focus on the signal separation component, which is composed of the acoustic and
grammatical models. The details of the other components are discussed in [2].
Single-channel speech separation has previously been attempted using Gaussian mixture models
(GMMs) on individual frames of acoustic features. However such models tend to perform well only
when speakers are of different gender or have rather different voices [4]. When speakers have similar
voices, speaker-dependent mixture models cannot unambiguously identify the component speakers.
In such cases it is helpful to model the temporal dynamics of the speech. Several models in the
literature have attempted to do so either for recognition [5, 6] or enhancement [7, 8] of speech. Such
models have typically been based on a discrete-state hidden Markov model (HMM) operating on a
frame-based acoustic feature vector.
Modeling the dynamics of the log spectrum of speech is challenging in that different speech components evolve at different time-scales. For example the excitation, which carries mainly pitch, versus
the filter, which consists of the formant structure, are somewhat independent of each other. The formant structure closely follows the sequences of phonemes in each word, which are pronounced at a
rate of several per second. In non-tonal languages such as English, the pitch fluctuates with prosody
over the course of a sentence, and is not directly coupled with the words being spoken. Nevertheless, it seems to be important in separating speech, because the pitch harmonics carry predictable
structure that stands out against the background.
We address the various dynamic components of speech by testing different levels of dynamic constraints in our models. We explore four different levels of dynamics: no dynamics, low-level acoustic
dynamics, high-level grammar dynamics, and a layered combination, dual dynamics, of the acoustic
and grammar dynamics. The grammar dynamics and dual dynamics models perform the best in our
experiments.
The acoustic models are combined to model mixtures of speech using two methods: a nonlinear
model known as Algonquin, which models the combination of log-spectrum models as a sum in the
power spectrum, and a simpler max model that combines two log spectra using the max function. It
turns out that whereas Algonquin works well, our formulation of the max model does better overall.
With the combination of the max model and grammar-level dynamics, the model produces remarkable results: it is often able to extract two utterances from a mixture even when they are from the
same speaker.1 Overall results are given in Table 1, which shows that our closest competitors are
human listeners.
Table 1: Overall word error rates across all conditions on the challenge task. Human:
average human error rate, IBM: our best result, Next Best: the best of the eight other
published results on this task, and Chance: the theoretical error rate for random guessing.
System:           Human    IBM      Next Best   Chance
Word Error Rate:  22.3%    22.6%    34.2%       93.0%

1 Speech Models
The model consists of an acoustic model and temporal dynamics model for each source, and a mixing
model, which models how the source models are combined to describe the mixture. The acoustic
features were short-time log spectrum frames computed every 15 ms. Each frame was of length 40
ms and a 640-point mixed-radix FFT was used. The DC component was discarded, producing a
319-dimensional log-power-spectrum feature vector y_t.
The acoustic model consists of a set of diagonal-covariance Gaussians in the features. For a given speaker, a, we model the conditional probability of the log-power spectrum of each source signal x^a given a discrete acoustic state s^a as Gaussian, p(x^a | s^a) = N(x^a; μ_{s^a}, Σ_{s^a}), with mean μ_{s^a} and covariance matrix Σ_{s^a}. We used 256 Gaussians, one per acoustic state, to model the acoustic space of each speaker. For efficiency and tractability we restrict the covariance to be diagonal. A model with no dynamics can be formulated by producing state probabilities p(s^a), and is depicted in Figure 1(a).
Acoustic Dynamics: To capture the low-level dynamics of the acoustic signal, we modeled the acoustic dynamics of a given speaker, a, via state transitions p(s^a_t | s^a_{t-1}), as shown in Figure 1(b). There are 256 acoustic states, hence for each speaker a we estimated a 256 × 256 element transition matrix A^a.
Grammar Dynamics: The grammar dynamics are modeled by grammar state transitions, p(v^a_t | v^a_{t-1}), which consist of left-to-right phone models. The legal word sequences are given by the Speech Separation Challenge grammar [3] and are modeled using a set of pronunciations that

1 Demos and information can be found at: http://www.research.ibm.com/speechseparation
[Figure 1: Graph of models for a given source, with panels (a) No Dynamics, (b) Acoustic Dynamics, (c) Grammar Dynamics, and (d) Dual Dynamics; the nodes are the grammar states v^a_t, acoustic states s^a_t, and features x_t. In (a) there are no dynamics, so the model is a simple mixture model. In (b) only acoustic dynamics are modeled. In (c) grammar dynamics are modeled with a shared set of acoustic Gaussians; in (d) dual dynamics, grammar and acoustic, have been combined. Note that (a), (b), and (c) are special cases of (d), where different nodes are assumed independent.]
map from words to three-state context-dependent phone models. The state transition probabilities derived from these phone models are sparse in the sense that most transition probabilities are zero. We model speaker-dependent distributions p(s^a | v^a) that associate the grammar states v^a with the speaker-dependent acoustic states. These are learned from training data where the grammar state sequences and acoustic state sequences are known for each utterance. The grammar of our system has 506 states, so we estimate a 506 × 256 element conditional probability matrix B^a for each speaker.
Dual Dynamics: The dual-dynamics model combines the acoustic dynamics with the grammar dynamics. It is useful in this case to avoid modeling the full combination of s and v states in the joint transitions p(s^a_t | s^a_{t-1}, v_t). Instead we make a naive-Bayes assumption to approximate this as (1/z) p(s^a_t | s^a_{t-1})^α p(s^a_t | v^a_t)^β, where α and β adjust the relative influence of the two probabilities, and z is the normalizing constant. Here we simply use the probability matrices A^a and B^a, defined above.
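As a concrete illustration of this factorization, the sketch below (our own, in Python/NumPy; the array names, the log-domain arithmetic, and the default exponents are assumptions, not taken from the paper) scores candidate acoustic states by combining the two transition matrices:

```python
import numpy as np

def dual_dynamics_log_scores(log_A, log_B, s_prev, v_t, alpha=1.0, beta=1.0):
    """Unnormalized log scores over acoustic states s_t for one speaker.

    log_A[i, j] ~ log p(s_t = j | s_{t-1} = i)  (256 x 256 acoustic matrix)
    log_B[v, j] ~ log p(s_t = j | v_t = v)      (506 x 256 grammar-to-acoustic)
    Returns alpha*log p(s|s_prev) + beta*log p(s|v); the constant log z
    cancels in Viterbi-style maximizations, so it is omitted here.
    """
    return alpha * log_A[s_prev, :] + beta * log_B[v_t, :]
```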
2 Mixed Speech Models
The speech separation challenge involves recognizing speech in mixtures of signals from two speakers, a and b. We consider only mixing models that operate independently on each frequency, for analytical and computational tractability. The short-time log spectrum of the mixture y_t, in a given frequency band, is related to that of the two sources x^a_t and x^b_t via the mixing model given by the conditional probability distribution p(y | x^a, x^b). The joint distribution of the observation and sources in one feature dimension, given the source states, is thus:

    p(y_t, x^a_t, x^b_t | s^a_t, s^b_t) = p(y_t | x^a_t, x^b_t) p(x^a_t | s^a_t) p(x^b_t | s^b_t).    (1)

In general, to infer and reconstruct speech we need to compute the likelihood of the observed mixture,

    p(y_t | s^a_t, s^b_t) = ∫ p(y_t, x^a_t, x^b_t | s^a_t, s^b_t) dx^a_t dx^b_t,    (2)

and the posterior expected values of the sources given the states,

    E(x^a_t | y_t, s^a_t, s^b_t) = ∫ x^a_t p(x^a_t, x^b_t | y_t, s^a_t, s^b_t) dx^a_t dx^b_t,    (3)

and similarly for x^b_t. These quantities, combined with a prior model for the joint state sequences {s^a_{1..T}, s^b_{1..T}}, allow us to compute the minimum mean squared error (MMSE) estimators E(x^a_{1..T} | y_{1..T}) or the maximum a posteriori (MAP) estimate E(x^a_{1..T} | y_{1..T}, ŝ^a_{1..T}, ŝ^b_{1..T}), where (ŝ^a_{1..T}, ŝ^b_{1..T}) = argmax_{s^a_{1..T}, s^b_{1..T}} p(s^a_{1..T}, s^b_{1..T} | y_{1..T}), and where the subscript 1..T refers to all frames in the signal.
The mixing model can be defined in a number of ways. We explore two popular candidates, for
which the above integrals can be readily computed: Algonquin, and the max model.
[Figure 2: Model combination for two talkers, with panels (a) Mixing Model, showing states s^a, s^b, sources x^a, x^b, and observation y, and (b) Dual Dynamics Factorial Model. In (a) all dependencies are shown. In (b) the full dual-dynamics model is graphed with x^a and x^b integrated out, and corresponding states from each speaker combined into product states (v^a v^b)_t and (s^a s^b)_t. The other models are special cases of this graph with different edges removed, as in Figure 1.]
Algonquin: The relationship between the sources and the mixture in the log power spectral domain is approximated as

    p(y_t | x^a_t, x^b_t) = N(y_t; log(exp(x^a_t) + exp(x^b_t)), Ψ),    (4)

where Ψ is introduced to model the error due to the omission of phase [4]. An iterative Newton-Laplace method accurately approximates the conditional posterior p(x^a_t, x^b_t | y_t, s^a_t, s^b_t) from (1) as Gaussian. This Gaussian allows us to analytically compute the observation likelihood p(y_t | s^a_t, s^b_t) and expected value E(x^a_t | y_t, s^a_t, s^b_t), as in [4].
Max model: The mixing model is simplified using the fact that the log of a sum is approximately the log of the maximum:

    p(y | x^a, x^b) = δ(y − max(x^a, x^b)).    (5)

In this model the likelihood is

    p(y_t | s^a_t, s^b_t) = p_{x^a_t}(y_t | s^a_t) Φ_{x^b_t}(y_t | s^b_t) + p_{x^b_t}(y_t | s^b_t) Φ_{x^a_t}(y_t | s^a_t),    (6)

where Φ_{x^a_t}(y_t | s^a_t) = ∫_{−∞}^{y_t} N(x^a_t; μ_{s^a_t}, Σ_{s^a_t}) dx^a_t is a Gaussian cumulative distribution function [5]. In [5], such a model was used to compute state likelihoods and find the optimal state sequence. In [8], a simplified model was used to infer binary masking values for refiltering.
We take the max model a step further and derive source posteriors, so that we can compute the MMSE estimators for the log power spectrum. Note that the source posteriors in x^a_t and x^b_t are each a mixture of a delta function and a truncated Gaussian. Thus we analytically derive the necessary expected value:

    E(x^a_t | y_t, s^a_t, s^b_t) = p(x^a_t = y_t | y_t, s^a_t, s^b_t) y_t + p(x^a_t < y_t | y_t, s^a_t, s^b_t) E(x^a_t | x^a_t < y_t, s^a_t)    (7)
                                 = π^a_t y_t + π^b_t ( μ_{s^a_t} − Σ_{s^a_t} p_{x^a_t}(y_t | s^a_t) / Φ_{x^a_t}(y_t | s^a_t) ),    (8)

with weights π^a_t = p(x^a_t = y_t | y_t, s^a_t, s^b_t) = p_{x^a_t}(y_t | s^a_t) Φ_{x^b_t}(y_t | s^b_t) / p(y_t | s^a_t, s^b_t), and π^b_t = 1 − π^a_t. For many pairs of states one model is significantly louder than the other, μ_{s^a} ≫ μ_{s^b}, in a given frequency band, relative to their variances. In such cases it is reasonable to approximate the likelihood as p(y_t | s^a_t, s^b_t) ≈ p_{x^a_t}(y_t | s^a_t), and the posterior expected values according to E(x^a_t | y_t, s^a_t, s^b_t) ≈ y_t and E(x^b_t | y_t, s^a_t, s^b_t) ≈ min(y_t, μ_{s^b_t}), and similarly for μ_{s^a} ≪ μ_{s^b}.
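To make Equations (6)-(8) concrete, here is a minimal per-band sketch (our own, in Python/SciPy; the variable names are assumptions, and the guard against a vanishing CDF is a simplification):

```python
import numpy as np
from scipy.stats import norm

def max_model_posterior(y, mu_a, var_a, mu_b, var_b):
    """Max-model likelihood p(y|s_a,s_b) (Eq. 6) and E[x_a|y,s_a,s_b]
    (Eqs. 7-8) for a single frequency band with scalar Gaussians."""
    sd_a, sd_b = np.sqrt(var_a), np.sqrt(var_b)
    pdf_a, cdf_a = norm.pdf(y, mu_a, sd_a), norm.cdf(y, mu_a, sd_a)
    pdf_b, cdf_b = norm.pdf(y, mu_b, sd_b), norm.cdf(y, mu_b, sd_b)
    lik = pdf_a * cdf_b + pdf_b * cdf_a                 # Eq. (6)
    pi_a = pdf_a * cdf_b / lik                          # P(source a hits the max)
    # Mean of the Gaussian truncated above at y: mu - var * pdf / cdf
    e_below = mu_a - var_a * pdf_a / np.maximum(cdf_a, 1e-12)
    e_xa = pi_a * y + (1.0 - pi_a) * e_below            # Eqs. (7)-(8)
    return lik, e_xa
```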
3 Likelihood Estimation
Because of the large number of state combinations, the model would not be practical without techniques to reduce computation time. To speed up the evaluation of the joint state likelihood, we
employed both band quantization of the acoustic Gaussians and joint-state pruning.
Band Quantization: One source of computational savings stems from the fact that some of the Gaussians in our model may differ only in a few features. Band quantization addresses this by approximating each of the D Gaussians of each model with a shared set of d Gaussians, where d ≪ D, in each of the F frequency bands of the feature vector. A similar idea is described in [9]. It relies on the use of a diagonal covariance matrix, so that p(x^a | s^a) = ∏_f N(x^a_f; μ_{f,s^a}, σ_{f,s^a}), where the σ_{f,s^a} are the diagonal elements of the covariance matrix Σ_{s^a}. The mapping M_f(s_i) associates each of the D Gaussians with one of the d Gaussians in band f. Now p̂(x^a | s^a) = ∏_f N(x^a_f; μ_{f,M_f(s^a)}, σ_{f,M_f(s^a)}) is used as a surrogate for p(x^a | s^a). Figure 3 illustrates the idea.
[Figure 3: In band quantization, many multi-dimensional Gaussians are mapped to a few unidimensional Gaussians.]
Under this model the d Gaussians are optimized by minimizing the KL-divergence D( Σ_{s^a} p(s^a) p(x^a | s^a) || Σ_{s^a} p(s^a) p̂(x^a | s^a) ), and likewise for s^b. Then in each frequency band, only d × d, instead of D × D, combinations of Gaussians have to be evaluated to compute p(y | s^a, s^b). Despite the relatively small number of components d in each band, taken across bands, band quantization is capable of expressing d^F distinct patterns in an F-dimensional feature space, although in practice only a subset of these will be used to approximate the Gaussians in a given model. We used d = 8 and D = 256, which reduced the likelihood computation time by three orders of magnitude.
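The following sketch shows how the quantized codebooks might be assembled (our own illustrative code; it builds the per-band mapping with a crude k-means in (mean, variance) space rather than the KL-divergence optimization described above, and all names are assumptions):

```python
import numpy as np

def band_quantize(mu, var, d=8, iters=20, seed=0):
    """mu, var: (D, F) per-state means and variances of diagonal Gaussians.
    Returns per-band codebooks (F, d, 2) and a mapping M of shape (F, D)
    with M[f, s] = index of the surrogate 1-D Gaussian for state s in band f."""
    D, F = mu.shape
    rng = np.random.default_rng(seed)
    codebooks = np.zeros((F, d, 2))
    M = np.zeros((F, D), dtype=int)
    for f in range(F):
        pts = np.stack([mu[:, f], var[:, f]], axis=1)
        centers = pts[rng.choice(D, size=d, replace=False)]
        for _ in range(iters):
            dist = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            assign = dist.argmin(axis=1)
            for k in range(d):
                if np.any(assign == k):
                    centers[k] = pts[assign == k].mean(axis=0)
        codebooks[f], M[f] = centers, assign
    return codebooks, M
```

At likelihood time only d × d surrogate-Gaussian evaluations are needed per band; each joint state (s^a, s^b) then just sums precomputed per-band log-likelihood entries through the mappings.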
Joint State Pruning: Another source of computational savings comes from the sparseness of the model. Only a handful of (s^a, s^b) combinations have likelihoods that are significantly larger than the rest for a given observation. Only these states are required to adequately explain the observation. By pruning the total number of combinations down to a smaller number we can speed up the likelihood calculation, the estimation of the component signals, and the temporal inference.

However, we must estimate the likelihoods in order to determine which states to retain. We therefore used band quantization to estimate likelihoods for all states, performed state pruning, and then applied the full model on the pruned states using the exact parameters. In the experiments reported here, we pruned down to 256 state combinations. The effect of these speedup methods on accuracy will be reported in a future publication.
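A sketch of this prune-then-rescore step (our own code; the figure of 256 retained combinations follows the text, everything else is an assumption):

```python
import numpy as np

def prune_joint_states(approx_loglik, keep=256):
    """approx_loglik: (D, D) band-quantized log p(y_t | s_a, s_b).
    Returns the (keep, 2) index pairs of the most likely joint states;
    only these are re-scored with the exact, unquantized Gaussians."""
    flat = approx_loglik.ravel()
    top = np.argpartition(flat, flat.size - keep)[-keep:]
    return np.stack(np.unravel_index(top, approx_loglik.shape), axis=1)
```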
4 Inference
In our experiments we performed inference in four different conditions: no dynamics, with acoustic dynamics only, with grammar dynamics only, and with dual dynamics (acoustic and grammar). With no dynamics the source models reduce to GMMs and we infer MMSE estimates of the sources based on p(x^a, x^b | y) as computed from (1), using Algonquin or the max model. Once the log spectrum of each source is estimated, we estimate the corresponding time-domain signal as shown in [4].

In the acoustic dynamics condition the exact inference algorithm uses a 2-dimensional Viterbi search, described below, with acoustic temporal constraints p(s_t | s_{t-1}) and likelihoods from Eqn. (1), to find the most likely joint state sequence s_{1..T}. Similarly, in the grammar dynamics condition, 2-D Viterbi search is used to infer the grammar state sequences v_{1..T}. Instead of single Gaussians as the likelihood models, however, we have mixture models in this case. So we can perform an MMSE estimate of the sources by averaging over the posterior probability of the mixture components given the grammar Viterbi sequence and the observations.
It is critical to use the 2-D Viterbi algorithm in both cases, rather than the forward-backward algorithm, because in the same-speaker condition at 0dB, the acoustic models and dynamics are symmetric. This symmetry means that the posterior is essentially bimodal and averaging over these
modes would yield identical estimates for both speakers. By finding the best path through the joint
state space, the 2-D Viterbi algorithm breaks this symmetry and allows the model to make different
estimates for each speaker.
In the dual-dynamics condition we use the model of Figure 2(b). With two speakers, exact inference is computationally complex because the full joint distribution of the grammar and acoustic states, (v^a × s^a) × (v^b × s^b), is required, and it is very large. Instead we perform approximate inference by alternating the 2-D Viterbi search between two factors: the Cartesian product s^a × s^b of the acoustic state sequences and the Cartesian product v^a × v^b of the grammar state sequences. When evaluating each state sequence we hold the other chain constant, which decouples its dynamics and allows for efficient inference. This is a useful factorization because the states s^a and s^b interact strongly with each other, and similarly for v^a and v^b. Again, in the same-talker condition, the 2-D Viterbi search breaks the symmetry in each factor.
2-D Viterbi search: The Viterbi algorithm estimates the maximum-likelihood state sequence s_{1..T} given the observations x_{1..T}. The complexity of the Viterbi search is O(T D²), where D is the number of states and T is the number of frames. For producing MAP estimates of the two sources, we require a 2-dimensional Viterbi search which finds the most likely joint state sequences s^a_{1..T} and s^b_{1..T} given the mixed signal y_{1..T}, as was proposed in [5].

On the surface, the 2-D Viterbi search appears to be of complexity O(T D⁴). Surprisingly, it can be computed in O(T D³) operations. This stems from the fact that the dynamics for each chain are independent. The forward-backward algorithm for a factorial HMM with N state variables requires only O(T N D^{N+1}) rather than the O(T D^{2N}) required for a naive implementation [10]. The same is true for the Viterbi algorithm. In the Viterbi algorithm, we wish to find the most probable paths leading to each state by finding the two arguments s^a_{t-1} and s^b_{t-1} of the following maximization:

    {ŝ^a_{t-1}, ŝ^b_{t-1}} = argmax_{s^a_{t-1}, s^b_{t-1}} p(s^a_t | s^a_{t-1}) p(s^b_t | s^b_{t-1}) p(s^a_{t-1}, s^b_{t-1} | y_{1..t-1})
                           = argmax_{s^a_{t-1}} p(s^a_t | s^a_{t-1}) max_{s^b_{t-1}} p(s^b_t | s^b_{t-1}) p(s^a_{t-1}, s^b_{t-1} | y_{1..t-1}).    (9)

The two maximizations can be done in sequence, requiring O(D³) operations with O(D²) storage for each step. In general, as with the forward-backward algorithm, the N-dimensional Viterbi search requires O(T N D^{N+1}) operations.
We can also exploit the sparsity of the transition matrices and observation likelihoods, by pruning
unlikely values. Using both of these methods our implementation of 2-D Viterbi search is faster
than the acoustic likelihood computation that serves as its input, for the model sizes and grammars
chosen in the speech separation task.
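The factored maximization in Equation (9) can be sketched as follows (our own code, in the log domain with NumPy; the intermediate (D, D, D) tensors make the O(D³) cost explicit but could be loop-blocked to save memory):

```python
import numpy as np

def viterbi_2d_step(log_delta, log_Ta, log_Tb):
    """One frame of 2-D Viterbi in O(D^3).

    log_delta: (D, D) best joint log-prob at t-1, indexed [a_prev, b_prev].
    log_Ta[i, j] = log p(s_t^a = j | s_{t-1}^a = i); likewise log_Tb.
    Returns (D, D) scores for frame t plus backpointers."""
    # Max over b_prev for every (a_prev, b): inner maximization of Eq. (9)
    tmp = log_delta[:, :, None] + log_Tb[None, :, :]   # (a_prev, b_prev, b)
    best_b = tmp.max(axis=1)                           # (a_prev, b)
    arg_b = tmp.argmax(axis=1)
    # Max over a_prev for every (a, b): outer maximization of Eq. (9)
    tmp2 = best_b[:, None, :] + log_Ta[:, :, None]     # (a_prev, a, b)
    scores = tmp2.max(axis=0)                          # (a, b)
    arg_a = tmp2.argmax(axis=0)                        # best a_prev per (a, b)
    return scores, arg_a, arg_b  # b_prev is recovered as arg_b[arg_a[a, b], b]
```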
Speaker and Gain Estimation: In the challenge task, the gains and identities of the two speakers
were unknown at test time and were selected from a set of 34 speakers which were mixed at SNRs
ranging from 6dB to -9dB. We used speaker-dependent acoustic models because of their advantages
when separating different speakers. These models were trained on gain-normalized data, so the
models are not well matched to the different gains of the signals at test time. This means that we
have to estimate both the speaker identities and the gain in order to adapt our models to the source
signals for each test utterance.
The number of speakers and range of SNRs in the test set makes it too expensive to consider every
possible combination of models and gains. Instead, we developed an efficient model-based method
for identifying the speakers and gains, described in [2]. The algorithm is based upon a very simple
idea: identify and utilize frames that are dominated by a single source, based on their likelihoods under each speaker-dependent acoustic model, to determine what sources are present in the mixture. Using this criterion we can eliminate most of the unlikely speakers, and explore all combinations
of the remaining speakers. An approximate EM procedure is then used to select a single pair of
speakers and estimate their gains.
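A minimal sketch of the dominant-frame shortlisting idea (our own rendering with assumed names and thresholds; the paper's actual procedure, including the approximate EM over gains, is detailed in [2]):

```python
import numpy as np

def shortlist_speakers(frame_loglik, n_keep=4, top_frames=50):
    """frame_loglik: (S, T) best log-likelihood of each of T frames under
    each of S speaker-dependent models. Score each speaker by the frames
    it explains best (frames likely dominated by that speaker) and return
    a shortlist; all speaker pairs from the shortlist would then be
    evaluated jointly, with gains estimated by an approximate EM."""
    sorted_ll = np.sort(frame_loglik, axis=1)[:, ::-1]   # best frames first
    scores = sorted_ll[:, :top_frames].sum(axis=1)
    return np.argsort(scores)[::-1][:n_keep]
```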
Recognition: Although inference in the system may involve recognition of the words (for models that contain a grammar), we still found that a separately trained recognizer performed better. After reconstruction, each of the two signals is therefore decoded with a speech recognition system that
incorporates Speaker Dependent Labeling (SDL) [2].
This method uses speaker dependent models for each of the 34 speakers. Instead of using the
speaker identities provided by the speaker ID and gain module, we followed the approach for gender
dependent labeling (GDL) described in [11]. This technique provides better results than if the true
speaker ID is specified.
5 Results
The Speech Separation Challenge [3] involves separating the mixed speech of two speakers drawn
from of a set of 34 speakers. An example utterance is place white by R 4 now. In each recording,
one of the speakers says white while the other says blue, red or green. The task is to recognize the
letter and the digit of the speaker that said white. Using the SDL recognizer, we decoded the two
estimated signals under the assumption that one signal contains white and the other does not, and
vice versa. We then used the association that yielded the highest combined likelihood.
[Figure 4: Average word error rate (WER, %) as a function of model dynamics (No Separation, No Dynamics, Acoustic Dynamics, Grammar Dynamics, Dual Dynamics), in the Same Talker, Same Gender, Different Gender, and All conditions, compared to human error rates, using Algonquin.]
Human listener performance [3] is compared in Figure 4 to results using the SDL recognizer without
speech separation, and for each the proposed models. Performance is poor without separation in all
conditions. With no dynamics the models do surprisingly well in the different talker conditions, but
poorly when the signals come from the same talker. Acoustic dynamics gives some improvement,
mainly in the same-talker condition. The grammar dynamics seems to give the most benefit, bringing the error rate in the same-gender condition below that of humans. The dual-dynamics model
performed about the same as the grammar dynamics model, despite our intuitions. Replacing Algonquin with the max model reduced the error rate in the dual dynamics model (from 24.3% to 23.5%)
and grammar dynamics model (from 24.6% to 22.6%), which brings the latter closer than any other
model to the human recognition rate of 22.3%.
Figure 5 shows the relative word error rate of the best system compared to human subjects. When
both speakers are around the same loudness, the system exceeds human performance, and in the
same-gender condition makes less than half the errors of the humans. Human listeners do better
when the two signals are at different levels, even if the target is below the masker (i.e., in -9dB),
suggesting that they are better able to make use of differences in amplitude as a cue for separation.
[Figure 5: Word error rate of the best system relative to human performance, plotted against signal-to-noise ratio (6 dB down to -9 dB) for the Same Talker, Same Gender, and Different Gender conditions. The shaded area is where the system outperforms human listeners.]
An interesting question is to what extent different grammar constraints affect the results. To test this,
we limited the grammar to just the two test utterances, and the error rate on the estimated sources
dropped to around 10%. This may be a useful paradigm for separating speech from background noise
when the text is known, such as in closed-captioned recordings. At the other extreme, in realistic
speech recognition scenarios, there is little knowledge of the background speaker's grammar. In such
cases the benefits of models of low-level acoustic continuity over purely grammar-based systems
may be more apparent.
It is our hope that further experiments with both human and machine listeners will provide us with a
better understanding of the differences in their performance characteristics, and provide insights into
how the human auditory system functions, as well as how automatic speech perception in general
can be brought to human levels of performance.
References

[1] T. Kristjansson, J. R. Hershey, P. A. Olsen, S. Rennie, and R. Gopinath, "Super-human multi-talker speech recognition: The IBM 2006 speech separation challenge system," in ICSLP, 2006.

[2] Steven Rennie, Peder A. Olsen, John R. Hershey, and Trausti Kristjansson, "Separating multiple speakers using temporal constraints," in ISCA Workshop on Statistical and Perceptual Audition, 2006.

[3] Martin Cooke and Tee-Won Lee, "Interspeech speech separation challenge," http://www.dcs.shef.ac.uk/~martin/SpeechSeparationChallenge.htm, 2006.

[4] T. Kristjansson, J. Hershey, and H. Attias, "Single microphone source separation using high resolution signal reconstruction," ICASSP, 2004.

[5] P. Varga and R. K. Moore, "Hidden Markov model decomposition of speech and noise," ICASSP, pp. 845-848, 1990.

[6] M. Gales and S. Young, "Robust continuous speech recognition using parallel model combination," IEEE Transactions on Speech and Audio Processing, vol. 4, no. 5, pp. 352-359, September 1996.

[7] Y. Ephraim, "A Bayesian estimation approach for speech enhancement using hidden Markov models," vol. 40, no. 4, pp. 725-735, 1992.

[8] S. Roweis, "Factorial models and refiltering for speech separation and denoising," Eurospeech, pp. 1009-1012, 2003.

[9] E. Bocchieri, "Vector quantization for the efficient computation of continuous density likelihoods," in ICASSP, 1993, vol. II, pp. 692-695.

[10] Zoubin Ghahramani and Michael I. Jordan, "Factorial hidden Markov models," in Advances in Neural Information Processing Systems, vol. 8.

[11] Peder Olsen and Satya Dharanipragada, "An efficient integrated gender detection scheme and time mediated averaging of gender dependent acoustic models," in Eurospeech 2003, 2003, vol. 4, pp. 2509-2512.
Hierarchical Dirichlet Processes with Random Effects
Seyoung Kim
Department of Computer Science
University of California, Irvine
Irvine, CA 92697-3435
[email protected]
Padhraic Smyth
Department of Computer Science
University of California, Irvine
Irvine, CA 92697-3435
[email protected]
Abstract
Data sets involving multiple groups with shared characteristics frequently arise
in practice. In this paper we extend hierarchical Dirichlet processes to model
such data. Each group is assumed to be generated from a template mixture model
with group level variability in both the mixing proportions and the component
parameters. Variabilities in mixing proportions across groups are handled using
hierarchical Dirichlet processes, also allowing for automatic determination of the
number of components. In addition, each group is allowed to have its own component parameters coming from a prior described by a template mixture model. This
group-level variability in the component parameters is handled using a random
effects model. We present a Markov Chain Monte Carlo (MCMC) sampling algorithm to estimate model parameters and demonstrate the method by applying it to
the problem of modeling spatial brain activation patterns across multiple images
collected via functional magnetic resonance imaging (fMRI).
1
Introduction
Hierarchical Dirichlet processes (DPs) (Teh et al., 2006) provide a flexible framework for probabilistic modeling when data are observed in a grouped fashion and each group can be thought of as
being generated from a mixture model. In the hierarchical DPs all of, or a subset of, the mixture
components are shared by different groups, and the number of such components is inferred from the
data using a DP prior. Variability across groups is modeled by allowing different mixing proportions
for different groups.
In this paper we focus on the problem of modeling systematic variation in the shared mixture component parameters and not just in the mixing proportions. We will use the problem of modeling
spatial fMRI activation across multiple brain images as a motivating application, where the images
are obtained from one or more subjects performing the same cognitive tasks. Figure 1 illustrates the
basic idea of our proposed model. We assume that there is an unknown true template for mixture
component parameters, and that the mixture components for each group are noisy realizations of the
template components. For our application, groups and data points correspond to images and pixels.
Given grouped data (e.g., a set of images) we are interested in learning both the overall template
model and the random variation relative to the template for each group. For the fMRI application,
we model the images as mixtures of activation patterns, assigning a mixture component to each spatial activation cluster in an image. As shown in Figure 1 our goal is to extract activation patterns
that are common across multiple images, while allowing for variation in fMRI signal intensity and
activation location in individual images. In our proposed approach, the amount of variation (called
random effects) from the overall true component parameters is modeled as coming from a prior distribution on group-level component parameters (Gelman et al. 2004). By combining hierarchical
DPs with a random effects model we let both mixing proportions and mixture component parameters adapt to the data in each group. Although we focus on image data in this paper, the proposed
[Figure 1: Illustration of group-level variations from the template model: a template mixture model generates group-level mixture models, which in turn generate the observed fMRI brain activation.]
Model                                  Group-level mixture components
Hierarchical DPs                       θ_a × m_a, θ_b × m_b
Transformed DPs                        θ_a + δ_{a1}, ..., θ_a + δ_{a m_a}, θ_b + δ_{b1}, ..., θ_b + δ_{b m_b}
Hierarchical DPs with random effects   (θ_a + δ_a) × m_a, (θ_b + δ_b) × m_b

Table 1: Group-level mixture component parameters for hierarchical DPs, transformed DPs, and hierarchical DPs with random effects as proposed in this paper.
approach is applicable to more general problems of modeling group-level random variation with
mixture models.
Hierarchical DPs and transformed DPs (Sudderth et al., 2005) both address a similar problem of modeling groups of data using mixture models with mixture components shared across groups. Table 1 compares the basic ideas underlying these two models with the model we propose in this paper. Given a template mixture of two components with parameters θ_a and θ_b, in hierarchical DPs a mixture model for each group can have m_a and m_b exact copies (commonly known as tables in the Chinese restaurant process representation) of each of the two components in the template; thus, there is no notion of random variation in component parameters across groups. In transformed DPs, each of the copies of θ_a and θ_b receives a transformation parameter δ_{a1}, ..., δ_{a m_a} and δ_{b1}, ..., δ_{b m_b}. This is not suitable for modeling the type of group variation illustrated in Figure 1 because there is no direct way to enforce δ_{a1} = ... = δ_{a m_a} and δ_{b1} = ... = δ_{b m_b} to obtain δ_a and δ_b as used in our proposed model.
In this general context the model we propose here can be viewed as being closely related to both
hierarchical DPs and transformed DPs, but having application to quite different types of problems in
practice, e.g., as an intermediate between the highly constrained variation allowed by the hierarchical
DP and the relatively unconstrained variation present in the computer vision scenes to which the
transformed DP has been applied (Sudderth et al, 2005).
From an applications viewpoint the use of DPs for modeling multiple fMRI brain images is novel
and shows considerable promise as a new tool for analyzing such data. The majority of existing
statistical work on fMRI analysis is based on voxel-by-voxel hypothesis testing, with relatively little
work on modeling of the spatial aspect of the problem. One exception is the approach of Penny
and Friston (2003) who proposed a probabilistic mixture model for spatial activation modeling and
demonstrated its advantages over voxel-wise analysis. The application of our proposed model to
fMRI data can be viewed as a generalization of Penny and Friston?s work in three different aspects
by (a) allowing for analysis of multiple images rather than a single image (b) learning common
activation clusters and systematic variation in activation across these images, and (c) automatically
learning the number of components in the model in a data-driven fashion.
2 Models

2.1 Dirichlet process mixture models
A Dirichlet process DP(α_0, G) with a concentration parameter α_0 > 0 and a base measure G can be used as a nonparametric prior distribution on mixing proportion parameters in a mixture model when the number of components is unknown a priori (Rasmussen, 2000). The generative process for a mixture of Gaussian distributions with component mean θ_k and DP prior DP(α_0, G) can be
[Figure 2: Plate diagrams for (a) DP mixtures, (b) hierarchical DPs, and (c) hierarchical DPs with random effects, showing the hyperparameters (γ, α_0, H, R), the mixing proportions β and π_j, the labels z_ji, the component parameters θ_k and τ_k², the group-level random effects u_jk, and the observations y_ji (i = 1..N, j = 1..J, k = 1..∞).]
written, using a stick-breaking construction (Sethuraman, 1994), as:

    β | α_0 ~ Stick(α_0),        θ_k | G ~ N_G(μ_0, σ_0²),
    z_i | β ~ β,                 y_i | z_i, (θ_k)_{k=1}^{∞}, σ² ~ N(θ_{z_i}, σ²),

where y_i, i = 1, ..., N, are observed data and z_i is a component label for y_i. It can be shown that the labels z_i have the following clustering property:

    z_i | z_1, ..., z_{i-1}, α_0 ~ Σ_{k=1}^{K} ( n_k^{-i} / (i − 1 + α_0) ) δ_k + ( α_0 / (i − 1 + α_0) ) δ_{k_new},

where n_k^{-i} represents the number of z_{i'}, i' ≠ i, assigned to component k. The probability that z_i is assigned to a new component is proportional to α_0. Note that a component with more observations already assigned to it has a higher probability of attracting the next observation.
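For illustration, sampling from this clustering prior can be written in a few lines (our own sketch; in a full Gibbs sweep each term would additionally be weighted by the component likelihood of y_i):

```python
import numpy as np

def crp_draw(counts, alpha0, rng):
    """Draw a label from the Chinese-restaurant clustering prior:
    existing component k with probability proportional to counts[k],
    a brand-new component with probability proportional to alpha0.
    Returning len(counts) signals 'new component'."""
    probs = np.append(counts, alpha0).astype(float)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
labels = [0]
for i in range(1, 200):
    counts = np.bincount(labels)
    labels.append(int(crp_draw(counts, alpha0=1.0, rng=rng)))
```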
2.2 Hierarchical Dirichlet processes
When multiple groups of data are present and each group can be modeled as a mixture it is often
useful to let different groups share mixture components. In hierarchical DPs (Teh et al., 2006)
components are shared by different groups with varying mixing proportions for each group, and the
number of components in the model can be inferred from data.
Let y_ji be the ith data point (i = 1, ..., N) in group j (j = 1, ..., J), β the global mixing proportions, π_j the mixing proportions for group j, and α_0, γ, H the hyperparameters for the DP. Then the hierarchical DP can be written as follows, using a stick-breaking construction:

    β | γ ~ Stick(γ),                π_j | α_0, β ~ DP(α_0, β),
    θ_k | H ~ N_H(μ_0, σ_0²),        z_ji | π_j ~ π_j,
    y_ji | z_ji, (θ_k)_{k=1}^{∞}, σ² ~ N(θ_{z_ji}, σ²).    (1)

The plate diagram in Figure 2(b) illustrates the generative process of this model. Mixture components described by the θ_k can be shared across the J groups.
The hierarchical DP has clustering properties similar to those of DP mixtures, i.e.,

    p(h_ji | h^{-ji}, α_0) ∝ Σ_{t=1}^{T_j} ( n_jt^{-i} / (n_j − 1 + α_0) ) δ_t + ( α_0 / (n_j − 1 + α_0) ) δ_{t_new}    (2)

    p(l_jt | l^{-jt}, γ) ∝ Σ_{k=1}^{K} ( m_k^{-t} / (Σ_u m_u − 1 + γ) ) δ_k + ( γ / (Σ_u m_u − 1 + γ) ) δ_{k_new},    (3)

where h_ji represents the mapping of each data item y_ji to one of T_j local clusters within group j, and l_jt maps the tth local cluster in group j to one of K global clusters shared by all of the J groups. The probability that a new local cluster is generated within group j is proportional to α_0. This new cluster is generated according to Equation (3). Notice that more than one local cluster in group j can be linked to the same global cluster. It is the assignment of data items to the K global clusters via local cluster labels that is typically of interest.
3 Hierarchical Dirichlet processes with random effects
We now propose an extension of the standard hierarchical DP to a version that includes random
effects. We first develop our model for the case of Gaussian density components, and later in the
paper apply this model to the specific problem of modeling activation patterns in fMRI brain images.
We take θ_k | H ~ N_H(μ_0, σ_0²) and y_ji | z_ji, (θ_k)_{k=1}^{∞}, σ² ~ N(θ_{z_ji}, σ²) in Equation (1) and add random effects as follows:

    θ_k | H ~ N_H(μ_0, σ_0²),         τ_k² | R ~ Inv-χ²_R(v_0, s_0²),
    u_jk | θ_k, τ_k² ~ N(θ_k, τ_k²),  y_ji | z_ji, (u_jk)_{k=1}^{∞} ~ N(u_{j z_ji}, σ²).    (4)

Each group j has its own component mean u_jk for the kth component, and these group-level parameters come from a common prior distribution N(θ_k, τ_k²). Thus, θ_k can be viewed as a template, and u_jk as a noisy observation of the template for group j with variance τ_k². The random effects parameters u_jk are generated once per group and shared by all local clusters in group j that are assigned to the same global cluster k.
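A generative sketch of Equation (4) for one group (our own code, with scalar Gaussian components; the names and the fixed observation noise are assumptions):

```python
import numpy as np

def sample_group(theta, tau2, weights_j, n, sigma2, rng):
    """theta[k], tau2[k]: template means and random-effects variances for
    the K global components; weights_j: group j's mixing proportions.
    Draws the group-level means u_jk ~ N(theta_k, tau2_k) once, then
    n observations y_ji ~ N(u_{j z_ji}, sigma2)."""
    u_j = rng.normal(theta, np.sqrt(tau2))            # random effects, Eq. (4)
    z = rng.choice(len(theta), size=n, p=weights_j)   # component labels
    y = rng.normal(u_j[z], np.sqrt(sigma2))
    return y, z, u_j
```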
For inference we use an MCMC sampling scheme that is based on the clustering properties given in Equations (2) and (3). In each iteration we alternately sample the labels h = {h_ji for all j, i} and l = {l_jt for all j, t} and the component parameters θ = {θ_k for all k}, τ² = {τ_k² for all k}, and u = {u_jk for all k, j}. We sample the h_ji using the following conditional distribution:

    p(h_ji = t | h^{-ji}, u, θ, τ², y) ∝
        n_jt^{-i} p(y_ji | u_jk, σ²)          if t was used,
        α_0 p(y_ji | h^{-ji}, u, θ, τ², β)    if t = t_new,

where

    p(y_ji | h^{-ji}, u, θ, τ², β) = Σ_{k∈A} ( m_k / (Σ_k m_k + γ) ) p(y_ji | u_jk)    (5a)
        + Σ_{k∈B} ( m_k / (Σ_k m_k + γ) ) ∫ p(y_ji | u_jk) p(u_jk | θ_k, τ_k²) du_jk    (5b)
        + ( γ / (Σ_k m_k + γ) ) ∫∫∫ p(y_ji | u_jk) p(u_jk | θ_k, τ_k²) N_H(μ_0, σ_0²) Inv-χ²_R(v_0, s_0²) du_jk dθ_k dτ_k².    (5c)
In Equation (5a) the summation is over the components in A = {k : some h_ji', i' ≠ i, is assigned to k}, representing global clusters that already have some local clusters in group j assigned to them. In this case, since u_jk is already known, we can simply compute the likelihood p(y_ji | u_jk). In Equation (5b) the summation is over B = {k : no h_ji', i' ≠ i, is assigned to k}, representing global clusters that have not yet been assigned in group j. For conjugate priors we can integrate over the unknown random effects parameter u_jk to compute the likelihood using N(y_ji | θ_k, τ_k² + σ²) and sample u_jk from the posterior distribution p(u_jk | θ_k, τ_k², y_ji). Equation (5c) models the case where a new global component gets generated. The integral cannot be evaluated analytically, so we approximate it by sampling new values for θ_k, τ_k², and u_jk from the prior distributions and evaluating p(y_ji | u_jk) given these new values for the parameters (Neal, 1998).
Samples for the l_jt can be obtained from the conditional distribution given as

    p(l_jt = k | l^{-jt}, u, θ, τ², y) ∝
        m_k^{-jt} ∏_{i: h_ji = t} p(y_ji | u_jk, σ²)                                     if k was used in group j,
        m_k^{-jt} ∫ ∏_{i: h_ji = t} p(y_ji | u_jk, σ²) p(u_jk | θ_k, τ_k²) du_jk          if k is new in group j,
        γ ∫∫∫ ∏_{i: h_ji = t} p(y_ji | u_jk) p(u_jk | θ_k, τ_k²) N_H(μ_0, σ_0²) Inv-χ²_R(v_0, s_0²) du_jk dθ_k dτ_k²    if k is a new component.    (6)
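A sketch of the Monte Carlo approximation used for the new-component case in (5c) and (6) (our own code; a handful of prior draws is shown, in the spirit of Neal's auxiliary-variable algorithms, and the scaled inverse chi-squared draw uses the identity v0*s0²/X with X ~ χ²_{v0}):

```python
import numpy as np

def new_component_loglik(y, mu0, sigma0, v0, s0_sq, sigma2, rng, m=5):
    """Approximate the log of the triple integral over (u, theta, tau2)
    by averaging the data likelihood over m draws from the prior."""
    liks = np.empty(m)
    for r in range(m):
        tau2 = v0 * s0_sq / rng.chisquare(v0)    # tau2 ~ Inv-chi^2(v0, s0^2)
        theta = rng.normal(mu0, sigma0)          # theta ~ N(mu0, sigma0^2)
        u = rng.normal(theta, np.sqrt(tau2))     # u ~ N(theta, tau2)
        liks[r] = (np.exp(-0.5 * (y - u) ** 2 / sigma2)
                   / np.sqrt(2 * np.pi * sigma2))
    return np.log(liks.mean())
```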
[Figure 3: Histograms for the simulated data with mixture density estimates overlaid.]
As in the sampling of the h_ji, if k is new in group j we can evaluate the integral analytically and sample u_jk from the posterior distribution. If k is a new component we approximate the integral by sampling new values for θ_k, τ_k², and u_jk from the prior and evaluating the likelihood.

Given h and l we can update the component parameters θ, τ², and u using standard Gibbs sampling for a normal hierarchical model (Gelman et al., 2004). In practice, this Markov chain can mix poorly and get stuck in local maxima where the labels for two group-level components are swapped relative to the same two components in the template. To address this problem and restore the correct correspondence between template components and group-level components, we propose a move that swaps the labels for two group-level components at the end of each sampling iteration and accepts the move based on a Metropolis-Hastings acceptance rule.
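A sketch of such a swap move (our own code; log_joint stands in for a hypothetical function returning the log of the joint posterior density of the current state):

```python
import numpy as np

def try_swap(u, j, k1, k2, log_joint, rng):
    """Propose swapping group j's random-effects parameters between
    global components k1 and k2; accept with the Metropolis-Hastings
    ratio (the proposal is symmetric, so only the density ratio enters)."""
    cur = log_joint(u)
    u[j, [k1, k2]] = u[j, [k2, k1]]       # propose the label swap
    if np.log(rng.uniform()) < log_joint(u) - cur:
        return True                        # accept
    u[j, [k1, k2]] = u[j, [k2, k1]]       # otherwise revert
    return False
```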
To illustrate the proposed model we simulated data from a mixture of one-dimensional Gaussian
densities with known parameters and tested if the sampling algorithm can recover the parameters
from the data. From a template mixture model with three mixture components we generated 10
group-level mixture models by adding random effects in the form of mean-shifts to the template
means, sampled from N (0, 1). Using varying mixing proportions for each group we generated 200
samples from each of the 10 mixture models. Histograms for the samples in eight groups are shown
in Figure 3(a). The estimated models after 1000 iterations of the MCMC algorithm are overlaid.
We can see that the sampling algorithm was able to learn the original model successfully despite the
variability in both component means and mixing proportions of the mixture model.
4 A model for fMRI activation surfaces
We now apply the general framework of the hierarchical DP with random effects to the problem
of detecting and characterizing spatial activation patterns in fMRI brain images. Underlying our
approach is an assumption that there is an unobserved true spatial activation pattern in a subject?s
brain given a particular stimulus and that multiple activation images for this individual collected over
different fMRI sessions are realizations of the true activation image, with variability in the activation
pattern due to various sources. Our goal is to infer the unknown true activation from multiple such
activation images.
We model each activation image using a mixture of experts model, with a component expert assigned
to each local activation cluster (Rasmussen and Ghahramani, 2002). By introducing a hierarchical
DP into this model we allow activation clusters to be shared across images, inferring the number of
such clusters from the data. In addition, the random effects component can be incorporated to allow
activation centers to be slightly shifted in terms of pixel locations or in terms of peak intensity. These
types of variation are common in multi-image fMRI experiments, due to a variety of factors such as
head motion, variation in the physiological and cognitive states of the subject. In what follows below
we will focus on 2-dimensional slices rather than 3-dimensional voxel images; in principle the same type of model could be developed for the 3-dimensional case.
We briefly discuss the mixture of experts model below (Kim et al., 2006). Assuming the β values y_i, i = 1, ..., N, are conditionally independent of each other given the voxel position x_i = (x_i1, x_i2) and the model parameters, we model the activation y_i at voxel x_i as a mixture of experts:

    p(y_i | x_i, θ) = Σ_{c∈C} p(y_i | c, x_i) P(c | x_i),    (7)

where C = {c_bg, c_m, m = 1, ..., M−1} is a set of M expert component labels for the background c_bg and the M−1 activation components c_m. The first term on the right-hand side of Equation (7) defines the expert for a given component. We model the expert for an activation component as a
[Figure 4: Results from eight runs for Subject 2 at Stanford. (a) Raw images for a cross-section of the right precentral gyrus and surrounding area. Activation components estimated from the images using (b) DP mixtures, (c) hierarchical DPs, and (d) the hierarchical DP with random effects.]
Gaussian-shaped surface centered at b_m with width Σ_m and height h_m as follows:

    y_i = h_m exp( −(x_i − b_m)ᵀ (Σ_m)^{−1} (x_i − b_m) ) + ε,    (8)

where ε is an additive noise term distributed as N(0, σ²_act). The background component is modeled as y_i = μ + ε, having a constant activation level μ with additive noise distributed as N(0, σ²_bg).
The second term in Equation (7) is known as a gate function in the mixture of experts framework: it decides which expert should be used to make a prediction for the activation level at position x_i. Using Bayes' rule we write this term as P(c | x_i) = p(x_i | c) π_c / Σ_{c'∈C} p(x_i | c') π_{c'}, where π_c is a class prior probability P(c). p(x_i | c) is defined as follows. For activation components, p(x_i | c_m) is a normal density with mean b_m and covariance Σ_m; b_m and Σ_m are shared with the Gaussian surface model for the experts in Equation (8). This implies that the probability of activating the mth expert is highest at the center of the activation and gradually decays as x_i moves away from the center. p(x_i | c_bg) for the background component is modeled as a uniform distribution of 1/N over all positions in the brain. If x_i is not close to the center of any activation, the gate function selects the background expert for the voxel.
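Evaluating the gate at every voxel can be sketched as follows (our own code, using scipy.stats for the bivariate normal densities; all names are assumptions):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gate_probs(X, b, Sigma, pi_act, pi_bg):
    """X: (N, 2) voxel positions; b[m], Sigma[m]: center and covariance of
    activation component m; pi_act[m], pi_bg: class priors.
    Returns (N, M) posteriors P(c | x_i), background class last."""
    N = X.shape[0]
    dens = [pi_act[m] * multivariate_normal.pdf(X, mean=b[m], cov=Sigma[m])
            for m in range(len(b))]
    dens.append(np.full(N, pi_bg / N))   # uniform background: p(x|c_bg) = 1/N
    dens = np.stack(dens, axis=1)
    return dens / dens.sum(axis=1, keepdims=True)
```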
We place a hierarchical DP prior on the π_c, and we let the location parameters b_m and the height parameters h_m vary in individual images according to normal prior distributions with variances σ²_{b_m} and σ²_{h_m}, using a random effects model. We define the prior distributions for σ²_{b_m} and σ²_{h_m} as half-normal distributions with mean 0 and a variance as suggested by Gelman (2006). Since the surface model for the activation component is highly non-linear, without conjugate prior distributions it is not possible to evaluate the integrals in Equations (5b)-(5c) and (6) analytically in the sampling algorithm. We rely on an approximation of the integrals, sampling new values for b_m and h_m from their priors and new values for the image-specific random effects parameters from N(b_m, σ²_{b_m}) and N(h_m, σ²_{h_m}), and evaluating the likelihood of the data given these new values for the unknown parameters.
5 Experimental results on fMRI data
We demonstrate the performance of the model and inference algorithm described above by using
fMRI data collected from three subjects (referred to as Subjects 1, 2 and 3) performing the same
sensorimotor task at two different fMRI scanners (Stanford and Duke). Each subject was scanned
during eight separate fMRI experiments ("runs"), and for each run a β-map (a voxel image that summarizes the brain activation) was produced using standard fMRI preprocessing.

In this experiment we analyze a 2D cross-section of the right precentral gyrus brain region, a region that is known to be activated by this sensorimotor task. We fit our model to each set of eight β-maps for each of the subjects at each scanner, and compare the results with those obtained from the hierarchical DP without random effects. We also fit standard DP mixtures to individual images
as a baseline, using Algorithm 7 from Neal (1998) to sample from the model. The concentration
parameters for DP priors in all of the three models were given a prior distribution gamma(1.5, 1)
and sampled from the posterior as described in Teh et al.(2006). For all of the models the MCMC
sampling algorithm was run for 3000 iterations.
[Figure 5: Histograms of the number of components over the last 1000 iterations (Subject 2 at Stanford), for (a) the DP mixture, (b) the hierarchical DP, and (c) the hierarchical DP with random effects.]
                        Hierarchical DP           Hierarchical DP with random effects
Scanner    Subject      Avg. logP   Std. dev.     Avg. logP   Std. dev.
Stanford   Subject 1    -1142.6     21.8          -1085.3     12.6
           Subject 2    -1260.9     32.1          -1082.8     28.7
           Subject 3    -1084.1     11.3          -1040.9     13.5
Duke       Subject 1    -1154.9     12.5          -1166.9     13.1
           Subject 2    -677.9      12.2          -559.9      15.8
           Subject 3    -1175.6     13.6          -1086.8     13.2

Table 2: Predictive logP scores of test images averaged over eight cross-validation runs. The simulation errors are shown as standard deviations.
Figure 4(a) shows ?-maps from eight fMRI runs of Subject 2 at Stanford. From the eight images one
can see three primary activation bumps, subsets of which appear in different images with variability
in location and intensity. Figures 4 (b)-(d) each show a sample from the model learned on the data
in Figure 4(a), where Figure 4(b) is for DP mixtures, Figure 4(c) for hierarchical DPs, and Figure
4(d) for hierarchical DPs with random effects. The sampled activation components are overlaid as
ellipses using one standard deviation of the width parameters ?m . The thickness of ellipses indicates
the estimated height hm of the bump. In Figures 4(b) and (c) ellipses for activation components
shared across images are drawn with the same color.
The DPs shown in Figure 4(b) seem to overfit with many bumps and show a relatively poor generalization capability because the model cannot borrow strength from other similar images. The
hierarchical DP in Figure 4(c) is not flexible enough to account for bumps that are shared across
images but that have variability in their parameters. By using one fixed set of component parameters shared across images, the hierarchical DPs are too constrained and are unable to detect the
more subtle features of individual images. The random effects model finds the three main bumps
and a few more bumps with lower intensity for the background. Thus, in terms of generalization,
the model with random effects provides a good trade-off between the relatively unconstrained DP
mixtures and overly-constrained hierarchical DPs. Histograms of the number of components (every
10 samples over the last 1000 iterations) for the three different models are shown in Figure 5.
We also perform a leave-one-image-out cross-validation to compare the predictive performance of
hierarchical DPs and our proposed model. For each subject at each scanner we fit a model from
seven images and compute the predictive likelihood of the remaining one image. The predictive
scores and simulation errors (standard deviations) averaged over eight cross-validation runs for both
models are shown in Table 2. In all of the subjects except for Subject 1 at Duke, the proposed model
shows a significant improvement over hierarchical DPs. For Subject 1 at Duke, the hierarchical DP
gives a slightly better result but the difference in scores is not significant relative to the simulation
error.
Figure 6 shows the difference in the way the hierarchical DP and our proposed model fit the data
in one cross-validation run for Subject 1 at Duke as shown in Figure 6(a). The hierarchical DP
in Figure 6(b) models the common bump with varying intensity in the middle of each image as a
mixture of two components?one for the bump in the first two images with relatively high intensity
and another for the same bump in the rest of the images with lower intensity. Our proposed model
recovers the correspondence in the bumps with different intensity across images as shown in Figure
6(c).
[Figure 6: Results from one cross-validation run for Subject 1 at Duke. (a) Raw images for a cross-section of the right precentral gyrus and surrounding area. Activation components estimated from the images are shown in (b) for hierarchical DPs and in (c) for the hierarchical DP with random effects.]
6 Conclusions
In this paper we proposed a hierarchical DP model with random effects that allows each group (or
image) to have group-level mixture component parameters as well as group-level mixing proportions. Using fMRI brain activation images we demonstrated that our model can capture components
shared across multiple groups with individual-level variation. In addition, we showed that our model
is able to estimate the number of components more reliably due to the additional flexibility in the
model compared to DP mixtures and hierarchical DPs. Possible future directions for this work include extensions to modeling differences between labeled groups of individuals, e.g., in studies of
controls and patients for a particular disorder.
Acknowledgments
We would like to thank Hal Stern for useful discussions. We acknowledge the support of the following grants: the Functional Imaging Research in Schizophrenia Testbed, Biomedical Informatics
Research Network (FIRST BIRN; 1 U24 RR021992, www.nbirn.net); the Transdisciplinary Imaging Genetics Center (P20RR020837-01); and the National Alliance for Medical Image Computing (NAMIC; Grant U54 EB005149), funded by the National Institutes of Health through the NIH
Roadmap for Medical Research. Author PS was also supported in part by the National Science
Foundation under awards number IIS-0431085 and number SCI-0225642.
References

Gelman, A., Carlin, J., Stern, H. & Rubin, D. (2004) Bayesian Data Analysis. New York: Chapman & Hall/CRC.

Gelman, A. (2006) Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3):515-533.

Kim, S., Smyth, P., & Stern, H. (2006) A nonparametric Bayesian approach to detecting spatial activation patterns in fMRI data. Proceedings of the 9th International Conference on Medical Image Computing and Computer Assisted Intervention, vol. 2, pp. 217-224.

Neal, R.M. (1998) Markov chain sampling methods for Dirichlet process mixture models. Technical Report 4915, Department of Statistics, University of Toronto.

Penny, W. & Friston, K. (2003) Mixtures of general linear models for functional neuroimaging. IEEE Transactions on Medical Imaging, 22(4):504-514.

Rasmussen, C.E. (2000) The infinite Gaussian mixture model. Advances in Neural Information Processing Systems 12, pp. 554-560. MIT Press.

Rasmussen, C.E. & Ghahramani, Z. (2002) Infinite mixtures of Gaussian process experts. Advances in Neural Information Processing Systems 14, pp. 881-888. MIT Press.

Sethuraman, J. (1994) A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650.

Sudderth, E., Torralba, A., Freeman, W. & Willsky, A. (2005) Describing visual scenes using transformed Dirichlet processes. Advances in Neural Information Processing Systems 18, pp. 1297-1304. MIT Press.

Teh, Y.W., Jordan, M.I., Beal, M.J. & Blei, D.M. (2006) Hierarchical Dirichlet processes. Journal of the American Statistical Association, to appear.
Learning to parse images of articulated bodies
Deva Ramanan
Toyota Technological Institute at Chicago
Chicago, IL 60637
[email protected]
Abstract
We consider the machine vision task of pose estimation from static images, specifically for the case of articulated objects. This problem is hard because of the large
number of degrees of freedom to be estimated. Following a established line of
research, pose estimation is framed as inference in a probabilistic model. In our
experience however, the success of many approaches often lie in the power of the
features. Our primary contribution is a novel casting of visual inference as an iterative parsing process, where one sequentially learns better and better features
tuned to a particular image. We show quantitative results for human pose estimation on a database of over 300 images that suggest our algorithm is competitive
with or surpasses the state-of-the-art. Since our procedure is quite general (it does
not rely on face or skin detection), we also use it to estimate the poses of horses
in the Weizmann database.
1 Introduction
We consider the machine vision task of pose estimation from static images, specifically for the case
of articulated objects. This problem is hard because of the large number of degrees of freedom to
be estimated. Following a established line of research, pose estimation is framed as inference in a
probabilistic model. Most approaches tend to focus on algorithms for inference, but in our experience, the low-level image features often dictate success. When reliable features can be extracted
(through say, background subtraction or skin detection), approaches tend to do well. This dependence on features tends to be under-emphasized in the literature: one does not want to appear to
suffer from "feature-itis". In contrast, we embrace it. Our primary contribution is a novel casting
of visual inference as an iterative parsing process, where one sequentially learns better and better
features tuned to a particular image. Since our approach is fairly general (we do not use any skin or
face detectors), we also apply it to estimate horse poses from the Weizmann dataset [1].
Another practical difficulty, specifically with pose estimation, is that of reporting results. It is common for an algorithm to return a set of poses, and the correct one is manually selected. This is
because the posterior of body poses is often multimodal, so a single MAP/mode estimate won't summarize it. Inspired by the language community, we propose a perplexity-based measure for evaluation. We calculate the probability of observing the actual pose under the distribution returned by our
algorithm. With such an evaluation procedure, we can quantifiably demonstrate that our approach
improves the state-of-the-art.
Related Work: Human pose estimation from static images is a very active research area. Most approaches tend to use a people-specific features, such as face/skin/hair detection [6, 4, 12]. Our work
relies on the conditional random field (CRF) notion of deformable matching in [9]. Our approach is
related to those that simultaneously estimate pose and segment an image [7, 10, 2, 5], since we learn
low-level segmentation cues to build part-specific region models. However, we compute no explicit
segmentation.
Figure 1: The curse of edges? Edges are attractive because of their invariance ? they fire on dark
objects in light backgrounds and vice-versa. But without a region model, it can be hard to separate
the figure from the background. We describe an iterative algorithm for pose estimation that learns
a region model for each body part and for the background. Our algorithm is initialized by the edge
maps shown; we show results for these two images in Fig.7 and Fig.8.
1.1 Overview
Assume we are given an image of a person, who happens to be a soccer player wearing a white shirt
on a green playing field (Fig. 2). We want to estimate the figure's pose. Since we do not know the
appearance of the figure or the background, we must use a feature invariant to appearance (Fig.1).
We match an edge-based deformable model to the image to obtain (soft) estimates of body part positions. In general, we expect these estimates to be poor because the model can be distracted by edges
in the background (e.g., the hallunicated leg and the missed arm in Fig. 2). The algorithm uses the
estimated body part positions to build a rough region model for each body part and the background ?
it might learn that the torso is white-ish and the background is green-ish. The algorithm then builds
a region-based deformable model that looks for white torsos. Soft estimates of body position from
the new model are then used to build new region models, and the process is repeated.
As one might suspect, such an iterative procedure is quite sensitive to its starting point: the edge-based deformable model used for initialization and the region-based deformable model used in the
first iteration prove crucial. As the iterative procedure is fairly straightforward (Fig.3), most of this
paper deals with smart ways of building the deformable models.
2 Edge-based deformable model
Our edge-based deformable model is an extension of the one proposed in [9]. The basic probabilistic
model is a tree-structured conditional random field (CRF). Let the location of each part l_i be param-
[Figure 2, panels: initial edge-based parse with part maps for torso, head, ru-arm, and ll-leg, showing a hallucinated leg and a missing arm.]
Figure 2: We build a deformable pose model based on edges. Given an image I, we use a edgebased deformable model (middle) to compute body part locations P(L|I). This defines an initial
parse of the image into several body part regions right. It is easy to hallucinate extra arms or legs
in the negatives spaces between actual body parts (the extra leg). When a body part is surrounded
by clutter (the right arm), it is hard to localize. Intuitively, both problems can be solved with lowlevel segmentation cues. The green region in between the legs is a poor leg candidate because of
figure/ground cues ? it groups better with the background grass. Also, we can find left/right limb
pairs by appealing to symmetry ? if one limb is visible, we can build a model of its appearance, and
use it to find the other one. We operationalize both these notions by our iterative parsing procedure
in Fig.3.
[Figure 3, panels: initial posterior from edges; learn part-specific fg/bg models; re-parse with additional features; iterations 1-3 with part maps for torso, head, lower l/r arms, and lower l/r legs, showing a weak arm response, the arm found, and the hallucinated leg suppressed.]
Figure 3: Our iterative parsing procedure. We define a parse to be a soft labeling of pixels into
a region type (bg,torso,left lower arm, etc.). We use the initial parse from Fig.2 to build a region
model for each part. We learn foreground/background color histogram models. To exploit symmetry
in appearance, we learn a single color model for left/right limb pairs. We then label each pixel using
the color model (middle right). We then use these masks as features for a deformable model that
re-computes P(L|I). This in-turn defines a new parse, and the procedure is repeated.
[Figure 4, panels: input image; final parse with part maps for torso, head, ru-arm, and ll-leg; best pose; sample poses.]
Figure 4: The result of our procedure. Given P(L|I) from the final iteration, we obtain a clean parse for the image. We can also compute L̂_MAP (the most likely pose), and sample directly from P(L|I).
eterized by image position and orientation [x_i, y_i, θ_i]. We will assume parts are oriented patches of fixed size, where (x_i, y_i) is the location of the top of the patch. We denote the configuration of a K part model as L = (l_1 . . . l_K).
We can write the deformable model as a log-linear model
P(L|I) ∝ exp( Σ_{i,j∈E} ψ(l_i − l_j) + Σ_i φ(l_i) )    (1)

ψ(l_i − l_j) corresponds to a spatial prior on the relative arrangement of part i and j. For efficient
inference, we assume the edge structure E is a tree; each part is connected to at most one parent.
Unlike most approaches that assume gaussian shape priors [9, 3], we parameterize our shape model
with discrete binning (Fig.5).
ψ(l_i − l_j) = α_i^⊤ bin(l_i − l_j)    (2)
Doing so allows us to capture more intricate distributions, at the cost of having more parameters to fit. We write bin(·) for the vectorized count of spatial and angular histogram bins (a vector of all zeros with a single one for the occupied bin). Here α_i is a model parameter that favors certain (relative) spatial and angular bins for part i with respect to its parent.
Figure 5: We record the spatial configuration of an arm given the torso by placing a grid on the
torso, and noting which bin the arm falls into. We center the grid at the average location of arm in
the training data. We likewise bin the angular orientations to define a spatial distribution of arms
given torsos.
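To make the discrete binning concrete, here is a minimal sketch of a bin(·) feature in Python; the grid size, cell size, and number of angular bins are illustrative assumptions, not values from the paper:

```python
import numpy as np

def spatial_angular_bin(li, lj, grid=10, cell=20, n_ang=12):
    """One-hot count vector for the relative configuration l_i - l_j.
    li, lj = (x, y, theta); grid x grid spatial cells of `cell` pixels each,
    n_ang orientation bins (all parameters are assumed, for illustration)."""
    dx, dy, dth = np.subtract(li, lj)
    bx = int(np.clip(dx // cell + grid // 2, 0, grid - 1))
    by = int(np.clip(dy // cell + grid // 2, 0, grid - 1))
    ba = int((dth % (2 * np.pi)) / (2 * np.pi) * n_ang) % n_ang
    f = np.zeros(grid * grid * n_ang)
    f[(bx * grid + by) * n_ang + ba] = 1.0  # single occupied bin
    return f
```

The dot product α_i^⊤ bin(l_i − l_j) then simply reads off the learned score of the occupied bin.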
φ(l_i) corresponds to the local image evidence for a part, which we define as

φ(l_i) = β_i^⊤ f_i(I(l_i))    (3)

We write f_i(I(l_i)) for the feature vector extracted from the oriented image patch at location l_i. In general, f_i() might be part-specific; it could return a binary vector of skin pixels for the head. In our case, f_i^e returns a binary vector of edges for all parts. We can visualize β_i in Fig. 6.
Inference: The basic machinery we use for inference is message-passing (the sum-product algorithm). Since E is a tree, we first pass "upstream" messages from part i to its parent j. We compute the message from part i to j as

m_i(l_j) ∝ Σ_{l_i} ψ(l_i − l_j) a_i(l_i)    (4)
a_i(l_i) ∝ φ(l_i) Π_{k∈kids_i} m_k(l_i)    (5)
Message passing can be performed exhaustively and efficiently with convolutions. If we temporarily ignore orientation and think of l_i = (x_i, y_i), we can represent messages as 2D images. The image a_i is obtained by multiplying together response images from the children of part i and from the imaging model φ(l_i). φ(l_i) can be computed by convolving the edge image with the filter β_i. m_i(l_j) can be computed by convolving a_i with a spatial filter extending over the bins from Fig. 5 (with coefficients equal to α_i). At the root, the image a_i is the true conditional marginal P(l_i|I). When l_i is 3D, we perform 3D convolutions. We assume the spatial filter α_i is separable, so convolutions can be performed separately in each dimension. This means that in practice, computing φ(l_i) is the computational bottleneck, since that requires convolving the edge image repeatedly with rotated versions of filter β_i.
Starting from the root, we can pass messages downstream from part j to part i (again with convolutions)
P(l_i|I) ∝ a_i(l_i) Σ_{l_j} ψ(l_i − l_j) P(l_j|I)    (6)
For numerical stability, we normalize images to 1 as they are computed. By keeping track of the
normalization constants, we can also compute the partition function (which is needed for computing
the evaluation score in Sec. 5).
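As a concrete illustration of the upstream pass (4)-(5) with orientation ignored, the sketch below implements the message schedule with 2D convolutions; the `parent` array, the topological ordering, and the filter handling are our assumptions, not details fixed by the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def upstream_pass(phi, parent, kernels):
    """phi: list of HxW evidence maps, one per part; kernels[i]: spatial prior
    filter for edge (i, parent[i]). Parts are assumed topologically ordered so
    children precede parents; the root is the last part."""
    K = len(phi)
    a = [p.copy() for p in phi]
    msg = [None] * K
    for i in range(K - 1):                    # all non-root parts
        msg[i] = convolve(a[i], kernels[i])   # eq. (4): spatial prior as convolution
        a[parent[i]] *= msg[i]                # eq. (5): multiply child message in
    a[K - 1] /= a[K - 1].sum()                # root marginal P(l_root | I)
    return a, msg
```

A downstream pass of the same form then produces the marginals of equation (6).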
Learning: We learn the filters α_i and β_i by CRF parameter estimation, as in [9]. We label training images with body part locations L, and find the filters that maximize P(L|I) for the training set. This objective function is convex, so we tried various optimization packages, but found simple stochastic gradient ascent to work well. We define the model learned from the edge feature map f^e as Θ^e = {α_i^e, β_i^e}.
3 Building a region model
One can use the marginals (for say, the head) to define a soft labeling for the image into head/non-head pixels. One can do this by repeatedly sampling a head location (according to P(l_i|I)) and then rendering a head at the given location and orientation. Let the rendered appearance for part i be an image patch s_i; we use a simple rectangular mask. In the limit of infinite samples, one will obtain an image
p_i(x, y) = Σ_{x_i, y_i, θ_i} P(x_i, y_i, θ_i | I) s_i^{θ_i}(x − x_i, y − y_i)    (7)
We call such an image a parse for part i (the images on the right from Fig. 2). It is readily computed by convolving P(l_i|I) with rotated versions of patch s_i. Given the parse image p_i, we learn a color histogram model for part i and "its" background.
P(fg_i(k)) ∝ Σ_{x,y} p_i(x, y) δ(im(x, y) = k)    (8)
P(bg_i(k)) ∝ Σ_{x,y} (1 − p_i(x, y)) δ(im(x, y) = k)    (9)
We use the part-specific histogram models to label each pixel as foreground or background with a
likelihood ratio test (as shown in Fig.3). To enforce symmetry in appearance, we learn a single color
model for left/right limb pairs.
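Equations (8)-(9) amount to parse-weighted color histograms followed by a per-pixel likelihood-ratio test. A minimal sketch under the assumption that colors have already been quantized into bins; the names are illustrative:

```python
import numpy as np

def part_color_model(post, img_q, n_colors=64):
    """Foreground/background color histograms (8)-(9), weighted by the parse p_i.
    post: HxW parse map p_i; img_q: HxW integer image of color-bin indices."""
    fg = np.bincount(img_q.ravel(), weights=post.ravel(), minlength=n_colors)
    bg = np.bincount(img_q.ravel(), weights=(1.0 - post).ravel(), minlength=n_colors)
    fg /= fg.sum()
    bg /= bg.sum()
    return fg, bg

def label_pixels(fg, bg, img_q):
    """Likelihood-ratio test: label a pixel foreground if P(fg|color) > P(bg|color)."""
    return fg[img_q] > bg[img_q]
```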
4 Region-based deformable model
After an initial parse, our algorithm has built an initial region model for each part (and its background). We use these models to construct binary label images for part i: P(fg_i(im)) > P(bg_i(im)). We write the oriented patch features extracted from these label images as f_i^r (for "region"-based). We want to use these features to help re-estimate the pose in an image; we use training data to learn how to do so. We learn model parameters for a region-based deformable model Θ^r by CRF parameter estimation, as in Sec. 2.
When learning Θ^r from training data, defining f_i^r is tricky: should we use the ground-truth part locations to learn the color histogram models? Doing so might be unrealistic, since it assumes that at run-time the edge-based deformable model will correctly estimate part locations. Rather, we run the edge-based model on the training data, and use the resulting parses to learn the color histogram models. This better mimics the situation at run-time, when we are faced with a new image to parse.
When applying the region-based deformable model, we have already computed the edge responses φ^e(l_i) = β_i^{e⊤} f^e(I(l_i)) (to train the region model). With little additional computational cost, we can add them as an extra feature to the region-based map f_i^r. One might think that the region features eliminate the need for edges: once we know that a person is wearing a white shirt in a green background, why bother with edges? If this were the case, one would learn a zero weight for the edge feature when learning Θ^r from training data. We learn roughly equal weights for the edge and region features, indicating both cues are complementary rather than redundant.
Given the parse from the region-based model, we can re-learn a color model for each part and the
background (and re-parse given the new models, and iterate). In our experience, both the parses and
the color models empirically converge after 1-2 iterations (see Fig. 3).
5 Results
We have tested our parsing algorithm on two datasets. Most people datasets are quite small, limited
to tens of images. We have amassed a dataset of 305 images of people in interesting poses (which
will be available on the author's webpage). It has been collected from previous datasets of sports
figures and personal pictures. To our knowledge, it is the largest labeled dataset available for human
pose recognition. We also have tested our algorithm on the Weizmann dataset of horses [1].
Evaluation: Given an image, our parsing procedure returns a distribution over poses P(L|I). Ideally, we want the true pose to have a high probability, and all other poses to have a low value. Given a set of T test images, each with a labeled ground-truth pose L̂_t, we score performance by computing −(1/T) Σ_t log P(L̂_t | I_t). This is equivalent to standard measures of perplexity (up to a log scale) [11].
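Computing the score only requires the log-probability of each ground-truth pose, which is available once the partition function is tracked as in Sec. 2. A sketch, where the inputs are hypothetical per-image quantities:

```python
import numpy as np

def pose_log_prob(unnorm_score, log_Z):
    """log P(L|I) = (unnormalized log score of pose L) - (log partition function)."""
    return unnorm_score - log_Z

def perplexity_score(scores, log_Zs):
    """Negative mean log-probability of the ground-truth poses (lower is better)."""
    return -np.mean([s - z for s, z in zip(scores, log_Zs)])
```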
Figure 6: We visualize the part models for our deformable templates: light areas correspond to positive β_i weights, and dark areas correspond to negative. It is crucial to initialize our iterative procedure with a good edge-based deformable model. Given a collection of training images with labeled body parts, one could build an edge template for each part by averaging (left); this is the standard maximum likelihood (ML) solution. As in [9], we found better results by training β_i^e with a conditional random field (CRF) model (middle). The CRF edge templates seem to emphasize different features, such as the contours of the head, lower arms, and lower torso. The first re-parsing from Fig. 3 is also very crucial; we similarly learn region-based part templates β_i^r with a CRF (right). These templates focus more on region cues rather than edges. These templates appear more sophisticated than rectangle-based limb detectors [8, 9]; for example, to find upper arms and legs, it seems important to emphasize the edge facing away from the body.
Log-probability of images given model:
            Iter 0   Iter 1   Iter 2
PeopleAll    62.33    55.60    57.39
HorsesAll    51.81    47.76    45.80

Comparison with previous work:
            Previous   Iter 0   Iter 1
USCPeople      55.85    45.77    41.49
Table 1: Quantitative evaluation. For each image, our parsing procedure returns a distribution of
poses. We evaluate our algorithm by looking at a perplexity-based score [11] ? the negative log
probability of the ground truth pose given the estimated distribution, averaged over the test set. On
the left, we look at the large datasets of people and horses (each with 300 images). Iter0 corresponds
to the distribution computed by the edge-based model, while Iter1 and Iter2 show the results after our
iterative parsing with a region-based model. For people, we achieve the best performance after one
iteration of the region-based model. For horses, we do better after two iterations. To compare with
previous approaches, we look at performance on the 20 image dataset from USC [9, 6]. Compared
to [9], our model does better at explaining the ground-truth data.
People: We learned a model from the first 100 training images (and their mirror-flipped versions).
We learn both ?e and ?r from the same training data. We have evaluated results on the 205 remaining images. We show sample image in Fig.7. We localize some difficult poses quite well, and
furthermore, the estimated posterior P(L|I) oftentimes reflects actual ambiguity in the data (i.e., if
multiple people are present). We quantitatively evaluate results in Table 1. We also compare with a
state-of-the-art algorithm from [9], and show better performance on the dataset used in that work.
Horses: We learn a model from the first 20 training images, and test it on the remaining 280 images.
In general, we do quite well. The posterior pose distribution often captures the non-rigid deformations in the body. This suggests we can use the uncertainty in our deformable matching algorithm
to recover extra information about the object. Looking at the numbers in Table 1, we see that the
parses tend to do significantly better at capturing the ground-truth poses. We also see that this dataset
is easier overall than our set of 305 people poses.
Discussion: We have described an iterative parsing approach to pose estimation. Starting with an
edge-based detector, we obtain an initial parse and iteratively build better features with which to
subsequently parse. We hope this approach of learning image-specific features will prove helpful in
other vision tasks.
References
[1] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In ECCV, 2002.
Figure 7: Sample results. We show the original image, the initial edge-based parse, and the final region-based parse. We are able to capture some extreme articulations. In many cases the posterior is ambiguous because the image is (i.e., multiple people are present). In particular, it may be surprising that the pair in the bottom-right are both recognized by the region model; this suggests that the inter-region dissimilarity learned by the color histograms is much stronger than the foreground similarity. We quantify results in Table 1.
[2] M. Bray, P. Kohli, and P. Torr. Posecut: simultaneous segmentation and 3d pose estimation of humans
using dynamic graph-cuts. In ECCV, 2006.
[3] P. F. Felzenszwalb and D. P. Huttenlocher. Pictorial structures for object recognition. Int. J. Computer
Vision, 61(1), January 2005.
[4] M.-H. Y. Gang Hua and Y. Wu. Learning to estimate human pose with data driven belief propagation. In
CVPR, 2005.
[5] M. Kumar, P. Torr, and A. Zisserman. Objcut. In CVPR, 2005.
Figure 8: Sample results for horses. Our results tend to be quite good across the entire dataset of 300
images. Even though the horse model is fairly simplistic ? a collection of rectangles similar to Fig. 6
? the posterior can capture rich non-rigid deformations of body parts. The Weizmann set of horses
seems to be easier than our people dataset - we quantify this with a perplexity score in Table 1.
[6] M. Lee and I. Cohen. Proposal maps driven mcmc for estimating human body pose in static images. In
CVPR, 2004.
[7] G. Mori, X. Ren, A. Efros, and J. Malik. Recovering human body configurations: Combining segmentation and recognition. In CVPR, 2004.
[8] D. Ramanan, D. Forsyth, and A. Zisserman. Strike a pose: Tracking people by finding stylized poses. In
CVPR, June 2005.
[9] D. Ramanan and C. Sminchisescu. Training deformable models for localization. In CVPR, 2006.
[10] X. Ren, A. C. Berg, and J. Malik. Recovering human body configurations using pairwise constraints
between parts. In ICCV, 2005.
[11] S. Russell and P. Norvig. Artifical Intelligence: A Modern Approach, chapter 23, pages 835?836. Prentice
Hall, 2nd edition edition, 2003.
[12] J. Zhang, J. Luo, R. Collins, and Y. Liu. Body localization in still images using hierarchical models and
hybrid search. In CVPR, 2006.
| 2976 |@word kohli:1 version:3 middle:3 seems:2 stronger:1 nd:1 tried:1 initial:7 configuration:4 liu:1 score:4 tuned:2 surprising:1 luo:1 si:2 must:1 readily:1 parsing:10 visible:1 numerical:1 chicago:2 partition:1 shape:2 grass:1 cue:5 selected:1 intelligence:1 hallucinate:1 record:1 location:10 org:1 zhang:1 prove:2 pairwise:1 mask:2 intricate:1 roughly:1 shirt:2 inspired:1 actual:3 curse:1 little:1 estimating:1 finding:1 eterized:1 quantitative:2 runtime:1 tricky:1 ramanan:4 appear:2 positive:1 local:1 tends:1 limit:1 ap:1 might:5 initialization:1 suggests:2 limited:1 averaged:1 weizmann:4 practical:1 practice:1 procedure:11 area:2 significantly:1 dictate:1 matching:2 suggest:1 bgi:2 prentice:1 applying:1 equivalent:1 map:5 missing:1 center:1 straightforward:1 starting:3 lowlevel:1 convex:1 rectangular:1 stability:1 notion:2 iter2:3 norvig:1 us:1 recognition:3 cut:1 database:2 binning:1 labeled:3 bottom:1 huttenlocher:1 solved:1 capture:4 parameterize:1 calculate:1 region:28 connected:1 russell:1 technological:1 ideally:1 exhaustively:1 personal:1 dynamic:1 deva:1 segment:1 smart:1 localization:2 multimodal:1 stylized:1 various:1 chapter:1 articulated:3 train:1 describe:1 horse:9 labeling:2 quite:6 cvpr:7 say:2 favor:1 gi:2 think:2 final:3 propose:1 product:1 combining:1 achieve:1 deformable:20 normalize:1 quantifiable:1 webpage:1 parent:3 extending:1 tti:1 rotated:2 object:5 help:1 ish:2 pose:38 recovering:2 quantify:2 correct:1 filter:5 stochastic:1 subsequently:1 human:8 bin:8 im:4 extension:1 hall:1 ground:6 exp:1 visualize:2 efros:1 estimation:12 label:5 sensitive:1 largest:1 vice:1 reflects:1 hope:1 rough:1 gaussian:1 always:1 rather:3 occupied:1 casting:2 focus:2 june:1 likelihood:2 contrast:1 helpful:1 inference:8 rigid:2 lj:10 eliminate:1 entire:1 pixel:5 overall:1 orientation:4 art:3 spatial:6 fairly:3 initialize:1 marginal:1 field:4 equal:2 construct:1 having:1 once:1 sampling:1 manually:1 placing:1 flipped:1 look:3 foreground:3 mimic:1 quantitatively:1 modern:1 oriented:3 simultaneously:1 pictorial:1 usc:1 fire:1 freedom:2 detection:3 message:6 evaluation:4 extreme:1 light:2 edge:29 experience:3 machinery:1 tree:3 initialized:1 re:6 deformation:2 mk:1 soft:4 cost:2 surpasses:1 person:2 ie:6 probabilistic:3 lee:1 together:1 again:1 ambiguity:1 fir:3 convolving:4 return:5 ullman:1 li:30 sec:2 coefficient:1 int:1 forsyth:1 bg:2 performed:2 root:2 observing:1 doing:2 competitive:1 recover:1 contribution:2 il:1 ir:2 who:1 likewise:1 efficiently:1 correspond:1 weak:1 ren:2 multiplying:1 detector:3 simultaneous:1 mi:2 static:4 dataset:9 color:10 knowledge:1 improves:1 torso:11 segmentation:6 sophisticated:1 response:3 zisserman:2 evaluated:1 though:1 furthermore:1 angular:3 parse:19 propagation:1 defines:2 mode:1 building:2 true:2 iteratively:1 white:4 attractive:1 deal:1 ll:2 ambiguous:1 soccer:1 won:1 crf:7 demonstrate:1 l1:1 image:56 novel:2 fi:3 common:1 empirically:1 overview:1 cohen:1 marginals:1 versa:1 ai:6 framed:2 grid:2 similarly:1 language:1 similarity:1 etc:1 add:1 posterior:6 driven:2 perplexity:4 certain:1 binary:3 success:2 yi:6 additional:2 subtraction:1 converge:1 maximize:1 redundant:1 recognized:1 strike:1 multiple:2 bother:1 match:1 fie:2 basic:2 hair:1 simplistic:1 vision:4 iteration:5 represent:1 histogram:7 normalization:1 proposal:1 background:15 want:4 separately:1 crucial:3 extra:4 borenstein:1 unlike:1 ascent:1 suspect:1 tend:5 seem:1 call:1 noting:1 easy:1 rendering:1 iterate:1 fit:1 bottleneck:1 suffer:1 returned:1 passing:2 repeatedly:2 clutter:1 
dark:2 ten:1 estimated:5 track:1 correctly:1 write:4 discrete:1 group:1 iter:5 localize:2 clean:1 rectangle:2 imaging:1 graph:1 downstream:1 sum:1 run:2 package:1 uncertainty:1 reporting:1 wu:1 missed:1 patch:6 capturing:1 bray:1 gang:1 constraint:1 kumar:1 separable:1 rendered:1 embrace:1 structured:1 according:1 poor:2 across:1 appealing:1 happens:1 leg:13 intuitively:1 invariant:1 iccv:1 mori:1 turn:1 count:1 needed:1 know:2 available:2 apply:1 limb:5 hierarchical:1 away:1 enforce:1 original:1 top:2 assumes:1 remaining:2 exploit:1 build:9 skin:5 objective:1 arrangement:1 already:1 malik:2 primary:2 dependence:1 gradient:1 separate:1 collected:1 ru:2 ratio:1 difficult:1 negative:3 suppress:1 perform:1 upper:1 convolution:4 datasets:4 january:1 defining:1 situation:1 looking:2 head:10 distracted:1 community:1 pair:4 hallucinated:2 learned:3 established:2 able:1 articulation:1 summarize:1 built:1 reliable:1 green:4 belief:1 power:1 unrealistic:1 difficulty:1 rely:1 hybrid:1 arm:17 picture:1 lk:1 faced:1 prior:2 literature:1 relative:2 expect:1 par:3 interesting:1 facing:1 degree:2 vectorized:1 wearing:2 playing:1 itis:1 surrounded:1 pi:4 eccv:2 keeping:1 institute:1 fall:1 template:6 face:3 explaining:1 felzenszwalb:1 fg:1 dimension:1 contour:1 computes:1 rich:1 author:1 collection:2 oftentimes:1 emphasize:2 ignore:1 ml:1 sequentially:2 active:1 xi:6 search:1 iterative:10 why:1 table:5 learn:18 symmetry:3 sminchisescu:1 upstream:1 edition:2 repeated:2 child:1 complementary:1 body:20 fig:17 position:4 explicit:1 lie:1 candidate:1 iter1:2 toyota:1 learns:3 down:1 operationalize:1 emphasized:1 specific:7 evidence:1 false:1 mirror:1 dissimilarity:1 easier:2 lt:1 appearance:6 likely:1 visual:2 temporarily:1 sport:1 tracking:1 hua:1 corresponds:4 truth:5 relies:1 extracted:3 conditional:4 hard:4 specifically:3 infinite:1 torr:2 averaging:1 pas:2 invariance:1 player:1 indicating:1 berg:1 people:11 collins:1 artifical:1 evaluate:2 mcmc:1 tested:2 |
Online Clustering of Moving Hyperplanes
René Vidal
Center for Imaging Science, Department of Biomedical Engineering, Johns Hopkins University
308B Clark Hall, 3400 N. Charles St., Baltimore, MD 21218, USA
[email protected]
Abstract
We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial
whose degree is the number of hyperplanes and whose derivatives at a trajectory
give an estimate of the vector normal to the hyperplane containing that trajectory.
As time proceeds, the estimates of the hyperplane normals are shown to track
their true values in a stable fashion. The segmentation of the trajectories is then
obtained by clustering their associated normal vectors. The final result is a simple
recursive algorithm for segmenting a variable number of moving hyperplanes. We
test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only
segments the bird motion from the surrounding water motion, but also determines
patterns of motion in the scene (e.g., periodic motion) directly from the temporal
evolution of the estimated polynomial coefficients. Our experiments also show
that our method can deal with appearing and disappearing motions in the scene.
1 Introduction
Principal Component Analysis (PCA) [1] refers to the problem of fitting a linear subspace S ⊂ ℝ^D of unknown dimension d < D to N sample points X = {x_i ∈ S}_{i=1}^N. A natural extension of PCA is subspace clustering, which refers to the problem of fitting a union of n ≥ 1 linear subspaces {S_j ⊂ ℝ^D}_{j=1}^n of unknown dimensions d_j = dim(S_j), 0 < d_j < D, to N points X = {x_i ∈ ℝ^D}_{i=1}^N drawn from ∪_{j=1}^n S_j, without knowing which points belong to which subspace.
This problem shows up in a variety of applications in computer vision (image compression, motion
segmentation, dynamic texture segmentation) and also in control (hybrid system identification).
Subspace clustering has been an active topic of research over the past few years. Existing methods
randomly choose a basis for each subspace, and then iterate between data segmentation and standard
PCA. This can be done using methods such as Ksubspaces [2], an extension of Kmeans to the case
of subspaces, or Expectation Maximization for Mixtures of Probabilistic PCAs [3]. An alternative
algebraic approach, which does not require any initialization, is Generalized PCA (GPCA) [4]. In
GPCA the data points are first projected onto a low-dimensional subspace. Then, a set of polynomials is fitted to the projected data points and a basis for each one of the projected subspaces is
obtained from the derivatives of these polynomials at the data points.
Unfortunately, all existing subspace clustering methods are batch, i.e. the subspace bases and the
segmentation of the data are obtained after all the data points have been collected. In addition,
existing methods are designed for clustering data lying in a collection of static subspaces, i.e. the
subspace bases do not change as a function of time. Therefore, when these methods are applied to
time-series data, e.g., dynamic texture segmentation, one typically applies them to a moving time
window, under the assumption that the subspaces are static within that window. A major disadvantage of this approach is that it does not incorporate temporal coherence, because the segmentation
and the bases at time t + 1 are obtained independently from those at time t. Also, this approach is
computationally expensive, since a new subspace clustering problem is solved at each time instant.
In this paper, we propose a computationally simple and temporally coherent online algorithm for
clustering point trajectories lying in a variable number of moving hyperplanes. We model a union of
D
n moving hyperplanes in RD , Sj (t) = {x ? RD : b?
j (t)x = 0}, j = 1, . . . , n, where b(t) ? R ,
as the zero set of a polynomial with time varying coefficients. Starting from an initial polynomial
at time t, we compute an update of the polynomial coefficients using normalized gradient descent.
The hyperplane normals are then estimated from the derivatives of the new polynomial at each
trajectory. The segmentation of the trajectories is obtained by clustering their associated normal
vectors. As time proceeds, new data are added, and the estimates of the polynomial coefficients are
more accurate, because they are based on more observations. This not only makes the segmentation
of the data more accurate, but also allows us to handle a variable number of hyperplanes. We test our
approach on the challenging problem of segmenting dynamic textures from rigid motions in video.
2 Recursive estimation of a single hyperplane
In this section, we review the normalized gradient algorithm for estimating a single hyperplane. We
consider both static and moving hyperplanes, and analyze the stability of the algorithm in each case.
Recursive linear regression. For the sake of simplicity, let us first revisit a simple linear regression
problem in which we are given measurements {x(t), y(t)} related by the equation y(t) = b^⊤ x(t). At time t, we seek an estimate b̂(t) of b that minimizes f(b) = Σ_{τ=1}^{t} (y(τ) − b^⊤ x(τ))². A simple strategy is to recursively update b̂(t) by following the negative of the gradient direction at time t,

v(t) = −(b̂^⊤(t) x(t) − y(t)) x(t).    (1)
However, it is better to normalize this gradient in order to achieve better convergence properties. As
shown in Theorem 2.8, page 77 of [5], the following normalized gradient recursive identifier
b̂(t+1) = b̂(t) − μ (b̂^⊤(t) x(t) − y(t)) x(t) / (1 + μ‖x(t)‖²),    (2)
where μ > 0 is a fixed parameter, is such that b̂(t) → b exponentially if the regressors {x(t)} are persistently exciting, i.e. if there is an S ∈ ℕ and α₁, α₂ > 0 such that for all m

α₁ I_D ≤ Σ_{t=m}^{m+S} x(t) x(t)^⊤ ≤ α₂ I_D,    (3)
where A ≤ B means that (B − A) is positive definite and I_D is the identity matrix in ℝ^{D×D}. Intuitively, the condition on the left hand side of (3) means that the data has to be persistently "rich enough" in time in order to uniquely estimate the vector b, while the condition on the right hand side is needed for stability purposes, as it imposes a uniform upper bound on the covariance of the data.
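For concreteness, the identifier (2) is a one-line update. A minimal NumPy sketch on synthetic data; all names are ours, not from the paper:

```python
import numpy as np

def ngd_step(b_hat, x, y, mu=1.0):
    """One normalized-gradient update of the identifier (2)."""
    err = b_hat @ x - y                        # prediction error b_hat^T x - y
    return b_hat - mu * err / (1.0 + mu * (x @ x)) * x

# Toy usage: track a static parameter vector b from noiseless data.
rng = np.random.default_rng(0)
b = rng.standard_normal(3)
b_hat = np.zeros(3)
for t in range(500):
    x = rng.standard_normal(3)                 # persistently exciting regressors
    b_hat = ngd_step(b_hat, x, b @ x)
print(np.linalg.norm(b_hat - b))               # close to 0 after convergence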
Consider now a modification of the linear regression problem in which the parameter vector varies with time, i.e. y(t) = b^⊤(t) x(t). As shown in [6], if the regressors {x(t)} are persistently exciting and the sequence {b(t+1) − b(t)} is L₂-stable, i.e. sup_{t≥1} ‖b(t+1) − b(t)‖₂ < ∞, then the normalized gradient recursive identifier (2) produces an estimate b̂(t) of b(t) such that {b(t) − b̂(t)} is L₂-stable.
Recursive hyperplane estimation. Let {x(t)} be a set of measurements lying in the moving hyperplane S(t) = {x ∈ ℝ^D : b^⊤(t) x = 0}. At time t, we seek an estimate b̂(t) of b(t) that minimizes the error f(b(t)) = Σ_{τ=1}^{t} (b^⊤(τ) x(τ))² subject to the constraint ‖b(t)‖ = 1. Notice that the main difference between linear regression and hyperplane estimation is that in the latter case the parameter vector b(t) is constrained to lie in the unit sphere S^{D−1}. Therefore, instead of applying standard gradient descent as in (2), we must follow the negative gradient direction along the geodesic curve in S^{D−1} passing through b̂(t). As shown in [7], the geodesic curve passing through b ∈ S^{D−1} along the tangent vector v ∈ T_b S^{D−1} is b cos(‖v‖) + (v/‖v‖) sin(‖v‖). Therefore, the update equation for the
normalized gradient recursive identifier on the sphere is
b̂(t+1) = b̂(t) cos(‖v(t)‖) + (v(t)/‖v(t)‖) sin(‖v(t)‖),    (4)
where the negative normalized gradient is computed as
v(t) = −μ (I_D − b̂(t) b̂^⊤(t)) (b̂^⊤(t) x(t)) x(t) / (1 + μ‖x(t)‖²).    (5)
Notice that the gradient on the sphere is essentially the same as the Euclidean gradient, except that it needs to be projected onto the subspace orthogonal to b̂(t) by the matrix I_D − b̂(t) b̂^⊤(t).
Another difference between recursive linear regression and recursive hyperplane estimation is that
the persistence of excitation condition (3) needs to be modified to
α₁ I_{D−1} ≤ Σ_{t=m}^{m+S} P_{b(t)} x(t) x(t)^⊤ P_{b(t)}^⊤ ≤ α₂ I_{D−1},    (6)
where the projection matrix P_{b(t)} ∈ ℝ^{(D−1)×D} onto the orthogonal complement of b(t) accounts for the fact that ‖b(t)‖ = 1. Under the persistence of excitation condition (6), if b(t) = b the identifier (4) is such that b̂(t) → b exponentially, while if {b(t+1) − b(t)} is L₂-stable, so is {b(t) − b̂(t)}.
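A sketch of one step of the spherical identifier (4)-(5); the tolerance guard for a vanishing gradient is our addition:

```python
import numpy as np

def sphere_step(b_hat, x, mu=1.0):
    """One update of (4)-(5): gradient projected onto the tangent space at b_hat,
    followed by a step along the corresponding geodesic of the unit sphere."""
    g = (b_hat @ x) * x / (1.0 + mu * (x @ x))        # Euclidean gradient term
    v = -mu * (g - b_hat * (b_hat @ g))               # project out the b_hat component
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return b_hat
    return b_hat * np.cos(nv) + (v / nv) * np.sin(nv)  # geodesic step; stays unit-norm
```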
3 Recursive segmentation of a known number of moving hyperplanes
In this section, we generalize the recursive identifier (4) and its stability properties to the case of
N trajectories {x_i(t)}_{i=1}^N lying in n hyperplanes {S_j(t)}_{j=1}^n. In principle, we could apply the identifier (4) to each one of the hyperplanes. However, as we do not know the segmentation of the data, we do not know which data to use to update each one of the n identifiers. In our approach,
the n hyperplanes are represented with a single polynomial whose coefficients do not depend on the
segmentation of the data. By updating the coefficients of this polynomial, we can simultaneously
estimate all the hyperplanes, without first clustering the point trajectories.
Representing moving hyperplanes with a time varying polynomial. Let x(t) be an arbitrary point
in one of the n hyperplanes. Then there is a vector b_j(t) normal to S_j(t) such that b_j^⊤(t) x(t) = 0. Thus, the following homogeneous polynomial of degree n in D variables must vanish at x(t):

p_n(x(t), t) = (b_1^⊤(t) x(t)) (b_2^⊤(t) x(t)) · · · (b_n^⊤(t) x(t)) = 0.    (7)
This homogeneous polynomial can be written as a linear combination of all the monomials of degree
n in x, x^I = x_1^{n_1} x_2^{n_2} · · · x_D^{n_D} with 0 ≤ n_k ≤ n for k = 1, . . . , D, and n_1 + n_2 + · · · + n_D = n, as

p_n(x, t) = Σ c_{n_1,...,n_D}(t) x_1^{n_1} · · · x_D^{n_D} = c(t)^⊤ ν_n(x) = 0,    (8)

where c_I(t) ∈ ℝ represents the coefficient of the monomial x^I. The map ν_n : ℝ^D → ℝ^{M_n(D)} is
known as the Veronese map of degree n, which is defined as [8]:
ν_n : [x_1, . . . , x_D]^⊤ ↦ [. . . , x^I, . . .]^⊤,    (9)

where I is chosen in the degree-lexicographic order and M_n(D) = (n+D−1 choose n) is the total number of independent monomials. Notice that since the normal vectors {b_j(t)} are time dependent, the vector of coefficients c(t) is also time dependent. Since both the normal vectors and the coefficient vector are defined up to scale, we will assume that ‖b_j(t)‖ = ‖c(t)‖ = 1, without loss of generality.
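The embedding ν_n can be generated directly from multisets of indices. A sketch, assuming the degree-lexicographic ordering stated above:

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(x, n):
    """Veronese map of degree n: all monomials x1^n1 * ... * xD^nD with n1+...+nD = n.
    Each size-n multiset of coordinate indices corresponds to one monomial."""
    return np.array([np.prod([x[i] for i in idx])
                     for idx in combinations_with_replacement(range(len(x)), n)])

# For D = 3 and n = 2 this returns the 6 monomials
# [x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2], so M_2(3) = 6 = C(2+3-1, 2).
```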
Recursive identification of the polynomial coefficients. Thanks to the polynomial equation (8),
we now propose a new online hyperplane clustering algorithm that operates on the polynomial coefficients c(t), rather than on the normal vectors {b_j(t)}_{j=1}^n. The advantage of doing so is that c(t) does not depend on which hyperplane the measurement x(t) belongs to. Our method operates as follows. At each time t, we seek to find an estimate ĉ(t) of c(t) that minimizes

f(c(t)) = (1/N) Σ_{τ=1}^{t} Σ_{i=1}^{N} (c^⊤(τ) ν_n(x_i(τ)))².    (10)
By using normalized gradient descent on S^{M_n(D)−1}, we obtain the following recursive identifier

ĉ(t+1) = ĉ(t) cos(‖v(t)‖) + (v(t)/‖v(t)‖) sin(‖v(t)‖),    (11)

where the negative normalized gradient is computed as

v(t) = −μ (I_{M_n(D)} − ĉ(t) ĉ^⊤(t)) ( Σ_{i=1}^{N} (ĉ^⊤(t) ν_n(x_i(t))) ν_n(x_i(t)) / N ) / ( 1 + μ Σ_{i=1}^{N} ‖ν_n(x_i(t))‖² / N ).    (12)
Notice that (11) reduces to (4) and (12) reduces to (5) if n = 1 and N = 1.
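Putting (11)-(12) together, one update of the coefficient vector looks as follows; the sketch reuses the `veronese` helper shown above and treats the rows of `X` as the N current trajectory points:

```python
import numpy as np

def update_coefficients(c_hat, X, n, mu=1.0):
    """One normalized-gradient step (11)-(12) on the sphere of polynomial coefficients."""
    V = np.array([veronese(x, n) for x in X])          # N x M_n(D) embedded regressors
    grad = (V * (V @ c_hat)[:, None]).mean(axis=0)     # (1/N) sum_i (c^T v_i) v_i
    grad = grad - c_hat * (c_hat @ grad)               # project onto tangent space at c_hat
    v = -mu * grad / (1.0 + mu * np.mean(np.sum(V**2, axis=1)))
    nv = np.linalg.norm(v)
    return c_hat if nv < 1e-12 else c_hat * np.cos(nv) + (v / nv) * np.sin(nv)
```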
Recursive identification of the hyperplane normals. Given an estimate of c(t), we may obtain an
estimate of the vector normal to the hyperplane containing a trajectory x(t) from the derivative of
the polynomial p̂_n(x, t) = ĉ^⊤(t) ν_n(x) at x(t) as

b̂(x(t)) = Dν_n(x(t))^⊤ ĉ(t) / ‖Dν_n(x(t))^⊤ ĉ(t)‖,    (13)
where Dν_n(x) is the Jacobian of ν_n at x. We choose the derivative of p̂_n to estimate the normal vector b_j(t), because if x(t) is a trajectory in the jth hyperplane, then b_j^⊤(t) x(t) = 0, hence the derivative of the true polynomial p_n at the trajectory gives

Dp_n(x(t), t) = ∂p_n(x(t), t)/∂x(t) = Σ_{k=1}^{n} Π_{ℓ≠k} (b_ℓ^⊤(t) x(t)) b_k(t) ∼ b_j(t).    (14)
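The Jacobian Dν_n needed in (13) follows from the same monomial bookkeeping, since ∂x^I/∂x_k = n_k x^{I−e_k}. A sketch under the same assumptions as the `veronese` helper above:

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese_jacobian(x, n):
    """Jacobian of the Veronese map: row (monomial) I, column k holds d x^I / d x_k."""
    idxs = list(combinations_with_replacement(range(len(x)), n))
    J = np.zeros((len(idxs), len(x)))
    for r, idx in enumerate(idxs):
        for k in set(idx):
            rest = list(idx)
            rest.remove(k)                    # drop one factor of x_k
            J[r, k] = idx.count(k) * np.prod([x[i] for i in rest])
    return J

def normal_estimate(x, c_hat, n):
    """Normal vector (13): normalized derivative of the polynomial at x."""
    g = veronese_jacobian(x, n).T @ c_hat
    return g / np.linalg.norm(g)
```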
Stability of the recursive identifier. Since in practice we do not know the true polynomial coefficients c(t), and we estimate b̂(t) from ĉ(t), we need to show that both ĉ(t) and b̂(x(t)) track their true values in a stable fashion. Theorem 1 shows that this is the case. Notice that the persistence of excitation condition for multiple hyperplanes (15) is essentially the same as the one for a single hyperplane (6), but properly modified to take into account that the regressors are a set of trajectories in the embedded space {ν_n(x_i(t))}_{i=1}^N, rather than a single trajectory in the original space {x(t)}.
Theorem 1 Let P_{c(t)} ∈ ℝ^{(M_n(D)−1)×M_n(D)} be a projection matrix onto the orthogonal complement of c(t). Consider the recursive identifier (11)–(13) and assume that the embedded regressors {ν_n(x_i(t))}_{i=1}^N are persistently exciting, i.e. there exist α₁, α₂ > 0 and S ∈ ℕ such that for all m

α₁ I_{M_n(D)−1} ≤ Σ_{t=m}^{m+S} Σ_{i=1}^{N} P_{c(t)} ν_n(x_i(t)) ν_n^⊤(x_i(t)) P_{c(t)}^⊤ ≤ α₂ I_{M_n(D)−1}.    (15)

Then the sequence c(t) − ĉ(t) is L₂-stable. Furthermore, if a trajectory x(t) belongs to the jth hyperplane, then the corresponding b̂(x(t)) in (13) is such that b_j(t) − b̂(x(t)) is L₂-stable. If in addition the hyperplanes are static, then c(t) − ĉ(t) → 0 and b_j(t) − b̂(x(t)) → 0 exponentially.
Proof. [Sketch only] When the hyperplanes are static, the exponential convergence of ĉ(t) to c follows with minor modifications from Theorem 2.8, page 77 of [5]. This implies that there exist κ, β > 0 such that ‖ĉ(t) − c‖ ≤ κβ^{−t}. Also, since the vectors b_1, . . . , b_n are different, the polynomial c^⊤ν_n(x) has no repeated factor. Therefore, there is a δ > 0 and a T > 0 such that for all t > T we have ‖Dν_n^⊤(x(t)) c‖ ≥ δ and ‖Dν_n^⊤(x(t)) ĉ(t)‖ ≥ δ (see the proof of Theorem 3 in [9] for the latter claim). Combining this with ‖ĉ‖ ≤ ‖c‖ + ‖ĉ − c‖ and ‖c‖ = 1, we obtain that when x(t) ∈ S_j,

‖b_j − b̂(x(t))‖
= ‖ Dν_n^⊤(x(t)) c · ‖Dν_n^⊤(x(t)) ĉ(t)‖ − Dν_n^⊤(x(t)) ĉ(t) · ‖Dν_n^⊤(x(t)) c‖ ‖ / ( ‖Dν_n^⊤(x(t)) ĉ(t)‖ ‖Dν_n^⊤(x(t)) c‖ )
≤ 2 ‖Dν_n^⊤(x(t))‖² ‖ĉ(t) − c‖ / δ²
≤ 2 γ_n² E_n² κ β^{−t} / δ²,

showing that b̂(x(t)) → b_j exponentially. In the last step we used the fact that for all x ∈ ℝ^D there is a constant matrix of exponents E_{kn} ∈ ℝ^{M_n(D)×M_{n−1}(D)} such that ∂ν_n(x)/∂x_k = E_{kn} ν_{n−1}(x), so that ‖Dν_n(x(t))‖ ≤ E_n ‖ν_{n−1}(x(t))‖ ≤ γ_n E_n, where E_n = max_k(‖E_{kn}‖) and γ_n is a constant bounding ‖ν_{n−1}(x(t))‖, the regressors being bounded. Consider now the case in which the hyperplanes are moving. Since S^{D−1} is compact, the sequences {b_j(t+1) − b_j(t)}_{j=1}^n are trivially L₂-stable, hence so is the sequence {c(t+1) − c(t)}. The L₂-stability of {c(t) − ĉ(t)} and {b_j(t) − b̂(x(t))} follows.
Segmentation of the point trajectories. Theorem 1 provides us with a method for computing an estimate b̂(x_i(t)) of the normal to the hyperplane passing through each one of the N trajectories {x_i(t) ∈ ℝ^D}_{i=1}^N at each time instant. The next step is to cluster these normals into n groups, thereby segmenting the N trajectories. We do so by using a recursive version of the K-means algorithm, adapted to vectors on the unit sphere. Essentially, at each t, we seek the normal vectors b̃_j(t) ∈ S^{D−1} and the memberships w_ij(t) ∈ {0, 1} of trajectory i to hyperplane j that maximize

f({w_ij(t)}, {b̃_j(t)}) = Σ_{i=1}^{N} Σ_{j=1}^{n} w_ij(t) (b̃_j^⊤(t) b̂(x_i(t)))².    (16)
The main difference with K-means is that we maximize the dot product of each data point with
the cluster center, rather than minimizing the distance. Therefore, the cluster center is given by the
principal component of each group, rather than the mean. In order to obtain temporally coherent
estimates of the normal vectors, we use the estimates at time t to initialize the iterations at time t + 1.
Algorithm 1 (Recursive hyperplane segmentation)

Initialization step
1: Randomly choose {b̃_j(1)}_{j=1}^n and ĉ(1), or else apply the GPCA algorithm to {x_i(1)}_{i=1}^N.

For each t ≥ 1
1: Update the coefficients of the polynomial p̂_n(x(t), t) = ĉ(t)^⊤ ν_n(x(t)) using the recursive procedure

ĉ(t+1) = ĉ(t) cos(‖v(t)‖) + (v(t)/‖v(t)‖) sin(‖v(t)‖),
v(t) = −μ (I_{M_n(D)} − ĉ(t) ĉ^⊤(t)) ( Σ_{i=1}^{N} (ĉ^⊤(t) ν_n(x_i(t))) ν_n(x_i(t)) / N ) / ( 1 + μ Σ_{i=1}^{N} ‖ν_n(x_i(t))‖² / N ).

2: Solve for the normal vectors from the derivatives of p̂_n at the given trajectories

b̂(x_i(t)) = Dν_n(x_i(t))^⊤ ĉ(t) / ‖Dν_n(x_i(t))^⊤ ĉ(t)‖,    i = 1, . . . , N.

3: Segment the normal vectors using the K-means algorithm on the sphere
(a) Set w_ij(t) = 1 if j = argmax_{k=1,...,n} (b̃_k^⊤(t) b̂(x_i(t)))², and w_ij(t) = 0 otherwise, for i = 1, . . . , N, j = 1, . . . , n.
(b) Set b̃_j(t) = PCA([ w_1j(t) b̂(x_1(t))  w_2j(t) b̂(x_2(t))  · · ·  w_Nj(t) b̂(x_N(t)) ]), j = 1, . . . , n.
(c) Iterate (a) and (b) until convergence of w_ij(t), and then set b̃_j(t+1) = b̃_j(t).
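Step 3 is a K-means variant on the sphere whose centers are principal directions rather than means (so that b and −b are treated identically). A minimal sketch, with `B` holding the N normal estimates as rows; the names are illustrative:

```python
import numpy as np

def spherical_kmeans(B, centers, iters=20):
    """Cluster unit normals: assign by squared dot product, then refit each
    center as the principal component of its group's scatter matrix."""
    for _ in range(iters):
        labels = np.argmax((B @ centers.T) ** 2, axis=1)
        for j in range(centers.shape[0]):
            Bj = B[labels == j]
            if len(Bj) > 0:
                # principal eigenvector of the scatter matrix of group j
                _, vecs = np.linalg.eigh(Bj.T @ Bj)
                centers[j] = vecs[:, -1]
    return labels, centers
```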
4 Recursive segmentation of a variable number of moving hyperplanes
In the previous section, we proposed a recursive algorithm for segmenting n moving hyperplanes
under the assumption that n is known and constant in time. However, in many practical situations
the number of hyperplanes may be unknown and time varying. For example, the number of moving
objects in a video sequence may change due to objects entering or leaving the camera field of view.
In this section, we consider the problem of segmenting a variable number of moving hyperplanes.
We denote by n(t) ∈ ℕ the number of hyperplanes at time t and assume we are given an upper bound n ≥ n(t). We show that if we apply Algorithm 1 with the number of hyperplanes set to n,
then we can still recover the correct segmentation of the scene, even if n(t) < n. To see this, let us
have a close look at the persistence of excitation condition in equation (15) of Theorem 1. Since the
condition on the right hand side of (15) holds trivially when the regressors xi (t) are bounded, the
only important condition is the one on the left hand side. Notice that the condition on the left hand
side implies that the spatial-temporal covariance matrix of the embedded regressors must be of rank
M_n(D) − 1 in any time window of size S for some integer S. Loosely speaking, the embedded
regressors must be ?rich enough? either in space or in time.
The case in which there is an α₁ > 0 such that for all t

n(t) = n  and  Σ_{i=1}^{N} P_{c(t)} ν_n(x_i(t)) ν_n^⊤(x_i(t)) P_{c(t)}^⊤ ≥ α₁ I_{M_n(D)−1}    (17)
corresponds to the case of data that is rich in space. In this case, at each time instant we draw data
from all n hyperplanes and the data is rich enough to estimate all n hyperplanes at each time instant.
In fact, condition (17) is the one required by GPCA [4], which in this case can be applied at each
time t independently. Notice also that (17) is equivalent to (15) with S = 1.
The case in which n(t) = 1 and there are α₁ > 0, S ∈ ℕ and i ∈ {1, . . . , N} such that for all m

Σ_{t=m}^{m+S} ν_n(x_i(t)) ν_n^⊤(x_i(t)) ≥ (α₁/N) I_{M_n(D)−1}    (18)
corresponds to the case of data that is rich in time. In this case, at each time instant we draw data
from a single hyperplane. As time proceeds, however, the data must be persistently drawn from
at least n hyperplanes in order for (18) to hold. This can be achieved either by having n different
static hyperplanes and persistently drawing data from all of them, or by having less than n moving
hyperplanes whose motion is rich enough so that (18) holds.
In summary, as long as the embedded regressors satisfy condition (15) for some upper bound n on
the number of hyperplanes, the recursive identifier (11)-(13) will still provide L2 -stable estimates of
the parameters, even if the number of hyperplanes is unknown and variable, and n(t) < n for all t.
5 Experiments
Experiments on synthetic data. We randomly draw N = 200 3D points lying in n = 2 planes
and apply a time varying rotation to these points for t = 1, . . . , 1000 to generate N trajectories {x_i(t)}_{i=1}^N. Since the true segmentation is known, we compute the vectors {b_j(t)} normal to each plane, and use them to generate the vector of coefficients c(t). We run our algorithm on the so-generated data with n = 2, μ = 1 and a random initial estimate for the parameters. We compare these estimates with the ground truth using the percentage of misclassified points. We also consider the error of the polynomial coefficients and the normal vectors by computing the angles between the estimated and true values. Figure 1 shows the true and estimated parameters, as well as the estimation errors. Observe that the algorithm takes about 100 seconds for the errors to stabilize within 1.62° for the coefficients, 1.62° for the normals, and 4% for the segmentation error.
[Figure 1 plots: true and estimated polynomial coefficients; estimation error of the polynomial (degrees); true and estimated normal vector b1; estimation error of b1 and b2 (degrees); segmentation error (%); all plotted against time (seconds, 0-1000).]
Figure 1: Segmenting 200 points lying on two moving planes in ℝ³ using our recursive algorithm.
Segmentation of dynamic textures. We now apply our algorithm to the problem of segmenting
video sequences of dynamic textures, i.e. sequences of nonrigid scenes that exhibit some temporal
stationarity, e.g., water, smoke, or foliage. As proposed in [10], one can model the temporal evolution of the image intensities as the output of a linear dynamical system. Since the trajectories of
the output of a linear dynamical system live in the so-called observability subspace, the intensity
trajectories of pixels associated with a single dynamic texture lie in a subspace. Therefore, the set
of all intensity trajectories lie in multiple subspaces, one per dynamic texture.
Given F consecutive frames of a video sequence {I(f)}_{f=t−F+1}^{t}, we interpret the data as a matrix W(t) ∈ ℝ^{N×3F}, where N is the number of pixels, and 3 corresponds to the three RGB color channels. We obtain a data point x_i(t) ∈ ℝ^D from image I(t) by projecting the ith row of W(t), w_i^⊤(t), onto a subspace of dimension D, i.e. x_i(t) = Π w_i(t), with Π ∈ ℝ^{D×3F}. The projection matrix Π can be obtained in a variety of ways. We use the D principal components of the first F frames to define Π. More specifically, if W(F) = UΣV^⊤, with U ∈ ℝ^{N×D}, Σ ∈ ℝ^{D×D} and V ∈ ℝ^{3F×D}, is a rank-D approximation of W(F) computed using SVD, then we choose Π = Σ^{−1}V^⊤.
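A sketch of this projection step; `W` is the N×3F data matrix built from the first F frames, and the function name is ours:

```python
import numpy as np

def learn_projection(W, D):
    """Projection onto the top-D principal components: Pi = Sigma^{-1} V^T."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return np.diag(1.0 / s[:D]) @ Vt[:D]       # D x 3F

# Each pixel trajectory w_i (a row of W(t)) is then embedded as x_i = Pi @ w_i.
```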
We applied our method to a sequence (110 × 192, 130 frames) containing a bird floating on water, while rotating around a fixed point. The task is to segment the bird's rigid motion from the water's dynamic texture, while at the same time tracking the motion of the bird. We chose D = 5 principal components of the F = 5 first frames of the RGB video sequence to project each frame onto a lower
dimensional space. Figure 2 shows the segmentation. Although the convergence is not guaranteed
with only 130 frames, it is clear that the polynomial coefficients already capture the periodicity of the
motion. As shown in the last row of Figure 2, some coefficients of the polynomial oscillate in time.
One can notice that the orientation of the bird is related to the value of the coefficient c8 . If the bird is
facing to the right showing her right side, the value of c8 achieves a local maximum. On the contrary
if the bird is oriented to the left, the value of c8 achieves a local minimum. Some irregularities seem
to appear at the local minima of this coefficient: they actually correspond to a rapid motion of
the bird. One can distinguish three behaviors for the polynomial coefficients: oscillations, pseudooscillations or quasi-linearity. For both the oscillations and the pseudo-oscillations the period is
identical to the bird?s motion period (40 frames). This example shows that the coefficients of the
estimated polynomial give useful information about the scene motion.
Figure 2: Segmenting a bird floating on water. Top: frames 17, 36, 60, 81, and 98 of the sequence.
Middle: segmentation obtained using our method. Bottom: temporal evolution of c8 during the
video sequence, with the red dot indicating the location of the corresponding frame in this evolution.
To test the performance of our method on a video sequence with a variable number of motions, we extracted a sub-clip of the bird sequence (55 × 192 pixels, 130 frames) in which the camera moves up at 1 pixel/frame until the bird disappears at t = 51. The camera stays stationary from t = 56 to t = 66 and then moves down at 1 pixel/frame; the bird reappears at t = 76. We applied both GPCA and our method initialized with GPCA to this video sequence. For GPCA we used a moving window of τ = 5 frames. For our method we chose D = 5 principal components of the first τ = 5 frames of the RGB video sequence to project each frame onto a fixed lower dimensional space. We set the gain parameter of the recursive algorithm to μ = 1. Figure 3 shows the segmentation results. Notice that both methods give excellent results during the first few frames, when both the bird and the water are present. This is expected, as our method is initialized with GPCA. Nevertheless, notice that the performance of GPCA deteriorates dramatically when the bird disappears, because GPCA overestimates the number of hyperplanes, whereas our method is robust to this change and keeps segmenting the scene correctly, i.e., assigning all the pixels to the background. When the bird reappears, our method detects the bird correctly from the first frame, whereas GPCA produces a wrong segmentation for the first frames after the bird reappears. Towards the end of the sequence, both algorithms give a good segmentation. This demonstrates that our method can deal with a variable number of motions, while GPCA cannot. In addition, the fixed projection and the recursive estimation of the polynomial coefficients make our method much faster than GPCA.
Figure 3: Segmenting a video sequence with a variable number of dynamic textures. Top: frames 1,
24, 65, 77, and 101. Middle: segmentation with GPCA. Bottom: segmentation with our method.
6 Conclusions
We have proposed a simple recursive algorithm for segmenting trajectories lying in a variable number of moving hyperplanes. The algorithm updates the coefficients of a polynomial whose derivatives give the normals to the moving hyperplanes as well as the segmentation of the trajectories. We
applied our method successfully to the segmentation of videos containing multiple dynamic textures.
Acknowledgments
The author acknowledges the support of grants NSF CAREER IIS-04-47739, NSF EHS-05-09101
and ONR N00014-05-10836.
References
[1] I. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[2] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman. Clustering appearances of objects under varying illumination conditions. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 11–18, 2003.
[3] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443–482, 1999.
[4] R. Vidal, Y. Ma, and S. Sastry. Generalized Principal Component Analysis (GPCA). IEEE Trans. on Pattern Analysis and Machine Intelligence, 27(12):1–15, 2005.
[5] B.D.O. Anderson, R.R. Bitmead, C.R. Johnson Jr., P.V. Kokotovic, R.L. Kosut, I.M.Y. Mareels, L. Praly, and B.D. Riedle. Stability of Adaptive Systems. MIT Press, 1986.
[6] L. Guo. Stability of recursive stochastic tracking algorithms. In IEEE Conf. on Decision & Control, pages 2062–2067, 1993.
[7] A. Edelman, T. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
[8] J. Harris. Algebraic Geometry: A First Course. Springer-Verlag, 1992.
[9] R. Vidal and B.D.O. Anderson. Recursive identification of switched ARX hybrid models: Exponential convergence and persistence of excitation. In IEEE Conf. on Decision & Control, pages 32–37, 2004.
[10] G. Doretto, A. Chiuso, Y. Wu, and S. Soatto. Dynamic textures. International Journal of Computer Vision, 51(2):91–109, 2003.
Mutagenetic tree Fisher kernel improves prediction of
HIV drug resistance from viral genotype
Tobias Sing
Department of Computational Biology
Max Planck Institute for Informatics
Saarbrücken, Germany
[email protected]
Niko Beerenwinkel*
Department of Mathematics
University of California
Berkeley, CA 94720
Abstract
Starting with the work of Jaakkola and Haussler, a variety of approaches have been
proposed for coupling domain-specific generative models with statistical learning
methods. The link is established by a kernel function which provides a similarity
measure based inherently on the underlying model. In computational biology, the
full promise of this framework has rarely ever been exploited, as most kernels are
derived from very generic models, such as sequence profiles or hidden Markov
models. Here, we introduce the MTreeMix kernel, which is based on a generative
model tailored to the underlying biological mechanism. Specifically, the kernel
quantifies the similarity of evolutionary escape from antiviral drug pressure between two viral sequence samples. We compare this novel kernel to a standard,
evolution-agnostic amino acid encoding in the prediction of HIV drug resistance
from genotype, using support vector regression. The results show significant improvements in predictive performance across 17 anti-HIV drugs. Thus, in our
study, the generative-discriminative paradigm is key to bridging the gap between
population genetic modeling and clinical decision making.
1
Introduction
Kernels provide a general framework of statistical learning that allows for integrating problem-specific background knowledge via the geometry of a feature space. Owing to this unifying characteristic, kernel methods enjoy increasing popularity in many application domains, particularly in
computational biology [1]. Unfortunately, despite some basic results on the derivation of novel kernels from existing kernels or from more general similarity measures (e.g. via the empirical kernel
map [1]), the field suffers from a lack of well-characterized design principles. As a consequence,
most novel kernels are still developed in an ad hoc manner.
One of the most promising developments in the recent search for a systematic kernel design methodology is the generative-discriminative paradigm [2], also known under the more general term of
model-dependent feature extraction (MDFE) [3]. The central idea of MDFE is to derive kernels
from generative probabilistic models of a given process or phenomenon. Starting with Jaakkola and
Haussler [2] and the seminal work of Amari [4] on the differential geometric structure of probabilistic models, a number of studies have contributed to an emerging theoretical foundation of MDFE.
However, the paradigm is also of immediate intuitive appeal, because mechanistic models of a process that are consistent with observed data and that provide falsifiable predictions often allow for
more profound insights than purely discriminative approaches. Moreover, entities that are similar
according to a mechanistic model should be expected to exhibit similar behavior in any related prop-
Current address: Program for Evolutionary Dynamics, Harvard University, Cambridge, MA 02138,
[email protected]
erties. From this perspective, MDFE provides a natural bridge between mathematical modeling and
statistical learning.
To date, a variety of generic MDFE procedures have been proposed, including the Fisher kernel
[2] and, more generally, marginalized kernels [5], as well as the TOP [3], heat [6], and probability
product kernels [7], along with a number of variations. Surprisingly, however, instantiations of these
procedures in bioinformatics have been confined to a very limited number of classical problems,
namely protein fold recognition, DNA splice site prediction, exon detection, and phylogenetics.
Furthermore, most approaches are based on standard graphical models, such as amino acid sequence
profiles or hidden Markov models, that are not adapted in any specific way to the process at hand. For
example, a first-order Markov chain along the primary structure of a protein is hardly related to the
causal mechanisms underlying polypeptide evolution. Thus, the potential of combining biological
modeling with kernelization in the framework of MDFE remains vastly unexplored.
This paper is motivated by a regression problem from clinical bioinformatics that has recently attracted substantial attention due to its pivotal role in anti-HIV therapy: the prediction of phenotypic
drug resistance from viral genotype (reviewed in [8]). Drug resistant viruses present a major cause
of treatment failure and their occurrence renders many of the available drugs ineffective. Therefore,
knowing the precise patterns of drug resistance is an important prerequisite for the choice of optimal
drug combinations [9, 10].
Drug resistance arises as a virus population evolves under partially suppressive antiviral therapy.
The extreme evolutionary dynamics of HIV quickly generate viral genetic variants that are selected
for their ability to replicate in the presence of the applied drug cocktail. These advantageous mutants eventually outgrow the wild type population and lead to therapy failure. Thus, the resistance
phenotype is determined by the viral genotype. The genotype-phenotype prediction problem is of
considerable clinical relevance, because genotyping is much faster and cheaper, while treatment
decisions are ultimately based on the viral phenotype (i.e. the level of resistance).
From the perspective of MDFE, the interesting feature of HIV drug resistance lies in the structure of
the underlying generative process. The development of resistance involves the stochastic accumulation of mutations in the viral genome along certain mutational pathways. Here, we demonstrate
how to exploit this evolutionary structure in genotype-phenotype prediction by deriving a Fisher
kernel for mixtures of mutagenetic trees, a family of graphical models designed to represent such
genetic accumulation processes. The remainder of this paper is organized as follows. In the next
section, we briefly summarize the mutagenetic trees mixture (MTreeMix) model, originally introduced in [11]. The Fisher kernel is derived in Section 3. In Section 4, the kernel is applied to the
genotype-phenotype prediction problem introduced above. We conclude with some of the broader
implications of our study, including directions for future work.
2
Mixture models of mutagenetic trees
Consider n genetic events {1, . . . , n}. With each event v, we associate the binary random variable
Xv , such that {Xv = 1} indicates the occurrence of v. In our applications, the set {1, . . . , n} will
denote the mutations conferring resistance to a specific anti-HIV drug. Syntactically, a mutagenetic
tree for n genetic events is a connected branching T = (V, E) on the vertices V = {0, 1, . . . , n}
and rooted at 0, where E ⊆ V × V denotes the edge set of T. Semantically, the mutagenetic tree
model induced by T and the parameter vector θ = (θ_1, . . . , θ_n) ∈ (0, 1)^n is the Bayesian network
on T with constrained conditional probability tables of the form
θ_v = ( 1        0
        1 − θ_v  θ_v ),   v = 1, . . . , n,
where the rows are indexed by the state of the parent pa(v) ∈ {0, 1} and the columns by the state of v ∈ {0, 1}.
Thus, a mutagenetic tree model is the family of distributions of X = (X1 , . . . , Xn ) that factor as
Pr(X = x | θ) = ∏_{v=1}^{n} θ_{v, (x_pa(v), x_v)}.
Here, x0 := 1 (indicating the wild type state without any resistance mutations), and pa(v) denotes
the parent of vertex v in T . Figure 1 shows a mutagenetic tree for the development of resistance to
the protease inhibitor nelfinavir.
[Figure 1 tree diagram omitted. Vertices: wild type (root), 30N, 36I, 77I, 46IL, 71VT, 88DS, 82AFTS, 84V, 10FI; edge labels (conditional probabilities): 0.99, 0.44, 0.52, 0.21, 0.48, 0.26, 0.14, 0.86, 0.10.]
Figure 1: Mutagenetic tree for the development of resistance to the HIV protease inhibitor nelfinavir
(NFV). Vertices of the tree are labeled with amino acid changes in the protease enzyme. Edges are
labeled with conditional probabilities. The tree represents one component of the 6-trees mixture
model estimated for this evolutionary process.
The probability tables impose the constraint that a mutation can only be present if its predecessor
in the topology is also present. This restriction sets mutagenetic trees apart from standard Bayesian
networks in that it allows for an evolutionary interpretation of the tree topology. In particular, the
model implies the existence of certain mutational pathways with distinct probabilities. Each pathway is required to respect the order of mutation accumulation that is encoded in the tree. Mutational
patterns which do not respect these order constraints have probability zero in the model. We shall exclude these genotypes from the state space of the model. The state space then becomes the following
subset of {0, 1}^n,
C = { x ∈ {0, 1}^n | (x_pa(v), x_v) ≠ (0, 1), for all v ∈ V },
and the factorization of the joint distribution simplifies to
Pr(X = x | θ) = ∏_{v : x_pa(v) = 1} θ_v^{x_v} (1 − θ_v)^{1 − x_v}.
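As a concrete illustration, the factorization can be evaluated with a few lines of Python (the parent-array encoding of the tree below is our own convention, not from the paper or the MTreeMix package):

```python
def tree_likelihood(x, parent, theta):
    """Pr(X = x | theta) for a single mutagenetic tree.

    x:      0/1 list of length n+1 with x[0] = 1 (the wild-type root event).
    parent: parent[v] is the parent vertex of v, for v = 1..n.
    theta:  theta[v] = Pr(X_v = 1 | X_parent(v) = 1); theta[0] is unused.
    """
    p = 1.0
    for v in range(1, len(x)):
        if x[parent[v]] == 1:
            p *= theta[v] if x[v] == 1 else 1.0 - theta[v]
        elif x[v] == 1:
            return 0.0  # a mutation without its predecessor is incompatible
    return p
```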
The mutational pathway metaphor, originating in the virological literature, is generally considered
to be a reasonable approximation to HIV evolution under drug pressure. However, sets of mutational
patterns that support different tree topologies are commonly seen in clinical HIV databases. Thus, in
order to allow for increased flexibility in modeling evolutionary pathways and to account for noise in
the observed data, we consider the larger model class of mixtures of mutagenetic trees. Intuitively,
these mixture models correspond to the assumption that a variety of evolutionary forces contribute
additively in shaping HIV genetic variability in vivo.
Consider K mutagenetic trees T_1, . . . , T_K with weights λ_1, . . . , λ_{K−1}, and λ_K = 1 − ∑_{k=1}^{K−1} λ_k, respectively, such that 0 ≤ λ_k ≤ 1 for all k = 1, . . . , K. Each tree T_k has parameters θ_k = (θ_{k,v})_{v=1,...,n}. The mutagenetic trees mixture model is the family of distributions of X of the form
Pr(X = x | λ, θ) = ∑_{k=1}^{K} λ_k Pr(X = x | θ_k).
The state space C of this model is the union of the state spaces of the single tree models induced by
T_1, . . . , T_K. In our applications, we will always fix the first tree to be a star, such that C = {0, 1}^n
(i.e., all mutational patterns have non-zero probability). The star accounts for the spontaneous and
independent occurrence of genetic events.
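In the same hypothetical encoding, the mixture likelihood is just the λ-weighted sum of the component likelihoods (reusing tree_likelihood from the sketch above):

```python
def mixture_likelihood(x, trees, lambdas):
    """Pr(X = x | lambda, theta) for a K-component mutagenetic trees mixture.

    trees:   list of (parent, theta) pairs, one per component; with a star
             as the first component, every pattern receives non-zero mass.
    lambdas: non-negative mixture weights summing to one.
    """
    return sum(lam * tree_likelihood(x, parent, theta)
               for lam, (parent, theta) in zip(lambdas, trees))
```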
3
The MTreeMix Fisher kernel
We now derive a Fisher kernel for the mutagenetic trees mixture models introduced in the previous section. In this paper, our primary motivation is to improve the prediction of drug resistance
x, x′   | star: 0→1, 0→2                          | chain: 0→1→2                    | chain: 0→2→1
00,00   | (θ1−1)^−2 + (θ2−1)^−2                   | (θ1−1)^−2                       | (θ2−1)^−2
00,01   | (θ1−1)^−2 + θ2^−1(θ2−1)^−1              | –                               | θ2^−1(θ2−1)^−1
00,10   | θ1^−1(θ1−1)^−1 + (θ2−1)^−2              | θ1^−1(θ1−1)^−1                  | –
00,11   | θ1^−1(θ1−1)^−1 + θ2^−1(θ2−1)^−1         | θ1^−1(θ1−1)^−1                  | θ2^−1(θ2−1)^−1
01,01   | (θ1−1)^−2 + θ2^−2                       | –                               | (θ1−1)^−2 + θ2^−2
01,10   | θ1^−1(θ1−1)^−1 + θ2^−1(θ2−1)^−1         | –                               | –
01,11   | θ1^−1(θ1−1)^−1 + θ2^−2                  | –                               | θ1^−1(θ1−1)^−1 + θ2^−2
10,10   | θ1^−2 + (θ2−1)^−2                       | θ1^−2 + (θ2−1)^−2               | –
10,11   | θ1^−2 + θ2^−1(θ2−1)^−1                  | θ1^−2 + θ2^−1(θ2−1)^−1          | –
11,11   | θ1^−2 + θ2^−2                           | θ1^−2 + θ2^−2                   | θ1^−2 + θ2^−2
Table 1: Mutagenetic tree Fisher kernels for the three trees on the vertices {0, 1, 2}. The value of the kernel K(x, x′) is displayed for all possible pairs of mutational patterns (x, x′). Empty cells (marked –) are indexed with genotypes that are not compatible with the tree.
from viral genotype. However, we defer application-specific details to Section 4, to emphasize the
broader applicability of the kernel itself, for example in kernelized principal components analysis or
multidimensional scaling.
As Jaakkola and Haussler [2] have suggested, the gradient of the log-likelihood function induced by
a generative probabilistic model provides a natural comparison between samples. This is because the
partial derivatives in the direction of the model parameters describe how each parameter contributes
to the generation of that particular sample. Intuitively, two samples should be considered similar
from this perspective, if they influence the likelihood surface in a similar way. The natural inner
product for the statistical manifold induced by the log-likelihood gradient is given by the Fisher
information matrix [4]. The computation of this matrix is straightforward, but for practical purposes,
the Euclidean dot product ⟨· , ·⟩ provides a suitable substitute for the Fisher metric [2].
We first derive the Fisher kernel for the single mutagenetic tree model. The log-likelihood of observing a mutational pattern x ∈ {0, 1}^n under this model is
ℓ_x(θ) = ∑_{v : x_pa(v) = 1} [ x_v log(θ_v) + (1 − x_v) log(1 − θ_v) ].
Hence, the feature mapping of binary mutational patterns into Euclidean n-space,
Φ : C → R^n,   x ↦ ∇ℓ_x(θ) = ( ∂ℓ_x(θ)/∂θ_1 , . . . , ∂ℓ_x(θ)/∂θ_n ),
is given by the Fisher score consisting of the partial derivatives
∂ℓ_x(θ)/∂θ_w = θ_w^{−x_w} (θ_w − 1)^{x_w − 1} 0^{1 − x_pa(w)} =
  θ_w^{−1}         if (x_pa(w), x_w) = (1, 1),
  (θ_w − 1)^{−1}   if (x_pa(w), x_w) = (1, 0),
  0                if (x_pa(w), x_w) = (0, 0).
Thus, we can define the mutagenetic tree Fisher kernel as
K(x, x′) = ⟨∇ℓ_x(θ), ∇ℓ_{x′}(θ)⟩ = ∑_{v=1}^{n} θ_v^{−(x_v + x′_v)} (θ_v − 1)^{(x_v + x′_v) − 2} 0^{2 − (x_pa(v) + x′_pa(v))}.
For example, the Fisher kernels for the three mutagenetic trees on n = 2 genetic events are displayed
in Table 1.
To better understand the operation of the novel kernel, we rewrite the kernel function K as follows:
K(x, x′) = ∑_{v=1}^{n} κ(θ_v)_{(x_pa(v), x_v), (x′_pa(v), x′_v)},
[Figure 2 plot omitted: κ(t) versus t ∈ (0, 1) for the three non-zero entries, with legend (1,0),(1,0); (1,1),(1,1); and (1,0),(1,1) = (1,1),(1,0).]
Figure 2: Non-zero entries of the matrix κ(t) that defines the mutagenetic tree Fisher kernel. The three graphs are indexed in the same way as the matrix, namely by pairs ((x_pa(v), x_v), (x′_pa(v), x′_v)) denoting the value of two genotypes x and x′ at an edge (pa(v), v) of the mutagenetic tree. The graphs illustrate that the largest contributions stem from shared, unlikely mutations (positive effect, solid and dashed line) and from differing, likely or unlikely mutations (negative effect, dash-dot line).
with κ(t) defined as the 3×3 matrix indexed by the pairs (0, 0), (1, 0), (1, 1):

           (0,0)   (1,0)              (1,1)
  (0,0)      0       0                  0
  (1,0)      0     (t−1)^−2           t^−1(t−1)^−1
  (1,1)      0     t^−1(t−1)^−1       t^−2
The matrix κ(t) is indexed by pairs of pairs ((x_pa(v), x_v), (x′_pa(v), x′_v)). The non-zero entries of κ are displayed in Figure 2 as functions of the parameter t. An edge contributes strongly to the kernel value if the two genotypes agree on it, but the common event (occurrence or non-occurrence of the mutation) was unlikely (Figure 2, solid and dashed line). If the two genotypes disagree, the edge contributes negatively, especially for extreme parameters θ_v close to zero or one (Figure 2, dash-dot
line), which make one of the events very likely and the other very unlikely. Thus, the application of
the Fisher kernel idea to mutagenetic trees leads to a kernel that measures similarity of evolutionary
escape in a way that corresponds well to virological intuition.
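The closed form above translates directly into code. The following sketch (same hypothetical encoding as before) returns K(x, x′) for one tree; the 0^(...) factor appears as the parent-presence test:

```python
def tree_fisher_kernel(x, xp, parent, theta):
    """Single-tree Fisher kernel K(x, x') = <grad l_x(theta), grad l_x'(theta)>."""
    k = 0.0
    for v in range(1, len(x)):
        # edge v contributes only if the parent of v is present in both
        # genotypes, i.e. 0^(2 - (x_pa(v) + x'_pa(v))) equals one
        if x[parent[v]] == 1 and xp[parent[v]] == 1:
            s = x[v] + xp[v]  # 0, 1, or 2
            k += theta[v] ** (-s) * (theta[v] - 1.0) ** (s - 2)
    return k
```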
Due to the linear mixing process, extending the Fisher kernel from a single mutagenetic tree to a mixture model is straightforward. Let ℓ_x(λ, θ) = log Pr(x | λ, θ) be the log-likelihood function, and denote by
γ_l(x | λ, θ) = λ_l Pr(x | θ_l) / Pr(x | λ, θ)
the responsibility of tree component T_l for the observation x. Then the partial derivatives with respect to θ can be expressed in terms of the partials obtained for the single tree models, weighted by the responsibilities of the trees,
∂ℓ_x(λ, θ) / ∂θ_{l,w} = γ_l(x | λ, θ) · ∂ℓ_x(θ_l) / ∂θ_{l,w}.
Differentiation with respect to λ yields
∂ℓ_x(λ, θ) / ∂λ_l = [ Pr(x | θ_l) − Pr(x | θ_K) ] / Pr(x | λ, θ).
We obtain the mutagenetic trees mixture (MTreeMix) Fisher kernel
K(x, x′) = ⟨∇ℓ_x(λ, θ), ∇ℓ_{x′}(λ, θ)⟩
  = ∑_{l=1}^{K−1} [Pr(x | θ_l) − Pr(x | θ_K)] [Pr(x′ | θ_l) − Pr(x′ | θ_K)] / ( Pr(x | λ, θ) Pr(x′ | λ, θ) )
  + ∑_{l=1}^{K} ∑_{w=1}^{n} γ_l(x | λ, θ) γ_l(x′ | λ, θ) κ(θ_{l,w})_{(x_pa(w), x_w), (x′_pa(w), x′_w)}.
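Combining the helper sketches from above, the full mixture kernel can be written as follows (again our own illustrative code, not the MTreeMix implementation):

```python
def mtreemix_fisher_kernel(x, xp, trees, lambdas):
    """MTreeMix Fisher kernel K(x, x'), following the formula above."""
    K = len(trees)
    px = mixture_likelihood(x, trees, lambdas)
    pxp = mixture_likelihood(xp, trees, lambdas)
    # lambda-part: derivatives w.r.t. the free weights lambda_1 .. lambda_{K-1}
    pK_x = tree_likelihood(x, *trees[K - 1])
    pK_xp = tree_likelihood(xp, *trees[K - 1])
    k = 0.0
    for l in range(K - 1):
        pl_x = tree_likelihood(x, *trees[l])
        pl_xp = tree_likelihood(xp, *trees[l])
        k += (pl_x - pK_x) * (pl_xp - pK_xp) / (px * pxp)
    # theta-part: responsibility-weighted single-tree kernels
    for l, (parent, theta) in enumerate(trees):
        g_x = lambdas[l] * tree_likelihood(x, parent, theta) / px
        g_xp = lambdas[l] * tree_likelihood(xp, parent, theta) / pxp
        k += g_x * g_xp * tree_fisher_kernel(x, xp, parent, theta)
    return k
```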
4
Experimental results
In this section, we use the Fisher kernel derived from mutagenetic tree mixtures for predicting HIV
drug resistance from viral genotype. Briefly, resistance is the ability of a virus to replicate in the
presence of drug. The degree of resistance is usually communicated as a non-negative number. This
number indicates the fold-change increase in drug concentration that is necessary to inhibit viral
replication by 50%, as compared to a fully susceptible reference virus. Thus, higher fold-changes
correspond to increasing levels of resistance. We consider all fold-change values on a log10 scale.
Information on phenotypic resistance strongly affects treatment decisions, but the experimental procedures are too expensive and time-consuming for routine clinical diagnostics. Instead, at the time
of therapy failure, the genotypic makeup of the viral population is determined using standard sequencing methods, leaving the challenge of inferring the phenotypic implications from the observed
genotypic alterations. It is also desirable to minimize the number of sequence positions required for
reliable determination of drug resistance. With a small number of positions, sequencing could be
replaced by the much cheaper line-probe assay (LiPA) technology [12], which focuses on the determination of mutations at a limited number of pre-selected sites. This method could bring resistance
testing to resource-poor settings in which DNA sequencing is not affordable.
All approaches to this problem described to date are based on a direct correlation between genotype
and phenotype, without any further modelling involved. Application of the Fisher kernel to this task
is motivated by the hypothesis that the traces of evolution present in the data and modelled by mutagenetic trees mixture models can provide additional information, leading to improved predictive
performance. In a recent comparison of several statistical learning methods, support vector regression attained the highest average predictive performance across all drugs [13]. Accordingly, we have
chosen this best-performing method to compare to the novel kernel.
Specifically, our experimental setup is as follows. For each drug, we start with a genotype-phenotype
data set [14] of size 305 to 858 (Table 2, column 3). Based on a list of resistance mutations maintained by the International AIDS Society [15], we extract the residues listed in column 2. The
number indicates the position in the viral enzyme (reverse transcriptase for the first two groups of
drugs, and protease for the third group), and the amino acids following the number denote the mutations at the respective site that are considered resistance-associated. For example, the feature vector
for the drug zidovudine (ZDV) consists of six variables representing the reverse transcriptase mutations 41L, 67N, 70R, 210W, 215F or Y, and 219E or Q. In the naive indicator representation, a
mutational pattern within these six mutations is transformed to a binary vector of length six, each
entry encoding the presence or absence of the respective mutation.
The Fisher kernel requires a mutagenetic trees mixture model for each of the evaluated drugs. Using the MTreeMix software package¹, these models were estimated from an independent set of
sequences derived from patients failing a therapy that contained the specific drug of interest. In 100
replicates of ten-fold cross-validation for each drug model, we then recorded the squared correlation
coefficient (r²) of indicator variable-based versus Fisher kernel-based support vector regression.
Avoiding both costly double cross-validation with the limited amount of data and overfitting with
single cross-validation, we fixed standard parameters for both SVMs. As suggested by Jaakkola
and Haussler [2], the Fisher kernel may be combined with additional transformations. Thus, we
evaluated the standard kernels for both setups. For the indicator representation, the linear kernel
performed best, whereas the Fisher scores performed best when combined with a Gaussian RBF
kernel. We used these two kernels in the final comparison reported in Table 2.
¹ http://mtreemix.bioinf.mpi-sb.mpg.de
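For the discriminative step, the Fisher scores are simply treated as feature vectors. The following sketch illustrates the evaluation protocol with scikit-learn (our choice of library; the paper states only that standard SVR parameters were fixed) and placeholder data standing in for a real drug data set:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X_fisher = rng.normal(size=(300, 40))  # placeholder Fisher-score features
y = rng.normal(size=300)               # placeholder log10 fold-change values

svr = SVR(kernel="rbf")                # Gaussian RBF on the Fisher scores
pred = cross_val_predict(svr, X_fisher, y, cv=10)
r2 = np.corrcoef(y, pred)[0, 1] ** 2   # squared Pearson correlation, as in the paper
print("cross-validated r^2:", r2)
```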
The results displayed in columns 5 and 6 of Table 2 show the improvements attained via the Fisher
kernel method as estimated by the squared correlation coefficient, r². After correction for multiple
comparisons, the null hypothesis of equal mean was rejected (P < 0.01, Wilcoxon test) in 15 out of
17 cases, a ratio that is highly unlikely to occur by chance (P < 0.0025, binomial test). The most
drastic improvements were obtained for the drugs 3TC, NVP and NFV. Slight decreases were observed for ddC and APV. Interestingly, when we combined both feature vectors, the cross-validated
performance of the combined predictor was consistently at least as good as the best individual predictor (data not shown). We obtained similar results when evaluating performance by the mean
squared error instead of the correlation coefficient (data not shown).
Table 2: Comparison of support vector regression performance for the MTreeMix Fisher kernel (F )
versus a naive amino acid indicator (I) representation. The drugs (first column) are grouped into the
three classes of nucleoside/nucleotide reverse transcriptase inhibitors (rows 1–7), nonnucleoside reverse transcriptase inhibitors (rows 8–10), and protease inhibitors (rows 11–17). MTreeMix models
were estimated based on the mutations listed in the second column. The third column indicates the
number N of available genotype-phenotype pairs, and the number K of trees in the mixture model is
shown in column 4. Columns 5 and 6 indicate the squared correlation coefficients, averaged across
100 replicates of 10-fold cross-validation. P-values (last column) are obtained from Wilcoxon rank
sum tests, correcting for multiple testing using the Benjamini-Hochberg method.
DRUG | MUTATIONS | N | K | r²_F | r²_I | log10 P
ZDV | 41L, 67N, 70R, 210W, 215FY, 219EQ | 856 | 5 | 0.61 | 0.57 | < −15.0
3TC | 44D, 118I, 184IV | 817 | 5 | 0.71 | 0.64 | < −15.0
ddI | 65R, 67N, 70R, 74V, 184V, 210W, 215FY, 219EQ | 858 | 4 | 0.28 | 0.24 | < −15.0
ddC | 41L, 65R, 67N, 70R, 74V, 184V | 536 | 2 | 0.25 | 0.26 | −0.3
d4T | 41L, 67N, 70R, 75TMSA, 210W, 215YF, 219QE | 857 | 4 | 0.22 | 0.21 | −2.7
ABC | 41L, 65R, 67N, 70R, 74V, 115F, 184V, 210W, 215YF | 846 | 7 | 0.57 | 0.55 | −9.0
TDF | 41L, 65R, 67N, 70R, 210W, 215YF, 219QE | 527 | 3 | 0.45 | 0.43 | −7.0
NVP | 100I, 103N, 106A, 108I, 181CI, 188CLH, 190A | 857 | 5 | 0.58 | 0.49 | < −15.0
EFV | 100I, 103N, 108I, 181CI, 188L, 190SA | 843 | 4 | 0.60 | 0.56 | < −15.0
DLV | 103N, 181C | 856 | 2 | 0.49 | 0.48 | −1.7
IDV | 10IRV, 20MR, 24I, 32I, 36I, 46IL, 54V, 71VT, 73SA, 77I, 82AFT, 84V, 90M | 851 | 4 | 0.65 | 0.63 | −14.3
SQV | 10IRV, 48V, 54VL, 71VT, 73S, 77I, 82A, 84V, 90M | 854 | 4 | 0.68 | 0.66 | −8.6
RTV | 10FIRV, 20MR, 24I, 32I, 33F, 36I, 46IL, 54VL, 71VT, 77I, 82AFTS, 84V, 90M | 855 | 4 | 0.77 | 0.75 | −12.0
NFV | 10FI, 30N, 36I, 46IL, 71VT, 77I, 82AFTS, 84V, 88DS | 853 | 6 | 0.62 | 0.55 | < −15.0
APV | 10FIRV, 32I, 46IL, 47V, 50V, 54LVM, 73S, 84V, 90M | 665 | 3 | 0.58 | 0.59 | −2.0
LPV | 10FIRV, 20MR, 24I, 32I, 33F, 46IL, 47V, 50V, 53L, 54LV, 63P, 71VT, 73S, 82AFTS, 84V, 90M | 507 | 5 | 0.73 | 0.69 | < −15.0
ATV | 32I, 46I, 50L, 54L, 71V, 73S, 82A, 84V, 88S, 90M | 305 | 2 | 0.54 | 0.52 | −2.4
5
Conclusions
The Fisher kernel derived in this paper allows for leveraging stochastic models of HIV evolution
in many kernel-based scenarios. To our knowledge, this is the first study in which a probabilistic model tailored to a specific biological mechanism (namely, the evolution of drug resistance) is
exploited in a discriminative context. Using the example of inferring drug resistance from viral
genotype, we showed that significant improvements in predictive performance can be obtained for
almost all currently available antiretroviral drugs. These results provide strong incentive for further
exploitation of evolutionary models in clinical decision making. Moreover, they also underline the
potential benefits from integrating several sources of data (genotype-phenotype, evolutionary). The
high correlation that can be observed with a relatively small number of mutations was unexpected
and suggests that reliable resistance predictions can also be obtained on the basis of LiPA assays
which are much cheaper than standard sequencing technologies. While our choice of mutations
was based on a selection from the literature, an interesting problem would be to design dedicated
LiPA assays containing a set of mutations that allow for optimal prediction performance in this
generative-discriminative setting. Finally, mixtures of mutagenetic trees have already been applied
in other contexts, for example to model progressive chromosomal alterations in cancer [16], and we
expect kernel methods to play an important role in this context, too.
Acknowledgments
N.B. was supported by the Deutsche Forschungsgemeinschaft (BE 3217/1-1), and T.S. by the German Academic Exchange Service (D/06/41866). T.S. would like to thank Thomas Lengauer for his
support and advice.
References
[1] B. Schölkopf, K. Tsuda, and J.-P. Vert, editors. Kernel Methods in Computational Biology. MIT Press, Cambridge, MA, 2004.
[2] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In M. J. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 487–493. MIT Press, Cambridge, MA, 1999.
[3] K. Tsuda, M. Kawanabe, G. Rätsch, S. Sonnenburg, and K. Müller. A new discriminative kernel from probabilistic models. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 977–984. MIT Press, Cambridge, MA, 2002.
[4] S. Amari and H. Nagaoka. Methods of Information Geometry. American Mathematical Society, Oxford University Press, 2000.
[5] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 18 Suppl 1:S268–S275, 2002.
[6] J. Lafferty and G. Lebanon. Information diffusion kernels. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 375–382. MIT Press, Cambridge, MA, 2003.
[7] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. Journal of Machine Learning Research, 5:819–844, July 2004.
[8] N. Beerenwinkel, T. Sing, T. Lengauer, J. Rahnenführer, K. Roomp, I. Savenkov, R. Fischer, D. Hoffmann, J. Selbig, K. Korn, H. Walter, T. Berg, P. Braun, G. Fätkenheuer, M. Oette, J. Rockstroh, B. Kupfer, R. Kaiser, and M. Däumer. Computational methods for the design of effective therapies against drug resistant HIV strains. Bioinformatics, 21(21):3943–3950, Sep 2005.
[9] F. Clavel and A. J. Hance. HIV drug resistance. N Engl J Med, 350(10):1023–1035, Mar 2004.
[10] R. W. Shafer and J. M. Schapiro. Drug resistance and antiretroviral drug development. J Antimicrob Chemother, 55(6):817–820, Jun 2005.
[11] N. Beerenwinkel, J. Rahnenführer, M. Däumer, D. Hoffmann, R. Kaiser, J. Selbig, and T. Lengauer. Learning multiple evolutionary pathways from cross-sectional data. J Comput Biol, 12(6):584–598, 2005.
[12] J. C. Schmit, L. Ruiz, L. Stuyver, K. Van Laethem, I. Vanderlinden, T. Puig, R. Rossau, J. Desmyter, E. De Clercq, B. Clotet, and A. M. Vandamme. Comparison of the LiPA HIV-1 RT test, selective PCR and direct solid phase sequencing for the detection of HIV-1 drug resistance mutations. J Virol Methods, 73(1):77–82, Jul 1998.
[13] M. Rabinowitz, L. Myers, M. Banjevic, A. Chan, J. Sweetkind-Singer, J. Haberer, K. McCann, and R. Wolkowicz. Accurate prediction of HIV-1 drug response from the reverse transcriptase and protease amino acid sequences using sparse models created by convex optimization. Bioinformatics, 22(5):541–549, Mar 2006.
[14] H. Walter, B. Schmidt, K. Korn, A. M. Vandamme, T. Harrer, and K. Überla. Rapid, phenotypic HIV-1 drug sensitivity assay for protease and reverse transcriptase inhibitors. J. Clin. Virol., 13:71–80, 1999.
[15] V. A. Johnson, F. Brun-Vezinet, B. Clotet, B. Conway, D. R. Kuritzkes, D. Pillay, J. M. Schapiro, A. Telenti, and D. D. Richman. Update of the drug resistance mutations in HIV-1: Fall 2005. Topics in HIV Medicine, 13(4):125–131, 2005.
[16] J. Rahnenführer, N. Beerenwinkel, W. A. Schulz, C. Hartmann, A. von Deimling, B. Wullich, and T. Lengauer. Estimating cancer survival and clinical outcome based on genetic tumor progression scores. Bioinformatics, 21(10):2438–2446, May 2005.
Efficient sparse coding algorithms
Honglak Lee
Alexis Battle
Rajat Raina
Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
Abstract
Sparse coding provides a class of algorithms for finding succinct representations
of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a
very difficult computational problem. In this paper, we present efficient sparse
coding algorithms that are based on iteratively solving two convex optimization
problems: an L1 -regularized least squares problem and an L2 -constrained least
squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding,
allowing us to learn larger sparse codes than possible with previously described
algorithms. We apply these algorithms to natural images and demonstrate that the
inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two
phenomena in V1 neurons.
1
Introduction
Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given
only unlabeled input data, it learns basis functions that capture higher-level features in the data.
When a sparse coding algorithm is applied to natural images, the learned bases resemble the receptive fields of neurons in the visual cortex [1, 2]; moreover, sparse coding produces localized bases
when applied to other natural stimuli such as speech and video [3, 4]. Unlike some other unsupervised learning techniques such as PCA, sparse coding can be applied to learning overcomplete basis
sets, in which the number of bases is greater than the input dimension. Sparse coding can also model
inhibition between the bases by sparsifying their activations. Similar properties have been observed
in biological neurons, thus making sparse coding a plausible model of the visual cortex [2, 5].
Despite the rich promise of sparse coding models, we believe that their development has been hampered by their expensive computational cost. In particular, learning large, highly overcomplete
representations has been extremely expensive. In this paper, we develop a class of efficient sparse
coding algorithms that are based on alternating optimization over two subsets of the variables. The
optimization problems over each of the subsets of variables are convex; in particular, the optimization over the first subset is an L1 -regularized least squares problem; the optimization over the second subset of variables is an L2 -constrained least squares problem. We describe each algorithm
and empirically analyze their performance. Our method allows us to efficiently learn large overcomplete bases from natural images. We demonstrate that the resulting learned bases exhibit (i)
end-stopping [6] and (ii) modulation by stimuli outside the classical receptive field (nCRF surround
suppression) [7]. Thus, sparse coding may also provide a partial explanation for these phenomena in
V1 neurons. Further, in related work [8], we show that the learned succinct representation captures
higher-level features that can then be applied to supervised classification tasks.
2
Preliminaries
The goal of sparse coding is to represent input vectors approximately as a weighted linear combination of a small number of (unknown) "basis vectors." These basis vectors thus capture high-level patterns in the input data. Concretely, each input vector ξ ∈ R^k is succinctly represented using basis vectors b_1, . . . , b_n ∈ R^k and a sparse vector of weights or "coefficients" s ∈ R^n such that ξ ≈ ∑_j b_j s_j. The basis set can be overcomplete (n > k), and can thus capture a large number of patterns in the input data.
Sparse coding is a method for discovering good basis vectors automatically using only unlabeled data. The standard generative model assumes that the reconstruction error ξ − ∑_j b_j s_j is distributed as a zero-mean Gaussian distribution with covariance σ²I. To favor sparse coefficients, the prior distribution for each coefficient s_j is defined as P(s_j) ∝ exp(−β φ(s_j)), where φ(·) is a sparsity function and β is a constant. For example, we can use one of the following:

  φ(s_j) = ‖s_j‖_1            (L1 penalty function)
         = (s_j² + ε)^{1/2}    (epsilonL1 penalty function)        (1)
         = log(1 + s_j²)       (log penalty function).
In this paper, we will use the L1 penalty unless otherwise mentioned; L1 regularization is known to
produce sparse coefficients and can be robust to irrelevant features [9].
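For reference, the three candidate penalties of equation (1) in code (a trivial sketch; eps is the ε of the epsilonL1 penalty):

```python
import numpy as np

def l1_penalty(s):          return np.abs(s)            # L1
def eps_l1_penalty(s, eps): return np.sqrt(s**2 + eps)  # epsilonL1
def log_penalty(s):         return np.log1p(s**2)       # log
```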
Consider a training set of m input vectors ξ^{(1)}, . . . , ξ^{(m)}, and their (unknown) corresponding coefficients s^{(1)}, . . . , s^{(m)}. The maximum a posteriori estimate of the bases and coefficients, assuming a uniform prior on the bases, is the solution to the following optimization problem:¹

  minimize_{ {b_j}, {s^{(i)}} }  ∑_{i=1}^{m} (1/(2σ²)) ‖ξ^{(i)} − ∑_{j=1}^{n} b_j s_j^{(i)}‖² + β ∑_{i=1}^{m} ∑_{j=1}^{n} φ(s_j^{(i)})        (2)
  subject to  ‖b_j‖² ≤ c,  ∀j = 1, . . . , n.
This problem can be written more concisely in matrix form: let X ∈ R^{k×m} be the input matrix (each column is an input vector), let B ∈ R^{k×n} be the basis matrix (each column is a basis vector), and let S ∈ R^{n×m} be the coefficient matrix (each column is a coefficient vector). Then, the optimization problem above can be written as:

  minimize_{B,S}  (1/(2σ²)) ‖X − BS‖_F² + β ∑_{i,j} φ(S_{i,j})        (3)
  subject to  ∑_i B_{i,j}² ≤ c,  ∀j = 1, . . . , n.
Assuming the use of either L1 penalty or epsilonL1 penalty as the sparsity function, the optimization
problem is convex in B (while holding S fixed) and convex in S (while holding B fixed),² but
not convex in both simultaneously. In this paper, we iteratively optimize the above objective by
alternatingly optimizing with respect to B (bases) and S (coefficients) while holding the other fixed.
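Schematically, the alternating procedure looks as follows (a sketch; the two subproblem solvers are passed in as callables because they are the subject of the rest of the paper, and the names are ours):

```python
import numpy as np

def sparse_coding(X, n_bases, gamma, c, solve_coeffs, solve_bases, n_iter=20, seed=0):
    """Alternating minimization for problem (3).

    solve_coeffs(B, X, gamma) -> S solves the L1-regularized least squares
    subproblem (e.g., feature-sign search per column, Section 3);
    solve_bases(X, S, c) -> B solves the L2-constrained least squares
    subproblem (e.g., via the Lagrange dual).
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(X.shape[0], n_bases))
    B /= np.linalg.norm(B, axis=0)      # feasible, unit-norm initialization
    S = None
    for _ in range(n_iter):
        S = solve_coeffs(B, X, gamma)   # convex in S with B fixed
        B = solve_bases(X, S, c)        # convex in B with S fixed
    return B, S
```

With the L1 penalty, the coefficient step uses the effective weight γ = 2σ²β, cf. equation (4) below.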
For learning the bases B, the optimization problem is a least squares problem with quadratic constraints. There are several approaches to solving this problem, such as generic convex optimization
solvers (e.g., QCQP solver) as well as gradient descent using iterative projections [10]. However,
generic convex optimization solvers are too slow to be applicable to this problem, and gradient descent using iterative projections often shows slow convergence. In this paper, we derive and solve
the Lagrange dual, and show that this approach is much more efficient than gradient-based methods.
For learning the coefficients S, the optimization problem is equivalent to a regularized least squares
problem. For many differentiable sparsity functions, we can use gradient-based methods (e.g., conjugate gradient). However, for the L1 sparsity function, the objective is not continuously differentiable and the most straightforward gradient-based methods are difficult to apply. In this case, the
following approaches have been used: generic QP solvers (e.g., CVX), Chen et al.'s interior point
method [11], a modification of least angle regression (LARS) [12], or grafting [13]. In this paper,
we present a new algorithm for solving the L1 -regularized least squares problem and show that it is
more efficient for learning sparse coding bases.
3
L1-regularized least squares: The feature-sign search algorithm
Consider solving the optimization problem (2) with an L1 penalty over the coefficients {s_j^{(i)}} while keeping the bases fixed. This problem can be solved by optimizing over each s^{(i)} individually:

  minimize_{s^{(i)}}  ‖ξ^{(i)} − ∑_j b_j s_j^{(i)}‖² + (2σ²β) ∑_j |s_j^{(i)}|.        (4)

Notice now that if we know the signs (positive, zero, or negative) of the s_j^{(i)}'s at the optimal value, we can replace each of the terms |s_j^{(i)}| with either s_j^{(i)} (if s_j^{(i)} > 0), −s_j^{(i)} (if s_j^{(i)} < 0), or 0 (if s_j^{(i)} = 0).
Considering only nonzero coefficients, this reduces (4) to a standard, unconstrained quadratic optimization problem (QP), which can be solved analytically and efficiently. Our algorithm, therefore, tries to search for, or "guess," the signs of the coefficients s_j^{(i)}; given any such guess, we can efficiently solve the resulting unconstrained QP. Further, the algorithm systematically refines the guess if it turns out to be initially incorrect.
¹ We impose a norm constraint for bases: ‖b_j‖² ≤ c, ∀j = 1, . . . , n, for some constant c. Norm constraints are necessary because, otherwise, there always exists a linear transformation of the b_j's and s^{(i)}'s which keeps ∑_{j=1}^{n} b_j s_j^{(i)} unchanged, while making the s_j^{(i)}'s approach zero. Based on similar motivation, Olshausen and Field used a scheme which retains the variation of coefficients for every basis at the same level [1, 2].
² A log (non-convex) penalty was used in [1]; thus, gradient-based methods can get stuck in local optima.
To simplify notation, we present the algorithm for the following equivalent optimization problem:
  minimize_x  f(x) ≡ ‖y − Ax‖² + γ‖x‖_1,        (5)
where γ is a constant. The feature-sign search algorithm is shown in Algorithm 1. It maintains an
active set of potentially nonzero coefficients and their corresponding signs (all other coefficients must be zero) and systematically searches for the optimal active set and coefficient signs. The algorithm proceeds in a series of "feature-sign steps": on each step, it is given a current guess for the active set and the signs, and it computes the analytical solution x̂_new to the resulting unconstrained QP; it then updates the solution, the active set and the signs using an efficient discrete line search between the current solution and x̂_new (details in Algorithm 1).³ We will show that each such step reduces the objective f(x), and that the overall algorithm always converges to the optimal solution.
Algorithm 1  Feature-sign search algorithm

1. Initialize x := \vec{0}, \theta := \vec{0}, and active set := {}, where \theta_i \in \{-1, 0, 1\} denotes sign(x_i).

2. From zero coefficients of x, select i = \arg\max_i |\partial \|y - Ax\|^2 / \partial x_i|.
   Activate x_i (add i to the active set) only if it locally improves the objective, namely:
     If \partial \|y - Ax\|^2 / \partial x_i > \gamma, then set \theta_i := -1, active set := {i} \cup active set.
     If \partial \|y - Ax\|^2 / \partial x_i < -\gamma, then set \theta_i := 1, active set := {i} \cup active set.

3. Feature-sign step:
   Let \hat{A} be a submatrix of A that contains only the columns corresponding to the active set.
   Let \hat{x} and \hat{\theta} be subvectors of x and \theta corresponding to the active set.
   Compute the analytical solution to the resulting unconstrained QP
   (minimize_{\hat{x}} \|y - \hat{A}\hat{x}\|^2 + \gamma \hat{\theta}^T \hat{x}):
     \hat{x}_{new} := (\hat{A}^T \hat{A})^{-1} (\hat{A}^T y - \gamma \hat{\theta}/2).
   Perform a discrete line search on the closed line segment from \hat{x} to \hat{x}_{new}:
     Check the objective value at \hat{x}_{new} and at all points where any coefficient changes sign.
     Update \hat{x} (and the corresponding entries in x) to the point with the lowest objective value.
   Remove zero coefficients of \hat{x} from the active set and update \theta := sign(x).

4. Check the optimality conditions:
   (a) Optimality condition for nonzero coefficients: \partial \|y - Ax\|^2 / \partial x_j + \gamma sign(x_j) = 0, \forall x_j \ne 0.
       If condition (a) is not satisfied, go to Step 3 (without any new activation); else check condition (b).
   (b) Optimality condition for zero coefficients: |\partial \|y - Ax\|^2 / \partial x_j| \le \gamma, \forall x_j = 0.
       If condition (b) is not satisfied, go to Step 2; otherwise return x as the solution.
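To make the procedure concrete, the following is a minimal NumPy sketch of Algorithm 1. It is an illustration under stated assumptions, not the authors' implementation: the function and variable names, the tolerance, and the iteration cap are my own choices, and the singular-\hat{A}^T\hat{A} case of footnote 3 is not handled.

import numpy as np

def feature_sign_search(A, y, gamma, max_iter=1000, tol=1e-9):
    # Minimize f(x) = ||y - A x||^2 + gamma * ||x||_1 (sketch of Algorithm 1).
    n = A.shape[1]
    x = np.zeros(n)
    theta = np.zeros(n)            # sign vector: theta_i = sign(x_i)
    active = []                    # active set of potentially nonzero indices
    AtA, Aty = A.T @ A, A.T @ y

    def obj(z):                    # full objective f(z)
        return float(np.sum((y - A @ z) ** 2) + gamma * np.abs(z).sum())

    for _ in range(max_iter):
        g = 2.0 * (AtA @ x - Aty)  # gradient of ||y - A x||^2

        # Step 4(a): optimality of the nonzero (active) coefficients
        cond_a = all(abs(g[i] + gamma * np.sign(x[i])) <= tol for i in active)
        if cond_a:
            zeros = [i for i in range(n) if i not in active]
            # Step 4(b): optimality of the zero coefficients
            if all(abs(g[i]) <= gamma + tol for i in zeros):
                return x
            # Step 2: activate the zero coefficient that most violates (b)
            i = max(zeros, key=lambda j: abs(g[j]))
            theta[i] = -1.0 if g[i] > gamma else 1.0
            active.append(i)

        # Step 3: feature-sign step on the active set
        idx = np.array(active)
        Ah = A[:, idx]
        x_hat = x[idx]
        x_new = np.linalg.solve(Ah.T @ Ah, Ah.T @ y - gamma * theta[idx] / 2.0)

        # discrete line search: x_new plus every sign-change point in between
        candidates = [x_new]
        d = x_new - x_hat
        for j in range(len(idx)):
            if np.sign(x_new[j]) != np.sign(x_hat[j]) and d[j] != 0.0:
                t = -x_hat[j] / d[j]
                if 0.0 < t < 1.0:
                    candidates.append(x_hat + t * d)

        def embed(z):              # place active-set values back into R^n
            w = np.zeros(n)
            w[idx] = z
            return w

        x = embed(min(candidates, key=lambda z: obj(embed(z))))
        active = [i for i in active if abs(x[i]) > tol]   # drop zeroed coefficients
        x[np.abs(x) <= tol] = 0.0
        theta = np.sign(x)
    return x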
To sketch the proof of convergence, let a coefficient vector x be called consistent with a given active set and sign vector \theta if the following two conditions hold for all i: (i) if i is in the active set, then sign(x_i) = \theta_i, and (ii) if i is not in the active set, then x_i = 0.
Lemma 3.1. Consider optimization problem (5) augmented with the additional constraint that x is
consistent with a given active set and sign vector. Then, if the current coefficients xc are consistent
with the active set and sign vector, but are not optimal for the augmented problem at the start of
Step 3, the feature-sign step is guaranteed to strictly reduce the objective.
Proof sketch. Let \hat{x}_c be the subvector of x_c corresponding to coefficients in the given active set. In Step 3, consider the smooth quadratic function \tilde{f}(\hat{x}) \equiv \|y - \hat{A}\hat{x}\|^2 + \gamma \hat{\theta}^T \hat{x}. Since \hat{x}_c is not an optimal point of \tilde{f}, we have \tilde{f}(\hat{x}_{new}) < \tilde{f}(\hat{x}_c). Now consider the two possible cases: (i) if \hat{x}_{new} is consistent with the given active set and sign vector, updating \hat{x} := \hat{x}_{new} strictly decreases the objective; (ii) if \hat{x}_{new} is not consistent with the given active set and sign vector, let \hat{x}_d be the first zero-crossing point (where any coefficient changes its sign) on the line segment from \hat{x}_c to \hat{x}_{new}; then clearly \hat{x}_c \ne \hat{x}_d,
3 A technical detail has been omitted from the algorithm for simplicity, as we have never observed it in practice. In Step 3 of the algorithm, in case \hat{A}^T \hat{A} becomes singular, we can check if q \equiv \hat{A}^T y - \gamma\hat{\theta}/2 \in R(\hat{A}^T \hat{A}). If yes, we can replace the inverse with the pseudoinverse to minimize the unconstrained QP; otherwise, we can update \hat{x} to the first zero-crossing along any direction z such that z \in N(\hat{A}^T \hat{A}), z^T q \ne 0. Both these steps are still guaranteed to reduce the objective; thus, the proof of convergence is unchanged.
and \tilde{f}(\hat{x}_d) < \tilde{f}(\hat{x}_c) by convexity of \tilde{f}; thus we finally have f(\hat{x}_d) = \tilde{f}(\hat{x}_d) < \tilde{f}(\hat{x}_c) = f(\hat{x}_c).4 Therefore, the discrete line search described in Step 3 ensures a decrease in the objective value.
Lemma 3.2. Consider optimization problem (5) augmented with the additional constraint that x
is consistent with a given active set and sign vector. If the coefficients xc at the start of Step 2 are
optimal for the augmented problem, but are not optimal for problem (5), the feature-sign step is
guaranteed to strictly reduce the objective.
Proof sketch. Since x_c is optimal for the augmented problem, it satisfies optimality condition (a), but not (b); thus, in Step 2, there is some i such that |\partial \|y - Ax\|^2 / \partial x_i| > \gamma; this i-th coefficient is activated, and i is added to the active set. In Step 3, consider the smooth quadratic function \tilde{f}(\hat{x}) \equiv \|y - \hat{A}\hat{x}\|^2 + \gamma \hat{\theta}^T \hat{x}. Observe that (i) since a Taylor expansion of \tilde{f} around \hat{x} = \hat{x}_c has a first-order term in x_i only (using condition 4(a) for the other coefficients), any direction that locally decreases \tilde{f}(\hat{x}) must be consistent with the sign of the activated x_i, and, (ii) since \hat{x}_c is not an optimal point of \tilde{f}(\hat{x}), \tilde{f}(\hat{x}) must decrease locally near \hat{x} = \hat{x}_c along the direction from \hat{x}_c to \hat{x}_{new}. From (i) and (ii), the line search direction \hat{x}_c to \hat{x}_{new} must be consistent with the sign of the activated x_i. Finally, since \tilde{f}(\hat{x}) = f(\hat{x}) when \hat{x} is consistent with the active set, either \hat{x}_{new} is consistent, or the first zero-crossing from \hat{x}_c to \hat{x}_{new} has a lower objective value (similar argument to Lemma 3.1).
Theorem 3.3. The feature-sign search algorithm converges to a global optimum of the optimization
problem (5) in a finite number of steps.
Proof sketch. From the above lemmas, it follows that the feature-sign steps always strictly reduce
the objective f(x). At the start of Step 2, x either satisfies optimality condition 4(a) or is \vec{0}; in either case, x is consistent with the current active set and sign vector, and must be optimal for the augmented problem described in the above lemmas. Since the number of all possible active sets and coefficient signs is finite, and since no pair can be repeated (because the objective value is strictly decreasing), the outer loop of Steps 2-4(b) cannot repeat indefinitely. Now, it suffices to show that a finite number of steps is needed to reach Step 4(b) from Step 2. This is true because the inner loop of Steps 3-4(a) always results in either an exit to Step 4(b) or a decrease in the size of the active set.
Note that initialization with arbitrary starting points requires a small modification: after initializing \theta and the active set with a given initial solution, we need to start with Step 3 instead of Step 1.5 When the initial solution is near the optimal solution, feature-sign search can often obtain the optimal solution more quickly than when starting from \vec{0}.
4  Learning bases using the Lagrange dual
In this subsection, we present a method for solving optimization problem (3) over bases B given
fixed coefficients S. This reduces to the following problem:
    minimize_B  \|X - BS\|_F^2    (6)
    subject to  \sum_{i=1}^{k} B_{i,j}^2 \le c,  \forall j = 1, ..., n.
This is a least squares problem with quadratic constraints. In general, this constrained optimization
problem can be solved using gradient descent with iterative projection [10]. However, it can be
much more efficiently solved using a Lagrange dual. First, consider the Lagrangian:
    L(B, \vec{\lambda}) = trace((X - BS)^T (X - BS)) + \sum_{j=1}^{n} \lambda_j (\sum_{i=1}^{k} B_{i,j}^2 - c),    (7)
where each \lambda_j \ge 0 is a dual variable. Minimizing over B analytically, we obtain the Lagrange dual:

    D(\vec{\lambda}) = \min_B L(B, \vec{\lambda}) = trace(X^T X - XS^T (SS^T + \Lambda)^{-1} (XS^T)^T - c\Lambda),    (8)

where \Lambda = diag(\vec{\lambda}). The gradient and Hessian of D(\vec{\lambda}) are computed as follows:

    \partial D(\vec{\lambda}) / \partial \lambda_i = \|XS^T (SS^T + \Lambda)^{-1} e_i\|^2 - c,    (9)

    \partial^2 D(\vec{\lambda}) / \partial \lambda_i \partial \lambda_j = -2 ((SS^T + \Lambda)^{-1} (XS^T)^T XS^T (SS^T + \Lambda)^{-1})_{i,j} ((SS^T + \Lambda)^{-1})_{i,j},    (10)
4 To simplify notation, we reuse f(·) even for subvectors such as \hat{x}; in the case of f(\hat{x}), we consider only the coefficients in \hat{x} as variables, and all coefficients not in the subvector can be assumed constant at zero.
5 If the algorithm terminates without reaching Step 2, we are done; otherwise, once the algorithm reaches Step 2, the same argument in the proof applies.
                  natural image     speech            stereo            video
                  196×512           500×200           288×400           512×200
Feature-sign      2.16 (0)          0.58 (0)          1.72 (0)          0.83 (0)
LARS              3.62 (0)          1.28 (0)          4.02 (0)          1.98 (0)
Grafting          13.39 (7e-4)      4.69 (4e-6)       11.12 (5e-4)      5.88 (2e-4)
Chen et al.'s     88.61 (8e-5)      47.49 (8e-5)      66.62 (3e-4)      47.00 (2e-4)
QP solver (CVX)   387.90 (4e-9)     1,108.71 (1e-8)   538.72 (7e-9)     1,219.80 (1e-8)
Table 1: The running time in seconds (and the relative error in parentheses) for coefficient learning algorithms applied to different natural stimulus datasets. For each dataset, the input dimension k and the number of bases n are specified as k × n. The relative error for an algorithm was defined as (f_obj - f*)/f*, where f_obj is the final objective value attained by that algorithm, and f* is the best objective value attained among all the algorithms.
where e_i \in R^n is the i-th unit vector. Now, we can optimize the Lagrange dual (8) using Newton's method or conjugate gradient. After maximizing D(\vec{\lambda}), we obtain the optimal bases B as follows:

    B^T = (SS^T + \Lambda)^{-1} (XS^T)^T.    (11)

The advantage of solving the dual is that it uses significantly fewer optimization variables than the primal. For example, optimizing B \in R^{1,000 \times 1,000} requires only 1,000 dual variables. Note that the dual formulation is independent of the sparsity function (e.g., L1, epsilonL1, or other sparsity function), and can be extended to other similar models such as "topographic" cells [14].6
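To illustrate the dual computation, here is a small NumPy/SciPy sketch of Eqs. (8), (9) and (11). The paper optimizes the dual with Newton's method or conjugate gradient; this sketch substitutes SciPy's bounded L-BFGS-B purely for convenience, and all names here are assumptions of mine.

import numpy as np
from scipy.optimize import minimize

def learn_bases_dual(X, S, c):
    # Solve min_B ||X - B S||_F^2 s.t. sum_i B_ij^2 <= c via the dual (8)-(11).
    n = S.shape[0]
    XSt = X @ S.T                       # X S^T
    SSt = S @ S.T                       # S S^T
    trXX = float(np.sum(X * X))         # trace(X^T X)

    def neg_dual(lam):                  # -D(lambda); minimizing this maximizes D
        M = np.linalg.inv(SSt + np.diag(lam))
        return -(trXX - np.trace(XSt @ M @ XSt.T) - c * lam.sum())

    def neg_dual_grad(lam):             # negative of the gradient in Eq. (9)
        M = np.linalg.inv(SSt + np.diag(lam))
        G = XSt @ M                     # column i is X S^T (S S^T + Lam)^{-1} e_i
        return -((G * G).sum(axis=0) - c)

    res = minimize(neg_dual, np.ones(n), jac=neg_dual_grad,
                   bounds=[(0.0, None)] * n, method="L-BFGS-B")
    Lam = np.diag(res.x)
    return np.linalg.solve(SSt + Lam, XSt.T).T   # Eq. (11): B^T = (SS^T+Lam)^{-1}(XS^T)^T

Consistent with the remark above, a problem with B in R^{1,000 x 1,000} involves only 1,000 dual variables here.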
5  Experimental results
5.1  The feature-sign search algorithm
We evaluated the performance of our algorithms on four natural stimulus datasets: natural images,
speech, stereo images, and natural image videos. All experiments were conducted on a Linux machine with AMD Opteron 2GHz CPU and 2GB RAM.
First, we evaluated the feature-sign search algorithm for learning coefficients with the L1 sparsity function. We compared the running time and accuracy to previous state-of-the-art algorithms: a generic QP solver,7 a modified version of LARS [12] with early stopping,8 grafting [13], and Chen et al.'s interior point method [11];9 all the algorithms were implemented in MATLAB. For each dataset, we used a test set of 100 input vectors and measured the running time10 and the objective function at convergence. Table 1 shows both the running time and accuracy (measured by the relative error in the final objective value) of different coefficient learning algorithms. Over all datasets, feature-sign search achieved the best objective values as well as the shortest running times. Feature-sign search and modified LARS produced more accurate solutions than the other methods.11 Feature-sign search was an order of magnitude faster than both Chen et al.'s algorithm and the generic QP solver, and it was also significantly faster than modified LARS and grafting. Moreover, feature-sign search has the crucial advantage that it can be initialized with arbitrary starting coefficients (unlike LARS); we will demonstrate that feature-sign search leads to even further speedup over LARS when applied to iterative coefficient learning.
5.2 Total time for learning bases
The Lagrange dual method for one basis learning iteration was much faster than gradient descent
with iterative projections, and we omit discussion of those results due to space constraints. Below,
we directly present results for the overall time taken by sparse coding for learning bases from natural
stimulus datasets.
6 The sparsity penalty for topographic cells can be written as \sum_l \phi((\sum_{j \in cell l} s_j^2)^{1/2}), where \phi(·) is a sparsity function and cell l is a topographic cell (e.g., a group of "neighboring" bases in a 2-D torus representation).
7 We used the CVX package available at http://www.stanford.edu/~boyd/cvx/.
8 LARS (with LASSO modification) provides the entire regularization path with discrete L1-norm constraints; we further modified the algorithm so that it stops upon finding the optimal solution of Equation (4).
9 MATLAB code is available at http://www-stat.stanford.edu/~atomizer/.
10 For each dataset/algorithm combination, we report the average running time over 20 trials.
11 A general-purpose QP package (such as CVX) does not explicitly take the sparsity of the solutions into account. Thus, its solution tends to have many very small nonzero coefficients; as a result, the objective values obtained from CVX were always slightly worse than those obtained from feature-sign search or LARS.
L1 sparsity function
Coeff. / Basis learning      natural image    speech      stereo      video
Feature-sign / LagDual       260.0            248.2       438.2       186.6
Feature-sign / GradDesc      1,093.9          1,280.3     950.6       933.2
LARS / LagDual               666.7            1,697.7     1,342.7     1,254.6
LARS / GradDesc              13,085.1         17,219.0    12,174.6    11,022.8
Grafting / LagDual           720.5            1,025.5     3,006.0     1,340.5
Grafting / GradDesc          2,767.9          8,670.8     6,203.3     3,681.9

epsilonL1 sparsity function
Coeff. / Basis learning      natural image    speech      stereo      video
ConjGrad / LagDual           1,286.6          544.4       1,942.4     1,461.9
ConjGrad / GradDesc          5,047.3          11,939.5    3,435.1     2,479.2
Table 2: The running time (in seconds) for different algorithm combinations using different sparsity
functions.
Figure 1: Demonstration of speedup. Left: Comparison of convergence between the Lagrange dual
method and gradient descent for learning bases. Right: The running time per iteration for modified
LARS and grafting as a multiple of the running time per iteration for feature-sign search.
We evaluated different combinations of coefficient learning and basis learning algorithms: the fastest
coefficient learning methods from our experiments (feature-sign search, modified LARS and grafting for the L1 sparsity function, and conjugate gradient for the epsilonL1 sparsity function) and the
state-of-the-art basis learning methods (gradient descent with iterative projection and the Lagrange
dual formulation). We used a training set of 1,000 input vectors for each of the four natural stimulus
datasets. We initialized the bases randomly and ran each algorithm combination (by alternatingly
optimizing the coefficients and the bases) until convergence.12
Table 2 shows the running times for different algorithm combinations. First, we observe that the
Lagrange dual method significantly outperformed gradient descent with iterative projections for
both L1 and epsilonL1 sparsity; a typical convergence pattern is shown in Figure 1 (left). Second,
we observe that, for L1 sparsity, feature-sign search significantly outperformed both modified LARS
and grafting.13 Figure 1 (right) shows the running time per iteration for modified LARS and grafting
as a multiple of that for feature-sign search (using the same gradient descent algorithm for basis
learning), demonstrating significant efficiency gains at later iterations; note that feature-sign search
(and grafting) can be initialized with the coefficients obtained in the previous iteration, whereas
modified LARS cannot. This result demonstrates that feature-sign search is particularly efficient for
iterative optimization, such as learning sparse coding bases.
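For concreteness, the alternating procedure timed in these experiments can be sketched as follows, reusing the feature_sign_search and learn_bases_dual sketches given earlier; the random initialization and fixed iteration count here are illustrative assumptions rather than the exact experimental settings.

import numpy as np

def sparse_coding(X, n_bases, gamma, c, n_iter=50, seed=0):
    # Alternate: coefficients by feature-sign search, bases by the Lagrange dual.
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((X.shape[0], n_bases))
    B *= np.sqrt(c) / np.linalg.norm(B, axis=0)   # enforce ||b_j||^2 <= c
    S = None
    for _ in range(n_iter):
        S = np.column_stack([feature_sign_search(B, X[:, i], gamma)
                             for i in range(X.shape[1])])
        B = learn_bases_dual(X, S, c)
    return B, S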
5.3 Learning highly overcomplete natural image bases
Using our efficient algorithms, we were able to learn highly overcomplete bases of natural images
as shown in Figure 2. For example, we were able to learn a set of 1,024 bases (each 14×14 pixels)
12 We ran each algorithm combination until the relative change of the objective per iteration became less than 10^{-6} (i.e., |(f_new - f_old)/f_old| < 10^{-6}). To compute the running time to convergence, we first computed the "optimal" (minimum) objective value achieved by any algorithm combination. Then, for each combination, we defined the convergence point as the point at which the objective value reaches within 1% relative error of the observed "optimal" objective value. The running time measured is the time taken to reach this convergence point. We truncated the running time if the optimization did not converge within 60,000 seconds.
13 We also evaluated a generic conjugate gradient implementation on the L1 sparsity function; however, it did not converge even after 60,000 seconds.
Figure 2: Learned overcomplete natural image bases. Left: 1,024 bases (each 14×14 pixels). Right: 2,000 bases (each 20×20 pixels).
Figure 3: Left: End-stopping test for 14×14 sized 1,024 bases. Each line in the graph shows the coefficients for a basis for different length bars. Right: Sample input image for nCRF effect.
in about 2 hours and a set of 2,000 bases (each 20×20 pixels) in about 10 hours.14 In contrast, the gradient descent method for basis learning did not result in any reasonable bases even after running for 24 hours. Further, summary statistics of our learned bases, obtained by fitting the Gabor function parameters to each basis, qualitatively agree with previously reported statistics [15].
5.4 Replicating complex neuroscience phenomena
Several complex phenomena of V1 neural responses are not well explained by simple linear models
(in which the response is a linear function of the input). For instance, many visual neurons display
"end-stopping," in which the neuron's response to a bar image of optimal orientation and placement
is actually suppressed as the bar length exceeds an optimal length [6]. Sparse coding can model the
interaction (inhibition) between the bases (neurons) by sparsifying their coefficients (activations),
and our algorithms enable these phenomena to be tested with highly overcomplete bases.
First, we evaluated whether end-stopping behavior could be observed in the sparse coding framework. We generated random bars with different orientations and lengths in 14×14 image patches,
and picked the stimulus bar which most strongly activates each basis, considering only the bases
which are significantly activated by one of the test bars. For each such highly activated basis, and
the corresponding optimal bar position and orientation, we vary the length of the bar from 1 pixel to
the maximal size and run sparse coding to measure the coefficients for the selected basis, relative to
their maximum coefficient. As shown in Figure 3 (left), for highly overcomplete bases, we observe
many cases in which the coefficient decreases significantly as the bar length is increased beyond the
optimal point. This result is consistent with the end-stopping behavior of some V1 neurons.
Second, using the learned overcomplete bases, we tested for center-surround non-classical receptive
field (nCRF) effects [7]. We found the optimal bar stimuli for 50 random bases and checked that
these bases were among the most strongly activated ones for the optimal stimulus. For each of these bases, we measured the response with its optimal bar stimulus with and without the aligned bar stimulus in the surround region (Figure 3 (right)). We then compared the basis response in these two cases to measure the suppression or facilitation due to the surround stimulus. The aligned surround stimuli produced a suppression of basis activation; 42 out of 50 bases showed suppression with aligned surround input images, and 13 bases among them showed more than 10% suppression, in qualitative accordance with observed nCRF surround suppression effects.
14 We used the Lagrange dual formulation for learning bases, and both conjugate gradient with epsilonL1 sparsity as well as the feature-sign search with L1 sparsity for learning coefficients. The bases learned from both methods showed qualitatively similar receptive fields. The bases shown in Figure 2 were learned using the epsilonL1 sparsity function and 4,000 input image patches randomly sampled for every iteration.
6  Application to self-taught learning
Sparse coding is an unsupervised algorithm that learns to represent input data succinctly using only
a small number of bases. For example, using the "image edge" bases in Figure 2, it represents a new image patch \vec{\xi} as a linear combination of just a small number of these bases \vec{b}_j. Informally, we think of this as finding a representation of an image patch in terms of the "edges" in the image; this gives
a slightly higher-level/more abstract representation of the image than the pixel intensity values, and
is useful for a variety of tasks.
In related work [8], we apply this to self-taught learning, a new machine learning formalism in
which we are given a supervised learning problem together with additional unlabeled instances that
may not have the same class labels as the labeled instances. For example, one may wish to learn
to distinguish between cars and motorcycles given images of each, and additional (and in practice readily available) unlabeled images of various natural scenes. (This is in contrast to the much more
restrictive semi-supervised learning problem, which would require that the unlabeled examples also
be of cars or motorcycles only.) We apply our sparse coding algorithms to the unlabeled data to
learn bases, which gives us a higher-level representation for images, thus making the supervised
learning task easier. On a variety of problems including object recognition, audio classification, and
text categorization, this approach leads to 11-36% reductions in test error.
7  Conclusion
In this paper, we formulated sparse coding as a combination of two convex optimization problems
and presented efficient algorithms for each: the feature-sign search for solving the L1 -least squares
problem to learn coefficients, and a Lagrange dual method for the L2 -constrained least squares
problem to learn the bases for any sparsity penalty function. We tested these algorithms on a variety of datasets, and showed that they give significantly better performance compared to previous methods. Our algorithms can be used to learn an overcomplete set of bases, and we showed that sparse coding could partially explain the phenomena of end-stopping and nCRF surround suppression in V1 neurons.
Acknowledgments. We thank Bruno Olshausen, Pieter Abbeel, Sara Bolouki, Roger Grosse, Benjamin
Packer, Austin Shoemaker and Joelle Skaf for helpful discussions. Support from the Office of Naval Research
(ONR) under award number N00014-06-1-0828 is gratefully acknowledged.
References
[1] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[2] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311-3325, 1997.
[3] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 12(2), 2000.
[4] B. A. Olshausen. Sparse coding of time-varying natural images. Journal of Vision, 2(7):130, 2002.
[5] B. A. Olshausen and D. J. Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4), 2004.
[6] M. P. Sceniak, M. J. Hawken, and R. Shapley. Visual spatial characterization of macaque V1 neurons. The Journal of Neurophysiology, 85(5):1873-1887, 2001.
[7] J. R. Cavanaugh, W. Bair, and J. A. Movshon. Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88(5):2530-2546, 2002.
[8] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning. In NIPS Workshop on Learning when test and training inputs have different distributions, 2006.
[9] A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[10] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms and Applications. 1997.
[11] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[12] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2), 2004.
[13] S. Perkins and J. Theiler. Online feature selection using grafting. In ICML, 2003.
[14] A. Hyvärinen, P. O. Hoyer, and M. O. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527-1558, 2001.
[15] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359-366, 1998.
Language Induction by Phase Transition
in Dynamical Recognizers
Jordan B. Pollack
Laboratory for AI Research
The Ohio State University
Columbus,OH 43210
[email protected]
Abstract
A higher order recurrent neural network architecture learns to recognize and
generate languages after being "trained" on categorized exemplars. Studying
these networks from the perspective of dynamical systems yields two
interesting discoveries: First, a longitudinal examination of the learning
process illustrates a new form of mechanical inference: Induction by phase
transition. A small weight adjustment causes a "bifurcation" in the limit
behavior of the network. This phase transition corresponds to the onset of the
network's capacity for generalizing to arbitrary-length strings. Second, a
study of the automata resulting from the acquisition of previously published
languages indicates that while the architecture is NOT guaranteed to find a
minimal finite automata consistent with the given exemplars, which is an
NP-Hard problem, the architecture does appear capable of generating nonregular languages by exploiting fractal and chaotic dynamics. I end the paper
with a hypothesis relating linguistic generative capacity to the behavioral
regimes of non-linear dynamical systems.
1 Introduction
I expose a recurrent high-order back-propagation network to both positive and negative
examples of boolean strings, and report that although the network does not find the
minimal-description finite state automata for the languages (which is NP-Hard (Angluin, 1978)), it does induction in a novel and interesting fashion, and searches through a
hypothesis space which, theoretically, is not constrained to machines of finite state. These
results are of import to many related neural models currently under development, e.g.
(Elman, 1990; Giles et al., 1990; Servan-Schreiber et al., 1989), and relates ultimately to
the question of how linguistic capacity can arise in nature.
Although the transitions among states in a finite-state automata are usually thought of as
being fully specified by a table, a transition function can also be specified as a
mathematical function of the current state and the input. It is known from (McCulloch &
Pitts, 1943) that even the most elementary modeling assumptions yield finite-state
control, and it is worth reiterating that any network with the capacity to compute arbitrary boolean functions (say, as logical sums of products) (Lapedes & Farber; White & Hornik) can be used recurrently to implement arbitrary finite state machines.
From a different point of view, a recurrent network with a state evolving across k units can be considered a k-dimensional discrete-time continuous-space dynamical system, with a precise initial condition, Z_k(0), and a state space in Z, a subspace of R^k. The governing function, F, is parameterized by a set of weights, W, and merely computes the next state from the current state and input, Y_j(t), a finite sequence of patterns representing tokens from some alphabet \Sigma:

    Z_k(t+1) = F_W(Z_k(t), Y_j(t))

If we view one of the dimensions of this system, say Z_a, as an "acceptance" dimension, we can define the language accepted by such a Dynamical Recognizer as all strings of input tokens evolved from the precise initial state for which the accepting dimension of the state is above a certain threshold. In network terms, one output unit would be subjected to a threshold test after processing a sequence of input patterns.
The first question to ask is how can such a dynamical system be constructed, or taught, to accept a particular language? The weights in the network, individually, do not correspond directly to graph transitions or to phrase structure rules. The second question to ask is what sort of generative power can be achieved by such systems?
2 The Model
To begin to answer the question of learning, I now present and elaborate upon my earlier work on Cascaded Networks (Pollack, 1987), which were used in a recurrent fashion to learn parity, depth-limited parenthesis balancing, and to map between word sequences and proposition representations (Pollack, 1990a). A Cascaded Network is a well-controlled higher-order connectionist architecture to which the back-propagation technique of weight adjustment (Rumelhart et al., 1986) can be applied. Basically, it consists of two subnetworks: the function network is a standard feed-forward network, with or without hidden layers. However, the weights are dynamically computed by the linear context network, whose outputs are mapped in a 1:1 fashion to the weights of the function net. Thus the input pattern to the context network is used to "multiplex" the function computed, which can result in simpler learning tasks.
When the outputs of the function network are used as inputs to the context network, a system can be built which learns to produce specific outputs for variable-length sequences of inputs. Because of the multiplicative connections, each input is, in effect, processed by a different function. Given an initial context, Z_k(0), and a sequence of inputs, Y_j(t), t = 1..n, the network computes a sequence of state vectors, Z_i(t), t = 1..n, by dynamically changing the set of weights, W_ij(t). Without hidden units the forward pass computation is:

    W_ij(t) = \sum_k W_ijk Z_k(t-1)
    Z_i(t) = g(\sum_j W_ij(t) Y_j(t))
where g is the usual sigmoid function used in back-propagation systems.
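A minimal NumPy sketch of this forward pass follows. It assumes the state and context dimensions match so the recurrence is well-defined, and the function and variable names are mine.

import numpy as np

def scn_forward(W, z0, inputs):
    # Forward pass of a sequential cascaded network: W[i, j, k] holds the
    # third-order weights w_ijk, z0 is the initial context z(0), and inputs
    # is the sequence of input vectors y(t).
    g = lambda a: 1.0 / (1.0 + np.exp(-a))        # the usual sigmoid
    z = np.asarray(z0, dtype=float)
    for y in inputs:
        Wt = W @ z                                # w_ij(t) = sum_k w_ijk z_k(t-1)
        z = g(Wt @ np.asarray(y, dtype=float))    # z_i(t) = g(sum_j w_ij(t) y_j(t))
    return z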
In previous work, I assumed that a teacher could supply a consistent and generalizable
desired-state for each member of a large set of strings, which was a significant
overconstraint. In learning a two-state machine like parity, this did not matter, as the 1-bit state fully determines the output. However, for the case of a higher-dimensional system,
we know what the final output of a system should be, but we don't care what its state
should be along the way.
Jordan (1986) showed how recurrent back-propagation networks could be trained with
"don't care" conditions. If there is no specific preference for the value of an output unit
for a particular training example, simply consider the error term for that unit to be 0.
This will work, as long as that same unit receives feedback from other examples. When
the don't-cares line up, the weights to those units will never change. My solution to this
problem involves a backspace, unrolling the loop only once: After propagating the errors
determined on only a subset of the weights from the "acceptance" unit Za:
    \partial E / \partial W_aj(n) = (Z_a(n) - d_a) Z_a(n) (1 - Z_a(n)) Y_j(n)

The error on the remainder of the weights (\partial E / \partial W_ijk, i \ne a) is calculated using values from the penultimate time step:

    \partial E / \partial Z_k(n-1) = \sum_a \sum_j (\partial E / \partial W_aj(n)) W_ajk

    \partial E / \partial W_ij(n-1) = (\partial E / \partial Z_i(n-1)) Y_j(n-1)

    \partial E / \partial W_ijk = (\partial E / \partial W_ij(n-1)) Z_k(n-2)
This is done, in batch (epoch) style, for a set of examples of varying lengths.
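The following sketch mirrors these equations directly for a single acceptance unit a; it assumes the states z(0), ..., z(n) and inputs y(1), ..., y(n) were recorded by a forward pass such as scn_forward above (with n >= 2), and it is an illustration rather than the original implementation.

import numpy as np

def backspace_gradients(W, states, ys, d_a, a):
    # One-step unrolled ("backspace") gradients; states[t] = z(t), ys[t-1] = y(t),
    # a is the acceptance unit and d_a its desired final value.
    n = len(ys)
    zn, zn2 = states[n], states[n - 2]
    yn, yn1 = ys[n - 1], ys[n - 2]
    dE_dWaj = (zn[a] - d_a) * zn[a] * (1.0 - zn[a]) * yn   # dE/dw_aj(n), over j
    dE_dz = dE_dWaj @ W[a]                                 # sum_j dE/dw_aj(n) w_ajk
    dE_dWij = np.outer(dE_dz, yn1)                         # dE/dw_ij(n-1)
    dE_dWijk = dE_dWij[:, :, None] * zn2[None, None, :]    # dE/dw_ijk
    return dE_dWaj, dE_dWijk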
3 Induction as Phase Transition
In initial studies of learning the simple regular language of odd parity, I expected the
recognizer to merely implement "exclusive or" with a feedback link. It turns out that this
is not quite enough. Because termination of back-propagation is usually defined as a 20%
error (e.g., logical "1" is above 0.8), recurrent use of this logic tends to a limit point. In
other words, mere separation of the exemplars is no guarantee that the network can
recognize parity in the limit. Nevertheless, this is indeed possible, as illustrated below. In order to test the limit behavior of a recognizer, we can observe its
response to a very long "characteristic string". For odd parity, the string 1* requires an
alternation of responses.
A small cascaded network composed of a 1-2 function net and a 2-6 context net
621
622
Pollack
(requiring 18 weights) was trained on odd parity of a small set of strings up to length
5. At each epoch, the weights in the network were saved in a file. Subsequently, each
configuration was tested in its response to the first 25 characteristic strings. In Figure 1,
each vertical column, corresponding to an epoch, contains 25 points between 0 and 1.
Initially, all strings longer than length 1 are not distinguished. From cycle 60-80, the
network is improving at separating finite strings. At cycle 85, the network undergoes a
"bifurcation," where the small change in weights of a single epoch leads to a phase
transition from a limit point to a limit cycle.1 This phase transition is so "adaptive" to the classification task that the network rapidly exploits it.
[Figure 1: A bifurcation diagram showing the response of the parity-learner to the first 25 characteristic strings over 200 epochs of training.]
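A limit-behavior test of this kind can be sketched as follows (using scn_forward from above); the encoding of the token "1" depends on how the training data were represented, so it is left as a parameter here.

def characteristic_response(W, z0, one_token, accept_dim, max_len=25):
    # Acceptance-dimension response to the characteristic strings 1, 11, 111, ...
    return [scn_forward(W, z0, [one_token] * length)[accept_dim]
            for length in range(1, max_len + 1)]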
I wish to stress that this is a new and very interesting form of mechanical induction, and
reveals that with the proper perspective, non-linear connectionist networks are capable of
much more complex behavior than hill-climbing. Before the phase transition, the
machine is in principle not capable of performing the serial parity task; after the phase
transition it is. The similarity of learning through a "flash of insight" to biological change
through a "punctuated" evolution is much more than coincidence.
4 Benchmarking Results
Tomita (1982) performed elegant experiments in inducing finite automata from positive and negative evidence using hill-climbing in the space of 9-state automata. Each case was
defined by two sets of boolean strings, accepted by and rejected by the regular languages
1 For the simple low-dimensional dynamical systems usually studied, the "knob" or control parameter for such a bifurcation diagram is a scalar variable; here the control parameter is the entire 32-D vector of weights in the network, and back-propagation turns the knob!
listed below.
1. 1*
2. (10)*
3. no odd zero strings after odd 1 strings
4. no triples of zeros
5. pairwise, an even sum of 01's and 10's
6. number of 1's - number of 0's = 3n
7. 0*1*0*1*
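For reference, the seven languages can be tested directly; the sketch below encodes my reading of the definitions above (in particular of cases 3 and 5).

import itertools
import re

def tomita(k, s):
    # Membership tests for the seven Tomita (1982) languages; s is over {'0','1'}.
    runs = [(ch, len(list(grp))) for ch, grp in itertools.groupby(s)]
    if k == 1:
        return set(s) <= {"1"}                                  # 1*
    if k == 2:
        return re.fullmatch(r"(10)*", s) is not None            # (10)*
    if k == 3:  # no odd-length 0-run immediately after an odd-length 1-run
        return not any(c1 == "1" and n1 % 2 == 1 and c2 == "0" and n2 % 2 == 1
                       for (c1, n1), (c2, n2) in zip(runs, runs[1:]))
    if k == 4:
        return "000" not in s                                   # no triples of zeros
    if k == 5:
        return (s.count("01") + s.count("10")) % 2 == 0         # even sum of 01's and 10's
    if k == 6:
        return (s.count("1") - s.count("0")) % 3 == 0           # #1's - #0's = 3n
    if k == 7:
        return re.fullmatch(r"0*1*0*1*", s) is not None
    raise ValueError("k must be in 1..7")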
Rather than inventing my own training data, or sampling these languages for a well-formed training set, I ran all 7 Tomita training environments as given, on a sequential cascaded network of a 1-input 4-output function network (with bias, 8 weights to set) and a 3-input 8-output context network with bias, using a learning rate of 0.3 and a momentum of 0.7. Termination was when all accepted strings returned output bits above
0.8 and rejected strings below 0.2.
Of Tomita's 7 cases, all but cases #2 and #6 converged without a problem in several
hundred epochs. Case 2 would not converge, and kept treating a negative case as correct
because of the difficulty for my architecture to induce a "trap" state; I had to modify the
training set (by adding reject strings 110 and 11010) in order to overcome this problem.2
Case 6 took several restarts and thousands of cycles to converge, cause unknown. The
complete experimental data is available in a longer report (Pollack, 1990b).
Because the states are "in a box" of low dimension,3 we can view these machines
graphically to gain some understanding of how the state space is being arranged. Based
upon some initial studies of parity, my initial hypothesis was that a set of clusters would be found, organized in some geometric fashion: i.e., an embedding of a finite state machine into a finite dimensional geometry such that each token's transitions would correspond to a simple transformation of space. Graphs of the states visited by all possible inputs up to length 10, for the 7 Tomita test cases, are shown in Figure 2. Each
figure contains 2048 points, but often they overlap.
The images (a) and (d) are what were expected, clumps of points which closely map to
states of equivalent FSA's. Images (b) and (e) have limit "ravines" which can each be
considered states as well.
5 Discussion
However, the state spaces, (c), (f), and (g) of the dynamical recognizers for Tomita cases
3,6, and 7, are interesting, because, theoretically, they are infinite state machines, where
the states are not arbitrary or random, requiring an infinite table of transitions, but are
constrained in a powerful way by mathematical principle. In other words, the complexity
is in the dynamics, not in the specifications (weights).
In thinking about such a principle, consider other systems in which extreme observed
complexity emerges from algorithmic simplicity plus computational power. It is
2 It can be argued that other FSA-inducing methods get around this problem by presupposing rather than learning trap states.
3 One reason I have succeeded in such low-dimensional induction is because my architecture is a Mealy, rather than Moore, Machine (Lee Giles, Personal Communication).
[Figure 2, panels A-G: Images of the state-spaces for the 7 benchmark cases. Each image contains 2048 points corresponding to the states of all boolean strings up to length 10.]
interesting to note that by eliminating the sigmoid and commuting the Y_j and Z_k terms, the forward equation for higher-order recurrent networks is identical to the generator of an Iterated Function System (IFS) (Barnsley et al., 1985). Thus, my figures of state-spaces, which emerge from the projection of \Sigma^* into Z, are of the same class of mathematical object as Barnsley's fractal attractors (e.g., the widely reproduced fern). Using the method of (Grassberger & Procaccia, 1983), the correlation dimension of the attractor in Figure 2(g) was found to be about 1.4.
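The Grassberger & Procaccia estimate used here can be sketched in a few lines; choosing the scaling region over which the slope of log C(r) versus log r is read off remains a judgment call in practice.

import numpy as np

def correlation_integral(points, r):
    # C(r): fraction of distinct pairs of visited states closer than r. The
    # correlation dimension is the slope of log C(r) vs. log r at small r.
    P = np.asarray(points, dtype=float)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    m = len(P)
    return (np.sum(d < r) - m) / float(m * (m - 1))   # exclude self-pairs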
The link between work in complex dynamical systems and neural networks is well-established both on the neurobiological level (Skarda & Freeman, 1987) and on the
mathematical level (Derrida & Meir, 1988; Huberman & Hogg, 1987; Kurten, 1987;
Smolensky, 1986). This paper expands a theme from an earlier proposal to link them at
the "cognitive" level (pollack, 1989).
There is an interesting formal question, which has been brought out in the work of
(Wolfram, 1984) and others on the universality of cellular automata, and more recently in
the work of (Crutchfield & Young, 1989) on the descriptive complexity of bifurcating
systems: What is the relationship between complex dynamics (of neural systems) and
traditional measures of computational complexity? From this work and other supporting
evidence, I venture the following hypothesis:
    The state-space limit of a dynamical recognizer, as \Sigma^* \to \Sigma^\infty, is an Attractor, which is cut by a threshold (or similar decision) function. The complexity of the language recognized is regular if the cut falls between disjoint limit points or cycles, context-free if it cuts a "self-similar" (recursive) region, and context-sensitive if it cuts a "chaotic" (pseudo-random) region.
Acknowledgements
This research has been partially supported by the Office of Naval Research under grant N00014-89-J-1200.
References
Angluin, D. (1978). On the complexity of minimum inference of regular sets. Information and Control, 39, 337-350.
Barnsley, M. F., Ervin, V., Hardin, D. & Lancaster, J. (1985). Solution of an inverse problem for fractals and other sets. Proceedings of the National Academy of Sciences, 83.
Crutchfield, J. P. & Young, K. (1989). Computation at the Onset of Chaos. In W. Zurek, (Ed.), Complexity, Entropy and the Physics of Information. Reading, MA: Addison-Wesley.
Derrida, B. & Meir, R. (1988). Chaotic behavior of a layered neural network. Phys. Rev. A, 38.
Elman, J. L. (1990). Finding Structure in Time. Cognitive Science, 14, 179-212.
Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C. & Chen, D. (1990). Higher Order Recurrent Networks and Grammatical Inference. In D. Touretzky, (Ed.), Advances in Neural Information Processing Systems. Los Gatos, CA: Morgan Kaufman.
Grassberger, P. & Procaccia, I. (1983). Measuring the Strangeness of Strange Attractors. Physica, 9D, 189-208.
Huberman, B. A. & Hogg, T. (1987). Phase Transitions in Artificial Intelligence Systems. Artificial Intelligence, 33, 155-172.
Jordan, M. I. (1986). Serial Order: A Parallel Distributed Processing Approach. ICS report 8608. La Jolla: Institute for Cognitive Science, UCSD.
Kurten, K. E. (1987). Phase transitions in quasirandom neural networks. In Institute of Electrical and Electronics Engineers First International Conference on Neural Networks, San Diego, II-197-20.
McCulloch, W. S. & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.
Pollack, J. B. (1987). Cascaded Back Propagation on Dynamic Connectionist Networks. In Proceedings of the Ninth Conference of the Cognitive Science Society, Seattle, 391-404.
Pollack, J. B. (1989). Implications of Recursive Distributed Representations. In D. Touretzky, (Ed.), Advances in Neural Information Processing Systems. Los Gatos, CA: Morgan Kaufman.
Pollack, J. B. (1990). Recursive Distributed Representation. Artificial Intelligence, 46, 77-105.
Pollack, J. B. (1990). The Induction of Dynamical Recognizers. Tech Report 90-JP-Automata. Columbus, OH 43210: LAIR, Ohio State University.
Rumelhart, D. E., Hinton, G. & Williams, R. (1986). Learning Internal Representations through Error Propagation. In D. E. Rumelhart, J. L. McClelland & the PDP research group, (Eds.), Parallel Distributed Processing: Experiments in the Microstructure of Cognition, Vol. 1. Cambridge: MIT Press.
Servan-Schreiber, D., Cleeremans, A. & McClelland, J. L. (1989). Encoding Sequential Structure in Simple Recurrent Networks. In D. Touretzky, (Ed.), Advances in Neural Information Processing Systems. Los Gatos, CA: Morgan Kaufman.
Skarda, C. A. & Freeman, W. J. (1987). How brains make chaos. Brain & Behavioral Sciences, 10.
Smolensky, P. (1986). Information Processing in Dynamical Systems: Foundations of Harmony Theory. In D. E. Rumelhart, J. L. McClelland & the PDP research group, (Eds.), Parallel Distributed Processing: Experiments in the Microstructure of Cognition, Vol. 1. Cambridge: MIT Press.
Tomita, M. (1982). Dynamic construction of finite-state automata from examples using hill-climbing. In Proceedings of the Fourth Annual Cognitive Science Conference, Ann Arbor, MI, 105-108.
Wolfram, S. (1984). Universality and Complexity in Cellular Automata. Physica, 10D, 1-35.
Conditional Random Sampling: A Sketch-based
Sampling Technique for Sparse Data
Ping Li
Department of Statistics
Stanford University
Stanford, CA 94305
[email protected]
Kenneth W. Church
Microsoft Research
One Microsoft Way
Redmond, WA 98052
[email protected]
Trevor J. Hastie
Department. of Statistics
Stanford University
Stanford, CA 94305
[email protected]
Abstract
We1 develop Conditional Random Sampling (CRS), a technique particularly suitable for sparse data. In large-scale applications, the data are often highly sparse.
CRS combines sketching and sampling in that it converts sketches of the data into
conditional random samples online in the estimation stage, with the sample size
determined retrospectively. This paper focuses on approximating pairwise l2 and
l1 distances and comparing CRS with random projections. For boolean (0/1) data,
CRS is provably better than random projections. We show using real-world data
that CRS often outperforms random projections. This technique can be applied in
learning, data mining, information retrieval, and database query optimizations.
1 Introduction
Conditional Random Sampling (CRS) is a sketch-based sampling technique that effectively exploits
data sparsity. In modern applications in learning, data mining, and information retrieval, the datasets
are often very large and also highly sparse. For example, the term-document matrix is often more
than 99% sparse [7]. Sampling large-scale sparse data is challenging. The conventional random
sampling (i.e., randomly picking a small fraction) often performs poorly when most of the samples
are zeros. Also, in heavy-tailed data, the estimation errors of random sampling could be very large.
As alternatives to random sampling, various sketching algorithms have become popular, e.g., random
projections [17] and min-wise sketches [6]. Sketching algorithms are designed for approximating
specific summary statistics. For a specific task, a sketching algorithm often outperforms random
sampling. On the other hand, random sampling is much more flexible. For example, we can use the
same set of random samples to estimate any lp pairwise distances and multi-way associations. Conditional Random Sampling (CRS) combines the advantages of both sketching and random sampling.
Many important applications concern only the pairwise distances, e.g., distance-based clustering
and classification, multi-dimensional scaling, kernels. For a large training set (e.g., at Web scale),
computing pairwise distances exactly is often too time-consuming or even infeasible.
Let A be a data matrix of n rows and D columns. For example, A can be the term-document matrix
with n as the total number of word types and D as the total number of documents. In modern search
engines, n ∼ 10^6–10^7 and D ∼ 10^10–10^11. In general, n is the number of data points and D
is the number of features. Computing all pairwise associations AAᵀ, also called the Gram matrix
in machine learning, costs O(n²D), which could be daunting for large n and D. Various sampling
methods have been proposed for approximating the Gram matrix and kernels [2, 8]. For example, using
(normal) random projections [17], we approximate AAᵀ by (AR)(AR)ᵀ, where the entries of
R ∈ ℝ^{D×k} are i.i.d. N(0, 1). This reduces the cost down to O(nDk + n²k), where k ≪ min(n, D).
¹ The full version [13]: www.stanford.edu/~pingli98/publications/CRS_tr.pdf
Sampling techniques can be critical in databases and information retrieval. For example, the
database query optimizer seeks highly efficient techniques to estimate the intermediate join sizes
in order to choose an "optimum" execution path for multi-way joins.
Conditional Random Sampling (CRS) can be applied to estimating pairwise distances (in any norm)
as well as multi-way associations. CRS can also be used for estimating joint histograms (two-way
and multi-way). While this paper focuses on estimating pairwise l2 and l1 distances and inner
products, we refer readers to the technical report [13] for estimating joint histograms. Our early
work, [11, 12] concerned estimating two-way and multi-way associations in boolean (0/1) data.
We will compare CRS with normal random projections for approximating l2 distances and inner
products, and with Cauchy random projections for approximating l1 distances. In boolean data,
CRS bears some similarity to Broder?s sketches [6] with some important distinctions. [12] showed
that in boolean data, CRS improves Broder?s sketches by roughly halving the estimation variances.
2 The Procedures of CRS
Conditional Random Sampling is a two-stage procedure. In the sketching stage, we scan the data
matrix once and store a fraction of the non-zero elements in each data point, as "sketches." In the
estimation stage, we generate conditional random samples online pairwise (for two-way) or groupwise (for multi-way); hence we name our algorithm Conditional Random Sampling (CRS).
2.1 The Sampling/Sketching Procedure
[Figure omitted: four panels, (a) Original, (b) Permuted, (c) Postings, (d) Sketches.]
Figure 1: A global view of the sketching stage.
Figure 1 provides a global view of the sketching stage. The columns of a sparse data matrix (a)
are first randomly permuted (b). Then only the non-zero entries are considered, called postings (c).
Sketches are simply the front of postings (d). Note that in the actual implementation, we only need
to maintain a permutation mapping on the column IDs.
Column: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
u1:     0 1 0 2 0 1 0 0 1  2  1  0  1  0  2
u2:     1 3 0 0 1 2 0 1 0  0  3  0  0  2  1
(a) Data matrix and random samples
P1: 2(1) 4(2) 6(1) 9(1) 10(2) 11(1) 13(1) 15(2)
P2: 1(1) 2(3) 5(1) 6(2) 8(1) 11(3) 14(2) 15(1)
(b) Postings
K1: 2(1) 4(2) 6(1) 9(1) 10(2)
K2: 1(1) 2(3) 5(1) 6(2) 8(1) 11(3)
(c) Sketches
Figure 2: (a): A data matrix with two rows and D = 15. If the column IDs are random, the first
Ds = 10 columns constitute a random sample. ui denotes the ith row. (b): Postings consist of
tuples "ID (Value)." (c): Sketches are the first ki entries of postings sorted ascending by IDs. In this
example, k1 = 5, k2 = 6, Ds = min(10, 11) = 10. Excluding 11(3) in K2 , we obtain the same
samples as if we directly sampled the first Ds = 10 columns in the data matrix.
Clearly, sketches are not uniformly random samples, which may make the estimation task difficult. We show, in Figure 2, that sketches are almost random samples pairwise (or group-wise).
Figure 2(a) constructs conventional random samples from a data matrix; and we show one can generate (retrospectively) the same random samples from sketches in Figure 2(b)(c).
In Figure 2(a), when the columns are randomly permuted, we can construct random samples by simply taking the first Ds columns from the data matrix of D columns (Ds ≪ D in real applications).
For sparse data, we only store the non-zero elements in the form of tuples "ID (Value)," a structure
called postings. We denote the postings by Pi for each row ui . Figure 2(b) shows the postings for
the same data matrix in Figure 2(a). The tuples are sorted ascending by their IDs. A sketch, Ki , of
postings Pi , is the first ki entries (i.e., the smallest ki IDs) of Pi , as shown in Figure 2(c).
The central observation is that if we exclude all elements of sketches whose IDs are larger than
Ds = min(max(ID(K1)), max(ID(K2))),   (1)
we obtain exactly the same samples as if we directly sampled the first Ds columns from the data
matrix in Figure 2(a). This way, we convert sketches into random samples by conditioning on Ds ,
which differs from pair to pair and which we do not know beforehand.
2.2 The Estimation Procedure
The estimation task for CRS can be extremely simple. After we construct the conditional random
samples from sketches K1 and K2 with the effective sample size Ds , we can compute any distances
(l2, l1, or inner products) from the samples and multiply them by D/Ds to estimate them in the original space.
(Later, we will show how to improve the estimates by taking advantage of the marginal information.)
We use ũ1,j and ũ2,j (j = 1 to Ds) to denote the conditional random samples (of size Ds) obtained by CRS. For example, in Figure 2, we have Ds = 10, and the non-zero ũ1,j and ũ2,j are
ũ1,2 = 1, ũ1,4 = 2, ũ1,6 = 1, ũ1,9 = 1, ũ1,10 = 2,
ũ2,1 = 1, ũ2,2 = 3, ũ2,5 = 1, ũ2,6 = 2, ũ2,8 = 1.
Denote the inner product, squared l2 distance, and l1 distance, by a, d(2) , and d(1) , respectively,
a = Σ_{i=1}^{D} u1,i u2,i,    d⁽²⁾ = Σ_{i=1}^{D} |u1,i − u2,i|²,    d⁽¹⁾ = Σ_{i=1}^{D} |u1,i − u2,i|.   (2)
Once we have the random samples, we can then use the following simple linear estimators:
â_MF = (D/Ds) Σ_{j=1}^{Ds} ũ1,j ũ2,j,    d̂⁽²⁾_MF = (D/Ds) Σ_{j=1}^{Ds} (ũ1,j − ũ2,j)²,    d̂⁽¹⁾_MF = (D/Ds) Σ_{j=1}^{Ds} |ũ1,j − ũ2,j|.   (3)
2.3 The Computational Cost
The sketching stage requires generating a random permutation mapping of length D and one linear scan of all the non-zeros. Therefore, generating sketches for A ∈ ℝ^{n×D} costs O(Σ_{i=1}^{n} f_i), where f_i is the number of non-zeros in the ith row, i.e., f_i = |P_i|. In the estimation stage, we need to linearly scan the sketches. While the conditional sample size Ds might be large, the cost of estimating the distance between one pair of data points is only O(k1 + k2) instead of O(Ds).
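To make the two-stage procedure concrete, here is a minimal sketch in Python/NumPy. This is our own illustration, not the authors' code: the data layout, variable names, and toy data are assumptions, and it ignores the corner case where a data point has fewer than k non-zeros (in which case the sketch is the entire posting).

```python
import numpy as np

def build_sketch(u, k, perm):
    """Sketch of one data point u: the k non-zero entries with the
    smallest permuted column IDs, returned as (sorted IDs, values)."""
    ids = np.flatnonzero(u)            # postings: columns holding non-zeros
    pids = perm[ids]                   # randomly permuted column IDs
    order = np.argsort(pids)[:k]       # keep the k smallest permuted IDs
    return pids[order], u[ids[order]]

def crs_estimates(sk1, sk2, D):
    """Linear CRS estimators (Eq. 3) from two sketches."""
    (id1, v1), (id2, v2) = sk1, sk2
    Ds = min(id1.max(), id2.max())     # effective sample size (Eq. 1)
    s1 = np.zeros(Ds + 1); s1[id1[id1 <= Ds]] = v1[id1 <= Ds]
    s2 = np.zeros(Ds + 1); s2[id2[id2 <= Ds]] = v2[id2 <= Ds]
    scale = D / Ds
    return (scale * np.dot(s1, s2),            # inner product a
            scale * np.sum((s1 - s2) ** 2),    # squared l2 distance
            scale * np.sum(np.abs(s1 - s2)))   # l1 distance

rng = np.random.default_rng(0)
D = 10000
u1 = rng.binomial(1, 0.02, D) * rng.poisson(3.0, D)   # sparse toy data
u2 = rng.binomial(1, 0.02, D) * rng.poisson(3.0, D)
perm = rng.permutation(D) + 1                         # column IDs in 1..D
est = crs_estimates(build_sketch(u1, 50, perm), build_sketch(u2, 50, perm), D)
print(est, (np.dot(u1, u2), np.sum((u1 - u2) ** 2), np.sum(np.abs(u1 - u2))))
```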
3 The Theoretical Variance Analysis of CRS
We give some theoretical analysis on the variances of CRS. For simplicity, we ignore the "finite population correction factor," (D − Ds)/(D − 1), due to "sample-without-replacement."

We first consider â_MF = (D/Ds) Σ_{j=1}^{Ds} ũ1,j ũ2,j. By assuming "sample-with-replacement," the samples (ũ1,j ũ2,j), j = 1 to Ds, are i.i.d., conditional on Ds. Thus,

Var(â_MF | Ds) = (D/Ds)² Ds Var(ũ1,1 ũ2,1) = (D/Ds) D (E[(ũ1,1 ũ2,1)²] − E²[ũ1,1 ũ2,1]),   (4)

E[ũ1,1 ũ2,1] = (1/D) Σ_{i=1}^{D} u1,i u2,i = a/D,    E[(ũ1,1 ũ2,1)²] = (1/D) Σ_{i=1}^{D} (u1,i u2,i)²,   (5)

Var(â_MF | Ds) = (D/Ds) (Σ_{i=1}^{D} u1,i² u2,i² − a²/D).   (6)

The unconditional variance is then simply

Var(â_MF) = E(Var(â_MF | Ds)) = E(D/Ds) (Σ_{i=1}^{D} u1,i² u2,i² − a²/D),

as Var(X̂) = E(Var(X̂ | Ds)) + Var(E(X̂ | Ds)) = E(Var(X̂ | Ds)) when X̂ is conditionally unbiased.

No closed-form expression is known for E(D/Ds), but we know E(D/Ds) ≥ max(f1/k1, f2/k2) (similar to Jensen's inequality). Asymptotically (as k1 and k2 increase), the inequality becomes an equality:

E(D/Ds) ≈ max((f1 + 1)/k1, (f2 + 1)/k2) ≈ max(f1/k1, f2/k2),   (7)

where f1 and f2 are the numbers of non-zeros in u1 and u2, respectively. See [13] for the proof. Extensive simulations in [13] verify that the errors of (7) are usually within 5% when k1, k2 > 20.

We similarly derive the variances for d̂⁽²⁾_MF and d̂⁽¹⁾_MF. In summary, we obtain (when k1 = k2 = k):

Var(â_MF) = E(D/Ds) (Σ_{i=1}^{D} u1,i² u2,i² − a²/D) ≈ (max(f1, f2)/D) (1/k) (D Σ_{i=1}^{D} u1,i² u2,i² − a²),   (8)

Var(d̂⁽²⁾_MF) = E(D/Ds) (d⁽⁴⁾ − [d⁽²⁾]²/D) ≈ (max(f1, f2)/D) (1/k) (D d⁽⁴⁾ − [d⁽²⁾]²),   (9)

Var(d̂⁽¹⁾_MF) = E(D/Ds) (d⁽²⁾ − [d⁽¹⁾]²/D) ≈ (max(f1, f2)/D) (1/k) (D d⁽²⁾ − [d⁽¹⁾]²),   (10)

where we denote d⁽⁴⁾ = Σ_{i=1}^{D} (u1,i − u2,i)⁴.

The sparsity term max(f1, f2)/D reduces the variances significantly. If max(f1, f2)/D = 0.01, the variances can be reduced by a factor of 100, compared to conventional random coordinate sampling.
4 A Brief Introduction to Random Projections
We give a brief introduction to random projections, with which we compare CRS. (Normal) Random
projections [17] are widely used in learning and data mining [2–4].
Random projections multiply the data matrix A ∈ ℝ^{n×D} with a random matrix R ∈ ℝ^{D×k} to generate a compact representation B = AR ∈ ℝ^{n×k}. For estimating l2 distances, R typically consists of i.i.d. entries in N(0, 1); hence we call it normal random projections. For l1, R consists of i.i.d. Cauchy C(0, 1) [9]. However, the recent impossibility result [5] has ruled out estimators that could be metrics for dimension reduction in l1.
Denote by v1, v2 ∈ ℝ^k the two rows of B corresponding to the original data points u1, u2 ∈ ℝ^D. We also introduce notation for the marginal l2 norms: m1 = ‖u1‖², m2 = ‖u2‖².
4.1 Normal Random Projections
In this case, R consists of i.i.d. N(0, 1) entries. It is easy to show that the following linear estimators of the inner product a and the squared l2 distance d⁽²⁾ are unbiased:

â_NRP,MF = (1/k) v1ᵀv2,    d̂⁽²⁾_NRP,MF = (1/k) ‖v1 − v2‖²,   (11)

with variances [15, 17]

Var(â_NRP,MF) = (1/k) (m1 m2 + a²),    Var(d̂⁽²⁾_NRP,MF) = 2[d⁽²⁾]²/k.   (12)

Assuming that the margins m1 = ‖u1‖² and m2 = ‖u2‖² are known, [15] provides a maximum likelihood estimator, denoted by â_NRP,MLE, whose (asymptotic) variance is

Var(â_NRP,MLE) = (1/k) (m1 m2 − a²)² / (m1 m2 + a²) + O(k⁻²).   (13)
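For comparison, the baseline of (11) takes only a few lines. This is a sketch under our own naming; in practice R would be generated once for the whole matrix A rather than per pair.

```python
import numpy as np

def nrp_estimates(u1, u2, k, rng):
    """Normal random projections (Eq. 11): project both points with a
    shared Gaussian matrix R, then use the simple unbiased estimators."""
    R = rng.standard_normal((u1.shape[0], k))   # entries i.i.d. N(0, 1)
    v1, v2 = u1 @ R, u2 @ R                     # corresponding rows of B = AR
    return np.dot(v1, v2) / k, np.sum((v1 - v2) ** 2) / k
```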
4.2 Cauchy Random Projections for Dimension Reduction in l1
In this case, R consists of i.i.d. entries in Cauchy C(0, 1). [9] proposed an estimator based on the absolute sample median. Recently, [14] proposed a variety of nonlinear estimators, including a bias-corrected sample median estimator, a bias-corrected geometric mean estimator, and a bias-corrected maximum likelihood estimator. An analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in l1 is also proved in [14], based on the bias-corrected geometric mean estimator.

We only list the maximum likelihood estimator derived in [14], because it is the most accurate one:

d̂⁽¹⁾_CRP,MLE,c = (1 − 1/k) d̂⁽¹⁾_CRP,MLE,   (14)

where d̂⁽¹⁾_CRP,MLE solves the nonlinear MLE equation

−k / d̂⁽¹⁾_CRP,MLE + Σ_{j=1}^{k} 2 d̂⁽¹⁾_CRP,MLE / ((v1,j − v2,j)² + [d̂⁽¹⁾_CRP,MLE]²) = 0.   (15)

[14] shows that

Var(d̂⁽¹⁾_CRP,MLE,c) = 2[d⁽¹⁾]²/k + 3[d⁽¹⁾]²/k² + O(1/k³).   (16)
4.3 General Stable Random Projections for Dimension Reduction in lp (0 < p ≤ 2)
[10] generalized the bias-corrected geometric mean estimator to general stable random projections for dimension reduction in lp (0 < p ≤ 2), and provided the theoretical variances and exponential tail bounds. Of course, CRS can also be applied to approximating any lp distances.
5 Improving CRS Using Marginal Information
It is often reasonable to assume that we know the marginal information such as marginal l2 norms,
numbers of non-zeros, or even marginal histograms. This often leads to (much) sharper estimates,
by maximizing the likelihood under marginal constraints. In the boolean data case, we can express
the MLE solution explicitly and derive a closed-form (asymptotic) variance. In general real-valued
data, the joint likelihood is not available; we propose an approximate MLE solution.
5.1 Boolean (0/1) Data
In 0/1 data, estimating the inner product becomes estimating a two-way contingency table, which has four cells. Because of the margin constraints, there is only one degree of freedom. Therefore, it is not hard to show that the MLE of a is the solution, denoted by â_{0/1,MLE}, to a cubic equation

s11/a − s10/(f1 − a) − s01/(f2 − a) + s00/(D − f1 − f2 + a) = 0,   (17)

where s11 = #{j : ũ1,j = ũ2,j = 1}, s10 = #{j : ũ1,j = 1, ũ2,j = 0}, s01 = #{j : ũ1,j = 0, ũ2,j = 1}, s00 = #{j : ũ1,j = 0, ũ2,j = 0}, for j = 1, 2, ..., Ds.

The (asymptotic) variance of â_{0/1,MLE} is proved [11–13] to be

Var(â_{0/1,MLE}) = E(D/Ds) · 1 / (1/a + 1/(f1 − a) + 1/(f2 − a) + 1/(D − f1 − f2 + a)).   (18)
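Since the score in (17) is monotone on the feasible range of a two-way table, a bracketed root finder solves it directly. A sketch with a hypothetical helper of our own; it assumes all four cell counts are positive so the bracket contains a sign change, and degenerate counts would need special handling:

```python
from scipy.optimize import brentq

def mle_boolean(s11, s10, s01, s00, f1, f2, D):
    """Solve Eq. (17) for the inner product a in 0/1 data, given the
    four sample cell counts and the known margins f1, f2, D."""
    def score(a):
        return (s11 / a - s10 / (f1 - a) - s01 / (f2 - a)
                + s00 / (D - f1 - f2 + a))
    eps = 1e-8
    lo = max(0.0, f1 + f2 - D) + eps    # feasible interval for a 2x2 table
    hi = min(f1, f2) - eps
    return brentq(score, lo, hi)
```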
5.2 Real-valued Data
A practical solution is to assume some parametric form of the (bivariate) data distribution based on prior knowledge, and then solve an MLE considering various constraints. Suppose the samples (ũ1,j, ũ2,j) are i.i.d. bivariate normal with moments determined by the population moments, i.e.,

( v̂1,j ; v̂2,j ) = ( ũ1,j − ū1 ; ũ2,j − ū2 ) ∼ N( (0; 0), Σ ),   (19)

Σ = (1/Ds) (Ds/D) [ ‖u1‖² − D ū1²     u1ᵀu2 − D ū1 ū2
                    u1ᵀu2 − D ū1 ū2    ‖u2‖² − D ū2²  ]
  = (1/Ds) [ m̆1  ă
             ă   m̆2 ],   (20)

where ū1 = Σ_{i=1}^{D} u1,i / D and ū2 = Σ_{i=1}^{D} u2,i / D are the population means, and m̆1 = (Ds/D)(‖u1‖² − D ū1²), m̆2 = (Ds/D)(‖u2‖² − D ū2²), ă = (Ds/D)(u1ᵀu2 − D ū1 ū2). Suppose that ū1, ū2, m1 = ‖u1‖², and m2 = ‖u2‖² are known; an MLE for a = u1ᵀu2, denoted by â_MLE,N, is

â_MLE,N = (D/Ds) ă + D ū1 ū2,   (21)

where, similar to Lemma 2 of [15], ă is the solution to a cubic equation:

ă³ − ă² v̂1ᵀv̂2 + ă ( −m̆1 m̆2 + m̆1 ‖v̂2‖² + m̆2 ‖v̂1‖² ) − m̆1 m̆2 v̂1ᵀv̂2 = 0.   (22)

â_MLE,N is fairly robust, although sometimes we observe that the biases are quite noticeable. In general, this is a good bias-variance trade-off (especially when k is not too large). Intuitively, the reason why this (seemingly crude) assumption of bivariate normality works well is that, once we have fixed the margins, we have removed, to a large extent, the non-normal component of the data.
6 Theoretical Comparisons of CRS With Random Projections
As reflected by their variances, for general data types, whether CRS is better than random projections depends on two competing factors: data sparsity and data heavy-tailedness. However, in the
following two important scenarios, CRS outperforms random projections.
6.1 Boolean (0/1) data
In this case, the marginal norms are the same as the numbers of non-zeros, i.e., m_i = ‖u_i‖² = f_i. Figure 3 plots the ratio Var(â_MF)/Var(â_NRP,MF), verifying that CRS is (considerably) more accurate:

Var(â_MF)/Var(â_NRP,MF) = (max(f1, f2)/(f1 f2 + a²)) · 1/(1/a + 1/(D − a)) ≤ max(f1, f2) a / (f1 f2 + a²) ≤ 1.
Figure 4 plots Var(â_{0/1,MLE})/Var(â_NRP,MLE). In most of the possible range of the data, this ratio is less than 1. When u1 and u2 are very close (e.g., a ≈ f2 ≈ f1), random projections appear more accurate. However, when this does occur, the absolute variances are so small (even zero) that their ratio does not matter.
[Figure omitted: four panels of variance ratios vs. a/f2, one per value of f2/f1.]
Figure 3: The variance ratios, Var(â_MF)/Var(â_NRP,MF), show that CRS has smaller variances than random projections when no marginal information is used. We let f1 ≥ f2 and f2 = αf1 with α = 0.2, 0.5, 0.8, 1.0. For each α, we plot from f1 = 0.05D to f1 = 0.95D, spaced at 0.05D.
[Figure omitted: four panels of variance ratios vs. a/f2, one per value of f2/f1.]
Figure 4: The ratios, Var(â_{0/1,MLE})/Var(â_NRP,MLE), show that CRS usually has smaller variances than random projections, except when f1 ≈ f2 ≈ a.
6.2 Nearly Independent Data
Suppose two data points u1 and u2 are independent (or, less strictly, uncorrelated to the second order); then it is easy to show that the variance of CRS is always smaller:

Var(â_MF) ≈ (max(f1, f2)/D) (m1 m2/k) ≤ Var(â_NRP,MF) = (m1 m2 + a²)/k,   (23)

even if we ignore the data sparsity. Therefore, CRS will be much better for estimating inner products in nearly independent data. Once we have obtained the inner products, we can infer the l2 distances easily by d⁽²⁾ = m1 + m2 − 2a, since the margins, m1 and m2, are easy to obtain exactly.

In high dimensions, it is often the case that most of the data points are only very weakly correlated.
6.3 Comparing the Computational Efficiency
As previously mentioned, the cost of constructing sketches for A ∈ ℝ^{n×D} would be O(nD) (or, more precisely, O(Σ_{i=1}^{n} f_i)). The cost of (normal) random projections would be O(nDk), which can be reduced to O(nDk/3) using sparse random projections [1]. Therefore, it is possible that CRS is considerably more efficient than random projections in the sampling stage.²

In the estimation stage, CRS costs O(2k) to compute the sample distance for each pair. This cost is only O(k) in random projections. Since k is very small, the difference should not be a concern.
7 Empirical Evaluations
We compare CRS with random projections (RP) using real data, including n = 100 randomly
sampled documents from the NSF data [7] (sparsity ≈ 1%), n = 100 documents from the NEWSGROUP data [4] (sparsity ≈ 1%), and one class of the COREL image data (n = 80, sparsity ≈ 5%).
We estimate all pairwise inner products, l1 and l2 distances, using both CRS and RP. For each pair,
we obtain 50 runs and average the absolute errors. We compare the median errors and the percentage
of pairs for which CRS does better than random projections.
The results are presented in Figures 5, 6, 7. In each panel, the dashed curve indicates that we sample
each data point with equal sample size (k). For CRS, we can adjust the sample size according to
the sparsity, reflected by the solid curves. We adjust sample sizes only roughly. The data points are
divided into 3 groups according to sparsity. Data in different groups are assigned different sample
sizes for CRS. For random projections, we use the average sample size.
For both NSF and NEWSGROUP data, CRS overwhelmingly outperforms RP for estimating inner
products and l2 distances (both using the marginal information). CRS also outperforms RP for
approximating l1 and l2 distances (without using the margins).
For the COREL data, CRS still outperforms RP for approximating inner products and l2 distances
(using the margins). However, RP considerably outperforms CRS for approximating l1 distances
and l2 distances (without using the margins). Note that the COREL image data are not too sparse
and are considerably more heavy-tailed than the NSF and NEWSGROUP data [13].
[Figure omitted: eight panels of error ratios and win percentages vs. sample size k, for inner product, l1, l2, and l2 (margins).]
Figure 5: NSF data. Upper four panels: ratios (CRS over RP (random projections)) of the median
absolute errors; values < 1 indicate that CRS does better. Bottom four panels: percentage of pairs
for which CRS has smaller errors than RP; values > 0.5 indicate that CRS does better. Dashed
curves correspond to fixed sample sizes while solid curves indicate that we (crudely) adjust sketch
sizes in CRS according to data sparsity. In this case, CRS is overwhelmingly better than RP for
approximating inner products and l2 distances (both using margins).
8 Conclusion
There are many applications of l1 and l2 distances on large sparse datasets. We propose a new
sketch-based method, Conditional Random Sampling (CRS), which is provably better than random
projections, at least for the important special cases of boolean data and nearly independent data. In
general non-boolean data, CRS compares favorably, both theoretically and empirically, especially
when we take advantage of the margins (which are easier to compute than distances).
² [16] proposed very sparse random projections to reduce the cost O(nDk) down to O(n√D k).
[Figure omitted: eight panels of error ratios and win percentages vs. sample size k.]
Figure 6: NEWSGROUP data. The results are quite similar to those in Figure 5 for the NSF data.
In this case, it is more obvious that adjusting sketch sizes helps CRS.
[Figure omitted: eight panels of error ratios and win percentages vs. sample size k.]
Figure 7: COREL image data.
Acknowledgment
We thank Chris Burges, David Heckerman, Chris Meek, Andrew Ng, Art Owen, and Robert Tibshirani
for various helpful conversations, comments, and discussions. We thank Ella Bingham, Inderjit
Dhillon, and Matthias Hein for the datasets.
References
[1] D. Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, 2003.
[2] D. Achlioptas, F. McSherry, and B. Schölkopf. Sampling techniques for kernel methods. In NIPS, pages 335–342, 2001.
[3] R. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. Machine Learning, 63(2):161–182, 2006.
[4] E. Bingham and H. Mannila. Random projection in dimensionality reduction: Applications to image and text data. In KDD, pages 245–250, 2001.
[5] B. Brinkman and M. Charikar. On the impossibility of dimension reduction in l1. Journal of ACM, 52(2):766–788, 2005.
[6] A. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21–29, 1997.
[7] I. Dhillon and D. Modha. Concept decompositions for large sparse text data using clustering. Machine Learning, 42(1-2):143–175, 2001.
[8] P. Drineas and M. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005.
[9] P. Indyk. Stable distributions, pseudorandom generators, embeddings and data stream computation. In FOCS, pages 189–197, 2000.
[10] P. Li. Very sparse stable random projections, estimators and tail bounds for stable random projections. Technical report, http://arxiv.org/PS_cache/cs/pdf/0611/0611114.pdf, 2006.
[11] P. Li and K. Church. Using sketches to estimate associations. In HLT/EMNLP, pages 708–715, 2005.
[12] P. Li and K. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, to appear.
[13] P. Li, K. Church, and T. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. Technical Report 2006-08, Department of Statistics, Stanford University, 2006.
[14] P. Li, K. Church, and T. Hastie. Nonlinear estimators and tail bounds for dimensional reduction in l1 using Cauchy random projections. http://arxiv.org/PS_cache/cs/pdf/0610/0610155.pdf, 2006.
[15] P. Li, T. Hastie, and K. Church. Improving random projections using marginal information. In COLT, pages 635–649, 2006.
[16] P. Li, T. Hastie, and K. Church. Very sparse random projections. In KDD, pages 287–296, 2006.
[17] S. Vempala. The Random Projection Method. American Mathematical Society, Providence, RI, 2004.
Chained Boosting
Christian R. Shelton
University of California
Riverside CA 92521
[email protected]
Wesley Huie
University of California
Riverside CA 92521
[email protected]
Kin Fai Kan
University of California
Riverside CA 92521
[email protected]
Abstract
We describe a method to learn to make sequential stopping decisions, such as
those made along a processing pipeline. We envision a scenario in which a series
of decisions must be made as to whether to continue to process. Further processing
costs time and resources, but may add value. Our goal is to create, based on historic data, a series of decision rules (one at each stage in the pipeline) that decide,
based on information gathered up to that point, whether to continue processing
the part. We demonstrate how our framework encompasses problems from manufacturing to vision processing. We derive a quadratic (in the number of decisions)
bound on testing performance and provide empirical results on object detection.
1 Pipelined Decisions
In many decision problems, all of the data do not arrive at the same time. Often further data collection can be expensive and we would like to make a decision without accruing the added cost.
Consider silicon wafer manufacturing. The wafer is processed in a series of stages. After each stage
some tests are performed to judge the quality of the wafer. If the wafer fails (due to flaws), then the
processing time, energy, and materials are wasted. So, we would like to detect such a failure as early
as possible in the production pipeline.
A similar problem can occur in vision processing. Consider the case of object detection in images.
Often low-level pixel operations (such as downsampling an image) can be performed in parallel by
dedicated hardware (on a video capture board, for example). However, searching each subimage
patch of the whole image to test whether it is the object in question takes time that is proportional to
the number of pixels. Therefore, we can imagine a image pipeline in which low resolution versions
of the whole image are scanned first. Subimages which are extremely unlikely to contain the desired
object are rejected and only those which pass are processed at higher resolution. In this way, we
save on many pixel operations and can reduce the cost in time to process an image.
Even if downsampling is not possible through dedicated hardware, for most object detection
schemes, the image must be downsampled to form an image pyramid in order to search for the
object at different scales. Therefore, we can run the early stages of such a pipelined detector at the
low resolution versions of the image and throw out large regions of the high resolution versions.
Most of the processing is spent searching for small faces (at the high resolutions), so this method
can save a lot of processing.
Such chained decisions also occur if there is a human in the decision process (to ask further clarifying
questions in database search, for instance). We propose a framework that can model all of these
scenarios and allow such decision rules to be learned from historic data. We give a learning algorithm
based on the minimization of the exponential loss and conclude with some experimental results.
1.1 Problem Formulation
Let there be s stages to the processing pipeline. We assume that there is a static distribution from which the parts, objects, or units to be processed are drawn. Let p(x, c) represent this distribution, in which x is a vector of the features of this unit and c represents the costs associated with this unit. In particular, let x_i (1 ≤ i ≤ s) be the set of measurements (features) available to the decision maker immediately following stage i. Let c_i (1 ≤ i ≤ s) be the cost of rejecting (or stopping the processing of) this unit immediately following stage i. Finally, let c_{s+1} be the cost of allowing the part to pass through all processing stages.

Note that c_i need not be monotonic in i. To take our wafer manufacturing example, for wafers that are good we might let c_i = i for 1 ≤ i ≤ s, indicating that if a wafer is rejected at any stage, one unit of work has been invested for each stage of processing. For the same good wafers, we might let c_{s+1} = s − 1000, indicating that the value of a completed wafer is 1000 units and therefore the total cost is the processing cost minus the resulting value. For a flawed wafer, the values might be the same, except for c_{s+1}, which we would set to s, indicating that there is no value for a bad wafer.

Note that the costs may be either positive or negative. However, only their relative values are important. Once a part has been drawn from the distribution, there is no way of affecting the "base level" for the value of the part. Therefore, we assume for the remainder of this paper that c_i ≥ 0 for 1 ≤ i ≤ s + 1 and that c_i = 0 for some value of i (between 1 and s + 1).

Our goal is to produce a series of decision rules f_i(x_i) for 1 ≤ i ≤ s. We let f_i have a range of {0, 1} and let 0 indicate that processing should continue and 1 indicate that processing should be halted. We let f denote the collection of these s decision rules and augment the collection with an additional rule f_{s+1} which is identically 1 (for ease of notation). The cost of using these rules to halt processing an example is therefore

L(f(x), c) = Σ_{i=1}^{s+1} c_i f_i(x_i) Π_{j=1}^{i−1} (1 − f_j(x_j)).

We would like to find a set of decision rules that minimize E_p[L(f(x), c)].

While p(x, c) is not known, we do have a series of samples (training set) D = {(x¹, c¹), (x², c²), ..., (xⁿ, cⁿ)} of n examples drawn from the distribution p. We use superscripts to denote the example index and subscripts to denote the stage index.
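The loss is easier to read in code than in product notation: the product of (1 − f_j) terms simply selects the first stage whose rule fires. A minimal sketch (our own transcription, with hypothetical names):

```python
def pipeline_loss(rules, x, c):
    """Cost of one example under decision rules f_1..f_s.
    rules[i](x[i]) returns 1 to halt after stage i+1, 0 to continue;
    c has s+1 entries, c[s] being the cost of passing every stage."""
    for i, (f_i, x_i) in enumerate(zip(rules, x)):
        if f_i(x_i):          # halted: pay the rejection cost here
            return c[i]
    return c[len(rules)]      # passed all s stages (f_{s+1} = 1)
```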
2 Boosting Solution
For this paper, we consider constructing the rules f_i from simpler decision rules, much as in the Adaboost algorithm [1, 2]. We assume that each decision f_i(x_i) is computed as the threshold of another function g_i(x_i): f_i(x_i) = I(g_i(x_i) > 0).¹ We bound the empirical risk:

Σ_{k=1}^{n} L(f(x^k), c^k) = Σ_{k=1}^{n} Σ_{i=1}^{s+1} c_i^k I(g_i(x_i^k) > 0) Π_{j=1}^{i−1} I(g_j(x_j^k) ≤ 0)
   ≤ Σ_{k=1}^{n} Σ_{i=1}^{s+1} c_i^k e^{g_i(x_i^k)} Π_{j=1}^{i−1} e^{−g_j(x_j^k)} = Σ_{k=1}^{n} Σ_{i=1}^{s+1} c_i^k exp(g_i(x_i^k) − Σ_{j=1}^{i−1} g_j(x_j^k)).   (1)

Our decision to make all costs positive ensures that the bounds hold. Our decision to make the optimal cost zero helps to ensure that the bound is reasonably tight.

As in boosting, we restrict g_i(x_i) to take the form Σ_{l=1}^{m_i} α_{i,l} h_{i,l}(x_i), the weighted sum of m_i subclassifiers, each of which returns either −1 or +1. We will construct these weighted sums incrementally and greedily, adding one additional subclassifier and associated weight at each step. We will pick the stage, weight, and function of the subclassifier in order to make the largest negative change in the exponential bound to the empirical risk. The subclassifiers, h_{i,l}, will be drawn from a small class of hypotheses, H.

¹ I is the indicator function that equals 1 if the argument is true and 0 otherwise.
1. Initialize g_i(x) = 0 for all stages i.
2. Initialize w_i^k = c_i^k for all stages i and examples k.
3. For each stage i:
   (a) Calculate targets for each training example, as shown in equation 5.
   (b) Let h be the result of running the base learner on this set.
   (c) Calculate the corresponding α as per equation 3.
   (d) Score this classification as per equation 4.
4. Select the stage î with the best (highest) score. Let ĥ and α̂ be the classifier and weight found at that stage.
5. Let g_î(x) ← g_î(x) + α̂ ĥ(x).
6. Update the weights (see equation 2):
   • ∀ 1 ≤ k ≤ n, multiply w_î^k by e^{α̂ ĥ(x_î^k)}.
   • ∀ 1 ≤ k ≤ n, j > î, multiply w_j^k by e^{−α̂ ĥ(x_î^k)}.
7. Repeat from step 3.
Figure 1: Chained Boosting Algorithm
2.1 Weight Optimization
We first assume that the stage at which to add a new subclassifier and the subclassifier to add have already been chosen: î and ĥ, respectively. That is, ĥ will become h_{î, m_î+1}, but we simplify it for ease of expression. Our goal is to find α_{î, m_î+1}, which we similarly abbreviate to α̂. We first define

w_i^k = c_i^k exp(g_i(x_i^k) − Σ_{j=1}^{i−1} g_j(x_j^k))   (2)

as the weight of example k at stage i, or its current contribution to our risk bound. If we let D_ĥ^+ be the set of indexes of the members of D for which ĥ returns +1, and let D_ĥ^− be similarly defined for those for which ĥ returns −1, we can further define

W_î^+ = Σ_{k ∈ D_ĥ^+} w_î^k + Σ_{k ∈ D_ĥ^−} Σ_{i=î+1}^{s+1} w_i^k,    W_î^− = Σ_{k ∈ D_ĥ^−} w_î^k + Σ_{k ∈ D_ĥ^+} Σ_{i=î+1}^{s+1} w_i^k.

We interpret W_î^+ to be the sum of the weights which ĥ will emphasize. That is, it corresponds to the weights along the path that ĥ selects: for those examples for which ĥ recommends termination, we add the current weight (related to the cost of stopping the processing at this stage); for those examples for which ĥ recommends continued processing, we add in all future weights (related to all future costs associated with this example). W_î^− can be similarly interpreted to be the weights (or costs) that ĥ recommends skipping.

If we optimize the loss bound of Equation 1 with respect to α̂, we obtain

α̂ = (1/2) log(W_î^− / W_î^+).   (3)

The more weight (cost) that the rule recommends to skip, the higher its α coefficient.
2.2 Full Optimization
Using Equation 3 it is straightforward to show that the reduction in Equation 1 due to the addition of this new subclassifier will be

W_î^+ (1 − e^{α̂}) + W_î^− (1 − e^{−α̂}).   (4)

We know of no efficient method for determining î, the stage at which to add a subclassifier, except by exhaustive search. However, within a stage, the choice of which subclassifier to use becomes one of maximizing

Σ_{k=1}^{n} z_î^k ĥ(x_î^k),    where    z_î^k = Σ_{i=î+1}^{s+1} w_i^k − w_î^k,   (5)

with respect to ĥ. This is equivalent to a weighted empirical risk minimization where the training set is {x_î^1, x_î^2, ..., x_î^n}. The label of x_î^k is the sign of z_î^k, and the weight of the same example is the magnitude of z_î^k.
2.3 Algorithm
The resulting algorithm is only slightly more complex than standard Adaboost. Instead of a weight
vector (one weight for each data example), we now have a weight matrix (one weight for each
data example for each stage). We initialize each weight to be the cost associated with halting the
corresponding example at the corresponding stage. We start with all g_i(x) = 0. The complete algorithm is as in Figure 1.

Each time through steps 3 through 7, we complete one "round" and add one additional rule to one stage of the processing. We stop executing this loop when α̂ ≤ 0 or when an iteration counter exceeds a preset threshold.
Bottom-Up Variation
In situations where information is only gained after each stage (such as in section 4), we can also
train the classifiers "bottom-up." That is, we can start by only adding classifiers to the last stage.
Once finished with it, we proceed to the previous stage, and so on. Thus instead of selecting the
best stage, i, in each round, we systematically work our way backward through the stages, never
revisiting previously set stages.
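Putting Sections 2.1–2.3 together, the following is a minimal brute-force sketch of the training loop in Python/NumPy. It is our own illustration, not the authors' implementation: instead of calling a weighted base learner on the signed targets of equation 5, it simply scores every candidate stump in a finite pool per stage, which is equivalent for small hypothesis classes.

```python
import numpy as np

def chained_boost(X, C, pools, rounds):
    """X[i][k]: features of example k at stage i; C[k]: costs c_1..c_{s+1};
    pools[i]: candidate base classifiers h (mapping features to +/-1)."""
    s = len(X)
    W = np.asarray(C, dtype=float)            # weight matrix w_i^k, init to c_i^k
    G = [[] for _ in range(s)]                # learned (alpha, h) per stage
    for _ in range(rounds):
        best = None
        for i in range(s):
            future = W[:, i + 1:].sum(axis=1) # each example's later-stage weights
            for h in pools[i]:
                pred = np.array([h(xk) for xk in X[i]])            # +/-1
                Wp = np.where(pred > 0, W[:, i], future).sum()     # W^+
                Wm = np.where(pred < 0, W[:, i], future).sum()     # W^-
                if Wp <= 0 or Wm <= 0:
                    continue
                alpha = 0.5 * np.log(Wm / Wp)                      # Eq. 3
                gain = Wp * (1 - np.exp(alpha)) + Wm * (1 - np.exp(-alpha))  # Eq. 4
                if best is None or gain > best[0]:
                    best = (gain, i, h, alpha, pred)
        if best is None or best[3] <= 0:      # stop once alpha <= 0
            break
        _, i, h, alpha, pred = best
        G[i].append((alpha, h))
        W[:, i] *= np.exp(alpha * pred)                 # weight update (step 6)
        W[:, i + 1:] *= np.exp(-alpha * pred)[:, None]
    return G
```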
3 Performance Bounds
Using the bounds in [3] we can provide a risk bound for this problem. We let E denote the expectation with respect to the true distribution p(x, c) and Ê_n denote the empirical average with respect to the n training samples. We first bound the indicator function with a piece-wise linear function, b_θ, with a maximum slope of 1/θ:

I(z > 0) ≤ b_θ(z) = max(min(1, 1 + z/θ), 0).

We then bound the loss: L(f(x), c) ≤ φ_θ(f(x), c), where

φ_θ(f(x), c) = Σ_{i=1}^{s+1} c_i min{ b_θ(g_i(x_i)), b_θ(−g_{i−1}(x_{i−1})), b_θ(−g_{i−2}(x_{i−2})), ..., b_θ(−g_1(x_1)) }
             = Σ_{i=1}^{s+1} c_i B_θ^i(g_i(x_i), g_{i−1}(x_{i−1}), ..., g_1(x_1)).

We replaced the product of indicator functions with a minimization and then bounded each indicator with b_θ. B_θ^i is just a more compact presentation of the composition of the function b_θ and the minimization. We assume that the weights α at each stage have been scaled to sum to 1. This has no effect on the resulting classifications, but is necessary for the derivation below. Before stating the theorem, for clarity, we state two standard definitions:

Definition 1. Let p(x) be a probability distribution on the set X and let {x¹, x², ..., xⁿ} be n independent samples from p(x). Let σ¹, σ², ..., σⁿ be n independent samples from a Rademacher random variable (a binary variable that takes on either +1 or −1 with equal probability). Let F be a class of functions mapping X to ℝ. Define the Rademacher Complexity of F to be

R_n(F) = E[ sup_{f∈F} (1/n) Σ_{i=1}^{n} σⁱ f(xⁱ) ],

where the expectation is over the random draws of x¹ through xⁿ and σ¹ through σⁿ.

Definition 2. Let p(x), {x¹, x², ..., xⁿ}, and F be as above. Let g¹, g², ..., gⁿ be n independent samples from a Gaussian distribution with mean 0 and variance 1. Analogous to the above definition, define the Gaussian Complexity of F to be

G_n(F) = E[ sup_{f∈F} (1/n) Σ_{i=1}^{n} gⁱ f(xⁱ) ].

We can now state our theorem, bounding the true risk by a function of the empirical risk:

Theorem 3. Let H_1, H_2, ..., H_s be the sequence of the sets of functions from which the base classifier draws for chain boosting. If H_i is closed under negation for all i, all costs are bounded between 0 and 1, and the weights for the classifiers at each stage sum to 1, then with probability 1 − δ,

E[L(f(x), c)] ≤ Ê_n[φ_θ(f(x), c)] + (k/θ) Σ_{i=1}^{s} (i + 1) G_n(H_i) + √(8 ln(2/δ) / n)

for some constant k.

Proof. Theorem 8 of [3] states

E[L(x, c)] ≤ Ê_n(φ_θ(f(x), c)) + 2 R_n(φ_θ ∘ F) + √(8 ln(2/δ) / n),

and therefore we need only bound the R_n(φ_θ ∘ F) term to demonstrate our theorem. For our case, we have

R_n(φ_θ ∘ F) = E[ sup_{f∈F} (1/n) Σ_{i=1}^{n} σⁱ φ_θ(f(xⁱ), cⁱ) ]
             = E[ sup_{f∈F} (1/n) Σ_{i=1}^{n} σⁱ Σ_{j=1}^{s+1} c_jⁱ B_θ^s(g_j(x_jⁱ), g_{j−1}(x_{j−1}ⁱ), ..., g_1(x_1ⁱ)) ]
             ≤ Σ_{j=1}^{s+1} E[ sup_{f∈F} (1/n) Σ_{i=1}^{n} σⁱ B_θ^s(g_j(x_jⁱ), ..., g_1(x_1ⁱ)) ] = Σ_{j=1}^{s+1} R_n(B_θ^s ∘ G^j),

where G_i is the space of convex combinations of functions from H_i and G^i is the cross product of G_1 through G_i. The inequality comes from switching the expectation and the maximization and then from dropping the c_jⁱ (see [4], lemma 5).

Lemma 4 of [3] states that there exists a k such that R_n(B_θ^s ∘ G^j) ≤ k G_n(B_θ^s ∘ G^j). Theorem 14 of the same paper allows us to conclude that G_n(B_θ^s ∘ G^j) ≤ (2/θ) Σ_{i=1}^{j} G_n(G_i). (Because B_θ^s is the minimum over a set of functions with maximum slope of 1/θ, the maximum slope of B_θ^s is also 1/θ.) Theorem 12, part 2, states G_n(G_i) = G_n(H_i). Taken together, this proves our result.

Note that this bound has only quadratic dependence on s, the length of the chain, and does not explicitly depend on the number of rounds of boosting (the number of rounds affects φ_θ, which, in turn, affects the bound).
4 Application
We tested our algorithm on the MIT face database [5]. This database contains 19-by-19 gray-scale
images of faces and non-faces. The training set has 2429 face images and 4548 non-face images.
The testing set has 472 faces and 23573 non-faces. We weighted the training set images so that the
ratio of the weight of face images to non-face images matched the ratio in the testing set.
[Figure omitted: (a) training/testing cost and error vs. number of rounds; (b) false-positive vs. false-negative rates and average number of pixels evaluated for CB Global, CB Bottom-up, SVM, and Boosting.]
Figure 2: (a) Accuracy versus the number of rounds for a typical run, (b) Error rates and average
costs for a variety of cost settings.
4.1 Object Detection as Chained Boosting
Our goal is to produce a classifier that can identify non-face images at very low resolutions, thereby
allowing for quick processing of large images (as explained later). Most image patches (or subwindows) do not contain faces. We, therefore, built a multi-stage detection system where any early
rejection is labeled as a non-face. The first stage looks at image patches of size 3-by-3 (i.e., a lower-resolution version of the 19-by-19 original image). The next stage looks at the same image, but at
a resolution of 6-by-6. The third stage considers the image at 12-by-12. We did not present the full
19-by-19 images as the classification did not significantly improve over the 12-by-12 versions.
We employ a simple base classifier: the set of all functions that look at a single pixel and predict the
class by thresholding the pixel?s value. The total classifier at any stage is a linear combination of
these simple classifiers. For a given stage, all of the base classifiers that target a particular pixel are
added together producing a complex function of the value of the pixel. Yet, this pixel can only take
on a finite number of values (256 in this case). Therefore, we can compile this set of base classifiers
into a single look-up function that maps the brightness of the pixel into a real number. The total
classifier for the whole stage is merely the sum of these look-up functions. Therefore, the total work
necessary to compute the classification at a stage is proportional to the number of pixels in the image
considered at that stage, regardless of the number of base classifiers used.
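A sketch of that compilation step (our own illustration; the stump representation (pixel, threshold, α) is an assumption):

```python
import numpy as np

def compile_stage(stumps):
    """Fold all weighted per-pixel threshold stumps of one stage into
    256-entry look-up tables: tables[p][v] is the summed vote when
    pixel p has brightness v, so evaluating the stage costs one table
    read per pixel regardless of how many stumps were added."""
    tables = {}
    for pixel, thresh, alpha in stumps:
        t = tables.setdefault(pixel, np.zeros(256))
        t += np.where(np.arange(256) > thresh, alpha, -alpha)
    return tables

def stage_score(tables, image):
    """g_i(x) for one flattened image patch."""
    return sum(t[image[p]] for p, t in tables.items())
```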
We therefore assign a cost to each stage of processing proportional to the number of pixels at that
stage. If the image is a face, we add a negative cost (i.e. bonus) if the image is allowed to pass
through all of the processing stages (and is therefore ?accepted? as a face). If the image is a nonface, we add a bonus if the image is rejected at any stage before completion (i.e. correctly labelled).
While this dataset has only segmented image patches, in a real application, the classifier would be
run on all sub-windows of an image. More importantly, it would also be run at multiple resolutions
in order to detect faces of different sizes (or at different distances from the camera). The classifier
chain could be run simultaneously at each of these resolutions. To wit, while running the final 12-by-12 stage at one resolution of the image, the 6-by-6 (previous) stage could be run at the same image resolution. This 6-by-6 processing would be the necessary pre-processing step to running the 12-by-12 stage at a higher resolution. As we run our final scan for big faces (at a low resolution), we can
already (at the same image resolution) be performing initial tests to throw out portions of the image
as not worthy of testing for smaller faces (at a higher resolution). Most of the work of detecting
objects must be done at the high resolutions because there are many more overlapping subwindows.
This chained method allows the culling of most of this high-resolution image processing.
4.2 Experiments
For each example, we construct a vector of stage costs as above. We add a constant to this vector to
ensure that the minimal element is zero, as per section 1.1. We scale all vectors by the same amount
to ensure that the maximal value is 1. This means that the number of misclassifications is an upper
bound on the total cost that the learning algorithm is trying to minimize.
There are three flexible quantities in this problem formulation: the cost of a pixel evaluation, the
bonus for a correct face classification, and the bonus for a correct non-face classification. Changing
these quantities will control the trade-off between false positives and true positives, and between
classification error and speed.
Figure 2(a) shows the result of a typical run of the algorithm. As a function of the number of
rounds, it plots the cost (that which the algorithm is trying to minimize) and the error (number of
misclassified image patches), for both the training and testing sets (where the training set has been
reweighted to have the same proportion of faces to non-faces as the testing set).
We compared our algorithm?s performance to the performance of support vector machines (SVM)
[6] and Adaboost [1] trained and tested on the highest resolution, 12-by-12, image patches. We
employed SVM-light [7] with a linear kernel. Figure 2(b) compares the error rates for the methods
(solid lines, read against the left vertical axis). Note that the error rates are almost identical for the
methods. The dashed lines (read against the right vertical axis) show the average number of pixels
evaluated (or total processing cost) for each of the methods. The SVM and Adaboost algorithms
have a constant processing cost. Our method (by either training scheme) produces lower processing
cost for most error rates.
5 Related Work
Cascade detectors for vision processing (see [8] or [9] for example) may appear to be similar to the
work in this paper. Especially at first glance for the area of object detection, they appear almost the
same. However, cascade detection and this work (chained detection) are quite different.
Cascade detectors are built one at a time. A coarse detector is first trained. The examples which
pass that detector are then passed to a finer detector for training, and so on. A series of targets for
false-positive rates define the increasing accuracy of the detector cascade.
By contrast, our chain detectors are trained as an ensemble. This is necessary because of two differences in the problem formulation. First, we assume that the information available at each stage
changes. Second, we assume there is an explicit cost model that dictates the cost of proceeding from
stage to stage and the cost of rejection (or acceptance) at any particular stage. By contrast, cascade
detectors are seeking to minimize computational power necessary for a fixed decision. Therefore,
the information available to all of the stages is the same, and there are no fixed costs associated with
each stage.
The ability to train all of the classifiers at the same time is crucial to good performance in our
framework. The first classifier in the chain cannot determine whether it is advantageous to send an
example further along unless it knows how the later stages will process the example. Conversely,
the later stages cannot construct optimal classifications until they know the distribution of examples
that they will see.
Section 4.1 may further confuse the matter. We demonstrated how chained boosting can be used to
reduce the computational costs of object detection in images. Cascade detectors are often used for
the same purpose. However, the reductions in computational time come from two different sources.
In cascade detectors, the time taken to evaluate a given image patch is reduced. In our chained
detector formulation, image patches are ignored completely based on analysis of lower resolution
patches in the image pyramid. To further illustrate the difference, cascade detectors can always
be used to speed up asymmetric classification tasks (and are often applied to image detection).
By contrast, in Section 4.1 we have exploited the fact that object detection in images is typically
performed at multiple scales to turn the problem into a pipeline and apply our framework.
Cascade detectors address situations in which prior class probabilities are not equal, while chained
detectors address situations in which information is gained at a cost. Both are valid (and separate)
ways of tackling image processing (and other tasks as well). In many ways, they are complementary
approaches.
Classic sequence analysis [10, 11] also addresses the problem of optimal stopping. However, it
assumes that the samples are drawn i.i.d. from (usually) a known distribution. Our problem is
quite different in that each consecutive sample is drawn from a different (and related) distribution
and our goal is to find a decision rule without producing a generative model. WaldBoost [12] is a
boosting algorithm based on this. It builds a series of features and a ratio comparison test in order
to decide when to stop. For WaldBoost, the available features (information) do not change between
stages. Rather, any feature is available for selection at any point in the chain. Again, this is a
different problem than the one considered in this paper.
6 Conclusions
We feel this framework of staged decision making is useful in a wide variety of areas. This paper
demonstrated how the framework applies to one vision processing task. Obviously it also applies
to manufacturing pipelines where errors can be introduced at different stages. It should also be
applicable to scenarios where information gathering is costly.
Our current formulation only allows for early negative detection. In the face detection example
above, this means that in order to report "face," the classifier must process each stage, even if the
result is assured earlier. In Figure 2(b), clearly the upper-left corner (100% false positives and 0%
false negatives) is reachable with little effort: classify everything positive without looking at any
features. We would like to extend this framework to cover such two-sided early decisions. While
perhaps not useful in manufacturing (or even face detection, where the interesting part of the ROC
curve is far from the upper-left), it would make the framework more applicable to information-gathering applications.
Acknowledgements
This research was supported through the grant "Adaptive Decision Making for Silicon Wafer Testing" from Intel Research and UC MICRO.
References
[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, pages 23–37, 1995.
[2] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 2:463–482, 2002.
[4] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 4:839–860, 2003.
[5] MIT. CBCL face database #1, 2000. http://cbcl.mit.edu/cbcl/softwaredatasets/FaceData2.html.
[6] Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for optimal margin classifiers. In COLT, pages 144–152, 1992.
[7] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods – Support Vector Learning. MIT Press, 1999.
[8] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, pages 511–518, 2001.
[9] Jianxin Wu, Matthew D. Mullin, and James M. Rehg. Linear asymmetric classifier for cascade detectors. In ICML, pages 988–995, 2005.
[10] Abraham Wald. Sequential Analysis. Chapman & Hall, Ltd., 1947.
[11] K. S. Fu. Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, 1968.
[12] Jan Šochman and Jiří Matas. WaldBoost – learning for time constrained sequential detection. In CVPR, pages 150–156, 2005.
2,183 | 2,982 | Shifting, One-Inclusion Mistake Bounds and
Tight Multiclass Expected Risk Bounds
Benjamin I. P. Rubinstein
Computer Science Division
University of California, Berkeley
Berkeley, CA 94720-1776, U.S.A.
[email protected]
Peter L. Bartlett
Computer Science Division and
Department of Statistics
University of California, Berkeley
[email protected]
J. Hyam Rubinstein
Department of Mathematics & Statistics
The University of Melbourne
Parkville, Victoria 3010, Australia
[email protected]
Abstract
Under the prediction model of learning, a prediction strategy is presented with
an i.i.d. sample of n - 1 points in X and corresponding labels from a concept
f ∈ F, and aims to minimize the worst-case probability of erring on an nth point.
By exploiting the structure of F, Haussler et al. achieved a VC(F)/n bound
for the natural one-inclusion prediction strategy, improving on bounds implied by
PAC-type results by a O(log n) factor. The key data structure in their result is
the natural subgraph of the hypercube, the one-inclusion graph; the key step is a
d = VC(F) bound on one-inclusion graph density. The first main result of this
paper is a density bound of $n\binom{n-1}{\leq d-1}\big/\binom{n}{\leq d} < d$, which positively resolves a
conjecture of Kuzmin & Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved mistake bound for the randomized
(deterministic) one-inclusion strategy for all d (for d ∈ Θ(n)). The proof uses
a new form of VC-invariant shifting and a group-theoretic symmetrization. Our
second main result is a k-class analogue of the d/n mistake bound, replacing the
VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by
its natural hypergraph generalization. This bound on expected risk improves on
known PAC-based results by a factor of O(log n) and is shown to be optimal up to
a O(log k) factor. The combinatorial technique of shifting takes a central role in
understanding the one-inclusion (hyper)graph and is a running theme throughout.
1 Introduction

In [4, 3] Haussler, Littlestone and Warmuth proposed the one-inclusion prediction strategy as a natural approach to the prediction (or mistake-driven) model of learning, in which a prediction strategy
maps a training sample and test point to a test prediction with hopefully guaranteed low probability of erring. The significance of their contribution was two-fold. On the one hand, the derived
VC(F)/n upper bound on the worst-case expected risk of the one-inclusion strategy learning from
F ⊆ {0, 1}^X improved on the PAC-based previous best by an order of log n. This was achieved by
taking the structure of the underlying F into account (which had not been done in previous work)
in order to break ties between hypotheses consistent with the training set but offering contradictory
predictions on a given test point. At the same time Haussler [3] introduced the idea of shifting subsets of the n-cube down around the origin (an idea previously developed in combinatorics) as a
powerful tool for learning-theoretic results. In particular, shifting admitted deeply insightful proofs
of Sauer's Lemma and a VC-dimension bound on the density of the one-inclusion graph, the key
result needed for the one-inclusion strategy's expected risk bound. Recently shifting has impacted
on work towards the sample compressibility conjecture of [7], e.g. in [5].
Here we continue to study the one-inclusion graph, the natural graph structure induced by a subset
of the n-cube, and its related prediction strategy under the lens of shifting. After the necessary
background, we develop the technique of shatter-invariant shifting in Section 3. While a subset's
VC-dimension cannot be increased by shifting, shatter-invariant shifting guarantees a finite sequence
of shifts to a fixed-point under which the shattering of a chosen set remains invariant, thus preserving VC-dimension throughout. In Section 4 we apply a group-theoretic symmetrization to tighten
the mistake bound (the worst-case expected risk bound) of the deterministic (randomized) one-inclusion strategy from d/n to ⌈D_n^d⌉/n (D_n^d/n), where D_n^d < d for all n, d. The derived D_n^d
density bound positively resolves a conjecture of Kuzmin & Warmuth which was suggested as a
step towards a correctness proof of the Peeling compression scheme [5]. Finally we generalize the
prediction model, the one-inclusion strategy and its bounds from binary to k-class learning in Section 5. Where Ψ_G-dim(F) and Ψ_P-dim(F) denote the Graph and Pollard dimensions of F, the
best bound on expected risk for k ∈ N to date is O(α log α) for α = Ψ_G-dim(F)/n, for consistent
learners [8, 1, 2, 4]. For large n this is O(log n · Ψ_G-dim(F)/n); we derive an improved bound of
Ψ_P-dim(F)/n which we show is at most a O(log k) factor from optimal. Thus, as in the binary
case, exploiting class structure enables significantly better bounds on expected risk for multiclass
prediction.

As always some proofs have been omitted in the interest of flow or space. In such cases see [8].
2 Definitions & background

In this paper sets/random variables, scalars and vectors will be written in uppercase, lowercase and
bolded typeface as in C, x, v. We define $\binom{n}{\leq r} = \sum_{i=0}^{r}\binom{n}{i}$, [n] = {1, ..., n} and S_n to be the
set of permutations on [n]. We write the density of graph G = (V, E) as dens(G) = |E|/|V|, the
indicator of A as 1[A], and ∃!x ∈ X, P(x) to mean "there exists a unique x ∈ X satisfying P."
2.1 The prediction model of learning

We begin with the basic setup of [4]. Set X is the domain and F ⊆ {0, 1}^X is a concept class on
X. For notational convenience we write sam(x, f) = ((x_1, f(x_1)), ..., (x_n, f(x_n))) for x ∈ X^n,
f ∈ F. A prediction strategy is a mapping of the form $Q : \bigcup_{n>1} (X \times \{0,1\})^{n-1} \times X \to \{0,1\}$.
Definition 2.1 The prediction model of learning concerns the following scenario. Given full knowledge of strategy Q, an adversary picks a distribution P on X and concept f ∈ F so as to maximize
the probability of {Q(sam((X_1, ..., X_{n-1}), f), X_n) ≠ f(X_n)} where the X_i are drawn i.i.d. from P. Thus the measure
of performance is the worst-case expected risk

$$\hat{M}_{Q,F}(n) = \sup_{f\in F}\ \sup_{P}\ E_{X\sim P^n}\left[1\left[Q(\mathrm{sam}((X_1,\ldots,X_{n-1}), f), X_n) \neq f(X_n)\right]\right].$$

A mistake bound for Q with respect to F is an upper bound on $\hat{M}_{Q,F}$.
In contrast to Valiant's PAC model, the prediction learning model is not interested in approximating
f given an f-labeled sample, but instead in predicting f(X_n) with small worst-case probability of
erring. The following allows us to derive mistake bounds by bounding a worst-case average.

Lemma 2.2 (Corollary 2.1 [4]) For any n > 1, concept class F and prediction strategy Q,

$$\hat{M}_{Q,F}(n) \leq \sup_{f\in F}\ \sup_{x\in X^n}\ \frac{1}{n!}\sum_{g\in S_n} 1\left[Q\left(\mathrm{sam}\left(\left(x_{g(1)},\ldots,x_{g(n-1)}\right), f\right), x_{g(n)}\right) \neq f\left(x_{g(n)}\right)\right] = \hat{\hat{M}}_{Q,F}(n).$$

A permutation mistake bound for Q with respect to F is an upper bound on $\hat{\hat{M}}_{Q,F}$.
2.2 The capacity of function classes contained in {0, ..., k}^X

We denote by Π_x(F) = {(f(x_1), ..., f(x_n)) | f ∈ F} the projection of F ⊆ Y^X on x ∈ X^n.

Definition 2.3 The Vapnik-Chervonenkis dimension of concept class F is defined as VC(F) =
sup{n | ∃x ∈ X^n, Π_x(F) = {0,1}^n}. An x witnessing VC(F) is said to be shattered by F.

Lemma 2.4 (Sauer's Lemma [9]) For any n ∈ N and V ⊆ {0,1}^n, $|V| \leq \binom{n}{\leq \mathrm{VC}(V)}$. A subset V
meeting this with equality is called maximum.

It is well-known that the VC-dimension is an inappropriate measure of capacity when |Y| > 2. The
following unifying framework of class capacities for |Y| < ∞ is due to [1].

Definition 2.5 Let k ∈ N, F ⊆ {0, ..., k}^X and Ψ be a family of mappings ψ : {0, ..., k} →
{0, 1, *} called translations. For x ∈ X^n, v ∈ Π_x(F) ⊆ {0, ..., k}^n and ψ ∈ Ψ^n we write
ψ(v) = (ψ_1(v_1), ..., ψ_n(v_n)) and ψ(Π_x(F)) = {ψ(v) : v ∈ Π_x(F)}. x ∈ X^n is Ψ-shattered
by F if there exists a ψ ∈ Ψ^n such that {0,1}^n ⊆ ψ(Π_x(F)). The Ψ-dimension of F is defined by
Ψ-dim(F) = sup{n | ∃x ∈ X^n, ψ ∈ Ψ^n s.t. {0,1}^n ⊆ ψ(Π_x(F))}.

We next describe three important translation families used in this paper.

Example 2.6 The families Ψ_P = {ψ_{P,i} : i ∈ [k]}, Ψ_G = {ψ_{G,i} : i ∈ {0, ..., k}} and
Ψ_N = {ψ_{N,i,j} : i, j ∈ {0, ..., k}, i ≠ j}, where ψ_{P,i}(a) = 1[a < i], ψ_{G,i}(a) = 1[a = i] and
ψ_{N,i,j}(a) equals 1, 0, * if a = i, a = j, a ∉ {i, j} respectively, define the Pollard pseudo-dimension
Ψ_P-dim(V), the Graph dimension Ψ_G-dim(V) and the Natarajan dimension Ψ_N-dim(V).
2.3 The one-inclusion prediction strategy

A subset of the n-cube (the projection of some F) induces the one-inclusion graph, which underlies a natural prediction strategy. The following definition generalizes this to a subset of {0, ..., k}^n.

Definition 2.7 The one-inclusion hypergraph G(V) = (V, E) of V ⊆ {0, ..., k}^n is the undirected
graph with vertex-set V and hyperedge-set E of maximal (with respect to inclusion) sets of pairwise
hamming-1 separated vertices.

Algorithm 1 The deterministic multiclass one-inclusion prediction strategy Q_{G,F}
Given: F ⊆ {0, ..., k}^X, sam((x_1, ..., x_{n-1}), f) ∈ (X × {0,1})^{n-1}, x_n ∈ X
Returns: a prediction of f(x_n)
  V ← Π_x(F);
  G ← G(V);
  G⃗ ← orient G to minimize the maximum outdegree;
  V_space ← {v ∈ V | v_1 = f(x_1), ..., v_{n-1} = f(x_{n-1})};
  if V_space = {v} then return v_n;
  else return the nth component of the head of hyperedge V_space in G⃗;
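To make the mechanics concrete, the following Python sketch implements the binary (k = 1) case. The greedy orientation below only approximates the minimax-outdegree step of Algorithm 1, so the guarantee of Theorem 2.8 need not hold for this simplification, and the brute-force edge enumeration is an illustrative choice suitable only for small V.

    from itertools import combinations

    def one_inclusion_predict(V, labels):
        """Predict the last coordinate of the vertex of V consistent with labels.

        V: list of binary tuples, the projection of F on (x_1, ..., x_n).
        labels: tuple (f(x_1), ..., f(x_{n-1})).
        """
        consistent = [v for v in V if v[:-1] == tuple(labels)]
        if len(consistent) == 1:           # the prediction is forced
            return consistent[0][-1]
        # One-inclusion edges: vertex pairs at Hamming distance one.
        edges = [(u, v) for u, v in combinations(V, 2)
                 if sum(a != b for a, b in zip(u, v)) == 1]
        outdeg = {v: 0 for v in V}
        head_of = {}
        for u, v in edges:                 # greedy stand-in for the minimax orientation
            head = u if outdeg[u] <= outdeg[v] else v
            tail = v if head == u else u
            outdeg[tail] += 1
            head_of[frozenset((u, v))] = head
        u, v = consistent                  # binary case: exactly two candidates
        return head_of[frozenset((u, v))][-1]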
The one-inclusion graph's prediction strategy Q_{G,F} [4] immediately generalizes to the multiclass
prediction strategy described by Algorithm 1. For the remainder of this and Section 4 we will
restrict our discussion to the k = 1 case, on which the following main result of [4] focuses.

Theorem 2.8 (Theorem 2.3 [4]) $\hat{M}_{Q_{G,F},F}(n) \leq \mathrm{VC}(F)/n$ for every concept class F and n > 1.

A lower bound in [6] showed that the one-inclusion strategy's performance is optimal within a factor
of 1 + o(1). Replacing orientation with a distribution over each edge induces a randomized strategy
$Q_{G_{\mathrm{rand}},F}$. The key to proving Theorem 2.8 is the following.
Lemma 2.9 (Lemma 2.4 [4]) For any n ∈ N and V ⊆ {0,1}^n, dens(G(V)) ≤ VC(V).

An elegant proof of this deep result, due to Haussler [3], uses shifting. Consider any s ∈ [n], v ∈ V
and let S_s(v; V) be v shifted along s: if v_s = 0, or if v_s = 1 and there exists some u ∈ V differing
to v only in the sth coordinate, then S_s(v; V) = v; otherwise v shifts down, i.e. its sth coordinate is
decreased from 1 to 0. The entire family V can be shifted to S_s(V) = {S_s(v; V) | v ∈ V} and this
shifted vertex-set induces S_s(E), the edge-set of G(S_s(V)), where (V, E) = G(V).
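In code, a single shifting pass is only a few lines; representing V as a set of binary tuples is an illustrative choice:

    def shift(V, s):
        """One pass of S_s over a set V of binary tuples, coordinate s (0-indexed).
        A vertex with v[s] = 1 moves down to v[s] = 0 unless the vertex directly
        below it is already present in V."""
        out = set()
        for v in V:
            down = v[:s] + (0,) + v[s + 1:]
            out.add(v if (v[s] == 0 or down in V) else down)
        return out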
Definition 2.10 Let I ⊆ [n]. We call a subset V ⊆ {0,1}^n I-closed-below if S_s(V) = V for all
s ∈ I. If V is [n]-closed-below then we call it closed-below.

A number of properties of shifting follow relatively easily:

$$|S_s(V)| = |V|, \quad\text{by the injectivity of } S_s(\,\cdot\,; V) \tag{1}$$
$$\mathrm{VC}(S_s(V)) \leq \mathrm{VC}(V), \quad\text{as } S_s(V) \text{ shatters } I \subseteq [n] \Rightarrow V \text{ shatters } I \tag{2}$$
$$|E| \leq |V| \cdot \mathrm{VC}(V), \quad\text{as } V \text{ closed-below} \Rightarrow \max_{v\in V}\|v\|_{\ell_1} \leq \mathrm{VC}(V) \tag{3}$$
$$|S_s(E)| \geq |E|, \quad\text{by cases} \tag{4}$$
$$\exists T \in \mathbb{N},\ s \in [n]^T \text{ s.t. } S_{s_T}(\ldots S_{s_1}(V)) \text{ is closed-below (a fixed-point)}. \tag{5}$$

Properties (1-2) and the justification of (3) together imply Sauer's lemma; Properties (1-5) lead to

$$\frac{|E|}{|V|} \leq \ldots \leq \frac{|S_{s_T}(\ldots S_{s_1}(E))|}{|S_{s_T}(\ldots S_{s_1}(V))|} \leq \mathrm{VC}(S_{s_T}(\ldots S_{s_1}(V))) \leq \ldots \leq \mathrm{VC}(V)\,.$$
3 Shatter-invariant shifting

While [3] shifts to bound density, the number of edges can increase and the VC-dimension can
decrease, both contributing to the observed gap between graph density and capacity. The next
result demonstrates that shifting can in fact be controlled to preserve VC-dimension.

Lemma 3.1 Consider arbitrary n ∈ N, I ⊆ [n] and V ⊆ {0,1}^n that shatters I. There exists a
finite sequence s_1, ..., s_T in [n] such that each V_t = S_{s_t}(... S_{s_1}(V)) shatters I and V_T is closed-below. In particular VC(V_T) = VC(V_{T-1}) = ... = VC(V).

Proof: Π_I(·) is invariant to shifting on Ī = [n]\I. So some finite number of shifts on Ī will produce
an Ī-closed-below family W that shatters I. Hence W must contain representatives for each element
of {0,1}^{|I|} (embedded at I) with components equal to 0 outside I. Thus the shattering of I is
invariant to the shifting of W on I, so that a finite number of shifts on I produces an I-closed-below
W′ that shatters I. Repeating the process a finite number of times until no non-trivial shifts are
made produces a closed-below family that shatters I. The second claim follows from (2).
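Iterating such shifts over all coordinates until nothing moves realizes the fixed-point of property (5); a sketch building on the shift function above (the invariance bookkeeping of the proof is omitted):

    def shift_to_closed_below(V, n):
        """Apply single-coordinate shifts until V is closed-below."""
        V = set(V)
        changed = True
        while changed:
            changed = False
            for s in range(n):
                W = shift(V, s)
                if W != V:
                    V, changed = W, True
        return V

By properties (1)-(2) the result is a family of the same size whose VC-dimension is no larger than that of the input.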
4 Tightly bounding graph density by symmetrization

Kuzmin and Warmuth [5] introduced D_n^d as a potential bound on the graph density of maximum
classes. We begin with properties of D_n^d, a technical lemma and then proceed to the main result.

Definition 4.1 Define $D_n^d = n\binom{n-1}{\leq d-1}\big/\binom{n}{\leq d}$ for all n ∈ N and d ∈ [n]. Denote by V_n^d the VC-dimension
d closed-below subset of {0,1}^n equal to the union of all $\binom{n}{d}$ closed-below embedded d-cubes.

Lemma 4.2 D_n^d
(i) equals the graph density of V_n^d for each n ∈ N and d ∈ [n];
(ii) is strictly upper-bounded by d, for all n;
(iii) equals d/2 for all n = d ∈ N;
(iv) is strictly monotonic increasing in d (with n fixed);
(v) is strictly monotonic increasing in n (with d fixed); and
(vi) limits to d as n → ∞.
Proof: By counting, for each d ≤ n < ∞, the density of G(V_n^d) equals D_n^d:

$$\frac{|E(G(V_n^d))|}{|V_n^d|} = \frac{\sum_{i=1}^{d} i\binom{n}{i}}{\binom{n}{\leq d}} = \frac{\sum_{i=0}^{d-1}(i+1)\binom{n}{i+1}}{\binom{n}{\leq d}} = \frac{n\sum_{i=0}^{d-1}\binom{n-1}{i}}{\binom{n}{\leq d}} = \frac{n\binom{n-1}{\leq d-1}}{\binom{n}{\leq d}}$$

proving (i). Since for all A, B, C, D > 0, $\frac{A}{B} < \frac{A+C}{B+D}$ iff $\frac{A}{B} < \frac{C}{D}$, it is sufficient for (iv) to prove
that $D_n^{d-1} < n\binom{n-1}{d-1}\big/\binom{n}{d}$. By (i) and Lemma 2.9, $D_n^d \leq d$, and so

$$D_n^{d-1} \leq d - 1 < d = \frac{n\binom{n-1}{d-1}}{\binom{n}{d}} = n\cdot\frac{(n-1)!\,\big/\!\left((n-d)!\,(d-1)!\right)}{n!\,\big/\!\left((n-d)!\,d!\right)} = \frac{n\,(n-1)!\,d!}{n!\,(d-1)!}\,.$$

Monotonicity in d, (i) and Lemma 2.9 together prove (ii). Properties (iii, v-vi) are proven in [8].
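Definition 4.1 is straightforward to evaluate numerically, which makes the claims of Lemma 4.2 easy to spot-check; a small sketch in Python (the particular n and d values below are arbitrary):

    from math import comb

    def D(n, d):
        """D_n^d = n * C(n-1, <= d-1) / C(n, <= d) from Definition 4.1."""
        return (n * sum(comb(n - 1, i) for i in range(d))
                / sum(comb(n, i) for i in range(d + 1)))

    assert D(5, 5) == 5 / 2                    # (iii): D_d^d = d/2
    assert D(40, 10) < D(40, 11) < 11          # (ii) and (iv)
    assert D(40, 10) < D(80, 10) < 10          # (ii) and (v)
    print([round(D(n, 3), 4) for n in (3, 10, 100, 1000)])   # (vi): tends to d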
Lemma 4.3 For arbitrary U, V ⊆ {0,1}^n with dens(G(V)) ≥ λ > 0, |U| ≥ |V| and
|E(G(U))| ≥ |E(G(V))|, if dens(G(U ∩ V)) < λ then dens(G(U ∪ V)) > λ.

Proof: If G(U ∩ V) has density less than λ then

$$\mathrm{dens}(G(U\cup V)) \;\geq\; \frac{|E(G(U))| + |E(G(V))| - |E(G(U\cap V))|}{|U| + |V| - |U\cap V|} \;\geq\; \frac{2|E(G(V))| - |E(G(U\cap V))|}{2|V| - |U\cap V|} \;>\; \frac{2\lambda|V| - \lambda|U\cap V|}{2|V| - |U\cap V|} \;=\; \lambda\,.$$
[Figure 1 here: density plotted against n for d ∈ {1, 2, 10}.]

Figure 1: The improved graph density bound of Theorem 4.4. The density bounding D_n^d
is plotted (dotted solid) alongside the previous best d (dashed), for each d ∈ {1, 2, 10}.
Theorem 4.4 Every family V ⊆ {0,1}^n with d = VC(V) has (V, E) = G(V) with graph density

$$\frac{|E|}{|V|} \leq D_n^d < d\,. \tag{6}$$

For n ∈ N and d ∈ [n], V_n^d is the unique closed-below VC-dimension d subset of {0,1}^n meeting (6)
with equality. A VC-dimension d family V ⊆ {0,1}^n meets (6) with equality only if V is maximum.
Proof: Allow a permutation g ∈ S_n to act on vector v ∈ {0,1}^n and family V ⊆ {0,1}^n by
g(v) = (v_{g(1)}, ..., v_{g(n)}) and g(V) = {g(v) | v ∈ V}; and define $S_n(V) = \bigcup_{g\in S_n} g(V)$. Note
that a closed-below VC-dimension d family V ⊆ {0,1}^n satisfies S_n(V) = V iff V = V_n^d, as
VC(V) ≥ d implies V contains an embedded d-cube, invariance to S_n implies further that V
contains all $\binom{n}{d}$ such cubes, and VC(V) ≤ d implies that V ⊆ V_n^d. Consider now any

$$V^{\star}_{n,d} \in \arg\min\left\{\, |U| \;\middle|\; U \in \arg\max_{\{U\subseteq\{0,1\}^n \,\mid\, \mathrm{VC}(U)\leq d,\ U\ \mathrm{closed\text{-}below}\}} \mathrm{dens}(G(U)) \,\right\}.$$

For the purposes of contradiction assume that V*_{n,d} ≠ g(V*_{n,d}) for some g ∈ S_n. Then if
dens(G(V*_{n,d} ∩ g(V*_{n,d}))) ≥ dens(G(V*_{n,d})) then V*_{n,d} would not have been selected above
(i.e. a closed-below family at least as small and dense as V*_{n,d} ∩ g(V*_{n,d}) would have been chosen).
Thus dens(G(V*_{n,d} ∪ g(V*_{n,d}))) > dens(G(V*_{n,d})) by Lemma 4.3. But then again V*_{n,d} would
not have been selected (i.e. a distinct family at least as dense as V*_{n,d} ∪ g(V*_{n,d}) would have been selected instead, since every vector in this union contains no more than d 1's). Hence V*_{n,d} = S_n(V*_{n,d})
and so V*_{n,d} = V_n^{d′} and by Lemma 4.2.(i) dens(G(V*_{n,d})) = D_n^{d′}, for d′ = VC(V*_{n,d}) ≤ d. But by
Lemma 4.2.(iv) this implies that d = d′ and (6) is true for all closed-below families; V_n^d uniquely
maximizes density amongst all closed-below VC-dimension d families in the n-cube.

For an arbitrary V ⊆ {0,1}^n with d = VC(V) consider any of its closed-below fixed-points (cf.
(5)), W ⊆ {0,1}^n. Noting that VC(W) ≤ d and dens(G(V)) ≤ dens(G(W)) by (2) and
(1) & (4) respectively, the bound (6) follows directly for V. Furthermore if we shift to preserve
VC-dimension then VC(W) = d while still |V| = |W|. And since dens(G(W)) = D_n^d only if
W = V_n^d, it follows that V maximizes density amongst all VC-dimension d families in the n-cube,
with dens(G(V)) = D_n^d, only if it is maximum.
Theorem 4.4 improves on the VC-dimension density bound of Lemma 2.9 for low sample sizes (see
Figure 1). This new result immediately implies the following one-inclusion mistake bounds.

Theorem 4.5 Consider any n ∈ N and F ⊆ {0,1}^X with VC(F) = d < ∞. Then
$\hat{M}_{Q_{G,F},F}(n) \leq \lceil D_n^d\rceil/n$ and $\hat{M}_{Q_{G_{\mathrm{rand}},F},F}(n) \leq D_n^d/n$.
For small d, n*(d) = min{n ≥ d | d = ⌈D_n^d⌉}, the first n for which the new and old deterministic
one-inclusion mistake bounds coincide, appears to remain very close to 2.96d. The randomized
strategy's mistake bound of Theorem 4.5 offers a strict improvement over that of [4].
5 Bounds for multiclass prediction

As in the k = 1 case, the key to developing the multiclass one-inclusion mistake bound is in bounding hypergraph density. We proceed by shifting a graph induced by the one-inclusion hypergraph.

Theorem 5.1 For any k, n ∈ N and V ⊆ {0, ..., k}^n, the one-inclusion hypergraph (V, E) =
G(V) satisfies $\frac{|E|}{|V|} \leq \Psi_P\text{-}\mathrm{dim}(V)$.
Proof: We begin by replacing the hyperedge structure E with a related edge structure E′. Two
vertices u, v ∈ V are connected in the graph (V, E′) iff there exists an i ∈ [n] such that u, v differ
only at i and no w ∈ V exists such that u_i < w_i < v_i and w_j = u_j = v_j on [n]\{i}. Trivially

$$\frac{|E|}{|V|} \leq \frac{|E'|}{|V|} \leq \frac{k|E|}{|V|}\,.$$
Consider now shifting vertex v ∈ V at shift label t ∈ [k] along shift coordinate s ∈ [n] by

$$S_{s,t}(v; V) = v_s^{(v_s')} \quad\text{where}\quad v_s^{(i)} = (v_1, \ldots, v_{s-1}, i, v_{s+1}, \ldots, v_n) \text{ for } i \in \{0,\ldots,k\}$$

$$v_s' = \begin{cases} \min\left\{x \in \{0,\ldots,v_s\} \;\middle|\; v_s^{(x)} \notin V \text{ or } x = v_s\right\} & \text{if } v_s = t \\ v_s & \text{o.w.} \end{cases} \tag{7}$$

We shift V on s at t as usual; we shift V on s alone by bubbling vertices down to fill gaps below:

$$S_{s,t}(V) = \{S_{s,t}(v; V) \mid v \in V\} \qquad\qquad S_s(V) = S_{s,k}(S_{s,k-1}(\ldots S_{s,1}(V)))\,.$$
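A direct transcription of (7) in Python, again with V a set of tuples over {0, ..., k} (an illustrative representation); the full pass S_s is then the composition of these over t = 1, ..., k:

    def multiclass_shift(V, s, t):
        """S_{s,t}: a vertex with label t at coordinate s bubbles down to the
        lowest x <= v_s whose vertex v_s^{(x)} is absent from V (eq. (7))."""
        def shifted(v):
            for x in range(v[s] + 1):
                u = v[:s] + (x,) + v[s + 1:]
                if u not in V or x == v[s]:
                    return u
            return v
        return {shifted(v) if v[s] == t else v for v in V}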
Let S_s(E′) denote the edge-set induced by S_s(V). S_s on a vertex-set is injective, implying that

$$|S_s(V)| = |V|\,. \tag{8}$$

Consider any {u, v} ∈ E′ with i ∈ [n] denoting the index on which u, v differ. If i = s then
no other vertex w ∈ V can come between u and v during shifting by construction of E′, so
{S_s(u; V), S_s(v; V)} ∈ S_s(E′). Now suppose that i ≠ s. If both vertices shift down by the
same number of labels then they remain connected in S_s(E′). Otherwise assume WLOG that
S_s(u; V)_s < S_s(v; V)_s; then the shifted vertices will lose their edge, however since v_s did not
shift down to S_s(u; V)_s there must have been some w ∈ V different to u on {i, s} such that
w_s < v_s with S_s(w; V)_s = S_s(u; V)_s. Thus S_s(w; V), S_s(u; V) differ only on {i} and a new edge
{S_s(w; V), S_s(u; V)} is in S_s(E′) that was not in E′ (otherwise u would not have shifted). Thus

$$|S_s(E')| \geq |E'|\,. \tag{9}$$

Suppose that I ⊆ [n] is Ψ_P-shattered by S_s(V). If s ∉ I then Π_I(S_s(V)) = Π_I(V) and I is
Ψ_P-shattered by V. If s ∈ I then V Ψ_P-shatters I. Witnesses of S_s(V)'s Ψ_P-shattering of I equal
to 1 at s, taking each value in {0,1}^{|I|-1} on I\{s}, were not shifted and so are witnesses for V;
since these vertices were not shifted they were blocked by vertices of V of equal values on I\{s}
but equal to 0 at s; these are the remaining half of the witnesses of V's Ψ_P-shattering of I. Thus

$$S_s(V)\ \Psi_P\text{-shatters } I \subseteq [n] \ \Rightarrow\ V\ \Psi_P\text{-shatters } I\,. \tag{10}$$

In a finite number of shifts starting from (V, E′), a closed-below family W with induced edge-set
F will be reached. If I ⊆ [n] is Ψ_P-shattered by W and |I| = d = Ψ_P-dim(W), then since W
is closed-below the translation vector (ψ_{P,1}, ..., ψ_{P,1})(·) = (1[· < 1], ..., 1[· < 1]) must witness
this shattering. Hence each w ∈ W has at most d non-zero components. Counting edges in F by
upper-adjoining vertices we have proved that

$$(V, E')\ \text{finitely shifts to closed-below graph}\ (W, F)\ \text{ s.t. }\ |F| \leq |W| \cdot \Psi_P\text{-}\mathrm{dim}(W)\,. \tag{11}$$

Combining properties (7)-(11) we have that

$$\frac{|E|}{|V|} \leq \frac{|E'|}{|V|} \leq \frac{|F|}{|W|} \leq \Psi_P\text{-}\mathrm{dim}(W) \leq \Psi_P\text{-}\mathrm{dim}(V)\,.$$
The remaining arguments from the k = 1 case of [4, 3] now imply the multiclass mistake bound.

Theorem 5.2 Consider any k, n ∈ N and F ⊆ {0, ..., k}^X with Ψ_P-dim(F) < ∞. The multiclass one-inclusion prediction strategy satisfies $\hat{M}_{Q_{G,F},F}(n) \leq \Psi_P\text{-}\mathrm{dim}(F)/n$.
5.1 A lower bound

We now show that the preceding multiclass mistake bound is optimal to within a O(log k) factor,
noting that Ψ_N is smaller than Ψ_P by at most such a factor [1, Theorem 10].

Definition 5.3 We call a family F ⊆ {0, ..., k}^X trivial if either |F| = 1 or there exist no x_1, x_2 ∈
X and f_1, f_2 ∈ F such that f_1(x_1) ≠ f_2(x_1) and f_1(x_2) = f_2(x_2).

Theorem 5.4 Consider any deterministic or randomized prediction strategy Q and any F ⊆
{0, ..., k}^X that has 2 ≤ Ψ_N-dim(F) < ∞ or is non-trivial with Ψ_N-dim(F) < 2. Then for
all n > Ψ_N-dim(F), $\hat{M}_{Q,F}(n) \geq \max\{1, \Psi_N\text{-}\mathrm{dim}(F) - 1\}/(2en)$.

Proof: Following [2], we use the probabilistic method to prove the existence of a target in F
for which prediction under a distribution P supported by a Ψ_N-shattered subset is hard. Consider d = Ψ_N-dim(F) ≥ 2 with n > d. Fix a Z = {z_1, ..., z_d} Ψ_N-shattered by F and
then a subset F_Z ⊆ F of 2^d functions that Ψ_N-shatters Z. Define a distribution P on X by
P({z_i}) = n^{-1} for each i ∈ [d-1], P({z_d}) = 1 - (d-1)n^{-1} and P({x}) = 0 for all x ∈
X\Z. Observe that

$$\Pr_{P^n}(\forall i \in [n-1],\ X_n \neq X_i) \ \geq\ \Pr_{P^n}(X_n \neq z_d,\ \forall i \in [n-1],\ X_n \neq X_i) \ =\ \frac{d-1}{n}\left(1 - \frac{1}{n}\right)^{n-1} \geq\ \frac{d-1}{en}\,.$$

For any f ∈ F_Z and x ∈ Z^n with x_n ≠ x_i for all i ∈ [n-1], exactly half of the functions in F_Z consistent with sam((x_1, ..., x_{n-1}), f) output
some i ∈ {0, ..., k} on x_n and the remaining half output some j ∈ {0, ..., k}\{i}. Thus
$E_{\mathrm{Unif}(F_Z)}\left[1\left[Q(\mathrm{sam}((x_1,\ldots,x_{n-1}), F), x_n) \neq F(x_n)\right]\right] = 0.5$ for such an x and so

$$\hat{M}_{Q,F} \geq \hat{M}_{Q,F_Z} \geq E_{\mathrm{Unif}(F_Z)\times P^n}\left[1\left[Q(\mathrm{sam}((X_1,\ldots,X_{n-1}), F), X_n) \neq F(X_n)\right]\right] \geq \frac{d-1}{2en}\,.$$

The similar case of d < 2 is omitted here and shows that there is a distribution P on X and function
f ∈ F such that $E_{P^n}\left[1\left[Q(\mathrm{sam}((X_1,\ldots,X_{n-1}), f), X_n) \neq f(X_n)\right]\right] \geq (2en)^{-1}$.
6 Conclusions and open problems

In this paper we have developed new shifting machinery and tightened the binary one-inclusion
mistake bound from d/n to D_n^d/n (⌈D_n^d⌉/n for the deterministic strategy), representing a solid improvement for d close to n. We have described the multiclass generalization of the prediction learning
model and derived a mistake bound for the multiclass one-inclusion prediction strategy that improves
on previous PAC-based expected risk bounds by O(log n) and that is within O(log k) of optimal.

Here shifting with invariance to the shattering of a single set was described; however, we are aware
of invariance to more complex shatterings. Another serious application of shatter-invariant shifting,
to appear in a sequel to this paper, is to the study of the cubical structure of maximum and maximal
classes with connections to the compressibility conjecture of [7]. While Theorem 4.4 resolves one
conjecture of Kuzmin & Warmuth [5], the remainder of the conjectured correctness proof for the
Peeling compression scheme is known to be false [8].

The symmetrization method of Theorem 4.4 can be extended over subgroups G ≤ S_n to gain tighter
density bounds. Just as the S_n-invariant V_n^d is the maximizer of density among all closed-below
V ⊆ V_n^d, there exist G-invariant families that maximize the density over all of their sub-families.
In addition to Theorem 5.2 we have also proven the following special case in terms of Ψ_G; it is
open as to whether this generalizes to n ∈ N. While a general Ψ_G-based bound would allow direct
comparison with the PAC-based expected risk bound, it should also be noted that Ψ_P and Ψ_G are in
fact incomparable: neither Ψ_G ≤ Ψ_P nor Ψ_P ≤ Ψ_G singly holds for all classes [1, Theorem 1].

Lemma 6.1 ([8]) For any k ∈ N and family V ⊆ {0, ..., k}^2, dens(G(V)) ≤ Ψ_G-dim(V).
Acknowledgments
We gratefully acknowledge the support of the NSF under award DMS-0434383.
References
[1] Ben-David, S., Cesa-Bianchi, N., Haussler, D., Long, P. M.: Characterizations of learnability for classes of
{0, . . . , n}-valued functions. Journal of Computer and System Sciences, 50(1) (1995) 74-86
[2] Ehrenfeucht, A., Haussler, D., Kearns, M., Valiant, L.: A general lower bound on the number of examples
needed for learning. Information and Computation, 82(3) (1989) 247-261
[3] Haussler, D.: Sphere packing numbers for subsets of the boolean n-cube with bounded Vapnik-Chervonenkis dimension. Journal of Combinatorial Theory (A) 69(2) (1995) 217-232
[4] Haussler, D., Littlestone, N., Warmuth, M. K.: Predicting {0, 1} functions on randomly drawn points.
Information and Computation, 115(2) (1994) 284-293
[5] Kuzmin, D., Warmuth, M. K.: Unlabeled compression schemes for maximum classes. Journal of Machine
Learning Research (2006) to appear
[6] Li, Y., Long, P. M., Srinivasan, A.: The one-inclusion graph algorithm is near optimal for the prediction
model of learning. IEEE Transactions on Information Theory, 47(3) (2002) 1257-1261
[7] Littlestone, N., Warmuth, M. K.: Relating data compression and learnability. Unpublished manuscript,
http://www.cse.ucsc.edu/~manfred/pubs/lrnk-olivier.pdf (1986)
[8] Rubinstein, B. I. P., Bartlett, P. L., Rubinstein, J. H.: Shifting: One-Inclusion Mistake Bounds and Sample
Compression. Technical report, EECS Department, UC Berkeley (2007) to appear
[9] Sauer, N.: On the density of families of sets. Journal of Combinatorial Theory (A), 13 (1972) 145-147
2,184 | 2,983 | Analysis of Representations for Domain Adaptation
Shai Ben-David
School of Computer Science
University of Waterloo
[email protected]
John Blitzer, Koby Crammer, and Fernando Pereira
Department of Computer and Information Science
University of Pennsylvania
{blitzer, crammer, pereira}@cis.upenn.edu
Abstract
Discriminative learning methods for classification perform well when training and
test data are drawn from the same distribution. In many situations, though, we
have labeled training data for a source domain, and we wish to learn a classifier
which performs well on a target domain with a different distribution. Under what
conditions can we adapt a classifier trained on the source domain for use in the
target domain? Intuitively, a good feature representation is a crucial factor in the
success of domain adaptation. We formalize this intuition theoretically with a
generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model
for domain adaptation: one which explicitly minimizes the difference between the
source and target domains, while at the same time maximizing the margin of the
training set.
1 Introduction
We are all familiar with the situation in which someone learns to perform a task on training examples
drawn from some domain (the source domain), but then needs to perform the same task on a related
domain (the target domain). In this situation, we expect the task performance in the target domain to
depend on both the performance in the source domain and the similarity between the two domains.
This situation arises often in machine learning. For example, we might want to adapt for a new
user (the target domain) a spam filter trained on the email of a group of previous users (the source
domain), under the assumption that users generally agree on what is spam and what is not. Then, the
challenge is that the distributions of emails for the first set of users and for the new user are different.
Intuitively, one might expect that the closer the two distributions are, the better the filter trained on
the source domain will do on the target domain.
Many other instances of this situation arise in natural language processing. In general, labeled data
for tasks like part-of-speech tagging, parsing, or information extraction are drawn from a limited set
of document types and genres in a given language because of availability, cost, and project goals.
However, applications for the trained systems often involve somewhat different document types
and genres. Nevertheless, part-of-speech, syntactic structure, or entity mention decisions are to a
large extent stable across different types and genres since they depend on general properties of the
language under consideration.
Discriminative learning methods for classification are based on the assumption that training and test
data are drawn from the same distribution. This assumption underlies both theoretical estimates of
generalization error and the many experimental evaluations of learning methods. However, the assumption does not hold for domain adaptation [5, 7, 13, 6]. For the situations we outlined above, the
challenge is the difference in instance distribution between the source and target domains. We will
approach this challenge by investigating how a common representation between the two domains
can make the two domains appear to have similar distributions, enabling effective domain adaptation. We formalize this intuition with a bound on the target generalization error of a classifier trained
from labeled data in the source domain. The bound is stated in terms of a representation function,
and it shows that a representation function should be designed to minimize domain divergence, as
well as classifier error.
While many authors have analyzed adaptation from multiple sets of labeled training data [3, 5, 7,
13], our theory applies to the setting in which the target domain has no labeled training data, but
plentiful unlabeled data exists for both target and source domains. As we suggested above, this
setting realistically captures the problems widely encountered in real-world applications of machine
learning. Indeed recent empirical work in natural language processing [11, 6] has been targeted at
exactly this setting.
We show experimentally that the heuristic choices made by the recently proposed structural correspondence learning algorithm [6] do lead to lower values of the relevant quantities in our theoretical
analysis, providing insight as to why this algorithm achieves its empirical success. Our theory also
points to an interesting new algorithm for domain adaptation: one which directly minimizes a tradeoff between source-target similarity and source training error.
The remainder of this paper is structured as follows: In the next section we formally define domain
adaptation. Section 3 gives our main theoretical results. We discuss how to compute the bound
in section 4. Section 5 shows how the bound behaves for the structural correspondence learning
representation [6] on natural language data. We discuss our findings, including a new algorithm for
domain adaptation based on our theory, in section 6 and conclude in section 7.
2 Background and Problem Setup

Let X be an instance set. In the case of [6], this could be all English words, together with the
possible contexts in which they occur. Let Z be a feature space (R^d is a typical choice) and {0, 1}
be the label set for binary classification.¹
A learning problem is specified by two parameters: a distribution D over X and a (stochastic) target
function f : X → [0, 1]. The value of f(x) corresponds to the probability that the label of x is
1. A representation function R is a function which maps instances to features, R : X → Z. A
representation R induces a distribution over Z and a (stochastic) target function from Z to [0, 1] as
follows:

$$\Pr_{\tilde{D}}[B] \stackrel{\mathrm{def}}{=} \Pr_{D}\left[R^{-1}(B)\right] \qquad\qquad \tilde{f}(z) \stackrel{\mathrm{def}}{=} E_{D}\left[f(x) \mid R(x) = z\right]$$

for any B ⊆ Z such that R^{-1}(B) is D-measurable. In words, the probability of an event B under
D̃ is the probability of the inverse image of B under R according to D, and the probability that the
label of z is 1 according to f̃ is the mean of probabilities of instances x that z represents. Note
that f̃(z) may be a stochastic function even if f(x) is not. This is because the function R can map
two instances with different f-labels to the same feature representation. In summary, our learning
setting is defined by fixed but unknown D and f, and our choice of representation function R and
hypothesis class H ⊆ {g : Z → {0, 1}} of deterministic hypotheses to be used to approximate the
function f.
2.1 Domain Adaptation

We now formalize the problem of domain adaptation. A domain is a distribution D on the instance
set X. Note that this is not the domain of a function. To avoid confusion, we will always mean a
specific distribution over the instance set when we say domain. Unlike in inductive transfer, where
the tasks we wish to perform may be related but different, in domain adaptation we perform the same
task in multiple domains. This is quite common in natural language processing, where we might be
performing the same syntactic analysis task, such as tagging or parsing, but on domains with very
different vocabularies [6, 11].

¹ The same type of analysis holds for multiclass classification, but for simplicity we analyze the binary case.
We assume two domains, a source domain and a target domain. We denote by D_S the source
distribution of instances and D̃_S the induced distribution over the feature space Z. We use parallel
notation, D_T, D̃_T, for the target domain. f : X → [0, 1] is the labeling rule, common to both
domains, and f̃ is the induced image of f under R.

A predictor is a function, h, from the feature space Z to [0, 1]. We denote the probability, according
to the distribution D_S, that a predictor h disagrees with f by

$$\epsilon_S(h) = E_{z\sim \tilde{D}_S}\left[E_{y\sim \tilde{f}(z)}\left[y \neq h(z)\right]\right] = E_{z\sim \tilde{D}_S}\left[\left|\tilde{f}(z) - h(z)\right|\right].$$

Similarly, ε_T(h) denotes the expected error of h with respect to D_T.
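On a finite sample the corresponding empirical quantity is a plain average; a two-line sketch, where f_tilde returns the label probability of a point:

    def empirical_disagreement(h, zs, f_tilde):
        """Sample analogue of eps(h) = E_z |f~(z) - h(z)|."""
        return sum(abs(f_tilde(z) - h(z)) for z in zs) / len(zs)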
3 Generalization Bounds for Domain Adaptation

We now proceed to develop a bound on the target domain generalization performance of a classifier
trained in the source domain. As we alluded to in section 1, the bound consists of two terms. The first
term bounds the performance of the classifier on the source domain. The second term is a measure
of the divergence between the induced source marginal D̃_S and the induced target marginal D̃_T. A
natural measure of divergence for distributions is the L_1 or variational distance. This is defined as

$$d_{L_1}(D, D') = 2 \sup_{B \in \mathcal{B}} \left|\Pr_{D}[B] - \Pr_{D'}[B]\right|$$

where 𝓑 is the set of measurable subsets under D and D′. Unfortunately the variational distance
between real-valued distributions cannot be computed from finite samples [2, 9] and therefore is not
useful to us when investigating representations for domain adaptation on real-world data.

A key part of our theory is the observation that in many realistic domain adaptation scenarios, we
do not need such a powerful measure as variational distance. Instead we can restrict our notion of
domain distance to be measured only with respect to functions in our hypothesis class.
3.1 The A-distance and labeling function complexity

We make use of a special measure of distance between probability distributions, the A-distance, as
introduced in [9]. Given a domain X and a collection A of subsets of X, let D, D′ be probability
distributions over X, such that every set in A is measurable with respect to both distributions. The
A-distance between such distributions is defined as

$$d_{\mathcal{A}}(D, D') = 2 \sup_{A \in \mathcal{A}} \left|\Pr_{D}[A] - \Pr_{D'}[A]\right|$$

In order to use the A-distance, we need to limit the complexity of the true function f in terms of
our hypothesis class H. We say that a function f̃ : Z → [0, 1] is λ-close to a function class H with
respect to distributions D̃_S and D̃_T if

$$\inf_{h \in H}\left[\epsilon_S(h) + \epsilon_T(h)\right] \leq \lambda\,.$$

A function f̃ is λ-close to H when there is a single hypothesis h ∈ H which performs well on both
domains. This embodies our domain adaptation assumption, and we will assume that
our induced labeling function f̃ is λ-close to our hypothesis class H for a small λ.

We briefly note that in standard learning theory, it is possible to achieve bounds with no explicit assumption on labeling function complexity. If H has bounded capacity (e.g., a finite VC-dimension),
then uniform convergence theory tells us that whenever f̃ is not λ-close to H, large training samples
have poor empirical error for every h ∈ H. This is not the case for domain adaptation. If the training
data is generated by some D_S and we wish to use some H as a family of predictors for labels in the
target domain, T, then one can construct a function which agrees with some h ∈ H with respect
to D̃_S and yet is far from H with respect to D̃_T. Nonetheless we believe that such examples do
not occur for realistic domain adaptation problems when the hypothesis class H is sufficiently rich,
since for most domain adaptation problems of interest the labeling function is "similarly simple" for
both the source and target domains.
3.2 Bound on the target domain error

We require one last piece of notation before we state and prove the main theorems of this work: the
correspondence between functions and characteristic subsets. For a binary-valued function g(z), we
let Z_g ⊆ Z be the subset whose characteristic function is g,

$$Z_g = \{z \in Z : g(z) = 1\}\,.$$

In a slight abuse of notation, for a binary function class H we will write d_H(·, ·) to indicate the
A-distance on the class of subsets whose characteristic functions are functions in H. Now we can
state our main theoretical result.

Theorem 1 Let R be a fixed representation function from X to Z and H be a hypothesis space of
VC-dimension d. If a random labeled sample of size m is generated by applying R to a D_S-i.i.d.
sample labeled according to f, then with probability at least 1 - δ, for every h ∈ H:

$$\epsilon_T(h) \leq \hat{\epsilon}_S(h) + \sqrt{\frac{4}{m}\left(d \log\frac{2em}{d} + \log\frac{4}{\delta}\right)} + d_H(\tilde{D}_S, \tilde{D}_T) + \lambda$$

where e is the base of the natural logarithm.
Proof: Let h* = argmin_{h∈H}(ε_T(h) + ε_S(h)), and let λ_T and λ_S be the errors of h* with respect
to D_T and D_S respectively. Notice that λ = λ_T + λ_S.

$$\begin{aligned}
\epsilon_T(h) &\leq \lambda_T + \Pr_{D_T}[Z_h \,\Delta\, Z_{h^*}] \\
&\leq \lambda_T + \Pr_{D_S}[Z_h \,\Delta\, Z_{h^*}] + \left|\Pr_{D_S}[Z_h \,\Delta\, Z_{h^*}] - \Pr_{D_T}[Z_h \,\Delta\, Z_{h^*}]\right| \\
&\leq \lambda_T + \Pr_{D_S}[Z_h \,\Delta\, Z_{h^*}] + d_H(\tilde{D}_S, \tilde{D}_T) \\
&\leq \lambda_T + \lambda_S + \epsilon_S(h) + d_H(\tilde{D}_S, \tilde{D}_T) \\
&\leq \lambda + \epsilon_S(h) + d_H(\tilde{D}_S, \tilde{D}_T)
\end{aligned}$$

The theorem now follows by a standard application of Vapnik-Chervonenkis theory [14] to bound the
true ε_S(h) by its empirical estimate ε̂_S(h). Namely, if S is an m-size i.i.d. sample, then with
probability exceeding 1 - δ,

$$\epsilon_S(h) \leq \hat{\epsilon}_S(h) + \sqrt{\frac{4}{m}\left(d \log\frac{2em}{d} + \log\frac{4}{\delta}\right)}$$
The bound depends on the quantity d_H(D̃_S, D̃_T). We chose the A-distance, however, precisely
because we can measure this from finite samples from the distributions D̃_S and D̃_T [9]. Combining
Theorem 1 with theorem 3.2 from [9], we can state a computable bound for the error on the target domain.

Theorem 2 Let R be a fixed representation function from X to Z and H be a hypothesis space of
VC-dimension d.
If a random labeled sample of size m is generated by applying R to a D_S-i.i.d. sample labeled
according to f, and Û_S, Û_T are unlabeled samples of size m′ each, drawn from D̃_S and D̃_T respectively, then with probability at least 1 - δ (over the choice of the samples), for every h ∈ H:

$$\epsilon_T(h) \leq \hat{\epsilon}_S(h) + \sqrt{\frac{4}{m}\left(d \log\frac{2em}{d} + \log\frac{4}{\delta}\right)} + \lambda + d_H(\hat{U}_S, \hat{U}_T) + 4\sqrt{\frac{d \log(2m') + \log(\frac{4}{\delta})}{m'}}$$
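Every term on the right-hand side of Theorem 2 is either empirical or assumed, so the bound can be evaluated directly. In the sketch below the plugged-in numbers are purely illustrative, and taking d = 201 for signed hyperplanes over a 200-dimensional projected space is an assumption, not a value from the text:

    from math import e, log, sqrt

    def theorem2_bound(emp_err, m, m_prime, d, a_dist, lam, delta=0.05):
        """Evaluate the right-hand side of Theorem 2."""
        train = sqrt(4.0 / m * (d * log(2 * e * m / d) + log(4 / delta)))
        unlab = 4 * sqrt((d * log(2 * m_prime) + log(4 / delta)) / m_prime)
        return emp_err + train + lam + a_dist + unlab

    # illustrative values only
    print(theorem2_bound(emp_err=0.07, m=2500, m_prime=500_000,
                         d=201, a_dist=0.211, lam=0.01))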
Let us briefly examine the bound from theorem 2, with an eye toward feature representations, R.
Under the assumption of subsection 3.1, we assume that λ is small for reasonable R. Thus the two
main terms of interest are the first and fourth terms, since the representation R directly affects them.
The first term is the empirical training error. The fourth term is the sample A-distance between
domains for hypothesis class H. Looking at the two terms, we see that a good representation R is
one which achieves low values for both training error and domain A-distance simultaneously.
4 Computing the A-distance for Signed Linear Classifiers

In this section we discuss practical considerations in computing the A-distance on real data. Ben-David et al. [9] show that the A-distance can be approximated arbitrarily well with increasing sample
size. Recalling the relationship between sets and their characteristic functions, it should be clear that
computing the A-distance is closely related to learning a classifier. In fact they are identical. The
set A_h ∈ H which maximizes the H-distance between D̃_S and D̃_T has a characteristic function
h. Then h is the classifier which achieves minimum error on the binary classification problem of
discriminating between points generated by the two distributions.

To see this, suppose we have two samples Û_S and Û_T, each of size m′, from D̃_S and D̃_T respectively.
Define the error of a classifier h on the task of discriminating between points sampled from different
distributions as

$$\mathrm{err}(h) = \frac{1}{2m'}\sum_{i=1}^{2m'}\left|h(z_i) - I_{z_i \in \hat{U}_S}\right|,$$

where $I_{z_i \in \hat{U}_S}$ is the indicator function for points lying in the sample Û_S. In this case, it is straightforward to show that

$$d_{\mathcal{A}}(\hat{U}_S, \hat{U}_T) = 2\left(1 - 2\min_{h' \in H}\mathrm{err}(h')\right).$$
Unfortunately it is a known NP-hard problem even to approximate the error of the optimal hyperplane classifier for arbitrary distributions [4]. We choose to approximate the optimal hyperplane
classifier by minimizing a convex upper bound on the error, as is standard in classification. It is
important to note that this does not provide us with a valid upper bound on the target error, but as we
will see it nonetheless provides us with useful insights about representations for domain adaptation.
In the subsequent experiments section, we train a linear classifier to discriminate between points
sampled from different domains to illustrate a proxy for the A-distance. We minimize a modified
Huber loss using stochastic gradient descent, described more completely in [15].
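A sketch of this proxy, assuming scikit-learn's SGDClassifier with its modified Huber loss as a stand-in for the exact training setup of [15]; training-set error is used for brevity, though a held-out estimate would be more faithful:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def proxy_a_distance(XS, XT):
        """Train a linear domain discriminator on unlabeled samples from the
        source (label 0) and target (label 1), then convert its error into
        the proxy 2 * (1 - 2 * err)."""
        X = np.vstack([XS, XT])
        y = np.concatenate([np.zeros(len(XS)), np.ones(len(XT))])
        clf = SGDClassifier(loss="modified_huber").fit(X, y)
        err = np.mean(clf.predict(X) != y)   # training error as a crude proxy
        return 2.0 * (1.0 - 2.0 * err)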
5 Natural Language Experiments

In this section we use our theory to analyze different representations for the task of adapting a part of
speech tagger from the financial to biomedical domains [6]. The experiments illustrate the utility of
the bound and all of them have the same flavor. First, we choose a representation R. Then we train
a classifier using R and measure the different terms of the bound. As we shall see, representations
which minimize both relevant terms of the bound also have small empirical error.
Part of speech (PoS) tagging is the task of labeling a word in context with its grammatical function.
For instance, in the previous sentence the tag for "speech" is singular common noun,
the tag for "labeling" is gerund, and so on. PoS tagging is a common preprocessing step in many
pipelined natural language processing systems and is described in more detail in [6]. Blitzer et al.
empirically investigate methods for adapting a part of speech tagger from financial news (the Wall
Street Journal, henceforth also WSJ) to biomedical abstracts (MEDLINE) [6]. We have obtained
their data, and we will use it throughout this section. As in their investigation, we treat the financial
data as our source, for which we have labeled training data, and the biomedical abstracts as our target,
for which we have no labeled training data.
The representations we consider in this section are all linear projections of the original feature space
into R^d. For PoS tagging, the original feature space consists of high-dimensional, sparse binary
vectors [6]. In all of our experiments we choose d to be 200. Now at train time we apply the
projection to the binary feature vector representation of each instance and learn a linear classifier in
the d-dimensional projected space. At test time we apply the projection to the binary feature vector
representation and classify in the d-dimensional projected space.
5.1 Random Projections

If our original feature space is of dimension d′, our random projection matrix is a matrix P ∈ R^{d×d′}.
The entries of P are drawn i.i.d. from N(0, 1). The Johnson-Lindenstrauss lemma [8] guarantees
(a) Plot of SCL representation for financial
(squares) vs. biomedical (circles)
(b) Plot of SCL representation for nouns (diamonds) vs. verbs (triangles)
Figure 1: 2D plots of SCL representations for the (a) A-distance and (b) empirical risk parts of
theorem 2
that random projections approximate well distances in the original high dimensional space, as long
as d is sufficiently large. Arriaga and Vempala [1] show that one can achieve good prediction with
random projections as long as the margin is sufficiently large.
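In code this representation amounts to one matrix draw and a product; the input dimension below is a placeholder standing in for the true sparse feature dimension:

    import numpy as np

    rng = np.random.default_rng(0)
    d, d_orig = 200, 100_000              # d_orig: placeholder input dimension
    P = rng.standard_normal((d, d_orig))  # entries i.i.d. N(0, 1)

    def project(x):
        """Map a (binary) feature vector in R^{d_orig} to R^d."""
        return P @ x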
5.2
Structural Correspondence Learning
Blitzer et al. [6] describe a heuristic method for domain adaptation that they call structural correspondence learning (henceforth also SCL). SCL uses unlabeled data from both domains to induce
correspondences among features in the two domains. Its first step is to identify a small set of domainindependent ?pivot? features which occur frequently in the unlabeled data of both domains. Other
features are then represented using their relative co-occurrence counts with these pivot features. Finally they use a low-rank approximation to the co-occurence count matrix as a projection matrix P .
The intuition is that by capturing these important correlations, features from the source and target
domains which behave similarly for PoS tagging will be represented similarly in the projected space.
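The following schematic sketch follows the three steps described above (pivot selection is assumed already done). It is our reconstruction from this description, not the SCL implementation of [6], and it assumes at least d pivot features.

    import numpy as np

    def scl_projection(X_unlabeled, pivot_idx, d=200):
        """Projection built from feature/pivot co-occurrence counts.

        X_unlabeled: (m, d_orig) binary feature matrix pooled from both domains
        pivot_idx:   indices of frequent, domain-independent pivot features
        Returns a (d, d_orig) projection matrix.
        """
        X = np.asarray(X_unlabeled, dtype=float)
        # Co-occurrence counts between every feature and every pivot feature.
        cooc = X.T @ X[:, pivot_idx]                      # (d_orig, n_pivots)
        # Relative counts: normalize each feature's row of pivot co-occurrences.
        cooc /= np.maximum(cooc.sum(axis=1, keepdims=True), 1.0)
        # Low-rank approximation via SVD; the top-d left singular vectors
        # define the projection (requires n_pivots >= d).
        U, _, _ = np.linalg.svd(cooc, full_matrices=False)
        return U[:, :d].T                                 # (d, d_orig)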
5.3
Results
We use as our source data set 100 sentences (about 2500 words) of PoS-tagged Wall Street Journal
text. The target domain test set is the same set as in [6]. We use one million words (500 thousand
from each domain) of unlabeled data to estimate the A-distance between the financial and biomedical domains.
The results in this section are intended to illustrate the different parts of theorem 2 and how they can
affect the target domain generalization error. We give two types of results. The first are pictorial and
appear in figures 1(a), 1(b) and 2(a). These are intended to illustrate either the A-distance (figures
1(a) and 2(a)) or the empirical error (figure 1(b)) for different representations. The second type
are empirical and appear in figure 2(b). In this case we use the Huber loss as a proxy for the empirical
training error.
Figure 1(a) shows one hundred random instances projected onto the space spanned by the best two
discriminating projections from the SCL projection matrix for part of the financial and biomedical
dataset. Instances from the WSJ are depicted as filled red squares, whereas those from MEDLINE
are depicted as empty blue circles. An approximate linear discriminator is also shown. Note,
however, that the discriminator performs poorly, and recall that if the best discriminator performs
poorly the A-distance is low. On the other hand, figure 1(b) shows the best two discriminating
components for the task of discriminating between nouns and verbs. Note that in this case, a good
discriminating divider is easy to find, even in such a low-dimensional space. Thus these pictures
lead us to believe that SCL finds a representation which results both in small empirical classification
error and small A-distance. In this case theorem 2 predicts good performance.
Figure 2: (a) 2D plot of the random projections representation for financial (squares) vs. biomedical (circles) instances, and (b) results summary on large data: comparison of bound terms vs. target domain error for different choices of representation. Representations are linear projections of the original feature space. Huber loss is the labeled training loss after training, the A-distance is approximated as described in the previous subsection, and Error refers to tagging error for the full tagset on the target domain.

Representation   Huber loss   A-distance   Error
Identity         0.003        1.796        0.253
Random Proj      0.254        0.223        0.561
SCL              0.07         0.211        0.216
Figure 2(a) shows one hundred random instances projected onto the best two discriminating projections for WSJ vs. MEDLINE from a random matrix of 200 projections. This also seems to be
difficult to separate. The random projections don't reveal any useful structure for learning, either,
though. Not shown is the corresponding noun vs. verb plot for random projections. It looks identical
to 2(a). Thus theorem 2 predicts that using two random projections as a representation will perform
poorly, since it minimizes only the A-distance and not the empirical error.
Figure 2(b) gives results on a large training and test set showing how the value of the bound can
affect results. The identity representation achieves very low Huber loss (corresponding to empirical
error). The original feature set consists of 3 million binary-valued features, though, and it is quite
easy to separate the two domains using these features. The approximate A-distance is near the
maximum possible value.
The random projections method achieves low A-distance but high Huber loss, and the classifier
which uses this representation achieves error rates much higher than a classifier which uses the
identity representation. Finally, the structural correspondence learning representation achieves low
Huber loss and low A-distance, and the error rate is the lowest of the three representations.
6
Discussion and Future Work
Our theory demonstrates an important tradeoff inherent in designing good representations for domain adaptation. A good representation enables achieving low error rate on the source domain while
also minimizing the A-distance between the induced marginal distributions of the two domains. The
previous section demonstrates empirically that the heuristic choices of the SCL algorithm [6] do
achieve low values for each of these terms.
Our theory is closely related to theory by Sugiyama and Mueller on covariate shift in regression
models [12]. Like this work, they consider the case where the prediction functions are identical,
but the input data (covariates) have different distributions. Unlike their work, though, we bound the
target domain error using a finite source domain labeled sample and finite source and target domain
unlabeled samples.
Our experiments illustrate the utility of our bound on target domain error, but they do not explore
the accuracy of our approximate H-distance. This is an important area of exploration for future
work. Finally our theory points toward an interesting new direction for domain adaptation. Rather
than heuristically choosing a representation, as previous research has done [6], we can try to learn
a representation which directly minimizes a combination of the terms in theorem 2. If we learn
mappings from some parametric family (linear projections, for example), we can give a bound on
the error in terms of the complexity of this family. This may do better than the current heuristics,
and we are also investigating theory and algorithms for this.
7
Conclusions
We presented an analysis of representations for domain adaptation. It is reasonable to think that a
good representation is the key to effective domain adaptation, and our theory backs up that intuition.
Theorem 2 gives an upper bound on the generalization of a classifier trained on a source domain and
applied in a target domain. The bound depends on the representation and explicitly demonstrates the
tradeoff between low empirical source domain error and a small difference between distributions.
Under the assumption that the labeling function $f$ is close to our hypothesis class $H$, we can compute
the bound from finite samples. The relevant distributional divergence term can be written as the A-distance of Kifer et al. [9]. Computing the A-distance is equivalent to finding the minimum-error
classifier. For hyperplane classifiers in $\mathbb{R}^d$, this is an NP-hard problem, but we give experimental
evidence that minimizing a convex upper bound on the error, as in normal classification, can give a
reasonable approximation to the A-distance.
Our experiments indicate that the heuristic structural correspondence learning method [6] does in
fact simultaneously achieve low A-distance as well as a low margin-based loss. This provides a
justification for the heuristic choices of SCL ?pivots?. Finally we note that our theory points to
an interesting new algorithm for domain adaptation. Instead of making heuristic choices, we are
investigating algorithms which directly minimize a combination of the A-distance and the empirical
training margin.
References
[1] R. Arriaga and S. Vempala. An algorithmic theory of learning robust concepts and random
projection. In FOCS, volume 40, 1999.
[2] T. Batu, L. Fortnow, R. Rubinfeld, W. Smith, and P. White. Testing that distributions are close.
In FOCS, volume 41, pages 259–269, 2000.
[3] J. Baxter. Learning internal representations. In COLT ?95: Proceedings of the eighth annual
conference on Computational learning theory, pages 311–320, New York, NY, USA, 1995.
[4] S. Ben-David, N. Eiron, and P. Long. On the difficulty of approximately maximizing agreements. Journal of Computer and System Sciences, 66:496–514, 2003.
[5] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. In
COLT 2003: Proceedings of the sixteenth annual conference on Computational learning theory, 2003.
[6] J. Blitzer, R. McDonald, and F. Pereira. Domain adaption with structural correspondence
learning. In EMNLP, 2006.
[7] K. Crammer, M. Kearns, and J. Wortman. Learning from data of variable quality. In Neural
Information Processing Systems (NIPS), Vancouver, Canada, 2005.
[8] W. Johnson and J. Lindenstrauss. Extension of Lipschitz mappings to Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[9] D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In Very Large
Databases (VLDB), 2004.
[10] C. Manning. Foundations of Statistical Natural Language Processing. MIT Press, Boston,
1999.
[11] D. McClosky, E. Charniak, and M. Johnson. Reranking and self-training for parser adaptation.
In ACL, 2006.
[12] M. Sugiyama and K. Mueller. Generalization error estimation under covariate shift. In Workshop on Information-Based Induction Sciences, 2005.
[13] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Sharing clusters among related groups:
Hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, volume 17, 2005.
[14] V. Vapnik. Statistical Learning Theory. John Wiley, New York, 1998.
[15] T. Zhang. Solving large-scale linear prediction problems with stochastic gradient descent. In
ICML, 2004.
Online Classification for Complex Problems Using
Simultaneous Projections
Yonatan Amit1, Shai Shalev-Shwartz1, Yoram Singer1,2
1 School of Computer Sci. & Eng., The Hebrew University, Jerusalem 91904, Israel
2 Google Inc., 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA
{mitmit,shais,singer}@cs.huji.ac.il
Abstract
We describe and analyze an algorithmic framework for online classification where
each online trial consists of multiple prediction tasks that are tied together. We
tackle the problem of updating the online hypothesis by defining a projection
problem in which each prediction task corresponds to a single linear constraint.
These constraints are tied together through a single slack parameter. We then introduce a general method for approximately solving the problem by projecting
simultaneously and independently on each constraint which corresponds to a prediction sub-problem, and then averaging the individual solutions. We show that
this approach constitutes a feasible, albeit not necessarily optimal, solution for the
original projection problem. We derive concrete simultaneous projection schemes
and analyze them in the mistake bound model. We demonstrate the power of
the proposed algorithm in experiments with online multiclass text categorization.
Our experiments indicate that a combination of class-dependent features with the
simultaneous projection method outperforms previously studied algorithms.
1
Introduction
In this paper we discuss and analyze a framework for devising efficient online learning algorithms
for complex prediction problems such as multiclass categorization. In the settings we cover, a complex prediction problem is cast as the task of simultaneously coping with multiple simplified subproblems which are nonetheless tied together. For example, in multiclass categorization, the task is
to predict a single label out of k possible outcomes. Our simultaneous projection approach is based
on the fact that we can retrospectively (after making a prediction) cast the problem as the task of
making k − 1 binary decisions, each of which involves the correct label and one of the competing labels. The performance of the k − 1 predictions is measured through a single loss. Our approach
stands in contrast to previously studied methods which can be roughly be partitioned into three
paradigms. The first and probably the simplest previously studied approach is to break the problem
into multiple decoupled problems that are solved independently. Such an approach was used for
instance by Weston and Watkins [1] for batch learning of multiclass support vector machines. The
simplicity of this approach also underscores its deficiency as it is detached from the original loss of
the complex decision problem. The second approach maintains the original structure of the problem
but focuses on a single, worst performing, derived sub-problem (see for instance [2]). While this
approach adheres to the original structure of the problem, the resulting update mechanism is by construction sub-optimal as it overlooks almost all of the constraints imposed by the complex prediction problem. (See also [6] for analysis and explanation of the sub-optimality of this approach.)
The third approach for dealing with complex problems is to tailor a specific efficient solution for
the problem at hand. While this approach yielded efficient learning algorithms for multiclass categorization problems [2] and aesthetic solutions for structured output problems [3, 4], devising these
algorithms required dedicated efforts. Moreover, tailored solutions typically impose rather restrictive assumptions on the representation of the data in order to yield efficient algorithmic solutions.
In contrast to previously studied approaches, we propose a simple, general, and efficient framework
for online learning of a wide variety of complex problems. We do so by casting the online update
task as an optimization problem in which the newly devised hypothesis is required to be similar to
the current hypothesis while attaining a small loss on multiple binary prediction problems. Casting
the online learning task as a sequence of instantaneous optimization problems was first suggested
and analyzed by Kivinen and Warmuth [12] for binary classification and regression problems. In
our optimization-based approach, the complex decision problem is cast as an optimization problem
that consists of multiple linear constraints each of which represents a simplified sub-problem. These
constraints are tied through a single slack variable whose role is to asses the overall prediction
quality for the complex problem. We describe and analyze a family of two-phase algorithms. In the
first phase, the algorithms solve simultaneously multiple sub-problems. Each sub-problem distills
to an optimization problem with a single linear constraint from the original multiple-constraints
problem. The simple structure of each single-constraint problem results in an analytical solution
which is efficiently computable. In the second phase, the algorithms take a convex combination of
the independent solutions to obtain a solution for the multiple-constraints problem. The end result is
an approach whose time complexity and mistake bounds are equivalent to approaches which solely
deal with the worst-violating constraint [9]. In practice, though, the performance of the simultaneous
projection framework is much better than single-constraint update schemes.
2
Problem Setting
In this section we introduce the notation used throughout the paper and formally describe our problem setting. We denote vectors by lower case bold face letters (e.g. $\mathbf{x}$ and $\omega$), where the $j$'th element of $\mathbf{x}$ is denoted by $x_j$. We denote matrices by upper case bold face letters (e.g. $\mathbf{X}$), where the $j$'th row of $\mathbf{X}$ is denoted by $\mathbf{x}_j$. The set of integers $\{1, \ldots, k\}$ is denoted by $[k]$. Finally, we use the hinge function $[a]_+ = \max\{0, a\}$.

Online learning is performed in a sequence of trials. At trial $t$ the algorithm receives a matrix $\mathbf{X}^t$ of size $k_t \times n$, where each row of $\mathbf{X}^t$ is an instance, and is required to make a prediction on the label associated with each instance. We denote the vector of predicted labels by $\hat{\mathbf{y}}^t$. We allow $\hat{y}^t_j$ to take any value in $\mathbb{R}$, where the actual label being predicted is $\mathrm{sign}(\hat{y}^t_j)$ and $|\hat{y}^t_j|$ is the confidence in the prediction. After making a prediction $\hat{\mathbf{y}}^t$ the algorithm receives the correct labels $\mathbf{y}^t$, where $y^t_j \in \{-1, 1\}$ for all $j \in [k_t]$. In this paper we assume that the predictions in each trial are formed by calculating the inner product between a weight vector $\omega^t \in \mathbb{R}^n$ and each instance in $\mathbf{X}^t$, thus $\hat{\mathbf{y}}^t = \mathbf{X}^t \omega^t$. Our goal is to perfectly predict the entire vector $\mathbf{y}^t$. We thus say that the vector $\hat{\mathbf{y}}^t$ was imperfectly predicted if there exists an outcome $j$ such that $y^t_j \neq \mathrm{sign}(\hat{y}^t_j)$. That is, we suffer a unit loss on trial $t$ if there exists $j$ such that $\mathrm{sign}(\hat{y}^t_j) \neq y^t_j$. Directly minimizing this combinatorial error is a computationally difficult task. Therefore, we use an adaptation of the hinge loss, defined as $\ell(\hat{\mathbf{y}}^t, \mathbf{y}^t) = \max_{j \in [k_t]} [1 - y^t_j \hat{y}^t_j]_+$, as a proxy for the combinatorial error. The quantity $y^t_j \hat{y}^t_j$ is often referred to as the (signed) margin of the prediction and ties the correctness and the confidence in the prediction. We use $\ell(\omega; (\mathbf{X}^t, \mathbf{y}^t))$ to denote $\ell(\hat{\mathbf{y}}^t, \mathbf{y}^t)$ where $\hat{\mathbf{y}}^t = \mathbf{X}^t \omega$. We also denote the set of instances whose labels were predicted incorrectly by $M^t = \{j \mid \mathrm{sign}(\hat{y}^t_j) \neq y^t_j\}$, and similarly the set of instances whose hinge losses are greater than zero by $\Phi^t = \{j \mid [1 - y^t_j \hat{y}^t_j]_+ > 0\}$.
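A minimal sketch of these quantities, assuming NumPy arrays with the rows of X_t as instances; the function names are ours.

    import numpy as np

    def multi_hinge_loss(omega, X_t, y_t):
        """The loss of Sec. 2: maximum per-instance hinge loss of the trial."""
        y_hat = X_t @ omega                  # predictions for all k_t instances
        return np.max(np.maximum(0.0, 1.0 - y_t * y_hat))

    def error_sets(omega, X_t, y_t):
        """Index sets M^t (sign mistakes) and Phi^t (positive hinge loss)."""
        y_hat = X_t @ omega
        M = np.where(np.sign(y_hat) != y_t)[0]
        Phi = np.where(1.0 - y_t * y_hat > 0.0)[0]
        return M, Phi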
3
Derived Problems
In this section we further explore the motivation for our problem setting by describing two different
complex decision tasks and showing how they can be cast as special cases of our setting. We also
would like to note that our approach can be employed in other prediction problems (see Sec. 7).
Multilabel Categorization: In the multilabel categorization task each instance is associated with a set of relevant labels from the set $[k]$. The multilabel categorization task can be cast as a special case of a ranking task in which the goal is to rank the relevant labels above the irrelevant ones. Many learning algorithms for this task employ class-dependent features (for example, see [7]). For simplicity, assume that each class is associated with $n$ features and denote by $\phi(\mathbf{x}, r)$ the feature vector for class $r$. We would like to note that features obtained for different classes typically relay different information and are often substantially different. A categorizer, or label ranker, is based on a weight vector $\omega$. A vector $\omega$ induces a score for each class, $\omega \cdot \phi(\mathbf{x}, r)$, which, in turn, defines an ordering of the classes. A learner is required to build a vector $\omega$ that successfully ranks the labels according to their relevance, namely for each pair of classes $(r, s)$ such that $r$ is relevant while $s$ is not, the class $r$ should be ranked higher than the class $s$. Thus we require that $\omega \cdot \phi(\mathbf{x}, r) > \omega \cdot \phi(\mathbf{x}, s)$ for every such pair $(r, s)$. We say that a label ranking is imperfect if there exists any pair $(r, s)$ which violates this requirement. The loss associated with each such violation is $[1 - (\omega \cdot \phi(\mathbf{x}, r) - \omega \cdot \phi(\mathbf{x}, s))]_+$ and the loss of the categorizer is defined as the maximum over the losses induced by the violated pairs. In order to map the problem to our setting, we define a virtual instance for every pair $(r, s)$ such that $r$ is relevant and $s$ is not. The new instance is the $n$ dimensional vector defined by $\phi(\mathbf{x}, r) - \phi(\mathbf{x}, s)$. The label associated with all of the instances is set to $1$. It is clear that an imperfect categorizer makes a prediction mistake on at least one of the instances, and that the losses defined by both problems are the same.
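A minimal sketch of this reduction, assuming a user-supplied feature map phi(x, r); the names are illustrative.

    import numpy as np

    def multilabel_to_instances(phi, x, relevant, irrelevant):
        """Reduce a multilabel example to the virtual binary instances above.

        phi(x, r) must return the class-dependent feature vector of class r;
        one instance phi(x, r) - phi(x, s) with label +1 is created for every
        relevant class r and irrelevant class s.
        """
        rows = [phi(x, r) - phi(x, s) for r in relevant for s in irrelevant]
        return np.vstack(rows), np.ones(len(rows))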
Figure 1: Illustration of the simultaneous projections algorithm: each instance casts a constraint on $\omega$ (of the form $y^t_j \, \omega \cdot \mathbf{x}^t_j \geq 1$) and each such constraint defines a halfspace of feasible solutions. We project on each halfspace in parallel and the new vector is a weighted average of these projections.
Ordinal Regression: In the problem of ordinal regression an instance $\mathbf{x}$ is a vector of $n$ features that is associated with a target rank $y \in [k]$. A learning algorithm is required to find a vector $\omega$ and $k$ thresholds $b_1 \leq \cdots \leq b_{k-1} \leq b_k = \infty$. The value of $\omega \cdot \mathbf{x}$ provides a score from which the prediction value can be defined as the smallest index $i$ for which $\omega \cdot \mathbf{x} < b_i$, that is $\hat{y} = \min\{i \mid \omega \cdot \mathbf{x} < b_i\}$. In order to obtain a correct prediction, an ordinal regressor is required to ensure that $\omega \cdot \mathbf{x} \geq b_i$ for all $i < y$ and that $\omega \cdot \mathbf{x} < b_i$ for $i \geq y$. It is considered a prediction mistake if any of these constraints is violated. In order to map the ordinal regression task to our setting, we introduce $k - 1$ instances. Each instance is a vector in $\mathbb{R}^{n+k-1}$. The first $n$ entries of the vector are set to be the elements of $\mathbf{x}$; the remaining $k - 1$ entries are set to $-\delta_{i,j}$. That is, the $i$'th appended entry in the $j$'th vector is set to $-1$ if $i = j$ and to $0$ otherwise. The label of the first $y - 1$ instances is $1$, while the remaining $k - y$ instances are labeled $-1$. Once we have learned an expanded vector in $\mathbb{R}^{n+k-1}$, the regressor $\omega$ is obtained by taking the first $n$ components of the expanded vector and the thresholds $b_1, \ldots, b_{k-1}$ are set to be the last $k - 1$ elements. A prediction mistake on any of the instances corresponds to an incorrect rank in the original problem.
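A minimal sketch of this construction; the function name and array layout are ours.

    import numpy as np

    def ordinal_to_instances(x, y, k):
        """Reduce an ordinal example (x in R^n, rank y in {1, ..., k}) to the
        k - 1 expanded binary instances described above."""
        n = len(x)
        X_t = np.zeros((k - 1, n + k - 1))
        X_t[:, :n] = x                                    # copy x into each row
        X_t[np.arange(k - 1), n + np.arange(k - 1)] = -1  # the -delta_{i,j} part
        # Instances j < y are labeled +1, the remaining k - y are labeled -1.
        y_t = np.where(np.arange(1, k) < y, 1.0, -1.0)
        return X_t, y_t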
4
Simultaneous Projection Algorithms
Recall that on trial $t$ the algorithm receives a matrix $\mathbf{X}^t$ of $k_t$ instances and predicts $\hat{\mathbf{y}}^t = \mathbf{X}^t \omega^t$. After performing its prediction, the algorithm receives the corresponding labels $\mathbf{y}^t$. Each such instance-label pair casts a constraint on $\omega^t$, namely $y^t_j \, \omega^t \cdot \mathbf{x}^t_j \geq 1$. If all the constraints are satisfied by $\omega^t$ then $\omega^{t+1}$ is set to be $\omega^t$ and the algorithm proceeds to the next trial. Otherwise, we would like to set $\omega^{t+1}$ as close as possible to $\omega^t$ while satisfying all constraints.

Such an aggressive approach may be sensitive to outliers and over-fitting. Thus, we allow some of the constraints to remain violated by introducing a tradeoff between the change to $\omega^t$ and the loss attained on $(\mathbf{X}^t, \mathbf{y}^t)$. Formally, we would like to set $\omega^{t+1}$ to be the solution of the following optimization problem, $\min_{\omega \in \mathbb{R}^n} \frac{1}{2}\|\omega - \omega^t\|^2 + C \, \ell(\omega; (\mathbf{X}^t, \mathbf{y}^t))$, where $C$ is a tradeoff parameter. As we discuss below, this formalism effectively translates to a cap on the maximal change to $\omega^t$. We rewrite the above optimization by introducing a single slack variable as follows:
$$\min_{\omega \in \mathbb{R}^n, \, \xi \geq 0} \; \frac{1}{2}\|\omega - \omega^t\|^2 + C \xi \quad \text{s.t.} \quad \forall j \in [k_t]: \; y^t_j \, \omega \cdot \mathbf{x}^t_j \geq 1 - \xi. \tag{1}$$
We denote the objective function of Eq. (1) by $\mathcal{P}^t$ and refer to it as the instantaneous primal problem to be solved on trial $t$. The dual optimization problem of $\mathcal{P}^t$ is the maximization problem
$$\max_{\alpha^t_1, \ldots, \alpha^t_{k_t}} \; \sum_{j=1}^{k_t} \alpha^t_j - \frac{1}{2}\Big\|\omega^t + \sum_{j=1}^{k_t} \alpha^t_j y^t_j \mathbf{x}^t_j\Big\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{k_t} \alpha^t_j \leq C, \;\; \forall j: \alpha^t_j \geq 0. \tag{2}$$
Each dual variable corresponds to a single constraint of the primal problem. The minimizer of the primal problem is calculated from the optimal dual solution as follows, $\omega^{t+1} = \omega^t + \sum_{j=1}^{k_t} \alpha^t_j y^t_j \mathbf{x}^t_j$. Unfortunately, in the common case, where each $\mathbf{x}^t_j$ is in an arbitrary orientation, there does not exist an analytic solution for the dual problem (Eq. (2)). We tackle the problem by breaking it down into $k_t$ reduced problems, each of which focuses on a single dual variable. Formally, for the $j$'th variable, the $j$'th reduced problem solves Eq. (2) while fixing $\alpha^t_{j'} = 0$ for all $j' \neq j$. Each reduced optimization problem amounts to the following problem
$$\max_{\alpha^t_j} \; \alpha^t_j - \frac{1}{2}\big\|\omega^t + \alpha^t_j y^t_j \mathbf{x}^t_j\big\|^2 \quad \text{s.t.} \quad \alpha^t_j \in [0, C]. \tag{3}$$
We next obtain an exact or approximate solution for each reduced problem as if it were independent of the rest. We then choose a distribution $\lambda^t \in \Delta^{k_t}$, where $\Delta^{k_t} = \{\lambda \in \mathbb{R}^{k_t} : \sum_j \lambda_j = 1, \; \lambda_j \geq 0\}$ is the probability simplex, and multiply each $\alpha^t_j$ by the corresponding $\lambda^t_j$. Since $\lambda^t \in \Delta^{k_t}$, this yields a feasible solution to the dual problem defined in Eq. (2) for the following reason: each $\lambda^t_j \alpha^t_j \geq 0$, and the fact that $\alpha^t_j \leq C$ implies that $\sum_{j=1}^{k_t} \lambda^t_j \alpha^t_j \leq C$. Finally, the algorithm uses the combined solution and sets $\omega^{t+1} = \omega^t + \sum_{j=1}^{k_t} \lambda^t_j \alpha^t_j y^t_j \mathbf{x}^t_j$.

We next present three schemes to obtain a solution for the reduced problem (Eq. (3)) and then combine the solutions into a single update.
Simultaneous Perceptron: The simplest of the update forms generalizes the famous Perceptron algorithm from [8] by setting $\alpha^t_j$ to $C$ if the $j$'th instance is incorrectly labeled, and to $0$ otherwise. We similarly set the weight $\lambda^t_j$ to be $1/|M^t|$ for $j \in M^t$ and to $0$ otherwise. We abbreviate this scheme as the SimPerc algorithm.

Soft Simultaneous Projections: The soft simultaneous projections scheme uses the fact that each reduced problem has an analytic solution, yielding $\alpha^t_j = \min\big\{C, \; \ell\big(\omega^t; (\mathbf{x}^t_j, y^t_j)\big) / \|\mathbf{x}^t_j\|^2\big\}$. We independently assign each $\alpha^t_j$ this optimal solution. We next set $\lambda^t_j$ to be $1/|\Phi^t|$ for $j \in \Phi^t$ and to $0$ otherwise. We would like to comment that this solution may update $\alpha^t_j$ also for instances which were correctly classified, as long as the margin they attain is not sufficiently large. We abbreviate this scheme as the SimProj algorithm.

Conservative Simultaneous Projections: Combining ideas from both methods, the conservative simultaneous projections scheme optimally sets $\alpha^t_j$ according to the analytic solution. The difference with the SimProj algorithm lies in the selection of $\lambda^t$. In the conservative scheme only the instances which were incorrectly predicted ($j \in M^t$) are assigned a positive weight. Put differently, $\lambda^t_j$ is set to $1/|M^t|$ for $j \in M^t$ and to $0$ otherwise. We abbreviate this scheme as the ConProj algorithm.

Figure 2: Simultaneous projections algorithm.
Input: aggressiveness parameter $C > 0$
Initialize: $\omega^1 = (0, \ldots, 0)$
For $t = 1, 2, \ldots, T$:
  Receive instance matrix $\mathbf{X}^t \in \mathbb{R}^{k_t \times n}$
  Predict $\hat{\mathbf{y}}^t = \mathbf{X}^t \omega^t$
  Receive correct labels $\mathbf{y}^t$
  Suffer loss $\ell(\omega^t; (\mathbf{X}^t, \mathbf{y}^t))$
  If $\ell > 0$:
    Choose importance weights $\lambda^t \in \Delta^{k_t}$
    Choose individual dual solutions $\alpha^t_j$
    Update $\omega^{t+1} = \omega^t + \sum_{j=1}^{k_t} \lambda^t_j \alpha^t_j y^t_j \mathbf{x}^t_j$

To recap, on each trial $t$ we obtain a feasible solution for the instantaneous dual given in Eq. (2). This solution combines independently calculated $\alpha^t_j$ according to a weight vector $\lambda^t \in \Delta^{k_t}$. While this solution may not be optimal, it does constitute an infrastructure for obtaining a mistake bound and, as we demonstrate in Sec. 6, performs well in practice.
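A minimal sketch of one trial of the algorithm of Fig. 2 with the three schemes above, assuming NumPy arrays. The uniform choice of the weights on the support set follows the text; the small-norm guard is our addition.

    import numpy as np

    def simproj_update(omega, X_t, y_t, C, scheme="SimProj"):
        """One trial of the simultaneous projections algorithm (Fig. 2).

        Rows of X_t are instances, y_t is in {-1, +1}^{k_t};
        scheme is one of "SimPerc", "SimProj", "ConProj".
        """
        y_hat = X_t @ omega
        hinge = np.maximum(0.0, 1.0 - y_t * y_hat)
        if hinge.max() <= 0.0:
            return omega                        # all constraints satisfied
        mistakes = np.sign(y_hat) != y_t        # the set M^t
        positive_loss = hinge > 0.0             # the set Phi^t
        if scheme == "SimPerc":
            alpha = np.where(mistakes, C, 0.0)  # alpha_j = C on sign mistakes
            support = mistakes
        else:
            # Analytic solution of each single-constraint problem (Eq. (3)).
            norms2 = np.maximum((X_t ** 2).sum(axis=1), 1e-12)
            alpha = np.minimum(C, hinge / norms2)
            support = positive_loss if scheme == "SimProj" else mistakes
        # Uniform importance weights lambda^t on the chosen support set.
        lam = support / max(support.sum(), 1)
        return omega + X_t.T @ (lam * alpha * y_t)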
5
Analysis
The algorithms described in the previous section perform updates in order to increase the instantaneous dual problem defined in Eq. (2). We now use the mistake bound model to derive an upper bound on the number of trials on which the predictions of the SimPerc and ConProj algorithms are imperfect. Following [6], the first step in the analysis is to tie the instantaneous dual problems to a global loss function. To do so, we introduce a primal optimization problem defined over the entire sequence of examples as follows, $\min_{\omega \in \mathbb{R}^n} \frac{1}{2}\|\omega\|^2 + C \sum_{t=1}^{T} \ell(\omega; (\mathbf{X}^t, \mathbf{y}^t))$. We rewrite the optimization problem as the following equivalent constrained optimization problem,
$$\min_{\omega \in \mathbb{R}^n, \, \boldsymbol{\xi} \in \mathbb{R}^T} \; \frac{1}{2}\|\omega\|^2 + C \sum_{t=1}^{T} \xi_t \quad \text{s.t.} \quad \forall t \in [T], \forall j \in [k_t]: \; y^t_j \, \omega \cdot \mathbf{x}^t_j \geq 1 - \xi_t, \quad \forall t: \xi_t \geq 0. \tag{4}$$
We denote the value of the objective function at $(\omega, \boldsymbol{\xi})$ for this optimization problem by $\mathcal{P}(\omega, \boldsymbol{\xi})$. A competitor who may see the entire sequence of examples in advance may in particular set $(\omega, \boldsymbol{\xi})$ to be the minimizer of the problem, which we denote by $(\omega^\star, \boldsymbol{\xi}^\star)$. Standard usage of Lagrange multipliers yields that the dual of Eq. (4) is,
$$\max_{\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_T} \; \sum_{t=1}^{T} \sum_{j=1}^{k_t} \alpha_{t,j} - \frac{1}{2}\Big\|\sum_{t=1}^{T} \sum_{j=1}^{k_t} \alpha_{t,j} y^t_j \mathbf{x}^t_j\Big\|^2 \quad \text{s.t.} \quad \forall t: \sum_{j=1}^{k_t} \alpha_{t,j} \leq C, \quad \forall t, j: \alpha_{t,j} \geq 0. \tag{5}$$
We denote the value of the objective function of Eq. (5) by $\mathcal{D}(\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_T)$, where each $\boldsymbol{\alpha}_t$ is a vector in $\mathbb{R}^{k_t}$. Through our derivation we use the fact that any set of dual variables $\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_T$ defines a feasible solution $\omega = \sum_{t=1}^{T} \sum_{j=1}^{k_t} \alpha_{t,j} y^t_j \mathbf{x}^t_j$ with a corresponding assignment of the slack variables.
Clearly, the optimization problem given by Eq. (5) depends on all the examples from the first trial through time step $T$ and thus can only be solved in hindsight. We note, however, that if we ensure that $\boldsymbol{\alpha}_s = \mathbf{0}$ for all $s > t$ then the dual function no longer depends on instances occurring on rounds following round $t$. As we show next, we use this primal-dual view to derive the skeleton algorithm from Fig. 2 by finding a new feasible solution for the dual problem on every trial. Formally, the instantaneous dual problem, given by Eq. (2), is equivalent (after omitting an additive constant) to the following constrained optimization problem,
$$\max_{\boldsymbol{\alpha}} \; \mathcal{D}(\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_{t-1}, \boldsymbol{\alpha}, \mathbf{0}, \cdots, \mathbf{0}) \quad \text{s.t.} \quad \boldsymbol{\alpha} \geq \mathbf{0}, \;\; \sum_{j=1}^{k_t} \alpha_j \leq C. \tag{6}$$
That is, the instantaneous dual problem is obtained from $\mathcal{D}(\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_T)$ by fixing $\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_{t-1}$ to the values set in previous rounds, forcing $\boldsymbol{\alpha}_{t+1}$ through $\boldsymbol{\alpha}_T$ to the zero vectors, and choosing a feasible vector for $\boldsymbol{\alpha}_t$. Given the set of dual variables $\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_{t-1}$ it is straightforward to show that the prediction vector used on trial $t$ is $\omega^t = \sum_{s=1}^{t-1} \sum_j \alpha_{s,j} y^s_j \mathbf{x}^s_j$. Equipped with these relations, and omitting constants which do not depend on $\boldsymbol{\alpha}_t$, Eq. (6) can be rewritten as,
$$\max_{\alpha_1, \ldots, \alpha_{k_t}} \; \sum_{j=1}^{k_t} \alpha_j - \frac{1}{2}\Big\|\omega^t + \sum_{j=1}^{k_t} \alpha_j y^t_j \mathbf{x}^t_j\Big\|^2 \quad \text{s.t.} \quad \forall j: \alpha_j \geq 0, \;\; \sum_{j=1}^{k_t} \alpha_j \leq C. \tag{7}$$
The problems defined by Eq. (7) and Eq. (2) are equivalent. Thus, weighing the variables $\alpha^t_1, \ldots, \alpha^t_{k_t}$ by $\lambda^t_1, \ldots, \lambda^t_{k_t}$ also yields a feasible solution for the problem defined in Eq. (6), namely $\alpha_{t,j} = \lambda^t_j \alpha^t_j$. We now tie all of these observations together by using the weak-duality theorem. Our first bound is given for the SimPerc algorithm.

Theorem 1. Let $(\mathbf{X}^1, \mathbf{y}^1), \ldots, (\mathbf{X}^T, \mathbf{y}^T)$ be a sequence of examples where $\mathbf{X}^t$ is a matrix of $k_t$ examples and $\mathbf{y}^t$ are the associated labels. Assume that for all $t$ and $j$ the norm of an instance $\mathbf{x}^t_j$ is at most $R$. Then, for any $\omega^\star \in \mathbb{R}^n$ the number of trials on which the prediction of SimPerc is imperfect is at most,
$$\frac{\frac{1}{2}\|\omega^\star\|^2 + C \sum_{t=1}^{T} \ell\big(\omega^\star; (\mathbf{X}^t, \mathbf{y}^t)\big)}{C - \frac{1}{2}C^2 R^2}.$$
Proof. To prove the theorem we make use of the weak-duality theorem. Recall that any dual feasible solution induces a value for the dual's objective function which is upper bounded by the optimum value of the primal problem, $\mathcal{P}(\omega^\star, \boldsymbol{\xi}^\star)$. In particular, the solution obtained at the end of trial $T$ is dual feasible, and thus $\mathcal{D}(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_T) \leq \mathcal{P}(\omega^\star, \boldsymbol{\xi}^\star)$. We now rewrite the left hand-side of the above equation as the following sum,
$$\mathcal{D}(\mathbf{0}, \ldots, \mathbf{0}) + \sum_{t=1}^{T} \Big[ \mathcal{D}(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_t, \mathbf{0}, \ldots, \mathbf{0}) - \mathcal{D}(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_{t-1}, \mathbf{0}, \ldots, \mathbf{0}) \Big]. \tag{8}$$
Note that $\mathcal{D}(\mathbf{0}, \ldots, \mathbf{0})$ equals $0$. Therefore, denoting by $\Delta_t$ the difference in two consecutive dual objective values, $\mathcal{D}(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_t, \mathbf{0}, \ldots, \mathbf{0}) - \mathcal{D}(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_{t-1}, \mathbf{0}, \ldots, \mathbf{0})$, we get that $\sum_{t=1}^{T} \Delta_t \leq \mathcal{P}(\omega^\star, \boldsymbol{\xi}^\star)$. We now turn to bounding $\Delta_t$ from below. First, note that if the prediction on trial $t$ is perfect ($M^t = \emptyset$) then SimPerc sets $\boldsymbol{\alpha}_t$ to the zero vector and thus $\Delta_t = 0$. We can thus focus on trials for which the algorithm's prediction is imperfect. We remind the reader that by unraveling the update of $\omega^t$ we get that $\omega^t = \sum_{s<t} \sum_{j=1}^{k_s} \alpha_{s,j} y^s_j \mathbf{x}^s_j$. We now rewrite $\Delta_t$ as follows,
update of ? t we get that ? t = s<t j=1
2
kt
kt
X
X
1
1
2
t
t t
?t =
?t,j ?
? +
?t,j yj xj
+
? t
.
(9)
2
2
j=1
j=1
Pkt t
?j = 1, which lets us further expand Eq. (9) and write,
By construction, ?t,j = ?tj ?jt and j=1
2
kt
kt
kt
X
X
2
1X
1
t
t t t t
+
?
+
?
?
y
x
?t =
?tj ?jt ?
?tj
? t
.
j j j j
2
2 j=1
j=1
j=1
2
The squared norm, k?k is a convex function in its vector argument and thus ?t is concave, which
yields the following lower bound on ?t ,
kt
X
2 1
2
1
?tj ?jt ?
? t + ?jt yjt xtj
+
? t
?t ?
.
(10)
2
2
j=1
The SimPerc algorithm sets $\lambda^t_j$ to be $1/|M^t|$ for all $j \in M^t$ and to be $0$ otherwise. Furthermore, for all $j \in M^t$, $\alpha^t_j$ is set to $C$. Thus, the right hand-side of Eq. (10) can be further simplified and written as,
$$\Delta_t \geq \sum_{j \in M^t} \lambda^t_j \Big( C - \frac{1}{2}\big\|\omega^t + C y^t_j \mathbf{x}^t_j\big\|^2 \Big) + \frac{1}{2}\|\omega^t\|^2.$$
We expand the norm in the above equation and obtain that,
$$\Delta_t \geq \sum_{j \in M^t} \lambda^t_j \Big( C - \frac{1}{2}\|\omega^t\|^2 - C y^t_j \, \omega^t \cdot \mathbf{x}^t_j - \frac{1}{2} C^2 \|y^t_j \mathbf{x}^t_j\|^2 \Big) + \frac{1}{2}\|\omega^t\|^2. \tag{11}$$
The set $M^t$ consists of indices of instances which were incorrectly classified. Thus, $y^t_j (\omega^t \cdot \mathbf{x}^t_j) \leq 0$ for every $j \in M^t$. Therefore, since $\sum_{j \in M^t} \lambda^t_j = 1$, $\Delta_t$ can further be bounded from below as follows,
$$\Delta_t \geq \sum_{j \in M^t} \lambda^t_j \Big( C - \frac{1}{2} C^2 \|y^t_j \mathbf{x}^t_j\|^2 \Big) \geq \sum_{j \in M^t} \lambda^t_j \Big( C - \frac{1}{2} C^2 R^2 \Big) = C - \frac{1}{2} C^2 R^2, \tag{12}$$
where for the second inequality we used the fact that the norm of all the instances is bounded by $R$. To recap, we have shown that on trials for which the prediction is imperfect $\Delta_t \geq C - \frac{1}{2} C^2 R^2$, while on perfect trials where no mistake is made $\Delta_t = 0$. Putting all the inequalities together we obtain the following bound,
$$\Big( C - \frac{1}{2} C^2 R^2 \Big) \varepsilon \; \leq \; \sum_{t=1}^{T} \Delta_t \; = \; \mathcal{D}(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_T) \; \leq \; \mathcal{P}(\omega^\star, \boldsymbol{\xi}^\star), \tag{13}$$
where $\varepsilon$ is the number of imperfect trials. Finally, rewriting $\mathcal{P}(\omega^\star, \boldsymbol{\xi}^\star)$ as $\frac{1}{2}\|\omega^\star\|^2 + C \sum_{t=1}^{T} \ell\big(\omega^\star; (\mathbf{X}^t, \mathbf{y}^t)\big)$ yields the bound stated in the theorem.
The ConProj algorithm updates the same set of dual variables as the SimPerc algorithm, but selects $\alpha^t_j$ to be the optimal solution of Eq. (3). Thus, the value of $\Delta_t$ attained by the ConProj algorithm is never lower than the value attained by the SimPerc algorithm. The following corollary is a direct consequence of this observation.

Corollary 1. Under the same conditions of Thm. 1 and for any $\omega^\star \in \mathbb{R}^n$, the number of trials on which the prediction of ConProj is imperfect is at most,
$$\frac{\frac{1}{2}\|\omega^\star\|^2 + C \sum_{t=1}^{T} \ell\big(\omega^\star; (\mathbf{X}^t, \mathbf{y}^t)\big)}{C - \frac{1}{2}C^2 R^2}.$$
Table 1: The percentage of online mistakes of the three variants compared to Max-Update (single prototype (SP) and multi prototype (MP)) and the Mira algorithm. Experiments were performed on seven users of the Enron data set.

username      k    m     SimProj  ConProj  SimPerc  Max-SP  Max-MP  Mira
beck-s        101  1973  50.0     55.2     55.9     56.6    63.8    63.7
farmer-d      25   3674  27.4     30.3     30.7     30.0    28.6    31.8
kaminski-v    41   4479  43.1     47.8     47.0     49.5    49.6    47.3
kitchen-l     47   4017  42.9     47.0     49.0     48.0    54.9    52.6
lokay-m       11   2491  18.8     25.3     25.3     23.0    25.4    25.3
sanders-r     30   1190  20.7     25.6     23.2     23.8    36.3    34.1
williams-w3   18   2771  4.2      5.0      5.4      4.2     5.8     5.9
Note that the predictions of the SimPerc algorithm do not depend on the specific value of $C$; thus for $R = 1$ and an optimal choice of $C$ the bound attained in Thm. 1 becomes
$$\sum_{t=1}^{T} \ell\big(\omega^\star; (\mathbf{X}^t, \mathbf{y}^t)\big) + \frac{1}{2}\|\omega^\star\|^2 + \frac{1}{2}\sqrt{\|\omega^\star\|^4 + 4\,\|\omega^\star\|^2 \sum_{t=1}^{T} \ell\big(\omega^\star; (\mathbf{X}^t, \mathbf{y}^t)\big)}\,.$$
We omit the proof for lack of space; see [6] for a closely related analysis.
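A sketch of the optimization over $C$ behind this expression (our reconstruction; the derivation is not spelled out in the text):

    \[
      \text{Let } a = \tfrac{1}{2}\|\omega^\star\|^2 \text{ and }
      L = \sum_{t=1}^{T} \ell\big(\omega^\star; (\mathbf{X}^t, \mathbf{y}^t)\big),
      \text{ so the bound of Thm. 1 with } R = 1 \text{ is }
      f(C) = \frac{a + C L}{C - C^2/2}.
    \]
    \[
      f'(C) = 0 \;\Longrightarrow\; \tfrac{L}{2} C^2 + a C - a = 0
      \;\Longrightarrow\; \min_{C} f(C) = L + a + \sqrt{a^2 + 2 a L},
    \]

which equals the expression above.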
We conclude this section with a few closing words about the SimProj variant. The SimPerc and ConProj algorithms ensure a minimal increase in the dual by focusing solely on classification errors and ignoring margin errors. While this approach ensures a sufficient increase of the dual, in practice it appears to be a double-edged sword, as the SimProj algorithm performs better empirically. This superior empirical performance can be motivated by a refined derivation of the optimal choice for $\lambda$. This derivation will be provided in a long version of this manuscript.
6
Experiments
In this section we describe experimental results in order to demonstrate some of the merits of our algorithms. We tested the performance of the three variants described in Sec. 4 on a multiclass categorization task and compared them to previously studied algorithms for multiclass categorization. We compared our algorithms to the single-prototype and multi-prototype Max-Update algorithms from [9] and to the Mira algorithm [2]. The experiments were performed on the task of email classification using the Enron email dataset (available at http://www.cs.cmu.edu/~enron/enron_mail_030204.tar.gz). The learning goal was to correctly classify email messages into user defined folders. Thus, the instances in this dataset are email messages, while the set of classes are the user defined folders denoted by $\{1, \ldots, k\}$. We ran the experiments on the sequence of email messages from 7 different users.
Since each user employs different criteria for email classification, we treated each person as a separate online learning problem. We represented each email message as a vector with a component for every word in the corpus. On each trial, and for each class $r$, we constructed class-dependent vectors as follows. We set $\phi_j(\mathbf{x}^t, r)$ to twice the number of times the $j$'th word appeared in the message if it had also appeared in a fifth of the messages previously assigned to folder $r$. Similarly, we set $\phi_j(\mathbf{x}^t, r)$ to minus the number of appearances of the word if it had appeared in less than 2 percent of previous messages. In all other cases, we set $\phi_j(\mathbf{x}^t, r)$ to $0$. This class-dependent construction is closely related to the construction given in [10]. Next, we employed the mapping described in Sec. 3, and defined a set of $k - 1$ instances for each message as follows. Denote the relevant class by $r$; then for every irrelevant class $s \neq r$, we define an instance $\mathbf{x}^t_s = \phi(\mathbf{x}^t, r) - \phi(\mathbf{x}^t, s)$ and set its label to $1$. All these instances were combined into a single matrix $\mathbf{X}^t$ and were provided to the algorithm in trial $t$.
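A minimal sketch of this feature construction, assuming precomputed per-folder document frequencies; the thresholds follow the text while the variable names are ours.

    import numpy as np

    def class_dependent_features(counts, folder_doc_freq, n_folder_msgs):
        """Sketch of the class-dependent map phi(., r) described above.

        counts:          word counts of the current message (length = vocabulary)
        folder_doc_freq: for each word, number of earlier messages in folder r
                         containing it
        n_folder_msgs:   number of earlier messages assigned to folder r
        """
        phi = np.zeros_like(counts, dtype=float)
        n = max(n_folder_msgs, 1)
        frequent = folder_doc_freq >= 0.2 * n    # appeared in >= 1/5 of messages
        rare = folder_doc_freq < 0.02 * n        # appeared in < 2% of messages
        phi[frequent] = 2.0 * counts[frequent]   # boost words typical of folder r
        phi[rare] = -counts[rare]                # penalize words rare in folder r
        return phi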
The results of the experiments are summarized in Table 1. It is apparent that the SimProj algorithm outperforms all other algorithms. The performances of SimPerc and ConProj are comparable
with no obvious winner. It is worth noting that the Mira algorithm finds the optimum of a projection problem on each trial while our algorithms only find an approximate solution. However, Mira
employs a different approach in which there is a single input instance (instead of the set $\mathbf{X}^t$) and constructs multiple predictors (instead of a single vector $\omega$). Thus, Mira employs a larger hypothesis
space which is more difficult to learn in online settings. In addition, by employing a single vector representation of the email message, Mira cannot benefit from feature selection which yields class-dependent features.

Figure 3: The cumulative number of mistakes as a function of the number of trials for the users farmer-d, lokay-m, and sanders-r (curves: SimProj, ConProj, SimPerc, Mira).

It is also obvious that the simultaneous projection variants, while remaining
simple to implement, consistently outperform the Max-Update technique which is commonly used
in online multiclass classification. In Fig. 3 we plot the cumulative number of mistakes as a function
of the trial number for 3 of the 7 users. The graphs clearly indicate the high correlation between the
SimPerc and ConProj variants, while indicating the superiority of the SimProj variant.
7
Extensions and discussion
We presented a new approach for online categorization with complex output structure. Our algorithms decouple the complex optimization task into multiple sub-tasks, each of which is simple enough to be solved analytically. While the dual representation of the online problem imposes a global constraint on all the dual variables, namely $\sum_j \alpha^t_j \leq C$, our framework of simultaneous projections followed by averaging the solutions automatically adheres to this constraint and hence constitutes a feasible solution. It is worthwhile noting that our approach can also cope with multiple constraints of the more general form $\sum_j \mu_j \alpha_j \leq C$, where $\mu_j \geq 0$ for all $j$. The box constraint implied for each individual projection problem distills to $0 \leq \alpha_j \leq C/\mu_j$ and thus the simultaneous projection algorithm can be used verbatim. We are currently exploring the usage
of this extension in complex decision problems with multiple structural constraints. Another possible extension is to replace the squared norm regularization with other twice differentiable penalty
functions. Algorithms of this more general framework still attain similar mistake bounds and are
easy to implement so long as the induced individual problems are efficiently solvable. A particularly interesting case is obtained when setting the penalty to the relative entropy. In this case we
obtain a generalization of the Winnow and the EG algorithms [11, 12] for complex classification
problems. Another interesting direction is the usage of simultaneous projections for problems with
more constrained structured output such as max-margin networks [3].
References
[1] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In Proc. of the Seventh European Symposium on Artificial Neural Networks, April 1999.
[2] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
[3] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems 17, 2003.
[4] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In Proc. of the 21st Intl. Conference on Machine Learning, 2004.
[5] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
[6] S. Shalev-Shwartz and Y. Singer. Online learning meets optimization in the dual. In Proc. of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
[7] R. E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 32(2/3), 2000.
[8] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. (Reprinted in Neurocomputing (MIT Press, 1988).)
[9] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. Journal of Machine Learning Research, 7, March 2006.
[10] M. Fink, S. Shalev-Shwartz, Y. Singer, and S. Ullman. Online multiclass learning by interclass hypothesis sharing. In Proc. of the 23rd International Conference on Machine Learning, 2006.
[11] N. Littlestone. Learning when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[12] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–64, January 1997.
Reducing Calibration Time For Brain-Computer
Interfaces: A Clustering Approach
Matthias Krauledat1,2, Michael Schröder2, Benjamin Blankertz2, Klaus-Robert Müller1,2
1 Technical University Berlin, Str. des 17. Juni 135, 10 623 Berlin, Germany
2 Fraunhofer FIRST.IDA, Kekuléstr. 7, 12 489 Berlin, Germany
{kraulem,schroedm,blanker,klaus}@first.fhg.de
Abstract
Up to now even subjects that are experts in the use of machine learning based
BCI systems still have to undergo a calibration session of about 20-30 min. From
this data their (movement) intentions are so far inferred. We now propose a new paradigm that allows us to completely omit such calibration and instead transfer
knowledge from prior sessions. To achieve this goal we first define normalized
CSP features and distances in-between. Second, we derive prototypical features
across sessions: (a) by clustering or (b) by feature concatenation methods. Finally,
we construct a classifier based on these individualized prototypes and show that,
indeed, classifiers can be successfully transferred to a new session for a number
of subjects.
1 Introduction
BCI systems typically require training on the subject side and on the decoding side (e.g. [1, 2, 3,
4, 5, 6, 7]). While some approaches rely on operant conditioning with extensive subject training
(e.g. [2, 1]), others, such as the Berlin Brain-Computer Interface (BBCI) put more emphasis on the
machine side (e.g. [4, 8, 9]). But when following our philosophy of "letting the machines learn", a
calibration session of approximately 20-30 min was so far required, even for subjects that are beyond
the status of BCI novices.
The present contribution studies to what extent we can omit this brief calibration period. In other
words, is it possible to successfully transfer information from prior BCI sessions of the same subject that may have taken place days or even weeks ago? While this question is of high practical
importance to the BCI field, it has so far only been addressed in [10] in the context of transferring channel selection results from subject to subject. In contrast to this prior approach, we will focus on the more general question of transferring whole classifiers, resp. individualized representations
between sessions. Note that EEG (electroencephalogram) patterns typically vary strongly from one
session to another, due to different psychological pre-conditions of the subject. A subject might
for example show different states of fatigue and attention, or use diverse strategies for movement
imagination across sessions. A successful session to session transfer should thus capture generic
"invariant" discriminative features of the BCI task.
For this we first transform the EEG feature set from each prior session into a "standard" format (section 2) and normalize it. This allows us to define a consistent measure that can quantify the distance
between representations. We use CSP-based classifiers (see section 3.1 and e.g. [11]) for the discrimination of brain states; note that the line of thought presented here can also be pursued for other
feature sets or classifiers. Once a distance function (section 3.2) is established in CSP filter
space, we can cluster existing CSP filters in order to obtain the most salient prototypical CSP-type
filters for a subject across sessions (section 3.3). To this end, we use the IBICA algorithm [12, 13]
for computing prototypes by a robust ICA decomposition (section 3.3). We will show that these new
CSP prototypes are physiologically meaningful and furthermore are highly robust representations
which are less easily distorted by noise artifacts.
2 Experiments and Data
Our BCI system uses Event-Related (De-)Synchronization (ERD/ERS) phenomena [3] in EEG signals related to hand and foot imagery as classes for control. The term refers to a decreasing or increasing
band power in specific frequency bands of the EEG signal during the imagination of movements.
These phenomena are well-studied and consistently reproducible features in EEG recordings, and
are used as the basis of many BCI systems (e.g. [11, 14]). For the present study we investigate data
from experiments with 6 healthy subjects: aw (13 sessions), al (8 sessions), cm (4 sessions), ie (4
sessions), ay (5 sessions) and ch (4 sessions). These are all the subjects that participated in at least
4 BCI sessions. Each session started with the recording of calibration data, followed by a machine
learning phase and a feedback phase of varying duration. All following retrospective analyses were
performed on the calibration data only.
During the experiments the subjects were seated in a comfortable chair with arm rests. For the
recording of the calibration data every 4.5–6 seconds one of 3 different visual stimuli was presented,
indicating a motor imagery task the subject should perform during the following 3–3.5 seconds.
The randomized and balanced motor imagery tasks investigated for all subjects except ay were left
hand (l), right hand (r), and right foot (f ). Subject ay only performed left- and right hand tasks.
Between 120 and 200 trials were performed during the calibration phase of one session for each
motor imagery class.
Brain activity was recorded from the scalp with multi-channel EEG amplifiers using at least 64
channels. Besides EEG channels, we recorded the electromyogram (EMG) from both forearms and
the right lower leg as well as horizontal and vertical electrooculogram (EOG) from the eyes. The
EMG and EOG channels were exclusively used to ensure that the subjects performed no real limb
or eye movements correlated with the mental tasks. As their activity can directly (via artifacts) or
indirectly (via afferent signals from muscles and joint receptors) be reflected in the EEG channels
they could be detected by the classifier. Controlling EMG and EOG ensured that the classifier
operated on true EEG signals only.
Data preprocessing and Classification
The time series data of each trial was windowed from 0.5 seconds after cue to 3 seconds after cue.
The data of the remaining interval was band-pass filtered between either 9–25 Hz or
10–25 Hz, depending on the signal characteristics of the subject. In any case the chosen spectral interval
comprised the subject specific frequency bands that contained motor-related activity.
For each subject a subset of EEG channels was determined that had been recorded for all of the
subject's sessions. These subsets typically contained 40 to 45 channels which were densely located
(according to the international 10-20 system) over the more central areas of the scalp (see scalp maps
in following sections). The EEG channels of each subject were reduced to the determined subset
before proceeding with the calculation of Common Spatial Patterns (CSP) for different (subject
specific) binary classification tasks.
After projection on the CSP filters, the band power was estimated by taking the log-variance over
time. Finally, a linear discriminant analysis (LDA) classifier was applied to the best discriminable
two-class combination.
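A possible software counterpart of this preprocessing step is sketched below; this is our own illustration under stated assumptions (the filter type and order, the use of scipy.signal, and the function name are not specified in the text):

import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trial(eeg, fs, band=(9.0, 25.0), t_start=0.5, t_end=3.0):
    # eeg: (channels, samples), cue-aligned at t = 0; fs: sampling rate in Hz.
    b, a = butter(5, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, eeg, axis=1)                   # zero-phase band-pass
    return filtered[:, int(t_start * fs):int(t_end * fs)]   # keep 0.5-3 s after cue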
3 A closer look at the CSP parameter space
3.1 Introduction of Common Spatial Patterns (CSP)
The common spatial pattern (CSP) algorithm is very useful in calculating spatial filters for detecting
ERD/ERS effects ([15]) and can be applied to ERD-based BCIs, see [11]. It has been extended to
multi-class problems in [14], and further extensions and robustifications concerning a simultaneous
optimization of spatial and frequency filters were presented in [16, 17, 18]. Given two distributions
in a high-dimensional space, the (supervised) CSP algorithm finds directions (i.e., spatial filters)
that maximize variance for one class and simultaneously minimize variance for the other class. After having band-pass filtered the EEG signals to the rhythms of interest, high variance reflects a
strong rhythm and low variance a weak (or attenuated) rhythm. Let us take the example of discriminating left hand vs. right hand imagery. The filtered signal corresponding to the desynchronization
of the left hand motor cortex is characterized by a strong motor rhythm during imagination of right
hand movements (left hand is in idle state), and by an attenuated motor rhythm during left hand
imagination. This criterion is exactly what the CSP algorithm optimizes: maximizing variance for
[Figure 1 graphic: left panel, distance matrix for 78 CSP filters; right panel, MDS scatterplot (dimension 1 vs. dimension 2) of the CSP filters and 6 prototypes.]
Figure 1: Left: Non-euclidean distance matrix for 78 CSP filters of imagined left hand and foot movement.
Right: Scatterplot of the first vs. second dimension of CSP filters after Multi-Dimensional Scaling (MDS).
Filters that minimize the variance for the imagined left hand are plotted as red crosses, foot movement imagery
filters are shown as blue dots. Cluster centers detected by IBICA are marked with magenta circles. Both figures
show data from al.
the class of right hand trials and at the same time minimizing variance for left hand trials. Furthermore the CSP algorithm calculates the dual filter that will focus on the area of the right hand and
it will even calculate several filters for both optimizations by considering the remaining orthogonal
subspaces.
Let $\Sigma_i$ be the covariance matrix of the trial-concatenated matrix of dimension [channels $\times$ concatenated time-points] belonging to the respective class $i \in \{1, 2\}$. The CSP analysis consists of
calculating a matrix $Q$ and a diagonal matrix $D$ with elements in $[0, 1]$ such that
$$Q \Sigma_1 Q^\top = D \quad \text{and} \quad Q \Sigma_2 Q^\top = I - D. \qquad (1)$$
This can be solved as a generalized eigenvalue problem. The projection that is given by the i-th
row of matrix $Q$ has a relative variance of $d_i$ ($i$-th element of $D$) for trials of class 1 and relative
variance $1 - d_i$ for trials of class 2. If $d_i$ is near 1 the filter given by the $i$-th row of $Q$ maximizes
variance for class 1, and since $1 - d_i$ is near 0, minimizes variance for class 2. Typically one would
retain projections corresponding to the three highest eigenvalues di , i.e., CSP filters for class 1, and
projections corresponding to the three lowest eigenvalues, i.e., CSP filters for class 2.
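For illustration, Eq. (1) can be solved with a standard generalized eigendecomposition. The following minimal sketch is our own illustration, not the authors' code; the function names, the use of scipy.linalg.eigh, and the NumPy data layout are assumptions:

import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    # X1, X2: band-pass filtered EEG per class, shape (channels, concatenated time-points).
    S1 = np.cov(X1)   # class-1 covariance Sigma_1
    S2 = np.cov(X2)   # class-2 covariance Sigma_2
    # Generalized eigenproblem S1 w = d (S1 + S2) w; eigenvalues d lie in [0, 1] and
    # the eigenvectors satisfy Q S1 Q^T = D, Q S2 Q^T = I - D, as in Eq. (1).
    d, W = eigh(S1, S1 + S2)
    Q = W[:, np.argsort(d)[::-1]].T   # rows are spatial filters, sorted by descending d
    # Keep the filters with the highest eigenvalues (class 1) and the lowest (class 2).
    return np.vstack([Q[:n_pairs], Q[-n_pairs:]])

def log_variance_features(trials, Q):
    # trials: (n_trials, channels, samples); project and take log-variance over time.
    projected = np.einsum('fc,nct->nft', Q, trials)
    return np.log(projected.var(axis=2))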
3.2 Comparison of CSP filters
Since the results of the CSP algorithm are the solutions of a generalized eigenvalue problem, every
multiple of an eigenvector is again a solution to the eigenvalue problem. If we want to compare
different CSP filters, we must therefore keep in mind that every point on the line through a CSP filter
point and the origin can be identified (except for the origin itself). More precisely, it is sufficient to
consider only normalized CSP vectors on the (#channels-1)-dimensional hypersphere. This suggests
that the CSP space is inherently non-Euclidean. As a more appropriate metric between two points $c_1$
and $c_2$ in this space, we calculated the angle between the two lines corresponding to these points:
$$m(c_1, c_2) = \arccos\left(\frac{c_1 \cdot c_2}{|c_1|\,|c_2|}\right).$$
When applying this measure to a set of CSP filters $(c_i)_{i \le n}$, one can generate the distance matrix
$$D = \big(m(c_i, c_j)\big)_{i,j \le n},$$
which can then be used to find prototypical examples of CSP filters. Fig.1 shows an example of a
distance matrix for 78 CSP filters for the discrimination of the variance during imagined left hand
movement and foot movement. Based on the left hand signals, three CSP filters showing the lowest
[Figure 2 graphic: single-linkage dendrogram of the CSP filters, dissimilarity on the y-axis, with the detected prototypes marked among the leaves.]
Figure 2: Dendrogram of a hierarchical cluster tree for the CSP filters of left hand movement imagery (dashed
red lines) and foot movement imagery (solid blue lines). Cluster centers detected by IBICA are used as CSP
prototypes. They are marked with magenta arrows.
eigenvalues were chosen for each of the 13 sessions. The same number of 3 × 13 filters were chosen
for the foot signals. The filters are arranged in groups according to their relative magnitude of the
eigenvalues, i.e., filters with the largest eigenvalues are grouped together, then filters with the second
largest eigenvalues etc.
The distance matrix in Fig.1 shows a block structure which reveals that the filters of each group have
low distances amongst each other as compared to the distances to members of other groups. This is
especially true for filters for the minimization of variance in left hand trials.
3.3 Finding Clusters in CSP space
The idea to find CSP filters that recur in the processing of different sessions of a single subject is
very appealing, since these filters can be re-used for efficient classification of unseen data. As an
example of clustered parameters, Fig.2 shows a hierarchical clustering tree (see [19]) of CSP filters
of different sessions for subject al. Single branches of the tree form distinct clusters, which are
also clearly visible in a projection of the first Multi-Dimensional Scaling-Components in Fig.1 (for
MDS, see [20]).
The proposed metric of section 3.2 coincides with the metric used for Inlier-Based Independent
Component Analysis (IBICA, see [12, 13]). This method was originally intended to find estimators
of the super-Gaussian source signals from a mixture of signals. By projecting the data onto the
hypersphere and using the angle distance, it has been demonstrated that the correct source signals can
be found even in high-dimensional data. The key ingredient of this method is the robust identification
of inlier points as it can be done with the $\gamma$-index (see [21]), which is defined as follows:
Let $z \in \{c_1, \ldots, c_n\}$ be a point in CSP space, and let $nn_1(z), \ldots, nn_k(z)$ be the $k$ nearest neighbors of
$z$, according to the distance $m$. We then call the average distance of $z$ to its neighbors the $\gamma$-index of
$z$, i.e.
$$\gamma(z) = \frac{1}{k} \sum_{j=1}^{k} m(z, nn_j(z)).$$
If z lies in a densely populated region of the hypersphere, then the average distance to its neighbors
is small, whereas if it lies in a sparse region, the average distance is high. The data points with the
smallest $\gamma$ are good candidates for prototypical CSP filters since they are similar to other filters in
the comparison set. This suggests that these filters are good solutions in a number of experiments
and are therefore robust against changes in the data such as outliers, variations in background noise
etc.
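As a sketch of how the angular metric and the $\gamma$-index combine into prototype selection, the code below is our own illustration, not the authors' implementation; k is a free parameter, and the sign ambiguity of the eigenvectors is handled only implicitly by the metric as written above:

import numpy as np

def angular_distance_matrix(C):
    # C: (n_filters, channels); pairwise angles m(c_i, c_j) after projection onto the hypersphere.
    Cn = C / np.linalg.norm(C, axis=1, keepdims=True)
    return np.arccos(np.clip(Cn @ Cn.T, -1.0, 1.0))

def gamma_index(D, k=5):
    # Mean distance of each point to its k nearest neighbors
    # (column 0 of the sorted rows is the zero self-distance).
    return np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)

def select_prototypes(C, n_prototypes=6, k=5):
    # Filters with the smallest gamma lie in densely populated regions: prototype candidates.
    gamma = gamma_index(angular_distance_matrix(C), k)
    return C[np.argsort(gamma)[:n_prototypes]]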
4 Competing analysis methods: How much training is needed?
Fig.3 shows an overview of the validation methods used for the algorithms under study. The left part
shows validation methods which mimic the following BCI scenario: a new session starts and no
[Figure 3 graphic: schematic of the training/test data splits across four sessions for ordinary CSP, HIST-CSP, PROTO-CSP, and CONCAT-CSP, with and without 10/20/30 trials of new data.]
Figure 3: Overview of the presented training and testing modes for the example of four available sessions. The
left part shows a comparison of ordinary CSP with three methods that do not require calibration. The validation
scheme in the right part compares CSP with three adaptive methods. See text for details.
data has been collected yet. The top row represents data of all sessions in original order. Later rows
describe different data splits for the training of the CSP filters and LDA (both depicted in blue solid
lines) and for the testing of the trained algorithms on unseen data (green dashed lines). The ordinary
CSP method does not take any historical data from prior sessions into account (second row). It uses
training data only from the first half of the current session. This serves as a baseline to show the
general quality of the data, since half of the session data is generally enough to train a classifier that
is well adapted to the second half of the session. Note that this evaluation only corresponds to a real
BCI scenario where many calibration trials of the same day are available.
4.1 Zero training methods
This is contrasted to the following rows, which show the exclusive use of historic data in order
to calculate LDA and one single set of CSP filters from the collected data of all prior sessions
(third row), or calculate one set of CSP filters for each historic session and derive prototypical
filters from this collection as described in section 3.3 (fourth row), or use a combination of row
three and four that results in a concatenation of CSP filters and derived CSP prototypes (fifth row).
Feature concatenation is an effective method that has been shown to improve CSP-based classifiers
considerably (see [22]).
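A minimal sketch of the CONCAT idea follows; it is our illustration, reusing the log_variance_features helper from the CSP sketch above, and the use of scikit-learn's LDA is our assumption rather than the authors' implementation:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def concat_csp_classifier(filter_sets, train_trials, train_labels):
    # filter_sets: list of (n_filters, channels) arrays, e.g. CSP filters trained on
    # historic data plus IBICA prototypes; stacking them implements the concatenation.
    Q = np.vstack(filter_sets)
    features = log_variance_features(train_trials, Q)
    lda = LinearDiscriminantAnalysis().fit(features, train_labels)
    return Q, lda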
4.2 Adaptive training methods
The right part of Fig.3 expands the training sets for rows three, four and five for the first 10, 20 or 30
trials per class of the data of the new session. In the methods of rows 4 and 5, only the LDA profits from
the new data, whereas CSP prototypes are calculated exclusively on historic data as before. This
approach is compared against the ordinary CSP approach that now only uses the same small amount
of training data from the new session.
This scheme, as well as the one presented in section 4.1, has been cross-validated such that each
available session was used as a test session instead of the last one.
5 Results
The underlying question of this paper is whether information gathered from previous experimental
sessions can prove its value in a new session. In an ideal case existing CSP filters and LDA classifiers
could be used to start the feedback phase of the new session immediately, without the need to collect
new calibration data.
Subjects        aw     al     cm     ie     ay     ch
Classes         LF     RF     LF     LR     LR     LR
Ordinary CSP    5.0    2.7    11.8   16.2   11.7   6.2
HIST            10.1   2.9    23.0   26.0   13.3   6.9
PROTO           9.9    3.1    21.5   26.2   10.0   11.4
CONCAT          8.9    2.7    19.5   23.7   12.4   7.4
Sessions        13     7      4      4      5      4
Table 1: Results of Zero-Training modes. All classification errors are given in %. While the ordinary CSP
method uses half of the new session for training, the three methods HIST, PROTO and CONCAT exclusively
use historic data for the calculation of CSP filters and LDA (as described on the left side of Fig.3). Amongst
them, CONCAT performs best in four of the six subjects. For subjects al, ay and ch its result is even comparable
to that of ordinary CSP.
[Figure 4 graphic: classification error [%] as a function of the number of training trials (0, 10, 20, 30) for ordinary CSP, HIST-CSP, PROTO-CSP, and CONCAT-CSP.]
Figure 4: Incorporating more and more data from the current session (10, 20 or 30 trials per class), the classification error decreases for all of the four methods described on the right side of Fig.3. The three methods
HIST, PROTO and CONCAT clearly outperform ordinary CSP. Interestingly, the best zero-training method
CONCAT is only outperformed by ordinary CSP if the latter has a head start of 30 trials per class.
We checked for the validity of this scenario based on the data described in section 2. Table 1 shows
the classification results for the different classification methods under the Zero-training validation
scheme. For subjects al, ay and ch, the classification error of CONCAT is of the same magnitude as
the ordinary (training-based) CSP-approach. For the other three subjects, CONCAT outperforms the
methods HIST and PROTO. Although the ideal case is not reached for every subject, the table shows
that our proposed methods provide a decent step towards the goal of Zero-training for BCI.
Another way to at least reduce the necessary preparation time for a new experimental session is to
record only very few new trials and combine them with data from previous sessions in order to get
a quicker start. We simulate this strategy by allowing the new methods HIST, PROTO and CONCAT
to take a look also on the first 10, 20 or 30 trials per class of the new session. The baseline to
compare their performance would be a BCI system trained only on these initial trials. In Fig. 4, this
comparison is depicted. Here the influence of the number of initial training trials becomes visible. If
no new data is available, the ordinary classification approach of course can not produce any output,
whereas the history-based methods, e.g. CONCAT already generates a stable estimation of the class
labels. All methods gain performance in terms of smaller test errors as more and more trials are
added. Only after training on at least 30 trials per class, ordinary CSP reaches the classification level
that CONCAT had already shown without any training data of the current session.
Fig.5 shows some prototypical CSP filters as detected by IBICA clustering for subject al and left
hand vs. foot motor imagery. All filters have small support (i.e., many entries are close to 0), and
the few large entries are located on neurophysiologically important areas: Filters 1–2 and 4–6 cover
the motor cortices corresponding to imagined hand movements, while filter 3 focuses on the central
foot area. This shows that the cluster centers are spatial filters that meet our neurophysiological ex-
[Figure 5 graphic: scalp maps of six CSP prototype filters, color scale from -0.5 to 0.5.]
Figure 5: First six CSP prototype filters determined by IBICA for al.
pectations, since they are able to capture the frequency power modulations over relevant electrodes,
while masking out unimportant or noisy channels.
6 Discussion and Conclusion
Advanced BCI systems (e.g. BBCI) recently acquired the ability to dispense with extensive subject
training and now allow one to infer a blueprint of the subject's volition from a short calibration session of
approximately 30 min. This became possible through the use of modern machine learning technology. The next step along this line to make BCI more practical is to strive for zero calibration time.
Certainly it will not be realistic to achieve this goal for arbitrary BCI novices; rather, in this study we
have concentrated on experienced BCI users (with 4 and more sessions) and discussed algorithms to
re-use their classifiers from prior sessions. Note that the construction of a classifier that is invariant
against session to session changes, say, due to different vigilance, focus or motor imagination across
sessions is a hard task.
Our contribution shows that experienced BCI subjects do not necessarily need to perform a new
calibration period in a new experiment. By analyzing the CSP parameter space, we could reveal
an appropriate characterization of CSP filters. By finding clusters of CSP parameters from old sessions,
novel prototypical CSP filters can be derived, for which the neurophysiological validity could be
shown exemplarily. The concatenation of these prototype filters with some CSP filters trained on
the same amount of data results in a classifier that not only performs comparable to the presented
ordinary CSP approach (trained on a large amount of data from the same session) in half of the
subjects, but also outperforms ordinary CSP considerably when only few data points are at hand.
This means that experienced subjects are predictable to an extent that they do not require calibration
anymore.
We expect that these results can be even further optimized by e.g. hand selecting the filters for
PROTO, by adjusting for the distribution changes in the new session, e.g. by adapting the LDA as
presented in [23], or by applying advanced covariate-shift compensation methods like [24].
Future work will aim to extend the presented zero training idea towards BCI novices.
References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767–791, 2002.
[2] N. Birbaumer, A. Kübler, N. Ghanayim, T. Hinterberger, J. Perelmouter, J. Kaiser, I. Iversen, B. Kotchoubey, N. Neumann, and H. Flor, "The Thought Translation Device (TTD) for Completely Paralyzed Patients", IEEE Trans. Rehab. Eng., 8(2): 190–193, 2000.
[3] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clin. Neurophysiol., 110(11): 1842–1857, 1999.
[4] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157–164, 2002.
[5] L. Trejo, K. Wheeler, C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A. Hibbs, R. Matthews, and M. Krupka, "Multimodal Neuroelectric Interface Development", IEEE Trans. Neural Sys. Rehab. Eng., (11): 199–204, 2003.
[6] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, "Linear spatial integration for single trial detection in encephalography", NeuroImage, 7(1): 223–230, 2002.
[7] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, "EEG-Based Communication: A Pattern Recognition Approach", IEEE Trans. Rehab. Eng., 8(2): 214–215, 2000.
[8] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "The Berlin Brain-Computer Interface: EEG-based communication without subject training", IEEE Trans. Neural Sys. Rehab. Eng., 14(2), 2006, in press.
[9] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, R. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current Trends in Graz Brain-computer Interface (BCI)", IEEE Trans. Rehab. Eng., 8(2): 216–219, 2000.
[10] M. Schröder, T. N. Lal, T. Hinterberger, M. Bogdan, N. J. Hill, N. Birbaumer, W. Rosenstiel, and B. Schölkopf, "Robust EEG Channel Selection Across Subjects for Brain Computer Interfaces", EURASIP Journal on Applied Signal Processing, Special Issue: Trends in Brain Computer Interfaces, 19: 3103–3112, 2005.
[11] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Trans. Rehab. Eng., 8(4): 441–446, 2000.
[12] F. C. Meinecke, S. Harmeling, and K.-R. Müller, "Robust ICA for Super-Gaussian Sources", in: C. G. Puntonet and A. Prieto, eds., Proc. Int. Workshop on Independent Component Analysis and Blind Signal Separation (ICA2004), 2004.
[13] F. C. Meinecke, S. Harmeling, and K.-R. Müller, "Inlier-based ICA with an application to super-imposed images", Int. J. of Imaging Systems and Technology, 2005.
[14] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms", IEEE Trans. Biomed. Eng., 51(6): 993–1002, 2004.
[15] Z. J. Koles and A. C. K. Soong, "EEG source localization: implementing the spatio-temporal decomposition approach", Electroencephalogr. Clin. Neurophysiol., 107: 343–352, 1998.
[16] G. Dornhege, B. Blankertz, M. Krauledat, F. Losch, G. Curio, and K.-R. Müller, "Combined optimization of spatial and temporal filters for improving Brain-Computer Interfacing", IEEE Trans. Biomed. Eng., 2006, accepted.
[17] S. Lemm, B. Blankertz, G. Curio, and K.-R. Müller, "Spatio-Spectral Filters for Improved Classification of Single Trial EEG", IEEE Trans. Biomed. Eng., 52(9): 1541–1548, 2005.
[18] R. Tomioka, G. Dornhege, G. Nolte, K. Aihara, and K.-R. Müller, "Optimizing Spectral Filter for Single Trial EEG Classification", in: Lecture Notes in Computer Science, Springer-Verlag Heidelberg, 2006, to be presented at the 28th Annual Symposium of the German Association for Pattern Recognition (DAGM 2006).
[19] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley & Sons, 2nd edn., 2001.
[20] T. Cox and M. Cox, Multidimensional Scaling, Chapman & Hall, London, 2001.
[21] S. Harmeling, G. Dornhege, D. Tax, F. C. Meinecke, and K.-R. Müller, "From outliers to prototypes: ordering data", Neurocomputing, 2006, in press.
[22] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Combining Features for BCI", in: S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, 1115–1122, 2003.
[23] P. Shenoy, M. Krauledat, B. Blankertz, R. P. N. Rao, and K.-R. Müller, "Towards Adaptive Classification for BCI", J. Neural Eng., 3: R13–R23, 2006.
[24] M. Sugiyama and K.-R. Müller, "Input-Dependent Estimation of Generalization Error under Covariate Shift", Statistics and Decisions, 2006, to appear.
Context dependent amplification of both rate and
event-correlation in a VLSI network of spiking
neurons
Elisabetta Chicca, Giacomo Indiveri and Rodney J. Douglas
Institute of Neuroinformatics
University - ETH Zurich
Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
chicca,giacomo,[email protected]
Abstract
Cooperative competitive networks are believed to play a central role in cortical
processing and have been shown to exhibit a wide set of useful computational
properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both
in the mean firing rate domain and in spike timing correlation space. In the mean
rate case the network amplifies the activity of neurons belonging to the selected
stimulus and suppresses the activity of neurons receiving weaker stimuli. In the
event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the
mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities.
1 Introduction
There is an increasing body of evidence supporting the hypothesis that recurrent cooperative competitive neural networks play a central role in cortical processing [1]. Anatomical studies demonstrated
that the majority of synapses in the mammalian cortex originate within the cortex itself [1, 2]. Similarly, it has been shown that neurons with similar functional properties are aggregated together in
modules or columns and most connections are made locally within the neighborhood of a 1 mm
column [3].
From the computational point of view, recurrent cooperative competitive networks have been investigated extensively in the past [4–6]. Already in the late 70's Amari and Arbib [4] applied the
concept of dynamic neural fields¹ [7, 8] to develop a unifying mathematical framework to study
cooperative competitive neural network models based on a series of detailed models of biological
systems [9–11]. In 1994, Douglas et al. [5] argued that recurrent cortical circuits restore analog signals on the basis of their connectivity patterns and produce selective neuronal responses while maintaining network stability. To support this hypothesis they proposed the cortical amplifier²
and showed that a network of cortical amplifiers performs signal restoration and noise suppression
by amplifying the correlated signal in a pattern that was stored in the connectivity of the network,
without amplifying the noise. In 1998, Hansel and Sompolinsky presented a detailed model for
¹ In the dynamic neural fields approach neural networks are described as a continuous medium rather than a set of discrete neurons. A differential equation describes the activation of the neural tissue at different positions in the neural network.
² The cortical amplifier consists of a population of identical neurons, connected to each other with the same excitatory synaptic strength, sharing a common inhibitory feedback and the same input.
[Figure 1 graphic: chip architecture; a grid of inhibitory (I) and excitatory (E) synapse rows driving a row of I&F neurons, with AER input at the top, AER output at the bottom, and the global inhibitory neuron at the bottom left.]
Figure 1: The chip architecture. Squares represent excitatory (E) and inhibitory (I) synapses, trapezoids represent I&F neurons. The synapse can be stimulated by external (AER) inputs and by local
events. The I&F neurons can transmit their spikes off-chip and/or to the locally connected synapses
(see text for details). The local connectivity implements a cooperative competitive network with first
and second-neighbor recurrent excitatory connections and global inhibition. The first and second
neighbor connections of the neurons at the edges of the array are connected to pads. This allows us
to leave the network open, or implement closed boundary conditions (to form a ring of neurons), using off-chip jumpers. The global inhibitory neuron (bottom left) receives excitation from all neurons
in the array and its output inhibits all of them.
cortical feature selectivity based on recurrent cooperative competitive networks [6] where they showed
how these models can account for some of the emergent cooperative cortical properties observed in
nature.
Recently it has been argued that recurrent cooperative competitive networks exhibit at the same
time computational properties both in an analog way (e.g. amplification or filtering) and in a digital
way (e.g. digital selection) [12]. To demonstrate the digital selection and analog amplification
capabilities of these types of networks, Hahnloser et al. proposed a VLSI chip in which neurons are
implemented as linear threshold units and input and output signals encode mean firing rates. The
recurrent connectivity of the network proposed in [12] comprises self-excitation, first and second
neighbors recurrent excitatory connections, and global inhibition. We are particularly interested in
the use of these types of networks in spike based multi-chip sensory systems [13] as a computational
module capable of performing stimulus selection, signal restoration and noise suppression. Here we
propose a spike-based VLSI neural network that allows us to explore these computational properties
both in the mean rate and time domain.
The device we propose comprises a ring of excitatory Integrate-and-Fire (I&F) neurons with first
and second neighbors recurrent excitatory connections and a global inhibitory neuron which receives
excitation from all the neurons in the ring. In the next Section we describe the network architecture,
and in Section 3 and 4 we show how this network can perform context dependent computation in
the mean rate domain and in the time domain respectively.
2 The VLSI Spiking Cooperative Competitive Network
Several examples of VLSI competitive networks of spiking neurons have already been presented in
literature [14?19]. In 1992, De Yong et al. [16] proposed a VLSI winner-take-all (WTA) spiking
network consisting of 4 neurons with all-to-all inhibitory connections. In 1993, a different VLSI
WTA chip comprising also 4 neurons was proposed, it used global inhibition to implement the WTA
behavior [17]. More recent implementations of spiking VLSI cooperative competitive networks consist of larger arrays and show more complex behavior thanks also to more advanced VLSI processes
and testing instruments currently available [14, 15, 18, 19].
[Figure 2 graphic: raster plots (neuron address vs. time) with mean output frequency profiles, panels (a) and (b).]
Figure 2: Raster plot for the suppression experiments. (a) Feed-forward network response. The
left panel shows the raster plot of the network activity in response to two Gaussian shaped, Poisson
distributed, input spike trains, with mean firing rates ranging from 0 to 120 Hz. The right panel
shows the mean frequencies of the neurons in the feed-forward network. The feed-forward network
response directly reflects the applied input: on average, the neurons produce an output spike in
response to about every 6 input spikes. Note how the global inhibitory neuron (address number 1) is
not active. (b) Cooperative-competitive network response. The left panel shows the raster plot of the
network activity in response to the same input applied to the feed-forward network. The right panel
shows the mean output frequencies of the neurons in the cooperative-competitive network. The
recurrent connectivity (lateral excitation and global inhibition) amplifies the activity of the neurons
with highest mean output frequency and suppresses the activity of other neurons (compare with right
panel of (a)).
Most of the previously proposed VLSI models focused on hard WTA behaviors (only the neuron that
receives the strongest input is active). Our device allows us to explore hard and soft WTA behaviors
both in the mean rate and spike timing domain. Here we explore the network's ability to perform
context dependent computation using the network in the soft WTA mode.
The VLSI network we designed comprises 31 excitatory neurons and 1 global inhibitory neuron [15].
Cooperation between neurons is implemented by first and second neighbors recurrent excitatory
connections. Depending on the relative strength of excitation and inhibition the network can be
operated either in hard or soft WTA mode. On top of the local hardwired recurrent connectivity the
network comprises 16 AER³ synapses per neuron.
The chip was fabricated in a 0.8 μm, n-well, double metal, double poly, CMOS process using the
Europractice service.
The architecture of the VLSI network of I&F neurons is shown in Fig. 1. It is a two-dimensional
array containing a row of 32 neurons, each connected to a column of afferent synaptic circuits. Each
column contains 14 AER excitatory synapses, 2 AER inhibitory synapses and 6 locally connected
(hard-wired) synapses. The circuits implementing the chip's I&F neurons and synapses have been
described in [22].
When an input address-event is received, the synapse with the corresponding row and column address is stimulated. If the input address-events routed to the synapse integrate up to the neuron's
spiking threshold, then that neuron generates an output address-event which is transmitted off-chip.
Arbitrary network architectures can be implemented using off-chip look-up tables and routing the
chip's output address-events to one or more AER input synapses. The synapse address can belong
to a different chip, therefore, arbitrary multi-chip architectures can be implemented.
Synapses with local hard-wired connectivity are used to realize the cooperative competitive network
with nearest neighbor and second nearest neighbor interactions (see Fig. 1): 31 neurons of the array
³ In the Address Event Representation (AER) input and output spikes are real-time digital events that carry
analog information structure [20]. An asynchronous communication protocol based on the AER is the most
efficient for signal transmission across neuromorphic devices [21].
40
35
Baseline
Weak Inhibition
Strong Inhibition
30
25
20
15
10
25
20
15
10
5
5
0
Baseline
Weak Lateral Excitation
Strong Lateral Excitation
30
Normalized mean activity
Normalized mean activity
35
5
10
15
20
Neuron address
25
30
(a)
0
5
10
15
20
Neuron address
25
30
(b)
Figure 3: Suppression as a function of the strength of global inhibition and lateral excitation. (a)
The graph shows the mean firing rate of the neurons for three different connectivity conditions and
in response to the same stimulus. The continuous line represents the baseline: the activity of the
neurons in response to the external stimulus when the local connections are not active (feed-forward
network). The dashed and dotted lines represent the activity of the feed-back network for weak
and strong inhibition respectively and fixed lateral excitation. For weak inhibition the neurons that
receive the strongest inputs amplify their activity. (b) Activity of the network for three different
connectivity conditions. The continuous line represents the baseline (response of the feed-forward
network to the input stimulus). The dashed and dotted lines represent the activity of the feed-back
network for weak and strong lateral excitation respectively and fixed global inhibition. For strong
lateral excitation the neurons that receive the highest input amplify their activity.
send their spikes to 31 local excitatory synapses on the global inhibitory neuron; the inhibitory neuron, in turn, stimulates local inhibitory synapses of the 31 excitatory neurons; each excitatory neuron
stimulates its first and second neighbors on both sides using two sets of locally connected synapses.
The first and second neighbor connections of the neurons at the edges of the array are connected to
pads. This allows us to leave the network open, or implement closed boundary conditions (to form
a ring of neurons [12]), using off-chip jumpers.
All of the synapses on the chip can be switched off. This allows us to inactivate either the local
synaptic connections, or the AER ones, or to use local synapses in conjunction with the AER ones.
In addition, a uniform constant DC current can be injected into all the neurons in the array, thus
producing a regular "spontaneous" activity throughout the whole array.
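To make the connectivity concrete, the following toy simulation sketches a ring of leaky I&F neurons with first- and second-neighbor excitation and one global inhibitory unit. It only illustrates the network topology described above, not the VLSI circuits; all weights, time constants, and the simple Euler integration are arbitrary choices of ours:

import numpy as np

def simulate_ring(input_rates, T=1.0, dt=1e-3, w1=0.8, w2=0.4, w_inh=1.5,
                  tau=0.02, v_thresh=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(input_rates)
    W = np.zeros((n, n))                          # closed boundary: ring topology
    for i in range(n):
        for d, w in ((1, w1), (2, w2)):           # first- and second-neighbor excitation
            W[i, (i + d) % n] = W[i, (i - d) % n] = w
    v, v_inh, counts = np.zeros(n), 0.0, np.zeros(n)
    for _ in range(int(T / dt)):
        ext = rng.random(n) < np.asarray(input_rates) * dt   # Poisson input spikes
        v += dt / tau * (-v) + ext                           # leak plus external drive
        spikes = v > v_thresh
        v[spikes] = 0.0                                      # reset after spiking
        v_inh += dt / tau * (-v_inh) + 0.1 * spikes.sum()    # global inhibitory unit
        inh_spike = v_inh > v_thresh
        if inh_spike:
            v_inh = 0.0
        v += W @ spikes.astype(float) - w_inh * float(inh_spike)
        counts += spikes
    return counts / T                                        # mean output rates in Hz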
3 Competition in mean rate space
In our recurrent network competition is implemented using one global inhibitory neuron and cooperation using first and second nearest neighbor excitatory connections; nonetheless, it performs
complex non-linear operations similar to those observed in more general cooperative competitive
networks. These networks, often used to model cortical feature selectivity [6, 23], are typically
tested with bell-shaped inputs. Within this context we can map sensory inputs (e.g. obtained from
a silicon retina, a silicon cochlea, or other AER sensory systems) onto the network's AER synapses
in a way to implement different types of feature maps. For example, Chicca et al. [24] recently
presented an orientation selectivity system implemented by properly mapping the activity of a silicon retina onto AER input synapses of our chip. Moreover the flexibility of the AER infrastructure,
combined with the large number of externally addressable AER synapses of our VLSI device, allows
us to perform cooperative competitive computation across different feature spaces in parallel.
We explored the behavior of the network using synthetic control stimuli: we stimulated the chip via
its input AER synapses with Poisson distributed spike trains, using Gaussian shaped mean frequency
profiles. A custom PCI-AER board [24] was used to stimulate the chip and monitor its activity.
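Such input patterns are simple to synthesize in software; the sketch below is our own illustration, and all parameter values are merely indicative rather than those used with the PCI-AER board:

import numpy as np

def gaussian_poisson_input(n_neurons=31, center=15, sigma=2.5, peak_hz=120.0,
                           T=10.0, seed=1):
    # One homogeneous Poisson spike-time list per neuron, with a Gaussian
    # mean-rate profile across neuron addresses.
    rng = np.random.default_rng(seed)
    addresses = np.arange(n_neurons)
    rates = peak_hz * np.exp(-(addresses - center) ** 2 / (2 * sigma ** 2))
    return [np.sort(rng.uniform(0.0, T, rng.poisson(r * T))) for r in rates]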
[Figure 4 graphic: mean correlation coefficient vs. neuron address, panels (a) and (b), with insets showing the pairwise correlation matrices.]
Figure 4: (a) Mean correlation coefficient among input spike trains used to stimulate the neurons of
our network. The figure inset shows the pairwise correlations between each input source. (b) Mean
correlation coefficient of output spike trains, when the cooperative-competitive connections of the
network are disabled (feed-forward mode). Note the different scales on the y-axis and in the inset
color bars.
[Figure 5 graphic: mean correlation coefficient vs. neuron address, panels (a) and (b), with insets showing the pairwise correlation matrices.]
Figure 5: (a) Correlation coefficient of output spike trains, when only global inhibition is enabled;
(b) Correlation coefficient of output spike trains when both global inhibition and local excitation are
enabled.
Suppression of least effective stimuli was tested using two Gaussian shaped inputs with different
amplitude (in terms of mean frequency) composed of Poisson spike trains. Two examples of
raw data for these experiments in the feed-forward and recurrent network conditions are shown in
Fig. 2(a) and 2(b) respectively. The output of the network is shown in Fig. 3(a) for two different
values of the strength of global inhibition (modulated using the weight of the connection from the
excitatory neurons to the global inhibitory neuron) and a fixed strength of lateral excitation. The
activity of the recurrent network has to be compared with the activity of the feed-forward network
(?baseline? activity plotted in Fig. 2(a) and represented by the continuous line in Fig. 3(a)) in response to the same stimulus to easily estimate the effect of the recurrent connectivity. The most
active neurons cooperatively amplify their activity through lateral excitation and efficiently drive the
global inhibitory neuron to suppress the activity of other neurons (dashed line in Fig. 3(a)). When
the strength of global inhibition is high the amplification given by the lateral excitatory connections
can be completely suppressed (dotted line in Fig. 3(a)). A similar behavior is observed when the
strength of lateral excitation is modulated (see Fig. 3(b)). For strong lateral excitation (dashed line
in Fig. 3(b)) amplification is observed for the neurons receiving the input with the highest mean frequency, and suppression of neurons stimulated by trains with lower mean frequencies occurs. When
lateral excitation is weak (dotted line in Fig. 3(b)), global inhibition dominates and the activity of all
neurons is suppressed.
The non-linearity of this behavior is evident when we compare the effect of recurrent connectivity
on the peak of the lowest hill of activity and on the side of the highest hill of activity (e.g. neuron 23
and 11 respectively, in Fig. 3(a)). In the feed-forward network (continuous line) these two neurons
have a similar mean output frequency (≈ 12 Hz); nevertheless, the effect of recurrent connectivity on
their activity is different. The activity of neuron 11 is amplified by a factor of 1.24 while the activity
of neuron 23 is suppressed by a factor of 0.39 (dashed line). This difference shows that the network
is able to act differently on similar mean rates depending on the spatial context, distinguishing the
relevant signal from distractors and noise.
4 Competition in correlation space
Here we test the context-dependent computation properties of the cooperative competitive network,
also in the spike-timing domain. We stimulated the neurons with correlated, Poisson distributed
spike trains and analyzed the network?s response properties in correlation space, as a function of its
excitatory/inhibitory connection settings.
Figure 4(a) shows the mean correlation coefficient between each input spike train with the spike
trains sent to all other neurons in the array. The figure inset shows the pair-wise correlation coefficient across all neuron addresses: neurons 7 through 11 have one common source of input (35Hz)
and five independent sources (15Hz) for a total mean firing rate of 50Hz and a 70% correlation; neurons 17 through 21 were stimulated with one common source of input at 25Hz and five independent
sources at 25 Hz, for a total mean firing rate of 50 Hz and a 50% correlation; all other neurons were
stimulated with uncorrelated sources at 50Hz. The auto-correlation coefficients (along the diagonal
in the figure's inset) are not plotted, for sake of clarity.
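The input statistics above can be reproduced by mixing one shared Poisson source with a per-neuron independent source. The sketch below reflects our reading of the stimulus construction (e.g. a 35 Hz common source plus a 15 Hz independent source gives about 50 Hz total with 70% shared events) and is not the authors' code:

import numpy as np

def correlated_poisson_group(n_neurons, common_hz, indep_hz, T=10.0, dt=1e-3, seed=2):
    # Binary spike rasters: each neuron fires on the common source OR on its own source.
    rng = np.random.default_rng(seed)
    n_bins = int(T / dt)
    common = rng.random(n_bins) < common_hz * dt
    indep = rng.random((n_neurons, n_bins)) < indep_hz * dt
    return common | indep

def mean_pairwise_corr(rasters):
    # Mean correlation coefficient of each spike train with all the others.
    C = np.corrcoef(rasters)
    np.fill_diagonal(C, np.nan)
    return np.nanmean(C, axis=1)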
When used as a plain feed-forward network (with all local connections disabled), the neurons generate output spike trains that reflect the distributions of the input signal, both in the mean firing rate
domain (see Fig.2(a)) and in the correlation domain (see Fig.4(b)). The lower output mean firing
rates and smaller amount of correlations among output spikes are due to the integrating properties
of the I&F neuron and of the AER synapses.
In Fig.5 we show the response of the network when global inhibition and recurrent local excitation
are activated. Enabling only global inhibition, without recurrent excitation, has no substantial effect
with respect to the feed-forward case (compare Fig.5(a) with Fig.4(b)). However, when both competition and cooperation are enabled the network produces context-dependent effects in the correlation
space that are equivalent to the ones observed in the mean-rate domain: the correlation among
neurons that received inputs with highest correlation is amplified, with respect to the feed-forward
case, while the correlation between neurons that were stimulated by weakly correlated sources is
comparable to the correlation between all other neurons in the array.
Given the nature of the connectivity patterns in our chip, the correlation among neighboring neurons
is increased throughout the array, independent of the input sources, hence the mean correlation
coefficient is higher throughout the whole network. However, the difference in correlation between
the base level and the group with highest correlation is significantly higher when cooperation and
competition are enabled, with respect to the feed-forward case. At the same time, the difference
in correlation between the base level and the group with lowest correlation when cooperation and
competition are enabled cannot be distinguished from that of the feed-forward case. See Tab. 1 for
the estimated mean and standard deviation in the four conditions.
These are preliminary experiments that provide encouraging results. We are currently in the process
of designing an equivalent architecture on a new chip using an AMS 0.35 μm technology, with 256
neurons and 8192 synapses. We will use the new chip to perform much more thorough experiments
to extend the analysis presented in this Section.
5 Discussion
We presented a hardware cooperative competitive network composed of spiking VLSI neurons and
analog synapses, and used it to simulate in real-time network architectures similar to those studied by
Amari and Arbib [4], Douglas et al. [5], Hansel and Sompolinsky [6], and Dayan and Abbott [25].
We showed how the hardware cooperative competitive network can exhibit the type of complex
Table 1: Difference in correlation between the base level and the two groups with correlated input
in the feed-forward and cooperative-competitive network
                                   Highest correlation group    Lowest correlation group
Feed-forward network               0.029 ± 0.007                0.009 ± 0.006
Cooperative competitive network    0.04 ± 0.01                  0.010 ± 0.007
non-linear behaviors observed in biological neural systems. These behaviors have been extensively
studied in continuous models but were never demonstrated in hardware spiking systems before. We
pointed out how the recurrent network can act differently on neurons with similar activity depending
on the local context (i.e. mean firing rates, or mean correlation coefficient).
In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus
and suppresses the activity of neurons belonging to distractors or at noise level. This property is
particularly relevant in the context of signal restoration. We believe that this is one of the mechanisms
used by biological systems to perform highly reliable computation restoring signals on the basis of
cooperative-competitive interaction among elementary units of recurrent networks and hence on the
basis of the context of the signal.
In the mean correlation coefficient case, the recurrent network amplifies more efficiently the correlation between neurons which receive highly correlated inputs while keeping the average mean firing
rate constant. This result supports the idea that correlation can be viewed as an additional coding
dimension for building internal representations [26].
Acknowledgments
This work was supported in part by the EU grants ALAVLSI (IST-2001-38099) and DAISY (FP6-2005-015803), and in part by the Swiss National Science Foundation (PMPD2-110298/1).
References
[1] R. J. Douglas and K. A. C. Martin. Neural circuits of the neocortex. Annual Review of Neuroscience, 27:419–451, 2004.
[2] T. Binzegger, R. J. Douglas, and K. Martin. A quantitative map of the circuit of cat primary
visual cortex. Journal of Neuroscience, 24(39):8441–8453, 2004.
[3] E. R. Kandel, J. H. Schwartz, and T. M. Jessell. Principles of Neural Science. McGraw-Hill,
2000.
[4] S. Amari and M. A. Arbib. Competition and cooperation in neural nets. In J. Metzler, editor,
Systems Neuroscience, pages 119–165. Academic Press, 1977.
[5] R. J. Douglas, M. A. Mahowald, and K. A. C. Martin. Hybrid analog-digital architectures
for neuromorphic systems. In Proc. IEEE World Congress on Computational Intelligence,
volume 3, pages 1848–1853. IEEE, 1994.
[6] D. Hansel and H. Sompolinsky. Methods in Neuronal Modeling, chapter Modeling Feature
Selectivity in Local Cortical Circuits, pages 499–567. MIT Press, Cambridge, Massachusetts,
1998.
[7] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological
Cybernetics, 27:77–87, 1977.
[8] W. Erlhagen and G. Schöner. Dynamic field theory of movement preparation. Psychological
Review, 109:545–572, 2002.
[9] P. Dev. Perception of depth surfaces in random-dot stereograms: a neural model. International
Journal of Man-Machine Studies, 7:511–528, 1975.
[10] R. L. Didday. A model of visuomotor mechanisms in the frog optic tectum. Mathematical
Biosciences, 30:169–180, 1976.
[11] W. L. Kilmer, W. S. McCulloch, and J. Blum. A model of the vertebrate central command
system. International Journal of Man-Machine Studies, 1:279–309, 1969.
[12] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. J. Douglas, and S. Seung. Digital selection and analog amplification co-exist in an electronic circuit inspired by neocortex. Nature,
405(6789):947–951, 2000.
[13] R. Serrano-Gotarredona, M. Oster, P. Lichtsteiner, A. Linares-Barranco, R. Paz-Vicente,
F. Gómez-Rodríguez, H. Kolle Riis, T. Delbrück, S. C. Liu, S. Zahnd, A. M. Whatley,
R. J. Douglas, P. Häfliger, G. Jimenez-Moreno, A. Civit, T. Serrano-Gotarredona, A. Acosta-Jiménez, and B. Linares-Barranco. AER building blocks for multi-layer multi-chip neuromorphic vision systems. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural
Information Processing Systems, volume 15. MIT Press, Dec 2005.
[14] J. P. Abrahamsen, P. Häfliger, and T. S. Lande. A time domain winner-take-all network of
integrate-and-fire neurons. In 2004 IEEE International Symposium on Circuits and Systems,
volume 5, pages V-361–V-364, May 2004.
[15] E. Chicca, G. Indiveri, and R. J. Douglas. An event based VLSI network of integrate-and-fire
neurons. In Proceedings of IEEE International Symposium on Circuits and Systems, pages
V-357–V-360. IEEE, 2004.
[16] M. R. DeYong, R. L. Findley, and C. Fields. The design, fabrication, and test of a new VLSI
hybrid analog-digital neural processing element. IEEE Transactions on Neural Networks,
3(3):363–374, 1992.
[17] P. Hylander, J. Meador, and E. Frie. VLSI implementation of pulse coded winner-take-all
networks. In Proceedings of the 36th Midwest Symposium on Circuits and Systems, volume 1,
pages 758–761, 1993.
[18] G. Indiveri, R. Mürer, and J. Kramer. Active vision using an analog VLSI model of selective
attention. IEEE Transactions on Circuits and Systems II, 48(5):492–500, May 2001.
[19] M. Oster and S.-C. Liu. A winner-take-all spiking network with spiking inputs. In 11th IEEE
International Conference on Electronics, Circuits and Systems (ICECS 2004), 2004.
[20] K. A. Boahen. Retinomorphic Vision Systems: Reverse Engineering the Vertebrate Retina.
Ph.D. thesis, California Institute of Technology, Pasadena, CA, 1997.
[21] E. Culurciello and A. G. Andreou. A comparative study of access topologies for chip-level
address-event communication channels. IEEE Transactions on Neural Networks, 14(5):1266–1277, September 2003.
[22] E. Chicca, D. Badoni, V. Dante, M. D'Andreagiovanni, G. Salina, S. Fusi, and P. Del Giudice.
A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with
long term memory. IEEE Transactions on Neural Networks, 14(5):1297–1307, September
2003.
[23] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual
cortex. Proceedings of the National Academy of Sciences of the USA, 92(9):3844–3848, April
1995.
[24] E. Chicca, A. M. Whatley, V. Dante, P. Lichtsteiner, T. Delbrück, P. Del Giudice, R. J. Douglas,
and G. Indiveri. A multi-chip pulse-based neuromorphic infrastructure and its application to a
model of orientation selectivity. IEEE Transactions on Circuits and Systems I, Regular Papers,
2006. (In press).
[25] P. Dayan and L. F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.
[26] E. Salinas and T. J. Sejnowski. Correlated neuronal activity and the flow of neural information.
Nature Reviews Neuroscience, 2:539–550, 2001.
Optimal Single-Class Classification Strategies
Ran El-Yaniv
Department of Computer Science
Technion- Israel Institute of Technology
Technion, Israel 32000
[email protected]
Mordechai Nisenson
Department of Computer Science
Technion - Israel Institute of Technology
Technion, Israel 32000
[email protected]
Abstract
We consider single-class classification (SCC) as a two-person game between the
learner and an adversary. In this game the target distribution is completely known
to the learner and the learner's goal is to construct a classifier capable of guaranteeing a given tolerance for the false-positive error while minimizing the false
negative error. We identify both "hard" and "soft" optimal classification strategies
for different types of games and demonstrate that soft classification can provide
a significant advantage. Our optimal strategies and bounds provide worst-case
lower bounds for standard, finite-sample SCC and also motivate new approaches
to solving SCC.
1 Introduction
In Single-Class Classification (SCC) the learner observes a training set of examples sampled from
one target class. The goal is to create a classifier that can distinguish the target class from other
classes, unknown to the learner during training. This problem is the essence of a great many applications such as intrusion, fault and novelty detection. SCC has been receiving much research attention in the machine learning and pattern recognition communities (for example, the survey papers
[7, 8, 4] cite, altogether, over 100 papers). The extensive body of work on SCC, which encompasses
mainly empirical studies of heuristic approaches, suffers from a lack of theoretical contributions and
few principled (empirical) comparative studies of the proposed solutions. Thus, despite the extent
of the existing literature, some of the very basic questions have remained unresolved.
Let P(x) be the underlying distribution of the target class, defined over some space Ω. We call P the
target distribution. Let 0 < α < 1 be a given tolerance parameter. The learner observes a training
set sampled from P and should then construct a classifier capable of distinguishing the target class.
We view the SCC problem as a game between the learner and an adversary. The adversary selects
another distribution Q over Ω and then a new element of Ω is drawn from λP + (1 − λ)Q, where λ
is a switching parameter (unknown to the learner). The goal of the learner is to minimize the false
negative error, while guaranteeing that the false positive error will be at most α.
The main consideration in previous SCC studies has been statistical: how can we guarantee a prescribed false positive rate (α) given a finite sample from P? This question led to many solutions,
almost all revolving around the idea of low-density rejection. The proposed approaches are typically
generative or discriminative. Generative solutions range from full density estimation [2], to partial
density estimation such as quantile estimation [5], level set estimation [1, 9] or local density estimation [3]. In discriminative methods one attempts to generate a decision boundary appropriately
enclosing the high density regions of the training set [11].
In this paper we abstract away the statistical estimation component of the problem and model a
setting where the learner has a very large sample from the target class. In fact, we assume that the
learner knows the target distribution P precisely. While this assumption would render almost the
entire body of SCC literature superfluous, it turns out that a significant, decision-theoretic component of the SCC problem remains ? one that has so far been overlooked. In any case, the results we
obtain here immediately apply to other SCC instances as lower bounds.
The fundamental question arising in our setting is: What are optimal strategies for the learner? In
particular, is the popular low-density rejection strategy optimal? While most or all SCC papers
adopted this strategy, nowhere in the literature could we find a formal justification.
The partially good news is that low-density rejection is worst-case optimal, but only if the learner is
confined to "hard" decision strategies. In general, the worst-case optimal learner strategy should be
"soft"; that is, the learner should play a randomized strategy, which could result in a very significant
gain. We first identify a monotonicity property of optimal SCC strategies and use it to establish
the optimality of low-density rejection in the "hard" case. We then show an equivalence between
low-density rejection and a constrained two-class classification problem where the other class is the
uniform distribution over Ω. This equivalence motivates a new approach to solving SCC problems.
We next turn our attention to the power of the adversary, an issue that has been overlooked in the
literature but has crucial impact on the relevancy of SCC solutions in applications. For example,
when considering an intrusion detection application (see, e.g., [6]), it is necessary to assume that the
"attacking distribution" has some worst-case characteristics and it is important to quantify precisely
what the adversary knows or can do. The simple observation in this setting is that an omniscient and
unlimited adversary, who knows all parameters of the game including the learner's strategy, would
completely demolish the learner who uses hard strategies. By using a soft strategy, however, the
learner can achieve on average the biased-coin false negative rate of 1 − α.
We then analyze the case of an omniscient but limited adversary, who must select a sufficiently
distant Q satisfying D_KL(Q||P) ≥ λ, for some known parameter λ. One of our main contributions
is a complete analysis of this game, including identification of the optimal strategy for the learner
and the adversary, as well as the best achievable false negative rate. The optimal learner strategy and
best achievable rate are obtained via a solution of a linear program specified in terms of the problem
parameters. These results are immediately applicable as lower bounds for standard (finite-sample)
SCC problems, but may also be used to inspire new types of algorithms for standard SCC. While we
do not have a closed form expression for the best achievable false-negative rate, we provide a few
numerical examples demonstrating and comparing the optimal ?hard? and ?soft? performance.
2 Problem Formulation
The single-class classification (SCC) problem is defined as a game between the learner and an
adversary. The learner receives a training sample of examples from a target distribution P defined
over some space ?. On the basis of this training sample, the learner should select a rejection function
r : ? ? [0, 1], where for each ? ? ?, r? = r(?) is the probability with which the learner will
reject ?. On the basis of any knowledge of P and/or r(?), the adversary selects selects an attacking
distribution Q, defined over ?. Then, a new example is drawn from ?P +(1??)Q, where 0 < ? < 1,
is a switching probability unknown to the learner. The rejection rate of the learner, using a rejection
?
function r, with respect to any distribution D (over ?), is ?(D) = ?(r, D) = ED {r(?)}. For
notational convenience whenever we decorate r (e.g., r? ,r? ), the corresponding ? will be decorated
accordingly (e.g., ?? ,?? ). The two main quantities of interest here are the false positive rate (type I
error) ?(P ), and the false negative rate (type II error) 1 ? ?(Q).
Before the start of the game, the learner receives a tolerance parameter 0 < ? < 1, giving the
maximally allowed false positive rate. A rejection function r(?) is valid if its false positive rate
?(P ) ? ?. A valid rejection function (strategy) is optimal if it guarantees the smallest false negative
rate amongst all valid strategies.
We consider a model where the learner knows the target distribution P exactly, thus focusing on the
decision-theoretic component in SCC. Clearly, our model approximates a setting where the learner
has a very large training set, but the results we obtain immediately apply, in any case, as lower
bounds to other SCC instances.
This SCC game is a two-person zero-sum game where the payoff to the learner is ρ(Q). The set
R_α(P) := {r : ρ(P) ≤ α} of valid rejection functions is the learner's strategy space. Let Q be the
strategy space of the adversary, consisting of all allowable distributions Q that can be selected by
the adversary. We are concerned with optimal learner strategies for game variants distinguished by
the adversary's knowledge of the learner's strategy, P and/or of α, and by other limitations on Q.
We distinguish a special type of this game, which we call the hard setting, where the learner must
deterministically reject or accept new events; that is, r : Ω → {0, 1}, and such rejection functions
are termed "hard." The more general game defined above (with "soft" functions) is called the soft
setting. As far as we know, only the hard setting has been considered in the SCC literature thus far.
In the soft setting, given any rejection function, the learner can reduce the type II error by rejecting
more (i.e., by increasing r(·)). Therefore, for an optimal r(·) we have ρ(P) = α (rather than
ρ(P) ≤ α). It follows that the switching parameter λ is immaterial to the selection of an optimal
strategy. Specifically, the combined error of an optimal strategy is λρ(P) + (1 − λ)(1 − ρ(Q)) =
λα + (1 − λ)(1 − ρ(Q)), which is minimized by minimizing the type II error, 1 − ρ(Q).
We assume throughout this paper a finite support of size N; that is, Ω = {1, . . . , N}, and P :=
{p_1, . . . , p_N} and Q := {q_1, . . . , q_N} are probability mass functions. Additionally, a "probability
distribution" refers to a distribution over the fixed support set Ω. Note that this assumption still
leaves us with an infinite game because the learner's pure strategy space, R_α(P), is infinite.1
3 Characterizing Monotone Rejection Functions
In this section we characterize the structure of optimal learner strategies. Intuitively, it seems plausible that the learner should not assign higher rejection values to higher probability events under P .
That is, one may expect that a reasonable rejection function r(·) would be monotonically decreasing
with probability values (i.e., if p_j ≤ p_k then r_j ≥ r_k). Such monotonicity is a key justification for
a very large body of SCC work, which is based on low density rejection strategies. Surprisingly,
optimal monotone strategies are not always guaranteed as shown in the following example.
Example 3.1 (Non-Monotone Optimality) In the hard setting, take N = 3, P =
(0.06, 0.09, 0.85) and α = 0.1. The two α-valid hard rejection functions are r′ = (1, 0, 0) and
r′′ = (0, 1, 0). Let Q = {Q = (0.01, 0.02, 0.97)}. Clearly ρ′(Q) = 0.01 and ρ′′(Q) = 0.02
and therefore, r′′(·) is optimal despite breaking monotonicity. More generally, this example holds if
Q = {Q : q_2 − q_1 ≥ ε} for any 0 < ε ≤ 1.
In the soft setting, let N = 2, P = (0.2, 0.8), and α = 0.1. We note that R_α(P) = {r_δ =
(0.1 + 4δ, 0.1 − δ)}, for δ ∈ [−0.025, 0.1]. We take Q = {Q = (0.1, 0.9)}. Then ρ_δ(Q) = 0.1 +
0.4δ − 0.9δ = 0.1 − 0.5δ. This is clearly maximized when we minimize δ by taking δ = −0.025, and
then the optimal rejection function is (0, 0.125), which clearly breaks monotonicity. This example
also holds for Q = {Q : q_2 ≥ c·q_1} for any c > 4.
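The soft-setting computation above is easy to check numerically; a minimal sketch (all names are ours):

```python
import numpy as np

P = np.array([0.2, 0.8]); Q = np.array([0.1, 0.9]); alpha = 0.1

def rho(r, D):                       # rejection rate E_D[r]
    return float(np.dot(r, D))

for delta in (-0.025, 0.0, 0.1):     # candidates on the alpha-valid line
    r = np.array([0.1 + 4 * delta, 0.1 - delta])
    assert abs(rho(r, P) - alpha) < 1e-12   # every candidate is alpha-valid
    print(delta, rho(r, Q))          # delta = -0.025 maximizes rho(Q), as
                                     # claimed, giving the non-monotone optimum
```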
Fix P and α. For any adversary strategy space, Q, let R*_α(P) be the set of optimal valid rejection
functions, R*_α := {r ∈ R_α(P) : min_{Q∈Q} ρ(Q) = max_{r′∈R_α(P)} min_{Q∈Q} ρ′(Q)}.2 We note that
R*_α is never empty in the cases we consider. A simple observation is that for any r ∈ R*_α there
exists r′ ∈ R*_α such that r′(i) = r(i) for all i such that p_i > 0, and for zero probabilities, p_j = 0,
r′(j) = 1.
The following property ensures that R*_α will include a monotone (optimal) hard strategy, which
means that the search space for the learner can be conveniently confined to monotone strategies.
While the set of all distributions satisfies this property, later on we will consider limited strategic
adversary spaces where this property still holds.3
1
The game is conveniently described in extensive form (i.e., game tree) where in the first move the learner
selects a rejection function, followed by a chance move to determine the source (either P or Q) of the test
example (with probability λ). In the case where Q is selected, the adversary chooses (randomly using Q) the
test example. In this game the choice of Q depends on knowledge of P and r(·).
2
For certain strategy spaces, Q, it may be necessary to consider the infimum rather than the minimum. In
such cases it may be necessary to replace "Q ∈ Q" (in definitions, theorems, etc.) with "Q ∈ cl(Q)", where
cl(Q) is the closure of Q.
3
All properties defined in this paper could be made weaker for the purposes of the proofs, but this would
needlessly complicate them. Indeed, the way they are currently defined is sufficient for most "reasonable" Q.
Definition 3.2 (Property A) Let P be a distribution. A set of distributions Q has Property A w.r.t.
P if for all j, k and Q ∈ Q such that p_j < p_k and q_j < q_k, there exists Q′ ∈ Q such that q′_k ≤ q_j,
q′_j ≥ q_k and for all i ≠ j, k, we have q′_i = q_i.
Theorem 3.3 (Monotone Hard Decisions) When the learner is restricted to hard decisions and Q
satisfies Property A w.r.t. P, then ∃r ∈ R*_α such that p_j < p_k ⇒ r(j) ≥ r(k).4
Proof: Let us assume by contradiction that no such rejection function exists in R*_α. Let r ∈ R*_α.
Let j be such that p_j = min_{ω:r(ω)=0} p_ω. Then, there must exist k such that p_j < p_k and r(k) = 1
(otherwise r is monotone). Define r′ to be r with the values of j and k swapped; that is, r′(j) =
1, r′(k) = 0 and for all other i, r′(i) = r(i). We note that ρ′(P) = ρ(P) + p_j − p_k < ρ(P) ≤
α. Let Q′ ∈ Q be such that min_Q ρ′(Q) = ρ′(Q′) = ρ(Q′) + q′_j − q′_k. Thus, if q′_j ≥ q′_k,
ρ′(Q′) ≥ ρ(Q′). Otherwise, there exists Q′′ as in Property A and in particular, q′′_k ≤ q′_j. As a
result, ρ′(Q′) = ρ(Q′′) + q′_j − q′′_k ≥ ρ(Q′′). Therefore, there always exists Q ∈ Q such that
ρ′(Q′) ≥ ρ(Q) (either Q = Q′ or Q = Q′′). Consequently, min_Q ρ′(Q) ≥ min_Q ρ(Q), and thus,
r′ ∈ R*_α. As long as there are more j, k pairs which need to have their rejection levels fixed, we
relabel r = r′ and repeat the above procedure. Since the only changes are made to r′(j) and r′(k),
and since j is the non-rejected event with minimal probability, the procedure will be repeated at
most N times. The final r′ is in R*_α and satisfies p_j < p_k ⇒ r(j) ≥ r(k). Contradiction.
Theorem 3.3 provides a formal justification for the low-density rejection strategy (LDRS), popular
in the SCC literature. Specifically, assume w.l.o.g. p_1 ≤ p_2 ≤ · · · ≤ p_N. The corresponding α-valid
low-density rejection function places r_j = 1 iff Σ_{i=1}^{j} p_i ≤ α.
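A minimal sketch of computing such an α-valid LDRS for a known P (the function name and the greedy formulation are ours):

```python
import numpy as np

def ldrs(p, alpha):
    """alpha-valid low-density rejection strategy: reject the lowest-
    probability events while their cumulative mass stays <= alpha."""
    order = np.argsort(p)              # ascending by probability
    r = np.zeros_like(p)
    mass = 0.0
    for i in order:
        if mass + p[i] > alpha:
            break
        mass += p[i]
        r[i] = 1.0
    return r

print(ldrs(np.array([0.06, 0.09, 0.85]), 0.1))   # -> [1. 0. 0.], the LDRS
                                                 # of Example 3.1's hard case
```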
Our discussion on soft decisions is facilitated by Property B and Theorem 3.5 that follow.
Definition 3.4 (Property B) Let P be a distribution. A set of distributions Q has Property B w.r.t.
P if for all j, k and Q ∈ Q such that 0 < p_j ≤ p_k and q_j/p_j < q_k/p_k, there exists Q′ ∈ Q such that
q′_j/p_j ≥ q′_k/p_k and for all i ≠ j, k, q′_i = q_i.
The rather technical proof of the following theorem is omitted for lack of space (and appears in the
adjoining, supplementary appendix).
Theorem 3.5 (Monotone Soft Decisions) If Q satisfies Property B w.r.t. P, then ∃r ∈ R*_α such
that: (i) p_i = 0 ⇒ r(i) = 1; (ii) p_j < p_k ⇒ r(j) ≥ r(k); and (iii) p_j = p_k ⇒ r(j) = r(k).
4 Low-Density Rejection and Two-Class Classification
In this section we focus on the hard setting. We show that the low-density rejection strategy (LDRS
- defined in Section 3) is optimal. Moreover we show that the optimal hard performance can be obtained by solving a constrained two-class classification problem where the other class is the uniform
distribution over Ω. The results here consider families Q that satisfy the following property.
Definition 4.1 (Property C) Let P be a distribution. A set of distributions Q has Property C w.r.t.
P if for all j, k and Q ∈ Q such that p_j = p_k there exists Q′ ∈ Q such that q′_k = q_j, q′_j = q_k and
for all i ≠ j, k, q′_i = q_i.
We state without proof the following lemma (the proof can be found in the appendix).
Lemma 4.2 Let r′ be an α-valid low-density rejection function (LDRS). Let r be any monotone α-valid rejection function. Then min_{Q∈Q} ρ′(Q) ≥ min_{Q∈Q} ρ(Q) for any Q satisfying Property C.
Example 4.3 (Violation of Property C) We illustrate here that violating Property C may result in
a violation of Lemma 4.2. Let N = 5, P = (0.02, 0.03, 0.05, 0.05, 0.85), and α = 0.1. Then the
two α-valid LDRS rejection functions are r = (1, 1, 1, 0, 0) and r′ = (1, 1, 0, 1, 0). Let Q = {Q :
q_3 − q_4 > ε} for some 0 < ε < 1. Then, for any Q ∈ Q, ρ(Q) − ρ′(Q) = q_3 − q_4 > ε, and
therefore, for the LDRS r′, there exists a monotone r such that min_{Q∈Q} ρ′(Q) < min_{Q∈Q} ρ(Q).
4
Here we must consider a weaker notion of monotonicity for hard strategies to be both valid and optimal.
When Q satisfies Property A, then by Theorem 3.3 there exists a monotone optimal rejection function. Therefore, the following corollary of Lemma 4.2 establishes the optimality of any LDRS.
Corollary 4.4 Any α-valid LDRS is optimal if Q satisfies both Property A and Property C.
Thus, any LDRS strategy is indeed worst-case optimal when the learner is willing to be confined
to hard rejection functions and when the adversary's space satisfies Property A and Property C. We
now show that an (optimal) LDRS solution is equivalent to an optimal solution of the following
constrained Bayesian two-class decision problem. Let the first class c_1 have distribution P(x) and
the second class, c_2, have the uniform distribution U(x) = 1/N. Let 0 < c < 1 and 0 < β <
(Nc + 1 − c)/(Nc). The classes have priors Pr{c_1} = c and Pr{c_2} = 1 − c. The loss function λ_ij,
giving the cost of deciding c_i instead of c_j (i, j = 1, 2), is λ_11 = λ_22 = 0, λ_12 = (Nc + 1 − c)/(1 − c)
and λ_21 = β. The goal is to construct a classifier C(x) ∈ {c_1, c_2} that minimizes the total Bayesian
risk under the constraint that, for a given α, Σ_{x:C(x)=c_2} P(x) ≤ α. We term this problem "the
Bayesian binary problem."
Theorem 4.5 An optimal binary classifier for the Bayesian binary problem induces an optimal
(hard) solution to the SCC problem (an LDRS) when Q satisfies properties A and C.
Proof Sketch: Let C*(·) be an optimal classifier for the Bayesian binary problem. Any classifier
C(·) induces a hard rejection function r(·) by taking r(x) = 1 ⇔ C(x) = c_2. Therefore, the set of
feasible classifiers (satisfying the constraint) clearly induces R_α(P). Let M_i(C) := {x : C(x) = c_i}.
Note that the constraint is equivalent to Σ_{x∈M_2(C)} P(x) ≤ α. The Bayes risk for classifying x
as c_i is R_i(x) := λ_ii Pr{c_i|x} + λ_{i(3−i)} Pr{c_{3−i}|x} = λ_{i(3−i)} Pr{c_{3−i}|x}. The total Bayes risk is
R(C) := Σ_{x∈M_1(C)} R_1(x) + Σ_{x∈M_2(C)} R_2(x), which is minimized at C*(·). It is not difficult to
show that R_1(·) and R_2(·) are monotonically decreasing and increasing, respectively. It therefore
follows that x ∈ M_1(C*), y ∈ M_2(C*) ⇒ P(x) ≥ P(y) (otherwise, by swapping C*(x) and
C*(y), the constraint can be maintained and R(C*) decreased). It is also not difficult to show that
R_1(x) ≥ 1 > R_2(x) for any x. Thus, it follows that Σ_{y∈M_2(C*)} P(y) + min_{x∈M_1(C*)} P(x) > α
(otherwise, some x could be transferred from M_1(C*) to M_2(C*), reducing R(C*)). Together,
these two properties immediately imply that C*(·) induces an α-valid LDRS.
Theorem 4.5 motivates a different approach to SCC in which we sample from the uniform distribution over Ω and then attempt to approximate the optimal Bayes solution to the constrained binary
problem. It also justifies certain heuristics found in the literature [10, 11].
5 The Omniscient Adversary: Games, Strategies and Bounds
5.1 Unrestricted Adversary
In the first game we analyze an adversary who is completely unrestricted. This means that Q is
the set of all distributions. Unsurprisingly, this game leaves little opportunity for the learner. For
any rejection function r(·), define r_min := min_i r(i) and I_min(r) := {i : r(i) = r_min}. For any
distribution D, ρ(D) = Σ_{i=1}^{N} d_i r(i) ≥ Σ_{i=1}^{N} d_i r_min = r_min; in particular, α = ρ(P) ≥ r_min
and min_Q ρ(Q) ≥ r_min. By choosing Q such that q_i = 1 for some i ∈ I_min(r), the adversary
can achieve ρ(Q) = r_min (the same rejection rate is achieved by taking any Q with q_i = 0 for all
i ∉ I_min(r)). In the soft setting, min_Q ρ(Q) is maximized by the rejection function r*(i) := α for
all p_i > 0 (r*(i) = 1 for all p_i = 0). This is equivalent to flipping an α-biased coin for non-null
events (under P). The best achievable Type II error is 1 − α. In the hard setting, clearly r_min = 0
(otherwise 1 > α ≥ 1), and the best achievable Type II error is precisely 1. That is, absolutely
nothing can be achieved.
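A small numeric illustration of this gap, reusing the distribution of Example 3.1 (variable names are ours):

```python
import numpy as np

P = np.array([0.06, 0.09, 0.85]); alpha = 0.1
r_soft = np.where(P > 0, alpha, 1.0)   # the alpha-coin strategy r*
r_hard = np.array([1.0, 0.0, 0.0])     # a best valid hard strategy (an LDRS)
# An unrestricted adversary puts all of Q's mass on argmin_i r(i):
print(1.0 - r_soft.min())              # soft Type II error: 1 - alpha = 0.9
print(1.0 - r_hard.min())              # hard Type II error: exactly 1.0
```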
This simple analysis shows the futility of the SCC game when the adversary is too powerful. In
order to consider SCC problems at all one must consider reasonable restrictions on the adversary
that lead to more useful games. One type of restriction would be to limit the adversary's knowledge
of r(·), P and/or of α. Another type would be to directly limit the strategic choices available to the
adversary. In the next section we focus on the latter type.
5.2 A Constrained Adversary
In seeking a quantifiable constraint on Q it is helpful to recall that the essence of the SCC problem is
to try to distinguish between two probability distributions (albeit one of them unknown). A natural
constraint is a lower bound on the "distance" between these distributions. Following similar results
in hypothesis testing, we would like to consider games in which the adversary must select Q such that
D(P||Q) ≥ λ, for some constant λ > 0, where D(·||·) is the KL-divergence. Unfortunately, this
constraint is vacuous since D(P||Q) explodes when q_i ≪ p_i (for any i). In this case the adversary
can optimally play the same strategy as in the unrestricted game while meeting the KL-divergence
constraint. Fortunately, by taking D(Q||P) ≥ λ, we can effectively constrain the adversary.
We note, as usual, that the learner can (and should) reject with probability 1 any null events under
P . Thus, an adversary would be foolish to choose a distribution Q that has any probability for
these events. Therefore, we henceforth assume w.l.o.g. that Ω = Ω(P) := {ω : p_ω > 0}. Taking
D(Q||P) := Σ_{i=1}^{N} q_i log(q_i/p_i), we then define Q = Q_λ := {Q : D(Q||P) ≥ λ}. We note that Q_λ
possesses properties A, B and C w.r.t. P,5 and by Theorems 3.3 and 3.5 there exists a monotone
r ∈ R*_α (in both the hard and soft settings), and by Corollary 4.4, any α-valid LDRS is hard-optimal.
If max_i p_i ≤ 2^{−λ}, then any Q which is concentrated on a single event meets the constraint
D(Q||P) ≥ λ. Then, the adversary can play the same strategy as in the unrestricted game,
and the learner should select r* as before. For the game to be non-trivial it is thus required that
λ > log(1/max_i p_i). Similarly, if the optimal r is such that there exists j ∈ I_min(r) (that is,
r(j) = r_min) and p_j ≤ 2^{−λ}, then a distribution Q that is completely concentrated on j has
D(Q||P) ≥ λ and achieves ρ(Q) = r_min as in the unrestricted game. Therefore, r = r*, and
so maximizes r_min. We thus assume that the optimal r has no such j.
We begin our analysis of the game by identifying some useful characteristics of optimal adversary
strategies in Lemma 5.1. Then Theorem 5.2 shows that the effective support of an optimal Q has
a size of two at most. Based on these properties, we provide in Theorem 5.3 a linear program that
computes the optimal rejection function. The following lemma is stated without its (technical) proof.
Lemma 5.1 If Q minimizes ρ(Q) and meets the constraint D(Q||P) ≥ λ then: (i) D(Q||P) = λ;
(ii) p_j < p_k and q_k > 0 ⇒ r(j) > r(k); (iii) p_j < p_k and q_j > 0 ⇒ q_j log(q_j/p_j) + q_k log(q_k/p_k) >
(q_j + q_k) log((q_j + q_k)/p_k); (iv) p_j < p_k and q_j > 0 ⇒ q_j/p_j > q_k/p_k; and (v) q_j, q_k > 0 ⇒ p_j ≠ p_k.
Theorem 5.2 Any optimal adversarial strategy Q has an effective support of size at most two.
Proof Sketch: Assume by contradiction that an optimal Q* has an effective support of size J ≥ 3.
W.l.o.g. we rename events such that the first J events are the effective support of Q* (i.e., q*_i > 0,
i = 1, . . . , J). From part (i) of Lemma 5.1, Q* is a global minimizer of ρ(Q) subject to the
constraints Σ_{i=1}^{J} q_i log(q_i/p_i) = λ, q_i > 0 (i = 1, . . . , J) and Σ_{i=1}^{J} q_i = 1. The Lagrangian of this
problem is

L(Q, θ) = Σ_{i=1}^{J} r(i) q_i + θ_1 ( Σ_{i=1}^{J} q_i log(q_i/p_i) − λ ) + θ_2 ( Σ_{i=1}^{J} q_i − 1 ).    (1)

It is not hard to show, using parts (iv) and (v) of Lemma 5.1, that Q* is an extremum point of (1).
Taking the partial derivatives of (1) we have: ∂L(Q*, θ)/∂q_i = r(i) + θ_1 (log(q*_i/p_i) + 1) + θ_2 = 0. Solving
∂L(Q*, θ)/∂q_1 = ∂L(Q*, θ)/∂q_2 for θ_1, we get θ_1 = (r(2) − r(1))/(log(q*_1/p_1) − log(q*_2/p_2)). If we assume (w.l.o.g.)
that p_1 < p_2, then, from parts (ii) and (iv) of Lemma 5.1, r(2) < r(1) and q*_1/p_1 > q*_2/p_2. Thus
θ_1 < 0. Therefore, for all i, ∂²L(Q, θ)/∂q_i² = θ_1/q_i < 0, and (1) is strictly concave. Therefore, since Q* is
an extremum of the (strictly concave) Lagrangian function, it is the unique global maximum.

By part (iv) of Lemma 5.1, the smooth function f_{P,λ}(q_1, q_2, . . . , q_{J−1}) := D(Q||P) − λ has a root
at Q* where no partial derivative is zero. Therefore, it has an infinite number of roots in any convex
domain where Q* is an internal point. Thus, there exists another distribution, Q̃ ≠ Q*, where q̃_i > 0
for i = 1, . . . , J, which meets the equality criteria of the Lagrangian. Since Q* is the unique global
maximum of L(Q, θ): ρ(Q̃) = L(Q̃, θ) < L(Q*, θ) = ρ(Q*). Contradiction.

5
For any pair j, k such that p_j ≥ p_k, D(Q||P) does not decrease by transferring all the probability from k
to j in Q: q_j log(q_j/p_j) + q_k log(q_k/p_k) ≤ (q_j + q_k) log((q_j + q_k)/p_j).
We now turn our attention to the learner's selection of r(·). As already noted, it is sufficient for the
learner to consider only monotone rejection functions. Since for these functions p_j = p_k ⇒ r(j) =
r(k), the learner can partition Ω into K = K(P) event subsets, which correspond, by probability,
to "level sets", S_1, S_2, . . . , S_K (all events in a level set S have probability P_S). We re-index these
subsets such that 0 < P_{S_1} < P_{S_2} < · · · < P_{S_K}. Define K variables r_1, r_2, . . . , r_K, representing
the rejection rate assigned to each of the K level sets (∀ω ∈ S_i, r(ω) = r_i). We group our level sets
by probability: L := {S : P_S < 2^{−λ}}, M := {S : P_S = 2^{−λ}}, and H := {S : P_S > 2^{−λ}}.

By Theorem 5.2, the optimal Q which the adversary selects will have an effective support of size
2 at most. If it has an effective support of size 1, then the event ω for which q_ω = 1 cannot be
from a level set in L or H (otherwise, part (i) of Lemma 5.1 would be violated). Therefore it must
belong to the single level set in M. Thus, if M = {S_m} (for some index m), then there are feasible
solutions Q such that q_ω = 1 (for ω ∈ S_m), all of which have ρ(Q) = r_m. If, on the other hand,
Q has an effective support of size 2, then it is not hard to show that one of the two events must
be from a level set S_l ∈ L, and the other, from a level set S_h ∈ H (since all other combinations
result in a violation of either part (i) or part (iii) of Lemma 5.1). Then, there is a single solution to

q_l log(q_l/P_{S_l}) + (1 − q_l) log((1 − q_l)/P_{S_h}) = λ,

where q_l and 1 − q_l are the probabilities that Q assigns to the
events from S_l and S_h, respectively. For such a distribution, ρ(Q) = q_l r_l + (1 − q_l) r_h.
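Assuming base-2 logarithms (consistent with the 2^{−λ} thresholds used above), the admissible solution q_l can be found numerically by bisection over the interval on which the left-hand side is increasing; a sketch (the function name is ours):

```python
import math

def solve_ql(P_l, P_h, lam, tol=1e-12):
    """Solve q*log2(q/P_l) + (1-q)*log2((1-q)/P_h) = lam for the unique root
    with q > P_l/(P_l+P_h); requires P_l < 2**-lam < P_h."""
    def h(q):
        return q * math.log2(q / P_l) + (1 - q) * math.log2((1 - q) / P_h)
    lo, hi = P_l / (P_l + P_h), 1.0 - 1e-12   # h is increasing on [lo, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```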
Therefore, the adversary's choice of an optimal distribution, Q, must have one of |L||H| + |M| ≤
⌈K²/4⌉ (possibly different) rejection rates. Each of these rates, ρ_1, ρ_2, . . . , ρ_{|L||H|+|M|}, is a linear
combination of at most two variables, r_i and r_j. We introduce an additional variable, z, to represent
the max-min rejection rate. We thus have:
Theorem 5.3 An optimal soft rejection function and the lower bound on the Type II error, 1 − z, are
obtained by solving the following linear program:6

maximize_{r_1, r_2, . . . , r_K, z}  z,  subject to:
Σ_{i=1}^{K} r_i |S_i| P_{S_i} = α,   1 ≥ r_1 ≥ r_2 ≥ · · · ≥ r_K ≥ 0,   ρ_i ≥ z,  i ∈ {1, 2, . . . , |L||H| + |M|}.
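A sketch of assembling and solving this program with SciPy's linprog; it assumes the level sets have already been formed (sizes |S_i| and distinct probabilities P_{S_i} sorted ascending) and reuses the hypothetical solve_ql helper from the previous sketch:

```python
import numpy as np
from scipy.optimize import linprog

def solve_scc_lp(sizes, probs, alpha, lam):
    """Returns (r_1..r_K, z) for the LP of Theorem 5.3. Assumes at least one
    candidate rate exists; float equality against 2**-lam is used for M."""
    K = len(probs)
    thr = 2.0 ** (-lam)
    L = [i for i in range(K) if probs[i] < thr]
    M = [i for i in range(K) if probs[i] == thr]
    H = [i for i in range(K) if probs[i] > thr]

    rows, rhs = [], []                       # A_ub x <= b_ub, x = (r, z)
    for i in range(K - 1):                   # monotonicity: r_{i+1} <= r_i
        row = np.zeros(K + 1); row[i + 1], row[i] = 1.0, -1.0
        rows.append(row); rhs.append(0.0)
    for m in M:                              # candidate rate rho = r_m
        row = np.zeros(K + 1); row[K] = 1.0; row[m] = -1.0
        rows.append(row); rhs.append(0.0)
    for l in L:                              # candidate rho = q*r_l+(1-q)*r_h
        for h in H:
            q = solve_ql(probs[l], probs[h], lam)
            row = np.zeros(K + 1); row[K] = 1.0
            row[l], row[h] = -q, -(1.0 - q)
            rows.append(row); rhs.append(0.0)

    A_eq = np.zeros((1, K + 1))
    A_eq[0, :K] = np.array(sizes) * np.array(probs)   # sum r_i |S_i| P_{S_i}
    c = np.zeros(K + 1); c[K] = -1.0                  # maximize z
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=[alpha], bounds=[(0, 1)] * (K + 1))
    return res.x[:K], res.x[K]
```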
5.2.1 Numerical Examples
We now compare the performance of hard and soft rejection strategies for this constrained game
(D(Q||P) ≥ λ) for various values of λ, and two different families of target distributions, P, over
support N = 50. The families are arbitrary probability mass functions over N events and discretized Gaussians (over N bins). For each λ we generated 50 random distributions P for each of
the families.7 For each such P we solved the optimal hard and soft strategies and computed the
corresponding worst-case optimal type II error, 1 − ρ(Q).
The results for α = 0.05 are shown in Figure 1. Other results (not presented) for a wide variety
of the problem parameters (e.g., N, α) are qualitatively the same. It is evident that both the soft and
hard strategies are ineffective for small λ. Clearly, the soft approach has significantly lower error
than the hard approach (until λ becomes "sufficiently large").
6
Let r̄ be the solution to the linear program. Our derivation of the linear program is dependent on the
assumption that there is no event j ∈ I_min(r̄) such that p_j ≤ 2^{−λ} (see discussion preceding Lemma 5.1). If
r̄ contradicts this assumption then, as discussed, the optimal strategy is r*. It is not hard to
prove that in this case r̄ = r* anyway, and thus the solution to the linear program is always optimal.
7
Since max_Q D(Q||P) = log(1/min_i p_i), it is necessary that min_i p_i ≤ 2^{−λ} when generating P (to
ensure that a λ-distant Q exists). Distributions in the first family of arbitrarily random distributions (a) are
generated by sampling a point (p_1) uniformly in (0, 2^{−λ}]. The other N − 1 points are drawn i.i.d. ~ U(0, 1],
and then normalized so that their sum is 1 − p_1. The second family (b) are Gaussians centered at 0 and
discretized over N evenly spaced bins in the range [−10, 10]. A (discretized) random Gaussian N(0, σ) is
selected by choosing σ uniformly in some range [σ_min, σ_max]. σ_min is set to the minimum σ ensuring that the
first/last bin will not have "zero" probability (due to limited precision). σ_max was set so that the cumulative
probability in the first/last bin will be 2^{−λ}, if possible (otherwise σ_max is arbitrarily set to 10·σ_min).
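A sketch of the family-(a) sampling procedure described in footnote 7 (function name and rng handling are ours; the interval endpoints are treated loosely):

```python
import numpy as np

def sample_family_a(N, lam, rng):
    """Random P with p_1 uniform in (0, 2**-lam]; the remaining N-1 points
    are i.i.d. U(0, 1], renormalized to sum to 1 - p_1 (footnote 7, (a))."""
    p1 = rng.uniform(0.0, 2.0 ** (-lam))
    rest = rng.uniform(0.0, 1.0, N - 1)
    rest *= (1.0 - p1) / rest.sum()
    return np.concatenate(([p1], rest))

rng = np.random.default_rng(0)
P = sample_family_a(50, 3.0, rng)    # one target distribution for N=50, lam=3
```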
[Figure 1 appears here: two panels, (a) Arbitrary and (b) Gaussians, plotting the worst-case Type II error against λ for the soft and hard strategies.]

Figure 1: Type II Error vs. λ, for N = 50 and α = 0.05. 50 distributions were generated for each
value of λ (λ = 0.5, 1.0, . . . , 12.5). Error bars depict standard error of the mean (SEM).
6 Concluding Remarks
We have introduced a game-theoretic approach to the SCC problem. This approach lends itself well
to analysis, allowing us to prove under what conditions low-density rejection is hard-optimal and if
an optimal monotone rejection function is guaranteed to exist. Our analysis introduces soft decision
strategies, which allow for significantly better performance. Observing the learner's futility when
facing an omniscient and unlimited adversary, we considered restricted adversaries and provided
full analysis of an interesting family of constrained games. This work opens up many new avenues
for future research. We believe that our results could be useful for inspiring new algorithms for
finite-sample SCC problems. For example, the equivalence of low-density rejection to the Bayesian
binary problem, as shown in Section 4, obviously motivates a new approach. Clearly, the utilization
of randomized strategies should be carried over to the finite sample case as well. Our approach can
be extended and developed in several ways. A very interesting setting to consider is one in which the
adversary has partial knowledge of the problem parameters and the learner?s strategy. For example,
the adversary may only know that P is in some subspace. Additionally, it is desirable to extend
our analysis to infinite and continuous event spaces. Finally, it would be very nice to determine an
explicit expression for the lower bound obtained by the linear program of Theorem 5.3.
References
[1] S. Ben-David and M. Lindenbaum. Learning distributions by their density-levels: a paradigm for learning
without a teacher. In EuroCOLT, pages 53–68, 1995.
[2] C. M. Bishop. Novelty detection and neural network validation. IEE Proceedings - Vision, Image, and
Signal Processing, 141(4):217–222, 1994.
[3] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander. LOF: Identifying density-based local outliers. In
SIGMOD Conference, pages 93–104, 2000.
[4] V. Hodge and J. Austin. A survey of outlier detection methodologies. Artificial Intelligence Review,
22(2):85–126, 2004.
[5] G. R. G. Lanckriet, L. El Ghaoui, and M. I. Jordan. Robust novelty detection with single-class MPM. In
NIPS, pages 905–912, 2002.
[6] A. Lazarevic, L. Ertöz, V. Kumar, A. Ozgur, and J. Srivastava. A comparative study of anomaly detection
schemes in network intrusion detection. In SDM, 2003.
[7] M. Markou and S. Singh. Novelty detection: a review - part 1: statistical approaches. Signal Processing,
83(12):2481–2497, 2003.
[8] M. Markou and S. Singh. Novelty detection: a review - part 2: neural network based approaches. Signal
Processing, 83(12):2499–2521, 2003.
[9] I. Steinwart, D. Hush, and C. Scovel. A classification framework for anomaly detection. Journal of
Machine Learning Research, 6, 2005.
[10] D. M. J. Tax and R. P. W. Duin. Uniform object generation for optimizing one-class classifiers.
Journal of Machine Learning Research, 2:155–173, 2002.
[11] H. Yu. Single-class classification with mapping convergence. Machine Learning, 61(1-3):49–69, 2005.
Large Scale Hidden Semi-Markov SVMs
Gunnar Rätsch*
Friedrich Miescher Laboratory, Max Planck Society
Spemannstr. 39, 72070 Tübingen, Germany
[email protected]
Sören Sonnenburg
Fraunhofer FIRST.IDA
Kekuléstr. 7, 12489 Berlin, Germany
[email protected]
Abstract
We describe Hidden Semi-Markov Support Vector Machines (SHM SVMs), an
extension of HM SVMs to semi-Markov chains. This allows us to predict segmentations of sequences based on segment-based features measuring properties
such as the length of the segment. We propose a novel technique to partition the
problem into sub-problems. The independently obtained partial solutions can then
be recombined in an efficient way, which allows us to solve label sequence learning problems with several thousands of labeled sequences. We have tested our
algorithm for predicting gene structures, an important problem in computational
biology. Results on a well-known model organism illustrate the great potential of
SHM SVMs in computational biology.
1 Introduction
Hidden Markov SVMs are a recently-proposed method for predicting a label sequence given the
input sequence [3, 17, 18, 1, 2]. They combine the benefits of the power and flexibility of kernel
methods with the idea of Hidden Markov Models (HMM) [11] to predict label sequences. In this
work we introduce a generalization of Hidden Markov SVMs, called Hidden Semi-Markov SVMs
(SHM SVMs). In HM SVMs and HMMs there is a state transition for every input symbol. In semi-Markov processes it is allowed to persist in a state for a number of time steps before transitioning
into a new state. During this segment of time the system's behavior is allowed to be non-Markovian.
This adds flexibility for instance to model segment lengths or to use non-linear content sensors that
may depend on the start and end of the segment.
One of the largest problems with HM SVMs and also SHM SVMs is their high computational
complexity. Solving the resulting optimization problems may become computationally infeasible
already for a few hundred examples. In the second part of the paper we consider the case of using
content sensors (for whole segments) and signal detectors (at segment boundaries) in SHM SVMs.
We motivate a simple, but very effective strategy of partitioning the problem into independent subproblems and discuss how one can recombine the different parts. We propose to solve a relatively
small optimization problem that can be solved rather efficiently. This strategy allows us to tackle
significantly larger label sequence problems (with several thousands of sequences).
To illustrate the strength of our approach we have applied our algorithm to an important problem
in computational biology: the prediction of the segmentation of a pre-mRNA sequence into exons
and introns. On problems derived from sequences of the model organism Caenorhabditis elegans
we can show that the SHM SVM approach consistently outperforms HMM based approaches by a
large margin (see also [13]).
The paper is organized as follows: In Section 2 we introduce the necessary notation, HM SVMs and
the extension to semi-Markov models. In Section 3 we propose and discuss a technique that allows
us to train SHM SVMs on significantly more training examples. Finally, in Section 4 we outline
the gene structure prediction problem, discuss additional techniques to apply SHM SVMs to this
problem and show surprisingly large improvements compared to state-of-the-art methods.
* Corresponding author, http://www.fml.mpg.de/raetsch
2 Hidden Markov SVMs
In label sequence learning one learns a function that assigns to a sequence of objects x = x_1 x_2 . . . x_l
a sequence of labels y = σ_1 σ_2 . . . σ_l (x_i ∈ X, σ_i ∈ Σ, i = 1, . . . , l). While objects can be of rather
arbitrary kind (e.g. vectors, letters, etc), the set of labels Σ has to be finite.1 A common approach is to
determine a discriminant function F : X × Y → R that assigns a score to every input x ∈ X := X*
and every label sequence y ∈ Y := Σ*, where X* denotes the Kleene closure of X. In order to
obtain a prediction f(x) ∈ Y, the function is maximized with respect to the second argument:

f(x) = argmax_{y∈Y} F(x, y).    (1)
2.1 Representation & Optimization Problem
In Hidden Markov SVMs (HM SVMs) [3], the function F(x, y) := ⟨w, Φ(x, y)⟩ is linearly
parametrized by a weight vector w, where Φ(x, y) is some mapping into a feature space F. Given
a set of training examples (x^n, y^n), n = 1, . . . , N, the parameters are tuned such that the true
labeling y^n scores higher than all other labelings y ∈ Y_n := Y \ {y^n} with a large margin, i.e.
F(x^n, y^n) ≫ max_{y∈Y_n} F(x^n, y). This goal can be achieved by solving the following optimization problem (appeared equivalently in [3]):

min_{ξ∈R^N, w∈F}   C Σ_{n=1}^{N} ξ_n + P(w)    (2)
s.t.   ⟨w, Φ(x^n, y^n)⟩ − ⟨w, Φ(x^n, y)⟩ ≥ 1 − ξ_n   for all n = 1, . . . , N and y ∈ Y_n,

where P is a suitable regularizer (e.g. P(w) = ‖w‖²) and the ξ's are slack variables to implement
a soft margin. Note that the linear constraints in (2) are equivalent to the following set of nonlinear
constraints: F(x^n, y^n) − max_{y∈Y_n} F(x^n, y) ≥ 1 − ξ_n for n = 1, . . . , N [3].
If $P(w) = \|w\|^2$, it can be shown that the solution $w^*$ of (2) can be written as
$$w^* = \sum_{n=1}^{N} \sum_{y \in \mathcal{Y}} \alpha_n(y)\, \Phi(x^n, y),$$
where $\alpha_n(y)$ is the Lagrange multiplier of the constraint involving example $n$ and labeling $y$ (see
[3] for details). Defining the kernel as $k((x, y), (x', y')) := \langle \Phi(x, y), \Phi(x', y') \rangle$, we can rewrite
$F(x, y)$ as
$$F(x', y') = \sum_{n=1}^{N} \sum_{y \in \mathcal{Y}} \alpha_n(y)\, k((x^n, y), (x', y')).$$
2.2 Outline of an Optimization Algorithm
The number of constraints in (2) can be very large, which may constitute a challenge for efficiently
solving problem (2). Fortunately, only a few of the constraints are usually active, and working set
methods can be applied in order to solve the problem for larger numbers of examples. The idea is to
start with a small set of negative (i.e. false) labelings for every example. One solves (2) for the
smaller problem and then identifies labelings $y \in \mathcal{Y}_n$ that maximally violate constraints, i.e.
$$\hat{y} = \mathop{\mathrm{argmax}}_{y \in \mathcal{Y}_n} F(x^n, y), \qquad (3)$$
where $w$ is the intermediate solution of the restricted problem. The new constraint generated by the
negative labeling is then added to the optimization problem. The method described above is also
known as the column generation method or cutting-plane algorithm and can be shown to converge to the
optimal solution $w^*$ [18]. However, since the computation of $F$ involves many kernel computations
and also the number of non-zero $\alpha$'s is often large, solving the problem with more than a few hundred
labeled sequences often seems computationally too expensive.
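For concreteness, the working-set loop can be sketched as follows. This is our illustration, not the authors' implementation; `solve_qp`, `decode_best` and `score` are hypothetical callables standing in for a QP solver for the restricted problem (2), the Viterbi-like decoder of Section 2.3 computing (3), and the evaluation of F:

```python
def cutting_plane_train(examples, solve_qp, decode_best, score,
                        max_iter=100, eps=1e-3):
    """Column-generation training for a structured-output SVM (a sketch).

    examples:    list of (x, y_true) pairs
    solve_qp:    solves (2) restricted to the current working sets, returns w
    decode_best: returns argmax_y F(x, y; w), i.e. eq. (3)
    score:       evaluates F(x, y; w)
    """
    working_sets = [[] for _ in examples]     # negative labelings per example
    w = solve_qp(examples, working_sets)      # start from empty working sets
    for _ in range(max_iter):
        added = False
        for n, (x, y_true) in enumerate(examples):
            y_hat = decode_best(x, w)         # most violating labeling
            # add the constraint if the margin is violated (the per-example
            # slack is ignored here for brevity) and it is new
            if (y_hat != y_true
                    and score(w, x, y_true) - score(w, x, y_hat) < 1 - eps
                    and y_hat not in working_sets[n]):
                working_sets[n].append(y_hat)
                added = True
        if not added:                         # no violated constraints left
            break
        w = solve_qp(examples, working_sets)  # re-solve the restricted problem
    return w
```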
2.3 Viterbi-like Decoding
Determining the optimal labeling in (1) efficiently is crucial during optimization and prediction. If
$F(x, \cdot)$ satisfies certain conditions, one can use a Viterbi-like algorithm [20] for efficient decoding
[1] Note that the number of possible labelings grows exponentially in the length of the sequence.
of the optimal labeling. This is particularly the case when $\Phi$ can be written as a sum over the length
of the sequence and decomposed as
$$\Phi(x, y) = \left( \sum_{i=1}^{l(x)} \phi_{\sigma,\tau}(\sigma_i, \sigma_{i+1}, x, i) \right)_{\sigma,\tau \in \Sigma}, \qquad \text{[2]}$$
where $l(x)$ is the length of the sequence $x$. By $(\phi_\sigma)_{\sigma \in \Sigma}$ we denote the concatenation of feature
vectors, i.e. $(\phi_{\sigma_1}^\top, \phi_{\sigma_2}^\top, \dots)^\top$. It is essential that $\Phi$ is composed of mapping functions that depend
only on labels at positions $i$ and $i+1$, $x$ as well as $i$. We can rewrite $F$ using $w = (w_{\sigma,\tau})_{\sigma,\tau \in \Sigma}$:
$$F(x, y) = \sum_{\sigma,\tau \in \Sigma} \left\langle w_{\sigma,\tau},\ \sum_{i=1}^{l(x)} \phi_{\sigma,\tau}(\sigma_i, \sigma_{i+1}, x, i) \right\rangle = \sum_{i=1}^{l(x)} \sum_{\sigma,\tau \in \Sigma} \underbrace{\langle w_{\sigma,\tau}, \phi_{\sigma,\tau}(\sigma_i, \sigma_{i+1}, x, i) \rangle}_{=:g(\sigma_i, \sigma_{i+1}, x, i)}. \qquad (4)$$
Thus we have positionally decomposed the function $F$. The score at position $i+1$ only depends on
$x$, $i$ and the labels at positions $i$ and $i+1$ (Markov property).
Using this decomposition we can define
$$V(i, \sigma) := \begin{cases} \max_{\sigma' \in \Sigma} \left( V(i-1, \sigma') + g(\sigma', \sigma, x, i-1) \right) & i > 1 \\ 0 & \text{otherwise} \end{cases}$$
as the maximal score for all labelings with label $\sigma$ at position $i$. Via dynamic programming one can
compute $\max_{\sigma \in \Sigma} V(l(x), \sigma)$, which can be proven to solve (1) for the considered case. Moreover,
using backtracking one can recover the optimal label sequence.[3]
The above decoding algorithm requires evaluating $g$ at most $|\Sigma|^2 l(x)$ times. Since computing $g$
involves computing potentially large sums of kernel functions, the decoding step can be computationally quite demanding, depending on the kernels and the number of examples.
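A direct transcription of this recursion into Python reads as follows (a sketch of ours; `g` is assumed to implement the positionwise score of Eq. (4), and positions are 1-based as in the text):

```python
def viterbi_decode(x, labels, g):
    """Best label sequence under the decomposition (4), with backtracking.

    x:      input sequence with positions 1..l
    labels: the finite label set Sigma
    g:      g(prev_label, label, x, i), the score contribution of having
            `prev_label` at position i and `label` at position i+1
    """
    l = len(x)
    V = {(1, s): 0.0 for s in labels}                # base case V(1, sigma) = 0
    back = {}
    for i in range(2, l + 1):
        for s in labels:
            prev = max(labels, key=lambda p: V[i - 1, p] + g(p, s, x, i - 1))
            V[i, s] = V[i - 1, prev] + g(prev, s, x, i - 1)
            back[i, s] = prev
    last = max(labels, key=lambda s: V[l, s])        # best final label
    path = [last]
    for i in range(l, 1, -1):                        # recover the labeling
        path.append(back[i, path[-1]])
    path.reverse()
    return path, V[l, last]
```

The loop evaluates `g` exactly the $|\Sigma|^2 l(x)$ times stated above.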
2.4 Extension to Hidden Semi-Markov SVMs
Semi-Markov models extend hidden Markov models by allowing each state to persist for a non-unit number $d_i$ of symbols. Only after that does the system transition to a new state, which only
depends on $x$ and the current state. During the interval $(i, i + d_i)$ the behavior of the system may
be non-Markovian [14]. Semi-Markov models are fairly common in certain applications of statistics
[6, 7] and are also used in reinforcement learning [16]. Moreover, [15, 9] previously proposed an
extension of HMMs, called Generalized HMMs (GHMMs), that is very similar to the ideas above.
Also, [14] proposed a semi-Markov extension to Conditional Random Fields.
In this work we extend Hidden Markov SVMs to Hidden Semi-Markov SVMs by considering sequences of segments instead of simple label sequences. We need to extend the definition of the
labeling with $s$ segments: $y = (\rho_1, \sigma_1), (\rho_2, \sigma_2), \dots, (\rho_s, \sigma_s)$, where $\rho_j$ is the start position of the
segment and $\sigma_j$ its label.[4] We assume $\rho_1 = 1$ and let $\rho_j = \rho_{j-1} + d_{j-1}$. To simplify the notation we
define $\rho_{s+1} := l(x) + 1$, $s := s(y)$ to be the number of segments in $y$ and $\sigma_{s+1} := \oslash$. We can now
generalize the mapping $\Phi$ to:
$$\Phi(x, y) = \left( \sum_{j=1}^{s(y)} \phi_{\sigma,\tau}(\rho_j, \rho_{j+1}, x, \sigma_j, \sigma_{j+1}) \right)_{\sigma,\tau \in \Sigma}.$$
[2] We define $\sigma_{l+1} := \oslash$ to keep the notation simple.
[3] Note that one can extend the outlined decoding algorithm to produce not only the best path, but the $K$ best
paths. The 2nd best path may be required to compute the structure in (3). The idea is to duplicate the tables $K$ times
as follows:
$$V(i, \sigma, k) := \begin{cases} \max^{(k)}_{\sigma' \in \Sigma,\, k' = 1, \dots, K} \left( V(i-1, \sigma', k') + g(\sigma', \sigma, x, i-1) \right) & i > 1 \\ 0 & \text{otherwise} \end{cases}$$
where $\max^{(k)}$ is the function computing the $k$th largest number and is $-\infty$ if there are fewer numbers.
$V(i, \sigma, k)$ now is the $k$-best score of labelings with label $\sigma$ at position $i$.
[4] For simplicity, we associate the label of a segment with the signal at the boundary to the next segment. A
generalization is straightforward.
With this definition we can extract features from segments: as $\rho_j$ and $\rho_{j+1}$ are given, one can for
instance compute the length of the segment or other features that depend on the start and the end of
the segment. Decomposing $F$ results in:
$$F(x, y) = \sum_{j=1}^{s(y)} \sum_{\sigma,\tau \in \Sigma} \underbrace{\langle w_{\sigma,\tau}, \phi_{\sigma,\tau}(\rho_j, \rho_{j+1}, x, \sigma_j, \sigma_{j+1}) \rangle}_{=:g(\rho_j, \rho_{j+1}, x, \sigma_j, \sigma_{j+1})}. \qquad (5)$$
Analogously we can extend the formula for the Viterbi-like decoding algorithm [14]:
$$V(i, \sigma) := \begin{cases} \max_{\sigma' \in \Sigma,\, d = 1, \dots, \min(i-1, S)} \left( V(i-d, \sigma') + g(\sigma', \sigma, x, i-d, i) \right) & i > 1 \\ 0 & \text{otherwise} \end{cases} \qquad (6)$$
where $S$ is the maximal segment length and $\max_{\sigma \in \Sigma} V(l(x), \sigma)$ is the score of the best segment
labeling. The function $g$ needs to be evaluated at most $|\Sigma|^2 l(x) S$ times. The optimal label sequence
can be obtained as before by backtracking. Also, the above method can be easily extended to produce
the $K$ best labelings (cf. Footnote 3).
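The same pattern extends directly to the segment case; again a sketch of ours, where `g` now scores whole segments as in Eq. (5):

```python
def semi_markov_viterbi(x, labels, g, S):
    """Semi-Markov decoding, Eq. (6): each step consumes a whole segment.

    g: g(prev_label, label, x, start, end) scores the segment from `start`
       to `end` carrying `label`, preceded by a segment labeled `prev_label`
    S: maximal segment length
    """
    l = len(x)
    V = {(1, s): 0.0 for s in labels}
    back = {}
    for i in range(2, l + 1):
        for s in labels:
            best, arg = float("-inf"), None
            for d in range(1, min(i - 1, S) + 1):   # candidate segment lengths
                for p in labels:
                    cand = V[i - d, p] + g(p, s, x, i - d, i)
                    if cand > best:
                        best, arg = cand, (i - d, p)
            V[i, s], back[i, s] = best, arg
    # backtrack to recover the segmentation as (start, end, label) triples
    s = max(labels, key=lambda t: V[l, t])
    best_score, segments, i = V[l, s], [], l
    while i > 1:
        j, p = back[i, s]
        segments.append((j, i, s))
        i, s = j, p
    segments.reverse()
    return segments, best_score
```

The two nested inner loops make the stated $|\Sigma|^2 l(x) S$ evaluations of `g` explicit.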
3 An Algorithm for Large Scale Learning
3.1 Preliminaries
In this section we consider a specific case that is relevant for the application that we have in mind.
The idea is that the feature map should contain information about segments such as the length or the
content as well as segment boundaries, which may exhibit certain detectable signals. For simplicity
we assume that it is sufficient to consider the string $x_{\rho_j .. \rho_{j+1}} := x_{\rho_j} x_{\rho_j + 1} \dots x_{\rho_{j+1} - 2}\, x_{\rho_{j+1} - 1}$
for extracting content information about segment $j$. Also, for considering signals we assume it
to be sufficient to consider a window $\pm \ell$ around the end of the segment, i.e. we only consider
$x_{\rho_{j+1} \pm \ell} := x_{\rho_{j+1} - \ell} \dots x_{\rho_{j+1} + \ell}$. To keep the notation simple we do not consider signals at the
start of the segment. Moreover, we assume for simplicity that $x_{\tau \pm \ell}$ is appropriately defined for
every $\tau = 1, \dots, l(x)$. We may therefore define the following feature map:
$$\Phi(x, y) = \left( \begin{array}{ll} \left( \displaystyle\sum_{j=1}^{s(y)} [[\sigma_j = \sigma]]\, [[\sigma_{j+1} = \tau]]\; \phi_c(x_{\rho_j .. \rho_{j+1}}) \right)_{\sigma,\tau \in \Sigma} & \text{\% content} \\[2mm] \left( \displaystyle\sum_{j=1}^{s(y)} [[\sigma_{j+1} = \tau]]\; \phi_s(x_{\rho_{j+1} \pm \ell}) \right)_{\tau \in \Sigma} & \text{\% signal} \end{array} \right)$$
where [[true]] = 1 and 0 otherwise. Then the kernel between two examples using this feature map
can be written as:
$$k((x, y), (x', y')) = \sum_{\sigma,\tau \in \Sigma}\ \sum_{j : (\sigma_j, \sigma_{j+1}) = (\sigma,\tau)}\ \sum_{j' : (\sigma'_{j'}, \sigma'_{j'+1}) = (\sigma,\tau)} k_c\big(x_{\rho_j .. \rho_{j+1}},\, x'_{\rho'_{j'} .. \rho'_{j'+1}}\big) \;+\; \sum_{\tau \in \Sigma}\ \sum_{j : \sigma_{j+1} = \tau}\ \sum_{j' : \sigma'_{j'+1} = \tau} k_s\big(x_{\rho_{j+1} \pm \ell},\, x'_{\rho'_{j'+1} \pm \ell}\big)$$
where $k_c(\cdot, \cdot) := \langle \phi_c(\cdot), \phi_c(\cdot) \rangle$ and $k_s(\cdot, \cdot) := \langle \phi_s(\cdot), \phi_s(\cdot) \rangle$. The above formulation has the
benefit of keeping the signal and content kernels separated for each label, which we can exploit for
rewriting $F(x, y)$:
$$F(x, y) = \sum_{\sigma,\tau \in \Sigma}\ \sum_{j : (\sigma_j, \sigma_{j+1}) = (\sigma,\tau)} F_{\sigma,\tau}\big(x_{\rho_j .. \rho_{j+1}}\big) \;+\; \sum_{\tau \in \Sigma}\ \sum_{j : \sigma_{j+1} = \tau} F_\tau\big(x_{\rho_{j+1} \pm \ell}\big),$$
where
$$F_{\sigma,\tau}(s) := \sum_{n=1}^{N} \sum_{y' \in \mathcal{Y}} \alpha_n(y') \sum_{j' : (\sigma'_{j'}, \sigma'_{j'+1}) = (\sigma,\tau)} k_c\big(s,\, x^n_{\rho'_{j'} .. \rho'_{j'+1}}\big)$$
and
$$F_\tau(s) = \sum_{n=1}^{N} \sum_{y' \in \mathcal{Y}} \alpha_n(y') \sum_{j' : \sigma'_{j'+1} = \tau} k_s\big(s,\, x^n_{\rho'_{j'+1} \pm \ell}\big).$$
Hence, we have partitioned $F(x, y)$ into $|\Sigma|^2 + |\Sigma|$ functions characterizing the content and the
signals.
3.2 Two-Stage Learning
By enumerating all non-zero $\alpha$'s and valid settings of $j'$ in $F_\tau$ and $F_{\sigma,\tau}$, we can define sets of
sequences $\{s^{\sigma,\tau}_m\}_{m=1,\dots,M_{\sigma,\tau}}$ and $\{s^\tau_m\}_{m=1,\dots,M_\tau}$, where every element is of the form $x^n_{\rho_j .. \rho_{j+1}}$
and $x^n_{\rho_{j+1} \pm \ell}$, respectively. Hence, $F_\tau$ and $F_{\sigma,\tau}$ can be rewritten as a (single-sum) linear combination of kernels: $F_{\sigma,\tau}(s) := \sum_{m=1}^{M_{\sigma,\tau}} \beta^{\sigma,\tau}_m k_c(s, s^{\sigma,\tau}_m)$ and $F_\tau(s) := \sum_{m=1}^{M_\tau} \beta^\tau_m k_s(s, s^\tau_m)$ for
appropriately chosen $\beta$'s. For sequences $s^\tau_m$ that do not correspond to true segment boundaries,
the coefficient $\beta^\tau_m$ is either negative or zero (since wrong segment boundaries can only appear in
wrong labelings $y \neq y^n$ and $\alpha_n(y) \leq 0$). True segment boundaries in correct label sequences have
non-negative $\beta^\tau_m$'s. Analogously with segments $s^{\sigma,\tau}_m$. Hence, we may interpret these functions as
SVM classification functions recognizing segments and boundaries of all kinds.
Hidden Semi-Markov SVMs simultaneously optimize all these functions and also determine the
relative importance of the different signals and sensors. In this work we propose to separate the
learning of the content sensors and signal detectors from learning how they have to act together
in order to produce the correct labeling. The idea is to train SVM-based classifiers $\tilde{F}_{\sigma,\tau}$ and $\tilde{F}_\tau$
using the kernels $k_c$ and $k_s$ on examples with known labeling. For every segment type and segment boundary we generate a set of positive examples from observed segments and boundaries. As
negative examples we use all boundaries and segments that were not observed in a true labeling.
This leads to a set of sequences that may potentially also appear in the expansions of $F_{\sigma,\tau}$ and $F_\tau$.
However, the expansion coefficients $\tilde{\beta}^{\sigma,\tau}_m$ and $\tilde{\beta}^\tau_m$ are expected to be different, as the functions are
estimated independently.
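Concretely, the data generation for these two-class problems can be sketched as follows. This is our illustration with assumed conventions, not the authors' pipeline: a labeling is taken to be a list of (start, end, label) segments, and negatives are simply the observed strings of other transitions, a stand-in for the candidate segments and boundaries not occurring in any true labeling:

```python
def sensor_training_sets(examples):
    """Two-class training data for the content sensors (a sketch).

    examples: list of (x, segments), segments = [(start, end, label), ...]
    Returns {(prev_label, label): (positives, negatives)}.
    """
    observed = {}
    for x, segments in examples:
        prev = "start"
        for start, end, label in segments:
            # positive examples: segment strings observed for this transition
            observed.setdefault((prev, label), []).append(x[start:end])
            prev = label
    training = {}
    for key, positives in observed.items():
        # simplistic negatives: observed strings of all other transitions
        negatives = [s for other, strings in observed.items()
                     if other != key for s in strings]
        training[key] = (positives, negatives)
    return training
```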
The advantage of this approach is that solving two-class problems, for which we can reuse existing
large scale learning methods, is much easier than solving the full HSM SVM problem. However,
while the functions $\tilde{F}_{\sigma,\tau}$ and $\tilde{F}_\tau$ might recognize the same contents and signals as $F_{\sigma,\tau}$ and $F_\tau$, the
functions are obtained independently from each other and might not be scaled correctly to jointly
produce the correct labeling. We therefore propose to learn transformations $t_{\sigma,\tau}$ and $t_\tau$ such that
$F_{\sigma,\tau}(\cdot) \approx t_{\sigma,\tau}(\tilde{F}_{\sigma,\tau}(\cdot))$ and $F_\tau(\cdot) \approx t_\tau(\tilde{F}_\tau(\cdot))$. The transformation functions $t : \mathbb{R} \to \mathbb{R}$
are one-dimensional mappings and it seems fully sufficient to use for instance piece-wise linear
functions (PLiFs) $p_{\zeta,\theta}(\cdot) := \langle \psi_\zeta(\cdot), \theta \rangle$ with fixed abscissa boundaries $\zeta$ and $\theta$-parametrized
ordinate values ($\psi_\zeta(\cdot)$ can be appropriately defined). We may define the mapping $\Phi(x, y)$ for our
case as
$$\Phi(x, y) = \left( \begin{array}{l} \left( \displaystyle\sum_{j=1}^{s(y)} [[\sigma_j = \sigma]]\, [[\sigma_{j+1} = \tau]]\; \psi_\zeta\big(\tilde{F}_{\sigma,\tau}(x_{\rho_j .. \rho_{j+1}})\big) \right)_{\sigma,\tau \in \Sigma} \\[2mm] \left( \displaystyle\sum_{j=1}^{s(y)} [[\sigma_{j+1} = \tau]]\; \psi_\zeta\big(\tilde{F}_\tau(x_{\rho_{j+1} \pm \ell})\big) \right)_{\tau \in \Sigma} \end{array} \right) \qquad (7)$$
where we simply replaced the features with PLiF features based on the outcomes of precomputed
predictions. Note that $\Phi(x, y)$ has only $(|\Sigma|^2 + |\Sigma|) \cdot P$ dimensions, where $P$ is the number of
support points used in the PLiFs.
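The PLiF features themselves are simple to write down; the following is a minimal sketch of ours, with an assumed hat-basis construction for $\psi_\zeta$, mapping a raw SVM output to $P$ features whose inner product with $\theta$ linearly interpolates the ordinate values between the fixed boundaries $\zeta$:

```python
import numpy as np

def plif_features(v, boundaries):
    """Hat-basis features psi(v) for a piece-wise linear function (PLiF).

    boundaries: increasing abscissa values zeta_1 < ... < zeta_P
    Returns psi with at most two non-zero entries, so that <psi, theta>
    interpolates the ordinate values theta linearly between the boundaries.
    """
    z = np.asarray(boundaries, dtype=float)
    psi = np.zeros(len(z))
    if v <= z[0]:
        psi[0] = 1.0                   # clamp below the first support point
    elif v >= z[-1]:
        psi[-1] = 1.0                  # clamp above the last support point
    else:
        i = np.searchsorted(z, v) - 1  # interval [z_i, z_{i+1}) containing v
        w = (v - z[i]) / (z[i + 1] - z[i])
        psi[i], psi[i + 1] = 1.0 - w, w
    return psi

# e.g. 30 support points spread over the typical SVM output range (see Sec. 4.3)
boundaries = np.linspace(-5.0, 5.0, 30)
theta = np.zeros(30)                   # ordinate values, learned by the HSM SVM
value = plif_features(0.7, boundaries) @ theta
```

With this construction, learning each PLiF amounts to learning its $P$-dimensional $\theta$, which is exactly what keeps the resulting problem (2) low dimensional.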
If the alphabet $\Sigma$ is reasonably small then the dimensionality is low enough to solve the optimization
problem (2) efficiently in the primal domain. In the next section we will illustrate how to successfully
apply a version of the outlined algorithm to a problem where we have several thousands of relatively
long labeled sequences.
4 Application to Gene Structure Prediction
The problem of gene structure prediction is to segment nucleotide sequences (so-called pre-mRNA
sequences generated by transcription; cf. Figure 1) into exons and introns. In a complex biochemical
process called splicing, the introns are removed from the pre-mRNA sequence to form the mature
mRNA sequence that can be translated into protein. The exon-intron and intron-exon boundaries
are defined by sequence motifs almost always containing the letters GT and AG (cf. Figure 1), respectively. However, these dimers appear very frequently and one needs sophisticated methods to
recognize true splice sites [21, 12, 13].
So far mostly HMM-based methods such as Genscan [5], Snap [8] or ExonHunter [4] have been
applied to this problem and also to the more difficult problem of gene finding. In this work we show
that our newly developed method is applicable to this task and achieves very competitive results.
We call it mSplicer. Figure 2 illustrates the "grammar" that we use for gene structure prediction.
We only require four different states (start, exon-end, exon-start and end) and two different segment
labels (exon & intron). Biologically it makes sense to distinguish between first, internal, last and
single exons, as their typical lengths are quite different. Each of these exon types corresponds to one
transition in the model. States two and three recognize the two types of splice sites and the transition
between these states defines an intron.
For our specific problem we only need signal detectors for segments ending in states two and three.
In the next subsection we outline how we obtain $\tilde{F}_2$ and $\tilde{F}_3$. Additionally we need content sensors
for every possible transition. While the "content" of the different exon segments is essentially the
same, the length of them can vary quite drastically. We therefore decided to use one content sensor
$\tilde{F}_I$ for the intron transition $2 \to 3$ and the same content sensor $\tilde{F}_E$ for all four exon transitions
$1 \to 2$, $1 \to 4$, $3 \to 2$ and $3 \to 4$. However, in order to capture the different length characteristics,
we include
$$\left( \sum_{j=1}^{s(y)} [[\sigma_j = \sigma]]\, [[\sigma_{j+1} = \tau]]\; \bar{\psi}^{\sigma,\tau}(\rho_{j+1} - \rho_j) \right)_{\sigma,\tau \in \Sigma} \qquad (8)$$
in the feature map (7), which amounts to using PLiFs for the lengths of all transitions. Also, note
that we can drop those features in (7) and (8) that correspond to transitions that are not allowed (e.g.
$4 \to 1$; cf. Figure 2).[5]
We have obtained data for training, validation and testing from public sequence databases (see [13]
for details). For the considered genome of C. elegans we have split the data into four different sets:
Set 1 is used for training the splice site signal detectors and the two content sensors; Set 2 is used
for model selection of the latter signal detectors and content sensors and for training the HSM SVM;
Set 3 is used for model selection of the HSM SVM; and Set 4 is used for the final evaluation. These
are large scale datasets with which current Hidden Markov SVMs are unable to deal: the
C. elegans training set used for label-sequence learning contains 1,536 sequences with an average
length of about 2,300 base pairs and about 9 segments per sequence, and the splice site signal detectors
were trained on more than a million examples. In principle it is possible to join Sets 1 & 2; however,
then the predictions of $\tilde{F}_{\sigma,\tau}$ and $\tilde{F}_\tau$ on the sequences used for the HSM SVM are skewed in the
margin area (since the examples are pushed away from the decision boundary on the training set).
We therefore keep the two sets separated.
4.1 Learning the Splice Site Signal Detectors
From the training sequences (Set 1) we extracted sequences of confirmed splice sites (intron start and
end). For intron start sites we used a window of $[-80, +60]$ around the site. For intron end sites we
used $[-60, +80]$. From the training sequences we also extracted non-splice sites, which are within
an exon or intron of the sequence and have AG or GT consensus. We train an SVM [19] with soft
margin using the WD kernel [12]:
$$k(x, x') = \sum_{j=1}^{d} \beta_j \sum_{i=1}^{l-j} [[\, x_{[i, i+j]} = x'_{[i, i+j]} \,]],$$
where $l = 140$ is the length of the sequence, $x_{[a,b]}$ denotes the sub-string of $x$ from position $a$ to (excluding) $b$,
and $\beta_j := d - j + 1$. We used a normalization of the kernel $\tilde{k}(x, x') = \frac{k(x, x')}{\sqrt{k(x, x)\, k(x', x')}}$. This leads
to the two discriminative functions $\tilde{F}_2$ and $\tilde{F}_3$. All model parameters (including the window size)
have been tuned on the validation set (Set 2). SVM training for C. elegans resulted in 79,000 and
61,233 support vectors for detecting intron start and end sites, respectively.
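For illustration, here is a naive Python version of this kernel. It is quadratic in the sequence length; the actual experiments use the optimized implementations of [12], so this is purely a sketch:

```python
def wd_kernel(x, y, d):
    """Weighted degree kernel: counts matching substrings up to length d."""
    assert len(x) == len(y)
    l, k = len(x), 0.0
    for j in range(1, d + 1):
        beta = d - j + 1                   # weight beta_j = d - j + 1
        for i in range(l - j + 1):         # all length-j substring positions
            if x[i:i + j] == y[i:i + j]:
                k += beta
    return k

def wd_kernel_normalized(x, y, d):
    """Cosine normalization k(x,y)/sqrt(k(x,x) k(y,y)) used in the text."""
    return wd_kernel(x, y, d) / (wd_kernel(x, x, d) * wd_kernel(y, y, d)) ** 0.5
```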
[5] We also excluded these transitions during the Viterbi-like algorithm.
Figure 1: The major steps in protein synthesis [10]. A transcript of a gene starts with an exon and may then be interrupted by an intron, followed by another exon, intron and so on until it ends in an exon. In this work we learn the unknown formal mapping from the pre-mRNA to the mRNA.
Figure 2: An elementary state model for unspliced mRNA: The start is either directly followed by the end or by an arbitrary number of donor-acceptor splice site pairs.
4.2 Learning the Exon and Intron Content Sensors
To obtain the exon content sensor we derived a set of exons from the training set. As negative
examples we used sub-sequences of intronic sequences sampled such that both sets of strings have
roughly the same length distribution. We trained SVMs using a variant of the Spectrum kernel [21]
of degree d = 6, where we count 6-mers appearing at least once in both sequences. We applied
the same normalization as in Sec. 4.1 and proceeded analogously for the intron content sensor. The
model parameters have been obtained by tuning them on the validation set.
Note that the resulting content sensors $\tilde{F}_I$ and $\tilde{F}_E$ need to be evaluated several times during the
Viterbi-like algorithm (cf. (6)): one needs to extend segments ending at the same position $i$ to
several different starting points. By re-using the shorter segments' outputs this computation can be
made drastically faster.
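Both the kernel variant and the re-use trick are easy to sketch (our illustration; the real sensor sums weighted kernel values over many support sequences and normalizes, which is omitted here):

```python
def kmer_set(s, k=6):
    """k-mers occurring at least once in s."""
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def spectrum_kernel(x, y, k=6):
    """Variant used above: count of k-mers appearing in both sequences."""
    return len(kmer_set(x, k) & kmer_set(y, k))

def segment_scores(x, end, min_start, support_kmers, k=6):
    """Shared-k-mer counts for all segments x[start:end] ending at `end`.

    Extending a segment one symbol to the left introduces exactly one new
    candidate k-mer, so the shorter segments' results are reused (single
    unweighted support k-mer set, for simplicity).
    """
    score, seen, scores = 0, set(), {}
    for start in range(end - k, min_start - 1, -1):
        kmer = x[start:start + k]
        if kmer not in seen:
            seen.add(kmer)
            if kmer in support_kmers:
                score += 1
        scores[start] = score
    return scores
```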
4.3 Combination
For datasets 2-4 we can precompute all candidate splice sites using the classifiers $\tilde{F}_2$ and $\tilde{F}_3$. We
decided to use PLiFs with $P = 30$ support points and chose the boundaries for $\tilde{F}_2$, $\tilde{F}_3$, $\tilde{F}_E$, and
$\tilde{F}_I$ uniformly between $-5$ and $5$ (the typical range of outputs of our SVMs). For the PLiFs concerned
with the length of segments we chose appropriate boundaries in the range $30$ to $1000$. With all these
definitions the feature map as in (7) and (8) is fully defined. The model has nine PLiFs as parameters,
with a total of 270 parameters.
Finally, we have modified the regularizer for our particular case, which favors smooth PLiFs:
$$P(w) := \sum_{\sigma,\tau \in \Sigma} |w_P^{\sigma,\tau} - w_1^{\sigma,\tau}| + \sum_{\tau \in \Sigma} |w_P^{\tau} - w_1^{\tau}| + \sum_{\sigma \in \Sigma} \sum_{i=1}^{P-1} |w_i^{\sigma,l} - w_{i+1}^{\sigma,l}|,$$
where $w = \big( (w^{\sigma,\tau})_{\sigma,\tau \in \Sigma};\ (w^{\tau})_{\tau \in \Sigma};\ (w^{\sigma,l})_{\sigma \in \Sigma} \big)$ and we constrain the PLiFs for the signal and
content sensors to be monotonically increasing.[6]
Having defined the feature map and the regularizer, we can now apply the HSM SVM algorithm
outlined in Sections 2.4 and 3. Since the feature space is rather low dimensional (270 dimensions),
we can solve the optimization problem in the primal domain even with several thousands of examples,
employing a standard optimizer (we used ILOG CPLEX and column generation), within a reasonable
time.[7]
4.4 Results
To estimate the out-of-sample accuracy, we apply our method to the independent test dataset 4. For
C. elegans we can compare it to ExonHunter[8] on 1177 test sequences. We greatly outperform the
ExonHunter method: our method obtains almost 1/3 of the test error of ExonHunter (cf. Table 1).
Simplifying the problem by only considering sequences between the start and stop codons allows us
to also include SNAP in the comparison on the dataset 4*, a slightly modified version of dataset 4
with 1138 sequences.[9] The results are shown in Table 1. On dataset 4* the best competing method
achieves an error rate of 9.8%, which is more than twice the error rate of our method.
5 Conclusion
We have extended the framework of Hidden Markov SVMs to Hidden Semi-Markov SVMs and
suggested a very efficient two-stage learning algorithm to train an approximation to Hidden Semi-Markov SVMs. Moreover, we have successfully applied our method to large scale gene structure
[6] This implements our intuition that large SVM scores should lead to larger scores for a labeling.
[7] It takes less than one hour to solve the HSM SVM problem with about 1,500 sequences on a single CPU.
Training the content and signal detectors on several hundred thousand examples takes around 5 hours in total.
[8] The method was trained by their authors on the same training data.
[9] In this setup additional biological information about the so-called "open reading frame" is used: as there
was only a version of SNAP available that uses this information, we incorporated this extra knowledge also in
our model (marked *) and also used another version of ExonHunter that also exploits that information in order
to allow a fair comparison.
Method        | error rate | exon Sn | exon Sp | exon nt Sn | exon nt Sp
                           C. elegans Dataset 4
Our Method    |   13.1%    |  96.7%  |  96.8%  |   98.9%    |   97.2%
ExonHunter    |   36.8%    |  89.1%  |  88.4%  |   98.2%    |   97.4%
                           C. elegans Dataset 4*
Our Method*   |    4.8%    |  98.9%  |  99.2%  |   99.2%    |   99.9%
ExonHunter*   |    9.8%    |  97.9%  |  96.6%  |   99.4%    |   98.1%
SNAP*         |   17.4%    |  95.0%  |  93.3%  |   99.0%    |   98.9%
Table 1: Shown are the rates of predicting a wrong gene structure, sensitivity (Sn) and specificity (Sp) on exon
and nucleotide levels (see e.g. [8]) for our method, ExonHunter and SNAP. The methods exploiting additional
biological knowledge have an advantage and are marked with *.
prediction appearing in computational biology, where our method obtains less than half of the
error rate of the best competing HMM-based method. Our predictions are available at Wormbase:
http://www.wormbase.org. Additional data and results are available at the project's website
http://www.fml.mpg.de/raetsch/projects/msplicer.
Acknowledgments We thank K.-R. Müller, B. Schölkopf, E. Georgii, A. Zien, G. Schweikert and
G. Zeller for inspiring discussions. The latter three we also thank for proofreading the manuscript.
Moreover, we thank D. Surendran for naming the piece-wise linear functions PLiF and optimizing
the Viterbi implementation.
References
[1] Y. Altun, T. Hofmann, and A. Smola. Gaussian process classification for segmenting and annotating
sequences. In Proc. ICML 2004, 2004.
[2] Y. Altun, D. McAllester, and M. Belkin. Maximum margin semi-supervised learning for structured variables. In Proc. NIPS 2005, 2006.
[3] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In T. Fawcett,
editor, Proc. 20th Int. Conf. Mach. Learn., pages 3-10, 2003.
[4] B. Brejova, D.G. Brown, M. Li, and T. Vinar. ExonHunter: a comprehensive approach to gene finding.
Bioinformatics, 21(Suppl 1):i57-i65, 2005.
[5] C. Burge and S. Karlin. Prediction of complete gene structures in human genomic DNA. Journal of
Molecular Biology, 268:78-94, 1997.
[6] X. Ge. Segmental Semi-Markov Models and Applications to Sequence Analysis. PhD thesis, University
of California, Irvine, 2002.
[7] J. Janssen and N. Limnios. Semi-Markov Models and Applications. Kluwer Academic, 1999.
[8] I. Korf. Gene finding in novel genomes. BMC Bioinformatics, 5(59), 2004.
[9] D. Kulp, D. Haussler, M.G. Reese, and F.H. Eeckman. A generalized hidden markov model for the
recognition of human genes in DNA. ISMB 1996, pages 134-141, 1996.
[10] B. Lewin. Genes VII. Oxford University Press, New York, 2000.
[11] L.R. Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, February 1989.
[12] G. Rätsch and S. Sonnenburg. Accurate splice site prediction for Caenorhabditis elegans. In B. Schölkopf,
K. Tsuda, and J.-P. Vert, editors, Kernel Methods in Computational Biology. MIT Press, 2004.
[13] G. Rätsch, S. Sonnenburg, J. Srinivasan, H. Witte, K.-R. Müller, R. Sommer, and B. Schölkopf. Improving
the C. elegans genome annotation using machine learning. PLoS Computational Biology, 2007. In press.
[14] S. Sarawagi and W.W. Cohen. Semi-markov conditional random fields for information extraction. In
Proc. NIPS 2004, 2005.
[15] G.D. Stormo and D. Haussler. Optimally parsing a sequence into different classes based on multiple types
of information. In Proc. ISMB 1994, pages 369-375, Menlo Park, CA, 1994. AAAI/MIT Press.
[16] R. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporal abstraction
in reinforcement learning. Artificial Intelligence, 112:181-211, 1999.
[17] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In Proc. NIPS 2003, 16, 2004.
[18] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Large margin methods for structured output
spaces. Journal of Machine Learning Research, 6, September 2005.
[19] V.N. Vapnik. The nature of statistical learning theory. Springer Verlag, New York, 1995.
[20] A. J. Viterbi. Error bounds for convolutional codes and an asymptotically optimal decoding algorithm.
IEEE Trans. Informat. Theory, IT-13:260-269, Apr 1967.
[21] X.H. Zhang, K.A. Heller, I. Hefter, C.S. Leslie, and L.A. Chasin. Sequence information for the splicing
of human pre-mRNA identified by SVM classification. Genome Res, 13(12):2637-50, 2003.
| 2988 |@word proceeded:1 version:4 seems:2 nd:1 mers:1 open:1 closure:1 korf:1 kulp:1 decomposition:1 simplifying:1 contains:1 score:8 tuned:2 outperforms:1 existing:1 current:2 ida:1 wd:1 nt:2 written:3 parsing:1 interrupted:1 partition:1 hofmann:3 drop:1 half:1 fewer:1 website:1 selected:1 intelligence:1 plane:1 positionally:1 detecting:1 org:1 zhang:1 become:1 combine:1 introduce:2 x0:8 expected:1 roughly:1 mpg:3 abscissa:1 frequently:1 behavior:2 codon:1 decomposed:2 cpu:1 window:3 considering:3 increasing:1 project:2 notation:4 moreover:5 kind:2 string:3 kleene:1 developed:1 ag:2 transformation:2 finding:3 temporal:1 every:8 nation:1 act:1 tackle:1 wrong:3 classifier:2 scaled:1 partitioning:1 yn:6 planck:1 appear:3 segmenting:1 before:2 positive:1 zeller:1 sutton:1 mach:1 oxford:1 path:3 might:2 chose:2 twice:1 k:5 hmms:3 range:2 ismb:2 decided:2 acknowledgment:1 testing:1 implement:2 sarawagi:1 j0:4 area:1 significantly:2 acceptor:1 vert:1 pre:5 specificity:1 protein:2 altun:4 sonne:1 selection:2 tsochantaridis:2 www:3 equivalent:1 map:6 optimize:1 mrna:8 straightforward:1 starting:1 independently:3 simplicity:3 assigns:2 haussler:2 programming:1 us:1 associate:1 element:1 expensive:1 particularly:1 recognition:2 persist:2 labeled:3 database:1 observed:2 donor:1 taskar:1 solved:1 capture:1 thousand:5 sonnenburg:3 plo:1 removed:1 intuition:1 complexity:1 dynamic:1 motivate:1 depend:3 rewrite:2 segment:40 solving:6 trained:3 singh:1 exon:23 translated:1 easily:1 k0:2 regularizer:3 alphabet:1 train:4 separated:2 describe:1 effective:1 artificial:1 labeling:12 outcome:1 quite:3 larger:3 solve:7 snap:5 otherwise:4 annotating:1 grammar:1 favor:1 statistic:1 jointly:1 final:1 sequence:53 advantage:2 karlin:1 propose:5 maximal:2 caenorhabditis:2 relevant:1 flexibility:2 olkopf:3 exploiting:1 produce:4 object:2 illustrate:3 depending:1 transcript:1 solves:1 involves:2 correct:3 human:3 mcallester:1 public:1 require:1 generalization:2 preliminary:1 recombined:1 biological:2 elementary:1 extension:5 pl:1 around:3 considered:2 great:1 mapping:6 predict:2 viterbi:7 stormo:1 dimer:1 major:1 achieves:2 vary:1 optimizer:1 proc:6 applicable:1 label:21 largest:2 successfully:2 uller:2 mit:2 sensor:14 always:1 gaussian:1 genomic:1 modified:2 rather:3 derived:2 joachim:1 improvement:1 consistently:1 greatly:1 sense:1 motif:1 abstraction:1 biochemical:1 hidden:20 kc:5 koller:1 fhg:1 labelings:8 germany:2 classification:3 art:1 fairly:1 field:2 once:1 having:1 extraction:1 biology:7 bmc:1 park:1 icml:1 simplify:1 duplicate:1 belkin:1 few:3 composed:1 simultaneously:1 recognize:3 resulted:1 comprehensive:1 replaced:1 argmax:2 cplex:1 evaluation:1 argmaxy:1 primal:2 chain:1 accurate:1 partial:1 necessary:1 nucleotide:2 shorter:1 re:2 tsuda:1 intronic:1 instance:3 column:2 soft:1 markovian:2 measuring:1 leslie:1 kekul:1 hundred:3 recognizing:1 too:1 optimally:1 sensitivity:1 lewin:1 decoding:7 analogously:3 together:1 synthesis:1 precup:1 w1:2 thesis:1 aaai:1 containing:1 conf:1 semimarkov:2 li:1 potential:1 de:4 sec:1 coefficient:2 int:1 reese:1 depends:2 piece:2 kwk:1 start:13 recover:1 competitive:1 annotation:1 accuracy:1 convolutional:1 characteristic:1 efficiently:4 maximized:1 correspond:3 rabiner:1 generalize:1 confirmed:1 detector:8 footnote:1 definition:3 sampled:1 newly:1 dataset:6 stop:1 irvine:1 subsection:1 knowledge:2 dimensionality:1 segmentation:2 organized:1 sophisticated:1 manuscript:1 higher:1 supervised:1 maximally:1 formulation:1 evaluated:2 stage:2 smola:1 until:1 working:1 
nonlinear:1 defines:1 grows:1 contain:1 true:6 multiplier:1 brown:1 hence:3 excluded:1 wp:2 hsm:7 deal:1 during:5 skewed:1 generalized:2 outline:3 complete:1 estr:1 wise:2 novel:2 recently:1 common:2 cohen:1 exponentially:1 million:1 extend:6 organism:2 kluwer:1 interpret:1 kwk2:1 raetsch:3 eeckman:1 tuning:1 fml:2 outlined:3 pm:1 etc:1 add:1 gt:2 base:1 segmental:1 optimizing:1 certain:3 verlag:1 ubingen:1 guestrin:1 additional:4 fortunately:1 determine:2 converge:1 monotonically:1 signal:18 semi:17 zien:1 violate:1 full:1 multiple:1 smooth:1 faster:1 academic:1 long:1 naming:1 molecular:1 prediction:13 involving:1 variant:1 miescher:1 essentially:1 kernel:13 normalization:2 fawcett:1 suppl:1 oren:1 achieved:1 interval:1 georgii:1 crucial:1 appropriately:3 extra:1 sch:3 i57:1 mature:1 elegans:9 spemannstr:1 call:1 extracting:1 intermediate:1 split:1 enough:1 concerned:1 competing:2 identified:1 idea:6 enumerating:1 reuse:1 speech:1 york:2 nine:1 constitute:1 amount:1 inspiring:1 svms:28 dna:2 http:3 generate:1 outperform:1 tutorial:1 estimated:1 correctly:1 per:1 srinivasan:1 gunnar:2 four:3 rewriting:1 asymptotically:1 sum:3 letter:2 almost:2 reasonable:1 splicing:2 schweikert:1 decision:1 informat:1 pushed:1 bound:1 followed:2 distinguish:1 strength:1 constraint:7 constrain:1 argument:1 min:2 proofreading:1 relatively:2 px:1 structured:2 witte:1 combination:1 precompute:1 smaller:1 slightly:1 partitioned:1 wi:2 biologically:1 maxy:1 restricted:1 computationally:3 previously:1 discus:3 slack:1 detectable:1 precomputed:1 count:1 mind:1 ge:1 end:10 available:3 decomposing:1 rewritten:1 apply:4 away:1 appropriate:1 appearing:2 denotes:2 cf:6 include:2 sommer:1 exploit:2 february:1 society:1 already:1 added:1 strategy:2 exhibit:1 september:1 kth:1 separate:1 unable:1 berlin:1 concatenation:1 hmm:4 parametrized:2 thank:3 tuebingen:1 discriminant:1 consensus:1 length:16 code:1 equivalently:1 difficult:1 mostly:1 setup:1 potentially:2 subproblems:1 negative:6 implementation:1 unknown:1 allowing:1 ilog:1 markov:29 datasets:2 finite:1 defining:1 extended:2 excluding:1 incorporated:1 frame:1 rn:1 arbitrary:2 ordinate:1 pair:2 required:1 friedrich:1 california:1 shm:7 hour:2 nip:3 trans:1 suggested:1 usually:1 appeared:1 reading:1 challenge:1 max:8 including:1 power:1 suitable:1 demanding:1 predicting:3 mdps:2 identifies:1 fraunhofer:1 hm:5 extract:1 sn:3 heller:1 determining:1 relative:1 fully:2 generation:2 proven:1 validation:3 degree:1 sufficient:3 principle:1 editor:2 surprisingly:1 last:1 keeping:1 infeasible:1 drastically:2 formal:1 allow:1 characterizing:1 benefit:2 boundary:15 dimension:2 xn:8 transition:10 valid:1 ending:2 genome:4 author:2 made:1 reinforcement:2 far:1 employing:1 obtains:2 cutting:1 transcription:1 gene:14 keep:3 active:1 discriminative:1 spectrum:1 table:4 additionally:1 learn:3 reasonably:1 exonhunter:8 ca:1 nature:1 menlo:1 improving:1 expansion:2 complex:1 domain:2 sp:3 apr:1 linearly:1 whole:1 allowed:3 fair:1 site:14 join:1 sub:3 position:8 burge:1 candidate:1 learns:1 hw:5 splice:10 formula:1 transitioning:1 specific:2 i65:1 intron:17 symbol:2 svm:13 essential:1 janssen:1 false:1 vapnik:1 importance:1 phd:1 illustrates:1 margin:8 easier:1 vii:1 backtracking:2 simply:1 lagrange:1 springer:1 satisfies:1 extracted:2 conditional:2 goal:1 marked:2 content:21 typical:2 uniformly:1 called:5 total:2 atsch:3 internal:1 support:5 latter:2 bioinformatics:2 evaluate:1 tested:1 |
2,190 | 2,989 | Convergence of Laplacian Eigenmaps
Mikhail Belkin
Department of Computer Science
Ohio State University
Columbus, OH 43210
[email protected]
Partha Niyogi
Department of Computer Science
The University of Chicago
Hyde Park, Chicago, IL 60637.
[email protected]
Abstract
Geometrically based methods for various tasks of machine learning have
attracted considerable attention over the last few years. In this paper we
show convergence of eigenvectors of the point cloud Laplacian to the eigenfunctions of the Laplace-Beltrami operator on the underlying manifold, thus
establishing the first convergence results for a spectral dimensionality reduction algorithm in the manifold setting.
1 Introduction
The last several years have seen significant activity in geometrically motivated approaches
to data analysis and machine learning. The unifying premise behind these methods is
the assumption that many types of high-dimensional natural data lie on or near a low-dimensional manifold. Collectively this class of learning algorithms is often referred to as
manifold learning algorithms. Some recent manifold algorithms include Isomap [14] and
Locally Linear Embedding (LLE) [13].
In this paper we provide a theoretical analysis for the Laplacian Eigenmaps introduced in [2],
a framework based on eigenvectors of the graph Laplacian associated to the point-cloud data.
More specifically, we prove that under certain conditions, eigenvectors of the graph Laplacian
converge to eigenfunctions of the Laplace-Beltrami operator on the underlying manifold.
We note that in mathematics the manifold Laplacian is a classical object of differential
geometry with a rich tradition of inquiry. It is one of the key objects associated to a general
differentiable Riemannian manifold. Indeed, several recent manifold learning algorithms are
closely related to the Laplacian. The eigenfunctions of the Laplacian are also eigenfunctions
of heat diffusions, which is the point of view explored by Coifman and colleagues at Yale
University in a series of recent papers on data analysis (e.g., [6]). The Hessian Eigenmaps
approach, which uses eigenfunctions of the Hessian operator for data representation, was
proposed by Donoho and Grimes in [7]. The Laplacian is the trace of the Hessian. Finally, as
observed in [2], the cost function that is minimized to obtain the embedding of LLE is an
approximation to the squared Laplacian.
In the manifold learning setting, the underlying manifold is usually unknown. Therefore
functional maps from the manifold need to be estimated using point cloud data. The common approximation strategy in these methods is to construct an adjacency graph associated
to a point cloud. The underlying intuition is that since the graph is a proxy for the manifold,
inference based on the structure of the graph corresponds to the desired inference based on
the geometric structure of the manifold. Theoretical results to justify this intuition have
been developed over the last few years. Building on recent results on functional convergence
of approximation for the Laplace-Beltrami operator using heat kernels and results on consistency of eigenfunctions for empirical approximations of such operators, we show convergence
of the Laplacian Eigenmaps algorithm. We note that in order to prove convergence of a
spectral method, one needs to demonstrate convergence of the empirical eigenvalues and
eigenfunctions. To our knowledge this is the first complete convergence proof for a spectral
manifold learning method.
1.1 Prior and Related Work
This paper relies on results obtained in [3, 1] for functional convergence of operators. It
turns out, however, that considerably more careful analysis is required to ensure spectral
convergence, which is necessary to guarantee convergence of the corresponding algorithms.
To the best of our knowledge previous results are not sufficient to guarantee convergence
for any spectral method in the manifold setting.
Lafon in [10] generalized pointwise convergence results from [1] to the important case of
an arbitrary probability distribution on the manifold. We also note [4], where a similar
result is shown for the case of a domain in $\mathbb{R}^n$. Those results were further generalized and
presented with an empirical pointwise convergence theorem for the manifold case in [9]. We
observe that the arguments in this paper are likely to allow one to use these results to show
convergence of eigenfunctions for a wide class of probability distributions on the manifold.
Empirical convergence of spectral clustering for a fixed kernel parameter t was analyzed
in [11] and is used in this paper. However the geometric case requires $t \to 0$. The results in
this paper as well as in [3, 1] are for the case of a uniform probability distribution on the
manifold. Recently [8] provided deeper probabilistic analysis in that case.
Finally we point out that while the analogies between the geometry of manifolds and the geometry of graphs are well-known in spectral graph theory and in certain areas of differential
geometry (see, e.g., [5]) the exact nature of that parallel is usually not made precise.
2 Main Result
The main result of this paper is to show convergence of eigenvectors of graph Laplacian
associated to a point cloud dataset to eigenfunctions of the Laplace-Beltrami operator when
the data is sampled from a uniform probability distribution on an embedded manifold.
In what follows we will assume that the manifold $\mathcal{M}$ is a compact infinitely differentiable
Riemannian submanifold of $\mathbb{R}^N$ without boundary. Recall now that the Laplace-Beltrami
operator $\Delta$ on $\mathcal{M}$ is a differential operator $\Delta : C^2 \to L^2$ defined as
$$\Delta f = -\operatorname{div}(\nabla f)$$
where $\nabla f$ is the gradient vector field and $\operatorname{div}$ denotes divergence.
$\Delta$ is a positive semi-definite self-adjoint operator and has a discrete spectrum on a compact
manifold. We will generally denote its $i$th smallest eigenvalue by $\lambda_i$ and the corresponding
eigenfunction by $e_i$. See [12] for a thorough introduction to the subject.
We define the operator $L^t : L^2(\mathcal{M}) \to L^2(\mathcal{M})$ as follows ($\mu$ is the standard measure):
$$L^t(f)(p) = (4\pi t)^{-\frac{k+2}{2}} \left( \int_{\mathcal{M}} e^{-\frac{\|p-q\|^2}{4t}} f(p)\, d\mu_q - \int_{\mathcal{M}} e^{-\frac{\|p-q\|^2}{4t}} f(q)\, d\mu_q \right)$$
If $x_i$ are the data points, the corresponding empirical version is given by
$$\hat{L}^t_n(f)(p) = \frac{(4\pi t)^{-\frac{k+2}{2}}}{n} \left( \sum_i e^{-\frac{\|p-x_i\|^2}{4t}} f(p) - \sum_i e^{-\frac{\|p-x_i\|^2}{4t}} f(x_i) \right)$$
The operator $\hat{L}^t_n$ is (the extension of) the point cloud Laplacian that forms the basis of
the Laplacian Eigenmaps algorithm for manifold learning. It is easy to see that it acts by
matrix multiplication on functions restricted to the point cloud, with the matrix being the
corresponding graph Laplacian. We will assume that the $x_i$ are randomly i.i.d. sampled from
$\mathcal{M}$ according to the uniform distribution.
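On the sample itself the empirical operator is just a matrix; a small NumPy sketch of this graph Laplacian (our illustration, with $t$ and the intrinsic dimension $k$ assumed known):

```python
import numpy as np

def point_cloud_laplacian(X, t, k):
    """Matrix of the empirical operator on the sample points.

    X: (n, N) array of points x_i sampled from the manifold
    t: heat-kernel bandwidth; k: intrinsic dimension of the manifold
    Returns L such that (L f)(x_i) matches the operator applied to f.
    """
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # ||x_i - x_j||^2
    W = np.exp(-sq / (4.0 * t))                          # Gaussian weights
    D = np.diag(W.sum(axis=1))                           # degree matrix
    scale = (4.0 * np.pi * t) ** (-(k + 2) / 2.0) / n
    return scale * (D - W)                               # graph Laplacian

# eigenvectors of L approximate eigenfunctions of the Laplace-Beltrami operator:
# L = point_cloud_laplacian(X, t=0.1, k=1)
# evals, evecs = np.linalg.eigh(L)   # L is symmetric
```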
Our main theorem shows that there is a way to choose a sequence $t_n$, such that the
eigenfunctions of the empirical operators $\hat{L}^{t_n}_n$ converge to the eigenfunctions of the Laplace-Beltrami operator $\Delta$ in probability.

Theorem 2.1 Let $\lambda^t_{n,i}$ be the $i$th eigenvalue of $\hat{L}^t_n$ and $e^t_{n,i}$ be the corresponding eigenfunction (which, for each fixed $i$, will be shown to exist for $t$ sufficiently small). Let $\lambda_i$ and $e_i$
be the corresponding eigenvalue and eigenfunction of $\Delta$ respectively. Then there exists a
sequence $t_n \to 0$, such that
$$\lim_{n \to \infty} \lambda^{t_n}_{n,i} = \lambda_i, \qquad \lim_{n \to \infty} \| e^{t_n}_{n,i}(x) - e_i(x) \|_2 = 0,$$
where the limits are in probability.
3 Overview of the proof
The proof of the main theorem consists of two main parts. One is spectral convergence of
the functional approximation $L^t$ to $\Delta$ as $t \to 0$ and the other is spectral convergence of the
empirical approximation $\hat{L}^t_n$ to $L^t$ as the number of data points $n$ tends to infinity. These
two types of convergence are then put together to obtain the main Theorem 2.1.
Part 1. The more difficult part of the proof is to show convergence of eigenvalues and
eigenfunctions of the functional approximation $L^t$ to those of $\Delta$ as $t \to 0$. To demonstrate
convergence we will take a different functional approximation $\frac{1 - H^t}{t}$ of $\Delta$, where $H^t$ is the
heat operator. While $\frac{1 - H^t}{t}$ does not converge uniformly to $\Delta$, they share an eigenbasis, and
for each fixed $i$ the $i$th eigenvalue of $\frac{1 - H^t}{t}$ converges to the $i$th eigenvalue of $\Delta$. We will then
consider the operator $R^t = \frac{1 - H^t}{t} - L^t$. A careful analysis of this operator, which constitutes
the bulk of the paper, shows that $R^t$ is a small relatively bounded perturbation of $\frac{1 - H^t}{t}$,
in the sense that for any function $f$ we have $\frac{\|R^t f\|_2}{\left\| \frac{1 - H^t}{t} f \right\|_2} \ll 1$ as $t \to 0$. This will imply spectral
convergence and lead to the following

Theorem 3.1 Let $\lambda_i$, $\lambda^t_i$, $e_i$, $e^t_i$ be the $i$th smallest eigenvalues and the corresponding eigenfunctions of $\Delta$ and $L^t$ respectively. Then
$$\lim_{t \to 0} |\lambda_i - \lambda^t_i| = 0, \qquad \lim_{t \to 0} \|e_i - e^t_i\|_2 = 0$$
Part 2. The second part is to show that the eigenfunctions of the empirical operator $\hat{L}^t_n$
converge to eigenfunctions of $L^t$ as $n \to \infty$ in probability. That result follows readily from
the previous work in [11] together with the analysis of the essential spectrum of $L^t$. The
following theorem is obtained:

Theorem 3.2 For a fixed sufficiently small $t$, let $\lambda^t_{n,i}$ and $\lambda^t_i$ be the $i$th eigenvalues of $\hat{L}^t_n$
and $L^t$ respectively. Let $e^t_{n,i}$ and $e^t_i$ be the corresponding eigenfunctions. Then
$$\lim_{n \to \infty} \lambda^t_{n,i} = \lambda^t_i, \qquad \lim_{n \to \infty} \|e^t_{n,i}(x) - e^t_i(x)\|_2 = 0,$$
assuming that $\lambda^t_i \leq \frac{1}{2t}$. The convergence is almost sure.
Observe that this implies convergence for any fixed i as soon as t is sufficiently small.
Symbolically these two theorems can be represented by the top line of the following diagram:
$$\mathrm{Eig}\, \hat{L}^t_n \;\xrightarrow[\text{probabilistic}]{\;n \to \infty\;}\; \mathrm{Eig}\, L^t \;\xrightarrow[\text{deterministic}]{\;t \to 0\;}\; \mathrm{Eig}\, \Delta,$$
with the bottom arrow $\mathrm{Eig}\, \hat{L}^{t_n}_n \to \mathrm{Eig}\, \Delta$ taken along $n \to \infty$, $t_n \to 0$.
After demonstrating the two types of convergence results in the top line of the diagram, a simple
argument shows that a sequence $t_n$ can be chosen to guarantee convergence as in the final
Theorem 2.1 and provides the bottom arrow.
4 Spectral Convergence of Functional Approximations
4.1 Main Objects and the Outline of the Proof
Let $\mathcal{M}$ be a compact smooth $k$-dimensional manifold smoothly embedded in $\mathbb{R}^N$ with the
induced Riemannian structure and the corresponding induced measure $\mu$.
As above, we define the operator $L^t : L^2(\mathcal{M}) \to L^2(\mathcal{M})$ as follows:
$$L^t(f)(x) = (4\pi t)^{-\frac{k+2}{2}} \left( \int_{\mathcal{M}} e^{-\frac{\|x-y\|^2}{4t}} f(x)\, d\mu_y - \int_{\mathcal{M}} e^{-\frac{\|x-y\|^2}{4t}} f(y)\, d\mu_y \right)$$
As shown in previous work, this operator serves as a functional approximation to the
Laplace-Beltrami operator on M. The purpose of this paper is to extend the previous results
to the eigenvalues and eigenfunctions, which turn out to need some careful estimates.
We start by reviewing certain properties of the Laplace-Beltrami operator and its connection
to the heat equation. Recall that the heat equation on the manifold $\mathcal{M}$ is given by
$$\frac{\partial h(x, t)}{\partial t} = -\Delta h(x, t),$$
where $h(x, t)$ is the heat at time $t$ at point $x$. Let $f(x) = h(x, 0)$ be the initial heat
distribution. We observe that, from the definition of the derivative,
$$\Delta f = -\lim_{t \to 0} \frac{1}{t} \left( h(x, t) - f(x) \right).$$
It is well-known (e.g., [12]) that the solution to the heat equation at time $t$ can be written as
$$H^t f(x) := h(x, t) = \int_{\mathcal{M}} H^t(x, y) f(y)\, d\mu_y$$
Here $H^t$ is the heat operator and $H^t(x, y)$ is the heat kernel of $\mathcal{M}$. It is also well-known
that the heat operator $H^t$ can be written as $H^t = e^{-t\Delta}$. We immediately see that $\Delta = \lim_{t \to 0} \frac{1 - H^t}{t}$ and that eigenfunctions of $H^t$, and hence eigenfunctions of $\frac{1 - H^t}{t}$, coincide with
eigenfunctions of the Laplace operator. The $i$th eigenvalue of $\frac{1 - H^t}{t}$ is equal to $\frac{1 - e^{-t\lambda_i}}{t}$,
where $\lambda_i$ as usual is the $i$th eigenvalue of $\Delta$.
It is easy to observe that once the heat kernel $H^t(x, y)$ is known, finding the Laplace operator
poses no difficulty:
$$\Delta f = \lim_{t \to 0} \frac{1}{t} \left( f(x) - \int_{\mathcal{M}} H^t(x, y) f(y)\, d\mu_y \right) = \lim_{t \to 0} \frac{1 - H^t}{t} f \qquad (1)$$
Reconstructing the Laplacian from a point cloud is possible because of the fundamental fact
that the manifold heat kernel $H^t(x, y)$ can be approximated by the ambient space Gaussian,
and hence $L^t$ is an approximation to $\frac{1 - H^t}{t}$ and can be shown to converge, for a fixed $f$, to
$\Delta$. This pointwise operator convergence is discussed in [10, 3, 1].
To obtain convergence of eigenfunctions, however, one typically needs the stronger uniform
convergence. If $A_n$ is a sequence of operators, we say that $A_n \to A$ uniformly in $L^2$ if
$\sup_{\|f\|_2 = 1} \|A_n f - A f\|_2 \to 0$. This is sufficient for convergence of eigenfunctions and other
spectral properties.
It turns out that this type of convergence does not hold for the functional approximation $L^t$
as $t \to 0$, which presents a serious technical obstruction to proving convergence of spectral
properties. To observe that $L^t$ does not converge uniformly to $\Delta$, observe that while $\frac{1 - H^t}{t}$
converges to $\Delta$ for each fixed function $f$, even this convergence is not uniform. Indeed,
for a small $t$, we can always choose a sufficiently large $\lambda_i \gg 1/t$ and the corresponding
eigenfunction $e_i$ of $\Delta$, s.t.
$$\left\| \frac{1 - H^t}{t} e_i - \Delta e_i \right\|_2 = \left| \frac{1}{t}\left(1 - e^{-t\lambda_i}\right) - \lambda_i \right| \approx \lambda_i \gg 1.$$
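This saturation is immediate numerically (a throwaway check of ours, not from the paper): for fixed $t$ the map $\lambda \mapsto (1 - e^{-t\lambda})/t$ is bounded by $1/t$, so its distance to $\lambda$ grows without bound along the spectrum.

```python
import numpy as np

t = 1e-3
lams = np.array([1.0, 10.0, 1.0 / t, 10.0 / t])   # eigenvalues of Delta
approx = (1.0 - np.exp(-t * lams)) / t            # eigenvalues of (1 - H^t)/t
print(np.abs(approx - lams))   # error is of order lambda once lambda >> 1/t
```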
Since $L^t$ is an approximation to $\frac{1 - H^t}{t}$, uniform convergence cannot be expected and the
standard perturbation theory techniques do not apply. To overcome this obstacle we need
the two following key ingredients:
Observation 1. Eigenfunctions of $\frac{1 - H^t}{t}$ coincide with eigenfunctions of $\Delta$.
Observation 2. $L^t$ is a small relatively bounded perturbation of $\frac{1 - H^t}{t}$.
While the first of these observations is immediate, the second is the technical core of this
work. The relative boundedness of the perturbation will imply convergence of eigenfunctions
of $L^t$ to those of $\frac{1 - H^t}{t}$ and hence, by Observation 1, to eigenfunctions of $\Delta$.
We now define the perturbation operator
$$R^t = \frac{1 - H^t}{t} - L^t$$
The relative boundedness of the self-adjoint perturbation operator $R^t$ is formalized as follows:

Theorem 4.1 For any $0 < \delta < \frac{2}{k+2}$ there exists a constant $C$, such that for all $t$ sufficiently
small
$$\frac{|\langle R^t f, f \rangle|}{\left\langle \frac{1 - H^t}{t} f, f \right\rangle} \leq C \max\left( t^{\frac{2}{k+2} - \delta},\; t^{\frac{(k+2)\delta}{4}} \right)$$
In particular
$$\lim_{t \to 0}\ \sup_{\|f\|_2 = 1} \frac{\langle R^t f, f \rangle}{\left\langle \frac{1 - H^t}{t} f, f \right\rangle} = 0$$
and hence $R^t$ is dominated by $\frac{1 - H^t}{t}$ on $L^2$ as $t$ tends to 0.
This result implies that for small values of $t$, bottom eigenvalues and eigenfunctions of $L^t$
are close to those of $\frac{1 - H^t}{t}$, which in turn implies convergence. To establish this result, we
will need two key estimates on the size of the perturbation $R^t$ in two different norms.

Proposition 4.2 Let $f \in L^2$. There exists $C \in \mathbb{R}$, such that for all sufficiently small values
of $t$
$$\|R^t f\|_2 \leq C \|f\|_2$$

Proposition 4.3 Let $f \in H^{\frac{k}{2}+1}$, where $H^{\frac{k}{2}+1}$ is a Sobolev space. Then there is $C \in \mathbb{R}$,
such that for all sufficiently small values of $t$
$$\|R^t f\|_2 \leq C \sqrt{t}\, \|f\|_{H^{\frac{k}{2}+1}}$$

In what follows we give the proof of Theorem 4.1 assuming the two Propositions above.
The proof of the Propositions requires technical estimates of the heat kernel and can be
found in the enclosed longer version of the paper.
4.2 Proof of Theorem 4.1
Lemma 4.4 Let $e$ be an eigenvector of $\Delta$ with the eigenvalue $\lambda$. Then for some universal
constant $C$
$$\|e\|_{H^{\frac{k}{2}+1}} \leq C \lambda^{\frac{k+2}{4}} \qquad (2)$$
The details can be found in the long version. Now we can proceed with the
Proof: [Theorem 4.1]
Let $e_i(x)$ be the $i$th eigenfunction of $\Delta$ and let $\lambda_i$ be the corresponding eigenvalue. Recall
that the $e_i$ form an orthonormal basis of $L^2(\mathcal{M})$. Thus any function $f \in L^2(\mathcal{M})$ can be written
uniquely as $f(x) = \sum_{i=0}^{\infty} a_i e_i(x)$ where $\sum a_i^2 < \infty$. For technical reasons we will assume that
all our functions are perpendicular to the constant and the lowest eigenvalue is nonzero.
Recall also that
$$H^t f = \exp(-t\Delta) f, \qquad H^t e_i = \exp(-t\lambda_i)\, e_i, \qquad \frac{1 - H^t}{t}\, e_i = \frac{1 - e^{-\lambda_i t}}{t}\, e_i. \qquad (3)$$
Now let us fix $t$ and consider the function $\phi(x) = \frac{1 - e^{-xt}}{t}$ for positive $x$, with $\phi(0) = 0$. It is easy to check
that $\phi$ is a concave and increasing function of $x$.
Put $x_0 = 1/\sqrt{t}$. We have:
$$\phi(x_0) = \frac{1 - e^{-\sqrt{t}}}{t}, \qquad \frac{\phi(x_0)}{x_0} = \frac{1 - e^{-\sqrt{t}}}{\sqrt{t}}.$$
Splitting the positive real line into the two intervals $[0, x_0]$, $[x_0, \infty)$ and using concavity and
monotonicity we observe that
$$\phi(x) \geq \min\left( \frac{1 - e^{-\sqrt{t}}}{\sqrt{t}}\, x,\; \frac{1 - e^{-\sqrt{t}}}{t} \right)$$
Note that $\lim_{t \to 0} \frac{1 - e^{-\sqrt{t}}}{\sqrt{t}} = 1$. Therefore for $t$ sufficiently small
$$\phi(x) \geq \min\left( \frac{1}{2} x,\; \frac{1}{2\sqrt{t}} \right)$$
Thus
$$\left\langle \frac{1 - H^t}{t} e_i, e_i \right\rangle = \frac{1 - e^{-\lambda_i t}}{t} \geq \frac{1}{2} \min\left( \lambda_i,\; \frac{1}{\sqrt{t}} \right) \qquad (4)$$
Now take $f \in L^2$, $f(x) = \sum_{i=1}^{\infty} a_i e_i(x)$. Without loss of generality we can assume that
$\|f\|_2 = 1$. Taking $\lambda > 0$, we split $f$ as a sum of $f_1$ and $f_2$ as follows:
$$f_1 = \sum_{\lambda_i \leq \lambda} a_i e_i, \qquad f_2 = \sum_{\lambda_i > \lambda} a_i e_i$$
It is clear that $f = f_1 + f_2$ and, since $f_1$ and $f_2$ are orthogonal, $\|f\|_2^2 = \|f_1\|_2^2 + \|f_2\|_2^2$. We
will now deal separately with $f_1$ and with $f_2$.
From the inequality (4) above, we observe that
$$\left\langle \frac{1 - H^t}{t} f, f \right\rangle \geq \frac{1}{2} \lambda_1$$
On the other hand, from the inequality (2), we see that if $e_i$ is a basis element present in
the basis expansion of $f_1$,
$$\|e_i\|_{H^{\frac{k}{2}+1}} \leq C \lambda^{\frac{k+2}{4}}$$
Since $\Delta$ acts by rescaling basis elements, we have $\|f_1\|_{H^{\frac{k}{2}+1}} \leq C \lambda^{\frac{k+2}{4}}$.
Therefore by Proposition 4.3, for $t$ sufficiently small and some constant $C'$,
$$\|R^t f_1\|_2 \leq C' \sqrt{t}\, \lambda^{\frac{k+2}{4}} \qquad (5)$$
Hence we see that
$$\frac{\|R^t f_1\|_2}{\left\langle \frac{1 - H^t}{t} f, f \right\rangle} \leq \frac{2 C'}{\lambda_1} \sqrt{t}\, \lambda^{\frac{k+2}{4}} \qquad (6)$$
Consider now the second summand $f_2$. Recalling that $f_2$ only has basis components with
eigenvalues greater than $\lambda$ and using the inequality (4), we see that
$$\left\langle \frac{1 - H^t}{t} f, f \right\rangle \geq \left\langle \frac{1 - H^t}{t} f_2, f_2 \right\rangle \geq \frac{1}{2} \min\left( \lambda,\; \frac{1}{\sqrt{t}} \right) \|f_2\|_2^2 \qquad (7)$$
On the other hand, by Proposition 4.2,
$$\|R^t f_2\|_2 \leq C_1 \|f_2\|_2 \qquad (8)$$
Thus
$$\frac{|\langle R^t f_2, f_2 \rangle|}{\left\langle \frac{1 - H^t}{t} f, f \right\rangle} \leq \frac{\|R^t f_2\|_2\, \|f_2\|_2}{\left\langle \frac{1 - H^t}{t} f_2, f_2 \right\rangle} \leq C_1' \max\left( \frac{1}{\lambda},\; \sqrt{t} \right) \qquad (9)$$
Finally, collecting inequalities (6) and (9), we see:
$$\frac{|\langle R^t f, f \rangle|}{\left\langle \frac{1 - H^t}{t} f, f \right\rangle} \leq \frac{\|R^t f_1\| + \|R^t f_2\|}{\left\langle \frac{1 - H^t}{t} f, f \right\rangle} \leq C \max\left( \frac{1}{\lambda},\; \sqrt{t} + \sqrt{t}\, \lambda^{\frac{k+2}{4}} \right) \qquad (10)$$
where $C$ is a constant independent of $t$ and $\lambda$.
Choosing $\lambda = t^{-\frac{2}{k+2} + \delta}$ where $0 < \delta < \frac{2}{k+2}$ yields the desired result.

5 Spectral Convergence of Empirical Approximation
Proposition 5.1 For $t$ sufficiently small
$$\mathrm{Spec}_{\mathrm{Ess}}(L^t) \subset \left[ \tfrac{1}{2} t^{-1},\ \infty \right)$$
where $\mathrm{Spec}_{\mathrm{Ess}}$ denotes the essential spectrum of the operator.
Proof: As noted before, $L^t f$ is a difference of a multiplication operator and a compact
operator:
$$L^t f(p) = g(p) f(p) - K f \qquad (11)$$
where
$$g(p) = (4\pi t)^{-\frac{k+2}{2}} \int_{\mathcal{M}} e^{-\frac{\|p-q\|^2}{4t}}\, d\mu_q$$
and $K f$ is a convolution with a Gaussian. As noted in [11], it is a fact of basic perturbation
theory that $\mathrm{Spec}_{\mathrm{Ess}}(L^t) = \mathrm{rg}\, g$, where $\mathrm{rg}\, g$ is the range of the function $g : \mathcal{M} \to \mathbb{R}$. To estimate
$\mathrm{rg}\, g$ observe first that
$$\lim_{t \to 0}\ (4\pi t)^{-\frac{k}{2}} \int_{\mathcal{M}} e^{-\frac{\|p-q\|^2}{4t}}\, d\mu_q = 1.$$
We thus see that for $t$ sufficiently small
$$(4\pi t)^{-\frac{k}{2}} \int_{\mathcal{M}} e^{-\frac{\|p-y\|^2}{4t}}\, d\mu_y > \frac{1}{2}$$
and hence $g(p) > \frac{1}{2} t^{-1}$.

Lemma 5.2 Let $e^t$ be an eigenfunction of $L^t$, $L^t e^t = \lambda^t e^t$, $\lambda^t < \frac{1}{2} t^{-1}$. Then $e^t \in C^\infty$.
We see that Theorem 3.2 follows easily:
Proof: [Theorem 3.2] By Proposition 5.1 we see that the part of the spectrum of $L^t$
between $0$ and $\frac{1}{2} t^{-1}$ is discrete. It is a standard fact of functional analysis that such points
are eigenvalues and there are corresponding eigenspaces of finite dimension. Consider now
$\lambda^t_i \in [0, \frac{1}{2} t^{-1}]$ and the corresponding eigenfunction $e^t_i$. The theorem then follows from
Theorem 23 and Proposition 25 in [11], which show convergence of spectral properties for
the empirical operators.
6 Main Theorem
We are finally in position to prove the main Theorem 2.1. Proof: [Theorem 2.1] From
Theorems 3.2 and 3.1 we obtain the following convergence results:
$$\mathrm{Eig}\, \hat{L}^t_n \;\xrightarrow{\;n \to \infty\;}\; \mathrm{Eig}\, L^t \;\xrightarrow{\;t \to 0\;}\; \mathrm{Eig}\, \Delta$$
where the first convergence is almost sure for $\lambda_i \leq \frac{1}{2} t^{-1}$. Given any $i \in \mathbb{N}$ and any $\epsilon > 0$,
we can choose $t^* < (2\lambda_i)^{-1}$, s.t. for all $t < t^*$ we have $\|e_i - e^t_i\|_2 < \frac{\epsilon}{2}$. On the other hand, by
using the first arrow, we see that
$$\lim_{n \to \infty} P\left\{ \|e^t_{n,i} - e^t_i\|_2 \geq \frac{\epsilon}{2} \right\} = 0.$$
Thus for any $p > 0$ and for each $t$ there exists an $N$, s.t. $P\{\|e^t_{n,i} - e_i\|_2 > \epsilon\} < p$. Inverting
this relationship, we see that for any $N$ and for any probability $p(N)$ there exists a $t_N$, s.t.
$$\forall\, n > N: \quad P\{\|e^{t_N}_{n,i} - e_i\|_2 > \epsilon\} < p(N).$$
Making $p(N)$ tend to zero, we obtain convergence in probability.
References
[1] M. Belkin, Problems of Learning on Manifolds, Univ. of Chicago, Ph.D. Diss., 2003.
[2] M. Belkin, P. Niyogi, Laplacian Eigenmaps and Spectral Techniques for Embedding and
Clustering, NIPS 2001.
[3] M. Belkin, P. Niyogi, Towards a Theoretical Foundation for Laplacian-Based Manifold
Methods, COLT 2005.
[4] O. Bousquet, O. Chapelle, M. Hein, Measure Based Regularization, NIPS 2003.
[5] F. R. K. Chung. (1997). Spectral Graph Theory. Regional Conference Series in Mathematics, number 92.
[6] R.R.Coifman, S. Lafon, A. Lee, M. Maggioni, B. Nadler, F. Warner and S. Zucker,
Geometric diffusions as a tool for harmonic analysis and structure definition of data,
submitted to the Proceedings of the National Academy of Sciences (2004).
[7] D. L. Donoho, C. E. Grimes, Hessian Eigenmaps: new locally linear embedding techniques for high-dimensional data, PNAS, vol. 100 pp. 5591-5596.
[8] E. Gine, V. Kolchinski, Empirical Graph Laplacian Approximation of Laplace-Beltrami
Operators: Large Sample Results, preprint.
[9] M. Hein, J.-Y. Audibert, U. von Luxburg, From Graphs to Manifolds - Weak and
Strong Pointwise Consistency of Graph Laplacians, COLT 2005.
[10] S. Lafon, Diffusion Maps and Geodesic Harmonics, Ph.D.Thesis, Yale University, 2004.
[11] U. von Luxburg, M. Belkin, O. Bousquet, Consistency of Spectral Clustering, Max
Planck Institute for Biological Cybernetics Technical Report TR 134, 2004.
[12] S. Rosenberg, The Laplacian on a Riemannian Manifold, Cambridge Univ. Press, 1997.
[13] Sam T. Roweis, Lawrence K. Saul. (2000). Nonlinear Dimensionality Reduction by
Locally Linear Embedding, Science, vol 290.
[14] J.B.Tenenbaum, V. de Silva, J. C. Langford. (2000). A Global Geometric Framework
for Nonlinear Dimensionality Reduction, Science, Vol 290.
2,191 | 299 | Development and Spatial Structure of Cortical
Feature Maps: A Model Study
K. Obermayer
Beckman-Institute
University of Illinois
Urbana, IL 61801
H. Ritter
Technische Fakultät
Universität Bielefeld
D-4800 Bielefeld
K. Schulten
Beckman-Institute
University of Illinois
Urbana, IL 61801
Abstract
Feature selective cells in the primary visual cortex of several species are organized in hierarchical topographic maps of stimulus features like "position
in visual space", "orientation" and" ocular dominance". In order to understand and describe their spatial structure and their development, we investigate a self-organizing neural network model based on the feature map
algorithm. The model explains map formation as a dimension-reducing
mapping from a high-dimensional feature space onto a two-dimensional
lattice, such that "similarity" between features (or feature combinations)
is translated into "spatial proximity" between the corresponding feature
selective cells. The model is able to reproduce several aspects of the spatial
structure of cortical maps in the visual cortex.
1 Introduction
Cortical maps are functionally defined structures of the cortex, which are characterized by an ordered spatial distribution of functionally specialized cells along the
cortical surface. In the primary visual area(s) the response properties of these cells
must be described by several independent features, and there is a strong tendency to
map combinations of these features onto the cortical surface in a way that translates
"similarity" into "spatial proximity" of the corresponding feature selective cells (see
e.g. [1-6]). A neighborhood preserving mapping between a high-dimensional feature space and the two dimensional cortical surface, however, cannot be achieved, so
the spatial structure of these maps is a compromise, preserving some neighborhood
relations at the expense of others.
The compromise realized in the primary visual area(s) is a hierarchical representation of features. The variation of the secondary features "preferred orientation",
"orientation specifity" and "ocular dominance" is highly repetitive across the primary map of retinal location, giving rise to a large number of small maps, each
containing a complete representation of the full range of the seconda.ry features. If
the neighborhood relations in feature space are to be preserved and maps must be
continuous, the spatial distributions of the secondary features "orientation preference", "orientation specifity" and "ocular dominance" can no longer be independent.
Interestingly, there is experimental evidence in the macaque that this is the case,
namely, that regions with smooth change in one feature (e.g. "ocular dominance")
correlate with regions of rapid change in another feature (e.g. "orientation") [7,8].
Preliminary results [9] indicate that these correlations may be a natural consequence
of a dimension reducing mapping which preserves neighborhood relations.
In a previous study, we investigated a model for the joint formation of a retinotopic projection and an orientation column system (10], which is based on the
self-organizing feature map algorithm [11,12]. This algorithm generates a representation of a given manifold in feature space on a neural network with prespecified
topology (in our case a two-dimensional sheet), such that the mapping is continuous,
smooth and neighborhood relations are preserved to a large extent.! The model
has the advantage that its rules can be derived from biologically plausible developmental principles [15,16]. Therefore, it can be interpreted not only as a pattern
model, which generates a representation of feature combinations subject to a set of
constraints, but also as a pattern formation model, which describes an input driven
developmental process. In this contribution we will extend our previous work by the
addition of another secondary feature, "ocular dominance" and we will concentrate
on the hierarchical mapping of feature combinations as a function of the set of input
patterns.
2 Description of the Model

In our model the cortical surface is divided into $N \times N$ small patches, units $\vec r$, which are arranged on a two-dimensional lattice (network layer) with periodic boundary
conditions (to avoid edge effects). The functional properties of neurons located in
each patch are characterized by a feature vector $\vec w_r$, which is associated with each unit $\vec r$ and whose components $(\vec w_r)_k$ are interpreted as receptive field properties of these neurons. The feature vectors $\vec w_r$, as a function of unit locations $\vec r$, describe the spatial distribution of feature-selective cells over the cortical layer, i.e. the cortical map.
To generate a representation of features along the network layer, we use the self-organizing feature map algorithm [11,12]. This algorithm follows an iterative procedure. At each step an input vector $\vec v$, which is of the same dimensionality as $\vec w_r$, is chosen at random according to a probability distribution $P(\vec v)$. Then the unit $\vec s$, whose feature vector $\vec w_s$ is closest to the input pattern $\vec v$, is selected, and the components $(\vec w_r)_k$ of its feature vector are changed according to the feature map learning rule:¹

$$\vec w_r(t+1) = \vec w_r(t) + \epsilon\, h(\vec r, \vec s, t)\, \big(\vec v - \vec w_r(t)\big), \qquad (1)$$

where $h(\vec r, \vec s, t)$, the neighborhood function, is given by:

$$h(\vec r, \vec s, t) = \exp\!\left( -\frac{(r_1 - s_1)^2}{2\sigma_{h1}^2} - \frac{(r_2 - s_2)^2}{2\sigma_{h2}^2} \right). \qquad (2)$$

¹ For other modelling approaches along these lines see [13,14].
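As a concrete illustration, the following NumPy sketch performs one iteration of rules (1)-(2) on an N x N lattice. The code and its names (`som_step`, `eps`, `sigma_h`) are our own; an isotropic Gaussian neighborhood is used and the model's periodic boundary conditions are omitted for brevity.

    import numpy as np

    def som_step(W, v, eps=0.02, sigma_h=5.0):
        """One step of the feature map learning rule (1)-(2).
        W: (N, N, F) array of feature vectors w_r on an N x N lattice.
        v: input vector of length F, drawn from P(v)."""
        N = W.shape[0]
        # winner s: the unit whose feature vector is closest to v
        d2 = ((W - v) ** 2).sum(axis=2)
        s = np.unravel_index(np.argmin(d2), d2.shape)
        # Gaussian neighborhood h(r, s, t) on the lattice (isotropic case)
        r1, r2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        h = np.exp(-((r1 - s[0]) ** 2 + (r2 - s[1]) ** 2) / (2 * sigma_h ** 2))
        # learning rule (1): move every unit toward v, weighted by h
        W += eps * h[:, :, None] * (v - W)
        return W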
3 Coding of Receptive Field Properties
In the following we describe the receptive field properties by the feature vector $\vec w_r$ given by $\vec w_r = (x_r,\, y_r,\, q_r\cos(2\phi_r),\, q_r\sin(2\phi_r),\, z_r)$, where $(x_r, y_r)$ denotes the position of the receptive field centers in visual space, $\phi_r$ the preferred orientation, and $q_r$, $z_r$ two quantities which qualitatively can be interpreted as orientation specificity (see e.g. [17]) and ocular dominance (see e.g. [18]). If $q_r$ is zero, then the units are unspecific for orientation; the larger $q_r$ becomes, the sharper the units are tuned. "Binocular" units are characterized by $z_r = 0$, "monocular" units by a large positive or negative value of $z_r$. "Similarity" between receptive field properties is then given by the Euclidean distance between the corresponding feature vectors.

The components of the input vector $\vec v = (x,\, y,\, q\cos(2\phi),\, q\sin(2\phi),\, z)$ describe stimulus features which should be represented by the cells in the cortical map. They denote position in the visual field $(x, y)$, orientation $\phi$, and two quantities $q$ and $z$ qualitatively describing pattern eccentricity and the distribution of activity between both eyes, respectively. Round stimuli are characterized by $q = 0$, and the more elliptic a pattern is, the larger is the value of $q$. A "binocular" stimulus is characterized by $z = 0$, while a "monocular" stimulus is characterized by a large positive or negative value of $z$ for "right eye" or "left eye" preferred, respectively.

Input vectors were chosen with equal probability from the manifold

$$V = \{\, \vec v = (x,\, y,\, q_{pat}\cos(2\phi),\, q_{pat}\sin(2\phi),\, z) \;:\; x, y \in [0, d],\ \phi \in [0, \pi),\ z = \pm z_{pat} \,\}, \qquad (3)$$

i.e. all feature combinations characterized by a fixed value of $q$ and $|z|$ were selected equally often. If the model is interpreted from a developmental point of view, the manifold $V$ describes properties of (subcortical) activity patterns which drive map formation. The quantities $d$, $q_{pat}$ and $z_{pat}$ determine the feature combinations to be represented by the map. As we will see below, their values crucially influence the spatial structure of the feature map.
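A minimal sampler for such input vectors, assuming the parametrization of the manifold V reconstructed in eq. (3) above; the code and its default values (taken from Fig. 1) are our own sketch, not the authors'.

    import numpy as np

    def sample_input(d=256.0, q_pat=12.0, z_pat=12.0, rng=np.random):
        """Draw v = (x, y, q cos 2phi, q sin 2phi, z) from the manifold V:
        positions uniform in [0, d]^2, orientation uniform in [0, pi),
        eccentricity fixed at q_pat, ocularity z = +/- z_pat."""
        x, y = rng.uniform(0.0, d, size=2)
        phi = rng.uniform(0.0, np.pi)
        z = z_pat * rng.choice([-1.0, 1.0])
        return np.array([x, y, q_pat * np.cos(2 * phi),
                         q_pat * np.sin(2 * phi), z])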
4 Hierarchical Maps
If $q_{pat}$ and $z_{pat}$ are smaller than a certain threshold then "orientation preference", "orientation selectivity" and "ocular dominance" are not represented in the map (i.e. $q_r = z_r = 0$) but fluctuate around a stationary state of eq. (1), which corresponds to a perfect topographic representation of visual space. In this parameter regime, the requirement of a continuous dimension-reducing map leads to the suppression of the additional features "orientation" and "ocular dominance".
Let us consider an ensemble of networks, each characterized by a set $\{\vec w_r\}$ of feature vectors, and denote the time-dependent distribution function of this ensemble by
$S(\{\vec w_r\}, t)$. Following a method derived in [19], we can describe the time-development of $S(\{\vec w_r\}, t)$ near the stationary state by the Fokker-Planck equation

$$\partial_t S(\{\vec u_r\}, t) = -\epsilon \sum_{pm,qn} \frac{\partial}{\partial u_{pm}}\, B_{pm\,qn}\, u_{qn}\, S(\{\vec u_r\}, t) \;+\; \frac{\epsilon^2}{2} \sum_{pm,qn} D_{pm\,qn}\, \frac{\partial^2}{\partial u_{pm}\,\partial u_{qn}}\, S(\{\vec u_r\}, t), \qquad (4)$$

where the origin of $S(\cdot, t)$ was shifted to the stationary state $\{\bar{\vec w}_r\}$, using now the new argument variable $\vec u_r = \vec w_r - \bar{\vec w}_r$. The eigenvalues of $B$ determine the stability of the stationary state, the topographic representation, while $B$ and $D$ together govern size and time development of fluctuations $\langle u_{pi}\, u_{qj} \rangle$.
Let us define the Fourier modes $\vec u(\vec k)$ of the equilibrium deviations $\vec u_r$ by $\vec u(\vec k) = \frac{1}{N}\sum_r e^{i\vec k\vec r}\, \vec u_r$. For small values of $q_{pat}$ and $z_{pat}$ the eigenvalues of $B$ are all negative, hence the topographic stationary state is stable. If $q_{pat}$ and $z_{pat}$ are larger
than²

$$q_{thres} = \sqrt{\tfrac{\pi}{2}}\,\min(\sigma_{h1}, \sigma_{h2}), \qquad z_{thres} = \sqrt{\tfrac{\pi e}{2}}\,\min(\sigma_{h1}, \sigma_{h2}), \qquad (5)$$

however, the eigenvalues corresponding to the set of modes $\vec u(\vec k)$ which are perpendicular to the $(x, y)$-plane and whose wave-vectors $\vec k$ are given by

$$|\vec k| = \sqrt{2}/\sigma_{h1}, \qquad \sigma_{h1} < \sigma_{h2}, \qquad (6)$$

become positive. For larger values of $q_{pat}$ and $z_{pat}$, then, the topographic state becomes unstable and a "column system" forms.
For an isotropic neighborhood function ($\sigma_{h1} = \sigma_{h2} = \sigma_h$), the matrices $B(\vec k)$ and $D(\vec k)$ can be diagonalized simultaneously and the mean square amplitudes of the fluctuations around the stationary state can be given in explicit form:
< U 2II (k....) > -
2
2 d 2 (0"~k2/4+ 1/12)exp(-0"~k2/4)
0" - ~':""'-~-;::---::-':--:--'---"-'--:~:--:-'----'2 h N2 exp(0"~k2/4) - 1 + 0"~k2/2
g
7r -
....
< u.L(k) >=
2 ....
2 d 2 exp( -0"~k2 /4)
7r 240"h N2 exp(0"~k2/4)-1
g
2....
g
exp(-0"~k2/4)
2 2
< u y l(k) >=< u y2 (k) >= 7r40"hqPatexp(0"~k2/4) _
2 ....
g
2 2
(N2q;atk2)/(2d2)
exp( -0"~k2 /4)
_ (N2q;atk2)/d2
< uz(k) >= 7r20"hZpatexp(0"~k2/4)
(7)
(8)
(9)
(10)
² In the derivation of the following formulas several approximations have to be made. A comparison with numerical simulations, however, demonstrates that these approximations are valid except if the value $q_{pat}$ or $z_{pat}$ is within a few percent of $q_{thres}$ or $z_{thres}$, respectively. Details of these calculations will be published elsewhere.
Figure 1: "Orientation preference" (a, left), "ocular dominance" (h, center) and
locations of receptive field centers (c, right) as a function of unit loaction. Figure
la displays an enlarged section of the "orientation map" only. Parameters of the
simulation were: N = 256, d = 256, qpat = 12, Zpat = 12, Uh = 5, e = 0.02
where ulI' U.L denote the amplitude of fluctuations parallel and orthogonal to k
in the (x, y)-plane, Uyl' U y 2 parallel to the orientation feature dimension and U z
parallel to the ocular dominance feature dimension, respectively.
Thus, for qpat ~ qthres or Zpat ~ Zthres the mean square amplitudes of fluctuations
diverge for the modes which become unstable at the threshold (the denominator of
eqs. (9,10) approaches zero) and the relaxation time of these fluctuations goes to
infinity (not shown). The fact that either a ring or two groups of modes become
unstable is reflected in the spatial structure of the maps above threshold.
For larger values of $q_{pat}$ and $z_{pat}$ orientation and ocular dominance are represented by the network layer, i.e. feature values fluctuate around a stationary state which is characterized by a certain distribution of feature-selective cells. Figure 1 displays orientation preference $\phi_r$ (Fig. 1a), ocular dominance $z_r$ (Fig. 1b) and the locations $(x_r, y_r)$ of receptive field centers in visual space (Fig. 1c) as a function of unit location $\vec r$. Each pixel of the images in Figs. 1a,b corresponds to a network unit $\vec r$. Feature values are indicated by gray values: black → white corresponds to an angle of 0° → 180° (Fig. 1a) and to an ocular dominance value ranging from minimum to maximum (Fig. 1b). White dots in Fig. 1a mark regions where units still completely unspecific for orientation are located ("foci"). In Fig. 1c the receptive field center of every unit is marked by a dot. The centers of units which are neighbors in the network layer
were connected by lines, which gives rise to the net-like structure.
The overall preservation of the lattice topology, and the absence of any larger discontinuities in Fig. 1c, demonstrate that "position" plays the role of the primary
stimulus variable and varies in a topographic fashion across the network layer. On
a smaller length scale, however, numerous distortions are visible which are caused
by the representation of the other features, "orientation" and "ocular dominance".
The variation of these secondary features is highly repetitive and patterns strongly resembling orientation columns (Fig. 1a) and ocular dominance stripes (Fig. 1b)
have formed. Note that regions unspecific for orientation as well as "binocular"
regions exist in the final map, although these feature combinations were not present
in the set of input patterns (3). They are correlated with regions of high magnitude
of the "orientation" and "ocular dominance"-gradients, respectively (not shown).
These structures are a consequence of the neighborhood preserving and dimension
Figure 2: Two-dimensional Fourier spectra of the "orientation" (a, left) and "ocular dominance" (b, center) coordinates for the map shown in Fig. 1. c, right: Autocorrelation function of the feature coordinate $w_{r3}$ for the map shown in Fig. 1.
reducing mapping; they do not result from the requirement of representing this
particular set of feature combinations.³
Figure 2a,b shows the two-dimensional Fourier spectra $\tilde w_{\vec k,\,occ} = \sum_{\vec r} e^{i\vec k\vec r}\, z_{\vec r}$ and $\tilde w_{\vec k,\,ori} = \sum_{\vec r} e^{i\vec k\vec r}\, q_{\vec r}\big(\cos(2\phi_{\vec r}) + i\sin(2\phi_{\vec r})\big)$ for the "ocular dominance" (Fig. 2b) and "orientation" (Fig. 2a) coordinates, respectively. Each pixel corresponds to a single mode $\vec k$ and its brightness indicates the mean square amplitude $|\tilde w_{\vec k}|^2$ of the mode $\vec k$. For an isotropic neighborhood function the orientation map is characterized by wave vectors from a ring-shaped region in the Fourier domain (Fig. 2a), which becomes eccentric with increasing $\sigma_{h1}/\sigma_{h2}$ (not shown) until the ring dissolves into two separate groups of modes. The phases (not shown) seem to be random, but we cannot exclude correlations completely. Figure 2c shows the autocorrelation function $S_{33}(\vec s) = \langle w_{(\vec r - \vec s)3}\, w_{\vec r 3} \rangle$ as a function of the distance between cells in the network layer. The origin of the $\vec s$-plane is located in the center of the image
and the brightness indicates a positive (white), zero (medium gray) or negative (black) value of $S_{33}$. The autocorrelation functions have a Mexican-hat form. The (negative) minimum is located at half the wavelength $\lambda$ associated with the wave number $|\vec k|$ of the modes with high amplitude in Fig. 2a. At this distance the response properties of the units are anticorrelated to some extent. If cells are separated by a distance larger than $\lambda$, the response properties are uncorrelated.
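The Fourier and autocorrelation analysis of Fig. 2 can be reproduced along the following lines. This NumPy sketch is our own (it assumes circular boundary conditions, which the FFT imposes, and mean-subtracts before correlating):

    import numpy as np

    def spectrum_and_autocorr(q, phi, z):
        """Spectra and autocorrelation for N x N arrays q, phi (orientation)
        and z (ocular dominance), as in Fig. 2."""
        w_ori = q * np.exp(2j * phi)          # q_r (cos 2phi_r + i sin 2phi_r)
        spec_ori = np.abs(np.fft.fftshift(np.fft.fft2(w_ori))) ** 2
        spec_occ = np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2
        # autocorrelation S_33 of w_r3 = q_r cos(2 phi_r) via Wiener-Khinchin
        w3 = q * np.cos(2 * phi)
        w3 = w3 - w3.mean()
        S33 = np.real(np.fft.ifft2(np.abs(np.fft.fft2(w3)) ** 2)) / w3.size
        return spec_ori, spec_occ, np.fft.fftshift(S33)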
If $q_{pat}$ and $z_{pat}$ are large enough, the feature hierarchy observed in Figs. 1,2 breaks down and "preferred orientation" or "ocular dominance" plays the role of the primary stimulus variable. Figure 3 displays orientation preference $\phi_r$ (Fig. 3a) and ocular dominance $z_r$ (Fig. 3b) as a function of unit location $\vec r$. There is only one continuous region for each interval of "preferred orientation" and for each eye, but each of these regions now contains a representation of a large part of visual space. Consequently the position map shows multiple representations of visual space.
Hierarchical maps are generated by the feature map algorithm whenever there is a
hierarchy in the variances of the set of patterns along the various feature dimensions
³ In the cortex, however, cells unspecific for orientation seem to be important for visual processing. To improve the description of the spatial structure of cortical maps, it is necessary to include these feature combinations into the set V' of input patterns (see [9]).
Figure 3: "Orientation preference" ( a, left) and "ocular dominance" (h, center) as
a function of unit loaction for a map generated using a large value of qpat and Zpat.
Parameters were: N = 128, d = 128, qpat = 2500, Zpat = 2500, Uh = 5, c; = 0.1
(in our example a hierarchy in the magnitudes of $d$, $q_{pat}$ and $z_{pat}$). The features
with the largest variance become the primary feature; the other features become
secondary features, which are represented multiple times on the network layer.
Acknowledgements
The authors would like to thank the Boehringer-Ingelheim Fonds for financial support by a scholarship to K. O. This research has been supported by the National
Science Foundation (grant number 9017051). Computer time on the Connection
Machine CM-2 has been made available by the National Center for Supercomputer
Applications at Urbana-Champaign and the Pittsburgh Supercomputing Center
both supported by the National Science Foundation.
References
[1] Hubel D.H. and Wiesel T.N. (1974), J. Comp. Neurol. 158, 267-294
[2] Blasdel G.G. and Salama G. (1986), Nature 321, 579-585
[3] Grinvald A. et al. (1986), Nature 324, 361-364
[4] Swindale N.V. et al. (1987), J. Neurosci. 7, 1414-1427
[5] Lowel S. et al. (1987), J. Comp. Neurol. 255, 401-415
[6] Ts'o D.Y. et al., Science 249, 417-420
[7] Livingstone M.S. and Hubel D.H. (1984), J. Neurosci. 4, 309-356
[8] Blasdel G.G. (1991), in preparation
[9] Obermayer K. et al. (1991), Proc. of the ICANN-91, Helsinki, submitted
[10] Obermayer K. et al. (1990), Proc. Natl. Acad. Sci. USA 87, 8345-8349
[11] Kohonen T. (1982a), Biol. Cybern. 43, 59-69
[12] Kohonen T. (1982b), Biol. Cybern. 44, 135-140
[13] Nelson M.E. and Bower J.M. (1990), TINS 13, 401-406
[14] Durbin R. and Mitchison G. (1990), Nature 343, 644-647
[15] von der Malsburg C. (1973), Kybernetik 14, 85-100
[16] Kohonen T. (1983), Self-Organization and Associative Memory, Springer-Verlag, New York
[17] Swindale N.V. (1982), Proc. R. Soc. Lond. B 215, 211-230
[18] Goodhill G.J. and Willshaw D.J. (1990), Network 1, 41-59
[19] Ritter H. and Schulten K. (1989), Biol. Cybern. 60, 59-71
2,192 | 2,990 | Sample complexity of policy search with known
dynamics
Peter L. Bartlett
Divison of Computer Science and Department of Statistics
University of California, Berkeley
Berkeley, CA 94720-1776
[email protected]
Ambuj Tewari
Division of Computer Science
University of California, Berkeley
Berkeley, CA 94720-1776
[email protected]
Abstract
We consider methods that try to find a good policy for a Markov decision process
by choosing one from a given class. The policy is chosen based on its empirical
performance in simulations. We are interested in conditions on the complexity
of the policy class that ensure the success of such simulation based policy search
methods. We show that under bounds on the amount of computation involved
in computing policies, transition dynamics and rewards, uniform convergence of
empirical estimates to true value functions occurs. Previously, such results were
derived by assuming boundedness of pseudodimension and Lipschitz continuity.
These assumptions and ours are both stronger than the usual combinatorial complexity measures. We show, via minimax inequalities, that this is essential: boundedness of pseudodimension or fat-shattering dimension alone is not sufficient.
1 Introduction
A Markov Decision Process (MDP) models a situation in which an agent interacts (by performing
actions and receiving rewards) with an environment whose dynamics is Markovian, i.e. the future is
independent of the past given the current state of the environment. Except for toy problems with a
few states, computing an optimal policy for an MDP is usually out of the question. Some relaxations
need to be done if our aim is to develop tractable methods for achieving near optimal performance.
One possibility is to avoid considering all possible policies by restricting oneself to a smaller class
? of policies. Given a simulator for the environment, we try to pick the best policy from ?. The
hope is that if the policy class is appropriately chosen, the best policy in ? would not be too much
worse than the true optimal policy.
Use of simulators introduces an additional issue: how is one to be sure that performance of policies
in the class ? on a few simulations is indicative of their true performance? This is reminiscent of
the situation in statistical learning. There the aim is to learn a concept and one restricts attention
to a hypotheses class which may or may not contain the ?true? concept. The sample complexity
question then is: how many labeled examples are needed in order to be confident that error rates on
the training set are close to the true error rates of the hypotheses in our class? The answer turns out to
depend on ?complexity? of the hypothesis class as measured by combinatorial quantities associated
with the class such as the VC dimension, the pseudodimension and the fat-shattering dimension.
Some progress [6,7] has already been made to obtain uniform bounds on the difference between
value functions and their empirical estimates, where the value function of a policy is the expected
long term reward starting from a certain state and following the policy thereafter. We continue this
line of work by further investigating what properties of the policy class determine the rate of uniform
convergence of value function estimates. The key difference between the usual statistical learning
setting and ours is that we not only have to consider the complexity of the class ? but also of the
classes derived from ? by composing the functions in ? with themselves and with the state evolution
process implied by the simulator.
Ng and Jordan [7] used a finite pseudodimension condition along with Lipschitz continuity to derive
uniform bounds. The Lipschitz condition was used to control the covering numbers of the iterated
function classes. We provide a uniform convergence result (Theorem 1) under the assumption that
policies are parameterized by a finite number of parameters and that the computations involved
in computing the policy, the single-step simulation function and the reward function all require a
bounded number of arithmetic operations on real numbers. The number of samples required grows
linearly with the dimension of the parameter space but is independent of the dimension of the state
space. Ng and Jordan's and our assumptions are both stronger than just assuming finiteness of some
combinatorial dimension. We show that this is unavoidable by constructing two examples where the
fat-shattering dimension and the pseudodimension respectively are bounded, yet no simulation based
method succeeds in estimating the true values of policies well. This happens because iteratively
composing a function class with itself can quickly destroy finiteness of combinatorial dimensions.
Additional assumptions are therefore needed to ensure that these iterates continue to have bounded
combinatorial dimensions.
Although we restrict ourselves to MDPs for ease of exposition, the analysis in this paper carries over
easily to the case of partially obervable MDPs (POMDPs), provided the simulator also simulates the
conditional distribution of observations given state using a bounded amount of computation. The
plan of the rest of the paper is as follows. We set up notation and terminology in Section 2. In
the same section, we describe the model of computation over reals that we use. Section 3 proves
Theorem 1, which gives a sample complexity bound for achieving a desired level of performance
within the policy class. In Section 4, we give two examples of policy classes whose combinatorial
dimensions are bounded. Nevertheless, we can prove strong minimax lower bounds implying that
no method of choosing a policy based on empirical estimates can do well for these examples.
2 Preliminaries
We define an MDP M as a tuple $(S, D, A, P(\cdot|s,a), r, \gamma)$ where S is the state space, D the initial state distribution, A the action space, $P(s'|s,a)$ gives the probability of moving to state $s'$ upon taking action a in state s, r is a function mapping states to distributions over rewards (which are assumed to lie in a bounded interval [0, R]), and $\gamma \in (0, 1)$ is a factor that discounts future rewards. In this paper, we assume that the state space S and the action space A are finite dimensional Euclidean spaces of dimensionality $d_S$ and $d_A$ respectively.

A (randomized) policy π is a mapping from S to distributions over A. Each policy π induces a natural Markov chain on the state space of the MDP, namely the one obtained by starting in a start state $s_0$ sampled from D and $s_{t+1}$ sampled according to $P(\cdot|s_t, a_t)$ with $a_t$ drawn from $\pi(s_t)$ for $t \geq 0$. Let $r_t(\pi)$ be the expected reward at time step t in this Markov chain, i.e. $r_t(\pi) = \mathbb{E}[\rho_t]$ where $\rho_t$ is drawn from the distribution $r(s_t)$. Note that the expectation is over the randomness in the choice of the initial state, the state transitions, and the randomized policy and reward outcomes. Define the value $V_M(\pi)$ of the policy by

$$V_M(\pi) = \sum_{t=0}^{\infty} \gamma^t r_t(\pi).$$

We omit the subscript M in the value function if the MDP in question is unambiguously identified. For a class Π of policies, define

$$\mathrm{opt}(M, \Pi) = \sup_{\pi \in \Pi} V_M(\pi).$$

The regret of a policy $\pi'$ relative to an MDP M and a policy class Π is defined as

$$\mathrm{Reg}_{M,\Pi}(\pi') = \mathrm{opt}(M, \Pi) - V_M(\pi').$$
We use a degree bounded version of the Blum-Shub-Smale [3] model of computation over reals. At
each time step, we can perform one of the four arithmetic operations +, −, ×, / or can branch based
on a comparison (say <). While Blum et al. allow an arbitrary fixed rational map to be computed in
one time step, we further require that the degree of any of the polynomials appearing at computation
nodes be at most 1.
Definition 1. Let k, l, m, τ be positive integers, f a function from $\mathbb{R}^k$ to probability distributions over $\mathbb{R}^l$ and Ω a probability distribution over $\mathbb{R}^m$. The function f is (Ω, τ)-computable if there exists a degree bounded finite dimensional machine M over $\mathbb{R}$ with input space $\mathbb{R}^{k+m}$ and output space $\mathbb{R}^l$ such that the following hold.

1. For every $x \in \mathbb{R}^k$ and $\omega \in \mathbb{R}^m$, the machine halts with halting time $T_M(x, \omega) \leq \tau$.

2. For every $x \in \mathbb{R}^k$, if $\omega \in \mathbb{R}^m$ is distributed according to Ω the input-output map $\Phi_M(x, \omega)$ is distributed as f(x).

Informally, the definition states that given access to an oracle which generates samples from Ω, we can generate samples from f(x) by doing a bounded amount of computation. For precise definitions of the input-output map and halting time, we refer the reader to [3, Chap. 2].
In Section 3, we assume that the policy class Π is parameterized by a finite dimensional parameter $\theta \in \mathbb{R}^d$. In this setting $\pi(s; \theta)$, $P(\cdot|s,a)$ and $r(s)$ are distributions over $\mathbb{R}^{d_A}$, $\mathbb{R}^{d_S}$ and [0, R] respectively. The following assumption states that all these maps are computable within τ time steps in our model of computation.

Assumption A. There exists a probability distribution Ω over $\mathbb{R}^m$ and a positive integer τ such that $\pi(s; \theta)$, $P(\cdot|s,a)$ and $r(s)$ are (Ω, τ)-computable. Let $M_\pi$, $M_P$ and $M_r$ respectively be the machines that compute them.

This assumption will be satisfied if we have three "programs" that make a call to a random number generator for distribution Ω, do a fixed number of floating-point operations and simulate the policies in our class, the state-transition dynamics and the rewards respectively. The following two examples illustrate this for the state-transition dynamics.
• Linear Dynamical System with Additive Noise¹
Suppose P and Q are $d_S \times d_S$ and $d_S \times d_A$ matrices and the system dynamics is given by

$$s_{t+1} = P s_t + Q a_t + \nu_t, \qquad (1)$$

where the $\nu_t$ are i.i.d. from some distribution Ω. Since computing (1) takes $2(d_S^2 + d_S d_A + d_S)$ operations, $P(\cdot|s,a)$ is (Ω, τ)-computable for $\tau = O(d_S(d_S + d_A))$.

• Discrete States and Actions
Suppose $S = \{1, 2, \ldots, n_S\}$ and $A = \{1, 2, \ldots, n_A\}$. For some fixed s, a, $P(\cdot|s,a)$ is described by $n_S$ numbers $\vec p_{s,a} = (p_1, \ldots, p_{n_S})$, $\sum_i p_i = 1$. Let $P_k = \sum_{i=1}^k p_i$. For $\omega \in (0, 1]$, set $f(\omega) = \min\{k : P_k \geq \omega\}$. Thus, if ω has uniform distribution on (0, 1], then $f(\omega) = k$ with probability $p_k$. Since the $P_k$'s are non-decreasing, $f(\omega)$ can be computed in $\log n_S$ steps using binary search. But this was for a fixed s, a pair. Finding which $\vec p_{s,a}$ to use further takes $\log(n_S n_A)$ steps using binary search. So if Ω denotes the uniform distribution on (0, 1] then $P(\cdot|s,a)$ is (Ω, τ)-computable for $\tau = O(\log n_S + \log n_A)$; a code sketch of this sampling scheme follows below.
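The following is our own sketch of that sampling scheme, standing in for the machine computing P(·|s, a); states are 0-indexed here rather than 1-indexed as above.

    import bisect
    import numpy as np

    def make_sampler(P):
        """P[s][a] is the probability vector p_{s,a}. Precompute the
        cumulative sums P_k so that each draw costs O(log n_S) via
        binary search (inverse-CDF sampling)."""
        cum = {(s, a): np.cumsum(p) for s, row in enumerate(P)
               for a, p in enumerate(row)}
        def sample(s, a, omega):
            # omega ~ Uniform(0, 1]; return min{k : P_k >= omega}
            return bisect.bisect_left(cum[(s, a)], omega)
        return sample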
For a small ε, let H be the horizon time, i.e. ignoring rewards beyond time H does not affect the value of any policy by more than ε. To obtain sample rewards, given initial state $s_0$ and policy $\pi_\theta = \pi(\cdot; \theta)$, we first compute the trajectory $s_0, \ldots, s_H$ sampled from the Markov chain induced by $\pi_\theta$. This requires H "calls" each to $M_\pi$ and $M_P$. A further H + 1 calls to $M_r$ are then required to generate the rewards $\rho_0$ through $\rho_H$. These calls require a total of 3H + 1 samples from Ω. The empirical estimates are computed as follows. Suppose, for $1 \leq i \leq n$, $(s_0^{(i)}, \vec\omega_i)$ are i.i.d. samples generated from the joint distribution $D \times \Omega^{3H+1}$. Define the empirical estimate of the value of the policy π by

$$\hat V^H_M(\pi_\theta) = \frac{1}{n} \sum_{i=1}^{n} \sum_{t=0}^{H} \gamma^t \rho_t\big(s_0^{(i)}, \theta, \vec\omega_i\big).$$
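In code, this estimator amounts to averaging truncated rollouts. The sketch below is ours; `policy`, `step` and `reward` stand in for the machines M_π, M_P and M_r, each consuming one fresh random sample per call.

    import numpy as np

    def empirical_value(theta, policy, step, reward, D_sample,
                        n, H, gamma, rng=np.random):
        """Monte Carlo estimate V-hat^H of V(pi_theta) from n rollouts
        of horizon H (the final transition is computed but unused)."""
        total = 0.0
        for _ in range(n):
            s = D_sample(rng)              # s_0^(i) ~ D
            disc = 1.0                     # running gamma^t
            for t in range(H + 1):
                total += disc * reward(s, rng.random())
                a = policy(theta, s, rng.random())
                s = step(s, a, rng.random())
                disc *= gamma
        return total / n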
We omit the subscript M in $\hat V$ when it is clear from the context. Define an $\epsilon'$-approximate maximizer of $\hat V$ to be a policy $\pi'$ such that

$$\hat V^H_M(\pi') \geq \sup_{\pi \in \Pi} \hat V^H_M(\pi) - \epsilon'.$$
¹ In this case, the realizable dynamics (mapping from state to next state for a given policy class) is not uniformly Lipschitz if policies allow unbounded actions. So previously known bounds [7] are not applicable even in this simple setting.
Finally, we mention the definitions of three standard combinatorial dimensions. Let $\mathcal{X}$ be some space and consider classes G and F of $\{-1, +1\}$ and real valued functions on $\mathcal{X}$, respectively. Fix a finite set $X = \{x_1, \ldots, x_n\} \subseteq \mathcal{X}$. We say that G shatters X if for all bit vectors $\vec b \in \{0, 1\}^n$ there exists $g \in G$ such that for all i, $b_i = 0 \Rightarrow g(x_i) = -1$, $b_i = 1 \Rightarrow g(x_i) = +1$. We say that F shatters X if there exists $\vec r \in \mathbb{R}^n$ such that, for all bit vectors $\vec b \in \{0, 1\}^n$, there exists $f \in F$ such that for all i, $b_i = 0 \Rightarrow f(x_i) < r_i$, $b_i = 1 \Rightarrow f(x_i) \geq r_i$. We say that F ε-shatters X if there exists $\vec r \in \mathbb{R}^n$ such that, for all bit vectors $\vec b \in \{0, 1\}^n$, there exists $f \in F$ such that for all i, $b_i = 0 \Rightarrow f(x_i) \leq r_i - \epsilon$, $b_i = 1 \Rightarrow f(x_i) \geq r_i + \epsilon$. We then have the following definitions,

$$\mathrm{VCdim}(G) = \max\{|X| : G \text{ shatters } X\},$$
$$\mathrm{Pdim}(F) = \max\{|X| : F \text{ shatters } X\},$$
$$\mathrm{fat}_F(\epsilon) = \max\{|X| : F \text{ ε-shatters } X\}.$$
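For finite classes these definitions can be checked by brute force. The following toy sketch (our own, exponential in |X|) tests whether a class shatters a point set with a given witness vector r; eps = 0 gives pseudo-shattering, eps > 0 gives ε-shattering.

    from itertools import product

    def shatters(F, X, r, eps=0.0):
        # Does some f in F realize every sign pattern b on the points X,
        # relative to witness levels r?
        def realizes(f, b):
            return all(
                (f(x) >= ri + eps) if bi
                else (f(x) < ri if eps == 0 else f(x) <= ri - eps)
                for x, ri, bi in zip(X, r, b))
        return all(any(realizes(f, b) for f in F)
                   for b in product([0, 1], repeat=len(X)))

    # e.g. the two constant functions pseudo-shatter a single point:
    # shatters([lambda x: 0.0, lambda x: 1.0], [0.5], [0.5])  ->  True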
3 Regret Bound for Parametric Policy Classes Computable in Bounded Time
Theorem 1. Fix an MDP M, a policy class $\Pi = \{s \mapsto \pi(s; \theta) : \theta \in \mathbb{R}^d\}$, and an $\epsilon > 0$. Suppose Assumption A holds. Then

$$n > O\!\left( \frac{R^2 H d \tau}{(1-\gamma)^2 \epsilon^2}\, \log \frac{R}{(1-\gamma)\epsilon} \right)$$

ensures that $\mathbb{E}\,\mathrm{Reg}_{M,\Pi}(\pi_n) \leq 3\epsilon + \epsilon'$, where $\pi_n$ is an $\epsilon'$-approximate maximizer of $\hat V$ and $H = \log_{1/\gamma}\big(2R/(\epsilon(1-\gamma))\big)$ is the $\epsilon/2$ horizon time.
Proof. The proof consists of three steps: (1) Assumption A is used to get bounds on pseudodimension; (2) The pseudodimension bound is used to prove uniform convergence of empirical estimates
to true value functions; (3) Uniform convergence and the definition of $\epsilon'$-approximate maximizer
gives the bound on expected regret.
STEP 1. Given initial state $s_0$, parameter θ and random numbers $\omega_1$ through $\omega_{3H+1}$, we first compute the trajectory as follows. Recall that $\Phi_M$ refers to the input-output map of a machine M.

$$s_t = \Phi_{M_P}\big(s_{t-1},\, \Phi_{M_\pi}(\theta, s_{t-1}, \omega_{2t-1}),\, \omega_{2t}\big), \quad 1 \leq t \leq H. \qquad (2)$$

The rewards are then computed by

$$\rho_t = \Phi_{M_r}(s_t, \omega_{2H+t+1}), \quad 0 \leq t \leq H. \qquad (3)$$

The H-step discounted reward sum is computed as

$$\sum_{t=0}^{H} \gamma^t \rho_t = \rho_0 + \gamma\big(\rho_1 + \gamma(\rho_2 + \ldots (\rho_{H-1} + \gamma \rho_H) \ldots)\big). \qquad (4)$$

Define the function class $R = \{(s_0, \vec\omega) \mapsto \sum_{t=0}^H \gamma^t \rho_t(s_0, \theta, \vec\omega) : \theta \in \mathbb{R}^d\}$, where we have explicitly shown the dependence of $\rho_t$ on $s_0$, θ and $\vec\omega$. Let us count the number of arithmetic operations needed to compute a function in this class. Using Assumption A, we see that steps (2) and (3) require no more than $2\tau H$ and $\tau(H+1)$ operations respectively. Step (4) requires H multiplications and H additions. This gives a total of $2\tau H + \tau(H+1) + 2H \leq 6\tau H$ operations. Goldberg and Jerrum [4] showed that the VC dimension of a function class can be bounded in terms of an upper bound on the number of arithmetic operations it takes to compute the functions in the class. Since the pseudodimension of R can be written as

$$\mathrm{Pdim}(R) = \mathrm{VCdim}\big\{(s_0, \vec\omega, c) \mapsto \mathrm{sign}\big(f(s_0, \vec\omega) - c\big) : f \in R,\, c \in \mathbb{R}\big\},$$

we get the following bound by [2, Thm. 8.4],

$$\mathrm{Pdim}(R) \leq 4d(6\tau H + 3). \qquad (5)$$
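To get a feel for eq. (5) with some purely illustrative (assumed) numbers: a policy class with $d = 10$ parameters, machines taking $\tau = 100$ operations per call, and horizon $H = 50$ give $\mathrm{Pdim}(R) \leq 4 \cdot 10 \cdot (6 \cdot 100 \cdot 50 + 3) = 40 \cdot 30003 = 1\,200\,120$ — linear in the parameter dimension d and in the horizon H, which is what drives the sample complexity in Theorem 1.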
STEP 2. Let $V^H(\pi) = \sum_{t=0}^H \gamma^t r_t(\pi)$. For the choice of H stated in the theorem, we have for all π, $|V^H(\pi) - V(\pi)| \leq \epsilon/2$. Therefore,

$$P^n\big(\exists \pi \in \Pi : |\hat V^H(\pi) - V(\pi)| > \epsilon\big) \leq P^n\big(\exists \pi \in \Pi : |\hat V^H(\pi) - V^H(\pi)| > \epsilon/2\big). \qquad (6)$$

Functions in R are positive and bounded above by $R' = R/(1-\gamma)$. There are well-known bounds for deviations of empirical estimates from true expectations for bounded function classes in terms of the pseudodimension of the class (see, for example, Theorems 3 and 5 in [5]; also see Pollard's book [8]). Using a weak form of these results, we get for any $\alpha > 0$

$$P^n\big(\exists \pi \in \Pi : |\hat V^H(\pi) - V^H(\pi)| > \alpha\big) \leq 8 \left(\frac{32 e R'}{\alpha}\right)^{2\,\mathrm{Pdim}(R)} e^{-\alpha^2 n/(64 R'^2)}.$$

In order to ensure that $P^n(\exists \pi \in \Pi : |\hat V^H(\pi) - V^H(\pi)| > \epsilon/2) < \delta$, we need

$$8 \left(\frac{64 e R'}{\epsilon}\right)^{2\,\mathrm{Pdim}(R)} e^{-\epsilon^2 n/(256 R'^2)} < \delta.$$

Using the bound (5) on Pdim(R), we get that

$$P^n\Big( \sup_{\pi \in \Pi} \big|\hat V^H(\pi) - V(\pi)\big| > \epsilon \Big) < \delta, \qquad (7)$$

provided

$$n > \frac{256 R^2}{(1-\gamma)^2 \epsilon^2} \left( \log\frac{8}{\delta} + 8d(6\tau H + 3) \log\frac{64 e R}{(1-\gamma)\epsilon} \right).$$
STEP 3. We now show that (7) implies $\mathbb{E}\,\mathrm{Reg}_{M,\Pi}(\pi_n) \leq R\delta/(1-\gamma) + (2\epsilon + \epsilon')$. The theorem then immediately follows by setting $\delta = \epsilon(1-\gamma)/R$.

Suppose that for all $\pi \in \Pi$, $|\hat V^H(\pi) - V(\pi)| \leq \epsilon$. This implies that for all $\pi \in \Pi$, $V(\pi) \leq \hat V^H(\pi) + \epsilon$. Since $\pi_n$ is an $\epsilon'$-approximate maximizer of $\hat V$, we have for all $\pi \in \Pi$, $\hat V^H(\pi) \leq \hat V^H(\pi_n) + \epsilon'$. Thus, for all $\pi \in \Pi$, $V(\pi) \leq \hat V^H(\pi_n) + \epsilon + \epsilon'$. Taking the supremum over $\pi \in \Pi$ and using the fact that $\hat V^H(\pi_n) \leq V(\pi_n) + \epsilon$, we get $\sup_{\pi \in \Pi} V(\pi) \leq V(\pi_n) + 2\epsilon + \epsilon'$, which is equivalent to $\mathrm{Reg}_{M,\Pi}(\pi_n) \leq 2\epsilon + \epsilon'$. Thus, if (7) holds then we have

$$P^n\big(\mathrm{Reg}_{M,\Pi}(\pi_n) > 2\epsilon + \epsilon'\big) < \delta.$$

Denoting the event $\{\mathrm{Reg}_{M,\Pi}(\pi_n) > 2\epsilon + \epsilon'\}$ by E, we have

$$\mathbb{E}\,\mathrm{Reg}_{M,\Pi}(\pi_n) = \mathbb{E}\big[\mathrm{Reg}_{M,\Pi}(\pi_n)\, 1_E\big] + \mathbb{E}\big[\mathrm{Reg}_{M,\Pi}(\pi_n)\, 1_{\neg E}\big] \leq R\delta/(1-\gamma) + (2\epsilon + \epsilon'),$$

where we used the fact that regret is bounded above by $R/(1-\gamma)$.
4 Two Policy Classes Having Bounded Combinatorial Dimensions
We will describe two policy classes for which we can prove that there are strong limitations on the
performance of any method (of choosing a policy out of a policy class) that has access only to empirically observed rewards. Somewhat surprisingly, one can show this for policy classes which are
"simple" in the sense that standard combinatorial dimensions of these classes are bounded. This
shows that sufficient conditions for the success of simulation based policy search (such as the assumptions in [7] and in our Theorem 1) have to be necessarily stronger than boundedness of standard
combinatorial dimensions.
The first example is a policy class $F_1$ for which $\mathrm{fat}_{F_1}(\epsilon) < \infty$ for all $\epsilon > 0$. The second example is a class $F_2$ for which $\mathrm{Pdim}(F_2) = 1$. Since finiteness of pseudodimension is a stronger condition,
the second example makes our point more forcefully than the first one. However, the first example
is considerably less contrived than the second one.
Example 1
Let $M_D = (S, D, A, P(\cdot|s,a), r, \gamma)$ be an MDP where $S = [-1, +1]$, D = some distribution on $[-1, +1]$, $A = [-2, +2]$,

$$P(s'|s, a) = \begin{cases} 1 & \text{if } s' = \max(-1, \min(s + a, 1)) \\ 0 & \text{otherwise,} \end{cases}$$
[Figure 1 shows a plot of $f_T(x)$ against $x \in [-1, 1]$.]

Figure 1: Plot of the function $f_T$ with $T = \{0.2, 0.3, 0.6, 0.8\}$. Note that, for $x > 0$, $f_T(x)$ is 0 iff $x \in T$. Also, $f_T(x)$ satisfies the Lipschitz condition (with constant 1) everywhere except at 0.
r = the deterministic reward that maps s to s, and γ = some fixed discount factor in (0, 1).

For a function $f : [-1, +1] \to [-1, +1]$, let $\pi_f$ denote the (deterministic) policy which takes action $f(s) - s$ in state s. Given a class F of functions, we define an associated policy class $\Pi_F = \{\pi_f : f \in F\}$.
We now describe a specific function class $F_1$. Fix $\epsilon_1 > 0$. Let T be an arbitrary finite subset of (0, 1). Let $\Delta(x) = (1 - |x|)_+$ be the "triangular spike" function. Let

$$f_T(x) = \begin{cases} -1 & -1 \leq x < 0 \\ 0 & x = 0 \\ \dfrac{\epsilon_1}{|T|}\left( \max_{y \in T} \Delta\!\left( \dfrac{x - y}{\epsilon_1/|T|} \right) - 1 \right) & 0 < x \leq 1 \end{cases}$$
There is a spike at each point in T and the tips of the spikes just touch the X-axis (see Figure 1).
Since −1 and 0 are fixed points of $f_T(x)$, it is straightforward to verify that

$$f_T^2(x) = \begin{cases} -1 & -1 \leq x < 0 \\ 0 & x = 0 \\ 1_{(x \in T)} - 1 & 0 < x \leq 1 \end{cases} \qquad (8)$$

Also, $f_T^n = f_T^2$ for all $n > 2$. Define $F_1 = \{f_T : T \subseteq (\epsilon_1, 1), |T| < \infty\}$. By construction,
functions in $F_1$ have bounded total variation and so $\mathrm{fat}_{F_1}(\epsilon)$ is $O(1/\epsilon)$ (see, for example, [2, Chap.
11]). Moreover, fT (x) satisfies the Lipschitz condition everywhere (with constant L = 1) except at
0. This is striking in the sense that the loss of the Lipschitz property at a single point allows us to
prove the following lower bound.
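A direct implementation of f_T, under our reading of the definition above (baseline −ε₁/|T| on (0, 1] with unit-slope spikes of width ε₁/|T| whose tips touch zero exactly on T); the code and test values are our own.

    def f_T(x, T, eps1):
        # -1 on [-1, 0), 0 at 0; on (0, 1] the spiked baseline
        w = eps1 / len(T)            # spike half-width = spike height
        if x < 0:
            return -1.0
        if x == 0:
            return 0.0
        spike = max(max(0.0, 1.0 - abs((x - y) / w)) for y in T)
        return w * (spike - 1.0)

    T, e1 = (0.2, 0.3, 0.6, 0.8), 0.1
    assert f_T(f_T(0.3, T, e1), T, e1) == 0.0    # x in T: f_T^2(x) = 1 - 1 = 0
    assert f_T(f_T(0.5, T, e1), T, e1) == -1.0   # x not in T: f_T^2(x) = -1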
Theorem 2. Let $g_n$ range over functions from $S^n$ to $F_1$. Let D range over probability distributions on S. Then,

$$\inf_{g_n} \sup_{D}\, \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\Big[ \mathrm{Reg}_{M_D, \Pi_{F_1}}\big(\pi_{g_n(s_1,\ldots,s_n)}\big) \Big] \geq \frac{\gamma^2}{1-\gamma} - 2\epsilon_1.$$
1??
This says that for any method that maps random initial states s1 , . . . , sn to a policy in ?F1 , there
is an initial state distribution such that the expected regret of the selected policy is at least ? 2 /(1 ?
?) ? 21 . This is in sharp contrast to Theorem 1 where we could reduce, by using sufficiently
many samples, the expected regret down to any positive number given the ability to maximize the
empirical estimates V? .
Let us see how maximization of empirical estimates behaves in this case. Since $\mathrm{fat}_{F_1}(\epsilon) < \infty$ for all $\epsilon > 0$, the law of large numbers holds uniformly [1, Thm. 2.5] over the class $F_1$. The transitions, policies and rewards here are all deterministic. The reward function is just the identity. This means that the 1-step reward function family is just $F_1$. So the estimates of 1-step rewards are still uniformly concentrated around their expected values. Since the contribution of rewards from time step 2 onwards can be no more than $\gamma^2 + \gamma^3 + \ldots = \gamma^2/(1-\gamma)$, we can claim that the expected regret of the $\hat V$ maximizer $\pi_n$ behaves like

$$\mathbb{E}\Big[\mathrm{Reg}_{M,\Pi_{F_1}}(\pi_n)\Big] \leq \frac{\gamma^2}{1-\gamma} + e_n,$$

where $e_n \to 0$. Thus the bound in Theorem 2 above is essentially tight.
Before we prove Theorem 2, we need the following lemma whose proof is given in the appendix
accompanying the paper.
Lemma 1. Fix an interval (a, b) and let $\mathcal{T}$ be the set of all its finite subsets. Let $g_n$ range over functions from $(a, b)^n$ to $\mathcal{T}$. Let D range over probability distributions on (a, b). Then,

$$\inf_{g_n} \sup_{D} \left[ \sup_{T \in \mathcal{T}} \mathbb{E}_{X \sim D}\, 1_{(X \in T)} - \mathbb{E}_{(X_1,\ldots,X_n) \sim D^n}\, \mathbb{E}_{X \sim D}\, 1_{(X \in g_n(X_1,\ldots,X_n))} \right] \geq 1.$$
Proof of Theorem 2. We will prove the inequality when D ranges over distributions on (0, 1) which, obviously, implies the theorem.

Since, for all $f \in F_1$ and $n > 2$, $f^n = f^2$, we have

$$\mathrm{opt}(M_D, \Pi_{F_1}) - \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\, V_{M_D}\big(\pi_{g_n(s_1,\ldots,s_n)}\big)$$
$$= \sup_{f \in F_1} \mathbb{E}_{s\sim D}\Big[ s + \gamma f(s) + \frac{\gamma^2}{1-\gamma} f^2(s) \Big] - \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\, \mathbb{E}_{s\sim D}\Big[ s + \gamma\, g_n(s_1,\ldots,s_n)(s) + \frac{\gamma^2}{1-\gamma}\, g_n(s_1,\ldots,s_n)^2(s) \Big]$$
$$= \sup_{f \in F_1} \mathbb{E}_{s\sim D}\Big[ \gamma f(s) + \frac{\gamma^2}{1-\gamma} f^2(s) \Big] - \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\, \mathbb{E}_{s\sim D}\Big[ \gamma\, g_n(s_1,\ldots,s_n)(s) + \frac{\gamma^2}{1-\gamma}\, g_n(s_1,\ldots,s_n)^2(s) \Big].$$

For all $f_1, f_2 \in F_1$, $|\mathbb{E} f_1 - \mathbb{E} f_2| \leq \mathbb{E}|f_1 - f_2| \leq \epsilon_1$. Therefore, we can get rid of the first terms in both sub-expressions above without changing the value by more than $2\gamma\epsilon_1$:

$$\geq \sup_{f \in F_1} \mathbb{E}_{s\sim D}\Big[ \frac{\gamma^2}{1-\gamma} f^2(s) \Big] - \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\, \mathbb{E}_{s\sim D}\Big[ \frac{\gamma^2}{1-\gamma}\, g_n(s_1,\ldots,s_n)^2(s) \Big] - 2\gamma\epsilon_1$$
$$= \frac{\gamma^2}{1-\gamma}\left( \sup_{f \in F_1} \mathbb{E}_{s\sim D}\big[ f^2(s) + 1 \big] - \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\, \mathbb{E}_{s\sim D}\big[ g_n(s_1,\ldots,s_n)^2(s) + 1 \big] \right) - 2\gamma\epsilon_1.$$

From (8), we know that $f_T^2(x) + 1$ restricted to $x \in (0, 1)$ is the same as $1_{(x \in T)}$. Therefore, restricting D to probability measures on (0, 1) and applying Lemma 1, we get

$$\inf_{g_n} \sup_{D} \Big[ \mathrm{opt}(M_D, \Pi_{F_1}) - \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\, V_{M_D}\big(\pi_{g_n(s_1,\ldots,s_n)}\big) \Big] \geq \frac{\gamma^2}{1-\gamma} - 2\gamma\epsilon_1.$$

To finish the proof, we note that γ < 1 and, by definition,

$$\mathrm{Reg}_{M_D, \Pi_{F_1}}\big(\pi_{g_n(s_1,\ldots,s_n)}\big) = \mathrm{opt}(M_D, \Pi_{F_1}) - V_{M_D}\big(\pi_{g_n(s_1,\ldots,s_n)}\big).$$
We use the MDP of the previous section with a different policy class which we now describe. For
real numbers $x, y \in (0, 1)$ with binary expansions (choose the terminating representation for rationals) $0.b_1 b_2 b_3 \ldots$ and $0.c_1 c_2 c_3 \ldots$, define

$$\mathrm{mix}(x, y) = 0.b_1 c_1 b_2 c_2 \ldots \qquad \mathrm{stretch}(x) = 0.b_1 0 b_2 0 b_3 \ldots$$
$$\mathrm{even}(x) = 0.b_2 b_4 b_6 \ldots \qquad \mathrm{odd}(x) = 0.b_1 b_3 b_5 \ldots$$

Some obvious identities are $\mathrm{mix}(x, y) = \mathrm{stretch}(x) + \mathrm{stretch}(y)/2$, $\mathrm{odd}(\mathrm{mix}(x, y)) = x$ and $\mathrm{even}(\mathrm{mix}(x, y)) = y$. Now fix $\epsilon_2 > 0$. Since finite subsets of (0, 1) and irrationals in $(0, \epsilon_2)$ have the same cardinality, there exists a bijection h which maps every finite subset T of (0, 1) to some irrational $h(T) \in (0, \epsilon_2)$. For a finite subset T of (0, 1), define
$$f_T(x) = \begin{cases} 0 & x = -1 \\ 1_{\left(\mathrm{odd}(-x)\, \in\, h^{-1}(\mathrm{even}(-x))\right)} & -1 < x < 0 \\ 0 & x = 0 \\ -\mathrm{mix}(x, h(T)) & 0 < x < 1 \\ 1 & x = 1 \end{cases}$$
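The bit-interleaving primitives are easy to realize to finite precision. The following sketch is ours; it works with truncated binary expansions, and the asserts check the stated identities exactly for dyadic (terminating-expansion) inputs.

    def bits(x, n):
        # first n binary digits of x in (0, 1)
        out = []
        for _ in range(n):
            x *= 2
            b = int(x)
            out.append(b)
            x -= b
        return out

    def from_bits(bs):
        return sum(b / 2.0 ** (i + 1) for i, b in enumerate(bs))

    def mix(x, y, n=24):        # 0.b1 c1 b2 c2 ...
        return from_bits([d for p in zip(bits(x, n), bits(y, n)) for d in p])

    def stretch(x, n=24):       # 0.b1 0 b2 0 ...
        return from_bits([d for b in bits(x, n) for d in (b, 0)])

    def even(x, n=48):          # 0.b2 b4 b6 ...
        return from_bits(bits(x, n)[1::2])

    def odd(x, n=48):           # 0.b1 b3 b5 ...
        return from_bits(bits(x, n)[0::2])

    x, y = 0.3125, 0.65625      # dyadic, so all steps are exact in floats
    assert mix(x, y) == stretch(x) + stretch(y) / 2
    assert odd(mix(x, y)) == x and even(mix(x, y)) == y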
It is easy to check that with this definition, $f_T^2(x) = 1_{(x \in T)}$ for $x \in (0, 1)$. Finally, let $F_2 = \{f_T : T \subseteq (0, 1), |T| < \infty\}$. To calculate the pseudodimension of this class, note that using the identity $\mathrm{mix}(x, y) = \mathrm{stretch}(x) + \mathrm{stretch}(y)/2$, every function $f_T$ in the class can be written as $f_T = f_0 + \tilde f_T$ where $f_0$ is a fixed function (does not depend on T) and $\tilde f_T$ is given by

$$\tilde f_T(x) = \begin{cases} 0 & -1 \leq x \leq 0 \\ -\mathrm{stretch}(h(T))/2 & 0 < x < 1 \\ 0 & x = 1 \end{cases}$$

Let $H = \{\tilde f_T : T \subseteq (0, 1), |T| < \infty\}$. Since $\mathrm{Pdim}(H + f_0) = \mathrm{Pdim}(H)$ for any class H and a fixed function $f_0$, we have $\mathrm{Pdim}(F_2) = \mathrm{Pdim}(H)$. As each function $\tilde f_T(x)$ is constant on (0, 1) and zero elsewhere, we cannot shatter even two points using H. Thus, $\mathrm{Pdim}(H) = 1$.
Theorem 3. Let $g_n$ range over functions from $S^n$ to $F_2$. Let D range over probability distributions on S. Then,

$$\inf_{g_n} \sup_{D}\, \mathbb{E}_{(s_1,\ldots,s_n)\sim D^n}\Big[ \mathrm{Reg}_{M_D, \Pi_{F_2}}\big(\pi_{g_n(s_1,\ldots,s_n)}\big) \Big] \geq \frac{\gamma^2}{1-\gamma} - \epsilon_2.$$
Sketch. Let us only check that the properties of $F_1$ that allowed us to proceed with the proof of Theorem 2 are also satisfied by $F_2$. First, for all $f \in F_2$ and $n > 2$, $f^n = f^2$. Second, for all $f_{T_1}, f_{T_2} \in F_2$ and $x \in [-1, +1]$, $|f_{T_1}(x) - f_{T_2}(x)| \leq \epsilon_2/2$. This is because $f_{T_1}$ and $f_{T_2}$ can differ only for $x \in (0, 1)$. For such an x, $|f_{T_1}(x) - f_{T_2}(x)| = |\mathrm{mix}(x, h(T_1)) - \mathrm{mix}(x, h(T_2))| = |\mathrm{stretch}(h(T_1)) - \mathrm{stretch}(h(T_2))|/2 \leq \epsilon_2/2$. Third, the restriction of $f_T^2$ to (0, 1) is $1_{(x \in T)}$.
Acknowledgments
We acknowledge the support of DARPA under grants HR0011-04-1-0014 and FA8750-05-2-0249.
References
[1] Alon, N., Ben-David, S., Cesa-Bianchi, N. & Haussler, D. (1997) Scale-sensitive Dimensions, Uniform Convergence, and Learnability. Journal of the ACM 44(4):615-631.
[2] Anthony, M. & Bartlett, P.L. (1999) Neural Network Learning: Theoretical Foundations. Cambridge University Press.
[3] Blum, L., Cucker, F., Shub, M. & Smale, S. (1998) Complexity and Real Computation. Springer-Verlag.
[4] Goldberg, P.W. & Jerrum, M.R. (1995) Bounding the Vapnik-Chervonenkis Dimension of Concept Classes Parameterized by Real Numbers. Machine Learning 18(2-3):131-148.
[5] Haussler, D. (1992) Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications. Information and Computation 100:78-150.
[6] Jain, R. & Varaiya, P. (2006) Simulation-based Uniform Value Function Estimates of Discounted and Average-reward MDPs. SIAM Journal on Control and Optimization, to appear.
[7] Ng, A.Y. & Jordan, M.I. (2000) PEGASUS: A Policy Search Method for MDPs and POMDPs. In Proceedings of the 16th Annual Conference on Uncertainty in Artificial Intelligence, pp. 405-415. Morgan Kaufmann Publishers.
[8] Pollard, D. (1990) Empirical Processes: Theory and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics, Volume 2.
parameterized:3 everywhere:2 uncertainty:1 striking:1 family:1 reader:1 decision:3 appendix:1 bit:3 bound:16 oracle:1 annual:1 ri:4 generates:1 simulate:1 min:2 performing:1 department:1 according:2 smaller:1 happens:1 s1:17 restricted:1 previously:2 turn:1 count:1 needed:3 know:1 tractable:1 operation:8 appearing:1 denotes:1 ensure:3 prof:1 implied:1 pegasus:1 question:3 quantity:1 occurs:1 already:1 parametric:1 spike:3 rt:4 usual:2 interacts:1 dependence:1 md:4 assuming:2 smale:2 stated:1 shub:2 policy:54 perform:1 bianchi:1 upper:1 observation:1 markov:5 finite:11 acknowledge:1 situation:2 precise:1 rn:2 arbitrary:2 thm:2 sharp:1 david:1 namely:1 required:2 pair:1 c3:1 varaiya:1 california:2 hr0011:1 beyond:1 usually:1 dynamical:1 kauffman:1 program:1 ambuj:2 max:5 event:1 natural:1 minimax:2 mdps:4 axis:1 log1:1 sn:17 multiplication:1 relative:1 law:1 loss:1 limitation:1 generator:1 foundation:1 agent:1 degree:3 sufficient:2 s0:15 pi:2 elsewhere:1 surprisingly:1 allow:2 taking:2 distributed:2 dimension:17 xn:3 transition:5 made:1 approximate:4 supremum:1 investigating:1 rid:1 b1:4 assumed:1 xi:6 search:6 learn:1 ca:2 composing:2 ignoring:1 expansion:1 necessarily:1 constructing:1 anthony:1 da:4 pk:5 linearly:1 bounding:1 noise:1 allowed:1 x1:3 ef2:1 en:2 n:4 sub:1 lie:1 third:1 theorem:15 rk:4 down:1 specific:1 pac:1 er:1 r2:1 essential:1 exists:8 restricting:2 vapnik:1 horizon:2 partially:1 springer:1 satisfies:2 acm:1 conditional:1 identity:3 exposition:1 lipschitz:7 except:3 uniformly:3 lemma:3 total:3 e:8 succeeds:1 support:1 ex:1 |
2,193 | 2,991 | Fast Iterative Kernel PCA
Nicol N. Schraudolph
Simon Günter
S.V. N. Vishwanathan
{nic.schraudolph,simon.guenter,svn.vishwanathan}@nicta.com.au
Statistical Machine Learning, National ICT Australia
Locked Bag 8001, Canberra ACT 2601, Australia
Research School of Information Sciences & Engineering
Australian National University, Canberra ACT 0200, Australia
Abstract
We introduce two methods to improve convergence of the Kernel Hebbian Algorithm (KHA) for iterative kernel PCA. KHA has a scalar gain parameter which
is either held constant or decreased as 1/t, leading to slow convergence. Our
KHA/et algorithm accelerates KHA by incorporating the reciprocal of the current
estimated eigenvalues as a gain vector. We then derive and apply Stochastic MetaDescent (SMD) to KHA/et; this further speeds convergence by performing gain
adaptation in RKHS. Experimental results for kernel PCA and spectral clustering
of USPS digits as well as motion capture and image de-noising problems confirm
that our methods converge substantially faster than conventional KHA.
1 Introduction
Principal Components Analysis (PCA) is a standard linear technique for dimensionality reduction.
Given a matrix $X \in \mathbb{R}^{n \times l}$ of l centered, n-dimensional observations, PCA performs an eigendecomposition of the covariance matrix $Q := XX^\top$. The $r \times n$ matrix W whose rows are the eigenvectors of Q associated with the $r \leq n$ largest eigenvalues minimizes the least-squares reconstruction error

$$\|X - W^\top W X\|_F, \qquad (1)$$

where $\|\cdot\|_F$ is the Frobenius norm.
As it takes $O(n^2 l)$ time to compute Q and up to $O(n^3)$ time to eigendecompose it, PCA can be prohibitively expensive for large amounts of high-dimensional data. Iterative methods exist that do not compute Q explicitly and thereby reduce the computational cost to $O(rn)$ per iteration. One such method is Sanger's [1] Generalized Hebbian Algorithm (GHA), which updates W as

$$W_{t+1} = W_t + \eta_t\,\big[\,y_t x_t^\top - \mathrm{lt}(y_t y_t^\top)\, W_t\,\big]. \qquad (2)$$

Here $x_t \in \mathbb{R}^n$ is the observation at time t, $y_t := W_t x_t$, and $\mathrm{lt}(\cdot)$ makes its argument lower triangular by zeroing all elements above the diagonal. For an appropriate scalar gain $\eta_t$, $W_t$ will generally tend to converge to the principal component solution as $t \to \infty$, though its global convergence is not proven [2].
One can do better than PCA in minimizing the reconstruction error (1) by allowing nonlinear projections of the data into r dimensions. Unfortunately such approaches often pose difficult nonlinear
optimization problems. Kernel methods [3] provide a way to incorporate nonlinearity without unduly complicating the optimization problem. Kernel PCA [4] performs an eigendecomposition on
the kernel expansion of the data, an $l \times l$ matrix. To reduce the attendant $O(l^2)$ space and $O(l^3)$ time complexity, Kim et al. [2] introduced the Kernel Hebbian Algorithm (KHA), kernelizing GHA.
Both GHA and KHA are examples of stochastic approximation algorithms, whose iterative updates employ individual observations in place of (but, in the limit, approximating) statistical properties of the entire data. By interleaving their updates with the passage through the data, stochastic
approximation algorithms can greatly outperform conventional methods on large, redundant data
sets, even though their convergence is comparatively slow.
Both the GHA and KHA updates incorporate a scalar gain parameter $\eta_t$, which is either held fixed or annealed according to some predefined schedule. Robbins and Monro [5] established conditions on the sequence of $\eta_t$ that guarantee the convergence of many stochastic approximation algorithms; a widely used annealing schedule that obeys these conditions is $\eta_t \propto \tau/(t + \tau)$, for any $\tau > 0$.
Here we propose the inclusion of a gain vector in the KHA, which provides each estimated eigenvector with its individual gain parameter. We present two methods for setting these gains: In the
KHA/et algorithm, the gain of an eigenvector is reciprocal to its estimated eigenvalue as well as
the iteration number $t$ [6]. Our second method, KHA-SMD, additionally employs Schraudolph's [7] Stochastic Meta-Descent (SMD) technique for adaptively controlling a gain vector for stochastic
gradient descent, derived and applied here in Reproducing Kernel Hilbert Space (RKHS), cf. [8].
The following section summarizes Kim et al.'s [2] KHA. Sections 3 and 4 describe our KHA/et and
KHA-SMD algorithms, respectively. We report our experiments with these algorithms in Section 5
before concluding with a discussion.
2
Kernel Hebbian Algorithm (KHA and KHA/t)
Kim et al. [2] apply Sanger's [1] GHA to data mapped into a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ via the function $\phi: \mathbb{R}^n \to \mathcal{H}$. $\mathcal{H}$ and $\phi$ are implicitly defined via the kernel $k: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ with the property $\forall x, x' \in \mathbb{R}^n: k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$, where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ denotes the inner product in $\mathcal{H}$. Let $\Phi$ denote the transposed mapped data:
$$\Phi := [\phi(x_1), \phi(x_2), \ldots, \phi(x_l)]^\top. \quad (3)$$
This assumes a fixed set of $l$ observations, whereas GHA relies on an infinite sequence of observations for convergence. Following Kim et al. [2], we use an indexing function $p: \mathbb{N} \to \mathbb{Z}_l$ which concatenates random permutations of $\mathbb{Z}_l$ to reconcile this discrepancy.
PCA, GHA, and hence KHA all assume that the data is centered. Since the mapping into feature space performed by kernel methods does not necessarily preserve such centering, we must re-center the mapped data:
$$\Phi' := \Phi - M\Phi, \quad (4)$$
where $M$ denotes the $l \times l$ matrix with entries all equal to $1/l$. This is achieved by replacing the kernel matrix $K := \Phi\Phi^\top$ (i.e., $[K]_{ij} := k(x_i, x_j)$) by its centered version
$$K' := \Phi'\Phi'^\top = (\Phi - M\Phi)(\Phi - M\Phi)^\top = K - MK - (MK)^\top + MKM. \quad (5)$$
Since all rows of $MK$ are identical (as are all elements of $MKM$) we can precalculate that row in $O(l^2)$ time and store it in $O(l)$ space to efficiently implement operations with the centered kernel. The kernel centered on the training data is also used when testing the trained system on new data.
From Kernel PCA [4] it is known that the principal components must lie in the span of the centered mapped data; we can therefore express the GHA weight matrix as $W_t = A_t \Phi'$, where $A$ is an $r \times l$ matrix of expansion coefficients, and $r$ the number of principal components. The GHA weight update (2) thus becomes
$$A_{t+1} \Phi' = A_t \Phi' + \eta_t\,[y_t\, \phi'(x_{p(t)})^\top - \mathrm{lt}(y_t y_t^\top)\, A_t \Phi'], \quad (6)$$
where
$$y_t := W_t\, \phi'(x_{p(t)}) = A_t \Phi'\, \phi'(x_{p(t)}) = A_t\, k'_{p(t)}, \quad (7)$$
using $k'_i$ to denote the $i$th column of the centered kernel matrix $K'$. Since we have $\phi'(x_i)^\top = e_i^\top \Phi'$, where $e_i$ is the unit vector in direction $i$, (6) can be rewritten solely in terms of expansion coefficients as
$$A_{t+1} = A_t + \eta_t\,[y_t\, e_{p(t)}^\top - \mathrm{lt}(y_t y_t^\top)\, A_t]. \quad (8)$$
Introducing the update coefficient matrix
$$\Gamma_t := y_t\, e_{p(t)}^\top - \mathrm{lt}(y_t y_t^\top)\, A_t \quad (9)$$
we obtain the compact update rule
$$A_{t+1} = A_t + \eta_t\, \Gamma_t. \quad (10)$$
In their experiments, Kim et al. [2] employed the KHA update (8) with a constant scalar gain, $\eta_t = \text{const}$. They also proposed letting the gain decay as $\eta_t = 1/t$ for stationary data.
3
Gain Decay with Reciprocal Eigenvalues (KHA/et)
Consider the term $y_t x_t^\top = W_t x_t x_t^\top$ appearing on the right-hand side of the GHA update rule (2). At the desired solution, the rows of $W_t$ contain the principal components, i.e., the leading eigenvectors of $Q = XX^\top$. The elements of $y_t$ thus scale with the associated eigenvalues of $Q$. Wide spreads of eigenvalues can therefore lead to ill-conditioning, hence slow convergence, of the GHA; the same holds for the KHA.
In our KHA/et algorithm, we counteract this problem by furnishing KHA with a gain vector $\eta_t$ that provides each eigenvector estimate with its individual gain parameter. The update rule (10) thus becomes
$$A_{t+1} = A_t + \mathrm{diag}(\eta_t)\, \Gamma_t, \quad (11)$$
where $\mathrm{diag}(\cdot)$ turns a vector into a diagonal matrix. To condition KHA, we set the gain parameters proportional to the reciprocal of both the iteration number $t$ and the current estimated eigenvalue; a similar approach was used by Chen and Chang [6] for neural network feature selection. Let $\lambda_t$ be the vector of eigenvalues associated with the current estimate (as stored in $A_t$) of the first $r$ eigenvectors. KHA/et sets the $i$th element of $\eta_t$ to
$$[\eta_t]_i = \frac{\|\lambda_t\|}{[\lambda_t]_i} \cdot \frac{l}{t + l} \cdot \eta_0, \quad (12)$$
where $\eta_0$ is a free scalar parameter, and $l$ the size of the data set. This conditions the KHA update (8) by proportionately decreasing (increasing) the gain for rows of $A_t$ associated with large (small) eigenvalues.

The norm $\|\lambda_t\|$ in the numerator of (12) is maximized by the principal components; its growth serves to counteract the $l/(t+l)$ gain decay while the leading eigenspace is identified. This achieves an effect comparable to an adaptive "search then converge" gain schedule [9] without introducing any tuning parameters.
As the goal of KHA is to find the eigenvectors in the first place, we don't know the true eigenvalues while running the algorithm. Instead we use the eigenvalues associated with KHA's current eigenvector estimate, computed as
$$[\lambda_t]_i = \frac{\|[A_t]_{i\cdot}\, K'\|}{\|[A_t]_{i\cdot}\|}, \quad (13)$$
where $[A_t]_{i\cdot}$ denotes the $i$-th row of $A_t$. This can be stated compactly as
$$\lambda_t = \sqrt{\frac{\mathrm{diag}[A_t K' (A_t K')^\top]}{\mathrm{diag}(A_t A_t^\top)}}, \quad (14)$$
where the division and square root operation are performed element-wise, and $\mathrm{diag}(\cdot)$ (when applied to a matrix) extracts the vector of elements along the matrix diagonal.

Note that naive computation of $AK'$ is quite expensive: $O(rl^2)$. Since the eigenvalues evolve gradually, it suffices to re-estimate them only occasionally; we determine $\lambda_t$ and $\eta_t$ once for each pass through the training data set, i.e., every $l$ iterations. Below we derive a way to maintain $AK'$ incrementally in an affordable $O(rl)$ via Equations (17) and (18).
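A small sketch of how the KHA/et gains might be computed from the maintained product $A_t K'$, under Equations (12)-(14); this is illustrative code of ours, not the authors' release:

```python
import numpy as np

# KHA/et gain vector, Eqs. (12)-(14). AK is the maintained product A_t K'.
def khaet_gains(A, AK, t, l, eta0):
    # Eq. (14): element-wise sqrt of diag[AK AK^T] / diag[A A^T]
    lam = np.sqrt(np.sum(AK**2, axis=1) / np.sum(A**2, axis=1))
    # Eq. (12): reciprocal-eigenvalue gains with l/(t+l) decay
    return np.linalg.norm(lam) / lam * l / (t + l) * eta0
```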
4
KHA with Stochastic Meta-Descent (KHA-SMD)
While KHA/et makes reasonable assumptions about how the gains of a KHA update should be
scaled, it is by no means clear how close the resulting gains are to being optimal. To explore
this question, we now derive and implement the Stochastic Meta-Descent (SMD [7]) algorithm for
KHA/et. SMD controls gains adaptively in response to the observed history of parameter updates
so as to optimize convergence. Here we focus on the specifics of applying SMD to KHA/et; please
refer to [7, 8] for more general derivations and discussion of SMD.
Using the KHA/et gains as a starting point, the KHA-SMD update is
$$A_{t+1} = A_t + e^{\mathrm{diag}(\rho_t)}\, \mathrm{diag}(\eta_t)\, \Gamma_t, \quad (15)$$
where the log-gain vector $\rho_t$ is adjusted by SMD. (Note that the exponential of a diagonal matrix is obtained simply by exponentiating the individual diagonal entries.)

In an RKHS, SMD adapts a scalar log-gain whose update is driven by the inner product between the gradient and a differential of the system parameters, all in the RKHS [8]. Note that $\Gamma_t \Phi'$ can be interpreted as the gradient in the RKHS of the (unknown) merit function maximized by KHA, and that (15) can be viewed as $r$ coupled updates in RKHS, one for each row of $A_t$, each associated with a scalar gain. SMD-KHA's adaptation of the log-gain vector is therefore driven by the diagonal entries of $\langle \Gamma_t \Phi', B_t \Phi' \rangle_{\mathcal{H}}$, where $B_t := \mathrm{d}A_t$ denotes the $r \times l$ matrix of expansion coefficients for SMD's differential parameters:
$$\rho_t = \rho_{t-1} + \mu\, \mathrm{diag}(\langle \Gamma_t \Phi', B_t \Phi' \rangle_{\mathcal{H}}) = \rho_{t-1} + \mu\, \mathrm{diag}(\Gamma_t \Phi' \Phi'^\top B_t^\top) = \rho_{t-1} + \mu\, \mathrm{diag}(\Gamma_t K' B_t^\top), \quad (16)$$
where $\mu$ is a scalar tuning parameter. Naive computation of $\Gamma_t K'$ in (16) would cost $O(rl^2)$ time, which is prohibitively expensive for large $l$. We can, however, reduce this cost to $O(rl)$ by noting that (9) implies
$$\Gamma_t K' = y_t\, e_{p(t)}^\top K' - \mathrm{lt}(y_t y_t^\top)\, A_t K' = y_t\, {k'_{p(t)}}^{\top} - \mathrm{lt}(y_t y_t^\top)\, A_t K', \quad (17)$$
where the $r \times l$ matrix $A_t K'$ can be stored and updated incrementally via (15):
$$A_{t+1} K' = A_t K' + e^{\mathrm{diag}(\rho_t)}\, \mathrm{diag}(\eta_t)\, \Gamma_t K'. \quad (18)$$
The initial computation of $A_1 K'$ still costs $O(rl^2)$ in general but is affordable as it is performed only once. Alternatively, the time complexity of this step can easily be reduced to $O(rl)$ by making $A_1$ suitably sparse.
Finally, we apply SMD's standard update of the differential parameters:
$$B_{t+1} = \lambda B_t + e^{\mathrm{diag}(\rho_t)}\, \mathrm{diag}(\eta_t)\, (\Gamma_t + \mu\, \mathrm{d}\Gamma_t), \quad (19)$$
where the decay factor $0 \le \lambda \le 1$ is another scalar tuning parameter. The differential $\mathrm{d}\Gamma_t$ of the gradient is easily computed by routine application of the rules of calculus:
$$\begin{aligned} \mathrm{d}\Gamma_t &= \mathrm{d}[y_t\, e_{p(t)}^\top - \mathrm{lt}(y_t y_t^\top)\, A_t] \\ &= (\mathrm{d}A_t)\, k'_{p(t)}\, e_{p(t)}^\top - \mathrm{lt}(y_t y_t^\top)(\mathrm{d}A_t) - [\mathrm{d}\, \mathrm{lt}(y_t y_t^\top)]\, A_t \\ &= B_t\, k'_{p(t)}\, e_{p(t)}^\top - \mathrm{lt}(y_t y_t^\top)\, B_t - \mathrm{lt}(B_t\, k'_{p(t)}\, y_t^\top + y_t\, {k'_{p(t)}}^{\top} B_t^\top)\, A_t. \end{aligned} \quad (20)$$
Inserting (9) and (20) into (19) yields the update rule
$$B_{t+1} = \lambda B_t + e^{\mathrm{diag}(\rho_t)}\, \mathrm{diag}(\eta_t)\, [(A_t + \mu B_t)\, k'_{p(t)}\, e_{p(t)}^\top - \mathrm{lt}(y_t y_t^\top)(A_t + \mu B_t) - \mu\, \mathrm{lt}(B_t\, k'_{p(t)}\, y_t^\top + y_t\, {k'_{p(t)}}^{\top} B_t^\top)\, A_t]. \quad (21)$$
In summary, the application of SMD to KHA/et comprises Equations (16), (21), and (15), in that
order. The complete KHA-SMD algorithm is given as Algorithm 1. We initialize A1 to an isotropic
normal density with suitably small variance, $B_1$ to all zeroes, and $\rho_0$ to all ones. The worst-case
time complexity of non-trivial initialization steps is given explicitly; all steps in the repeat loop have
a time complexity of O(rl) or less.
Algorithm 1 KHA-SMD
1. Initialize:
(a) calculate $MK$, $MKM$ ($O(l^2)$)
(b) $\rho_0 := [1 \ldots 1]^\top$
(c) $B_1 := 0$
(d) $A_1 \sim \mathcal{N}(0, (rl)^{-1} I)$
(e) calculate $A_1 K'$ ($O(rl^2)$)
2. Repeat for $t = 1, 2, \ldots$
(a) calculate $\lambda_t$ (13)
(b) calculate $\eta_t$ (12)
(c) select observation $x_{p(t)}$
(d) calculate $y_t$ (7)
(e) calculate $\Gamma_t$ (9)
(f) calculate $\Gamma_t K'$ (17)
(g) update $\rho_{t-1} \to \rho_t$ (16)
(h) update $B_t \to B_{t+1}$ (21)
(i) update $A_t \to A_{t+1}$ (15)
(j) update $A_t K' \to A_{t+1} K'$ (18)
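The following condensed sketch collects step 2 of Algorithm 1 into one function; it is our own illustrative code under the equations above (dense matrices, no claim to match the authors' implementation):

```python
import numpy as np

# One KHA-SMD iteration (Algorithm 1, step 2).
# State: A, B (r-by-l), rho (r,), AK = A @ Kc maintained incrementally.
def kha_smd_step(A, B, rho, AK, Kc, i, eta, mu, lam):
    y = AK[:, i]                               # Eq. (7), via maintained A K'
    ltyy = np.tril(np.outer(y, y))             # lt(y y^T)
    Gamma = -ltyy @ A
    Gamma[:, i] += y                           # Eq. (9)
    GK = np.outer(y, Kc[:, i]) - ltyy @ AK     # Eq. (17)
    rho = rho + mu * np.sum(GK * B, axis=1)    # Eq. (16): diag(Gamma K' B^T)
    g = np.exp(rho) * eta                      # per-row effective gains
    b = B @ Kc[:, i]                           # B_t k'_{p(t)}
    AmB = A + mu * B
    T = -ltyy @ AmB
    T[:, i] += AmB @ Kc[:, i]                  # (A + mu B) k' e_{p(t)}^T
    T -= mu * np.tril(np.outer(b, y) + np.outer(y, b)) @ A
    B = lam * B + g[:, None] * T               # Eq. (21)
    A = A + g[:, None] * Gamma                 # Eq. (15)
    AK = AK + g[:, None] * GK                  # Eq. (18)
    return A, B, rho, AK
```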
5
Experiments
We compared our KHA/et and KHA-SMD algorithms with KHA using either a fixed gain ($\eta_t = \eta_0$) or a scheduled gain decay ($\eta_t = \eta_0\, l/(t + l)$, denoted KHA/t) in a number of different settings:
Performing kernel PCA and spectral clustering on the well-known USPS dataset [10], replicating an
image denoising experiment of Kim et al. [2], and denoising human motion capture data.
In all experiments the Kernel Hebbian Algorithm (KHA) and our enhanced variants are used to find the first $r$ eigenvectors of the centered kernel matrix $K'$. To assess the quality of the result, we reconstruct the kernel matrix from the found eigenvectors and measure the reconstruction error
$$E(A) := \|K' - (AK')^\top AK'\|_F, \quad (22)$$
where $\|\cdot\|_F$ is the Frobenius norm. The minimal reconstruction error from $r$ eigenvectors, $E_{\min} := \min_A E(A)$, can be calculated by an eigendecomposition. This allows us to report reconstruction errors as excess errors relative to the optimal reconstruction, i.e., $E(A)/E_{\min} - 1$.
To compare algorithms we plot the excess reconstruction error on a logarithmic scale after each pass
through the entire data set. This is a fair comparison since the overhead for KHA/et and KHA-SMD is negligible compared to the time required by the KHA base algorithm. The most expensive
operation, the calculation of a row of the Kernel matrix, is shared by all algorithms.
We manually tuned $\eta_0$ for KHA, KHA/t, and KHA/et; for KHA-SMD we hand-tuned $\mu$, used the same $\eta_0$ as KHA/et, and the value $\lambda = 0.99$ (set a priori) throughout. Thus a comparable amount of tuning effort went into each algorithm. Parameters were tuned by a local search over values in the set $\{a \cdot 10^b : a \in \{1, 2, 5\},\, b \in \mathbb{Z}\}$.
5.1
USPS Digits
Our first set of experiments was performed on a subset of the well-known USPS dataset [10], namely
the first 100 samples of each digit in the USPS training data. KHA with both a dot-product kernel
and a Gaussian kernel with $\sigma = 8$¹ was used to extract the first 16 eigenvectors. The results are
shown in Figure 1. KHA/et clearly outperforms KHA/t for both kernels, and KHA-SMD is able to
increase the convergence speed even further.
¹ This is the value of $\sigma$ used by Mika et al. [11].
Figure 1: Excess relative reconstruction error for kernel PCA (16 eigenvectors) on USPS data, using
a dot-product (left) vs. Gaussian kernel with $\sigma = 8$ (right).
5.2
Multipatch Image PCA
For our second set of experiments we replicated the image de-noising problem used by Kim et al.
[2], the idea being that reconstructing image patches from their r leading eigenvectors will eliminate
most of the noise. The image considered here is the famous Lena picture [12] which was divided
into four sub-images. From each sub-image, 11×11 pixel windows were sampled on a grid with two-pixel spacing to produce 3844 vectors of 121 pixel intensity values each. The KHA with Gaussian kernel ($\sigma = 1$) was used to find the 20 best eigenvectors for each sub-image. Results averaged over all four sub-images are shown in Figure 2 (left), including KHA with the constant gain of $\eta_0 = 0.05$
employed by Kim et al. [2] for comparison.
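A hypothetical sketch of the patch sampling just described (the function name and interface are ours):

```python
import numpy as np

# Sample size-by-size pixel windows on a regular grid with the given stride;
# for 11x11 windows at two-pixel spacing each row has 121 intensity values.
def sample_patches(img, size=11, stride=2):
    h, w = img.shape
    patches = [img[r:r + size, c:c + size].ravel()
               for r in range(0, h - size + 1, stride)
               for c in range(0, w - size + 1, stride)]
    return np.array(patches)
```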
After 50 passes through the training data, KHA/et achieves an excess reconstruction error two orders
of magnitude better than conventional KHA; KHA-SMD yields an additional order of magnitude
improvement. KHA/t, while superior to a constant gain, is comparatively ineffective here.
Kim et al. [2] performed 800 passes through the training data. Replicating this approach we obtain a
reconstruction error of 5.64%, significantly worse than KHA/et and KHA-SMD after 50 passes. The
signal-to-noise ratio (SNR) of the reconstruction after 800 passes with constant gain is 13.46² while
KHA/et achieves comparable performance much faster, reaching an SNR of 13.49 in 50 passes.
5.3
Spectral Clustering
Spectral Clustering [13] is a clustering method which includes the extraction of the first kernel PCs.
In this section we present results of the spectral clustering of all 7291 patterns of the USPS data [10]
where 10 kernel PCs were obtained by KHA. We used the spectral clustering method presented in
² Kim et al. [2] reported an SNR of 14.09; the discrepancy is due to different reconstruction methods.
Figure 2: Excess relative reconstruction error (left) for multipatch image PCA on a noisy Lena image
(center), using a Gaussian kernel with $\sigma = 1$; denoised image obtained by KHA-SMD (right).
Figure 3: Excess relative reconstruction error (left) and quality of clustering as measured by variation
of information (right) for spectral clustering of the USPS data with a Gaussian kernel ($\sigma = 8$).
[13], and evaluate our results via the Variation of Information (VI) metric [14], which compares the
clustering obtained by spectral clustering to that induced by the class labels. On the USPS data, a VI
of 4.54 corresponds to random performance, while clustering in perfect accordance with the class
labels would give a VI of zero.
Our results are shown in Figure 3. Again KHA-SMD dominates KHA/et in both convergence speed
and quality of reconstruction (left); KHA/et in turn outperforms KHA/t. The quality of the resulting
clustering (right) reflects the quality of reconstruction. KHA/et and KHA-SMD produce a clustering as good as that obtained from a (computationally expensive) full kernel PCA within 10 passes
through the data; KHA/t after more than 30 passes.
5.4
Human motion denoising
In our final set of experiments we employed KHA to denoise a human walking motion trajectory
from the CMU motion capture database (http://mocap.cs.cmu.edu), converted to Cartesian
coordinates via Neil Lawrence's Matlab Motion Capture Toolbox (http://www.dcs.shef.ac.uk/~neil/mocap/). The experimental setup was similar to that of Tangkuampien and Suter
[15]: Gaussian noise was added to the frames of the original motion, then KHA with 25 PCs was
used to denoise them. The results are shown in Figure 4.
As in the other experiments, KHA-SMD clearly outperformed KHA/et, which in turn was better
than KHA/t. KHA-SMD managed to reduce the mean-squared error by 87.5%; it is hard to visually
Figure 4: From left to right:
Excess relative reconstruction error on human motion capture data with Gaussian kernel ($\sigma = 1.5$), one frame of the original data, a superposition of this original and the
noisy data, and a superposition of the original and reconstructed (denoised) data.
detect a difference between the denoised frames and the original ones; see Figure 4 (right) for an
example. We include movies of the original, noisy, and denoised walk in the supporting material.
6
Discussion
We modified Kim et al.'s [2] Kernel Hebbian Algorithm (KHA) by providing a separate gain for
each eigenvector estimate. We then presented two methods, KHA/et and KHA-SMD, to set those
gains. KHA/et sets them inversely proportional to the estimated eigenvalues and iteration number;
KHA-SMD enhances that further by applying Stochastic Meta-Descent (SMD [7]) to perform gain
adaptation in RKHS [8]. In four different experimental settings both methods were compared to a
conventional gain decay schedule. As measured by relative reconstruction error, KHA-SMD clearly
outperformed KHA/et, which in turn outperformed the scheduled decay, in all our experiments.
Acknowledgments
National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Center of Excellence program. This work is supported by the IST
Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778.
References
[1] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward network. Neural Networks, 2:459-473, 1989.
[2] K. I. Kim, M. O. Franz, and B. Schölkopf. Iterative kernel principal component analysis for image modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(9):1351-1366, 2005.
[3] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[4] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[5] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951.
[6] L.-H. Chen and S. Chang. An adaptive learning algorithm for principal component analysis. IEEE Transactions on Neural Networks, 6(5):1255-1263, 1995.
[7] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
[8] S. V. N. Vishwanathan, N. N. Schraudolph, and A. J. Smola. Step size adaptation in reproducing kernel Hilbert space. Journal of Machine Learning Research, 7:1107-1133, 2006.
[9] C. Darken and J. E. Moody. Towards faster stochastic gradient search. In J. E. Moody, S. J. Hanson, and R. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 1009-1016. Morgan Kaufmann Publishers, 1992.
[10] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. J. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541-551, 1989.
[11] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 536-542. MIT Press, 1999.
[12] D. J. Munson. A note on Lena. IEEE Trans. Image Processing, 5(1), 1996.
[13] A. Ng, M. Jordan, and Y. Weiss. Spectral clustering: Analysis and an algorithm (with appendix). In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, 2002.
[14] M. Meila. Comparing clusterings: an axiomatic view. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 577-584, New York, NY, USA, 2005. ACM Press.
[15] T. Tangkuampien and D. Suter. Human motion de-noising via greedy kernel principal component analysis filtering. In Proc. Intl. Conf. Pattern Recognition, 2006.
| 2991 | [vw_text: Vowpal Wabbit bag-of-words feature vector for this entry, omitted] |
2,194 | 2,992 | Training Conditional Random Fields for Maximum
Labelwise Accuracy
Samuel S. Gross
Computer Science Department
Stanford University
Stanford, CA, USA
[email protected]
Chuong B. Do
Computer Science Department
Stanford University
Stanford, CA, USA
[email protected]
Olga Russakovsky
Computer Science Department
Stanford University
Stanford, CA, USA
[email protected]
Serafim Batzoglou
Computer Science Department
Stanford University
Stanford, CA, USA
[email protected]
Abstract
We consider the problem of training a conditional random field (CRF) to maximize per-label predictive accuracy on a training set, an approach motivated by
the principle of empirical risk minimization. We give a gradient-based procedure
for minimizing an arbitrarily accurate approximation of the empirical risk under
a Hamming loss function. In experiments with both simulated and real data, our
optimization procedure gives significantly better testing performance than several
current approaches for CRF training, especially in situations of high label noise.
1
Introduction
Sequence labeling, the task of assigning labels y = y1 , ..., yL to an input sequence x = x1 , ..., xL , is
a machine learning problem of great theoretical and practical interest that arises in diverse fields such
as computational biology, computer vision, and natural language processing. Conditional random
fields (CRFs) are a class of discriminative probabilistic models designed specifically for sequence
labeling tasks [1]. CRFs define the conditional distribution Pw (y | x) as a function of features
relating labels to the input sequence.
Ideally, training a CRF involves finding a parameter set w that gives high accuracy when labeling
new sequences. In some cases, however, simply finding parameters that give the best possible accuracy on training data (known as empirical risk minimization [2]) can be difficult. In particular, if
we wish to minimize Hamming loss, which measures the number of incorrect labels, gradient-based
optimization methods cannot be applied directly.¹ Consequently, surrogate optimization problems,
such as maximum likelihood or maximum margin training, are solved instead.
In this paper, we describe a training procedure that addresses the problem of minimizing empirical
per-label risk for CRFs. Specifically, our technique attempts to minimize a smoothed approximation
of the Hamming loss incurred by the maximum expected accuracy decoding (i.e., posterior decoding) algorithm on the training set. The degree of approximation is controlled by a parameterized
function Q(?) which trades off between the accuracy of the approximation and the smoothness of
the objective. In the limit as Q(?) approaches the step function, the optimization objective converges
to the empirical risk minimization criterion for Hamming loss.
¹ The gradient of the optimization objective is everywhere zero (except at points where the objective is
discontinuous), because a sufficiently small change in parameters will not change the predicted labeling.
2
Preliminaries
2.1
Definitions
Let $\mathcal{X}^L$ denote an input space of all possible input sequences, and let $\mathcal{Y}^L$ denote an output space of all possible output labels. Furthermore, for a pair of consecutive labels $y_{j-1}$ and $y_j$, an input sequence $x$, and a label position $j$, let $f(y_{j-1}, y_j, x, j) \in \mathbb{R}^n$ be a vector-valued function; we call $f$ the feature mapping of the CRF.
A conditional random field (CRF) defines the conditional probability of a labeling (or parse) $y$ given an input sequence $x$ as
$$P_w(y \mid x) = \frac{\exp\left(\sum_{j=1}^L w^\top f(y_{j-1}, y_j, x, j)\right)}{\sum_{y' \in \mathcal{Y}^L} \exp\left(\sum_{j=1}^L w^\top f(y'_{j-1}, y'_j, x, j)\right)} = \frac{\exp\left(w^\top F_{1,L}(x, y)\right)}{Z(x)}, \quad (1)$$
where we define the summed feature mapping, $F_{a,b}(x, y) = \sum_{j=a}^b f(y_{j-1}, y_j, x, j)$, and where the partition function $Z(x) = \sum_{y'} \exp\left(w^\top F_{1,L}(x, y')\right)$ ensures that the distribution is normalized for any set of model parameters $w$.²
2.2
Maximum a posteriori vs. maximum expected accuracy parsing
Given a CRF with parameters w, the sequence labeling task is to determine values for the labels y
of a new input sequence x. One approach is to choose the most likely, or maximum a posteriori,
labeling, $\arg\max_y P_w(y \mid x)$. This can be computed efficiently using the Viterbi algorithm.
An alternative approach, which seeks to maximize the per-label accuracy of the prediction rather
than the joint probability of the entire parse, chooses the most likely (i.e., highest posterior probability) value for each label separately. Note that
$$\arg\max_y \sum_{j=1}^L P_w(y_j \mid x) = \arg\max_y\, \mathbb{E}_{y'}\!\left[\sum_{j=1}^L 1\{y'_j = y_j\}\right], \quad (2)$$
where $1\{\text{condition}\}$ denotes the usual indicator function whose value is 1 when condition is true and 0 otherwise, and where the expectation is taken with respect to the conditional distribution $P_w(y' \mid x)$. From this, we see that maximum expected accuracy parsing chooses the parse with the maximum expected number of correct labels.
In practice, maximum expected accuracy parsing often yields more accurate results than Viterbi
parsing (on a per-label basis) [3, 4, 5]. Here, we restrict our focus to maximum expected accuracy parsing procedures and seek training criteria which optimize the performance of a CRF-based
maximum expected accuracy parser.
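Given the per-position posteriors, the decoding step itself is trivial; a sketch (ours), assuming the marginals have already been computed by forward-backward:

```python
import numpy as np

# Maximum expected accuracy (posterior) decoding: marginals is an (L, |Y|)
# array of P_w(y_j = i | x); each label is chosen independently per Eq. (2).
def mea_decode(marginals):
    return marginals.argmax(axis=1)
```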
3
Training conditional random fields
Usually, CRFs are trained in the batch setting, where a complete set $\mathcal{D} = \{(x^{(t)}, y^{(t)})\}_{t=1}^m$ of
training examples is available up front. In this case, training amounts to numerical optimization of
a fixed objective function R(w : D). A good objective function is one whose optimal value leads
to parameters that perform well, in an application-dependent sense, on previously unseen testing
examples. While this can be difficult to achieve without knowing the contents of the testing set, one
can, under certain conditions, guarantee that the accuracy of a learned CRF on an unseen testing set
is probably not much worse than its accuracy on the training set.
In particular, when assuming independently and identically distributed (i.i.d.) training and testing
examples, there exists a probabilistic bound on the difference between empirical risk and generalization error [2]. As long as enough training data are available (relative to model complexity), strong
training set performance will imply, with high probability, similarly strong testing set performance.
Unfortunately, minimizing empirical risk for a CRF is a very difficult task. Loss functions based on
usual notions of per-label accuracy (such as Hamming loss) are typically not only nonconvex but
also not amenable to optimization by methods that make use of gradient information.
² We assume for simplicity the existence of a special initial label $y_0$.
In this section, we briefly describe three previous approaches for CRF training which optimize surrogate loss functions in lieu of the empirical risk. Then, we consider a new method for gradient-based
CRF training oriented more directly toward optimizing predictive performance on the training set.
Our method minimizes an arbitrarily accurate approximation of empirical risk, where the loss function is defined as the number of labels predicted incorrectly by maximum expected accuracy parsing.
3.1
3.1.1
Previous objective functions
Conditional log-likelihood
Conditional log-likelihood is the most commonly used objective function for training conditional
random fields. In this criterion, the loss suffered for a training example (x(t) , y(t) ) is the negative
log probability of the true parse according to the model, plus a regularization term:
$$R_{\mathrm{CLL}}(w : \mathcal{D}) = C\|w\|^2 - \sum_{t=1}^m \log P_w(y^{(t)} \mid x^{(t)}) \quad (3)$$
The convexity and differentiability of conditional log-likelihood ensure that gradient-based optimization procedures (e.g., conjugate gradient or L-BFGS [6]) will not converge to suboptimal local
minima of the objective function.
However, there is no guarantee that the parameters obtained by conditional log-likelihood training
will lead to the best per-label predictive accuracy, even on the training set. For one, maximum
likelihood training explicitly considers only the probability of exact training parses. Other parses,
even highly accurate ones, are ignored except insofar as they share common features with the exact
parse. In addition, the log-likelihood of a parse is largely determined by the sections which are most
difficult to correctly label. This can be a weakness in problems with significant label noise (i.e.,
incorrectly labeled training examples).
3.1.2
Pointwise conditional log likelihood
Kakade et al. investigated an alternative nonconvex training objective for CRFs [7, 8] which considers separately the posterior label probabilities at each position of each training sequence. In
this approach, one maximizes not the probability of an entire parse, but instead the product of the
posterior probabilities (or equivalently, sum of log posteriors) for each predicted label:
$$R_{\mathrm{pointwise}}(w : \mathcal{D}) = C\|w\|^2 - \sum_{t=1}^m \sum_{j=1}^L \log P_w(y_j^{(t)} \mid x^{(t)}) \quad (4)$$
By using pointwise posterior probabilities, this objective function takes into account suboptimal
parses and focuses on finding a model whose posteriors match well with the training labels, even
though the model may not provide a good fit for the training data as a whole.
Nevertheless, pointwise logloss is fundamentally quite different from Hamming loss. A training
procedure based on pointwise log likelihood, for example, would prefer to reduce the posterior
probability for a correct label from 0.6 to 0.4 in return for improving the posterior probability for
a hopelessly incorrect label from 0.0001 to 0.01. Thus, the objective retains the difficulties of the
regular conditional log likelihood when dealing with difficult-to-classify outlier labels.
3.1.3
Maximum margin training
The notion of Hamming distance is incorporated directly in the maximum margin training procedures of Taskar et al. [9]:
$$R_{\mathrm{max\,margin}}(w : \mathcal{D}) = C\|w\|^2 + \sum_{t=1}^m \max\!\left(0,\; \max_{y \in \mathcal{Y}^L}\left[\Delta(y, y^{(t)}) - w^\top \Delta F_{1,L}(x^{(t)}, y)\right]\right), \quad (5)$$
and Tsochantaridis et al. [10]:
$$R_{\mathrm{max\,margin}}(w : \mathcal{D}) = C\|w\|^2 + \sum_{t=1}^m \max\!\left(0,\; \max_{y \in \mathcal{Y}^L} \Delta(y, y^{(t)})\left[1 - w^\top \Delta F_{1,L}(x^{(t)}, y)\right]\right). \quad (6)$$
Here, $\Delta(y, y^{(t)})$ denotes the Hamming distance between $y$ and $y^{(t)}$, and $\Delta F_{1,L}(x^{(t)}, y) = F_{1,L}(x^{(t)}, y^{(t)}) - F_{1,L}(x^{(t)}, y)$. In the former formulation, loss is incurred when the Hamming distance between the correct parse $y^{(t)}$ and a candidate parse $y$ exceeds the obtained classification margin between $y^{(t)}$ and $y$. In the latter formulation, the amount of loss for a margin violation scales linearly with the Hamming distance between $y^{(t)}$ and $y$.
Both cases lead to convex optimization problems in which the loss incurred for a particular training
example is an upper bound on the Hamming loss between the correct parse and its highest scoring
alternative. In practice, however, this upper bound can be quite loose; thus, parameters obtained via
a maximum margin framework may be poor minimizers of empirical risk.
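Evaluating the inner maximization in (5) amounts to a loss-augmented Viterbi search: since the Hamming distance decomposes over positions, and $-w^\top \Delta F_{1,L}(x^{(t)}, y)$ equals $w^\top F_{1,L}(x^{(t)}, y)$ up to a constant in $y$, one can add 1 to the local score of every disagreeing label. A sketch (our own, assuming precomputed local scores):

```python
import numpy as np

# Loss-augmented Viterbi for Hamming loss.
# logpot: (L, |Y|, |Y|) array of w^T f(i', i, x, j); y_true: length-L labels;
# position 0 conditions on a fixed initial label with index 0.
def loss_augmented_viterbi(logpot, y_true):
    L, Y, _ = logpot.shape
    # +1 wherever the candidate label disagrees with the true label
    aug = logpot + (np.arange(Y)[None, None, :] != np.array(y_true)[:, None, None])
    v = aug[0][0]
    back = np.zeros((L, Y), dtype=int)
    for j in range(1, L):
        s = v[:, None] + aug[j]
        back[j] = s.argmax(axis=0)
        v = s.max(axis=0)
    y = [int(v.argmax())]
    for j in range(L - 1, 0, -1):
        y.append(int(back[j][y[-1]]))
    return y[::-1]
```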
3.2
Training for maximum labelwise accuracy
In each of the likelihood-based or margin-based objective functions introduced in the previous subsections, difficulties arose due to the mismatch between the chosen objective function and our notion
of empirical risk as defined by Hamming loss. In this section, we demonstrate how to construct a
smooth objective function for maximum expected accuracy parsing which more closely approximates our desired notion of empirical risk.
3.2.1
The labelwise accuracy objective function
Consider the following objective function,
$$R(w : \mathcal{D}) = \sum_{t=1}^m \sum_{j=1}^L 1\!\left\{y_j^{(t)} = \arg\max_{y_j} P_w(y_j \mid x^{(t)})\right\}. \quad (7)$$
Maximizing this objective is equivalent to minimizing empirical risk under the Hamming loss (i.e.,
the number of mispredicted labels). To obtain a smooth approximation to this objective function,
we can express the condition that the algorithm predicts the correct label for $y_j^{(t)}$ in terms of the posterior probabilities of correct and incorrect labels as
$$P_w(y_j^{(t)} \mid x^{(t)}) - \max_{y_j \neq y_j^{(t)}} P_w(y_j \mid x^{(t)}) > 0. \quad (8)$$
Substituting equation (8) back into equation (7) and replacing the indicator function with a generic function $Q(\cdot)$, we obtain
$$R_{\mathrm{labelwise}}(w) = \sum_{t=1}^m \sum_{j=1}^L Q\!\left(P_w(y_j^{(t)} \mid x^{(t)}) - \max_{y_j \neq y_j^{(t)}} P_w(y_j \mid x^{(t)})\right). \quad (9)$$
When $Q(\cdot)$ is chosen to be the indicator function, $Q(x) = 1\{x > 0\}$, we recover the original objective. By choosing a nicely behaved form for $Q(\cdot)$, however, we obtain a new objective that is easier to optimize. Specifically, we set $Q(x)$ to be sigmoidal with parameter $\lambda$ (see Figure 2a):
$$Q(x; \lambda) = \frac{1}{1 + \exp(-\lambda x)}. \quad (10)$$
As $\lambda \to \infty$, $Q(x; \lambda) \to 1\{x > 0\}$, so $R_{\mathrm{labelwise}}(w : \mathcal{D})$ approaches the objective function defined in (7). However, $R_{\mathrm{labelwise}}(w : \mathcal{D})$ is smooth for any finite $\lambda > 0$.

Because of this, we are free to use gradient-based optimization to maximize our new objective function. As $\lambda$ gets larger, the quality of our approximation of the ideal Hamming loss objective improves; however, the approximation itself also becomes less smooth and perhaps more difficult to optimize as a result. Thus, the value of $\lambda$ controls the trade-off between the accuracy of the approximation and the ease of optimization.³
3.2.2
The labelwise accuracy objective gradient
We now present an algorithm for efficiently calculating the gradient of the approximate accuracy objective. For a fixed parameter set $w$, let $\tilde{y}_j^{(t)}$ denote the label other than $y_j^{(t)}$ that has the maximum posterior probability at position $j$. Also, for notational convenience, let $y_{1:j}$ denote the variables $y_1, \ldots, y_j$.

³ In particular, note that the method of using $Q(x; \lambda)$ to approximate the step function is analogous to the log-barrier method used in convex optimization for approximating inequality constraints using a smooth function as a surrogate for the infinite height barrier. As with log-barrier optimization, performing the maximization of $R_{\mathrm{labelwise}}(w : \mathcal{D})$ using a small value of $\lambda$, and gradually increasing $\lambda$ while using the previous solution as a starting point for the new optimization, provides a viable technique for maximizing the labelwise accuracy objective.

Differentiating equation (9), we compute $\nabla_w R_{\mathrm{labelwise}}(w : \mathcal{D})$ to be⁴
$$\sum_{t=1}^m \sum_{j=1}^L Q'\!\left(P_w(y_j^{(t)} \mid x^{(t)}) - P_w(\tilde{y}_j^{(t)} \mid x^{(t)})\right) \nabla_w\!\left[P_w(y_j^{(t)} \mid x^{(t)}) - P_w(\tilde{y}_j^{(t)} \mid x^{(t)})\right]. \quad (11)$$
Using equation (1), the inner term, $P_w(y_j^{(t)} \mid x^{(t)}) - P_w(\tilde{y}_j^{(t)} \mid x^{(t)})$, is equal to
$$\frac{1}{Z(x^{(t)})} \sum_{y_{1:L}} \left[1\{y_j = y_j^{(t)}\} - 1\{y_j = \tilde{y}_j^{(t)}\}\right] \exp\left(w^\top F_{1,L}(x^{(t)}, y)\right). \quad (12)$$
we omit for lack of space. Most of the terms involved in the gradient are easy to compute using the
standard forward and backward matrices used for regular CRF inference, which we define here as
$$\alpha(i, j) = \sum_{y_{1:j}} 1\{y_j = i\}\, \exp\left(w^\top F_{1,j}(x^{(t)}, y)\right) \quad (13)$$
$$\beta(i, j) = \sum_{y_{j:L}} 1\{y_j = i\}\, \exp\left(w^\top F_{j+1,L}(x^{(t)}, y)\right). \quad (14)$$
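For reference, a sketch (ours) of these recursions in probability space; a practical implementation would work in log space or rescale to avoid underflow:

```python
import numpy as np

# Forward and backward matrices of Eqs. (13)-(14), returned as (|Y|, L).
# pot[j][i', i] = exp(w^T f(i', i, x, j)); fixed initial label index 0.
def forward_backward(pot):
    L, Y, _ = pot.shape
    alpha = np.zeros((Y, L))
    beta = np.zeros((Y, L))
    alpha[:, 0] = pot[0][0]                 # alpha(i, 1), via y_0 = 0
    for j in range(1, L):
        alpha[:, j] = alpha[:, j - 1] @ pot[j]
    beta[:, L - 1] = 1.0                    # empty suffix: exp(0) = 1
    for j in range(L - 2, -1, -1):
        beta[:, j] = pot[j + 1] @ beta[:, j + 1]
    return alpha, beta
```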
The two difficult terms that do not follow from the forward and backward matrices have the form
$$\sum_{k=1}^L \sum_{y_{1:L}} Q'_k(w) \cdot 1\{y_k = y_k^\star\} \cdot F_{1,L}(x^{(t)}, y) \cdot \exp\left(w^\top F_{1,L}(x^{(t)}, y)\right), \quad (15)$$
where $Q'_j(w) = Q'\!\left(P_w(y_j^{(t)} \mid x^{(t)}) - P_w(\tilde{y}_j^{(t)} \mid x^{(t)})\right)$ and $y^\star$ is either $y^{(t)}$ or $\tilde{y}^{(t)}$. To efficiently compute terms of this type, we define
$$\alpha^\sharp(i, j) = \sum_{k=1}^j \sum_{y_{1:j}} 1\{y_k = y_k^\star \wedge y_j = i\} \cdot Q'_k(w) \cdot \exp\left(w^\top F_{1,j}(x^{(t)}, y)\right) \quad (16)$$
$$\beta^\sharp(i, j) = \sum_{k=j+1}^L \sum_{y_{j:L}} 1\{y_k = y_k^\star \wedge y_j = i\} \cdot Q'_k(w) \cdot \exp\left(w^\top F_{j+1,L}(x^{(t)}, y)\right). \quad (17)$$
Like the forward and backward matrices, $\alpha^\sharp(i, j)$ and $\beta^\sharp(i, j)$ may be calculated via dynamic programming. In particular, we have the base cases $\alpha^\sharp(i, 1) = 1\{i = y_1^\star\} \cdot \alpha(i, 1) \cdot Q'_1(w)$ and $\beta^\sharp(i, L) = 0$. The remaining entries are given by the following recurrences:
$$\alpha^\sharp(i, j) = \sum_{i'} \left[\alpha^\sharp(i', j-1) + 1\{i = y_j^\star\} \cdot \alpha(i', j-1) \cdot Q'_j(w)\right] e^{w^\top f(i', i, x^{(t)}, j)} \quad (18)$$
$$\beta^\sharp(i, j) = \sum_{i'} \left[\beta^\sharp(i', j+1) + 1\{i' = y_{j+1}^\star\} \cdot \beta(i', j+1) \cdot Q'_{j+1}(w)\right] e^{w^\top f(i, i', x^{(t)}, j+1)}. \quad (19)$$
It follows that equation (15) is equal to
$$\sum_{j=1}^L \sum_{i'} \sum_i f(i', i, x^{(t)}, j)\, \exp\left(w^\top f(i', i, x^{(t)}, j)\right) (A + B), \quad (20)$$
where
$$A = \alpha^\sharp(i', j-1) \cdot \beta(i, j) + \alpha(i', j-1) \cdot \beta^\sharp(i, j) \quad (21)$$
$$B = 1\{i = y_j^\star\} \cdot \alpha(i', j-1) \cdot \beta(i, j) \cdot Q'_j(w). \quad (22)$$
Thus, the algorithm above computes the gradient in $O(|\mathcal{Y}|^2 \cdot L)$ time and $O(|\mathcal{Y}| \cdot L)$ space. Since $\alpha^\sharp(i, j)$ and $\beta^\sharp(i, j)$ must be computed for both $y^\star = y^{(t)}$ and $y^\star = \tilde{y}^{(t)}$, the resulting total gradient computation takes approximately three times as long and uses twice the memory of the analogous computation for the log likelihood gradient.⁵
⁴ Technically, the max function is not differentiable. One could replace the max with a softmax function, and assuming unique probabilities for each candidate label, the gradient approaches (11) as the softmax function approaches the max. As noted in [11], the approximation used here does not cause problems in practice.
⁵ We note that the "trick" used in the formulation of approximate accuracy is applicable to a variety of other forms and arguments for $Q(\cdot)$. In particular, if we change its argument to $P_w(y_j^{(t)} \mid x^{(t)})$, letting $Q(x) = \log(x)$ gives the pointwise logloss formulation of Kakade et al. (see section 3.1.2), while letting $Q(x) = x$ gives an objective function equal to expected accuracy. Computing the gradient for these objectives involves straightforward modifications of the recurrences presented here.
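To make the recursion concrete, here is a sketch (our own code) of the $\alpha^\sharp$ recurrence (18); the $\beta^\sharp$ recurrence (19) is symmetric:

```python
import numpy as np

# alpha-sharp recursion of Eqs. (16) and (18), returned as (|Y|, L).
# pot[j] = exp(w^T f(i', i, x, j)); alpha from the forward pass;
# Qp[j] = Q'_j(w); ystar is the reference labeling (y^(t) or y-tilde^(t)).
def alpha_sharp(pot, alpha, Qp, ystar):
    L, Y, _ = pot.shape
    a_sh = np.zeros((Y, L))
    a_sh[ystar[0], 0] = alpha[ystar[0], 0] * Qp[0]   # base case
    for j in range(1, L):
        carry = a_sh[:, j - 1] @ pot[j]              # propagate earlier hits
        new = (alpha[:, j - 1] @ pot[j][:, ystar[j]]) * Qp[j]
        a_sh[:, j] = carry
        a_sh[ystar[j], j] += new                     # 1{i = y_j^star} term
    return a_sh
```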
[Figure 1, panel (b) plot: labelwise accuracy (roughly 0.64-0.80) versus the noise parameter p (0-0.4) for four training criteria: joint log-likelihood, conditional log-likelihood, maximum margin, and maximum labelwise accuracy.]
Figure 1: Panel (a) shows the state diagram for the hidden Markov model used for the simulation
experiments. The HMM consists of two states ("C" and "I") with transition probabilities labeled on the arrows, and emission probabilities (over the alphabet {A, C, G, T}) written inside
each state. Panel (b) shows the proportion of state labels correctly predicted by the learned models at varying levels of label noise. The error bars show 95% confidence intervals on the mean
generalization performance.
4
Results
4.1
Simulation experiments
To test the performance of the approximate labelwise accuracy objective function, we first ran simulation experiments in order to assess the robustness of several different learning algorithms in problems with a high degree of label noise. In particular, we generated sequences of length 1,000,000
from a simple two-state hidden Markov model (see Figure 1a). Given a fixed noise parameter
$p \in [0, 1]$, we generated training sequence labels by flipping each run of consecutive "C" hidden state labels to "I" with probability p. After learning parameters, we then tested each algorithm on an uncorrupted testing sequence generated by the original HMM.
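A sketch (ours) of the run-flipping noise process just described:

```python
import random

# Flip each maximal run of 'C' labels to 'I' with probability p, per run.
def corrupt(labels, p, rng=random.Random(0)):
    out, j = list(labels), 0
    while j < len(out):
        if out[j] == 'C':
            k = j
            while k < len(out) and out[k] == 'C':
                k += 1                       # k: end of this run of 'C'
            if rng.random() < p:
                out[j:k] = ['I'] * (k - j)
            j = k
        else:
            j += 1
    return out
```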
Figure 1b indicates the proportion of labels correctly identified by four different methods at varying
noise levels: a generative model trained with joint log-likelihood, a CRF trained with conditional
log-likelihood, the maximum-margin method of Taskar et al. [9] as implemented in the SVMstruct
package [10]⁶, and a CRF trained with maximum labelwise accuracy. No method outperforms
maximum labelwise accuracy at any noise level. For levels of noise above 0.05, maximum labelwise
accuracy performs significantly better than the other methods.
For each method, we used the decoding algorithm (Viterbi or MEA) that led to the best performance.
The maximum margin method performed best when Viterbi decoding was used, while the other three
methods had better performance with MEA decoding. Interestingly, with no noise present, maximum margin training with Viterbi decoding performed significantly better than generative training
with Viterbi decoding (0.749 vs. 0.710), but this was still much worse than generative training with
MEA decoding (0.796).
4.2
Gene prediction experiments
To test the performance of maximum labelwise accuracy training on a large-scale, real world problem, we trained a CRF to predict protein coding genes in the genome of the fruit fly Drosophila
melanogaster. The CRF labeled each base pair of a DNA sequence according to its predicted functional category: intergenic, protein coding, or intronic. The features used in the model were of two
types: transitions between labels and trimer composition.
The CRF was trained on approximately 28 million base pairs labeled according to annotations from
the FlyBase database [12]. The predictions were evaluated on a separate testing set of the same size.
Three separate training runs were performed, using three different objective functions: maximum
⁶ We were unable to get SVMstruct to converge on our test problem when using the Tsochantaridis et al. maximum margin formulation.
[Figure 2 plots: panel (a) graphs three pointwise losses against P(incorrect label) - P(correct label): pointwise logloss, zero-one loss, and Q(x; 15); panels (b)-(d) plot the objective value (log-likelihood per length where applicable) together with training and testing accuracy (roughly 0.66-0.82) over 50 training iterations.]
Figure 2: Panel (a) compares three pointwise loss functions in the special case where a label has two possible values. The green curve ($f(x) = -\log(\frac{1-x}{2})$) depicts pointwise logloss; the red curve represents the ideal zero-one loss; and the blue curve gives the sigmoid approximation with parameter 15. Panels (b), (c), and (d) show gene prediction learning curves using three training objective functions: (b) maximum labelwise (approximate) accuracy, (c) maximum conditional log-likelihood, and (d) maximum pointwise conditional log-likelihood, respectively. In each case, parameters were initialized to their generative model estimates.
likelihood, maximum pointwise likelihood, and maximum labelwise accuracy. Each run was started
from an initial guess calculated using HMM-style generative parameter estimation.⁷
Figures 2b, 2c, and 2d show the value of the objective function and the average label accuracy
at each iteration of the three training runs. Here, maximum accuracy training improves upon the
accuracy of the original generative parameters and outperforms the other two training objectives.
In contrast, maximum likelihood training and maximum pointwise likelihood training both give
worse performance than the simple generative parameter estimates. Evidently, for this problem the
likelihood-based functions are poor surrogate measures for per-label accuracy: Figures 2c and 2d
show declines in training and testing set accuracy, despite increases in the objective function.
5
Discussion and related work
In contrast to most previous work describing alternative objective functions for CRFs, the method
described in this paper optimizes a direct approximation of the Hamming loss. A few notable papers
have also dealt with the problem of minimizing empirical risk directly. For binary classifiers, Jansche showed that an algorithm designed to optimize F-measure performance of a logistic regression
model for information extraction outperforms maximum likelihood training [14]. For parsing tasks,
Och demonstrated that a statistical machine translation system choosing between a small finite collection of candidate parses achieves better accuracy when it is trained to minimize error rate instead
⁷ We did not include maximum margin methods in this comparison; existing software packages for maximum margin training, based on the cutting plane algorithm [10] or decomposition techniques such as
SMO [9, 13], are not easily parallelizable and scale poorly for large datasets, such as those encountered in
gene prediction.
of optimizing the more traditional maximum mutual information criterion [15]. Unlike Och's algorithm, our method does not require one to provide a small set of candidate parses, instead relying on
efficient dynamic programming recurrences for all computations.
After this work was submitted for consideration, a Minimum Classification Error (MCE) method for
training CRFs to minimize empirical risk was independently proposed by Suzuki et al. [11]. This
technique minimizes the loss incurred by maximum a posteriori, rather than maximum expected
accuracy, parsing on the training set. In practice, Viterbi parsers often achieve worse per-label
accuracy than maximum expected accuracy parsers [3, 4, 5]; we are currently exploring whether a
similar relationship also exists between MCE methods and our proposed training objective.
The training method described in this work is theoretically attractive, as it addresses the goal of
empirical risk minimization in a very direct way. In addition to its theoretical appeal, we have shown
that it performs much better than maximum likelihood and maximum pointwise likelihood training
on a large scale, real world problem. Furthermore, our method is efficient, having time complexity
approximately three times that of maximum likelihood likelihood training, and easily parallelizable,
as each training example can be considered independently when evaluating the objective function
or its gradient. The chief disadvantage of our formulation is its nonconvexity. In practice, this can
be combatted by initializing the optimization with a parameter vector obtained by a convex training
method. At present, the extent of the effectiveness of our method and the characteristics of problems
for which it performs well are not clear. Further work applying our method to a variety of sequence
labeling tasks is needed to investigate these questions.
6
Acknowledgments
SSG and CBD were supported by NDSEG fellowships. We thank Andrew Ng for useful discussions.
References
[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[2] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] C. B. Do, M. S. P. Mahabhashyam, M. Brudno, and S. Batzoglou. ProbCons: probabilistic consistency-based multiple sequence alignment. Genome Research, 15(2):330-340, 2005.
[4] C. B. Do, D. A. Woods, and S. Batzoglou. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90-e98, 2006.
[5] P. Liang, B. Taskar, and D. Klein. Alignment by agreement. In HLT-NAACL, 2006.
[6] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[7] S. Kakade, Y. W. Teh, and S. Roweis. An alternate objective function for Markovian fields. In ICML, 2002.
[8] Y. Altun, M. Johnson, and T. Hofmann. Investigating loss functions and optimization methods for discriminative learning of label sequences. In EMNLP, 2003.
[9] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. In NIPS, 2003.
[10] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[11] J. Suzuki, E. McDermott, and H. Isozaki. Training conditional random fields with multivariate evaluation measures. In ACL, 2006.
[12] G. Grumbling, V. Strelets, and The FlyBase Consortium. FlyBase: anatomical data, images and queries. Nucleic Acids Research, 34:D484-D488, 2006.
[13] J. Platt. Using sparseness and analytic QP to speed training of support vector machines. In NIPS, 1999.
[14] M. Jansche. Maximum expected F-measure training of logistic regression models. In EMNLP, 2005.
[15] F. J. Och. Minimum error rate training in statistical machine translation. In ACL, 2003.
| 2992 | [vw_text: Vowpal Wabbit bag-of-words feature vector for this entry, omitted] |
2,195 | 2,993 | Bayesian Policy Gradient Algorithms
Mohammad Ghavamzadeh
Yaakov Engel
Department of Computing Science, University of Alberta
Edmonton, Alberta, Canada T6E 4Y8
{mgh,yaki}@cs.ualberta.ca
Abstract
Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. Conventional policy gradient methods use Monte-Carlo techniques to estimate this gradient. Since
Monte Carlo methods tend to have high variance, a large number of samples is
required, resulting in slow convergence. In this paper, we propose a Bayesian
framework that models the policy gradient as a Gaussian process. This reduces
the number of samples needed to obtain accurate gradient estimates. Moreover,
estimates of the natural gradient as well as a measure of the uncertainty in the
gradient estimates are provided at little extra cost.
1 Introduction
Policy Gradient (PG) methods are Reinforcement Learning (RL) algorithms that maintain a parameterized action-selection policy and update the policy parameters by moving them in the direction
of an estimate of the gradient of a performance measure. Early examples of PG algorithms are the
class of REINFORCE algorithms of Williams [1] which are suitable for solving problems in which
the goal is to optimize the average reward. Subsequent work (e.g., [2, 3]) extended these algorithms
to the cases of infinite-horizon Markov decision processes (MDPs) and partially observable MDPs
(POMDPs), and provided much needed theoretical analysis. However, both the theoretical results
and empirical evaluations have highlighted a major shortcoming of these algorithms, namely, the
high variance of the gradient estimates. This problem may be traced to the fact that in most interesting cases, the time-average of the observed rewards is a high-variance (although unbiased) estimator
of the true average reward, resulting in the sample-inefficiency of these algorithms.
One solution proposed for this problem was to use a small (i.e., smaller than 1) discount factor in
these algorithms [2, 3], however, this creates another problem by introducing bias into the gradient
estimates. Another solution, which does not involve biasing the gradient estimate, is to subtract
a reinforcement baseline from the average reward estimate in the updates of PG algorithms (e.g.,
[4, 1]). Another approach for speeding-up policy gradient algorithms was recently proposed in [5]
and extended in [6, 7]. The idea is to replace the policy-gradient estimate with an estimate of the
so-called natural policy-gradient. This is motivated by the requirement that a change in the way the
policy is parametrized should not influence the result of the policy update. In terms of the policy
update rule, the move to a natural-gradient rule amounts to linearly transforming the gradient using
the inverse Fisher information matrix of the policy.
However, both conventional and natural policy gradient methods rely on Monte-Carlo (MC) techniques to estimate the gradient of the performance measure. Monte-Carlo estimation is a frequentist
procedure, and as such violates the likelihood principle [8].¹ Moreover, although MC estimates are
unbiased, they tend to produce high variance estimates, or alternatively, require excessive sample
sizes (see [9] for a discussion).
¹ The likelihood principle states that in a parametric statistical model, all the information about a data sample that is required for inferring the model parameters is contained in the likelihood function of that sample.
In [10] a Bayesian alternative to MC estimation is proposed. The idea is to model integrals of
the form ∫ f(x)p(x)dx as Gaussian Processes (GPs). This is done by treating the first term f in
the integrand as a random function, the randomness of which reflects our subjective uncertainty
concerning its true identity. This allows us to incorporate our prior knowledge on f into its prior
distribution. Observing (possibly noisy) samples of f at a set of points (x1 , x2 , . . . , xM ) allows us
to employ Bayes' rule to compute a posterior distribution of f, conditioned on these samples. This,
in turn, induces a posterior distribution over the value of the integral. In this paper, we propose a
Bayesian framework for policy gradient, by modeling the gradient as a GP. This reduces the number
of samples needed to obtain accurate gradient estimates. Moreover, estimates of the natural gradient
and the gradient covariance are provided at little extra cost.
2 Reinforcement Learning and Policy Gradient Methods
Reinforcement Learning (RL) [11, 12] is a class of learning problems in which an agent interacts with an unfamiliar, dynamic and stochastic environment, where the agent's goal is to optimize some measure of its long-term performance. This interaction is conventionally modeled as a MDP. Let P(S) be the set of probability distributions on (Borel) subsets of a set S. A MDP is a tuple (X, A, q, P, P₀) where X and A are the state and action spaces, respectively; q(·|a, x) ∈ P(R) is the probability distribution over rewards; P(·|a, x) ∈ P(X) is the transition probability distribution (we assume that P and q are stationary); and P₀(·) ∈ P(X) is the initial state distribution. We denote the random variable distributed according to q(·|a, x) as r(x, a). In addition, we need to specify the rule according to which the agent selects actions at each possible state. We assume that this rule does not depend explicitly on time. A stationary policy π(·|x) ∈ P(A) is a probability distribution over actions, conditioned on the current state. The MDP controlled by the policy π induces a Markov chain over state-action pairs. We generically denote by τ = (x₀, a₀, x₁, a₁, ..., x_{T-1}, a_{T-1}, x_T) a
path generated by this Markov chain. The probability (or density) of such a path is given by
$$\Pr(\tau|\theta) = P_0(x_0) \prod_{t=0}^{T-1} \pi(a_t|x_t)\, P(x_{t+1}|x_t, a_t). \qquad (1)$$

We denote by $R(\tau) = \sum_{t=0}^{T} \gamma^t r(x_t, a_t)$ the (possibly discounted, γ ∈ [0, 1]) cumulative return of the path τ. R(τ) is a random variable both because the path τ is a random variable, and because even for a given path, each of the rewards sampled in it may be stochastic. The expected value of R(τ) for a given τ is denoted by R̄(τ). Finally, let us define the expected return,

$$\eta(\theta) = E(R(\tau)) = \int \bar{R}(\tau)\, \Pr(\tau|\theta)\, d\tau. \qquad (2)$$
Gradient-based approaches to policy search in RL have recently received much attention as a means to sidetrack problems of partial observability and of policy oscillations and even divergence encountered in value-function based methods (see [11], Sec. 6.4.2 and 6.5.3). In policy gradient (PG) methods, we define a class of smoothly parameterized stochastic policies {π(·|x; θ), x ∈ X, θ ∈ Θ}, estimate the gradient of the expected return (2) with respect to the policy parameters θ from observed system trajectories, and then improve the policy by adjusting the parameters in the direction of the gradient [1, 2, 3]. The gradient of the expected return η(θ) = η(π(·|·; θ)) is given by²

$$\nabla\eta(\theta) = \int \bar{R}(\tau)\, \frac{\nabla \Pr(\tau; \theta)}{\Pr(\tau; \theta)}\, \Pr(\tau; \theta)\, d\tau, \qquad (3)$$

where Pr(τ; θ) = Pr(τ|π(·|·; θ)). The quantity ∇Pr(τ; θ)/Pr(τ; θ) = ∇log Pr(τ; θ) is known as the score function or likelihood ratio. Since the initial state distribution P₀ and the transition distribution P are independent of the policy parameters θ, we can write the score of a path τ using Eq. 1 as

$$u(\tau) = \frac{\nabla \Pr(\tau; \theta)}{\Pr(\tau; \theta)} = \sum_{t=0}^{T-1} \frac{\nabla \pi(a_t|x_t; \theta)}{\pi(a_t|x_t; \theta)} = \sum_{t=0}^{T-1} \nabla \log \pi(a_t|x_t; \theta). \qquad (4)$$

² Throughout the paper, we use the notation ∇ to denote ∇_θ, the gradient w.r.t. the policy parameters.
Previous work on policy gradient methods used classical Monte-Carlo to estimate the gradient in Eq. 3. These methods generate i.i.d. sample paths τ₁, ..., τ_M according to Pr(τ; θ), and estimate the gradient ∇η(θ) using the MC estimator
$$\hat{\nabla}\eta_{MC}(\theta) = \frac{1}{M} \sum_{i=1}^{M} R(\tau_i)\, \nabla \log \Pr(\tau_i; \theta) = \frac{1}{M} \sum_{i=1}^{M} R(\tau_i) \sum_{t=0}^{T_i - 1} \nabla \log \pi(a_{t,i}|x_{t,i}; \theta). \qquad (5)$$
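To make the estimator concrete, the following is a minimal NumPy sketch of Eq. 5 (ours, not from the paper); `sample_path` and `grad_log_policy` are hypothetical helpers that return one trajectory and the score ∇log π(a|x; θ), and returns are taken undiscounted:

```python
import numpy as np

def mc_policy_gradient(theta, sample_path, grad_log_policy, M):
    """Classical MC estimate of the policy gradient, Eq. 5.

    sample_path(theta)           -> (states, actions, rewards) of one trajectory
    grad_log_policy(x, a, theta) -> gradient of log pi(a|x; theta), shape (n,)
    """
    grad = np.zeros_like(theta, dtype=float)
    for _ in range(M):
        states, actions, rewards = sample_path(theta)
        R = np.sum(rewards)  # cumulative return R(tau_i), undiscounted here
        u = np.sum([grad_log_policy(x, a, theta)
                    for x, a in zip(states, actions)], axis=0)  # score u(tau_i), Eq. 4
        grad += R * u
    return grad / M
```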
3 Bayesian Quadrature
Bayesian quadrature (BQ) [10] is a Bayesian method for evaluating an integral using samples of its
integrand. We consider the problem of evaluating the integral
$$\rho = \int f(x)\, p(x)\, dx. \qquad (6)$$
If p(x) is a probability density function, this becomes the problem of evaluating the expected value of f(x). In MC estimation of such expectations, samples (x₁, x₂, ..., x_M) are drawn from p(x), and the integral is estimated as $\hat{\rho}_{MC} = \frac{1}{M}\sum_{i=1}^{M} f(x_i)$. $\hat{\rho}_{MC}$ is an unbiased estimate of ρ, with variance that diminishes to zero as M → ∞. However, as O'Hagan points out, MC estimation is fundamentally unsound, as it violates the likelihood principle, and moreover, does not make full use of the data at hand [9].
The alternative proposed in [10] is based on the following reasoning: In the Bayesian approach, f(·) is random simply because it is numerically unknown. We are therefore uncertain about the value of f(x) until we actually evaluate it. In fact, even then, our uncertainty is not always completely removed, since measured samples of f(x) may be corrupted by noise. Modeling f as a Gaussian process (GP) means that our uncertainty is completely accounted for by specifying a Normal prior distribution over functions. This prior distribution is specified by its mean and covariance, and is denoted by f(·) ∼ N{f₀(·), k(·, ·)}. This is shorthand for the statement that f is a GP with prior mean E(f(x)) = f₀(x) and covariance Cov(f(x), f(x′)) = k(x, x′), respectively. The choice of kernel function k allows us to incorporate prior knowledge on the smoothness properties of the integrand into the estimation procedure. When we are provided with a set of samples D_M = {(x_i, y_i)}_{i=1}^{M}, where y_i is a (possibly noisy) sample of f(x_i), we apply Bayes' rule to condition the prior on these sampled values. If the measurement noise is normally distributed, the result is a Normal posterior distribution of f|D_M. The expressions for the posterior mean and covariance are standard:
$$E(f(x)|D_M) = f_0(x) + k_M(x)^\top C_M (y_M - f_0), \qquad \mathrm{Cov}(f(x), f(x')|D_M) = k(x, x') - k_M(x)^\top C_M k_M(x'). \qquad (7)$$
Here and in the sequel, we make use of the definitions:

$$f_0 = (f_0(x_1), \ldots, f_0(x_M))^\top, \quad y_M = (y_1, \ldots, y_M)^\top, \quad k_M(x) = (k(x_1, x), \ldots, k(x_M, x))^\top,$$
$$[K_M]_{i,j} = k(x_i, x_j), \quad C_M = (K_M + \Sigma_M)^{-1},$$

and [Σ_M]_{i,j} is the measurement noise covariance between the ith and jth samples. Typically, it is assumed that the measurement noise is i.i.d., in which case Σ_M = σ²I, where σ² is the noise variance and I is the identity matrix.
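As an illustration of Eq. 7, the following is a generic sketch of the GP posterior under the i.i.d.-noise assumption Σ_M = σ²I (our code, not from the paper; `k` and `f0` are the kernel and prior-mean functions):

```python
import numpy as np

def gp_posterior(x_star, X, y, f0, k, sigma2):
    """Posterior mean and variance of f at x_star given D_M = {(x_i, y_i)}, Eq. 7."""
    M = len(X)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])   # [K_M]_ij = k(x_i, x_j)
    C = np.linalg.inv(K + sigma2 * np.eye(M))             # C_M = (K_M + sigma^2 I)^-1
    k_star = np.array([k(xi, x_star) for xi in X])        # k_M(x_star)
    resid = y - np.array([f0(xi) for xi in X])            # y_M - f_0
    mean = f0(x_star) + k_star @ C @ resid
    var = k(x_star, x_star) - k_star @ C @ k_star
    return mean, var
```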
Since integration is a linear operation, the posterior distribution of the integral in Eq. 6 is also
Gaussian, and the posterior moments are given by
$$E(\rho|D_M) = \int E(f(x)|D_M)\, p(x)\, dx, \qquad \mathrm{Var}(\rho|D_M) = \iint \mathrm{Cov}(f(x), f(x')|D_M)\, p(x)\, p(x')\, dx\, dx'. \qquad (8)$$
Substituting Eq. 7 into Eq. 8, we get
$$E(\rho|D_M) = \rho_0 + z_M^\top C_M (y_M - f_0), \qquad \mathrm{Var}(\rho|D_M) = z_0 - z_M^\top C_M z_M, \qquad (9)$$

where we made use of the definitions:

$$\rho_0 = \int f_0(x)\, p(x)\, dx, \quad z_M = \int k_M(x)\, p(x)\, dx, \quad z_0 = \iint k(x, x')\, p(x)\, p(x')\, dx\, dx'. \qquad (10)$$

Note that ρ₀ and z₀ are the prior mean and variance of ρ, respectively.
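A minimal sketch of Eqs. 9 and 10 (our code; it assumes ρ₀, z_M, and z₀ have already been computed analytically for the chosen kernel k and density p, as in Bayes-Hermite quadrature):

```python
import numpy as np

def bq_posterior(y, f0_vec, K, z_M, z0, rho0, sigma2):
    """Posterior mean and variance of the integral rho, Eqs. 9-10.

    y, f0_vec : samples and prior means at x_1..x_M, shape (M,)
    K         : Gram matrix, shape (M, M)
    z_M       : integral of k_M(x) p(x) dx, shape (M,)
    z0, rho0  : prior variance and prior mean of rho (scalars)
    """
    C = np.linalg.inv(K + sigma2 * np.eye(len(y)))  # C_M
    mean = rho0 + z_M @ C @ (y - f0_vec)
    var = z0 - z_M @ C @ z_M
    return mean, var
```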
                       Model 1                                          Model 2
Known part             p(τ; θ) = Pr(τ; θ)                               p(τ; θ) = ∇Pr(τ; θ)
Uncertain part         f(τ; θ) = R̄(τ)∇log Pr(τ; θ)                      f(τ) = R̄(τ)
Measurement            y(τ) = R(τ)∇log Pr(τ; θ)                         y(τ) = R(τ)
Prior mean of f        E(f(τ; θ)) = 0                                   E(f(τ)) = 0
Prior cov. of f        Cov(f(τ; θ), f(τ'; θ)) = k(τ, τ')I               Cov(f(τ), f(τ')) = k(τ, τ')
E(∇η_B(θ)|D_M)         Y_M C_M z_M                                      Z_M C_M y_M
Cov(∇η_B(θ)|D_M)       (z_0 - z_M^⊤ C_M z_M) I                          Z_0 - Z_M C_M Z_M^⊤
Kernel function        k(τ_i, τ_j) = (1 + u(τ_i)^⊤ G⁻¹ u(τ_j))²         k(τ_i, τ_j) = u(τ_i)^⊤ G⁻¹ u(τ_j)
z_M                    (z_M)_i = 1 + u(τ_i)^⊤ G⁻¹ u(τ_i)                Z_M = U_M
z_0                    z_0 = 1 + n                                      Z_0 = G

Table 1: Summary of the Bayesian policy gradient Models 1 and 2.
In order to prevent the problem from "degenerating into infinite regress", as phrased by O'Hagan [10], we should choose the functions p, k, and f₀ so as to allow us to solve the integrals in Eq. 10 analytically. For instance, O'Hagan provides the analysis required for the case where the integrands
in Eq. 10 are products of multivariate Gaussians and polynomials, referred to as Bayes-Hermite
quadrature. One of the contributions of the present paper is in providing analogous analysis for
kernel functions that are based on the Fisher kernel [13, 14]. It is important to note that in MC
estimation, samples must be drawn from the distribution p(x), whereas in the Bayesian approach,
samples may be drawn from arbitrary distributions. This affords us with flexibility in the choice of
sample points, allowing us, for instance to actively design the samples (x1 , x2 , . . . , xM ).
4 Bayesian Policy Gradient
In this section, we use Bayesian quadrature to estimate the gradient of the expected return with respect to the policy parameters, and propose Bayesian policy gradient (BPG) algorithms. In the frequentist approach to policy gradient our performance measure was η(θ) from Eq. 2, which is the result of averaging the cumulative return R(τ) over all possible paths τ and all possible returns accumulated in each path. In the Bayesian approach we have an additional source of randomness, which is our subjective Bayesian uncertainty concerning the process generating the cumulative returns. Let us denote

$$\eta_B(\theta) = \int R(\tau)\, \Pr(\tau; \theta)\, d\tau. \qquad (11)$$
η_B(θ) is a random variable both because of the noise in R(τ) and the Bayesian uncertainty. Under the quadratic loss, our Bayesian performance measure is E(η_B(θ)|D_M). Since we are interested in optimizing performance rather than evaluating it, we evaluate the posterior distribution of the gradient of η_B(θ). For the mean we have
$$\nabla E(\eta_B(\theta)|D_M) = E(\nabla\eta_B(\theta)|D_M) = E\!\left( \int \bar{R}(\tau)\, \frac{\nabla \Pr(\tau; \theta)}{\Pr(\tau; \theta)}\, \Pr(\tau; \theta)\, d\tau \;\Big|\; D_M \right). \qquad (12)$$
Consequently, in BPG we cast the problem of estimating the gradient of the expected return in the form of Eq. 6. As described in Sec. 3, we partition the integrand into two parts, f(τ; θ) and p(τ; θ). We will place the GP prior over f and assume that p is known. We will then proceed by calculating the posterior moments of the gradient ∇η_B(θ) conditioned on the observed data. Next, we investigate two different ways of partitioning the integrand in Eq. 12, resulting in two distinct Bayesian models. Table 1 summarizes the two models we use in this work. Our choice of Fisher-type kernels was motivated by the notion that a good representation should depend on the data generating process (see [13, 14] for a thorough discussion). Our particular choices of linear and quadratic Fisher kernels were guided by the requirement that the posterior moments of the gradient be analytically
tractable. In Table 1 we made use of the following definitions: F_M = (f(τ₁; θ), ..., f(τ_M; θ)) ∼ N(0, K_M), Y_M = (y(τ₁), ..., y(τ_M)) ∼ N(0, K_M + σ²I), U_M = (u(τ₁), u(τ₂), ..., u(τ_M)),

$$Z_M = \int \nabla \Pr(\tau; \theta)\, k_M(\tau)^\top d\tau, \qquad Z_0 = \iint k(\tau, \tau')\, \nabla \Pr(\tau; \theta)\, \nabla \Pr(\tau'; \theta)^\top d\tau\, d\tau'.$$

Finally, n is the number of policy parameters, and G = E(u(τ)u(τ)^⊤) is the Fisher information matrix.
We can now use Models 1 and 2 to define algorithms for evaluating the gradient of the expected
return with respect to the policy parameters. The pseudo-code for these algorithms is shown in
Alg. 1. The generic algorithm (for either model) takes a set of policy parameters θ and a sample size
M as input, and returns an estimate of the posterior moments of the gradient of the expected return.
Algorithm 1: A Bayesian Policy Gradient Evaluation Algorithm

1:  BPG_Eval(θ, M)    // policy parameters θ ∈ R^n, sample size M > 0 //
2:  Set G = G(θ), D_0 = ∅
3:  for i = 1 to M do
4:      Sample a path τ_i using the policy π(θ)
5:      D_i = D_{i-1} ∪ {τ_i}
6:      Compute u(τ_i) = Σ_{t=0}^{T_i-1} ∇log π(a_t|s_t; θ)
7:      R(τ_i) = Σ_{t=0}^{T_i-1} r(s_t, a_t)
8:      Update K_i using K_{i-1} and τ_i;
        y(τ_i) = R(τ_i)u(τ_i)  (Model 1)   or   y(τ_i) = R(τ_i)  (Model 2)
9:      (z_M)_i = 1 + u(τ_i)^⊤ G⁻¹ u(τ_i)  (Model 1)   or   Z_M(:, i) = u(τ_i)  (Model 2)
10: end for
11: C_M = (K_M + σ²I)⁻¹
12: Compute the posterior mean and covariance:
    (Model 1)  E(∇η_B(θ)|D_M) = Y_M C_M z_M,   Cov(∇η_B(θ)|D_M) = (z_0 - z_M^⊤ C_M z_M) I
    or
    (Model 2)  E(∇η_B(θ)|D_M) = Z_M C_M y_M,   Cov(∇η_B(θ)|D_M) = Z_0 - Z_M C_M Z_M^⊤
13: return E(∇η_B(θ)|D_M), Cov(∇η_B(θ)|D_M)
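A sketch of Algorithm 1 for Model 1 (our code, not the authors'), under stated assumptions: G is known, the hypothetical `sample_path`/`grad_log_policy` helpers from before are reused, returns are undiscounted, no sparsification is used, and the noise level `sigma2` is an arbitrary default. The kernel, z_M, and z₀ follow Table 1:

```python
import numpy as np

def bpg_eval_model1(theta, G, sample_path, grad_log_policy, M, sigma2=1e-4):
    """Algorithm 1 for Model 1: posterior moments of the gradient of eta_B."""
    n = len(theta)
    Ginv = np.linalg.inv(G)
    U, R, z = [], [], []
    for _ in range(M):
        states, actions, rewards = sample_path(theta)
        u = np.sum([grad_log_policy(x, a, theta)
                    for x, a in zip(states, actions)], axis=0)  # u(tau_i)
        U.append(u)
        R.append(np.sum(rewards))
        z.append(1.0 + u @ Ginv @ u)            # (z_M)_i, Table 1
    U, R, z = np.array(U), np.array(R), np.array(z)
    K = (1.0 + U @ Ginv @ U.T) ** 2             # quadratic Fisher kernel
    C = np.linalg.inv(K + sigma2 * np.eye(M))   # C_M
    Y = U * R[:, None]                          # rows y(tau_i) = R(tau_i) u(tau_i)
    mean = Y.T @ C @ z                          # E(grad eta_B | D_M) = Y_M C_M z_M
    cov = ((1.0 + n) - z @ C @ z) * np.eye(n)   # (z_0 - z_M^T C_M z_M) I, z_0 = 1 + n
    return mean, cov
```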
The kernel functions used in Models 1 and 2 are both based on the Fisher information matrix G(θ).
Consequently, every time we update the policy parameters we need to recompute G. In Alg. 1 we
assume that G is known, however, in most practical situations this will not be the case. Let us briefly
outline two possible approaches for estimating the Fisher information matrix.
MC Estimation: At each step j, our BPG algorithm generates M sample paths using the current policy parameters θ_j in order to estimate the gradient ∇η_B(θ_j). We can use these generated sample paths to estimate the Fisher information matrix G(θ_j) by replacing the expectation in G with empirical averaging as

$$\hat{G}_{MC}(\theta_j) = \frac{1}{\sum_{i=1}^{M} T_i} \sum_{i=1}^{M} \sum_{t=0}^{T_i - 1} \nabla \log \pi(a_t|x_t; \theta_j)\, \nabla \log \pi(a_t|x_t; \theta_j)^\top.$$
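A sketch of this empirical estimator (our code; `paths` is assumed to hold the (states, actions) pairs of the M sampled trajectories):

```python
import numpy as np

def fisher_mc(theta, paths, grad_log_policy):
    """MC estimate of the Fisher information matrix G(theta_j)."""
    n = len(theta)
    G = np.zeros((n, n))
    total = 0
    for states, actions in paths:        # (states, actions) of each sampled path
        for x, a in zip(states, actions):
            g = grad_log_policy(x, a, theta)
            G += np.outer(g, g)
            total += 1
    return G / total                     # average over all observed time steps
```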
Model-Based Policy Gradient: The Fisher information matrix depends on the probability distribution over paths. This distribution is a product of two factors, one corresponding to the current
policy, and the other corresponding to the MDP dynamics P0 and P (see Eq. 1). Thus, if the MDP
dynamics are known, the Fisher information matrix can be evaluated off-line. We can model the
MDP dynamics using some parameterized model, and estimate the model parameters using maximum likelihood or Bayesian methods. This would be a model-based approach to policy gradient,
which would allow us to transfer information between different policies.
Alg. 1 can be made significantly more efficient, both in time and memory, by sparsifying the solution. Such sparsification may be performed incrementally, and helps to numerically stabilize the
algorithm when the kernel matrix is singular, or nearly so. Here we use an on-line sparsification
method from [15] to selectively add a new observed path to a set of dictionary paths D M , which are
used as a basis for approximating the full solution. Lack of space prevents us from discussing this
method in further detail (see Chapter 2 in [15] for a thorough discussion).
The Bayesian policy gradient (BPG) algorithm is described in Alg. 2. This algorithm starts with an
initial vector of policy parameters θ₀ and updates the parameters in the direction of the posterior
mean of the gradient of the expected return, computed by Alg. 1. This is repeated N times, or
alternatively, until the gradient estimate is sufficiently close to zero.
Algorithm 2: A Bayesian Policy Gradient Algorithm

1: BPG(θ_0, α, N, M)    // initial policy parameters θ_0, learning rates (α_j)_{j=0}^{N-1}, number of policy updates N > 0, BPG_Eval sample size M > 0 //
2: for j = 0 to N - 1 do
3:     Δθ_j = E(∇η_B(θ_j)|D_M) from BPG_Eval(θ_j, M)
4:     θ_{j+1} = θ_j + α_j Δθ_j  (regular gradient)   or   θ_{j+1} = θ_j + α_j G⁻¹(θ_j) Δθ_j  (natural gradient)
5: end for
6: return θ_N
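A sketch of this outer loop (ours; `bpg_eval` stands for either model's evaluation routine, and passing a Fisher-matrix function `G_fn` switches to the natural-gradient update):

```python
import numpy as np

def bpg(theta0, alphas, N, M, bpg_eval, G_fn=None):
    """Algorithm 2: repeatedly follow the posterior mean of the gradient."""
    theta = np.array(theta0, dtype=float)
    for j in range(N):
        grad, _ = bpg_eval(theta, M)          # E(grad eta_B(theta_j) | D_M)
        if G_fn is not None:                  # natural-gradient variant
            grad = np.linalg.solve(G_fn(theta), grad)
        theta = theta + alphas[j] * grad
    return theta
```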
5 Experimental Results
In this section, we compare the BQ and MC gradient estimators in a continuous-action bandit problem and a continuous state and action linear quadratic regulation (LQR) problem. We also evaluate
the performance of the BPG algorithm (Alg. 2) on the LQR problem, and compare it with a standard
MC-based policy gradient (MCPG) algorithm.
5.1 A Bandit Problem
In this simple example, we compare the BQ and MC estimates of the gradient (for a fixed set of
policy parameters) using the same samples. Our simple bandit problem has a single state and A = R.
Thus, each path τ_i consists of a single action a_i. The policy, and therefore also the distribution over paths, is given by a ∼ N(θ₁ = 0, θ₂² = 1). The score function of the path τ = a and the Fisher information matrix are given by u(τ) = [a, a² - 1]^⊤ and G = diag(1, 2), respectively.
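As a quick sanity check (our derivation; the paper states only the result), differentiating log N(a; θ₁, θ₂²) gives

$$\frac{\partial}{\partial\theta_1} \log \pi(a;\theta) = \frac{a-\theta_1}{\theta_2^2} = a, \qquad \frac{\partial}{\partial\theta_2} \log \pi(a;\theta) = \frac{(a-\theta_1)^2 - \theta_2^2}{\theta_2^3} = a^2 - 1$$

at θ₁ = 0, θ₂ = 1, so u(τ) = [a, a² - 1]^⊤. Moreover, with a ∼ N(0, 1) we have E[a·a] = 1, E[a(a² - 1)] = E[a³] - E[a] = 0, and E[(a² - 1)²] = E[a⁴] - 2E[a²] + 1 = 3 - 2 + 1 = 2, which gives G = diag(1, 2).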
Table 2 shows the exact gradient of the expected return and its MC and BQ estimates (using 10
and 100 samples) for two versions of the simple bandit problem corresponding to two different
deterministic reward functions r(a) = a and r(a) = a². The average over 10⁴ runs of the MC and BQ estimates and their standard deviations are reported in Table 2. The true gradient is analytically tractable and is reported as "Exact" in Table 2 for reference.
            Exact       MC (10)                BQ (10)                MC (100)               BQ (100)
r(a) = a    [1, 0]^⊤    [ 0.9950 ± 0.438,      [0.9856 ± 0.050,       [1.0004 ± 0.140,       [1.000 ± 0.000001,
                         -0.0011 ± 0.977]       0.0006 ± 0.060]        0.0040 ± 0.317]        0.000 ± 0.000004]
r(a) = a²   [0, 2]^⊤    [ 0.0136 ± 1.246,      [0.0010 ± 0.082,       [0.0051 ± 0.390,       [0.000 ± 0.000003,
                         2.0336 ± 2.831]        1.9250 ± 0.226]        1.9869 ± 0.857]        2.000 ± 0.000011]
Table 2: The true gradient of the expected return and its MC and BQ estimates for two bandit problems.
As shown in Table 2, the BQ estimate has much lower variance than the MC estimate for both small
and large sample sizes. The BQ estimate also has a lower bias than the MC estimate for the large
sample size (M = 100), and almost the same bias for the small sample size (M = 10).
5.2 A Linear Quadratic Regulator
In this section, we consider the following linear system in which the goal is to minimize the expected
return over 20 steps. Thus, it is an episodic problem with paths of length 20.
System:   Initial state:  x_0 ∼ N(0.3, 0.001)
          Rewards:        r_t = x_t² + 0.1 a_t²
          Transitions:    x_{t+1} = x_t + a_t + n_x;   n_x ∼ N(0, 0.01)
Policy:   Actions:        a_t ∼ π(·|x_t; θ) = N(λ x_t, σ²)
          Parameters:     θ = (λ, σ)^⊤
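A sketch of a simulator for this system (ours; the N(μ, v) entries above are read as mean and variance):

```python
import numpy as np

def sample_lqr_path(theta, T=20, rng=np.random):
    """Simulate one length-T path of the LQR problem above."""
    lam, sigma = theta
    x = rng.normal(0.3, np.sqrt(0.001))              # x_0 ~ N(0.3, 0.001)
    states, actions, rewards = [], [], []
    for _ in range(T):
        a = rng.normal(lam * x, sigma)               # a_t ~ N(lam x_t, sigma^2)
        r = x ** 2 + 0.1 * a ** 2                    # r_t (a cost to be minimized)
        states.append(x); actions.append(a); rewards.append(r)
        x = x + a + rng.normal(0.0, np.sqrt(0.01))   # x_{t+1} = x_t + a_t + n_x
    return states, actions, rewards
```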
We first compare the BQ and MC estimates of the gradient of the expected return for the policy
induced by the parameters λ = -0.2 and σ = 1. We use several different sample sizes (number of paths used for gradient estimation) M = 5j, j = 1, ..., 20 for the BQ and MC estimates. For each sample size, we compute both the MC and BQ estimates 10⁴ times, using the same samples. The true gradient is estimated using MC with 10⁷ sample paths for comparison purposes.
Figure 1 shows the mean squared error (MSE) (first column), and the mean absolute angular error
(second column) of the MC and BQ estimates of the gradient for several different sample sizes.
The absolute angular error is the absolute value of the angle between the true gradient and the
estimated gradient. In this figure, the BQ gradient estimate was calculated using Model 1 without
sparsification. With a good choice of sparsification threshold, we can attain almost identical results
much faster and more efficiently with sparsification. These results are not shown here due to space
limitations. To give an intuition concerning the speed and the efficiency attained by sparsification,
we should mention that the dimension of the feature space for the kernel used in Model 1 is 6
(Proposition 9.2 in [14]). Therefore, we deal with a kernel matrix of size 6 with sparsification versus
a kernel matrix of size M = 5j , j = 1, . . . , 20 without sparsification.
We ran another set of experiments, in which we add i.i.d. Gaussian noise to the rewards: r_t = x_t² + 0.1a_t² + n_r; n_r ∼ N(0, σ_r² = 0.1). In Model 2, we can model this by the measurement noise covariance matrix Σ = Tσ_r²I, where T = 20 is the path length. Since each reward r_t is a Gaussian random variable with variance σ_r², the return $R(\tau) = \sum_{t=0}^{T-1} r_t$ will also be a Gaussian random variable with variance Tσ_r². The results are presented in the third and fourth columns of Figure 1.
These experiments indicate that the BQ gradient estimate has lower variance than its MC counterpart. In fact, whereas the performance of the MC estimate improves as 1/M, the performance of the BQ estimate improves at a higher rate.
[Figure 1: four log-scale plots of Mean Squared Error and Mean Absolute Angular Error (deg) versus Number of Paths (20 to 100), comparing the MC and BQ estimates, for Model 1 (left pair) and Model 2 (right pair).]
Figure 1: Results for the LQR problem using Model 1 (left) and Model 2 (right), without sparsification. The
Model 2 results are for a LQR problem, in which the rewards are corrupted by i.i.d. Gaussian noise. For each
algorithm, we show the MSE (left) and the mean absolute angular error (right), as functions of the number of
sample paths M. Note that the errors are plotted on a logarithmic scale. All results are averages over 10⁴ runs.
Next, we use BPG to optimize the policy parameters in the LQR problem. Figure 2 shows the performance of the BPG algorithm with the regular (BPG) and the natural (BPNG) gradient estimates, versus a MC-based policy gradient (MCPG) algorithm, for the sample sizes (number of sample paths used for estimating the gradient of a policy) M = 5, 10, 20, and 40. We use Alg. 2 with the number of updates set to N = 100, and Model 1 for the BPG and BPNG methods. Since Alg. 2 computes the Fisher information matrix for each set of policy parameters, an estimate of the natural gradient is provided at little extra cost at each step. The returns obtained by these methods are averaged over 10⁴ runs for sample sizes 5 and 10, and over 10³ runs for sample sizes 20 and 40. The policy parameters are initialized randomly at each run. In order to ensure that the learned parameters do not exceed an acceptable range, the policy parameters are defined as λ = -1.999 + 1.998/(1 + e^{-θ₁}) and σ = 0.001 + 1/(1 + e^{-θ₂}). The optimal solution is λ* ≈ -0.92 and σ* = 0.001 (η_B(λ*, σ*) = 0.1003), corresponding to θ₁* ≈ -0.16 and θ₂* → -∞.
[Figure 2: four log-scale plots of Average Expected Return versus Number of Updates (0 to 100), one per sample size (5, 10, 20, 40), each comparing MC, BPG, and BPNG against the optimal return.]
Figure 2: A comparison of the average expected returns of BPG using regular (BPG) and natural (BPNG)
gradient estimates, with the average expected return of the MCPG algorithm for sample sizes 5, 10, 20, and 40.
Figure 2 shows that MCPG performs better than the BPG algorithm for the smallest sample size (M = 5), whereas for larger samples BPG dominates MCPG. This phenomenon is
also reported in [16]. We use two different learning rates for the two components of the
gradient. For a fixed sample size, each method starts with an initial learning rate, and decreases it according to the schedule α_j = α₀(20/(20 + j)). Table 3 summarizes the best
initial learning rates for each algorithm. The selected learning rates for BPNG are significantly larger than those for BPG and MCPG, which explains why BPNG initially learns
faster than BPG and MCPG, but contrary to our expectations, eventually performs worse.
So far we have assumed that the Fisher information matrix is known. In the next experiment, we estimate it using both MC and maximum likelihood (ML) methods as described in Sec. 4.

         M = 5         M = 10        M = 20        M = 40
MCPG     0.01, 0.05    0.05, 0.10    0.05, 0.10    0.10, 0.15
BPG      0.01, 0.03    0.07, 0.10    0.15, 0.20    0.10, 0.30
BPNG     0.03, 0.50    0.09, 0.30    0.45, 0.90    0.80, 0.90

Figure 3: Initial learning rates used by the PG algorithms.

In ML estimation, we assume that the transition probability function is P(x_{t+1}|x_t, a_t) = N(ν₁x_t + ν₂a_t + ν₃, ν₄²), and then estimate its
parameters by observing state transitions. Figure 4 shows that when the Fisher information matrix
is estimated using MC (BPG-MC), the BPG algorithm still performs better than MCPG, and outperforms the BPG algorithm in which the Fisher information matrix is estimated using ML (BPG-ML).
Moreover, as we increase the sample size, its performance converges to the performance of the BPG
algorithm in which the Fisher information matrix is known (BPG).
[Figure 4: three log-scale plots of Average Expected Return versus Number of Updates (0 to 100), for sample sizes 10, 20, and 40, comparing MC, BPG, BPG-ML, and BPG-MC against the optimal return.]
Figure 4: A comparison of the average return of BPG when the Fisher information matrix is known (BPG),
and when it is estimated using MC (BPG-MC) and ML (BPG-ML) methods, for sample sizes 10, 20, and 40
(from left to right). The average return of the MCPG algorithm is also provided for comparison.
6 Discussion
In this paper we proposed an alternative approach to conventional frequentist policy gradient estimation procedures, which is based on the Bayesian view. Our algorithms use GPs to define a prior
distribution over the gradient of the expected return, and compute the posterior, conditioned on the
observed data. The experimental results are encouraging, but we conjecture that even higher gains
may be attained using this approach. This calls for additional theoretical and empirical work.
Although the proposed policy updating algorithm (Alg. 2) uses only the posterior mean of the gradient in its updates, we hope that more elaborate algorithms can be devised that would make judicious
use of the covariance information provided by the gradient estimation algorithm (Alg. 1). Two obvious possibilities are: 1) risk-aware selection of the update step-size and direction, and 2) using
the variance in a termination condition for Alg. 1. Other interesting directions include 1) investigating other possible partitions of the integrand in the expression for ??B (?) into a GP term f and
a known term p, 2) using other types of kernel functions, such as sequence kernels, 3) combining
our approach with MDP model estimation, to allow transfer of learning between different policies,
4) investigating methods for learning the Fisher information matrix, 5) extending the Bayesian approach to Actor-Critic type of algorithms, possibly by combining BPG with the Gaussian process
temporal difference (GPTD) algorithms of [15].
Acknowledgments We thank Rich Sutton and Dale Schuurmans for helpful discussions. M.G.
would like to thank Shie Mannor for his useful comments at the early stages of this work. M.G. is
supported by iCORE and Y.E. is partially supported by an Alberta Ingenuity fellowship.
References
[1] R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
[2] P. Marbach. Simulated-Based Methods for Markov Decision Processes. PhD thesis, MIT, 1998.
[3] J. Baxter and P. Bartlett. Infinite-horizon policy-gradient estimation. JAIR, 15:319-350, 2001.
[4] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of NIPS 12, pages 1057-1063, 2000.
[5] S. Kakade. A natural policy gradient. In Proceedings of NIPS 14, 2002.
[6] J. Bagnell and J. Schneider. Covariant policy search. In Proceedings of the 18th IJCAI, 2003.
[7] J. Peters, S. Vijayakumar, and S. Schaal. Reinforcement learning for humanoid robotics. In Proceedings of the Third IEEE-RAS International Conference on Humanoid Robots, 2003.
[8] J. Berger and R. Wolpert. The Likelihood Principle. Inst. of Mathematical Statistics, Hayward, CA, 1984.
[9] A. O'Hagan. Monte Carlo is fundamentally unsound. The Statistician, 36:247-249, 1987.
[10] A. O'Hagan. Bayes-Hermite quadrature. Journal of Statistical Planning and Inference, 29, 1991.
[11] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[12] R. Sutton and A. Barto. An Introduction to Reinforcement Learning. MIT Press, 1998.
[13] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Proceedings of NIPS 11. MIT Press, 1998.
[14] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[15] Y. Engel. Algorithms and Representations for Reinforcement Learning. PhD thesis, The Hebrew University of Jerusalem, Israel, 2005.
[16] C. Rasmussen and Z. Ghahramani. Bayesian Monte Carlo. In Proceedings of NIPS 15. MIT Press, 2003.
2,196 | 2,994 | Modeling General and Specific Aspects of Documents
with a Probabilistic Topic Model
Chaitanya Chemudugunta, Padhraic Smyth
Department of Computer Science
University of California, Irvine
Irvine, CA 92697-3435, USA
{chandra,smyth}@ics.uci.edu
Mark Steyvers
Department of Cognitive Sciences
University of California, Irvine
Irvine, CA 92697-5100, USA
[email protected]
Abstract
Techniques such as probabilistic topic models and latent-semantic indexing have
been shown to be broadly useful at automatically extracting the topical or semantic content of documents, or more generally for dimension-reduction of sparse
count data. These types of models and algorithms can be viewed as generating an
abstraction from the words in a document to a lower-dimensional latent variable
representation that captures what the document is generally about beyond the specific words it contains. In this paper we propose a new probabilistic model that
tempers this approach by representing each document as a combination of (a) a
background distribution over common words, (b) a mixture distribution over general topics, and (c) a distribution over words that are treated as being specific to
that document. We illustrate how this model can be used for information retrieval
by matching documents both at a general topic level and at a specific word level,
providing an advantage over techniques that only match documents at a general
level (such as topic models or latent-semantic indexing) or that only match documents at the specific word level (such as TF-IDF).
1 Introduction and Motivation
Reducing high-dimensional data vectors to robust and interpretable lower-dimensional representations has a long and successful history in data analysis, including recent innovations such as latent
semantic indexing (LSI) (Deerwester et al, 1994) and latent Dirichlet allocation (LDA) (Blei, Ng,
and Jordan, 2003). These types of techniques have found broad application in modeling of sparse
high-dimensional count data such as the "bag of words" representations for documents or transaction
data for Web and retail applications.
Approaches such as LSI and LDA have both been shown to be useful for "object matching" in their
respective latent spaces. In information retrieval for example, both a query and a set of documents
can be represented in the LSI or topic latent spaces, and the documents can be ranked in terms of
how well they match the query based on distance or similarity in the latent space. The mapping to
latent space represents a generalization or abstraction away from the sparse set of observed words, to
a "higher-level" semantic representation in the latent space. These abstractions in principle lead to
better generalization on new data compared to inferences carried out directly in the original sparse
high-dimensional space. The capability of these models to provide improved generalization has
been demonstrated empirically in a number of studies (e.g., Deerwester et al 1994; Hofmann 1999;
Canny 2004; Buntine et al, 2005).
However, while this type of generalization is broadly useful in terms of inference and prediction,
there are situations where one can over-generalize. Consider trying to match the following query
to a historical archive of news articles: election + campaign + Camejo. The query is intended to
find documents that are about US presidential campaigns and also about Peter Camejo (who ran as
vice-presidential candidate alongside independent Ralph Nader in 2004). LSI and topic models are
likely to highly rank articles that are related to presidential elections (even if they don?t necessarily
contain the words election or campaign).
However, a potential problem is that the documents that are highly ranked by LSI or topic models
need not include any mention of the name Camejo. The reason is that the combination of words
in this query is likely to activate one or more latent variables related to the concept of presidential
campaigns. However, once this generalization is made the model has ?lost? the information about
the specific word Camejo and it will only show up in highly ranked documents if this word happens
to frequently occur in these topics (unlikely in this case given that this candidate received relatively
little media coverage compared to the coverage given to the candidates from the two main parties).
But from the viewpoint of the original query, our preference would be to get documents that are
about the general topic of US presidential elections with the specific constraint that they mention
Peter Camejo.
Word-based retrieval techniques, such as the widely-used term-frequency inverse-document-frequency (TF-IDF) method, have the opposite problem in general. They tend to be overly specific
in terms of matching words in the query to documents.
In general of course one would like to have a balance between generality and specificity. One ad hoc
approach is to combine scores from a general method such as LSI with those from a more specific
method such as TF-IDF in some manner, and indeed this technique has been proposed in information
retrieval (Vogt and Cottrell, 1999). Similarly, in the ad hoc LDA approach (Wei and Croft, 2006), the
LDA model is linearly combined with document-specific word distributions to capture both general
as well as specific information in documents. However, neither method is entirely satisfactory since
it is not clear how to trade-off generality and specificity in a principled way.
The contribution of this paper is a new graphical model based on latent topics that handles the tradeoff between generality and specificity in a fully probabilistic and automated manner. The model,
which we call the special words with background (SWB) model, is an extension of the LDA model.
The new model allows words in documents to be modeled as either originating from general topics,
or from document-specific ?special? word distributions, or from a corpus-wide background distribution. The idea is that words in a document such as election and campaign are likely to come from
a general topic on presidential elections, whereas a name such as Camejo is much more likely to
be treated as "non-topical" and specific to that document. Words in queries are automatically interpreted (in a probabilistic manner) as either being topical or special, in the context of each document, allowing for a data-driven document-specific trade-off between the benefits of topic-based abstraction and specific word matching. Daumé and Marcu (2006) independently proposed a probabilistic
model using similar concepts for handling different training and test distributions in classification
problems.
Although we have focused primarily on documents in information retrieval in the discussion above,
the model we propose can in principle be used on any large sparse matrix of count data. For example,
transaction data sets where rows are individuals and columns correspond to items purchased or Web
sites visited are ideally suited to this approach. The latent topics can capture broad patterns of
population behavior and the "special word distributions" can capture the idiosyncrasies of specific
individuals.
Section 2 reviews the basic principles of the LDA model and introduces the new SWB model. Section 3 illustrates how the model works in practice using examples from New York Times news
articles. In Section 4 we describe a number of experiments with 4 different document sets, including perplexity experiments and information retrieval experiments, illustrating the trade-offs between
generalization and specificity for different models. Section 5 contains a brief discussion and concluding comments.
2 A Topic Model for Special Words
Figure 1(a) shows the graphical model for what we will refer to as the "standard topic model" or LDA. There are D documents and document d has N_d words. α and β are fixed parameters of symmetric Dirichlet priors for the D document-topic multinomials represented by θ and the T topic-word multinomials represented by φ. In the generative model, for each document d, the N_d words
[Figure 1: graphical models in plate notation. Panel (a) shows standard LDA: hyperparameters α and β, document-topic distributions θ, topic assignments z, words w, and topic-word distributions φ, over plates of sizes N_d, D, and T. Panel (b) shows the SWB model, which adds the route indicator x with its document-specific multinomial λ and prior γ, the document-specific special-word distributions Ψ with prior β₁, and the corpus-wide background distribution Ω with prior β₂ (β₀ is the prior on φ).]
Figure 1: Graphical models for (a) the standard LDA topic model (left) and (b) the proposed special
words topic model with a background distribution (SWB) (right).
are generated by drawing a topic t from the document-topic distribution p(z|θ_d) and then drawing a word w from the topic-word distribution p(w|z = t, φ_t). As shown in Griffiths and Steyvers (2004) the topic assignments z for each word token in the corpus can be efficiently sampled via Gibbs sampling (after marginalizing over θ and φ). Point estimates for the θ and φ distributions can be computed conditioned on a particular sample, and predictive distributions can be obtained by
averaging over multiple samples.
We will refer to the proposed model as the special words topic model with background distribution (SWB) (Figure 1(b)). SWB has a similar general structure to the LDA model (Figure 1(a)) but with additional machinery to handle special words and background words. In particular, associated with each word token is a latent random variable x, taking value x = 0 if the word w is generated via the topic route, value x = 1 if the word is generated as a special word (for that document) and value x = 2 if the word is generated from a background distribution specific for the corpus. The variable x acts as a switch: if x = 0, the previously described standard topic mechanism is used to generate the word, whereas if x = 1 or x = 2, words are sampled from a document-specific multinomial Ψ or a corpus-specific multinomial Ω (with symmetric Dirichlet priors parametrized by β₁ and β₂) respectively. x is sampled from a document-specific multinomial λ, which in turn has a symmetric Dirichlet prior, γ. One could also use a hierarchical Bayesian approach to introduce another level of uncertainty about the Dirichlet priors (e.g., see Blei, Ng, and Jordan, 2003); we have not investigated this option, primarily for computational reasons. In all our experiments, we set α = 0.1, β₀ = β₂ = 0.01, β₁ = 0.0001 and γ = 0.3, all weak symmetric priors.
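To make the generative semantics concrete, here is a minimal sketch of sampling one document from SWB (our code, with hypothetical argument names; the per-document distributions are passed in directly rather than drawn from their Dirichlet priors):

```python
import numpy as np

def generate_swb_doc(Nd, theta_d, phi, psi_d, omega, lam_d, rng=np.random):
    """Sample the N_d words of one document from the SWB generative model.

    theta_d : document-topic distribution, shape (T,)
    phi     : topic-word distributions, shape (T, W)
    psi_d   : document-specific special-word distribution, shape (W,)
    omega   : corpus-wide background distribution, shape (W,)
    lam_d   : switch distribution over routes (topic, special, background)
    """
    W = phi.shape[1]
    words = []
    for _ in range(Nd):
        x = rng.choice(3, p=lam_d)                   # draw the switch x
        if x == 0:
            z = rng.choice(len(theta_d), p=theta_d)  # topic route
            w = rng.choice(W, p=phi[z])
        elif x == 1:
            w = rng.choice(W, p=psi_d)               # special-word route
        else:
            w = rng.choice(W, p=omega)               # background route
        words.append(w)
    return words
```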
The conditional probability of a word w given a document d can be written as:
$$p(w|d) = p(x = 0|d) \sum_{t=1}^{T} p(w|z = t)\, p(z = t|d) + p(x = 1|d)\, p'(w|d) + p(x = 2|d)\, p''(w),$$

where p′(w|d) is the special word distribution for document d, and p′′(w) is the background word
distribution for the corpus. Note that when compared to the standard topic model the SWB model
can explain words in three different ways, via topics, via a special word distribution, or via a background word distribution. Given the graphical model above, it is relatively straightforward to derive
Gibbs sampling equations that allow joint sampling of the zi and xi latent variables for each word
token w_i. For x_i = 0:

$$p(x_i = 0, z_i = t\,|\,\mathbf{w}, \mathbf{x}_{-i}, \mathbf{z}_{-i}, \alpha, \beta_0, \gamma) \;\propto\; \frac{N_{d0,-i} + \gamma}{N_{d,-i} + 3\gamma} \cdot \frac{C^{TD}_{td,-i} + \alpha}{\sum_{t'} C^{TD}_{t'd,-i} + T\alpha} \cdot \frac{C^{WT}_{wt,-i} + \beta_0}{\sum_{w'} C^{WT}_{w't,-i} + W\beta_0},$$

and for x_i = 1:

$$p(x_i = 1\,|\,\mathbf{w}, \mathbf{x}_{-i}, \mathbf{z}_{-i}, \beta_1, \gamma) \;\propto\; \frac{N_{d1,-i} + \gamma}{N_{d,-i} + 3\gamma} \cdot \frac{C^{WD}_{wd,-i} + \beta_1}{\sum_{w'} C^{WD}_{w'd,-i} + W\beta_1},$$
e mail krugman nytimes com memo to critics of the media s liberal bias the pinkos you really should be going after are those business reporters even i was startled by
the tone of the jan 21 issue of investment news which describes itself as the weekly newspaper for financial advisers the headline was paul o neill s sweet deal the
blurb was irs backs off closing loophole averting tax liability for execs and treasury chief it s not really news that the bush administration likes tax breaks for
businessmen but two weeks later i learned from the wall street journal that this loophole is more than a tax break for businessmen it s a gift to biznesmen and it may be
part of a larger pattern confused in the former soviet union the term biznesmen pronounced beeznessmen refers to the class of sudden new rich who emerged after the
fall of communism and who generally got rich by using their connections to strip away the assets of public enterprises what we ve learned from enron and other
players to be named later is that america has its own biznesmen and that we need to watch out for policies that make it easier for them to ply their trade it turns out that
the sweet deal investment news was referring to the use of split premium life insurance policies to give executives largely tax free compensation you don t want to
know the details is an even sweeter deal for executives of companies that go belly up it shields their wealth from creditors and even from lawsuits sure enough reports
the wall street journal former enron c e o s kenneth lay and jeffrey skilling both had large split premium policies so what other pro biznes policies have been
promulgated lately last year both houses of ?
john w snow was paid more than 50 million in salary bonus and stock in his nearly 12 years as chairman of the csx corporation the railroad company during that
period the company s profits fell and its stock rose a bit more than half as much as that of the average big company mr snow s compensation amid csx s uneven
performance has drawn criticism from union officials and some corporate governance specialists in 2000 for example after the stock had plunged csx decided to
reverse a 25 million loan to him the move is likely to get more scrutiny after yesterday s announcement that mr snow has been chosen by president bush to replace
paul o neill as the treasury secretary like mr o neill mr snow is an outsider on wall street but an insider in corporate america with long experience running an industrial
company some wall street analysts who follow csx said yesterday that mr snow had ably led the company through a difficult period in the railroad industry and would
make a good treasury secretary it s an excellent nomination said jill evans an analyst at j p morgan who has a neutral rating on csx stock i think john s a great person
for the administration he as the c e o of a railroad has probably touched every sector of the economy union officials are less complimentary of mr snow s performance
at csx last year the a f l c i o criticized him and csx for the company s decision to reverse the loan allowing him to return stock he had purchased with the borrowed
money at a time when independent directors are in demand a corporate governance specialist said recently that mr snow had more business relationships with
members of his own board than any other chief executive in addition mr snow is the third highest paid of 37 chief executives of transportation companies said ric
marshall chief executive of the corporate library which provides specialized investment research into corporate boards his own compensation levels have been pretty
high mr marshall said he could afford to take a public service job a csx program in 1996 allowed mr snow and other top csx executives to buy?
Figure 2: Examples of two news articles with special words (as inferred by the model) shaded in
gray. (a) upper, email article with several colloquialisms, (b) lower, article about CSX corporation.
and for x_i = 2:

$$p(x_i = 2\,|\,\mathbf{w}, \mathbf{x}_{-i}, \mathbf{z}_{-i}, \beta_2, \gamma) \;\propto\; \frac{N_{d2,-i} + \gamma}{N_{d,-i} + 3\gamma} \cdot \frac{C^{W}_{w,-i} + \beta_2}{\sum_{w'} C^{W}_{w',-i} + W\beta_2},$$
where the subscript -i indicates that the count for word token i is removed, N_d is the number of words in document d and N_{d0}, N_{d1} and N_{d2} are the number of words in document d assigned to the latent topics, special words and background component, respectively, C^{WT}_{wt}, C^{WD}_{wd} and C^{W}_{w} are the number of times word w is assigned to topic t, to the special-words distribution of document d, and to the background distribution, respectively, and W is the number of unique words in the corpus.
Note that when there is not strong supporting evidence for x_i = 0 (i.e., the conditional probability of this event is low), the probability of the word being generated by the special words route, x_i = 1, or the background route, x_i = 2, increases.
One iteration of the Gibbs sampler corresponds to a sampling pass through all word tokens in the
corpus. In practice we have found that around 500 iterations are often sufficient for the in-sample
perplexity (or log-likelihood) and the topic distributions to stabilize.
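To make the sampling step concrete, the following sketch shows how the conditional probabilities of the three routes for a single word token could be computed and sampled in Python. Everything here, including the array layouts and the symbols beta0 and beta1 for the topic and special-words smoothing parameters (only beta2 appears in the equation above), is our illustration rather than the authors' code; for brevity route 0 is conditioned on the token's current topic assignment z, whereas the model samples x and z jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_route(w, d, z, CWT, CWD, CW, Nd_x, beta0, beta1, beta2, gamma):
    """Sample x for one word token: 0=topic, 1=special words, 2=background.

    CWT[w, t]: times word w is assigned to topic t (current token removed)
    CWD[w, d]: times word w is assigned to document d's special-words dist.
    CW[w]:     times word w is assigned to the background distribution
    Nd_x[d, r]: number of words in document d currently assigned to route r
    """
    W = CW.shape[0]
    Nd = Nd_x[d].sum()
    route = (Nd_x[d] + gamma) / (Nd + 3.0 * gamma)  # shared route-prior factor
    p0 = route[0] * (CWT[w, z] + beta0) / (CWT[:, z].sum() + W * beta0)
    p1 = route[1] * (CWD[w, d] + beta1) / (CWD[:, d].sum() + W * beta1)
    # the x_i = 2 term is exactly the conditional displayed above
    p2 = route[2] * (CW[w] + beta2) / (CW.sum() + W * beta2)
    p = np.array([p0, p1, p2])
    return rng.choice(3, p=p / p.sum())
```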
We also pursued a variant of SWB, the special words (SW) model, that excludes the background distribution Ω and has a symmetric Beta prior, γ, on λ (which in SW is a document-specific Bernoulli distribution). In all our SW model runs, we set γ = 0.5, resulting in a weak symmetric prior that is equivalent to adding one pseudo-word to each document. Experimental results (not shown) indicate that the final word-topic assignments are not sensitive to either the value of the prior or the initial assignments to the latent variables, x and z.
3 Illustrative Examples
We illustrate the operation of the SW model with a data set consisting of 3104 articles from the New York Times (NYT) with a total of 1,399,488 word tokens. This small set of NYT articles was formed by selecting all NYT articles that mention the word "Enron." The SW topic model was run with T = 100 topics. In total, 10 Gibbs samples were collected from the model. Figure 2 shows two short fragments of articles from this NYT dataset. The background color of words indicates the probability of assigning words to the special words topic (darker colors are associated with higher probability that over the 10 Gibbs samples a word was assigned to the special topic). The words with gray foreground colors were treated as stopwords and were not included in the analysis. Figure 2(a) shows how intentionally misspelled words such as "biznesmen" and "beeznessmen" and rare
Collection   # of Docs   Total # of Word Tokens   Median Doc Length   Mean Doc Length   # of Queries
NIPS         1740        2,301,375                1310                1322.6            N/A
PATENTS      6711        15,014,099               1858                2237.2            N/A
AP           10000       2,426,181                235.5               242.6             142
FR           2500        6,332,681                516                 2533.1            30

Table 1: General characteristics of document data sets used in experiments.
NIPS              PATENTS           AP                FR
set       .0206   fig       .0647   tagnum    .0416   nation    .0147
number    .0167   end       .0372   itag      .0412   sai       .0129
results   .0153   extend    .0267   requir    .0381   presid    .0118
case      .0123   invent    .0246   includ    .0207   polici    .0108
problem   .0118   view      .0214   section   .0189   issu      .0096
function  .0108   shown     .0191   determin  .0134   call      .0094
values    .0102   claim     .0189   part      .0112   support   .0085
paper     .0088   side      .0177   inform    .0105   need      .0079
approach  .0080   posit     .0153   addit     .0096   govern    .0070
large     .0079   form      .0128   applic    .0086   effort    .0068

Figure 3: Examples of background distributions (10 most likely words) learned by the SWB model for 4 different document corpora.
words such as "pinkos" are likely to be assigned to the special words topic. Figure 2(b) shows how a last name such as "Snow" and the corporation name "CSX" that are specific to the document are likely to be assigned to the special topic. The words "Snow" and "CSX" do not occur often in other documents but are mentioned several times in the example document. This combination of low document-frequency and high term-frequency within the document is one factor that makes these words more likely to be treated as "special" words.

4 Experimental Results: Perplexity and Precision

We use 4 different document sets in our experiments, as summarized in Table 1. The NIPS and PATENTS document sets are used for perplexity experiments and the AP and FR data sets for retrieval experiments. The NIPS data set is available online¹ and PATENTS, AP, and FR consist of documents from the U.S. Patents collection (TREC Vol-3), Associated Press news articles from 1998 (TREC Vol-2), and articles from the Federal Register (TREC Vol-1, 2) respectively. To create the sampled AP and FR data sets, all documents relevant to queries were included first and the rest of the documents were chosen randomly. In the results below all LDA/SWB/SW models were fit using T = 200 topics.

Figure 3 demonstrates the background component learned by the SWB model on the 4 different document data sets. The background distributions learned for each set of documents are quite intuitive, with words that are commonly used across a broad range of documents within each corpus. The ratio of words assigned to the special words distribution and the background distribution are (respectively for each data set), 25%:10% (NIPS), 58%:5% (PATENTS), 11%:6% (AP), 50%:11% (FR). Of note is the fact that a much larger fraction of words are treated as special in collections containing long documents (NIPS, PATENTS, and FR) than in short "abstract-like" collections (such as AP); this makes sense since short documents are more likely to contain general summary information while longer documents will have more specific details.
4.1 Perplexity Comparisons
The NIPS and PATENTS document sets do not have queries and relevance judgments, but nonetheless are useful for evaluating perplexity. We compare the predictive performance of the SW and
SWB topic models with the standard topic model by computing the perplexity of unseen words in
test documents. Perplexity of a test set under a model is defined as follows:
¹ From http://www.cs.toronto.edu/~roweis/data.html
[Figure 4 plots omitted: average perplexity (y-axis) for LDA, SW, and SWB against the percentage of words observed (x-axis, 10-90%).]
Figure 4: Average perplexity of the two special words models and the standard topics model as a function of the percentage of words observed in test documents on the NIPS data set (left) and the PATENTS data set (right).
$$\mathrm{Perplexity}(\mathbf{w}_{\mathrm{test}} \mid D^{\mathrm{train}}) = \exp\!\left[-\,\frac{\sum_{d=1}^{D_{\mathrm{test}}}\log p(\mathbf{w}_d \mid D^{\mathrm{train}})}{\sum_{d=1}^{D_{\mathrm{test}}} N_d}\right]$$

where w_test is a vector of words in the test data set, w_d is a vector of words in document d of the test set, and D^train is the training set. For the SWB model, we approximate p(w_d | D^train) as follows:

$$p(\mathbf{w}_d \mid D^{\mathrm{train}}) \approx \frac{1}{S}\sum_{s=1}^{S} p\bigl(\mathbf{w}_d \mid \{\theta^s, \phi^s, \psi^s, \Omega^s, \lambda^s\}\bigr)$$

where θ^s, φ^s, ψ^s, Ω^s and λ^s are point estimates from s = 1:S different Gibbs sampling runs.
The probability of the words w_d in a test document d, given its parameters, can be computed as follows:

$$p\bigl(\mathbf{w}_d \mid \{\theta^s, \phi^s, \psi^s, \Omega^s, \lambda^s\}\bigr) = \prod_{i=1}^{N_d}\left[\lambda^s_{1d}\sum_{t=1}^{T}\phi^s_{w_i t}\,\theta^s_{td} + \lambda^s_{2d}\,\psi^s_{w_i d} + \lambda^s_{3d}\,\Omega^s_{w_i}\right]$$

where N_d is the number of words in test document d and w_i is the i-th word being predicted in the test document. θ^s_{td}, φ^s_{w_i t}, ψ^s_{w_i d}, Ω^s_{w_i} and λ^s_d are point estimates from sample s.
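A direct transcription of the two expressions above into code might look as follows; this is a sketch under assumed array shapes, not the authors' implementation, and it averages the per-sample document likelihoods in probability space using a log-sum-exp trick for numerical stability.

```python
import numpy as np

def perplexity(test_docs, samples):
    """Perplexity of held-out words under SWB point estimates (a sketch;
    the array shapes below are assumptions of this illustration).

    test_docs: list of (doc_id, token_ids)
    samples:   list of S dicts with keys 'theta' (T,D), 'phi' (W,T),
               'psi' (W,D), 'Omega' (W,), 'lam' (3,D)
    """
    total_log, total_N = 0.0, 0
    for d, words in test_docs:
        per_sample = []
        for s in samples:
            th, ph, ps, Om, la = s['theta'], s['phi'], s['psi'], s['Omega'], s['lam']
            logp = sum(np.log(la[0, d] * (ph[w, :] @ th[:, d])
                              + la[1, d] * ps[w, d]
                              + la[2, d] * Om[w]) for w in words)
            per_sample.append(logp)
        # p(w_d|D_train) ~ mean of per-sample likelihoods (log-sum-exp)
        m = max(per_sample)
        total_log += m + np.log(np.mean(np.exp(np.array(per_sample) - m)))
        total_N += len(words)
    return np.exp(-total_log / total_N)
```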
When a fraction of words of a test document d is observed, a Gibbs sampler is run on the observed words to update the document-specific parameters, θ_d, ψ_d and λ_d, and these updated parameters are used in the computation of perplexity. For the NIPS data set, documents from the last year of the data set were held out to compute perplexity (D_test = 150), and for the PATENTS data set 500 documents were randomly selected as test documents.
From the perplexity figures, it can be seen that once a small fraction of the test document words
is observed (20% for NIPS and 10% for PATENTS), the SW and SWB models have significantly
lower perplexity values than LDA indicating that the SW and SWB models are using the special
words ?route? to better learn predictive models for individual documents.
4.2 Information Retrieval Results
Returning to the point of capturing both specific and general aspects of documents as discussed in
the introduction of the paper, we generated 500 queries of length 3-5 using randomly selected low-frequency words from the NIPS corpus and then ranked documents relative to these queries using
several different methods. Table 2 shows for the top k-ranked documents (k = 1, 10, 50, 100) how
many of the retrieved documents contained at least one of the words in the query. Note that we are
not assessing relevance here in a traditional information retrieval sense, but instead are assessing how
Method    1 Ret Doc   10 Ret Docs   50 Ret Docs   100 Ret Docs
TF-IDF    100.0       100.0         100.0         100.0
LSI       97.6        82.7          64.6          54.3
LDA       90.0        80.6          67.0          58.7
SW        99.2        97.1          79.1          67.3
SWB       99.4        96.6          78.7          67.2

Table 2: Percentage of retrieved documents containing at least one query word (NIPS corpus).
AP

MAP                                      Pr@10d
Method   Title   Desc    Concepts       Title   Desc    Concepts
TF-IDF   .353    .358    .498           .406    .434    .549
LSI      .286    .387    .459           .455    .469    .523
LDA      .424    .394    .498           .478    .463    .556
SW       .466*   .430*   .550*          .524*   .509*   .599*
SWB      .460*   .417    .549*          .513*   .495    .603*

FR

MAP                                      Pr@10d
Method   Title   Desc    Concepts       Title   Desc    Concepts
TF-IDF   .268    .272    .391           .300    .287    .483
LSI      .329    .295    .399           .366    .327    .487
LDA      .344    .271    .396           .428    .340    .487
SW       .371    .323*   .448*          .469    .407*   .550*
SWB      .373    .328*   .435           .462    .423*   .523

* = significant difference wrt LDA

Figure 5: Information retrieval experimental results.
often specific query words occur in retrieved documents. TF-IDF has 100% matches, as one would
expect, and the techniques that generalize (such as LSI and LDA) have far fewer exact matches.
The SWB and SW models have more specific matches than either LDA or LSI, indicating that they
have the ability to match at the level of specific words. Of course this is not of much utility unless
the SWB and SW models can also perform well in terms of retrieving relevant documents (not just
documents containing the query words), which we investigate next.
For the AP and FR document sets, 3 types of query sets were constructed from TREC Topics 1-150, based on the Title (short), Desc (sentence-length) and Concepts (long list of keywords) fields. Queries that have no relevance judgments for a collection were removed from the query set for that collection.
The score for a document d relative to a query q for the SW and standard topic models can be computed as the probability of q given d (known as the query-likelihood model in the IR community).
For the SWB topic model, we have
$$p(q \mid d) \approx \prod_{w \in q}\left[\,p(x=0 \mid d)\sum_{t=1}^{T} p(w \mid z=t)\,p(z=t \mid d) + p(x=1 \mid d)\,p'(w \mid d) + p(x=2 \mid d)\,p''(w)\right]$$

where p'(w|d) is the document-specific special-words distribution and p''(w) is the corpus-wide background distribution.
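In code, this query-likelihood score reduces to a per-word mixture of the three routes; the sketch below is our own (with array layouts mirroring the earlier perplexity sketch, not the authors' implementation) and works in the log domain to avoid underflow on longer queries.

```python
import numpy as np

def swb_query_score(query, d, theta, phi, psi, Omega, lam):
    """log p(q|d) under SWB: theta (T,D), phi (W,T), psi (W,D),
    Omega (W,), lam (3,D) with rows p(x=0|d), p(x=1|d), p(x=2|d)."""
    score = 0.0
    for w in query:
        p_topics = phi[w, :] @ theta[:, d]    # sum_t p(w|z=t) p(z=t|d)
        p = lam[0, d] * p_topics + lam[1, d] * psi[w, d] + lam[2, d] * Omega[w]
        score += np.log(p)
    return score

# documents would then be ranked by this score for each query, e.g.
# ranking = sorted(range(D), key=lambda d: swb_query_score(q, d, ...), reverse=True)
```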
We compare SW and SWB models with the standard topic model (LDA), LSI and TF-IDF. The TF-IDF score for a word w in a document d is computed as $\mathrm{TF\text{-}IDF}(w, d) = \frac{C^{WD}_{wd}}{N_d}\cdot\log_2\frac{D}{D_w}$, where D is the total number of documents and D_w is the number of documents containing w. For
LSI, the TF-IDF weight matrix is reduced to a K-dimensional latent space using SVD, K = 200. A
given query is first mapped into the LSI latent space or the TF-IDF space (known as query folding),
and documents are scored based on their cosine distances to the mapped queries.
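The two baselines can be sketched in a few lines; the weighting below follows the TF-IDF formula given above, and the cosine ranking applies equally to folded TF-IDF or LSI representations (the SVD step itself is omitted). Variable layouts are assumptions of this sketch, not code from the paper.

```python
import numpy as np

def tfidf_weights(C):
    """TF-IDF matrix from raw counts C (W words x D documents)."""
    W, D = C.shape
    N_d = C.sum(axis=0, keepdims=True)           # document lengths
    D_w = (C > 0).sum(axis=1, keepdims=True)     # document frequency of w
    return (C / np.maximum(N_d, 1)) * np.log2(D / np.maximum(D_w, 1))

def cosine_rank(X, q):
    """Rank documents (columns of X) by cosine similarity to query vector q."""
    Xn = X / np.maximum(np.linalg.norm(X, axis=0, keepdims=True), 1e-12)
    qn = q / max(np.linalg.norm(q), 1e-12)
    return np.argsort(-(qn @ Xn))
```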
To measure the performance of each algorithm we used 2 metrics that are widely used in IR research:
the mean average precision (MAP) and the precision for the top 10 documents retrieved (pr@10d).
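Both metrics have standard definitions; the helpers below (ours, not from the paper) compute them for a single query given a ranked document list and that query's relevant-document set, with MAP being the mean of the first quantity over all queries.

```python
def average_precision(ranked, relevant):
    """Average precision for one query (MAP averages this over queries)."""
    hits, total = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / i        # precision at each relevant rank
    return total / max(len(relevant), 1)

def precision_at_k(ranked, relevant, k=10):
    """pr@10d when k = 10."""
    return sum(doc in relevant for doc in ranked[:k]) / k
```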
The main difference between the AP and FR documents is that the latter documents are considerably
longer on average and there are fewer queries for the FR data set. Figure 5 summarizes the results,
broken down by algorithm, query type, document set, and metric. The maximum score for each
query experiment is shown in bold: in all cases (query-type/data set/metric) the SW or SWB model
produced the highest scores.
To determine statistical significance, we performed a t-test at the 0.05 level between the scores of
each of the SW and SWB models, and the scores of the LDA model (as LDA has the best scores
overall among TF-IDF, LSI and LDA). Differences between SW and SWB are not significant. In
Figure 5, we use the symbol * to indicate scores where the SW and SWB models showed a statistically significant difference (always an improvement) relative to the LDA model. The differences for the "non-starred" query and metric scores of SW and SWB are not statistically significant but nonetheless always favor SW and SWB over LDA.
5 Discussion and Conclusions
Wei and Croft (2006) have recently proposed an ad hoc LDA approach that models p(q|d) as a
weighted combination of a multinomial over the entire corpus (the background model), a multinomial over the document, and an LDA model. Wei and Croft showed that this combination provides
excellent retrieval performance compared to other state-of-the-art IR methods. In a number of experiments (not shown) comparing the SWB and ad hoc LDA models we found that the two techniques
produced comparable precision performance, with small but systematic performance gains being
achieved by an ad hoc combination where the standard LDA model in ad hoc LDA was replaced
with the SWB model. An interesting direction for future work is to investigate fully generative
models that can achieve the performance of ad hoc approaches.
In conclusion, we have proposed a new probabilistic model that accounts for both general and specific aspects of documents or individual behavior. The model extends existing latent variable probabilistic approaches such as LDA by allowing these models to take into account specific aspects of
documents (or individuals) that are exceptions to the broader structure of the data. This allows, for
example, documents to be modeled as a mixture of words generated by general topics and words
generated in a manner specific to that document. Experimental results on information retrieval tasks
indicate that the SWB topic model does not suffer from the weakness of techniques such as LSI
and LDA when faced with very specific query words, nor does it suffer the limitations of TF-IDF in
terms of its ability to generalize.
Acknowledgements
We thank Tom Griffiths for useful initial discussions about the special words model. This material
is based upon work supported by the National Science Foundation under grant IIS-0083489. We
acknowledge use of the computer clusters supported by NIH grant LM-07443-01 and NSF grant
EIA-0321390 to Pierre Baldi and the Institute of Genomics and Bioinformatics.
References
Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003) Latent Dirichlet allocation, Journal of Machine Learning
Research 3: 993-1022.
Buntine, W., Löfström, J., Perttu, S. and Valtonen, K. (2005) Topic-specific scoring of documents for relevant
retrieval Workshop on Learning in Web Search: 22nd International Conference on Machine Learning,
pp. 34-41. Bonn, Germany.
Canny, J. (2004) GaP: a factor model for discrete data. Proceedings of the 27th Annual SIGIR Conference,
pp. 122-129.
Daumé III, H., and Marcu, D. (2006) Domain Adaptation for Statistical classifiers. Journal of the Artificial
Intelligence Research, 26: 101-126.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990) Indexing by latent
semantic analysis. Journal of the American Society for Information Science, 41(6): 391-407.
Griffiths, T. L., and Steyvers, M. (2004) Finding scientific topics, Proceedings of the National Academy of
Sciences, pp. 5228-5235.
Hofmann, T. (1999) Probabilistic latent semantic indexing, Proceedings of the 22nd Annual SIGIR Conference, pp. 50-57.
Vogt, C. and Cottrell, G. (1999) Fusion via a linear combination of scores. Information Retrieval, 1(3): 151173.
Wei, X. and Croft, W.B. (2006) LDA-based document models for ad-hoc retrieval, Proceedings of the 29th
SIGIR Conference, pp. 178-185.
An Information Theoretic Framework for
Eukaryotic Gradient Sensing
Joseph M. Kimmel and Richard M. Salter
[email protected], [email protected]
Computer Science Program
Oberlin College
Oberlin, Ohio 44074
Peter J. Thomas
[email protected]
Departments of Mathematics, Biology and Cognitive Science
Case Western Reserve University
Cleveland, Ohio 44106
Abstract
Chemical reaction networks by which individual cells gather and process information about their chemical environments have been dubbed "signal transduction" networks. Despite this suggestive terminology, there have been few attempts to analyze chemical signaling systems with the quantitative tools of information theory. Gradient sensing in the social amoeba Dictyostelium discoideum is a well
characterized signal transduction system in which a cell estimates the direction
of a source of diffusing chemoattractant molecules based on the spatiotemporal
sequence of ligand-receptor binding events at the cell membrane. Using Monte
Carlo techniques (MCell) we construct a simulation in which a collection of individual ligand particles undergoing Brownian diffusion in a three-dimensional
volume interact with receptors on the surface of a static amoeboid cell. Adapting
a method for estimation of spike train entropies described by Victor (originally due
to Kozachenko and Leonenko), we estimate lower bounds on the mutual information between the transmitted signal (direction of ligand source) and the received
signal (spatiotemporal pattern of receptor binding/unbinding events). Hence we
provide a quantitative framework for addressing the question: how much could the
cell know, and when could it know it? We show that the time course of the mutual information between the cell?s surface receptors and the (unknown) gradient
direction is consistent with experimentally measured cellular response times. We
find that the acquisition of directional information depends strongly on the time
constant at which the intracellular response is filtered.
1 Introduction: gradient sensing in eukaryotes
Biochemical signal transduction networks provide the computational machinery by which neurons,
amoebae or other single cells sense and react to their chemical environments. The precision of this
chemical sensing is limited by fluctuations inherent in reaction and diffusion processes involving a
Current address: Computational Neuroscience Graduate Program, The University of Chicago.
Oberlin Center for Computation and Modeling, http://occam.oberlin.edu/.
To whom correspondence should be addressed. http://www.case.edu/artsci/math/thomas/thomas.html; Oberlin College Research Associate.
finite quantity of molecules [1, 2]. The theory of communication provides a framework that makes
explicit the noise dependence of chemical signaling. For example, in any reaction A + B → C,
we may view the time varying reactant concentrations A(t) and B(t) as input signals to a noisy
channel, and the product concentration C(t) as an output signal carrying information about A(t)
and B(t). In the present study we show that the mutual information between the (known) state of
the cell's surface receptors and the (unknown) gradient direction follows a time course consistent
with experimentally measured cellular response times, reinforcing earlier claims that information
theory can play a role in understanding biochemical cellular communication [3, 4].
Dictyostelium is a soil dwelling amoeba that aggregates into a multicellular form in order to survive
conditions of drought or starvation. During aggregation individual amoebae perform chemotaxis, or
chemically guided movement, towards sources of the signaling molecule cAMP, secreted by nearby
amoebae. Quantitative studies have shown that Dictyostelium amoebae can sense shallow, static gradients of cAMP over long time scales (~30 minutes), and that gradient steepness plays a crucial role
in guiding cells [5]. The chemotactic efficiency (CE), the population average of the cosine between
the cell displacement directions and the true gradient direction, peaks at a cAMP concentration of 25
nanoMolar, similar to the equilibrium constant for the cAMP receptor (the Keq is the concentration
of cAMP at which the receptor has a 50% chance of being bound or unbound, respectively). For
smaller or larger concentrations the CE dropped rapidly. Nevertheless over long times cells were
able (on average) to detect gradients as small as 2% change in [cAMP] per cell length. At an early
stage of development when the pattern of chemotactic centers and spirals is still forming, individual amoebae presumably experience an inchoate barrage of weak, noisy and conflicting directional
signals. When cAMP binds receptors on a cell's surface, second messengers trigger a chain of
subsequent intracellular events including a rapid spatial reorganization of proteins involved in cell
motility. Advances in fluorescence microscopy have revealed that the oriented subcellular response
to cAMP stimulation is already well underway within two seconds [6, 7]. In order to understand the
fundamental limits to communication in this cell signaling process we abstract the problem faced
by a cell to that of rapidly identifying the direction of origin of a stimulus gradient superimposed on
an existing mean background concentration. We model gradient sensing as an information channel
in which an input signal (the direction of a chemical source) is noisily transmitted via a gradient of diffusing signaling molecules; and the "received signal" is the spatiotemporal pattern of binding
events between cAMP and the cAMP receptors [8]. We neglect downstream intracellular events,
which cannot increase the mutual information between the state of the cell and the direction of the
imposed extracellular gradient [9].
The analysis of any signal transmission system depends on precise representation of the noise corrupting transmitted signals. We develop a Monte Carlo simulation (MCell, [10, 11]) in which a simulated cell is exposed to a cAMP distribution that evolves from a uniform background to a gradient
at low (1 nMol) average concentration. The noise inherent in the communication of a diffusionmediated signal is accurately represented by this method. Our approach bridges both the transient
and the steady state regimes and allows us to estimate the amount of stimulus-related information
that is in principle available to the cell through its receptors as a function of time after stimulus
initiation. Other efforts to address aspects of cell signaling using the conceptual tools of information theory have considered neurotransmitter release [3] and sensing temporal signals [4], but not
gradient sensing in eukaryotic cells.
A typical natural habitat for social amoebae such as Dictyostelium is the complex anisotropic threedimensional matrix of the forest floor. Under experimental conditions cells typically aggregate on
a flat two-dimensional surface. We approach the problem of gradient sensing on a sphere, which
is both harder and more natural for the ameoba, while still simple enough for us analytically and
numerically. Directional data is naturally described using unit vectors in spherical coordinates, but
the ameobae receive signals as binding events involving intramembrane protein complexes, so we
have developed a method for projecting the ensemble of receptor bindings onto coordinates in R3 .
In loose analogy with the chemotactic efficiency [5], we compare the projected directional estimate
with the true gradient direction represented as a unit vector on S2 . Consistent with observed timing
of the cell?s response to cAMP stimulation, we find that the directional signal converges quickly
enough for the cell to make a decision about which direction to move within the first two seconds
following stimulus onset.
2 Methods

2.1 Monte Carlo simulations
Using MCell and DReAMM [10, 11] we construct a spherical cell (radius R = 7.5 µm, [12]) centered in a cubic volume (side length L = 30 µm). N = 980 triangular tiles partition the surface (mesh generated by DOME¹); each contained one cell surface receptor for cAMP with binding rate k₊ = 4.4 × 10⁷ sec⁻¹M⁻¹, first-order cAMP unbinding rate k₋ = 1.1 sec⁻¹ [12] and K_eq = k₋/k₊ = 25 nMol cAMP.

We established a baseline concentration of approximately 1 nMol by releasing a cAMP bolus at time 0 inside the cube with zero-flux boundary conditions imposed on each wall. At t = 2 seconds we introduced a steady flux at the x = −L/2 wall of 1 molecule of cAMP per square micron per msec, adding signaling molecules from the left. Simultaneously, the x = +L/2 wall of the cube assumes absorbing boundary conditions. The new boundary conditions lead (at equilibrium) to a linear gradient of 2 nMol/30 µm, ranging from ≈ 2.0 nMol at the flux source wall to ≈ 0 nMol at the absorbing wall (see Figure 1); the concentration profile approaches this new steady state with time constant of approximately 1.25 msec. Sampling boxes centered along the planes x = ±13.5 µm measured the local concentration, allowing us to validate the expected model behavior.
Figure 1: Gradient sensing simulations performed with MCell (a Monte Carlo simulator of cellular microphysiology, http://www.mcell.cnl.salk.edu/) and rendered with DReAMM (Design, Render, and Animate MCell Models, http://www.mcell.psc.edu/). The model cell comprised a sphere triangulated with 980 tiles with one cAMP receptor per tile. Cell radius R = 7.5 µm; cube side L = 30 µm. Left: Initial equilibrium condition, before imposition of gradient. [cAMP] ≈ 1 nMol (c. 15,000 molecules in the volume outside the sphere). Right: Gradient condition after transient (c. 15,000 molecules; see Methods for details).
2.2 Analysis

2.2.1 Assumptions
We make the following assumptions to simplify the analysis of the distribution of receptor activities
at equilibrium, whether pre- or post-stimulus onset:
1. Independence. At equilibrium, the state of each receptor (bound vs unbound) is independent
of the states of the other receptors.
2. Linear Gradient. At equilibrium under the imposed gradient condition, the concentration
of ligand molecule varies linearly with position along the gradient axis.
3. Symmetry.
¹ http://nwg.phy.bnl.gov/~bviren/uno/other/
(a) Rotational equivariance of receptor activities. In the absence of an applied gradient
signal, the probability distribution describing the receptor states is equivariant with
respect to arbitrary rotations of the sphere.
(b) Rotational invariance of gradient direction. The imposed gradient seen by a model
cell is equally likely to be coming from any direction; therefore the gradient direction
vector is uniformly distributed over S².
(c) Axial equivariance about the gradient direction. Once a gradient direction is imposed,
the probability distribution describing receptor states is rotationally equivariant with
respect to rotations about the axis parallel with the gradient.
Berg and Purcell [1] calculate the inaccuracy in concentration estimates due to nonindependence of adjacent receptors; for our parameters (effective receptor radius = 5 nm, receptor spacing ≈ 1 µm) the fractional error in estimating concentration differences due to receptor nonindependence is negligible (≲ 10⁻¹¹) [1, 2].
Because we fix receptors to be in 1:1 correspondence with surface tiles, spherical symmetry and
uniform distribution of the receptors are only approximate. The gradient signal communicated via
diffusion does not involve sharp spatial changes on the scale of the distance between nearby receptors, therefore spherical symmetry and uniform identical receptor distribution are good analytic
approximations of the model configuration. By rotational equivariance we mean that combining
any rotation of the sphere with a corresponding rotation of the indices labeling the N receptors,
{j = 1, ..., N}, leads to a statistically indistinguishable distribution of receptor activities. This
same spherical symmetry is reflected in the a priori distribution of gradient directions, which is
uniform over the sphere (with density 1/4?). Spherical symmetry is broken by the gradient signal,
which fixes a preferred direction in space. About this axis however, we assume the system retains
the rotational symmetry of the cylinder.
2.2.2 Mutual information of the receptors
In order to quantify the directional information available to the cell from its surface receptors we construct an explicit model for the receptor states and the cell's estimated direction. We model the receptor states via a collection of random variables {B_j} and develop an expression for the entropy of {B_j}. Then in section 2.2.3 we present a method for projecting a temporally filtered estimated direction, ĝ, into three (rather than N) dimensions.

Let the random variables {B_j}_{j=1}^N represent the states of the N cAMP receptors on the cell surface; B_j = 1 if the receptor is bound to a molecule of cAMP, otherwise B_j = 0. Let x⃗_j ∈ S² represent the direction from the center of the cell to the j-th receptor. Invoking assumption 2 above, we take the equilibrium concentration of cAMP at x⃗ to be c(x⃗ | g⃗) = a + b(x⃗ · g⃗) where g⃗ ∈ S² is a unit vector in the direction of the gradient. The parameter a is the mean concentration over the cell surface, and b = R|∇c| is half the drop in concentration from one extreme on the cell surface to the other. Before the stimulus begins, the gradient direction is undefined.
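Under this concentration model, the equilibrium probability that a single receptor at polar angle θ from the gradient axis is bound is its occupancy c/(c + K_eq); a minimal sketch (our own, using the a and b values quoted in the next subsection):

```python
import numpy as np

a, b, Keq = 1.078, 0.512, 25.0      # nMol; simulation values quoted below

def p_bound(theta):
    """P(B_j = 1) for a receptor at angle theta from the gradient axis:
    occupancy c/(c + Keq) with c(theta) = a + b*cos(theta)."""
    c = a + b * np.cos(theta)
    return c / (c + Keq)
```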
It can be shown (see Supplemental Materials) that the entropy of receptor states given a fixed gradient direction g⃗, H[{B_j} | g⃗], is given by an integral over the sphere:

$$H[\{B_j\} \mid \vec{g}\,] \approx N \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \eta\!\left[\frac{a + b\cos\theta}{a + b\cos\theta + K_{eq}}\right]\frac{\sin\theta}{4\pi}\,d\phi\,d\theta \quad (\text{as } N \to \infty). \tag{1}$$

On the other hand, if the gradient direction remains unspecified, the entropy of receptor states is given by

$$H[\{B_j\}] \approx N\,\eta\!\left[\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \frac{a + b\cos\theta}{a + b\cos\theta + K_{eq}}\,\frac{\sin\theta}{4\pi}\,d\phi\,d\theta\right] \quad (\text{as } N \to \infty), \tag{2}$$

where

$$\eta[p] = \begin{cases} -\bigl(p\log_2(p) + (1-p)\log_2(1-p)\bigr), & 0 < p < 1 \\ 0, & p = 0 \text{ or } 1 \end{cases}$$

denotes the entropy for a binary random variable with state probabilities p and (1 − p).

In both equations (1) and (2), the argument of η is a probability taking values 0 ≤ p ≤ 1. In (1) the values of η are averaged over the sphere; in (2) η is evaluated after averaging probabilities. Because η[p] is concave for 0 ≤ p ≤ 1, the integral in equation (1) cannot exceed that in equation (2). Therefore the mutual information upon receiving the signal is nonnegative (as expected):

$$MI[\{B_j\};\vec{g}\,] = H[\{B_j\}] - H[\{B_j\} \mid \vec{g}\,] \geq 0.$$
The analytic solution for equation (1) involves the polylogarithm function. For the parameters shown
in the simulation (a = 1.078 nMol, b = .512 nMol, Keq = 25 nMol), the mutual information with
980 receptors is 2.16 bits. As one would expect, the mutual information peaks when the mean
concentration is close to the Keq of the receptor, exceeding 16 bits when a = 25, b = 12.5 and
Keq = 25 (nMol).
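Since the φ integral only contributes a factor of 2π, the quoted numbers can be checked with one-dimensional quadrature; the sketch below is our own verification code, with parameter values taken from the text:

```python
import numpy as np
from scipy.integrate import quad

a, b, Keq, N = 1.078, 0.512, 25.0, 980

def eta(p):                         # binary entropy in bits
    return 0.0 if p <= 0.0 or p >= 1.0 else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def p_bound(th):
    c = a + b * np.cos(th)
    return c / (c + Keq)

# Eq. (1): eta averaged over the sphere (the phi integral gives 2*pi)
H_cond = N * quad(lambda th: eta(p_bound(th)) * np.sin(th) / 2.0, 0.0, np.pi)[0]
# Eq. (2): eta of the sphere-averaged binding probability
p_bar = quad(lambda th: p_bound(th) * np.sin(th) / 2.0, 0.0, np.pi)[0]
H_marg = N * eta(p_bar)
print(H_marg - H_cond)              # mutual information; ~2.16 bits expected
```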
2.2.3 Dimension reduction
The estimate obtained above does not tell us how quickly the directional information available to the cell evolves over time. Direct estimation of the mutual information from stochastic simulations is impractical because the aggregate random variables occupy a 980-dimensional space that a limited number of simulation runs cannot sample adequately. Instead, we construct a deterministic function from the set of 980 time courses of the receptors, {B_j(t)}, to an aggregate directional estimate in R³. Because of the cylindrical symmetry inherent in the system, our directional estimator ĝ is an unbiased estimator of the true gradient direction g⃗. The estimator ĝ(t) may be thought of as representing a downstream chemical process that accumulates directional information and decays with some time constant τ. Let {x⃗_j}_{j=1}^N be the spatial locations of the N receptors on the cell's surface. Each vector is associated with a weight w_j. Whenever the j-th receptor binds a cAMP molecule, w_j is incremented by one; otherwise w_j decays with time constant τ. We construct an instantaneous estimate of the gradient direction from the linear combination of receptor positions, ĝ_τ(t) = Σ_{j=1}^N w_j(t) x⃗_j. This procedure reflects the accumulation and reabsorption of intracellular second messengers released from the cell membrane upon receptor binding.
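In discrete time this estimator is a leaky integrator over binding events; the sketch below is our own (the event bookkeeping is hypothetical), decaying each weight by exp(−Δt/τ) per step and incrementing it by one per binding:

```python
import numpy as np

def direction_estimate(bind_times, x, tau, dt, T):
    """g_hat_tau(t) = sum_j w_j(t) x_j for a leaky integrator of bindings.

    bind_times: list of N arrays of binding times (s), one per receptor
    x:          (N, 3) unit vectors to the receptor positions
    tau, dt, T: decay constant, time step, and total duration (s)
    """
    w = np.zeros(x.shape[0])
    decay = np.exp(-dt / tau)
    g_hat = []
    for step in range(int(T / dt)):
        t = step * dt
        w *= decay                      # exponential decay of all weights
        for j, times in enumerate(bind_times):
            w[j] += np.count_nonzero((times >= t) & (times < t + dt))
        g_hat.append(w @ x)             # instantaneous direction estimate
    return np.array(g_hat)              # shape (num_steps, 3)
```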
Before the stimulus is applied, the weighted directional estimates ĝ_τ are small in absolute magnitude, with direction uniformly distributed on S². In order to determine the information gained as the estimate vector evolves after stimulus application, we wish to determine the change in entropy in an ensemble of such estimates. As the cell gains information about the direction of the gradient signal from its receptors, the entropy of the estimate should decrease, leading to a rise in mutual information. By repeating multiple runs (M = 600) of the simulation we obtain samples from the ensemble of direction estimates, given a particular stimulus direction, g⃗. In the method of Kozachenko and Leonenko [13], adapted for the analysis of neural spike train data by Victor [14] (the "KLV method"), the cumulative distribution function is approximated directly from the observed samples, and the entropy is estimated via a change of variables transformation (see below). This method may be formulated in vector spaces R^d for d > 1 ([13]), but it is not guaranteed to be unbiased in the multivariate case [15] and has not been extended to curved manifolds such as the sphere. In the present case, however, we may exploit the symmetries inherent in the model (Assumptions 3a-3c) to reduce the empirical entropy estimation problem to one dimension.
Adapting the argument in [14] to the case of spherical data from a distribution with rotational symmetry about a given axis, we obtain an estimate of the entropy based on a series of observations of the angles {θ_1, ..., θ_M} between the estimates ĝ_τ and the true gradient direction g⃗ (for details, see Supplemental Materials):

$$H \approx \frac{1}{M}\sum_{k=1}^{M}\left[\log_2(\lambda_k) + \log_2\bigl(2(M-1)\bigr) + \frac{\gamma}{\log_e(2)} + \log_2(2\pi) + \log_2\bigl(\sin(\theta_k)\bigr)\right] \tag{3}$$

(as M → ∞), where, after sorting the θ_k in monotonic order, λ_k = min(|θ_k − θ_{k±1}|) is the distance between each angle and its nearest neighbor in the sample, and γ is the Euler-Mascheroni constant. As shown in Figure 2, this approximation agrees with the analytic result for the uniform distribution, H_unif = log₂(4π) ≈ 3.651.
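Equation (3) is direct to implement once the angles are sorted; the helper below is our sketch (coincident angles would need a small jitter, since a zero nearest-neighbor gap makes the logarithm diverge):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def klv_entropy_bits(angles):
    """Entropy estimate (bits) from angles between g_hat_tau and g, eq. (3)."""
    th = np.sort(np.asarray(angles, dtype=float))
    M = th.size
    gaps = np.diff(th)
    lam = np.minimum(np.r_[gaps, np.inf], np.r_[np.inf, gaps])  # NN distances
    return (np.mean(np.log2(lam) + np.log2(np.sin(th)))
            + np.log2(2.0 * (M - 1)) + EULER_GAMMA / np.log(2.0)
            + np.log2(2.0 * np.pi))

# sanity check against H_unif = log2(4*pi) ~= 3.651 for uniform directions,
# whose angles to a fixed axis have density sin(theta)/2:
# theta = np.arccos(1.0 - 2.0 * np.random.rand(100000)); klv_entropy_bits(theta)
```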
3 Results
Figure 2 shows the results of M = 600 simulation runs. Panel A shows the concentration averaged across a set of 1 µm³ sample boxes, four in the x = −13.5 µm plane and four in the x = +13.5 µm
Figure 2: Monte Carlo simulation results and information analysis. A: Average concentration profiles along two planes perpendicular to the gradient, at x = ±13.5 µm. B: Estimated direction vector (x, y, and z components; x = dark blue trace) ĝ_τ, τ = 500 msec. C: Entropy of the ensemble of directional vector estimates for different values of the intracellular filtering time constant τ. Given the directions of the estimates θ_k, φ_k on each of M runs, we calculate the entropy of the ensemble using equation (3). All time constants yield uniformly distributed directional estimates in the pre-stimulus period, 0 ≤ t ≤ 2 (sec). After stimulus onset, directional estimates obtained with shorter time constants respond more quickly but achieve smaller gains in mutual information (smaller reductions in entropy). Filtering time constants τ range from lightest to darkest colors: 20, 50, 100, 200, 500, 1000, 2000 msec.
plane. The initial bolus of cAMP released into the volume at t = 0 sec is not uniformly distributed,
but spreads out evenly within 0.25 sec. At t = 2.0 sec the boundary conditions are changed, causing
a gradient to emerge along a realistic time course. Consistent with the analytic solution for the
mean concentration (not shown), the concentration approaches equilibrium more rapidly near the
absorbing wall (descending trace) than at the imposed flux wall (ascending trace).
Panel B shows the evolution of a directional estimate vector ĝ_τ for a single run, with τ = 500 msec. During uniform conditions all vectors fluctuate near the origin. After gradient onset the variance increases and the x component (dark trace) becomes biased towards the gradient source (g⃗ = [−1, 0, 0]) while the y and z components still have a mean of zero. Across all 600 runs the mean of the y and z components remains close to zero, while the mean of the x component systematically departs from zero shortly after stimulus onset (not shown). Hence the directional estimator is unbiased (as required by symmetry). See Supplemental Materials for the population average of ĝ.

Panel C shows the time course of the entropy of the ensemble of normalized directional estimate vectors ĝ_τ/|ĝ_τ| over M = 600 simulations, for intracellular filtering time constants ranging from 20
entropy decreases steadily, showing an increase in information available to the amoeba about the
direction of the stimulus; the mutual information at a given point in time is the difference between
the entropy at that time and before stimulus onset.
For a cell with roughly 1000 receptors the mutual information has increased at most by ≈ 2 bits of information by one second (for τ = 500 msec), and at most by ≈ 3 bits of information by two seconds (for τ = 1000 or 2000 msec), under our stimulation protocol. A one bit reduction in uncertainty is equivalent to identifying the correct value of the x component (positive versus negative) when the stimulus direction is aligned along the x-axis. Alternatively, note that a one bit reduction results in going from the uniform distribution on the sphere to the uniform distribution on one hemisphere. For τ ≤ 100 msec, the weighted average with decay time τ never gains more than one bit of information about the stimulus direction, even at long times. This observation suggests that signaling must involve some chemical components with lifetimes longer than 100 msec. The τ = 200 msec filter saturates after about one second, at ≈ 1 bit of information gain.

Longer lived second messengers would respond more slowly to changes from the background stimulus distribution, but would provide better, more informative estimates over time. The τ = 500 msec estimate gains roughly two bits of information within 1.5 seconds, but not much more over time. Heuristically, we may think of a two bit gain in information as corresponding to the change from a uniform distribution to one uniformly covering one quarter of S², i.e. all points within π/3 of the true direction. Within two seconds the τ = 1000 msec and τ = 2000 msec weighted averages have each gained approximately three bits of information, equivalent to a uniform distribution covering all points within 0.23π (about 41°) of the true direction.
4 Discussion & conclusions
Clearly there is an opportunity for more precise control of experimental conditions to deepen our
understanding of spatio-temporal information processing at the membranes of gradient-sensitive
cells. Efforts in this direction are now using microfluidic technology to create carefully regulated
spatial profiles for probing cellular responses [16]. Our results suggest that molecular processes
relevant to these responses must have lasting effects ≳ 100 msec.
We use a static, immobile cell. Could cell motion relative to the medium increase sensitivity to
changes in the gradient? No: the Dictyostelium velocity required to affect concentration perception is on the order of 1 cm sec⁻¹ [1], whereas reported velocities are on the order of µm sec⁻¹ [5].
The chemotactic response mechanism is known to begin modifying the cell membrane on the edge
facing up the gradient within two seconds after stimulus initiation [7, 6], suggesting that the cell
strikes a balance between gathering data and deciding quickly. Indeed, our results show that the
reported activation of the G-protein signaling system on the leading edge of a chemotactically responsive cell [7] rises at roughly the same rate as the available chemotactic information. Results
such as these ([7, 6]) are obtained by introducing a pipette into the medium near the amoeba; the
magnitude and time course of cAMP release are not precisely known, and when estimated the cAMP
concentration at the cell surface exceeds 25 nMol by a full order of magnitude.
Thomson and Kristan [17] show that for discrete probability distributions and for continuous distributions over linear spaces, stimulus discriminability may be better quantified using ideal observer analysis (mean squared error, for continuous variables) than information theory. The machinery of mean squared error (variance, expectation) does not carry over to the case of directional data without fundamental modifications [18]; in particular the notion of mean squared error is best represented by the mean resultant length 0 ≤ ρ ≤ 1, the expected length of the vector average of a collection of unit vectors representing samples from directional data. A resultant with length ρ ≈ 1 corresponds to a highly focused probability density function on the sphere. In addition to measuring the mutual information between the gradient direction and an intracellular estimate of direction, we also calculated the time evolution of ρ (see Supplemental Materials). We find that ρ rapidly approaches 1 and can exceed 0.9, depending on τ. We found that in this case at least the behavior of the mean resultant length and the mutual information are very similar; there is no evidence of discrepancies of the sort described in [17].
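For reference, the mean resultant length is simply the norm of the average unit vector; a minimal sketch (ours):

```python
import numpy as np

def mean_resultant_length(G):
    """rho in [0, 1] for an (M, 3) array of direction estimates."""
    U = G / np.linalg.norm(G, axis=1, keepdims=True)   # normalize each row
    return float(np.linalg.norm(U.mean(axis=0)))
```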
We have shown that the mutual information between an arbitrarily oriented stimulus and the directional signal available at the cell?s receptors evolves with a time course consistent with observed
reaction times of Dictyostelium amoeba. Our results reinforce earlier claims that information theory
can play a role in understanding biochemical cellular communication.
Acknowledgments
MCell simulations were run on the Oberlin College Beowulf Cluster, supported by NSF grant CHE0420717.
References
[1] Howard C. Berg and Edward M. Purcell. Physics of chemoreception. Biophysical Journal, 20:193, 1977.
[2] William Bialek and Sima Setayeshgar. Physical limits to biochemical signaling. PNAS, 102(29):10040?
10045, July 19 2005.
[3] S. Qazi, A. Beltukov, and B.A. Trimmer. Simulation modeling of ligand receptor interactions at nonequilibrium conditions: processing of noisy inputs by ionotropic receptors. Math Biosci., 187(1):93?110,
Jan 2004.
[4] D. J. Spencer, S. K. Hampton, P. Park, J. P. Zurkus, and P. J. Thomas. The diffusion-limited biochemical
signal-relay channel. In S. Thrun, L. Saul, and B. Sch?olkopf, editors, Advances in Neural Information
Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[5] P.R. Fisher, R. Merkl, and G. Gerisch. Quantitative analysis of cell motility and chemotaxis in Dictyostelium discoideum by using an image processing system and a novel chemotaxis chamber providing
stationary chemical gradients. J. Cell Biology, 108:973?984, March 1989.
[6] Carole A. Parent, Brenda J. Blacklock, Wendy M. Froehlich, Douglas B. Murphy, and Peter N. Devreotes.
G protein signaling events are activated at the leading edge of chemotactic cells. Cell, 95:81?91, 2 October
1998.
[7] Xuehua Xu, Martin Meier-Schellersheim, Xuanmao Jiao, Lauren E. Nelson, and Tian Jin. Quantitative
imaging of single live cells reveals spatiotemporal dynamics of multistep signaling events of chemoattractant gradient sensing in dictyostelium. Molecular Biology of the Cell, 16:676?688, February 2005.
[8] Jan Wouter-Rappel, Peter. J Thomas, Herbert Levine, and William F. Loomis. Establishing direction
during chemotaxis in eukaryotic cells. Biophys. J., 83:1361?1367, 2002.
[9] T.M. Cover and J.A. Thomas. Elements of Information Theory. John Wiley, New York, 1990.
[10] J. R. Stiles, D. Van Helden, T. M. Bartol, E.E. Salpeter, and M. M. Salpeter. Miniature endplate current rise times less than 100 microseconds from improved dual recordings can be modeled with passive
acetylcholine diffusion from a synaptic vesicle. Proc. Natl. Acad. Sci. U.S.A., 93(12):5747?52, Jun 11
1996.
[11] J. R. Stiles and T. M. Bartol. Computational Neuroscience: Realistic Modeling for Experimentalists,
chapter Monte Carlo methods for realistic simulation of synaptic microphysiology using MCell, pages
87?127. CRC Press, Boca Raton, FL, 2001.
[12] M. Ueda, Y. Sako, T. Tanaka, P. Devreotes, and T. Yanagida. Single-molecule analysis of chemotactic
signaling in Dictyostelium cells. Science, 294:864?867, October 2001.
[13] L.F. Kozachenko and N.N. Leonenko. Probl. Peredachi Inf. [Probl. Inf. Transm.], 23(9):95, 1987.
[14] Jonathan D. Victor. Binless strategies for estimation of information from neural data. Physical Review E,
66:051903, Nov 11 2002.
[15] Marc M. Van Hulle. Edgeworth approximation of multivariate differential entropy. Neural Computation,
17:1903?1910, 2005.
[16] Loling Song, Sharvari M. Nadkarni, Hendrik U. Bödeker, Carsten Beta, Albert Bae, Carl Franck,
Wouter-Jan Rappel, William F. Loomis, and Eberhard Bodenschatz. Dictyostelium discoideum chemotaxis: Threshold for directed motion. Euro. J. Cell Bio, 85(9-10):981?9, 2006.
[17] Eric E. Thomson and William B. Kristan. Quantifying stimulus discriminability: A comparison of information theory and ideal observer analysis. Neural Computation, 17:741?778, 2005.
[18] Kanti V. Mardia and Peter E. Jupp. Directional Statistics. John Wiley & Sons, West Sussex, England,
2000.
| 2995 |@word cylindrical:1 heuristically:1 simulation:14 invoking:1 shading:1 harder:1 carry:1 reduction:4 phy:1 configuration:1 series:1 initial:2 reaction:4 existing:1 current:2 jupp:1 activation:1 must:2 john:2 mesh:1 chicago:1 subsequent:1 partition:1 realistic:3 informative:1 analytic:4 drop:1 v:1 stationary:1 half:1 plane:4 eukaryote:1 filtered:2 provides:1 math:2 location:1 along:5 direct:1 differential:1 beta:1 inside:1 indeed:1 expected:3 equivariant:2 roughly:3 kanti:1 rapid:1 simulator:1 behavior:2 spherical:7 gov:1 becomes:1 cleveland:1 estimating:1 begin:2 panel:3 medium:2 unbinding:2 cm:1 unspecified:1 developed:1 supplemental:4 dubbed:1 transformation:1 impractical:1 temporal:2 quantitative:4 control:1 unit:4 grant:1 bio:1 positive:1 before:4 negligible:1 dropped:1 bind:2 timing:1 limit:2 local:1 acad:1 despite:1 receptor:50 accumulates:1 establishing:1 fluctuation:1 multistep:1 approximately:3 bnl:1 discriminability:2 quantified:1 co:4 limited:3 psc:1 graduate:1 statistically:1 averaged:2 perpendicular:1 range:1 acknowledgment:1 tian:1 directed:1 endplate:1 edgeworth:1 communicated:1 signaling:13 procedure:1 displacement:1 jan:3 empirical:1 secreted:1 adapting:2 thought:1 pre:2 protein:4 suggest:1 cannot:3 onto:1 close:2 live:1 descending:1 www:3 accumulation:1 imposed:6 deterministic:1 center:4 equivalent:2 convex:1 focused:1 mascheroni:1 identifying:2 react:1 estimator:4 population:2 notion:1 coordinate:2 play:3 trigger:1 carl:1 origin:2 associate:1 velocity:2 element:1 approximated:1 observed:3 role:3 levine:1 boca:1 calculate:2 wj:4 movement:1 incremented:1 decrease:2 environment:2 broken:1 dynamic:1 carrying:1 exposed:1 animate:1 vesicle:1 upon:2 efficiency:2 eric:1 transm:1 represented:3 neurotransmitter:1 chapter:1 train:2 jiao:1 effective:1 monte:6 labeling:1 aggregate:4 starvation:1 outside:1 tell:1 larger:1 cnl:1 otherwise:2 triangular:1 statistic:1 think:1 noisy:3 sequence:1 biophysical:1 interaction:1 product:1 coming:1 causing:1 aligned:1 combining:1 relevant:1 rapidly:4 achieve:1 subcellular:1 validate:1 lauren:1 olkopf:1 parent:1 cluster:1 transmission:1 converges:1 depending:1 develop:2 axial:1 measured:3 nearest:1 received:2 edward:1 c:1 involves:1 triangulated:1 quantify:1 direction:43 guided:1 radius:3 correct:1 filter:1 stochastic:1 modifying:1 centered:2 transient:2 hampton:1 material:4 crc:1 fix:2 wall:7 spencer:1 chemotactic:7 considered:1 deciding:1 presumably:1 equilibrium:8 bj:12 claim:2 reserve:1 miniature:1 early:1 released:2 relay:1 estimation:3 proc:1 fluorescence:1 bridge:1 agrees:1 sensitive:1 create:1 tool:2 reflects:1 weighted:3 mit:1 clearly:1 rather:1 pn:1 fluctuate:1 varying:1 acetylcholine:1 release:2 rappel:2 superimposed:1 baseline:1 kristan:2 sense:2 camp:26 detect:1 froehlich:1 biochemical:5 typically:1 going:1 dual:1 html:1 priori:1 development:1 spatial:4 mutual:16 cube:3 construct:5 once:1 never:1 sampling:1 biology:3 identical:1 park:1 survive:1 discrepancy:1 stimulus:22 simplify:1 richard:1 few:1 inherent:4 oriented:2 simultaneously:1 individual:4 murphy:1 william:4 attempt:1 cylinder:1 highly:1 wouter:2 extreme:1 light:1 undefined:1 activated:1 natl:1 chain:1 integral:2 edge:3 experience:1 shorter:1 machinery:2 nwg:1 loge:1 reactant:1 increased:1 modeling:3 earlier:2 cover:1 retains:1 measuring:1 kimmel:1 introducing:1 addressing:1 euler:1 uniform:10 comprised:1 nonequilibrium:1 reported:2 varies:1 spatiotemporal:4 density:2 peak:2 fundamental:2 sensitivity:1 eberhard:1 physic:1 chemotaxis:5 receiving:1 quickly:4 squared:3 
Distributed Inference in Dynamical Systems
Stanislav Funiak Carlos Guestrin
Carnegie Mellon University
Mark Paskin
Google
Rahul Sukthankar
Intel Research
Abstract
We present a robust distributed algorithm for approximate probabilistic inference
in dynamical systems, such as sensor networks and teams of mobile robots. Using
assumed density filtering, the network nodes maintain a tractable representation
of the belief state in a distributed fashion. At each time step, the nodes coordinate
to condition this distribution on the observations made throughout the network,
and to advance this estimate to the next time step. In addition, we identify a
significant challenge for probabilistic inference in dynamical systems: message
losses or network partitions can cause nodes to have inconsistent beliefs about the
current state of the system. We address this problem by developing distributed
algorithms that guarantee that nodes will reach an informative consistent distribution when communication is re-established. We present a suite of experimental
results on real-world sensor data for two real sensor network deployments: one
with 25 cameras and another with 54 temperature sensors.
1 Introduction
Large-scale networks of sensing devices have become increasingly pervasive, with applications
ranging from sensor networks and mobile robot teams to emergency response systems. Often, nodes
in these networks need to perform probabilistic dynamic inference to combine a sequence of local,
noisy observations into a global, joint estimate of the system state. For example, robots in a team
may combine local laser range scans, collected over time, to obtain a global map of the environment;
nodes in a camera network may combine a set of image sequences to recognize moving objects in a
heavily cluttered scene. A simple approach to probabilistic dynamic inference is to collect the data
to a central location, where the processing is performed. Yet, collecting all the observations is often
impractical in large networks, especially if the nodes have a limited supply of energy and communicate over a wireless network. Instead, the nodes need to collaborate, to solve the inference task
in a distributed manner. Such distributed inference techniques are also necessary in online control
applications, where nodes of the network need estimates of the state in order to make decisions.
Probabilistic dynamic inference can often be efficiently solved when all the processing is performed centrally. For example, in linear systems with Gaussian noise, the inference tasks can be
solved in a closed form with a Kalman Filter [3]; for large systems, assumed density filtering can
often be used to approximate the filtered estimate with a tractable distribution (c.f., [2]). Unfortunately, distributed dynamic inference is substantially more challenging. Since the observations are
distributed across the network, nodes must coordinate to incorporate each others? observations and
propagate their estimates from one time step to the next. Online operation requires the algorithm
to degrade gracefully when nodes run out of processing time before the observations propagate
throughout the network. Furthermore, the algorithm needs to robustly address node failures and
interference that may partition the communication network into several disconnected components.
We present an efficient distributed algorithm for dynamic inference that works on a large family
of processes modeled by dynamic Bayesian networks. In our algorithm, each node maintains a
(possibly approximate) marginal distribution over a subset of state variables, conditioned on the
measurements made by the nodes in the network. At each time step, the nodes condition on the
observations, using a modification of the robust (static) distributed inference algorithm [7], and
then advance their estimates to the next time step locally. The algorithm guarantees that, with
sufficient communication at each time step, the nodes obtain the same solution as the corresponding
centralized algorithm [2]. Before convergence, the algorithm introduces principled approximations
in the form of independence assertions in the node estimates and in the transition model.
In the presence of unreliable communication or high latency, the nodes may not be able to condition their estimates on all the observations in the network, e.g., when interference causes a network
partition, or when high latency prevents messages from reaching every node. Once the estimates are
advanced to the next time step, it is difficult to condition on the observations made in the past [10].
Hence, the beliefs at the nodes may be conditioned on different evidence and no longer form a consistent global probability distribution over the state space. We show that such inconsistencies can
lead to poor results when nodes attempt to combine their estimates. Nevertheless, it is often possible
to use the inconsistent estimates to form an informative globally consistent distribution; we refer to
this task as alignment. We propose an online algorithm, optimized conditional alignment (OCA),
that obtains the global distribution as a product of conditionals from local estimates and optimizes
over different orderings to select a global distribution of minimal entropy. We also propose an alternative, more global optimization approach that minimizes a KL divergence-based criterion and
provides accurate solutions even when the communication network is highly fragmented.
We present experimental results on real-world sensor data, covering sensor calibration [7] and
distributed camera localization [5]. These results demonstrate the convergence properties of the
algorithm, its robustness to message loss and network partitions, and the effectiveness of our method
at recovering from inconsistencies.
Distributed dynamic inference has received some attention in the literature. For example, particle filtering (PF) techniques have been applied to these settings: Zhao et al. [11] use (mostly)
independent PFs to track moving objects, and Rosencrantz et al. [10] run PFs in parallel, sharing
measurements as appropriate. Pfeffer and Tai [9] use loopy belief propagation to approximate the
estimation step in a continuous-time Bayesian network. When compared to these techniques, our
approach addresses several additional challenges: we do not assume point-to-point communication
between nodes, we provide robustness guarantees to node failures and network partitions, and we
identify and address the belief inconsistency problem that arises in distributed systems.
2 The distributed dynamic inference problem
Following [7], we assume a network model where each node can perform local computations and
communicate with other nodes over some channel. The nodes of the network may change over
time: existing nodes can fail, and new nodes may be introduced. We assume a message-level error
model: messages are either received without error, or they are not received at all. The likelihood of
successful transmissions (link qualities) are unknown and can change over time, and link qualities
of several node pairs may be correlated.
We model the system as a dynamic Bayesian network (DBN). A DBN consists of a set of state
processes, X = {X1 , . . . , XL } and a set of observed measurement processes Z = {Z1 , . . . , ZK };
each measurement process Zk corresponds to one of the sensors on one of the nodes. State processes
are not associated with unique nodes. A DBN defines a joint probability model over steps 1 . . . T as
p(X^(1:T), Z^(1:T)) = p(X^(1)) · ∏_{t=2}^{T} p(X^(t) | X^(t−1)) · ∏_{t=1}^{T} p(Z^(t) | X^(t)),   (1)
where the three factors are the initial prior, the transition model, and the measurement model, respectively.
The initial prior is given by a factorized probability model p(X^(1)) ∝ ∏_h ψ(A_h), where each
A_h ⊆ X is a subset of the state processes. The transition model factors as ∏_{i=1}^{L} p(Xi^(t) | Pa[Xi^(t)]),
where Pa[Xi^(t)] are the parents of Xi in the previous time step. The measurement model factors
as ∏_{k=1}^{K} p(Zk^(t) | Pa[Zk^(t)]), where Pa[Zk^(t)] ⊆ X^(t) are the parents of Zk in the current time step.
In the distributed dynamic inference problem, each node n is associated with a set of processes
Qn ⊆ X; these are the processes about which node n wishes to reason. The nodes need to collaborate
so that each node can obtain (an approximation to) the posterior distribution over Qn^(t) given
all measurements made in the network up to the current time step t: p(Qn^(t) | z^(1:t)). We assume that
node clocks are synchronized, so that transitions to the next time step are simultaneous.
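To make this factorization concrete, the following sketch (a hypothetical discrete example, not taken from the paper) evaluates the joint log-probability of Equation 1 for a toy DBN with two binary state processes and one measurement process; all tables and values here are illustrative assumptions.

import numpy as np

# Hypothetical tabular DBN: L = 2 binary state processes, K = 1 measurement.
p_init = np.full((2, 2), 0.25)                # initial prior p(X1^(1), X2^(1)), uniform
p_trans = np.array([[0.9, 0.1], [0.2, 0.8]])  # per-process transition p(Xi^(t) | Xi^(t-1))
p_meas = np.array([[0.8, 0.2], [0.3, 0.7]])   # measurement p(Z^(t) | X1^(t))

def joint_log_prob(x_seq, z_seq):
    """log p(x^(1:T), z^(1:T)) following the factorization of Equation 1."""
    lp = np.log(p_init[x_seq[0][0], x_seq[0][1]])          # initial prior
    for t in range(1, len(x_seq)):                         # transition model, t = 2..T
        for i in range(2):
            lp += np.log(p_trans[x_seq[t - 1][i], x_seq[t][i]])
    for t in range(len(x_seq)):                            # measurement model, t = 1..T
        lp += np.log(p_meas[x_seq[t][0], z_seq[t]])
    return lp

print(joint_log_prob([(0, 0), (0, 1), (1, 1)], [0, 0, 1]))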
3 Filtering in dynamical systems
The goal of (centralized) filtering is to compute the posterior distribution p(X(t) | z(1:t) ) for
t = 1, 2, . . . as the observations z(1) , z(2) , . . . arrive. The basic approach is to recursively compute p(X(t+1) | z(1:t) ) from p(X(t) | z(1:t?1) ) in three steps:
1. Estimation: p(X^(t) | z^(1:t)) ∝ p(X^(t) | z^(1:t−1)) · p(z^(t) | X^(t));
2. Prediction: p(X^(t), X^(t+1) | z^(1:t)) = p(X^(t) | z^(1:t)) · p(X^(t+1) | X^(t));
3. Roll-up: p(X^(t+1) | z^(1:t)) = ∫ p(x^(t), X^(t+1) | z^(1:t)) dx^(t).
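For a discrete state space these three steps reduce to elementary matrix operations. The sketch below is a minimal tabular illustration under assumed transition and observation matrices; it is not the paper's implementation, which targets DBNs with assumed density filtering.

import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])  # assumed p(x^(t+1) | x^(t)); rows index x^(t)
O = np.array([[0.8, 0.2], [0.3, 0.7]])  # assumed p(z | x); rows index x

def filter_step(belief, z):
    """One recursion from p(x^(t) | z^(1:t-1)) to p(x^(t+1) | z^(1:t))."""
    post = belief * O[:, z]            # 1. estimation: multiply in the likelihood
    post /= post.sum()                 #    normalize to p(x^(t) | z^(1:t))
    joint = post[:, None] * A          # 2. prediction: p(x^(t), x^(t+1) | z^(1:t))
    return joint.sum(axis=0)           # 3. roll-up: marginalize out x^(t)

belief = np.array([0.5, 0.5])          # initial prior p(x^(1))
for z in [0, 0, 1]:
    belief = filter_step(belief, z)
print(belief)                          # p(x^(4) | z^(1:3))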
Exact filtering in DBNs is usually expensive or intractable because the belief state rapidly loses
all conditional independence structure. An effective approach, proposed by Boyen and Koller [2],
hereby denoted "B & K 98", is to periodically project the exact posterior to a distribution that satisfies
independence assertions encoded in a junction tree [3]. Given a junction tree T, with cliques {Ci}
and separators Si,j, the projection operation amounts to computing the clique marginals, hence
the filtered distribution is approximated as
p(X^(t) | z^(1:t−1)) ≈ p̂(X^(t) | z^(1:t−1)) = [∏_{i∈NT} p̂(Ci^(t) | z^(1:t−1))] / [∏_{{i,j}∈ET} p̂(Si,j^(t) | z^(1:t−1))],
where NT and ET are the nodes and edges of T, respectively. With this representation, the estimation
step is implemented by multiplying each observation likelihood p(zk^(t) | Pa[Zk^(t)]) to a
clique marginal; the clique and separator potentials are then recomputed with message passing,
so that the posterior distribution is once again written as a ratio of clique and separator marginals:
p̂(X^(t) | z^(1:t)) = [∏_{i∈NT} p̂(Ci^(t) | z^(1:t))] / [∏_{{i,j}∈ET} p̂(Si,j^(t) | z^(1:t))].
The prediction step is performed independently for each clique Ci^(t+1): we multiply p̂(X^(t) | z^(1:t)) with the transition
model p(X^(t+1) | Pa[X^(t+1)]) for each variable X^(t+1) ∈ Ci^(t+1) and, using variable elimination,
compute the marginals over the clique at the next time step p̂(Ci^(t+1) | z^(1:t)).
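The following sketch illustrates the B & K 98 projection on a toy three-variable chain (cliques {A, B} and {B, C}, separator {B}); the joint distribution is randomly generated and purely illustrative.

import numpy as np

p = np.random.default_rng(0).random((2, 2, 2))  # hypothetical joint p(A, B, C)
p /= p.sum()

pAB = p.sum(axis=2)            # clique marginal p(A, B)
pBC = p.sum(axis=0)            # clique marginal p(B, C)
pB = p.sum(axis=(0, 2))        # separator marginal p(B)

# Projection: p_hat(A, B, C) = p(A, B) * p(B, C) / p(B), i.e. the ratio of
# clique and separator marginals; exact iff A is independent of C given B.
p_hat = pAB[:, :, None] * pBC[None, :, :] / pB[None, :, None]
print(abs(p_hat.sum() - 1.0) < 1e-12)   # p_hat is a proper distribution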
4 Approximate distributed filtering
In principle, the centralized filtering approach described in the previous section could be applied to a
distributed system, e.g., by communicating the observations made in the network to a central location
that performs all computations, and distributing the answer to every node in the network. While
conceptually simple, this approach has substantial drawbacks, including the high communication
bandwidth, the introduction of a single point of failure to the system, and the fact that nodes do not
have valid estimates when the network is partitioned. In this section, we present a distributed filtering
algorithm where each node obtains an approximation to the posterior distribution over a subset of the
state variables. Our estimation step builds on the robust distributed inference algorithm of Paskin et
state variables. Our estimation step builds on the robust distributed inference algorithm of Paskin et
al. [7, 8], while the prediction, roll-up, and projection steps are performed locally at each node.
4.1 Estimation as a robust distributed probabilistic inference
In the distributed inference approach of Paskin et al. [8], the nodes collaborate so that each node
n can obtain the posterior distribution over some set of variables Qn given all measurements made
throughout the network. In our setting, Qn contains the variables in a subset Ln of the cliques
used in our assumed density representation. In their architecture, nodes form a distributed data
structure along a routing tree in the network, where each node in this tree is associated with a
cluster of variables Dn that includes Qn , as well as any other variables, needed to preserve the flow
of information between the nodes, a property equivalent to the running intersection property in
junction trees [3]. We refer to this tree as the network junction tree, and, for clarity, we refer to the
junction tree used for the assumed density as the external junction tree.
Using this architecture, Paskin and Guestrin developed a robust distributed probabilistic inference algorithm, RDPI [7], for static inference settings, where nodes compute the posterior distribution p(Qn | z) over Qn given all measurements throughout the network z. RDPI provides two crucial
properties: convergence, if there are no network partitions, these distributed estimates converge to
the true posteriors; and, smooth degradation even before convergence, the estimates provide a principled approximation to the true posterior (which introduces additional independence assertions).
In RDPI, each node n maintains the current belief β_n of p(Qn | z). Initially, node n knows
only the marginals of the prior distribution {p(Ci) : i ∈ Ln} for a subset of cliques Ln in the
external junction tree, and its local observation model p(zn | Pa[Zn]) for each of its sensors. We
assume that Pa[Zn] ⊆ Ci for some i ∈ Ln; thus, β_n is represented as a collection of priors over
cliques of variables, and of measurement likelihood functions over these variables. Messages are then
sent between neighboring nodes, in an analogous fashion to the sum-product algorithm for junction
trees [3]. However, messages in RDPI are always represented as a collection of priors {π_i(Ci)}
over cliques of variables Ci, and of measurement likelihood functions {λ_i(Ci)} over these cliques.
This decomposition into prior and likelihood factors is the key to the robustness properties of the
algorithm [7]. With sufficient communication, β_n converges to p(Qn | z).
In our setting, at each time step t, each prior π_i(Ci^(t)) is initialized to p(Ci^(t) | z^(1:t−1)). The
likelihood functions are similarly initialized to λ_i(Ci^(t)) = p(zi^(t) | Ci^(t)), if some sensor makes an
observation about these variables, or to 1 otherwise. Through message passing β_n converges to
p̂(Qn^(t) | z^(1:t)). An important property of RDPI that will be useful in the remainder of the paper is:
Property 1. Let β_n be the result computed by the RDPI algorithm at convergence at node n. Then
the cliques in β_n form a subtree of an external junction tree that covers Qn.
4.2 Prediction, roll-up and projection
The previous section shows that the estimation step can be implemented in a distributed manner,
using RDPI. At convergence, each node n obtains the calibrated marginals p̂(Ci^(t) | z^(1:t)), for i ∈
Ln. In order to advance to the next time step, each node must perform prediction and roll-up,
obtaining the marginals p̂(Ci^(t+1) | z^(1:t)). Recall from Section 3 that, in order to compute a marginal
p̂(Ci^(t+1) | z^(1:t)), this node needs p̂(X^(t) | z^(1:t)). Due to the conditional independencies encoded in
p̂(X^(t) | z^(1:t)), it is sufficient to obtain a subtree of the external junction tree that covers the parents
Pa[Ci^(t+1)] of all variables in the clique. The next time step marginal p̂(Ci^(t+1) | z^(1:t)) can then
be computed by multiplying this subtree with the transition model p(X^(t+1) | Pa[X^(t+1)]) for each
X^(t+1) ∈ Ci^(t+1) and eliminating all variables but Ci^(t+1) (recall that Pa[X^(t+1)] ⊆ X^(t)).
This procedure suggests the following distributed implementation of prediction, roll-up, and
projection: after completing the estimation step, each node selects a subtree of the (global) external
junction tree that covers Pa[Ci^(t+1)] and collects the marginals of this tree from other nodes in
the network. Unfortunately, it is unclear how to allocate the running time between estimation and
collection of marginals in time-critical applications, when the estimation step may not run to completion.
Instead, we propose a simple approach that performs both steps at once: run the distributed
inference algorithm, described in the previous section, to obtain the posterior distribution over the
parents of each clique maintained at the node. This task can be accomplished by ensuring that these
parent variables are included in the query variables of node n: Pa[Ci^(t+1)] ⊆ Qn, ∀i ∈ Ln.
When the estimation step cannot be run to convergence within the allotted time, the variables
Scope[β_n] covered by the distribution β_n that node n obtains may not cover the entire parent set
Pa[Ci^(t+1)]. In this case, multiplying in the standard transition model is equivalent to assuming a uniform prior for the missing variables, which can lead to very poor solutions in practice.
When the transition model is learned from data, p(X^(t+1) | Pa[X^(t+1)]) is usually computed from
the empirical distribution p̂(X^(t+1), Pa[X^(t+1)]), e.g., p_MLE(X^(t+1) | Pa[X^(t+1)]) =
p̂(X^(t+1), Pa[X^(t+1)]) / p̂(Pa[X^(t+1)]). Building on these empirical distributions, we can obtain
an improved solution for the prediction and roll-up steps, when we do not have a distribution
over the entire parent set Pa[Ci^(t+1)]. Specifically, we obtain a valid approximate transition model
p̂(X^(t+1) | W^(t)), where W^(t) = Scope[β_n] ∩ Pa[X^(t+1)], online by simply marginalizing the empirical distribution p̂(X^(t+1), Pa[X^(t+1)]) down to p̂(X^(t+1), W^(t)). This procedure is equivalent
to introducing an additional independence assertion to the model: at time step t + 1, X^(t+1) is
independent of Pa[X^(t+1)] − W^(t), given W^(t).
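Computationally, this amounts to summing the empirical joint over the unavailable parents and renormalizing; a minimal sketch under an assumed tabular empirical distribution:

import numpy as np

# Hypothetical empirical joint p_hat(X', P1, P2) for a binary variable X' with
# two binary parents; suppose only W = {P1} is covered by the node's belief.
rng = np.random.default_rng(1)
p_joint = rng.random((2, 2, 2))
p_joint /= p_joint.sum()

p_xw = p_joint.sum(axis=2)        # marginalize out the missing parent P2
p_w = p_xw.sum(axis=0)            # p_hat(P1)
p_cond = p_xw / p_w[None, :]      # approximate transition model p_hat(X' | P1)
print(p_cond.sum(axis=0))         # each column sums to 1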
4.3 Summary of the algorithm
Our distributed approximate filtering algorithm can be summarized as follows:
• Using the architecture in [8], construct a network junction tree s.t. the query variables Qn
  at each node n cover (∪_{i∈Ln} Ci^(t)) ∪ (∪_{i∈Ln} Pa[Ci^(t+1)]).
• For t = 1, 2, . . ., at each node n,
  – run RDPI [7] until the end of step t, obtaining a (possibly approximate) belief β_n;
  – for each X^(t+1) ∈ Ci^(t+1), i ∈ Ln, compute an approximate transition model
    p̂(X^(t+1) | W_X^(t)), where W_X^(t) = Scope[β_n] ∩ Pa[X^(t+1)];
  – for each clique Ci^(t+1), i ∈ Ln, compute the clique marginal p̂(Ci^(t+1) | z^(1:t)) from
    β_n and from each p̂(X^(t+1) | W_X^(t)), locally, using variable elimination.
Using the convergence properties of the RDPI algorithm, we prove that, given sufficient communication, our distributed algorithm obtains the same solution as the centralized B & K 98 algorithm:
Theorem 1. For a set of nodes running our distributed filtering algorithm, if at each time step there
is sufficient communication for the RDPI algorithm to converge, and the network is not partitioned,
then, for each node n, for each clique i ∈ Ln, the distribution p̂(Ci^(t) | z^(1:t−1)) obtained by node n
is equal to the distribution obtained by the B & K 98 algorithm with assumed density given by T.
[Figure 1: four panels (a)–(d), each showing nodes 1–4 with estimated camera locations.]
Figure 1: Alignment results after partition (shown by vertical line). Circles represent 95% confidence intervals
in the estimate of the camera location. (a) The exact solution, computed by the BK algorithm in the absence of
partitions. (b) Solution obtained when aligning from node 1. (c) Solution obtained when aligning from node 4.
(d) Solution obtained by joint optimized alignment.
5 Robust distributed filtering
In the previous section, we introduced an algorithm for distributed filtering with dynamic Bayesian
networks that, with sufficient communication, converges to the centralized B & K 98 algorithm. In
some settings, for example when interference causes a network partition, messages may not be propagated long enough to guarantee convergence before nodes must roll-up to the next time step. Consider the example, illustrated in Figure 1, in which a network of cameras localizes itself by observing
a moving object. Each camera i carries a clique marginal over the location of the object M^(t), its
own camera pose variable C^i, and the pose of one of its neighboring cameras: π_1(C^{1,2}, M^(t)),
π_2(C^{2,3}, M^(t)), and π_3(C^{3,4}, M^(t)). Suppose communication were interrupted due to a network
partition: observations would not propagate, and the marginals carried by the nodes would no
longer form a consistent distribution, in the sense that π_1, π_2, π_3 might not agree on their marginals,
e.g., π_1(C^2, M^(t)) ≠ π_2(C^2, M^(t)). The goal of alignment is to obtain a consistent distribution
p̂(X^(t) | z^(1:t−1)) from marginals π_1, π_2, π_3 that is close to the true posterior p(X^(t) | z^(1:t−1)) (as
measured, for example, by the root-mean-square error of the estimates). For simplicity of notation,
we omit time indices t and conditioning on the past evidence z^(1:t−1) throughout this section.
5.1 Optimized conditional alignment
One way to define a consistent distribution p̂ is to start from a root node r, e.g., 1, and allow each
clique marginal to decide the conditional density of C^i given its parent, e.g.,
p̂_1(C^{1:4}, M) = π_1(C^{1,2}, M) · π_2(C^3 | C^2, M) · π_3(C^4 | C^3, M).
This density p̂_1 forms a coherent distribution over C^{1:4}, M, and we say that p̂_1 is rooted at node 1.
Thus, π_1 fully defines the marginal density over C^{1,2}, M, π_2 defines the conditional density of C^3
given C^2, M, and so on. If node 3 were the root, then node 1 would only contribute π_1(C^1 | C^2, M),
and we would obtain a different approximate distribution.
In general, given a collection of marginals π_i(Ci) over the cliques of a junction tree T, and a
root node r ∈ NT, the distribution obtained by conditional alignment from r can be written as
p̂_r(X) = π_r(Cr) · ∏_{i∈(NT−{r})} π_i(Ci − S_{up(i),i} | S_{up(i),i}),   (2)
where up(i) denotes the upstream neighbor of i on the (unique) path between r and i.
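A minimal sketch of rooted conditional alignment for two discrete clique marginals on a chain; the marginals below are randomly generated and deliberately inconsistent (an illustration, not the paper's code):

import numpy as np

rng = np.random.default_rng(2)
pi1 = rng.random((2, 2)); pi1 /= pi1.sum()  # pi_1(A, B)
pi2 = rng.random((2, 2)); pi2 /= pi2.sum()  # pi_2(B, C), inconsistent with pi_1

# Rooted at clique 1: p_hat_1(A, B, C) = pi_1(A, B) * pi_2(C | B)
pi2_cond = pi2 / pi2.sum(axis=1, keepdims=True)
p_root1 = pi1[:, :, None] * pi2_cond[None, :, :]

# Rooted at clique 2: p_hat_2(A, B, C) = pi_1(A | B) * pi_2(B, C)
pi1_cond = pi1 / pi1.sum(axis=0, keepdims=True)
p_root2 = pi1_cond[:, :, None] * pi2[None, :, :]

# The two choices generally differ whenever pi_1(B) != pi_2(B).
print(np.allclose(p_root1, p_root2))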
The choice of the root r often crucially determines how well the aligned distribution p̂_r approximates the true prior. Suppose that, in the example in Figure 1, the nodes on the left side of the partition do not observe the person while the communication is interrupted, and the prior marginals π_1,
π_2 are uncertain about M. If we were to align the distribution from π_2, multiplying π_3(C^4 | C^3, M)
into the marginal π_2(C^{2,3}, M) would result in a distribution that is uncertain in both M and C^4
(Figure 1(b)), while a better choice of root could provide a much better estimate (Figure 1(c)).
One possible metric to optimize when choosing the root r for the alignment is the entropy of the
resulting distribution p̂_r. For example, the entropy of p̂_2 in the previous example can be written as
H_{p̂_2}(C^{1:4}, M) = H_{π_2}(C^{2,3}, M) + H_{π_3}(C^4 | C^3, M) + H_{π_1}(C^1 | C^2, M),   (3)
where we use the fact that, for Gaussians, the conditional entropy of C^4 given C^3, M only depends
on the conditional distribution p̂_2(C^4 | C^3, M) = π_3(C^4 | C^3, M). A naïve algorithm for obtaining
the best root would exploit this decomposition to compute the entropy of each p̂_r, and pick the root
that leads to the lowest total entropy; the running time of this algorithm is O(|NT|²). We propose a
dynamic programming approach that significantly reduces the running time. Comparing Equation 3
with the entropy of the distribution rooted at a neighboring node 3, we see that they share a common
term H_{π_1}(C^1 | C^2, M), and H_{p̂_3}(C^{1:4}, M) − H_{p̂_2}(C^{1:4}, M) = H_{π_3}(S_{2,3}) − H_{π_2}(S_{2,3}) ≜ Δ_{2,3}. If
Δ_{2,3} is positive, node 2 is a better root than 3; if Δ_{2,3} is negative, we have the reverse situation. Thus,
when comparing neighboring nodes as root candidates, the difference in entropy of the resulting
distributions is simply the difference in the entropy their local distributions assign to their separator. This
property generalizes to the following dynamic programming algorithm that determines the root r
with minimal H_{p̂_r}(X) in O(|NT|) time:
• For any node i ∈ NT, define the message from i to its neighbor j as
  m_{i→j} = Δ_{i,j} if m_{k→i} < 0 ∀k ≠ j, and m_{i→j} = Δ_{i,j} + max_{k≠j} m_{k→i} otherwise,
  where Δ_{i,j} = H_{π_j}(S_{i,j}) − H_{π_i}(S_{i,j}), and k varies over the neighbors of i in T.
• If max_k m_{k→i} < 0 then i is the optimal root; otherwise, up(i) = argmax_k m_{k→i}.
Intuitively, the message m_{i→j} represents the loss (entropy) with root node j, compared to the best
root on i's side of the tree. Ties between nodes, if any, can be resolved using node IDs.
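The sketch below implements this dynamic program on a small hypothetical junction tree; the neighbor structure and the Δ values are made-up inputs, and tie-breaking is omitted for brevity.

# Hypothetical path-shaped junction tree 0-1-2-3 with antisymmetric entropy
# differences delta[(i, j)] = H_{pi_j}(S_ij) - H_{pi_i}(S_ij) per directed edge.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
delta = {(0, 1): 0.3, (1, 0): -0.3, (1, 2): -0.1, (2, 1): 0.1,
         (2, 3): -0.4, (3, 2): 0.4}

msg = {}
def m(i, j):
    """Optimization message m_{i->j}; memoized recursion over the tree."""
    if (i, j) not in msg:
        incoming = [m(k, i) for k in nbrs[i] if k != j]
        # add max_{k != j} m_{k->i} only when some incoming message is >= 0
        msg[(i, j)] = delta[(i, j)] + max([0.0] + incoming)
    return msg[(i, j)]

def optimal_root():
    for i in nbrs:
        if max(m(k, i) for k in nbrs[i]) < 0:
            return i

print(optimal_root())  # node 3 for the values above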
5.2 Distributed optimized conditional alignment
In the absence of an additional procedure, RDPI can be viewed as performing conditional alignment.
However, the alignment is applied to the local belief at each node, rather than the global distribution,
and the nodes may not agree on the choice of the root r. Thus, the network is not guaranteed to
reach a globally consistent, aligned distribution. In this section, we show that RDPI can be extended
to incorporate the optimized conditional alignment (OCA) algorithm from the previous section.
By Property 1, at convergence, the priors at each node form a subtree of an external junction tree
for the assumed density. Conceptually, if we were to apply OCA to this subtree, the node would have
an aligned distribution, but nodes may not be consistent with each other. Intuitively, this happens
because the optimization messages mi?j were not propagated between different nodes.
In RDPI, node n's belief β_n includes a collection of (potentially inconsistent) priors {π_i(Ci)}.
In the standard sum-product inference algorithm, an inference message μ_{m→n} from node m to node
n is computed by marginalizing out some variables from the factor β⁺_{m→n} ≜ β_m · ∏_{k≠n} μ_{k→m}
that combines the messages received from node m's other neighbors with node m's local belief. The
inference message in RDPI involves a similar marginalization, which corresponds to pruning some
cliques from β⁺_{m→n} [7]. When such pruning occurs, any likelihood information λ_i(Ci) associated
with the pruned clique i is transferred to its neighbor j.
Our distributed OCA algorithm piggy-backs on this pruning, computing an optimization message
m_{i→j}, which is stored in clique j. (To compute this message, cliques must also carry their original,
unaligned priors.) At convergence, the nodes will not only have a subtree of an external tree, but also
the incoming optimization messages that result from pruning of all other cliques of the external tree.
In order to determine the globally optimal root, each node (locally) selects a root for its subtree. If
this root is one of the initial cliques associated with n, then n, and in particular this clique, is the root
of the conditional alignment. The alignment is propagated throughout the network. If the optimal
root is determined to be a clique that came from a message received from a neighbor, then the neighbor (or another node upstream) is the root, and node n aligns itself with respect to the neighbor's
message. With an additional tie-breaking rule that ensures that all the nodes make consistent choices
about their subtrees [4], this procedure is equivalent to running the OCA algorithm centrally:
Theorem 2. Given sufficient communication and in the absence of network partitions, nodes running distributed OCA reach a globally consistent belief based on conditional alignment, selecting
the root clique that leads to the joint distribution of minimal entropy. In the presence of partitions,
each partition will reach a consistent belief that minimizes the entropy within this partition.
5.3 Jointly optimized alignment
While conceptually simple, there are situations where such a rooted alignment will not provide
a good aligned distribution. For example, if in the example in Figure 1, cameras 2 and 3 carry
marginals π_2(C^{2,3}, M) and π_2′(C^{2,3}, M), respectively, and both observe the person, node 2 will
have a better estimate of C^2, while node 3's estimate of C^3 will be more accurate. If either node
is chosen as the root, the aligned distribution will have a worse estimate of the pose of one of
the cameras, because performing rooted alignment from either direction effectively overwrites the
marginal of the other node. In this example, rather than fixing a root, we want an aligned distribution
that attempts to simultaneously optimize the distance to both π_2(C^{2,3}, M) and π_2′(C^{2,3}, M).
[Figure 2 panels: (a) 25-camera testbed; (b) convergence of individual cameras (RMS error vs. time step, Cameras 3, 7, 10); (c) convergence vs. epochs per time step for 35- and 54-node temperature networks.]
Figure 2: (a) Testbed of 25 cameras used for the SLAT experiments. (b) Convergence results for individual
cameras in one experiment. Horizontal lines indicate the corresponding centralized solution at the end of the
experiment. (c) Convergence versus amount of communication for a temperature network of 54 real sensors.
We propose the following optimization problem that minimizes the sum of reverse KL divergences from the aligned distribution to the clique marginals π_i(Ci):
p̂(X) = argmin_{q(X), q ⊨ T} ∑_{i∈NT} D(q(Ci) ∥ π_i(Ci)),
where q ⊨ T denotes the constraint that p̂ factorizes according to the junction tree T. This method
will often provide very good aligned distributions (e.g., Figure 1(d)). For Gaussian distributions, this
optimization problem corresponds to
min_{μ_Ci, Σ_Ci} ∑_{i∈NT} [ −log |Σ_Ci| + ⟨Σi^{−1}, Σ_Ci⟩ ] + ∑_{i∈NT} (ηi − μ_Ci)ᵀ Σi^{−1} (ηi − μ_Ci),
subject to Σ_Ci ≻ 0, ∀i ∈ NT,   (4)
where μ_Ci, Σ_Ci are the means and covariances of q over the variables Ci, and ηi, Σi are the means
and covariances of the marginals π_i. The problem in Equation 4 consists of two independent convex
optimization problems over the means and covariances of q, respectively. The former problem can
be solved in a distributed manner using distributed linear regression [6], while the latter can be
solved using a distributed version of an iterative method, such as conjugate gradient descent [1].
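For the mean subproblem, the stationarity conditions of Equation 4 are linear in the global mean; the following centralized sketch uses hypothetical clique means and precisions (the paper instead solves this in a distributed fashion via [6]):

import numpy as np

# Three overlapping cliques over four scalar variables, with inconsistent
# clique means eta_i and assumed precisions Lam_i = inverse covariances.
cliques = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3])]
etas = [np.array([0.0, 1.0]), np.array([1.4, 2.0]), np.array([1.8, 3.0])]
lams = [np.eye(2), 2.0 * np.eye(2), np.eye(2)]

# Minimize sum_i (eta_i - mu_Ci)^T Lam_i (eta_i - mu_Ci) over mu in R^4:
# the normal equations accumulate each clique's contribution.
A = np.zeros((4, 4)); b = np.zeros(4)
for C, eta, lam in zip(cliques, etas, lams):
    A[np.ix_(C, C)] += lam
    b[C] += lam @ eta
mu = np.linalg.solve(A, b)
print(mu)  # aligned global mean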
6 Experimental results
We evaluated our approach on two applications: a camera localization problem [5] (SLAT), in
which a set of cameras simultaneously localizes itself by tracking a moving object, and a temperature monitoring application, analogous to the one presented in [7]. Figure 2(a) shows some of the
25 ceiling-mounted cameras used to collect the data in our camera experiments. We implemented
our distributed algorithm in a network simulator that incorporates message loss and used data from
these real sensors as our observations. Figure 2(b) shows the estimates obtained by three cameras in
one of our experiments. Note that each camera converges to the estimate obtained by the centralized
B & K 98 algorithm. In Figure 2(c), we evaluate the sensitivity of the algorithm to incomplete communication. We see that, with a modest number of rounds of communication performed in each time
step, the algorithm obtains a high quality of the solution and converges to the centralized solution.
In the second set of experiments, we evaluate the alignment methods, presented in Section 5. In
Figure 3(a), the network is split into four components; in each component, the nodes communicate
fully, and we evaluate the solution if the communication were to be restored after a given number of
time steps. The vertical axis shows the RMS error of estimated camera locations at the end of the experiment. For the unaligned solution, the nodes may not agree on the estimated pose of a camera, so
it is not clear which node's estimate should be used in the RMS computation; the plot shows an "omniscient
envelope" of the RMS error, where, given the (unknown) true camera locations, we select
in the absence of optimized alignment, inconsistencies can degrade the solution: observations collected after the communication is restored may not make up for the errors introduced by the partition.
The third experiment evaluates the performance of the distributed algorithm in highly disconnected scenarios. Here, the sensor network is hierarchically partitioned into smaller disconnected components by selecting a random cut through the largest component. The communication
is restored shortly before the end of the experiment. Figures 3(b) shows the importance of aligning
from the correct node: the difference between the optimized root and an arbitrarily chosen root is
significant, particularly when the network becomes more and more fractured. In our experiments,
large errors often resulted from the nodes having uncertain beliefs, hence justifying the objective
function. We see that the jointly optimized alignment described in Section 5.3, min. KL, tends
to provide the best aligned distribution, though often close to the optimized root, which is simpler
to compute. Finally, Figure 3(c) shows the alignment results on the temperature monitoring application.
[Figure 3: three plots of RMS error, with upper/lower bounds shown for the unaligned solution.]
Figure 3: Comparison of the alignment methods. (a) RMS error vs. duration of the partition. For the unaligned
solution, the plot shows bounds on the error: given the (unknown) camera locations, we select the best and worst
estimates available in the network for each camera's pose. In the absence of optimized alignment, inconsistencies can degrade the quality of the solution. (b, c) RMS error vs. number of partitions. In camera localization
(b), the difference between the optimized alignment and the alignment from an arbitrarily chosen fixed root is
significant. For the temperature monitoring (c), the differences are less pronounced, but follow the same trend.
Compared to SLAT, the effects of network partitions on the results for the temperature data are less
severe. One contributing factor is that every node in a partition is making local temperature observations, and the approximate transition model for temperatures in each partition is quite accurate,
hence all the nodes continue to adjust their estimates meaningfully while the partition is in progress.
7 Conclusions
This paper presents a new distributed approach to approximate dynamic filtering based on a distributed representation of the assumed density in the network. Distributed filtering is performed by
first conditioning on evidence using a robust distributed inference algorithm [7], and then advancing
to the next time step locally. With sufficient communication in each time step, our distributed algorithm converges to the centralized B & K 98 solution. In addition, we identify a significant challenge
for probabilistic inference in dynamical systems: nodes can have inconsistent beliefs about the current state of the system, and an ineffective handling of this situation can lead to very poor estimates
of the global state. We address this problem by developing a distributed algorithm that obtains an
informative consistent distribution, optimizing over various choices of the root node, and an alternative joint optimization approach that minimizes a KL divergence-based criterion. We demonstrate
the effectiveness of our approach on a suite of experimental results on real-world sensor data.
Acknowledgments
This research was supported by grants NSF-NeTS CNS-0625518 and CNS-0428738 NSF ITR. S.
Funiak was supported by the Intel Research Scholar Program; C. Guestrin was partially supported
by an Alfred P. Sloan Fellowship.
References
[1] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena
Scientific; 1st edition (January 1997), 1997.
[2] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. of UAI, 1998.
[3] R. Cowell, P. Dawid, S. Lauritzen, and D. Spiegelhalter. Probabilistic Networks and Expert Systems.
Springer, New York, NY, 1999.
[4] S. Funiak, C. Guestrin, M. Paskin, and R. Sukthankar. Robust probabilistic filtering in distributed systems.
Technical Report CMU-CALD-05-111, Carnegie Mellon University, 2005.
[5] S. Funiak, C. Guestrin, M. Paskin, and R. Sukthankar. Distributed localization of networked cameras. In
Proc. of Fifth International Conference on Information Processing in Sensor Networks (IPSN-06), 2006.
[6] C. Guestrin, R. Thibaux, P. Bodik, M. A. Paskin, and S. Madden. Distributed regression: an efficient
framework for modeling sensor network data. In Proc. of IPSN, 2004.
[7] M. A. Paskin and C. E. Guestrin. Robust probabilistic inference in distributed systems. In UAI, 2004.
[8] M. A. Paskin, C. E. Guestrin, and J. McFadden. A robust architecture for inference in sensor networks.
In Proc. of IPSN, 2005.
[9] A. Pfeffer and T. Tai. Asynchronous dynamic Bayesian networks. In Proc. UAI 2005, 2005.
[10] M. Rosencrantz, G. Gordon, and S. Thrun. Decentralized sensor fusion with distributed particle filters. In
Proc. of UAI, 2003.
[11] F. Zhao, J. Liu, J. Liu, L. Guibas, and J. Reich. Collaborative signal and information processing: An
information directed approach. Proceedings of the IEEE, 91(8):1199?1209, 2003.
Manifold Denoising
Matthias Hein
Markus Maier
Max Planck Institute for Biological Cybernetics
Tübingen, Germany
{first.last}@tuebingen.mpg.de
Abstract
We consider the problem of denoising a noisily sampled submanifold M in Rd ,
where the submanifold M is a priori unknown and we are only given a noisy point
sample. The presented denoising algorithm is based on a graph-based diffusion
process of the point sample. We analyze this diffusion process using recent results about the convergence of graph Laplacians. In the experiments we show that
our method is capable of dealing with non-trivial high-dimensional noise. Moreover using the denoising algorithm as a pre-processing method we can improve the
results of a semi-supervised learning algorithm.
1 Introduction
In the last years several new methods have been developed in the machine learning community
which are based on the assumption that the data lies on a submanifold M in Rd . They have been
used in semi-supervised learning [15], dimensionality reduction [14, 1] and clustering. However
there exists a certain gap between theory and practice. Namely in practice the data lies almost never
exactly on the submanifold but due to noise is scattered around it. Several of the existing algorithms
in particular graph based methods are quite sensitive to noise. Often they fail in the presence of high-dimensional noise since then the distance structure is non-discriminative. In this paper we tackle this
problem by proposing a denoising method for manifold data. Given noisily sampled manifold data
in R^d the objective is to "project" the sample onto the submanifold.
There exist already some methods which have related objectives like principal curves [6] and the
generative topographic mapping [2]. For both methods one has to know the intrinsic dimension of
the submanifold M as a parameter of the algorithm. However in the presence of high-dimensional
noise it is almost impossible to estimate the intrinsic dimension correctly. Moreover usually problems arise if there is more than one connected component.
The algorithm we propose addresses these problems. It works well for low-dimensional submanifolds
corrupted by high-dimensional noise and can deal with multiple connected components. The basic
principle behind our denoising method has been proposed by [13] as a surface processing method in
R3 . The goal of this paper is twofold. First we extend this method to general submanifolds in Rd
aimed at dealing in particular with high-dimensional noise. Second we provide an interpretation of
the denoising algorithm which takes into account the probabilistic setting encountered in machine
learning and which differs from the one usually given in the computer graphics community.
2 The noise model and problem statement
We assume that the data lies on an abstract m-dimensional manifold M , where the dimension m
can be seen as the number of independent parameters in the data. This data is mapped via a smooth,
regular embedding i : M → R^d into the feature space R^d. In the following we will not distinguish between M and i(M) ⊂ R^d, since it should be clear from the context which case we are
considering. The Euclidean distance in Rd then induces a metric on M . This metric depends on the
embedding/representation (e.g. scaling) of the data in Rd but is at least continuous with respect to
the intrinsic parameters. Furthermore we assume that the manifold M is equipped with a probability
measure PM which is absolutely continuous with respect to the natural volume element¹ dV of M.
With these definitions the model for the noisy data-generating process in R^d has the following form:
X = i(θ) + ε,
where θ ∼ PM and ε ∼ N(0, σ² 1_d). Note that the probability measure of the noise ε has full support
in R^d. We consider here for convenience a Gaussian noise model but also any other reasonably
concentrated isotropic noise should work. The law PX of the noisy data X can be computed from
the true data-generating probability measure PM:
PX(x) = (2πσ²)^{−d/2} ∫_M exp(−‖x − i(θ)‖² / (2σ²)) p(θ) dV(θ).   (1)
Now the Gaussian measure is equivalent to the heat kernel pt(x, y) = (4πt)^{−d/2} exp(−‖x − y‖² / (4t)) of
the diffusion process on R^d, see e.g. [5], if we make the identification σ² = 2t. An alternative point
of view on PX is therefore to see PX as the result of a diffusion of the density function² p(θ) of
PM stopped at time t = σ²/2. The basic principle behind the denoising algorithm in this paper is to
reverse this diffusion process.
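As an illustration of this generative model, the sketch below draws a noisy sample from a one-dimensional circle embedded in R^d; the manifold, the dimension d, and the noise level σ are arbitrary choices for demonstration.

import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 500, 20, 0.3
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)      # theta ~ P_M (uniform on the circle)
M = np.zeros((n, d))
M[:, 0], M[:, 1] = np.cos(theta), np.sin(theta)    # embedding i : M -> R^d
X = M + sigma * rng.standard_normal((n, d))        # X = i(theta) + eps, eps ~ N(0, sigma^2 1_d)
print(X.shape)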
3 The denoising algorithm
In practice we have only an i.i.d. sample Xi , i = 1, . . . , n of PX . The ideal goal would be to find the
corresponding set of points i(θi), i = 1, . . . , n on the submanifold M which generated the points
Xi . However due to the random nature of the noise this is in principle impossible. Instead the goal
is to find corresponding points Zi on the submanifold M which are close to the points Xi . However
we are facing several problems. Since we are only given a finite sample, we do not know PX or
even PM . Second as stated in the last section we would like to reverse this diffusion process which
amounts to solving a PDE. However the usual technique to solve this PDE on a grid is unfeasible
due to the high dimension of the ambient space Rd .
Instead we solve the diffusion process directly on a graph generated by the sample Xi . This can be
motivated by recent results in [7] where it was shown that the generator of the diffusion process, the
Laplacian Δ_{R^d}, can be approximated by the graph Laplacian of a random neighborhood graph. A
similar setting for the denoising of two-dimensional meshes in R3 has been proposed in the seminal
work of Taubin [13]. Since then several modifications of his original idea have been proposed in
the computer graphics community, including the recent development in [11] to apply the algorithm
directly to point cloud data in R3 . In this paper we propose a modification of this diffusion process
which allows us to deal with general noisy samples of arbitrary (low-dimensional) submanifolds in
Rd . In particular the proposed algorithm can cope with high-dimensional noise. Moreover we give
an interpretation of the algorithm, which differs from the one usually given in the computer graphics
community and takes into account the probabilistic nature of the problem.
3.1 Structure on the sample-based graph
We would like to define a diffusion process directly on the sample Xi . To this end we need the generator of the diffusion process, the graph Laplacian. We will construct this operator for a weighted,
undirected graph. The graph vertices are the sample points Xi. With {h(Xi)}_{i=1}^n being the k-nearest
neighbor (k-NN) distances the weights of the k-NN graph are defined as
w(Xi, Xj) = exp(−‖Xi − Xj‖² / (max{h(Xi), h(Xj)})²),   if ‖Xi − Xj‖ ≤ max{h(Xi), h(Xj)},
and w(Xi, Xj) = 0 otherwise. Additionally we set w(Xi, Xi) = 0, so that the graph has no
loops. Further we denote by d the degree function d(Xi) = ∑_{j=1}^n w(Xi, Xj) of the graph and
¹ In local coordinates θ1, . . . , θm the natural volume element dV is given as dV = √(det g) dθ1 · · · dθm,
where det g is the determinant of the metric tensor g.
² Note that PM is not absolutely continuous with respect to the Lebesgue measure in R^d and therefore p(θ)
is not a density in R^d.
we introduce two Hilbert spaces HV, HE of functions on the vertices V and edges E. Their inner
products are defined as
⟨f, g⟩_{HV} = ∑_{i=1}^n f(Xi) g(Xi) d(Xi),   ⟨φ, ψ⟩_{HE} = ∑_{i,j=1}^n w(Xi, Xj) φ(Xi, Xj) ψ(Xi, Xj).
Introducing the discrete differential ∇ : HV → HE, (∇f)(Xi, Xj) = f(Xj) − f(Xi), the graph
Laplacian is defined as
Δ : HV → HV,   Δ = ∇*∇,   (Δf)(Xi) = f(Xi) − (1/d(Xi)) ∑_{j=1}^n w(Xi, Xj) f(Xj),
where ∇* is the adjoint of ∇. Defining the matrix D with the degree function on the diagonal, the
graph Laplacian in matrix form is given as Δ = 1 − D^{−1}W, see [7] for more details. Note that
although Δ is not a symmetric matrix, it is a self-adjoint operator with respect to the inner product in
HV.
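A direct, dense implementation of this construction is sketched below; for large n one would switch to sparse matrices and approximate nearest-neighbor search.

import numpy as np

def knn_graph_laplacian(X, k):
    """Weighted k-NN graph W and random-walk Laplacian Delta = 1 - D^{-1} W."""
    diff = X[:, None, :] - X[None, :, :]
    D2 = (diff ** 2).sum(-1)                     # squared pairwise distances
    dist = np.sqrt(D2)
    h = np.sort(dist, axis=1)[:, k]              # h(X_i): k-th NN distance (index 0 is the point itself)
    hmax = np.maximum(h[:, None], h[None, :])    # max{h(X_i), h(X_j)}
    W = np.exp(-D2 / hmax ** 2)
    W[dist > hmax] = 0.0                         # truncate edges beyond max{h(X_i), h(X_j)}
    np.fill_diagonal(W, 0.0)                     # no loops
    deg = W.sum(axis=1)
    Delta = np.eye(len(X)) - W / deg[:, None]    # Delta = 1 - D^{-1} W
    return W, Delta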
3.2 The denoising algorithm
Having defined the necessary structure on the graph it is straightforward to write down the backward
diffusion process. In the next section we will analyze the geometric properties of this diffusion
process and show why it is directed towards the submanifold M . Since the graph Laplacian is the
generator of the diffusion process on the graph we can formulate the algorithm by the following
differential equation on the graph:
∂t X = −γ ΔX,   (2)
where γ > 0 is the diffusion constant. Since the points change with time, the whole graph is dynamic
in our setting. This is different to the diffusion processes on a fixed graph studied in semi-supervised
learning. In order to solve the differential equation (2) we choose an implicit Euler-scheme, that is
X(t + 1) − X(t) = −δt γ ΔX(t + 1),   (3)
where δt is the time-step. Since the implicit Euler is unconditionally stable we can choose the factor
δt γ arbitrarily. We fix in the following γ = 1 so that the only free parameter remains to be δt,
which is set to δt = 0.5 in the rest of the paper. The solution of the implicit Euler scheme for one
timestep in Equation 3 can then be computed as: X(t + 1) = (1 + δt Δ)^{−1} X(t). After each timestep
the point configuration has changed so that one has to recompute the weight matrix W of the graph.
Then the procedure is continued until a predefined stopping criterion is satisfied, see Section 3.4.
The pseudo-code is given in Algorithm 1.
Algorithm 1 Manifold denoising
1: Choose δt, k
2: while Stopping criterion not satisfied do
3:    Compute the k-NN distances h(Xi), i = 1, . . . , n,
4:    Compute the weights w(Xi, Xj) of the graph, with w(Xi, Xi) = 0 and
      w(Xi, Xj) = exp(−‖Xi − Xj‖² / (max{h(Xi), h(Xj)})²) if ‖Xi − Xj‖ ≤ max{h(Xi), h(Xj)},
5:    Compute the graph Laplacian Δ = 1 − D^{−1}W,
6:    Solve X(t + 1) − X(t) = −δt ΔX(t + 1), i.e. X(t + 1) = (1 + δt Δ)^{−1} X(t).
7: end while
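A compact sketch of the full loop, reusing the knn_graph_laplacian sketch from Section 3.1 and the noisy sample X from the Section 2 sketch; for simplicity it runs a fixed number of iterations instead of the stopping criteria of Section 3.4.

import numpy as np

def manifold_denoising(X, k=10, dt=0.5, n_iter=20):
    """Sketch of Algorithm 1: implicit-Euler graph diffusion of the sample."""
    X = X.copy()
    n = len(X)
    for _ in range(n_iter):
        W, Delta = knn_graph_laplacian(X, k)              # recompute the dynamic graph
        X = np.linalg.solve(np.eye(n) + dt * Delta, X)    # X(t+1) = (1 + dt*Delta)^{-1} X(t)
    return X

X_denoised = manifold_denoising(X, k=15, dt=0.5, n_iter=10)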
In [12] it was pointed out that there exists a connection between diffusion processes and Tikhonov regularization. Namely the result of one time step of
the diffusion process with the implicit Euler scheme is equivalent to the solution of the following
regularization problem on the graph:
arg min_{Z ∈ HV^d} S(Z) := arg min_{Z ∈ HV^d} ∑_{α=1}^d ‖Z^α − X^α(t)‖²_{HV} + δt ∑_{α=1}^d ‖∇Z^α‖²_{HE},
where Z^α denotes the α-component of the vector Z ∈ R^d. With ‖∇Z^α‖²_{HE} = ⟨Z^α, ΔZ^α⟩_{HV} the
minimizer of the above functional with respect to Z^α can be easily computed as
∂S(Z^α)/∂Z^α = 2(Z^α − X^α(t)) + 2 δt ΔZ^α = 0,   α = 1, . . . , d,
so that Z = (1 + δt Δ)^{−1} X(t). Each time-step of our diffusion process can therefore be seen as
a regression problem, where we trade off between fitting the new points Z to the points X(t) and
having a ?smooth? point configuration Z measured with respect to the current graph built from X(t).
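This equivalence is easy to verify numerically; in the following sketch (our own naming) the implicit Euler step makes the gradient of the regularization functional vanish:

import numpy as np

rng = np.random.default_rng(0)
n, d, dt = 30, 2, 0.5
A = rng.random((n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
Delta = np.eye(n) - W / W.sum(axis=1, keepdims=True)   # 1 - D^{-1} W
X = rng.standard_normal((n, d))
Z = np.linalg.solve(np.eye(n) + dt * Delta, X)         # one implicit Euler step
# stationarity condition of S(Z): 2 (Z - X) + 2 dt Delta Z = 0
assert np.allclose(2 * (Z - X) + 2 * dt * (Delta @ Z), 0.0, atol=1e-10)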
3.3 k-nearest neighbor graph versus h-neighborhood graph
In the denoising algorithm we have chosen to use a weighted k-NN graph. It turns out that a k-NN graph has three advantages over an h-neighborhood graph³. The first advantage is that the graph has better connectivity: points in areas of different density have quite different neighborhood scales, which for a fixed $h$ leads to either disconnected or over-connected graphs.

Second, we usually have high-dimensional noise. In this case it is well known that one has a drastic change in the distance statistics of a sample, which is illustrated by the following trivial lemma.

Lemma 1 Let $x, y \in \mathbb{R}^d$ and $\epsilon_1, \epsilon_2 \sim N(0, \sigma^2 \mathbb{1})$, and define $X = x + \epsilon_1$ and $Y = y + \epsilon_2$. Then
$$E\, \|X - Y\|^2 = \|x - y\|^2 + 2\, d\, \sigma^2 \qquad \text{and} \qquad \operatorname{Var} \|X - Y\|^2 = 8\, \sigma^2\, \|x - y\|^2 + 8\, d\, \sigma^4.$$

One can deduce that the expected squared distance of the noisy submanifold sample is dominated by the noise term if $2\, d\, \sigma^2 > \max_{\theta, \theta'} \|i(\theta) - i(\theta')\|^2$, which is usually the case for large $d$. In this case it is quite difficult to adjust the average number of neighbors in a graph with a fixed neighborhood size $h$, since the distances start to concentrate around their mean value. The third advantage is that by choosing $k$ we can directly control the sparsity of the weight matrix $W$ and of the Laplacian $\Delta = \mathbb{1} - D^{-1} W$, so that the linear equation in each time step can be solved efficiently.
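The moments in Lemma 1 are easy to check with a short Monte Carlo simulation (a sketch with our own naming; note that already for $\|x - y\| = 1$, $d = 200$ and $\sigma = 0.4$ the noise term $2 d \sigma^2 = 64$ dominates):

import numpy as np

rng = np.random.default_rng(0)
d, sigma, n = 200, 0.4, 20_000
x = np.zeros(d)
y = np.zeros(d); y[0] = 1.0                  # ||x - y|| = 1
X = x + sigma * rng.standard_normal((n, d))
Y = y + sigma * rng.standard_normal((n, d))
dist2 = ((X - Y) ** 2).sum(axis=1)
print(dist2.mean(), 1.0 + 2 * d * sigma**2)                   # both ~ 65
print(dist2.var(), 8 * sigma**2 * 1.0 + 8 * d * sigma**4)     # both ~ 42.2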
3.4 Stopping criterion
The problem of choosing the correct number of iterations is very difficult if one has initially high-dimensional noise, and it requires prior knowledge. We propose two stopping criteria. The first one is based on the effect that if the diffusion is run for too long, the data becomes disconnected and concentrates in local clusters. One can therefore stop if the number of connected components of the graph⁴ increases. The second one is based on prior knowledge about the intrinsic dimension of the data. In this case one can stop the denoising when the estimated dimension of the sample (e.g., via the correlation dimension, see [4]) is equal to the intrinsic one. Another less founded but very simple way is to stop the iterations when the changes in the sample fall below some pre-defined threshold.
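The first criterion is cheap to monitor. A sketch using SciPy (our own naming; W is the current weight matrix from Algorithm 1):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def n_components(W):
    # Number of connected components of the graph with weight matrix W.
    n_comp, _ = connected_components(csr_matrix(W != 0), directed=False)
    return n_comp

# stop the diffusion as soon as n_components(W_new) > n_components(W_old)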
4 Large sample limit and theoretical analysis
Our qualitative theoretical analysis of the denoising algorithm is based on recent results on the limit of graph Laplacians [7, 8] as the neighborhood size decreases and the sample size increases. We use this result to study the continuous limit of the diffusion process. The following theorem about the limit of the graph Laplacian applies to h-neighborhood graphs, whereas the denoising algorithm is based on a k-NN graph. Our conjecture⁵ is that the result carries over to k-NN graphs.

Theorem 1 ([7, 8]) Let $\{X_i\}_{i=1}^n$ be an i.i.d. sample of a probability measure $P_M$ on an m-dimensional compact submanifold⁶ $M$ of $\mathbb{R}^d$, where $P_M$ has a density $p_M \in C^3(M)$. Let $f \in C^3(M)$ and $x \in M \setminus \partial M$. Then, if $h \to 0$ and $n h^{m+2} / \log n \to \infty$,
$$\lim_{n \to \infty} \frac{1}{h^2}\, (\Delta f)(x) \sim -(\Delta_M f)(x) - \frac{2}{p}\, \langle \nabla f, \nabla p \rangle_{T_x M} \quad \text{almost surely},$$
where $\Delta_M$ is the Laplace–Beltrami operator of $M$ and $\sim$ means equality up to a constant which depends on the kernel function $k(\|x - y\|)$ used to define the weights $W(x, y) = k(\|x - y\|)$ of the graph.
³ In an h-neighborhood graph two sample points $X_i, X_j$ have a common edge if $\|X_i - X_j\| \le h$.
⁴ The number of connected components is equal to the multiplicity of the first eigenvalue of the graph Laplacian.
⁵ We have partially verified the conjecture; however, the proof would go beyond the scope of this paper.
⁶ Note that the case where $P$ has full support in $\mathbb{R}^d$ is a special case of this theorem.
4.1 The noise-free case
We first derive, in a non-rigorous way, the continuum limit of our graph-based diffusion process in the noise-free case. To that end we use the usual argument made in physics to pass from a difference equation on a grid to a differential equation. We rewrite our diffusion equation (2) on the graph as
$$\frac{i(t+1) - i(t)}{\delta t} = -\frac{h^2}{\delta t}\, \frac{1}{h^2}\, \Delta i.$$
Taking the limit $h \to 0$ and $\delta t \to 0$ such that the diffusion constant $D = \frac{h^2}{\delta t}$ stays finite, and using the limit of $\frac{1}{h^2} \Delta$ given in Theorem 1, we get the following differential equation:
$$\partial_t i = D \left[ \Delta_M i + \frac{2}{p}\, \langle \nabla p, \nabla i \rangle \right]. \qquad (4)$$
Note that for the k-NN graph the neighborhood size $h$ is a function of the local density, which implies that the diffusion constant $D$ also becomes a function of the local density, $D = D(p(x))$.
Lemma 2 ([9], Lemma 2.14) Let $i : M \to \mathbb{R}^d$ be a regular, smooth embedding of an m-dimensional manifold $M$. Then $\Delta_M i = m H$, where $H$ is the mean curvature⁷ of $M$.

Using the equation $\Delta_M i = m H$ we can establish the equivalence of the continuous diffusion equation (4) to a generalized mean curvature flow:
$$\partial_t i = D \left[ m H + \frac{2}{p}\, \langle \nabla p, \nabla i \rangle \right]. \qquad (5)$$
The equivalence to the mean curvature flow $\partial_t i = m H$ is usually given in computer graphics as the reason for the denoising effect; see [13, 11]. However, as we have shown, the diffusion already has an additional part if one has a non-uniform probability measure on $M$.
4.2 The noisy case
The analysis of the noisy case is more complicated and we can only provide a rough analysis. The large sample limit $n \to \infty$ of the graph Laplacian $\Delta$ at a sample point $X_i$ is given as
$$\Delta X_i = X_i - \frac{\int_{\mathbb{R}^d} k_h(\|X_i - y\|)\, y\, p_X(y)\, dy}{\int_{\mathbb{R}^d} k_h(\|X_i - y\|)\, p_X(y)\, dy}, \qquad (6)$$
where $k_h(\|x - y\|)$ is the weight function used in the construction of the graph, that is, in our case $k_h(\|x - y\|) = e^{-\frac{\|x - y\|^2}{2 h^2}}\, \mathbb{1}_{\|x - y\| \le h}$. In the following analysis we will assume three things: 1) the noise level $\sigma$ is small compared to the neighborhood size $h$, 2) the curvature of $M$ is small compared to $h$, and 3) the density $p_M$ varies slowly along $M$. Under these conditions it is easy to see that the main contribution of $-\Delta X_i$ in Equation (6) will be in the direction of the gradient of $p_X$ at $X_i$. In the following we try to separate this effect from the mean curvature part derived in the noise-free case. Under the above conditions we can make the following second-order approximation of a convolution with a Gaussian (see [7]), using the explicit form of $p_X$ from Equation (1):
$$\int_{\mathbb{R}^d} k_h(\|X - y\|)\, y\, p_X(y)\, dy = \frac{1}{(2\pi\sigma^2)^{d/2}} \int_{\mathbb{R}^d} \int_M k_h(\|X - y\|)\, y\, e^{-\frac{\|y - i(\theta)\|^2}{2\sigma^2}}\, p(\theta)\, dV(\theta)\, dy = \int_M k_h(\|X - i(\theta)\|)\, i(\theta)\, p(\theta)\, dV(\theta) + O(\sigma^2).$$
Now define the closest point of the submanifold $M$ to $X$: $i(\theta_{\min}) = \arg\min_{i(\theta) \in M} \|X - i(\theta)\|$. Using the condition on the curvature, we can approximate the diffusion step $-\Delta X$ as follows:
$$-\Delta X \approx \underbrace{\big( i(\theta_{\min}) - X \big)}_{I} \; - \; \underbrace{\left( i(\theta_{\min}) - \frac{\int_M k_h(\|i(\theta_{\min}) - i(\theta)\|)\, i(\theta)\, p(\theta)\, dV(\theta)}{\int_M k_h(\|i(\theta_{\min}) - i(\theta)\|)\, p(\theta)\, dV(\theta)} \right)}_{II},$$

⁷ The mean curvature $H$ is the trace of the second fundamental form. If $M$ is a hypersurface in $\mathbb{R}^d$, the mean curvature at $p$ is $H = \frac{1}{d-1} \sum_{i=1}^{d-1} \kappa_i\, N$, where $N$ is the normal vector and $\kappa_i$ the principal curvatures at $p$.
where we have omitted second-order terms. It follows from the proof of Theorem 1 that the term II is an approximation of $-\Delta_M i(\theta_{\min}) - \frac{2}{p} \langle \nabla p, \nabla i \rangle = -m H - \frac{2}{p} \langle \nabla p, \nabla i \rangle$, whereas the first term I leads to a movement of $X$ towards $M$. We conclude from this rough analysis that in the denoising procedure we always have a tradeoff between reducing the noise via the term I and smoothing the manifold via the mean curvature term II. Note that the term II is the same for all points $X$ which have $i(\theta_{\min})$ as their closest point on $M$. Therefore this term leads to a global flow which smoothes the submanifold. In the experiments we observe this as the shrinking phenomenon.
5 Experiments
In the experimental section we test the performance of the denoising algorithm on three noisy datasets. Furthermore, we explore the possibility of using the denoising method as a preprocessing step for semi-supervised learning. Due to lack of space we cannot deal with further applications as a preprocessing method for clustering or dimensionality reduction.

5.1 Denoising

The first experiment is done on a toy dataset. The manifold $M$ is given as $t \mapsto [\sin(2\pi t),\, 2\pi t]$, where $t$ is sampled uniformly on $[0, 1]$. We embed $M$ into $\mathbb{R}^{200}$ and put full isotropic Gaussian noise with $\sigma = 0.4$ on each data point, resulting in the left part of Figure 1. We verify the effect of the denoising algorithm by continuously estimating the dimension over different scales (note that the dimension of a finite sample always depends on the scale at which one examines it). We use for that purpose the correlation dimension estimator of [4].
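The toy data and a basic version of the correlation-dimension estimate of [4] can be reproduced along the following lines (a sketch with our own naming; the estimate is the log-log slope of the correlation integral C(r), i.e., the fraction of point pairs within distance r):

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, sigma = 500, 200, 0.4
t = rng.uniform(0.0, 1.0, n)
M = np.zeros((n, d))
M[:, 0], M[:, 1] = np.sin(2 * np.pi * t), 2 * np.pi * t   # sinusoid, embedded in R^200
X = M + sigma * rng.standard_normal((n, d))               # full isotropic noise

pairs = pdist(X)                                          # all pairwise distances
r = np.logspace(np.log10(pairs.min()), np.log10(pairs.max()), 30)
C = np.array([(pairs < ri).mean() for ri in r])           # correlation integral C(r)
dim = np.gradient(np.log(np.maximum(C, 1e-12)), np.log(r))  # dimension vs. scale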
The result of the denoising algorithm with $k = 25$ for the k-NN graph and 10 time steps is given in the right part of Figure 1. One can observe visually, by inspecting the dimension estimate, and by the histogram of distances that the algorithm has reduced the noise. One can also see two undesired effects. First, as discussed in the last section, the diffusion process has a component which moves the manifold in the direction of the mean curvature, which leads to a smoothing of the sinusoid. Second, at the boundary the sinusoid shrinks due to the missing counterparts in the local averaging done by the graph Laplacian, see (6), which results in an inward tangential component.

In the next experiment we apply the denoising to the handwritten digit datasets USPS and MNIST.
[Figure 1 here. Panels for the noisy data and the denoised data: the data points, the estimated dimension vs. scale, and a histogram of pairwise distances.]
Figure 1: Left: 500 samples of the noisy sinusoid in $\mathbb{R}^{200}$ as described in the text. Right: result after 10 steps of the denoising method with $k = 25$; note that the estimated dimension is much smaller and that the scale has changed, as can be seen from the histogram of distances shown to the right.
For handwritten digits the underlying manifold corresponds to varying writing styles. In order to check whether the denoising method can also handle several manifolds at the same time, which would make the method useful for clustering and dimensionality reduction, we fed all 10 digits simultaneously into the algorithm. For USPS we used the 9298 digits in the training and test set, and from MNIST a subsample of 1000 examples from each digit. We used the two-sided tangent distance of [10], which provides a certain invariance against translation, scaling, rotation and line thickness. In Figures 2 and 3 we show a sample of the result across all digits. In both cases some digits are transformed wrongly. This happens because they are outliers with respect to their digit manifold and lie closer to another digit component. An improved handling of invariances should resolve this problem at least partially.
5.2 Denoising as pre-processing for semi-supervised learning

Most semi-supervised learning (SSL) algorithms are based on the cluster assumption, that is, the decision boundary should lie in a low-density region.
Figure 2: Left: Original images from USPS, right: after 15 iterations with k = [9298/50].
Figure 3: Left: Original images from MNIST, right: after 15 iterations with k = 100.
The denoising algorithm is consistent with that assumption since it moves data points towards high-density regions. This is particularly helpful if the original clusters are distorted by high-dimensional noise. In this case the distance structure of the data becomes less discriminative (see Lemma 1), and the identification of the low-density regions is quite difficult. We expect that in such cases manifold denoising as a pre-processing step should improve the discriminative capacity of graph-based methods. However, the denoising algorithm does not take label information into account. Therefore, in the case where the cluster assumption is not fulfilled, the denoising algorithm might decrease the performance. We therefore add the number of iterations of the denoising process as an additional parameter of the SSL algorithm.
For the evaluation of our denoising algorithm as a preprocessing step for SSL, we used the benchmark data sets from [3]. A description of the data sets and the results of several state-of-the-art SSL algorithms can be found there. As SSL algorithm we use a slight variation of the one by Zhou et al. [15]. It can be formulated as the following regularized least squares problem:
$$f^* = \arg\min_{f \in H_V} \|f - y\|_{H_V}^2 + \lambda\, \langle f, \Delta f \rangle_{H_V},$$
where $y$ is the given label vector and $\langle f, \Delta f \rangle_{H_V}$ is the smoothness functional induced by the graph Laplacian. The solution is given as $f^* = (\mathbb{1} + \lambda \Delta)^{-1} y$. In order to be consistent with our denoising scheme, we choose, instead of the normalized graph Laplacian $\tilde{\Delta} = \mathbb{1} - D^{-1/2} W D^{-1/2}$ suggested in [15], the graph Laplacian $\Delta = \mathbb{1} - D^{-1} W$ and the graph structure as described in Section 3.1. As neighborhood graph for the SSL algorithm we used a symmetric k-NN graph with the following weights: $w(X_i, X_j) = \exp(-\gamma \|X_i - X_j\|^2)$ if $\|X_i - X_j\| \le \min\{h(X_i), h(X_j)\}$, and $0$ otherwise.
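A sketch of the resulting classifier for the two-class case (our own naming; y contains the labels ±1 for labeled points and 0 for unlabeled points):

import numpy as np

def ssl_predict(W, y, lam):
    # Regularized least squares on the graph: f* = (1 + lam * Delta)^{-1} y,
    # with the graph Laplacian Delta = 1 - D^{-1} W.
    n = W.shape[0]
    Delta = np.eye(n) - W / W.sum(axis=1, keepdims=True)
    f = np.linalg.solve(np.eye(n) + lam * Delta, y)
    return np.sign(f)

For c > 2 classes one would use a label matrix with one column per class in place of y and take the arg max over the columns of f*.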
As suggested in [3], the distances are rescaled in each iteration such that the $1/c^2$-quantile of the distances equals 1, where $c$ is the number of classes. The number of nearest neighbors $k$ was chosen for denoising from {5, 10, 15, 25, 50, 100, 150, 200} and for classification from {5, 10, 20, 50, 100}. The scaling parameter $\gamma$ and the regularization parameter $\lambda$ were selected from {1/2, 1, 2} and {2, 20, 200}, respectively. The maximum number of iterations was set to 20. Parameter values for which not all data points were classified, that is, for which the graph was disconnected, were excluded. The best parameters were found by ten-fold cross validation. The final classification is done using a majority vote of the classifiers corresponding to the minimal cross-validation test error. In Table 1 the results are shown for the standard case without manifold denoising (No MD) and with manifold denoising (MD). For the datasets g241c, g241d and Text we get significantly better performance using denoising as a preprocessing step, whereas the results are indifferent for the other datasets. However, compared to the results of the state of the art in SSL reported in [3] on all the datasets, the denoising preprocessing has led to a performance of the algorithm which is competitive uniformly over all datasets. This improvement is probably not limited to the employed SSL algorithm but should also apply to other graph-based methods.
Table 1: Manifold denoising (MD) as preprocessing for SSL. The mean and standard deviation of the test error are shown for the datasets from [3], for 10 (top) and 100 (bottom) labeled points.

             g241c        g241d        Digit1      USPS        COIL        BCI         Text
  No MD      47.9±2.67    47.2±4.0     14.1±5.4    19.2±2.1    66.2±7.8    50.0±1.1    41.9±7.0
  MD         29.0±14.3    26.6±17.8    13.8±5.5    20.5±5.0    66.4±6.0    49.8±1.5    33.6±7.0
  Ø Iter.    12.3±3.8     11.7±4.4     9.6±2.4     7.3±2.9     4.9±2.7     8.2±3.5     5.6±4.4

  No MD      38.9±6.3     34.2±4.1     3.0±1.6     6.2±1.2     15.5±2.6    46.5±1.9    27.0±1.9
  MD         16.1±2.2     7.5±0.9      3.2±1.2     5.3±1.4     16.2±2.5    48.4±2.0    24.1±2.8
  Ø Iter.    15.0±0.8     14.5±1.5     8.0±3.2     8.3±3.8     1.6±1.8     8.4±4.3     6.0±3.5
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[2] C. M. Bishop, M. Svensen, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10:215–234, 1998.
[3] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, 2006. In press, http://www.kyb.tuebingen.mpg.de/ssl-book.
[4] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D, 9:189–208, 1983.
[5] A. Grigoryan. Heat kernels on weighted manifolds and applications. Cont. Math., 398:93–191, 2006.
[6] T. Hastie and W. Stuetzle. Principal curves. J. Amer. Stat. Assoc., 84:502–516, 1989.
[7] M. Hein, J.-Y. Audibert, and U. von Luxburg. From graphs to manifolds - weak and strong pointwise consistency of graph Laplacians. In P. Auer and R. Meir, editors, Proc. of the 18th Conf. on Learning Theory (COLT), pages 486–500, Berlin, 2005. Springer.
[8] M. Hein, J.-Y. Audibert, and U. von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs, 2006. Accepted at JMLR, available at arXiv:math.ST/0608522.
[9] M. Hein. Geometrical aspects of statistical learning theory. PhD thesis, MPI für biologische Kybernetik / Technische Universität Darmstadt, 2005.
[10] D. Keysers, W. Macherey, H. Ney, and J. Dahmen. Adaptation in statistical pattern recognition using tangent vectors. IEEE Trans. on Pattern Anal. and Machine Intel., 26:269–274, 2004.
[11] C. Lange and K. Polthier. Anisotropic smoothing of point sets. Computer Aided Geometric Design, 22:680–692, 2005.
[12] O. Scherzer and J. Weickert. Relations between regularization and diffusion filtering. J. of Mathematical Imaging and Vision, 12:43–63, 2000.
[13] G. Taubin. A signal processing approach to fair surface design. In Proc. of the 22nd Annual Conf. on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 351–358, 1995.
[14] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[15] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In S. Thrun, L. Saul, and B. Schölkopf, editors, Adv. in Neural Inf. Proc. Syst. (NIPS), volume 16. MIT Press, 2004.